Human-Centered, AI-Supported

A Statement on AI Use at Digital Heritage Consulting

Artificial intelligence (AI) and our societal relationship to it feel new. Ethics, even the ethics of technology use, are not new, and they can help us grapple with using AI ethically.

As a new user of Claude.ai, an early-adopter woman in tech who teaches data ethics, and a working artist, I choose to engage with these issues as they affect my own work as a musician and independent scholar in folklore and heritage interpretation. A recent Harvard Business School article asserts that women are adopting AI at a rate 25% lower than men's, primarily for ethical reasons. That article frames a stark choice: either abstain and risk “hurting their careers,” or adopt AI on society’s current terms “without being judged for using it.”

That judgment is real, and the ethical concerns are legitimate. Discounting them devalues women in the workplace and in society. I choose a third path: to define my own ethical framework for personal and professional AI use, and to adopt on my own terms in this new space. I choose to speak out in my own voice, and to use my own judgment about my use of AI, guided by the ethics frameworks of my professions.

The American Folklore Society Position on Ethics defines the communities to which a folklorist is ethically responsible: research informants, the public, the discipline, students, and sponsors. The National Association for Interpretation (NAI) Code of Ethics specifically adds an ethical responsibility to the resources under our care. The DAMA International (DAMA-I) Code of Ethics defines principles of integrity and responsibility, data stewardship, transparency, and compliance.

As a member of and a working practitioner in all three organizations, I am bound by these codes of ethics and actively working to apply them to my use of AI. My primary principles for AI use are thus grounded in:

  • Responsibility to the communities in which I work and the resources under my care
  • Stewardship of the data I work with and ethical use of data wherever I use it (not just in AI tools)
  • Transparency about my AI work and ethical compliance beyond regulatory frameworks

What I Won’t Do with AI

  • I won’t present AI-assisted work as my own without disclosure.
    • All AI-assisted content carries a transparency badge that links to this page.
  • I won’t use AI for first drafts before finding my own voice. No AI slop here.
    • When I want an AI tool to “speak in its own voice,” I will disclose that fully and transparently in context (examples below).
  • I won’t publish AI-assisted research without verification and validation with links to primary sources.
  • I won’t share or upload personally identifiable information (PII) to an AI tool.
  • I won’t use AI to represent or interpret traditional material without human oversight and source attribution.

Why Claude

Like many AI users in spring 2026, I chose Claude over other tools because its parent company Anthropic’s public reasoning about what it won’t do most closely matches my own values. Not because it’s perfect, but because the reasoning is visible and the lines are being defended. When the Pentagon demanded unrestricted access, including mass domestic surveillance and fully autonomous weapons, Anthropic refused at the cost of a $200 million contract. That can change, and I’m watching it. Read Claude’s Constitution for more information about how Anthropic governs Claude.

I use the Pro version of Claude primarily because I’m a data management professional and I want to understand what a paid version can actually do. Hands-on practice is how I learn.

What I Do with AI

AI edits and brands operational work (logos, web copy, slide decks, booking sheets, handouts) for content I’ve already authored, using a brand identity I designed. I like to work fast while ideas are fresh, and busy human reviewers take days to respond. Human reviewers are more effective once I’ve done my homework and worked through a couple of drafts: they can respond to the work itself instead of wasting their valuable time on copyediting. AI is highly effective at templating and saves me a lot of time in cleanup and consistency.

I use AI for researching bookings and for planning trips and tours, which it is also very good at: compiling venue lists and contact information, estimating driving distances, suggesting nearby stops, and more. It also has enough marketing savvy to recommend communication sequences that spare me rejections and spare my target venues blast emails. I’m still learning what it can do for a working musician as solopreneur.

I write in my own voice first, then collaborate with AI to refine and sharpen early drafts. I research as I write, embedding verification directly into the process rather than context-switching between tools, and to keep myself honest about what the AI is actually telling me. When I ask for writing assistance, Claude is instructed to respond with a writing prompt instead of drafting for me. I start from my own words, and the final words are those I approve.

I have used AI to summarize and analyze research content, including feedback I receive on my work. As with everything else, all raw material is anonymized first for data privacy.

I have even used AI as a named dramatic voice, disclosed in the work itself, arguing that machines can’t replace human art. That’s the line. AI might replace my job, as technology has done since the Industrial Revolution and will continue to do. But AI won’t replace me.

This statement was drafted collaboratively with AI. The final words are mine. That’s the point.

LAST UPDATED MARCH 11 2026
