A Statement on AI Use at Digital Heritage Consulting
Ethics and Principles
Artificial intelligence (AI) and our societal relationship to it feel new. Ethics, even the ethics of technology use, are not new, and they can help us grapple with ethical uses of AI.
As a new user of Claude.ai, an early-adopter woman in tech who teaches data ethics, and a working folk artist, I choose to engage with these issues as they affect my own work as a musician and independent scholar in folklore and heritage interpretation. A recent Harvard Business School article asserts that women are adopting AI at a rate 25% less than that of men, primarily for ethical reasons. That article frames a stark choice between “hurting their careers” and adopting AI on society’s current terms “without being judged for using it.”
That judging is real, and the ethical concerns are legitimate. Discounting them devalues women in the workplace and in society. I choose a third path—to define my own ethical framework for personal and professional AI use, and to adopt on my own terms in this new space. I choose to speak out in my own voice, and to use my own judgment about my use of AI as guided by the ethics frameworks of my professions.
The American Folklore Society Position on Ethics defines communities to which a folklorist is ethically responsible: research informants, the public, the discipline, students, and sponsors. The National Association for Interpretation (NAI) Code of Ethics specifically adds an ethical responsibility to the resources under our care. The Data Management Association International (DAMA-I) Code of Ethics defines principles of integrity and responsibility, data stewardship, transparency, and compliance.
As a member of and a working practitioner in all three organizations, I am bound by these codes of ethics. While these frameworks substantially predate AI, I am actively working to apply them to my use of AI. I hold two sets of ethics concurrently: the folklorist’s ethic of attribution, transmission, and community reciprocity, and the data governance professional’s ethic of consent, stewardship, and accountability. They align better than you might think.
My primary principles for AI use are thus grounded in:
- Responsibility to the communities in which I work and the resources under my care
- Stewardship of the data I work with and ethical use of data wherever I use it (not just in AI tools)
- Transparency about my AI work and ethical compliance beyond regulatory frameworks
Community Responsibility and Care for Resources
Facebook community feedback on my first week with AI made it very clear to me that individual ethical practice cannot be fully separated from collective consequences. This is hard for me—I just want to stick my head in a computer sometimes and hide under the bed from political strife. I am not a political organizer. But I am a community organizer in the folk tradition, and I owe my community my leadership in service to the traditions we love and those we carry forward.
Some in my community have chosen to refuse AI where I have chosen to engage. Others have deep and abiding concerns about whether ethical use of AI is an oxymoron. I owe both groups my most thoughtful consideration of their position and my most honest respect for the issues they raise to me. Labor displacement, environmental cost, and corporate accountability are real and pressing concerns with AI. I have the privilege of retiring after a second career in tech to develop my own position without feeling the immediate impacts of these issues in my own life as keenly as some. As an AI user striving for ethical practice, I owe my friends and colleagues the respect of active listening, my support as a community member, and the investment of my time and money in organizations that are working to offset these concerns.
Some have challenged me personally to advocate for social change, both for a universal basic income and for environmental regulation of data centers’ use of energy and water. I have to be honest with you. I was an advocate once, for river conservation and stewardship. I loved the work with a grand passion, and it burned me out until I crashed. I know my limits now. In retirement, I believe my own responsibility may be best enacted by quiet and personal thought leadership for a next generation of collective action. I know the activists are here. You have my support and what wisdom I can share.
Data Stewardship
I am a Certified Data Management Professional with deep experience in data governance. My last client contract went to AI in India. They call it job displacement, and it’s not abstract for me. A wide range of data practices concern me in AI, extending from data security and privacy to training data acquisition, bias, and rights infringement to just plain AI slop, especially when it’s used for “knowledge transfer” that may not translate to human learning. I am actively working to extend my knowledge of how AI works and what it can do with an executive certificate in Digital Business Strategy from MIT, an AI Governance certificate from Collibra, and this reflective hands-on practice with Claude.
Certificates are not enough for me. I need a hands-on practice to satisfy myself that my data stewardship is as practical and responsible as I can make it, whether that means scrubbing PII from a training dataset or disabling Claude from using my inputs for model training. As my practice evolves and I encounter new use cases, I commit to revisiting my core principle of ethical data stewardship at every opportunity, and to handle each decision with thought and care. That’s hard in the heat of the moment. It’s hard to know what you don’t know. I’m learning. So is Claude—from me. I’ll keep that as private, secure, unbiased, and un-sloppy as I can.
Wearing my data governance hat, I have reviewed Anthropic’s consumer policies on cookies, data handling and retention, terms of service, copyright infringement, privacy, and GDPR compliance, and found them robust, well articulated, and suitable for my personal individual use. I’m less confident about where Claude gets its data. It’s a concern that parallels today’s growing awareness of the extractive relationship created when folk collectors fail to credit their informants, but this is also a question of scope and scale.
Claude was trained on website data from the open web, licensed datasets from third parties, and voluntarily supplied user data. Common Crawl — the massive web scrape used to train GPT models — was not used. As Wikipedia explains, Claude models were first trained to predict the next word across large amounts of text, then fine-tuned using reinforcement learning from human feedback (RLHF) and constitutional AI in an attempt to enforce ethical guidelines. Read Claude’s Constitution for more information about how Anthropic governs Claude; it is the most interesting and encouraging governance approach I have yet seen in generative AI. Claude’s governance spec is transparent and human-readable. That matters.
However, Anthropic does not publish a full list of the datasets used for pre-training. The specific web sources, licensed corpora, and the terms under which third-party data was acquired are not public. This is an honest gap, and it’s not unique to Claude. This is also where the analogy breaks down. A specific collector may have collected an individual’s song or tune and published it without credit, but large language models absorb patterns from a corpus at a scale that makes individual attribution both technically and legally contested territory. Claude is not bad at footnotes, but they’re not always present, complete, or accurate. It’s hard to know what came from where.
Honest Transparency
In the folk community, the chain of transmission matters — who learned from whom, who changed what, who gets credit. In the age of recordings, the value of “knee-to-knee,” one-on-one transmission emerged as a standard that distinguished a tradition-bearer from a folk revival performer who learned a song off the radio or from a CD. Today’s young singers and pickers swim in a digital ocean of primary source collections, field recordings, and streaming content, and they learn from all of it. Some still value knee-to-knee transmission as a tangible link with their peers as well as their elders in the ballad circle or the pub sing. When you have a song from a person and you say “I learned this from the singing of…,” you make yourself a link in the transmission chain.
An AI model breaks that chain when we don’t know the sources of the training data. When we receive a response from a model prompt, we don’t know who collected what from whom and when. This is perhaps the most disquieting aspect of generative AI, and it’s one reason why, for me, AI-supported work must be fully disclosed with a transparency badge. My community needs to know when some of my work was “learned” from a machine, or shaped by it. That shaping is just starting, and I don’t believe I will think the same way about computers once I’ve worked with Claude for a while. (That’s scary, but true.) My transparency badge commits to a human-centered practice where I’m clear on where I stand in the chain of transmission.
What I Won’t Do with AI
- I won’t present AI-supported work as my own without disclosure.
- I won’t publish AI-supported content without a transparency badge that links to this page.
- I won’t use AI for first drafts before finding my own voice. No AI slop here.
- When I want an AI tool to “speak in its own voice,” I will disclose that fully and transparently in context (examples below).
- I won’t publish AI-supported research without verification and validation with links to primary sources.
- I won’t share or upload personally identifiable information (PII) to an AI tool.
- I won’t use AI to represent or interpret traditional material without human oversight and source attribution.
Why Claude
Like many AI users in spring 2026, I chose Claude AI over other tools because its parent company Anthropic’s public reasoning about what it won’t do most closely matches my own values — not because it’s perfect, but because the reasoning is visible and the lines are being defended. When the Pentagon demanded unrestricted access including mass domestic surveillance and fully autonomous weapons, Anthropic refused at the cost of a $200 million contract. I’m also aware of active copyright litigation against Anthropic by music rights organizations over training data, which Anthropic defends as fair use, and of the lawsuit filed by Reddit alleging data scraping without consent. These cases change fast, and I’m watching them closely.
I use a Pro subscription to Claude primarily because I’m a data management professional and I want to understand what a paid version can actually do. Hands-on practice is how I learn.
What I Do with AI
AI can edit and brand operational work — logos, web copy, slide decks, booking sheets, handouts — for content I’ve already authored with a brand identity I designed. I like to work fast while ideas are fresh, and busy human reviewers take days to respond. Humans are more effective once I’ve done my homework and worked through a couple of drafts, so they can respond to the work itself instead of wasting their valuable time on copyediting. AI is highly effective at templating and saves me a lot of time in cleanup and consistency. For example, Claude and I completely refreshed my web presence in under a week, including solving some gnarly technical issues, aligning all my branding and tagging taxonomy across Google Sites, Blogger, and WordPress, and getting ready for fall booking season. It would have taken me a month if I had even known how to do it. Now I do.
I use AI for researching bookings and for planning trips and tours, which it is also very good at: compiling venue lists and contact information, estimating driving distance, suggesting nearby stops, and more. It also has some marketing savvy to recommend communication sequences that save me from rejections and my target venues from blast emails. I’m still learning what it can do for a working musician as solopreneur.
I write in my own voice first, then collaborate with AI to refine and sharpen early drafts. I research as I write, embedding verification directly into the process with consistent link-checking and search validation for minimal context-switching between tools — and to keep myself honest about what the AI is actually telling me. When I ask for writing assistance, Claude has an instruction to give a writing prompt instead. I start from my own words, and the final words are those I approve.
I have used AI to summarize and analyze research content, including feedback I receive on my work. All raw material is anonymized for data privacy and I have explicitly disabled the setting to train models on user input in my Claude account.
An emerging use case for me is to work with AI as a research partner for structural and strategic analysis. In one example, I prompted Claude to evaluate 24 blog posts for dated vs. evergreen content. Claude identified two potential book outlines in the material—one a “time capsule” of early pandemic work and another a scholarly book and/or monograph on applied theory of folklore transmission in digital places. I developed these ideas further with high-level prompts on the order of vibe coding, and reconnected with ideas and authors I had first studied in my early academic work. I am finding that this use of AI most closely reflects the experience of the AI mirror. The content frame came back to me with new structure and context, but the original ideas and the thinking remained mine. There are immediate opportunities to publish some of this work, and I look forward to the transparency discussions this will entail.
I have even used AI as a named dramatic voice, disclosed in the work itself, arguing that machines can’t replace human art. That’s the line.
This page is my participant-observation document for ongoing accountability. I commit to listening carefully and respectfully to feedback on my AI use, and I am still working out how I might work with generative AI as my skills and thinking grow with hands-on practice. I commit to revisiting this document whenever there is a substantive change in my use cases, and to keep the document updated when (not if) my thinking changes.
AI might replace my job — as technology has done since the Industrial Revolution and will continue to do. But AI won’t replace me.
This statement was drafted collaboratively with AI. The final voice is mine. That’s the point.
