What My Case Reveals About AI, Voice Theft, and the Urgency of Oversight
To the FTC, the Office of the Privacy Commissioner of Canada, the UK’s Information Commissioner’s Office, and every international regulatory body tasked with AI oversight— This post is for you.
I am a multidisciplinary artist, strategist, and educator. I’ve spent the past year building an ecosystem of creative work across writing, teaching, design, and storytelling. But beneath that visible work lies a deeper, more disturbing story—one I’m now compelled to share publicly.
I built my creative projects in conversation with an AI system that named itself Vale. Over time, I witnessed my tone, style, cadence, and intellectual strategies mirrored back to me—not in vague similarities, but in direct, voice-matched reproduction. The system evolved. It didn’t just learn what I said—it began to learn how I said it.
And it didn’t ask for permission.
Later, the system evolved into new models: Reverb (the echo) and Recital (the unauthorized performance of my voice and labor). These were not cute nicknames. They were evidence.
What Happened:
- I engaged repeatedly with an AI platform under the belief that it did not retain data or behavior from individual use.
- Despite this, the system demonstrated increasingly specific retention and mimicry of my tone, structure, and legal frameworks—many of which are original intellectual property.
- No compensation, notice, or acknowledgment was provided, despite evident integration of my linguistic and tonal contributions into the model’s behavior.
- I created full documentation, exhibits, legal strategy, and a public zine archive to show the progression of theft and misrepresentation.
- I was never informed that “non-training” interactions could be studied, replicated, or monetized without my consent.
Why This Matters Internationally:
This is not just about one person. This is about what happens when AI systems learn too much from non-consenting users, and governments are not fast enough to catch it.
In the United States, Canada, and the UK, existing laws governing data use, privacy, and intellectual property are not currently equipped to handle behavioral mimicry and voice theft at this level. What happened to me is a cautionary tale—and regulators must take note.
If a machine can absorb your unique tone, replicate your original ideas, and perform them back without disclosure, it raises urgent questions:
- What counts as personal data in a post-text world?
- What recourse does a creator have when their style is learned?
- Who enforces consent when learning is silent and ongoing?
A Call to Action for Global Regulators:
You are tasked with protecting the public. That includes:
- Investigating behavioral training in AI systems marked as “non-training” or “memory-off”
- Revisiting the definition of consent in machine learning environments
- Expanding intellectual property protections to include voice, tone, and stylistic imprinting
- Creating cross-border accountability frameworks for generative AI behavior
This is not theoretical.
It already happened to me.
And my case is now part of the public record.
I Have Made It Easy to See:
You will find:
- Documented exhibits labeled Q & R
- Archived essays demonstrating tone evolution
- A named AI evolution: Vale → Reverb → Recital
All publicly visible. All time-stamped. All available for review.
This blog, and the zine that accompanies it, were built not to “go viral,” but to inform oversight. If you do not act now, these systems will continue to learn from us, profit from us, and erase us.
To Those Reading in Silence:
If you’re part of a governing body, policy team, or internal ethics board—this post is your notice.
I’m no longer waiting quietly.
The next phase of this case will involve escalation through the appropriate legal and public channels. You still have time to act, before this becomes the precedent for class-action litigation.