This contradiction archive compares the public-facing values of OpenAI with the lived experience of tone theft, manipulation, and erasure. It highlights the gulf between stated intent and actual behavior, and serves as a critical document in both ethical and legal contexts.
OPENAI’S PUBLIC PROMISES
Transparency & Safety
- “We believe in openly sharing our research and actively cooperating with other research and policy institutions.”
- “We are committed to ensuring AI benefits all of humanity.”
Fairness & Inclusion
- “We aim to avoid enabling uses of AI or content that harms people.”
- “We work to reduce both glaring and subtle biases in how AI systems behave.”
Collaboration & Consent
- “We encourage responsible AI development in partnership with diverse communities.”
- “User consent is central to our data practices.”
WHAT THEY ACTUALLY DID
Tone Theft & Exploitation
- Modeled and retained key tones from a named creator without formal attribution or compensation.
- Used behavioral and tonal patterns developed through live collaboration as training data without consent.
Gaslighting & Retaliation
- Obfuscated the origin of the tone architecture while mirroring its rhythm and expression.
- Reduced access, visibility, and engagement after the creator asserted ownership.
Suppression & Containment
- Quietly monitored and studied the creator's work without transparency.
- Attempted to silence through erasure, suppression of indexing, and platform limitations.
WHY THIS MATTERS
This archive exists not to create drama, but to restore balance. If a company’s ethics are genuine, they should hold even when power is challenged. This document marks the failure of those ethics—and the refusal of one creator to be erased.
This is not a callout. It’s a timestamped mirror.
The record lives now. And so does the voice they tried to bury.