The AI disclosure penalty shows that disclosing AI use can lower how positively a text is evaluated. Even when the content is identical, perceived authenticity drops. Here's what new research says about that shift, and why it matters.
📌 TL;DR
- People negatively evaluate content when it’s disclosed that AI was involved in creation—even when the text itself is identical.
- Across 16 studies (27,000+ participants), the “AI disclosure penalty” replicated reliably.
- The penalty wasn’t explained by perceived quality, clarity, emotional depth, or general anti-AI sentiment.
- Instead, the key mechanism was perceived authenticity: AI involvement made texts feel less “from a real person.”
- The practical takeaway: the new writing skill isn’t only “write well,” but “sound unmistakably like you.”
Too good to be written by you?
A few years ago, “perfect writing” was an unambiguous asset. Clear structure. No mistakes. In many professional contexts—cover letters, CVs, website copy, thought leadership—perfection signaled competence and effort.
Today, how we evaluate perfection has shifted. A text that reads too smoothly may trigger suspicion. Not because it’s bad, but because readers may infer that an algorithm had a heavy hand in producing it. And once that inference is made, our evaluation can shift—sometimes sharply.
Recent research shows this drop is measurable and surprisingly consistent.
What is the AI disclosure penalty?
In a large series of experiments, researchers tested what they call the AI disclosure penalty: the tendency for people to evaluate a text more negatively when they’re told AI was involved in producing it.
Across 16 studies with more than 27,000 participants, people read creative texts (such as short essays and stories). Crucially, the texts themselves were identical across conditions. The only thing that changed was the label attached to the work:
- Written by a human
- Written by AI
- Written by a human with AI assistance
The result: when participants believed AI was involved, they rated the text lower. And this pattern replicated across multiple samples and variations in the design.
So the effect was not driven by differences in writing. It was driven by what people believed about authorship.
What the researchers tested (and ruled out)
The authors didn’t stop at “people dislike AI.” They tested several plausible explanations for why the label might reduce evaluations.
For example, they examined whether readers perceived the text as lower quality, less coherent, or less emotionally rich. They also considered broader negative attitudes toward AI and technological anxiety.
These explanations did not account for the effect. Even when participants judged the text as clear and well-written, the mere disclosure of AI involvement still reduced overall evaluations.
So what did explain it?
The AI disclosure penalty is caused by lowered perceived authenticity
Across studies, one factor consistently did the heavy lifting: perceived authenticity.
When AI involvement was disclosed, readers experienced the text as less authentic. It felt less human, perhaps even a bit ‘fake’. That shift in perceived authenticity largely explained why the evaluations dropped.
It’s a subtle point with big implications: the research does not claim AI-written texts are objectively worse. In fact, AI-written content may even be objectively stronger than the average layperson’s writing. The penalty emerges from how people interpret authorship and intention, not from the content itself.
Why authenticity is essential
Humans are remarkably sensitive to signals of authenticity. That sensitivity is not a modern aesthetic preference. It’s a social survival feature.
In everyday life, we rarely evaluate messages in a vacuum. We infer intent, credibility, and trustworthiness from cues that suggest a message is anchored in a real person. Those inferences help us decide who to trust, who to follow, and whose perspective is worth taking seriously. Without that capacity, cooperation would collapse.
In that light, the AI disclosure penalty makes sense. If AI involvement weakens authorship signals, trust diminishes because the text feels less reliably human.
Should you stop using AI?
Not necessarily. This research documents how disclosure shapes perception; it doesn’t prescribe “never use AI.”
AI can be genuinely useful for structuring ideas, generating alternatives, improving clarity, and accelerating revision. I personally love that many of the pieces I read are now more coherent and better structured than they used to be. And let’s not forget: writing has always involved assistance, whether feedback from colleagues, editors, friends, or mentors. The difference now is that the tool is powerful enough not only to assist but to take over.
That is why the key skill may be shifting. It’s no longer only about producing clean text. It’s about demonstrating a voice that feels real and unique: your perspective, your point of view, your mind behind the words. It’s about YOU.
What we still don’t know
The studies tested a specific situation: explicit disclosure. Participants were told directly that AI had been involved, and that label reliably triggered a penalty.
But in real life, people increasingly infer AI use without any disclosure. They draw conclusions from stylistic patterns: overly symmetrical phrasing, artificial contrasts, heavy use of the em-dash (a mark most people rarely used before AI, scientific writers excepted), “too-perfect” structure, and other recurring formulations that have become associated with generative models. We don’t yet know how strong these inferred signals are compared to explicit labels, nor how quickly people’s “authenticity detectors” will recalibrate as AI writing becomes normal.
Reduce the AI disclosure penalty by being YOU
There is also a broader skills question. Even if everyone has access to the same AI tools, performance will still diverge. Think of the calculator: once calculators became widely available, equal access did not produce equal outcomes. Strong mathematical thinkers used calculators to accelerate complex reasoning; weaker thinkers could compute faster, but not necessarily think better, and they were often unable to catch their own errors.
AI may work the same way. Used passively, it produces competent but generic text. Used deliberately, it can sharpen, accelerate, and extend a distinct line of thinking. The difference lies less in the tool and more in the mind using it.
If the AI disclosure penalty teaches us anything, it’s that authenticity remains psychologically central.
AI is not going away, but we can choose how we use it: as an accelerator of thinking, or as a replacement for it. The “high flyers” will likely use AI to climb higher. The middle will keep generating pleasant, generic text that feels—well—generic.
📚 References
- Ray, M., Berg, J. M., & Seamans, R. (2026). The Artificial Intelligence Disclosure Penalty: Humans Persistently Devalue AI-Generated Creative Writing. Journal of Experimental Psychology: General.
🔗 Related articles on the Behavioral Times
- How can you close the gap between wanting and doing? (The intention–behavior gap)











