I’ve been trying to work out why certain AI images bother me in a way I can’t quite defend rationally.
Technically, they’re often stunning. Lighting, composition, color - things that would take a human illustrator weeks get produced in seconds. I look at them and the first response is genuinely aesthetic. Something fires. Then something else fires and says: wait.
I don’t think the “wait” is pure snobbery about labor or gatekeeping about who gets to make art. It’s something more specific. The images are competent without being risky. They’re beautiful the way a hotel lobby is beautiful - everything calibrated, nothing personally at stake. You can tell a lot about a person from the choices they make when something costs them time or reputation or embarrassment. AI images don’t have those costs. The risk has been removed along with the friction.
With writing it’s similar but maybe more pronounced. I’ve read AI fiction that passes surface-level quality checks. Sentences work. Scenes function. There’s no obvious seam. But there’s also no specific gravity. Nothing that only this author would have noticed and chosen to put there. The voice is technically present but the perspective behind it is absent.
I’ve been using AI tools in my own drafting process for about a year. I’m not writing this as someone who’s categorically opposed. I’m writing it as someone who keeps bumping into the same problem: the technical quality and the felt quality are drifting apart, and I’m not sure current critical language can fully account for that gap.
Has anyone found a useful framework for thinking about this? Not “AI good” or “AI bad” - something more granular about what authenticity actually means when the origin of a work is legible to some degree but not fully known?
The hotel lobby comparison is the best I’ve heard for this.
In editing work I describe it as “competence without exposure.” Strong writing requires a writer to commit to something specific - a reading of a scene, a character’s interior logic, a sentence rhythm that’s theirs and therefore also theirs to get wrong. AI output optimizes toward a kind of consensus competence. It’s what writing looks like if you average a lot of writing together. Which means it reads as professional but not as particular.
The authenticity problem in literature has always partly been about particularity. The sense that only this person saw it this way. That’s hard to manufacture. Not impossible - I’ve read AI-assisted work where a skilled human editor has added it back in. But the raw output usually doesn’t have it.
Something I notice in my own content work: AI is better at some tones than others.
Formal, informational, instructional - it handles those well. Warm, specific, irreverent, regionally inflected - much harder. The styles that require a writer to have actually lived something are the ones that resist most. A travel piece that could only be written by someone who’s actually been to that place and found it disappointing in an unexpected way - that’s not available from a model.
For fiction and creative work that’s a meaningful constraint. For commercial content it’s much less clear. A lot of commercial writing doesn’t need to be particular. It needs to be appropriate. And AI is very good at appropriate.
I don’t really think about this in the same way, but here’s what I notice.
When I play a game or watch something made by a small team, I can feel the decisions they were forced to make because they had limited resources. Those constraints shape the thing. They’re part of what makes it interesting. When everything is possible, nothing is chosen. AI output has that problem. Everything is kind of available so nothing is really committed to.
Maybe that’s a different way of saying the same thing. Not about labor, about constraint.
From a brand perspective the authenticity question has immediate commercial stakes, not just aesthetic ones.
Audiences have fairly good instincts for when something is being performed versus when it’s real. Not perfect instincts, but good enough to notice a consistent hollowness over time. The brands I’ve watched lose trust didn’t do it through one obvious failure. They did it through sustained production of content that was technically correct but didn’t feel like it came from anywhere.
The authenticity problem in art is related. What people actually respond to isn’t just technical quality. It’s the sense that a specific person made a specific choice that reveals something. Strip that out in pursuit of production efficiency and the audience might not consciously notice immediately. But something shifts. Engagement softens. The relationship thins.
The A/B tests we run on content pretty consistently show that pieces with specific, idiosyncratic details outperform cleaner, more polished versions. A blog post that includes a specific failure story with details that are slightly uncomfortable performs better than a professional-sounding post that covers the same ground without the exposure.
I don’t think audiences are consciously running authenticity checks. But they respond to it. There’s something that reads as trustworthy about a person who was willing to be a little embarrassed. AI content doesn’t have that. Its technical standard is one that would never include voluntary embarrassment. And that standard is part of why it can feel hollow.