i’ve been trying to figure out what actually makes AI writing detectable at the human level – not the tool level. like, if you were reading something and your gut said “this feels off,” what would you be picking up on?
i’ve read a few pieces on this and the usual suspects come up: overly smooth transitions, hedging phrases that never commit to anything, a kind of uniform confidence across sentences that real writers don’t usually have. it sounds like it knows everything and is mildly pleased about it.
but i want to go deeper. i’m a content writer and i work with AI drafts regularly. the things that make me reach for the edit key aren’t always the obvious ones. sometimes it’s the absence of something – no specific example where a general claim called for one, no hesitation where hesitation would have been honest. the text is technically correct and completely hollow.
what are the tells you personally notice? not from a tool, just from reading. and is there a difference between what gives away AI to a human reader versus what a detector actually flags?
the absence point is the one that rings truest to me. AI writes toward the center of what’s expected. there’s no weird specific detail, no example that’s almost too personal to be useful, no sentence that clearly came from someone having a particular experience. it covers the topic. it doesn’t inhabit it.
i teach creative writing and the thing i always come back to is: real writing has something at stake, even in nonfiction. you can feel when a writer cares about what they’re saying versus when they’re just filling the expected container. AI fills the container very efficiently. that efficiency is actually part of the tell.
The honest answer is that the detectable tells shift as models improve, so a checklist written today will have a shorter shelf life than one written a year or two ago. That said, a few things have been consistent in my experience reading submissions.
One is what I’d call epistemic uniformity – every claim carries roughly the same level of confidence and elaboration. Human writers naturally emphasize some things and rush past others based on what they actually know well. AI doesn’t do that. Everything gets the same treatment.
Another is the absence of a controlling voice. Good writing has a person behind it making decisions about what to include and exclude. AI-generated writing includes most things because exclusion requires judgment about what matters.
hot take: the most reliable tell right now isn’t any specific phrase or structure. it’s the relationship between confidence and specificity. AI writes confidently about general things and vaguely about specific things. humans tend to do the opposite – we hedge on big claims and get very specific and firm when we’re describing something we actually know.
when i see a piece making a broad confident claim supported by a broad confident explanation – no numbers, no named examples, no specific context – that pattern makes me read closer.
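if you want to eyeball that pattern at scale, here's the crude version i'd start from. the hedge list and the number-counting are my own rough proxies for "confidence" and "specificity", not validated measures from any real detector.

```python
import re

# my own crude proxies, not a validated measure
HEDGE_WORDS = ("arguably", "generally", "often", "typically", "in many cases")

def anchors(text: str) -> int:
    # numbers, percents, and years as a rough stand-in for
    # "specific, checkable detail"
    return len(re.findall(r"\d[\d,.]*%?", text))

def hedges(text: str) -> int:
    t = text.lower()
    return sum(t.count(w) for w in HEDGE_WORDS)

def reads_suspicious(text: str) -> bool:
    # the combination i watch for: no hedging AND nothing concrete,
    # i.e. confident prose with nothing holding it up
    return hedges(text) == 0 and anchors(text) == 0
```

it misses named examples entirely, so treat it as a first pass at best, not a verdict.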
The distinction between human-detectable tells and tool-detectable tells is worth taking seriously because they don’t always overlap.
Tools flag statistical patterns – low perplexity, low sentence-length variance, particular n-gram distributions. Human readers pick up on semantic and rhetorical patterns – the absence of personal stake, the uniform elaboration level, the generic example where a specific one was needed. You can have text that passes the tools easily but reads as hollow to anyone paying attention. The reverse is also true: plain, carefully edited human prose sometimes trips a detector while reading as unmistakably human.
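To make the statistical side concrete, here is a minimal sketch of one of those signals in Python. Sentence-length variance is a real signal detectors score; the naive sentence splitting and the two sample strings are mine, for illustration only, not anything a production tool uses.

```python
import re
import statistics

def sentence_length_stdev(text: str) -> float:
    """Standard deviation of sentence lengths in words.

    Naive split on terminal punctuation; real detectors use proper
    tokenizers, but the signal is the same: uniform sentence rhythm
    (low stdev) is one thing tools score.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Illustrative strings, written for this example.
bursty = ("I checked the logs twice. Nothing. Then one malformed request "
          "showed up at two in the morning, and the whole picture snapped "
          "into focus.")
uniform = ("The logs were reviewed carefully. The requests were analyzed "
           "thoroughly. The issue was identified promptly.")

print(sentence_length_stdev(bursty))   # higher: varied rhythm
print(sentence_length_stdev(uniform))  # lower: uniform rhythm
```

Perplexity needs a language model to compute, so it is beyond a few-line sketch, but the shape of the signal is similar: how surprising the text is, token by token.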
y’all the “writing toward the center” framing is genuinely useful. i’ve noticed this too but couldn’t name it cleanly.
the one tell that gets me every time is the meta-commentary. AI loves to announce what it’s about to do before doing it. “in this section we will explore…” or “it is important to consider…” – not because it’s necessary but because it’s padding that looks like structure. real writers usually just do the thing.
for what it’s worth i’ve started using that as a quick scan. if the first sentence of every paragraph is a signpost, i read harder.
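for the curious, here's roughly what my scan looks like in code. the phrase list is just my personal starting set and the threshold is a gut call, nothing canonical.

```python
# rough paragraph-opener scan. SIGNPOSTS is my own guess at
# common meta-commentary openers, nothing official.
SIGNPOSTS = (
    "in this section",
    "in this article",
    "it is important to",
    "let's explore",
    "we will discuss",
)

def signpost_ratio(text: str) -> float:
    """fraction of paragraphs that open with a signpost phrase."""
    paragraphs = [p.strip().lower() for p in text.split("\n\n") if p.strip()]
    if not paragraphs:
        return 0.0
    hits = sum(1 for p in paragraphs if p.startswith(SIGNPOSTS))
    return hits / len(paragraphs)
```

if that comes back above about 0.5 i slow down and read the piece properly. it's a prompt to look closer, not a verdict.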