I’ve been deep in AI-assisted content for a while now, and I want to articulate something I’ve been observing but haven’t had a clean name for.
AI writing sounds like AI not because it’s bad – technically it’s often very good. It sounds like AI because it optimizes for a different thing than human writers do.
Human writers make choices. They leave things out. They make a bet that the reader will follow them somewhere specific. They sacrifice broad applicability for a specific effect. Those choices are what voice is.
AI optimizes for completeness and coverage. Given a topic, it tries to represent the topic fairly and fully. That optimization produces text that is well-organized, accurate, and readable. It also produces text that feels like it was written for no one in particular, because it was.
When I edit AI drafts, most of what I’m doing is introducing the choices that weren’t made. What can I cut? What am I actually trying to say and what is in here just because it’s technically relevant? Where does this piece take a position and where is it just cataloguing?
The mechanical version of this: AI produces first drafts that are too complete. Editing them toward human-sounding prose is mostly a subtraction process.
Curious whether this lands for others or whether I’m overcomplicating it.
the “too complete” framing is one of the cleanest descriptions of this problem i’ve read. it explains why AI drafts often feel padded even when they’re not technically long – every paragraph is earning its place in a comprehensive coverage sense but not in a “this piece needs exactly this” sense.
the subtraction process is real. most of my editing on AI drafts is cutting things that are true and relevant but that slow the piece down or diffuse its point. the human instinct is to include less and trust the reader more. AI doesn’t trust the reader. it explains.
the optimization-for-coverage point is something i’d connect to a teaching problem. when i tell students to cut, they’re often cutting things that are technically relevant. the resistance comes from feeling like they’re leaving something important out. AI has the same instinct – it doesn’t want to leave relevant things out – except it’s not an instinct, it’s literally what the training optimizes for.
the writing skill of exclusion – knowing what to not say – is one of the hardest things to teach. it makes sense that it’s also one of the hardest things to encode.
“written for no one in particular, because it was” is a line i’m going to be thinking about for a while.
the no-one-in-particular problem is the one i bump against most in client work. when you write for a specific person or a specific moment, choices become obvious. this is too formal for her, that example won’t land for this audience, this isn’t actually the argument we need right now. AI can’t make those calls because it doesn’t have a specific person in mind. it has the average of all possible readers.
the completeness problem shows up in technical writing in a specific way: AI will document everything about a function or system whether or not the reader needs all of it. good technical documentation makes choices about what the reader is likely trying to do and structures the information around that task. AI structures the information around the system.
the output is comprehensive and often hard to use efficiently. editing it toward useful documentation is mostly about introducing that reader-task perspective that wasn’t there.
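a toy sketch of the contrast, in python docstrings – the function and its parameters are made up for the example, the point is the two documentation styles, not the code:

```python
# Hypothetical function invented for this example. The first docstring
# documents the system (the AI-draft style); the second documents the
# reader's task (the edited style).

def export_report(data, fmt="csv", delimiter=",", encoding="utf-8",
                  include_header=True, quote_all=False):
    """System-oriented: exhaustive and unprioritized.

    Exports data. Supports csv and tsv formats. The delimiter parameter
    controls the field separator. The encoding parameter controls the
    output encoding. include_header controls whether a header row is
    written. quote_all controls whether every field is quoted.
    (...and so on, one line per parameter, regardless of relevance.)
    """
    ...

def export_report_v2(data, fmt="csv", **options):
    """Task-oriented: leads with what the reader came here to do.

    Write `data` to a spreadsheet-friendly file. For the common case,
    call it with no options and open the result in your spreadsheet app:

        export_report_v2(rows)

    Only reach for `options` (delimiter, encoding, quoting) if the
    default CSV output doesn't round-trip through your target tool.
    """
    ...
```

both docstrings are “accurate.” only the second one made a choice about who is reading and why.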
Worth saying: the subtraction-as-editing principle applies to human writing too. A lot of first drafts by human writers are also too complete in exactly the way you’re describing. The difference is that AI’s too-completeness is consistent and structural while human over-writing tends to be uneven – some sections are too full, others are too sparse, and the pattern tells you something about where the writer’s attention was.
The AI version is uniformly thorough in a way that’s almost diagnostic once you can see it.