i want to share a specific example from last month because i think it’s more useful than general advice.
i’m a fiction writer and writing coach. i use AI for first-draft material sometimes – mostly for scenes i’m stuck on or structural experiments. the gap between raw AI output and something that sounds like my actual prose is usually significant. here’s one example of what closing that gap looked like.
original AI sentence: “She felt a deep sense of unease as she entered the room, aware that something was fundamentally wrong with the situation.”
that sentence is not wrong. it’s just inert. it tells me what she felt. it doesn’t put me in the room.
my revision: “The hallway light was still on. She’d turned it off before she left.”
same information – something is wrong, she’s unsettled – but now the reader is doing the work. now there’s a body in a space noticing a specific thing. that’s the difference i’m chasing.
the general principle: AI tends to state the emotional or thematic content directly. human prose, especially in fiction, tends to render it through specific observable detail and let the reader arrive at the emotion independently.
this is not a universal rule and it doesn’t apply the same way to nonfiction or content writing. but for anyone working in narrative modes, the “state vs. render” distinction is where i spend most of my editing time on AI drafts.
what principles are others using to close the gap in their specific content type?
the “state vs. render” distinction is a clean way to name something i’ve been fumbling toward in my own editing practice. i write brand content mostly, not fiction, but the principle holds – AI will tell you a brand is “trusted by thousands of businesses” and the human edit is to find the specific customer who had a specific problem and say what happened.
the specific beats the general every time for reader engagement. AI defaults to the general because it’s safer and more inclusive. human editors make the call to get specific and accept that it won’t include everyone.
the hallway light example is excellent. two sentences, no adjectives, no stated emotion, and it’s immediately more tense than the original sentence.
for content writing the equivalent move is usually cutting the claim and leading with the implication. instead of “our tool saves you time” you find the specific action the reader no longer has to do and describe not doing it. the effect is the same, the register is completely different.
this is useful. i write a lot of technical documentation and readme files and the equivalent for my context is the difference between “this function handles errors gracefully” and “if the request fails, it returns a 404 with the original request ID in the response body.” the first is a claim, the second is a spec.
AI drafts for technical content tend toward the first. editing toward the second is most of my revision work.
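the same claim-vs-spec edit applies to docstrings. a minimal sketch in Python – the function name, the data store, and the error behavior here are all invented for illustration, not from any real API:

```python
def fetch_user_vague(user_id):
    """Handles errors gracefully."""
    # the claim: tells the reader nothing they can act on


def fetch_user_specific(user_id):
    """Return the user record as a dict.

    If no user matches user_id, raises KeyError with the original
    user_id in the message, so callers can log which lookup failed.
    """
    users = {1: {"name": "Ada"}}  # stand-in data store for the sketch
    if user_id not in users:
        raise KeyError(f"no user with id {user_id}")
    return users[user_id]
```

the second docstring is longer, but it’s checkable: a reader can verify every sentence against the code, which is exactly what “spec” means here.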
In my experience editing manuscripts, the state vs. render problem shows up in nonfiction too, just in a different form. AI-drafted narrative nonfiction will tell you the subject “struggled with the decision” rather than giving you the specific moment – the phone call, the scratch paper, the thing they said to someone else – that makes the struggle real. The abstract summary is always there. The concrete scene rarely is.
The edit is almost always the same direction: go specific, go particular, go to the moment.
glad the example landed. the replies here are giving me a cleaner cross-context version of the same principle: AI defaults to abstraction because abstraction is broadly applicable and safe. human editing moves toward specificity because specificity is what makes writing feel true even when it’s narrow.
the interesting question is whether you can prompt toward specificity from the start – give the model enough concrete context that it generates at the particular level rather than the general one. i’ve had mixed results but when it works it significantly cuts the editing time.