Using AI for writing without the output sounding completely generic -- what actually helps

I’ve been using AI writing tools seriously for about eighteen months, and I want to push back on a complaint I hear constantly – that AI writing is unavoidably generic.

It doesn’t have to be. But getting non-generic output requires more intentional input than most people give it. The generic output problem is mostly an input problem.

A few things that have materially changed the quality of what I get:

First: giving the model an explicit point of view before asking it to write anything. Not a topic – a position. “Write about content marketing” produces generic content. “Write from the perspective that most content marketing advice is wrong because it confuses distribution with strategy” produces something that at least has an angle.

Second: feeding it specific examples of writing you like before asking it to produce anything. Not describing the style – showing it. Three or four paragraphs that demonstrate the register and rhythm you want work better than any amount of tone instruction.

Third: asking it to make a specific claim, then asking separately why someone would disagree with that claim, and using both outputs to build something more dialectical. AI defaults to the consensus position; making it argue against itself is one way to pull it off center.
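To make the first and third techniques concrete, here is a minimal sketch of how the prompts could be assembled. This is not any particular tool's API – `position_prompt`, `objection_prompt`, and the example claim are all hypothetical, and the actual model call is left out; the point is only that the position and the objection are requested separately.

```python
def position_prompt(topic: str, position: str) -> str:
    """Build a prompt that gives the model a stance, not just a topic."""
    return (
        f"Write about {topic} from this position: {position}\n"
        "Argue the position directly; do not give a balanced overview."
    )

def objection_prompt(claim: str) -> str:
    """Separately ask for the strongest disagreement with the same claim."""
    return (
        f"Here is a claim: {claim}\n"
        "Explain why a thoughtful person would disagree with it. "
        "Give the strongest objection, not a strawman."
    )

# Example claim from the post above (purely illustrative).
claim = "Most content marketing advice confuses distribution with strategy."
draft_prompt = position_prompt("content marketing", claim)
pushback_prompt = objection_prompt(claim)
# Send each prompt to the model in a separate request, then merge
# the two outputs into one dialectical draft by hand.
```

The separation matters: asking for the claim and the objection in one prompt tends to produce a hedged both-sides answer, which is exactly the consensus output the technique is trying to avoid.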

None of this eliminates the editing step. The output still needs work. But it’s much more useful work than trying to rescue a flat draft.

the “show don’t describe the style” point is one i evangelize constantly. there’s almost no tone adjective that communicates precisely what you want. “conversational but professional” means something different to every writer alive. three sentences of the register you’re going for is unambiguous.

the arguing against itself technique is interesting – i’ve done something similar by asking it to steelman the position it just argued against. you often get more useful material out of the pushback than the original.

hot take: most generic AI writing is the product of a generic brief. if your prompt is “write a blog post about X for our audience,” you’ve given it nothing to work with except the topic and the format. of course it produces the average of everything it’s seen on that topic.

the position-first approach is the cleanest fix. even a slightly arguable position produces something more interesting than a balanced overview nobody asked for.

i teach writing and one of the things i tell students about their own work is also true for AI prompting: vague in, vague out. specificity at the input stage is the variable that controls quality more than anything else.

the dialectical technique you described is something i’m going to try for my own coaching materials. the objection-generation step is where a lot of the interesting thinking usually lives anyway – the main argument is often obvious, the interesting counterargument often isn’t.

from a technical writing perspective, the position-first approach is useful but in a different way. technical docs aren’t supposed to have opinions, but they can have a strong organizing principle. instead of “document this API endpoint” you get better output from “document this API endpoint assuming the reader has used similar tools before and is looking for the edge case behavior.”

the specificity of the assumed reader does the same work that a position does for opinion content.
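the same idea as a tiny sketch – a doc prompt that bakes in an assumed reader and a focus instead of a bare topic. the endpoint name and reader profile here are made up for illustration:

```python
def doc_prompt(endpoint: str, reader_profile: str, focus: str) -> str:
    """Prompt for API docs with an explicit assumed reader, not just a topic."""
    return (
        f"Document the {endpoint} endpoint.\n"
        f"Assume the reader: {reader_profile}\n"
        f"Prioritize: {focus}"
    )

# Hypothetical endpoint and reader, standing in for your own.
p = doc_prompt(
    "POST /v1/orders",
    "has used similar REST APIs before and skims anything introductory",
    "edge case behavior, error responses, and rate limits",
)
```

the reader profile and the focus line are doing the work the “position” does for opinion writing: they rule out the average-of-everything draft.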

The point about editing still being necessary is worth emphasizing because the discourse sometimes implies that better prompting eliminates the editing step. It doesn’t. Better prompting makes the editing step faster and more focused. You’re still the quality layer. The output is still a draft.

Treating it as a draft rather than a deliverable is the mental model that makes the whole workflow sustainable.