AI writing in journalism -- where's the actual line and who's drawing it?

I’ve been watching how newsrooms handle the AI question, and the picture is inconsistent enough to be worth discussing directly.

Some outlets have published explicit policies: AI can be used for certain tasks – summarizing data, transcribing interviews, conducting initial research – but not for generating publishable prose without disclosure and human review. A few have gone further and banned AI-generated content from bylined pieces entirely.

Others have said nothing publicly, which in practice means each reporter is making their own call. That’s not a policy. That’s a liability waiting to surface.

What strikes me is how closely the journalism conversation parallels the academic integrity conversation, with different stakes attached. In academia the concern is student learning and credential validity. In journalism the concern is accuracy, source accountability, and public trust. These are not abstract concerns – there are documented cases of AI-generated factual errors making it into print.

The question I’d put to this forum: should disclosure be mandatory when AI is used in any meaningful way in a reported piece? And if so, where’s the threshold – AI-assisted headline? AI-drafted section? AI-generated quotes that were then verified?

I don’t think “AI was used in the production of this article” is sufficient. I think specificity matters. But I also know that level of transparency would make a lot of current newsroom practices look very different in daylight.

hot take: “AI was used in the production of this article” as a disclosure is the journalism equivalent of “we use cookies.” technically a disclosure, functionally meaningless.

the specificity point is the right one. used to suggest headlines is different from used to draft the lede is different from used to generate quotes that were then verified. these are not the same thing and collapsing them into one disclosure label lets newsrooms be technically transparent while staying substantively opaque.

The accuracy concern is the one I keep returning to. AI models generate confident-sounding text regardless of whether the underlying information is correct. In academic writing that’s a problem. In journalism that’s a potential public harm depending on the topic.

The verification question is where I think the line should be drawn. Any AI-generated factual claim that reaches a reader should have been verified by a human against a primary source before publication. That’s not a new standard – it’s the basic standard journalism already claims to hold. The AI context just makes the failure mode more common and the consequences faster.

to be fair, some of what AI is replacing in newsrooms – aggregating wire reports, producing earnings summaries, writing sports recaps from structured data – was already fairly mechanical and the accuracy risk in those contexts is lower. the problems happen when those same tools get used for things that require genuine reporting judgment.

the “each reporter making their own call” situation you described is the one that worries me. that’s how you get a wide range of practices under the same masthead with no consistency and no clear accountability.

The parallel to the classroom is real and I’ve been thinking about it from the other direction – if I’m teaching students who want to go into journalism, what standards should I be preparing them for? The answer right now is “it depends on the outlet” which is not a satisfying thing to teach.

I think the disclosure question is going to force itself. One significant AI-generated error in a high-profile piece, clearly attributed to AI use that wasn’t disclosed, and the industry will move quickly. It usually takes one visible failure to produce a standard that should have existed all along.

Worth saying: the outlets that have moved fastest on clear AI policies tend to be the ones whose editorial credibility is their core product – investigative desks, long-form magazines, papers with strong editorial traditions. The outlets moving slowest are often those competing on speed and volume, where the economic pressure to use AI heavily is highest and the incentive to be transparent about it is lowest.

That’s not a coincidence. It’s a structural tension that disclosure requirements alone won’t resolve.