We’ve been using AI image generation as part of our content production workflow for about eight months. The speed and cost benefits are real and I’m not going to pretend they aren’t. But we’ve been dealing with an attribution and disclosure question that I don’t think has a clean answer yet.
Our current approach is to note ‘created with AI assistance’ on anything where AI was used to generate visual elements. Not in a prominent way. More of a disclosure line that’s there if you look for it. That feels like a reasonable baseline, but I’m aware it’s out of step with how other companies are handling this, which is often not at all.
The question I keep coming back to: who is owed that disclosure? Our readers, certainly. But what about the illustrators and photographers whose work trained the models we’re using? The disclosure line we’ve added doesn’t reach them and there’s no mechanism that does.
I’ve also found that our team’s comfort with AI images varies significantly and isn’t always connected to quality. Some people feel strongly that certain types of content, editorial images especially, should be human-created regardless of whether the AI version would be just as good. That’s a values position, and it’s better to surface it explicitly than to leave it unresolved.
We landed on a rough internal policy: AI images are fine for abstract, decorative, or illustrative uses. Anything where the image is the argument (a portrait, a documentary-style photo, a representation of a real event) stays human. It’s a blunt line, but it gives us something to work from.
Curious how others are drawing these distinctions.
The values position you’re describing among your team (that some content should be human-created regardless of quality) is worth taking seriously. It’s not irrational. It reflects a view about what certain types of images are for and what they’re supposed to represent, one that isn’t purely about the pixel output. Editorial photography in particular has always carried claims about witnessing and truth-telling that AI generation can’t replicate even if the images look identical.
The line you’ve drawn between decorative and argumentative images is defensible and roughly maps to how responsible editorial photography works already. A stock image of a laptop illustrating a technology article carries different standards than a photo of a named person at a named event. Applying that existing logic to AI images makes sense and doesn’t require reinventing the whole framework.
For content work, I’ve landed on the same basic distinction. Abstract and decorative, fine. Anything that claims to represent something real or specific, human-created. The line does some work but it’s not airtight. You can make an argument that a generated stock photo of ‘a person using a laptop’ represents something, even if no specific person or event is implied.
The internal policy work you’ve done is further along than most. A lot of teams are using AI images without having explicitly decided anything. Having a stated line, even a blunt one, means you can have a real conversation when a borderline case comes up rather than making ad hoc calls that set inconsistent precedent.
The question of what’s owed to the people whose work trained the models is one I don’t think the disclosure-to-readers frame addresses at all. Those are two separate obligations. One can be met through labeling. The other requires licensing or compensation structures that don’t currently exist in most contexts. Conflating them makes it easy to check one box and feel like the problem is handled.