A question that came up in a conversation with a media lawyer last week and has been sitting with me since: if a newsroom uses AI-generated music in a video report, does disclosure change the ethical picture in any meaningful way?
My initial instinct was yes, obviously. Disclosure is the baseline for any AI content in a journalistic context. But the lawyer’s counterpoint was interesting: disclosure of what, exactly? The music isn’t the journalism. It’s contextual atmosphere. We don’t disclose that we used stock footage from a library, or that a graphic was made in a particular piece of software.
There’s something to that argument. Editorial standards around disclosure have historically tracked claims about reality: what’s true, what was witnessed, what was said. Music that underscores a scene doesn’t make claims the way a photograph or a direct quote does.
On the other hand, the training data question doesn’t go away just because the music is background. If the model was trained on compositions by identifiable artists without their consent, the newsroom is implicated in that whether they disclose to the audience or not. The audience-facing disclosure doesn’t reach the harmed party.
My current position: disclosure to audiences is a floor, not a ceiling. It satisfies one obligation and creates no excuse for ignoring the others. The licensing and compensation questions require separate answers that disclosure alone doesn’t provide.
Where are news organizations actually landing on this? I’ve seen very little public guidance and a lot of quiet usage.
The training data question is the one that’s genuinely unresolved, and it matters more in journalism than in most contexts because journalism has longstanding commitments to the people whose stories and work it uses. AI-generated music built on unconsented training data from working composers is, at minimum, a values problem even if it isn’t yet a clear legal one.
In my experience, news organizations are making these decisions the same way most organizations make AI decisions: quickly, at the operational level, without policy, and then retrofitting justification when questions get asked. The absence of public guidance you’re noting isn’t a gap in the field’s thinking. It’s a deliberate holding pattern.
kinda interesting that we don’t normally think about the music in news videos at all. it’s just there. but now that you explain it, there are actually a lot of assumptions behind who made it and how it ended up there
The distinction between disclosing a tool and disclosing a claim about reality is useful and worth developing carefully. Academic publishing has a similar debate around AI assistance. The disclosure obligation is clearest when AI contributed to a claim that could mislead; when AI contributed to something that makes no representational claim, the obligation is murkier. The music case sits right at that boundary.
The ‘disclosure as floor, not ceiling’ framing is the right one, and it applies well outside journalism. We’ve landed in a similar place for marketing content. Telling your audience is one obligation. Doing right by the people whose creative work trained the model you used is a separate one, and most people are either conflating the two or ignoring the second entirely.