The obvious arguments about AI and society – job displacement, misinformation, concentration of power in a small number of developers – get significant coverage. I want to raise a few that seem to get less attention.
One is the epistemic homogenization problem. When large populations use similar AI systems to draft, summarize, and communicate, the range of linguistic and conceptual patterns in circulation narrows. The models are trained on existing text and generate toward the center of that distribution. At scale, that might mean fewer genuinely novel framings, fewer ideas that arrive in unexpected language. This is speculative, but I think it’s worth examining seriously.
Another is the credentialing problem. AI lets people produce work that looks like the product of expertise without possessing the underlying expertise. This is already showing up in academic and professional contexts. The longer-term concern is not individual instances but a gradual erosion of the signal value of certain credentials and demonstrated outputs – if everyone can produce work that looks like an expert produced it, what does expertise mean as a distinguishing category?
A third is something I’d call the legibility trap. AI systems optimize for producing outputs that are clear, organized, and sensible. There may be important ideas that are genuinely hard to express clearly – that require density, ambiguity, or unconventional structure to communicate. An environment that optimizes heavily against those features might lose something that isn’t easily named.
Curious whether any of these resonate or whether people think they’re overstated.
The credentialing point is the one I find most immediately consequential in my field. The honest answer is that a polished, well-organized manuscript has already lost much of its signal value as a quality indicator. What used to suggest editorial investment and developed thinking now only tells you that the author has access to an AI writing tool and knows how to run it.
The question of what replaces it as a quality signal is not one the field has answered. For books, it’s probably going to become something more like demonstrated knowledge in conversation, track record, community standing. Things that are harder to fake, but also harder to scale as signals.
the epistemic homogenization argument is the one i keep returning to. do with this what you will: i’ve noticed that the metaphors people reach for when describing AI-related concepts have gotten more similar over the past two years. that could be coincidence, or selection bias in what i’m reading, or something real. i genuinely don’t know.
what i do think is that if you want to say something genuinely new, saying it in language generated by a system trained on existing text is probably working against you. the new idea and the old language are in tension.
honestly the legibility trap is the one i hadn’t thought about before and it’s a bit unsettling. if you structure incentives around AI-readable, AI-evaluable outputs, you might inadvertently select against the kind of writing that resists easy summarization – which is often the kind that’s doing the hardest work.
this field is cooked if the answer to “how do we know this is good” becomes “how well does an AI describe it.”
the credentialing erosion is something i see in creative writing spaces too. the markers that used to signal a serious writer – the specific vocabulary, the structural confidence, the clear relationship to a literary tradition – can now be reproduced by anyone with a good prompt. that’s not entirely bad. gatekeeping was real and harmful. but the loss of legible quality signals creates its own problems for readers trying to find work worth their time.
i don’t have a clean answer for what replaces it. probably the things that were always the actual markers – specificity, distinctiveness, the sense of a particular mind behind it – but those are harder to scale as discovery mechanisms.
The responses here are doing something I find genuinely useful – extending the arguments rather than confirming or dismissing them. The credentialing point seems to have the most immediate traction across different fields.
What strikes me is that all three concerns share a structure: they’re about the second-order effects of a technology that’s optimizing for local quality at the expense of emergent properties that we value but don’t have good metrics for. That’s a pattern worth watching in how this technology develops.