I teach academic writing. I also sit on my institution’s Academic Integrity Committee. So I’m thinking about this from two directions at once.
Writing has always served a dual purpose in academia. It’s a communication tool - you write to share ideas. But it’s also an assessment tool - you write to demonstrate that you understand something. Those two functions are getting pulled apart right now and I’m not sure institutions have fully reckoned with that.
When a student submits an AI-generated paper, the communication function is fine. The ideas might be well organized, the argument clear. But the assessment function has been circumvented. The paper doesn’t tell me what the student knows. It tells me what the model knows.
What worries me more broadly is how this cascades outside academia. In professional contexts, writing has also been an expertise signal. A well-crafted analysis, a tightly argued proposal, a report that shows genuine domain understanding - these gave observers information about the person who produced them. They were legible proxies for capability.
If AI flattens that signal - if everyone can produce competent professional writing regardless of underlying expertise - then what does competent writing tell us? Hiring managers are already describing this problem: candidates who submit strong written materials can’t perform at the same level in interviews or on the job. The writing no longer predicts the person.
The fields that relied heavily on written credentials as expertise proxies are facing a genuine credentialing crisis. I’m curious whether anyone has thoughts on what viable alternatives look like. Not just in academia but in professional hiring, journalism, research - anywhere writing was used as evidence of something.
The credentialing crisis is real and it’s already having downstream effects in peer review.
Reviewers are beginning to distrust submitted manuscripts in ways they didn’t before. Not because the manuscripts are necessarily lower quality, but because quality itself has become decoupled from the process of developing expertise. A methodologically sound paper no longer tells you whether the authors actually understand what they’re doing. It tells you that someone - human or model - could produce a methodologically sound paper.
What fills the gap is unclear. Oral examination traditions are coming back in some fields. Live problem-solving sessions in hiring. More emphasis on portfolios with process documentation. These work partially. None of them scale well.
Hiring is already adjusting in ways that aren’t being publicly discussed much.
At the senior level, the written artifacts have always been somewhat gamed - polished by PR teams, ghostwritten, heavily edited. What hiring at that level relied on was reference networks and track records. Those remain relatively hard to fake.
The gap opening up is at mid-level, where written work was a genuine differentiator and where reference networks are thinner. A strong analyst brief or strategic memo used to tell you a lot about a 28-year-old candidate. That signal is weakening. The response I’m seeing is more emphasis on live work - case studies done in real time, structured interviews around specific decisions they’ve made and why. More friction-intensive processes that are harder to game with prepared AI output.
In tech hiring this has been happening faster than anywhere else because AI coding tools changed the signal first.
Take-home coding assignments got gamed almost immediately once AI coding tools became good. Now most serious companies do live technical interviews, pair programming sessions, design discussions with follow-up questions. Things where you have to think out loud and respond to unexpected constraints.
The written/code signal collapsed and the industry adjusted within a couple of years. Academia adjusts slower. But the end state might look similar - more in-person assessment, more oral components, more emphasis on reasoning process rather than polished output.
The dissertation defense is the part of my degree that I think about completely differently now.
The defense was always supposed to be the moment where you proved you knew the work. AI makes that moment more important, not less. But it also makes the surrounding five years of work less legible as evidence of expertise, which changes what the defense has to do.
My advisor has started asking candidates to do an unscripted walkthrough of their methodology choices before the formal defense - basically showing their work live. Uncomfortable, very hard to fake, and actually more informative than a polished presentation. I think that’s the direction this goes.
Less polished written output as the primary evidence. More messy, live demonstration of thinking.
In literary publishing, the equivalent credential has always been the manuscript itself. If the writing is good, that’s the signal. Author credentials matter at the margins but the text was primary.
AI doesn’t collapse that signal entirely for literary work, because quality in literary fiction isn’t about technical correctness - it’s about particularity and judgment. A model can produce technically competent prose but it hasn’t lived enough to write about loss or estrangement in a way that carries genuine weight. The signal weakens for competence. It holds more for distinctive voice.
The places where the signal collapse is most severe are the ones where “good” meant technically correct and well-organized. For those fields the written credential was always a somewhat thin proxy for thinking. AI is just revealing how thin.