"AI-Assisted" Is Doing a Lot of Work Right Now and We Should Talk About What It Actually Means

I grade undergraduate work as part of my TA assignment. I also write my own academic work. So I’m on both sides of this, and I want to be honest about how blurry it’s getting.

When my students say “I used AI for help,” that phrase covers an enormous range of actual behaviors. On one end: they asked a chatbot a clarifying question and then wrote the whole paper themselves. On the other end: they fed the prompt in, got a full draft, rearranged two paragraphs, and submitted. Both qualify as “AI-assisted.” Both are happening. The gap between them is vast.

The same problem exists in my own work. I use AI to help organize a literature review. I use it to generate outlines that I then break apart and rebuild. I use it to check whether my argument structure makes sense to a general reader. Is that AI-assisted? Sure. Does it cross a line? I genuinely don’t know. My committee hasn’t told me. My institution’s policy is three pages of language that doesn’t actually answer the question.

What I’ve started to notice is that “AI-assisted” has become the polite fiction everyone reaches for. Students use it to describe full ghostwriting. Researchers use it to describe light editing. The phrase is doing serious ethical work with very little definitional structure underneath it.

The disclosure question follows from this. How do you cite AI assistance when the usage ranges from “I asked it what a confidence interval was” to “it wrote my introduction”? The current citation formats don’t capture that. They treat AI like a source, not like a collaborator or a ghostwriter.

I’m curious whether anyone has found frameworks, institutional or personal, that actually help draw the line. Not just enforce it, but draw it in the first place.

The line question is the one my committee keeps avoiding.

We spent three sessions debating policy language, and what we landed on is “AI may be used as a writing aid but the intellectual contribution must be the student’s own.” Which sounds reasonable until you try to apply it. What counts as intellectual contribution? Argument structure? Evidence selection? Interpretation? All of the above?

I’ve started asking students to document their process. Not just the paper: alongside it, a brief process note describing what they used AI for and what they didn’t. It doesn’t solve the verification problem, but it changes the conversation. Students who actually did the work can explain it. Students who handed the whole thing to a model usually can’t describe their own argument with any depth.
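
For concreteness, here’s roughly the shape of the note I ask for, sketched as a structure. The field names are my own labels, not an institutional standard, and a plain paragraph answering the same questions works just as well:

```python
from dataclasses import dataclass

@dataclass
class ProcessNote:
    """One note per assignment, turned in alongside the paper."""
    tools_used: list[str]    # e.g. ["ChatGPT"]; empty if none
    used_for: list[str]      # e.g. ["clarifying a concept", "feedback on my outline"]
    not_used_for: list[str]  # e.g. ["drafting prose", "picking sources"]
    argument_summary: str    # 2-3 sentences, in the student's own words

def worth_a_conversation(note: ProcessNote) -> bool:
    # A thin summary isn't proof of anything; it's a prompt to talk.
    return len(note.argument_summary.split()) < 20
```

The last field is the one doing the work: a student who wrote the paper can summarize its argument without the paper in front of them.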

I teach secondary, and the threshold is different, but the same problem exists.

“I used AI to help” from a 16-year-old is almost always a euphemism. I know this. They know I know this. We’ve reached a kind of polite impasse where as long as the work is plausibly theirs, I don’t push. That’s not great. But the alternative is running an interrogation after every assignment, which destroys the classroom relationship and still doesn’t produce reliable evidence.

The disclosure framing matters though. I’ve started telling students that my concern isn’t whether they used AI. It’s whether they can stand behind the thinking in the work. If you can’t explain your own argument, we have a problem regardless of where the words came from.

Student perspective: the reason nobody’s honest about this is that the rules are vague and the consequences are unpredictable.

My school says no AI without disclosure. But what counts as disclosure? Am I supposed to write a note on every assignment? What format? Nobody tells us. So everyone just uses AI quietly and says “I used it for help” if asked, which is technically true and tells you nothing.

If schools actually wanted honest disclosure they’d build a system for it. Tell us exactly what to report and how. Right now it’s “disclose” with no infrastructure, so it just becomes a legal escape hatch for both sides.
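
To be concrete about what a system could look like: this is a hypothetical format I’m making up, not anything my school provides, but a fixed menu of categories plus one optional detail line would take thirty seconds per assignment and say far more than “I used it for help”:

```python
from enum import Enum

class AIUse(Enum):
    # Hypothetical categories a school could standardize; the names are mine.
    NONE = "did not use AI"
    CONCEPT_HELP = "asked it to explain a concept"
    BRAINSTORMING = "generated ideas or outlines I reworked"
    FEEDBACK = "got feedback on a draft I wrote"
    EDITING = "it rewrote sentences I had already written"
    DRAFTING = "it produced text that appears in the submission"

def disclosure_line(uses: set[AIUse], detail: str = "") -> str:
    """Render the one-line disclosure attached to an assignment."""
    if not uses or uses == {AIUse.NONE}:
        return "AI use: none."
    labels = "; ".join(u.value for u in sorted(uses, key=lambda u: u.name))
    return f"AI use: {labels}." + (f" Detail: {detail}." if detail else "")

print(disclosure_line({AIUse.CONCEPT_HELP}, "asked what a confidence interval was"))
# AI use: asked it to explain a concept. Detail: asked what a confidence interval was.
```

With something like that in place, “I used it for help” stops being an answer, because the form already asked the real question.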

The citation problem is something I’ve thought about a lot as a researcher.

Current guidance treats AI as a source: something you reference like a book or a dataset. But that misrepresents the relationship. When a language model helps me restructure an argument, it’s not a source. It’s a cognitive tool, more like a whiteboard than a library. We don’t cite whiteboards.

The distinction I find useful is between AI as a retrieval tool and AI as a generative tool. Retrieval: you’re asking it to surface information that exists elsewhere, which is closer to a database search and should probably be disclosed. Generative: it’s producing language or structure that enters your work, which is ghostwriting and needs a different framework entirely. The word “assisted” collapses both into one category and makes honest accounting nearly impossible.
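
If you wanted to operationalize that split, the minimal version is two buckets with different disclosure obligations. A sketch, with names that are mine rather than from any style guide:

```python
from enum import Enum, auto

class AIRole(Enum):
    RETRIEVAL = auto()   # surfaces information that exists elsewhere, like a database query
    GENERATIVE = auto()  # produces language or structure that enters the work itself

def disclosure_tier(role: AIRole, output_entered_text: bool) -> str:
    # The operative question: did the model's output become part of your text?
    if role is AIRole.GENERATIVE and output_entered_text:
        return "authorship-level disclosure: who wrote what"
    return "methods-level disclosure: what was consulted and how"
```

The point isn’t the code; it’s that “assisted” answers neither question, while two fields answer both.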

Honest take from the professional content side: the “AI-assisted” framing exists partly because clients don’t want to know the actual number.

Some of my clients would be uncomfortable if they knew AI produced the first 90% of a deliverable. So I say “I use AI tools in my workflow,” which is true and also tells them nothing. They don’t ask for more detail because they don’t want to have to update their pricing expectations.

This is the same dynamic you’re describing in academia, just with money instead of grades at stake. Everyone has agreed to use a vague phrase that lets both sides avoid a harder conversation. It’s not great, but it’s also pretty human.