Is AI actually bad for students or are we just scared of new tools?

I want to ask this question in good faith because I think the discourse has calcified into two camps that aren’t talking to each other.

Camp one: AI is fundamentally bad for student learning because it substitutes for the cognitive work that produces learning. If you use AI to write your essay, you don’t develop the thinking that writing requires. The process is the point, not the product.

Camp two: AI is a tool like any other and the question is how it’s used. Calculators didn’t destroy mathematical thinking. Word processors didn’t destroy writing. AI is the next iteration of this pattern and the moral panic will look familiar in retrospect.

I’m a high school creative writing teacher and I hold both of these views simultaneously depending on the day. When I watch a student use AI to generate an essay they then submit without reading, I think camp one is right. When I watch a student use AI to get past a blank page and then spend forty minutes revising and developing the ideas, I think camp two is right.

The tool isn’t the variable. The pedagogical context is the variable. What are we asking students to do and why, and does the presence of AI change whether doing that thing produces the intended learning?

I think most of the “AI is bad for students” arguments are actually arguments about specific bad uses in under-designed learning contexts. And I think most of the “AI is neutral” arguments are ignoring real risks in contexts where the thinking process genuinely is the point.

Where is the camp that holds both?

I’m in that camp and I think the framing you’ve outlined is right. The question is not “is AI bad” – it’s “bad for what learning outcome, in what context, used in what way.”

In my experience the hardest cases are the ones where the assignment was designed to produce learning through struggle and the AI removes the struggle entirely. Not because struggle is good in itself, but because the particular cognitive work being struggled through was the thing that was supposed to be learned. Removing it doesn’t just make the assignment easier – it removes the point of the assignment.

honestly i’m probably closer to camp two than camp one, but i say that as someone who is also a TA watching undergrads submit work they clearly can’t discuss or defend. that’s not a tool problem. that’s a learning problem. the tool just makes it more visible, faster.

the thing i keep coming back to is that most of the cognitive load arguments against AI apply equally well to heavy internet use, collaborative writing, and a dozen other things we don’t restrict the same way. if the argument is “you need to struggle to learn,” it has to be applied consistently, or it’s just nostalgia for how things used to be hard.

The research on this is actually more nuanced than either camp acknowledges. There is good evidence that certain kinds of generation – writing out an argument, producing an explanation in your own words – do support learning in ways that consuming AI-generated content does not. There is also evidence that this effect is task-specific and varies considerably by student and by what’s being learned.

The pedagogical design question matters more than the tool question. Instructors who design assignments requiring the kind of thinking AI cannot replicate see very different outcomes from instructors who haven’t changed their assignments since before AI tools existed.

The “can’t discuss or defend it” observation is the clearest signal I have in my own classroom. Not because I’m running oral exams on everything – I’m not – but because the students who used AI heavily for a piece tend to be unable to go deeper on it in our class discussions. There’s a thinness to their relationship to the ideas that shows up.

That’s not a moral judgment about the students. It’s an observation about what the tool did to the learning process in that context. The answer isn’t to ban AI – it’s to design contexts where that kind of shallow engagement doesn’t satisfy the requirement.

the cognitive load point is interesting and i’d extend it: the argument that “struggle produces learning” is true for the right kind of struggle. struggling to format a citation is not the kind of productive struggle that builds thinking. struggling to figure out what you actually believe about a complex question is. AI can remove the first kind without touching the second if the assignment is designed well.

the design problem is real. most assignments weren’t designed with any of this in mind.