I spent the last week deliberately using AI teacher tools for lesson planning across three different units, something I’ve been resistant to because I’ve seen how quickly these things get adopted without much critical thought. I wanted to form my own view before saying anything publicly.
Here’s where I landed.
For generating a first-draft structure for a lesson, they're genuinely useful. The kind of initial scaffolding that used to take 20 minutes of staring at a blank document now takes about five minutes of refining something the tool produced. That's a real time saving, and the output was surprisingly adaptable.
Where things got frustrating: the tools seem calibrated for a generic, compliance-ready version of teaching that doesn't map well to how I actually run my classroom. The suggested activities are safe and competent. They're also often boring. Anything that requires knowing your specific students (their reading levels, what they responded to last week, where the energy in the room actually is) is missing entirely.
The bigger concern is what happens when teachers who are newer or more overwhelmed start treating the tool’s output as a default rather than a starting point. The scaffolding stops being a draft and starts being the plan. That’s where I think the tool does real harm, not because it’s producing bad content, but because it’s replacing the judgment calls that good teaching actually runs on.
Tools don't replace judgment; they can make it easier to find a starting point. But "here is a five-paragraph structure for teaching theme in short fiction" is not a lesson. It's a template that still requires someone who knows what they're doing to turn it into teaching.
Worth discussing whether professional development around these tools is keeping up with adoption rates, because in my building it definitely isn’t.