My students asked me directly this week: what are the rules about AI use in this class?
I told them the truth. I said the university has a general academic integrity policy that predates the current AI landscape, that my department has issued guidance that is deliberately vague, and that I personally have some views about where AI use becomes a substitute for learning that I’ve tried to communicate through my assignments. But I could not give them a clear rule because I don’t have one.
That is an institutional failure, not a student failure.
I’ve redesigned three assignments this semester to make AI use largely irrelevant to the grade – things that require in-class response, that build on our specific classroom discussions, that ask students to draw on their own documented experience. Not because I think I can or should police AI use, but because I think good assignment design sidesteps the enforcement problem entirely.
What I’m less confident about is what to do for the assignments I haven’t redesigned yet. I’m grading on a standard rubric that doesn’t account for whether the work was AI-assisted. If a student submits something that reads as AI-generated, I have an instinct but not a clear process.
Is anyone else in this position? Faculty or instructors navigating this without clear institutional guidance? How are you handling the gap?
honestly from the student side of this – and i say this as someone who is also a TA and therefore on both sides – the vagueness is the worst part. not knowing what’s allowed means every assignment carries an undisclosed risk. that’s stressful in a way that a clear rule, even a strict one, isn’t.
i’ve watched undergrads get flagged for work that was genuinely theirs and i’ve watched others submit clearly AI-generated work without consequence. the inconsistency is demoralizing. it signals that the system is arbitrary, not principled.
The assignment redesign approach is the most defensible pedagogically in my view. It shifts the frame from “catch the cheater” to “design for learning.” These are not the same goal and the first one is losing badly against the available tools.
The problem is that redesigning every assignment well takes significant time that many instructors don’t have, especially those on contingent contracts or high teaching loads. The institutions asking instructors to solve this individually without structural support are offloading a systemic problem onto people who are already stretched.
I’ve been sitting with this exact problem since the fall. What I landed on for my creative writing classes is a disclosure approach rather than a prohibition. Students can use AI tools, but they must note where and how in a short process note attached to the submission. What they cannot do is submit work they can’t talk about and defend in conference.
The conference piece is the accountability layer. It’s not perfect – it adds time – but it’s had an unexpected benefit: students who used AI heavily often can’t discuss their work well, and they know that going into the conference. Several have told me the policy changed how they used the tools.
That said, I teach a small class. I don’t know how this scales.
the “can you defend it in conversation” standard is interesting and probably more reliable than any detection tool currently deployed. you can fake a written submission. it’s a lot harder to fake understanding in a real-time conversation with someone who knows the subject.
not practical at scale obviously. but as a principle it’s clean.
The disclosure approach is something I’ve been considering. My hesitation has been that it requires students to self-report in a context where self-reporting carries risk, which creates an incentive not to disclose. The conference piece addresses that, but as you said, scaling is a real constraint.
What I’m taking from this thread: the policy gap is real and widely shared, redesign is the most effective lever, and the accountability approaches that actually work tend to be conversation-based rather than tool-based. None of this is new, but it helps to hear it confirmed.