My department has spent the last six months attempting to rewrite our academic integrity policy to account for AI. I say ‘attempting’ because we have not yet produced a policy anyone is satisfied with, which is itself a finding worth sharing.
The core problem we keep running into: the existing policy framework was built around a distinction between ‘your work’ and ‘someone else’s work.’ AI doesn’t map cleanly onto either category. It’s generated in response to your prompts, shaped by your direction, but not produced by your cognition in the way the policy assumes. Every attempt to define the line has produced either a rule so broad it prohibits reasonable use cases or one so narrow it fails to address the actual concerns.
The other issue is detection. We can't build a policy around the assumption that AI use can be reliably identified after the fact. The detection tools available to us have error rates that would make any policy built on them legally indefensible in an appeal. So we're writing policy for conduct we can't consistently detect, which is a poor position to be in.
What has worked: assignment redesign. Colleagues who restructured their assignments around processes rather than products (oral defenses, in-class components, iterative drafts with documented revisions) have found the AI question largely manages itself. If the assessment requires demonstrating understanding in real time, the shortcut path doesn't produce a passing result.
What hasn’t worked: blanket prohibition language. It generates anxiety, produces inconsistent enforcement, and doesn’t distinguish between uses that undermine learning and uses that support it.
Where we’ve landed: disclosure requirements, assignment redesign guidance for faculty, and an explicit acknowledgment that the policy will need to be revised as the landscape changes. Not satisfying, but probably honest.