Academic integrity policy rewrites in the AI era: what's actually working?

My department has spent the last six months attempting to rewrite our academic integrity policy to account for AI. I say ‘attempting’ because we have not yet produced a policy anyone is satisfied with, which is itself a finding worth sharing.

The core problem we keep running into: the existing policy framework was built around a distinction between ‘your work’ and ‘someone else’s work.’ AI doesn’t map cleanly onto either category. It’s generated in response to your prompts, shaped by your direction, but not produced by your cognition in the way the policy assumes. Every attempt to define the line has produced either a rule so broad it prohibits reasonable use cases or one so narrow it fails to address the actual concerns.

The other issue is detection. We can’t build a policy around the assumption that AI use can be reliably identified after the fact. The detection tools available to us have error rates that would make any policy built on them legally indefensible in an appeal. So we’re writing a policy we know we can’t consistently enforce, which is a poor position to be in.
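To put a rough number on why detection-based enforcement is so fragile: once you account for how few submissions are actually AI-generated, even a detector with a low false positive rate wrongly flags a large share of honest work. A minimal back-of-the-envelope sketch, where the 3% false positive rate, 90% detection rate, and 10% prevalence are all illustrative assumptions rather than figures for any real tool:

```python
# Base-rate sketch: what share of flagged submissions are honest work?
# All three numbers are illustrative assumptions, not measured vendor figures.
false_positive_rate = 0.03  # honest submissions wrongly flagged as AI
true_positive_rate = 0.90   # AI-generated submissions correctly flagged
prevalence = 0.10           # fraction of submissions actually AI-generated

# Bayes' rule: P(honest | flagged)
p_flagged = prevalence * true_positive_rate + (1 - prevalence) * false_positive_rate
p_wrongful_flag = (1 - prevalence) * false_positive_rate / p_flagged

print(f"share of flags that hit honest work: {p_wrongful_flag:.0%}")  # ~23%
```

With those assumptions, roughly one in four misconduct flags lands on a student who did nothing wrong, which is exactly the kind of ratio that collapses in an appeal.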

What has worked: assignment redesign. Colleagues who restructured their assignments around processes rather than products (oral defenses, in-class components, iterative drafts with documented revisions) have found the AI question largely manages itself. If the assessment requires demonstrating understanding in real time, the shortcut path doesn’t produce a passing result.

What hasn’t worked: blanket prohibition language. It generates anxiety, produces inconsistent enforcement, and doesn’t distinguish between uses that undermine learning and uses that support it.

Where we’ve landed: disclosure requirements, assignment redesign guidance for faculty, and an explicit acknowledgment that the policy will need to be revised as the landscape changes. Not satisfying, but probably honest.

the assignment redesign finding is what my own advisor figured out by accident. his assignments have always required in-person explanation of written work, and he’s never had to update his AI policy because the structure already handles it. worth noting that traditional assessment design obviously wasn’t built with AI in mind, but a lot of the older pedagogical approaches (Socratic discussion, oral exams, portfolio-based assessment) turn out to be naturally more resilient.

The ‘process rather than product’ shift is where I’ve landed at the secondary level too. Documenting the writing process, requiring drafts, doing some of the work in class where I can see it happening. Not because I don’t trust my students, but because it makes the learning visible in ways that are useful for everyone. The AI question is almost secondary to the pedagogical improvement.

The legal defensibility point about detection tools is one that more institutions need to confront directly. Building a policy that relies on detection outputs to initiate a misconduct process, when those outputs have documented false positive rates, is an exposure that legal counsel at most institutions hasn’t fully assessed yet. Disclosure requirements sidestep that problem by shifting the burden of documentation to the student.