Core Claim
The question isn't "How do we catch students using AI?" The question is: Why are our assessments so easily bypassed by AI in the first place?
With systems focused on authentic assessment (collaborative, project-based learning on real-world problems that are meaningful to students), shortcuts around deeper learning become harder to take and less valued by students themselves.
Reframing Cheating
The Default Assumption Problem
When we ask "How do we make sure students aren't cheating?", we start from suspicion. Yes, some students will look for shortcuts. But as an educational community, we cannot first assume all individuals will cheat.
The Collaboration Paradox
In real-world contexts, we expect workers and citizens to collaborate. In schools, we often call this cheating.
This reveals a deeper issue: much of what we assess doesn't actually require individual, isolated performance. The assessment design creates the "cheating" problem.
The Real Question
With or without AI, we should expect more from instruction and assessment practices. When students engage deeply with complex, relevant tasks, the allure of cheating diminishes because they find intrinsic value in the learning process itself.
AI Detection Tools: The Evidence
They Don't Work
From direct investigation: most AI detection tools do not reliably detect AI-generated content at this point.
Why they fail:
- AI/ML models evolve faster than detection can keep pace
- Savvy users can run content through multiple rounds of editing or paraphrasing to obfuscate its origin
- Detection is always one step behind generation
They Are Biased
Many AI and ML models were trained on standard academic English scraped from the web. This creates systematic bias:
- Tools privilege standard academic English over other languages and literacy practices
- Non-native English speakers are unfairly flagged as potential AI users
- The more someone's writing differs from "standard academic" patterns, the more likely it is to be falsely flagged
This is an equity issue. Marginalized students are disproportionately harmed.
The Business Model Problem
The belief that detection tools work is fueled in part by the companies selling them. By using AI detection tools, you're:
- Training their models with your data
- Paying for tools that don't yet work reliably
- Betting they'll improve—while students face consequences now
The Fear Climate
Students experience real fear:
- Fear of being accused of cheating (whether they used AI or not)
- Fear of poor grades or worse consequences
- Fear that the process is neither fair nor equitable
This climate of suspicion damages:
- Trust between students and teachers
- Willingness to take intellectual risks
- The learning environment itself
The Spectrum of AI Use
Not all AI use is equal. Consider a spectrum:
Clearly Not Problematic
- Using an online thesaurus
- Grammar checking (basic Grammarly)
- Spell check
Gray Zone
- AI suggesting synonyms
- AI rewriting to remove jargon
- AI helping structure an outline
Clearly Problematic
- Generating entire text and submitting as your own
- Submitting AI output with no engagement or transformation
Actually Sophisticated
- Using multiple AI outputs as starting points
- Transforming, questioning, and rewriting AI suggestions
- Integrating AI into an iterative process of thinking
The last category, using AI as a starting point for original work, represents the direction in which we should be guiding future learners.
Scaffolding Reframe
In education, we call receiving help "scaffolding" and think in terms of the learner's zone of proximal development: supports help someone level up their skills to the point where they can perform the task independently.
Historically, scaffolding came from:
- Teachers
- Caregivers
- Friends
- "More knowledgeable others"
Now we have the opportunity for AI to serve as a guide, coach, or expert. The question isn't whether to use AI support, but how to use it in ways that build capacity rather than bypass learning.
What Is and Isn't AI?
The "is it AI or not" game is often unproductive. Many tools are labeled AI as a buzzword.
Example: Grammarly
- Basic version: probably not generative AI
- Pro version: may use ML models, but we don't know what happens behind the scenes
- Using it to fix grammar: not AI misuse
- Using it to generate text and submitting that text without engagement: problematic
The tool matters less than the relationship between the learner and the output.
The Path Forward
Better Assessment Design
- Authentic assessment on real-world "wicked problems"
- Collaborative, project-based learning
- Tasks meaningful to students
- Process documentation, not just products
- Reflection and metacognition built in
Better Questions
Instead of "Did you use AI?", ask:
- "Walk me through your thinking process"
- "What did you learn while creating this?"
- "What would you do differently?"
- "How does this connect to [previous work]?"
Transparency About AI
- Acknowledge AI exists and students will encounter it
- Teach critical evaluation of AI outputs
- Discuss when and how AI use supports learning
- Model thoughtful AI integration
Open Questions
- How do we assess in ways that make AI use irrelevant rather than prohibited?
- What does "original work" mean when all thinking builds on prior thinking?
- How do we distinguish between AI as cognitive scaffold and AI as cognitive bypass?
- What would assessment look like if we assumed collaboration rather than prohibited it?
Key Formulations (Preserve These)
"We cannot first assume all individuals will cheat."
"In real world contexts, we expect workers and citizens to collaborate, but in our schools we often identify this as cheating."
"With systems focused on authentic assessment on real world, wicked problems that are meaningful to students, cheating is harder and not valued by students."
"Many AI detection tools are based on standard academic English, privileging users of standard academic English over other languages and literacy practices."
"Using AI outputs as a starting point for original work, incorporating personal insights and refinements, represents a sophisticated approach to learning and creativity."