Core Claim

The question isn't "How do we catch students using AI?" The question is: Why are our assessments so easily bypassed by AI in the first place?

With systems focused on authentic assessment (collaborative, project-based learning on real-world problems that are meaningful to students), shortcuts around deep learning become harder to take and less valued by students themselves.


Reframing Cheating

The Default Assumption Problem

When we ask "How do we make sure students aren't cheating?", we start from suspicion. Yes, some students will look for shortcuts. But as an educational community, we cannot first assume all individuals will cheat.

The Collaboration Paradox

In real-world contexts, we expect workers and citizens to collaborate. In schools, we often call this cheating.

This reveals a deeper issue: much of what we assess doesn't actually require individual, isolated performance. The assessment design creates the "cheating" problem.

The Real Question

With or without AI, we should expect more from instruction and assessment practices. When students engage deeply with complex, relevant tasks, the allure of cheating diminishes because they find intrinsic value in the learning process itself.


AI Detection Tools: The Evidence

They Don't Work

From direct investigation: most AI detection tools do not reliably detect AI-generated content at this point.

Why they fail:

They Are Biased

Many AI and ML detection models were trained on standard academic English scraped from the web. This creates systematic bias: the tools privilege users of standard academic English, so writing shaped by other languages and literacy practices is more likely to be flagged as AI-generated.

This is an equity issue. Marginalized students are disproportionately harmed.

The Business Model Problem

There's a belief that detection tools work, fueled in part by the companies selling them. By using AI detection tools, you're:


The Fear Climate

Students experience real fear of being falsely flagged and accused, even when the work is entirely their own.

This climate of suspicion damages the trust between students and teachers that deep learning depends on.


The Spectrum of AI Use

Not all AI use is equal. Consider a spectrum:

Clearly Not Problematic

Gray Zone

Clearly Problematic

Actually Sophisticated

The last category—using AI as a starting point for original work—represents where we should be directing future learners.


Scaffolding Reframe

In education, we call receiving help "scaffolding" and think about the learner's zone of proximal development. Supports help learners level up their skills to the point where they can perform the task independently.

Historically, scaffolding came from teachers, tutors, and more knowledgeable peers.

Now we have the opportunity for AI to serve as a guide, coach, or expert. The question isn't whether to use AI support, but how to use it in ways that build capacity rather than bypass learning.


What Is and Isn't AI?

The "is it AI or not" game is often unproductive. Many tools are labeled AI as a buzzword.

Example: Grammarly

The tool matters less than the relationship between the learner and the output.


The Path Forward

Better Assessment Design

Better Questions

Instead of "Did you use AI?", ask:

Transparency About AI


Open Questions


Key Formulations (Preserve These)

"We cannot first assume all individuals will cheat."

"In real world contexts, we expect workers and citizens to collaborate, but in our schools we often identify this as cheating."

"With systems focused on authentic assessment on real world, wicked problems that are meaningful to students, cheating is harder and not valued by students."

"Many AI detection tools are based on standard academic English, privileging users of standard academic English over other languages and literacy practices."

"Using AI outputs as a starting point for original work, incorporating personal insights and refinements, represents a sophisticated approach to learning and creativity."