Core Claim

Human-in-the-Loop (HITL) is not just a technical framework; it's a pedagogical mindset, one that centers human judgment, creativity, and care in interactions with AI systems. The goal: classrooms where students don't just consume AI outputs but learn to shape them ethically, critically, and creatively.

AI is a tool, but how we use it is a practice. HITL reminds us that how we use the tool matters just as much as what it can do.


Prompt Starters: Keep the Human in the Question

Use these templates to guide students' interactions with AI tools:

Prompt | Purpose
"Show me multiple perspectives on..." | Encourages critical comparison rather than single answers
"What assumptions is this answer making?" | Promotes reflective questioning of AI bias and scope
"Here's what I'm trying to learn—what questions should I be asking?" | Shifts AI to cognitive amplifier, not oracle
"Summarize this like I'm five. Now for a policymaker." | Pushes audience awareness and knowledge transformation
"Give me a rough draft, but flag anything ethically questionable." | Builds ethical reflection into the AI response itself
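For classrooms that script their AI interactions, the prompt starters above can be kept as reusable templates. This is a minimal sketch; the template names and the fill() helper are my own illustration, not part of the toolkit:

```python
# Reusable prompt starters from the table above.
# The dictionary keys and the fill() helper are illustrative, not canonical.
PROMPT_STARTERS = {
    "perspectives": "Show me multiple perspectives on {topic}.",
    "assumptions": "What assumptions is this answer making?",
    "questions": "Here's what I'm trying to learn: {goal}. What questions should I be asking?",
    "audiences": "Summarize this like I'm five. Now for a policymaker.",
    "ethics_flag": "Give me a rough draft of {task}, but flag anything ethically questionable.",
}

def fill(name: str, **slots: str) -> str:
    """Return a prompt starter with its blanks filled in."""
    return PROMPT_STARTERS[name].format(**slots)
```

Keeping the starters in one place makes it easy for students to compare which template produced which kind of response.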

Student Activities for HITL Thinking

These exercises emphasize the loop—asking students to respond, revise, and reflect in ways that foreground their own agency.

1. AI Remix + Human Rewrite

  1. Students ask AI to generate a response to a prompt
  2. They annotate it: What's useful? What's wrong? What's missing?
  3. They rewrite the piece in their own voice or for a different audience

What it teaches: Critical evaluation, voice preservation, audience awareness

2. Human Edits, AI Learns

  1. Use iterative prompting to show how human feedback changes AI outputs
  2. Students document each iteration
  3. Analyze: How (and whether) did the AI "learn" from feedback?

What it teaches: The nature of AI adaptation, limits of machine learning from single sessions
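One way to document the loop in step 2 is a simple iteration log. The sketch below assumes a stand-in ask_model function (a placeholder for whatever AI tool the class uses; here it just echoes the prompt so the sketch runs offline). The point is the record: prompt, output, and human feedback at each turn.

```python
# Sketch of an iteration log for the "Human Edits, AI Learns" activity.
# ask_model is a stand-in for the class's actual AI tool, not a real API.
def ask_model(prompt: str) -> str:
    return f"[model output for: {prompt}]"

def run_loop(initial_prompt: str, feedback_steps: list[str]) -> list[dict]:
    """Record each iteration: the prompt sent, the output, and the human feedback."""
    log = []
    prompt = initial_prompt
    for feedback in feedback_steps:
        output = ask_model(prompt)
        log.append({"prompt": prompt, "output": output, "feedback": feedback})
        # The next prompt folds the student's feedback back in.
        prompt = f"{prompt}\nRevise based on this feedback: {feedback}"
    return log

transcript = run_loop(
    "Explain photosynthesis for a 9th grader.",
    ["Too technical; use an everyday analogy.",
     "Good, but say where the energy comes from."],
)
```

Students can then read the transcript to analyze whether the changes came from the AI "learning" or simply from richer prompts, which is exactly the distinction the activity targets.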

3. Values Audit

After using AI for writing or research, students identify the values reflected in the output and in their own choices about what to keep, cut, or revise.

What it teaches: Values are always present in AI outputs—and in human choices about what to accept


Classroom Practices

Build in Feedback Loops

Make space for human review before AI outputs are accepted or shared, especially for high-stakes writing, research, or design work.

Encourage "Pause and Probe" Moments

Have students stop mid-task and question the AI's output before accepting it or moving on.

Normalize critical interruptions in the AI process.

Scaffold AI as a Thinking Partner

Teach students to treat AI not as a source of truth, but as a sparring partner in their thinking.


Ethical Reflection Routines

Three Questions Before You Use AI

  1. Why am I using it?
  2. What do I want to stay in control of?
  3. What could go wrong if I don't review it?

HITL Exit Ticket

"What decision did I make today that the AI didn't? Why was that important?"

Weekly HITL Roundtable (15 minutes)

A short, recurring discussion in which students share a moment from the week when they questioned, revised, or overrode an AI output, and why that decision mattered.


The Pedagogical Stance

HITL calls us to slow down, reflect, and keep our humanity in the loop—through curiosity, care, and judgment.

This isn't about rejecting AI or slowing "progress." It's about recognizing that the most important learning happens in the moments of friction: when students question, revise, and override.


Connection to Boundary Work

This toolkit operationalizes the theoretical framework in AI-Boundary-Co-Construction:

Boundary Work Concept | Toolkit Practice
Restriction | "Three Questions Before You Use AI"
Correction | "AI Remix + Human Rewrite"
Refusal | "Pause and Probe" moments
Modification | Iterative prompting exercises
Articulation | Values Audit, Exit Tickets

The theory tells us why boundary work matters. The toolkit shows how to cultivate it in classrooms.


Key Formulations (Preserve These)

"AI is a tool, but how we use it is a practice."

"HITL is a reminder that how we use the tool matters just as much as what it can do."

"Let's build classrooms where students don't just consume AI outputs but learn to shape them. Ethically, critically, and creatively."

"What decision did I make today that the AI didn't? Why was that important?"