AI Boundary Co-Construction

Core Claim

The central challenge of AI integration in education is not teaching students how to use tools, but supporting them in developing agentive, ethical boundary practices with cognitive systems. When pre-service teachers engage with generative AI, the real pedagogical work happens in the invisible interactional traces—the prompts, revisions, refusals, and corrections—not in the polished final artifacts.

Agency in AI-mediated literacy practice is not defined by tool use. It is defined by learners' capacity to interrupt, redirect, and refuse automated outputs in service of their epistemic goals. Put simply: agency is the refusal of the generic.

This reframes Human-in-the-Loop (HITL) practice from a compliance checkbox to an emergent, interactional accomplishment. HITL is not something institutions mandate; it is something learners do through boundary work.


Conceptual Framework

Boundary Work as Observable Practice

Boundary work refers to the interactional practices through which learners determine what cognitive, interpretive, and ethical labor remains human and what may be delegated to AI systems. These boundaries are enacted through observable actions: restricting an AI's sources or role, correcting its outputs, refusing its suggestions, modifying its text, and articulating the rationale for each move.

In short, boundary work is the observable process through which students assert control over the technological process so that it adheres to humanistic and pedagogical values.

Two Interaction Profiles

Analysis of interaction logs reveals two contrasting trajectories:

The Orchestrator (High Agency)

Interrupts, redirects, and corrects AI outputs through iterative loops, accepting friction as the price of maintaining epistemic authority.

The Outsourcer (High Delegation)

Accepts generic outputs with minimal intervention, moving in a straight line from prompt to product as boundaries collapse into delegation.

The Loop vs. The Line

The distinction between orchestration and outsourcing can be visualized as interactional trajectories:

Concept                         Boundary Work Interpretation
Correction loop                 Active boundary maintenance
Straight line                   Boundary collapse or delegation
Friction                        Boundary enforcement
Acceptance of generic output    Boundary erosion

If the loop is too smooth, human agency is lost. True cognitive amplification requires productive friction.

From Verification to Valuation

In the era of search engines, we taught source evaluation through credibility and relevance. Retrieval-augmented generation (RAG) tools such as NotebookLM typically pass credibility checks because their outputs are grounded in user-provided sources. The new literacy skill is valuation:

Valuation demands assessing epistemic quality, depth, and alignment with one's own voice and standards—not merely factual accuracy.

Agency Through Constraint

A counterintuitive finding: high-level agency is often demonstrated not by exploiting AI's full generative capacity, but by intentionally restricting it. This takes two forms:

  1. Bounding the AI's knowledge through specific source selection (RAG curation)
  2. Bounding its role through detailed prompt constraints

By deliberately constraining AI, students become curators of truth, enforcing boundaries that preserve critical human judgment.


Why This Matters (Research / Pedagogy)

For Research

This framework repositions the unit of analysis. Instead of evaluating final products or surveying attitudes, we examine interaction traces as evidence of ethical labor. Chat logs, revision histories, and prompt sequences become the data. Without attending to these traces, we cannot distinguish collaboration from delegation, agency from compliance.

The contribution is conceptual, not tool-bound. If generative AI were replaced by any other cognitive system that blurred authorship and delegation, this framework would still apply. The question persists: How do people decide what work remains human when cognitive systems are present?

For Teacher Education

Pre-service teachers occupy a dual positionality: they are simultaneously assessed as learners and socialized as future professionals. This intensifies the stakes. Decisions made in coursework are implicitly rehearsals for future classroom practice.

The institutional landscape produces not only conceptual confusion but also affective consequences.

Boundary work is therefore not merely cognitive. It is also identity-protective and legitimacy-seeking, shaped by fear of sanction, internalized norms of "good student" behavior, and emerging conceptions of what it means to be an ethical teacher.

The Implication

The goal is not compliance with AI policies. The goal is designing loops worth living in—learning environments where productive friction, valuation, and refusal support ethical AI literacy and amplify rather than replace human cognition.


What This Helps Me Do

This framework provides:

  1. A theoretical spine for research on AI in literacy education that centers boundary work rather than tool adoption
  2. Observable indicators for coding interaction logs (restriction, correction, refusal, modification, articulation)
  3. Two archetypes (Orchestrator/Outsourcer) that make findings legible and actionable for practitioners
  4. Language for teaching that reframes AI literacy from skills to ethical positioning
  5. A way to name the shame that many educators feel but cannot articulate

It also clarifies what this work is not about: comparing AI tools, measuring learning gains, or producing best practices. The contribution is about ethical agency under technological uncertainty.


Open Questions / Tensions

Methodological

Theoretical

Practical

Unresolved


Key Formulations (Preserve These)

"Agency was defined by friction. They worked harder than the machine."

"Agency is the refusal of the generic."

"The real problem is not whether AI is allowed, but how boundaries are negotiated in practice."

"Boundary work in AI-mediated literacy practice is enacted through agentive refusals, constraints, and iterative corrections that maintain human epistemic authority within the loop."

"This study uses generative AI as a site for examining how pre-service teachers enact ethical boundary work and professional agency under conditions of institutional ambiguity."

"Ethical AI literacy is practiced, not declared. HITL is not a switch—it's a relationship."