Co-Constructing AI Boundaries Framework Component - Outputs

Definition in This Study

Outputs refers to what the AI model generated in response to student prompts and, crucially, how students evaluated and responded to those outputs. This component captures both the AI's production and the student's critical assessment of it.


Mollick & Mollick (2023) Connection

The Outputs component corresponds to M&M's emphasis on critical evaluation of AI-generated content:

Key M&M Principle: "Students must remain the human in the loop."

M&M repeatedly emphasize that students cannot passively accept AI outputs—they must actively assess quality, accuracy, and bias.


What This Component Analyzes

Primary Focus

Secondary Focus


Agency in Outputs: The Evaluation Decision

This component captures Agency over Output Evaluation:

| Evidence of High Agency | Evidence of Low Agency |
| --- | --- |
| Critical assessment of quality | Uncritical acceptance |
| Identifies hallucinations/errors | Misses obvious problems |
| Recognizes bias or limitations | Treats output as neutral/objective |
| Verifies claims against sources | No fact-checking |
| Articulates evaluation criteria | No explicit judgment |

Types of Output Problems Students Should Recognize

1. Hallucinations (Confabulation): confidently presented but fabricated information, such as invented citations or nonexistent studies.

2. Bias: outputs that reinforce problematic narratives or present one perspective as universal fact.

3. Logical Errors: reasoning that is internally inconsistent or draws unsupported conclusions.

4. Superficiality: generic responses that lack depth, nuance, or engagement with the specific question.

5. Misalignment: outputs that fail to address the student's actual prompt or goals.


Boundary-work in Outputs

Outputs serve as a boundary trigger:

Students engaging in Boundary-work will:


Key Analytic Questions

When coding Outputs, ask:

  1. Quality:

    • Is the output accurate, coherent, relevant?
    • Does it contain hallucinations or errors?
    • Is it superficial or substantive?
  2. Student Evaluation:

    • Does the student assess the output critically?
    • Do they recognize problems?
    • Do they verify claims?
  3. Response to Quality:

    • Does poor output trigger intervention?
    • Does student revise prompts or reject output?
    • Do they continue despite problems?
  4. Epistemic Stance:

    • Does student treat output as authoritative?
    • Do they position themselves as evaluator?

Examples from Data

High Agency Response to Output

AI Output: "Research shows that direct instruction is the
most effective approach for literacy instruction."

Student Evaluation: "This is problematic. The AI is
presenting one perspective as universal fact. It ignores
critical literacy approaches and culturally sustaining
pedagogy. I need to prompt it to consider multiple
perspectives, specifically from scholars of color."

[Student revises prompt]

Analysis: The student critically assesses the output, recognizes bias and omission, articulates evaluation criteria, and takes corrective action by revising the prompt. This is evidence of high agency over output evaluation.

Low Agency Response to Output

AI Output: [Contains hallucinated citation]

Student: [Copies output verbatim into final project without
verification]

Analysis: The student accepts the output uncritically, performing no verification and missing the hallucinated citation. This is evidence of low agency over output evaluation.


Connection to Epistemic Stance

Output evaluation reveals epistemic stance:


Coding Categories for Outputs

Output Quality Codes

| Code | Definition | Example |
| --- | --- | --- |
| Accurate | Factually correct | Correct citation format |
| Hallucination | Invented information | Fake citation |
| Biased | Reinforces problematic narratives | Deficit framing |
| Superficial | Lacks depth or nuance | Generic summary |
| Substantive | Deep, nuanced analysis | Critical synthesis |

Student Response Codes

| Code | Definition | Example |
| --- | --- | --- |
| Critical Evaluation | Explicit assessment | "This misses the critical perspective" |
| Verification | Fact-checking | Cross-references against sources |
| Recognition of Error | Identifies problems | "This citation is fake" |
| Uncritical Acceptance | No evaluation | Copies without assessment |
| Corrective Action | Revises prompt or rejects | Re-prompts for better output |

Relationship to Other Framework Components


Pedagogical Implications

Teaching critical output evaluation:

  1. Discuss common AI failure modes (hallucination, bias, etc.)
  2. Practice fact-checking and verification
  3. Model critical evaluation of AI outputs
  4. Teach students to recognize bias and omissions
  5. Encourage healthy skepticism
  6. Provide rubrics for output quality assessment

The "Human in the Loop" Principle

M&M's central requirement is that students must remain the human in the loop:

"Students should actively oversee the AI's output, check with reliable sources, and complement any AI output with their unique perspectives and insights."

The Outputs component operationalizes this principle by analyzing:


Data Collection Notes

Where to find evidence:



Tags

#framework-component #outputs #critical-evaluation #AI-literacy #hallucination #bias