Co-Constructing AI Boundaries Framework Component - Outputs
Definition in This Study
Outputs refers to what the AI model generated in response to student prompts and, crucially, to how students evaluated and responded to those outputs. This component captures both the AI's production and the student's critical assessment of it.
Mollick & Mollick (2023) Connection
The Outputs component corresponds to M&M's emphasis on:
- Vigilance and critical assessment of AI responses
- Risks of confabulation (hallucination) and bias
- The necessity for human oversight
- Avoiding complacency about AI output
Key M&M Principle: "Students must remain the human in the loop."
M&M repeatedly emphasize that students cannot passively accept AI outputs—they must actively assess quality, accuracy, and bias.
What This Component Analyzes
Primary Focus
- Output quality: Accuracy, coherence, depth, relevance
- Student evaluation: Evidence of critical assessment
- Recognition of problems: Hallucinations, bias, logical errors
- Decision point: Does output quality trigger intervention?
Secondary Focus
- Sophistication of evaluation criteria
- Explicit articulation of quality judgments
- Awareness of AI limitations
Agency in Outputs: The Evaluation Decision
This component captures Agency over Output Evaluation:
| Evidence of High Agency | Evidence of Low Agency |
|---|---|
| Critical assessment of quality | Uncritical acceptance |
| Identifies hallucinations/errors | Misses obvious problems |
| Recognizes bias or limitations | Treats output as neutral/objective |
| Verifies claims against sources | No fact-checking |
| Articulates evaluation criteria | No explicit judgment |
Types of Output Problems Students Should Recognize
1. Hallucinations (Confabulation)
- AI invents citations that don't exist
- AI "quotes" sources inaccurately
- AI creates plausible-sounding but false information
2. Bias
- Reinforces stereotypes or dominant narratives
- Omits marginalized perspectives
- Uses problematic framing or language
3. Logical Errors
- Faulty reasoning or argumentation
- Contradictions within output
- Unsupported claims
4. Superficiality
- Surface-level analysis lacking depth
- Generic responses without nuance
- Missing critical perspectives
5. Misalignment
- Output doesn't address the prompt
- Misunderstands the task or context
- Irrelevant or off-topic content
Boundary-work in Outputs
Outputs serve as a boundary trigger:
- Poor output quality forces students to intervene
- Recognition of problems signals need for human correction
- Assessment of output quality determines next moves
Students engaging in Boundary-work will:
- Reject problematic outputs
- Flag errors for correction
- Verify claims against sources
- Recognize when AI has reached its limits
Key Analytic Questions
When coding Outputs, ask:
- Quality:
  - Is the output accurate, coherent, and relevant?
  - Does it contain hallucinations or errors?
  - Is it superficial or substantive?
- Student Evaluation:
  - Does the student assess the output critically?
  - Do they recognize problems?
  - Do they verify claims?
- Response to Quality:
  - Does poor output trigger intervention?
  - Does the student revise prompts or reject the output?
  - Do they continue despite problems?
- Epistemic Stance:
  - Does the student treat the output as authoritative?
  - Do they position themselves as the evaluator?
Examples from Data
High Agency Response to Output
AI Output: "Research shows that direct instruction is the
most effective approach for literacy instruction."
Student Evaluation: "This is problematic. The AI is
presenting one perspective as universal fact. It ignores
critical literacy approaches and culturally sustaining
pedagogy. I need to prompt it to consider multiple
perspectives, specifically from scholars of color."
[Student revises prompt]
Analysis:
- Recognizes bias (dominant narrative)
- Identifies omission (critical perspectives)
- Takes corrective action
- Positions self as evaluator
Low Agency Response to Output
AI Output: [Contains hallucinated citation]
Student: [Copies output verbatim into final project without verification]
Analysis:
- No critical evaluation
- No verification
- Uncritical acceptance
- AI positioned as authority
Connection to Epistemic Stance
Output evaluation reveals epistemic stance:
- AI-authoritative stance: Accepts outputs without verification
- Self-authoritative stance: Critically evaluates, positions self as judge
- Co-constructed stance: Treats outputs as collaborative drafts requiring human refinement
Coding Categories for Outputs
Output Quality Codes
| Code | Definition | Example |
|---|---|---|
| Accurate | Factually correct | Verifiable citation |
| Hallucination | Invented information | Fake citation |
| Biased | Reinforces problematic narratives | Deficit framing |
| Superficial | Lacks depth or nuance | Generic summary |
| Substantive | Deep, nuanced analysis | Critical synthesis |
Student Response Codes
| Code | Definition | Example |
|---|---|---|
| Critical Evaluation | Explicit assessment | "This misses the critical perspective" |
| Verification | Fact-checking | Cross-references against sources |
| Recognition of Error | Identifies problems | "This citation is fake" |
| Uncritical Acceptance | No evaluation | Copies without assessment |
| Corrective Action | Revises prompt or rejects | Re-prompts for better output |
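
These codebooks can also be expressed as a small data structure when coding is supported by scripts (e.g., exporting coded excerpts for inter-rater comparison). A minimal Python sketch follows; the class names, field names, and example values are illustrative assumptions, not part of the framework itself:

```python
from dataclasses import dataclass, field
from enum import Enum

class OutputQuality(Enum):
    # Output Quality codes from the table above
    ACCURATE = "accurate"            # factually correct
    HALLUCINATION = "hallucination"  # invented information (e.g., fake citation)
    BIASED = "biased"                # reinforces problematic narratives
    SUPERFICIAL = "superficial"      # lacks depth or nuance
    SUBSTANTIVE = "substantive"      # deep, nuanced analysis

class StudentResponse(Enum):
    # Student Response codes from the table above
    CRITICAL_EVALUATION = "critical_evaluation"
    VERIFICATION = "verification"
    RECOGNITION_OF_ERROR = "recognition_of_error"
    UNCRITICAL_ACCEPTANCE = "uncritical_acceptance"
    CORRECTIVE_ACTION = "corrective_action"

@dataclass
class CodedExcerpt:
    """One coded segment of a student-AI exchange."""
    student_id: str
    excerpt: str                                  # verbatim text from the chat log
    output_codes: list[OutputQuality] = field(default_factory=list)
    response_codes: list[StudentResponse] = field(default_factory=list)
    memo: str = ""                                # coder's analytic memo

# Hypothetical example: coding the hallucinated-citation episode described above
segment = CodedExcerpt(
    student_id="S07",
    excerpt="[Copies output verbatim into final project without verification]",
    output_codes=[OutputQuality.HALLUCINATION],
    response_codes=[StudentResponse.UNCRITICAL_ACCEPTANCE],
    memo="AI positioned as authority; no fact-checking evident.",
)
```

Keeping the codes in enums rather than free-text labels guards against typo-level disagreements when multiple coders tag the same transcripts.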
Relationship to Other Framework Components
- ← Co-Constructing AI Boundaries Framework Component - Prompts: Prompt quality influences output quality
- → Co-Constructing AI Boundaries Framework Component - Integration: Output evaluation determines integration approach
- → Co-Constructing AI Boundaries Framework Component - Reflection: Students may reflect on output problems
Pedagogical Implications
Teaching critical output evaluation:
- Discuss common AI failure modes (hallucination, bias, etc.)
- Practice fact-checking and verification
- Model critical evaluation of AI outputs
- Teach students to recognize bias and omissions
- Encourage healthy skepticism
- Provide rubrics for output quality assessment
The "Human in the Loop" Principle
M&M's central requirement is that students must remain the human in the loop:
"Students should actively oversee the AI's output, check with reliable sources, and complement any AI output with their unique perspectives and insights."
The Outputs component operationalizes this principle by analyzing:
- Do students oversee? (Critical evaluation)
- Do they check? (Verification)
- Do they complement? (Add human perspective)
Data Collection Notes
Where to find evidence:
- NotebookLM chat logs (what the AI produced)
- Student responses to outputs (in chat or reflections)
- Verification behaviors (cross-checking sources)
- Explicit evaluative comments
- Evidence of prompt revision after poor output
- Final projects (did they catch errors?)
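
Where verification behaviors are of interest, one practical aid is to pre-extract citation-like strings from exported chat logs so coders can cross-check each against the source list. A rough sketch, assuming plain-text logs and APA-style in-text citations; the regex is a flagging heuristic of my own construction, not a validator:

```python
import re

# Very rough pattern for APA-style in-text citations, e.g., "(Smith, 2021)"
# or "(Lee & Park, 2019)". Intended to surface candidates for manual checking.
CITATION_PATTERN = re.compile(
    r"\(([A-Z][A-Za-z'-]+(?:\s*&\s*[A-Z][A-Za-z'-]+)?),\s*(\d{4})\)"
)

def flag_citations(ai_output: str) -> list[str]:
    """Return citation-like strings in an AI output for manual verification."""
    return [match.group(0) for match in CITATION_PATTERN.finditer(ai_output)]

sample = ("Direct instruction is most effective (Smith, 2021), "
          "as confirmed by (Lee & Park, 2019).")
for citation in flag_citations(sample):
    print("Verify against source list:", citation)
```

Flagged strings still require human judgment; the point is to make uncritical acceptance of fabricated citations easier to catch, not to automate the evaluation itself.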
Related Notes
- Analytic Framework for AI Human Meaning-Making Practices
- How learners should engage Large Language Models framework
- Agency
- Boundary-work
- Epistemic Stance
Tags
#framework-component #outputs #critical-evaluation #AI-literacy #hallucination #bias