Tracing the AI-Human Conversation Framework
The five-part framework (Inputs, Prompts, Outputs, Integration, Reflection) is strong. To make it even clearer for a conference audience, group the first three components as the Observed Interaction and the last two as the Learner's Boundary Work.
Here is the framework, with the boundary work emphasized as intended:
| Framework Component | Focus (What is Analyzed) | Boundary Work/Agency Evidence |
|---|---|---|
| 1. Inputs 📚 | How they curated sources and framed the problem. | Agency over Knowledge Base: Did they select sources that challenge or expand the course's perspective? |
| 2. Prompts 💬 | How they directed or constrained the AI's response. | Agency over Cognitive Task: Were prompts complex, asking for critique, synthesis, or comparison? (Did they demand thinking?) |
| 3. Outputs 💻 | The AI model's response. | Boundary Setting: Did the output's quality or bias force them to correct or refuse it? |
| 4. Integration ✏️ | How they transformed, rejected, or incorporated the AI output into their final work. | Boundary of Authorship/Trust: Evidence of modification (high agency), rejection (high critical stance), or verbatim incorporation (low agency). |
| 5. Reflection 🧠 | How they articulated boundaries, ethics, and epistemic stance. | Metacognitive Boundary: Explicit statements about who is responsible for what in the final knowledge product. |
Conclusion: Both the analytical framework and your methodological justification are academically rigorous and well suited to a research presentation. The term "micro-interactional analysis" is precise and compelling for describing the case study's deep focus.
To unpack the framework further: it is built around analyzing the entire workflow, which divides into two main phases, the Observed Interaction and the Learner's Boundary Work.
Phase 1: The Observed Interaction (Items 1, 2, 3)
This phase captures the direct back-and-forth between the student and NotebookLM, showing the raw materials and the first response.
| Component | Focus (What is Analyzed) | Why It Matters for Your Presentation |
|---|---|---|
| 1. Inputs 📚 | Curated Sources & Problem Framing: What knowledge did the student feed the AI, and how did they set up the task? | Agency over Knowledge Base: This reveals the student's initial boundary—are they limiting the conversation to just the assigned readings, or are they bringing in outside, critical sources to challenge the AI? |
| 2. Prompts 💬 | Direction & Constraint: The specific questions or commands the student used. | Agency over Cognitive Task: This is the best place to find evidence of critical engagement. Simple prompts ask the AI to summarize (low effort); complex prompts ask it to critique, compare, or argue (demanding thinking and setting a high intellectual boundary). |
| 3. Outputs 💻 | The AI Model's Response: What the AI actually delivered. | Boundary Setting: The quality and bias of the output force the student to decide what to do next. An unsatisfactory or biased output is a key moment where the student must intervene (set a boundary). |
Phase 2: The Learner's Boundary Work (Items 4, 5)
This phase focuses on the student's response to the AI's contribution, which is where the "co-construction" of roles, ethics, and effort is most visible.
| Component | Focus (What is Analyzed) | Why It Matters for Your Presentation |
|---|---|---|
| 4. Integration ✏️ | Transformation, Rejection, or Incorporation: How did the AI text end up in the final project? | Boundary of Authorship/Trust: This is your evidence of agency in action. High agency: they modified or transformed the text, treating the AI as a draft creator, not a final author. High critical stance: they rejected the text entirely, demonstrating critical refusal to trust the output. Low agency: they incorporated the text verbatim, accepting the AI's framing unchanged. |
| 5. Reflection 🧠 | Articulation of Boundaries, Ethics, and Epistemic Stance: The student's commentary on the process. | Metacognitive Boundary: This provides the student's own words on who is responsible for what. Look for statements that assign responsibility (e.g., "The AI did the synthesis, but I had to correct the logic"), clarifying the ultimate human responsibility for the final knowledge product. |
In short, this framework shows not just what the students did, but why they did it, and how their actions in the NotebookLM workspace demonstrate the difficult process of setting and adjusting the rules for an AI partnership.
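For analysts who want to tabulate coded episodes, the five components can be sketched as a minimal coding record. This is only an illustrative data structure, not part of the framework itself; every name here (CodedInteraction, Agency, boundary_work_summary) is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Agency(Enum):
    """Integration codes from the framework's boundary-of-authorship column."""
    HIGH = "high"          # modified or transformed the AI output
    CRITICAL = "critical"  # rejected the output outright
    LOW = "low"            # incorporated the output verbatim

@dataclass
class CodedInteraction:
    """One student-AI episode coded with the five-part framework."""
    inputs: list[str]     # 1. curated sources fed to the AI
    prompts: list[str]    # 2. directives and constraints issued
    outputs: list[str]    # 3. AI responses received
    integration: Agency   # 4. how the output entered the final work
    reflection: str       # 5. the student's metacognitive commentary

    def boundary_work_summary(self) -> str:
        """Collapse the two boundary-work components (4 and 5) into a memo line."""
        return f"integration={self.integration.value}; reflection={self.reflection}"

# Example episode coded from a hypothetical transcript
episode = CodedInteraction(
    inputs=["assigned reading", "outside critical source"],
    prompts=["Critique the author's method"],
    outputs=["AI-generated critique draft"],
    integration=Agency.HIGH,
    reflection="The AI did the synthesis, but I had to correct the logic",
)
print(episode.boundary_work_summary())
```

A record like this makes it easy to count, say, how many episodes in a corpus were coded HIGH versus LOW on Integration, which is the kind of pattern the presentation's boundary-work argument rests on.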