Epistemic Stance
Definition
Epistemic stance refers to how students position themselves in relation to knowledge construction when working with AI. It encompasses their beliefs about:
- Who has authority to create and validate knowledge (human, AI, or collaboration)
- What counts as legitimate knowledge
- How knowledge should be constructed and verified
- Where responsibility lies for truth claims
Epistemic stance reveals whether students see themselves as knowledge consumers (passive) or knowledge constructors (active).
Theoretical Foundation
This concept draws from:
- Epistemology - Theories of knowledge and knowing (Hofer & Pintrich, 1997)
- Epistemic cognition - How people understand the nature and justification of knowledge (Greene et al., 2016)
- New Literacies - Knowledge construction in digital environments (Leu et al., 2013)
- Critical AI literacy - Questioning AI as authoritative knowledge source (Long & Magerko, 2020)
Dimensions of Epistemic Stance in AI-Literacy
1. Authority and Trust
Who is positioned as the knower?
| Stance | Description | Evidence |
|---|---|---|
| AI-Authoritative | AI positioned as expert/oracle | "The AI said..." (uncritical acceptance) |
| Self-Authoritative | Student as primary knowledge constructor | "I used AI to help me think..." |
| Co-Constructed | Knowledge emerges from collaboration | "AI suggested X, but I revised to Y because..." |
2. Knowledge Validation
How is knowledge verified?
| Stance | Description | Evidence |
|---|---|---|
| Outsourced Validation | AI output accepted without verification | No fact-checking or source verification |
| Human Validation | Student cross-checks against sources | "I verified the AI's claim against..." |
| Collaborative Validation | Iterative checking with AI and sources | Using AI to check AI, plus human judgment |
3. Responsibility for Truth
Who is accountable for accuracy?
| Stance | Description | Evidence |
|---|---|---|
| AI-Responsible | AI blamed for errors | "The AI got it wrong" |
| Self-Responsible | Student owns final product | "I should have caught that error" |
| Shared-Responsible | Acknowledges joint accountability | "I didn't verify the AI's output carefully enough" |
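For researchers coding transcripts against these three dimensions, the scheme can be sketched as a simple data structure. This is a hypothetical illustration, not part of the framework itself; all class, field, and method names are my own.

```python
from dataclasses import dataclass
from enum import Enum


class Authority(Enum):
    AI_AUTHORITATIVE = "AI positioned as expert/oracle"
    SELF_AUTHORITATIVE = "Student as primary knowledge constructor"
    CO_CONSTRUCTED = "Knowledge emerges from collaboration"


class Validation(Enum):
    OUTSOURCED = "AI output accepted without verification"
    HUMAN = "Student cross-checks against sources"
    COLLABORATIVE = "Iterative checking with AI and sources"


class Responsibility(Enum):
    AI_RESPONSIBLE = "AI blamed for errors"
    SELF_RESPONSIBLE = "Student owns final product"
    SHARED_RESPONSIBLE = "Acknowledges joint accountability"


@dataclass
class EpistemicStanceCode:
    """One coded excerpt, tagged on all three dimensions."""
    excerpt: str
    authority: Authority
    validation: Validation
    responsibility: Responsibility

    def is_active_constructor(self) -> bool:
        # Heuristic (my own, not from the framework): treat the student
        # as an active knowledge constructor when they hold sole or
        # joint epistemic authority.
        return self.authority in (
            Authority.SELF_AUTHORITATIVE,
            Authority.CO_CONSTRUCTED,
        )


# Example: coding a high-agency excerpt
code = EpistemicStanceCode(
    excerpt="I rewrote the analysis in my own voice...",
    authority=Authority.SELF_AUTHORITATIVE,
    validation=Validation.HUMAN,
    responsibility=Responsibility.SELF_RESPONSIBLE,
)
print(code.is_active_constructor())  # → True
```

Keeping the three dimensions as separate fields (rather than one overall label) preserves the possibility that a student is, say, self-authoritative yet outsources validation.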
Epistemic Stance Continuum
| | Passive Consumption (AI as Authority) | Co-Construction | Active Construction (Human as Authority) |
|---|---|---|---|
| Agency | Low | Shared | High |
| Critical thinking | Low | Collaborative | High |
Evidence of Epistemic Stance in Framework Components
In Co-Constructing AI Boundaries Framework Component - Outputs
- Critical evaluation of AI responses
- Recognition of hallucinations and bias
- Questioning AI's "knowledge"
In Co-Constructing AI Boundaries Framework Component - Integration
- Modification signals self as knowledge authority
- Rejection demonstrates critical epistemic stance
- Verbatim copy suggests AI-authoritative stance
In Co-Constructing AI Boundaries Framework Component - Reflection
- Explicit statements about knowledge authority
- Metacognitive awareness of who is "thinking"
- Articulation of responsibility for truth claims
Relationship to Other Concepts
- Agency - Epistemic stance informs how students exercise agency
- Boundary-work - Boundaries reflect epistemic beliefs about authority
Key Questions for Analyzing Epistemic Stance
- Does the student position themselves as knowledge creator or knowledge consumer?
- Do they treat AI outputs as truth claims requiring verification or as authoritative?
- Do they claim ownership of ideas generated through AI collaboration?
- How do they talk about responsibility for accuracy and quality?
- Do they demonstrate awareness of AI limitations and biases?
Examples from Data
High Epistemic Agency
"I asked the AI to summarize the readings, but I noticed it missed the critical perspective from hooks (1994), so I prompted it to reconsider. Even then, I rewrote the analysis in my own voice because the AI's framing was too neutral."
(Student positions self as authority, validates AI, transforms output)
Low Epistemic Agency
"Here's what the AI said about the topic."
[Pastes AI output verbatim]
(AI positioned as authority, no validation, no transformation)
Pedagogical Implications
To foster a critical epistemic stance:
- Explicit discussion of knowledge authority with AI
- Practice identifying AI errors and biases
- Structured reflection on "who is thinking"
- Emphasis on human responsibility for final claims
- Modeling of verification and transformation practices
Related Notes
- Analytic Framework for AI Human Meaning-Making Practices
- How learners should engage Large Language Models framework
- Tracing the AI-Human Conversation Framework
Tags
#concept #epistemic-stance #knowledge-construction #AI-literacy #critical-thinking