AI Literacy
Knowing what the machine is doing — and what it's doing to you
AI literacy isn't about learning to prompt better. It's about understanding who built the system, what it optimizes for, and what it costs you to use it.
What This Is
AI literacy is the ability to critically evaluate, effectively interact with, and make informed decisions about artificial intelligence. But "literacy" here means more than technical skill. It means understanding AI as a system of power — who funds it, what it extracts, whose interests it serves, and where it fails.
This Grove gathers everything in the vault about AI from a literacy perspective: the foundational concepts, the educational frameworks, the safety concerns, and the human questions AI raises about identity, agency, and trust.
The Core Tension
AI tools are simultaneously:
- The most powerful learning amplifiers available to educators
- The most sophisticated attention- and data-extraction systems ever built
Teaching AI literacy means holding both truths. Not technophobia. Not techno-optimism. A clear-eyed understanding of what you're working with.
Foundational Concepts
These seeds and evergreens define the landscape.
- AI Illiteracy 🌲 — What it looks like when people can't critically evaluate AI: misplaced trust, invisible bias, unchecked automation
- Big Five Ideas in AI 🌱 — Perception, representation, learning, natural interaction, societal impact — the educational framework
- Frame Problem in AI 🌱 — Why AI can't tell what's relevant and why that matters for everything from self-driving cars to essay graders
- Bias in AI 🌲 — Systematic discrimination baked into training data and algorithms, from healthcare to criminal justice
- AI and Emergent Behavior 🌿 — How simple rules at scale produce complex behavior — flocking birds, ant colonies, and LLMs
- AI Classification Matrix 🌿 — A 2×2 taxonomy: narrow vs. general, basic vs. advanced — helps people place what they're actually using
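The classification matrix above can be sketched as a small lookup. This is a hypothetical illustration, not code from the linked note; the quadrant names follow the bullet's "narrow vs. general, basic vs. advanced" axes, and the example tools in each quadrant are assumptions for the sake of the sketch:

```python
# Hypothetical sketch of the 2x2 AI Classification Matrix.
# Axes: scope ("narrow" or "general") and level ("basic" or "advanced").
# Example tools in each quadrant are illustrative assumptions.

def classify(scope: str, level: str) -> str:
    """Place an AI tool in one of the four quadrants."""
    quadrants = {
        ("narrow", "basic"): "narrow-basic (e.g., a spam filter)",
        ("narrow", "advanced"): "narrow-advanced (e.g., a chess engine)",
        ("general", "basic"): "general-basic (broad but shallow assistants)",
        ("general", "advanced"): "general-advanced (hypothetical general AI)",
    }
    return quadrants[(scope, level)]

print(classify("narrow", "advanced"))
```

The point of the matrix is the placement exercise itself: asking which quadrant a tool occupies forces the user to name both its scope and its sophistication before trusting it.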
Human Agency & Boundary Work
The central question: who's in charge — you or the model?
- AI-Boundary-Co-Construction 🌲 — Research framework on how humans negotiate control with AI. Distinguishes "orchestrators" (high agency, active boundaries) from "outsourcers" (passive delegation)
- AI Boundary Work and Agency 🌱 — Explores friction as a feature, not a bug — active boundary-setting as a practice of agency
- Cognitive Amplification with AI 🌱 — Human-in-the-loop design focused on amplifying wisdom, not just speed. The case for productive friction.
- AI and Human Identity 2035 🌱 — What happens to empathy, creativity, and agency when AI partnerships deepen? Speculative but grounded
AI in the Classroom
Practical frameworks for teaching with and about AI.
- AI Workshop Framework for Educators 🌿 — 3-strand design: understanding AI/ML, designing AI-enhanced learning, AI as cognitive amplifier
- AI Summer Workshop for K-8 Teachers 🌿 — 5-day hands-on workshop with PRADA computational thinking framework, emotional anchors, and deliverables
- AI Detection and Authentic Assessment 🌲 — Detection tools don't work, and they create equity problems. Reframes assessment design as the real solution.
- Frameworks for Thinking About AI in Education 🌲 — Applies Bentoism (Now Me/Us, Future Me/Us) to AI decisions. Centers structural interests over individual adoption.
- AI Pedagogical Mastery - Educational Advantage 🌿 — Educators have a hidden superpower: scaffolding, assessment, and metacognition all multiply AI interaction quality
- AI Communication Mastery - Linguistic Advantage 🌿 — Linguistic sophistication creates compound advantages in AI interaction through semantic precision
- AI Ethics in Education Policy 🌱 — Framework for transparent, trust-based AI use in courses with disclosure and reflection requirements
Safety, Sycophancy & Systemic Risk
Where AI literacy becomes a safety issue.
- AI Safety Spending Gap 🌿 — Frontier labs spend billions on capabilities research. All U.S. safety orgs combined: $133M. The structural imbalance driving everything below.
- AI Policy and Regulation Beyond Blanket Bans to Nuanced Governance 🌲 — Risk-based regulation, sectoral approaches, and accountability mechanisms. The policy alternative to panic.
- AI Geopolitics and the Open Model Question 🌲 — Open vs. closed models as an infrastructure control issue. Who owns the compute?
- Best Available Human Standard - Pragmatic Framework 🌲 — Evaluate AI against the human help actually available, not idealized experts. A pragmatic lens.
- Cheating Tension - Moral Ambiguity in AI Use 🌱 — Student guilt around AI use. The gap between policy and embodied practice.
Identity, Trust & What AI Means for Being Human
The philosophical undercurrent.
- AI and the Question of Self 🌲 — How AI disrupts the self/subject distinction. Identity construction in the age of algorithmic personalization.
- Generative AI and Identity 🌱 — How generative tools challenge uniqueness and authenticity. Fluidity of subject positions.
- AI Beyond Simple Mimicry 🌿 — Modern AI has internal representations and reasoning, not just pattern matching. What does that mean for how we relate to it?
- Trust and Sincerity Detection in AI 🌱 — Can AI help us evaluate the trustworthiness of information? Design concepts for a browser-extension approach.
Related Groves
- Digital Self-determination — The parent framework: agency, context, and power in digital systems
- Digital Resilience — Staying strong and sustainable when the tools keep changing
- Privacy by Design — Tools that protect by default — the infrastructure AI literacy needs
Potential Forest
This Grove feeds into: Digital Literacy Framework
AI literacy is not a separate domain from digital literacy. It's the newest, most urgent frontier of the same fight: understanding who built the system you're using, what it costs you, and what alternatives exist.
AI literacy is not about keeping up with the technology. It's about refusing to be kept by it.