# AI Literacy
AI literacy isn't about keeping up with the technology. It's about refusing to be kept by it.
Most conversations about AI focus on what it can do. AI literacy asks harder questions. Who built this? What does it optimize for? What does it cost you to use it, in data, in attention, in agency? These aren't technical questions. They're civic ones. And they matter for anyone who uses digital tools, not just people who work in tech.
## Key Terms
AI Literacy is the ability to critically evaluate, effectively interact with, and make informed decisions about artificial intelligence. This goes beyond knowing how to use AI tools. It means understanding the systems behind them, who they serve, and what their makers get in return.
AI Illiteracy is the gap between using AI and understanding it. It shows up as misplaced trust (treating AI outputs as more reliable than they are), invisible bias (not seeing whose assumptions are embedded in the system), and unchecked automation (delegating decisions that deserve human judgment). It's not about being bad with technology. It's about not having the frameworks to evaluate what the technology is actually doing.
Bias in AI refers to the way AI systems reflect the data they were trained on, and that data was created by humans, in specific historical moments, with specific blind spots. Bias isn't a bug that gets patched. It's a structural feature of systems trained on human-generated content. Understanding this changes how you interpret AI outputs, especially in high-stakes contexts like healthcare, hiring, and education.
Human in the Loop is the principle that humans should remain involved in consequential AI-assisted decisions rather than fully delegating to automated systems. It isn't just a safety mechanism. It's an ethical commitment to maintaining accountability. When AI makes a decision that affects someone's life, a human should be able to explain and own that decision.
Agency in AI contexts means maintaining deliberate control over how you engage with AI tools, choosing when to use them, how to prompt them, and how critically to evaluate what they produce. High agency looks like using AI as a tool you're directing. Low agency looks like having the AI direct you.
Surveillance Capitalism is the business model underlying most free AI tools. Your data and behavior are the product. AI systems are often designed to maximize engagement, extract information, and build predictive profiles, not to serve your interests. Understanding this model is foundational to understanding why AI behaves the way it does.
What AI Cannot Know points to the real limits of AI systems that are often invisible in the outputs they produce. AI can't verify information in real time, doesn't know what it doesn't know, and can generate confident-sounding text even when it's wrong. Knowing the edges of the tool matters as much as knowing what it can do.
Best Available Human Framework is a pragmatic way to evaluate AI. Compare it not to an idealized expert, but to the actual human resources available in a given context. A busy teacher with 30 students gets different value from AI than a well-resourced professional with time to verify everything. This framework cuts through both hype and blanket dismissal.
## Go Deeper
### Understanding AI
- Artificial Intelligence — what AI actually is, stripped of hype
- Machine Learning — how AI systems learn from data
- General Attitudes Toward Generative AI — how people are actually responding to these tools
- Orchestrated Collaboration vs Algorithmic Passivity — two very different ways of working with AI
### AI in Education
- Frameworks for Thinking About AI in Education — structural lenses for making school-level decisions about AI
- AI Detection and Authentic Assessment — why detection tools don't work, and what to do instead
- Human in the Loop — keeping humans accountable in AI-assisted decisions
- HITL Pedagogy Toolkit — practical classroom tools for human-in-the-loop teaching
- PedagoGPT Complex — the risks of outsourcing pedagogical judgment to AI
### AI and Power
- AI Geopolitics and the Open Model Question — who controls the infrastructure, and why it matters
- AI Policy Beyond Blanket Bans — what thoughtful AI governance actually looks like
- Surveillance Capitalism — the business model behind most AI tools
- AI and the Question of Self — what AI does to how we understand identity and authorship
### AI Literacy in Practice
- AI Illiteracy — what it looks like, and why smart people have it
- AI Boundary Co-Construction — research on how people negotiate control with AI systems
- Agency — maintaining authorship in AI-assisted work
- Nexus Analysis for AI Literacy Research — research framework for studying AI literacy in real contexts
## Start Here
New to this topic? Start with AI Illiteracy. It reframes the question from "how do I use AI?" to "what does it mean to actually understand what I'm using?"
## Connected Groves
- Digital Self-determination — the parent framework: who controls your digital life
- Privacy by Design — tools that protect by default, including from AI extraction
- Internet Culture — the platform dynamics that AI systems operate within and amplify