DL 420
Agency Is Shifting
Published: February 08, 2026 • 📧 Newsletter
Machines are gaining more of it. Institutions are consolidating it. And learners, teachers, and citizens are being asked to make sense of it in real time.
So this issue is about that tension.
Not just what AI can do, but what counts as a mind. What learning is for, and who actually controls the infrastructure our lives run on.
Here’s what stood out, and why it matters.
If you've found value in these issues, subscribe here or support me here on Ko-fi.
🔖 Key Takeaways
- AI is becoming agentic. New models plan, execute, and persist across tasks. They act more like collaborators than tools.
- Education needs better questions. The issue isn’t “can AI grade?” but “what human capacities matter most now?”
- Fluency isn’t mind. Language models simulate understanding; they don’t experience or mean.
- Infrastructure is political. Platforms and data pipelines determine who has power long before policy catches up.
- Privacy is civic literacy. Knowing how your data can be demanded, bought, or surveilled is now part of basic digital competence.
🤖 AI Frontier: Dueling Model Releases from OpenAI & Anthropic
This week marked a major moment in the evolution of AI tools with two flagship models dropping almost simultaneously. This is another sign of a growing AI coding & agentic “race.”
- GPT-5.3-Codex is OpenAI’s latest agentic model designed to go beyond traditional code generation. It generates, debugs, and executes complex tasks, with improved speed and performance on real-world workflows. In fact, early iterations of the model were used to help build and debug itself, a notable milestone in AI development.
- Claude Opus 4.6 from Anthropic focuses on deep reasoning, long-context understanding, and professional knowledge-work support, enhancing planning, debugging, and multi-step execution across complex tasks.
The nearly simultaneous release, with both models landing within an hour of each other, highlights how quickly frontier AI is evolving and how competitive the space has become.
Why this matters: These aren’t incremental updates. They show a shift from AI as a conversational assistant to AI as an active collaborator in complex work, including coding, problem solving, research, and professional workflows. This is a signal that the nature of work, learning, and information literacy is changing rapidly.
🤔 What Are the Right Questions About AI in Education?
As generative AI matures, the conversation in education has largely centered on risks and operational effects: grading, cheating, bias, privacy, and digital divides. These are important issues, and they’re being actively studied and worked on by scholars, developers, and policymakers.
Rather than asking what AI can do (can it personalize learning, detect cheating, or replace teachers), perhaps we should be probing why we educate in the first place and what our goals should be in a world where AI is increasingly capable.
Stephen Downes pulls this together in an essay asking whether our current educational objectives (economic, social, and personal) remain relevant when AI can do so much of the work we’ve traditionally relied on humans to perform. This reframes the debate from tool management to purpose and identity.
Why this matters: I appreciate this shift from controlling risk to examining intent and purpose. It forces educators, policymakers, and communities to think bigger than policies and safety guardrails. It invites us to consider the kind of education that will help humans thrive alongside powerful AI, rather than simply manage it.
🧠 What Counts as a Mind?
As AI systems grow more fluent and human-sounding, it’s easy to slide into treating them as if they think. A recent essay in Noema challenges that instinct by asking a deeper question: what actually counts as a mind, and are we even looking in the right places?
The piece argues that today’s large language models don’t understand meaning or experience the world. They predict patterns in data. Their “intelligence” is syntactic (statistical next-word guessing), not semantic or conscious. The emotional pull we feel when chatting with AI says more about us than about the system itself.
Meanwhile, researchers studying “minimal intelligence” in plants, microbes, and bio-hybrid organisms are showing that many forms of life without brains still sense, learn, adapt, and make decisions. Intelligence, in other words, may be far more embodied and biological than computational. If consciousness emerges anywhere, it may come not from bigger chatbots but from living or hybrid systems that interact physically with the world.
Why this matters: For educators, this reframing matters. As AI becomes more capable, the challenge isn’t deciding whether machines are minds; it’s helping students develop the judgment to interpret AI critically rather than anthropomorphize it. Fluency isn’t understanding. Simulation isn’t thought. And literacy now includes recognizing the difference.
🧊 The New Normal of Data Overreach
One of the most sobering threads this week isn’t about new AI capabilities. It’s about how easily the government can reach into the data exhaust of everyday life.
A Slate report highlights DHS/ICE’s use of administrative subpoenas (demands for records that can be issued without a judge signing off first) to seek information from companies like Google and Meta. The chilling part isn’t only the legal tool. It’s how it can be used to identify protestors or critics, often without the targeted person learning about it unless a company notifies them.
This isn’t hypothetical. The ACLU describes an ongoing case (Doe v. DHS) where DHS issued an administrative subpoena to Google seeking subscriber information after a person engaged in constitutionally protected speech; the ACLU is arguing this kind of subpoena use is unlawful and retaliatory.
And even when warrants are required, agencies may try to route around them. A separate ACLU report details how DHS components have sought to purchase sensitive location data from brokers, an end-run that effectively treats constitutional protections as optional if the data is “for sale.”
If you want one clean explainer to pair with this story, the EFF breaks down the legal pathways law enforcement uses to obtain online data (subpoenas vs. court orders vs. warrants). The post also breaks down practical principles that actually reduce exposure: data minimization, shorter retention, transparency, and true end-to-end encryption where possible.
Why this matters: Civic digital literacy now includes understanding how institutions use data, how platforms respond to demands for it, and how surveillance chills participation. In 2026, privacy isn’t a settings menu; it’s a condition for speech, learning, and organizing.
🤔 Consider
The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.
—E. O. Wilson
We keep upgrading the tools.
We’re slower to upgrade the questions, the ethics, and the guardrails.
That gap is where most of the risk lives.
⚡ What You Can Do This Week
- **Ask a better question.** The next time you hear “How can we use AI for ___?”, pause and reframe it. What human capacity are we trying to protect or grow here? Start with purpose, not the tool.
- **Practice critical AI reading.** Take one AI-generated output you receive this week (email draft, summary, lesson plan). Don’t accept it at face value. Mark what’s assumed, flattened, or missing. Treat it like a text to analyze, not an answer to trust.
- **Reduce one data trail.** Turn on disappearing messages, delete an old account, shorten retention on a tool, or switch one conversation to end-to-end encrypted messaging. Small reductions compound.
- **Talk about it out loud.** Have one conversation with a colleague or student about how AI or surveillance is changing your work. Collective literacy grows through shared sense-making.
🔗 Navigation
Previous: DL 419 • Next: DL 421 • Archive: 📧 Newsletter
🌱 Connected Concepts
- Digital Sovereignty — controlling the tools and infrastructure that shape your work and identity
- Civic Digital Literacy — understanding surveillance, rights, and platform power as part of everyday literacy
- Agentic Systems — software that remembers, plans, and acts across time
- Human–AI Collaboration — treating AI as scaffolding and augmentation, not substitution
- Privacy as Infrastructure — data minimization, encryption, and retention as design choices, not settings
- Critical AI Literacy — reading AI outputs as constructed artifacts, not authoritative voices