DL 429

The Handoff

Published: April 22, 2026 • 📧 Newsletter

We spend a lot of time asking what AI can do. We spend almost no time asking what we stop being able to do once we let it.

This week, the pattern is harder to ignore.


A note before we begin: This is the last issue of Digitally Literate from Buttondown. Next Wednesday, we move to Substack with a new format, a paid tier, and a new section called The Understory. More next week.

First time here? Welcome. Subscribe to get this in your inbox every Wednesday. The digital garden is where the deeper threads live.


βš”οΈ The state is no longer debating AI. It’s deploying it.

Over the weekend, Palantir released a 22-point summary of CEO Alex Karp's book, The Technological Republic: Hard Power, Soft Belief, and the Future of the West. Palantir builds operational software for defense, intelligence, immigration, and police agencies.

The document is jarringly explicit. It argues for AI weapons development, universal national service, and a wholesale rejection of what it calls "regressive culture." Online backlash has focused on Karp's rhetoric. That's the wrong focus. This isn't a PR stunt. It is one of the most influential defense-AI companies in the world stating plainly that it views AI militarization as a moral imperative.

For years, warnings about surveillance infrastructure and AI weaponization were dismissed as doomerism. Now a major player is claiming that ground as intentional strategy. It is worth reading. Not to agree. To understand how the architecture of the future is being written in plain sight.

That makes the secondary story more important, not less.

Reports indicate the White House is preparing to grant federal agencies broad access to Anthropic's "Mythos" system, even as the Pentagon has flagged Anthropic as a "supply chain risk," and the NSA has reportedly been using Mythos anyway.

That contradiction is the real signal. AI tools are becoming operationally indispensable faster than the government can form coherent policy. The infrastructure is already live. The ethics debates are running to catch up.

This is the AI Indispensability Trap. Once a system like Mythos becomes the only viable defense against autonomous cyber-threats, the question of whether to use it stops being a question. Risk assessment becomes a luxury the state feels it can no longer afford. Karp's book and the White House's quiet deployment aren't opposite impulses. They're the same one.


🧠 Handing Over the Keys to Cognition

We often ask what AI can do. We almost never ask what we stop being able to do once we let it.

A recent study suggests that tools like ChatGPT are functioning as a "cognitive crutch." Researchers found that students using AI as a study aid performed significantly worse on long-term retention tests than those using traditional methods. The AI users felt productive; the feeling was an illusion of competence. They had never exercised the retrieval networks that actual expertise requires.

Offloading effort has always changed us. Calculators altered arithmetic. GPS altered wayfinding. But this is different in scope. We aren't offloading math. We are offloading synthesis: the slow, effortful construction of meaning from fragments.

And the source of that synthesis matters. Most AI models were trained on the public web, but the frontier has shifted. Companies are now training models on "internal knowledge": the digital residue of Slack threads and archived emails, years of evidence of how specific people and institutions actually think and work. The result isn't a tool that assists cognition. It's a system that absorbs and reproduces it.

The trade is subtle enough that most people won't notice it until it's done. You get speed, fluency, and output. You lose repetition, friction, and the slow formation of expertise. We are watching a handoff of human cognition: gradual, voluntary, and largely invisible until the muscle has already atrophied.


🏫 The Ed-Tech Backlash is Here

AI education policy in the US is finally moving, but it's trailing the reality on the ground by a wide margin. On April 13, the Department of Education finalized a rule prioritizing "AI literacy" in federal grant applications, embedding AI into everything from teacher prep to special education. Meanwhile, 134 bills across 31 states are moving through legislatures, making this the most consequential year for ed-tech policy in a decade.

While the government sets those priorities, it is running headlong into something it didn't anticipate. A massive backlash from the students it is targeting.

According to new data from Gallup and the Walton Family Foundation:

That last gap is the one that matters most for those of us in education. Students are already using these tools. They're not waiting for curriculum. The question is what frameworks they're building, or not building, while they do.

What's worth naming here: when a generation views their primary cognitive tools as both operationally necessary and fundamentally threatening, we have moved past the moral panic over smartphones. We are looking at a structural distrust wired into the infrastructure of learning itself.


💭 Consider

Nothing vast enters the life of mortals without a curse.

— Sophocles

The handoff doesn't feel like a decision when it happens. It feels like convenience.


📚 Recent Work


🌱 Final Thought

This is the last issue of Digitally Literate from Buttondown. Twenty-some issues from this address, a short chapter in a much longer run.

The newsletter itself isn't going anywhere. It will still live at digitallyliterate.net, and next Wednesday it moves to Substack, with a new format, a paid tier, and The Understory for the longer threads. Same Wednesday rhythm. Same commitment to naming what's shifting before it becomes invisible.

Thanks for reading here. I'll see you on the other side.

If you value these reflections, subscribe here or support me on Ko-fi.

Thanks for reading. See you next week. Email: hello@wiobyrne.com




πŸ•ΈοΈ Connected Concepts