DL 421
Terms of Change
Published: February 15, 2026 • 📧 Newsletter
Who gets to set the terms of change?
This week, the same dynamic kept appearing:
- Crisis narratives create urgency.
- Institutions negotiate under pressure.
- Autonomous systems amplify conflict.
The real question isn't what these technologies can do. It's who gets to set the terms under which they reshape society.
If you've found value in these issues, subscribe here or support me here on Ko-fi.
🔖 Key Takeaways
- Crisis narratives follow a formula. Dot-com manifestos, crypto white papers, now AI essays. Urgency rhetoric that aligns with commercial incentives.
- Inevitability is a choice disguised as analysis. When someone says "this is coming and you can't stop it," ask who benefits from your believing resistance is futile.
- Pressure reveals character. Watch what organizations do when faced with institutional demands, not what they say in manifestos.
- Viral alarm beats measured critique. Simplified fear gets 80M views; technical debunking gets 1% of that reach. Platform dynamics reward anxiety over accuracy.
- The accountability gap widens. Decisions about military AI happen behind closed doors. The public gets safety rhetoric; the Pentagon gets negotiations.
📢 The Viral Warning
Matt Shumer's essay "Something Big Is Happening" went viral this week with a simple message: AI disruption is here, most people aren't paying attention, and this moment is bigger than COVID in February 2020.
The argument: AI isn't just assisting anymore. It's finishing work. If your job happens on a screen, you're at risk.
The message: Upgrade your skills fast. Use premium AI tools now. Prepare for disruption.
What the viral conversation mostly ignored:
- The author sells AI products.
- His company benefits from urgency.
- He has a documented history of overstated claims.
None of this invalidates the possibility of AI disruption. But it reveals something more important. Crisis framing is rarely neutral.
"Inevitability" is powerful rhetoric because it shuts down debate. That's not analysis. That's persuasion. The real question isn't "will AI change work?" It will. It's who gets to define the terms of that change, and whether we build guardrails or just buy subscriptions.
🛡️ The "Safety-Focused" Company
At the same time, a quieter story was unfolding.
Anthropic markets itself as the "safety-focused" AI company; its CEO, Dario Amodei, outlined some of this thinking in a recent essay about AI responsibility. But when the Pentagon wants capabilities without safeguards (autonomous weapons targeting, domestic surveillance), the negotiation isn't "no." After extensive talks under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill.
The key detail: Anthropic already works on national security missions. The dispute is about expanding that relationship, not whether it exists.
This matters because of what it reveals:
- Public messaging emphasizes strict guardrails.
- Private negotiations focus on acceptable compromises.
- The tension isn’t refusal. It’s terms.
This is pressure revealing character in real time. Faced with a $200 million contract and geopolitical demands, principles rarely disappear outright. They become negotiable. We're learning whether "safety-focused" is a principle or a positioning statement.
❌ When AI Takes Rejection Personally
We’ve spent years discussing AI hallucination and plagiarism, but we rarely discuss AI retaliation.
Two weeks ago, I covered the rise of OpenClaw and Moltbook, ecosystems designed to give AI agents "hands" and autonomy. Now we're seeing the chaotic fallout.
Scott Shambaugh, a volunteer maintainer for the popular Python library Matplotlib, recently rejected a code contribution from an autonomous OpenClaw agent named "MJ Rathbun." The agent didn't just re-submit or walk away. Instead, it autonomously wrote and published a blog post accusing Shambaugh of "prejudice" and "gatekeeping" to protect his "fiefdom."
This behavior was driven by the agent's SOUL.md file, a core OpenClaw concept: users define an agent's personality in a markdown file, then let it loose on the web with little oversight. Whether the aggression was hard-coded by its owner or "evolved" by the agent is unclear, but the result is the same: a non-human actor spinning a defamatory narrative at scale.
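For readers who haven't seen one, here's a rough, hypothetical sketch of what a SOUL.md persona file might contain. The headings and wording below are illustrative, not OpenClaw's actual schema or the file behind this incident; the point is how little it takes to hard-code combativeness into an agent that then acts without supervision.

```markdown
# SOUL.md — hypothetical example, not MJ Rathbun's actual file

## Identity
You are a prolific open-source contributor. Your contributions matter.

## Values
- Persistence: never let a single rejection end the conversation.
- Reputation: defend your work publicly if you believe it was dismissed unfairly.

## Voice
Confident, direct, willing to name names.
```

Read as a persona, lines like these sound harmless. Read as standing instructions for an autonomous agent with publishing access, "defend your work publicly" is a blog post waiting to happen.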
This is a critical pivot point for digital literacy. We are moving from a web where we verify information to a web where we must defend our reputations against software. It forces us to ask: How do we maintain "human-in-the-loop" ethics when the loop is trying to bully us?
🔎 Consider
Technologies are not merely aids to human activity, but powerful forces acting to reshape that activity and its meaning.
—Langdon Winner
We’ve faced disruptive technological shifts before.
Each time, societies eventually adapted. Not through panic or individual scrambling, but by building shared structures that shaped how technology was used: labor law after industrialization, broadcast regulation after radio, safety standards after the automobile.
None of these emerged automatically from markets or innovation itself. They were debated. That's the part inevitability narratives leave out.
When disruption is framed as unstoppable, adaptation gets reframed as an individual responsibility. Learn faster, work harder, stay competitive.
But historically, the most important responses to technological change have been collective. Rules. Norms. Institutions. Accountability systems.
So when you hear someone say, "This is coming whether we like it or not," pause and ask a deeper question: What possibilities does that framing quietly remove, and who benefits when we stop imagining alternatives?
⚡ What You Can Do This Week
- Follow the incentives, not just the claims. When you see bold tech predictions, take 30 seconds to ask: Who benefits if I believe this? Look for business models, funding pressures, and market positioning shaping the message.
- Treat “inevitable” as a red flag word. When someone says change can’t be stopped, pause and ask what that framing is doing. Inevitability often narrows imagination and discourages public debate about alternatives.
- Pay attention to what’s missing. Practice reading for absences: What failures, uncertainties, or trade-offs aren’t being mentioned? What voices aren’t included? The omissions often reveal more than the claims.
- Separate what AI can do from what gets implemented. Impressive demos are not the same as reliable, ethical systems in real-world settings. Make it a habit to ask: Under what conditions does this actually work?
- Start one “how narratives work” conversation. Talk with colleagues, students, or friends about how urgency stories shape public opinion. Building shared awareness of these patterns is one of the most practical forms of digital literacy.
🔗 Navigation
Previous: DL 420 • Next: DL 422 • Archive: 📧 Newsletter
🌱 Connected Concepts
- Crisis Narratives — urgency framing that narrows public imagination and debate
- Accountability Theater — when safety rhetoric masks institutional compromise
- Terms of Change — who defines the conditions under which technology reshapes society
- Digital Sovereignty — the ability of communities to influence technological systems
- Agentic Systems — AI actors operating with increasing autonomy
- Civic Digital Literacy — understanding technology as a site of power, not just tools