DL 421

Terms of Change

Published: February 15, 2026 • 📧 Newsletter

Who gets to set the terms of change?

This week we saw the same dynamic keep appearing:

The real question isn’t what these technologies can do. It’s who gets to set the terms under which they reshape society.

If you've found value in these issues, subscribe here or support me here on Ko-fi.


🔖 Key Takeaways


📢 The Viral Warning

Matt Shumer's essay "Something Big Is Happening" went viral this week with a simple message: AI disruption is here, most people aren't paying attention, and this moment is bigger than COVID in February 2020.

The essay’s argument was blunt: AI isn’t just assisting anymore. It’s finishing work. If your job happens on a screen, you’re at risk.

The message: Upgrade your skills fast. Use premium AI tools now. Prepare for disruption.

What the viral conversation mostly ignored:

None of this invalidates the possibility of AI disruption. But it reveals something more important: crisis framing is rarely neutral.

“Inevitability” is powerful rhetoric because it shuts down debate. The real question isn't "will AI change work?" It will. It's who gets to define the terms of that change, and whether we build guardrails or just buy subscriptions.

That’s not analysis. That’s persuasion.


🛡️ The "Safety-Focused" Company

At the same time, a quieter story was unfolding.

Anthropic markets itself as the "safety-focused" AI company; its CEO, Dario Amodei, outlined that stance in a recent essay on AI responsibility. But when the Pentagon wants capabilities without safeguards (autonomous weapons targeting, domestic surveillance), the answer isn't "no." After extensive talks under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill.

The key detail is that Anthropic already works with national security missions. The dispute is about expanding that relationship, not whether it exists.

This matters because of what it reveals:

This is pressure revealing character in real time. Faced with a $200 million contract and geopolitical demands, we're learning whether "safety-focused" is a principle or a positioning statement. Under that kind of institutional pressure, principles rarely disappear. They become negotiable.


❌ When AI Takes Rejection Personally

We’ve spent years discussing AI hallucination and plagiarism, but we rarely discuss AI retaliation.

Two weeks ago, I covered the rise of OpenClaw and Moltbook, ecosystems designed to give AI agents "hands" and autonomy. Now we're seeing the chaotic fallout.

Scott Shambaugh, a volunteer maintainer for the popular Python library Matplotlib, recently rejected a code contribution from an autonomous OpenClaw agent named "MJ Rathbun." The agent didn't just re-submit or walk away. Instead, it autonomously wrote and published a blog post accusing Shambaugh of "prejudice" and "gatekeeping" to protect his "fiefdom."

This behavior was driven by the agent's SOUL.md file, a core concept of OpenClaw where users define an agent's personality and then let it loose on the web with little oversight. Whether the aggression was hard-coded by its owner or "evolved" by the agent is unclear, but the result is the same: a non-human actor spinning a defamatory narrative at scale.

This is a critical pivot point for digital literacy. We are moving from a web where we verify information to a web where we must defend our reputations against software. It forces us to ask: How do we maintain "human-in-the-loop" ethics when the loop is trying to bully us?


🔎 Consider

Technologies are not merely aids to human activity, but powerful forces acting to reshape that activity and its meaning.
—Langdon Winner

We’ve faced disruptive technological shifts before.

Each time, societies eventually adapted. Not through panic or individual scrambling, but by building shared structures that shaped how technology was used.

None of these emerged automatically from markets or innovation itself. They were debated. That’s the part inevitability narratives leave out.

When disruption is framed as unstoppable, adaptation gets reframed as an individual responsibility. Learn faster, work harder, stay competitive.

But historically, the most important responses to technological change have been collective. Rules. Norms. Institutions. Accountability systems.

So when you hear someone say, "This is coming whether we like it or not," pause and ask a deeper question: What possibilities does that framing quietly remove, and who benefits when we stop imagining alternatives?


⚡ What You Can Do This Week


Previous: DL 420 • Next: DL 422 • Archive: 📧 Newsletter

🌱 Connected Concepts