DL 423

The War on Friction

Published: March 1, 2026 • 📧 Newsletter

In 2026, friction has become the villain.

From the Pentagon demanding “frictionless” autonomous systems to lawmakers pushing age verification infrastructure in the name of child safety, guardrails are being recast as inefficiencies. Boundaries are framed as bottlenecks. Deliberation is treated as delay.

But friction is not a design flaw. It is often the only thing keeping power accountable.

When we remove resistance in the name of scale, we don’t eliminate constraint. We relocate it. And usually, we relocate it downward.

This issue isn’t about AI capability. It’s about what happens when we decide that anything slowing acceleration must be stripped away.

If you've found value in these issues, subscribe here or support me here on Ko-fi.


📚 Recent Work

I posted the following this week:


🔖 Key Takeaways


🛡️ Follow-up: The "Safety" Standstill Shatters

Two weeks ago in DL 421, we discussed whether Anthropic’s "safety-focused" branding was a principle or a positioning statement. This week, we got our answer.

The standoff over a $200 million Pentagon contract ended in a total rupture. Anthropic CEO Dario Amodei refused to remove "guardrails" that prevent Claude from being used for mass domestic surveillance or fully autonomous lethal targeting. In response, the Trump administration didn't just cancel the contract. They went nuclear.

Defense Secretary Pete Hegseth has officially designated Anthropic a "supply chain risk to national security."

Why it matters: This doesn't just end the Pentagon deal. It forces any private company doing business with the military to cut ties with Anthropic. It is a state-sponsored attempt to isolate a major U.S. tech firm from the economy.


🕵️ The Rear-Guard Action: China’s Distillation War

While the U.S. government attacks Anthropic from the front, Chinese firms appear to be probing it from the back.

Anthropic recently disclosed that labs including DeepSeek allegedly used 24,000 fraudulent accounts to conduct what are known as “distillation attacks”: systematically querying Claude to train smaller competing models on its outputs.

To understand why this matters, we need to be precise about terms.

Distillation is a legitimate AI training technique. A smaller “student” model learns by observing the outputs of a larger “teacher” model. Instead of training on raw internet data (expensive and compute-intensive), the student learns to mimic how the teacher responds. It’s a shortcut.

Inside a company, this is standard optimization. Outside a company, using mass fake accounts to extract a rival model’s behavioral patterns starts to look less like learning and more like copying the exam key.
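For readers who want the mechanics, the core of (benign) distillation fits in a few lines: the student is trained to minimize the divergence between its output distribution and the teacher’s temperature-softened one. This is a minimal illustrative sketch of the general technique, not a description of Anthropic’s or DeepSeek’s actual systems; the logits and temperature below are made-up values.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    z = logits / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    The student's training objective is to push this toward zero, i.e.
    to mimic how the teacher distributes probability across answers.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

# Hypothetical logits over three candidate tokens:
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([3.0, 1.5, 0.2])

loss = distillation_loss(student, teacher)  # > 0: student hasn't matched the teacher yet
```

The point of the “attack” framing is volume: run enough queries against a frontier model and its responses become a free, high-quality training signal for exactly this objective.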

The Recursive Irony: Foundation AI models (including Claude) were trained on scraped public data created by millions of humans who were never paid.

Everyone in this stack is downstream of extraction. The difference isn’t whether extraction exists. It’s who is allowed to do it, and under what rules.


🆔 KOSMA: The "Safety" Trojan Horse

The pressure on "Big Tech" is moving from the Pentagon to the living room. Senators Ted Cruz and Brian Schatz are reviving the Kids Off Social Media Act (KOSMA).


📉 A Cognitive Decline?!?!

Why the sudden rush to “save” the kids? Because we’re finally seeing data from what amounts to a 25-year, $30 billion experiment in classroom technology, and it’s provoking a predictable cultural panic. Schools nationwide now spend billions on laptops, tablets, and digital infrastructure, yet academic performance has not improved in the ways advocates once promised.

In testimony before the U.S. Senate earlier this year, neuroscientist Jared Cooney Horvath argued that Gen Z may be the first generation in modern history to score lower on standardized cognitive performance measures than their parents, despite unprecedented access to digital tools and institutional schooling.

A couple of things:


🔎 Consider

A gem cannot be polished without friction, nor a man perfected without trials.
— Seneca

But we forget that institutions are shaped the same way.

Verification processes. Editorial review. Age limits. Guardrails in AI systems. These are not bureaucratic annoyances. They are polishing mechanisms.

When we remove friction from systems of power, we do not create purity. We create smooth surfaces where nothing can catch. Not oversight, not accountability, not dissent.

The question isn’t whether friction exists. It’s who gets to feel it.


⚡ What You Can Do This Week


Previous: DL 422 • Next: DL 424 • Archive: 📧 Newsletter

🌱 Connected Concepts