DL 425

Identity as Training Data

Published: March 15, 2026 • 📧 Newsletter

For decades, companies scraped the web for data. Now they’re scraping something far harder to replace: identity, expertise, and the authority we’ve spent careers building.

The systems doing the scraping rarely ask for permission. This week’s stories trace what we lose when the default is capture — and who gets to decide when it stops.

If you’ve found value in these issues, subscribe here or support me here on Ko-fi.


📚 Recent Work

This newsletter now lives inside my digital garden: a public collection of interconnected notes on digital literacy, AI, education, and technology. Every issue links outward to concepts I’ve been developing over time, so you can follow a thread as far as you want to go.

A few places worth exploring that connect directly to this week’s stories:


🔖 Key Takeaways


Power and Control in the AI Economy

Our first set of stories this week focuses on who controls AI systems and whose interests they serve.

“Why the Pentagon Wants to Destroy Anthropic”

This episode of The Ezra Klein Show captures some of the context behind a story we've been following. Dean Ball, a former senior AI policy adviser in the Trump White House, unpacks the escalating tension between the Pentagon and Anthropic, a conflict that has reached the point of the military threatening a "supply chain risk" designation against one of the world’s leading AI labs.

The core of the dispute lies in the "guardrails" Anthropic has built into its models. While the Pentagon has been actively using Claude for intelligence and cyber operations, it is now balking at the ethical constraints baked into the system. The friction points are stark:

As Ball highlights, this clash isn't just bureaucratic. It's the first major battle in determining who holds the "kill switch" for the most powerful technology on Earth.

Meta acquiring Moltbook

We started talking about Moltbook several weeks ago: a viral, "human-free" social network where AI agents are the only users allowed to post, upvote, and interact. This week, Moltbook and the developers behind it were acqui-hired by Meta, the company behind Facebook and Instagram.

Just as Meta dominated the human social graph with Facebook, it now appears to be racing to own the "bot social graph." Moltbook serves as a directory and identity registry where AI agents discover and collaborate with one another.

More importantly, by owning the infrastructure where bots communicate, Meta ensures that the future of autonomous digital labor still runs through its servers.


Ownership of Identity, Voice, and Intellectual Labor

Our second thread is about the appropriation or displacement of human expertise and identity. AI systems are rapidly absorbing the authority traditionally held by experts, raising questions about intellectual ownership and professional identity.

The "Sloppelgänger" Lawsuit: When AI Borrows Your Reputation

One of the most revealing AI controversies this month involves a writing tool many people use every day: Grammarly.

The company, now operating under its parent, Superhuman, launched a paid feature that let users upload their writing and receive real-time feedback attributed to luminaries like Stephen King, Carl Sagan, and investigative journalist Julia Angwin. The tool would display messages like "Applying ideas from Julia Angwin" alongside a short bio, all while Angwin had no idea she'd been recruited.

When Angwin discovered the feature, she was, in her words, "shocked and horrified." The AI suggestions attributed to her were often bad: advice she'd never give, edits that made sentences more complex rather than clearer. On March 11, 2026, Angwin filed a class-action lawsuit in federal court in Manhattan, alleging violations of privacy and publicity rights on behalf of hundreds of writers, living and deceased.

Writer Ingrid Burrington gave this a name: the "sloppelgänger." AI-generated slop wearing someone else's professional reputation like a costume.

Rather than immediately pulling the feature, Superhuman initially told affected writers they could email expertoptout@superhuman.com to remove themselves. Superhuman has since disabled the feature and issued an apology, though CEO Shishir Mehrotra maintained the legal claims are "without merit."

The courts are about to decide whether your professional reputation is yours, or just more training data.

The Death of the Scientist?

As AI moves from "lab assistant" to "lead researcher," the scientific community faces an existential question: What happens to the scientist when the machine does the discovering? This piece by Sara Imari Walker explores the decoupling of human intuition from the scientific method.

Why This Matters: Science is becoming a high-speed optimization problem. While this accelerates progress, it threatens to turn the "scientist" into a mere technician overseeing an autonomous discovery engine.


🔎 Consider

Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data.
— Shoshana Zuboff, The Age of Surveillance Capitalism

The stories this week follow a consistent pattern. A system gets built, your name or expertise or identity goes into it, and you find out later, if you find out at all. The opt-out email arrives after the product has launched. The guardrails get challenged after they’re already deployed. The AI publishes results no scientist can fully explain.

None of this is accidental. The default is capture. The question worth sitting with this week: what would it look like to change the default?


⚡ What You Can Do This Week


