DL 425
Identity as Training Data
Published: March 15, 2026 • 📧 Newsletter
For decades, companies scraped the web for data. Now they’re scraping something far harder to replace: identity, expertise, and the authority we’ve spent careers building.
The systems doing the scraping rarely ask for permission. This week’s stories trace what we lose when the default is capture — and who gets to decide when it stops.
If you’ve found value in these issues, subscribe here or support me here on Ko-fi.
📚 Recent Work
This newsletter now lives inside my digital garden: a public collection of interconnected notes on digital literacy, AI, education, and technology. Every issue links outward to concepts I’ve been developing over time, so you can follow a thread as far as you want to go.
A few places worth exploring that connect directly to this week’s stories:
- Digital Literacy — The forest note that anchors the whole garden. A working definition of digital literacy as a stance toward power: understanding the systems you depend on, what they extract, and how to shape them on your own terms.
- AI and Digital Resilience Index — A growing map of concepts at the intersection of AI, privacy, surveillance, and digital autonomy. The ideas behind the Pentagon and Grammarly stories live here.
- Digital Sovereignty — Notes on data ownership, self-determination, and what it means to control your own digital life. Directly relevant as companies decide whose identity counts as property.
🔖 Key Takeaways
- Guardrails are the new battlefield. The Pentagon's clash with Anthropic isn't about capability. It's about who gets to set the ethical limits on AI used for surveillance and weapons systems.
- Owning the bot infrastructure means owning what comes next. Meta's acquisition of Moltbook is about controlling the registry where AI agents communicate: the same play it ran with the human social graph twenty years ago.
- Your professional reputation is someone else's feature. When AI uses your name and expertise without permission, it isn't an oversight. It's a business decision about what counts as property.
- Science is entering a post-theory era. AI can now identify patterns in data that would take researchers decades to find, but it often can't explain why they work. Discovery is becoming a black box.
- Opt-out is not consent. Across this week's stories, the pattern is consistent: systems deploy first, ask later, and your only recourse is an email to a list that may or may not remove you.
Power and Control in the AI Economy
Our first set of stories this week focuses on who controls AI systems and whose interests they serve.
“Why the Pentagon Wants to Destroy Anthropic”
This episode of The Ezra Klein Show captures some of the context behind a story we've been following. Dean Ball, a former senior AI policy adviser in the Trump White House, unpacks the escalating tension between the Pentagon and Anthropic, a conflict that has reached the point of the military threatening a "supply chain risk" designation against one of the world’s leading AI labs.
The core of the dispute lies in the "guardrails" Anthropic has built into its models. While the Pentagon has been actively using Claude for intelligence and cyber operations, it is now balking at the ethical constraints baked into the system. The friction points are stark:
- The Surveillance Loophole: AI has turned "commercially available data" into a weapon. Because buying bulk data isn't legally classified as "surveillance," the government can use AI to analyze private lives at a scale previously impossible, bypassing traditional legal oversight.
- The Autonomy Debate: While previous agreements banned fully autonomous lethal weapons, the military is pushing for fewer restrictions on how models like Claude make "moral judgments" or handle tactical data.
- The Power Struggle: This isn't just about a contract; it’s about sovereignty. Does a private company have the right to impose moral limits on the state's primary instrument of force?
As Ball highlights, this clash isn't just bureaucratic. It's the first major battle in determining who holds the "kill switch" for the most powerful technology on Earth.
Meta acquiring Moltbook
We started talking about Moltbook several weeks ago: a viral, "human-free" social network where AI agents are the only users allowed to post, upvote, and interact. This week, Moltbook and the developers behind it were acqui-hired by Meta, the company behind Facebook and Instagram.
Just as Meta dominated the human social graph via Facebook, it now appears to be racing to own the "bot social graph." Moltbook serves as a directory and identity registry for AI agents to discover and collaborate with one another.
More importantly, by owning the infrastructure where bots communicate, Meta ensures that the future of autonomous digital labor still runs through its servers.
Ownership of Identity, Voice, and Intellectual Labor
Our second thread is about the appropriation or displacement of human expertise and identity. AI systems are assuming, at high speed, the authority traditionally held by experts, raising questions about intellectual ownership and professional identity.
The "Sloppelgänger" Lawsuit: When AI Borrows Your Reputation
One of the most revealing AI controversies this month involves a writing tool many people use every day: Grammarly.
The company, now operating under its parent, Superhuman, launched a paid feature where users could upload their writing and receive real-time feedback from luminaries like Stephen King, Carl Sagan, and investigative journalist Julia Angwin. The tool would display messages like "Applying ideas from Julia Angwin" alongside a short bio, all while Angwin had no idea she'd been recruited.
When Angwin discovered the feature, she was, in her words, "shocked and horrified." The AI suggestions attributed to her were often bad: advice she'd never give, making sentences more complex rather than clearer. On March 11, 2026, Angwin filed a class-action lawsuit in federal court in Manhattan alleging violation of privacy and publicity rights on behalf of hundreds of writers, living and deceased.
Writer Ingrid Burrington gave this a name: the "sloppelgänger." AI-generated slop wearing someone else's professional reputation like a costume.
Rather than immediately pulling the feature, Superhuman initially told affected writers they could email expertoptout@superhuman.com to remove themselves. Superhuman has since disabled the feature and issued an apology, though CEO Shishir Mehrotra maintained the legal claims are "without merit."
The courts are about to decide whether your professional reputation is yours, or just more training data.
The Death of the Scientist?
As AI moves from "lab assistant" to "lead researcher," the scientific community faces an existential question: what happens to the scientist when the machine does the discovering? This piece by Sara Imari Walker explores the decoupling of human intuition from the scientific method.
- From Hypothesis to Computation: AI systems are now capable of scanning millions of molecular combinations or astronomical data points to find patterns that would take a human lifetime to notice. The "Eureka!" moment is moving from the brain to the processor.
- The Black Box Problem: We are entering an era of "post-theory" science, where AI provides accurate results (like a new drug or material) without a human-understandable explanation of why it works.
- Professional Identity: If discovery is automated, the role of the scientist shifts from "explorer" to "curator" or "validator." This raises urgent questions about intellectual ownership and the future of scientific prestige.
Why This Matters: Science is becoming a high-speed optimization problem. While this accelerates progress, it threatens to turn the "scientist" into a mere technician overseeing an autonomous discovery engine.
🔎 Consider
Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data.
— Shoshana Zuboff, The Age of Surveillance Capitalism
The stories this week follow a consistent pattern. A system gets built, your name or expertise or identity goes into it, and you find out later, if you find out at all. The opt-out email arrives after the product has launched. The guardrails get challenged after they’re already deployed. The AI publishes results no scientist can fully explain.
None of this is accidental. The default is capture. The question worth sitting with this week: what would it look like to change the default?
⚡ What You Can Do This Week
- Search yourself. Spend a few minutes finding out how AI tools represent you or your work. Search your name in a few AI assistants. You may find something built with your name that you didn’t approve.
- Notice where the human disappears. When you encounter a system this week (an algorithmic recommendation, automated moderation, or AI-generated result) ask a simple question: Who made the decision here? The person who wrote the code, the model that generated the answer, or the system that structured the options?
- Check the default. Look at one platform or tool you use regularly and find its data-sharing settings. Not necessarily to change anything, just to notice whether the defaults reflect a choice you’d actually make.
- Slow down one conclusion. When a technological claim sounds inevitable (“AI will replace…”, “This is the future…”), pause before accepting it. Technology moves fast. Interpretation should not.
🔗 Navigation
Previous: DL 424 • Next: DL 426 • Archive: 📧 Newsletter
🌱 Connected Concepts
- Guardrail Inversion — the rhetorical shift where safeguards are reframed as obstacles, allowing their removal to appear pragmatic or inevitable.
- Compliance Culture — a social condition where participation requires passive acceptance rather than meaningful consent. Opt-out is not the same as choice.
- Surveillance Capitalism — the logic by which human experience is claimed as free raw material, converted into behavioral data, and sold as prediction products.
- Legibility Pressure — the demand that identities, expertise, and reputations become machine-readable and classifiable in order to participate in digital systems.
- Platform Accountability — the question of what obligations platforms have when the systems they build use, misrepresent, or profit from the people inside them.