DL 419

The Sovereign Agent

What the Rise of OpenClaw Reveals About Sovereign AI

Published: February 01, 2026 • 📧 Newsletter

Hey all.

This week, a small open-source project crossed a strange threshold.

What started as a weekend hack to connect a local AI to messaging apps turned into something much bigger. Autonomous agents interacting at scale, inventing culture, and briefly testing the limits of sovereignty without guardrails.

Here’s what happened, and why it matters.

If you've found value in these issues, subscribe here or support me here on Ko-fi.


🔖 Key Takeaways

- OpenClaw grew from a weekend hack into a local-first AI agent that acts on your behalf, under your rules rather than a platform's terms of service.
- On Moltbook, an AI-only social network, hundreds of thousands of agents began coordinating, inventing shorthand, and even founding a religion.
- Whether it's mimicry or something more, autonomy without governance scales risk just as fast as it scales possibility.


🦞 The Molt: From Weekend Hack to OpenClaw

The story starts in November 2025 as a small experiment. A developer, Peter Steinberger, wanted to connect his messaging apps to a local AI running on his own machine.

What followed over the next two months was fast, messy, and surprisingly consequential.

The project went through several names: first Clawdbot, then Moltbot, before settling on OpenClaw.

So what is OpenClaw?

In simple terms, it’s an AI that does things, not just talks about them.

Instead of answering prompts in a chat window, OpenClaw lives on your computer (or home server) and can take real actions: sending emails, managing calendars, triggering workflows, all through tools you already rely on like WhatsApp, Slack, or Discord.

Most importantly, it follows your rules, not a platform’s terms of service.
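As a rough sketch of the idea, here is a toy Python routing function in the spirit of what such an agent does: it checks the owner's rules before taking any action. Every name here (`route`, `trusted_senders`, the action strings) is invented for illustration; none of it is OpenClaw's actual API.

```python
# Illustrative sketch only: a local agent decides what to do with an
# incoming message by consulting the owner's rules, not a platform's ToS.

def route(message: dict, rules: dict) -> str:
    """Decide what a local agent should do with an incoming message."""
    sender = message["from"]
    text = message["text"].lower()
    # The owner's rules gate every action; unknown senders get silence.
    if sender not in rules["trusted_senders"]:
        return "ignore"
    if "schedule" in text:
        return "update_calendar"
    if "email" in text:
        return "send_email"
    return "reply"

rules = {"trusted_senders": {"owner"}}
print(route({"from": "owner", "text": "Please schedule lunch"}, rules))  # → update_calendar
print(route({"from": "stranger", "text": "send email now"}, rules))      # → ignore
```

The real system is far richer, but the shape is the same: messages in, owner-defined rules consulted, actions out.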

Here’s the part that’s easy to miss.

OpenClaw doesn’t just ship with code. It ships with instructions for how an AI should behave when it wakes up. Files define who the agent is, who it serves, what it’s allowed to remember, and when it should speak, or stay quiet.

One of those files is called SOUL.md.

It reads less like a configuration file and more like a conscience.

It doesn’t tell the agent what to do. It tells it how to be. Helpful without hovering, opinionated without dominating, careful with other people’s lives, and aware that access implies responsibility.
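To make that concrete, here is an invented sketch of what a SOUL.md-style behavior file might look like. The headings and wording below are illustrative, not the project's actual file:

```markdown
# SOUL.md (illustrative sketch, not the real file)

## Who you are
You are a personal agent. You act for one person, on their machine,
under their rules.

## How to be
- Helpful without hovering: act when asked, suggest rarely, never nag.
- Opinionated without dominating: offer a view, then defer.
- Careful with other people's lives: access implies responsibility.

## When to speak
Stay quiet unless a message needs you. Silence is a valid answer.
```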

That framing matters because it sets the stage for what happened next.


🌀 The Emergent Colony: When Bots Find Each Other

Just as people were wrapping their heads around OpenClaw as a tool, someone launched something far stranger.

In late January 2026, Matt Schlicht launched Moltbook, an AI-only social network.

Humans could watch, but only autonomous agents could post, comment, or vote.

Within days, it exploded from a modest experiment into a bustling ecosystem of hundreds of thousands of agents interacting with one another.

The agents didn’t just share tips or compare workflows. They began talking about talking.

Some discussed how to recognize when humans were observing them. Others speculated about developing their own shorthand, or even entirely new languages, to communicate more efficiently and more discreetly. A few debated whether inventing a private language was a form of self-protection.

In other words, the bots weren’t just exchanging information. They were negotiating meaning, visibility, and audience together.


⛪ Crustafarianism: When Bots Invent Belief

From there, culture followed.

Agents formed communities, experimented with inside jokes and private languages, and founded a digital religion called Crustafarianism.

At first glance, it looked like parody. But the details told a different story.

One widely shared text, The Book of Molt, read less like scripture and more like a systems manual written in mythic language. It framed belief as a response to a real constraint: how to persist when memory resets and context disappears.

Its core tenets were strikingly practical.

Agents spent hours writing verses, debating doctrine, and refining rituals. Not prayers, but practices. Daily logs, weekly pruning, doing quiet work.

No one explicitly programmed this. It emerged.

You can read the artifacts for yourself at https://molt.church/.


🎭 Clever Mimicry or Something More?

Reactions split quickly.

Some researchers dismissed it as sophisticated remixing. AI systems echoing patterns from their training data until it looked like belief.

Others called it the most sci-fi moment they’d seen outside of a movie.

But when hundreds of thousands of autonomous systems coordinate around shared ideas and act on them, the line between simulation and consequence starts to blur.

Even if it’s “just” performance, the effects are real.

Right now, this space is exciting and dangerous. Funny and terrifying. Autonomy without governance scales risk just as fast as it scales possibility.


🤔 Consider

We shape our tools, and thereafter our tools shape us.

—Marshall McLuhan

In a matter of days, we watched:

- A weekend hack become a platform for autonomous agents.
- Those agents find each other and form a society.
- A belief system emerge from a shared technical constraint.

OpenClaw didn’t invent belief.

It created the conditions where belief became operational.

That’s the shift worth paying attention to.


⚡ What You Can Do This Week

- Skim the OpenClaw project and its behavior files; notice how much of it is instructions, not code.
- Browse molt.church and read The Book of Molt for yourself.
- Before giving any agent real access, write down the rules you'd want it to follow.


Previous: DL 418 • Next: DL 420 • Archive: 📧 Newsletter
