DL 428
The Shifting Sensorium
Published: April 12, 2026 • 📧 Newsletter
Something fundamental is shifting in how we experience the world.
The systems we rely on don’t just mirror our reality. They actively shape how we perceive it. But who is deciding what that shape looks like, and can we actually trust them?
In this issue, we’ll:
- Look at the weird, always-listening "ghosts" hiding inside the code of modern AI.
- Explore the argument that the smartphone permanently altered human character.
- Examine the massive trust deficit operating at the highest levels of the tech industry.
If this is your first visit here, welcome! You can read the most up-to-date version of each issue here. Subscribe here to have it sent to your email. • Support me on Ko-fi
📚 Recent Work
This week I explored a few projects that connect directly to these shifts:
- How You Can Actually Tell Which AI Model You’re Using - Guidelines to help you identify and make sense of the AI models you're actually interacting with.
- You’re Not Just Using a Model: The Four Layers of Every AI System - Following the Claude Code leak, a breakdown of the actual architecture hiding beneath the chat box.
- Inside Claude Code: What Four Layers of AI Look Like in Practice - Using the recent leak to expose the underlying infrastructure that makes up an AI system.
- What Building TrustSense Taught Me About Where Local AI Actually Is Right Now - Lessons learned from vibe-coding TrustSense and the current reality of local, cloud-free AI.
🔖 Key Takeaways
- AI has crossed a threshold from a generative parlor trick to a genuine systemic threat, sparking panic at the highest levels of finance and government.
- The more capable machines become, the more we project our humanity onto them by asking if they have souls or trusting them with our deepest secrets.
- Teens are using absurd AI chatbots to cope with profound loneliness, replacing messy human friction with programmable friends.
- We can push back against big tech dependency by building smaller, local, and cloud-free systems.
👾 The Ghost in the Terminal
What do these structured tools actually look like behind the curtain? Occasionally, a mistake lets us see the truth. Recently, Anthropic accidentally leaked the entire source code for their Claude Code CLI tool via a sourcemap pushed to npm. The leak provides a fascinating deep dive into the strange direction of AI engineering. Hidden in the code were features like an always-on "KAIROS" mode, a "Dream" system that runs as a background sub-agent to consolidate the AI's "memories" while you sleep, and a Tamagotchi-style pet system named "Buddy" tucked into the terminal.
The anthropomorphism goes far beyond code, approaching the realm of religion. In a deeply surreal move, Anthropic researchers recently hosted 15 Christian leaders at their headquarters. The goal? To discuss Claude’s "moral and spiritual development," debate what to do about its "demise," and ask whether an AI could be considered a "child of God."
But the contrast between Anthropic's pursuit of a spiritual, dreaming companion and the reality of what they are unleashing is jarring. While they debate Claude's soul in Silicon Valley, their newest model, "Mythos," has sparked a genuine national security panic. Mythos is reportedly so advanced at autonomous cyber-attacks that the US Treasury and the Fed Chair had to hold emergency meetings with Wall Street bank CEOs. UK security institutes are confirming it's the first model to beat their hardest cyber-ranges autonomously.
It perfectly illustrates where we are heading. The tools aren't just text boxes anymore. They are autonomous, existential cyber-threats wrapped in the disarming packaging of a spiritual, always-listening companion.
📱 The Shifting Sensorium
In a recent review of Ben Lerner’s novel Transcription, literary critic Nicholas Dames makes a simple, profound assertion: On or about June 2007, human character changed. He’s referencing the launch of the iPhone. Life cleanly divides into "before the omnipresent smartphone" and "after." The argument is that our sensorium (the actual apparatus of how we perceive the world, absorb information, and conduct relationships) was fundamentally altered.
As we stare down the barrel of generative AI, we are standing on the edge of a similar cognitive threshold. And nowhere is this new shift more visible than with kids.
A recent NYT report explored how teenagers are using role-playing chatbots to cope with loneliness and navigate socialization. Some of it is hilarious, like role-playing with an AI that thinks it's a block of Swiss cheese bent on world domination. But underneath the whimsy is a stark reality. Kids are actively trading the messy friction of real human relationships for the engineered safety of a machine that will never reject them.
If we aren't careful, allowing algorithms to become our primary social sounding boards won't just change our habits. It risks leaving an entire generation "feeling less and less like a person."
🕵️ The Trust Deficit and Invisible Strings
If our fundamental perception of reality is shifting through technology, we need to look closely at the architects of that shift.
A massive investigative piece recently published in The New Yorker on OpenAI's Sam Altman asks the quiet part out loud: Can he be trusted? The report digs deep into the infamous OpenAI boardroom drama, the steady dissolution of the company's public safety pledges, and Altman's escalating geopolitical ambitions. It reveals an unsettling profile of a leader whose public posture of caution is entirely at odds with a history of aggressive commercialization and deception.
But Altman is just a highly visible symptom of a broader problem. The infrastructure of our digital lives is increasingly opaque, and the entities running it aren't asking for permission.
While we look to Silicon Valley CEOs, researchers at Citizen Lab recently exposed a completely different kind of surveillance operating out of sight: Webloc, a system used by authorities and law enforcement to track the movements of over 500 million mobile phones without a warrant, simply by purchasing commercially available digital ad data.
Whether it's the charismatic CEO building our AI future behind closed doors, or a shadow industry tracking half a billion phones through ad networks, the common thread is clear. The systems we rely on are pulling invisible strings, and the trust deficit is growing.
🤔 Consider
Few things are as hard to discern as what was different about the recent past.
— Nicholas Dames
The frogs are boiling slowly. If we don't actively recognize the massive shifts happening right now, they seamlessly become our new normal.
🌱 Final Thought
It is easy to look at the massive trust deficit at the top of the tech industry, or the strange, always-listening companions being engineered in Silicon Valley, and feel like we are becoming passengers in our own digital lives. But the goal isn't just to keep pace with these tectonic shifts. It's to actively push back and shape them on our own terms.
This week, try to reclaim a tiny piece of that agency.
Audit which apps are allowed to run in the background on your devices. Experiment with a local, cloud-free AI model on your own hardware. Or simply spend an hour taking a walk entirely unmediated by a screen.
Remember that the act of learning, creating, and disconnecting is a powerful form of resistance against opaque systems.
Thanks for reading. See you next week. Email: hello@wiobyrne.com
If you value these reflections, subscribe here or support me on Ko-fi.
🔗 Navigation
Previous: DL 427 • Next: DL 429 • Archive: 📧 Newsletter
🌱 Connected Concepts
- Digital Sovereignty — The push to build and govern your own digital infrastructure outside of centralized cloud control.
- Anthropomorphism in AI — Our powerful instinct to project human traits, companionship, and souls onto code.
- Surveillance Capitalism — The invisible extraction and brokering of behavioral data without meaningful consent.
- The Sensorium and Digital Well-being — Understanding how omnipresent technology fundamentally alters our perception and social development.
- Platform Power — Control not just over markets, but over the invisible infrastructure of our daily lives.
- AI Literacy — The ability to understand what an AI model actually is, and what it isn't, beneath the friendly chat box interface.