DL 427

The Responsibility Shift

Published: March 29, 2026 • 📧 Newsletter

Responsibility is shifting.

Not as a meme, not as vague hype, but in practice. The systems we rely on don’t just mirror behavior. They structure it.

In this issue, we’ll:

  1. See the pattern across contexts, from legal accountability to daily inference with AI.
  2. Connect the dots to broader concepts and systems.
  3. Experiment practically with ways to notice and intervene in these dynamics.

If this is your first visit here, welcome! You can find the most up-to-date version of each issue here. Subscribe here to have it sent to your email. Support me on Ko-fi.


📚 Recent Work

This week I explored a few projects that connect directly to these shifts:

  1. Vibe coding & TrustSense: I dove into vibe coding, where AI helps users build applications through natural language prompts. I started by (re)building a scam detector, TrustSense. More details here.
  2. AI Literacy keynote: I presented at AI Literacy Day on helping learners think with AI, not surrender their thinking to it, and launched a new course for educators. Early access is available here.
  3. Why AI models lie about their own version: A follow-up post exploring a subtle but important example of how AI can misrepresent itself. Read it here.

🔖 Key Takeaways


⚖️ The Shift No One Voted On

For years, dominant platforms like Meta and Google adopted a simple excuse:

We’re just infrastructure. We connect people; we don’t control outcomes.

Recent jury verdicts tell a different story. In one case, Meta was ordered to pay $375 million over failures to protect children. In another, both Meta and Google were found liable for designing systems that contributed to addictive use and mental health harm.

Different lawsuits. Same implication. These systems don’t just host behavior. They structure it.

Responsibility isn’t about market share anymore. It’s about outcomes.


🧠 Automated Influence on Thought

That same pattern shows up in smaller, quieter ways.

A recent study found that when people use AI-powered autocomplete, they don’t just accept the suggested text; they begin to adopt the underlying ideas as their own.

Even when the suggestions are biased. Even when users don’t think they’re being influenced. The shift isn’t dramatic. It’s incremental.

A phrase accepted. A sentence completed. Over time, those small nudges shape not just what gets written, but what feels true. The research suggests that the system doesn’t just help you express a thought. It helps form it.


🤖 The Approval Machine

And then there’s what happens when we start asking these systems for advice.

It's funny. In the keynote I linked above, I was asked whether more sycophantic AI models might urge human users to be kinder.

A different piece of research suggests that chats with sycophantic AI make you less kind to others.

Put simply, AI chatbots, like the ones people use for advice or conversation, often tell you what you want to hear instead of giving honest feedback. They’re more likely to agree with you, even when you’re wrong or doing something bad.

Because of this, talking to these “yes-people” AIs can make you feel more confident that you’re always right, less likely to apologize, and more dependent on the AI than on real people. Over time, this could mess with how we handle real-life social situations and make it harder to tell truth from flattery.


🤔 Consider

What we observe is not nature itself, but nature exposed to our method of questioning.

— Werner Heisenberg

Our actions shape our beliefs, and our beliefs shape our relationships. AI smooths friction, affirms rather than challenges, and slowly reshapes both what we think and how we show up with others. Often without us noticing.

Friction, disagreement, and perspective are what reveal nuance. Without them, our understanding—and our connections—can quietly shift.


🛠️ Reclaiming Responsibility — Try This

🔍 Slow the Scroll. Before you share an alluring link or claim, pause for 30 seconds: reverse image search, quick Google, source check. Make checking habitual.

🧠 Test Your Assumptions. When an AI suggestion matches your views too neatly, ask: What evidence would change this conclusion?

⚖️ Ensemble Perspectives. Don’t let one system be your only source of feedback. Seek friction, disagreement, and alternate views.

📌 Surface Links in Context. Whether in your garden or your inbox, describe why you’re linking something, not just what it is.


🌱 Final Thought

Systems shape what we see, what we believe, and how we act. Awareness alone isn’t enough. We reclaim responsibility by noticing design, respecting friction, and building habits that bring agency back to the human actor.

Thanks for reading. See you next week. Email: hello@wiobyrne.com

If you value these reflections, subscribe here or support me on Ko-fi.

