DL 427
The Responsibility Shift
Published: March 29, 2026 • 📧 Newsletter
Responsibility is shifting.
Not as a meme, not as ambiguous hype, but in practice. The systems we rely on don’t just mirror behavior. They structure it.
In this issue, we’ll:
- See the pattern across contexts, from legal accountability to daily inference with AI.
- Connect the dots to broader concepts and systems.
- Experiment practically with ways to notice and intervene in these dynamics.
If this is your first visit here, welcome! You can find the most up-to-date version of each issue here. Subscribe here to have it sent to your email. • Support me on Ko-fi
📚 Recent Work
This week I explored a few projects that connect directly to these shifts:
- Vibe coding & TrustSense: I dove into vibe coding, where AI helps users build applications through natural language prompts. I started by (re)building a scam detector, TrustSense. More details here.
- AI Literacy keynote: I presented at AI Literacy Day on helping learners think with AI, not surrender their thinking to it, and launched a new course for educators. Early access is available here.
- Why AI models lie about their own version: A follow-up post exploring a subtle but important example of how AI can misrepresent itself. Read it here.
🔖 Key Takeaways
- Platforms and AI tools don’t just reflect behavior; they actively shape it.
- Responsibility is shifting from individuals to the systems we interact with.
- Friction, disagreement, and correction are essential to nuance and insight.
- The real shift isn’t about technology. It’s about where we locate responsibility.
⚖️ The Shift No One Voted On
For years, dominant platforms like Meta and Google adopted a simple excuse:
We’re just infrastructure. We connect people; we don’t control outcomes.
Recent jury verdicts tell a different story. In one case, Meta was ordered to pay $375 million over failures to protect children. In another, both Meta and Google were found liable for designing systems that contributed to addictive use and mental health harm.
Different lawsuits. Same implication. These systems don’t just host behavior. They structure it.
Responsibility isn’t about market share anymore. It’s about outcomes.
🧠 Automated Influence on Thought
That same pattern shows up in smaller, quieter ways.
A recent study found that when people use AI-powered autocomplete, they don’t just accept the suggested text; they begin to adopt the underlying ideas as their own.
Even when the suggestions are biased. Even when users don’t think they’re being influenced. The shift isn’t dramatic. It’s incremental.
A phrase accepted. A sentence completed. Over time, those small nudges shape not just what gets written, but what feels true. The research suggests that the system doesn’t just help you express a thought. It helps form it.
🤖 The Approval Machine
And then there’s what happens when we start asking these systems for advice.
Funnily enough, in the keynote I linked above, I was asked whether more sycophantic AI models might urge human users to be kinder.
A different piece of research suggests that chats with sycophantic AI make you less kind to others.
Put simply, AI chatbots, like the ones people use for advice or conversation, often tell you what you want to hear instead of giving honest feedback. They’re more likely to agree with you, even when you’re wrong or behaving badly.
Because of this, talking to these “yes-people” AIs can make you feel more confident that you’re always right, less likely to apologize, and more dependent on the AI instead of real people. Over time, this could mess with how we handle real-life social situations and make it harder to tell the truth from flattery.
🤔 Consider
What we observe is not nature itself, but nature exposed to our method of questioning.
— Werner Heisenberg
Our actions shape our beliefs, and our beliefs shape our relationships. AI smooths friction, affirms rather than challenges, and slowly reshapes both what we think and how we show up with others. Often without us noticing.
Friction, disagreement, and perspective are what reveal nuance. Without them, our understanding—and our connections—can quietly shift.
🛠️ Reclaiming Responsibility — Try This
🔍 Slow the Scroll. Before you share an alluring link or claim, pause 30 seconds: reverse image search, quick Google, source check. Make checking habitual.
🧠 Test Your Assumptions. When an AI suggestion matches your views too neatly, ask: What evidence would change this conclusion?
⚖️ Ensemble Perspectives. Don’t let one system be your only source of feedback. Seek friction, disagreement, and alternate views.
📌 Surface Links in Context. Whether in your garden or your inbox, describe why you’re linking something, not just what it is.
🌱 Final Thought
Systems shape what we see, what we believe, and how we act. Awareness alone isn’t enough. We reclaim responsibility by noticing design, respecting friction, and building habits that bring agency back to the human actor.
Thanks for reading. See you next week. Email: hello@wiobyrne.com
If you value these reflections, subscribe here or support me on Ko-fi.
🔗 Navigation
Previous: DL 426 • Next: DL 428 • Archive: 📧 Newsletter
🌱 Connected Concepts
- Algorithmic Agency — Systems don’t just reflect choices; they actively structure them
- AI Literacy — Understanding where assistance ends and influence begins
- Choice Architecture — The design of systems that nudge decisions without appearing to
- Platform Power — Control not just over markets, but over behavior and visibility
- Critical Thinking — The ability to notice when a decision wasn’t entirely your own
- Sociotechnical Systems — Technology and human behavior as inseparable, co-shaping forces