DL 413
Published: November 23, 2025 • 📧 Newsletter
The Glass City Paradox
When Technology Shines, but the Foundation Cracks
We’ve built a City of Glass: dazzling with technological power, but terrifyingly brittle underneath. In this issue, I explore how our digital world amplifies us, and exposes us, too. We talk about AI’s double-edged nature, how fragile the internet really is, and why trust in our institutions feels shakier than ever.
If you've found value in these issues, subscribe here or support me here on Ko-fi.
🔖 Key Takeaways
- AI’s Paradox: AI can expand human capacity, but it also risks deskilling us and fueling large-scale misinformation when guardrails fail.
- Fragile Foundations: The Cloudflare outage and FCC cybersecurity rollback show how dependent we are on brittle, aging infrastructure run by a handful of providers.
- Eroding Trust: From Meta’s youth harms to manipulated public-health information, institutions meant to protect us are increasingly shaping, or distorting, what we see as true.
- Shifting Power: Schools, states, and communities are beginning to push back with audits, new AI governance models, and tools that prioritize user agency and data sovereignty.
📚 Recent Work
This week I published the following:
- From Scrap Hardware to Stable Proxmox: Building the Core of My Homelab - The latest in my homelab series, covering how I use Proxmox to run my services.
- The Illusion of Simplicity - A follow-up to my earlier post about prompt engineering, on moving toward conversations with your AI tools.
🚀📉 AI: Elevator or Crutch?
AI is transforming how we think, learn, and make decisions. But the big question remains: is AI lifting us up, or quietly weakening our core skills? This tension is at the heart of this week’s story.
Cognitive Deskilling: When Convenience Costs Us
More people are leaning on AI and smart tools for everyday thinking: summaries, reports, explanations, even opinions. This shift toward “Thinking as a Service” (TaaS) is incredibly convenient, but it comes with a hidden cost.
Patterns of AI dependence may contribute to cognitive deskilling, a gradual erosion of our ability to research, analyze, and evaluate on our own.
Historically, deskilling has been a strategy used in capitalist systems to separate conception from execution: a small group holds the high-level knowledge while everyone else simply carries out tasks. AI risks accelerating this divide by shifting more intellectual labor to machines.
AI as an Augmenter, Not a Job Killer
Despite fears of mass job loss, current research paints a different picture. Studies show that AI hasn’t reduced working hours or earnings in many fields; instead, it’s acting as a performance booster for humans.
A recent report from the University of Chicago finds no measurable reduction in hours worked or pay, suggesting AI is enhancing, not replacing, professional work.
Researchers describe this emerging dynamic as Connected Intelligence: a collaborative partnership where humans and AI agents work side-by-side, each doing what they do best.
But there’s a catch: this augmented future only works with fast, reliable, low-latency internet. Without it, the collaboration breaks down.
The Information Integrity Crisis
AI doesn’t just assist us. It also floods the information ecosystem with content, and not all of it trustworthy.
- Disinformation gets easier: Google’s Gemini and Nano Banana Pro models can now generate strikingly realistic images of highly sensitive historical events (JFK, 9/11) with little resistance. That makes it easier for bad actors to seed false narratives.
- Truth gets blurrier: Even the strongest language models still struggle with reliable facts. A recent Nature article shows that state-of-the-art LLMs miss nearly 30% of basic general-knowledge questions.
This combination, easy disinformation + inconsistent accuracy, creates a powerful challenge for public understanding and trust.
🌐📉 The Cracks in Our Digital Infrastructure
The second big story this month is about how shaky our digital foundation really is, from fragile internet systems to political fights over who gets to regulate (or deregulate) the tech we depend on every day.
Internet Outages Reveal System Fragility
This month’s widespread outage across major services (X, ChatGPT, Amazon, Spotify, and more) highlighted just how vulnerable our internet infrastructure can be.
The trigger? A single configuration file at Cloudflare grew larger than the software consuming it could handle, setting off a domino effect that disrupted apps around the world.
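To make that failure mode concrete, here’s a minimal sketch (not Cloudflare’s actual code) of the kind of guard that contains it: a config loader that enforces a hard size cap and falls back to the last-known-good configuration instead of crashing. The cap, file format, and names are all hypothetical.

```python
# Hypothetical sketch: guard a hot-reloaded config file with a size cap
# and a last-known-good fallback. Illustrative only; not Cloudflare's
# implementation.
import json
from pathlib import Path

MAX_CONFIG_BYTES = 1_000_000  # the size downstream consumers were built for

def load_config(path: Path, last_known_good: dict | None = None) -> dict:
    """Load a JSON config, refusing oversized input instead of crashing."""
    size = path.stat().st_size
    if size > MAX_CONFIG_BYTES:
        if last_known_good is not None:
            # Fail safe: keep serving with the previous config and alert.
            print(f"WARN: {path} is {size} bytes (cap {MAX_CONFIG_BYTES}); "
                  "keeping last-known-good config")
            return last_known_good
        raise ValueError(f"{path} exceeds the {MAX_CONFIG_BYTES}-byte cap")
    return json.loads(path.read_text())
```

The specifics don’t matter; the lesson does: any file that propagates automatically across a fleet needs a validation step and a rollback path, because a single bad artifact can otherwise cascade through everything built on top of it.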
For users, it was a reminder that so much of the internet runs through just a few companies. As coverage in ZDNet noted, when Cloudflare goes down, a huge portion of the web goes with it.
Federal Power Grab Over AI Regulation
AI regulation is becoming a fierce political battleground. The Trump administration is reportedly considering an executive order that would give the federal government exclusive authority over AI laws, sidelining states that want stronger protections.
According to The Verge, the proposal would also create an “AI Litigation Task Force” to challenge state laws that industry groups find burdensome. Coverage in Wired frames the move as an effort to combat what the administration calls “woke” or restrictive AI rules at the state level, potentially even tying federal broadband or tech grants to states’ alignment with federal AI policy.
No matter where one stands politically, this is a major shift. It centralizes power over AI regulation in Washington and weakens local control.
Cybersecurity Rollbacks at the FCC
In another surprising move affecting digital safety, the Federal Communications Commission voted 2–1 to scrap rules requiring phone and internet companies to meet minimum cybersecurity standards.
The decision rolls back protections put in place during the Biden administration, despite ongoing concerns about foreign hacking and critical-infrastructure vulnerabilities.
Critics warn this could make the U.S. more vulnerable to breaches, while supporters argue it reduces regulatory burden. Either way, it’s a major shift in how we approach national cyber defense.
⚖️🔒 Trust, Control & Power in the Digital Age
This month’s biggest story sits right at the intersection of trust, technology, and who actually has control over the systems we rely on. Across health, social media, and education, we’re seeing what happens when powerful digital platforms fail to act responsibly, and how those choices ripple out into the lives of everyday people.
Social Media, Youth Safety & the Meta Revelations
Newly unsealed court filings allege that Meta (Facebook/Instagram) knew its platforms were harmful to young people, and did far too little about it.
According to the disclosures, Meta was aware that its apps could be addictive and worsen mental health issues for teens. Yet executives allegedly pushed aside safety recommendations if they threatened growth.
Some of the most troubling details include:
- A “17 strikes” tolerance before removing accounts involved in sex trafficking
- Safety features being shelved because they might reduce engagement
- Millions of preventable “inappropriate interactions with children”
These filings reinforce a hard truth: when engagement becomes the top priority, safety often loses.
Public Health & the Erosion of Trust
Another worrying development this month: the integrity of official public-health information was further compromised.
The CDC’s website was altered to suggest a false link between vaccines and autism.
This contradicts decades of rigorous scientific research, and it wasn’t an accident. When political pressure reshapes scientific communication, it undermines the public’s ability to trust critical health guidance. Research shows that even small cracks in trust can have major consequences during real emergencies.
Surveillance in Schools: The Proctorio Case
Education isn’t immune from these issues either. A long-running legal battle between librarian Ian Linkletter and the remote-proctoring company Proctorio finally settled, but the implications remain significant.
The case centered on Linkletter sharing publicly available Proctorio training videos to raise concerns about:
- Invasive surveillance of students
- Algorithmic bias and error rates
- A chilling effect on public conversation about EdTech
- Lack of transparency in tools that make high-stakes decisions
Educators and researchers are responding by pushing for practices such as Algorithmic Impact Assessments, essentially a health-and-safety inspection for AI tools before they’re allowed into classrooms. Initiatives like the Data & Society Algorithmic Impact Methods Lab are helping schools understand how to evaluate these systems responsibly.
🤔 Consider
We build our computer systems the way we build our cities—over time, without a plan, on top of ruins.
— Ellen Ullman
Our society’s relationship with technology is defined by a paradox. We live surrounded by systems of immense power: AI models that amplify our abilities, networks that link billions, platforms that shape our social and civic lives. Yet these systems are more fragile, more opaque, and more easily abused than we care to admit.
The City of Glass offers clarity and brilliance, but also exposure. Its cracks show where trust is thinning, where infrastructure is brittle, and where our cognitive foundations are shifting beneath us.
But this isn’t a story of collapse; it’s a call to craftsmanship:
- to build tools that strengthen, not replace, human judgment
- to design infrastructures that serve communities, not corporations
- to create digital ecosystems where truth, safety, and sovereignty are baseline assumptions, not add-ons
If we build with care, transparency, and agency, the City of Glass can become something better:
a place where technology reflects our best values, not our blind spots.
⚡ What You Can Do This Week
- Ask What You Depend On: List the three digital services you rely on most (cloud storage, calendar, chat, etc.). Explore whether there are open-source or self-hosted alternatives you could try, even part-time.
- Be a Thoughtful Consumer of AI: Use AI as a partner, not a replacement, and keep exercising your own mental muscles. When using summaries or tools powered by AI, pause and ask: What skills is this replacing? What might I be losing by outsourcing this thinking?
- Demand Digital Ethics: If your school, workplace, or community is adopting AI tech, ask for transparency. How are decisions made? Who’s accountable? Are impact assessments in place?
- Support Digital Sovereignty: Consider contributing to or using platforms like Nextcloud. Spread the word: sovereignty is not just a “techie” issue. It’s about democracy, trust, and our ability to shape the future.
🔗 Navigation
Previous: DL 412 • Next: DL 414 • Archive: 📧 Newsletter
🌱 Connected Concepts:
- World Models — Why AI systems grounded in physical and causal reality matter more than next-token prediction.
- AI Literacy — Core competencies for evaluating, questioning, and contextualizing AI outputs.
- Engagement vs Learning — How design incentives in EdTech shape outcomes for students and institutions.
- Surveillance Capitalism — The economic logic behind the failures of Meta, Proctorio, and engagement-driven platforms.
- Privacy by Design — Principles for building resilient systems that minimize harm and maximize autonomy.
- Digital Sovereignty — Why owning your data, stack, and infrastructure becomes essential in a brittle digital world.
- Infrastructure as Literacy — Understanding outages, protocols, and governance as part of digital citizenship.