DL 413

Published: November 23, 2025 • 📧 Newsletter

The Glass City Paradox

When Technology Shines, but the Foundation Cracks

We’ve built a City of Glass: dazzling with technological power, but terrifyingly brittle underneath. In this issue, I explore how our digital world amplifies us, and exposes us, too. We talk about AI’s double-edged nature, how fragile the internet really is, and why trust in our institutions feels shakier than ever.

If you've found value in these issues, subscribe here or support me on Ko-fi.

🔖 Key Takeaways

- AI is boosting human performance more than it is replacing jobs, but leaning on it for everyday thinking risks cognitive deskilling.
- This month’s Cloudflare outage showed how much of the web runs through a handful of companies, and how one oversized configuration file can ripple worldwide.
- From federal moves on AI regulation to FCC cybersecurity rollbacks and the Meta revelations, control over our digital systems is centralizing while public trust erodes.

📚 Recent Work

This week I published the following:

🚀📉 AI: Elevator or Crutch?

AI is transforming how we think, learn, and make decisions. But the big question remains: is AI lifting us up, or quietly weakening our core skills? This tension is at the heart of this week’s story.

Cognitive Deskilling: When Convenience Costs Us

More people are leaning on AI and smart tools for everyday thinking: summaries, reports, explanations, even opinions on demand. This shift toward “Thinking as a Service” (TaaS) is incredibly convenient, but it comes with a hidden cost.

Patterns of AI dependence may contribute to cognitive deskilling, a gradual erosion of our ability to research, analyze, and evaluate on our own.

Historically, deskilling has been a strategy used in capitalist systems to separate conception from execution: a small group holds the high-level knowledge while everyone else simply carries out tasks. AI risks accelerating this divide by shifting more intellectual labor to machines.

AI as an Augmenter, Not a Job Killer

Despite fears of mass job loss, current research paints a different picture. Studies show that AI hasn’t reduced working hours or earnings in many fields; instead, it’s acting as a performance booster for humans.

A recent report from the University of Chicago finds no measurable reduction in hours worked or pay, suggesting AI is enhancing, not replacing, professional work.

Researchers describe this emerging dynamic as Connected Intelligence: a collaborative partnership where humans and AI agents work side-by-side, each doing what they do best.

But there’s a catch: this augmented future only works with fast, reliable, low-latency internet. Without it, the collaboration breaks down.

The Information Integrity Crisis

AI doesn’t just assist us. It also floods the information ecosystem with content, and not all of it is trustworthy.

This combination of easily produced disinformation and inconsistent accuracy creates a serious challenge for public understanding and trust.

🌐📉 The Cracks in Our Digital Infrastructure

The second big story this month is about how shaky our digital foundation really is, from fragile internet systems to political fights over who gets to regulate (or deregulate) the tech we depend on every day.

Internet Outages Reveal System Fragility

This month’s widespread outage across major services (X, ChatGPT, Amazon, Spotify, and more) highlighted just how vulnerable our internet infrastructure can be.

The trigger? A single configuration file at Cloudflare grew bigger than the system could handle, setting off a domino effect that disrupted apps around the world.

For users, it was a reminder that so much of the internet runs through just a few companies. As coverage in ZDNet noted, when Cloudflare goes down, a huge portion of the web goes with it.
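
The pattern behind the failure is worth a closer look. A resilient way to handle machine-generated configuration is to validate it against hard limits before loading it, and to fall back to the last known-good copy when validation fails. Below is a minimal sketch of that idea in Python; the file names, limits, and `load_config` helper are all hypothetical illustrations, not Cloudflare’s actual implementation.

```python
import json
import shutil
from pathlib import Path

# Hypothetical limits: Cloudflare's proxy enforces its own internal
# bounds; these names and numbers are illustrative only.
MAX_BYTES = 1_000_000   # refuse files larger than ~1 MB
MAX_ENTRIES = 10_000    # refuse files with more entries than expected

CONFIG = Path("features.json")
LAST_GOOD = Path("features.last_good.json")

def load_config() -> dict:
    """Validate a freshly generated config file before using it.

    If the new file breaches a hard limit (the failure mode in the
    outage: a feature file that silently grew past what the system
    expected), keep serving with the last known-good copy instead
    of crashing the process.
    """
    try:
        if CONFIG.stat().st_size > MAX_BYTES:
            raise ValueError("config exceeds size limit")
        data = json.loads(CONFIG.read_text())
        if len(data) > MAX_ENTRIES:
            raise ValueError("config exceeds entry limit")
    except (OSError, ValueError) as exc:
        # json.JSONDecodeError subclasses ValueError, so parse
        # failures land here too.
        print(f"Rejecting new config ({exc}); using last known-good.")
        return json.loads(LAST_GOOD.read_text())
    shutil.copy(CONFIG, LAST_GOOD)  # promote the validated file
    return data
```

The design choice that matters here is rejecting bad input locally: one node serving slightly stale data is a far smaller blast radius than every node crashing on the same oversized file at once.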

Federal Power Grab Over AI Regulation

AI regulation is becoming a fierce political battleground. The Trump administration is reportedly considering an executive order that would give the federal government exclusive authority over AI laws, sidelining states that want stronger protections.

According to The Verge, the proposal would also create an “AI Litigation Task Force” to challenge state laws that industry groups find burdensome. Coverage in Wired frames the move as an effort to combat what the administration calls “woke” or restrictive AI rules at the state level, potentially even tying federal broadband or tech grants to states’ alignment with federal AI policy.

No matter where one stands politically, this is a major shift. It centralizes power over AI regulation in Washington and weakens local control.

Cybersecurity Rollbacks at the FCC

In another surprising move affecting digital safety, the Federal Communications Commission voted 2–1 to scrap rules requiring phone and internet companies to meet minimum cybersecurity standards.

The decision rolls back protections put in place during the Biden administration, despite ongoing concerns about foreign hacking and critical-infrastructure vulnerabilities.

Critics warn this could make the U.S. more vulnerable to breaches, while supporters argue it reduces regulatory burden. Either way, it’s a major shift in how we approach national cyber defense.

⚖️🔒 Trust, Control & Power in the Digital Age

This month’s biggest story sits right at the intersection of trust, technology, and who actually has control over the systems we rely on. Across health, social media, and education, we’re seeing what happens when powerful digital platforms fail to act responsibly, and how those choices ripple out into the lives of everyday people.

Social Media, Youth Safety & the Meta Revelations

Newly unsealed court filings allege that Meta (Facebook/Instagram) knew its platforms were harmful to young people, and did far too little about it.

According to the disclosures, Meta was aware that its apps could be addictive and worsen mental health issues for teens. Yet executives allegedly pushed aside safety recommendations if they threatened growth.

Some of the details in the filings are deeply troubling.

These filings reinforce a hard truth: when engagement becomes the top priority, safety often loses.

Public Health & the Erosion of Trust

Another worrying development this month: the integrity of official public-health information was further compromised.

The CDC’s website was altered to suggest a false link between vaccines and autism.

This contradicts decades of rigorous scientific research, and it wasn’t an accident. When political pressure reshapes scientific communication, it undermines the public’s ability to trust critical health guidance. Research shows that even small cracks in trust can have major consequences during real emergencies.

Surveillance in Schools: The Proctorio Case

Education isn’t immune from these issues either. A long-running legal battle between librarian Ian Linkletter and the remote-proctoring company Proctorio finally settled, but the implications remain significant.

The case centered on Linkletter sharing publicly available Proctorio training videos to raise concerns about how the software surveils students.

Educators and researchers are responding by pushing for practices such as Algorithmic Impact Assessments: essentially, a health-and-safety inspection for AI tools before they’re allowed into classrooms. Initiatives like the Data & Society Algorithmic Impact Methods Lab are helping schools understand how to evaluate these systems responsibly.

🤔 Consider

We build our computer systems the way we build our cities—over time, without a plan, on top of ruins.
— Ellen Ullman

Our society’s relationship with technology is defined by a paradox. We live surrounded by systems of immense power: AI models that amplify our abilities, networks that link billions, platforms that shape our social and civic lives. Yet these systems are more fragile, more opaque, and more easily abused than we care to admit.

The City of Glass offers clarity and brilliance, but also exposure. Its cracks show where trust is thinning, where infrastructure is brittle, and where our cognitive foundations are shifting beneath us.

But this isn’t a story of collapse; it’s a call to craftsmanship.

If we build with care, transparency, and agency, the City of Glass can become something better: a place where technology reflects our best values, not our blind spots.

⚡ What You Can Do This Week

