DL 430
The Accounting
Published: April 29, 2026 • 📧 Newsletter
For the past two years, AI has been sold as an inevitability. Growth curves, adoption charts, and trillion-dollar buildouts have all pointed in a single, unswerving direction.
This week, the numbers tell a different story.
It isn't a story of collapse or failure, but something far more uncomfortable for those selling the future: resistance, limits, and consequences.
Welcome to the new home of Digitally Literate. If you’ve been with me for the last 429 issues, thank you for making the move. If you’re new here, glad you found it. My goal remains the same: to name what is shifting before it becomes invisible.
A quick note on architecture. While Substack is our new delivery engine, my work remains anchored at digitallyliterate.net, where every post is archived, and I continue to blog on the "human side" of these shifts at wiobyrne.com.
Let’s get into the accounting.
📉 The numbers are in, and they don't add up
OpenAI missed its internal goal of reaching one billion weekly active ChatGPT users by the end of 2025 and still hasn't announced that milestone. That shortfall, paired with market-share losses to Gemini and Anthropic, has left the company struggling to fund Sam Altman’s $600 billion "compute-first" expansion. Since that strategy was premised on indefinite growth, the miss adds significant grist to the mill for those fearing an AI bubble.
Meanwhile, in federal court in Oakland this week, Elon Musk took the stand seeking $130 billion in damages from OpenAI and the removal of Altman and Greg Brockman. At stake is whether one of the wealthiest AI companies violated its original nonprofit mission to develop AI for the benefit of humanity.
As we weigh long-term infrastructure decisions around AI tools, one thing is clear. The companies selling the "inevitable future" are nowhere near as stable as their brochures suggest.
📢 The backlash found its people
Here in the US, the most striking thing about the emerging anti-AI coalition isn't that it exists; it's who's in it. Many of its members have never been politically active before, yet they're part of a growing national movement that pits the tech industry and its billionaires against parent groups, religious leaders, environmentalists, and activists. A Quinnipiac poll this week found that 55 percent of American adults now see AI as a force for harm rather than good, with Gen Z emerging as the most pessimistic group. The same poll found 7 in 10 believe the technology will primarily be used to cut jobs.
This isn't a debate about code; it’s a debate about power and accountability. As Senator Bernie Sanders noted this week, while AI will impact every citizen, Congress has offered "minimal, minimal" discussion. Meanwhile, the industry’s response has been to label dissenters as "doomers" or "BANANAs" (Build Absolutely Nothing Anywhere Near Anyone) while pouring hundreds of millions into super PACs.
John Oliver echoed the public unease this week, noting that "AI friends" are being rushed to market not because they are ready, but because companies are desperate to show a return on their massive investments. As he put it: "Maybe it was a mistake to let some of the most flamboyantly friendless men on Earth be in charge of designing friends for the rest of us."
This is the political environment youth are inheriting. They are entering a world where the "inevitable" progress of Silicon Valley is being challenged by a coalition that spans the entire political spectrum. It's a civic question of who decides, who pays, and who truly benefits.
🏛️ The information environment is already different
A study from Stanford, Imperial College London, and the Internet Archive found that by mid-2025, roughly 35 percent of newly published websites were AI-generated or AI-assisted. In other words, in the three years since ChatGPT launched in late 2022, more than a third of the new web has been produced by machines. The researchers used the Internet Archive’s Wayback Machine to pull a representative sample of the web from 2022 to 2025, then ran that sample through the Pangram v3 detector, which appears to be one of the more robust methods for distinguishing machine-generated text from human writing.
The finding that surprised the researchers wasn't a surge in "fake news," but a surge in "relentless cheerfulness." While fact-check rates remained stable, the AI-heavy web is becoming semantically thinner: less diverse, less distinctive, and increasingly optimized to avoid saying anything uncomfortable. We aren't being drowned in lies. We’re being drowned in "bland."
This "frictionless" content is colliding with the reality of the classroom. Sal Khan, perhaps AI’s most vocal champion in education, admitted this week that his Khanmigo tutor has been "a non-event" for most students. His chief learning officer was even blunter: "I am not seeing the revolution in education."
The core problem is that AI is entirely reactive. Because students are still learning how to think, they aren't yet great at asking the right questions, and an AI can only respond to what it's asked. Khan is now pivoting his focus back to "human systems" over AI silver bullets. It is a reminder that AI can generate an answer, but it cannot spark the curiosity required to seek one.
The web your students learned to navigate was built mostly by humans with something to say. The one they're in now is increasingly produced by systems optimized to avoid friction. Those are different environments, with different failure modes. The media literacy frameworks and skills most schools teach were designed for the first one.
💭 Consider
*And so castles made of sand slips into the sea, eventually.*
— Jimi Hendrix
The danger isn’t that things fall apart. It’s how long they can look stable while they’re already eroding.
📚 Recent Work
- Writing Is a Process. So Is Losing Your Voice — I recently posted about how I built an AI model trained on my voice as a writer. The post responds to a question I received from a reader: why do this at all, when we think through our writing?
- Karpathy Found the Pattern. Educators Have Been Teaching It for Years. — A post about how I'm embedding Andrej Karpathy's LLM Wiki model in my learning management system.
🌱 Final Thought
This is the first issue of Digitally Literate to originate from Substack. Moving here is a deliberate step toward making this research sustainable.
The move introduces The Understory, a deeper layer of analysis for paid subscribers that looks at the connective tissue beneath the headlines.
If you value these reflections, there are a few ways to support the work:
- Upgrade to a paid subscription: This gives you access to The Understory and helps keep the lights on for the main newsletter.
- Stay as a free subscriber: I’m just happy you’re here. The core of DL will always remain open.
- The Tip Jar: My Ko-fi remains open for now as I figure out the best way to consolidate.
Regardless of how you choose to follow along, the mission doesn't change. I help educators understand and navigate the digital world, so their students inherit power, not just access.
See you next Wednesday on the other side. As always, my email is hello@wiobyrne.com.
🔗 Navigation
Previous: DL 429 • Next: DL 431 • Archive: 📧 Newsletter
🌱 Connected Concepts:
- The Democratic Deficit in Tech — The gap between the speed of AI deployment and the slow pace of public debate. As the "Bannon-Bernie" coalition shows, when citizens are cut out of the conversation about who decides and who pays, the technology loses its social license to operate.
- The Infinite Growth Fallacy — The high-stakes gamble that OpenAI and others have made on "indefinite user growth." This concept explains why a $600 billion compute debt becomes a liability the moment the revenue curve flattens.
- Semantic Thinning — The process by which the web becomes "relentlessly cheerful" and bland. When AI-generated content makes up a third of the internet, we lose the human "friction" of distinct voice and contrarian thought, leaving students in a sanitized information environment.
- The Inquiry Gap — The realization that AI is a mirror, not a mentor. As seen in the Khanmigo "non-event," the value of AI is capped by the user's ability to ask deep questions. If we don't teach the art of inquiry, the tool remains a silent partner.