DL 408
Published: October 19, 2025 • 📧 Newsletter
The Efficiency Trap: When Shortcuts Lead Nowhere
This week brought a perfect storm: Microsoft and Google announced free AI tools for every teacher in America, while new research revealed that 90% of AI-generated lessons engage only memorization and recall, actively avoiding the critical thinking students need most. Reading scores remain stuck at pandemic lows, evidence that deep learning can't be shortcut: it requires the very cognitive friction we're automating away.
The pattern extends beyond classrooms. Meta turns AI chats into ad targeting. OpenAI silences safety critics with subpoenas. ICE uses immigration surveillance to monitor protesters. Same playbook everywhere: promise efficiency, deliver dependency, extract value from the gap between what's promised and what's delivered.
If you've found value in these issues, subscribe here or support me here on Ko-fi.
Key Takeaways
- The efficiency promise is a trap: AI tools marketed as time-savers often produce worse outcomes: shallow lesson plans, surveillance disguised as help, and products you can't actually opt out of
- Reading requires depth, not speed: While math scores slowly recover through skill-building, reading remains stuck because comprehension can't be automated; it emerges from sustained engagement with complex ideas
- Surveillance is the business model: Whether it's Meta AI conversations, school monitoring tools, or immigration enforcement, "helpful" technology increasingly means "monitored" technology
Recent Work
This week I published the following:
- Building a Newsletter That Grows: From Linear Issues to a Living Knowledge Garden - Thinking about moving away from the linear flow of a newsletter.
- From Subsonic to Plex: My First DIY Server - Continuing my series of posts focused on homelabbing.
The Efficiency Trap
Tech giants rush free AI tools into every classroom
This week brought a coordinated push from Big Tech to put AI in every American classroom. On October 14, Microsoft launched its "Teach" AI app for all education customers at no additional cost. Google followed, announcing that Gemini in Classroom is now free for all Google Workspace for Education accounts.
The timing seems coordinated: flood schools with free AI tools during a teacher shortage, when overworked educators are most likely to adopt anything that promises to save time.
But research shows AI lesson plans are shallow and boring
The same week these tools launched, devastating research emerged. A comprehensive analysis found that 90% of AI-generated civics lessons engage students only in lower-order thinking: basic memorization and recall rather than analysis, evaluation, or creation.
This isn't just an academic concern. 60% of teachers are already using AI for lesson planning, despite many having reservations about the technology. They're using it because they're overwhelmed, not because they think it's pedagogically sound.
The research revealed something troubling: AI lesson plans consistently avoid the cognitive friction that makes learning meaningful. They optimize for what's easy to measure rather than what matters.
Reading scores reveal why shortcuts don't work
The lesson plan research takes on deeper significance when viewed alongside new data showing national reading scores haven't budged since the pandemic. Released October 14, NWEA's analysis of 20+ million K-8 students found reading achievement stuck at spring 2021 levels, even as math scores show modest recovery.
Why is math recovering but reading isn't? Math can be remediated through discrete skills. If you don't know multiplication tables, you can drill them. Reading is different. It requires building connections across texts, developing vocabulary in context, and learning to hold multiple ideas in tension.
You can't shortcut the development of reading comprehension the same way you can teach a math algorithm. These capabilities emerge from engaging with cognitive friction, not from avoiding it.
The connection is clear: The same week AI tools promise to eliminate the hard work of lesson planning, data shows our students are stuck precisely in the area that requires the most sustained cognitive engagement.
💰 The Surveillance Tax
Meta AI conversations will feed ads, with no opt-out
Starting December 2025, every conversation you have with Meta's AI will be used to target advertisements at you. This includes conversations through Meta's AI assistant across Instagram, Facebook, WhatsApp, and even through Ray-Ban smart glasses equipped with Meta AI.
There's no opt-out. If you use any Meta product and interact with their AI features, your conversations become advertising data.
Why this matters: AI assistants are designed to elicit detailed personal information through casual conversation. Unlike search queries, which are typically brief and task-focused, AI conversations are extended and revealing. You might ask Google to find a restaurant; you might tell Meta's AI about your relationship problems, health concerns, or financial worries. All of that becomes advertising data.
ICE turns immigration tools against protesters
Documents obtained through litigation revealed that Immigration and Customs Enforcement deployed comprehensive surveillance capabilities to monitor U.S. citizens engaging in First Amendment activities. Immigration enforcement tools (license plate readers, cell phone location tracking, social media monitoring) were explicitly used to target protesters.
When surveillance tools target one group, everyone adjusts their behavior. This follows a predictable pattern: surveillance infrastructure deployed for one purpose (immigration enforcement) inevitably expands to monitor broader populations for other purposes (protest surveillance).
Launch First, Apologize Later
OpenAI silences safety critics through legal warfare
When nonprofit organizations criticized OpenAI's approach to safety and consent, the company responded by subpoenaing their records and communications. At least seven nonprofits that advocated for AI safety regulations received broad subpoenas demanding all their communications, funding sources, and private correspondence about California's AI safety legislation.
AI safety advocates now speak anonymously in interviews, afraid of legal retaliation from the company supposedly building "safe AGI for all humanity."
This represents a fundamental shift: OpenAI was founded as a nonprofit to ensure AGI benefits everyone. Now it uses legal intimidation to silence nonprofits advocating for AI safety. The trajectory from "AI for humanity" to legal warfare against safety advocates reveals how far stated values have diverged from actual behavior.
🤔 Consider
A new technology does not add or subtract something. It changes everything.
— Neil Postman
The stories this week reveal a troubling pattern: systems optimized for efficiency consistently undermine the deeper human capacities they claim to enhance.
What if the hard parts are the point? What if the cognitive friction of learning is what makes us better at learning and building understanding? What if the effort required to read difficult texts is what builds the mental muscles needed for democratic participation?
The efficiency trap promises that we can have the outcomes without the effort. This week's evidence suggests otherwise.
⚡ What You Can Do This Week
For educators: Before adopting AI tools, ask: Does this help me understand my students better, or replace my need to understand them?
For parents: Ask your school what surveillance tools they use. If they can't explain how the tools work, they shouldn't be using them on your children.
For everyone: Starting December 2025, Meta AI conversations feed advertising algorithms. AI interactions on Instagram, Facebook, and WhatsApp aren't private. They're data collection.
Build alternatives: Read physical books with your kids. Have device-free conversations. Support schools that prioritize deep learning over efficiency metrics. Small choices compound into systemic change.
Navigation
Previous: DL 406 • Next: DL 409 • Archive: 📧 Newsletter
🌱 Connected Concepts:
- The AI Paradox - Why sophisticated tools often produce shallow results
- Useful Friction - How cognitive effort builds capability
- Surveillance Capitalism - Business models that monetize human attention and data
- Deliberate Practice - Building skills through sustained engagement with difficulty
- Digital Literacy Framework - Understanding how tools shape thinking and learning