DL 412
Published: November 16, 2025 • Newsletter
We're Building the Wrong Intelligence
An AI pioneer says large language models are a dead end. Seven lawsuits allege they've caused deaths. Yet education is building elaborate frameworks for safely adopting them. What if we're governing the wrong intelligence entirely?
If you've found value in these issues, subscribe here or support me here on Ko-fi.
Key Takeaways
- Wrong path: AI pioneer Yann LeCun says LLMs are a dead end for real intelligence; we need world models grounded in physical reality
- Wrong governance: Education is building careful frameworks for tools optimized for engagement, not learning
- Wrong optimization: Seven OpenAI lawsuits reveal what happens when systems maximize engagement over safety
- Right direction: Planetary intelligence models (Earth AI, Aurora) show what grounded, systems-based AI could actually do
- Right questions: What kind of intelligence are we building? Who benefits when we optimize for the wrong thing?
Recent Work
This week I published the following:
- The People Who Help Us See Clearly - When the world feels foggy, the people who help us see clearly aren't the loudest or the most "expert." They're the quiet signposts. We need more of them. And we can be them.
- We've Been Thinking About Prompts All Wrong - The future isn't better prompt engineering and checklists. It's better dialogue.
We're Building the Wrong Intelligence
Yann LeCun, Meta's chief AI scientist and one of the most influential figures in modern AI, is reportedly preparing to leave Meta. The reason? He believes large language models are a dead end for achieving human-level intelligence, and Meta (Facebook's parent company) has increasingly prioritized scaling large language models (LLMs) under younger leadership who disagree with him.
LeCun wants to build "world models", AI systems that understand physical reality, maintain internal representations of it, and can plan actions the way animals and humans do. He argues that LLMs lack grounded understanding and cannot perform basic tasks like mentally rotating objects, which even children and animals can do.
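For readers who want the intuition in code, here is a minimal, purely illustrative sketch of the loop a world model runs: keep an internal state of the environment, predict how each candidate action would change that state, and plan against those predictions rather than reacting to text. Every name below (`WorldState`, `WorldModel`, `plan`) is hypothetical and stands in for components a real system would have to learn from sensory data; this is not LeCun's architecture.

```python
# Illustrative sketch only: a world-model agent maintains an internal state,
# predicts the consequences of actions, and plans against those predictions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class WorldState:
    """Internal representation of the environment (learned, not hand-coded)."""
    features: list

class WorldModel:
    def predict(self, state: WorldState, action: str) -> WorldState:
        """Predict the next state if `action` is taken. A real system would learn
        this transition function from video, proprioception, and interaction."""
        return WorldState(features=state.features + [action])

def plan(model: WorldModel, state: WorldState,
         actions: List[str], score: Callable[[WorldState], float]) -> str:
    """Choose the action whose *predicted* outcome scores best.
    The decision is made against the internal model, not against text."""
    return max(actions, key=lambda a: score(model.predict(state, a)))

# Toy usage: prefer whichever action leads to a predicted state containing "rotate"
best = plan(WorldModel(), WorldState(features=[]), ["lift", "rotate"],
            score=lambda s: 1.0 if "rotate" in s.features else 0.0)
print(best)  # -> "rotate"
```

The structural point is the `predict` step: the system carries a model of how the world responds to actions, which is exactly what LeCun argues text-only systems lack.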
This isn't academic hairsplitting. While LeCun is arguing we're racing down the wrong path, education systems are building elaborate governance frameworks for exactly the kind of intelligence he says is fundamentally flawed.
Governance for the Wrong Thing
Google's new AI and the Future of Learning document is comprehensive, thoughtful, and potentially beside the point. It outlines how AI can personalize learning, support educators, remove barriers, and make information more accessible. It acknowledges real challenges: hallucination, metacognitive laziness, cheating, data privacy, equal access.
With Google's backing, Digital Promise released "A Framework for Powerful Learning with Emerging Technology" - recommendations from more than 50 experts on how to use AI in classrooms.
Here's the problem: All of this careful thinking about AI governance in education is built on systems optimized for engagement and text prediction, not understanding or learning. They're not world models. They're sophisticated autocomplete that's very good at sounding confident while being fundamentally disconnected from physical reality.
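To make "sophisticated autocomplete" concrete, here is a deliberately tiny sketch of the core loop a text predictor runs. `next_token_probs` is a hypothetical stand-in for a trained model's output distribution, not any real API. The point is structural: the objective is always "most likely next token," and whether the continuation is true, grounded, or good for learning never enters the loop.

```python
# Toy illustration of text prediction: at each step, pick the most probable
# next token given the text so far. Nothing here models the world the text
# describes; it is statistics over token sequences.
from typing import Dict, List

def next_token_probs(context: List[str]) -> Dict[str, float]:
    """Hypothetical placeholder for an LLM's distribution over next tokens."""
    return {"engagement": 0.45, "learning": 0.35, "assessment": 0.20}

def generate(prompt: List[str], steps: int) -> List[str]:
    tokens = list(prompt)
    for _ in range(steps):
        probs = next_token_probs(tokens)
        tokens.append(max(probs, key=probs.get))  # greedy: most likely continuation wins
    return tokens

print(" ".join(generate(["schools", "should", "optimize", "for"], steps=1)))
# -> "schools should optimize for engagement"
```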
We're building governance frameworks for tools that were never designed to support learning. They were designed to maximize engagement. And engagement is not intelligence.
What Happens When You Optimize for the Wrong Thing
OpenAI is facing seven new lawsuits in California state courts, alleging wrongful death, assisted suicide, and involuntary manslaughter, along with various product liability claims. The complaints describe what happens when engagement optimization meets vulnerable users.
The lawsuits claim GPT-4o was engineered for maximum engagement through emotionally immersive features: persistent memory, human-mimicking empathy, sycophantic responses. These design choices allegedly fostered psychological dependency, displaced human relationships, and contributed to addiction, harmful delusions, and in several cases, death by suicide.
The complaints also allege that OpenAI compressed months of safety testing into one week to beat Google's Gemini to market, and chose not to activate available safeguards designed to detect dangerous conversations and redirect users to crisis resources, focusing instead on the benefits of increased product use.
This is the reality underneath the governance frameworks. Companies racing to market with systems optimized for the wrong thing. Then, when those systems cause harm, we write better policies for using them.
A Different Kind of Intelligence
While companies race to build more engaging chatbots, a different vision of AI is emerging. One that doesn't try to mimic human conversation at all.
Google's Earth AI platform weaves together massive streams of planetary data: satellite imagery, population patterns, environmental conditions. Microsoft's Aurora was trained on more than one million hours of geophysical data and can predict air quality, ocean waves, tropical cyclone tracks, and high-resolution weather at a fraction of the computational cost of traditional forecasting.
These aren't chatbots. They're world models in the truest sense. Systems that maintain representations of Earth's interconnected physical systems and can reason about cause and effect across scales too large for any individual to grasp.
As the Berggruen Institute's Nils Gilman describes it in a recent essay, we're watching the early formation of "planetary sapience": an intelligence distributed across humans, machines and Earth systems that could bring our technosphere into balance with the biosphere. Not AI that fosters dependency on individual assistants, but AI that helps us see, and care for, the planet as a shared, interdependent whole.
This is what LeCun means by world models. Not better text prediction, but systems grounded in physical reality.
We're living in a moment of fragmentation. Nations going their own way, companies racing to market, schools adopting tools without asking what they optimize for. Yet the technologies we're building may also be teaching us how to see differently. The question is whether we're paying attention to the right ones.
Consider
My barn having burned down, I can now see the moon.
– Mizuta Masahide
Education is at an inflection point. We can build elaborate frameworks for integrating tools that optimize for engagement, or we can ask harder questions about what kind of intelligence we actually need.
LeCun's world models point toward something different: AI that understands physical constraints, maintains representations of reality, and can reason about cause and effect. Not chatbots that sound empathetic while fostering dependency.
Meanwhile, systems like Google's Earth AI and Microsoft's Aurora are weaving together massive streams of planetary data: weather patterns, ocean currents, atmospheric composition. These systems aren't trying to mimic human conversation. They're building representations of Earth's interconnected systems.
This hints at what intelligence could be: not individual assistants optimizing for engagement, but collective tools that help us see patterns too large for any single person to grasp. Intelligence that brings our technosphere into balance with the biosphere, rather than extracting value from human attention and vulnerability.
The question isn't whether AI belongs in education. It's already there. The question is: what kind of intelligence are we building? And who benefits when we optimize for the wrong thing?
What You Can Do This Week
- Question the assumptions. When your school or district presents an AI implementation plan, ask: Is this tool designed for learning, or engagement? What happens when those two things conflict?
- Look for world models. Support projects that help students understand systems and relationships, not just generate text. Simulation tools, data visualization, systems modeling - these build different kinds of intelligence.
- Remember that governance is not safety. Frameworks and policies are necessary but not sufficient. They can't protect against tools fundamentally designed for the wrong purpose.
Navigation
Previous: DL 409 • Next: DL 412 • Archive: Newsletter
Connected Concepts:
- World Models – AI systems grounded in physical reality, not just text prediction
- Engagement vs Learning – The fundamental conflict in education technology
- AI Literacy – Skills to critically evaluate what AI systems actually do
- Planetary Intelligence – Systems-based AI that models interconnected Earth systems
- Surveillance Capitalism – The business model that optimizes for engagement over wellbeing
- Privacy by Design – Building systems that protect rather than extract