TLDR 185

Becoming the Machine

Published: 2019-02-15 • 📧 Newsletter

Welcome to Issue 185. Becoming the machine.

Hi all, welcome to TL;DR. My name is Ian O'Byrne. I research, teach, & write about technology in our lives. I try to synthesize what happened this week in tech...so you can be the expert as well.

I posted a couple of things this week:


🔖 Key Takeaways


📺 Watch

This week I spent time with one of my classes talking about "text" in their lives. This led to a discussion about podcasts, emoji, and memes. At that point, one of my students mentioned the "Peppa Pig whistle meme" and I was like...what!!!

I'm glad I took the time to watch. It's hilarious. Enjoy.

The Peppa Pig meme demonstrates how "text" has expanded beyond written words to include sounds, images, remixes, and cultural references. Students' textual worlds include podcasts (audio narratives), emoji (visual-symbolic communication), and memes (multimodal cultural commentary). Understanding literacy now requires recognizing these diverse forms as legitimate texts with their own grammars, conventions, and meanings. The meme's hilarity comes from absurdist juxtaposition, evidence that humor itself is a textual practice requiring cultural fluency.


📚 Read

This short essay was written by Oliver Sacks shortly before his death from cancer in 2015.

In this essay, Sacks draws parallels between the modern world around him and E.M. Forster's classic 1909 short story "The Machine Stops," which imagines a future where humans live in separate cells, communicating only through audio and visual devices.

Forster's 1909 story is chillingly prescient: humans living underground in standardized cells, never meeting face-to-face, communicating through screens, dependent on the Machine for all their needs, worshipping it while it slowly fails. Sacks recognized that we are living Forster's dystopia: separate cells (apartments, bedrooms), audio-visual communication (screens everywhere), Machine dependence (infrastructure we don't understand). The story's power lies in showing how comfort and convenience can create catastrophic fragility. When the Machine stops, people have lost the capacity for direct experience, physical competence, and face-to-face relationships. We are becoming the machine by outsourcing cognition, navigation, memory, and connection to our devices.

AR Will Spark the Next Big Tech Platform - Call It Mirrorworld

Kevin Kelly on the "next big thing after the next big thing." How we are building a 1-to-1 map of almost unimaginable scope. When it's complete, our physical reality will merge with the digital universe.

The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.

Kelly's mirrorworld is augmented reality at planetary scale: every physical object, location, and space gets a digital twin encoding its history, properties, relationships, and potential interactions. Unlike virtual reality (entirely synthetic) or current AR (overlays on reality), the mirrorworld creates a parallel dimension where the physical and the digital are inseparable. The implications are profound: spatial computing, persistent digital annotations, object-level data, context-aware everything. The questions: who builds this mirror? Who owns the reflections? What happens when physical reality becomes the substrate for digital property rights and surveillance?
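To make the object-level idea concrete, here is a minimal sketch in Python of what a single mirrorworld "digital twin" record might carry: identity, position, properties, relationships to other twins, and a history of annotations. Every class, field, and value below is a hypothetical illustration, not something Kelly or any real AR platform specifies.

```python
# Hypothetical sketch of an object-level "digital twin" record.
# All names and values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class DigitalTwin:
    object_id: str                        # stable identifier for the physical object
    position: tuple[float, float, float]  # location in a shared spatial coordinate frame
    properties: dict = field(default_factory=dict)     # what the object is and can do
    relationships: list = field(default_factory=list)  # links to other twins (room, owner, parts)
    history: list = field(default_factory=list)        # timestamped annotations and interactions


# Example: a classroom projector as a mirrorworld object
projector = DigitalTwin(
    object_id="projector-rm214",
    position=(32.78, -79.93, 12.0),
    properties={"type": "projector", "status": "on"},
    relationships=["room-214", "building-education-center"],
)
projector.history.append(("2019-02-15T09:00", "annotation", "bulb replaced"))
print(projector.object_id, projector.history)
```

Even this toy record makes the ownership question tangible: whoever controls the schema and the datastore controls what the mirror can say about the physical world.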

This opinion piece from Thomas Friedman shares insights from the College Board and a surprising conclusion about the keys to success in college and life. Their answer: students need to master "two codes," computer science and the U.S. Constitution.

"Their short answer was that if you want to be an empowered citizen in our democracy — able to not only navigate society and its institutions but also to improve and shape them, and not just be shaped by them — you need to know how the code of the U.S. Constitution works. And if you want to be an empowered and adaptive worker or artist or writer or scientist or teacher — and be able to shape the world around you, and not just be shaped by it — you need to know how computers work and how to shape them."

The "two codes" framing is brilliant: both Constitution and computer code are sets of rules that appear fixed but are actually interpreted, contested, and modified by those who understand them. Literacy in both prevents being subject to systems you don't comprehend. Constitutional literacy means understanding rights, processes, checks and balances—how power flows and can be contested. Computational literacy means understanding algorithms, data structures, automation—how technology embeds values and can be shaped. Without both, you're governed by code (legal and computational) you can't read, challenge, or change. This isn't just skills acquisition but civic capacity for agency.

Autocomplete Presents the Best Version of You

It is interesting to see how individuals continue to modify their behaviors to fit the abilities of technology...as opposed to technology being modified to meet the needs of people.

"The predictive text meme is comforting in a social media world that often leaps from one dismal news cycle to the next. The customizations make us feel seen. The random quirks give our pattern-seeking brains delightful connections. The parts that don't make sense reassure us of our human superiority—the machines can't be taking over yet if they can't even write me a decent horoscope! And the topic boundaries prevent the meme from reminding us of our human frailty. The result is a version of ourselves through the verbal equivalent of an Instagram filter, eminently shareable on social media."

Autocomplete memes reveal how we are adapting ourselves to machine affordances. Predictive text learns from your patterns, then you perform for what it has learned, creating a feedback loop in which human and algorithm co-construct identity. The "best version" isn't the authentic self but the machine-legible self: the you that fits algorithmic categories. Just as Instagram filters smooth faces, autocomplete smooths text into shareable performance. We find this comforting because it makes our pattern-seeking brains feel understood while maintaining the illusion of human superiority through AI's failures. But becoming the machine means measuring ourselves by machine metrics and shaping ourselves to algorithmic expectations.
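For a sense of what "learning from your patterns" looks like under the hood, here is a minimal sketch, assuming nothing about any real keyboard's model: a word-level bigram predictor trained on a few hypothetical messages, then chained the way the meme chains middle-button suggestions. Actual keyboards use far richer models and rank suggestions by frequency and context; this is only the skeleton of the idea.

```python
# Minimal bigram "predictive text" sketch; the sample messages are invented.
import random
from collections import defaultdict


def train_bigrams(corpus):
    """Map each word to the words that have followed it in the user's text."""
    followers = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            followers[current].append(nxt)
    return followers


def autocomplete(followers, seed, length=8):
    """Repeatedly pick a plausible next word, like tapping the middle suggestion."""
    words = [seed.lower()]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)


my_messages = [
    "i am so tired of grading tonight",
    "i am going to the gym after class",
    "the gym was closed so i am home",
]
model = train_bigrams(my_messages)
print(autocomplete(model, "I"))  # e.g. "i am going to the gym was closed so"
```

The feedback loop described above is visible even here: the model can only hand back recombinations of what you have already typed, which is exactly why the output feels both like you and like a filtered version of you.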

Why It Costs Colleges Far More to Educate a Physicist or Teacher Than an English Major

A recent working paper from professors at the University of Michigan, University of Delaware and University of North Carolina examined the different factors that can influence the cost of a college education. The research looked at the cost to colleges of teaching different subjects and how various elements — such as professor salaries or average class size — pull these costs up or down.

The cost variation reveals hidden subsidy structures in higher education: humanities and social sciences subsidize STEM and professional programs. English majors generate revenue (large classes, lower faculty salaries, minimal equipment) that subsidizes physics majors (small classes, high faculty salaries, expensive labs). This matters for equity and institutional priorities: if STEM costs more, focusing enrollment there without corresponding funding creates pressure to cut "profitable" humanities programs. It also challenges narratives about economic value—STEM may generate higher salaries but requires higher educational investment. The sustainable model requires acknowledging these cross-subsidies explicitly rather than letting market forces hollow out programs that make others possible.


🔨 Do

Thanks to the YouTube algorithms, this week we found the Food Hacks for Kids series on the DreamWorks TV YouTube channel.

The short clips focus on "expert food hackers" Shanynn and Whitley as they demonstrate new food hacks for kids.

This series has inspired my children to think more about cooking...and experimenting with me in the kitchen.

Food hacks demonstrate maker pedagogy: take existing materials, understand their properties, experiment with combinations, and create something new. The "hacking" frame treats cooking as creative problem-solving rather than recipe-following. Kids learn by doing, failing, and iterating, which is precisely what maker education advocates. The YouTube algorithm's recommendation shows how platforms can surface educational content alongside entertainment, though what gets recommended reflects algorithmic priorities, not necessarily learning value. The key is children moving from consumption to creation, from watching to making, from passive to active engagement with food and family.


🤔 Consider

"Becoming is better than being." — Carol Dweck

Dweck's growth-mindset philosophy challenges the fixed identity at the heart of becoming the machine. Machine logic treats humans as static: you are your data, your history predicts your future, your profile defines your possibilities. But becoming means continuous change, learning from failure, evolving beyond past patterns. Forster's Machine stops when its people lose the capacity for becoming, frozen in comfortable cells. The mirrorworld risks freezing physical reality into fixed digital representations. The two codes enable becoming by providing tools to reshape systems. Autocomplete presents being (a fixed algorithmic identity) while obscuring becoming (unpredictable human growth). "Becoming is better than being" means resisting reduction to machine-legible categories and maintaining a capacity for change that machines can't predict or control.


