TLDR 188

Orchids and Dandelions

Published: 2019-03-09 • 📧 Newsletter

Welcome to Issue 188. Orchids and dandelions.

Hi all, welcome to TL;DR. My name is Ian O'Byrne. I research, teach, & write about technology in our lives. I try to synthesize what happened this week in tech...so you can be the expert as well.



📺 Watch

Dear readers of this newsletter...I apologize that I did not pick up on the Momo challenge until now. I only took notice when some friends of mine in edtech mentioned that fearful parents were inundating schools with calls asking what they should do about this supposed suicide game.

Before you keep exploring, please note that this is a viral urban legend that has persisted online for a little over a year.

The UK Safer Internet Centre called the claims "fake news". YouTube said it had seen no evidence of videos showing or promoting the Momo challenge on its platform.

The Momo Challenge demonstrates how contemporary moral panics spread through social media amplification. A creepy sculpture (actually an artwork by a Japanese special effects company) becomes associated with unverified claims of dangerous content targeting children. Worried parents share warnings, media reports on parental concerns, schools respond to inquiries, creating a feedback loop in which reporting on panic generates more panic. No credible evidence emerged of actual Momo-related harm, but the widespread fear was real. The mechanism: disturbing imagery plus child safety concerns plus social media virality plus traditional media coverage equals mass hysteria. The pattern repeats: Slenderman, Blue Whale, every generation's version of "stranger danger" adapted to digital contexts. The challenge isn't content moderation but media literacy—helping parents distinguish between legitimate threats and viral legends.


📚 Read

VPNs and Privacy Theater

I've talked about virtual private networks (VPNs) a lot in the past in this newsletter. A VPN extends a private network by letting you send and receive data across a shared network as if you were directly connected to the private network. In practice, this means you connect to a third-party server and route your web traffic through it, so the sites you visit see the VPN's address rather than your own.
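You can see that third-party hop for yourself with a minimal sketch (Python standard library only; it assumes the public ipify echo service, which returns the IP address you appear to come from):

```python
from urllib.request import urlopen

# Whatever server you talk to sees the address of the last hop, not
# necessarily yours. Run this once on your normal connection and once
# with a VPN connected: the second result should be the VPN exit
# node's address, not the one your ISP assigned you.
print(urlopen("https://api.ipify.org").read().decode())
```

The same observation cuts the other way, which is the point of the pieces below: everything you send now passes through that exit node, so the VPN operator sees exactly what your ISP used to.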

This piece from Will Oremus in Slate shares his exploration and research into some of the companies in the VPN market. He basically suggests that most of these companies are a complete waste of money...and may even pose a greater privacy risk than just browsing openly online.

Read this post in TechDirt from Karl Bode for more on this important topic.

VPN privacy promises rest on trusting a third party more than your ISP. The marketing claims: VPNs encrypt traffic, hide browsing from your ISP, protect you on public WiFi, enable geo-spoofing. The reality: you're shifting trust from your ISP (regulated, accountable, visible) to a VPN provider (often anonymous, jurisdiction-shopping, incentivized to monetize your data). Most VPN companies operate opaquely—who owns them? Where are the servers located? Do they log traffic despite "no logging" claims? Have they been audited? The reporting reveals sketchy practices: shared infrastructure across competing "brands," ownership by data brokers, connections to malware distribution, marketing through sponsored content disguised as journalism. The fundamental problem: VPN privacy depends entirely on provider trustworthiness, which is nearly impossible to verify. For most users, VPNs provide security theater—a feeling of protection without meaningful privacy gain, possibly creating new vulnerabilities.

The move to open access resources, whereby anyone can read academic papers for free, is on a long, hard journey. However, the victories are starting to build up, and here's another one that could have important wider ramifications for open access, especially in the US. The University of California system indicated this week that it is moving to an open access model. It did so in grand fashion by canceling all of its subscriptions to Elsevier.

The problems faced by the University of California (UC) are the usual ones. The publishing giant Elsevier was willing to move to an open access model – but only if the university paid even more on top of what were already "rapidly escalating costs". To its credit, the institution instead decided to walk, depriving Elsevier of around $11 million a year.

UC's Elsevier cancellation represents a major escalation in the open access battles. The absurd system: publicly funded researchers conduct studies, write papers, and peer review others' work (unpaid), then universities pay publishers massive fees to access the research they created. Elsevier's profit margins (37%) exceed Apple's, extracted from publicly funded scholarship. Publishers claim they add value through editing, formatting, and distribution—but digital publishing costs approach zero while subscription prices climb 5-7% annually. UC's leverage: a 10-campus system, major research output, collective bargaining power. The risk: faculty and students temporarily lose access to paywalled journals. The bet: alternative access methods (interlibrary loan, preprint servers, author copies) plus pressure on Elsevier will prove sustainable. The stakes: if UC succeeds, other institutions follow, potentially breaking the publishing oligopoly strangling scholarly communication. Open access isn't just about free papers but about restructuring knowledge production away from profit extraction toward public benefit.
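To make those escalating costs concrete, here's a back-of-the-envelope sketch using the figures above; the ten-year horizon and simple compounding are my own assumptions:

```python
# Back-of-the-envelope: UC's ~$11 million/year Elsevier bill compounded
# at the 5-7% annual price increases cited above. The ten-year horizon
# is an illustrative assumption.
base = 11_000_000  # dollars per year
for rate in (0.05, 0.07):
    future = base * (1 + rate) ** 10
    print(f"{rate:.0%} annual increases -> ${future / 1e6:.1f}M per year after a decade")
```

That works out to roughly $18-22 million a year within a decade, which helps explain why UC chose to walk away rather than negotiate at the margins.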

YouTube: Okay What If We Suggest Racist Videos But Add Context?

Arimeta Diop in The Outline on an attempt by YouTube to curb the proliferation of false information on its platform. The online video giant has started rolling out a new feature that will fact-check users' searches.

YouTube's algorithm can take a user from a benign news report from a reputable source to content from anti-immigrant hate groups in just a few clicks through its "Up Next" recommendations. With the new feature, when a user searches a topic that has been at the center of controversy or is "prone to misinformation," an "information panel" debunking the claim and offering accurate information from fact-checkers will appear, BuzzFeed reports.

YouTube's fact-checking panels treat the symptom while ignoring the disease. The actual problem: the recommendation algorithm optimizes for engagement (watch time, clicks, session duration), which systematically promotes inflammatory, conspiratorial, extreme content because outrage drives engagement. The documented pipeline: someone watches mainstream news, the algorithm recommends increasingly extreme content, and within hours they're watching white nationalist propaganda or conspiracy theories. YouTube knows this—its own research showed the radicalization pipeline—but fixing the recommendation algorithm would reduce engagement metrics and ad revenue. So instead: information panels! Small text boxes with fact-checks that most users ignore, addressing search results while the recommendation engine continues radicalizing. It's classic platform evasion: implement a visible but ineffective intervention to deflect criticism while preserving the problematic core functionality. A real solution would redesign the recommendation algorithm to prioritize accuracy and diversity over engagement, but that requires sacrificing profit for responsibility.
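A toy sketch of that incentive problem (emphatically not YouTube's actual ranker; all titles and numbers are invented):

```python
# Toy illustration, not YouTube's actual system: ranking purely on
# predicted engagement surfaces whatever holds attention longest,
# regardless of accuracy. All titles and numbers are invented.
videos = [
    {"title": "Measured report from a reputable source", "watch_min": 2.8, "accurate": True},
    {"title": "Outrage-bait conspiracy video", "watch_min": 9.4, "accurate": False},
    {"title": "Dry fact-check information panel", "watch_min": 0.4, "accurate": True},
]

# Engagement-only objective: the conspiracy video ranks first.
by_engagement = sorted(videos, key=lambda v: v["watch_min"], reverse=True)
print([v["title"] for v in by_engagement])

# A redesigned objective could penalize inaccuracy (the 0.2 weight is
# an arbitrary illustrative choice): the measured report ranks first.
def score(v):
    return v["watch_min"] * (1.0 if v["accurate"] else 0.2)

by_weighted = sorted(videos, key=score, reverse=True)
print([v["title"] for v in by_weighted])
```

Note what the panels don't touch in this sketch: the first sort still runs every time "Up Next" fires.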

Karen Hao in Technology Review on the need to stop perpetuating the false dichotomy between technology and the humanities.

In hindsight, this separation hasn't served us so well. As Henry Kissinger wrote in the June 2018 issue of the Atlantic: "The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy."

That so-called dominating technology is artificial intelligence. Its sudden rise has already permeated every aspect of our lives, transforming our social, political, and economic systems. We no longer live in a society that reflects our old, manufactured separations. To catch up, we need to restructure the way we learn and work.

Hao identifies the Enlightenment reversal: philosophy driving technology versus technology in search of a philosophy. AI systems now make decisions about bail, hiring, credit, content moderation, and medical diagnosis—embedding values, perpetuating biases, reshaping society—without a coherent ethical framework. The "tech person" versus "humanities person" divide creates engineers who build powerful systems without understanding their social implications and humanists who critique systems without understanding their technical constraints. Neither alone is equipped for the AI age. The integration challenge: teaching computer scientists about ethics, bias, fairness, and social context while teaching humanists about algorithms, data structures, and machine learning mechanics. MIT's College of Computing attempts this through mandatory ethics requirements and interdisciplinary collaboration. The deeper issue: the education system is structured around artificial disciplinary boundaries when actual problems (algorithmic bias, misinformation, privacy, automation) require integrated knowledge. Building ethical AI requires people who are simultaneously technical and humanistic—not "tech people" or "humanities people" but humans equipped to think across domains.

I've recently been researching a bit more about technology use in early childhood. As part of this, I've been intrigued by the framework developed by Thomas Boyce and colleagues at the University of California, San Francisco. The group studies the human response to stress and examines this in the lives of children.

They suggest that most kids tend to be like dandelions: fairly resilient and able to cope with stress and adversity in their lives. But a minority of kids, those Boyce calls "orchid children," are more sensitive and biologically reactive to their circumstances, which makes it harder for them to deal with stressful situations.

The orchid-dandelion framework describes differential susceptibility to environmental influences. Dandelion children (roughly 80%) are resilient—they cope reasonably well across varied circumstances, thriving adequately even in challenging environments. Orchid children (roughly 20%) have heightened biological reactivity—elevated stress hormones, stronger physiological responses, greater sensitivity to both negative and positive experiences. In harsh environments, orchid children struggle more than dandelions, experiencing higher rates of illness, behavioral problems, and emotional difficulties. But crucially: in supportive environments, orchid children don't just do equally well—they flourish beyond dandelions, showing exceptional creativity, empathy, and academic achievement. The implication: orchid children aren't deficient but differently calibrated, requiring more careful environmental cultivation. For parenting and education: one-size-fits-all approaches miss that some children need more support, stability, and nurturance to thrive, while benefiting more dramatically when it's provided. For technology debates: orchid children are likely more susceptible to both the harms and the benefits of digital environments, requiring individualized rather than universal screen-time rules.
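A hypothetical linear sketch of that crossover interaction (all numbers invented to illustrate the shape of the claim, not taken from Boyce's data):

```python
# Differential susceptibility as a toy linear model (numbers invented):
# outcome = baseline + 10 * sensitivity * environment, where environment
# runs from -1 (harsh) to +1 (supportive).
def outcome(sensitivity: float, environment: float, baseline: float = 50.0) -> float:
    return baseline + 10.0 * sensitivity * environment

for environment, label in ((-1, "harsh"), (0, "neutral"), (1, "supportive")):
    dandelion = outcome(sensitivity=0.5, environment=environment)
    orchid = outcome(sensitivity=2.0, environment=environment)
    print(f"{label:>10}: dandelion={dandelion:.0f}, orchid={orchid:.0f}")
```

The crossover is the point: the same high sensitivity that produces the worst outcomes in a harsh environment produces the best ones in a supportive environment.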


🔨 Do

Work-Life Balance Strategies

If you feel constantly overwhelmed - like you've always got too much to do and not enough time to do it in - then you're in the same boat Frank has been in for several months. In this video, he details the plan he has been putting into action to strike a better work-life balance.

Chronic overwhelm signals a mismatch between commitments and capacity. The typical responses—work harder, sleep less, multitask more—deepen the problem by ignoring that time is fundamentally limited. Better approaches: ruthless prioritization (saying no to good opportunities for the sake of great ones), time blocking (protecting focused work periods), energy management (recognizing that different tasks require different mental states), and boundary setting (distinguishing the urgent from the important), as in the sketch below. The cultural challenge: hustle culture treats overwhelm as a badge of honor and rest as weakness. Sustainable productivity requires acknowledging human limitations and structuring work accordingly rather than demanding infinite capacity.
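Here's that urgent-versus-important triage (often called the Eisenhower matrix) as a minimal sketch; the tasks and dispositions are invented examples:

```python
# Minimal urgent-vs-important triage (Eisenhower matrix); tasks are
# invented examples, not from the video.
tasks = [
    ("Answer every email as it arrives", True, False),  # urgent, not important
    ("Write the long-term project plan", False, True),  # important, not urgent
    ("Production outage", True, True),                  # both
    ("Doomscroll social media", False, False),          # neither
]

def triage(urgent: bool, important: bool) -> str:
    if urgent and important:
        return "do it now"
    if important:
        return "time-block it"
    if urgent:
        return "delegate or batch it"
    return "drop it"

for name, urgent, important in tasks:
    print(f"{name}: {triage(urgent, important)}")
```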


🤔 Consider

"The grass is greener where you water it." — Neil Barringham

Barringham's metaphor challenges the comparative thinking underlying many of this week's stories. VPN users seeking greener privacy pastures often trade known surveillance for unknown risks. Open access advocates water scholarly communication rather than envying publishing profits. YouTube's fact-checking panels ignore that the company should be watering its recommendation algorithm instead. Orchid children need more frequent watering than dandelions but bloom more beautifully when properly tended. The insight applies to technology debates broadly: instead of assuming the grass is greener with different platforms, devices, or rules, cultivate intentional practices with the tools you have.

