TLDR 159
Too Long; Didn't Read Issue 159
Published: 2018-07-20 • 📧 Newsletter
Welcome to Issue 159, on the balance between anonymity and general disinformation.
This week I posted the following:
- Four Questions For Kathryn Kennedy about following your passions - In this discussion with Kathryn Kennedy, I talk about the current transitions in her career. We discuss the guidance, focus, and balance needed to make these changes while listening to your dreams and desires.
- Making sense of teaching, learning, & assessment with technology - A review of teaching with tech, moving from educational technology to instructional technology to literacy practices.
🔖 Key Takeaways
- Children's YouTube Nightmare: James Bridle reveals algorithmically generated YouTube content that exploits young children with disturbing mashups while generating advertising revenue.
- Disinformation Goals: Mike Caulfield argues that disinformation aims to render the informational environment inoperable; once accuracy and trust are eliminated, only power remains.
- Anonymous Data Myth: So-called anonymous data acts as a fingerprint that can easily identify individuals, from medical records to purchase histories.
- Dark Patterns Everywhere: UX manipulation techniques obscure simple steps, steering users into actions they never intended through deceptive design choices.
- Youth Anxiety: Social media creates a challenging landscape for young people, requiring parental engagement that goes beyond simply cutting off access.
📺 Watch
The nightmare of children's YouTube (16:32)
Writer and artist James Bridle uncovers a dark, strange corner of the internet, where unknown people or groups on YouTube hack the brains of young children in return for advertising revenue.
From "surprise egg" reveals and the "Finger Family Song" to algorithmically created mashups of familiar cartoon characters in violent situations, these videos exploit and terrify young minds—and they tell us something about where our increasingly data-driven world is headed.
The phenomenon, sometimes called "Elsagate," reveals how algorithmic content recommendation combined with automated content creation produces disturbing material that targets children. Videos use familiar characters from Disney, Peppa Pig, and Spider-Man in scenarios involving violence, sexualized content, or otherwise inappropriate themes. Because they feature recognizable characters and trigger algorithmic recommendations, children encounter them through YouTube's autoplay and suggested videos.
The business model is straightforward: create content that games YouTube's recommendation algorithm, target an audience (children) that watches repeatedly, and collect the advertising revenue. The content doesn't need to be good or appropriate, just algorithmically optimized and addictive.
"We need to stop thinking about technology as a solution to all of our problems, but think of it as a guide to what those problems actually are, so we can start thinking about them properly and start to address them," Bridle says.
The broader lesson extends beyond children's content: When platforms optimize for engagement and advertising revenue without meaningful human oversight, disturbing outcomes emerge. Algorithms amplify whatever drives views, whether educational, entertaining, or exploitative.
📚 Read
Mark Zuckerberg and Holocaust denial
On the latest episode of Recode Decode, hosted by Kara Swisher, Facebook CEO Mark Zuckerberg sat down to talk about Cambridge Analytica, why Infowars is still on Facebook and the danger of over-regulation, among many other topics.
In this interview, he cited Holocaust denial as an example of controversial misinformation that Facebook would allow to remain on the platform. Facebook has said that it allows conspiracy theories to remain on the site, but limits their reach so fewer people see them.
Zuckerberg's comments drew immediate condemnation on social media, in the press, and among civil rights activists. He quickly tried to walk back his statements.
The controversy reveals a fundamental tension in content moderation: where is the line between protecting free expression and preventing harmful misinformation? Zuckerberg's initial framing suggested Holocaust denial was merely a "wrong" opinion rather than dangerous historical revisionism. His comparison implied an equivalence between people who "get things wrong" and the organized denial of genocide.
The backlash forced a clarification, but the underlying questions remain: Should platforms treat all misinformation equally? Does denying historical atrocities deserve the same treatment as garden-variety factual errors? How do we balance free speech principles with the responsibility to prevent the spread of hate and dehumanization?
These aren't just Facebook's problems—they're fundamental challenges for any platform making editorial decisions at scale while claiming to be neutral infrastructure.
This is what disinformation looks like
This Twitter thread from Mike Caulfield effectively breaks down his thinking about the aims and goals of disinformation tactics, and how to respond to them.
Caulfield suggests that the main goal is to render the informational environment inoperable. Once accuracy, bias, and trust are eliminated from our communication spaces, only power exists.
In the past, I've written about the need to teach students healthy skepticism to address these initiatives. Caulfield posits the following:
"I can't stress this enough. We don't need to build skepticism in our students. We need to build a love of distinguishing the mostly true from the mostly false, a belief that degrees of truth matter."
This reframing is crucial. Skepticism alone creates paralysis—if everything is questionable, nothing is knowable. Universal skepticism serves disinformation by making people give up on discerning truth altogether.
Instead, we need commitment to truth-seeking combined with practical verification skills. Not "trust nothing" but "verify what matters." Not "everything is biased" but "understand how bias shapes framing while facts remain checkable."
Mike also shares this playlist on Online Verification Skills to assist in your instruction and own learning.
The goal of disinformation isn't to make people believe false things; it's to make people believe nothing is true. When the informational environment becomes inoperable, power fills the vacuum left by collapsed truth claims.
'Data is a fingerprint': why you aren't as anonymous as you think online
This piece in The Guardian talks about how so-called 'anonymous' data can be easily used to identify everything from our medical records to purchase histories.
One of the key takeaways:
"One of the failings of privacy law is it pushes too much responsibility on to the consumer in an environment where they are not well-equipped to understand the risks," said Johnston. "Much more legal responsibility should be pushed on to the custodians [of data, such as governments, researchers and companies]."
The technical reality: even "anonymized" datasets containing location data, browsing history, purchase patterns, or health information can often be re-identified by cross-referencing with other datasets. Each person's patterns create a unique fingerprint. Visit these three stores, browse these five websites, take this route to work, and suddenly you're identifiable even without a name attached.
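To make the fingerprint idea concrete, here is a minimal sketch in Python of how a handful of coarse attributes can single out individuals in a supposedly anonymous dataset. The data and column names are invented for illustration; real linkage attacks join such quasi-identifiers against outside sources like voter rolls or loyalty-card records.

```python
# A toy demonstration of quasi-identifier re-identification (hypothetical data).
import pandas as pd

# "Anonymized" records: no names, just coarse attributes plus sensitive data.
records = pd.DataFrame({
    "zip3":      ["021", "021", "100", "100", "021"],
    "birth_yr":  [1978, 1985, 1978, 1990, 1978],
    "sex":       ["F", "M", "F", "F", "F"],
    "diagnosis": ["asthma", "flu", "diabetes", "flu", "asthma"],
})

# Count how many records share each (zip3, birth_yr, sex) combination.
quasi_ids = ["zip3", "birth_yr", "sex"]
group_sizes = records.groupby(quasi_ids).size()

# Combinations matched by exactly one record are re-identifiable: anyone who
# knows a target's ZIP prefix, birth year, and sex can recover the diagnosis.
unique_combos = group_sizes[group_sizes == 1]
print(f"{len(unique_combos)} of {len(group_sizes)} combinations are unique")
```

In this toy table, three of the four attribute combinations match exactly one record, so anyone who already knows those three mundane facts about a person can read off that person's "anonymous" diagnosis. Real datasets have far more columns, which only makes the fingerprints sharper.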
Privacy laws often assume anonymization solves the problem, allowing data to be shared and analyzed without consent as long as identifying information is removed. But mounting research demonstrates this assumption is false: anonymization is fragile, easily broken with enough auxiliary information.
The burden shouldn't fall on individuals to somehow protect against re-identification they can't prevent. Data custodians need stronger legal obligations to prevent re-identification attempts and to limit the data combinations that make them possible.
What parents need to know about social media and anxiety
Social media is often a troublesome landscape for adults. Over the last couple of months, I've had a lot of discussions with many of you about whether or not social media is just plain bad for us.
From cyberbullying, to FOMO, to harassment, our youth are also trying to negotiate these spaces. Unfortunately, simply cutting off social media isn't necessarily the answer.
This post suggests the following tips:
- Encourage self-care
- Help kids put social media in perspective
- Encourage offline activities
- Talk about their feelings
- Let them know you're there for them
- Get help
The reality: For teens, social media is where their social lives happen. Cutting them off completely creates social isolation rather than solving problems. The challenge is helping young people develop healthy relationships with these tools—recognizing manipulation, managing comparison, maintaining boundaries, seeking support when needed.
Parents need to stay engaged without being invasive, offer guidance without judgment, and model healthy tech use themselves. The goal isn't eliminating social media but developing digital citizenship skills that serve young people throughout their lives.
How to notice and avoid dark patterns online
This week I video conferenced with my Dad to help him set up a new wireless printer/scanner in his home office. After almost three hours of expletive-laced dialogue (from him), we got him all set up. The main challenge was helping him click through the labyrinth of links and pages to identify the drivers, install them, and then reset his passwords (because we realized he'd forgotten them).
This made me think about a frustrating phenomenon: steps that should be simple are deliberately obfuscated or hidden. This post from Lifehacker shares insight into these dark patterns and how to avoid them.
This video by The Nerdwriter helpfully explains dark patterns and gives some classic examples of different types you'll encounter around the web. It can also be viewed at darkpatterns.org, a site conceived by UX researcher Harry Brignull.
The site also includes a Hall of Shame of examples collected on Twitter, and deeper dives into the different types of dark pattern.
Dark patterns include:
- Confirmshaming - making you feel guilty for declining an offer
- Hidden costs - revealing fees only at the final checkout step
- Bait and switch - a button appears to do one thing but does another
- Forced continuity - charging your card after a free trial ends, without warning
- Roach motel - easy to get into a subscription, nearly impossible to cancel
These aren't accidental UI mistakes; they're deliberate choices to manipulate users into actions that serve company interests over user interests. Recognizing dark patterns is an essential digital literacy skill as more of life moves online and more interfaces are designed to extract value from users rather than serve them.
🔨 Do
Train your brain to feel better with these 4 techniques
Depending on your political or social contexts, you're either overwhelmed, anxious, and depressed, or overwhelmed, anxious, and depressed.
This post in The Conversation from Laurel Mellin shares four brain-based techniques to bounce back from stress:
- See stress as a moment of opportunity - Reframe stress as signal for growth rather than threat
- Check your stress number - Use 1-5 scale to become aware of stress levels
- Update your unconscious expectations - Identify and challenge automatic negative thoughts
- The power of compassion and humor - Practice self-compassion and find humor in situations
Use the EBT 5-point system to check your stress number:
- 1 = Low stress, feeling good
- 2 = Mild stress, slightly uncomfortable
- 3 = Moderate stress, clearly stressed
- 4 = High stress, very stressed
- 5 = Volcanic, completely overwhelmed
The techniques acknowledge that stress is inevitable, especially given the current news cycle, but our responses to stress are shapeable. Rather than eliminating stressors (often impossible), we can develop tools for managing our stress responses and recovering resilience.
🤔 Consider
"Makeup is about balance. When the eye makes a statement, the lips should be quiet." — Francois Nars
Nars's aesthetic principle captures this issue's theme of navigating competing tensions: when one element dominates, others must recede for the composition to work. The issue title, "balance between anonymity and general disinformation," suggests these forces exist in tension. Both privacy and open information matter, but maximizing both simultaneously may be impossible. When we prioritize anonymity, we create spaces for disinformation to flourish unchecked; when we eliminate anonymity to combat disinformation, we destroy privacy and enable surveillance.

Each story this week illustrates the pattern. The children's YouTube nightmare emerges from imbalance, with algorithmic optimization overwhelming human judgment. Zuckerberg's Holocaust denial controversy reveals the imbalance between free expression and preventing harm. Caulfield's disinformation analysis shows how eliminating trust creates an imbalance in which only power remains. Anonymous data fingerprinting demonstrates a false balance between data utility and privacy protection. Dark patterns exploit imbalances of power and information between designers and users. Youth anxiety on social media reflects an imbalance between connection and wellbeing.

The challenge throughout is finding equilibrium: not a perfect balance frozen in place, but a dynamic balance that adjusts as conditions change, knowing when the eye should make a statement and when the lips should be quiet.
🔗 Navigation
Previous: TLDR 158 • Next: TLDR 160 • Archive: 📧 Newsletter
🌱 Connected Concepts:
- Children's YouTube Nightmare — James Bridle reveals algorithmic content exploiting young children through disturbing mashups that generate advertising revenue, demonstrating how platforms that optimize for engagement without oversight produce harmful outcomes, in Algorithmic Content and Platform Responsibility.
- Disinformation Tactics — Mike Caulfield argues disinformation aims to render the Informational Environment inoperable, eliminating accuracy, bias, and trust so that only power remains, requiring a love of distinguishing truth rather than universal skepticism, in Media Literacy.
- Anonymous Data Myth — A Guardian investigation reveals that so-called anonymous data serves as a fingerprint, easily used to identify individuals through cross-referenced patterns, demonstrating the fragility of anonymization and the need for stronger custodian obligations, in Privacy Protection.
- Dark Patterns Design — Harry Brignull's darkpatterns.org documents UX manipulation techniques deliberately designed to trick users into actions serving company interests over user welfare, requiring recognition as an essential Digital Literacy Framework skill, in Interface Design.
- Youth Social Media Anxiety — Parental guidance is needed as teens navigate the Social Media landscape amid cyberbullying, FOMO, and harassment; cutting off access creates isolation rather than solving problems, so engagement without invasion is required, in Digital Citizenship.
Part of the 📧 Newsletter archive documenting digital literacy and technology.