DL 204

The Quiet Part Out Loud

Published: 2019-07-06 • 📧 Newsletter

Welcome to issue 204. The quiet part out loud.

Hi all, my name is Ian O'Byrne and welcome to Digitally Literate. In this newsletter I distill the news of the week in technology into an easy-to-read resource. Thank you for reading.

This week I was working on some things behind the scenes. More to come soon.


📺 Watch

This week I came across the Two Minute Papers YouTube channel as I was working on a video teaser for a publication.

This video is a bit outside the norm, as the host discusses themes across recent research in artificial intelligence (AI).

If you're looking for a deeper dive into AI, check out this slide deck on the State of AI in 2019.

Two Minute Papers exemplifies science communication done well: complex research translated into accessible summaries without losing substance. The channel's success demonstrates an appetite for understanding AI beyond hype cycles; people want to know what's actually happening in research labs. The State of AI deck provides a comprehensive view of the landscape: who's publishing, what's advancing, and where investment flows. For educators, these resources offer entry points into conversations about AI that move beyond "robots will take our jobs" toward understanding actual capabilities and limitations. The democratization of research communication matters as AI increasingly shapes society.


📚 Read

This week was the 50th anniversary of the Apollo 11 moon landing… or was it?!

Amanda Hess in The New York Times on the movement toward conspiracy theories by online stars, and the "emotional ambivalence" held by their audiences.

The internet's biggest stars are using irony and nonchalance to refurbish old conspiracies for new audiences, recycling them into new forms that help them persist in the cultural imagination. Along the way, these vloggers are unlocking a new, casual mode of experiencing paranoia. They are mutating our relationship to belief itself: It's less about having convictions than it is about having fun.

Hess identifies a significant shift in conspiracy culture: from true believers to ironic entertainers. Previous conspiracy theorists held convictions; YouTube personalities hold poses. The "emotional ambivalence" is key: audiences don't need to believe the moon landing was faked to enjoy content about it. This creates problems for counter-messaging, because you can't fact-check vibes. The mutation of belief into entertainment normalizes conspiratorial thinking as an aesthetic rather than an epistemology. When everything becomes content, truth claims become optional. The Apollo anniversary timing is perfect: a genuine historical achievement becomes fodder for engagement metrics, with accuracy irrelevant to algorithmic success.

A couple of years ago, I was listening to a podcast in which the hosts discussed Apple's inclusion of iBeacons in its devices.

Basically, a low-energy Bluetooth chip in your device "announces" your arrival to sensors at a location as you walk by. Imagine a world like Blade Runner's, where you walk by a store and the displays change based on your preferences and the data collected about you. You might walk by a clothing store, and your phone will let the sensors know that you were recently searching for a new pair of pants; the signage outside the store will adjust accordingly.

This piece in The Privacy Project details how all of this works… and the security concerns behind this data collection.

The Blade Runner comparison is apt: science fiction's personalized-advertising dystopia is now infrastructure. iBeacons represent physical-digital convergence, with your online behavior following you into physical space. The "announcement" framing matters: your device actively broadcasts its presence, enabling tracking you never consciously consented to. Most people don't know Bluetooth does this, don't know stores have sensors, and don't know their search history could influence in-store experiences. The Privacy Project's documentation makes visible what's designed to be invisible. Security concerns compound privacy concerns: the same infrastructure enabling personalized ads enables stalking, profiling, and surveillance by actors beyond retailers.
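To make the mechanics concrete, here is a minimal Python sketch of what one of these Bluetooth Low Energy announcements contains. An iBeacon frame packs a deployment UUID, a major/minor pair (think store number and doorway), and a calibrated signal-strength byte into the manufacturer data of an advertisement. The sample payload and UUID below are hypothetical, for illustration only.

```python
import struct

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer data of an iBeacon advertisement.

    Layout: Apple's company ID (0x004C, little-endian), beacon type
    0x02, payload length 0x15, a 16-byte proximity UUID, big-endian
    "major" and "minor" values, and a signed byte giving the measured
    signal strength at 1 meter (used to estimate how close you are).
    """
    if len(mfg_data) < 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon frame
    uuid = mfg_data[4:20].hex()
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    (tx_power,) = struct.unpack(">b", mfg_data[24:25])
    return uuid, major, minor, tx_power

# Hypothetical advertisement payload, for illustration only.
sample = bytes.fromhex(
    "4c000215"                          # Apple ID + iBeacon header
    "00112233445566778899aabbccddeeff"  # deployment UUID
    "0001"                              # major: e.g., store #1
    "000a"                              # minor: e.g., doorway #10
    "c5"                                # -59 dBm measured at 1 m
)
print(parse_ibeacon(sample))
# ('00112233445566778899aabbccddeeff', 1, 10, -59)
```

One nuance: in Apple's protocol the beacon itself is dumb hardware broadcasting a fixed ID; apps on your phone hear it, map the ID to a place, and can report "you, near this doorway, at this time" upstream. The tracking is invisible precisely because the radio chatter is this mundane.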

Keith Hampton, a researcher in Michigan State University's Department of Media & Information, set out to test the theory that social media use leads to declining mental health. His findings challenge the notion that there is a looming mental health crisis in the U.S. and that the crisis is being caused by technology.

In research published in the Journal of Computer-Mediated Communication, Hampton showed that social media use often has the opposite effect of what people think: social media can be a protective influence. His article, "Social Media and Change in Psychological Distress Over Time," reveals that active internet and social media users are less likely to experience serious psychological distress, which is associated with depression and other mood and anxiety disorders.

Hampton's research complicates the "social media causes depression" narrative that dominates popular discourse. The protective-effect finding matters: the correlation between social media use and depression may reflect that struggling people use social media differently, not that social media causes the struggle. The study design, examining change over time, addresses the limitations of cross-sectional studies that capture snapshots without establishing causation. This doesn't mean social media is unproblematically good; it means the relationship is more complex than tech panic suggests. Some uses harm, some help, and blanket condemnation misses the nuance needed for useful guidance.

The Colorado Supreme Court upheld a ruling last week that required a juvenile boy to register as a sex offender after sexting and trading erotic pictures with two girls roughly his age. This split decision highlights states' recent struggles with applying laws passed in a less tech-heavy age.

According to a 2018 study cited in the case, approximately one in four teenagers has received a "sext," and one in seven has sent one.

Courts are struggling as they try to set a hard line differentiating between child exploitation and (in this case) what sounds like "unfortunate teenage behavior."

The case exemplifies law's difficulty keeping pace with technology. Sex offender registration laws were designed to protect children from predatory adults; applying them to teens exchanging consensual images with peers produces outcomes the laws never intended. The prevalence statistics (one in four receiving, one in seven sending) show this is normalized adolescent behavior, not an aberration. Courts forced to apply existing statutes have no good options: either treat normal teen behavior as a sex crime or seem to condone child exploitation. The solution requires legislative updates, but legislatures move slowly while technology and teen behavior move fast. Meanwhile, individual teens face permanent consequences for temporary lapses in judgment.

Your guide to the "vocabulary of bullshit" in Silicon Valley, where capitalism is euphemized.

My favorite from the list:

apology (n) – A public relations exercise designed to change headlines. In practice, a promise to keep doing the same thing but conceal it better. "People need to be able to explicitly choose what they share," said Mark Zuckerberg in a 2007 apology, before promising better privacy controls in a 2010 mea culpa, vowing more transparency in 2011, and acknowledging "mistakes" in the Cambridge Analytica scandal.

The glossary performs important cultural criticism through humor. Silicon Valley's language systematically obscures power relations: "disruption" means destroying livelihoods, "community" means users whose data you extract, "transparency" means we'll tell you after we've done it. The Zuckerberg apology timeline is devastating: the same promises repeated for more than a decade, with the same violations following. Naming the pattern ("apology as headline management") provides vocabulary for recognizing it. This is what "saying the quiet part out loud" means: making explicit what industry language keeps implicit. The vocabulary of bullshit only works when we accept it uncritically.


🔨 Do

Looking to learn a bit about coding with Python and want a challenging task? Also want to learn more about speech recognition and the privacy/security concerns that come with it?

Check out how to build this "simple" speech recognition tool.

Building speech recognition from tutorials teaches both coding and critical understanding. The technical process (audio capture, signal processing, pattern matching) reveals how voice assistants actually work, and understanding the technical foundation enables informed privacy assessment: what gets recorded, where it's processed, and who has access. The quotes around "simple" acknowledge the irony: speech recognition involves complex systems, and tutorials necessarily simplify. But the educational value is real: you move from being a black-box consumer to someone who understands, at least partially, what's happening beneath the interface. This connects to broader digital literacy: demystifying technology that increasingly mediates daily life.
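The linked tutorial's exact code isn't reproduced here, but a minimal sketch of the usual approach, assuming the widely used SpeechRecognition package (pip install SpeechRecognition pyaudio) rather than the tutorial's specifics, looks something like this:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    # Sample ambient noise briefly so quiet and loud rooms
    # both get a sensible energy threshold.
    recognizer.adjust_for_ambient_noise(source, duration=1)
    print("Say something...")
    audio = recognizer.listen(source)  # record until a pause

try:
    # The privacy catch: this ships your recording to Google's
    # free web API for transcription; nothing happens locally.
    text = recognizer.recognize_google(audio)
    print("You said:", text)
except sr.UnknownValueError:
    print("Could not understand the audio.")
except sr.RequestError as err:
    print("API request failed:", err)
```

Note where the audio goes: recognize_google() sends your voice to remote servers, which is exactly the concern raised above. Swapping in recognize_sphinx() (backed by the offline CMU Sphinx engine) keeps the audio on your machine, turning the privacy trade-off into a hands-on lesson rather than an abstraction.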


🤔 Consider

"To this day the f-word turns my stomach. Because 'fine' is a euphemism for everything you're scared of saying." — Amy Molloy

Molloy's observation on "fine" as a euphemism connects to this issue's theme of saying quiet parts out loud. Silicon Valley's vocabulary euphemizes power; conspiracy culture ironizes belief; retail surveillance operates invisibly; outdated laws produce unintended consequences. "Fine" covers fear; tech language covers extraction; irony covers belief. Honesty requires saying what we actually mean, even when it's uncomfortable.


