DL 196
Dark, Toxic Corners of the Internet
Published: 2019-05-04 • 📧 Newsletter
Welcome to Issue 196. Dark, toxic corners of the Internet.
Hi all, welcome to Digitally Literate. My name is Ian O'Byrne. I research, teach, & write about technology in our lives. I try to synthesize what happened this week in tech...so you can be the expert as well.
Something I posted this week:
- Putting down your phone may help you live longer - A look at the connections between device use and hormone levels in our bodies.
🔖 Key Takeaways
- Radicalization Pipeline: The Poway synagogue shooting reveals a direct connection between 8chan extremist forums and real-world terrorist violence against Jewish communities.
- Deplatforming Backfire: Facebook announced its Alex Jones ban before enforcing it, giving extremists time to redirect followers and amplifying rather than containing their reach.
- Computational Inferencing: Zeynep Tufekci explains how data inference makes "opting out" of surveillance impossible; your behavior reveals you even without direct disclosure.
- Poverty Amplifies Harm: Mary Madden documents how marginalized populations experience both hypervisibility and invisibility in digital systems, often without the resources to challenge unfair outcomes.
- Researcher Despair: Those studying online extremism are losing hope, targeted by hate while watching disinformation spread faster than they can document it.
📺 Watch
The Future of Work - Jobs and Automation in Estonia
A little over a year ago, I met Marina Kaljurand, a former Ambassador of Estonia and the current chair of the Global Commission on the Stability of Cyberspace. We spoke quite a bit about education and the role of technology in the future of children's lives.
Kaljurand invited me to visit Estonia and see it for myself. After seeing this video… I just might take her up on it.
Estonia represents a unique experiment in digital governance that other nations study but rarely replicate. The small Baltic nation rebuilt itself after Soviet occupation by embracing digital infrastructure (e-residency, digital voting, blockchain-secured records), creating perhaps the world's most thoroughly digitized society. The automation-and-jobs question takes a different shape there: rather than fearing displacement, Estonians integrated technology into civic life from the ground up. Kaljurand's work on cyberspace stability connects directly to this national project; a country running on digital infrastructure understands cybersecurity as an existential concern, not an abstract policy debate. The contrast with larger democracies struggling to regulate technology they don't understand is stark.
📚 Read
Think You're Discreet Online? Think Again
Zeynep Tufekci talking about the digital residue we leave behind as we use digital tools and spaces.
There is a narrative that "as long as you're careful online" you'll be okay. As Tufekci indicates, there is no longer any real chance of "opting out" of the challenges of these spaces.
My key takeaway from this piece was the concept of "computational inferencing." This is the modern equivalent of "data inferencing," in which a group makes inferences about you based on a series of data points, like the magazines and newspapers you subscribe to.
Tufekci's "computational inferencing" concept updates traditional data inference for the machine learning era. Previously, humans made inferences from data—marketers guessing interests from magazine subscriptions. Now algorithms detect patterns humans never see: typing speed revealing Parkinson's disease, shopping patterns predicting pregnancy before conscious awareness. The "careful online" narrative becomes not just difficult but impossible—your friends' behavior reveals you, your silences are data, your absence from platforms is itself informative. The implication for digital literacy: teaching "privacy protection" through behavior modification misses the point. The system infers regardless of your choices.
The Devastating Consequences of Being Poor in the Digital Age
Mary Madden discussing the privacy and security violations that occur in our increasingly digitized society. This is especially true for marginalized and vulnerable populations.
The poor experience these two extremes — hypervisibility and invisibility — while often lacking the agency or resources to challenge unfair outcomes.
Madden's "hypervisibility and invisibility" framing captures a cruel paradox. Poor people are hypervisible to surveillance systems (welfare monitoring, criminal justice databases, credit scoring, housing background checks) while remaining invisible when they need recognition, service, or redress. The asymmetry compounds: those with resources can challenge algorithmic errors, hire lawyers, move to new jurisdictions, and pay for privacy. Those without resources are trapped in systems that see them as risk factors rather than people. Digital literacy for marginalized communities isn't just about using tools effectively; it's about understanding and surviving systems designed without their interests in mind.
The Existential Crisis Plaguing Online Extremism Researchers
Paris Martineau with an insightful piece on the current state of research into truth and morality on the Internet. Martineau speaks with several researchers in this area and finds feelings of futility and depression.
These individuals regularly watch online extremism and disinformation increase. They are also often the targets of hate as they try to speak out, or at least inform us about current trends.
The fact that they are losing hope should concern us all.
The researcher despair Martineau documents reveals an information ecosystem crisis. People dedicating careers to understanding online extremism face three pressures simultaneously: constant exposure to hateful content (vicarious trauma), personal targeting by the communities they study (direct harassment), and watching their research go ignored while the problems accelerate (professional futility). This isn't sustainable. When those who understand the problem best are burning out, the knowledge infrastructure for addressing extremism erodes. We need institutional support structures for those doing this essential work: mental health resources, protective policies, and career pathways.
Facebook Bans Alex Jones and Extremists, But Not As Planned
This week Facebook indicated that, in order to contain some of the misinformation and extremism that permeates their (and other) networks, they were banning especially toxic, conspiracy-theorist organizations and removing their content.
As the headline of this segment suggests, they didn't actually do this. Facebook and Instagram made a giant announcement, and only hours later did they begin removing the pages and content. Those of you who have worked with adolescents know exactly what happened next.
Many of these groups quickly informed their audiences of the impending ban and directed them to other networks and channels for more info. They then started spawning unofficial accounts and pages to keep the stream of content flowing. The media noticed and amplified this exodus, thereby extending the reach of the very message the ban was meant to contain.
Finally, Facebook indicated that it had notified the organizations of their bans so they wouldn't have to read about it in the news, and that the delay came when staff began scrubbing the pages for hateful speech and realized what hard work that is.
The bungled deplatforming demonstrates content moderation's operational complexity. Announcing bans before executing them gave extremists a marketing opportunity ("We're being silenced! Follow us here!") that they exploited expertly. The media amplification cycle compounded the failure, spreading migration instructions to audiences who might never have found the alternative platforms otherwise. Facebook's justification (giving notice so the banned wouldn't read about it in the news) prioritizes the feelings of banned extremists over effectiveness. The lesson: if deplatforming happens, it must happen simultaneously and completely, or it backfires. Halfway measures empower the movements they intend to suppress.
Poway Synagogue Shooting and Internet Radicalization
Earlier this week, a shooting at a California synagogue killed one woman and injured three others. The attack highlights a growing link between online radicalization and terrorist violence offline.
This post from the Southern Poverty Law Center connects the dots between the online spaces where this hateful radicalization breeds and the response to these events. For a deeper dive, this post from Vox describes 8chan, the nexus for much of this community.
The Poway shooting occurred just six months after the massacre at Pittsburgh's Tree of Life synagogue, part of a pattern of anti-Jewish violence incubated in the same online spaces. The shooter posted his manifesto to 8chan, following the template established by Christchurch. These aren't isolated incidents but a connected phenomenon: radicalization pipelines run from edgy memes to ironic extremism to sincere ideology to action. 8chan's role as incubator matters; a platform explicitly designed to allow content other platforms ban becomes a gathering place for those seeking to cross lines. The SPLC's documentation work, connecting the dots between online radicalization and offline violence, provides essential evidence for understanding this threat.
🔨 Do
Spot the Surveillance VR App from EFF
It's now even easier to identify surveillance out in the wild. The Electronic Frontier Foundation (EFF) released version 1.2 of their Spot the Surveillance app. Head here for the Spanish version.
The EFF's VR approach to surveillance literacy addresses a fundamental problem: surveillance infrastructure hides in plain sight. Most people walk past cameras, license plate readers, and cell-site simulators without recognizing them. Virtual reality creates a safe space to learn identification, practicing pattern recognition without real-world consequences. The Spanish-language version matters: surveillance disproportionately affects immigrant and Latinx communities, and tools in community languages enable informed resistance. Making surveillance visible is the first step toward contesting it.
🤔 Consider
"Look at how a single candle can both defy and define the darkness." — Anne Frank
Frank's observation on light and darkness resonates with this issue's confrontation with the toxic corners of the internet. The researchers losing hope, the shooting victims, the marginalized exposed by surveillance: all exist in a darkness that platforms and policies fail to address. Yet documentation, education, and awareness function as candles, defying by illuminating what would prefer to remain hidden and defining by making visible the shape of the threat.
🔗 Navigation
Previous: DL 195 • Next: DL 197 • Archive: 📧 Newsletter
🌱 Connected Concepts:
- Online Extremism — The Poway shooting revealing the 8chan radicalization pipeline from ironic extremism to terrorist violence against Jewish communities, in Domestic Terrorism.
- Platform Moderation — Facebook's bungled Alex Jones deplatforming giving extremists warning to redirect followers, amplifying rather than containing reach, in Content Policy.
- Privacy Inequality — Zeynep Tufekci on computational inferencing and Mary Madden on poverty's hypervisibility showing how surveillance harms distribute unequally, in Digital Justice.
- Digital Surveillance — EFF's Spot the Surveillance VR app teaching recognition of cameras, license plate readers, and monitoring infrastructure, in Civic Technology.
- Researcher Wellbeing — Extremism researchers experiencing burnout, depression, and targeting while watching problems accelerate faster than solutions, in Academic Labor.
Part of the 📧 Newsletter archive documenting digital literacy and technology.