TLDR 163

Published: 2018-08-31 • 📧 Newsletter

Welcome to Issue 163. Dealing with the long tail.

Welcome to the end of a busy week. Let's get started.

📺 Watch

Social justice belongs in our schools, says educator Sydney Chaffee. In a bold talk, she shows how teaching students to engage in activism helps them build important academic and life skills — and asks us to rethink how we can use education to help kids find their voices. "Teaching will always be a political act," Chaffee says. "We can't be afraid of our students' power. Their power will help them make tomorrow better."

As an interesting activity...read through the comments for the video. I've had some debates with my students about when/where "social justice" begins and ends. Furthermore, when/where does "indoctrination" begin and end?

These questions reveal fundamental tensions about education's purpose. Some view teaching as neutral information transmission—students learn facts, not values. Others argue education is inherently value-laden—curriculum choices, whose stories get told, which histories matter, what counts as knowledge—all reflect political and ethical commitments.

Chaffee's position is that teaching is always political, so the question isn't whether to engage with politics but how to do so transparently and thoughtfully. Teaching students to identify injustice, analyze power structures, and work for change isn't indoctrination if students learn to think critically rather than simply absorb the teacher's views.

The "indoctrination" concern often gets weaponized against progressive pedagogy while conservative curriculum—presenting capitalism as natural, centering white narratives, treating current power arrangements as inevitable—escapes similar scrutiny. Both teach values. The difference is whether we're explicit about values we're teaching and whether we equip students to question all claims, including ours.


📚 Read

Digital Divides in Cuba

Classes are starting up, and as a result I'm leading students through the beginnings of literacy, language, and technology. In discussions across classes, we have already started to unpack the challenges and opportunities at this intersection between literacy, tech, and education.

This post wades into the challenges as we consider online discourse, ethics, and how culture adapts. This ties into some recent research of mine...and it has spurred some research into Kant, pragmatics, and how we establish "truth." I should have some interesting posts on the way. But, for now...I'd definitely recommend reading the post, and the case studies they present.

I love all of the articles published by First Monday. It has helped me frame much of my work over the years.

This research by Alexander van Deursen and Lila Solis Andrade is valuable (IMHO) for two reasons. First, the lit review in the beginning about "digital divides" is excellent. Second, I value this examination of digital divides in the special case study of Cuba.

The digital divide framework has evolved. Early research focused on the first-level divide—who has internet access versus who doesn't. As connectivity spread, researchers identified second-level divides—differences in digital skills, usage patterns, and outcomes even among those with access.

Cuba provides a fascinating case study because the government controls internet access but is gradually expanding it. This allows examination of both levels of the divide simultaneously—some Cubans are gaining access for the first time while others develop sophisticated digital practices.

The findings reveal that access alone doesn't ensure meaningful participation. Skills matter—knowing how to evaluate information, protect privacy, use platforms effectively. Motivation matters—whether people see the internet as valuable for their goals. Context matters—cultural norms, economic resources, social networks all shape how people engage with technology.

This matters for education policy: Providing devices and connectivity is necessary but insufficient. Digital literacy education must address skills and critical thinking, not just access. And we must consider how existing inequalities shape who benefits from digital technologies and who gets left behind.


Put your phone in "do not disturb" mode forever

This week I was reading another story about youth who are turning their backs on social media. I've also been digging more into research and findings about the role of devices in our lives and the impact of behaviorist philosophies. More on that to come.

In light of that, I was intrigued by this post from Paris Martineau wondering what would happen if we leave our phones and devices set on "do not disturb." I know that seems like insanity for most of us...but perhaps it is exactly what we need.

As a reminder, please get involved in our discussion about screentime. Many of our students will be jumping in this upcoming week.

The proposal isn't really insanity—it's resistance to design choices that prioritize engagement over user wellbeing. Phones are engineered to be interruptive, to demand attention through notifications, to create anxiety about missing something. These aren't accidents—they're features designed to maximize usage metrics.

Do Not Disturb mode inverts the relationship: you decide when to check your phone rather than the phone deciding when to interrupt you. This simple flip represents a fundamental shift in agency and attention management.

The resistance to this idea reveals how normalized constant connectivity has become. "What if someone needs to reach me urgently?" becomes the justification for permanent interruptibility, even though humanity functioned fine for millennia without instant reachability and most "urgent" notifications aren't actually urgent.

The deeper question: Who benefits from your constant availability? Usually not you. Usually platforms, advertisers, employers seeking to extend working hours into personal time, social obligations creeping beyond reasonable boundaries. Permanent Do Not Disturb mode is a way of claiming that your time and attention belong to you unless you explicitly grant access to others.

This connects to screentime research—not all screen time is equivalent, but constant interrupted attention creates different cognitive patterns than sustained focus or intentional communication. The issue isn't screens themselves but design patterns that manipulate attention for commercial purposes.


Several months ago I shared the story of "deepfakes" here in TL;DR. Deepfake is an artificial intelligence-based human image synthesis technique. Put simply, it is a process in which existing images and videos are superimposed onto source images and videos. When this first came to light last year, it was primarily used to create celebrity pornographic or revenge porn videos.

I mentioned in my earlier discussions here in TL;DR that I was concerned about the use of this in propaganda and misinformation to create really fake news.

Apparently researchers are already exploring ways to combat this by having neural networks study "real" data before moving on to the fake stuff to develop "generative modeling." They then pit the first neural network against a second neural network in a process known as a "generative adversarial network." This work in machine learning and neural networks is really interesting to watch evolve.

The technical arms race is inevitable: as detection tools improve, generation techniques will evolve to evade detection. GANs work by having a generator create fakes and a discriminator try to detect them—the competition drives both to improve. Apply this to the deepfake ecosystem and you get a perpetual cat-and-mouse game.
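To make that generator-versus-discriminator idea concrete, here is a minimal training-loop sketch. It assumes PyTorch (the article doesn't name a framework), and the "real" data is just random vectors standing in for actual images; it illustrates the adversarial setup, not a deepfake system or the researchers' actual code.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)

    # 1) Train the discriminator: label real samples 1, generated samples 0.
    fake_batch = G(torch.randn(n, latent_dim)).detach()  # don't update G here
    d_loss = loss_fn(D(real_batch), torch.ones(n, 1)) + \
             loss_fn(D(fake_batch), torch.zeros(n, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator: try to make the discriminator call its fakes real.
    g_loss = loss_fn(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

# Toy "real" data: random vectors standing in for real images.
for step in range(200):
    train_step(torch.randn(32, data_dim))
```

The point to notice is the feedback loop: every time the discriminator gets better at flagging fakes, the generator gets a sharper training signal for producing harder-to-flag ones, which is exactly the cat-and-mouse dynamic described above.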

This raises an epistemological crisis: When video evidence can be fabricated convincingly, what counts as proof? Courts, journalism, personal relationships—all rely on the ability to verify claims through evidence. If any video might be fake, and detection requires technical expertise most people lack, trust collapses.

The solution can't be purely technical. Detection tools help but won't keep pace with generation forever. We need social and institutional responses: Media literacy about manipulated content, verification practices from journalists, legal frameworks for malicious deepfakes, and perhaps most importantly, humility about what we think we "saw with our own eyes."

The deeper problem: Deepfakes weaponize reasonable skepticism. Even before fakes become undetectable, their existence makes people question real evidence. "That's probably a deepfake" becomes an excuse to dismiss uncomfortable truths. This "liar's dividend"—where bad actors benefit from general epistemic uncertainty—may be more damaging than specific fakes.


The Internet of Garbage is a 2015 non-fiction book by journalist and lawyer Sarah Jeong. It discusses online harassment as a threat to online discourse, and makes an argument for better possible futures.

The link above shares a portion of the text focused on the intersection of copyright and harassment. My students will soon start annotating this text in Hypothesis.

If you'd like the full text, The Verge made it available for free as a PDF, ePub, and .mobi ebook file, and for the minimum allowed price of $.99 in the Amazon Kindle store.

Jeong's analysis matters because it treats online harassment not as individual pathology but as a systemic problem requiring structural solutions. The metaphor of "garbage" is deliberate—like waste management, harassment requires infrastructure for prevention, detection, and removal at scale.

The copyright-harassment intersection reveals how existing legal frameworks fail online contexts. Copyright law provides mechanisms for rapid content removal, which harassers exploit through false DMCA claims to silence targets. Meanwhile, actual harassment—threats, doxxing, coordinated campaigns—often faces no effective legal remedy.

This asymmetry shapes online discourse: Bad actors have powerful tools to suppress speech while facing few consequences for their own abusive speech. The result is garbage accumulation—harassment that makes spaces unusable for targeted groups while platforms struggle with moderation at scale.

Jeong argues for better content moderation not as censorship but as a necessary condition for speech. When harassment drives people offline, when certain voices can't participate without facing abuse, the net effect is less speech, not more. Protecting targets of harassment protects discourse itself.

The challenge is scale. Human moderators can't review everything. Algorithmic moderation misses context. Community moderation gets captured. No perfect solution exists—but refusing to try because moderation is hard means accepting that online spaces will be garbage dumps where only those willing to wade through abuse can participate.


This popped up in my feed and it's already been a point of discussion with several of my tech admins here at my institution. I'm thinking that I'll buy one soon to test it out and write about my experiences.

I've talked about two-factor authentication (2FA) in the past. Basically, when you log in to a site/service, you need to give another proof of identity. In this case, you would insert the USB stick or click the Bluetooth sensor on your keychain.

I'm intrigued. I want to see if I can make my own using a USB key. I also have questions about whether this would work as I'm typically using computers in my office, home, or the classroom.

Hardware security keys represent a different security model than SMS or app-based 2FA. They're resistant to phishing—even if you're on a fake login page, the key won't authenticate because the cryptographic protocol verifies the site's identity. This makes them significantly more secure than codes sent via text or generated by apps.
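For the curious, here is a rough sketch of the origin-binding idea that makes these keys phishing-resistant. It only mimics the spirit of U2F/WebAuthn (it is not the real protocol), the domain names and helper functions are made up for illustration, and it assumes Python's third-party cryptography package.

```python
# Toy illustration: the "key" signs the server's challenge together with the
# origin the browser reports, so a signature produced on a look-alike phishing
# domain won't verify for the real site. Not the actual FIDO2/WebAuthn protocol.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the key holds the private key; the site stores the public key.
device_key = ec.generate_private_key(ec.SECP256R1())
site_public_key = device_key.public_key()
registered_origin = "https://accounts.example.com"   # hypothetical domain

def key_sign(challenge: bytes, origin_seen_by_browser: str):
    """The 'hardware key' signs the challenge bound to the browser's origin."""
    payload = json.dumps({"challenge": challenge.hex(),
                          "origin": origin_seen_by_browser}).encode()
    signature = device_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return signature, payload

def site_verify(signature: bytes, payload: bytes) -> bool:
    """The real site checks the signature AND that the origin is its own."""
    try:
        site_public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return json.loads(payload.decode())["origin"] == registered_origin

challenge = b"random-server-nonce"

# Legitimate login: the browser reports the real origin, so verification succeeds.
sig, payload = key_sign(challenge, "https://accounts.example.com")
print(site_verify(sig, payload))   # True

# Phishing page: the browser reports the attacker's origin, so verification fails
# even though the victim plugged in their genuine key.
sig, payload = key_sign(challenge, "https://accounts.examp1e.com")
print(site_verify(sig, payload))   # False
```

Baking the origin into what gets signed is what a texted code can't offer: a user can be tricked into typing a six-digit code into a phishing page, but the key's signature simply won't verify for the wrong site.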

The practical questions matter though. USB keys work great if you use the same few computers regularly. But what about library computers, borrowed devices, or mobile-only scenarios? The Bluetooth option helps but adds battery and pairing complexity.

There's also an adoption challenge. Most users won't spend $50 on security keys, won't understand why they need them, and won't want to carry another device. This creates a two-tier security system: sophisticated users with hardware keys, everyone else with weaker methods.

For institutions and high-value targets, hardware keys make sense. For average users, they're probably overkill unless you're particularly concerned about sophisticated attacks. The calculus depends on your threat model—who might want access to your accounts and what resources they'd invest to get it.

The DIY approach is interesting but be careful. Security devices are only as good as their implementation. Bugs in homemade key firmware could create vulnerabilities worse than not using keys at all. Sometimes paying for professionally-developed security tools is worth it.


🔨 Do

Try Hypothesis PDF Annotation

If you use PDFs in your teaching or research, check out my video tutorial on using Hypothesis with PDFs in Google Classroom. We've been having some issues with PDFs shared through Google Drive, and this should help resolve them.

Also explore DocDrop—a tool Katelyn Lemay shared that looks promising for PDF annotation workflows.


🤔 Consider

"Courage is not the absence of fear, but the capacity to act despite our fears." — John McCain

John McCain's reflection on courage resonates profoundly with this issue's theme of dealing with the long tail. The long tail represents ongoing consequences, delayed effects, persistent challenges that don't resolve cleanly—exactly the contexts where courage matters most.

B-Tags surveillance raises fears about privacy erosion and racial targeting, but courage means confronting those technologies rather than accepting them as inevitable. Chaffee's social justice pedagogy requires courage to teach politically in spaces that punish political teaching, to trust students with power that might challenge existing arrangements. Cuba's digital divides reveal the long tail of infrastructure inequality—addressing them requires courage to reimagine access beyond market-driven connectivity. Permanent Do Not Disturb mode requires courage to resist designed addiction, to accept others' frustration at your unavailability, to believe your attention belongs to you. Deepfake detection work requires courage to acknowledge an epistemic crisis where seeing no longer equals believing. Jeong's harassment research requires courage to name systemic problems that powerful platforms profit from ignoring. Hardware security keys require courage to acknowledge that convenience and security trade off, that protecting yourself requires friction.

McCain's definition is perfect—courage isn't fearlessness but action despite fear. The long tail generates fear precisely because consequences extend beyond immediate control. Courage is choosing to act anyway, knowing you can't control all outcomes but refusing to let fear dictate inaction. Dealing with the long tail requires courage to make choices whose full effects you won't see for years, whose success depends on others' choices you can't control, and whose righteousness won't be vindicated immediately.


