TLDR 166

Too Long; Didn't Read Issue 166

Published: 2018-09-22 • 📧 Newsletter

Welcome to Issue 166. Thinking of better than this.

TL;DR is a weekly look at the news in technology, education, and literacy. I'm seeking to keep you on top of the news so you can be the expert.

This week I published the following:

Digitally Native Scholarship - I'm working on a manuscript with some colleagues to define/describe this area. Please take a look and send me feedback on how we've framed this area as we continue to finalize the manuscript.

Exploring Temporal Patterns - I finished my first MOOC!!! It focused on social media analytics. In this post I discuss why I took the time to do this…and show the start of my learning.


🔖 Key Takeaways


📺 Watch

As I've stated many times before, my family watches a ton of YouTube. In fact…we're thinking about "cutting the cord" soon. More on that later…

I love YouTube because there are times that we're inspired to undertake a new project. This video from the Motherboard channel is to blame for our deep dive this week into solar power, vehicles, and sustainability.

YouTube's role as a learning platform demonstrates the platform's best possibilities. Passionate creators share knowledge about technical projects—solar installations, vehicle conversions, sustainability practices—making previously inaccessible expertise available to anyone with an internet connection and curiosity. This democratization of knowledge enables DIY culture, reduces dependence on credentialed experts, and empowers people to undertake projects they'd previously considered beyond their capabilities.

The algorithmic recommendation system works in your favor when you're watching solar power videos. You discover related content—battery systems, electrical theory, off-grid living, energy efficiency. Each video leads to three more. The rabbit hole becomes an educational journey rather than a waste of time.

But this same recommendation architecture that surfaces solar power videos also radicalizes viewers toward extremist content, as Rebecca Lewis's research discussed below demonstrates. The platform doesn't distinguish between "good" rabbit holes and "bad" ones—it optimizes for watch time and engagement regardless of content. Curiosity about sustainability and curiosity about white nationalism get treated identically by algorithms.

For families using YouTube as a learning tool, this creates constant tension. We want kids discovering passion projects and exploring interests. But we can't fully trust that algorithmic recommendations will stay in productive territory. Parental supervision, media literacy conversations, and critical viewing habits become necessary protective factors.

The Motherboard video sparked family conversation about solar power that led to research, calculations, and serious consideration of home installation. That's YouTube at its best—inspiring curiosity, providing knowledge, enabling action. But maintaining that positive trajectory requires conscious curation rather than passive consumption of whatever algorithms serve next.


📚 Read

Rebecca Lewis, in an important Data & Society research report, looks at how YouTube is being used to "sell" political ideology to audiences.

This is part of a larger phenomenon, in which YouTubers attempt to reach young audiences by broadcasting far-right ideas in the form of news and entertainment. An assortment of scholars, media pundits, and internet celebrities are using YouTube to promote a range of political positions, from mainstream versions of libertarianism and conservatism, all the way to overt white nationalism.

Lewis maps the "alternative influence network": an interconnected web of YouTubers who collaborate, cross-promote, and guest on each other's channels to create a pathway from mainstream conservative content to white nationalist extremism. The network includes academic-style intellectuals, provocateurs, conspiracy theorists, and media personalities who present far-right ideology as a reasonable alternative to mainstream politics.

The radicalization pipeline works through gradual exposure rather than sudden conversion. A viewer interested in libertarian economics watches a video that seems reasonable. YouTube recommends slightly more extreme content from a collaborator. That video includes a guest appearance from someone further right. The viewer follows the new channel out of curiosity. Over months of watching, positions that initially seemed shocking become normalized through repeated exposure and social proof from trusted content creators.

The aesthetic matters as much as the content. These creators position themselves as truth-tellers censored by mainstream media, rebels against political correctness, and defenders of free speech. They frame far-right positions as suppressed knowledge rather than extreme ideology. Viewers feel they're discovering hidden truth rather than being radicalized.

Women and people of color within the network provide cover against accusations of racism and misogyny: "How can we be white nationalists when we platform diverse voices?" But these token participants legitimize extreme positions while the network overwhelmingly promotes white male grievance politics.

YouTube's recommendation algorithm accelerates this process. The platform optimizes for watch time, and radicalized viewers watch more content. Extreme positions generate strong emotional responses—outrage, validation, tribal belonging—that drive engagement. The algorithm doesn't care about truth or social consequences; it cares about keeping people watching.

Monetization incentivizes extreme content. YouTubers earn revenue from ads and viewer donations. Provocative positions, inflammatory rhetoric, and controversial guests generate views. Taking moderate positions or admitting nuance doesn't maximize engagement. The business model rewards radicalization.

For parents and educators, this research reveals that YouTube isn't a neutral platform for exploring interests; it's an optimized system that can funnel curiosity toward extremism. The same recommendation system that helps families discover solar power videos (as discussed above) can lead young men from gaming videos to white nationalism through a series of clicks that feel like organic discovery rather than algorithmic manipulation.

The solution requires platform accountability—changing recommendation algorithms to deprioritize radicalizing content, demonetizing extremist channels, providing context for fringe claims. But it also requires media literacy—helping young people recognize manipulation tactics, understand how algorithms shape their viewing, critically evaluate sources, and resist tribal thinking that treats political positions as identity markers.


'Post-Truth' and the Decline of Swedish Education

Magnus Henrekson and Johan Wennström offer an overview of a recent study, "'Post-Truth' Schooling and Marketized Education: Explaining the Decline in Sweden's School Quality". A full copy of the study is available here.

The Swedish school system suffers from profound problems with teacher recruitment and retention, knowledge decline, and grade inflation. The researchers suggest that these problems regarding school quality are to no small extent a result of the Swedish school system's unlikely combination of a postmodern view of truth and knowledge, the ensuing pedagogy of child-centered discovery, and market principles.

Sweden's education crisis offers a cautionary tale about what happens when incompatible educational philosophies collide. In the 1990s, Sweden simultaneously adopted a postmodern pedagogy that rejected objective knowledge and introduced market-based school choice with voucher systems. The combination proved catastrophic.

The postmodern turn in Swedish education embraced relativism about truth and knowledge. Official curricula downplayed factual content in favor of critical thinking skills, student-directed learning, and subjective knowledge construction. The idea was that students would develop understanding through discovery rather than direct instruction, that multiple perspectives all have equal validity, and that objective knowledge is an illusion.

This philosophical shift produced predictable pedagogical consequences. Teachers became facilitators rather than instructors. Direct teaching was seen as authoritarian imposition rather than professional responsibility. Assessing student knowledge became problematic when all perspectives are deemed equally valid. Standards declined because maintaining standards requires believing some answers are more correct than others.

Simultaneously, Sweden introduced aggressive school choice and voucher systems that created market competition between schools. Private schools could receive public funding while selecting students and operating for profit. Schools competed to attract students whose funding followed them.

Market competition combined catastrophically with postmodern pedagogy. Schools couldn't compete on student achievement when assessment was seen as oppressive and objective knowledge was rejected. Instead, they competed on being student-friendly, reducing workload, and inflating grades. Schools that demanded more or graded strictly lost students to competitors offering easier paths to credentials.

Teachers experienced crushing demoralization. Professional expertise became devalued when facilitating is privileged over teaching. Salaries stagnated as private schools minimized costs to maximize profits. Working conditions deteriorated as schools prioritized marketing over supporting instruction. Teacher recruitment and retention collapsed.

Student achievement plummeted in international assessments. Grade inflation meant credentials became meaningless—everyone got high marks but learned less. The connection between effort and outcomes dissolved when schools competed to be easiest rather than best.

The Swedish example reveals dangers in both postmodern pedagogy and unregulated school choice—but especially in combining them. Rejecting objective knowledge makes meaningful assessment impossible. Market competition without quality standards drives a race to the bottom. Together they create a system that fails everyone except the entrepreneurs profiting from operating schools.

The research has implications beyond Sweden. Many education systems flirt with similar combinations—downplaying content knowledge, embracing student-centered discovery, introducing market competition. The Swedish collapse suggests these elements interact in destructive ways that compound individual problems.

The solution requires rebuilding teacher professionalism, reestablishing that some knowledge is more valid than others, and recognizing that market logic doesn't improve education when it undermines educational values. Easier said than done when political commitments to choice and progressive pedagogy are deeply entrenched.


Pernille Ripp is a wonderful author and educator. I frequently highlight her work here in TL;DR.

In this post she highlights some recent death threats she has received on her blog. Please be advised that there is offensive language in the post…the threats from her attacker.

I'm talking through this with my students this week in classes…and will have a blog post soon on the topic.

Ripp's experience reveals a systematic harassment infrastructure targeting educators who speak about equity, justice, and inclusive education. The death threats weren't an isolated incident or the work of a disturbed individual; they're part of a coordinated campaign to silence voices advocating for marginalized students.

The harassment follows a predictable pattern. An educator writes about racial justice, LGBTQ+ inclusion, or feminist pedagogy. The post gets shared in far-right spaces—message boards, Discord servers, Twitter threads. Followers brigade the educator with threats, slurs, and attempts to get them fired. The goal isn't persuasion but punishment: making advocacy so costly that educators self-censor.

Women educators face particular targeting. The harassment includes misogynistic slurs, sexual threats, and gendered attacks that go beyond political disagreement. The message: women speaking publicly about social justice are transgressing proper roles and will be punished through gendered violence.

The psychological impact extends beyond immediate fear. Educators must weigh every public statement against the risk of harassment. Blog comments require moderation. Social media becomes a minefield. Professional sharing that should be a normal part of teaching practice becomes fraught with danger. Many educators respond by going private, limiting their public voice, or avoiding controversial topics entirely—exactly what harassers intend.

Platforms provide insufficient protection. Moderating comments requires constant vigilance. Blocking harassers is a game of whack-a-mole when they create new accounts. Platform report systems respond slowly, if at all. Law enforcement typically can't or won't act on threats that don't meet specific legal thresholds. Educators are left largely unprotected.

The connection to the ACLU privacy guide discussed below becomes clear: educators advocating for justice need the same security practices as high-profile activists. Two-factor authentication, secure passwords, scrubbed public information, and careful digital hygiene become necessary professional practices.

But individual security measures don't address the systemic problem. Harassment works because it's low-risk for harassers and high-cost for targets. Changing this requires platform accountability, law enforcement taking online threats seriously, professional organizations supporting targeted educators, and collective resistance rather than leaving individuals to fend for themselves.

Ripp's decision to publicize the threats rather than deal with them silently demonstrates courage and strategic thinking. Making harassment visible challenges the narrative that these are isolated incidents. It educates other educators about the risks and necessary precautions. It builds solidarity among targets who often suffer in isolation.

For those of us who teach, this raises uncomfortable questions. How do we support colleagues facing harassment? How do we talk about digital citizenship with students when we see adults modeling abuse? How do we balance advocating for justice with protecting ourselves from retaliation? There aren't easy answers, but silence isn't an option.


Facebook Expands Fake Election News Fight, But Falsehoods Still Rampant

Here in the U.S., we're gearing up for the midterm elections. We've talked quite a bit about how social media and digital spaces bear some responsibility for the political unrest in elections around the globe over the past decade.

Facebook has built a war room to organize its preparations and monitoring of the network leading up to the elections. My initial thought is that Facebook cannot be trusted to protect free speech or ensure transparency in its actions. I think education, advocacy, and empowerment are needed. But…I guess we'll see.

Facebook's war room represents a public relations response to criticism more than a fundamental solution to the disinformation crisis. The optics are impressive—teams monitoring in real time, sophisticated dashboards, cross-functional collaboration. The substance is less convincing.

The fundamental problem: Facebook's business model depends on engagement regardless of truth value. The algorithm amplifies content that generates reactions, shares, and comments. Disinformation often outperforms accurate information because outrage and tribal signaling drive engagement. The war room monitors for specific violations while the underlying incentive structure continues producing problems.

Facebook's moderation challenge is genuinely difficult at scale: billions of posts daily, dozens of languages, cultural context that matters for interpretation, bad actors constantly adapting tactics. No amount of human moderators can review everything, and algorithmic detection produces both false positives and false negatives. But these difficulties don't absolve responsibility—they're consequences of building a platform at an unsustainable scale while prioritizing growth over governance.

The war room focuses on identifiable violations—fake accounts, coordinated inauthentic behavior, policy-violating content. But most election disinformation operates in gray areas. Misleading framing that's technically accurate. Selectively edited videos. Amplification of genuine but unrepresentative voices. Targeted advertising with different messages to different audiences. These tactics don't violate clear rules but still undermine informed democratic participation.

Trust in Facebook's judgment is further undermined by the company's history of privacy violations, misleading public statements, and prioritizing profit over social responsibility. The war room asks the public to trust that Facebook will make good-faith decisions about political speech, but the company has repeatedly demonstrated that growth and revenue take precedence over other values.

The deeper question: should a private company controlled by a single individual (Mark Zuckerberg retains voting control despite minority ownership) have this much power over democratic discourse? Facebook's decisions about what content to allow, what to remove, and what to amplify affect election outcomes. That's an inappropriate concentration of power regardless of whether Facebook exercises it well or poorly.

My conclusion aligns with the skepticism in my initial note: education, advocacy, and empowerment matter more than trusting Facebook to solve problems its business model creates.

The war room might catch some bad actors. But addressing election integrity requires changing incentive structures, reducing platform power, and building public capacity for critical engagement rather than hoping Facebook protects democracy while maximizing shareholder value.


This post from the ACLU is directed at activists who might be involved in the MeToo movement. I've been revising my posts and information focused on privacy and security online. Most guidance frames this as material for "people that have reason to have privacy concerns," usually identified as journalists, politicians, and activists.

As detailed by the post above about Pernille Ripp, this information is for everyone.

Take a scan through the recommendations from the ACLU on this page:

- Secure your accounts and devices
- Scrub your public information

The ACLU guide reveals an important truth: digital security isn't a special precaution for high-profile individuals—it's a necessary practice for anyone participating in public discourse, especially around controversial topics. The same threats facing MeToo activists—doxxing, harassment, account compromise, threats to employment—also face educators like Pernille Ripp, and potentially anyone who speaks about justice issues.

The recommendations seem basic, but most people don't follow them. Unique passwords everywhere means using a password manager, because human memory can't handle dozens of strong unique passwords. Most people reuse passwords across sites, meaning the compromise of one account cascades to others. Attackers know this and try stolen credentials across multiple platforms.
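To make the "strong unique passwords" point concrete, here's a minimal sketch (my illustration, not part of the ACLU guide) of generating a cryptographically strong password with Python's standard library. In practice a dedicated password manager handles both generation and storage:

```python
import secrets
import string

# Letters, digits, and punctuation give roughly 6.5 bits of entropy per character.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically strong random password.

    Uses the `secrets` module, which draws from the OS's secure
    random source, rather than the predictable `random` module.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

A 20-character password from this alphabet is far beyond brute-force range; the hard part isn't generation but never reusing it, which is exactly what a manager automates.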

Phishing remains remarkably effective because sophisticated attacks are hard to distinguish from legitimate messages. That email appearing to come from your school district or social media platform might be an attacker trying to steal credentials. Checking URLs carefully, verifying unexpected messages through alternative channels, and maintaining skepticism about urgent requests all matter.

Two-factor authentication (2FA) provides a crucial additional security layer. Even if an attacker steals your password, they can't access the account without the second factor—a code from an authenticator app, a text message, or a hardware key. 2FA isn't perfect (SMS-based 2FA has vulnerabilities), but it dramatically reduces the risk of account takeover.
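For the curious, the codes your authenticator app shows aren't magic: they come from the TOTP algorithm (RFC 6238), which is short enough to sketch with Python's standard library. This is my illustration for understanding, not something from the ACLU guide—use a real authenticator app in practice:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                 # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at Unix time 59 the 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because the code depends only on a shared secret and the current time, both your phone and the server can compute it independently; an attacker with your password alone cannot.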

Staying patched means installing software updates promptly. Many people delay updates because they're inconvenient, but updates often fix security vulnerabilities that attackers actively exploit. Unpatched devices are low-hanging fruit for compromise.

Scrubbing public information means removing or limiting personally identifiable information available online—home address, phone number, family member names, workplace details. Attackers use this information for doxxing (publishing private information to enable harassment) or social engineering (using personal details to manipulate targets or others into providing access).

The guide's existence for MeToo activists, combined with Ripp's harassment experience, reveals that speaking about gender justice, racial equity, or LGBTQ+ inclusion now carries real security risks. This should be outrageous—educators and advocates shouldn't need counterintelligence operational security to do their work. But until we address harassment infrastructure systemically, individual security becomes necessary defensive measure.

For educators, this has professional development implications. We need training on digital security, not just for ourselves but so we can teach students. If students will face these threats as they develop their voices and speak about justice, we have a responsibility to prepare them with security skills alongside critical thinking.

The democratization of security knowledge is itself a justice issue. Historically, only well-resourced individuals and organizations had access to security expertise. Making guides like this freely available helps level the playing field so marginalized voices can participate safely rather than being silenced through insecurity.


I've been looking at different models to help you focus on your priorities as you identify future goals. Most of these are a challenge to apply and don't help you think through how this works in your life. This video has been the most straightforward.

List the top 25 goals you'd like to achieve. You focus on the top five; the remaining 20 you avoid at all costs. This helps you deal with issues of selective focus…and the habit of saying yes to everything.

Buffett's rule addresses the most common productivity problem, which isn't doing unimportant things but doing too many moderately important things. We understand that watching television all evening doesn't advance our goals. But working on our seventh-priority project feels productive even though it prevents focus on top priorities.

The counterintuitive insight: Items 6-25 on your priority list are your enemies. Not because they're bad goals but because they're good enough to justify attention while not being important enough to deserve it. They feel like progress while preventing actual progress on what matters most.

The "avoid at all costs" framing sounds extreme but captures important truth about attention. Every hour spent on goal #12 is hour not spent on goal #2. When you have 25 goals, you make slow progress on all of them rather than substantial progress on top priorities. But achievement typically comes from depth rather than breadth—doing few things excellently rather than many things adequately.

This connects to teaching and professional development. Educators face endless opportunities—new technologies to learn, pedagogical approaches to try, professional communities to join, side projects to pursue. All are valuable. None are clearly wrong. But saying yes to everything means doing nothing well. The exhaustion and lack of progress don't come from laziness but from a lack of ruthless prioritization.

Applying the rule requires honest confrontation with what actually matters versus what feels like it should matter. That networking opportunity sounds valuable, but is it in the top five? That interesting article deserves reading, but does it advance top priorities? That request to serve on a committee is an honor, but does it align with core goals?

The emotional difficulty: putting goals on the avoid-at-all-costs list feels like giving up on them. But practically, when we have 25 priorities, we're already not making meaningful progress on items 6-25. The rule just makes that reality explicit and redirects energy accordingly.

For students, this lesson matters enormously. College especially encourages exploration and involvement in everything. But students who focus deeply on a few areas often achieve more and develop expertise rather than surface engagement with many fields. The student involved in twelve clubs makes less impact than the student deeply invested in two.

The rule isn't permanent. The top five can change as life circumstances shift or goals are achieved. But at any given time, ruthless focus on a few priorities produces better outcomes than diffused attention across many.

Implementation challenge: identifying the top 25 goals is relatively easy. Choosing the top five from that list requires difficult prioritization. Actively avoiding items 6-25 requires saying no to opportunities that seem valuable. But this discipline is exactly what separates achievement from well-intentioned busyness.
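The mechanics of the exercise are trivial to express in code; the discipline is in honoring the split. Here's a toy sketch (my illustration, not Buffett's or the video's) that makes the 5/20 partition explicit:

```python
def partition_goals(ranked_goals, focus_count=5):
    """Split a priority-ordered goal list into (focus, avoid) lists.

    Everything past the focus cutoff goes on the explicit
    avoid-at-all-costs list rather than a vague "someday" pile.
    """
    return ranked_goals[:focus_count], ranked_goals[focus_count:]

# 25 goals, already ranked by importance (hypothetical placeholders).
goals = [f"goal {i}" for i in range(1, 26)]
focus, avoid = partition_goals(goals)
print(focus)   # the five to pursue
print(avoid)   # the twenty to actively avoid
```

The hard step the code can't do for you is the ranking itself, which is the honest confrontation the rule demands.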


🤔 Consider

"Sometimes your joy is the source of your smile, but sometimes your smile can be the source of your joy." — Thich Nhat Hanh

We often assume that feeling must precede expression—that we smile because we're happy, act because we're motivated, engage because we're interested. But causality runs both directions. The act of smiling can generate happiness. Choosing engagement can create interest. Expressing values can strengthen commitment to them. The issue's focus on "thinking of better than this" isn't just aspirational imagination but recognition that we must sometimes act our way into better thinking rather than waiting to think our way into better action. Change requires both imagining alternatives and embodying them through choices, practices, and behaviors that create feedback loops reinforcing new patterns. Sometimes we advocate for justice because we feel committed to it. Sometimes we feel committed to justice because we act as advocates. The smile creates the joy even as joy creates the smile.


Previous: TLDR 165 • Next: TLDR 167 • Archive: 📧 Newsletter

Part of the 📧 Newsletter archive documenting digital literacy and technology.