TLDR 167
Too Long; Didn't Read Issue 167
Published: 2018-09-29 • 📧 Newsletter
Welcome to Issue 167. Stepping into the Twilight Zone.
TL;DR is a weekly look at the news in technology, education, and literacy. I'm seeking to keep you on top of the news so you can be the expert.
This week was spent working on many things behind the scenes.
🔖 Key Takeaways
- Board as Picket Sign: Valencia Clay demonstrates how educators can use Instagram and social media platforms to inspire critical thinking and amplify student voices beyond traditional classroom walls.
- Facebook's Breach Compounded: A security breach affecting 50 million accounts became more damaging when researchers revealed that Facebook had used security phone numbers and shadow contact information for advertising targeting.
- Instagram's Independence Ends: Founders leaving after Facebook acquisition signals platform's transformation from independent creative space to fully integrated Facebook property with uncertain implications for users.
- Fragmented Information Landscape: Cable news hosts covering entirely different stories in the same week illustrate how Americans inhabit separate information universes with minimal shared reality.
- Password Manager Evolution: Transitioning from cloud-based LastPass to algorithmic LessPass reflects broader tension between convenience and security in digital identity management.
📺 Watch
Valencia Clay: Board as Picket Sign
Valencia Clay, an 8th grade teacher at the Baltimore Design School, has made headlines for her inspirational teaching style. She shares her lessons on Instagram, pushing her students to think critically about themselves. Clay says her "board is her picket sign."
Clay's framing of her classroom board as a picket sign reveals a powerful understanding of teaching as activism. The picket sign connects to labor organizing and social justice movements: a visible public declaration of demands for change. By positioning her daily lessons as acts of advocacy, Clay claims teaching as inherently political work rather than neutral information transfer.
The Instagram documentation serves multiple purposes. It makes her teaching public, allowing parents, students, and the broader community to engage with the curriculum. It models digital literacy and social media pedagogy, showing students how platforms can amplify messages and build community rather than merely serve up content to consume. It creates accountability and transparency about what students are learning.
The messages themselves address identity, justice, self-worth, and critical consciousness. Prompts like "Who are you becoming?" and lessons about systemic racism, representation, and power give students frameworks for understanding their experiences and envisioning futures. This isn't supplemental to academic content; it's the foundation that enables learning by helping students see themselves as knowers and meaning-makers.
Clay's approach connects to culturally sustaining pedagogy and critical pedagogy traditions. Students can't learn effectively when curriculum ignores or marginalizes their identities and experiences. Building critical consciousness about systems of oppression empowers students to resist narratives that position them as deficient. The board messages affirm students' humanity and worth while challenging them to develop agency.
The Instagram sharing also invites broader conversation about what counts as curriculum and who has authority to determine it. Traditional gatekeeping restricts what gets taught to standards and textbooks approved by distant authorities. Making lessons public through social media bypasses those gates—Clay decides what her students need and invites community conversation rather than waiting for permission.
There are complexities around social media pedagogy worth considering. Not all students may want their learning documented publicly. Platform choices carry implications: Instagram is owned by Facebook, raising privacy concerns. The digital divide means not all families have equal access to this form of engagement. Virality can bring unwanted attention and scrutiny.
But Clay's framing of her board as picket sign cuts through those concerns to the heart of teaching's purpose. If education is about liberation—helping young people understand and transform their worlds—then teaching necessarily involves advocacy. The board isn't neutral content delivery but daily declaration that students matter, their questions are important, their liberation is worth fighting for. That's teaching as activism, and it's work worth amplifying.
📚 Read
Facebook Security Breach Affects 50 Million Accounts
Friday afternoon my email and apps across my devices started sending me notifications to let me know that my Facebook credentials were invalid. I soon found out that the social network disclosed that an unprecedented security issue, discovered September 25, impacted almost 50 million user accounts. Facebook responded by immediately logging out roughly 90 million users from the network. Facebook also confirmed that third-party sites that those users logged into with their Facebook accounts could also be affected.
This story is still evolving, but for now…please protect yourself. It's a good idea to change passwords, enable two-factor authentication, and take a look at times you've used social sign-on with Facebook.
The breach exploited a vulnerability in Facebook's "View As" feature, which lets users see how their profile appears to others. Attackers used this to steal access tokens: the digital keys that keep users logged in without requiring repeated password entry. With these tokens, attackers could access accounts as if they were the legitimate users.
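To make that concrete, here is a minimal sketch of token-based sessions in Python. The names (`SESSIONS`, `log_in`, `authenticate`) are hypothetical, and this is nothing like Facebook's actual implementation; it only illustrates why a stolen token is as good as a password.

```python
import secrets

SESSIONS = {}  # token -> user id: the server-side session store

def log_in(user, password, password_db):
    """Issue an access token after a one-time password check."""
    if password_db.get(user) != password:
        raise PermissionError("bad credentials")
    token = secrets.token_hex(16)  # random, unguessable token
    SESSIONS[token] = user
    return token

def authenticate(token):
    """Later requests present only the token; no password is involved.
    Whoever holds the token is treated as the account owner."""
    return SESSIONS.get(token)

passwords = {"alice": "hunter2"}
token = log_in("alice", "hunter2", passwords)

# An attacker who steals `token` (as via the "View As" flaw) is
# indistinguishable from Alice to the server:
assert authenticate(token) == "alice"
```

The password check happens once at login; every subsequent request rides on the token alone, which is why leaked tokens had to be revoked by forcing 90 million re-logins.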
The scale is staggering—50 million compromised accounts, 90 million forcibly logged out as precaution. But numbers alone don't capture the full impact. Facebook Connect means many users log into dozens of third-party sites and apps using Facebook credentials. Compromised Facebook access potentially exposes all those connected services—shopping accounts, banking apps, work tools, health platforms.
The breach reveals the cascading vulnerability of centralized authentication. Facebook Connect offers convenience: one login for many services. But this creates a single point of failure, where one breach compromises everything. The trade-off between convenience and security isn't abstract; it has real consequences when attackers exploit that central point.
Facebook's response shows both competence and a concerning normalization. The company detected and patched the vulnerability relatively quickly, notified affected users, and forced re-authentication by resetting access tokens. That's professional incident response. But the regularity of Facebook's privacy and security problems (Cambridge Analytica, shadow contact information, election manipulation, and now this breach) suggests systemic issues rather than isolated incidents.
The breach also highlights the trust problem Facebook faces. When a company consistently prioritizes growth and engagement over user protection, incidents like this reinforce the perception that Facebook can't be trusted with sensitive information. Advising people to enable two-factor authentication and change passwords is a necessary immediate response, but it doesn't address the underlying question of whether people should trust Facebook at all.
The third-party site implications matter enormously for understanding modern web security. Many users don't realize how widely their Facebook authentication spreads, or that compromising Facebook means compromising everything connected to it. This lack of transparency about security implications is itself a security vulnerability: users can't make informed decisions about risk when they don't understand the systems they're embedded in.
From an educational perspective, this breach offers a teachable moment about authentication systems, single sign-on risks, password hygiene, the importance of two-factor authentication, and broader questions about trusting centralized platforms with identity management. Students growing up with social login as the default need to understand what they're risking and what alternatives exist.
The recurring pattern of Facebook incidents also reinforces the argument that platform regulation is necessary. Individual users can't audit Facebook's security, evaluate its code, or force it to prioritize protection over growth. Only regulatory pressure and meaningful penalties create incentives for platforms to invest adequately in security and privacy.
You Gave Facebook Your Number For Security. They Used It For Ads
Facebook already had tons of people looking to delete their accounts over previous privacy and security issues. Along with news of this week's data breach, there are also stories about the company explicitly providing personal information for targeted advertising.
A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, discovered that Facebook is giving out your phone number to advertisers. The social network is also giving away your "shadow info" to advertisers. Shadow info is the contact and personal information that you may not give to the network…but they pull from information other users may have about you.
This represents a betrayal of user trust that goes beyond typical privacy violations. Users provided phone numbers specifically for security purposes: two-factor authentication, account recovery, identity verification. Facebook explicitly positioned phone number collection as a security measure, creating the expectation that the information would be used solely for protection. Instead, Facebook monetized those security phone numbers by making them available to advertisers for targeting.
The shadow contact information problem is even more insidious. You never gave Facebook your phone number or email address. But your friend uploaded their contacts to Facebook, which included your information. Facebook now has data about you that you never consented to provide, collected from third parties without your knowledge. And they're using this shadow profile information to enable advertisers to target you.
The technical mechanism: Advertisers can upload lists of phone numbers or email addresses for "custom audiences"—targeting ads to specific people. Facebook matches those identifiers against its database, which includes both information you provided and shadow information scraped from other users' contacts. So even if you never gave Facebook your phone number, they can match you through shadow data and deliver ads.
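That matching flow can be sketched in a few lines of Python. Everything here is invented for illustration (the phone numbers, user IDs, and function names are hypothetical), but ad platforms do commonly match custom audiences on hashed identifiers roughly like this:

```python
import hashlib

def h(identifier):
    # Platforms typically match on normalized, hashed identifiers
    # (SHA-256 here) rather than raw phone numbers or emails.
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

# Hypothetical platform-side index: identifiers users provided themselves,
# plus "shadow" identifiers scraped from other users' uploaded contacts.
profile_index = {
    h("+15551234567"): "user_a",   # provided by user_a for 2FA
    h("+15559876543"): "user_b",   # never given by user_b; came from a
                                   # friend's uploaded address book
}

def build_custom_audience(advertiser_numbers):
    """Match an advertiser's uploaded list against the platform's index."""
    return {profile_index[h(n)]
            for n in advertiser_numbers
            if h(n) in profile_index}

# user_b is targetable despite never sharing a number with the platform:
audience = build_custom_audience(["+15559876543"])
assert audience == {"user_b"}
```

The key point the sketch shows: the match succeeds regardless of where the platform's copy of the identifier came from, which is exactly how shadow data defeats individual privacy choices.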
This violates basic principles of informed consent. Consent requires knowing what you're agreeing to, having meaningful choice to decline, and understanding how information will be used. None of those conditions are met when Facebook collects shadow information, repurposes security-related data for advertising, and operates through opacity that prevents users from understanding or controlling how their information is used.
The defense Facebook offered—that users agreed to this in terms of service—exemplifies the inadequacy of notice-and-consent privacy frameworks. No reasonable user would understand from reading Facebook's terms of service (which almost nobody reads) that providing phone numbers for security would enable advertisers to target them. The terms provide legal cover without genuine consent.
The shadow contact information also reveals how collective data undermines individual privacy. You can make perfect privacy choices for yourself—minimal information shared, careful privacy settings, security-focused decisions. But you can't control what information about you exists in other people's devices and gets uploaded to platforms. Your privacy depends not just on your choices but on choices of everyone connected to you.
From an ethical perspective, this is a stark example of why the surveillance capitalism business model is fundamentally problematic. When a company's revenue depends on extracting maximum value from user data, there will always be pressure to repurpose information collected for one purpose to serve advertising goals. Security and user welfare will be subordinated to monetization.
The breach discussed above, combined with the shadow contact information abuse, creates a perfect storm: Facebook's security practices were inadequate to protect data it collected under false pretenses and was already misusing for purposes users never consented to. At some point, individual incidents compound into a pattern revealing a platform that fundamentally can't be trusted.
For users, this suggests minimizing the information provided to Facebook and reconsidering whether a Facebook account is worth the privacy compromise. For educators teaching about digital privacy, it offers a concrete example of why reading privacy policies isn't enough, why "free" services have hidden costs, and why questioning platform trustworthiness is rational rather than paranoid.
The End of Instagram As We Know It Is Here
You may know that Facebook purchased Instagram a little over six years ago. During that time period, Facebook indicated that they were planning on leaving Instagram alone for the most part. Instagram founders Kevin Systrom and Mike Krieger stayed on to work with Facebook and lead Instagram. This week all of that changed when Systrom and Krieger stepped down.
For now, we don't immediately know what will happen to Instagram. But, given the regular questions and problems we have with Facebook, it'll be interesting to see what will become of the platform.
The founders' departure signals the end of Instagram's semi-autonomous existence within Facebook. When Facebook acquired Instagram for $1 billion in 2012, the deal included an implicit understanding that Instagram would maintain an independent identity, brand, and product direction. Systrom and Krieger staying on suggested they still controlled the platform they built.
Their sudden departure, reportedly due to conflicts with Mark Zuckerberg over Instagram's independence, reveals that the autonomy was an illusion, or at least a temporary condition. Facebook tolerates independence only as long as it serves Facebook's interests. When a platform becomes valuable enough, or Facebook's needs become pressing enough, independence gets revoked.
The pattern repeats across Facebook's acquisitions. WhatsApp founders also left after conflicts over monetization and privacy. Facebook promises independence to close deals and reassure users that beloved platforms won't change. Then gradually or suddenly, Facebook integrates acquisitions into its ecosystem, imposing its values, business models, and priorities.
For Instagram users, this matters because Facebook and Instagram have fundamentally different cultures and approaches. Instagram built a reputation for visual creativity, influencer culture, and a relatively positive atmosphere compared to Facebook's toxicity. Instagram maintained a separate brand and user experience. With the founders gone, nothing prevents Facebook from deeper integration: shared infrastructure, unified advertising, algorithmic changes that prioritize Facebook's goals over Instagram's user experience.
Early signs of Facebook's influence were already visible: Instagram Stories copied Snapchat at Facebook's direction, long-form video got pushed to compete with YouTube, shopping features appeared to monetize the platform, and the algorithmic feed replaced the chronological one to maximize engagement. With the founders' departure, these Facebook-style changes will likely accelerate.
The broader implications concern tech industry consolidation. Facebook has neutralized potential competitors by acquiring them and then either killing them or absorbing them into its empire; either way, Instagram could never challenge Facebook as an independent competitor. This acquisition-and-absorption strategy eliminates the competitive pressure that might force Facebook to improve.
Users lose when platforms they valued get absorbed into corporate parent's priorities. Instagram was valuable partly because it wasn't Facebook—different norms, different aesthetics, different community dynamics. Integration erases that difference and the value it provided.
There's also the loss of creative leadership and vision. Systrom and Krieger built a product millions of people loved. Replacing founder-driven vision with professional managers executing Facebook's corporate strategy predictably degrades the product. Founders have idiosyncratic values and deep care for their creations; corporate managers optimize for metrics defined by the parent company.
The lesson: Don't trust platform acquisitions that promise independence. The promise is tactical—it reassures users and prevents exodus while acquisition closes. But long-term, acquisition means control, and control means eventual integration on parent company's terms. If you value platform's independence and distinct identity, assume those will disappear post-acquisition even if founders initially stay.
For Instagram, this likely means more aggressive advertising, deeper Facebook integration, priority given to Facebook's needs over Instagram's community, and a gradual erosion of what made Instagram valuable as a distinct platform. Whether users tolerate this or migrate elsewhere remains to be seen. But the Instagram we knew, a creative haven relatively independent of Facebook's problems, is probably gone with its founders.
How Anderson Cooper, Rachel Maddow, and Sean Hannity Opened Their Shows
I spend quite a bit of my time researching and teaching about media and information literacy. In this work we often find stories about people having completely different world views, depending on their media and information streams.
This post from Business Insider shares the lead story from three cable news hosts in the U.S. over one week. Even though this is from an American context, I think the same patterns can be seen in other nations. This provides a jumping off point to discuss challenges of the content we all consume.
The comparison reveals the staggering fragmentation of the American information environment. Cooper (CNN), Maddow (MSNBC), and Hannity (Fox News) covered completely different stories as their lead items most nights. Not different angles on the same events, but different events entirely, as if they were reporting from parallel universes. Viewers of different networks wouldn't just disagree about what happened; they wouldn't know the same things had happened.
This goes beyond traditional partisan bias, where the same facts get interpreted differently. When networks cover entirely different stories, they're constructing different realities. Events that CNN and MSNBC treat as major news simply don't exist in the Fox universe, and vice versa. The shared factual foundation necessary for democratic deliberation can't exist when citizens literally aren't aware of the same events.
The selection effects compound interpretation effects. Not only does each network present events favoring their ideological perspective, but they choose which events matter. This shapes audience worldview more profoundly than explicit editorial commentary—what doesn't get covered doesn't exist for most viewers. If your network doesn't report an event, you likely won't know it happened or consider it significant.
The business model drives this fragmentation. Networks compete for audiences by giving them content that confirms existing beliefs and tribal identities. Fox viewers don't want to hear stories that challenge conservative narratives. MSNBC viewers seek Trump criticism and progressive perspectives. CNN tries to occupy the "objective" middle but still must retain an audience in a fragmented landscape. Each network optimizes for viewer retention rather than a shared factual foundation.
The algorithmic amplification of fragmentation extends this pattern beyond cable news. Social media feeds, YouTube recommendations, podcast choices, website visits—all get personalized to maximize engagement, which typically means reinforcing existing beliefs. Cable news fragmentation is most visible symptom of broader ecosystem designed to separate people into information silos.
For civic discourse, the consequences are devastating. How do you have a productive political conversation when you and your interlocutor literally aren't aware of the same facts? When events that shaped your worldview simply didn't register in their information environment? When they're focused on completely different concerns because their media diet emphasized different priorities?
The fragmentation also enables manipulation. Politicians and interest groups can craft different messages for different audiences knowing that overlap is minimal. They can make contradictory claims to different constituencies without facing accountability because information silos prevent comparison. They can ignore inconvenient facts knowing their base won't encounter them.
Media literacy education typically focuses on evaluating individual sources: checking credibility, identifying bias, verifying facts. But this fragmentation requires an additional layer: developing awareness of what you're not seeing. What stories aren't covered by your preferred sources? What perspectives are missing? What would someone in a different information environment know that you don't?
Practical strategies for bridging information silos include deliberately consuming diverse sources, following fact-checking organizations, seeking primary sources rather than commentary, and maintaining epistemic humility about whether your information environment is complete. But these require significant effort and willingness to encounter uncomfortable perspectives.
The Business Insider comparison should be disturbing for anyone concerned about democracy. Not because one network is right and the others are wrong (though accuracy varies), but because a functioning democracy requires citizens to share at least a minimal common factual foundation. When that foundation dissolves into irreconcilable information universes, democratic discourse becomes impossible, not because people disagree but because they can't even agree on which events require discussion.
Technology and Compassion: A Conversation with the Dalai Lama
SingularityU The Netherlands recently hosted a dialogue about compassion and technology with His Holiness the Dalai Lama. The video of the event is available here.
In the 21st century, His Holiness said, "There is real possibility to create a happier world, peaceful world. So now we need vision. A peaceful world on the basis of a sense of oneness of humanity."
Technology's role in that world is being developed and refined every day, and we must maintain an ongoing awareness of its positive and negative repercussions—on everyone.
The Dalai Lama's framing of the "oneness of humanity" offers an essential counterweight to the issue's dominant themes of fragmentation, exploitation, and betrayal. The Facebook breaches, the shadow contact information, Instagram's absorption, the cable news silos: all exemplify technology enabling separation, surveillance, and manipulation. But technology itself isn't deterministically harmful. Its impact depends on the values guiding its development and use.
The "sense of oneness" concept challenges technological systems built on division—personalized filter bubbles, targeted advertising that treats people as consumer segments, platforms optimized for tribal engagement. If we took oneness seriously, we'd design technology that surfaces diverse perspectives, connects people across difference, and prioritizes collective wellbeing over individual engagement metrics.
Compassion as design principle would transform tech development. Instead of "how do we maximize time on platform?" the question becomes "how do we serve human flourishing?" Instead of extracting maximum data for advertising, "how do we respect human dignity and privacy?" Instead of optimizing for engagement regardless of truth, "how do we support shared understanding and wisdom?"
The challenge: Compassion and oneness don't align with surveillance capitalism's business model. Facebook's revenue depends on granular targeting and engagement maximization, not human flourishing. Instagram gets absorbed because Facebook prioritizes growth and control over maintaining distinct community values. Cable news fragments reality because outrage and tribal identity drive viewership better than shared truth and understanding.
The Dalai Lama's vision requires imagining technology built on different foundations. What if platforms were public goods governed democratically rather than private companies maximizing shareholder value? What if we valued connection quality over engagement quantity? What if we recognized that my privacy and your privacy are interdependent, requiring collective rather than individual solutions?
The technological possibility exists. We have tools that could enable unprecedented human connection, knowledge sharing, collective problem-solving, and compassionate response to suffering. The question is whether we'll overcome economic and political structures that currently channel technology toward surveillance, exploitation, and fragmentation.
For educators, this conversation matters because we're shaping students who will build or inhabit technological futures. Teaching them to critique surveillance capitalism and recognize manipulation is necessary. But we also need to help them envision alternatives—technology serving compassion and oneness rather than profit and control. Without that vision, critique leads to despair rather than transformation.
The Dalai Lama's emphasis on vision is crucial. We can't design compassionate technology if we can't imagine it. We can't build toward oneness if we accept fragmentation as inevitable. The conversation invites us to think beyond incrementally less harmful versions of current systems toward fundamentally different approaches aligned with human values.
The juxtaposition of this conversation with the week's news about Facebook and Instagram demonstrates the gap between technological possibility and reality. We have incredible tools for human connection and collective wisdom. We're using them for surveillance, manipulation, and tribal division. Closing that gap requires vision, will, and structures that prioritize human flourishing—exactly what the Dalai Lama calls for in emphasizing compassion and oneness as foundations for technological development.
Why I've Just Ditched My Cloud-Based Password Manager
While you're reviewing your passwords, you should consider your password system as well. My password manager of choice has been LastPass for several years. I have to admit that I've increasingly been having problems with LastPass: it has become slow and has started forgetting passwords.
Doug Belshaw is transitioning over to LessPass and documents this process here. I think I'm going to follow his lead and try out LessPass.
The password manager comparison reveals a fundamental tension between convenience and security. LastPass and similar cloud-based managers offer exceptional convenience: passwords sync across all devices, auto-fill in browsers, and are accessible anywhere with an internet connection. But that convenience requires trusting the cloud provider with your encrypted password database. If the provider gets breached, has a security flaw, or complies with government requests, your passwords potentially become vulnerable.
LessPass takes a radically different approach. Instead of storing encrypted passwords in the cloud, it generates passwords algorithmically from a master password, the site name, and a username. The same inputs always produce the same password, but the password never needs to be stored anywhere. There's no cloud sync because nothing syncs; you regenerate passwords on each device using the algorithm.
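The idea can be sketched in a few lines of Python. This is an illustration of deterministic derivation only, not LessPass's actual algorithm or parameters (LessPass has its own key-derivation settings and character-set rules):

```python
import hashlib
import string

def derive_password(master, site, username, length=16):
    """Deterministically derive a site password from a master password.
    Illustrative sketch only; not LessPass's real scheme."""
    # Slow key derivation makes brute-forcing the master password costly.
    seed = f"{site}:{username}".encode()
    key = hashlib.pbkdf2_hmac("sha256", master.encode(), seed, 100_000)
    # Render the derived bytes into printable password characters.
    alphabet = string.ascii_letters + string.digits + "!@#$%"
    return "".join(alphabet[b % len(alphabet)] for b in key[:length])

# The same inputs regenerate the same password on any device, with
# nothing stored or synced anywhere:
p1 = derive_password("correct horse battery staple", "example.com", "alice")
p2 = derive_password("correct horse battery staple", "example.com", "alice")
assert p1 == p2

# Different sites yield unrelated passwords from the same master:
assert p1 != derive_password("correct horse battery staple", "other.com", "alice")
```

The trade-off is visible right in the sketch: the master password is the single secret everything depends on, and changing a site's password requires changing one of the inputs rather than editing a stored entry.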
The security benefits are significant. No password database means nothing to steal or breach. No company holds your encrypted passwords, so there's no third-party vulnerability. No sync means no interception risk. You're not trusting LastPass's security practices; you're trusting mathematics and your ability to remember one strong master password.
But LessPass trades convenience for security. Changing a password requires updating settings in LessPass and remembering which sites have custom settings. Password regeneration takes extra seconds. No auto-fill means more manual workflow. Migration from LastPass requires manually setting up each site in LessPass. For people managing hundreds of passwords, this is non-trivial effort.
The choice reflects your values and threat model. If convenience matters more and you trust LastPass's security, a cloud-based manager makes sense. If you're concerned about third-party breaches (like Facebook's, discussed above) and prefer an architecture where no provider holds your passwords at all, algorithmic generation is compelling despite the inconvenience.
There's also a philosophical difference about where security responsibility lies. Cloud managers outsource security to the provider: you trust them to protect the database, maintain security practices, and respond to breaches. Algorithmic approaches keep security responsibility with the user: you must remember the master password, maintain security on your devices, and manage your LessPass configuration. Neither is objectively better, but they distribute risk differently.
The timing of this discussion alongside Facebook's breach matters. Facebook asked users to provide phone numbers for security, then exploited that information for advertising. LastPass asks users to trust them with passwords for convenience. When platforms repeatedly demonstrate untrustworthiness, preference for self-hosted or algorithmic solutions rather than trusting third parties becomes more rational.
For educators teaching digital literacy, password managers exemplify broader principles: convenience and security trade off, zero-knowledge architecture reduces trust requirements, understanding your threat model informs tool choices, no solution is perfect but some align better with different priorities. Students need frameworks for evaluating these trade-offs rather than accepting defaults.
My own consideration of switching reflects an accumulated erosion of trust in cloud services. LastPass has been fine. But after enough breaches, privacy violations, and examples of platforms exploiting user data, the appeal of solutions that don't require trust grows stronger. LessPass isn't perfect: losing the master password means losing everything, regeneration is slower, and setup requires effort. But at least the vulnerability is under my control rather than dependent on third-party security practices I can't evaluate or control.
🤔 Consider
"Stay angry, little Meg," Mrs Whatsit whispered. "You will need all your anger now." — Madeleine L'Engle, A Wrinkle in Time
The twilight zone quality of the modern technology landscape, where platforms betray security for profit, acquired companies get absorbed despite promises of independence, citizens inhabit separate information universes, and compassionate technological possibility gets channeled toward exploitation, should not be normalized. Mrs Whatsit's wisdom recognizes that anger isn't a character flaw to overcome but the energy needed for resistance: not performative outrage or destructive rage, but clear-eyed fury about injustice that motivates action. Stay angry about platforms exploiting users. Stay angry about consolidation killing alternatives. Stay angry about fragmentation undermining shared reality. Stay angry about technology serving profit over human flourishing. The twilight zone we're in doesn't have to be a permanent reality, but accepting it as normal ensures it continues. We need all our anger now, not to lash out but to refuse to accept the unacceptable and work toward something better.
🔗 Navigation
Previous: TLDR 166 • Next: TLDR 168 • Archive: 📧 Newsletter
🌱 Connected Concepts:
- Facebook Data Breach — Security vulnerability affecting 50 million accounts exposed access tokens enabling account takeover with cascading implications for third-party sites using Facebook authentication in Platform Security.
- Shadow Contact Info — Facebook monetizing phone numbers provided for security plus contact information scraped from other users without consent reveals systematic privacy betrayal in Surveillance Capitalism.
- Instagram Platform Changes — Founders Kevin Systrom and Mike Krieger leaving after Facebook acquisition signals end of platform's independence and integration into parent company's priorities in Tech Consolidation.
- Media Literacy — Cable news hosts covering entirely different stories demonstrates fragmented information environment preventing shared factual foundation necessary for democratic discourse in Information Silos.
- Password Management — Transition from cloud-based LastPass to algorithmic LessPass reflects tension between convenience and security with implications for trust and control in Digital Security.
Part of the 📧 Newsletter archive documenting digital literacy and technology.