DL 208

Data Rights are Human Rights

Published: 2019-08-03 • 📧 Newsletter

Welcome to issue 208. Data rights are human rights.

Hi all, my name is Ian O'Byrne and welcome to Digitally Literate. In this newsletter I distill the news of the week in technology into an easy-to-read resource. Thank you for reading.


🔖 Key Takeaways

- Data rights are human rights: the rights to know, own, review, remove, and refuse data collection still do not exist in the US.
- Another week, another breach: Capital One and Pearson join a growing list, because breaches carry few real consequences.
- College students get most of their news socially, through discussion with peers, rather than directly from news outlets.
- AI can now generate convincing fake text; tools like GLTR use AI to detect it.


📺 Watch

Jonathan Zimmerman shares some practical ways to disagree with someone and get along with them at the same time. Zimmerman is Professor of History of Education at the University of Pennsylvania's Graduate School of Education. His latest book is The Case for Contention: Teaching Controversial Issues in American Schools.

Key principles:

- Disagreement can be curious rather than hostile.
- Distinguish the emotion from the argument: passionate delivery is not evidence.
- Changing your mind is possible without losing face.
- Conviction does not equal correctness.

Zimmerman's framework addresses the discourse crisis underlying many issues this newsletter covers. Disinformation spreads partly because we've lost capacity for productive disagreement—conversations become performance rather than inquiry. The emotion/argument distinction is crucial: passionate delivery can substitute for evidence, and calling out lack of evidence gets read as personal attack. Teaching controversial issues well requires modeling these skills: showing students that disagreement can be curious rather than hostile, that changing minds is possible without losing face, that conviction doesn't equal correctness. These are citizenship skills as fundamental as reading.


📚 Read

A US lawmaker, Senator Dianne Feinstein, introduced a bill this week, the Voter Privacy Act, that would regulate how political parties use voters' data in federal elections. This legislation is the first to directly respond to Cambridge Analytica, which used Facebook to harvest the data of some 87 million users, often without their permission, in hopes of influencing voter behavior.

Thankfully, many are starting to have a discussion about basic data rights. Data rights are fundamentally human rights.

David Carroll provides a great review of the five basic rights that do not exist yet in the US:

- The right to know
- The right to own
- The right to review
- The right to remove
- The right to refuse

Carroll's five rights framework—know, own, review, remove, refuse—provides clear criteria for evaluating data legislation. Europeans gained these through GDPR; Americans still lack them. "Right to know" means companies must tell you what data they have. "Right to own" means that data belongs to you, not them. "Right to review" means you can see and correct errors. "Right to remove" means you can demand deletion. "Right to refuse" means you can decline collection in the first place. The Voter Privacy Act addresses political data specifically, but the framework applies broadly. Until these rights exist comprehensively, data extraction continues as default.

Capital One disclosed that they were hacked. The breach was first discovered on July 19th.

In a somewhat related story, education software maker Pearson indicated that a data breach affected thousands of accounts in the US. The Wall Street Journal reports that the breach happened in November 2018 and that Pearson was notified by the FBI in March. The perpetrator is still unknown.

Another day, another data breach. After we did nothing about Equifax, these events now feel inevitable.

Matt Blaze suggests that perhaps we should handle data the same way we handle radioactive waste. "Best practice for protecting it is not to collect it in the first place, with potentially unlimited liability for those who mishandle it."

Blaze's radioactive waste analogy reframes data collection. Radioactive materials require extraordinary precautions because harm persists indefinitely; collected data similarly creates permanent vulnerability. The "inevitable" framing matters: Equifax exposed 147 million people with minimal consequences, teaching companies that breaches are survivable costs rather than existential threats. Capital One and Pearson follow logically—why invest in security when failure carries no real penalty? The Pearson breach affecting students is particularly concerning: educational data collected about children creates lifelong vulnerability for people who never consented. The solution Blaze suggests—don't collect it—challenges business models built on data accumulation.

How College Students Engage with News

This research reports results from a mixed-methods study about how college students engage with news when questions of credibility and "fake news" abound in the U.S. Findings are based on 5,844 online survey responses, one open-ended survey question (N=1,252), and 37 follow-up telephone interviews with students enrolled at 11 U.S. colleges and universities.

Results shed light on the information-seeking behaviors of young adults. Of particular interest to me is the social life of news: most respondents (93 percent) got news during the past week through discussions with peers, whether face-to-face or online via text, email, or direct messaging on social media. The majority of respondents had news consumption habits that were multimodal (text, images, video, audio, etc.). Participants also gave extra credibility to information shared by an authority figure, in this case a professor.

The 93% peer discussion finding reveals how news actually circulates among young adults—not through direct consumption of news sources but through social mediation. Friends share, discuss, contextualize, and recommend. This has implications: news literacy education focused on evaluating sources misses how students actually encounter information. The professor credibility effect is interesting but potentially concerning—authority figures can share misinformation too. Multimodal consumption reflects platform reality: news arrives as text, images, video, audio, often combined. Literacy education must address all modes, not just reading comprehension.

We're quickly approaching, if we haven't already arrived at, a place where the machines are just talking to one another.

AI algorithms can generate text convincing enough to fool the average human—potentially providing a way to mass-produce fake news, bogus reviews, and phony social accounts. Thankfully, AI can now be used to identify fake text, too.

Researchers from Harvard University and the MIT-IBM Watson AI Lab have developed a new tool for spotting text that has been generated using AI. Called the Giant Language Model Test Room (GLTR), it exploits the fact that AI text generators rely on statistical patterns in text, as opposed to the actual meaning of words and sentences. In other words, the tool can tell if the words you're reading seem too predictable to have been written by a human hand.

Interested? Try it here.

GLTR represents arms race logic: AI generates convincing text, so AI must detect AI text. The "too predictable" detection method exploits how language models work—they predict likely next words, producing text that's statistically typical. Humans write less predictably, choosing unexpected words, making idiosyncratic errors, expressing genuinely novel ideas. As generation improves, detection must improve correspondingly. The tool's availability matters for educators, journalists, and anyone evaluating text authenticity. But the deeper concern: when AI-generated text becomes indistinguishable from human writing, what happens to trust in written communication itself?
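
To make the "too predictable" idea concrete, here is a minimal sketch of the kind of per-token scoring GLTR performs, assuming GPT-2 loaded through the Hugging Face transformers library (an illustration of the technique, not GLTR's actual code):

```python
# Sketch of GLTR-style detection: score each token by its rank under a
# language model's next-token prediction. Assumes GPT-2 via Hugging Face
# transformers; GLTR's own implementation differs in detail.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[tuple[str, int]]:
    """Return each token with its rank among the model's predictions.

    A passage dominated by low ranks (the model's top guesses) is
    statistically typical and may be machine-generated; human writing
    tends to include more high-rank, less predictable word choices.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, sequence_length, vocab_size)
    results = []
    for pos in range(1, ids.shape[1]):
        # Predictions at position pos-1 score the token that appears at pos.
        scores = logits[0, pos - 1]
        actual_id = ids[0, pos].item()
        # Rank = 1 + number of vocabulary items the model judged more likely.
        rank = int((scores > scores[actual_id]).sum().item()) + 1
        results.append((tokenizer.decode([actual_id]), rank))
    return results

if __name__ == "__main__":
    for token, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
        print(f"{token!r}: rank {rank}")
```

GLTR's interface color-codes tokens along these lines; a wall of highly predictable, low-rank tokens is a hint that a language model, rather than a person, produced the text.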

While preserving democratic and economic institutions in the digital era will require more action from governments and platforms, we the people also need to recognize our responsibilities in these new spaces.

Here are four simple ways to do your part in fighting back:

1. Know your algorithm
2. Retrain your newsfeed
3. Scrutinize your sources
4. Consider not sharing

The Brookings recommendations place responsibility on individuals, which is both necessary and insufficient. "Know your algorithm" means understanding that platforms show you content optimized for engagement, not accuracy. "Retrain your newsfeed" involves deliberately seeking diverse sources rather than accepting algorithmic curation. "Scrutinize your sources" requires evaluation skills many lack. "Consider not sharing" addresses amplification—the pause before retweet that could prevent spread. These individual actions matter but can't substitute for platform design changes and regulatory frameworks. The burden shouldn't fall entirely on users to resist systems designed to manipulate them.


🔨 Do

The emotionally draining news cycle doesn't show any signs of slowing down. A clinical psychologist shares her thoughts on staying grounded—and productive.

This post shares insight about dealing with the "news." I think this provides good advice for dealing with social media in general.

The news cycle stress management advice recognizes what information overload does to mental health. Constant exposure to negative news—especially when we can't act on it—creates helplessness and anxiety. The boundaries point matters most: those whose work requires news engagement (journalists, researchers, educators) face occupational hazard requiring intentional mitigation. Self-care isn't indulgence but sustainability. Channeling frustration into action—contacting representatives, supporting causes, creating content—transforms passive consumption into agency. Knowing when to ask for help acknowledges that news-induced distress can become clinical, requiring professional support.


🤔 Consider

"You can have data without information, but you cannot have information without data." — Daniel Keys Moran

Moran's distinction between data and information connects to this issue's themes. Cambridge Analytica had data—millions of data points about millions of people. They transformed it into information—psychological profiles predicting and influencing behavior. Data rights matter because data becomes information in hands of those who process it. The five rights Carroll articulates—know, own, review, remove, refuse—address data before transformation, giving individuals power over raw material that becomes power over them.




Part of the 📧 Newsletter archive documenting digital literacy and technology.