DL 206

Lose Yourself

Published: 2019-07-20 • 📧 Newsletter

Welcome to issue 206. Lose yourself.

Hi all, my name is Ian O'Byrne and welcome to Digitally Literate. In this newsletter I distill the news of the week in technology into an easy-to-read resource. Thank you for reading.

This week I worked on a number of things in the background. More info coming soon.


🔖 Key Takeaways


📺 Watch

Jessikka Aro is a Finnish investigative journalist who faced down death threats and harassment over her work exposing Russia's propaganda machine long before the 2016 U.S. presidential election.

In this short clip, she elaborates on what she has learned and the risks she has endured.

Aro's testimony matters because she documented Russian disinformation operations when most Western observers weren't paying attention. Finland's proximity to Russia made Finnish journalists early witnesses to techniques later deployed globally: troll farms, coordinated harassment, weaponized social media, manufactured controversy. The death threats she received demonstrate how seriously state actors take information warfare—journalists who expose operations become targets. Her persistence despite personal cost models what accountability journalism requires in an era when powerful actors actively try to suppress it. The 2016 surprise shouldn't have been surprising; people like Aro were sounding alarms for years.


📚 Read

The FaceApp meme that is going around is a big risk to your data and privacy. You should avoid it if you haven't already jumped on board. In truth, most of what you already do online, especially on Facebook, is a bad idea anyway.

If you do share your photos, it probably won't destroy society, but the privacy trade-off is still shady as hell. You're giving up data (your photos) to an unknown party…forever. If I asked you to send me a bunch of photos of yourself, I'd hope you'd think twice.

My feeling is that what is happening is they're using your content to train machine learning engines. That is to say, the content is being used not only to make you look older or younger; the data around that transaction is much more important.

The machines learn from the images you select, your reactions to these images, and the reactions from those in your network. All of that is gold for training these systems.
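
To make that concrete, here is a minimal, purely hypothetical sketch of what that kind of pipeline could look like: each upload becomes a labeled training example pairing the photo with the filter the user chose and the engagement it earned. Every name here (Upload, TrainingExample, build_dataset, the engagement fields) is an illustrative assumption, not anything FaceApp has disclosed about its actual system.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of how an app *could* turn uploads and reactions
# into supervised training data. Nothing here reflects FaceApp's actual,
# undisclosed pipeline.

@dataclass
class Upload:
    photo_bytes: bytes        # the original selfie the user sent
    filter_chosen: str        # e.g. "old", "young", "smile"
    user_kept_result: bool    # did the user save or share the output?
    network_reactions: int    # likes/comments when it was shared

@dataclass
class TrainingExample:
    image: bytes              # input image for the model
    target_transform: str     # label: which transformation to learn
    weight: float             # how much to trust this example

def build_dataset(uploads: List[Upload]) -> List[TrainingExample]:
    """Weight examples by the engagement signals described above:
    the user's own choice plus the reactions of their network."""
    dataset = []
    for u in uploads:
        # Engagement acts as a free quality label: results people keep
        # and share are treated as "better" transformations.
        weight = 1.0
        if u.user_kept_result:
            weight += 1.0
        weight += min(u.network_reactions, 100) / 100.0
        dataset.append(TrainingExample(u.photo_bytes, u.filter_chosen, weight))
    return dataset

if __name__ == "__main__":
    uploads = [
        Upload(b"<jpeg...>", "old", True, 42),
        Upload(b"<jpeg...>", "young", False, 0),
    ]
    for ex in build_dataset(uploads):
        print(ex.target_transform, round(ex.weight, 2))
```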

FaceApp crystallizes the privacy paradox: millions eagerly uploaded photos to an app from unknown Russian developers with vague terms of service granting perpetual rights to their images. The aging filter was fun; the data extraction was the business model. The machine learning hypothesis is almost certainly correct: facial transformation requires massive training datasets, and users volunteered theirs. But the broader lesson is grimmer—no amount of privacy education prevents viral spread of data-extracting apps when they offer entertaining experiences. We've run this experiment repeatedly: convenience and entertainment beat privacy concerns every time. The "shady as hell" framing is accurate but ineffective; people already know and don't care enough to stop.

The OpenPower Foundation — a nonprofit led by Google and IBM executives with the aim of trying to "drive innovation" — has set up a collaboration between IBM, Chinese company Semptian, and U.S. chip manufacturer Xilinx. Together, they have worked to advance a breed of microprocessors that enable computers to analyze vast amounts of data more efficiently.

Shenzhen-based Semptian is using the devices to enhance the capabilities of internet surveillance and censorship technology it provides to human rights-abusing security agencies in China, according to sources and documents. A company employee said that its technology is being used to covertly monitor the internet activity of 200 million people.

The Intercept's investigation reveals how "innovation" partnerships enable authoritarianism. OpenPower's mission sounds benign—advancing microprocessor technology—but the application matters enormously. Semptian's surveillance systems monitor 200 million people, enabling persecution of Uyghurs, dissidents, and anyone the Chinese state considers threatening. Google and IBM executives sitting on the foundation's board can't claim ignorance; due diligence would reveal Semptian's clients. The "we just provide technology, not how it's used" defense collapses when the use case is obvious. American tech companies are directly enabling human rights abuses at scale, and the nonprofit structure obscures accountability.

When we bring technology into the lives of youth, there is an understanding that the terms of use are often guided by the age of the child.

This post from the Wall Street Journal gives the inside story of the Children's Online Privacy Protection Act (COPPA), a law from the early days of e-commerce that is shaping a generation and creating a parental minefield.

"Across the board, parents and youth misinterpret the age requirements that emerged from the implementation of COPPA," BKC researchers wrote in a 2010 publication. "Except for the most educated and technologically savvy, they are completely unaware that these restrictions have anything to do with privacy."

COPPA's age-13 threshold was designed to protect children's privacy by requiring parental consent for data collection. In practice, it became an age gate for platform access: kids lie about birthdays, parents help them lie, and the privacy protections never materialize. The "parental minefield" framing captures real confusion: is 13 when kids can use social media? When they're mature enough? When they're legally adults online? None of these interpretations match COPPA's actual purpose. Twenty years later, children's data gets collected anyway through family devices, educational technology, and platforms that don't verify age. The law shaped behavior without achieving its goals.

While the internet has its perils, from privacy breaches to fake news, a whole generation of youth has been teaching itself skills in leadership and community-building, according to a new UC Davis study.

Self-governing internet communities, in the form of games, social networks, or informational websites such as Wikipedia, create their own rule systems that help groups of anonymous users work together. They build hierarchies, create punishments, and write and enforce home-grown policies. Along the way, participants learn to avoid autocrats and find leaders who govern well.

The UC Davis finding reframes gaming from time-wasting to civic education. Self-governing game communities require exactly the skills democratic citizenship requires: negotiating rules, enforcing norms, identifying good leadership, recognizing and resisting authoritarianism. The "learning to avoid autocrats" phrase is striking—players develop intuitions about governance through repeated experience with different leadership styles. This doesn't mean all gaming teaches civics; design matters. But dismissing gaming wholesale ignores genuine learning happening in well-designed communities. The challenge: how do we help young people transfer governance intuitions from virtual to physical contexts?

There is plenty of research dedicated to figuring out how to make Artificial Intelligence human-friendly, but what about making sure AI is built to keep animals safe?

The closest allies creatures have come from animal-computer interaction (ACI), a discipline that officially launched seven years ago with a manifesto that laid out three goals: to enhance animals' quality of life and general well-being; to support animals in the functions assigned to them by humans; and to foster the relationship between humans and animals.

Animal-computer interaction (ACI) extends design ethics beyond human users—recognizing that technology increasingly affects non-human creatures who can't consent or provide feedback in conventional ways. The drone wildlife example is concrete: aerial photography disturbs animals in ways photographers can't perceive. But broader applications include tracking collars, farm automation, pet technology, and conservation sensors. The three goals ACI articulates—enhancing welfare, supporting assigned functions, fostering relationships—provide a framework for evaluating technology's impact on animals. As AI systems make more autonomous decisions affecting animals, these considerations become increasingly urgent.


🔨 Do

Losing yourself in virtual worlds can have positive as well as negative effects.

Video games could become less sedentary, since you have to physically interact with your environment, and they could fight isolation by being inherently social experiences.

The "lose yourself" framing connects to therapeutic potential. Grief, depression, and anxiety all involve intrusive thoughts and rumination—patterns that gaming can interrupt by demanding focused attention elsewhere. This isn't escapism as pathology but escapism as temporary relief, creating space for processing that constant dwelling prevents. The physical interaction point matters for VR and motion-controlled games; the social dimension matters for multiplayer experiences. As with most tools, context determines whether gaming helps or harms: duration, content, what it replaces, and individual circumstances all matter.


🤔 Consider

"Common sense has much to learn from moonshine." — Philip Pullman

Pullman's observation about unconventional wisdom connects to this issue's themes. Common sense said FaceApp was just fun; moonshine recognized data extraction. Common sense dismissed gaming as a waste of time; research shows civic learning. Common sense trusted tech partnerships; investigation revealed the surveillance they enable. Sometimes the "obvious" reading misses what matters most.


Previous: DL 205 • Next: DL 207 • Archive: 📧 Newsletter



Part of the 📧 Newsletter archive documenting digital literacy and technology.