Digitally Literate #227

Youth Never Forget
January 4, 2020

Hi all, welcome to issue #227 of Digitally Literate, and welcome to 2020. I hope the new year, and the new decade, treat you well. You’re more than welcome to review these materials on the website. Please subscribe if you would like this to show up in your email inbox.

Over the last two weeks, I launched a new podcast that you might be interested in checking out. As part of the Infusing Computing research project, we’re recording and distributing a series of short monthly episodes focused on embedding computational thinking into content-area instruction in middle and high school classrooms. I used Jekyll and GitHub Pages to host the podcast feed. You can view the website here, or dig under the hood here on GitHub. This gave me a free way to host a podcast, as well as a chance to play with GitHub.
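As a rough sketch of how this works: GitHub Pages builds the Jekyll site and serves whatever files it generates, so a podcast feed can simply be a Liquid template that loops over a collection of episode files. The snippet below is a minimal, hypothetical example, not the actual template from the Infusing Computing repo; the collection name `episodes` and the front-matter fields `title`, `date`, and `audio_url` are my own labels.

```xml
<!-- feed.xml — a sketch of a podcast RSS feed generated by Jekyll.
     Assumes _config.yml declares a collection named "episodes" with
     output enabled, and each file in _episodes/ has title, date, and
     audio_url in its front matter. -->
---
layout: null
---
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>{{ site.title }}</title>
    <link>{{ site.url }}</link>
    <description>{{ site.description }}</description>
    {% for episode in site.episodes %}
    <item>
      <title>{{ episode.title }}</title>
      <!-- date_to_rfc822 is a built-in Jekyll filter; RSS requires RFC 822 dates -->
      <pubDate>{{ episode.date | date_to_rfc822 }}</pubDate>
      <!-- the enclosure tag is what podcast apps use to find the audio file -->
      <enclosure url="{{ episode.audio_url | absolute_url }}" type="audio/mpeg" />
    </item>
    {% endfor %}
  </channel>
</rss>
```

Most podcast apps can subscribe to the published feed URL directly; directories such as Apple Podcasts additionally expect `itunes:*` channel and item tags beyond this minimal skeleton.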

Lastly, I had a very nice interview with a reporter from China Daily about the challenges Americans face in being digitally literate. You can read some of my remarks here.


Why Constant Learners All Embrace the 5-Hour Rule (6:20)

Throughout Ben Franklin’s adult life, he consistently invested roughly an hour a day in deliberate learning.

Will you devote one hour each day to learning?

Alternatively, perhaps you’ll build some “slack time” into your schedule to allow for the serendipity of learning.


The 100 Worst Ed-Tech Debacles of the Decade

Audrey Watters closed out the decade with a retrospective on the “failures and f*ck-ups and flawed ideas” in ed-tech.

Take some time to skim through the post and reflect on the few hits, regular misses, and excessive hubris found in the ed-tech industry.

We’ve spent the decade letting our tech define us. It’s out of control.

Douglas Rushkoff with a piece in The Guardian examining how technology has grown from a collection of devices and platforms into an entire environment in which we function. As the decade came to a close, we began to see a form of “tech backlash” as we came to understand that these spaces and tools may not have our best interests at heart.

We can no longer come to agreement on what we’re seeing, because we’re looking at different pictures of the world. It’s not just that we have different perspectives on the same events and stories; we’re being shown fundamentally different realities, by algorithms looking to trigger our engagement by any means necessary. The more conflicting the ideas and imagery to which we are exposed, the more likely we are to fight over whose is real and whose is fake. We are living in increasingly different and irreconcilable worlds. We have no chance of making sense together. The only thing we have in common is our mutual disorientation and alienation.

We’ve spent the last 10 years as participants in a feedback loop between surveillance technology, predictive algorithms, behavioral manipulation and human activity. And it has spun out of anyone’s control.

There’s A Fatal Flaw In The New Study Claiming YouTube’s Recommendation Algorithm Doesn’t Radicalize Viewers

Mark Ledwich and Anna Zaitsev published a piece of research titled Algorithmic Extremism: Examining YouTube’s Rabbit Hole of Radicalization. The research suggests that, contrary to earlier reporting by several outlets, YouTube’s recommendation algorithm does not promote inflammatory or radicalized content.

In response to these findings, a cacophony of voices critiqued the methodology and the ultimate results of the research. The most notable of these critiques came in a tweet thread from Arvind Narayanan, an Associate Professor of Computer Science at Princeton University.

One of the key critiques of the study is that the researchers did not log in to YouTube, which means they could not experience the full, personalized impact of the recommendation algorithm, and that this limitation shapes their findings. Anna Zaitsev published this response to the critiques of the paper here.

Two things are interesting to me about this work. The first is whether we can ever really study, or at the very least understand, the impact of these algorithms on our lives. I’ve started to engage in more social network analysis, and collecting and analyzing the data is a bit like playing whack-a-mole. The second piece that interests me is the way social media tools and spaces were used to carry on discussion of the research after the fact. I’m still thinking about these elements…what do you think?

Why an internet that never forgets is especially bad for young people

As part of the Screentime Research Group, I’ve been thinking a lot about our digital literacy practices, and how youth will be impacted by these tools in their futures.

Kate Eichhorn, an Associate Professor of Culture and Media at The New School, suggests that people are now forming their identities online from an early age, and in the process are creating a permanent record that’s impossible to delete.

This incessant documentation did not begin with digital natives themselves. Their parents and grandparents, the first users of photo-sharing services like Flickr, put these young people’s earliest moments online. Without Flickr users’ permission or knowledge, hundreds of thousands of images uploaded to the site were eventually sucked into other databases, including MegaFace—a massive data set used for training face recognition systems. As a result, many of these photographs are now available to audiences for which they were never intended.

Chatham House Sharing for OER

I’ll soon launch a new open educational resource (OER) for my ed-tech class. This is connected to my ongoing interest in making my materials more accessible and approachable for all.

In thinking about OER, and related to the question I asked above about the YouTube algorithm research, I’ve been interested in these rules for sharing OER proposed by Mike Caulfield.

Caulfield suggests that the following rules may be applied when working collaboratively with others, and choosing to share materials openly online:

  • Within the smaller group of collaborators, contributions may or may not be tracked by name, and
  • Anyone may share any document publicly, or remix/revise for their own use, but
  • They may not attribute the document to any author or expose any editing history


Seven stages in moving from consuming to creating

John Spencer with some great guidance about moving from being primarily a critical consumer to a creator.
Spencer posits that you can move from one stage to the next iteratively.

  1. Awareness – passive exposure to content
  2. Active Consuming – seeking out and consuming
  3. Critical Consuming – becoming an expert in an area
  4. Curating – finding the best and commenting
  5. Copying – replication and mimicry
  6. Mash-Ups – copy and make it your own
  7. Creating From Scratch – finding your voice



If youth knew; if age could.

Sigmund Freud


Digitally Literate is a synthesis of the important things I find as I critically consume and curate content online. I leave behind notes on everything that piques my interest, and then pull together the important stuff here in a weekly digest.

Feel free to say hello at or on the social network of your choice.


  1. Aaron Davis
    January 6, 2020 at 7:23 pm

    Another great newsletter Ian. Just a few thoughts. Firstly, in regards to the flaw with the research associated with YouTube:

    One of the key critiques of the study is that the researchers didn’t log in. That is to say that they could not experience the full impact of the algorithm as it impacts their findings.

    As Becca Lewis suggests, is the problem with measuring radicalisation of YouTube associated with methodology? This reminds me of some of the discussions associated with social media and teens. The examples I have read, ‘How YouTube Radicalized Brazil’ and ‘The Making of a YouTube Radical’, are anecdotal. I assume this is why Arvind Narayanan says that we do not have the vocabulary to make sense of complexities generated via algorithms.
    Also, in regards to Kate Eichhorn’s post about the internet that never forgets (and the subsequent book):

    Kate Eichhorn, an Associate Professor of Culture and Media at The New School suggests that people are now forming their identities online from an early age, and in the process are creating a permanent record that’s impossible to delete.

    I am reminded of a post from Katia Hildebrandt and Alec Couros from a few years ago in which they suggest that in a world where there is a digital record of everything somewhere, we need to learn to consider intent, context, and circumstance when considering the different artefacts that may be dredged up.


    • wiobyrne
      January 8, 2020 at 9:16 am

      Hey Aaron! Thanks for the support…and the links. Bookmarking and reviewing as we speak. It is a challenge to understand what is happening in these algorithmic black boxes. 🙂

      I hope all is well in the start of your new year.
