Welcome to Digitally Literate, issue #365. I posted the following this week.
- When you know better, do better: Developing Anti-Racist, Digitally Literate Educators through Critical Media Literacy – Together with a great set of colleagues, I’ve been exploring the possible intersections between anti-racism and digital literacies. Here’s the second manuscript we just sent out.
A young knight with a bright future is enticed by a spirit that has the ability to show him whatever vision he wants to see. The visions are more captivating than his sword training, his friends, the village girls, and life itself. These seemingly minor, harmless distractions end up costing the young knight the ultimate price: his potential.
This story was written by James McIntosh. To view more of his writings, please visit https://mediavsreality.com/ or the MediaVSReality YouTube channel.
A simple question that troubles me as I ponder the possible impacts of AI on our lives is this: How do human information and interactions shape AI models and their outputs? Large language models (LLMs) are built on and trained with human data, and they produce variations of that data.
In a related story, the unionization of 150 African workers who provide content moderation services for AI tools used by Facebook, TikTok, and ChatGPT is inspiring.
Why this matters. We may believe that AI tools are out to steal our jobs, but in some instances, they’re creating new fields and careers. The question is whether they’re the jobs that we want. ¯ \_| ✖ 〜 ✖ |_/¯
AI and generative language model technologies may disrupt and dislocate future jobs, but they’re already being used in threats made by people…against others.
Still, their meanings are similar, as are their ideological instincts: They see AI as a force for restoring a natural order, for keeping people in line, and as imminent proof that certain sorts of already devalued work — creative professions especially but not exclusively — should be considered truly worthless, and the people who do it are living on borrowed time. Whether or not it’s mistaken or makes much sense, it’s a deeply political instinct. Both intuit AI as a source of power to either harness or with which to stay aligned, but they’re also pretty sure it’s on their side.
Why this matters. John Herrman indicates in this post that, just as with previous debates about automation and mechanization, a portion of our online discourse will focus on trolling and harassing others about the potential endangerment of their careers.
Ted Chiang challenges the myth of AI as a superintelligent entity that will either save or destroy humanity. We could instead think of AI as a tool that can be used for good or evil, depending on who controls it and how it is designed.
Why this matters. Can AI ameliorate the inequities of our world, rather than push us to the brink of societal collapse? We need an alternative vision of AI that is more democratic, participatory, and ethical, and that respects human dignity and creativity.
My partner works from home and frequently jumps into video calls to chat with clients and colleagues. One habit that seems a bit odd to me is that their group will meet up for hours at a time and work with their cameras on and microphones off.
Perhaps this is a way to have employees feel connected, as the presence of another person provides accountability and support. Or this may be a way for employers to make sure folks are working. We saw some of this during the pandemic as educators mandated that students leave their cameras on during class (see the next story in this issue).
A growing number of remote workers are adopting a practice called body doubling or parallel working, which involves watching strangers work online via TikTok Live or Zoom.
Why this matters. This practice raises questions about privacy, security, and distraction while working. It also seems like a great opportunity to make some ad revenue from a social media stream.
While planning for a talk this week, one of my colleagues brought up this post about trust and surveillance tools in our learning environments.
When a classroom becomes adversarial, of course, as cop shit presumes, then there must be a clear winner and loser. The student’s education then becomes not a victory for their own self-improvement or -enrichment, but rather that the teacher conquered the student’s presumed inherent laziness, shiftiness, etc. to instill some kernel of a lesson.
Why this matters. I’ve talked about this many times in this newsletter, but we need to question how much we really trust our students. In addition, we must question the ethics of nurturing learners in an environment of surveillance.
Ryan Holiday suggests that worries can stem from our thoughts about the past, the present, or the future. Remember amor fati (the love of fate), and stay within the present, doing what you can with what you have – right now.
The uncreative mind can spot wrong answers, but it takes a very creative mind to spot wrong questions.
Cover Photo CC BY using Stable Diffusion 2.1 Demo