Hello all, welcome to Digitally Literate, issue #357.
I posted the following this week:
- Making Sense of ChatGPT and GPT-3 – I’ve been thinking about this topic for some time, and I finally started writing things down and sharing. This is the first post.
- Make Your Next Conference Proposal a Smash Hit with an AI Writing Assistant – I used ChatSonic, an AI writing assistant, to help prepare a proposal for a local conference. Here’s what, why, and how I did it.
The Future Trends Forum continues, and Bryan Alexander had a great discussion on the topic with Maria Anderson.
This long-form discussion (57:44) is a great overview of the temperature of most educators at this point in the hype cycle.
Chris Gilliard and Pete Rorabaugh wrote an excellent piece about the fear and fervor we have as new technologies enter the classroom. Gilliard and Rorabaugh give an example of our use of Zoom during the pandemic.
The rise in Zoom use was quickly followed by a panic about student cheating if they were not properly surveilled. Opportunistic education technology companies were happy to jump in and offer more student surveillance as the solution, claiming that invading students’ kitchens, living rooms, and bedrooms was the only way to ensure academic integrity and the sanctity of the degrees they were working for. Indeed, this cycle also played out in white-collar work.
The post goes on to ask the same questions that I have:
- What are the implications of using a technology trained on some of the worst texts on the internet?
- What does it mean when we cede creation and creativity to a machine?
Why this matters. As new technologies emerge in society, some educators adapt to and embrace these new tools. Others put up walls against these new and novel opportunities and expect that students will use these tools to cheat and break the system. Perhaps a balanced approach would be to think about possible futures, and to problematize the systems that we’ve created.
With these new generative AI virtual assistants, much of the technology at this point focuses on reproducing text and images (including copyrighted content) from the data it was trained on. This is raising considerable copyright issues and an anticipated flood of litigation from millions of artists and creators whose work was used to train these tools.
Andrew Burt, one of the founders of the AI-focused law firm BNH.ai, indicates, “Without thoughtful, mature evaluation and management of these systems’ harms, we risk deploying a technology before we understand how to stop it from causing damage.”
Why this matters. As these new technologies arrive, we need to take time to explore and play, while at the same time considering not just whether we could use these tools, but whether we should.
A podcast episode from First Person and the New York Times focuses on Logan Lane. Seventeen-year-old Lane started questioning whether this life of constant connection was a good thing. She began assembling a “Luddite Club” — a group of teenagers who reject technology and its creeping hold on all our lives.
Watch more about the Luddite Club here.
Why this matters. This whole story is interesting and impossibly charming. I’m rooting for these youth as they try not to be confined by smart technologies and social media.
The Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School (HKS) is shutting down the Technology and Social Change Project (TaSC). As a result, TaSC is in a hiring freeze, cannot raise new funding, and online misinformation expert Joan M. Donovan is being forced out of her role as director of the research project.
Donovan was a leading force in bringing the study of misinformation and disinformation to prominence in academia. Donovan has testified in front of House and Senate subcommittees on the spread of misinformation online.
Why this matters. We’re in a race to study how and why social media is so vulnerable to misinformation so we can make these systems safer. We need voices like Donovan’s and more research to demystify how bad actors manipulate the Internet.
Colossal, a Dallas-based biotech company, recently added $60 million in funding to move toward a 2027 de-extinction of the woolly mammoth.
According to a Medium post, this is part of a plan to fight climate change by bringing back a cold-resistant elephant.
When mammoths roamed the vast tundras of the Arctic, they grazed on the plants and trees of the grassland. As they roamed, they also helped to disrupt the snowpack, allowing the plants to grow.
Since their extinction, there hasn’t been anything to disrupt the snowpack, and the plants lost their ability to survive. A once-flourishing ecosystem of grasslands that could absorb carbon was lost. But with the reintroduction of the woolly mammoth, these animals could bring balance back to the Arctic grasslands by allowing these plants to grow again and restoring a surface of snow that reflects the sun’s radiation.
Why this matters. God creates mammoths, God destroys mammoths. God creates man, man destroys God. Man heats up Earth to unbearable levels and brings back mammoths to send to Russia.
I’m really enjoying Minimalism Life, a collaborative publication focused on the nuances of minimalism and simple living.
This post by Alex Galben shares insight into how to simplify your life so you can organize your time more easily:
- Don’t chase people
- Declutter your digital life
- Buy less
- Organize your life
- Be financially smart
- Take care of how you look
- Be active and eat well
- Don’t lie to yourself
There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.
A friend on Facebook shared this with me after I shared the post above about using an AI writing assistant to create a conference proposal. 🙂