Most Big Ideas Have Loud Critics

Welcome to Digitally Literate, issue #395. Your go-to source for insightful content on education, technology, and the digital landscape.

I recently posted the following:


🛁 Openwashing AI

Last week I shared info about GPT-4o and included OpenAI’s CEO, Sam Altman, redefining “open” by suggesting that open means free (as in cost, for most people), and not open as in freedom.

It appears that many AI companies are using the “open source” label too loosely. There is ongoing debate over the benefits and risks of open-source AI models, with concerns about transparency and equitable access. Efforts are being made to define and categorize open-source AI, but challenges remain due to the resource-intensive nature of building these models.

Initiatives to more accurately define open source AI are in progress. In March, researchers at the Linux Foundation published a framework that classifies open-source AI models into different categories. Meanwhile, another non-profit organization, the Open Source Initiative, is working on crafting a definition of its own.


🪟 On The Topic of Open

As I delve deeper into the world of machine learning and AI, I find that most of my work in technology and education continually guides me back toward open and distributed technologies. This is important because openness means something. Yes, I may not be able to review the code and understand what is happening…but I trust that others can. Let me explain.

Okay, let’s imagine you have a game that you like. Sometimes you want to share this game with friends so they can play it too. But what if you share the game while keeping some special parts, strategies, or rules hidden away, so your friends can’t play it the same way you do? That’s kind of what some AI companies are doing when they say their models are “open source.”

“Open source” is supposed to mean that everyone can see and use all the parts of the AI model, just like sharing a game where everyone can see, use, and understand all its parts. But some companies say their AI models are open source even when they aren’t sharing everything. This mislabeling continues to confuse and mislead people.

This is important because open source helps everyone learn and build new things. If only some people get to see how the AI models work, it can stop others from making cool new stuff or understanding how the models make decisions. This can be a big problem because AI is becoming very important in our lives.


💔 Disruption

Disruptive innovation theory claims both that disruption has happened in the past and that it will keep happening in the future. But the theory hasn’t met the evidentiary conditions needed to prove either claim. We need to understand the limitations of disruptive innovation theory and advocate for a more critical perspective on technological advancements.

Jill Lepore explored this in detail in her 2014 New Yorker essay:

Every age has a theory of rising and falling, of growth and decay, of bloom and wilt: a theory of nature. Every age also has a theory about the past and the present, of what was and what is, a notion of time: a theory of history.

Disruptive innovation as a theory of change is meant to serve both as a chronicle of the past (this has happened) and as a model for the future (it will keep happening). The strength of a prediction made from a model depends on the quality of the historical evidence and on the reliability of the methods used to gather and interpret it. Historical analysis proceeds from certain conditions regarding proof. None of these conditions have been met.


🌀 Being Left Behind

How much of our current focus on AI in everything is driven by a fear of being left out of the AI trend rather than by real needs or problems?

Christopher Mims reflects on writing about technology for over a decade and highlights the importance of collective wisdom in navigating the future of tech.

The excitement around disruptive innovation is often an overreaction: a solution looking for a problem to solve. Companies that rush new products out the door without refining them have often failed. The fear driving them is that they can’t afford to let someone else get there first.

Considering generative AI, the question we all have to ask ourselves is whether we want to accept a new magic dictionary that feeds us alarmingly inaccurate information alongside the occasionally convenient result.


⚡️ A Critical Perspective

Undermine their pompous authority, reject their moral standards, make anarchy and disorder your trademarks. Cause as much chaos and disruption as possible but don’t let them take you ALIVE.

Sid Vicious

Thanks for reading Digitally Literate. Stay tuned for more insights and discussions. Contact me at hello@digitallyliterate.net or connect on social media.

