A Trip to TikTok + ChatGPT’s Origin Story


Hard Fork hosted by Kevin Roose and Casey Newton - Podcast Index

Snips

[25:34] The Phenomenal Rise of ChatGPT: Insights from Reporting

🎧 Play snip - 1min (24:23 - 25:36)

✨ Key takeaways

  1. ChatGPT has more than 30 million registered users and more than 5 million daily users, making it one of the fastest-growing software products of all time.
  2. Reaching 30 million users in just two months is phenomenal, especially compared with Instagram, which took a year to reach 10 million users.
  3. ChatGPT was actually a total accident: OpenAI's plan for most of last year was GPT-4, a new language model it was developing.

📚 Transcript

Speaker 2

And it felt like it came out of nowhere when it landed too, right? So I think it's a really good question. It's like, where did this thing come from? Totally.

Speaker 1

So I've been looking into this for a couple of weeks. And I found what I would say are three big takeaways from my reporting. The first is that ChatGPT is just way more popular than I thought. It's got more than 30 million registered users, and more than 5 million people use it every day. And for a product that is only really two months old, that is a phenomenal number of people. So just by contrast, Instagram in its first year got 10 million users. And that was seen as one of the fastest-growing things of all time. So getting 30 million users within two months, I just don't know that I've ever seen a software product grow that fast. To put that in context, that's actually bigger than the Hard Fork podcast. It's slightly bigger than the Hard Fork podcast. Another really interesting thing I found out is that this was a total accident. So OpenAI, its plan for most of last year, the thing that it was working on, was GPT-4, right? This new language model that they were developing, they were very excited about it.

[29:26] OpenAI Chatbots?

🎧 Play snip - 1min (28:13 - 29:28)

✨ Key takeaways

  1. OpenAI started as a non-profit in 2015 with initial sponsors chipping in money to get it off the ground
  2. The original premise was to create open and safe AI, which seems at odds with how quickly ChatGPT was built, in just 13 days
  3. OpenAI was intended to be the anti-Google or the anti-Facebook, with a focus on humanitarian AI that was safe and responsible

📚 Transcript

Speaker 2

You know, I remember that the original premise of OpenAI was that they were not going to move that fast. They wanted to make their work, quote, open and safe. That's not the sort of thing you think of as being built in 13 days. So how did they reconcile those ideas?

Speaker 1

Yeah, so OpenAI is sort of a strange beast, right? It was started in 2015 as a non-profit. It was started by this kind of all-star group of tech people: Elon Musk, Peter Thiel, Sam Altman, Reid Hoffman. It had all of these initial sponsors who chipped in the money to get this thing off the ground. And the whole point was that it was not going to be driven by narrow commercial interests, right? They pitched it as kind of the anti-Google or the anti-Facebook, where those companies were developing AI to suit their business needs, whereas OpenAI was going to be half research lab and half kind of humanitarian AI organization that was going to make sure that its AI was safe and responsible and steer this whole area of technological progress in a better direction. So what changed?

[38:08] The Inevitability of AI Language Models and the Importance of Testing and Norms

🎧 Play snip - 1min (36:46 - 38:08)

✨ Key takeaways

  1. OpenAI's release of ChatGPT was inevitable; someone else might have released something with fewer safeguards if OpenAI had not gone first.
  2. It is important to set industry norms and do extensive testing on AI models before releasing them for mass use.
  3. Society needs time to slowly adapt and put structures in place to avoid disruptive changes when new AI is released.

📚 Transcript

Speaker 2

What you have me wondering, though, is: wasn't all of this inevitable? Wasn't someone always going to go first? And assuming it was good, wasn't that always going to cause an arms race?

Speaker 1

I think there's an element of that, sure. I think that it's plausible that if OpenAI had not released ChatGPT as quickly as they did, someone else would have come along and released something that had fewer guardrails or was more easily abused. And that's certainly what they would tell you at OpenAI: that they did years of safety work on the base model, GPT-3, that was used to make ChatGPT. So it's not as if they had no safeguards in place. They had lots of them. It's just that they weren't expecting this many users. So I think it was always inevitable. But I also think it's very important in the early stages of this next phase of AI to set norms in the industry that before you release something like ChatGPT, you do months or years of testing on it in a very limited setting, where it's not going to be used by every high schooler in the country overnight. You allow society to slowly adapt and put in place some of the structures that allow this to not be so disruptive. You don't just flip a switch overnight.