A.I. Vibe Check With Ezra Klein


Hard Fork hosted by Kevin Roose and Casey Newton - Podcast Index

On today’s episode:

Ezra Klein is a columnist at The New York Times and host of "The Ezra Klein Show."


Snips

[45:44] Ezra Klein on The Disruption of Intelligence


✨ Key takeaways

  1. Ezra Klein believes that there is a difference between skepticism in the scientific sense and skepticism as a positional device, a temperament where you prefer to sound to yourself and to others like you're not a crazy person.
  2. Klein uses the analogy of COVID and crypto to explain this difference.
  3. Klein believes that until proven otherwise, we should be skeptical of what is happening with AI and other technologies.
  4. Klein also discusses the potential for automation to disrupt many aspects of society.

📚 Transcript

Speaker 1

Maybe the way I'd say this is that I think there is a difference between skepticism in the scientific sense, where you're bringing a critical intelligence to bear on information coming into your system, and skepticism as a positional device, a kind of temperament where you prefer to sound to yourself and to others like you're not a crazy person, which is very alluring. Look, one of the ways I've tried to talk about this is using the analogies of COVID and crypto. I remember periods early on in COVID where I was on the phone with my family and I was saying, you all have to go buy toilet paper right now. And I was talking to them about a trip, and they were like, we're going to come see you in three weeks. And I'm like, you're not going to come see me in three weeks. In three weeks, you will not be going anywhere. You need to listen to me. And it was really hard. You sounded really weird. I was not, by any means, the first person alert to COVID, but I am a journalist, and I did begin to see what was coming a little bit earlier than others in my life. And one lesson of that, to me, was that tomorrow will not always be like today.

But that also should not become a positional device. I think there are people who are always telling you tomorrow will not be like today. So then I think about crypto. We were all here in the long-ago year of 2021, when that was on the rise. And you'd have these conversations with people, and you'd have to ask yourself, does any of this make sense, exactly? There's a lot of money here. A lot of smart people are filtering into this world. I take seriously that smart people think this is going to change everything, that it's going to be how we do governance and identity and socializing. And they have all these complicated plans for how it will replace everything or upend everything in my life. But what evidence is there that any of this is true? What can I see? What can I feel? What can I touch?
And it was endlessly a technology looking for a practical use case. There was money in it, but what was it changing? Nothing. So my take on crypto was: until proven otherwise, I'm going to be skeptical of this. You need to prove to me this will change something before I believe you that it will change everything. And one of the points I'm making in that column about AI is that I just think you have to take seriously what is happening now to believe that something quite profound is going on. You can look at the people who already have profound relationships with their Replikas. You can look at automation, which has already put people out of work. And to my point that a world populated by things that feel to us like intelligence is one of the profound disruptions here: that has already happened. It happened to you with Sydney. We already know that militaries and police systems are using these. So you don't even really have to believe the systems are going to get any better than they currently are. If we did not just pause but stop at something at the level of GPT-4, and just took 15 years to figure out every way we could tune and retune it and filter it into new areas, imagine you retrained the model just to be a lawyer. Instead of it having a generalized training system, it was trained to be a lawyer. That would be very disruptive to the legal profession. How disruptive would depend on regulation, but I think the capability is already there to automate a huge amount of contracting.

Speaker 2

They don't have to be sentient to be civilization-altering. I just don't think you need a radical view of the future to think this is pretty profound.

Speaker 1

Totally.

Speaker 4

Well, Ezra, we're going to have to ask you to stop generating. The token generating machine is off. Ezra Klein, thanks for coming.

Speaker 1

Thanks so much, Ezra. Have a good day.

Speaker 5

All right.

Speaker 2

That's enough talk about AI and existential risk and societal adaptation. It's time to talk about something much more important than any of those things: me. Oh, my God. Get on your phone. Get on your stuff, Bruce. When we come back, we're going to talk about my quest for phone positivity and why I'm breaking my phone out of phone jail.

Speaker 5

This is a long way of saying we're going to talk about how I was right.