Deep Questions with Cal Newport

Ep. 377: The Case Against Superintelligence

Deep Questions with Cal Newport

Cal Newport

Technology, Self-improvement, Education

4.8 • 1.3K Ratings

🗓️ 3 November 2025

⏱️ 91 minutes

🧾️ Download transcript

Summary

Techno-philosopher Eliezer Yudkowsky recently went on Ezra Klein's podcast to argue that if we continue on our path toward superintelligent AI, these machines will destroy humanity. In this episode, Cal responds to Yudkowsky's argument point by point, concluding with a broader claim that discussions of this style suffer from what he calls "the philosopher's fallacy" and distract us from the real problems AI is causing right now. He then answers listener questions about AI, responds to listener comments from an earlier AI episode, and ends by discussing Alpha Schools, which claim to use AI to double the speed of education.

Transcript

Click on a timestamp to play from that location

0:00.0

A couple weeks ago, the techno-philosopher and AI critic Eliezer Yudkowsky went on Ezra Klein's podcast.

0:10.2

Their episode had a cheery title, How Afraid of the AI Apocalypse Should We Be?

0:17.3

Yudkowsky, who recently co-authored a book titled If Anyone Builds It, Everyone Dies, has been warning about the dangers of rogue AI since the early 2000s.

0:26.8

But it's been in the last half decade, as AI began to advance more quickly, that Yudkowsky's warnings are now being taken more seriously.

0:35.7

This is why Ezra Klein had him on.

0:37.3

I mean, if you're worried about

0:38.2

AI taking over the world, Yudkowsky is one of the people you want to talk to. Think of him as offering

0:44.8

the case for the worst case scenario. So I decided I would listen to this interview too.

0:51.6

Did Yudkowsky end up convincing me that my fear of extinction should be raised,

0:57.0

that AI was on a path to killing us all? Well, the short answer is no, not at all. And today,

1:04.9

I want to show you why. We'll break down Yudkowsky's arguments into their key points, and then we'll respond to them

1:11.9

one by one. So if you've been worried about recent chatter about AI taking over the world,

1:16.6

or if, like me, you've grown frustrated by these sorts of fast and loose prophecies of the

1:20.1

apocalypse, then this episode is for you. As always, I'm Cal Newport, and this is Deep Questions.

1:42.1

Today's episode: The Case Against Superintelligence.

1:51.2

All right, so what I want to do here is I want to go pretty carefully through the conversation that Yudkowsky had with Klein.

1:58.6

I actually have a series of audio clips so we can hear them, in their own

2:02.3

words, making what I think are the key points of the entire interview. Once we've done

2:08.1

that, we'll have established Yudkowsky's argument. Then we'll begin responding. I would say most of

2:14.9

the first part of the conversation that Yudkowsky had with Klein focused on one observation in particular: that the AI that exists today, which is relatively simple compared to the superintelligences he's worried about, is, even in that relatively simple form, hard to control.

2:34.0

All right.

2:34.3

So, Jesse, I want you to play our first clip.

...


Disclaimer: The podcast and artwork embedded on this page are from Cal Newport, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Cal Newport and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.