The Good Fight

Nate Soares on Why AI Could Kill Us All

Yascha Mounk

News

4.6 • 907 Ratings

🗓️ 25 November 2025

⏱️ 86 minutes

Summary

Nate Soares is president of the Machine Intelligence Research Institute and co-author, with Eliezer Yudkowsky, of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. He has been working in the field for over a decade, after previous experience at Microsoft and Google. In this week's conversation, Yascha Mounk and Nate Soares explore why AI is harder to control than traditional software, what happens when machines develop motivations, and at what point humans can no longer contain the potential catastrophe.

If you have not yet signed up for our podcast, please do so now by following this link on your phone.

Email: [email protected]

Podcast production by Jack Shields and Leonora Barclay.

Connect with us!
Spotify | Apple | Google
X: @Yascha_Mounk & @JoinPersuasion
YouTube: Yascha Mounk, Persuasion
LinkedIn: Persuasion Community

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript

0:00.0

And, you know, a lot of people saying, oh, the AIs are really dumb, it sort of sounds to me like someone saying, you know, hey, I taught my horse to multiply numbers. And they're like, oh, that horse can only multiply five-digit numbers. It can't multiply 12-digit numbers. My calculator can multiply 12-digit numbers. Clearly, this training process for making horses smarter is like not going anywhere. And it's like, holy crap, guys, we got a horse to multiply.

0:24.1

What are we going to do next?

0:26.1

And I think a lot of what people are missing about AI is this question of where we are going to go next.

0:33.5

And now, The Good Fight, with Yascha Mounk.

0:42.3

The progress of artificial intelligence has been impressive and also scary.

0:49.0

Do we actually know whether these AI systems are going to obey our commands or whether they have a will

0:56.6

of their own? And once they reach superintelligence, are they going to pursue their own ends at any

1:04.5

cost, including perhaps the destruction of humanity? Well, a new book is making big waves. It is on the New York Times

1:14.3

bestseller list. It is called If Anyone Builds It, Everyone Dies. It is written by

1:22.1

Eliezer Yudkowsky and Nate Soares. And we have Nate on the podcast today.

1:28.7

He argues that we can't fully control AI systems because they're grown rather than built.

1:36.3

He argues that in order to be able to solve complex problems, they need to develop the capacity to have and make plans,

1:47.7

to have a kind of desire of their own,

1:51.3

and all of that makes it really likely for them to be misaligned.

1:55.6

And finally, he argues that once there are superintelligent machines,

1:59.1

it is incredibly unlikely, in fact impossible

2:01.8

for humans to be able to stop them from effectively pursuing their goals.

2:11.7

In the rest of this conversation, Nate makes a very compelling case for this bracing thesis,

2:20.1

but I also throw a whole bunch of objections and questions at him to push him a little bit.

2:28.4

As you'll see, it was a very respectful but thorough debate about this subject.

2:37.8

In the rest of this conversation, Nate and I talk about what it is that humans can do to stop the development of these dangerous, super-intelligent

2:46.1

machines, something about which Nate is actually a little bit more optimistic than I am.

...
