
How Afraid of the A.I. Apocalypse Should We Be?

The Ezra Klein Show

New York Times Opinion

Society & Culture, Government, News


🗓️ 15 October 2025

⏱️ 68 minutes


Summary

How Afraid of the A.I. Apocalypse Should We Be? Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case.

Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing a lot of leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it’s too late.

So what does Yudkowsky see that most of us don’t? What makes him so certain? And why does he think he hasn’t been able to persuade more people?

Mentioned:

Oversight of A.I.: Rules for Artificial Intelligence

“If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky and Nate Soares

“A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” by Kashmir Hill

Book Recommendations:

“A Step Farther Out” by Jerry Pournelle

“Judgment under Uncertainty” by Daniel Kahneman, Paul Slovic and Amos Tversky

“Probability Theory” by E. T. Jaynes

Thoughts? Guest suggestions? Email us at [email protected].

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show’s production team also includes Marie Cascione, Annie Galvin, Kristin Lin, Jack McCordick, Marina King and Jan Kobal. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Helen Toner and Jeffrey Ladish.

Transcript


0:07.0

Shortly after ChatGPT was released, it felt like all anyone could talk about, at least if you were in AI circles, was the risk of rogue AI.

0:39.0

You began to hear a lot of AI researchers discussing their p(doom), the probability they gave

0:45.9

to AI destroying or fundamentally displacing humanity. In May of 2023, a group of the world's

0:52.7

top AI figures, including Sam Altman, Bill Gates and Geoffrey Hinton, signed on to a public statement that said,

0:59.8

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

1:09.3

And then nothing really happened.

1:12.1

The signatories of that letter, or many of them at least,

1:15.7

raced ahead, releasing new models and new capabilities.

1:19.2

Your share price, your valuation, became a whole lot more important in Silicon Valley than your p(doom).

1:25.6

But not for everyone.

1:31.2

Eliezer Yudkowsky was one of the earliest voices warning loudly about the existential risk posed by AI. He was making this argument back in the 2000s,

1:37.3

many years before ChatGPT hit the scene. He has been in this community of AI researchers,

1:42.9

influencing many of the people who build these

1:44.7

systems, in some cases inspiring them to get into this work in the first place, yet unable

1:50.0

to convince them to stop building the technology he thinks will destroy humanity.

1:55.8

He just released a new book, co-written with Nate Soares, called If Anyone Builds It, Everyone Dies.

2:02.8

Now, he's trying to make this argument to the public. A last-ditch effort to, at least in his view,

2:09.2

rouse us to save ourselves before it is too late. I come to this conversation taking

2:14.0

AI risk seriously. If we're going to invent superintelligence, it is probably

2:18.0

going to have some implications for us, but also being skeptical of the scenarios I often see

2:24.5

by which AI takeovers are said to happen. So I wanted to hear what the godfather of these

...

