🗓️ 15 October 2025
⏱️ 68 minutes
| 0:07.0 | Shortly after ChatGPT was released, it felt like all anyone could talk about, at least if you were in AI circles, was the risk of rogue AI. |
| 0:39.0 | You began to hear a lot of talk of AI researchers discussing their P-Doom, the probability they gave |
| 0:45.9 | to AI destroying or fundamentally displacing humanity. In May of 2023, a group of the world's |
| 0:52.7 | top AI figures, including Sam Altman, Bill Gates, and Geoffrey Hinton, signed on to a public statement that said, |
| 0:59.8 | mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks, such as pandemics and nuclear war. |
| 1:09.3 | And then nothing really happened. |
| 1:12.1 | The signatories, or many of them at least, of that letter, |
| 1:15.7 | raced ahead releasing new models and new capabilities. |
| 1:19.2 | Your share price, your valuation became a whole lot more important in Silicon Valley than your P-Doom. |
| 1:25.6 | But not for everyone. |
| 1:31.2 | Eliezer Yudkowsky was one of the earliest voices warning loudly about the existential risk posed by AI. He was making this argument back in the 2000s, |
| 1:37.3 | many years before ChatGPT hit the scene. He has been in this community of AI researchers, |
| 1:42.9 | influencing many of the people who build these |
| 1:44.7 | systems, in some cases inspiring them to get into this work in the first place, yet unable |
| 1:50.0 | to convince them to stop building the technology he thinks will destroy humanity. |
| 1:55.8 | He just released a new book, co-written with Nate Soares, called If Anyone Builds It, Everyone Dies. |
| 2:02.8 | Now, he's trying to make this argument to the public. A last-ditch effort to, at least in his view, |
| 2:09.2 | rouse us to save ourselves before it is too late. I come to this conversation taking |
| 2:14.0 | AI risk seriously. If we're going to invent superintelligence, it is probably |
| 2:18.0 | going to have some implications for us, but also being skeptical of the scenarios I often see |
| 2:24.5 | by which AI takeovers are said to happen. So I wanted to hear what the godfather of these |
... |
Disclaimer: The podcast and artwork embedded on this page are from New York Times Opinion, and are the property of its owner and not affiliated with or endorsed by Tapesearch.