🗓️ 16 September 2025
⏱️ 36 minutes
Sam Harris speaks with Eliezer Yudkowsky and Nate Soares about their new book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. They discuss the alignment problem, ChatGPT and recent advances in AI, the Turing Test, the possibility of AI developing survival instincts, hallucinations and deception in LLMs, why many prominent voices in tech remain skeptical of the dangers of superintelligent AI, the timeline for superintelligence, real-world consequences of current AI systems, the imaginary line between the internet and reality, why Eliezer and Nate believe superintelligent AI would necessarily end humanity, how we might avoid an AI-driven catastrophe, the Fermi paradox, and other topics.
If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
| 0:00.0 | Welcome to the Making Sense podcast. This is Sam Harris. Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and will only be hearing the first part of this conversation. In order to access full episodes of the Making Sense podcast, you'll need to |
| 0:21.5 | subscribe at samharris.org. We don't run ads on the podcast, and therefore it's made |
| 0:26.5 | possible entirely through the support of our subscribers. So if you enjoy what we're doing here, |
| 0:31.2 | please consider becoming one. I am here with Eliezer Yudkowsky and Nate Soares. |
| 0:41.1 | Eliezer, Nate, it's great to see you guys again. |
| 0:43.2 | Been a while. |
| 0:44.0 | Good to see you, Sam. |
| 0:44.8 | Been a long time. |
| 0:46.3 | So, Eliezer, you were among the first people to make me concerned about AI, |
| 0:53.2 | which is going to be the topic of today's conversation. I think many |
| 0:56.3 | people who are concerned about AI can say that. First, I should say you guys are releasing a book, |
| 1:01.4 | which will be available, I'm sure, the moment this drops: If Anyone Builds It, Everyone Dies: |
| 1:08.2 | Why Superhuman AI Would Kill Us All. I mean, the book's message is fully |
| 1:14.3 | condensed in that title. I mean, we're going to explore just how uncompromising a thesis that is |
| 1:21.6 | and how worried you are and how worried you think we all should be here. But before we jump into |
| 1:26.7 | the issue, maybe tell the audience how each of you got into this topic. |
| 1:32.3 | How is it that you came to be so concerned about the prospect of developing superhuman AI? |
| 1:37.1 | Well, in my case, I guess I was sort of raised in a house with enough science books and enough science fiction books that |
| 1:45.2 | thoughts like these were always in the background. Vernor Vinge is the one where there was a |
| 1:52.2 | key click moment of observation. Vinge pointed out that at the point where our models of the future |
| 1:58.3 | predict building anything smarter than us, then, said Vinge at the time, |
| 2:03.7 | our crystal ball explodes past that point. It is very hard, said Vinge, to project what happens |
... |