🗓️ 25 October 2025
⏱️ 97 minutes
| 0:00.0 | If Anyone Builds It, Everyone Dies: Why Superhuman AI Will Kill Us All. |
| 0:06.7 | Would kill us all. |
| 0:08.4 | Would kill us all. Okay. |
| 0:11.6 | Perhaps the most apocalyptic book title. |
| 0:17.5 | It's up there with maybe the most apocalyptic book title that I've ever read. |
| 0:22.7 | Is it that bad? That big of a deal? That serious of a problem? Yep, I'm afraid so. |
| 0:29.5 | We wish we were exaggerating. Okay. Let's imagine that nobody's looked at the alignment problem, takeoff scenarios, superintelligence stuff. I think, unless you're going full Terminator, super sci-fi world, it sounds like: how could a superintelligence not just make the world a better place? How do you introduce people to thinking about the problem of building a superhuman AI? |
| 1:00.0 | Well, different people tend to come in with different prior assumptions, coming at it from different angles. |
| 1:09.0 | Lots of people are skeptical that you can get to superhuman ability at all. |
| 1:14.7 | If somebody's skeptical of that, I might start by talking about how you can at least get to much faster than human speed thinking. |
| 1:23.2 | There's a video of a train pulling into a subway station, at about a thousand-to-one camera speed-up, that shows people on the platform. |
| 1:33.8 | You can just barely see the people moving if you look at them closely, almost like not quite statues, just moving very, very slowly. |
| 1:43.2 | So even before you get into the notion of higher quality of thought, |
| 1:47.0 | you can sometimes tell somebody: they're at least going to be thinking much faster, |
| 1:51.0 | and you're going to be a slow-moving statue to them. |
| 1:54.0 | For some people, the sticking point is the notion that a machine ends up with its own motivations, its own preferences, |
| 2:02.7 | that it doesn't just do as it's told. It's a machine, right? It's like a more powerful |
| 2:07.4 | toaster oven, really. How could it possibly decide to threaten you? And depending on who you're |
| 2:12.9 | talking to there, it's actually in some ways a bit easier to explain now than when we wrote the book. |
| 2:19.3 | There have been some more striking recent examples of AIs sort of parasitizing humans, |
| 2:27.1 | driving them into actual insanity in some cases. And in other cases, they're sort of like |
| 2:32.5 | people with a really crazy roommate who |
... |
Disclaimer: The podcast and artwork embedded on this page are from Chris Williamson, are the property of their owner, and are not affiliated with or endorsed by Tapesearch.
Generated transcripts are the property of Chris Williamson and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
Copyright © Tapesearch 2025.