🗓️ 17 July 2025
⏱️ 38 minutes
0:00.0 | OpenAI, Anthropic, and to some extent Google DeepMind are explicitly trying to build super
0:10.2 | intelligence to transform the world. |
0:12.7 | And many of the leaders of these companies, many of the researchers at these companies, and |
0:17.6 | then hundreds of academics and so forth in AI have all signed a statement |
0:21.9 | saying this could kill everyone. And so we've got these like important facts that people |
0:27.3 | need to understand. These people are building superintelligence. What does that even look like |
0:32.4 | and how could that possibly result in killing us all? We've written this scenario depicting what |
0:37.0 | that might look like. It's actually my best guess as to what the future will look like.
0:44.4 | Hey, everyone. This is Tristan Harris. And this is Daniel Barcay. Welcome to Your Undivided Attention.
0:50.5 | So a couple months ago, AI researcher and futurist Daniel Kokotajlo and a team of experts at the AI Futures Project released a document online called AI 2027.
1:01.0 | And it's a work of speculative futurism that's forecasting two possible outcomes of the current AI arms race that we're in. |
1:08.0 | And the point was to lay out this picture of what might realistically happen if the different |
1:13.2 | pressures that drove the AI race all went really quickly and to show how those different |
1:17.3 | pressures interrelate. |
1:18.8 | So how economic competition, how geopolitical intrigue, how acceleration of AI research, and |
1:24.3 | the inadequacy of AI safety research, how all those things come together to produce |
1:28.4 | a radically different future that we aren't prepared to handle and aren't even prepared to think about.
1:33.3 | So in this work, there's two different scenarios, and one's a little bit more hopeful than the |
1:36.6 | other, but they're both pretty dark. I mean, one ends with a newly empowered, super-intelligent |
1:41.8 | AI that surpasses human intelligence in all domains and ultimately |
1:45.4 | causes the end of human life on Earth.
... |