🗓️ 13 August 2025
⏱️ 56 minutes
0:00.0 | How bad could AI go in the worst case scenario? Let's look beyond the near-term risks and explore what could really happen if the wheels come completely off. |
0:10.1 | That's coming up right after this. Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. |
0:19.9 | Well, on the show, we've explored a lot of the |
0:21.9 | downsides of AI, a lot of the near-term risks, the business implications of what happens if things |
0:27.5 | don't continue to accelerate apace. We haven't had a dedicated episode looking at what could |
0:32.9 | happen if things really go wrong. And so we're going to do that today. We're joined today by, I think, |
0:39.0 | the perfect guest for this conversation. Anthony Aguirre is here. He's the executive director |
0:43.7 | at the Future of Life Institute and also a professor of physics at UC Santa Cruz. Anthony, great to |
0:51.0 | see you. Welcome to the show. Thanks so much for having me on. Great to see you. |
0:57.4 | Great to see you. Nice to have a conversation again this time in public. |
1:02.8 | Suffice to say, you're not excited about all the progress that the AI industry is making. |
1:08.0 | Well, that's not quite true. So there's lots of progress in AI that I just love. |
1:13.2 | You know, I use AI models all the time. I love lots of AI applications in science and technology, lots of things where AI are tools that are letting us do things |
1:19.8 | that we couldn't do before. The thing that I'm concerned about is the direction that we're headed |
1:24.5 | in, which is toward increasingly autonomous, general, and |
1:28.6 | intelligent systems, things that we've been calling AGI for a long time. And this, I think, |
1:33.6 | is different at some level from what we've been doing. And I think is where most of the danger |
1:40.0 | lies, especially on the large scale and in the longer term. |
1:52.8 | And there have been a number of studies in the training scenarios within the foundational model companies, or frontier research labs, |
1:57.9 | which I think is probably the best way to refer to them, where |
2:01.9 | AI has had what seems like a value or an instinct to try to preserve itself. In testing |
2:08.4 | scenarios, it's tried to copy its code out of the scenario when it thinks its values are being |
... |
Disclaimer: The podcast and artwork embedded on this page are from Alex Kantrowitz, and are the property of its owner and not affiliated with or endorsed by Tapesearch.