
#1011 - Eliezer Yudkowsky - Why Superhuman AI Would Kill Us All

Modern Wisdom

Chris Williamson

Society & Culture, Health & Fitness

4.7 · 4.6K Ratings

🗓️ 25 October 2025

⏱️ 97 minutes


Summary

Eliezer Yudkowsky is an AI researcher, decision theorist, and founder of the Machine Intelligence Research Institute.

Is AI our greatest hope or our final mistake? For all its promise to revolutionize human life, there’s a growing fear that artificial intelligence could end it altogether. How grounded are these fears, how close are we to losing control, and is there still time to change course before it’s too late?

Expect to learn the problem with building superhuman AI, why AI would have goals we haven’t programmed into it, if there is such a thing as AI benevolence, what the actual goals of a superintelligent AI are and how far away it is, if LLMs are actually dangerous and could become a super AI, how good we are at predicting the future of AI, if extinction is possible with the development of AI, and much more…

Sponsors:

See discounts for all the products I use and recommend: https://chriswillx.com/deals
Get 15% off your first order of Intake’s magnetic nasal strips at https://intakebreathing.com/modernwisdom
Get 10% discount on all Gymshark’s products at https://gym.sh/modernwisdom (use code MODERNWISDOM10)
Get 4 extra months of Surfshark VPN at https://surfshark.com/modernwisdom

Extra Stuff:

Get my free reading list of 100 books to read before you die: https://chriswillx.com/books
Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:

#577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59
#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf
#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp

Get In Touch:

Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript


0:00.0

If anyone builds it, everyone dies: why superhuman AI will kill us all.

0:06.7

Would kill us all.

0:08.4

Would kill us all. Okay.

0:11.6

Perhaps the most apocalyptic book title.

0:17.5

Maybe it's up there with the most apocalyptic book titles that I've ever read.

0:22.7

Is it that bad? That big of a deal? That serious of a problem? Yep, I'm afraid so.

0:29.5

We wish we were exaggerating. Okay, let's imagine that nobody's looked at the alignment problem, takeoff scenarios, superintelligence stuff. I think, unless you're going Terminator, super sci-fi world, it sounds like: how could a superintelligence not just make the world a better place? How do you introduce people to thinking about the problem of building a superhuman AI?

1:00.0

Well, different people tend to come in with different prior assumptions, coming at it from different angles.

1:09.0

Lots of people are skeptical that you can get to superhuman ability at all.

1:14.7

If somebody's skeptical of that, I might start by talking about how you can at least get to much faster than human speed thinking.

1:23.2

There's a video of a train pulling into a subway station, at about a thousand-to-one speed-up of the camera, that shows people.

1:33.8

You can just barely see the people moving if you look at them closely, almost like not quite statues, just moving very, very slowly.

1:43.2

So even before you get into the notion of higher quality of thought,

1:47.0

you can sometimes tell somebody they're at least going to be thinking much faster,

1:51.0

and you're going to be a slow-moving statue to them.

1:54.0

For some people, the sticking point is the notion that a machine ends up with its own motivations, its own preferences,

2:02.7

that it doesn't just do as it's told. It's a machine, right? It's like a more powerful

2:07.4

toaster oven, really. How could it possibly decide to threaten you? And depending on who you're

2:12.9

talking to there, it's actually in some ways a bit easier to explain now than when we wrote the book.

2:19.3

There have been some more striking recent examples of AIs, sort of parasitizing humans,

2:27.1

driving them into actual insanity in some cases. And in other cases, they're sort of like

2:32.5

people with a really crazy roommate who

...

