Lex Fridman Podcast

#83 – Nick Bostrom: Simulation and Superintelligence

Lex Fridman

Philosophy, Society & Culture, Science, Technology

4.7 · 13K Ratings

🗓️ 26 March 2020

⏱️ 117 minutes

Summary

Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS: Nick’s

Transcript

0:00.0

The following is a conversation with Nick Bostrom, a philosopher at the University of Oxford

0:05.5

and the director of the Future of Humanity Institute.

0:08.8

He has worked on fascinating and important ideas in existential risk, simulation hypothesis,

0:15.0

human enhancement ethics, and the risks of superintelligent AI systems, including in his book, Superintelligence.

0:23.2

I can see talking to Nick multiple times in this podcast, many hours each time, because

0:27.9

he has done some incredible work in artificial intelligence, in technology, space, science,

0:34.5

and really philosophy in general.

0:36.3

But we have to start somewhere.

0:38.9

This conversation was recorded before the outbreak of the coronavirus pandemic that both

0:43.8

Nick and I, I'm sure, will have a lot to say about next time we speak.

0:48.7

And perhaps that is for the best, because the deepest lessons can be learned only in retrospect

0:54.5

when the storm has passed.

0:56.7

I do recommend you read many of his papers on the topic of existential risk, including

1:01.3

the Technical Report titled Global Catastrophic Risks Survey that he co-authored with Anders

1:07.7

Sandberg.

1:09.4

For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending

1:14.1

love your way.

1:15.6

Stay strong.

1:16.6

We're in this together.

1:17.6

We'll beat this thing.

1:20.8

This is the Artificial Intelligence Podcast.

1:23.0

If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support

...

Disclaimer: The podcast and artwork embedded on this page are from Lex Fridman and are the property of their owner; they are not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Lex Fridman and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.