
#81 Nick Bostrom - How To Prevent AI Catastrophe

Within Reason

Alex O'Connor

Religion, Morality, Ethics, Society & Culture, Cosmicskeptic, Religion & Spirituality, Philosophy

⭐ 4.9 • 1.8K Ratings

🗓️ 1 September 2024

⏱️ 62 minutes


Summary

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, and superintelligence risks. His recent book, Deep Utopia, explores what might happen if we get AI development right.

Transcript


0:00.0

Nick Bostrom, welcome to the show.

0:02.0

Oh, my pleasure.

0:03.9

Do you spend more time these days being optimistic or pessimistic

0:08.2

about the future of artificial intelligence?

0:11.3

I think I'm in a superposition. Yeah, so equal time on either, would you say? Because, I mean,

0:19.9

everyone under the sun... I'm on both at the same time. I feel the prospects are quite ambivalent, and it's just this big unknown that we are approaching.

0:36.1

And I think, yeah, there's like realistic prospects of doom,

0:41.1

realistic prospects of a fantastically good future, and also realistic prospects of outcomes

0:47.6

where it might not be clear immediately, even if we could see all the details, how we would evaluate it.

0:54.0

In some sense maybe that's the most likely possibility

0:59.0

that the future might be really good in some sense,

1:03.8

although different from the current way that we are,

1:08.2

in such a way that we'd lose some and gain some

1:12.1

and how you sum that all up might be non-obvious.

1:16.2

Yeah, sure. I mean, it seems like every person who interviews you likes to point out the interesting fact that, unlike a lot of authors, you've kind of

1:24.1

represented both positions here. I mean, you've written extensively, an entire book,

1:29.3

about the dangers of AI and what can happen if things go wrong, and people have liked to point out that there's a certain

1:36.3

sort of unusual fairness in your approach,

1:39.1

writing your most recent book, Deep Utopia, which, as the name suggests, is a description of the opposite.

1:44.7

So I suppose that could imply that you remain sort of agnostic on the question.

1:50.6

I mean, I'm imagining like a lot of people get very fearful about artificial intelligence

1:56.6

and what it can do to our society, not to mention the potential implications of

...


Disclaimer: The podcast and artwork embedded on this page are from Alex O'Connor, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Alex O'Connor and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
