Robert Wright's Nonzero

In Defense of AI Doomerism (Robert Wright & Liron Shapira)


News & Politics, Society & Culture, Philosophy

4.7 • 618 Ratings

🗓️ 16 May 2024

⏱️ 78 minutes


Summary

This is a free preview of a paid episode. To hear more, visit www.nonzero.org

0:24 Why this pod’s a little odd
2:50 Ilya Sutskever and Jan Leike quit OpenAI—part of a larger pattern?
10:20 Bob: AI doomers need Hollywood
16:26 Does an AI arms race spell doom for alignment?
20:40 Why the “Pause AI” movement matters
24:54 AI doomerism and Don’t Look Up: compare and contrast
27:23 How Liron (fore)sees AI doom
33:18 Are Sam Altman’s concerns about AI safety sincere?
39:46 Paperclip maximizing, evolution, and the AI will to power question
51:34 Are there real-world examples of AI going rogue?
1:07:12 Should we really align AI to human values?
1:15:27 Heading to Overtime

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Liron Shapira (Pause AI, Relationship Hero). Recorded May 06, 2024. Additional segment recorded May 15, 2024.

Twitter: https://twitter.com/NonzeroPods

Transcript


0:00.0

You're listening to Robert Wright's Non-Zero Podcast.

0:28.9

Hello, Liron.

0:30.9

Hey, Bob, good to be back with you.

0:33.5

Good to have you.

0:34.9

Let me introduce this time.

0:35.9

I'm Robert Wright, publisher of the Non-Zero newsletter. This is a Non-Zero podcast. You are Liron Shapira. You are a Silicon Valley guy who's founded or co-founded a couple of companies, but, more to the point for present purposes, you are an AI safety activist and, in fact, an AI pause activist. You're sufficiently

0:57.0

concerned about AI to want us to kind of pause or slow development and take a deep breath and

1:02.7

think about this whole thing. Now, this is an unusual kind of hybrid episode of this podcast,

1:10.1

so let me explain.

1:11.1

You and I taped a long conversation a week or so ago about why we should be concerned

1:18.2

about AI in your view.

1:19.8

The risks it could pose, in particular, the kind of sci-fi doomer scenario of it actually

1:26.7

taking over the planet and like killing us

1:29.6

or something, something you take seriously. I wanted to kind of try to trace out the logic behind

1:34.4

your argument. And if people want to listen to that, all they have to do is keep listening to this,

1:40.3

because this is basically a preface to that. What happened is I taped that, thinking, you know,

1:47.6

I didn't have to post it super fast and it wasn't urgently topical. Then a couple of things

1:56.3

happened that made me think, well, it actually was quite topical and also that it might be worth

2:04.4

talking to you a little bit about these developments.

2:08.2

So one is, I was on Twitter and I saw some video of you on like some kind of soapbox or something

2:14.2

with a megaphone.

2:15.4

Apparently there was this international AI pause

...

