Robert Wright's Nonzero

How to Not Lose Control of AI (Robert Wright & Max Tegmark)


News & Politics, Society & Culture, Philosophy

4.7 • 618 Ratings

🗓️ 27 May 2025

⏱️ 49 minutes


Summary

This is a free preview of a paid episode. To hear more, visit www.nonzero.org

0:28 Why Max helped organize the 2023 “Pause AI” letter
7:12 AI as a species, not a technology
12:32 The rate of AI progress since 2023
21:44 Loss of control is the biggest risk
32:42 How to get the NatSec community’s attention
36:12 How we could lose control
38:17 How we stay in control
47:30 Heading to Overtime

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Max Tegmark (MIT, Future of Life Institute, Life 3.0). Recorded April 30, 2025.

Twitter: https://twitter.com/NonzeroPods

Transcript


0:00.0

You're listening to Robert Wright's Non-Zero podcast.

0:27.4

Hello, Max.

0:31.1

Hey, Bob.

0:32.9

How you doing?

0:34.7

Good.

0:35.2

It's an honor to be here.

0:37.2

Well, that's nice of you to say. I bet you've never said that to any other podcasters, have you? Never. I'm the first. I feel great. I am on top of the world. Well, let me introduce this. I'm Robert Wright, editor-in-chief of the Nonzero newsletter. This is the Nonzero podcast. You are Max Tegmark,

0:54.7

a very well-known physicist at MIT and also an AI researcher, author of books including Life 3.0.

1:03.9

And in recent years, something of an AI safety activist. In fact, you and your institute, the institute you co-founded, the Future of Life

1:13.6

Institute, played a big role in organizing a statement that I think most people are familiar with.

1:20.7

In 2023, it was calling for, I think, a six-month pause, I guess, in the training of large language models.

1:31.2

And various notable people signed it.

1:34.7

Steve Wozniak, Elon Musk, you, a number of others.

1:42.1

And, you know, before the conversation's over, I want to get a sense for what you think the

1:45.8

biggest risks are and what you think the state of AI progress is and so on.

1:50.3

But maybe I'll start by asking, like, since that statement came out, how happy are you

1:58.5

or unhappy are you with what's happened in terms of awareness of and concern about

2:06.8

the things you're concerned about with AI, both at a popular level and I guess what you

2:12.0

could call an elite level. You know, there are various communities, as you know, in the world.

2:17.2

What's your take on that?

2:19.8

Yeah, I'm very happy with the short-term response and quite unhappy with what's happened

2:25.4

one and two years afterwards.

...

