
52. Max Tegmark on Why Superhuman Artificial Intelligence Won’t be Our Slave (Part 2)

People I (Mostly) Admire

Freakonomics Radio + Stitcher

Society & Culture

4.6 • 1.9K Ratings

🗓️ 20 November 2021

⏱️ 30 minutes

Summary

He’s an M.I.T. cosmologist, physicist, and machine-learning expert, and once upon a time, almost an economist. Max and Steve continue their conversation about the existential threats facing humanity, and what Max is doing to mitigate our risk. The co-founder of the Future of Life Institute thinks that artificial intelligence can be the greatest thing to ever happen to humanity — if we don’t screw it up.

Transcript

Click on a timestamp to play from that location

0:00.0

In last week's episode with the remarkable Max Tegmark, we covered topics ranging from

0:10.8

the origin of the universe to the disturbing reality of slaughterbots, AI-enabled drones

0:17.0

built to kill.

0:18.0

Today, we continue our conversation discussing how artificial intelligence is affecting

0:23.3

our lives already in ways we aren't even aware of, and what Max is doing to ensure that

0:28.2

AI becomes a force for good, rather than evil, as a co-founder of the Future of Life

0:33.0

Institute, an organization that works to prevent global technology-driven catastrophes.

0:39.5

If we get it right, it will be the best thing that ever happened, because we're no longer

0:43.7

going to be limited by our own relative stupidity and inability to figure stuff out.

0:53.1

Welcome to People I Mostly Admire, with Steve Levitt.

0:59.2

Max grew up in Stockholm before he moved to the U.S. to get his PhD at Berkeley.

1:03.3

He was a tenured professor at the University of Pennsylvania before joining MIT's Physics

1:08.5

Department.

1:09.5

Just one quick note before we dive back into the conversation: today's episode stands

1:13.9

alone.

1:14.9

There's no need to have listened to part one of the conversation, but there's also no

1:18.2

harm.

1:19.2

If you're the kind of person who likes to do things in order, go back and listen to part

1:22.6

one first.

1:23.6

Listeners have been incredibly enthusiastic about it.

1:32.2

One of the scenarios that's really intriguing is to think about what happens if and when

1:39.0

AI advances to the level where it has capabilities much greater than humans have.

...

Disclaimer: The podcast and artwork embedded on this page are from Freakonomics Radio + Stitcher and are the property of their owner; they are not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Freakonomics Radio + Stitcher and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.