Making Sense with Sam Harris

#420 — Countdown to Superintelligence

Making Sense with Sam Harris

Waking Up with Sam Harris

Sam Harris, Current Events, Politics, Ethics, Religion, Neuroscience, Science, Society & Culture, Philosophy

4.6 • 29.1K Ratings

🗓️ 12 June 2025

⏱️ 20 minutes


Summary

Sam Harris speaks with Daniel Kokotajlo about the potential impacts of superintelligent AI over the next decade. They discuss Daniel’s predictions in his essay “AI 2027,” the alignment problem, what an intelligence explosion might look like, the capacity of LLMs to intentionally deceive, the economic implications of recent advances in AI, AI safety testing, the potential for governments to regulate AI development, AI coding capabilities, how we’ll recognize the arrival of superintelligent AI, and other topics.

If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.

 

Transcript


0:00.0

Welcome to the Making Sense podcast. This is Sam Harris. Just a note to say that if you're hearing this, you're not currently on our subscriber feed, and you'll only be hearing the first part of this conversation. In order to access full episodes of the Making Sense podcast, you'll need to

0:21.5

subscribe at samharris.org. We don't run ads on the podcast, and therefore it's made

0:26.5

possible entirely through the support of our subscribers. So if you enjoy what we're doing here,

0:31.2

please consider becoming one. I am here with Daniel Kokotajlo.

0:38.6

Daniel, thanks for joining me.

0:40.1

Thanks for having me.

0:41.3

So we'll get into your background in a second.

0:44.2

I just want to give people a reference that is going to be of great interest after we have this conversation.

0:50.1

You and a bunch of co-authors wrote a blog post titled AI 2027, which is a very compelling read, and we're going to cover some of it, but I'm sure there are details there that we're not going to get to.

1:03.6

So I highly recommend that people read that.

1:06.6

You might even read that before coming back to listen to this conversation.

1:10.6

Daniel, what's your background?

1:12.2

I mean, we're going to talk about the circumstances under which you left OpenAI, but maybe

1:17.1

you can tell us how you came to work at OpenAI in the first place.

1:21.0

Sure, yeah.

1:21.8

So I've been sort of in the AI field for a while, mostly doing forecasting and a little bit of alignment

1:29.8

research. So that's probably why I got hired at OpenAI. I was on the governance team. We were

1:35.3

making policy recommendations to the company and trying to predict where all of this was headed.

1:40.6

I worked at OpenAI for two years. And then I quit last year, and then I worked on AI 2027

1:46.0

with the team that we hired.

1:48.3

And one of your co-authors on that blog post was Scott Alexander.

1:53.0

That's right.

...


Disclaimer: The podcast and artwork embedded on this page are from Waking Up with Sam Harris, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Waking Up with Sam Harris and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.