#129 Will MacAskill - We're Not Ready for Artificial General Intelligence

Within Reason

Alex J O'Connor

Religion, Morality, Ethics, Society & Culture, Cosmicskeptic, Religion & Spirituality, Philosophy

4.9 · 1.8K Ratings

🗓️ 9 November 2025

⏱️ 79 minutes

Summary

Get Huel today with this exclusive offer for New Customers of 15% OFF with code alexoconnor at https://huel.com/alexoconnor (Minimum $75 purchase).


William MacAskill is a Scottish philosopher and author, as well as one of the originators of the effective altruism movement. Get his book, What We Owe the Future, here.

0:00 – The World Isn’t Ready for AGI

9:12 – What Does AGI Doomsday Look Like?

16:13 – Alignment is Not Enough

19:28 – How AGI Could Cause Government Coups

27:14 – Why Isn’t There More Widespread Panic?

33:55 – What Can We Do?

40:11 – If We Stop, China Won’t

47:43 – What Is Currently Being Done to Regulate AGI Growth?

51:03 – The Problem of “Value Lock-in”

01:05:03 – Is Inaction a Form of Action?

01:08:47 – Should Effective Altruists Focus on AGI?

Transcript

0:00.0

Are we ready for artificial general intelligence?

0:04.0

That's a great question, and I think the answer is very clearly no.

0:09.0

I think, yeah, the transition from where we are now to AI systems that can do anything, cognitively speaking, that a human can do,

0:19.0

and then from there, beyond that point, is going to be

0:23.0

one of the most momentous transitions in all of human history. It will bring a huge range of

0:28.3

changes to the world and almost no effort is going into preparing for these changes,

0:35.0

even though some of the biggest companies in the world

0:37.7

are trying to make this happen and have this as the explicit aim.

0:42.7

I'm interested to hear you say that because from my perspective, I mean, I don't know anything

0:47.1

about the technologies behind artificial intelligence. I don't really understand, you know,

0:52.7

how an LLM really works. I don't know how to code software or anything like that. So I only ever hear about this really from a sort of ethical, philosophical perspective. And it kind of feels like that's all anybody's ever talking about: AGI, and it's going to take over the world and, you know, there's going to be job losses and all of this kind of stuff. I think people

1:11.2

are sort of talking about that a lot, like in my sphere, but do you mean to say that as far as actual

1:16.0

practical efforts go, that isn't mirrored in like policy planning and, you know, effective

1:24.3

campaigning to actually try to put a stop to disastrous outcomes? Yeah, well, I think

1:30.2

there's a few different categories of people. So there are the people who are trying to build

1:36.1

AGI, that's OpenAI and Google DeepMind and some other companies. And collectively they are

1:43.6

spending tens to hundreds of billions of dollars

1:46.0

on investment to try to make that happen. Sometimes the leaders of those companies talk about,

1:53.6

oh, all the good things that AI will be able to do. It's normally really quite narrow,

1:58.4

focused on improvements in medicine, perhaps greater economic prosperity.

2:05.2

Then there's a second category of people who tend to be primarily worried about loss of control to AI systems.

2:15.1

This is, you know, categories of people who are talking about AGI. And they are the

...

Disclaimer: The podcast and artwork embedded on this page are from Alex J O'Connor, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Alex J O'Connor and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.