Your Undivided Attention

Daniel Kokotajlo Forecasts the End of Human Dominance

Center for Humane Technology

4.8 • 1.5K Ratings

🗓️ 17 July 2025

⏱️ 38 minutes

Summary

Daniel Kokotajlo left OpenAI to warn about the dangerous direction of AI development. His new report, AI 2027, forecasts humanity losing control to misaligned superintelligence within years. On the show this week, we explore these risks, the incentives driving them, and how we might still change course.

Transcript

0:00.0

OpenAI, Anthropic, and to some extent Google DeepMind are explicitly trying to build super

0:10.2

intelligence to transform the world.

0:12.7

And many of the leaders of these companies, many of the researchers at these companies, and

0:17.6

then hundreds of academics and so forth in AI have all signed a statement

0:21.9

saying this could kill everyone. And so we've got these like important facts that people

0:27.3

need to understand. These people are building superintelligence. What does that even look like

0:32.4

and how could that possibly result in killing us all? We've written this scenario depicting what

0:37.0

that might look like.

0:38.7

It's actually my best guess as to what the future will look like.

0:44.4

Hey, everyone. This is Tristan Harris. And this is Daniel Barcay. Welcome to Your Undivided Attention.

0:50.5

So a couple of months ago, AI researcher and futurist Daniel Kokotajlo and a team of experts at the AI Futures Project released a document online called AI 2027.

1:01.0

And it's a work of speculative futurism that's forecasting two possible outcomes of the current AI arms race that we're in.

1:08.0

And the point was to lay out this picture of what might realistically happen if the different

1:13.2

pressures that drove the AI race all went really quickly and to show how those different

1:17.3

pressures interrelate.

1:18.8

So how economic competition, how geopolitical intrigue, how acceleration of AI research, and

1:24.3

the inadequacy of AI safety research, how all those things come together to produce

1:28.4

a radically different future that we aren't prepared to handle, and aren't even prepared to think about.

1:33.3

So in this work, there are two different scenarios, and one's a little bit more hopeful than the

1:36.6

other, but they're both pretty dark. I mean, one ends with a newly empowered, super-intelligent

1:41.8

AI that surpasses human intelligence in all domains and ultimately

1:45.4

causes the end of human life on Earth.

...
