
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

Your Undivided Attention

Center for Humane Technology


4.8 · 1.5K Ratings

🗓️ 7 June 2024

⏱️ 38 minutes


Summary

Whistleblower William Saunders quit OpenAI over systemic safety issues. Now he’s put his name to an open letter that proposes four principles to protect the right of industry insiders to warn the public about AI risks. On Your Undivided Attention this week, Tristan and Aza sit down with Saunders to discuss.

Transcript


0:00.0

Hey everyone, this is Tristan, and this is Aza. You might have heard some of the

0:08.8

big news in AI this week that 11 current and former OpenAI employees published an open letter called

0:16.2

A Right to Warn.

0:18.2

And it outlines four principles that are meant to protect the ability of employees to warn about under-addressed risks before they happen.

0:27.0

And Tristan, I sort of want to turn it over to you, because they're talking about the risks that get under-addressed in a race to take shortcuts.

0:36.0

Yeah, so just to link this to things we've talked about all the time in this podcast.

0:40.0

If you show me the incentive, I will show you the outcome.

0:43.0

And in AI, the incentive is to win the race to AGI,

0:46.3

to get to artificial general intelligence first,

0:48.9

which means the race to market dominance,

0:51.6

getting as many users onto your AI model, getting as many people

0:55.0

using ChatGPT, doing a deal with Apple so the next iPhone comes with your AI model, opening

1:00.9

up your APIs so that you're giving every AI developer exactly what they want,

1:05.2

have as many plugins as they want, do the trillion-dollar training run, and show your investors

1:10.3

that you have the new exciting AI model.

1:12.4

Once you do all that and you're going at this super fast clip,

1:15.0

your incentives are to keep releasing, to keep racing, and those incentives mean to keep taking shortcuts.

1:22.7

Given the fact that there is no current regulation

1:25.0

in the US for AI systems, we're left with,

1:27.8

well, who sees the early warning signs?

1:29.8

It's the people inside the companies.

...

