Your Undivided Attention

Spotlight on AI: What Would It Take For This to Go Well?

Center for Humane Technology

4.8 • 1.5K Ratings

🗓️ 12 September 2023

⏱️ 44 minutes

Summary

Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.

Transcript

0:00.0

Hey, this is Tristan, and this is Aza.

0:02.5

Welcome to Your Undivided Attention.

0:08.4

So this episode, we're going to start with some bad news,

0:11.9

and then walk through like where we are,

0:14.6

what's happened since the AI Dilemma,

0:17.0

which I think has now been seen by 2.8 million people, move into some bad news,

0:21.7

like what's been happening since then, then do some good news,

0:24.6

all of the great things that have happened.

0:26.2

And then we just ran a three-day-long workshop on how AI could go well

0:31.9

with a whole bunch of the AI safety groups and teams,

0:35.1

and we want to give an update on what we've learned.

0:38.6

So maybe we should dive in by talking about some of the concerning developments.

0:43.2

What are the concerning developments in the space?

0:47.6

So we released the AI Dilemma, I think it was March 9, 2023.

0:53.8

Or was that when we gave the talk? The video came out a little bit after, a couple weeks after.

0:57.2

Yeah, the video came out a few weeks after.

0:58.6

So basically, we were in San Francisco, we were at the Commonwealth Club.

1:01.6

It was our third of several briefings, what we called the AI Dilemma,

1:04.8

not knowing what to call it, knowing people knew about The Social Dilemma,

1:08.0

and we decided to make this presentation,

1:11.2

because we had people from the AI labs come to us saying that the current arms race

1:15.8

between the major companies, OpenAI, Anthropic, Google, Microsoft,

...

Disclaimer: The podcast and artwork embedded on this page are from Center for Humane Technology, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Center for Humane Technology and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.