Your Undivided Attention

No One is Immune to AI Harms with Dr. Joy Buolamwini

Center for Humane Technology

4.8 • 1.5K Ratings

🗓️ 26 October 2023

⏱️ 48 minutes

Summary

Dr. Joy Buolamwini, the founder of the Algorithmic Justice League, argues that the most urgent risks from AI are algorithmic bias, discrimination, and the concentration of power in tech companies.

Transcript

Click on a timestamp to play from that location

0:00.0

Hey everyone, this is Tristan and this is Aza.

0:08.2

One of the things that makes AI so vexing is the multiple horizons of harm that it affects

0:12.4

simultaneously.

0:13.8

We sometimes hear about this divide or schism in the responses to the immediate risks that

0:18.7

AI poses today and the longer term and emerging risks that AI can pose tomorrow.

0:23.8

In those camps, there's the AI bias and AI ethics community, which is typically focused

0:27.6

on the immediate risks, and there's the AI safety community, which is typically focused

0:31.9

on the longer term risks.

0:33.5

But is there really a divide between these concerns?

0:36.4

About this notion of schism, it makes for good headlines.

0:41.1

That's Dr. Joy Buolamwini, founder of the Algorithmic Justice League, and author of

0:44.8

a new book called Unmasking AI: My Mission to Protect What Is Human in a World of Machines.

0:50.0

I've heard this.

0:51.2

There are camps.

0:52.2

We got AI safety on one end.

0:54.4

We got AI ethics on the other hand.

0:58.0

We got the doomers, the gloomers, all of these things.

1:00.9

I think it makes for interesting headlines.

1:04.1

And I see it less as a schism and more as a spectrum of concerns.

1:12.2

I think there are immediate harms, emerging harms and longer term harms.

1:17.2

And I think the way you address the longer term harms is by attending to what is immediate.

1:27.9

Dr. Joy conducted the breakthrough research that demonstrated to the world how gender and

...
