Tech Won't Save Us

Patreon Preview: Maybe We Should Destroy AI w/ Ali Alkhatib

Paris Marx

Silicon Valley, Books, Technology, Arts, Future, Tech Criticism, Socialism, Paris Marx, News, Criticism, Tech News, Politics

4.8 • 626 Ratings

🗓️ 13 November 2024

⏱️ 5 minutes

Summary

Our Data Vampires series may be over, but Paris interviewed a bunch of experts on data centers and AI whose insights shouldn’t go to waste. Starting this week, we’re releasing those interviews as bonus episodes for Patreon supporters. Here’s a preview of this week’s premium episode with Ali Alkhatib, Logic(s) data editor and former interim director of the Center for Applied Data Ethics. For the full interview, support the show on Patreon. Support the show

Transcript

0:00.0

Hey, this is Paris. I hope you enjoyed the Data Vampire series that we did back in October.

0:04.6

It's had a fantastic response. And for that series, I spoke to a bunch of experts. And now we're

0:10.3

releasing the full-length versions of those interviews for our supporters over on patreon.com.

0:15.8

And I wanted to give you a preview of what those interviews sound like. So you can consider whether to go to patreon.com slash tech won't save us,

0:24.4

become a supporter yourself so you can tune in to the full-length interviews.

0:28.5

So you can learn even more about the important topics that we dug into in that special series.

0:34.4

So enjoy this clip from my interview with Ali Alkhatib.

0:40.5

There's been a lot of talk in the past year and a half about what effective regulation of AI is going to look like or should look like,

0:48.2

you know, a lot of debate about those things, a lot of CEOs speaking out and saying what that should look like, and unfortunately,

0:56.7

lawmakers listening to them as though they have the answers. But you recently wrote an essay about

1:02.4

destroying AI, you know, taking one step further than that than just kind of passing some

1:07.8

regulations to try to, you know, reduce the worst aspects of what

1:12.8

these AI systems can do. What brought you to the point to write something like that and to take

1:17.1

that further step? Yeah, sorry. So I've been studying human computer interaction for about 10 years,

1:21.4

started a PhD program 10 years ago today, actually, or close to today. I mean, I had been spending

1:25.8

a long time thinking about how to develop human-centered

1:29.2

systems and particularly writing papers that were trying to bring ideas from the social

1:35.6

sciences about power, about oppression, about violence, into understanding how algorithmic systems

1:40.8

can manifest these kinds of harms and trying to encourage people to think

1:45.2

along those kinds of lines to understand and then to design consequential algorithmic

1:52.3

systems in various different ways. And part of my frustration was coming from the feeling that

1:59.3

HCI was sort of not picking up some of that,

...
