
Patreon Preview: The Harms of Generative AI w/ Alex Hanna

Tech Won't Save Us

Paris Marx

Silicon Valley, Books, Technology, Arts, Future, Tech Criticism, Socialism, Paris Marx, News, Criticism, Tech News, Politics

4.8 • 626 Ratings

🗓️ 17 February 2025

⏱️ 6 minutes

🧾️ Download transcript

Summary

Our Data Vampires series may be over, but Paris interviewed a number of experts on data centers and AI whose insights shouldn't go to waste. We're releasing those interviews as bonus episodes for Patreon supporters. Here's a preview of this week's premium episode with Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR). For the full interview, support the show on Patreon.

Transcript


0:00.0

Hey, this is Paris. I hope you enjoyed the Data Vampire series that we did back in October.

0:04.6

It's had a fantastic response. And for that series, I spoke to a bunch of experts.

0:09.5

And now we're releasing the full-length versions of those interviews for our supporters over on patreon.com.

0:15.4

And I wanted to give you a preview of what those interviews sound like.

0:19.6

So you can consider whether to go to

0:21.4

patreon.com/techwontsaveus, become a supporter yourself, so you can learn even more

0:26.4

about the important topics that we dug into in that special series. So enjoy this clip from my

0:31.6

interview with Alex Hanna. Why are these generative AI models so computationally intensive?

0:38.6

Yeah, so they're computationally intensive because they are so large,

0:45.5

and the process of training is a pretty computationally intensive process.

0:53.2

So it depends on how far you want to go back.

0:56.0

But if you go back to the original innovation of neural networks

1:01.0

and the kind of advent of backpropagation,

1:04.0

backpropagation is just a really intensive process

1:09.0

because you have this architecture.

1:11.5

When I say architecture, it's the kind of actual structure of the neural network.

1:15.3

There's a few that are more or less popular.

1:18.2

It depends on, you know, the latest and greatest large language model, you know,

1:22.5

has a certain kind of architecture, you know, and it differs.

1:27.0

You know, like I can't imagine there's a lot of daylight

1:28.8

between the cutting edge models at OpenAI versus Anthropic versus Google. Maybe there's some

1:36.9

kind of differentiation that they have. But at the end of the day, the actual parameter fitting,

...
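As a rough illustration of the scale Hanna is describing: a widely used rule of thumb estimates training compute at roughly 6 floating-point operations per model parameter per training token. The short sketch below applies that approximation to hypothetical model sizes; the figures are illustrative assumptions, not numbers cited in the interview.

```python
def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training FLOPs using the ~6 * N * D rule of thumb
    (one forward and one backward pass over every training token)."""
    return 6 * parameters * tokens

if __name__ == "__main__":
    # Hypothetical model sizes -- assumptions for illustration only.
    examples = {
        "1B-parameter model, 100B training tokens": (1e9, 1e11),
        "100B-parameter model, 1T training tokens": (1e11, 1e12),
    }
    for name, (params, toks) in examples.items():
        print(f"{name}: ~{training_flops(params, toks):.1e} FLOPs")
```

Even under this simple approximation, a 100-billion-parameter model trained on a trillion tokens works out to around 6 × 10^23 FLOPs, which is why training runs demand large clusters of accelerators running for weeks.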

