City Journal Audio

The AI Arms Race

Manhattan Institute

Politics, News Commentary, News

4.8 · 615 Ratings

🗓️ 6 February 2025

⏱️ 25 minutes

Summary

Judd Rosenblatt joins Jordan McGillis to discuss DeepSeek and the competition around AI development. 

Transcript

0:00.0

Welcome to Ten Blocks. I'm Jordan McGillis, economics editor of City Journal.

0:22.4

On January 20, 2025, the Chinese artificial intelligence firm DeepSeek released its R1 model.

0:30.7

The model is competitive with top American models, but DeepSeek has reportedly achieved this feat

0:35.5

at a tiny fraction of the cost that American firms have been pouring into their training.

0:40.7

The next day, President Donald Trump held a press conference at the White House

0:44.0

with the heads of OpenAI, Oracle, and Japan's SoftBank

0:48.2

to announce a $500 billion plan to build a system of AI data centers in America called Stargate. To discuss the latest

0:56.8

in AI geopolitics, I've invited Judd Rosenblatt on today's show. Judd is the founder and CEO of

1:05.9

A.E. Studio and a leading advocate for aggressive, thoughtful, American AI development.

1:12.5

Judd, thanks for coming on.

1:14.5

Thanks for having me, Jordan.

1:16.2

First question for you.

1:18.1

What does DeepSeek's release tell us about the state of the AI arms race?

1:22.8

Well, it's impressive that they were able to make so many algorithmic improvements with limited compute.

1:29.7

And one thing that's fairly interesting about the DeepSeek work is that it strongly reinforces this idea of a negative alignment tax,

1:39.0

where improving alignment techniques, investing in trying to make AI more likely to be more capable by virtue of its

1:45.5

alignment, not only mitigates risks, but also enhances capabilities because it uses reinforcement

1:51.0

learning to induce chain of thought reasoning. And that winds up increasing performance. So basically, it

1:58.5

optimizes for transparent reasoning structures and also

2:01.2

increases model performance in all these different complex tasks, math ones, especially. And so,

2:07.2

so basically, instead of just using reinforcement learning for preference alignment, which is what

2:12.8

OpenAI's RLHF does for politeness and stuff, it uses these reward signals in reinforcement learning

...
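The reward-signal idea described above can be sketched in a toy example. This is not DeepSeek's actual training code; it is a minimal illustration, under the assumption (consistent with the published R1 approach) that completions are scored by simple rules, a correctness reward plus a format reward for showing reasoning in a `<think>` block, and that advantages are computed relative to a group of sampled completions rather than via a learned value model. The tag names and reward weights here are hypothetical.

```python
import re

def reward(completion: str, gold_answer: str) -> float:
    """Rule-based reward: correctness plus a format bonus for
    exposing chain-of-thought inside <think>...</think> tags.
    (Hypothetical weights, for illustration only.)"""
    r = 0.0
    if re.search(r"<think>.*</think>", completion, re.DOTALL):
        r += 0.5  # format reward: transparent reasoning shown
    # the answer is whatever remains after stripping the reasoning block
    answer = re.sub(r"<think>.*</think>", "", completion, flags=re.DOTALL).strip()
    if answer == gold_answer:
        r += 1.0  # accuracy reward
    return r

def group_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantage: each sample's reward minus the group
    mean, scaled by the group standard deviation (no value network)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero
    return [(r - mean) / std for r in rewards]

# A group of sampled completions for the prompt "What is 12 * 7?"
samples = [
    "<think>12 * 7 = 84</think> 84",  # reasoning shown, correct
    "84",                             # correct, but no reasoning
    "<think>12 * 7 = 82</think> 82",  # reasoning shown, wrong
]
rs = [reward(s, "84") for s in samples]      # [1.5, 1.0, 0.5]
advs = group_advantages(rs)                  # positive, zero, negative
```

The point of the sketch: the completion that both reasons transparently and answers correctly gets the highest advantage, so gradient updates push the policy toward transparent, correct reasoning, which is the "negative alignment tax" mechanism discussed in the episode.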
