The a16z Show

Dwarkesh and Ilya Sutskever on What Comes After Scaling

a16z

Science, Innovation, Business, Entrepreneurship, Culture, Disruption, Software Eating The World, Technology

4.4 · 1.1K Ratings

🗓️ 15 December 2025

⏱️ 92 minutes

Summary

AI models feel smarter than their real-world impact. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice? In this episode from The Dwarkesh Podcast, Dwarkesh talks with Ilya Sutskever, co-founder of SSI and former OpenAI chief scientist, about what is actually blocking progress toward AGI. They explore why RL and pretraining scale so differently, why models outperform on evals but underperform in real use, and why human-style generalization remains far ahead. Ilya also discusses value functions, emotions as a built-in reward system, the limits of pretraining, continual learning, superintelligence, and what an AI-driven economy could look like.

Transcript

0:00.0

Now that compute is big, compute is now very big.

0:03.6

In some sense, we are back to the age of research.

0:06.9

We got to the point where we are in a world

0:11.0

where there are more companies than ideas by quite a bit.

0:15.9

Now there is the Silicon Valley saying that says

0:19.1

that ideas are cheap, execution is everything. What is the problem

0:24.6

of AI and AGI? The whole problem is the power. AI models look incredibly smart on benchmarks,

0:32.5

yet their real world performance often feels far behind. Why is there such a gap and what does that

0:37.1

say about the path to

0:38.0

AGI? From the Dwarkesh Podcast, here's a rare long-form conversation with Ilya Sutskever,

0:42.9

co-founder of SSI, exploring what's actually slowing down progress toward AGI. Dwarkesh and Ilya dig

0:50.1

into the core problems in modern AI systems, from why RL and pre-training scale so differently,

0:55.7

to why generalization, reliability, and sample efficiency still fall short of human learning.

1:00.9

They also explore continual learning, value functions, superintelligence, and what a future economy

1:06.0

shaped by AI might look like.

1:09.3

You know what's crazy?

1:10.7

I know. That all of this is real.

1:13.6

Yeah.

1:14.1

Don't you think so?

1:15.4

Meaning what?

1:16.2

Like all this AI stuff and all this Bay Area.

1:18.6

Yeah, that it's happened.

...

