a16z Podcast

Chasing Silicon: The Race for GPUs

a16z Podcast

a16z

Science, Innovation, Business, Entrepreneurship, Culture, Disruption, Software Eating The World, Technology

4.4 • 1.1K Ratings

🗓️ 2 August 2023

⏱️ 22 minutes


Summary

With the world constantly generating more data, unlocking the full potential of AI means a constant need for faster and more resilient hardware. In this episode – the second in our three-part series – we explore the challenges for founders trying to build AI companies. We dive into the delta between supply and demand, whether to own or rent, where moats can be found, and even where open source comes into play. Look out for the rest of our series, where we dive into the terminology and technology that form the backbone of AI, and how much compute truly costs!

Transcript

Click on a timestamp to play from that location

0:00.0

We currently don't have as many AI chips or servers as we'd like to have.

0:06.0

How do I get access to the compute that I need?

0:10.0

Who decides this?

0:11.0

You're looking at some very large investment projects that take some time to adjust.

0:15.0

We're rebuilding the stack. You can look at AI just as a new application, but

0:20.0

I think it's probably a better way to look at it as a different type of compute.

0:24.1

With software becoming more important than ever, hardware is following suit.

0:29.5

And with the world constantly generating more data, unlocking the full potential of AI means a constant need for

0:36.0

faster and more resilient hardware. That is exactly why we've created this mini series on AI hardware. In part one, we took you through the emerging

0:45.8

architecture powering LLMs, from GPU to TPU, including how they work, who's creating them, and also whether we can expect Moore's Law to continue.

0:57.3

But part two is for the founders trying to build AI companies.

1:01.4

And here we dive into the delta between supply and demand, why we can't just

1:05.3

print our way out of a shortage, how founders can get access to inventory, whether they should

1:09.8

think about renting or owning, where moats can be found, and even where open source comes into play.

1:16.4

You should also look out for part three coming very soon where we break down exactly

1:20.7

how much all of this costs from training to inference. And today we're joined again by

1:26.1

a16z Special Advisor Guido Appenzeller, someone who is truly uniquely suited for this deep

1:32.1

dive as a storied infrastructure expert with experience like

1:36.2

CTO of Intel's Data Center Group, dealing a lot with hardware and the low-level components.

1:40.7

So that's given him, I think, a good insight into how large data centers work, and what the low-level components are that make all of this AI boom

1:47.7

possible today. Despite working with infrastructure for quite some time, here's Guido commenting on how the momentum of the recent AI wave is shifting supply and demand dynamics.

1:59.0

The biggest thing that is striking is just the crazy exponential growth of AI at the moment.

...


Disclaimer: The podcast and artwork embedded on this page are from a16z, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of a16z and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.