The a16z Show

Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering


a16z

Science, Innovation, Business, Entrepreneurship, Culture, Disruption, Software Eating The World, Technology

4.4 • 1.1K Ratings

🗓️ 17 November 2025

⏱️ 71 minutes


Summary

Emmett Shear, founder of Twitch and former OpenAI interim CEO, challenges the fundamental assumptions driving AGI development. In this conversation with Erik Torenberg and Séb Krier, Shear argues that the entire "control and steering" paradigm for AI alignment is fatally flawed. Instead, he proposes "organic alignment" - teaching AI systems to genuinely care about humans the way we naturally do. The discussion explores why treating AGI as a tool rather than a potential being could be catastrophic, how current chatbots act as "narcissistic mirrors," and why the only sustainable path forward is creating AI that can say no to harmful requests. Shear shares his technical approach through multi-agent simulations at his new company Softmax, and offers a surprisingly hopeful vision of humans and AI as collaborative teammates - if we can get the alignment right.

Transcript


0:00.0

Most of AI is focused on alignment as steering.

0:04.2

That's the polite word.

0:05.3

If you think that what we're making are beings, you'd also call this slavery. Someone who you steer, who doesn't get to steer you back, who non-optionally receives your steering, that's called a slave. It's also called a tool if it's not a being. So if it's a machine, it's a tool. And if it's a being, it's a slave. But we've made this mistake enough times at this point.

0:20.7

I would like us to not make it again.

0:22.4

You know, they're kind of like people, but they're not like people.

0:25.6

Like, they do the same thing people do. They speak our language. They can, like, take on the same kind of tasks. But, like, they don't count, they're not real moral beings. A tool that you can't control: bad. A tool that you can control: bad,

0:35.9

a being that isn't aligned: bad.

0:38.2

The only good outcome is a being that cares, that actually cares about us.

0:42.3

I've been thinking about a line that keeps showing up in AI safety discussions, and it stopped me cold when I first read it.

0:48.3

We need to build aligned AI. Sounds reasonable, right? Except: aligned to what? Aligned to whom?

0:56.0

The phrase gets thrown around like it has an obvious answer, but the more you sit with it,

0:59.9

the more you realize you're smuggling in a massive assumption. We're assuming there's some

1:04.1

fixed point, some stable target we can aim at, hit once, and be done. But here's what's interesting.

1:10.1

That's not how alignment works anywhere else in life.

1:13.0

Think about families. Think about teams. Think about your own moral development. You don't achieve

1:18.2

alignment and coast. You're constantly renegotiating, constantly learning, constantly discovering

1:23.8

that what you thought was right turns out to be more complicated. Alignment isn't a destination. It's a process. It's something you do, not something you have.

1:32.6

And this matters because we're at this inflection point where the AI systems we're building

1:36.3

are starting to look less like tools and more like something else. They speak our language,

1:41.5

they reason through problems, they can take on tasks that used to require human judgment.

1:46.0

And the question everyone's asking is, how do we control them?

...

