🗓️ 20 July 2025
⏱️ 75 minutes
Benjamin Mann is a co-founder of Anthropic, an AI startup dedicated to building aligned, safety-first AI systems. Prior to Anthropic, Ben was one of the architects of GPT-3 at OpenAI. He left OpenAI driven by the mission to ensure that AI benefits humanity. In this episode, Ben opens up about the accelerating progress in AI and the urgent need to steer it responsibly.
In this conversation, we discuss:
1. The inside story of leaving OpenAI with the entire safety team to start Anthropic
2. How Meta’s $100M offers reveal the true market price of top AI talent
3. Why AI progress is still accelerating (not plateauing), and how most people misjudge the exponential
4. Ben’s “economic Turing test” for knowing when we’ve achieved AGI—and why it’s likely coming by 2027-2028
5. Why he believes 20% unemployment is inevitable
6. The AI nightmare scenarios that concern him most—and how he believes we can still avoid them
7. How focusing on AI safety created Claude’s beloved personality
8. What three skills he’s teaching his kids instead of traditional academics
—
Brought to you by:
Sauce—Turn customer pain into product revenue: https://sauce.app/lenny
LucidLink—Real-time cloud storage for teams: https://www.lucidlink.com/lenny
Fin—The #1 AI agent for customer service: https://fin.ai/lenny
—
Transcript: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann
—
My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/168107911/my-biggest-takeaways-from-this-conversation
—
Where to find Ben Mann:
• LinkedIn: https://www.linkedin.com/in/benjamin-mann/
• Website: https://benjmann.net/
—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—
In this episode, we cover:
(00:00) Introduction to Benjamin
(04:43) The AI talent war
(06:28) AI progress and scaling laws
(10:50) Defining AGI and the economic Turing test
(12:26) The impact of AI on jobs
(17:45) Preparing for an AI future
(24:05) Founding Anthropic
(27:06) Balancing AI safety and progress
(29:10) Constitutional AI and model alignment
(34:21) The importance of AI safety
(43:40) The risks of autonomous agents
(45:40) Forecasting superintelligence
(48:36) How hard is it to align AI?
(53:19) Reinforcement learning from AI feedback (RLAIF)
(57:03) AI's biggest bottlenecks
(01:00:11) Personal reflections on responsibilities
(01:02:36) Anthropic’s growth and innovations
(01:07:48) Lightning round and final thoughts
—
Referenced:
• Dario Amodei on LinkedIn: https://www.linkedin.com/in/dario-amodei-3934934/
• Anthropic CEO: AI Could Wipe Out 50% of Entry-Level White Collar Jobs: https://www.marketingaiinstitute.com/blog/dario-amodei-ai-entry-level-jobs
• Alexa+: https://www.amazon.com/dp/B0DCCNHWV5
• Azure: https://azure.microsoft.com/
• Sam Altman on X: https://x.com/sama
• Opus 3: https://www.anthropic.com/news/claude-3-family
• Claude’s Constitution: https://www.anthropic.com/news/claudes-constitution
• Greg Brockman on X: https://x.com/gdb
• Anthropic’s Responsible Scaling Policy: https://www.anthropic.com/news/anthropics-responsible-scaling-policy
• Agentic Misalignment: How LLMs could be insider threats: https://www.anthropic.com/research/agentic-misalignment
• Anthropic’s CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next
• AI prompt engineering in 2025: What works and what doesn’t | Sander Schulhoff (Learn Prompting, HackAPrompt): https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff
• Unitree: https://www.unitree.com/
• Arthur C. Clarke: https://en.wikipedia.org/wiki/Arthur_C._Clarke
• How Reinforcement Learning from AI Feedback Works: https://www.assemblyai.com/blog/how-reinforcement-learning-from-ai-feedback-works
• RLHF: https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback
• Jared Kaplan on LinkedIn: https://www.linkedin.com/in/jared-kaplan-645843213/
• Moore’s law: https://en.wikipedia.org/wiki/Moore%27s_law
• Machine Intelligence Research Institute: https://intelligence.org/
• Raph Lee on LinkedIn: https://www.linkedin.com/in/raphaeltlee/
• “The Last Question”: https://en.wikipedia.org/wiki/The_Last_Question
• Beth Barnes on LinkedIn: https://www.linkedin.com/in/elizabethmbarnes/
• Good Strategy, Bad Strategy | Richard Rumelt: https://www.lennysnewsletter.com/p/good-strategy-bad-strategy-richard
• Pantheon on Netflix: https://www.netflix.com/title/81937398
• Ted Lasso on AppleTV+: https://tv.apple.com/us/show/ted-lasso/umc.cmc.vtoh0mn0xn7t3c643xqonfzy
• Kurzgesagt—In a Nutshell: https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q
• 5 tips to poop like a champion: https://8enmann.medium.com/5-tips-to-poop-like-a-champion-3292481a9651
—
Recommended books:
• Superintelligence: Paths, Dangers, Strategies: https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834
• The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics: https://www.amazon.com/Hacker-State-Attacks-Normal-Geopolitics/dp/0674987551
• Replacing Guilt: Minding Our Way: https://www.amazon.com/Replacing-Guilt-Minding-Our-Way/dp/B086FTSB3Q
• Good Strategy/Bad Strategy: The Difference and Why It Matters: https://www.amazon.com/Good-Strategy-Bad-Difference-Matters/dp/0307886239
• The Alignment Problem: Machine Learning and Human Values: https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/0393635821
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
Lenny may be an investor in the companies discussed.
0:00.0 | You wrote somewhere that creating powerful AI might be the last invention humanity ever needs to make. How much time do we have, Ben? I think 50th percentile chance of hitting some kind of superintelligence is now like 2028. What is it that you saw at OpenAI? What did you experience there that made you feel like, okay, we got to go do our own thing? We felt like safety wasn't the top priority there. The case for safety has gotten a lot |
0:20.8 | more concrete. So superintelligence is a lot about like, how do we keep God in a box and not let the |
0:25.8 | God out? What are the odds that we align AI correctly? Once we get to superintelligence, it will be |
0:31.6 | too late to align the models. My best granularity forecast for like, could we have an x-risk or |
0:37.4 | extremely bad outcome is somewhere between 0 and 10%. |
0:40.4 | Something that's in the news right now is this whole Zuck coming after all the top AI researchers. |
0:45.3 | We've been much less affected because people here, they get these offers and then they say, well, of course I'm not going to leave because my best case scenario at Meta is that we make money. |
0:54.1 | And my best case scenario at Anthropic is that we make money. |
0:58.9 | And my best case scenario at Anthropic is we like affect the future of humanity. |
1:04.2 | Dario, your CEO recently talked about how unemployment might go up to something like 20%. If you just think about like 20 years in the future where we're like way past the singularity, |
1:08.6 | it's hard for me to imagine that even capitalism will look at all like it looks today. Do you have any advice for folks that want to try to get ahead of this? I'm not immune to job replacement either. At some point, it's coming for all of us. Today, my guest is Benjamin Mann. Holy moly, what a conversation. Ben is the co-founder of Anthropic. He serves as tech lead for product |
1:29.2 | engineering. He focuses most of his time and energy on aligning AI to be helpful, harmless, and |
1:34.7 | honest. Prior to Anthropic, he was one of the architects of GPT-3 at OpenAI. In our conversation, |
1:40.6 | we cover a lot of ground, including his thoughts on the recruiting battle for top AI researchers, why he left OpenAI to start Anthropic, how soon he expects |
1:49.8 | we'll see AGI, also his economic Turing test for knowing when we've hit AGI, why scaling |
1:55.4 | laws have not slowed down, and are in fact accelerating, and what the current biggest bottlenecks |
2:00.0 | are, why he's so deeply |
2:01.8 | concerned with AI safety and how he and Anthropic operationalize safety and alignment into |
2:07.0 | the models that they build and into their ways of working, also how the existential risk |
2:12.0 | from AI has impacted his own perspectives on the world and his own life, and what he's |
2:16.9 | encouraging his kids to learn |
2:18.1 | to succeed in an AI future. |
... |