Tom Bilyeu's Impact Theory

Ethics, Control, and Survival: Navigating the Risks of Superintelligent AI | Impact Theory w/ Tom Bilyeu X Dr. Roman Yampolskiy Pt. 2

Impact Theory

Education, News, News Commentary, Philosophy, Technology, Society & Culture, Business, Self-improvement

4.7 • 5.1K Ratings

🗓️ 19 November 2025

⏱️ 63 minutes

Summary

What's up, everybody? It's Tom Bilyeu here. If you want my help...

STARTING a business: join me at ZERO TO FOUNDER: https://tombilyeu.com/zero-to-founder?utm_campaign=Podcast%20Offer&utm_source=podca[%E2%80%A6]d%20end%20of%20show&utm_content=podcast%20ad%20end%20of%20show

SCALING a business: see if you qualify here: https://tombilyeu.com/call

Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here: https://tombilyeu.com/

If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook, a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you.

FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
TikTok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

SPONSORS:
LinkedIn: Post your job free at https://linkedin.com/impacttheory
HomeServe: Help protect your home systems and your wallet against covered repairs with HomeServe. Plans start at just $4.99 a month at https://homeserve.com
Bevel Health: First month FREE at https://bevel.health/impact with code IMPACT
Incogni: Take your personal data back with Incogni! Use code IMPACT at the link below and get 60% off an annual plan: https://incogni.com/impact
BlandAI: Call it for free today: https://bland.ai For enterprises, book a demo directly: https://bland.ai/enterprise
Business Wars: Follow Business Wars on the Wondery App or wherever you get your podcasts.
Connecteam: 14-day free trial at https://connecteam.cc/46GxoTFd
Raycon: Go to https://buyraycon.com/impact to get up to 30% off sitewide.
Cape: 33% off with code IMPACT33 at https://cape.co/impact
Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact
AirDoctor: Up to $300 off with code IMPACT at https://airdoctorpro.com

Welcome back to Impact Theory with Tom Bilyeu! In this episode, Tom sits down for part two of his riveting conversation with Dr. Roman Yampolskiy, a leading AI safety researcher. Together, they dive deep into the existential risks and moral quandaries posed by artificial superintelligence, exploring why even some of the industry's most vocal advocates for caution, like Elon Musk, end up accelerating the development of advanced AI. Dr. Yampolskiy pulls no punches, explaining why attempts to control superintelligent systems might be futile, the challenges of aligning AI interests with humanity's well-being, and the reasons he's so skeptical that society can pump the brakes on innovation. The discussion ranges from the dangers of an unchecked AI arms race and the psychological burden of confronting these risks to practical approaches for boosting AI safety and the impact of emerging technologies like quantum computing and genetic engineering. Tom and Roman also touch on longevity science, ethics around gene editing, and the future of personal adaptation across generations. Whether you're an optimist about technology or deeply concerned about humanity's trajectory, this episode is packed with insight, tough questions, and thought-provoking perspectives that will leave you rethinking the future of intelligence, both artificial and human.

FOLLOW DR. ROMAN YAMPOLSKIY:
Twitter: https://twitter.com/romanyampolski
Facebook: https://www.facebook.com/roman.yampolskiy

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript

0:00.0

Welcome back to part two of my conversation with Dr. Roman Yampolskiy. So why do you think that Elon,

0:07.3

who was banging the drum harder than anybody, lobbying Congress, desperately trying to get them to

0:13.6

slow down, suddenly hit a point where he was like, well, I guess I'll just build it faster than

0:19.4

anybody else. He likened AI to a demon summoning circle and laughed at everybody who thought, yeah, yeah, yeah, I'll summon a demon and then I'll be able to control it. All is going to be well. Like he sees the problem clearly. But after years of trying to slow this down, he finally completely abandoned that

0:40.2

and went to, I'll just build it faster than anybody else.

0:43.3

What happened there, and why do you think you can reverse it?

0:47.3

So I think he realized he's not succeeding at his initial approach

0:51.5

of convincing him not to do it.

0:54.0

And so the second step in that plan would be to become the leader in the field

0:58.3

and convince them from a position of leadership and control of the more advanced technology.

1:04.5

If the leader says, you know, we're going to slow down and it's fine for you to slow down,

1:09.0

it's easier to negotiate that deal with, let's

1:12.2

say, top seven companies than if you are not even part of the game, you have no AI, you are

1:18.0

a nobody in that space. So all of them as a group benefit more if they agree to slow down

1:24.5

or stop than if they just arms race and the first one to get there

1:28.9

gets everyone destroyed. He says words along those lines or did for a while. I think he even signed

1:35.3

one of the letters about we should pump the brakes. But none of his actions indicate that that's

1:41.1

actually what he plans to do from just trying to take advantage of every company

1:50.1

that he's building from the amount of data that Tesla cars capture visually to all the

1:57.1

decisions that drivers are currently making to all of the decisions that the AI will make,

2:02.1

to now he's talking about using the cars as a distributed fleet so that when they're idle,

2:07.6

that they're actually running inference models. And so using it as a gigantic AI brain to,

...
