Big Technology Podcast

Dwarkesh Patel: AI Continuous Improvement, Intelligence Explosion, Memory, Frontier Lab Competition

Alex Kantrowitz

Technology, Religion & Spirituality, Business News, Business, Religion, Science, Philosophy, Society & Culture, Entrepreneurship, Management, Marketing, Politics, News Commentary, Government, Investing, Tech News, Social Sciences, News

4.6 ★ 395 Ratings

🗓️ 18 June 2025

⏱️ 70 minutes

Summary

Dwarkesh Patel is the host of the Dwarkesh Podcast. He joins Big Technology Podcast to discuss the frontiers of AI research, sharing why his timeline for AGI is a bit longer than that of the most enthusiastic researchers. Tune in for a candid discussion of the limitations of current methods, why continuous AI improvement might help the technology reach AGI, and what an intelligence explosion looks like. We also cover the race between AI labs, the dangers of AI deception, and AI sycophancy. Tune in for a deep discussion about the state of artificial intelligence and where it's going.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Questions? Feedback? Write to: [email protected]

Transcript

0:00.0

Why do we have such vastly different perspectives on what's next for AI if we're all looking at the same data and what's actually going to happen next?

0:08.0

Let's talk about it with Dwarkesh Patel, one of the leading voices on AI, who's here with us in studio to cover it all.

0:16.0

Dwarkesh, great to see you. Welcome back to the show.

0:18.0

Thanks for having me, man.

0:19.0

Thanks for being here. I was listening to our last episode, which we recorded last year, and we were anticipating what was going to happen with GPT-5. Still no GPT-5. That's right. Oh, yeah. That would have surprised me a year ago.

0:31.4

Definitely. And another thing that would have surprised me is we were saying that we were at a moment where we were going to figure out basically

0:38.3

what's going to happen with AI progress, whether the traditional method of training

0:42.3

LLMs was going to hit a wall or whether it wasn't. We were going to find out. We were basically

0:47.3

months away from knowing the answer to that. Here we are a year later. We have, everybody's

0:51.3

looking at the same data. Like I mentioned in the intro. Right.

0:55.0

We have no idea.

0:56.0

There are people who are saying AI, artificial general intelligence or human level intelligence

1:01.0

is imminent with the methods that are available today.

1:05.0

And there are others that are saying 20, 30, maybe longer, maybe more than 30 years until we reach it.

1:12.7

So let me start by asking you this.

1:17.8

If we're all looking at the same data, why are there such vastly different perspectives on where this goes?

1:18.9

I think people have different philosophies around what intelligence is.

1:22.6

That's part of it.

1:23.6

I think some people think that these models are just basically baby AGIs already.

1:27.9

And they just need a couple additional little unhobblings, a little sprinkle on top.

1:32.3

Things like test-time thinking.

1:35.6

So we already got that with o1 and o3 now.

...

Disclaimer: The podcast and artwork embedded on this page are from Alex Kantrowitz, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Alex Kantrowitz and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.