Deep Questions with Cal Newport

Ep. 380: ChatGPT is Not Alive!

Cal Newport

Technology, Self-improvement, Education

4.8 • 1.3K Ratings

🗓️ 24 November 2025

⏱️ 79 minutes

Summary

There has been a lot of loose talk online recently about the capabilities of existing AI tools. In this episode, Cal reacts to a specific recent clip from the Joe Rogan podcast in which the guest argues that language models are like a child’s brain, and may already be conscious. Cal puts on his (always stylish) computer scientist hat to explain why this cannot be true. He then answers listener questions and reacts to feedback on his recent episode about using a notebook to enhance long thinking.

Transcript

0:00.0

A couple weeks ago, the biologist Bret Weinstein went on Joe Rogan's podcast.

0:10.9

Their conversation turned, as it so often does these days, to the topic of AI.

0:17.3

Weinstein goes on a monologue about AI pretty early in the episode, and it's that monologue that begins as follows.

0:24.0

I want to play you a clip here from the start of his conversation.

0:29.8

Entity, so you'll hear people say, well, it's not really thinking, right?

0:35.1

It's just figuring out if it was thinking what the next word in the sentence is.

0:39.7

Garbage.

0:41.6

In some sense, right?

0:43.0

Weinstein is right in how he starts the conversation, right?

0:47.4

He's saying, look, we can't just say language models predict the next word.

0:52.1

That's too dismissive.

0:54.0

So while it's true that, yeah,

0:55.0

they literally do predict the next word, there is a lot of impressive understanding and processing

1:00.2

that goes into actually figuring out what word to produce. In fact, writing in The New Yorker recently,

1:05.8

James Somers argues that we can even think of the processing that goes into predicting next words

1:10.4

in language models as

1:11.5

thinking. And I think he lays out a good argument for that. So Weinstein, he starts, I think I'm on board

1:16.7

with him. But then as he continues in his discussion, he begins to argue that not only is AI doing

1:23.6

impressive processing, but is rapidly evolving beyond our ability to control,

1:29.6

and that it is, quote, like five minutes, end quote, away from starting to manipulate us.

1:35.0

Weinstein implies that existing LLMs might already be conscious and that we have no real way of

1:39.5

testing whether or not this is true.

...

Disclaimer: The podcast and artwork embedded on this page are from Cal Newport, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Cal Newport and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.