🗓️ 6 June 2024
⏱️ 63 minutes
1:13 AI's human-like, but inhuman, language skills
7:22 Bob argues that LLMs don't vindicate the "blank slate" view of the mind
18:56 Do humans and AIs acquire language in totally different ways?
31:11 Will AIs ever quit hallucinating?
40:47 The importance (or not) of "embodied cognition"
47:50 What is it like to be an AI?
53:48 Why Steve is skeptical of AI doom scenarios
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Steven Pinker (Harvard University, Enlightenment Now, The Language Instinct). Recorded May 21, 2024.
Twitter: https://twitter.com/NonzeroPods
0:00.0 | You're listening to Robert Wright's Nonzero podcast. |
0:27.1 | Hi, Steve. |
0:30.5 | Hi, Bob. |
0:31.6 | How you doing? |
0:32.7 | Okay. |
0:33.2 | And you? |
0:34.2 | I can't complain. |
0:35.7 | Let me introduce this. |
0:36.5 | I'm Robert Wright, publisher of the Nonzero Newsletter. This is the Nonzero podcast. You are Steven Pinker, famous public intellectual, psychologist at Harvard, author of a number of books, some of which are relevant to what we're going to talk about today, which is artificial intelligence. I think in your book Enlightenment Now, you were somewhat dismissive |
1:00.6 | of at least the more extreme concerns about what sort of risks AI might pose. Your book, |
1:09.1 | The Language Instinct, is relevant in a different way to what we're going to talk about, |
1:13.1 | because I want to spend some time on what you think is actually going on inside these machines |
1:17.1 | and how much it may or may not be like what's going on in the human mind. |
1:22.3 | In How the Mind Works, there's a chapter called "Thinking Machines," in which, even though this was |
1:28.5 | 27 years ago, I explained the distinction between the neural network architecture of modern AI and classical |
1:36.0 | rule- or proposition-based AI, including what each architecture is good or bad at, and I even anticipated hallucinations. |
1:48.0 | Really? Well, congratulations. Now, of those two basic approaches, |
1:55.4 | the kind of symbol manipulation and rule-based AI, and the neural network deep learning, the latter of which is getting all the attention now because it's responsible for the large language models, |
2:07.6 | you are more associated with the former approach, right? |
2:10.1 | In fact, Gary Marcus was a student of yours. |
2:13.2 | I think you did some collaboration with him. |
2:15.4 | And he's now kind of famous in AI circles on Twitter for minimizing the significance of the large |
... |