🗓️ 28 August 2023
⏱️ 73 minutes
Over the last year, AI large language models (LLMs) like ChatGPT have demonstrated a remarkable ability to carry on human-like conversations in a variety of different contexts. But the way these LLMs "learn" is very different from how human beings learn, and the same can be said for how they "reason." It's reasonable to ask, do these AI programs really understand the world they are talking about? Do they possess a common-sense picture of reality, or can they just string together words in convincing ways without any underlying understanding? Computer scientist Yejin Choi is a leader in trying to understand the sense in which AIs are actually intelligent, and why in some ways they're still shockingly stupid.
Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/08/28/248-yejin-choi-on-ai-and-common-sense/
Support Mindscape on Patreon.
Yejin Choi received a Ph.D. in computer science from Cornell University. She is currently the Wissner-Slivka Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2, overseeing the project Mosaic. Her honors include a MacArthur Fellowship and being named a Fellow of the Association for Computational Linguistics.
Click on a timestamp to play from that location
0:00.0 | Hello, everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. If you are a fan of evolutionary biology, then you've heard of the theory of punctuated equilibrium. |
0:11.0 | This was an idea put forward by Niles Eldredge and Stephen Jay Gould back in the 70s to think about how evolution works in contrast with the dominant paradigm at the time of gradualism, right? |
0:24.0 | In the course of evolution, you build up many tiny little mutations and gradualism says that therefore evolutionary change happens slowly. |
0:32.0 | Eldridge and Gould wanted to say that in fact you can get the kind of mutation where it speeds everything up and it looks like there is some sudden change even though there's long periods of equilibrium between the sudden changes. |
0:44.0 | Physicists know about this kind of thing very, very well. There are phase transitions in physics where you can have a gradual change in the underlying microscopic constituents or their temperature or pressure or whatever, which leads to sudden changes at the macroscopic level. |
1:01.0 | And by the way, in biology, guess what? There are aspects of both. There are gradual changes and there are also punctuated rapid changes. |
1:09.0 | I mentioned this not because we're going to be talking about that at all today, but because I think that we are in the midst of a sudden rapid change, a phase transition when it comes to the topic we will be talking about today, which is artificial intelligence. |
1:24.0 | As I say later in the podcast, a year ago, when I started teaching my first courses at Johns Hopkins, there was no danger that the students writing papers were going to appeal to AI for help. |
1:36.0 | Now it is almost inevitable that they will do that. It's something you can try to tell them not to do, but they're going to, because the capabilities of the technology have grown |
1:46.0 | so very rapidly, and it's become much more useful. It's very far from being foolproof, don't get me wrong. So that raises a whole bunch of issues. |
1:56.0 | And we're going to talk about a lot of these issues today with today's guest, Yejin Choi, who is a computer science researcher. |
2:03.0 | She's done a lot of work on large language models and natural language processing, which is the sort of hot topic these days in AI. |
2:13.0 | And one of her emphases is something that I'm very interested in, which is the idea of how do you get common sense into a large language model. |
2:26.0 | For better or for worse, the ways that we have been most successful at training AI to be human-like is to not try to presume a lot about what it means to be human-like. |
2:40.0 | We just train it. We just say, okay, Mr. AI or Ms. AI, here is a whole bunch of text, you know, all of the internet or whatever. |
2:48.0 | You figure out, given a whole bunch of words, what word is most likely to come next. |
2:54.0 | Rather than teaching it, you know, what a table is and what a coffee cup is and what it means for one object to be on top of another one, etc. |
3:03.0 | And that's, you know, surprising in some ways, how can AI become so good, even though it doesn't have a common-sensical image of the world, it doesn't truly maybe, arguably, depending on what you mean, understand what it is saying when it is stringing these sentences together. |
3:20.0 | But also, you know, maybe that's a shortcoming. Maybe there are examples where you would like to be able to extrapolate outside what you've already read about on the internet. |
3:30.0 | And you can do that if you have some common sense and it's hard if all of your training is just what is the next word coming up. |
3:37.0 | A completely unfamiliar context makes it very difficult for that kind of large language model to make progress. |
3:44.0 | So this is what we talk about today. Is it possible for LLMs, large language models to learn common sense? Is it possible for them to be truly creative? Is there some sense in which they do understand and can explain things? |
... |