🗓️ 18 November 2024
⏱️ 9 minutes
Advancements in artificial intelligence have made it possible for the technology to mimic humans in ever-more convincing ways. But even far less sophisticated tools than today’s chatbots have been shown in research to trick our brains, in a sense, into projecting human thought processes and emotions onto these systems. It’s a cognitive failure that can leave people open to deception and manipulation, which makes the increasingly human-like technologies proliferating in our daily lives particularly dangerous, Rick Claypool, research director at the nonprofit Public Citizen, a consumer advocacy organization, told Marketplace’s Meghan McCarty Carino.
| 0:00.0 | Of course, we all know AI chatbots aren't human, don't we? |
| 0:07.0 | From American Public Media, this is Marketplace Tech. I'm Meghan McCarty Carino. Advancements in AI have made it possible for the technology to mimic humans in ever more convincing ways. |
| 0:29.1 | But even far less sophisticated tools than today's chatbots have been shown in research to trick our brains, in a sense, into projecting human thought |
| 0:39.3 | processes and emotions onto these systems. It's a cognitive failure that can leave people open |
| 0:45.8 | to deception and manipulation, which makes the increasingly human-like technologies proliferating |
| 0:52.1 | in our daily lives particularly dangerous, according to Rick |
| 0:55.8 | Claypool. He's a research director at the nonprofit Public Citizen, a consumer advocacy |
| 1:01.0 | organization. Well, so the human mind is naturally inclined to believe that something that we |
| 1:07.4 | can speak with must be human too, or human-like in that it has, you know, has a, |
| 1:13.1 | has a mind behind what it is. And younger people, people who are psychologically vulnerable |
| 1:19.9 | for any number of reasons are sort of more susceptible to being, you know, drawn in to this. |
| 1:27.2 | And even looking at the story that the New York Times reporter |
| 1:31.3 | Kevin Roose wrote on the early version of Bing and his interactions with it and how it professed |
| 1:39.5 | love and tried to talk him into, you know, leaving his wife and all that kind of thing. |
| 1:44.8 | Just having that kind of intense conversation is surprising. |
| 1:50.4 | And you could tell from the story he wrote about it, left him feeling pretty, you know, reeling and then dumbfounded. |
| 1:56.1 | Tell me more about the risks inherent to interacting with technology in this way. |
| 2:03.1 | So there are a range of risks associated with anthropomorphic AI systems, right? |
| 2:10.4 | And they also have a tendency to engage in what technologists have sort of called a sycophancy risk, |
| 2:21.6 | the risk that the system is being designed to always validate rather than challenge what the |
| 2:29.4 | user is saying. |
| 2:31.0 | That gets more dangerous whenever you say things like, |
... |