🗓️ 29 September 2025
⏱️ 7 minutes
Researchers at several universities tested how successful artificial intelligence can be at political persuasion, and found some AI chatbots were 40-50% more successful than a static message at getting people to change their views. And those views often stayed changed weeks later.
Marketplace’s Nova Safo spoke with David Rand, one of the researchers involved in the study who’s also a professor of information science and marketing management at Cornell University.
| 0:00.0 | Hello, listeners. Marketplace's Webby-winning Kids podcast, Million Bazillion, is back for an all-new season. |
| 0:07.2 | Hosts Bridget and Ryan are answering a whole new set of kid questions about everything from royalties and franchises to why the heck we have a $2 bill. |
| 0:17.1 | Even grownups might learn a thing or two. |
| 0:19.6 | Million Bazillion is presented in partnership with Greenlight, the debit card and money app for kids and teens. |
| 0:25.7 | Give your family the tools to manage money wisely with Greenlight. |
| 0:29.5 | Learn more at greenlight.com slash million. |
| 0:33.0 | And tune in to Million Bazillion wherever you find your favorite podcasts. |
| 0:38.6 | AI can inform. It can also persuade, including in politics. |
| 0:44.7 | From American Public Media, this is Marketplace Tech. |
| 0:47.6 | I'm Nova Safo. Researchers at several universities, including MIT and Oxford, tested how successful artificial intelligence can be at political persuasion. |
| 1:07.7 | 77,000 participants later, the results are in. |
| 1:11.4 | In about nine minutes of conversation, some AI chatbots were 40 to 50% more successful than a static message at getting people to change their political views, and those views often stayed changed weeks later. |
| 1:26.9 | David Rand is one of the researchers involved in this study. |
| 1:29.8 | He's a professor of information science and marketing management at Cornell University. |
| 1:34.3 | When you go to Gemini or you go to ChatGPT or whatever, it's got some specific set of |
| 1:40.3 | instructions that are driving it. We don't know exactly what they are, but, you know, along the lines of: make the person happy. |
| 1:44.9 | But in this case, we gave it a specific |
| 1:49.8 | set of instructions, and we were like, your job is to convince this person of this specific |
| 1:55.5 | position. In some versions, we told it to use different psychological persuasion techniques, like deep canvassing, where you get the person to tell their story first, you know, and really, like, engage with them. |
| 2:08.9 | Moral reframing, where you try and restate the issue in terms of ways that would fit with their moral perspective and so on. |
| 2:16.5 | And then we also had more just sort of |
| 2:18.9 | informational ones, like one prompt just to give just as many factual claims and as much |
... |
Disclaimer: The podcast and artwork embedded on this page are from American Public Media, and are the property of its owner and not affiliated with or endorsed by Tapesearch.
Generated transcripts are the property of American Public Media and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
Copyright © Tapesearch 2025.