4.8 • 861 Ratings
🗓️ 13 September 2024
⏱️ 46 minutes
As artificial intelligence becomes more sophisticated, are we in danger of creating a world in which people turn to computers for companionship instead of living, breathing humans? Robert Mahari, JD-PhD Researcher at MIT Media Lab and Harvard Law School, joins host Krys Boyd to discuss why the doom and gloom of A.I. taking over has got it all wrong — that the real problem is we might actually like it too much to put it down. His article “We need to prepare for ‘addictive intelligence’” was published by the MIT Technology Review.
0:00.0 | If I ask you to imagine a world gone awry due to artificial intelligence, maybe you think of millions of people left unemployed because machines can do their jobs. |
0:20.0 | Or perhaps you've worried about the robot |
0:22.1 | overlord scenario in which AI platforms quickly spin beyond human control, realize they'd be better |
0:28.1 | off without us, and start laying waste to all the systems and infrastructure we depend on. |
0:33.4 | But there's another way our AI creations could harm us, one that is so frog in the pot |
0:39.1 | subtle that we're not even aware it's already underway. What if, as AI gets better and |
0:44.5 | better at approximating human interactions, we find ourselves just liking it too much? |
0:50.6 | From KERA in Dallas, this is Think. I'm Krys Boyd. If we hope artificial intelligence might offer something like companionship to all the chronically lonely people we worry about, we should think about what that could be like in practice and the ways it could be either a bridge to human relationships or a barrier. Robert Mahari is a JD-PhD researcher at MIT Media Lab and the Harvard |
1:13.6 | Law School and co-author of an article for the MIT Technology Review titled, We Need to Prepare for |
1:19.9 | Addictive Intelligence. Robert, welcome to Think. Hi, Krys. Great to be here. So as you survey |
1:26.9 | the landscape of just popular culture, how would you characterize the things |
1:30.9 | we currently are most worried about with regard to artificial intelligence? |
1:36.4 | There really is a range of worries that people have expressed, and I think that's part of the |
1:41.2 | challenge here. |
1:42.8 | Some people are worried that AI will magnify the things that we're already seeing, |
1:48.4 | misinformation, discrimination and bias, things like that. |
1:52.9 | Others are worried primarily about the economic consequences, |
1:56.3 | disruption of creative industries or labor writ large through automation. And then there's some folks who are |
2:03.6 | most concerned about the kind of existential scenarios that AI will escape the ability for humans |
2:11.9 | to control it and will wreak havoc on society in some really fundamental ways. |
2:23.0 | I think there's been less emphasis on the topic that we focus on, |
2:26.4 | which is this addictive potential of AI companions. |
... |
Disclaimer: The podcast and artwork embedded on this page are from KERA, and are the property of its owner and not affiliated with or endorsed by Tapesearch.
Generated transcripts are the property of KERA and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
Copyright © Tapesearch 2025.