Short Wave

How Hackers Could Fool Artificial Intelligence

NPR

Daily News, Nature, Life Sciences, Astronomy, Science, News

4.76K Ratings

🗓️ 21 September 2020

⏱️ 10 minutes

Summary

Artificial intelligence might not be as smart as we think. University and military researchers are studying how attackers could hack into AI systems by exploiting how these systems learn. It's known as "adversarial AI." In this encore episode, Dina Temple-Raston tells us that some of these experiments use seemingly simple techniques.
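The episode only gestures at these "seemingly simple techniques," but one well-known example of an adversarial attack, the fast gradient sign method (FGSM, which the episode does not name), can be sketched against a toy model. Everything below, the model, the weights, and the numbers, is an illustrative assumption, not anything from the show:

```python
import numpy as np

# Toy stand-in for a trained classifier: logistic regression with
# fixed random weights. All values are made up for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # "learned" model weights
x = rng.normal(size=100)   # a clean input the model classifies

def predict(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# FGSM idea: nudge every input feature a small, equal amount in the
# direction that most changes the model's output. For this linear
# model, that direction is simply the sign of each weight.
eps = 0.5
push_down = predict(x) >= 0.5          # flip toward the other class
direction = -np.sign(w) if push_down else np.sign(w)
x_adv = x + eps * direction            # small per-feature perturbation

print(predict(x), predict(x_adv))      # the score moves sharply
```

Each individual change to the input is tiny, yet together they can flip the model's decision, which is why these attacks are described as "seemingly simple."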

For more, check out Dina's special series, I'll Be Seeing You.

Email the show at [email protected].

Transcript

0:00.0

Hey, everybody. Maddie Sofia here. Today we've got one for you that we first published

0:05.6

last year, when Short Wave was just two weeks old, barely lifting our little baby podcast

0:11.3

head up on our own for the first time. The episode's all about how artificial intelligence

0:16.8

works and how it can be hacked. Plus, disco music. We're back tomorrow with a new episode.

0:23.3

And don't forget to subscribe to or follow Short Wave wherever you get your podcasts.

0:28.9

You're listening to Short Wave from NPR.

0:32.7

Hey, everybody. Maddie Sofia here again. This time with NPR special correspondent Dina

0:38.4

Temple-Raston. Hey, Dina. Hey there. So you're here because you've been doing some really

0:43.0

cool reporting about artificial intelligence as part of your special series, I'll Be

0:47.7

Seeing You. Yeah, we did a story explaining how AI works and how it's finding

0:52.9

its way into everything from refrigerators to insurance, even conservation. But you also

0:58.3

found out that for all of its potential, there are some real concerns about hacking into

1:02.9

AI. There's actually a whole field of study that is focused on this. It's called adversarial

1:08.5

or evil AI. Evil. And it's a big enough concern that DARPA, the military's research arm,

1:15.6

has created this whole program to study it. And it's called Guaranteeing AI Robustness

1:21.2

Against Deception. Or, luckily, it has a short name: GARD. The government is so good at naming

1:26.1

things, Dina. It is quite the name. So DARPA is really good at creating tongue twisters.

1:31.8

But basically what they're trying to do is imagine adversaries hacking into AI systems.

1:36.9

And as they see it, it could affect everything from, like, public opinion to driverless cars. So it

1:41.6

has huge implications. Today on Short Wave: adversarial AI. How does it work, and how can we

1:50.2

stop it?

1:56.4

Okay, Dina, let's start with the basics. What makes AI so vulnerable to hacking?

...

Disclaimer: The podcast and artwork embedded on this page are from NPR, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of NPR and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
