Nature Podcast

‘Malicious use is already happening’: machine-learning pioneer on making AI safer


Science, News, Technology

4.4 • 859 Ratings

🗓️ 14 November 2025

⏱️ 15 minutes


Summary

Yoshua Bengio, considered by many to be one of the godfathers of AI, has long been at the forefront of machine-learning research. However, his opinions on the technology have shifted in recent years — he joins us to talk about ways to address the risks posed by AI, and his efforts to develop an AI with safety built in from the start.


Nature: ‘It keeps me awake at night’: machine-learning pioneer on AI’s threat to humanity



Transcript


0:00.0

Hi, Benjamin here with a Podcast Extra, which features an interview with Yoshua Bengio,

0:09.5

considered by many to be one of the godfathers of AI.

0:13.7

Yoshua works at the University of Montreal in Canada and has been at the forefront of AI research

0:19.8

for many years. But recently, his opinions on the technology

0:23.9

have shifted, and he spends much of his time talking about his views on the potential dangers to

0:30.3

humanity that AI could represent. Yoshua happened to be in London last week, and I went to meet him

0:36.8

along with my colleague

0:37.9

Davide Castelvecchi. Davide spoke to Yoshua about ways to identify and address the risks

0:45.1

posed by AI and his efforts to develop an AI with safety built in from the start.

0:51.6

Yoshua chairs an international panel of advisors in the field of artificial intelligence,

0:56.4

which this year published the International AI Safety Report, which identified three main areas of

1:04.0

risk for the technology: unintended risks from malfunctions, malicious use, and systemic risks, such as the loss of livelihoods.

1:13.9

Davide asked Yoshua which of these areas is the most likely to have a short-term impact

1:19.1

and which keeps him awake at night.

1:22.6

The second one, in other words, malicious use, is already happening, but I think we're only seeing

1:30.5

just the shades of it with things like deep fakes, cyber attacks that are very likely to be

1:37.5

driven by the most recent cyber capabilities of AI. And we need to have much better guardrails to mitigate those risks,

1:47.7

and those guardrails have to be both technical and political.

1:51.4

What keeps me even more awake, of course, is the possibility of human extinction.

1:57.5

That's the extreme malfunction.

1:59.8

That's why I suddenly pivoted my research into the question,

2:06.4

how do we build AI that will not harm humans by design? More broadly, I think it's a mistake

...

