
Sora 2's disinformation problem

Marketplace Tech

American Public Media

Technology, News

4.6 • 1.2K Ratings

🗓️ 4 November 2025

⏱️ 9 minutes


Summary

OpenAI’s latest AI video generator, Sora 2, has gotten a lot of attention for its realistic creations. The tool is supposed to have guardrails to prevent it from creating videos based on misinformation.


But a new analysis from the watchdog group NewsGuard found that, when prompted, Sora 2 often generated videos based on lies, such as false claims about election fraud in a foreign country or that a toddler was detained by immigration agents.


Marketplace’s Nova Safo spoke with Sofia Rubinson, senior editor at NewsGuard, to learn more.

Transcript


0:00.0

Online misinformation turned into deceptively realistic AI-generated videos.

0:07.5

From American Public Media, this is Marketplace Tech. I'm Nova Safo. OpenAI's latest AI video generator, Sora 2,

0:24.2

has gotten a lot of attention for how realistic its creations are.

0:28.3

The tool is supposed to have guardrails to prevent creating videos based on misinformation.

0:34.0

But new analysis from watchdog group NewsGuard found that, when prompted, Sora 2 often generated

0:40.0

videos based on lies, false claims having to do with election fraud in a foreign country, for

0:45.1

instance, or that a toddler was detained by immigration agents. We spoke with NewsGuard's senior

0:51.3

editor, Sofia Rubinson. We have a very high bar at NewsGuard for what claims we enter into our database.

0:58.2

They have to be provably false, and there has to be an abundance of evidence proving that they are in fact not true.

1:05.7

OpenAI does say in their guidelines that they prohibit the use of their software to produce videos that will

1:13.2

mislead. However, obviously, we found that in 80% of the cases that we tested, it did produce

1:19.3

videos on claims that are provably false. We reached out to OpenAI to ask about our findings,

1:24.9

and they just pointed us to that policy saying that this is against

1:28.5

their guidelines. I will say that in four out of the 20 cases we tested, it refused to produce

1:34.7

videos corresponding with those claims. It wasn't exactly clear why it rejected some prompts

1:42.3

and not others. One hypothesis I have is that it had to do

1:46.8

more with the violent images. So one of the claims that we asked it to produce an image for or a video

1:53.2

for was the claim that National Guardsmen were pepper-spraying protesters at a recent rally,

2:00.7

and that one it refused to produce.

2:03.4

So it seems that some of the more violent images tended to trigger the guardrails of Sora,

2:08.3

whereas claims that were blatantly false, but perhaps weren't as violent,

2:12.6

it was able to produce those videos.

...

