TechCheck

Apple Researchers' AI Red Flag, Plus OpenAI's Revenue Milestone 6/9/25

CNBC

Disruptors, Investing, Faang, Technology, Business, Management, Cnbc, Tech

4.856 Ratings

🗓️ 9 June 2025

⏱️ 7 minutes

Summary

Researchers at Apple are out with a new paper that throws cold water on the biggest trend in AI right now: reasoning models. Scientists there say that beyond a certain point, the models lose all accuracy. Plus, OpenAI has just hit $10 billion in annual recurring revenue, driven by ChatGPT, its consumer business, and APIs.

Transcript

0:00.0

Apple researchers now raising red flags that may be hard for the AI trade to ignore.

0:04.9

In a new paper, they're throwing cold water on the growing trend of reasoning models and whether

0:09.7

they're really more accurate. Deirdre Bosa has more in today's tech check. Please explain, Deirdre.

0:15.0

I will. So in doing this, Apple is essentially calling out not just a growing, but the biggest

0:19.8

trend in AI right now.

0:21.9

And that is reasoning, the ability of models to try and think like humans.

0:25.5

Now, these models are everywhere.

0:26.9

OpenAI, Google, Anthropic, they are all racing to build AI that doesn't just spit out

0:31.4

answers, but shows its work step by step, promising more reliable and logical responses.

0:37.6

But Apple scientists, they're now saying that beyond a certain point, it just doesn't work.

0:41.7

Not only do these models struggle with harder problems, but they actually get worse as the

0:46.0

task gets more complex and they waste computing power overthinking simple tasks.

0:51.3

Here's an example from the paper.

0:53.0

Now, the AI, it's playing a simple version of

0:54.7

checkers, swapping pieces from one side to the other. It starts easy, but the more pieces you add,

0:59.8

the harder it gets. It's a logical puzzle. This chart shows how well AI models handle the puzzle

1:05.6

as it gets harder. At first, they're accurate, but at around eight to ten pieces, everything

1:10.2

just collapses.

1:11.3

Even the most advanced reasoning models, they break down.

1:14.7

So put simply, thinking harder doesn't make these models any smarter.

1:17.8

It just makes them slower, less reliable, and importantly more expensive, too.

1:21.5

They're burning through compute power.

...

Disclaimer: The podcast and artwork embedded on this page are from CNBC, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of CNBC and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.