4.7 • 984 Ratings
🗓️ 6 February 2025
⏱️ 18 minutes
Now researchers say they have trained a cutting-edge AI model for… checks notes… $50. Not $50 million. $50. Dollars. Get ready for the Super Bowl of AI ads. Amazon has scheduled an Alexa AI event. And also, why does Amazon fail so hard when it comes to physical retail?
0:00.0 | Welcome to the Techmeme Ride Home for Thursday, February 6th, 2025. I'm Brian McCullough. Today: researchers say they have trained a cutting-edge AI model for… checks notes… $50. Not $50 million. Get ready for the Super Bowl of AI ads. |
0:21.3 | Amazon has scheduled an Alexa AI event, and also, why does Amazon fail so hard when |
0:26.3 | it comes to physical retail? |
0:28.4 | Here's what you missed today in the world of tech.
0:34.3 | Well, if this holds up, then the whole race to commoditization of the intelligence part of the AI |
0:39.8 | stack is happening faster than I imagined. Stanford and University of Washington AI researchers |
0:44.8 | claim they have trained an AI reasoning model, S1, distilled from a Gemini 2.0 model, for under |
0:51.0 | $50 in cloud compute. So when it comes to model training, $50 million isn't |
0:57.7 | cool. You know what's cool? $50. Quoting Sherwood News, researchers at Stanford and the University |
1:04.0 | of Washington have developed an AI model that could compete with big tech rivals and trained it |
1:08.0 | in 26 minutes for less than $50 in cloud compute credits. |
1:12.0 | In a research paper published last Friday, the new S1 model demonstrated similar performance |
1:16.9 | on tests measuring mathematical problem solving and coding abilities to advanced reasoning |
1:21.7 | models like OpenAI's o1 and DeepSeek's R1. Researchers said that S1 was distilled from Gemini 2.0 Flash Thinking Experimental, |
1:31.6 | one of Google's AI models, and that they used test-time scaling, or presenting a base model |
1:37.2 | with a data set of questions and giving it more time to think before it answers. |
1:41.3 | While this technique is widely used, researchers attempted to achieve the simplest |
1:45.2 | approach through a process called supervised fine-tuning, where the model is explicitly instructed |
1:50.2 | to mimic certain behaviors, end quote. And quoting TechCrunch: To some, the idea that a few |
1:56.6 | researchers without millions of dollars behind them can still innovate in the AI space is exciting. |
2:01.7 | But S1 raises real questions about the commoditization of AI models. Where's the moat? |
2:06.9 | If someone can closely replicate a multi-million dollar model with relative pocket change, |
... |
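The supervised fine-tuning step described in the excerpt above is, at its core, standard distillation on teacher-generated reasoning traces: a smaller "student" model is trained to reproduce the reasoning and answers produced by a stronger model. Below is a minimal sketch of what that looks like with the Hugging Face Trainer; the base model name, the toy dataset, the output path, and the hyperparameters are illustrative placeholders, not the S1 authors' actual configuration.

```python
# Minimal sketch of supervised fine-tuning on teacher reasoning traces
# (the "distillation" step described in the episode). All names and
# hyperparameters here are placeholder assumptions, not the S1 setup.

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder: any small causal LM checkpoint would do for a toy run.
BASE_MODEL = "Qwen/Qwen2.5-0.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Placeholder distillation data: (question, teacher reasoning + answer) pairs
# that would normally be generated by the stronger "teacher" model.
examples = [
    {
        "question": "What is 17 * 24?",
        "teacher_trace": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408. Answer: 408",
    },
]

def to_features(example):
    # Concatenate question and teacher trace; the student learns to
    # reproduce the teacher's reasoning and answer token by token.
    text = f"Question: {example['question']}\nReasoning: {example['teacher_trace']}"
    tokens = tokenizer(text, truncation=True, max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # standard causal-LM labels
    return tokens

dataset = Dataset.from_list(examples).map(
    to_features, remove_columns=["question", "teacher_trace"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="s1-style-sft",      # placeholder output path
        num_train_epochs=1,
        per_device_train_batch_size=1,  # batch size 1 sidesteps padding here
        learning_rate=1e-5,
        logging_steps=1,
    ),
    train_dataset=dataset,
)

trainer.train()
```

A batch size of one keeps the sketch free of padding concerns; a real fine-tuning run would use a padding data collator and a much larger set of curated questions with teacher-generated traces.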