🗓️ 9 July 2025
⏱️ 24 minutes
| 0:00.0 | So let's get started though with marking my work for this year. So the first prediction was that |
| 0:07.1 | there would be no AI wall. There was this battle between whether scaling AI models was still a |
| 0:14.6 | strategy that would work, that would deliver results, whether they could be actually even |
| 0:19.2 | built to that size or not. My second prediction was about the |
| 0:24.1 | speed of deployment, making certain predictions about how fast things would spread and what would |
| 0:30.2 | happen to the price of tokens. I made a prediction that bots would outtalk humans this year |
| 0:37.2 | in the production of natural language. I also predicted |
| 0:41.2 | that Waymo would overtake Uber in San Francisco. I noted that I expected climate extremes to |
| 0:47.0 | intensify significantly and that alongside this and despite the change in the political |
| 0:53.5 | environment, renewable deployment, particularly of solar, |
| 0:58.8 | would continue to surprise to the upside. |
| 1:03.1 | And alongside that, that, again, despite changes to the political environment, |
| 1:08.6 | electric vehicles would significantly shift up a gear in their market. |
| 1:12.6 | So there were seven predictions, and I had some watchlist themes around geopolitical volatility, demographic decline, and climate and capital, which I didn't have, you know, |
| 1:20.1 | strong tests against. But let's start with the first one, which was that there would be no AI |
| 1:25.3 | wall. And I said, look, research is accelerating, |
| 1:27.9 | not plateauing. And we would likely see a 10 million token context model and reasoning breakthroughs |
| 1:36.0 | across some of these reasoning benchmarks. Now, both of those matter because the context window |
| 1:40.9 | of a model is a little bit like its working memory. It's a bit shonky as a |
| 1:46.6 | working memory, but it's the bit that you put into your LLM and it can manipulate back and forth. |
| 1:52.7 | And when you get to the end of the context window, it tends to hallucinate much, much more. |
| 1:57.5 | And the reasoning tests like FrontierMath and ARC-AGI are very useful tests for |
... |
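The working-memory analogy above can be sketched in code. This is a minimal, hypothetical illustration (the crude whitespace token counter and the window sizes are assumptions for the sketch, not anything described in the episode): content that overflows the context window falls out of the model's "working memory", which is why quality degrades near the limit.

```python
# Illustrative sketch of a context window as bounded working memory.
# Token counts here are a rough whitespace approximation, not a real tokenizer.

def count_tokens(text: str) -> int:
    """Crude token estimate: one token per whitespace-separated word."""
    return len(text.split())

def fits_in_context(prompt: str, context_window: int) -> bool:
    """True if the prompt fits within the model's context window."""
    return count_tokens(prompt) <= context_window

def truncate_to_window(prompt: str, context_window: int) -> str:
    """Keep only the most recent tokens that fit, mimicking how the
    oldest content falls out of the model's working memory."""
    tokens = prompt.split()
    return " ".join(tokens[-context_window:])

# A hypothetical 8-token window versus a 10-million-token one:
history = "one two three four five six seven eight nine ten"
print(fits_in_context(history, 8))            # False: overflows the small window
print(truncate_to_window(history, 8))         # oldest tokens are dropped
print(fits_in_context(history, 10_000_000))   # True: easily fits
```

A 10-million-token context, as predicted, would make this truncation step unnecessary for almost any practical input.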
Disclaimer: The podcast and artwork embedded on this page are from EPIIPLUS 1 Ltd / Azeem Azhar, and are the property of its owner and not affiliated with or endorsed by Tapesearch.
Generated transcripts are the property of EPIIPLUS 1 Ltd / Azeem Azhar and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
Copyright © Tapesearch 2025.