4.9 • 848 Ratings
🗓️ 7 May 2025
⏱️ 51 minutes
🔗️ Recording | iTunes | RSS
🧾️ Download transcript
Explains advancements in large language models (LLMs): scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. Covers the evolution of the Transformer architecture with Mixture of Experts (MoE), describes the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and explores advanced reasoning techniques such as chain-of-thought prompting, which significantly improve performance on complex tasks.
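As a rough illustration of the scaling-law idea mentioned in the summary, the sketch below encodes a Chinchilla-style parametric loss, L(N, D) = E + A/N^alpha + B/D^beta, where N is the parameter count and D is the number of training tokens. The functional form follows the scaling-law literature, but the constants here are illustrative placeholders rather than fitted values, and the function name is made up for this example.

```python
# Hedged sketch of a Chinchilla-style scaling law: predicted pretraining loss
# as a function of model size N (parameters) and data size D (tokens).
# The constants are illustrative placeholders, not fitted values from any paper.

def scaling_law_loss(n_params: float, n_tokens: float,
                     E: float = 1.7,      # irreducible loss floor (illustrative)
                     A: float = 400.0, alpha: float = 0.34,
                     B: float = 410.0, beta: float = 0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Growing either the model or the dataset lowers the predicted loss,
# with diminishing returns as each term shrinks toward the floor E.
print(scaling_law_loss(1e9, 2e10))    # ~1B params trained on ~20B tokens
print(scaling_law_loss(7e9, 1.4e11))  # ~7B params trained on ~140B tokens
```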
0:00.0 | Welcome back to Machine Learning Guide, Episode 34: Large Language Models, or LLMs. |
0:08.9 | Now, I have three episodes, starting at Episode 18, on foundational material for natural |
0:16.5 | language processing in general. This includes things like bag of words, TF-IDF, tokenization. I have two |
0:24.0 | episodes on deep natural language processing that introduce the concepts of recurrent neural |
0:28.6 | networks, embeddings. We'll talk about embeddings a little bit in this episode. And then the last |
0:33.8 | MLG episode 33 was Transformers. And really, if you want to understand the true |
0:41.2 | core of LLMs, you have to understand Transformers. That episode, Transformers, is the real |
0:48.7 | heart and engine of LLMs, so listen to it if you want to understand LLMs properly. |
0:55.1 | This episode's going to add a little bit of what we may have missed that goes into modern |
1:01.2 | day large language models. What are some of the new, interesting techniques being |
1:05.1 | experimented with, and where are things going in the future? So this is to fill in the knowledge |
1:09.8 | gaps on things that we haven't |
1:11.5 | covered and to wrap up the LLMs package. |
1:15.8 | The 2017 white paper "Attention Is All You Need" introduced the Transformer architecture. |
1:22.9 | Transformers being a sequence of attention layers, especially self-attention layers, followed by feed-forward |
1:31.2 | layers and various other components in that stack. |
1:35.0 | Again, listen to the last episode on Transformers, if you want to understand it. |
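As a companion to the description above, here is a minimal sketch of one Transformer block, self-attention followed by a feed-forward network, assuming PyTorch. The hyperparameters are illustrative, the causal mask used for autoregressive language modeling is omitted for brevity, and this is not the code behind any particular model.

```python
# Minimal sketch of one Transformer block: self-attention followed by a
# feed-forward network, each wrapped with a residual connection and layer norm.
# Assumes PyTorch; hyperparameters are illustrative, not from any specific LLM.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention: every position attends to every other position.
        # (A causal mask would be added here for autoregressive LMs.)
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + attn_out)
        # Position-wise feed-forward network applied to each token.
        x = self.norm2(x + self.ff(x))
        return x

# One forward pass over a batch of 2 sequences, 16 tokens each.
tokens = torch.randn(2, 16, 512)
print(TransformerBlock()(tokens).shape)  # torch.Size([2, 16, 512])
```

Stacking many such blocks, together with token and position embeddings and an output projection, gives the decoder-only architecture that most modern LLMs use.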
1:38.2 | And the reason for the Transformer architecture is that, unlike RNNs, which have to predict next tokens and be |
1:47.8 | trained sequentially due to the nature of the architecture, the Transformer architecture |
1:53.2 | takes the attention mechanism that was introduced with RNNs and applies it to a more traditional |
1:59.5 | deep learning architecture that can train on these sequences in parallel and run them in parallel. |
2:05.6 | It improved the throughput of training and running these models at scale. |
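To make the parallelism point concrete, the sketch below contrasts the two regimes: an RNN-style recurrence must step through tokens one at a time because each hidden state depends on the previous one, while self-attention scores every pair of positions in a single matrix multiply. This is a toy NumPy illustration; the shapes and weights are made-up assumptions.

```python
# Sketch contrasting sequential RNN-style processing with parallel self-attention.
# Pure NumPy; the shapes and random weights are illustrative assumptions.
import numpy as np

seq_len, d = 6, 4
x = np.random.randn(seq_len, d)     # one sequence of 6 token embeddings

# RNN-style: each hidden state depends on the previous one, so the sequence
# must be processed step by step; there is no parallelism across time.
W_h, W_x = np.random.randn(d, d), np.random.randn(d, d)
h = np.zeros(d)
for t in range(seq_len):
    h = np.tanh(h @ W_h + x[t] @ W_x)

# Self-attention: queries, keys, and values for ALL positions are computed
# at once, and one matrix multiply scores every token against every other.
W_q, W_k, W_v = (np.random.randn(d, d) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)                                      # (seq_len, seq_len)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
attended = weights @ V                                             # all positions updated in parallel
print(attended.shape)                                              # (6, 4)
```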
... |