In this episode, Alex discusses the recent update from the Google Gemini team, focusing on Gemini and Gemma. Gemma is Google's family of lightweight open models for generative AI, while Gemini is Google's flagship AI model. Gemma is designed to be more accessible and agile, with smaller models that require less computational power.

The update covers Gemma 2, the latest addition to the Gemma family, and Gemini 1.5, which now offers open access to a 2 million token context window.

Alex explains that tokens are the fundamental building blocks AI models use to understand and process language, while parameters are the numerical values a model learns during training. The context window is the amount of information the model can take into account while generating text. Gemini's context window has now doubled to 2 million tokens, with a theoretical maximum of 10 million. Alex explores the possible interpretations of the extended and maximum context windows and highlights why understanding these differences matters for developers and users.
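To give a feel for the scale of a 2 million token context window, here is a rough back-of-the-envelope sketch. It uses common rules of thumb (roughly 4 characters and 0.75 words per English token); these ratios are illustrative assumptions, not Gemini's actual tokenizer.

```python
# Illustrative heuristics only -- real tokenizers vary by model and language.
CHARS_PER_TOKEN = 4      # rough average for English prose
WORDS_PER_TOKEN = 0.75   # rough average for English prose

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def context_capacity_words(context_tokens: int) -> int:
    """Approximate how many English words fit in a given context window."""
    return int(context_tokens * WORDS_PER_TOKEN)

if __name__ == "__main__":
    sample = "Tokens are the fundamental building blocks AI models process."
    print(estimate_tokens(sample))            # roughly 15 tokens for this sentence
    print(context_capacity_words(2_000_000))  # ~1,500,000 words in a 2M-token window
```

By this estimate, a 2 million token window holds on the order of 1.5 million English words, and the theoretical 10 million token maximum mentioned in the episode would be several times that again.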
![](https://i.ytimg.com/vi/XGGW8zQf1zo/maxresdefault.jpg)