To try everything Brilliant has to offer, free, for a full 30 days, visit [ Link ]. You'll also get 20% off an annual premium subscription.
37% Better Output with 15 Lines of Code - Llama 3 8B (Ollama) & 70B (Groq)
GitHub Project:
[ Link ]
👊 Become a member and get access to GitHub and Code:
[ Link ]
🤖 Great AI Engineer Course:
[ Link ]
📧 Join the newsletter:
[ Link ]
🌐 My website:
[ Link ]
In this video I try to fix a known problem when using RAG with local models like Llama 3 8B on Ollama. This local RAG system was improved by adding just around 15 lines of code. Feel free to share and star the project on GitHub :)
00:00 Llama 3 Improved RAG Intro
02:01 Problem / Solution
03:05 Brilliant.org
04:26 How this works
12:05 Llama 3 70B Groq
15:12 Conclusion