In this video, we’re taking our RAG architecture to intergalactic levels!
Building on our previous video ([ Link ]), we’re removing dependencies on PaaS components like Document Intelligence and Azure AI Search. Instead, we’re using LangChain to process PDF documents and the open-source vector database Chroma DB as our vector store.
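To illustrate the workflow described above without pulling in LangChain, Chroma, or an embedding model, here is a toy, dependency-free sketch of the same pipeline: split text into overlapping chunks, embed each chunk, store the vectors, and retrieve the most similar chunks for a query. The splitter, bag-of-words "embedding", and in-memory store are simplified stand-ins (assumptions for illustration), not the actual LangChain or Chroma APIs — see the blog post linked below for the real code.

```python
import math
from collections import Counter

def split_text(text, chunk_size=200, overlap=50):
    # Toy stand-in for LangChain's text splitter: fixed-size character
    # windows with overlap, so content that straddles a chunk boundary
    # still appears whole in at least one chunk.
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text):
    # Toy bag-of-words "embedding" standing in for a real embedding
    # model; real embeddings are dense float vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    # Stand-in for Chroma DB: stores (embedding, chunk) pairs and
    # returns the top-k chunks most similar to a query.
    def __init__(self):
        self.rows = []

    def add(self, chunks):
        for c in chunks:
            self.rows.append((embed(c), c))

    def query(self, question, k=2):
        q = embed(question)
        scored = sorted(self.rows, key=lambda r: cosine(r[0], q), reverse=True)
        return [chunk for _, chunk in scored[:k]]
```

In a real RAG setup, the retrieved chunks are then passed to the LLM as grounding context along with the user’s question.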
Timeline:
00:00 - Introduction and Overview
00:31 - Previous RAG Implementation Issues
01:25 - Importance of Using LLMs with Company Data
02:15 - Workflow of RAG in Microsoft Fabric
03:32 - Eliminating Azure Dependencies
04:03 - Installing LangChain and Required Packages
05:01 - Preparing PDF Documents
05:23 - Setting Up the Environment
06:00 - Creating Embeddings and Text Splitter
07:01 - Using ChromaDB for Vector Storage
08:14 - Querying the Vector Database
09:16 - Example Queries and Results
11:11 - Finalizing the RAG Implementation
13:00 - Advantages of the Current Solution
14:44 - Future Plans and Resources
Link to our blog for the code used in this video: [ Link ]