Introduction to RAG (Retrieval-Augmented Generation) with LangChain & Ollama. Includes PDF guide and hands-on Jupyter notebooks for practical learning.
| Section | Description |
| --- | --- |
| 1. 🔎 Retrieval-Augmented Generation (RAG) | Definition and overview<br>Why it goes beyond a stand-alone LLM |
| 2. 💡 Concept | Core meaning of RAG<br>Key benefits and features |
| 3. ⚙️ How to Build? | Guardrails<br>Caching<br>Monitoring<br>Evaluation<br>LLM Security |
| 4. 🏗 RAG System Design (Overview) | Input/Output orchestrator<br>Retriever<br>Data load & splitting<br>Data conversion<br>Storage component<br>LLM setup<br>Data indexing<br>Prompt management |
| 5. 👀 Retriever: Indexing Pipeline | Data load & splitting<br>Data conversion<br>Storage (e.g., FAISS)<br>*(see the indexing sketch below the table)* |
| 6. 🧩 Generation Pipeline | Retriever: Query analysis & information retrieval<br>Prompt Management: Contextual, few-shot, controlled, chain of thought<br>LLM: Model configuration & generation flow<br>*(see the generation sketch below the table)* |
| 7. 🔧 Hands-On | Two Jupyter notebooks covering a basic RAG workflow |
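The indexing pipeline from section 5 can be sketched roughly as follows. This is a minimal illustration, not the code from the course notebooks; it assumes the `langchain-community`, `langchain-ollama`, `langchain-text-splitters`, `pypdf`, and `faiss-cpu` packages, a running Ollama server, and placeholder file and model names.

```python
# Indexing pipeline sketch: load -> split -> embed -> store (FAISS)
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_ollama import OllamaEmbeddings
from langchain_community.vectorstores import FAISS

# 1. Data load: read a PDF into LangChain Document objects
docs = PyPDFLoader("my_document.pdf").load()  # placeholder file name

# 2. Data splitting: break the documents into overlapping chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 3. Data conversion: embed each chunk with a local Ollama embedding model
embeddings = OllamaEmbeddings(model="nomic-embed-text")  # placeholder model name

# 4. Storage: index the chunks in a FAISS vector store and persist it to disk
vectorstore = FAISS.from_documents(chunks, embeddings)
vectorstore.save_local("faiss_index")
```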
Note: The two Jupyter notebooks in the Hands-On section are the primary hands-on exercises for a basic RAG workflow.
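The generation pipeline from section 6 can be sketched in the same spirit, reusing the FAISS index built above. Again, this is only an illustrative sketch with placeholder model names, not the notebooks' exact code.

```python
# Generation pipeline sketch: retrieve -> build a contextual prompt -> generate with Ollama
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama, OllamaEmbeddings

# Retriever: reload the FAISS index built by the indexing pipeline above
embeddings = OllamaEmbeddings(model="nomic-embed-text")  # placeholder model name
vectorstore = FAISS.load_local(
    "faiss_index", embeddings,
    allow_dangerous_deserialization=True,  # required by recent langchain-community for pickle-based loading
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})  # fetch the top-4 chunks per query

# Prompt management: a simple contextual prompt (few-shot / chain-of-thought variants follow the same pattern)
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

# LLM setup: a local chat model served by Ollama
llm = ChatOllama(model="llama3", temperature=0)  # placeholder model name

# Generation flow: retrieve relevant chunks, fill the prompt, and generate an answer
question = "What is retrieval-augmented generation?"
context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
answer = llm.invoke(prompt.format_messages(context=context, question=question))
print(answer.content)
```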
Source Repository: conect2ai/rag_course
License: MIT License - Feel free to use, modify, and share