Welcome to the RAG-Projects repository! This collection contains various projects and tutorials centered around Retrieval-Augmented Generation (RAG) techniques using LangChain. Each project showcases a different aspect or application of LangChain in building intelligent systems, including chatbots, document Q&A systems, and more.
- Introduction
- Projects
- LangChain - What We Will Learn And Demo Projects
- Building Chatbots Using Paid And Open-Source LLMs
- Production Grade Deployment LLM As API
- Getting Started With RAG Pipeline
- Advanced RAG Q&A Chatbot
- Advanced RAG Q&A Project With Multiple Data Sources
- End To End Advanced RAG Project
- Building Gen AI Powered App
- Powerful Document Q&A Chatbot
- Advanced Q&A Chatbot Using RAGStack
- On-Device AI: RAG Using ObjectBox Vector Database
- RAG With OpenAI GPT-4o Model
- Hugging Face x LangChain
- Document Q&A RAG App With Gemma
- Hybrid Search RAG With Pinecone Vector DB
- Contributing
- License
This repository is dedicated to exploring the capabilities of LangChain for creating advanced Retrieval-Augmented Generation (RAG) applications. The projects cover a wide range of use cases, from building chatbots and deploying APIs to implementing complex Q&A systems.
An introductory project providing an overview of LangChain's features, with several demo applications to get you started.
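As a taste of those demos, here is a minimal LangChain chain sketch (prompt, chat model, and output parser composed with the LCEL pipe operator). It assumes the langchain-openai package is installed and an OPENAI_API_KEY is set; the model name is illustrative, not the one used in the project.

```python
# Minimal LangChain "prompt | model | parser" chain (LCEL).
# Assumes `pip install langchain langchain-openai` and OPENAI_API_KEY in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{question}"),
])
llm = ChatOpenAI(model="gpt-3.5-turbo")          # illustrative model name
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What is Retrieval-Augmented Generation?"}))
```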
A comprehensive guide to creating chatbots with both paid and open-source Large Language Models (LLMs) using LangChain and Ollama.
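The core idea, sketched below under the assumption that Ollama is running locally with a model already pulled: the same prompt pipeline can target a paid API model or a local open-source model by swapping a single component.

```python
# Swap a paid LLM for a local open-source one behind the same chain.
# Assumes `ollama pull llama2` has been run and the Ollama server is up.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI          # paid API model
from langchain_community.llms import Ollama      # local open-source model

prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
parser = StrOutputParser()

paid_chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | parser
open_chain = prompt | Ollama(model="llama2") | parser

print(open_chain.invoke({"question": "What is LangChain?"}))
```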
Learn how to deploy LangChain-based LLM applications as production-grade APIs using FastAPI for robust, scalable serving.
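One common deployment pattern, used here as an illustrative sketch rather than the project's exact code, is to mount a chain on a FastAPI app via LangServe's add_routes, which exposes invoke, stream, and batch endpoints plus a playground UI.

```python
# Serve a LangChain runnable as a REST API with FastAPI + LangServe.
# Assumes `pip install fastapi uvicorn "langserve[all]" langchain-openai`.
from fastapi import FastAPI
from langserve import add_routes
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

app = FastAPI(title="LangChain Server", version="1.0")

prompt = ChatPromptTemplate.from_template("Write a short essay about {topic}.")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo")

# Exposes /essay/invoke, /essay/stream, /essay/batch and /essay/playground.
add_routes(app, chain, path="/essay")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000)
```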
A beginner's guide to setting up a RAG pipeline using LangChain, Chroma, and FAISS for efficient information retrieval.
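A minimal sketch of that pipeline's shape: load, split, embed, index, retrieve. The source URL, chunk sizes, and embedding model are placeholders, and Chroma can be dropped in for FAISS with the same from_documents call.

```python
# Basic RAG ingestion + retrieval: load, split, embed, index with FAISS.
# Assumes `pip install langchain langchain-community langchain-openai faiss-cpu beautifulsoup4`.
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

docs = WebBaseLoader("https://docs.smith.langchain.com/").load()   # placeholder source
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

for doc in retriever.invoke("What is LangSmith?"):
    print(doc.page_content[:200])
```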
Develop an advanced Q&A chatbot built on LangChain's chains and retrievers for grounded, context-aware answers.
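A possible skeleton for such a chatbot, assuming the standard create_stuff_documents_chain and create_retrieval_chain helpers; the tiny in-memory corpus exists only to keep the sketch self-contained.

```python
# Retrieval chain: stuff retrieved chunks into a prompt and answer with an LLM.
# The two-document "corpus" is a stand-in so the sketch runs on its own.
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

docs = [
    Document(page_content="LangSmith helps trace and evaluate LLM applications."),
    Document(page_content="LangChain provides chains and retrievers for RAG."),
]
retriever = FAISS.from_documents(docs, OpenAIEmbeddings()).as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n\n{context}\n\nQuestion: {input}"
)
document_chain = create_stuff_documents_chain(ChatOpenAI(model="gpt-3.5-turbo"), prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)

print(retrieval_chain.invoke({"input": "What is LangSmith used for?"})["answer"])
```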
Explore techniques for building a sophisticated RAG Q&A project that integrates multiple data sources using LangChain.
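One way to wire multiple sources, sketched here with Wikipedia and Arxiv (a vector-store retriever can be added the same way via create_retriever_tool): wrap each source as a tool and let a tools-calling agent choose between them. The hub prompt id and model name are illustrative, not necessarily what the project uses.

```python
# Let an agent route questions across multiple data sources exposed as tools.
# Assumes `pip install langchain langchain-community langchain-openai langchainhub wikipedia arxiv`.
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_community.tools import ArxivQueryRun, WikipediaQueryRun
from langchain_community.utilities import ArxivAPIWrapper, WikipediaAPIWrapper
from langchain_openai import ChatOpenAI

wiki = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=300))
arxiv = ArxivQueryRun(api_wrapper=ArxivAPIWrapper(top_k_results=1))
tools = [wiki, arxiv]

prompt = hub.pull("hwchase17/openai-functions-agent")   # a stock tools-agent prompt
agent = create_openai_tools_agent(ChatOpenAI(model="gpt-3.5-turbo"), tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

print(executor.invoke({"input": "What does the paper 'Attention Is All You Need' propose?"}))
```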
An in-depth project demonstrating an end-to-end RAG application built with open-source LLMs and the Groq inference engine.
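A short sketch of calling a Groq-hosted open-source model through LangChain; it assumes the langchain-groq package and a GROQ_API_KEY, and the model name is illustrative.

```python
# Call a Groq-hosted open-source model through LangChain.
# Assumes `pip install langchain langchain-groq` and GROQ_API_KEY in the environment.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama3-8b-8192")   # illustrative; Groq also serves Mixtral, Gemma, etc.
chain = (
    ChatPromptTemplate.from_template("Explain {topic} in two sentences.")
    | llm
    | StrOutputParser()
)
print(chain.invoke({"topic": "retrieval-augmented generation"}))
```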
Create a generative AI-powered application with LangChain, Hugging Face, and Mistral.
Build a powerful document Q&A chatbot using Llama 3, LangChain, and the Groq API, designed for high performance and accuracy.
Implement an advanced Q&A chatbot using RAGStack with a vector-enabled Astra DB Serverless database and Hugging Face models.
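A hedged sketch of the storage side, assuming Astra DB credentials in the environment; RAGStack pins compatible versions of these integrations, but the same classes are also available as standalone packages. Collection and model names are placeholders.

```python
# Vector store on Astra DB Serverless with Hugging Face embeddings.
# Assumes `pip install ragstack-ai sentence-transformers` (or langchain-astradb + langchain-huggingface)
# and ASTRA_DB_API_ENDPOINT / ASTRA_DB_APPLICATION_TOKEN in the environment.
import os
from langchain_astradb import AstraDBVectorStore
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_core.documents import Document

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vstore = AstraDBVectorStore(
    embedding=embeddings,
    collection_name="rag_demo",                         # placeholder collection name
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
)
vstore.add_documents([Document(page_content="Astra DB is a serverless vector database.")])
print(vstore.similarity_search("What is Astra DB?", k=1))
```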
Get started with on-device AI by building a RAG system using ObjectBox Vector Database and LangChain.
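A rough sketch of the on-device flavour. The class and parameter names below follow the langchain-objectbox integration package and may differ between versions, so treat this as an outline rather than a reference.

```python
# On-device vector store with ObjectBox - no external database service required.
# Assumes `pip install langchain-objectbox langchain-huggingface sentence-transformers`;
# API details follow the langchain-objectbox package and may vary by version.
from langchain_objectbox.vectorstores import ObjectBox
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_core.documents import Document

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
docs = [Document(page_content="ObjectBox stores vectors locally on the device.")]

# embedding_dimensions must match the embedding model (384 for all-MiniLM-L6-v2).
vectorstore = ObjectBox.from_documents(docs, embeddings, embedding_dimensions=384)
print(vectorstore.similarity_search("Where are the vectors stored?", k=1))
```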
Construct a RAG application with OpenAI's GPT-4o (omni) model, incorporating the ObjectBox vector database for enhanced performance.
Explore the new LangChain partner package for Hugging Face, bringing state-of-the-art models into your LangChain projects.
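Two things the partner package provides, sketched with illustrative model ids: Hub-hosted text generation via HuggingFaceEndpoint and local sentence embeddings via HuggingFaceEmbeddings. A HUGGINGFACEHUB_API_TOKEN is assumed for the endpoint call.

```python
# The langchain-huggingface partner package: Hub-hosted LLMs and local embeddings.
# Assumes `pip install langchain-huggingface sentence-transformers` and HUGGINGFACEHUB_API_TOKEN.
from langchain_huggingface import HuggingFaceEndpoint, HuggingFaceEmbeddings

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",   # illustrative model id
    max_new_tokens=128,
    temperature=0.7,
)
print(llm.invoke("What is Hugging Face?"))

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
print(len(embeddings.embed_query("hello world")))   # 384-dimensional vector
```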
Develop an end-to-end document Q&A RAG application using Gemma and the Groq API for efficient and accurate document processing.
Implement hybrid search in your RAG applications using LangChain and the Pinecone vector database, combining dense embeddings with sparse keyword scoring for more robust retrieval.
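A sketch of the hybrid retriever setup, assuming a Pinecone index created with the dotproduct metric (required for sparse values); the index name and embedding model are placeholders.

```python
# Hybrid (dense + sparse) retrieval with Pinecone.
# Assumes `pip install pinecone-client pinecone-text langchain-community langchain-huggingface sentence-transformers`
# and a Pinecone index created with metric="dotproduct".
import os
from pinecone import Pinecone
from pinecone_text.sparse import BM25Encoder
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.retrievers import PineconeHybridSearchRetriever

pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("hybrid-search-demo")                # placeholder index name

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
bm25 = BM25Encoder().default()                        # pre-fitted English BM25 weights

retriever = PineconeHybridSearchRetriever(
    embeddings=embeddings, sparse_encoder=bm25, index=index
)
retriever.add_texts(["LangChain supports hybrid search with Pinecone."])
print(retriever.invoke("Which vector DB supports hybrid search?"))
```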
We welcome contributions from the community! If you would like to contribute, please follow these guidelines:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Commit your changes and push to your branch.
- Submit a pull request detailing your changes.
This repository is licensed under the MIT License. See the LICENSE file for more details.