This repository contains a Jupyter Notebook that demonstrates how to build a Retrieval-Augmented Generation (RAG) system using the Llama2 model through the Hugging Face platform. The accompanying `.py` file implements the same RAG pipeline with a graphical user interface (GUI) built with the Streamlit Python library.
The notebook provides a worked example of setting up a RAG system that uses Llama2 to enhance text generation with external knowledge retrieval.
- Implementation of the RAG system using Llama2.
- Integration with Hugging Face libraries.
- Demonstrations of setup and basic usage.
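To make the core idea concrete, here is a minimal, self-contained sketch of the retrieval step at the heart of RAG. The `embed` function below is a deliberately toy bag-of-words stand-in: the notebook itself would use a real embedding model from Hugging Face, and the function and variable names here are illustrative, not taken from the repository code.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# embed() is a toy stand-in for a real embedding model.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words term counts.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Llama2 is a family of large language models released by Meta.",
    "Streamlit turns Python scripts into shareable web apps.",
]
top = retrieve("What is Llama2?", docs)
```

In a real pipeline, the retrieved passages are then injected into the prompt so the language model can ground its answer in them.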
- Python 3.x
- Hugging Face Transformers
- Other dependencies listed in `requirements.txt`
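The exact contents of `requirements.txt` depend on the notebook, but a typical dependency set for this kind of Llama2 + Streamlit RAG stack might look like the following (illustrative only; check the file in the repository for the authoritative list):

```text
transformers
torch
streamlit
```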
- Clone the repository:
  ```bash
  git clone <repository-url>
  ```
- Create a conda environment:
  ```bash
  conda create -n myenv python=3.x
  ```
- Activate your environment:
  ```bash
  conda activate myenv
  ```
- Install the required packages:
  ```bash
  pip install -r requirements.txt
  ```
- Open the notebook in Jupyter Lab or Jupyter Notebook:
  ```bash
  jupyter lab
  ```
Follow the instructions in the notebook to learn how to initialize the system, prepare data, and generate responses using the RAG model.
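As a rough illustration of the "generate responses" step, the snippet below shows one common way to assemble retrieved passages and a user question into a single prompt for the generator. The template and function name are hypothetical; the notebook may format its prompt differently.

```python
# Sketch of combining retrieved passages with a user question into a
# prompt for the generator model. The template is illustrative only.
def build_rag_prompt(question: str, passages: list[str]) -> str:
    # Join the retrieved passages into a bulleted context section.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "Who released Llama2?",
    ["Llama2 is a family of large language models released by Meta."],
)
# The resulting string would then be passed to the Llama2 generation
# pipeline, which completes the text after "Answer:".
```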
- Clone the repository:
  ```bash
  git clone <repository-url>
  ```
- Create a conda environment:
  ```bash
  conda create -n myenv python=3.9
  ```
- Activate your environment:
  ```bash
  conda activate myenv
  ```
- Install the required packages:
  ```bash
  pip install -r requirements.txt
  ```
- Run Streamlit:
  ```bash
  streamlit run llama_app.py
  ```
Contributions are welcome! For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the MIT License - see the LICENSE file for details.