A RAG (Retrieval Augmented Generation) setup for further exploration of chatting with company documents
NB: This repo is tested on a Windows platform
- Clone this repo to a folder of your choice
- In a folder of your choice, create a file named ".env"
- When using the OpenAI API, enter your OpenAI API key in the first line of this file:
OPENAI_API_KEY="sk-....."
- If you don't have an OpenAI API key yet, you can obtain one here: https://platform.openai.com/account/api-keys
- Click on + Create new secret key
- Enter an identifier name (optional) and click on Create secret key
- When using Azure OpenAI Services, enter the variable AZURE_OPENAI_API_KEY="....." in the .env file
The value of this variable can be found in your Azure OpenAI Services subscription
- In case you want to use one of the open source model APIs that are available on Huggingface, enter your Huggingface API key in the ".env" file:
HUGGINGFACEHUB_API_TOKEN="hf_....."
- If you don't have a Huggingface API key yet, you can register at https://huggingface.co/join
- When registered and logged in, you can get your API key in your Huggingface profile settings
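Depending on which of the providers above you use, the resulting .env file could look like the sketch below (the values are placeholders; only the keys for the providers you actually use are needed):

```
OPENAI_API_KEY="sk-....."
AZURE_OPENAI_API_KEY="....."
HUGGINGFACEHUB_API_TOKEN="hf_....."
```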
- This repository also allows for using one of the Ollama open source models on-premise. You can do this by following the steps below:
- In Windows go to "Turn Windows features on or off" and check the features "Virtual Machine Platform" and "Windows Subsystem for Linux"
- Download and install the Ubuntu Windows Subsystem for Linux (WSL) by opening a terminal window and typing
wsl --install
- Start WSL by opening a terminal and typing
wsl
and install Ollama with
curl -fsSL https://ollama.com/install.sh | sh
- When you decide to use a local LLM and/or embedding model, make sure that the Ollama server is running by:
- opening a terminal and typing
wsl
- starting the Ollama server with
ollama serve
This makes any downloaded models accessible through the Ollama API
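As a quick sanity check (not part of this repo), you can verify from Windows that the Ollama server running inside WSL is reachable on its default port 11434, for example with a few lines of Python (assuming the requests package is available in your environment):

```python
# Hypothetical check that the Ollama server is up; 11434 is Ollama's default port.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
print(resp.json())  # lists the models that have been pulled into Ollama so far
```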
- Open an Anaconda prompt or other command prompt
- Go to the root folder of the project and create a Python environment with conda with
conda env create -f appl-docchat.yml
NB: The name of the environment is appl-docchat by default. It can be changed to a name of your choice in the first line of the file appl-docchat.yml
- Activate this environment with
conda activate appl-docchat
- Alternatively, if you prefer pip over conda, open an Anaconda prompt or other command prompt
- Go to the root folder of the project and create a Python virtual environment with
python -m venv venv
This will create a basic virtual environment folder named venv in the root of your project folder.
NB: The environment folder is named venv here; it can be changed to a name of your choice
- Activate this environment with
venv\Scripts\activate
- All required packages can now be installed with
pip install -r requirements.txt
- If you would like to run unit tests, you also need to run
pip install -e appl-docchat
The file settings_template.py contains all parameters that can be used and needs to be copied to settings.py. In settings.py, fill in the parameter values you want to use for your use case. Examples and restrictions for parameter values are given in the comment lines
When the NLTKTextSplitter is used for chunking the documents, it is necessary to download the punkt and punkt_tab modules of NLTK.
This can be done in the activated environment by starting a Python interactive session (type python and press Enter). Once in the Python session, type import nltk, then nltk.download('punkt'), and finally nltk.download('punkt_tab'), each followed by Enter.
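If you prefer not to use an interactive session, the same downloads can be done with a small one-off script run in the activated environment, for example:

```python
# Downloads the NLTK resources needed by the NLTKTextSplitter.
import nltk

nltk.download('punkt')
nltk.download('punkt_tab')
```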
This repo allows reranking the retrieved documents from the vector store by using the FlashRank reranker. The very first use will download and unzip the required model (as indicated in settings.py) from the HuggingFace platform. For more information on the FlashRank reranker, see https://github.com/PrithivirajDamodaran/FlashRank
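For illustration only, a standalone FlashRank call looks roughly like the sketch below; in this repo the reranker is configured through settings.py rather than called like this, and the model name and passages shown are just examples:

```python
# Hypothetical standalone use of the FlashRank reranker (not this repo's code).
from flashrank import Ranker, RerankRequest

ranker = Ranker(model_name="ms-marco-MiniLM-L-12-v2")  # example model; see settings.py
passages = [
    {"id": 1, "text": "Ollama runs open source models locally."},
    {"id": 2, "text": "The vector store contains chunked document texts."},
]
results = ranker.rerank(RerankRequest(query="Where are document chunks stored?", passages=passages))
print(results)  # passages sorted by relevance score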
The file ingest.py can be used to vectorize all documents in a chosen folder and store the vectors and texts in a vector database for later use.
Execution is done in the activated virtual environment with python ingest.py
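Conceptually, ingestion boils down to load, chunk, embed, store. The sketch below illustrates that idea with LangChain and a Chroma vector store; it is not the repo's actual implementation (ingest.py reads its choices from settings.py), the file paths are hypothetical, and the exact import paths depend on the LangChain version pinned in requirements.txt:

```python
# Hypothetical illustration of an ingestion pipeline, not ingest.py itself.
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma

docs = PyPDFLoader("docs/example_folder/report.pdf").load()  # hypothetical document
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)
Chroma.from_documents(chunks, OpenAIEmbeddings(), persist_directory="vector_stores/example_folder")
```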
The file query.py can be used to query any folder with documents, provided that the associated vector database exists.
Execution is done in the activated virtual environment with python query.py
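Again purely as an illustration of the idea (query.py itself is configured through settings.py), retrieving an answer from a persisted vector store with LangChain could look like the sketch below; names and paths are hypothetical:

```python
# Hypothetical illustration of retrieval-augmented question answering, not query.py itself.
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import RetrievalQA

db = Chroma(persist_directory="vector_stores/example_folder", embedding_function=OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=db.as_retriever())
print(qa.invoke({"query": "What is the document about?"})["result"])
```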
The file summarize.py can be used to summarize every file individually in a document folder. Two options for summarization are implemented:
- Map Reduce: this will create a summary in a fast way
- Refine: this will create a more refined summary, but can take a long time to run, especially for larger documents
Execution is done in the activated virtual environment with python summarize.py. The user will be prompted for the summarization method
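Map Reduce and Refine are also the names of LangChain's standard summarization strategies. A minimal sketch of the two (assuming LangChain's summarize chains and an OpenAI chat model; not the code used by summarize.py) is:

```python
# Hypothetical illustration of Map Reduce vs Refine summarization with LangChain.
from langchain.chains.summarize import load_summarize_chain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
map_reduce_chain = load_summarize_chain(llm, chain_type="map_reduce")  # fast
refine_chain = load_summarize_chain(llm, chain_type="refine")          # slower, more refined
# summary = map_reduce_chain.invoke({"input_documents": chunks})  # chunks: list of Documents
```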
The functionalities described above can also be used through a GUI.
In the activated virtual environment, the GUI can be started with streamlit run streamlit_app.py
When this command is used, a browser session will open automatically
The file review.py uses the standard question-answer technique but allows you to ask multiple questions to each document in a folder sequentially.
- Create a subfolder with the name review in a document folder
- Secondly, add a file named questions.txt to the review folder containing all your questions. The file expects a header line with the column names Question Type and Question. On the following lines, add your question types ('Initial', or 'Follow Up' when the question refers to the previous question) and questions, tab-separated, as in the sketch below. You can find a full example in the docs/CAP_nis folder.
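A minimal questions.txt could look like this (the questions themselves are just placeholders; the columns are separated by a tab character):

```
Question Type	Question
Initial	What is the main topic of this document?
Follow Up	Which organisations are involved in that topic?
```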
Execution is done in the activated virtual environment with python review.py
All results, including the answers and the sources used to create them, are stored in a file named result.csv in the same review subfolder
When parsing files, the raw text is chunked. To see and compare the results of different chunking methods, use the chunks analysis GUI.
In the activated virtual environment, the chunks analysis GUI can be started with streamlit run streamlit_chunks.py
When this command is used, a browser session will open automatically
The file evaluate.py can be used to evaluate the generated answers for a list of questions, provided that the file eval.json exists, containing
not only the list of questions but also the related list of desired answers (ground truth).
Evaluation is done at folder level (one or multiple folders) in the activated virtual environment with python evaluate.py
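The exact schema of eval.json is determined by evaluate.py, so treat the snippet below purely as a hypothetical illustration of the idea of pairing each question with its ground-truth answer; the field names are assumptions, not the repo's documented format:

```json
{
  "question": ["What is the main topic of the document?"],
  "ground_truth": ["The document describes ..."]
}
```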
All evaluation results can be viewed by using a dedicated evaluation GUI.
In the activated virtual environment, this evaluation GUI can be started with streamlit run streamlit_evaluate.py
When this command is used, a browser session will open automatically
This repo is mainly inspired by:
- https://docs.streamlit.io/
- https://docs.langchain.com/docs/
- https://blog.langchain.dev/tutorial-chatgpt-over-your-data/
- https://github.com/PatrickKalkman/python-docuvortex/tree/master
- https://github.com/PrithivirajDamodaran/FlashRank
- https://blog.langchain.dev/evaluating-rag-pipelines-with-ragas-langsmith/
- https://github.com/explodinggradients/ragas