The goal of this project is to let people easily load their local LLMs in a notebook for testing with langchain or other agents. This notebook is a companion to oobabooga/text-generation-webui and uses the same code for loading models. If you are using llama-cpp models only, you do not need the text-generation-webui code.
Model: llama-30b-sft-oa-alpaca-epoch-2
These instructions assume you have successfully set up text-generation-webui with the one-click installer on Windows with CUDA, or installed llama-cpp and its dependencies. If you are using llama-cpp models only, you do not need to follow the text-generation-webui instructions.
- Activate your Python or Conda environment.
- Install Jupyter Notebook by running `pip install jupyter` in your preferred command prompt or terminal.
- Restart your command prompt or terminal to ensure that the installation is properly configured.
- Activate your Python or Conda environment again and run `jupyter notebook` in the command prompt or terminal to launch the Jupyter interface.
- Navigate to the directory where `Alpaca-wikipedia-search.ipynb` is located (ooba users: put it in `./text-generation-webui/`) and open the notebook in the Jupyter interface.
This might not work the same for every model or search query. Prompts may need to be tweaked to get the agent to follow the instructions correctly. If you know of any instruct prompts that work well with certain models, let me know.
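As a starting point for that tweaking, below is a minimal sketch of an Alpaca-style instruct template, the format that alpaca-tuned checkpoints such as llama-30b-sft-oa-alpaca-epoch-2 are commonly trained on. The exact wording a given model expects may differ, and the `build_prompt` helper is purely illustrative, not part of the notebook's code.

```python
# A sketch of the widely used Alpaca instruct template. Adjusting this
# wrapper text is usually where agent-following problems get fixed;
# verify the exact wording your checkpoint was tuned on.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap the agent's raw instruction in the instruct template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Search Wikipedia for the capital of France.")
```

The model's completion is then generated from `prompt`, and everything after the `### Response:` marker is treated as the answer.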
Feel free to open issues, submit pull requests, etc. if you want to join in on this research.