OpenLLM is a versatile project that lets users interact with different large language models (LLMs). It uses the Perplexity API to access well-known open-source models such as the Llama models or Mistral 7B, and OpenAI's API to communicate with GPT models. It also offers simple functionality for file embeddings.
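Because Perplexity exposes an OpenAI-compatible endpoint, both providers can be reached through the same client class. Below is a minimal sketch of how the two backends might be wired up with LangChain's `ChatOpenAI`; the model names and environment-variable names are placeholders, not necessarily the ones used in this repository.

```python
# Hedged sketch: model names and env-var names are placeholders.
import os
from langchain_openai import ChatOpenAI

# OpenAI GPT models go through the regular OpenAI endpoint.
gpt = ChatOpenAI(model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"])

# Perplexity serves its models behind an OpenAI-compatible endpoint.
pplx = ChatOpenAI(
    model="sonar",  # placeholder; use any model Perplexity currently offers
    api_key=os.environ["PERPLEXITY_API_KEY"],
    base_url="https://api.perplexity.ai",
)

print(gpt.invoke("Hello from OpenLLM!").content)
print(pplx.invoke("What happened in tech news today?").content)
```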
- Intuitive Chat Interface: Users can seamlessly chat with various LLMs.
- Chat History: Basic chat history is stored locally in JSON files.
- Token Counter: Keep track of tokens used during interactions.
- Online LLMs: Uses the newest Perplexity online models, which are grounded on internet data, so there is no knowledge cutoff date! These models are very new, so definitely expect errors.
- Basic embeddings with llama-index: Upload your files and chat with them.
- Basic authentication: Set a password as an environment variable, and only users who enter that password correctly can access the site. The authentication state is stored in a user cookie, so you only need to enter the password once. This is very basic; no password hashing is implemented (a sketch follows this list).
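A rough sketch of how such password gating can be done with NiceGUI's per-user storage, which is persisted through a browser cookie/session. The variable name `APP_PASSWORD` and the page layout are assumptions for illustration, not the repository's actual implementation.

```python
# Hedged sketch only: APP_PASSWORD and STORAGE_SECRET are hypothetical env-var names.
import os
from nicegui import app, ui

@ui.page('/')
def index():
    if app.storage.user.get('authenticated'):
        ui.label('Welcome to OpenLLM!')  # the real app would build the chat UI here
        return

    def check_password():
        if password.value == os.environ.get('APP_PASSWORD'):
            app.storage.user['authenticated'] = True  # remembered via the user cookie/session
            ui.navigate.to('/')
        else:
            ui.notify('Wrong password')

    password = ui.input('Password', password=True)
    ui.button('Enter', on_click=check_password)

# storage_secret is required for app.storage.user to work
ui.run(storage_secret=os.environ.get('STORAGE_SECRET', 'change-me'))
```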
Demo video: demo.basic.online-video-cutter.com.mp4

Online-LLM demo video: Bildschirmaufnahme.2023-12-03.um.15.06.22.mov
- Robust Error Handling: Improve error handling for a smoother experience.
- Optimizations: Minor improvements and optimizations.
OpenLLM relies on three Python libraries:
- NiceGUI: Used for the user interface.
- LangChain: Facilitates communication with the LLMs.
- LlamaIndex: Facilitates embeddings of uploaded files.
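As an illustration of the embeddings feature, here is a minimal LlamaIndex sketch using the current llama-index API; the "uploads" directory name is an assumption, not the path used by this project.

```python
# Hedged sketch: embed files from a local folder and chat with them.
# Requires OPENAI_API_KEY to be set, since LlamaIndex defaults to OpenAI embeddings.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("uploads").load_data()  # hypothetical folder of uploaded files
index = VectorStoreIndex.from_documents(documents)        # builds the embedding index
chat_engine = index.as_chat_engine()

print(chat_engine.chat("Summarize the uploaded files."))
```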
- Clone the repository.
- Install the dependencies from requirements.txt.
- Set the environment variables listed in env_vars_github.txt, or create a file called "var.env" containing those variables; the code automatically loads the values stored in var.env (see the sketch after this list). You need API keys from OpenAI and Perplexity to use all the models.
- Run main.py.
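For reference, a minimal sketch of how a var.env file can be loaded with python-dotenv; the variable names shown are placeholders, since the authoritative list lives in env_vars_github.txt.

```python
# Hedged sketch: load variables from var.env if it exists, otherwise rely on the environment.
import os
from dotenv import load_dotenv

load_dotenv("var.env")  # silently does nothing if the file is missing

openai_key = os.environ.get("OPENAI_API_KEY")          # placeholder name
perplexity_key = os.environ.get("PERPLEXITY_API_KEY")  # placeholder name
print("OpenAI key loaded:", bool(openai_key))
print("Perplexity key loaded:", bool(perplexity_key))
```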