Enable Custom Models #461

Open · wants to merge 2 commits into main
Conversation

enochlev

Problem statement: there are a lot of powerful LLM backends with terrible UIs, and a lot of nice LLM front ends with bad/unscalable backends.

This PR allows using custom models from a private server hosting an OpenAI-compatible API.

Requirements:

A backend with an OpenAI-compatible API. Here are some popular ones:

• FastChat: an open platform for training, serving, and evaluating large language model based chatbots. Includes scalability capabilities.
• vLLM: a faster/more scalable option for hosting LLMs.
• LMDeploy: less popular, but even faster/more scalable than vLLM.
• llama-cpp-python: a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
• text-generation-webui: the most popular web UI. Supports NVIDIA CUDA GPU acceleration.
• LM Studio: a fully featured local GUI with GPU acceleration on both Windows (NVIDIA and AMD) and macOS.
• ctransformers: a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.

This will make libraries like [llama-gpt](https://github.com/getumbrel/llama-gpt) obsolete, since that project also focuses on a nice UI but has an unscalable backend.
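
For context, here is a minimal sketch of what an "OpenAI-compatible" chat request looks like. The base URL, API key, and model name are placeholders for whatever your server exposes, not something defined by this PR:

```ts
// Minimal sketch of a chat completion request against an
// OpenAI-compatible server. Base URL, key, and model are assumptions.
const BASE_URL = 'http://localhost:8000/v1'; // hypothetical self-hosted server
const API_KEY = 'EMPTY'; // most self-hosted servers ignore the key

async function chatCompletion(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: 'lmsys/vicuna-7b-v1.5', // whatever model your server hosts
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```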

Here is how to test it:

  1. Host one of the backends above and get a working OpenAI-compatible server.
     I recommend vLLM due to its powerful backend.
  2. Expose the server via ngrok.

To skip the two steps above, test using my API server... I'll host it for a week or until this PR is closed.

  • API Endpoint: https://major-collie-officially.ngrok-free.app/v1/chat/completions, API key: EMPTY
  1. Enter your credentials into the app.
     (screenshot)

  2. Choose a model supported by your API.
     (screenshot)

  3. Start chatting on a local private server!
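
If you want to sanity-check the endpoint before touching the UI, a one-off request like the sketch below works; this assumes the demo server above is still up, and the model name is taken from the discussion later in this thread:

```ts
// One-off sanity check against the demo endpoint (it may be offline by now).
const res = await fetch(
  'https://major-collie-officially.ngrok-free.app/v1/chat/completions',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer EMPTY',
    },
    body: JSON.stringify({
      model: 'vicuna-7b-v1.5', // assumed from the comments below
      messages: [{ role: 'user', content: 'Say hello.' }],
    }),
  },
);
console.log((await res.json()).choices[0].message.content);
```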

@mratanusarkar

I tried bettergpt.chat by setting the API as:
API Endpoint: https://major-collie-officially.ngrok-free.app/v1/chat/completions and API key as EMPTY,
after checking "Use custom API endpoint".

But on clicking "Model" at the top, I don't get vicuna-7b-v1.5 in the dropdown.
@enochlev could you help with what I am doing wrong?

Also, a follow-up question: how do I use private models with this UI, without going through "api.openai.com"?

@enochlev
Author

I closed down the server as I said I would host it for a week... I've reopened it.

Here is the error you should get:
(screenshot)

And my code should allow you to add your own custom model into the text box.

After thinking about it for a while, a better option would be to send a request to the OpenAI endpoint, check for existing models, and add them to the dropdown instead of having the user edit it manually.
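
A sketch of that idea, assuming the backend implements the standard `/v1/models` listing route (vLLM and FastChat both do); the function name and fallback behavior are illustrative, not part of this PR:

```ts
// Sketch: populate the model dropdown from the server's /v1/models route.
// `baseUrl` is the user's custom endpoint minus the /chat/completions path.
interface ModelList {
  data: { id: string }[];
}

async function fetchAvailableModels(
  baseUrl: string,
  apiKey: string,
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) return []; // fall back to manual entry if listing fails
  const list: ModelList = await res.json();
  return list.data.map((m) => m.id);
}
```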

@7flash

7flash commented Jan 28, 2024

> I closed down the server as I said I would host it for a week... I've reopened it.
>
> Here is the error you should get: (screenshot)
>
> And my code should allow you to add your own custom model into the text box.
>
> After thinking about it for a while, a better option would be to send a request to the OpenAI endpoint, check for existing models, and add them to the dropdown instead of having the user edit it manually.

I think we need both: requesting the /models endpoint to populate the default dropdown, but still allowing the user to add custom models in case the /models endpoint did not return all of them.
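
Something like the following could combine the two, keeping whatever the user typed alongside whatever /models reported; the names here are illustrative, not from the PR:

```ts
// Sketch: merge server-reported models with user-added custom ones,
// de-duplicated, so the dropdown shows both.
function mergeModelOptions(
  fetched: string[],
  customModels: string[],
): string[] {
  return [...new Set([...fetched, ...customModels])];
}

// e.g. mergeModelOptions(['vicuna-7b-v1.5'], ['my-lora-finetune'])
//   -> ['vicuna-7b-v1.5', 'my-lora-finetune']
```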
