feature: add llama api using local models #108
Conversation
I see that you actually worked on this in https://github.com/mkellerman/chatgpt-web/tree/feature/add-llama-cli, great :)
Yes, either approach would work. I don't see a negative impact to having both repos document/showcase one another. And it allows both projects to grow and thrive.
Having it here helps with the development work on the UI, and having it over there helps people working on their ML models.
OK, I will support both - but first I will contribute a Dockerfile to the llama-cpp-python repo. Edit: done, see abetlen/llama-cpp-python#73
@mkellerman & @Niek I am not sure if I am missing anything, but isn't this capability already implemented by #494? You can simply start a local OpenAI API-compatible server.
Yes, this is an old issue that can be closed now. PRs with some docs around this are welcome, though :)
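For anyone finding this later, here's a minimal sketch of that local-server setup; the model filename, path, and port below are placeholders, not project defaults:

```sh
# Install llama-cpp-python with its OpenAI-compatible server extra.
pip install 'llama-cpp-python[server]'

# Serve a local model; --model should point at whatever model file you
# have downloaded (the path below is just an example).
python3 -m llama_cpp.server --model ./models/llama-2-7b.Q4_K_M.gguf --port 8000

# chatgpt-web can then be pointed at http://localhost:8000/v1
# as an OpenAI-compatible API endpoint.
```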
This allows the user to un-comment the section in the docker-compose.yml and .env files to either use the mocked-api or the llama-api. The user can then load local models and use chatgpt-web against it.
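A rough sketch of that workflow; the service name and port are assumptions here, since the real ones are whatever the commented-out section in docker-compose.yml defines:

```sh
# 1. Un-comment the llama-api (or mocked-api) service in docker-compose.yml
#    and the matching variables in .env.

# 2. Start the API service with the .env values applied.
docker compose --env-file .env up -d llama-api

# 3. Confirm the OpenAI-compatible endpoint answers before using chatgpt-web.
curl http://localhost:8000/v1/models
```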