Instructions for the configuration on macOS with llama.cpp #211
Great, thanks for the explanation. I totally missed the providers menu, but this is what I was looking for.
Adding more information here:
Hope it helps.
Greetings, and thanks for your hard work! I am trying to set up the extension properly as instructed in the README.md, but the UI does not seem to match what is described there.
I am running the llama.cpp server, which offers an OpenAI-compliant API.
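For reference, a minimal sketch of how that OpenAI-compliant endpoint can be exercised directly, independent of the extension (the hostname/port `localhost:8080`, llama.cpp's default, and the placeholder model name are assumptions, not values from this issue):

```python
import json
import urllib.request

url = "http://localhost:8080/v1/chat/completions"  # llama.cpp's OpenAI-compatible route
payload = {
    # llama.cpp serves whatever model it was launched with, so the name here
    # is mostly informational; "local" is just a placeholder.
    "model": "local",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "max_tokens": 16,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# If this prints a reply, the OpenAI-compliant API is reachable.
print(body["choices"][0]["message"]["content"])
```

If a request like this returns a reply, the server side is working and the remaining question is only how to point the extension's provider settings at the same hostname and port.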
Specifically:

1. The instructions advise deepseek base for chat, but also deepseek base for completion with a good GPU. I am not sure why the chat function requires a base model.
2. When I open the side panel and choose the configuration, there is no "Api Provider" field; instead, there are only fields for the "Ollama Hostname" and "Ollama API Port", but I am not using Ollama (screenshot below). How/where can we select llama.cpp as per the instructions?
3. Finally (and maybe a related issue), in the side panel, clicking on the robot emoji shows two dropdown boxes for chat and FIM, but only one option is present there, and it is tagged "ollama" (screenshot below). No other option is displayed. (A direct check of the FIM endpoint, outside the extension, is sketched below.)
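Regarding the FIM dropdown in point 3, fill-in-the-middle can also be tested against llama.cpp directly through its `/infill` endpoint, bypassing the extension entirely. This is only a sketch under assumptions: the server is on `localhost:8080` and was started with a base model that supports FIM (for example a deepseek-coder base GGUF); the code fragment is invented for illustration.

```python
import json
import urllib.request

payload = {
    # Invented code fragment: the prefix and suffix around the gap to fill.
    "input_prefix": "def add(a, b):\n    return ",
    "input_suffix": "\n\nprint(add(1, 2))\n",
    "n_predict": 16,
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:8080/infill",  # llama.cpp's fill-in-the-middle route
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# "content" holds the text the model predicts between the prefix and the suffix.
print(result["content"])
```

If this returns sensible text in `content`, the model and server can serve FIM, and the remaining issue is only which providers the chat/FIM dropdowns expose.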