Idea: Add option to use a local model like GPT4ALL #44
Comments
This would be a great addition to the plugin 👍 It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim runs the model itself.
hfcc.nvim has an interface to a hosted Open Assistant model on Hugging Face. It doesn't have as robust a feature set, so it would be great if Hugging Face chat could be leveraged with this plugin.
So there is a way to use llama.cpp with the OpenAI API... if one could configure a different URI for the OpenAI endpoint, we would be in business: https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/
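A minimal sketch of what such an option could look like from the user side, assuming a hypothetical `openai_api_endpoint` setting (both the module name and the option names below are placeholders, not the plugin's current API):

    -- Hypothetical setup: point the plugin at a local llama.cpp server
    -- that exposes an OpenAI-compatible chat completions endpoint.
    require("plugin_name").setup({
      openai_api_endpoint = "http://localhost:8000/v1/chat/completions",
      -- A local server typically ignores the key, but a dummy value
      -- keeps the existing Authorization header code path working.
      openai_api_key = "not-needed",
    })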
I came here looking to see if this plugin could be used with llama.cpp. Perhaps making this URL in openai.lua configurable would just work?

    utils.exec("curl", {
      "--silent",
      "--show-error",
      "--no-buffer",
      "https://api.openai.com/v1/chat/completions",
      "-H",
      "Content-Type: application/json",
      "-H",
      "Authorization: Bearer " .. api_key,
      "-d",
      vim.json.encode(data),
    })
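If that hard-coded URL were read from the plugin's configuration instead, the same request could target a local OpenAI-compatible server. A rough sketch of what that change to openai.lua might look like, assuming a hypothetical `config.options.openai_api_endpoint` field that falls back to the current default (the config field is illustrative, not existing plugin API; `utils`, `api_key`, and `data` are the same values used in the snippet above):

    -- Hypothetical: resolve the endpoint from user config, defaulting to OpenAI.
    local endpoint = (config.options and config.options.openai_api_endpoint)
      or "https://api.openai.com/v1/chat/completions"

    utils.exec("curl", {
      "--silent",
      "--show-error",
      "--no-buffer",
      endpoint,
      "-H",
      "Content-Type: application/json",
      "-H",
      -- Local servers generally ignore this header, so sending it is harmless.
      "Authorization: Bearer " .. api_key,
      "-d",
      vim.json.encode(data),
    })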
agree
Thank you for the great plugin!
The option to use a local model like GPT4ALL instead of GPT-4 could make the prompts more cost-effective to play with.
See codeexplain.nvim for an example of a plugin that does this.