-
There's currently no ollama support built in, but this plugin is designed to make adding new providers easy (and it can be done externally, i.e. without modifying llm.nvim). You just need to create a Lua table that fulfills this interface; the huggingface provider is probably the simplest example. To use the provider, set it as the `provider` field in a new prompt, e.g.:

```lua
local curl = require('llm.curl')

local ollama = {
  request_completion = function(handlers, params, options)
    return curl.stream(
      -- request goes here
    )
  end
}

require('llm').setup({
  prompts = { -- or vim.tbl_extend('force', require('llm.prompts.starters'),
    ollama_code = {
      provider = ollama,
      -- rest of prompt
    }
  }
})
```

Feel free to open an issue to track ollama support as well - I only have an 8 GB MacBook Air, so I haven't really played around with local runners (which are currently Mac only), but once they add cross-platform support I'll give it a shot.
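For the curious, here is a rough sketch of what filling in that request could look like against ollama's streaming `/api/generate` endpoint. The `curl.stream` argument shape and the handler callback names (`on_partial`, `on_finish`, `on_error`) are assumptions here, not the documented interface; check the huggingface provider for the real signatures:

```lua
-- Hypothetical ollama provider sketch. The curl.stream signature and the
-- handlers callbacks below are assumptions; consult llm.nvim's huggingface
-- provider for the actual interface.
local curl = require('llm.curl')

local ollama = {
  request_completion = function(handlers, params, options)
    return curl.stream(
      {
        url = 'http://localhost:11434/api/generate', -- ollama's default local port
        method = 'POST',
        body = vim.json.encode({
          model = params.model or 'llama2', -- placeholder model name
          prompt = params.prompt,
        }),
      },
      function(chunk)
        -- ollama streams newline-delimited JSON objects, each carrying a
        -- partial "response" string until one arrives with done = true.
        -- A real implementation may need to buffer incomplete lines.
        local ok, data = pcall(vim.json.decode, chunk)
        if ok and data.response then
          handlers.on_partial(data.response)
        end
        if ok and data.done then
          handlers.on_finish()
        end
      end,
      handlers.on_error
    )
  end,
}

return ollama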
-
Does this support ollama?
So far ollama has been my smoothest experience for setting up an LLM locally.
Here's what the API use looks like (as documented here):
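Roughly, it's a single POST that streams newline-delimited JSON back. A minimal Lua sketch of calling it (host, model, and prompt are placeholders, and shelling out to curl like this blocks the editor, so it's only for illustration):

```lua
-- Minimal sketch of the ollama generate API from Lua. Each output line is a
-- JSON object with a partial "response" field, ending with one where done = true.
local body = vim.json.encode({
  model = 'llama2',                 -- placeholder model name
  prompt = 'Why is the sky blue?',  -- placeholder prompt
})

local handle = io.popen(
  "curl -s http://localhost:11434/api/generate -d '" .. body .. "'"
)

for line in handle:lines() do
  local chunk = vim.json.decode(line)
  io.write(chunk.response or '')
  if chunk.done then break end
end
handle:close()
```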