Ollama support #1028
Labels: area/backends, enhancement (New feature or request), roadmap, up for grabs (Tickets that no-one is currently working on)
Comments
While I can understand the request for another backend, Ollama isn't really adding any features that LocalAI doesn't already have. Did you follow the docs to use the GPU? https://localai.io/basics/getting_started/#cuda (there is a Metal section as well). Ollama isn't on the roadmap, however feel free to take a stab at it.
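For context on the "LocalAI already covers this" point: LocalAI's main feature is an OpenAI-compatible REST API, so a client only needs the standard /v1/chat/completions route. A minimal sketch, assuming a LocalAI instance on port 8080 with a model configured under the name "gpt-3.5-turbo" (both names are illustrative, not taken from this thread):

```python
# Minimal sketch: calling LocalAI through its OpenAI-compatible endpoint.
# Assumptions: LocalAI is running locally on port 8080 and a model has been
# configured under the alias "gpt-3.5-turbo"; both are illustrative.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "temperature": 0.7,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```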
This came up again on X, so marking it as up for grabs :)
Is your feature request related to a problem? Please describe.
Hello, I tried ollama on my MacBook and got pretty good performance compared to running LocalAI with llama-stable directly (which consumes lots of CPU and does not use the GPU at all). While Ollama will use the GPU and so saves CPU, unfortunately ollama does not offer an OpenAI-like API (see the sketch after this issue body for what its native API looks like).
Describe the solution you'd like
Add support for ollama.
Describe alternatives you've considered
I have not found a proper one.
Additional context
Thanks a lot :D
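To illustrate the API mismatch described above: at the time of this issue, Ollama exposes its own native generate endpoint rather than the OpenAI routes, so existing OpenAI clients cannot point at it directly. A rough sketch, assuming a local Ollama server on its default port 11434 and a pulled "llama2" model (both assumptions for illustration):

```python
# Rough sketch of Ollama's native (non-OpenAI) API, for contrast with the
# /v1/chat/completions route shown earlier. Assumes Ollama is running on its
# default port 11434 and the "llama2" model has been pulled; both are
# illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "Say hello in one sentence.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # a different shape than OpenAI's "choices" list
```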