[Bug]: Ollama failing to set context and use tools #5166
Comments
Hey Tom, at the moment, "tool use" is enabled from our side for Anthropic models and a couple of OpenAI models. The reason is that tests with several others gave pretty bad results. If we make them user-configurable, though, that should cover it. Why do you say that the inability to use tools has something to do with the 2048 limit? Just wondering. I could be wrong; I think the difference isn't high, but I can double-check.
@enyst thanks for the reply. If LiteLLM+Ollama has no tool use (please drop in the FOSS equivalents if possible), then the current system will do.
Just to clarify: if an LLM doesn't have "tool use" (function calling), or it's not enabled from our side because of how it works, then what that means for OpenHands is that tool use is emulated through prompting. They will not perform as well as Claude/GPT-4o, though, possibly by a lot. Can you please explain what you mean by "drop the FOSS equivalents"?
@enyst what I mean is that there should be open-source tool-use aids, or at the very least a good way of explicitly marking open-weight models that are tool-use compatible, so that people can mix tool-use LLMs with pure code LLMs depending on the steps of the task.
We are working towards supporting OSS models with tool use. You may want to see this comment (and, earlier in the thread, the discussion and current results).
Is there an existing issue for the same bug?
Describe the bug and reproduction steps
The issue is that when OpenHands has to work around the inability to use tools, the API calls exceed Ollama's default 2048-token context limit, which needs to be set higher (hopefully without building a custom model with a larger context length, since most modern LLMs have context lengths of 128K or more). Ollama should be able to accept params on the API calls so that this does not happen: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-can-i-specify-the-context-window-size
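For reference, the FAQ linked above documents an `options.num_ctx` field on Ollama's generate endpoint. A minimal sketch of raising the context window per request (model name and prompt are placeholders):

```python
import requests

# Raise Ollama's context window per request via the documented
# "num_ctx" option; no custom Modelfile needed.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",               # placeholder model name
        "prompt": "Why is the sky blue?",  # placeholder prompt
        "stream": False,
        "options": {"num_ctx": 8192},      # override the 2048-token default
    },
)
print(resp.json()["response"])
```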
It is true that OpenHands relies on LiteLLM, but there is definitely something analogous to "num_ctx" that can be passed through LiteLLM to Ollama as well: https://docs.litellm.ai/docs/providers/ollama https://docs.litellm.ai/docs/completion/input#translated-openai-params
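A sketch of what that could look like, assuming LiteLLM forwards provider-specific kwargs such as `num_ctx` through to Ollama (per its provider docs linked above; the model name and `api_base` are placeholders):

```python
import litellm

# Sketch only: assumes "num_ctx" is forwarded into Ollama's request
# options as a provider-specific param.
response = litellm.completion(
    model="ollama/llama3.1",                 # placeholder model name
    api_base="http://localhost:11434",       # default local Ollama endpoint
    messages=[{"role": "user", "content": "Summarize this repository."}],
    num_ctx=8192,                            # intended Ollama context window
)
print(response.choices[0].message.content)
```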
Regarding tool use, Ollama shipped an update in July adding native tool support (https://ollama.com/blog/tool-support), but the system still flags these models as unable to use tools natively and requires emulation through prompting.
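The blog post above shows an OpenAI-style `tools` field on Ollama's chat endpoint. A minimal sketch (the weather tool and model name are illustrative placeholders):

```python
import requests

# Sketch of Ollama's tool-calling API as described in the blog post.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",  # placeholder; must be a tool-capable model
        "messages": [{"role": "user", "content": "What is the weather in Toronto?"}],
        "stream": False,
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_current_weather",  # hypothetical tool
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "The name of the city"}
                    },
                    "required": ["city"],
                },
            },
        }],
    },
)
# Tool-capable models return a "tool_calls" list inside the assistant message.
print(resp.json()["message"].get("tool_calls"))
```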
OpenHands Installation
Docker command in README
OpenHands Version
0.14
Operating System
None
Logs, Errors, Screenshots, and Additional Context
No response