Ollama LLM provider tools support #14623
base: master
Conversation
@dhuebner Thank you for the PR! Which models did you use for testing?
@planger
@dhuebner thanks, no particular reason, just out of curiosity and to know how to best test the PR. Thank you!
Great feature addition! I tried to test this with Llama 3.1 but I was not really successful (see below).
This was my test with Ollama:
The reason is that Ollama doesn't seem to support tool calls together with streaming; the documentation says as much. There is also no possibility to listen to e.g.
I tried the questions mentioned in #14285 by @navr32; that worked okay. I will test with more complex questions, although I don't know how to make it perform better by just calling the provided API...
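For context, here is a minimal sketch (not part of the PR) of what a non-streaming, tool-enabled request against Ollama's /api/chat REST endpoint could look like. The model name and the tool definition are illustrative placeholders, and the exact response shape should be verified against the Ollama documentation:

```ts
// Sketch only: a non-streaming chat request with a tool definition against a local Ollama server.
// The tool name and schema are hypothetical examples, not part of this PR.
interface OllamaToolCall {
    function: { name: string; arguments: Record<string, unknown> };
}

async function chatWithTools(): Promise<void> {
    const response = await fetch('http://localhost:11434/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            model: 'llama3.1',
            // Per the discussion above, tool calls seem to be reported only when streaming is disabled.
            stream: false,
            messages: [{ role: 'user', content: 'How many files are in my workspace?' }],
            tools: [{
                type: 'function',
                function: {
                    name: 'listWorkspaceFiles', // hypothetical tool
                    description: 'Lists the files in the current workspace',
                    parameters: { type: 'object', properties: {} }
                }
            }]
        })
    });
    const data = await response.json();
    // In the non-streaming case, requested tool calls are expected in message.tool_calls.
    const toolCalls: OllamaToolCall[] | undefined = data.message?.tool_calls;
    console.log(data.message?.content, toolCalls);
}
```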
@dhuebner OK, that makes sense. However, I think we should still add the tool call to the response so that it is rendered in the chat.
What should that look like? I can send only one response per request, or is there a special API to achieve this?
I think at the moment we cannot really represent tool calls generically in the non-streaming case. In the non-streaming case, we can only return a

```ts
export interface LanguageModelTextResponse {
    text: string;
}
```

In the streaming case we use

```ts
export interface LanguageModelStreamResponsePart {
    content?: string | null;
    tool_calls?: ToolCall[];
}
```

These are then translated in the corresponding method. So I think we may need to extend our non-streaming response type.
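To make the translation step concrete, here is a rough sketch (not from the PR) of how the streaming parts quoted above could be folded into a final text plus the collected tool calls. The ToolCall shape is assumed here purely for illustration:

```ts
// Assumed minimal shape of a tool call; the real interface in the code base may differ.
interface ToolCall {
    id?: string;
    function?: { name?: string; arguments?: string };
}

interface LanguageModelStreamResponsePart {
    content?: string | null;
    tool_calls?: ToolCall[];
}

// Accumulates the streamed parts into the final answer text and the list of tool calls.
async function collectStream(
    parts: AsyncIterable<LanguageModelStreamResponsePart>
): Promise<{ text: string; toolCalls: ToolCall[] }> {
    let text = '';
    const toolCalls: ToolCall[] = [];
    for await (const part of parts) {
        if (part.content) {
            text += part.content;
        }
        if (part.tool_calls) {
            toolCalls.push(...part.tool_calls);
        }
    }
    return { text, toolCalls };
}
```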
@planger
Just an idea: can't we map non-streaming responses to streamed ones rather easily? We just pretend that there is a stream, and once we have the LLM's answer we send it as one blob. That way we could reuse the tool handling?
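A minimal sketch of that idea, reusing the LanguageModelStreamResponsePart and ToolCall shapes from the previous sketch: wrap the single non-streaming answer in an async iterable that yields exactly one part, so the existing streaming and tool handling could be reused unchanged.

```ts
// Sketch only: pretend there is a stream and emit the whole non-streaming answer as one part.
function asSingleElementStream(
    text: string,
    toolCalls?: ToolCall[]
): AsyncIterable<LanguageModelStreamResponsePart> {
    return {
        async *[Symbol.asyncIterator]() {
            yield { content: text, tool_calls: toolCalls };
        }
    };
}
```

A provider could then hand such a pseudo-stream to the same code path that processes real streamed responses.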
@sdirix
What it does
Resolves #14610
Adds tool handling for Ollama language models
How to test
Set an Ollama model as workspace agent and ask questions about the workspace. For example:
@Workspace How many files are in my workspace?
Follow-ups
Breaking changes
Attribution
Review checklist
Reminder for reviewers