Adapt to new function calling format - allow function calling on Groq & other non-OpenAI models #569
Conversation
Hi @tijszwinkels, thank you for the pull request. There are a few issues I found while testing the changes:

When the functions folder is empty (the user doesn't have any functions), sgpt fails with an OpenAI API error:
Invalid value for 'tool_choice': 'tool_choice' is only allowed when 'tools' are specified
This error can be fixed by adding an if condition:
if functions:
additional_kwargs["tool_choice"] = "auto"
additional_kwargs["tools"] = functions
There also seems to be an issue when the LLM tries to call multiple functions at once. For example, sgpt "echo foo and echo bar"
fails with a JSONDecodeError,
since the arguments for both calls arrive from the LLM in a single response: '{"shell_command": "echo foo"}{"shell_command": "echo bar"}'
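The failure mode is easy to reproduce outside sgpt. This is a minimal illustration, not sgpt's actual parsing code: `json.loads` only accepts a single top-level JSON value, so two concatenated argument objects fail to parse.

```python
import json

# Two tool-call argument objects concatenated into one string,
# as happens when the model calls several functions in one response.
payload = '{"shell_command": "echo foo"}{"shell_command": "echo bar"}'

try:
    json.loads(payload)
except json.JSONDecodeError as exc:
    # The second object is rejected as "Extra data" after the first one.
    print(f"JSONDecodeError: {exc}")
```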
- Prevent crash when user has empty functions directory
- Prevent parallel function calling from the LLM as we don't handle that correctly
- The option to disable parallel function calling is added in a later release of the openai library, so upgraded it
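The two fixes above can be sketched as a small helper. The helper name is hypothetical, and the sketch assumes the `parallel_tool_calls` request flag that newer openai releases expose for disabling parallel calls:

```python
def build_tool_kwargs(functions: list[dict]) -> dict:
    """Hypothetical helper showing the guarded request kwargs.

    The OpenAI API rejects 'tool_choice' when no 'tools' are given,
    so both keys are set only when the user actually has functions.
    """
    kwargs: dict = {}
    if functions:
        kwargs["tools"] = functions
        kwargs["tool_choice"] = "auto"
        # Stops the model from emitting several tool calls in one
        # response, which the JSON argument parsing can't handle.
        kwargs["parallel_tool_calls"] = False
    return kwargs

# No functions defined: no tool-related kwargs are sent at all.
print(build_tool_kwargs([]))  # {}
```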
@TheR1D Thank you for testing this thoroughly! I have addressed your comments.
Tested by prompting it to perform multiple actions at once, as well as with an empty functions directory.
Hi @tijszwinkels, @TheR1D, I’ve noticed that this PR is almost complete, but there are some lint test failures. I’d love to help out and get this merged, as I'm also interested in this feature. I’ve made the necessary changes to fix the lint issues and pushed them to a branch on my fork. You can review my changes here: If you’re okay with it, I can either create a new PR with these changes or, if the PR owner adds me as a collaborator, I can push the changes directly to this branch. Please let me know how you’d like to proceed. Thanks again for your work on this feature!
@HejiaZ2023 Thanks! I merged and pushed your changes onto this branch, and I've added you as a collaborator for now.
@TheR1D Any more changes you'd like to see? Anything we can do to reduce your workload?
Hi @tijszwinkels, @HejiaZ2023, sorry for the late reply. The PR has been merged. Thank you for your contribution!
…D#569) Co-authored-by: Hejia Zhang <hejiazhang2023@outlook.com>
Docker container improvements (TheR1D#540) Co-authored-by: Farkhod Sadykov <Farkhod.Sadykov@gmail.com>
Minor bug fixes, documentation, release 1.4.4 (TheR1D#607)
CD pipeline fix (TheR1D#608)
Option for LiteLLM to manage API keys (TheR1D#604)
Update function calling to use the new 'tool use' format. This format is compatible with LiteLLM proxies, Groq, and other non-OpenAI models.
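For reference, the 'tool use' format wraps each function schema in a {"type": "function", ...} object and is passed via "tools"/"tool_choice" instead of the deprecated "functions"/"function_call" keys. A sketch of a migrated request payload follows; the function name and parameters are illustrative, not sgpt's actual schema:

```python
# One tool definition in the newer 'tool use' request format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "execute_shell_command",  # illustrative name
            "description": "Run a shell command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {
                    "shell_command": {"type": "string"},
                },
                "required": ["shell_command"],
            },
        },
    }
]

# These kwargs would accompany the chat completion request.
request_kwargs = {"tools": tools, "tool_choice": "auto"}
```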