
feat: support function calling #514

Merged
sigoden merged 9 commits into main from feat-function-calling on May 18, 2024
Conversation

@sigoden (Owner) commented May 16, 2024

Why use LLM function calling?

LLM function calling is a powerful technique that allows Large Language Models (LLMs) to interact with external tools and data sources. This significantly enhances their capabilities and opens up exciting possibilities for various applications.

AIChat LLM Functions

AIChat implements function calling by running an external tool.

I created a new repo, https://github.com/sigoden/llm-functions, and wrote some tools that work with function calling.

You can configure the function calling feature according to its README.

The client types that currently support function calling are: openai, claude, gemini and cohere.

Function calling in AIChat must work with a role or session.

A function_filter field is added to the role/session. It controls which functions are used.

AIChat provides a built-in %functions% role whose function_filter is .*, matching all functions.

In AIChat, roles are similar to GPTs:

```yaml
- name: myfuncs
  prompt: ''
  function_filter: func_foo|func_bar|func_.*
```
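A role with a function_filter can then be selected when invoking AIChat, using its -r/--role flag; the example questions below are purely illustrative:

```sh
# Use the custom role so its function_filter applies
aichat --role myfuncs "What's the weather in Paris?"

# Or match every available function via the built-in role
aichat --role '%functions%' "What's the weather in Paris?"
```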

Function Types

Retrieve Type

The function returns JSON data.
AIChat retrieves the data and sends it to the LLM for further processing.
AIChat does not ask for permission to run the function, nor does it print the function's output.
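As a rough illustration (this script is hypothetical, not taken from llm-functions), a retrieve-type tool can be a plain executable that accepts arguments and prints JSON to stdout:

```sh
#!/usr/bin/env bash
# Hypothetical retrieve-type tool. AIChat runs it, captures the JSON it
# prints, and sends that JSON back to the LLM for further processing.
# Usage: get_current_weather --location 'London, UK'

location=""
while [[ $# -gt 0 ]]; do
  case "$1" in
    --location) location="$2"; shift 2 ;;
    *) shift ;;
  esac
done

# A real tool would query a weather API here; this sketch emits static JSON.
printf '{"location": "%s", "temperature_c": 18, "condition": "cloudy"}\n' "$location"
```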

Execute Type

The function can do anything.
AIChat will ask for permission before running the function.


AIChat categorizes functions whose names start with "may_" as "execute" type and all others as "retrieve" type.
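Following that naming convention, here is a hedged sketch of an execute-type tool; the name and behavior are invented for illustration:

```sh
#!/usr/bin/env bash
# Hypothetical execute-type tool. Because its name starts with "may_",
# AIChat asks for permission before running it.
# Usage: may_create_file --path /tmp/notes.txt

path=""
while [[ $# -gt 0 ]]; do
  case "$1" in
    --path) path="$2"; shift 2 ;;
    *) shift ;;
  esac
done

# An execute-type tool can do anything; this one just creates a file.
touch "$path" && echo "created $path"
```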

Conclusion

Closes #507.

sigoden changed the title from "refactor: support function calling" to "feat: support function calling" on May 16, 2024
@tkanarsky commented

Nice! It seems to work pretty well. Is function calling available as part of a continued conversation? Right now, if you ask a question that invokes a tool call, the tool call's output replaces the LLM's response instead of being used in it.


@sigoden (Owner, Author) commented May 16, 2024

The text generated by function calls is beyond the LLM's own capabilities; including it in the session conversation would only cause interference.

Note: functions are bound to roles, not sessions. It's recommended to use temporary roles during a session.


Another factor to consider is the unpredictable variety of text generated by external tools. The text can be empty (for example, if the tool simply opens another application), dynamic (like a progress bar), or unexpectedly large. It is inappropriate to include such output in a conversation.

@sigoden (Owner, Author) commented May 17, 2024


AIChat now fully supports the function calling feature. It can retrieve information from functions and pass it back to the LLM for further processing. @tkanarsky

@gilcu3 commented May 17, 2024

@sigoden, in the current implementation, functions are called again later in the conversation even though they were already called with the same parameters. Is this required in some scenario?
For most functions, and as in a natural conversation, I think caching previous tool input/output and making it part of the conversation history makes more sense.

@sigoden (Owner, Author) commented May 17, 2024

AIChat saves the question and the final result in the session. The intermediate data returned by a function call has an uncertain format and variable size; it is not as valuable as you think.

Caching makes no sense: how would you know when the cache should be invalidated?
Don't fall into the trap of over-optimization. @gilcu3

@gilcu3 commented May 17, 2024

> AIChat saves the question and the final result in the session. The intermediate data returned by a function call has an uncertain format and variable size; it is not as valuable as you think.
>
> Caching makes no sense: how would you know when the cache should be invalidated? Don't fall into the trap of over-optimization. @gilcu3

In my testing I got the impression that the tool was not storing the results in the session. If that's not the case, then it should be fine :) And yes, I agree there is no need to over-optimize :)

@sigoden (Owner, Author) commented May 18, 2024

If the LLM doesn't support parallel function calling, it will cause an infinite loop:

```
Call get_current_weather --location 'London, England'
Call get_current_weather --location 'Paris, France'
Call get_current_weather --location 'London, UK'
Call get_current_weather --location 'Paris, France'
```

Is this what you're talking about? @gilcu3

Despite multiple requests, they are counted as a single round-trip message. We can, and need to, reuse function call results.

@sigoden (Owner, Author) commented May 18, 2024

AIChat can now reuse tool call results. LLMs that don't support parallel function calls now work.

@gilcu3 commented May 18, 2024

> Is this what you're talking about? @gilcu3

What I observed was the LLM making a superfluous function call, because the answer had already been mentioned earlier in the conversation. The infinite loop is/was a different issue.

@tkanarsky commented
@sigoden you're the GOAT!! Thank you for the implementation!

sigoden merged commit b4a40e3 into main on May 18, 2024 (3 checks passed).
sigoden deleted the feat-function-calling branch on May 18, 2024 at 11:06.