Decouple thoughts generation from function call generation #6947
Comments
responded on discord :)

@ntindle can you give a link to the Discord discussion here for those of us arriving from the web at large? Or maybe a summarization here? I think this is a very important topic going forward; this project should not be shackled to OpenAI.

Absolutely. Good point. I'll try to be more diligent about that in the future too.

Link to start of discussion: https://discord.com/channels/1092243196446249134/1095817829405704305/1212845507060437033
Summary:

I wanted to add to @Wladastic's comment. I like the idea of using multiple models; perhaps eventually a model could be trained specifically for the functions themselves.

Thank you :)

I just went and ran it myself again; I can confirm the LLM doesn't even respond in the correct format.

@Wladastic are you working out of a branch? Or are you able to get all this working on main? And/or do you have a link to this other project?

@joshuacox

@Wladastic I completely understand, sometimes you need to simplify things to isolate the parts you are working with. I encourage you to put up a branch or repo; it might be easier for some of us to contribute to as well.

I could try to, but my project is now merged with my own AI, which works differently from AutoGPT right now.

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

This issue was closed automatically because it has been stale for 10 days with no activity.
Summary 💡
Currently the AutoGPT app assumes the underlying LLM supports OpenAI-style function calling. Even though there is a config variable `OPENAI_FUNCTIONS` which defaults to false, turning it on or off is a no-op: as far as I can tell, the value of this variable is not used anywhere in the system. This bug is hidden by the fact that all the supported `OPEN_AI_CHAT_MODELS` have `has_function_call_api=True` (`AutoGPT/autogpts/autogpt/autogpt/core/resource/model_providers/openai.py`, line 110 in 64f48df).
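To illustrate the shape of that registry, here is a simplified stand-in (the `ChatModelInfo` class below is a sketch, not AutoGPT's actual definition; only the `has_function_call_api` flag is the point):

```python
from dataclasses import dataclass


@dataclass
class ChatModelInfo:  # simplified stand-in for AutoGPT's actual model-info class
    name: str
    max_tokens: int
    has_function_call_api: bool = False


OPEN_AI_CHAT_MODELS = {
    info.name: info
    for info in [
        ChatModelInfo(name="gpt-3.5-turbo", max_tokens=4096, has_function_call_api=True),
        ChatModelInfo(name="gpt-4", max_tokens=8192, has_function_call_api=True),
        # ... every supported model is registered with has_function_call_api=True,
        # which is why the unused OPENAI_FUNCTIONS flag never causes visible breakage
    ]
}
```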
So even when `OPENAI_FUNCTIONS` is turned off, during e.g. agent creation the system still expects to be interacting with a model that supports OpenAI-style function calling. This usually isn't a problem, since all of the supported models have function calling enabled, so errors never get raised. The errors only arise when you try to use a non-OpenAI model (e.g. a local model via Ollama, llamafile, etc.) by setting `OPENAI_API_BASE_URL=http://localhost:8080/v1`. If the model doesn't support function calling (i.e. the `tool_calls` field of the model response is empty), you get a `ValueError: LLM did not call create_agent function; agent profile creation failed` from this line.
It seems like there should be a happy path for delegating function calling to customizable/pluggable components instead of assuming the underlying LLM will take care of everything end-to-end. I think this would make it easier for people to use local LLMs, as well as to mix local LLMs with expensive APIs. Maybe this happy path already exists -- if so, I'd be happy to write docs for it.
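One shape such a pluggable component could take -- purely a sketch, not AutoGPT's actual API; the `extract_function_call` name and the JSON-in-text fallback convention are assumptions:

```python
import json
import re


def extract_function_call(message) -> dict | None:
    """Hypothetical fallback extractor: prefer native tool calls, otherwise
    try to recover a {"name": ..., "arguments": ...} object from plain text."""
    # Happy path: the provider supports OpenAI-style function calling.
    if getattr(message, "tool_calls", None):
        call = message.tool_calls[0]
        return {"name": call.function.name,
                "arguments": json.loads(call.function.arguments)}

    # Fallback: the model only produced prose; look for an embedded JSON object.
    # This convention (a JSON blob in the reply) is an assumption, not a spec.
    match = re.search(r"\{.*\}", message.content or "", re.DOTALL)
    if match:
        try:
            obj = json.loads(match.group(0))
            if "name" in obj:
                return obj
        except json.JSONDecodeError:
            pass
    return None  # caller decides whether this is fatal or retryable
```

With something like this wired in, an empty `tool_calls` field becomes a recoverable condition (retry, reprompt, or parse) rather than an immediate `ValueError`.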
Related to this issue: #6336
Let me know what you think, @ntindle.
Examples 🌈
No response
Motivation 🔦
No response