Autogen llm_config error #1863
Comments
There are some updates in the required format of config_list. Could you set llm_config to False, or specify a non-empty 'model' either in 'llm_config' or in each config of 'config_list', as suggested in the error message, if you are using 0.2.16? Thank you!
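For concreteness, here is a minimal sketch of the two workarounds suggested above, assuming pyautogen 0.2.16; the agent names, model name, URL, and key are placeholders, not values from this thread:

```python
from autogen import ConversableAgent

# Workaround 1: disable LLM-backed replies entirely for this agent.
agent = ConversableAgent(name="helper", llm_config=False)

# Workaround 2: give every entry in config_list a non-empty "model" key,
# even when the backend (e.g. a LiteLLM proxy) does not strictly need it.
llm_config = {
    "config_list": [
        {
            "model": "gpt-3.5-turbo",             # placeholder model name
            "base_url": "http://localhost:4000",  # assumed LiteLLM proxy URL
            "api_key": "NULL",                    # proxy may not check the key
        }
    ]
}
agent_with_llm = ConversableAgent(name="assistant", llm_config=llm_config)
```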
If I am interpreting this correctly, it seems that litellm does not need a model arg to work? So perhaps requiring this is too heavy-handed?
@jackgerrits good point. I'm OK with removing the check in ConversableAgent. @gunnarku what do you think?
Looping in @olgavrou
The dilemma is between breaking backward compatibility and more robust validation of preconditions (and I admit that I am biased towards more formality ;-)). If we remove the check for the presence of the `model` key, we lose that validation for the configurations that do require it. If it's the only known case of overly aggressive validation, then modifying the configuration by the user is an obvious fix. Issuing a warning message is a middle-ground solution. However, @jackgerrits and @sonichi, this is your show; do what's right for AutoGen customers ;-)
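As an illustration of that middle-ground option, a hedged sketch of what warning-based validation could look like; this is not AutoGen's actual implementation, and `validate_llm_config` is a hypothetical helper:

```python
import warnings

def validate_llm_config(llm_config) -> None:
    """Hypothetical precondition check that warns instead of raising."""
    if llm_config is False:
        return  # LLM explicitly disabled; nothing to validate
    configs = llm_config.get("config_list") or [llm_config]
    for cfg in configs:
        if not cfg.get("model"):
            warnings.warn(
                "Config entry has no non-empty 'model'. This is fine for "
                "backends such as LiteLLM or custom model clients, but "
                "OpenAI-style endpoints will likely reject the request.",
                UserWarning,
            )
```

This keeps the stricter check visible to users who forgot the key while not breaking proxy-style setups that legitimately omit it.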
I tried to find a solution to accommodate both requirements in #1946
Custom models also don't necessarily need the `model` field.
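For illustration, a hedged sketch of what such a model-less entry can look like when a custom model client is used; the field names follow the pattern from AutoGen's custom-model-client support, and the class name is a placeholder:

```python
# Hypothetical config entry for a custom model client; whether a "model"
# key is required depends on the client class itself, which is the point
# being made above.
config_list = [
    {
        "model_client_cls": "MyCustomClient",  # assumed custom client class name
        # no "model" key: the custom client may not need one
    }
]
```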
Hi, when my version is `pyautogen==0.2.16`, AutoGen raises an error telling me to set `llm_config` to False or to specify a non-empty 'model' either in `llm_config` or in each config of `config_list`. But when I switch to `pyautogen==0.2.2`, it does not give any error. LiteLLM runs on that port without any issues. This is my `llm_config`:
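The original config was not preserved in this thread; the following is a representative sketch of a LiteLLM-proxy-style `llm_config` with no "model" key, with the URL and key as placeholders:

```python
# Representative llm_config pointing at a local LiteLLM proxy; note the
# absence of a "model" key, which 0.2.16's stricter validation rejects.
llm_config = {
    "config_list": [
        {
            "base_url": "http://localhost:4000",  # assumed proxy host/port
            "api_key": "NULL",                    # placeholder; proxy may ignore it
        }
    ]
}
```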