Handle azure_deployment Parameter Issue in GPTAssistantAgent to Maintain Compatibility with OpenAIWrapper #1721
Conversation
Codecov Report

@@ Coverage Diff @@
##             main    #1721       +/-   ##
===========================================
+ Coverage   39.33%   51.17%   +11.83%
===========================================
  Files          57       57
  Lines        6096     6097        +1
  Branches     1365     1482      +117
===========================================
+ Hits         2398     3120      +722
+ Misses       3502     2729      -773
- Partials      196      248       +52
@qingyun-wu please don't run the openai contrib test until the logging of api_keys etc. is removed.
…ain Compatibility with OpenAIWrapper (#1721)
* support getting model from both llm config and config list
* address comments
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Why are these changes needed?
Context

The GPTAssistantAgent supports an optional azure_deployment parameter. When this parameter is specified during initialization of the AzureOpenAI client, calling client.beta.assistants.list() raises a NotFoundError with error code 404. Omitting the azure_deployment parameter results in normal operation. This behavior is incompatible with the implementation logic of OpenAIWrapper and points to a potential deficiency in the AzureOpenAI implementation (openai/openai-python#1163).

Notably, GPTAssistantAgent requires the model/azure_deployment to be specified only when calling client.beta.assistants.create.

To circumvent this issue without affecting the existing logic of other agents, and until a fix is released for the openai Python library, we should handle the azure_deployment parameter locally within GPTAssistantAgent.
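The workaround described above can be sketched as follows. This is an illustrative helper, not the actual autogen implementation: the idea is to strip azure_deployment from the kwargs used to construct the AzureOpenAI client (so client.beta.assistants.list() works) and reuse it as the model argument only when calling client.beta.assistants.create.

```python
def split_azure_deployment(client_kwargs):
    """Remove ``azure_deployment`` from the client construction kwargs.

    Returns a tuple of (clean_kwargs, deployment). The deployment name
    should then be passed as ``model`` only to
    ``client.beta.assistants.create``, never to the AzureOpenAI client
    constructor, avoiding the 404 from ``client.beta.assistants.list()``.
    """
    clean = dict(client_kwargs)  # copy so the caller's dict is untouched
    deployment = clean.pop("azure_deployment", None)
    return clean, deployment


# Illustrative config; keys mirror what an Azure llm_config might contain.
cfg = {
    "api_key": "...",
    "azure_endpoint": "https://example.openai.azure.com",
    "azure_deployment": "gpt-4-deployment",
}
client_cfg, deployment = split_azure_deployment(cfg)
# client_cfg no longer contains azure_deployment;
# deployment holds "gpt-4-deployment" for use at assistants.create time.
```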
What to do in this PR?
The PR addresses the comprehensive handling of llm config issues outlined in #1688.
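The commit log mentions supporting model lookup "from both llm config and config list". A minimal sketch of that resolution order (function name and structure are assumptions for illustration, not the PR's exact code): prefer a top-level "model" key in llm_config, otherwise fall back to the first entry in "config_list" that specifies one.

```python
def resolve_model(llm_config):
    """Return the model name from llm_config.

    Looks at the top-level "model" key first, then falls back to the
    first "config_list" entry that specifies a model. Returns None if
    neither is present.
    """
    if llm_config.get("model"):
        return llm_config["model"]
    for entry in llm_config.get("config_list", []):
        if entry.get("model"):
            return entry["model"]
    return None


# Top-level key wins over config_list entries.
print(resolve_model({"model": "gpt-4", "config_list": [{"model": "gpt-35-turbo"}]}))
# Falls back to config_list when no top-level model is given.
print(resolve_model({"config_list": [{"api_key": "..."}, {"model": "gpt-35-turbo"}]}))
```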
Related issue number
Closes #1688
Checks