Using a more robust "reflection_with_llm" summary method #1575
Conversation
Codecov Report
Additional details and impacted files
@@ Coverage Diff @@
## main #1575 +/- ##
===========================================
+ Coverage 35.19% 68.70% +33.51%
===========================================
Files 44 44
Lines 5328 5323 -5
Branches 1236 1303 +67
===========================================
+ Hits 1875 3657 +1782
+ Misses 3299 1317 -1982
- Partials 154 349 +195
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Consider improving the formatting of
Why are these changes needed?
The current way of getting a summary under the `reflection_with_llm` option is not robust (it does not work well with function/tool calls). This change re-uses code from ConversableAgent's `generate_oai_reply`, which should be more robust.

Default behavior: if `summary_method` is not specified (the default setting), the last message is used as the summary.
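For illustration, here is a minimal sketch of how the `summary_method` option is typically passed to `initiate_chat` (the model name, API key, and agent setup below are placeholders, and the exact return shape may differ between AutoGen versions):

```python
import autogen

# Placeholder LLM configuration; the model and api_key are illustrative.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

assistant = autogen.ConversableAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.ConversableAgent(
    name="user_proxy",
    llm_config=False,
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
)

# summary_method="reflection_with_llm" asks an LLM to reflect on the whole
# conversation to produce the summary; if summary_method is omitted, the
# default behavior described above applies and the last message is used.
chat_result = user_proxy.initiate_chat(
    assistant,
    message="Compare unit tests and integration tests in two sentences.",
    summary_method="reflection_with_llm",
)
print(chat_result.summary)
```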
Tests are enhanced to cover the returned chat result and the summary.
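A hypothetical test along the lines of what the PR describes (not the repository's actual test code) might assert that a result object is returned and that its summary is populated:

```python
# Hypothetical test sketch; the agent setup mirrors the example above and is
# illustrative rather than taken from this PR's test suite.
import autogen


def test_chat_returns_result_with_summary():
    llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}
    assistant = autogen.ConversableAgent(name="assistant", llm_config=llm_config)
    user_proxy = autogen.ConversableAgent(
        name="user_proxy",
        llm_config=False,
        human_input_mode="NEVER",
        max_consecutive_auto_reply=1,
    )

    result = user_proxy.initiate_chat(
        assistant,
        message="What is 2 + 2?",
        summary_method="reflection_with_llm",
    )

    # The chat should return a result, and the summary should be non-empty.
    assert result is not None
    assert isinstance(result.summary, str) and result.summary.strip()
```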
Related issue number
Checks