
Using a more robust "reflection_with_llm" summary method #1575

Merged: 11 commits, Feb 7, 2024
Conversation

@qingyun-wu (Contributor) commented Feb 7, 2024

Why are these changes needed?

  1. The current way of getting a summary under the `reflection_with_llm` option is not robust (it does not work well with function/tool calls). This change reuses code from `ConversableAgent`'s `generate_oai_reply`, which should be more robust.

  2. Default behavior: if `summary_method` is not specified (the default setting), use the last message as the summary.

  3. Enhance tests to cover the returned result and the summary.
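The default behavior described in point 2 can be sketched as follows. This is an illustrative sketch, not autogen's actual implementation; the function name `get_summary` and the message-dict shape are assumptions for the example.

```python
def get_summary(messages, summary_method=None):
    """Return a chat summary; default to the last message's content.

    Illustrative sketch of the default-summary behavior: when no
    summary_method is given, the summary is simply the content of the
    final message in the chat history.
    """
    if summary_method is None:
        # Default: take the content of the last message, empty chat -> "".
        return messages[-1].get("content", "") if messages else ""
    # Other methods (e.g. "reflection_with_llm") would instead ask an LLM
    # to reflect on the whole conversation; not sketched here.
    raise NotImplementedError(f"summary_method {summary_method!r} not sketched")

chat = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "2+2 equals 4."},
]
print(get_summary(chat))  # -> 2+2 equals 4.
```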


@codecov-commenter commented Feb 7, 2024

Codecov Report

Attention: 4 lines in your changes are missing coverage. Please review.

Comparing base (e0fa6ee, 35.19% coverage) to head (51bcf23, 68.70% coverage).

Files                                     Patch %   Lines
autogen/agentchat/conversable_agent.py    80.00%    3 Missing, 1 Partial ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1575       +/-   ##
===========================================
+ Coverage   35.19%   68.70%   +33.51%     
===========================================
  Files          44       44               
  Lines        5328     5323        -5     
  Branches     1236     1303       +67     
===========================================
+ Hits         1875     3657     +1782     
+ Misses       3299     1317     -1982     
- Partials      154      349      +195     
Flag        Coverage Δ
unittests   68.66% <80.00%> (+33.47%) ⬆️

Flags with carried forward coverage won't be shown.


@ekzhu (Collaborator) commented Feb 7, 2024

Consider improving the formatting of the ChatResult class's __str__ and __repr__; right now the output is hard to read.
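One way to make the output more readable is a one-field-per-line __str__. The sketch below is a hypothetical example, not autogen's actual ChatResult definition; the field names (summary, cost, chat_history) are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ChatResult:
    # Hypothetical fields for illustration only.
    summary: str = ""
    cost: dict = field(default_factory=dict)
    chat_history: list = field(default_factory=list)

    def __str__(self) -> str:
        # Print one field per line instead of the dense default repr,
        # summarizing the chat history by its length.
        lines = [
            f"summary: {self.summary}",
            f"cost: {self.cost}",
            f"messages: {len(self.chat_history)}",
        ]
        return "\n".join(lines)

result = ChatResult(summary="2+2 equals 4.", chat_history=[{"role": "user"}])
print(result)
```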

@sonichi added this pull request to the merge queue Feb 7, 2024
Merged via the queue into main with commit 2a2e466 on Feb 7, 2024
56 checks passed
@sonichi deleted the patch1402 branch February 7, 2024 18:01
@sonichi mentioned this pull request Feb 8, 2024 (3 tasks)
whiskyboy pushed a commit to whiskyboy/autogen that referenced this pull request Apr 17, 2024

* summary exception

* badrequest error

* test

* skip reason

* error

* address func call in summary

* reflection_with_llm enhancement and tests

* remove old

* update notebook

* update notebook

5 participants