Make groupchat & generation async, actually #543
Conversation
Please add a test to cover the new code. It'll be even better if the test case mimics a real use case and is documented.
Hey, @sonichi, thanks for the feedback. What's still missing is a test for async, as you suggested, but to be honest I'm not sure how to proceed here. There are already some tests using higher-order async functions, and I don't have a good idea for a clean test case. In my actual code an async web server is running at the same time, so the difference is between being able to serve new requests in the meantime and not. Here I might replace that with a loop that runs asyncio.sleep(0.5) and has to iterate at least 10 times within the next 6 seconds or so, but that feels very hacky.
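A minimal sketch of the test idea described above (this is a hypothetical illustration, not code from the PR): count event-loop "ticks" while a blocking call runs in the default thread-pool executor. If the sync call ran directly on the event loop instead, the tick count would be zero.

```python
import asyncio
import time

def blocking_reply() -> str:
    time.sleep(0.5)  # stands in for a synchronous LLM call
    return "reply"

async def ticks_while_replying() -> int:
    loop = asyncio.get_running_loop()
    # Offload the sync call to a thread so the event loop stays free.
    fut = loop.run_in_executor(None, blocking_reply)
    ticks = 0
    while not fut.done():
        await asyncio.sleep(0.05)  # the loop keeps serving other work
        ticks += 1
    assert await fut == "reply"
    return ticks

ticks = asyncio.run(ticks_while_replying())
print(ticks)  # roughly 10; well above zero, showing the loop was not blocked
```

A real test would assert a lower bound on the tick count (e.g. at least a handful of iterations during the blocking call) rather than an exact number, since timing is jittery on CI.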
Thanks. I'd like experts in async to chime in about the best way to write the test.
@microsoft-github-policy-service agree company="Manifold"
@aayushchhabra1999 @ragyabraham Could you please review this PR? Do you have suggestions about how to write the test?
Codecov Report

Attention:

Additional details and impacted files

@@ Coverage Diff @@
##             main     #543       +/-   ##
===========================================
+ Coverage   26.69%   63.62%   +36.92%
===========================================
  Files          28       28
  Lines        3742     3766       +24
  Branches      849      895       +46
===========================================
+ Hits          999     2396     +1397
+ Misses       2670     1115     -1555
- Partials       73      255      +182

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
https://github.com/microsoft/autogen/actions/runs/6761699902/job/18376843408?pr=543
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* make groupchat & generation async actually
* factored out func call pre-select; updated indecies
* fixed code format issue
* mark prepare agents subset as internal
* func renaming
* func inputs
* return agents
* Update test/agentchat/test_async.py (Co-authored-by: Chi Wang <wang.chi@microsoft.com>)
* Update notebook/agentchat_stream.ipynb (Co-authored-by: Chi Wang <wang.chi@microsoft.com>)

Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Why are these changes needed?
These are the changes I made while developing an application that uses autogen alongside an asynchronous server. The core problem this PR addresses is that
ConversableAgent.generate_oai_reply
is sync, so even when calling the async version of initiate_chat or send, the whole application ends up blocked by the sync call underneath. This PR adds an async version of that method via a simple executor wrapper, plus some other changes to make async group chat work as expected. Looking forward to your feedback; I'd love to get this merged and not maintain my own fork any longer than I need to.
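The executor-wrapper approach can be sketched as follows. This is a hedged illustration of the general pattern, not the PR's actual code; the class and method names here are stand-ins.

```python
import asyncio

class AgentSketch:
    """Stand-in for an agent with a synchronous reply method."""

    def generate_oai_reply(self, messages: list) -> str:
        # Stands in for the real synchronous OpenAI call.
        return f"reply to {len(messages)} messages"

    async def a_generate_oai_reply(self, messages: list) -> str:
        # Run the sync method in the default thread-pool executor so the
        # event loop is not blocked while the LLM call is in flight.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self.generate_oai_reply, messages)

result = asyncio.run(
    AgentSketch().a_generate_oai_reply([{"role": "user", "content": "hi"}])
)
print(result)  # prints "reply to 1 messages"
```

With a wrapper like this, an async `initiate_chat`/`send` chain can await the reply without stalling a concurrently running web server on the same event loop.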
Key parts:
Related issue number
Checks