
Follow up of #665: respond to more run states #899

Closed · wants to merge 25 commits
Conversation

@gagb (Collaborator) commented Dec 6, 2023

@sidhujag added:
1) According to the spec there are other states we can handle in the wait_for_run function, so I added those (see the sketch below).
2) Added a termination msg param; kwargs are passed through to super().
3) register_reply using invoke_assistant and check_termination_and_human_reply in order, so we can check for exit/human reply when human_input_mode != "NEVER". Removed the hardcoded human_input_mode.

I added test cases
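
For context, here is a minimal sketch of the kind of state handling item 1 describes, using the run statuses the OpenAI Assistants API defines (queued, in_progress, requires_action, cancelling, plus the terminal completed, cancelled, failed, expired); the client wiring and the handle_required_action helper are illustrative assumptions, not this PR's exact code:

```python
import logging
import time

TERMINAL_FAILURES = ("cancelled", "failed", "expired")

def handle_required_action(client, run, thread_id):
    # Hypothetical helper: execute the requested tool calls and submit their
    # outputs via client.beta.threads.runs.submit_tool_outputs(...).
    raise NotImplementedError

def wait_for_run(client, run_id, thread_id, poll_interval=1.0):
    """Poll a run until it reaches a terminal state (illustrative sketch)."""
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status == "completed":
            # Return the messages produced by this run.
            return client.beta.threads.messages.list(thread_id=thread_id)
        if run.status in TERMINAL_FAILURES:
            # Log at an appropriate level instead of polling forever.
            logging.error("Run ended with status %r: %s", run.status, run.last_error)
            # Return an empty array when the loop ends without messages.
            return []
        if run.status == "requires_action":
            handle_required_action(client, run, thread_id)
        # "queued", "in_progress", and "cancelling" mean: keep polling.
        time.sleep(poll_interval)
```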

Why are these changes needed?

Related issue number

#665


jagdeep sidhu and others added 14 commits November 13, 2023 13:20
1) According to the spec there are other states we can handle in the wait_for_run function, so I added those.
2) Added a termination msg param.
3) register_reply using invoke_assistant and check_termination_and_human_reply in order, so we can check for exit/human reply when human_input_mode != "NEVER". Removed the hardcoded human_input_mode (see the sketch after this list).
4) Return an empty array if the while loop (the state machine) terminates for some reason without returning messages.
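
A sketch of what the registration order in item 3 could look like in autogen terms; the trigger list and method bodies are assumptions based on the description above, not the PR's verbatim code:

```python
from autogen import Agent, ConversableAgent

class GPTAssistantAgentSketch(ConversableAgent):
    def __init__(self, name, **kwargs):
        # Pass kwargs through to super() instead of hardcoding
        # human_input_mode="NEVER", so callers can opt into human replies.
        super().__init__(name, **kwargs)
        # register_reply inserts at position 0 by default, so the function
        # registered last is consulted first: the termination/human-reply
        # check runs before the assistant is invoked.
        self.register_reply([Agent, None], GPTAssistantAgentSketch._invoke_assistant)
        self.register_reply([Agent, None], ConversableAgent.check_termination_and_human_reply)

    def _invoke_assistant(self, messages=None, sender=None, config=None):
        # Placeholder for the call into the OpenAI Assistants API.
        return True, "assistant reply would be produced here"
```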
I received:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "1 validation error for Request\nbody -> role\n  value is not a valid enumeration member; permitted: 'user' (type=type_error.enum; enum_values=[<RoleParam.USER: 'user'>])", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```

when using message["role"], which is "assistant" for sent messages, but the API accepts only the "user" role for new messages added to a thread. Not sure how it works for you without this change?
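
A sketch of the fix this describes, assuming the 2023-era Assistants API, which accepted only role="user" for messages appended to a thread; the surrounding names are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
thread = client.beta.threads.create()
message = {"role": "assistant", "content": "hello"}  # an autogen-style message

# message["role"] cannot be passed through verbatim: the API rejects
# role="assistant" for new thread messages with the 400 error shown above.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",  # the only role accepted for incoming thread messages
    content=message["content"],
)
```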
Set the logging level based on errors/cancellations.

Remove extra message on failure. Remove last error message on success.

Only show the last error, cast to a string, in the content.
1) Remove is_termination_msg.
2) Add external run cancellation.
3) Remove _wait_for_run and internalize it through _get_run_response.
4) Process responses through _process_messages (see the sketch after this list).
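
A sketch of how items 2–4 could fit together: polling internalized in _get_run_response, an externally settable cancellation flag checked each iteration, and responses flattened by _process_messages. The method names follow the list above; the bodies are assumptions:

```python
import time
from openai import OpenAI

class AssistantRunnerSketch:
    """Illustrative skeleton only, not the PR's actual class."""

    def __init__(self):
        self._openai_client = OpenAI()
        self.cancellation_requested = False

    def _get_run_response(self, thread, run, poll_interval=1.0):
        while True:
            if self.cancellation_requested:
                # External cancellation: ask the API to cancel the run.
                self._openai_client.beta.threads.runs.cancel(
                    thread_id=thread.id, run_id=run.id)
                return []
            run = self._openai_client.beta.threads.runs.retrieve(
                thread_id=thread.id, run_id=run.id)
            if run.status == "completed":
                messages = self._openai_client.beta.threads.messages.list(
                    thread_id=thread.id)
                return self._process_messages(messages)
            if run.status in ("cancelled", "failed", "expired"):
                return []
            time.sleep(poll_interval)

    def _process_messages(self, api_messages):
        # Flatten API message objects into autogen-style dicts.
        return [{"role": "assistant", "content": m.content[0].text.value}
                for m in api_messages.data]
```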
@gagb (Collaborator, Author) commented Dec 16, 2023

> LGTM, and remember to add a method to set self.cancellation_requested

@IANTHEREAL, actually, I started implementing this and realized a mistake!
When would an end user want to request cancellation? I think there is only one case -- when they are waiting for an in-progress run to finish.
But if a run is in progress, they will have to set the value of cancellation_requested asynchronously. So if we are providing a method to set cancellation_requested, e.g., assistant.request_cancellation(), that method ought to be an async function?

Does this make sense? Is there a different scenario where someone would want to request cancellation while a thread is running?
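
To make the concern concrete, here is one hypothetical shape of the problem: the flag can only be flipped while a run is in progress if the polling loop yields control, e.g. by being async; with a blocking loop, a synchronous request_cancellation() would never get a chance to run. All names below are illustrative:

```python
import asyncio

class CancellableRunSketch:
    def __init__(self):
        self.cancellation_requested = False

    def request_cancellation(self):
        # Flipping the flag is itself trivially synchronous; the real question
        # is whether any other task can run while the poll loop is active.
        self.cancellation_requested = True

    async def wait_for_run(self, get_status, poll_interval=0.5):
        while not self.cancellation_requested:
            if get_status() == "completed":
                return "completed"
            # The await yields control, letting a concurrent task cancel us.
            await asyncio.sleep(poll_interval)
        return "cancelled"

async def main():
    runner = CancellableRunSketch()
    task = asyncio.create_task(runner.wait_for_run(lambda: "in_progress"))
    await asyncio.sleep(1.0)
    runner.request_cancellation()  # set "from outside" while the run polls
    print(await task)  # -> "cancelled"

asyncio.run(main())
```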

I was thinking that if a function call hangs, this cancellation mechanism actually won't be of much use, and the only thing users can do is kill the corresponding agent, right? And this is not a problem unique to the GPT assistant.
In what scenarios would users want to cancel?

  • Most likely, the run is always in 'in_progress' or 'queued' status.
  • It's less likely that, during the execution of some functions, users decide there is no need to continue this round.

My biggest confusion right now is: assuming the user successfully cancels, what happens next? Will the next run process the messages from the cancelled run again, or are they simply discarded? Does OpenAI support this?
Please let me know if my thinking has strayed.

@IANTHEREAL, how about we defer figuring out cancellation to a separate issue, and at least proceed with merging the intended functionality of this PR?

Agreed, let's merge it ASAP. LGTM now.

Just removed the unnecessary cancellation code. Please check and recommend it for merging.

@IANTHEREAL (Collaborator) commented:

Rest LGTM

@gagb (Collaborator, Author) commented Dec 17, 2023

@sidhujag could we get a review?

@sonichi (Contributor) commented Dec 19, 2023

@gagb this PR is approved but I am not sure if you'd like to merge it from the discussion. Could you clarify? Thanks.

@gagb (Collaborator, Author) commented Jan 10, 2024

> @gagb this PR is approved but I am not sure if you'd like to merge it from the discussion. Could you clarify? Thanks.

I don't understand?

@sonichi (Contributor) commented Jan 14, 2024

> > @gagb this PR is approved but I am not sure if you'd like to merge it from the discussion. Could you clarify? Thanks.
>
> I don't understand?

There are conversations that are not resolved. And there are conflicts. I'm not sure about the plan for this PR.

@gagb (Collaborator, Author) commented Mar 25, 2024

Closing because of inactivity.

@gagb gagb closed this Mar 25, 2024
whiskyboy pushed a commit to whiskyboy/autogen that referenced this pull request Apr 17, 2024
Labels
openai-assistant Issues related to assistants from OpenAI