respond to more run status states and other GPT assistant housekeeping changes #665

Closed
sidhujag wants to merge 10 commits

Conversation


@sidhujag (Author) commented Nov 13, 2023:

1) According to the spec there are other states we can handle in the wait_for_run function, so I added those (a hedged sketch of this status handling follows this list).
2) Added a termination msg param. Pass through kwargs to super().
3) register_reply using invoke_assistant and check_termination_and_human_reply in order, so we can check for exit/human reply when human_input_mode != "NEVER". Removed the hardcoded human_input_mode.
4) Return an empty array if the while loop (the state machine) terminates for some reason without returning messages.
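
A hedged sketch (not the PR's actual code) of what the status handling in (1) can look like, based on the run statuses defined by the OpenAI Assistants API; the client, helper name, and return format below are assumptions for illustration:

```python
# Hedged sketch only: branch on the run statuses defined by the Assistants API.
import time


def wait_for_run_sketch(client, thread_id: str, run_id: str, poll_interval: float = 1.0) -> dict:
    """Poll a run until it reaches a terminal state and return a chat-style message."""
    while True:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status == "completed":
            # Read the newest message the assistant appended to the thread.
            messages = client.beta.threads.messages.list(thread_id=thread_id)
            return {"role": "assistant", "content": messages.data[0].content[0].text.value}
        elif run.status in ("failed", "cancelled", "expired"):
            # Terminal error states: surface last_error when present, else the status.
            return {"role": "assistant", "content": str(run.last_error or run.status)}
        elif run.status == "requires_action":
            # The assistant requested tool calls; a full implementation would execute
            # them and call client.beta.threads.runs.submit_tool_outputs(...).
            raise NotImplementedError("tool calls are not handled in this sketch")
        else:
            # "queued", "in_progress", "cancelling": still running, keep polling.
            time.sleep(poll_interval)
```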
@sidhujag changed the title from "respond to more run status states" to "respond to more run status states and other GPT assistant housekeeping changes" on Nov 13, 2023.
I received:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "1 validation error for Request\nbody -> role\n  value is not a valid enumeration member; permitted: 'user' (type=type_error.enum; enum_values=[<RoleParam.USER: 'user'>])", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```

This happens when using message["role"], which is "assistant" for sent messages, but the API only accepts the "user" role for new messages added to a thread. I'm not sure how it works for you without this change?
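
A minimal sketch of the fix described above, assuming the openai v1 Python SDK; the helper name and surrounding code are illustrative:

```python
# Hedged sketch: append a conversation message to an assistant thread with role
# forced to "user", since the API above rejects "assistant" on message creation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def append_to_thread(thread_id: str, message: dict) -> None:
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",  # not message["role"], which is "assistant" for sent messages
        content=message["content"],
    )
```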
@sidhujag (Author) commented:

Also there are code formatting errors that need to be fixed.

Can you check again after the latest changes?

@gagb self-requested a review on November 20, 2023, 18:13.
@LeoLjl (Collaborator) left a comment:

LGTM

@sonichi (Contributor) commented Nov 21, 2023:

Could you please add test to cover the new code?

@IANTHEREAL (Collaborator) commented:
@sidhujag Hello, I would like to know if you have any updates?

@sidhujag (Author) commented:
Yes, I'm working on my own code quite a bit to get it functional, but I can add coverage as soon as I get time, if someone else doesn't get to it before me.

@gagb (Collaborator) commented Nov 28, 2023:

@sonichi @IANTHEREAL @sidhujag, I haven't used Mock from unittest before, but does the following strategy look good for creating the tests you asked for? This sample only tests one of the run statuses, but we can extend it to test more.

```python
import sys
from unittest.mock import Mock

import pytest

from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

# config_list and skip_test are assumed to be defined elsewhere in the test module.


@pytest.mark.skipif(
    sys.platform in ["darwin", "win32"] or skip_test,
    reason="do not run on MacOS or windows or dependency is not installed",
)
def test_process_messages_failed():
    # Create a mock assistant thread and run
    mock_thread = Mock()
    mock_thread.id = "thread1"

    # Create a mock run with status 'failed' and a last error
    mock_run = Mock()
    mock_run.status = "failed"
    mock_run.last_error = "Error message"

    instance = GPTAssistantAgent(
        "assistant",
        llm_config={
            "config_list": config_list,
        },
    )

    # Call _process_messages and assert it returns the correct response
    response = instance._process_messages(mock_thread, mock_run)
    instance.delete_assistant()
    assert response == {"role": "assistant", "content": "Error message"}
```

@IANTHEREAL (Collaborator) commented:
I believe for UT (unit testing), we should: 1) ensure that for each of the different states, the logic is processed as expected; 2) for each state's return value, ensure that a human agent or other agents can identify it and take the anticipated next action. Both of these can be expedited using mocks.

For all the GPT Assistant response states that mocks can't cover, we rely on reviewers to ensure accuracy, or on future E2E (end-to-end) testing.

```python
)

self.cancellation_requested = False
```
A collaborator commented:

Is there a usage example for this argument?

@sidhujag (Author) replied Nov 29, 2023:

It was assumed that an external object may want to cancel while the run is looping in the state machine; this provides a way to cancel before the run finishes, rather than wasting tokens or making other calls that are not intended.

A collaborator replied:

OK. Implementing a timeout mechanism for the GPT Assistant Agent would be even better.
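
A minimal sketch of how both ideas could fit together, assuming a polling loop like the one discussed in this PR; the class name, method name, and timeout parameter below are illustrative, not the PR's actual code:

```python
# Hedged sketch: check an external cancellation flag and a timeout deadline
# inside the polling loop; cancel the run via the API when either trips.
import time


class CancellableRunMixin:
    cancellation_requested: bool = False  # mirrors the flag added in this PR

    def _wait_for_run_with_limits(self, client, thread_id: str, run_id: str,
                                  timeout: float = 120.0, poll_interval: float = 1.0):
        deadline = time.monotonic() + timeout
        while True:
            if self.cancellation_requested or time.monotonic() > deadline:
                # Ask the API to stop the run so no further tokens are spent.
                return client.beta.threads.runs.cancel(thread_id=thread_id, run_id=run_id)
            run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
            if run.status in ("completed", "failed", "cancelled", "expired"):
                return run
            time.sleep(poll_interval)
```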


```python
def _get_run_response(self, thread, run):
def _process_messages(self, assistant_thread, run):
```
A collaborator commented:

Please annotate the arguments: what types of object are thread and run?
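
For reference, a hedged sketch of the annotations being requested, assuming the thread and run objects come from the openai v1 Python SDK (the PR may use different types):

```python
# Hedged sketch of the requested annotations, assuming the openai v1 SDK types.
from openai.types.beta.thread import Thread
from openai.types.beta.threads.run import Run


class GPTAssistantAgentSketch:
    def _get_run_response(self, thread: Thread, run: Run):
        ...

    def _process_messages(self, assistant_thread: Thread, run: Run):
        ...
```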

"role": "assistant",
"content": 'Expired',
}
return new_messages
A collaborator commented:

new_messages is not defined here.

@gagb (Collaborator) replied Nov 30, 2023:

Fixed this in this PR: https://github.com/syscoin/autogen/pull/1. @sidhujag @IANTHEREAL, please take a look and add it to your PR; I don't have write access to your repo.

@sonichi (Contributor) commented Dec 6, 2023:

@sidhujag Do you plan to continue working on this PR?

@gagb (Collaborator) commented Dec 6, 2023:

Continued in #899

@gagb closed this on Dec 6, 2023.
Labels: openai-assistant (Issues related to assistants from OpenAI)