bugfix/aqua.evaluation #761
Conversation
@@ -1265,7 +1273,10 @@ def cancel(self, eval_id) -> dict:
         raise AquaRuntimeError(
             f"Failed to get evaluation details for model {eval_id}"
         )
-        job_run_id = model.provenance_metadata.training_id
+        job_run_id = (
This is the fix for an existing potential bug in production: when model.provenance_metadata is None, the function breaks with an AttributeError.
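The guard described above can be sketched as follows. The `Provenance` and `Model` stand-ins are hypothetical placeholders; only the attribute names `provenance_metadata` and `training_id` come from the diff, and this is not the actual ADS implementation.

```python
# Minimal sketch of the None-safe lookup discussed in the comment above.
class Provenance:
    """Hypothetical stand-in for the model's provenance metadata."""

    def __init__(self, training_id=None):
        self.training_id = training_id


class Model:
    """Hypothetical stand-in for the evaluation model object."""

    def __init__(self, provenance_metadata=None):
        self.provenance_metadata = provenance_metadata


def get_job_run_id(model):
    # Before the fix, model.provenance_metadata.training_id raised
    # AttributeError whenever provenance_metadata was None.
    return (
        model.provenance_metadata.training_id
        if model.provenance_metadata
        else None
    )
```

With the conditional in place, a model that has no provenance metadata yields `None` instead of crashing the request.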
@@ -1328,7 +1339,7 @@ def delete(self, eval_id):
         job_id = model.custom_metadata_list.get(
             EvaluationCustomMetadata.EVALUATION_JOB_ID.value
         ).value
-    except ValueError:
+    except Exception:
model.custom_metadata_list can be None for a DataScienceModel, which surfaces as a 500 AttributeError from ads. We need to raise it as an AquaMissingKeyError as well.
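The broadened except clause can be sketched like this. The function name, the key default, and the error message are hypothetical; only `custom_metadata_list`, the `.get(...).value` access pattern, and the `AquaMissingKeyError` name come from the diff and comment.

```python
class AquaMissingKeyError(Exception):
    """Raised when expected evaluation metadata is absent."""


def get_evaluation_job_id(model, key="evaluation_job_id"):
    try:
        # Two failure modes reach this lookup: custom_metadata_list may be
        # None (AttributeError on .get), or the key may be missing (ValueError
        # in ads). Catching Exception maps both to AquaMissingKeyError instead
        # of letting the AttributeError bubble up as a 500.
        return model.custom_metadata_list.get(key).value
    except Exception as ex:
        raise AquaMissingKeyError(f"Missing metadata key: {key}") from ex
```

The `from ex` chaining preserves the original AttributeError/ValueError in the traceback for debugging while the caller sees a single, handleable error type.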
@@ -1538,20 +1550,6 @@ def _build_resource_identifier(
         )
         return AquaResourceIdentifier()

-    def _get_jobrun(
The function is unused, so it is safe to remove.
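One quick way to verify a function really is unused before deleting it is to count references to its name across the source tree. This is a generic sketch against a throwaway directory, not the actual repository layout; only the `_get_jobrun` symbol comes from the diff.

```shell
# Build a tiny throwaway tree standing in for the package (hypothetical paths).
demo=$(mktemp -d)
printf 'def _get_jobrun(self):\n    pass\n' > "$demo/evaluation.py"
printf 'print("no call sites here")\n' > "$demo/other.py"

# Count references to the symbol; exactly one hit (the def line itself)
# means there are no remaining call sites and the function is dead code.
grep -rn "_get_jobrun" "$demo" | wc -l
```

Dynamic dispatch (getattr, reflection, string-built names) can defeat a plain text search, so a grep count is a sanity check rather than a proof.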
Can we revert the changes in evaluation.py and add them separately later? This PR will change current behavior in production, even if the intention is to fix bugs. All these changes can go in as part of the next set of improvements/bug fixes.
These changes, as I commented, are worth fixing, and they don't touch the main logic. That is the point of writing tests: to find bugs and fix them. I will keep this PR open to be merged later.
Branch updated from 0dcc256 to f3cb14b (Compare)
Thanks for this PR, the changes look good.
Description
This PR aims to fix the following problem: