[Tune; RLlib] Missing stopping criterion should not error (just warn). #45613

Merged

Conversation

sven1977
Contributor

@sven1977 sven1977 commented May 29, 2024

Missing stopping criterion should not error (just warn).

Some RLlib algorithms take a few iterations to sample enough data from their EnvRunners to complete an RL episode. Only after that point are episode stats published in the result dicts returned by Algorithm.train() (Algorithm is-a tune.Trainable).

The current hacky fix is to prophylactically add common episode stats (such as episode_return_mean) to the initialized algorithm's result dict as NaNs. However, this is not a sustainable solution: users might add their own stats as stopping criteria and then run into the TuneError("Stopping criteria ... not provided") without any idea how to fix it. A one-time warning here would be the better solution.
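To make the scenario concrete, here is a minimal, purely illustrative sketch (not part of this PR) of a stopping criterion whose metric only appears after a few iterations; the trainable, metric name, and threshold are made up, and it assumes the Ray Tune API around the time of this PR:

    # Hypothetical example; `my_episode_return_mean` and the trainable are made up.
    from ray import train, tune

    def my_trainable(config):
        for i in range(10):
            result = {"iters_done": i}
            # Like RLlib's episode stats, the metric only shows up after a few
            # iterations, once enough data has been collected.
            if i >= 3:
                result["my_episode_return_mean"] = 100.0 * i
            train.report(result)

    tuner = tune.Tuner(
        my_trainable,
        run_config=train.RunConfig(
            # Early results lack the metric; previously this raised a TuneError,
            # with this change Tune logs a one-time warning instead.
            stop={"my_episode_return_mean": 500.0},
        ),
    )
    tuner.fit()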

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: sven1977 <svenmika1977@gmail.com>
@sven1977 sven1977 changed the title [Tune] Missing stopping criterion should not error (just warn). [Tune; RLlib] Missing stopping criterion should not error (just warn). May 30, 2024
@sven1977
Contributor Author

Hey @justinvyu, could you take a look at this PR?

Contributor

@justinvyu justinvyu left a comment

Thanks! I think this generally makes sense.

Could we just add a small unit test here, asserting that the warning is logged?

https://github.com/ray-project/ray/blob/master/python/ray/tune/tests/test_trial.py#L5

elif criterion not in result:
    if log_once("tune_trial_stop_criterion_not_found"):
        logger.warning(
            f"Stopping criterion {criterion} not found in result dict. "
Contributor

Add a quick note about the behavior that will follow:

"If this metric is never reported, the run will continue running until training is finished instead."

Contributor Author

done

sven1977 added 3 commits June 3, 2024 17:09
Signed-off-by: sven1977 <svenmika1977@gmail.com>
@sven1977
Contributor Author

sven1977 commented Jun 4, 2024

Hey @justinvyu, thanks for the review! Could you take another look? I've added the test cases and enhanced the warning message. Thanks! :)

Signed-off-by: sven1977 <svenmika1977@gmail.com>
Contributor

@justinvyu justinvyu left a comment

Thanks! Seems like the caplog is not working -- you may need to include the propagate_logs fixture.
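A rough sketch of how that might look in a pytest test; the `propagate_logs` fixture and the `trial` fixture (configured with a stopping criterion that the reported result lacks) are assumptions here, not taken from the PR:

    # Hedged sketch only: assumes a `propagate_logs` fixture from Ray's test
    # conftest and a `trial` whose stopping criteria reference a missing metric.
    import logging

    def test_warns_when_stopping_criterion_missing(propagate_logs, caplog, trial):
        with caplog.at_level(logging.WARNING):
            # The reported result dict does not contain the stopping criterion.
            assert not trial.should_stop({"some_other_key": True})
        assert "Stopping criterion" in caplog.text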

Comment on lines 130 to 133
result = _TrainingResult(
    checkpoint=None, metrics={"a": 9.999, "b/c": 0.0, "some_other_key": True}
)
assert not trial.should_stop(result.metrics)
Contributor

Can we just pass in a dict without the _TrainingResult wrapper?
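A sketch of that simplification, reusing the metrics dict from the snippet above:

    # Suggested form: pass the metrics dict directly, no _TrainingResult wrapper.
    assert not trial.should_stop({"a": 9.999, "b/c": 0.0, "some_other_key": True})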

Contributor Author

done

sven1977 added 5 commits June 6, 2024 13:22
Signed-off-by: sven1977 <svenmika1977@gmail.com>
Signed-off-by: sven1977 <svenmika1977@gmail.com>
Signed-off-by: sven1977 <svenmika1977@gmail.com>
@sven1977 sven1977 enabled auto-merge (squash) June 7, 2024 14:18
@github-actions github-actions bot added the go add ONLY when ready to merge, run all tests label Jun 7, 2024
Contributor

@justinvyu justinvyu left a comment

Thanks!

sven1977 added 2 commits June 8, 2024 16:38
Signed-off-by: sven1977 <svenmika1977@gmail.com>
@github-actions github-actions bot disabled auto-merge June 8, 2024 15:09
Signed-off-by: sven1977 <svenmika1977@gmail.com>
@sven1977 sven1977 merged commit 8b89a7b into ray-project:master Jun 9, 2024
6 checks passed
@sven1977 sven1977 deleted the fix_tune_stop_criterium_not_found_erro branch June 11, 2024 12:29
richardsliu pushed a commit to richardsliu/ray that referenced this pull request Jun 12, 2024