[Tune] Don't recommend `tune.run` API in logging messages when using the `Tuner` #33642
Conversation
Signed-off-by: Justin Yu <justinvyu@berkeley.edu>
python/ray/tune/tune.py
Outdated
logger.debug(
    "TrialRunner resumed, ignoring new add_experiment but "
    "updating trial resources."
)
What is this talking about?
Any new experiments/configurations passed to `tune.run` will be ignored (we only continue the current state). This happens when people pass `tune.run(different_experiment)` when resuming.
However, overwriting trainables is now a default part of resuming.
Can we maybe:
- Detect if an Experiment was passed or just a trainable (in this block: `if not isinstance(exp, Experiment)`)
- If an Experiment, continue to use the INFO message (maybe with updated wording)
- Else, don't print anything
Alternatively, we can keep it as is in the PR. I don't think anybody really passes experiments anyway, and the message was unhelpful to begin with.
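A minimal sketch of the suggested branching; the helper name, `experiments` argument, and message wording below are illustrative assumptions, not the actual tune.py code:

import logging
from typing import Callable, Sequence, Union

from ray.tune.experiment import Experiment

logger = logging.getLogger(__name__)


def _log_resume_notice(experiments: Sequence[Union[Experiment, Callable]]) -> None:
    # Only mention the ignored experiment when the user actually passed an
    # Experiment object; stay quiet when they only re-passed a trainable.
    for exp in experiments:
        if isinstance(exp, Experiment):
            logger.info(
                "Trial runner resumed. The newly passed Experiment will be "
                "ignored; only trial resources will be updated."
            )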
I've kept it as a DEBUG and improved the message a bit. It was a bit more complicated to tell if the user passed in an experiment, since all trainables get converted to an Experiment -- felt that keeping it at the DEBUG log level was good enough.
Looks good 🙂️
@@ -226,6 +227,7 @@ def run(
    _remote: Optional[bool] = None,
    # Passed by the Tuner.
    _remote_string_queue: Optional[Queue] = None,
    _tuner_api: bool = False,
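A hedged illustration of how this private flag could be threaded through from the Tuner side; `_fit_via_tune_run` is a made-up wrapper for illustration, not the actual Tuner internals:

from ray import tune


def _fit_via_tune_run(trainable, param_space):
    # The Tuner code path would set the private flag so that tune.run's log
    # messages reference the Tuner(...) entrypoint instead of tune.run(...).
    return tune.run(
        trainable,
        config=param_space,
        _tuner_api=True,
    )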
Is there any way to determine the entrypoint from certain internal state, instead of passing this flag explicitly?
Hmm, it's a bit hard. I did this because it seems like we have some special Tuner flags already, but maybe I could use a double underscore (`__tuner_api`) to make sure users really don't use this thing.
Hm I think @xwjiang2010 is also doing some context passing for entry point detection? Just checking to avoid duplicate work here
Looks like there is no telemetry for tuner vs. tune.run yet! This can be used in a future telemetry PR too then.
}
if _tuner_api
else {
    "entrypoint": "tune.run(...)",
When do users typically call tune.run()?
It's the old Tune API (before Tuner) that we will deprecate at some point in the future.
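For context, a minimal hedged sketch of the two entrypoints side by side (the trainable and search space are made up for illustration):

from ray import tune
from ray.tune import Tuner


def trainable(config):
    # A function trainable may simply return a final result dict.
    return {"score": config["x"] ** 2}


search_space = {"x": tune.grid_search([1, 2, 3])}

# Old entrypoint (to be deprecated eventually):
analysis = tune.run(trainable, config=search_space)

# New entrypoint:
results = Tuner(trainable, param_space=search_space).fit()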
Got it.
Signed-off-by: Justin Yu <justinvyu@berkeley.edu>
…/resume_vs_restore
Signed-off-by: Justin Yu <justinvyu@berkeley.edu>
This looks good to me, thanks! Ping me for merge
Signed-off-by: Justin Yu <justinvyu@berkeley.edu>
…the `Tuner` (ray-project#33642) Signed-off-by: Justin Yu <justinvyu@berkeley.edu> Signed-off-by: elliottower <elliot@elliottower.com>
…the `Tuner` (ray-project#33642) Signed-off-by: Justin Yu <justinvyu@berkeley.edu> Signed-off-by: Jack He <jackhe2345@gmail.com>
Why are these changes needed?
This PR changes some logs to use the correct Tune entrypoint, depending on whether the user is running with `tune.run` or `tuner.fit()`. Certain args like `config` and `param_space` differ between the two, and the restoration logic is also different. This PR also reduces the amount of redundant logs that we print on restoration. Finally, it fixes the log shown when automatic Ray initialization happens so that it actually shows up -- this will help users figure out how to customize `ray.init` options.

Problem
Before this change, doing Ctrl+C on your experiment would give you a message to restore with `tune.run`. Specifying an invalid `mode` in `TuneConfig` would also reference `tune.run`.

Auto ray init log example
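Since the screenshot of the auto `ray.init` log doesn't carry over here, a hedged example of the customization that log points users toward: calling `ray.init` explicitly before running Tune instead of relying on the automatic init (the options and search space shown are arbitrary):

import ray
from ray import tune
from ray.tune import Tuner


def trainable(config):
    return {"score": config["x"]}


# Initializing Ray explicitly lets you pick the options that the
# auto-init log message hints at.
ray.init(num_cpus=4, include_dashboard=False)

results = Tuner(trainable, param_space={"x": tune.uniform(0.0, 1.0)}).fit()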
Related issue number
Closes #31478
Checks
- I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.