Tools for selecting a default evaluation time #768
Conversation
There are a couple of suggested style changes for the tests, so I'm highlighting one comment on structure here: we also have logic in `check_eval_time()` on when to error or warn for combinations of metric type and `eval_time`, which we may want to consolidate in this series of PRs.
Co-authored-by: Hannah Frick <hfrick@users.noreply.github.com>
This will pass CI when #776 is merged :-<

Now I remember why
`show_best()` will pick an evaluation time for a dynamic metric when none is given. Previously, we would find what was in the data and select a time close to the median time. This was fine but inconsistent with other parts of tidymodels that do similar operations. For example, `tune_bayes()` has to have a metric to optimize on, so it uses the first metric in the metric set and, if needed, the first evaluation time given to the function.

This PR adds a few exported but internal helper functions (primarily `first_eval_time()`) as the canonical tools for these selections. This is one of a sequence of PRs:

- refactoring `show_best()` to be more modular with these tools
- updating functions that use `show_best()` or do similar computations (see "When we need to default to a single value for `eval_time`" #766)
- `autoplot()`
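To illustrate the "first value" convention described above, here is a minimal R sketch. This is not the actual tune implementation: the function name `first_eval_time()` comes from the PR, but the body, the `metric_is_dynamic` argument, and the error/warning behavior are assumptions for illustration.

```r
# Hypothetical sketch: for a dynamic metric, default to the first
# user-supplied evaluation time, mirroring how tune_bayes() defaults
# to the first metric in a metric set.
first_eval_time_sketch <- function(eval_time, metric_is_dynamic = TRUE) {
  if (!metric_is_dynamic) {
    # Static metrics do not use an evaluation time.
    return(NULL)
  }
  if (is.null(eval_time) || length(eval_time) == 0) {
    stop("A dynamic metric requires at least one evaluation time.")
  }
  if (length(eval_time) > 1) {
    warning("Multiple evaluation times given; using the first (",
            eval_time[1], ").")
  }
  eval_time[1]
}

# Warns about the extra values and returns the first time, 10.
first_eval_time_sketch(c(10, 20, 30))
```

The point of centralizing this in one helper is that `show_best()`, `autoplot()`, and similar functions all make the same default choice instead of each picking (for example) a median time.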