Update reference scripts to use the "Batteries Included" utils #4281
Closed
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this issue on Sep 7, 2021:
Summary: Partially unblocks pytorch/vision#4281

Previously we added warmup schedulers to PyTorch Core in PR #60836, which had two modes of execution, linear and constant, depending on the warmup function. In this PR we change the interface to a more direct form by separating the linear and constant modes into separate schedulers. In particular

```python
scheduler1 = WarmUpLR(optimizer, warmup_factor=0.1, warmup_iters=5, warmup_method="constant")
scheduler2 = WarmUpLR(optimizer, warmup_factor=0.1, warmup_iters=5, warmup_method="linear")
```

becomes

```python
scheduler1 = ConstantLR(optimizer, warmup_factor=0.1, warmup_iters=5)
scheduler2 = LinearLR(optimizer, warmup_factor=0.1, warmup_iters=5)
```

respectively.

Pull Request resolved: #64395
Reviewed By: datumbox
Differential Revision: D30753688
Pulled By: iramazanli
fbshipit-source-id: e47f86d12033f80982ddf1faf5b46873adb4f324
🚀 Feature
As part of the "Batteries Included" initiative (#3911) we are adding a number of new utils that can be used to produce SOTA results. Once those utils are landed, we should update our reference scripts to use them.
More specifically we need to:
- Add warmup support using the new LR warmup schedulers. New parameters should be added to configure the `warmup_method` (if any), the `warmup_iters` and the `warmup_factor`. The warmup scheduler should be chained with other existing schedulers (see the chaining sketch after this list).
- Use the `label_smoothing` argument of CrossEntropyLoss (ENH Adds label_smoothing to cross entropy loss pytorch#63122). A `label_smoothing` argument should be introduced on the script. To ensure BC, its default value should be 0.0 (no label smoothing). A usage sketch follows this list.
- Add mixup and cutmix augmentations. The `mixup-alpha` and `cutmix-alpha` arguments should have 0.0 default values to ensure BC (a mixup sketch follows this list).
- Use the `torch.optim.swa_utils.AveragedModel` util and ensure the EMA model is evaluated at the end of each epoch. Keep the `model-ema` param to ensure BC and ensure we have a good `model-ema-decay` default value (an EMA sketch follows this list).
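For the warmup item, here is a minimal sketch of chaining a warmup scheduler with a main scheduler. It uses the released `torch.optim.lr_scheduler` names (`LinearLR`, `SequentialLR` with `start_factor`/`total_iters`) rather than the `warmup_*` names shown in the commit above, and the model/optimizer are placeholders rather than reference-script code.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

# Toy model and optimizer purely for illustration.
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Linear warmup for the first 5 epochs, then cosine annealing for the rest.
warmup = LinearLR(optimizer, start_factor=0.1, total_iters=5)
main = CosineAnnealingLR(optimizer, T_max=95)
scheduler = SequentialLR(optimizer, schedulers=[warmup, main], milestones=[5])

for epoch in range(100):
    # train_one_epoch(model, optimizer, ...)  # placeholder for the actual training step
    scheduler.step()
```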
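For label smoothing, a minimal sketch of the `label_smoothing` argument of `CrossEntropyLoss`; the value 0.1 is only illustrative, while 0.0 keeps the current behaviour as proposed for the BC-preserving default.

```python
import torch
from torch import nn

# label_smoothing=0.0 reproduces today's behaviour; 0.1 is a typical non-zero value.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

logits = torch.randn(8, 1000)           # (batch, num_classes)
targets = torch.randint(0, 1000, (8,))  # class indices
loss = criterion(logits, targets)
```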
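For mixup/cutmix, a rough sketch of mixup applied to a batch, written directly in PyTorch rather than against any specific torchvision util; the helper name `mixup_batch` and its call site are assumptions, not the actual reference-script code.

```python
import torch

def mixup_batch(images, targets, num_classes, alpha=0.2):
    """Blend each sample with a randomly permuted sample from the same batch."""
    # One-hot encode so the targets can be blended as well.
    targets = torch.nn.functional.one_hot(targets, num_classes).float()
    if alpha <= 0.0:
        # alpha=0.0 leaves the batch untouched, matching the BC-preserving default.
        return images, targets
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_targets = lam * targets + (1.0 - lam) * targets[perm]
    return mixed_images, mixed_targets
```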
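For model EMA, a minimal sketch of `torch.optim.swa_utils.AveragedModel` with an exponential-moving-average `avg_fn`; the decay value and the loop structure are placeholders, not the decided defaults.

```python
import torch
from torch import nn, optim
from torch.optim.swa_utils import AveragedModel

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)

decay = 0.99  # placeholder; the issue asks for a well-chosen model-ema-decay default

def ema_avg(avg_param, param, num_averaged):
    # Exponential moving average applied to every parameter.
    return decay * avg_param + (1.0 - decay) * param

ema_model = AveragedModel(model, avg_fn=ema_avg)

for epoch in range(3):
    # ... training steps updating `model` ...
    ema_model.update_parameters(model)
    # evaluate(ema_model, ...)  # evaluate the EMA weights at the end of each epoch
```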