🚀 The feature

Add optional EMA support in our classification reference scripts.

The solution needs to take care of:
- The EMA creation: use the `torch.optim.swa_utils.AveragedModel` util to build it, similarly to how it's described in the documentation.
- Checkpointing: ideally store the weights of the EMA model in a `"model_ema"` key on the `state_dict`.
- Update the training loops: make the necessary calls to update the EMA weights, report stats on the validation dataset, etc.
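The three items above could be sketched roughly as follows. This is a minimal illustration, not the final implementation: the decay value and the shape of the training loop are placeholders, and only the `AveragedModel`/`update_parameters` usage and the `"model_ema"` checkpoint key come from the proposal.

```python
import torch
from torch.optim.swa_utils import AveragedModel

decay = 0.999  # illustrative value only; the real default is discussed below

# avg_fn receives (averaged_param, current_param, num_averaged) and
# returns the new running average — here an exponential moving average.
def ema_avg(ema_param, model_param, num_averaged):
    return decay * ema_param + (1.0 - decay) * model_param

model = torch.nn.Linear(4, 2)
model_ema = AveragedModel(model, avg_fn=ema_avg)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(3):  # stand-in for the real training loop
    loss = model(torch.randn(8, 4)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    model_ema.update_parameters(model)  # refresh the averaged weights

# Checkpointing: keep the EMA weights under a dedicated "model_ema" key,
# alongside the regular model weights.
checkpoint = {
    "model": model.state_dict(),
    "model_ema": model_ema.state_dict(),
}
```

At validation time, `model_ema` would be evaluated like any other module, so the same evaluation helper can report stats for both the raw and the averaged weights.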
The reference script should be updated in a BC way to accept a decay parameter for the EMA. The default value of the decay should be 0, indicating that EMA is turned off. The solution should also explore ways to keep the implementation memory-efficient and avoid exhausting GPU memory.
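The choice of 0 as the "off" default is consistent with the EMA update rule itself: with `ema = decay * ema + (1 - decay) * w`, a decay of 0 makes the average collapse to the latest raw weights at every step. A small sketch of that arithmetic (the function name is just for illustration):

```python
def ema_update(ema, weight, decay):
    """One EMA step: ema <- decay * ema + (1 - decay) * weight."""
    return decay * ema + (1.0 - decay) * weight

# With decay = 0 the "average" simply mirrors the latest weight,
# so the script can treat 0 as "EMA disabled".
ema = 5.0
for w in [1.0, 2.0, 3.0]:
    ema = ema_update(ema, w, decay=0.0)
assert ema == 3.0  # identical to the last raw weight

# With decay close to 1 the average moves slowly, smoothing over updates.
smoothed = ema_update(0.0, 1.0, decay=0.9)  # -> 0.1
```

In practice the script would likely skip building the `AveragedModel` entirely when the decay is 0, which also addresses the memory concern: no second copy of the weights is kept on the GPU unless EMA is requested.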
Motivation, pitch
Most SOTA models use EMA to get a few extra accuracy points for free. TorchVision should show how to do this in our reference scripts in order to help users build better models.
cc @datumbox