`on_after_backward` should always run after backward #7924
Comments
I'll take a look at this.
Awesome! Ask away if you need any help.
Hey @carmocca, would something as simple as this work?
A couple of questions for you:
Also @awaelchli, I notice there is a training loop refactor in progress. Would this change affect your work or vice versa?
I would put it right before this line:
Maybe just the batch idx, optimizer, and optimizer idx (see the sketch below). Could also include some of the other arguments of
Note that for a hook with these attributes, we already have
Yes, those need explicit calls.
That is the case, see:
Regardless, maybe we shouldn't call it in that case, as it won't get called when using manual optimization without AMP.
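For illustration, a minimal sketch of the hook signature being discussed, assuming the arguments suggested above (batch idx, optimizer, optimizer idx); the class and method shown here are hypothetical, not a final API:

```python
from pytorch_lightning import LightningModule


class MyModel(LightningModule):
    # Hypothetical signature for the proposed hook, using the arguments
    # suggested in the comment above; the final API may differ.
    def on_before_optimizer_step(self, batch_idx, optimizer, optimizer_idx):
        # Would run right before optimizer.step(), i.e. after all backward
        # passes for the current accumulation window have finished.
        ...
```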
It has already landed (the large training changes, at least), so it's more that it impacts yours. But fixing the conflicts shouldn't be hard.
🐛 Bug
If `accumulate_grad_batches` is enabled, we don't call `on_after_backward` until we step the optimizers:
https://github.com/PyTorchLightning/pytorch-lightning/blob/d209b689796719d1ab4fcc8e1c26b8b57cd348c4/pytorch_lightning/trainer/training_loop.py#L757-L763
This means `on_after_backward` is acting like `on_before_optimizer_step`. So we should add that hook and always run `on_after_backward` after backward.
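A minimal sketch of the ordering the issue proposes, written as a plain PyTorch loop rather than Lightning's actual trainer: `on_after_backward` fires after every backward pass, while a (hypothetical here) `on_before_optimizer_step` fires only on the batches where the optimizer actually steps.

```python
import torch
from torch import nn

# Minimal sketch of the proposed ordering -- not Lightning's real training loop.
# `on_after_backward` runs after every backward(), even while accumulating
# gradients; `on_before_optimizer_step` runs only when the optimizer steps.

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
accumulate_grad_batches = 4  # illustrative value


def on_after_backward():
    # e.g. log or inspect gradients right after they are produced
    pass


def on_before_optimizer_step(optimizer):
    # e.g. inspect the fully accumulated gradients before stepping
    pass


for batch_idx in range(16):
    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = nn.functional.mse_loss(model(x), y) / accumulate_grad_batches
    loss.backward()
    on_after_backward()  # always runs, regardless of accumulation

    if (batch_idx + 1) % accumulate_grad_batches == 0:
        on_before_optimizer_step(optimizer)  # only when we actually step
        optimizer.step()
        optimizer.zero_grad()
```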