[RFC] Add self.lr_schedulers() to LightningModule for manual optimization #6567

Conversation
# ignore other keys "interval", "frequency", etc.
lr_schedulers = [s["scheduler"] for s in self.trainer.lr_schedulers]
self.lr_schedulers() is supposed to be used in manual optimization, so even when dict keys like "interval" and "monitor" are defined in configure_optimizers(), this line ignores all of the keys except "scheduler". Related docs: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#learning-rate-scheduling
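For context, a minimal sketch of what that behaviour means on the user side (the module, layer, and scheduler choice below are illustrative, not taken from this PR):

```python
import torch
import pytorch_lightning as pl


class ManualLitModule(pl.LightningModule):  # illustrative module, not from the PR
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.automatic_optimization = False  # manual optimization

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
        # "interval" and "frequency" only matter in automatic optimization
        return [optimizer], [{"scheduler": scheduler, "interval": "step", "frequency": 2}]

    def training_step(self, batch, batch_idx):
        # returns the bare StepLR instance(s); the extra dict keys are dropped
        sch = self.lr_schedulers()
        ...
```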
Changed the title: "self.lr_schedulers() to LightningModule for manual optimization" → "self.lr_schedulers() to LightningModule for manual optimization [WIP]"
Force-pushed from b5128e0 to 5767487
Codecov Report

@@            Coverage Diff            @@
##           master    #6567    +/-   ##
========================================
- Coverage      91%      87%       -5%
========================================
  Files         192      192
  Lines       12190    12256       +66
========================================
- Hits        11144    10635      -509
- Misses       1046     1621      +575
Hello @akihironitta! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-04-04 20:34:31 UTC
Force-pushed from 6cd0f73 to 5767487
Changed the title: "self.lr_schedulers() to LightningModule for manual optimization [WIP]" → "self.lr_schedulers() to LightningModule for manual optimization"
Looks neat!
I think this update introduced a new bug: when automatic_optimization is disabled and you are using a scheduler that requires a metric, e.g. torch.optim.lr_scheduler.ReduceLROnPlateau, an error is raised. As soon as you define a monitor, you get a warning instead.

PyTorch Lightning version: 1.3.1. Edit: (I do not know what the preferred solution might be, but adding …)
@maxoppelt The lr schedulers need to be manually stepped in manual optimization. For schedulers that require the val_loss, that means they need to be stepped in a hook where that metric is available, e.g. validation_epoch_end.
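A sketch of that pattern, assuming manual optimization and an epoch-level metric (the module, placeholder loss, and aggregation below are illustrative):

```python
import torch
import pytorch_lightning as pl


class PlateauLitModule(pl.LightningModule):  # illustrative module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.automatic_optimization = False

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")
        # no "monitor" key needed: the scheduler is stepped by hand below
        return [optimizer], [scheduler]

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        loss = self.layer(batch).sum()  # placeholder loss
        opt.zero_grad()
        self.manual_backward(loss)
        opt.step()
        return {"loss": loss}

    def training_epoch_end(self, outputs):
        # step the plateau scheduler once per epoch with the metric it watches
        epoch_loss = torch.stack([out["loss"] for out in outputs]).mean()
        self.lr_schedulers().step(epoch_loss)
```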
Yes, that is a possible solution: disable the MisconfigurationException when automatic_optimization is False. Another design choice could be to disable the warning and provide access to the monitor key in training_epoch_end/validation_epoch_end.

A minor remark on the documentation: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#learning-rate-scheduling-manual is misleading. Most schedulers have an epoch argument in their step method, so one should not call scheduler.step() in training_step(); in particular, passing epoch to the scheduler's step raises an EPOCH_DEPRECATION_WARNING. This could lead to misunderstandings when reading the docs. However, calling the scheduler in training_epoch_end() might be problematic when using multiple dataloaders or DDP training?
My preference.

There is already a pattern for this: return the value from the step method, or use torchmetrics. For multiple dataloaders, training_epoch_end() will receive the outputs for all of them, so I see no big problem here. Are you interested in sending a PR for the error message handling / doc improvements?
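For instance, a minimal sketch of the torchmetrics variant (module, metric choice, and shapes are illustrative; torchmetrics syncs the metric state across processes under DDP):

```python
import torch
import torchmetrics
import pytorch_lightning as pl


class MetricLitModule(pl.LightningModule):  # illustrative module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 10)
        self.val_acc = torchmetrics.Accuracy()

    def forward(self, x):
        return self.layer(x)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.val_acc.update(self(x).softmax(dim=-1), y)

    def validation_epoch_end(self, outputs):
        # the monitored value is available here without any "monitor" key;
        # a manually stepped scheduler could consume it, e.g. self.lr_schedulers().step(monitored)
        monitored = self.val_acc.compute()
        self.val_acc.reset()
```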
What does this PR do?
Part of #6379.
Before disabling lr_scheduler.step() in manual optimization in #6379, this PR adds self.lr_schedulers() so that users can call lr_scheduler.step() in LightningModule at arbitrary intervals in manual optimization.

Example:
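A minimal sketch of the intended usage (the model, scheduler, and every-N-batches interval below are illustrative, not the exact snippet from the PR):

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):  # illustrative model
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.automatic_optimization = False

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
        return [optimizer], [scheduler]

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        loss = self.layer(batch).sum()  # placeholder loss
        opt.zero_grad()
        self.manual_backward(loss)
        opt.step()
        # step the scheduler at an arbitrary interval, e.g. every 100 batches
        if (batch_idx + 1) % 100 == 0:
            self.lr_schedulers().step()
```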
TODO
Update the docs: I'll update the docs in the follow-up PR which disables lr_scheduler.step() in manual optimization, because this PR itself doesn't enable users to call step() in manual optimization.

Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:
Did you have fun?
Make sure you had fun coding 🙃
Related to #6825.