deprecate enable_pl_optimizer as it is not restored properly #5244
Changes from all commits
First file in the diff:

@@ -167,8 +167,6 @@ def ddp_train(self, process_idx, mp_queue, model, is_master=False, proc_offset=0
        # 16-bit
        model = self.trainer.precision_connector.connect(model)

        self.trainer.convert_to_lightning_optimizers()

        # device ids change depending on the DDP setup
        device_ids = self.get_device_ids()

Review comments on the `self.trainer.convert_to_lightning_optimizers()` line:
- Not related to this PR, but since the call appears in each accelerator, it should live in the base class rather than in all the children...
- I couldn't do that, because `self.trainer.precision` isn't called at the same place in every accelerator.
- We have some cool ideas for accelerator refactoring in #4510, more on that soon :)
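The refactor the reviewers discuss above could look roughly like the sketch below: do the precision hookup and the optimizer conversion once in a shared base-class helper instead of in every accelerator. This is an illustration only; `AcceleratorBase`, `connect_precision_and_optimizers`, and `DDPAcceleratorSketch` are made-up names rather than Lightning's actual API, and the thread notes the real blocker is that the precision connector is not invoked at the same point in every accelerator.

```python
# Hypothetical sketch of the "move it to the base class" suggestion.
# All names here are illustrative; only trainer.precision_connector.connect()
# and trainer.convert_to_lightning_optimizers() come from the diff above.

class AcceleratorBase:
    def __init__(self, trainer):
        self.trainer = trainer

    def connect_precision_and_optimizers(self, model):
        # 16-bit / apex hookup first, because apex patches optimizer.step()
        model = self.trainer.precision_connector.connect(model)
        # then build the LightningOptimizer wrappers exactly once, in one place
        self.trainer.convert_to_lightning_optimizers()
        return model


class DDPAcceleratorSketch(AcceleratorBase):
    def ddp_train(self, process_idx, mp_queue, model):
        # each child accelerator now only calls the shared helper
        return self.connect_precision_and_optimizers(model)
```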
Second file in the diff:

@@ -17,7 +17,7 @@

 from torch.optim.optimizer import Optimizer

-from pytorch_lightning.utilities import TPU_AVAILABLE
+from pytorch_lightning.utilities import AMPType, TPU_AVAILABLE
 from pytorch_lightning.utilities.exceptions import MisconfigurationException

 if TPU_AVAILABLE:
@@ -62,6 +62,10 @@ def __init__(self,
         self._accumulate_grad_batches = accumulate_grad_batches
         self._optimizer_idx = None

+    @property
+    def optimizer(self):
+        return self._optimizer
+
     @property
     def defaults(self):
         return self._optimizer.defaults
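The new read-only `optimizer` property simply exposes the wrapped `torch.optim` instance, next to the existing `defaults` passthrough. A minimal usage sketch (constructing `LightningOptimizer` by hand here is purely for illustration; in practice the trainer creates the wrappers):

```python
import torch
from pytorch_lightning.core.optimizer import LightningOptimizer

param = torch.nn.Parameter(torch.zeros(1))
sgd = torch.optim.SGD([param], lr=0.1)

wrapped = LightningOptimizer(sgd)

# the new property hands back the underlying optimizer unchanged
assert wrapped.optimizer is sgd
# existing passthroughs such as `defaults` keep delegating to it
assert wrapped.defaults == sgd.defaults
```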
@@ -102,11 +106,13 @@ def _on_trainer_init(self, trainer):
                 break

     @classmethod
-    def to_lightning_optimizer(cls, optimizer, trainer):
-        if isinstance(optimizer, LightningOptimizer):
-            return optimizer
-        optimizer = cls(optimizer)
-        optimizer._on_trainer_init(trainer)
+    def _to_lightning_optimizer(cls, optimizer, trainer, opt_idx):
+        # apex overrides .step function and need to be wrapped on each step
+        if trainer.amp_backend == AMPType.APEX:
+            optimizer = cls(optimizer)
+            optimizer._on_trainer_init(trainer)
+        else:
+            optimizer = trainer.lightning_optimizers[opt_idx]
         return optimizer

     def _accumulated_batches_reached(self):

Review comments on the renamed `_to_lightning_optimizer`:
- This is a breaking API change; we need to add meta methods with a deprecation warning...
- It was a public API, so you never know who was using it... we should always be careful about what we make public versus protected...
- Either way is fine, in my opinion. LightningOptimizer is an experimental feature anyway, and the user should not even need to know about it at this point :)
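One possible shape for the deprecation shim asked for in the first comment above: keep the old public entry point, emit a warning, and forward to the new protected classmethod. This is a sketch rather than what was merged; the module-level function, the warning text, and the `opt_idx=0` default are assumptions.

```python
import warnings

from pytorch_lightning.core.optimizer import LightningOptimizer


def to_lightning_optimizer(optimizer, trainer, opt_idx=0):
    """Hypothetical backward-compatibility shim for the removed public API."""
    warnings.warn(
        "`to_lightning_optimizer(optimizer, trainer)` is deprecated; "
        "Lightning now manages LightningOptimizer wrappers internally.",
        DeprecationWarning,
    )
    # mirror the old early-return for already-wrapped optimizers
    if isinstance(optimizer, LightningOptimizer):
        return optimizer
    # delegate to the new protected classmethod introduced in this PR
    return LightningOptimizer._to_lightning_optimizer(optimizer, trainer, opt_idx)
```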
@@ -148,7 +154,7 @@ def __optimizer_step(self, *args, closure: Optional[Callable] = None, profiler_n
            **kwargs
        )

-        trainer.train_loop.on_before_zero_grad(self)
+        trainer.train_loop.on_before_zero_grad(optimizer)

         model.optimizer_zero_grad(
             trainer.current_epoch,
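With this change the hook receives the local `optimizer` variable rather than the `LightningOptimizer` instance (`self`) that is executing `__optimizer_step`. Assuming that local variable is the underlying `torch.optim` optimizer, a user override of `on_before_zero_grad` now sees the plain optimizer, roughly as in this sketch:

```python
import pytorch_lightning as pl
import torch
import torch.nn.functional as F


class GradInspectionModule(pl.LightningModule):
    """Minimal sketch of overriding the hook touched by the hunk above."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.mse_loss(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

    def on_before_zero_grad(self, optimizer):
        # after this PR, `optimizer` here is the raw torch.optim.SGD instance,
        # so its param_groups can be inspected right before zero_grad()
        print("lr before zero_grad:", optimizer.param_groups[0]["lr"])
```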
Review comments (the line they refer to is not shown in this view):
- Why do we need it if it's the default? I think it's more confusing than helpful.
- So people are aware they can opt out. Do you think it should be removed?
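A hedged guess at the context: the example under discussion seems to pass the `enable_pl_optimizer` Trainer flag explicitly even though the value matches the default, so that readers know they can turn the optimizer wrapping off. A sketch of both spellings (this is the flag the PR deprecates, so it disappears once the deprecation lands):

```python
import pytorch_lightning as pl

# Relying on the default wrapping behaviour:
trainer = pl.Trainer(max_epochs=1)

# Spelling the flag out so readers see they can opt out of the wrapping:
trainer_no_wrapping = pl.Trainer(max_epochs=1, enable_pl_optimizer=False)
```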