
Remove optimizer_connector.py #10119

Closed
daniellepintz opened this issue Oct 25, 2021 · 8 comments · Fixed by #10120

@daniellepintz
Contributor

daniellepintz commented Oct 25, 2021

Proposed refactoring or deprecation

Move on_trainer_init function to trainer.py
https://github.com/PyTorchLightning/pytorch-lightning/blob/c9bc10ce8473a2249ffa4e00972c0c3c1d2641c4/pytorch_lightning/trainer/connectors/optimizer_connector.py#L26

Move update_learning_rates function to training_epoch_loop.py
https://github.com/PyTorchLightning/pytorch-lightning/blob/c9bc10ce8473a2249ffa4e00972c0c3c1d2641c4/pytorch_lightning/trainer/connectors/optimizer_connector.py#L31
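
For illustration only, a rough skeleton of where the two functions could land (heavily simplified mock classes; the helper name _init_optimizer_state and the exact attributes it initializes are assumptions, not the actual Lightning code):

```python
# Hypothetical skeleton only -- simplified mocks, not the real Lightning classes.

class Trainer:
    def _init_optimizer_state(self) -> None:
        # would absorb OptimizerConnector.on_trainer_init, which appears to just
        # initialize empty optimizer/scheduler state on the trainer
        self.optimizers = []
        self.optimizer_frequencies = []
        self.lr_schedulers = []


class TrainingEpochLoop:
    def _update_learning_rates(self) -> None:
        # would absorb OptimizerConnector.update_learning_rates; scheduler stepping
        # and progress tracking would live directly on the loop (see the discussion below)
        ...
```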

Motivation

We are auditing the connectors, and optimizer_connector.py can easily be removed since it only has one significant function, which is used in a single place (training_epoch_loop.py).

@tchaton
Contributor

tchaton commented Oct 25, 2021

Dear @daniellepintz,

This sounds good to me. @carmocca @awaelchli @Borda, do you approve?

Best,
T.C

@awaelchli
Contributor

awaelchli commented Oct 25, 2021

Move on_trainer_init function to trainer.py

probably fine :)

Move update_learning_rates function to training_epoch_loop.py

Not convinced. The optimizers are owned by the trainer, so IMO it should be the responsibility of the owner to update that state. The alternative would be to create a function that takes the optimizers as inputs and updates the learning rates on them. Adding this method to the training_epoch_loop seems a bit arbitrary in light of loop customization. If we are not sure what the design will be in the future, we could do it as an intermediate step and mark the update_learning_rates method as protected.
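
A minimal sketch of that free-function alternative, assuming it operates on (optimizer, scheduler) pairs owned by the caller (the names and signature are illustrative, not a concrete proposal):

```python
from typing import Iterable, Tuple

import torch


def update_learning_rates(
    opt_sched_pairs: Iterable[Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler._LRScheduler]],
) -> None:
    # whoever owns the optimizers (the trainer, or any loop) decides when to call this
    for _optimizer, scheduler in opt_sched_pairs:
        scheduler.step()


# usage sketch
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)
update_learning_rates([(optimizer, scheduler)])
```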

@carmocca
Contributor

Adding this method to the training_epoch_loop seems a bit arbitrary in the light of loop customization.

It is, but not any more arbitrary than having it in a separate connector.

we could do it as an intermediate step and mark the update_learning_rates method as protected.

I agree with this.

Marking this as "approved"

@daniellepintz
Contributor Author

Adding this method to the training_epoch_loop seems a bit arbitrary

I don't think it's arbitrary. First, update_learning_rates is only called in one place in the entire codebase, which is training_epoch_loop. Second, we currently call self.trainer.fit_loop.epoch_loop.scheduler_progress.increment_ready() inside update_learning_rates.
https://github.com/PyTorchLightning/pytorch-lightning/blob/76081fb846072692f3e30cf28a4b7ef012fc276b/pytorch_lightning/trainer/connectors/optimizer_connector.py#L83
This means that, as things stand, we call into OptimizerConnector from TrainingEpochLoop, which then calls back into TrainingEpochLoop. I don't think this is good design. If we instead move update_learning_rates to TrainingEpochLoop, there are no circular cross-class calls, and we can simplify the self.trainer.fit_loop.epoch_loop.scheduler_progress.increment_ready() calls to self.scheduler_progress.increment_ready().
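
As a toy illustration of that ownership point (mock classes, not the real Lightning objects), once the method lives on the loop the progress update becomes a plain local call:

```python
# Toy mock only -- not Lightning code.

class SchedulerProgress:
    def __init__(self) -> None:
        self.ready = 0
        self.completed = 0

    def increment_ready(self) -> None:
        self.ready += 1

    def increment_completed(self) -> None:
        self.completed += 1


class EpochLoop:
    """The loop owns its scheduler progress, so there is no round trip through the trainer."""

    def __init__(self, schedulers):
        self.schedulers = list(schedulers)
        self.scheduler_progress = SchedulerProgress()

    def _update_learning_rates(self) -> None:
        for scheduler in self.schedulers:
            # previously: self.trainer.fit_loop.epoch_loop.scheduler_progress.increment_ready()
            self.scheduler_progress.increment_ready()
            scheduler.step()
            self.scheduler_progress.increment_completed()
```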

I am not a big fan of just marking update_learning_rates as protected, since it seems silly to keep an entire class and file (optimizer_connector.py) just to house this one method.

The alternative would be to create a function that takes the optimizers as inputs and updates the learning rates on them.

@awaelchli where do you envision this function living?

@awaelchli
Contributor

I don't think it's arbitrary. First of all update_learning_rates is only called in one place in the entire codebase which is in training_epoch_loop.

I said "arbitrary" because the learning-rate update could be called from any other loop, for example a custom loop. If the update can be called from any other loop, then putting the method in the training epoch loop is an arbitrary choice, as it could live anywhere else. Lightning calls the method in that loop, but that's a choice based on the "standard" loop we expect. Loop customization is supposed to let us change that. Whenever we make changes to the loops, I want to think about how they serve the reason we made the loops customizable in the first place. It's not in a great state, but it is much better than before, and we can keep working towards it.

I am not a big fan of just marking update_learning_rates as protected since it seems silly to have a class and file optimizer_connector just to house this one method.

The _update_learning_rates would just be one additional method on the TrainingEpochLoop, and once the deprecation period ends, the connector can be removed. If we can't agree on what will happen with that functionality, IMO it's better to just not expose it to the user. That has, by the way, been the main argument from FB/@ananthsub for many other parts of the code base.

daniellepintz changed the title from "Deprecate optimizer_connector.py" to "Remove optimizer_connector.py" on Oct 25, 2021
@daniellepintz
Contributor Author

I meant to title this "remove", not "deprecate". Since this is an internal implementation detail, we shouldn't need a deprecation cycle, right?

@daniellepintz
Contributor Author

The _update_learning_rates would just be one additional method on the TrainingEpochLoop

I'm confused - are you saying you're ok with adding _update_learning_rates to TrainingEpochLoop?

@daniellepintz
Contributor Author

imo it's better to just not expose it to the user

Totally agree with making it protected, yes.
