Mark the trainer's connectors as protected #9778
Comments
I agree overall with this choice. For most connectors this will be fine because they are only accessed by the trainer itself. However, I want to comment on the perspective of "protectedness" mentioned here. I see the value of protected members in the encapsulation of functionality rather than in reducing exposure to users. Hence, I would only do it for the connectors that are actually not accessed anywhere outside the trainer (e.g. the config validator you already started with). Some connectors are accessed in plugins or elsewhere, and if we mark them protected without refactoring these access points, we are at the same time violating that protected rule. So what I'm trying to say is: I believe the biggest motivation of this issue should be the encapsulation aspect, and changes should be carried out primarily with that in mind; the rest of what you mention then comes for free.
I agree. The underlying motivation here is to enable the refactoring of these connectors. Moving away from Trainer mixins (where everything is publicly available) and public connectors is targeted first to limit their exposure in the API. This enables the next step of refactoring to happen more expediently. This partially came up based on #9777 (cc @daniellepintz); that is just one such example.
Do we also want to make a separate issue for ideas on how we want to refactor the connectors (off of the architecture doc)? I would like to comment on that as well. I think we can cleanly separate the discussion and refactoring efforts for connectors and mixins, as they are not really related.
Yes, @ananthsub and I are working on auditing the connectors and mixins in the architecture doc; once we are done we will create issues! If you have any ideas, feel free to share them with us now too :)
@daniellepintz the last one here is the logger connector. We must do this before v1.6 so that all changes roll out simultaneously; otherwise it's a bad experience for users to encounter these breaking changes across multiple Lightning minor releases. fyi @carmocca, the changes should be minimal to be able to close out this issue.
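For context on how such a rename could be rolled out without an abrupt break, here is a minimal, self-contained sketch of a deprecation shim kept for one release cycle. The names, message, and shim itself are assumptions for illustration, not the actual Lightning implementation:

```python
import warnings


class LoggerConnector:
    """Stand-in for the real connector; internal helper owned by the Trainer."""


class Trainer:
    def __init__(self) -> None:
        # New, protected name used everywhere inside the trainer.
        self._logger_connector = LoggerConnector()

    @property
    def logger_connector(self) -> LoggerConnector:
        # Old public name kept temporarily so external code (plugins, user
        # scripts) gets a warning and a release cycle to migrate before the
        # alias is removed.
        warnings.warn(
            "`Trainer.logger_connector` is an implementation detail of the"
            " Trainer and will be removed from the public API.",
            DeprecationWarning,
        )
        return self._logger_connector


trainer = Trainer()
_ = trainer.logger_connector  # emits a DeprecationWarning
```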
Proposed refactoring or deprecation
Mark these attributes of the Trainer as protected:
https://github.com/PyTorchLightning/pytorch-lightning/blob/ab207921b9149a2c20ff9f797b6381f61cb09b1e/pytorch_lightning/trainer/trainer.py#L389-L419
Motivation
We are auditing the Lightning components and APIs to assess opportunities for improvements:
By Python coding conventions, the connectors are technically part of the trainer's public API, despite not being documented and not being intended for use by callers.
However, we do not honor backward compatibility for these connectors because we (the framework developers) consider them implementation details of the trainer. Examples:
#9680
#9566
Pitch
Let's make it explicit that these connectors are not intended to be called by end users.
Proposal: prefix all the attributes with an underscore to mark them as protected.
This gives users more clarity around what is part of the public trainer API, and gives framework developers more flexibility to evolve the implementation here.
To make the PRs easy to review, we can migrate one connector at a time (one PR per connector); see the sketch below.
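A minimal, self-contained sketch of what the underscore rename could look like for a single connector. The connector and method names are assumptions for illustration, not the actual Lightning code:

```python
class LoggerConnector:
    """Stand-in for the real connector; an internal helper owned by the Trainer."""

    def log_metrics(self, metrics: dict) -> None:
        print(f"logging {metrics}")


class Trainer:
    def __init__(self) -> None:
        # Before: self.logger_connector = LoggerConnector()   (reads as public API)
        # After:  the leading underscore marks it as an implementation detail.
        self._logger_connector = LoggerConnector()

    def fit(self) -> None:
        # Internal call sites are updated to the new name in the same
        # per-connector PR, so each change stays small and easy to review.
        self._logger_connector.log_metrics({"loss": 0.1})


Trainer().fit()
```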
Alternatives
Keep as is
Additional context
Part of #7740