remove legacy plugins (#5950)
* remove legacy plugins

* imports

* formatting

* fix docs references

* fix cluster environment inheritance

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Borda and awaelchli committed Feb 16, 2021
1 parent 4531b1c commit 1c87f1f
Showing 19 changed files with 14 additions and 1,458 deletions.
4 changes: 0 additions & 4 deletions .yapfignore
@@ -1,5 +1 @@
.git/*
-
-
-# TODO
-pytorch_lightning/plugins/legacy/*
10 changes: 5 additions & 5 deletions docs/source/advanced/multi_gpu.rst
@@ -580,9 +580,9 @@ Below are the possible configurations we support.

Implement Your Own Distributed (DDP) training
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.init_ddp_connection`.
+If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.training_type.ddp.DDPPlugin.init_ddp_connection`.

-If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.legacy.ddp_plugin.DDPPlugin.configure_ddp`.
+If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.plugins.training_type.ddp.DDPPlugin.configure_ddp`.
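
For reference, a minimal override against the new import path could look roughly like the sketch below; the hook signatures and the ``self.model``/``self._model`` attributes are assumptions based on the 1.2-era ``DDPPlugin``, not something this diff confirms.

.. code-block:: python

    import torch

    from pytorch_lightning.plugins.training_type.ddp import DDPPlugin


    class MyDDPPlugin(DDPPlugin):
        """Hypothetical plugin customizing process-group init and DDP wrapping."""

        def init_ddp_connection(self, global_rank: int, world_size: int) -> None:
            # assumed hook signature; swap in the gloo backend, relying on the
            # cluster environment to have set MASTER_ADDR / MASTER_PORT
            if not torch.distributed.is_initialized():
                torch.distributed.init_process_group(
                    "gloo", rank=global_rank, world_size=world_size
                )

        def configure_ddp(self) -> None:
            # wrap the module manually, e.g. to pass custom DistributedDataParallel kwargs;
            # the ``_model`` attribute is an assumption about the plugin internals
            self._model = torch.nn.parallel.DistributedDataParallel(
                self.model,
                device_ids=[torch.cuda.current_device()],
                find_unused_parameters=False,
            )


    # hypothetical usage:
    # trainer = pytorch_lightning.Trainer(gpus=2, accelerator="ddp", plugins=[MyDDPPlugin()])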


----------
@@ -679,7 +679,7 @@ In addition, we use Gradient Checkpointing to reduce GPU memory requirements further.

Reference: https://arxiv.org/abs/1811.06965

-.. note:: DDPSequentialPlugin is currently supported only for Pytorch 1.6.
+.. note:: RPCSequentialPlugin is currently supported only for Pytorch 1.6.

To get started, install FairScale using the command below. We install a specific branch which contains PyTorch related fixes for Sequential Parallelism.

@@ -692,7 +692,7 @@ This should be kept within the ``sequential_module`` variable within your ``LightningModule``.

.. code-block:: python
-from pytorch_lightning.plugins.legacy.ddp_sequential_plugin import DDPSequentialPlugin
+from pytorch_lightning.plugins.training_type.rpc_sequential import RPCSequentialPlugin
from pytorch_lightning import LightningModule
class MyModel(LightningModule):
@@ -702,7 +702,7 @@ This should be kept within the ``sequential_module`` variable within your ``LightningModule``.
# Split my module across 4 gpus, one layer each
model = MyModel()
-plugin = DDPSequentialPlugin(balance=[1, 1, 1, 1])
+plugin = RPCSequentialPlugin(balance=[1, 1, 1, 1])
trainer = Trainer(accelerator='ddp', gpus=4, plugins=[plugin])
trainer.fit(model)
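
Pieced together, the updated docs snippet reads roughly as follows; the four-layer ``sequential_module`` body is illustrative only, and ``training_step``/``configure_optimizers`` are omitted here just as they are collapsed in the hunk above.

.. code-block:: python

    import torch.nn as nn

    from pytorch_lightning import LightningModule, Trainer
    from pytorch_lightning.plugins.training_type.rpc_sequential import RPCSequentialPlugin


    class MyModel(LightningModule):

        def __init__(self):
            super().__init__()
            # the plugin partitions this nn.Sequential across devices;
            # the four-layer stack is illustrative, not taken from the diff
            self.sequential_module = nn.Sequential(
                nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU()
            )


    # Split my module across 4 gpus, one layer each
    model = MyModel()
    plugin = RPCSequentialPlugin(balance=[1, 1, 1, 1])
    trainer = Trainer(accelerator='ddp', gpus=4, plugins=[plugin])
    trainer.fit(model)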
12 changes: 6 additions & 6 deletions pl_examples/basic_examples/conv_sequential_example.py
@@ -18,7 +18,7 @@
to balance across your GPUs.
To run:
-python conv_model_sequential_example.py --accelerator ddp --gpus 4 --max_epochs 1 --batch_size 256 --use_ddp_sequential
+python conv_model_sequential_example.py --accelerator ddp --gpus 4 --max_epochs 1 --batch_size 256 --use_rpc_sequential
"""
import math
from argparse import ArgumentParser
@@ -32,7 +32,7 @@
from pl_examples import cli_lightning_logo
from pytorch_lightning import Trainer
from pytorch_lightning.metrics.functional import accuracy
-from pytorch_lightning.plugins.legacy.ddp_sequential_plugin import DDPSequentialPlugin
+from pytorch_lightning.plugins import RPCSequentialPlugin
from pytorch_lightning.utilities import _BOLTS_AVAILABLE, _FAIRSCALE_PIPE_AVAILABLE

if _BOLTS_AVAILABLE:
@@ -201,7 +201,7 @@ def instantiate_datamodule(args):
if __name__ == "__main__":
cli_lightning_logo()
parser = ArgumentParser(description="Pipe Example")
-parser.add_argument("--use_ddp_sequential", action="store_true")
+parser.add_argument("--use_rpc_sequential", action="store_true")
parser = Trainer.add_argparse_args(parser)
parser = pl_bolts.datamodules.CIFAR10DataModule.add_argparse_args(parser)
args = parser.parse_args()
@@ -212,8 +212,8 @@ def instantiate_datamodule(args):
cifar10_dm = instantiate_datamodule(args)

plugins = None
-if args.use_ddp_sequential:
-    plugins = DDPSequentialPlugin()
+if args.use_rpc_sequential:
+    plugins = RPCSequentialPlugin()

model = LitResnet(batch_size=args.batch_size, manual_optimization=not args.automatic_optimization)

@@ -223,4 +223,4 @@

if trainer.accelerator_backend.rpc_enabled:
# Called at the end of trainer to ensure all processes are killed
-trainer.accelerator_backend.ddp_plugin.exit_rpc_process()
+trainer.training_type_plugin.exit_rpc_process()
@@ -12,10 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from pytorch_lightning.plugins.legacy.plugin import LightningPlugin


-class ClusterEnvironment(LightningPlugin):
+class ClusterEnvironment:

def __init__(self):
self._world_size = None
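
With the ``LightningPlugin`` base removed, ``ClusterEnvironment`` is a plain Python class. A custom environment built on it might look like the sketch below; only the bare ``__init__`` is confirmed by the hunk above, so the import path and the hook names (``master_address``, ``master_port``, ``world_size``, ``local_rank``) are assumptions about the 1.2-era interface.

.. code-block:: python

    import os

    # import path is an assumption; the hunk above does not show the module's location
    from pytorch_lightning.cluster_environments import ClusterEnvironment


    class MyClusterEnvironment(ClusterEnvironment):
        """Hypothetical environment reading rendezvous info from custom env variables."""

        def master_address(self) -> str:
            return os.environ.get("MY_MASTER_ADDR", "127.0.0.1")

        def master_port(self) -> int:
            return int(os.environ.get("MY_MASTER_PORT", 29500))

        def world_size(self) -> int:
            return int(os.environ.get("MY_WORLD_SIZE", 1))

        def local_rank(self) -> int:
            return int(os.environ.get("MY_LOCAL_RANK", 0))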
Empty file.
144 changes: 0 additions & 144 deletions pytorch_lightning/plugins/legacy/apex.py

This file was deleted.
