Merge branch 'master' into refactor/is_slurm_managing_tasks_0
awaelchli committed Oct 25, 2021
2 parents 5244d03 + d3e5a43 commit c1f65c9
Showing 45 changed files with 515 additions and 150 deletions.
3 changes: 2 additions & 1 deletion .github/workflows/ci_test-conda.yml
@@ -31,6 +31,7 @@ jobs:
python ./requirements/adjust_versions.py requirements/extra.txt
python ./requirements/adjust_versions.py requirements/examples.txt
pip install --requirement requirements/devel.txt --find-links https://download.pytorch.org/whl/nightly/torch_nightly.html
pip install pytest-random-order
pip list
- name: Pull checkpoints from S3
@@ -44,7 +45,7 @@ jobs:
- name: Tests
run: |
# NOTE: run coverage on tests does not propagate failure status for Win, https://github.com/nedbat/coveragepy/issues/1003
coverage run --source pytorch_lightning -m pytest pytorch_lightning tests -v --durations=50 --junitxml=junit/test-results-${{ runner.os }}-torch${{ matrix.pytorch-version }}.xml
coverage run --source pytorch_lightning -m pytest --random-order-seed=1 pytorch_lightning tests -v --durations=50 --junitxml=junit/test-results-${{ runner.os }}-torch${{ matrix.pytorch-version }}.xml
shell: bash -l {0}

- name: Upload pytest results
11 changes: 6 additions & 5 deletions CHANGELOG.md
@@ -100,6 +100,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
* Marked several methods in `PredictionLoop` as protected: `on_predict_start`, `on_predict_epoch_end`, `on_predict_end`, `on_predict_model_eval` ([#9516](https://github.com/PyTorchLightning/pytorch-lightning/pull/9516))
* Marked several methods in `EvaluationLoop` as protected: `get_max_batches`, `on_evaluation_model_eval`, `on_evaluation_model_train`, `on_evaluation_start`, `on_evaluation_epoch_start`, `on_evaluation_epoch_end`, `on_evaluation_end`, `reload_evaluation_dataloaders` ([#9516](https://github.com/PyTorchLightning/pytorch-lightning/pull/9516))
* Marked several methods in `EvaluationEpochLoop` as protected: `on_evaluation_batch_start`, `evaluation_step`, `evaluation_step_end` ([#9516](https://github.com/PyTorchLightning/pytorch-lightning/pull/9516))
* Added `yielding_training_step` example ([#9983](https://github.com/PyTorchLightning/pytorch-lightning/pull/9983))


- Added support for saving and loading state of multiple callbacks of the same type ([#7187](https://github.com/PyTorchLightning/pytorch-lightning/pull/7187))
@@ -213,10 +214,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- LightningLite:
* Added `PrecisionPlugin.forward_context`, making it the default implementation for all `{train,val,test,predict}_step_context()` methods ([#9988](https://github.com/PyTorchLightning/pytorch-lightning/pull/9988))
* Added `DDPSpawnPlugin.spawn()` for spawning new processes of a given function ([#10018](https://github.com/PyTorchLightning/pytorch-lightning/pull/10018), [#10022](https://github.com/PyTorchLightning/pytorch-lightning/pull/10022))
* Added `TrainingTypePlugin.{_setup_model, _setup_optimizer}` methods ([#9994](https://github.com/PyTorchLightning/pytorch-lightning/pull/9994))
* Added `TrainingTypePlugin.{_setup_model, _setup_optimizer}` methods ([#9994](https://github.com/PyTorchLightning/pytorch-lightning/pull/9994), [#10064](https://github.com/PyTorchLightning/pytorch-lightning/pull/10064))
* Implemented `DataParallelPlugin._setup_model` ([#10010](https://github.com/PyTorchLightning/pytorch-lightning/pull/10010))
* Implemented `DeepSpeedPlugin._setup_models_and_optimizers` ([#10009](https://github.com/PyTorchLightning/pytorch-lightning/pull/10009))
* Implemented `{DDPShardedPlugin,DDPShardedSpawnPlugin}._setup_models_and_optimizers` ([#10028](https://github.com/PyTorchLightning/pytorch-lightning/pull/10028))
* Implemented `DeepSpeedPlugin._setup_model_and_optimizers` ([#10009](https://github.com/PyTorchLightning/pytorch-lightning/pull/10009), [#10064](https://github.com/PyTorchLightning/pytorch-lightning/pull/10064))
* Implemented `{DDPShardedPlugin,DDPShardedSpawnPlugin}._setup_model_and_optimizers` ([#10028](https://github.com/PyTorchLightning/pytorch-lightning/pull/10028), [#10064](https://github.com/PyTorchLightning/pytorch-lightning/pull/10064))
* Added optional `model` argument to the `optimizer_step` methods in accelerators and plugins ([#10023](https://github.com/PyTorchLightning/pytorch-lightning/pull/10023))


@@ -327,13 +328,14 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- `pytorch_lightning.utilities.grads.grad_norm` now raises an exception if parameter `norm_type <= 0` ([#9765](https://github.com/PyTorchLightning/pytorch-lightning/pull/9765))



- Updated error message for interactive incompatible plugins ([#9896](https://github.com/PyTorchLightning/pytorch-lightning/pull/9896))


- Updated several places in the loops and trainer to access `training_type_plugin` directly instead of `accelerator` ([#9901](https://github.com/PyTorchLightning/pytorch-lightning/pull/9901))


- Disable quantization aware training observers by default during validating/testing/predicting stages ([#8540](https://github.com/PyTorchLightning/pytorch-lightning/pull/8540))


### Deprecated

@@ -617,7 +619,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed `LearningRateMonitor` logging with multiple param groups optimizer with no scheduler ([#10044](https://github.com/PyTorchLightning/pytorch-lightning/pull/10044))



- Fixed undesired side effects being caused by `Trainer` patching dataloader methods on the `LightningModule` ([#9764](https://github.com/PyTorchLightning/pytorch-lightning/pull/9764))


4 changes: 3 additions & 1 deletion docs/source/common/trainer.rst
@@ -516,7 +516,9 @@ Example::
checkpoint_callback
^^^^^^^^^^^^^^^^^^^

Deprecated: This has been deprecated in v1.5 and will be removed in v1.7. Please use ``enable_checkpointing`` instead.
.. warning:: `checkpoint_callback` has been deprecated in v1.5 and will be removed in v1.7.
To disable checkpointing, pass ``enable_checkpointing = False`` to the Trainer instead.
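
A minimal sketch of the replacement usage (``enable_checkpointing`` is a boolean ``Trainer`` flag)::

    from pytorch_lightning import Trainer

    # checkpointing stays enabled by default
    trainer = Trainer(enable_checkpointing=True)

    # replacement for the deprecated checkpoint_callback=False
    trainer = Trainer(enable_checkpointing=False)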


default_root_dir
^^^^^^^^^^^^^^^^
8 changes: 4 additions & 4 deletions docs/source/extensions/callbacks.rst
@@ -72,10 +72,10 @@ Examples
--------
You can do pretty much anything with callbacks.

- `Add a MLP to fine-tune self-supervised networks <https://lightning-bolts.readthedocs.io/en/latest/self_supervised_callbacks.html#sslonlineevaluator>`_.
- `Find how to modify an image input to trick the classification result <https://lightning-bolts.readthedocs.io/en/latest/vision_callbacks.html#confused-logit>`_.
- `Interpolate the latent space of any variational model <https://lightning-bolts.readthedocs.io/en/latest/variational_callbacks.html#latent-dim-interpolator>`_.
- `Log images to Tensorboard for any model <https://lightning-bolts.readthedocs.io/en/latest/vision_callbacks.html#tensorboard-image-generator>`_.
- `Add a MLP to fine-tune self-supervised networks <https://lightning-bolts.readthedocs.io/en/latest/deprecated/callbacks/self_supervised.html#sslonlineevaluator>`_.
- `Find how to modify an image input to trick the classification result <https://lightning-bolts.readthedocs.io/en/latest/deprecated/callbacks/vision.html#confused-logit>`_.
- `Interpolate the latent space of any variational model <https://lightning-bolts.readthedocs.io/en/latest/deprecated/callbacks/variational.html#latent-dim-interpolator>`_.
- `Log images to Tensorboard for any model <https://lightning-bolts.readthedocs.io/en/latest/deprecated/callbacks/vision.html#tensorboard-image-generator>`_.
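
For orientation, each of these follows the same basic pattern: subclass ``Callback`` and override the hooks you need. A minimal sketch using the standard hook signatures::

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import Callback

    class MyPrintingCallback(Callback):
        def on_train_start(self, trainer, pl_module):
            print("Training is starting")

        def on_train_end(self, trainer, pl_module):
            print("Training is ending")

    trainer = Trainer(callbacks=[MyPrintingCallback()])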


--------------
168 changes: 168 additions & 0 deletions pl_examples/loop_examples/yielding_training_step.py
@@ -0,0 +1,168 @@
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
from functools import partial
from typing import Generator

import torch

from pl_examples.domain_templates.generative_adversarial_net import GAN as GANTemplate
from pl_examples.domain_templates.generative_adversarial_net import MNISTDataModule
from pytorch_lightning import Trainer
from pytorch_lightning.loops import OptimizerLoop
from pytorch_lightning.loops.optimization.optimizer_loop import ClosureResult
from pytorch_lightning.loops.utilities import _build_training_step_kwargs
from pytorch_lightning.utilities.exceptions import MisconfigurationException

#############################################################################################
# Yield Loop #
# #
# This example shows an implementation of a custom loop that changes how the #
# `LightningModule.training_step` behaves. In particular, this custom "Yield" loop will #
# enable the `training_step` to yield like a Python generator, retaining the values #
# of local variables for subsequent calls. This can result in much cleaner and more elegant #
# code when dealing with multiple optimizers (automatic optimization). #
# #
# Learn more about the loop structure from the documentation: #
# https://pytorch-lightning.readthedocs.io/en/latest/extensions/loops.html #
#############################################################################################
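
# Aside: the Python property this example relies on is that a generator function keeps its
# local variables alive between successive `next()` calls. A tiny, self-contained sketch
# (the helper name below is purely illustrative):
def _local_state_demo() -> Generator:
    expensive = sum(range(5))  # computed once, before the first yield
    yield expensive            # the first next() returns 10 and pauses here
    yield expensive * 2        # the second next() resumes with `expensive` still in scope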


#############################################################################################
# Step 1 / 3: Implement a custom OptimizerLoop #
# #
# The `training_step` gets called in the #
# `pytorch_lightning.loops.optimization.OptimizerLoop`. To make it into a Python generator, #
# we need to override the place where it gets called. #
#############################################################################################


class YieldLoop(OptimizerLoop):
    def __init__(self):
        super().__init__()
        self._generator = None

    def connect(self, **kwargs):
        raise NotImplementedError(f"{self.__class__.__name__} does not connect any child loops.")

    def on_run_start(self, batch, optimizers, batch_idx):
        super().on_run_start(batch, optimizers, batch_idx)
        if not inspect.isgeneratorfunction(self.trainer.lightning_module.training_step):
            raise MisconfigurationException("The LightningModule does not yield anything in the `training_step`.")
        assert self.trainer.lightning_module.automatic_optimization

        # We request the generator once and save it for later
        # so we can call next() on it.
        self._generator = self._get_generator(batch, batch_idx, opt_idx=0)

    def _make_step_fn(self, split_batch, batch_idx, opt_idx):
        return partial(self._training_step, self._generator)

    def _get_generator(self, split_batch, batch_idx, opt_idx):
        step_kwargs = _build_training_step_kwargs(
            self.trainer.lightning_module, self.trainer.optimizers, split_batch, batch_idx, opt_idx, hiddens=None
        )

        # Here we are basically calling `lightning_module.training_step()`
        # and this returns a generator! The `training_step` is handled by the
        # accelerator to enable distributed training.
        return self.trainer.accelerator.training_step(step_kwargs)

    def _training_step(self, generator):
        # required for logging
        self.trainer.lightning_module._current_fx_name = "training_step"

        # Here, instead of calling `lightning_module.training_step()`
        # we call next() on the generator!
        training_step_output = next(generator)
        self.trainer.accelerator.post_training_step()

        training_step_output = self.trainer.call_hook("training_step_end", training_step_output)

        # The closure result takes care of properly detaching the loss for logging and performs
        # some additional checks that the output format is correct.
        result = ClosureResult.from_training_step_output(training_step_output, self.trainer.accumulate_grad_batches)
        return result


#############################################################################################
# Step 2 / 3: Implement a model using the new yield mechanism #
# #
# We can now implement a model that defines the `training_step` using "yield" statements. #
# We choose a generative adversarial network (GAN) because it alternates between two #
# optimizers updating the model parameters. In the first step we compute the loss of the #
# first network (coincidentally also named "generator") and yield the loss. In the second #
# step we compute the loss of the second network (the "discriminator") and yield again. #
# The nice property of this yield approach is that we can reuse variables that we computed #
# earlier. If this was a regular Lightning `training_step`, we would have to recompute the #
# output of the first network. #
#############################################################################################


class GAN(GANTemplate):

    # This training_step method is now a Python generator
    def training_step(self, batch, batch_idx, optimizer_idx=0) -> Generator:
        imgs, _ = batch
        z = torch.randn(imgs.shape[0], self.hparams.latent_dim)
        z = z.type_as(imgs)

        # Here, we compute the generator output once and reuse it later.
        # It gets saved when we yield from the training_step.
        # The output then gets re-used again in the discriminator update.
        generator_output = self(z)

        # train generator
        real_labels = torch.ones(imgs.size(0), 1)
        real_labels = real_labels.type_as(imgs)
        g_loss = self.adversarial_loss(self.discriminator(generator_output), real_labels)
        self.log("g_loss", g_loss)

        # Yield instead of return: This makes the training_step a Python generator.
        # Once we call it again, it will continue the execution with the block below
        yield g_loss

        # train discriminator
        real_labels = torch.ones(imgs.size(0), 1)
        real_labels = real_labels.type_as(imgs)
        real_loss = self.adversarial_loss(self.discriminator(imgs), real_labels)
        fake_labels = torch.zeros(imgs.size(0), 1)
        fake_labels = fake_labels.type_as(imgs)

        # We make use again of the generator_output
        fake_loss = self.adversarial_loss(self.discriminator(generator_output.detach()), fake_labels)
        d_loss = (real_loss + fake_loss) / 2
        self.log("d_loss", d_loss)

        yield d_loss
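
# For contrast: with a conventional (non-yield) `training_step`, Lightning calls the method
# once per optimizer, so the generator forward pass would have to be recomputed in the
# discriminator branch. A rough, illustrative sketch of that version (it is not connected
# to the Trainer below):
class ConventionalGAN(GANTemplate):
    def training_step(self, batch, batch_idx, optimizer_idx):
        imgs, _ = batch
        z = torch.randn(imgs.shape[0], self.hparams.latent_dim).type_as(imgs)

        if optimizer_idx == 0:
            # generator update
            real_labels = torch.ones(imgs.size(0), 1).type_as(imgs)
            g_loss = self.adversarial_loss(self.discriminator(self(z)), real_labels)
            self.log("g_loss", g_loss)
            return g_loss

        if optimizer_idx == 1:
            # discriminator update: `self(z)` is computed a second time here
            real_labels = torch.ones(imgs.size(0), 1).type_as(imgs)
            fake_labels = torch.zeros(imgs.size(0), 1).type_as(imgs)
            real_loss = self.adversarial_loss(self.discriminator(imgs), real_labels)
            fake_loss = self.adversarial_loss(self.discriminator(self(z).detach()), fake_labels)
            d_loss = (real_loss + fake_loss) / 2
            self.log("d_loss", d_loss)
            return d_loss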


#############################################################################################
# Step 3 / 3: Connect the loop to the Trainer #
# #
# Finally, attach the loop to the `Trainer`. Here, we modified the `AutomaticOptimization` #
# loop, which is a subloop of the `TrainingBatchLoop`. We use `.connect()` to attach it. #
#############################################################################################

if __name__ == "__main__":
    model = GAN()
    dm = MNISTDataModule()
    trainer = Trainer()

    # Connect the new loop
    # YieldLoop now replaces the previous optimizer loop
    trainer.fit_loop.epoch_loop.batch_loop.connect(optimizer_loop=YieldLoop())

    # fit() will now use the new loop!
    trainer.fit(model, dm)