
Commit

Merge branch 'master' into tests/prune-unused-result-obj
tchaton committed Dec 11, 2020
2 parents 9c5057d + 7e8673d commit 34cf8f5
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions docs/source/multi_gpu.rst
@@ -593,9 +593,9 @@ Below are the possible configurations we support.
 
 Implement Your Own Distributed (DDP) training
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.core.LightningModule.`.
+If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.init_ddp_connection`.
 
-If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.core.LightningModule.configure_ddp`.
+If you also need to use your own DDP implementation, override: :meth:`pytorch_lightning.plugins.ddp_plugin.DDPPlugin.configure_ddp`.
 
 
 ----------
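
For reference, a minimal sketch of the override pattern the updated docs point to might look like the following. This is an assumption-laden illustration, not the documented implementation: the exact DDPPlugin method signatures differ across Lightning versions, so the `*args, **kwargs` pass-throughs and the Trainer flags shown here are placeholders.

import pytorch_lightning as pl
from pytorch_lightning.plugins.ddp_plugin import DDPPlugin


class MyDDPPlugin(DDPPlugin):
    def init_ddp_connection(self, *args, **kwargs):
        # Custom process-group initialization would go here; deferring to the
        # parent keeps the default torch.distributed setup.
        return super().init_ddp_connection(*args, **kwargs)

    def configure_ddp(self, *args, **kwargs):
        # Wrap the model with your own DDP variant here; the parent wraps it in
        # Lightning's DistributedDataParallel subclass by default.
        return super().configure_ddp(*args, **kwargs)


# Hypothetical usage: hand the custom plugin to the Trainer.
trainer = pl.Trainer(gpus=2, accelerator="ddp", plugins=[MyDDPPlugin()])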
2 changes: 1 addition & 1 deletion docs/source/weights_loading.rst
@@ -46,7 +46,7 @@ You can customize the checkpointing behavior to monitor any quantity of your tra
 1. Calculate any metric or other quantity you wish to monitor, such as validation loss.
 2. Log the quantity using :func:`~pytorch_lightning.core.lightning.LightningModule.log` method, with a key such as `val_loss`.
 3. Initializing the :class:`~pytorch_lightning.callbacks.ModelCheckpoint` callback, and set `monitor` to be the key of your quantity.
-4. Pass the callback to `checkpoint_callback` :class:`~pytorch_lightning.trainer.Trainer` flag.
+4. Pass the callback to the `callbacks` :class:`~pytorch_lightning.trainer.Trainer` flag.
 
 .. code-block:: python
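
As a hedged illustration of those four steps (the exact snippet that follows in the docs is not shown in this diff), a minimal setup might look like the sketch below; the model internals and the `val_loss` key are placeholder assumptions.

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint


class LitModel(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        # 1. Compute the quantity to monitor (hypothetical loss computation).
        loss = self._shared_eval_step(batch)
        # 2. Log it under a key such as `val_loss`.
        self.log("val_loss", loss)


# 3. Initialize ModelCheckpoint and set `monitor` to that key.
checkpoint_callback = ModelCheckpoint(monitor="val_loss")

# 4. Pass the callback via the Trainer's `callbacks` flag.
trainer = pl.Trainer(callbacks=[checkpoint_callback])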
