Update auto-opt docs #6037

Merged · 7 commits · Feb 18, 2021
Changes from 2 commits
44 changes: 24 additions & 20 deletions README.md
@@ -72,7 +72,7 @@ Lightning is rigorously tested across multiple GPUs, TPUs, CPUs and against major

<details>
<summary>Current build statuses</summary>

<center>

| System / PyTorch ver. | 1.4 (min. req.)* | 1.5 | 1.6 | 1.7 (latest) | 1.8 (nightly) |
@@ -93,9 +93,9 @@ Lightning is rigorously tested across multiple GPUs, TPUs, CPUs and against major

<details>
<summary>Bleeding edge build status (1.2)</summary>

<center>

![CI base testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20base%20testing/badge.svg?branch=release%2F1.2-dev&event=push)
![CI complete testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=release%2F1.2-dev&event=push)
![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=release%2F1.2-dev&event=push)
@@ -121,13 +121,13 @@ pip install pytorch-lightning
<!-- following section will be skipped from PyPI description -->

#### Install with optional dependencies

```bash
pip install pytorch-lightning['extra']
```

#### Conda

```bash
conda install pytorch-lightning -c conda-forge
```
@@ -229,7 +229,7 @@ Here are some examples:

<details>
<summary>Highlighted feature code snippets</summary>

```python
# 8 GPUs
# no code changes needed
@@ -240,66 +240,66 @@ Here are some examples:
```

<summary>Train on TPUs without code changes</summary>

```python
# no code changes needed
trainer = Trainer(tpu_cores=8)
```

<summary>16-bit precision</summary>

```python
# no code changes needed
trainer = Trainer(precision=16)
```

<summary>Experiment managers</summary>

```python
from pytorch_lightning import loggers

# tensorboard
trainer = Trainer(logger=TensorBoardLogger('logs/'))

# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())

# comet
trainer = Trainer(logger=loggers.CometLogger())

# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())

# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())

# ... and dozens more
```

<summary>EarlyStopping</summary>

```python
es = EarlyStopping(monitor='val_loss')
trainer = Trainer(callbacks=[es])
```

<summary>Checkpointing</summary>

```python
checkpointing = ModelCheckpoint(monitor='val_loss')
trainer = Trainer(callbacks=[checkpointing])
```

<summary>Export to torchscript (JIT) (production use)</summary>

```python
# torchscript
autoencoder = LitAutoEncoder()
torch.jit.save(autoencoder.to_torchscript(), "model.pt")
```

<summary>Export to ONNX (production use)</summary>

```python
# onnx
with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
@@ -315,6 +315,10 @@ For complex/professional level work, you have optional full control of the train

```python
class LitAutoEncoder(pl.LightningModule):
def __init__(self):
super().__init__()
self.automatic_optimization = False

def training_step(self, batch, batch_idx, optimizer_idx):
    # access your optimizers (pass use_pl_optimizer=False to get the raw optimizers; default is True)
(opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)
83 changes: 81 additions & 2 deletions docs/source/common/lightning_module.rst
@@ -841,7 +841,7 @@ The current step (does not reset each epoch)

hparams
~~~~~~~
After calling `save_hyperparameters` anything passed to init() is available via hparams.
After calling ``save_hyperparameters``, anything passed to ``__init__()`` is available via ``self.hparams``.

.. code-block:: python
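
# NOTE: the original snippet here is collapsed in this diff; the following is a
# hedged sketch with a hypothetical LitModel, illustrating that arguments passed
# to __init__ become readable via self.hparams after save_hyperparameters().
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self, learning_rate=1e-3, batch_size=32):
        super().__init__()
        # stores learning_rate and batch_size under self.hparams
        self.save_hyperparameters()

model = LitModel()
print(model.hparams.learning_rate)  # 1e-3
print(model.hparams.batch_size)     # 32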

@@ -932,9 +932,88 @@ True if using TPUs

--------------

automatic_optimization
~~~~~~~~~~~~~~~~~~~~~~
When set to ``False``, Lightning does not automate the optimization process. This means you are responsible for your own optimizer behavior.

.. code-block:: python

def __init__(self):
self.automatic_optimization = False

def training_step(self, batch, batch_idx):
        # access your optimizers (pass use_pl_optimizer=False to get the raw optimizer; default is True)
opt = self.optimizers(use_pl_optimizer=True)

loss = ...
self.manual_backward(loss, opt)
opt.step()
opt.zero_grad()

This is not recommended when using a single optimizer; it is intended for expert users working with two or more optimizers. It is most useful for research such as reinforcement learning, sparse coding, and GANs.

In the multi-optimizer case, ignore the ``optimizer_idx`` argument and use the optimizers directly:

.. code-block:: python

def __init__(self):
self.automatic_optimization = False

def training_step(self, batch, batch_idx, optimizer_idx):
[Review comment — Contributor]: we should probably also make it so that the opt idx is not even part of the signature when doing manual opt, right? Or perhaps it is already like this and these are just outdated docs.

[Review comment — @rohitgr7 (Contributor, Author), Feb 17, 2021]: It's not like that in the code either; opt_idx is part of the signature, and agreed it shouldn't be in the case of manual optimization. @tchaton, what do you say?

        # access your optimizers (pass use_pl_optimizer=False to get the raw optimizers; default is True)
(opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)

gen_loss = ...
self.manual_backward(gen_loss, opt_a)
opt_a.step()
opt_a.zero_grad()

disc_loss = ...
self.manual_backward(disc_loss, opt_b)
opt_b.step()
opt_b.zero_grad()

--------------

example_input_array
~~~~~~~~~~~~~~~~~~~
Set and access ``example_input_array``, which is basically a single batch of data.

.. code-block:: python

def __init__(self):
self.example_input_array = ...
self.generator = ...

def on_train_epoch_end(...):
# generate some images using the example_input_array
gen_images = self.generator(self.example_input_array)

--------------

datamodule
~~~~~~~~~~
Set or access your datamodule.

.. code-block:: python

def configure_optimizers(self):
num_training_samples = len(self.datamodule.train_dataloader())
...

--------------

model_size
~~~~~~~~~~
Get the model file size using ``self.model_size`` inside LightningModule.
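
A minimal usage sketch (assuming ``model_size`` reports the size in megabytes and a hypothetical ``LitAutoEncoder`` module):

.. code-block:: python

    model = LitAutoEncoder()
    # size of the serialized model; assumed to be reported in megabytes
    print(f"model size: {model.model_size:.2f} MB")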

--------------

Hooks
^^^^^
This is the pseudocode to describe how all the hooks are called during a call to `.fit()`
This pseudocode describes how all the hooks are called during a call to ``.fit()``.

.. code-block:: python
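
# A simplified, illustrative sketch only -- the original pseudocode is collapsed
# in this diff and is more detailed. The hook names are real LightningModule
# hooks, but the ordering and coverage shown here are approximate.
def fit():
    on_fit_start()
    for epoch in range(max_epochs):
        on_train_epoch_start()
        for batch in train_dataloader():
            on_train_batch_start(batch)
            training_step(batch)
            on_train_batch_end()
        on_train_epoch_end()
    on_fit_end()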

4 changes: 2 additions & 2 deletions docs/source/common/production_inference.rst
@@ -9,7 +9,7 @@ Exporting to ONNX
-----------------
PyTorch Lightning provides a handy function to quickly export your model to ONNX format, which allows the model to be independent of PyTorch and run on an ONNX Runtime.

To export your model to ONNX format call the `to_onnx` function on your Lightning Module with the filepath and input_sample.
To export your model to ONNX format, call the ``to_onnx`` function on your LightningModule with the filepath and input_sample.

.. code-block:: python

@@ -18,7 +18,7 @@ To export your model to ONNX format call the `to_onnx` function on your Lightnin
input_sample = torch.randn((1, 64))
model.to_onnx(filepath, input_sample, export_params=True)

You can also skip passing the input sample if the `example_input_array` property is specified in your LightningModule.
You can also skip passing the input sample if the ``example_input_array`` property is specified in your LightningModule.

Once you have the exported model, you can run it on your ONNX runtime in the following way:
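
The snippet below is a hedged sketch of that step (the original example is collapsed in this diff). It assumes ``filepath`` points to the exported model and feeds a random input matching the export shape through the standard ``onnxruntime`` API:

.. code-block:: python

    import numpy as np
    import onnxruntime

    ort_session = onnxruntime.InferenceSession(filepath)
    input_name = ort_session.get_inputs()[0].name
    ort_inputs = {input_name: np.random.randn(1, 64).astype(np.float32)}
    ort_outs = ort_session.run(None, ort_inputs)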

37 changes: 0 additions & 37 deletions docs/source/common/trainer.rst
@@ -330,43 +330,6 @@ Example::
# default used by the Trainer
trainer = Trainer(amp_level='O2')

automatic_optimization
^^^^^^^^^^^^^^^^^^^^^^
When set to False, Lightning does not automate the optimization process. This means you are responsible for your own
optimizer behavior

Example::

def training_step(self, batch, batch_idx):
# access your optimizers with use_pl_optimizer=False. Default is True
opt = self.optimizers(use_pl_optimizer=True)

loss = ...
self.manual_backward(loss, opt)
opt.step()
opt.zero_grad()

This is not recommended when using a single optimizer, instead it's recommended when using 2+ optimizers
AND you are an expert user. Most useful for research like RL, sparse coding and GAN research.

In the multi-optimizer case, ignore the optimizer_idx flag and use the optimizers directly

Example::

def training_step(self, batch, batch_idx, optimizer_idx):
# access your optimizers with use_pl_optimizer=False. Default is True
(opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)

gen_loss = ...
self.manual_backward(gen_loss, opt_a)
opt_a.step()
opt_a.zero_grad()

disc_loss = ...
self.manual_backward(disc_loss, opt_b)
opt_b.step()
opt_b.zero_grad()

auto_scale_batch_size
^^^^^^^^^^^^^^^^^^^^^

11 changes: 4 additions & 7 deletions docs/source/starter/new-project.rst
@@ -258,16 +258,13 @@ Manual optimization
However, for certain research like GANs, reinforcement learning, or something with multiple optimizers
or an inner loop, you can turn off automatic optimization and fully control the training loop yourself.

First, turn off automatic optimization:

.. testcode::

trainer = Trainer(automatic_optimization=False)

Now you own the train loop!
Turn off automatic optimization and you own the train loop!

.. code-block:: python

def __init__(self):
self.automatic_optimization = False

def training_step(self, batch, batch_idx, optimizer_idx):
    # access your optimizers (pass use_pl_optimizer=False to get the raw optimizers; default is True)
(opt_a, opt_b, opt_c) = self.optimizers(use_pl_optimizer=True)