Merge branch 'master' into func_metric_docs
williamFalcon committed Oct 7, 2020
2 parents ebd88a1 + 616fe82 commit a11c3a4
Showing 19 changed files with 759 additions and 295 deletions.
2 changes: 1 addition & 1 deletion docs/source/datamodules.rst
@@ -40,7 +40,7 @@ This class can then be shared and used anywhere:
Why do I need a DataModule?
---------------------------
In normal PyTorch code, the data cleaning/preparation is usually scattered across many files. This makes
-sharing and reusing the exact splits, and transforms across projects.
+sharing and reusing the exact splits and transforms across projects impossible.

Datamodules are for you if you ever asked the questions:

24 changes: 12 additions & 12 deletions docs/source/hyperparameters.rst
@@ -9,7 +9,7 @@

Hyperparameters
---------------
-Lightning has utilities to interact seamlessly with the command line ArgumentParser
+Lightning has utilities to interact seamlessly with the command line ``ArgumentParser``
and plays well with the hyperparameter optimization framework of your choice.

----------
@@ -37,15 +37,15 @@ Argparser Best Practices
^^^^^^^^^^^^^^^^^^^^^^^^
It is best practice to layer your arguments in three sections.

-1. Trainer args (gpus, num_nodes, etc...)
-2. Model specific arguments (layer_dim, num_layers, learning_rate, etc...)
-3. Program arguments (data_path, cluster_email, etc...)
+1. Trainer args (``gpus``, ``num_nodes``, etc...)
+2. Model specific arguments (``layer_dim``, ``num_layers``, ``learning_rate``, etc...)
+3. Program arguments (``data_path``, ``cluster_email``, etc...)

|
-We can do this as follows. First, in your LightningModule, define the arguments
+We can do this as follows. First, in your ``LightningModule``, define the arguments
specific to that module. Remember that data splits or data paths may also be specific to
-a module (ie: if your project has a model that trains on Imagenet and another on CIFAR-10).
+a module (i.e.: if your project has a model that trains on Imagenet and another on CIFAR-10).

.. testcode::

@@ -58,7 +58,7 @@ a module (ie: if your project has a model that trains on Imagenet and another on
parser.add_argument('--data_path', type=str, default='/some/path')
return parser
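The surrounding snippet is collapsed in this diff; the two visible lines are the tail of the usual ``add_model_specific_args`` pattern. A minimal sketch of that pattern (class and argument names here are illustrative, not taken from the commit):

.. code-block:: python

    from argparse import ArgumentParser
    from pytorch_lightning import LightningModule

    class LitModel(LightningModule):

        @staticmethod
        def add_model_specific_args(parent_parser):
            # extend the parser handed in from main.py with model-only options
            parser = ArgumentParser(parents=[parent_parser], add_help=False)
            parser.add_argument('--layer_dim', type=int, default=128)
            parser.add_argument('--learning_rate', type=float, default=1e-3)
            parser.add_argument('--data_path', type=str, default='/some/path')
            return parser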

-Now in your main trainer file, add the Trainer args, the program args, and add the model args
+Now in your main trainer file, add the ``Trainer`` args, the program args, and add the model args

.. testcode::

@@ -81,7 +81,7 @@

args = parser.parse_args()
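The rest of this snippet is also collapsed; the visible ``args = parser.parse_args()`` is the tail of a block that layers the three argument groups roughly like this (a sketch, assuming the standard ``Trainer.add_argparse_args`` helper and the ``LitModel`` sketched above):

.. code-block:: python

    from argparse import ArgumentParser
    from pytorch_lightning import Trainer

    parser = ArgumentParser()

    # 1. program level args (the name is illustrative)
    parser.add_argument('--cluster_email', type=str, default='me@example.com')

    # 2. model specific args
    parser = LitModel.add_model_specific_args(parser)

    # 3. every Trainer flag (gpus, num_nodes, max_epochs, ...)
    parser = Trainer.add_argparse_args(parser)

    args = parser.parse_args()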

-Now you can call run your program like so
+Now you can call run your program like so:

.. code-block:: bash
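
    # illustrative invocation only -- the script name and flags depend on the arguments defined above
    python trainer_main.py --gpus 2 --num_nodes 2 --layer_dim 256 --data_path /my/data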
@@ -109,12 +109,12 @@ Finally, make sure to start the training like so:
LightningModule hyperparameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Often times we train many versions of a model. You might share that model or come back to it a few months later
-at which point it is very useful to know how that model was trained (ie: what learning_rate, neural network, etc...).
+at which point it is very useful to know how that model was trained (i.e.: what learning rate, neural network, etc...).

Lightning has a few ways of saving that information for you in checkpoints and yaml files. The goal here is to
improve readability and reproducibility

-1. The first way is to ask lightning to save the values anything in the __init__ for you to the checkpoint. This also
+1. The first way is to ask lightning to save the values of anything in the __init__ for you to the checkpoint. This also
makes those values available via `self.hparams`.

.. code-block:: python
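
    # sketch only -- the original snippet is collapsed in this diff
    import torch
    from pytorch_lightning import LightningModule

    class LitMNIST(LightningModule):

        def __init__(self, layer_1_dim=128, learning_rate=1e-2, **kwargs):
            super().__init__()

            # stores layer_1_dim, learning_rate (and any other __init__ args) in the checkpoint
            self.save_hyperparameters()

            # the same values are now available via self.hparams
            self.layer_1 = torch.nn.Linear(28 * 28, self.hparams.layer_1_dim)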
@@ -198,7 +198,7 @@ In that case, choose only a few

Trainer args
^^^^^^^^^^^^
-To recap, add ALL possible trainer flags to the argparser and init the Trainer this way
+To recap, add ALL possible trainer flags to the argparser and init the ``Trainer`` this way

.. code-block:: python
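
    # sketch of the collapsed snippet, assuming the standard argparse helpers
    from argparse import ArgumentParser
    from pytorch_lightning import Trainer

    parser = ArgumentParser()
    parser = Trainer.add_argparse_args(parser)
    hparams = parser.parse_args()

    trainer = Trainer.from_argparse_args(hparams)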
@@ -217,7 +217,7 @@ Multiple Lightning Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^

We often have multiple Lightning Modules where each one has different arguments. Instead of
-polluting the main.py file, the LightningModule lets you define arguments for each one.
+polluting the ``main.py`` file, the ``LightningModule`` lets you define arguments for each one.

.. testcode::
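
    # sketch only -- the original snippet is collapsed in this diff;
    # each module owns its own arguments, keeping main.py clean
    from argparse import ArgumentParser
    from pytorch_lightning import LightningModule

    class LitGAN(LightningModule):

        @staticmethod
        def add_model_specific_args(parent_parser):
            parser = ArgumentParser(parents=[parent_parser], add_help=False)
            parser.add_argument('--generator_layers', type=int, default=4)
            return parser


    class LitMNIST(LightningModule):

        @staticmethod
        def add_model_specific_args(parent_parser):
            parser = ArgumentParser(parents=[parent_parser], add_help=False)
            parser.add_argument('--layer_1_dim', type=int, default=128)
            return parser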

2 changes: 1 addition & 1 deletion docs/source/lr_finder.rst
@@ -53,7 +53,7 @@ which can be accessed via ``self.learning_rate`` or ``self.lr``.
trainer.tune(model)
-If your model is using an arbitrary value instead of ``self.lr`` or ``self.learning_rate``, set that value as auto_lr_find
+If your model is using an arbitrary value instead of ``self.lr`` or ``self.learning_rate``, set that value as ``auto_lr_find``:

.. code-block:: python
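
    # sketch only -- the original snippet is collapsed in this diff
    from pytorch_lightning import LightningModule, Trainer

    class LitModel(LightningModule):

        def __init__(self, my_value):
            super().__init__()
            self.my_value = my_value

    model = LitModel(my_value=0.01)

    # point the learning-rate finder at the attribute it should tune
    trainer = Trainer(auto_lr_find='my_value')
    trainer.tune(model)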
2 changes: 1 addition & 1 deletion docs/source/metrics.rst
@@ -84,7 +84,7 @@ To implement your custom metric, subclass the base ``Metric`` class and implemen
- ``update()``: Any code needed to update the state given any inputs to the metric.
- ``compute()``: Computes a final value from the state of the metric.

-All you need to do is call add_state correctly to implement a custom metric with DDP.
+All you need to do is call ``add_state`` correctly to implement a custom metric with DDP.
``reset()`` is called on metric state variables added using ``add_state()``.

To see how metric states are synchronized across distributed processes, refer to ``add_state()`` docs
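For orientation, a minimal DDP-safe metric sketch (not part of this commit; the class name, argument shapes, and reduction choices are illustrative):

.. code-block:: python

    import torch
    from pytorch_lightning.metrics import Metric

    class MyAccuracy(Metric):

        def __init__(self, dist_sync_on_step=False):
            super().__init__(dist_sync_on_step=dist_sync_on_step)
            # add_state registers each state tensor together with its DDP reduction
            self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
            self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")

        def update(self, preds: torch.Tensor, target: torch.Tensor):
            preds = torch.argmax(preds, dim=-1)
            self.correct += torch.sum(preds == target)
            self.total += target.numel()

        def compute(self):
            return self.correct.float() / self.total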
10 changes: 5 additions & 5 deletions docs/source/multiple_loaders.rst
@@ -8,16 +8,16 @@ Multiple Datasets
=================
Lightning supports multiple dataloaders in a few ways.

-1. Create a dataloader that iterates both datasets under the hood.
+1. Create a dataloader that iterates multiple datasets under the hood.
2. In the validation and test loop you also have the option to return multiple dataloaders
which lightning will call sequentially.

----------

Multiple training dataloaders
-----------------------------
-For training, the best way to use multiple-dataloaders is to create a Dataloader class
-which wraps both your dataloaders. (This of course also works for testing and validation
+For training, the best way to use multiple dataloaders is to create a ``DataLoader`` class
+which wraps your multiple dataloaders (this of course also works for testing and validation
dataloaders).

(`reference <https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/2>`_)
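A minimal sketch of that wrapper pattern (the class name and the toy datasets are illustrative, not taken from this commit):

.. code-block:: python

    import torch
    from torch.utils.data import Dataset, DataLoader, TensorDataset

    class ConcatDataset(Dataset):
        """Return one sample from each wrapped dataset per index."""

        def __init__(self, *datasets):
            self.datasets = datasets

        def __getitem__(self, i):
            return tuple(d[i] for d in self.datasets)

        def __len__(self):
            return min(len(d) for d in self.datasets)

    # two toy datasets standing in for your real ones
    dataset_a = TensorDataset(torch.randn(100, 3))
    dataset_b = TensorDataset(torch.randn(100, 5))

    # return this DataLoader from your LightningModule's train_dataloader()
    loader = DataLoader(ConcatDataset(dataset_a, dataset_b), batch_size=32, shuffle=True)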
@@ -63,8 +63,8 @@ dataloaders).

Test/Val dataloaders
--------------------
-For validation, test dataloaders lightning also gives you the additional
-option of passing in multiple dataloaders back from each call.
+For validation and test dataloaders, lightning also gives you the additional
+option of passing multiple dataloaders back from each call.
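A short sketch of that option (attribute names are illustrative):

.. code-block:: python

    from torch.utils.data import DataLoader

    # defined on your LightningModule
    def val_dataloader(self):
        loader_a = DataLoader(self.val_set_a, batch_size=32)
        loader_b = DataLoader(self.val_set_b, batch_size=32)
        # Lightning runs the validation loop over each loader in turn;
        # validation_step then receives an extra dataloader_idx argument
        return [loader_a, loader_b]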

See the following for more details:

3 changes: 3 additions & 0 deletions pytorch_lightning/metrics/functional/__init__.py
@@ -29,3 +29,6 @@
rmsle,
ssim
)
+from pytorch_lightning.metrics.functional.self_supervised import (
+    embedding_similarity
+)
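For context, a usage sketch of the newly exported functional metric (the keyword argument and the pairwise-similarity behaviour are assumptions, not documented by this diff):

.. code-block:: python

    import torch
    from pytorch_lightning.metrics.functional import embedding_similarity

    embeddings = torch.randn(10, 128)
    # assumed behaviour: a 10 x 10 matrix of pairwise cosine similarities
    similarity = embedding_similarity(embeddings, similarity='cosine')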