Merge branch 'master' into refactor/deprecate
s-rog committed Mar 24, 2021
2 parents 79b3c82 + b1e3dcc commit 051066a
Showing 88 changed files with 1,864 additions and 2,709 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci_test-base.yml
@@ -68,7 +68,7 @@ jobs:
- name: Test Package [only]
run: |
# NOTE: run coverage on tests does not propagate failure status for Win, https://github.com/nedbat/coveragepy/issues/1003
-python -m pytest pytorch_lightning -v --cov=pytorch_lightning --junitxml=junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml
+coverage run --source pytorch_lightning -m pytest pytorch_lightning -v --junitxml=junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml
- name: Upload pytest test results
uses: actions/upload-artifact@v2
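The same substitution is applied across all three CI workflow files below; a sketch of the step in isolation (step name and flags are illustrative, JUnit options omitted for brevity):

```yaml
# `pytest --cov` lets the coverage plugin own the process exit, which can
# mask pytest's failure status on Windows (nedbat/coveragepy#1003).
# Running coverage as the parent process preserves pytest's exit code.
- name: Test Package [only]
  run: |
    coverage run --source pytorch_lightning -m pytest pytorch_lightning -v
```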
2 changes: 1 addition & 1 deletion .github/workflows/ci_test-conda.yml
@@ -44,7 +44,7 @@ jobs:
- name: Tests
run: |
# NOTE: run coverage on tests does not propagate failure status for Win, https://github.com/nedbat/coveragepy/issues/1003
-python -m pytest pytorch_lightning tests --cov=pytorch_lightning -v --durations=50 --junitxml=junit/test-results-${{ runner.os }}-torch${{ matrix.pytorch-version }}.xml
+coverage run --source pytorch_lightning -m pytest pytorch_lightning tests -v --durations=50 --junitxml=junit/test-results-${{ runner.os }}-torch${{ matrix.pytorch-version }}.xml
shell: bash -l {0}

- name: Upload pytest results
2 changes: 1 addition & 1 deletion .github/workflows/ci_test-full.yml
@@ -134,7 +134,7 @@ jobs:
- name: Tests
run: |
# NOTE: do not include coverage report here, see: https://github.com/nedbat/coveragepy/issues/1003
-python -m pytest pytorch_lightning tests --cov=pytorch_lightning -v --durations=50 --junitxml=junit/test-results-${{ runner.os }}-py${{ matrix.python-version }}-${{ matrix.requires }}.xml
+coverage run --source pytorch_lightning -m pytest pytorch_lightning tests -v --durations=50 --junitxml=junit/test-results-${{ runner.os }}-py${{ matrix.python-version }}-${{ matrix.requires }}.xml
- name: Examples
run: |
2 changes: 1 addition & 1 deletion .github/workflows/docs-checks.yml
@@ -98,7 +98,7 @@ jobs:
# First run the same pipeline as Read-The-Docs
cd docs
make clean
-make html --debug --jobs $(nproc) SPHINXOPTS="-W"
+make html --debug --jobs $(nproc) SPHINXOPTS="-W --keep-going"
- name: Upload built docs
uses: actions/upload-artifact@v2
1 change: 1 addition & 0 deletions .gitignore
@@ -157,3 +157,4 @@ tags
data
MNIST
runs
+*trace*
5 changes: 0 additions & 5 deletions .pre-commit-config.yaml
@@ -33,8 +33,3 @@ repos:
hooks:
- id: yapf
args: [--parallel, --in-place]

-- repo: https://github.com/pre-commit/mirrors-mypy
-  rev: v0.790
-  hooks:
-  - id: mypy
58 changes: 49 additions & 9 deletions CHANGELOG.md
@@ -9,13 +9,13 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Added

- Added `RetrievalMAP` metric, the corresponding functional version `retrieval_average_precision` and a generic superclass for retrieval metrics `RetrievalMetric` ([#5032](https://github.com/PyTorchLightning/pytorch-lightning/pull/5032))


- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))


- Added support to checkpoint after training steps in `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))


- Added `checkpoint` parameter to callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))


@@ -37,11 +37,28 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Added arg to `self.log` that enables users to give custom names when dealing with multiple dataloaders ([#6274](https://github.com/PyTorchLightning/pytorch-lightning/pull/6274))


- Added `teardown` method to `BaseProfiler` to enable subclasses defining post-profiling steps outside of `__del__` ([#6370](https://github.com/PyTorchLightning/pytorch-lightning/pull/6370))


- Added `setup` method to `BaseProfiler` to enable subclasses defining pre-profiling steps for every process ([#6633](https://github.com/PyTorchLightning/pytorch-lightning/pull/6633))
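A minimal stdlib sketch of the lifecycle these two hooks add — class names, event strings, and method bodies are illustrative, not the real `BaseProfiler` API:

```python
from abc import ABC, abstractmethod


class ProfilerSketch(ABC):
    """Illustrative profiler with an explicit setup/teardown lifecycle."""

    def __init__(self):
        self.events = []

    def setup(self, stage=None):
        # Pre-profiling step, run once per process (e.g. open report files).
        self.events.append(f"setup:{stage}")

    def teardown(self, stage=None):
        # Post-profiling step, called explicitly instead of relying on __del__.
        self.events.append(f"teardown:{stage}")

    @abstractmethod
    def start(self, action_name): ...

    @abstractmethod
    def stop(self, action_name): ...


class PassThroughProfiler(ProfilerSketch):
    def start(self, action_name):
        self.events.append(f"start:{action_name}")

    def stop(self, action_name):
        self.events.append(f"stop:{action_name}")


profiler = PassThroughProfiler()
profiler.setup(stage="fit")
profiler.start("training_step")
profiler.stop("training_step")
profiler.teardown(stage="fit")
print(profiler.events[0], profiler.events[-1])  # setup:fit teardown:fit
```

Making teardown an explicit hook (rather than `__del__`) lets callers control exactly when post-profiling work runs.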


- Added no return warning to predict ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))


- Added `outputs` parameter to callback's `on_validation_epoch_end` & `on_test_epoch_end` hooks ([#6120](https://github.com/PyTorchLightning/pytorch-lightning/pull/6120))
- Added `Trainer.predict` config validation ([#6543](https://github.com/PyTorchLightning/pytorch-lightning/pull/6543))


- Added `AbstractProfiler` interface ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))


- Added support for including module names for forward in the autograd trace of `PyTorchProfiler` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))


- Added support for the PyTorch 1.8.1 autograd profiler ([#6618](https://github.com/PyTorchLightning/pytorch-lightning/pull/6618))


- Added `outputs` parameter to callback's `on_validation_epoch_end` & `on_test_epoch_end` hooks ([#6120](https://github.com/PyTorchLightning/pytorch-lightning/pull/6120))


### Changed
@@ -58,6 +75,12 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Changed `setup()` and `teardown()` stage argument to take any of `{fit,validate,test,predict}` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))


- Changed profilers to save separate report files per state and rank ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))


- Changed `PyTorchProfiler` to use `torch.autograd.profiler.record_function` to record functions ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
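`torch.autograd.profiler.record_function` attributes everything inside a `with` block to a named region of the trace; a stdlib stand-in for that labelled-region pattern (torch is deliberately not imported here, and the timing store is illustrative):

```python
import time
from contextlib import contextmanager

RECORDS = []  # (name, duration_seconds) pairs, stand-in for profiler output


@contextmanager
def record_function(name):
    # Mimic the labelled-region pattern: everything inside the `with`
    # block is attributed to `name` in the recorded trace.
    start = time.perf_counter()
    try:
        yield
    finally:
        RECORDS.append((name, time.perf_counter() - start))


with record_function("LitModel.forward"):
    total = sum(i * i for i in range(10_000))  # profiled work

print(RECORDS[0][0])  # LitModel.forward
```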


### Deprecated

- `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
@@ -66,6 +89,12 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))


- Deprecated `Profiler(output_filename)` in favor of `dirpath` and `filename` ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))


- Deprecated `PytorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
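Both profiler deprecations above follow the usual keyword-renaming pattern: accept the old argument, warn, and map it onto the new ones. A stdlib sketch — the class and the dirpath/filename split are illustrative, not the real `Profiler` implementation:

```python
import os
import warnings


class ProfilerSketch:
    def __init__(self, dirpath=None, filename=None, output_filename=None):
        if output_filename is not None:
            warnings.warn(
                "`output_filename` is deprecated, use `dirpath` and `filename`",
                DeprecationWarning,
            )
            # Illustrative mapping of the old argument onto the new pair.
            dirpath, filename = os.path.split(output_filename)
        self.dirpath = dirpath
        self.filename = filename


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    prof = ProfilerSketch(output_filename="lightning_logs/profile.txt")

print(prof.dirpath, prof.filename)  # lightning_logs profile.txt
```

Old call sites keep working for a release cycle while emitting a `DeprecationWarning` that names the replacement arguments.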


- Deprecated metrics in favor of `torchmetrics` ([#6505](https://github.com/PyTorchLightning/pytorch-lightning/pull/6505),

[#6530](https://github.com/PyTorchLightning/pytorch-lightning/pull/6530),
@@ -80,6 +109,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

[#6584](https://github.com/PyTorchLightning/pytorch-lightning/pull/6584),

[#6636](https://github.com/PyTorchLightning/pytorch-lightning/pull/6636),

[#6637](https://github.com/PyTorchLightning/pytorch-lightning/pull/6637),

)


@@ -118,6 +151,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Added Autocast in validation, test and predict modes for Native AMP ([#6565](https://github.com/PyTorchLightning/pytorch-lightning/pull/6565))


- Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))


@@ -145,9 +179,21 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed LightningModule `all_gather` on cpu tensors ([#6416](https://github.com/PyTorchLightning/pytorch-lightning/pull/6416))


- Fixed a bug where `all_gather` would not work correctly with `tpu_cores=8` ([#6587](https://github.com/PyTorchLightning/pytorch-lightning/pull/6587))


- Update Gradient Clipping for the TPU Accelerator ([#6576](https://github.com/PyTorchLightning/pytorch-lightning/pull/6576))


- Fixed torch distributed not available in setup hook for DDP ([#6506](https://github.com/PyTorchLightning/pytorch-lightning/pull/6506))


- Fixed comparing required versions ([#6434](https://github.com/PyTorchLightning/pytorch-lightning/pull/6434))


- Fixed a bug where gradients were disabled after calling `Trainer.predict` ([#6657](https://github.com/PyTorchLightning/pytorch-lightning/pull/6657))


## [1.2.4] - 2021-03-16

### Changed
@@ -168,12 +214,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed when Train loop config was run during `Trainer.predict` ([#6541](https://github.com/PyTorchLightning/pytorch-lightning/pull/6541))


- Fixed a bug where `all_gather` would not work correctly with `tpu_cores=8` ([#6587](https://github.com/PyTorchLightning/pytorch-lightning/pull/6587))


- Update Gradient Clipping for the TPU Accelerator ([#6576](https://github.com/PyTorchLightning/pytorch-lightning/pull/6576))


## [1.2.3] - 2021-03-09

### Fixed
2 changes: 1 addition & 1 deletion Makefile
@@ -29,4 +29,4 @@ test: clean

docs: clean
pip install --quiet -r requirements/docs.txt
-python -m sphinx -b html -W docs/source docs/build
+python -m sphinx -b html -W --keep-going docs/source docs/build
4 changes: 3 additions & 1 deletion azure-pipelines.yml
@@ -78,7 +78,7 @@ jobs:
displayName: 'Get legacy checkpoints'
- bash: |
-python -m pytest pytorch_lightning tests -v --cov=pytorch_lightning --junitxml=$(Build.StagingDirectory)/test-results.xml --durations=50
+python -m coverage run --source pytorch_lightning -m pytest pytorch_lightning tests -v --junitxml=$(Build.StagingDirectory)/test-results.xml --durations=50
displayName: 'Testing: standard'
- bash: |
@@ -121,4 +121,6 @@ jobs:
# cd pl_examples/basic_examples
# bash submit_ddp_job.sh
# bash submit_ddp2_job.sh
+env:
+  PL_USE_MOCKED_MNIST: "1"
displayName: 'Examples'
2 changes: 1 addition & 1 deletion docs/source/advanced/multi_gpu.rst
@@ -267,7 +267,7 @@ Lightning allows multiple ways of training
- TPUs (``tpu_cores=8|x``) (tpu or TPU pod)

.. note::
-If you request multiple GPUs or nodes without setting a mode, DDP will be automatically used.
+If you request multiple GPUs or nodes without setting a mode, DDP Spawn will be automatically used.

For a deeper understanding of what Lightning is doing, feel free to read this
`guide <https://medium.com/@_willfalcon/9-tips-for-training-lightning-fast-neural-networks-in-pytorch-8e63a502f565>`_.
24 changes: 14 additions & 10 deletions docs/source/conf.py
@@ -13,7 +13,6 @@
# documentation root, use os.path.abspath to make it absolute, like shown here.

# import m2r
-import builtins
import glob
import os
import shutil
@@ -27,10 +26,13 @@

FOLDER_GENERATED = 'generated'
SPHINX_MOCK_REQUIREMENTS = int(os.environ.get('SPHINX_MOCK_REQUIREMENTS', True))
-if SPHINX_MOCK_REQUIREMENTS:
-    builtins.__LIGHTNING_SETUP__ = True

-import pytorch_lightning  # noqa: E402
+try:
+    from pytorch_lightning import info
+except ImportError:
+    # alternative https://stackoverflow.com/a/67692/4521646
+    sys.path.append(os.path.join(PATH_ROOT, "pytorch_lightning"))
+    import info
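The try/except above is a general fallback pattern: prefer the packaged import, then fall back to importing the module straight from the source tree. A self-contained sketch using a throwaway package on disk (package and module names are illustrative):

```python
import importlib
import os
import sys
import tempfile


def load_module(path_root, package, module):
    """Import `package.module`, falling back to loading `module` directly
    from the package directory when the package itself is not importable."""
    try:
        return importlib.import_module(f"{package}.{module}")
    except ImportError:
        sys.path.append(os.path.join(path_root, package))
        return importlib.import_module(module)


# Build a throwaway source tree to demonstrate the fallback branch.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "mypkg"))
with open(os.path.join(root, "mypkg", "info.py"), "w") as f:
    f.write("__version__ = '0.0.1'\n")

info = load_module(root, "mypkg", "info")  # "mypkg" is not on sys.path
print(info.__version__)
```

This lets the docs build read version metadata without importing (and therefore installing) the whole package.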

# -- Project documents -------------------------------------------------------

@@ -79,13 +81,13 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
# -- Project information -----------------------------------------------------

project = 'PyTorch Lightning'
-copyright = pytorch_lightning.__copyright__
-author = pytorch_lightning.__author__
+copyright = info.__copyright__
+author = info.__author__

# The short X.Y version
-version = pytorch_lightning.__version__
+version = info.__version__
# The full version, including alpha/beta/rc tags
-release = pytorch_lightning.__version__
+release = info.__version__

# -- General configuration ---------------------------------------------------

@@ -176,8 +178,8 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
# documentation.

html_theme_options = {
-'pytorch_project': pytorch_lightning.__homepage__,
-'canonical_url': pytorch_lightning.__homepage__,
+'pytorch_project': info.__homepage__,
+'canonical_url': info.__homepage__,
'collapse_navigation': False,
'display_version': True,
'logo_only': False,
@@ -279,6 +281,7 @@ def _transform_changelog(path_in: str, path_out: str) -> None:
'torch': ('https://pytorch.org/docs/stable/', None),
'numpy': ('https://numpy.org/doc/stable/', None),
'PIL': ('https://pillow.readthedocs.io/en/stable/', None),
+'torchmetrics': ('https://torchmetrics.readthedocs.io/en/stable/', None),
}

# -- Options for todo extension ----------------------------------------------
@@ -331,6 +334,7 @@ def package_list_from_file(file):
}
MOCK_PACKAGES = []
if SPHINX_MOCK_REQUIREMENTS:
+MOCK_PACKAGES += ['fairscale']
# mock also base packages when we are on RTD since we don't install them there
MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements.txt'))
MOCK_PACKAGES += package_list_from_file(os.path.join(PATH_ROOT, 'requirements', 'extra.txt'))
6 changes: 3 additions & 3 deletions docs/source/starter/introduction_guide.rst
@@ -882,8 +882,8 @@ Or maybe we have a model that we use to do generation
generated_imgs = model(z)
-To perform inference at scale, it is possible to use ``trainer.predict`` with LightningModule ``predict`` function
-By default, LightningModule ``predict`` calls forward, but it can be overridden to add any processing logic.
+To perform inference at scale, it is possible to use ``trainer.predict`` with LightningModule ``predict_step`` function
+By default, LightningModule ``predict_step`` calls forward, but it can be overridden to add any processing logic.

.. code-block:: python
@@ -893,7 +893,7 @@ By default, LightningModule ``predict`` calls forward, but it can be overriden t
imgs = self.decoder(z)
return imgs
-def predict(self, batch, batch_idx: int , dataloader_idx: int = None):
+def predict_step(self, batch, batch_idx: int , dataloader_idx: int = None):
return self(batch)
2 changes: 1 addition & 1 deletion docs/source/starter/new-project.rst
@@ -83,7 +83,7 @@ Step 1: Define LightningModule

.. testcode::

-class LitAutoEncoder(LightningModule):
+class LitAutoEncoder(pl.LightningModule):

def __init__(self):
super().__init__()
4 changes: 2 additions & 2 deletions pl_examples/__init__.py
@@ -15,10 +15,10 @@
_DATASETS_PATH = os.path.join(_PACKAGE_ROOT, 'Datasets')

_TORCHVISION_AVAILABLE = _module_available("torchvision")
-_TORCHVISION_MNIST_AVAILABLE = True
+_TORCHVISION_MNIST_AVAILABLE = not bool(os.environ.get("PL_USE_MOCKED_MNIST", False))
_DALI_AVAILABLE = _module_available("nvidia.dali")

-if _TORCHVISION_AVAILABLE:
+if _TORCHVISION_MNIST_AVAILABLE:
try:
from torchvision.datasets.mnist import MNIST
MNIST(_DATASETS_PATH, download=True)
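The `PL_USE_MOCKED_MNIST` check above treats any non-empty environment value as "set". A sketch of that truthiness, including the usual gotcha that the string `"0"` still counts as set (the function name is illustrative):

```python
import os


def torchvision_mnist_available(env=os.environ):
    # Mirrors the check above: unset -> True; any non-empty value (even
    # "0") disables the real torchvision MNIST in favor of the mock.
    return not bool(env.get("PL_USE_MOCKED_MNIST", False))


assert torchvision_mnist_available({}) is True
assert torchvision_mnist_available({"PL_USE_MOCKED_MNIST": "1"}) is False
# Gotcha: a non-empty "0" is a truthy string, so it also disables MNIST.
assert torchvision_mnist_available({"PL_USE_MOCKED_MNIST": "0"}) is False
```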