Merge branch 'master' into tests/doctest-examples
Borda committed Dec 17, 2020
2 parents d753dfc + 405a840 commit 937f81b
Showing 57 changed files with 1,006 additions and 1,048 deletions.
112 changes: 56 additions & 56 deletions .mergify.yml
@@ -12,59 +12,59 @@
# See the License for the specific language governing permissions and
# limitations under the License.

pull_request_rules:

- name: Automatic merge on approval
conditions:
- base=master
# number of review approvals
- "#approved-reviews-by>=3"
# no waiting or assigned review
- "#review-requested=0"
# no requested changes from any reviewer
- "#changes-requested-reviews-by=0"
# this effectively requires ALL checks to pass, as we have around 40 tests in total
- "#status-success>=54"
# this is just in case since we rely on GPU tests (note: redundant to the above)
- status-success=continuous-integration/drone/pr
- "status-success=ci/circleci: TPU-tests"
# this is pattern-like; unfortunately it serves as `any(...)` (note: redundant to the above)
#- "status-success~=^ci/circleci:"
# no conflict with master branch
- -conflict
# was not closed yet
- -closed
# filter-out GH draft PRs
- -draft
actions:
delete_head_branch: {}
merge:
# https://doc.mergify.io/merge-action.html#strict-merge
# (on head branch) $ git merge --no-ff base
# (on head branch) # Wait for CI to go green
# (on head branch) # Squash all commits
# (on base branch) $ git merge --ff head
strict: true
method: squash
comment:
message: Great job! =)

- name: warn on conflicts
conditions:
- conflict
# filter-out GH draft PRs
- -draft
actions:
comment:
message: This pull request is now in conflict... :(

- name: add core reviewer
conditions:
# filter-out GH draft PRs
- -draft
# number of review approvals
- "#approved-reviews-by<3"
actions:
request_reviews:
teams:
- core-contributors
#pull_request_rules:
#
# - name: Automatic merge on approval
# conditions:
# - base=master
# # number of review approvals
# - "#approved-reviews-by>=3"
# # no waiting or assigned review
# - "#review-requested=0"
# # no requested changes from any reviewer
# - "#changes-requested-reviews-by=0"
# # this effectively requires ALL checks to pass, as we have around 40 tests in total
# - "#status-success>=54"
# # this is just in case since we rely on GPU tests (note: redundant to the above)
# - status-success=continuous-integration/drone/pr
# - "status-success=ci/circleci: TPU-tests"
# # this is pattern-like; unfortunately it serves as `any(...)` (note: redundant to the above)
# #- "status-success~=^ci/circleci:"
# # no conflict with master branch
# - -conflict
# # was not closed yet
# - -closed
# # filter-out GH draft PRs
# - -draft
# actions:
# delete_head_branch: {}
# merge:
# # https://doc.mergify.io/merge-action.html#strict-merge
# # (on head branch) $ git merge --no-ff base
# # (on head branch) # Wait for CI to go green
# # (on head branch) # Squash all commits
# # (on base branch) $ git merge --ff head
# strict: true
# method: squash
# comment:
# message: Great job! =)
#
# - name: warn on conflicts
# conditions:
# - conflict
# # filter-out GH draft PRs
# - -draft
# actions:
# comment:
# message: This pull request is now in conflict... :(
#
# - name: add core reviewer
# conditions:
# # filter-out GH draft PRs
# - -draft
# # number of review approvals
# - "#approved-reviews-by<3"
# actions:
# request_reviews:
# teams:
# - core-contributors
29 changes: 20 additions & 9 deletions CHANGELOG.md
@@ -5,7 +5,7 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).


## [unreleased.Features] - YYYY-MM-DD
## [unreleased.BugFix] - YYYY-MM-DD

### Added

@@ -22,28 +22,39 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Fixed



## [unreleased.BugFix] - YYYY-MM-DD
## [1.1.1] - 2020-12-15

### Added

- Add a notebook example to reach a quick baseline of ~94% accuracy on CIFAR10 using Resnet in Lightning ([#4818](https://github.com/PyTorchLightning/pytorch-lightning/pull/4818))

### Changed


### Deprecated
- Simplify accelerator steps ([#5015](https://github.com/PyTorchLightning/pytorch-lightning/pull/5015))
- Refactor load in checkpoint connector ([#4593](https://github.com/PyTorchLightning/pytorch-lightning/pull/4593))
- Fixed the saved filename in `ModelCheckpoint` when it already exists ([#4861](https://github.com/PyTorchLightning/pytorch-lightning/pull/4861))


### Removed

- Drop duplicate metrics ([#5014](https://github.com/PyTorchLightning/pytorch-lightning/pull/5014))
- Remove beta arg from F1 class and functional ([#5076](https://github.com/PyTorchLightning/pytorch-lightning/pull/5076))

### Fixed

- Fixed trainer by default `None` in `DDPAccelerator` ([#4915](https://github.com/PyTorchLightning/pytorch-lightning/pull/4915))


- Fixed `LightningOptimizer` exposes optimizer attributes ([#5095](https://github.com/PyTorchLightning/pytorch-lightning/pull/5095))

- Fixed `LightningOptimizer` to expose optimizer attributes ([#5095](https://github.com/PyTorchLightning/pytorch-lightning/pull/5095))
- Do not warn when the `name` key is used in the `lr_scheduler` dict ([#5057](https://github.com/PyTorchLightning/pytorch-lightning/pull/5057))
- Check if optimizer supports closure ([#4981](https://github.com/PyTorchLightning/pytorch-lightning/pull/4981))
- Extend LightningOptimizer to expose underlying Optimizer attributes + update doc ([#5095](https://github.com/PyTorchLightning/pytorch-lightning/pull/5095))
- Add deprecated metric utility functions back to functional (
[#5067](https://github.com/PyTorchLightning/pytorch-lightning/pull/5067),
[#5068](https://github.com/PyTorchLightning/pytorch-lightning/pull/5068))
- Allow any input in `to_onnx` and `to_torchscript` ([#4378](https://github.com/PyTorchLightning/pytorch-lightning/pull/4378))
- Do not warn when the name key is used in the `lr_scheduler` dict ([#5057](https://github.com/PyTorchLightning/pytorch-lightning/pull/5057))

- Fixed `DDPHPCAccelerator` hangs in DDP construction by calling `init_device` ([#5157](https://github.com/PyTorchLightning/pytorch-lightning/pull/5157))


## [1.1.0] - 2020-12-09
Expand Down
2 changes: 1 addition & 1 deletion benchmarks/test_parity.py
@@ -4,8 +4,8 @@
import pytest
import torch

from pytorch_lightning import seed_everything, Trainer
import tests.base.develop_utils as tutils
from pytorch_lightning import Trainer, seed_everything
from tests.base.models import ParityModuleMNIST, ParityModuleRNN


2 changes: 1 addition & 1 deletion benchmarks/test_sharded_parity.py
@@ -6,7 +6,7 @@
import pytest
import torch

from pytorch_lightning import Trainer, seed_everything
from pytorch_lightning import seed_everything, Trainer
from pytorch_lightning.plugins.ddp_plugin import DDPPlugin
from pytorch_lightning.plugins.sharded_plugin import DDPShardedPlugin
from pytorch_lightning.utilities import FAIRSCALE_AVAILABLE, NATIVE_AMP_AVAILABLE
2 changes: 2 additions & 0 deletions dockers/base-xla/Dockerfile
@@ -97,6 +97,8 @@ RUN \
python -c "fname = 'requirements.txt' ; lines = [line for line in open(fname).readlines() if not line.startswith('torch')] ; open(fname, 'w').writelines(lines)" && \
# drop Horovod as it is not needed
python -c "fname = 'requirements/extra.txt' ; lines = [line for line in open(fname).readlines() if not line.startswith('horovod')] ; open(fname, 'w').writelines(lines)" && \
# drop fairscale as it is not needed
python -c "fname = 'requirements/extra.txt' ; lines = [line for line in open(fname).readlines() if 'fairscale' not in line] ; open(fname, 'w').writelines(lines)" && \
# drop TorchVision as it was installed with XLA
python -c "fname = 'requirements/examples.txt' ; lines = [line for line in open(fname).readlines() if not line.startswith('torchvision')] ; open(fname, 'w').writelines(lines)" && \
pip install --requirement ./requirements/devel.txt --upgrade-strategy only-if-needed && \
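
The fairscale line added above follows the same inline-Python pattern already used for torch and horovod: read the requirements file, keep only the lines that do not mention the package, and rewrite the file in place. A minimal sketch of that filter, expanded for readability (the path is the one from the Dockerfile):

    fname = 'requirements/extra.txt'
    # keep every requirement line that does not mention fairscale
    lines = [line for line in open(fname).readlines() if 'fairscale' not in line]
    # rewrite the file in place with the remaining requirements
    open(fname, 'w').writelines(lines)
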
4 changes: 3 additions & 1 deletion dockers/tpu-tests/Dockerfile
@@ -27,8 +27,10 @@ COPY ./ ./pytorch-lightning/
RUN \
# Install pytorch-lightning at the current PR, plus dependencies.
#pip install -r pytorch-lightning/requirements.txt --no-cache-dir && \
# drop Horovod
# drop Horovod as it is not needed
python -c "fname = 'pytorch-lightning/requirements/extra.txt' ; lines = [line for line in open(fname).readlines() if not line.startswith('horovod')] ; open(fname, 'w').writelines(lines)" && \
# drop fairscale as it is not needed
python -c "fname = 'pytorch-lightning/requirements/extra.txt' ; lines = [line for line in open(fname).readlines() if 'fairscale' not in line] ; open(fname, 'w').writelines(lines)" && \
pip install -r pytorch-lightning/requirements/devel.txt --no-cache-dir --upgrade-strategy only-if-needed

#RUN python -c "import pytorch_lightning as pl; print(pl.__version__)"
4 changes: 2 additions & 2 deletions docs/source/introduction_guide.rst
@@ -601,8 +601,8 @@ In this method we do all the preparation we need to do once (instead of on every
def setup(self, stage):
# transform
transform=transforms.Compose([transforms.ToTensor()])
MNIST(os.getcwd(), train=True, download=False, transform=transform)
MNIST(os.getcwd(), train=False, download=False, transform=transform)
mnist_train = MNIST(os.getcwd(), train=True, download=False, transform=transform)
mnist_test = MNIST(os.getcwd(), train=False, download=False, transform=transform)
# train/val split
mnist_train, mnist_val = random_split(mnist_train, [55000, 5000])
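
The change matters because the later random_split call references mnist_train; if the datasets are never bound to names, that call fails with a NameError. A minimal sketch of the corrected hook with the imports it assumes (torchvision's MNIST and torch.utils.data.random_split, as used elsewhere in the guide):

    import os

    from torch.utils.data import random_split
    from torchvision import transforms
    from torchvision.datasets import MNIST

    def setup(self, stage):
        # transform
        transform = transforms.Compose([transforms.ToTensor()])
        # bind the datasets to names so the split below can use them
        mnist_train = MNIST(os.getcwd(), train=True, download=False, transform=transform)
        mnist_test = MNIST(os.getcwd(), train=False, download=False, transform=transform)
        # train/val split
        mnist_train, mnist_val = random_split(mnist_train, [55000, 5000])
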
2 changes: 1 addition & 1 deletion docs/source/multi_gpu.rst
@@ -663,7 +663,7 @@ It is highly recommended to use Sharded Training in multi-GPU environments where
A technical note: as batch size scales, storing activations for the backwards pass becomes the bottleneck in training. As a result, sharding optimizer state and gradients becomes less impactful.
Future work will bring optional sharding of activations and model parameters to reduce memory further, though this will come with a speed cost.

To use Sharded Training, you need to first install FairScale using the command below or install all extras using ``pip install pytorch-lightning["extra"]``.
To use Sharded Training, you need to first install FairScale using the command below.

.. code-block:: bash
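
For context, once FairScale is installed, sharded training in the 1.1.x series is switched on through the Trainer rather than through model code. A hedged sketch (the plugins='ddp_sharded' string is assumed from the docs of that release, not taken from this diff; model stands in for any LightningModule):

    from pytorch_lightning import Trainer

    # request 4 GPUs, DDP, and the sharded plugin (assumed 1.1.x API)
    trainer = Trainer(gpus=4, accelerator='ddp', plugins='ddp_sharded')
    # trainer.fit(model)
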
7 changes: 7 additions & 0 deletions notebooks/04-transformers-text-classification.ipynb
@@ -1,5 +1,12 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
7 changes: 7 additions & 0 deletions notebooks/05-trainer-flags-overview.ipynb
@@ -1,5 +1,12 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/05-trainer-flags-overview.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -16,7 +16,7 @@ exclude = "(.eggs|.git|.hg|.mypy_cache|.nox|.tox|.venv|.svn|_build|buck-out|buil

[tool.isort]
known_first_party = [
"bencharmks",
"benchmarks",
"docs",
"pl_examples",
"pytorch_lightning",
@@ -52,3 +52,5 @@ skip_glob = [
]
profile = "black"
line_length = 120
force_sort_within_sections = "True"
order_by_type = "False"
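
The two new options change how isort arranges imports, which is what drives the import reordering visible in the benchmark files above. A hedged illustration, described from isort's documentation (the exact final ordering also depends on the rest of this configuration, e.g. profile = "black" and case handling):

    # order_by_type = true (isort's default) groups names inside a from-import
    # by implied type (constants, then classes, then functions), so a class
    # such as Trainer sorts before a function such as seed_everything:
    from pytorch_lightning import Trainer, seed_everything

    # order_by_type = false drops that grouping and sorts the names
    # alphabetically instead.

    # force_sort_within_sections = true sorts plain `import x` statements and
    # `from x import y` statements together within a section, instead of
    # keeping the two statement forms in separate groups:
    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import EarlyStopping
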
2 changes: 1 addition & 1 deletion pytorch_lightning/__init__.py
@@ -1,6 +1,6 @@
"""Root package info."""

__version__ = '1.1.1rc0'
__version__ = '1.1.1'
__author__ = 'William Falcon et al.'
__author_email__ = 'waf2107@columbia.edu'
__license__ = 'Apache-2.0'
3 changes: 3 additions & 0 deletions pytorch_lightning/accelerators/ddp_cpu_hpc_accelerator.py
@@ -48,3 +48,6 @@ def model_to_device(self, model, process_idx):
def get_device_ids(self):
device_ids = None
return device_ids

def init_device(self, process_idx):
pass
1 change: 1 addition & 0 deletions pytorch_lightning/accelerators/ddp_hpc_accelerator.py
@@ -126,6 +126,7 @@ def ddp_train(self, process_idx, model):
"""
# determine which process we are and world size
self.set_world_ranks(process_idx)
self.init_device(process_idx)

# toggle prog bar
if (self.trainer.node_rank != 0 or process_idx != 0) and self.trainer.progress_bar_callback is not None:
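
Calling init_device before DDP is constructed is the fix referenced by the changelog entry for #5157. A hedged sketch of what such a hook typically does (the body below is an assumption, not code from this diff; the real implementation resolves the device from the trainer's configured GPU ids):

    import torch

    def init_device(self, process_idx):
        # pin this process to its GPU before DDP wraps the model,
        # so the ranks do not all default to cuda:0
        torch.cuda.set_device(process_idx)
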
11 changes: 8 additions & 3 deletions pytorch_lightning/callbacks/early_stopping.py
@@ -19,14 +19,16 @@
Monitor a metric and stop training when it stops improving.
"""
import numbers
import os

import numpy as np
import torch

from pytorch_lightning import _logger as log
from pytorch_lightning.callbacks.base import Callback
from pytorch_lightning.utilities import rank_zero_info, rank_zero_warn, TPU_AVAILABLE
from pytorch_lightning.metrics.metric import Metric
from pytorch_lightning.utilities import TPU_AVAILABLE, rank_zero_info, rank_zero_warn


class EarlyStopping(Callback):
@@ -201,8 +203,11 @@ def _run_early_stopping_check(self, trainer, pl_module):
# when in dev debugging
trainer.dev_debugger.track_early_stopping_history(self, current)

if not isinstance(current, torch.Tensor):
current = torch.tensor(current, device=pl_module.device)
if current is not None:
if isinstance(current, Metric):
current = current.compute()
elif isinstance(current, numbers.Number):
current = torch.tensor(current, device=pl_module.device, dtype=torch.float)

if trainer.use_tpu and TPU_AVAILABLE:
current = current.cpu()
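
With this change the monitored value may be a Metric object (reduced via compute()) or a plain Python number (wrapped in a float tensor on the module's device) before the TPU and comparison logic runs. A hedged standalone sketch of that dispatch (the helper name is hypothetical; device stands in for pl_module.device):

    import numbers

    import torch

    from pytorch_lightning.metrics.metric import Metric

    def normalize_monitor_value(current, device):
        if current is None:
            return None
        if isinstance(current, Metric):
            # metric objects are reduced to a tensor via compute()
            return current.compute()
        if isinstance(current, numbers.Number):
            # plain numbers are wrapped so later tensor ops (e.g. .cpu()) work
            return torch.tensor(current, device=device, dtype=torch.float)
        return current
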
2 changes: 1 addition & 1 deletion pytorch_lightning/callbacks/lr_monitor.py
@@ -157,7 +157,7 @@ def _find_names(self, lr_schedulers) -> List[str]:
names = []
for scheduler in lr_schedulers:
sch = scheduler['scheduler']
if 'name' in scheduler:
if scheduler['name'] is not None:
name = scheduler['name']
else:
opt_name = 'lr-' + sch.optimizer.__class__.__name__
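
The membership test is replaced because the scheduler dicts handled here are expected to always carry a 'name' key (None unless the user sets one), so only an explicit user-provided name should override the auto-generated 'lr-<Optimizer>' label. A hedged example of supplying that name from configure_optimizers (the model internals and scheduler choice are illustrative):

    import torch
    from torch import nn

    from pytorch_lightning import LightningModule

    class ExampleModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 2)

        def configure_optimizers(self):
            optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
            scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
            # the 'name' entry is what LearningRateMonitor picks up above
            return [optimizer], [{'scheduler': scheduler, 'name': 'lr-sgd'}]
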
