Merge branch 'master' into tests/doctest-examples
Borda committed Dec 11, 2020
2 parents 78cc48c + 63fb7f9 commit d753dfc
Showing 25 changed files with 563 additions and 133 deletions.
3 changes: 1 addition & 2 deletions .github/workflows/ci_test-base.yml
@@ -76,8 +76,7 @@ jobs:
with:
name: pytest-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}
path: junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml
# Use always() to always run this step to publish test results when there are test failures
if: always()
if: failure()

- name: Statistics
if: success()
3 changes: 1 addition & 2 deletions .github/workflows/ci_test-conda.yml
@@ -50,5 +50,4 @@ jobs:
with:
name: pytest-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}
path: junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml
# Use always() to always run this step to publish test results when there are test failures
if: always()
if: failure()
3 changes: 1 addition & 2 deletions .github/workflows/ci_test-full.yml
@@ -129,8 +129,7 @@ jobs:
with:
name: pytest-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}
path: junit/test-results-${{ runner.os }}-${{ matrix.python-version }}-${{ matrix.requires }}.xml
# Use always() to always run this step to publish test results when there are test failures
if: always()
if: failure()

- name: Statistics
if: success()
45 changes: 43 additions & 2 deletions CHANGELOG.md
@@ -5,6 +5,47 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).


## [unreleased.Features] - YYYY-MM-DD

### Added


### Changed


### Deprecated


### Removed


### Fixed



## [unreleased.BugFix] - YYYY-MM-DD

### Added


### Changed


### Deprecated


### Removed


### Fixed

- Fixed trainer by default `None` in `DDPAccelerator` ([#4915](https://github.com/PyTorchLightning/pytorch-lightning/pull/4915))


- Fixed `LightningOptimizer` to expose optimizer attributes ([#5095](https://github.com/PyTorchLightning/pytorch-lightning/pull/5095))



## [1.1.0] - 2020-12-09

### Added
@@ -44,9 +85,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Changed

- Removed `multiclass_roc` and `multiclass_precision_recall_curve`, use `roc` and `precision_recall_curve` instead ([#4549](https://github.com/PyTorchLightning/pytorch-lightning/pull/4549))
- Tuner algorithms will be skipped if `fast_dev_run=True` ([#3903](https://github.com/PyTorchLightning/pytorch-lightning/pull/3903))
- WandbLogger does not force wandb `reinit` arg to True anymore and creates a run only when needed ([#4648](https://github.com/PyTorchLightning/pytorch-lightning/pull/4648))
- `WandbLogger` does not force wandb `reinit` arg to True anymore and creates a run only when needed ([#4648](https://github.com/PyTorchLightning/pytorch-lightning/pull/4648))
- Changed `automatic_optimization` to be a model attribute ([#4602](https://github.com/PyTorchLightning/pytorch-lightning/pull/4602))
- Changed `Simple Profiler` report to order by percentage time spent + num calls ([#4880](https://github.com/PyTorchLightning/pytorch-lightning/pull/4880))
- Simplified optimization logic ([#4984](https://github.com/PyTorchLightning/pytorch-lightning/pull/4984))
@@ -64,6 +104,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Removed

- Removed `reorder` parameter of the `auc` metric ([#5004](https://github.com/PyTorchLightning/pytorch-lightning/pull/5004))
- Removed `multiclass_roc` and `multiclass_precision_recall_curve`, use `roc` and `precision_recall_curve` instead ([#4549](https://github.com/PyTorchLightning/pytorch-lightning/pull/4549))

### Fixed

49 changes: 36 additions & 13 deletions docs/source/optimizers.rst
@@ -191,46 +191,69 @@ override the :meth:`optimizer_step` function.
For example, here we step optimizer A every 2 batches and optimizer B every 4 batches.
.. testcode::
.. note:: When using Trainer(enable_pl_optimizer=True), there is no need to call `.zero_grad()`.
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False):
optimizer.step()
.. testcode::
def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
optimizer.zero_grad()
# Alternating schedule for optimizer steps (ie: GANs)
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False):
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, closure, on_tpu=False, using_native_amp=False, using_lbfgs=False):
# update generator opt every 2 steps
if optimizer_idx == 0:
if batch_nb % 2 == 0:
optimizer.step()
optimizer.zero_grad()
optimizer.step(closure=closure)
# update discriminator opt every 4 steps
if optimizer_idx == 1:
if batch_nb % 4 == 0:
optimizer.step()
optimizer.zero_grad()
optimizer.step(closure=closure)
.. note:: When using ``Trainer(enable_pl_optimizer=True)``, ``.step`` accepts a boolean ``make_optimizer_step`` which can be used as follows.
.. testcode::
def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
optimizer.zero_grad()
# Alternating schedule for optimizer steps (ie: GANs)
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, closure, on_tpu=False, using_native_amp=False, using_lbfgs=False):
# update generator opt every 2 steps
if optimizer_idx == 0:
optimizer.step(closure=closure, make_optimizer_step=(batch_nb % 2) == 0)
# ...
# add as many optimizers as you want
# update discriminator opt every 4 steps
if optimizer_idx == 1:
optimizer.step(closure=closure, make_optimizer_step=(batch_nb % 4) == 0)
Here we add a learning-rate warm-up.
.. testcode::
# learning rate warm-up
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, second_order_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False):
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, closure, on_tpu=False, using_native_amp=False, using_lbfgs=False):
# warm up lr
if self.trainer.global_step < 500:
lr_scale = min(1., float(self.trainer.global_step + 1) / 500.)
for pg in optimizer.param_groups:
pg['lr'] = lr_scale * self.hparams.learning_rate
# update params
optimizer.step()
optimizer.zero_grad()
optimizer.step(closure=closure)
The default ``optimizer_step`` relies on the internal ``LightningOptimizer`` to properly perform a step.
.. testcode::
from pytorch_lightning.core.optimizer import LightningOptimizer
# function hook in LightningModule
def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, closure, on_tpu=False, using_native_amp=False, using_lbfgs=False):
if not isinstance(optimizer, LightningOptimizer):
# wraps into LightningOptimizer only for running step
optimizer = LightningOptimizer.to_lightning_optimizer(optimizer, self.trainer)
optimizer.step(closure=closure)
----------
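
For readers following along outside the diff, here is a minimal, self-contained sketch of the closure-based override described above. It is not part of this commit: the model, data shapes, and warm-up length are placeholders, and the parameter names follow the hook definition in `pytorch_lightning/core/lightning.py` further down in this diff (`epoch`, `batch_idx`, `optimizer_closure`) rather than the shorthand used in the docs snippets.

```python
import torch
import pytorch_lightning as pl


class WarmupModel(pl.LightningModule):
    """Sketch of a LightningModule that warms up the learning rate
    inside the closure-based ``optimizer_step`` hook."""

    def __init__(self, learning_rate=1e-3):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.learning_rate = learning_rate

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.learning_rate)

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       optimizer_closure, on_tpu=False,
                       using_native_amp=False, using_lbfgs=False):
        # linearly scale the learning rate over the first 500 global steps
        if self.trainer.global_step < 500:
            lr_scale = min(1.0, float(self.trainer.global_step + 1) / 500.0)
            for pg in optimizer.param_groups:
                pg["lr"] = lr_scale * self.learning_rate
        # the closure re-runs training_step and backward before stepping
        optimizer.step(closure=optimizer_closure)
```
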
2 changes: 1 addition & 1 deletion pytorch_lightning/__init__.py
@@ -1,6 +1,6 @@
"""Root package info."""

__version__ = '1.1.0'
__version__ = '1.1.1rc0'
__author__ = 'William Falcon et al.'
__author_email__ = 'waf2107@columbia.edu'
__license__ = 'Apache-2.0'
4 changes: 1 addition & 3 deletions pytorch_lightning/core/lightning.py
@@ -1170,7 +1170,6 @@ def toggle_optimizer(self, optimizer: Optimizer, optimizer_idx: int):

def optimizer_step(
self,
*args,
epoch: int = None,
batch_idx: int = None,
optimizer: Optimizer = None,
@@ -1179,7 +1178,6 @@ def optimizer_step(
on_tpu: bool = None,
using_native_amp: bool = None,
using_lbfgs: bool = None,
**kwargs,
) -> None:
r"""
Override this method to adjust the default way the
@@ -1254,7 +1252,7 @@ def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
if not isinstance(optimizer, LightningOptimizer):
# wraps into LightningOptimizer only for running step
optimizer = LightningOptimizer.to_lightning_optimizer(optimizer, self.trainer)
optimizer.step(closure=optimizer_closure, *args, **kwargs)
optimizer.step(closure=optimizer_closure)

def optimizer_zero_grad(
self, epoch: int, batch_idx: int, optimizer: Optimizer, optimizer_idx: int
28 changes: 25 additions & 3 deletions pytorch_lightning/core/optimizer.py
@@ -57,12 +57,35 @@ def __init__(self,
else:
self.__class__ = type("Lightning" + optimizer.__class__.__name__, (self.__class__, optimizer.__class__), {})

self._trainer = None
self._optimizer = optimizer
self._trainer = None
self._accumulate_grad_batches = accumulate_grad_batches
self._automatic_optimization = None
self._optimizer_idx = None

@property
def defaults(self):
return self._optimizer.defaults

@defaults.setter
def defaults(self, defaults):
self._optimizer.defaults = defaults

@property
def state(self):
return self._optimizer.state

@state.setter
def state(self, state):
self._optimizer.state = state

@property
def param_groups(self):
return self._optimizer.param_groups

@param_groups.setter
def param_groups(self, param_groups):
self._optimizer.param_groups = param_groups

@property
def accumulate_grad_batches(self):
return self._accumulate_grad_batches
@@ -73,7 +96,6 @@ def accumulate_grad_batches(self, accumulate_grad_batches):

def _on_trainer_init(self, trainer):
self._trainer = proxy(trainer)
self._automatic_optimization = trainer.train_loop.automatic_optimization
for opt_idx, opt in enumerate(trainer.optimizers):
if opt == self._optimizer:
self._optimizer_idx = opt_idx
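
To illustrate why these pass-through properties matter, here is a sketch (not part of the commit, and ignoring everything else `LightningOptimizer` does): code that reads an optimizer's `defaults`, `state`, or `param_groups` keeps working when it is handed the wrapper instead of the raw `torch.optim` object.

```python
import torch
from torch.optim import SGD


class WrappedOptimizer:
    """Bare-bones stand-in mirroring the attribute delegation added above."""

    def __init__(self, optimizer):
        self._optimizer = optimizer

    @property
    def defaults(self):
        return self._optimizer.defaults

    @property
    def state(self):
        return self._optimizer.state

    @property
    def param_groups(self):
        return self._optimizer.param_groups

    def step(self, closure=None):
        return self._optimizer.step(closure=closure)


model = torch.nn.Linear(4, 2)
opt = WrappedOptimizer(SGD(model.parameters(), lr=0.1, momentum=0.9))

# user code and LR schedulers commonly read these attributes directly
print(opt.defaults["lr"])               # 0.1
print(opt.param_groups[0]["momentum"])  # 0.9
```
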
2 changes: 1 addition & 1 deletion pytorch_lightning/metrics/classification/__init__.py
@@ -14,7 +14,7 @@
from pytorch_lightning.metrics.classification.accuracy import Accuracy
from pytorch_lightning.metrics.classification.average_precision import AveragePrecision
from pytorch_lightning.metrics.classification.confusion_matrix import ConfusionMatrix
from pytorch_lightning.metrics.classification.f_beta import FBeta, F1
from pytorch_lightning.metrics.classification.f_beta import FBeta, Fbeta, F1
from pytorch_lightning.metrics.classification.precision_recall import Precision, Recall
from pytorch_lightning.metrics.classification.precision_recall_curve import PrecisionRecallCurve
from pytorch_lightning.metrics.classification.roc import ROC
5 changes: 2 additions & 3 deletions pytorch_lightning/metrics/classification/average_precision.py
@@ -92,9 +92,8 @@ def __init__(
self.add_state("target", default=[], dist_reduce_fx=None)

rank_zero_warn(
'Metric `AveragePrecision` will save all targets and'
' predictions in buffer. For large datasets this may lead'
' to large memory footprint.'
'Metric `AveragePrecision` will save all targets and predictions in buffer.'
' For large datasets this may lead to large memory footprint.'
)

def update(self, preds: torch.Tensor, target: torch.Tensor):
29 changes: 29 additions & 0 deletions pytorch_lightning/metrics/classification/f_beta.py
@@ -20,6 +20,7 @@
_fbeta_compute
)
from pytorch_lightning.metrics.metric import Metric
from pytorch_lightning.utilities import rank_zero_warn


class FBeta(Metric):
@@ -131,6 +132,34 @@ def compute(self) -> torch.Tensor:
self.actual_positives, self.beta, self.average)


# todo: remove in v1.2
class Fbeta(FBeta):
r"""
Computes `F-score <https://en.wikipedia.org/wiki/F-score>`_
.. warning :: Deprecated in favor of :func:`~pytorch_lightning.metrics.classification.f_beta.FBeta`
"""
def __init__(
self,
num_classes: int,
beta: float = 1.0,
threshold: float = 0.5,
average: str = "micro",
multilabel: bool = False,
compute_on_step: bool = True,
dist_sync_on_step: bool = False,
process_group: Optional[Any] = None,
):
rank_zero_warn(
"This `Fbeta` was deprecated in v1.0.x in favor of"
" `from pytorch_lightning.metrics.classification.f_beta import FBeta`."
" It will be removed in v1.2.0", DeprecationWarning
)
super().__init__(
num_classes, beta, threshold, average, multilabel, compute_on_step, dist_sync_on_step, process_group
)
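
A quick usage sketch (not part of the diff): the shim warns at construction and otherwise behaves like `FBeta`. The tensors are arbitrary, and the example assumes the label-style inputs and `update`/`compute` API of the 1.1 class-based metrics.

```python
import warnings

import torch
from pytorch_lightning.metrics.classification import FBeta, Fbeta

preds = torch.tensor([0, 2, 1, 1])
target = torch.tensor([0, 1, 2, 1])

# the deprecated alias raises a DeprecationWarning when constructed
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_metric = Fbeta(num_classes=3, beta=0.5)
assert any(issubclass(w.category, DeprecationWarning) for w in caught)

# apart from the warning, results match the replacement class
new_metric = FBeta(num_classes=3, beta=0.5)
old_metric.update(preds, target)
new_metric.update(preds, target)
assert torch.allclose(old_metric.compute(), new_metric.compute())
```
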


class F1(FBeta):
"""
Computes F1 metric. F1 metrics correspond to a harmonic mean of the
pytorch_lightning/metrics/classification/precision_recall_curve.py
@@ -102,9 +102,8 @@ def __init__(
self.add_state("target", default=[], dist_reduce_fx=None)

rank_zero_warn(
'Metric `PrecisionRecallCurve` will save all targets and'
' predictions in buffer. For large datasets this may lead'
' to large memory footprint.'
'Metric `PrecisionRecallCurve` will save all targets and predictions in buffer.'
' For large datasets this may lead to large memory footprint.'
)

def update(self, preds: torch.Tensor, target: torch.Tensor):
5 changes: 2 additions & 3 deletions pytorch_lightning/metrics/classification/roc.py
@@ -105,9 +105,8 @@ def __init__(
self.add_state("target", default=[], dist_reduce_fx=None)

rank_zero_warn(
'Metric `ROC` will save all targets and'
' predictions in buffer. For large datasets this may lead'
' to large memory footprint.'
'Metric `ROC` will save all targets and predictions in buffer.'
' For large datasets this may lead to large memory footprint.'
)

def update(self, preds: torch.Tensor, target: torch.Tensor):
7 changes: 6 additions & 1 deletion pytorch_lightning/metrics/functional/__init__.py
@@ -17,13 +17,18 @@
auc,
auroc,
dice_score,
f1_score,
fbeta_score,
get_num_classes,
iou,
multiclass_auroc,
precision,
precision_recall,
recall,
stat_scores,
stat_scores_multiple_classes,
iou,
to_categorical,
to_onehot,
)
from pytorch_lightning.metrics.functional.confusion_matrix import confusion_matrix
# TODO: unify metrics between class and functional, add below
