Commit 9f3ef1b

update lightning version to v1.2.2
remove unnecessary import

Update CHANGELOG

resolve a bug

remove print

resolve bug

fix pep8 issues
kaushikb11 authored and lexierule committed Mar 5, 2021
1 parent c5e9d67 commit 9f3ef1b
Showing 5 changed files with 4 additions and 24 deletions.
21 changes: 0 additions & 21 deletions CHANGELOG.md
@@ -9,41 +9,20 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Added


- Added `checkpoint` parameter to callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))


### Changed

- Changed the order of `backward`, `step`, `zero_grad` to `zero_grad`, `backward`, `step` ([#6147](https://github.com/PyTorchLightning/pytorch-lightning/pull/6147))


- Changed default for DeepSpeed CPU Offload to False, due to prohibitively slow speeds at smaller scale ([#6262](https://github.com/PyTorchLightning/pytorch-lightning/pull/6262))


### Deprecated


### Removed


### Fixed

- Fixed epoch level schedulers not being called when `val_check_interval < 1.0` ([#6075](https://github.com/PyTorchLightning/pytorch-lightning/pull/6075))


- Fixed multiple early stopping callbacks ([#6197](https://github.com/PyTorchLightning/pytorch-lightning/pull/6197))


- Fixed incorrect usage of `detach()`, `cpu()`, `to()` ([#6216](https://github.com/PyTorchLightning/pytorch-lightning/pull/6216))


- Fixed LBFGS optimizer support which didn't converge in automatic optimization ([#6147](https://github.com/PyTorchLightning/pytorch-lightning/pull/6147))


- Prevent `WandbLogger` from dropping values ([#5931](https://github.com/PyTorchLightning/pytorch-lightning/pull/5931))


- Fixed error thrown when using valid distributed mode in multi node ([#6297](https://github.com/PyTorchLightning/pytorch-lightning/pull/6297))


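The `on_save_checkpoint` entry above (#6072) passes the checkpoint dictionary into the callback hook. A minimal sketch of a callback using that signature; the hook name and argument order follow the changelog entry, while the callback class and the key it writes are hypothetical:

```python
from pytorch_lightning.callbacks import Callback


class StampCheckpoint(Callback):
    """Hypothetical callback; illustrates the hook receiving `checkpoint`."""

    def on_save_checkpoint(self, trainer, pl_module, checkpoint):
        # `checkpoint` is the dict about to be written to disk, so a
        # callback can inspect or extend it in place
        checkpoint["stamp_global_step"] = trainer.global_step
```
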
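The reordering entry (#6147) makes Lightning's internal loop match the canonical PyTorch pattern, clearing gradients before the backward pass instead of after the step; the same PR also fixes LBFGS support (see the Fixed section above). A runnable plain-PyTorch sketch of the resulting order, with a toy model:

```python
import torch

# toy model and optimizer; only the call order matters here
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
batch = torch.randn(8, 4)

optimizer.zero_grad()              # 1. clear stale gradients first
loss = model(batch).pow(2).mean()  # 2. forward
loss.backward()                    # 3. accumulate fresh gradients
optimizer.step()                   # 4. apply the update
```
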
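The DeepSpeed entry (#6262) only flips a default, so CPU offloading now has to be requested explicitly. A sketch, assuming the 1.2-era plugin exposes a `cpu_offload` flag:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DeepSpeedPlugin

# cpu_offload is assumed to default to False after this release;
# opt back in explicitly where the slowdown at small scale is acceptable
trainer = Trainer(gpus=1, precision=16, plugins=[DeepSpeedPlugin(cpu_offload=True)])
```
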
2 changes: 1 addition & 1 deletion pytorch_lightning/__init__.py
@@ -5,7 +5,7 @@
import time

_this_year = time.strftime("%Y")
-__version__ = '1.2.1'
+__version__ = '1.2.2'
__author__ = 'William Falcon et al.'
__author_email__ = 'waf2107@columbia.edu'
__license__ = 'Apache-2.0'
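Since the only change here is the version string, downstream code can gate on it. A small sketch using the `packaging` helper, an assumed choice rather than something this commit uses:

```python
import pytorch_lightning as pl
from packaging.version import Version

# verify the patch release carrying these fixes is installed
if Version(pl.__version__) < Version("1.2.2"):
    raise RuntimeError(f"need pytorch-lightning>=1.2.2, found {pl.__version__}")
```
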
3 changes: 2 additions & 1 deletion pytorch_lightning/trainer/training_loop.py
@@ -514,6 +514,7 @@ def run_training_epoch(self):
# VALIDATE IF NEEDED + CHECKPOINT CALLBACK
# -----------------------------------------
should_check_val = self.should_check_val_fx(batch_idx, is_last_batch)
+
if should_check_val:
self.trainer.run_evaluation()
val_loop_called = True
@@ -577,7 +578,7 @@ def run_training_epoch(self):
self.trainer.run_evaluation(on_epoch=True)

# reset stage to train
-self.trainer._running_stage = RunningStage.TRAINING
+self.trainer._set_running_stage(RunningStage.TRAINING, self.trainer.lightning_module)

# increment the global step once
# progress global step according to grads progress
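The second hunk replaces a bare attribute assignment with `_set_running_stage`, which suggests the stage must be mirrored onto the `LightningModule` as well as the trainer; the test change below sets `model.running_stage` for the same reason. A simplified sketch of what such a setter plausibly does, assumed from the diff rather than copied from the implementation:

```python
class _TrainerStageMixin:
    def _set_running_stage(self, stage, model_ref):
        # assumed behaviour: keep the module's view of the stage in sync
        # with the trainer's, so model-side code sees the correct stage
        if model_ref is not None:
            model_ref.running_stage = stage
        self._running_stage = stage
```
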
1 change: 0 additions & 1 deletion tests/accelerators/test_accelerator_connector.py
@@ -35,7 +35,6 @@
)
from pytorch_lightning.plugins.environments import ClusterEnvironment, SLURMEnvironment, TorchElasticEnvironment
from pytorch_lightning.utilities import _DEEPSPEED_AVAILABLE
-from pytorch_lightning.utilities.exceptions import MisconfigurationException
from tests.helpers.boring_model import BoringModel


1 change: 1 addition & 0 deletions tests/overrides/test_data_parallel.py
@@ -147,6 +147,7 @@ def training_step(self, batch, batch_idx):
model = TestModel().to(device)
model.trainer = MagicMock()
model.trainer._running_stage = RunningStage.TRAINING
+model.running_stage = RunningStage.TRAINING
batch = torch.rand(2, 32).to(device)
batch_idx = 0

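The added line sets the stage on the model itself. Since `model.trainer` is a `MagicMock`, assigning `trainer._running_stage` only records an attribute on the mock and nothing reaches the module, hence the explicit `model.running_stage` assignment. A standalone sketch of that mock behaviour:

```python
from unittest.mock import MagicMock


class Model:
    pass


trainer = MagicMock()
trainer._running_stage = "train"  # stored on the mock only

model = Model()
model.trainer = trainer
assert not hasattr(model, "running_stage")  # the mock syncs nothing
model.running_stage = "train"               # so the test sets it directly
```
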
