This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

Bump pytorch-lightning from 1.1.8 to 1.2.4 #17

Closed
wants to merge 1 commit from dependabot/pip/pytorch-lightning-1.2.4

Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of github Mar 22, 2021

Bumps pytorch-lightning from 1.1.8 to 1.2.4.

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.2.4] - 2021-03-16

Changed

  • Changed the default of find_unused_parameters back to True in DDP and DDP Spawn (#6438)
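A minimal sketch of how the previous behavior could be restored if the overhead of find_unused_parameters=True is unwanted, assuming the PyTorch Lightning 1.2.x plugin API (the model and dm names below are placeholders):

    # Hypothetical sketch: pass find_unused_parameters explicitly to DDP
    # through DDPPlugin instead of relying on the 1.2.4 default of True.
    import pytorch_lightning as pl
    from pytorch_lightning.plugins import DDPPlugin

    trainer = pl.Trainer(
        gpus=2,
        accelerator="ddp",
        plugins=[DDPPlugin(find_unused_parameters=False)],
    )
    # trainer.fit(model, datamodule=dm)  # model and dm are placeholders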

Fixed

  • Expose DeepSpeed loss parameters to allow users to fix loss instability (#6115)
  • Fixed DP reduction with collection (#6324)
  • Fixed an issue where the tuner would not tune the learning rate if also tuning the batch size (#4688)
  • Fixed broadcast to use PyTorch broadcast_object_list and add reduce_decision (#6410)
  • Fixed logger creating directory structure too early in DDP (#6380)
  • Fixed DeepSpeed additional memory use on rank 0 when default device not set early enough (#6460)
  • Fixed DummyLogger.log_hyperparams raising a TypeError when running with fast_dev_run=True (#6398)
  • Fixed an issue with Tuner.scale_batch_size not finding the batch size attribute in the datamodule (#5968); see the tuner sketch after this list
  • Fixed an exception in the layer summary when the model contains torch.jit scripted submodules (#6511)
  • Fixed the train loop config validation being run during Trainer.predict (#6541)
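The two tuner entries above (#4688 and #5968) both concern Trainer.tune; a minimal sketch of the code path they affect, assuming the standard 1.2.x tuning flags (model and dm below are placeholders for a LightningModule and a LightningDataModule):

    # Hypothetical sketch: tune batch size and learning rate together
    # (#4688), with batch_size defined on the datamodule (#5968).
    import pytorch_lightning as pl

    trainer = pl.Trainer(auto_scale_batch_size=True, auto_lr_find=True)
    # Requires model.lr (or model.learning_rate) and dm.batch_size attributes.
    # trainer.tune(model, datamodule=dm)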

Contributors

@awaelchli, @kaushikb11, @Palzer, @SeanNaren, @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.2.3] - 2021-03-09

Fixed

  • Fixed ModelPruning(make_pruning_permanent=True) pruning buffers getting removed when saved during training (#6073)
  • Fixed _stable_1d_sort to work when n >= N (#6177)
  • Fixed AttributeError when logger=None on TPU (#6221)
  • Fixed PyTorch Profiler with emit_nvtx (#6260)
  • Fixed trainer.test from best_path hanging after calling trainer.fit (#6272)
  • Fixed SingleTPU calling all_gather (#6296)
  • Ensure we check deepspeed/sharded in multinode DDP (#6297)
  • Check LightningOptimizer doesn't delete optimizer hooks (#6305)
  • Resolve memory leak for evaluation (#6326)
  • Ensure that clip gradients is only called if the value is greater than 0 (#6330)
  • Fixed Trainer not resetting lightning_optimizers when calling Trainer.fit() multiple times (#6372)

Contributors

@awaelchli, @carmocca, @Chizuchizu, @frankier, @SeanNaren, @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.2.2] - 2021-03-02

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.2.4] - 2021-03-16

Changed

  • Changed the default of find_unused_parameters back to True in DDP and DDP Spawn (#6438)

Fixed

  • Expose DeepSpeed loss parameters to allow users to fix loss instability (#6115)

  • Fixed DP reduction with collection (#6324)

  • Fixed an issue where the tuner would not tune the learning rate if also tuning the batch size (#4688)

  • Fixed broadcast to use PyTorch broadcast_object_list and add reduce_decision (#6410)

  • Fixed logger creating directory structure too early in DDP (#6380)

  • Fixed DeepSpeed additional memory use on rank 0 when default device not set early enough (#6460)

  • Fixed DummyLogger.log_hyperparams raising a TypeError when running with fast_dev_run=True (#6398)

  • Fixed an issue with Tuner.scale_batch_size not finding the batch size attribute in the datamodule (#5968)

  • Fixed an exception in the layer summary when the model contains torch.jit scripted submodules (#6511)

  • Fixed the train loop config validation being run during Trainer.predict (#6541)

  • Fixed a bug where all_gather would not work correctly with tpu_cores=8 (#6587)

  • Update Gradient Clipping for the TPU Accelerator (#6576)

[1.2.3] - 2021-03-09

Fixed

  • Fixed ModelPruning(make_pruning_permanent=True) pruning buffers getting removed when saved during training (#6073)
  • Fixed _stable_1d_sort to work when n >= N (#6177)
  • Fixed AttributeError when logger=None on TPU (#6221)
  • Fixed PyTorch Profiler with emit_nvtx (#6260)
  • Fixed trainer.test from best_path hanging after calling trainer.fit (#6272)
  • Fixed SingleTPU calling all_gather (#6296)
  • Ensure we check deepspeed/sharded in multinode DDP (#6297)
  • Check LightningOptimizer doesn't delete optimizer hooks (#6305)
  • Resolve memory leak for evaluation (#6326)
  • Ensure that clip gradients is only called if the value is greater than 0 (#6330)
  • Fixed Trainer not resetting lightning_optimizers when calling Trainer.fit() multiple times (#6372)

[1.2.2] - 2021-03-02

Added

  • Added checkpoint parameter to callback's on_save_checkpoint hook (#6072)

Changed

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually

@dependabot dependabot bot added the dependencies Pull requests that update a dependency file label Mar 22, 2021
@dependabot dependabot bot requested a review from amogkam March 22, 2021 07:01
dependabot[bot]
Contributor Author

dependabot bot commented on behalf of github Mar 23, 2021

Looks like pytorch-lightning is up-to-date now, so this is no longer needed.

@dependabot dependabot bot closed this Mar 23, 2021
@dependabot dependabot bot deleted the dependabot/pip/pytorch-lightning-1.2.4 branch March 23, 2021 02:58