
[tune](deps): Bump pytorch-lightning from 1.0.3 to 1.2.3 in /python/requirements #10

Conversation

dependabot[bot] commented on behalf of GitHub on Mar 13, 2021

Bumps pytorch-lightning from 1.0.3 to 1.2.3.
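
For context, a minimal sketch of what this bump amounts to in the pinned requirements file. The exact line and pin style in /python/requirements are assumed here, not taken from the PR diff:

  pytorch-lightning==1.0.3   # before (assumed pin format)
  pytorch-lightning==1.2.3   # after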

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.2.3] - 2021-03-09

Added

Changed

Fixed

  • Fixed ModelPruning(make_pruning_permanent=True) pruning buffers getting removed when saved during training (#6073)
  • Fixed _stable_1d_sort to work when n >= N (#6177)
  • Fixed AttributeError when logger=None on TPU (#6221)
  • Fixed PyTorch Profiler with emit_nvtx (#6260)
  • Fixed trainer.test from best_path hangs after calling trainer.fit (#6272)
  • Fixed SingleTPU calling all_gather (#6296)
  • Ensure we check deepspeed/sharded in multinode DDP (#6297)
  • Check LightningOptimizer doesn't delete optimizer hooks (#6305)
  • Resolve memory leak for evaluation (#6326)
  • Ensure that clip gradients is only called if the value is greater than 0 (#6330)
  • Fixed Trainer not resetting lightning_optimizers when calling Trainer.fit() multiple times (#6372)

Contributors

@awaelchli, @carmocca, @Chizuchizu, @frankier, @SeanNaren, @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.2.2] - 2021-03-02

Added

  • Added checkpoint parameter to callback's on_save_checkpoint hook (#6072)

Changed

  • Changed the order of backward, step, zero_grad to zero_grad, backward, step (#6147); a sketch of the new order follows this list
  • Changed default for DeepSpeed CPU Offload to False, due to prohibitively slow speeds at smaller scale (#6262)
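
As an aside on the backward/step/zero_grad reordering above: the new order matches the conventional plain-PyTorch training loop. The snippet below is only an illustrative sketch of that ordering (the model, optimizer, and data are made up), not Lightning's internal implementation:

  import torch

  # Toy model and optimizer for illustration only.
  model = torch.nn.Linear(10, 1)
  optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
  loss_fn = torch.nn.MSELoss()

  for _ in range(3):                      # a few dummy batches
      x, y = torch.randn(4, 10), torch.randn(4, 1)
      optimizer.zero_grad()               # 1) clear old gradients
      loss = loss_fn(model(x), y)
      loss.backward()                     # 2) accumulate new gradients
      optimizer.step()                    # 3) update parameters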

Fixed

  • Fixed epoch level schedulers not being called when val_check_interval < 1.0 (#6075)
  • Fixed multiple early stopping callbacks (#6197)
  • Fixed incorrect usage of detach(), cpu(), to() (#6216)
  • Fixed LBFGS optimizer support which didn't converge in automatic optimization (#6147)
  • Prevent WandbLogger from dropping values (#5931)
  • Fixed error thrown when using valid distributed mode in multi node (#6297)

Contributors

@akihironitta, @borisdayma, @carmocca, @dvolgyes, @SeanNaren, @SkafteNicki

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.2.3] - 2021-03-09

Fixed

  • Fixed ModelPruning(make_pruning_permanent=True) pruning buffers getting removed when saved during training (#6073)

  • Fixed _stable_1d_sort to work when n >= N (#6177)

  • Fixed AttributeError when logger=None on TPU (#6221)

  • Fixed PyTorch Profiler with emit_nvtx (#6260)

  • Fixed trainer.test from best_path hangs after calling trainer.fit (#6272)

  • Fixed SingleTPU calling all_gather (#6296)

  • Ensure we check deepspeed/sharded in multinode DDP (#6297)

  • Check LightningOptimizer doesn't delete optimizer hooks (#6305)

  • Resolve memory leak for evaluation (#6326)

  • Ensure that clip gradients is only called if the value is greater than 0 (#6330)

  • Fixed Trainer not resetting lightning_optimizers when calling Trainer.fit() multiple times (#6372)

  • Fixed DummyLogger.log_hyperparams raising a TypeError when running with fast_dev_run=True (#6398)

[1.2.2] - 2021-03-02

Added

  • Added checkpoint parameter to callback's on_save_checkpoint hook (#6072)

Changed

  • Changed the order of backward, step, zero_grad to zero_grad, backward, step (#6147)
  • Changed default for DeepSpeed CPU Offload to False, due to prohibitively slow speeds at smaller scale (#6262)

Fixed

  • Fixed epoch level schedulers not being called when val_check_interval < 1.0 (#6075)
  • Fixed multiple early stopping callbacks (#6197)
  • Fixed incorrect usage of detach(), cpu(), to() (#6216)
  • Fixed LBFGS optimizer support which didn't converge in automatic optimization (#6147)
  • Prevent WandbLogger from dropping values (#5931)
  • Fixed error thrown when using valid distributed mode in multi node (#6297)

[1.2.1] - 2021-02-23

Fixed

  • Fixed incorrect yield logic for the amp autocast context manager (#6080)
  • Fixed priority of plugin/accelerator when setting distributed mode (#6089)
  • Fixed error message for AMP + CPU incompatibility (#6107)

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

dependabot[bot] added the dependencies (Pull requests that update a dependency file) label on Mar 13, 2021
dependabot[bot] commented on behalf of GitHub on Mar 20, 2021

Superseded by #13.

dependabot[bot] closed this on Mar 20, 2021
dependabot[bot] deleted the dependabot/pip/python/requirements/pytorch-lightning-1.2.3 branch on March 20, 2021 at 07:04