Standard weekly patch release

@Borda released this on 04 Nov 02:00

Detailed changes

Added

  • Added PyTorch 1.7 Stable support (#3821)
  • Added a timeout for tpu_device_exists to ensure the process does not hang indefinitely (#4340)

Changed

  • W&B logging is now in sync with the Trainer step (#4405; see the sketch after this list)
  • The on_after_backward hook is now called only when optimizer_step is run (#4439)
  • Moved track_and_norm_grad into the training loop; it is now called only when optimizer_step is run (#4439)
  • Changed type checking to use an explicit cast of the ref_model object (#4457)
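
As an illustration (not part of the original notes), a minimal sketch of attaching the W&B logger so that logged metrics follow the Trainer step; the project name is a hypothetical placeholder:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# With #4405, metrics sent through the W&B logger are recorded against the
# Trainer's global step, so W&B charts line up with Lightning's step count.
wandb_logger = WandbLogger(project="my-project")  # hypothetical project name
trainer = Trainer(logger=wandb_logger)
```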

Deprecated

  • Deprecated passing a ModelCheckpoint instance via the checkpoint_callback Trainer argument; use the callbacks argument instead, as shown below (#4336)
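
For the deprecation above, a minimal migration sketch, assuming the replacement is to register the instance through the callbacks argument:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(monitor="val_loss")

# Deprecated: passing the instance via the checkpoint_callback argument
# trainer = Trainer(checkpoint_callback=checkpoint)

# Preferred: register it like any other callback
trainer = Trainer(callbacks=[checkpoint])
```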

Fixed

  • Disabled saving checkpoints if the model has not been trained (#4372)
  • Fixed an error when using auto_select_gpus=True with gpus=-1 (#4209; see the sketch after this list)
  • Disabled training when limit_train_batches=0 (#4371)
  • Fixed metrics so they no longer store the computational graph for all seen data (#4313)
  • Fixed AMP unscaling for on_after_backward (#4439)
  • Fixed TorchScript export when a module includes Metrics (#4428)
  • Fixed CSV logger warning (#4419)
  • Fixed skipping of the DDP parameter sync (#4301)
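
To make two of the fixed flag combinations concrete, a small sketch (values are illustrative, not from the release notes):

```python
from pytorch_lightning import Trainer

# #4209: auto-selecting GPUs now works together with gpus=-1 (use all available)
trainer = Trainer(auto_select_gpus=True, gpus=-1)

# #4371: limit_train_batches=0 now cleanly disables the training loop,
# which is handy for validation-only or smoke-test runs
smoke_trainer = Trainer(limit_train_batches=0)
```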

Contributors

@ananthsub, @awaelchli, @borisdayma, @carmocca, @justusschock, @lezwon, @rohitgr7, @SeanNaren, @SkafteNicki, @ssaru, @tchaton, @ydcjeff

If we missed anyone because their commit email did not match their GitHub account, let us know :]