Release 0.13.0 (#961)
Release 0.13.0 (#961)
Preparation for release of version 0.13.0

Release text:

The new skorch release is here, and it includes some changes that will be
exciting for many users.

- First of all, you may have heard of the [PyTorch 2.0
  release](https://pytorch.org/get-started/pytorch-2.0/), which includes the
  option to compile the PyTorch module for better runtime performance. This
  skorch release allows you to pass `compile=True` when initializing the net to
  enable compilation.
- Support for training on multiple GPUs with the help of the
  [`accelerate`](https://huggingface.co/docs/accelerate/index) package has been
  improved by fixing some bugs and providing a dedicated [history
  class](https://skorch.readthedocs.io/en/latest/user/history.html#distributed-history).
  Our documentation contains more information on [what to consider when training
  on multiple
  GPUs](https://skorch.readthedocs.io/en/latest/user/huggingface.html#caution-when-using-a-multi-gpu-setup).
- If you have ever been frustrated with your neural net not training properly,
  you know how hard it can be to discover the root cause. The new
  [`SkorchDoctor`](https://skorch.readthedocs.io/en/latest/helper.html#skorch.helper.SkorchDoctor)
  class simplifies diagnosing such issues. Take a look at the accompanying
  [notebook](https://nbviewer.org/github/skorch-dev/skorch/blob/master/notebooks/Skorch_Doctor.ipynb).

Apart from that, a few bugs have been fixed and the included notebooks have been
updated to properly install requirements on Google Colab.

We are grateful to our external contributors; many thanks to:

- Kshiteej K (kshitij12345)
- Muhammad Abdullah (abdulasiraj)
- Royi (RoyiAvital)
- Sawradip Saha (sawradip)
- y10ab1 (y10ab1)

Find below the list of all changes since v0.12.1:

### Added
- Add support for compiled PyTorch modules using the `torch.compile` function,
  introduced in the [PyTorch 2.0
  release](https://pytorch.org/get-started/pytorch-2.0/), which can greatly
  improve performance on new GPU architectures. To use it, initialize your net
  with the `compile=True` argument; further compilation arguments can be
  specified using dunder notation, e.g. `compile__dynamic=True`
- Add a class
  [`DistributedHistory`](https://skorch.readthedocs.io/en/latest/history.html#skorch.history.DistributedHistory)
  which should be used when training in a multi-GPU setting (#955)
- `SkorchDoctor`: A helper class that assists in understanding and debugging the
  neural net training, see [this
  notebook](https://nbviewer.org/github/skorch-dev/skorch/blob/master/notebooks/Skorch_Doctor.ipynb)
  (#912)
- When using `AccelerateMixin`, it is now possible to prevent unwrapping of the
  modules by setting `unwrap_after_train=True` (#963)

### Fixed
- Fixed install command to work with recent changes in Google Colab (#928)
- Fixed a couple of bugs related to using non-default modules and criteria
  (#927)
- Fixed a bug when using `AccelerateMixin` in a multi-GPU setup (#947)
- `_get_param_names` returns a list instead of a generator so that subsequent
  error messages return useful information instead of a generator `repr` string
  (#925)
- Fixed a bug that caused modules to not be sufficiently unwrapped at the end of
  training when using `AccelerateMixin`, which could prevent them from being
  pickleable (#963)
BenjaminBossan authored May 17, 2023
1 parent cbc3cd3 commit cc210fe
Showing 6 changed files with 12 additions and 72 deletions.
13 changes: 11 additions & 2 deletions CHANGES.md
@@ -7,11 +7,19 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Added

### Changed

### Fixed

## [0.13.0] - 2023-05-17

### Added
- Add support for compiled PyTorch modules using the `torch.compile` function, introduced in the [PyTorch 2.0 release](https://pytorch.org/get-started/pytorch-2.0/), which can greatly improve performance on new GPU architectures. To use it, initialize your net with the `compile=True` argument; further compilation arguments can be specified using dunder notation, e.g. `compile__dynamic=True`
- Add a class [`DistributedHistory`](https://skorch.readthedocs.io/en/latest/history.html#skorch.history.DistributedHistory) which should be used when training in a multi-GPU setting (#955)
- `SkorchDoctor`: A helper class that assists in understanding and debugging the neural net training, see [this notebook](https://nbviewer.org/github/skorch-dev/skorch/blob/master/notebooks/Skorch_Doctor.ipynb) (#912)
- When using `AccelerateMixin`, it is now possible to prevent unwrapping of the modules by setting `unwrap_after_train=True` (#963)

### Changed

@@ -22,7 +30,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `_get_param_names` returns a list instead of a generator so that subsequent
error messages return useful information instead of a generator `repr`
string (#925)
- Fixed a bug that caused modules to not be sufficiently unwrapped at the end of training when using `AccelerateMixin`, which could prevent them from being pickleable (#963)

## [0.12.1] - 2022-11-18

@@ -306,3 +314,4 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
[0.11.0]: https://github.com/skorch-dev/skorch/compare/v0.10.0...v0.11.0
[0.12.0]: https://github.com/skorch-dev/skorch/compare/v0.11.0...v0.12.0
[0.12.1]: https://github.com/skorch-dev/skorch/compare/v0.12.0...v0.12.1
[0.13.0]: https://github.com/skorch-dev/skorch/compare/v0.12.1...v0.13.0
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.12.1dev
0.13.0
10 changes: 0 additions & 10 deletions skorch/dataset.py
@@ -333,13 +333,3 @@ def __call__(self, dataset, y=None, groups=None):
def __repr__(self):
# pylint: disable=useless-super-delegation
return super(ValidSplit, self).__repr__()

# TODO remove in skorch 0.13
class CVSplit(ValidSplit):
def __init__(self, *args, **kwargs):
warnings.warn(
f"{self.__class__.__name__} is deprecated, use the new name ValidSplit instead",
DeprecationWarning,
stacklevel=2,
)
super().__init__(*args, **kwargs)
13 changes: 0 additions & 13 deletions skorch/net.py
@@ -1777,19 +1777,6 @@ def get_iterator(self, dataset, training=False):
mini-batches.
"""
# TODO: remove in skorch v0.13, see #835
if isinstance(dataset, DataLoader):
msg = (
"get_iterator was called with a DataLoader instance but it should be "
"called with a Dataset instead. Probably, you implemented a custom "
"run_single_epoch method. Its first argument is now a DataLoader, "
"not a Dataset. For more information, look here: "
"https://skorch.readthedocs.io/en/latest/user/FAQ.html"
"#migration-from-0-11-to-0-12. This will raise an error in skorch v0.13"
)
warnings.warn(msg, DeprecationWarning)
return dataset

if training:
kwargs = self.get_params_for('iterator_train')
iterator = self.iterator_train
8 changes: 0 additions & 8 deletions skorch/tests/test_dataset.py
@@ -918,11 +918,3 @@ def test_random_state_not_used_raises(self, valid_split_cls):

def test_random_state_and_float_does_not_raise(self, valid_split_cls):
valid_split_cls(0.5, random_state=0) # does not raise

def test_cvsplit_deprecation(self):
from skorch.dataset import CVSplit
with pytest.warns(
DeprecationWarning,
match="is deprecated, use the new name ValidSplit instead",
):
CVSplit()
38 changes: 0 additions & 38 deletions skorch/tests/test_net.py
@@ -3778,44 +3778,6 @@ def evaluation_step(self, batch, training=False):
y_pred = net.predict(X)
assert y_pred.shape == (100, 2)

# TODO: remove in skorch v0.13
def test_net_with_custom_run_single_epoch(self, net_cls, module_cls, data):
# See #835. We changed the API to initialize the DataLoader only once
# per epoch. This test is to make sure that code that overrides
# run_single_epoch still works for the time being.
from skorch.dataset import get_len

class MyNet(net_cls):
def run_single_epoch(self, dataset, training, prefix, step_fn, **fit_params):
# code as in skorch<=0.11
# first argument should now be an iterator, not a dataset
if dataset is None:
return

# make sure that the "dataset" (really the DataLoader) can still
# access the Dataset if needed
assert hasattr(dataset, 'dataset')

batch_count = 0
for batch in self.get_iterator(dataset, training=training):
self.notify("on_batch_begin", batch=batch, training=training)
step = step_fn(batch, **fit_params)
self.history.record_batch(prefix + "_loss", step["loss"].item())
batch_size = (get_len(batch[0]) if isinstance(batch, (tuple, list))
else get_len(batch))
self.history.record_batch(prefix + "_batch_size", batch_size)
self.notify("on_batch_end", batch=batch, training=training, **step)
batch_count += 1

self.history.record(prefix + "_batch_count", batch_count)

net = MyNet(module_cls, max_epochs=2)
X, y = data
with pytest.deprecated_call():
net.fit(X, y)
# does not raise
net.predict(X)


class TestNetSparseInput:
@pytest.fixture(scope='module')
