diff --git a/CHANGELOG.md b/CHANGELOG.md
index e10d13976f999..26761ab9c195d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -26,6 +26,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Allow passing model hyperparameters as complete kwarg list ([#1896](https://github.com/PyTorchLightning/pytorch-lightning/pull/1896))
 
+- Re-enabled loggers' `ImportError`s ([#1938](https://github.com/PyTorchLightning/pytorch-lightning/pull/1938))
+
 ### Deprecated
 
 - Dropped official support/testing for older PyTorch versions <1.3 ([#1917](https://github.com/PyTorchLightning/pytorch-lightning/pull/1917))
@@ -34,6 +36,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Removed unintended Trainer argument `progress_bar_callback`, the callback should be passed in by `Trainer(callbacks=[...])` instead ([#1855](https://github.com/PyTorchLightning/pytorch-lightning/pull/1855))
 
+- Removed obsolete `self._device` in Trainer ([#1849](https://github.com/PyTorchLightning/pytorch-lightning/pull/1849))
+
 ### Fixed
 
 - Run graceful training teardown on interpreter exit ([#1631](https://github.com/PyTorchLightning/pytorch-lightning/pull/1631))
@@ -50,6 +54,12 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Fixed `LearningRateLogger` in multi-scheduler setting ([#1944](https://github.com/PyTorchLightning/pytorch-lightning/pull/1944))
 
+- Fixed test configuration check and testing ([#1804](https://github.com/PyTorchLightning/pytorch-lightning/pull/1804))
+
+- Fixed an issue with the Trainer constructor silently ignoring unknown/misspelled arguments ([#1820](https://github.com/PyTorchLightning/pytorch-lightning/pull/1820))
+
+- Fixed `save_weights_only` in ModelCheckpoint ([#1780](https://github.com/PyTorchLightning/pytorch-lightning/pull/1780))
+
 ## [0.7.6] - 2020-05-16
 
 ### Added
@@ -102,11 +112,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed native amp + ddp ([#1788](https://github.com/PyTorchLightning/pytorch-lightning/pull/1788))
 
 - Fixed `hparam` logging with metrics ([#1647](https://github.com/PyTorchLightning/pytorch-lightning/pull/1647))
-
-- Fixed an issue with Trainer constructor silently ignoring unkown/misspelled arguments ([#1820](https://github.com/PyTorchLightning/pytorch-lightning/pull/1820))
-
-- Fixed `save_weights_only` in ModelCheckpoint ([#1780](https://github.com/PyTorchLightning/pytorch-lightning/pull/1780))
-
 ## [0.7.5] - 2020-04-27
 
 ### Changed
@@ -649,16 +654,16 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Fixed a bug where `Experiment` object was not process safe, potentially causing logs to be overwritten
 
-## [0.3.5] - 2019-MM-DD
+## [0.3.5] - 2019-07-25
 
-## [0.3.4] - 2019-MM-DD
+## [0.3.4] - 2019-07-22
 
-## [0.3.3] - 2019-MM-DD
+## [0.3.3] - 2019-07-22
 
-## [0.3.2] - 2019-MM-DD
+## [0.3.2] - 2019-07-21
 
-## [0.3.1] - 2019-MM-DD
+## [0.3.1] - 2019-07-21
 
-## [0.2.x] - YYYY-MM-DD
+## [0.2.x] - 2019-07-09
 
-## [0.1.x] - YYYY-MM-DD
+## [0.1.x] - 2019-06-DD
diff --git a/docs/source/test_set.rst b/docs/source/test_set.rst
index aa2a6e4e9dd70..7873f765a5092 100644
--- a/docs/source/test_set.rst
+++ b/docs/source/test_set.rst
@@ -1,6 +1,6 @@
 Test set
 ========
-Lightning forces the user to run the test set separately to make sure it isn't evaluated by mistake
+Lightning forces the user to run the test set separately to make sure it isn't evaluated by mistake.
 
 Test after fit
 --------------
@@ -15,6 +15,7 @@ To run the test set after training completes, use this method
     # run test set
     trainer.test()
 
+
 Test pre-trained model
 ----------------------
 To run the test set on a pre-trained model, use this method.
@@ -34,4 +35,22 @@ To run the test set on a pre-trained model, use this method.
     trainer.test(model)
 
 In this case, the options you pass to trainer will be used when
-running the test set (ie: 16-bit, dp, ddp, etc...)
\ No newline at end of file
+running the test set (i.e. 16-bit, dp, ddp, etc.)
+
+
+Test with additional data loaders
+---------------------------------
+You can still run inference on a test set even if the ``test_dataloader`` method hasn't been
+defined within your :class:`~pytorch_lightning.core.LightningModule` instance. This is the case
+when your test data was not available at the time your model was declared.
+
+.. code-block:: python
+
+    # setup your data loader
+    test = DataLoader(...)
+
+    # test (pass in the loader)
+    trainer.test(test_dataloaders=test)
+
+You can either pass in a single dataloader or a list of them. This optional named
+parameter can be used in conjunction with any of the above use cases.
diff --git a/pytorch_lightning/__init__.py b/pytorch_lightning/__init__.py
index 7d26eefe8df10..3a6ffc1d7f527 100644
--- a/pytorch_lightning/__init__.py
+++ b/pytorch_lightning/__init__.py
@@ -1,6 +1,6 @@
 """Root package info."""
 
-__version__ = '0.7.7-dev'
+__version__ = '0.8.0-dev'
 __author__ = 'William Falcon et al.'
 __author_email__ = 'waf2107@columbia.edu'
 __license__ = 'Apache-2.0'
diff --git a/tests/test_deprecated.py b/tests/test_deprecated.py
index df541b623e4ad..79634578b75cb 100644
--- a/tests/test_deprecated.py
+++ b/tests/test_deprecated.py
@@ -97,7 +97,8 @@ def test_tbd_remove_in_v0_9_0_trainer():
     assert getattr(trainer, 'show_progress_bar')
 
     with pytest.deprecated_call(match='v0.9.0'):
-        _ = Trainer(num_tpu_cores=8)
+        trainer = Trainer(num_tpu_cores=8)
+    assert trainer.tpu_cores == 8
 
 
 def test_tbd_remove_in_v0_9_0_module_imports():
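
A minimal end-to-end sketch of the ``test_dataloaders`` usage documented in the
``test_set.rst`` change above, covering both the single-loader and list-of-loaders
forms; ``MyModel`` and ``test_dataset`` are hypothetical placeholders, not part of
this patch:

.. code-block:: python

    from torch.utils.data import DataLoader

    from pytorch_lightning import Trainer

    # `MyModel` is a hypothetical LightningModule with no test_dataloader defined
    model = MyModel()
    trainer = Trainer()
    trainer.fit(model)

    # build a loader for test data that only became available after the
    # model was declared (`test_dataset` is a placeholder dataset)
    test = DataLoader(test_dataset, batch_size=32)

    # pass a single loader ...
    trainer.test(test_dataloaders=test)

    # ... or a list of loaders, per the docs change above
    trainer.test(test_dataloaders=[test, test])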