Releases: sktime/pytorch-forecasting
Generic distribution loss(es)
Added
- Allow lists for multiple losses and normalizers (#405) - see the sketch after this list
- Warn if normalization is with scale `< 1e-7` (#429)
- Allow usage of distribution losses in all settings (#434)
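As a hedged illustration of passing lists of losses and normalizers for multiple targets, here is a minimal sketch. The toy DataFrame, column names and the explicit MultiLoss/MultiNormalizer wrapping are assumptions for illustration, not taken from the release itself.

```python
# Hedged sketch: two targets, each with its own normalizer and loss.
# Column names ("sales", "price", "series") and all sizes are made up.
import numpy as np
import pandas as pd
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer, MultiNormalizer
from pytorch_forecasting.metrics import MultiLoss, QuantileLoss

df = pd.DataFrame(
    {
        "series": "a",
        "time_idx": np.arange(60),
        "sales": np.random.randn(60),
        "price": np.random.randn(60),
    }
)

dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target=["sales", "price"],  # list of targets
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["sales", "price"],
    # one normalizer per target
    target_normalizer=MultiNormalizer(
        [GroupNormalizer(groups=["series"]), GroupNormalizer(groups=["series"])]
    ),
)

# one loss per target
model = TemporalFusionTransformer.from_dataset(
    dataset, loss=MultiLoss([QuantileLoss(), QuantileLoss()])
)
```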
Fixed
- Fix issue when predicting while data and model are on different devices (#402)
- Fix non-iterable output (#404)
- Fix problem with moving data to CPU for multiple targets (#434)
Contributors
- jdb78
- domplexity
Simple models
Added
- Add filter functionality to the TimeSeriesDataSet (#329)
- Add simple models such as LSTM, GRU and an MLP on the decoder (#380)
- Allow usage of any torch optimizer such as SGD (#380)
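A hedged sketch of the simple model classes and optimizer selection: the class names RecurrentNetwork and DecoderMLP are from the library, while the toy data and the exact spelling of the optimizer name are assumptions.

```python
# Hedged sketch: the simple recurrent/MLP models and a non-default optimizer.
import numpy as np
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet
from pytorch_forecasting.models import DecoderMLP, RecurrentNetwork

df = pd.DataFrame(
    {"series": "a", "time_idx": np.arange(60), "value": np.random.randn(60)}
)
dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["value"],
)

lstm = RecurrentNetwork.from_dataset(dataset, cell_type="LSTM")  # or "GRU"
mlp = DecoderMLP.from_dataset(dataset)

# any torch optimizer can be selected, e.g. SGD (#380); the "sgd" spelling is assumed
sgd_model = RecurrentNetwork.from_dataset(dataset, optimizer="sgd", learning_rate=0.01)
```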
Fixed
- Moving predictions to CPU to avoid running out of memory (#329)
- Correct determination of `output_size` for multi-target forecasting with the TemporalFusionTransformer (#328)
- Tqdm autonotebook fix to work outside of Jupyter (#338)
- Fix issue with yaml serialization for TensorboardLogger (#379)
Contributors
- jdb78
- JakeForsey
- vakker
Bugfix release
Fixed
- Underlying data is copied if modified. Original data is not modified in place (#263)
- Allow plotting of interpretation on passed figure for NBEATS (#280)
- Fix memory leak for plotting and logging interpretation (#311)
- Correct shape of `predict()` method output for multi-targets (#268)
- Remove cloudpickle to allow GPU-trained models to be loaded on CPU devices from checkpoints (#314)
Contributors
- jdb78
- kigawas
- snumumrik
Fix for output transformer
- Added missing output transformation which was switched off by default (#260)
Adding support for lag variables
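A hedged sketch of what lag support looks like in the TimeSeriesDataSet: the `lags` argument name follows the library's current API, and the column names and lag horizons are made up.

```python
# Hedged sketch: add lagged copies of a variable via the `lags` argument.
import numpy as np
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet

df = pd.DataFrame(
    {"series": "a", "time_idx": np.arange(100), "value": np.random.randn(100)}
)
dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["value"],
    lags={"value": [1, 7]},  # lag-1 and lag-7 versions of "value" (made-up values)
)
```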
Adding multi-target support
Added
- Add support for multiple targets in the TimeSeriesDataSet and amend tutorials accordingly (#199) - see the sketch below
- Temporal Fusion Transformer and DeepAR with support for multiple targets (#199)
- Check for non-finite values in TimeSeriesDataSet and better validate scaler argument (#220)
- LSTM and GRU implementations that can handle zero-length sequences (#235)
- Helpers for implementing auto-regressive models (#236)
Changed
- The `y` returned by TimeSeriesDataSet's dataloader is a tuple of (target(s), weight) - potentially breaking for model or metric implementations. Most implementations will not be affected as hooks in BaseModel and MultiHorizonMetric were modified.
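A hedged sketch of the new dataloader output format described above: the toy data and column names are assumptions, while the (target(s), weight) structure is what the changelog entry states.

```python
# Hedged sketch: `y` from the dataloader is a tuple of (target(s), weight).
import numpy as np
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet

df = pd.DataFrame(
    {
        "series": "a",
        "time_idx": np.arange(60),
        "sales": np.random.randn(60),
        "price": np.random.randn(60),
    }
)
dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target=["sales", "price"],  # multiple targets
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["sales", "price"],
)
dataloader = dataset.to_dataloader(train=True, batch_size=4)

x, y = next(iter(dataloader))
targets, weight = y  # weight is None unless a weight column is configured
print(type(targets), weight)  # with multiple targets, `targets` is a list of tensors
```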
Fixed
- Fixed autocorrelation for pytorch 1.7 (#220)
- Ensure reproducibility by replacing python `set()` with `dict.fromkeys()` (mostly in TimeSeriesDataSet) (#221) - see the sketch after this list
- Ensure BetaDistributionLoss does not lead to infinite loss if actuals are 0 or 1 (#233)
- Fix for GroupNormalizer if scaling by group (#223)
- Fix for TimeSeriesDataSet when using `min_prediction_idx` (#226)
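To illustrate the reproducibility argument behind the `dict.fromkeys()` change, a plain-Python sketch (not pytorch-forecasting API):

```python
# dict.fromkeys() de-duplicates like set() but keeps a deterministic insertion
# order; iteration order of a set of strings can differ between interpreter
# runs because of hash randomization.
names = ["price", "volume", "price", "discount"]

deduplicated = list(dict.fromkeys(names))  # always ['price', 'volume', 'discount']
maybe_unstable = list(set(names))          # order may vary between runs
print(deduplicated, maybe_unstable)
```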
Contributors
- jdb78
- JustinNeumann
- reumar
- rustyconover
Tutorial on how to implement a new architecture
Added
- Tutorial on how to implement a new architecture covering basic and advanced use cases (#188)
- Additional and improved documentation - particularly of implementation details (#188)
Changed (breaking for new model implementations)
- Moved multiple private methods to public methods (particularly logging) (#188)
- Moved `get_mask` method from BaseModel into utils module (#188)
- Use the `self.training` attribute instead of a label to communicate whether the model is training or validating (#188)
- Use `sample((n,))` of pytorch distributions instead of the deprecated `sample_n(n)` method (#188)
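A plain-torch sketch of the deprecated-API change referenced in the last item:

```python
# Draw n samples with Distribution.sample((n,)) instead of deprecated sample_n(n).
import torch

dist = torch.distributions.Normal(loc=0.0, scale=1.0)
samples = dist.sample((100,))   # preferred; shape: (100,)
# samples = dist.sample_n(100)  # deprecated equivalent
print(samples.shape)
```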
New API for transforming inputs and outputs with encoders
Added
- Beta distribution loss for probabilistic models such as DeepAR (#160)
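A hedged sketch of using the Beta distribution loss with DeepAR: the toy data, the column names and the `logit` transformation on the normalizer are assumptions (the loss expects targets strictly between 0 and 1).

```python
# Hedged sketch: DeepAR trained with BetaDistributionLoss on a (0, 1) target.
import numpy as np
import pandas as pd
from pytorch_forecasting import DeepAR, TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer
from pytorch_forecasting.metrics import BetaDistributionLoss

df = pd.DataFrame(
    {
        "series": "a",
        "time_idx": np.arange(100),
        "ratio": np.random.uniform(0.01, 0.99, 100),  # target in (0, 1)
    }
)
dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="ratio",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["ratio"],
    # assumed: logit transformation maps the (0, 1) target onto the real line
    target_normalizer=GroupNormalizer(groups=["series"], transformation="logit"),
)
model = DeepAR.from_dataset(dataset, loss=BetaDistributionLoss())
```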
Changed
- BREAKING: Simplified how to apply transforms (such as logit or log) before and after applying an encoder. Some transformations are included by default, but a tuple of a forward and reverse transform function can be passed for arbitrary transformations. This requires using a `transformation` keyword in target normalizers instead of, e.g., `log_scale` (#185)
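A hedged sketch of the new `transformation` keyword: the named transformation and the custom (forward, reverse) pair below are illustrative assumptions, and the exact accepted values may differ between versions.

```python
# Hedged sketch: named transformation vs. a custom (forward, reverse) pair,
# replacing the removed `log_scale`-style arguments.
import torch
from pytorch_forecasting.data import GroupNormalizer

# built-in transformation selected by name
log_normalizer = GroupNormalizer(groups=["series"], transformation="log")

# arbitrary transformation as a (forward, reverse) pair of functions -
# a sqrt/square pair is used here purely for illustration
custom_normalizer = GroupNormalizer(
    groups=["series"], transformation=(torch.sqrt, torch.square)
)
```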
Fixed
- Incorrect target position if `len(static_reals) > 0`, leading to leakage (#184)
- Fix predicting completely unseen series (#172)
Contributors
- jdb78
- JakeForsey
Bugfixes and DeepAR improvements
Added
- Using GRU cells with DeepAR (#153)
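A hedged sketch of selecting GRU cells for DeepAR: the `cell_type` argument name is taken from the library API as I understand it, and the toy data is made up.

```python
# Hedged sketch: DeepAR with GRU cells instead of the default LSTM cells.
import numpy as np
import pandas as pd
from pytorch_forecasting import DeepAR, TimeSeriesDataSet

df = pd.DataFrame(
    {"series": "a", "time_idx": np.arange(60), "value": np.random.randn(60)}
)
dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["value"],
)
model = DeepAR.from_dataset(dataset, cell_type="GRU")  # default is "LSTM"
```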
Fixed
- GPU fix for variable sequence length (#169)
- Fix incorrect syntax for warning when removing series (#167)
- Fix issue when using unknown group ids in validation or test dataset (#172)
- Run non-failing CI on PRs from forks (#166, #156)
Docs
- Improved model selection guidance and explanations on how TimeSeriesDataSet works (#148)
- Clarify how to use with conda (#168)
Contributors
- jdb78
- JakeForsey
Adding DeepAR
Added
- DeepAR by Amazon (#115)
  - First autoregressive model in PyTorch Forecasting
  - Distribution losses: normal, negative binomial and log-normal distributions (see the sketch after this list)
  - Currently missing: handling lag variables and tutorial (planned for 0.6.1)
- Improved documentation on TimeSeriesDataSet and how to implement a new network (#145)
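A hedged sketch of DeepAR with the distribution losses listed above: the loss class names exist in pytorch_forecasting.metrics, while the toy data is made up.

```python
# Hedged sketch: DeepAR with a normal distribution loss; the other listed
# distribution losses are passed the same way (with suitable targets/normalizers).
import numpy as np
import pandas as pd
from pytorch_forecasting import DeepAR, TimeSeriesDataSet
from pytorch_forecasting.metrics import (
    LogNormalDistributionLoss,
    NegativeBinomialDistributionLoss,
    NormalDistributionLoss,
)

df = pd.DataFrame(
    {"series": "a", "time_idx": np.arange(60), "value": np.random.randn(60)}
)
dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_reals=["value"],
)
model = DeepAR.from_dataset(dataset, loss=NormalDistributionLoss())
# e.g. NegativeBinomialDistributionLoss() for count data or
# LogNormalDistributionLoss() for strictly positive, right-skewed targets
```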
Changed
- Internals of encoders and how they store center and scale (#115)
Fixed
- Update to PyTorch 1.7 and PyTorch Lightning 1.0.5, which came with breaking changes for CUDA handling and optimizers (PyTorch Forecasting's Ranger version) (#143, #137, #115)
Contributors
- jdb78
- JakeForsey