Releases: sktime/pytorch-forecasting

v1.1.1

09 Sep 09:08
b497a6b

What's Changed

Hotfix release correcting a typo in the package name in pyproject.toml, so that pytorch-forecasting has the correct PEP 440 identifier.

Otherwise identical to 1.1.0.

Full Changelog: v1.1.0...v1.1.1

v1.1.0

08 Sep 18:54
ff1a7bd

What's Changed

Maintenance update widening compatibility ranges and consolidating dependencies:

  • support for Python 3.11 and 3.12, with added CI testing
  • support for macOS, with added CI testing
  • core dependencies have been minimized to numpy, torch, lightning, scipy, pandas, and scikit-learn
  • soft dependencies are available in soft dependency sets: all_extras for all soft dependencies, and tuning for optuna-based optimization

Dependency changes

  • the following are no longer core dependencies and have been changed to optional dependencies: optuna, statsmodels, pytorch-optimizer, matplotlib. Environments relying on functionality requiring these dependencies need to be updated to install them explicitly.
  • optuna bounds have been updated to optuna >=3.1.0,<4.0.0
  • optuna-integration is now an additional soft dependency, required when using optuna >=3.3.0

Deprecations and removals

  • from 1.2.0, the default optimizer will be changed from "ranger" to "adam" to avoid non-torch dependencies in defaults. pytorch-optimizer optimizers can still be used. Users should set the optimizer explicitly to continue using "ranger"; see the sketch after this list.
  • from 1.1.0, the loggers do not log figures if the soft dependency matplotlib is not present, and raise no exception in this case. To log figures, ensure that matplotlib is installed.
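
A minimal sketch of pinning the optimizer explicitly, assuming an existing TimeSeriesDataSet named training_dataset (the optimizer keyword is passed through from_dataset):

    from pytorch_forecasting import TemporalFusionTransformer

    # keep the pre-1.2.0 default explicitly; "ranger" requires the
    # pytorch-optimizer soft dependency
    tft = TemporalFusionTransformer.from_dataset(
        training_dataset,
        optimizer="ranger",
    )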

All Contributors

@andre-marcos-perez,
@avirsaha,
@bendavidsteel,
@benHeid,
@bohdan-safoniuk,
@Borda,
@CahidArda,
@fkiraly,
@fnhirwa,
@germanKoch,
@jacktang,
@jdb78,
@jurgispods,
@maartensukel,
@MBelniak,
@orangehe,
@pavelzw,
@sfalkena,
@tmct,
@XinyuWuu,
@yarnabrina

Full Changelog: v1.0.0...v1.1.0

Update to pytorch 2.0

10 Apr 19:56
7c775c1

Breaking Changes

  • Upgraded to pytorch 2.0 and lightning 2.0. This brings several changes, for example to the configuration of trainers; see the lightning upgrade guide. For PyTorch Forecasting, this particularly matters if you are developing your own models: the class method epoch_end has been renamed to on_epoch_end, model.summarize() is replaced by ModelSummary(model, max_depth=-1), and Tuner(trainer) is now its own class, so trainer.tuner needs replacing (#1280). See the sketch after this list.
  • Changed the predict() interface to return a named tuple; see the tutorials.
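
A minimal sketch of these renames, assuming the lightning 2.0 import paths and an existing model and trainer:

    from pytorch_lightning.tuner import Tuner
    from pytorch_lightning.utilities.model_summary import ModelSummary

    # replaces model.summarize()
    summary = ModelSummary(model, max_depth=-1)
    # replaces trainer.tuner, e.g. for the learning rate finder
    tuner = Tuner(trainer)
    # in custom models, rename the hook: epoch_end -> on_epoch_end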

Changes

  • The predict method now uses the lightning predict functionality and allows writing results to disk (#1280); see the sketch below.
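
A minimal sketch of the new named-tuple return, assuming a trained model and an existing val_dataloader; the output and index field names follow the tutorials:

    # predict() now returns a named tuple rather than a plain tensor
    predictions = model.predict(val_dataloader, return_index=True)
    forecasts = predictions.output  # prediction tensor
    index = predictions.index       # index of the predicted time series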

Fixed

  • Fixed robust scaler when quantiles are 0.0, and 1.0, i.e. minimum and maximum (#1142)

Poetry update

07 Sep 11:54
7f67513

Fixed

  • Removed pandoc from dependencies due to an issue with poetry install (#1126)
  • Added metric attributes for torchmetrics, resulting in better multi-GPU performance (#1126)

Added

  • "robust" encoder method can be customized by setting "center", "lower" and "upper" quantiles (#1126)

Multivariate networks

23 May 11:53
9d8e985

Added

  • DeepVAR network (#923)
  • Enable quantile loss for N-HiTS (#926); see the sketch after this list
  • MQF2 loss (multivariate quantile loss) (#949)
  • Non-causal attention for TFT (#949)
  • Tweedie loss (#949)
  • ImplicitQuantileNetworkDistributionLoss (#995)
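
A minimal sketch of N-HiTS with quantile loss, assuming an existing TimeSeriesDataSet named training_dataset:

    from pytorch_forecasting import NHiTS
    from pytorch_forecasting.metrics import QuantileLoss

    # quantile loss on N-HiTS, enabled by #926
    model = NHiTS.from_dataset(training_dataset, loss=QuantileLoss())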

Fixed

  • Fix learning scale schedule (#912)
  • Fix TFT list/tuple issue at interpretation (#924)
  • Allowed encoder length down to zero for EncoderNormalizer if transformation is not needed (#949)
  • Fix Aggregation and CompositeMetric resets (#949)

Changed

  • Dropping Python 3.6 support, adding 3.10 support (#479)
  • Refactored dataloader sampling - samplers moved to the pytorch_forecasting.data.samplers module (#479); see the import sketch after this list
  • Changed transformation format for Encoders from tuple to dict (#949)
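
For example, imports of the samplers need updating to the new module path (TimeSynchronizedBatchSampler is one of the samplers that moved):

    # samplers now live in pytorch_forecasting.data.samplers
    from pytorch_forecasting.data.samplers import TimeSynchronizedBatchSampler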

Contributors

  • jdb78

Bugfixes

24 Mar 21:27
4854c32

Fixed

  • Fix with creating tensors on correct devices (#908)
  • Fix with MultiLoss when calculating gradient (#908)

Contributors

  • jdb78

Adding N-HiTS network (N-BEATS successor)

23 Mar 12:52
af9e1d3
Compare
Choose a tag to compare

Added

  • Added new N-HiTS network that has consistently beaten N-BEATS (#890)
  • Allow using torchmetrics as loss metrics (#776)
  • Enable fitting EncoderNormalizer() with limited data history using the max_length argument (#782); see the sketch after this list
  • More flexible MultiEmbedding() with convenience output_size and input_size properties (#829)
  • Fix concatenation of attention (#902)
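
A minimal sketch of the max_length argument (the value is illustrative):

    from pytorch_forecasting.data import EncoderNormalizer

    # fit the normalizer on at most the 50 most recent encoder values
    normalizer = EncoderNormalizer(max_length=50)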

Fixed

  • Fix pip install via github (#798)

Contributors

  • jdb78
  • christy
  • lukemerrick
  • Seon82

Maintenance Release

29 Nov 19:54

Added

  • Added support for running pytorch_lightning.trainer.test (#759); see the sketch below
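
A minimal sketch, assuming an existing trainer, model, and test dataloader:

    # run lightning's standard test loop on the trained model
    trainer.test(model, dataloaders=test_dataloader)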

Fixed

  • Fix inattention mutation to x_cont (#732)
  • Compatibility with pytorch-lightning 1.5 (#758)

Contributors

  • eavae
  • danielgafni
  • jdb78

Maintenance Release (26/09/2021)

26 Sep 11:16
e5af895

Added

  • Use target name instead of target number for logging metrics (#588)
  • Optimizer can be initialized by passing a string, class or function (#602); see the sketch after this list
  • Add support for multiple outputs in Baseline model (#603)
  • Added Optuna pruner as optional parameter in TemporalFusionTransformer.optimize_hyperparameters (#619)
  • Dropping support for Python 3.6 and starting support for Python 3.9 (#639)
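
A minimal sketch of the three accepted optimizer forms, assuming an existing TimeSeriesDataSet named training_dataset:

    from torch.optim import AdamW

    from pytorch_forecasting import TemporalFusionTransformer

    # by name, by optimizer class, or by a factory function
    model = TemporalFusionTransformer.from_dataset(training_dataset, optimizer="adam")
    model = TemporalFusionTransformer.from_dataset(training_dataset, optimizer=AdamW)
    model = TemporalFusionTransformer.from_dataset(
        training_dataset, optimizer=lambda params: AdamW(params, lr=1e-3)
    )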

Fixed

  • Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (#550)
  • Added missing transformation of prediction for MLP (#602)
  • Fixed logging hyperparameters (#688)
  • Ensure MultiNormalizer fit state is detected (#681)
  • Fix infinite loop in TimeDistributedEmbeddingBag (#672)

Contributors

  • jdb78
  • TKlerx
  • chefPony
  • eavae
  • L0Z1K

Simplified API

04 Jun 17:48
d6a009d

Breaking changes

  • Removed dropout_categoricals parameter from TimeSeriesDataSet.
    Use categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True)) instead (#518); a concrete sketch follows this list

  • Rename parameter allow_missings for TimeSeriesDataSet to allow_missing_timesteps (#518)

  • Transparent handling of transformations. Forward methods should now call two new methods (#518):

    • transform_output to explicitly rescale the network outputs into the de-normalized space
    • to_network_output to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only the prediction entry, the main network output, is still required.

    Example:

    def forward(self, x):
        # network output in normalized space
        normalized_prediction = self.module(x)
        # rescale into the original (de-normalized) space
        prediction = self.transform_output(prediction=normalized_prediction, target_scale=x["target_scale"])
        # wrap in a dict-like named tuple, keeping the module JIT-traceable
        return self.to_network_output(prediction=prediction)
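
A concrete sketch of the dropout_categoricals replacement above; the DataFrame and column names are hypothetical:

    from pytorch_forecasting.data import NaNLabelEncoder, TimeSeriesDataSet

    dataset = TimeSeriesDataSet(
        data,  # assumed: a pandas DataFrame containing the columns below
        time_idx="time_idx",
        target="value",
        group_ids=["product_id"],
        # replaces the removed dropout_categoricals=["product_id"]
        categorical_encoders={"product_id": NaNLabelEncoder(add_nan=True)},
    )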

Added

  • Improved validation of input parameters of TimeSeriesDataSet (#518)

Fixed

  • Fix quantile prediction for tensors on GPUs for distribution losses (#491)
  • Fix hyperparameter update for RecurrentNetwork.from_dataset method (#497)