- Data:
- Updated TSStandardize to accept a subset of variables and/or groups of variables when using by_var (see the sketch after this list).
- added an option to pad labeled and unlabeled datasets in SlidingWindow with a padding value
- added split_idxs and idxs to mixed_dls
- added sklearn preprocessing tfms
- added functions to measure sequence gaps
- added decodes to TSStandardize
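
A minimal sketch of the new by_var behavior (import path per tsai's docs; the indices are purely illustrative):

```python
# Standardize variables 0 and 2 with shared statistics and variable 1 on its
# own, leaving any unlisted variables untouched.
from tsai.all import TSStandardize

batch_tfms = [TSStandardize(by_var=[[0, 2], [1]])]
```
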
- Callbacks:
- changed mask return values in MVP so that True now indicates masked values
- updated MVP to accept nan values
- Models:
- updated mWDN to take either a model or an arch
- added padding_var to TST
- added MiniRocketFeatures in PyTorch (see the sketch after this list)
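
A minimal sketch of the new PyTorch MiniRocket feature extractor (module path and API assumed from the tsai docs):

```python
import torch
from tsai.models.MINIROCKET_Pytorch import MiniRocketFeatures

X = torch.randn(16, 3, 100)                 # (samples, variables, steps)
mrf = MiniRocketFeatures(c_in=3, seq_len=100)
mrf.fit(X)                                  # fit kernel biases on (a sample of) the data
feats = mrf(X)                              # -> (16, num_features) feature tensor
```
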
- Losses & metrics:
- added WeightedPerSampleLoss
- added mean_per_class_accuracy to metrics
- added mape metric
- added HuberLoss and LogCoshLoss
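
For illustration only, the idea behind a log-cosh regression loss (tsai's LogCoshLoss implements this concept; the class itself may differ in details):

```python
import torch

def log_cosh_loss(preds: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # log(cosh(x)) is ~x**2/2 near 0 and ~|x| - log(2) for large |x|, i.e.
    # quadratic near zero and linear in the tails, similar to a Huber loss.
    x = preds - target
    return torch.log(torch.cosh(x)).mean()
```
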
- Learner:
- added Learner.remove_all_cbs
- updated get_X_preds to work with multilabel datasets
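
A hedged usage sketch of these two Learner additions (argument and return layout assumed from the tsai docs):

```python
# Remove every callback before inference, then predict on new, unseen data.
# learn.remove_all_cbs()
# probas, target, preds = learn.get_X_preds(X_new, with_decoded=True)
```
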
- Miscellaneous:
- added natural mask that reuses missing data in the input
- added rotate_axis utility functions
- Callbacks:
- fixed an inconsistency issue in MVP's show_preds
- Models:
- Fixed an issue in InceptionTimePlus with stochastic depth regularization (stoch_depth parameter)
- Fixed issue with get_X_preds (different predictions when executed multiple times)
- fixed stoch_depth issue in InceptionTimePlus
- fixed kwargs issue in MultiInceptionTimePlus
- Data:
- fixed issue in delta gap normalize
- Learner:
- fixed a device bug in get_X_preds
- updated get_X_preds to decode classification and regression outputs
- Models:
- Fixed an issue in TST and TSTPlus related to encoder layer creation.
- Fixed an issue in TSStandardize when passing a tensor with nan values
- Models:
- Added TabTransformer, a state-of-the-art tabular transformer released in Dec 2020.
- TSTPlus now supports padding masks (passed as nan values) by default.
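
A sketch of the nan-as-padding behavior (constructor arguments per the tsai docs; shapes illustrative):

```python
import torch
from tsai.models.TSTPlus import TSTPlus

X = torch.randn(8, 2, 60)
X[:, :, 50:] = float('nan')            # trailing steps are treated as padding
model = TSTPlus(c_in=2, c_out=3, seq_len=60)
out = model(X)                         # padding mask is derived from the nans
```
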
- Data:
- Added a Nan2Value batch transform that replaces any nan values in the tensor with zero or the median.
- Faster dataloader when shuffle == True.
- Added TSUnwindowedDataset and TSUnwindowedDatasets, which apply window slicing online to prepare time series data.
- Added TSMetaDataset and TSMetaDatasets, which allow you to use one or multiple X (and y) arrays as input. This way you won't need to merge all data into a single array, and you can work with larger-than-memory datasets.
- Added a new tutorial notebook that demonstrates both multi-class and multi-label classification using tsai.
- Upgraded df2Xy to accept a y_func argument that allows calculation of different types of targets.
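
A sketch of the y_func hook (the surrounding df2Xy arguments are assumptions, not the exact API):

```python
import numpy as np

def y_func(o: np.ndarray) -> np.ndarray:
    # e.g. reduce each sample's raw target values to their mean
    return o.astype(float).mean(axis=-1)

# X, y = df2Xy(df, sample_col='sample', feat_col='feature',
#              target_col='target', y_func=y_func)
```
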
- Callbacks:
- MVP is now much faster as masks are now created directly as cuda tensors. This has increased speed by 2.5x in some tests.
- Data:
- train_perc in get_splits has been renamed to train_size to allow both floats and integers (see the sketch after this list).
- df2Xy API has been modified
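
A sketch of the renamed argument (import path per tsai's docs):

```python
from tsai.data.validation import get_splits

splits = get_splits(y, train_size=0.8)    # a float: 80% of the samples
# splits = get_splits(y, train_size=700)  # or an int: exactly 700 samples
```
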
- Learner:
- Updated the 3 new learner APIs: TSClassifier, TSRegressor, TSForecaster.
- ShowGraph callback:
- The callback optionally plots all metrics at the end of training.
- Data:
- Updated the df2xy function to fix a bug.
- Tutorial notebooks:
- Updated 04 (regression) to use the recently released Monash, UEA & UCR Time Series Extrinsic Regression Repository (2020).
- Models:
- Added new pooling layers and 3 new heads: attentional_pool_head, universal_pool_head, gwa_pool_head
- General:
- Added 3 new sklearn-type APIs: TSClassifier, TSRegressor and TSForecaster.
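
A sketch of the sklearn-type API (argument names taken from the tsai docs; X, y and splits assumed to be prepared):

```python
from tsai.all import *

# learn = TSClassifier(X, y, splits=splits, arch=InceptionTimePlus,
#                      metrics=accuracy)
# learn.fit_one_cycle(25, 1e-3)
```
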
- Data:
- External: added a new function get_forecasting_data to access some forecasting datasets.
- Modified TimeSplitter to also allow passing testing_size.
- Utilities: added a simple function (standardize) to scale any data using splits.
- Preprocessing: added a new class (Preprocess) to be able to preprocess data before creating the datasets/dataloaders. This is mainly to test different target preprocessing techniques.
- Utils: added a Nan2Value batch transform to remove any nan values from the dataset.
- Added a new utility function to ease the creation of a single TSDataLoader when no splits are used (for example with unlabeled datasets).
- Added a new function to quickly create empty arrays on disk or in memory (create_empty_array).
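
A sketch of the on-disk workflow (create_empty_array's exact signature is an assumption here):

```python
from tsai.utils import create_empty_array

# Pre-allocate a large array on disk, then fill it in chunks, memmap-style.
# X = create_empty_array((100_000, 3, 512), fname='X_big', on_disk=True)
# X[:1_000] = my_first_chunk
```
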
- Models:
- TST: Added option to visualize self-attention maps.
- Added 3 new SOTA models: MiniRocketClassifier and MiniRocketRegressor for datasets <10k samples, and MiniRocket (PyTorch), which supports any dataset size.
- Added a simple function to create a naive forecast.
- Added future_mask to TSBERT to be able to train forecasting models.
- Added option to pass any custom mask to TSBERT.
- Training:
- PredictionDynamics callback: allows you to visualize predictions during training.
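
A sketch of attaching the callback (learner construction assumed):

```python
# learn = ts_learner(dls, InceptionTimePlus, cbs=[PredictionDynamics()])
# learn.fit_one_cycle(25, 1e-3)
```
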
- Tutorial notebooks:
- New notebook demonstrating the new PredictionDynamics callback.
- Models:
- Fixed a bug that prevented models from freezing or unfreezing. Now all models that end with Plus can take predefined weights, and learn.freeze()/learn.unfreeze() will work as expected.
- Data:
- External: added a new function get_Monash_data to get extrinsic regression data.
- Models:
- Added show_batch functionality to TSBERT.
- General: Added min requirements for all package dependencies.
- Data:
- Validation: added split visualization (show_plot=True by default).
- Data preprocessing: added a by_step option to TSStandardize and TSNormalize.
- Featurize time series: added tsfresh library to allow the creation of features from time series.
- Models:
- Updated ROCKET to speed up feature creation and allow usage of large datasets.
- Added a change_model_head utility function to ease the process of changing an instantiated model's head (see the sketch after this list).
- Added a conv_lin_3d_head function to allow generation of 3d output tensors. This may be useful for multivariate, multi-horizon direct (non-recursive) time series forecasting, multi-output regression tasks, etc.
- Updated TST (Time series transformer) to allow the use of residual attention (based on He, R., Ravula, A., Kanagal, B., & Ainslie, J. (2020). Realformer: Transformer Likes Informed Attention. arXiv preprint arXiv:2012.11747.)
- Added new functionality to transfer a model's weights (useful when using pretrained models).
- updated build_ts_model to be able to use pretrained model weights.
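
A sketch combining these helpers (names from the entries above; the exact signatures, especially conv_lin_3d_head's, are assumptions):

```python
from tsai.all import *

# model = build_ts_model(InceptionTimePlus, dls=dls, pretrained=True)
# model = change_model_head(model, conv_lin_3d_head)   # 3d output tensors
```
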
- Training:
- TSBERT: a new callback has been added to be able to train a model in a self-supervised manner (similar to BERT).
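
A sketch of self-supervised pretraining with the callback (usage assumed from the tutorial notebook mentioned below):

```python
from tsai.all import *

# udls = get_ts_dls(X_unlabeled, splits=splits)   # no labels required
# learn = ts_learner(udls, InceptionTimePlus,
#                    cbs=[TSBERT(target_dir='./models/TSBERT')])
# learn.fit_one_cycle(100, 1e-3)
```
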
- Tutorial notebooks:
- Added a new tutorial notebook that demonstrates how to apply TSBERT (a self-supervised method for time series).
- Data:
- ROCKET: fixed a bug in create_rocket_features.
- Data:
- core: get_subset_dl and get_subset_dls convenience functions have been added.
- data preparation: SlidingWindow and SlidingWindowPanel functions are now vectorized, and are at least an order of magnitude faster.
- Models:
- XCM (An Explainable Convolutional Neural Network for Multivariate Time Series Classification) has been added. The official code has not been released yet. This is a state-of-the-art time series model that combines Conv1d and Conv2d and has good explainability.
- Training:
- learner: ts_learner and tsimage_learner convenience functions have been added, as well as a get_X_preds method to facilitate the generation of predictions.
- Data:
- data preparation: a new SlidingWindowPanel function has been added to help prepare the input from panel data. SlidingWindow has also been enhanced (see the sketch after this list).
- new preprocessors: TSRobustScaler, TSClipOutliers, TSDiff, TSLog, TSLogReturn
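
A sketch of the vectorized slicing (argument names assumed from the tsai docs):

```python
import numpy as np
from tsai.data.preparation import SlidingWindow

t = np.arange(100)
X, y = SlidingWindow(10, horizon=1)(t)   # X: sliding windows, y: next step
```
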
- Models:
- MLP and TCN (Temporal Convolutional Network) have been added.
- Training:
- Callback: Uncertainty-based data augmentation
- Label-mixing transforms (data augmentation): MixUp1D, CutMix1D callbacks
- Utility functions: build_ts_model, build_tabular_model, get_ts_dls, get_tabular_dls, ts_learner (see the sketch after this list)
- Added support for PyTorch 1.7.
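
A sketch of the end-to-end helper workflow (arguments assumed):

```python
from tsai.all import *

# dls = get_ts_dls(X, y, splits=splits)        # build time series dataloaders
# model = build_ts_model(TST, dls=dls)         # instantiate an architecture
# learn = ts_learner(dls, model, metrics=accuracy)
# learn.fit_one_cycle(25, 1e-3)
```
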
tsai 0.2.0 is a major update to the tsai library. These are the major changes made to the library:
- New tutorial notebooks have been added to demonstrate the use of new functionality, like:
- Time series data preparation
- Intro to time series regression
- TS archs comparison
- TS to image classification
- TS classification with transformers
- More time series data transforms have been added, including time series to image transforms.
- New callbacks have been added, like the state-of-the-art noisy_student, which allows you to use unlabeled data.
- New state-of-the-art time series models are now available, like XceptionTime, RNN_FCN (like LSTM_FCN, GRU_FCN), TransformerModel, TST (Transformer), OmniScaleCNN, mWDN (multilevel wavelet decomposition network), and XResNet1d.
- Some of the models (those ending with Plus) have additional, experimental functionality (like coordconv, zero_norm, squeeze-and-excitation, etc.).