Releases: awslabs/Renate
Release 0.5.2
Release 0.5.1
Minor release that bumps the versions of the Pillow and transformers libraries to address an untrusted-data vulnerability in transformers<4.36.0 and an arbitrary-code-execution vulnerability in Pillow<10.2.0.
v0.5.0
🤩 Highlights
In this release we focused on adding continual learning methods that do not require storing data in memory. In particular, we implemented methods that work in combination with pre-trained transformer models.
🌟 New Features
- Logging additional metrics by @prabhuteja12 in #448
- S-Prompts for ViT and Text Transformers by @prabhuteja12 in #388
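Methods in the S-Prompts family learn a separate set of prompts per domain and, at inference time, route an input to the prompts of the most similar domain by comparing its feature to per-domain centroids. A minimal sketch of that selection step in plain NumPy (illustrative only; the names and shapes here are assumptions, not Renate's API):

```python
import numpy as np

def nearest_domain(feature, centroids):
    """Return the index of the domain whose centroid is closest to `feature`."""
    dists = np.linalg.norm(centroids - feature, axis=1)
    return int(np.argmin(dists))

# Toy example: two domains with well-separated feature centroids.
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
prompts = {0: "prompt-pool-domain-0", 1: "prompt-pool-domain-1"}

query = np.array([9.5, 10.2])            # feature of a test input
domain = nearest_domain(query, centroids)
selected = prompts[domain]               # these prompts are prepended to the transformer input
```

Because each domain's prompts are trained independently, no rehearsal data needs to be stored, which is what makes this family of methods memory-free.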
🛢 Datasets
- Adding CDDB dataset by @prabhuteja12 in #442
- Core50 dataset by @prabhuteja12 in #447
📜 Documentation Updates
- Documentation changes by @prabhuteja12 in #450
🐛 Bug Fixes
- CLEAR and TinyImageNet tests bugfixes by @prabhuteja12 in #441
- Fix CosineAnnealing issue in configs and add more configs by @wistuba in #449
- Fixing Issue of Avalanche Updater with Datasets that return PIL Images by @wistuba in #457
- Avoid In-Memory Dataset Copy for Avalanche by @wistuba in #463
- Update Flaky Tests by @wistuba in #495
- Speeding up unit tests by @prabhuteja12 in #432
Full Changelog: v0.4.0...v0.5.0
Release 0.4.0
🤩 Highlights
Renate 0.4.0 adds multi-GPU training via DeepSpeed, data shift detectors, L2P as a new updater, and several new datasets for benchmarking (WildTimeData, CLEAR, DomainNet, 4TextDataset).
🌟 New Features
- MultiGPU training with deepspeed by @prabhuteja12 in #218
- Renate NLP Models and Benchmarking Support for Hugging Face by @wistuba in #213 #233
- Covariate Shift Detectors by @lballes: MMD (#237), KS (#242)
- New Updater: Learning to Prompt (L2P) by @prabhuteja12 in #367
- Upload custom files and folders with a SageMaker training Job by @wistuba in #286
- Custom Optimizer and LR schedulers by @wistuba in #290
- Flag to remove intermediate tasks' states by @prabhuteja12 in #289
- Make number of epochs "finetuning-equivalent" by @lballes in #344
- Add Micro Average Accuracy by @wistuba in #323
- Experimentation Tools by @wistuba in #356
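The micro average accuracy added above weights each task by its number of samples (total correct over total evaluated), rather than taking a plain mean of per-task accuracies. A small sketch of the difference (illustrative only, not Renate's metric implementation):

```python
def micro_average_accuracy(correct_per_task, total_per_task):
    """Micro average: pool all predictions, then compute one accuracy."""
    return sum(correct_per_task) / sum(total_per_task)

# Task A: 90/100 correct; task B: 5/10 correct.
micro = micro_average_accuracy([90, 5], [100, 10])  # 95/110, dominated by the large task
macro = (90 / 100 + 5 / 10) / 2                     # 0.70, each task weighted equally
```

Micro averaging is the natural choice when tasks have very different sizes and the quantity of interest is accuracy over all seen data.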
🛢 Datasets
- Added 4 Wild Time Datasets by @wistuba in #187
- Enable CLEAR Datasets for Benchmarking by @prabhuteja12 in #287
- Add DomainNet Benchmark by @wistuba in #357
- Add benchmark made of multiple text datasets by @610v4nn1 in #354
- MultiText dataset Added to Benchmarking by @wistuba in #366
📜 Documentation Updates
- Add doc page and example for shift detection by @lballes in #244
- Add example of using renate in your own script by @lballes in #274
- Describe Installation of Dependencies for Benchmarking by @wistuba in #313
- Improve title for the NLP example by @610v4nn1 in #416
🐛 Bug Fixes
- Fix Offline-ER bug and change loss functions by @wistuba in #273
- Missing Argument Doesn't Allow for Remote Experiments by @wistuba in #304
- Fix Small Bug in Benchmarking Script and Add LR Scheduler to Experiment Config by @wistuba in #305
- Enable Downloading Large Files by @wistuba in #337
- Fix Scenario for CLEAR by @wistuba in #339
- Fix CLS-ER Loss by @wistuba in #347
- Fix weighting in OfflineER by @lballes in #355
- Fixing Bug with HPO by @wistuba in #345
- Adding a Datacollator to handle the wild time text datasets by @prabhuteja12 in #338
- Enable Offline-ER for NestedTensors by @wistuba in #336
- Refactor Offline-ER to work with `collate_fn` by @wistuba in #390
- Fixing the issue with Domainnet redownloading by @prabhuteja12 in #389
- CLEAR dataset download link update by @prabhuteja12 in #431
- Support Use of Joint and GDumb with Pre-Trained Models by @wistuba in #362
🏗️ Code Refactoring
- Remove obsolete `set_transforms` from memory buffer by @lballes in #265
- Missing dependency and problem with import by @wistuba in #272
- Using HuggingFace ViT implementation (#219) by @prabhuteja12 in #303
- Introduce `RenateLightningModule` by @wistuba in #301
- Cleanup iCarl by @wistuba in #358
- Abstracting prompting transformer for use in L2P and S-Prompt by @prabhuteja12 in #420
- Adding flags to expose gradient clipping args in Trainer by @prabhuteja12 in #361
- Wild Time Benchmarks and Small Memory Hack by @wistuba in #363
- Clean Up Learner Checkpoint and Fix Model Loading by @wistuba in #365
- Enable Custom Grouping for DataIncrementalScenario by @wistuba in #368
- Masking of logits of irrelevant classes by @prabhuteja12 in #364
- Modifies current text transformer implementation to a RenateBenchmarkingModule by @prabhuteja12 in #380
- Replace memory batch size with a fraction of the total batch size by @wistuba in #359
- Make offline ER use total batch size in first update by @lballes in #381
🔧 Maintenance
- Robust Integration Tests by @wistuba in #214
- Update Renate Config Example by @wistuba in #226
- Longer Experiments for GPUs by @wistuba in #246
- Using `num_gpus_per_trial` after SyneTune update by @prabhuteja12 in #278
- Implementing a buffer that handles dataset elements of different sizes by @prabhuteja12 in #279
- Run sagemaker tests from GitHub Actions by @wesk in #275
- Fix Security Problem with `transformers` by @wistuba in #298
Full Changelog: v0.3.1...v0.4.0
Release v0.3.1
What's Changed
- Adding a missing dependency and fixing a case where a conditional requirement was unnecessarily required by @wistuba in #284
Full Changelog: v0.3.0...v0.3.1
Release v0.3.0
What's Changed
- Covariate shift detection by @lballes (#237, #242, #244). Shift detection may help users decide when to update a model. We now provide methods for covariate shift detection in `renate.shift`.
- Wild Time benchmarking by @wistuba in #187. Wild Time is a collection of datasets that exhibit temporal data distribution shifts. It is now available for benchmarking in Renate.
- Improved NLP support by @wistuba (#213, #233). There is now a `RenateModule` for convenient use of Hugging Face Transformers. NLP models are now included in the benchmarking suite.
- Bug fixes and minor improvements.
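The MMD-based detector compares a reference sample against new data with a kernel two-sample statistic: if the maximum mean discrepancy is large, the input distribution has likely shifted. The core idea can be sketched in plain NumPy (this illustrates the statistic only and is not Renate's interface; the bandwidth choice is a placeholder):

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples x and y (RBF kernel)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(200, 2))      # reference data
same = rng.normal(0.0, 1.0, size=(200, 2))     # same distribution -> MMD near 0
shifted = rng.normal(3.0, 1.0, size=(200, 2))  # mean-shifted data -> large MMD
```

In practice the raw statistic is turned into a decision via a permutation test or a calibrated threshold; this sketch only shows why a shifted sample produces a much larger value than a fresh draw from the same distribution.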
Full Changelog: v0.2.1...v0.3.0
Release v0.2.1
What's Changed
- Update README.rst with paper ref by @610v4nn1
- Add doc page explaining NLP example by @lballes
- Bug fix: removed the need to specify the chunk id, by @wistuba
Full Changelog: v0.2.0...v0.2.1
Release v0.2.0
Renate v0.2 is finally here! 🌟
In these 88 new commits we made a number of enhancements and fixes.
It has been a great team effort and we are very happy to see that two more developers decided to contribute to Renate.
Highlights
- Scalable data buffer (@lballes). Since replay-based methods are used in many practical applications, and a larger memory buffer leads to better performance, we made sure Renate users can use a replay memory larger than the physical memory available on their machines. This enables more users to apply Renate in practice, especially in combination with large models and datasets.
- Avalanche learning strategies are usable in Renate (@wistuba). Avalanche is a library for continual learning that aims to make research reproducible. While Renate focuses on real-world applications, it can still be useful for users to compare with the training strategies implemented in Avalanche. To this end, Renate now allows the use of Avalanche training strategies, although not all functionality is available for them (see the documentation for details).
- Simplified interfaces (@610v4nn1, @wistuba). We simplified the naming of attributes and methods to make the library more intuitive and easier to use. Usability is always among our priorities, and we will be happy to receive more feedback after these changes.
- Additional tests (@wesk). We increased the amount of testing done for every PR and are now running a number of quick training jobs. This allows us to catch problems that arise from the interaction between different components of the library and that are usually not captured by unit tests.
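A standard way to keep a bounded replay memory over an unbounded data stream, as in the scalable-buffer highlight above, is reservoir sampling: every seen example has an equal chance of being in the buffer, regardless of stream length. A generic sketch (a common technique, not necessarily Renate's actual buffer implementation):

```python
import random

class ReservoirBuffer:
    """Keep a uniform random sample of at most `capacity` items from a stream."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a stored item with probability capacity / seen,
            # which keeps the sample uniform over everything seen so far.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

buffer = ReservoirBuffer(capacity=100)
for x in range(10_000):  # stream of 10k points; only 100 are kept at any time
    buffer.add(x)
```

Because only `capacity` items are ever held, the memory footprint is fixed no matter how long training runs; a buffer that spills to disk extends the same idea beyond physical RAM.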
There is way more to be discovered, from the examples using pre-trained text models (the `nlp_finetuning` folder in the examples) to the additional Scenario classes created to test the algorithms in different environments.
Full Changelog: v0.1.0...v0.2.0
Initial Release
First public release of Renate.
The library provides the ability to:
- train and retrain neural network models
- optimize the hyperparameters when training
- run training jobs either locally or on Amazon SageMaker
The package also contains documentation, examples, and scripts for experimentation.
Contributors (ordered by number of commits)
- @martinferianc
- @wistuba
- @lballes
- @610v4nn1
- Beyza Ermis
- Yantao Shen
- Elman Mansimov
- @mlblack