Releases · facebookresearch/vissl
v0.1.6
VISSL Release Overview 0.1.6
- VISSL has been relicensed under the MIT license to enable broader use, including personal, university, and commercial use.
- The VISSL master branch has been renamed to main.
- We now recommend building VISSL from source. Since VISSL is designed to be a hackable research library, we believe this method of installation gives users the most flexibility to modify and extend VISSL as they like.
- Added the following SSL approaches and architectures to VISSL:
- XCiT: Cross-Covariance Image Transformers
- DINO: Emerging Properties in Self-Supervised Vision Transformers
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction
- ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
- Vision Transformers (ViT) backbone with optional gradient clipping
- Integrated Fully Sharded Data Parallel (FSDP) into VISSL, tested on SwAV and RegNet models.
- To aid FSDP development, we have added enhanced tooling for memory profiling.
- Full support for the newest PyTorch versions: 1.8.1, 1.9, and 1.9.1.
- Added CLIP and VTAB benchmarks. See our docs for more information on how to set these up.
- Updated all tutorials and improved versioning for future stability. We now suggest installing VISSL from source for ease of use.
- Improved support for reproducibility and debugging, as well as increased unit testing, including new DATA_LIMIT options, the [debugging sampler](https://github.com/facebookresearch/vissl/blob/v0.1.6/vissl/config/defaults.yaml#L374), CUDA reproducibility settings, and dataloader seeding improvements (see the example command after this list). For more information, see our docs.
- Enhanced metrics support. We can now log multiple metrics to stdout and TensorBoard.
- Added support for the Precision@k and Recall@k metrics. See our docs for more info.
- Updated the Dockerfile to reflect the newest version.
- Enhanced support for image retrieval benchmarks on the Copydays, ROxford, and RParis datasets.
- Support for image transformations using the Augly library.
- Added flexibility to register your own custom base model class.
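
To illustrate the new debugging options mentioned above, the sketch below limits a pretraining run to a small subset of the data via command-line config overrides. This is a minimal sketch, not official documentation: the `config.DATA.TRAIN.DATA_LIMIT` key, the SwAV config path, and the distributed overrides are assumptions based on the defaults.yaml linked above and should be checked against the docs.

```sh
# Minimal sketch (assumed config keys): run a quick single-GPU SwAV debug job
# on only 500 training samples by overriding options on the command line.
python tools/run_distributed_engines.py \
  config=pretrain/swav/swav_8node_resnet \
  config.DATA.TRAIN.DATA_LIMIT=500 \
  config.DISTRIBUTED.NUM_NODES=1 \
  config.DISTRIBUTED.NUM_PROC_PER_NODE=1
```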
How to Upgrade
We encourage users to build from source. However, if you still wish to use the binaries, you can upgrade with the following steps:
Conda environment
```sh
conda install -c vissl vissl==0.1.6
```
Python venv
```sh
# Uninstall fairscale, as we now include the library in the package
# because we rely on a specific commit that is not part of a PyPI release.
pip uninstall fairscale
pip install vissl==0.1.6
```
If you are installing for the first time, please see our installation instructions.
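
If you prefer to build from source as recommended above, the typical flow looks roughly like the sketch below. Treat it as an approximation: the `--recursive` clone, the requirements file, and the editable `.[dev]` install are assumptions drawn from the usual VISSL setup, and the linked installation instructions remain the authoritative steps.

```sh
# Rough sketch of a from-source install; see the installation docs for the
# exact, tested steps and dependency pins.
git clone --recursive https://github.com/facebookresearch/vissl.git
cd vissl
git checkout v0.1.6              # pin to this release, or stay on main
pip install -r requirements.txt
pip install -e ".[dev]"          # editable install so local edits take effect
```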
As always, thank you all so much for your contributions and feedback. Please feel free to continue to reach out in our issues with any questions or suggestions, or if you wish to contribute. We hope you are finding VISSL useful for pushing the state of the art in self-supervised learning!
v0.1.5 Initial Release
Initial release of VISSL