VISSL is a computer VIsion library for state-of-the-art Self-Supervised Learning research with PyTorch. VISSL aims to accelerate the research cycle in self-supervised learning: from designing a new self-supervised task to evaluating the learned representations. Key features include:
- Reproducible implementation of SOTA in Self-Supervision: All existing SOTA methods in self-supervision are implemented - SwAV, SimCLR, MoCo (v2), PIRL, NPID, NPID++, DeepClusterV2, ClusterFit, RotNet, Jigsaw. Supervised training is also supported.
- Benchmark suite: A variety of benchmark tasks, including linear image classification (Places205, ImageNet-1K, VOC07), full finetuning, semi-supervised benchmarks, nearest-neighbor benchmarks, and object detection (Pascal VOC and COCO).
- Ease of usability: Easy to use through a yaml configuration system based on Hydra.
- Modular: Easy to design new tasks and reuse existing components from other tasks (objective functions, model trunks and heads, data transforms, etc.). The modular components are simple drop-in replacements in yaml config files (a minimal sketch follows this list).
- Scalability: Easy to train models on 1 GPU, multiple GPUs, or multiple nodes. Several components for large-scale training are provided as simple config-file plugs: activation checkpointing, ZeRO, FP16, LARC, a stateful data sampler, a data class to handle invalid images, large model backbones like RegNets, etc.
- Model Zoo: Over 60 pre-trained self-supervised model weights.
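To make the drop-in configuration idea concrete, here is a minimal sketch of swapping components through overrides. It uses OmegaConf, the configuration library underlying Hydra, and the keys shown (MODEL.TRUNK.NAME, LOSS.name, OPTIMIZER.use_larc) are illustrative assumptions rather than VISSL's exact config schema; consult the yaml files shipped with VISSL for the real option names.

```python
# A minimal sketch of the yaml/override workflow, assuming OmegaConf (the
# library underlying Hydra) is installed. The keys below are illustrative
# assumptions, not VISSL's actual schema.
from omegaconf import OmegaConf

# A tiny stand-in for a base yaml config.
base = OmegaConf.create({
    "MODEL": {"TRUNK": {"NAME": "resnet"}},
    "LOSS": {"name": "simclr_info_nce_loss"},
    "OPTIMIZER": {"use_larc": False},
})

# Swapping a trunk or turning on a large-scale training knob is just an override.
overrides = OmegaConf.from_dotlist([
    "MODEL.TRUNK.NAME=regnet",
    "OPTIMIZER.use_larc=true",
])

cfg = OmegaConf.merge(base, overrides)
print(OmegaConf.to_yaml(cfg))
```

The same dotted-key override syntax is what Hydra accepts on the command line, which is why enabling a feature such as LARC or FP16 amounts to flipping a config value rather than editing code.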
Install VISSL by following the installation instructions in INSTALL.md. After installation, please see Getting Started with VISSL and the Colab Notebook to learn about basic usage.
Learn more about VISSL in our documentation, and see the projects/ directory for some projects built on top of VISSL.
Get started with VISSL by trying one of the Colab tutorial notebooks.
- Train SimCLR on 1-gpu
- Extracting Features from a pretrained model
- Benchmark task: Full finetuning on ImageNet-1K
- Benchmark task: Linear image classification on ImageNet-1K
- Large scale training (fp16, LARC, ZeRO)
We provide a large set of baseline results and trained models available for download in the VISSL Model Zoo.
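As a rough illustration of consuming those weights, the sketch below loads a downloaded checkpoint into a torchvision ResNet-50. The file name and the classy_state_dict key are assumptions about the checkpoint layout, and trunk parameter names generally need remapping before they match torchvision, so treat this as a starting point rather than the official loading path.

```python
# Rough sketch of inspecting downloaded self-supervised weights with a
# torchvision ResNet-50. The file name and dictionary key below are
# assumptions; VISSL checkpoints are nested and trunk parameter names may
# need renaming before they line up with torchvision's layout.
import torch
from torchvision.models import resnet50

checkpoint = torch.load("model_zoo_checkpoint.torch", map_location="cpu")
# Unwrap a nested state dict if one is present (the key name is an assumption).
state_dict = checkpoint.get("classy_state_dict", checkpoint)

model = resnet50()
# strict=False reports what did and did not match instead of failing outright.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"{len(missing)} missing keys, {len(unexpected)} unexpected keys")
```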
VISSL is written and maintained by Facebook AI Research.
We welcome new contributions to VISSL and will be actively maintaining this library! Please refer to CONTRIBUTING.md for full instructions on how to run the code, tests, and linter, and how to submit your pull requests.
VISSL is released under the CC-NC 4.0 International license.
If you find VISSL useful in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry.
@misc{goyal2021vissl,
author = {Priya Goyal and Quentin Duval and Jeremy Reizenstein and Matthew Leavitt and Min Xu and
Benjamin Lefaudeux and Mannat Singh and Vinicius Reis and Mathilde Caron and Piotr Bojanowski and
Armand Joulin and Ishan Misra},
title = {VISSL},
howpublished = {\url{https://github.com/facebookresearch/vissl}},
year = {2021}
}
Below we share, in reverse chronological order, the updates and new releases in VISSL. All VISSL releases are available here.
[January 2021]: VISSL v0.1 released.