
open-mmlab/mmselfsup


📘Documentation | 🛠️Installation | 👀Model Zoo | 🆕Update News | 🤔Reporting Issues

🌟 MMPreTrain is a newly upgraded open-source framework for visual pre-training. It aims to provide multiple powerful pre-trained backbones and to support various pre-training strategies.

👉 The MMPreTrain 1.0 branch is in trial; everyone is welcome to try it and discuss with us! 👈

English | 简体中文

Introduction

MMSelfSup is an open source self-supervised representation learning toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.8 or higher.

Major features

  • Methods All in One

    MMSelfSup provides state-of-the-art methods in self-supervised learning. For a comprehensive comparison across all benchmarks, most of the pre-training methods are trained under the same settings.

  • Modular Design

    MMSelfSup follows a code architecture similar to other OpenMMLab projects, with a modular design that is flexible and makes it convenient for users to build their own algorithms.

  • Standardized Benchmarks

    MMSelfSup standardizes benchmarks including logistic regression, SVM / low-shot SVM on linearly probed features, semi-supervised classification, object detection, and semantic segmentation.

  • Compatibility

    Since MMSelfSup adopts a design of modules and interfaces similar to other OpenMMLab projects, it supports smooth evaluation on downstream tasks such as object detection and segmentation with the corresponding OpenMMLab toolboxes.

What's New

MMSelfSup v1.0.0 was released from the main branch. Please refer to the Migration Guide for more details.

MMSelfSup v1.0.0 was released on 06/04/2023.

  • Support PixMIM.
  • Support DINO in projects/dino/.
  • Refactor the file I/O interface.
  • Refine documentation.

MMSelfSup v1.0.0rc6 was released on 10/02/2023.

  • Support MaskFeat with video dataset in projects/maskfeat_video/.
  • Translate documentation to Chinese.

MMSelfSup v1.0.0rc5 was released on 30/12/2022.

  • Support BEiT v2, MixMIM, EVA.
  • Support ShapeBias for model analysis.
  • Add solution of FGIA ACCV 2022 (1st place).
  • Refactor t-SNE.

Please refer to Changelog for details and release history.

Differences between MMSelfSup 1.x and 0.x can be found in Migration.

Installation

MMSelfSup depends on PyTorch, MMCV, MMEngine, and MMClassification.

Please refer to Installation for more detailed instructions.
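
As a quick sketch, installation from source typically follows the usual OpenMMLab 2.x pattern shown below; the version specifiers are assumptions, and the Installation guide remains the authoritative reference.

    # Install MIM and the core OpenMMLab dependencies (version pins are assumptions).
    pip install -U openmim
    mim install mmengine
    mim install 'mmcv>=2.0.0rc1'

    # Install MMSelfSup itself from source.
    git clone https://github.com/open-mmlab/mmselfsup.git
    cd mmselfsup
    pip install -v -e .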

Get Started

For tutorials, we provide User Guides for basic usage:

Pretrain

Downstream Tasks

Useful Tools

Advanced Guides and Colab Tutorials are also provided.

Please refer to FAQ for frequently asked questions.
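
As a hedged sketch of the basic pre-training workflow covered in the Pretrain guide, the commands below use the standard OpenMMLab entry points; the config path is a placeholder, so pick a real config from configs/selfsup/ and see the guide for the full set of options.

    # Single-GPU pre-training with a config file from this repository (placeholder path).
    python tools/train.py ${CONFIG_FILE}

    # Distributed pre-training, e.g. on 8 GPUs.
    bash tools/dist_train.sh ${CONFIG_FILE} 8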

Model Zoo

Please refer to the Model Zoo for a comprehensive set of pre-trained models and benchmarks.

Supported algorithms:

More algorithms are planned.

Benchmark

Benchmarks                                           Setting
ImageNet Linear Classification (Multi-head)          Goyal2019
ImageNet Linear Classification (Last)                -
ImageNet Semi-Sup Classification                     -
Places205 Linear Classification (Multi-head)         Goyal2019
iNaturalist2018 Linear Classification (Multi-head)   Goyal2019
PASCAL VOC07 SVM                                     Goyal2019
PASCAL VOC07 Low-shot SVM                            Goyal2019
PASCAL VOC07+12 Object Detection                     MoCo
COCO17 Object Detection                              MoCo
Cityscapes Segmentation                              MMSeg
PASCAL VOC12 Aug Segmentation                        MMSeg

Contributing

We appreciate all contributions that improve MMSelfSup. Please refer to the Contribution Guides for details about the contributing guidelines.

Acknowledgement

MMSelfSup is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as the users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new algorithms.

MMSelfSup originates from OpenSelfSup, and we appreciate all early contributions made to OpenSelfSup. A few contributors are listed here: Xiaohang Zhan (@XiaohangZhan), Jiahao Xie (@Jiahao000), Enze Xie (@xieenze), Xiangxiang Chu (@cxxgtxy), Zijian He (@scnuhealthy).

Citation

If you use this toolbox or benchmark in your research, please cite this project.

@misc{mmselfsup2021,
    title={{MMSelfSup}: OpenMMLab Self-Supervised Learning Toolbox and Benchmark},
    author={MMSelfSup Contributors},
    howpublished={\url{https://github.com/open-mmlab/mmselfsup}},
    year={2021}
}

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMEngine: OpenMMLab foundational library for training deep learning models.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMEval: A unified evaluation library for multiple machine learning libraries.
  • MIM: MIM installs OpenMMLab packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMDeploy: OpenMMLab model deployment framework.