Commit 7516b5b
[Enhance] update SimCLR models and results (#295)
* [Enhance] update simclr models and results

* [Fix] revise comments to indicate settings
Jiahao000 authored Apr 29, 2022
1 parent c532a11 commit 7516b5b
Showing 13 changed files with 70 additions and 33 deletions.
@@ -4,10 +4,10 @@
     '../_base_/schedules/sgd_coslr-100e.py',
     '../_base_/default_runtime.py',
 ]
+# SwAV linear evaluation setting
 
 model = dict(backbone=dict(frozen_stages=4))
 
-# swav setting
 # runtime settings
 # the max_keep_ckpts controls the max number of ckpt file in your work_dirs
 # if it is 3, when CheckpointHook (in mmcv) saves the 4th ckpt
@@ -4,12 +4,12 @@
     '../_base_/schedules/sgd_steplr-100e.py',
     '../_base_/default_runtime.py',
 ]
+# MoCo v1/v2 linear evaluation setting
 
 model = dict(backbone=dict(frozen_stages=4))
 
 evaluation = dict(interval=1, topk=(1, 5))
 
-# moco setting
 # optimizer
 optimizer = dict(type='SGD', lr=30., momentum=0.9, weight_decay=0.)
 
@@ -4,13 +4,17 @@
     '../_base_/schedules/lars_coslr-90e.py',
     '../_base_/default_runtime.py',
 ]
+# SimSiam linear evaluation setting
+# According to SimSiam paper, this setting can also be used to evaluate
+# other methods like SimCLR, MoCo, BYOL, SwAV
 
 model = dict(backbone=dict(frozen_stages=4))
 
 # dataset summary
-data = dict(samples_per_gpu=512) # total 512*8=4096, 8GPU linear cls
+data = dict(
+    samples_per_gpu=512,
+    workers_per_gpu=8) # total 512*8=4096, 8GPU linear cls
 
-# simsiam setting
 # runtime settings
 # the max_keep_ckpts controls the max number of ckpt file in your work_dirs
 # if it is 3, when CheckpointHook (in mmcv) saves the 4th ckpt
@@ -4,6 +4,7 @@
     '../_base_/schedules/sgd_steplr-100e.py',
     '../_base_/default_runtime.py',
 ]
+# Multi-head linear evaluation setting
 
 model = dict(backbone=dict(frozen_stages=4))
 
@@ -45,4 +46,8 @@
 
 # runtime settings
 runner = dict(type='EpochBasedRunner', max_epochs=90)
-checkpoint_config = dict(interval=10)
+
+# the max_keep_ckpts controls the max number of ckpt file in your work_dirs
+# if it is 3, when CheckpointHook (in mmcv) saves the 4th ckpt
+# it will remove the oldest one to keep the number of total ckpts as 3
+checkpoint_config = dict(interval=10, max_keep_ckpts=3)
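To make the new comments concrete, here is a minimal sketch of the checkpoint rotation they describe, using the interval=10 and max_epochs=90 values from this config. It is an illustration only, not the actual mmcv CheckpointHook code:

```python
# Illustration of max_keep_ckpts=3 with interval=10 over 90 epochs
# (a sketch of the behaviour described in the comments, not mmcv internals).
kept = []
for epoch in range(10, 91, 10):   # CheckpointHook saves every 10 epochs
    kept.append(f'epoch_{epoch}.pth')
    if len(kept) > 3:             # the 4th checkpoint arrives, drop the oldest
        kept.pop(0)
print(kept)  # ['epoch_70.pth', 'epoch_80.pth', 'epoch_90.pth']
```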
3 changes: 2 additions & 1 deletion configs/selfsup/_base_/models/simclr.py
@@ -6,7 +6,8 @@
         depth=50,
         in_channels=3,
         out_indices=[4], # 0: conv-1, x: stage-x
-        norm_cfg=dict(type='SyncBN')),
+        norm_cfg=dict(type='SyncBN'),
+        zero_init_residual=True),
     neck=dict(
         type='NonLinearNeck', # SimCLR non-linear neck
         in_channels=2048,
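The newly added zero_init_residual=True flag conventionally initialises the last BatchNorm weight (gamma) of each residual block to zero, so every block starts out close to an identity mapping. A hedged sketch of that idea, assuming a torchvision-style Bottleneck layout rather than the actual implementation behind this config:

```python
import torch.nn as nn

# Assumed behaviour of zero_init_residual: zero the gamma of the last BN in
# each residual block so the block initially behaves like an identity mapping.
def zero_init_last_bn(resnet: nn.Module) -> None:
    for m in resnet.modules():
        # torchvision Bottleneck blocks name their last BN 'bn3' (assumption)
        if hasattr(m, 'bn3') and isinstance(m.bn3, nn.BatchNorm2d):
            nn.init.constant_(m.bn3.weight, 0)
```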
9 changes: 5 additions & 4 deletions configs/selfsup/simclr/README.md
@@ -36,11 +36,12 @@ Besides, k=1 to 96 indicates the hyper-parameter of Low-shot SVM.
 
 The **Feature1 - Feature5** don't have the GlobalAveragePooling, the feature map is pooled to the specific dimensions and then follows a Linear layer to do the classification. Please refer to [resnet50_mhead_linear-8xb32-steplr-90e_in1k](https://github.com/open-mmlab/mmselfsup/blob/master/configs/benchmarks/classification/imagenet/resnet50_mhead_linear-8xb32-steplr-90e_in1k.py) for details of config.
 
-The **AvgPool** result is obtained from Linear Evaluation with GlobalAveragePooling. Please refer to [resnet50_linear-8xb32-steplr-100e_in1k](https://github.com/open-mmlab/mmselfsup/blob/master/configs/benchmarks/classification/imagenet/resnet50_linear-8xb32-steplr-100e_in1k.py) for details of config.
+The **AvgPool** result is obtained from Linear Evaluation with GlobalAveragePooling. Please refer to [resnet50_linear-8xb512-coslr-90e_in1k](https://github.com/open-mmlab/mmselfsup/blob/master/configs/benchmarks/classification/imagenet/resnet50_linear-8xb512-coslr-90e_in1k.py) for details of config.
 
-| Self-Supervised Config | Feature1 | Feature2 | Feature3 | Feature4 | Feature5 | AvgPool |
-| ------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | -------- | -------- | -------- | -------- | ------- |
-| [resnet50_8xb32-coslr-200e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/simclr/simclr_resnet50_8xb32-coslr-200e_in1k.py) | 14.43 | 30.97 | 41.02 | 53.92 | 61.24 | 57.28 |
+| Self-Supervised Config | Feature1 | Feature2 | Feature3 | Feature4 | Feature5 | AvgPool |
+| ---------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------- | -------- | -------- | -------- | ------- |
+| [resnet50_8xb32-coslr-200e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/simclr/simclr_resnet50_8xb32-coslr-200e_in1k.py) | 16.29 | 31.11 | 39.99 | 55.06 | 62.91 | 62.56 |
+| [resnet50_16xb256-coslr-200e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/simclr/simclr_resnet50_16xb256-coslr-200e_in1k.py) | 15.44 | 31.47 | 41.83 | 59.44 | 66.41 | 66.66 |
 
 #### Places205 Linear Evaluation
 
18 changes: 15 additions & 3 deletions configs/selfsup/simclr/metafile.yml
@@ -4,7 +4,7 @@ Collections:
       Training Data: ImageNet-1k
       Training Techniques:
         - LARS
-      Training Resources: 8x V100 GPUs
+      Training Resources: 8x V100 GPUs (b256), 16x A100-80G GPUs (b4096)
       Architecture:
         - ResNet
         - SimCLR
@@ -23,6 +23,18 @@
       - Task: Self-Supervised Image Classification
         Dataset: ImageNet-1k
         Metrics:
-          Top 1 Accuracy: 57.28
+          Top 1 Accuracy: 62.56
     Config: configs/selfsup/simclr/simclr_resnet50_8xb32-coslr-200e_in1k.py
-    Weights: https://download.openmmlab.com/mmselfsup/simclr/simclr_resnet50_8xb32-coslr-200e_in1k_20220225-97d2abef.pth
+    Weights: https://download.openmmlab.com/mmselfsup/simclr/simclr_resnet50_8xb32-coslr-200e_in1k_20220428-46ef6bb9.pth
+  - Name: simclr_resnet50_16xb256-coslr-200e_in1k
+    In Collection: SimCLR
+    Metadata:
+      Epochs: 200
+      Batch Size: 4096
+    Results:
+      - Task: Self-Supervised Image Classification
+        Dataset: ImageNet-1k
+        Metrics:
+          Top 1 Accuracy: 66.66
+    Config: configs/selfsup/simclr/simclr_resnet50_16xb256-coslr-200e_in1k.py
+    Weights: https://download.openmmlab.com/mmselfsup/simclr/simclr_resnet50_16xb256-coslr-200e_in1k_20220428-8c24b063.pth
@@ -0,0 +1,7 @@
+_base_ = 'simclr_resnet50_8xb32-coslr-200e_in1k.py'
+
+# optimizer
+optimizer = dict(lr=4.8)
+
+# dataset summary
+data = dict(samples_per_gpu=256, workers_per_gpu=8) # total 256*16
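The lr=4.8 above is consistent with the linear scaling rule from the SimCLR paper (lr = 0.3 × BatchSize / 256 under LARS). Treating that rule as an assumption, a quick check:

```python
# Assumed SimCLR scaling rule: lr = 0.3 * total_batch_size / 256 (LARS).
def scaled_lr(samples_per_gpu: int, num_gpus: int, base_lr: float = 0.3) -> float:
    return base_lr * samples_per_gpu * num_gpus / 256

print(scaled_lr(256, 16))  # 4.8, matching this 16xb256 config
```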
@@ -1,4 +1,7 @@
 _base_ = 'simclr_resnet50_8xb32-coslr-200e_in1k.py'
 
+# optimizer
+optimizer = dict(lr=0.6)
+
 # dataset summary
 data = dict(samples_per_gpu=64) # total 64*8
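The same assumed scaling rule gives 0.3 × (64 × 8) / 256 = 0.6, matching the learning rate added in this 8xb64 config.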