[Feature] Add Twins backbone and convert checkpoints #642

Merged: 19 commits, Jan 27, 2022
2 changes: 1 addition & 1 deletion README.md
@@ -77,8 +77,8 @@ Results and models are available in the [model zoo](https://mmclassification.rea
 - [x] [DeiT](https://github.com/open-mmlab/mmclassification/tree/master/configs/deit)
 - [x] [Conformer](https://github.com/open-mmlab/mmclassification/tree/master/configs/conformer)
 - [x] [T2T-ViT](https://github.com/open-mmlab/mmclassification/tree/master/configs/t2t_vit)
+- [x] [Twins](https://github.com/open-mmlab/mmclassification/tree/master/configs/twins)
 - [x] [EfficientNet](https://github.com/open-mmlab/mmclassification/tree/master/configs/efficientnet)
-- [ ] Twins
 - [ ] HRNet

</details>
30 changes: 30 additions & 0 deletions configs/_base_/models/twins_pcpvt_base.py
@@ -0,0 +1,30 @@
# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='PCPVT',
        arch='base',
        in_channels=3,
        out_indices=(3, ),
        qkv_bias=True,
        norm_cfg=dict(type='LN', eps=1e-06),
        norm_after_stage=[False, False, False, True],
        drop_rate=0.0,
        attn_drop_rate=0.,
        drop_path_rate=0.3),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=512,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
        cal_acc=False),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=0.02, bias=0.),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.)
    ],
    train_cfg=dict(augments=[
        dict(type='BatchMixup', alpha=0.8, num_classes=1000, prob=0.5),
        dict(type='BatchCutMix', alpha=1.0, num_classes=1000, prob=0.5)
    ]))
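As a quick sanity check of this config, the backbone can be built on its own through the mmcls builder. A minimal sketch, assuming the `PCPVT` class added by this PR accepts the arguments above:

```python
import torch

from mmcls.models import build_backbone

# Build only the PCPVT backbone from the dict used in the config above.
backbone = build_backbone(dict(type='PCPVT', arch='base', out_indices=(3, )))
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))

# out_indices=(3, ) keeps only the last stage; its width is what
# head.in_channels (512) must match.
print(feats[-1].shape)  # expected: torch.Size([1, 512, 7, 7])
```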
30 changes: 30 additions & 0 deletions configs/_base_/models/twins_svt_base.py
@@ -0,0 +1,30 @@
# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='SVT',
        arch='base',
        in_channels=3,
        out_indices=(3, ),
        qkv_bias=True,
        norm_cfg=dict(type='LN'),
        norm_after_stage=[False, False, False, True],
        drop_rate=0.0,
        attn_drop_rate=0.,
        drop_path_rate=0.3),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=768,
        loss=dict(
            type='LabelSmoothLoss', label_smooth_val=0.1, mode='original'),
        cal_acc=False),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=0.02, bias=0.),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.)
    ],
    train_cfg=dict(augments=[
        dict(type='BatchMixup', alpha=0.8, num_classes=1000, prob=0.5),
        dict(type='BatchCutMix', alpha=1.0, num_classes=1000, prob=0.5)
    ]))
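The SVT config mirrors PCPVT but ends in a wider stage, hence `in_channels=768` here. A sketch inspecting all four stages; the stage widths (96, 192, 384, 768) are the SVT-base values from the paper and are an assumption about this implementation:

```python
import torch

from mmcls.models import build_backbone

# Request all four stages instead of only the last one.
svt = build_backbone(dict(type='SVT', arch='base', out_indices=(0, 1, 2, 3)))
svt.eval()

with torch.no_grad():
    feats = svt(torch.randn(1, 3, 224, 224))

# Assumed widths per the Twins paper for SVT-base: 96, 192, 384, 768.
for i, feat in enumerate(feats):
    print(f'stage {i}: {tuple(feat.shape)}')
```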
38 changes: 38 additions & 0 deletions configs/twins/README.md
@@ -0,0 +1,38 @@
# Twins

> [Twins: Revisiting the Design of Spatial Attention in Vision Transformers](http://arxiv-export-lb.library.cornell.edu/abs/2104.13840)
<!-- [ALGORITHM] -->

## Abstract

Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at [this https URL](https://github.com/Meituan-AutoML/Twins).

<div align=center>
<img src="https://user-images.githubusercontent.com/24582831/145021310-57826cf5-5e03-4c7c-9081-ffa744bdae27.png" width="80%"/>
</div>

## Results and models

### ImageNet-1k

| Model | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
|:--------------:|:---------:|:--------:|:---------:|:---------:|:------:|:--------:|
| PCPVT-small\* | 24.11 | 3.67 | 81.14 | 95.69 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-pcpvt-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-small_3rdparty_8xb128_in1k_20220126-ef23c132.pth) |
| PCPVT-base\* | 43.83 | 6.45 | 82.66 | 96.26 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-pcpvt-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-base_3rdparty_8xb128_in1k_20220126-f8c4b0d5.pth) |
| PCPVT-large\* | 60.99 | 9.51 | 83.09 | 96.59 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-pcpvt-large_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-large_3rdparty_16xb64_in1k_20220126-c1ef8d80.pth) |
| SVT-small\* | 24.06 | 2.82 | 81.77 | 95.57 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-svt-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-small_3rdparty_8xb128_in1k_20220126-8fe5205b.pth) |
| SVT-base\* | 56.07 | 8.35 | 83.13 | 96.29 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-svt-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-base_3rdparty_8xb128_in1k_20220126-e31cc8e9.pth) |
| SVT-large\* | 99.27 | 14.82 | 83.60 | 96.50 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-svt-large_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-large_3rdparty_16xb64_in1k_20220126-4817645f.pth) |

*Models with \* are converted from [the official repo](https://github.com/Meituan-AutoML/Twins). The config files of these models are provided for validation only; we do not guarantee their training accuracy, and you are welcome to contribute your reproduction results. The validation accuracy differs slightly from the official paper because of the PyTorch version: these results were obtained with PyTorch 1.9, while the official results used PyTorch 1.7.*
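To run one of the converted checkpoints, something like the following sketch should work with the `mmcls.apis` helpers (the local paths are placeholders; substitute the config and checkpoint downloaded from the table above):

```python
from mmcls.apis import inference_model, init_model

# Placeholder paths: point these at the files you downloaded.
config = 'configs/twins/twins-pcpvt-small_8xb128_in1k.py'
checkpoint = 'twins-pcpvt-small_3rdparty_8xb128_in1k_20220126-ef23c132.pth'

model = init_model(config, checkpoint, device='cuda:0')
result = inference_model(model, 'demo/demo.JPEG')
print(result['pred_class'], result['pred_score'])
```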

## Citation

```
@article{chu2021twins,
title={Twins: Revisiting spatial attention design in vision transformers},
author={Chu, Xiangxiang and Tian, Zhi and Wang, Yuqing and Zhang, Bo and Ren, Haibing and Wei, Xiaolin and Xia, Huaxia and Shen, Chunhua},
journal={arXiv preprint arXiv:2104.13840},
year={2021}
}
```
114 changes: 114 additions & 0 deletions configs/twins/metafile.yml
@@ -0,0 +1,114 @@
Collections:
  - Name: Twins
    Metadata:
      Training Data: ImageNet-1k
      Architecture:
        - Global Subsampled Attention
        - Locally Grouped Self-Attention
        - Conditional Position Encoding
        - Pyramid Vision Transformer
    Paper:
      URL: http://arxiv-export-lb.library.cornell.edu/abs/2104.13840
      Title: "Twins: Revisiting the Design of Spatial Attention in Vision Transformers"
    README: configs/twins/README.md
    Code:
      URL: https://github.com/open-mmlab/mmclassification/blob/v0.20.0/mmcls/models/backbones/twins.py
      Version: v0.20.0

Models:
  - Name: twins-pcpvt-small_3rdparty_8xb128_in1k
    Metadata:
      FLOPs: 3670000000  # 3.67G
      Parameters: 24110000  # 24.11M
    In Collection: Twins
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.14
          Top 5 Accuracy: 95.69
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-small_3rdparty_8xb128_in1k_20220126-ef23c132.pth
    Config: configs/twins/twins-pcpvt-small_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vt3p-weights/twins_pcpvt_small-e70e7e7a.pth
      Code: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/twins.py
  - Name: twins-pcpvt-base_3rdparty_8xb128_in1k
    Metadata:
      FLOPs: 6450000000  # 6.45G
      Parameters: 43830000  # 43.83M
    In Collection: Twins
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 82.66
          Top 5 Accuracy: 96.26
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-base_3rdparty_8xb128_in1k_20220126-f8c4b0d5.pth
    Config: configs/twins/twins-pcpvt-base_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vt3p-weights/twins_pcpvt_small-e70e7e7a.pth
      Code: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/twins.py
  - Name: twins-pcpvt-large_3rdparty_16xb64_in1k
    Metadata:
      FLOPs: 9510000000  # 9.51G
      Parameters: 60990000  # 60.99M
    In Collection: Twins
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.09
          Top 5 Accuracy: 96.59
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-large_3rdparty_16xb64_in1k_20220126-c1ef8d80.pth
    Config: configs/twins/twins-pcpvt-large_16xb64_in1k.py
    Converted From:
      Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vt3p-weights/twins_pcpvt_small-e70e7e7a.pth
      Code: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/twins.py
  - Name: twins-svt-small_3rdparty_8xb128_in1k
    Metadata:
      FLOPs: 2820000000  # 2.82G
      Parameters: 24060000  # 24.06M
    In Collection: Twins
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 81.77
          Top 5 Accuracy: 95.57
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-small_3rdparty_8xb128_in1k_20220126-8fe5205b.pth
    Config: configs/twins/twins-svt-small_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vt3p-weights/twins_pcpvt_small-e70e7e7a.pth
      Code: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/twins.py
  - Name: twins-svt-base_3rdparty_8xb128_in1k
    Metadata:
      FLOPs: 8350000000  # 8.35G
      Parameters: 56070000  # 56.07M
    In Collection: Twins
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.13
          Top 5 Accuracy: 96.29
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-base_3rdparty_8xb128_in1k_20220126-e31cc8e9.pth
    Config: configs/twins/twins-svt-base_8xb128_in1k.py
    Converted From:
      Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vt3p-weights/twins_pcpvt_small-e70e7e7a.pth
      Code: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/twins.py
  - Name: twins-svt-large_3rdparty_16xb64_in1k
    Metadata:
      FLOPs: 14820000000  # 14.82G
      Parameters: 99270000  # 99.27M
    In Collection: Twins
    Results:
      - Dataset: ImageNet-1k
        Metrics:
          Top 1 Accuracy: 83.60
          Top 5 Accuracy: 96.50
        Task: Image Classification
    Weights: https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-large_3rdparty_16xb64_in1k_20220126-4817645f.pth
    Config: configs/twins/twins-svt-large_16xb64_in1k.py
    Converted From:
      Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vt3p-weights/twins_pcpvt_small-e70e7e7a.pth
      Code: https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/twins.py
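The FLOPs and parameter counts above can be roughly cross-checked with mmcv's complexity counter. A sketch, with two caveats: `get_model_complexity_info` may not fully account for attention matmuls, and it measures the backbone only, so the parameter count comes in slightly under the table's figure, which includes the classification head:

```python
from mmcv.cnn import get_model_complexity_info

from mmcls.models import build_backbone

backbone = build_backbone(dict(type='PCPVT', arch='small', out_indices=(3, )))
# Returns human-readable strings by default, e.g. '3.67 GFLOPs', '24.11 M'.
flops, params = get_model_complexity_info(backbone, (3, 224, 224))
print(flops, params)  # should land near the metafile's 3.67G / 24.11M
```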
33 changes: 33 additions & 0 deletions configs/twins/twins-pcpvt-base_8xb128_in1k.py
@@ -0,0 +1,33 @@
_base_ = [
    '../_base_/models/twins_pcpvt_base.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

data = dict(samples_per_gpu=128)

paramwise_cfg = dict(_delete_=True, norm_decay_mult=0.0, bias_decay_mult=0.0)

# batch size is 128 per GPU, with 8 GPUs:
# lr = 5e-4 * 128 * 8 / 512 = 0.001
optimizer = dict(
    type='AdamW',
    lr=5e-4 * 128 * 8 / 512,
    weight_decay=0.05,
    eps=1e-8,
    betas=(0.9, 0.999),
    paramwise_cfg=paramwise_cfg)
optimizer_config = dict(_delete_=True, grad_clip=dict(max_norm=5.0))

# learning policy
lr_config = dict(
    policy='CosineAnnealing',
    by_epoch=True,
    min_lr_ratio=1e-2,
    warmup='linear',
    warmup_ratio=1e-3,
    warmup_iters=5,
    warmup_by_epoch=True)

evaluation = dict(interval=1, metric='accuracy')
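The resulting schedule is 5 epochs of linear warmup from `warmup_ratio * lr` up to `lr`, then cosine annealing down to `min_lr_ratio * lr`. A standalone sketch of the per-epoch value, assuming the 300-epoch run defined by the inherited Swin schedule:

```python
import math

base_lr = 5e-4 * 128 * 8 / 512  # 0.001, per the linear scaling rule above
warmup_epochs, total_epochs = 5, 300  # 300 assumed from the Swin base schedule
min_lr = 1e-2 * base_lr  # min_lr_ratio=1e-2


def lr_at(epoch):
    if epoch < warmup_epochs:
        # linear ramp from warmup_ratio * base_lr up to base_lr
        start = 1e-3 * base_lr
        return start + (base_lr - start) * epoch / warmup_epochs
    # cosine annealing over the remaining epochs
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))


for e in (0, 5, 150, 299):
    print(e, f'{lr_at(e):.6f}')
```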
5 changes: 5 additions & 0 deletions configs/twins/twins-pcpvt-large_16xb64_in1k.py
@@ -0,0 +1,5 @@
_base_ = ['twins-pcpvt-base_8xb128_in1k.py']

model = dict(backbone=dict(arch='large'), head=dict(in_channels=512))

data = dict(samples_per_gpu=64)
3 changes: 3 additions & 0 deletions configs/twins/twins-pcpvt-small_8xb128_in1k.py
@@ -0,0 +1,3 @@
_base_ = ['twins-pcpvt-base_8xb128_in1k.py']

model = dict(backbone=dict(arch='small'), head=dict(in_channels=512))
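The small and large variants only override a few fields; mmcv merges nested dicts, so the optimizer, schedule, and data pipeline are all inherited from the base file. A quick sketch to verify the merge:

```python
from mmcv import Config

cfg = Config.fromfile('configs/twins/twins-pcpvt-small_8xb128_in1k.py')
print(cfg.model.backbone.arch)     # 'small', overriding 'base'
print(cfg.model.head.in_channels)  # 512
print(cfg.optimizer.lr)            # 0.001, inherited unchanged
```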
33 changes: 33 additions & 0 deletions configs/twins/twins-svt-base_8xb128_in1k.py
@@ -0,0 +1,33 @@
_base_ = [
    '../_base_/models/twins_svt_base.py',
    '../_base_/datasets/imagenet_bs64_swin_224.py',
    '../_base_/schedules/imagenet_bs1024_adamw_swin.py',
    '../_base_/default_runtime.py'
]

data = dict(samples_per_gpu=128)

paramwise_cfg = dict(_delete_=True, norm_decay_mult=0.0, bias_decay_mult=0.0)

# batch size is 128 per GPU, with 8 GPUs:
# lr = 5e-4 * 128 * 8 / 512 = 0.001
optimizer = dict(
    type='AdamW',
    lr=5e-4 * 128 * 8 / 512,
    weight_decay=0.05,
    eps=1e-8,
    betas=(0.9, 0.999),
    paramwise_cfg=paramwise_cfg)
optimizer_config = dict(_delete_=True, grad_clip=dict(max_norm=5.0))

# learning policy
lr_config = dict(
    policy='CosineAnnealing',
    by_epoch=True,
    min_lr_ratio=1e-2,
    warmup='linear',
    warmup_ratio=1e-3,
    warmup_iters=5,
    warmup_by_epoch=True)

evaluation = dict(interval=1, metric='accuracy')
5 changes: 5 additions & 0 deletions configs/twins/twins-svt-large_16xb64_in1k.py
@@ -0,0 +1,5 @@
_base_ = ['twins-svt-base_8xb128_in1k.py']

data = dict(samples_per_gpu=64)

model = dict(backbone=dict(arch='large'), head=dict(in_channels=1024))
3 changes: 3 additions & 0 deletions configs/twins/twins-svt-small_8xb128_in1k.py
@@ -0,0 +1,3 @@
_base_ = ['twins-svt-base_8xb128_in1k.py']

model = dict(backbone=dict(arch='small'), head=dict(in_channels=512))
6 changes: 6 additions & 0 deletions docs/en/model_zoo.md
@@ -83,6 +83,12 @@ The ResNet family models below are trained by standard data augmentations, i.e.,
 | Conformer-small-p32\* | 38.85 | 7.09 | 81.96 | 96.02 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/conformer/conformer-small-p32_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/conformer/conformer-small-p32_8xb128_in1k_20211206-947a0816.pth) |
 | Conformer-small-p16\* | 37.67 | 10.31 | 83.32 | 96.46 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/conformer/conformer-small-p16_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/conformer/conformer-small-p16_3rdparty_8xb128_in1k_20211206-3065dcf5.pth) |
 | Conformer-base-p16\* | 83.29 | 22.89 | 83.82 | 96.59 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/conformer/conformer-base-p16_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/conformer/conformer-base-p16_3rdparty_8xb128_in1k_20211206-bfdf8637.pth) |
+| PCPVT-small\* | 24.11 | 3.67 | 81.14 | 95.69 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-pcpvt-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-small_3rdparty_8xb128_in1k_20220126-ef23c132.pth) |
+| PCPVT-base\* | 43.83 | 6.45 | 82.66 | 96.26 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-pcpvt-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-base_3rdparty_8xb128_in1k_20220126-f8c4b0d5.pth) |
+| PCPVT-large\* | 60.99 | 9.51 | 83.09 | 96.59 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-pcpvt-large_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-pcpvt-large_3rdparty_16xb64_in1k_20220126-c1ef8d80.pth) |
+| SVT-small\* | 24.06 | 2.82 | 81.77 | 95.57 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-svt-small_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-small_3rdparty_8xb128_in1k_20220126-8fe5205b.pth) |
+| SVT-base\* | 56.07 | 8.35 | 83.13 | 96.29 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-svt-base_8xb128_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-base_3rdparty_8xb128_in1k_20220126-e31cc8e9.pth) |
+| SVT-large\* | 99.27 | 14.82 | 83.60 | 96.50 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/twins/twins-svt-large_16xb64_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/twins/twins-svt-large_3rdparty_16xb64_in1k_20220126-4817645f.pth) |
 | EfficientNet-B0\* | 5.29 | 0.02 | 76.74 | 93.17 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/efficientnet/efficientnet-b0_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b0_3rdparty_8xb32_in1k_20220119-a7e2a0b1.pth) |
 | EfficientNet-B0 (AA)\* | 5.29 | 0.02 | 77.26 | 93.41 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/efficientnet/efficientnet-b0_8xb32_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b0_3rdparty_8xb32-aa_in1k_20220119-8d939117.pth) |
 | EfficientNet-B0 (AA + AdvProp)\* | 5.29 | 0.02 | 77.53 | 93.61 | [config](https://github.com/open-mmlab/mmclassification/blob/master/configs/efficientnet/efficientnet-b0_8xb32-01norm_in1k.py) | [model](https://download.openmmlab.com/mmclassification/v0/efficientnet/efficientnet-b0_3rdparty_8xb32-aa-advprop_in1k_20220119-26434485.pth) |
4 changes: 3 additions & 1 deletion mmcls/models/backbones/__init__.py
@@ -22,6 +22,7 @@
 from .t2t_vit import T2T_ViT
 from .timm_backbone import TIMMBackbone
 from .tnt import TNT
+from .twins import PCPVT, SVT
 from .vgg import VGG
 from .vision_transformer import VisionTransformer

@@ -30,5 +31,6 @@
     'ResNeSt', 'ResNet_CIFAR', 'SEResNet', 'SEResNeXt', 'ShuffleNetV1',
     'ShuffleNetV2', 'MobileNetV2', 'MobileNetV3', 'VisionTransformer',
     'SwinTransformer', 'TNT', 'TIMMBackbone', 'T2T_ViT', 'Res2Net', 'RepVGG',
-    'Conformer', 'MlpMixer', 'DistilledVisionTransformer', 'EfficientNet'
+    'Conformer', 'MlpMixer', 'DistilledVisionTransformer', 'PCPVT', 'SVT',
+    'EfficientNet'
 ]
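With `PCPVT` and `SVT` exported and registered, the new backbones are reachable both by direct import and by `type=` name in configs. A sketch, assuming the class defaults mirror the config files above:

```python
from mmcls.models.backbones import PCPVT, SVT
from mmcls.models.builder import BACKBONES

# Direct instantiation...
pcpvt = PCPVT(arch='small')
# ...or via the registry, which is what type='SVT' resolves to in configs.
svt = BACKBONES.build(dict(type='SVT', arch='small'))
```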