From 1836d22d667825929d3129d76afa3a6a1567f57e Mon Sep 17 00:00:00 2001
From: Kai Han
Date: Mon, 22 May 2023 20:45:58 +0800
Subject: [PATCH] Update readme.md

---
 readme.md | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/readme.md b/readme.md
index 7f1e5b5..9a8168f 100755
--- a/readme.md
+++ b/readme.md
@@ -1,10 +1,11 @@
 # Efficient AI Backbones
 including GhostNet, TNT (Transformer in Transformer), AugViT, WaveMLP and ViG developed by Huawei Noah's Ark Lab.
+- [News](#news)
 - [Model zoo](#model-zoo)
 - [Citation](#citation)
 - [Other versions](#other-versions-of-ghostnet)
 
-**News**
+## News
 
 2022/12/01 The code of NeurIPS 2022 (Spotlight) [GhostNetV2](https://arxiv.org/abs/2211.12905) is released at [./ghostnetv2_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch).
 
@@ -22,25 +23,24 @@ including GhostNet, TNT (Transformer in Transformer), AugViT, WaveMLP and ViG de
 
 2020/10/31 GhostNet+TinyNet achieves better performance. See details in our NeurIPS 2020 paper: [arXiv](https://arxiv.org/abs/2010.14819).
 
----
 
 ## Model zoo
 
-| Model | Paper | Pytorch code | Tensorflow code | MindSpore code |
-| - | - | - | - | - |
-| GhostNet | GhostNet: More Features from Cheap Operations. [[CVPR 2020]](https://arxiv.org/abs/1911.11907) | [./ghostnet_pytorch](https://github.com/huawei-noah/CV-backbones/tree/master/ghostnet_pytorch) | [./ghostnet_tensorflow](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnet_tensorflow) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/ghostnet) |
-| GhostNetV2 | GhostNetV2: Enhance Cheap Operation with Long-Range Attention. [[NeurIPS 2022 Spotlight]](https://arxiv.org/abs/2211.12905) | [./ghostnetv2_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/ghostnetv2) |
-| G-GhostNet | GhostNets on Heterogeneous Devices via Cheap Operations. [[IJCV 2022]](https://arxiv.org/abs/2201.03297) | [./g_ghost_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/g_ghost_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/ghostnet_d) |
-| TinyNet | Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets. [[NeurIPS 2020]](https://arxiv.org/abs/2010.14819) | [./tinynet_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/tinynet_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/tinynet) |
-| TNT | Transformer in Transformer. [[NeurIPS 2021]](https://arxiv.org/abs/2103.00112) | [./tnt_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/tnt_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/TNT) |
-| PyramidTNT | PyramidTNT: Improved Transformer-in-Transformer Baselines with Pyramid Architecture. [[CVPR 2022 Workshop]](https://arxiv.org/abs/2201.00978)| [./tnt_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/tnt_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/TNT) |
-| CMT | CMT: Convolutional Neural Networks Meet Vision Transformers. [[CVPR 2022]](https://arxiv.org/pdf/2107.06263.pdf) | [./cmt_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/cmt_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/CMT) |
-| AugViT | Augmented Shortcuts for Vision Transformers. [[NeurIPS 2021]](https://proceedings.neurips.cc/paper/2021/file/818f4654ed39a1c147d1e51a00ffb4cb-Paper.pdf) | [./augvit_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/augvit_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/augvit) |
-| SNN-MLP | Brain-inspired Multilayer Perceptron with Spiking Neurons. [[CVPR 2022]](https://arxiv.org/pdf/2203.14679.pdf) | [./snnmlp_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/snnmlp_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/snn_mlp) |
-| WaveMLP | An Image Patch is a Wave: Quantum Inspired Vision MLP. [[CVPR 2022]](https://arxiv.org/pdf/2111.12294.pdf) | [./wavemlp_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/wavemlp_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/wave_mlp) |
+| Model | Paper | Pytorch code | MindSpore code |
+| - | - | - | - |
+| GhostNet | GhostNet: More Features from Cheap Operations. [[CVPR 2020]](https://arxiv.org/abs/1911.11907) | [./ghostnet_pytorch](https://github.com/huawei-noah/CV-backbones/tree/master/ghostnet_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/ghostnet) |
+| GhostNetV2 | GhostNetV2: Enhance Cheap Operation with Long-Range Attention. [[NeurIPS 2022 Spotlight]](https://arxiv.org/abs/2211.12905) | [./ghostnetv2_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/ghostnetv2_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/ghostnetv2) |
+| G-GhostNet | GhostNets on Heterogeneous Devices via Cheap Operations. [[IJCV 2022]](https://arxiv.org/abs/2201.03297) | [./g_ghost_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/g_ghost_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/ghostnet_d) |
+| TinyNet | Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets. [[NeurIPS 2020]](https://arxiv.org/abs/2010.14819) | [./tinynet_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/tinynet_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/tinynet) |
+| TNT | Transformer in Transformer. [[NeurIPS 2021]](https://arxiv.org/abs/2103.00112) | [./tnt_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/tnt_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/TNT) |
+| PyramidTNT | PyramidTNT: Improved Transformer-in-Transformer Baselines with Pyramid Architecture. [[CVPR 2022 Workshop]](https://arxiv.org/abs/2201.00978)| [./tnt_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/tnt_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/TNT) |
+| CMT | CMT: Convolutional Neural Networks Meet Vision Transformers. [[CVPR 2022]](https://arxiv.org/pdf/2107.06263.pdf) | [./cmt_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/cmt_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/CMT) |
+| AugViT | Augmented Shortcuts for Vision Transformers. [[NeurIPS 2021]](https://proceedings.neurips.cc/paper/2021/file/818f4654ed39a1c147d1e51a00ffb4cb-Paper.pdf) | [./augvit_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/augvit_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/augvit) |
+| SNN-MLP | Brain-inspired Multilayer Perceptron with Spiking Neurons. [[CVPR 2022]](https://arxiv.org/pdf/2203.14679.pdf) | [./snnmlp_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/snnmlp_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/snn_mlp) |
+| WaveMLP | An Image Patch is a Wave: Quantum Inspired Vision MLP. [[CVPR 2022]](https://arxiv.org/pdf/2111.12294.pdf) | [./wavemlp_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/wavemlp_pytorch) | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/wave_mlp) |
 | ViG | Vision GNN: An Image is Worth Graph of Nodes. [[NeurIPS 2022]](https://arxiv.org/abs/2206.00272) | [./vig_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch) | - | [MindSpore Model Zoo](https://gitee.com/mindspore/models/tree/master/research/cv/ViG) |
-| LegoNet | LegoNet: Efficient Convolutional Neural Networks with Lego Filters. [[ICML 2019]](http://proceedings.mlr.press/v97/yang19c/yang19c.pdf) | [./legonet_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/legonet_pytorch) | - | - |
-| Versatile Filters | Learning Versatile Filters for Efficient Convolutional Neural Networks. [[NeurIPS 2018]](https://papers.nips.cc/paper/7433-learning-versatile-filters-for-efficient-convolutional-neural-networks) | [./versatile_filters](https://github.com/huawei-noah/CV-backbones/tree/master/versatile_filters) | - | - |
+| LegoNet | LegoNet: Efficient Convolutional Neural Networks with Lego Filters. [[ICML 2019]](http://proceedings.mlr.press/v97/yang19c/yang19c.pdf) | [./legonet_pytorch](https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/legonet_pytorch) | - |
+| Versatile Filters | Learning Versatile Filters for Efficient Convolutional Neural Networks. [[NeurIPS 2018]](https://papers.nips.cc/paper/7433-learning-versatile-filters-for-efficient-convolutional-neural-networks) | [./versatile_filters](https://github.com/huawei-noah/CV-backbones/tree/master/versatile_filters) | - |
 
 ## Citation
 ```