Attention-boosted CNNs implemented in PaddlePaddle, mainly for image classification tasks.
The implemented architectures are: DCANet, ECANet, GCNet, BAT, RAN, SGE, SA, BAM, CBAM, SE, GE, and SRM.
Paper: DCANet: Learning Connected Attentions for Convolutional Neural Networks, arXiv 2020. Reference code: PyTorch.
Paper: ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks, CVPR 2020. Reference code: PyTorch. (A minimal sketch of the ECA module appears after this list.)
Paper: GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond, arXiv 2019. Reference code: PyTorch.
Paper: Non-Local Neural Networks with Grouped Bilinear Attentional Transforms, CVPR 2020. Reference code: PyTorch.
Paper: Residual Attention Network for Image Classification, CVPR 2017. Reference code: PyTorch.
Paper: Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks, arXiv 2019. Reference code: PyTorch.
Paper: SA-Net: Shuffle Attention for Deep Convolutional Neural Networks, ICASSP 2021. Reference code: PyTorch.
Papers: BAM: Bottleneck Attention Module, BMVC 2018, and CBAM: Convolutional Block Attention Module, ECCV 2018. Reference code: PyTorch.
Paper: SRM: A Style-based Recalibration Module for Convolutional Neural Networks, arXiv 2019. Reference code: PyTorch.
Paper: Squeeze-and-Excitation Networks, CVPR 2018. Reference code: Caffe. (A minimal sketch of the SE block appears after this list.)
Paper: Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks, NeurIPS 2018. Reference code: Caffe.
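To give a flavor of how these channel-attention modules are typically written in PaddlePaddle, here is a minimal sketch of the SE block from Squeeze-and-Excitation Networks. It is an illustrative re-implementation from the paper's description, not necessarily the exact code in this repo; the class name SELayer and the default reduction=16 are assumptions.

```python
import paddle
import paddle.nn as nn

class SELayer(nn.Layer):
    """Squeeze-and-Excitation block (illustrative sketch, not this repo's exact code).

    Squeeze: global average pooling turns an N x C x H x W feature map into an
    N x C channel descriptor. Excite: a two-layer bottleneck MLP ending in a
    sigmoid produces per-channel weights that rescale the input.
    """

    def __init__(self, channels, reduction=16):  # reduction=16 as in the paper
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2D(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c = x.shape[0], x.shape[1]
        w = self.avg_pool(x).reshape([n, c])   # squeeze: N x C descriptor
        w = self.fc(w).reshape([n, c, 1, 1])   # excite: per-channel weights
        return x * w                           # recalibrate by broadcasting
```

ECA-Net replaces SE's bottleneck MLP with a single 1-D convolution over the pooled channel descriptor, so the module adds only k parameters per block. A sketch under the same caveats follows; the kernel-size heuristic with gamma=2 and b=1 follows the paper, while the class name ECALayer is an assumption.

```python
import math
import paddle
import paddle.nn as nn

class ECALayer(nn.Layer):
    """Efficient Channel Attention (illustrative sketch, not this repo's exact code)."""

    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Adaptive kernel size: |(log2(C) + b) / gamma|, rounded up to the nearest odd number.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2D(1)
        self.conv = nn.Conv1D(1, 1, kernel_size=k, padding=k // 2, bias_attr=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        n, c = x.shape[0], x.shape[1]
        y = self.avg_pool(x).reshape([n, 1, c])                # N x 1 x C descriptor
        y = self.sigmoid(self.conv(y)).reshape([n, c, 1, 1])   # per-channel weights
        return x * y

# Usage: the module preserves the input shape.
x = paddle.randn([2, 64, 32, 32])
print(ECALayer(64)(x).shape)  # [2, 64, 32, 32]
```

Both modules keep the input shape, so they can be dropped into a residual block without changing the surrounding architecture.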
More attention-boosted CNNs will be added later.