[Enhance] Support selecting gpu-ids in non-distributed testing #6781

Merged · 25 commits · Dec 24, 2021
Commits
d1fb57f
Update README_zh-CN.md (#6652)
LJoson Dec 2, 2021
bc97116
add abstract and sketch to the CONFIGS/algorithm/README.md (#6654)
Czm369 Dec 2, 2021
a9d32b0
[Fix]fix init when densehead contains dcn (#6625)
jshilong Dec 2, 2021
95a3def
fix init of convfchead (#6624)
jshilong Dec 2, 2021
82a3041
polish docs (#6623)
jshilong Dec 2, 2021
cc8ceea
fix pseudosampler (#6622)
jshilong Dec 2, 2021
339f0ce
[Fix]Add an example of combining swin and one-stage models (#6621)
jshilong Dec 2, 2021
edd8248
add mmhuman3d in readme (#6699)
ZCMax Dec 6, 2021
7277a25
[Fix] Fix init weights in Swin and PVT. (#6663)
RangiLyu Dec 7, 2021
d5f40aa
[Fix] update metafile (#6717)
BIGWangYuDong Dec 8, 2021
2461375
Release YOLOX model (#6698)
hhaAndroid Dec 8, 2021
de60de7
Add 'get_ann_info' to dataset_wrappers (#6526)
zhaoxin111 Dec 8, 2021
3c91d21
[Enchance] Update FAQ docs (#6587)
hhaAndroid Dec 8, 2021
926e457
Support keeping image ratio in the multi-scale training of YOLOX (#6732)
GT9505 Dec 10, 2021
43699a2
[Doc]Add doc for detect_anomalous_params (#6697)
jshilong Dec 10, 2021
c9f4297
Fix dtype bug in base_dense_head
shinya7y Dec 12, 2021
d3d42fd
Support `bbox_clip_border` for the augmentations of YOLOX (#6730)
GT9505 Dec 13, 2021
cc721ee
[Fix] Fix SimOTA with no valid bbox. (#6733)
RangiLyu Dec 13, 2021
c870e8e
[Enhance] support select gpu-ids in testing time
BIGWangYuDong Dec 14, 2021
cd0e6d6
fix conflict
BIGWangYuDong Dec 16, 2021
83eea4b
[Fix] fix link (#6796)
BIGWangYuDong Dec 16, 2021
55d60d1
Merge branch 'dev-v2.20.0' into support-gpus
BIGWangYuDong Dec 16, 2021
a12ae5a
support select gpu-ids in testing time
BIGWangYuDong Dec 22, 2021
0bc8cdd
Merge branch 'master' into support-gpus
BIGWangYuDong Dec 22, 2021
1838f30
minor fix
BIGWangYuDong Dec 24, 2021
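
The feature itself is small: the non-distributed path of tools/test.py gains a way to choose which GPU to run on, mirroring the GPU-selection option the training script already exposes. The sketch below shows how such wiring typically looks; the `--gpu-ids` flag name, its help text, and the `cfg.gpu_ids` field are assumptions inferred from the PR title rather than copied from the merged diff.

```python
# Hedged sketch of GPU selection for non-distributed testing. The option name and
# config field are assumptions; only the general wiring pattern is illustrated.
import argparse

from mmcv import Config


def parse_args():
    parser = argparse.ArgumentParser(description='Test a detector on a selected GPU')
    parser.add_argument('config', help='test config file path')
    parser.add_argument('checkpoint', help='checkpoint file')
    parser.add_argument(
        '--gpu-ids', type=int, nargs='+',
        help='ids of gpus to use (only applicable to non-distributed testing)')
    return parser.parse_args()


def main():
    args = parse_args()
    cfg = Config.fromfile(args.config)
    # Default to GPU 0 when nothing is passed on the command line.
    cfg.gpu_ids = args.gpu_ids if args.gpu_ids is not None else [0]
    # Build the model and dataset as usual, then wrap the model for single-GPU
    # testing; only the first selected id matters in the non-distributed branch:
    # model = mmcv.parallel.MMDataParallel(model, device_ids=cfg.gpu_ids[:1])
    # outputs = single_gpu_test(model, data_loader)


if __name__ == '__main__':
    main()
```

With a flag like this, `python tools/test.py <config> <checkpoint> --eval bbox --gpu-ids 2` would run single-GPU testing on device 2; exporting `CUDA_VISIBLE_DEVICES` remains an equivalent workaround that needs no code change.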
5 changes: 3 additions & 2 deletions README.md
@@ -204,7 +204,8 @@ If you use this toolbox or benchmark in your research, please cite this project.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMOCR](https://github.com/open-mmlab/mmocr): A Comprehensive Toolbox for Text Detection, Recognition and Understanding.
- [MMOCR](https://github.com/open-mmlab/mmocr): A comprehensive toolbox for text detection, recognition and understanding.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab FewShot Learning Toolbox and Benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
4 changes: 3 additions & 1 deletion README_zh-CN.md
@@ -159,7 +159,7 @@ MMDetection 是一个基于 PyTorch 的目标检测开源工具箱。它是 [Ope
请参考[快速入门文档](docs/get_started.md)学习 MMDetection 的基本使用。
我们提供了 [colab 教程](demo/MMDet_Tutorial.ipynb),也为新手提供了完整的运行教程,分别针对[已有数据集](docs/1_exist_data_model.md)和[新数据集](docs/2_new_data_model.md) 完整的使用指南

我们也提供了一些进阶教程,内容覆盖了 [finetune 模型](docs/tutorials/finetune.md),[增加新数据集支持](docs/tutorials/new_dataset.md),[设计新的数据预处理流程](docs/tutorials/data_pipeline.md),[增加自定义模型](ocs/tutorials/customize_models.md),[增加自定义的运行时配置](docs/tutorials/customize_runtime.md),[常用工具和脚本](docs/useful_tools.md)。
我们也提供了一些进阶教程,内容覆盖了 [finetune 模型](docs/tutorials/finetune.md),[增加新数据集支持](docs/tutorials/customize_dataset.md),[设计新的数据预处理流程](docs/tutorials/data_pipeline.md),[增加自定义模型](docs/tutorials/customize_models.md),[增加自定义的运行时配置](docs/tutorials/customize_runtime.md),[常用工具和脚本](docs/useful_tools.md)。

如果遇到问题,请参考 [常见问题解答](docs_zh-CN/faq.md)。

@@ -204,6 +204,8 @@ MMDetection 是一款由来自不同高校和企业的研发人员共同参与
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab 全流程文字检测识别理解工具包
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab 图片视频生成模型工具箱
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab 光流估计工具箱与测试基准
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab 少样本学习工具箱与测试基准
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 人体参数化模型工具箱与测试基准

## 欢迎加入 OpenMMLab 社区

16 changes: 16 additions & 0 deletions configs/albu_example/README.md
@@ -1,5 +1,21 @@
# Albu Example

## Abstract

<!-- [ABSTRACT] -->

Data augmentation is a commonly used technique for increasing both the size and the diversity of labeled training sets by leveraging input transformations that preserve output labels. In computer vision domain, image augmentations have become a common implicit regularization technique to combat overfitting in deep convolutional neural networks and are ubiquitously used to improve performance. While most deep learning frameworks implement basic image transformations, the list is typically limited to some variations and combinations of flipping, rotating, scaling, and cropping. Moreover, the image processing speed varies in existing tools for image augmentation. We present Albumentations, a fast and flexible library for image augmentations with many various image transform operations available, that is also an easy-to-use wrapper around other augmentation libraries. We provide examples of image augmentations for different computer vision tasks and show that Albumentations is faster than other commonly used image augmentation tools on most of the commonly used image transformations.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143870703-74f3ea3f-ae23-4035-9856-746bc3f88464.png" height="400" />
</div>

<!-- [PAPER_TITLE: Albumentations: fast and flexible image augmentations] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1809.06839] -->

## Citation

<!-- [OTHERS] -->

```
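
For readers of the Albumentations abstract above, a minimal stand-alone pipeline looks like the sketch below. The transforms and probabilities are arbitrary illustrations rather than the ones wrapped by this example config, and `bbox_params` shows how box labels are kept in sync with the augmented image.

```python
# Illustrative Albumentations pipeline; transform choices and parameters are arbitrary.
import albumentations as A
import numpy as np

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=0.5),
        A.RandomBrightnessContrast(p=0.2),
    ],
    # Boxes in (x_min, y_min, x_max, y_max) format are transformed together with the image.
    bbox_params=A.BboxParams(format='pascal_voc', label_fields=['labels']),
)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy HxWxC image
out = transform(image=image, bboxes=[(10, 20, 200, 300)], labels=[1])
print(out['image'].shape, out['bboxes'])
```

In mmdet configs this library is reached through the `Albu` pipeline transform, which is what the config in this folder demonstrates.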
16 changes: 15 additions & 1 deletion configs/atss/README.md
@@ -1,6 +1,20 @@
# Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection

## Introduction
## Abstract

<!-- [ABSTRACT] -->

Object detection has been dominated by anchor-based detectors for several years. Recently, anchor-free detectors have become popular due to the proposal of FPN and Focal Loss. In this paper, we first point out that the essential difference between anchor-based and anchor-free detection is actually how to define positive and negative training samples, which leads to the performance gap between them. If they adopt the same definition of positive and negative samples during training, there is no obvious difference in the final performance, no matter regressing from a box or a point. This shows that how to select positive and negative training samples is important for current object detectors. Then, we propose an Adaptive Training Sample Selection (ATSS) to automatically select positive and negative samples according to statistical characteristics of object. It significantly improves the performance of anchor-based and anchor-free detectors and bridges the gap between them. Finally, we discuss the necessity of tiling multiple anchors per location on the image to detect objects. Extensive experiments conducted on MS COCO support our aforementioned analysis and conclusions. With the newly introduced ATSS, we improve state-of-the-art detectors by a large margin to 50.7% AP without introducing any overhead.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143870776-c81168f5-e8b2-44ee-978b-509e4372c5c9.png"/>
</div>

<!-- [PAPER_TITLE: Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1912.02424] -->

## Citation

<!-- [ALGORITHM] -->

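
As a pointer for the ATSS abstract above: the adaptive selection it describes amounts to thresholding candidate anchors at the mean plus the standard deviation of their IoUs with a ground-truth box. Below is a rough NumPy sketch for a single ground truth on a single feature level, skipping the per-level top-k and the center-inside-box check; the function name and shapes are illustrative, not mmdet's assigner API.

```python
# Simplified ATSS-style positive selection for one ground-truth box.
import numpy as np


def atss_positives(ious, center_dists, topk=9):
    """ious: (A,) anchor-vs-GT IoUs; center_dists: (A,) anchor-center to GT-center distances."""
    candidates = np.argsort(center_dists)[:topk]       # the closest anchors become candidates
    cand_ious = ious[candidates]
    thr = cand_ious.mean() + cand_ious.std()           # adaptive, per-GT IoU threshold
    positives = candidates[cand_ious >= thr]           # candidates at/above the threshold are positive
    mask = np.zeros(ious.shape[0], dtype=bool)
    mask[positives] = True
    return mask


# Toy call with random numbers, just to show the interface.
rng = np.random.default_rng(0)
print(atss_positives(rng.uniform(0, 0.8, 100), rng.uniform(0, 50, 100)).sum(), 'positive anchors')
```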
16 changes: 15 additions & 1 deletion configs/autoassign/README.md
@@ -1,6 +1,20 @@
# AutoAssign: Differentiable Label Assignment for Dense Object Detection

## Introduction
## Abstract

<!-- [ABSTRACT] -->

Determining positive/negative samples for object detection is known as label assignment. Here we present an anchor-free detector named AutoAssign. It requires little human knowledge and achieves appearance-aware through a fully differentiable weighting mechanism. During training, to both satisfy the prior distribution of data and adapt to category characteristics, we present Center Weighting to adjust the category-specific prior distributions. To adapt to object appearances, Confidence Weighting is proposed to adjust the specific assign strategy of each instance. The two weighting modules are then combined to generate positive and negative weights to adjust each location's confidence. Extensive experiments on the MS COCO show that our method steadily surpasses other best sampling strategies by large margins with various backbones. Moreover, our best model achieves 52.1% AP, outperforming all existing one-stage detectors. Besides, experiments on other datasets, e.g., PASCAL VOC, Objects365, and WiderFace, demonstrate the broad applicability of AutoAssign.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143870875-33567e44-0584-4470-9a90-0df0fb6c1fe2.png"/>
</div>

<!-- [PAPER_TITLE: AutoAssign: Differentiable Label Assignment for Dense Object Detection] -->
<!-- [PAPER_URL: https://arxiv.org/abs/2007.03496] -->

## Citation

<!-- [ALGORITHM] -->

16 changes: 15 additions & 1 deletion configs/carafe/README.md
@@ -1,6 +1,20 @@
# CARAFE: Content-Aware ReAssembly of FEatures

## Introduction
## Abstract

<!-- [ABSTRACT] -->

Feature upsampling is a key operation in a number of modern convolutional network architectures, e.g. feature pyramids. Its design is critical for dense prediction tasks such as object detection and semantic/instance segmentation. In this work, we propose Content-Aware ReAssembly of FEatures (CARAFE), a universal, lightweight and highly effective operator to fulfill this goal. CARAFE has several appealing properties: (1) Large field of view. Unlike previous works (e.g. bilinear interpolation) that only exploit sub-pixel neighborhood, CARAFE can aggregate contextual information within a large receptive field. (2) Content-aware handling. Instead of using a fixed kernel for all samples (e.g. deconvolution), CARAFE enables instance-specific content-aware handling, which generates adaptive kernels on-the-fly. (3) Lightweight and fast to compute. CARAFE introduces little computational overhead and can be readily integrated into modern network architectures. We conduct comprehensive evaluations on standard benchmarks in object detection, instance/semantic segmentation and inpainting. CARAFE shows consistent and substantial gains across all the tasks (1.2%, 1.3%, 1.8%, 1.1db respectively) with negligible computational overhead. It has great potential to serve as a strong building block for future research.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143872016-48225685-0e59-49cf-bd65-a50ee04ca8a2.png"/>
</div>

<!-- [PAPER_TITLE: CARAFE: Content-Aware ReAssembly of FEatures] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1905.02188] -->

## Citation

<!-- [ALGORITHM] -->

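
To ground the CARAFE abstract above, the sketch below is a deliberately naive PyTorch rendering of the operator it describes: a small head predicts a softmax-normalized k×k reassembly kernel for every output location, and each output pixel is a content-aware weighted sum of its input neighborhood. Module and argument names are invented for illustration; this is not mmdet's optimized CUDA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NaiveCARAFE(nn.Module):
    """Content-aware upsampling: predict per-location kernels, then reassemble neighborhoods."""

    def __init__(self, channels, scale=2, k_up=5, k_enc=3, mid_channels=64):
        super().__init__()
        self.scale, self.k_up = scale, k_up
        self.compress = nn.Conv2d(channels, mid_channels, 1)                  # channel compressor
        self.encode = nn.Conv2d(mid_channels, scale * scale * k_up * k_up,
                                k_enc, padding=k_enc // 2)                    # kernel prediction

    def forward(self, x):
        b, c, h, w = x.shape
        s, k = self.scale, self.k_up
        # 1) Predict one k*k reassembly kernel per *output* location, softmax-normalized.
        kernels = self.encode(self.compress(x))                               # (b, s*s*k*k, h, w)
        kernels = F.softmax(F.pixel_shuffle(kernels, s), dim=1)               # (b, k*k, s*h, s*w)
        # 2) Gather the k*k neighborhood of every input location and reassemble.
        feats = F.unfold(x, k, padding=k // 2).view(b, c * k * k, h, w)
        feats = F.interpolate(feats, scale_factor=s, mode='nearest')          # map outputs to sources
        feats = feats.view(b, c, k * k, s * h, s * w)
        return (feats * kernels.unsqueeze(1)).sum(dim=2)                      # (b, c, s*h, s*w)


print(NaiveCARAFE(16)(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 16, 16, 16])
```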
55 changes: 55 additions & 0 deletions configs/carafe/metafile.yml
@@ -0,0 +1,55 @@
Collections:
  - Name: CARAFE
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Architecture:
        - RPN
        - FPN_CARAFE
        - ResNet
        - RoIPool
    Paper:
      URL: https://arxiv.org/abs/1905.02188
      Title: 'CARAFE: Content-Aware ReAssembly of FEatures'
    README: configs/carafe/README.md
    Code:
      URL: https://github.com/open-mmlab/mmdetection/blob/v2.12.0/mmdet/models/necks/fpn_carafe.py#L11
      Version: v2.12.0

Models:
  - Name: faster_rcnn_r50_fpn_carafe_1x_coco
    In Collection: CARAFE
    Config: configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py
    Metadata:
      Training Memory (GB): 4.26
      Epochs: 12
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 38.6
      - Task: Instance Segmentation
        Dataset: COCO
        Metrics:
          mask AP: 38.6
    Weights: https://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth

  - Name: mask_rcnn_r50_fpn_carafe_1x_coco
    In Collection: CARAFE
    Config: configs/carafe/mask_rcnn_r50_fpn_carafe_1x_coco.py
    Metadata:
      Training Memory (GB): 4.31
      Epochs: 12
    Results:
      - Task: Object Detection
        Dataset: COCO
        Metrics:
          box AP: 39.3
      - Task: Instance Segmentation
        Dataset: COCO
        Metrics:
          mask AP: 35.6
    Weights: https://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.393__segm_mAP-0.358_20200503_135957-8687f195.pth
16 changes: 15 additions & 1 deletion configs/cascade_rcnn/README.md
@@ -1,6 +1,20 @@
# Cascade R-CNN: High Quality Object Detection and Instance Segmentation

## Introduction
## Abstract

<!-- [ABSTRACT] -->

In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its quality. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of a detector as training set for the next. This resampling progressively improves hypotheses quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143872197-d99b90e4-4f05-4329-80a4-327ac862a051.png"/>
</div>

<!-- [PAPER_TITLE: Cascade R-CNN: High Quality Object Detection and Instance Segmentation] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1906.09756] -->

## Citation

<!-- [ALGORITHM] -->

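
The cascade the abstract above describes shows up in mmdet configs as a list of R-CNN training stages with increasing assigner IoU thresholds. The trimmed excerpt below is illustrative: the key layout follows the usual MaxIoUAssigner/RandomSampler pattern, but it is not a verbatim copy of the shipped config.

```python
# Illustrative cascade training config: three box heads, each trained with a
# stricter IoU definition of "positive" than the previous stage (0.5 -> 0.6 -> 0.7).
train_cfg = dict(
    rcnn=[
        dict(  # stage 1
            assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5, min_pos_iou=0.5),
            sampler=dict(type='RandomSampler', num=512, pos_fraction=0.25)),
        dict(  # stage 2, trained on the (better) boxes produced by stage 1
            assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.6, neg_iou_thr=0.6, min_pos_iou=0.6),
            sampler=dict(type='RandomSampler', num=512, pos_fraction=0.25)),
        dict(  # stage 3, the highest-quality head, also used at inference
            assigner=dict(type='MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.7, min_pos_iou=0.7),
            sampler=dict(type='RandomSampler', num=512, pos_fraction=0.25)),
    ])
```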
18 changes: 17 additions & 1 deletion configs/cascade_rpn/README.md
@@ -1,4 +1,20 @@
# Cascade RPN
# Cascade RPN: Delving into High-Quality Region Proposal Network with Adaptive Convolution

## Abstract

<!-- [ABSTRACT] -->

This paper considers an architecture referred to as Cascade Region Proposal Network (Cascade RPN) for improving the region-proposal quality and detection performance by systematically addressing the limitation of the conventional RPN that heuristically defines the anchors and aligns the features to the anchors. First, instead of using multiple anchors with predefined scales and aspect ratios, Cascade RPN relies on a single anchor per location and performs multi-stage refinement. Each stage is progressively more stringent in defining positive samples by starting out with an anchor-free metric followed by anchor-based metrics in the ensuing stages. Second, to attain alignment between the features and the anchors throughout the stages, adaptive convolution is proposed that takes the anchors in addition to the image features as its input and learns the sampled features guided by the anchors. A simple implementation of a two-stage Cascade RPN achieves AR 13.4 points higher than that of the conventional RPN, surpassing any existing region proposal methods. When adopting to Fast R-CNN and Faster R-CNN, Cascade RPN can improve the detection mAP by 3.1 and 3.5 points, respectively.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143872368-1580193a-d19c-4723-a579-c7ed2d5da4d1.png"/>
</div>

<!-- [PAPER_TITLE: Cascade RPN: Delving into High-Quality Region Proposal Network with Adaptive Convolution] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1909.06720] -->

## Citation

<!-- [ALGORITHM] -->

18 changes: 16 additions & 2 deletions configs/centernet/README.md
@@ -1,6 +1,20 @@
# CenterNet
# Objects as Points

## Introduction
## Abstract

<!-- [ABSTRACT] -->

Detection identifies objects as axis-aligned boxes in an image. Most successful object detectors enumerate a nearly exhaustive list of potential object locations and classify each. This is wasteful, inefficient, and requires additional post-processing. In this paper, we take a different approach. We model an object as a single point --- the center point of its bounding box. Our detector uses keypoint estimation to find center points and regresses to all other object properties, such as size, 3D location, orientation, and even pose. Our center point based approach, CenterNet, is end-to-end differentiable, simpler, faster, and more accurate than corresponding bounding box based detectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO dataset, with 28.1% AP at 142 FPS, 37.4% AP at 52 FPS, and 45.1% AP with multi-scale testing at 1.4 FPS. We use the same approach to estimate 3D bounding box in the KITTI benchmark and human pose on the COCO keypoint dataset. Our method performs competitively with sophisticated multi-stage methods and runs in real-time.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143873810-85ffa6e7-915b-46a4-9b8f-709e5d7700bb.png"/>
</div>

<!-- [PAPER_TITLE: Objects as Points] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1904.07850] -->

## Citation

<!-- [ALGORITHM] -->

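
The "objects as points" formulation in the abstract above decodes detections from peaks of a per-class center heatmap, so a 3×3 max-pool can stand in for NMS. A hedged PyTorch sketch with invented names (not mmdet's CenterNetHead API):

```python
# Keep only local maxima of the heatmap, then read off the top-k peaks.
import torch
import torch.nn.functional as F


def extract_peaks(heatmap, k=100):
    """heatmap: (B, C, H, W) scores in [0, 1]. Returns per-image top-k scores, classes, ys, xs."""
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = heatmap * (pooled == heatmap).float()        # non-maxima are zeroed out
    b, c, h, w = peaks.shape
    scores, idx = peaks.view(b, -1).topk(k)              # flat index encodes (class, y, x)
    classes = idx // (h * w)
    ys = (idx % (h * w)) // w
    xs = idx % w
    return scores, classes, ys, xs


scores, classes, ys, xs = extract_peaks(torch.rand(1, 80, 128, 128))
print(scores.shape, classes.shape)  # torch.Size([1, 100]) torch.Size([1, 100])
```

Box sizes and center offsets are regressed at the same peak locations in the full model; this sketch covers only the peak extraction the abstract alludes to.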
18 changes: 16 additions & 2 deletions configs/centripetalnet/README.md
@@ -1,6 +1,20 @@
# CentripetalNet
# CentripetalNet: Pursuing High-quality Keypoint Pairs for Object Detection

## Introduction
## Abstract

<!-- [ABSTRACT] -->

Keypoint-based detectors have achieved pretty-well performance. However, incorrect keypoint matching is still widespread and greatly affects the performance of the detector. In this paper, we propose CentripetalNet which uses centripetal shift to pair corner keypoints from the same instance. CentripetalNet predicts the position and the centripetal shift of the corner points and matches corners whose shifted results are aligned. Combining position information, our approach matches corner points more accurately than the conventional embedding approaches do. Corner pooling extracts information inside the bounding boxes onto the border. To make this information more aware at the corners, we design a cross-star deformable convolution network to conduct feature adaption. Furthermore, we explore instance segmentation on anchor-free detectors by equipping our CentripetalNet with a mask prediction module. On MS-COCO test-dev, our CentripetalNet not only outperforms all existing anchor-free detectors with an AP of 48.0% but also achieves comparable performance to the state-of-the-art instance segmentation approaches with a 40.2% MaskAP.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143873955-42804e0e-3638-4c5b-8bf4-ac8133bbcdc8.png"/>
</div>

<!-- [PAPER_TITLE: CentripetalNet: Pursuing High-quality Keypoint Pairs for Object Detection] -->
<!-- [PAPER_URL: https://arxiv.org/abs/2003.09119] -->

## Citation

<!-- [ALGORITHM] -->

19 changes: 18 additions & 1 deletion configs/cityscapes/README.md
@@ -1,4 +1,21 @@
# Cityscapes Dataset
# The Cityscapes Dataset for Semantic Urban Scene Understanding

## Abstract

<!-- [ABSTRACT] -->

Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes.
To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/40661020/143874154-db4484a5-9211-41f6-852a-b7f0a8c9ec26.png"/>
</div>

<!-- [PAPER_TITLE: The Cityscapes Dataset for Semantic Urban Scene Understanding] -->
<!-- [PAPER_URL: https://arxiv.org/abs/1604.01685] -->

## Citation

<!-- [DATASET] -->
