diff --git a/README.md b/README.md
index 6245196..08e03cb 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # Attention-guided Context Feature Pyramid Network for Object Detection
 
-This repository is written by Junxu Cao at UC San Diego, on the base of [Detectron](https://github.com/facebookresearch/Detectron) @ [e8942c8](https://github.com/facebookresearch/Detectron/tree/e8942c882abf6e28fe68a626ec55028c9bdfe1cf).
+This repository is written by Junxu Cao, based on [Detectron](https://github.com/facebookresearch/Detectron) @ [e8942c8](https://github.com/facebookresearch/Detectron/tree/e8942c882abf6e28fe68a626ec55028c9bdfe1cf).
 
 ## Introduction
 
@@ -8,6 +8,75 @@ This repository re-implements AC-FPN on the base of [Detectron](https://github.c
 
 Please follow [Detectron](https://github.com/facebookresearch/Detectron) on how to install and use Detectron-Cascade-RCNN.
 
+## AC-FPN
+
+Because of the proposed architecture, we achieve better performance, especially on larger objects.
+![architecture](pics/architecture.jpg)
+
+Object detection results:
+![detection](pics/detection_samples.png)
+
+Instance segmentation results:
+![segmentation](pics/instance_samples.png)
+
+More details can be found in the paper.
+
+## Benchmarking
+
+AC-FPN can be readily plugged into existing FPN-based models and improves their performance.
+![segmentation](pics/paper_result.png)
+
+This repo releases only the CEM module (without the AM module), yet it achieves higher performance than the PyTorch implementation reported in the paper; a toy sketch of the CEM idea follows the table below.
+Also, thanks to the power of Detectron, this repo is faster in both training and inference.
+
+The result on COCO test-dev (team Neptune):
+![rank](pics/rank.png)
+
+### Mask R-CNN with Bells & Whistles
+
+| backbone | type | lr schd | im/gpu | box AP | box AP50 | box AP75 |
+| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
+| X-152-32x8d-FPN-IN5k-baseline | Masks | 1x | 1 | 48.1 | 68.3 | 52.9 |
+| X-152-32x8d-FPN-IN5k-cascade | Masks | 1x | 1 | 50.2 | 68.2 | 55.0 |
+| X-152-32x8d-FPN-IN5k-acfpn (just CEM) | Masks | 1x | 1 | 51.9 | 70.4 | 57.0 |
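+
+To make the CEM idea above more concrete, here is a minimal PyTorch-style sketch of a densely connected dilated-convolution context module. The class name, channel widths, and dilation rates are illustrative assumptions for this README only; they do not mirror the Detectron (Caffe2) implementation released in this repo.
+
+```python
+# Illustrative sketch only: a densely connected dilated-convolution context
+# module in the spirit of CEM. Names and hyper-parameters are assumptions,
+# not this repo's Detectron (Caffe2) code.
+import torch
+import torch.nn as nn
+
+
+class DenseDilatedContext(nn.Module):
+    def __init__(self, in_channels=2048, growth=256, dilations=(3, 6, 12, 18, 24)):
+        super().__init__()
+        self.blocks = nn.ModuleList()
+        channels = in_channels
+        for d in dilations:
+            # Each block sees the input concatenated with all previous outputs.
+            self.blocks.append(nn.Sequential(
+                nn.Conv2d(channels, growth, kernel_size=3, padding=d, dilation=d),
+                nn.ReLU(inplace=True),
+            ))
+            channels += growth
+        # Fuse the densely collected context back to the FPN channel width.
+        self.fuse = nn.Conv2d(channels, 256, kernel_size=1)
+
+    def forward(self, x):
+        feats = [x]
+        for block in self.blocks:
+            feats.append(block(torch.cat(feats, dim=1)))
+        return self.fuse(torch.cat(feats, dim=1))
+
+
+if __name__ == "__main__":
+    # Enrich the coarsest backbone feature map (e.g. C5) before the top-down FPN path.
+    c5 = torch.randn(1, 2048, 25, 38)
+    print(DenseDilatedContext()(c5).shape)  # torch.Size([1, 256, 25, 38])
+```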
+
 ## Citation
 
 If you use our code/model/data, please cite our paper:
diff --git a/pics/architecture.jpg b/pics/architecture.jpg
new file mode 100644
index 0000000..9105166
Binary files /dev/null and b/pics/architecture.jpg differ
diff --git a/pics/detection_samples.png b/pics/detection_samples.png
new file mode 100644
index 0000000..c9b2026
Binary files /dev/null and b/pics/detection_samples.png differ
diff --git a/pics/instance_samples.png b/pics/instance_samples.png
new file mode 100644
index 0000000..d350d8b
Binary files /dev/null and b/pics/instance_samples.png differ
diff --git a/pics/paper_result.png b/pics/paper_result.png
new file mode 100644
index 0000000..d931070
Binary files /dev/null and b/pics/paper_result.png differ
diff --git a/pics/rank.png b/pics/rank.png
new file mode 100644
index 0000000..50c16fc
Binary files /dev/null and b/pics/rank.png differ