mAP (mean average precision) calculation for different Datasets (MSCOCO, ImageNet, PascalVOC) #2746
Comments
@AlexeyAB Quick question: did you verify that your mAP script's result matches the one reported in the YOLOv3 paper on the COCO test-dev set? Thanks!
@pjspillai Hi, there are no public annotations for COCO test-dev, so there is nothing to compare against locally. How to check Yolo v3 on the COCO test-dev set via the evaluation server is described here: #2145. You can check the yolov3-spp.weights model on the COCO 2014 val-set.
@AlexeyAB Hi, thanks!
Here we provide YOLOv3-320 evaluation results on COCO val 2017, computed by this implementation and by the cocoapi (https://github.com/cocodataset/cocoapi) respectively. It seems that there are some differences between them for AP50 & AP75. Thanks for your attention.
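For reference, evaluating a detection results file against COCO val 2017 with the official cocoapi typically looks like the minimal sketch below; the annotation and results file names are placeholders, not taken from this thread.

```python
# Minimal cocoapi (pycocotools) bbox evaluation sketch.
# File names are placeholders; use your own annotation and results files.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("yolov3_detections.json")   # detections in COCO results format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[.5:.95], AP50, AP75, AR, etc.
```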
@chenjoya Hi,
There are additional params (`-points` and `-iou_thresh`, described in the issue text below) that change how the mAP is calculated, so some difference from the cocoapi numbers is expected.
@AlexeyAB Can you please confirm the command for running mAP (or evaluation, for that matter) on PASCAL-VOC 07? I see you have a command listed, but I run into problems when I try to run it. Can you please confirm the command to run on Pascal VOC 2007?
There is no trained Yolo v3 model for Pascal VOC. There is a Yolo v2 model for Pascal VOC: https://github.com/AlexeyAB/darknet#pre-trained-models
How to check the mAP on Pascal VOC: https://github.com/AlexeyAB/darknet#how-to-calculate-map-on-pascalvoc-2007
Thank you for the quick reply. Cheers!
Different approaches of mAP (mean average precision) calculation:

- `-points 101` for MS COCO
- `-points 11` for PascalVOC 2007 (uncomment `difficult` in voc.data)
- `-points 0` for ImageNet, PascalVOC 2010-2012 and your custom dataset

For example:
- use this command to calculate mAP@0.5 for ImageNet, PascalVOC 2010-2012 and your custom dataset: `./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights`
- use this command to calculate mAP@0.5 for the PascalVOC 2007 dataset: `./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights -points 11`
- use this command to calculate mAP@0.5 for the MSCOCO dataset: `./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights -points 101 -iou_thresh 0.5`
- use these commands to calculate mAP@[.5, .95] for the MSCOCO dataset: run `./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights -points 101 -iou_thresh X` once for each IoU threshold X = 0.50, 0.55, ..., 0.95 (a scripted version of this loop is sketched below). Then calculate:

AP@[.5, .95] = mAP@IoU=0.50:0.05:0.95 = (mAP@IoU=0.50 + mAP@IoU=0.55 + mAP@IoU=0.60 + mAP@IoU=0.65 + mAP@IoU=0.70 + mAP@IoU=0.75 + mAP@IoU=0.80 + mAP@IoU=0.85 + mAP@IoU=0.90 + mAP@IoU=0.95) / 10
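A minimal Python sketch of that per-threshold loop is below. The command and flags are the ones from this issue; the output parsing is an assumption, since it expects darknet to print a line like `mean average precision (mAP@0.50) = 0.6084...`, so adjust the regex to whatever your build actually prints.

```python
# Sketch: run `darknet detector map` once per IoU threshold and average the results.
# ASSUMPTION: darknet prints a line like "mean average precision (mAP@0.50) = 0.6084";
# adjust the regex below if your build prints a different format.
import re
import subprocess

CMD = ["./darknet", "detector", "map",
       "cfg/coco.data", "cfg/yolov3-spp.cfg", "yolov3-spp.weights",
       "-points", "101"]

maps = []
for i in range(10):  # IoU thresholds 0.50, 0.55, ..., 0.95
    iou = 0.50 + 0.05 * i
    out = subprocess.run(CMD + ["-iou_thresh", f"{iou:.2f}"],
                         capture_output=True, text=True).stdout
    m = re.search(r"mean average precision \(mAP@[0-9.]+\)\s*=\s*([0-9.]+)", out)
    if m is None:
        raise RuntimeError(f"could not find an mAP value in the output for IoU {iou:.2f}")
    maps.append(float(m.group(1)))
    print(f"mAP@IoU={iou:.2f}: {maps[-1]:.4f}")

print(f"AP@[.5, .95] = {sum(maps) / len(maps):.4f}")  # average over the 10 thresholds
```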
I.e. (a Python sketch of these three interpolation schemes is shown after the links below):

- MS COCO uses 101 points on the Precision-Recall curve: https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py#L507-L508
- PascalVOC 2007 uses 11 points on the Precision-Recall curve: https://github.com/rbgirshick/py-faster-rcnn/blob/781a917b378dbfdedb45b6a56189a31982da1b43/lib/datasets/voc_eval.py#L37-L45
- PascalVOC 2010-2012 and ImageNet use each unique point on the Precision-Recall curve, i.e. they calculate the Area Under the Curve without approximation: https://github.com/rbgirshick/py-faster-rcnn/blob/781a917b378dbfdedb45b6a56189a31982da1b43/lib/datasets/voc_eval.py#L46-L61

URLs:
https://mc.ai/which-one-to-measure-the-performance-of-object-detectors-ap-or-olrp/
http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:ap
https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173
https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
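To make the difference between these three schemes concrete, here is a short Python sketch (not the darknet code itself) that computes AP for one class's precision/recall curve with 11-point, 101-point and exact-AUC interpolation, following the logic of the cocoeval.py and voc_eval.py code linked above; the toy recall/precision values at the bottom are made up.

```python
# Sketch of the three AP interpolation schemes on one precision/recall curve.
# `recall` must be sorted in increasing order; both arrays come from a ranked
# list of detections for a single class (the toy values below are made up).
import numpy as np

def precision_envelope(precision):
    """Make precision monotonically non-increasing from right to left."""
    return np.maximum.accumulate(precision[::-1])[::-1]

def ap_11_point(recall, precision):
    """PascalVOC 2007: average of max precision at recall >= 0.0, 0.1, ..., 1.0."""
    ap = 0.0
    for t in np.arange(0.0, 1.1, 0.1):
        mask = recall >= t
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return ap

def ap_101_point(recall, precision):
    """MS COCO: sample the precision envelope at 101 recall thresholds."""
    pre = precision_envelope(precision)
    thresholds = np.linspace(0.0, 1.0, 101)
    idx = np.searchsorted(recall, thresholds, side="left")
    sampled = np.where(idx < len(pre), pre[np.minimum(idx, len(pre) - 1)], 0.0)
    return sampled.mean()

def ap_auc(recall, precision):
    """PascalVOC 2010-2012 / ImageNet: exact area under the interpolated curve."""
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = precision_envelope(np.concatenate(([0.0], precision, [0.0])))
    changed = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[changed + 1] - mrec[changed]) * mpre[changed + 1])

# Toy curve, just to show that the three schemes give different numbers.
recall    = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
precision = np.array([1.0, 1.0, 0.8, 0.75, 0.6, 0.55, 0.5])
print("11-point :", ap_11_point(recall, precision))
print("101-point:", ap_101_point(recall, precision))
print("exact AUC:", ap_auc(recall, precision))
```

The three numbers differ slightly on the same curve, which is exactly why the `-points` flag matters when comparing against results produced by the cocoapi or the VOC devkit.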