Yolov3 performance chart #604
No: one is the COCO mAP averaged over IoU thresholds 0.5-0.95; the other is AP at the single IoU threshold 0.5.
Your 1st chart is for AP, but your 2nd chart is for mAP (or AP50). See the metrics on the MS COCO site: http://cocodataset.org/#detections-leaderboard You should compare with the same metric, mAP (AP50).
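The distinction between the two metrics above can be made concrete with a small sketch. The AP values below are made-up, illustrative numbers (not real YOLOv3 results); the point is only how AP50 and the COCO-averaged mAP relate:

```python
import numpy as np

# Illustrative per-threshold AP values for one detector (made-up numbers,
# not real YOLOv3 results). AP typically drops as the IoU threshold tightens.
iou_thresholds = np.round(np.arange(0.50, 1.00, 0.05), 2)      # 0.50, 0.55, ..., 0.95
ap_per_iou = np.linspace(0.55, 0.10, num=len(iou_thresholds))  # 10 hypothetical AP values

ap50 = float(ap_per_iou[0])        # "AP50": AP at the single IoU threshold 0.5
coco_map = float(ap_per_iou.mean())  # COCO "AP": mean over the ten thresholds

print(f"AP50 = {ap50:.3f}, COCO mAP(.50:.95) = {coco_map:.3f}")
```

Because AP at IoU 0.5 is the most lenient of the ten thresholds, AP50 is always at least as large as the averaged COCO mAP, which is why charts built on the two metrics are not directly comparable.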
Thank you.
@AlexeyAB I want to ask whether the mAP of YOLOv1/YOLOv2 on VOC2007 is calculated as the average over 11 recall points (0.0 to 1.0 in steps of 0.1), or in the same way as the COCO AP50 metric. I know YOLOv3 uses the latter on the COCO dataset, but I am not sure which metric is used on VOC, since the two differ slightly. Thank you.
mAP for YOLO v1/v2/v3 is always calculated for a single IoU threshold. How to get mAP for MS COCO: AlexeyAB#2145 (comment)
@AlexeyAB Thanks for your reply, but now I am more confused about the mAP. Just to show the code for simplicity [^_^]:
I think both ways are right, but I am not sure which one is used by YOLOv1-v3. The IoU threshold is 0.5 for both of them.
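The "two ways" under discussion can be sketched as follows: the PascalVOC-2007-style 11-point interpolated AP versus the all-point area under the interpolated precision-recall curve (the VOC2010+/COCO-AP50 style). This is a minimal sketch of the standard definitions, with a hypothetical sample PR curve, not the code from the comment above:

```python
import numpy as np

def ap_11_point(recall, precision):
    """VOC2007-style AP: mean of the max precision at 11 recall levels 0.0..1.0."""
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        mask = recall >= t
        p = precision[mask].max() if mask.any() else 0.0
        ap += p / 11.0
    return ap

def ap_all_points(recall, precision):
    """VOC2010+/COCO-style AP: area under the interpolated PR curve."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically non-increasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum precision * recall-step over points where recall changes.
    changed = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[changed + 1] - r[changed]) * p[changed + 1]))

# Hypothetical PR curve for illustration only.
recall = np.array([0.1, 0.5, 1.0])
precision = np.array([1.0, 0.6, 0.3])
print(ap_11_point(recall, precision), ap_all_points(recall, precision))
```

On this sample curve the two definitions give noticeably different values, which is why it matters which one a reported "mAP@0.5" number actually uses.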
@muye5 Hi, the earlier repository https://github.com/AlexeyAB/darknet used only 11 recall points for PascalVOC 2007, but currently it allows you to use either approach - more about it: AlexeyAB#2746
The performance plot seems to differ between the paper and your website. Perhaps a Y-axis offset issue.