Yolov3 performance chart #604

Closed

apiszcz opened this issue Mar 30, 2018 · 7 comments


apiszcz commented Mar 30, 2018

The performance plot seems to vary between the paper and your web site. Perhaps a Y offset issue.

[image: 1st chart]

[image: 2nd chart]

pjreddie (Owner) commented

nope, one is COCO mAP averaged over .5-.95 IOU, the other is just at .5 IOU.
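
A minimal sketch of that difference, with made-up per-threshold AP values (not from this thread, illustration only):

import numpy as np

# Hypothetical AP evaluated at each of COCO's ten IoU thresholds 0.50, 0.55, ..., 0.95
ap_per_iou = np.array([0.55, 0.52, 0.48, 0.44, 0.39, 0.33, 0.26, 0.18, 0.10, 0.03])

ap50 = ap_per_iou[0]         # AP at IoU = 0.5 only
coco_ap = ap_per_iou.mean()  # COCO's primary metric: AP averaged over IoU 0.5-0.95
print(ap50, coco_ap)         # 0.55 vs ~0.33: same detector, very different numbers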

AlexeyAB (Collaborator) commented

@apiszcz

Your 1st chart is for AP, but your 2nd chart is for mAP (or AP50); see the metrics on the MS COCO site: http://cocodataset.org/#detections-leaderboard

You should compare using the same metric - mAP (AP50):

  1. Table 1 (e): https://arxiv.org/pdf/1708.02002.pdf

    • depth=50 scale=500 AP50=50.9 time=72ms
    • depth=101 scale=500 AP50=53.1 time=90ms
    • depth=101 scale=800 AP50=57.5 time=198ms
  2. Figure 3: https://pjreddie.com/media/files/papers/YOLOv3.pdf


[screenshot: Table 1 (e) from https://arxiv.org/pdf/1708.02002.pdf]

[screenshot: Figure 3 from https://pjreddie.com/media/files/papers/YOLOv3.pdf]


apiszcz commented Mar 30, 2018

Thank you.


muye5 commented Jan 15, 2019

@AlexeyAB I want to ask whether the mAP of yolov1/yolov2 on VOC2007 is calculated as the 11-point average (over recall thresholds np.arange(0., 1.1, 0.1)) or the same way as the COCO metric AP50. I know yolov3 uses the latter on the COCO dataset, but I am not sure which metric is used on VOC, since these two metrics differ a bit. Thank you.

AlexeyAB (Collaborator) commented

@muye5

mAP for Yolo v1/v2/v3 is always calculated for a single IoU threshold = 0.5 and for 11 points on the Precision-Recall curve.
I.e. it uses mAP (Pascal VOC) = mAP@IoU=0.5 = AP50 (MS COCO) = mAP (ImageNet)
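(As a formula, the VOC2007 11-point metric is AP = (1/11) * sum over r in {0, 0.1, ..., 1.0} of max precision at recall >= r, i.e. the precision envelope sampled at 11 recall levels and averaged.)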
More about it: AlexeyAB#2123 (comment)

How to get mAP for MS COCO: AlexeyAB#2145 (comment)


muye5 commented Jan 15, 2019

@AlexeyAB thanks for your reply, but now I am more confused about the mAP. I'll just show the code for simplicity [^_^]
voc_eval @Line 37

import numpy as np

def voc_ap(rec, prec, use_07_metric=False):
    """ ap = voc_ap(rec, prec, [use_07_metric])
    Compute VOC AP given precision and recall.
    If use_07_metric is true, uses the
    VOC 07 11 point method (default:False).
    """
    if use_07_metric:
        # 11 point metric
        ap = 0.
        for t in np.arange(0., 1.1, 0.1):
            if np.sum(rec >= t) == 0:
                p = 0
            else:
                p = np.max(prec[rec >= t])
            ap = ap + p / 11.
    else:
        # correct AP calculation
        # first append sentinel values at the end
        mrec = np.concatenate(([0.], rec, [1.]))
        mpre = np.concatenate(([0.], prec, [0.]))

        # compute the precision envelope
        for i in range(mpre.size - 1, 0, -1):
            mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])

        # to calculate area under PR curve, look for points
        # where X axis (recall) changes value
        i = np.where(mrec[1:] != mrec[:-1])[0]

        # and sum (\Delta recall) * prec
        ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
    return ap

I think both ways are right, but I am not sure which one is used by yolov[1-3]. The IoU is 0.5 for both of them.
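
For illustration (a toy example added here, not from the thread), the two branches of voc_ap give close but different values on the same Precision-Recall points:

import numpy as np

# hypothetical detections: increasing recall with decreasing precision
rec = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
prec = np.array([1.0, 0.9, 0.7, 0.6, 0.4])

print(voc_ap(rec, prec, use_07_metric=True))   # 11-point VOC2007 metric: ~0.655
print(voc_ap(rec, prec, use_07_metric=False))  # exact area under the PR envelope: 0.62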

AlexeyAB (Collaborator) commented

@muye5 Hi,

Earlier, the repository https://github.com/AlexeyAB/darknet used only 11 recall points for PascalVOC 2007.

But currently it allows you to use any approach - more about it: AlexeyAB#2746
