Theoretical understanding of mAP results #3187
@buzdarbalooch Hi,
In your image, "AP(0.75) means the AP with IoU=0.75". I think it should be "AP(0.75) means the AP for True Positives with IoU >= 0.75", or "AP(0.75) means the AP with IoU_threshold = 0.75".
Do they want to know how AP was calculated, or how you trained the model?
Read how mAP and APs are calculated in different datasets:
-
https://mc.ai/which-one-to-measure-the-performance-of-object-detectors-ap-or-olrp/
-
https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173
Also, in the latest Darknet version you can choose how many points on the Precision-Recall curve will be used, via the -points 101 flag. More about this: #2746
If you want, you can see the code that calculates mAP and APs (lines 649 to 1088 in commit 0ae5191): https://github.com/AlexeyAB/darknet/blob/0ae5191580d49fdf4392e56152d4386b89224500/src/detector.c#L649-L1088
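As a concrete illustration of the IoU >= 0.75 criterion above, here is a minimal sketch in plain Python (not the actual detector.c implementation; the (x1, y1, x2, y2) box format is an assumption for the example):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# For AP(0.75), a detection can only count as a True Positive if
# iou(detected_box, ground_truth_box) >= 0.75.
```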
They didn't mention it specifically, but I think they are more interested in how AP was calculated for the reference dataset.
@AlexeyAB I have read the two articles you shared, but if you could just summarize in two to three lines, looking at the figure I shared, how we arrive at AP / how AP was calculated, that would be very nice of you. Thanks!
The value of average precision (AP) is equal to the area under the Precision-Recall curve, where Precision and Recall are calculated for every possible confidence threshold (the confidence of each detection), at the specified IoU_threshold = 0.75 (75%).
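To make the area-under-the-curve idea concrete, here is a minimal sketch in plain Python (a simplification, not Darknet's actual detector.c code; single class, and the is_true_positive flags are assumed to be precomputed against ground truth at the chosen IoU_threshold):

```python
def average_precision(detections, num_ground_truth):
    """AP as the area under the Precision-Recall curve.

    detections: list of (confidence, is_true_positive) pairs, where
    is_true_positive was decided with a fixed IoU_threshold (e.g. 0.75).
    Sorting by confidence and walking down the list is equivalent to
    sweeping the confidence threshold over every possible value.
    """
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for confidence, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_ground_truth)
    # Area under the curve via 101-point interpolation (the same idea as
    # Darknet's -points 101 flag): at each recall level r, take the best
    # precision achievable at recall >= r, then average over the 101 points.
    ap = 0.0
    for i in range(101):
        r = i / 100.0
        p = max((p for p, rec in zip(precisions, recalls) if rec >= r),
                default=0.0)
        ap += p / 101.0
    return ap
```

For AP(0.75) the is_true_positive flags are decided with IoU_threshold = 0.75; for AP(0.5), with 0.5.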
OK, thanks a lot!
Hi @AlexeyAB, can I ask what's the difference between the confidence threshold (the confidence of each detection) and the IoU threshold?
Is the confidence threshold always fixed?
But mAP is calculated for an IoU threshold, not for a confidence threshold; correct me if I am wrong. And during detection, what is the standard fixed value one uses for the confidence threshold?
Read these articles:
-
https://mc.ai/which-one-to-measure-the-performance-of-object-detectors-ap-or-olrp/
-
https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173
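In case a short illustration helps alongside those articles (my own sketch with made-up numbers, not Darknet code): the confidence threshold filters which detections are kept at all, while the IoU threshold decides whether a kept detection overlaps a ground-truth box enough to count as a True Positive during evaluation:

```python
# Hypothetical detections: (confidence, IoU with its best ground-truth box)
detections = [(0.95, 0.80), (0.60, 0.40), (0.30, 0.90)]

conf_thresh = 0.25  # detection-time filter (the -thresh flag in Darknet)
iou_thresh = 0.75   # evaluation-time matching criterion used for AP/mAP

kept = [d for d in detections if d[0] >= conf_thresh]
true_positives = [d for d in kept if d[1] >= iou_thresh]
false_positives = [d for d in kept if d[1] < iou_thresh]
# (A real evaluation also requires each ground-truth box to be matched by
# at most one detection; that bookkeeping is omitted here.)
```

And as noted above, for computing mAP the confidence threshold is not fixed: it is swept over every possible value to trace out the Precision-Recall curve.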
Hi, I recently used YOLO to train a model on a video of approximately 3 minutes and 30 seconds; the video was a sequence of clips, each 10 to 15 seconds long.
I executed this command to see the mAP values at different IoU thresholds (mAP@IoU=0.50):
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
But after submitting the paper to a conference, it has somehow been hard to satisfy the reviewers about how these values were reached. Mostly they point out: '"Results depicted in Table 1, ... high accuracy" is not clear and should be reformulated.'
I am attaching the relevant small paragraph from the paper, and also a picture of the results I got.
@AlexeyAB, if you could kindly give me an idea of how I can satisfy the reviewers about these results, I would be grateful.
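In case it helps when re-running the evaluation for the paper: recent AlexeyAB/darknet builds also accept -iou_thresh and -points flags for the map command (an assumption about your particular build; -points is the flag mentioned earlier in this thread), e.g.:
darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights -iou_thresh 0.75 -points 101
This reports AP at IoU_threshold = 0.75 using 101-point interpolation.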