Low mAP score on pycocotools #71
Hmm, I'm not familiar with pycocotools. Is that the official COCO mAP code? Inference is about identical between this repo and darknet (training differences abound though...), so mAP on the official weights should also be the same, though test.py computes mAP slightly differently than the official COCO code. I noticed your local version is a bit out of date with the current repo; see the current test.py (Line 136 in 2dd2564).
Yeah, running https://github.com/cocodataset/cocoapi. So I didn't touch anything. And I've already tested
Ah, it sounds like you tried several values. I think < 0.10 is too low, and > 0.30 is too high. You should get a pretty big improvement going from 0.5 to 0.3, perhaps 10% better mAP (i.e. from 0.40 to 0.50 mAP).
I modified eval_map.py and datasets.py to adapt to the more recent style of the repo. Here are the results. The reason why I would try
Console log:
Hmmm, well then I don't understand the discrepancy. The last official COCO SDK results from test.py were by @nirbenz in #2 (comment), showing 0.543 mAP@0.5 at 416 pixels. Nothing significant should have changed for inference in the repo since then. I'm not sure what to say, other than to ask @nirbenz for a PR of his SDK export code. Recent results from detect.py also look exactly the same as darknet's, i.e. #16 (comment).
Understood. I will continue testing and see if I did something wrong with eval_map. I'll let you know if I find something.
Hi, I see your repo's mAP is 0.547 using pycocotools. Can you tell me how to solve this? I met the same issue.
It looks like it would be beneficial for test.py to output a JSON file in the format that https://github.com/cocodataset/cocoapi wants, so we could generate mAP directly from cocoapi. I think the relevant JSON format is here. Do any of you have code ready-made for a PR that already does this?
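For reference, the cocoapi results format for bounding-box detections is a JSON array with one dict per detection. A minimal sketch (the ids and numbers below are placeholders, not output from this repo):

```python
import json

# One dict per detection; field names follow the COCO results format.
results = [
    {
        "image_id": 42,             # COCO image id (usually parsed from the filename)
        "category_id": 18,          # COCO category id (note the 80-class -> 91-class mapping)
        "bbox": [258.2, 41.3, 348.3, 243.8],  # [x, y, width, height] in pixels, top-left origin
        "score": 0.236,             # detection confidence
    },
    # ... one entry per detection ...
]

with open("results.json", "w") as f:
    json.dump(results, f)
```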
The NMS scheme in the original darknet repo is different from this repo; I suggest you take a look at the NMS code where they differ. A while back I did try to make those changes on this repo, and I was able to push the mAP to 0.49. But then I got dragged off to work on something else.
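For orientation only, here is a generic greedy NMS sketch; it is not the darknet or repo implementation, and the two differ in details (per-class handling, how scores are combined) beyond what this shows:

```python
import torch

def greedy_nms(boxes, scores, iou_thres=0.45):
    """Minimal greedy NMS. boxes: (N, 4) xyxy tensor, scores: (N,) tensor.
    Returns the indices of boxes kept after suppression."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(i.item())
        if order.numel() == 1:
            break
        rest = boxes[order[1:]]
        # IoU of the current top-scoring box against the remaining candidates
        x1 = torch.max(boxes[i, 0], rest[:, 0])
        y1 = torch.max(boxes[i, 1], rest[:, 1])
        x2 = torch.min(boxes[i, 2], rest[:, 2])
        y2 = torch.min(boxes[i, 3], rest[:, 3])
        inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thres]  # drop boxes that overlap too much
    return keep
```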
I could make a simple PR; the code is pretty straightforward, as shown in eval_map.py above. However, you will still need to generate the ground-truth JSON for the 5k dataset, as well as for any other custom dataset, if you want to use the COCO API properly. I don't know in which way you would want it to be included in the code.
@ydixon Actually, you could write the image IDs from "5k.txt" into a list and use that to filter ground-truth labels in the default cocoeval code. I could upload it if anyone needs it.
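A rough sketch of that idea, assuming 5k.txt lists one image path per line and the numeric COCO image id can be parsed from the filename (e.g. COCO_val2014_000000000164.jpg -> 164); the helper name is made up:

```python
def read_img_ids(list_file="5k.txt"):
    """Collect COCO image ids for the 5k subset from a path list file."""
    img_ids = []
    with open(list_file) as f:
        for line in f:
            name = line.strip().rsplit("/", 1)[-1]   # e.g. COCO_val2014_000000000164.jpg
            if name:
                img_ids.append(int(name.split("_")[-1].split(".")[0]))
    return img_ids

# later, before evaluation: coco_eval.params.imgIds = read_img_ids()
```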
@ydixon @okanlv @AndOneDay, I updated
Code to compile the json dict: Lines 67 to 81 in eb6a4b5
Code to evaluate the json with pycocotools: Lines 141 to 157 in eb6a4b5
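The referenced lines aren't reproduced here, but as a hedged sketch (file paths are placeholders, not the repo's exact code), evaluating such a results JSON with pycocotools typically looks like this:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2014.json")  # ground-truth annotations (placeholder path)
coco_dt = coco_gt.loadRes("results.json")             # detections in the COCO results format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
# optionally restrict evaluation to a subset, e.g. the 5k split:
# coco_eval.params.imgIds = read_img_ids("5k.txt")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints AP/AR, including mAP@[.5:.95] and mAP@0.5
```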
Output mAP is low using
In the nms function, comment out and replace this line, then run the test with 0.005.
Please also let me know how your model performs under the COCO API. There are some interesting, different design choices in this repo compared to the original darknet, and I would really like to know how well they do under such changes.
Oh, I thought the
@ydixon @okanlv @AndOneDay it worked, pycocotools mAP is 0.550 (416) and 0.579 (608) with
@glenn-jocher Thanks! Now I'm more incentivized to try the unique anchor loss layer approach. (GPU resources are expensive!)
@ydixon yeah of course! I'm left disenfranchised by the mAP metric now. The lower I set
Have you found the reason for this behavior? The intuition is that the lower the confidence threshold, the higher the false positives, so precision would be lower... so it's a little confusing.
@simaiden the original problems referenced in this issue have been corrected. mAP is correctly reported now, along with P and R. |
But what about the lower mAP with a high confidence threshold? This happens when I use the COCO API; do you mean that it doesn't happen with this repo? Thanks!
@simaiden I don't understand. mAP should be computed at 0.001 or 0.0001 confidence threshold. Everything is working correctly in this repo in regards to mAP computation. |
Thanks for your reply. OK, so this is the way to calculate mAP, but do you have any clue why that is? Why does mAP decrease when I increase the confidence threshold? I'm not talking about this repo in particular but in general; sorry if my question is not about the repo itself.
@simaiden search online, we can't help you with this. |
@glenn-jocher Apologies for bringing up the same topic again. I've noticed there are a lot of threads about mAP and I have read them, but none of them has code to test. So I modified your detect/test code a little to run pycocotools.
Added load_images_v2 in datasets.py. Run eval_map.py and it is compared against the ground-truth file coco_valid.json. The ground-truth file has been tested against results generated from the darknet repo, with matching mAP. If you want to generate it yourself, you can go here. I am only running the model with official yolov3 weights. Any ideas on improving the score?
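Not the repo's actual code, but one common source of low pycocotools scores in scripts like eval_map.py is the box format: COCO expects [x, y, width, height] in original-image pixels, while the network outputs xyxy boxes at the letterboxed input resolution. A hedged sketch of that conversion (function name and letterbox assumptions are mine):

```python
def xyxy_to_coco_bbox(box, img_w, img_h, net_size=416):
    """Map an (x1, y1, x2, y2) box in letterboxed network coordinates back to
    the original image and return COCO-style [x, y, width, height]."""
    gain = net_size / max(img_w, img_h)        # resize gain used by a simple letterbox
    pad_x = (net_size - img_w * gain) / 2      # horizontal padding added by the letterbox
    pad_y = (net_size - img_h * gain) / 2      # vertical padding added by the letterbox
    x1 = (box[0] - pad_x) / gain
    y1 = (box[1] - pad_y) / gain
    x2 = (box[2] - pad_x) / gain
    y2 = (box[3] - pad_y) / gain
    return [x1, y1, x2 - x1, y2 - y1]          # COCO wants top-left x, y plus width, height
```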