
Investigate decrease in mAP in retinanet resnet50 model #4437

Open
prabhat00155 opened this issue Sep 17, 2021 · 5 comments

prabhat00155 (Contributor) commented Sep 17, 2021

🐛 Describe the bug

Reference: #4409 (comment)

cc @datumbox

@prabhat00155 prabhat00155 self-assigned this Sep 17, 2021
@prabhat00155 prabhat00155 changed the title from "Investigate mAP in retinanet resnet50 model" to "Investigate decrease in mAP in retinanet resnet50 model" Sep 17, 2021
prabhat00155 (Contributor, Author) commented:

@datumbox
This is the result I got for the retinanet_resnet50_fpn model (which seems to match the published result):

IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.364
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.558
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.385
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.194
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.400
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.483
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.312
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.501
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.540
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.336
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.587
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.691
Training time 4:17:43

Training logs:
retinanet_run1.txt
retinanet_run2.txt

datumbox (Contributor) commented:

@prabhat00155 I understand from your logs that you did not validate the pre-trained model with the existing weights; instead, you trained a new model. To test whether the already published weights still work, pass the --pretrained --test-only flags.
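For illustration, this is roughly what those two flags amount to (a minimal sketch, not the reference script itself; the dummy input below is only an assumption to show the call pattern):

```python
import torch
import torchvision

# Load the already published weights (what --pretrained does) and run
# inference without any training loop (the spirit of --test-only).
model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
model.eval()

with torch.no_grad():
    image = [torch.rand(3, 800, 800)]   # one dummy RGB image with values in [0, 1]
    predictions = model(image)          # list of dicts with boxes, labels, scores

print(predictions[0]["boxes"].shape)
```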

prabhat00155 (Contributor, Author) commented:

I see. Yeah, now I get this:

IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.363
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.557
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.382
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.193
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.400
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.490
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.314
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.500
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.540
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.340
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.581
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.696

fmassa (Member) commented Sep 21, 2021

Note: this could potentially be due to the change of defaults in FrozenBatchNorm in #2933.

The reported results in that PR match the ones we get here, which could be a reasonable explanation for the difference.
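As a toy illustration of why an eps default change can matter (the numbers here are made up, not from the PR): a frozen BN layer scales activations by 1/sqrt(var + eps), so channels with a small running variance shift noticeably when eps changes.

```python
import torch

# FrozenBatchNorm2d computes roughly (x - mean) / sqrt(var + eps) * weight + bias.
# With a small running variance, switching eps changes the scale of the output.
x, mean, var = torch.tensor(1.0), torch.tensor(0.0), torch.tensor(1e-4)
for eps in (0.0, 1e-5):
    y = (x - mean) / torch.sqrt(var + eps)
    print(f"eps={eps}: {y.item():.4f}")
```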

datumbox (Contributor) commented:

@fmassa Unfortunately, this is not it. PR #2933 did drop the mAP by 0.1, but #2940 then ensured the issue was addressed for pre-trained models.

In the original issue, I reference PR #3032, which is later than both of the aforementioned PRs and reports an mAP of 36.4. As you can see, that PR contains the eps patch:

eps: float = 1e-5,

And the overwrite:

def overwrite_eps(model, eps):
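For context, a sketch of what that overwrite does (my paraphrase of the helper, assuming the usual FrozenBatchNorm2d import path, not a verbatim copy):

```python
import torch
from torchvision.ops.misc import FrozenBatchNorm2d

def overwrite_eps(model: torch.nn.Module, eps: float) -> None:
    # Restore the eps that the FrozenBatchNorm2d layers were trained with, so the
    # published pre-trained weights are evaluated under the same normalisation.
    for module in model.modules():
        if isinstance(module, FrozenBatchNorm2d):
            module.eps = eps
```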

@prabhat00155 prabhat00155 removed their assignment Apr 1, 2022