
Lower COCO AP after pre-training SoCo_FPN_100ep #16

Open
linusericsson opened this issue Feb 10, 2022 · 5 comments

@linusericsson

Hi,

I've managed to run the SoCo_FPN_100ep model and subsequently evaluate it on COCO using the provided configs. The performance I achieve is 39.8 bb AP and 36.0 mk AP.

I've checked that my training hyperparameters match yours (as reported in the Google Drive config.json/log.txt). The only difference is that I ran mine on 8 V100s instead of 16. Since the total batch size is still 1024, this should give results similar to your Table 5.b, i.e. 41.9 bb AP and 37.6 mk AP.
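For concreteness, here is the arithmetic I'm assuming (the linear learning-rate scaling rule is my guess at the repo's convention, not something I've verified against this codebase):

```python
# Sketch of the batch-size bookkeeping when moving from 16 GPUs to 8.
total_batch_size = 1024               # the Table 5.b setting
num_gpus = 8                          # my run: 8 V100s (authors used 16)
per_gpu_batch_size = total_batch_size // num_gpus  # 128 here vs. 64 on 16 GPUs

# Since the *total* batch size is unchanged, the base learning rate should
# need no re-scaling under a linear rule such as lr = base * bs / 256
# (a common LARS/BYOL convention; assumed, not confirmed for this repo).
base_lr = 1.0 * total_batch_size / 256
print(per_gpu_batch_size, base_lr)    # 128 4.0
```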

Do you have any idea why my numbers are lower? Any help would be appreciated.

Thanks,
Linus

@linusericsson
Author

I have now tried fine-tuning my pre-trained model using both your hologerry/detectron2 codebase and the standard facebookresearch/detectron2, and in both cases I get around 39.8 bb AP. However, when I use your pre-trained weights for SoCo_FPN_100ep and fine-tune with the standard detectron2 code, I get 42.1 bb AP (very close to the 41.9 reported in your paper).
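In case it helps anyone reproducing this, the backbone extraction I used looks roughly like the following MoCo-style sketch (the checkpoint filename and the "encoder." key prefix are assumptions about the checkpoint layout, not this repo's exact conversion script):

```python
# Hypothetical sketch: pull the pre-trained ResNet backbone out of an SSL
# checkpoint and save it in detectron2's pickled-weights format.
import pickle
import torch

ckpt = torch.load("soco_fpn_100ep.pth", map_location="cpu")  # assumed filename
state_dict = ckpt.get("model", ckpt)

backbone = {}
for k, v in state_dict.items():
    # Keep only the online encoder's backbone; the prefix ("encoder.",
    # "module.encoder.", ...) depends on how the training code named it.
    if k.startswith("encoder."):
        backbone[k[len("encoder."):]] = v.numpy()

with open("soco_backbone_d2.pkl", "wb") as f:
    pickle.dump(
        {"model": backbone, "__author__": "SoCo", "matching_heuristics": True},
        f,
    )
```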

This suggests the difference lies in the pre-training (rather than the fine-tuning), since I'm unable to replicate the paper's results from my own pre-trained weights.
Thanks for any help!

@yanjk3

yanjk3 commented Feb 16, 2022

Hi, I found that the MoCo v2 result in Table 1 of the SoCo paper is surprisingly high (40.4 bb AP). In my own experiments, fine-tuning a Mask R-CNN R50-FPN with a MoCo v2 backbone gives results similar to supervised pre-training (38.9 bb AP). Have you run into this as well? Thanks!

@linusericsson
Author

I've found the source of the difference. Due to an error in the apex automatic mixed precision library, my original run ended up using optimization level O2. After fixing the bug and rerunning with optimization level O1, I get 41.8 bb AP.
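For reference, the relevant change looks like this (a minimal runnable sketch; the Linear model and SGD optimizer are dummy stand-ins for the actual SoCo training code):

```python
import torch
from apex import amp  # NVIDIA apex; requires a CUDA build of apex

# Dummy stand-ins for the real SoCo model/optimizer.
model = torch.nn.Linear(8, 8).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The fix: opt_level "O1" (patch-level mixed precision) instead of "O2"
# (almost-FP16 weights), which is what silently changed my results.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(4, 8).cuda()
loss = model(x).pow(2).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```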

@yanjk3 Sorry, I haven't experimented with MoCo v2 in this setting so I can't help there. Good luck!

@ross-Hr

ross-Hr commented May 10, 2022

Is the author no longer updating this library?

@hologerry
Copy link
Owner

Hello @linusericsson and @xiaoxiong007. The optimization level does affect performance; all of our models were trained with the O1 level.
There may be small differences between runs, as shown in the supplemental materials.
