
Cannot re-implement the performance #6

Closed
JesseYang opened this issue Mar 26, 2018 · 4 comments
Comments

@JesseYang

JesseYang commented Mar 26, 2018

I have tried to train SSDLite with MobileNetV2 as the backbone. As suggested, I downloaded the pretrained MobileNetV2 model from the other repo. Besides renaming the weights in the pretrained model so it could be loaded, I also changed the preprocessing of input images: each input image is normalized to mean 0 and standard deviation 1, as described in an issue in the pretrained model's repo. I made no other modifications and followed the README to train the SSDLite model for 300 epochs. Testing with test.py, the mAP after 300 epochs is about 65 to 66. I also tested the pretrained SSDLite model downloaded from the "73.2" link in the README, and its mAP is 73.4 with an NMS threshold of 0.45.

Could you give some suggestions on how to improve the mAP and re-implement the results? Thanks!
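For reference, the mean-0 / std-1 normalization described above might look like the sketch below. This is only an illustration: the pretrained MobileNetV2 repo may use fixed channel statistics rather than per-image ones, so the exact preprocessing is an assumption here.

```python
import numpy as np

def normalize_image(img):
    """Scale pixel values so the image has zero mean and unit variance.

    Sketch of the preprocessing described above; the statistics used by
    the pretrained MobileNetV2 repo may differ (e.g. fixed per-channel
    mean/std instead of per-image values).
    """
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)

# Example: a random 300x300 RGB image in the usual 0-255 range.
sample = np.random.randint(0, 256, size=(300, 300, 3), dtype=np.uint8)
out = normalize_image(sample)
print(float(out.mean()), float(out.std()))  # approximately 0.0 and 1.0
```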

@foreverYoungGitHub
Collaborator

The preprocessing should not have a large influence on such a poor result. In my experience, normalization usually helps.

One thing I'm not sure about: when you train the net, do you first train the extra layers and the localization and classification layers while keeping the base model frozen?

@JesseYang
Author

Do you mean first setting "TRAINABLE_SCOPE" in the configuration file to "extras,loc,conf", training for some epochs, and then adding "base" to "TRAINABLE_SCOPE"? I have tried that in my TensorFlow implementation but didn't see much difference. In your experience, how much improvement can this give? I will try it with this repo. For how many epochs should the base be frozen when training on the VOC dataset?
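A minimal PyTorch sketch of what such a scope filter might do. `TinySSD` and `set_trainable_scope` are hypothetical names for illustration; the repo's actual TRAINABLE_SCOPE handling may differ.

```python
import torch.nn as nn

class TinySSD(nn.Module):
    """Toy SSD-style model whose submodule names match the
    "base, extras, loc, conf" scopes mentioned above (hypothetical)."""
    def __init__(self):
        super().__init__()
        self.base = nn.Conv2d(3, 8, 3)
        self.extras = nn.Conv2d(8, 8, 3)
        self.loc = nn.Conv2d(8, 4, 3)
        self.conf = nn.Conv2d(8, 2, 3)

def set_trainable_scope(model, scopes):
    """Freeze every parameter whose name falls outside `scopes`."""
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(s) for s in scopes)

model = TinySSD()
# Stage 1: train only extras/loc/conf, base frozen.
set_trainable_scope(model, ["extras", "loc", "conf"])
frozen = [n for n, p in model.named_parameters() if not p.requires_grad]
print(frozen)  # ['base.weight', 'base.bias']

# Stage 2: include "base" to fine-tune the whole network.
set_trainable_scope(model, ["base", "extras", "loc", "conf"])
```

When building the optimizer for stage 1, only parameters with `requires_grad=True` should be passed in, e.g. `filter(lambda p: p.requires_grad, model.parameters())`.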

@foreverYoungGitHub
Collaborator

It can give around a 2–4% improvement when you freeze the earlier layers (sometimes only part of the backbone). It's hard to say for how many epochs they should stay frozen; I usually try 30–50 epochs.

Besides, I also sometimes used a COCO-pretrained model, but I'm not sure which model I used to train on COCO first.
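The two-stage schedule suggested above (train extras/loc/conf with a frozen base for roughly 30–50 epochs, then unfreeze everything) could be sketched as follows. `unfreeze_at` is a hypothetical parameter for illustration, not a setting from this repo.

```python
def trainable_scopes(epoch, unfreeze_at=40):
    """Return which scopes to train at a given epoch: keep the base
    frozen for the first `unfreeze_at` epochs, then fine-tune all."""
    if epoch < unfreeze_at:
        return ["extras", "loc", "conf"]
    return ["base", "extras", "loc", "conf"]

print(trainable_scopes(10))  # ['extras', 'loc', 'conf']
print(trainable_scopes(60))  # ['base', 'extras', 'loc', 'conf']
```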

@bailvwangzi

@JesseYang Were you able to reproduce the mAP 73.x% performance? Thanks!
