
Mobilenet slower than vgg16? #202

Open
mfiore opened this issue Apr 13, 2018 · 1 comment
mfiore commented Apr 13, 2018

Hi,

I have trained three SSD models, using vgg16_reduced, mobilenet_512, and mobilenet_608. I am now running inference on a video, using batches of a single frame, and comparing the speed of the three models on a PC with an i7 7700K and a GeForce 1080 Ti.

I was surprised by the results:
vgg16: 70 fps
mobilenet_608: 45 fps
mobilenet_512: 60 fps

I'm measuring the time using the OpenCV function getTickCount() before and after the forward operation.
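A minimal sketch of this kind of per-frame timing. It uses Python's `time.perf_counter()` as a stand-in for `cv2.getTickCount()`/`cv2.getTickFrequency()` so it runs without OpenCV; the `forward` callable and warm-up count are hypothetical placeholders for the real SSD forward pass:

```python
import time

def measure_fps(forward, frames, warmup=5):
    """Time a per-frame forward pass and return frames per second.

    Mirrors bracketing the forward() call with cv2.getTickCount();
    time.perf_counter() is used here so the sketch runs without OpenCV.
    A few warm-up iterations are discarded, since the first GPU forward
    passes typically include one-time initialization costs.
    """
    for frame in frames[:warmup]:
        forward(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        forward(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

if __name__ == "__main__":
    # Hypothetical stand-in for a real detector's forward pass.
    fps = measure_fps(lambda f: sum(f), [list(range(100))] * 105)
    print(f"{fps:.0f} fps")
```

Discarding warm-up frames matters when comparing models this way, since otherwise the slower startup of one model can skew a short measurement.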

I was expecting mobilenet to be a lot faster. Any ideas about that? I have read that cudnn has problems optimizing depthwise convolutions. Is this related to that?

I trained all three on my dataset (which has a single class). The two MobileNet models were trained from the pretrained models using --finetune, --network mobilenet, and --data-shape 512 (or 608). If I got it right, with vgg16 you don't need to use --finetune, and can simply use the starting epoch of the model.
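For reference, a sketch of the training invocations described above. Only the flags named in this comment are taken from it; the script name, epoch number, and any dataset arguments are hypothetical placeholders:

```shell
# MobileNet variants: fine-tune from the pretrained models.
python train.py --network mobilenet --data-shape 512 --finetune 1
python train.py --network mobilenet --data-shape 608 --finetune 1
# vgg16_reduced: per the comment above, --finetune is not needed;
# training simply starts from the pretrained model's epoch.
```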

@zhreshold (Owner)

MobileNet was originally very slow on GPU; the depthwise convolution op was optimized last year. But I am not entirely sure whether cuDNN is involved in that operation.
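A back-of-envelope calculation showing why this is plausible: depthwise-separable convolutions cut the multiply-accumulate (MAC) count by roughly an order of magnitude on paper, so any slowdown has to come from how well the op is implemented on the GPU, not from arithmetic cost. The layer dimensions below are illustrative, not taken from either network:

```python
# MAC counts for one conv layer: standard 3x3 conv vs. the
# depthwise 3x3 + pointwise 1x1 pair used in MobileNet.
H, W = 38, 38            # feature-map size (illustrative)
C_in, C_out, k = 128, 128, 3

standard = H * W * C_out * C_in * k * k   # dense 3x3 convolution
depthwise = H * W * C_in * k * k          # one 3x3 filter per channel
pointwise = H * W * C_out * C_in          # 1x1 cross-channel projection
separable = depthwise + pointwise

ratio = separable / standard              # equals 1/C_out + 1/k**2
print(f"standard:  {standard:,} MACs")
print(f"separable: {separable:,} MACs ({ratio:.1%} of standard)")
```

With these numbers the separable pair needs about 12% of the MACs of the dense conv, which is why a poorly optimized depthwise kernel can erase the expected speedup despite the lower FLOP count.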
