I have trained three SSD models, using vgg16_reduced, mobilenet_512, and mobilenet_608. I am now running inference on a video in batches of a single frame and comparing the speed of the three models on a PC with an i7 7700K and a GeForce 1080 Ti.
I was surprised by the results:
vgg16: 70 fps
mobilenet_608: 45 fps
mobilenet_512: 60 fps
I'm measuring the time with OpenCV's getTickCount() before and after the forward operation.
I was expecting MobileNet to be a lot faster. Any ideas why? I have read that cuDNN has trouble optimizing depthwise convolutions; is this related to that?
I trained all three on my dataset (which has a single class). The two MobileNet models were trained from the pretrained models using --finetune, --network mobilenet, and --data-shape 512 (or 608). If I understood correctly, with vgg16 you don't need --finetune and can simply resume from the model's starting epoch.
MobileNet was originally very slow on GPU; the depthwise convolution op was optimized last year. But I am not entirely sure whether cuDNN is involved in that operation.