Hello, and first of all thank you for the effort.
I tried some of the example models using benchmark.py, but I get nearly identical results for every YOLOv3 model. Specifically, I downloaded three different YOLOv3 models from SparseZoo (pruned, quantized, and base), ran benchmark.py on each of them, and got the following results for all three:
```
BenchmarkResults:
    items_per_second: 3.8288485463800352
    ms_per_batch: 261.17512559890747
    batch_times_mean: 0.2611751255989075
    batch_times_median: 0.2577577829360962
    batch_times_std: 0.013241334711086931
End-to-end per image time: 261.17512559890747ms
```
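For what it's worth, the reported numbers look internally consistent, assuming items_per_second is simply batch_size divided by the mean batch time and ms_per_batch is that mean in milliseconds (my reading of the output, not the script's documented definition):

```python
# Quick sanity check on the benchmark output above.
batch_size = 1
batch_times_mean = 0.2611751255989075  # seconds, from the output

items_per_second = batch_size / batch_times_mean  # ~3.8288
ms_per_batch = batch_times_mean * 1000            # ~261.175

print(items_per_second, ms_per_batch)
```

So the summary fields are just different views of the same mean batch time; the surprising part is that this mean is nearly the same for the pruned, quantized, and base models.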
I set the parameters as follows (the --quantized-inputs flag is passed only for the quantized model):

```
/home/user/Desktop/deepsparse/model/yolov3/model_quantized.onnx --engine onnxruntime --data-path /home/user/Desktop/deneme_test --batch-size 1 --num-iterations 500 --num-warmup-iterations 100 (--quantized-inputs)
```

What am I missing?
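For reference, the three runs differ only in the ONNX file (plus the extra flag for the quantized model). A sketch of what I ran, assuming benchmark.py is the script from the DeepSparse YOLOv3 example, and with the base/pruned file names being placeholders for the other two files I downloaded from SparseZoo:

```bash
# Identical settings for all three models; only the model path changes.
# model_base.onnx / model_pruned.onnx are placeholder names for my local files.
python benchmark.py /home/user/Desktop/deepsparse/model/yolov3/model_base.onnx \
    --engine onnxruntime --data-path /home/user/Desktop/deneme_test \
    --batch-size 1 --num-iterations 500 --num-warmup-iterations 100

python benchmark.py /home/user/Desktop/deepsparse/model/yolov3/model_pruned.onnx \
    --engine onnxruntime --data-path /home/user/Desktop/deneme_test \
    --batch-size 1 --num-iterations 500 --num-warmup-iterations 100

# The quantized model additionally gets --quantized-inputs.
python benchmark.py /home/user/Desktop/deepsparse/model/yolov3/model_quantized.onnx \
    --engine onnxruntime --data-path /home/user/Desktop/deneme_test \
    --batch-size 1 --num-iterations 500 --num-warmup-iterations 100 \
    --quantized-inputs
```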
Also, how can I test the models and see the annotations for YOLOv3 or YOLOv5 without an internet connection or Docker? It would be great to have that option. Thanks!
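P.S. In case it clarifies what I'm after, here is a minimal offline sketch (no internet, no Docker), assuming the ONNX file is already on disk, a 640x640 float input, and the DeepSparse Engine Python API (compile_model / run); the model path and dummy input are placeholders, and real annotations would still need YOLO post-processing:

```python
import time

import numpy as np
from deepsparse import compile_model  # DeepSparse Engine Python API

# Placeholder path: any locally downloaded YOLOv3/YOLOv5 ONNX file.
onnx_path = "/home/user/Desktop/deepsparse/model/yolov3/model.onnx"
batch_size = 1

engine = compile_model(onnx_path, batch_size=batch_size)

# Stand-in for a preprocessed image: in real use, load an image,
# letterbox/resize it to 640x640, and convert to CHW float32 in [0, 1].
dummy_input = np.random.rand(batch_size, 3, 640, 640).astype(np.float32)

start = time.time()
outputs = engine.run([dummy_input])
print(f"inference took {(time.time() - start) * 1000:.1f} ms")

# outputs holds the raw detection head tensors; drawing annotations would
# still require the usual YOLO post-processing (box decoding, confidence
# thresholding, NMS) before boxes can be rendered on the image.
for out in outputs:
    print(out.shape)
```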