Hey @AnhPC03, sorry for the late reply. Did you try "--minShapes=images:1x3x672x672 --maxShapes=images:1x3x672x672 --optShapes=images:1x3x672x672 --shapes=images:1x3x672x672" to specify the input shape range?
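For reference, a full trtexec invocation with those flags might look like the sketch below. Only the shape flags come from this thread; the ONNX/engine file names and the --fp16 flag are assumptions for illustration:

```
# Hypothetical build command: fix the input "images" to a single shape
# (min = opt = max) so the engine has static 1x3x672x672 I/O.
# "model.onnx", "model.engine", and --fp16 are placeholders/assumptions.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --minShapes=images:1x3x672x672 \
        --optShapes=images:1x3x672x672 \
        --maxShapes=images:1x3x672x672 \
        --shapes=images:1x3x672x672 \
        --fp16
```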
Hello,
I was able to run your repo, and when I printed the input and output tensor sizes, the values were:

But the converted ONNX model has these values, and I saw the same values in Netron:

When I converted the ONNX model to a .engine for GPU-only inference, it had the same tensor sizes as Netron, and I couldn't run inference with that .engine on the GPU.
How can I convert the model for GPU-only inference while keeping the same tensor sizes as your DLA loadable?
Thank you very much.
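As a side note, one way to double-check an engine's I/O tensor shapes against what Netron reports for the ONNX model is to load the engine with trtexec and read its log; this is a sketch, and "model.engine" is a placeholder file name:

```
# Hypothetical check: load the built engine and run it once; with --verbose,
# trtexec logs the engine's input/output tensor names and dimensions,
# which can be compared against the shapes Netron shows for the ONNX model.
trtexec --loadEngine=model.engine --verbose
```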