[Dynamic batch / Dynamic shape] onnx model with dynamic input is converted to tflite with static input 1 #441
Comments
Thank you so much for your rapid reply and your time once again 😄
Yup, I printed
I guess there is something weird going on in TFLiteConverter. I can run the ONNX model with dynamic inputs, but the TFLite one crashes...
If you want to infer with variable batch sizes, you need to infer using the signature runner (`get_signature_runner`).
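A minimal sketch of what that looks like (my illustration, not the maintainer's code; it assumes the model was converted with `-osd` so it contains exactly one SignatureDef, and uses the `images`/`output` tensor names from the snippets later in this thread):

```python
# Sketch: dynamic-batch TFLite inference via the signature runner.
# Assumes a single SignatureDef and a dynamic batch dimension.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="osnet_x0_25_msmt17_float32.tflite")
runner = interpreter.get_signature_runner()  # valid only if exactly one SignatureDef exists

for batch in (1, 2, 5):
    images = np.ones([batch, 256, 128, 3], dtype=np.float32)
    outputs = runner(images=images)  # the runner resizes the input tensor per call
    print(batch, outputs["output"].shape)
```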
That works, no problem. But when I run:

```python
import numpy as np
import tensorflow as tf
from pprint import pprint

interpreter = tf.lite.Interpreter(model_path="/home/mikel.brostrom/yolo_tracking/examples/weights/osnet_x0_25_msmt17_saved_model/osnet_x0_25_msmt17_float32.tflite")
tf_lite_model = interpreter.get_signature_runner()
inputs = {
    'images': np.ones([5, 256, 128, 3], dtype=np.float32),
}
tf_lite_output = tf_lite_model(**inputs)
print(f"[TFLite] Model Predictions shape: {tf_lite_output['output'].shape}")
print("[TFLite] Model Predictions:")
pprint(tf_lite_output)
```

I get:

```
lite/python/interpreter.py", line 853, in get_signature_runner
    raise ValueError(
ValueError: SignatureDef signature_key is None and model has 0 Signatures. None is only allowed when the model has 1 SignatureDef
```

Should this be added manually?
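As an aside (my addition, not from the thread): you can check whether a `.tflite` file actually contains a SignatureDef before calling `get_signature_runner`:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="osnet_x0_25_msmt17_float32.tflite")
# An empty dict here means the model has 0 signatures, which is exactly
# what triggers the ValueError above.
print(interpreter.get_signature_list())
```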
Are all the necessary packages installed? If it doesn't work, try Docker:

```bash
docker run --rm -it \
  -v `pwd`:/workdir \
  -w /workdir \
  docker.io/pinto0309/onnx2tf:1.15.8
```
Yup, installed all the packages mentioned in the README.
I get the same issue there:

```
user@69584e9dc119:/workdir$ python examples/weights/test.py
Traceback (most recent call last):
  File "examples/weights/test.py", line 6, in <module>
    tf_lite_model = interpreter.get_signature_runner()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/interpreter.py", line 853, in get_signature_runner
    raise ValueError(
ValueError: SignatureDef signature_key is None and model has 0 Signatures. None is only allowed when the model has 1 SignatureDef
```

`test.py`:

```python
import numpy as np
import tensorflow as tf
from pprint import pprint

interpreter = tf.lite.Interpreter(model_path="/workdir/examples/weights/osnet_x0_25_msmt17_saved_model/osnet_x0_25_msmt17_float32.tflite")
tf_lite_model = interpreter.get_signature_runner()
inputs = {
    'images': np.ones([5, 256, 128, 3], dtype=np.float32),
}
tf_lite_output = tf_lite_model(**inputs)
print(f"[TFLite] Model Predictions shape: {tf_lite_output['output'].shape}")
print("[TFLite] Model Predictions:")
pprint(tf_lite_output)
```
Have to catch a train, so I will have to continue looking at this later today 😄
Once the conversion is performed in the Docker container, there should be no errors. Also, if you are running the conversion yourself, do it like this:

```bash
docker run --rm -it \
  -v `pwd`:/workdir \
  -w /workdir \
  docker.io/pinto0309/onnx2tf:1.15.8

onnx2tf \
  -i osnet_x0_25_msmt17.onnx \
  -o saved_model \
  -osd \
  -coion \
  --non_verbose
```
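Worth spelling out (my reading, not stated explicitly in the thread): `-osd` is the option that embeds a SignatureDef in the exported model, which is what `get_signature_runner()` needs, and `-coion` copies the ONNX input/output names (`images`, `output`) into the TFLite model. If you are stuck with a model that has no SignatureDef, a fallback sketch (assuming the TFLite model does have a dynamic batch dimension) is to resize the input tensor manually:

```python
# Fallback sketch for a .tflite model without SignatureDefs: resize the
# input tensor before allocating. Works only if the model was exported
# with a dynamic batch dimension.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="osnet_x0_25_msmt17_float32.tflite")
input_detail = interpreter.get_input_details()[0]
interpreter.resize_tensor_input(input_detail["index"], [5, 256, 128, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(input_detail["index"], np.ones([5, 256, 128, 3], dtype=np.float32))
interpreter.invoke()
output_detail = interpreter.get_output_details()[0]
print(interpreter.get_tensor(output_detail["index"]).shape)  # expect (5, 512) for this model
```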
The onnx2tf command is not failing. What is failing is the inference, in test.py.
I have known that from the beginning. I am concerned that your host PC environment was corrupted at the time you converted the model. Please redo everything in Docker.
Thanks for your patience. Will try Docker from scratch later today :)
Everything that could go wrong went wrong 🤣. My bad with the environment. Have it working after your suggestions:

```
tflite model input torch.Size([1, 256, 128, 3])
tflite model output (1, 512)
0: 480x640 1 person, 9.1ms
tflite model input torch.Size([2, 256, 128, 3])
tflite model output (2, 512)
0: 480x640 1 person, 1 chair, 9.5ms
tflite model input torch.Size([2, 256, 128, 3])
tflite model output (2, 512)
0: 480x640 1 person, 1 chair, 15.5ms
```

Will use the provided Docker image from now on when converting.
Glad to hear it went well.
I have added this to the README and will close the issue.
Great tutorial for dynamic batch inference using TFLite models! It was much needed IMO.
Issue Type: Others
OS: Linux
onnx2tf version number: 1.15.8
onnx version number: 1.13.0
onnxruntime version number: 1.13.1
onnxsim (onnx_simplifier) version number: 0.4.33
tensorflow version number: 2.13.0
Download URL for ONNX: osnet_x0_25_msmt17.zip
Parameter Replacement JSON: NA
Description
Hi @PINTO0309!
I have the following issue: the ONNX model has a dynamic batch input, but the converted TFLite (FP32) model has a static batch size of 1 after conversion by:

```
onnx2tf -i examples/weights/osnet_x0_25_msmt17.onnx -o /home/mikel.brostrom/yolo_tracking/examples/weights/osnet_x0_25_msmt17_saved_model -nuo --non_verbose
```

I went through the README but could not find any reason for this behavior. Converting with `-b 10` works as expected, but my input varies depending on the image, so the input needs to be dynamic. The output size is also set to the static input value.
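For completeness, a sketch (my addition; the NCHW input shape `[N, 3, 256, 128]` is an assumption based on a typical PyTorch-exported OSNet, not confirmed by the thread) of verifying the dynamic batch dimension on the ONNX side with onnxruntime:

```python
# Sketch: confirm the ONNX model really has a dynamic batch dimension.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("osnet_x0_25_msmt17.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # a dynamic batch shows up as a symbolic dim, e.g. ['batch', 3, 256, 128]

for batch in (1, 5):
    out = sess.run(None, {inp.name: np.ones([batch, 3, 256, 128], dtype=np.float32)})
    print(batch, out[0].shape)
```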