A PTQ tflite model fails to pass benchmark test #95
Comments
My quantization strategy:
The following strategy works:
It looks like int8 per-channel quantization may incur errors.
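The two strategies themselves were shared as attachments and are not reproduced above. As a rough, hypothetical illustration of the difference being discussed, this is what per-channel versus per-tensor int8 weight configurations look like in plain PyTorch post-training quantization (generic PyTorch API, not necessarily this converter's own configuration interface):

```python
# Hypothetical configs only; the actual strategy files from the issue are not shown here.
import torch
import torch.ao.quantization as tq

# Per-channel int8 weights (the variant that reportedly triggers the error):
per_channel_qconfig = tq.QConfig(
    activation=tq.MinMaxObserver.with_args(dtype=torch.quint8),
    weight=tq.PerChannelMinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_channel_symmetric
    ),
)

# Per-tensor int8 weights (the variant that reportedly works):
per_tensor_qconfig = tq.QConfig(
    activation=tq.MinMaxObserver.with_args(dtype=torch.quint8),
    weight=tq.MinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_tensor_symmetric
    ),
)
```

Either qconfig would be assigned to `model.qconfig` before preparing the model for calibration; only the weight observer differs between the two.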
The following pattern in your model is the root cause of the problem.
The output tensor of the op in this pattern gets its own quantization parameters, so the inputs and the output of the concatenation end up with mismatched scales.
Then we may just unify the quantization parameters of (A_, Y, B), just like what we do as usual (see the sketch after this exchange).
Or you may just skip quantization for this kind of pattern, which seems to be the simplest solution.
This is simpler, I guess. We will try to fix it this way.
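As a minimal sketch of what "unify the quantization parameters of (A_, Y, B)" means in practice, the snippet below requantizes each concatenation input to the (scale, zero_point) chosen for the output, so the TFLite CONCATENATION kernel sees identical parameters on all of its tensors. The tensor names and parameter values are hypothetical:

```python
import numpy as np

def requantize(q: np.ndarray, in_scale: float, in_zp: int,
               out_scale: float, out_zp: int) -> np.ndarray:
    """Map int8 values from one (scale, zero_point) pair to another."""
    real = (q.astype(np.float32) - in_zp) * in_scale   # dequantize
    q_new = np.round(real / out_scale) + out_zp        # requantize to the shared params
    return np.clip(q_new, -128, 127).astype(np.int8)

# Hypothetical params for the two concat inputs (A_, B) and the shared output (Y):
params = {"A_": (0.02, 3), "B": (0.05, -7), "Y": (0.04, 0)}
a_q = np.array([10, 20, 30], dtype=np.int8)
b_q = np.array([-5, 0, 5], dtype=np.int8)

y_scale, y_zp = params["Y"]
a_unified = requantize(a_q, *params["A_"], y_scale, y_zp)
b_unified = requantize(b_q, *params["B"], y_scale, y_zp)
# After this, A_, B and Y all use (0.04, 0), which is what the benchmark tool expects.
```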
My use case:
Apply post-training quantization to a .pth model and convert it to tflite. The generated tflite model fails to pass the benchmark test with the following error message:
STARTING!
Log parameter values verbosely: [0]
Graph: [out/ptq_model.tflite]
Loaded model out/ptq_model.tflite
ERROR: tensorflow/lite/kernels/concatenation.cc:179 t->params.scale != output->params.scale (3 != -657359264)
ERROR: Node number 154 (CONCATENATION) failed to prepare.
Failed to allocate tensors!
Benchmarking failed.
Please refer to the attachment. Thanks.
test.zip
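For context, this is roughly the kind of PTQ flow described above, written with plain PyTorch eager-mode quantization. The model, the qconfig backend, and the calibration data are placeholders, and the TFLite export step used in the issue is project-specific and not reproduced here:

```python
# A minimal, self-contained PTQ sketch; TinyNet stands in for the actual .pth model.
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().eval()
model.qconfig = tq.get_default_qconfig("qnnpack")  # int8 PTQ config for mobile backends
prepared = tq.prepare(model)                       # insert observers

with torch.no_grad():                              # calibration pass on representative data
    for _ in range(8):
        prepared(torch.randn(1, 3, 32, 32))

quantized = tq.convert(prepared)                   # produce the int8 model
# The int8 model is then exported to TFLite by the project's converter (not shown)
# and run through TFLite's benchmark_model tool, which produced the log above.
```

The error in the log is raised while the benchmark tool prepares the graph, because the CONCATENATION node's input and output tensors do not share the same scale.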