[OSNet] int8 tflite model - catastrophic accuracy degradation #444
The Float32 accuracy is in perfect agreement, so the structure of the model itself must be poorly suited to quantization. onnx2tf does not perform any special operations with respect to quantization; it simply uses the standard features of TFLiteConverter. Note that the Float32 model and the INT8 model are converted internally from the same Keras structural model, so there can be no cause for the degradation in accuracy other than structural properties of the original model. I don't know all the activation functions and model structures that are vulnerable to quantization, so your investigative efforts will help me identify the causes of the degradation. You will need to transform the model up to some midpoint and check where the degradation begins; Float32 has no precision degradation whatsoever, as verified with:

```
onnx2tf \
  -i osnet_x1_0_fp_32_bs_1.onnx \
  -cotof
```
Thanks for the amazingly quick reply @PINTO0309. Regarding the network structure: if you're interested in the details, you can find more information about the network topology in this paper, and you can have a look at the network implementation in this script. I do agree with splitting the model into smaller subgraphs and seeing where the problem starts (one way to do this is sketched below), but it would be time-consuming and I won't manage to do it quickly.
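For instance, `onnx.utils.extract_model` can cut the graph at an intermediate tensor so each half can be converted and quantized separately. A minimal sketch, assuming a hypothetical cut-point tensor name (real tensor names can be read off with Netron):

```python
import onnx

# Bisect the model at an intermediate tensor to localize the quantization
# damage. "input" and "some_intermediate_tensor" are hypothetical names;
# substitute the real graph input/tensor names from the ONNX file.
onnx.utils.extract_model(
    input_path="osnet_x1_0_fp_32_bs_1.onnx",
    output_path="osnet_front_half.onnx",
    input_names=["input"],
    output_names=["some_intermediate_tensor"],
)
# Converting and INT8-quantizing osnet_front_half.onnx on its own then shows
# whether the accuracy drop already appears in the front half of the network.
```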
There was a bug in the behavior of the …
Thanks very much for this information. I hope we can contribute to solving this issue. And the …
The regression test by CI takes about 2 hours, so the latest version will be released in about 2 hours.
Fixes: https://github.com/PINTO0309/onnx2tf/releases/tag/1.15.9
Thanks a lot for the provided fix!! I started my investigations at the very beginning of the model, and things are getting interesting! I'm trying to spot the position where the significant accuracy drop begins; that's why I updated the provided accuracy check script. Interestingly, I have found that the outputs of the int8 tflite model start diverging right after the MaxPool operation.
I see. There is a tremendously small negative value padded in. This is the part: the minimum value of Float32 is used as the explicit padding value. However, it is only a guess.

```python
# use minimum limit value of data type for explicit padding value since this is max pooling
padded_tensor = tf.pad(
    tensor=input_tensor,
    paddings=tf_pads,
    mode='CONSTANT',
    constant_values=input_tensor.dtype.min
)
```

(onnx2tf/onnx2tf/ops/MaxPool.py, lines 206 to 232 at 32534d7)
Just changing the padding value to a larger value changed the output; however, the error is still large. Padding `constant_values`: …
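Why would the padding value matter so much? With post-training INT8 quantization, the affine scale and zero-point of a tensor are derived from its observed value range, so a single `float32.min` padding element dominates the calibration. A minimal illustration of that textbook quantization math (assumed generic behavior, not onnx2tf or TFLiteConverter internals):

```python
import numpy as np

# Affine int8 quantization: scale/zero-point derived from the observed
# min/max of a tensor (generic formulation, assumed for illustration).
def quant_params(t_min: float, t_max: float, qmin: int = -128, qmax: int = 127):
    scale = (t_max - t_min) / (qmax - qmin)
    zero_point = int(round(qmin - t_min / scale))
    return scale, zero_point

# Plausible activation range without padding interference (assumed values):
print(quant_params(-6.0, 6.0))  # fine-grained scale, ~0.047
# Same tensor once float32.min padding enters the observed range:
print(quant_params(float(np.finfo(np.float32).min), 6.0))
# The scale explodes to ~1.3e36, so every real activation in [-6, 6]
# collapses into a single int8 bin and the layer output is destroyed.
```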
Implemented a workaround to deal with the problem that padding with the minimum value causes the output error of `MaxPool2D` to be maximized only when quantizing with INT8 quantization. #444
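A minimal sketch of the idea behind such a workaround, assuming (this is not necessarily the exact 1.15.10 code) that `dtype.min` is swapped for a finite negative constant still far below any realistic activation:

```python
import tensorflow as tf

# Assumed illustration of the workaround idea (not necessarily the exact
# onnx2tf 1.15.10 code): pad with a large-but-finite negative constant
# instead of dtype.min, so MaxPool still ignores the padding while the
# INT8 calibration range stays sane. -1.0e5 is a hypothetical magnitude.
SAFE_PAD_VALUE = -1.0e5

input_tensor = tf.random.uniform([1, 8, 8, 3])  # stand-in activation map
tf_pads = [[0, 0], [1, 1], [1, 1], [0, 0]]      # NHWC padding example

padded_tensor = tf.pad(
    tensor=input_tensor,
    paddings=tf_pads,
    mode='CONSTANT',
    constant_values=SAFE_PAD_VALUE,
)
pooled = tf.nn.max_pool2d(padded_tensor, ksize=3, strides=2, padding='VALID')
```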
Probably resolved: https://github.com/PINTO0309/onnx2tf/releases/tag/1.15.10
Can you confirm that this has been resolved @MustafaYounes1? Planning to integrate this in: https://github.com/mikel-brostrom/yolo_tracking
Thanks @PINTO0309 for your efforts!! I did a test on my custom data, and the int8 tflite model is working quite well with the adjusted padding constant value (about a 0.4 accuracy drop, which is pretty acceptable). I totally understand your concerns regarding the integration. @mikel-brostrom, hope you have got your answer.
Thx @MustafaYounes1 for letting me know 😄
@MustafaYounes1 @mikel-brostrom …
Yup 😄. This serves me well 👍
I'll gladly discuss other potential issues with you again, @PINTO0309. Thank you very much, and since we can now get a valid quantized OSNet, I will close this issue.
Issue Type
Others
OS
Linux
onnx2tf version number
1.15.8
onnx version number
1.13.1
onnxruntime version number
1.15.0
onnxsim (onnx_simplifier) version number
0.4.33
tensorflow version number
2.13.0
Download URL for ONNX
osnet_quant_issue.zip
Parameter Replacement JSON
None
Description
Source Model Information
OSNet is a person-reid model that was trained using PyTorch and converted to ONNX with pre-trained ImageNet weights.
onnx2tf conversion command
The quantization process was calibrated using 100 samples from the DukeMTMC person-reid dataset; the samples were normalized between 0 and 1 and preprocessed accordingly.
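For reference, a minimal onnx2tf invocation that emits an INT8 tflite alongside Float32 (a hypothetical reconstruction; the exact flags used here are an assumption) is:

```
# hypothetical reconstruction; the original command was not preserved
onnx2tf \
  -i osnet_x1_0_fp_32_bs_1.onnx \
  -oiqt
```

where `-oiqt` requests the integer-quantized tflite outputs in addition to Float32.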
Issue Description
I checked the accuracy of the converted float32 tflite model, and it was pretty much the same as the source model. However, when I checked the accuracy of the int8 model, I encountered a catastrophic accuracy drop (more than 95%).
I read Section 7 of the README file, which clearly states that this could be a matter of the model structure. Is there any way to fix this problem?
Resources
You can find the following resources in the attached zip file:
- `osnet_x1_0_fp_32_bs_1.onnx`: the source ONNX model.
- `osnet_x1_0_imagenet_fp32_bs_1_float32.tflite`: the output fp32 tflite model.
- `osnet_x1_0_imagenet_fp32_bs_1_integer_quant.tflite`: the output int8 tflite model.
- `accuracy_check.py`: a Python script that takes the fp32/int8 tflite models and an input image, runs both models on the image, and measures the cosine similarity of the output embeddings, which simplifies the accuracy check on your end (a sketch of this kind of check appears below).
- `0001_c6_f0030809.jpg`: an input image sample from the DukeMTMC dataset.
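For reference, a minimal sketch of the kind of check `accuracy_check.py` performs (an assumed reimplementation, not the attached script): run both tflite models on one preprocessed image and compare the cosine similarity of the embeddings.

```python
import numpy as np
import tensorflow as tf

def embed(model_path: str, image: np.ndarray) -> np.ndarray:
    """Run a tflite model on one image and return its flattened embedding."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    # Note: if the model has quantized I/O, the input must first be mapped
    # through inp["quantization"] (scale, zero_point); omitted here.
    interpreter.set_tensor(inp["index"], image.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"]).ravel().astype(np.float32)

# Placeholder input; the real script uses a preprocessed DukeMTMC crop
# normalized to [0, 1] (input shape assumed to be NHWC 1x256x128x3).
image = np.random.rand(1, 256, 128, 3).astype(np.float32)
e_fp32 = embed("osnet_x1_0_imagenet_fp32_bs_1_float32.tflite", image)
e_int8 = embed("osnet_x1_0_imagenet_fp32_bs_1_integer_quant.tflite", image)
cos = float(np.dot(e_fp32, e_int8) /
            (np.linalg.norm(e_fp32) * np.linalg.norm(e_int8)))
print(f"cosine similarity between fp32 and int8 embeddings: {cos:.4f}")
```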