[CenterFusion] Model Full Integer Quantize problem #222
First, understanding this problem requires knowing two things: how the TFLite Converter is used, and how onnx2tf works internally. Your model appears to have a 3-channel image input and a 64-channel non-image input, as shown in the attached image. If the user does not provide calibration data, the tool attempts to quantize using a default data set that I have prepared. That default data set is a set of 3-channel MS-COCO images. https://github.com/PINTO0309/onnx2tf#cli-parameter
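For context, the default flow (no user-supplied calibration data) looks roughly like this. This is a minimal sketch, assuming the Python API mirrors the `-oiqt` / `--output_integer_quantized_tflite` CLI option documented at the link above; the file name is a placeholder.

```python
import onnx2tf

# Full INT8 quantization using onnx2tf's built-in calibration set
# (3-channel MS-COCO images). This only makes sense when every model
# input is image-like, which is not the case for this model.
onnx2tf.convert(
    input_onnx_file_path="centerfusion.onnx",  # placeholder file name
    output_folder_path="saved_model",
    output_integer_quantized_tflite=True,
)
```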
Debugging lets you inspect the shape of the calibration data set, as shown in the figure below (screenshot from the original comment). The point is this: if your model needs special input data other than images, you must pass the calibration data set to the TFLite Converter yourself. The 64-channel input is too specialized, and it is impossible to determine from the model structure alone what kind of calibration onnx2tf should perform in the first place. Therefore, you need to generate your own Numpy data set and pass it to onnx2tf, as described in the README tutorial linked above.

Also, to flag a potential problem that may arise a little further down the road in this discussion: the TFLite Converter may fail to complete quantization for models with more than two inputs. This is not a problem with onnx2tf, but a limitation of the TFLite Converter itself.
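As a sketch of what passing your own calibration data might look like: this assumes the Python API mirrors the `-cind` / `--custom_input_op_name_np_data_path` CLI option linked above, with one `[op_name, npy_path, mean, std]` entry per input. All op names, file names, and shapes here are placeholders, not taken from this model, and the random arrays stand in for real preprocessed samples.

```python
import numpy as np
import onnx2tf

# Hypothetical calibration arrays: N samples per model input, stacked on
# axis 0. In practice these must be real, preprocessed inference inputs;
# random data is used here only to keep the sketch self-contained.
np.save("calib_image.npy", np.random.rand(10, 3, 448, 800).astype(np.float32))
np.save("calib_radar.npy", np.random.rand(10, 64, 112, 200).astype(np.float32))

onnx2tf.convert(
    input_onnx_file_path="centerfusion.onnx",   # placeholder file name
    output_folder_path="saved_model",
    output_integer_quantized_tflite=True,
    # One entry per input op: [name, npy path, mean, std]. Calibration
    # normalizes as (data - mean) / std; scalars are used here for brevity.
    custom_input_op_name_np_data_path=[
        ["image_input", "calib_image.npy", 0.0, 1.0],  # placeholder op name
        ["radar_input", "calib_radar.npy", 0.0, 1.0],  # placeholder op name
    ],
)
```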
The error message for quantization calibration failures has been improved.
If there is no activity within the next two days, this issue will be closed automatically.
@PINTO0309 Thank you very much! I will try to generate my own npy file for quantization. However, I don't know how to generate the input tensor, mean, and std.
I can only describe a general method for creating calibration data for quantization: simply stack your real, preprocessed input samples along the batch axis into a single Numpy array and save it as an .npy file (see the sketch below).
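A minimal sketch of that stacking step. All paths and shapes are placeholders; the only real requirement is that each sample has gone through the same preprocessing used at inference time.

```python
import glob
import numpy as np

# Collect real, already-preprocessed samples and stack them on the batch axis.
samples = [
    np.load(path).astype(np.float32)                       # each e.g. (64, H, W)
    for path in sorted(glob.glob("calib_samples/*.npy"))   # placeholder folder
]
calib = np.stack(samples, axis=0)   # shape (N, 64, H, W)
np.save("radar_calib.npy", calib)   # pass this file as calibration data
print(calib.shape, calib.dtype)
```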
If there is no activity within the next two days, this issue will be closed automatically.
Thank you for your detailed explanations! We have converted the model into fully quantized INT8 format.
I can only judge from the structure of the model, but it looks good. I am not sure how much the accuracy has degraded; if it deteriorates, please refer to the tutorial on INT8 quantization that I added to the README in the last couple of days.
I will close this issue as the problem seems to be resolved. |
Issue Type
Others
onnx2tf version number
1.7.6
onnx version number
1.13.1
tensorflow version number
2.12.0rc0
Download URL for ONNX
https://drive.google.com/file/d/1yKCN8_2ayeBLOhd6bgeL55pu_HXClO1L/view?usp=share_link
Parameter Replacement JSON
Sorry! I don't have a Replacement JSON file, and I don't know how it works.
Description