AssertionError: Not equal to tolerance rtol=0.001, atol=1e-05 #1272
Comments
/assigntome
/assigntome
Hello, could you please share the specific pytorch-segmentation model you are trying to use and your other code settings, so that I can try to reproduce your issue on my end?
In my testing, there is no longer an assertion error related to tolerance, and the two output results fall within the tolerance range.
I would suggest you check your package versions and retry first.
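For example, the relevant versions can be printed like this (a minimal sketch, assuming the standard torch, onnx, and onnxruntime packages are installed):

import torch
import onnx
import onnxruntime

print("torch:", torch.__version__)
print("onnx:", onnx.__version__)
print("onnxruntime:", onnxruntime.__version__)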
This issue has been unassigned due to inactivity. If you are still planning to work on this, you can still send a PR referencing this issue.
/assigntome |
I don't think this is an issue with the tutorial. The tutorial works as expected, and #1272 (comment) shows that it also works for a segmentation model. Without any additional details from the submitter, we should close this issue. cc: @svekars
Thanks @bjhargrave - will close and grant half credit for investigation. |
Recently I have been converting a PyTorch segmentation model to an ONNX model. I can export the ONNX model, it passes onnx.checker.check_model(), and I can run inference with ONNX Runtime. But when I use np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], rtol=1e-03, atol=1e-05) to compare the ONNX Runtime and PyTorch results, I get an AssertionError like the following:
AssertionError:
Not equal to tolerance rtol=0.001, atol=1e-05
Mismatched elements: 20827169 / 20971520 (99.3%)
Max absolute difference: 1.8859415
Max relative difference: 1008390.8
x: array([[[[ 1.165803e+01, 1.163278e+01, 1.160753e+01, ...,
1.179392e+01, 1.176985e+01, 1.174578e+01],
[ 1.167064e+01, 1.164517e+01, 1.161970e+01, ...,...
y: array([[[[11.636896, 11.6166 , 11.596304, ..., 12.943967, 12.909642,
12.875318],
[11.656967, 11.636346, 11.615723, ..., 12.954525, 12.920053,...
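For context, assert_allclose flags an element as mismatched when it fails the elementwise check abs(actual - desired) <= atol + rtol * abs(desired). A small illustrative check using the first elements of x and y above:

rtol, atol = 1e-03, 1e-05
actual, desired = 11.65803, 11.636896  # first elements of x (PyTorch) and y (ONNX Runtime) above
print(abs(actual - desired) <= atol + rtol * abs(desired))  # False: ~0.021 exceeds ~0.0116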
The code snippet to export the model is as follows:
import torch

model.eval()  # put the model in inference mode before exporting
batch_size = 1
input_shape = (3, 512, 512)
# x = torch.autograd.Variable(torch.randn(batch_size, *input_shape))
x = torch.rand(batch_size, *input_shape, requires_grad=True)  # dummy input for tracing
torch.onnx.export(model, x, model_file_name + '.onnx',
                  export_params=True, opset_version=11, verbose=False)
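For completeness, the comparison step that raises the assertion above follows the tutorial's pattern, roughly like this (a sketch; to_numpy and the session setup are taken from the tutorial, and model, x, and model_file_name are the ones defined above):

import numpy as np
import onnxruntime

def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

torch_out = model(x)  # reference output from the PyTorch model

ort_session = onnxruntime.InferenceSession(model_file_name + '.onnx')
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x)}
ort_outs = ort_session.run(None, ort_inputs)

# the check that raises the AssertionError shown above
np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], rtol=1e-03, atol=1e-05)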
In this tutorial, https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html, it says that if the results do not match, then there is an issue in the ONNX exporter. But I don't know where the mistake is.
cc @BowenBao @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen