
Fix inception v3 input transform for trace & onnx #621

Merged: 2 commits into pytorch:master on Oct 25, 2018

Conversation

BowenBao (Contributor) commented on Oct 8, 2018:

  • The input transform consists of in-place updates, which cause issues for tracing
    and for exporting to ONNX (a minimal sketch of the failure mode follows below).
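A minimal sketch of the failure mode (illustrative code, not from the PR; the function names are made up): tracing runs the function on the example input, and an in-place slice assignment mutates that input, so the post-trace consistency check can end up comparing against already-transformed values.

import torch

def transform_inplace(x):
    # In-place slice assignment: mutates the example input during tracing.
    x[:, 0] = x[:, 0] * 2.0
    return x

def transform_out_of_place(x):
    # Out-of-place equivalent: assemble a new tensor, leave the input intact.
    return torch.cat((x[:, 0:1] * 2.0, x[:, 1:]), 1)

x = torch.randn(4, 3, 8, 8)
torch.jit.trace(transform_inplace, x.clone())    # may emit a TracerWarning
torch.jit.trace(transform_out_of_place, x)       # traces cleanly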

The tracing issue can be reproduced with:

import torch
from torchvision import models

dummy_input = torch.randn(10, 3, 299, 299).cuda()
inception_model = models.inception_v3(pretrained=True).cuda()
inception_model = inception_model.eval()
torch.jit.trace(inception_model, dummy_input)

which emits:

TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 735] (3.154785633087158 vs. 0.27894413471221924) and 9999 other locations (100.00%)
  _check_trace([example_inputs], func, executor_options, module, check_tolerance)

This fix follows the Tracer Warnings section of https://pytorch.org/docs/master/jit.html.

* Input transform uses in-place updates, which cause issues for tracing and exporting to ONNX.

Before (in-place updates mutate the input x):

x[:, 0] = x[:, 0] * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
x[:, 1] = x[:, 1] * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
x[:, 2] = x[:, 2] * (0.225 / 0.5) + (0.406 - 0.5) / 0.5

After (out-of-place, building a new tensor):

x = torch.cat((
    torch.unsqueeze(x[:, 0], 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5,
    torch.unsqueeze(x[:, 1], 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5,
    torch.unsqueeze(x[:, 2], 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5,
), 1)
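As a quick sanity check (not part of the PR), the two forms can be compared numerically on a cloned input; they implement the same per-channel affine transform:

import torch

x = torch.randn(2, 3, 299, 299)

# In-place form, applied to a clone so the original input survives.
a = x.clone()
a[:, 0] = a[:, 0] * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
a[:, 1] = a[:, 1] * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
a[:, 2] = a[:, 2] * (0.225 / 0.5) + (0.406 - 0.5) / 0.5

# Out-of-place form from the patch.
b = torch.cat((
    torch.unsqueeze(x[:, 0], 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5,
    torch.unsqueeze(x[:, 1], 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5,
    torch.unsqueeze(x[:, 2], 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5,
), 1)

assert torch.allclose(a, b)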


soumith (Member) left a comment:

looks correct to me. thanks a lot for this patch!

soumith merged commit 85369e3 into pytorch:master on Oct 25, 2018.
The transform as merged:

x_ch0 = torch.unsqueeze(x[:, 0], 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
x_ch1 = torch.unsqueeze(x[:, 1], 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
x_ch2 = torch.unsqueeze(x[:, 2], 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
x = torch.cat((x_ch0, x_ch1, x_ch2), 1)
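With the out-of-place transform, tracing and ONNX export (which traces internally) should proceed without the output-mismatch warning. A usage sketch, assuming the torchvision API of that era (the output filename is arbitrary):

import torch
from torchvision import models

model = models.inception_v3(pretrained=True).eval()
dummy = torch.randn(1, 3, 299, 299)

traced = torch.jit.trace(model, dummy)           # no mismatch warning expected
torch.onnx.export(model, dummy, "inception_v3.onnx")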

