ONNX export for variable input sizes #1840
Maybe this is using too much memory and is making the tests segfault in CI?
So now we don't need to use `operators.shape_as_tensor`? Does ONNX now trace the `.shape` values from tensors and keep them as dynamic variables?
Yes, now we are able to trace the shapes dynamically.
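For illustration, a minimal sketch (not code from this PR) of what dynamic shape tracing enables: reading `.shape` inside `forward` is recorded as shape operations in the graph, so the export stays valid for other input sizes.

```python
import torch

class ScaleByArea(torch.nn.Module):
    def forward(self, x):
        # h and w are recorded as shape ops in the traced graph
        # (Shape/Gather in ONNX) rather than baked-in constants
        h = x.shape[2]
        w = x.shape[3]
        return x * h * w

# "scale_by_area.onnx" is a hypothetical output path; dynamic_axes marks
# the spatial dimensions as variable in the exported model
torch.onnx.export(
    ScaleByArea(),
    torch.rand(1, 3, 64, 64),
    "scale_by_area.onnx",
    input_names=["x"],
    dynamic_axes={"x": {2: "height", 3: "width"}},
)
```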
With this change, I wonder if lines 124-126 are still necessary?
Also, do we need to pass the `device` in the tensor constructor, so that it works for CUDA as well?
Thanks a lot for pointing this out. The lines here are not needed anymore. I'll remove this part.
Can you also pass the `device` to the tensor constructor?
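For reference, a sketch of what is being suggested (illustrative names, not the PR's exact code): derive the device from an existing tensor so the traced graph also works on CUDA.

```python
import torch

def image_size_tensor(height, width, reference):
    # hypothetical helper: build the tensor on the same device as
    # `reference` instead of defaulting to CPU
    return torch.tensor([height, width], dtype=torch.float32,
                        device=reference.device)
```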
It now looks like this function is completely different (or almost) for ONNX and the other code-path.
Also, would this become unnecessary when pytorch/pytorch#32493 gets merged?
That PR does not address this issue; split with dynamic sizes is currently not supported in tracing.
I agree that the behavior is quite different here. I can implement a separate `_get_top_n_idx_for_onnx`.
What do you think?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I'm actually going to add support for split with dynamic input in the exporter. We won't need these extra code-paths anymore.
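For context, a rough sketch of the kind of code path under discussion (illustrative, not torchvision's actual implementation): split sizes computed from tensor shapes are recorded by the tracer as constants, so without exporter support for dynamic split the resulting ONNX graph is only valid for the traced input size.

```python
import torch

def get_top_n_idx(objectness, num_anchors_per_level, top_n):
    # `num_anchors_per_level` is a list of Python ints; under tracing,
    # these split sizes are baked into the graph as constants
    r = []
    offset = 0
    for ob in objectness.split(num_anchors_per_level, dim=1):
        num_anchors = ob.shape[1]
        pre_nms_top_n = min(top_n, num_anchors)
        _, top_n_idx = ob.topk(pre_nms_top_n, dim=1)
        r.append(top_n_idx + offset)
        offset += num_anchors
    return torch.cat(r, dim=1)
```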
This line will be replaced by:
`# num_anchors_per_level = [torch.prod(shape_as_tensor(o[0])) for o in objectness]`
Isn't `numel()` dynamic on the input shape?
No, the output is an integer.
Isn't this something that we might want to change on the ONNX side? Because `shape` in PyTorch currently returns a `Size` object, which is a `tuple[int]` under the hood, and using `shape` works in ONNX.
I don't think it can be fixed in the exporter. `numel` is different from `shape`, since it is traced as a constant in the IR graph. I tested this with a small repro, but I haven't looked deeper.
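A small repro along the lines described (hypothetical, not necessarily the one used) shows the difference: `numel()` returns a Python int that the tracer records as a constant, while `shape_as_tensor` stays in the graph.

```python
import torch
from torch.onnx import operators

def f(x):
    n = x.numel()                                         # Python int -> constant
    n_dynamic = torch.prod(operators.shape_as_tensor(x))  # stays a graph op
    return x.reshape(n), n_dynamic

traced = torch.jit.trace(f, torch.rand(2, 3))
print(traced.graph)  # the reshape target appears as a constant, not a shape op
```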
I believe this will break TorchScript compatibility, because it would infer that `height` and `width` are `int`, which does not have a `.to` method.
Would it be possible to make `clamp` work with dynamic values? In PyTorch, `.clamp` already supports tensor values for `min`/`max`.
I believe `clamp` is traced with constants for `min`/`max`:
`Tensor clamp(const Tensor & self, c10::optional<Scalar> min, c10::optional<Scalar> max)`
Would this cause a failure with the branch `if torchvision._is_tracing():`?
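If it helps, one way to keep the bounds dynamic without relying on `clamp` (a sketch, assuming tensor-valued bounds are available at that point):

```python
import torch

def clamp_dynamic(x, min_t, max_t):
    # torch.max / torch.min with tensor arguments stay in the traced
    # graph, unlike scalar min/max, which get recorded as constants
    return torch.min(torch.max(x, min_t), max_t)
```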
From your previous comment: can we make it trace with tensors as well, if tensors are passed?
Here we have the same issue: the traced graph for `clamp` shows `min`/`max` as constants even when tensors are passed:
%7 : int = prim::Constant[value=2]()
%8 : None = prim::Constant()
%9 : Float(3, 4) = aten::clamp(%0, %7, %8)
I haven't looked deeper.
The PR to address this: pytorch/pytorch#32587.
Maybe we can keep this branch until that PR is merged.
I'm not sure if changing it to tensors would help much. Let me know what you think. Thanks.
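For reference, a minimal repro (hypothetical, reflecting the behavior described at the time of this thread; newer PyTorch versions trace tensor bounds differently) that produces a graph like the one above:

```python
import torch

def f(x):
    # the 0-dim tensor bound ends up recorded as a constant
    # (prim::Constant[value=2]) rather than as a graph input
    return x.clamp(torch.tensor(2), None)

traced = torch.jit.trace(f, torch.rand(3, 4))
print(traced.graph)
```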