ONNX export for variable input sizes #1840
Changes from 58 commits
Why should only `strides` be a tensor, and not `grid_sizes`? Is it because `grid_sizes` is returned by `shape`, and in ONNX this becomes a tensor?
This is because of line 164, where I'm changing int() to torch.tensor(... dtype=int64) for strides. This was not required for grid_sizes.
There's a follow-up item to find out why int() and float() result in constants in the graph.
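The constant-folding behavior can be reproduced with a tiny trace (a sketch, not the torchvision code itself): converting a traced value to a Python number via float()/int()/.item() bakes it into the graph as a constant, while staying in tensor ops keeps it dynamic.

```python
import torch

def constant_version(x):
    # float()/int()/.item() pull the value out as a Python number,
    # so the tracer records it as a fixed constant
    n = float(x.sum())
    return torch.tensor(n)

def tensor_version(x):
    # staying in tensor ops keeps the value dependent on the input
    return x.sum()

traced_const = torch.jit.trace(constant_version, torch.ones(3))
traced_dyn = torch.jit.trace(tensor_version, torch.ones(3))

print(traced_const(torch.ones(5)))  # tensor(3.) -- baked in from the example input
print(traced_dyn(torch.ones(5)))    # tensor(5.) -- follows the actual input
```

Tracing `constant_version` also emits a TracerWarning about converting a tensor to a Python number, which is the same signal behind the int()/float() constants here.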
@lara-hdr Maybe the comment was deleted after the update.
grid_sizes is a list of ints and strides is a list of tensors, so we cannot concatenate them and then cast the result to a string. But casting each to a string and concatenating the strings gives the same result.
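A minimal sketch of the point (illustrative values, not the actual torchvision code): the two lists hold different element types, so string-converting each list separately still yields one usable key.

```python
import torch

grid_sizes = [(25, 38), (13, 19)]                      # plain Python ints
strides = [(torch.tensor(32), torch.tensor(32)),
           (torch.tensor(64), torch.tensor(64))]       # tensors

# Concatenating the mixed-type lists first and stringifying once is
# awkward; stringifying each list and concatenating the strings
# produces an equally usable cache key.
key = str(grid_sizes) + str(strides)
print(type(key) is str)  # True
```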
This line will be replaced by:
# num_anchors_per_level = [torch.prod(shape_as_tensor(o[0])) for o in objectness]
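For context, `shape_as_tensor` lives in `torch.onnx.operators` and keeps the shape in the graph as a tensor, so the product over it stays dynamic rather than collapsing to a Python int. A small standalone check (the feature-map size is made up for illustration):

```python
import torch
from torch.onnx.operators import shape_as_tensor

# e.g. one feature level: (batch, anchors_per_cell, H, W)
objectness = [torch.zeros(2, 12, 25, 38)]

# prod over the shape tensor of one image's feature map counts the
# anchors at this level without leaving the graph
num_anchors_per_level = [torch.prod(shape_as_tensor(o[0])) for o in objectness]
print(num_anchors_per_level[0])  # tensor(11400), i.e. 12 * 25 * 38
```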
Isn't `numel()` dynamic on the input shape?
No, the output is an integer.
Isn't this something that we might want to change on the ONNX side? Because `shape` in PyTorch currently returns a `Size` object, which is a `tuple[int]` under the hood, and using `shape` works in ONNX.
I don't think it can be fixed in the exporter. numel is different from shape, since it is traced as a constant in the IR graph. I tested this with a small repro, but I haven't looked deeper.
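A repro along those lines might look like the following (a sketch, assuming torch.jit.trace; at the time of this PR, numel() was folded to a constant while shape accesses stayed in the graph):

```python
import torch

def with_shape(x):
    # x.shape[0] is recorded as an aten::size node, so the traced
    # graph re-evaluates it on new inputs
    return torch.zeros(x.shape[0])

traced = torch.jit.trace(with_shape, torch.ones(3))
print(traced(torch.ones(5)).shape)  # generalizes to the new input size
print(traced.graph)  # shows the size op; a numel()-based variant would
                     # instead show a baked-in constant
```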
Just wondering, do we need to cast the boxes to `float32` here, or can we just use the `boxes_x` dtype while constructing the scalar tensor?
When using boxes.dtype, I'm actually seeing an error from ONNX Runtime:
[ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (17) of operator (Max) in node (Max_21) is invalid.
Will need to look into the issue.
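A hedged sketch of the float32 route (illustrative values, not the PR's code): casting the boxes first means the scalar bound is built with a float dtype as well, so elementwise ops never receive a tensor(int64) operand.

```python
import torch

boxes = torch.tensor([[-4, 2, 120, 90]])   # int64 boxes, for illustration
# cast before building the scalar bound so both operands share a dtype
boxes = boxes.to(torch.float32)
zero = torch.tensor(0.0, dtype=boxes.dtype)
clipped = torch.max(boxes, zero)           # elementwise max against 0
print(clipped)
```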
From your previous comment:

> Can we make it trace with tensors as well if tensors are passed?
Here we have the same issue. The traced graph for clamp shows min/max as constants even when tensors are passed:

```
%7 : int = prim::Constant[value=2]()
%8 : None = prim::Constant()
%9 : Float(3, 4) = aten::clamp(%0, %7, %8)
```

I haven't looked deeper.
The PR to address this: pytorch/pytorch#32587.
Maybe we can keep this branch until that PR is merged.
I'm not sure if changing it to tensors would help much. Let me know what you think. Thanks.
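Until that PR lands, one tensor-only workaround (a sketch in the spirit of torchvision's `_onnx` helpers, not the actual code) is to express clamp through torch.min/torch.max, whose tensor operands are traced as regular graph inputs rather than constants:

```python
import torch

def clamp_with_tensors(t, min_v, max_v):
    # both bounds stay tensors in the traced graph instead of being
    # recorded as prim::Constant nodes, as aten::clamp's min/max were
    return torch.min(torch.max(t, min_v), max_v)

x = torch.tensor([-2.0, 0.5, 3.0])
lo = torch.tensor(0.0)
hi = torch.tensor(1.0)
print(clamp_with_tensors(x, lo, hi))  # values clamped to [0.0, 1.0]
```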