[Frontend][PaddlePaddle] Add 10+ operators for PaddlePaddle #9126
Conversation
This is really big, so it's hard to catch all of the edge cases in a review, but it looks okay, with the standard caveats that dynamic support might not be fully there yet. Not seeing anything I disapprove of, but I'd like to get more eyes on it before approving.
except Exception as e:
    msg = "Dynamic shape is not supported in SAME padding algorithm while stride!=1"
    raise tvm.error.OpAttributeInvalid(msg) from e
Just as a heads up, I supported SAME padding in the ONNX frontend with dynamic shapes here:
tvm/python/tvm/relay/frontend/onnx.py
Lines 412 to 472 in d0c6ca5
def autopad(
    data,
    strides,
    kernel_shape,
    dilations,
    ndim,
    pad_type="constant",
    deconv=False,
    mode="SAME_UPPER",
    pad_value=0.0,
):
    """
    Perform autopadding with dynamic input shapes
    """
    # get attributes as constants
    strides = _op.const(np.array(strides), dtype="int64")
    dilated_kernel_shape = _op.const(
        np.array(
            [(kernel - 1) * dilation + 1 for kernel, dilation in zip(kernel_shape, dilations)]
        ),
        dtype="int64",
    )
    # get input shape
    shape = _op.strided_slice(shape_of(data, dtype="int64"), [2], [ndim])
    # set up integer constants
    zero = _op.const(0, dtype="int64")
    one = _op.const(1, dtype="int64")
    two = _op.const(2, dtype="int64")
    # Calculate total padding
    mod = _op.mod(shape, strides)
    left = _op.maximum(dilated_kernel_shape - strides, zero)
    right = _op.maximum(dilated_kernel_shape - mod, zero)
    total_pad = _op.where(_op.equal(mod, zero), left, right)
    if deconv:
        total_pad = _op.const(np.array(kernel_shape), dtype="int64") - one - total_pad
    # split total padding into before and after
    pad_before = _op.floor_divide(total_pad, two)
    pad_after = total_pad - pad_before
    # combine
    if "LOWER" in mode:
        pad = _op.concatenate(
            [_op.reshape(pad_after, [-1, 1]), _op.reshape(pad_before, [-1, 1])], axis=1
        )
    else:
        pad = _op.concatenate(
            [_op.reshape(pad_before, [-1, 1]), _op.reshape(pad_after, [-1, 1])], axis=1
        )
    # pad N and C with zeros
    pad = _op.concatenate([_op.const(np.zeros([2, 2], dtype="int64"), dtype="int64"), pad], axis=0)
    if isinstance(pad_value, (float, int)):
        pad_value = _op.const(pad_value)
    return _op.nn.pad(data, fold_constant(pad), pad_value, pad_type)
It's fairly complicated, I'm totally cool if you want to punt on that until you need it.
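If it helps later, a hypothetical call site could look like the sketch below. The `inputs`/`attr` names and attribute keys are assumptions borrowed from typical ONNX conv attributes, not this PR's code:

```python
# Hypothetical conv converter fragment (sketch only; attr names assumed).
data = inputs[0]
if attr.get("auto_pad", "NOTSET") in ("SAME_UPPER", "SAME_LOWER"):
    # Fold SAME padding into the data itself, so it also works when the
    # input shape is only known at runtime.
    data = autopad(
        data,
        attr["strides"],
        attr["kernel_shape"],
        attr["dilations"],
        ndim=4,  # e.g. NCHW input
        mode=attr["auto_pad"],
    )
    attr["pads"] = [0] * 4  # explicit padding is now all zeros
```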
Thanks, this will solve a big problem! But to avoid making this pull request more complicated to review, let's leave this for the next pull request.
Some initial comments. I've gotten up to convert_fill_constant and will review the rest later. Agree with mbrookhart; this is pretty big, so we might need more eyes.
Can you send the most up-to-date English docs, btw?
Also, does PaddlePaddle support operator versioning? How will you handle API changes in the future?
        inputs[i] = input_op.astype(max_dtype)
    return inputs


def shape_of(x, dtype="int32"):
Can you use python/tvm/relay/frontend/common.py::infer_shape?
We referred to the ONNX frontend; this function also comes from there: https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/onnx.py#L1411
It's a little different from common.py::infer_shape.
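For context, the linked ONNX helper is roughly the sketch below: it folds the shape into a Relay constant when the type is fully static and only emits a runtime shape_of op for dynamic inputs, whereas common.py::infer_shape yields a compile-time Python list rather than a Relay expression:

```python
def shape_of(x, dtype="int64"):
    # Fold the shape into a constant when the checked type is fully static.
    ttype = infer_type(x).checked_type
    if not _ty.is_dynamic(ttype):
        return _expr.const(list(ttype.shape), dtype)
    # Otherwise compute the shape at runtime.
    return _op.shape_of(x, dtype)
```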
    return _op.shape_of(x, dtype)


def _get_pad_size(in_size, dilated_kernel_size, stride_size):
    """calculate the paddings size"""


def _infer_value(x, params):
Can you use python/tvm/relay/frontend/common.py::infer_value?
Done. I just found there's try_infer_value in common.py; this function has been removed.
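For reference, common.py::try_infer_value returns a (value, success) pair, so converters can branch on whether constant folding worked. A usage sketch (surrounding names like `shape_expr` are made up for illustration):

```python
from tvm.relay.frontend.common import try_infer_value

# Try to evaluate `shape_expr` to a concrete value at conversion time.
value, success = try_infer_value(shape_expr, on_success=lambda v: v.tolist())
if success:
    out = _op.reshape(x, newshape=value)       # static path: plain list attribute
else:
    out = _op.reshape(x, newshape=shape_expr)  # dynamic path: relay expression
```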
@@ -40,28 +41,108 @@
__all__ = ["from_paddle"]


def _get_pad_size(in_size, dilated_kernel_size, stride_size):
    """calculate the paddings size"""
In general, docstrings should be complete sentences: end with a period and capitalize the first letter, e.g. "Calculate the paddings size."
Please fix the other docstrings as well.
Done
    ipt1 = g.get_node(op.input("Y")[0])
    op_func = get_relay_op(op.type)
    out = op_func(ipt0, ipt1)
    g.add_node(op.output("Out")[0], out)


def convert_arg_max(g, op, block):
Do you have a link to the English docs?
https://www.paddlepaddle.org.cn/documentation/docs/en/1.8/api/layers/argmax.html doesn't seem to have some of the attributes listed in the op.
The latest version is now 2.1; API documents: https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/argmax_en.html#argmax
Following the API definition code, we can see there are some attributes not listed in the API's parameters:
https://github.com/PaddlePaddle/Paddle/blob/release/2.1/python/paddle/tensor/search.py#L179
attrs['keepdims'] = keepdim
attrs['axis'] = axis
attrs['flatten'] = flatten
attrs['dtype'] = var_dtype
helper.append_op(
    type='arg_max', inputs={'X': x}, outputs={'Out': [out]}, attrs=attrs)
out.stop_gradient = True
A lot of the logic in argmin and argmax is similar. Refactor to combine the two.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
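For readers following along, the combined converter amounts to something like the sketch below; the names and attribute handling (flatten, dtype) are illustrative and simplified, not the PR's final code:

```python
def _convert_arg_min_max(g, op, block):
    """Shared converter for Paddle's arg_min / arg_max (illustrative sketch)."""
    # Pick the Relay reduction from the Paddle op type.
    relay_func = _op.argmax if op.type == "arg_max" else _op.argmin

    x = g.get_node(op.input("X")[0])
    axis = op.attr("axis")
    keepdims = op.attr("keepdims")

    if op.attr("flatten"):
        # flatten=True means reduce over all elements of the input.
        x = _op.reshape(x, [-1])
        out = relay_func(x, axis=None, keepdims=True)
    else:
        out = relay_func(x, axis=axis, keepdims=keepdims)
    g.add_node(op.output("Out")[0], out)
```

Both op types can then point at the same function in the converter table.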
    axis = op.attr("axis")
    descending = op.attr("descending")

    out = _op.sort(x, axis, not descending)
Consider using _op.gather on the out_indices.
Done
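Concretely, the suggestion replaces the separate _op.sort with a gather over the argsort result, so the values and indices are guaranteed consistent (sketch):

```python
# Compute the sorted indices once ...
out_indices = _op.argsort(x, axis, not descending, dtype="int64")
# ... then gather the input along `axis` with those indices instead of
# calling _op.sort a second time.
out = _op.gather(x, axis, out_indices)
```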
    descending = op.attr("descending")

    out = _op.sort(x, axis, not descending)
    out_indice = _op.argsort(x, axis, not descending, dtype="int64")
nit: out_indices
Done
    x = g.get_node(op.input("X")[0])
    y = g.get_node(op.input("Y")[0])

    out = _op.sum(_op.multiply(x, y), axis=[-1], keepdims=True)
You might want to note the semantics of PaddlePaddle's dot operator, namely how it also operates on 2D vectors (and hence why axis=[-1]). I have not seen this elsewhere.
paddle.dot: https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/dot_en.html
It's similar to torch.dot, but torch.dot only supports 1D tensors. In PaddlePaddle, the inputs should both be 1D or 2D tensors; when they are 2D, the first dimension is the batch dimension.
For clarity, I also put this explanation in the code.
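A small numpy illustration of why multiply-and-sum over the last axis covers both cases (illustrative only):

```python
import numpy as np

# 1D case: a plain dot product.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print((x * y).sum(axis=-1, keepdims=True))  # [32.]

# 2D case: row i of x dotted with row i of y; the first
# dimension acts as the batch dimension.
xb = np.array([[1.0, 2.0], [3.0, 4.0]])
yb = np.array([[5.0, 6.0], [7.0, 8.0]])
print((xb * yb).sum(axis=-1, keepdims=True))  # [[17.], [53.]]
```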
Hi, @AndrewZhaoLuo @mbrookhart
Coming together well. I think I am ok with merging this in the current state as part of a more experimental frontend, but would like another pair of eyes on this.
Still a few comments which will make this better:
- Can you add more substantial test cases to your tests, e.g. different input shapes and different ranks, at least for some of the relevant ops? I feel that some issues might come to light from this. Do not worry about PR size anymore.
- Please add type annotations to functions for this and future PRs, e.g.

  def myFunc(a: int, b: List[str]) -> None:
      ...
    out = act_func(g.get_node(op.input("X")[0]))
    x = g.get_node(op.input("X")[0])
    target_shape = op.attr("target_shape")
    out = _op.broadcast_to(x, target_shape)
I believe you might run into a similar issue as https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/onnx.py#L2257
PaddlePaddle's expand_as doesn't support multi-directional broadcasting, so this problem will not happen in the PaddlePaddle frontend.
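To illustrate the distinction with numpy semantics (illustrative only; np.broadcast_shapes needs numpy >= 1.20):

```python
import numpy as np

x = np.ones([3, 1])

# One-directional broadcast, which _op.broadcast_to implements:
# the target shape fully determines the result shape.
print(np.broadcast_to(x, [3, 4]).shape)  # (3, 4)

# Multi-directional broadcast, as in ONNX Expand: shapes (3, 1) and
# (1, 4) combine to (3, 4). broadcast_to alone cannot express this,
# which is the issue linked above; Paddle's expand_as sidesteps it.
print(np.broadcast_shapes((3, 1), (1, 4)))  # (3, 4)
```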
    new_var,
)

__all__ = ["from_paddle"]


def _get_pad_size(in_size, dilated_kernel_size, stride_size):
    """Calculate the paddings size."""
The docstring should describe what the padding size is for.
Done
Shame about the typing; you can do forward references like:

def g(f: "paddle.paddleblahblah.blah") -> None:

But eh, it's not the end of the world to be untyped since the rest of the frontends are like that. Just one comment about test case sizes.
We'll need another approver though. @mbrookhart ?
"relu", | ||
"tanh", | ||
] | ||
input_shapes = [[128], [2, 256], [1000, 128, 32], [7, 3, 256, 256]] |
Please reduce the size of your test cases to something smaller, e.g. fewer than 256 total elements (totally arbitrary; just as small as possible while still accomplishing the test).
Done. I guess the limit on the number of elements is to reduce testing time?
LGTM
@junrushao1994 Hi, could you help merge this pull request?
Thanks @AndrewZhaoLuo for the review! Thanks @jiangjiajun for the PR!
…Paddle (apache#9126)" This reverts commit c980db3.
…Paddle (apache#9126)" This reverts commit c980db3.
…pache#9126)

* add part of operators
* remove part of operators
* add lookup
* add test
* Update paddlepaddle.py
* modify error message for SAME padding
* Remove some function and old version operator
* Remove some function and old version operator
* Remove some function and old version operator
* Remove some function and old version operator
* add dot test
* modify doc
* remove unreviewed code
* Update paddlepaddle.py
* Update test_forward.py
* Update paddlepaddle.py
* Update paddlepaddle.py
* Update test_forward.py
* Update test_forward.py
* add more cases for tests
* add more cases for tests
* remove annotation
* reduce test case sizes
This pull request is part of #9102; hopefully this will help with the review.
@AndrewZhaoLuo