[Frontend][PaddlePaddle] Add autopad for conv/pool #9295
Conversation
merge to newest code
@areusch @comaniac @AndrewZhaoLuo @mbrookhart Hi, could you help review this pull request? All the tests have passed.
I'll take a look today.
Some comments; overall LGTM, though I may need to read the spec a bit more closely.
```
@@ -43,18 +44,56 @@
__all__ = ["from_paddle"]

def _get_pad_size(in_size, dilated_kernel_size, stride_size):
    """Calculate the paddings size for Conv/Pool in SAME padding mode."""
def _autopad(
```
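For context, here is a plain-Python sketch of the static SAME-padding computation a helper like `_get_pad_size` performs. This is a reconstruction from the formula in the relay `autopad` shown below, not the PR's exact code; the `[pad_before, pad_after]` return convention is an assumption.

```python
def _get_pad_size(in_size, dilated_kernel_size, stride_size):
    """Return [pad_before, pad_after] so output size == ceil(in_size / stride_size)."""
    if in_size % stride_size == 0:
        total = max(dilated_kernel_size - stride_size, 0)
    else:
        total = max(dilated_kernel_size - in_size % stride_size, 0)
    # The extra pixel (odd total) goes after, matching SAME_UPPER behavior.
    return [total // 2, total - total // 2]

print(_get_pad_size(13, 4, 3))  # [1, 2]: output = (13 + 3 - 4) // 3 + 1 = 5 = ceil(13/3)
```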
Can you just use this?

tvm/python/tvm/relay/frontend/onnx.py, lines 412 to 472 at d0c6ca5:
```python
def autopad(
    data,
    strides,
    kernel_shape,
    dilations,
    ndim,
    pad_type="constant",
    deconv=False,
    mode="SAME_UPPER",
    pad_value=0.0,
):
    """
    Perform autopadding with dynamic input shapes
    """
    # get attributes as constants
    strides = _op.const(np.array(strides), dtype="int64")
    dilated_kernel_shape = _op.const(
        np.array(
            [(kernel - 1) * dilation + 1 for kernel, dilation in zip(kernel_shape, dilations)]
        ),
        dtype="int64",
    )
    # get input shape
    shape = _op.strided_slice(shape_of(data, dtype="int64"), [2], [ndim])
    # set up integer constants
    zero = _op.const(0, dtype="int64")
    one = _op.const(1, dtype="int64")
    two = _op.const(2, dtype="int64")
    # Calculate total padding
    mod = _op.mod(shape, strides)
    left = _op.maximum(dilated_kernel_shape - strides, zero)
    right = _op.maximum(dilated_kernel_shape - mod, zero)
    total_pad = _op.where(_op.equal(mod, zero), left, right)
    if deconv:
        total_pad = _op.const(np.array(kernel_shape), dtype="int64") - one - total_pad
    # split total padding into before and after
    pad_before = _op.floor_divide(total_pad, two)
    pad_after = total_pad - pad_before
    # combine
    if "LOWER" in mode:
        pad = _op.concatenate(
            [_op.reshape(pad_after, [-1, 1]), _op.reshape(pad_before, [-1, 1])], axis=1
        )
    else:
        pad = _op.concatenate(
            [_op.reshape(pad_before, [-1, 1]), _op.reshape(pad_after, [-1, 1])], axis=1
        )
    # pad N and C with zeros
    pad = _op.concatenate([_op.const(np.zeros([2, 2], dtype="int64"), dtype="int64"), pad], axis=0)
    if isinstance(pad_value, (float, int)):
        pad_value = _op.const(pad_value)
    return _op.nn.pad(data, fold_constant(pad), pad_value, pad_type)
```
Refactor `_autopad` in the onnx.py file to tvm/python/tvm/relay/frontend/common.py.
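As a usage sketch of the helper above: pad first, then convolve with zero padding. The import path assumes the post-refactor location in relay/frontend/common.py, and all shapes and argument values here are illustrative.

```python
import tvm
from tvm import relay
from tvm.relay.frontend.common import autopad  # assumed post-refactor location

# NCHW input with a dynamic height; autopad emits the pad amount as relay ops.
data = relay.var("x", shape=(1, 3, relay.Any(), 32), dtype="float32")
weight = relay.var("w", shape=(8, 3, 3, 3), dtype="float32")
# ndim is the full input rank (4 for NCHW); only the spatial dims get padded.
padded = autopad(data, strides=[2, 2], kernel_shape=[3, 3], dilations=[1, 1], ndim=4)
out = relay.nn.conv2d(padded, weight, strides=(2, 2), padding=(0, 0), dilation=(1, 1))
```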
Good advice. I think this function also works for TensorFlow and TFLite to solve the dynamic shape problem. `shape_of` and `autopad` have both been moved to common.py.
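Concretely, other frontends can then share the helpers; a minimal sketch, assuming the module path named in the commit message (python/tvm/relay/frontend/common.py):

```python
# Inside a frontend converter module, after the refactor:
from tvm.relay.frontend.common import autopad, shape_of
```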
```python
pad_h = _get_pad_size(in_h, (k_h - 1) * dilations[0] + 1, strides[0])
pad_w = _get_pad_size(in_w, (k_w - 1) * dilations[1] + 1, strides[1])
paddings = [pad_h[0], pad_w[0], pad_h[1], pad_w[1]]
dilations = [1, 1]
```
Do we mean to override the dilations?
This is a historical issue in the Paddle framework: when padding == SAME, it forces dilations = 1. Here is the implementation code: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/operators/conv_op.h#L113
To avoid confusion, I added a comment on that line of code to explain the problem.
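A sketch of what that commented line might look like; the guard condition and the comment wording are illustrative, since only `dilations = [1, 1]` appears in the diff above:

```python
if padding_algorithm == "SAME":
    # PaddlePaddle forces dilations to 1 when the padding mode is SAME; see
    # the Paddle implementation linked above (paddle/fluid/operators/conv_op.h).
    dilations = [1, 1]
```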
Refactor autopad in the onnx.py and paddlepaddle.py to relay/frontend…
LGTM
* Add autopad for conv/pool
* add autopad for conv/pool
* fix pylint warning
* add some annotations
* add som annotations
* add som annotations
* Refactor autopad in the onnx.py and paddlepaddle.py to relay/frontend/common.py
* add comment for conv2d

Co-authored-by: heliqi <1101791222@qq.com>
This will solve the problem with dynamic input shapes when padding == SAME in conv2d/pool2d. It also adds pad3d and squeeze; these two operators are used for padding the tensor. The _autopad function refers to the ONNX frontend. @mbrookhart

Thanks for contributing to TVM! Please refer to the guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers by @-ing them in the pull request thread.
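For illustration, a minimal sketch of the case this PR fixes (shapes and values are hypothetical): with a dynamic spatial dimension, the SAME padding depends on the runtime shape, so it must be built from relay ops, mirroring the autopad arithmetic, rather than from static conv2d padding attributes.

```python
import tvm
from tvm import relay

# NCHW input whose height is only known at runtime.
data = relay.var("x", shape=(1, 3, relay.Any(), 224), dtype="float32")
h = relay.strided_slice(relay.shape_of(data, dtype="int64"), [2], [3])
dk = relay.const(3, "int64")   # dilated kernel size (kernel 3, dilation 1)
s = relay.const(2, "int64")    # stride
zero = relay.const(0, "int64")
mod = relay.mod(h, s)
# Same formula as autopad: total pad that makes output == ceil(height / stride).
total_pad = relay.where(
    relay.equal(mod, zero),
    relay.maximum(dk - s, zero),
    relay.maximum(dk - mod, zero),
)
```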