[PaddlePaddle Hackathon 4][Frontend][Paddle] Add tile/mish/stack/unstack/silu/softshrink/where op for paddle frontend #14160
Conversation
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.
Generated by tvm-bot
```python
def convert_tile(g, op, block):
    """Operator converter for tile."""

    x = g.get_node(op.input("X")[0])
```
There are some cases that are not covered; please refer to https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/paddle2onnx/mapper/tensor/tile.cc#L33 for its implementation.
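(For reference, a minimal sketch of how those extra cases might be covered, loosely following the Paddle2ONNX mapper linked above. The input names `RepeatTimes` / `repeat_times_tensor`, the attribute `repeat_times`, and the helpers `try_infer_value` / `g.get_params` are assumptions here, not confirmed against this PR.)

```python
def convert_tile(g, op, block):
    """Operator converter for tile, covering tensor-valued repeat_times (sketch)."""
    x = g.get_node(op.input("X")[0])
    if op.input("RepeatTimes"):
        # repeat_times given as a single tensor input (assumed input name)
        reps = g.get_node(op.input("RepeatTimes")[0])
        reps, inferred = try_infer_value(reps, g.get_params())
        if inferred:
            reps = reps.tolist()
    elif op.input("repeat_times_tensor"):
        # repeat_times given as a list of scalar tensors (assumed input name)
        reps = [g.get_node(r).astype("int32") for r in op.input("repeat_times_tensor")]
        reps = _op.concatenate(reps, axis=0)
        reps, inferred = try_infer_value(reps, g.get_params())
        if inferred:
            reps = reps.tolist()
    else:
        # repeat_times given as a static attribute
        reps = op.attr("repeat_times")
    out = _op.tile(x, reps=reps)
    g.add_node(op.output("Out")[0], out)
```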
Also, please update the test case.
OK, I fixed it.
Due to the old Paddle version (2.1.3) in the ci-gpu Docker image, mish and tile may have compatibility problems.
```python
class Tile5(nn.Layer):
    @paddle.jit.to_static
    def forward(self, inputs):
        reps = paddle.to_tensor([3, 2])
        return paddle.tile(inputs, repeat_times=reps)

input_shapes = [[10], [2, 3], [5, 10, 11], [3, 4, 5, 6]]
for input_shape in input_shapes:
    input_data = paddle.randn(shape=input_shape, dtype="float32")
    verify_model(Tile5(), input_data=input_data)
    verify_model(Tile6(), input_data=input_data)
```

The above code raises a `Tensor`-related error in version 2.1, but in version 2.4 it works well and passes the test cases. In Paddle 2.1.3, even when `reps` is created with `reps = paddle.to_tensor([3, 2], dtype='int32')`, you must explicitly cast again with `reps = paddle.cast(reps, "int32")` to avoid this error.
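(For clarity, here is how that workaround might look folded back into the test case above; this is just an illustration, not the exact code in the PR.)

```python
class Tile5(nn.Layer):
    @paddle.jit.to_static
    def forward(self, inputs):
        # Explicit int32 dtype plus an extra cast so Paddle 2.1.3 accepts a
        # tensor-valued repeat_times; Paddle 2.4 works without the extra cast.
        reps = paddle.to_tensor([3, 2], dtype="int32")
        reps = paddle.cast(reps, "int32")
        return paddle.tile(inputs, repeat_times=reps)
```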
Let me know if you need a CI update for paddle. cc @jiangjiajun
@masahi It would be better to upgrade paddlepaddle; could you help update the CI image? Let's discuss in pull request #14206
```python
add_threshold = _op.add(x, threshold_right)
condition_2 = tvm.relay.less(calc_right, threshold_left)
calc_left = tvm.relay.where(condition_2, add_threshold, calc_right)
g.add_node(op.output("Out")[0], calc_left)
```
Can you refer to the following code to optimize this implementation? (tvm/python/tvm/relay/frontend/onnx.py, lines 2280 to 2291 at befdc4e)
```python
class Shrink(OnnxOpConverter):
    """Operator converter for Shrink."""

    @classmethod
    def _impl_v9(cls, inputs, attr, params):
        x = inputs[0]
        dtype = infer_type(x).checked_type.dtype
        lambd = _op.const(attr.get("lambd", 0.5), dtype=dtype)
        bias = _op.const(attr.get("bias", 0.0), dtype=dtype)
        zeros = _op.zeros_like(x)
        return _op.where(x < -lambd, x + bias, zeros) + _op.where(x > lambd, x - bias, zeros)
```
OK, I've done it.
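(For reference, a sketch of what the optimized softshrink converter could look like when it follows the ONNX Shrink pattern above; the attribute name `lambda` for the threshold is an assumption, not confirmed here.)

```python
def convert_softshrink(g, op, block):
    """Operator converter for softshrink, following the Shrink pattern (sketch)."""
    x = g.get_node(op.input("X")[0])
    dtype = infer_type(x).checked_type.dtype
    threshold = _op.const(op.attr("lambda"), dtype=dtype)  # attribute name assumed
    zeros = _op.zeros_like(x)
    out = _op.where(x < -threshold, x + threshold, zeros) + _op.where(
        x > threshold, x - threshold, zeros
    )
    g.add_node(op.output("Out")[0], out)
```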
```python
softplus = _op.where(x > threshold, x, log)
tanh = _op.tanh(softplus)
out = _op.multiply(x, tanh)
g.add_node(op.output("Out")[0], out)
```
I modified the implementation so that it only handles the case of x <= threshold (beta = 1.0).
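(A sketch of that simplified version, handling only beta = 1.0 where softplus(x) = log(1 + exp(x)); the structure is inferred from the diff above and is not the verbatim PR code.)

```python
def convert_mish(g, op, block):
    """Operator converter for mish: out = x * tanh(softplus(x)), beta = 1.0 (sketch)."""
    x = g.get_node(op.input("X")[0])
    dtype = infer_type(x).checked_type.dtype
    # softplus(x) = log(1 + exp(x)) when beta = 1.0
    softplus = _op.log(_op.add(_op.exp(x), _op.const(1.0, dtype)))
    out = _op.multiply(x, _op.tanh(softplus))
    g.add_node(op.output("Out")[0], out)
```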
Since the CI image is not updated yet, you can modify the test case to work around it.
Please feel free to ping me as soon as it’s ready to be merged.
Hi @junrushao, thanks, this PR is approved; I think we can merge it now.
LGTM!
The pipeline “cross-isa-minimal” times out too often… re-triggered.