【Hackathon 5th No.105】move fusion_gru/fusion_seqconv_eltadd_relu/fusion_seqexpand_concat_fc to phi #57881
Conversation
Your PR was submitted successfully. Thank you for contributing to the open-source project!
Hmm, this output has a counterpart in the fluid OpMaker.
Please resolve the merge conflicts.
Done

Let me try migrating fusion_gru_mkldnn_op as well.
Some of the unit tests still fail; this needs another look.
Yes, I saw that. I will check it later.
paddle/phi/api/yaml/fused_ops.yaml (outdated)
@@ -208,7 +208,7 @@
    data_type : x

 - op : fusion_gru
-  args : (Tensor x, Tensor h0, Tensor weight_x, Tensor weight_h, Tensor bias, str activation = "tanh", str gate_activation = "sigmoid", bool is_reverse = false, bool use_seq = true, bool origin_mode = false, bool use_mkldnn = false, str mkldnn_data_type = "float32", float scale_data = 1.0f, float shift_data = 0.0f, float[] scale_weights = {1.0f}, bool force_fp32_output = false)
+  args : (Tensor x, Tensor h0, Tensor weight_x, Tensor weight_h, Tensor bias, str activation = "tanh", str gate_activation = "sigmoid", bool is_reverse = false, bool use_seq = true, bool origin_mode = false, bool use_onednn = false, str onednn_data_type = "float32", float scale_data = 1.0f, float shift_data = 0.0f, float[] scale_weights = {1.0f}, bool force_fp32_output = false)
Renaming the op attributes to the onednn style still needs evaluation; it might introduce compatibility problems.
Er, then I'll change it back _(:з)∠)_
No need to revert this one; revert the one below instead.
                           DenseTensor* batched_input,
                           DenseTensor* batched_out,
                           DenseTensor* hidden) {

void FusionGRUOneDNNKernel(const Context& dev_ctx,
This kernel name shouldn't need the OneDNN suffix, should it?
Done
@@ -1332,15 +1325,51 @@
  extra :
    attrs : [str data_format = "AnyLayout"]

- op : fusion_transpose_flatten_concat
Why was fusion_transpose_flatten_concat deleted? Aren't its inputs and outputs inconsistent?
It wasn't deleted. While resolving conflicts I moved it, and then pre-commit adjusted its position again; it is now at line 1368.
OK.
Also, please confirm: after the migration, the related unit tests all pass, right?
Without enabling
LGTM
  out->set_dims({x_dims[0], w_dims[1]});
  col_mat->set_dims({x_dims[0], w_dims[0]});
Generally the dtype also needs to be set. These InferMeta functions could be fixed up in a follow-up PR.
@zeroRains just add it in this PR~
The previously merged PR seems to be missing it too; please add it there as well~
OK.
Done
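The fix discussed above is that an InferMeta function should propagate the output dtype alongside the dims. A minimal self-contained sketch of that pattern, where the `MetaTensor` stub, the `DataType` enum, and the function name are simplified stand-ins invented for illustration (the real types live in phi and differ):

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Simplified stand-in for a phi-style meta tensor: holds only the
// compile-time-inferable metadata (shape and dtype), not the data.
enum class DataType { FLOAT32, INT64 };

struct MetaTensor {
  std::vector<int64_t> dims;
  DataType dtype = DataType::FLOAT32;
  void set_dims(std::vector<int64_t> d) { dims = std::move(d); }
  void set_dtype(DataType t) { dtype = t; }
  const std::vector<int64_t>& get_dims() const { return dims; }
  DataType get_dtype() const { return dtype; }
};

// Hypothetical InferMeta sketch: in addition to the set_dims calls shown
// in the diff above, forward the input's dtype to every output so that
// downstream passes see fully specified output metadata.
void SeqConvEltAddReluInferMetaSketch(const MetaTensor& x,
                                      const MetaTensor& w,
                                      MetaTensor* out,
                                      MetaTensor* col_mat) {
  const auto& x_dims = x.get_dims();
  const auto& w_dims = w.get_dims();
  out->set_dims({x_dims[0], w_dims[1]});
  col_mat->set_dims({x_dims[0], w_dims[0]});
  // The review point: dtype must be set as well, not just dims.
  out->set_dtype(x.get_dtype());
  col_mat->set_dtype(x.get_dtype());
}
```

The dtype is taken from the first input here by assumption; the actual kernel may derive it differently (e.g. via a `data_type` entry in the yaml).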
Uh... the Coverage CI timed out... and I don't have permission to rerun it.
  attrs :
    trans_axis : trans_axis
    flatten_axis : flatten_axis
    concat_axis : concat_axis
Why was the original mapping deleted here? Could that cause a compatibility problem?
No. The names are identical, so no mapping entry needs to be added.
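The reply above refers to the attribute-name mapping convention in op_compat.yaml: an explicit `attrs` block is only needed when the legacy (fluid) attribute name differs from the phi one, so identity mappings like `trans_axis : trans_axis` can be dropped. A hypothetical illustration, with made-up op and attribute names:

```yaml
- op : some_fused_op
  attrs :
    # Needed: the phi name (left) differs from the legacy name (right).
    new_attr_name : legacy_attr_name
    # Not needed: an identity entry such as `trans_axis : trans_axis`
    # can simply be removed without breaking compatibility.
```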
LGTM for YAML
LGTM
LGTM. No doc changes
…on_seqexpand_concat_fc to phi (PaddlePaddle#57881)

* add a part of fusion_gru
* add some code
* move the fusion_seqconv_eltadd_relu_op to phi but have the same bug in gru_op
* move fusion_seqexpand_concat_fc_op to phi, but have the same bug in fusion_gru_op
* fix the intermediate bug
* fix the conflict
* pass the fusion_seqconv in new ir
* try to move fusion_gru_mkldnn_kernel to phi
* move fusion_gru_mkldnn to phi, pass the test but some bug in new iR
* change some discribe
* fix conflict
* fix bug
* fix the bug in getInputName
* remove some describe
* add the set_dtype
Hi, the migration of these kernels to phi has problems and needs a fix PR: the mkldnn-related parameters should not be written into the yaml, nor appear in the kernel or InferMeta signatures.
After the fix, verify the same way as before.
PR types
Others
PR changes
Others
Description
move fusion_gru/fusion_seqconv_eltadd_relu/fusion_seqexpand_concat_fc to phi
#57262