[xpu]:fix scope new tensor: max weight is unchanged #9641
Conversation
Thanks for your contribution!
LGTM
@@ -1310,7 +1311,7 @@ class XPUMultiEncoderFuser {
     weight_tensor_vec[0]->Resize({weight_dim1_acc, weight_dims_vec[0][0]});
     float max_f =
         paddle::lite::xpu::math::FindMaxAbs(weight_qkv_trans.get(), qkv_len);
-    auto max_tensor = scope->NewTensor(max_tensor_name);
+    auto max_tensor = scope->MutableParent()->NewTensor(max_tensor_name);
Why does this tensor need to be created in the parent scope? What problem does that solve?
1. scope->NewTensor() creates the tensor in the current scope, whereas scope->MutableParent()->NewTensor() creates it in the root scope.
2. PaddleLite now stores weight information in the root scope, and when a predictor is cloned multiple times, all clones share a single root scope.
3. MutableParent is only needed for tensors that store weights; a tensor that holds activations (feature values) rather than weights does not need it.
LGTM
* [Cherry-Pick][XPU] fixed inplace op mem reuse issue when the previous op is an invalid op (#9562) (#9564)
* [XPU] support roformer relative embedding (#9536)
* fix sampling_id, fix xpu python whl, fix quant_dequant pass (#9636)
* [XPU] support ffn intermediate size M!=4 (#9646)
* [xpu] fix scope new tensor, max weight is unchanged (#9641)
* [XPU] Fixed the bug in op calib. (#9700)
* [XPU] support skip ffn quant in K200 (#9704)
No description provided.