[ARM] Fix iOS bug && add mul quant op && add quant transformer support #7310
Conversation
Thanks for your contribution!
Force-pushed from ab5d385 to f2fa9a2 (head commit: …into fix_ios_bug_develop)
@@ -61,6 +61,27 @@ namespace arm {
din_batch, weights, chout, hout, wout, kh, kw, paddings[0], paddings[1], \
paddings[2], paddings[3], dilations[0], dilations[1], dout_batch, &ctx

template <>
void Conv2DTransposeCompute<PRECISION(kFloat),
Please delete this; the implementation has already been added to the shared ReInitWhenNeeded(). See conv_transpose_compute.h.
Done, updated.
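For context, a minimal standalone sketch of the shared ReInitWhenNeeded() idea mentioned above, assuming a hypothetical base class and a DDim stand-in rather than the real types in conv_transpose_compute.h: the shape-dependent setup runs only when the input dims change, so individual precision specializations no longer need their own override.

// Standalone sketch only: class and member names below are illustrative,
// not Paddle-Lite's actual conv_transpose_compute.h API.
#include <cstdint>
#include <iostream>
#include <vector>

struct DDim {  // stand-in for the framework's dimension type
  std::vector<int64_t> dims;
  bool operator!=(const DDim& o) const { return dims != o.dims; }
};

class ConvTransposeComputeBase {  // hypothetical shared base
 public:
  virtual ~ConvTransposeComputeBase() = default;

  // Shared hook: redo shape-dependent setup only when the input dims
  // actually changed since the last run, so each precision
  // specialization does not need its own copy of this logic.
  virtual void ReInitWhenNeeded() {
    if (cur_shape_ != last_shape_) {
      PrepareWorkspace();
      last_shape_ = cur_shape_;
    }
  }

  void SetInputShape(const DDim& d) { cur_shape_ = d; }

 protected:
  virtual void PrepareWorkspace() {
    std::cout << "recomputing workspace for new input shape\n";
  }

 private:
  DDim cur_shape_;
  DDim last_shape_;
};

int main() {
  ConvTransposeComputeBase k;
  k.SetInputShape({{1, 3, 224, 224}});
  k.ReInitWhenNeeded();  // recomputes: shape changed
  k.ReInitWhenNeeded();  // no-op: shape unchanged
  k.SetInputShape({{1, 3, 112, 112}});
  k.ReInitWhenNeeded();  // recomputes again
  return 0;
}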
virtual void ReInitWhenNeeded() {
Why is this being deleted?
Fixed.
lite/kernels/arm/matmul_compute.cc (outdated)
for (int i = 0; i < n_; i++) {
  param.output_scale = param.input_scale * param.weight_scale[i];
  if (param.weight_scale.size() == 1) {
Shouldn't this be rewritten like this:
if (param.weight_scale.size() == 1) {
  param.output_scale =
      param.input_scale * param.weight_scale[0] * param.alpha;
  for (int i = 0; i < n_; i++) {
    scale_[i] = param.output_scale;
  }
} else {
  for (int i = 0; i < n_; i++) {
    param.output_scale = param.input_scale * param.weight_scale[i];
    scale_[i] = param.output_scale;
  }
}
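The point of the suggested rewrite: when weight_scale holds a single value, the output scale is the same constant for every column, so it is computed once outside the loop (with alpha folded in) and broadcast into scale_; only the per-channel case needs the indexed weight_scale[i] lookup inside the loop.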
Fixed.
}

template <>
void MulCompute<PRECISION(kInt8), PRECISION(kFloat)>::PrepareForRun() { |
Please add the <int8, int8> case in a follow-up.
OK.
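For reference, a standalone sketch of what the <int8, int8> case would add on top of the <int8, float> path; the helper names and scale handling below are illustrative assumptions, not the kernel's actual code. Instead of leaving the dequantized result as float, the output is requantized with the output scale, rounded, and saturated to the int8 range.

// Illustrative only: these helpers are not Paddle-Lite APIs; they just show
// the difference between an int8->float and an int8->int8 output path.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// <int8, float>: int32 accumulator * (input_scale * weight_scale) -> float.
std::vector<float> DequantToFloat(const std::vector<int32_t>& acc,
                                  float in_scale, float w_scale) {
  std::vector<float> out(acc.size());
  for (size_t i = 0; i < acc.size(); ++i) {
    out[i] = acc[i] * in_scale * w_scale;
  }
  return out;
}

// <int8, int8>: additionally divide by the output scale, round, and
// saturate to the signed 8-bit range.
std::vector<int8_t> RequantToInt8(const std::vector<int32_t>& acc,
                                  float in_scale, float w_scale,
                                  float out_scale) {
  std::vector<int8_t> out(acc.size());
  for (size_t i = 0; i < acc.size(); ++i) {
    float v = acc[i] * in_scale * w_scale / out_scale;
    v = std::max(-127.0f, std::min(127.0f, std::round(v)));
    out[i] = static_cast<int8_t>(v);
  }
  return out;
}

int main() {
  std::vector<int32_t> acc = {1200, -3400, 56};
  auto f = DequantToFloat(acc, 0.02f, 0.01f);
  auto q = RequantToInt8(acc, 0.02f, 0.01f, 0.05f);
  for (size_t i = 0; i < acc.size(); ++i) {
    std::cout << f[i] << " -> " << static_cast<int>(q[i]) << "\n";
  }
  return 0;
}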
@@ -77,7 +77,7 @@ TEST(mul_arm, retrive_op) {
}

TEST(mul_arm, init) {
  MulCompute mul;
  MulCompute<PRECISION(kFloat), PRECISION(kFloat)> mul;
While you're at it, please also add a unit-test case for the int8 mul op.
OK.
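As a starting point, a sketch of the requested int8 test, modeled on the TEST(mul_arm, init) change above. The precision()/target() assertions are assumptions about the kernel's accessors, and a real test would still need quantized inputs, scales, and a float reference to compare against.

TEST(mul_arm, init_int8) {
  // Assumed pattern: same registration/accessors as the float kernel test.
  MulCompute<PRECISION(kInt8), PRECISION(kFloat)> mul;
  ASSERT_EQ(mul.precision(), PRECISION(kInt8));
  ASSERT_EQ(mul.target(), TARGET(kARM));
}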
scale_.resize(n_);
scale_one.resize(m_);
for (int i = 0; i < n_; i++) {
  param.output_scale = param.input_scale * param.weight_scale[i];
Doesn't this spot need to check the size of param.weight_scale? I remember the previous version had that check: if the size is 1, the scale stays a constant; otherwise it is computed per channel like this.
LGTM