fix some bugs for 0D output and fix some typos #10282
Conversation
Thanks for your contribution!
lite/operators/norm_op.cc
Outdated
@@ -71,7 +71,7 @@ bool PNormOpLite::InferShapeImpl() const {
  std::vector<int64_t> reduce_dims;
  const bool asvector = param_.asvector;
  if (asvector) {
    reduce_dims.emplace_back(1);
    // reduce_dims.emplace_back(1);
Just delete it directly; there's no need to comment it out.
Done.
lite/operators/reduce_ops.cc
Outdated
<< "dims[i] is " << dims[i] << ", x_rank is " << x_rank; | ||
} | ||
return true; | ||
// CHECK_GT(dims.size(), 0) |
Delete it.
Done.
@@ -82,7 +82,7 @@ def add_ignore_pass_case(self):
    pass
The 0D tensor check hasn't been added here, so the cambricon_mlu target will also run this case.
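For reference, a minimal sketch (not the PR's actual code) of the kind of predicate such a target check needs; the function name, parameters, and the skip list below are illustrative assumptions rather than the test's real helpers:

# Hedged sketch only: the predicate a 0D-tensor target skip would need.
# The real test would register this through the auto-scan ignore helpers;
# the names and the skip list here are illustrative assumptions.
from typing import List


def should_skip_case(in_shape: List[int], target: str) -> bool:
    """Skip 0D-input cases on targets without 0D tensor support yet."""
    unsupported_0d_targets = {"cambricon_mlu"}  # assumption: current skip list
    return len(in_shape) == 0 and target in unsupported_0d_targets


if __name__ == "__main__":
    print(should_skip_case([], "cambricon_mlu"))      # True: skip this case
    print(should_skip_case([2, 3], "cambricon_mlu"))  # False: normal shape runs
    print(should_skip_case([], "arm"))                # False: not in the skip list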
lite/operators/reduce_ops.cc
Outdated
@@ -58,10 +50,10 @@ bool ReduceOp::InferShapeImpl() const {
  bool keep_dim = param_.keep_dim;

  for (int i = 0; i < dims.size(); i++) {
    CHECK(dims[i] <= x_rank && dims[i] + x_rank >= 0)
        << "dims[i] is " << dims[i] << ", x_rank is " << x_rank;
    // CHECK(dims[i] <= x_rank && dims[i] + x_rank >= 0)
Delete it; don't just comment it out.
Done.
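As a side note, a minimal Python mirror of the bounds rule the CHECK above enforces (illustrative only; the function name and error text are assumptions, the condition itself is copied from the diff):

from typing import List


def check_reduce_dims(dims: List[int], x_rank: int) -> None:
    # Same condition as the C++ CHECK: each axis must satisfy
    # dims[i] <= x_rank and dims[i] + x_rank >= 0, i.e. lie in [-x_rank, x_rank].
    for d in dims:
        if not (d <= x_rank and d + x_rank >= 0):
            raise ValueError("dims[i] is %d, x_rank is %d" % (d, x_rank))


if __name__ == "__main__":
    check_reduce_dims([0, -1], x_rank=1)  # ok: both axes lie in [-1, 1]
    check_reduce_dims([], x_rank=0)       # ok: empty dims, nothing to check
    try:
        check_reduce_dims([2], x_rank=1)  # 2 > x_rank, rejected
    except ValueError as e:
        print(e)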
reduce_all_data = True if len(
    in_shape) == 0 or axis_list == None or axis_list == [] else False
if len(in_shape) == 0:
    axis_list = draw(st.sampled_from([[-1], []]))
This unit test also doesn't have the 0D tensor target check.
Added.
reduce_all_data = True if len(
    in_shape) == 0 or axis_list == None or axis_list == [] else False
if len(in_shape) == 0:
    axis_list = draw(st.sampled_from([[-1], []]))
This unit test also doesn't have the 0D tensor target check.
Added.
reduce_all_data = True if len(
    in_shape) == 0 or axis_list == None or axis_list == [] else False

if len(in_shape) == 0:
This unit test also doesn't have the 0D tensor target check.
Added.
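For context on the snippet above: a 0D input has no axes left to select, so the test forces the reduce_all path and samples axis_list from [[-1], []]. A quick numpy illustration (not the test harness itself) of why these cases all produce the same 0D result:

# Hedged illustration with numpy, not the PR's test code.
import numpy as np

x = np.array(3.0)                # 0D tensor, shape ()
print(x.shape)                   # ()
print(np.sum(x))                 # 3.0 -- default reduction over all (zero) axes
print(np.sum(x, axis=()).shape)  # () -- an empty axis tuple keeps the 0D shape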
4e34c30 to b1e0220 (compare)
LGTM
PR devices: others
PR types: Bug fixes
PR changes: OP
Description: fix some bugs for 0D output and fix some typos