
fix some bugs for 0-D output and fix some typos #10282

Merged
13 commits merged on Jun 8, 2023

Conversation

lishicheng1996
Contributor

PR devices: others

PR types: Bug fixes

PR changes: OP

Description: fix some bugs for 0-D output and fix some typos

@paddle-bot

paddle-bot bot commented May 23, 2023

Thanks for your contribution!

@@ -71,7 +71,7 @@ bool PNormOpLite::InferShapeImpl() const {
   std::vector<int64_t> reduce_dims;
   const bool asvector = param_.asvector;
   if (asvector) {
-    reduce_dims.emplace_back(1);
+    // reduce_dims.emplace_back(1);
Collaborator

Just delete the line directly; don't leave it commented out.

Contributor Author

Fixed.
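For intuition about the change above, here is a minimal NumPy sketch (illustration only, not Paddle-Lite code): reducing over every axis without keeping dims yields a 0-D tensor of shape `()`, whereas the removed `reduce_dims.emplace_back(1)` forced the output to shape `(1,)`.

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)

# Reducing over all axes (what asvector implies) without keep_dim
# gives a 0-D result: ndim 0, shape ().
out_0d = np.linalg.norm(x.ravel(), ord=2)
print(np.ndim(out_0d), np.shape(out_0d))  # 0 ()

# The removed reduce_dims.emplace_back(1) corresponds to forcing shape (1,).
out_1d = np.reshape(out_0d, (1,))
print(out_1d.shape)  # (1,)
```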

        << "dims[i] is " << dims[i] << ", x_rank is " << x_rank;
  }
  return true;
  // CHECK_GT(dims.size(), 0)
Collaborator

Delete it.

Contributor Author

Fixed.

@@ -82,7 +82,7 @@ def add_ignore_pass_case(self):
    pass
Collaborator

The 0-D tensor check hasn't been added here, so the cambricon_mlu target will also run this.

@@ -58,10 +50,10 @@ bool ReduceOp::InferShapeImpl() const {
   bool keep_dim = param_.keep_dim;

   for (int i = 0; i < dims.size(); i++) {
-    CHECK(dims[i] <= x_rank && dims[i] + x_rank >= 0)
-        << "dims[i] is " << dims[i] << ", x_rank is " << x_rank;
+    // CHECK(dims[i] <= x_rank && dims[i] + x_rank >= 0)
Collaborator

Delete it; don't leave it commented out.

Contributor Author

Fixed.
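To see why the bound check was removed rather than kept, here is a plain-Python restatement of the condition (a sketch; `old_axis_check` is a name invented for illustration): for a 0-D tensor, `x_rank` is 0, so the common axis value `-1` fails `dims[i] + x_rank >= 0`, and the old `CHECK` would reject an otherwise valid 0-D reduction.

```python
def old_axis_check(axes, x_rank):
    # Mirrors the CHECK removed in the diff:
    #   dims[i] <= x_rank && dims[i] + x_rank >= 0
    return all(a <= x_rank and a + x_rank >= 0 for a in axes)

print(old_axis_check([1], 2))   # True: axis 1 is valid for a rank-2 tensor
print(old_axis_check([-1], 0))  # False: a 0-D input (rank 0) with axis -1
                                # trips the check, hence its removal here
```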

reduce_all_data = True if len(
    in_shape) == 0 or axis_list == None or axis_list == [] else False
if len(in_shape) == 0:
    axis_list = draw(st.sampled_from([[-1], []]))
Collaborator

This unit test is also missing the 0-D tensor target check.

Contributor Author

Added.
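The generator's `reduce_all` condition above can be restated in plain Python (no hypothesis dependency; `wants_reduce_all` is a name invented for this sketch, and `is None` replaces the test's non-idiomatic `== None`):

```python
def wants_reduce_all(in_shape, axis_list):
    # reduce_all is requested when the input is 0-D (no axes left to index)
    # or when no axes are specified at all.
    return len(in_shape) == 0 or axis_list is None or axis_list == []

print(wants_reduce_all((), [-1]))      # True: 0-D input reduces everything
print(wants_reduce_all((2, 3), None))  # True: no axes given
print(wants_reduce_all((2, 3), []))    # True: empty axis list
print(wants_reduce_all((2, 3), [1]))   # False: reduce only axis 1
```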

reduce_all_data = True if len(
    in_shape) == 0 or axis_list == None or axis_list == [] else False
if len(in_shape) == 0:
    axis_list = draw(st.sampled_from([[-1], []]))
Collaborator

This unit test is also missing the 0-D tensor target check.

Contributor Author

Added.

reduce_all_data = True if len(
    in_shape) == 0 or axis_list == None or axis_list == [] else False

if len(in_shape) == 0:
Collaborator

This unit test is also missing the 0-D tensor target check.

Contributor Author

Added.

Collaborator

@mjp9527 mjp9527 left a comment


LGTM

@mjp9527 mjp9527 merged commit e4dce40 into PaddlePaddle:develop Jun 8, 2023