[Prim][PIR] binary_cross_entropy_with_logits forward decomp #61613

Merged: 30 commits merged into PaddlePaddle:develop from the logit branch on May 21, 2024

Conversation

@zeroRains (Contributor) commented on Feb 5, 2024

PR Category: Operator Mechanism

PR Types: Others

Description

binary_cross_entropy_with_logits forward decomp
Subtasks (see the sketch after this list):

  • sigmoid_cross_entropy_with_logits
  • mean_all
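
The decomposition maps binary_cross_entropy_with_logits onto these two ops: an element-wise sigmoid cross-entropy followed, on the mean-reduction path, by mean_all. Below is a plain C++ sketch of the underlying math only (not Paddle's primitive API); the pos_weight, ignore_index and normalize options handled in the real decomposition are omitted, and the standard numerically stable element-wise form is assumed.

// Plain C++ sketch of the math the two subtasks target (not Paddle's
// primitive API). Assumes the standard numerically stable form
//   sigmoid_cross_entropy_with_logits(x, z) = max(x, 0) - x*z + log(1 + exp(-|x|))
// and mean_all(v) = sum(v) / numel(v).
// pos_weight / ignore_index / normalize handling is omitted here.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

double sigmoid_ce_with_logits(double x, double z) {
  return std::max(x, 0.0) - x * z + std::log1p(std::exp(-std::fabs(x)));
}

double mean_all(const std::vector<double>& v) {
  double s = 0.0;
  for (double e : v) s += e;
  return s / static_cast<double>(v.size());
}

int main() {
  const std::vector<double> logits = {1.5, -0.3, 0.0, 2.0};
  const std::vector<double> labels = {1.0, 0.0, 1.0, 0.0};
  std::vector<double> loss(logits.size());
  for (std::size_t i = 0; i < logits.size(); ++i) {
    loss[i] = sigmoid_ce_with_logits(logits[i], labels[i]);
  }
  // The 'mean' reduction of binary_cross_entropy_with_logits corresponds
  // to the mean_all subtask.
  std::printf("mean loss: %f\n", mean_all(loss));
  return 0;
}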

@paddle-bot commented on Feb 5, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot added the "contributor (External developers)" label on Feb 5, 2024
@paddle-ci-bot commented on Mar 15, 2024

Sorry to inform you that the CIs for 4950de4 passed more than 7 days ago. To prevent PR conflicts, please re-run all CIs manually.

paddle/fluid/primitive/composite/composite.h (review thread, marked outdated and resolved):
full<T>(common::vectorize(dims), ignore_index, label.type());
auto out = where<T>(label == ignore_index_tensor, zero, tmp_out);
if (normalize) {
const Tensor eps1 = full<T>(common::vectorize(dims), 1e-6, x.type());
Contributor commented: Where does the 1e-6 come from?

zeroRains (Contributor, Author) replied: [image] It is set following the kernel implementation of sigmoid_cross_entropy_with_logits.

Contributor replied: Suggest adding a code comment on that line noting the source.

const Tensor eps1 = full<T>(common::vectorize(dims), 1e-6, x.type());
auto diff = label - ignore_index_tensor;
const Tensor tmp_norm = sum<T>(where<T>(abs<T>(diff) > eps1, one, zero));
const Tensor eps2 = full<T>(empty_shape, 1e-5, x.type());
Contributor commented: Same question for the 1e-5?

zeroRains (Contributor, Author) replied: Same as above.
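
For reference, here is a plain C++ sketch of the masking and normalization behaviour discussed in this thread (not Paddle's primitive API). The excerpt above only shows the 1e-6 comparison and the 1e-5 constant; treating eps2 as a lower bound on the divisor is an assumption in this sketch, chosen to mirror the kernel behaviour the author points to.

// Plain C++ sketch (not Paddle's primitive API) of the ignore_index /
// normalize logic discussed above. The 1e-6 and 1e-5 constants follow the
// sigmoid_cross_entropy_with_logits kernel; using eps2 as a lower bound on
// the normalizer is an assumption here (to keep the divisor away from zero).
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
  const double ignore_index = -100.0;
  const double eps1 = 1e-6;  // tolerance when testing label != ignore_index
  const double eps2 = 1e-5;  // lower bound on the normalizer (assumed usage)

  std::vector<double> labels = {1.0, -100.0, 0.0, 1.0};
  std::vector<double> loss   = {0.3, 0.7, 0.2, 0.5};  // element-wise loss

  double count = 0.0;
  for (std::size_t i = 0; i < labels.size(); ++i) {
    // out = where(label == ignore_index, 0, loss)
    if (labels[i] == ignore_index) loss[i] = 0.0;
    // tmp_norm = sum(where(abs(label - ignore_index) > eps1, 1, 0))
    if (std::fabs(labels[i] - ignore_index) > eps1) count += 1.0;
  }
  const double norm = (count > eps2) ? count : eps2;  // avoid dividing by zero
  for (double& l : loss) l /= norm;                   // normalize == true

  for (double l : loss) std::printf("%f ", l);
  std::printf("\n");
  return 0;
}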

@cyber-pioneer (Contributor) commented: check_prim_pir=True should also be added in test_check_grad.

@cyber-pioneer (Contributor) commented: The shapes in the unit test are a bit small; it is suggested to increase them.

@paddle-ci-bot commented on Mar 26, 2024

Sorry to inform you that the CIs for b72d42d passed more than 7 days ago. To prevent PR conflicts, please re-run all CIs manually.

@paddle-ci-bot commented on Apr 6, 2024

Sorry to inform you that the CIs for 02a9896 passed more than 7 days ago. To prevent PR conflicts, please re-run all CIs manually.

value = value * get_slice<T>(x_shape_tensor, i);
}
value = reshape<T>(value, {});
ans = sum<T>(x_cast) / cast<T>(value, DataType::FLOAT32);
Contributor commented: Please re-review the cast logic here, as well as the dtype.

zeroRains (Contributor, Author) replied: Done
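
For context, a plain C++ sketch of what the mean_all decomposition excerpt above computes (not Paddle's primitive API): numel is rebuilt as the product of the (possibly dynamic) shape entries, and the sum is divided by that count. Accumulating the input in float and the placement of the final cast are illustrative assumptions tied to the reviewer's cast/dtype question, not the merged code.

// Plain C++ sketch (not Paddle's primitive API) of the mean_all
// decomposition referenced above. The float accumulation and the cast of
// numel are assumptions made for illustration.
#include <cstdint>
#include <cstdio>
#include <vector>

float mean_all(const std::vector<float>& x, const std::vector<int64_t>& shape) {
  int64_t numel = 1;
  for (int64_t d : shape) numel *= d;    // value = value * get_slice(x_shape, i)
  float s = 0.0f;
  for (float e : x) s += e;              // sum(x_cast)
  return s / static_cast<float>(numel);  // sum / cast(numel, FLOAT32)
}

int main() {
  const std::vector<float> x = {1.f, 2.f, 3.f, 4.f, 5.f, 6.f};
  std::printf("mean_all = %f\n", mean_all(x, {2, 3}));
  return 0;
}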

@cyber-pioneer merged commit 15888d1 into PaddlePaddle:develop on May 21, 2024
32 checks passed
@zeroRains deleted the logit branch on May 21, 2024 at 13:03
co63oc pushed a commit to co63oc/Paddle that referenced this pull request May 23, 2024
…ddle#61613)

* sigmoid_cross_entropy_with_logits forward decomp

* mean_all forward decomp

* add the test case for binary_cross_entropy_with_logits

* create a new test file

* modify the assert method

* modify the test

* fix code style

* add prim in check grad for test and handle the optional tensor

* fix conflict

* do not modify the third_party package

* fix merge bug

* modify the test data and change the file name

* roll back

* fix bug

* support mean_all for dynamic shape

* modify the type