
[XPU] plain softmax_mask_fuse_upper_triangle implement #58346

Conversation

@houj04 (Contributor) commented Oct 24, 2023

PR types: New features
PR changes: APIs

Description

Inside flash_attention.py, different operations are executed depending on the device and on head_dim.
Currently there is no flash attention implementation for XPU, so execution falls back to the plain computation path. However, XPU also lacks a softmax_mask_fuse_upper_triangle implementation, so an error is raised.
This PR is a simple stopgap: it computes the result the plain way, generating the mask first and then applying softmax, so that training at least does not crash.

The get_triangle_upper_mask function is inspired by the GPT-3 model in PaddleNLP.
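
As a rough illustration (a minimal sketch, not the PR's actual code; the -1e4 fill value and the exact signatures are assumptions), the "generate the mask first, then softmax" fallback looks like this in Paddle:

```python
import paddle
import paddle.nn.functional as F

def get_triangle_upper_mask(x):
    # x: attention scores shaped [batch, num_heads, seq_len, seq_len].
    # Fill the strict upper triangle with a large negative value so that
    # softmax drives those positions to ~0; everything else stays 0.
    mask = paddle.full_like(x, -1e4)      # assumed fill value
    mask = paddle.triu(mask, diagonal=1)  # keep only entries above the diagonal
    mask.stop_gradient = True
    return mask

def plain_softmax_mask_fuse_upper_triangle(x):
    # Unfused fallback: add the additive mask, then softmax over the last axis.
    return F.softmax(x + get_triangle_upper_mask(x), axis=-1)
```

Unlike a fused kernel, this materializes a full mask tensor of the same shape as the scores, but it keeps XPU training running until a fused implementation is available.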

@paddle-bot bot commented Oct 24, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@qili93 (Contributor) left a comment

LGTM

@ZibinGuo (Contributor)

LGTM

@XiaociZhang (Contributor) left a comment

LGTM

@ZibinGuo (Contributor) commented Oct 25, 2023 via email

@QingshuChen merged commit 6d1e685 into PaddlePaddle:develop Oct 25, 2023
danleifeng pushed a commit to danleifeng/Paddle that referenced this pull request Nov 14, 2023