[Auto-Parallel] Reshard API & Hybrid Parallel Unitest for dy2static mode #59856

Merged 31 commits into PaddlePaddle:develop on Dec 9, 2023

Conversation

JZ-LIANG (Contributor) commented Dec 8, 2023

PR types

Function optimization

PR changes

Others

Description

Pcard-76459

Reshard API in dy2static mode
MP-SP-DP-PP Hybrid Parallelism in dy2static mode
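
For context, a minimal sketch of the reshard API in dynamic-graph code that dy2static can then convert; this is not taken from the PR's diff, and the mesh shape, dim names, and tensor shapes are illustrative only:

```python
import paddle
import paddle.distributed as dist

# Illustrative 2-D mesh: 2-way data parallel x 2-way model parallel
# (the PR's tests use 8 GPUs; 4 here to keep the sketch small).
mesh = dist.ProcessMesh([[0, 1], [2, 3]], dim_names=["dp", "mp"])

x = paddle.ones([4, 8])
# Start with the batch dim sharded along "dp", replicated along "mp".
x = dist.shard_tensor(x, mesh, [dist.Shard(0), dist.Replicate()])

# reshard redistributes an existing dist tensor to new placements;
# here it gathers the shards so every rank holds the full tensor.
x = dist.reshard(x, mesh, [dist.Replicate(), dist.Replicate()])
```

Under dy2static, a layer containing such reshard calls is converted with dist.to_static, and the redistribution is lowered into the static program; supporting that lowering is what this PR adds.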

paddle-bot commented Dec 8, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

paddle-bot commented Dec 8, 2023

❌ This PR was not created using the PR template. You can refer to this Demo.
Please use the PR template; it saves our maintainers' time so that more developers can get help.

@JZ-LIANG changed the title from "[Auto-Parallel] Hybrid Parallel Unitest for dy2static mode" to "[Auto-Parallel] Reshard API & Hybrid Parallel Unitest for dy2static mode" on Dec 8, 2023
@@ -164,7 +164,7 @@ def test_simple_net_hybrid_strategy(self):
class TestSemiAutoParallelLlama3D(test_base.CommunicationTestDistBase):
def setUp(self):
super().setUp(num_of_devices=8, timeout=200, nnode=1)
self._default_envs = {"dp": "2", "mp": "2", "pp": "2", "acc_step": "2"}
Contributor:
Does acc_step=2 fail now?

Contributor:
acc_step needs strategy support.
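
For reference, a hypothetical sketch of what wiring acc_step through the strategy could look like; the gradient_merge knobs below are assumptions about the auto-parallel Strategy surface, not taken from this PR:

```python
import paddle.distributed as dist

strategy = dist.Strategy()
# Assumed knobs: turn on gradient accumulation and accumulate
# over 2 micro-steps, matching acc_step=2 in the test env above.
strategy.gradient_merge.enable = True
strategy.gradient_merge.k_steps = 2

# The strategy would then be passed when converting to static graph:
# dist_model = dist.to_static(model, loader, loss_fn, opt, strategy=strategy)
```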

@@ -501,7 +501,7 @@ def _prepare_decoder_attention_mask(
combined_attention_mask = dist.shard_tensor(
combined_attention_mask,
get_mesh(),
-    [dist.Shard(0), dist.Replicate()],
+    [dist.Replicate(), dist.Replicate()],
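
For readers of the diff: on a 2-D ["dp", "mp"] mesh, the two placements behave quite differently. A small illustrative sketch (mesh and tensor shapes assumed, not from the PR):

```python
import paddle
import paddle.distributed as dist

mesh = dist.ProcessMesh([[0, 1], [2, 3]], dim_names=["dp", "mp"])
mask = paddle.ones([4, 1, 64, 64])

# Old placement: dim 0 is split across the "dp" axis (2 ranks),
# so each rank holds a local shard of shape [2, 1, 64, 64].
sharded = dist.shard_tensor(mask, mesh, [dist.Shard(0), dist.Replicate()])

# New placement: every rank holds a full [4, 1, 64, 64] copy.
replicated = dist.shard_tensor(mask, mesh, [dist.Replicate(), dist.Replicate()])
```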
Contributor:
Why change to Replicate?

@zhiqiu (Contributor) left a comment:
LGTM

@XieYunshen (Contributor) left a comment:
LGTM for the unit test timeout settings.

@zhiqiu (Contributor) left a comment:
LGTM

@JiabinYang (Contributor) left a comment:
LGTM for _c_ops

@zhiqiu merged commit 70c4d21 into PaddlePaddle:develop on Dec 9, 2023