[fix] Broadcast optimizer state using broadcast_dp without shard-resh… #8522

Merged · 1 commit · Jun 3, 2024

8 changes: 5 additions & 3 deletions paddlenlp/trainer/trainer.py
@@ -582,7 +582,7 @@
         weights_index_file,
     ]
 ):
-    raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
+    raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint} -- {weights_file}")
 
logger.info(f"Loading model from {resume_from_checkpoint} .")

@@ -2237,7 +2237,7 @@
             safe_serialization=True,
         )
     else:
-        if self.dp_group.rank > 0:
+        if self.args.data_parallel_rank > 0 and self.args.use_expert_parallel:
             self._save_ckpt_func(
                 self._filter_moe_no_sync_optimizer_params(), os.path.join(output_dir, OPTIMIZER_NAME)
             )
@@ -2525,7 +2525,9 @@
     if self.args.local_rank != -1:
         dist.barrier()
     if self.args.use_expert_parallel:
-        opt_state_dict = broadcast_moe_optimizer(opt_state_dict)
+        opt_state_dict = broadcast_moe_optimizer(
+            opt_state_dict, broadcast_dp=not self.args.should_load_sharding_stage1_model
+        )
     else:
         if not self.args.should_load_sharding_stage1_model:
             opt_state_dict = broadcast_dp_optimizer(opt_state_dict)
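
For context, a minimal sketch of the load-time decision introduced by the trainer.py change is shown below. The attribute names (args.use_expert_parallel, args.should_load_sharding_stage1_model) and the two helpers come from the diff; the wrapper function itself is hypothetical and only illustrates when the extra data-parallel broadcast is skipped.

# Hypothetical wrapper (not part of the PR) mirroring the optimizer-state
# loading branch in paddlenlp/trainer/trainer.py after this change.
from paddlenlp.trainer.utils.helper import broadcast_dp_optimizer, broadcast_moe_optimizer

def _broadcast_loaded_optimizer_state(args, opt_state_dict):
    if args.use_expert_parallel:
        # MoE run: broadcast_moe_optimizer also merges expert-local state.
        # If the sharding stage-1 loader already produced a complete state,
        # the plain DP broadcast is skipped via broadcast_dp=False.
        return broadcast_moe_optimizer(
            opt_state_dict,
            broadcast_dp=not args.should_load_sharding_stage1_model,
        )
    # Dense run: broadcast across data-parallel ranks only when the state
    # was not loaded through the sharding stage-1 path.
    if not args.should_load_sharding_stage1_model:
        return broadcast_dp_optimizer(opt_state_dict)
    return opt_state_dict
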
7 changes: 5 additions & 2 deletions paddlenlp/trainer/utils/helper.py
@@ -228,7 +228,7 @@
     return state_dict
 
 
-def broadcast_moe_optimizer(state_dict):
+def broadcast_moe_optimizer(state_dict, broadcast_dp=True):
 
     try:
         hcg = fleet.get_hybrid_communicate_group()
@@ -270,7 +270,10 @@
             base_state_dict.update(buf[2])
         return base_state_dict
 
-    base_state_dict = _broadcast_moe_optimizer_state(state_dict)
+    if broadcast_dp:
+        base_state_dict = broadcast_dp_optimizer(state_dict)
+    else:
+        base_state_dict = _broadcast_moe_optimizer_state(state_dict)
     if data_parallel_rank > 0:
         master_weight = state_dict.pop("master_weights", {})
         base_state_dict.update(state_dict)
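
Read together with the trainer.py change, broadcast_moe_optimizer now chooses between the generic data-parallel broadcast and the MoE-only broadcast before layering back each rank's expert-local state. The toy below is a framework-free sketch of that merge logic, assuming the two broadcast steps are supplied as callables and that master weights are merged back as shown; it is illustrative, not the actual helper.

# Toy sketch of the merge semantics behind broadcast_moe_optimizer(state_dict,
# broadcast_dp=...). The real helper uses paddle.distributed collectives; here
# the broadcasts are stubbed so the control flow can run standalone.
def merge_moe_optimizer_state(state_dict, data_parallel_rank, broadcast_dp,
                              dp_broadcast, moe_broadcast):
    if broadcast_dp:
        # Default path: the generic data-parallel broadcast supplies the base.
        base_state_dict = dp_broadcast(state_dict)
    else:
        # Sharding stage-1 reload path: only the MoE-specific broadcast runs.
        base_state_dict = moe_broadcast(state_dict)
    if data_parallel_rank > 0:
        # Expert-local entries owned by this rank sit on top of the base;
        # the master-weights merge here is an assumption of the sketch.
        master_weight = state_dict.pop("master_weights", {})
        base_state_dict.update(state_dict)
        base_state_dict.setdefault("master_weights", {}).update(master_weight)
    return base_state_dict


# Example: DP rank 1 keeps its expert-local moment on top of the broadcast base.
base = {"shared_w_moment1": 0.0, "master_weights": {"shared_w": 1.0}}
local = {"expert_w_moment1": 0.5, "master_weights": {"expert_w": 2.0}}
merged = merge_moe_optimizer_state(
    dict(local), data_parallel_rank=1, broadcast_dp=True,
    dp_broadcast=lambda s: dict(base), moe_broadcast=lambda s: dict(base),
)
print(sorted(merged))  # ['expert_w_moment1', 'master_weights', 'shared_w_moment1']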