
Commit c7a0793

revert unintended change
Signed-off-by: Chen Cui <chcui@nvidia.com>
cuichenx committed Sep 27, 2024
1 parent 01ce1be commit c7a0793
Showing 1 changed file with 0 additions and 7 deletions.
7 changes: 0 additions & 7 deletions nemo/lightning/pytorch/callbacks/model_checkpoint.py
@@ -430,13 +430,6 @@ def _save_checkpoint(self, trainer: 'pytorch_lightning.Trainer', filepath: str)
not self.save_optim_on_train_end and trainer.global_step == trainer.max_steps
)

- ## PEFT training must have save_weights_only=False to use the on_save_checkpoint callback.
- ## (See https://github.com/Lightning-AI/pytorch-lightning/blob/bc3c9c536dc88bfa9a46f63fbce22b382a86a9cb/src/lightning/pytorch/trainer/connectors/checkpoint_connector.py#L487-L492)
- # breakpoint()
- # from nemo.lightning.pytorch.callbacks import PEFT
- # if any(isinstance(callback, PEFT) for callback in trainer.callbacks):
- #     save_weights_only = False

# Async save passes the finalization function to checkpoint_io,
# sync save calls the finalization function immediately after save.
finalize_fn = self._get_finalize_save_checkpoint_callback(trainer, filepath, trainer.global_step)
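For context, the lines removed by this commit were a commented-out (and debug-only) check that was committed unintentionally. A minimal sketch of what that check would do if it were ever enabled is shown below; the helper name _resolve_save_weights_only is hypothetical, and it assumes trainer.callbacks holds the callback instances attached to the Lightning Trainer.

    # Hypothetical helper illustrating the commented-out check deleted above:
    # force save_weights_only=False whenever a PEFT callback is attached, so
    # that Lightning still invokes on_save_checkpoint during PEFT training.
    from nemo.lightning.pytorch.callbacks import PEFT

    def _resolve_save_weights_only(trainer, save_weights_only: bool) -> bool:
        # If any attached callback is a PEFT callback, override the flag.
        if any(isinstance(callback, PEFT) for callback in trainer.callbacks):
            return False
        return save_weights_only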

