
clip_gradient with clip_grad_value #5456

Closed
dhkim0225 opened this issue Jan 11, 2021 · 3 comments · Fixed by #6123
Labels
feature (Is an improvement or enhancement) · help wanted (Open to be worked on) · priority: 1 (Medium priority task)

Comments

@dhkim0225
Contributor

dhkim0225 commented Jan 11, 2021

🚀 Feature

Same issue as #4927.
The current clip_gradients uses clip_grad_norm; can we also add clip_grad_value?

https://github.com/PyTorchLightning/pytorch-lightning/blob/f2e99d617f05ec65fded81ccc6d0d59807c47573/pytorch_lightning/plugins/native_amp.py#L63-L65
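
For context, PyTorch itself already exposes both utilities; a minimal sketch of the difference (the Linear model here is just a placeholder):

    import torch

    model = torch.nn.Linear(4, 2)                # placeholder model
    model(torch.randn(8, 4)).sum().backward()    # produce some gradients

    # norm-based clipping: rescales all gradients so their combined 2-norm is <= 1.0
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0, norm_type=2.0)

    # value-based clipping: clamps every gradient element into [-0.5, 0.5]
    torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=0.5)

    # (in practice you would pick one of the two, not apply both)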

@dhkim0225 dhkim0225 added the feature and help wanted labels Jan 11, 2021
@tchaton
Contributor

tchaton commented Jan 11, 2021

Hey @dhkim0225,

Sounds like a great feature. Would you mind making a PR to enable it?

Best regards,
T.C

@tchaton tchaton added the priority: 1 (Medium priority task) label Jan 11, 2021
@dhkim0225
Contributor Author

@tchaton I'd be really glad to do that!

Renaming the existing clip_grad_value variable to clip_grad_norm and adding a new clip_grad_value feature seems the most intuitive approach, but that would be BC-breaking.
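
Roughly, what I have in mind (these argument names are purely illustrative, not an existing Trainer API):

    trainer = Trainer(clip_grad_norm=1.0)    # today's norm-based clipping, renamed
    trainer = Trainer(clip_grad_value=0.5)   # the new element-wise clipping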

Do you have any good ideas?

Sincerely,
Anthony Kim.

@tchaton
Contributor

tchaton commented Jan 11, 2021

Hey @dhkim0225,

I just checked. You can pass those arguments directly to the Trainer,
e.g. trainer = Trainer(gradient_clip_val=10, track_grad_norm=2), and they will be used by the NativeAMP clip_gradients.

It falls back to self.trainer.gradient_clip_val when no clip value is passed explicitly:

    # Excerpted from the accelerator base class; the surrounding module needs
    # `import torch`, `from typing import Union`, and `from torch.optim import Optimizer`.
    def clip_gradients(self, optimizer, clip_val=None):
        # use the trainer's clip val if none passed
        grad_clip_val = self.trainer.gradient_clip_val
        if clip_val is not None:
            grad_clip_val = clip_val
        grad_clip_val = float(grad_clip_val)

        if grad_clip_val <= 0:
            return
        self._clip_gradients(optimizer, grad_clip_val)

    def _clip_gradients(self, optimizer: Optimizer, grad_clip_val: Union[float, int], norm_type: float = 2.0):
        if self.trainer.amp_backend:
            # with an AMP backend, clipping is delegated to the precision backend
            self.trainer.precision_connector.backend.clip_gradients(grad_clip_val, optimizer, norm_type)
        else:
            model = self.trainer.get_model()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=grad_clip_val, norm_type=norm_type)
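
A minimal usage sketch (gradient_clip_val and track_grad_norm are real Trainer arguments; the LightningModule is a placeholder):

    from pytorch_lightning import Trainer

    # clip gradients to a 2-norm of 10 and log the gradient 2-norm each step
    trainer = Trainer(gradient_clip_val=10, track_grad_norm=2)
    trainer.fit(my_lightning_module)  # any LightningModule instance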

Closing this issue as the parameter is already exposed in the Trainer. Feel free to re-open it if you think I missed something.

Best,
T.C
