Fix docstrings on torch/optim and torch/autograd #113218
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/113218
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure) As of commit a1f039b with merge base 9f71452: FLAKY - The following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from e9c1840 to 7c5cdb7 (Compare)
"""Decays the learning rate of each parameter group by a small constant factor. | ||
|
||
Until the number of epoch reaches a pre-defined milestone: total_iters. |
"""Decays the learning rate of each parameter group by a small constant factor. | |
Until the number of epoch reaches a pre-defined milestone: total_iters. | |
"""Decays the learning rate of each parameter group by a small constant factor until the number of epoch reaches a pre-defined milestone: total_iters. |
"""Decays the learning rate of each parameter group by linearly changing small multiplicative factor. | ||
|
||
Until the number of epoch reaches a pre-defined milestone: total_iters. |
"""Decays the learning rate of each parameter group by linearly changing small multiplicative factor. | |
Until the number of epoch reaches a pre-defined milestone: total_iters. | |
"""Decays the learning rate of each parameter group by linearly changing small multiplicative factor until the number of epoch reaches a pre-defined milestone: total_iters. |
    optimization process and milestone points that provides exact intervals to reflect
    which scheduler is supposed to be called at a given epoch.
    """
    Receives the list of schedulers that is expected to be called sequentially during optimization process.
Suggested change:

    Receives the list of schedulers that is expected to be called sequentially during the optimization process.
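For context, a small sketch of the SequentialLR API this docstring is describing; the choice of sub-schedulers and the milestone value are illustrative.

```python
from torch import nn, optim
from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR

optimizer = optim.SGD(nn.Linear(4, 2).parameters(), lr=0.05)

# Warm up with a constant factor for 5 epochs, then hand over to exponential decay.
warmup = ConstantLR(optimizer, factor=0.1, total_iters=5)
decay = ExponentialLR(optimizer, gamma=0.9)
scheduler = SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[5])

for epoch in range(10):
    optimizer.step()
    scheduler.step()
```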
@svekars these suggested lines are too long for flake8; should I add them to the PR?
torch/optim/optimizer.py (Outdated)
r"""Register a load_state_dict pre-hook which will be called before | ||
:meth:`~torch.optim.Optimizer.load_state_dict` is called. It should have the | ||
following signature:: | ||
r"""Register a load_state_dict pre-hook which will be called. |
This seems odd. Can you revert this change?
You want to have the first sentence as:
Register a load_state_dict pre-hook which will be called before :meth:`~torch.optim.Optimizer.load_state_dict` is called
(with the proper quoting)
pydocstyle wants a summary line and the linter wants the line shorter than 120 characters.
Register a load_state_dict pre-hook which will be called before :meth:`~torch.optim.Optimizer.load_state_dict` is called
is still too long for the linter to accept because of the indentation.
I see. I think we should err on the side of being more informative rather than appeasing the linter though.
@soulitzer I also felt like it became odd. But I couldn't come up with another solution that could pass both pydocstyle and flake8.
It is probably fine to leave it as it was before. That way flake8 passes, and we don't regress the meaning of the first sentence.
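For reference, a rough sketch of the hook being documented; the hook body and key name are made-up illustrations, only the registration call and the (optimizer, state_dict) signature follow the existing API.

```python
from torch import nn, optim

optimizer = optim.SGD(nn.Linear(4, 2).parameters(), lr=0.05)

def strip_legacy_keys(opt, state_dict):
    # Called just before Optimizer.load_state_dict(); it may edit state_dict
    # in place or return a new dict to be loaded instead of the original.
    state_dict.pop("some_legacy_key", None)  # hypothetical cleanup step
    return state_dict

handle = optimizer.register_load_state_dict_pre_hook(strip_legacy_keys)
optimizer.load_state_dict(optimizer.state_dict())  # hook runs here
handle.remove()
```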
Thanks for the fix, I added some suggestions (mainly where I think more context in the first sentence is important to understanding).
Thank you @soulitzer!!

@svekars Can you also look at soulitzer's review?

The autograd parts look good to me, but maybe @janeyx99 should take a closer look at the optim changes.
@@ -187,8 +185,9 @@ def __exit__(self, type, value, traceback):

class LambdaLR(LRScheduler):
    """Sets the learning rate of each parameter group to the initial lr
    times a given function. When last_epoch=-1, sets initial lr as lr.
    """Sets the learning rate of each parameter group to the initial lr times a given function.
"""Sets the learning rate of each parameter group to the initial lr times a given function. | |
"""Set the learning rate of each parameter group to the initial lr times a given function. |
Should this be singularized following the pattern everywhere else?
Classes don't necessarily have to start with an imperative verb.
Therefore, I didn't modify the class docstrings to make them imperative.
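For context, a minimal illustration of the LambdaLR behavior this summary line describes; the lambda and lr values are arbitrary examples.

```python
from torch import nn, optim
from torch.optim.lr_scheduler import LambdaLR

optimizer = optim.SGD(nn.Linear(4, 2).parameters(), lr=0.05)

# lr for each parameter group = initial lr * lr_lambda(epoch)
scheduler = LambdaLR(optimizer, lr_lambda=lambda epoch: 0.95 ** epoch)
```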
    in the specified function. When last_epoch=-1, sets initial lr as lr.
    """Multiply the learning rate of each parameter group by the factor given in the specified function.

    When last_epoch=-1, sets initial lr as lr.
Suggested change:

    When last_epoch=-1, set the initial lr as lr.
I didn't touch them since they are in a class and not the summary line.
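And the MultiplicativeLR counterpart, for comparison; the factor is illustrative.

```python
from torch import nn, optim
from torch.optim.lr_scheduler import MultiplicativeLR

optimizer = optim.SGD(nn.Linear(4, 2).parameters(), lr=0.05)

# Each scheduler.step() multiplies the current lr by lr_lambda(epoch),
# unlike LambdaLR, which rescales the initial lr.
scheduler = MultiplicativeLR(optimizer, lr_lambda=lambda epoch: 0.95)
```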
    times a given function. When last_epoch=-1, sets initial lr as lr.
    """Sets the learning rate of each parameter group to the initial lr times a given function.

    When last_epoch=-1, sets initial lr as lr.
Suggested change:

    When last_epoch=-1, set initial lr as lr.
here too
maybe not, since this is a class.
I also didn't touch them because they are in a class and not the summary line.
Sorry for the delayed review, I've recently returned.
Left my review; most edits are good but a few are wrong. Thank you for going through the docs!
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Thank you for your review @janeyx99!!
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
So sorry I did not get back to you earlier!
If you're willing to manually rebase and bring this PR up to date, please ping me and we can try to get this in this week.
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Fixes #112593

Description
Fixes the docstrings in the following files.