Fix the loss initialization in intermediate_source/char_rnn_generation_tutorial.py #2380
Conversation
I'll approve, but as mentioned in the original issue #844 (comment), the current code is just fine. We don't need to initialize loss as a tensor.
@@ -278,7 +278,7 @@ def train(category_tensor, input_line_tensor, target_line_tensor):

     rnn.zero_grad()

-    loss = 0
+    loss = torch.tensor([0])  # in PyTorch 2.0 and later this doesn't need to be initialized as a tensor; ``loss = 0`` also works
I believe this is true for any PyTorch version; it's not related to 2.0.
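For context, here is a minimal sketch (a stand-in linear layer, not the tutorial's actual RNN) of why `loss = 0` already works: Python's `int + Tensor` promotes the result to a tensor on the first addition, so the accumulated loss carries the autograd graph regardless of PyTorch version.

```python
import torch
import torch.nn as nn

# Minimal sketch (stand-in model, not the tutorial's RNN): accumulating
# per-step losses into a plain Python 0 still yields a Tensor, because
# int + Tensor promotes to Tensor on the first addition.
torch.manual_seed(0)
layer = nn.Linear(8, 10)                 # stand-in for the tutorial's rnn
criterion = nn.NLLLoss()

loss = 0                                 # plain Python int, as in the current tutorial
for _ in range(5):                       # pretend each iteration is one character
    output = torch.log_softmax(layer(torch.randn(1, 8)), dim=1)
    target = torch.randint(0, 10, (1,))
    loss = loss + criterion(output, target)  # first addition promotes loss to a Tensor

print(type(loss))                        # <class 'torch.Tensor'>
loss.backward()                          # autograd works; no tensor initialization needed
```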
Fixes #844
Description
Changed the loss initialization from 0 to torch.tensor([0]) to avoid confusion about dtypes.
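A hedged illustration of the dtype point (the values and the float literal below are my own; the PR itself uses `torch.tensor([0])`): initializing the accumulator as an explicit float tensor makes its dtype and shape visible up front, instead of relying on type promotion during the first addition.

```python
import torch

# Illustration only: the PR uses torch.tensor([0]); a float literal is shown
# here so the accumulator's dtype is float from the start.
loss = torch.tensor([0.0])               # explicit float32 tensor, shape (1,)
step_loss = torch.tensor(0.6931)         # stand-in for one criterion(...) value
loss = loss + step_loss                  # dtype and shape stay as initialized
print(loss, loss.dtype)                  # tensor([0.6931]) torch.float32
```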
Checklist
cc @svekars @carljparker @kit1980