I would take this on, but first I wanted some clarity on the question: is it whether setting `loss = 0` vs. `loss = torch.autograd.Variable(torch.Tensor([0]))` is better, or just to explain why `loss = 0` is used instead?
> When instantiating loss at a real zero, I think you rather meant something like (I don't know if there is a better way, but I usually do it this way): `loss = torch.autograd.Variable(torch.Tensor([0]))`
I think instantiating the loss with `0` (a Python int) is not a problem, because once you use the add operation with an integer and a tensor, the result is promoted to a tensor, so autograd tracks the accumulated loss as usual.
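The point above can be checked directly. This is a minimal sketch (not the tutorial's code): the accumulator starts as a plain Python int, becomes a tensor after the first addition, and gradients flow through it normally.

```python
import torch

# Start the accumulator as a plain Python int, as the tutorial does.
loss = 0

w = torch.randn(3, requires_grad=True)
for t in w:  # accumulate per-step losses, as in the training loop
    loss = loss + t ** 2  # int + Tensor -> Tensor after the first addition

print(type(loss))          # <class 'torch.Tensor'>
loss.backward()            # gradients flow back to w normally
print(w.grad is not None)  # True
```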
tutorials/intermediate_source/char_rnn_generation_tutorial.py
Line 281 in 48e5ccc
Hello folks at PyTorch,
When instantiating loss at a real zero, I think you rather meant something like (I don't know if there is a better way, but I usually do it this way):
loss = torch.autograd.Variable(torch.Tensor([0]))
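As a side note, `torch.autograd.Variable` has been deprecated since PyTorch 0.4, so if the tutorial were changed, a plain tensor would be the current way to write a zero-initialized accumulator. A minimal sketch of the equivalent, assuming a recent PyTorch:

```python
import torch

# Variable is deprecated; a plain zero tensor serves as the accumulator today.
loss = torch.zeros(1)

# Adding per-step losses works the same way as with Variable.
loss = loss + torch.tensor([1.5])
print(loss)  # tensor([1.5000])
```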