Fix log-softmax unused issue (#2420)
Fixes: #800

Co-authored-by: Svetlana Karslioglu <svekars@fb.com>
j3soon and Svetlana Karslioglu authored Jun 9, 2023
1 parent a58279c commit 3b20fe6
Showing 1 changed file with 9 additions and 2 deletions: beginner_source/transformer_tutorial.py
@@ -38,8 +38,15 @@
 # of the word (see the next paragraph for more details). The
 # ``nn.TransformerEncoder`` consists of multiple layers of
 # `nn.TransformerEncoderLayer <https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html>`__.
-# To produce a probability distribution over output words, the output of
-# the ``nn.TransformerEncoder`` model is passed through a linear layer.
+# Along with the input sequence, a square attention mask is required because the
+# self-attention layers in ``nn.TransformerDecoder`` are only allowed to attend
+# the earlier positions in the sequence. For the language modeling task, any
+# tokens on the future positions should be masked. To produce a probability
+# distribution over output words, the output of the ``nn.TransformerEncoder``
+# model is passed through a linear layer to output unnormalized logits.
+# The log-softmax function isn't applied here due to the later use of
+# `CrossEntropyLoss <https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html>`__,
+# which requires the inputs to be unnormalized logits.
 #
 
 import math
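For readers skimming the diff, here is a minimal, self-contained sketch of the pattern the revised comment describes: the encoder output is passed through a linear layer that emits raw (unnormalized) logits, a square causal mask keeps each position from attending to later positions, and CrossEntropyLoss consumes the logits directly, so no log-softmax is applied. The sizes (vocab_size, d_model, seq_len, batch_size, nhead, num_layers) are illustrative placeholders, not the tutorial's actual values.

    import torch
    import torch.nn as nn

    # Illustrative sizes only (not the tutorial's values).
    vocab_size, d_model, seq_len, batch_size = 1000, 64, 16, 4

    # Square causal mask: position i may attend only to positions <= i.
    causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

    encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4)
    encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
    output_head = nn.Linear(d_model, vocab_size)  # produces unnormalized logits

    src = torch.rand(seq_len, batch_size, d_model)        # already-embedded input
    logits = output_head(encoder(src, mask=causal_mask))  # no log-softmax here
    targets = torch.randint(vocab_size, (seq_len, batch_size))

    # CrossEntropyLoss applies log-softmax internally, so it expects raw logits.
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab_size), targets.reshape(-1))

In the actual tutorial the encoder is fed by embedding and positional-encoding layers; the point here is only that the final linear output is left as logits for CrossEntropyLoss, which is exactly what the added comment explains.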
