Seq2Seq Transformer Tutorial #1225
The tutorial has used
The tutorial is supposed to be about using the Second, the task seems to be language modeling, i.e., given a sequence of tokens as input, generate another sequence of tokens with the input sequence as context. This ought to be mentioned explicitly, since it's a beginner tutorial. Also, I am unable to run the tutorial on Google Colab. At first, I got a dependency issue upon attempting to run the following line:
After running
Didn't paste the entire trace since it has 61 frames.
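The next-token language-modeling setup described above can be sketched without torchtext at all. This is a minimal illustration of the batching idea the tutorial relies on (the function name `get_batch` mirrors the tutorial; the token list here is a made-up example): the target is simply the input shifted one position to the right.

```python
def get_batch(tokens, i, bptt=4):
    """Slice a (source, target) pair for next-token prediction.

    The target sequence is the source shifted one position right,
    so the model learns to predict token i+1 from tokens up to i.
    """
    seq_len = min(bptt, len(tokens) - 1 - i)
    source = tokens[i : i + seq_len]
    target = tokens[i + 1 : i + 1 + seq_len]
    return source, target

# Toy example: each source position's label is the following token.
tokens = ["the", "cat", "sat", "on", "the", "mat"]
src, tgt = get_batch(tokens, 0)
# src -> ["the", "cat", "sat", "on"]
# tgt -> ["cat", "sat", "on", "the"]
```

In the actual tutorial the same slicing is done on a tensor of token IDs, but the shift-by-one relationship between input and target is identical.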
@QasimKhan5x I got this error as well. It may be due to the versions of torchtext and its supporting libraries.
/assigntome |
I found it confusing that the diagram of the Transformer used in the tutorial includes the decoder, so I have made it clear that the task focuses on predicting the next word from a sequence and that nn.TransformerDecoder is not used. I also hit the error mentioned earlier. The cause was re-running the same code without calling exit() after installing portalocker, once the error about the missing portalocker package had already been raised. Installing portalocker alongside torchdata before triggering the error resolved the problem.
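The workaround described above can be sketched as a setup fragment for a fresh Colab runtime (package names only; pinned versions may be needed depending on the installed torch/torchtext combination):

```shell
# Install the data-loading dependencies BEFORE running any torchtext
# dataset code, so the missing-portalocker error is never triggered.
pip install portalocker torchdata

# In Colab, restart the runtime (or call exit()) after installing,
# so the freshly installed packages are picked up by the interpreter.
```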
I'm having difficulty understanding a few aspects of the Seq2Seq transformer tutorial (https://pytorch.org/tutorials/beginner/transformer_tutorial.html)
Appreciate any help.
cc @pytorch/team-text-core @Nayef211