Transformer Example Update - AutoModelforSequenceClassification #2190
Conversation
Tried on Colab and it seems this does not work correctly... @Ishan-Kumar2 could you check please? Thanks!!
Thanks for investigating @sdesrozis. Thanks for the PR @Ishan-Kumar2, please make sure your code works before pushing your updates, thanks!
Hi @sdesrozis, I tested on my machine and on Colab just now, and it seems to be working for me. Could you share the notebook you are running, or please have a look at this one (I stopped the run after 1 epoch)?
@Ishan-Kumar2 I confirm it does not work using GPU, which is not enabled in your notebook. Anyway, it would be great if your branch worked correctly (version 0.5 instead of 0.4.6).
@sdesrozis thanks for the review. I realized there was a slight mistake in transferring the input to the GPU. I have fixed it now and tested on Colab; it works fine. Please take a look, thanks!
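The kind of fix described in this comment follows a common pattern: every tensor produced by the tokenizer must be moved to the same device as the model before the forward pass, otherwise the script fails (or silently stays on CPU) when a GPU is enabled. A minimal sketch, assuming the batch is a mapping of torch-style tensors; the helper name `batch_to_device` is hypothetical, not taken from the PR:

```python
# Hypothetical helper sketching the fix: move every tensor in a tokenizer
# batch (e.g. input_ids, attention_mask) to the device the model lives on.
# Works on any mapping whose values expose a torch-style .to(device).
def batch_to_device(batch, device):
    """Return a copy of `batch` with every value moved to `device`."""
    return {name: tensor.to(device) for name, tensor in batch.items()}
```

In a training step this would be called as `batch = batch_to_device(batch, device)` right before `model(**batch)`, so CPU and GPU runs behave identically.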
Thanks for the updates @Ishan-Kumar2, the first issue has been solved, but there is another issue related to GPU memory in Colab with the default
Hey @Ishan-Kumar2, any updates on this?
@KickItLikeShika, added the commit. Sorry for the delay, I was stuck on why there was a sudden increase in GPU memory usage when going from Epoch 0 to Epoch 1.
Very nice work, Ishan! Thanks a lot for the updates!
@KickItLikeShika I have made a minor change reverting the validation dataloader batch_size to 2 * the train_loader batch_size. This wasn't the reason for the out-of-memory error earlier.
Hey @Ishan-Kumar2 and @KickItLikeShika, thank you for working on this PR. Just wanted to let you know that we are in the process of shifting ignite/examples to https://github.com/pytorch-ignite/examples. While this PR is a great improvement, it might not be relevant and can be removed once we make tutorials out of this, one of which is already in progress: pytorch-ignite/examples#32, which uses
@Ishan-Kumar2 Thanks! LGTM!
Fixes #2189
Description:
Changes the model from AutoModel to AutoModelForSequenceClassification so that any model which supports sequence classification can be used without the need to make changes to the code.

Check list:
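The core change described above can be sketched as follows. This is a minimal sketch assuming the Hugging Face transformers library; `build_model` is a hypothetical helper and the checkpoint name is the caller's choice, neither taken verbatim from the PR:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def build_model(model_name, num_labels=2):
    """Load a tokenizer and a model with a sequence-classification head.

    AutoModel returns the bare encoder with no classification head;
    AutoModelForSequenceClassification loads (or initializes) a head on
    top, so any checkpoint that supports sequence classification works
    without further code changes.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=num_labels
    )
    return tokenizer, model
```

For example, `build_model("bert-base-uncased")` or `build_model("distilbert-base-uncased")` would both yield a model whose forward pass returns classification logits, which is what the updated example trains on.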