This repository has been archived by the owner on Nov 22, 2022. It is now read-only.
Conversation
facebook-github-bot added the CLA Signed label on Oct 30, 2019
This pull request was exported from Phabricator. Differential Revision: D18218091
debowin force-pushed the export-D18218091 branch from e3cb284 to bfce4c1 on November 5, 2019 at 21:53
debowin pushed a commit to debowin/pytext that referenced this pull request on Nov 5, 2019

Summary:
Pull Request resolved: facebookresearch#1088

This diff torchscriptifies RoBERTa-QA to make it exportable in the same manner as BERT-QA. It also converts the has_answer_logits tensor to float before doing an argmax in squad_output_layer in order to use the FP16FairseqOptimizer. This otherwise breaks as argmax_cuda isn't supported for "Half" datatype.

Differential Revision: D18218091
fbshipit-source-id: a53ea099da52e13784f327d533348524dfd6e5d5
debowin pushed a commit to debowin/pytext that referenced this pull request on Nov 5, 2019

Summary:
Pull Request resolved: facebookresearch#1088

This diff torchscriptifies RoBERTa-QA to make it exportable in the same manner as BERT-QA. It also converts the has_answer_logits tensor to float before doing an argmax in squad_output_layer in order to use the FP16FairseqOptimizer. This otherwise breaks as argmax_cuda isn't supported for "Half" datatype.

Differential Revision: D18218091
fbshipit-source-id: 96a112260a20ae836fbfad88994b6a934be2062c
debowin force-pushed the export-D18218091 branch from bfce4c1 to 5b3d948 on November 5, 2019 at 22:35
debowin force-pushed the export-D18218091 branch from 5b3d948 to 7dc5fcf on November 8, 2019 at 23:22
debowin pushed a commit to debowin/pytext that referenced this pull request on Nov 8, 2019

Summary:
Pull Request resolved: facebookresearch#1088

This diff torchscriptifies RoBERTa-QA to make it exportable in the same manner as BERT-QA. It also converts the has_answer_logits tensor to float before doing an argmax in squad_output_layer in order to use the FP16FairseqOptimizer. This otherwise breaks as argmax_cuda isn't supported for "Half" datatype.

Differential Revision: D18218091
fbshipit-source-id: d917a82663be0b42f337b84b048a18346ad39b41
Summary:
Pull Request resolved: facebookresearch#1088

This diff torchscriptifies RoBERTa-QA to make it exportable in the same manner as BERT-QA. It also converts the has_answer_logits tensor to float before doing an argmax in squad_output_layer in order to use the FP16FairseqOptimizer. This otherwise breaks as argmax_cuda isn't supported for "Half" datatype.

Reviewed By: hikushalhere

Differential Revision: D18218091
fbshipit-source-id: 7c315b17fedfe82915d396179bef28367dce1d56
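For context on the export half of the change, here is a minimal, hypothetical sketch of scripting a QA-style head with TorchScript. The class name, dimensions, and file path below are illustrative stand-ins, not PyText's actual RoBERTa-QA or BERT-QA modules or export path.

```python
# Hypothetical sketch: scripting a toy QA head for export. PyText's real
# exporter also handles tokenization, the encoder, and config wiring; this
# only shows the torch.jit.script + save step the commit message refers to.
import torch
import torch.nn as nn


class ToyQAHead(nn.Module):
    """Illustrative stand-in for a RoBERTa-QA / BERT-QA scoring head."""

    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        self.span_head = nn.Linear(hidden_dim, 2)        # start/end logits
        self.has_answer_head = nn.Linear(hidden_dim, 2)  # answerable or not

    def forward(self, encoded: torch.Tensor):
        # encoded: (batch, seq_len, hidden_dim) token representations
        span_logits = self.span_head(encoded)            # (batch, seq_len, 2)
        start_logits = span_logits[:, :, 0]
        end_logits = span_logits[:, :, 1]
        # Score "has answer" from the first ([CLS]-like) token.
        has_answer_logits = self.has_answer_head(encoded[:, 0, :])
        return start_logits, end_logits, has_answer_logits


# Compile to TorchScript and save, so the head can be loaded and run
# without its Python class definition being available.
scripted = torch.jit.script(ToyQAHead())
torch.jit.save(scripted, "toy_qa_head.pt")
```

The saved artifact can then be reloaded with torch.jit.load and executed outside the training codebase, which is the property the diff wants RoBERTa-QA to share with BERT-QA.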
debowin force-pushed the export-D18218091 branch from 7dc5fcf to 2930f1f on November 11, 2019 at 22:47
This pull request has been merged in 8afd065.
Summary:
This diff torchscriptifies RoBERTa-QA to make it exportable in the same manner as BERT-QA.
It also converts the has_answer_logits tensor to float before doing an argmax in squad_output_layer in order to use the FP16FairseqOptimizer. Without the cast, this breaks because argmax_cuda isn't supported for the "Half" datatype.
Differential Revision: D18218091
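To make the dtype fix concrete, here is a minimal sketch of the cast-before-argmax pattern the summary describes. The tensor name and shapes are illustrative and not taken from PyText's squad_output_layer.

```python
# Illustrative sketch of the fix: cast half-precision logits to float32
# before argmax, so FP16 training (e.g. with an FP16 optimizer) does not
# hit the missing argmax kernel for "Half" CUDA tensors that this diff
# worked around. Names and shapes here are hypothetical.
import torch

has_answer_logits = torch.randn(4, 2, dtype=torch.float16)  # FP16 logits

# The cast is the key step: argmax runs on a float32 copy of the tensor.
has_answer = has_answer_logits.float().argmax(dim=-1)
print(has_answer.shape)  # torch.Size([4])
```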