Issues: pytorch/xla
[benchmarks] dlrm training running twice on dynamo and non-dynamo configurations. (xla:gpu) #7976, opened Sep 9, 2024 by ysiraichi
[torchbench] hf_BigBird (inference and training) fails to run on dynamo. (xla:gpu) #7833, opened Aug 12, 2024 by ysiraichi
[torchbench] moco fails to run with CUDA OpenXLA fallback. (xla:gpu) #7647, opened Jul 9, 2024 by ysiraichi
F.embedding_bag(..., mode='max') yields different results than PyTorch eager. (xla:gpu) #7588, opened Jun 28, 2024 by ysiraichi
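
This entry names a concrete API, so a minimal, hedged sketch of the eager-vs-XLA comparison it describes may help; it assumes a working torch_xla install with an XLA device, and the embedding table and bag indices are illustrative placeholders, not the failing case from the report:

    import torch
    import torch.nn.functional as F
    import torch_xla.core.xla_model as xm

    # Illustrative embedding table and 2-D bag input (one bag per row).
    weight = torch.randn(10, 4)
    bags = torch.tensor([[0, 2, 4], [1, 3, 5]])

    # Reference result with PyTorch eager on CPU.
    eager_out = F.embedding_bag(bags, weight, mode='max')

    # Same computation on the XLA device.
    device = xm.xla_device()
    xla_out = F.embedding_bag(bags.to(device), weight.to(device), mode='max')

    # The issue reports that these two results can disagree.
    print(torch.allclose(eager_out, xla_out.cpu()))
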
Sharing tensor storage (with DLPack) results in unexpected behavior. (xla:gpu) #7304, opened Jun 17, 2024 by ysiraichi
In-place operations on a DLPack-aliased XLA tensor do not propagate. (xla:gpu) #7198, opened Jun 5, 2024 by ysiraichi
[torchbench] hf_T5_large training fails to run on dynamo. (xla:gpu) #6901, opened Apr 8, 2024 by ysiraichi
[torchbench] hf_GPT2 and hf_GPT2_large training fails to run on dynamo. (xla:gpu) #6900, opened Apr 8, 2024 by ysiraichi
torch.pow result mismatch between PyTorch eager and PyTorch/XLA. (xla:gpu) #6750, opened Mar 14, 2024 by ysiraichi
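
As with the embedding_bag entry above, here is a hedged sketch of the kind of backend comparison this report describes, assuming torch_xla is installed; the operands are illustrative, not the values that diverge in the issue:

    import torch
    import torch_xla.core.xla_model as xm

    # Illustrative operands; the issue concerns torch.pow results differing
    # between eager and PyTorch/XLA, not these specific values.
    base = torch.randn(8)
    exponent = torch.tensor(3.0)

    eager_out = torch.pow(base, exponent)

    device = xm.xla_device()
    xla_out = torch.pow(base.to(device), exponent.to(device))

    # Report the largest absolute difference between the two backends.
    print((eager_out - xla_out.cpu()).abs().max())
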
[torchbench] cm3leon_generate inference running significantly slower than inductor. (xla:gpu) #6541, opened Feb 15, 2024 by ysiraichi
[torchbench] hf_T5_generate inference running significantly slower than inductor. (xla:gpu) #6540, opened Feb 15, 2024 by ysiraichi
[torchbench] Background_Matting fails when lowering UpsampleBilinear2D. (xla:gpu) #6520, opened Feb 12, 2024 by ysiraichi
Tracking issue: PyTorch precision upcast issue. (xla:gpu) #6404, opened Jan 29, 2024 by ysiraichi (2 of 3 tasks)
Unexpected upcasted output when XLA_USE_FP16 and XLA_USE_BF16 are set. (xla:gpu) #6403, opened Jan 29, 2024 by ysiraichi
[torchbench] opacus_cifar10 training runs unexpectedly without errors. (xla:gpu) #6391, opened Jan 26, 2024 by ysiraichi
[torchbench] opacus_cifar10 memory not freed after each run. (xla:gpu) #6380, opened Jan 25, 2024 by ysiraichi
[torchbench] Regression: detectron2_maskrcnn training fails on non-dynamo. (xla:gpu) #6353, opened Jan 22, 2024 by ysiraichi
[torchbench] Training benchmarks failing with: tensor does not require grad. (xla:gpu) #6084, opened Dec 9, 2023 by ysiraichi (3 tasks)