Fix some more core aten ops #6342
Conversation
LGTM
Force-pushed from ae77bfa to 34786e8
Force-pushed from 34786e8 to 0478d2f
I've bisected a large number of failures (all torchbench inference on XLA:GPU) down to this commit. Some example failures:
Does this ring any bells?
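For reference, a minimal sketch of the kind of bisection helper that can pin this down (the model and all names here are hypothetical stand-ins, assuming a working torch_xla install): run one torchbench-style inference on XLA:GPU and exit non-zero on failure, so `git bisect run` can drive it.

```python
# Hypothetical bisection helper: exits non-zero when XLA:GPU inference fails,
# so `git bisect run python repro.py` can mark commits good/bad automatically.
import sys

import torch
import torch_xla.core.xla_model as xm

def run_inference():
    device = xm.xla_device()
    # Stand-in for a torchbench model; any failing model would do here.
    model = torch.nn.Linear(16, 16).to(device)
    x = torch.randn(4, 16, device=device)
    out = model(x)
    xm.mark_step()  # force XLA compilation and execution
    return out

if __name__ == "__main__":
    try:
        run_inference()
    except Exception as e:
        print(f"FAIL: {e}", file=sys.stderr)
        sys.exit(1)
    sys.exit(0)
```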
Thanks for catching this. It's hard to identify the offending op just by looking at the trace, but this PR basically only touches two ops -- aten::reciprocal and aten::sigmoid. Let me revert the changes this PR makes to these two ops for now and investigate.
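As a sanity check for those two ops, a minimal sketch (assuming a working torch_xla install; the tolerance is arbitrary) comparing XLA results against eager CPU:

```python
# Consistency check for the two ops this PR touches:
# aten::reciprocal and aten::sigmoid, XLA vs. eager CPU.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
x = torch.rand(8) + 0.5  # keep inputs away from zero for reciprocal

for op in (torch.reciprocal, torch.sigmoid):
    expected = op(x)                 # eager CPU reference
    actual = op(x.to(device)).cpu()  # XLA result, moved back to CPU
    assert torch.allclose(expected, actual, atol=1e-5), op.__name__
print("reciprocal and sigmoid match eager results")
```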
This reverts commit 99a1341.
Reading the error, it's complaining that it's getting passed a boolean. @cota, is there something that describes the setup for me to reproduce that (running torchbench) on GPU?
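If the complaint is indeed about a boolean input, a hypothetical repro (my assumption, not confirmed in the thread) would check eager dtype promotion, which the XLA lowering has to match:

```python
# Hypothetical repro: eager PyTorch promotes bool inputs of these ops to the
# default float dtype, so a lowering that passes the bool through would break.
import torch

b = torch.tensor([True, False, True])
print(torch.reciprocal(b).dtype)  # torch.float32 in eager mode
print(torch.sigmoid(b).dtype)     # torch.float32 in eager mode
```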
This reverts commit 4ab7a24. It turns out the revert was unnecessary; things broke in a different commit. This reverts the revert, i.e. it reinstates pytorch#6342.
@wonjoolee95 I re-did the bisection, paying more attention this time. It turns out that the problem was introduced in a prior commit, not in this PR. My apologies! :(
Fixes #5896, fixes #5867, fixes #5884, fixes #5889