Re-land: upsample_bilinear: fix output data-type. #7168

Conversation
I thought …

haha, you need to first set the …

Hmmm. Let me try rebasing the PR.
Force-pushed from 91f8ef6 to 215c753
@JackCaoG Could you take a look at this PR whenever you have some time?
@@ -2118,6 +2118,59 @@ def fn(inp, s):
    self.assertEqual(out, Xout.cpu())
    self.assertEqual("f16", torch_xla._XLAC._get_xla_tensor_shape_type(Xout))

  # We skip TPU for 2 reasons:
  # 1. upsample_bilinear on f64 tensors doesn't work on TPUs
Will f64 work on TPU prior to your change? I don't want this to become a regression.
My change doesn't affect TPU. The lowering for upsample_bilinear is different for CUDA and TPU; this change only affects non-TPU and non-Neuron devices.
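To make the device split concrete, here is a minimal, purely illustrative sketch of the kind of dispatch being described. The function name, the `device_kind` parameter, and the f32-compute assumption are all invented for illustration; the real lowering lives in torch_xla's C++ code:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch (NOT the actual torch_xla lowering): TPU/Neuron take
# their own path, so the dtype fix never touches them; the generic path is
# assumed to compute in f32 and must cast back to the input's data-type.
def upsample_bilinear2d_sketch(inp: torch.Tensor, size, device_kind: str):
    if device_kind in ("TPU", "NEURON"):
        # Short-circuit: a separate lowering, unaffected by this PR.
        return F.interpolate(inp, size=size, mode="bilinear",
                             align_corners=False)
    # Non-TPU/non-Neuron path: compute in f32, then restore the input dtype.
    out = F.interpolate(inp.to(torch.float32), size=size, mode="bilinear",
                        align_corners=False)
    return out.to(inp.dtype)

# An f64 input should come back as f64 on every path.
x = torch.rand(1, 3, 8, 8, dtype=torch.float64)
assert upsample_bilinear2d_sketch(x, (16, 16), "CUDA").dtype == torch.float64
```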
Re-land: #7111
Previously, it was reverted because it broke TPU CI. That happened due to the newly added test in that PR, specifically because it called upsample_bilinear with an f64 data-type. Since the bug exists only on non-TPU and non-Neuron device types, we simply skip the test on TPU and Neuron. That's safe because those devices take a short-circuit path that avoids the bug.

cc @miladm @JackCaoG
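For reference, here is a minimal sketch of what such a dtype-preserving test could look like. The skip condition via `PJRT_DEVICE` and all names here are assumptions for illustration, not the exact test added in #7111:

```python
import os
import unittest

import torch
import torch.nn.functional as F
import torch_xla.core.xla_model as xm


def _is_tpu_or_neuron():
  # Assumption: PJRT_DEVICE identifies the backend, as in typical
  # torch_xla test setups.
  return os.environ.get("PJRT_DEVICE") in ("TPU", "NEURON")


class UpsampleBilinearDtypeTest(unittest.TestCase):

  @unittest.skipIf(_is_tpu_or_neuron(),
                   "f64 upsample_bilinear doesn't work on TPU, and the bug "
                   "only exists on non-TPU/non-Neuron devices anyway")
  def test_upsample_bilinear_keeps_input_dtype(self):
    device = xm.xla_device()
    inp = torch.rand(1, 3, 10, 10, dtype=torch.float64, device=device)
    out = F.interpolate(inp, scale_factor=2, mode="bilinear")
    # Before the fix, the output dtype could silently differ from the
    # input's f64 on non-TPU/non-Neuron devices.
    self.assertEqual(out.dtype, inp.dtype)


if __name__ == "__main__":
  unittest.main()
```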