
Re-land: upsample_bilinear: fix output data-type. #7168

Merged · 2 commits · Jun 4, 2024

Conversation

ysiraichi
Collaborator

Re-land: #7111

Previously, this was reverted because it broke the TPU CI. That happened because of the test newly added in that PR: it called upsample_bilinear with the f64 data-type. Since the bug only occurred on non-TPU and non-Neuron device types, we simply skip the test on TPU and Neuron. That is safe because those devices take a short-circuit path that avoids the bug.

cc @miladm @JackCaoG
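The device-conditional skip described above can be sketched with stdlib unittest alone. Note that `xla_device_type` below is a hypothetical stand-in for torch_xla's real device query, and the test body is elided; this is only a sketch of the skip pattern, not the actual test from the PR.

```python
import unittest

def xla_device_type():
    # Hypothetical stand-in for torch_xla's runtime device query
    # (assumption, not the real API); pretend we are running on a TPU.
    return "TPU"

class UpsampleBilinearF64Test(unittest.TestCase):
    # Skip on TPU/Neuron: f64 upsample_bilinear is unsupported there, and
    # those devices take a short-circuit path that avoids the bug anyway.
    @unittest.skipIf(xla_device_type() in ("TPU", "NEURON"),
                     "f64 upsample_bilinear unsupported on TPU/Neuron")
    def test_f64_output_dtype(self):
        # On CUDA/CPU this would call upsample_bilinear on an f64 tensor
        # and check the output data-type; elided in this sketch.
        pass

# Run the suite programmatically so we can inspect the skip.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(UpsampleBilinearF64Test)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("skipped:", len(result.skipped))
```

On a simulated TPU the test is recorded as skipped rather than failed, which is exactly why the re-land no longer breaks the TPU CI.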

@ysiraichi
Collaborator Author

I thought tpuci enabled TPU CI. Doesn't look like it.
@JackCaoG Is there any way to enable TPU CI?

@JackCaoG
Collaborator

JackCaoG commented Jun 3, 2024

Haha, you need to first set the tpu-ci tag and then rerun the CI; that way the TPU CI will get rerun. I just triggered a rerun.

@ysiraichi
Collaborator Author

Hmmm. Let me try rebasing the PR.

@ysiraichi force-pushed the ysiraichi/reland-upsample-data-type-issue branch from 91f8ef6 to 215c753 on June 3, 2024 at 20:20
@ysiraichi
Collaborator Author

@JackCaoG Could you take a look at this PR whenever you have some time?

@@ -2118,6 +2118,59 @@ def fn(inp, s):
self.assertEqual(out, Xout.cpu())
self.assertEqual("f16", torch_xla._XLAC._get_xla_tensor_shape_type(Xout))

# We skip TPU for 2 reasons:
# 1. upsample_bilinear on f64 tensors doesn't work on TPUs
Collaborator

Will f64 work on TPU prior to your change? I don't want this to become a regression.

Collaborator Author

My change doesn't affect TPU. The lowering for upsample_bilinear is different for CUDA and TPU. This change only affects non-TPU and non-Neuron devices.

@JackCaoG JackCaoG merged commit e563cfe into master Jun 4, 2024
22 checks passed