Add dynamo expand test. #6659
Conversation
test/dynamo/test_dynamo.py
@@ -568,6 +568,24 @@ def test_all_cpu_tensor(self):
    self.assertIn('MarkStep', met.counter_names())


class DynamoOperationsTests(test_utils.XlaTestCase):
Can we add a flag to skip this test while we address the corresponding bug you will soon open on "expand failing on dynamo"?
You can add a .torch_pin to build against your patch PR, following the instructions here: https://github.com/pytorch/xla/tree/master/torch_patches. That way we don't need to add any guards; we just need to land the two patches together.
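As a sketch of the workflow described above: the torch_patches README linked in the comment documents the exact format; the snippet below assumes a .torch_pin file at the repository root containing the upstream PR number (here #121007, the fix PR referenced later in this thread).

```shell
# Pin the PyTorch/XLA CI build to an unmerged upstream PyTorch PR by
# adding a .torch_pin file at the repository root (assumed format; see
# the torch_patches README linked above for the authoritative one).
echo '#121007' > .torch_pin
cat .torch_pin

# Remember to remove the pin before landing the PR, e.g.:
#   git rm .torch_pin
```

Note the reviewer's point below: the pin lets CI exercise both patches together, but it must be deleted before merge so the repository tracks PyTorch main again.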
Great, it looks like all tests pass! Please ping me once your upstream PR is approved. Also, we need to remove the .torch_pin before landing the PR.
Force-pushed from b3aa233 to 372c226.
LGTM.
This PR adds a test for #5837. The fix was introduced in the main PyTorch repository (pytorch/pytorch#121007), but we need this PyTorch/XLA test to actually exercise the (previously) failing case.
cc @miladm @JackCaoG