Record the lazy tracing time(C++) in metrics #5757
Conversation
Thanks! Kokoro CI seems to fail because it doesn't recognize the new metric name. Is this expected because it requires an upstream PyTorch pin?
Hmm, not sure. I can rerun the CI once the upstream change is merged.
t1 = torch.tensor(156, device=xla_device)
t2 = t1 + 100
self.assertIn('LazyTracing', met.metric_names())
self.assertGreater(met.metric_data('LazyTracing')[0], 1)
So the op counter will still be an int, but this adds another LazyTracing metric?
yea.
LGTM.
* Record the lazy tracing time (C++) in metrics
* Delete torch_patches/.torch_pin
This way we at least know how long it takes to trace an op on the C++ PyTorch/XLA side.
Upstream PR: pytorch/pytorch#112679
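For readers unfamiliar with how a timed metric like LazyTracing accumulates samples, here is a minimal pure-Python sketch of the scoped-timer pattern: a `with` block records (count, total time) into a metrics registry under a name, and the assertions mirror the `metric_names()` / `metric_data()` checks in the test above. All names here (`TimedScope`, `_metrics`, `metric_data`) are hypothetical illustrations, not the actual torch_xla API.

```python
import time
from collections import defaultdict

# Hypothetical registry: metric name -> [sample_count, total_ns].
# This is an illustrative stand-in, not the real torch_xla metrics store.
_metrics = defaultdict(lambda: [0, 0])


class TimedScope:
    """Record wall time spent inside a `with` block under `name`."""

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start = time.perf_counter_ns()
        return self

    def __exit__(self, *exc):
        elapsed = time.perf_counter_ns() - self.start
        entry = _metrics[self.name]
        entry[0] += 1          # one more sample
        entry[1] += elapsed    # accumulate elapsed nanoseconds
        return False           # do not swallow exceptions


def metric_names():
    return list(_metrics)


def metric_data(name):
    # Returns (sample_count, total_ns), loosely analogous to the
    # first element checked by met.metric_data('LazyTracing')[0].
    count, total = _metrics[name]
    return count, total


# Usage: wrap the "tracing" work, then verify the metric was recorded.
with TimedScope("LazyTracing"):
    sum(range(1000))  # stand-in for op tracing work

assert "LazyTracing" in metric_names()
assert metric_data("LazyTracing")[0] >= 1
```

The same pattern applied on the C++ side (a timer started on scope entry, a sample recorded on scope exit) is what lets the Python test assert that the metric exists and has a positive sample count without caring about the exact durations.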