Add repr method on tensor subclass #397
Conversation
[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/397
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit c22adf2 with merge base f5b6ec9. This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 1adc8a1bb74725d7c9b1709a5df70973d2b60497 Pull Request resolved: #397
[ghstack-poisoned]
ghstack-source-id: f3d4313996236644408268f914fa880cb079165d Pull Request resolved: #397
torchao/dtypes/aqt.py
Outdated
@@ -480,6 +480,9 @@ def _apply_fn_to_data(self, fn):
        self.scale_and_zero = fn(self.scale_and_zero)
        return self

    def __repr__(self):
        return f"TensorCoreTiledAQTLayout(packed_weight={self.packed_weight}, scale_and_zero={self.scale_and_zero})"
I think you can display the result of get_plain() here; packed_weight is not very useful.

int_data, scale, zero_point = self.get_plain()
return f"TensorCoreTiledAQTLayout(int_data={int_data}, scale={scale}, zero_point={zero_point})"
Ah ok sure!
Without this, printing falls back to the default tensor __repr__ method, which runs some aten ops to format the tensor data. This results in triggering the subclass's __torch_dispatch__.

[ghstack-poisoned]
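To illustrate the point, here is a minimal toy wrapper subclass (a sketch, not the torchao tensor, using the standard _make_wrapper_subclass pattern): __torch_dispatch__ logs every aten op routed through the subclass, and defining __repr__ keeps ordinary printing from going down that path.

import torch
from torch.utils._pytree import tree_map

class PrintLoggingTensor(torch.Tensor):
    # Toy wrapper subclass: the outer tensor carries no data of its own,
    # the real data lives in self.elem.
    __torch_function__ = torch._C._disabled_torch_function_impl

    @staticmethod
    def __new__(cls, elem):
        return torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), dtype=elem.dtype, device=elem.device
        )

    def __init__(self, elem):
        self.elem = elem

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print(f"dispatched: {func}")  # every aten op on the subclass lands here
        unwrap = lambda x: x.elem if isinstance(x, cls) else x
        return func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))

    # With this defined, print(t) no longer falls back to torch.Tensor.__repr__,
    # which formats the data via aten ops and would hit __torch_dispatch__.
    def __repr__(self):
        return f"PrintLoggingTensor(elem={self.elem})"

t = PrintLoggingTensor(torch.randn(2, 2))
print(t)   # custom __repr__, no "dispatched: ..." lines
_ = t + t  # a real op still goes through __torch_dispatch__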
ghstack-source-id: f644a8d9f41ea3c999b2f54bd370e93a12ef4f35 Pull Request resolved: #397
Btw @tugsbayasgalan, do you intend to merge this to main? It seems like you're targeting your own branch.
btw, the way to merge this PR is the following (heard from Vasiliy):
@tugsbayasgalan has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Merged ea71549 into gh/tugsbayasgalan/1/base
Summary:
* added repr for TensorCoreTiledAQTLayoutTensor: pytorch#397
* removed the str -> apply_tensor_class map that was used to implement string APIs, which is removed in pytorch#400

Test Plan: CI
Stack from ghstack (oldest at bottom):
Without this, printing goes through the default tensor repr method, which runs some aten ops to print the tensor. This results in triggering the subclass's torch_dispatch.
Differential Revision: D58786586