Forward XLATensorImpl::is_contiguous_custom to TensorImpl #8032

base: master
Changes from all commits:

- 857867d
- 4c897e7
- dfc32cf
- ca02543
- 408ae74
- 8c324e0
- 5d914a4
- 780d2c6
- b045e2c
- 8239b9b
- fc49779
- d42b1c4
```diff
@@ -173,9 +173,16 @@ int64_t XLATensorImpl::numel_custom() const {
 }
 
 bool XLATensorImpl::is_contiguous_custom(at::MemoryFormat memory_format) const {
+  // If functionalization is disabled, the tensors' metadata aren't being
+  // updated w.r.t. the output of meta functions. Therefore, we fall back to
+  // the old behavior of always returning true.
+  if (runtime::sys_util::GetEnvBool("XLA_DISABLE_FUNCTIONALIZATION", false)) {
+    return true;
+  }
+
   // Storage is always contiguous, but the tensor metadata is_contiguous_ might
   // be false due to the update in the functionalization layer.
-  return true;
+  return c10::TensorImpl::is_contiguous_custom(memory_format);
 }
 
 void XLATensorImpl::SetupSizeProperties() {
```

Review comment on the `GetEnvBool` check:

> maybe just add a
Review comments:

> The changes in this file are needed because it seems that PyTorch is running these tests with Python 3.8. Thus, the subscript operator is not allowed for types.

> hopefully pytorch/pytorch#135278 can be merged and we don't have this issue anymore in the future...

> This was needed for fixing a CI failure in pytorch/pytorch#135237. @JackCaoG let me know if you think this should not be happening.
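The Python 3.8 remark refers to the fact that subscripting builtin types (e.g. `list[int]`, per PEP 585) only works at runtime from Python 3.9 onward; on 3.8 the annotations must use `typing.List` instead. A small illustration (function name is ours, not from the PR):

```python
import sys
from typing import List


def sum_ints(xs: List[int]) -> int:
    """typing.List[int] is valid on Python 3.8 and later alike."""
    return sum(xs)


if sys.version_info >= (3, 9):
    # Only on 3.9+ can the builtin type itself be subscripted at runtime;
    # evaluating list[int] on 3.8 raises TypeError.
    builtin_alias = list[int]

print(sum_ints([1, 2, 3]))
```

On 3.8, `from __future__ import annotations` is another workaround, since it defers evaluation of annotations entirely.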