There's a bug here, though (fix): even though the functional tensor forwards metadata function calls to its inner XLA tensor, it doesn't propagate metadata updates to that inner XLA tensor.
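To make the bug concrete, here is a hypothetical pure-Python sketch (the class and method names are illustrative, not the actual PyTorch C++ internals): a functional wrapper that forwards metadata *reads* to its inner tensor, but whose metadata *writes* only touch the wrapper, so the inner XLA tensor never sees the update.

```python
class InnerTensor:
    """Toy stand-in for the inner XLA tensor."""
    def __init__(self, sizes, strides, storage_offset):
        self.sizes = sizes
        self.strides = strides
        self.storage_offset = storage_offset

class FunctionalWrapper:
    """Toy stand-in for the functional tensor wrapper."""
    def __init__(self, inner):
        self.inner = inner
        # The wrapper keeps its own copy of the metadata.
        self.sizes = inner.sizes
        self.strides = inner.strides
        self.storage_offset = inner.storage_offset

    def get_storage_offset(self):
        # Metadata reads are forwarded to the inner tensor.
        return self.inner.storage_offset

    def set_sizes_strides_offset(self, sizes, strides, offset):
        # Bug: the write only updates the wrapper, never the inner
        # tensor, so forwarded reads still observe stale metadata.
        self.sizes, self.strides, self.storage_offset = sizes, strides, offset

inner = InnerTensor((2, 2), (2, 1), 0)
w = FunctionalWrapper(inner)
w.set_sizes_strides_offset((2, 2), (2, 1), 2)
print(w.get_storage_offset())  # prints 0: the update never reached the inner tensor
```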
I would expect the 4th return value (in the program above) to be the same on both eager and XLA.
Whenever `as_strided` is called with no storage offset argument, it preserves the storage offset of the tensor.
We need to know what the tensor's original storage offset was.
Internally (AFAIU), we don't depend on either strides or storage offset. So, we could just make them represent whatever eager would produce.
Currently, I have this stack of PRs that do just that: propagate the computed reference metadata inside functional tensors. However, some tests are failing because of this assertion on the contiguity of the tensor. One way around this is to remove the assertion, leaving the metadata as (ii).
Consider the following program:
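The program itself is not preserved in this copy of the issue; a minimal reconstruction of the kind of program being described might look like the following (the function and variable names are my own, and the 4th return value is assumed to be the storage offset):

```python
import torch

def fn(t):
    # as_strided called WITHOUT a storage_offset argument: on eager,
    # this preserves the input tensor's storage offset.
    v = t.as_strided((2, 2), (2, 1))
    return v.size(), v.stride(), v.dim(), v.storage_offset()

base = torch.arange(10)
t = base[2:]       # a view with storage_offset == 2
print(fn(t))       # eager: the 4th value is 2
# On an XLA tensor (e.g. after moving t to an XLA device), the 4th
# value may differ, which is the discrepancy this issue describes.
```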
Question: should XLA tensors' metadata reflect: (i) the actual data storage; or (ii) eager metadata?
I would argue that we should do (ii) instead of (i) for the following reasons:
`set_sizes_strides_offset` could be called at the end of each kernel for propagating the computed reference metadata to functional tensors.
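A toy pure-Python sketch of that propagation approach (names are illustrative, not the real PyTorch internals): at the end of each kernel, the eager-computed reference metadata is pushed into both the wrapper and the inner tensor, so that forwarded metadata reads observe the eager values.

```python
class FunctionalTensor:
    """Toy stand-in for a functional wrapper around an inner (XLA) tensor."""
    def __init__(self, sizes, strides, offset):
        self.inner = {"sizes": sizes, "strides": strides, "offset": offset}
        self.sizes, self.strides, self.offset = sizes, strides, offset

    def storage_offset(self):
        # Metadata reads are forwarded to the inner tensor.
        return self.inner["offset"]

    def set_sizes_strides_offset(self, sizes, strides, offset):
        # Propagate the reference metadata to the wrapper AND the inner
        # tensor, so forwarded reads see the eager values.
        self.sizes, self.strides, self.offset = sizes, strides, offset
        self.inner.update(sizes=sizes, strides=strides, offset=offset)

def toy_as_strided_kernel(t, sizes, strides):
    # The backend may materialize the result contiguously (offset 0)...
    out = FunctionalTensor(sizes, strides, 0)
    # ...but at the end of the kernel we propagate what eager would
    # report: as_strided without an explicit offset preserves t's offset.
    out.set_sizes_strides_offset(sizes, strides, t.storage_offset())
    return out

t = FunctionalTensor((8,), (1,), 2)
out = toy_as_strided_kernel(t, (2, 2), (2, 1))
print(out.storage_offset())  # prints 2, matching eager semantics
```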
cc @miladm @JackCaoG @alanwaketan @wconstab