In-place operations on a DLPack-aliased XLA tensor do not propagate #7198
Comments
This behavior should be the result of our functionalization pass. @alanwaketan to confirm the expected behavior. Either way, let's have a …
Thanks for the issue. I checked the buffer pointer at more places:
Could you elaborate on …?
Yes, exactly. In summary, a functionalized lazy tensor is composed of:
Suppose …
Hmm. Not sure I get it. Could you explain a bit more?
That's a helper that lets us bridge information through the intermediate tensors created by functionalization for in-place ops.
When we do the in-place op … in sequence. So it seems the helper is already being used?
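To make the mechanism above concrete, here is a toy sketch (with hypothetical names; this is not the real PyTorch functionalization pass) of why an in-place op, once functionalized, never touches the original storage: the mutation becomes an out-of-place compute followed by a rebind, so any external alias of the old buffer keeps seeing the old values.

```python
# Hypothetical illustration of functionalization; names are made up.
def functional_add(values, scalar):
    # Out-of-place compute: a brand-new buffer is produced.
    return [x + scalar for x in values]

env = {"t": [1, 2, 3]}   # "t" names the current value of the tensor
alias = env["t"]         # an external alias of t's original storage

# t.add_(1) under functionalization: compute out-of-place, then rebind.
env["t"] = functional_add(env["t"], 1)

print(env["t"])  # [2, 3, 4] -- the tensor sees the update
print(alias)     # [1, 2, 3] -- the old storage was never mutated
```

This mirrors why a DLPack alias of the original buffer does not observe the "mutation".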
Here's how I think we could use …
This, however, won't work. Once we call … On another note, we could use this (…)
🐛 Bug
In the example below, we have 2 tensors:
t0
andt1
.t1
is created from a DLPack capsule generated fromt0
. So, we could say they share the same storage. However, after modifyingt0
, we see that this change doesn't reflectt1
. Furthermore, their buffer pointer is different.This is actually expected. That's because even though functionalization emulates views and mutation, PyTorch/XLA doesn't really have the concept of views and can't mutate a given tensor.
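For contrast, this is the aliasing behavior DLPack consumers usually expect. A minimal CPU-side sketch with NumPy (standing in for the CUDA case described below, where in-place writes do propagate through the alias):

```python
import numpy as np

# Baseline DLPack semantics: the imported array aliases the
# exporter's buffer, so in-place writes are visible through it.
t0 = np.arange(4, dtype=np.float32)
t1 = np.from_dlpack(t0)   # zero-copy: t1 shares t0's storage

t0[0] = 42.0
print(t1[0])              # 42.0 -- the mutation propagates to the alias
```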
That said, this could be unexpected behavior from the user's point of view. When using DLPack to alias (i) CUDA and (ii) XLA tensors, in-place operations on (i) do propagate to (ii), but not the other way around.
I think that even if this is an expected limitation, it should be documented somewhere. Or, even better, we should warn the user if they try to run an in-place operation on a DLPack-created XLA tensor (e.g. by having a flag `XLATensor::dlpack_created`).

Environment
cc @miladm @JackCaoG @vanbasten23 @lezcano
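The warning proposed above could look roughly like the following sketch. The class and its `dlpack_created` flag are hypothetical stand-ins, not an existing torch_xla API:

```python
import warnings

class FakeXLATensor:
    """Toy stand-in for an XLA tensor; dlpack_created is a hypothetical flag."""
    def __init__(self, data, dlpack_created=False):
        self.data = list(data)
        # Would be set when the tensor is built from a DLPack capsule.
        self.dlpack_created = dlpack_created

    def add_(self, scalar):
        if self.dlpack_created:
            warnings.warn(
                "in-place op on a DLPack-created XLA tensor; the change "
                "will not propagate to the aliased tensor")
        self.data = [x + scalar for x in self.data]
        return self

t1 = FakeXLATensor([1.0, 2.0], dlpack_created=True)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    t1.add_(1.0)
print(len(caught))  # 1 -- the user is warned once
```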