Fix non-contiguous tensor problem in keyed_jagged_index_select_dim1 #2061
Conversation
✅ Deploy Preview for pytorch-fbgemm-docs canceled.
This pull request was exported from Phabricator. Differential Revision: D49939713
Force-pushed from e0a88d2 to 31126e7
Force-pushed from 31126e7 to e3d38ba
Force-pushed from e3d38ba to 3111b77
This pull request has been merged in 1e194b7.
Summary:
Before this diff, the `keyed_jagged_index_select_dim1` kernels took raw pointers as arguments, which requires the input tensors to be contiguous. However, the `keyed_jagged_index_select_dim1` operator did not make sure that the tensors were contiguous before extracting the raw pointers and passing them to the kernels, causing a correctness issue. This diff replaces the raw pointer arguments with PyTorch's `PackedTensorAccessor`, which handles non-contiguous tensor accesses automatically. For tensors whose raw pointers are still used, the operator now makes sure they are contiguous before using them.
Differential Revision: D49939713
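
To illustrate the difference, here is a minimal sketch, not the actual FBGEMM kernels: a simplified 1-D `index_select` stands in for `keyed_jagged_index_select_dim1`, and all function and kernel names below are hypothetical. It contrasts a raw-pointer kernel, which silently assumes unit stride, with one that takes a `PackedTensorAccessor` and therefore handles non-contiguous inputs:

```cpp
// Hypothetical sketch only; not the FBGEMM implementation.
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>
#include <ATen/cuda/CUDAContext.h>

// Before: the kernel receives raw pointers, so indexing silently assumes
// unit stride. A non-contiguous `values` tensor (e.g. a slice or a view
// with a non-trivial stride) is read at the wrong offsets.
template <typename scalar_t>
__global__ void index_select_raw_ptr_kernel(
    const scalar_t* values,
    const int64_t* indices,
    scalar_t* out,
    int64_t num_indices) {
  const int64_t i = blockIdx.x * (int64_t)blockDim.x + threadIdx.x;
  if (i < num_indices) {
    out[i] = values[indices[i]];  // wrong if values.stride(0) != 1
  }
}

// After: PackedTensorAccessor carries sizes and strides into the kernel,
// so values[idx] is correct even for non-contiguous inputs.
template <typename scalar_t>
__global__ void index_select_accessor_kernel(
    at::PackedTensorAccessor64<scalar_t, 1, at::RestrictPtrTraits> values,
    at::PackedTensorAccessor64<int64_t, 1, at::RestrictPtrTraits> indices,
    at::PackedTensorAccessor64<scalar_t, 1, at::RestrictPtrTraits> out) {
  const int64_t i = blockIdx.x * (int64_t)blockDim.x + threadIdx.x;
  if (i < indices.size(0)) {
    out[i] = values[indices[i]];
  }
}

// Hypothetical host-side launcher for the accessor-based kernel.
at::Tensor index_select_1d_cuda(const at::Tensor& values, const at::Tensor& indices) {
  auto out = at::empty({indices.size(0)}, values.options());
  const int64_t n = indices.size(0);
  if (n == 0) {
    return out;
  }
  const uint32_t threads = 256;
  const uint32_t blocks = static_cast<uint32_t>((n + threads - 1) / threads);
  AT_DISPATCH_FLOATING_TYPES(values.scalar_type(), "index_select_1d_cuda", [&] {
    index_select_accessor_kernel<scalar_t>
        <<<blocks, threads, 0, at::cuda::getCurrentCUDAStream()>>>(
            values.packed_accessor64<scalar_t, 1, at::RestrictPtrTraits>(),
            indices.packed_accessor64<int64_t, 1, at::RestrictPtrTraits>(),
            out.packed_accessor64<scalar_t, 1, at::RestrictPtrTraits>());
  });
  return out;
}
```

For tensors whose raw pointers must still be passed, as the summary notes, calling `.contiguous()` on the host side before `.data_ptr()` gives the same correctness guarantee, at the cost of a potential copy when the input is not already contiguous.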