Add jagged_sum operator for padded nested tensors to TritonBench #2305
Conversation
This pull request was exported from Phabricator. Differential Revision: D58423489
This pull request has been merged in 40b376d.
Summary:

Add a `jagged_sum` reduction operator for padded nested tensors, based on the PyTorch `sum` operator, to TritonBench. This diff uses the PyTorch function `torch.ops.aten._jagged_to_padded_dense_forward`, hosted at the GitHub pull request pytorch/pytorch#125968, to pad each 2-dimensional tensor in a nested tensor of shape `(B, *, M)`, then reduce across the `N`-th dimension (`dim == 1`) to a `(B, M)` output tensor.

Measure accuracy of the padded implementation against the unpadded baseline implementation via the `accuracy` TritonBench metric.

Reviewed By: davidberard98

Differential Revision: D58423489
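The pad-then-reduce strategy the summary describes can be sketched in plain Python, with no PyTorch dependency. The sketch below models a `(B, *, M)` nested tensor as a list of `B` variable-length lists of `M`-element rows; the function name and structure here are illustrative only, not the actual TritonBench operator or the `_jagged_to_padded_dense_forward` kernel:

```python
def jagged_sum_via_padding(nested, M):
    """Pad each variable-length (seq_len, M) component of a toy
    (B, *, M) nested tensor with zero rows up to the max sequence
    length, then sum over the ragged dimension (dim == 1), yielding
    a (B, M) result.

    `nested` is a list of B lists; each inner list holds seq_len
    rows of M numbers. This mirrors the pad-then-reduce approach,
    where padding plays the role of a jagged-to-padded-dense pass.
    """
    max_len = max(len(component) for component in nested)
    # Padding step: append zero rows so every component is dense
    # with shape (max_len, M).
    padded = [
        component + [[0.0] * M] * (max_len - len(component))
        for component in nested
    ]
    # Reduction step: sum across the now-dense ragged dimension.
    # Zero padding leaves the per-component sums unchanged.
    return [
        [sum(row[j] for row in component) for j in range(M)]
        for component in padded
    ]


# Example: B = 2, M = 3, ragged lengths 2 and 1.
nested = [
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
    [[7.0, 8.0, 9.0]],
]
print(jagged_sum_via_padding(nested, 3))
# [[5.0, 7.0, 9.0], [7.0, 8.0, 9.0]]
```

Because the padding is zeros, summing the padded dense tensor matches the sum over the original ragged rows, which is what makes comparing the padded implementation against an unpadded baseline via the `accuracy` metric meaningful.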