[Relay][Op] Trilu operator implementation #12124
Conversation
@sfvaroglu Can you take a look at this PR?
LGTM! Thanks for doing this @jwfromm! cc @mikepapadim
221026d to cc2864e
"test_tril_neg", | ||
"test_tril_one_row_neg", | ||
"test_tril_out_neg", | ||
"test_tril_out_pos", | ||
"test_tril_zero", |
I haven't looked at this op at all. How tricky would it be to support the zero case? Otherwise LGTM.
It actually works on LLVM and CUDA. I was testing on my MacBook, and it seems like the Metal backend in general doesn't support empty tensors. I think we could add these cases for CI.
It seems like it also doesn't work with NVPTX due to the same issue with empty tensors. I'll add them here and see how it does in CI.
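For reference, the "zero" cases exercise inputs with a zero-length dimension. NumPy handles these fine, which shows what the backends are being asked to do; this is a small illustrative check, not part of the PR, and the exact shapes used by the ONNX node tests may differ.

```python
import numpy as np

# An input with a zero-sized dimension: the result is an empty tensor of the
# same shape. Per the discussion above, some GPU backends (e.g. Metal, NVPTX)
# appear to reject allocating or launching kernels over zero-element buffers.
x = np.zeros((3, 0), dtype="float32")
print(np.tril(x).shape)  # (3, 0) -- zero elements
```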
Overall LGTM. Thanks for sending this in! Our PyTorch frontend could use this new Trilu op as well. Just one nit.
I added PyTorch testing and integration. Thanks for the recommendation @shingjan.
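As a rough sketch of what that PyTorch integration enables, the standard `relay.frontend.from_pytorch` import path should now be able to lower `torch.tril` / `torch.triu`; the exact mapping details may differ from this sketch.

```python
import torch
import tvm
from tvm import relay

class Tril(torch.nn.Module):
    def forward(self, x):
        # torch.tril should now map onto the new trilu op in the Relay frontend.
        return torch.tril(x, diagonal=1)

# Trace the model and import it through the PyTorch frontend.
example = torch.randn(4, 4)
scripted = torch.jit.trace(Tril().eval(), example)
mod, params = relay.frontend.from_pytorch(scripted, [("x", example.shape)])
print(mod)  # the printed Relay IR should contain the triangular-masking op
```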
Unfortunately the empty tensor tests still fail on CI GPUs. I'm not sure why; it doesn't seem directly related to this PR, so I'm going to re-enable the skips for those tests.
@mbrookhart I think this is ready to merge.
Thanks @jwfromm @sfvaroglu @mikepapadim @shingjan |
* Added topi trilu implementation
* Implemented and tested full Trilu op.
* Fix test type.
* Add tril zero tests.
* Add pytorch trilu integration.
* Clean up torch integration.
* Readded skip for zero tests.
This PR adds a new operator that supports triangular masking similar to that in `np.triu` and `np.tril`. The addition of `relay.trilu` conveniently lets us pass many of the remaining ONNX tests.
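A short usage sketch of the new op, assuming it follows the `np.tril`/`np.triu` semantics described above; the exact Python signature of `relay.trilu` may differ from what is shown here.

```python
import numpy as np
import tvm
from tvm import relay

# Build a small Relay function that lower-triangles its input (mirroring
# np.tril) and run it on the LLVM backend.
x = relay.var("x", shape=(4, 4), dtype="float32")
k = relay.const(0, dtype="int32")  # diagonal offset, as in np.tril's k
func = relay.Function([x], relay.trilu(x, k, upper=False))
mod = tvm.IRModule.from_expr(func)

data = np.arange(16, dtype="float32").reshape(4, 4)
result = relay.create_executor("graph", mod=mod, target="llvm").evaluate()(data)
np.testing.assert_allclose(result.numpy(), np.tril(data))
```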