Decompose torch.slice_scatter #1622
Conversation
You can delete this code as well:
`class ConvertAtenSliceScatterOp`
Also, do you need to update the XFAIL set for MHLO?
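For context, torch-mlir tracks per-backend e2e test expectations in `xfail_sets.py`. A minimal sketch of registering new tests with the MHLO backend set, assuming the `MHLO_PASS_SET` convention of that era; the test names are hypothetical and not taken from this PR:

```python
# xfail_sets.py (torch-mlir); the exact path and set names vary by version.
# Tests listed in MHLO_PASS_SET are expected to pass on the MHLO backend;
# tests absent from it are not run against MHLO.
MHLO_PASS_SET = {
    # ... existing test names ...
    "SliceScatterModule_basic",             # hypothetical test name
    "SliceScatterNegativeDimModule_basic",  # hypothetical test name
}
```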
No XFAIL set will be updated, because some extra torch ops still can't be lowered correctly.
That's very strange. Can you open an issue with the generated backtrace? I can take a look.
@ramiro050 I can't get a concrete stack trace, but you can reproduce the exception with the branch tanyo/slice_scatter_stage. The crash stack:
* Decompose torch.slice_scatter
* fix compilation error
* update file check
* fix ci
* fix i64 torch.tensor dtype
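For context, `torch.slice_scatter(self, src, dim, start, end, step)` embeds `src` into the slice of `self` selected by `dim`, `start`, `end`, and `step`. A minimal Python sketch of one way to express it in terms of more primitive ops; this is illustrative only, not necessarily the exact decomposition this PR implements:

```python
import torch

def slice_scatter_decomposed(self_t, src, dim, start, end, step):
    # Illustrative decomposition: build the slice indices along `dim`
    # with arange, broadcast them to src's shape, and scatter src in.
    idx = torch.arange(start, end, step, device=self_t.device)
    view_shape = [1] * self_t.dim()
    view_shape[dim] = -1                        # indices run along `dim`
    index = idx.view(view_shape).expand_as(src)
    return self_t.scatter(dim, index, src)      # out-of-place scatter

# Sanity check against the builtin op.
x = torch.zeros(4, 8)
src = torch.ones(4, 3)
out = slice_scatter_decomposed(x, src, dim=1, start=1, end=7, step=2)
assert torch.equal(out, torch.slice_scatter(x, src, 1, 1, 7, 2))
```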
@tanyokwok, there is currently no e2e test that checks that this decomposition is correct. Every time support is added for an op (especially as a decomposition, since it affects every backend), the PR should contain passing e2e tests. Can you revert this commit and add it back once it is passing on at least one of the three backends?
@ramiro050 The decomposition had passed all the e2e tests of SliceScatter under EagerModeTestConfig; I think it's sufficient to say the decomposition is correct in those cases.
The eager mode tests have two limitations that make it difficult to rely on them for correctness. The first is that they only use static shapes in the IR, so dynamic-shape support in implementations is not tested. The second is that if the torch-mlir compilation fails, eager mode falls back on executing things in conventional PyTorch, so your decomposition could be failing and the tests would still pass. You can try this out by having your decomposition return failure: the eager mode tests will still pass. Proper testing of a decomposition should involve passing tests with dynamic shapes on at least one of the three backends: Linalg-on-Tensors, TOSA, or MHLO.
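A minimal sketch of such an e2e test with dynamic shapes, assuming the `torch_mlir_e2e_test` framework layout (import paths have moved between torch-mlir versions) and a hypothetical test name:

```python
import torch

# Import paths are an assumption; they differ across torch-mlir versions.
from torch_mlir_e2e_test.framework import TestUtils
from torch_mlir_e2e_test.registry import register_test_case
from torch_mlir_e2e_test.annotations import annotate_args, export


class SliceScatterModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args([
        None,
        ([-1, -1], torch.float32, True),  # -1 marks a dynamic dim
        ([-1, -1], torch.float32, True),
    ])
    def forward(self, x, src):
        return torch.ops.aten.slice_scatter(x, src, dim=1, start=0,
                                            end=1, step=1)


# Hypothetical test name; the harness compares the compiled result
# against eager PyTorch on the given inputs.
@register_test_case(module_factory=lambda: SliceScatterModule())
def SliceScatterModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(6, 8), tu.rand(6, 1))
```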
This reverts commit f3f2f10.
@ramiro050 Thanks for reminding me. The revert was created as #1659.
* Rewrite mhlo with stablehlo after rebase.
* Fix BAZEL building error of multiple definition.
* Fix float width
* Fix divide_floor & export promoteTypes api (#9)
* To comply with the old pytorch versions
* Add native_dropout_backward & native_layer_norm_backward decomposition (#15)
* Add native_dropout and related ops pattern (llvm#1211)
* [MHLO] fix dot general contract
* Fix batch_norm, div.Tensor_mode and folder (#21)
* Reimplement linear lowering
* Reimplement 2-D rhs for mutmul
* Add torchdynamo
* Decompose torch.slice_scatter (llvm#1622)
* Fix i64 torch.tensor dtype
* Add more mhlo basic converters
* Alleviate softmax datatype check (#24)
* Fix decompose native_batch_norm (#27)
* Support group_norm lowering (#25)
* Decompose torch.ones/zeros (#28)
* Fix softmax output type
* Fix gather
* Fix some decompose patterns
* Not check assert at runtime (#31)
* Fix bool tensor attr conversion bug (#32)
* Fix mlirDenseElementsAttrBoolGet

Co-authored-by: ZHENG, Zhen <zzchman@gmail.com>