Background
Regularization
An important pre-fusion stage is the regularization pass, which rewrites the code into a fusion-friendly form.
For this code, when recomputing the coordinates for indexing %out_view, it is hard (in general, impossible) to recover the corresponding coordinates of %write_view and %alloc. Regularization rewrites
as
and then:
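The IR snippets this section refers to did not survive extraction. As a rough sketch of the kind of rewrite involved (all SSA names, types, and `rock.*` op spellings below are assumptions for illustration, not the actual IR):

```mlir
// Before (assumed): the elementwise epilogue writes through a
// transformed view of the destination, so its write coordinates
// cannot be derived from the gemm's output indexing.
%alloc = memref.alloc() : memref<64x64xf32>
rock.gridwise_gemm(%a, %b, %alloc) : ...
%write_view = rock.transform %out by #map : ...
linalg.generic ... ins(%alloc : memref<64x64xf32>)
                   outs(%write_view : memref<64x64xf32>) { ... }

// After (assumed): the transform is moved to the read side; the
// generic now writes the destination buffer directly, so fusion can
// index its output with the gemm's own coordinates.
%read_view = rock.transform %alloc by #inverse_map : ...
linalg.generic ... ins(%read_view : memref<64x64xf32>)
                   outs(%out : memref<64x64xf32>) { ... }
```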
Input fusion:
After blockwise-to-threadwise:
After fusion:
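The fused IR itself is missing above. Conceptually, after input fusion the body of the producer `linalg.generic` is applied to each element as it is loaded into registers for the gemm, rather than materializing a separate intermediate buffer. A heavily simplified, hypothetical sketch (the loop structure and buffer names are assumptions):

```mlir
// Per-thread load loop after fusion (illustrative only).
scf.for %i = %c0 to %kPerThread step %c1 {
  %v = memref.load %global_view[%i] : memref<?xf32>
  // Inlined linalg.generic body, e.g. a scale applied on the fly.
  %f = arith.mulf %v, %scale : f32
  memref.store %f, %regs[%i] : memref<?xf32, 5>
}
```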
Problems and potential solutions with input fusion
Vectorization
When linalg.generic has multiple inputs, the major issue is determining which input should be vectorized during gridwise-gemm-to-blockwise.
Potential solutions:
1. At the MiGraphX or Tosa level, pick the largest function argument as the primary tensor that will be vectorized.
2. Annotate it and pass the annotation through to regularization, which will mark which linalg.generic input leads to the primary tensor.
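To make the ambiguity concrete, a minimal hypothetical epilogue with two inputs might look like the following (names and shapes are illustrative). Both operands are read with the same indexing map, so nothing in the op itself says which operand's layout should drive the per-thread vector width; gridwise-gemm-to-blockwise needs the "primary tensor" annotation to decide:

```mlir
#id = affine_map<(d0, d1) -> (d0, d1)>
// %res is the gemm result; %bias is another function argument.
// Which of the two ins() operands gets vectorized is ambiguous.
linalg.generic
    {indexing_maps = [#id, #id, #id],
     iterator_types = ["parallel", "parallel"]}
    ins(%res, %bias : memref<64x64xf32>, memref<64x64xf32>)
    outs(%out : memref<64x64xf32>) {
  ^bb0(%r: f32, %b: f32, %o: f32):
    %s = arith.addf %r, %b : f32
    linalg.yield %s : f32
}
```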
Update regularization
Second, we need to rewrite the regularization pass to push the linalg.generic output view to the gemm's input.
Instead of
do this transformation:
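The before/after snippets are missing here. A hypothetical sketch of the intended direction (all names, types, and `rock.*` op spellings are assumptions):

```mlir
// Instead of (assumed): the producer generic writes through a view of
// the buffer that the gemm then reads.
%view = rock.transform %buf by #map : ...
linalg.generic ... ins(%arg : memref<64x64xf32>)
                   outs(%view : memref<64x64xf32>) { ... }
rock.gridwise_gemm(%buf, %b, %c) : ...

// Do this (assumed): the generic writes %buf directly, and the view
// is pushed onto the gemm's input operand instead.
linalg.generic ... ins(%arg : memref<64x64xf32>)
                   outs(%buf : memref<64x64xf32>) { ... }
%in_view = rock.transform %buf by #inverse_map : ...
rock.gridwise_gemm(%in_view, %b, %c) : ...
```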