GEMM lowering paths #1127
Unanswered
jungpark-mlir asked this question in Q&A
Replies: 1 comment
-
@sjw36 is working on getting a lowering from the GPU MMA ops to MFMA going. WMMA will very much be future work there, I gather, but once the MFMA handling is landed, WMMA shouldn't be too hard (especially since you'll have our code to work off of for the register layout). If you'd be open to reviewing https://reviews.llvm.org/D152451 , that'd be good.
-
Hi all,
I'm wondering if you plan to implement passes converting from/to gpu.subgroup_mma_compute_matrix, e.g., rock.blockwise_gemm_accel -> gpu.subgroup_mma_compute_matrix -> amdgpu.wmma.
The other gpu.subgroup_mma_* operations might also be interesting to you. In particular, upstream has transforms to fold vector.contraction into gpu.subgroup_mma_compute_matrix, and gpu.subgroup_mma_compute_matrix can be converted to nvvm.wmma or spirv.NV.CooperativeMatrixMulAdd. (The KHR cooperative-matrix extension will be landing in the SPIR-V dialect soon.)

Cheers,
Jungwook
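
For concreteness, the middle step of such a path could be expressed with the upstream gpu dialect's cooperative-matrix ops. This is only a hypothetical sketch: the shapes, element types, and buffer names are illustrative assumptions, not taken from rocMLIR.

```mlir
// Hypothetical sketch of the proposed path: a small GEMM tile expressed
// with upstream gpu.subgroup_mma_* ops, which could then be lowered to
// amdgpu.wmma (or nvvm.wmma / spirv.NV.CooperativeMatrixMulAdd).
// 16x16xf16 tiles and %buf* operands are illustrative assumptions.
%a = gpu.subgroup_mma_load_matrix %bufA[%c0, %c0] {leadDimension = 16 : index}
    : memref<16x16xf16> -> !gpu.mma_matrix<16x16xf16, "AOp">
%b = gpu.subgroup_mma_load_matrix %bufB[%c0, %c0] {leadDimension = 16 : index}
    : memref<16x16xf16> -> !gpu.mma_matrix<16x16xf16, "BOp">
%c = gpu.subgroup_mma_load_matrix %bufC[%c0, %c0] {leadDimension = 16 : index}
    : memref<16x16xf16> -> !gpu.mma_matrix<16x16xf16, "COp">
// The accumulate step a rock.blockwise_gemm_accel could lower into:
%d = gpu.subgroup_mma_compute %a, %b, %c
    : !gpu.mma_matrix<16x16xf16, "AOp">, !gpu.mma_matrix<16x16xf16, "BOp">
    -> !gpu.mma_matrix<16x16xf16, "COp">
gpu.subgroup_mma_store_matrix %d, %bufC[%c0, %c0] {leadDimension = 16 : index}
    : !gpu.mma_matrix<16x16xf16, "COp">, memref<16x16xf16>
```
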