Hi @LeiWang1999,
I encounter the following error when I try to build a matmul with a_dtype = 'float16' and b_dtype = 'fp4_e2m1' using bitblas:
which is caused by the following Python function:
At this line, it seems bitblas is trying to reinterpret a "uint32" as a "float16", which makes tvm complain.
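For what it's worth, I think the complaint can be reproduced with plain tvm, independent of bitblas (a rough sketch of my understanding, assuming tvm's bit-width check in `tir.reinterpret`):

```python
import tvm
from tvm import tir

# A uint32 value has 32 bits and a float16 has 16, so a bitcast between
# them should be rejected by tvm's size check in tir.reinterpret.
x = tir.Var("x", "uint32")
y = tir.reinterpret("float16", x)  # expected to raise a "Bitcast requires size match"-style error
```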
I am using the following matmul config.
Do you have an example of running an fp16xfp4 matmul in bitblas?
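For reference, here is roughly how I am constructing the operator (a minimal sketch with placeholder shapes, not my exact config; the field names follow the quick-start style MatmulConfig and may be spelled differently in the version I am running):

```python
import bitblas

# Minimal sketch with placeholder shapes; field names assume the
# quick-start style MatmulConfig/Matmul interface.
matmul_config = bitblas.MatmulConfig(
    M=1,
    N=1024,
    K=1024,
    A_dtype="float16",   # activation dtype
    W_dtype="fp4_e2m1",  # 4-bit e2m1 float weight dtype
    accum_dtype="float16",
    out_dtype="float16",
    layout="nt",
    with_bias=False,
)

matmul = bitblas.Matmul(config=matmul_config)  # this is where the build fails for me
```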
(I am using the v0.0.1.dev15 version.)