Multiple GEMM operators are constructed with CUTLASS to support LLM inference. The computation is expressed as follows, where matrices A, B, C, and D are FP16 or BF16. You can also customize your own epilogue. In some scenarios these operators exceed the performance of cuBLAS and cuBLASLt.
D = alpha * (A * B) + beta * C
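As a minimal sketch of what such an operator looks like (this is not code from this repository; the helper name `fp16_gemm`, the row-major A/C/D and column-major B layouts, FP32 accumulation, and the SM80 tensor-core target are all assumptions), the CUTLASS 2.x device-level GEMM API expresses D = alpha * (A * B) + beta * C directly, and the epilogue is the customization point:

```cpp
#include <cutlass/cutlass.h>
#include <cutlass/numeric_types.h>
#include <cutlass/gemm/device/gemm.h>

// Hypothetical helper, not part of this repository: computes
// D = alpha * (A * B) + beta * C on device pointers.
cutlass::Status fp16_gemm(int m, int n, int k,
                          cutlass::half_t const *A, int lda,  // m x k, row-major
                          cutlass::half_t const *B, int ldb,  // k x n, column-major
                          cutlass::half_t const *C, int ldc,  // m x n, row-major
                          cutlass::half_t *D, int ldd,        // m x n, row-major
                          float alpha, float beta) {
  using Gemm = cutlass::gemm::device::Gemm<
      cutlass::half_t, cutlass::layout::RowMajor,     // A
      cutlass::half_t, cutlass::layout::ColumnMajor,  // B
      cutlass::half_t, cutlass::layout::RowMajor,     // C and D
      float,                                          // accumulate in FP32
      cutlass::arch::OpClassTensorOp,                 // use tensor cores
      cutlass::arch::Sm80>;                           // matches building with -a 80

  Gemm gemm_op;
  // Problem size, tensor refs for A/B/C/D, and the epilogue scalars alpha/beta;
  // swapping out the default LinearCombination epilogue template parameter is
  // where a custom epilogue would plug in.
  typename Gemm::Arguments args({m, n, k},
                                {A, lda}, {B, ldb}, {C, ldc}, {D, ldd},
                                {alpha, beta});
  return gemm_op(args);  // launches on the default stream
}
```

Note that with this tensor-op configuration the leading dimensions must satisfy CUTLASS's default alignment (multiples of 8 FP16 elements); check the returned cutlass::Status to catch misaligned or unsupported problem sizes.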
- OS: Linux
- CMake Version: >= 3.16
- GCC Version: >= 5.0
- CUDA Version: >= 11.4
- Others: gflags, ccache
sudo apt-get install libgflags-dev ccache
git clone https://github.com/Bruce-Lee-LY/cutlass_gemm.git
cd cutlass_gemm

Build for SM80 GPUs such as the A100, in Release or Debug mode:
./build.sh -a 80 -t Release -b OFF
./build.sh -a 80 -t Debug -b OFF

Build for SM86 GPUs such as the RTX 3090:
./build.sh -a 86 -t Release -b OFF
./build.sh -a 86 -t Debug -b OFF
./run_sample.sh
Process the data in the log and plot it as a line chart:
cd tools/performance
./performance.sh
- GPU: RTX3090
- CUDA Version: 12.1
- Data Type: FP16
- Beta: 0.0
Performance achieved by the current CUTLASS methods. Since beta is 0.0, the measured operation reduces to D = alpha * (A * B).
- cutlass: v3.5.1
Add '#include <cute/numeric/math.hpp>' to the file 'cute/algorithm/functional.hpp' to avoid the error 'namespace "cute" has no member "max"' during compilation, as shown in the sketch below.
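A minimal sketch of the patched header; the exact placement is an assumption (any point before cute::max is first used works), and the surrounding lines shown are conventional for cute headers:

```cpp
// cute/algorithm/functional.hpp (CUTLASS v3.5.1), near the top with the other includes:
#pragma once

#include <cute/config.hpp>        // existing include
#include <cute/numeric/math.hpp>  // added: declares cute::max / cute::min
```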
- Add SM90 Kernel