Fast inference engine for Transformer models
Tuned OpenCL BLAS
BLISlab: A Sandbox for Optimizing GEMM
Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores via the WMMA API and MMA PTX instructions.
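As a minimal sketch of the tensor-core approach such HGEMM projects explore (illustrative code, not taken from any listed repo), the kernel below uses the WMMA API to have one warp compute a single 16x16 half-precision tile with an FP32 accumulator; the MMA PTX path exposes the same hardware at a lower level.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes one 16x16 tile of D = A * B, with half inputs and an
// FP32 accumulator. Requires sm_70 or newer; launch with 32 threads.
__global__ void wmma_hgemm_tile(const half *A, const half *B, float *D,
                                int lda, int ldb, int ldd) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);           // start from C = 0
    wmma::load_matrix_sync(a_frag, A, lda);        // cooperative warp-wide loads
    wmma::load_matrix_sync(b_frag, B, ldb);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // tensor-core MMA
    wmma::store_matrix_sync(D, acc_frag, ldd, wmma::mem_row_major);
}
```

A full HGEMM tiles the whole problem over a grid of such warps and stages operands through shared memory; that staging is where the listed projects differ.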
High-Performance FP32 Matrix Multiplication on CPU
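As a hedged sketch of the cache blocking that CPU GEMM projects like the two above build on (block sizes and names are illustrative, not taken from either repo), the loop nest below tiles the classic triple loop so panels of A and B stay resident in cache:

```cpp
#include <algorithm>
#include <cstddef>

// Cache-blocked SGEMM sketch: C (MxN) += A (MxK) * B (KxN), all row-major.
// Tiling keeps a BM x BK panel of A and a BK x BN panel of B hot in cache;
// production libraries add packing, SIMD micro-kernels, and threading on top.
constexpr std::size_t BM = 64, BN = 64, BK = 64;

void sgemm_blocked(const float *A, const float *B, float *C,
                   std::size_t M, std::size_t N, std::size_t K) {
    for (std::size_t i0 = 0; i0 < M; i0 += BM)
        for (std::size_t k0 = 0; k0 < K; k0 += BK)
            for (std::size_t j0 = 0; j0 < N; j0 += BN)
                for (std::size_t i = i0; i < std::min(i0 + BM, M); ++i)
                    for (std::size_t k = k0; k < std::min(k0 + BK, K); ++k) {
                        const float a = A[i * K + k];  // reused across the j loop
                        for (std::size_t j = j0; j < std::min(j0 + BN, N); ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```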
Optimizing SGEMM kernels on NVIDIA GPUs to close-to-cuBLAS performance.
The HPC toolbox: fused matrix multiplication, convolution, data-parallel strided tensor primitives, OpenMP facilities, SIMD, JIT Assembler, CPU detection, state-of-the-art vectorized BLAS for floats and integers
Stretching GPU performance for GEMMs and tensor contractions.
🔥🔥🔥 A collection of awesome public CUDA, cuBLAS, cuDNN, CUTLASS, TensorRT, TensorRT-LLM, Triton, TVM, MLIR and High Performance Computing (HPC) projects.
DBCSR: Distributed Block Compressed Sparse Row matrix library
hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionality beyond a traditional BLAS library.
FP64-equivalent GEMM via INT8 Tensor Cores using the Ozaki scheme
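The Ozaki scheme slices each FP64 operand into a short sum of low-precision pieces whose pairwise integer products are exact, runs one integer GEMM per slice pair, and recombines the exact integer results in FP64. Below is a toy scalar sketch of that slicing idea (my illustration, assuming |x| < 1 and omitting the per-row/per-column exponent-alignment bookkeeping the real method performs):

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

constexpr int K_SLICES = 4;  // slices per operand
constexpr int SHIFT = 7;     // bits per slice, so each digit fits in int8

// Split x (assumed |x| < 1) into K_SLICES signed digits:
// x ~= sum_i s[i] * 2^(-SHIFT*(i+1)).
void split(double x, int8_t s[K_SLICES]) {
    double r = x;
    for (int i = 0; i < K_SLICES; ++i) {
        double d = std::trunc(std::ldexp(r, SHIFT * (i + 1)));
        s[i] = static_cast<int8_t>(d);
        r -= std::ldexp(d, -SHIFT * (i + 1));  // residual shrinks by 2^-7 per step
    }
}

// Dot product via exact int32 accumulation of int8 slice products.
// Each |s*t| <= 127*127, so int32 sums stay exact for n up to ~1e5;
// on a GPU these inner sums become INT8 tensor-core GEMMs.
double dot_sliced(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    std::vector<std::array<int8_t, K_SLICES>> sx(n), sy(n);
    for (std::size_t m = 0; m < n; ++m) {
        split(x[m], sx[m].data());
        split(y[m], sy[m].data());
    }
    double result = 0.0;
    for (int i = 0; i < K_SLICES; ++i)
        for (int j = 0; j < K_SLICES; ++j) {
            int32_t acc = 0;  // exact: no rounding until the final FP64 combine
            for (std::size_t m = 0; m < n; ++m)
                acc += static_cast<int32_t>(sx[m][i]) * sy[m][j];
            result += std::ldexp(static_cast<double>(acc), -SHIFT * (i + j + 2));
        }
    return result;
}
```

More slices buy more accuracy at the cost of more integer GEMMs, which is the accuracy/throughput knob the scheme exposes.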
Serial and parallel implementations of matrix multiplication
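Since each output row is independent, the serial-to-parallel step is typically just a parallel outer loop; a minimal sketch (illustrative, not code from the listed repo) using OpenMP:

```cpp
#include <cstddef>

// C = A * B, row-major, with C zero-initialized by the caller. Each row of C
// is written by exactly one thread, so the outer loop is race-free.
// Build with OpenMP enabled (e.g. g++ -fopenmp).
void matmul_parallel(const double *A, const double *B, double *C,
                     long M, long N, long K) {
    #pragma omp parallel for
    for (long i = 0; i < M; ++i)
        for (long k = 0; k < K; ++k) {
            const double a = A[i * K + k];
            for (long j = 0; j < N; ++j)
                C[i * N + j] += a * B[k * N + j];
        }
}
```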
A Flexible and Energy-Efficient Accelerator for Sparse Convolutional Neural Networks