Accelerating Sparse Matrix-Matrix Multiplication with GPU Tensor Cores
Orestis Zachariadis, Nitin Satpute, Juan Gómez-Luna, Joaquín Olivares
Sparse general matrix-matrix multiplication (spGEMM) is an essential
component in many scientific and data analytics applications. However, the
irregular sparsity patterns of the input matrices, and the way these patterns
interact, make spGEMM challenging. Modern GPUs include Tensor Core Units (TCUs), which
specialize in dense matrix multiplication. Our aim is to re-purpose TCUs for
sparse matrices. The key idea of our spGEMM algorithm, tSparse, is to multiply
sparse rectangular blocks using the mixed precision mode of TCUs. tSparse
partitions the input matrices into tiles and operates only on tiles which
contain one or more elements. It creates a task list of these tiles and performs
matrix multiplication of these tiles using TCUs. To the best of our knowledge,
this is the first time that TCUs are used in the context of spGEMM. We show
that spGEMM, with our tiling approach, benefits from TCUs. Our approach
significantly improves the performance of spGEMM in comparison to cuSPARSE,
CUSP, RMerge2, Nsparse, AC-SpGEMM, and spECK.
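The tiling idea described above can be sketched in plain Python. This is an illustrative CPU analogue, not the paper's CUDA/TCU implementation: the toy tile size, the function names, and the dense tile product standing in for a TCU fragment multiply are all assumptions made here for clarity (real TCUs operate on fixed-size half-precision fragments with higher-precision accumulation).

```python
# Illustrative sketch of tile-based spGEMM: partition both inputs into
# tiles, keep only nonzero tiles, build a task list of tile pairs that
# must be multiplied, and accumulate dense tile products. The dense
# TILE x TILE multiply below plays the role the TCU plays in hardware.

TILE = 2  # assumed toy tile size for this sketch

def to_tiles(M, n):
    """Partition an n x n matrix into TILE x TILE tiles, keeping only
    tiles that contain at least one nonzero element."""
    tiles = {}
    for bi in range(n // TILE):
        for bj in range(n // TILE):
            t = [[M[bi * TILE + i][bj * TILE + j] for j in range(TILE)]
                 for i in range(TILE)]
            if any(v != 0 for row in t for v in row):
                tiles[(bi, bj)] = t
    return tiles

def mul_tile(A, B):
    """Dense TILE x TILE product (the TCU's job in hardware)."""
    return [[sum(A[i][k] * B[k][j] for k in range(TILE))
             for j in range(TILE)] for i in range(TILE)]

def add_tile(A, B):
    """Elementwise accumulation of two TILE x TILE tiles."""
    return [[A[i][j] + B[i][j] for j in range(TILE)] for i in range(TILE)]

def tiled_spgemm(A, B, n):
    """Multiply two n x n matrices by operating only on nonzero tiles."""
    ta, tb = to_tiles(A, n), to_tiles(B, n)
    # Task list: one task per pair of nonzero tiles sharing an inner index.
    tasks = [(i, k, j) for (i, k) in ta for (kk, j) in tb if k == kk]
    out = {}
    for (i, k, j) in tasks:
        p = mul_tile(ta[(i, k)], tb[(k, j)])
        out[(i, j)] = add_tile(out[(i, j)], p) if (i, j) in out else p
    # Scatter the accumulated tiles back into a dense result for checking.
    C = [[0] * n for _ in range(n)]
    for (bi, bj), t in out.items():
        for i in range(TILE):
            for j in range(TILE):
                C[bi * TILE + i][bj * TILE + j] = t[i][j]
    return C
```

Because all-zero tiles never enter the tile dictionaries, the task list contains only tile pairs that can contribute to the output, which is the source of the savings when the inputs are sparse.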