Accelerating Sparse Approximate Matrix Multiplication on GPUs
Xiaoyan Liu, Yi Liu, Ming Dun, Bohong Yin, Hailong Yang, Zhongzhi Luan, Depei Qian
Although matrix multiplication plays a vital role in computational linear
algebra, there are few efficient solutions for multiplying near-sparse
matrices. The Sparse Approximate Matrix Multiply (SpAMM) is one of the
algorithms that fills this performance gap, which is neglected by traditional
optimizations for dense/sparse matrix multiplication. However, existing SpAMM
algorithms fail to exploit the performance potential of GPUs for acceleration.
In this paper, we present cuSpAMM, the first parallel SpAMM algorithm optimized
for multiple GPUs. We propose several performance optimizations, including
redesigning the algorithm to exploit thread parallelism, blocking strategies
to optimize memory access, and acceleration with tensor cores. In addition, we
scale cuSpAMM to run on multiple GPUs with an effective load balancing scheme.
We evaluate cuSpAMM on both synthesized and real-world
datasets on multiple GPUs. The experimental results show that cuSpAMM achieves
significant speedup compared to the vendor-optimized cuBLAS and
cuSPARSE libraries.