Accelerating Bandwidth-Bound Deep Learning Inference with Main-Memory Accelerators
Benjamin Y. Cho, Jeageun Jung, Mattan Erez
DL inference queries play an important role in diverse internet services, and a large fraction of datacenter cycles is spent processing them. Specifically, the general matrix-matrix multiplication (GEMM) operations of fully-connected MLP layers dominate many inference tasks. We find that, contrary to common assumptions, the GEMM operations for datacenter DL inference tasks are memory-bandwidth bound: (1) strict query latency constraints force
small-batch operation, which limits reuse and increases bandwidth demands; and
(2) large and colocated models require reading their large weight matrices from main memory, again demanding high bandwidth without offering reuse opportunities. We demonstrate the significant potential of accelerating these small-batch GEMMs with processing in memory (PIM) within the main CPU memory. We develop a novel
GEMM execution flow and corresponding memory-side address-generation logic that
exploits GEMM locality and enables long-running PIM kernels despite the complex
address-mapping functions employed by the CPU that would otherwise destroy
locality. Our evaluation of StepStone variants at the channel, device, and
within-device PIM levels, along with optimizations that balance parallelism benefits against data-distribution overheads, demonstrates $12\times$ better minimum latency than a CPU and $2.8\times$ greater throughput under strict query latency
constraints. End-to-end performance analysis of recent recommendation and
language models shows that StepStone PIM outperforms a fast CPU (by up to
$16\times$) and prior main-memory acceleration approaches (by up to $2.4\times$ over the best prior approach).
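
To make the reuse argument above concrete, a back-of-the-envelope arithmetic-intensity bound sketches why small batches leave these fully-connected GEMMs bandwidth bound; the fp32 element size, the example batch size, and the CPU balance figures below are illustrative assumptions rather than values taken from the evaluation. For a layer computing $C = WX$ with weights $W \in \mathbb{R}^{M \times K}$ and a batch of $N$ activation vectors $X \in \mathbb{R}^{K \times N}$, where $W$ far exceeds cache capacity and must be streamed from main memory for each query batch:
\[
  \mathrm{FLOPs} = 2MKN, \qquad
  \mathrm{bytes\ read} \gtrsim 4MK \quad \text{(fp32 weights, no cross-query reuse)},
\]
\[
  \text{arithmetic intensity} \approx \frac{2MKN}{4MK} = \frac{N}{2}\ \mathrm{FLOPs/byte}.
\]
At a batch size of $N = 4$ this is only about 2 FLOPs per byte, far below the machine balance of roughly 30 FLOPs per byte for a server CPU with, say, 3 TFLOP/s of single-precision compute and 100 GB/s of memory bandwidth, so such GEMMs are limited by main-memory bandwidth rather than by compute throughput.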