Mesa: A Memory-saving Training Framework for Transformers
Transformers have delivered significant performance improvements, but training such networks is extremely memory intensive, as all the intermediate activations needed for gradient computation during backpropagation must be stored, especially for long sequences.
To this end, we present Mesa, a memory-saving training framework for Transformers.
Specifically, Mesa uses exact activations during the forward pass while storing only a low-precision version of the activations to reduce memory consumption during training.
The low-precision activations are then dequantized during backpropagation to compute gradients.
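The core mechanism can be pictured with a small PyTorch sketch (our illustration, not the paper's released code): a custom autograd function computes the exact output in the forward pass, saves only an 8-bit quantized copy of its input, and dequantizes that copy during backpropagation. Using GELU as the wrapped operation and asymmetric 8-bit uniform quantization are illustrative assumptions.

```python
import torch

class QuantGELU(torch.autograd.Function):
    """Forward pass uses the exact activation; only an 8-bit copy of the input
    is saved for the backward pass, where it is dequantized to compute gradients."""

    @staticmethod
    def forward(ctx, x):
        y = torch.nn.functional.gelu(x)
        # Asymmetric 8-bit uniform quantization of the saved activation (illustrative scheme).
        zero_point = x.min()
        scale = (x.max() - zero_point).clamp(min=1e-8) / 255.0
        x_q = torch.round((x - zero_point) / scale).clamp(0, 255).to(torch.uint8)
        ctx.save_for_backward(x_q, scale, zero_point)
        return y

    @staticmethod
    def backward(ctx, grad_out):
        x_q, scale, zero_point = ctx.saved_tensors
        # Dequantize the stored activation, then apply the exact GELU derivative.
        x = x_q.to(grad_out.dtype) * scale + zero_point
        cdf = 0.5 * (1.0 + torch.erf(x / 2.0 ** 0.5))
        pdf = torch.exp(-0.5 * x * x) / (2.0 * torch.pi) ** 0.5
        return grad_out * (cdf + x * pdf)
```

Calling `QuantGELU.apply(x)` in place of a standard GELU stores the activation in 8 bits instead of 16 or 32, trading a small approximation error in the gradient for reduced memory.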
To address the heterogeneous activation distributions in the multi-head self-attention layers, we propose a head-wise activation quantization strategy, which quantizes activations based on the statistics of each head to minimize the approximation error.
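As a sketch of the head-wise idea (assuming attention activations laid out as batch x heads x sequence x channels; this is not the paper's code), each head gets its own scale and zero point computed from its own statistics:

```python
import torch

def headwise_quantize(x, num_bits=8):
    """Quantize attention activations per head using per-head min/max statistics.

    x is assumed to have shape (batch, heads, seq_len, dim_per_head).
    """
    levels = 2 ** num_bits - 1
    # Per-head range, reduced over the batch, sequence and channel dimensions.
    x_min = x.amin(dim=(0, 2, 3), keepdim=True)
    x_max = x.amax(dim=(0, 2, 3), keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / levels
    x_q = torch.round((x - x_min) / scale).clamp(0, levels).to(torch.uint8)
    return x_q, scale, x_min  # kept for dequantization in the backward pass

def headwise_dequantize(x_q, scale, x_min):
    return x_q.float() * scale + x_min
```

A single shared range would be dominated by the heads with the widest activations; computing the range per head keeps the quantization step small for the remaining heads.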
To further boost training efficiency, we learn the quantization parameters with running estimates.
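One way to realize such running estimates (a minimal sketch; the momentum value and the per-head min/max statistics are our assumptions) is to keep exponential moving averages of the quantization range instead of recomputing it from scratch for every batch:

```python
import torch

class RunningQuantRange(torch.nn.Module):
    """Tracks exponential moving averages of per-head activation ranges, so the
    quantization scale and zero point follow the shifting activation statistics."""

    def __init__(self, num_heads, num_bits=8, momentum=0.9):
        super().__init__()
        self.momentum = momentum
        self.levels = 2 ** num_bits - 1
        self.register_buffer("running_min", torch.zeros(num_heads))
        self.register_buffer("running_max", torch.zeros(num_heads))

    def forward(self, x):
        # x: (batch, heads, seq_len, dim_per_head)
        if self.training:
            batch_min = x.detach().amin(dim=(0, 2, 3))
            batch_max = x.detach().amax(dim=(0, 2, 3))
            self.running_min.mul_(self.momentum).add_(batch_min, alpha=1 - self.momentum)
            self.running_max.mul_(self.momentum).add_(batch_max, alpha=1 - self.momentum)
        scale = (self.running_max - self.running_min).clamp(min=1e-8) / self.levels
        # Broadcastable shapes for quantizing a (batch, heads, seq, dim) tensor.
        return scale.view(1, -1, 1, 1), self.running_min.view(1, -1, 1, 1)
```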
Extensive experiments on ImageNet, CIFAR-100 and ADE20K demonstrate that Mesa can cut the training memory footprint roughly in half while achieving comparable or even better performance.