4-bit Conformer with Native Quantization Aware Training for Speech Recognition
Shaojin Ding, Phoenix Meadowlark, Yanzhang He, Lukasz Lew, Shivani Agrawal, Oleg Rybakov
Reducing latency and model size has long been a significant research
problem for live Automatic Speech Recognition (ASR) applications.
In this direction, model quantization has become an increasingly popular
approach for compressing neural networks and reducing computation cost. Most
existing practical ASR systems apply post-training 8-bit quantization. To
achieve a higher compression rate without introducing additional performance
regression, in this study we develop 4-bit ASR models with native
quantization aware training, which leverages native integer operations to
effectively optimize both training and inference. We conducted two experiments
on state-of-the-art Conformer-based ASR models to evaluate our proposed
quantization technique. First, we explored the impact of different precisions
for both weight and activation quantization on the LibriSpeech dataset, and
obtained a lossless 4-bit Conformer model with a 7.7x size reduction compared to
the float32 model. Following this, we investigated and demonstrated, for the
first time, the viability of 4-bit quantization on a practical ASR system
trained with large-scale datasets, and produced a lossless Conformer ASR model
with mixed 4-bit and 8-bit weights that achieves a 5x size reduction compared
to the float32 model.
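
As an illustration only (not the implementation used in this work), the sketch below shows the standard fake-quantization building block that quantization aware training commonly relies on: symmetric per-tensor 4-bit rounding with a straight-through estimator, written in JAX. The function name quantize_int4_ste and all shapes are hypothetical, and the native integer kernels described in the paper are not reproduced here.

import jax
import jax.numpy as jnp

def quantize_int4_ste(x):
    # Symmetric 4-bit fake quantization: map the largest magnitude to 7,
    # round to integer levels in [-7, 7], then rescale back to floats.
    max_abs = jnp.max(jnp.abs(x))
    scale = jnp.where(max_abs > 0, max_abs / 7.0, 1.0)
    q = jnp.clip(jnp.round(x / scale), -7, 7)
    # Straight-through estimator: the forward pass uses the rounded values,
    # while the backward pass treats the rounding as the identity.
    q = x / scale + jax.lax.stop_gradient(q - x / scale)
    return q * scale

# Example usage: quantize a weight matrix inside a matmul during training.
w = jax.random.normal(jax.random.PRNGKey(0), (256, 128))
x = jax.random.normal(jax.random.PRNGKey(1), (8, 256))
y = x @ quantize_int4_ste(w)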