A comparison of streaming models and data augmentation methods for robust speech recognition
In this paper, we present a comparative study on the robustness of two
different online streaming speech recognition models: Monotonic Chunkwise
Attention (MoChA) and Recurrent Neural Network-Transducer (RNN-T). We explore
three recently proposed data augmentation techniques, namely, multi-conditioned
training using an acoustic simulator, Vocal Tract Length Perturbation (VTLP)
for speaker variability, and SpecAugment. Experimental results show that
unidirectional models are in general more sensitive to noisy examples in the
training set. We also observe that the final performance of a model depends on
the proportion of training examples processed by the data augmentation techniques.
MoChA models generally perform better than RNN-T models. However, we observe
that the training of MoChA models seems to be more sensitive to various factors
such as the characteristics of the training sets and the incorporation of
additional augmentation techniques. On the other hand, RNN-T models perform
better than MoChA models in terms of latency, inference time, and the stability
of training. Additionally, RNN-T models are generally more robust against noise
and reverberation. All these advantages make RNN-T models a better choice for
streaming on-device speech recognition compared to MoChA models.
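As an illustration of the SpecAugment-style masking mentioned above, the following is a minimal sketch of frequency and time masking on a log-mel spectrogram; the function name, mask counts, and mask widths are illustrative assumptions and not the exact configuration used in the paper.

    import numpy as np

    def spec_augment(log_mel, num_freq_masks=2, freq_mask_width=27,
                     num_time_masks=2, time_mask_width=100, rng=None):
        """Apply SpecAugment-style masking to a log-mel spectrogram of
        shape (time, mel_bins). Masked regions are filled with the mean."""
        rng = rng or np.random.default_rng()
        spec = log_mel.copy()
        num_frames, num_bins = spec.shape
        fill = spec.mean()

        # Frequency masking: blank out a band of consecutive mel channels.
        for _ in range(num_freq_masks):
            f = rng.integers(0, freq_mask_width + 1)
            f0 = rng.integers(0, max(1, num_bins - f))
            spec[:, f0:f0 + f] = fill

        # Time masking: blank out a span of consecutive frames.
        for _ in range(num_time_masks):
            t = rng.integers(0, time_mask_width + 1)
            t0 = rng.integers(0, max(1, num_frames - t))
            spec[t0:t0 + t, :] = fill

        return spec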
Authors
Jiyeon Kim, Mehul Kumar, Dhananjaya Gowda, Abhinav Garg, Chanwoo Kim