Robustly Optimized and Distilled Training for Natural Language Understanding
In this paper, we explore multi-task learning (MTL) as a second pretraining
step to learn an enhanced universal language representation for transformer
language models. We use this MTL-enhanced representation across several natural
language understanding tasks to improve performance and generalization.
Moreover, we incorporate knowledge distillation (KD) into MTL to further boost
performance and devise a KD variant that learns effectively from multiple
teachers. By combining MTL and KD, we propose the Robustly Optimized and
Distilled (ROaD) modeling framework. We use ROaD together with the ELECTRA model to
obtain state-of-the-art results for machine reading comprehension and natural
language inference.
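
To make the multi-teacher distillation idea concrete, the sketch below shows one common way such a loss can be formed: the student matches the averaged softened distributions of several teachers while also fitting the gold labels. The averaging scheme, temperature, and mixing weight `alpha` are illustrative assumptions, not the exact formulation used in this paper.

```python
# Minimal multi-teacher knowledge distillation loss (illustrative sketch;
# the averaging, temperature, and alpha are assumptions, not the paper's
# exact KD variant).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=2.0, alpha=0.5):
    """Blend a hard-label loss with a soft-label loss distilled from
    the averaged predictions of several teachers."""
    # Hard-label cross-entropy against the gold labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    # Average the teachers' temperature-softened probability distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # KL divergence between the student and the averaged teacher
    # distribution, scaled by T^2 as in standard distillation.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd_loss = F.kl_div(student_log_probs, teacher_probs,
                       reduction="batchmean") * temperature ** 2

    # Weighted combination of the distillation and supervised objectives.
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

In practice, the student here would be the MTL-enhanced ELECTRA model and each teacher a task-specific model; how teacher predictions are weighted or selected is the core of the KD variant described in the paper.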