UNKs Everywhere: Adapting Multilingual Language Models to New Scripts
Massively multilingual language models such as multilingual BERT (mBERT) and
XLM-R offer state-of-the-art cross-lingual transfer performance on a range of
NLP tasks. However, due to their limited capacity and large differences in
pretraining data, there is a profound performance gap between resource-rich and
resource-poor target languages. The ultimate challenge is dealing with
under-resourced languages not covered at all by the models, which are also
written in scripts \textit{unseen} during pretraining. In this work, we propose
a series of novel data-efficient methods that enable quick and effective
adaptation of pretrained multilingual models to such low-resource languages and
unseen scripts. Relying on matrix factorization, our proposed methods
capitalize on the existing latent knowledge about multiple languages already
available in the pretrained model's embedding matrix. Furthermore, we show that
learning a new dedicated embedding matrix for the target language can be
improved by leveraging a small number of vocabulary items (i.e., the so-called
\textit{lexically overlapping} tokens) shared between the mBERT vocabulary and
the target-language vocabulary. Our adaptation techniques offer substantial
performance gains for languages with unseen scripts. We also demonstrate that
they can yield improvements for low-resource languages whose scripts are
covered by the pretrained model.
Authors
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder
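
To make the abstract's two key ideas more concrete, the following NumPy sketch shows one possible instantiation: factorizing a pretrained embedding matrix into a shared low-rank latent basis that is reused to initialize a target-language embedding matrix, and copying the rows of lexically overlapping tokens as anchors. This is not the paper's actual method; the toy vocabularies (`src_vocab`, `tgt_vocab`), matrix sizes, and the plain SVD factorization are illustrative assumptions.

```python
# A minimal sketch (not the paper's actual implementation) of two ideas from the
# abstract: (1) factorize a pretrained embedding matrix into a shared low-rank
# latent space and reuse that space to initialize a target-language embedding
# matrix; (2) copy embeddings of lexically overlapping tokens as anchors.
# All shapes, vocabularies, and the plain SVD factorization are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained (mBERT-like) embedding matrix and vocabularies.
d, k = 256, 64                                    # embedding dim, latent rank
pretrained_emb = rng.normal(size=(8000, d)).astype(np.float32)        # |V_src| x d
src_vocab = {f"tok{i}": i for i in range(8000)}                       # token -> row
tgt_vocab = {f"tok{i}": j for j, i in enumerate(range(6000, 10000))}  # new vocab

# 1) Low-rank factorization E ~= F @ G: G is a latent basis shared across the
#    languages seen during pretraining; F holds token-specific coordinates.
U, S, Vt = np.linalg.svd(pretrained_emb, full_matrices=False)
F = U[:, :k] * S[:k]          # |V_src| x k
G = Vt[:k]                    # k x d, reused for the new language

# 2) Initialize the target-language matrix inside the same latent space; these
#    coordinates would then be fine-tuned on target-language text.
tgt_coords = rng.normal(scale=0.02, size=(len(tgt_vocab), k)).astype(np.float32)
tgt_emb = tgt_coords @ G      # |V_tgt| x d

# 3) For lexically overlapping tokens, copy the pretrained rows directly so the
#    new matrix stays anchored to the model's existing representations.
overlap = src_vocab.keys() & tgt_vocab.keys()
for tok in overlap:
    tgt_emb[tgt_vocab[tok]] = pretrained_emb[src_vocab[tok]]

print(f"copied {len(overlap)} overlapping tokens; target matrix shape {tgt_emb.shape}")
```

In a realistic setting, the rank and factorization would be chosen to match the pretrained model, and the remaining (non-overlapping) target-language embeddings would typically be trained on unlabelled target-language text, e.g., with masked language modeling.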