Primer: Searching for Efficient Transformers for Language Modeling
Large Transformer models have been central to recent advances in natural
language processing. The training and inference costs of these models, however,
have grown rapidly and become prohibitively expensive. Here we aim to reduce
the costs of Transformers by searching for a more efficient variant. Compared
to previous approaches, our search is performed at a lower level, over the
primitives that define a Transformer TensorFlow program. We identify an
architecture, named Primer, that has a smaller training cost than the original
Transformer and other variants for auto-regressive language modeling. Primer's
improvements can be mostly attributed to two simple modifications: squaring
ReLU activations and adding a depthwise convolution layer after each Q, K, and
V projection in self-attention.
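The sketch below shows one way these two modifications could be expressed in TensorFlow 2.x. It is a minimal illustration written for this summary, not the open-sourced implementation: the function names, the [batch, seq_len, channels] layout, and the 3-wide causal kernel are assumptions made here for clarity.

import tensorflow as tf

def squared_relu(x):
    # Primer's first change: square the ReLU activation in the feed-forward block.
    return tf.math.square(tf.nn.relu(x))

def causal_depthwise_conv1d(x, depthwise_filter):
    # Primer's second change: a depthwise convolution along the sequence axis,
    # applied to the output of each Q, K, and V projection.
    # x: [batch, seq_len, channels]; depthwise_filter: [kernel_size, channels].
    kernel_size = depthwise_filter.shape[0]
    # Pad on the left so position t only mixes positions <= t (causal).
    x = tf.pad(x, [[0, 0], [kernel_size - 1, 0], [0, 0]])
    x = x[:, :, tf.newaxis, :]                          # [B, L+k-1, 1, C]
    f = depthwise_filter[:, tf.newaxis, :, tf.newaxis]  # [k, 1, C, 1]
    y = tf.nn.depthwise_conv2d(x, f, strides=[1, 1, 1, 1], padding="VALID")
    return tf.squeeze(y, axis=2)                        # [B, L, C]

# Illustrative usage inside self-attention (dense_q/dense_k/dense_v and the
# filters are hypothetical names, not identifiers from the released code):
# q = causal_depthwise_conv1d(dense_q(hidden), q_filter)
# k = causal_depthwise_conv1d(dense_k(hidden), k_filter)
# v = causal_depthwise_conv1d(dense_v(hidden), v_filter)

The left padding keeps the convolution causal, so the auto-regressive factorization of the language model is preserved.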
Experiments show Primer's gains over Transformer increase as compute scale
grows and follow a power law with respect to quality at optimal model sizes.
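To make the power-law claim concrete, a standard scaling-law reading (an interpretation added here, not a formula quoted from this abstract) writes validation loss l as a function of training compute c at compute-optimal model sizes:

l(c) ≈ a · c^(-k)

If Primer's fitted exponent k is larger than the baseline Transformer's, then the compute needed to reach a fixed loss is smaller by a factor that itself grows as the target quality improves, which is the sense in which the gains increase with compute scale.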
We also verify empirically that Primer can be dropped into different codebases to
significantly speed up training without additional tuning. For example, at a
500M parameter size, Primer improves the original T5 architecture on C4
auto-regressive language modeling, reducing the training cost by 4X.
Furthermore, the reduced training cost means Primer needs much less compute to
reach a target one-shot performance. For instance, in a 1.9B parameter
configuration similar to GPT-3 XL, Primer uses 1/3 of the training compute to
achieve the same one-shot performance as Transformer. We open source our models
and several comparisons in T5 to help with reproducibility.
Authors
David R. So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, Quoc V. Le