Transformers have become one of the most important architectural innovations
in deep learning and have enabled many breakthroughs over the past few years.
Here we propose a simple attention-free network architecture, gMLP, based
solely on MLPs with gating, and show that it can perform as well as
Transformers in key language and vision applications. Our comparisons show that
self-attention is not critical for Vision Transformers, as gMLP can achieve the
same accuracy. For BERT, our model achieves parity with Transformers on
pretraining perplexity and is better on some downstream tasks. On finetuning
tasks where gMLP performs worse, making the gMLP model substantially larger can
close the gap with Transformers. In general, our experiments show that gMLP can
scale as well as Transformers over increased data and compute.
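To make the "MLPs with gating" idea concrete, here is a minimal PyTorch sketch of one gMLP-style block: channels are expanded, split in half, and one half is gated by a learned projection of the other half across the token dimension. The class names, dimensions, and near-identity initialization below are illustrative assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """Split channels in half; gate one half with a learned projection
    of the other half along the sequence (token) dimension."""
    def __init__(self, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # Spatial projection mixes information across tokens, not channels.
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        # Assumed near-identity init (weights ~ 0, bias = 1) so the gate
        # starts close to a pass-through.
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x):                       # x: (batch, seq_len, d_ffn)
        u, v = x.chunk(2, dim=-1)               # split channel dimension
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v                            # elementwise gating

class GMLPBlock(nn.Module):
    """One attention-free block: channel projections plus spatial gating."""
    def __init__(self, d_model, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):
        shortcut = x
        x = F.gelu(self.proj_in(self.norm(x)))
        x = self.sgu(x)
        return self.proj_out(x) + shortcut      # residual connection

# Usage example: a batch of 2 sequences, 128 tokens, model width 256.
if __name__ == "__main__":
    block = GMLPBlock(d_model=256, d_ffn=1024, seq_len=128)
    out = block(torch.randn(2, 128, 256))
    print(out.shape)                            # torch.Size([2, 128, 256])
```

Stacking such blocks replaces the self-attention plus feed-forward pair of a Transformer layer; the only cross-token interaction comes from the spatial projection inside the gating unit.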