Multi-Head Self-Attentions and Vision Transformers
How Do Vision Transformers Work?
The success of multi-head self-attentions (MSAs) for computer vision is now indisputable.
However, little is known about how they work.
We present fundamental explanations to help better understand the nature of MSAs.
In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes.
Such improvement is primarily attributable to their data specificity, not long-range dependency.
On the other hand, ViTs suffer from non-convex losses.
Large datasets and loss landscape smoothing methods alleviate this problem.
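To make the flatness claim concrete, the sketch below estimates loss-landscape sharpness as the largest Hessian eigenvalue, obtained by power iteration on Hessian-vector products; a flatter landscape corresponds to a smaller value. This is an illustrative measurement under assumptions, not the paper's exact procedure, and `model`, `criterion`, and the batch `(x, y)` are placeholders.

```python
import torch


def max_hessian_eigenvalue(model, criterion, x, y, iters=20):
    """Estimate the largest Hessian eigenvalue of the loss at batch (x, y)."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = criterion(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    # Random unit-norm direction with the same shapes as the parameters.
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
    v = [vi / norm for vi in v]

    eigenvalue = 0.0
    for _ in range(iters):
        # Hessian-vector product: d(grad . v) / d(params).
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        # Rayleigh quotient v^T H v (v has unit norm), then re-normalize v.
        eigenvalue = sum((h * vi).sum() for h, vi in zip(hv, v)).item()
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
    return eigenvalue
```

Comparing this estimate for an MSA-based model and a CNN trained on the same data is one simple way to probe the flatness difference described above.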
(2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters.
Therefore, MSAs and Convs are complementary.
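The low-pass versus high-pass contrast can be checked with a simple frequency measurement on feature maps. The sketch below uses an illustrative metric (not necessarily the paper's exact one): the log ratio of the highest-frequency Fourier amplitude to the DC component. A layer that lowers this ratio acts as a low-pass filter; one that raises it acts as a high-pass filter.

```python
import torch


def high_frequency_log_ratio(feature_map):
    """feature_map: (batch, channels, height, width) tensor."""
    # Centered 2D Fourier amplitude, averaged over batch and channels.
    freq = torch.fft.fftshift(torch.fft.fft2(feature_map), dim=(-2, -1))
    amplitude = freq.abs().mean(dim=(0, 1))

    h, w = amplitude.shape
    cy, cx = h // 2, w // 2
    # Lowest frequency: the DC component at the center of the shifted spectrum.
    low = amplitude[cy, cx]
    # Highest frequencies: average amplitude over the outermost border ring.
    border = torch.cat([
        amplitude[0, :], amplitude[-1, :],
        amplitude[:, 0], amplitude[:, -1],
    ])
    high = border.mean()
    return torch.log(high + 1e-12) - torch.log(low + 1e-12)
```

Evaluating this ratio before and after an MSA block versus a Conv block indicates which frequency band each block suppresses.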
(3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction.
Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with multi-head self-attention (MSA) blocks.
We show that AlterNet outperforms CNNs not only in large data regimes but also in small data regimes.
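A minimal PyTorch sketch of this design rule follows: within each stage, keep Conv blocks early and place an MSA block at the end of the stage. The layer sizes, block internals, and the `make_stage` helper are hypothetical illustrations, not the official AlterNet implementation.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Residual Conv block (hypothetical; stands in for e.g. a ResNet block)."""

    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.body(x)


class MSABlock(nn.Module):
    """Residual multi-head self-attention block over spatial tokens."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        return x + attended.transpose(1, 2).reshape(b, c, h, w)


def make_stage(dim, depth):
    """Conv blocks followed by a single MSA block at the end of the stage."""
    blocks = [ConvBlock(dim) for _ in range(depth - 1)] + [MSABlock(dim)]
    return nn.Sequential(*blocks)
```

For example, `make_stage(dim=64, depth=3)` yields Conv, Conv, MSA; stacking such stages with downsampling between them gives an AlterNet-style alternating pattern.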