Although convolutional networks have been the dominant architecture for
vision tasks for many years, recent experiments have shown that
Transformer-based models, most notably the Vision Transformer (ViT), may exceed
their performance in some settings. However, due to the quadratic runtime of
the self-attention layers in Transformers, ViTs can only be applied to larger
image sizes by using patch embeddings, which group together small regions of
the image into single input features. This raises a question:
Is the performance of ViTs due to the inherently-more-powerful Transformer
architecture, or is it at least partly due to using patches as the input
representation? In this paper, we present some evidence for the latter:
specifically, we propose the ConvMixer, an extremely simple model that is
similar in spirit to the ViT and the even-more-basic MLP-Mixer in that it
operates directly on patches as input, separates the mixing of spatial and
channel dimensions, and maintains equal size and resolution throughout the
network. In contrast, the ConvMixer uses only standard convolutions to
achieve the mixing steps. Despite its simplicity, we show that the ConvMixer
outperforms the ViT, MLP-Mixer, and some of their variants for similar
parameter counts and data set sizes, in addition to outperforming classical
vision models such as the ResNet. Our code is available at
this https URL
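For concreteness, the description above corresponds roughly to the following minimal PyTorch sketch. The specific choices here (GELU activations, BatchNorm, a residual depthwise convolution for spatial mixing, a pointwise 1x1 convolution for channel mixing, and the default hyperparameter values) are illustrative assumptions rather than the exact configuration reported in the paper; the linked repository contains the authors' implementation.

import torch
import torch.nn as nn

class Residual(nn.Module):
    """Adds the input back to the output of the wrapped module (skip connection)."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim, depth, kernel_size=9, patch_size=7, n_classes=1000):
    """ConvMixer-style sketch: a patch embedding followed by `depth` blocks that
    mix spatial information with a depthwise convolution and channel information
    with a pointwise convolution, keeping channel count and resolution fixed.
    Default values are illustrative assumptions, not the paper's settings."""
    return nn.Sequential(
        # Patch embedding: a strided convolution turns each p x p patch into one feature vector.
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        *[nn.Sequential(
            # Spatial mixing: depthwise convolution (groups=dim) with a residual connection.
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim),
            )),
            # Channel mixing: pointwise (1x1) convolution across channels.
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        # Global average pooling and a linear classifier head.
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )

if __name__ == "__main__":
    model = conv_mixer(dim=256, depth=8)
    x = torch.randn(1, 3, 224, 224)
    print(model(x).shape)  # torch.Size([1, 1000])

Note that after the patch embedding, the internal representation stays at (H / patch_size) x (W / patch_size) with dim channels through every block, matching the "equal size and resolution throughout the network" property described above.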