A Vision Transformer (ViT) is a simple neural architecture amenable to serving
several computer vision tasks. It has limited built-in architectural priors, in
contrast to more recent architectures that incorporate priors about either the
input data or specific tasks. Recent works show that ViTs benefit from
self-supervised pre-training, in particular BERT-like pre-training such as BEiT.
In this paper, we revisit the supervised training of ViTs. Our procedure builds
upon and simplifies a recipe introduced for training ResNet-50. It includes a
new, simple data-augmentation procedure with only three augmentations, closer to
the practice in self-supervised learning. Our evaluations on image classification
(ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning,
and semantic segmentation show that our procedure outperforms previous fully
supervised training recipes for ViT by a large margin. It also reveals that the
performance of our ViT trained with supervision is comparable to that of more
recent architectures. Our results could serve as better baselines for recent
self-supervised approaches demonstrated on ViT.
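
To make the three-augmentation claim concrete, below is a minimal torchvision sketch of such a pipeline. The abstract does not enumerate the three transformations; grayscale, solarization, and Gaussian blur, with one chosen uniformly at random per image in the style of self-supervised pipelines, are assumed here for illustration.

```python
# A minimal sketch of a three-augmentation training pipeline (torchvision).
# Assumption: the three simple augmentations are grayscale, solarization,
# and Gaussian blur; the abstract itself does not name them.
from torchvision import transforms

three_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),     # standard random crop to 224x224
    transforms.RandomHorizontalFlip(),     # standard horizontal flip
    transforms.RandomChoice([              # apply exactly one of the three
        transforms.RandomGrayscale(p=1.0),
        transforms.RandomSolarize(threshold=128, p=1.0),
        transforms.GaussianBlur(kernel_size=5),
    ]),
    transforms.ColorJitter(0.3, 0.3, 0.3), # mild color jitter
    transforms.ToTensor(),                 # PIL image -> float tensor in [0, 1]
])
```

With a PIL image as input, `three_augment(img)` yields a tensor ready for a standard ImageNet training loop; drawing exactly one of the three simple augmentations per sample keeps the policy far smaller than learned schemes such as RandAugment.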