Understanding Robustness of Transformers for Image Classification
Recently, Transformer-based architectures like the Vision Transformer (ViT) have matched or even surpassed deep convolutional neural networks (CNNs) on image classification tasks.
However, details of the Transformer architecture, such as its use of non-overlapping patches, lead one to wonder whether these networks are as robust.
In this paper, we perform an extensive study of a variety of measures of robustness of ViT models and compare the findings to those of deep convolutional neural network baselines.
We find that when pre-trained with a sufficient amount of data, ViT models are at least as robust as their ResNet counterparts on a broad range of perturbations.
We also find that transformers are robust to the removal of almost any single layer, and that while activations from later layers are highly correlated with each other, they nevertheless play an important role in classification.
Authors
Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, Andreas Veit
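As an illustration (not the authors' code), the single-layer-removal finding mentioned in the abstract can be probed with a short lesion script: deep-copy the model, swap one Transformer block for an identity mapping, and re-evaluate. The sketch below assumes a timm-style ViT whose blocks live in `model.blocks`; the `accuracy` and `lesion_study` helpers and the data loader are hypothetical names introduced here for illustration.

```python
# Minimal sketch of a single-layer "lesion" experiment on a ViT-style model.
# Assumption: the model exposes its Transformer blocks as `model.blocks`
# (as timm ViT implementations do); loader yields (images, labels) batches.
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def accuracy(model, loader, device="cuda"):
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=-1)
        correct += (preds.cpu() == labels).sum().item()
        total += labels.numel()
    return correct / total

def lesion_study(model, loader):
    """Accuracy of the model with each Transformer block removed in turn."""
    results = {}
    for i in range(len(model.blocks)):
        lesioned = copy.deepcopy(model)
        lesioned.blocks[i] = nn.Identity()  # drop block i, keep the rest intact
        results[i] = accuracy(lesioned, loader)
    return results
```

Comparing each entry of `results` against the unmodified model's accuracy gives the per-layer sensitivity; a flat profile across layers is consistent with the abstract's observation that removing almost any single layer leaves performance largely intact.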