What do Vision Transformers Learn? A Visual Exploration
Vision transformers (ViTs) are quickly becoming the de facto architecture for
computer vision, yet we understand very little about why they work and what
they learn. While existing studies visually analyze the mechanisms of
convolutional neural networks, an analogous exploration of ViTs remains
challenging. In this paper, we first address the obstacles to performing
visualizations on ViTs. Assisted by these solutions, we observe that neurons in
ViTs trained with language model supervision (e.g., CLIP) are activated by
semantic concepts rather than visual features. We also explore the underlying
differences between ViTs and CNNs, and we find that transformers detect image
background features, just like their convolutional counterparts, but their
predictions depend far less on high-frequency information. On the other hand,
both architecture types behave similarly in the way features progress from
abstract patterns in early layers to concrete objects in late layers. In
addition, we show that ViTs preserve spatial information in all layers except
the final one; in contrast to previous works, we find that this last layer
most likely discards spatial information and behaves as a learned global
pooling operation. Finally, we conduct large-scale visualizations on a wide
range of ViT variants, including DeiT, CoaT, ConViT, PiT, Swin, and Twin, to
validate the effectiveness of our method.
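
As a concrete illustration of the kind of feature visualization discussed above, the sketch below maximizes the activation of a single feature in an intermediate ViT block by gradient ascent on the input image. It assumes the timm implementation of ViT-B/16; the chosen block, feature index, and optimization settings are illustrative placeholders rather than the paper's exact configuration, and the ViT-specific regularization the paper develops is omitted.

```python
# Minimal sketch of gradient-based feature visualization for a ViT,
# assuming timm's ViT-B/16; block index, feature channel, and optimizer
# settings are hypothetical choices, not the paper's exact setup.
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

activations = {}

def hook(_module, _inputs, output):
    # Output of the MLP's first linear layer: (batch, tokens, hidden_dim).
    activations["feat"] = output

# Hook a feature inside an intermediate transformer block.
handle = model.blocks[6].mlp.fc1.register_forward_hook(hook)

channel = 100  # hypothetical feature index to visualize
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(256):
    optimizer.zero_grad()
    model(image)
    # Ascend the chosen channel's mean activation over the patch tokens
    # (index 0 is the CLS token in timm's ViT).
    loss = -activations["feat"][0, 1:, channel].mean()
    loss.backward()
    optimizer.step()

handle.remove()
visualization = image.detach()  # map back to pixel range before display
```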
Authors
Amin Ghiasi, Hamid Kazemi, Eitan Borgnia, Steven Reich, Manli Shu, Micah Goldblum, Andrew Gordon Wilson, Tom Goldstein