This survey aims to provide a comprehensive overview of transformer models in the computer vision discipline and assumes little to no prior background in the field.
We start with an introduction to the fundamental concepts behind the success of transformer models, i.e., self-supervision and self-attention.
Since transformer architectures assume minimal prior knowledge about the structure of the problem, self-supervision via pretext tasks is used to pre-train them on large-scale (unlabelled) datasets. The learned representations are then fine-tuned on downstream tasks, typically leading to excellent performance due to the generalization and expressive power of the encoded features.
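To make the self-attention mechanism concrete, the sketch below implements single-head scaled dot-product self-attention in NumPy. It is a minimal illustration, not a reference implementation from any particular paper; the function name `self_attention`, the toy dimensions, and the random projection matrices are illustrative assumptions.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x:             (n, d)  sequence of n token embeddings of dimension d
    w_q, w_k, w_v: (d, d_k) learned query/key/value projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys (numerically stable)
    return weights @ v                               # each token is a weighted sum of all values

# Toy usage: 4 tokens of dimension 8, one attention head of width 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

Because every output token attends to every input token, the mechanism captures global context with minimal built-in assumptions about the input structure, which is what makes the pre-train-then-fine-tune recipe above effective.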
We cover extensive applications of transformers in vision, including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual question answering and visual reasoning), video processing (e.g., activity recognition and video forecasting), low-level vision (e.g., image super-resolution and colorization), and 3D analysis (e.g., point cloud classification and segmentation).
We compare the respective advantages and limitations of popular techniques in terms of both architectural design and experimental value.
Finally, we provide an analysis of open research directions and possible future work.
Authors
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, Mubarak Shah