Improved Multiscale Vision Transformers for Classification and Detection
In this paper, we study Multiscale Vision Transformers (MViT) as a unified
architecture for image and video classification, as well as object detection.
We present an improved version of MViT that incorporates decomposed relative
positional embeddings and residual pooling connections. We instantiate this
architecture in five sizes and evaluate it for ImageNet classification, COCO
detection and Kinetics video recognition where it outperforms prior work. We
further compare MViTs' pooling attention to window attention mechanisms where
it outperforms the latter in accuracy/compute. Without bells-and-whistles, MViT
has state-of-the-art performance in 3 domains: 88.8% accuracy on ImageNet
classification, 56.1 box AP on COCO object detection as well as 86.1% on
Kinetics-400 video classification. Code and models will be made publicly
available.
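
To make the components named above concrete, the following is a minimal, self-contained sketch of pooling attention with a residual pooling connection. It is an illustrative assumption of how such a layer can be written, not the authors' released implementation: the module name PoolingAttention, the average-pooling operator, the single-head formulation, and the chosen strides and shapes are all hypothetical, and the decomposed relative positional embeddings are omitted for brevity.

import torch
import torch.nn as nn


class PoolingAttention(nn.Module):
    """Single-head pooling attention over a 2D token grid (illustrative sketch)."""

    def __init__(self, dim, q_stride=2, kv_stride=2):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Strided pooling shortens the Q and K/V sequences, reducing attention
        # cost; pooling Q also downsamples the output resolution (hypothetical
        # average pooling stands in for the learned pooling used in MViT).
        self.pool_q = nn.AvgPool2d(q_stride, q_stride) if q_stride > 1 else nn.Identity()
        self.pool_kv = nn.AvgPool2d(kv_stride, kv_stride) if kv_stride > 1 else nn.Identity()
        self.scale = dim ** -0.5

    def _pool(self, x, pool, H, W):
        # (B, H*W, C) -> pool over the spatial grid -> (B, H'*W', C)
        B, _, C = x.shape
        x = x.transpose(1, 2).reshape(B, C, H, W)
        x = pool(x)
        Hp, Wp = x.shape[-2:]
        return x.reshape(B, C, Hp * Wp).transpose(1, 2), Hp, Wp

    def forward(self, x, H, W):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, Hq, Wq = self._pool(q, self.pool_q, H, W)
        k, _, _ = self._pool(k, self.pool_kv, H, W)
        v, _, _ = self._pool(v, self.pool_kv, H, W)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v
        # Residual pooling connection: add the pooled query back to the
        # attention output (one of the improvements named in the abstract).
        out = out + q
        return self.proj(out), Hq, Wq


if __name__ == "__main__":
    x = torch.randn(2, 14 * 14, 96)   # batch of 14x14 token grids, 96 channels
    layer = PoolingAttention(dim=96)
    y, Hq, Wq = layer(x, 14, 14)
    print(y.shape, Hq, Wq)            # torch.Size([2, 49, 96]) 7 7

With stride-2 pooling on Q and K/V, the attention matrix shrinks from 196x196 to 49x49 while the residual pooling path preserves the pooled query signal; the real model additionally uses multiple heads and decomposed relative positional embeddings.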
Authors
Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, Christoph Feichtenhofer