Is Space-Time Attention All You Need for Video Understanding?
We present a convolution-free approach to video classification built
exclusively on self-attention over space and time. Our method, named
"TimeSformer," adapts the standard Transformer architecture to video by
enabling spatiotemporal feature learning directly from a sequence of
frame-level patches. Our experimental study compares different self-attention
schemes and suggests that "divided attention," where temporal attention and
spatial attention are separately applied within each block, leads to the best
video classification accuracy among the design choices considered. Despite the
radically different design compared to the prominent paradigm of 3D
convolutional architectures for video, TimeSformer achieves state-of-the-art
results on several major action recognition benchmarks, including the best
reported accuracy on Kinetics-400 and Kinetics-600. Furthermore, our model is
faster to train and more efficient at test time than competing architectures.
Code and pretrained models will be made publicly available.
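
As a rough illustration of the "divided attention" scheme described above, the sketch below shows one Transformer block that first applies temporal attention (each patch location attends across frames) and then spatial attention (patches within each frame attend to one another), each with its own residual connection. This is a minimal PyTorch sketch under stated assumptions; the class and argument names (DividedSpaceTimeBlock, num_frames, num_patches) are illustrative and are not taken from the authors' released implementation, which also handles a classification token and positional embeddings omitted here.

```python
# Minimal sketch of divided space-time attention (assumed names, not the official code).
import torch
import torch.nn as nn


class DividedSpaceTimeBlock(nn.Module):
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.norm_t = nn.LayerNorm(dim)
        self.attn_t = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_s = nn.LayerNorm(dim)
        self.attn_s = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mlp = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x, num_frames, num_patches):
        # x: (batch, num_frames * num_patches, dim) frame-level patch embeddings
        b, _, d = x.shape

        # Temporal attention: tokens at the same spatial location attend across frames.
        xt = x.reshape(b, num_frames, num_patches, d).permute(0, 2, 1, 3)
        xt = xt.reshape(b * num_patches, num_frames, d)
        xt_norm = self.norm_t(xt)
        xt = xt + self.attn_t(xt_norm, xt_norm, xt_norm, need_weights=False)[0]
        x = (xt.reshape(b, num_patches, num_frames, d)
               .permute(0, 2, 1, 3)
               .reshape(b, num_frames * num_patches, d))

        # Spatial attention: patches within each frame attend to one another.
        xs = x.reshape(b * num_frames, num_patches, d)
        xs_norm = self.norm_s(xs)
        xs = xs + self.attn_s(xs_norm, xs_norm, xs_norm, need_weights=False)[0]
        x = xs.reshape(b, num_frames * num_patches, d)

        # Standard Transformer MLP with residual connection.
        x = x + self.mlp(self.norm_mlp(x))
        return x


if __name__ == "__main__":
    block = DividedSpaceTimeBlock(dim=768, num_heads=12)
    frames, patches = 8, 196              # e.g. 8 frames, each split into 14x14 patches
    tokens = torch.randn(2, frames * patches, 768)
    out = block(tokens, num_frames=frames, num_patches=patches)
    print(out.shape)                      # torch.Size([2, 1568, 768])
```

Compared with joint space-time attention over all frame-patch tokens at once, this factorization reduces the per-block attention cost from quadratic in (frames x patches) to the sum of a per-location temporal term and a per-frame spatial term, which is one reason the divided scheme scales better to longer clips.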