MaskViT: Masked Visual Pre-Training for Video Prediction
Embodied agents can plan solutions to a variety of tasks in complex environments by pre-training transformers via masked visual modeling.
This work shows that we can create good video prediction models by pre-training transformers via masked visual modeling.
Our approach, named MaskViT, is based on two simple design decisions.
First, for memory and training efficiency, we use two types of window attention: spatial and spatiotemporal (both are sketched in the code below).
Second, during training, we mask a variable percentage of tokens instead of a fixed mask ratio.
For inference, MaskViT generates all tokens via iterative refinement, where we incrementally decrease the masking ratio following a mask scheduling function (also sketched below).
We demonstrate that MaskViT outperforms prior work in video prediction, is parameter efficient, and can generate high-resolution videos (256×256).
Further, we demonstrate the benefits of the inference speedup (up to 512×) afforded by iterative decoding by using MaskViT for planning on a real robot.
Our work suggests that we can endow embodied agents with powerful predictive models by leveraging the general framework of masked visual modeling with minimal domain knowledge.
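
As a rough illustration of the first design decision, the sketch below (PyTorch, not the authors' released code) partitions a video that has already been tokenized into a T × H × W grid of embeddings into the two window types and applies self-attention within each window. The helper names, the default window size h = w = 4, and the residual block structure are assumptions made for illustration.

```python
import torch
import torch.nn as nn


def spatial_windows(x, T, H, W):
    """Group the (B, T*H*W, C) token sequence into per-frame (1, H, W) windows."""
    B, _, C = x.shape
    return x.view(B, T, H * W, C).reshape(B * T, H * W, C)


def spatiotemporal_windows(x, T, H, W, h=4, w=4):
    """Group tokens into (T, h, w) windows that span all frames."""
    B, _, C = x.shape
    x = x.view(B, T, H // h, h, W // w, w, C)
    x = x.permute(0, 2, 4, 1, 3, 5, 6).contiguous()   # B, H//h, W//w, T, h, w, C
    return x.view(-1, T * h * w, C)


class WindowAttentionBlock(nn.Module):
    """Multi-head self-attention applied independently within each window."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, windows):                        # (num_windows, window_len, dim)
        h = self.norm(windows)
        out, _ = self.attn(h, h, h, need_weights=False)
        return windows + out
```

Alternating between the two window shapes keeps attention local, and therefore memory efficient, while still letting information propagate across both space and time.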
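
The second design decision and the inference procedure can be sketched similarly. The uniform sampling range for the training mask ratio, the cosine scheduling function, and the `model(tokens)` interface (returning per-token logits over the tokenizer codebook) are illustrative assumptions rather than the paper's exact settings.

```python
import math
import torch


def sample_mask_ratio(batch_size, low=0.5, high=1.0):
    """Sample a different mask ratio per training example instead of a fixed one."""
    return torch.empty(batch_size).uniform_(low, high)


def cosine_schedule(step, total_steps):
    """Fraction of tokens that should remain masked after `step` of `total_steps`."""
    return math.cos(0.5 * math.pi * step / total_steps)


@torch.no_grad()
def iterative_decode(model, tokens, masked_idx, total_steps=12):
    """Fill in masked positions of one token sequence over a few refinement passes.

    `tokens` is a 1-D LongTensor of token ids whose positions listed in
    `masked_idx` initially hold the mask token; `model(tokens)` is assumed to
    return logits of shape (1, N, vocab_size).
    """
    remaining = set(masked_idx.tolist())
    for step in range(1, total_steps + 1):
        logits = model(tokens.unsqueeze(0))[0]            # (N, vocab_size)
        conf, pred = logits.softmax(-1).max(-1)           # confidence per position
        # Number of positions that should still be masked after this pass.
        keep_masked = int(len(masked_idx) * cosine_schedule(step, total_steps))
        # Commit the most confident masked predictions; leave the rest masked.
        ranked = sorted(remaining, key=lambda i: conf[i].item(), reverse=True)
        for i in ranked[: max(len(ranked) - keep_masked, 0)]:
            tokens[i] = pred[i]
            remaining.discard(i)
    return tokens
```

Training with a variable mask ratio exposes the model to a range of masking conditions, which narrows the gap between training and the varying ratios encountered during the iterative decoding loop at inference.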
Authors
Agrim Gupta, Stephen Tian, Yunzhi Zhang, Jiajun Wu, Roberto Martín-Martín, Li Fei-Fei