Make-A-Video: Text-to-Video Generation without Text-Video Data
We propose Make-A-Video, an approach for directly translating the
tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video
(T2V). Our intuition is simple: learn what the world looks like and how it is
described from paired text-image data, and learn how the world moves from
unsupervised video footage. Make-A-Video has three advantages: (1) it
accelerates training of the T2V model (it does not need to learn visual and
multimodal representations from scratch), (2) it does not require paired
text-video data, and (3) the generated videos inherit the vastness (diversity
in aesthetics, fantastical depictions, etc.) of today's image generation models.
We design a simple yet effective way to build on T2I models with novel
spatial-temporal modules. First, we decompose the full temporal U-Net and
attention tensors and approximate them in space and time. Second, we design a
spatial-temporal pipeline to generate high-resolution, high-frame-rate videos
with a video decoder, a frame interpolation model, and two super-resolution
models, which also enable applications beyond T2V. In all aspects (spatial and
temporal resolution, faithfulness to text, and quality), Make-A-Video sets a
new state of the art in text-to-video generation, as determined by both
qualitative and quantitative measures.
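To make the space/time factorization concrete, below is a minimal, illustrative
sketch (plain PyTorch, not the authors' released code) of a pseudo-3D layer in
the spirit of the decomposition described above: a 2D spatial convolution,
which could come from a pretrained T2I model, followed by a 1D temporal
convolution initialized to the identity so the stack initially reproduces the
per-frame image model. The class name Pseudo3DConv, the identity
initialization, and all parameter names are illustrative assumptions, not
details stated in this abstract.

    # Minimal sketch of a factorized spatial-temporal layer: space and time are
    # approximated by separate 2D and 1D operations instead of a full 3D conv.
    # All names here are illustrative, not taken from the paper's code.
    import torch
    import torch.nn as nn


    class Pseudo3DConv(nn.Module):
        """2D spatial conv (could be initialized from a T2I model) followed by
        a 1D temporal conv (identity-initialized so training starts from the
        image model's behaviour)."""

        def __init__(self, channels: int, kernel_size: int = 3):
            super().__init__()
            pad = kernel_size // 2
            self.spatial = nn.Conv2d(channels, channels, kernel_size, padding=pad)
            self.temporal = nn.Conv1d(channels, channels, kernel_size, padding=pad)
            # Identity init for the temporal conv: Dirac kernel, zero bias.
            nn.init.dirac_(self.temporal.weight)
            nn.init.zeros_(self.temporal.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, frames, height, width)
            b, c, f, h, w = x.shape
            # Spatial conv applied independently to each frame.
            x = x.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)
            x = self.spatial(x)
            # Temporal conv applied independently at each spatial location.
            x = x.reshape(b, f, c, h, w).permute(0, 3, 4, 2, 1).reshape(b * h * w, c, f)
            x = self.temporal(x)
            x = x.reshape(b, h, w, c, f).permute(0, 3, 4, 1, 2)
            return x  # (batch, channels, frames, height, width)


    if __name__ == "__main__":
        layer = Pseudo3DConv(channels=8)
        video = torch.randn(2, 8, 16, 32, 32)  # 2 clips, 16 frames, 32x32 pixels
        print(layer(video).shape)  # torch.Size([2, 8, 16, 32, 32])

In Make-A-Video the attention layers are factorized in an analogous way, and
the full system chains a video decoder, a frame interpolation model, and two
super-resolution models; the sketch above only illustrates the general idea of
approximating a full spatial-temporal operation separately in space and time.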
Authors
Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, Yaniv Taigman