Generative Adversarial Networks for 3D Video Synthesis
3D-Aware Video Generation
Generative models have emerged as an essential building block for many image synthesis and editing tasks.
Recent advances in this field have also enabled the generation of high-quality 3D and video content that exhibits either multi-view or temporal consistency.
We explore 4D generative adversarial networks (GANs) that learn unconditional generation of 3D-aware videos, supervised only with monocular videos.
We show that our method learns a rich embedding of decomposable 3D structures and motions that enables new visual effects of spatio-temporal renderings, while producing imagery with quality comparable to that of existing 3D or video GANs.
Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, Radu Timofte