Efficient Geometry-aware 3D Generative Adversarial Networks
Unsupervised generation of high-quality multi-view-consistent images and 3D
shapes using only collections of single-view 2D photographs has been a
long-standing challenge. Existing 3D GANs are either compute-intensive or make
approximations that are not 3D-consistent; the former limits the quality and
resolution of the generated images, and the latter adversely affects multi-view
consistency and shape quality. In this work, we improve the computational
efficiency and image quality of 3D GANs without overly relying on these
approximations. For this purpose, we introduce an expressive hybrid
explicit-implicit network architecture that, together with other design
choices, synthesizes not only high-resolution multi-view-consistent images in
real time but also produces high-quality 3D geometry. By decoupling feature
generation and neural rendering, our framework is able to leverage
state-of-the-art 2D CNN generators, such as StyleGAN2, and inherit their
efficiency and expressiveness. We demonstrate state-of-the-art 3D-aware
synthesis with FFHQ and AFHQ Cats, among other experiments.
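The hybrid explicit-implicit design described above can be illustrated with a minimal sketch: explicit 2D feature planes (which a 2D CNN generator such as StyleGAN2 can produce efficiently) are sampled at a 3D point's projections, and a small implicit decoder maps the aggregated features to renderable quantities. The plane layout, feature dimensions, and single-layer decoder below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample a (C, H, W) feature plane at normalized coords u, v in [-1, 1]."""
    C, H, W = plane.shape
    x = (u + 1.0) * 0.5 * (W - 1)
    y = (v + 1.0) * 0.5 * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[:, y0, x0]
            + wx * (1 - wy) * plane[:, y0, x1]
            + (1 - wx) * wy * plane[:, y1, x0]
            + wx * wy * plane[:, y1, x1])

def query_hybrid(planes, point, mlp_w, mlp_b):
    """Aggregate features from three axis-aligned explicit planes at a 3D point,
    then decode with a tiny implicit MLP (one ReLU layer as a stand-in)."""
    x, y, z = point
    f = (bilinear_sample(planes[0], x, y)    # project onto XY plane
         + bilinear_sample(planes[1], x, z)  # project onto XZ plane
         + bilinear_sample(planes[2], y, z)) # project onto YZ plane
    return np.maximum(mlp_w @ f + mlp_b, 0.0)  # e.g. density + color features

# Hypothetical sizes: 3 planes of 8 channels at 16x16, 4 output features.
rng = np.random.default_rng(0)
planes = rng.standard_normal((3, 8, 16, 16))
mlp_w, mlp_b = rng.standard_normal((4, 8)), np.zeros(4)
out = query_hybrid(planes, (0.1, -0.3, 0.5), mlp_w, mlp_b)
print(out.shape)  # (4,)
```

Because the expensive per-pixel work reduces to a few plane lookups and a very small MLP, the bulk of the capacity can live in the 2D generator that synthesizes the planes, which is the decoupling of feature generation from neural rendering that the abstract refers to.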
Authors
Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, Gordon Wetzstein