GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images
As several industries are moving towards modeling massive 3D virtual worlds,
the need for content creation tools that can scale in terms of the quantity,
quality, and diversity of 3D content is becoming evident. In our work, we aim
to train performant 3D generative models that synthesize textured meshes that
can be directly consumed by 3D rendering engines and are thus immediately usable in
downstream applications. Prior works on 3D generative modeling either lack
geometric detail, are limited in the mesh topology they can produce, typically
do not support textures, or rely on neural renderers in the synthesis process,
which makes their use in common 3D software non-trivial. In this work, we
introduce GET3D, a Generative model that directly generates Explicit Textured
3D meshes with complex topology, rich geometric details, and high-fidelity
textures. We bridge recent successes in differentiable surface modeling,
differentiable rendering, and 2D Generative Adversarial Networks to train
our model from 2D image collections. GET3D is able to generate high-quality 3D
textured meshes, ranging from cars, chairs, animals, motorbikes, and human
characters to buildings, achieving significant improvements over previous
methods.
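
For intuition, the following minimal PyTorch sketch illustrates the training paradigm the abstract describes: a generator maps a latent code to a 3D representation, a differentiable rendering step turns it into a 2D image, and a 2D discriminator supplies the only supervision, so gradients flow from image space back into the 3D shape and texture. Every component here is a hypothetical stand-in (a toy occupancy grid, an orthographic alpha-compositing "renderer", a small conv discriminator), not GET3D's actual mesh-based surface modeling or rasterizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in generator: maps a latent code to a coarse voxel
# grid of occupancy + RGB. GET3D instead produces an explicit textured
# mesh; this proxy only keeps the sketch self-contained and runnable.
class ToyGenerator(nn.Module):
    def __init__(self, z_dim=64, grid=16):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, grid ** 3 * 4),  # occupancy + RGB per voxel
        )

    def forward(self, z):
        out = self.net(z).view(-1, 4, self.grid, self.grid, self.grid)
        return torch.sigmoid(out[:, :1]), torch.sigmoid(out[:, 1:])

# Stand-in "differentiable renderer": orthographic alpha compositing
# along the depth axis. GET3D uses a differentiable rasterizer on the
# extracted mesh; any differentiable image-formation step works here.
def render_orthographic(occupancy, color):
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(occupancy[..., :1]),
                   1 - occupancy[..., :-1]], dim=-1), dim=-1)
    weights = occupancy * transmittance
    return (weights * color).sum(dim=-1)  # (B, 3, H, W) image

# Ordinary 2D image discriminator, as in a standard 2D GAN.
class ToyDiscriminator(nn.Module):
    def __init__(self, grid=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * (grid // 4) ** 2, 1),
        )

    def forward(self, x):
        return self.net(x)

def training_step(G, D, opt_g, opt_d, real_images, z_dim=64):
    z = torch.randn(real_images.size(0), z_dim)
    occ, col = G(z)
    fake = render_orthographic(occ, col)

    # Discriminator compares real 2D images against renderings of
    # generated 3D content (non-saturating GAN loss).
    d_loss = (F.softplus(D(fake.detach())).mean()
              + F.softplus(-D(real_images)).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: the 2D adversarial gradient passes through the
    # differentiable rendering step into the 3D representation.
    g_loss = F.softplus(-D(fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    G, D = ToyGenerator(), ToyDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    real = torch.rand(8, 3, 16, 16)  # placeholder for real 2D training images
    for step in range(3):
        print(training_step(G, D, opt_g, opt_d, real))
```

The point the abstract makes is visible in `training_step`: no 3D ground truth appears anywhere, only 2D images, which is what allows training such a model from image collections alone.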