VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids
State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to
parameterize 3D radiance fields. While these models demonstrate impressive
results, querying an MLP for every sample along each ray leads to slow rendering.
Therefore, existing approaches often render low-resolution feature maps and
process them with an upsampling network to obtain the final image. Albeit
efficient, this neural rendering step often entangles viewpoint and content,
such that changing the camera pose results in unwanted changes in geometry or
appearance.
Motivated by recent results in voxel-based novel view synthesis, this paper
investigates the utility of sparse voxel grid representations for fast and
3D-consistent generative modeling. Our results demonstrate that monolithic MLPs
can indeed be replaced by 3D convolutions when sparse voxel grids are combined
with progressive growing, free-space pruning, and appropriate regularization. To
obtain a compact representation of the scene and allow for scaling to higher
voxel resolutions, our model disentangles the foreground object (modeled in 3D)
from the background (modeled in 2D). In contrast to existing approaches, our
method requires only a single forward pass to generate a full 3D scene. It
hence allows for efficient rendering from arbitrary viewpoints while yielding
3D-consistent results with high visual fidelity.
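
To make the rendering pipeline described above concrete, the following is a
minimal illustrative sketch (not the paper's implementation): it volume-renders
a voxel grid of densities and colors with trilinear interpolation and alpha
compositing, skipping samples outside the occupied grid in the spirit of
free-space pruning. For simplicity the grid is dense rather than sparse, and all
names (trilinear_sample, render_ray, the toy sphere scene) are our own
assumptions; the paper's sparse data structures and generator are not reproduced
here.

```python
import numpy as np

def trilinear_sample(grid, pts):
    """Trilinearly interpolate a voxel grid of shape (R, R, R, C)
    at continuous coordinates pts of shape (N, 3) in voxel units."""
    R = grid.shape[0]
    pts = np.clip(pts, 0.0, R - 1 - 1e-6)
    lo = np.floor(pts).astype(int)        # lower corner index per point
    frac = pts - lo                       # fractional offset within the cell
    out = np.zeros((pts.shape[0], grid.shape[-1]))
    for corner in range(8):               # accumulate the 8 cell corners
        off = np.array([(corner >> k) & 1 for k in range(3)])
        w = np.prod(np.where(off == 1, frac, 1.0 - frac), axis=1)
        idx = lo + off
        out += w[:, None] * grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out

def render_ray(density, color, origin, direction, n_samples=128):
    """Alpha-composite color along one ray (direction assumed unit-length).
    density: (R, R, R); color: (R, R, R, 3); coordinates in voxel units."""
    R = density.shape[0]
    ts = np.linspace(0.0, np.sqrt(3.0) * R, n_samples)  # sample depths
    delta = ts[1] - ts[0]
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    inside = np.all((pts >= 0.0) & (pts <= R - 1), axis=1)
    sigma = np.zeros(n_samples)
    rgb = np.zeros((n_samples, 3))
    if inside.any():                      # skip empty space outside the grid
        sigma[inside] = trilinear_sample(density[..., None], pts[inside])[:, 0]
        rgb[inside] = trilinear_sample(color, pts[inside])
    alpha = 1.0 - np.exp(-np.maximum(sigma, 0.0) * delta)          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = trans * alpha
    return weights @ rgb, weights.sum()   # composited RGB and accumulated alpha

# Toy scene: a soft density sphere with uniform red color.
R = 32
coords = np.stack(np.meshgrid(*[np.arange(R)] * 3, indexing="ij"), axis=-1)
dist = np.linalg.norm(coords - (R - 1) / 2.0, axis=-1)
density = np.maximum(0.0, 1.0 - dist / (R / 4.0)) * 5.0
color = np.zeros((R, R, R, 3)); color[..., 0] = 1.0

ray_rgb, ray_alpha = render_ray(
    density, color,
    origin=np.array([-R / 2.0, (R - 1) / 2.0, (R - 1) / 2.0]),
    direction=np.array([1.0, 0.0, 0.0]),
)
print(ray_rgb, ray_alpha)  # near-red color, accumulated alpha close to 1
```

Because the grid is generated once per scene (in VoxGRAF, by a single forward
pass of a 3D convolutional generator), every new viewpoint only repeats this
cheap sampling-and-compositing step; no network is queried per ray sample.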
Authors
Katja Schwarz, Axel Sauer, Michael Niemeyer, Yiyi Liao, Andreas Geiger
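
The foreground/background disentanglement mentioned in the abstract can be
sketched in the same spirit. Assuming the 3D branch produces an alpha-weighted
(premultiplied) foreground image with an accumulated per-pixel alpha, as
render_ray above does per ray, compositing it over a separately generated 2D
background is a single "over" operation. Shapes and names below are hypothetical
stand-ins, not the paper's interfaces.

```python
import numpy as np

# Hypothetical outputs: fg_rgb is already alpha-weighted (premultiplied) by the
# volume renderer, fg_alpha is the accumulated per-pixel opacity in [0, 1], and
# bg_rgb comes from a separate 2D background generator.
H = W = 64
fg_rgb = np.zeros((H, W, 3))      # stand-in for the volume-rendered foreground
fg_alpha = np.zeros((H, W))       # accumulated opacity per pixel
bg_rgb = np.full((H, W, 3), 0.5)  # stand-in for the 2D-generated background

# "Over" compositing with a premultiplied foreground: no extra alpha multiply.
image = fg_rgb + (1.0 - fg_alpha)[..., None] * bg_rgb
```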