NeRF++: Analyzing and Improving Neural Radiance Fields
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a
variety of capture settings, including 360° capture of bounded scenes and
forward-facing capture of bounded and unbounded scenes. NeRF fits multi-layer
perceptrons (MLPs) representing view-invariant opacity and view-dependent color
volumes to a set of training images, and renders novel views using volume
rendering techniques. In this technical report, we first remark on radiance
fields and their potential ambiguities, namely the shape-radiance ambiguity,
and analyze NeRF's success in avoiding such ambiguities. Second, we address a
parametrization issue involved in applying NeRF to 360° captures of objects
within large-scale, unbounded 3D scenes. Our method improves view synthesis
fidelity in this challenging scenario. Code is available at
this https URL
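
As background for the rendering step described above, here is a minimal NumPy sketch of volume rendering: per-sample opacity and view-dependent color are composited along each camera ray into a pixel color. Function and variable names are illustrative and not taken from the authors' codebase.

import numpy as np

def composite_ray(sigmas, colors, t_vals):
    # sigmas: (N,) view-invariant opacity (density) at each sample
    # colors: (N, 3) view-dependent RGB radiance at each sample
    # t_vals: (N,) sample depths along the ray, in increasing order
    deltas = np.diff(t_vals, append=1e10)        # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)      # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)     # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])  # T_i excludes sample i itself
    weights = alphas * trans                     # each sample's contribution
    return (weights[:, None] * colors).sum(axis=0)  # final pixel color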
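The parametrization fix developed in the full report is an inverted sphere parametrization of the background: a point at distance r > 1 from the scene origin is represented by its unit direction together with its inverse distance, (x/r, y/r, z/r, 1/r), so the unbounded exterior maps to a bounded domain. A minimal sketch of that mapping, assuming the scene has already been normalized so that cameras and foreground lie inside the unit sphere (names are again illustrative):

import numpy as np

def invert_outside_sphere(points):
    # points: (..., 3) background points with distance r = |points| > 1
    # Returns (..., 4): unit direction plus inverse distance 1/r in (0, 1],
    # so the unbounded exterior occupies a bounded domain that an MLP can
    # sample at uniform resolution.
    r = np.linalg.norm(points, axis=-1, keepdims=True)
    return np.concatenate([points / r, 1.0 / r], axis=-1)

Inside the unit sphere a standard NeRF MLP models the foreground; a second MLP consumes this 4D parametrization for the background, and the two are composited along each ray.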
Authors: Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun