PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields
Recent advances in neural radiance fields have enabled the high-fidelity 3D
reconstruction of complex scenes for novel view synthesis. However, how the
appearance of such representations can be efficiently edited while maintaining
photorealism remains underexplored.
In this work, we present PaletteNeRF, a novel method for photorealistic
appearance editing of neural radiance fields (NeRF) based on 3D color
decomposition. Our method decomposes the appearance of each 3D point into a
linear combination of palette-based bases (i.e., 3D segmentations defined by a
group of NeRF-type functions) that are shared across the scene. While our
palette-based bases are view-independent, we also predict a view-dependent
function to capture the color residual (e.g., specular shading). During
training, we jointly optimize the basis functions and the color palettes, and
we also introduce novel regularizers to encourage the spatial coherence of the
decomposition.
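As a rough illustration, the per-point color model described above can be sketched as a weighted sum of global palette colors plus a view-dependent residual. This is a minimal sketch, not the paper's implementation: the function and array names below are hypothetical, and the actual method predicts the weights and residual with NeRF-type networks and composites them along camera rays via volume rendering.

```python
import numpy as np

def composite_color(weights, palette, residual):
    """Sketch of the palette-based color decomposition.

    weights:  (N, K) per-point blending weights over the K palette bases
              (view-independent in the described model).
    palette:  (K, 3) RGB palette colors shared across the scene.
    residual: (N, 3) view-dependent color offset (e.g., specular shading).
    Returns the composed per-point RGB colors, shape (N, 3).
    """
    # Linear combination of the shared palette colors...
    base_color = weights @ palette
    # ...plus the view-dependent residual, clamped to valid RGB range.
    return np.clip(base_color + residual, 0.0, 1.0)
```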
Our method allows users to efficiently edit the appearance of the 3D scene by
modifying the color palettes. We also extend our framework with compressed
semantic features for semantic-aware appearance editing. We demonstrate that
our technique is superior to baseline methods both quantitatively and
qualitatively for appearance editing of complex real-world scenes.
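Under the same hypothetical names as the sketch above, an appearance edit then amounts to replacing palette entries while the learned weights and residual stay fixed, for example:

```python
# Recoloring sketch (illustrative only): swap a palette entry and re-composite.
edited_palette = palette.copy()
edited_palette[2] = np.array([0.9, 0.2, 0.1])  # e.g., shift the third basis toward red
edited_rgb = composite_color(weights, edited_palette, residual)
```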
Authors
Zhengfei Kuang, Fujun Luan, Sai Bi, Zhixin Shu, Gordon Wetzstein, Kalyan Sunkavalli