We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors for complex real-world scenes.
Instead, we propose to stylize the more robust radiance field representation.
We find that the commonly used Gram matrix-based loss tends to produce blurry results without faithful brushstrokes, and introduce a nearest neighbor-based loss that is highly effective at capturing style details while maintaining multi-view consistency.
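As a rough illustration of what a nearest neighbor-based style loss can look like, the sketch below matches each feature of the rendered image to its most similar feature of the style image and penalizes their cosine distance. This is a minimal PyTorch sketch under stated assumptions, not necessarily the paper's exact formulation; the function name, the (N, C) feature layout, and the upstream VGG feature extraction are assumptions.

```python
import torch
import torch.nn.functional as F


def nn_feature_matching_loss(render_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Nearest neighbor-based style loss (illustrative sketch, not the exact paper formulation).

    render_feats: (N, C) features extracted from the rendered image (e.g., VGG activations).
    style_feats:  (M, C) features extracted from the style image.
    Each rendered feature is matched to its most similar style feature, and the loss is
    the mean cosine distance to those nearest neighbors.
    """
    # Normalize so that dot products equal cosine similarities.
    r = F.normalize(render_feats, dim=-1)   # (N, C)
    s = F.normalize(style_feats, dim=-1)    # (M, C)
    cos_sim = r @ s.t()                     # (N, M) pairwise similarities
    # For each rendered feature, keep only its best-matching style feature.
    best_sim, _ = cos_sim.max(dim=1)        # (N,)
    # Cosine distance to that nearest neighbor, averaged over all rendered features.
    return (1.0 - best_sim).mean()
```

Unlike a Gram matrix loss, which only matches global feature statistics, this per-feature matching rewards reproducing local style patterns such as individual brushstrokes.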
We also propose a novel deferred back-propagation method to enable optimization of memory-intensive radiance fields using style losses defined on full-resolution rendered images.
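A minimal sketch of the deferred back-propagation idea, assuming a generic ray-based renderer: the full-resolution image is first rendered without building an autograd graph, the gradient of the style loss with respect to the rendered pixels is cached, and the image is then re-rendered in small chunks with gradients enabled so the cached per-pixel gradients can be propagated into the scene parameters. The helpers `render_rays` and `style_loss` and the chunk size are hypothetical.

```python
import torch


def deferred_backprop_step(render_rays, rays, image_hw, style_loss, chunk=4096):
    """Illustrative sketch of deferred back-propagation (hypothetical helpers).

    render_rays(rays) -> (num_rays, 3) RGB colors, differentiable w.r.t. scene parameters.
    rays: all rays of one full-resolution view, flattened to (H * W, ...).
    style_loss(image) -> scalar style loss on an (H, W, 3) image.
    """
    H, W = image_hw

    # Step 1: render the full-resolution image without keeping an autograd graph.
    with torch.no_grad():
        full_img = render_rays(rays).reshape(H, W, 3)

    # Step 2: compute the style loss on the full image and cache its gradient
    # with respect to the rendered pixel colors.
    full_img = full_img.detach().requires_grad_(True)
    loss = style_loss(full_img)
    loss.backward()
    pixel_grad = full_img.grad.reshape(-1, 3)

    # Step 3: re-render in small chunks *with* gradients and inject the cached
    # per-pixel gradients, accumulating them into the scene parameters.
    for i in range(0, rays.shape[0], chunk):
        colors = render_rays(rays[i:i + chunk])
        colors.backward(gradient=pixel_grad[i:i + chunk])

    return loss.item()
```

The point of this two-pass scheme is that peak memory scales with the chunk size rather than the full image, which is what makes style losses on full-resolution renders feasible.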
Our extensive evaluation demonstrates that our method outperforms baselines by generating artistic appearance that more closely resembles the style image.
Authors
Kai Zhang, Nick Kolkin, Sai Bi, Fujun Luan, Zexiang Xu, Eli Shechtman, Noah Snavely