NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction
Recent methods for neural surface representation and rendering, for example
NeuS, have demonstrated remarkably high-quality reconstruction of static
scenes. However, training NeuS takes an extremely long time (8 hours),
which makes it almost impossible to apply such methods to dynamic scenes with
thousands of frames. We propose a fast neural surface reconstruction approach,
called NeuS2, which achieves a two-order-of-magnitude speedup in training
without compromising reconstruction quality. To accelerate the
training process, we integrate multi-resolution hash encodings into a neural
surface representation and implement our whole algorithm in CUDA. We also
present a lightweight calculation of second-order derivatives tailored to our
networks (i.e., ReLU-based MLPs), which achieves a factor-of-two speed-up. To
further stabilize training, we propose a progressive learning strategy that
optimizes the multi-resolution hash encodings from coarse to fine. In
addition, we extend our method to dynamic scene reconstruction with an
incremental training strategy. Our experiments on various datasets
demonstrate that NeuS2
significantly outperforms the state of the art in both surface reconstruction
accuracy and training speed. The video is available at this https URL.
Authors
Yiming Wang, Qin Han, Marc Habermann, Kostas Daniilidis, Christian Theobalt, Lingjie Liu
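
The lightweight second-order derivative rests on a property of ReLU networks
that is easy to see in isolation: the spatial gradient of a ReLU MLP is a
product of weight matrices interleaved with fixed 0/1 activation masks, and
because the second derivative of ReLU vanishes almost everywhere, those masks
are locally constant in the weights. The sketch below illustrates only this
closed-form gradient; it is a toy NumPy example under assumed layer sizes,
not the paper's CUDA implementation.

```python
# A minimal NumPy sketch (illustrative assumptions, not the paper's CUDA code).
# It shows the closed form of the spatial gradient of a ReLU MLP,
#   df/dx = W_L @ diag(m_{L-1}) @ ... @ diag(m_1) @ W_1,
# where m_i are the 0/1 ReLU masks. Since ReLU'' = 0 almost everywhere, the
# masks are locally constant in the weights, which is the intuition behind a
# lightweight second-order derivative: one differentiates this product with
# respect to the network weights without another backward pass through the
# activations.
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 16, 16, 1]                          # 3D input, scalar SDF output
Ws = [rng.standard_normal((n, m)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward_with_masks(x):
    """Forward pass that also records each hidden layer's ReLU mask."""
    h, masks = x, []
    for W in Ws[:-1]:
        pre = W @ h
        masks.append((pre > 0).astype(float))
        h = pre * masks[-1]
    return (Ws[-1] @ h).item(), masks

def spatial_gradient(masks):
    """df/dx as a product of masked weight matrices (no autodiff needed)."""
    J = Ws[0]
    for W, m in zip(Ws[1:], masks):
        J = W @ (m[:, None] * J)
    return J.ravel()

x = rng.standard_normal(3)
f, masks = forward_with_masks(x)
g_analytic = spatial_gradient(masks)

# Check against central finite differences (expected to agree away from kinks).
eps = 1e-5
g_numeric = np.array([(forward_with_masks(x + eps * e)[0]
                       - forward_with_masks(x - eps * e)[0]) / (2 * eps)
                      for e in np.eye(3)])
print(np.allclose(g_analytic, g_numeric, atol=1e-4))
```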
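
The progressive coarse-to-fine strategy can likewise be pictured as gradually
unmasking the finer levels of the multi-resolution hash encoding as training
proceeds. The sketch below is a minimal illustration; the level count,
schedule length, and function names are assumptions, not the paper's actual
settings.

```python
# A minimal sketch of the coarse-to-fine idea, assuming a hash encoding with
# `num_levels` feature levels; the schedule and all names are illustrative
# assumptions rather than the paper's hyperparameters.
import numpy as np

def level_mask(step, num_levels=8, steps_per_level=500):
    """Enable one additional (finer) hash-grid level every `steps_per_level`
    training steps, starting from the coarsest level."""
    active = min(num_levels, 1 + step // steps_per_level)
    mask = np.zeros(num_levels, dtype=np.float32)
    mask[:active] = 1.0
    return mask

def masked_encoding(features_per_level, step):
    """Concatenate per-level features, zeroing levels not yet activated."""
    mask = level_mask(step, num_levels=len(features_per_level))
    return np.concatenate([m * f for m, f in zip(mask, features_per_level)])

# Example: early in training only the coarsest level contributes.
feats = [np.random.randn(2).astype(np.float32) for _ in range(8)]
print(masked_encoding(feats, step=0))     # only level 0 is non-zero
print(masked_encoding(feats, step=2600))  # levels 0-5 are active
```

The intent, per the abstract, is that fitting the coarse levels first
constrains the overall shape before the finer levels add detail, which is
what stabilizes training.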