CityNeRF: Growing Neural Field for Multi-Scale View Modeling
CityNeRF: Building NeRF at City Scale
We introduce a progressive learning paradigm that grows the neural radiance field (NeRF) model and the training set synchronously.
Training starts by fitting distant views with a shallow base block; as training progresses, new blocks are appended to accommodate the emerging details in increasingly closer views.
The strategy progressively activates high-frequency channels in the positional encoding and unfolds more complex details as training proceeds.
We demonstrate the superiority of this approach in modeling diverse city-scale scenes with drastically varying views, and its support for rendering views at different levels of detail.
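To make the progressive growing scheme concrete, below is a minimal PyTorch sketch of the two ideas described above: a positional encoding whose high-frequency channels are masked out until later training stages, and a model that appends residual refinement blocks as closer views join the training set. The module names, stage schedule, and layer widths are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ProgressivePosEnc(nn.Module):
    """Positional encoding with a frequency mask; higher frequencies are
    zeroed out until later training stages (illustrative schedule)."""

    def __init__(self, num_freqs=10):
        super().__init__()
        self.num_freqs = num_freqs
        # frequencies 1, 2, 4, ... as in the standard NeRF encoding
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs))

    def forward(self, x, active_freqs):
        # x: (N, 3); encode each coordinate with sin/cos at each frequency
        xb = x[..., None] * self.freqs                            # (N, 3, F)
        enc = torch.cat([torch.sin(xb), torch.cos(xb)], dim=-1)   # (N, 3, 2F)
        # mask frequencies that are not yet activated at this stage
        mask = (torch.arange(self.num_freqs, device=x.device) < active_freqs)
        enc = enc * mask.repeat(2).float()                         # same mask for sin/cos
        return torch.cat([x, enc.flatten(-2)], dim=-1)             # (N, 3 + 6F)


class GrowingNeRF(nn.Module):
    """A shallow base block fits distant views; appended blocks refine the
    prediction residually as closer views are added at later stages."""

    def __init__(self, num_freqs=10, width=256, num_stages=4):
        super().__init__()
        self.penc = ProgressivePosEnc(num_freqs)
        in_dim = 3 + 6 * num_freqs
        self.base = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(width + in_dim, width), nn.ReLU())
            for _ in range(num_stages - 1)
        ])
        # one rgb+sigma head per stage
        self.heads = nn.ModuleList([nn.Linear(width, 4) for _ in range(num_stages)])
        self.num_freqs = num_freqs
        self.num_stages = num_stages

    def forward(self, x, stage):
        # activate more positional-encoding frequencies at later stages
        active = self.num_freqs * (stage + 1) // self.num_stages
        enc = self.penc(x, active)
        h = self.base(enc)
        out = self.heads[0](h)                        # coarse prediction (distant views)
        for i in range(stage):                        # appended blocks add residual detail
            h = self.blocks[i](torch.cat([h, enc], dim=-1))
            out = out + self.heads[i + 1](h)
        return out
```

A training loop under this sketch would step `stage` from 0 to `num_stages - 1`, adding the corresponding batch of closer views to the training set at each transition, so that model capacity and scene detail grow together.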
Authors
Yuanbo Xiangli, Linning Xu, Xingang Pan, Nanxuan Zhao, Anyi Rao, Christian Theobalt, Bo Dai, Dahua Lin