Fast Dynamic Radiance Fields with Time-Aware Neural Voxels
Neural radiance fields (NeRF) have shown great success in modeling 3D scenes
and synthesizing novel-view images. However, most previous NeRF methods
require a long time to optimize a single scene. Explicit data structures,
e.g., voxel features, show great potential for accelerating the training
process. However, voxel features face two major challenges when applied to
dynamic scenes: modeling temporal information and capturing different scales
of point motions.
We propose a radiance field framework, named TiNeuVox, that represents scenes
with time-aware voxel features. A tiny coordinate deformation network is
introduced to model coarse motion trajectories, and temporal information is
further enhanced in the radiance network. A multi-distance interpolation
method is proposed and applied to voxel features to model both small and
large motions. Our framework significantly accelerates the optimization of
dynamic
radiance fields while maintaining high rendering quality. Empirical
evaluation is performed on both synthetic and real scenes. Our TiNeuVox
completes training in only 8 minutes with an 8 MB storage cost while
achieving rendering quality similar to or even better than that of previous
dynamic NeRF methods.
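To make the two ingredients named above concrete, the sketch below gives a minimal PyTorch illustration of (i) a tiny coordinate deformation MLP conditioned on time and (ii) multi-distance interpolation, realized here by sampling a dense voxel feature grid at several pooled scales and concatenating the results. This is an illustrative sketch only: the names TinyDeformNet and multi_distance_interp, the layer sizes, the stride values, and the pooling-based multi-scale lookup are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDeformNet(nn.Module):
    """Small MLP that maps (position, time) to a coarse coordinate offset."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # pts: (N, 3) in [-1, 1], t: (N, 1) in [0, 1]; returns deformed points.
        return pts + self.mlp(torch.cat([pts, t], dim=-1))


def multi_distance_interp(voxel_grid: torch.Tensor,
                          pts: torch.Tensor,
                          strides=(1, 2, 4)) -> torch.Tensor:
    """Trilinearly sample a feature grid at several spatial scales.

    voxel_grid: (1, C, D, H, W) dense feature grid.
    pts:        (N, 3) query coordinates normalized to [-1, 1].
    Returns     (N, C * len(strides)) concatenated multi-scale features.
    """
    _, C, _, _, _ = voxel_grid.shape
    coords = pts[None, :, None, None, :]  # (1, N, 1, 1, 3), (x, y, z) order
    feats = []
    for s in strides:
        # A coarser grid makes one trilinear lookup cover a larger
        # neighborhood, which helps represent larger motions.
        grid_s = F.avg_pool3d(voxel_grid, kernel_size=s) if s > 1 else voxel_grid
        sampled = F.grid_sample(grid_s, coords, align_corners=True)  # (1, C, N, 1, 1)
        feats.append(sampled.reshape(C, -1).t())                     # (N, C)
    return torch.cat(feats, dim=-1)


# Toy usage with random data (hypothetical grid size and feature width).
grid = torch.randn(1, 16, 64, 64, 64)
pts = torch.rand(1024, 3) * 2 - 1
t = torch.rand(1024, 1)
deformed = TinyDeformNet()(pts, t)
features = multi_distance_interp(grid, deformed.clamp(-1, 1))  # (1024, 48)
```

The multi-scale features and the time input would then be fed to a radiance network that predicts density and color; that step is omitted here.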