Neural Radiance Fields (NeRF) have recently attracted a surge of interest within the computer vision community for their ability to synthesize photorealistic novel views of real-world scenes.
One limitation of this approach is its requirement of accurate camera poses to learn the scene representations.
In this paper, we propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect (or even unknown) camera poses.
Experiments on synthetic and real-world data show that BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems (e.g., SLAM) and potential applications for dense 3D mapping and reconstruction.
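The core idea, jointly refining camera poses alongside the scene representation by gradient descent, can be illustrated with a toy 1D analogue. The sketch below is purely hypothetical (not the paper's code): a scalar "pose" offset t and the weights of a two-harmonic scene model are optimized together from misaligned observations, with the higher frequency frozen early on, loosely mimicking BARF's coarse-to-fine schedule over positional encodings.

```python
import math

# Toy bundle-adjustment sketch (hypothetical): recover an unknown 1-D
# "camera" offset t and scene weights (w0, w1) jointly, by gradient
# descent on both at once. BARF does the analogous thing with full
# SE(3) camera poses and a NeRF MLP.

N = 64
T_TRUE = 0.3  # unknown pose offset we hope to recover

xs = [2 * math.pi * i / N for i in range(N)]
# observations of the true scene sin(x) + 0.5*sin(2x), seen under offset T_TRUE
ys = [math.sin(x + T_TRUE) + 0.5 * math.sin(2 * (x + T_TRUE)) for x in xs]

w0, w1, t = 0.8, 0.3, 0.0  # scene weights and pose, all initialized wrong
lr = 0.1

for step in range(3000):
    # coarse-to-fine: freeze the higher-frequency component early on,
    # loosely mimicking BARF's coarse-to-fine positional encoding
    fine = step >= 1000
    g0 = g1 = gt = 0.0
    for x, y in zip(xs, ys):
        u = x + t
        r = w0 * math.sin(u) + (w1 * math.sin(2 * u) if fine else 0.0) - y
        g0 += 2 * r * math.sin(u) / N
        if fine:
            g1 += 2 * r * math.sin(2 * u) / N
        gt += 2 * r * (w0 * math.cos(u)
                       + (2 * w1 * math.cos(2 * u) if fine else 0.0)) / N
    w0 -= lr * g0
    w1 -= lr * g1
    t -= lr * gt

# final reconstruction error; t should land near T_TRUE (mod 2*pi)
loss = sum((w0 * math.sin(x + t) + w1 * math.sin(2 * (x + t)) - y) ** 2
           for x, y in zip(xs, ys)) / N
```

In this toy setting the pose basin is wide enough for naive joint descent to succeed; the paper's contribution addresses the harder regime where large misalignment makes such naive optimization fail for high-frequency encodings.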
Authors
Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, Simon Lucey