CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning

Authors

  • Qingsong Yan, Wuhan University
  • Qiang Wang, Harbin Institute of Technology (Shenzhen)
  • Kaiyong Zhao, XGRIDS
  • Jie Chen, Hong Kong Baptist University
  • Bo Li, The Hong Kong University of Science and Technology
  • Xiaowen Chu, The Hong Kong University of Science and Technology (Guangzhou); The Hong Kong University of Science and Technology
  • Fei Deng, Wuhan University; Hubei Luojia Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i6.28464

Keywords:

CV: 3D Computer Vision

Abstract

Neural Radiance Fields (NeRF) have demonstrated impressive performance in novel view synthesis. However, NeRF and most of its variants still rely on a traditional, complex pipeline, such as COLMAP, to provide extrinsic and intrinsic camera parameters. Recent works, such as NeRFmm, BARF, and L2G-NeRF, directly treat camera parameters as learnable and estimate them through differentiable volume rendering. However, these methods only work for forward-facing scenes with slight camera motion and fail in practice when the camera rotates substantially. To overcome this limitation, we propose a novel camera-parameter-free neural radiance field (CF-NeRF), which, inspired by incremental structure from motion, incrementally reconstructs the 3D representation and recovers the camera parameters. Given a sequence of images, CF-NeRF estimates the camera parameters of the images one by one and reconstructs the scene through three steps: initialization, implicit localization, and implicit optimization. To evaluate our method, we use a challenging real-world dataset, NeRFBuster, which provides 12 scenes under complex trajectories. Results demonstrate that CF-NeRF is robust to rotation and achieves state-of-the-art results without any prior information or constraints.
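The incremental procedure the abstract describes can be sketched as a control-flow skeleton: register the first images jointly (initialization), then for each new image estimate its pose against the current radiance field (implicit localization) and jointly refine the field and all registered poses (implicit optimization). This is a minimal toy sketch of that loop only — all function names and the dictionary-based "model" are illustrative assumptions, not the authors' code, and no actual NeRF training is performed.

```python
# Toy sketch of CF-NeRF's incremental registration loop (hypothetical names).
# Cameras are recovered one image at a time, mirroring incremental SfM.

def initialize(first_images):
    """Initialization: jointly fit the field and the first few poses (stubbed)."""
    return [{"image": im, "pose": "coarse"} for im in first_images]

def localize(model, image):
    """Implicit localization: estimate the new image's pose against the
    current radiance field, keeping the field fixed (stubbed)."""
    return {"image": image, "pose": "coarse"}

def optimize(model):
    """Implicit optimization: jointly refine the field and every pose
    registered so far (stubbed as marking poses refined)."""
    for cam in model:
        cam["pose"] = "refined"
    return model

def cf_nerf(images, n_init=2):
    """Incremental loop: one recovered camera per input image."""
    model = initialize(images[:n_init])
    for image in images[n_init:]:
        model.append(localize(model, image))  # register the next image
        model = optimize(model)               # refine field + all poses
    return model
```

The key design point this mirrors is that, unlike NeRFmm/BARF-style joint optimization of all poses at once, each new camera is anchored to an already-consistent partial reconstruction, which is what makes large rotations tractable.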

Published

2024-03-24

How to Cite

Yan, Q., Wang, Q., Zhao, K., Chen, J., Li, B., Chu, X., & Deng, F. (2024). CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6440–6448. https://doi.org/10.1609/aaai.v38i6.28464

Section

AAAI Technical Track on Computer Vision V