Multi-Space Neural Radiance Fields

Ze-Xin Yin, Jiaxiong Qiu, Ming-Ming Cheng, Bo Ren
VCIP, CS, Nankai University

Paper · Supplementary · BibTeX · Code · Dataset
Teaser comparisons: Mip-NeRF vs. MS-Mip-NeRF_B and Mip-NeRF 360 vs. MS-Mip-NeRF 360.

Abstract

Neural Radiance Fields (NeRF) and its variants have reached state-of-the-art performance in many novel-view-synthesis tasks. However, current NeRF-based methods still struggle in the presence of reflective objects, often producing blurry or distorted renderings. Instead of computing a single radiance field, we propose a multi-space neural radiance field (MS-NeRF) that represents the scene with a group of feature fields in parallel sub-spaces, which helps the network better handle reflective and refractive objects. Our multi-space scheme works as an enhancement to existing NeRF methods, with only a small computational overhead for training and inferring the extra-space outputs. We demonstrate the superiority and compatibility of our approach on three representative NeRF-based models, i.e., NeRF, Mip-NeRF, and Mip-NeRF 360. Comparisons are performed on a newly constructed dataset consisting of 25 synthetic scenes and 7 real captured scenes with complex reflection and refraction, all with 360-degree viewpoints. Extensive experiments show that our approach significantly outperforms existing single-space NeRF methods in rendering high-quality scenes involving complex light paths through mirror-like objects.

Method Pipeline

Figure: overview of the MS-NeRF model pipeline.
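To make the pipeline concrete, below is a minimal PyTorch sketch of the multi-space output head: the backbone produces a shared density plus K sub-space feature vectors per sample; each sub-space feature field is volume-rendered into a per-pixel feature, a small decoder MLP maps it to RGB, a gate MLP produces a softmax weight over the sub-spaces, and the final pixel color is the weighted sum. Module names, layer sizes, and the exact point at which decoding and the softmax are applied are illustrative assumptions, not the released implementation.

```python
# A minimal sketch (not the official code) of a multi-space rendering head.
# `MultiSpaceHead`, `feat_dim`, `num_subspaces`, and all layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiSpaceHead(nn.Module):
    """Composites K per-sample sub-space features into one pixel color."""

    def __init__(self, feat_dim: int = 24, num_subspaces: int = 6):
        super().__init__()
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 3))   # rendered feature -> RGB
        self.gate = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))      # rendered feature -> weight logit

    def forward(self, feats, density, deltas):
        # feats: (R, S, K, D) per-sample sub-space features from the backbone MLP
        # density: (R, S) shared density; deltas: (R, S) sample spacing along each ray
        alpha = 1.0 - torch.exp(-density * deltas)                        # (R, S)
        trans = torch.cumprod(torch.cat(
            [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1)[:, :-1]
        vis = (alpha * trans)[..., None, None]                            # (R, S, 1, 1)
        # Volume-render each sub-space feature field to a per-pixel feature map.
        feat_maps = (vis * feats).sum(dim=1)                              # (R, K, D)
        rgb_maps = torch.sigmoid(self.decoder(feat_maps))                 # (R, K, 3)
        weights = F.softmax(self.gate(feat_maps).squeeze(-1), dim=-1)     # (R, K)
        # Weighted sum of sub-space RGB maps gives the final pixel color.
        return (weights[..., None] * rgb_maps).sum(dim=1)                 # (R, 3)


# Usage: R=1024 rays, S=64 samples, K=6 sub-spaces, D=24-dim features.
head = MultiSpaceHead()
rgb = head(torch.rand(1024, 64, 6, 24),
           torch.rand(1024, 64),
           torch.full((1024, 64), 0.01))
print(rgb.shape)  # torch.Size([1024, 3])
```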

Novel View Rendering (Mip-NeRF 360 vs. MS-Mip-NeRF 360)

Novel View Rendering (Mip-NeRF vs. MS-Mip-NeRF_B)

Sub-space RGB and weights (based on MS-NeRF_B)

BibTeX

@InProceedings{Yin_2023_CVPR,
    author    = {Yin, Ze-Xin and Qiu, Jiaxiong and Cheng, Ming-Ming and Ren, Bo},
    title     = {Multi-Space Neural Radiance Fields},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {12407-12416}
}

Acknowledgement

This work is funded by the Natural Science Foundation of China (No. 62132012). We borrow the template from NoPe-NeRF, a great work that enhances NeRF-based methods on scenes without known camera poses.