RaNeuS

Implementation of RaNeuS: Ray-adaptive Neural Surface Reconstruction.

Features

This repository aims to provide a highly efficient yet customizable boilerplate for research projects based on NeRF or NeuS.

Please subscribe to issue #26 for our latest findings on quality improvements!

Requirements


Run

Training on NeRF-Synthetic

Download the NeRF-Synthetic data here and put it under load/. The file structure should be like load/nerf_synthetic/lego.
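After extraction, each scene should contain the standard NeRF-Synthetic layout, roughly:

load/nerf_synthetic/lego
├── transforms_train.json
├── transforms_val.json
├── transforms_test.json
├── train/
├── val/
└── test/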

Run the launch script with --train, specifying the config file, the GPU(s) to use (GPU 0 is used by default), and the scene name:

# train NeRF
python launch.py --config configs/nerf-blender-wbg.yaml --gpu 0 --train dataset.scene=lego tag=example

# train NeuS with mask
python launch.py --config configs/neus-blender.yaml --gpu 0 --train dataset.scene=lego tag=example
# train NeuS without mask
python launch.py --config configs/neus-blender-wbg.yaml --gpu 0 --train dataset.scene=lego tag=example

Code snapshots, checkpoints, and experiment outputs are saved to exp/[name]/[tag]@[timestamp], and TensorBoard logs can be found at runs/[name]/[tag]@[timestamp]. You can override any configuration in the YAML file by specifying arguments without --, for example:

python launch.py --config configs/nerf-blender.yaml --gpu 0 --train dataset.scene=lego tag=iter50k seed=0 trainer.max_steps=50000
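To monitor a run, you can point TensorBoard at the runs/ directory (assuming TensorBoard is installed in your environment):

tensorboard --logdir runs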

Training on DTU

Download the preprocessed DTU data provided by NeuS or IDR. The provided config files assume NeuS DTU data; if you are using IDR DTU data, set dataset.cameras_file=cameras.npz. You may also need to adjust dataset.root_dir to point to the location of your downloaded data. An example of overriding these options is shown after the commands below.

# train NeuS on DTU without mask
python launch.py --config configs/neus-dtu.yaml --gpu 0 --train
# train NeuS on DTU with mask
python launch.py --config configs/neus-dtu-wmask.yaml --gpu 0 --train
# train NeuS on DTU with mask using tricks from Neuralangelo (experimental)
python launch.py --config configs/neuralangelo-dtu-wmask.yaml --gpu 0 --train
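For example, to train on IDR DTU data, both overrides can be passed on the command line (the scan path below is illustrative; substitute your own data location):

python launch.py --config configs/neus-dtu.yaml --gpu 0 --train dataset.root_dir=load/DTU-IDR/scan65 dataset.cameras_file=cameras.npz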


Training on Custom COLMAP Data

To get COLMAP data from custom images, you need COLMAP installed (see here for installation instructions). Put your images in an images/ folder, then run scripts/imgs2poses.py on the directory that contains images/. For example:

python scripts/imgs2poses.py ./load/bmvs_dog # images are in ./load/bmvs_dog/images

Existing data following this file structure also works, as long as the images are stored in images/ and there is a sparse/ folder for the COLMAP output, for example the data provided by MipNeRF 360. An optional masks/ folder can be provided for object mask supervision. To train on COLMAP data, please refer to the example config files configs/*-colmap.yaml. The expected directory layout is sketched below.
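For reference, a custom COLMAP scene directory would look roughly like this (bmvs_dog is just the example scene from above; masks/ is optional):

load/bmvs_dog
├── images/   # input RGB images
├── masks/    # optional object masks for supervision
└── sparse/   # COLMAP sparse reconstruction output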

Testing

The training procedure is by default followed by testing, which computes metrics on the test data, generates animations, and exports the geometry as triangle meshes. If you want to run testing alone, just resume the pretrained model and replace --train with --test, for example:

python launch.py --config path/to/your/exp/config/parsed.yaml --resume path/to/your/exp/ckpt/epoch=0-step=20000.ckpt --gpu 0 --test

Citation

If you find this codebase useful, please consider citing:

@inproceedings{wang2023raneus,
  title={RaNeuS: Ray-adaptive Neural Surface Reconstruction},
  author={Wang, Yida and Tan, David and Tombari, Federico and Navab, Nassir},
  booktitle={Proceedings of the IEEE/CVF International Conference on 3D Vision},
  year={2023}
}