NeuRay

[CVPR2022] Neural Rays for Occlusion-aware Image-based Rendering

A rendered video, produced without any training on the scene.

Todo List

  • Generalization models and rendering code.
  • Training of generalization models.
  • Finetuning code and finetuned models.

Usage

Setup

git clone git@github.com:liuyuan-pal/NeuRay.git
cd NeuRay
pip install -r requirements.txt
Dependencies
  • torch==1.7.1
  • opencv_python==4.4.0
  • tensorflow==2.4.1
  • numpy==1.19.2
  • scipy==1.5.2
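
A quick sanity check that the environment matches these pinned versions (a minimal sketch; the expected strings are exactly the versions listed above):

# Print installed versions of the pinned dependencies and flag mismatches.
import cv2
import numpy as np
import scipy
import tensorflow as tf
import torch

expected = {
    "torch": (torch.__version__, "1.7.1"),
    "opencv_python": (cv2.__version__, "4.4.0"),
    "tensorflow": (tf.__version__, "2.4.1"),
    "numpy": (np.__version__, "1.19.2"),
    "scipy": (scipy.__version__, "1.5.2"),
}
for name, (installed, pinned) in expected.items():
    status = "OK" if installed.startswith(pinned) else f"expected {pinned}"
    print(f"{name}: {installed} ({status})")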

Download datasets and pretrained models

  1. Download processed datasets: DTU-Test / LLFF / NeRF Synthetic.
  2. Download pretrained model NeuRay-Depth and NeuRay-CostVolume.
  3. Organize datasets and models as follows
NeuRay
|-- data
    |-- model
        |-- neuray_gen_cost_volume
        |-- neuray_gen_depth
    |-- dtu_test
    |-- llff_colmap
    |-- nerf_synthetic
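
Before rendering, a small check like the following can verify that the layout above is in place (the paths are exactly the directories shown in the tree):

# Check that the expected dataset/model directories exist.
from pathlib import Path

expected_dirs = [
    "data/model/neuray_gen_cost_volume",
    "data/model/neuray_gen_depth",
    "data/dtu_test",
    "data/llff_colmap",
    "data/nerf_synthetic",
]
missing = [d for d in expected_dirs if not Path(d).is_dir()]
print("layout OK" if not missing else f"missing: {missing}")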

Render

# render on lego of the NeRF synthetic dataset
# (use nerf_synthetic/lego/black_400 for the lower resolution)
python render.py --cfg configs/gen/neuray_gen_depth.yaml \
                 --database nerf_synthetic/lego/black_800 \
                 --pose_type eval

# render on snowman of the DTU dataset
# (use dtu_test/snowman/black_400 for the lower resolution)
python render.py --cfg configs/gen/neuray_gen_depth.yaml \
                 --database dtu_test/snowman/black_800 \
                 --pose_type eval

# render on fern of the LLFF dataset
# (use llff_colmap/fern/low for the lower resolution)
python render.py --cfg configs/gen/neuray_gen_depth.yaml \
                 --database llff_colmap/fern/high \
                 --pose_type eval

The rendered images are saved in data/render/<database_name>/<renderer_name>-pretrain-eval/. If the pose_type is eval, ground-truth images are also generated in data/render/<database_name>/gt.

Explanation of the parameters of render.py:

  • cfg is the path to the renderer config file; it can also be configs/gen/neuray_gen_cost_volume.yaml.
  • database is a database name consisting of <dataset_name>/<scene_name>/<scene_setting> (a parsing sketch follows this list).
    • nerf_synthetic/lego/black_800 means the scene "lego" from the "nerf_synthetic" dataset, using a "black" background at resolution 800x800.
    • dtu_test/snowman/black_800 means the scene "snowman" from the "dtu_test" dataset, using a "black" background at resolution 800x600.
    • llff_colmap/fern/high means the scene "fern" from the "llff_colmap" dataset at "high" resolution (1008x756).
    • We may also use llff_colmap/fern/low, which renders at "low" resolution (504x378).
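
For reference, a database name can be split into its three parts as below. This is a minimal sketch of the naming convention only; the actual parsing inside render.py may differ.

# Split a database name of the form <dataset_name>/<scene_name>/<scene_setting>.
# Illustrative only; the parser used by render.py may differ.
def parse_database_name(database):
    dataset_name, scene_name, scene_setting = database.split("/")
    return dataset_name, scene_name, scene_setting

print(parse_database_name("nerf_synthetic/lego/black_800"))
# -> ('nerf_synthetic', 'lego', 'black_800')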

Evaluation

# psnr/ssim/lpips will be printed on screen
python eval.py --dir_pr data/render/<database_name>/<renderer_name>-pretrain-eval \
               --dir_gt data/render/<database_name>/gt

# example of evaluation on "fern"
# (the images in dir_pr must already be rendered)
python eval.py --dir_pr data/render/llff_colmap/fern/high/neuray_gen_depth-pretrain-eval \
               --dir_gt data/render/llff_colmap/fern/high/gt
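
As a rough cross-check of the printed metrics, PSNR between the rendered and ground-truth directories can be computed as in the sketch below, using only the pinned numpy and opencv_python packages. The directory paths repeat the "fern" example above; eval.py itself also reports SSIM and LPIPS.

# Rough PSNR between rendered images and ground truth, matched by filename.
# Covers PSNR only; eval.py additionally reports SSIM and LPIPS.
import os
import cv2
import numpy as np

def psnr(img_a, img_b):
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

dir_pr = "data/render/llff_colmap/fern/high/neuray_gen_depth-pretrain-eval"
dir_gt = "data/render/llff_colmap/fern/high/gt"
scores = [psnr(cv2.imread(os.path.join(dir_pr, name)),
               cv2.imread(os.path.join(dir_gt, name)))
          for name in sorted(os.listdir(dir_gt))]
print(f"mean PSNR: {np.mean(scores):.2f} dB")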

Render on custom scenes

To render on custom scenes, please refer to this.

Generalization model training

Download training sets

  1. Download Google Scanned Objects, RealEstate10K, the Spaces dataset, and the LLFF-released scenes from IBRNet.
  2. Download the COLMAP depth for forward-facing scenes here.
  3. Download the DTU training images here.
  4. Download the COLMAP depth for the DTU training images here.
  5. Note: the COLMAP depth maps are no longer available, so you may need to run COLMAP yourself (a sketch of the standard pipeline follows this list).
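
For completeness, the sketch below drives the standard COLMAP sparse + dense pipeline from Python. The directory arguments are placeholders, and the exact COLMAP options NeuRay expects for its depth caches may differ; patch_match_stereo also requires a CUDA-enabled COLMAP build.

# A sketch of the standard COLMAP pipeline (sparse reconstruction followed by
# dense depth estimation). Paths are placeholders; adapt to your scene layout.
import os
import subprocess

def run_colmap(image_dir, workspace):
    database = os.path.join(workspace, "database.db")
    sparse = os.path.join(workspace, "sparse")
    dense = os.path.join(workspace, "dense")
    os.makedirs(sparse, exist_ok=True)

    def colmap(*args):
        subprocess.run(["colmap", *args], check=True)

    colmap("feature_extractor", "--database_path", database,
           "--image_path", image_dir)
    colmap("exhaustive_matcher", "--database_path", database)
    colmap("mapper", "--database_path", database,
           "--image_path", image_dir, "--output_path", sparse)
    colmap("image_undistorter", "--image_path", image_dir,
           "--input_path", os.path.join(sparse, "0"), "--output_path", dense)
    # Depth maps are written to <dense>/stereo/depth_maps.
    colmap("patch_match_stereo", "--workspace_path", dense)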

Rename the directories and organize the datasets as follows

NeuRay
|-- data
    |-- google_scanned_objects
    |-- real_estate_dataset # RealEstate10k-subset  
    |-- real_iconic_noface
    |-- spaces_dataset
    |-- colmap_forward_cache
    |-- dtu_train
    |-- colmap_dtu_cache

Train generalization model

Train the model with NeuRay initialized from depth estimated by COLMAP.

python run_training.py --cfg configs/train/gen/neuray_gen_depth_train.yaml

Train the model with NeuRay initialized from constructed cost volumes.

python run_training.py --cfg configs/train/gen/neuray_gen_cost_volume_train.yaml

Models will be saved at data/model. Every 10k steps, the model is validated and the validation images are saved at data/vis_val/<model_name>-<val_set_name>.

Render with trained models

python render.py --cfg configs/gen/neuray_gen_depth_train.yaml \
                 --database llff_colmap/fern/high \
                 --pose_type eval

Scene-specific finetuning

Finetuning

# finetune on lego from the NeRF synthetic dataset
python run_training.py --cfg configs/train/ft/neuray_ft_depth_lego.yaml

# finetune on fern from the LLFF dataset
python run_training.py --cfg configs/train/ft/neuray_ft_depth_fern.yaml

# finetune on birds from the DTU dataset
python run_training.py --cfg configs/train/ft/neuray_ft_depth_birds.yaml

# finetune the model initialized from cost volume
python run_training.py --cfg configs/train/ft/neuray_ft_cv_lego.yaml

The finetuned models will be saved at data/model.

Finetuned models

We provide finetuned models on the NeRF synthetic dataset here.

Download the models and organize the files as follows

NeuRay
|-- data
    |-- model
        |-- neuray_ft_lego_pretrain
        |-- neuray_ft_chair_pretrain
        ...

Render with finetuned models

# render on lego of the NeRF synthetic dataset
python render.py --cfg configs/ft/neuray_ft_lego_pretrain.yaml \
                 --database nerf_synthetic/lego/black_800 \
                 --pose_type eval \
                 --render_type ft

Code explanation

We provide an explanation of the variable naming conventions here to make the code more readable.

Acknowledgements

In this repository, we use code and datasets from the following repositories. We thank all the authors for sharing their great code and datasets.

Citation

@inproceedings{liu2022neuray,
  title={Neural Rays for Occlusion-aware Image-based Rendering},
  author={Liu, Yuan and Peng, Sida and Liu, Lingjie and Wang, Qianqian and Wang, Peng and Theobalt, Christian and Zhou, Xiaowei and Wang, Wenping},
  booktitle={CVPR},
  year={2022}
}
