This repository helps you set up PointNeRF. We provide a stable environment for Point-NeRF on Ubuntu 20.04 and modify the Point-NeRF code to fix some bugs encountered at runtime. We also share a pipeline for generating your own dataset with BlenderNeRF. The code extends the original PointNeRF repository.
All code was tested in the following environment: Python 3.8, Ubuntu 20.04, CUDA > 11.7.
- Install the environment from the `yml` file:

  ```
  conda env create -f environment.yml
  ```

- In mainland China:

  ```
  conda env create -f environment_autodl.yml
  ```

- Install `pytorch3d`:

  ```
  conda activate point-nerf
  pip install fvcore iopath
  pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu117_pyt1131/download.html
  ```
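The `pytorch3d` wheel index URL above encodes the Python, CUDA, and PyTorch versions in its path (`py39_cu117_pyt1131`). As a sketch, you can assemble the index URL for your own setup from those three tags (the variable names below are our own, not part of the original commands):

```shell
# Assemble the pytorch3d wheel index URL from version tags
# (tags here match the install command above: Python 3.9, CUDA 11.7, PyTorch 1.13.1)
PYV=39; CUV=117; PTV=1131
URL="https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py${PYV}_cu${CUV}_pyt${PTV}/download.html"
echo "$URL"   # pass to: pip install --no-index --no-cache-dir pytorch3d -f "$URL"
```

Adjust the three tags to match the Python, CUDA, and PyTorch versions actually installed in your environment before installing the wheel.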
For data preparation for the traditional NeRF datasets, please see PointNeRF. We also show a pipeline to create your own dataset using BlenderNeRF. This pipeline is only applicable to `nerf_synthetic`-style data; please put the dataset in the `./data_src/nerf/nerf_synthetic` folder.
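For reference, a prepared scene ends up with a layout like the following (the scene name `dragon` is just an example; the folder and file names follow the preparation steps in this section):

```
data_src/nerf/nerf_synthetic/
└── dragon/                     # example scene name
    ├── train/                  # rendered images from BlenderNeRF
    ├── eval/                   # copy of train/
    ├── test/                   # copy of train/
    ├── transforms_train.json
    ├── transforms_eval.json
    └── transforms_test.json
```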
- Install Blender and the BlenderNeRF add-on.
- Download our Blender template.
- Open the template, import the target object, and rescale it so that the object fits within the `BlenderNeRF Sphere`.
- Press `space` to check that the camera poses and scale are correct. You can set the background to be transparent in the `Film` option of the `Render Properties` tab.
- There are two options for generating a dataset:
  - Random camera poses around the sphere: select the `Camera on Sphere (COS)` option in the BlenderNeRF add-on, choose `BlenderNeRF Camera`, and run `PLAY COS`.
  - Continuous camera poses rotating about the z-axis: select the `Subset of Frames (SOF)` option in the BlenderNeRF add-on, choose `Test Camera`, and run `PLAY SOF`.
- Decompress the zip file, open the `json` files, and remove all `.png` suffixes. Then duplicate the `train` folder as the `eval` and `test` folders, and create the corresponding `json` files (`transforms_eval.json`, `transforms_test.json`).
- Create the corresponding run scripts (e.g. `dragon_cuda.sh`); note that you only need to change the `name` and `scan` options in the bash file first.
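The decompress-and-duplicate step above can be sketched in shell (the scene folder name `my_scene` and the exact BlenderNeRF export layout are assumptions; check your export before running):

```shell
# Post-process a BlenderNeRF export into the nerf_synthetic layout.
# Assumes the export was unzipped into my_scene/ containing train/ and transforms_train.json.
cd my_scene
# Strip the ".png" suffix from every file_path entry in the json
sed -i 's/\.png"/"/g' transforms_train.json
# Duplicate train/ as eval/ and test/, with matching transforms files
cp -r train eval
cp -r train test
cp transforms_train.json transforms_eval.json
cp transforms_train.json transforms_test.json
```

Note that `sed -i` edits the file in place (GNU sed, as shipped with Ubuntu); keep a backup of the export if you want to re-run the step.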
You can download the `MVSNet` directory from Google Drive and place it under `checkpoints/`. Please check PointNeRF for more details.

This environment only supports the `pytorch3d` implementation:
Train scripts:

```
bash dev_scripts/w_n360/chair_cuda.sh
bash dev_scripts/w_n360/drums_cuda.sh
bash dev_scripts/w_n360/ficus_cuda.sh
bash dev_scripts/w_n360/hotdog_cuda.sh
bash dev_scripts/w_n360/lego_cuda.sh
bash dev_scripts/w_n360/materials_cuda.sh
bash dev_scripts/w_n360/mic_cuda.sh
bash dev_scripts/w_n360/ship_cuda.sh
```
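To reproduce all eight NeRF-Synthetic scenes in one go, the scripts above can be run in a loop (a convenience sketch, not a script shipped with the repo):

```shell
# Run every per-scene training script in sequence
for scene in chair drums ficus hotdog lego materials mic ship; do
    echo "=== training ${scene} ==="
    bash dev_scripts/w_n360/${scene}_cuda.sh
done
```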
This repo is developed based on PointNeRF. Please cite the corresponding papers.
```
@inproceedings{xu2022point,
  title={Point-nerf: Point-based neural radiance fields},
  author={Xu, Qiangeng and Xu, Zexiang and Philip, Julien and Bi, Sai and Shu, Zhixin and Sunkavalli, Kalyan and Neumann, Ulrich},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5438--5448},
  year={2022}
}
```
The code is released under the GPL-3.0 license.