Authors: Deheng Zhang*, Haitao Yu*, Peiyuan Xie*, Tianyi Zhang*
This repository contains the official implementation of Point-Based Radiance Fields for Controllable Human Motion Synthesis.
Our method exploits an explicit point cloud to train the static 3D scene and applies deformation by encoding the point-cloud translations with a deformation MLP. To keep the rendering consistent with the canonical-space training, we estimate the local rotation using SVD and interpolate the per-point rotations onto the query view directions of the pre-trained radiance field. Extensive experiments show that our approach significantly outperforms the state of the art on fine-level complex deformations and generalizes to 3D characters other than humans.
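For intuition only, the sketch below shows one way to estimate a per-point local rotation from the deformed neighborhood of each point by solving an orthogonal Procrustes problem with an SVD. This is a simplified illustration, not the repository's implementation; the neighbor indices and neighborhood size are assumed to be given.

```python
import torch

def estimate_local_rotations(canonical_pts, deformed_pts, neighbors):
    """Estimate a per-point rotation via SVD (orthogonal Procrustes).

    canonical_pts, deformed_pts: (N, 3) point positions before/after deformation.
    neighbors: (N, K) indices of each point's K nearest neighbors (assumed given).
    Returns (N, 3, 3) rotations R such that deformed offsets ~= canonical offsets @ R^T.
    """
    # Center each neighborhood around its query point.
    P = canonical_pts[neighbors] - canonical_pts[:, None, :]   # (N, K, 3)
    Q = deformed_pts[neighbors] - deformed_pts[:, None, :]     # (N, K, 3)

    # Per-point cross-covariance, followed by a batched SVD.
    H = P.transpose(1, 2) @ Q                                   # (N, 3, 3)
    U, _, Vt = torch.linalg.svd(H)

    # Kabsch reflection fix: flip the last singular direction if det(R) < 0.
    d = torch.sign(torch.det(Vt.transpose(1, 2) @ U.transpose(1, 2)))
    Vt = Vt.clone()
    Vt[:, -1, :] *= d[:, None]
    return Vt.transpose(1, 2) @ U.transpose(1, 2)               # (N, 3, 3)
```

Rotations estimated this way can then be interpolated to the query points and applied to the view directions before querying the canonical radiance field.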
- Please first install the libraries as below and download/prepare the datasets as instructed.
- Point Initialization: Download the pre-trained MVSNet as below and train the feature extraction from scratch, or directly download the pre-trained models (to obtain the `MVSNet` and `init` folders under `checkpoints/`).
- Per-scene Optimization: Download the pre-trained models or optimize from scratch as instructed.
All the code is tested in the following environment: Python 3.8; Ubuntu 20.04; CUDA > 11.7.
- Install the environment from `environment.yml`:

```bash
conda env create -f environment.yml
```

- Install `pytorch3d`:

```bash
conda activate point-nerf-editing
pip install fvcore iopath
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu117_pyt1131/download.html
```
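As a quick sanity check (ours, not part of the provided scripts), the environment should at least allow the following imports:

```python
# Minimal environment check; the wheel URL above targets PyTorch 1.13.1 / CUDA 11.7,
# but any compatible combination should import fine.
import torch
import pytorch3d

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)
```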
We provide all data folders here: polybox. Please put the folders in the following directory structure.
```
pointnerf
├── data_src
│   ├── nerf
│   │   ├── nerf_synthetic
```
Alternatively, you can follow the instructions in PointNeRF-Assistant to create your own dataset; the data format should be the same as the nerf_synthetic dataset.
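A custom dataset should therefore follow the standard nerf_synthetic layout: per-split `transforms_*.json` files containing `camera_angle_x` and, for each frame, a `file_path` and a `transform_matrix`. A minimal check is sketched below, with `<scene>` as a placeholder for your scene folder:

```python
import json
import numpy as np

# Replace <scene> with your scene folder (e.g., one created via PointNeRF-Assistant).
with open("data_src/nerf/nerf_synthetic/<scene>/transforms_train.json") as f:
    meta = json.load(f)

print("camera_angle_x:", meta["camera_angle_x"])
frame = meta["frames"][0]
print("first frame:", frame["file_path"])
print("camera-to-world pose:\n", np.array(frame["transform_matrix"]))
```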
We trained MVSNet on DTU. You can download the `MVSNet` directory from Google Drive and place it under `checkpoints/`.
You can skip training and download the `nerfsynth` checkpoint folders here: polybox. Place them in the following directory structure.
```
pointnerf
├── checkpoints
│   ├── init
│   ├── MVSNet
│   ├── nerfsynth
```
For each scene, we provide the points and weights at 200K steps (`200000_net_ray_marching.pth`).
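To check that a downloaded per-scene checkpoint is intact, you can load it with PyTorch. The path below is only an example, and the exact contents of the checkpoint are specific to PointNeRF, so the printed keys are simply whatever it happens to contain:

```python
import torch

# Example path; substitute the scene you downloaded.
ckpt = torch.load("checkpoints/nerfsynth/<scene>/200000_net_ray_marching.pth",
                  map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    for key in list(ckpt.keys())[:10]:   # peek at the first few entries
        print(key)
```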
Train scripts:

```bash
bash dev_scripts/w_n360/dragon_cuda.sh
bash dev_scripts/w_n360/gangnam_cuda.sh
bash dev_scripts/w_n360/human_cuda.sh
bash dev_scripts/w_n360/phoenix_cuda.sh
bash dev_scripts/w_n360/robot_cuda.sh
bash dev_scripts/w_n360/samba_cuda.sh
bash dev_scripts/w_n360/spiderman_cuda.sh
bash dev_scripts/w_n360/turtle_cuda.sh
bash dev_scripts/w_n360/woman_cuda.sh
```
Deformation scripts:

```bash
bash dev_scripts/w_n360/dragon_deform.sh
bash dev_scripts/w_n360/gangnam_deform.sh
bash dev_scripts/w_n360/human_deform.sh
bash dev_scripts/w_n360/phoenix_deform.sh
bash dev_scripts/w_n360/robot_deform.sh
bash dev_scripts/w_n360/samba_deform.sh
bash dev_scripts/w_n360/spiderman_deform.sh
bash dev_scripts/w_n360/turtle_deform.sh
bash dev_scripts/w_n360/woman_deform.sh
```
Notes on the configuration in `deform.sh`:

```bash
ray_bend=0      # 0: no bending; 1: use ray bending
sample_num=-1   # -1: use the whole set of keypoints; 0~1: ratio of the original keypoints; >1: number of keypoints
```
```bibtex
@misc{yu2023pointbased,
      title={Point-Based Radiance Fields for Controllable Human Motion Synthesis},
      author={Haitao Yu and Deheng Zhang and Peiyuan Xie and Tianyi Zhang},
      year={2023},
      eprint={2310.03375},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
Our repo is developed based on MVSNet, PointNeRF, and DPF; please also consider citing the corresponding papers. We thank our supervisor Dr. Sergey Prokudin from the Computer Vision and Learning Group at ETH Zurich for his help and tons of useful advice on this project.
The code is released under the GPL-3.0 license.