Yang Fu · Sifei Liu · Amey Kulkarni · Jan Kautz · Alexei A. Efros · Xiaolong Wang
Paper | Video | Project Page
The code has been tested on Python 3.10 and CUDA >= 11.6. The simplest way to install all dependencies is to use Anaconda and pip, following the steps below:
conda create -n cf3dgs python=3.10
conda activate cf3dgs
conda install conda-forge::cudatoolkit-dev=11.7.0
conda install pytorch==2.0.0 torchvision==0.15.0 pytorch-cuda=11.7 -c pytorch -c nvidia
git clone --recursive git@github.com:NVlabs/CF-3DGS.git
pip install -r requirements.txt
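Optionally, you can verify that PyTorch sees your GPU before running anything (a quick sanity check, not part of the install steps themselves):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"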
DATAROOT is ./data by default. Please first create the data folder with mkdir data.
Download the data preprocessed by Nope-NeRF as below; the data should be saved into the ./data/Tanks folder.
wget https://www.robots.ox.ac.uk/~wenjing/Tanks.zip
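Then extract the archive into the data folder (this assumes the zip unpacks into a top-level Tanks/ directory; adjust the target path if your archive layout differs):
unzip Tanks.zip -d ./data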
Download our preprocessed data and save it into the ./data/co3d folder.
# change the scene path as needed
python run_cf3dgs.py -s data/Tanks/Francis \
--mode train \
--data_type tanks
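To train on every scene under data/Tanks in one pass, a simple loop like the sketch below should work (it assumes each scene sits in its own subdirectory of data/Tanks):
for scene in data/Tanks/*/; do
    python run_cf3dgs.py -s "$scene" --mode train --data_type tanks
done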
# pose estimation
python run_cf3dgs.py --source data/Tanks/Francis \
--mode eval_pose \
--data_type tanks \
--model_path ${CKPT_PATH}
# by default the checkpoint should be stored in "./output/progressive/Tanks_Francis/chkpnt/ep00_init.pth"
# novel view synthesis
python run_cf3dgs.py --source data/Tanks/Francis \
--mode eval_nvs \
--data_type tanks \
--model_path ${CKPT_PATH}
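For convenience, both evaluations can be chained once training has finished. The sketch below assumes the default checkpoint location quoted above:
CKPT_PATH=./output/progressive/Tanks_Francis/chkpnt/ep00_init.pth
python run_cf3dgs.py --source data/Tanks/Francis --mode eval_pose --data_type tanks --model_path ${CKPT_PATH}
python run_cf3dgs.py --source data/Tanks/Francis --mode eval_nvs --data_type tanks --model_path ${CKPT_PATH}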
We release some of the novel view synthesis results (gdrive) for comparison with future work.
- To run CF-3DGS on your own video, you first need to convert the video to frames and save them to ./data/$CUSTOM_DATA/images/ (see the ffmpeg sketch below).
- Camera intrinsics can be obtained by running COLMAP (check details in convert.py). Otherwise, we provide a heuristic camera setting which should work for most landscape videos.
- Run the following commands:
# change to your data path
python run_cf3dgs.py -s ./data/$CUSTOM_DATA/ \
--mode train \
--data_type custom
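For reference, one common way to turn a video into frames is ffmpeg. The sketch below is only an example: the input file name, frame rate, and frame naming pattern are placeholders, not requirements of the code.
mkdir -p ./data/$CUSTOM_DATA/images
ffmpeg -i your_video.mp4 -vf fps=10 ./data/$CUSTOM_DATA/images/%04d.png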
Our renderer is built upon 3DGS. The data processing and visualization code is partially borrowed from Nope-NeRF. We thank all the authors for their great repos.
If you find this code helpful, please cite:
@InProceedings{Fu_2024_CVPR,
author = {Fu, Yang and Liu, Sifei and Kulkarni, Amey and Kautz, Jan and Efros, Alexei A. and Wang, Xiaolong},
title = {COLMAP-Free 3D Gaussian Splatting},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024},
pages = {20796-20805}
}