Dongbin Zhang*,
Chuming Wang*,
Weitao Wang,
Peihao Li,
Minghan Qin,
Haoqian Wang†
(* denotes equal contribution, † denotes the corresponding author)
Webpage | Full Paper | Video
This repository contains the official authors' implementation associated with the paper "Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections", which can be found here.
Pipeline of GS-W
The repository contains submodules, so please check it out with
# SSH
git clone git@github.com:EastbeanZhang/Gaussian-Wild.git --recursive
or
# HTTPS
git clone https://github.com/EastbeanZhang/Gaussian-Wild.git --recursive
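If you have already cloned the repository without --recursive, the submodules can still be fetched afterwards:
# fetch submodules for an existing clone
git submodule update --init --recursive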
The components have been tested on Ubuntu Linux 18.04. Instructions for setting up and running each of them are in the sections below.
Download the scenes (we use Brandenburg Gate, Trevi Fountain, and Sacre Coeur in our experiments) from the Image Matching Challenge PhotoTourism (IMC-PT) 2020 dataset. Download the train/test split from NeRF-W and put it under each scene's folder (at the same level as the "dense" folder); see the tree structure of each dataset below for more details.
The synthetic Lego dataset can be downloaded from Nerf_Data.
brandenburg_gate/
├── dense/
│   ├── images/
│   ├── sparse/
│   ├── stereo/
└── brandenburg.tsv

trevi_fountain/
├── dense/
│   ├── images/
│   ├── sparse/
│   ├── stereo/
└── trevi.tsv

sacre_coeur/
├── dense/
│   ├── images/
│   ├── sparse/
│   ├── stereo/
└── sacre.tsv

lego/
├── train/
├── test/
├── val/
├── transforms_train.json
├── transforms_test.json
├── transforms_val.json
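For the Photo Tourism scenes, the downloaded .tsv split simply sits next to the dense folder, as shown above. A hypothetical shell sketch (download locations and paths are assumptions; adjust them to your setup):
# place the NeRF-W split next to the scene's dense/ folder (paths are illustrative)
cp /path/to/downloads/brandenburg.tsv /path/to/brandenburg_gate/
ls /path/to/brandenburg_gate/  # expect: dense/  brandenburg.tsv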
The optimizer uses PyTorch and CUDA extensions in a Python environment to produce trained models.
- CUDA-ready GPU with Compute Capability 7.0+
- 24 GB VRAM (to train to paper evaluation quality)
- Conda (recommended for easy setup)
- C++ Compiler for PyTorch extensions (we used VS Code)
- CUDA SDK 11 for PyTorch extensions (we used 11.8)
- C++ Compiler and CUDA SDK must be compatible
Our default, provided install method is based on Conda package and environment management:
conda env create --file environment.yml
conda activate GS-W
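After activating the environment, a quick sanity check can confirm that PyTorch sees a CUDA-capable GPU with the compute capability and CUDA version listed above (a minimal sketch, not part of the repository):
# quick environment check, run inside the GS-W conda environment
import torch
print(torch.cuda.is_available())            # should print True
print(torch.cuda.get_device_capability(0))  # should be (7, 0) or higher
print(torch.version.cuda)                   # should match the installed CUDA SDK, e.g. 11.8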
Take the Sacre Coeur scene as an example; more specific commands are shown in run_train.sh.
# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./train.py --source_path /path/to/sacre_coeur/dense/ --scene_name sacre --model_path outputs/sacre/full --eval --resolution 2 --iterations 70000
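The other scenes can be trained with the same flags. The commands below mirror the Sacre Coeur example and are only a sketch (scene names and paths are assumptions; see run_train.sh for the exact settings):
# brandenburg gate (illustrative)
CUDA_VISIBLE_DEVICES=0 python ./train.py --source_path /path/to/brandenburg_gate/dense/ --scene_name brandenburg --model_path outputs/brandenburg/full --eval --resolution 2 --iterations 70000
# trevi fountain (illustrative)
CUDA_VISIBLE_DEVICES=0 python ./train.py --source_path /path/to/trevi_fountain/dense/ --scene_name trevi --model_path outputs/trevi/full --eval --resolution 2 --iterations 70000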
Render the training and testing views (this is done automatically after training by default):
# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./render.py --model_path outputs/sacre/full
Render a multi-view video:
# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./render.py --model_path outputs/sacre/full --skip_train --skip_test --render_multiview_vedio
Render with appearance interpolation:
# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./render.py --model_path outputs/sacre/full --skip_train --skip_test --render_interpolate
Evaluate metrics (this is done automatically after training by default).
Similar to NeRF-W, Ha-NeRF, and CR-NeRF, we evaluate metrics on the right half of each image to compare with them:
# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./metrics_half.py --model_path outputs/sacre/full
If desired, it can also be evaluated on the whole image.
# sacre coeur
CUDA_VISIBLE_DEVICES=0 python ./metrics.py --model_path outputs/sacre/full
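For reference, the right-half convention used above amounts to cropping each rendered/ground-truth pair before computing the metric. A minimal PSNR sketch under that assumption (illustrative only; metrics_half.py is the authoritative implementation):
import torch

def psnr_right_half(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """PSNR on the right half of (3, H, W) images with values in [0, 1]."""
    w = pred.shape[-1]
    pred_right, gt_right = pred[..., w // 2:], gt[..., w // 2:]
    mse = torch.mean((pred_right - gt_right) ** 2)
    return 10.0 * torch.log10(1.0 / mse)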
Our code is based on the awesome PyTorch implementation of 3D Gaussian Splatting (3DGS). We appreciate all the contributors.