```bash
# download this repo
git clone git@github.com:Karbo123/RGBD-Diffusion.git --depth=1
cd RGBD-Diffusion
git submodule update --init --recursive

# set up environment
conda create -n RGBD2 python=3.8
conda activate RGBD2

# install packages
pip install torch # tested on 1.12.1+cu116
pip install torchvision
pip install matplotlib # tested on 3.5.3
pip install opencv-python einops trimesh diffusers ninja open3d

# install dependencies
cd ./third_party/nvdiffrast && pip install . && cd ../..
cd ./third_party/recon && pip install . && cd ../..
```
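After installation, you can optionally sanity-check the environment (a minimal sketch, assuming the `RGBD2` conda env is active): PyTorch should see the GPU, and the packages installed above should import cleanly.

```bash
# verify the CUDA build of PyTorch and that the key packages import
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import nvdiffrast, open3d, diffusers; print('imports OK')"
```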
Download some files:
- the preprocessed ScanNetV2 dataset. Extract it via
  ```bash
  mkdir data_file && unzip scans_keyframe.zip -d data_file && mv data_file/scans_keyframe data_file/ScanNetV2
  ```
- the model checkpoint. Extract it via
  ```bash
  mkdir -p out/RGBD2/checkpoint && unzip model.zip -d out/RGBD2/checkpoint
  ```
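After extraction, the folder layout implied by the commands above can be checked quickly:

```bash
# both folders should be non-empty after extraction
ls data_file/ScanNetV2     # preprocessed ScanNetV2 scenes
ls out/RGBD2/checkpoint    # model checkpoint
```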
Copy the config file to an output folder:
```bash
mkdir -p out/RGBD2/backup/config
cp ./config/cfg_RGBD2.py out/RGBD2/backup/config
```
We provide a checkpoint, so you don't actually need to train a model from scratch. To launch training, simply run:
```bash
CUDA_VISIBLE_DEVICES=0 python -m recon.runner.train --cfg config/cfg_RGBD2.py
```
If you want to train with multiple GPUs, set `CUDA_VISIBLE_DEVICES` to a list of devices, as shown below.
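A multi-GPU launch (for example with four visible GPUs; the entry point itself is unchanged):

```bash
# train on four GPUs by exposing them all to the runner
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m recon.runner.train --cfg config/cfg_RGBD2.py
```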
Note that training progress is visualized by writing TensorBoard log files.
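To view them, point TensorBoard at the output folder (a sketch; the exact log directory is an assumption, so adjust it to wherever the event files appear):

```bash
# launch TensorBoard on the training output folder (log path assumed)
tensorboard --logdir out/RGBD2
```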
To generate a test scene, simply run:
```bash
CUDA_VISIBLE_DEVICES=0 python experiments/run.py
```
By additionally passing `--interactive`, you can manually steer the generation process through a GUI, as shown below.
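The interactive variant of the same command:

```bash
# interactive generation with the Matplotlib-based GUI
CUDA_VISIBLE_DEVICES=0 python experiments/run.py --interactive
```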
Our GUI code uses Matplotlib, so you can even run the code on a remote server and use an X server (e.g. MobaXterm) to enable graphical control!
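For example, with plain SSH X11 forwarding (a sketch; `user@server` is a placeholder, and clients like MobaXterm handle the local X server side automatically):

```bash
# forward X11 so the Matplotlib window renders on your local machine
ssh -X user@server
# then, on the server:
CUDA_VISIBLE_DEVICES=0 python experiments/run.py --interactive
```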
If you find our work useful, please consider citing our paper:
```bibtex
@InProceedings{Lei_2023_CVPR,
    author    = {Lei, Jiabao and Tang, Jiapeng and Jia, Kui},
    title     = {RGBD2: Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023}
}
```
This repo is still an early-access version under active development.
If you have any questions or requests, feel free to contact me, or simply open a GitHub issue.