You can set up the required environment with the following commands:
# install python dependencies
conda env create -f environment.yaml
conda activate scenedreamer
# compile third party libraries
export CUDA_VERSION=$(nvcc --version | grep -Po "(\d+\.)+\d+" | head -1)
CURRENT=$(pwd)
for p in correlation channelnorm resample2d bias_act upfirdn2d; do
  cd imaginaire/third_party/${p};
  rm -rf build dist *info;
  python setup.py install;
  cd ${CURRENT};
done
for p in gancraft/voxlib; do
  cd imaginaire/model_utils/${p};
  make all
  cd ${CURRENT};
done
cd gridencoder
python setup.py build_ext --inplace
python -m pip install .
cd ${CURRENT}
# Now, all done!
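If you want to confirm the build before moving on, a quick import check like the sketch below can help. Note that the extension module names used here are assumptions based on the directories compiled above; adjust them to whatever actually got installed in your environment.

```python
# Minimal sanity check: verify PyTorch sees the GPU and the compiled
# extensions import. The module names are assumptions derived from the
# build steps above; adjust if your installed package names differ.
import importlib

import torch

print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

for name in ["correlation", "channelnorm", "resample2d", "gridencoder"]:
    try:
        importlib.import_module(name)
        print(f"[ok]      {name}")
    except ImportError as err:
        print(f"[missing] {name}: {err}")
```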
Please download our checkpoints from Google Drive to run the inference scripts below. You may store the checkpoint in the root directory of this repo:
├── ...
└── SceneDreamer
    ├── inference.py
    ├── README.md
    └── scenedreamer_released.pt
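Before running inference, you can sanity-check the download by loading the checkpoint on CPU. This is just a sketch: the file's internal layout is not documented here, so treat the key inspection as exploratory.

```python
# Load the released checkpoint on CPU to confirm the file is intact.
# Its internal structure is an assumption; we only peek at top-level keys.
import torch

ckpt = torch.load("scenedreamer_released.pt", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
else:
    print("loaded object of type:", type(ckpt).__name__)
```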
You can run the following command to generate your own 3D world!
python inference.py --config configs/scenedreamer_inference.yaml --output_dir ./test/ --seed 8888 --checkpoint ./scenedreamer_released.pt
The results will be saved under ./test with the following structure:
├── ...
└── test
    └── camera_{:02d}          # camera mode for the trajectory
        ├── rgb_render         # per-frame RGB renderings
        │   ├── 00000.png
        │   ├── 00001.png
        │   └── ...
        ├── rgb_render.mp4     # rendered video
        ├── height_map.png     # height map
        ├── semantic_map.png   # semantic map
        └── style.npy          # sampled style code
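If you want to inspect the non-video outputs programmatically, NumPy and Pillow are enough. The sketch below assumes camera_00 as the {:02d} camera index and makes no promises about array shapes; print them to see what your run produced.

```python
# Inspect the auxiliary outputs of one camera trajectory.
# camera_00 is a guess at the {:02d} index; adjust to your run.
import numpy as np
from PIL import Image

base = "./test/camera_00"
height = np.asarray(Image.open(f"{base}/height_map.png"))
semantic = np.asarray(Image.open(f"{base}/semantic_map.png"))
style = np.load(f"{base}/style.npy")  # sampled style code

print("height map:  ", height.shape, height.dtype)
print("semantic map:", semantic.shape, semantic.dtype)
print("style code:  ", style.shape, style.dtype)
```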
Here is a sampled scene rendered with the default parameters:

[Demo video: rgb_render.2.mp4]
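Each generated world is determined by the seed, so sweeping a few seeds is an easy way to sample different scenes. A minimal driver, reusing the flags from the inference command above (the per-seed output directory naming is just a suggestion):

```python
# Sample several worlds by sweeping the seed. The flags mirror the
# inference command shown above; the output directory names are arbitrary.
import subprocess

for seed in [8888, 1234, 2023]:
    subprocess.run(
        [
            "python", "inference.py",
            "--config", "configs/scenedreamer_inference.yaml",
            "--output_dir", f"./test_seed_{seed}/",
            "--seed", str(seed),
            "--checkpoint", "./scenedreamer_released.pt",
        ],
        check=True,
    )
```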
You can also launch the demo locally with a Gradio UI:
python app_gradio.py
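For reference, the core of such a demo can be as small as wrapping the inference script in a gr.Interface. The sketch below is not the repo's app_gradio.py, only an illustration of the pattern; the seed-only input and the returned video path are assumptions.

```python
# Illustrative only: NOT the repo's app_gradio.py. Wraps the inference
# script in a minimal Gradio UI; the returned video path is a guess.
import subprocess

import gradio as gr

def generate(seed):
    out_dir = f"./test_seed_{int(seed)}/"
    subprocess.run(
        ["python", "inference.py",
         "--config", "configs/scenedreamer_inference.yaml",
         "--output_dir", out_dir,
         "--seed", str(int(seed)),
         "--checkpoint", "./scenedreamer_released.pt"],
        check=True,
    )
    return f"{out_dir}camera_00/rgb_render.mp4"  # hypothetical output path

gr.Interface(
    fn=generate,
    inputs=gr.Number(value=8888, label="Seed", precision=0),
    outputs=gr.Video(label="Rendered trajectory"),
).launch()
```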