Project Page | Video | Paper
In CVPR 2021
Shangzhe Wu1,4, Ameesh Makadia4, Jiajun Wu2, Noah Snavely4, Richard Tucker4, Angjoo Kanazawa3,4
1 University of Oxford, 2 Stanford University, 3 University of California, Berkeley, 4 Google Research
(Teaser video: teaser.mp4)
We propose a model that de-renders a single image of a vase into shape, material and environment illumination, trained using only a single image collection, without explicit 3D, multi-view or multi-light supervision.
1. Install dependencies:
conda env create -f environment.yml
OR manually:
conda install -c conda-forge matplotlib opencv scikit-image pyyaml tensorboard
2. Install PyTorch:
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch
Note: The code is tested with PyTorch 1.4.0 and CUDA 10.1. A GPU build of PyTorch is required, as the neural_renderer package only has a GPU implementation; a quick sanity check for the full GPU setup is included after step 3.
3. Install neural_renderer:
This package is required for training and testing, and optional for the demo. It requires a GPU device and GPU-enabled PyTorch.
pip install neural_renderer_pytorch==1.1.3
Note: If this fails, or a runtime error occurs later, try compiling the package from source. If you don't have gcc >= 5, you can install one from conda: conda install gxx_linux-64=7.3
git clone https://github.com/daniilidis-group/neural_renderer.git
cd neural_renderer
python setup.py install
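Once neural_renderer is installed, the GPU setup can be sanity-checked end to end. The snippet below is only a minimal sketch, not code from this repository: it prints the PyTorch/CUDA versions and renders the silhouette of a single triangle to confirm the CUDA kernels work.

import torch
import neural_renderer as nr

# Confirm the expected PyTorch/CUDA versions and that a GPU is visible.
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())  # e.g. 1.4.0 10.1 True

# Render one triangle's silhouette to verify the neural_renderer CUDA kernels.
vertices = torch.tensor([[[-0.5, -0.5, 1.0],
                          [ 0.5, -0.5, 1.0],
                          [ 0.0,  0.5, 1.0]]], dtype=torch.float32).cuda()
faces = torch.tensor([[[0, 1, 2]]], dtype=torch.int32).cuda()
renderer = nr.Renderer(image_size=64, camera_mode='look_at')
silhouette = renderer.render_silhouettes(vertices, faces)
print(silhouette.shape)  # torch.Size([1, 64, 64])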
This vase dataset is collected from the Metropolitan Museum of Art Collection through their open-access API under the CC0 license. It contains 1,888 training images and 526 test images of museum vases, with segmentation masks obtained using PointRend and GrabCut.
Download the preprocessed dataset using the provided script:
cd data && sh download_met_vases.sh
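For reference, the masks in this dataset were refined with GrabCut, which is available in OpenCV (already in the dependency list). The snippet below is only an illustrative sketch of that step, not the preprocessing code used to build the dataset; the file name and bounding rectangle are placeholders.

import cv2
import numpy as np

# Placeholder input: any RGB vase photo; the rectangle should loosely bound the vase.
image = cv2.imread('vase.jpg')
rect = (10, 10, image.shape[1] - 20, image.shape[0] - 20)

mask = np.zeros(image.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground form the segmentation mask.
segmentation = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cv2.imwrite('vase_mask.png', segmentation)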
This synthetic vase dataset is generated with random vase-like shapes, poses (elevation), lighting (modeled with spherical Gaussians) and material shininess. The diffuse textures are generated using the texture maps provided by CC0 Textures (now called ambientCG) under the CC0 license.
Download the dataset using the provided script:
cd data && sh download_syn_vases.sh
We also provide the scripts for downloading CC0 Textures and generating this dataset in data/syn_vases/scripts/. Note that these scripts use API V1 of CC0 Textures to download the texture maps, which is now outdated: many assets have been removed and API V2 has been released. Please check and adapt the code to the new API.
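As context for the lighting model mentioned above, a single spherical Gaussian lobe with axis ξ, sharpness λ and amplitude μ evaluates to μ·exp(λ(ω·ξ − 1)) for a unit direction ω. The sketch below only illustrates that formula and is not code from data/syn_vases/scripts/.

import numpy as np

def spherical_gaussian(omega, axis, sharpness, amplitude):
    """Evaluate a single spherical Gaussian lobe for unit direction(s) omega."""
    omega = omega / np.linalg.norm(omega, axis=-1, keepdims=True)
    axis = axis / np.linalg.norm(axis)
    return amplitude * np.exp(sharpness * (omega @ axis - 1.0))

# Example: a lobe pointing up, evaluated along the lobe axis and perpendicular to it.
print(spherical_gaussian(np.array([[0.0, 1.0, 0.0],
                                   [1.0, 0.0, 0.0]]),
                         axis=np.array([0.0, 1.0, 0.0]),
                         sharpness=8.0, amplitude=1.0))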
Download the pretrained models using the scripts provided in pretrained/, eg:
cd pretrained && sh download_pretrained_met_vase.sh
Check the configuration files in configs/ and run experiments, eg:
python run.py --config configs/train_met_vase.yml --gpu 0 --num_workers 4
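Testing uses the same entry point with a test configuration. For example, assuming the test config referenced in the next step (configs/test_syn_vase.yml) and the same command-line flags as above:

python run.py --config configs/test_syn_vase.yml --gpu 0 --num_workers 4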
After generating the results on the test set (see configs/test_syn_vase.yml), check and run:
python eval/eval_syn_vase.py
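As a rough illustration of this kind of image-space evaluation (not necessarily the metrics computed by eval/eval_syn_vase.py), per-image SSIM between a predicted rendering and its ground-truth counterpart can be computed with scikit-image, which is already in the dependency list; the file paths below are placeholders.

from skimage import io
from skimage.metrics import structural_similarity

# Placeholder paths: one predicted rendering and its ground-truth counterpart.
pred = io.imread('results/pred_0001.png')
gt = io.imread('results/gt_0001.png')

# Newer scikit-image versions use channel_axis=-1 instead of multichannel=True.
score = structural_similarity(pred, gt, multichannel=True)
print('SSIM:', score)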
To render animations of rotating vases and a rotating light, check and run this script:
python render_animation.py
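For context, a rotating-light animation simply sweeps the dominant light direction around the vertical axis over the frames. The sketch below shows one way to generate such a sequence of unit light directions; the actual parameterization used in render_animation.py may differ.

import numpy as np

def rotating_light_directions(num_frames, elevation_deg=30.0):
    """Unit light directions rotating a full circle around the vertical (y) axis."""
    elev = np.radians(elevation_deg)
    azimuths = np.linspace(0.0, 2.0 * np.pi, num_frames, endpoint=False)
    return np.stack([np.cos(elev) * np.sin(azimuths),
                     np.full(num_frames, np.sin(elev)),
                     np.cos(elev) * np.cos(azimuths)], axis=-1)

print(rotating_light_directions(8))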
@InProceedings{wu2021derender,
  author    = {Shangzhe Wu and Ameesh Makadia and Jiajun Wu and Noah Snavely and Richard Tucker and Angjoo Kanazawa},
  title     = {De-rendering the World's Revolutionary Artefacts},
  booktitle = {CVPR},
  year      = {2021}
}