MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors

Teaser image

Installation

Tested on Pop!_OS 24.04 + PyTorch 2.1.2 using an RTX 6000

conda create -n materialfusion python=3.8
conda activate materialfusion
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install git+https://github.com/NVlabs/nvdiffrast/
pip install imageio PyOpenGL glfw xatlas gdown wget kornia diffusers["torch"] transformers
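After installation, a quick sanity check (ours, not part of the repo) can confirm that PyTorch sees the GPU and that the compiled extensions import cleanly:

# Post-install sanity check; expects torch 2.1.2 with CUDA 11.8 from the steps above.
import torch
print(torch.__version__, torch.version.cuda)  # expect 2.1.2 and 11.8
print(torch.cuda.is_available())              # expect True

import nvdiffrast.torch as dr   # verifies the nvdiffrast install
import tinycudann as tcnn       # verifies the tiny-cuda-nn bindings
print("extensions imported OK")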

Datasets

BlenderVault

BlenderVault is a curated dataset of 11,709 synthetic Blender objects designed by artists. The objects are diverse in nature and contain high-quality property assets, which we extract to generate training data for our material diffusion prior. The object files are available for download here. Due to its size, the dataset is split into 12 partitions.

[Video: blendervault_video.mp4]

Downloading

We evaluate MaterialFusion and our material diffusion prior on a mixture of synthetic and real image data. To download the NeRF, NeRFactor, Stanford-ORB, MaterialFusion, and StableMaterial datasets, run the data/download_datasets.py script. The configs corresponding to each object and dataset are included in configs/materialfusion and are required by MaterialFusion.
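For example:

python data/download_datasets.py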

If you would like to download the datasets individually, follow the links below and put them in the data directory (the expected layout is shown after this list):

  • NeRF-Synthetic - Extract this into data/nerf_synthetic.

  • NeRFactor - The four datasets should be placed in the folders data/nerfactor/hotdog_2163, data/nerfactor/drums_3072, data/nerfactor/ficus_2188, and data/nerfactor/lego_3072.

  • MaterialFusion Dataset - This is a dataset containing multi-view images of 9 unseen objects from BlenderVault. Extract them into data/materialfusion_dataset.

  • StableMaterial Dataset - This dataset contains 4 images per object for 8 unseen objects from BlenderVault for evaluating the material diffusion prior. Extract them into data/stablematerial_dataset.

  • Stanford-ORB - Download and extract blender_LDR.tar.gz into data/blender_LDR. We will upload the config files and dataloader for Stanford-ORB objects soon!
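Once everything is downloaded, the data directory should look roughly like this (layout inferred from the paths above):

data/
├── nerf_synthetic/
├── nerfactor/
│   ├── hotdog_2163/
│   ├── drums_3072/
│   ├── ficus_2188/
│   └── lego_3072/
├── materialfusion_dataset/
├── stablematerial_dataset/
└── blender_LDR/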

Preparing your own data

The training data assumes camera poses are available and the background is masked out. For synthetic data, you may use the BlenderVault rendering script (which we will upload soon) or NeRFactor's code. For real data, you can use SAM to mask out the background and COLMAP to estimate the poses.
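As a starting point for real data, here is a minimal masking sketch using SAM; it is our illustration rather than the repo's pipeline, and the checkpoint path and the single center click are assumptions you will need to adapt:

# Minimal SAM background-masking sketch (illustrative, not the repo's code).
# Assumes: pip install segment-anything, plus a downloaded ViT-H SAM checkpoint.
import numpy as np
import imageio.v3 as iio
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # assumed path
predictor = SamPredictor(sam)

image = iio.imread("my_photo.png")[..., :3]  # RGB, uint8
predictor.set_image(image)

# One positive click at the image center; replace with real foreground points.
h, w = image.shape[:2]
masks, scores, _ = predictor.predict(
    point_coords=np.array([[w // 2, h // 2]]),
    point_labels=np.array([1]),
)
mask = masks[np.argmax(scores)]

# Write the mask into the alpha channel so the background is transparent.
rgba = np.dstack([image, (mask * 255).astype(np.uint8)])
iio.imwrite("my_photo_masked.png", rgba)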

Training

To begin training MaterialFusion:

python run_materialfusion.py --config configs/materialfusion/vault-box.json

Note that MaterialFusion uses a lot of GPU memory during inverse rendering and may crash with out-of-memory errors. To alleviate this, you can reduce the batch parameter in the config file or pass a smaller batch to StableMaterial via the --sds_batch_limiter flag:

python run_materialfusion.py --config configs/materialfusion/vault-box.json --sds_batch_limiter 2
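The batch parameter lives in the config JSON. An illustrative excerpt (the value 4 and the surrounding fields are placeholders; your config will differ):

{
    ...
    "batch": 4,
    ...
}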

Importing into Blender

Once you are done training MaterialFusion, the output folder will contain the recovered mesh geometry, material maps, and environment lighting. These can be loaded into Blender using blender.py.
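If you prefer to wire things up by hand instead of using blender.py, the idea looks roughly like this in Blender's Python console (a sketch assuming Blender >= 3.2; the output path and texture file names are our assumptions, not the repo's actual naming):

# Rough manual alternative to blender.py; paths and file names are assumptions.
import bpy, os

out_dir = "out/vault-box"  # hypothetical output folder from training
bpy.ops.wm.obj_import(filepath=os.path.join(out_dir, "mesh.obj"))
obj = bpy.context.selected_objects[0]

mat = bpy.data.materials.new("materialfusion")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]

# Hook each recovered texture map into the matching Principled BSDF input.
for fname, socket in [("albedo.png", "Base Color"),
                      ("roughness.png", "Roughness"),
                      ("metallic.png", "Metallic")]:
    tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load(os.path.join(out_dir, fname))
    mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs[socket])

obj.data.materials.append(mat)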

StableMaterial - Material Diffusion Prior

We also provide inference (and soon training) code for our material diffusion prior.

The model checkpoints can be downloaded with the data/download_stablematerial_ckpts.py script. For convenience, we provide checkpoints for both the single-view and multi-view models.
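For example:

python data/download_stablematerial_ckpts.py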

  • StableMaterial - Extract this into data/stablematerial-model.

  • StableMaterial-MV - This checkpoint attends to information across views to predict materials that are consistent across multiple views. This helps with difficult views, as seen in the cup example below. Extract this into data/stablematerial-mv-model.

Preparing your own data

StableMaterial doesn't need pose information and only assumes masked images. StableMaterial-MV requires pose information in addition to masked images.
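Several of the datasets above (NeRF-Synthetic, NeRFactor) store poses in the NeRF transforms format, so if you capture your own multi-view data, a pose file along these lines is a reasonable target (a sketch of the common NeRF format, not a verified spec of this repo's loader):

{
    "camera_angle_x": 0.6911,
    "frames": [
        {
            "file_path": "./train/r_0",
            "transform_matrix": [[...], [...], [...], [...]]
        }
    ]
}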

Evaluation

For single view prediction (StableMaterial):

python run_stablematerial.py --data_path data/stablematerial_dataset/<object_id>/train/<object_id>

Results will be saved in out/stablematerial_pred/single_view/<object_id>.

For multi-view prediction (StableMaterial-MV):

python run_stablematerial.py --data_path data/stablematerial_dataset/<object_id> --num_views 4

Results will be saved in out/stablematerial_pred/multi_view/<object_id>.
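Predicted maps can then be inspected with imageio (installed during setup); the file name below is a guess, so check the output folder for the actual names:

# Inspect a predicted map (file name is a guess; list the folder to see real names).
import imageio.v3 as iio
albedo = iio.imread("out/stablematerial_pred/single_view/<object_id>/albedo.png")
print(albedo.shape, albedo.dtype)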

[Image: StableMaterial prediction examples, including the cup example mentioned above]

Citation

If you use any part of our work, please cite the following:

@article{litman2024materialfusion,
  author    = {Yehonathan Litman and Or Patashnik and Kangle Deng and Aviral Agrawal and Rushikesh Zawar and Fernando De la Torre and Shubham Tulsiani},
  title     = {MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors},
  journal   = {arXiv preprint arXiv:2409.15273},
  year      = {2024}
}

Acknowledgments

MaterialFusion was built on top of nvdiffrecmc; check it out!
