
neural-inverse-rendering


Neural Scene Decomposition for Accurate Light and Material Reconstruction via Physically-Based Global Illumination Estimation
Yue Chen1, Peter Kocsis1, Matthias Niessner1
1Technical University of Munich

Abstract: Recent advances in neural rendering have enabled highly accurate reconstruction of 3D scenes from multi-view images. To support scene editing under different lighting conditions, an increasing number of methods integrate differentiable surface rendering into their pipelines. However, many of these methods rely on simplified surface rendering algorithms that consider only direct lighting or fixed indirect illumination. We introduce a more realistic rendering pipeline that embraces multi-bounce Monte Carlo path tracing. Benefiting from the multi-bounce light path estimation, our method can decompose high-quality material properties without requiring additional prior knowledge. Additionally, our model can accurately estimate and reconstruct secondary shading effects, such as indirect illumination and self-reflection. We demonstrate the advantages of our model over baseline methods qualitatively and quantitatively across synthetic and real-world scenes.

Setup

We suggest installing the environment using conda:

# Create a new environment
conda create -n ma python=3.8
conda activate ma

# Install PyTorch
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
pip install pytorch-lightning==1.7.7

# Install Mitsuba
pip install mitsuba==3.0.2 drjit==0.2.2

# Install other dependencies
pip install wandb==0.13.5 lpips==0.1.4 imageio-ffmpeg==0.4.8 torchmetrics==0.11.2

# Install TinyCudaNN
conda install conda-forge::cudatoolkit-dev
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
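
As an optional sanity check, you can confirm that PyTorch sees your GPU and that Mitsuba's CUDA variant loads (this assumes a CUDA-capable GPU; the cuda_ad_rgb variant is included in the official Mitsuba pip wheels):

# Optional: verify the installation
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import mitsuba as mi; mi.set_variant('cuda_ad_rgb'); print(mi.variant())"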

Data

Light Probes

Download the light probes from here and put them in ./light_probe.

Datasets

Download the NeRFactor-Synthetic dataset from here and put them in ./dataset/nerfactor/.
Download the TensoIR-Synthetic dataset from here and put them in ./dataset/tensoir/.
Download the (modified) DTU/MonoSDF dataset from here and put them in ./dataset/monosdf/.

Meshes

We provide both ground-truth (GT) and estimated meshes; download them from here and put them in ./dataset/mesh/.

The estimated meshes for the synthetic datasets are generated by TensoIR, and those for the real-world dataset are generated by MonoSDF.
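
After downloading everything, the directory layout should look roughly like this (paths taken from the steps above):

neural-inverse-rendering/
├── light_probe/
├── dataset/
│   ├── nerfactor/
│   ├── tensoir/
│   ├── monosdf/
│   └── mesh/
└── config/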

Running the code

python run.py

We provide the configs used to produce the thesis results in ./config/; you can select which config to run in run.py:

configs = ['./config/dtu/lego_3072.ini']
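
To run a different scene, point the list at another .ini file from ./config/. Since configs is a list, run.py presumably processes the entries in order, so several runs can be queued at once; the second filename below is purely illustrative, check ./config/ for the configs that actually ship with the repository:

# in run.py: list the config files to run
configs = [
    './config/dtu/lego_3072.ini',
    './config/tensoir/hotdog_3072.ini',  # hypothetical filename; see ./config/ for actual options
]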

Acknowledgements

The code for the multiple importance sampling renderer and the BRDFs is based on Mitsuba and the excellent PBR Book.
The datasets are from NeRFactor, TensoIR, and DTU. The estimated meshes are from TensoIR and MonoSDF.
We thank all the authors for their great work.
