This repo provides the official implementation and related data for our paper

**Learning to Learn and Sample BRDFs**
by Chen Liu, Michael Fischer, and Tobias Ritschel
Eurographics 2023

For more details, please check out the (Paper, Project Page)!
After cloning this repo,

```bash
git clone https://github.com/ryushinn/meta-sampling.git && cd meta-sampling/
```

everything can be configured by running the following scripts.
We recommend using Anaconda to set up the environment:

```bash
conda env create -n meta-sampling -f environment.yaml
conda activate meta-sampling
```

By default, this command installs CPU-only PyTorch. If you are using a CUDA machine, please select the matching CUDA build manually in `environment.yaml`.
Alternatively, you can install the dependencies by running the commands as instructed here, but please note that we did not test this case.
The necessary data, the minimal requirement for running this repo, can be downloaded with this script:

```bash
bash scripts/download_data.sh
```

In case the script fails due to network issues, you can download everything manually:

- download the MERL BRDF dataset into `data/brdfs/`;
- download the pretrained models into `data/meta-models/`;
- download the trained samplers into `data/meta-samplers/`.
Briefly, `data/brdfs` contains 100 isotropic measured BRDFs from the MERL dataset, of which we randomly choose 80 as our training dataset.
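For reference, MERL `.binary` files store three little-endian int32 dimensions (90 × 90 × 180 for the real data), followed by the red, green, and blue channels as contiguous float64 arrays (about 33 MB per file). Below is a minimal reader sketch; it is our own illustration, not part of this repo, and is exercised here on a tiny synthetic file instead of a real MERL file:

```python
import os
import struct
import tempfile
import numpy as np

def read_merl_brdf(path):
    """Read a MERL-format .binary BRDF file.

    Standard MERL layout: three little-endian int32 dimensions,
    then dims[0] * dims[1] * dims[2] float64 values per color
    channel, with the three channels (R, G, B) stored back to back.
    """
    with open(path, "rb") as f:
        dims = struct.unpack("<3i", f.read(12))
        n = dims[0] * dims[1] * dims[2]
        data = np.fromfile(f, dtype=np.float64, count=3 * n)
    return dims, data.reshape(3, *dims)

# Round-trip a tiny synthetic file with the same layout
# (real files use dims (90, 90, 180)).
dims = (2, 3, 4)
vals = np.arange(3 * 2 * 3 * 4, dtype=np.float64)
path = os.path.join(tempfile.gettempdir(), "toy.binary")
with open(path, "wb") as f:
    f.write(struct.pack("<3i", *dims))
    vals.tofile(f)

read_dims, brdf = read_merl_brdf(path)
print(read_dims, brdf.shape)  # (2, 3, 4) (3, 2, 3, 4)
```

The same reader works on the downloaded `data/brdfs/*.binary` files, where each channel is a 90 × 90 × 180 grid over the Rusinkiewicz half/difference angles.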
`data/meta-models` contains the meta-learned initializations and learning rates for the three nonlinear models: Neural BRDF, Cook-Torrance, and Phong. In addition, there are 5 precomputed principal components for the PCA model, obtained by running the NJR15 codebase over our training dataset. Note that there are 80 * 3 = 240 PCs in total, but only the first 5 are used in our PCA model.
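For intuition, here is a toy sketch of how such components can be obtained and used: stack the flattened training BRDFs, center them, take the SVD, and keep the leading principal components. This is our own illustration with made-up sizes, not the NJR15 pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training set: 80 flattened "BRDFs" that mostly
# live in a 5-dimensional subspace (real MERL vectors have millions of
# entries; we use 200 here).
basis = rng.normal(size=(5, 200))
coeffs = rng.normal(size=(80, 5))
data = coeffs @ basis + 0.01 * rng.normal(size=(80, 200))

mean = data.mean(axis=0)
# PCA via SVD of the centered data; rows of Vt are principal components.
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
pcs = Vt[:5]  # keep only the first 5 of the available PCs

# Reconstruct one "BRDF" from its 5 PCA coefficients.
x = data[0]
c = pcs @ (x - mean)
x_hat = mean + c @ pcs
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(err)  # small: the 5 PCs capture nearly all the variance
```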
`data/meta-samplers` contains the optimized samples for each model, with sample counts ranging from 1 to 512.
As illustrated by Algorithm 1 in the paper, our pipeline runs in two stages: 1. meta-model and 2. meta-sampler. Here we provide scripts to run this framework easily and reproduce the main experiments. `scripts/meta_model.sh` offers a quick configuration of the meta-model experiments, while `scripts/meta_sampler.sh` is the counterpart for the meta-sampler experiments. They are expected to be run in order:

```bash
bash scripts/meta_model.sh
bash scripts/meta_sampler.sh
```
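The meta-model stage jointly learns a model initialization and a learning rate so that only a few inner fitting steps are needed per BRDF. The toy sketch below illustrates that idea on a 1-D linear task; it is our own construction (including the names `inner_loss` and `meta_lr`), not the repo's implementation, and it uses finite-difference outer gradients instead of autodiff for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_loss(w0, lr, a, steps=5):
    # Inner loop: fit the task y = a * x by gradient descent on the
    # expected loss E[(w * x - y)^2] with x ~ U(-1, 1), starting from
    # the meta-learned init w0 with the meta-learned learning rate lr.
    # Since E[x^2] = 1/3, the exact gradient is (2 / 3) * (w - a).
    w = w0
    for _ in range(steps):
        w = w - lr * (2.0 / 3.0) * (w - a)
    return (w - a) ** 2 / 3.0  # expected loss after the inner loop

# Outer (meta) loop: optimize init and learning rate jointly over a
# distribution of tasks (random slopes a).
w0, lr = 0.0, 0.01
meta_lr, eps = 0.1, 1e-4
for _ in range(300):
    a = rng.uniform(0.5, 2.0)  # sample a task
    gw = (inner_loss(w0 + eps, lr, a) - inner_loss(w0 - eps, lr, a)) / (2 * eps)
    gl = (inner_loss(w0, lr + eps, a) - inner_loss(w0, lr - eps, a)) / (2 * eps)
    w0, lr = w0 - meta_lr * gw, lr - meta_lr * gl

# The meta-learned (w0, lr) fits a new task far better than the naive
# (0.0, 0.01) under the same 5-step budget.
print(inner_loss(w0, lr, 1.0), inner_loss(0.0, 0.01, 1.0))
```

In the actual pipeline the "model" is an NBRDF, Cook-Torrance, or Phong model rather than a scalar, and the second stage additionally meta-learns *where* to sample the BRDF.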
By default, this runs the Neural BRDF model, but the model being fit can be changed in the scripts to one of Phong, Neural BRDF, and Cook-Torrance.
There is also a script, `scripts/classic.sh`, for simply fitting models to BRDFs without any "meta training", which is called the `classic` method in the paper:

```bash
bash scripts/classic.sh
```

The `classic` mode only has access to limited resources (1 to 512 samples and 20 learning iterations) to fit models. In contrast, the `overfit` mode represents fitting with sufficient samples and iterations.
Thanks to the linearity of the PCA model, we can solve its parameters analytically from the measurements via Ridge Regression instead of iterative SGD. Hence we can directly meta-train the samples:

```bash
bash scripts/meta_sampler_PCARR.sh
```
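The key point is that for a linear model, observing the BRDF at a few sample locations yields a small linear system whose coefficients have a closed-form ridge solution, c = (AᵀA + λI)⁻¹Aᵀy. Here is a minimal sketch of that idea with toy dimensions of our choosing (not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a PCA model with 5 components over d-dimensional
# flattened BRDFs (d is tiny here; millions in the real data).
d = 500
mean = rng.normal(size=d)
pcs = np.linalg.qr(rng.normal(size=(d, 5)))[0].T  # 5 orthonormal PCs
c_true = rng.normal(size=5)
brdf = mean + c_true @ pcs  # a noiseless "measured" BRDF

# "Sampling" the BRDF = observing a few entries of the flattened vector.
idx = rng.choice(d, size=32, replace=False)  # 32 sample locations
A = pcs[:, idx].T           # (32, 5) design matrix at the samples
y = brdf[idx] - mean[idx]   # centered observations

# Closed-form ridge solve: c = (A^T A + lam * I)^{-1} A^T y
lam = 1e-6
c = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y)
print(np.abs(c - c_true).max())  # recovers the true coefficients
```

Because the solve is differentiable in the sample locations, the sampler can be trained directly through it, with no inner SGD loop.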
To evaluate, we render BRDFs using Mitsuba 0.6 with several plugins:

- dj_brdf renders BRDFs in the `.binary` format;
- the NBRDF codebase contains a plugin to render pretrained NBRDFs;
- this plugin renders BRDFs of the Cook-Torrance equation presented in our paper;
- the built-in Modified Phong BRDF (`phong`) plugin is used to render Phong model BRDFs.

We highly appreciate these existing works. Please refer to the Mitsuba documentation for how to use custom plugins.
Please consider citing as follows if you find our paper and repo useful:
```bibtex
@article{liuLearningLearnSample2023,
  title   = {Learning to Learn and Sample BRDFs},
  author  = {Liu, Chen and Fischer, Michael and Ritschel, Tobias},
  journal = {Computer Graphics Forum (Proceedings of Eurographics)},
  year    = {2023},
  volume  = {42},
  number  = {2},
  pages   = {201--211},
  doi     = {10.1111/cgf.14754},
}
```