
Learning to Learn and Sample BRDFs

This repo provides the official code implementation and related data for our paper

Learning to Learn and Sample BRDFs
by Chen Liu, Michael Fischer and Tobias Ritschel
at Eurographics 2023

For more details, please check out the Paper and the Project Page!


Setup

After cloning this repo,

git clone https://github.com/ryushinn/meta-sampling.git && cd meta-sampling/

it is easy to configure everything by running the following scripts.

Environment

We recommend using Anaconda to set up the environment:

conda env create -n meta-sampling -f environment.yaml
conda activate meta-sampling

By default, this command installs CPU-only PyTorch. If you are using a CUDA machine, please manually select the matching CUDA build in environment.yaml.

Alternatively, you can install the packages by running the commands as instructed here, but please note that we did not test this case.
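For example, on a CUDA machine one option (untested here, and assuming a recent CUDA toolkit such as 11.8) is to install the CUDA build of PyTorch on top of the created environment:

```bash
conda install pytorch pytorch-cuda=11.8 -c pytorch -c nvidia
```

The exact package names and versions should follow the official PyTorch installation instructions for your CUDA version.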

Data

The minimal data required to run this repo can be downloaded using this script:

bash scripts/download_data.sh

In case the script fails due to network issues, you can download the data manually:

Briefly, data/brdfs contains 100 isotropic measured BRDFs from the MERL dataset; we randomly choose 80 of them as our training set.

data/meta-models contains the meta-learned initializations and learning rates for the three nonlinear models Neural BRDF, Cook-Torrance, and Phong. In addition, it holds 5 precomputed principal components for the PCA model, obtained by running the NJR15 codebase over our training set. Note that there are 80 * 3 = 240 principal components in total, but only the first 5 are used in our PCA model.

data/meta-samplers contains the optimized sample locations for each model, with sample counts ranging from 1 to 512.
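For reference, MERL .binary files store three int32 dimensions followed by the raw double-precision reflectance table. A minimal NumPy loader could look like the sketch below; this is only an illustration of the standard format (the repo may ship its own loading utilities), and the axis labels follow the usual MERL convention:

```python
import numpy as np

def load_merl(path):
    """Load a MERL .binary BRDF into a (3, theta_h, theta_d, phi_d) array."""
    with open(path, "rb") as f:
        dims = np.fromfile(f, dtype=np.int32, count=3)   # typically (90, 90, 180)
        n = int(np.prod(dims))
        vals = np.fromfile(f, dtype=np.float64, count=3 * n)
    brdf = vals.reshape(3, *dims)  # one block per RGB channel
    # Conventional MERL scale factors for the red, green, and blue channels.
    scale = np.array([1.0 / 1500, 1.15 / 1500, 1.66 / 1500]).reshape(3, 1, 1, 1)
    return brdf * scale
```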

Run

As illustrated by Algorithm 1 in the paper, our pipeline generally runs in two stages: 1. meta-model and 2. meta-sampler.

Here we provide scripts to run this framework easily and reproduce the main experiments.

Meta-train models & samplers

scripts/meta_model.sh offers a quick configuration of the meta-model experiments, while scripts/meta_sampler.sh is its counterpart for the meta-sampler experiments.

They are expected to be run in order:

bash scripts/meta_model.sh
bash scripts/meta_sampler.sh

By default, this runs with the Neural BRDF model, but the model being fit is configurable in the scripts and can be set to one of Phong, Neural BRDF, and Cook-Torrance.
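To give a rough idea of what the meta-model stage does, below is a minimal PyTorch sketch in the spirit of Algorithm 1: it meta-learns an initialization and per-parameter learning rates (Meta-SGD style) for a toy two-layer network standing in for a BRDF model. The model, data, loop lengths, and losses are placeholders for illustration only and do not reproduce the repo's actual implementation:

```python
import torch

# Toy two-layer MLP standing in for a BRDF model (the paper's actual models
# are Neural BRDF, Cook-Torrance, and Phong).
def forward(params, x):
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

# Meta-parameters: a shared initialization plus one learning rate per weight
# (stored in log space so the learned rates stay positive).
meta_init = [torch.nn.Parameter(0.1 * torch.randn(6, 64)),
             torch.nn.Parameter(torch.zeros(64)),
             torch.nn.Parameter(0.1 * torch.randn(64, 3)),
             torch.nn.Parameter(torch.zeros(3))]
log_lrs = [torch.nn.Parameter(torch.full_like(p, -3.0)) for p in meta_init]
outer_opt = torch.optim.Adam(meta_init + log_lrs, lr=1e-4)

def sample_task():
    """Placeholder for drawing one training BRDF: (direction encodings, RGB)."""
    x = torch.rand(256, 6)
    y = torch.rand(256, 3)
    return x, y

for step in range(1000):                 # outer (meta) loop over training BRDFs
    x, y = sample_task()
    params = list(meta_init)             # inner loop starts from the meta-init
    for _ in range(20):                  # short inner fitting budget
        loss = torch.mean((forward(params, x) - y) ** 2)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p - torch.exp(lr) * g
                  for p, lr, g in zip(params, log_lrs, grads)]
    # In practice the outer loss would use held-out query samples; the same
    # data is reused here purely to keep the sketch short.
    outer_loss = torch.mean((forward(params, x) - y) ** 2)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```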

Classic fitting

There is also a script, scripts/classic.sh, for simply fitting models to BRDFs without any "meta-training", which is called the classic method in the paper.

bash scripts/classic.sh

The classic mode only has access to limited resources (1–512 samples and 20 learning iterations) to fit the models. By contrast, the overfit mode represents the fitting process with sufficient samples and iterations.
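Conceptually, the classic mode boils down to fitting a randomly initialized model to a small set of samples for a few iterations. The snippet below is a hypothetical, simplified illustration (with placeholder data and a stand-in model), not the repo's actual fitting code:

```python
import torch

# Stand-in model; the repo's actual choices are Phong, Cook-Torrance, and NBRDF.
model = torch.nn.Sequential(torch.nn.Linear(6, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 6)   # e.g. 64 sampled direction encodings (placeholder)
y = torch.rand(64, 3)   # measured RGB reflectance at those samples (placeholder)

for it in range(20):    # the limited iteration budget of the classic mode
    loss = torch.mean((model(x) - y) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```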

Meta-train samplers for PCA model

Thanks to the linearity of the PCA model, we employ ridge regression to solve for the model parameters analytically from the measurements, instead of using iterative SGD. Hence we can directly meta-train the samples:

bash scripts/meta_sampler_PCARR.sh
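For intuition, with a linear model of the form BRDF ≈ mean + Q c, the coefficients c can be recovered from k sampled entries in closed form. The NumPy sketch below illustrates this ridge-regression step under simplifying assumptions; it omits the log-relative mapping and per-channel handling used by NJR15 and this repo:

```python
import numpy as np

def fit_pca_ridge(Q, mean, sample_idx, measurements, lam=1e-3):
    """Ridge-regression fit of PCA coefficients from sparse BRDF samples.

    Q:            (N, 5) principal components over all N table entries
    mean:         (N,)   mean BRDF of the training set
    sample_idx:   (k,)   indices of the sampled entries
    measurements: (k,)   measured values at those entries
    lam:          ridge regularization strength (placeholder value)
    """
    A = Q[sample_idx]                     # rows of Q at the sample locations
    b = measurements - mean[sample_idx]   # center the observations
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return mean + Q @ coeffs              # reconstruct the full BRDF table
```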

Rendering side

To evaluate, we render BRDFs using Mitsuba 0.6 with some plugins.

We highly appreciate these existing works. Please refer to the documentation for how to use custom plugins in Mitsuba.

Citation

Please consider citing as follows if you find our paper and repo useful:

@article{liuLearningLearnSample2023,
  title={Learning to Learn and Sample BRDFs},
  author={Liu, Chen and Fischer, Michael and Ritschel, Tobias},
  journal={Computer Graphics Forum (Proceedings of Eurographics)},
  year={2023},
  volume={42},
  number={2},
  pages={201--211},
  doi={10.1111/cgf.14754},
}