ControlNet Single-Image Relighting

This is the secondary repository of our work "A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis". The main repository is available at https://github.com/graphdeco-inria/generative-radiance-field-relighting; this repository contains the code for the 2D single-image relighting network described in the paper. To use our method for relightable novel-view synthesis with Gaussian Splatting, you will first need to set up this codebase and use it to transform your single-illumination capture into a (generated) multi-illumination capture.

Installation

First clone this repository, which includes a modified copy of "A Dataset of Multi-Illumination Images in the Wild" (https://projects.csail.mit.edu/illumination/) in multi_illumination/ and of "Diffusers" (https://github.com/huggingface/diffusers) in diffusers/:

git clone https://gitlab.inria.fr/ypoirier/controlnet-diffusers-relighting.git

Important: You will need to make these modules visible in your search path. This can be done with:

export PYTHONPATH=$PYTHONPATH:multi_illumination:diffusers/src

Make sure you do not have a copy of diffusers installed using pip.
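
To remove a pip-installed copy and confirm that the bundled version is the one being imported, the following standard pip and Python commands can help (run them from the repository root, since the PYTHONPATH entries above are relative; the exact printed path will depend on your setup):

pip uninstall diffusers
python -c "import diffusers; print(diffusers.__file__)"

The printed path should point inside this repository's diffusers/src directory rather than your site-packages.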

Creating the environment

Then create a virtual environment with Python 3.9.7 and install the requirements in requirements.txt:

conda create -n csir python==3.9.7
conda activate csir
pip install -r requirements.txt

Other versions likely work fine, but we have not tried them. We recommend using a separate environment from the one you use for Gaussian Splatting.

Downloading pretrained weights

Pretrained ControlNet weights and decoder weights can be downloaded with:

wget https://repo-sam.inria.fr/fungraph/generative-radiance-field-relighting/content/weights/{controlnet,decoder}_1536x1024.safetensors -P weights/

This will place them into weights/. Both sets of weights are required for inference. The weights for Stable Diffusion and the Marigold depth estimator (https://github.com/prs-eth/Marigold) are also needed, but they are downloaded automatically as the code runs and end up in ~/.cache/huggingface/.
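
If your home directory is short on space, the standard Hugging Face environment variable HF_HOME lets you redirect that cache before running the scripts; the path below is only a placeholder:

export HF_HOME=/path/to/larger/disk/huggingface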

(Optional) For the paper, we also trained networks at a smaller resolution for quantitative evaluation against other methods. To download the weights for these networks, simply replace 1536x1024 with 768x512 in the previous URL.
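
For example, the smaller weights can be fetched with the same command after substituting the resolution:

wget https://repo-sam.inria.fr/fungraph/generative-radiance-field-relighting/content/weights/{controlnet,decoder}_768x512.safetensors -P weights/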

Inference

Inference scripts can be run directly from these weights. You will need at least 20GB of GPU memory.

Relighting single images

To try our networks on individual images, use the sample_single_image.py script. Some example images are already provided in the exemplars directory.

python sample_single_image.py --image_paths exemplars/*.png

Images will be saved into samples/.

You can select which light directions to relight to using --dir_ids. The directions are numbered using the convention from "A Dataset of Multi-Illumination Images in the Wild", and their exact coordinates are listed in relighting/light_directions.py. On an A40 card, inference takes 3-4 seconds per relighting.
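
For example, assuming --dir_ids accepts a space-separated list of direction indices (check python sample_single_image.py --help for the exact format), a call could look like:

python sample_single_image.py --image_paths exemplars/kettle.png --dir_ids 0 12 24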

Relighting entire COLMAP captures

You can download our scenes at https://repo-sam.inria.fr/fungraph/generative-radiance-field-relighting/datasets/. If you wish to use your own, we expect the capture images to follow this structure:

colmap/**/<scene_name>/
    └── train/
        └── images/
            ├── 0000.png
            ├── 0001.png
            └── ...

Normally this will be a COLMAP output directory. For example, for our scene colmap/real/garage_wall, you can launch sampling with:

python sample_entire_capture.py --capture_paths colmap/real/garage_wall

Relit images will be saved into colmap/real/garage_wall/relit_images/. On an A40, relighting a capture takes approximately 1 minute per image, so for a capture with 100 images you can expect about an hour and a half of processing to generate all relit images.
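
Since the flag is named --capture_paths (plural), it presumably accepts several capture directories at once; if so, a shell glob would let you queue multiple scenes in one run:

python sample_entire_capture.py --capture_paths colmap/real/*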

(Optional) Rendering light sweeps

We provide a small script that renders a short video where the light direction is interpolated between different values.

python sample_and_render_light_sweep.py --image_paths exemplars/kettle.png

Producing the video requires having ffmpeg available on your system. The sampled images will be saved in sweep_samples/ and the videos in sweep_videos/.
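
If ffmpeg is not already installed on your system, one convenient option is to add it to the conda environment (any other ffmpeg installation also works):

conda install -c conda-forge ffmpeg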

Training

Coming soon.

(Optional) Computing the illumination direction from light probes

We computed the directions of illumination in the multilum dataset using the diffuse light probes (gray spheres). You can reproduce this with:

python compute_light_directions.py

Citing our work

Please cite us with:

@article{10.1111:cgf.15147,
  journal = {Computer Graphics Forum},
  title = {{A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis}},
  author = {Poirier-Ginter, Yohan and Gauthier, Alban and Philip, Julien and Lalonde, Jean-François and Drettakis, George},
  year = {2024},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.15147}
}
