
Online Neural Denoising with Cross-Regression for Interactive Rendering


Authors:
Hajin Choi, Seokpyo Hong, Inwoo Ha, Nahyup Kang, Bochang Moon


This is the official implementation of the paper "Online Neural Denoising with Cross-Regression for Interactive Rendering," published in ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2024).

A PyTorch version of the code is also available; check out the torch branch.
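
To use it, switch to that branch after cloning the repository in the steps below (standard git usage):

    git checkout torch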

⚙️ Prerequisites

🖥️ Tested Environments

  • OS: Ubuntu 22.04 or Windows 10

    For Windows users: Do not run the Docker command on Ubuntu under Windows Subsystem for Linux (WSL), as the default instance does not support GPU acceleration. We recommend installing Docker Desktop and using the native Windows Command Prompt or PowerShell. To enable the GPU on WSL2, refer to this link.

  • GPU:
    • RTX 2080 Ti
    • RTX 3090
    • RTX 4090
    • Others with CUDA support may work, but we have not tested them.
    • We recommend installing the latest NVIDIA driver on the host OS; a quick way to check that Docker can reach the GPU is shown right after this list.
  • Docker
  • (For Ubuntu) NVIDIA Docker
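
Before building the image, you can optionally confirm that Docker can reach the GPU by running nvidia-smi in a throwaway CUDA container (a sanity-check sketch; the image tag below is just an example, any recent nvidia/cuda tag works):

    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

If this prints the same GPU table as nvidia-smi on the host, the container runtime is set up correctly.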

🚀 Steps to Run the Code

  1. Clone this repository

    git clone https://github.com/CGLab-GIST/cross-denoiser.git
    
  2. Download the dataset (see Dataset) and extract it to a directory.

  3. Modify the docker-compose.yml file to mount the dataset directory into the container.
    Update the DATASET_DIR variable in the volumes section. For example, if the downloaded dataset is located at /home/user/dataset:

    volumes:
      - /home/user/dataset:/dataset
  4. Navigate to the repository and run the following command to build the docker image and create a container:

    docker compose up --build -d
    

    (Optional) On Ubuntu, pass the current user/group to the container so that files in the mounted directory can be accessed and modified with matching permissions:

    USER_UID=$(id -u) USER_GID=$(id -g) USER_NAME=$(whoami) docker compose up -d --build
    
  5. Attach to the container

    docker exec -it cross-denoiser bash
    
  6. In the container, navigate to ~/cross-denoiser and run ./build_customop.sh

    • It will build the custom CUDA C++ operations and generate .cu.o and .so files inside the ops directory.
  7. Run the following command to denoise:

    python scripts/main.py --scene Bistro --frames 101 --out_dir ./results
    

    The script accepts the following arguments:

    • --scene: The scene name (and the directory name) to denoise.
    • --frames: The number of frames to denoise.
    • --out_dir: The directory to save the denoised frames.
    • Other arguments can be found in scripts/main.py. (A sketch for batch-denoising all scenes appears after this list.)

    ⚠️ WARNING: Ensure you have sufficient disk space, as it generates .npy files in the temporary directory ./tmp_scenes for faster loading. These files consume more disk space than the original .exr files because they are not compressed. If you don't have enough disk space, you can disable this feature by adding --no_npy to the command.

  8. Check the results in the ./results (default) directory.
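
To denoise every scene in the full dataset, the command from step 7 can be wrapped in a small shell loop. This is only a sketch; it assumes the five scene directories listed in the Dataset section below and the default 101 frames per scene:

    # Denoise all five evaluation scenes with the default settings.
    # Append --no_npy to the command if disk space for the .npy cache is tight.
    for scene in Bistro BistroDynamic EmeraldSquare Staircase Musicroom; do
        python scripts/main.py --scene "$scene" --frames 101 --out_dir ./results
    done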

📁 Dataset

We provide the evaluation dataset used in the paper. The full dataset, dataset.zip, includes 101 frames for each of five scenes (Bistro, BistroDynamic, EmeraldSquare, Staircase, and Musicroom). For a quick test, you can download the smaller dataset_small.zip, which includes the first 3 frames of the Bistro scene.
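
After downloading, extract the archive into the directory you mount in step 3 above so the paths stay consistent (a sketch; replace /home/user/dataset with your own path):

    unzip dataset.zip -d /home/user/dataset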

Dataset Structure

dataset
├── Bistro
│   ├── color_0000.exr
│   ├── color2_0000.exr
│   ├── emissive_0000.exr
│   ├── envLight_0000.exr
│   ├── albedo_0000.exr
│   ├── normal_0000.exr
│   ├── linearZ_0000.exr
│   ├── mvec_0000.exr
│   ├── pnFwidth_0000.exr
│   ├── opacity_0000.exr
│   ├── ref_0000.exr
│   ├── ref_emissive_0000.exr
│   ├── ref_envLight_0000.exr
│   ├── ...
├── BistroDynamic
├── ...
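
To quickly verify that a scene extracted completely, you can count the frames of one buffer (a hypothetical check; the paths assume the /dataset mount from step 3 and the 101-frame full dataset):

    ls /dataset/Bistro/color_*.exr | wc -l   # expect 101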

The images were rendered using ReSTIR PT (based on Falcor 5.0), with slight modifications such as splitting the emissive/envLight components from the color buffer.

Acknowledgements

We thank the anonymous reviewers for their valuable feedback during the review process. We also thank the authors of ReSTIR PT for providing their renderer. The scripts/exr.py file is taken from the KPCN codebase. The following 3D models were used to generate the images in the dataset:

Hajin Choi created the Musicroom scene using the following assets:

License

All source code is released under a BSD license.
