📰 News | 🔧 Install | 📺 Quick Demo | 📷 Training and Editing | 🚧 Contribute | 📜 License | ❓ FAQ
Note: This repository is also a BasicSR-style codebase for 3D Gaussian Splatting! Feel free to use it for your own projects! If this repo helps you, please consider giving us a 🌟!
TL;DR: LE3D is a project for fast training and real-time rendering of HDR view synthesis from noisy RAW images using 3DGS.
This repository contains the official implementation of the following paper:
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis
Xin Jin*, Pengyi Jiao*, Zheng-Peng Duan, Xingchao Yang, Chongyi Li, Chunle Guo#, Bo Ren#
(* denotes equal contribution. # denotes the corresponding author.)
arXiv preprint, [Homepage], [Paper Link]
Please note: These videos are encoded using HEVC with 10-bit HDR colors and are best viewed on a compatible display with HDR support, e.g., recent Apple devices.
This is how we make the demo videos.
(Demo video: `edit_demo.mp4`)
Just set the `KeyFrames`! All the interpolation will be done automatically! (BTW, you can set the acceleration of the camera motion in the `Interpolations`!)
(Demo videos: `windowlegovary.mp4`, `gardenlights.mp4`)
Want to make your own 3D video storyboard? Please refer to LE3D Editor for more details.
- First of all, 🔧 Dependencies and Installation.
- For a quick preview, please refer to our web viewer.
- For training and editing with your own data, please refer to 📷 Training and Editing.
- For further development, please refer to 🚧 Further Development.
Future work can be found in todo.md.
- Jan 15, 2025: Updated pretrained scenes on the RawNeRF dataset; they are available on Google Drive.
- Jan 8, 2025: Code released.
- Jan 3, 2025: Released a web demo for LE3D! You can view your own reconstructed HDR scene in real time! Code is available at hdr-splat.
- Oct 10, 2024: LE3D is accepted by NeurIPS 2024!
Note: We have only tested on Ubuntu 20.04, CUDA 11.8, Python 3.10, and PyTorch 1.12.1.
- NVIDIA GPU with at least 12GB VRAM. (Since the resolution of RAW images is relatively high, we recommend at least 16GB VRAM.)
- Python 3.10 installed.
- CUDA 11.8 installed.
We have provided a script for easy installation.
```bash
> ./install.sh -h
Usage: ./install.sh [options]
Options:
  -i|--interactive          Interactive installation
  -cuda|--install-cuda      Install CUDA
  -colmap|--install-colmap  Install COLMAP
    cuda_enabled            Enable CUDA support (must follow --install-colmap)
  -env|--create-env         Create and activate conda environment
By default, only Python packages will be installed
```
For an interactive installation, run the script and select the options you want:
```bash
> ./install.sh -i
Do you want to install CUDA? Root permission required. (y/N): Y
Do you want to install COLMAP? Root permission required. (y/N): Y
Do you want to enable CUDA support? (y/N): Y
Do you want to create and activate a conda environment called 'basicgs'? (y/N): Y
----------------------------------------
INSTALL_CUDA: true
INSTALL_COLMAP: true
COLMAP_CUDA_ENABLED: true
CREATE_ENV: true
INTERACTIVE: true
INSTALL_PYTHON_PACKAGES: true
----------------------------------------
...
```
Alternatively, you can run `./install.sh -cuda -colmap cuda_enabled -env` to install CUDA and COLMAP, create a conda environment, and then install all the Python packages.
This script installs all the dependencies and Python packages, as well as all the submodules.
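After installation, you can quickly verify the Python environment. Below is a minimal sketch; this check is ours and not part of the repo's scripts:

```python
# Environment sanity check (illustrative; not part of the repo's scripts).
import torch

print(torch.__version__)            # the tested setup uses PyTorch 1.12.1
print(torch.cuda.is_available())    # should be True once CUDA 11.8 is set up
if torch.cuda.is_available():
    vram_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
    print(f"VRAM: {vram_gib:.1f} GiB")  # >= 12 GiB required, 16 GiB recommended
```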
To reduce the burden of data collection, we recommend capturing the scene with forward-facing cameras only. As in LLFF, a guide for camera placement is provided on YouTube.
For the camera settings, we recommend fixing the ISO and aperture to reasonable values. The exposure value (EV) can be set to a lower value, e.g., -2. The aperture should be as small as possible (i.e., a large f-number) to avoid defocus blur.
If you want to capture multi-exposure images, keep the ISO and aperture fixed and vary only the exposure value (EV).
Note: We recommend fixing the ISO and aperture because:
- a fixed ISO keeps the noise level consistent across views, and
- a fixed aperture keeps defocus blur from varying between views.
For capturing tools, we recommend tools that can capture DNG files. For iOS devices, we use Halide for capturing. For other devices, we recommend using DNGConverter to convert the RAW images to DNG files.
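To double-check that the ISO and aperture stayed fixed across a capture session, you can scan the EXIF tags of the DNG files. Below is a minimal sketch; the third-party `exifread` package and the exact tag names are our assumptions, not part of this repo:

```python
# Check that ISO and aperture are constant across all DNGs in a capture.
# Illustrative sketch: `exifread` and the tag names below are assumptions.
from pathlib import Path

import exifread  # pip install exifread

isos, fnumbers = set(), set()
for dng in sorted(Path("path/to/your/dataset/raw").glob("*.dng")):
    with open(dng, "rb") as f:
        tags = exifread.process_file(f, details=False)
    isos.add(str(tags.get("EXIF ISOSpeedRatings")))
    fnumbers.add(str(tags.get("EXIF FNumber")))

print("ISO values:", isos)      # should contain exactly one value
print("F-numbers:", fnumbers)   # should contain exactly one value
```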
- For training, we first need to organize the data as follows (a layout sanity check is sketched at the end of this walkthrough):

```
DATAPATH
|-- images
|   |-- IMG_2866.JPG
|   |-- IMG_2867.JPG
|   `-- ...
`-- raw
    |-- IMG_2866.dng
    |-- IMG_2866.json
    |-- IMG_2867.dng
    |-- IMG_2867.json
    `-- ...
```
- If you do not have the JSON files, use `scripts/data/extract_exif_as_json.sh` to extract the EXIF information from the RAW images. You can run this script on the dataset directory:

```bash
bash scripts/data/extract_exif_as_json.sh path/to/your/dataset/raw
```
- Then you can calibrate the camera poses using COLMAP:

```bash
USE_GPU=1 bash scripts/data/local_colmap.sh path/to/your/dataset PINHOLE
```

Note: `PINHOLE` is the camera model that must be used for the calibration of 3DGS.
- After the calibration, you can write your own YAML file for training:
```yaml
base: options/le3d/base.yaml        # for scenes without multi-exposure images
# base: options/le3d/base_wme.yaml  # for scenes with multi-exposure images
name: le3d/bikes                    # the name of the experiment
datasets:
  train:
    name: rawnerf_bikes_train
    scene_root: datasets/rawnerf/scenes/bikes_pinhole  # change to the path of your dataset
  val:
    name: rawnerf_bikes_val
    scene_root: datasets/rawnerf/scenes/bikes_pinhole  # change to the path of your dataset
network_g:
  # change to the path of your sparse point cloud; this file will be created
  # during the dataset initialization
  init_ply_path: datasets/rawnerf/scenes/bikes_pinhole/sparse/0/points3D.ply
```
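Before launching training, you may want to sanity-check the dataset layout described above. Below is a minimal sketch using only the Python standard library; the pairing rules follow the directory tree shown earlier, and the script itself is not part of the repo:

```python
# Verify that every RAW file has its EXIF JSON sidecar and a matching JPG.
# Illustrative sketch; pairing rules follow the directory tree shown above.
from pathlib import Path

root = Path("path/to/your/dataset")
for dng in sorted((root / "raw").glob("*.dng")):
    stem = dng.stem
    if not (root / "raw" / f"{stem}.json").exists():
        print(f"missing EXIF sidecar: raw/{stem}.json")
    if not (root / "images" / f"{stem}.JPG").exists():
        print(f"missing image: images/{stem}.JPG")
```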
We provide two ways to share your reconstructed HDR scene on social media.
- Use hdr-splat to deploy your own HDR scene viewer with plain JavaScript code. You can use the following command to convert your `.ply` file to a `.splat` file:

```bash
bash scripts/export_splat.sh path/to/your/experiment [ITERATION]
# e.g. bash scripts/export_splat.sh output/le3d/bikes latest
```

Then you can run

```bash
python -m http.server --directory ./output/splat/bikes
```

to start a viewer for the reconstructed scenes (served at http://localhost:8000 by default). Selected scenes can be found in our web viewer.
- Use the LE3D Editor to create a video storyboard and share it on social media.
If you would like to develop or use LE3D in your projects, please let us know; we will list your projects in this repository.
If you find our repo useful for your research, please consider citing our paper:
@inproceedings{jin2024le3d,
title={Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis},
author={Jin, Xin and Jiao, Pengyi and Duan, Zheng-Peng and Yang, Xingchao and Li, Chong-Yi and Guo, Chun-Le and Ren, Bo},
booktitle={NeurIPS},
year={2024}
}
This code is licensed under the Creative Commons Attribution-NonCommercial 4.0 International for non-commercial use only. Please note that any commercial use of this code requires formal permission prior to use.
For technical questions, please contact xjin[AT]mail.nankai.edu.cn.
For commercial licensing, please contact cmm[AT]nankai.edu.cn.
This repository borrows heavily from BasicSR and gaussian-splatting.
We would like to extend heartfelt gratitude to Ms. Li Xinru for crafting the exquisite logo for our project.
We also thank all of our contributors.