[NeurIPS 2024] The official implementation of the paper: "LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes"
```bash
conda create -n lushnerf
conda activate lushnerf
git clone https://github.com/quzefan/LuSh-NeRF
cd LuSh-NeRF
pip install -r requirements.txt
```
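Note that the `conda create` line above does not pin a Python version, so the fresh environment may not ship with its own `pip`. A hedged variant (the exact Python version is an assumption; pick one compatible with `requirements.txt`):

```bash
# Assumption: the codebase runs on a recent Python such as 3.8; adjust as needed
conda create -n lushnerf python=3.8
```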
Please download the GIM pretrained model `gim_dkm_100h.ckpt` and place it in the `./gim/weights` folder.
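For example, assuming the checkpoint was downloaded to the repository root (a minimal sketch; `mkdir -p` only matters if the folder does not exist yet):

```bash
mkdir -p ./gim/weights
mv gim_dkm_100h.ckpt ./gim/weights/
```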
You can find our data here. Please put the `LOL-BlurNeRF` folder in the `./data` folder.
The `images`, `images_1_preprocess`, and `HL` (synthetic scenes only) folders in each scene contain the original, preprocessed, and ground-truth (GT) images, respectively.
Please make sure your data is in LLFF format and put it in the `./data` folder.
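Put together, each scene directory is expected to look roughly like the sketch below. `poses_bounds.npy` is the standard LLFF camera file; `<scene>` is a placeholder:

```
data/LOL-BlurNeRF/<scene>/
├── images/               # original low-light images
├── images_1_preprocess/  # preprocessed images
├── HL/                   # GT images (synthetic scenes only)
└── poses_bounds.npy      # LLFF-format camera poses and depth bounds
```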
For example, to train the Poster scene:

```bash
python run_lushnerf.py --config ./configs/poster_lushnerf
```
The training logs and tensorboard results will be saved in `<basedir>/<expname>`.
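To monitor training, you can point TensorBoard at that directory (substitute your actual `basedir` and `expname`):

```bash
tensorboard --logdir <basedir>/<expname>
```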
When training on your own data, please adjust the `scaleup-gamma` and `scaleup-clahe` hyperparameters so that the images in `images_1_preprocess` have a satisfactory color distribution.
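As a sketch only: how these two hyperparameters are passed depends on the codebase (they may be config-file entries rather than command-line flags), and the names and values below are illustrative assumptions, not recommendations:

```bash
# Hypothetical invocation: flag spelling and values are assumptions; check the
# config parser in run_lushnerf.py for the actual option names and defaults.
python run_lushnerf.py --config ./configs/your_scene_lushnerf --scaleup_gamma 2.0 --scaleup_clahe 2.0
```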
For example, to render the Poster scene:

```bash
python run_lushnerf.py --config ./configs/poster_lushnerf --render_only --render_radius_scale 2.0
```
Since our method needs to optimize multiple MLP networks, its training speed may be a bit slow. If your GPU memory is insufficient or you need a quick test, you can lower `N_samples` and `N_importance` (e.g., from 64 to 32).
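For instance, assuming the training script inherits Deblur-NeRF's standard NeRF sampling flags (an assumption; these may also be set in the config file):

```bash
# Halve coarse (N_samples) and fine (N_importance) sample counts for a faster,
# lower-memory test run.
python run_lushnerf.py --config ./configs/poster_lushnerf --N_samples 32 --N_importance 32
```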
For other notes, please refer to Deblur-NeRF.
If you find our work helpful to your research, please cite:

```bibtex
@article{qu2024lush,
  title={LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes},
  author={Qu, Zefan and Xu, Ke and Hancke, Gerhard Petrus and Lau, Rynson WH},
  journal={arXiv preprint arXiv:2411.06757},
  year={2024}
}
```
Our codebase builds on Deblur-NeRF, DP-NeRF, and GIM. Thanks to the authors for sharing their awesome codebases!