
Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method (AAAI 2023 -- Oral)

Tao Wang, Kaihao Zhang, Tianrun Shen, Wenhan Luo, Bjorn Stenger, Tong Lu

Paper | Supplement | Video | Slides | Summary

Abstract: As the quality of optical sensors improves, there is a need for processing large-scale images. In particular, the ability of devices to capture ultra-high definition (UHD) images and video places new demands on the image processing pipeline. In this paper, we consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution. We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms. As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method. The core components of LLFormer are the axis-based multi-head self-attention and the cross-layer attention fusion block, which significantly reduce the computational complexity to linear. Extensive experiments on the new dataset as well as on existing public datasets show that LLFormer outperforms state-of-the-art methods. We also show that employing existing LLIE methods trained on our benchmark as a pre-processing step significantly improves the performance of downstream tasks, e.g., face detection in low-light conditions. The source code and pre-trained models are available at https://github.com/TaoWangzj/LLFormer.

News

  • Jan 11, 2023: Our paper is selected for an ORAL presentation at AAAI 2023
  • Dec 24, 2022: Pre-trained models are released!
  • Dec 23, 2022: Code is released!
  • Dec 23, 2022: Homepage is released!

This repository contains the dataset, code and pre-trained models for our paper. Please refer to our project page for a quick project overview.

UHDLOL Benchmark for Image Enhancement

We create a new large-scale UHD-LLIE dataset called UHDLOL to benchmark the performance of existing LLIE methods and explore the UHD-LLIE problem. It consists of two subsets: UHD-LOL4K and UHD-LOL8K. The UHD-LOL4K subset contains 8,099 pairs of 4K low-light/normal-light images, of which 5,999 pairs are used for training and 2,100 for testing. The UHD-LOL8K subset includes 2,966 pairs of 8K low-light/normal-light images, split into 2,029 pairs for training and 937 for testing. Please refer to our project page for more details.

Network Architecture

We propose a new transformer-based method for UHD-LLIE. The core design of LLFormer consists of an axis-based transformer block and a cross-layer attention fusion block. In the former, axis-based multi-head self-attention performs self-attention along the height and width axes sequentially, operating across the channel dimension, to reduce the computational complexity, and a dual gated feed-forward network uses a gating mechanism to focus on the more useful features. The cross-layer attention fusion block learns attention weights for features from different layers when fusing them.
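
To make the linear-complexity design concrete, below is a minimal, self-contained PyTorch sketch of axis-based self-attention. It is not the official implementation: the tensor layout, head handling and normalization are assumptions, and the dual gated feed-forward network and cross-layer attention fusion block are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AxisAttention(nn.Module):
    """Channel-wise self-attention applied along one spatial axis at a time."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, axis):
        # x: (B, H, W, C); axis='h' attends along height, axis='w' along width
        c = x.shape[-1]
        if axis == 'h':
            x = x.permute(0, 2, 1, 3)  # (B, W, H, C): attention runs over the H axis
        bsz, n_ctx, n_tok, _ = x.shape
        qkv = self.qkv(x).reshape(bsz, n_ctx, n_tok, 3, self.num_heads, c // self.num_heads)
        q, k, v = qkv.unbind(dim=3)                        # (B, n_ctx, n_tok, heads, c/heads)
        q, k, v = (t.transpose(2, 3) for t in (q, k, v))   # (B, n_ctx, heads, n_tok, c/heads)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        # "transposed" attention: a (c/heads x c/heads) map per row or column,
        # so cost grows linearly with the number of pixels rather than quadratically
        attn = (q.transpose(-2, -1) @ k).softmax(dim=-1)
        out = (v @ attn).transpose(2, 3).reshape(bsz, n_ctx, n_tok, c)
        out = self.proj(out)
        if axis == 'h':
            out = out.permute(0, 2, 1, 3)
        return out

# height-axis then width-axis attention, applied sequentially with residuals
attn = AxisAttention(dim=32, num_heads=4)
feat = torch.randn(1, 128, 128, 32)
feat = feat + attn(feat, axis='h')
feat = feat + attn(feat, axis='w')

Because each attention map is computed over the channels of a single row or column rather than over all HxW positions, memory and compute scale linearly with image size, which is what makes UHD inputs tractable.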

Quantitative results

Results on UHD-LOL

Results on LOL and MIT-Adobe FiveK

Get Started

Dependencies and Installation

  1. Create Conda Environment
conda create -n LLFormer python=3.7
conda activate LLFormer
conda install pytorch=1.8 torchvision=0.9 cudatoolkit=10.2 -c pytorch
pip install matplotlib scikit-image opencv-python yacs joblib natsort h5py tqdm
  2. Clone Repo
git clone https://github.com/TaoWangzj/LLFormer.git
  3. Install warmup scheduler
cd LLFormer
cd pytorch-gradual-warmup-lr; python setup.py install; cd ..
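
For reference, the warmup scheduler is typically combined with a cosine schedule in training scripts of this kind. The snippet below is only an illustrative sketch; the learning rate, epoch counts and multiplier are placeholder values, not the settings used in the paper.

import torch
from torch.optim.lr_scheduler import CosineAnnealingLR
from warmup_scheduler import GradualWarmupScheduler

model = torch.nn.Conv2d(3, 3, 3)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler_cosine = CosineAnnealingLR(optimizer, T_max=100, eta_min=1e-6)
scheduler = GradualWarmupScheduler(optimizer, multiplier=1, total_epoch=3,
                                   after_scheduler=scheduler_cosine)

for epoch in range(100):
    # ... run one training epoch (forward, backward, optimizer.step()) ...
    scheduler.step()  # warm up for the first epochs, then follow the cosine schedule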

Dataset

You can use the following links to download the datasets:

  1. UHD-LOL4K [OneDrive | Baidu drive]
  2. UHD-LOL8K [OneDrive | Baidu drive]
  3. LOL [Link]
  4. MIT-Adobe FiveK [Google drive | Baidu drive]

Pretrained Model

We provide pre-trained models trained on the following datasets:

  • LLFormer trained on UHD-LOL4K [Google drive | Baidu drive] with training config file ./configs/UHD-LOL4K/train/training_UHD_4K.yaml.

  • LLFormer trained on UHD-LOL8K [Google drive | Baidu drive] with training config file ./configs/UHD-LOL8K/train/training_UHD_8K.yaml.

  • LLFormer trained on LOL [Google drive | Baidu drive] with training config file ./configs/LOL/train/training_LOL.yaml.

  • LLFormer trained on MIT-Adobe FiveK [Google drive | Baidu drive] with training config file ./configs/MIT-Adobe-FiveK/train/training_MIT_5K.yaml.

Visual comparison results

We provide the visual results of all methods listed in Table 2 on the LOL and MIT-Adobe FiveK datasets.

Test

You can test the pre-trained models directly as follows:

  1. Modify the paths to the dataset and the pre-trained model.
# Testing parameters
input_dir # the path of the test data
result_dir # the path where enhanced results are saved
weights # the path of the pre-trained model weights
  2. Test the models on the LOL and MIT-Adobe FiveK datasets

You need to specify the data path input_dir, the result path result_dir, and the pre-trained model path weights. Then run:

python test.py --input_dir your_data_path --result_dir your_save_path --weights weight_path
  3. Test the models on the UHD-LOL dataset

You need to specify the data path input_dir, the result path result_dir, and the pre-trained model path weights. Then run:

python test_UHD.py --input_dir your_data_path --result_dir your_save_path --weights weight_path

(Due to GPU memory limitations, we suggest testing UHD images in patch-based mode.)
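
To illustrate what patch-based testing means, here is a minimal sketch that enhances a UHD image tile by tile so that only one tile is on the GPU at a time. This is not the repository's exact logic: the tile size and the assumption that the height and width divide evenly are simplifications.

import torch

@torch.no_grad()
def tiled_enhance(model, img, tile=512):
    # img: (1, 3, H, W) low-light tensor; H and W are assumed divisible by `tile`
    _, _, h, w = img.shape
    out = torch.zeros_like(img)
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            patch = img[:, :, top:top + tile, left:left + tile]
            out[:, :, top:top + tile, left:left + tile] = model(patch)
    return out

In practice, overlapping tiles whose borders are blended avoid visible seams at tile boundaries.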

Train

  1. Download the UHD-LOL training and testing data.

  2. Generate image patches from the full-resolution training images of the UHD-LOL dataset (a simplified sketch of this cropping step is shown after this list):

python scripts/extract_subimages_UHD.py
  3. To train LLFormer, run
python train.py -yml_path your_config_path
You need to modify the config file for your own training environment.
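
The patch-generation step (step 2 above) is handled by scripts/extract_subimages_UHD.py. The snippet below is only a simplified sketch of the idea; the directory names, patch size and stride are assumptions, not the script's actual settings.

import os
import cv2

def extract_patches(src_dir, dst_dir, patch_size=512, stride=384):
    # Crop every full-resolution image in src_dir into overlapping patches.
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:  # skip non-image files
            continue
        h, w = img.shape[:2]
        base, ext = os.path.splitext(name)
        idx = 0
        for top in range(0, h - patch_size + 1, stride):
            for left in range(0, w - patch_size + 1, stride):
                patch = img[top:top + patch_size, left:left + patch_size]
                cv2.imwrite(os.path.join(dst_dir, f"{base}_{idx:04d}{ext}"), patch)
                idx += 1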

Citations

If the UHDLOL benchmark and LLFormer help your research or work, please consider citing:

@inproceedings{wang2023ultra,
  title={Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method},
  author={Wang, Tao and Zhang, Kaihao and Shen, Tianrun and Luo, Wenhan and Stenger, Bjorn and Lu, Tong},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={37},
  number={3},
  pages={2654--2662},
  year={2023}
}
@inproceedings{zhang2021benchmarking,
  title={Benchmarking ultra-high-definition image super-resolution},
  author={Zhang, Kaihao and Li, Dongxu and Luo, Wenhan and Ren, Wenqi and Stenger, Bjorn and Liu, Wei and Li, Hongdong and Yang, Ming-Hsuan},
  booktitle={ICCV},
  pages={14769--14778},
  year={2021}
}

Contact

If you have any questions, please contact taowangzj@gmail.com.


Our Related Works

  • Benchmarking Ultra-High-Definition Image Super-resolution, ICCV 2021. Paper | Code
  • MC-Blur: A Comprehensive Benchmark for Image Deblurring, arXiv 2022. Paper | Code

Reference Repositories

This implementation is based on / inspired by:


