
Light-VQA

Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement

Description

This is the repository for the model proposed in the paper "Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement" (accepted by ACM MM 2023). The framework of Light-VQA is illustrated in the figure above. Considering that, among low-level features, brightness and noise have the greatest impact on low-light enhanced VQA [37], in addition to the semantic and motion features extracted by deep neural networks, we specifically handcraft brightness, brightness-consistency, and noise features to improve the model's ability to represent the quality-aware characteristics of low-light enhanced videos. Extensive experiments validate the effectiveness of our network design.
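To make the idea of the handcrafted features concrete, here is a minimal sketch of per-frame brightness and a brightness-consistency proxy. The function name and the exact statistics are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def brightness_features(frames):
    """Sketch of handcrafted brightness features.

    frames: iterable of H x W x 3 uint8 arrays (video frames).
    Returns (per-frame mean brightness, consistency), where consistency
    is the std of per-frame means -- lower means more temporally stable
    brightness. This is an assumed proxy, not the paper's exact formula.
    """
    means = np.array([f.mean() for f in frames], dtype=np.float64)
    consistency = float(means.std())
    return means, consistency
```

A constant-brightness clip would yield a consistency of zero, while flicker introduced by an enhancement method would raise it.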

Usage

Install Requirements

pytorch
opencv
scipy
pandas
torchvision
torchvideo

Download databases

LLVE-QA(1) LLVE-QA(2)

Train models

  1. Extract key frames (set the file path inside the script)
python extract_key_frames.py
  2. Extract brightness consistency features
python brightness_consistency.py
  3. Extract temporal features
python extract_temporal_features.py
  4. Train the model
python train.py
  5. Test the model
python test.py
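The key-frame extraction step could, for instance, sample frames uniformly across the video; the following is a minimal sketch of that idea (the function name and sampling strategy are assumptions for illustration, not necessarily what extract_key_frames.py does):

```python
import numpy as np

def key_frame_indices(num_frames, num_key=8):
    """Pick num_key frame indices spread evenly over a video.

    A simple uniform-sampling baseline for key-frame selection;
    the repository's script may use a different strategy.
    """
    return np.linspace(0, num_frames - 1, num=num_key, dtype=int).tolist()
```

For a 100-frame video and 4 key frames this selects indices 0, 33, 66, and 99.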

Pretrained weights can be downloaded here: https://drive.google.com/file/d/1GEvjpbDwG7L3fekkLt2eQQ3ozzAz3qCx/view?usp=sharing.

Citation

If you find this code useful for your research, please cite:

@inproceedings{lightvqa2023,
  title = {Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement},
  author = {Yunlong Dong and Xiaohong Liu and Yixuan Gao and Xunchu Zhou and Tao Tan and Guangtao Zhai},
  booktitle = {Proceedings of the 31st ACM International Conference on Multimedia},
  year = {2023},
}
