Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement
This is the repository for the model proposed in the paper "Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement" (accepted by ACM MM 2023). The framework of Light-VQA is illustrated in the paper. Among low-level features, brightness and noise have the greatest impact on the quality assessment of low-light video enhancement [37]. Therefore, in addition to the semantic and motion features extracted by deep neural networks, we specifically handcraft brightness, brightness consistency, and noise features to improve the model's ability to represent the quality-aware characteristics of low-light enhanced videos. Extensive experiments validate the effectiveness of our network design.
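As a rough illustration of what such handcrafted low-level features can look like (this is a sketch of the idea, not the exact implementation in this repo; the function names are ours), per-frame brightness can be summarized by mean luminance, and noise by a simple Laplacian-based statistic:

```python
import cv2
import numpy as np

def brightness_feature(frame_bgr: np.ndarray) -> float:
    """Mean luminance of a frame, taken from the V channel of HSV."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return float(hsv[:, :, 2].mean())

def noise_feature(frame_bgr: np.ndarray) -> float:
    """Crude noise proxy: variance of the Laplacian response."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())
```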
- pytorch
- opencv
- scipy
- pandas
- torchvision
- torchvideo
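The import names above differ from the PyPI package names; assuming a standard pip setup, something like the following should install them (the exact package names are our best guess, so adjust to your environment):

pip install torch torchvision opencv-python scipy pandas torchvideo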
- Extract key frames (the file paths are set inside the script; edit them before running)
python extract_key_frames.py
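For orientation, key-frame extraction along these lines can be done with OpenCV (a minimal sketch; the script's actual sampling strategy, paths, and frame count may differ):

```python
import cv2
import os

def extract_key_frames(video_path: str, out_dir: str, num_frames: int = 8) -> None:
    """Uniformly sample num_frames frames from a video and save them as PNGs."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = [int(i * total / num_frames) for i in range(num_frames)]
    for n, idx in enumerate(indices):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the sampled frame
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(os.path.join(out_dir, f"frame_{n:03d}.png"), frame)
    cap.release()
```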
- Extract brightness consistency features
python brightness_consistency.py
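Conceptually, brightness consistency measures how steady the luminance stays across frames; a minimal sketch of that idea (not necessarily the script's exact formulation) is the spread of per-frame mean brightness:

```python
import cv2
import numpy as np

def brightness_consistency(frames_bgr: list) -> float:
    """Std of per-frame mean luminance; lower means steadier brightness."""
    means = []
    for frame in frames_bgr:
        v = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, :, 2]
        means.append(v.mean())
    return float(np.std(means))
```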
- Extract temporal features
python extract_temporal_features.py
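The motion features in the paper come from a deep network; purely as a toy illustration of a temporal statistic (this frame-difference measure is our stand-in, not the repo's method), motion energy can be estimated as:

```python
import cv2
import numpy as np

def motion_energy(frames_bgr: list) -> float:
    """Mean absolute difference between consecutive grayscale frames."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
             for f in frames_bgr]
    diffs = [np.abs(a - b).mean() for a, b in zip(grays[1:], grays[:-1])]
    return float(np.mean(diffs)) if diffs else 0.0
```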
- Train the model
python train.py
- Test the model
python test.py
Pretrained weights can be downloaded here: https://drive.google.com/file/d/1GEvjpbDwG7L3fekkLt2eQQ3ozzAz3qCx/view?usp=sharing.
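Assuming a standard PyTorch checkpoint, the downloaded weights would typically be loaded along these lines (the module name, class name, and checkpoint filename below are hypothetical; see test.py for the actual usage):

```python
import torch
# Hypothetical import: replace with the actual module/class in this repo.
from model import LightVQA

model = LightVQA()
state = torch.load("light_vqa.pth", map_location="cpu")  # assumed filename
model.load_state_dict(state)
model.eval()
```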
If you find this code useful for your research, please cite:
@inproceedings{lightvqa2023,
  title     = {Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement},
  author    = {Dong, Yunlong and Liu, Xiaohong and Gao, Yixuan and Zhou, Xunchu and Tan, Tao and Zhai, Guangtao},
  booktitle = {Proceedings of the 31st ACM International Conference on Multimedia},
  year      = {2023},
}