This repository contains the implementation of the paper "HazeCLIP: Towards Language Guided Real-World Image Dehazing".
We present HazeCLIP, a language-guided adaptation framework designed to enhance the real-world performance of pre-trained dehazing networks.
Set up the conda environment via
conda create -n HazeCLIP python=3.9
conda activate HazeCLIP
pip install -r requirements.txt
Please modify the corresponding YAML configuration file (e.g. data and checkpoint paths) before running each of the commands below.
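If you prefer to edit a config programmatically, the PyYAML-based sketch below shows one way to do it. The key names (`ckpt_path`, `input_dir`, `output_dir`) are hypothetical placeholders, not necessarily the schema used by this repository's config files.

```python
# Minimal sketch: update a YAML config before running a script.
# Assumes PyYAML is installed; the keys below are hypothetical placeholders.
import yaml

cfg_path = "configs/inference.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Hypothetical keys -- replace with the names used in the real config file.
cfg["ckpt_path"] = "./weights/hazeclip.pth"
cfg["input_dir"] = "./data/real_hazy"
cfg["output_dir"] = "./results"

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```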
Download the checkpoint from Baidu Yun (code: haze), put it in the ./weights/ folder, then run
python inference.py --config configs/inference.yaml
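As an optional sanity check on the dehazed outputs, the sketch below scores a result image against a hazy/clear prompt pair with OpenAI's CLIP package, in the spirit of the paper's language guidance. The image path and prompts are illustrative assumptions, not the exact ones used in the paper.

```python
# Minimal sketch: rate a dehazed image with a hazy/clear CLIP prompt pair.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder result path and prompts.
image = preprocess(Image.open("results/example.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a hazy photo", "a clear photo"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

print(f"P(hazy) = {probs[0]:.3f}, P(clear) = {probs[1]:.3f}")
```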
Download the synthetic data from RIDCP, put it under the ./data/ folder, then run
python pretrain.py --config configs/pretrain.yaml
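For orientation only, the sketch below shows one way paired hazy/clear images could be wrapped as a PyTorch dataset for supervised pre-training, assuming the synthetic data is laid out as two matching image folders. The actual RIDCP release may be organized differently, so adapt the paths and loading logic accordingly.

```python
# Minimal sketch: a paired hazy/clear dataset, assuming matching filenames
# in two folders (hypothetical layout -- check the real RIDCP data structure).
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class PairedHazeDataset(Dataset):
    def __init__(self, hazy_dir, clear_dir, size=256):
        self.hazy_paths = sorted(Path(hazy_dir).glob("*.png"))
        self.clear_dir = Path(clear_dir)
        self.to_tensor = transforms.Compose(
            [transforms.Resize((size, size)), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.hazy_paths)

    def __getitem__(self, idx):
        hazy_path = self.hazy_paths[idx]
        hazy = self.to_tensor(Image.open(hazy_path).convert("RGB"))
        clear = self.to_tensor(
            Image.open(self.clear_dir / hazy_path.name).convert("RGB")
        )
        return hazy, clear
```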
Download the fine-tuning dataset from Baidu Yun (code: haze), put it under the ./data/ folder, then run
python finetune.py --config configs/finetune.yaml
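For intuition about what the fine-tuning stage optimizes, the sketch below implements a simple prompt-based CLIP guidance loss that steers a dehazed image toward a "clear" prompt and away from a "hazy" one. It only illustrates the language-guidance idea; it is not the authors' exact objective, and the prompts and input preprocessing are assumptions (see the paper for the full fine-tuning losses).

```python
# Minimal sketch of a prompt-based CLIP guidance loss (illustrative only).
# Inputs are assumed to be 224x224 crops already normalized with CLIP's
# mean/std; prompts are placeholders.
import clip
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()          # fp32 keeps the sketch simple
for p in clip_model.parameters():
    p.requires_grad_(False)                     # CLIP stays frozen

tokens = clip.tokenize(["a hazy photo", "a clear photo"]).to(device)
with torch.no_grad():
    text_feat = F.normalize(clip_model.encode_text(tokens), dim=-1)  # (2, D)


def clip_guidance_loss(dehazed: torch.Tensor) -> torch.Tensor:
    """dehazed: (N, 3, 224, 224) CLIP-normalized output of the dehazing net."""
    img_feat = F.normalize(clip_model.encode_image(dehazed), dim=-1)  # (N, D)
    logits = 100.0 * img_feat @ text_feat.t()                         # (N, 2)
    target = torch.ones(len(dehazed), dtype=torch.long, device=device)
    return F.cross_entropy(logits, target)      # index 1 = "a clear photo"
```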
If you find our work helpful, please consider citing it as
@misc{wang2024hazecliplanguageguidedrealworld,
  title={HazeCLIP: Towards Language Guided Real-World Image Dehazing},
  author={Ruiyi Wang and Wenhao Li and Xiaohong Liu and Chunyi Li and Zicheng Zhang and Xiongkuo Min and Guangtao Zhai},
  year={2024},
  eprint={2407.13719},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.13719},
}
Parts of the code are adapted from RIDCP, CLIP Surgery, and CLIP-LIT. Thanks for their work!