This is the official code for the paper "Temporal Consistent Automatic Video Colorization via Semantic Correspondence".
Our method achieved 3rd place in the NTIRE 2023 Video Colorization Challenge, Track 2: Color Distribution Consistency (CDC) Optimization.
To run the test code, set "--data_root_val" in ./stage1/test.py and "--test_path" in ./stage2/inference_colorvid.py to the path of your test dataset.
Please run test.sh to test the model.
bash test.sh
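If the two scripts parse these options with argparse, the paths could also be supplied on the command line instead of editing the files (an assumption; only the flag names above come from this README). A dry-run sketch with a placeholder dataset path:

```shell
#!/bin/sh
# Placeholder: replace with the actual location of your test dataset.
DATA_ROOT=/path/to/test_dataset

# Dry run: print the two inference commands rather than executing them,
# since running them requires the checkpoints and dataset to be in place.
echo "python ./stage1/test.py --data_root_val $DATA_ROOT"
echo "python ./stage2/inference_colorvid.py --test_path $DATA_ROOT"
```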
The checkpoint 342000 in stage1 was fine-tuned for the NTIRE 2023 Video Colorization Challenge; 340000 is the model used in the paper.
The pretrained models can be downloaded from Model_Released.
For any questions about the implementation, please contact sqchen@bupt.edu.cn.
Please cite the following papers in your publications if they help your research:
@inproceedings{zhang2023temporal,
  title={Temporal consistent automatic video colorization via semantic correspondence},
  author={Zhang, Yu and Chen, Siqi and Wang, Mingdao and Zhang, Xianlin and Zhu, Chuang and Zhang, Yue and Li, Xueming},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1835--1844},
  year={2023}
}

@misc{exemplarvcld,
  title={Exemplar-based Video Colorization with Long-term Spatiotemporal Dependency},
  author={Siqi Chen and Xueming Li and Xianlin Zhang and Mingdao Wang and Yu Zhang and Jiatong Han and Yue Zhang},
  year={2023},
  eprint={2303.15081},
  archivePrefix={arXiv}
}