
# iColoriT (WACV 2023) Official Implementation

This is the official PyTorch implementation of the paper: iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer.

iColoriT is pronounced as "I color it".


iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer
Jooyeol Yun*, Sanghyeon Lee*, Minho Park*, and Jaegul Choo
KAIST
In WACV 2023. (* indicates equal contribution)

Paper: https://arxiv.org/abs/2207.06831
Project page: https://pmh9960.github.io/research/iColoriT/

Abstract: Point-interactive image colorization aims to colorize grayscale images when a user provides the colors for specific locations. It is essential for point-interactive colorization methods to appropriately propagate user-provided colors (i.e., user hints) to the entire image in order to obtain a reasonably colorized image with minimal user effort. However, existing approaches often produce partially colorized results due to the inefficient design of stacking convolutional layers to propagate hints to distant relevant regions. To address this problem, we present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions, leveraging the global receptive field of Transformers. The self-attention mechanism of Transformers enables iColoriT to selectively colorize relevant regions with only a few local hints. Our approach colorizes images in real time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture. In addition, to mitigate the artifacts caused by pixel shuffling with large upsampling ratios, we present the local stabilizing layer. Extensive quantitative and qualitative results demonstrate that our approach significantly outperforms existing methods for point-interactive colorization, producing accurately colorized images with minimal user effort.
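For intuition on the decoder-free design described above, here is a minimal PyTorch sketch of a pixel-shuffling prediction head: a linear layer maps each Transformer token to a patch's worth of ab color values, and `nn.PixelShuffle` rearranges them into a dense color map. The module and argument names (`PixelShuffleHead`, `dim`, `patch_size`) are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

class PixelShuffleHead(nn.Module):
    """Illustrative sketch (not the official module): turn per-patch ViT
    tokens into a dense 2-channel ab color map via pixel shuffling,
    replacing a conventional convolutional decoder."""

    def __init__(self, dim=768, patch_size=16, out_channels=2):
        super().__init__()
        # Each token predicts a patch_size x patch_size block of ab values.
        self.proj = nn.Linear(dim, out_channels * patch_size ** 2)
        self.shuffle = nn.PixelShuffle(patch_size)  # (B, C*r^2, H, W) -> (B, C, H*r, W*r)

    def forward(self, tokens, h, w):
        # tokens: (B, h*w, dim) -- one token per image patch
        x = self.proj(tokens)                                 # (B, h*w, 2*p^2)
        x = x.transpose(1, 2).reshape(x.size(0), -1, h, w)    # (B, 2*p^2, h, w)
        return self.shuffle(x)                                # (B, 2, h*p, w*p)

# Example: 14x14 patch grid from a 224x224 input
out = PixelShuffleHead()(torch.randn(1, 14 * 14, 768), 14, 14)
print(out.shape)  # torch.Size([1, 2, 224, 224])
```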

## Demo 🎨

Try colorizing images yourself with the demo software!

## Pretrained iColoriT

Checkpoints for iColoriT models are available in the links below.

| Model | Backbone | Link |
| --- | --- | --- |
| iColoriT | ViT-B | iColoriT (Google Drive) |
| iColoriT-S | ViT-S | iColoriT-S (Google Drive) |
| iColoriT-T | ViT-Ti | iColoriT-T (Google Drive) |
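A quick, hedged sketch of inspecting a downloaded checkpoint with plain PyTorch is shown below; the checkpoint filename and the `'model'` state-dict key are assumptions, and scripts/infer.sh remains the official entry point.

```python
import torch

# Illustrative only -- the filename and the state-dict key are assumptions;
# see scripts/infer.sh for the official loading code.
ckpt_path = 'path/to/icolorit_checkpoint.pth'      # downloaded from the links above
checkpoint = torch.load(ckpt_path, map_location='cpu')
state_dict = checkpoint.get('model', checkpoint)   # 'model' key is a guess
print(f'{len(state_dict)} tensors in checkpoint')
```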

## Testing

### Installation

Our code is implemented in Python 3.8 and requires torch>=1.8.2.

```bash
git clone https://github.com/pmh9960/iColoriT.git
cd iColoriT
pip install -r requirements.txt
```

### Testing iColoriT

You can generate colorization results by providing iColoriT with randomly selected ground-truth hints sampled from the color images. Please fill in the path to the model checkpoints and validation directories in the scripts/infer.sh file.

```bash
bash scripts/infer.sh
```

Then, you can evaluate the results by running

```bash
bash scripts/eval.sh
```
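For reference, point-interactive colorization results are typically scored with PSNR against the ground-truth color image at a fixed hint budget. The snippet below is an illustrative PSNR sketch, not the repository's evaluation code, which lives behind scripts/eval.sh.

```python
import torch

def psnr(pred, target, max_val=1.0):
    """PSNR between two image tensors in [0, max_val].
    Illustrative sketch; scripts/eval.sh runs the official evaluation."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

# Example with random tensors shaped (B, C, H, W)
print(psnr(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)))
```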

The randomly sampled hints used in our paper are available at this link.

The code used for randomly sampling hint locations is available in hint_generator.py.
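For intuition, the sketch below shows one plausible way to sample random hints: reveal the ground-truth ab values inside a few randomly placed squares and record a binary mask. It is an illustrative approximation; hint_generator.py contains the sampling logic actually used in the paper.

```python
import torch

def sample_random_hints(ab, n_hints, hint_size=2):
    """Illustrative sketch (see hint_generator.py for the official logic):
    reveal ground-truth ab colors inside n randomly placed
    hint_size x hint_size squares; return the hint map and a binary mask."""
    _, h, w = ab.shape                       # ab: (2, H, W) ground-truth colors
    hint = torch.zeros_like(ab)
    mask = torch.zeros(1, h, w)
    for _ in range(n_hints):
        y = torch.randint(0, h - hint_size + 1, (1,)).item()
        x = torch.randint(0, w - hint_size + 1, (1,)).item()
        hint[:, y:y + hint_size, x:x + hint_size] = ab[:, y:y + hint_size, x:x + hint_size]
        mask[:, y:y + hint_size, x:x + hint_size] = 1.0
    return hint, mask

hint, mask = sample_random_hints(torch.randn(2, 224, 224), n_hints=10)
print(hint.shape, mask.sum().item())
```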

## Training

First, prepare the official ImageNet dataset with the following structure.

```
train
 └ id1
   └ image1.JPEG
   └ image2.JPEG
   └ ...
 └ id2
   └ image1.JPEG
   └ image2.JPEG
   └ ...
```
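This layout is the standard one-subdirectory-per-class format, so you can sanity-check your extracted dataset with torchvision's ImageFolder. The snippet below is only an illustrative check (the path is a placeholder); scripts/train.sh drives the official pipeline.

```python
from torchvision import datasets, transforms

# Illustrative sanity check -- the official pipeline is driven by scripts/train.sh.
train_set = datasets.ImageFolder(
    'path/to/imagenet/train',                 # directory laid out as shown above
    transform=transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ]),
)
print(len(train_set), 'training images in', len(train_set.classes), 'classes')
```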

Please fill in the train/evaluation directories in the scripts/train.sh file and execute

```bash
bash scripts/train.sh
```

## Citation

```bibtex
@InProceedings{Yun_2023_WACV,
    author    = {Yun, Jooyeol and Lee, Sanghyeon and Park, Minho and Choo, Jaegul},
    title     = {iColoriT: Towards Propagating Local Hints to the Right Region in Interactive Colorization by Leveraging Vision Transformer},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {1787-1796}
}
```