Project Page | Paper | Bibtex
Semi-supervised Parametric Real-world Image Harmonization.
CVPR 2023
Ke Wang, Michaël Gharbi, He Zhang, Zhihao Xia, Eli Shechtman
A novel semi-supervised training strategy and the first harmonization method that learns complex local appearance harmonization from unpaired real composites.
The code was developed by Ke Wang while he was a research scientist intern at Adobe Research.
Please contact Ke (kewang@berkeley.edu) or Michaël (mgharbi@adobe.com) if you have any questions.
Our results show better visual agreement with the ground truth compared to SOTA methods in terms of color harmonization (rows 1, 2, and 4) and shading correction (row 3).
RGB curves harmonize the global color/tone (center), while our shading map corrects the local shading in the harmonization output (right).
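To make the two-stage idea above concrete, here is a minimal NumPy sketch of a parametric harmonization step: a global per-channel tone curve followed by a multiplicative local shading map. This is illustrative only; the function names and the exact curve/shading parameterization are assumptions, not the paper's implementation.

```python
import numpy as np

def apply_curves(fg, knots):
    """Map each RGB channel through a tone curve given by control knots.

    fg:    HxWx3 float image in [0, 1]
    knots: 3xK array; knots[c] holds the curve values at K evenly
           spaced input levels for channel c.
    """
    out = np.empty_like(fg)
    xs = np.linspace(0.0, 1.0, knots.shape[1])
    for c in range(3):
        # Piecewise-linear curve lookup per channel.
        out[..., c] = np.interp(fg[..., c], xs, knots[c])
    return out

def harmonize(fg, mask, knots, shading):
    """Global color curves, then a spatially varying shading map."""
    colored = apply_curves(fg, knots)
    # The shading map scales each pixel's brightness locally.
    shaded = np.clip(colored * shading[..., None], 0.0, 1.0)
    # Keep only the foreground region selected by the mask.
    return shaded * mask[..., None]
```

With identity knots (a straight line from 0 to 1) and a uniform shading map of 1.0, the foreground passes through unchanged, which is a useful sanity check.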
- Linux
- Python 3
- NVIDIA GPU + CUDA CuDNN
- Conda installed
Table of Contents:
- Setup - set up the environment
- Pretrained Models - download pretrained models and resources
- Interactive Demo - offline interactive demo
- Inference - inference on high-resolution images with pretrained model
- Dataset - prepare your own dataset for the training
- Training - pipeline for training PIH
- Citation - bibtex citation
- Clone this repo:
git clone git@github.com:adobe/PIH.git
- Install dependencies
We provide an environment.yml
to install the dependencies (essentially PyTorch). With Conda installed, run
conda env create -f environment.yml
We provide our pre-trained model (93M parameters), trained on the Artist Retouched Dataset, at this link; download it and put it in the folder
./pretrained/
We provide an offline interactive demo built with PyGame.
First, we install the dependencies:
python -m pip install -U pygame --user
pip install pygame_gui
pip install timm
Then, simply run the following command to start the demo:
python demo.py
Here we provide a tutorial video for the demo.
We provide the inference code for evaluations:
python inference.py --bg <background dir *.png> --fg <foreground dir *.png> --checkpoints <checkpoint dir> [--gpu]
Notes:
- argument --gpu enables inference on GPU using CUDA; the default is to run on CPU.
- argument --checkpoints specifies the path to the checkpoint.
Example:
python inference.py --bg Demo_hr/Real_09_bg.jpg --fg Demo_hr/Real_09_fg.png --checkpoints pretrained/ckpt_g39.pth --gpu
Check the results/ folder for the output images.
Here is a guideline for preparing the Artist Retouched Dataset.
For an image named <image-name>, we organize the data directory as follows:
data
|-- train
|   |-- bg
|   |   |-- <image-name>_before.png
|   |   |-- <image-name>_after.png
|   |-- masks
|   |   |-- <image-name>_before.png
|   |   |-- <image-name>_after.png
|   |-- real_images
|   |   |-- <image-name>_before.png
|   |   |-- <image-name>_after.png
|-- test
|   |-- bg
|   |   |-- <image-name>_before.png
|   |   |-- <image-name>_after.png
|   |-- masks
|   |   |-- <image-name>_before.png
|   |   |-- <image-name>_after.png
|   |-- real_images
|   |   |-- <image-name>_before.png
|   |   |-- <image-name>_after.png
Notes:
- bg (background): backgrounds inpainted using the foreground masks; we use LaMa to perform the inpainting.
- masks: foreground masks; these should be consistent between before and after.
- real_images: ground-truth real images.
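Before training, it is easy to miss a before/after file somewhere in the tree. The sketch below checks the layout described above; the subfolder and suffix names come from this README, but the script itself is illustrative, not part of the repo.

```python
from pathlib import Path

# Subfolders and filename suffixes as documented in this README.
SUBDIRS = ("bg", "masks", "real_images")
SUFFIXES = ("_before.png", "_after.png")

def missing_files(root, split="train"):
    """Return every expected file that is absent for a given split."""
    split_dir = Path(root) / split
    # Collect every <image-name> seen anywhere in the split.
    names = set()
    for sub in SUBDIRS:
        for p in (split_dir / sub).glob("*_before.png"):
            names.add(p.name[: -len("_before.png")])
    # Each name must have a before and an after file in all three subfolders.
    missing = []
    for name in sorted(names):
        for sub in SUBDIRS:
            for suf in SUFFIXES:
                f = split_dir / sub / f"{name}{suf}"
                if not f.exists():
                    missing.append(str(f))
    return missing
```

Running `missing_files("data", "train")` and `missing_files("data", "test")` should both return empty lists for a complete dataset.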
Our approach uses a dual-stream semi-supervised training to bridge the domain gap, alleviating the generalization issues that plague many state-of-the-art harmonization models.
We provide the script scripts/train_example.sh to perform training.
Training notes:
- modify --dir_data to the path of your custom dataset.
- argument recon_weight corresponds to the weighting parameter that balances stream 1 and stream 2.
Simply run:
bash scripts/train_example.sh
to start the training.
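As a rough illustration of what a weight like recon_weight does, here is one plausible convention for balancing the two training streams: a convex combination of the supervised reconstruction stream (stream 1) and the unpaired real-composite stream (stream 2). The exact loss terms and weighting convention in PIH may differ; this sketch is an assumption.

```python
def combined_loss(stream1_loss, stream2_loss, recon_weight):
    """Convex combination of the two training-stream losses.

    recon_weight near 1.0 emphasizes the supervised reconstruction
    stream; near 0.0 it emphasizes the unpaired stream.
    """
    return recon_weight * stream1_loss + (1.0 - recon_weight) * stream2_loss
```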
If you use this code for your research, please cite our paper.
@inproceedings{wang2023semi,
title={Semi-supervised Parametric Real-world Image Harmonization},
author={Wang, Ke and Gharbi, Micha{\"e}l and Zhang, He and Xia, Zhihao and Shechtman, Eli},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023}
}