Code for ECCV 2020 paper "Open-Edit: Open-domain Image Manipulation with Open-Vocabulary Instructions"


Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions

Xihui Liu, Zhe Lin, Jianming Zhang, Handong Zhao, Quan Tran, Xiaogang Wang, and Hongsheng Li.
Published in ECCV 2020.

[Figure: example manipulation results]

Installation

Clone this repo.

git clone https://github.com/xh-liu/Open-Edit
cd Open-Edit

Install PyTorch 1.1+ and other requirements.

pip install -r requirements.txt

Download pretrained models

Download the pretrained models from Google Drive.

Data preparation

We use the Conceptual Captions dataset for training. Download the dataset and place it under the dataset folder. You can also use other image-caption datasets.
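Conceptual Captions is distributed as TSV files that pair each caption with an image URL. A minimal sketch of parsing one split into (caption, url) records, assuming the standard caption-TAB-url layout (this helper is illustrative, not part of the repository):

```python
import csv
import io

def load_conceptual_captions(tsv_text):
    """Parse Conceptual Captions TSV rows of the form: caption \t image_url."""
    rows = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [{"caption": cap, "url": url} for cap, url in rows]

sample = "a dog runs on the beach\thttp://example.com/dog.jpg\n"
records = load_conceptual_captions(sample)
print(records[0]["caption"])  # a dog runs on the beach
```

In practice you would then download the images from the listed URLs before training.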

Training

The visual-semantic embedding model is trained with VSE++.
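VSE++ trains the joint visual-semantic embedding with a hinge-based triplet loss that uses the hardest negative in each batch. A toy restatement in plain Python (not the repository's implementation, which operates on GPU tensors):

```python
def vse_pp_loss(sim, margin=0.2):
    """Max-of-hinges (hardest-negative) triplet loss from VSE++.

    sim[i][j] is the similarity between image i and caption j;
    the diagonal holds the positive (matching) pairs.
    """
    n = len(sim)
    total = 0.0
    for i in range(n):
        pos = sim[i][i]
        # hardest negative caption for image i
        cap_neg = max(sim[i][j] for j in range(n) if j != i)
        # hardest negative image for caption i
        img_neg = max(sim[j][i] for j in range(n) if j != i)
        total += max(0.0, margin + cap_neg - pos)
        total += max(0.0, margin + img_neg - pos)
    return total

# Well-separated pairs incur no loss with margin 0.2:
sim = [[0.9, 0.1], [0.2, 0.8]]
print(vse_pp_loss(sim))  # 0.0
```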

The image decoder is trained with:

bash train.sh
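The decoder learns to reconstruct the input image from its visual-semantic feature map. As a toy sketch, here is a pixel-level L1 reconstruction term, one common component of such objectives (not the repository's exact loss, which also uses other terms):

```python
def l1_reconstruction_loss(pred, target):
    """Mean absolute error between a decoded image and the original,
    both given as flat lists of pixel values."""
    assert len(pred) == len(target)
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

print(l1_reconstruction_loss([0.0, 0.5, 1.0], [0.0, 0.25, 1.0]))
```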

Testing

You can specify the image path and text instructions in test.sh.

bash test.sh
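At test time, an instruction is applied roughly by vector arithmetic in the joint visual-semantic space: the source concept's embedding is subtracted from the image feature and the target concept's embedding is added. A toy sketch of that idea (illustrative only; the vectors and concept names below are made up):

```python
def edit_feature(image_feat, source_emb, target_emb, strength=1.0):
    """Shift an image feature away from the source concept and toward
    the target concept by vector arithmetic in the embedding space."""
    return [
        f - strength * s + strength * t
        for f, s, t in zip(image_feat, source_emb, target_emb)
    ]

# Hypothetical 3-d embeddings for illustration.
image_feat = [0.5, 0.25, 0.125]
red = [0.5, 0.0, 0.0]
green = [0.0, 0.5, 0.0]
print(edit_feature(image_feat, red, green))  # [0.0, 0.75, 0.125]
```

The edited feature map is then passed through the trained decoder to render the manipulated image.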

Citation

If you use this code for your research, please cite our paper:

@inproceedings{liu2020open,
  title={Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions},
  author={Liu, Xihui and Lin, Zhe and Zhang, Jianming and Zhao, Handong and Tran, Quan and Wang, Xiaogang and Li, Hongsheng},
  booktitle={European Conference on Computer Vision},
  year={2020}
}
