[Update 8/28/2022]
Official PyTorch code for "AesUST: Towards Aesthetic-Enhanced Universal Style Transfer" (ACM MM 2022)
AesUST is a novel Aesthetic-enhanced Universal Style Transfer approach that generates aesthetically more realistic and pleasing results for arbitrary styles. It introduces an aesthetic discriminator that learns universal, human-pleasing aesthetic features from a large corpus of artist-created paintings. These aesthetic features are then incorporated to enhance the style transfer process via a novel Aesthetic-aware Style-Attention (AesSA) module. Moreover, we develop a two-stage transfer training strategy with two aesthetic regularizations that trains the model more effectively and further improves stylization performance.
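For intuition only, the sketch below shows one way an aesthetic-aware style-attention block could be wired: content features query style features whose keys are modulated by aesthetic features. The layer names, shapes, and the injection scheme are illustrative assumptions, not the official AesSA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AesSASketch(nn.Module):
    """Illustrative aesthetic-aware style attention (NOT the official AesSA).

    Content features attend over style features; the attention keys are
    modulated by aesthetic features (assumed to share the style map's shape).
    """
    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, 1)    # query from content
        self.g = nn.Conv2d(channels, channels, 1)    # key from style
        self.a = nn.Conv2d(channels, channels, 1)    # aesthetic modulation
        self.h = nn.Conv2d(channels, channels, 1)    # value from style
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, content, style, aesthetic):
        b, c, hc, wc = content.shape
        q = self.f(content).flatten(2)                          # B x C x Nc
        k = (self.g(style) + self.a(aesthetic)).flatten(2)      # B x C x Ns
        v = self.h(style).flatten(2)                            # B x C x Ns
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), -1)   # B x Nc x Ns
        fused = torch.bmm(v, attn.transpose(1, 2)).view(b, c, hc, wc)
        return content + self.out(fused)
```

For example, `AesSASketch(512)` applied to three `1x512x32x32` feature maps returns a fused map of the same shape.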
Requirements:
- Python 3.6
- PyTorch 1.8.0
Clone this repo:
```
git clone https://github.com/EndyWon/AesUST
cd AesUST
```
Test:
- Download the pre-trained models from this google drive, unzip them, and place them at path `models/`.
- Test a pair of images:
```
python test.py --content inputs/content/1.jpg --style inputs/style/1.jpg
```
- Test two collections of images:
```
python test.py --content_dir inputs/content/ --style_dir inputs/style/
```
Train:
- Download the content dataset MS-COCO and the style dataset WikiArt, then extract them.
- Download the pre-trained vgg_normalised.pth and place it at path `models/`.
- Run the train script (a schematic sketch of the adversarial training step follows below):
```
python train.py --content_dir ./coco2014/train2014 --style_dir ./wikiart/train
```
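For orientation, here is a minimal, runnable sketch of the kind of adversarial training step described above, using toy stand-in networks. The real repository defines its own generator, aesthetic discriminator, two-stage schedule, and losses, so everything below is an illustrative assumption rather than the actual training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks so the loop runs end to end; the real generator
# is an encoder + AesSA + decoder and the real discriminator is trained
# to recognize artist-created paintings.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # toy generator
D = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))   # toy aesthetic discriminator
g_opt = torch.optim.Adam(G.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

content = torch.randn(4, 3, 64, 64)  # stand-in content batch
style = torch.randn(4, 3, 64, 64)    # stand-in style (painting) batch

# Discriminator step: real paintings score high, stylized outputs score low.
fake = G(content).detach()
real_logits, fake_logits = D(style), D(fake)
d_loss = bce(real_logits, torch.ones_like(real_logits)) \
       + bce(fake_logits, torch.zeros_like(fake_logits))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: a reconstruction-style loss stands in for the content/style
# losses, plus an adversarial aesthetic term from the discriminator.
fake = G(content)
g_loss = F.mse_loss(fake, content) \
       + 0.1 * bce(D(fake), torch.ones_like(D(fake)))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```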
Content-style trade-off:

Use `--alpha` to control the degree of stylization:
```
python test.py --content inputs/content/1.jpg --style inputs/style/1.jpg --alpha 0.5
```
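This kind of trade-off is commonly implemented as a linear interpolation between the content features and the stylized features before decoding; the helper below illustrates the idea under that assumption and is not the repository's actual code.

```python
import torch

def blend_features(content_feat: torch.Tensor,
                   stylized_feat: torch.Tensor,
                   alpha: float) -> torch.Tensor:
    """Linear content-style trade-off: alpha=0 keeps the content features,
    alpha=1 keeps the fully stylized features."""
    return alpha * stylized_feat + (1.0 - alpha) * content_feat
```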
Style interpolation:

Pass multiple style images and per-style weights to interpolate between styles:
```
python test.py --content inputs/content/1.jpg --style inputs/style/30.jpg,inputs/style/36.jpg --style_interpolation_weights 0.5,0.5
```
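One common mechanism for style interpolation (an assumption here, not necessarily the repository's exact one) is a weighted sum of the feature maps stylized against each style image:

```python
import torch

def interpolate_styles(stylized_feats: list, weights: list) -> torch.Tensor:
    """Weighted sum of per-style stylized feature maps; the weights are
    normalized so they sum to one."""
    total = float(sum(weights))
    return sum((w / total) * f for w, f in zip(weights, stylized_feats))
```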
Color-preserved style transfer:

Add `--preserve_color` to keep the content image's colors in the stylized output:
```
python test.py --content inputs/content/1.jpg --style inputs/style/1.jpg --preserve_color
```
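One standard way to preserve color is luminance-only transfer: keep the stylized luminance and reuse the content chroma. The repository's `--preserve_color` may be implemented differently (e.g., by re-coloring the style image before transfer), so the helper below is only a sketch of the general idea.

```python
from PIL import Image

def preserve_color(content_path: str, stylized_path: str, out_path: str) -> None:
    """Combine the stylized luminance (Y) with the content chroma (Cb, Cr)."""
    content = Image.open(content_path).convert("YCbCr")
    stylized = Image.open(stylized_path).convert("YCbCr").resize(content.size)
    y, _, _ = stylized.split()
    _, cb, cr = content.split()
    Image.merge("YCbCr", (y, cb, cr)).convert("RGB").save(out_path)
```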
Citation:

If you find the ideas and code useful for your research, please cite the paper:
```
@inproceedings{wang2022aesust,
  title={AesUST: towards aesthetic-enhanced universal style transfer},
  author={Wang, Zhizhong and Zhang, Zhanjie and Zhao, Lei and Zuo, Zhiwen and Li, Ailin and Xing, Wei and Lu, Dongming},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  pages={1095--1106},
  year={2022}
}
```
Acknowledgments:

We refer to some code from SANet and IEContraAST. Great thanks to them!