This is a modified PyTorch implementation of Universal Style Transfer via Feature Transforms.
It makes two modifications:
- A slightly improved parametrization, introduced in Unsupervised Learning of Artistic Styles with Archetypal Style Analysis, to control the trade-off between detail preservation and stylization strength. This is most useful when restyling an image that is already an artwork, but it may also help preserve detail in photos. For the original parametrization, see @sunshineatnoon's repository (or go back through the git log) until I manage to clean this up and have both neatly side by side. The WCT core that both parametrizations build on is sketched after this list.
- Improved feature transforms, as described by Lu et al. in A Closed-form Solution to Universal Style Transfer. These notably lead to better contour preservation; see the second sketch below.
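For orientation, here is a minimal sketch of the plain whitening-coloring transform (WCT) from the original paper, operating on flattened VGG features. The function name, the default `alpha`, and the simple `alpha` blend are illustrative; they follow the original paper's parametrization, not this repository's modified one.

```python
import torch

def wct_transform(f_c, f_s, alpha=0.6, eps=1e-5):
    """Plain whitening-coloring transform on flattened encoder features.

    f_c, f_s: tensors of shape (C, N) -- channels x spatial positions.
    alpha blends the stylized features back with the content features
    (the original parametrization; the archetypal-style one replaces it).
    """
    mu_c = f_c.mean(dim=1, keepdim=True)
    mu_s = f_s.mean(dim=1, keepdim=True)
    fc = f_c - mu_c
    fs = f_s - mu_s

    # Channel covariances of content and style features
    cov_c = fc @ fc.t() / (fc.shape[1] - 1)
    cov_s = fs @ fs.t() / (fs.shape[1] - 1)

    # Covariances are symmetric, so eigh applies; clamp tiny eigenvalues
    lam_c, E_c = torch.linalg.eigh(cov_c)
    lam_s, E_s = torch.linalg.eigh(cov_s)
    lam_c = lam_c.clamp_min(eps)
    lam_s = lam_s.clamp_min(eps)

    # Whiten the content features, then color with style statistics
    whiten = E_c @ torch.diag(lam_c.rsqrt()) @ E_c.t()
    color = E_s @ torch.diag(lam_s.sqrt()) @ E_s.t()
    f_cs = color @ (whiten @ fc) + mu_s

    # Original blending between stylized and content features
    return alpha * f_cs + (1 - alpha) * f_c
```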
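The closed-form transform of Lu et al. replaces the separate whitening and coloring matrices with a single matrix T satisfying T Σ_c Tᵀ = Σ_s. The sketch below assumes their solution coincides with the optimal-transport map between zero-mean Gaussians, T = Σ_c^{-1/2}(Σ_c^{1/2} Σ_s Σ_c^{1/2})^{1/2} Σ_c^{-1/2}; the function names are illustrative, not from this repository.

```python
import torch

def _sym_sqrt(m, eps=1e-5):
    # Square root of a symmetric PSD matrix via eigendecomposition
    lam, E = torch.linalg.eigh(m)
    return E @ torch.diag(lam.clamp_min(eps).sqrt()) @ E.t()

def closed_form_transform(cov_c, cov_s, eps=1e-5):
    # T = cov_c^{-1/2} (cov_c^{1/2} cov_s cov_c^{1/2})^{1/2} cov_c^{-1/2}
    lam_c, E_c = torch.linalg.eigh(cov_c)
    lam_c = lam_c.clamp_min(eps)
    c_half = E_c @ torch.diag(lam_c.sqrt()) @ E_c.t()
    c_half_inv = E_c @ torch.diag(lam_c.rsqrt()) @ E_c.t()
    return c_half_inv @ _sym_sqrt(c_half @ cov_s @ c_half) @ c_half_inv
```

Applied as `T @ (f_c - mu_c) + mu_s`, this maps the content feature distribution onto the style one while perturbing each feature as little as possible, which is one way to understand the improved contour preservation.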
The official Torch implementation can be found here and the TensorFlow implementation here.

Prerequisites:

- PyTorch
- torchvision
- scikit-image
- CUDA + CuDNN
Simply put content and style image pairs in images/content and images/style, respectively. Note that corresponding content and style images must have the same file names, as in the layout sketched below.
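For example (the file names here are purely hypothetical; any matching pair of names works):

```
images/
├── content/
│   └── portrait.jpg
└── style/
    └── portrait.jpg
```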
Then run:

```
python WCT.py --cuda
```
Many thanks to the author Yijun Li for his kind help.
Li Y., Fang C., Yang J., et al. Universal Style Transfer via Feature Transforms. arXiv preprint arXiv:1705.08086, 2017.