
Style Transfer Using Convolutional Neural Networks

We implement the style transfer technique from "Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2016).

The general idea is to take two images, and produce a new image that reflects the content of one but the artistic "style" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing optimization on the pixels of the image itself.
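The loss terms described above can be sketched in PyTorch. This is a minimal illustration, not the notebook's exact code: the function names (`content_loss`, `gram_matrix`, `style_loss`) and the weights are my own, and the key idea is that the generated image's pixels themselves are the parameters being optimized.

```python
import torch

def content_loss(feat, target_feat, weight=1.0):
    # Squared error between feature maps of the generated and content images.
    return weight * torch.sum((feat - target_feat) ** 2)

def gram_matrix(feat, normalize=True):
    # feat: (N, C, H, W) -> (N, C, C) matrix of channel correlations,
    # which captures "style" while discarding spatial arrangement.
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    gram = torch.bmm(f, f.transpose(1, 2))
    if normalize:
        gram = gram / (c * h * w)
    return gram

def style_loss(feat, target_gram, weight=1.0):
    # Squared error between Gram matrices of the generated and style images.
    return weight * torch.sum((gram_matrix(feat) - target_gram) ** 2)

# Optimization happens on the pixels of the image itself: the image tensor
# is the "parameter" handed to the optimizer, not the network's weights.
img = torch.rand(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)
```

In a full training loop, each step would extract features from `img`, sum the content and style losses over the chosen layers, backpropagate, and call `optimizer.step()`.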

The deep network we use as a feature extractor is SqueezeNet, a small model that has been trained on ImageNet.

For a detailed overview, please see the notebook file.
