Neural Style Transfer is a technique introduced in the paper A Neural Algorithm of Artistic Style, which describes an artificial system, based on a deep neural network, that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine the content and style of arbitrary images, providing a neural algorithm for creating artistic imagery. In simple terms, Neural Style Transfer is a computer vision technique for stylization: it takes two images, a content image and a style reference image, and blends them so that the output retains the core elements of the content image while appearing to be painted in the style of the style reference image. The stylization depends heavily on the CNN under study and is more customizable than CycleGAN.
This project's main focus is to take advantage of the customization this technique provides to generate an image that looks as though it were painted by a famous artist.
Note: This project is partially inspired by TensorFlow's Neural Style Transfer documentation and an assignment in the TensorFlow: Advanced Techniques Specialization on Coursera.

The main steps are:
- Importing the Data (content and style images).
- Choosing a model architecture to generate stylized images.
- Preprocessing the Data.
- Exploring the Data (optional, included for the audience's benefit).
- Creating custom loss functions.
- Utilizing TensorFlow's Gradient Tape to update the stylized image.
- Using the Adam optimizer instead of L-BFGS (it also works well).
- Saving the results in .gif format.
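To make the loss step above concrete, here is a minimal NumPy sketch of the two losses that drive style transfer: a content loss comparing raw CNN activations, and a style loss comparing Gram matrices of those activations. The function names and the weights `alpha` and `beta` are illustrative, not the project's actual code, and real implementations compute these over several CNN layers in TensorFlow.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of layer activations.

    features: array of shape (H*W, C) — one CNN layer's activations
    flattened over spatial positions. The Gram matrix captures which
    channels co-activate, which is what encodes 'style'.
    """
    positions = features.shape[0]
    return features.T @ features / positions

def style_content_loss(content_feats, style_feats, generated_feats,
                       alpha=1e4, beta=1e-2):
    """Weighted sum of content and style losses (illustrative weights).

    Content loss: mean squared difference of raw activations, so the
    generated image keeps the content image's structure.
    Style loss: mean squared difference of Gram matrices, so the
    generated image matches the style image's texture statistics.
    """
    content_loss = np.mean((generated_feats - content_feats) ** 2)
    style_loss = np.mean(
        (gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2
    )
    return alpha * content_loss + beta * style_loss
```

In the training loop, this scalar is what Gradient Tape differentiates with respect to the generated image's pixels, and Adam then updates those pixels directly.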
Let's walk through one example to show how neural style transfer actually works.