Neural style transfer is an optimization technique used to take two images—a content image and a style reference image (such as an artwork by a famous painter)—and blend them together so the output image looks like the content image, but “painted” in the style of the style reference image.
This style transfer implementation uses the VGG-19 model, extracting activations from specific intermediate layers as high-level feature representations. Starting from an initial image (such as a copy of the content image or random noise), the image is iteratively optimized until its features match the content of the content image and the feature statistics of the style image. A sketch of the feature extraction step is shown below.
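The following is a minimal sketch of how VGG-19 can serve as a frozen feature extractor, assuming PyTorch and torchvision. The layer indices follow the commonly used Gatys et al. choices (conv1_1 through conv5_1 for style, conv4_2 for content); the actual layers used in this implementation may differ.

```python
import torch
import torchvision.models as models

# Load pretrained VGG-19 and freeze it; it is used only as a feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Indices into vgg19.features (assumed, based on the usual Gatys et al. setup):
# conv1_1, conv2_1, conv3_1, conv4_1, conv5_1 for style; conv4_2 for content.
STYLE_LAYERS = [0, 5, 10, 19, 28]
CONTENT_LAYERS = [21]

def extract_features(x, layers):
    """Run x through VGG-19 and collect activations at the given layer indices."""
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats
```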
The balance between content and style can be adjusted via the alpha and beta parameters, which weight the content loss and the style loss, respectively, in the total objective.
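Below is a hedged sketch of how alpha and beta typically enter the total loss, building on the `extract_features` helper above. The Gram matrix, the specific default values (`alpha=1.0`, `beta=1e3`), and the tensor names `content_img`/`style_img` are illustrative assumptions, not necessarily the values this implementation uses.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # Gram matrix of channel activations: captures style as feature correlations.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    content_loss = sum(
        F.mse_loss(gen_feats[i], content_feats[i]) for i in CONTENT_LAYERS
    )
    style_loss = sum(
        F.mse_loss(gram_matrix(gen_feats[i]), gram_matrix(style_feats[i]))
        for i in STYLE_LAYERS
    )
    # alpha weights content fidelity; beta weights style adherence.
    return alpha * content_loss + beta * style_loss

# Hypothetical usage: content_img and style_img are preprocessed 1x3xHxW tensors.
# generated = content_img.clone().requires_grad_(True)
# optimizer = torch.optim.Adam([generated], lr=0.01)
# content_feats = extract_features(content_img, CONTENT_LAYERS)
# style_feats = extract_features(style_img, STYLE_LAYERS)
# for step in range(300):
#     optimizer.zero_grad()
#     gen_feats = extract_features(generated, CONTENT_LAYERS + STYLE_LAYERS)
#     loss = total_loss(gen_feats, content_feats, style_feats)
#     loss.backward()
#     optimizer.step()
```

Raising alpha relative to beta preserves more of the content image's structure, while raising beta pushes the output further toward the style image's textures and colors.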