This is a project of the XJTLU 2017 Summer Undergraduate Research Fellowship. It aims at designing a generative adversarial network to transfer the style of a style image onto a content image. Related literature can be found on the Wiki.
Neural Style Transfer is one of the cutting-edge topics in the deep learning field. It aims at transferring the style of one image onto a content image, which may be a sketch or a colored image.
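As background, classic neural style transfer (Gatys et al.) represents style as channel-wise feature correlations (Gram matrices) computed on pretrained ConvNet features. A minimal PyTorch sketch of that style loss follows; it is an illustration of the general technique, not this repository's exact code.

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise feature correlations used as a style representation."""
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    # Normalize so the loss scale is independent of feature-map size.
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(gen_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Distance between Gram matrices of generated and style features."""
    return F.mse_loss(gram_matrix(gen_feats), gram_matrix(style_feats))
```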
Our goal is to implement neural style transfer using CycleGAN. We also want to take one step further by using CAN, which can generate images by itself after sufficient training.
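The core idea behind CycleGAN is the cycle-consistency loss: translating an image to the other domain and back should reconstruct the original. A minimal sketch is below, assuming PyTorch; `gen_ab` and `gen_ba` stand for the two generators (hypothetical names, not this repository's identifiers).

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(real_a: torch.Tensor, real_b: torch.Tensor,
                           gen_ab: nn.Module, gen_ba: nn.Module,
                           lam: float = 10.0) -> torch.Tensor:
    """CycleGAN cycle loss: A -> B -> A and B -> A -> B should
    each reconstruct the input image (L1 distance)."""
    l1 = nn.L1Loss()
    rec_a = gen_ba(gen_ab(real_a))  # A -> B -> A
    rec_b = gen_ab(gen_ba(real_b))  # B -> A -> B
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))
```

This term is added to the usual adversarial losses of both generators; the weight `lam = 10.0` follows the original CycleGAN paper's default.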
An auxiliary conditional GAN is used, with a residual U-Net and two guided decoders on the generator side. The discriminator consists of deConvNets, and the objective minimizes the difference between the generated image and the ground truth.
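One common way to combine an adversarial term with a ground-truth reconstruction term is the pix2pix-style conditional-GAN objective. The sketch below shows that combination under the assumption of a PyTorch setup; the `lambda_l1` weight and function names are illustrative, not taken from this repository.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(disc_logits_fake: torch.Tensor,
                   fake_img: torch.Tensor,
                   real_img: torch.Tensor,
                   lambda_l1: float = 100.0) -> torch.Tensor:
    """Conditional-GAN generator objective: fool the discriminator
    while keeping the output close to the ground-truth image."""
    adv = bce(disc_logits_fake, torch.ones_like(disc_logits_fake))
    return adv + lambda_l1 * l1(fake_img, real_img)
```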
This file is essential for the network; the download link is available here. VGG19 is a pretrained very deep ConvNet that can be used directly. Similar pretrained models such as ResNet and VGG16 will also be tested.
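For reference, a pretrained VGG19 can also be loaded through torchvision and frozen as a fixed feature extractor. A minimal sketch follows; the chosen layer indices are assumptions for illustration, not the layers this project necessarily uses.

```python
import torch
import torchvision.models as models

# Load ImageNet-pretrained VGG19 and freeze it: we only read features.
vgg19 = models.vgg19(pretrained=True).features.eval()
for p in vgg19.parameters():
    p.requires_grad_(False)

def extract_features(x: torch.Tensor, layer_ids=(3, 8, 17, 26)):
    """Collect activations at a few conv/ReLU layers (indices assumed)."""
    feats = []
    for i, layer in enumerate(vgg19):
        x = layer(x)
        if i in layer_ids:
            feats.append(x)
    return feats
```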
A powerful yet concise architecture; the original paper can be viewed here.
Completed image results are listed below. These results were obtained from cGAN and CycleGAN with a small number of training epochs; they should improve with sufficient training.
For the poster, please see this link. The code will be open-sourced soon.