Similar to Neural Style

ProGamerGov edited this page Jul 24, 2016 · 43 revisions

Artistic-Videos

  • This is the Torch implementation of the paper "Artistic style transfer for videos", based on the neural-style code by Justin Johnson: https://github.com/jcjohnson/neural-style .

  • It's the same as Neural-Style but with support for creating video instead of just single images.


CNNMRF

  • Code for the paper "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis".

  • Seems to work well for using real images as styles.

  • Works similarly to Neural-Style in how you input your commands. It requires a style image and a content image, just like Neural-Style.


Neural-Doodle

  • Turn your two-bit doodles into fine artworks with deep neural networks, generate seamless textures from photos, transfer style from one image to another, perform example-based upscaling, but wait... there's more! (An implementation of Semantic Style Transfer.)

Deepdream

  • Used to create those trippy, dream-like images.

Deep_dream

  • crowsonkb's implementation of Google's Deepdream

  • Capable of creating high-resolution images in just a matter of minutes, thanks to tiling.

  • Tiling allows even GPUs and CPUs with limited resources to create high-resolution images.
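The tiling idea can be sketched as follows. This is a minimal illustration, not Deep_dream's actual API: `process_in_tiles` and the toy per-tile function are hypothetical names, and a real DeepDream tiler would also jitter and blend tile borders to hide seams.

```python
import numpy as np

def process_in_tiles(image, tile_size, fn):
    """Apply fn to fixed-size tiles so peak memory depends on the
    tile size, not the full image (hypothetical helper)."""
    h, w = image.shape[:2]
    out = np.empty_like(image)
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            tile = image[y:y + tile_size, x:x + tile_size]
            # In DeepDream this would be a network forward/backward pass;
            # here a trivial brighten step stands in for it.
            out[y:y + tile_size, x:x + tile_size] = fn(tile)
    return out

big = np.zeros((1024, 1024, 3), dtype=np.float32)
result = process_in_tiles(big, 256, lambda t: t + 1.0)
```

Because only one 256x256 tile is held in working memory at a time, the full 1024x1024 image never has to fit through the network at once.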


Image Analogies

  • This is basically an implementation of the "Image Analogies" paper; in this case, feature maps from VGG16 are used. The patch matching and blending are inspired by the method described in "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis". Effects similar to that paper can be achieved by turning off the analogy loss with --analogy-w=0 (or leaving it on!) and turning on the B/B' content weighting via the --b-content-w parameter. Also, instead of brute-force patch matching, the PatchMatch algorithm is used to approximate the best patch matches; brute-force matching can be re-enabled by setting --model=brute.

Neural-Style in TensorFlow

  • An implementation of neural style in TensorFlow.

  • This implementation is considerably simpler than many of the others out there, thanks to TensorFlow's nice API and automatic differentiation.

  • TensorFlow doesn't support L-BFGS (which is what the original authors used), so we use Adam. This may require a little more hyperparameter tuning to get nice results.

  • TensorFlow seems to be slower than many of the other deep learning frameworks out there. I'm sure this implementation could be improved, but it would probably take improvements in TensorFlow itself as well to get it to operate at the same speed as other implementations. As of now, it seems to be around 3x slower than implementations using Torch.
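For reference, the Adam update rule that this implementation falls back to looks like the following in plain NumPy. This is a toy sketch: the quadratic loss stands in for the real style/content loss, and the function name is illustrative, not taken from the repo.

```python
import numpy as np

def adam_minimize(grad, x, steps=1000, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Minimal Adam loop: momentum on the gradient (m) plus a running
    average of squared gradients (v), with bias correction."""
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy loss (x - 3)^2, so the gradient is 2(x - 3); the minimum is at 3.
x_min = adam_minimize(lambda x: 2.0 * (x - 3.0), np.array([0.0]))
```

Unlike L-BFGS, Adam has no line search or curvature model, which is why the learning rate (and sometimes the moment decay rates) may need tuning per image.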


fzliu's Style-Transfer

  • Neural net operations are handled by Caffe, while loss minimization and other miscellaneous matrix operations are performed using NumPy and SciPy. L-BFGS is used for minimization.

  • Can use GoogLeNet models?
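The SciPy-driven L-BFGS pattern described above can be sketched like this. The quadratic loss is a toy stand-in for the VGG-based losses, and none of the names below come from fzliu's actual code.

```python
import numpy as np
from scipy.optimize import minimize

def loss_and_grad(x):
    """Toy objective: in the real setup this would run a Caffe
    forward/backward pass and return the style+content loss and its
    gradient with respect to the flattened image array."""
    loss = np.sum((x - 2.0) ** 2)
    grad = 2.0 * (x - 2.0)
    return loss, grad

x0 = np.zeros(16)  # stands in for the flattened initial image
res = minimize(loss_and_grad, x0, method="L-BFGS-B", jac=True)
```

`jac=True` tells SciPy that the objective returns `(loss, gradient)` in one call, which matters when each evaluation is an expensive network pass.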


Neural-Art Mini: Using SqueezeNet