This is the repository for our Machine Learning Term project on Computer Generated Art. We have worked on the Artistic Rendering of Images.
## Group Members
- Ashrujit Ghoshal (14CS10060)
- Sayan Ghosh (14CS10061)
- Arundhati Banerjee (14CS30043)
- Sayan Mandal (14CS30032)
- Mousam Roy (14CS30019)
- Sourav Pal (14CS10062)
- Sohan Patro (14CS30044)
- Projjal Chanda (14CS10057)
- Pradeep Dogga (14CS10013)
- Aniket Suri (14CS10004)
The folder Artistic Rendering of 2D images using CNN contains an implementation of the paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge.
A folder containing test cases and outputs is included inside it.
The folder NaiveImplementation is an attempt to implement the paper Separating Style and Content by Joshua B. Tenenbaum and William T. Freeman.
The necessary training data and test set are included along with it.
The folder Doodles contains an implementation of the paper Semantic Style Transfer and Turning Two-Bit Doodles into Fine Artworks by Alex J. Champandard.
A folder containing test outputs is included inside it.
The folder Artistic rendering of videos is an attempt to implement the paper Artistic style transfer for videos by Manuel Ruder, Alexey Dosovitskiy, and Thomas Brox.
A folder containing a test case is included inside it.
Using a bilinear model to separate the style and content of an image and then applying the style of one image to the content of another (by the principle of extrapolation).
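As a rough illustration of this idea (not the actual code in NaiveImplementation), the sketch below fits Tenenbaum and Freeman's asymmetric bilinear model y_sc = A_s b_c with an SVD and then re-renders one image's content in another image's style. All shapes, the random training tensor, and the function names are illustrative.

```python
import numpy as np

S, C, K, J = 3, 5, 64 * 64, 3   # styles, contents, pixels, content dimensions (illustrative)

def fit_asymmetric_bilinear(Y, J):
    """Fit the asymmetric bilinear model y_sc = A_s @ b_c via SVD.
    Y has shape (styles, contents, pixels)."""
    S, C, K = Y.shape
    # Stack observations: rows indexed by (style, pixel), columns by content.
    stacked = Y.transpose(0, 2, 1).reshape(S * K, C)
    U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
    A = (U[:, :J] * s[:J]).reshape(S, K, J)   # one K x J style map per style
    B = Vt[:J, :]                             # J x C matrix of content vectors
    return A, B

def transfer_style(A, y_new, source_style, target_style):
    """Estimate the content vector of a new image seen in source_style
    (least squares), then re-render it with target_style's map."""
    b_new, *_ = np.linalg.lstsq(A[source_style], y_new, rcond=None)
    return A[target_style] @ b_new

Y = np.random.rand(S, C, K)                   # placeholder training tensor
A, B = fit_asymmetric_bilinear(Y, J)
stylised = transfer_style(A, Y[0, 0], source_style=0, target_style=1)
```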
We have used the VGG-19 model to achieve the separation of style and content of an image.
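For reference, the core of the Gatys-style separation can be written in a few lines: content is compared through the raw activations of a deeper VGG-19 layer, while style is compared through Gram matrices of feature maps. This is only a schematic NumPy sketch with random placeholder activations, not the project's TensorFlow implementation.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between the feature maps of one
    VGG-19 layer. `features` has shape (height, width, channels)."""
    F = features.reshape(-1, features.shape[-1])   # (positions, channels)
    return F.T @ F / F.shape[0]

def style_loss(gram_generated, gram_style):
    """Mean squared difference between Gram matrices for one layer."""
    return np.mean((gram_generated - gram_style) ** 2)

def content_loss(feat_generated, feat_content):
    """Mean squared difference between activations of a deeper layer."""
    return np.mean((feat_generated - feat_content) ** 2)

# Illustrative shapes only: real activations would come from VGG-19 conv layers.
feat = np.random.rand(32, 32, 256)
print(gram_matrix(feat).shape)   # (256, 256)
```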
Using a deep neural network to borrow the skills of real artists and turn two-bit doodles into masterpieces! (Based on the Neural Patches algorithm of Li, 2016.)
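The matching step at the heart of the Neural Patches approach can be sketched as a nearest-neighbour search over patches by cosine similarity; in the real algorithm the patches come from VGG feature maps concatenated with the semantic (doodle) channels. The sketch below uses random placeholder patches and a hypothetical function name.

```python
import numpy as np

def best_matching_patches(content_patches, style_patches):
    """For each content/doodle patch, return the index of the most similar
    style patch under cosine similarity. Patches are flattened vectors."""
    c = content_patches / np.linalg.norm(content_patches, axis=1, keepdims=True)
    s = style_patches / np.linalg.norm(style_patches, axis=1, keepdims=True)
    scores = c @ s.T                      # all-pairs cosine similarity
    return np.argmax(scores, axis=1)      # best style patch per content patch

# Illustrative shapes only: 100 content patches, 200 style patches, 75-dim each.
idx = best_matching_patches(np.random.rand(100, 75), np.random.rand(200, 75))
```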
The algorithm transfers the style from one image (for example, a painting) to a whole video sequence and generates consistent and stable stylized video sequences.
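The main addition over per-frame style transfer is a temporal consistency term: the current stylized frame is penalized for deviating from the previous stylized frame warped along the optical flow, except where the flow is unreliable (disocclusions, motion boundaries). A schematic version with placeholder arrays and a hypothetical function name:

```python
import numpy as np

def temporal_loss(frame_t, prev_stylised_warped, occlusion_mask):
    """Penalise deviation from the previous stylised frame warped forward
    along the optical flow; occlusion_mask is 1 where the flow is reliable,
    0 where it is not."""
    diff = (frame_t - prev_stylised_warped) ** 2
    weighted = occlusion_mask[..., None] * diff   # per-pixel mask, broadcast over RGB
    return weighted.sum() / frame_t.size

# Illustrative shapes only: H x W x 3 frames, H x W mask.
H, W = 4, 4
loss = temporal_loss(np.random.rand(H, W, 3),
                     np.random.rand(H, W, 3),
                     np.ones((H, W)))
```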
### For the naive implementation
python BilinearClassifier.py
python neural_style.py --content <content file> --styles <style file> --output <output file>
(run python neural_style.py --help to see a list of all options)
python3 doodle.py --style <style file> --content <content file> --output <output file> --device=gpu0 --phases=4 --iterations=80
th artistic_video.lua --style_image <style file> --content_pattern <content frames>
- TensorFlow
- SciPy
- Pillow
- NumPy
- Pre-trained VGG network (MD5 8ee3263992981a1d26e73b3ca028a123)
- Lasagne (for neural doodle)
- Lua (for artistic videos)
- Torch (for artistic videos)
Due to the huge size of the VGG network, it could not be pushed to GitHub. It can be downloaded [here](http://www.vlfeat.org/matconvnet/models/imagenet-vgg-verydeep-19.mat).
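Once downloaded, the file can be sanity-checked with SciPy. The exact nesting of the MatConvNet struct is easy to get wrong, so treat the indexing below as illustrative rather than as the project's loader:

```python
import scipy.io

# Quick check that the downloaded weights file is readable.
# The struct layout shown here is illustrative; the project's own loading
# code is the authoritative reference.
vgg = scipy.io.loadmat('imagenet-vgg-verydeep-19.mat')
layers = vgg['layers'][0]
print('number of layers:', len(layers))   # the VGG-19 .mat typically contains 43 layer entries
```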