Generate images based on text descriptions, using Generative Adversarial Networks.
This is a PyTorch implementation of an end-to-end text-to-image synthesis application.
Following [1], we train a Char CNN-RNN to extract useful text embeddings from (image, caption) pairs.
Following [2], we use these embeddings to condition a DCGAN that generates new, previously unseen images from the same domain, given a custom text description written by a human.
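Concretely, the conditioning amounts to projecting the sentence embedding to a compact vector and concatenating it with the noise vector before the DCGAN upsampling stack, as in [2]. The sketch below is illustrative only: the layer sizes, names and the 64x64 output resolution are assumptions, not the exact architecture of this repo.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Sketch of a text-conditioned DCGAN generator (dims and names are illustrative)."""
    def __init__(self, noise_dim=100, embed_dim=1024, proj_dim=128, ngf=64):
        super().__init__()
        # Project the sentence embedding to a small conditioning vector.
        self.project = nn.Sequential(
            nn.Linear(embed_dim, proj_dim),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Standard DCGAN upsampling stack fed with the (noise + condition) latent code.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + proj_dim, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # 64x64 RGB image in [-1, 1]
        )

    def forward(self, noise, text_embedding):
        cond = self.project(text_embedding)                 # (B, proj_dim)
        latent = torch.cat([noise, cond], dim=1)             # (B, noise_dim + proj_dim)
        return self.net(latent.unsqueeze(-1).unsqueeze(-1))  # (B, 3, 64, 64)
```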
- Clone the repo: `git clone https://github.com/bojito/text-to-image-synthesis`
- Install the requirements: `pip install -r requirements.txt`
- Download the datasets per case, as described in the `data` folder
Based on the method presented by Scott Reed et al. [1], we use the Char CNN-RNN architecture to build a network that trains on (image, caption) pairs and outputs an embedding vector that captures their underlying relationship.
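The objective in [1] is a symmetric structured joint embedding loss: a matching (image, caption) pair should score higher than mismatched pairs from other classes. A simplified sketch follows; the function and variable names are ours, and masking same-class pairs is an approximation of the paper's 0-1 structured loss.

```python
import torch

def joint_embedding_loss(img_feats, txt_feats, labels, margin=1.0):
    """Symmetric ranking loss between image features and Char CNN-RNN text embeddings.

    img_feats: (B, D) image encoder features
    txt_feats: (B, D) text embeddings
    labels:    (B,)   class labels, used to skip same-class "negatives"
    """
    scores = img_feats @ txt_feats.t()                    # (B, B) compatibility matrix
    diag = scores.diag().unsqueeze(1)                     # (B, 1) matching-pair scores
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Hinge: a mismatched pair should score at least `margin` below its matching pair.
    cost_img = (margin + scores - diag).clamp(min=0)      # rank captions for each image
    cost_txt = (margin + scores - diag.t()).clamp(min=0)  # rank images for each caption
    # Do not penalise pairs that belong to the same class (including the diagonal).
    cost_img = cost_img.masked_fill(same_class, 0.0)
    cost_txt = cost_txt.masked_fill(same_class, 0.0)
    return cost_img.mean() + cost_txt.mean()
```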
First, download the original weights as presented in Reed, ICML 2016.
The weights are saved as a Torch (Lua) file, so we use the code provided here to parse them into the PyTorch model.
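In essence, the conversion reads the Lua Torch checkpoint with the `torchfile` package and copies each tensor into the matching PyTorch parameter. A minimal sketch, where the file name and layer names are placeholders (the actual mapping lives in the conversion code):

```python
import torch
import torchfile  # pip install torchfile

# Load the Lua Torch checkpoint (path is illustrative).
lua_checkpoint = torchfile.load('char_cnn_rnn_weights.t7')

# The loaded object mirrors the saved Lua tables; inspect it to locate the tensors,
# then copy each one into the corresponding PyTorch parameter, e.g.:
# pytorch_conv.weight.data.copy_(torch.from_numpy(lua_conv_weight))
```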
Based on the method presented by Scott Reed et al. [2], we use a conditional GAN, conditioned on the text embeddings.
Additions to the baseline training (sketched in the code below):
- One-sided label smoothing
- Feature matching loss
- L1 distance between generated and real images
- FID evaluation
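A rough sketch of how the first three terms can be combined. The loss weights, tensor names and the logits-based formulation are illustrative, not the repo's exact implementation.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real_logits, d_fake_logits, smooth=0.9):
    """One-sided label smoothing: real targets become 0.9 while fake targets stay at 0."""
    real_targets = torch.full_like(d_real_logits, smooth)
    fake_targets = torch.zeros_like(d_fake_logits)
    return (F.binary_cross_entropy_with_logits(d_real_logits, real_targets)
            + F.binary_cross_entropy_with_logits(d_fake_logits, fake_targets))

def generator_loss(d_fake_logits, feat_real, feat_fake, real_imgs, fake_imgs,
                   fm_weight=1.0, l1_weight=50.0):
    """Adversarial loss plus feature matching and L1 terms (weights are illustrative)."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits,
                                             torch.ones_like(d_fake_logits))
    # Feature matching: match mean discriminator features of real and generated batches.
    fm = F.mse_loss(feat_fake.mean(dim=0), feat_real.detach().mean(dim=0))
    # L1 distance between generated and real images.
    l1 = F.l1_loss(fake_imgs, real_imgs)
    return adv + fm_weight * fm + l1_weight * l1
```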
[1] Scott Reed et al., Learning Deep Representations of Fine-Grained Visual Descriptions, CVPR 2016. link
[2] Scott Reed et al., Generative Adversarial Text to Image Synthesis, ICML 2016. link
[3] Tim Salimans et al., Improved Techniques for Training GANs, NeurIPS 2016. link
- https://github.com/reedscot/cvpr2016: the authors' version of the Char CNN-RNN
- https://github.com/reedscot/icml2016: the authors' version of the conditional GAN
- https://github.com/aelnouby/Text-to-Image-Synthesis: an excellent repo on text-to-image synthesis
- https://github.com/martinduartemore/char_cnn_rnn_pytorch: an excellent repo on text embeddings