Generate Images from Captions using Generative Adversarial Networks

Text to Image Synthesis

Generate images based on text descriptions, using Generative Adversarial Networks.



Introduction

This is a PyTorch implementation of an end-to-end text-to-image synthesis application.

Based on [1], we train a char-CNN-RNN to extract useful text embeddings from (image, caption) pairs.

Based on [2], we use these embeddings to condition a DCGAN that generates new, previously unseen images of the same domain from a custom text description written by a human.

Installation

  1. Clone the repo:
     git clone https://github.com/bojito/text-to-image-synthesis
  2. Install the requirements:
     pip install -r requirements.txt
  3. Download the datasets for each case, as described in the data folder.

Text Embeddings (1_Text_Embed)

Based on the method presented by Scott Reed et al. [1], we use the char-CNN-RNN architecture to build a network that trains on (image, caption) pairs and outputs an embedding vector capturing their underlying relationship.
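A minimal sketch of such an encoder may help fix ideas. The layer sizes, kernel sizes, and caption length below are illustrative assumptions, not the exact configuration used in the repo: captions are one-hot encoded over a character alphabet, passed through 1-D convolutions, then a recurrent layer whose final state is projected to the embedding.

```python
import torch
import torch.nn as nn

class CharCnnRnn(nn.Module):
    """Sketch of a char-CNN-RNN text encoder (hypothetical sizes)."""

    def __init__(self, alphabet_size=70, embed_dim=1024):
        super().__init__()
        # Convolutions over the character dimension, with temporal pooling.
        self.conv = nn.Sequential(
            nn.Conv1d(alphabet_size, 256, kernel_size=4), nn.ReLU(),
            nn.MaxPool1d(3),
            nn.Conv1d(256, 256, kernel_size=4), nn.ReLU(),
            nn.MaxPool1d(3),
        )
        # Recurrent layer over the pooled sequence; its final hidden
        # state summarizes the caption.
        self.rnn = nn.GRU(256, 512, batch_first=True)
        self.proj = nn.Linear(512, embed_dim)

    def forward(self, x):                     # x: (batch, alphabet, seq_len)
        h = self.conv(x)                      # (batch, 256, reduced_len)
        _, last = self.rnn(h.transpose(1, 2)) # last: (1, batch, 512)
        return self.proj(last.squeeze(0))     # (batch, embed_dim)

enc = CharCnnRnn()
emb = enc(torch.zeros(2, 70, 201))  # two 201-character captions
print(emb.shape)                    # torch.Size([2, 1024])
```

In training, the embedding is optimized so that captions score highly against their own image's features and poorly against mismatched ones (the structured joint embedding of [1]).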

First, download the original char-CNN-RNN weights released by Reed et al. (ICML 2016).

The weights are saved as a Torch7 file, so we use the code provided here to parse the weights into the PyTorch model.
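The gist of such a conversion, sketched under the assumption that the third-party `torchfile` package is used to deserialize the Torch7 checkpoint into numpy arrays (the file path and helper below are hypothetical, not the repo's actual code):

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical: read the Torch7 checkpoint into nested numpy arrays, e.g.
#   import torchfile
#   t7 = torchfile.load('path/to/char_cnn_rnn.t7')
# Each Torch layer then exposes numpy weight/bias arrays that are copied
# into the matching PyTorch parameters:

def copy_weights(layer: nn.Module, weight: np.ndarray, bias: np.ndarray):
    """Copy one Torch7 layer's numpy weights into a PyTorch layer."""
    with torch.no_grad():
        layer.weight.copy_(torch.from_numpy(weight))
        layer.bias.copy_(torch.from_numpy(bias))

# Demo on a toy layer: all-ones weights, zero bias.
fc = nn.Linear(3, 2)
copy_weights(fc, np.ones((2, 3), dtype=np.float32),
             np.zeros(2, dtype=np.float32))
print(fc(torch.ones(1, 3)))  # each output is 1+1+1 = 3.0
```

The main pitfall in practice is matching layer order and weight layout between the Torch7 module tree and the PyTorch `state_dict`.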

Text to Image (2_Text_to_Image)

Based on the method presented by Scott Reed et al. [2], we use a conditional GAN, conditioned on the text embeddings.
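Concretely, in the architecture of [2] the sentence embedding is compressed, concatenated with the noise vector, and upsampled to an image. A minimal generator sketch, with layer sizes and the 64x64 output resolution as assumptions:

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Sketch of a text-conditioned DCGAN generator (sizes assumed)."""

    def __init__(self, embed_dim=1024, proj_dim=128, z_dim=100):
        super().__init__()
        # Compress the 1024-d sentence embedding to a small conditioning code.
        self.project = nn.Sequential(nn.Linear(embed_dim, proj_dim),
                                     nn.LeakyReLU(0.2))
        # Standard DCGAN upsampling stack: 1x1 -> 64x64.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + proj_dim, 512, 4, 1, 0),
            nn.BatchNorm2d(512), nn.ReLU(),
            nn.ConvTranspose2d(512, 256, 4, 2, 1),
            nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, text_emb):
        cond = self.project(text_emb)                    # (batch, 128)
        x = torch.cat([z, cond], dim=1)[..., None, None] # (batch, 228, 1, 1)
        return self.net(x)                               # (batch, 3, 64, 64)

g = CondGenerator()
img = g(torch.randn(2, 100), torch.randn(2, 1024))
print(img.shape)  # torch.Size([2, 3, 64, 64])
```

The discriminator mirrors this: it downsamples the image and concatenates the projected text embedding with the final spatial features, so that it can reject images that are realistic but mismatched to the caption.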

Added on top of the baseline model:

  • One-sided label smoothing [3]
  • Feature matching loss [3]
  • L1 distance between generated and real images
  • FID evaluation
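The first three additions can be sketched as loss terms. This is an illustrative composition, with the weighting coefficients and variable names as assumptions rather than the repo's exact values:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real_logits, d_fake_logits, smooth=0.9):
    """One-sided label smoothing: real targets are 0.9 instead of 1.0."""
    real = F.binary_cross_entropy_with_logits(
        d_real_logits, torch.full_like(d_real_logits, smooth))
    fake = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

def generator_loss(d_fake_logits, feat_real, feat_fake,
                   real_imgs, fake_imgs, lam_fm=1.0, lam_l1=50.0):
    # Standard non-saturating adversarial term.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Feature matching [3]: match mean discriminator features on a batch.
    fm = F.mse_loss(feat_fake.mean(0), feat_real.mean(0).detach())
    # L1 distance between generated and real images.
    l1 = F.l1_loss(fake_imgs, real_imgs)
    return adv + lam_fm * fm + lam_l1 * l1

x = torch.rand(2, 3, 8, 8)
print(discriminator_loss(torch.zeros(4, 1), torch.zeros(4, 1)))
print(generator_loss(torch.zeros(2, 1),
                     torch.randn(2, 16), torch.randn(2, 16), x, x))
```

FID is computed separately at evaluation time by comparing Inception feature statistics of generated and real images; it is a metric, not a training loss.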

Results


References

[1] S. Reed, Z. Akata, H. Lee, B. Schiele. Learning Deep Representations of Fine-Grained Visual Descriptions. CVPR 2016.

[2] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, H. Lee. Generative Adversarial Text to Image Synthesis. ICML 2016.

[3] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen. Improved Techniques for Training GANs. NeurIPS 2016.
