Text to Image Synthesis using Generative Adversarial Networks

This is the official code for Text to Image Synthesis Using Generative Adversarial Networks. Please be aware that the code is experimental and may require some small tweaks.

If you find this research useful, please use the following citation:

@article{Bodnar2018TextTI,
  title={Text to Image Synthesis Using Generative Adversarial Networks},
  author={Cristian Bodnar},
  journal={CoRR},
  year={2018},
  volume={abs/1805.00676}
}

Images generated by the Conditional Wasserstein GAN

As can be seen, the generated images do not suffer from mode collapse.

[Image: sample from the flowers dataset]

Illustration of the Conditional Wasserstein Progressive Growing GAN on the flowers dataset:

[Image: sample from the flowers dataset]

Samples from the birds dataset

[Image: sample from the birds dataset]

How to download the dataset

  1. Set your PYTHONPATH to point to the root directory of the project.
  2. Download the preprocessed flowers text descriptions and extract them into the /data directory.
  3. Download the images from Oxford102 and extract them into /data/flowers/jpg. Alternatively, run python prep_incep_img/download_flowers_dataset.py from the root directory of the project.
  4. Run python prep_incep_img/preprocess_flowers.py from the root directory of the project. The commands are summarized in the sketch below.
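
A condensed version of these steps as a shell session, assuming a Unix-like shell, that the /data paths are relative to the project root, and that the text-description archive name below is a placeholder for the file you actually download:

```bash
# Run everything from the root directory of the project.
export PYTHONPATH=$(pwd)                              # step 1

# Step 2: extract the preprocessed text descriptions into data/
# (replace the archive name with the file you downloaded).
# tar -xzf flowers_text_descriptions.tar.gz -C data/

# Step 3: place the Oxford102 images in data/flowers/jpg manually,
# or let the helper script download them:
python prep_incep_img/download_flowers_dataset.py

# Step 4: preprocess the images.
python prep_incep_img/preprocess_flowers.py
```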

Requirements

  • python 3.6
  • tensorflow 1.4
  • scipy
  • numpy
  • pillow
  • easydict
  • imageio
  • pyyaml
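
For reference, the dependencies above might be installed in one step as sketched below; version pins other than Python 3.6 and TensorFlow 1.4 are not specified by the repository, and an active Python 3.6 environment is assumed:

```bash
# Assumes a Python 3.6 virtualenv/conda environment is already active.
pip install tensorflow==1.4.0 scipy numpy pillow easydict imageio pyyaml
```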
