Generative Adversarial Networks (GANs) are a deep learning architecture that generates realistic synthetic data modeled on a real data set.
The basic setup of a GAN consists of two networks. One, known as the discriminator, tries to distinguish between real and generated images. The other, known as the generator, produces images with the goal of fooling the discriminator.
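A minimal sketch of this two-network setup, assuming PyTorch; the layer sizes, latent dimension, and flattened 28x28 image shape are illustrative choices, not part of any specific paper:

```python
import torch
import torch.nn as nn

LATENT_DIM = 100   # illustrative size of the generator's noise input
IMG_DIM = 28 * 28  # illustrative flattened image size (e.g. MNIST)

# Generator: maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_DIM),
    nn.Tanh(),  # outputs in [-1, 1], matching normalized real images
)

# Discriminator: maps an image to a probability that it is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)

noise = torch.randn(16, LATENT_DIM)     # batch of random noise vectors
fake_images = generator(noise)          # the generator tries to fool...
real_prob = discriminator(fake_images)  # ...the discriminator's judgment
```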
Wasserstein GANs use the Wasserstein distance as the optimization metric between the real and generated distributions. This makes the GAN more stable during training, improves the diversity of the generated images, and reduces sensitivity to hyperparameters.
A reason for these benefits is that the Wasserstein distance is continuous and remains well-defined even when the two distributions have disjoint supports (unlike the KL or JS divergences, among others). This means we can get a meaningful gradient even when the real and generated distributions are completely different.
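To make this concrete, here is a standard worked example with two point masses, $P = \delta_0$ and $Q = \delta_\theta$, on the real line:

```latex
JS(P, Q) = \begin{cases} 0 & \theta = 0 \\ \log 2 & \theta \neq 0 \end{cases}
\qquad
KL(P \,\|\, Q) = \begin{cases} 0 & \theta = 0 \\ +\infty & \theta \neq 0 \end{cases}
\qquad
W(P, Q) = |\theta|
```

The KL divergence blows up and the JS divergence is constant in $\theta$ (so its gradient is zero almost everywhere), while the Wasserstein distance $|\theta|$ shrinks smoothly as $Q$ moves toward $P$, giving the generator a useful gradient at every point.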
The loss is calculated as the difference between the critic's mean output on real data and its mean output on generated data; this difference serves as an estimate of the Wasserstein distance between the two distributions.
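A minimal sketch of this loss, assuming a PyTorch critic (the discriminator above with its final `Sigmoid` removed, since a WGAN critic outputs an unbounded score); the clipping constant 0.01 follows the default in the original WGAN paper:

```python
import torch

def critic_loss(critic, real_images, fake_images):
    # Negative of the estimated Wasserstein distance: minimizing this
    # loss maximizes E[critic(real)] - E[critic(fake)].
    return critic(fake_images).mean() - critic(real_images).mean()

def generator_loss(critic, fake_images):
    # The generator tries to raise the critic's score on its samples.
    return -critic(fake_images).mean()

def clip_critic_weights(critic, c=0.01):
    # Weight clipping (from the original WGAN paper) is a crude way to
    # keep the critic approximately 1-Lipschitz, which the Wasserstein
    # formulation requires; the gradient penalty of "Improved Training
    # of Wasserstein GANs" is the usual replacement.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```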
References:
- Wasserstein GAN
- Improved Training of Wasserstein GANs
- Improved Training of Wasserstein GANs GitHub Repo
- Read-Through: Wasserstein GAN