A Deep Convolutional Generative Adversarial Network (DCGAN) for generating anime faces. A generator and discriminator network are trained adversarially on a dataset of 21,551 anime faces, manually resized to 64x64 pixels. After 50 epochs of training, the generator produces realistic anime faces.
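The blurb does not show the network itself; as a minimal sketch (assuming PyTorch and the standard DCGAN layout, not the repository's actual code), a generator that upsamples a 100-dimensional latent vector to a 3x64x64 image could look like this:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z to a 3x64x64 image via strided transposed convolutions."""
    def __init__(self, latent_dim=100, feature_maps=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feature_maps*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # -> (feature_maps*4) x 8 x 8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # -> (feature_maps*2) x 16 x 16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # -> feature_maps x 32 x 32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # -> 3 x 64 x 64; tanh keeps outputs in [-1, 1] to match normalized data
            nn.ConvTranspose2d(feature_maps, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of latent vectors and produce 64x64 RGB images.
g = Generator()
fake = g(torch.randn(16, 100, 1, 1))  # shape: [16, 3, 64, 64]
```

The tanh output assumes the training images are normalized to [-1, 1], which is the usual DCGAN convention.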
This repository contains notebooks showcasing various generative models, including a DCGAN and a VAE for anime face generation, an autoencoder for converting photos to sketches, an attention-based image captioning model, and more.
Accompanying GitHub repository for the paper "Deep convolutional generative adversarial network for generation of computed tomography images of discontinuously carbon fiber reinforced polymer microstructures". The paper is available under the following DOI:
A DCGAN (Deep Convolutional Generative Adversarial Network) custom architecture builder and image synthesizer: specify the generator and discriminator architectures, visualize the models, train the GAN, synthesize images, and analyze the synthetic imagery with lossless visualization.
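The builder's actual API is not shown here; assuming the same PyTorch setting as the generator sketch above, the discriminator half such a tool assembles typically mirrors the generator with strided convolutions in place of pooling, for example:

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores 3x64x64 images as real or fake using strided convolutions (no pooling)."""
    def __init__(self, feature_maps=64):
        super().__init__()
        self.net = nn.Sequential(
            # 3 x 64 x 64 -> feature_maps x 32 x 32
            nn.Conv2d(3, feature_maps, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # -> (feature_maps*2) x 16 x 16
            nn.Conv2d(feature_maps, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # -> (feature_maps*4) x 8 x 8
            nn.Conv2d(feature_maps * 2, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # -> (feature_maps*8) x 4 x 4
            nn.Conv2d(feature_maps * 4, feature_maps * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # -> single real/fake probability per image
            nn.Conv2d(feature_maps * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)
```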
This project uses Deep Convolutional Generative Adversarial Networks (DCGANs) to generate images of handwritten digits from the MNIST dataset, specifically focusing on generating the digit '8'.
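As an illustrative sketch (assuming torchvision, not this project's actual code), restricting MNIST to the digit '8' before training might be done like this:

```python
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Normalize to [-1, 1] so the data range matches the generator's tanh output.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),
])

mnist = datasets.MNIST(root="data", train=True, download=True, transform=transform)

# Keep only the samples labelled '8' before handing them to the GAN.
eight_indices = (mnist.targets == 8).nonzero(as_tuple=True)[0].tolist()
loader = DataLoader(Subset(mnist, eight_indices), batch_size=128, shuffle=True)
```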
The CelebA dataset was used to train a DCGAN (Deep Convolutional Generative Adversarial Network). The project is broken down into a series of tasks, from loading the data to defining and training the adversarial networks. The trained network generated new face images that were fairly realistic, with relatively little noise.
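The blurb only names the tasks; a rough sketch of the adversarial training step (reusing the hypothetical Generator and Discriminator sketched above, a loader of 64x64 RGB images, binary cross-entropy loss, and the hyperparameters recommended in the DCGAN paper) could look like:

```python
import torch
import torch.nn as nn

# Assumed to exist: Generator, Discriminator (as sketched above) and `loader`,
# which yields batches of real images normalized to [-1, 1].
device = "cuda" if torch.cuda.is_available() else "cpu"
generator = Generator().to(device)
discriminator = Discriminator().to(device)

criterion = nn.BCELoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))

for epoch in range(50):
    for real, _ in loader:
        real = real.to(device)
        b = real.size(0)
        z = torch.randn(b, 100, 1, 1, device=device)
        fake = generator(z)

        # Discriminator step: push real images toward 1 and generated images toward 0.
        opt_d.zero_grad()
        loss_d = (criterion(discriminator(real), torch.ones(b, device=device)) +
                  criterion(discriminator(fake.detach()), torch.zeros(b, device=device)))
        loss_d.backward()
        opt_d.step()

        # Generator step: try to make the discriminator score fakes as real.
        opt_g.zero_grad()
        loss_g = criterion(discriminator(fake), torch.ones(b, device=device))
        loss_g.backward()
        opt_g.step()
```

Detaching the fake batch during the discriminator update keeps that step from backpropagating into the generator; the generator is then updated separately against the refreshed discriminator.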
This project utilizes a Deep Convolutional Generative Adversarial Network (DCGAN) to generate realistic human face images based on the Flickr-Faces-HQ (FFHQ) dataset. By training a GAN on high-quality face images, the model learns to synthesize diverse and lifelike faces.
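Once such a model is trained, synthesizing faces is a single forward pass through the generator; a hedged sketch (assuming PyTorch/torchvision and the 64x64 generator above, not this project's actual code) might be:

```python
import torch
from torchvision.utils import save_image

# `generator` is assumed to be a trained 64x64 DCGAN generator as sketched above.
generator.eval()
with torch.no_grad():
    z = torch.randn(64, 100, 1, 1)
    faces = generator(z)

# Undo the [-1, 1] normalization and write an 8x8 grid of generated faces.
save_image(faces, "generated_faces.png", nrow=8, normalize=True, value_range=(-1, 1))
```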