Welcome to the GAN Playground! This project allows users to explore and experiment with Generative Adversarial Networks (GANs) through an interactive web interface. Users can choose different GAN models, set training parameters, and visualize the results.
Visit the GAN Playground at gan-playground.vercel.app
- Choose from multiple GAN models (DCGAN, WGAN-GP, and more coming soon).
- Set training parameters such as number of epochs, learning rate, etc.
- Visualize training progress and results at each step of every epoch.
- API access to integrate GAN functionalities into other projects.
- Python 3.8+
- Node.js 14+
- npm 6+
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/gan-playground.git
  cd gan-playground
  ```
- Backend setup:

  ```bash
  cd backend
  python3 -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  pip install -r requirements.txt
  ```
  To install the required packages with CUDA support (cu117), follow these steps:

  - Make sure you have the correct CUDA version installed (CUDA 11.7).
  - Run the following command to install the required packages:

    ```bash
    pip install -r requirements.txt --index-url https://download.pytorch.org/whl/cu117
    ```

  A quick way to confirm the CUDA-enabled install worked is shown in the snippet after this list.
- Frontend setup:

  ```bash
  cd ../frontend
  npm install
  ```
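If you installed the CUDA-enabled packages, a minimal check like the one below confirms that the GPU build is active. This is an illustrative sketch, assuming PyTorch is among the packages installed from `requirements.txt` (which the cu117 index URL suggests):

```python
# check_cuda.py -- quick sanity check for the CUDA-enabled PyTorch install
import torch

print("PyTorch version:", torch.__version__)          # a CUDA build typically reports a "+cu117" suffix
print("CUDA available:", torch.cuda.is_available())   # True if a compatible GPU and driver are detected
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```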
To start the frontend:

```bash
cd playground-frontend
npm start
```

This will start the React app on localhost:3000.
To start the backend in a new terminal:

```bash
cd backend
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
python main.py
```

This will start the backend server on localhost:5000.
- Endpoint: `/api/dcgan/train`
- Method: `POST`
- Description: Train the DCGAN model with the provided parameters.
- Parameters:
  - `epochs` (int): Number of epochs to train.
  - `batch_size` (int): Batch size for training.
  - `learning_rate` (float): Learning rate for the optimizer.
- Example Request:

  ```json
  {
    "epochs": 100,
    "batch_size": 64,
    "learning_rate": 0.0002
  }
  ```
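As a concrete illustration (not part of the repository), the example request above can be sent from Python once the backend is running on localhost:5000. This sketch assumes the third-party `requests` package is installed:

```python
import requests

# Hypothetical client call: POST the example training parameters to the DCGAN endpoint.
payload = {"epochs": 100, "batch_size": 64, "learning_rate": 0.0002}
response = requests.post("http://localhost:5000/api/dcgan/train", json=payload)

# The response format is defined by the backend; here we just report whatever came back.
print(response.status_code)
print(response.text)
```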
- Endpoint: `/api/wgan_gp/train`
- Method: `POST`
- Description: Train the WGAN-GP model with the provided parameters.
- Parameters:
  - `epochs` (int): Number of epochs to train.
  - `batch_size` (int): Batch size for training.
  - `learning_rate` (float): Learning rate for the optimizer.
- Example Request:

  ```json
  {
    "epochs": 100,
    "batch_size": 64,
    "learning_rate": 0.0001
  }
  ```
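Because both training endpoints accept the same parameter names, integrating them into another project can be as simple as a thin wrapper like the one below. This is an illustrative sketch only; the `train` helper and its defaults are not part of the API, and it assumes the backend is reachable at localhost:5000 with the `requests` package installed:

```python
import requests

BASE_URL = "http://localhost:5000"  # backend started via `python main.py` in the setup above

def train(model: str, epochs: int, batch_size: int, learning_rate: float) -> str:
    """Start training for `model` ('dcgan' or 'wgan_gp') and return the raw response body."""
    payload = {"epochs": epochs, "batch_size": batch_size, "learning_rate": learning_rate}
    response = requests.post(f"{BASE_URL}/api/{model}/train", json=payload)
    response.raise_for_status()  # surface HTTP errors instead of silently ignoring them
    return response.text

if __name__ == "__main__":
    print(train("wgan_gp", epochs=100, batch_size=64, learning_rate=0.0001))
```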
Additional models will be added soon. Stay tuned for updates!
Contributions are welcome! Please fork the repository and create a pull request with your changes.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See `LICENSE` for more information.