Experiment code associated with our paper in Applied Soft Computing: MO-PaDGAN: Reparameterizing Engineering Designs for Augmented Multi-objective Optimization.
This code is licensed under the MIT license. Feel free to use all or portions of it for your research or related projects, so long as you provide the following citation information:
Chen, W., & Ahmed, F. (2021). MO-PaDGAN: Reparameterizing Engineering Designs for augmented multi-objective optimization. Applied Soft Computing, 113, 107909.
```
@article{chen2021mo,
  title={MO-PaDGAN: Reparameterizing Engineering Designs for augmented multi-objective optimization},
  author={Chen, Wei and Ahmed, Faez},
  journal={Applied Soft Computing},
  volume={113},
  pages={107909},
  year={2021},
  publisher={Elsevier}
}
```
Required packages:

- tensorflow < 2.0.0
- gpflow
- gpflowopt
- sklearn
- numpy
- matplotlib
- seaborn
- pexpect
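
A minimal sketch of a pip-based environment setup follows; only the TensorFlow version bound comes from the list above, the other packages are unpinned, and `sklearn` corresponds to the `scikit-learn` package on PyPI:

```bash
# Sketch of a pip-based setup; only the tensorflow<2.0.0 bound is stated above,
# all other packages are installed unpinned
pip install "tensorflow<2.0.0" gpflow gpflowopt scikit-learn numpy matplotlib seaborn pexpect
```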
For the synthetic examples:

- Go to the example directory:

  ```bash
  cd synthetic
  ```
- Train MO-PaDGAN:

  ```bash
  python train.py
  ```
  ```
  positional arguments:
    mode             train or evaluate
    data             dataset name (specified in datasets.py; available datasets are Ring2D, Grid2D, Donut2D, ThinDonut2D, and Arc2D)
    func             function name (specified in functions.py; available functions are VLMOP2 and NKNO1)

  optional arguments:
    -h, --help       show this help message and exit
    --lambda0        coefficient controlling the weight of quality in the DPP kernel
    --lambda1        coefficient controlling the weight of the performance augmented DPP loss in the PaDGAN loss
    --disc_lr        learning rate for the discriminator
    --gen_lr         learning rate for the generator
    --batch_size     batch size
    --train_steps    number of training steps
    --save_interval  interval (in steps) for saving the trained model and plotting results
  ```
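
  For example, a single training run on the Ring2D dataset with the VLMOP2 functions could look like the following; the hyperparameter values here are placeholders, not the paper's tuned settings:

  ```bash
  # Illustrative invocation; the lambda values are placeholders, not the paper's settings
  python train.py train Ring2D VLMOP2 --lambda0 2.0 --lambda1 0.2
  ```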
The default values of the optional arguments will be read from the file `synthetic/config.ini`. The trained model and the result plots will be saved under the directory `synthetic/trained_gan/<data>_<func>/<lambda0>_<lambda1>/`, where `<data>`, `<func>`, `<lambda0>`, and `<lambda1>` are specified in the arguments or in `synthetic/config.ini`. Note that we can set `lambda0` and `lambda1` to zero to train a vanilla GAN. Datasets and functions are defined in `synthetic/datasets.py` and `synthetic/functions.py`, respectively. Specifically, here are the dataset and function names (`<data>` and `<func>`) for the two synthetic examples in the paper:

| Example | Dataset name | Function name |
| ------- | ------------ | ------------- |
| I       | Ring2D       | NKNO1         |
| II      | Ring2D       | VLMOP2        |
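As noted above, setting both coefficients to zero recovers a vanilla GAN, which can serve as a baseline:

```bash
# Vanilla GAN baseline: disables the quality weight and the performance-augmented DPP loss
python train.py train Ring2D VLMOP2 --lambda0 0 --lambda1 0
```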
- Multi-objective optimization by Bayesian optimization:

  i) Run a single experiment:

  ```bash
  python optimize.py
  ```

  ```
  positional arguments:
    data        dataset name (specified in datasets.py; available datasets are Ring2D, Grid2D, Donut2D, ThinDonut2D, and Arc2D)
    func        function name (specified in functions.py; available functions are VLMOP2 and NKNO1)

  optional arguments:
    -h, --help  show this help message and exit
    --lambda0   coefficient controlling the weight of quality in the DPP kernel
    --lambda1   coefficient controlling the weight of the performance augmented DPP loss in the PaDGAN loss
    --id        experiment ID
  ```
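
  A single run could be launched like this; the `--id` value is just an arbitrary experiment tag:

  ```bash
  # Single Bayesian optimization run on a synthetic example; --id is an arbitrary tag
  python optimize.py Ring2D VLMOP2 --id 0
  ```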
  ii) Run experiments in batch mode to reproduce results from the paper:

  ```bash
  python run_batch_experiments.py
  ```
For the airfoil example:

- Install XFOIL.

- Go to the example directory:

  ```bash
  cd airfoil
  ```
- Download the airfoil dataset here and extract the NPY files into `airfoil/data/`.
- Go to the surrogate model directory:

  ```bash
  cd surrogate
  ```
- Train a surrogate model to predict airfoil performances:

  ```bash
  python train_surrogate.py train
  ```

  ```
  positional arguments:
    mode             train or evaluate

  optional arguments:
    -h, --help       show this help message and exit
    --save_interval  interval for saving checkpoints
  ```
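
  For instance (the checkpoint interval below is an arbitrary example value):

  ```bash
  # Train the surrogate; the save interval of 500 is chosen only for illustration
  python train_surrogate.py train --save_interval 500
  ```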
- Go back to the example directory:

  ```bash
  cd ..
  ```
- Train MO-PaDGAN:

  ```bash
  python train.py train
  ```

  ```
  positional arguments:
    mode        train or evaluate

  optional arguments:
    -h, --help  show this help message and exit
    --lambda0   coefficient controlling the weight of quality in the DPP kernel
    --lambda1   coefficient controlling the weight of the performance augmented DPP loss in the PaDGAN loss
  ```
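
  As before, the coefficient values below are placeholders, not the paper's settings; setting both to zero trains a vanilla GAN baseline:

  ```bash
  # Illustrative airfoil MO-PaDGAN training run; the lambda values are placeholders
  python train.py train --lambda0 2.0 --lambda1 0.2
  ```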
The default values of the optional arguments will be read from the file `airfoil/config.ini`. The trained model and the result plots will be saved under the directory `airfoil/trained_gan/<lambda0>_<lambda1>/`, where `<lambda0>` and `<lambda1>` are specified in the arguments or in `airfoil/config.ini`. Note that we can set `lambda0` and `lambda1` to zero to train a vanilla GAN.
- Multi-objective optimization by Bayesian optimization:

  i) Run a single experiment:

  ```bash
  python optimize_bo.py
  ```

  ```
  positional arguments:
    parameterization  airfoil parameterization (GAN, SVD, or FFD)

  optional arguments:
    -h, --help        show this help message and exit
    --lambda0         coefficient controlling the weight of quality in the DPP kernel
    --lambda1         coefficient controlling the weight of the performance augmented DPP loss in the PaDGAN loss
    --id              experiment ID
  ```
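
  For example, a single run with the GAN parameterization (the `--id` value is an arbitrary tag):

  ```bash
  # Single Bayesian optimization run using the GAN parameterization
  python optimize_bo.py GAN --id 0
  ```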
  ii) Run experiments in batch mode to reproduce results from the paper:

  ```bash
  python run_batch_experiments_bo.py
  ```
- Multi-objective optimization by evolutionary algorithm:

  i) Run a single experiment:

  ```bash
  python optimize_ea.py
  ```

  ```
  positional arguments:
    parameterization  airfoil parameterization (GAN, SVD, or FFD)

  optional arguments:
    -h, --help        show this help message and exit
    --lambda0         coefficient controlling the weight of quality in the DPP kernel
    --lambda1         coefficient controlling the weight of the performance augmented DPP loss in the PaDGAN loss
    --id              experiment ID
  ```
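
  For example, a single run with the FFD parameterization:

  ```bash
  # Single evolutionary-algorithm run using the FFD parameterization
  python optimize_ea.py FFD --id 0
  ```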
  ii) Run experiments in batch mode to reproduce results from the paper:

  ```bash
  python run_batch_experiments_ea.py
  ```