
Tutorial: Advanced

Paht J edited this page Aug 20, 2020 · 4 revisions

NSGA-III

Reference: Yuan, Xu, Wang (2014), "An Improved NSGA-III Procedure for Evolutionary Many-objective Optimization" (annotated PDF).

The difference between NSGA-III and NSGA-II is the removal of crowding distance, which can often lead to convergence issues on many-objective problems. Crowding distance is replaced with reference points distributed along the Pareto front; individuals closest to the reference points are selected for mutation and crossover.
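As a sketch of the reference-point mechanism (not GlennOPT's internal implementation), the points can be generated with the Das-Dennis simplex construction commonly used by NSGA-III, and each individual associated with the nearest reference line by perpendicular distance:

```python
import itertools
import math

def das_dennis_points(n_obj, divisions):
    """Uniformly spaced reference points on the unit simplex
    (stars-and-bars over the number of divisions)."""
    points = []
    for combo in itertools.combinations(range(divisions + n_obj - 1), n_obj - 1):
        coords, prev = [], -1
        for c in combo:
            coords.append(c - prev - 1)
            prev = c
        coords.append(divisions + n_obj - 2 - prev)
        points.append([c / divisions for c in coords])
    return points

def perpendicular_distance(point, ref):
    """Distance from an objective vector to the line through the origin
    and a reference point; an individual is associated with the reference
    line that minimizes this distance."""
    norm = math.sqrt(sum(r * r for r in ref))
    dot = sum(p * r for p, r in zip(point, ref))
    proj = [dot / norm * (r / norm) for r in ref]
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(point, proj)))
```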

Kursawe Function

https://en.wikipedia.org/wiki/Test_functions_for_optimization

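The Kursawe function itself is straightforward to write down. A minimal Python version for n variables (two objectives, variables typically bounded in [-5, 5]):

```python
import math

def kursawe(x):
    """Kursawe test function: returns the two objective values
    for a list of decision variables (usually three)."""
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
    return f1, f2
```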

Setup code is located in glennopt/test_functions/KUR. Inside there are 3 folders: serial, parallel, and parallel_nas.

Serial - single CPU execution
Parallel - you can specify how many CPUs to use
Parallel_NAS - this version takes a machinefile containing host names and allocates them to each execution (e.g. 4 cores per execution)

Serial

optimization_setup.py - sets up and executes the optimization

"""
    Simple, non-parallel optimization setup example.
"""
import sys,os
sys.path.insert(0,'../../../../')
from glennopt.helpers import Parameter, mutation_parameters, de_mutation_type
from glennopt.nsga3 import NSGA3
from glennopt.doe import generate_reference_points

# Generate the DOE
current_dir = os.getcwd()
ns = NSGA3(eval_script = "Evaluation/evaluation.py", eval_folder="Evaluation",num_populations=10,pop_size=40,optimization_folder=current_dir)

eval_parameters = []
eval_parameters.append(Parameter(name="x1",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x2",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x3",min_value=-5,max_value=5))
ns.add_eval_parameters(eval_params = eval_parameters)

objectives = []
objectives.append(Parameter(name='objective1'))
objectives.append(Parameter(name='objective2'))
ns.add_objectives(objectives=objectives)

# No performance Parameters
performance_parameters = []
performance_parameters.append(Parameter(name='p1'))
performance_parameters.append(Parameter(name='p2'))
performance_parameters.append(Parameter(name='p3'))
ns.add_performance_parameters(performance_params = performance_parameters)

# Mutation settings
ns.mutation_params.mutation_type = de_mutation_type.de_best_2_bin

ns.start_doe(doe_size=128)
ns.optimize_from_population(pop_start=-1,n_generations=10)

Execution folder structure

Calculation
| - DOE
| -- IND000
| ----- input.txt   (Generated by optimizer)
| ----- evaluate.py (Executes the cfd and reads results)
| ----- output.txt  (Generated by evaluate.py)
| -- ...
| -- IND127
| - POP000
| -- IND000
| -- ...
| -- IND039
Data
| - evaluate.py     (Gets copied to each individual directory)
optimization_setup.py
optimization_plot.py
machinefile.txt (Optional, add this if you want to break down hosts per evaluation)
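The evaluation script's job is to read input.txt and produce output.txt. A hypothetical evaluate.py for the Kursawe problem, assuming a simple name = value format for both files (the actual GlennOPT file format may differ):

```python
import math
import os

def read_parameters(path="input.txt"):
    """Parse name = value lines (assumed format) into a dict of floats."""
    params = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                name, value = line.split("=", 1)
                params[name.strip()] = float(value)
    return params

def main():
    p = read_parameters()
    x = [p["x1"], p["x2"], p["x3"]]
    # Kursawe objectives
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
    with open("output.txt", "w") as f:
        f.write(f"objective1 = {f1}\n")
        f.write(f"objective2 = {f2}\n")
        # performance parameters can be written the same way
        f.write(f"p1 = {x[0]}\np2 = {x[1]}\np3 = {x[2]}\n")

if __name__ == "__main__" and os.path.exists("input.txt"):
    main()
```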

Starting a Population from DOE

This part of the code, ns.optimize_from_population(pop_start=-1,n_generations=10), launches the optimization from the DOE (pop_start=-1 refers to the DOE).

Launching from Past Population or Restart File

To launch the optimization from a past population, call ns.optimize_from_population(pop_start=2,n_generations=10). However, if a restart file exists, the restart file will be used instead.

Generating a restart file

Generating a restart file is as easy as calling ns.create_restart(). This reads the calculation folder and creates a restart file from all individuals that have an output.txt in their evaluation folder.

Plotting the Pareto Front

A plot of the Pareto front for 2 objectives can be made by calling ns.read_calculation_folder() followed by ns.plot_2D('objective1','objective2').
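plot_2D highlights the non-dominated individuals. As a sketch of what "Pareto front" means here (not GlennOPT's plotting code), the non-dominated subset of 2-objective minimization points can be extracted like this:

```python
def dominates(q, p):
    """q dominates p if it is no worse in every objective
    and strictly better in at least one (minimization)."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Return the non-dominated subset of objective tuples."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```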

Parallel

"""
    Parallel optimization setup example.
"""
import sys,os
sys.path.insert(0,'../../../../')
from glennopt.helpers import Parameter, mutation_parameters, de_mutation_type, parallel_settings
from glennopt.nsga3 import NSGA3
from glennopt.doe import generate_reference_points


# Generate the DOE
current_dir = os.getcwd()
ns = NSGA3(eval_script = "Evaluation/evaluation.py", eval_folder="Evaluation",num_populations=10,pop_size=40,optimization_folder=current_dir)

eval_parameters = []
eval_parameters.append(Parameter(name="x1",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x2",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x3",min_value=-5,max_value=5))
ns.add_eval_parameters(eval_params = eval_parameters)

objectives = []
objectives.append(Parameter(name='objective1'))
objectives.append(Parameter(name='objective2'))
ns.add_objectives(objectives=objectives)

# No performance Parameters
performance_parameters = []
performance_parameters.append(Parameter(name='p1'))
performance_parameters.append(Parameter(name='p2'))
performance_parameters.append(Parameter(name='p3'))
ns.add_performance_parameters(performance_params = performance_parameters)

# Mutation settings
ns.mutation_params.mutation_type = de_mutation_type.de_best_2_bin

# Parallel settings
parallelSettings = parallel_settings()
parallelSettings.concurrent_executions = 16
parallelSettings.cores_per_execution = 1
parallelSettings.execution_timeout = 1 # minutes
# * These are not needed 
# parallelSettings.machine_filename = 'machinefile.txt' 
# parallelSettings.database_filename = 'database.csv'
ns.parallel_settings = parallelSettings

ns.start_doe(doe_size=128)
ns.optimize_from_population(pop_start=-1,n_generations=10)

Simple Parallel

If you don't create a machinefile.txt in the root directory, the optimizer will simply run 1 core per execution with N concurrent executions.
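A minimal sketch of this behavior, using a thread pool to cap the number of simultaneous evaluations (this is illustrative, not GlennOPT's internal code; sys.executable stands in for whatever interpreter runs evaluate.py):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_individual(folder):
    """Run one individual's evaluation script inside its own directory;
    returns the exit code."""
    return subprocess.run([sys.executable, "evaluate.py"], cwd=folder).returncode

def run_concurrently(folders, concurrent_executions=16):
    """Evaluate many individuals at once, never more than
    concurrent_executions at a time."""
    with ThreadPoolExecutor(max_workers=concurrent_executions) as pool:
        return list(pool.map(run_individual, folders))
```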

NAS Parallel

# Parallel settings
parallelSettings = parallel_settings()
parallelSettings.concurrent_executions = 16
parallelSettings.cores_per_execution = 1
parallelSettings.execution_timeout = 1 # minutes
parallelSettings.machine_filename = 'machinefile.txt' 
# parallelSettings.database_filename = 'database.csv'
ns.parallel_settings = parallelSettings

A machine file for 2 nodes with 2 cores per node looks like:

paht-pc
paht-pc
dave-pc
dave-pc
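One plausible way such a file could be split into per-execution host groups, given cores_per_execution (a sketch of the allocation idea, not GlennOPT's actual code):

```python
def allocate_hosts(machinefile_lines, cores_per_execution):
    """Split machinefile entries (one line per core) into groups,
    one group of hosts per concurrent execution."""
    hosts = [line.strip() for line in machinefile_lines if line.strip()]
    return [hosts[i:i + cores_per_execution]
            for i in range(0, len(hosts), cores_per_execution)]
```

With the machine file above and cores_per_execution = 2, each execution would receive both cores of one node.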