Tutorial: Advanced
The main difference between NSGA-III and NSGA-II is the removal of crowding distance, which can often lead to convergence issues. Crowding distance has been replaced with reference points along the Pareto front; the individuals closest to the reference points are selected for mutation and crossover.
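Reference points of this kind are commonly generated with the Das-Dennis simplex-lattice method; glennopt exposes a generator as generate_reference_points in glennopt.doe. The following is a minimal sketch of the underlying idea only, not glennopt's actual implementation:

```python
from itertools import combinations

def das_dennis_reference_points(n_objectives: int, n_partitions: int):
    """Generate uniformly spaced points on the unit simplex.

    Each point's coordinates are multiples of 1/n_partitions that sum to 1;
    the number of points is C(n_partitions + n_objectives - 1, n_objectives - 1).
    """
    points = []
    # Stars and bars: choose positions of (n_objectives - 1) dividers
    # among (n_partitions + n_objectives - 1) slots.
    n_slots = n_partitions + n_objectives - 1
    for dividers in combinations(range(n_slots), n_objectives - 1):
        prev = -1
        coords = []
        for d in dividers:
            coords.append((d - prev - 1) / n_partitions)
            prev = d
        coords.append((n_slots - prev - 1) / n_partitions)
        points.append(coords)
    return points

# 2 objectives, 4 partitions -> 5 points along the line f1 + f2 = 1
print(das_dennis_reference_points(2, 4))
```

For 3 objectives and 4 partitions this produces 15 reference points, matching the C(6, 2) combinatorial count.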
https://en.wikipedia.org/wiki/Test_functions_for_optimization
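The KUR example implements the Kursawe test function from the link above. For reference, the two conflicting objectives over x in [-5, 5]^3 can be sketched as follows (glennopt's own evaluation script may differ in detail):

```python
import math

def kursawe(x):
    """Kursawe test function: two conflicting objectives over 3 variables."""
    f1 = sum(-10.0 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(len(x) - 1))
    f2 = sum(abs(xi) ** 0.8 + 5.0 * math.sin(xi ** 3) for xi in x)
    return f1, f2

print(kursawe([0.0, 0.0, 0.0]))  # (-20.0, 0.0)
```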
The setup code is located in glennopt/test_functions/KUR. In there you will find 3 folders: serial, parallel, and parallel_nas.
Serial - single-CPU execution
Parallel - you can specify how many CPUs to use
Parallel_NAS - this version takes a machinefile containing host names and allocates them to each execution (e.g. 4 cores per execution)
optimization_setup.py - sets up and executes the optimization
"""
Simple, non-parallel optimization setup example.
"""
import sys,os
sys.path.insert(0,'../../../../')
from glennopt.helpers import Parameter, mutation_parameters, de_mutation_type
from glennopt.nsga3 import NSGA3
from glennopt.doe import generate_reference_points
# Generate the DOE
current_dir = os.getcwd()
ns = NSGA3(eval_script = "Evaluation/evaluation.py", eval_folder="Evaluation",num_populations=10,pop_size=40,optimization_folder=current_dir)
eval_parameters = []
eval_parameters.append(Parameter(name="x1",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x2",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x3",min_value=-5,max_value=5))
ns.add_eval_parameters(eval_params = eval_parameters)
objectives = []
objectives.append(Parameter(name='objective1'))
objectives.append(Parameter(name='objective2'))
ns.add_objectives(objectives=objectives)
# Performance parameters (tracked during the optimization but not used as objectives)
performance_parameters = []
performance_parameters.append(Parameter(name='p1'))
performance_parameters.append(Parameter(name='p2'))
performance_parameters.append(Parameter(name='p3'))
ns.add_performance_parameters(performance_params = performance_parameters)
# Mutation settings
ns.mutation_params.mutation_type = de_mutation_type.de_best_2_bin
ns.start_doe(doe_size=128)
ns.optimize_from_population(pop_start=-1,n_generations=10)
The optimization folder has the following directory structure:
Calculation
| - DOE
| -- IND000
| ----- input.txt (Generated by optimizer)
| ----- evaluate.py (Executes the cfd and reads results)
| ----- output.txt (Generated by evaluate.py)
| -- ...
| -- IND127
| - POP000
| -- IND000
| -- ...
| -- IND039
Data
| - evaluate.py (Gets copied to each individual directory)
optimization_setup.py
optimization_plot.py
machinefile.txt (Optional, add this if you want to break down hosts per evaluation)
The line ns.optimize_from_population(pop_start=-1,n_generations=10) launches the optimization from the DOE.
To launch the optimization from a past population, for example population 2, call ns.optimize_from_population(pop_start=2,n_generations=10).
However, if a restart file exists, the restart file will be used instead.
Generating a restart file is as easy as calling ns.create_restart(). This reads the calculation folder and creates a restart file from all individuals that have an output.txt within their evaluation folder.
A plot of the Pareto front for 2 objectives can be made by calling:
ns.read_calculation_folder()
ns.plot_2D('objective1','objective2')
"""
Parallel optimization setup example.
"""
import sys,os
sys.path.insert(0,'../../../../')
from glennopt.helpers import Parameter, mutation_parameters, de_mutation_type, parallel_settings
from glennopt.nsga3 import NSGA3
from glennopt.doe import generate_reference_points
# Generate the DOE
current_dir = os.getcwd()
ns = NSGA3(eval_script = "Evaluation/evaluation.py", eval_folder="Evaluation",num_populations=10,pop_size=40,optimization_folder=current_dir)
eval_parameters = []
eval_parameters.append(Parameter(name="x1",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x2",min_value=-5,max_value=5))
eval_parameters.append(Parameter(name="x3",min_value=-5,max_value=5))
ns.add_eval_parameters(eval_params = eval_parameters)
objectives = []
objectives.append(Parameter(name='objective1'))
objectives.append(Parameter(name='objective2'))
ns.add_objectives(objectives=objectives)
# Performance parameters (tracked during the optimization but not used as objectives)
performance_parameters = []
performance_parameters.append(Parameter(name='p1'))
performance_parameters.append(Parameter(name='p2'))
performance_parameters.append(Parameter(name='p3'))
ns.add_performance_parameters(performance_params = performance_parameters)
# Mutation settings
ns.mutation_params.mutation_type = de_mutation_type.de_best_2_bin
# Parallel settings
parallelSettings = parallel_settings()
parallelSettings.concurrent_executions = 16
parallelSettings.cores_per_execution = 1
parallelSettings.execution_timeout = 1 # minutes
# * These are not needed
# parallelSettings.machine_filename = 'machinefile.txt'
# parallelSettings.database_filename = 'database.csv'
ns.parallel_settings = parallelSettings
ns.start_doe(doe_size=128)
ns.optimize_from_population(pop_start=-1,n_generations=10)
Simply don't set machine_filename, or don't create a machinefile.txt in the root directory, and the optimizer will run 1 core per execution with N concurrent executions. To use a machine file, set machine_filename in the parallel settings:
# Parallel settings
parallelSettings = parallel_settings()
parallelSettings.concurrent_executions = 16
parallelSettings.cores_per_execution = 1
parallelSettings.execution_timeout = 1 # minutes
parallelSettings.machine_filename = 'machinefile.txt'
# parallelSettings.database_filename = 'database.csv'
ns.parallel_settings = parallelSettings
For 2 nodes with 2 cores per node, the machine file looks like:
paht-pc
paht-pc
dave-pc
dave-pc
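A machine file like the one above can be split into execution slots by grouping the host lines into chunks of cores_per_execution (this illustrates the idea; glennopt's internal allocation logic may differ). With cores_per_execution = 2, the four lines above yield two slots of two cores each:

```python
def allocate_hosts(machinefile_lines, cores_per_execution):
    """Group host-name lines into chunks of cores_per_execution.

    Each chunk represents the cores one evaluation runs on.
    Hosts repeated on consecutive lines contribute one core per line.
    """
    hosts = [h.strip() for h in machinefile_lines if h.strip()]
    return [hosts[i:i + cores_per_execution]
            for i in range(0, len(hosts), cores_per_execution)]

lines = ["paht-pc", "paht-pc", "dave-pc", "dave-pc"]
print(allocate_hosts(lines, 2))  # [['paht-pc', 'paht-pc'], ['dave-pc', 'dave-pc']]
```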