Agent
```matlab
agent = Agent(name, model, controller, T_s, x0, uPrev_0 = zeros(n_u, 1));
```
An agent handles the actual execution of the simulation of a model and controller. To this end, it also tracks the simulation history (i.e. the state trajectory, inputs and realised disturbances) as well as the current simulation state.
The agent is called by the simulation at three different points: during negotiation between agents (`doNegotiation`), for the execution of the actual simulation step (`doStep`), and finally for storing the simulation results (`storeHistory`).
The agent derives the optimal input for the modelled system in `getOptimalTrajectory`, either by calling the controller directly or by performing pareto evaluation on the model and controller.
An agent's history and current status, which contains the predictions made in the current timestep, are public and accessible by other agents as well as by evaluation functions and disturbance prediction functions. That way, agents can make decisions based on each other's state, and disturbances and evaluations can be state-aware (across agents).
Like all main PARODIS classes, the Agent class is derived from the `handle` class, meaning that each agent has one unique instance across the simulation. This allows read and write access from all other agents during the simulation, enabling complex agent interactions.
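For illustration, a minimal instantiation could look as follows; `model` and `controller` are assumed to have been created by your own setup code, and all values are made up:

```matlab
% Illustrative sketch: model and controller come from the user's own
% setup code; dimensions and values are assumptions (here n_u = 1).
T_s = [15 15 30 30 60];    % time step lengths over the prediction horizon
x0  = [22.5; 0.5];         % initial state of the modelled system
agent = Agent('building', model, controller, T_s, x0, zeros(1, 1));
```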
Property | Description |
---|---|
`controller` | A Controller object, used for deriving an optimal input trajectory and for disturbance prediction |
`model` | A model struct containing the respective model |
`history` | struct containing the simulation history |
`virtualHistory` | struct containing the virtual simulation history |
`config` | struct containing the agent's configuration |
`status` | struct containing the current status |
`previousStatus` | struct containing the status of the previous timestep |
`name` | string with the agent's name during the simulation |
`callbackMeasureState` | Optional callback that can manipulate the agent's history based on other agents, called in `measureState()` |
`callbackNegotiation` | Optional callback that is called in `doNegotiation` to perform negotiation tasks |
`simulation` | Reference to the simulation in which the agent is contained |
Property | Default value | Description |
---|---|---|
`T_s` | `[]` | Time steps for each step in the prediction horizon, `length(T_s) == N_pred` |
`solver` | `'gurobi'` | Solver to be called by YALMIP to solve the MPC problem |
`evalFuns` | `struct` | Struct used for storing evaluation functions, interfaced by `addEvalFunction` |
`disturbanceMeasured` | `false` | Whether to set the prediction `d(0\|k)` to the actually measured disturbance |
`debugMode` | `false` | Toggles YALMIP debug mode and `verbosity = 2` and disables use of `optimizer` |
`testFeasibility` | `false` | If enabled, each time before the actual problem is solved, an empty problem (no objective function, but the set constraints) will be solved using `optimize()` to test whether the constraints are infeasible |
`verboseOptimizer` | `false` | If true, the optimizer flags `debug = 1` and `verbose = 2` will be set, independently of `debugMode` |
`pauseOnError` | `false` | If set to `true`, the simulation will pause right after plotting if the solver code returned from `getOptimalInput` is not 0 (successfully solved). Alternatively, can be set to pause only on specific codes, e.g. `pauseOnError = 12` (see solver codes) |
`solverOptions` | `{}` | List of additional (solver) options that will be passed to `sdpsettings` when initialising the controller (e.g. `{'quadprog.maxIter', 1000}`) |
`reuseNegotiationResult` | `false` | If the agent engages in negotiation and has derived an optimal input, and this flag is set to `true`, this result will be reused in the agent's `doStep` |
`doParetoOnlyOnce` | `false` | If the ParetoController is used and this flag is set to `true`, pareto evaluation will only be done once in a given timestep, i.e. if the agent is called multiple times during negotiation, only the first call will result in a pareto evaluation |
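These options can be overridden by setting the corresponding fields of the agent's `config` struct; a sketch with illustrative values:

```matlab
% Sketch: overriding some of the defaults on an existing agent instance
agent.config.solver          = 'quadprog';
agent.config.testFeasibility = true;
agent.config.solverOptions   = {'quadprog.maxIter', 1000};  % passed to sdpsettings
agent.config.pauseOnError    = 12;    % pause only on solver code 12
```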
In the agent's history struct, all realised simulation values are stored. This includes the actual state trajectory, the actual applied inputs and disturbances, the values of the configured evaluation functions as well as of the cost functions, and the weights associated with the cost functions.
The history is used to store and access the actual state of the system. Thus, it is updated only within `doStep` of the agent, when the derived input is applied to the system together with the actual disturbance. The resulting state is stored in the history, together with the input, the disturbance, and the values of the evaluation functions calculated on this resulting state.
An agent's history is also accessed by the figures for plotting.
Field | Description |
---|---|
`x` | Matrix containing the realised state trajectory |
`u` | Matrix containing the actually applied inputs |
`d` | Matrix containing the realised disturbances |
`evalValues` | struct where each field contains a vector with the realised values of the corresponding evaluation function |
`costs` | struct where each field contains a vector with the realised values of the corresponding cost function |
`simulationTime` | Matrix containing the simulation time at each timestep |
`pareto` | struct with pareto history, all fields are empty if pareto evaluation is deactivated |
`pareto.fronts` | Cell array containing the retrieved pareto front at each timestep |
`pareto.paretoParameters` | Cell array containing the parameters corresponding to the points on the front at each timestep |
`pareto.utopias` | Matrix containing the utopia point at each timestep |
`pareto.nadir` | Matrix containing the nadir point at each timestep |
The virtual history tracks the history that would have occurred if in each step only `dPred` had been applied and no state measurement had occurred. It contains the same fields as the real history, excluding the `pareto` struct.
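Since agents are handle objects, both histories can be inspected directly after (or during) a run; a minimal sketch:

```matlab
% Sketch: comparing the realised trajectory with the virtual one after a run
xReal    = agent.history.x;           % realised state trajectory
xVirtual = agent.virtualHistory.x;    % trajectory if only dPred had acted
maxGap   = max(abs(xReal - xVirtual), [], 2);   % worst-case gap per state
```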
In the status struct, all predictions and assumed values of the current timestep are stored. The status is first set by `doNegotiation`, if negotiation is performed. It remains set until right after the actual simulation step has been performed and the history has been updated. Afterwards, all fields except `k` are cleared, and `k` is increased by one.
Due to the execution order in the negotiation loop (and the main loop, for that matter), an agent's status can be used to determine parameter values or disturbances of other agents, and to track progress in the negotiation process. Since all predictions become void right after the actual simulation step is performed, the status is then cleared of all predictions.
Field | Description |
---|---|
`k` | Current local simulation step |
`xPred` | Cell array containing the predicted state trajectory for each scenario |
`uPred` | Latest derived predicted optimal input trajectory |
`dPred` | Cell array containing the predicted disturbances for each scenario |
`evalPred` | struct, each field contains a vector with the predicted eval values over the horizon |
`costsPred` | struct, each field contains a vector with the predicted cost function values over the horizon |
`paramValues` | The currently determined parameter values |
`chosenWeights` | The weights chosen for the current timestep |
`horizonTime` | Vector with the simulation times of the current horizon |
`slackVariables` | The determined values of the configured slack variables |
`pareto` | struct with pareto status, all fields are empty if pareto evaluation is deactivated |
`pareto.front` | The currently evaluated pareto front |
`pareto.lastStepWithPareto` | The last local time step at which pareto evaluation was performed |
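For example, a disturbance prediction or evaluation function can inspect another agent's status; a sketch, where obtaining the handle `producer` is assumed to happen via your own setup (e.g. through the simulation object or an agents struct passed to your function):

```matlab
% Sketch: reading a neighbouring agent's predictions during negotiation;
% how the handle `producer` is obtained depends on your setup.
uOther = producer.status.uPred;            % its latest predicted input trajectory
xOther = producer.status.xPred{1}(1, :);   % first state row, first scenario
```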
If an agent's behaviour depends on other agents, this can be mapped using the negotiation functionality. At the beginning of each simulation step, i.e. when all agents' time steps match, the negotiation loop is initiated. This is done either by defining a static negotiation order in which the agents' `doNegotiation()` methods will be called, or by calling a completely user-defined negotiation callback, which makes use of `doNegotiation()` in some way.
The listing below shows what happens in an agent's `doNegotiation` method, which should help you understand at which point the negotiation callback comes into play and how you can work with it.
```matlab
function doNegotiation(this)
    % doNegotiation Executes a negotiation step for the agent
    this.measureState();                              % update the measured state
    this.status.dPred = this.getDisturbance();        % predict disturbances
    this.setParameterValues();                        % evaluate parameter sources
    this.status.uPred = this.getOptimalTrajectory();  % derive optimal input
    if ~isempty(this.callbackNegotiation)
        this.callbackNegotiation(this, this.simulation);
    end
end
```
The negotiation callback is called with the calling agent and the simulation object of the current simulation. The simulation object contains a shared struct, which can be used to exchange information between agents during the negotiation loop.
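A sketch of such a negotiation callback; the field name `shared` on the simulation object is an assumption here:

```matlab
function myNegotiationCallback(agent, simulation)
    % Sketch: publish this agent's first predicted input so other agents
    % can react to it later in the negotiation loop. The field name
    % 'shared' on the simulation object is an assumption.
    simulation.shared.(agent.name).uFirst = agent.status.uPred(:, 1);
end
```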
```matlab
[] = callbackMeasureState(agent, simulation)
```
State measurement in PARODIS is completely user-defined. If the callback `callbackMeasureState` is set, it is called in the `measureState` method of the agent. The callback is not expected to return any data, but instead to directly manipulate the agent's history and/or state.
This callback can be useful to update coupled systems to their actual state after a simulation step.
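A sketch of such a callback; `readPlantSensors()` is a hypothetical user function standing in for the coupled system:

```matlab
function callbackMeasureState(agent, simulation)
    % Sketch: overwrite the most recently stored state with an externally
    % measured value; readPlantSensors() is a hypothetical user function.
    agent.history.x(:, end) = readPlantSensors();
end
```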
The Agent object calls the Pareto object's methods for calculating the Pareto front in three steps:
- Call the extreme point function for calculating at least n extreme points.
- Call the Pareto front determination scheme to calculate Pareto-optimal points on the Pareto front.
- Call either the Interactivity Tool for selecting one Pareto-optimal point or call the selected metric function for automatically choosing a suitable Pareto-optimal point.