By: Maura Gallarotti – Project Engineer
The Optimization library provides several numerical optimization algorithms for solving different kinds of optimization tasks. If a model has parameters whose values are not fully determined and one wants to tune them to improve the system behaviour, this article may be of interest.
First of all, to optimize a model, one has to define the Tuner and the Criteria parameters of the optimization process as Modelica parameters in the model. The Tuner parameters are varied during the optimization process to meet the criteria.
For a new user, it may be worth taking a look at the sub-library Optimization.Criteria, which contains a selection of models for optimization criteria that can be helpful when preparing the system model for optimization. The library (see Figure 1) provides models with a collection of typical criteria, gathered in the blocks in Optimization.Criteria.Signals.
Figure 1: Optimization library.
Once the Criteria and Tuner parameters have been defined in the model, one can set up the optimization process by executing the function Optimization.Tasks.ModelOptimization.start.
Figure 2: Function for the model optimization.
Let us try an example: Figure 3 shows a simple control system for driving an inertia to a set speed. The criteria for the optimization are to minimize integratedNormU and integratedNormY, which represent, respectively, the control effort and the offset between the set speed and the actual speed. The tuner parameters are the gain and the time constant of the PI controller.
Figure 3: Simple model of a control system for driving an inertia to a set speed.
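As a rough illustration of this setup, the following sketch (plain Python, not the Optimization library; the plant model, constants, and all names are assumptions) tunes a PI controller's gain k and time constant Ti against a combined effort-plus-error criterion analogous to integratedNormU and integratedNormY:

```python
# Hypothetical stand-in for the example: tune a PI controller (gain k, time
# constant Ti) driving an "inertia" J*dw/dt = u to a set speed, minimizing
# both control effort and speed error. Not the Optimization library API.
from scipy.optimize import minimize

def simulate(k, Ti, t_end=5.0, dt=0.005, setpoint=1.0, J=1.0):
    """Forward-Euler simulation; returns integrated squared effort and error."""
    n = int(t_end / dt)
    w = xi = 0.0              # speed and PI integrator state
    int_u2 = int_e2 = 0.0
    for _ in range(n):
        e = setpoint - w
        xi += e * dt
        u = k * (e + xi / Ti) # textbook PI law
        w += (u / J) * dt
        int_u2 += u * u * dt
        int_e2 += e * e * dt
        if abs(w) > 1e6:      # unstable candidate: bail out with a huge cost
            return 1e12, 1e12
    return int_u2, int_e2

def objective(p, demand_u=1.0, demand_y=1.0):
    k, Ti = p
    if k <= 0.0 or Ti <= 0.0:  # keep the search in the feasible region
        return 1e12
    int_u2, int_e2 = simulate(k, Ti)
    # each criterion divided by its demand, mirroring the library's weighting
    return int_u2 / demand_u + int_e2 / demand_y

res = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
k_opt, Ti_opt = res.x
```

A derivative-free simplex search is used here because the simulated objective is not smooth in the tuners; the library's own methods are discussed below.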
Figure 4: GUI of the optimization process.
In ModelOptimizationSetup the name of the model to be optimized can be specified, together with the name of the Setup file where the results of the optimization will be saved (default: OptimizationLastRunModel.mo).
Figure 5: Tuner parameters specification.
In Tuner parameters, the parameters to be tuned can be selected by clicking the Select parameters button on the bottom right (see figure 5). A variable tree browser of the selected model then opens. The range of variation of each parameter has to be specified in min and max. If only discrete values are allowed for a parameter, these values can be specified in the discreteValues table. If one decides to keep a parameter fixed during the optimization process, it can be deactivated (active set to false).
Figure 6: Criteria specification.
In Criteria, the criteria for the optimization process can be specified (see figure 6). Since several criteria are usually used, they can be weighted relative to each other by assigning each one a demand value; each criterion then becomes its value divided by its demand.
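A minimal sketch of this demand-based weighting (assumed semantics, with made-up criterion values): each criterion contributes value/demand to the objective, so a larger demand de-emphasizes that criterion.

```python
# Demand weighting as described in the text: criterion value / demand.
def weighted_objective(values, demands):
    return sum(v / d for v, d in zip(values, demands))

equal = weighted_objective([2.0, 3.0], [1.0, 1.0])   # both criteria weighted equally
skewed = weighted_objective([2.0, 3.0], [5.0, 1.0])  # first criterion de-prioritized
```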
The objective of the optimization process can be to minimize or to limit each error criterion; this is specified through usage in figure 7.
Figure 7: Usage for the criteria.
In Preferences – Optimization, the optimization method can be specified.
The following optimization methods are available in the Optimization library:
- Sequential Quadratic Programming (SQP)
- Bounded BFGS method
- Pattern Search
- Simplex Method
- Genetic Algorithm
Two evaluation methods are implemented in the Optimization library:
- Random Search
- Systematic Tuner Variation
The BFGS algorithm approximates Newton's method and therefore has good convergence properties for simple optimization problems. For more complex optimization problems, Pattern Search and the Simplex Method are more robust.
If discrete values are used for the parameters, it may be worth trying the Genetic Algorithm to get a good result.
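The trade-off between these method families can be illustrated with SciPy (not the Optimization library itself): a quasi-Newton method on a smooth test function, the derivative-free Nelder-Mead simplex, and an evolutionary global method in the spirit of a genetic algorithm.

```python
# Comparing method families on the smooth Rosenbrock test function.
import numpy as np
from scipy.optimize import minimize, differential_evolution, rosen

x0 = np.array([-1.2, 1.0])
bfgs = minimize(rosen, x0, method="BFGS")            # gradient-based quasi-Newton
simplex = minimize(rosen, x0, method="Nelder-Mead")  # robust, derivative-free
# population-based global search, in the spirit of a genetic algorithm:
ga_like = differential_evolution(rosen, bounds=[(-2, 2), (-2, 2)], seed=0)
```

All three reach the minimum at (1, 1) here; on noisy or discontinuous objectives the gradient-based method would be the first to fail, which matches the robustness ordering described above.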
Figure 8: Optimization preferences specifications.
In Simulation, one can specify the simulation preferences like start and stop time or the numerical integration algorithm.
To reduce the computational time, independent simulation runs of a model may be executed in parallel.
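The pattern of farming out independent runs can be sketched as follows (a thread pool is used here so the snippet stays self-contained; for CPU-bound model simulations a process pool would be the natural choice; simulate_once is a hypothetical stand-in for one model run):

```python
# Evaluating independent candidate parameter sets in parallel.
from concurrent.futures import ThreadPoolExecutor  # ProcessPoolExecutor for CPU-bound runs

def simulate_once(params):
    """Hypothetical stand-in for one independent model simulation."""
    k, Ti = params
    return (k - 2.0) ** 2 + (Ti - 0.5) ** 2  # placeholder cost

candidates = [(k, Ti) for k in (1.0, 2.0, 3.0) for Ti in (0.25, 0.5)]
with ThreadPoolExecutor() as pool:
    costs = list(pool.map(simulate_once, candidates))
best = candidates[costs.index(min(costs))]  # -> (2.0, 0.5)
```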
Figures 9 and 10 show the results of the optimization process for the example model, setting the same weight for both criteria. Figure 10 shows the offset between the set speed and the actual speed and the effort of the control system.
If the demand of the criterion integratedNormU is set to 5, the optimization tool will minimize the two errors integratedNormU/5 and integratedNormY, prioritizing the error on the speed. In this case, the accuracy of the speed will be much higher; however, the effort of the control system will be higher too (see figure 11).
Figure 9: Results from the optimization process.
Figure 10: Simulation results with the same demand for both criteria.
Figure 11: Simulation results when the weight of the criterion integratedNormU (effort of the control system) is lower than that of integratedNormY (speed error).
Apart from Model optimization, different optimization tasks can be performed (see Figure 12):
- Function optimization
- Multi case model optimization
- Trajectory optimization
- Real time optimization
- Periodic steady state initialization
Figure 12: Optimization tasks in the Optimization library.
The task Function Optimization is designed for the generic case of an optimization problem in which the user provides a Modelica function that evaluates the criteria and, optionally, a user-defined function for the evaluation of the Jacobian matrix.
Multi Case Model Optimization can be used when one is interested in the performance and stability of a system not only at one operating point but under several operating conditions. The multi case parameter optimization starts several model simulations (of the same model class) for each evaluation of the optimization objective functions.
In trajectory optimization, the tuners are varied by the optimization algorithm and a model simulation is performed for each computation of the criteria. The control trajectories are approximated by B-splines of interpolation degree k, defined by N equidistant knots on the time interval.
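Such a B-spline parameterization can be sketched with SciPy (the clamped knot vector used here is one common convention; the library's exact parameterization is not documented in this article):

```python
# A control trajectory u(t) as a degree-k B-spline over N equidistant knots;
# the optimizer would vary the control points c.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # interpolation degree
t0, t1 = 0.0, 1.0                        # time interval
N = 8                                    # equidistant grid points
# clamped knot vector: each end knot appears k+1 times in total
inner = np.linspace(t0, t1, N)
knots = np.concatenate(([t0] * k, inner, [t1] * k))
n_ctrl = len(knots) - k - 1              # number of control points (= tuners)
c = np.ones(n_ctrl)                      # constant trajectory as a sanity check
u = BSpline(knots, c, k)                 # u(t): the control trajectory
```

With all control points equal to one, u(t) is identically one on the interval (partition of unity), which is a quick check that the knot vector is well-formed.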
For real time optimization, parameter optimization of a Modelica function is performed in a sampled-data system during simulation: at every sample time, an optimization algorithm is called to determine the minimum of the Modelica function.
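The sampled-data pattern amounts to one small optimization per sample instant; a sketch with a placeholder criterion (f is a hypothetical stand-in for the Modelica function):

```python
# One optimization per sample time, conditioned on the current measurement.
import numpy as np
from scipy.optimize import minimize_scalar

def f(u, y_meas):
    return (u - 0.5 * y_meas) ** 2        # placeholder criterion

u_traj = []
for y_meas in np.linspace(0.0, 2.0, 5):   # measurements at the sample times
    res = minimize_scalar(f, args=(y_meas,))
    u_traj.append(res.x)                  # optimal input for this sample
```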
Finally, periodic steady state initialization optimizes the start values of the states so that, after a given period of time, the simulation result passes through the start state again.
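This can be viewed as a single-shooting root-finding problem on the start values; a sketch with a toy periodically forced system (the model and period are assumptions for illustration):

```python
# Find x0 such that x(T; x0) = x0, i.e. the periodicity defect vanishes.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

T = 2 * np.pi                             # assumed period of the forcing

def rhs(t, x):
    return [-x[0] + np.sin(t)]            # placeholder periodic system

def shoot(x0):
    sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1] - x0              # periodicity defect x(T) - x0

x0_periodic = fsolve(shoot, [0.0])        # optimized start values
```

For this system the periodic orbit is x(t) = (sin t - cos t)/2, so the computed start value is -0.5.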