CST2013: Optimizer - Settings
Home: Simulation > Start Simulation > Optimizer
On this property page you can select the algorithm type, choose the parameters to be varied during the optimization, and define the limits of these parameters.
Simulation type and general controls
See Optimizer help page.
Algorithm
Choose between seven algorithm types. The Trust Region Framework is the most modern of the implemented algorithms. It uses local linear models on primary data and is able to exploit sensitivity information if it is provided by the solver. The Interpolated Quasi Newton algorithm makes use of approximated gradient information to achieve fast convergence rates. The Powell optimizer applies a line search for each parameter. However, these algorithms are sensitive to the choice of the starting point in the parameter space. If the starting point is close to the desired optimum or the (unknown) goal function is sufficiently smooth, the local algorithms will converge quickly. The Interpolated Quasi Newton optimizer is fast due to its support of interpolation of primary data, but in some cases it may not be as accurate as the slower Classic Powell optimizer. The Trust Region Framework is the most robust of the algorithms, because the trust region approach always assures convergence to a stationary point. It is also very efficient, avoiding many solver runs by interpolating primary data without sacrificing accuracy. Especially if the system is modeled such that sensitivity information can be exploited, it achieves the best convergence rates of all the algorithms.
The Nelder Mead Simplex Algorithm generates a set of starting points and does not need gradient information to determine its search direction. This is an advantage over the other local algorithms as soon as the number of variables grows. It is also less dependent on the chosen starting point because it starts with a set of points distributed in the parameter space, which, compared with the other local algorithms, is an advantage if you have a poor starting point but a disadvantage if your starting point already lies close to the desired optimum.
Another advantage of using neither gradient information nor an interpolation approach to avoid some evaluations is that this algorithm is able to continue the optimization even if the model cannot be evaluated for some parameter settings. A parameter combination for which results cannot be produced is called infeasible. The CMA Evolutionary Strategy, the Genetic Algorithm and the Particle Swarm Optimization can also continue the optimization despite infeasible points, but only if the interpolation feature is not switched on.
If a non-smooth goal function is expected, the starting point is far away from the optimum, or a large parameter space is to be explored, a global algorithm should be preferred. For the featured global optimizers a maximal number of iterations can be specified, so the maximal number of goal function evaluations, and thus the optimization time, can be determined a priori. Another advantage of the global optimizers is that the number of evaluations is independent of the number of parameters. Therefore the choice of a global optimizer over a local one can pay off if the optimization problem has a large number of parameters. The CMA Evolutionary Strategy, the most sophisticated of the implemented global optimizers, uses a statistical model in combination with a step size parameter. In addition, the history of successful optimization steps is exploited. This improves the algorithm's performance without losing its global optimization properties.
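As an illustration of this fixed, a-priori evaluation budget, the following minimal particle-swarm-style loop (a generic textbook sketch in Python, not CST's implementation; the function name, the coefficients and the quadratic goal are assumptions made for this example) evaluates the goal exactly swarm_size * (iterations + 1) times, no matter how many parameters are optimized:

    import numpy as np

    def particle_swarm_sketch(goal, bounds, swarm_size=20, iterations=30, seed=1):
        # Generic particle swarm loop (illustrative sketch only, not CST's optimizer).
        # Total goal evaluations: swarm_size * (iterations + 1), fixed a priori and
        # independent of the number of parameters (= len(bounds)).
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        x = rng.uniform(lo, hi, size=(swarm_size, len(bounds)))   # particle positions
        v = np.zeros_like(x)                                      # particle velocities
        p_best = x.copy()                                         # personal best points
        p_val = np.array([goal(p) for p in x])                    # personal best values
        g_best = p_best[np.argmin(p_val)]                         # global best point
        for _ in range(iterations):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([goal(p) for p in x])
            improved = vals < p_val
            p_best[improved], p_val[improved] = x[improved], vals[improved]
            g_best = p_best[np.argmin(p_val)]
        return g_best, p_val.min()

    # Four parameters here; adding more parameters only lengthens the bounds list,
    # the number of goal evaluations stays the same.
    best, val = particle_swarm_sketch(lambda p: float(np.sum(p ** 2)), [(-1.0, 1.0)] * 4)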
Trust Region Framework: Selects a local optimization technique embedded in a trust region framework. The algorithm starts by building a linear model on primary data in a "trust" region around the starting point. Sensitivity information of the primary data is exploited for building this model if it is provided. Fast optimizations are performed on this local model to obtain a candidate for a new solver evaluation. The new point is accepted if it is superior to the anchors of the model. If the model is not accurate enough, the radius of the trust region is decreased and a model on the new trust region is created. The algorithm has converged once the trust region radius or the distance to the next predicted optimum becomes smaller than the specified domain accuracy. A simplified sketch of this loop is shown after the algorithm descriptions below.
Nelder Mead Simplex Algorithm: Selects the local Simplex optimization algorithm by Nelder and Mead. This method is a local optimization technique. If N is the number of parameters, it starts with N+1 points distributed in the parameter space.
CMA Evolutionary Strategy: Selects the global covariance matrix adaptation evolutionary strategy.
Genetic Algorithm: Selects the global genetic optimizer.
Particle Swarm Optimization: Selects the global particle swarm optimizer.
Interpolated Quasi Newton: Selects the local optimizer supporting interpolation of primary data. This optimizer is fast in comparison to the Classic Powell optimizer but may be less accurate. In addition, you can set the number N of optimizer passes (1 to 10) for this optimizer type. A number N greater than 1 forces the optimizer to start over (N-1) times. Within each optimizer pass the minimum and maximum settings of the parameters are adjusted, approaching the optimal parameter setting. Increase the number of passes to values greater than 1 (e.g., 2 or 3) to obtain more accurate results. For the most common EM optimizations it is recommended not to increase this number beyond 3, but rather to increase the number of samples in the parameter list if the results are not suitable. The corresponding numerical solver is only evaluated for the defined samples; all other parameter combinations are evaluated using the interpolation of primary data. At the end of each optimization pass the optimum predicted by this approach is verified by another evaluation of the numerical solver. A simplified sketch of this pass and interpolation scheme is also shown after the algorithm descriptions below.
Classic Powell: Selects the local optimizer without interpolation of primary data. In addition, it is necessary to set the accuracy, which affects both the accuracy of the optimal parameter settings and the time at which the optimization process terminates. For optimizations with more than one parameter, the Trust Region Framework, the Interpolated Quasi Newton or the Nelder Mead Simplex Algorithm should be preferred over this technique.
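To make the trust region loop of the Trust Region Framework more concrete, here is a strongly simplified, generic Python sketch. It is an assumption-based illustration, not CST's implementation: the linear model is built from plain finite differences instead of primary data and sensitivities, and the radius update is the simplest possible choice.

    import numpy as np

    def trust_region_sketch(goal, x0, radius=0.5, domain_accuracy=1e-3, max_iter=50):
        # Generic trust-region loop (illustrative only): build a linear model around
        # the anchor, optimize it inside the trust region, accept the candidate if it
        # improves on the anchor, otherwise shrink the region; stop once the radius
        # or the predicted step falls below the domain accuracy.
        x = np.asarray(x0, dtype=float)
        fx = goal(x)                                       # anchor evaluation
        for _ in range(max_iter):
            h = 1e-2 * radius                              # finite-difference step for the model
            grad = np.array([(goal(x + h * e) - fx) / h for e in np.eye(len(x))])
            if not np.any(grad):
                break
            step = -radius * grad / np.linalg.norm(grad)   # model optimum on the region boundary
            candidate = x + step
            f_candidate = goal(candidate)                  # new "solver" evaluation
            if f_candidate < fx:
                x, fx = candidate, f_candidate             # candidate superior to the anchor
            else:
                radius *= 0.5                              # model not accurate enough
            if radius < domain_accuracy or np.linalg.norm(step) < domain_accuracy:
                break
        return x, fx

    # Example: two normalized parameters, quadratic goal with its optimum at (0.3, 0.7).
    best_x, best_f = trust_region_sketch(lambda p: float(np.sum((p - [0.3, 0.7]) ** 2)), [0.5, 0.5])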
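In the same spirit, the pass and interpolation scheme of the Interpolated Quasi Newton optimizer can be sketched for a single parameter as follows. Again this is only an assumption-based illustration: the shrink factor, the fine evaluation grid and the plain linear interpolation are stand-ins, not CST's actual interpolation of primary data.

    import numpy as np

    def interpolated_passes_sketch(solver_goal, lo, hi, samples=5, passes=3):
        # Per pass: run the "solver" only at the defined samples, interpolate the goal
        # everywhere else, take the interpolated minimum as the predicted optimum,
        # verify it with one extra solver run, then tighten min/max around the best point.
        best_x, best_f = None, np.inf
        for _ in range(passes):
            xs = np.linspace(lo, hi, samples)
            fs = np.array([solver_goal(x) for x in xs])    # true solver evaluations
            fine = np.linspace(lo, hi, 201)
            interp = np.interp(fine, xs, fs)               # interpolated goal values
            x_pred = fine[np.argmin(interp)]               # predicted optimum of this pass
            f_pred = solver_goal(x_pred)                   # verification run at the end of the pass
            if f_pred < best_f:
                best_x, best_f = x_pred, f_pred
            half_width = 0.25 * (hi - lo)                  # shrink the min/max range
            lo, hi = best_x - half_width, best_x + half_width
        return best_x, best_f

    # Example: 5 samples per pass, goal minimal at 0.37 inside the range [0, 1].
    print(interpolated_passes_sketch(lambda x: (x - 0.37) ** 2, 0.0, 1.0))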
Reset min/max
Reset the minimum and maximum value of each parameter to the initial value minus and plus the entered percentage of that initial value, respectively. If an initial value is 0, the minimum/maximum value is set to -/+ the ratio of the percentage to 100.
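As a small worked example of this rule (an illustrative helper, not part of CST; how negative initial values are treated is an assumption here):

    def reset_min_max(initial, percentage):
        # Non-zero initial value: limits are the initial value -/+ the given
        # percentage of that value; an initial value of 0 gives -/+ percentage/100.
        if initial != 0:
            delta = abs(initial) * percentage / 100.0      # abs() is an assumption
            return initial - delta, initial + delta
        return -percentage / 100.0, percentage / 100.0

    print(reset_min_max(5.0, 20))   # (4.0, 6.0)
    print(reset_min_max(0.0, 20))   # (-0.2, 0.2)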
Use current as initial/anchor values
Activate this check box to initialize the optimizer with the current parameter values. This means that you are able to continue the optimization process, starting the solver with the previously achieved parameter results. However, if you want to run the optimizer several times with the same initial parameter conditions, you have to disable this check box.
If one of the global optimization techniques is used, the algorithm needs a set of distributed starting points. In this case the check box has no effect.
Use data of previous calculations
Activate this check box to import previously calculated results into new optimizations in order to speed up the optimization process. If the result templates on which the optimizer goals are based have already been evaluated and the corresponding parameter combinations lie in the defined parameter space, the results may be imported without the need for recalculation. For the local algorithms it is possible that the initial point is replaced if a more suitable point is found in advance. For the algorithms that use a set of initial points, multiple initial points may be replaced if suitable data is found. Points are replaced by previously calculated ones if the parameter combinations are very close or if the corresponding goal values are superior to those of previously calculated parameters in the neighbourhood. This may disturb the selected distribution type of the initial point set, but the algorithm will find a good compromise between points with good goal values and a well distributed set of starting points in the parameter space. Keep in mind that this feature makes the reproducibility of optimizations more difficult, because after an optimization there will be more potential imports available than before.
Parameter list
Check box: Within the parameter list you can select the parameters varied during the optimization run.
Parameter: Shows the name of the parameters (read only).
Min/Max: You can set the minimum and maximum boundaries of the parameters selected for the optimization process either manually or using the Reset min/max button as described above. The minimum/maximum values must be less/greater than the initial parameter value.
Samples: If you use the Interpolated Quasi Newton, the Genetic or the Particle Swarm optimizer, you have to set a value for the number of samples (minimum 3). The number of samples defines the parameter values that are used to calculate exact 3D solutions with the currently selected solver. Please note that a high number N of samples does not automatically mean that N 3D solver simulations will be performed. The sample value rather defines the step width for the locally searching optimizer. For a larger parameter range, a higher sample value may lead to more accurate results. If the Genetic or the Particle Swarm optimizer is used and the interpolation is switched off, this setting has no effect.
Initial/Anchor: You can modify the initial/anchor parameter settings here.
Current: Shows the parameter values of the current model.
Best: Shows the best parameter combination the optimizer has found so far.
The following settings are available depending on the chosen algorithm type:
Properties
If a global algorithm or the Nelder Mead Simplex Algorithm is selected, this button opens the properties dialog of the corresponding optimizer.
Use interpolation
This check box is only available for the Genetic Algorithm and the Particle Swarm Optimization. Check this box to activate the interpolation; the sample values in the parameter list are only taken into account while the interpolation is active.
For both global optimizers it is possible to switch on the interpolation of primary data. If the interpolation is applied, the only true solver runs are those for the evaluation of the specified anchors and a final solver run for the estimated best parameters. All other goal function evaluations are interpolated.
Please note that global optimization algorithms are likely to explore most of the parameter space, so it is probable that all or nearly all anchor points will actually be evaluated. Keep in mind that the number of solver runs needed for the interpolation depends on the number of parameters, whereas the number of solver runs needed by the two global optimization algorithms is independent of the number of parameters. Because of this, the interpolation feature only pays off if the parameter space is not too high-dimensional or a large number of iterations is planned.
Since the goal functions that can be defined always have non-negative values, the optimization is automatically stopped if one of the anchor evaluations yields a goal value equal to zero.
Include anchor in initial point set
This check box is only available for the Nelder Mead Simplex Algorithm. If this feature is switched on, the point defined as the anchor point in the parameter list is included in the initial data set of the algorithm. If the current parameter settings are already quite good, it makes sense to include this point in the starting set. After the set of initial points is generated, the point of the automatically generated set that is closest to the anchor is substituted with the predefined point. However, if the current point was created by a previous run of a local optimizer and a second optimization is planned on a reduced parameter space, this setting should be turned off, because it increases the risk that the second optimization converges to the same local optimum as before. In this case the second optimization would not yield any improvement.
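The substitution described above can be pictured with the following few lines (illustrative only; how the initial points are generated and which distance measure is used are assumptions):

    import numpy as np

    def include_anchor(initial_points, anchor):
        # Replace the automatically generated initial point that lies closest to the
        # anchor with the anchor itself, keeping the size of the point set unchanged.
        points = np.array(initial_points, dtype=float)
        anchor = np.asarray(anchor, dtype=float)
        closest = np.argmin(np.linalg.norm(points - anchor, axis=1))
        points[closest] = anchor
        return points

    # Example: four generated points in a 2D parameter space, anchor at (0.2, 0.8).
    print(include_anchor([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1], [0.3, 0.7]], [0.2, 0.8]))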
Optimizer passes
This setting is only available for the Interpolated Quasi Newton optimizer. Set the number of optimizer passes (1 to 10) as described for this optimizer above.
Domain accuracy
This setting is only available for the Trust Region Framework. Set the accuracy of the optimizer in the parameter space, where all parameter ranges are mapped to the interval [0,1].
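Because every parameter range [min, max] is mapped onto [0, 1], a single accuracy threshold applies to all parameters regardless of their physical units or magnitudes, as the following illustrative values (assumed for this example, not taken from the dialog) show:

    def normalized(value, p_min, p_max):
        # Map a parameter value from its [min, max] range onto [0, 1].
        return (value - p_min) / (p_max - p_min)

    # A 0.5 mm change inside a 10 mm wide range and a 0.05 GHz change inside a
    # 1 GHz wide range are both steps of about 0.05 in the normalized parameter
    # space, so both are compared against the same domain accuracy.
    print(normalized(5.5, 0.0, 10.0) - normalized(5.0, 0.0, 10.0))   # ~0.05
    print(normalized(2.05, 2.0, 3.0) - normalized(2.0, 2.0, 3.0))    # ~0.05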
Accuracy
This setting is only available for the Classic Powell optimizer. Set the accuracy, which affects both the accuracy of the optimal parameter settings and the time at which the optimization process terminates.