
neldermead

Provides several direct search optimization algorithms based on the simplex method.

Synopsis

newobj = neldermead_new ()
this = neldermead_destroy (this)
this = neldermead_configure (this, key, value)
value = neldermead_cget (this, key)
value = neldermead_get (this, key)
this = neldermead_search (this)
this = neldermead_restart (this)
[this, result] = neldermead_function (this, x)

Description

This class provides several direct search optimization algorithms based on the simplex method.

The optimization problem to solve is the minimization of a cost function, with bounds and nonlinear inequality constraints:

min f(x)
l_i <= x_i <= h_i, i = 1, ..., n
g_i(x) >= 0, i = 1, ..., nbineq

where

n

number of variables

nbineq

number of inequality constraints

The provided algorithms are direct search algorithms, i.e. algorithms which do not use the derivative of the cost function. They are based on the update of a simplex, which is a set of k >= n+1 vertices, where each vertex is associated with one point and one function value.

The following algorithms are available:

Spendley, Hext and Himsworth fixed shape simplex method

This algorithm solves an unconstrained optimization problem with a fixed shape simplex made of k=n+1 vertices.

Nelder and Mead variable shape simplex method

This algorithm solves an unconstrained optimization problem with a variable shape simplex made of k=n+1 vertices.

Box complex method

This algorithm solves a constrained optimization problem with a variable shape simplex made of an arbitrary number of vertices k (k = 2n is recommended by Box).

See the demonstrations, in the Optimization section, for an overview of this component.

See the "Nelder-Mead User's Manual" on Scilab's wiki and on the Scilab forge for further informations.

Design

The neldermead component is built on top of the optimbase and optimsimplex components.

Functions

The following functions are available.

newobj = neldermead_new ()

Creates a new neldermead object.

newobj

The new object.

this = neldermead_destroy (this)

Destroys the given object.

this

The current object.

this = neldermead_configure (this,key,value)

Configures the current object with the given value for the given key.

this

The current object.

key

the key to configure. The following keys are available.

-verbose

set to 1 to enable verbose logging (default is 0).

-verbosetermination

set to 1 to enable verbose termination logging (default is 0).

-x0

the initial guess, as an n x 1 column vector, where n is the number of variables.

-maxfunevals

the maximum number of function evaluations (default is 100). If this criterion is triggered, the status of the optimization is set to "maxfuneval".

-maxiter

the maximum number of iterations (default is 100). If this criterion is triggered, the status of the optimization is set to "maxiter".

-tolfunabsolute

the absolute tolerance for the function value (default is 0.0).

-tolfunrelative

the relative tolerance for the function value (default is %eps).

-tolfunmethod

the method used for the tolerance on function value in the termination criteria.

The following values are available: %t, %f (default is %f). If this criterion is triggered, the status of the optimization is set to "tolf".

-tolxabsolute

the absolute tolerance on x (default is 0.0).

-tolxrelative

the relative tolerance on x (default is %eps).

-tolxmethod

the method used for the tolerance on x in the termination criteria.

The following values are available: %t, %f (default is %t). If this criterion is triggered, the status of the optimization is set to "tolx".

-function

the objective function, which computes the value of the cost and the nonlinear constraints, if any.

See below for the details of the communication between the optimization system and the cost function.

-costfargument

an additional argument, passed to the cost function.

-outputcommand

a command which is called back for output.

See below for the details of the communication between the optimization system and the output command function.

-outputcommandarg

an additional argument, passed to the output command.

-numberofvariables

the number of variables to optimize (default is 0).

-storehistory

set to 1 to enable the history storing (default is 0).

-boundsmin

the minimum bounds for the parameters, as an array of values (default is empty, i.e. there are no bounds).

-boundsmax

the maximum bounds for the parameters, as an array of values (default is empty, i.e. there are no bounds).

-nbineqconst

the number of inequality constraints (default is 0).

-method

the name of the algorithm to use. The following methods are available:

"fixed"

the Spendley et al. fixed simplex shape algorithm. This algorithm is for unconstrained problems (i.e. bounds and nonlinear constraints are not taken into account).

"variable"

the Nelder-Mead variable simplex shape algorithm. This algorithm is for unconstrained problems (i.e. bounds and nonlinear constraints are not taken into account).

"box"

the Box complex algorithm. This algorithm takes into account bounds and nonlinear inequality constraints.

"mine"

the user-defined algorithm, associated with the -mymethod option. See below for details.

-simplex0method

the method to use to compute the initial simplex. The first vertex in the simplex is always the initial guess associated with the -x0 option. The following methods are available:

"given"

the coordinates associated with the -coords0 option are used to compute the initial simplex, with an arbitrary number of vertices.

This allows the user to set up the initial simplex with a specific method which is not provided by the current component (for example, with a simplex computed from a design of experiments). It also allows the initial simplex to be configured so that a specific behaviour of the algorithm can be reproduced (for example, the McKinnon test case).

The given matrix is expected to have the shape described for the -coords0 option below, i.e. nbve rows and n columns, where nbve is the number of vertices and n is the number of variables.

"axes"

the simplex is computed from the coordinate axes and the length associated with the -simplex0length option.

"spendley"

the simplex is computed so that it is regular with the length associated with the -simplex0length option (i.e. all the edges have the same length).

"pfeffer"

the simplex is computed from a heuristic, in the neighborhood of the initial guess. This initial simplex depends on the -simplex0deltausual and -simplex0deltazero options.

"randbounds"

the simplex is computed from the bounds and a random number. This method is available only if bounds have been configured: if bounds are not available, an error is generated. This method is usually associated with Box's algorithm. The number of vertices in the simplex is taken from the -boxnbpoints option.

-coords0

the coordinates of the vertices of the initial simplex. If the -simplex0method option is set to "given", these coordinates are used to compute the initial simplex. This matrix is expected to have shape nbve x n where nbve is the number of vertices and n is the number of variables.

-simplex0length

the length to use when the initial simplex is computed with the "axes" or "spendley" method. If the initial simplex is computed with the "spendley" method, the length is expected to be a scalar value. If the initial simplex is computed with the "axes" method, it may be either a scalar value or a vector of n values, where n is the number of variables.

-simplex0deltausual

the relative delta for non-zero parameters in the "pfeffer" method. The default value is 0.05.

-simplex0deltazero

the absolute delta for parameters equal to zero in the "pfeffer" method. The default value is 0.0075.

-rho

the reflection coefficient. This parameter is used when the -method option is set to "fixed" or "variable". The default value is 1.0.

-chi

the expansion coefficient. This parameter is used when the -method option is set to "variable". The default value is 2.0.

-gamma

the contraction coefficient. This parameter is used when the -method option is set to "variable". The default value is 0.5.

-sigma

the shrinkage coefficient. This parameter is used when the -method option is set to "fixed" or "variable". The default value is 0.5.

-tolsimplexizemethod

set to %f to disable the tolerance on the simplex size. The default value is %t. If this criterion is triggered, the status of the optimization is set to "tolsize".

When this criterion is enabled, the values of the -tolsimplexizeabsolute and -tolsimplexizerelative options are used in the termination criteria. The method used to compute the size is the "sigmaplus" method.

-tolsimplexizeabsolute

the absolute tolerance on the simplex size. The default value is 0.0.

-tolsimplexizerelative

the relative tolerance on the simplex size. The default value is %eps.

-tolssizedeltafvmethod

set to %t to enable the termination criterion based on the size of the simplex and the difference of function value in the simplex. The default value is %f. If this criterion is triggered, the status of the optimization is set to "tolsizedeltafv".

This termination criterion uses the values of the -tolsimplexizeabsolute and -toldeltafv options. This criterion is the one used by Matlab's fminsearch.

-toldeltafv

the absolute tolerance on the difference between the highest and the lowest function values.

-tolvarianceflag

set to %t to enable the termination criterion based on the variance of the function value. If this criterion is triggered, the status of the optimization is set to "tolvariance".

This criterion is suggested by Nelder and Mead.

-tolabsolutevariance

the absolute tolerance on the variance of the function values of the simplex.

-tolrelativevariance

the relative tolerance on the variance of the function values of the simplex.

-kelleystagnationflag

set to %t to enable the termination criterion using Kelley's stagnation detection, based on a sufficient decrease condition. The default value is %f. If this criterion is triggered, the status of the optimization is set to "kelleystagnation".

-kelleynormalizationflag

set to %f to disable the normalization of the alpha coefficient in Kelley's stagnation detection, i.e. use the value of the -kelleystagnationalpha0 option as is. The default value is %t, i.e. the simplex gradient of the initial simplex is taken into account in the stagnation detection.

-kelleystagnationalpha0

the parameter used in Kelley's stagnation detection. The default value is 1.e-4.

-restartflag

set to %t to enable the automatic restart of the algorithm. Default value is %f.

-restartdetection

the method to detect if the automatic restart must be performed. The following methods are available:

"oneill"

the factorial local optimality test by O'Neill is used. If the test finds a nearby point which is better than the computed optimum, a restart is performed.

"kelley"

the sufficient decrease condition by Kelley is used. If the test finds that the status of the optimization is "kelleystagnation", a restart is performed. This status may be generated if the -kelleystagnationflag option is set to %t.

The default method is "oneill".

-restartmax

the maximum number of restarts, when automatic restart is enabled via the -restartflag option. Default value is 3.

-restarteps

the relative epsilon value used to check for optimality in the factorial O'Neill restart detection. The default value is %eps.

-restartstep

the absolute step length used to check for optimality in the factorial O'Neill restart detection. It is expected to be either a positive double or a column vector of n positive doubles, where n is the number of variables in the problem. The default value is 1.0.

-restartsimplexmethod

the method to compute the initial simplex after a restart. The following methods are available.

"given"

the coordinates associated with the -coords0 option are used to compute the initial simplex, with an arbitrary number of vertices.

This allows the user to set up the initial simplex with a specific method which is not provided by the current component (for example, with a simplex computed from a design of experiments). It also allows the initial simplex to be configured so that a specific behaviour of the algorithm can be reproduced (for example, the McKinnon test case).

The given matrix is expected to have the shape described for the -coords0 option above, i.e. nbve rows and n columns, where nbve is the number of vertices and n is the number of variables.

"axes"

the simplex is computed from the coordinate axes and the length associated with the -simplex0length option.

"spendley"

the simplex is computed so that it is regular with the length associated with the -simplex0length option (i.e. all the edges have the same length).

"pfeffer"

the simplex is computed from a heuristic, in the neighborhood of the initial guess. This initial simplex depends on the -simplex0deltausual and -simplex0deltazero options.

"randbounds"

the simplex is computed from the bounds and a random number. This method is available only if bounds have been configured: if bounds are not available, an error is generated. This method is usually associated with Box's algorithm. The number of vertices in the simplex is taken from the -boxnbpoints option.

"oriented"

the simplex is computed so that it is oriented, as suggested by C.T. Kelley.

The default method is "oriented".

-scalingsimplex0

the algorithm used to scale the initial simplex into the nonlinear constraints. The following two algorithms are provided:

  • "tox0": scales the vertices toward the initial guess.

  • "tocentroid": scales the vertices toward the centroid, as recommended by Box.

If the centroid happens to be unfeasible, because the constraints are not convex, the scaling of the initial simplex toward the centroid may fail. Since the initial guess is always feasible, scaling toward the initial guess cannot fail. The default value is "tox0".

-boxnbpoints

the number of points in the initial simplex, when the -simplex0method option is set to "randbounds". The value of this option is also used to update the simplex when a restart is performed and the -restartsimplexmethod option is set to "randbounds". The default value is such that the number of points is twice the number of variables of the problem.

-boxineqscaling

the scaling coefficient used in Box's algorithm to scale the trial point, either for function improvement or to satisfy the constraints. Default value is 0.5.

-guinalphamin

the minimum value of alpha when scaling the vertices of the simplex into nonlinear constraints in Box's algorithm. Default value is 1.e-5.

-boxreflect

the reflection factor in Box's algorithm. Default value is 1.3.

-boxtermination

set to %t to enable Box termination criteria. Default value is %f.

-boxtolf

the absolute tolerance on difference of function values in the simplex, suggested by Box. This tolerance is used if the -boxtermination option is set to %t. Default value is 1.e-5.

-boxnbmatch

the number of consecutive match of Box termination criteria. Default value is 5.

-boxboundsalpha

the parameter used to project the vertices into the bounds in Box's algorithm. Default value is 1.e-6.

-mymethod

a user-defined simplex algorithm. See below for details (default is empty).

-myterminate

a user-defined terminate function. See below for details (default is empty).

-myterminateflag

set to %t to enable the user-defined terminate function (default is %f).

value

the value.
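For instance, the following sketch shows how a few of the previous keys might be configured; the values are arbitrary illustrations.

nm = neldermead_new ();
nm = neldermead_configure(nm,"-numberofvariables",2);
nm = neldermead_configure(nm,"-x0",[1.0 1.0].');
nm = neldermead_configure(nm,"-method","variable");
nm = neldermead_configure(nm,"-maxiter",200);
nm = neldermead_configure(nm,"-storehistory",1);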

value = neldermead_cget (this,key)

Gets the value for the given key. If the key is unknown, an error is generated.

this

The current object.

key

the name of the key to query. The list of available keys is the same as for the neldermead_configure function.

value = neldermead_get ( this , key )

Gets the value for the given key. If the key is unknown, an error is generated.

this

The current object.

key

the key to get.

The following keys are available:

-funevals

the number of function evaluations

-iterations

the number of iterations

-xopt

the x optimum, as an n x 1 column vector, where n is the number of variables.

-fopt

the optimum cost function value

-historyxopt

an array, with nbiter values, containing the history of x during the iterations.

This array is available after optimization if the history storing was enabled with the -storehistory option.

-historyfopt

an array, with nbiter values, containing the history of the function value during the iterations.

This array is available after optimization if the history storing was enabled with the -storehistory option.

-fx0

the function value for the initial guess

-status

a string containing the status of the optimization. See below for details about the optimization status.

-historysimplex

a matrix containing the history of the simplex during the iterations. This array has size nbiter x nbve x n, where nbiter is the number of iterations, nbve is the number of vertices in the simplex and n is the number of variables.

-simplex0

the initial simplex. This is a simplex object, which is suitable for processing with the optimsimplex component.

-simplexopt

the optimum simplex. This is a simplex object, which is suitable for processing with the optimsimplex component.

-restartnb

the number of actual restarts performed.

Most fields are available only after an optimization has been performed with one call to the neldermead_search method.
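For instance, after one call to neldermead_search, the results might be retrieved as follows; this sketch assumes that the -storehistory option was set to 1, so that the history keys are available.

nm = neldermead_search(nm);
xopt = neldermead_get(nm,"-xopt");
fopt = neldermead_get(nm,"-fopt");
status = neldermead_get(nm,"-status");
historyfopt = neldermead_get(nm,"-historyfopt");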

this = neldermead_search ( this )

Performs the optimization associated with the -method option and finds the optimum.

this

The current object.

If the -restartflag option is enabled, automatic restarts are performed, based on the -restartdetection option.

this = neldermead_restart ( this )

Restarts the optimization by updating the simplex and performing a new search.

this

The current object.

[ this , result ] = neldermead_function ( this , x )

Calls the cost function and returns the value.

this

The current object.

x

the point where the function is to be evaluated

index

optional, a flag to pass to the cost function (default = 1). See the section "The cost function" for available values of index.

The cost function

The -function option configures the cost function. The cost function is used to compute the objective function value f. If the -nbineqconst option is configured to a non-zero value, the cost function must also compute the value of the nonlinear, positive, inequality constraints c. The cost function can also take as input/output an additional argument, if the -costfargument option is configured. Depending on these options, the cost function can have one of the following headers:

//  Without additional data
   [ f , index ] = costf ( x , index )
   [ f , c , index ] = costf ( x , index )
//  With -costfargument
   [ f , index , data ] = costf ( x , index , data )
   [ f , c , index , data ] = costf ( x , index , data )

where

x

the current point, as a column vector

index

optional, an integer representing the value to compute

f

the value of the cost function

c

the value of the non-linear, positive, inequality constraints

data

a user-provided input/output argument

The index input parameter tells the cost function what is expected in the output arguments. It has the following meaning:

index = 2

compute f

index = 5

compute c

index = 6

compute f and c
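As an illustration, the following sketch shows one possible shape for a cost function with a single inequality constraint. The myconstrcost name and the constraint itself are arbitrary; following the convention stated above, the constraint value is positive when the constraint is satisfied.

function [ f , c , index ] = myconstrcost ( x , index )
  f = [];
  c = [];
  if ( index == 2 | index == 6 ) then
    f = x(1)^2 + x(2)^2;
  end
  if ( index == 5 | index == 6 ) then
    // positive when the constraint x(1) + x(2) <= 1 is satisfied
    c = 1 - x(1) - x(2);
  end
endfunction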

In the simplest case, there is no additional cost function argument and there are no nonlinear constraints. In this case, the cost function is expected to have the following header:

function [f, index]=myfunction(x, index)

The data argument is both an input and an output, and the following paragraphs explain why and how it works.

This feature may be used in the situation where the cost function has to update its environment from call to call. Its simplest use is to count the number of calls to the cost function, but this feature is already available directly. Consider the more practical situation where the optimization requires the execution of an underlying Newton method (a chemical solver, for example). This Newton method requires an initial guess x0. If the initial guess for this underlying Newton method is kept constant, the Newton method may have difficulties converging when the current optimization point gets far away from its initial point. If the -costfargument option is provided, the initial guess for the Newton method can be updated so that it gets the value of the previous call. This way, the Newton method has fewer difficulties converging and the cost function evaluation may be faster.

We now present how the feature works. Every time the cost function is called back, the -costfargument option is passed to the cost function as an input argument. If the cost function modifies its content in the output argument, the content of the -costfargument option is updated accordingly. Once the optimization is performed, the user may call the neldermead_cget function and get back an updated -costfargument content.
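The following sketch illustrates the mechanism on its simplest possible use, counting the calls to the cost function. The tlist type name T_MYDATA and the field name nbcalls are arbitrary illustrations.

function [ f , index , mydata ] = mycostf ( x , index , mydata )
  f = x(1)^2 + x(2)^2;
  // update the environment of the cost function
  mydata.nbcalls = mydata.nbcalls + 1;
endfunction

mydata = tlist(["T_MYDATA","nbcalls"],0);
nm = neldermead_new ();
nm = neldermead_configure(nm,"-numberofvariables",2);
nm = neldermead_configure(nm,"-function",mycostf);
nm = neldermead_configure(nm,"-costfargument",mydata);
nm = neldermead_configure(nm,"-x0",[1.0 1.0].');
nm = neldermead_search(nm);
// get back the updated content of -costfargument
mydata = neldermead_cget(nm,"-costfargument");
mprintf("Cost function called %d times\n",mydata.nbcalls);
nm = neldermead_destroy(nm);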

The output function

The -outputcommand option configures a command which is called back at the start of the optimization, at each iteration and at the end of the optimization.

The output function must have the following header:

function outputcmd(state, data, myobj)

where

state

a string representing the current state of the algorithm. Available values are "init", "iter", "done".

data

a tlist containing at least the following entries

x

the current optimum

fval

the current function value

iteration

the current iteration index

funccount

the number of function evaluations

simplex

the current simplex

step

the previous step in the algorithm. The following values are available: "init", "done", "reflection", "expansion", "insidecontraction", "outsidecontraction", "reflectionnext", "shrink".

myobj

a user-defined parameter.

This input parameter is defined with the -outputcommandarg option.

The output function may be used when debugging the specialized optimization algorithm, so that verbose logging is produced. It may also be used to write one or several report files in a specialized format (ASCII, LaTeX, Excel, HDF5, etc.). The user-defined parameter may be used in that case to store file names or logging options.

Termination

The current component takes into account several generic termination criteria.

The following termination criteria are enabled by default:

  • -maxiter,

  • -maxfunevals,

  • -tolxmethod,

  • -tolsimplexizemethod.

The optimization_terminate function uses a set of rules to compute if the termination occurs, which leads to an optimization status which is equal to one of the following: "continue", "maxiter", "maxfuneval", "tolf", "tolx", "tolsize", "tolsizedeltafv", "kelleystagnation", "tolboxf", "tolvariance". The value of the status may also be a user-defined string, in the case where a user-defined termination function has been set.

The following set of rules is examined in this order.

  • By default, the status is "continue" and the terminate flag is %f.

  • The number of iterations is examined and compared to the -maxiter option: if the following condition

    iterations >= maxiter

    is true, then the status is set to "maxiter" and terminate is set to %t.

  • The number of function evaluations is examined and compared to the -maxfunevals option: if the following condition

    funevals >= maxfunevals

    is true, then the status is set to "maxfuneval" and terminate is set to %t.

  • The tolerance on function value is examined depending on the value of the -tolfunmethod option.

    %f

    then the criterion is just ignored.

    %t

    if the following condition

    abs(currentfopt) < tolfunrelative * abs(previousfopt) + tolfunabsolute

    is true, then the status is set to "tolf" and terminate is set to %t.

    The relative termination criterion on the function value works well if the function value at optimum is near zero. In that case, the function value at initial guess fx0 may be used as previousfopt.

    This criterion is sensitive to the -tolfunrelative and -tolfunabsolute options.

    The absolute termination criterion on the function value works if the user has an accurate idea of the optimum function value.

  • The tolerance on x is examined depending on the value of the -tolxmethod option.

    %f

    then the criterion is just ignored.

    %t

    if the following condition

    norm(currentxopt - previousxopt) < tolxrelative * norm(currentxopt) + tolxabsolute

    is true, then the status is set to "tolx" and terminate is set to %t.

    This criterion is sensitive to the -tolxrelative and -tolxabsolute options.

    The relative termination criterion on x works well if x at optimum is different from zero. In that case, the condition measures the distance between two iterates.

    The absolute termination criterion on x works if the user has an accurate idea of the scale of the optimum x. If the optimum x is near 0, the relative tolerance will not work and the absolute tolerance is more appropriate.

  • The tolerance on simplex size is examined depending on the value of the -tolsimplexizemethod option.

    %f

    then the criterion is just ignored.

    %t

    if the following condition

    ssize < tolsimplexizerelative * simplexsize0 + tolsimplexizeabsolute

    is true, where simplexsize0 is the size of the simplex at iteration 0, then the status is set to "tolsize" and terminate is set to %t.

    The size of the simplex is computed with the "sigmaplus" method of the optimsimplex component. This criterion is sensitive to the -tolsimplexizeabsolute and -tolsimplexizerelative options.

  • The absolute tolerance on simplex size and absolute difference of function value is examined depending on the value of the -tolssizedeltafvmethod option.

    %f

    then the criterion is just ignored.

    %t

    if both the following conditions

    ssize < tolsimplexizeabsolute
    shiftfv < toldeltafv

    are true, where ssize is the current simplex size and shiftfv is the absolute value of the difference of function value between the highest and lowest vertices, then the status is set to "tolsizedeltafv" and terminate is set to %t.

  • The stagnation condition based on Kelley sufficient decrease condition is examined depending on the value of the -kelleystagnationflag option.

    %f

    then the criterion is just ignored.

    %t

    if the following condition

    newfvmean <= oldfvmean - alpha * sg' * sg

    is true, where newfvmean (resp. oldfvmean) is the function value average in the current iteration (resp. in the previous iteration), then the status is set to "kelleystagnation" and terminate is set to %t. Here, alpha is a non-dimensional coefficient and sg is the simplex gradient.

  • The termination condition suggested by Box is examined depending on the value of the -boxtermination option.

    %f

    then the criterion is just ignored.

    %t

    if both the following conditions

    shiftfv < boxtolf
    boxkount == boxnbmatch

    are true, where shiftfv is the difference of function value between the best and worst vertices, and boxkount is the number of consecutive iterations where this criterion is met, then the status is set to "tolboxf" and terminate is set to %t. Here, the boxtolf parameter is the value associated with the -boxtolf option and is a user-defined absolute tolerance on the function value. The boxnbmatch parameter is the value associated with the -boxnbmatch option and is the user-defined number of consecutive matches.

  • The termination condition based on the variance of the function values in the simplex is examined depending on the value of the -tolvarianceflag option.

    %f

    then the criterion is just ignored.

    %t

    if the following condition

    var < tolrelativevariance * variancesimplex0 + tolabsolutevariance

    is true, where var is the variance of the function values in the simplex, then the status is set to "tolvariance" and terminate is set to %t. Here, the tolrelativevariance parameter is the value associated with the -tolrelativevariance option and is a user-defined relative tolerance on the variance of the function values. The tolabsolutevariance parameter is the value associated with the -tolabsolutevariance option and is the user-defined absolute tolerance on the variance of the function values.

  • The user-defined termination condition is examined depending on the value of the -myterminateflag option.

    %f

    then the criterion is just ignored.

    %t

    if the terminate output argument returned by the termination function is true, then the status is set to the user-defined status and terminate is set to %t.

Kelley's stagnation detection

The stagnation detection criterion suggested by Kelley is based on a sufficient decrease condition, which requires a parameter alpha > 0 to be defined. The -kelleynormalizationflag option configures the method used to compute this alpha parameter: two methods are available, each corresponding to a different paper by Kelley:

constant

In "Detection and Remediation of Stagnation in the Nelder--Mead Algorithm Using a Sufficient Decrease Condition", Kelley uses a constant alpha, with the suggested value 1.e-4, which is is typical choice for line search method.

normalized

in "Iterative Methods for Optimization", Kelley uses a normalized alpha, computed from the following formula

alpha = alpha0 * sigma0 / nsg

where sigma0 is the size of the initial simplex and nsg is the norm of the simplex gradient for the initial guess point.

O'Neill factorial optimality test

In "Algorithm AS47 - Function minimization using a simplex procedure", R. O'Neill presents a fortran 77 implementation of the simplex method. A factorial test is used to check if the computed optimum point is a local minimum. If the -restartdetection option is set to "oneill", that factorial test is used to see if a restart should be performed. O'Neill's factorial test requires 2n function evaluations, where n is the number of variables.

Spendley et al. implementation notes

The original paper may be implemented with several variations, which might lead to different results. This section defines what algorithmic choices have been used.

The paper states the following rules.

  • "Rule 1. Ascertain the lowest reading y, of yi ... yk+1 Complete a new simplex Sp by excluding the point Vp corresponding to y, and replacing it by V* defined as above."

  • "Rule 2. If a result has occurred in (k + 1) successive simplexes, and is not then eliminated by application of Rule 1, do not move in the direction indicated by Rule 1, or at all, but discard the result and replace it by a new observation at the same point."

  • "Rule 3. If y is the lowest reading in So , and if the next observation made, y* , is the lowest reading in the new simplex S , do not apply Rule 1 and return to So from Sp . Move out of S, by rejecting the second lowest reading (which is also the second lowest reading in So)."

We implement the following "rules" of the Spendley et al. method.

  • Rule 1 is strictly applied, but the reflection is done by reflecting the high point, since we minimize a function instead of maximizing it, as Spendley et al. do.

  • Rule 2 is NOT implemented, as we expect that the function evaluation is not subject to errors.

  • Rule 3 is applied, i.e. reflection with respect to the next-to-high point.

The original paper does not mention any shrink step. When the original algorithm cannot improve the function value with reflection steps, the basic algorithm stops. In order to make the current implementation of practical value, a shrink step is included, with shrinkage factor sigma. This perfectly fits the spirit of the original paper. Notice that the shrink step makes rule #3 (reflection with respect to the next-to-worst vertex) unnecessary: indeed, the minimum required steps are the reflection and the shrinkage. Nevertheless, rule #3 has been kept in order to make the algorithm as close as possible to the original.

Nelder-Mead implementation notes

The purpose of this section is to analyse the current implementation of Nelder-Mead's algorithm.

The algorithm that we use is described in "Iterative Methods for Optimization" by C. T. Kelley.

The original paper uses a "greedy" expansion, in which the expansion point is accepted whatever its function value. The current implementation, as most implementations, uses the expansion point only if it improves over the reflection point, that is,

  • if fe<fr, then the expansion point is accepted,

  • if not, the reflection point is accepted.
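In other words, the decision could be sketched as follows, where fe and fr denote the function values at the expansion point xe and at the reflection point xr:

if ( fe < fr ) then
  // accept the expansion point
  xnew = xe;
  fnew = fe;
else
  // fall back to the reflection point
  xnew = xr;
  fnew = fr;
end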

The termination criterion suggested by Nelder and Mead is based on an absolute tolerance on the standard deviation of the function values in the simplex. We provide this original termination criterion with the -tolvarianceflag option, which is disabled by default.

Box's complex algorithm implementation notes

In this section, we analyse the current implementation of Box's complex method.

The initial simplex can be computed as in Box's paper, but this may not be safe. In his paper, Box suggests that if a vertex of the initial simplex does not satisfy the nonlinear constraints, then it should be "moved halfway toward the centroid of those points already selected". This behaviour is available when the -scalingsimplex0 option is set to "tocentroid". It may happen, as suggested by Guin, that the centroid is not feasible. This may happen if the constraints are not convex. In this case, the initial simplex cannot be computed. This is why we provide the "tox0" option, which computes the initial simplex by scaling toward the initial guess, which is always feasible.

In Box's paper, the scaling into the nonlinear constraints is performed "toward" the centroid, that is, by using a scaling factor equal to 0.5. This default scaling factor might be sub-optimal in certain situations. This is why we provide the -boxineqscaling option, which makes the scaling factor configurable.

In Box's paper, whether we are concerned with the initial simplex or with the simplex at a given iteration, the scaling into the nonlinear constraints is performed without end. This is because Box's hypothesis is that "ultimately, a satisfactory point will be found". As Guin points out, if the process fails, the algorithm goes into an infinite loop. In order to avoid this, we perform the scaling until a minimum scaling value is reached, as defined by the -guinalphamin option.

We have taken into account the comments by Guin, but it should be emphasized that the current implementation is still as close as possible to Box's algorithm and is not Guin's algorithm. More precisely, during the iterations, the scaling into the nonlinear constraints is still performed toward the centroid, be it feasible or not.

User-defined algorithm

The -mymethod option allows the user to configure a user-defined simplex-based algorithm. The reason for this option is that many simplex-based variants of Nelder-Mead's algorithm have been developed over the years, with specific goals. While it is not possible to provide them all, it is very convenient to use the current structure without being forced to make many developments.

The value of the -mymethod option is expected to be a Scilab function with the following header

function this=myalgorithm(this)
endfunction

where -this is the current object.

In order to use the user-defined algorithm, the -method option must be set to "mine". In this case, the component performs the optimization exactly as if the user-defined algorithm was provided by the component.

The user interested in this feature may use the internal scripts provided in the distribution as templates and tune their own algorithm from that point. There is of course no warranty that the user-defined algorithm improves on the standard algorithm, so users use this feature at their own risk.

User-defined termination

Many termination criteria are found in the bibliography. Users who aim at reproducing the results exhibited in a particular paper may find that none of the provided termination criteria matches the one used in the paper. It may also happen that the provided termination criteria are not suitable for the specific test case. In those situations, the -myterminate option allows the user to configure a user-defined termination function.

The value of the -myterminate option is expected to be a Scilab function with the following header

function [this, terminate, status]=mystoppingrule(this, simplex)
endfunction

where -this is the current object and -simplex is the current simplex. The terminate output argument is a boolean which is false if the algorithm must continue and true if the algorithm must stop. The status output argument is a string which is associated with the current termination criteria.

In order to enable the use of the user-defined termination function, the -myterminateflag option must be set to %t. At each iteration, if the -myterminateflag option has been set to %t, the user-defined termination function is called. If the terminate output argument is true, then the algorithm is stopped. In that case, the value of the -status key of the neldermead_get function is the value of the status output argument of the user-defined termination function.
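As an illustration, the following sketch stops the algorithm when the simplex size falls below a threshold. The mystoppingrule name, the threshold and the "mysmallsize" status are arbitrary choices; optimsimplex_size is the same function as the one used in Example #3 below.

function [ this , terminate , status ] = mystoppingrule ( this , simplex )
  ssize = optimsimplex_size ( simplex );
  if ( ssize < 1.e-4 ) then
    terminate = %t;
    status = "mysmallsize";
  else
    terminate = %f;
    status = "continue";
  end
endfunction

nm = neldermead_configure(nm,"-myterminate",mystoppingrule);
nm = neldermead_configure(nm,"-myterminateflag",%t);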

Example #1 : basic use

In the following example, we solve a simple quadratic test case. We begin by defining the cost function, which takes the current point and an index as input arguments and returns the objective function value. The starting point [1.0 1.0] is used. The neldermead_new function creates a new neldermead object. Then we use the neldermead_configure function to configure the parameters of the problem. We use all default settings and perform the search for the optimum. Displaying the object shows the state of the optimization, and the neldermead_get function is used to retrieve the optimum parameters.

function [f, index]=quadratic(x, index)
  f = x(1)^2 + x(2)^2;
endfunction
x0 = [1.0 1.0].';
nm = neldermead_new ();
nm = neldermead_configure(nm,"-numberofvariables",2);
nm = neldermead_configure(nm,"-function",quadratic);
nm = neldermead_configure(nm,"-x0",x0);
nm = neldermead_search(nm);
nm
xopt = neldermead_get(nm,"-xopt");
nm = neldermead_destroy(nm);

Example #2 : customized use

In the following example, we solve the Rosenbrock test case. We begin by defining the Rosenbrock function, which takes the current point and an index as input arguments and returns the objective function value. The classical starting point [-1.2 1.0] is used. The neldermead_new function creates a new neldermead object. Then we use the neldermead_configure function to configure the parameters of the problem. The initial simplex is computed from the axes and the single length 1.0 (this is the default, but it is explicitly written here as an example). The variable simplex algorithm by Nelder and Mead is used, which corresponds to the -method "variable" option. The neldermead_search function performs the search for the minimum. Once the minimum is found, we evaluate the cost function on a grid of points with the neldermead_function function, which computes the data required by the contour function. This is possible since our problem involves only 2 parameters. The contour plot is directly drawn from these data. Then we plot the initial guess on the contour plot as a blue dot. The neldermead_get function is used to get the optimum, which is associated with the -xopt key. The optimum is plotted on the contour plot as a red dot.

mprintf("Defining Rosenbrock function...\n");
function [ f , index ] = rosenbrock ( x , index )
  f = 100*(x(2)-x(1)^2)^2+(1-x(1))^2;
endfunction
x0 = [-1.2 1.0]';
mprintf("x0=%s\n",strcat(string(x0)," "));
mprintf("Creating object...\n");
nm = neldermead_new ();
mprintf("Configuring object...\n");
nm = neldermead_configure(nm,"-numberofvariables",2);
nm = neldermead_configure(nm,"-function",rosenbrock);
nm = neldermead_configure(nm,"-x0",x0);
nm = neldermead_configure(nm,"-maxiter",200);
nm = neldermead_configure(nm,"-maxfunevals",300);
nm = neldermead_configure(nm,"-tolfunrelative",10*%eps);
nm = neldermead_configure(nm,"-tolxrelative",10*%eps);
nm = neldermead_configure(nm,"-simplex0method","axes");
nm = neldermead_configure(nm,"-simplex0length",1.0);
nm = neldermead_configure(nm,"-method","variable");
nm = neldermead_configure(nm,"-verbose",0);
nm = neldermead_configure(nm,"-verbosetermination",0);
mprintf("Searching for minimum...\n");
nm = neldermead_search(nm);
mprintf("Plot contour...\n");
xmin = -2.0 ; xmax = 2.0 ; ymin = -2.0 ; ymax = 2.0 ; nx = 100 ; ny = 100;
stepx = (xmax - xmin)/nx;
xdata = xmin:stepx:xmax;
stepy = (ymax - ymin)/ny;
ydata = ymin:stepy:ymax;
for ix = 1:length(xdata)
  for iy = 1:length(ydata)
    experiment = [xdata(ix) ydata(iy)];
    [ nm , fiexp ] = neldermead_function ( nm , experiment );
    zdata ( ix , iy ) = fiexp;
  end
end
wnum = 100001;
my_handle = scf(wnum);
contour ( xdata , ydata , zdata , [1 10 100 500 1000 2000] )
// Plot starting point
mprintf("x0 : blue dot\n");
plot(x0(1),x0(2));
my_handle.children.children(1).children.mark_mode="on";
my_handle.children.children(1).children.mark_size = 5;
my_handle.children.children(1).children.mark_foreground = 2;
mprintf("xopt : red dot\n");
xopt = neldermead_get(nm,"-xopt");
plot(xopt(1),xopt(2));
my_handle.children.children(1).children.mark_mode="on";
my_handle.children.children(1).children.mark_size = 5;
my_handle.children.children(1).children.mark_foreground = 5;

nm = neldermead_destroy(nm);

The -verbose option provides detailed information about the optimization process. The following is a sample output for an optimization based on the Nelder and Mead variable-shape simplex algorithm. Only the output corresponding to iteration #156 is displayed. In order to display specific outputs (or to create specific output files and graphics), the -outputcommand option should be used.

=================================================================
Iteration #156 (total = 156)
Function Eval #297
Xopt : 1 1
Fopt : 6.871176e-027
DeltaFv : 2.880999e-026
Center : 1 1
Size : 2.548515e-013
Vertex #1/3 : fv=0.000000, x=1.000000 1.000000
Vertex #2/3 : fv=0.000000, x=1.000000 1.000000
Vertex #3/3 : fv=0.000000, x=1.000000 1.000000
nmplot_outputcmd (1)
Reflect
xbar=1 1
Function Evaluation #298 is [1.155D-25] at [1 1]
xr=[1 1], f(xr)=0.000000
Contract - inside
Function Evaluation #299 is [6.023D-27] at [1 1]
xc=1 1, f(xc)=0.000000
  > Perform Inside Contraction
Sort

Example #3 : use output function

In the following example, we show how to use the output command to create specialized outputs. These outputs may be used to create specific data files or make interactive graphic outputs.

We define the function "myoutputcmd", which takes the current state as its first argument. The state is a string which is equal to "init", "iter" or "done", depending on the status of the optimization. The data input argument is a tlist, which contains the data associated with the current iteration. In this case, we use the fields to print a message in the console. As another example of use, we could format the message so that it uses LaTeX formatting rules, which may allow the user to directly copy and paste the output into a LaTeX report.

function [f, index]=rosenbrock(x, index)
  f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
endfunction

//
// myoutputcmd --
//  This command is called back by the Nelder-Mead
//  algorithm.
// Arguments
//  state : the current state of the algorithm
//    "init", "iter", "done"
//  data : the data at the current state
//    This is a tlist with the following entries:
//    * x : the optimal vector of parameters
//    * fval : the minimum function value
//    * simplex : the simplex, as a simplex object
//    * iteration : the number of iterations performed
//    * funccount : the number of function evaluations
//    * step : the type of step in the previous iteration
//
function myoutputcmd(state, data)
  iter = data.iteration
  if ( state == "init" ) then
    mprintf ( "=================================\n");
    mprintf ( "Initialization\n");
  elseif ( state == "done" ) then
    mprintf ( "=================================\n");
    mprintf ( "End of Optimization\n");
  end
  fc = data.funccount
  fval = data.fval
  x = data.x
  simplex = data.simplex
  // Simplex is a data structure, which can be managed
  // by the optimsimplex class.
  ssize = optimsimplex_size ( simplex )
  mprintf ( "Iteration #%d, Feval #%d, Fval = %e, x = %s, Size = %e\n", iter, fc, fval, strcat(string(x)," "), ssize);
endfunction

nm = neldermead_new ();
nm = neldermead_configure(nm,"-numberofvariables",2);
nm = neldermead_configure(nm,"-function",rosenbrock);
nm = neldermead_configure(nm,"-x0",[-1.2 1.0]');
nm = neldermead_configure(nm,"-maxiter",200);
nm = neldermead_configure(nm,"-maxfunevals",300);
nm = neldermead_configure(nm,"-tolfunrelative",10*%eps);
nm = neldermead_configure(nm,"-tolxrelative",10*%eps);
nm = neldermead_configure(nm,"-simplex0method","axes");
nm = neldermead_configure(nm,"-simplex0length",1.0);
nm = neldermead_configure(nm,"-method","variable");
nm = neldermead_configure(nm,"-verbose",0);
nm = neldermead_configure(nm,"-verbosetermination",0);
nm = neldermead_configure(nm,"-outputcommand",myoutputcmd);
nm = neldermead_search(nm);
nm = neldermead_destroy(nm);

The previous script produces the following output.

=================================
Initialization
Iteration #0, Feval #4, Fval = 2.420000e+001, x = -1.2 1, Size = 1.000000e+000
Iteration #1, Feval #4, Fval = 2.420000e+001, x = -1.2 1, Size = 1.000000e+000
Iteration #2, Feval #6, Fval = 2.420000e+001, x = -1.2 1, Size = 1.000000e+000
Iteration #3, Feval #8, Fval = 2.420000e+001, x = -1.2 1, Size = 1.000000e+000
Iteration #4, Feval #10, Fval = 9.999182e+000, x = -1.0125 0.78125, Size = 5.970304e-001
...
Iteration #155, Feval #296, Fval = 2.024754e-026, x = 1 1, Size = 4.601219e-013
Iteration #156, Feval #298, Fval = 6.871176e-027, x = 1 1, Size = 2.548515e-013
Iteration #157, Feval #300, Fval = 6.023002e-027, x = 1 1, Size = 2.814328e-013
=================================
End of Optimization
Iteration #157, Feval #300, Fval = 6.023002e-027, x = 1 1, Size = 2.814328e-013

Bibliography

"Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation", Spendley, W. and Hext, G. R. and Himsworth, F. R., American Statistical Association and American Society for Quality, 1962

"A Simplex Method for Function Minimization", Nelder, J. A. and Mead, R., The Computer Journal, 1965

"A New Method of Constrained Optimization and a Comparison With Other Methods", M. J. Box, The Computer Journal 1965 8(1):42-52, 1965 by British Computer Society

"Discussion and correspondence: modification of the complex method of constrained optimization", J. A. Guin, The Computer Journal, 1968

"Detection and Remediation of Stagnation in the Nelder--Mead Algorithm Using a Sufficient Decrease Condition", Kelley C. T., SIAM J. on Optimization, 1999

"Iterative Methods for Optimization", C. T. Kelley, SIAM Frontiers in Applied Mathematics, 1999

"Algorithm AS47 - Function minimization using a simplex procedure", O'Neill, R., Applied Statistics, 1971

"Nelder Mead's User Manual", Consortium Scilab - Digiteo, Michael Baudin, 2010

Authors

Michael Baudin - INRIA - 2008-2009

Michael Baudin - Digiteo - 2009

