overview
An overview of the Optimsimplex toolbox.
Purpose
The goal of this component is to provide a building block for optimization algorithms based on a simplex. The optimsimplex package may be used in the following optimization methods:
the Spendley et al. simplex method,
the Nelder-Mead method,
the Box algorithm for constrained optimization,
the multi-dimensional search by Virginia Torczon,
etc...
This toolbox is designed with object-oriented ideas in mind.
Features
The following is a list of features which this component currently provides:
Manage various simplex initializations
initial simplex given by user,
initial simplex computed with a length and along the coordinate axes,
initial regular simplex computed with Spendley et al. formula,
initial simplex computed by a small perturbation around the initial guess point,
initial simplex computed from randomized bounds.
Sort the vertices by increasing function values,
Compute the standard deviation of the function values in the simplex,
Compute the simplex gradient with forward or centered differences,
Shrink the simplex toward the best vertex.
Description
This set of commands allows one to manage a simplex made of k>=n+1 points in an n-dimensional space. This component is the building block for a class of direct search optimization methods such as the Nelder-Mead algorithm or Torczon's Multi-Dimensional Search.
A simplex is stored as a collection of k>=n+1 vertices, where each vertex is made of a point and the function value at that point.
The simplex can be created with various shapes. It can be configured and queried at will. The simplex can also be reflected or shrunk. The simplex gradient can be computed with an order 1 forward formula or with an order 2 centered formula.
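As an illustration of these operations, here is a minimal sketch which builds a small simplex with the optimsimplex_new constructor described in the next paragraph, then sorts, queries and transforms it. The calling sequences of optimsimplex_sort, optimsimplex_fvstdev, optimsimplex_gradientfv, optimsimplex_reflect and optimsimplex_shrink are assumed to be the simplest ones; see the corresponding help pages for the exact signatures and optional arguments.
function y=myquad(x)
  y = x(1)^2 + x(2)^2;
endfunction
coords = [1.0 1.0 ; 1.1 1.0 ; 1.0 1.1];    // one vertex per row
s1 = optimsimplex_new ( coords , myquad );  // evaluates myquad at each vertex
s1 = optimsimplex_sort ( s1 );              // sort by increasing function value
sd = optimsimplex_fvstdev ( s1 );           // standard deviation of the function values
g = optimsimplex_gradientfv ( s1 );         // simplex gradient (forward formula)
r = optimsimplex_reflect ( s1 , myquad );   // new, reflected simplex
s1 = optimsimplex_shrink ( s1 , myquad );   // shrink s1 toward its best vertex
r = optimsimplex_destroy ( r );
s1 = optimsimplex_destroy ( s1 );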
The optimsimplex_new function allows one to create a simplex. If vertex coordinates are given, they are registered in the simplex. If a function is provided, it is evaluated at each vertex. The optimsimplex_destroy function destroys the object and frees internal memory. Several functions allow one to create a simplex with a special shape, including axes-by-axes (optimsimplex_axes), regular (optimsimplex_spendley), randomized bounds with an arbitrary number nbve of vertices (optimsimplex_randbounds) and a heuristic small perturbation around a given point (optimsimplex_pfeffer).
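For example, the "axes" and "spendley" shapes could be created as in the following sketch. The calling sequences used here, with x0 as the initial point and a scalar length for the simplex, are assumed from the optimsimplex_new help page; check that page for the exact arguments.
function y=myquad(x)
  y = x(1)^2 + x(2)^2;
endfunction
x0 = [-1.2 1.0];
// Axes-by-axes simplex, with a length of 0.1 along each coordinate axis
s1 = optimsimplex_new ( "axes" , x0 , myquad , 0.1 );
s1 = optimsimplex_destroy ( s1 );
// Regular simplex from the Spendley et al. formula, with length 1.0
s2 = optimsimplex_new ( "spendley" , x0 , myquad , 1.0 );
s2 = optimsimplex_destroy ( s2 );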
In the following functions, simplices and vertices are, depending on the function, either input or output arguments. The following general principles have been used to manage the storage of the coordinates of the points.
The vertices are stored row by row, while the coordinates are stored column by column. This implies the following rules.
The coordinates of a vertex are stored in a row vector, i.e. a 1 x n matrix where n is the dimension of the space.
The function values are stored in a column vector, i.e. a nbve x 1 matrix where nbve is the number of vertices.
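The following sketch illustrates these conventions; it assumes that optimsimplex_getallx returns the nbve x n matrix of coordinates and that optimsimplex_getallfv returns the function values, as described in their respective help pages.
function y=myquad(x)
  y = x(1)^2 + x(2)^2;
endfunction
coords = [0.0 0.0 ; 1.0 0.0 ; 0.0 1.0];  // 3 vertices in rows, 2 coordinates in columns
s1 = optimsimplex_new ( coords , myquad );
x = optimsimplex_getallx ( s1 );   // expected: a 3 x 2 matrix (nbve x n)
fv = optimsimplex_getallfv ( s1 ); // expected: the 3 function values, one per vertex
size(x)
size(fv)
s1 = optimsimplex_destroy ( s1 );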
Example : Creating a simplex with randomized bounds
In the following example, one creates a simplex in the 2D domain [-5 5]^2, with [-1.2 1.0] as the first vertex. One uses the randomized bounds method to generate a simplex with 5 vertices. The function takes an additional argument mystuff, which counts the number of times the function is called. After the creation of the simplex, the value of mystuff.nb is 5, which is the expected result because there is one function call per vertex.
function y=rosenbrock(x)
  y = 100*(x(2)-x(1)^2)^2+(1-x(1))^2;
endfunction

function [y, mystuff]=mycostf(x, mystuff)
  y = rosenbrock(x);
  mystuff.nb = mystuff.nb + 1;
endfunction

mystuff = tlist(["T_MYSTUFF","nb"]);
mystuff.nb = 0;
[ s1 , mystuff ] = optimsimplex_new ( "randbounds" , [-1.2 1.0], mycostf, ...
  [-5.0 -5.0] , [5.0 5.0], 5 , mystuff );
s1
mprintf("Function evaluations: %d\n",mystuff.nb)
s1 = optimsimplex_destroy ( s1 );
Initial simplex strategies
In this section, we analyse the various initial simplex strategies which are provided by this component.
It is known that direct search methods based on simplex designs are very sensitive to the initial simplex. This is why the current component provides various ways to create such an initial simplex.
The first historical simplex-based algorithm is the one presented in "Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation" by W. Spendley, G. R. Hext and F. R. Himsworth. The "spendley" simplex creates the regular simplex which is presented in the paper.
The "randbounds" simplex is due to M.J. Box in "A New Method of Constrained Optimization and a Comparison With Other Methods".
Pfeffer's method is a heuristic which is presented in "Global Optimization Of Lennard-Jones Atomic Clusters" by Ellen Fan. It is due to L. Pfeffer at Stanford and it is used in fminsearch.
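The following sketch shows how such a simplex might be created; the "pfeffer" calling sequence, with a relative step for the nonzero components of x0 and an absolute step for its zero components, is assumed from the optimsimplex_new help page, and the step values used here are merely illustrative.
function y=myquad(x)
  y = x(1)^2 + x(2)^2;
endfunction
x0 = [-1.2 0.0];
// Small perturbation around x0: 5% relative step for nonzero components,
// 0.0075 absolute step for the zero component (illustrative values).
s1 = optimsimplex_new ( "pfeffer" , x0 , myquad , 0.05 , 0.0075 );
s1 = optimsimplex_destroy ( s1 );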
Bibliography
"Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation", Spendley, W. and Hext, G. R. and Himsworth, F. R., American Statistical Association and American Society for Quality, 1962
"A Simplex Method for Function Minimization", Nelder, J. A. and Mead, R. The Computer Journal, January, 1965, 308--313
"A New Method of Constrained Optimization and a Comparison With Other Methods", M. J. Box, The Computer Journal 1965 8(1):42-52, 1965 by British Computer Society
"Iterative Methods for Optimization", C.T. Kelley, 1999, Chapter 6., section 6.2
"Compact Numerical Methods For Computers - Linear Algebra and Function Minimization", J.C. Nash, 1990, Chapter 14. Direct Search Methods
"Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation", W. Spendley, G. R. Hext, F. R. Himsworth, Technometrics, Vol. 4, No. 4 (Nov., 1962), pp. 441-461, Section 3.1
"A New Method of Constrained Optimization and a Comparison With Other Methods", M. J. Box, The Computer Journal 1965 8(1):42-52, 1965 by British Computer Society
"Detection and Remediation of Stagnation in the Nelder--Mead Algorithm Using a Sufficient Decrease Condition", SIAM J. on Optimization, Kelley,, C. T., 1999
"Multi-Directional Search: A Direct Search Algorithm for Parallel Machines", by E. Boyd, Kenneth W. Kennedy, Richard A. Tapia, Virginia Joanne Torczon,, Virginia Joanne Torczon, 1989, Phd Thesis, Rice University
"Grid Restrained Nelder-Mead Algorithm", Ãrpád BÅ©rmen, Janez Puhan, Tadej Tuma, Computational Optimization and Applications, Volume 34 , Issue 3 (July 2006), Pages: 359 - 375
"A convergent variant of the Nelder-Mead algorithm", C. J. Price, I. D. Coope, D. Byatt, Journal of Optimization Theory and Applications, Volume 113 , Issue 1 (April 2002), Pages: 5 - 19,
"Global Optimization Of Lennard-Jones Atomic Clusters", Ellen Fan, Thesis, February 26, 2002, McMaster University