optimbase_function
Calls the cost function.
Calling Sequence
[opt, f, index] = optimbase_function(opt, x, index)
[opt, f, c, index] = optimbase_function(opt, x, index)
[opt, f, g, index] = optimbase_function(opt, x, index)
[opt, f, g, c, gc, index] = optimbase_function(opt, x, index)
Arguments
- opt
The optimization object, of TOPTIM type (a tlist). (INPUT/OUTPUT)
- x
A column vector of doubles, the current point.
- index
An integer. (INPUT/OUTPUT)
The index input parameter has the following meaning.
index = 1: nothing is to be computed, the user may display messages, for example.
index = 2: compute f.
index = 3: compute g.
index = 4: compute f and g.
index = 5: compute c.
index = 6: compute f and c.
index = 7: compute f, g, c and gc.
The index output parameter has the following meaning (a sketch illustrating both the input and output conventions is given after this argument list).
index > 0: everything is fine.
index = 0: the optimization must stop.
index < 0: one function could not be evaluated.
- f
Scalar. The value of the cost function.
- g
Row matrix of doubles. The gradient of the cost function.
- c
Row matrix of doubles. The non-linear, positive, inequality constraints.
- gc
Matrix of doubles. The gradient of the non-linear, positive, inequality constraints.
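To make these conventions concrete, here is a minimal sketch of a cost function matching the simplest header. The name mycostf and the quadratic cost are illustrative assumptions, not part of the optimbase API: the function displays a message when index is 1, always computes f, and sets a negative output index if the evaluation fails.
function [f, index]=mycostf(x, index)
    if index == 1 then
        // Nothing is required: use the call to display a message.
        mprintf("Current point: x(1)=%f, x(2)=%f\n", x(1), x(2));
    end
    // Compute the cost (an illustrative quadratic).
    f = x(1)^2 + x(2)^2;
    if isnan(f) | isinf(f) then
        // Report that the function could not be evaluated.
        index = -1;
    end
endfunction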
Description
The optimbase_function function calls the cost function and returns the required results. If an additional cost function argument is defined in the current object, it is passed to the function as the last argument.
The -function option allows configuring the cost function.
The cost function is used, depending on the context, to compute the cost, the non-linear positive inequality constraints, the gradient of the function and the gradient of the non-linear inequality constraints.
The cost function can also be used to produce outputs and to terminate an optimization algorithm.
Each calling sequence of the optimbase_function function corresponds to a specific calling sequence of the user-provided cost function.
If the -withderivatives option is false and there is no non-linear constraint, the calling sequence is
[ this , f , index ] = optimbase_function ( this , x , index )
which corresponds to the cost function:
[ f , index ] = costf( x , index )
If the -withderivatives option is false and there are non-linear constraints, the calling sequence is
[ this , f , c , index ] = optimbase_function ( this , x , index )
which corresponds to the cost function:
[ f , c , index ] = costf( x , index )
If the -withderivatives option is true and there is no non-linear constraint, the calling sequence is
[ this , f , g , index ] = optimbase_function ( this , x , index )
which corresponds to the cost function:
[ f , g , index ] = costf( x , index )
If the -withderivatives option is true and there are non-linear constraints, the calling sequence is
[ this , f , g , c , gc , index ] = optimbase_function ( this , x , index )
which corresponds to the cost function:
[ f , g , c , gc , index ] = costf( x , index )
Each calling sequence corresponds to a particular class of algorithms, including, for example:
- unconstrained, derivative-free algorithms,
- constrained, derivative-free algorithms,
- unconstrained, derivative-based algorithms,
- nonlinearly constrained, derivative-based algorithms,
- etc.
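For instance, a cost function matching the most general header might look like the following sketch, where the quadratic cost and the single positive inequality constraint are invented for the illustration:
function [f, g, c, gc, index]=myconstrcostf(x, index)
    // Cost and its gradient, returned as a row matrix.
    f = x(1)^2 + x(2)^2;
    g = [2*x(1) 2*x(2)];
    // One positive inequality constraint c(x) >= 0 and its gradient.
    c = [x(1) + x(2) - 1];
    gc = [1 1];
endfunction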
It might happen that the function requires additional arguments to be evaluated.
In this case, we can use the following feature. The argument fun can also be the list (f, a1, a2, ...). In this case f, the first element in the list, must be a function and must have one of the headers:
[ f , index ] = f ( x , index , a1 , a2 , ... )
[ f , c , index ] = f ( x , index , a1 , a2 , ... )
[ f , g , index ] = f ( x , index , a1 , a2 , ... )
[ f , g , c , gc , index ] = f ( x , index , a1 , a2 , ... )
where the input arguments a1, a2, ... are automatically appended at the end of the calling sequence.
Example : Setting up an optimization
In the following example, we consider the 2D Rosenbrock function. We begin by defining the function "rosenbrock", which computes it. We then create an optimization object, configure the number of variables and the "-function" option, and call optimbase_function to evaluate the cost function at the point [0.0 0.0], where the expected function value is 1. Finally, the object is destroyed.
function [f, index]=rosenbrock(x, index)
    // The 2D Rosenbrock cost function.
    f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
endfunction
opt = optimbase_new();
opt = optimbase_configure(opt,"-numberofvariables",2);
nbvar = optimbase_cget(opt,"-numberofvariables");
opt = optimbase_configure(opt,"-function",rosenbrock);
// Evaluate the cost function at the point [0.0 0.0].
[ opt , f , index ] = optimbase_function ( opt , [0.0 0.0] , 1 );
expectedf = 1
disp(f)
opt = optimbase_destroy(opt);
Example : Passing extra parameters
In the following example, we consider a function which has two additional parameters a and b. In this case, we can configure the "-function" option as a list, where the first element is the function and the two extra arguments are located at the end of the list.
function [f, index]=rosenbrock2(x, index, a, b)
    // A generalized Rosenbrock function with parameters a and b.
    f = a*(x(2)-x(1)^2)^2 + (b-x(1))^2;
endfunction
opt = optimbase_new();
opt = optimbase_configure(opt,"-numberofvariables",2);
nbvar = optimbase_cget(opt,"-numberofvariables");
a = 100;
b = 1;
// The extra arguments a and b are appended at the end of the list.
opt = optimbase_configure(opt,"-function",list(rosenbrock2,a,b));
[ opt , f , index ] = optimbase_function ( opt , [0.0 0.0] , 1 );
expectedf = 1
disp(f)
opt = optimbase_destroy(opt);
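Example : Evaluating the function and its gradient
The following sketch shows the derivative-based calling sequence. It assumes that the -withderivatives option can be set to %t with optimbase_configure, as described above; the gradient formulas are obtained by differentiating the Rosenbrock function.
function [f, g, index]=rosenbrockg(x, index)
    f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
    // Analytic gradient, returned as a row matrix.
    g = [-400*x(1)*(x(2)-x(1)^2)-2*(1-x(1)), 200*(x(2)-x(1)^2)];
endfunction
opt = optimbase_new();
opt = optimbase_configure(opt,"-numberofvariables",2);
opt = optimbase_configure(opt,"-withderivatives",%t);
opt = optimbase_configure(opt,"-function",rosenbrockg);
// index = 4 : compute both f and g.
[ opt , f , g , index ] = optimbase_function ( opt , [0.0 0.0] , 4 );
disp(f)   // expected value: 1
disp(g)   // expected value: [-2 0]
opt = optimbase_destroy(opt);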
See Also
- optimbase_new — Creates a new optimization object.
- optimbase_configure — Configures the current object.
- optimbase_destroy — Resets the historyfopt and historyxopt fields of an object.
- optimbase_checkcostfun — Checks the cost function.