x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the optimization options specified in options. Use optimoptions to set these options. If there are no nonlinear inequality or equality constraints, set nonlcon = [].
Set the objective function fun to be Rosenbrock's function. Rosenbrock's function is well-known to be difficult to minimize. It has its minimum objective value of 0 at the point (1,1). For more information, see Constrained Nonlinear Problem Using Optimize Live Editor Task or Solver.
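As an illustrative sketch (the starting point and bounds below are arbitrary choices, not prescribed by this page), Rosenbrock's function can be minimized subject to simple bounds as follows:

```matlab
% Rosenbrock's function: minimum objective value 0 at the point (1,1).
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

x0 = [-1; 2];      % arbitrary starting point
lb = [-2; -2];     % example lower bounds
ub = [ 2;  2];     % example upper bounds

% No linear or nonlinear constraints, so pass [] for those arguments.
options = optimoptions('fmincon','Display','iter');
[x,fval] = fmincon(fun,x0,[],[],[],[],lb,ub,[],options);
```

The solution x should be close to [1;1] with fval near 0.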
The output structure reports several statistics about the solution process. In particular, it gives the number of iterations in output.iterations, the number of function evaluations in output.funcCount, and the feasibility in output.constrviolation.
fmincon passes x to your objective function and any nonlinear constraint functions in the shape of the x0 argument. For example, if x0 is a 5-by-3 array, then fmincon passes x to fun as a 5-by-3 array. However, fmincon multiplies linear constraint matrices A or Aeq with x after converting x to the column vector x(:).
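For example (a hypothetical sketch), an objective written for a matrix-shaped variable receives x in that same shape, while any linear constraint matrices act on the column vector x(:):

```matlab
% Objective over a 5-by-3 matrix variable; fmincon passes x as 5-by-3.
fun = @(X) sum(X(:).^2);

x0 = ones(5,3);      % 5-by-3 starting array
A  = ones(1,15);     % linear constraint acts on x(:), a 15-by-1 vector
b  = 10;             % sum of all elements must not exceed 10

x = fmincon(fun,x0,A,b);   % x is returned as a 5-by-3 array
```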
If you can compute the gradient of fun and the SpecifyObjectiveGradient option is set to true, as set by

options = optimoptions('fmincon','SpecifyObjectiveGradient',true)

then fun must return the gradient vector g(x) in the second output argument.
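A sketch of such an objective function for Rosenbrock's function (the gradient formulas are standard; the function name is illustrative):

```matlab
function [f,g] = rosenbrockwithgrad(x)
% Objective value
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1   % gradient requested as second output
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
end
```

Call fmincon with options = optimoptions('fmincon','SpecifyObjectiveGradient',true) so the solver uses this gradient instead of estimating one by finite differences.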
If you can also compute the Hessian matrix and the HessianFcn option is set to 'objective' via optimoptions and the Algorithm option is 'trust-region-reflective', fun must return the Hessian value H(x), a symmetric matrix, in a third output argument. fun can give a sparse Hessian. See Hessian for fminunc trust-region or fmincon trust-region-reflective algorithms for details.
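Extending the same sketch, the objective can return the analytic Hessian of Rosenbrock's function as a third output (the function name is illustrative):

```matlab
function [f,g,H] = rosenboth(x)
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
if nargout > 1
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
    if nargout > 2   % Hessian requested as third output
        H = [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);
             -400*x(1),                   200];
    end
end
end
```

Use it with options = optimoptions('fmincon','Algorithm','trust-region-reflective','SpecifyObjectiveGradient',true,'HessianFcn','objective').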
The interior-point and trust-region-reflective algorithms allow you to supply a Hessian multiply function. This function gives the result of a Hessian-times-vector product without computing the Hessian directly. This can save memory. See Hessian Multiply Function.
Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables (number of elements in x0). For large problems, pass A as a sparse matrix.
Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.
Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities, and N is the number of variables (number of elements in x0). For large problems, pass Aeq as a sparse matrix.
Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.
where mycon is a MATLAB function such as

function [c,ceq] = mycon(x)
c = ...     % Compute nonlinear inequalities at x.
ceq = ...   % Compute nonlinear equalities at x.

If the gradients of the constraints can also be computed and the SpecifyConstraintGradient option is true, as set by

options = optimoptions('fmincon','SpecifyConstraintGradient',true)

then nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). GC and GCeq can be sparse or dense. If GC or GCeq is large, with relatively few nonzero entries, save running time and memory in the interior-point algorithm by representing them as sparse matrices. For more information, see Nonlinear Constraints.
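A concrete sketch of a constraint function with gradients (the unit-disk constraint and function name are illustrative choices, not taken from this page):

```matlab
function [c,ceq,GC,GCeq] = unitdisk(x)
c   = x(1)^2 + x(2)^2 - 1;    % inequality: x must lie inside the unit disk
ceq = [];                     % no nonlinear equality constraints
if nargout > 2                % constraint gradients requested
    GC   = [2*x(1); 2*x(2)]; % one column per inequality constraint
    GCeq = [];
end
end
```

Pass @unitdisk as the nonlcon argument together with options = optimoptions('fmincon','SpecifyConstraintGradient',true).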
Finite differences, used to estimate gradients, are either 'forward' (default), or 'central' (centered). 'central' takes twice as many function evaluations but should be more accurate. The trust-region-reflective algorithm uses FiniteDifferenceType only when CheckGradients is set to true.
fmincon is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds. However, for the interior-point algorithm, 'central' differences might violate bounds during their evaluation if the HonorBounds option is set to false.
Check whether objective function values are valid. The default setting, 'off', does not perform a check. The 'on' setting displays an error when the objective function returns a value that is complex, Inf, or NaN.
Maximum number of function evaluations allowed, a nonnegative integer. The default value for all algorithms except interior-point is 100*numberOfVariables; for the interior-point algorithm the default is 3000. See Tolerances and Stopping Criteria and Iterations and Function Counts.
Maximum number of iterations allowed, a nonnegative integer. The default value for all algorithms except interior-point is 400; for the interior-point algorithm the default is 1000. See Tolerances and Stopping Criteria and Iterations and Function Counts.
Specify one or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none ([]). See Output Function and Plot Function Syntax.
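A minimal sketch of such an output function (the printed message and function name are illustrative; the stop/optimValues/state signature follows the standard output-function interface):

```matlab
function stop = myoutput(x,optimValues,state)
% Called by the solver at each iteration; return true to halt.
stop = false;
if strcmp(state,'iter')
    fprintf('Iteration %d: f = %g\n', ...
        optimValues.iteration, optimValues.fval);
end
end
```

Register it with options = optimoptions('fmincon','OutputFcn',@myoutput).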
Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a built-in plot function name, a function handle, or a cell array of built-in plot function names or function handles. For custom plot functions, pass function handles. The default is none ([]):
'optimplotfvalconstr' plots the best feasible objective function value found as a line plot. The plot shows infeasible points in one color and feasible points in another, using a feasibility tolerance of 1e-6.
Gradient for nonlinear constraint functions defined by the user. When set to the default, false, fmincon estimates gradients of the nonlinear constraints by finite differences. When set to true, fmincon expects the constraint function to have four outputs, as described in nonlcon. The trust-region-reflective algorithm does not accept nonlinear constraints.
Gradient for the objective function defined by the user. See the description of fun to see how to define the gradient in fun. The default, false, causes fmincon to estimate gradients using finite differences. Set to true to have fmincon use a user-defined gradient of the objective function. To use the 'trust-region-reflective' algorithm, you must provide the gradient, and set SpecifyObjectiveGradient to true.
Termination tolerance on x, a nonnegative scalar. The default value for all algorithms except 'interior-point' is 1e-6; for the 'interior-point' algorithm, the default is 1e-10. See Tolerances and Stopping Criteria.
Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1). fmincon uses TypicalX for scaling finite differences for gradient estimation.
When true, fmincon estimates gradients in parallel. Disable by setting to the default, false. trust-region-reflective requires a gradient in the objective, so UseParallel does not apply. See Parallel Computing.
If [] (default), fmincon approximates the Hessian using finite differences, or uses a Hessian multiply function (with option HessianMultiplyFcn). If 'objective', fmincon uses a user-defined Hessian (defined in fun). See Hessian as an Input.
Hessian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Hessian matrix product H*Y without actually forming H. The function is of the form

W = hmfun(Hinfo,Y)

where Hinfo contains the matrix used to compute H*Y.
Y is a matrix that has the same number of rows as there are dimensions in the problem. The matrix W = H*Y, although H is not formed explicitly. fmincon uses Hinfo to compute the preconditioner. For information on how to supply values for any additional parameters hmfun needs, see Passing Extra Parameters.
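A hypothetical sketch of such a multiply function, assuming Hinfo is a structure holding factors B and C with H = B*B' + C (this structured form is an assumption for illustration, not mandated by the interface):

```matlab
function W = hmfun(Hinfo,Y)
% Compute W = H*Y with H = B*B' + C, without ever forming H.
% Grouping as B*(B'*Y) keeps the intermediate products small.
W = Hinfo.B*(Hinfo.B'*Y) + Hinfo.C*Y;
end
```

Supply it via options = optimoptions('fmincon','HessianMultiplyFcn',@hmfun), with the objective returning Hinfo in the position where the Hessian would otherwise go.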
Use HessPattern when it is inconvenient to compute the Hessian matrix H in fun, but you can determine (say, by inspection) when the ith component of the gradient of fun depends on x(j). fmincon can approximate H via sparse finite differences (of the gradient) if you provide the sparsity structure of H as the value for HessPattern. In other words, provide the locations of the nonzeros.
When the structure is unknown, do not set HessPattern. The default behavior is as if HessPattern is a dense matrix of ones. Then fmincon computes a full finite-difference approximation in each iteration. This computation can be very expensive for large problems, so it is usually better to determine the sparsity structure.
Maximum number of preconditioned conjugate gradient (PCG) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)) for bound-constrained problems, and is numberOfVariables for equality-constrained problems. For more information, see Preconditioned Conjugate Gradient Method.
Upper bandwidth of preconditioner for PCG, a nonnegative integer. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution.
If [] (default), fmincon approximates the Hessian using the method specified in HessianApproximation, or uses a supplied HessianMultiplyFcn. If a function handle, fmincon uses HessianFcn to calculate the Hessian. See Hessian as an Input.
A tolerance (stopping criterion) for the number of projected conjugate gradient iterations; this is an inner iteration, not the number of iterations of the algorithm. This positive integer has a default value of 2*(numberOfVariables - numberOfEqualities).
A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the iterate is feasible, the iterations halt, because the problem is presumably unbounded. The default value is -1e20.