5.11 Solving Optimization Problems Answers

Breogan Heflin
Aug 4, 2024, 11:18:57 PM
to acenusac
[sol,fval,exitflag,output,lambda] = solve(___) also returns an exit flag describing the exit condition, an output structure containing additional information about the solution process, and, for non-integer optimization problems, a Lagrange multiplier structure.
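A minimal sketch of the full output syntax, assuming prob is an existing OptimizationProblem with a variable named x:

% Request all outputs from solve (sketch; prob is assumed to exist)
[sol,fval,exitflag,output,lambda] = solve(prob);
disp(sol.x)              % solution values for the variable named x
disp(exitflag)           % reason solve stopped
disp(output.solver)      % which solver solve chose internally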

If your objective or nonlinear constraint functions are not entirely composed of elementary functions, you must convert the functions to optimization expressions using fcn2optimexpr. See Convert Nonlinear Function to Optimization Expression and Supported Operations for Optimization Variables and Expressions.
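For example, here is a sketch of wrapping an objective that uses an unsupported operation (besselj); the variable names are illustrative:

x = optimvar('x',2);
fun = @(x) besselj(1,x(1))^2 + x(2)^2;   % besselj is not a supported operation
obj = fcn2optimexpr(fun,x);              % convert to an optimization expression
prob = optimproblem('Objective',obj);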


Providing an initial point does not always improve the solution process. For this problem, using an initial point saves time and computational steps. However, for some problems, an initial point can cause solve to take more steps.


Solve the problem starting from the point [0,0]. For the problem-based approach, specify the initial point as a structure, with the variable names as the fields of the structure. For this problem, there is only one variable, x.
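A sketch of passing that initial point, assuming prob is the problem from the example:

x0.x = [0 0];              % field name matches the optimization variable name
[sol,fval] = solve(prob,x0);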


Optimization problem or equation problem, specified as an OptimizationProblem object or an EquationProblem object. Create an optimization problem by using optimproblem; create an equation problem by using eqnproblem.
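A minimal sketch of creating each kind of problem (the objective and equation are illustrative):

x = optimvar('x');
prob = optimproblem('Objective',(x - 3)^2);   % OptimizationProblem
eprob = eqnproblem;
eprob.Equations.eq1 = x^2 == 4;               % EquationProblem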


The problem-based approach does not support complex values in the following: an objective function, nonlinear equalities, and nonlinear inequalities. If a function calculation has a complex value, even as an intermediate value, the final result might be incorrect.


For some Global Optimization Toolbox solvers, x0 can be a vector of OptimizationValues objects representing multiple initial points. Create the points using the optimvalues function. These solvers are:


ga (Global Optimization Toolbox), gamultiobj (Global Optimization Toolbox), paretosearch (Global Optimization Toolbox), and particleswarm (Global Optimization Toolbox). These solvers accept multiple starting points as members of the initial population, as the sketch below shows.
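A sketch of supplying multiple starting points, assuming a two-element variable x and a Global Optimization Toolbox license (required for ga); the one-point-per-column layout is an assumption here:

x = optimvar('x',2,'LowerBound',-5,'UpperBound',5);
prob = optimproblem('Objective',sum(x.^2));
pts = optimvalues(prob,'x',[1 -1 3; 2 0.5 -3]);   % three start points, assumed one per column
sol = solve(prob,pts,'Solver','ga');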


Minimum number of start points for MultiStart (Global Optimization Toolbox), specified as a positive integer. This argument applies only when you call solve using the ms argument. solve uses all of the values in x0 as start points. If MinNumStartPoints is greater than the number of values in x0, then solve generates more start points uniformly at random within the problem bounds. If a component is unbounded, solve generates points using the default artificial bounds for MultiStart.
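A sketch using MultiStart (Global Optimization Toolbox required); the values are illustrative:

ms = MultiStart;
x0.x = [0 0];                                     % single supplied start point
sol = solve(prob,x0,ms,'MinNumStartPoints',20);   % solve adds random start points up to 20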


Internally, the solve function calls a relevant solver as detailed in the 'solver' argument reference. Ensure that options is compatible with the solver. For example, intlinprog does not allow options to be a structure, and lsqnonneg does not allow options to be an object.
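A sketch of passing options created with optimoptions, assuming the underlying solver for prob is fmincon:

opts = optimoptions('fmincon','Display','iter','MaxIterations',200);
sol = solve(prob,'Options',opts);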


For suggestions on options settings to improve an intlinprog solution or the speed of a solution, see Tuning Integer Linear Programming. For linprog, the default 'dual-simplex' algorithm is generally memory-efficient and speedy. Occasionally, linprog solves a large problem faster when the Algorithm option is 'interior-point'. For suggestions on options settings to improve a nonlinear problem's solution, see Optimization Options in Common Use: Tuning and Troubleshooting and Improve Results.
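For example, a sketch of switching linprog to the interior-point algorithm (applicable only if prob is a linear program):

opts = optimoptions('linprog','Algorithm','interior-point');
sol = solve(prob,'Options',opts);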


Optimization solver, specified as the name of a listed solver. For optimization problems, the available solvers depend on the problem type and include solvers from Global Optimization Toolbox; see the solve documentation for the full table. Details for equation problems appear after the optimization solver details.
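A sketch of overriding the default solver choice; fminunc is an illustrative pick and must be valid for the problem type:

sol = solve(prob,'Solver','fminunc');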


For converting nonlinear problems with integer constraints using prob2struct, the resulting problem structure can depend on the chosen solver. If you do not have a Global Optimization Toolbox license, you must specify the solver. See Integer Constraints in Nonlinear Problem-Based Optimization.
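A sketch of naming the solver during conversion; the 'Solver' name-value for prob2struct is assumed here, and ga requires Global Optimization Toolbox:

problem = prob2struct(prob,'Solver','ga');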


For maximization problems (prob.ObjectiveSense is "max" or "maximize"), do not specify a least-squares solver (one whose name begins with lsq). If you do, solve throws an error, because least-squares solvers cannot maximize.


Indication to use automatic differentiation (AD) for nonlinear objective function, specified as 'auto' (use AD if possible), 'auto-forward' (use forward AD if possible), 'auto-reverse' (use reverse AD if possible), or 'finite-differences' (do not use AD). Choices including auto cause the underlying solver to use gradient information when solving the problem provided that the objective function is supported, as described in Supported Operations for Optimization Variables and Expressions. For an example, see Effect of Automatic Differentiation in Problem-Based Optimization.
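A sketch of turning AD off for the objective so the solver falls back to finite differences:

sol = solve(prob,'ObjectiveDerivative','finite-differences');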


For a general nonlinear objective function, fmincon defaults to reverse AD for the objective function. fmincon defaults to reverse AD for the nonlinear constraint function when the number of nonlinear constraints is less than the number of variables. Otherwise, fmincon defaults to forward AD for the nonlinear constraint function.


For a least-squares objective function, fmincon and fminunc default to forward AD for the objective function. For the definition of a problem-based least-squares objective function, see Write Objective Function for Problem-Based Least Squares.


Indication to use automatic differentiation (AD) for nonlinear constraint functions, specified as 'auto' (use AD if possible), 'auto-forward' (use forward AD if possible), 'auto-reverse' (use reverse AD if possible), or 'finite-differences' (do not use AD). Choices including auto cause the underlying solver to use gradient information when solving the problem provided that the constraint functions are supported, as described in Supported Operations for Optimization Variables and Expressions. For an example, see Effect of Automatic Differentiation in Problem-Based Optimization.


Indication to use automatic differentiation (AD) for nonlinear equation functions, specified as 'auto' (use AD if possible), 'auto-forward' (use forward AD if possible), 'auto-reverse' (use reverse AD if possible), or 'finite-differences' (do not use AD). Choices including auto cause the underlying solver to use gradient information when solving the problem provided that the equation functions are supported, as described in Supported Operations for Optimization Variables and Expressions. For an example, see Effect of Automatic Differentiation in Problem-Based Optimization.
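A sketch of the corresponding name-value arguments; prob and eprob are assumed to be existing optimization and equation problems:

sol = solve(prob,'ConstraintDerivative','auto-reverse');          % nonlinear constraints
esol = solve(eprob,'EquationDerivative','finite-differences');    % nonlinear equations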


Solution, returned as a structure or an OptimizationValues vector. sol is an OptimizationValues vector when the problem is multiobjective. For single-objective problems, the fields of the returned structure are the names of the optimization variables in the problem. See optimvar.
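A sketch of reading a single-objective solution; the field name matches the optimization variable name:

[sol,fval,exitflag] = solve(prob);
xOpt = sol.x;   % solution values for the variable named x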


Exitflags 3 and -9 relate to solutions that have large infeasibilities. These usually arise from linear constraint matrices with a large condition number, or from problems with large solution components. To correct these issues, try scaling the coefficient matrices, eliminating redundant linear constraints, or giving tighter bounds on the variables.


In the nonlinear constraint solver, the complementarity measure is the norm of the vector whose elements are c_i*λ_i, where c_i is the nonlinear inequality constraint violation and λ_i is the corresponding Lagrange multiplier.


The bounds, integer, and linear constraints are feasible, but solve finds no point that also satisfies the nonlinear constraints. In this case, x is the point of least maximum nonlinear constraint infeasibility, and fval = objconstr(x).Fval.


Internally, the solve function solves optimization problems by calling a solver. For the default solver for the problem and supported solvers for the problem, see the solvers function. You can override the default by using the 'solver' name-value pair argument when calling solve.
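A sketch of querying the default and allowed solvers; the solvers function is assumed to be available in your release:

[autosolver,validsolvers] = solvers(prob);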


Before solve can call a solver, the problem must be converted to solver form, either by solve or by associated functions or objects. This conversion entails, for example, representing linear constraints as a matrix rather than as optimization variable expressions.


The first step in the algorithm occurs as you place optimization expressions into the problem. An OptimizationProblem object has an internal list of the variables used in its expressions. Each variable has a linear index in the expression, and a size. Therefore, the problem variables have an implied matrix form. The prob2struct function performs the conversion from problem form to solver form. For an example, see Convert Problem to Structure.
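A sketch of performing the conversion and recovering the implied matrix-form indices; varindex maps each variable name to its linear indices:

problem = prob2struct(prob);   % solver-form structure
idx = varindex(prob);          % idx.x lists the linear indices for variable x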


For nonlinear optimization problems, solve uses automatic differentiation to compute the gradients of the objective function and nonlinear constraint functions. These derivatives apply when the objective and constraint functions are composed of the operations listed in Supported Operations for Optimization Variables and Expressions. When automatic differentiation does not apply, solvers estimate derivatives using finite differences. For details of automatic differentiation, see Automatic Differentiation Background. You can control how solve uses automatic differentiation with the ObjectiveDerivative name-value argument.
