robustness of solvers (NEOS/AMPL)


phonenix2016

Jan 16, 2016, 12:24:29 PM
to AMPL Modeling Language
Hello OR experts, 

I have a simple scenario-based optimization problem (approx. 400 variables, 300 inequality constraints) that is nonlinear in its objective and linear in its constraints. I have formulated the model, submitted it to the NEOS server, and tested several nonlinear as well as global optimization solvers (ASA, Couenne, KNITRO, LOQO, MINOS, etc. -- whichever ones accept AMPL input). 
While the majority of the variables take exactly the same value across all solvers, there is one particular variable, scaled between 0 and 1, that yields different answers among the solvers: some recommend 0 while others recommend 1, and I am unable to determine which is correct. I have tried the variable-scaling options within these solvers, but I haven't been successful. Any thoughts? 

Appreciate your help. 

PS: the objective function is Max: (P1*Q1+P2*Q2) - g1*K1 - g2*K2 - [g3*f*(K1+K2)] - [(Q1-K1)+(Q2-K2)]*c*(1-f), and the variables are K, P, and f, where f is in [0,1] while the others have a naturally higher scale. 
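Based only on the description above, a minimal AMPL sketch of this objective might look as follows. All parameter declarations, bounds, and the choice of which symbols are data versus decision variables are assumptions, not the poster's actual model:

```ampl
# Hedged sketch of the objective described above.
# Assumption: Q1, Q2, g1..g3, c are data; P, K, f are the decision variables.
param Q1 > 0;  param Q2 > 0;        # quantities (data)
param g1;  param g2;  param g3;     # cost coefficients
param c;                            # shortfall penalty coefficient

var P1 >= 0;  var P2 >= 0;          # prices (naturally larger scale)
var K1 >= 0;  var K2 >= 0;          # capacities (naturally larger scale)
var f  >= 0, <= 1;                  # fraction, scaled in [0,1]

maximize Profit:
    (P1*Q1 + P2*Q2) - g1*K1 - g2*K2
    - g3*f*(K1 + K2)
    - ((Q1 - K1) + (Q2 - K2)) * c * (1 - f);
```

The products involving f and the other variables are bilinear, which is typically what makes a model of this shape nonconvex.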



Robert Fourer

Jan 17, 2016, 5:04:47 PM
to am...@googlegroups.com
Do all the solvers return the same optimal objective value? If so, there must be more than one optimal solution. It is common that many different ways of setting the variables give the same optimal value for the objective function.

If the optimal objective values are different, then check the solver termination messages to be sure that all of the solver runs were successful. If the termination message is "unable to make progress" or any other sort of error indication, then you cannot trust the result. Also keep in mind that if your problem is non-convex, then the solvers in the Nonlinearly Constrained Optimization category find only locally optimal solutions, which might result in different objective values for different solvers depending on which local solutions their algorithms converge to.
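In an AMPL script this check can be made routine; a small sketch using AMPL's built-in solve_result parameter:

```ampl
# After each solve, inspect AMPL's built-in status before trusting the answer.
solve;
display solve_result, solve_result_num;   # e.g. "solved", "infeasible", "limit", "failure"
if solve_result ne "solved" then
    printf "Warning: status '%s' -- this result may not be trustworthy.\n",
        solve_result;
```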

Bob Fourer
am...@googlegroups.com

=======

phonenix2016

Jan 17, 2016, 6:24:51 PM
to AMPL Modeling Language, 4...@ampl.com
Thanks for the reply.

KNITRO was the one that always produced the value 0 for the variable f in [0,1]. And yes, its values for the other variables, and hence the objective function, were slightly different (for example, the optimal expected profit was higher for KNITRO). While its feasibility error was 0, there was an optimality error, but the solver converged to a locally optimal or satisfactory solution. 

Final feasibility error (abs / rel) =   0.00e+00 / 0.00e+00
Final optimality error  (abs / rel) =   4.12e-05 / 5.93e-08
objective 33333.44983; feasibility error 0
12 iterations; 11299 function evaluations

LOQO, on the other hand (as well as BONMIN), identifies the problem as a QP and produces the following objective: 

LOQO 7.00: optimal solution (25 QP iterations, 25 evaluations)
primal objective 32662.37795
  dual objective 32662.37847

For KNITRO, after some digging, I found that we can set a parameter called honorbnds = 1: if the objective function or a nonlinear constraint function is undefined at points outside the bounds, then the bounds should be enforced at all times. When this is set, KNITRO yields exactly the same solution as the other two solvers (i.e., f = 1). What threw me off was that I had initially assumed that the solver yielding the highest objective value (for my maximization problem) must be correct. Was my initial assumption incorrect? 
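For reference, that option can be passed through AMPL roughly like this (a sketch; consult the Knitro documentation for the exact semantics of honorbnds):

```ampl
option solver knitro;
option knitro_options 'honorbnds=1';   # keep all iterates within variable bounds
solve;
```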

Robert Fourer

Jan 18, 2016, 9:06:15 PM
to am...@googlegroups.com
It's important to consider whether you have a convex optimization problem. (You'll find this discussed in any nonlinear programming textbook.) If your problem is not convex, then solvers like Knitro, LOQO, and Bonmin only find locally optimal solutions, which are better than any nearby solutions but not necessarily globally the best. Furthermore, any change to the starting point or the solver options may cause a different locally optimal solution to be found, perhaps better and perhaps worse. If you are in this situation, then it is plausible that different settings for Knitro led to different solutions with different optimal values. (You might want to experiment with the multistart options for Knitro, which try to find an improved local optimum by automatically solving many times using different starting points.)
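As a sketch, Knitro's multistart can be switched on through its AMPL options string; the number of solves here is an arbitrary illustration:

```ampl
option solver knitro;
option knitro_options 'ms_enable=1 ms_maxsolves=20';  # try 20 different start points
solve;
```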

If you do have a convex problem then any local optimum is global, and then something must be wrong with at least one of your runs since the global optimum for the objective value cannot be both 33333.44983 and 32662.37847.