WARNING: Ipopt finished with status Error_In_Step_Computation
WARNING: Not solved to optimality, status: Error

From the Ipopt documentation: "This message is printed if Ipopt is unable to compute a search direction, despite several attempts to modify the iteration matrix."

I don't really understand this error, as I did not change anything in the problem, which was working great before today's trials! Ipopt seems to work fine on other problems (for instance, the example Raman_spectrum_fitting.ipynb runs fine), but on Diffusion_gauss_JuMPJulia.ipynb in Spectra.jl/examples/ it continuously returns this error message. I tried adding bound constraints, but that does not solve the issue.

This is very weird. Could it be related to some issue in Ipopt? Were there any changes made in the past few days that could explain it?

Cheers,
Charles LL.
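As an aside, the derivative-checker output that appears in the logs is produced by Ipopt's standard `derivative_test` option. A minimal sketch of enabling it, assuming the JuMP 0.4-era API used elsewhere in this thread (the option name is Ipopt's; the model variable is hypothetical):

using JuMP, Ipopt

# Sketch: ask Ipopt to run its first-order derivative checker at the
# starting point, which produces the grad_f[...] comparison lines seen
# in the logs. derivative_test is a standard Ipopt option.
mod = Model(solver=IpoptSolver(derivative_test="first-order"))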
Solving...
LoadError: IPOPT: Failed to construct problem.
while loading In[6], in expression starting on line 3
 in createProblem at /home/juser/.julia/v0.4/Ipopt/src/Ipopt.jl:194
 in loadproblem! at /home/juser/.julia/v0.4/Ipopt/src/IpoptSolverInterface.jl:94
 in _buildInternalModel_nlp at /home/juser/.julia/v0.4/JuMP/src/nlp.jl:1189
 in buildInternalModel at /home/juser/.julia/v0.4/JuMP/src/solvers.jl:307
 in solve at /home/juser/.julia/v0.4/JuMP/src/solvers.jl:131
Solving...
WARNING: Ipopt finished with status Error_In_Step_Computation
This is Ipopt version 3.12.1, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Starting derivative checker for first derivatives.
grad_f[ 1] = 1.5400169565206059e-02 ~ 1.5407267879596377e-02 [ 7.098e-06]
grad_f[ 2] = 1.3856712817364922e-02 ~ 1.3856899530812275e-02 [ 1.867e-07]
grad_f[ 3] = 2.4117762052415721e-05 ~ 2.4117933670152192e-05 [ 1.716e-10]
grad_f[ 4] = 7.7835339144920401e-01 ~ 7.7837040980926853e-01 [ 1.702e-05]
grad_f[ 5] = 1.3075891332837744e-05 ~ 1.3075901039022453e-05 [ 9.706e-12]
grad_f[ 6] = -7.2848399264227213e-04 ~ -7.2832211724278839e-04 [ 1.619e-07]
grad_f[ 7] = 6.7503101622955405e-02 ~ 6.7507866806956571e-02 [ 4.765e-06]
grad_f[ 8] = -5.5724501113258432e-07 ~ -5.5724468008694350e-07 [ 3.310e-13]
grad_f[ 9] = 5.8921233320435567e-03 ~ 5.8922352945151718e-03 [ 1.120e-07]
grad_f[ 10] = 9.8834946060865891e-01 ~ 9.8837640858447529e-01 [ 2.695e-05]
grad_f[ 11] = 2.2119761314927557e-09 ~ 2.2119020496084421e-09 [ 7.408e-14]
grad_f[ 12] = -1.3256165690815563e-06 ~ -1.3256149814893463e-06 [ 1.588e-12]
grad_f[ 13] = -8.7347149194780154e-24 ~ 0.0000000000000000e+00 [ 8.735e-24]
grad_f[ 14] = 0.0000000000000000e+00 ~ 0.0000000000000000e+00 [ 0.000e+00]
grad_f[ 15] = -9.5681904881980529e-10 ~ -9.3380125056317869e-10 [ 2.302e-11]
grad_f[ 16] = 1.8045004417756092e-08 ~ 1.8045044806639069e-08 [ 4.039e-14]
grad_f[ 17] = 4.1406544083719289e-08 ~ 4.1406929924377500e-08 [ 3.858e-13]
grad_f[ 18] = -1.4432274992061809e-07 ~ -1.4426388609332883e-07 [ 5.886e-11]
grad_f[ 19] = -8.6065842375113783e-07 ~ -8.6066651781395277e-07 [ 8.094e-12]
grad_f[ 20] = 2.0907455925617064e-07 ~ 2.0906666052956567e-07 [ 7.899e-12]
grad_f[ 21] = 1.7471700442983421e-08 ~ 1.7474171026315785e-08 [ 2.471e-12]
grad_f[ 22] = 1.0063328701691959e-06 ~ 1.0063431278867796e-06 [ 1.026e-11]
grad_f[ 23] = 3.2973626102939689e-07 ~ 3.2973298570715404e-07 [ 3.275e-12]
grad_f[ 24] = 6.7468523113681460e-06 ~ 6.7468540842519631e-06 [ 1.773e-12]
grad_f[ 25] = 3.5010525231563835e-06 ~ 3.5010572488016753e-06 [ 4.726e-12]
No errors detected by derivative checker.
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 325
Total number of variables............................: 25
variables with only lower bounds: 0
variables with lower and upper bounds: 25
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 3.9119919e-07 0.00e+00 8.10e-03 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
ERROR: Problem in step computation, but emergency mode cannot be activated.
Number of Iterations....: 0
(scaled) (unscaled)
Objective...............: 3.9119918620997790e-07 3.9119918620997790e-07
Dual infeasibility......: 8.0991440238628032e-03 8.0991440238628032e-03
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 1.3400003699999979e+02 1.3400003699999979e+02
Overall NLP error.......: 1.3400003699999979e+02 1.3400003699999979e+02
Number of objective function evaluations = 1
Number of objective gradient evaluations = 1
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 1
Total CPU secs in IPOPT (w/o function evaluations) = 2.609
Total CPU secs in NLP function evaluations = 0.778
EXIT: Error in step computation (regularization becomes too large?)!
WARNING: Not solved to optimality, status: Error
Solver status: Error
rmsd: 3.911991862099779e-7
# number of data
n = size(x_fit)[1]

mod = Model(solver=IpoptSolver(print_level=0)) # here we build the model

# Parameters declaration, remember that m is the number of peaks
@defVar(mod, g_co[i=1:m])
@defVar(mod, g_c1[i=1:m])
@defVar(mod, g_D[i=1:m])
@defVar(mod, g_freq[i=1:m])
@defVar(mod, g_hwhm[i=1:m])

# Parameters initialisation
setValue(g_co[i=1:m], c0_guess)
setValue(g_c1[i=1:m], c1_guess)
setValue(g_D[i=1:m], D_guess)
setValue(g_freq[i=1:m], freq_guess)
setValue(g_hwhm[i=1:m], hwhm_guess)

# The expression of the gaussians
@defNLExpr(mod, g_mod[j=1:n],
    sum{((g_c1[i] - g_co[i]) * erfc(x_fit[j,1] / (2. * sqrt(10^g_D[i] * time))) + g_co[i]) *
        exp(-log(2) * ((x_fit[j,2] - g_freq[i]) / g_hwhm[i])^2), i=1:m})

# The objective function to solve, i.e. the simple least-square function
@setNLObjective(mod, Min, sum{(g_mod[j] - y_fit[j])^2, j=1:n})

println("Constructed...")
Solving...
This is Ipopt version 3.12.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 325
Total number of variables............................: 25
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.2637671e-05 0.00e+00 7.30e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 5.7246241e-06 0.00e+00 2.36e-04 -2.5 1.84e-02 -4.0 1.00e+00 1.00e+00f 1
2 5.6743423e-06 0.00e+00 5.84e-06 -5.7 3.33e-02 -4.5 1.00e+00 1.00e+00f 1
3 5.5394526e-06 0.00e+00 4.65e-05 -8.6 9.25e-02 -5.0 1.00e+00 1.00e+00f 1
4 5.2337768e-06 0.00e+00 2.52e-04 -8.6 2.25e-01 -5.4 1.00e+00 1.00e+00f 1
5 4.7066798e-06 0.00e+00 6.61e-04 -8.6 4.16e-01 -5.9 1.00e+00 1.00e+00f 1
6 3.6366884e-06 0.00e+00 2.40e-03 -8.6 1.10e+00 -6.4 1.00e+00 1.00e+00f 1
7 3.3279149e-06 0.00e+00 1.30e-02 -8.6 3.36e+00 -6.9 1.00e+00 5.00e-01f 2
8 2.9751777e-06 0.00e+00 1.31e-04 -8.6 1.76e-02 -3.7 1.00e+00 1.00e+00f 1
9 2.6993327e-06 0.00e+00 1.06e-04 -8.6 6.53e-02 -4.2 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 2.5331409e-06 0.00e+00 5.52e-05 -8.6 8.33e-02 -4.7 1.00e+00 1.00e+00f 1
11 2.3843155e-06 0.00e+00 3.07e-03 -8.6 3.72e-01 -5.2 1.00e+00 1.00e+00f 1
12 2.3518006e-06 0.00e+00 7.89e-06 -8.6 1.41e-02 -4.7 1.00e+00 1.00e+00f 1
13 2.3464782e-06 0.00e+00 1.39e-06 -8.6 7.94e-03 -4.3 1.00e+00 1.00e+00f 1
14 2.2522178e-06 0.00e+00 1.79e-04 -8.6 2.50e-01 -4.8 1.00e+00 5.00e-01f 2
15 2.2097216e-06 0.00e+00 1.61e-04 -8.6 6.34e-02 -5.3 1.00e+00 1.00e+00f 1
16 2.1590005e-06 0.00e+00 5.91e-05 -8.6 6.68e-02 -5.7 1.00e+00 1.00e+00f 1
17 2.0415012e-06 0.00e+00 3.26e-04 -8.6 1.69e-01 -6.2 1.00e+00 1.00e+00f 1
18 1.9622640e-06 0.00e+00 1.11e-04 -8.6 1.72e-01 -5.8 1.00e+00 1.00e+00f 1
19 1.5797684e-06 0.00e+00 1.78e-03 -8.6 1.34e+00 -5.4 1.00e+00 2.50e-01f 3
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
20 1.4724305e-06 0.00e+00 8.90e-05 -8.6 1.75e-01 -5.8 1.00e+00 1.00e+00f 1
21 1.4011069e-06 0.00e+00 2.22e-04 -8.6 1.68e-01 -6.3 1.00e+00 1.00e+00f 1
22 1.3584942e-06 0.00e+00 4.46e-04 -8.6 1.81e-01 -6.8 1.00e+00 1.00e+00f 1
23 1.3369586e-06 0.00e+00 4.21e-04 -8.6 1.73e-01 -7.3 1.00e+00 1.00e+00f 1
24 1.3271136e-06 0.00e+00 3.23e-04 -8.6 1.47e-01 -7.7 1.00e+00 1.00e+00f 1
25 1.3233800e-06 0.00e+00 1.14e-04 -8.6 7.79e-02 -7.3 1.00e+00 1.00e+00f 1
26 1.3207397e-06 0.00e+00 1.55e-04 -8.6 1.13e-01 -7.8 1.00e+00 1.00e+00f 1
27 1.3197521e-06 0.00e+00 3.70e-05 -8.6 4.66e-02 -7.4 1.00e+00 1.00e+00f 1
28 1.3188481e-06 0.00e+00 8.18e-05 -8.6 8.53e-02 -7.9 1.00e+00 1.00e+00f 1
29 1.3185262e-06 0.00e+00 1.40e-05 -8.6 3.06e-02 -7.4 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
30 1.3181632e-06 0.00e+00 5.12e-05 -8.6 6.35e-02 -7.9 1.00e+00 1.00e+00f 1
31 1.3179164e-06 0.00e+00 5.35e-05 -8.6 6.28e-02 -8.4 1.00e+00 1.00e+00f 1
32 1.3178075e-06 0.00e+00 1.34e-05 -8.6 2.95e-02 -8.0 1.00e+00 1.00e+00f 1
33 1.3177089e-06 0.00e+00 4.44e-05 -8.6 5.51e-02 -8.4 1.00e+00 1.00e+00f 1
34 1.3176686e-06 0.00e+00 5.63e-06 -8.6 1.84e-02 -8.0 1.00e+00 1.00e+00f 1
35 1.3176302e-06 0.00e+00 2.44e-05 -8.6 3.97e-02 -8.5 1.00e+00 1.00e+00f 1
36 1.3176167e-06 0.00e+00 2.75e-06 -8.6 1.27e-02 -8.1 1.00e+00 1.00e+00f 1
37 1.3176021e-06 0.00e+00 1.17e-05 -8.6 2.70e-02 -8.5 1.00e+00 1.00e+00f 1
38 1.3175931e-06 0.00e+00 1.04e-05 -8.6 2.51e-02 -9.0 1.00e+00 1.00e+00f 1
39 1.3175902e-06 0.00e+00 1.89e-06 -8.6 1.04e-02 -8.6 1.00e+00 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
40 1.3175882e-06 0.00e+00 4.06e-06 -8.6 1.56e-02 -9.1 1.00e+00 1.00e+00f 1
41 1.3175877e-06 0.00e+00 3.44e-07 -8.6 4.43e-03 -8.6 1.00e+00 1.00e+00f 1
42 1.3175875e-06 0.00e+00 6.03e-07 -8.6 5.97e-03 -9.1 1.00e+00 1.00e+00f 1
43 1.3175874e-06 0.00e+00 2.02e-07 -8.6 3.44e-03 -9.6 1.00e+00 1.00e+00f 1
44 1.3175874e-06 0.00e+00 1.41e-08 -8.6 9.05e-04 -10.1 1.00e+00 1.00e+00f 1
45 1.3175874e-06 0.00e+00 1.22e-10 -9.0 8.43e-05 -10.5 1.00e+00 1.00e+00f 1
Number of Iterations....: 45
(scaled) (unscaled)
Objective...............: 1.3175874444875093e-06 1.3175874444875093e-06
Dual infeasibility......: 1.2195790677334437e-10 1.2195790677334437e-10
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 1.2195790677334437e-10 1.2195790677334437e-10
Number of objective function evaluations = 62
Number of objective gradient evaluations = 46
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 45
Total CPU secs in IPOPT (w/o function evaluations) = 0.025
Total CPU secs in NLP function evaluations = 5.479
EXIT: Optimal Solution Found.
Solver status: Optimal
rmsd: 1.3175874444875093e-6
...
Just some news: I tried to solve the problem in a badly updated JuliaBox, and even with good scaling the problem was not solved. After updating ReverseDiffSparse, it is solved. So the update really helps!
On Monday, March 7, 2016 at 13:07:39 UTC+11, Charles Ll wrote:

I just saw your answer after posting my reply. Yes, my problem was badly scaled, and I definitely should avoid that. I think Matlab does some internal scaling before optimisation, because this was not a problem there... With proper scaling of the y_fit observation values, the problem seems to disappear. But as I said in my previous answer, the problem is even solved without scaling, just by removing the 0 values in the y_fit array... I think combining them with the very small y values drove the algorithm crazy. It is still interesting that the problem actually worked with older versions of JuMP!
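For what it's worth, the preprocessing described in this thread (rescaling the observations and dropping exact zeros in y_fit) could be sketched as follows. This is a hypothetical snippet using the x_fit/y_fit arrays from the model posted earlier, not code from the notebook:

# Hypothetical preprocessing sketch (not from the notebook):
# rescale the observations to order 1 and drop exact zeros in y_fit
# before building the JuMP model.
keep = y_fit .> 0.0                    # keep strictly positive observations
y_scale = maximum(y_fit)               # assumes y_fit has positive entries
y_fit_clean = y_fit[keep] ./ y_scale   # observations rescaled to (0, 1]
x_fit_clean = x_fit[keep, :]           # matching rows of the predictors

The model would then be built with x_fit_clean/y_fit_clean in place of x_fit/y_fit, and the fitted amplitudes multiplied back by y_scale afterwards.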