Ipopt and JuMP


tcs

Jul 18, 2014, 7:54:43 PM
to juli...@googlegroups.com
Hi all,

I have two JuMP related questions: 

(1) I have noticed a significant slowdown when I re-estimate a model with JuMP. I am mostly using the nonlinear solver interface in combination with Ipopt. The first estimation seems to hold on to memory that is not freed afterwards.
This might of course be a more general Julia issue or a problem with my operating system (Ubuntu 14.04). The weird thing is that the problem does not even disappear after restarting Julia; only a system restart solves it. Can you confirm this issue on other platforms/operating systems?

(2) More generally, I would like to understand better what happens when JuMP builds the model, because I feel like I am testing the limits of JuMP's nonlinear solver with the number of variables (sometimes > 100,000) and constraints in my applications. I like to use JuMP because it provides a convenient way of estimating economic models based on http://web.stanford.edu/group/SITE/archive/SITE_2007/segment_5/Judd_SMaxLikJuly2007.pdf without having to use AMPL or hand-code derivatives in MATLAB. However, I have to abort some specifications because the solver does not get to the estimation part in an acceptable time (in my case that means I let the computer run overnight and nothing seems to happen).

I would like to understand whether I just need to run my code on a cluster that has more memory or whether the bottleneck is processor time. I can imagine that there are problems with the Hessian computation due to the large number of variables. If that is the case, is it possible to sacrifice the Hessian when using Ipopt, if it means the difference between not being able to estimate the model at all and estimating it very slowly?
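For concreteness, my models have roughly the following shape. This is only a minimal sketch in JuMP 0.5 syntax; the names, sizes, and functional forms are placeholders, not my actual code:

using JuMP

m = Model()                 # JuMP picks Ipopt by default for nonlinear models
N = 100000                  # illustrative number of states/observations
@defVar(m, theta[1:3])      # structural parameters
@defVar(m, EV[1:N])         # expected value function, one variable per state
for i in 1:N-1
    # Bellman-type structural constraint (placeholder functional form)
    @addNLConstraint(m, EV[i] == log(exp(theta[1] + 0.95*EV[i+1]) + exp(theta[2])))
end
# placeholder log-likelihood objective
@setNLObjective(m, Max, sum{log(1 + exp(theta[3] - EV[i])), i = 1:N})
status = solve(m)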

Unfortunately, I cannot simply provide you with examples of what I am doing because of the data I am using. The problems I am talking about are structurally similar to the engine replacement problems mentioned in the paper above, but of much higher dimensionality.

Thank you!

Tony Kelman

Jul 18, 2014, 11:14:13 PM
to juli...@googlegroups.com
1) Not sure on that one, but I also have not yet ported my large nonlinear models over to JuMP for detailed benchmarking. I know reducing the setup time and memory consumption for nonlinear models is on the JuMP team's to-do list, but the second-best way to help with that is to provide reproducible example code. Can you replace any sensitive data with made-up inputs, just to test the solver in a way that you could share? The best solution would be patches to address the problem, but that's much harder.

2) Are you watching the output from Ipopt? Does it even start the optimization process, or does the setup itself take a long time? Can you provide a breakdown of setup time vs. Ipopt linear solver time vs. function callback timing, perhaps for smaller instances of your problem that are able to run? There are too many unknowns when it comes to large nonlinear optimization models, so you'll need to provide more information about the behavior you're seeing. I recommend setting the Ipopt option "print_timing_statistics" to "yes"; this will give you a detailed timing breakdown, but only if the optimization actually terminates (successfully, or via "max_iter" or "max_cpu_time").

If you want to solve a single very large problem, the version of Ipopt interfaced by Julia is not capable of parallelizing across multiple nodes of a distributed-memory cluster. There is an experimental MPI branch in the Ipopt repository, but to my knowledge it has not been hooked up to Julia, and the scalability results even from C++ or AMPL were not very encouraging. If you use a linear solver other than MUMPS, you can, however, parallelize the Newton step computation at each Ipopt iteration using shared-memory multithreading. But we'd have to know whether the Newton step computation is actually the bottleneck for your problems. In C++ or AMPL it often is, but your models may be taxing JuMP's automatic-differentiation implementation to an unusual extent.

You can set the Ipopt option "hessian_approximation" to "limited-memory" to test whether the second derivatives are dramatically more expensive than the first derivatives. This quasi-Newton approximation typically requires more iterations to converge than exact Hessian information, and sometimes it does not converge at all, but for some problem types the Hessian is so expensive to calculate that the trade-off is worth it.
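For reference, here is a sketch of how those options could be passed from Julia; this assumes the solver-constructor syntax on current JuMP master (the option-passing syntax differs on the 0.5 release; see later in this thread):

using JuMP, Ipopt

# assumes JuMP master syntax; on the 0.5 release pass options at solve time instead
m = Model(solver=IpoptSolver(
    hessian_approximation="limited-memory",   # quasi-Newton in place of exact Hessians
    print_timing_statistics="yes"))           # detailed timing breakdown at termination
# adding linear_solver="ma57" would select an HSL solver instead of the MUMPS
# default, assuming you have the HSL library built into your Ipopt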

-Tony

Miles Lubin

Jul 19, 2014, 12:37:13 AM
to juli...@googlegroups.com
To extend Tony's comments:

1) We're not aware of any memory leaks within JuMP or Ipopt, but without a test case, as Tony mentioned, it's hard to say much more. Memory usage that persists after Julia is closed is most likely a property of the Linux memory manager and isn't something you should need to worry about. You can use a tool like "top" to observe memory usage system-wide and per process. Within a Julia session, you could try calling gc(), which will invoke the garbage collector and hopefully free any memory that is no longer referenced.
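For example (a minimal sketch; m stands in for whatever model object your script builds):

solve(m)     # first estimation
m = nothing  # drop the reference to the model
gc()         # invoke the garbage collector to reclaim unreferenced memory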

2) There's a discussion of nonlinear optimization performance issues in the JuMP manual: http://jump.readthedocs.org/en/release-0.5/nlp.html#performance. More specifically, it's important to find out whether the bottleneck is in the function evaluation (JuMP's job) or in Ipopt itself. Could you report the "Total CPU secs in ..." lines from the output?
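Those lines appear at the end of Ipopt's iteration log and look like the following (the numbers here are purely illustrative):

Total CPU secs in IPOPT (w/o function evaluations)   =      12.345
Total CPU secs in NLP function evaluations           =      67.890

If the second line dominates, the time is going into the function and derivative callbacks (JuMP's job); if the first dominates, it is going into Ipopt itself, mostly the linear solver.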

Thanks,
Miles

tcs

Jul 19, 2014, 9:56:28 AM
to juli...@googlegroups.com
First of all, thank you for your responses. I am always amazed by how quickly people here respond to questions. I will try to post some code with synthetic data later today.

tcs

Jul 19, 2014, 2:02:47 PM
to juli...@googlegroups.com
Also, in response to Iain's e-mail: my previous comment seemed to promise a little more than I meant. For now I just wanted to generate some random data of approximately the same size as the data set I am looking at and post it together with my JuMP model code. It would take me a little longer to write code that generates data from the true model; of course, only the latter allows comparing recovered parameters to the true ones. If you think something like that would be helpful, I will try to work on it, but it might take a while.

Iain Dunning

Jul 19, 2014, 2:11:43 PM
to Tobias Salz, julia-opt

Whoops, I don't think I reply-all-ed. Really, anything you can provide would be very useful, even if it's synthetic data.


tcs

Jul 23, 2014, 3:11:48 PM
to juli...@googlegroups.com, tobia...@gmail.com
I got the following 

ERROR: type: typeassert: expected Dict{K,V}, got Array{(ASCIIString,Float64),1}

when trying to pass a tolerance option for Ipopt as indicated in the documentation:

solve(m, IpoptOptions=[("tol",1e-6)])

I am using the most up-to-date versions of JuMP and Ipopt.

PS: I still owe you some sample code for my model, but unfortunately I can't find the time right now. As soon as I have more time, I will post it here.

Iain Dunning

Jul 23, 2014, 8:47:27 PM
to juli...@googlegroups.com, tobia...@gmail.com
Try

IpoptOptions=["tol" => 1e-6]

On the master version of JuMP it's actually

m = Model(solver=IpoptSolver(tol=1e-6))

but it looks like we need to manually rebuild the documentation.
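Putting the two together, a sketch of both variants (which one applies depends on whether you are on the 0.5 release or on master):

using JuMP, Ipopt

# JuMP 0.5 release: pass Ipopt options at solve time
m = Model()
# ... build the model ...
solve(m, IpoptOptions=["tol" => 1e-6])

# JuMP master: pass Ipopt options to the solver constructor
m = Model(solver=IpoptSolver(tol=1e-6))
# ... build the model ...
solve(m)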

Florian Oswald

Aug 27, 2014, 9:00:56 AM
to juli...@googlegroups.com
Hi there,

I'm familiar with the paper / model type you posted. If you want to know whether and how your program runs (or why it doesn't), the data seem to be irrelevant, so just take rand() and construct some garbage. I would think that you basically want to know whether you can compute the likelihood (if you are using one) and the structural constraints at all, and how long that takes for a given model size, before you even think about which solver to use. I would be extremely interested in seeing how you feed your model to JuMP, since I doubt you can formulate the constraints as a convenient one-liner. Can you give us a snapshot in a gist [https://gist.github.com/] or show us something on GitHub? That would be helpful.
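For instance, something as crude as this would do; every name and size here is made up (Julia 0.3 syntax):

srand(1234)             # reproducible garbage
N = 100000              # stand-in for the real problem size
K = 10                  # number of fake covariates
X = rand(N, K)          # fake data with the right dimensions
choices = rand(1:5, N)  # fake discrete choices

and then feed X and choices into your model-building code exactly as you would the real data.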

cheers
florian