"RuntimeError: std::bad_alloc" for "large?" problems


anders

May 3, 2013, 10:21:15 AM
to casadi...@googlegroups.com
Hi,
I'm working on a project where I try to estimate some parameters using MHE and collocation implemented in CasADi. The model I'm using is a spatially discretized 1D advection equation, so the number of states equals the number of spatial discretization points (the variable 'r' in my example).
The estimation works as long as I keep the number of states low or the estimation horizon short. But if I set the number of states to 30, the number of unknown parameters to 3 and the estimation horizon to 10, I get the following error: "RuntimeError: std::bad_alloc".
The error arises when I try to initialize the NLP solver (Ipopt).

I'm guessing it might have to do with the model I use, which looks rather complex/long when I print f_rhs[0], but I'm not sure.
My questions are:
a) Is the bad_alloc error a bug, or is it my problem formulation that is the problem?
b) Are there any obvious improvements (types, structures, methods, etc.) that would make my problem easier for CasADi/Ipopt to work with?

I have attached my project, where MHEcollocation.py is the main file. It runs without error if "r <= 27" and crashes if "r >= 28".

Best regards,
Anders
GetModel1D.py
MHEcollocation.py
RunSimulation.py

Joel Andersson

May 3, 2013, 10:46:07 AM
to casadi...@googlegroups.com
Hello!

I tested running your script and I can confirm that it runs very slowly. The exact Hessian you are generating has over 16 million operations, which suggests there may be duplicated expressions (the current version of CasADi does not perform common subexpression elimination).
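To make the duplicate-expression point concrete, here is a plain-Python sketch (not CasADi code; the function names and depths are purely illustrative) of why expanding a graph with shared subexpressions, without common subexpression elimination, blows up the operation count:

```python
def count_ops_tree(depth):
    """Operation count when a shared subexpression is duplicated at every
    level (naive expansion into a tree): the work doubles per level."""
    return 2 ** (depth + 1) - 1

def count_ops_dag(depth):
    """Operation count when each subexpression is evaluated once and reused
    (a shared graph, i.e. with CSE): the work grows linearly with depth."""
    return depth + 1

# With ~24 levels of sharing, naive expansion already exceeds 16 million
# operations, while the shared graph needs only a couple of dozen.
print(count_ops_tree(23))  # 16777215
print(count_ops_dag(23))   # 24
```

This is of course a worst case, but it illustrates how a modestly sized model can produce a multi-million-operation Hessian expression once everything is expanded to scalars.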

In any case, you currently have set the options:
solver.setOption("expand_f",True)
solver.setOption("expand_g",True)

which create one large expression graph of scalar operations instead of the original, much smaller graph of vector-valued expressions. When I comment out those lines, memory use and initialization time are much smaller.

You might need to update CasADi first, since there have been recent bug fixes and major performance improvements in this part of the code. When updating, note the syntax-change announcement I sent to the forum some time ago.

If computational speed is a bottleneck for you, it is possible to generate compilable C code for the NLP Hessian and Jacobian. This should give speed-ups of around 10 times. This is somewhat experimental, though.

Hope this helps!
Joel

Joel Andersson

May 3, 2013, 11:01:44 AM
to casadi...@googlegroups.com
Ah, one more thing: JModelica.org currently has its CasADi revision locked. If you want the latest known-to-be-working Windows build, check the "tested" subdirectory under files.casadi.org.

Joel

Fredrik Magnusson

May 11, 2013, 5:52:30 AM
to casadi...@googlegroups.com
Hello!

I feel that question a) is still unanswered. In my experience, the bad_alloc error happens when the problem is too large. I am guessing it occurs because no chunk of memory large enough for the allocation is available, typically during the computation of the Hessian. Is this correct?

For me this seems to happen when my Python process consumes between 1 and 1.5 GB of RAM. However, at that point I still have 6 GB of free RAM available. So if the problem is indeed a lack of memory, is there a way for me to utilize the remaining 6 GB? If not, would installing more RAM help?

/Fredrik

Joel Andersson

May 13, 2013, 7:15:20 AM
to casadi...@googlegroups.com
Hello!

In the latest revisions of CasADi, the above issue is resolved by using the MX datatype instead of SX; using less memory is exactly the purpose of MX. It is also possible to allocate more memory to a process, but how to do so is platform specific. You can google for it.
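The SX/MX trade-off can be illustrated with a plain-Python back-of-the-envelope count (hypothetical numbers, not CasADi internals): SX records one graph node per scalar operation, while MX records one node per matrix-valued operation, so MX graphs stay small even when the matrices are large.

```python
def sx_nodes_matmul(n):
    """Scalar nodes for a naive n x n matrix product expanded scalar-wise:
    n*n entries, each a sum of n products -> n*n*(2*n - 1) operations."""
    return n * n * (2 * n - 1)

def mx_nodes_matmul():
    """The same product kept as a single matrix-valued operation."""
    return 1

# For the 30-state problem from this thread, one matrix product alone
# expands to tens of thousands of scalar nodes.
print(sx_nodes_matmul(30))  # 53100
print(mx_nodes_matmul())    # 1
```

In practice SX can still be faster to evaluate for small dense expressions; the point here is only the memory footprint of the graph itself.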

Joel

Joel Andersson

May 13, 2013, 7:18:04 AM
to casadi...@googlegroups.com
There have been problems with memory usage when calculating exact Hessians before, but as far as I know, these should be resolved now. If you encounter an out-of-memory issue like this on a recent revision of CasADi (e.g. the one in the nightly builds), then please try to isolate the error and file a bug report on Github.

Joel

Afaq Ahmad

Aug 4, 2015, 4:35:41 PM
to CasADi
Hi Joel,

As you said, "it is possible to generate compilable C code for the NLP Hessian and Jacobian. This should give speed-ups of around 10 times." I am using Python; how can I do that? I am also facing a memory problem because of the Jacobian.

Regards
AFAQ AHMAD

Joel Andersson

Aug 27, 2015, 6:20:46 AM
to CasADi
If you're facing memory problems because of the Jacobian, you probably need to analyse why and restructure your code first.

Joel