Cannot initialize NLP or Jacobians (large problem sizes)


Alex Tătulea

Mar 6, 2017, 7:50:52 AM
to CasADi
Dear CasADi community,

I have been struggling for a while with these size issues when initializing my NLPs... I was avoiding it by testing smaller configurations, but it has finally become a really pressing issue and I would like to run at the full desired size. Quick overview:
I have an NMPC problem where I use a model of around 470 ODEs, which I discretize with an orthogonal collocation scheme on finite elements. Sure enough, the size explodes rapidly, and in a typical application I'd be looking at >28000 NLP variables. However, I can't seem to initialize my NlpSolver with more than 7000 variables...

The error I'm getting:

  File "../../setup_functions/setup_solver.py", line 71, in setup_solver
    solver.init()
  File "...\casadi-python27-numpy1.9.1-v2.3.0\casadi\casadi.py", line 1641, in init
    return _casadi.SharedObject_init(self, *args)
RuntimeError: std::bad_alloc

To point out an interesting aspect: for other problems I can easily go beyond this variable barrier. For a simpler ODE model, say with only 6 ODEs, I can artificially increase the problem size to 70000 NLP variables and my solver initializes without breaking a sweat. But I suppose this doesn't surprise the developers of CasADi, as the issue probably comes down to the way the model is implemented and used as an SXFunction.
 
I am using CasADi 2.3 (but I've also tried 2.4) on a Windows machine. Tested on both x86 and x64 systems, no success. I suspected it might be a RAM issue; my Win 7 and Win 10 machines behave differently as to the maximum RAM they seem to use during initialization. I have also increased the paging file size on my machines, in an attempt to give Windows more room. Do you think I have any chance of getting this started? If yes, how?
Are there any optimization steps I could take to have CasADi handle my model in a more efficient way? I would be happy to discuss more details about my implementation, but I think for a first error report this should be enough.

Looking forward to hearing from you. Cheers,
Alex

Alex Tătulea

Mar 7, 2017, 10:02:04 AM
to CasADi
What I find very interesting is how different versions of CasADi behave with respect to the same problem. I have three configurations: (1) Windows 7 x64, 32-bit Python with Python(x,y) and Spyder, CasADi 2.3.0; (2) Windows 7 x64, 32-bit Python with the JModelica SDK 1.11 and CasADi 2.0.0; and finally (3) Windows 10 x64 with 64-bit Python, where I manually installed the Python modules, and CasADi 2.4.1-x64.

I take the same configuration, which creates an NLP with 9982 free variables, 9954 equality constraints and 14 inequality constraints. This setup only runs on configuration (2) above, while on the others it crashes with the std::bad_alloc error. Again, the first two configurations run on the same PC and the third runs on a different one. What is happening here?
Initially I thought this had to do with the 2 GB per-process memory limit for 32-bit applications, but my observations seem to contradict that. Most importantly, the solver cannot even be initialized in the "all 64-bit" configuration (3). I see that there are no responses on the thread so far, but I've still got my fingers crossed :) I know such a large ODE model is probably not very common, so most CasADi users will never run into this problem...

Cheers,
Alex

Filip Jorissen

Mar 7, 2017, 11:28:20 AM
to CasADi
I'm no expert but I know one or two things about this.

I think your problem is caused either by not having enough memory OR by filling up the stack. The first one should be easy to check if the program is quite slow: just have a look at how much memory is free in the task manager while/before your program crashes. If it's not that, then maybe the stack:

In general I believe the stack size can be expanded. For instance, in Java you have the '-Xmx' option for this. I'm not sure if you're using CasADi from C++ or Python? In the former case, make sure to allocate variables on the heap (i.e. not on the stack) by using 'new' (and don't forget 'delete' afterwards to avoid a memory leak).
In Python I don't know how to increase the stack size, but I'm sure it's possible. Have a look on Google and I'm sure you'll find it!
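For the Python side, a stdlib-only sketch of how a bigger stack can be requested: CPython lets you set the stack size of threads created after the call, so heavy recursion (or deep library calls) can run in a worker thread. The 64 MiB figure and the recursion depth are arbitrary illustration values, not anything specific to CasADi:

```python
import sys
import threading

def deep(n):
    """Recurse n times; each call consumes some C stack."""
    return 0 if n == 0 else deep(n - 1)

def worker(out):
    # Python's own recursion guard is separate from the C stack limit
    sys.setrecursionlimit(30000)
    out.append(deep(20000))

result = []
# Request a 64 MiB stack for threads created after this call
threading.stack_size(64 * 1024 * 1024)
t = threading.Thread(target=worker, args=(result,))
t.start()
t.join()
print(result[0])  # → 0
```

Note that `threading.stack_size` only affects new threads, not the main thread, so the work has to actually run inside the worker.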

Filip Jorissen

Mar 7, 2017, 11:40:11 AM
to CasADi
Now that I think about it, I may be confusing the 'stack' with the 'heap'. See here (http://stackoverflow.com/questions/79923/what-and-where-are-the-stack-and-heap) for some more information.

Based on what you tell us I think it's the memory though. Also I see from your error message that you are using Python =)

Alex Tătulea

Mar 7, 2017, 12:34:50 PM
to CasADi
Thanks for your comments, Filip! I hadn't thought about the stack or heap sizes... but either way, I'm afraid this is to a large extent out of my hands. The error occurs inside the CasADi code, when the solver object is initialized. There is not much I can directly do about that. Indirectly, I can try to optimize my symbolic expressions so that they hopefully need less memory. That's what I'm working on now.

Memory might be an issue, but not because I don't have enough. I have 24 GB on my PC with Windows 7, but Win32 limits that to 2 GB per process anyway, so there's not much I can do...
On my other machine I have a 64-bit OS and 64-bit Python. There I can see how the memory gets eaten up (all 8 GB I have, plus 8 GB of virtual memory) and the initialization still crashes. It just takes longer, but the outcome is the same. It seems like some inner part of the code simply has an exponentially growing memory demand.
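As a quick sanity check for this kind of 32-bit vs. 64-bit confusion, the pointer size of the running interpreter can be inspected with the standard library; a 32-bit Python is subject to the ~2 GB per-process cap on Windows regardless of how much RAM the machine has:

```python
import struct
import sys

# 32-bit Python reports 32 here (and is capped at ~2 GB/process on Windows);
# a 64-bit build reports 64
bits = struct.calcsize('P') * 8
print(f"{bits}-bit Python on {sys.platform}")
```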

Joel Andersson

Mar 7, 2017, 5:07:49 PM
to CasADi
Hi,

First of all, CasADi version might matter a lot. There was a major refactoring going on in 2.3-to-2.4-to-3.0. Especially when it comes to the setup of NLP solver objects. If you are using 2.4, make sure that you're using the latest minor revision, but better still is to use the latest release, 3.1. It's hard for us to give any help with previous releases.

Given any release, the amount of memory you need will depend greatly on *how* you construct the CasADi expressions. If the memory bottleneck is in CasADi - and not in IPOPT - you can almost certainly reorganize your code so that memory is no longer an issue. Make sure that you use the MX type for the NLP formulation (but possibly SX for the ODE).

Joel

Alex Tătulea

Mar 8, 2017, 3:28:52 AM
to CasADi
Thank you for your reply, Joel! As you might have seen from my posts, changing the CasADi version and the Windows version doesn't seem to help. I have, however, tried the same code on Ubuntu x64 and it seems to work (or at least the memory barrier is pushed further - I can't tell for sure). What matters in the end is that on Windows I can't use it. The only option indeed seems to be restructuring the code.

I have a complex model, which consists of many identical submodels and the connections between them. I expect this can be better modularized and organized, so that the memory overhead is reduced. I hope to comment back on this thread in a while to post my progress.

Cheers,
Alex