Dear CasADi community,
I have been struggling for a while with a problem-size issue when initializing my NLPs. I have been avoiding it by testing smaller configurations, but it has finally become a really pressing issue and I would like to run at the full desired size. A quick overview:
I have an NMPC problem with a model of around 470 ODEs, which I discretize with an orthogonal collocation scheme on finite elements. Sure enough, the size explodes rapidly, and in a typical application I would be looking at >28000 NLP variables. However, I can't seem to initialize my NlpSolver once I have more than about 7000 variables.
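For reference, here is a rough back-of-the-envelope count of where that >28000 figure comes from. The polynomial degree and number of finite elements below are illustrative assumptions, not my exact horizon settings:

```python
# Rough count of NLP decision variables for direct collocation on finite
# elements. Degree and element count are assumed values for illustration.
nx = 470   # number of ODE states (from the model above)
d = 3      # degree of the collocation polynomials (assumed)
N = 15     # number of finite elements over the horizon (assumed)

# Per finite element: d collocation states plus 1 end state, each of size nx.
n_state_vars = N * (d + 1) * nx
print(n_state_vars)  # 28200 state variables, before controls are even counted
```

So even before adding the control inputs, a moderate horizon already puts the problem above 28000 variables.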
The error I'm getting:
File "../../setup_functions/setup_solver.py", line 71, in setup_solver
solver.init()
File "...\casadi-python27-numpy1.9.1-v2.3.0\casadi\casadi.py", line 1641, in init
return _casadi.SharedObject_init(self, *args)
RuntimeError: std::bad_alloc
One interesting aspect: for other problems, I can easily go beyond this variable barrier. For a simpler ODE model, say with only 6 ODEs, I can artificially increase the problem size to 70000 NLP variables and my solver initializes without breaking a sweat. But I suppose this doesn't surprise the CasADi developers, as the issue probably comes down to the way the model is implemented and used as an SXFunction.
I am using CasADi 2.3 (I have also tried 2.4) on Windows, tested on both x86 and x64 systems, with no success. I suspected it might be a RAM issue: my Win 7 and Win 10 machines behave differently in terms of the maximum RAM they use during initialization. I have also increased the paging file size on my machines, in an attempt to give Windows more room. Do you think I have any chance of getting this to run? If yes, how?
Are there any optimization steps I could take so that CasADi handles my model more efficiently? I would be happy to discuss more details of my implementation, but I think for a first error report this should be enough.
Looking forward to hearing from you. Cheers,
Alex