Creating Solver takes minutes for large problem

Fabian Kessler

Dec 21, 2020, 2:10:21 PM
to CasADi
Hi everyone,

I'm currently implementing an MPC + multiple shooting approach for an active SLAM system.

I optimize only over the mean state (i.e. 9x1) and construct the covariance (9x9) for my objective function via belief propagation. Creating the objective and constraint functions works fine and takes a matter of seconds, although they are quite large, whereas creating an instance of the solver takes a few minutes to complete.
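
For context, the propagation step is roughly of this form (A_k and Q_k here are stand-ins; the actual model is more involved):

from casadi import SX

# Illustrative EKF-style covariance propagation, built symbolically alongside the mean
nx = 9
Sigma_k = SX.sym('Sigma_k', nx, nx)        # current covariance
A_k = SX.sym('A_k', nx, nx)                # linearized transition matrix (stand-in)
Q_k = SX.sym('Q_k', nx, nx)                # process noise covariance (stand-in)
Sigma_next = A_k @ Sigma_k @ A_k.T + Q_k   # enters the objective, not the decision variables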

Here is my setup procedure:
# create solver / planner (X, U, obj, g, P, nx, N are built earlier)
OPT_variables = vertcat(X.reshape((nx*(N+1), 1)), U.reshape((2*N, 1)))
nlp_prob = {'f': obj, 'x': OPT_variables, 'g': g, 'p': P}

# optimization options
opts = {}
opts['print_time'] = 1

# ipopt options
opts['ipopt'] = {}
opts['ipopt']['max_iter'] = 1500
opts['ipopt']['acceptable_tol'] = 1e-8
opts['ipopt']['acceptable_obj_change_tol'] = 1e-8

print('create solver')
solver = nlpsol('solver', 'ipopt', nlp_prob, opts)

The optimization itself is much faster than the creation of the solver object, and I need to reconstruct the solver multiple times during my MPC approach. What could be the bottleneck in my code, or is it simply the size of my problem? I'm happy to share specific lines of code if needed.

Best, 
Fabian 

Joris Gillis

Dec 21, 2020, 3:42:06 PM
to CasADi
Dear Fabian,

Can you clarify why reconstructing the solver online is required?
Such a scheme should really be avoided.

Best regards,
 Joris

Fabian Kessler

Dec 22, 2020, 12:54:06 PM
to CasADi
Hi Joris, 
thanks for your reply! 

I figured out that I can handle all the necessary updates by passing the changing parameters to the solver via p, so the solver no longer needs to be rebuilt online.
Additionally, not calculating the full Hessian speeds up the problem tremendously.
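
For completeness, the pattern now looks roughly like this (build_parameter_vector, w0, n_mpc_steps and the bound vectors are placeholders for my actual data handling):

from casadi import nlpsol

# Build the solver once, offline (nlp_prob and opts as in my first post)
solver = nlpsol('solver', 'ipopt', nlp_prob, opts)

# In the MPC loop only numeric arguments change -- the solver is never rebuilt
for k in range(n_mpc_steps):                      # n_mpc_steps: placeholder
    p_k = build_parameter_vector(x_measured, k)   # hypothetical helper
    sol = solver(x0=w0, p=p_k, lbx=lbw, ubx=ubw, lbg=lbg, ubg=ubg)
    w0 = sol['x']                                 # warm-start the next step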

That being said, creating the solver, and especially doing the C compilation, is still very time-intensive and might take days... at least that's how it looks.

Here is some of my code: 

Maybe you can have a quick glance and tell me a couple of things that might speed up the compilation process.
The covariance is constructed on the fly, as I don't want it among the optimization variables (covariance-free trajectory optimization).

Here is what I have been thinking: 
- Is calculating matrix multiplications via @ fast?
- Can I somehow restructure the for loops into fold / mapaccum schemes, and would that make the compilation faster?

Best,
Fabian 

Joris Gillis

Dec 29, 2020, 10:55:01 AM
to CasADi
Dear Fabian,

Compilation speed is directly related to the number of nodes in an expression graph.
You can reduce graph size by:
 1) switching to MX for the outer layers of the formulation (non_linear_constraints, objective, maybe even belief_propagation already)
 2) switching to for-loop equivalents such as map and mapaccum (usage of MX is needed, or else everything will be inlined and expanded again); see the sketch below
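
For illustration (generic placeholder dynamics, not your model), point 2 could look roughly like this:

from casadi import MX, Function, vertcat

# Hypothetical discrete-time dynamics x_next = f(x, u); sizes are placeholders
nx, nu, N = 9, 2, 20
x = MX.sym('x', nx)
u = MX.sym('u', nu)
x_next = x + 0.1*vertcat(u, MX.zeros(nx - nu, 1))   # placeholder dynamics
F = Function('F', [x, u], [x_next], ['x', 'u'], ['x_next'])

# One mapaccum node in the graph instead of N inlined copies of F
F_rollout = F.mapaccum('F_rollout', N)
X0 = MX.sym('X0', nx)
U = MX.sym('U', nu, N)
X_traj = F_rollout(X0, U)   # nx-by-N predicted trajectory with a compact MX graph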

Best regards,
  Joris

Fabian Kessler

Dec 30, 2020, 6:48:27 PM
to CasADi
Hi Joris, 
thanks for your answer. I'm not entirely sure how to go about this, given that my code is already quite complex.

Before I invest a significant amount of time in reformulating my entire problem:
- Is there any example of how to convert SX code to MX code, or an example of multiple shooting using mapaccum?
- Will the MX formulation result in a faster runtime or only a faster compile time? (I read that SX is preferred over MX when optimizing for runtime.)

Using 'jit' with my current setup only works with NO additional optimization flags, and the generated C code DOES NOT speed things up. Ultimately, I would like to have far fewer nodes in the expression graph, which would hopefully allow "-O1", "-O2" or even "-O3" optimization during compilation and, I assume, speed up my problem.
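
For reference, the jit configuration I am referring to is roughly the following (option names as I understand them from the docs; adding compiler flags to jit_options is exactly the part that currently fails for my problem size):

# jit options as passed to nlpsol
opts['jit'] = True
opts['compiler'] = 'shell'    # compile with the system compiler
opts['jit_options'] = {}      # e.g. {'flags': ['-O1']} is what I would like to be able to use
solver = nlpsol('solver', 'ipopt', nlp_prob, opts)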

Here are my current runtimes:
[attached screenshot of solver timing statistics: Bildschirmfoto 2020-12-31 um 00.38.22.png]
Getting it down by a factor of 10 would be tremendously helpful. 

Best,
Fabian 

Joris Gillis

Feb 6, 2021, 2:00:56 PM
to CasADi
Dear Fabian,

The sysid example shows how to achieve compact graphs that are good candidates for code generation with optimization flags.

A factor of 10 is not feasible with your present formulation/solver settings: the time spent in the solver itself (2.3 s, cf. https://github.com/casadi/casadi/wiki/FAQ:-is-the-bottleneck-of-my-optimization-in-CasADi-function-evaluations-or-in-the-solver%3F ) will become the bottleneck once function evaluations get faster.
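
For reference, the generate-and-compile workflow for an existing nlpsol instance is roughly (the compiler invocation here is indicative, not prescriptive):

# Generate C code for all solver-related functions, compile it, and reload it
solver.generate_dependencies('nlp.c')
import os
os.system('gcc -fPIC -shared -O1 nlp.c -o nlp.so')   # indicative compiler call
compiled_solver = nlpsol('solver', 'ipopt', './nlp.so', opts)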

Best regards,
  Joris