The Lagrangian in CasADi is simply defined as f(x) + trans(lam_g)*g(x). We usually do not include the Lagrange multipliers corresponding to the simple bounds (bounds on x), since they only result in an additive contribution (lam_x) to the gradient.
We also handle inequality constraints the same way as equality constraints, by combining the multipliers corresponding to the upper and lower bounds into a single multiplier.
So if you have symbolic expressions corresponding to the decision variable (x), the objective function (f), and the constraint function (g), you can construct the gradient of the Lagrangian as follows:
# Multipliers corresponding to bounds on x
lam_x = MX.sym("lam_x", x.sparsity())
# Multipliers corresponding to bounds on g(x)
lam_g = MX.sym("lam_g", g.sparsity())
L = f + dot(lam_g, g) + dot(lam_x, x)
# Gradient of the Lagrangian
grad_L = gradient(L, x)
# Lagrangian with the lam_x terms excluded (this is how it is normally formulated in CasADi)
L_alt = f + dot(lam_g, g)
# Gradient of the Lagrangian (equivalent to grad_L above)
grad_L_alt = gradient(L_alt, x) + lam_x
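To see why the two formulations give the same gradient, here is a minimal numeric sketch in plain Python (no CasADi needed), using a hypothetical toy problem f(x) = x0^2 + x1^2 with a single constraint g(x) = x0 + x1; the analytic gradients are written out by hand:

```python
# Toy problem (illustrative assumption, not from the post above):
#   f(x) = x0^2 + x1^2   ->  grad f = [2*x0, 2*x1]
#   g(x) = x0 + x1       ->  grad g = [1, 1]

def grad_L_full(x, lam_g, lam_x):
    """Gradient of L = f + lam_g*g + dot(lam_x, x), term by term."""
    return [2 * x[0] + lam_g * 1 + lam_x[0],
            2 * x[1] + lam_g * 1 + lam_x[1]]

def grad_L_alt(x, lam_g, lam_x):
    """Gradient of L_alt = f + lam_g*g, with lam_x added afterwards."""
    g_alt = [2 * x[0] + lam_g * 1,
             2 * x[1] + lam_g * 1]
    return [gi + li for gi, li in zip(g_alt, lam_x)]

x = [0.3, -1.2]
lam_g = 0.7
lam_x = [0.1, -0.4]
assert grad_L_full(x, lam_g, lam_x) == grad_L_alt(x, lam_g, lam_x)
```

Since the lam_x term is linear in x, its gradient is just lam_x, which is why it can be left out of the Lagrangian and added to the gradient directly.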
Best regards,
Joel