Issue With Optimizer Runtime on Variable Elimination


Ethan Foss

May 21, 2025, 2:44:51 AM
to YALMIP
Hi, 

I have a model that is working as intended, but I am noticing that the optimizer takes a very long time to solve the problems. Also, depending on the variable sizes in the problem, YALMIP encounters an out-of-memory error:

Error using  +
Requested 81371x27234 (16.5GB) array exceeds maximum array size preference (15.7GB). This might cause MATLAB to become unresponsive.

Error in optimizer/optimizer_precalc (line 33)
        sys.model.precalc.aux = rmvmonoms*0+1;

The way I have set up my optimizer is to pre-vectorize all of the solution variables and the constraints. This vectorization leads to some large matrix parameters in the optimizer, which may or may not be contributing to the issue. If I do not pre-vectorize, the solution time using the optimizer is much faster. It would be nice to get the optimizer working for the pre-vectorized version as well, so any suggestions are appreciated. Thanks for the help!


Johan Löfberg

May 21, 2025, 2:59:51 AM
to YALMIP
You would have to supply a reproducible example. What the message reveals is that the parameterized model contains an enormous number of symbolic monomials that the framework has to deal with (either due to poor modeling, or because the model is simply too large).

Regardless, a reproducible example would be interesting.

Johan Löfberg

May 21, 2025, 3:00:42 AM
to YALMIP
Unclear what you mean by pre-vectorizing the solution variables and constraints.

Ethan Foss

May 22, 2025, 6:14:51 PM
to YALMIP
Here is a reproducible example. Basically, all I am doing is reformulating the constraints. But I think you are right that this is an example of bad modeling, as the reformulation leads to large parameter matrices with no sparsity encoded. When timing the results, it's clear that Option 2 takes far longer, I'm guessing due to the large size of the H matrix. I am curious whether there is some workaround for encoding sparsity in the parameter matrices that are sent to the optimizer.

Also, I am not sure whether the version I present here has the same problem as my original question, where the long runtimes appear when I call the optimizer recursively, but I still notice long runtimes from the eliminatevariables function.

% YALMIP test:
%% Option 1: keep the dynamics as structured (A,B) parameters
n  = 6;       % state dimension
N  = 100;     % horizon length
nu = 3;       % input dimension
A  = sdpvar(n,n,N-1,'full');
B  = sdpvar(n,nu,N-1,'full');
x  = sdpvar(n,N,'full');
u  = sdpvar(nu,N-1,'full');
x0 = sdpvar(n,1,'full');
xf = sdpvar(n,1,'full');
Constraints = [x0 == x(:,1)];
Objective = 0;
for i = 1:N-1
    Constraints = [Constraints, x(:,i+1) == A(:,:,i)*x(:,i) + B(:,:,i)*u(:,i)];
    Objective = Objective + norm(u(:,i),2);
end
Constraints = [Constraints, xf == x(:,N)];
Optimizer1 = optimizer(Constraints,Objective,sdpsettings('solver','mosek'),{A,B,x0,xf},{x,u});
% Run optimizer:
A = zeros(n,n,N-1);
B = zeros(n,nu,N-1);
for i = 1:N-1
    A(:,:,i) = [eye(3) .1*eye(3); zeros(3,3) eye(3)];
    B(:,:,i) = [.1*eye(3); eye(3)];
end
x0 = eye(6,1);      % first standard basis vector
xf = zeros(6,1);
Solution = Optimizer1{{A,B,x0,xf}};
%% Option 2: collapse everything into one large (H,g) parameter pair
z = sdpvar(n*N+nu*(N-1),1);
IndexX = @(z,i) z((i-1)*n+1:i*n);               % state block i of z
IndexU = @(z,i) z(N*n+(i-1)*nu+1:N*n+i*nu);     % input block i of z
H = sdpvar(n*(N+1),n*N+nu*(N-1));               % large dense parameter matrix
g = sdpvar(n*(N+1),1);
Objective = 0;
for i = 1:N-1
    Objective = Objective + norm(IndexU(z,i),2);
end
Constraints = [H*z == g];
Optimizer2 = optimizer(Constraints,Objective,sdpsettings('solver','mosek'),{H,g},{z});
% Run optimizer:
H = zeros(n*(N+1),n*N+nu*(N-1));
g = zeros(n*(N+1),1);
for i = 1:N-1
    H((i-1)*n+1:i*n,(i-1)*n+1:i*n)           = [eye(3) .1*eye(3); zeros(3,3) eye(3)];  % A_i
    H((i-1)*n+1:i*n,i*n+1:(i+1)*n)           = -eye(6);                                % -x(:,i+1)
    H((i-1)*n+1:i*n,N*n+(i-1)*nu+1:N*n+i*nu) = [.1*eye(3); eye(3)];                    % B_i
end
H((N-1)*n+1:N*n,1:n)           = eye(6);    % initial condition x(:,1) == x0
g((N-1)*n+1:N*n)               = eye(6,1);
H(N*n+1:(N+1)*n,(N-1)*n+1:N*n) = eye(6);    % terminal condition x(:,N) == xf
g(N*n+1:(N+1)*n)               = zeros(6,1);
Solution = Optimizer2{{H,g}};



[Attachment: FlameGraph.pdf]

Johan Löfberg

May 23, 2025, 12:40:49 AM
to YALMIP
Well, I would question the general use of YALMIP/optimizer here, as you basically have your model in low-level format already, which makes it easy to make direct calls to a solver.
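
For concreteness, a minimal sketch of what such a direct call could look like, using MATLAB's coneprog (Optimization Toolbox) for illustration rather than MOSEK's own API, and assuming the numeric H and g built in the example above; the epigraph variables t and all helper names here are illustrative, not from the thread:

% Epigraph reformulation: minimize sum(t)  s.t.  H*z == g,  ||u_i||_2 <= t(i),
% over the stacked variable v = [z; t].
nz = n*N + nu*(N-1);                 % length of z
nt = N - 1;                          % one epigraph variable per norm term
f  = [zeros(nz,1); ones(nt,1)];      % cost vector over [z; t]
for i = 1:N-1
    Ai = sparse(1:nu, N*n+(i-1)*nu+(1:nu), 1, nu, nz+nt);        % selects u_i from [z; t]
    di = zeros(nz+nt,1); di(nz+i) = 1;                           % selects t(i)
    socConstraints(i) = secondordercone(Ai, zeros(nu,1), di, 0); % ||Ai*v|| <= di'*v
end
Aeq = [sparse(H), sparse(n*(N+1), nt)];   % equality block only touches z
vt  = coneprog(f, socConstraints, [], [], Aeq, g);
z   = vt(1:nz);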

Having said that, your model is massively sparse, so you can probably circumvent all of your performance issues by setting up a sparse optimizer.
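
For example, a minimal sketch of one way to set that up, assuming the sparsity pattern of H is fixed across solves and reusing the sdpvar versions of z, g, and Objective from Option 2; Hpattern, Hnumeric, and gnumeric are illustrative names for the known pattern and the numeric data:

% Parameterize only the k structural nonzeros of H and lift them into the
% full matrix through a constant sparse selection matrix, so the optimizer
% never has to eliminate a dense n*(N+1)-by-(n*N+nu*(N-1)) parameter block.
[m, ncol]    = size(Hpattern);        % Hpattern: numeric matrix with the known pattern
[rows, cols] = find(Hpattern);
k  = numel(rows);
hp = sdpvar(k,1);                     % one parameter per structural nonzero
P  = sparse(sub2ind([m ncol], rows, cols), 1:k, 1, m*ncol, k);
H  = reshape(P*hp, m, ncol);          % sdpvar with the sparse structure baked in
Constraints = [H*z == g];
OptimizerSparse = optimizer(Constraints, Objective, sdpsettings('solver','mosek'), {hp, g}, {z});
% At solve time, pass only the nonzero values (same ordering as find):
Hvals    = Hnumeric(sub2ind([m ncol], rows, cols));
Solution = OptimizerSparse{{Hvals, gnumeric}};   % gnumeric: the numeric right-hand side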

Ethan Foss

May 23, 2025, 12:35:56 PM
to YALMIP
Thanks, yeah, I figure what I am doing is overkill. However, in practice what I am doing is taking a bunch of convex constraints, which may be affine, SOCs, or SDCs, and appending large affine equality/inequality constraints of the form shown. If there is some easy way to append this sort of large affine constraint to an existing model, that would be ideal. Essentially, I am using YALMIP for identifying constraint type and convexity. But it seems like the sparse optimizer you describe should work. Thanks!
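
For what it's worth, a minimal sketch of appending such a block to an existing model, reusing the sparse-H construction from the sketch above; ExistingConstraints, ExistingObjective, and gvals are hypothetical placeholders, not names from the thread:

% Appending the big affine block is just list concatenation in YALMIP.
% ExistingConstraints/ExistingObjective stand in for the affine/SOC/SDP part;
% hp, H, and Hvals are from the sparse-H sketch above.
Constraints = [ExistingConstraints, H*z == g];
Opt = optimizer(Constraints, ExistingObjective, sdpsettings('solver','mosek'), {hp, g}, {z});
Solution = Opt{{Hvals, gvals}};   % only the k structural nonzeros travel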