Parsing speed issue

Erik van den Eshof

Jul 29, 2025, 10:07:38 AM
to YALMIP
Hi Johan,

I'm trying to do some sequential convex programming and ran into an issue.

I want to use the optimizer function so that, for maximum efficiency, I don't have to re-parse the model in every iteration.

This means I have to define the variables I want to update after each iteration as sdpvars, but this leads to incredibly long parsing times.

To replicate:

a = sdpvar(1,1000);
x = sdpvar(1,1000); y = sdpvar(1,1000);
tic; a.*(x+y);toc

Already takes 10 seconds for me!

In this case x and y would be decision variables, and a would be the quantity that is updated after every iteration, i.e. a constant in the optimization.

Any idea what causes this and if there is a fix? It seems to get stuck in some big loop in the times.m function.

Erik van den Eshof

Jul 29, 2025, 10:15:56 AM
to YALMIP
Strange update after more testing:

Running this code:

a = sdpvar(1,1000);
x = sdpvar(1,1000); y = sdpvar(1,1000);
tic; a.*(x+y);toc

Running it right after yalmip('clear'); is super quick (<0.2 s).
But when I run it after parsing my model, it takes over 10 seconds, even though the variables a, x, y are completely new and not included in my model.

It's like the more statements of this form I run, the slower it becomes.

Is there any possible explanation for this?

Thank you.

On Tuesday, July 29, 2025 at 16:07:38 UTC+2, Erik van den Eshof wrote:

Erik van den Eshof

Jul 29, 2025, 10:18:45 AM
to YALMIP
Thrown into a for-loop to replicate; you can see it takes longer each time:

yalmip('clear');
for i = 1:10

a = sdpvar(1,1000);
x = sdpvar(1,1000); y = sdpvar(1,1000);
tic; a.*(x+y);toc
end

On Tuesday, July 29, 2025 at 16:15:56 UTC+2, Erik van den Eshof wrote:

Johan Löfberg

Jul 29, 2025, 10:46:49 AM
to YALMIP

Yes, it is expected. The more nonlinear variables YALMIP is keeping track of, the more time it takes to create new ones, as it has to check whether each variable is already defined and has an internal index.

Erik van den Eshof

Jul 29, 2025, 10:55:26 AM
to YALMIP
Ah I see.
But this does cause an issue for large models with sequential convex programming.

Does it really need to keep track of nonlinear variables, when these variables will be constants in the optimizer function?
Eventually the final model is completely convex and linear:

optimizer(constraints,objective,options,a,[x;y])

Are there ways to speed it up? Right now it would take hours to parse, whereas with the parameters defined directly as constants it takes less than 10 seconds to parse my model.
But then the issue is that I have to re-parse for every iteration (solving an iteration takes less than 1 second).

On Tuesday, July 29, 2025 at 16:46:49 UTC+2, Johan Löfberg wrote:

Johan Löfberg

Jul 29, 2025, 10:59:47 AM
to YALMIP

How large is your model if it takes hours to parse? Sounds like something is done inefficiently or incorrectly.

Yes, optimizer is an add-on which operates on a predefined nonlinear model, no way around that.

Erik van den Eshof

Jul 29, 2025, 10:59:55 AM
to YALMIP
Best I could think of was exporting the solver input matrices and manually overwriting them.

I did this for a model in the past and it worked really well. But this becomes a pain when the model is more complex. :(

It would be great if yalmip had a better solution for this!



On Tuesday, July 29, 2025 at 16:55:26 UTC+2, Erik van den Eshof wrote:

Erik van den Eshof

Jul 29, 2025, 11:03:50 AM
to YALMIP
Size is around 20000 variables (excluding constants I want to update with optimizer.m).
It's convex, but including the constants I want to update as sdpvars makes it highly nonlinear.

As you say, the more nonlinear variables you have, the longer it takes.
So parsing takes less than 10 seconds if I define the constants numerically, but hours if I define them symbolically for use with optimizer.m.

On Tuesday, July 29, 2025 at 16:59:47 UTC+2, Johan Löfberg wrote:

Johan Löfberg

Jul 29, 2025, 11:03:53 AM
to YALMIP
Post your code and I can see if you are doing something that can be done smarter.

Johan Löfberg

Jul 29, 2025, 11:06:59 AM
to YALMIP
A super common mistake is to use x*boringfunction(y), where y holds parameters. This should simply be done as x*z, where z is a parameter and you send the value boringfunction(ydata).
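
For illustration, a minimal sketch of that pattern (boringfunction here is a hypothetical stand-in for some numeric parameter map, and quadprog is assumed to be available as the solver):

```matlab
% Hypothetical stand-in for the expensive parameter map:
boringfunction = @(y) sin(y);

x = sdpvar(5,1);
z = sdpvar(5,1);   % parameter: the model stays affine in x and z

Constraints = [x >= z];
Objective   = x'*x;
P = optimizer(Constraints, Objective, ...
              sdpsettings('solver','quadprog','verbose',0), z, x);

% Evaluate the function numerically and send the result as the parameter value:
ydata = randn(5,1);
xopt  = P(boringfunction(ydata));
```

The symbolic model never sees boringfunction; only its numeric output enters through the parameter z.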

Erik van den Eshof

Jul 29, 2025, 11:29:35 AM
to YALMIP
Here's a section of the model that should be easy to understand. This already takes ages to compute, so you can imagine it will be even worse when kappa is used for further computations in the model. n0, dn0, ddn0 are the variables that are passed as constants to the optimizer.m function. The model itself is convex, so if these are given as constants, the model computes instantly. (You can try this yourself by setting them as constants instead of sdpvar.)

yalmip('clear');
N = 1000;
% dummy values:
dxref = ones(1,2*N-1);
dyref = ones(1,2*N-1);
ddxref = ones(1,2*N-1);
ddyref = ones(1,2*N-1);
xn = ones(1,2*N-1);
yn = ones(1,2*N-1);
dxn = ones(1,2*N-1);
dyn = ones(1,2*N-1);
ddxn = ones(1,2*N-1);
ddyn = ones(1,2*N-1);
tic;
n = sdpvar(1,2*N-1);s.n = 5;
dn = sdpvar(1,2*N-1);s.dn = 0.1;
ddn = sdpvar(1,2*N-1);s.ddn = 0.005;
n0 = sdpvar(1,2*N-1);%ones(1,2*N-1);%
dn0 = sdpvar(1,2*N-1);%ones(1,2*N-1);% <- if you set these as constants, it finishes very quickly
ddn0 = sdpvar(1,2*N-1);%ones(1,2*N-1);%
dy = dyref + s.n.*n.*dyn + s.dn.*dn.*yn;
dx = dxref + s.n.*n.*dxn + s.dn.*dn.*xn;
dy0 = dyref + n0.*dyn + dn0.*yn;
dx0 = dxref + n0.*dxn + dn0.*xn;
ddx = ddxref+s.ddn.*ddn.*xn+2*s.dn.*dn.*dxn+s.n.*n.*ddxn;
ddy = ddyref+s.ddn.*ddn.*yn+2*s.dn.*dn.*dyn+s.n.*n.*ddyn;
ddx0 = ddxref+ddn0.*xn+2*dn0.*dxn+n0.*ddxn;
ddy0 = ddyref+ddn0.*yn+2*dn0.*dyn+n0.*ddyn;
a = (ddy0.*dx0 - ddx0.*dy0)./ (dx0.^2 + dy0.^2).^(3/2);
b = (dx0)./ (dx0.^2 + dy0.^2).^(3/2);
c = -(dy0)./ (dx0.^2 + dy0.^2).^(3/2);
d = ( ddy0.*(dx0.^2+dy0.^2)-3*dx0.*(ddy0.*dx0-ddx0.*dy0) ) ./ (dx0.^2+dy0.^2).^(5/2) ;
e = ( -ddx0.*(dx0.^2+dy0.^2)-3*dy0.*(ddy0.*dx0-ddx0.*dy0) ) ./ (dx0.^2+dy0.^2).^(5/2) ;
kappa = a + b.*(ddy-ddy0) + c.*(ddx-ddx0) + d.*(dx-dx0) + e.*(dy-dy0);
toc

On Tuesday, July 29, 2025 at 17:06:59 UTC+2, Johan Löfberg wrote:

Johan Löfberg

Jul 29, 2025, 11:37:10 AM
to YALMIP
So what are the parameters and what are the decision variables? And what is the objective and what are the constraints?

For instance, if everything with 0 is a parameter, this


a = (ddy0.*dx0 - ddx0.*dy0)./ (dx0.^2 + dy0.^2).^(3/2);

is a waste of symbolic manipulations and overhead. You should simply introduce a as a parameter, and then compute that expression numerically and send it as the parameter value.

Erik van den Eshof

Jul 29, 2025, 11:48:16 AM
to YALMIP
My decision variables would be n, dn, ddn. Different constraints are imposed through kappa.
n0,dn0,ddn0 (and everything that follows from them) would indeed be parameters.

Defining things like a-e directly as symbolic parameters is something I tried, and it did make things slightly faster, but still not at a manageable level (on top of the extra inconvenience it causes ;) )

Here's what that would look like, it still takes me almost a minute to run this:

yalmip('clear');
N = 1000;
% dummy values:
dxref = ones(1,2*N-1);
dyref = ones(1,2*N-1);
ddxref = ones(1,2*N-1);
ddyref = ones(1,2*N-1);
xn = ones(1,2*N-1);
yn = ones(1,2*N-1);
dxn = ones(1,2*N-1);
dyn = ones(1,2*N-1);
ddxn = ones(1,2*N-1);
ddyn = ones(1,2*N-1);
tic;
n = sdpvar(1,2*N-1);s.n = 5;
dn = sdpvar(1,2*N-1);s.dn = 0.1;
ddn = sdpvar(1,2*N-1);s.ddn = 0.005;
%n0 = sdpvar(1,2*N-1);%ones(1,2*N-1);%
%dn0 = sdpvar(1,2*N-1);%ones(1,2*N-1);% <- if you set these as constants, it finishes very quickly
%ddn0 = sdpvar(1,2*N-1);%ones(1,2*N-1);%
dy = dyref + s.n.*n.*dyn + s.dn.*dn.*yn;
dx = dxref + s.n.*n.*dxn + s.dn.*dn.*xn;
dy0 = sdpvar(1,2*N-1);%dyref + n0.*dyn + dn0.*yn;
dx0 = sdpvar(1,2*N-1);%dxref + n0.*dxn + dn0.*xn;
ddx = ddxref+s.ddn.*ddn.*xn+2*s.dn.*dn.*dxn+s.n.*n.*ddxn;
ddy = ddyref+s.ddn.*ddn.*yn+2*s.dn.*dn.*dyn+s.n.*n.*ddyn;
ddx0 = sdpvar(1,2*N-1);%ddxref+ddn0.*xn+2*dn0.*dxn+n0.*ddxn;
ddy0 = sdpvar(1,2*N-1);%ddyref+ddn0.*yn+2*dn0.*dyn+n0.*ddyn;
a = sdpvar(1,2*N-1);%(ddy0.*dx0 - ddx0.*dy0)./ (dx0.^2 + dy0.^2).^(3/2);
b = sdpvar(1,2*N-1);%(dx0)./ (dx0.^2 + dy0.^2).^(3/2);
c = sdpvar(1,2*N-1);%-(dy0)./ (dx0.^2 + dy0.^2).^(3/2);
d = sdpvar(1,2*N-1);%( ddy0.*(dx0.^2+dy0.^2)-3*dx0.*(ddy0.*dx0-ddx0.*dy0) ) ./ (dx0.^2+dy0.^2).^(5/2) ;
e = sdpvar(1,2*N-1);%( -ddx0.*(dx0.^2+dy0.^2)-3*dy0.*(ddy0.*dx0-ddx0.*dy0) ) ./ (dx0.^2+dy0.^2).^(5/2) ;
kappa = a + b.*(ddy-ddy0) + c.*(ddx-ddx0) + d.*(dx-dx0) + e.*(dy-dy0);
toc

On Tuesday, July 29, 2025 at 17:37:10 UTC+2, Johan Löfberg wrote:

Johan Löfberg

Jul 29, 2025, 12:00:37 PM
to YALMIP
Introducing new variables ddy_ddy0 etc. and adding constraints ddy_ddy0 == ddy - ddy0 will likely remove almost all overhead, as that bilinear product otherwise introduces a lot of monomials.
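
A sketch of how this could look for the kappa expression above (with M = 2*N-1; the parameters a..e, dx0, dy0, ddx0, ddy0 are plain sdpvars whose numeric values would be sent through optimizer, and dx, dy, ddx, ddy here stand in for the affine expressions from the posted snippet):

```matlab
M = 1999;   % 2*N-1 for N = 1000

% Stand-ins for the affine decision-variable expressions from the snippet:
dx = sdpvar(1,M); dy = sdpvar(1,M); ddx = sdpvar(1,M); ddy = sdpvar(1,M);

% Parameters (numeric values computed outside and passed via optimizer):
a = sdpvar(1,M); b = sdpvar(1,M); c = sdpvar(1,M);
d = sdpvar(1,M); e = sdpvar(1,M);
dx0 = sdpvar(1,M); dy0 = sdpvar(1,M); ddx0 = sdpvar(1,M); ddy0 = sdpvar(1,M);

% Auxiliary variables for the differences, linked by equality constraints,
% so every product below is a plain sdpvar.*sdpvar (the fast case):
dxe = sdpvar(1,M);  dye = sdpvar(1,M);
ddxe = sdpvar(1,M); ddye = sdpvar(1,M);
Link = [dxe == dx - dx0, dye == dy - dy0, ...
        ddxe == ddx - ddx0, ddye == ddy - ddy0];

kappa = a + b.*ddye + c.*ddxe + d.*dxe + e.*dye;
```

The Link constraints would be added to the model's constraint list; the products in kappa never see the affine combinations, only fresh sdpvars.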

Erik van den Eshof

Jul 29, 2025, 1:07:56 PM
to YALMIP
I see, but then the optimization structure is modified just to simplify symbolic operations, and you still keep some bilinear products (e.g. b.*ddy_ddy0).

It would be really neat though for sequential convex programming and other repetitive algorithms if parameters could be symbolically defined (with something other than sdpvar), and if this could then avoid the overhead of tracking nonlinear variables (not sure if this is possible or why this overhead is needed). :)

Thanks anyway for your time and suggestions!

On Tuesday, July 29, 2025 at 18:00:37 UTC+2, Johan Löfberg wrote:

Johan Löfberg

Jul 29, 2025, 1:19:37 PM
to YALMIP
sdpvar.*sdpvar is several orders of magnitude faster than sdpvar.*affineoperator(sdpvar), as there are internal short-cuts for exactly that case:

>> tic; sdpvar(1000,1).*(2*sdpvar(1000,1)+3*sdpvar(1000,1)); toc
Elapsed time is 7.421675 seconds.
>> tic; sdpvar(1000,1).*sdpvar(1000,1); toc
Elapsed time is 0.027322 seconds.