Here's a section of the model that should be easy to understand. Building it already takes ages, so you can imagine it gets even worse when kappa is used in further computations in the model.
n0, dn0 and ddn0 are the variables that are passed as parameters to the optimizer.m function. The model itself is convex, so if these are given as constants, the model is built instantly. (You can try this yourself by setting them to constants instead of sdpvar.)
yalmip('clear');
N = 1000;
% dummy values:
dxref = ones(1,2*N-1);
dyref = ones(1,2*N-1);
ddxref = ones(1,2*N-1);
ddyref = ones(1,2*N-1);
xn = ones(1,2*N-1);
yn = ones(1,2*N-1);
dxn = ones(1,2*N-1);
dyn = ones(1,2*N-1);
ddxn = ones(1,2*N-1);
ddyn = ones(1,2*N-1);
tic;
n = sdpvar(1,2*N-1);   s.n = 5;     % decision variables, each with a scaling factor in s
dn = sdpvar(1,2*N-1);  s.dn = 0.1;
ddn = sdpvar(1,2*N-1); s.ddn = 0.005;
n0 = sdpvar(1,2*N-1);%ones(1,2*N-1);%
dn0 = sdpvar(1,2*N-1);%ones(1,2*N-1);% <- if you set these as constants, it finishes very quickly
ddn0 = sdpvar(1,2*N-1);%ones(1,2*N-1);%
% first and second derivatives of the path, both at the scaled decision
% variables (dx,dy,ddx,ddy) and at the parameter point (dx0,dy0,ddx0,ddy0):
dy = dyref + s.n.*n.*dyn + s.dn.*dn.*yn;
dx = dxref + s.n.*n.*dxn + s.dn.*dn.*xn;
dy0 = dyref + n0.*dyn + dn0.*yn;
dx0 = dxref + n0.*dxn + dn0.*xn;
ddx = ddxref + s.ddn.*ddn.*xn + 2*s.dn.*dn.*dxn + s.n.*n.*ddxn;
ddy = ddyref + s.ddn.*ddn.*yn + 2*s.dn.*dn.*dyn + s.n.*n.*ddyn;
ddx0 = ddxref + ddn0.*xn + 2*dn0.*dxn + n0.*ddxn;
ddy0 = ddyref + ddn0.*yn + 2*dn0.*dyn + n0.*ddyn;
% first-order Taylor expansion of the curvature
%   kappa = (ddy.*dx - ddx.*dy)./(dx.^2 + dy.^2).^(3/2)
% around the parameter point (dx0,dy0,ddx0,ddy0):
a = (ddy0.*dx0 - ddx0.*dy0)./ (dx0.^2 + dy0.^2).^(3/2); % kappa at the expansion point
b = (dx0)./ (dx0.^2 + dy0.^2).^(3/2);                   % dkappa/dddy
c = -(dy0)./ (dx0.^2 + dy0.^2).^(3/2);                  % dkappa/dddx
d = ( ddy0.*(dx0.^2+dy0.^2)-3*dx0.*(ddy0.*dx0-ddx0.*dy0) ) ./ (dx0.^2+dy0.^2).^(5/2) ;  % dkappa/ddx
e = ( -ddx0.*(dx0.^2+dy0.^2)-3*dy0.*(ddy0.*dx0-ddx0.*dy0) ) ./ (dx0.^2+dy0.^2).^(5/2) ; % dkappa/ddy
kappa = a + b.*(ddy-ddy0) + c.*(ddx-ddx0) + d.*(dx-dx0) + e.*(dy-dy0);
toc
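As a sanity check on the expansion itself (independent of YALMIP), a–e are exactly the value and partial derivatives of kappa = (ddy*dx - ddx*dy)/(dx^2 + dy^2)^(3/2) at the parameter point, so the linearized kappa should match the true curvature up to second order in the perturbation. A quick numerical check of that, written in Python/NumPy purely for illustration (the function and variable names here are my own, not part of the model):

```python
import numpy as np

def curvature(dx, dy, ddx, ddy):
    # exact curvature kappa = (ddy*dx - ddx*dy) / (dx^2 + dy^2)^(3/2)
    return (ddy * dx - ddx * dy) / (dx**2 + dy**2) ** 1.5

rng = np.random.default_rng(0)
dx0, dy0, ddx0, ddy0 = rng.uniform(0.5, 1.5, size=4)  # arbitrary expansion point

# coefficients, same formulas as in the model above
denom32 = (dx0**2 + dy0**2) ** 1.5
denom52 = (dx0**2 + dy0**2) ** 2.5
a = (ddy0 * dx0 - ddx0 * dy0) / denom32  # kappa at the expansion point
b = dx0 / denom32                        # dkappa/dddy
c = -dy0 / denom32                       # dkappa/dddx
d = (ddy0 * (dx0**2 + dy0**2) - 3 * dx0 * (ddy0 * dx0 - ddx0 * dy0)) / denom52   # dkappa/ddx
e = (-ddx0 * (dx0**2 + dy0**2) - 3 * dy0 * (ddy0 * dx0 - ddx0 * dy0)) / denom52  # dkappa/ddy

# small perturbation away from the expansion point
h = 1e-4
dx, dy, ddx, ddy = dx0 + h, dy0 - h, ddx0 + h, ddy0 - h

kappa_lin = a + b * (ddy - ddy0) + c * (ddx - ddx0) + d * (dx - dx0) + e * (dy - dy0)
kappa_exact = curvature(dx, dy, ddx, ddy)
print(abs(kappa_lin - kappa_exact))  # remainder is second order in h, i.e. tiny
```

The error shrinks quadratically with the perturbation size, as expected for a first-order expansion, which confirms the coefficients are consistent with the curvature formula.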