Dear Johan,
I am trying to solve a nonlinear MPC problem. I am using the implicit form mentioned in the tutorial. The only difference is that the control is a summation of two quantities/functions:
u_k = g((y_k)_k) + eta_k.
The first term is a fixed function of the outputs/measurements (y_k)_k. The second term, eta_k, is a decision variable along with the state (x_k)_k, so the dynamics take the form:
x_{k+1} = x_k + g((y_k)_k) + eta_k.
The underlying optimization problem can be written as:
\minimize_{x(0), (eta_k)_k} \sum_{k=0}^{T-1} c(x_{k+1}, x_k)
subject to: x_{k+1} = x_k + g((y_k)_k) + eta_k,
x_k \in M for every k.
Here the objective function c is a nonlinear function.
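Concretely, the cost I use in the snippet below is (with x_k < 0 enforced by the state bounds):

c(x_{k+1}, x_k) = x_{k+1}/x_k + \log(-x_k) - \log(-x_{k+1}) - 1,

which, if I am not mistaken, equals r - \log r - 1 with r = x_{k+1}/x_k > 0, so it is nonnegative and vanishes when x_{k+1} = x_k.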
The problem I am facing is how to write these dynamics as a constraint (similar to what you have done); the only caveat is the extra term g((y_k)_k), which takes its values from a pre-defined array (for example, (y_k)_k may be a set of observations). I have also attached a screenshot of the optimization problem for your reference. A snippet of the code is given below:
u = sdpvar(repmat(nu,1,N), repmat(1,1,N));      % eta_k, k = 1..N
x = sdpvar(repmat(nx,1,N+1), repmat(1,1,N+1));  % x_k, k = 1..N+1
constraints = [xmin <= x{1} <= xmax];
objective = 0;
alpha = 1;
for k = 1:N
    objective = objective + x{k+1}/x{k} + log(-x{k}) - log(-x{k+1}) - 1;
    constraints = [constraints, x{k+1} == x{k} + u{k} + alpha*data{k+1}];  % data{k+1} plays the role of g((y_k)_k)
    constraints = [constraints, umin <= u{k} <= umax, xmin <= x{k+1} <= xmax];
end
%objective = objective + (x{N+1}-xr)'*QN*(x{N+1}-xr);
%options = sdpsettings('solver', 'osqp');
options = sdpsettings('solver', 'bmibnb', 'verbose', 2);
controller = optimizer(constraints, objective, options, x{1}, [u{:}]);
Here data is a 1D array.
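In case it clarifies what I am after, here is a variant I have been considering, where the data sequence enters optimizer as a parameter instead of being hard-coded into the constraints. The dimensions N, nx, nu and the bounds below are toy values, and d is a placeholder sdpvar standing in for the g((y_k)_k) term; I am not sure this is the intended usage:

N = 10; nx = 1; nu = 1;
alpha = 1;
umin = -1; umax = 1;
xmin = -10; xmax = -0.1;   % keep x negative so log(-x) is defined

u = sdpvar(repmat(nu,1,N), repmat(1,1,N));      % eta_k
x = sdpvar(repmat(nx,1,N+1), repmat(1,1,N+1));  % x_k
d = sdpvar(repmat(nx,1,N), repmat(1,1,N));      % placeholder for g((y_k)_k)

constraints = [xmin <= x{1} <= xmax];
objective = 0;
for k = 1:N
    objective = objective + x{k+1}/x{k} + log(-x{k}) - log(-x{k+1}) - 1;
    constraints = [constraints, x{k+1} == x{k} + u{k} + alpha*d{k}];
    constraints = [constraints, umin <= u{k} <= umax, xmin <= x{k+1} <= xmax];
end

options = sdpsettings('solver', 'bmibnb', 'verbose', 2);
% Both the initial state and the data sequence are now inputs
controller = optimizer(constraints, objective, options, {x{1}, [d{:}]}, [u{:}]);

% At run time, evaluate with the measured/pre-defined array:
% uopt = controller({x0, data_vector});

Would this be the right way to feed the pre-defined array in, or is it better to keep the data baked into the constraints as in the snippet above?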
Any suggestions, references, or material would be helpful.