Dear Prof, I'm new to YALMIP and am trying to simulate a few examples of open-loop min-max MPC. I have read Prof. Johan Löfberg's 2003 article on approximations of closed-loop minimax MPC and also went through the example in YALMIP:
Model predictive control - robust solutions - YALMIP.
This example differs from the one given in YALMIP, which constrains the output for regulation purposes. The one in the article is a plain state- and input-constrained MPC with a bounded disturbance: -10 <= x_k <= 10, -3 <= u_k <= 3, -1.5 <= w_k <= 1.5; the system matrices are A = [1 1; 0 1], B = [0; 1], E = [1 0; 0 1]; and the cost is an infinity-norm cost with P = Q = I (2x2) and R = 1.8.
I have tried to solve the open-loop problem; the reproducible code is attached and posted below. I'm not sure whether the code is correct:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
yalmip('clear')
clear all
%% system
A = [1 1;0 1];
B = [0;1]; E = eye(2);
Q = eye(2); P = Q; R = 1.8; % article uses P = Q = identity
%% initialization
nu = 1; % number of inputs
nx = 2; % number of states
N = 4; % Prediction horizon
u = sdpvar(nu,N);
x0 = sdpvar(nx,1);
x(:,1) = x0;
w = sdpvar(2, N);
%% constraints
umin = -3; umax = 3;
xmin = -10; xmax = 10;
wmin = -1.5; wmax = 1.5;
%% loops
objective = 0;
constraints = [];
for k = 1 : N
x(:,k+1) = A*x(:,k) + B*u(k) + E*w(:,k);
objective = objective + norm(Q*x(:,k),Inf) + norm(R*u(:,k),Inf);
constraints = [constraints, umin <= u(k) <= umax]; % input constraints
constraints = [constraints, xmin <= x(:,k) <= xmax];
end
objective = objective + norm(P*x(:,N+1),Inf); % terminal cost (Inf-norm was missing)
constraints = [constraints, xmin <= x(:,N+1) <= xmax]; % terminal state constraint
G = [uncertain(w), wmin <= w <= wmax];
ops = sdpsettings('verbose',1,'solver','gurobi');
controller = optimizer([constraints,G], objective, ops, x0, u); % w is declared uncertain, so the only parameter is x0
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Moreover, is it possible to see how the receding-horizon min-max control looks over the feasible region (see Figure 5(a)-(b) in the article), i.e. over a mesh of feasible initial conditions?
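To make the question concrete, here is a rough sketch of what I had in mind, assuming the `controller` object defined above (which takes x0 and returns the input sequence), and that the second output of the `{}` call is YALMIP's error code (0 meaning successfully solved):

```matlab
% Sketch: evaluate the min-max controller over a mesh of initial states
% and record the first input move where the problem is feasible.
x1g = linspace(-10, 10, 41);
x2g = linspace(-10, 10, 41);
U = nan(numel(x2g), numel(x1g));          % NaN marks infeasible points
for i = 1:numel(x1g)
    for j = 1:numel(x2g)
        [usol, flag] = controller{[x1g(i); x2g(j)]};
        if flag == 0                       % 0 = problem solved successfully
            U(j, i) = usol(1);             % first move of the open-loop sequence
        end
    end
end
surf(x1g, x2g, U)
xlabel('x_1'), ylabel('x_2'), zlabel('u_0')
```

Would something like this be the right way to reproduce a plot such as Figure 5, or is there a built-in way to do it?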
I apologize for the long question; I hope somebody can help me with these queries.
Many thanks