How to pass a parameter in the implicit form?


Souvik Das

Jan 5, 2022, 6:19:08 AM
to YALMIP
Dear Johan,
I am trying to solve a nonlinear MPC problem. I am using the implicit form mentioned in the tutorial. The only difference is that the control is a summation of two quantities/functions:
u_k = g((y_k)_k) + eta_k.
The first term is a fixed function and depends on the outputs/measurements (y_k)_k. The second quantity eta_k is a decision variable along with the state (x_k)_k. The dynamics are of the form:
x_{k+1} = x_k + g((y_k)_k) + eta_k.

The underlying optimization problem (screenshot attached) can be written as:

\minimize_{x(0), (eta_k)_k}   \sum_{k=0}^{T-1} c(x(k+1), x(k))
subject to:                   x(k+1) = x(k) + g((y_k)_k) + eta_k,
                              x(k) \in M for every k.
Here the objective function c is a nonlinear function.

The problem I am facing is how to write the dynamics as a constraint (similar to what you have done), the only caveat being that there is an extra term g((y_k)_k) that takes its value from a pre-defined array (for example, (y_k)_k may be a set of observations).

I have also attached a screenshot of the optimization problem for your reference. A snippet of the code is given below:

u = sdpvar(repmat(nu,1,N), repmat(1,1,N));
x = sdpvar(repmat(nx,1,N+1), repmat(1,1,N+1));
constraints = [xmin <= x{1} <= xmax];
objective = 0;
alpha = 1;
for k = 1 : N
    objective = objective + (x{k+1}/x{k}) + log(-x{k}) - log(-x{k+1}) - 1;
    constraints = [constraints, x{k+1} == x{k} + u{k} + alpha*data(k+1)];
    constraints = [constraints, umin <= u{k} <= umax, xmin <= x{k+1} <= xmax];
end
%objective = objective + (x{N+1}-xr)'*QN*(x{N+1}-xr);
%options = sdpsettings('solver', 'osqp');
options = sdpsettings('solver','bmibnb','verbose',2);
controller = optimizer(constraints, objective, options, x{1}, [u{:}]);

Here data is a 1D array.
Any suggestions/reference/material will be helpful.
[Attachment: Screenshot (39).png]

Johan Löfberg

Jan 5, 2022, 6:33:40 AM
to YALMIP
I guess you are saying that you want to add the data conveniently as a parameter to the list of parameters, i.e. {x{1},data{:}}?

You are very optimistic in using BMIBNB in an MPC setting (in particular for this nasty model with singularity-inducing divisions and logarithms). I hope you have first simulated this and actually confirmed that it works, i.e. never use an optimizer before you have made sure everything runs nicely using optimize; never develop and debug using optimizer.
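
As an illustration only (this is a sketch, not code from the thread), the debug-first workflow could look like the following, reusing the variable names from the snippet above and an assumed example initial state:

% Sketch: debug one fixed instance with optimize before wrapping it in an optimizer object
x0 = -2.5;                                          % assumed example initial state
diagnostics = optimize([constraints, x{1} == x0], objective, options);
if diagnostics.problem ~= 0
    disp(diagnostics.info)                          % inspect the solver feedback before automating anything
end
value([u{:}])                                       % computed input sequence for this single instance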

Johan Löfberg

Jan 5, 2022, 6:34:13 AM
to YALMIP
I meant {x{1}, data}.
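
For illustration (a sketch, not code from the thread), treating the data window as a parameter could look roughly like this, where d is a hypothetical sdpvar holding the N data values seen by the prediction model:

% Sketch: make the data window an optimizer parameter instead of a fixed array
d = sdpvar(N, 1);                                    % hypothetical parameter for the data window
constraints = [xmin <= x{1} <= xmax];
objective = 0;
for k = 1:N
    objective = objective + (x{k+1}/x{k}) + log(-x{k}) - log(-x{k+1}) - 1;
    constraints = [constraints, x{k+1} == x{k} + u{k} + alpha*d(k)];
    constraints = [constraints, umin <= u{k} <= umax, xmin <= x{k+1} <= xmax];
end
% Both the initial state and the data window are now inputs to the controller:
controller = optimizer(constraints, objective, options, {x{1}, d}, [u{:}]);
% The controller is later called with the current state and the relevant slice, e.g.
% U = controller{{x0, data(i:i+N-1)'}};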

Souvik Das

Jan 6, 2022, 1:41:30 PM
to YALMIP
I think there is some confusion. Let me share the code:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
addpath(genpath('C:\Users\dsouv\OneDrive\Desktop\YALMIP-master'))

yalmip('clear')
clear all

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Generating the exponentially distributed samples
mu = 3;
sz1 = 1;
sz2 = 50;
data = exprnd(mu,sz1,sz2);

%'data' is being generated and is known.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

nx = 1;
nu = 1;
% Constraints
%u0 = 10.5916;
u0 = 0;
umin = -Inf;   
umax = Inf;

xmin = [-Inf];
xmax = [-0.01];


% Objective function
% Initial and reference states
initial = -2.5;
x0 = [initial];
xr = [0];

% Prediction horizon
N = 5;

% Define problem

u = sdpvar(repmat(nu,1,N), repmat(1,1,N));
x = sdpvar(repmat(nx,1,N+1), repmat(1,1,N+1));
constraints = [xmin <= x{1} <= xmax];
objective = 0;
alpha = 1;
for k = 1 : N
    objective = objective + (x{k+1}/x{k}) + log(-x{k}) - log(-x{k+1}) - 1;
    constraints = [constraints, x{k+1} == x{k} + u{k} + alpha*data(k+1)];
    constraints = [constraints, umin <= u{k} <= umax, xmin <= x{k+1} <= xmax];
end
%objective = objective + (x{N+1}-xr)'*QN*(x{N+1}-xr);
%options = sdpsettings('solver', 'osqp');
options = sdpsettings('solver','ipopt','verbose',2);

controller = optimizer(constraints, objective, options, x{1}, [u{:}])

%simulations
nsim = 30;
sdim = nx;
cdim = nu;
% state = zeros(sdim,nsim);
% control = zeros(cdim,nsim);
state = zeros([nx,nsim]);
control = zeros([nu,nsim]);

cost = zeros([1,nsim+1]);
for i = 1 : nsim
    U = controller{x0};
    % W (the data-window length) is assumed to be defined earlier in the script
    x0 = x0 + U(:,1) + alpha*sum(data(i+1:i+1+W));
    state(i) = x0;
    control(i) = U(:,1);
    fprintf('state\n')
    disp(x0)
    fprintf('control\n')
    disp(U(:,1))
    cost(i+1) = cost(i) + ((state(i) + control(i) +alpha*sum(data(i+1:i+1+W)))/state(i)) + log(-state(i)) - log(-(state(i) + control(i) + alpha*sum(data(i+1:i+1+W))))-1;
end


Yes, I have simulated this, but I am getting total nonsense. I am using the IPOPT solver. After two or three iterations the output I am getting is NaN.
I want to know if that is the correct way to write the constraint corresponding to the dynamics. Moreover, I would be grateful if you could suggest any other way to simulate this.
Thanks.

Johan Löfberg

Jan 6, 2022, 7:16:23 PM
to YALMIP
You would have to define what total nonsense means. You are using a local nonlinear solver on a nasty nonlinear nonconvex problem, so all bets are off, really.

However, your model appears inconsistent. You use a fixed data sequence in your prediction model, i.e. at every sample when you compute the input, the data used in the model does not change; but in your simulation you are evolving the data, and thus changing the model compared to the model you defined in the optimizer. Hence, it really sounds like data should be a parameter, and that you thus should compute the optimal input for {x(i), datavalues(i:i+W)}. As you do it now, you compute the solution for x{i} and datavalues(1:W) (except that, confusingly, in the model you use data while in the simulation you use sum(data)).
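
To make that concrete, a sketch of the simulation side under such a parameterization (illustration only; it reuses the hypothetical parameter d from the earlier sketch, and the exact choice of window and indexing is up to you):

% Sketch: simulate with the same data window that the prediction model sees
for i = 1:nsim
    window = data(i:i+N-1)';                         % slice passed to the prediction model
    U = controller{{x0, window}};                    % optimal input for {x(i), data window}
    x0 = x0 + U(:,1) + alpha*window(1);              % plant update uses the matching data term
    state(i) = x0;
    control(i) = U(:,1);
end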

Souvik Das

Jan 10, 2022, 9:27:47 AM
to YALMIP
Dear Johan,

Thank you for pointing out the inconsistency. I have fixed that by sliding the window when defining the constraint functions and the objective function. But I think the cost is a convex function of the control u(k), isn't it?

On simulating, I have the following observations:
1. Sensitivity to the data: it is very sensitive to the data. For some sample paths of the random data, the computations and the simulation do not run and simply return NaN.
2. For those data sequences where the computations and the simulations are executed over the entire horizon, the state seems to diverge to -infinity, the control sequence hardly changes its value, and the cost increases and converges to a value (I am attaching a set of sample plots for your reference).
Theoretically, the state x(k) should converge to a finite value (at least).

-Souvik

Note: I have run the simulation for a horizon T = 100 just to give you an idea about the convergence. In practice I do not need such a large horizon.
[Attachments: states.jpg, cost.jpg, control.jpg]

Johan Löfberg

Jan 10, 2022, 9:41:38 AM
to YALMIP
The objective is not obviously convex, as it is a difference of two concave functions to begin with. It might be convex, but then you should formulate it in a form for which convexity can be derived by YALMIP, so that a convex exponential-cone representation or similar can be used. As it is formulated now, it is a typical case of a model where a numerical solver will struggle, since you have both singularity-inducing divisions and logarithms.
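
As an added note (not part of Johan's reply): with x_k, x_{k+1} < 0, the stage cost can be rewritten as

x_{k+1}/x_k + log(-x_k) - log(-x_{k+1}) - 1  =  r_k - log(r_k) - 1,   where r_k = x_{k+1}/x_k > 0.

The map r -> r - log(r) - 1 is convex for r > 0 (it is zero at r = 1 and positive elsewhere), but the ratio r_k is itself a nonconvex function of the decision variables x_k and x_{k+1}, so convexity of the composed objective does not follow automatically.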

Souvik Das

Jan 10, 2022, 3:07:10 PM
to YALMIP
Understood. I will try to reformulate the problem and let you know if the problem still persists. I just wanted to confirm again the way I have defined the dynamics.

Johan Löfberg

Jan 10, 2022, 3:36:40 PM
to YALMIP
You didn't supply any fixed code, so we cannot know if you fixed it.