Bilinear constraints and big M relaxation

Thomas D'hondt

Aug 14, 2015, 8:11:00 AM
to YALMIP
Hello Johan!

I'm currently working on an MPC problem where I need to control a vehicle so that it follows a velocity profile. To decrease fuel consumption, two models are considered: one where the injection is active and one where the fuel is cut off.
This leads me to solve an MIQP in which the optimizer has to choose, at each time step, whether or not to inject, taking into account both the fuel consumption and the error with respect to the trajectory.

Since I am linearising the car model around the current operating point at each time step, I pass new dynamics matrices to the optimizer object as parameters. However, this results in equality constraints that are treated as bilinear instead of just linear (similar to this: http://users.isy.liu.se/johanl/yalmip/pmwiki.php?n=Blog.Beta-version-of-a-more-general-optimizer). This is not a problem if I use only one model, but when I solve the MIQP I get warnings about a bad big-M relaxation because there are no bounds on those matrices (and the optimizer object runs into numerical errors after some iterations when I simulate the system).
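Roughly what I mean, as a stripped-down sketch (made-up dimensions, not my actual model, and assuming the beta optimizer from the blog post above): since A is only supplied when the object is called, the dynamics equality below is bilinear in (A, x0) at construction time.

nx = 2;
A  = sdpvar(nx,nx,'full');                 % dynamics matrix, supplied at call time
x0 = sdpvar(nx,1);
x1 = sdpvar(nx,1);
con = [x1 == A*x0, -10 <= [x0;x1] <= 10];  % equality is bilinear in A and x0
obj = x1'*x1;
ctrl = optimizer(con, obj, sdpsettings('solver','gurobi'), {x0,A}, x1);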

I've already done some research and I think a problem you solved earlier could also help me: https://groups.google.com/forum/#!topic/yalmip/TrEv-cNnlh8
I'm currently trying to adapt the new hull function you uploaded in that post so that it accepts more than one "fixed parameter". I have therefore replaced the 'sdpvar' check with a 'cell' check, and I then pass all the getvariables values to ParameterVariables:

if isa(varargin{end},'cell')
    % Last argument is a cell array of parameter sdpvars instead of a single sdpvar
    Parameter = varargin{end};
    ParameterVariables = getvariables([Parameter{:}]);
    varargin = {varargin{1:end-1}};
end

However, the constraints remain bilinear and, therefore, this check gets triggered:

elseif ~(islinear(varargin{i}))
         error('Hull can only be applied to linear and convex quadratic constraints');
end

Firstly, do you think this is the right way to solve my big-M relaxation problem? And secondly, do you have an idea for a more general hull.m that could accept my type of constraints?

Thanks in advance for your help! :)

Thomas

Johan Löfberg

Aug 14, 2015, 9:07:01 AM
to YALMIP
Can you mock up an MWE that reproduces your issues?

Thomas D'hondt

Aug 14, 2015, 9:21:56 AM
to YALMIP
Thanks for the fast response!

Give me 5 minutes and I'll try to cut as much as possible from the code :).

Thomas D'hondt

Aug 14, 2015, 10:11:05 AM
to YALMIP
Sorry, it took me a bit longer than I thought. I've also found something else that looks weird...

I took a look at the predicted motor rpm output ( y(1) ) returned by the optimizer and compared it to the same computation done manually.
The code should pause at time step 406 and show you this comparison. The results are quite different, which would mean the output equality constraint y = C*x + c is not satisfied.

After this pause, the problem becomes infeasible (YALMIP error 12), which I thought was due to numerical problems (see my first post).

To launch the code, just use MPC_MWE. 

Thanks for your help!
MWE.zip

Thomas D'hondt

Aug 14, 2015, 10:31:22 AM
to YALMIP
Small sign error on my side when I added the extra constraints for the big-M stuff.

Line 111 has a wrong sign:

con = [con, Hx*x{N+1} <= [ 15; 5; 200; 0] ];

Johan Löfberg

Aug 14, 2015, 11:27:54 AM
to YALMIP
Well, I wouldn't call that an MWE...

I would start by fixing all the warnings about unbounded variables in the big-M models. Those can easily lead to weird behaviour.

Johan Löfberg

Aug 14, 2015, 11:36:20 AM
to YALMIP
Another issue is that all the parametric variables for A, B, C are symmetric, but your data is not symmetric.

Johan Löfberg

Aug 14, 2015, 11:44:05 AM
to YALMIP
Sorry about the comment about the warnings, that was the original question...

Johan Löfberg

Aug 14, 2015, 11:54:17 AM
to YALMIP
Or maybe not. In this model, you do have to have bounds on A, B, C etc. in order for the big-M model to be derived.

Thomas D'hondt

Aug 14, 2015, 12:24:55 PM
to YALMIP
Thanks for the answer! :)

I totally forgot about the symmetric sdpvar matrices. What is the default behaviour of YALMIP when you put non-symmetric data into them anyway? Does it override the information? This might explain the difference between the predictions and the manual computations.

For the big-M relaxation, are those bounds really needed if the matrices are fixed for a given optimization problem? They won't change over the horizon. This is the reason I brought up the convex hull function in the first place, as I thought it might solve this issue.

I'm afraid I won't be able to work on this before Monday, as I'll be abroad this weekend. Thanks again for your help, I'm looking forward to implementing your fixes!

Johan Löfberg

Aug 14, 2015, 12:27:33 PM
to YALMIP
Don't know what happens when you give unsymmetric data to a symmetric parameter. Something bad...

Yes, bounds are always needed for everything in big-M models. YALMIP has to derive the big-M model before the data is given, and must thus be able to bound the bilinear products.
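For reference, a quick sketch of the symmetry issue:

nx = 3;
Asym  = sdpvar(nx,nx);          % a square sdpvar is symmetric by default
Afull = sdpvar(nx,nx,'full');   % 'full' gives all nx^2 independent entries
length(getvariables(Asym))      % 6  (= nx*(nx+1)/2)
length(getvariables(Afull))     % 9  (= nx^2)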

Thomas D'hondt

Aug 17, 2015, 9:22:43 AM
to YALMIP
Hello Johan!

Thanks again for the advice! Adding the 'full' flag to the sdpvar matrices indeed fixed the bad predictions and, subsequently, the infeasibilities midway through the simulation.

I've also added inequalities on the matrices by sampling the linearisation function over the entire operating range. However, I still get the same warning for the big-M relaxation. Here is a short summary of all "hard constraints" that are applied for this purpose:

u : upper and lower bound from 1 to N
y : upper and lower bound from 1 to N
x : upper and lower bound from 1 to N+1
A, B, C, ... : upper and lower bound, once
reference : upper and lower bound from 1 to N

I think this should cover all the symbolic variables used in the optimisation, or at least I can't find any others. Here is the related part of the code:

% Initialisation of the arrays of symbolic values
x    = sdpvar(repmat(nx,1,N+1),ones(1,N+1));
u    = sdpvar(repmat(nu,1,N),ones(1,N));
y    = sdpvar(repmat(ny,1,N),ones(1,N));
r    = sdpvar(N,1);

AoptIN = sdpvar(nx,nx,'full');
BoptIN = sdpvar(nx,nu,'full');
CoptIN = sdpvar(ny,nx,'full');
aoptIN = sdpvar(nx,1);
coptIN = sdpvar(ny,1);

AoptCO = sdpvar(nx,nx,'full');
BoptCO = sdpvar(nx,nu,'full');
CoptCO = sdpvar(ny,nx,'full');
aoptCO = sdpvar(nx,1);
coptCO = sdpvar(ny,1);

inj  = binvar(repmat(2,1,N),ones(1,N));
br   = binvar(repmat(2,1,N),ones(1,N));

con = set([]);
obj = 0;

mult = 4;
con = [con, expandBounds(bounds.inj.Amin,mult,'min') <= AoptIN <= expandBounds(bounds.inj.Amax,mult,'max')];
con = [con, expandBounds(bounds.inj.Bmin,mult,'min') <= BoptIN <= expandBounds(bounds.inj.Bmax,mult,'max')];
con = [con, expandBounds(bounds.inj.Cmin,mult,'min') <= CoptIN <= expandBounds(bounds.inj.Cmax,mult,'max')];
con = [con, expandBounds(bounds.inj.amin,mult,'min') <= aoptIN <= expandBounds(bounds.inj.amax,mult,'max')];
con = [con, expandBounds(bounds.inj.cmin,mult,'min') <= coptIN <= expandBounds(bounds.inj.cmax,mult,'max')];
con = [con, expandBounds(bounds.co.Amin,mult,'min')  <= AoptCO <= expandBounds(bounds.co.Amax,mult,'max')];
con = [con, expandBounds(bounds.co.Bmin,mult,'min')  <= BoptCO <= expandBounds(bounds.co.Bmax,mult,'max')];
con = [con, expandBounds(bounds.co.Cmin,mult,'min')  <= CoptCO <= expandBounds(bounds.co.Cmax,mult,'max')];
con = [con, expandBounds(bounds.co.amin,mult,'min')  <= aoptCO <= expandBounds(bounds.co.amax,mult,'max')];
con = [con, expandBounds(bounds.co.cmin,mult,'min')  <= coptCO <= expandBounds(bounds.co.cmax,mult,'max')];

con = [con, min(ref)*ones(N,1) <= r <= ones(N,1)*max(ref)];

for k = 1:N
    % Dynamics of the car
    ModelIN = [x{k+1} == AoptIN*x{k} + BoptIN*u{k} + aoptIN, y{k}   == CoptIN*x{k} + coptIN]; 
    ModelCO = [x{k+1} == AoptCO*x{k} + BoptCO*u{k} + aoptCO, y{k}   == CoptCO*x{k} + coptCO]; 
    con = [con, implies(inj{k}(1), ModelIN), implies(inj{k}(2), ModelCO)];    
    con = [con, iff(inj{k}(2),br{k}(1))];
    con = [con, sum(inj{k}) == 1];
    
    % State constraints
    con = [con, Hx*x{k} <= hx];
    % Output constraints
    con = [con, Hy*y{k} <= hy];
    % Input constraints
    con = [con, Hu*u{k} <= hu]; 
        
    % Input exclusion
    con = [con, implies(br{k}(1), u{k}(1) == 0)];
    con = [con, implies(br{k}(2), u{k}(2) == 0)];
    con = [con, sum(br{k}) == 1];
    
    % Objective function
    obj = obj + (x{k}(2)-r(k))'*Qtr*(x{k}(2)-r(k)) + u{k}'*R*u{k};
end

con = [con, Hx*x{k+1} <= hx];

% Optimizer object creation
options = sdpsettings('solver','gurobi');
ctrl = optimizer(con, obj, options, {x{1},r,AoptIN,BoptIN,aoptIN,CoptIN,coptIN,AoptCO,BoptCO,aoptCO,CoptCO,coptCO},{[u{:}],[x{:}],[inj{:}],y{1}(2),[y{:}]});



The H_i matrices are computed using kron(eye(ny), [1;-1]) and the h_i are 4x1 column vectors.
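Concretely, for ny = 2 this gives (spelled out for clarity):

Hy = kron(eye(2),[1;-1]);
% Hy = [ 1  0        so Hy*y <= hy encodes   y(1) <= hy(1)
%       -1  0                               -y(1) <= hy(2)
%        0  1                                y(2) <= hy(3)
%        0 -1 ]                             -y(2) <= hy(4)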


Johan Löfberg

Aug 17, 2015, 12:23:16 PM
to YALMIP
Don't use set, simply write [].

Can you post code which runs?

Thomas D'hondt

Aug 18, 2015, 3:47:23 AM
to YALMIP
Here is a simplified code: https://www.dropbox.com/s/x5nlblggjxkzvnx/Code.zip?dl=0

Thanks for your help :).

Thomas D'hondt

Aug 19, 2015, 6:13:27 AM
to YALMIP
Okay, this may be linked to the way I write the matrix inequalities...

I did some tests in MATLAB and it seems to interpret a<=b<=c as (a<=b)<=c, where the first parenthesis returns a matrix of 1s and 0s.
It then treats these boolean values as actual numbers and compares them with the second matrix...

Does YALMIP do the same thing inside its constraint definitions? I would probably need to write (a<=b) && (b<=c), but this doesn't seem to work with matrices. Another solution is to write the inequality for each component separately.
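A quick scalar check in plain MATLAB illustrates the pitfall:

a = 1; b = 5; c = 3;
a <= b <= c          % returns 1: (a <= b) is 1, and 1 <= 3 is true
(a <= b) & (b <= c)  % returns 0, which is the intended meaning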

Thomas D'hondt

Aug 19, 2015, 6:15:28 AM
to YALMIP
Or there is the single & which works element-wise with matrices.

Johan Löfberg

Aug 19, 2015, 4:37:18 PM
to YALMIP
a<=b<=c is interpreted in YALMIP as a list of constraints,

>> a = sdpvar(2,1);
>> b = sdpvar(2,1);
>> c = sdpvar(2,1);
>> [a <= b <= c]
++++++++++++++++++++++++++++++++++++++
|   ID|                    Constraint|
++++++++++++++++++++++++++++++++++++++
|   #1|   Element-wise inequality 4x1|
++++++++++++++++++++++++++++++++++++++

which is equivalent to

[a <= b, b <= c]

and

 (a <= b) & (b <= c)

and

 (a <= b) + (b <= c)



Johan Löfberg

Aug 19, 2015, 4:47:20 PM
to YALMIP
Just to make sure I understand: the issue now is simply understanding the reason for the warnings?

Johan Löfberg

Aug 19, 2015, 4:53:13 PM
to YALMIP
The following MWE shows that YALMIP doesn't propagate bounds for nonlinear expressions inside the logic operators:

>> sdpvar a x
>> binvar d
>> optimizer([-1 <= [x a] <= 1,implies(d, 1 == x*a)],x^2,[],a,x)
Warning: You have unbounded variables in IMPLIES leading to a lousy big-M relaxation.

I will have to see if this can be done easily and cheaply

Johan Löfberg

Aug 19, 2015, 4:55:10 PM
to YALMIP
And note that YALMIP currently uses the big-M constant 1e4, which will possibly be bad for your model, since your model has huge numbers (1e5 etc.). You should probably try to come up with better scaling.
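I.e. rescale the model so everything is around 1, along these lines (a rough sketch, the dimensions and scaling values are made up):

% Work with x_s = S*x so that the large state stays O(1)
S  = diag([1e-4 1]);                  % assumes the first state is ~1e4-1e5
As = S*A/S;  Bs = S*B;  as = S*a;     % x_s(k+1) = As*x_s(k) + Bs*u(k) + as
Cs = C/S;    cs = c;                  % y(k) = Cs*x_s(k) + cs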

Thomas D'hondt

Aug 27, 2015, 10:15:31 AM
to YALMIP
Sorry for the lack of activity on my side lately!

I have normalized the part of the model that generated the huge numbers. All those values are now smaller than 1 (in norm). 

The example you showed is spot on (and a much better MWE than what I sent you, sorry :( ).

Would computing those big-M constants during the call of the optimizer object, instead of during its creation, seem a viable solution to you? 


Johan Löfberg

Aug 27, 2015, 11:26:52 AM
to YALMIP
No, the big-M modelling occurs before the optimizer object is finalized. Hence, I will have to update the basic bound propagation so that it also propagates monomial bounds.

Johan Löfberg

Aug 27, 2015, 3:29:15 PM
to YALMIP
Add the following before the last line of setupbounds.m:

% propagate bilinear. This is needed when implies etc are used in a
% bilinearly parameterized optimizer object
[monomtable,variabletype] = yalmip('monomtable');
bilin = find(variabletype == 1);
if ~isempty(bilin)
    monomtable = monomtable(bilin,:);
    [i,j] = find(monomtable');
    i = reshape(i,2,[]);
    x = i(1,:)';
    y = i(2,:)';
    z = bilin(:);
    lb = LU(:,1);
    ub = LU(:,2);
    corners = [lb(x).*lb(y) ub(x).*lb(y) lb(x).*ub(y) ub(x).*ub(y)];
    
    maxz = max(corners,[],2);
    minz = min(corners,[],2);
    
    LU(bilin,1) = max(LU(bilin,1),minz);
    LU(bilin,2) = min(LU(bilin,2),maxz);
end
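The corner computation above is just interval arithmetic applied to each bilinear term; for example:

% bounds on z = x*y with x in [-1,2] and y in [3,5]
corners = [-1*3  2*3  -1*5  2*5];   % = [-3 6 -5 10]
% so z lies in [min(corners), max(corners)] = [-5, 10]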


Thomas D'hondt

Aug 28, 2015, 9:09:33 AM
to YALMIP
A huge thank you for all your assistance! I really appreciate the time you've spent fixing this big-M issue. The constants now propagate as intended :).

See you around !