Unbounded MIQP problem


Morten Hestdal

Nov 1, 2013, 8:28:49 AM
to yal...@googlegroups.com
Dear Johan,

I am building an MPC controller in Simulink, and my dynamics are piecewise affine. I am getting the warning "Warning: You have unbounded variables in IFF leading to a lousy big-M relaxation", but I cannot see which variables are missing bounds in my program.
Here is the code. The vector x_dod is a large vector consisting of N concatenated 2x1 vectors. I believe this is where the error lies, but I can't identify it. Even with only N = 2, this program takes forever to run.


persistent Controller

if t == 0

    %Number of inputs:
    M = 2;

    %Number of sites:
    N = M;

    eta_load = 0.8;
    eta_gen = 0.7;

    %Discrete system matrices:
    Ac = [1 0; 0 0];
    Ad = [0 0; 0 1];
    Bc = [1; 0];
    Bd = [0; 1];

    %Prediction horizon:
    L = 20;

    q = 2;
    Q_DOD = eye(2*M);
    Q_SOC = 1;
    R = 1;

    u_min = -20;
    u_max = 20;

    C = 1200;

    ref_SOC = 0.5;
    SOC_max = 1;
    SOC_min = 0.3;

    yalmip('clear');

    u = sdpvar(repmat(M,1,L),repmat(1,1,L));
    x_dod = sdpvar(repmat(2*M,1,L+1),repmat(1,1,L+1));
    x_soc = sdpvar(repmat(N,1,L+1),repmat(1,1,L+1));

    sdpvar ref

    constraints = [];
    objective = 0;

    for k = 1:L
        objective = objective + norm(Q_DOD*x_dod{k},2) + norm(Q_SOC*(x_soc{k}-ref_SOC.*ones(2,1)),2) + norm(R*u{k},2);
        for i = 1:M
            Model_dod1 = [x_dod{k+1}(2*i-1:2*i) == Ad*x_dod{k}(2*i-1:2*i) + Bd*u{k}(i)./(C*eta_gen),-3*ones(M,1) <= x_dod{k}(2*i-1:2*i) <= zeros(M,1), u{k}(i) <= 0];
            Model_dod2 = [x_dod{k+1}(2*i-1:2*i) == Ac*x_dod{k}(2*i-1:2*i) + eta_load.*Bc*u{k}(i)./C, zeros(M,1) <= x_dod{k}(2*i-1:2*i) <= 3*ones(M,1), u{k}(i) > 0];
 
         
            Model_soc1 = [x_soc{k+1}(i) == x_soc{k}(i) + (u{k}(i))/(C*eta_gen), u{k}(i) <= 0];
            Model_soc2 = [x_soc{k+1}(i) == x_soc{k}(i) + (eta_load*u{k}(i))/C,  u{k}(i) > 0];

            constraints = [constraints, Model_dod1 | Model_dod2, u_min <= u{k}(i) <= u_max];
            constraints = [constraints, Model_soc1 | Model_soc2, SOC_min <= x_soc{k}(i) <= SOC_max];
        end
        constraints = [constraints, sum(u{k}) == ref];    
    end
    
    for j = 1:M
        
        constraints = [constraints,-3*ones(2,1) <= x_dod{k+1}(2*j-1:2*j) <= 3*ones(2,1)];
        constraints = [constraints, SOC_min <= x_soc{k+1}(j) <= SOC_max];
    end

    Controller = optimizer(constraints,objective,[],{x_soc{1},x_dod{1},ref},u{1});

    uout = Controller{{currentx_soc,[0;0;0;0],P_ref}};
else
    
    uout = Controller{{currentx_soc,[0;0;0;0],P_ref}};
    
end


Morten

Johan Löfberg

Nov 1, 2013, 9:47:54 AM
to yal...@googlegroups.com
There are no explicit bounds on x_dod. All its bounds are implied and hidden inside the logical models. To derive a big-M model, the bounds have to be trivially available.
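As a sketch (the bound values -3 and 3 are taken from the logical models in the posted code), stating the bounds explicitly outside the logical constraints makes them trivially available:

```matlab
% Explicit bounds, stated outside any implies/| construct, so that
% YALMIP can read them off directly when building the big-M model
constraints = [constraints, -3 <= [x_dod{:}] <= 3];
constraints = [constraints, u_min <= [u{:}] <= u_max];
```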

BTW, logical | is a bit shaky; I've seen some strange behavior at times. I recommend that you create the model manually:


 ddod = binvar(repmat(2,1,L+1),repmat(1,1,L+1));
 dsoc = binvar(repmat(2,1,L+1),repmat(1,1,L+1));
 dcase = binvar(repmat(2,1,L+1),repmat(1,1,L+1));
...

constraints = [constraints, implies(ddod{k}(1),Model_dod1)];
constraints = [constraints, implies(ddod{k}(2),Model_dod2)];
constraints = [constraints, implies(dsoc{k}(1),Model_soc1)];
constraints = [constraints, implies(dsoc{k}(2),Model_soc2)];
constraints = [constraints, implies(dcase{k}(1),u_min <= u{k}(i) <= u_max)];
constraints = [constraints, implies(dcase{k}(2),SOC_min <= x_soc{k}(i) <= SOC_max)];
constraints = [constraints, ddod{k}(1) + ddod{k}(2) >= dcase{k}(1)];
constraints = [constraints, dsoc{k}(1) + dsoc{k}(2) >= dcase{k}(2)];
constraints = [constraints, dcase{k}(1) + dcase{k}(2) == 1];







Johan Löfberg

Nov 1, 2013, 9:49:16 AM
to yal...@googlegroups.com
Also note that strict inequalities are impossible. If you want an inequality to be strict, write >= smallnumber instead.
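Applied to the model above (a sketch; the tolerance 1e-6 is an arbitrary choice, not from the original post), the strict inequality u{k}(i) > 0 would become:

```matlab
% Small positive tolerance in place of the strict inequality u{k}(i) > 0;
% 1e-6 is an arbitrarily chosen tolerance
Model_dod2 = [x_dod{k+1}(2*i-1:2*i) == Ac*x_dod{k}(2*i-1:2*i) + eta_load.*Bc*u{k}(i)./C, ...
              zeros(M,1) <= x_dod{k}(2*i-1:2*i) <= 3*ones(M,1), u{k}(i) >= 1e-6];
```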

Johan Löfberg

Nov 1, 2013, 9:52:15 AM
to yal...@googlegroups.com

Morten Hestdal

Nov 1, 2013, 10:01:02 AM
to yal...@googlegroups.com
Thank you! I've actually seen some strange behavior from the controller, maybe that is the explanation.

Johan Löfberg

Nov 1, 2013, 10:01:39 AM
to yal...@googlegroups.com
and note that the code above misses the fact that there should be new binaries for every i as well.

A cleaned-up version, which defines the binaries repeatedly and thus skips the messy indexing w.r.t. k and i, yields


            
            ddod = binvar(2,1);
            dsoc = binvar(2,1);
            dcase = binvar(2,1);
            constraints = [constraints, implies(ddod(1),Model_dod1)];
            constraints = [constraints, implies(ddod(2),Model_dod2)];
            constraints = [constraints, implies(dsoc(1),Model_soc1)];
            constraints = [constraints, implies(dsoc(2),Model_soc2)];
            constraints = [constraints, implies(dcase(1),u_min <= u{k}(i) <= u_max)];
            constraints = [constraints, implies(dcase(2),SOC_min <= x_soc{k}(i) <= SOC_max)];
            constraints = [constraints, ddod(1) + ddod(2) >= dcase(1)];
            constraints = [constraints, dsoc(1) + dsoc(2) >= dcase(2)];
            constraints = [constraints, dcase(1)+dcase(2) == 1];
                
...
constraints = [constraints,-3 <= [x_dod{:}] <= 3,-3 <= [x_soc{:}] <= 3, u_min <= [u{:}] <= u_max];



Johan Löfberg

Nov 1, 2013, 10:04:38 AM
to yal...@googlegroups.com
Exclusive or can just as well be used to reduce the number of combinations. Since we use the implications in only one direction, this cannot cause any issues (in implies(d,X), the constraint X may still hold even though d is forced to zero):

constraints = [constraints, ddod(1) + ddod(2) == dcase(1)];
constraints = [constraints, dsoc(1) + dsoc(2) == dcase(2)];


Johan Löfberg

Nov 1, 2013, 10:06:13 AM
to yal...@googlegroups.com
Any particular reason for using norms in the objective? This makes the problem much harder than a standard quadratic cost (MISOCP solvers are much less developed).

Morten Hestdal

Nov 1, 2013, 10:15:00 AM
to yal...@googlegroups.com
Not really. I originally used a quadratic cost function, but I saw that norms were used in one example in the YALMIP wiki. My professor also suggested that by using the 1-norm I would get a linear cost function instead of a quadratic one, which might be quicker to solve. But I'll change it back to a standard quadratic cost function.

Johan Löfberg

Nov 1, 2013, 10:47:39 AM
to yal...@googlegroups.com
1-norm leading to MILP: Slow
Quadratic objective leading to MIQP: Slower
2-norm objective leading to MISOCP: Slowest

typically...
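As a sketch of the quadratic variant (reusing the weights Q_DOD, Q_SOC, R and the reference ref_SOC from the original code), the norm terms in the stage cost would become quadratic forms:

```matlab
% Quadratic stage cost instead of 2-norms: yields an MIQP rather than an MISOCP.
% Note the terms are squared compared to the norm formulation, so the
% optimal trade-off between the terms will differ somewhat.
objective = objective + x_dod{k}'*Q_DOD*x_dod{k} ...
                      + (x_soc{k}-ref_SOC*ones(M,1))'*Q_SOC*(x_soc{k}-ref_SOC*ones(M,1)) ...
                      + u{k}'*R*u{k};
```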

Morten Hestdal

Nov 5, 2013, 4:35:05 AM
to yal...@googlegroups.com
There is one thing about the code that I don't understand: why do you add limits on u{k}(i) and x_soc{k}(i) inside the for loop, when the limits are already added in this line:


 constraints = [constraints,-3 <= [x_dod{:}] <= 3, SOC_min <= [x_soc{:}] <= SOC_max, u_min <= [u{:}] <= u_max];


wouldn't this code suffice: 

            ddod = binvar(2,1);
            constraints = [constraints, implies(ddod(1),Model_dod1)];
            constraints = [constraints, implies(ddod(2),Model_dod2)];
            constraints = [constraints, implies(ddod(1),Model_soc1)];
            constraints = [constraints, implies(ddod(2),Model_soc2)];
            constraints = [constraints, ddod(1) + ddod(2) == 1];

Johan Löfberg

Nov 5, 2013, 5:11:22 AM
to yal...@googlegroups.com
YALMIP cannot deduce those constraints, as doing so would require a much deeper understanding of the global logic of the constraints. It would have to understand that some bound on u is always active. Seeing that in general essentially requires a solver on its own (a very advanced presolver).

Morten Hestdal

Nov 23, 2013, 9:51:16 AM
to yal...@googlegroups.com
Hi, I'm sorry to bother you, but the MPC does not seem to work as it should, so I've changed it a bit to produce the results I want. But now I again have the problem with unbounded variables, even though this time I have explicit bounds on all variables. Can you see how I can speed this up?

    u = sdpvar(repmat(M,1,L),repmat(1,1,L));

    x_soc = sdpvar(repmat(M,1,L+1),repmat(1,1,L+1));
    
    x_dod = sdpvar(repmat(2*M,1,L+1),repmat(1,1,L+1));

    sdpvar ref
    
    constraints = [];
    objective = 0;

    for k = 1:L
        objective = objective + (x_soc{k}-ref_SOC.*ones(M,1))'*Q_SOC*(x_soc{k}-ref_SOC.*ones(M,1));
    
        for i = 1:M
            
            Model_dod1 = [x_dod{k+1}(2*i-1:2*i) == Ad*x_dod{k}(2*i-1:2*i) - Bd*u{k}(i)./(C*eta_gen),u{k}(i) <= 0];
            Model_dod2 = [x_dod{k+1}(2*i-1:2*i) == Ac*x_dod{k}(2*i-1:2*i) + eta_load.*Bc*u{k}(i)./C, u{k}(i) >= 0];
            
            Model_soc1 = [x_soc{k+1}(i) == x_soc{k}(i) + (u{k}(i))/(C*eta_gen), u{k}(i) <= 0];
            Model_soc2 = [x_soc{k+1}(i) == x_soc{k}(i) + (eta_load*u{k}(i))/C ,u{k}(i) >= 0];
            
            
            ddod = binvar(2,1);
            dsoc = binvar(2,1);
            constraints = [constraints, implies(ddod(1),Model_dod1)];
            constraints = [constraints, implies(ddod(2),Model_dod2)];
            constraints = [constraints, implies(dsoc(1),Model_soc1)];
            constraints = [constraints, implies(dsoc(2),Model_soc2)];
            constraints = [constraints, sum(ddod) >= 1];
            constraints = [constraints, sum(dsoc) >= 1];
        end
        
        constraints = [constraints,implies(abs(ref) >= 0.01,sum(u{k}) == ref)] ;  
    
    end
    
    constraints = [constraints,-30 <= [x_dod{:}] <= 30, SOC_min <= [x_soc{:}] <= SOC_max , u_min <= [u{:}] <= u_max];  

Johan Löfberg

Nov 25, 2013, 6:43:49 AM
to yal...@googlegroups.com
You are defining the nonconvex but MILP-representable constraint abs(ref) >= 0.01, hence YALMIP needs bounds on ref to derive the MILP model of it. However, since ref is a parameter, I don't see why you add abs(ref) >= 0.01 to the model at all.
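One possible workaround, sketched here and not from the original thread: since ref is a parameter known before each solve, the logic can be moved outside the optimization entirely, e.g. by building two controllers and selecting between them at run time. The names base_constraints, tracking_constraints, ControllerTrack and ControllerFree are hypothetical, introduced only for this sketch:

```matlab
% Sketch: handle the "ref ~= 0" logic outside the optimizer.
% base_constraints is assumed to hold everything except the tracking
% equalities sum(u{k}) == ref, which are collected in tracking_constraints.
ControllerTrack = optimizer([base_constraints, tracking_constraints], ...
                            objective,[],{x_soc{1},x_dod{1},ref},u{1});
ControllerFree  = optimizer(base_constraints, ...
                            objective,[],{x_soc{1},x_dod{1},ref},u{1});

% At run time, pick the controller based on the known parameter value:
if abs(P_ref) >= 0.01
    uout = ControllerTrack{{currentx_soc,[0;0;0;0],P_ref}};
else
    uout = ControllerFree{{currentx_soc,[0;0;0;0],P_ref}};
end
```

This keeps both optimization problems convex in the continuous variables, since the nonconvex condition on ref never enters the model.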

Morten Hestdal

Nov 25, 2013, 7:12:33 AM
to yal...@googlegroups.com
The functionality I'm looking for is that if ref != 0, then sum(u) == ref. I figured I'd use abs(ref) >= 0.01 since it is easier to handle numerically. I see now that this is indeed nonconvex. Any ideas on how to make it convex?

Johan Löfberg

Nov 25, 2013, 7:14:40 AM
to yal...@googlegroups.com
Well, it is nonconvex, so there is no way to make it convex.