Hybrid MPC (small) problem


Marko Gulin

Oct 15, 2014, 7:16:28 AM10/15/14
to yal...@googlegroups.com
Hi all!

I have difficulties using hybrid MPC in YALMIP. Let me explain my problem.
I'm trying to solve a power flow optimization in a microgrid with 2 independent storage systems - a battery and a fuel cell. Each of these systems has charging and discharging efficiencies, which is why I use a MILP formulation.
The plant equation can be written as x(k+1)=Ax(k)-Bu(k), where u(k) is the vector of powers for the battery and fuel cell, respectively (power is positive for discharging, and negative for charging), A is the 2x2 identity matrix, and B is a 2x2 diagonal matrix with 4 possibilities:
B00 - battery discharging, fuel cell discharging; Pb>=0, Pfc>=0
B01 - battery discharging, fuel cell charging; Pb>=0, Pfc<=0
B10 - battery charging, fuel cell discharging; Pb<=0, Pfc>=0
B11 - battery charging, fuel cell charging; Pb<=0, Pfc<=0
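
For concreteness, and using the efficiency names that appear in the code later in the thread (ebc/ebd for battery charging/discharging, efc/efd for the fuel cell), the four B matrices would look something like this sketch:

```matlab
% Sketch of the four diagonal B matrices, one per operating mode.
% Discharging uses 1/eta (more energy is drawn from storage than is
% delivered); charging uses eta (less energy is stored than is drawn).
B00 = diag([1/ebd, 1/efd]); % battery discharging, fuel cell discharging
B01 = diag([1/ebd, efc  ]); % battery discharging, fuel cell charging
B10 = diag([ebc,   1/efd]); % battery charging,    fuel cell discharging
B11 = diag([ebc,   efc  ]); % battery charging,    fuel cell charging
```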

This is how I defined this formulation in YALMIP (note that this formulation does not work, with the solver complaining about a bunch of unbounded variables in implies, leading to a lousy big-M relaxation):

x = sdpvar(repmat(nx,1,N+1),repmat(1,1,N+1)); % nx=2
u = sdpvar(repmat(nu,1,N),repmat(1,1,N)); % nu=2

constraints = [];
objective = 0;

for k=1:N
    
    Pg = [-1 -1]*u{k} - Ppv(k) + Pl(k); % power balance
    
    objective = objective + pk(k)*Pg;
    
    Model1 = [ x{k+1}==(A*x{k}-B00*u{k}), u{k}(1)>=0, u{k}(2)>=0 ];
    Model2 = [ x{k+1}==(A*x{k}-B01*u{k}), u{k}(1)>=0, u{k}(2)<=0 ];
    Model3 = [ x{k+1}==(A*x{k}-B10*u{k}), u{k}(1)<=0, u{k}(2)>=0 ];
    Model4 = [ x{k+1}==(A*x{k}-B11*u{k}), u{k}(1)<=0, u{k}(2)<=0 ];
    
    constraints = [ constraints, Model1 | Model2 | Model3 | Model4, ...
        cxm<=x{k}<=cxM, cum<=u{k}<=cuM, Pgm<=Pg<=PgM ];
    
end

constraints = [ constraints, cxm<=x{N+1}<=cxM, x{1}==x0 ];
objective = objective - pN*[ebd efd]*x{N+1};

Then I tried to neglect models 2 and 3 in the above formulation by removing them from the constraints, and then it worked without any problems. Following this clue, I redefined my system as follows:

xb = sdpvar(repmat(1,1,N+1),repmat(1,1,N+1));
ub = sdpvar(repmat(1,1,N),repmat(1,1,N));
xf = sdpvar(repmat(1,1,N+1),repmat(1,1,N+1));
uf = sdpvar(repmat(1,1,N),repmat(1,1,N));

constraints = [];
objective = 0;

for k=1:N
    
    Pg = -ub{k} - uf{k} - Ppv(k) + Pl(k); % power balance
    
    objective = objective + pk(k)*Pg;
    
    Model_b1 = [ xb{k+1}==xb{k}-1/ebd*ub{k}, ub{k}>=0 ];
    Model_b2 = [ xb{k+1}==xb{k}-1*ebc*ub{k}, ub{k}<=0 ];
    
    Model_f1 = [ xf{k+1}==xf{k}-1/efd*uf{k}, uf{k}>=0 ];
    Model_f2 = [ xf{k+1}==xf{k}-1*efc*uf{k}, uf{k}<=0 ];
    
    constraints = [ constraints, Model_b1 | Model_b2, Model_f1 | Model_f2, ...
        cbm<=xb{k}<=cbM, cfm<=xf{k}<=cfM, Pbm<=ub{k}<=PbM, Pfm<=uf{k}<=PfM, Pgm<=Pg<=PgM ];
    
end

constraints = [ constraints, cbm<=xb{N+1}<=cbM, cfm<=xf{N+1}<=cfM, xb{1}==x0(1), xf{1}==x0(2) ];
objective = objective - pN*ebd*xb{N+1} - pN*efd*xf{N+1};

The above formulation (i.e. the second formulation) works just fine. Can someone please tell me what is wrong with the first formulation?
Since this is my first post in this group, I will take the opportunity to say thanks to Johan for developing and maintaining YALMIP. Thank you!

Best,
Marko.

Johan Löfberg

Oct 15, 2014, 7:30:41 AM10/15/14
to yal...@googlegroups.com
Since I cannot run your code, I can only guess, and my guess is that the bound propagation is broken in complex or-constructions.

Having said that, I really think you should model this using explicit binary variables, so that you see what is actually done, and to make it possible to exploit the disjoint structure of your model:

delta = binvar(repmat(4,1,N),repmat(1,1,N));
constraints = [];
objective = 0;

for k=1:N
   
   Pg = [-1 -1]*u{k} - Ppv(k) + Pl(k); % power balance
   
   objective = objective + pk(k)*Pg;
   
   Model1 = [ x{k+1}==(A*x{k}-B00*u{k}), u{k}(1)>=0, u{k}(2)>=0 ];
   Model2 = [ x{k+1}==(A*x{k}-B01*u{k}), u{k}(1)>=0, u{k}(2)<=0 ];
   Model3 = [ x{k+1}==(A*x{k}-B10*u{k}), u{k}(1)<=0, u{k}(2)>=0 ];
   Model4 = [ x{k+1}==(A*x{k}-B11*u{k}), u{k}(1)<=0, u{k}(2)<=0 ];
  

   constraints = [constraints, implies(delta{k}(1), Model1)];

   constraints = [constraints, implies(delta{k}(2), Model2)];
   constraints = [constraints, implies(delta{k}(3), Model3)];
   constraints = [constraints, implies(delta{k}(4), Model4)];
   constraints = [constraints, sum(delta{k})==1];

   constraints = [ constraints, cxm<=x{k}<=cxM, cum<=u{k}<=cuM, Pgm<=Pg<=PgM ];
   
end



Marko Gulin

Oct 15, 2014, 11:30:07 AM10/15/14
to yal...@googlegroups.com
I also tried to do that, but finding the optimal solution takes too long. I terminated the solver after 300-400 seconds.
Attached you can find implementations of all 3 formulations. The formulations are implemented as separate functions, and are all called from ProgramRoutine.m (lines 51-53).
If you don't have time for this, I can continue using the "simpler formulation" that is working.

Thank you!

Best,
Marko.
3._milp.zip

Johan Löfberg

Oct 15, 2014, 4:02:26 PM10/15/14
to yal...@googlegroups.com
In YALMIP_highlevel_bin you are saying the binaries should sum up to 2. That makes no sense. Exactly one model is active.

However, YALMIP_highlevel and YALMIP_highlevel_bin should lead to roughly the same complexity. When running the code, this is not the case though, as the first one is solved in no time. Unfortunately, the reason appears to be a bug in the or operator. This can be seen if you check the feasibility of the last models after solving the problem. None of them is feasible (at least one of them should be):

optimize(constraints, objective);
checkset(Model1)
checkset(Model2)
checkset(Model3)
checkset(Model4)

Use implies.

...or use a convex hull formulation, which appears to be really fast on this model. cplex solves the problem in the root node in 0.02s. It takes a lot longer to set up the model, but that is only a one-time thing anyway (as you would use the optimizer framework if you want to simulate):
[H,bins] = hull(Model1,Model2,Model3,Model4);    
constraints = [constraints, H, binary(bins)];






Marko Gulin

Oct 16, 2014, 12:12:04 PM10/16/14
to yal...@googlegroups.com
Thank you for your suggestion on using the convex hull formulation; now it works just fine. As for the sum of binaries - I figured that out later, which is why I deleted my first attachment and uploaded a new one :)

Best,
Marko.

Marko Gulin

Oct 17, 2014, 6:12:16 AM10/17/14
to yal...@googlegroups.com
It looks like when using optimizer with the "convex hull formulation" there are some issues with the constraints x{k+1}==A*x{k}-B*u{k}, although the solver is not complaining about constraint violation, which is weird. I'm attaching program routines with 4 optimizations and 2 scenarios; all 4 optimizations are run from ProgramRoutine.m, you only have to choose which scenario to use.

Optimizations:
- fixed parameter optimization using convex hull formulation - YALMIP_highlevel.m
- fixed parameter optimization using old working formulation - YALMIP_highlevel_new.m
- variable parameter optimization using convex hull formulation - YALMIP_highlevel_mpc.m <- problems here
- variable parameter optimization using old working formulation - YALMIP_highlevel_mpc_new.m

Scenarios:
- data_bad.mat - simulation is good for 3 optimizations, but fails for YALMIP_highlevel_mpc.m
- data_good.mat - simulation is good for all 4 optimizations

Note that the 'valid trajectory' in the displayed results is calculated for the optimal control sequence obtained by solving the optimization problem.

Best,
Marko.

3._controller.zip

Johan Löfberg

Oct 17, 2014, 8:15:59 AM10/17/14
to yal...@googlegroups.com
Unfortunately, now that I think of it, it is not possible to use hull in combination with the optimizer framework. The generation of the hull is deeply dependent on knowing the constant parts of the model. Hence it fails when set up parametrically, as it doesn't know that some variables will actually be fixed later. You would have to create the convex hull model manually (although I will think about extending the hull operator to support an argument marking to-be-constant variables)

f1(x)+constant1 = 0 or f2(x)+constant2=0

hull given by

f1(y1) + t1*constant1 = 0, f2(y2) + t2*constant2 = 0, x = y1 + y2, 1 = t1 + t2, t >= 0
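
As a sketch of what that manual construction could look like in YALMIP for two affine models (the names F1, g1, F2, g2, y1, y2 and t are illustrative, not part of any YALMIP API):

```matlab
% Manual convex hull of:  F1*x + g1 == 0  OR  F2*x + g2 == 0
% x is split into copies y1, y2, and the constant terms are scaled
% by the weights t(1), t(2); binarity of t recovers the disjunction.
y1 = sdpvar(nx,1);
y2 = sdpvar(nx,1);
t  = binvar(2,1);
H  = [ F1*y1 + t(1)*g1 == 0, ...
       F2*y2 + t(2)*g2 == 0, ...
       x == y1 + y2, sum(t) == 1 ];
```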

Johan Löfberg

Oct 17, 2014, 9:03:46 AM10/17/14
to yal...@googlegroups.com
It was rather easy to generalize the code for linearly parameterized hulls, which is what you have. Simply append a last argument declaring a parameter (in your case x{1}, so you only have to do this for k=1).

Replace extras/@lmi/hull

Here is a test

yalmip('clear')
x1 = sdpvar(1);
v1 = sdpvar(1);
v2 = 5;
[H,t] = hull(x1==v1,x1==v2,v1);
P = optimizer(H,x1,sdpsettings('solver','gurobi'),v1,x1);
P{7} % Should give 5
P{2} % Should give 2




hull.m

Marko Gulin

Oct 17, 2014, 11:59:20 AM10/17/14
to yal...@googlegroups.com
I tried to use this new hull.m function, but without any luck -- I still get wrong results.

I'm simulating a power flow optimization with a 1 h sample time and a 24 h prediction horizon on a 14-day case study in closed loop. In this simulation, with the "convex hull optimizer" formulation, 75 out of 313 optimizations come out bad (same as with the old hull.m function). I also tried to simulate my case study with the "fixed parameters hull" formulation and everything was fine. However, the latter formulation has a bunch of YALMIP overhead, because of which the simulation lasted 3 h.

If you use this new hull.m function on the program example from my previous post, you will get the same bad results.

Best,
Marko.

Johan Löfberg

Oct 17, 2014, 12:27:20 PM10/17/14
to yal...@googlegroups.com
When I run the code with data_bad, I get two plots with a blue and red line almost on top of each other, and the 4 problems solved all yield a cost of 0.27. Do you get something different?

Johan Löfberg

Oct 17, 2014, 12:30:01 PM10/17/14
to yal...@googlegroups.com
Just to make sure, what is returned if you type "dbtype 25 @lmi/hull"?

Marko Gulin

Oct 17, 2014, 12:36:35 PM10/17/14
to yal...@googlegroups.com
When I type "dbtype 25 @lmi/hull" I get "25    if N==1".

Maybe it is a problem with how I define the "hull constraints". I replaced this line
[H,bins] = hull(Model1,Model2,Model3,Model4);

with following:
if k==1
    [H,bins] = hull(Model1,Model2,Model3,Model4,x{1});
else
    [H,bins] = hull(Model1,Model2,Model3,Model4);
end

Is this OK?

Johan Löfberg

Oct 17, 2014, 12:41:09 PM10/17/14
to yal...@googlegroups.com
OK, so you have installed it correctly.

That's how I would do it as a test to make sure it works. The quick fix I did can very likely have bugs when the parameter actually isn't part of the model.

Did my results show that your code works on my machine?

Marko Gulin

Oct 17, 2014, 12:49:49 PM10/17/14
to yal...@googlegroups.com
When I run the optimization on my machine, I get the cost to be 0.4658.

Just a thought, but wouldn't it be simpler to define these variable parameters as a new data type, e.g. sdpconst? In that case, the parser would know for sure that the user is going to fix these parameters later.

Marko Gulin

Oct 17, 2014, 12:52:14 PM10/17/14
to yal...@googlegroups.com
Are you using GUROBI? If so, can you please try to run the optimization using CPLEX?

Johan Löfberg

Oct 17, 2014, 12:58:50 PM10/17/14
to yal...@googlegroups.com
The problem is that I piggyback completely on the standard internals. Doing the compilation and analysis while taking into account that some stuff is parametric would force me to make a lot of changes, and I am too lazy for that.

I used cplex, as that is what your code specified.

Marko Gulin

Oct 17, 2014, 1:07:50 PM10/17/14
to yal...@googlegroups.com
OK, never mind then :) I also tried to solve my 14-day case study using the optimizer formulation, but with the system equation A*x{k}+B*u{k} dismantled into two equations, since the states of my system are not coupled. However, this formulation is prone to long optimization times. I will try to write the problem in standard matrix form and see if this helps with the optimization time.

Thanks for the support!

Best,
Marko.

Johan Löfberg

Oct 17, 2014, 1:16:02 PM10/17/14
to yal...@googlegroups.com
With gurobi I get costs .47/.27 (plots identical) and then .27/.27 (plots identical).

I still haven't understood how I am supposed to see an incorrect result, as all the plots look the same.

Marko Gulin

Oct 17, 2014, 1:32:09 PM10/17/14
to yal...@googlegroups.com
I downloaded 3._controller.zip and ran the optimizations, just to eliminate the possibility of different versions of the code (maybe I changed something in my programs, I can't remember) :)

For data_good.mat I get 0.2067 for all 4 optimizations (plots identical).
For data_bad.mat I get 0.4658 for all 4 optimizations, but the plots are not identical for the "optimizer convex hull" formulation.

I'm using CPLEX since I don't have GUROBI installed.

Johan Löfberg

Oct 17, 2014, 1:45:14 PM10/17/14
to yal...@googlegroups.com
I see -.13 for all solves in data_good (I am talking about the value cplex displays)

When you say different, how different?

Marko Gulin

Oct 17, 2014, 2:14:38 PM10/17/14
to yal...@googlegroups.com
Now I'm not sure whether I'm looking at the right objective value. I slightly changed the program (attached) to print out the values; here are my printouts:

data_good.mat
YALMIP_highlevel :   0.2067
YALMIP_highlevel_new :   0.2067
YALMIP_highlevel_mpc :   0.1999
YALMIP_highlevel_mpc_new :   0.2067

data_bad.mat
YALMIP_highlevel :   0.4658
YALMIP_highlevel_new :   0.4658
YALMIP_highlevel_mpc :   0.0503
YALMIP_highlevel_mpc_new :   0.4658

Attached you can find my plots for optimizer formulation.
3._controller_new.zip
plots.jpg

Johan Löfberg

Oct 19, 2014, 6:42:01 AM10/19/14
to yal...@googlegroups.com
bug fixes
hull.m

Marko Gulin

Oct 19, 2014, 11:00:36 AM10/19/14
to yal...@googlegroups.com
I still have an error - much smaller than before, but it is still an error. Here I post my program and plots. If you're going to run the program, please use only this newest version, because I've changed some annotations (small changes). I also added a new function with a low-level formulation (i.e. directly via matrices).

The error occurs only when lambda is different from 0 (you can define this parameter in ProgramRoutine.m, line 26). The parameter lambda is used to penalize deviation from a predefined power exchange trajectory. Note that the low-level formulation currently works only for lambda=0. I'm going to extend the low-level formulation now to use lambda.

Best,
Marko.
3._controller_19.10.2014.zip
optimizer.jpg

Johan Löfberg

Oct 19, 2014, 11:43:08 AM10/19/14
to yal...@googlegroups.com
I get

         YALMIP_highlevel :   0.4658
     YALMIP_highlevel_new :   0.4658
          YALMIP_lowlevel :   0.0991
     YALMIP_highlevel_mpc :   0.4658
 YALMIP_highlevel_mpc_new :   0.4658


which to me indicates that your low-level model is wrong.

Marko Gulin

Oct 19, 2014, 11:54:29 AM10/19/14
to yal...@googlegroups.com
I get the same results for data_bad.mat and lambda=0.05. Note that the low-level formulation does not support lambda yet, i.e. it currently works only for lambda=0.
I'm now implementing support for linear and quadratic penalization of the deviation from the power trajectory Pgt.

Johan Löfberg

Oct 19, 2014, 12:41:13 PM10/19/14
to yal...@googlegroups.com
So what do you mean by "but still an error"? The small difference in the simulation, at the same cost, is most likely an effect of a non-zero tolerance in the solver.

Marko Gulin

Oct 19, 2014, 12:53:59 PM10/19/14
to yal...@googlegroups.com
Yes, I was referring to the small difference in the simulation. I've tested the program with quadratic penalization; everything works fine with all 5 formulations.
I will test linear and quadratic penalization on the whole case study, and I'll let you know the results.

Marko Gulin

Oct 19, 2014, 3:03:55 PM10/19/14
to yal...@googlegroups.com
I ran the 14-day case study and came to the following conclusions:
- quadratic penalization of the deviation works fine for all formulations (corresponds to the 'miqp' value when calling the optimizations)
- linear penalization of the deviation works fine for 4 formulations, but does not work well for the "optimizer" formulation (corresponds to the 'milp' value when calling the optimizations)

After looking at the code of YALMIP_highlevel_mpc.m (this is the one making problems), I realized that I used a norm(expression,1) term to obtain the absolute value of the expression:
objective = objective + d{k}(4)*Pg + lambda*norm(d{k}(3)-Pg,1);

Then I redefined this by introducing an additional variable:
z = sdpvar(repmat(1,1,N),repmat(1,1,N));

and the "absolute value" function is now defined as:
Model_z1 = [ z{k}==+d{k}(3)-Pg, d{k}(3)>=Pg ];
Model_z2 = [ z{k}==-d{k}(3)+Pg, d{k}(3)<=Pg ];

[H,bins] = hull(Model_z1,Model_z2);
constraints = [ constraints, H, binary(bins) ];

and the objective is now defined as:
objective = objective + d{k}(4)*Pg + lambda*z{k};

Everything now works just fine on the whole 14-day case study, so you might want to check the norm function when used in optimizer.

Note that OR is still not working, i.e. when these two models are put in the constraints as follows:
constraints = [ constraints, Model_z1 | Model_z2 ]
the optimization breaks down.

Johan Löfberg

Oct 19, 2014, 3:10:25 PM10/19/14
to yal...@googlegroups.com
And what if you use the standard model for abs? (You are using a very expensive MILP model.)

constraints = [ constraints,  -z{k} <= d{k}(3)-Pg <= z{k}];


Johan Löfberg

Oct 19, 2014, 3:25:37 PM10/19/14
to yal...@googlegroups.com
Aha, now I see the problem. I didn't see you had lambda as a parameter. If you have nonlinear parametrization in expressions where YALMIP is trying to perform convexity analysis, YALMIP can be led to believe you are setting up more complicated stuff than you actually are. lambda*norm cannot be detected as convex (as lambda could be negative); hence, it sets up a MILP model for the norm. You are better off manually setting up the simple LP representation:

constraints = [ constraints,  -z{k} <= d{k}(3)-Pg <= z{k}];

Johan Löfberg

Oct 19, 2014, 3:26:41 PM10/19/14
to yal...@googlegroups.com
and you get a really bad MILP model, as there are no bounds on d, hence the big-M coefficients will be huge.

Marko Gulin

Oct 19, 2014, 3:33:31 PM10/19/14
to yal...@googlegroups.com
I tested that formulation as well, and it works fine. Thanks for the suggestion! :)

Should I define constraints for d as well, since I know the bounds? Maybe if I add lambda>=0 to the constraints, YALMIP will know that the problem is convex?

Johan Löfberg

Oct 19, 2014, 3:38:19 PM10/19/14
to yal...@googlegroups.com
No, it won't help, as YALMIP doesn't make any distinction between parameters and decision variables at compilation time, and a bilinear product means YALMIP has to go for an exact representation of the absolute value, i.e., a MILP model.

Marko Gulin

Oct 19, 2014, 3:41:52 PM10/19/14
to yal...@googlegroups.com
Would you recommend defining constraints for parameters in optimizer? Usually these bounds are known; can defining them improve optimization performance? I'm asking because you mentioned a "lousy big-M formulation" when bounds are not known.

Marko Gulin

Oct 19, 2014, 6:00:03 PM10/19/14
to yal...@googlegroups.com
Another question - why is this formulation:
-z{k} <= c <= z{k}

equivalent to:
z{k} = abs(c)

The first inequality can also be written as
z{k} >= abs(c)

How come the variable z{k} is set to exactly the value abs(c), considering that the last inequality also allows z to be greater than abs(c)?

Johan Löfberg

Oct 20, 2014, 1:28:46 AM10/20/14
to yal...@googlegroups.com
Since the only place where d{k} enters an expression that leads to advanced modelling is the absolute value in the objective, it will help, but it really only cures a symptom instead of the problem. As the absolute value enters in a convex fashion, modelling it using a MILP model is not something one would like to do. It is a complete waste of CPU power.

Johan Löfberg

Oct 20, 2014, 2:49:44 AM10/20/14
to yal...@googlegroups.com
Yes, z >= abs(c) (this is called an epigraph reformulation), and then

-z{k} <= c <= z{k}

is the standard way to proceed, as it leads to LP constraints, and since you are minimizing z, at optimality you will have z = abs(c). This works because abs(c) enters the model in a convexity-preserving fashion (minimized, constrained from above).
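
A minimal sketch of the epigraph idea in plain YALMIP (the constants 5 and 3 are purely illustrative):

```matlab
% Minimize abs(c - 3) without using the abs operator:
% introduce z with the LP constraints -z <= c-3 <= z and minimize z.
% Since z is pushed down, at optimality z = max(c-3, 3-c) = abs(c-3).
c = sdpvar(1);
z = sdpvar(1);
optimize([c == 5, -z <= c-3 <= z], z);
value(z) % 2, i.e. abs(5-3)
```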