YALMIP beginner needs help...


maiw...@hotmail.com

Feb 17, 2014, 5:02:26 AM
to yal...@googlegroups.com
Dear all,

I am new to YALMIP and am trying to formulate an MPC controller with it.

The model and code are attached.

Basically, I am working on a building model concerning temperature dynamics.

The model has 5 states (x), 1 controllable input (u), 5 disturbances/un-controllable inputs (d).
elow and eup in the code mean the temperature violations below the lower bound and above the upper bound.

The objective function to minimize is:
sum(uk) + a penalty on the temperature constraint violations + a penalty on the maximum of the controllable input (which is related to energy consumption).

The disturbances/un-controllable inputs are all known data in dk.mat; I need to update the states and the disturbances every time step.

For some reason, my program does not work as shown in the paper http://www.eecs.berkeley.edu/~maasoumy/Research_files/DT_DTSI-2012-01-0015.R1_Maasoumy.pdf

I am not sure what goes wrong. Can anyone give me some advice? I am new to YALMIP and have been struggling for a week already.

Thank you very much in advance!

Best Regards,
Kit
dk.mat
initial.mat
ModelData.mat
MPC2.m

Johan Löfberg

Feb 17, 2014, 5:22:37 AM
to
To begin with, you are not optimizing the same objective. To me, it looks like they use

OBJ = OBJ + a*max(u);
for k = 1:N
    OBJ = OBJ + u(:,k);
end
for k = 1:N
    OBJ = OBJ + b*(elow(:,k) + eup(:,k));
end

Note that you can write this as

OBJ = a*max(u)+ sum(u) + b*(sum(elow) + sum(eup));

The same holds for most of your code: the for-loops are unnecessary.

For b small enough, it is optimal to do nothing. To see any control action, you will have to increase the penalty b on the constraint slacks.
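As a sketch of what removing the loops could look like elsewhere in the code — names here are illustrative, assuming x is the 5-by-(N+1) state matrix, u the 1-by-N input, d the 5-by-N disturbance, and A, B, E the discrete-time model matrices from ModelData.mat:

```matlab
% Whole-horizon dynamics in one vectorized constraint instead of a
% for-loop over k (illustrative names, not taken from MPC2.m):
CON = [CON, x(:,2:N+1) == A*x(:,1:N) + B*u + E*d];
```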

maiw...@hotmail.com

Feb 17, 2014, 6:31:58 AM
to yal...@googlegroups.com
OBJ = OBJ + a*max(u);
for k = 1:N
    OBJ = OBJ + u(:,k);
end
for k = 1:N
    OBJ = OBJ + b*(elow(:,k) + eup(:,k));
end

OBJ = a*max(u)+ sum(u) + b*(sum(elow) + sum(eup));

Aren't these two objectives the same?
 
I was actually worried about the way I update the dk matrix in the dynamic prediction of the model. Can I write the following line:
controller = optimizer(CON, OBJ, sdpsettings('verbose',1), [x0 d], u0);

I looked up your wiki, but it does not seem to have an example that concatenates the parameters like this:

[x0 d]


Thanks Johan!

Johan Löfberg

Feb 17, 2014, 7:02:29 AM
to
The two objectives I wrote are the same. You, however, used

OBJ = OBJ + a*max(u(:,k));

I don't understand your second question. Since you have specified the varying parameter as [x d], you send data as [xdata ddata], exactly as you do.

maiw...@hotmail.com

Feb 17, 2014, 10:51:05 PM
to yal...@googlegroups.com
Thanks Johan.

It still doesn't work as expected after I made the following modification:
OBJ = a*max(u)+ sum(u) + b*(sum(elow) + sum(eup));

I think I found another problem in my code, but I don't know how to solve it:
The temperature constraint is time-varying. But I just define the upper and lower bound arrays with 24 elements in each (Prediction horizon is 24). When the optimization problem moves to next time step, the new prediction horizon will have different temperature bounds which should be a shifted version of the one I define at the beginning. Should I shift the temperature bounds and update it by specifying the varying parameter as [x d Tbounds] and sending data as [xdata ddata newTemperatureBounds]?

Or is there another more efficient way to do it?

Cheers,
Kit

Johan Löfberg

Feb 18, 2014, 2:02:15 AM
to yal...@googlegroups.com
If you have time-varying constraints and want to avoid overhead from setting up the problem repeatedly, that is the recommended way to do it.

maiw...@hotmail.com

Feb 18, 2014, 2:16:30 AM
to yal...@googlegroups.com
Thanks Johan.

When you mention "want to avoid overhead from setting up the problem repeatedly", do you mean using the command "optimizer" to set up the problem once?

An error occurs. It seems that I can't do it using "optimizer" because the varying parameters have different sizes: x is 5-by-1, d is 5-by-24 and Tbound is 1-by-24.

Any idea to solve this?

Johan Löfberg

Feb 18, 2014, 2:18:57 AM
to yal...@googlegroups.com
Yes.

Don't try to put all parameters in the same matrix. Use the cell-based format. See the last paragraph in help optimizer.
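A minimal sketch of the cell-based format, using the parameter names from this thread (the data variable Tbounddata is a hypothetical name; the exact call signature is the one documented in help optimizer):

```matlab
% Parameters of different sizes go in a cell array, both when the
% optimizer object is created...
controller = optimizer(CON, OBJ, sdpsettings('verbose',1), {x0, d, Tbound}, u0);

% ...and when it is called with numerical data at every sample:
uopt = controller{{xdata, ddata, Tbounddata}};
```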

maiw...@hotmail.com

Feb 18, 2014, 10:55:27 PM
to yal...@googlegroups.com
Thanks Johan.

I feel that I am getting close to what I want.

I implemented what you suggested, i.e. specifying the varying parameters as {x d Tbounds} and sending data as {xdata ddata newTemperatureBounds}.
And it sort of works; this version is in the MPC.m file.

Then I realized I could use a binary variable to construct an occupancy signal n(k), 1 for occupied and 0 for unoccupied, so that I can formulate the time-varying temperature constraints as follows:

(1-n(k))*Tlb_un + n(k)*Tlb - elow(:,k) <= x(5,k+1) <= (1-n(k))*Tub_un + n(k)*Tub + eup(:,k)

which is in the MPCnew.m file.

I think these two programs do the same thing, only with two different implementations of the time-varying temperature constraints.

But the results turn out to be different from each other.

Why is that?

Thanks in advance! Waiting online!

Cheers,
Kit
dk.mat
initial.mat
ModelData.mat
MPC.m
MPCnew.m

maiw...@hotmail.com

Feb 19, 2014, 2:05:01 AM
to yal...@googlegroups.com
Continuing from my last post, for the time-varying temperature constraints:

Method 1 (MPC.m)

CON = CON + ( Tmin(k)- elow(:,k) <= x(5,k+1) <= Tmax(k)+ eup(:,k) );

then I define the bounds in an array with 24 elements (prediction horizon 24 hours):

Tmax_IC = [23 23 23 23 23 23 23 23 22 22 22 22 22 22 22 22 22 22 23 23 23 23 23 23]; % Temperature upper bound degreeC


Tmin_IC = [19 19 19 19 19 19 19 19 20 20 20 20 20 20 20 20 20 20 19 19 19 19 19 19]; % Temperature lower bound degree

and shift the bounds every time step in the simulation before sending data as {xdata ddata newTemperatureBounds}.

Method 2 (MPCnew.m)
I could use a binary variable to construct an occupancy signal n(k), 1 for occupied and 0 for unoccupied:

n = binvar(1,N);              %occupancy signal, 1 for occupied, 0 for unoccupied

Tlb_un = 19;                  %low bound temperature when unoccupied, degreeC

Tlb = 20;                     %low bound temperature when occupied, degreeC

Tub_un = 23;                  %upper bound temperature when unoccupied, degreeC

Tub = 22;                     %upper bound temperature when occupied, degreeC


then the constraint becomes

CON = CON + ( (1-n(k))*Tlb_un + n(k)*Tlb - elow(:,k) <= x(5,k+1) <= (1-n(k))*Tub_un + n(k)*Tub + eup(:,k) );


Then I just initialize the occupancy signal to be

n_IC = [0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0];

and shift this signal every time step in the simulation before sending data as {xdata ddata ndata}.
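The shifting step above can be sketched like this, assuming the 24-hour occupancy pattern simply repeats day after day (so the signal is rotated circularly; variable names follow the thread):

```matlab
% Rotate the occupancy signal one step at every sample, assuming the
% daily pattern is periodic (circular shift):
n_IC = [n_IC(2:end) n_IC(1)];

% Then call the optimizer object with the updated parameters:
uopt = controller{{x_IC, d_IC, n_IC}};
```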


But these two methods did not give me the same results. Which one is correct? Did I make some silly mistake here?

Thanks in advance.

Cheers,
Kit

Johan Löfberg

Feb 19, 2014, 3:18:00 AM
to yal...@googlegroups.com
I got identical plots (as far as I can tell).

Which solver are you using?
>> struct(controller).model.solver.callhandle

ans =

    @call_cplexibm_qcmiqp

It could be that the solution is not unique, and that the different models lead to slightly different numerical data, possibly resulting in different solutions.

maiw...@hotmail.com

Feb 19, 2014, 3:24:25 AM
to yal...@googlegroups.com
I did not specify the solver.

I mean, this is not a complex or difficult problem; I am just using it to get familiar with YALMIP so that I can apply it to a larger problem, so I didn't bother.

Should I specify it?

I tracked it down:

It turns out that, MPC.m used @calllinprog while MPCnew.m used @bnb.

Does it matter? I don't think I understand what is going on here.

Johan Löfberg

Feb 19, 2014, 3:33:22 AM
to
In your new model, you have variables which are binary. At compile time, YALMIP does not analyze the model deeply enough to understand that all binary variables will be eliminated when you fix the parameters. Hence, YALMIP has to pick a mixed-integer solver. In your case, you don't have any MILP solver installed, so YALMIP has to resort to its own, very basic, MILP solver bnb, most likely using linprog as the underlying LP solver. Once the parameters are fixed at a given sample, bnb will immediately realize that the problem does not have any integers, and the problem will be solved in bnb's root node. However, since bnb is a very basic implementation, things can go wrong. Even worse, linprog is pretty much the worst possible LP solver for a branch & bound framework, so things are even more likely to go wrong.

Hence, either of the two simulations can give strange results, since you are, at the bottom layer, using a very poor LP solver (linprog). Install a better solver to begin with (cplex, gurobi, mosek, scip, ...).

You can easily get the same results if you simply specify n as an sdpvar instead (there is no reason to specify it as binary; no one cares in the end, since it is you who set the values, and you just happen to set them to 0 or 1), or forcefully specify the solver as linprog using the + notation, as described in the help.
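Both workarounds can be sketched as follows (the '+linprog' string is the forced-solver notation referred to above; variable names follow the thread):

```matlab
% Option 1: declare the occupancy signal as a plain sdpvar -- it is a
% parameter whose values you set yourself, so it never has to be binary:
n = sdpvar(1,N);

% Option 2: keep the binvar but force the solver with the + notation,
% which bypasses YALMIP's solver-compatibility check:
ops = sdpsettings('verbose',1,'solver','+linprog');
```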

maiw...@hotmail.com

Feb 19, 2014, 3:47:52 AM
to yal...@googlegroups.com
Even if I specify n as an sdpvar instead, nothing changes?



New question:

Now I want to add a new input constraint:
0 <= u <= 100 between 5am and 5pm,
and u = 0 otherwise.

What I did was:

n_flow = binvar(1,N);         %signal for input(air flow rate), if n_flow=0 then input=0;


umax = 100;                   %max input


    CON = CON + ( 0 <= u(:,k) <= (1-n_flow(k))*0 + n_flow(k)*umax ); %  when n_flow=0, u=0; n_flow=1, 0<=u<=100;


    %CON = CON + (0 <= u(:,k) <= 100);                               %  0 <= u <= 100 always (The one I used before)

Then I shift n_flow every time step in the simulation, again before sending data as {xdata ddata ndata n_flow_data}, as before.


The problem turns out to be infeasible at the very first step of the simulation. What is the problem here? Something to do with the solver again?

maiw...@hotmail.com

Feb 19, 2014, 4:05:56 AM
to yal...@googlegroups.com
Or is there a better way to implement this new input constraint, i.e. 0 <= u <= 100 between 5am and 5pm, and u = 0 otherwise?

Thanks.

Johan Löfberg

Feb 19, 2014, 4:37:18 AM
to
When I switch to linprog, the ulog plots look very different. Note that they have different scales, though; putting them on the same scale makes them essentially indiscernible. Conclusion: linprog computes ever so slightly different solutions for the two models, due to small numerical differences.

Your second question: well, do you know that the problem is feasible? Anyhow, this debugging is pretty useless until you have installed a better LP solver. It could just as well be linprog messing things up.

Johan Löfberg

Feb 19, 2014, 4:36:41 AM
to yal...@googlegroups.com
Well, once again, there is no reason to specify it as binary and trick YALMIP into selecting a MILP solver, since it is a parameter which you can set to 0/1 however you want.

It looks correct though.

maiw...@hotmail.com

Feb 19, 2014, 9:48:30 PM
to yal...@googlegroups.com
Dear Johan,

I have installed Gurobi (academic version) and obtained the license.

But when I specify the solver:

controller = optimizer(CON, OBJ, sdpsettings('verbose',1,'solver','gurobi'), {x0 d n}, u0);


I got error:
Error using optimizer (line 194)
Failed exporting the model: Solver not found (+gurobi)

What should I do to make YALMIP find Gurobi?

Thanks!

maiw...@hotmail.com

Feb 19, 2014, 10:00:47 PM
to yal...@googlegroups.com
It is working!

I forgot to set the path in MATLAB!

The results look ok, and it seems that the programs I wrote are ok!

Thank you very much for your help, Johan!

maiw...@hotmail.com

Feb 21, 2014, 1:57:13 AM
to yal...@googlegroups.com
Dear Johan,

I modified the model and tried to add one more control input to this problem.

Since the objective has a term u(1)*u(2), I used the IPOPT solver.

But I got an error:
In an assignment A(I) = B, the number of elements in B and I must be the same.
Error in optimizer/subsref (line 105)
    x(keptvariables) = output.Primal;
Error in MPCnewer (line 89)
    [u_IC,errorcode] = controller{{x_IC d_IC n_IC}};    % get the new control signal


Please help, thanks in advance!

dk.mat
initial.mat
ModelData.mat
MPCnewer.m

maiw...@hotmail.com

Feb 21, 2014, 2:26:00 AM
to yal...@googlegroups.com
Continuing from my last post: when I use Gurobi as the solver, the problem turns out to be either infeasible or unbounded, which is not the result I expected.

Johan Löfberg

Feb 21, 2014, 2:30:11 AM
to yal...@googlegroups.com
Reproduced here. Thanks.

The problem is infeasible though:

solvesdp([CON, x_IC==x0, d_IC==d, n_IC==n], OBJ, sdpsettings('verbose',1,'solver','ipopt'));
u_IC = double(u0);

ipopt's struggle is not related to the nasty objective though; the linear problem without the objective is infeasible:

solvesdp([CON, x_IC==x0, d_IC==d, n_IC==n])


maiw...@hotmail.com

Feb 21, 2014, 2:33:42 AM
to yal...@googlegroups.com
Sorry, I did not understand. Are you saying my problem is infeasible?

Should I use IPOPT or GUROBI?

Johan Löfberg

Feb 21, 2014, 2:38:11 AM
to yal...@googlegroups.com
You have to use ipopt (or some other nonlinear solver) since the objective is nonlinear.

However, if you remove the objective, the problem is linear and you can use almost any solver. Removing the objective reveals that the problem is infeasible, so obviously the real problem will also be infeasible. The reason optimizer crashes is that ipopt finds the problem to be trivially infeasible and returns output that YALMIP is not expecting. When I debug inside optimizer, I see:
Exiting due to infeasibility: 96 lower bounds exceed the corresponding upper bounds.
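A sketch of how such an infeasibility can be narrowed down once the parameters are fixed (check is YALMIP's command for listing constraint residuals):

```matlab
% Solve the pure feasibility problem (no objective) with the
% parameters fixed to their numerical values:
solvesdp([CON, x_IC == x0, d_IC == d, n_IC == n]);

% List the slack of every constraint; negative entries point at the
% violated constraints:
check(CON)
```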

