Robust LP with an Uncertainty Set that is Parameterized by the Decision Variables


Baihong Jin

Apr 14, 2016, 12:12:08 AM4/14/16
to YALMIP

Dear Johan,


I am interested in knowing whether the kind of problem sketched in the code below can be handled by YALMIP. And if not, is there a good way to transform it into a better form so that it can be solved, either by YALMIP or by other means?


Although at first glance it may look like an ordinary robust optimization problem, the uncertainty set I am actually interested in is not -1 <= w <= 1 but rather a subset of it (the code may not express my intention clearly). To be more specific, I want the constraints to be robustly satisfied for any w in "some set" such that 0 <= S(1) <= 10, 0 <= S(2) - S(1) <= 10 and -1 <= w <= 1. Moreover, since a and b are decision variables themselves, the actual uncertainty set is a polytope whose shape is determined by the decision variables a and b.


Can you shed some light on this problem? Your help would be much appreciated!


Baihong  


a = sdpvar(1,2);
b = sdpvar(1,2);
w = sdpvar(1,2);

cost = -min(a);

for k = 1:2
    S(k) = a(k)*w(k) + b(k);
end

F = [uncertain(w), a >= 0, b >= 0, -1 <= w <= 1];
F = [F, 0 <= S(1) <= 10, 0 <= S(2) - S(1) <= 10];
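
To illustrate what "parameterized by the decision variables" means here, a small numeric sketch (in Python, since the YALMIP code above is symbolic; the specific values of a and b below are hypothetical, chosen only for illustration): for a scalar S = a*w + b, the w satisfying both |w| <= 1 and 0 <= S <= 10 form an interval whose endpoints move with (a, b).

```python
# Hedged illustration: the admissible w region depends on the decisions a, b.
# For scalar S = a*w + b with a > 0, intersect -1 <= w <= 1 with 0 <= S <= 10.
def w_interval(a, b):
    """Interval of w with -1 <= w <= 1 and 0 <= a*w + b <= 10 (assumes a > 0)."""
    lo = max(-1.0, (0.0 - b) / a)   # lower endpoint from 0 <= a*w + b
    hi = min(1.0, (10.0 - b) / a)   # upper endpoint from a*w + b <= 10
    return (lo, hi)

print(w_interval(5.0, 5.0))    # the whole box [-1, 1] survives
print(w_interval(20.0, 5.0))   # a larger a shrinks the set to [-0.25, 0.25]
```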


Johan Löfberg

Apr 14, 2016, 1:30:45 AM4/14/16
to YALMIP
No, you cannot have parameterized uncertainty sets. The model will pass YALMIP's framework, but it will not describe a parameterized uncertainty set: the constraints involving S will simply be treated as uncertain constraints, not as uncertainty descriptions.

Johan Löfberg

Apr 14, 2016, 1:49:54 AM4/14/16
to YALMIP
Sounds like you simply want to find a and b such that the constraints actually describe a non-empty set.

Hence, simply derive the robust counterpart manually:

F = [a>=0, b>=0, 0 <= -abs(a(1)) + b(1), abs(a(1)) + b(1) <= 10, 0 <= -abs(a(2)-a(1)) + b(2)-b(1), abs(a(2)-a(1)) + b(2) - b(1) <= 10];

This is a nonconvex problem, as the abs operator acts in both the <= and >= directions. Some of the abs terms will reduce to linear ones, though, since the arguments are nonnegative. The resulting MILP is trivially solved:

>> a = sdpvar(1,2);
>> b = sdpvar(1,2);
>> F = [a>=0,b>=0,0 <= -abs(a(1)) + b(1), abs(a(1)) + b(1) <= 10, 0 <= -abs(a(2)-a(1)) + b(2)-b(1), abs(a(2)-a(1)) + b(2) - b(1) <= 10];
>> optimize(F,-min(a));
Optimize a model with 14 rows, 7 columns and 28 nonzeros
Coefficient statistics:
  Matrix range    [1e+00, 1e+00]
  Objective range [1e+00, 1e+00]
  Bounds range    [0e+00, 0e+00]
  RHS range       [1e+01, 1e+01]
Presolve removed 9 rows and 2 columns
Presolve time: 0.02s
Presolved: 5 rows, 6 columns, 14 nonzeros

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0   -1.0000000e+30   3.000000e+30   1.000000e+00      0s
       2   -5.0000000e+00   0.000000e+00   0.000000e+00      0s

Solved in 2 iterations and 0.02 seconds
Optimal objective -5.000000000e+00
>> value([a' b'])

ans =

     5     5
     5    15
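
A quick numeric sanity check of the reported solution a = [5, 5], b = [5, 15] (a Python sketch, not part of the thread; it assumes, as a simplification, a single scalar w shared by both components):

```python
# Sample w over [-1, 1] and verify the robust constraints at the solution.
# Worst case of c*w + d over |w| <= 1 is d - |c| (min) and d + |c| (max).
import numpy as np

a = np.array([5.0, 5.0])    # solution from the solver log above
b = np.array([5.0, 15.0])

ws = np.linspace(-1.0, 1.0, 201)           # sample the uncertainty interval
S = a[:, None] * ws[None, :] + b[:, None]  # S(k) = a(k)*w + b(k) per sample

# Robust feasibility: 0 <= S(1) <= 10 and 0 <= S(2) - S(1) <= 10 for all w
d = S[1] - S[0]
print(S[0].min(), S[0].max())  # extremes of S(1): b(1) -/+ |a(1)|
print(d.min(), d.max())        # extremes of S(2) - S(1)
```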


Baihong Jin

Apr 14, 2016, 12:00:21 PM4/14/16
to YALMIP
Now I understand. Thank you very much for the detailed and insightful analysis!