Slow in Deriving the Robust Counterpart


XB G

Jan 24, 2019, 12:25:55 AM
to YALMIP
Dear Johan,
I'm solving a "small-scale" robust linear optimization problem with ~200 decision variables and ~20 uncertain variables w. The uncertainty set is a simple box, i.e. norm(w, inf) <= 1, uncertain(w).
Robust optimization theory (papers by Nemirovski and Shapiro) tells us that a robust LP with a box uncertainty set is equivalent to a linear program (obtained by introducing auxiliary variables), so we have an explicit equivalent representation of the robust counterpart.
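For concreteness, here is a stripped-down sketch of the kind of model I mean (the data A, W, b, c, the row count, and all names are just placeholders, not my actual problem):

    n = 200; m = 17;                      % decision variables x, uncertain variables w
    x = sdpvar(n,1);
    w = sdpvar(m,1);
    A = randn(50,n); W = randn(50,m);     % placeholder constraint data
    b = randn(50,1); c = randn(n,1);
    Constraints = [A*x + W*w <= b, norm(w,inf) <= 1, uncertain(w)];
    Objective = c'*x;
    optimize(Constraints, Objective);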

However, YALMIP tries to enumerate all 131072 vertices of the box (2^17 = 131072) and takes a long time to derive the equivalent form. Is there an option that makes YALMIP use a smarter approach?
Can I use sdpsettings('robust.lplp','duality')? This is much faster and the solution looks correct.
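That is, passing the option like this (Constraints and Objective as in the sketch above):

    ops = sdpsettings('robust.lplp','duality');   % duality-based filter instead of vertex enumeration
    optimize(Constraints, Objective, ops);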

[Attachment: Capture.PNG]


Thanks!



Johan Löfberg

Jan 24, 2019, 2:20:34 AM
to yal...@googlegroups.com
Yes, that's the whole idea of the duality filter. Enumeration is intractable for anything but trivial problems.

However, with a simple box it shouldn't try enumeration at all; it should derive an explicit model without any extra variables, which is much faster than the duality filter. This appears to be a recently introduced bug that causes it to fail to detect the box in the lifted space you get with the abs-operator model of norm(w,inf) <= 1. Write the box as -1 <= w <= 1 instead.
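That is, something along these lines, reusing the placeholder names from the sketch above (they are not specific to your actual model):

    Constraints = [A*x + W*w <= b, -1 <= w <= 1, uncertain(w)];   % explicit box instead of norm(w,inf) <= 1
    optimize(Constraints, Objective);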

