uncertain() reports some errors


Xinwei Shen

Mar 20, 2018, 3:44:51 AM
to YALMIP
 ***** Starting YALMIP robustification module. *********************
- Detected 432 uncertain variables
 
- Detected 432 independent group(s) of uncertain variables
 
- Complicating terms in w encountered. Trying to eliminate by forcing some decision variables to 0

What does "Complicating terms in w encountered" mean? What is "w"?

Xinwei Shen

Mar 20, 2018, 4:15:08 AM
to YALMIP
OK, now I see that "w" denotes the uncertain variables in the model, according to

Johan Löfberg (2012) Automatic robust convex programming, Optimization Methods and Software, 27:1, 115-129, DOI: 10.1080/10556788.2010.517532

Xinwei Shen

Mar 20, 2018, 4:33:24 AM
to YALMIP
and the MATLAB command line showed:

Error using filter_eliminatation (line 15)
Cannot get rid of nonlinear uncertainty in uncertain constraint


Error in robustify (line 154)
[F_eq_left,F_eliminate_equality] = filter_eliminatation(F_eq,w,0,ops);


Error in solverobust (line 64)
[F,h,failure] = robustify(varargin{:});

Johan Löfberg

Mar 20, 2018, 6:31:17 AM
to YALMIP
It simply means that you have a model which YALMIP cannot convert to a robust counterpart, such as x + w^3 <= 1, or just about anything that doesn't fall within the library of supported cases
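A minimal sketch of the distinction (a hypothetical model, not the poster's): an uncertain constraint that is affine in w is robustified automatically, while a nonlinear-in-w term triggers the error above.

```matlab
% Hypothetical minimal example: a robust LP the robustification module
% handles, since the uncertain constraint is affine in w.
sdpvar x w
Cons = [uncertain(w), -1 <= w <= 1, ...  % uncertainty definition
        x + w <= 1];                     % uncertain constraint, affine in w
optimize(Cons, -x);                      % maximize x over all w in [-1,1]
% Replacing the constraint with x + w^3 <= 1 makes it nonlinear in w,
% which is outside the supported cases and raises
% "Cannot get rid of nonlinear uncertainty in uncertain constraint".
```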

Johan Löfberg

Mar 20, 2018, 6:32:35 AM
to YALMIP
Had it been x + x*w^3 <= 1, which involves a cubic term that YALMIP cannot reason over in terms of robustness, it would have been eliminated by adding the conservative constraint x == 0

Xinwei Shen

Mar 21, 2018, 2:09:32 AM
to YALMIP
Dear Prof. Johan,
I guess I've found the problem in my model: I used some of "w" in an equality constraint, of the form A*w == C*x, where x are decision variables and A, C are coefficient matrices.
So I guess uncertain() in YALMIP cannot handle equality constraints by transforming them into two inequalities yet?

Johan Löfberg

Mar 21, 2018, 2:51:49 AM
to YALMIP
YALMIP can do that when you tell me what a robust solution to (e.g.) the uncertain equality w + x == 1 is, i.e., give an example value of x which satisfies it for any w in a set such as -1 <= w <= 1


Xinwei Shen

Mar 21, 2018, 3:06:38 AM
to YALMIP
Sorry, my question was ill-posed.

The next question is: now that I solve this model with uncertain "w" successfully, how do I know what the worst case is? Using value(w), I see lots of "NaN" in it.

Johan Löfberg

Mar 21, 2018, 3:33:33 AM
to YALMIP
Worst case is not computed (it is typically a much harder problem, or not even defined)

Example

2 <= x + w <= 4 for all -1 <= w <= 1

what is the worst case?
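The example above in YALMIP, as a sketch: the only robustly feasible point is x = 3, and both w = -1 and w = +1 make one side of the constraint active, so there is no single worst case to report.

```matlab
sdpvar x w
% 2 <= x + w <= 4 must hold for all w in [-1,1], which forces x = 3
optimize([uncertain(w), -1 <= w <= 1, 2 <= x + w <= 4], x);
% value(x) is 3; value(w) stays NaN because no worst case is computed
```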

Xinwei Shen

Mar 21, 2018, 4:55:40 AM
to YALMIP
Thanks for the example; however, there's no constraint like that in my model.

I guess if there are only constraints like "w <= A*x" in the model, the worst case can be read off as A*value(x)?

Johan Löfberg

Mar 21, 2018, 5:00:06 AM
to YALMIP
I'm just illustrating that the general question of the worst case is not well defined.

In your case, the worst case is trivially when w is at its upper bound, and if the constraint is active in the robust model, it will be A*x if w is a vector, or min(A*x) if w is a scalar
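Since value(w) returns NaN after solving, the binding uncertainty has to be read off manually. A sketch for the constraint w <= A*x discussed above (A and x as in the thread):

```matlab
% After optimize(...) succeeds, inspect the robust bound at the solution.
xval = value(x);
worst_w = A*xval;        % elementwise bound on w when w is a vector
% worst_w = min(A*xval); % when w is a scalar bounded by every row of A*x
```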

Xinwei Shen

Mar 21, 2018, 8:17:21 AM
to YALMIP
Perfect.
Here comes another question:
I defined my model with w_lb <= w <= w_ub, where w is the vector of uncertain variables and w_lb, w_ub are its lower and upper bounds.
It showed that

***** Starting YALMIP robustification module. *********************

 
- Detected 72 uncertain variables
 
- Detected 72 independent group(s) of uncertain variables
 
- Eliminating uncertainty using explicit maximization of inf-norm

so I guess YALMIP deals with the uncertain variables using the explicit maximization filter (Section 5.3 in your paper), where norm(w,1) <= 1 and the dual norm is the inf-norm.

However, when I tried to turn the model into an Adaptive Robust Optimization (Bertsimas, D. and M. Sim, The Price of Robustness, Operations Research, 2004, 52(1): 35-53) with the constraint

Cons=[uncertain(w), sum(abs(w-w_ref))/sum(w_ub-w_lb)<=ARO_Factor];

where w_ref is the known vector of expected parameters and ARO_Factor is a parameter to adjust the robustness.

It seems that YALMIP cannot handle it any more. Why?


Johan Löfberg

Mar 21, 2018, 8:25:47 AM
to YALMIP
By using sum(abs(...)) you are hiding the simple structure; hence you should use w = wref + z, uncertain(z), norm(z,1) <= sum(wub-wlb)*ARO
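Spelled out as a sketch (w_ref, w_lb, w_ub, ARO_Factor assumed given, with w used in the rest of the model):

```matlab
z = sdpvar(length(w_ref),1);
w = w_ref + z;                 % substitute this w throughout the model
Cons = [uncertain(z), norm(z,1) <= sum(w_ub - w_lb)*ARO_Factor];
% z, not w, is declared uncertain, so the 1-norm budget is stated
% directly on the uncertainty and YALMIP can exploit the structure.
```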


Johan Löfberg

Mar 21, 2018, 8:39:40 AM
to YALMIP
BTW, what do you mean by "does not work"? It works here for a simple test, but has to resort to a general duality-based scheme

Johan Löfberg

Mar 21, 2018, 8:43:45 AM
to yal...@googlegroups.com
Note, in YALMIP, variables are either decision variables or uncertain variables, and constraints are either definitions of uncertainty (involving only uncertain variables) or uncertain constraints (involving both). Variables defining the size of the uncertainty thus have to be dealt with carefully, since you would otherwise get a mixed constraint, which is interpreted as an uncertain constraint

In your case, if you want ARO_Factor to scale the possible uncertainties, you would have to use w = wref + ARO*z, uncertain(z), norm(z,1) <= sum(wub-wlb)
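As a sketch, assuming the same names as before, the scaling moves from the uncertainty definition into the model:

```matlab
z = sdpvar(length(w_ref),1);
w = w_ref + ARO_Factor*z;      % ARO_Factor now multiplies the shift
Cons = [uncertain(z), norm(z,1) <= sum(w_ub - w_lb)];
% The uncertainty definition involves only z, so it is not misread
% as an uncertain (mixed) constraint.
```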


Xinwei Shen

Mar 22, 2018, 12:41:47 AM
to YALMIP
Prof.,

I did as you said:

 Cons_ARO_Definition=[uncertain(w), norm(w,1)<=sum(w_ub-w_lb)*ARO_Factor, L==w_ref+w];

but there is still an error.

Johan Löfberg

Mar 22, 2018, 1:22:44 AM
to yal...@googlegroups.com
L=w_ref+w
...

 
Cons_ARO_Definition=[uncertain(w), norm(w,1)<=sum(w_ub-w_lb)*ARO_Factor];

and, as I said, if ARO_Factor is a decision variable scaling the uncertainty, you must move it too

Xinwei Shen

Apr 29, 2018, 3:40:40 AM
to YALMIP
Sir, it seems that there is another problem: uncertain variables cannot be constrained by two constraints at the same time, right?

 e.g. 
      Cons_ARO_Definition2=[uncertain(w_L_e),uncertain(w_L_h),uncertain(w_L_c),...
            norm(w_L_e,1)<=Real_ARO_Num,...
            norm(w_L_h,1)<=Real_ARO_Num,...
            norm(w_L_c,1)<=Real_ARO_Num];
        Cons_ARO_Definition=[uncertain(w_L_e),uncertain(w_L_h),uncertain(w_L_c),...
            abs(w_L_e)<=1;abs(w_L_c)<=1;abs(w_L_h)<=1];

Cons_ARO_Definition2 won't take effect, but Cons_ARO_Definition does.

Johan Löfberg

Apr 29, 2018, 7:53:00 AM
to YALMIP
Why are you doing uncertain(w_L_e) etc. twice? It's completely redundant. Your code is equivalent to

[uncertain(w_L_e),uncertain(w_L_h),uncertain(w_L_c),...
           norm(w_L_e,1)<=Real_ARO_Num,...
           norm(w_L_h,1)<=Real_ARO_Num,...
           norm(w_L_c,1)<=Real_ARO_Num,...
           abs(w_L_e)<=1;abs(w_L_c)<=1;abs(w_L_h)<=1];

Depending on Real_ARO_Num you will get different sets, as some of the constraints can be redundant. For instance, if it is <= 1 it will definitely lead to abs(w_L_e) <= 1 being redundant, and if it is larger than n or something like that, the norm constraints will be redundant. For values in between you will get a box with cut-off corners

x = sdpvar(2,1);
plot([abs(x)<=1, norm(x,1)<=1.5], x)

