Robust Lyapunov function for polynomial systems


Ubaldo Tiberi

Dec 11, 2015, 1:16:10 PM
to YALMIP
Hello!

First of all, congratulations on the tool! I find it extremely powerful and user-friendly!

I am currently interested in robust and constrained nonlinear control. After having read the nice example in Sec. 5.3 of the paper here: http://users.isy.liu.se/johanl/yalmip/pub/documents/robust.pdf I am trying to run an example by myself.

Basically, I want to see if there exists a parameter-independent Lyapunov function for a system described in the code below.

% yalmip
sdpvar x1 x2 x3 Delta

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2; 
uncertain_system=[u_1;u_2;Delta*x1*x2];

x = [x1;x2;x3];
[V,c] = polynomial(x,2);
dVdt = jacobian(V,x)*uncertain_system;

r =  x'*x;
C = [uncertain(Delta),1<=Delta<=10];
C = [sos(V-eps*r),sos(-dVdt-eps*r)];
[sol,v,Q] = solvesos(C,[],[],c);

Note that for Delta = 1 it is possible to find a Lyapunov function for the system above in closed form as per  http://www.sciencedirect.com/science/article/pii/0167691189900807

As you can see, I want to test the robustness with respect to Delta, and I set up the SOS problem described above.
However, I have some questions:

1. Is it OK to use the constraint C = [sos(V-eps*r),sos(-dVdt-eps*r)] to ensure the positivity/negativity of the Lyapunov function/its derivative?
I have noticed that by increasing eps (e.g. try replacing eps with 1e8*eps) I hit a threshold above which the solver always runs into numerical problems, or, for even larger values, the primal becomes infeasible.
On the other hand, I performed a number of simulations with different initial conditions and very large values of Delta, and the system turned out to be stable every time. I suspect that the system is globally asymptotically stable for all positive Delta, but I want to use a numerical tool to see if that is actually true. If I can use the constraint sos(V-eps*r), then it is done.

2. How should I rewrite the code if I want to find a parameter-dependent Lyapunov function instead?

3. If I want to find bounds for the Lyapunov function V, i.e. I want e.g. to verify whether there exist constants ci such that c1||x||+c2||x||^2 <= V <= c3||x||+c4||x||^2, I think that I should modify the constraints so that C = [sos(V-eps*r-c1*r-c2*r^2),sos(-V+eps*r-c3*r-c4*r^2),sos(-dVdt-eps*r)], where the ci are decision variables, right? But what if I run into numerical problems again? Is there a way to rescale the problem?

4. How do I find the tightest bounds ci from question 3?
Namely, how do I write the code for finding the maximum c1, c2 and the minimum c3, c4 such that c1||x||+c2||x||^2 <= V <= c3||x||+c4||x||^2 (provided that such ci exist)? I understand that I should write an optimization problem where the ci are decision variables, but how do I translate it into YALMIP?

Many thanks! :)
/Ubaldo

Johan Löfberg

Dec 12, 2015, 4:28:04 AM
to YALMIP
1. Only reasonable if eps is a number large enough not to drown in the numerical tolerance of the solver, i.e. 1e-5 or something like that, or larger. Being stable does not mean that your problem is feasible. Do you know that there exists a quadratic Lyapunov function which decays quadratically, as you impose?

2. Several alternatives. Use parameter-dependent functions, such as x'(P + delta*P1)x, and similar functions for the decay requirements. Alternatively, perform sos in [x;delta], and apply Positivstellensatz or similar to constrain the analysis to the region of interest in delta.
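A minimal sketch of the first alternative, assuming the system from the original post; the matrices P0, P1 and the margin 1e-4 are illustrative, and the handling of the Delta interval (e.g. via Positivstellensatz multipliers) is omitted:

```matlab
% Hypothetical parameter-dependent candidate V(x,Delta) = x'*(P0+Delta*P1)*x
sdpvar x1 x2 x3 Delta
x  = [x1;x2;x3];
P0 = sdpvar(3,3);                 % symmetric decision matrices by default
P1 = sdpvar(3,3);
V  = x'*(P0 + Delta*P1)*x;
f  = [-x1*x2-2*x2*x3-x1-x3; 2*x1*x2*x3+3*x3^2-x2; Delta*x1*x2];
dVdt = jacobian(V,x)*f;           % Delta is constant in time
r  = x'*x;
% the sos decomposition must then be performed in [x;Delta], with the
% region 1 <= Delta <= 10 cut out by Positivstellensatz-type multipliers
C = [sos(V - 1e-4*r), sos(-dVdt - 1e-4*r)];
```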

3. ? You have norms; those cannot be used here. Perhaps you mean you want quartic terms. But generally, of course you can have general polynomials. It makes no sense to constrain V from above though. If you parameterize it as a quartic, you know it is bounded from above by definition.

4. Well, minimize abs(c4-c2)+abs(c3-c1), although I don't see the relevance of solving this problem (assuming you don't mean ||x||, as that cannot be used in SOS).
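As a hedged sketch of how such an objective could be passed to solvesos (assuming C collects the SOS constraints, c holds the coefficients of V, and c1..c4 are sdpvar scalars entering the constraints linearly):

```matlab
% abs() on parametric decision variables is handled by YALMIP via a
% convex (epigraph) reformulation, so the resulting SDP stays convex
obj = abs(c4-c2) + abs(c3-c1);
[sol,v,Q] = solvesos(C, obj, [], [c;c1;c2;c3;c4]);
```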




Ubaldo Tiberi

Dec 12, 2015, 10:09:06 AM
to YALMIP

Thanks for the answers. I think things are getting interesting. Below I also report some thoughts about the problem; perhaps they may be useful for other users (or perhaps provide input for future versions of YALMIP) :)


1. Ok, as I expected. However, the condition that I imposed is V \le eps*||x||, and it can be satisfied only if the system is exponentially stable. Unfortunately, I picked a system that is not (it is asymptotically stable, not exponentially stable). Therefore, the SOS problem V - eps*||x|| \ge 0 (as I formulated it) cannot admit a solution, not even for Delta = 1, since the system is NOT exponentially stable. Sorry, that was my bad.

Unfortunately, the existence of positive c1 and c2 such that c1||x||^p \le V \le c2||x||^p holds only for exponentially stable systems, and my system is asymptotically stable (therefore there exist K-class functions \alpha_1, \alpha_2 such that \alpha_1(||x||) \le V(x) \le \alpha_2(||x||), but the \alpha_i are not of the form c||x||^p, otherwise I would have exponential stability; I'll follow up on this part in point 3).

See page 162 of "Nonlinear Systems", 3rd Edition, by H. Khalil.


2. OK


3. In nonlinear control it is common to find upper and lower bounds of the Lyapunov function that are functions of the state norm. This allows one to compute a number of things (overshoot, ultimate bounds, regions of attraction, bisimulations, etc.). If the bounds are NOT in terms of norms, then we are stuck, since we cannot proceed with standard Lyapunov-based techniques to find system properties.


Given that, in order to try to find the functions \alpha_1 and \alpha_2 described above, my idea was to find V, c1 and c2 such that V - c1(x1^2+x2^2+x3^2) - c2(x1^2+x2^2+x3^2)^2 \ge 0. Then, provided that my V is quadratic, I would set \alpha_1(||x||) = min{c1||x||, c2||x||^2} (or something like that), which is a K-class function.

So, given that V is a quadratic positive polynomial with a unique zero in x = 0, depending on where the state is at a certain point in time, the function \alpha_1 would switch between c1||x|| and c2||x||^2, ensuring a lower bound for V.

For the upper bound I would do the same. Recalling that a+b \le max{2a,2b} for any a,b > 0, I would set \alpha_2(||x||) = max{2*c3||x||, 2*c4||x||^2}, which is, again, K-class.


Also, I am not sure whether this attempt to find the bounds \alpha_i() of the Lyapunov function makes sense, whether it is correct, and whether it is possible to use numerical tools to deal with it. Any input is greatly appreciated.

On the other hand, if it were possible to deal with those problems numerically, I am sure that a lot of people would be very happy! :)


As a second attempt, I might try to "cheat", in the sense that I can try to solve the problem with the conditions V(x) \ge h(x), \dot V(x) \le -h(x), where h(x) is SOS. Then, IF h() is also a norm, I can take V(x) \le ||x||_h, where ||x||_h is the norm induced by h(). Then, I would have V \le ||x||_h and \dot V \le -||x||_h.

However, in general an SOS does not induce any norm (think for example of an SOS with a constant term), but exploiting the properties of V (positivity and unique zero at x=0), it may happen that h() induces a norm… but I honestly don't know.

In addition, even if h() induced a norm, I am not sure whether the exponential stability converse theorem holds for ANY norm, including a possible || ||_h norm. If so, then this method would not apply to asymptotically stable systems. :(


On the other hand, if h() induced a norm, we would have something good for exponentially stable systems anyway. For instance, we might be able to find less conservative bounds for exponentially stable systems compared to the classic "ball", which often leads to extremely conservative results.

In fact, whenever one uses Lyapunov arguments for finding ultimate bounds, regions of attraction, etc., they should be understood in the sense of || ||_h (or || ||_g) and not in the sense of the Euclidean norm. But that would be very good anyway (for example, in the case of finding an ultimate bound, instead of concluding that the trajectories are ultimately bounded in a region ||x|| < r, one may conclude that they are ultimately bounded in a region ||x||_h < r. So far, so good).

Unfortunately, a number of other problems arise with this second method. Let's assume that h() induces a norm. First, the SOS h() for V and \dot V should be the same. If we find two SOS polynomials, say h() and g(), such that V \le h(x) and \dot V \le g(x), one must somehow relate the norms || ||_h and || ||_g, which, at first sight, is not simple.


4. Can I use abs() in the objective function when solving an SOS problem? The importance of finding such bounds is that the tighter the bound on V, the less conservative all the results obtained for that particular nonlinear system will be (bounds on trajectories, overshoot, bisimulations, etc.).


Here, I have considered the global case. However, I would also be happy to get "local" results, i.e. to find bounds that hold locally.


Sorry for the long message, but I just wanted to clarify my problem better (and perhaps some users may find it useful, provided that my reasoning is correct).

As a reference, some of my claims can also be found in Chapter 4 of the book "Nonlinear Systems" by H. Khalil.

Ubaldo Tiberi

Dec 12, 2015, 11:59:21 AM
to YALMIP
...and, I forgot to mention, whether numerical methods (i.e. YALMIP) can help in dealing with these kinds of problems :)

Johan Löfberg

Dec 12, 2015, 12:08:56 PM
to YALMIP
Your notation is strange. Your code says V >= eps*||x||^2, not ||x||.

x'*x is ||x||^2

Yes, you can optimize general expressions in the decision variables (c etc.) as long as it boils down to something reasonable (SDP representable etc.). The sos stuff is only allowed to involve polynomial expressions in the dependents though (i.e. x).

Ubaldo Tiberi

Dec 13, 2015, 3:45:25 AM
to YALMIP

Right. I forgot to add some square terms - but it is quite hard to avoid typos here.

I understand that everything may work as long as the problem can be converted into a reasonable SDP. 


Just for correctness, let me reformulate the problem. Here is an example.

If I found a solution for

V \ge 0
-dotV \ge 0
V^2 - c1||x||^2 - c2||x||^4 \ge 0
-V^2 + c3||x||^2 - c4||x||^4 \ge 0
-dotV^2 - c5||x||^2 \ge 0
ci > 0 (which I can rewrite as ci*pi(x) > 0, where the pi(x) are SOS)

then, considering that for a,b > 0 I have min(a,b) \le a+b \le max(2a,2b), I would have found the bounds

min(sqrt(c1)||x||, sqrt(c2)||x||^2) \le V \le min(sqrt(c3)||x||, sqrt(c4)||x||^2)
dotV \le -sqrt(c5)||x||

and the solution is valid as long as the ci are larger than the solver precision (by the way, how do I see and possibly change the solver precision?). Is that correct?

Johan Löfberg

Dec 13, 2015, 3:50:12 AM
to YALMIP
Why would you write ci>0 as ci*pi(x)>0, where the pi(x) are SOS? ci>0 is written as ci>=eps. You seem to miss the fact that the parametric decision variables are something completely different from the expressions in x, which have to be massaged into SOS representations.


Johan Löfberg

Dec 13, 2015, 3:52:11 AM
to YALMIP
and you cannot work with V^2, as V^2 will be quadratic in the decision variables, and thus you will not obtain a convex SDP.

Ubaldo Tiberi

Dec 13, 2015, 4:58:18 AM
to YALMIP
Right, the ci are decision variables.
Regarding V^2: I was not aware that you cannot have quadratic terms in the decision variables in an SDP (sorry, I know very little about SDPs; my interest is in nonlinear control).
Very good to know. 

For this example, the best I can do is to try to solve (even a local result is OK)

V-c1||x||^2-c2||x||^4 \ge 0

-dotV \ge 0


and then, once I get the numerical solution for dotV (say SOL_dotV), I can verify whether there is a c5 such that -SOL_dotV^2-c5^2||x||^2 \le 0 (which would lead to SOL_dotV \le -c5||x||).

The only drawback I see here is that there may be sets of points y \neq 0 for which SOL_dotV(y) = 0... in that case one should resort to some invariance principle... well, things are getting very messy... perhaps I should find an easier example.

Johan Löfberg

Dec 13, 2015, 7:49:23 AM
to YALMIP
You can have an arbitrary parameterization in the decision variables, but you will typically not be able to solve the resulting semidefinite program.

Ubaldo Tiberi

unread,
Dec 14, 2015, 4:08:04 AM12/14/15
to YALMIP
Ok. I'll definitely read your paper. Thanks.

However, just for info: this morning I am back at work, playing around with the parameters in a loose setting (Delta = 1) and searching for dotV \le 0 (which means that points x \neq 0 with dotV = 0 are allowed).

sdpvar x1 x2 x3 Delta

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2; 
uncertain_system=[u_1;u_2;Delta*x1*x2];

x = [x1;x2;x3];
[V,c] = polynomial(x,4);
dVdt = jacobian(V,x)*uncertain_system;

r =  x'*x;
a = 1e-5;
C = [uncertain(Delta),1<=Delta<=1];
C = [sos(V-a*r),sos(-dVdt)];
[sol,v,Q] = solvesos(C,[],[],c);

Here is the output that I get:

-------------------------------------------------------------------------
YALMIP SOS module started...
-------------------------------------------------------------------------
Detected 35 parametric variables and 4 independent variables.
Detected 0 linear inequalities, 0 equality constraints and 0 LMIs.
Using kernel representation (options.sos.model=1).
Initially 10 monomials in R^3
Newton polytope (0 LPs).........Keeping 10 monomials (0sec)
Finding symmetries..............Found no symmetries (0sec)
Initially 28 monomials in R^4
Newton polytope (17 LPs)........Keeping 10 monomials (0.4524sec)
Finding symmetries..............Found no symmetries (0sec)
 
Warning: Rank deficient, rank = 100, tol =  5.756301e-12. 
> In sedumi at 268
  In callsedumi at 41
  In solvesdp at 338
  In solvesos at 137 
The coefficient matrix is not full row rank, numerical problems may occur.
SeDuMi 1.32 by AdvOL, 2005-2008 and Jos F. Sturm, 1998-2003.
Alg = 2: xz-corrector, theta = 0.250, beta = 0.500
Put 35 free variables in a quadratic cone
eqs m = 114, order n = 23, dim = 237, blocks = 4
nnz(A) = 292 + 0, nnz(ADA) = 12996, nnz(L) = 6555
 it :     b*y       gap    delta  rate   t/tP*  t/tD*   feas cg cg  prec
  0 :            1.48E+00 0.000
  1 :   4.10E-06 5.20E-01 0.000 0.3520 0.9000 0.9000   1.00  1  1  1.7E+00
  2 :   3.73E-06 1.37E-01 0.000 0.2645 0.9000 0.9000   1.00  1  1  4.5E-01
  3 :   2.80E-06 3.19E-02 0.000 0.2320 0.9000 0.9000   1.00  1  1  1.0E-01
  4 :   1.75E-06 1.35E-02 0.000 0.4232 0.9000 0.9000   1.00  1  1  4.4E-02
  5 :   9.16E-07 4.42E-03 0.000 0.3275 0.9000 0.9000   1.00  1  1  1.4E-02
  6 :   4.18E-07 1.02E-03 0.000 0.2298 0.9000 0.9000   1.00  1  1  3.3E-03
  7 :   2.22E-07 2.44E-04 0.000 0.2402 0.9000 0.9000   1.00  1  1  8.0E-04
  8 :   1.34E-07 7.09E-05 0.000 0.2906 0.9000 0.9000   1.00  1  1  2.3E-04
  9 :   8.20E-08 2.42E-05 0.000 0.3419 0.9000 0.9000   1.00  1  1  8.0E-05
 10 :   4.56E-08 7.24E-06 0.000 0.2985 0.9000 0.9000   1.00  1  2  2.4E-05
 11 :   2.66E-08 2.21E-06 0.000 0.3048 0.9000 0.9000   0.99  1  2  7.3E-06
 12 :   1.48E-08 6.55E-07 0.000 0.2970 0.9000 0.9000   0.98  1  3  2.2E-06
 13 :   8.41E-09 1.98E-07 0.000 0.3023 0.9000 0.9000   0.97  2 18  6.8E-07
 14 :   4.92E-09 6.33E-08 0.000 0.3196 0.9000 0.9000   0.94 25 51  2.2E-07
 15 :   2.96E-09 2.37E-08 0.000 0.3742 0.9000 0.9000   0.90 51 26  8.3E-08
 16 :   1.87E-09 8.44E-09 0.000 0.3564 0.9000 0.9000   0.84 26 51  3.2E-08
Run into numerical problems.

iter seconds digits       c*x               b*y
 16      0.6   Inf  0.0000000000e+00  1.8732570082e-09
|Ax-b| =   2.6e-08, [Ay-c]_+ =   3.9E-09, |x|=  4.5e+00, |y|=  1.0e+01

Detailed timing (sec)
   Pre          IPM          Post
1.700E-02    3.740E-01    2.002E-03    
Max-norms: ||b||=4.111336e-06, ||c|| = 0,
Cholesky |add|=0, |skip| = 42, ||L.L|| = 15107.

I have tried to play around with different values of a, with the degree of the Lyapunov function, and with the positivity constraints (e.g. by using constraints of the form C = [sos(V-a*r^2),sos(-dVdt)] or by using a nonlinear parameterization). What I get is either numerical errors or primal infeasibility.

However, regardless of the result, I think it is interesting to see that despite the system being AS (with Lyapunov function V(x) = 0.5*(x1+x3)^2+0.5*(x2-x3^2)^2+x3^2), the use of numerical methods for finding a Lyapunov function runs into problems. Yes, of course it may be my fault that I am not formulating the problem properly (even though I trivially imposed V - a*r \ge 0 and -dotV \ge 0), or, more likely, it may be somewhat expected (the existence of a Lyapunov function does not mean that one can ALWAYS find it with numerical tools, not even if the system is polynomial).

On the other hand, given the Lyapunov function properties of exponentially stable systems, I suspect that numerical methods have a better chance of finding a Lyapunov function. That would be an additional plus for exponentially stable systems. Just out of personal curiosity, I'll try some exponentially stable systems and check how often I succeed.

Regarding the robustness problem and the search for the \alpha_i functions, I think I will definitely take another example :)

Many thanks for the support!

Johan Löfberg

Dec 14, 2015, 9:51:22 AM
to YALMIP
You haven't concatenated correctly

C = [uncertain(Delta),1<=Delta<=1];
C = [C,sos(V-a*r),sos(-dVdt)];


Note, the problem you've posed appears homogeneous in c, so you might just as well use a significant constant a (e.g. a = 1) to ensure that things don't drown in numerical tolerances.

Ubaldo Tiberi

Dec 15, 2015, 7:29:53 AM
to YALMIP
Right. I hadn't concatenated that properly.
Regarding the choice of a, it actually happens that I run into numerical problems for large values of a. It seems that for a > 1e-2 I run into numerical problems, whereas for smaller values of a (e.g. 1e-3) I... don't know how I should interpret the solution (but here we go off topic, and although I googled and read the SeDuMi manual, I haven't found an answer).

In my understanding, a problem is solved when "feas = 1" and infeasible when "feas = -1", but in my case I have 0.77 at the 19th iteration.
The code and the Matlab output follow.

sdpvar x1 x2 x3 Delta

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2; 
uncertain_system=[u_1;u_2;Delta*x1*x2];

x = [x1;x2;x3];
[V,c] = polynomial(x,4);
dVdt = jacobian(V,x)*uncertain_system;

r =  x'*x;
a = 1e-3;
C = [uncertain(Delta),1<=Delta<=1];
C = [C;sos(V-a*r),sos(-dVdt)];
[sol,v,Q] = solvesos(C,[],[],c);

-------------------------------------------------------------------------
YALMIP SOS module started...
-------------------------------------------------------------------------
Detected 36 parametric variables and 3 independent variables.
Detected 0 linear inequalities, 0 equality constraints and 0 LMIs.
Using image representation (options.sos.model=2). Nonlinear parameterization found
Initially 10 monomials in R^3
Newton polytope (0 LPs).........Keeping 10 monomials (0sec)
Finding symmetries..............Found no symmetries (0sec)
Initially 18 monomials in R^3
Newton polytope (7 LPs).........Keeping 10 monomials (0.2652sec)
Finding symmetries..............Found no symmetries (0sec)
 
***** Starting YALMIP robustification module. *********************
 - Detected 1 uncertain variables
 - Detected 1 independent group(s) of uncertain variables
 - Complicating terms in w encountered. Trying to eliminate by forcing some decision variables to 0
 - Enumerated 2 vertices
 - Reduced to 1 unique vertices
***** Derivation of robust counterpart done ***********************
SeDuMi 1.32 by AdvOL, 2005-2008 and Jos F. Sturm, 1998-2003.
Alg = 2: xz-corrector, theta = 0.250, beta = 0.500
Put 28 free variables in a quadratic cone
eqs m = 72, order n = 23, dim = 230, blocks = 4
nnz(A) = 362 + 0, nnz(ADA) = 4470, nnz(L) = 2271
 it :     b*y       gap    delta  rate   t/tP*  t/tD*   feas cg cg  prec
  0 :            1.51E+00 0.000
  1 :   0.00E+00 4.70E-01 0.000 0.3115 0.9000 0.9000   1.00  1  1  1.5E+00
  2 :   0.00E+00 1.77E-01 0.000 0.3774 0.9000 0.9000   1.00  1  1  5.7E-01
  3 :   0.00E+00 4.05E-02 0.000 0.2282 0.9000 0.9000   1.00  1  1  1.3E-01
  4 :   0.00E+00 9.04E-03 0.000 0.2234 0.9000 0.9000   1.00  1  1  2.9E-02
  5 :   0.00E+00 2.41E-03 0.000 0.2663 0.9000 0.9000   1.00  1  1  7.8E-03
  6 :   0.00E+00 7.51E-04 0.000 0.3120 0.9000 0.9000   1.00  1  1  2.4E-03
  7 :   0.00E+00 2.81E-04 0.000 0.3744 0.9000 0.9000   1.00  1  1  9.2E-04
  8 :   0.00E+00 8.88E-05 0.000 0.3158 0.9000 0.9000   1.00  1  1  2.9E-04
  9 :   0.00E+00 2.63E-05 0.000 0.2960 0.9000 0.9000   0.99  1  1  8.6E-05
 10 :   0.00E+00 7.82E-06 0.000 0.2974 0.9000 0.9000   0.99  1  1  2.6E-05
 11 :   0.00E+00 2.39E-06 0.000 0.3055 0.9000 0.9000   0.99  1  2  7.9E-06
 12 :   0.00E+00 6.95E-07 0.000 0.2911 0.9000 0.9000   0.98  1  2  2.3E-06
 13 :   0.00E+00 2.10E-07 0.000 0.3023 0.9000 0.9000   0.98  1  2  7.1E-07
 14 :   0.00E+00 6.27E-08 0.000 0.2983 0.9000 0.9000   0.97  2  3  2.2E-07
 15 :   0.00E+00 1.93E-08 0.000 0.3087 0.9000 0.9000   0.95  3  3  6.9E-08
 16 :   0.00E+00 5.98E-09 0.000 0.3090 0.9000 0.9000   0.92  3  5  2.2E-08
 17 :   0.00E+00 1.85E-09 0.000 0.3100 0.9000 0.9000   0.89  9 10  7.4E-09
 18 :   0.00E+00 5.63E-10 0.000 0.3038 0.9000 0.9000   0.84 11 14  2.5E-09
 19 :   0.00E+00 1.72E-10 0.000 0.3057 0.9000 0.9000   0.77  6  7  8.6E-10

iter seconds digits       c*x               b*y
 19      0.6   Inf -1.2333550528e-10  0.0000000000e+00
|Ax-b| =   5.9e-10, [Ay-c]_+ =   1.3E-10, |x|=  8.1e+00, |y|=  1.6e+01

Detailed timing (sec)
   Pre          IPM          Post
4.003E-03    2.450E-01    9.958E-04    
Max-norms: ||b||=0, ||c|| = 3.289069e-04,
Cholesky |add|=1, |skip| = 1, ||L.L|| = 2.30835e+06.
 
-> Only partial decomposition is returned (since you have uncertain data).




Johan Löfberg

Dec 15, 2015, 7:42:51 AM
to YALMIP
The sos module currently doesn't support situations where you have both parametric decision variables (c) and uncertain variables (Delta) and nonlinear products of these. You'll have to do the sos decomposition in [x;Delta], or do the robust stuff manually (trivial if you simply have one interval)
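A minimal sketch of the manual route for a single interval, under the assumption (which holds for the code in this thread, since V does not depend on Delta) that dVdt is affine in Delta, so imposing the SOS constraint at the two endpoints suffices:

```matlab
% replace() substitutes a numerical value for Delta; since -dVdt is
% affine in Delta, sos at both endpoints implies sos on the whole
% interval (any intermediate Delta gives a convex combination of the two)
dVdt_lo = replace(dVdt, Delta, 1);
dVdt_hi = replace(dVdt, Delta, 10);
C = [sos(V - a*r), sos(-dVdt_lo), sos(-dVdt_hi)];
[sol,v,Q] = solvesos(C, [], [], c);
```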

Johan Löfberg

Dec 15, 2015, 8:08:08 AM
to YALMIP
The following works

C = [sos(V-a*r),sos(-dVdt)];
C = compilesos(C,[],sdpsettings('sos.model',2),[c;Delta]);
C = C(~is(C,'equality'));
C = robustify([C, uncertain(Delta),1 <= Delta <= 1]);
optimize(C)


Ubaldo Tiberi

Dec 15, 2015, 8:28:52 AM
to YALMIP
Great! Many thanks. 

Ubaldo Tiberi

Jan 8, 2016, 3:04:03 AM
to YALMIP
Hello again.

I went back to playing around with my initial problem, and I exploited the piece of code you kindly provided. In fact, I think I have managed to find all the bounds for the Lyapunov function that I was looking for as per the initial post, but I am not 100% sure. Anyway, here is the code I developed:

% Yalmip

sdpvar x1 x2 x3 Delta

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2; 
uncertain_system=[u_1;u_2;Delta*x1*x2];

x = [x1;x2;x3];
[V,c] = polynomial(x,2);
dVdt = jacobian(V,x)*uncertain_system;

r = x'*x;
C = [sos(V-r),sos(-dVdt-min(r,r^2)),sos(-V+6.5*r)];
C = compilesos(C,[],sdpsettings('sos.model',2),[c;Delta]);
C = C(~is(C,'equality'));
C = [C,c(1)==0];
C = robustify([C, uncertain(Delta),1 <= Delta <= 220]);
optimize(C)

According to that, I have found a Lyapunov function such that r \le V(x) \le 6.5r and \dot V(x) \le -min(r,r^2), where r = ||x||^2. These inequalities hold for 1 \le Delta \le 220. Note that I have to set c(1)==0, otherwise I would lose the condition V(0) = 0 (for x=0 I would have V(0) = c(1)).
Next, I have to find a bound for norm(jacobian(V,x)). Given that it is not possible to work directly with norms, I proceeded as follows.

%SolV = clean(value(c)'*monolist(x,2),1e-3);
SolV = value(c)'*monolist(x,2); % get the numerical Lyapunov polynomial
C = -jacobian(SolV,x)*jacobian(SolV,x)'+r >= 0;
optimize(C)

Now, I have some questions:

1. Is the constraint c(1)==0 placed correctly? 
2. I used the "min" function in the constraint definition. Am I allowed to do that? Can I trust the optimization result when using "min"?
3. What is the line C = C(~is(C,'equality')) used for?
4. Is my overall code correct/does it make sense? For both optimization problems I get the message info: 'Successfully solved (SeDuMi-1.3)', so I assume that the results are also correct, is that right?

Many thanks. 

Johan Löfberg

Jan 8, 2016, 3:52:08 AM
to YALMIP
I wouldn't trust anything that comes out of this. With the min operator, you are creating a very nasty non-convex combinatorial object, and I am surprised YALMIP lets it through (will investigate). YALMIP is fooled (it doesn't expect this stuff in an SOS program) and returns a garbage model.

Since the problem is infeasible in a much easier case, it immediately tells us that you cannot trust these results:
% Yalmip
yalmip('clear')
sdpvar x1 x2 x3
Delta = 100;

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2;
uncertain_system = [u_1;u_2;Delta*x1*x2];

x = [x1;x2;x3];
[V,c] = polynomial(x,2);
dVdt = jacobian(V,x)*uncertain_system;

r = x'*x;
C = [sos(V-r),sos(-dVdt)];
solvesos(C,[],[],c)



Johan Löfberg

Jan 8, 2016, 4:04:24 AM
to YALMIP
The equalities should not have been removed, I see now. It was a hack I thought would let us get around the limitation of not supporting nonlinear parameterization in the uncertainty of an SOS program.

To model this using SOS, you have to perform the SOS decomposition in (x,delta), and thus handle the delta region using Positivstellensatz.

Ubaldo Tiberi

Jan 8, 2016, 8:38:23 AM
to YALMIP
Ok! I have a small update. In order to maximize the chance of getting a feasible solution, I tried to find a local parameter-dependent Lyapunov function.
I actually found such a Lyapunov function (or at least I think so, if the problem setup makes sense), together with a reasonable lower bound for it, i.e. a1*r \le V(x,Delta) with a reasonable value for a1:

clear all
sdpvar x1 x2 x3 Delta a1

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2; 
uncertain_system=[u_1;u_2;Delta*x1*x2];

% Local analysis
% Uncertainty domain
Delta_l = 1;
Delta_u = 220;
uncertain_domain = (Delta-Delta_l)*(Delta_u-Delta);

% State-space domain
x = [x1;x2;x3];
r = x'*x;
ss_domain = 100-r;

% Parameter-dependent Lyapunov function
[V,c] = polynomial([x;Delta],4);
dVdt = jacobian(V,x)*uncertain_system;

% Aux SOS
[s1,c1] = polynomial([x;Delta],2);
[s2,c2] = polynomial([x;Delta],2);
[s3,c3] = polynomial([x;Delta],2);
[s4,c4] = polynomial([x;Delta],2);

% Optimization problem
C = [sos(s1),sos(s2),sos(s3),sos(s4)];
C = [C,sos(V-a1*r-s1*ss_domain-s2*uncertain_domain)];
C = [C;sos(-dVdt-s3*ss_domain-s4*uncertain_domain)];
C = [C;c(1)==0;a1>=1e-3]; % c(1) == 0 to ensure V(0)=0, and a1 large enough not to drop below the solver precision
C = compilesos(C,-a1,sdpsettings('sos.model',2),[a1;c;c1;c2;c3;c4]);
optimize(C)

I can also find a Lyapunov function and an upper bound for it, i.e. V(x,Delta) \le a2*r, with a reasonable value of a2:

clear all
sdpvar x1 x2 x3 Delta a2

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2; 
uncertain_system=[u_1;u_2;Delta*x1*x2];

% Local analysis
% Uncertainty domain
Delta_l = 1;
Delta_u = 220;
uncertain_domain = (Delta-Delta_l)*(Delta_u-Delta);

% State-space domain
x = [x1;x2;x3];
r = x'*x;
ss_domain = 100-r;

% Parameter-dependent Lyapunov function
[V,c] = polynomial([x;Delta],4);
dVdt = jacobian(V,x)*uncertain_system;

% Aux SOS
[s1,c1] = polynomial([x;Delta],2);
[s2,c2] = polynomial([x;Delta],2);
[s3,c3] = polynomial([x;Delta],2);
[s4,c4] = polynomial([x;Delta],2);

% Optimization problem
C = [sos(s1),sos(s2),sos(s3),sos(s4)];
C = [C;sos(-V+a2*r-s1*ss_domain-s2*uncertain_domain)];
C = [C;sos(-dVdt-s3*ss_domain-s4*uncertain_domain)];
C = [C;c(1)==0;a2>=0];
C = compilesos(C,a2,sdpsettings('sos.model',2),[a2;c;c1;c2;c3;c4]);
optimize(C)


However, if I want to find a Lyapunov function such that a1*r \le V(x,Delta) \le a2*r within the same optimization problem (which is actually what I need), then my problem becomes infeasible.

clear all
sdpvar x1 x2 x3 Delta a1 a2

u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2; 
uncertain_system=[u_1;u_2;Delta*x1*x2];

% Local analysis
% Uncertainty domain
Delta_l = 1;
Delta_u = 220;
uncertain_domain = (Delta-Delta_l)*(Delta_u-Delta);

% State-space domain
x = [x1;x2;x3];
r = x'*x;
ss_domain = 100-r;

% Parameter-dependent Lyapunov function
[V,c] = polynomial([x;Delta],4);
dVdt = jacobian(V,x)*uncertain_system;

% Aux SOS
[s1,c1] = polynomial([x;Delta],2);
[s2,c2] = polynomial([x;Delta],2);
[s3,c3] = polynomial([x;Delta],2);
[s4,c4] = polynomial([x;Delta],2);
[s5,c5] = polynomial([x;Delta],2);
[s6,c6] = polynomial([x;Delta],2);

% Optimization problem
C = [sos(s1),sos(s2),sos(s3),sos(s4),sos(s5),sos(s6)];
C = [C,sos(V-a1*r-s1*ss_domain-s2*uncertain_domain)];
C = [C;sos(-V+a2*r-s3*ss_domain-s4*uncertain_domain)];
C = [C;sos(-dVdt-s5*ss_domain-s6*uncertain_domain)];
C = [C;c(1)==0;a1>=1e-3;a2-a1>=0];
C = compilesos(C,a2-a1,sdpsettings('sos.model',2),[a1;a2;c;c1;c2;c3;c4;c5;c6]);
optimize(C)

That's just to say that although I can prove robust stability (at least locally) of the closed-loop system, finding tight bounds for the Lyapunov function (and its derivative, not reported here anyway) looks much harder. Of course, any other hint is more than welcome.

Johan Löfberg

Jan 8, 2016, 8:58:31 AM
to YALMIP
Just a comment: no reason to use compilesos or to explicitly select an image-based model here.