Thanks for the answers. I think things are getting interesting. Below I have also reported some thoughts about the problem; perhaps they may be useful for other users (or perhaps they provide input for future versions of YALMIP) :)
1. OK, as I expected. However, the condition that I imposed is V \ge eps*||x||, and it can be satisfied only if the system is exponentially stable. Unfortunately, I picked a system that is not (it is asymptotically stable, not exponentially stable). Therefore, the SOS problem V - eps*||x|| \ge 0 (as I formulated it) cannot admit a solution, not even for Delta = 1, since the system is NOT exponentially stable. Sorry, that was my bad.
Unfortunately, the existence of positive c1 and c2 such that c1*||x||^p \le V \le c2*||x||^p holds only for exponentially stable systems, and my system is asymptotically stable (therefore there exist K-class functions \alpha_1, \alpha_2 such that \alpha_1(||x||) \le V(x) \le \alpha_2(||x||), but the \alpha_i are not of the form c*||x||^p, otherwise I would have exponential stability; I’ll follow up on this part in point 3).
See page 162 of “Nonlinear Systems”, 3rd Edition, by H. Khalil.
2. OK
3. In nonlinear control it is common to find upper and lower bounds on the Lyapunov function that are functions of the state norm. This allows one to compute a number of things (overshoot, ultimate bounds, regions of attraction, bisimulations, etc.). If the bounds are NOT in terms of norms, then we are stuck, since we cannot proceed with standard Lyapunov-based techniques to derive system properties.
Given that, in order to try to find the functions \alpha_1 and \alpha_2 described above, my idea was to find V, c1 and c2 such that V - c1*(x1^2+x2^2+x3^2) - c2*(x1^2+x2^2+x3^2)^2 \ge 0. Then, provided that my V is quadratic, I would set \alpha_1(||x||) = min{c1*||x||, c2*||x||^2} (or something like that), which is a K-class function.
So, given that V is a quadratic positive polynomial with a unique zero at x = 0, depending on where the state is at a given point in time, the function \alpha_1 would switch between c1*||x|| and c2*||x||^2, ensuring a lower bound for V.
For the upper bound I would do the same. Recalling that a+b \le max{2a,2b} for any a,b > 0, I would set \alpha_2(||x||) = max{2*c3*||x||, 2*c4*||x||^2}, which is, again, K-class.
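For completeness, the elementary chain both constructions rest on is, for any a, b > 0:

```latex
\min(a,b) \;\le\; \max(a,b) \;\le\; a + b \;\le\; 2\max(a,b) \;=\; \max(2a,\,2b).
```

Taking a = c1*||x||, b = c2*||x||^2 for the lower bound and a = c3*||x||, b = c4*||x||^2 for the upper one, the resulting min and max of two K-class functions are again K-class, which is what makes the switching construction legitimate.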
Also, I am not sure whether such an attempt to find the bounds \alpha_i() of the Lyapunov function makes sense, whether it is correct, and whether it is possible to use numerical tools to deal with it. Any input is greatly appreciated.
On the other hand, if it were possible to deal with those problems numerically, I am sure that a lot of people would be very happy! :)
As a second attempt, I may try to “cheat”, in the sense that I can try to solve the problem with the conditions V(x) \ge h(x), \dot V(x) \le -h(x), where h(x) is SOS. Then, IF h() also induces a norm ||x||_h, I would have V \ge ||x||_h and \dot V \le -||x||_h.
However, in general an SOS polynomial does not induce any norm (think, for example, of an SOS with a constant term), but exploiting the properties of V (positiveness and a unique zero at x=0), it may happen that h() induces a norm… but I honestly don’t know.
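One sufficient condition worth noting (an observation on my part, not something the SOS program enforces): if h is positive definite and homogeneous of degree 2d, one can define

```latex
\|x\|_h \;:=\; h(x)^{1/(2d)},
```

which is positive definite and absolutely homogeneous by construction; it is then an actual norm if and only if it is convex, i.e. iff the sublevel set {x : h(x) \le 1} is convex (the Minkowski-functional argument). A constant term, or any other inhomogeneous term, rules this out immediately, since homogeneity already fails.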
In addition, even if h() induced a norm, I am not sure whether the converse theorem for exponential stability holds for ANY norm, including a possible || ||_h norm. If so, then this method would not apply to asymptotically stable systems. :(
On the other hand, if h() induced a norm, we would have something good for exponentially stable systems anyway. For instance, we might be able to find less conservative bounds for exponentially stable systems compared to the classic “ball”, which often leads to extremely conservative results.
In fact, whenever one uses Lyapunov arguments to find ultimate bounds, regions of attraction, etc., they shall be intended in the sense of || ||_h (or || ||_g) and not in the sense of the Euclidean norm. But that would be very good anyway (for example, in the case of an ultimate bound, instead of concluding that the trajectories are ultimately bounded in a region ||x|| < r, one may conclude that they are ultimately bounded in a region ||x||_h < r. So far, so good).
Unfortunately, a number of other problems arise with this method. Let’s assume that h() induces a norm. First, the SOS bound h() for V and \dot V should be the same. If we find two SOS polynomials, say h() and g(), such that V \ge h(x) and \dot V \le -g(x), one shall somehow relate the norms || ||_h and || ||_g anyway, which, at first sight, is not simple.
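One standard fact may take some of the sting out of this last difficulty: on R^n all norms are equivalent, so if h() and g() both genuinely induce norms, there automatically exist constants 0 < m \le M with

```latex
m\,\|x\|_g \;\le\; \|x\|_h \;\le\; M\,\|x\|_g \qquad \text{for all } x \in \mathbb{R}^n,
```

so the two bounds can always be related in principle; the practical question is computing non-conservative m and M, e.g. by extremizing h over the unit sphere of g.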
4. Can I use abs( ) in the objective function when solving a SOS problem? The importance of finding such bounds is that the tighter the bound on V, the less conservative all the results obtained for that particular nonlinear system will be (bounds on trajectories, overshoot, bisimulations, etc.).
Here, I have considered a global case. However, I would be happy also to get “local” results, i.e. to find bounds that hold locally.
Sorry for the long message, but I just wanted to clarify my problem better (and perhaps some users may find it useful, provided that my reasoning is correct).
As a reference, some of my claims can also be found in Chapter 4 of the book “Nonlinear Systems” by H. Khalil.
Right. I forgot to add some squared terms - but it is quite hard to avoid typos here.
I understand that everything may work as long as the problem can be converted into a reasonable SDP.
Just for the sake of correctness, let me reformulate the problem. Here is an example.
If I find a solution for
V \ge 0
-dotV \ge 0
V^2 - c1*||x||^2 - c2*||x||^4 \ge 0
-V^2 + c3*||x||^2 + c4*||x||^4 \ge 0
dotV^2 - c5*||x||^2 \ge 0
ci > 0 (which I can rewrite as ci*pi(x) > 0, where the pi(x) are SOS)
then, considering that for a,b > 0 I have min(a,b) \le a+b \le max(2a,2b), I would have found the bounds
min(sqrt(c1)*||x||, sqrt(c2)*||x||^2) \le V \le max(sqrt(2*c3)*||x||, sqrt(2*c4)*||x||^2)
dotV \le -sqrt(c5)*||x||
and the solution is valid as long as the ci are larger than the solver precision (by the way, how can I see and possibly change the solver precision?). Is that correct?
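As a quick numerical sanity check of the sandwich min(sqrt(c1)*||x||, sqrt(c2)*||x||^2) \le V \le max(sqrt(2*c3)*||x||, sqrt(2*c4)*||x||^2), here is a small sketch with a hypothetical quadratic V(x) = 2*||x||^2 and hand-picked constants (none of these numbers come from an actual SOS solve; they are only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ingredients -- in practice V and c1..c4 come from the SOS solver.
def V(x):
    return 2.0 * np.dot(x, x)      # V(x) = 2*||x||^2

c1, c2, c3, c4 = 1.0, 1.0, 4.0, 4.0

def lower(s):
    # alpha_1(s) = min(sqrt(c1)*s, sqrt(c2)*s^2)
    return min(np.sqrt(c1) * s, np.sqrt(c2) * s**2)

def upper(s):
    # alpha_2(s) = max(sqrt(2*c3)*s, sqrt(2*c4)*s^2)
    return max(np.sqrt(2 * c3) * s, np.sqrt(2 * c4) * s**2)

# Check alpha_1(||x||) <= V(x) <= alpha_2(||x||) on random samples, covering
# both ||x|| < 1 (quadratic branch active) and ||x|| > 1 (linear branch active).
ok = True
for _ in range(10000):
    x = rng.uniform(-10.0, 10.0, size=3)
    s = np.linalg.norm(x)
    ok = ok and lower(s) <= V(x) <= upper(s)
print(ok)  # prints True: the sandwich holds on all samples
```

Of course this only checks a candidate pair (V, ci) a posteriori on samples; it does not replace the SOS certificate, but it is a cheap way to catch sign or exponent typos in the recovered bounds.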
Alternatively, I could solve only
V - c1*||x||^2 - c2*||x||^4 \ge 0
-dotV \ge 0
and then, once I get the numerical solution for dotV (say SOL_dotV), I can verify whether there is a c5 such that -SOL_dotV^2 + c5^2*||x||^2 \le 0 (which would lead to SOL_dotV \le -c5*||x||).
The only drawback I see here is that there may be some set of points y \neq 0 for which SOL_dotV(y) = 0… in that case, one should resort to some invariance principle… well, things are getting very messy… perhaps I should find an easier example.
C = [uncertain(Delta),1<=Delta<=1];
C = [C,sos(V-a*r),sos(-dVdt)];
C = [sos(V-a*r),sos(-dVdt)];
C = compilesos(C,[],sdpsettings('sos.model',2),[c;Delta]);
C = C(~is(C,'equality'));
C = robustify([C, uncertain(Delta), 1 <= Delta <= 1]);
optimize(C)
% Yalmip
sdpvar x1 x2 x3 Delta
u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2;
uncertain_system = [u_1;u_2;Delta*x1*x2];
x = [x1;x2;x3];
[V,c] = polynomial(x,2);
dVdt = jacobian(V,x)*uncertain_system;
r = x'*x;
C = [sos(V-r),sos(-dVdt-min(r,r^2)),sos(-V+6.5*r)];
C = compilesos(C,[],sdpsettings('sos.model',2),[c;Delta]);
C = C(~is(C,'equality'));
C = [C,c(1)==0];
C = robustify([C, uncertain(Delta), 1 <= Delta <= 220]);
optimize(C)
%SolV = clean(value(c)'*monolist(x,2),1e-3);
% Yalmip
yalmip('clear')
sdpvar x1 x2 x3
Delta=100;
u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2;
uncertain_system=[u_1;u_2;Delta*x1*x2];
x = [x1;x2;x3];
[V,c] = polynomial(x,2);
dVdt = jacobian(V,x)*uncertain_system;
r = x'*x;
C = [sos(V-r),sos(-dVdt)];
solvesos(C,[],[],c)
clear all
sdpvar x1 x2 x3 Delta a1
u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2;
uncertain_system = [u_1;u_2;Delta*x1*x2];
% Local analysis
% Uncertainty domain
Delta_l = 1;
Delta_u = 220;
uncertain_domain = (Delta-Delta_l)*(Delta_u-Delta);
% State-space domain
x = [x1;x2;x3];
r = x'*x;
ss_domain = 100-r;
% Parameter-dependent Lyapunov function
[V,c] = polynomial([x;Delta],4);
dVdt = jacobian(V,x)*uncertain_system;
% Auxiliary SOS multipliers
[s1,c1] = polynomial([x;Delta],2);
[s2,c2] = polynomial([x;Delta],2);
[s3,c3] = polynomial([x;Delta],2);
[s4,c4] = polynomial([x;Delta],2);
% Optimization problem
C = [sos(s1),sos(s2),sos(s3),sos(s4)];
C = [C,sos(V-a1*r-s1*ss_domain-s2*uncertain_domain)];
C = [C;sos(-dVdt-s3*ss_domain-s4*uncertain_domain)];
C = [C;c(1)==0;a1>=1e-3]; % c(1) == 0 ensures V(0)=0; a1 kept large enough not to drop below the solver precision
[C,obj] = compilesos(C,-a1,sdpsettings('sos.model',2),[a1;c;c1;c2;c3;c4]);
optimize(C,obj)
clear all
sdpvar x1 x2 x3 Delta a2
u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2;
uncertain_system = [u_1;u_2;Delta*x1*x2];
% Local analysis
% Uncertainty domain
Delta_l = 1;
Delta_u = 220;
uncertain_domain = (Delta-Delta_l)*(Delta_u-Delta);
% State-space domain
x = [x1;x2;x3];
r = x'*x;
ss_domain = 100-r;
% Parameter-dependent Lyapunov function
[V,c] = polynomial([x;Delta],4);
dVdt = jacobian(V,x)*uncertain_system;
% Auxiliary SOS multipliers
[s1,c1] = polynomial([x;Delta],2);
[s2,c2] = polynomial([x;Delta],2);
[s3,c3] = polynomial([x;Delta],2);
[s4,c4] = polynomial([x;Delta],2);
% Optimization problem
C = [sos(s1),sos(s2),sos(s3),sos(s4)];
C = [C;sos(-V+a2*r-s1*ss_domain-s2*uncertain_domain)];
C = [C;sos(-dVdt-s3*ss_domain-s4*uncertain_domain)];
C = [C;c(1)==0;a2>=0];
[C,obj] = compilesos(C,a2,sdpsettings('sos.model',2),[a2;c;c1;c2;c3;c4]);
optimize(C,obj)
clear all
sdpvar x1 x2 x3 Delta a1 a2
u_1 = -x1*x2-2*x2*x3-x1-x3;
u_2 = 2*x1*x2*x3+3*x3^2-x2;
uncertain_system = [u_1;u_2;Delta*x1*x2];
% Local analysis
% Uncertainty domain
Delta_l = 1;
Delta_u = 220;
uncertain_domain = (Delta-Delta_l)*(Delta_u-Delta);
% State-space domain
x = [x1;x2;x3];
r = x'*x;
ss_domain = 100-r;
% Parameter-dependent Lyapunov function
[V,c] = polynomial([x;Delta],4);
dVdt = jacobian(V,x)*uncertain_system;
% Auxiliary SOS multipliers
[s1,c1] = polynomial([x;Delta],2);
[s2,c2] = polynomial([x;Delta],2);
[s3,c3] = polynomial([x;Delta],2);
[s4,c4] = polynomial([x;Delta],2);
[s5,c5] = polynomial([x;Delta],2);
[s6,c6] = polynomial([x;Delta],2);
% Optimization problem
C = [sos(s1),sos(s2),sos(s3),sos(s4),sos(s5),sos(s6)];
C = [C,sos(V-a1*r-s1*ss_domain-s2*uncertain_domain)];
C = [C;sos(-V+a2*r-s3*ss_domain-s4*uncertain_domain)];
C = [C;sos(-dVdt-s5*ss_domain-s6*uncertain_domain)];
C = [C;c(1)==0;a1>=1e-3;a2-a1>=0];
[C,obj] = compilesos(C,a2-a1,sdpsettings('sos.model',2),[a1;a2;c;c1;c2;c3;c4;c5;c6]);
optimize(C,obj)