define constraints


Ali Esmaeilpour

Nov 8, 2019, 6:43:43 AM
to YALMIP
Hello, I wanted to define these constraints:

I've already defined some of them, but I can't complete them:
clc;
clear;
close all;

t0 = 0;
tf = 20;
dt = 0.1;

for t = t0:dt:tf-dt
    hsim = 100;
    Tetamin = 10^(-4);
    Tetamax = 10^(+4);
    Q = sdpvar(repmat(1,1,hsim),repmat(1,1,hsim));
    R = sdpvar(repmat(1,1,hsim),repmat(1,1,hsim));
    Q{1} = 1;
    R{1} = 100;
    Teta = blkdiag(Q,R);
    ysp = 0;
    constraint = [];
    objective = 0;
    for k = 1:hsim
        objective = objective + norm(yhat(t+k)-ysp,Q) + norm(uhat(t+k),R);
        constraint1 = Tetamin <= Teta(t) <= Tetamax;
        constraint2 = Tetamin <= (1+Teta(t+k))*Teta(t-1) <= Tetamax;
    end
end


Johan Löfberg

Nov 8, 2019, 7:28:58 AM
to YALMIP
>> help sdpvar/norm
 norm (overloaded)
 
  t = norm(x,P)
 
  The variable t can only be used in convexity preserving
  operations such as t<=1, max(t,y)<=1, minimize t etc.
 
     For matrices...
       norm(X)       models the largest singular value of X, max(svd(X)).
       norm(X,2)     is the same as norm(X).
       norm(X,1)     models the 1-norm of X, the largest column sum, max(sum(abs(X))).
       norm(X,inf)   models the infinity norm of X, the largest row sum, max(sum(abs(X'))).
       norm(X,'inf') same as above
       norm(X,'fro') models the Frobenius norm, sqrt(sum(diag(X'*X))).
       norm(X,'nuc') models the Nuclear norm, sum of singular values.
       norm(X,'*')   same as above
       norm(X,'tv')  models the (isotropic) total variation semi-norm 
     For vectors...
       norm(V) = norm(V,2) = standard Euclidean norm.
       norm(V,inf) = max(abs(V)).
       norm(V,1) = sum(abs(V))
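
For reference, here is a minimal hedged sketch of the convexity-preserving usage the help text describes (A, b, x and the bound 10 are made-up placeholders, not from this thread):

% Minimal sketch of convexity-preserving use of the overloaded norm.
A = randn(5,3); b = randn(5,1);
x = sdpvar(3,1);
Constraints = [norm(x,1) <= 10];        % 1-norm bound, handled as an LP
Objective   = norm(A*x - b, 2);         % Euclidean norm, handled as an SOCP
optimize(Constraints, Objective);
value(x)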

Johan Löfberg

Nov 8, 2019, 7:47:48 AM
to YALMIP
and you are most likely not going to be able to construct or solve a model involving those bilinear constraints. I feel you don't understand what kind of problem you are setting up. norm is intended for structured programs where you would use SOCP or LP solvers depending on the norm, and those cannot be used with nonconvex bilinear stuff

Ali Esmaeilpour

Nov 8, 2019, 10:28:15 AM
to YALMIP
so norm can't be used to define this:


Johan Löfberg

Nov 8, 2019, 10:41:41 AM
to yal...@googlegroups.com
of course it can, that's a squared (and weighted) 2-norm

Ali Esmaeilpour

Nov 8, 2019, 10:47:18 AM
to YALMIP
so these constraints can't be defined for that squared 2-norm?

Johan Löfberg

Nov 8, 2019, 10:53:48 AM
to YALMIP
Why are you talking about norms and showing that picture? That's a bunch of linear and polynomial inequalities.

The objective you showed was a sum of squared weighted 2-norms.

The code you showed using norm(something,Q) makes no sense because that's not valid code; you cannot have a "Q-norm" for a decision variable Q.
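
As a hedged sketch of that distinction (e, Qfix and Qvar below are illustrative placeholders, not the thread's variables): with a fixed numeric weight the squared weighted 2-norm is a convex quadratic, while with a decision-variable weight the same expression becomes bilinear.

% Hedged sketch; e stands in for a residual such as yhat(t+k) - ysp.
e    = sdpvar(2,1);
Qfix = diag([1 100]);            % fixed numeric weight
obj_fixed = e'*Qfix*e;           % squared weighted 2-norm, convex quadratic
Qvar = sdpvar(2,2);              % weight treated as a decision variable
obj_bilinear = e'*Qvar*e;        % can be written, but bilinear in (e,Qvar): nonconvex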

Ali Esmaeilpour

Nov 8, 2019, 11:05:50 AM
to YALMIP
well actually "Teta" is dicision variable and its form is:

objective=objective+(yhat(t+k)-ysp)*Q*(yhat(t+k)-ysp)'+(uhat(t+k))*R*(uhat(t+k))';    
constraint1
=Tetamin<=Teta(t)<=Tetamax;
constraint2
=Tetamin<=(1+Teta(t+k))*Teta(t-1)<=Tetamax;
I've defined two of the three constraints, but what about this one?

I've attached the paper; maybe it will help.
7-21.pdf

Johan Löfberg

Nov 8, 2019, 11:18:04 AM
to YALMIP
bilinear in theta, hence ugly and hard

What's your problem with the last one? It is just a linear constraint.

Ali Esmaeilpour

Nov 8, 2019, 11:23:35 AM
to YALMIP
I don't know how to define it:
constraint3 = ...

And another question: which solver is suitable for this bilinear problem?

Johan Löfberg

Nov 8, 2019, 11:27:30 AM
to YALMIP
you are obviously defining a lot of other linear inequalities, so why is this any harder? it's exactly the same

bilinear means you have to use a general nonlinear solver
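
A hedged sketch of how one might request such a solver in YALMIP, assuming constraint1, constraint2 and objective from the code above have been assembled (solver availability depends on your installation; 'fmincon' and 'bmibnb' are just two possibilities):

% Hedged sketch: hand the bilinear problem to a general nonlinear solver.
% 'fmincon' needs the Optimization Toolbox; 'bmibnb' is YALMIP's global
% branch-and-bound; 'ipopt' is another option if installed.
ops = sdpsettings('solver','fmincon');
optimize([constraint1, constraint2], objective, ops);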

Ali Esmaeilpour

Nov 8, 2019, 11:53:53 AM
to YALMIP
because there is

I'm confused...

Johan Löfberg

Nov 8, 2019, 12:27:49 PM
to YALMIP
So you only add the constraints for which j >= ht when you build the list of constraints in a loop:

for j = ...
    if j >= ht
        add
    end
end

A_E

Nov 10, 2019, 8:22:12 AM
to YALMIP
you mean this way?
if j >= ht
    constraints2 = Teta(j+1)=0;
end


Johan Löfberg

Nov 10, 2019, 8:41:32 AM
to YALMIP
there is absolutely nothing special about this one, so you would have to do as always, i.e. build the list of constraints 

constraints2 = [constraints2, Teta(j+1)==0];
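
Putting that together with the earlier loop sketch, a hedged version could look like the following (hsim and ht are taken from the thread; the specific constraint Teta(j+1) == 0 is only an assumption based on the line above):

% Hedged sketch: accumulate the conditional constraints in a loop.
constraints2 = [];
for j = 1:hsim
    if j >= ht
        constraints2 = [constraints2, Teta(j+1) == 0];
    end
end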


A_E

Nov 10, 2019, 8:55:06 AM
to YALMIP
When I want to define two MPC layers, where one of them is one step ahead of the first, in a closed-loop simulation, I'd have:

for t = t0:dt:tf-dt
    for k = 1:N-1
        % first MPC layer
        for j = 2:N-1
            % second MPC layer
        end
    end
end

Is this a valid structure?

Johan Löfberg

Nov 10, 2019, 9:21:13 AM
to YALMIP
I have no idea, and neither does anyone else, I guess. You will have to sort that out yourself based on what you want to do.

You've been given advice on how to simulate MPC, and you have been given a full example of how to set up a black-box model based on simulating MPC and optimizing tuning variables in the MPC controller. Now it is just up to you to understand your formulation, see how it is an application of all these things and optimization in general, and patch together all the advice you have already received.

A_E

Nov 10, 2019, 9:35:03 AM
to YALMIP
Well, the structure is something like this:

but I'm not sure about two things:
1. Should there be one solver for each layer, or one solver for both layers?
2. There is an optimizer that finds the optimized "Teta" (tuning parameters); it is evaluated by solving a bilinear problem, and it sits in the loop with those two MPC layers. Again, is it necessary to define another solver for finding "Teta"? It looks like it is, but I have a lot of hesitation about how to arrange these parts and make them work in closed loop, etc.

Johan Löfberg

Nov 10, 2019, 9:45:20 AM
to YALMIP
As I said already, you won't get any more help. It looks 100% like a black-box optimization problem where some of the work involves simulating an MPC controller for the objective computation, i.e. precisely the same thing you have been asking about before. The fact that the solution is actually used in an MPC controller immediately, that the whole optimization thus runs repeatedly, and that the simulation starts from the current state of the running MPC controller does not in any sense change this.

I am going to be honest with you: if you don't see this, you are addressing a problem that is too hard for you, and you should seek advice from colleagues closer to you, and/or try to solve much simpler problems to build up confidence in the field in general. This is a site for YALMIP issues, and those have already been answered.