multi-objective optimization


Ali_E

Aug 29, 2019, 6:09:37 AM
to YALMIP
Hello professor, I wanted to know: is it possible to use YALMIP for multi-objective optimization? fmincon, fminunc, and also fgoalattain are not satisfying enough for my problem.

Johan Löfberg

Aug 29, 2019, 6:11:47 AM
to YALMIP
you would have to code the upper layer of the multi-objective logic yourself, but for that you can of course use YALMIP

Ali_E

Aug 29, 2019, 6:21:45 AM
to YALMIP
I've defined the objectives of my problem, but my question is: which solver is a good choice to replace fmincon or fminunc?

Johan Löfberg

Aug 29, 2019, 6:30:28 AM
to YALMIP
The question you have to answer first is what you are looking for. If it is *a* solution to your problem, then you have to define what a solution is. If your objectives are f (cost of building sports car) and p (weight of the car), what is an optimal solution, and how do you define it?

Ali_E

Aug 29, 2019, 6:42:52 AM
to YALMIP
OK, I have a function named F(x), with x a vector of decision variables. I wanted to ask how I can find this F(x), because I have it as a sum of squared errors. I've solved my optimization problem with YALMIP and MOSEK, and now I want to tune my controller parameters.

I've attached a picture of this F(x), and I want to know: how can this be a function when there is no x in the SSE formulation?
Screenshot (59).png

Johan Löfberg

Aug 29, 2019, 6:46:02 AM
to YALMIP
How do you get F(x)? sum((yref-y).^2) would be one way.

Johan Löfberg

Aug 29, 2019, 6:47:51 AM
to YALMIP
although I don't see why you are talking about this, when the question initially was about multi-objective optimization

Ali_E

Aug 29, 2019, 7:08:29 AM
to YALMIP
Well, this F(x) is the objective function of a multi-objective tuning method for my MPC problem. The paper that introduces this tuning method defines F(x) as a summation of (yref-y)^2, but my question is that there is no x in there as a variable, so how can F(x) be a function? Should I replace y and yref with their corresponding dynamics? F(x) can't be constant.

I've attached the paper for you.
base.pdf

Johan Löfberg

Aug 29, 2019, 10:22:19 AM
to YALMIP
Then F = (yref-y).^2 is a vector, and y is created from the decision variables (generically x in the introductory notation; specifically here, I guess, your control inputs or something like that).

Regardless, you cannot minimize a vector; it makes no sense. In multi-objective optimization, you take your vector objective and map it to a scalar objective using some strategy. Once you've decided how to do that, it is just a matter of solving that problem with a standard solver. In the case of fgoalattain, which you mention, you simply settle on a goal value g and try to make all objectives smaller than that: F <= g. That is typically not feasible (if it is, you are done), so you introduce a new variable gamma which acts as a slack, relax all goals using a weight w to account for different scales, and minimize gamma subject to F <= g + w*gamma.
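
In YALMIP terms, a minimal sketch of that scalarization, with toy objectives f1, f2 and made-up goals g and weights w:

x = sdpvar(2,1);
gamma = sdpvar(1);
f1 = (x(1)-1)^2 + x(2)^2;               % toy objective 1
f2 = x(1)^2 + (x(2)-2)^2;               % toy objective 2
g = [1;1]; w = [1;1];                   % goals and weights (made up)
Goals = [f1 <= g(1)+w(1)*gamma, f2 <= g(2)+w(2)*gamma];
optimize(Goals,gamma);                  % minimize the common slack gamma
value(x), value(gamma)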


Ali Esmaeilpour

Aug 29, 2019, 10:44:18 AM
to YALMIP
Yeah, that is right, but about F(x): the context says that it is a function of x, where x is a diagonal matrix of tuning parameters, and y is evaluated from the control moves, so it is a function of the tuning parameters Q and R. What I cannot figure out is that I cannot find the formula of y that has Q and R in its formulation. What I want to know is: should y be replaced by its dynamics or by its numeric values? Is it a function or a numeric constant value?

Johan Löfberg

Aug 29, 2019, 11:42:42 AM
to YALMIP
In the paper, x is defined as [q;r] after (14), and y is the closed-loop output when using that tuning. The map [q;r] -> y is a pretty nasty expression which they never explicitly derive; they probably just solve the problem using some black-box solver that only requires a function returning the objective value for a specific choice of decision variables.

Ali Esmaeilpour

Aug 29, 2019, 11:49:19 AM
to YALMIP
What kind of black box? How can I find out about it? Is it routine to use a black box to map [q;r] -> y? Where can I find any of these black boxes?

Johan Löfberg

Aug 29, 2019, 11:52:54 AM
to YALMIP
A black-box solver, such as fminunc. That solver only requires you to write a function which takes an input ([q;r]) and returns an objective. In that function, you can simulate a system or do whatever you want to construct the output value.

Ali Esmaeilpour

Aug 29, 2019, 12:03:40 PM
to YALMIP
So I have solved the optimization problem and got numerical values of y. Now I want to tune it. I put those numerical values into F(x), and now I have a constant F, not a function. Do I then use this F for the rest of the tuning strategy?
How does that black-box solver fmincon map q and r to y?
In the paper, F(x) = sum((yref-y)^2) with x = [q;r]; I got numerical values of y, and I consider yref = 0.
But there is no x in yref-y, so I won't have different values of F for different values of x.

Johan Löfberg

Aug 29, 2019, 12:08:57 PM
to YALMIP
I don't know what optimization problem you have solved, but if you mean that you compute y given [q;r], and you want to optimize the choice of [q;r] based on some metric on the resulting y, then you would have

function F = myblackbox(x)
q = x(1);                               % first tuning parameter
r = x(2);                               % second tuning parameter
y = solvemyoptimizationproblem(q,r);    % placeholder: simulate/solve for this tuning
F = somefunctionofy(y);                 % placeholder: e.g. sum((yref-y).^2)
end

I can only advise you to google black-box optimization, I cannot help you further as these are basic questions and not YALMIP related.
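
For completeness, a hypothetical call (assuming the function above is saved as myblackbox.m):

x0 = [1;1];                             % initial guess for [q;r]
xopt = fminunc(@myblackbox,x0);         % the solver only ever evaluates the function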

Ali Esmaeilpour

Aug 29, 2019, 12:16:04 PM
to YALMIP
Eh, excuse me professor, I was asking about that F(x).
When you use fminunc you provide a function, and you have the variable x in that function.
The question is simple: why do we not have x on the right-hand side of that paper's function F(x) = sum((yref-y)^2)?
There should be x on the right-hand side so it can be provided to fminunc or any other black box.

Johan Löfberg

Aug 29, 2019, 4:33:40 PM
to YALMIP
A general setup is described in the paper in the section Multi-Objective Optimization, using the name x for the decision variables. Then a particular problem is described, where the decision variables are q and r; they generate something called y, which in turn defines some other result F. Hence F is a function of y, F(y). y is a function of [q;r], hence F([q;r]). In short, F(x), where the decision variables are x = [q;r].

Ali_E

Aug 29, 2019, 4:55:49 PM
to YALMIP
So you are saying that because x = [q;r], and q and r generate y, F(y) = F([q;r]), and, believe it or not, we can define F(y) = sum((yref-y)^2)?

Johan Löfberg

Aug 30, 2019, 1:44:22 AM
to YALMIP
Obviously, sum((yref-y)^2) is a function of y, so you can call that F(y) if you want. That would not be interesting in the big picture, though, as y isn't a variable in the optimization problem but a function of the actual decision variables [q;r], which you call x. Hence, in the context of the whole optimization problem, the notation F(x), F([q;r]), or F(y(x)) would make more sense.
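
In MATLAB terms, the composition is just (a sketch; simulateclosedloop is a hypothetical stand-in for the map from tuning to output):

yref = 0;                               % the reference (you set yref = 0)
y = @(x) simulateclosedloop(x);         % x = [q;r] -> closed-loop output y(x)
F = @(x) sum((yref - y(x)).^2);         % the scalar objective F(y(x))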



Ali Esmaeilpour

Aug 30, 2019, 2:58:41 AM
to YALMIP
So we have F(y(x)) = sum((yref-y)^2), and we should define it like a nested function in MATLAB? Or use sdpvar for x and then define F as a function with variable y?

Johan Löfberg

Aug 30, 2019, 3:19:33 AM
to YALMIP
The logical notation then would be F(y(x)) = sum((yref - y(x)).^2).

Talking about sdpvars here is of no use, as you cannot solve this using YALMIP; YALMIP doesn't support black-box optimization.

Implementing this with fgoalattain or fminunc (depending on what you actually want to do) is trivial though; I basically gave you the code above.
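
A hypothetical call, assuming you wrap the objectives in a vector-valued version of that function (myblackboxvector is a made-up name returning one SSE per objective):

goal   = [0;0];                         % goal values (made up)
weight = [1;1];                         % scaling weights (made up)
x0     = [1;1];                         % initial guess for [q;r]
xopt = fgoalattain(@myblackboxvector,x0,goal,weight);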

Ali_E

Aug 30, 2019, 8:23:01 AM
to YALMIP
Now y(x) is needed there as a function. I know that it's the closed-loop response, and I got it as a 1-by-100 matrix of numerical values from the following program, which gives mux1 and mux2 as closed-loop responses, either of which can be taken as y:
clc;
clear;
close all;
%% Time structure
t0 = 0;
tf = 10;
dt = 0.1;
time = t0:dt:tf-dt;
%% System structure  ===> x(k+1) = A*x(k) + B*u(k) + G*w(k)
A = [1.02 -0.1;0.1 0.98];
B = [0.5 0;0.05 0.5];
G = [0.3 0;0 0.3];
[nx,nu] = size(B);
%% MPC parameters
Q = eye(nx);
R = eye(nu)*50;
Qf = eye(nu)*50;
N = 26;
%% Problem parameters
M = blkdiag(Q,R);
w1 = randn(1,100);
w1 = w1-mean(w1);                      % zero-mean disturbance, channel 1
muw1 = mean(w1);
sigmaw1 = var(w1);
w2 = randn(1,100);
w2 = w2-mean(w2);                      % zero-mean disturbance, channel 2
muw2 = mean(w2);
sigmaw2 = var(w2);
w = [w1;w2];
sigmaw = [sigmaw1 0;0 sigmaw2];
x = randn(2,1);
mux{1} = [-2;2];                       % initial state mean
sigmax{1} = [1 0;0 1];                 % initial state covariance
h1x = [-1/sqrt(5);-2/sqrt(5)];
h2u = [-0.4/sqrt(1.16);1/sqrt(1.16)];
epsilon = 0.5;
g1 = 3;
g2 = 0.2;
c = 0;
%% Main loop
for t = t0:dt:tf-dt
    c = c+1;
    % Decision variables (the whole model is rebuilt from scratch every iteration)
    sigmax = sdpvar(repmat(nx,1,N+1),repmat(nu,1,N+1));
    p = sdpvar(repmat(4,1,N-1),repmat(4,1,N-1));
    pN = sdpvar(2,2);
    F = sdpvar(repmat(2,1,N+1),repmat(2,1,N+1));
    K = sdpvar(repmat(2,1,N+1),repmat(2,1,N+1));
    muu = sdpvar(repmat(nu,1,N),repmat(1,1,N));
    sigmau = sdpvar(repmat(2,1,N),repmat(2,1,N));
    constraint = [];
    objective = 0;
    for k = 1:N-1
        mux{k+1} = A*mux{k} + B*muu{k};            % mean dynamics
        objective = objective + 1/2*trace(M*p{k});
        constraint2 = [sigmax{k+1} (A*sigmax{k}+B*F{k}) G*sigmaw;(A*sigmax{k}+B*F{k})' sigmax{k} zeros(2);(G*sigmaw)' zeros(2) sigmaw] >= 0;
        constraint3 = [sigmau{k} F{k};F{k}' sigmax{k}] >= 0;
        constraint4 = h1x'*mux{k} <= (1-(0.5*epsilon))*g1-((0.95/2*epsilon*g1*(1-0.95))*h1x'*sigmax{k}*h1x);
        constraint5 = h2u'*muu{k} <= (1-(0.5*epsilon))*g2-((0.95/2*epsilon*g2*(1-0.95))*h2u'*sigmau{k}*h2u);
        constraint6 = [p{k} [sigmax{k};F{k}] [mux{k};muu{k}];[sigmax{k};F{k}]' sigmax{k} zeros(2,1);[mux{k};muu{k}]' zeros(1,2) ones(1)] >= 0;
        constraint7 = [F{k} K{k};sigmax{k} eye(2)] >= 0;   % >= 0 added here; the original listed a bare matrix, presumably a missing PSD qualifier
        constraint = [constraint,constraint2,constraint3,constraint4,constraint5,constraint6,constraint7];
    end
    objective = objective + 1/2*trace(trace(Q*pN));
    constraint8 = h1x'*mux{N} <= (1-(0.5*epsilon))*g1-((0.95/2*epsilon*g1*(1-0.95))*h1x'*sigmax{N}*h1x);
    constraint9 = [pN-sigmax{N} mux{N};mux{N}' 1] >= 0;    % >= 0 added here as well, for the same reason
    constraint = [constraint,constraint8,constraint9];
    option = sdpsettings('solver','sedumi');
    a = optimize(constraint,objective,option);
    % Extract the first control move and apply it to the simulated plant
    u = value(muu{1});
    U1(c) = u(1,1);
    U2(c) = u(2,1);
    U3 = [U1;U2];
    mux3 = value(mux{1});
    mux1(c) = mux3(1,1);
    mux2(c) = mux3(2,1);
    F = value(F{1});
    X = value(sigmax{1});
    f1{c} = F;
    X1{c} = X;
    K = F*inv(X);
    % K = value(K{1})
    K1{c} = K;
    K2 = value(K1{c});
    x = value(x);
    u1 = u + K2*(x-mux3);                % feedback-corrected input
    u2{c} = u1;
    x = A*x + B*u1 + G*w(:,c);           % plant update with disturbance
    mux{1} = value(mux{2});              % shift horizon for the next step
    sigmax{1} = value(sigmax{2});
    ud = value(u2{c});
    ud1(c) = ud(1,1);
    ud2(c) = ud(2,1);
    x1(c) = x(1,1);
    x2(c) = x(2,1);
end


Johan Löfberg

Aug 30, 2019, 8:58:12 AM
to YALMIP
Hence, if you want to optimize something related to mux1 and mux2 using a black-box solver, define that something as the output, and whatever it is you want to optimize (Q and R, which you currently fix at 50?) as inputs. Thus you have your black box y = simulateclosedloopresponsewiththeseQandRandcomputeObjective(x).
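
As a rough sketch (function names hypothetical; I am assuming q and r enter as scalar weights on Q and R):

function F = tuneQR(x)
% x = [q;r]: the tuning parameters chosen by the black-box solver
Q = x(1)*eye(2);                        % assumed parameterization of Q
R = x(2)*eye(2);                        % assumed parameterization of R
[mux1,mux2] = simulateclosedloop(Q,R);  % your simulation loop, refactored into a function
yref = 0;                               % the reference you said you use
F = sum((yref-mux1).^2) + sum((yref-mux2).^2);  % SSE over both outputs
end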

BTW, your simulation code is violently slow, as you repeatedly set up the problem from scratch. It clearly looks like you could do everything using an optimizer object parameterized in the initial state mux{1}.
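
For illustration only, a tiny self-contained sketch of the optimizer pattern (not your full problem; only the system matrices are taken from your script):

A = [1.02 -0.1;0.1 0.98]; B = [0.5 0;0.05 0.5];
x0 = sdpvar(2,1);                  % parameter: current state
u  = sdpvar(2,1);                  % decision: control move
Constraints = [-1 <= u <= 1];
Objective = norm(A*x0 + B*u);      % drive the successor state toward the origin
P = optimizer(Constraints,Objective,sdpsettings('solver','sedumi'),x0,u);
uopt = P([-2;2]);                  % repeated calls reuse the precompiled model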

Ali Esmaeilpour

Sep 2, 2019, 12:04:34 PM
to YALMIP
Excuse me professor, talking about my optimization program's speed: how can I make it faster?

Johan Löfberg

Sep 2, 2019, 12:21:30 PM
to YALMIP
Impossible to answer without knowing what you are doing.