Modelling to minimize sum of differences where one value is constant


Salamino15

Jun 24, 2018, 12:43:18 PM6/24/18
to YALMIP

Hi there,


I'm modelling the two objective functions below and would like to make sure that I'm doing so correctly.


In both problems, I'm trying to minimize the sum of the differences between two values, given that the first value in both equations is a constant (p and f, respectively), and the second value is the decision variable (m and x, respectively).


Am I actually modelling them correctly in YALMIP? Does the first value get ignored in the optimization because it's a constant? I'm confused and would like some help please. Thanks :)



constraint = [b < m < p, ...
              p - m <= r_max, ...
              sum(p - m) == Q_req];

for i = 1:customers
    obj = obj + utility(i, p(i)) - utility(i, m(i));
end

options = sdpsettings('verbose',2);
sol = solvesdp(constraint,obj,options);




constraint = [b < p - x < p, sum(x) == Q_req, 0 < x <= r_max];

for i = 1:customers
    obj = obj + utility(i, f(i)) - utility(i, x(i));
end

options = sdpsettings('verbose',2);

sol = solvesdp(constraint,obj,options);



Johan Löfberg

Jun 24, 2018, 12:52:25 PM6/24/18
to YALMIP
Before anything else, are you optimizing over indices? I hope you realize that leads to very complicated integer models.

I don't understand your question about constants being ignored. (4-x)^2 has a constant, but of course it cannot be ignored.

(and strict inequalities are not supported, YALMIP will scream at you about that)

Johan Löfberg

Jun 24, 2018, 12:53:02 PM6/24/18
to YALMIP
...and solvesdp is obsolete. It is now called optimize.

Salamino15

Jun 24, 2018, 1:17:16 PM6/24/18
to YALMIP
Thanks for your reply :)

I worked on this quite a while ago and solvesdp worked for me... I didn't know it's now obsolete.

Sorry, I'm not that into optimization, so I'm not sure what you mean by optimizing over indices. I just need the array of decision variables that satisfies the objective and constraints. Does that mean I'm optimizing over indices?

Regarding constants - I was told a while back that because there's a constant in the objective function, there's no need to include it! So what I understood was that if fun1(x) evaluates to a constant and I'm minimizing sum(fun1(x) - fun2(y)), then I can drop fun1(x). I wasn't convinced, so I'm double-checking whether that's right, and whether I'm doing the whole modelling correctly.

Thanks for telling me about strict inequalities. But what if I use them and still get reasonable results?

Johan Löfberg

Jun 24, 2018, 2:17:49 PM6/24/18
to YALMIP
By optimization over indices I mean you have x(y), where x is either a vector of constants or of decision variables, and y is a decision variable. That is much more complicated than the linear operator x(i) where i is a constant. In your case, you say m is a decision variable, and you have u(m).

If you are minimizing x^2 + 5, you will get exactly the same optimal solution as when you minimize x^2, of course. The optimal objective will be different, naturally, as the 5 is missing.
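A minimal sketch of this point in YALMIP (not from the thread; it just illustrates that an additive constant shifts the objective value but not the minimizer):

```matlab
% Additive constants do not change the optimal point, only the
% optimal objective value.
x = sdpvar(1);
optimize([], x^2 + 5);    % optimal x is 0, objective value is 5
optimize([], x^2);        % optimal x is still 0, objective value is 0
```

So dropping a constant term from the objective is harmless for finding the optimal decision variables, but the reported objective value will differ by that constant.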

YALMIP simply replaces strict inequalities with non-strict.
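If you want the strictness to actually matter, the usual workaround is an explicit margin. A hedged sketch based on the first constraint set above (the margin value is an assumption, not from the thread, and is problem-specific):

```matlab
% Replace the strict b < m < p with non-strict inequalities plus an
% explicit margin, so the solver sees a well-posed constraint.
margin = 1e-6;   % assumed tolerance; pick something meaningful for your data
constraint = [b + margin <= m <= p - margin, ...
              p - m <= r_max, ...
              sum(p - m) == Q_req];
```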