Problem with unbounded objective function

YalmipBeginner

Aug 21, 2016, 6:01:38 AM
to YALMIP


Dear Mr. Lofberg,

I am working on an algorithm that solves a convex optimization problem iteratively. I first implemented the code in CVX, but because it ran slowly I switched to YALMIP after reading about optimizer, which lets you predefine the model and save a lot of time. In my case, the only thing that changes between iterations is the objective function, so I used the optimizer command. However, I get the result that the objective function is unbounded, which gives wrong results. There is a part of the objective function which, when added, causes this problem: a sum of two squared norms. More precisely:

      penalty=z-[s(:,1);s(:,2);s(:,3)];
      penalty1=z_S0-S0;
      Cost(nr)=Dist.Price*El(:,nr)+((0.5)/c)*(penalty'*penalty+penalty1'*penalty1);
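
For context, the general optimizer pattern I am following looks roughly like this (just a simplified sketch with made-up names and sizes, not the actual model I sent by email):

    % Simplified sketch of the optimizer pattern (made-up names and sizes)
    x     = sdpvar(3,1);                    % decision variables
    price = sdpvar(1,3);                    % parameter that changes every iteration
    Con   = [0 <= x <= 10, sum(x) == 5];
    Obj   = price*x;                        % only the objective depends on the parameter
    P     = optimizer(Con, Obj, sdpsettings('solver','mosek'), price, x);  % compiled once
    xk    = P([1 2 3]);                     % then solved repeatedly with new parameter values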

I would be extremely thankful for your help,

Kindest regards,

Yalmipnewbie

P.S. I have sent the code to your email.

The solver I used is Mosek 7.1.0.54, with MATLAB R2014a.



Johan Löfberg

Aug 21, 2016, 7:10:26 AM
to YALMIP
YALMIP will not be able to realize that the constraints are SOCP-representable once iteration is fixed, but will simply assume that they are LPs once iteration is fixed (I think). Hence, you'll have to write the parameterized SOCP constraints using the cone operator. YALMIP cannot deal with models where the structure changes depending on whether parameter variables are fixed or not (from general nonlinear elementwise to SOCP-representable).

As the code is now though, it is not clear whether you actually want the objective to be the max of a set of quadratic functions, or a sum of quadratics (as you only have one term). If you meant it to be a sum, you should just sum the terms, and YALMIP will be able to derive the quadratic objective once optimizer is called.

However, to simplify things, don't define c = alpha/(1+iteration) and then use 1/c in the model with iteration as the parameter. Simply define gamma as a parameter, use gamma*quadratic, and then compute gamma as (1+iteration)/alpha when you have the data. This will simplify the model both before and during the solver call, and reduce the chance that YALMIP introduces auxiliary variables which create a nonlinear model (I would not be surprised if the internal model looks like (newvariable)*quadratic, newvariable == 1/anothervariable, anothervariable == 1+iteration; that model is not linear even when the parameter is fixed).
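
Something along these lines (just a sketch with made-up data and sizes, not your actual model):

    % Sketch: parameterize directly in the weight gamma instead of in iteration
    x     = sdpvar(3,1);
    xref  = [1;2;3];                        % fixed data in this sketch
    gamma = sdpvar(1,1);                    % penalty weight, the only parameter
    Obj   = sum(x) + gamma*((x-xref)'*(x-xref));
    Con   = [-10 <= x <= 10];
    P     = optimizer(Con, Obj, sdpsettings('solver','mosek'), gamma, x);

    % At solve time, compute the weight from the iteration counter yourself
    alpha    = 2; iteration = 5;            % made-up numbers
    gammaval = (1+iteration)/alpha;         % i.e. 1/c with c = alpha/(1+iteration)
    xk       = P(gammaval);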

YalmipBeginner

Aug 21, 2016, 9:26:44 AM
to YALMIP
Dear Prof. Lofberg,

Thanks for the very quick reply.

I don't know if I interpreted your response well; in particular, I am not sure about the middle paragraph. My goal is to minimize the upper bound of the function Cost, which is a sum of 2 terms (now I wrote it as 3 terms). I made the changes according to your post by putting:

    gamma = sdpvar(1,1);  % gamma is a new parameter, iteration is gone

    penalty  = z - [s(:,1); s(:,2); s(:,3)];
    penalty1 = z_S0 - S0;
    square1  = penalty'*penalty;
    square2  = penalty1'*penalty1;

    % Cost
    Cost(nr) = Dist.Price*El(:,nr) + gamma*square1 + gamma*square2;

But I still get the same result. Did I interpret the change I should make correctly?

Johan Löfberg

Aug 21, 2016, 10:20:24 AM
to YALMIP
As I said, since you want to end up with SOCP constraints, you have to make sure your constraints really are SOCP cones to begin with. YALMIP will not be able to restructure an initial elementwise constraint into an SOCP once it is inside optimizer and the parameter value is fixed.

    a + z^2*(x'*x + y'*y) <= t

where z is a parameter, is written as

    cone([2*z*x; 2*z*y; 1-t+a], 1+t-a)

etc.
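
This uses the standard identity u'*u <= s  <=>  norm([2*u; 1-s]) <= 1+s, here with u = [z*x; z*y] and s = t-a. As a toy sketch of how such a parameterized cone could look inside optimizer (made-up sizes and data, not your model):

    % Toy sketch: parameterized SOCP written directly with the cone operator
    x = sdpvar(2,1); y = sdpvar(2,1); t = sdpvar(1,1);
    z = sdpvar(1,1);                        % the parameter
    a = 1;                                  % a fixed offset in this sketch
    % a + z^2*(x'*x + y'*y) <= t as a second-order cone constraint:
    Con  = [cone([2*z*x; 2*z*y; 1-t+a], 1+t-a), -5 <= [x;y] <= 5];
    P    = optimizer(Con, t, sdpsettings('solver','mosek'), z, t);
    tmin = P(0.7);                          % solve with the parameter fixed to 0.7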




YalmipBeginner

Aug 21, 2016, 11:38:08 AM
to YALMIP
Thanks a lot ! It works now !!
Special thanks for Yalmip !

YalmipBeginner

Aug 21, 2016, 4:04:25 PM
to YALMIP
Dear Mr. Lofberg,

The optimizer works now, but the execution of the code is much slower compared to the case when I simply use the optimize command in every iteration. I suppose it has something to do with the conic formulation I have written, but I have no idea how I could possibly make it faster?


Johan Löfberg

Aug 21, 2016, 4:06:38 PM
to YALMIP
Supply code for the two cases to compare

YalmipBeginner

Aug 21, 2016, 4:38:30 PM
to YALMIP
I have to correct myself; what I said was not true. The optimize command is much faster, but I used it for the first pre-iteration computations, and the cost function of that pre-iteration doesn't include the 2 penalty terms, which probably explains the difference. Anyway, is there a way to speed up the optimizer call? I have sent you the code. The code I am sending is for 1 scenario, and the number of scenarios I need to simulate is much bigger; it would take far too much time at this speed.

Johan Löfberg

Aug 22, 2016, 1:53:50 AM
to YALMIP
Well, optimizer takes 19.6 seconds of which 19.5 is spent in mosek, so it is simply a large problem.

However, the fact that you create a variable "suma" which sums up quadratic terms once again makes me uncertain whether you actually want to minimize the maximum of a set of quadratics, as you do now. Is that really the intention?

And by the way, although it probably works, it is much better to parameterize in terms of a simple variable gamma*... instead of sqrt(gamma), which will add overhead both in preprocessing and when fixing variables.

YalmipBeginner

Aug 22, 2016, 5:40:00 AM
to YALMIP
Dear Prof. Lofberg,

The execution time on my computer is 221.175 seconds. I am running on a server (Linux OS), with MATLAB R2014a and Mosek 7.1.0.54.

I want to minimize the sum of the quadratics, not their maximum, i.e., the cost function I am trying to minimize is:

J = Cost1 + suma (more precisely, I want to minimize it in the worst-case scenario, which I have defined by the cone constraint). I wrote it as

    C1 = cone([2*gamma*penalty; 2*gamma*penalty1; 1-h+Cost1(:)], 1+h-Cost1(:));

Is that correct?

Best regards,

YalmipBeginner

Johan Löfberg

Aug 22, 2016, 6:47:03 AM
to YALMIP
Which version of YALMIP? I tried both my develop version and the official latest release, and they both take 38 seconds, of which 19 are spent in the optimizer call, of which 99% is spent in the solver.

You'll have to profile it to see where time is spent

Johan Löfberg

Aug 22, 2016, 6:59:30 AM
to YALMIP
I take that back. The time appears to vary due to varying solve times in the mixed-integer SOCP solver of Mosek.

You get a mixed-integer model because you have the max operator inside the cone operator. This is not guaranteed to be structurally convex and representable by epigraphs, so YALMIP has to model the max by introducing integer variables.

Hence, you should get the max part outside the cone. I.e., you model max(...) + x'*x <= t, max(...) + y'*y <= t as max(...) + s1 <= t, max(...) + s2 <= t, and then use cones to model x'*x <= s1 and y'*y <= s2.
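
In code, something like this (a sketch with made-up names; w stands for whatever is inside your max):

    % Sketch: pull the quadratics out via epigraph variables s1, s2
    x  = sdpvar(2,1); y = sdpvar(2,1);
    w  = sdpvar(3,1);                       % terms inside max(), made up here
    t  = sdpvar(1,1); s1 = sdpvar(1,1); s2 = sdpvar(1,1);
    Con = [max(w) + s1 <= t, ...            % max only appears in LP-representable rows
           max(w) + s2 <= t, ...
           cone([2*x; 1-s1], 1+s1), ...     % x'*x <= s1 as a second-order cone
           cone([2*y; 1-s2], 1+s2)];        % y'*y <= s2 as a second-order cone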

YalmipBeginner

Aug 22, 2016, 11:19:05 AM
to YALMIP
Dear Mr. Lofberg,

Thanks a lot. Now the optimizer call takes less than 2 seconds (achieved by getting the max operator out of the cone constraint).

One last question: could you recommend some literature on SOCP, possibly including its usage in YALMIP?

Best regards,

YalmipBeginner

Johan Löfberg

Aug 22, 2016, 11:27:40 AM
to YALMIP
Pretty much everything you find by googling second-order cone programming.

Note that all the issues you've had with this model come from the fact that you are precompiling it. That adds a massive layer of odd stuff that can happen before YALMIP can create something which can be used efficiently in a compiled format.

Erling D. Andersen

Aug 24, 2016, 5:28:59 AM
to YALMIP
The MOSEK modeling cookbook has a lot of info about SOCP that may be useful. Well, we call it conic quadratic optimization, but it is the same thing.

Enjoy.