Lagrange Relaxation of difficult constraints


Christin

Nov 27, 2014, 12:11:20 PM
Dear AIMMS-Team,

My problem is a large-scale mixed integer linear program and I am using CPLEX as the solver.
I would like to improve my solving time by using Lagrangian relaxation, but I can't find any example or further instructions.
My main idea is to relax my difficult constraints to improve the solving time, which is the aim of Lagrangian relaxation. For this I need to know the Lagrange multiplier of each relaxed constraint.

I already saw the GMP function GMP::Linearization::GetLagrangeMultiplier, but I can't find any example of how to use it. Additionally, it seems to me that this function is used in the case of constraint linearization, which is not the case in my problem.
Furthermore, are the shadow prices described in the "Outer Approximation Functions" equal to the Lagrange multipliers?

I would like to know how to tell AIMMS which (already linear) constraints it should relax and how to get the multipliers afterwards, so that I can then solve the problem with these multipliers. Furthermore, it would be good to know which algorithm AIMMS uses to obtain the multipliers (subgradient, outer approximation, etc.).

I am very thankful for any help or further instructions.

Best regards,
Christin

Christin

Dec 1, 2014, 9:44:30 AM

Anyone?

I just need to be sure that there is no other way than implementing the subgradient method in CPLEX myself.

Marcel Hunting

Dec 2, 2014, 4:52:22 AM
Hi,

In Lagrange relaxation you have to find good starting values for the Lagrange multipliers and a suitable algorithm to update the multipliers. For the starting values you could solve the relaxed problem (in which all integer variables are made continuous) and then use the dual values of the difficult constraints. These dual values are available as shadow prices.

If you want to implement Lagrange relaxation in AIMMS you have to create a new mathematical program for the relaxed problem in which the difficult constraints have been moved to the objective, multiplied by the Lagrange multipliers.

The function GMP::Linearization::GetLagrangeMultiplier just returns the shadow price of a constraint during the Outer Approximation algorithm and it cannot be used for implementing Lagrange relaxation. You have to implement a method for updating the Lagrange multipliers, e.g., the subgradient method, yourself.

Note that Lagrange relaxation will not give you an (optimal) solution for your original problem unless your problem satisfies the so-called integrality property, but that is usually not the case. More information can be found here: http://www.oscogen.ethz.ch/reports/oscogen_dp3.pdf .
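To make the multiplier-update scheme concrete, here is a minimal pure-Python sketch of the subgradient method applied to a Lagrangian relaxation of a tiny knapsack problem. This is an illustration only, not AIMMS code, and all numbers (coefficients, the target value in the step length) are made up:

```python
# Lagrangian relaxation of the knapsack constraint in
#   max sum_j c[j]*x[j]   s.t.   sum_j a[j]*x[j] <= b,   x binary.
# For a multiplier u >= 0 the relaxed problem separates per item:
#   L(u) = u*b + sum_j max(0, c[j] - u*a[j]) >= optimum,
# so minimizing L(u) over u >= 0 yields the tightest such upper bound.
# All numbers below are made up for illustration.

c = [10, 7, 25, 24]   # objective coefficients
a = [2, 1, 6, 5]      # knapsack coefficients
b = 7                 # right-hand side
target = 34           # objective of a known feasible solution (items 1 and 4)

def solve_relaxed(u):
    """Solve the relaxed problem for fixed u: pick item j iff c[j] - u*a[j] > 0."""
    x = [1 if cj - u * aj > 0 else 0 for cj, aj in zip(c, a)]
    bound = u * b + sum(xj * (cj - u * aj) for xj, cj, aj in zip(x, c, a))
    return bound, x

u, scale = 0.0, 2.0
best_bound = float("inf")
for k in range(100):
    bound, x = solve_relaxed(u)
    if bound < best_bound:
        best_bound = bound      # tighter upper bound found
    else:
        scale /= 2              # halve the step scale when the bound stalls
    g = b - sum(aj * xj for aj, xj in zip(a, x))   # subgradient of L at u
    if g == 0:
        break                   # relaxed constraint holds exactly; stop
    step = scale * (bound - target) / (g * g)      # Polyak-type step length
    u = max(0.0, u - step * g)  # move against the subgradient, keep u >= 0

print(round(best_bound, 2))    # best upper bound found; the true optimum is 34
```

The same ingredients (multiplier update, projection onto u >= 0, halving of the step scale when progress stalls) appear in the AIMMS loop discussed later in this thread.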

Best regards,

Marcel Hunting
AIMMS Optimization Specialist

Christin

Dec 3, 2014, 5:21:05 PM
Hello Marcel,

thank you for your answer.

I know that with Lagrangian relaxation I won't get the optimal value, but we want to improve the solving time and then look at the gap to the optimal value.

Unfortunately, I tried to read through some manuals but couldn't find anything similar, and I think it would take too much time to implement it myself: I am a beginner with AIMMS and have so far only implemented Benders decomposition and some models with Excel input and output.
So I think I have to try it with CPLEX first, because I found a similar example there, and then maybe with AIMMS again.

Thank you again, I think it's great to get professional support in this forum.

best regards,
Christin

Christin

Dec 3, 2014, 7:44:43 PM
Hello again,

OK, I guess I have to try it in AIMMS after all, because my solution in CPLEX is wrong.
For the subgradient method I tried something like this:

scale := 2;  ! step scale for the subgradient method

u(i) := 0;   ! all Lagrange multipliers start at zero

for ( k ) do  ! loop for the iterative approximation of u

    solve LagrangianProblem;  ! hypothetical name of the relaxed math program, whose objective contains the relaxed constraints
    Lagrangian := ObjectiveVar;

    slack(i) := 1 - sum( j, x(i,j) );  ! value of the relaxed constraint (the subgradient) at the current optimal decision variables

    step := scale * ( 19 - Lagrangian ) / sum( i, slack(i)^2 );  ! 19 is the bound used in the step length

    for ( i ) do  ! update u, projected onto u >= 0
        temp(i) := u(i);
        if ( temp(i) + step * slack(i) > 0 ) then
            u(i) := temp(i) + step * slack(i);
        else
            u(i) := 0;
        endif;
    endfor;

    help := 0;
    for ( i ) do  ! count multipliers that hardly changed
        if ( u(i) - temp(i) < 0.0001 ) then
            help := help + 1;
        endif;
    endfor;
    if ( help >= 1 ) then
        scale := scale / 2;  ! halve the scale when the multipliers stall
    endif;

endfor;

But I get the error that "for" is not expected. Furthermore, I have a question: if I change the values of u(i) (the Lagrange multipliers) at the end of iteration k and solve the mathematical program in iteration (k+1), does it use the new values of u(i) or the old ones?

best regards and thank you
christin
   

Marcel Hunting

Dec 4, 2014, 4:47:06 AM
Hi,

You should use ';' after every 'endfor' and 'endif'.

If you solve the mathematical program in iteration (k+1), then AIMMS will use the new values of u(i), provided u is a parameter used inside that mathematical program.

Best regards,

Marcel

Christin

Dec 4, 2014, 4:34:51 PM
Hello Marcel

Thank you so much for your answer! I had a few other mistakes too, but I have already corrected them. It almost works now :)

Best regards,
christin

Christin

Dec 9, 2014, 2:10:23 PM
Dear AIMMS-Support-Team,

I successfully implemented the Lagrangian relaxation of a simple example and now want to extend it to a bigger problem. For this, I have some questions about getting the starting values. The literature says to use the dual values of the relaxed constraint as starting values.
In my problem I have 14 constraints, and I want to relax constraint no. 8.
So I create my dual problem by:

myGMP:= GMP::Instance::Generate(HubLocation);

GMP::Instance::CreateDual(myGMP,DualHubLocation) ;

and solve it afterwards.

GMP::Instance::Solve(DualHubLocation);

 Constraint8.Shadowprice ;

First of all, I get the error that "CreateDual" is not expected.

Second, how do I know which decision variables belong to which constraint? It's a really large problem: it has 3 indices for every constraint. And I need all the dual variables of only constraint 8.

I'm also not sure about the use of ShadowPrice. I found the definition "which are associated with constraints and their right-hand side". My constraint rather has the structure Ax + By + Cy <= M + Dz.

Marcel Hunting

Dec 10, 2014, 4:27:40 AM
Hi,

You should use

   DualHubLocation := GMP::Instance::CreateDual(myGMP,'DualModel') ;

where DualHubLocation is an element parameter with range AllGeneratedMathematicalPrograms. After the solve, the shadow price of Constraint8 can be accessed by using

   Constraint8.Shadowprice ;

Please note that you should specify the ShadowPrice property for this constraint; otherwise AIMMS will not store the shadow prices after the solve.

The shadow price is independent of the structure of your constraint. It does not matter if you use Ax + By + Cy <= M + Dz or Ax + By + Cy - Dz <= M.
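As a toy illustration of this point (a made-up one-variable LP, not AIMMS code): the shadow price, i.e. the change in the optimal objective per unit increase of the right-hand side, comes out the same however the constraint is arranged, because both arrangements describe the same feasible set:

```python
# Toy LP (all numbers made up):  max 3x  s.t.  x <= M + D*z  with z fixed.
# Writing the constraint as  x <= M + 2z  or as  x - 2z <= M  gives the same
# feasible set, so the optimum and the shadow price with respect to M agree.

D, z = 2.0, 1.0

def optimum(M):
    # the maximum of 3x subject to x <= M + D*z is attained at x = M + D*z
    return 3.0 * (M + D * z)

M = 7.0
shadow = optimum(M + 1.0) - optimum(M)  # finite-difference shadow price w.r.t. M
print(shadow)  # 3.0 in either formulation
```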

I do not understand this question: "how do I know which decision variables belong to which constraint?".

Best regards,

Marcel

Christin

Dec 10, 2014, 5:43:25 AM
Hello Marcel,

Thank you for your answer! It really helped. I specified the ShadowPrice property for Constraint8.
Now I have:

myGMp:= GMP::Instance::Generate(HubLocation);
DualHubLocation:=GMP::Instance::CreateDual(myGMP,DualModel);
GMP::Instance::Solve(DualHubLocation);
for(k,g,s) do
Multiplikator[k,g,s]:= Constraint8[k,g,s].Shadowprice;
endfor;

But now I get the error "The symbol Shadowprice is not expected".

Thank you very much.
Best regards,
Christin

Marcel Hunting

Dec 10, 2014, 7:31:43 AM
Hi Christin,

Please try using Constraint8(k,g,s).Shadowprice. So, using '(' and ')' instead of '[' and ']'.

Best regards,

Marcel

Christin

Dec 10, 2014, 5:08:35 PM
Thank you!

It's working now, but all my multipliers are zero. I think there is something wrong...

Best regards,
Christin

Marcel Hunting

Dec 11, 2014, 3:49:25 AM
Hi,

Did you set the ShadowPrice property of that constraint? It could be that these constraints are not binding, namely if the value of the left-hand side is strictly smaller than the right-hand side; this can be checked by printing the constraint listing.

For more advice I would need your project. In that case, could you please attach your project here (or send it to our support if it contains confidential information)?

Best regards,

Marcel

Christin

Dec 11, 2014, 4:00:47 AM

Hello Marcel,

Yes, I already set the ShadowPrice property of the constraint. The constraint is definitely restricting, because including or excluding it changes the result a lot.

I will send my project to AIMMS support.
Thank you!

Marcel Hunting

Dec 11, 2014, 9:40:38 AM
Hi Christin,

After solving the dual problem, the shadow prices of Constraint8 are indeed 0. Instead of solving the dual problem you can also solve the relaxed primal problem, as it will automatically provide shadow prices:

myGMP:= GMP::Instance::Generate(HubLocation);
GMP::Instance::SetMathematicalProgrammingType( myGMP, 'LP' );
GMP::Instance::Solve(myGMP);

Note that I changed the mathematical programming type to LP. That way you do not have to change the binary variables into [0,1] variables manually.

For this instance, solving this relaxed primal problem is done much faster than solving the dual problem.

Also for the relaxed primal problem, the shadow prices returned for Constraint8 are all 0. This can be seen by opening the Math Program Inspector (MPI) after the solve and selecting the Constraint Solution tab. It shows that some of the Constraint8 constraints are binding, yet the corresponding shadow price is 0 (the shadow prices are displayed in the 'Marginal' column in the MPI). This indicates that the relaxed primal problem is degenerate. Note that the shadow prices/marginals of Constraint1 are greater than 0, which shows that the relaxed primal problem was solved correctly and that shadow prices were calculated.
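The effect described here can be reproduced on paper with a tiny degenerate LP (a made-up illustration in Python, not AIMMS code): when more constraints are binding at the optimum than are needed to pin it down, the optimal dual solution is not unique, and a solver may legitimately report a zero shadow price for a binding constraint.

```python
# Degenerate toy LP (made-up numbers):
#   max y   s.t.   y <= 1  (c1),   x + y <= 1  (c2),   x, y >= 0.
# The optimum is (x, y) = (0, 1) with value 1, and BOTH constraints are
# binding there. The dual problem is
#   min l1 + l2   s.t.   l2 >= 0,   l1 + l2 >= 1,   l1, l2 >= 0,
# and both (l1, l2) = (1, 0) and (0, 1) attain the optimal dual value 1.
# A solver may therefore return shadow price 0 for either binding constraint.

def dual_feasible(l1, l2):
    return l1 >= 0 and l2 >= 0 and l1 + l2 >= 1

primal_opt = 1.0  # objective value at the primal optimum (x, y) = (0, 1)

for l1, l2 in [(1.0, 0.0), (0.0, 1.0)]:
    assert dual_feasible(l1, l2)
    assert l1 + l2 == primal_opt  # dual objective equals primal optimum: optimal

print("two optimal dual solutions; shadow prices of binding constraints are not unique")
```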

Best regards,

Marcel

Christin

Dec 11, 2014, 3:23:05 PM
Thank you very much! This helped a lot!

best regards,
christin
