Node Limit Parameter


Andrew Orso

Sep 19, 2014, 10:31:06 AM
to gur...@googlegroups.com
Hi,

I am using lazy constraints in my MIP model and am a bit confused. My lazy constraint cuts off an infeasible solution (think subtour elimination), and for problem-specific reasons I want to restrict my MIP to the root node. My assumption was that Gurobi would just solve the LP relaxation, enter the callback, which would add a cut, and then resolve. At this point I have all heuristics and cuts other than lazy constraints turned off, and I have NodeLimit = 1. But when I run my code, it says that it explored 101 nodes, even though it only added 4 lazy constraints, and the solution is suboptimal (which, if it worked as described above, it shouldn't be, due to some problem specifics). I don't understand this at all. What 101 nodes is it exploring if I am restricting the problem to a NodeLimit of 1?

Thanks,

Andrew

Renan Garcia

Sep 19, 2014, 11:14:27 AM
to gur...@googlegroups.com
Andrew,

When you write "it says that it explores 101 nodes", are you getting 101 from the logs or from an API attribute query? Can you post the logs from that run?

Which API are you using? If possible, can you also post the snippet where you set NodeLimit?

Thanks,
Renan

--

---
You received this message because you are subscribed to the Google Groups "Gurobi Optimization" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gurobi+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Andrew Orso

Sep 19, 2014, 12:05:04 PM
to gur...@googlegroups.com
Renan,

I am using C++ on Ubuntu 14.04, if that makes any difference. The snippet I used is:

    g_Model.getEnv().set(GRB_DoubleParam_NodeLimit,1);

and the output below is all from the command line, which is also where it says 101 nodes. I am pretty sure I am misunderstanding what Gurobi is doing; in fact, I'm sure I'm misunderstanding.

Optimize a model with 1 rows, 127 columns and 121 nonzeros
Variable types: 1 continuous, 126 integer (126 binary)

Root relaxation: objective -5.404926e+03, 4 iterations, 0.00 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0 -5404.9256    0    1          - -5404.9256     -      -    0s
     0     0 -3915.5459    0    2          - -3915.5459     -      -    0s
     0     6 -2976.9529    0    3          - -2976.9529     -      -    0s

Cutting planes:
  Lazy constraints: 4

Explored 101 nodes (155 simplex iterations) in 0.11 seconds
Thread count was 8 (of 8 available processors)

Thanks for your help.

Andrew

Renan Garcia

Sep 19, 2014, 12:42:11 PM
to gur...@googlegroups.com
Andrew,

It looks like you are doing everything correctly. I checked with the Gurobi developers, and they confirmed that the number of explored nodes can be slightly larger than NodeLimit because of parallelism. If you want the algorithm to terminate sooner, set the Threads parameter to 1.
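For concreteness, a minimal sketch of doing this in the C++ API, assuming a GRBModel named g_Model as in your earlier snippet (note that the LazyConstraints parameter also needs to be enabled whenever a callback adds lazy constraints):

```cpp
// Limit the branch-and-bound search and disable parallelism so the
// explored node count cannot overshoot NodeLimit.
g_Model.getEnv().set(GRB_DoubleParam_NodeLimit, 1);
g_Model.getEnv().set(GRB_IntParam_Threads, 1);
// Tell Gurobi that the callback will add lazy constraints.
g_Model.getEnv().set(GRB_IntParam_LazyConstraints, 1);
```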

-Renan

Andrew Orso

Sep 19, 2014, 1:00:37 PM
to gur...@googlegroups.com
Renan,

Thanks for that information. I'm now wondering what exactly is happening in the background, and whether there is a way to display more output than any normal human would want, so I can get an idea. My expectation was this: if I restrict NodeLimit to one and turn off all heuristics and cuts, then the root node relaxation gets solved, I generate a lazy constraint, another root relaxation is solved, and so forth until no lazy constraint is generated. That is not what my code is doing, though, so I'm a bit confused. I know this seems silly, since I could do normal column generation on the LP, but I'd eventually like to use some of the cuts Gurobi has to improve my computation time, so I'm trying to get a better understanding of what the MIP solver is doing.

Andrew

Renan Garcia

Sep 19, 2014, 1:24:25 PM
to gur...@googlegroups.com
Andrew,

> So I was wondering what exactly is happening in the background then and if there is a way to display more output than any normal human would want so I get an idea.

You can use callbacks to print extra information during the algorithm. See http://www.gurobi.com/documentation/5.6/reference-manual/callback_codes#sec:CallbackCodes for the entire list of 'what' information is available 'where' in the algorithm.

> To me, it seems like if I restrict NodeLimit to one and turn off all heuristics and cuts, then the root node relaxation will get solved, I'll generate a LazyConstraint and then another root node relaxation is solved and so forth until no LazyConstraint is generated. This is not what my code is doing though so I'm a bit confused.

I'm not clear on what your code *is* doing versus what you expected, but keep in mind that you can add lazy constraints for fractional solutions when where=MIPNODE and for integral solutions when where=MIPSOL. Do you handle both cases in your callback?
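As an illustrative sketch (the class name and comments are mine, not from your code), a callback covering both cases might be structured like this:

```cpp
class LazyCutCallback : public GRBCallback {
 protected:
  void callback() {
    if (where == GRB_CB_MIPSOL) {
      // An integral incumbent was found; separate it here and
      // call addLazy(...) to cut it off if it is infeasible.
    } else if (where == GRB_CB_MIPNODE &&
               getIntInfo(GRB_CB_MIPNODE_STATUS) == GRB_OPTIMAL) {
      // A node relaxation was solved to optimality; its possibly
      // fractional solution can be separated here as well.
    }
  }
};
```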

-Renan

Andrew Orso

Sep 19, 2014, 6:16:44 PM
to gur...@googlegroups.com
Renan,

The where component was something that I wasn't fully clear on so this helps. I guess I have a slightly more specific question regarding my model right now if you don't mind. I have binary variables in my model and they are the only variables outside of one continuous variable. I am printing the value of these variables in my callback and it seems ok for the first two or three iterations (agreeing with my LP implementation) but after that, it seems like the solver completely disregards the [0,1] restriction and the values get very strange which screws up the rest of the solve. Any ideas?

Thanks,

Andrew

Renan Garcia

Sep 19, 2014, 7:09:30 PM
to gur...@googlegroups.com
Andrew,

Can you be more specific about how you are accessing the values in your callback? Posting that bit of code would be helpful.

Thanks,
Renan

Andrew Orso

Sep 19, 2014, 7:18:33 PM
to gur...@googlegroups.com
class PolyCut: public GRBCallback
{
  public:
    GRBVar* vars;
    GRBVar* vvars;
    string xtype;
    map<int,double> xsetMap;
    Data_Read *ptrXDat;
    Data_Read2 *ptrXDat2;
    double xfNull;
    int n;
    map<int,double> setMap;

   
    PolyCut(GRBVar* xvars, GRBVar* yvar, int xn, string type, Data_Read* ptrData, Data_Read2* ptrData2,double fNull) {
      vars = xvars;
      n    = xn;
      vvars = yvar;
      xtype = type;
      ptrXDat = ptrData;
      ptrXDat2 = ptrData2;
      xfNull = fNull;
      }
  protected:
    void callback() {
      try {
        if (where == GRB_CB_MIPNODE || where == GRB_CB_MIPSOL) {
          vector<double> x(n);
          double y;
          vector<double> cutAdd(n);
          for (int i = 0; i < n; i++) {
            x[i] = getSolution(vars[i]);
          }
          y = getSolution(vvars[0]);



During the first few iterations, x seems to be what I expect, but then it gets really strange all of a sudden. The root relaxation should be on [0,1], since my x variables are binary, but on the third iteration I'll get numbers like 543, 456, ...; it makes no sense. A lot of this code was copied over from the TSP example and modified for my specific case, so maybe I mangled something when changing it. After I add a lazy cut, what exactly does Gurobi do? Is there any way to see the LP that is being solved after I add the lazy cut?

Renan Garcia

Sep 19, 2014, 7:42:48 PM
to gur...@googlegroups.com
Andrew,

GRBCallback::getSolution() retrieves the values for integral solutions when where=MIPSOL, and should not be called when where=MIPNODE. If you want to retrieve the values for fractional node relaxation solutions when where=MIPNODE, use GRBCallback::getNodeRel().
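A sketch of the corrected retrieval, reusing the names from your snippet (the MIPNODE status check is my addition; querying the relaxation only makes sense once the node LP has been solved to optimality):

```cpp
if (where == GRB_CB_MIPSOL) {
  // Integral incumbent: getSolution() is valid here.
  for (int i = 0; i < n; i++)
    x[i] = getSolution(vars[i]);
  y = getSolution(vvars[0]);
} else if (where == GRB_CB_MIPNODE &&
           getIntInfo(GRB_CB_MIPNODE_STATUS) == GRB_OPTIMAL) {
  // Fractional node relaxation: use getNodeRel() instead.
  for (int i = 0; i < n; i++)
    x[i] = getNodeRel(vars[i]);
  y = getNodeRel(vvars[0]);
}
```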

Renan

Andrew Orso

Sep 19, 2014, 7:52:33 PM
to gur...@googlegroups.com
Ah! I copied the TSP example not realizing there was a difference. I can't test it out at the moment but that seems like it will do the trick. Thank you so much for your help Renan!

Andrew

Peter Surovy

Sep 20, 2014, 7:58:50 PM
to gur...@googlegroups.com
Hello,

I have been following this topic regarding lazy constraints and TSP. I am working on a similar problem, so maybe you can give me some advice. I have a TSP-like model, and I am comparing the solution obtained using lazy constraints with one using a kind of Miller-Tucker-Zemlin formulation from Sherali HD, Driscoll PJ (2002).

Somehow the solution using lazy constraints is always slower. I wonder if I may be doing something wrong; thank you for any hints. Here are the logs, where you can see the difference in node counts and times:
-----------
Lazy:

Cutting planes:
  Gomory: 8
  Clique: 7
  Zero half: 10
  Lazy constraints: 1829

Explored 65545 nodes (1613840 simplex iterations) in 194.24 seconds
Thread count was 8 (of 8 available processors)

Optimal solution found (tolerance 1.00e-04)
Best objective 9.100000000000e+01, best bound 9.100000000000e+01, gap 0.0%

-------------
MTZ:

Cutting planes:
  Gomory: 24
  Implied bound: 522
  Clique: 13
  MIR: 32
  Zero half: 53

Explored 23806 nodes (1022890 simplex iterations) in 57.89 seconds
Thread count was 8 (of 8 available processors)

Optimal solution found (tolerance 1.00e-04)
Best objective 9.100000000000e+01, best bound 9.100000000000e+01, gap 0.0%
------------


As I said, thank you for any hints.
Peter

Andrew Orso

Sep 21, 2014, 4:13:23 PM
to gur...@googlegroups.com
Hi,

I wish I could help, Peter, but I am still having plenty of problems with my own code and am really just trying to wrap my head around what Gurobi is actually doing. The problem I am running into now is that my code generates the correct lazy constraints for a while, but then terminates long before it should. It generates a lazy constraint, which should prompt another LP relaxation solve and give me a new solution, but instead it just exits and reports that it is done.

Thanks,

Andrew

Renan Garcia

Sep 23, 2014, 10:02:41 AM
to gur...@googlegroups.com
Peter,

There's not much to go on here, but my guess is that the MTZ formulation is tighter (i.e., it has a better LP bound), which is why you are exploring almost three times as many nodes with the lazy-constraint approach. It also looks like the size of the MTZ formulation is not prohibitive (~18,000 simplex iterations per second). Assuming you are only adding lazy cuts for integral solutions (MIPSOL callbacks), you could try adding them for fractional solutions (MIPNODE callbacks) as well to reduce the number of explored nodes. However, it could simply be that the MTZ formulation is preferable for your problem.

Renan