Excessively long runtime to add cuts


acastillo

Sep 24, 2012, 10:19:02 PM
to coopr...@googlegroups.com
I noticed that the following lines in my script take an *extremely long time* to run:

instance.defn_Cut.add(iterCuts, (0, sum(instance.alpha_a1[iterCuts,x]*instance.z1[x,y]*instance.alpha_a1[iterCuts,y] + \
    instance.alpha_a1[iterCuts,x]*instance.z2[x,y]*instance.alpha_a2[iterCuts,y] + \
    instance.alpha_a2[iterCuts,x]*instance.z3[x,y]*instance.alpha_a1[iterCuts,y] + \
    instance.alpha_a2[iterCuts,x]*instance.z4[x,y]*instance.alpha_a2[iterCuts,y] for x in instance.X for y in instance.Y), None))

instance.preprocess()

This is similar to the Benders example where additional cuts are added to the master problem. For iterCuts=1 and for [x,y] where X = Y = {0, 1, 2, ..., 13}, the time is 11.374 seconds for the "add" line and 4.875 seconds for the "preprocess" line. This is for a relatively small test problem.

The instance.alpha_a1[iterCuts,y] and instance.alpha_a2[iterCuts,y] are parameters, which are set uniquely for each additional cut.  The z's are variables.

The long run time is making it difficult to even test the simplest of problems.  Any suggestions to improve the performance? 

Many Thanks, Anya

Watson, Jean-paul

Sep 24, 2012, 10:47:57 PM
to coopr...@googlegroups.com
The preprocess makes some sense, as this is a non-trivial operation to perform on a full instance. The add taking that long doesn't make a ton of sense, at least at first glance. How big are the index sets X and Y?

The other thing that would be interesting to see is the profile, which you can generate by something along the lines of: python -m cProfile -s time my_script.py

If you could post the output to the forum, or at least the first 20 lines or so, that can help us identify what might be going on.
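The same kind of profile can also be collected from inside a script using the standard-library cProfile and pstats modules. A minimal sketch (build_cuts below is just a stand-in workload, not part of the original script):

```python
import cProfile
import io
import pstats

def build_cuts(n):
    # Stand-in for the expensive expression construction being profiled.
    return sum(i * j for i in range(n) for j in range(n))

profiler = cProfile.Profile()
profiler.enable()
build_cuts(200)
profiler.disable()

# Sort by internal time and keep the top 20 entries, as suggested above.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("tottime").print_stats(20)
report = buf.getvalue()
```

The resulting report has the same ncalls/tottime/cumtime columns as the command-line version, which makes it easy to paste just the relevant lines into a forum post.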

Alternatively, or in addition, if you would like to send me your script off-line, I can take a look at it in confidence.

Pyomo is still written in Python, so some things can take non-trivial amounts of time. There are ways to improve the instance preprocess, and I can discuss those with you as well.

Another question relating to performance: What version of Coopr are you using? Or, more accurately, when did you install? There are many changes in Coopr 3.2 that yield significant run-time reductions.

jpw

acastillo

Sep 25, 2012, 5:48:51 PM
to coopr...@googlegroups.com
Thanks, that is a handy command. I do have Coopr 3.2.  I realized that when I run Python in debugging mode (PyDev via Eclipse), the runtime is extremely slow. But, in execution mode, the runtime is not unusual (see below). I am not too concerned about runtime at the moment, but I may be more concerned when I get to the larger test problems.  Thanks!

         2731223 function calls (2718664 primitive calls) in 5.061 CPU seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    44715    0.566    0.000    1.095    0.000 canonical_repn.py:272(_collect_linear_prod)
    39348    0.318    0.000    0.424    0.000 expr.py:733(generate_expression)
39600/39490    0.315    0.000    1.424    0.000 canonical_repn.py:248(_collect_linear_sum)
      308    0.222    0.001    2.180    0.007 compute_canonical_repn.py:56(preprocess_constraint)
    44770    0.207    0.000    0.300    0.000 canonical_repn.py:333(_collect_linear_const)
        2    0.197    0.098    0.197    0.098 {posix.waitpid}
109994/109884    0.171    0.000    0.217    0.000 numvalue.py:68(value)
     3585    0.152    0.000    0.162    0.000 expr.py:634(pprint)
       56    0.145    0.003    0.325    0.006 constraint.py:566(pprint)
      155    0.111    0.001    0.111    0.001 {method 'read' of 'file' objects}
39600/39490    0.107    0.000    0.310    0.000 expr.py:307(polynomial_degree)
    40106    0.093    0.000    1.896    0.000 canonical_repn.py:74(generate_canonical_repn)
    45276    0.081    0.000    0.107    0.000 canonical_repn.py:317(_collect_linear_var)
   178805    0.074    0.000    0.097    0.000 expr.py:492(<genexpr>)
     3641    0.060    0.000    0.066    0.000 cpxlp.py:179(_print_expr_canonical)
    40106    0.060    0.000    1.484    0.000 canonical_repn.py:359(new_collect_linear_canonical_repn)
       47    0.056    0.001    1.923    0.041 __init__.py:11(<module>)
        1    0.045    0.045    0.166    0.166 cpxlp.py:394(_print_model_LP)
     1970    0.045    0.000    0.349    0.000 lp_base.py:468(<genexpr>)
    44715    0.044    0.000    0.174    0.000 expr.py:487(polynomial_degree)
   160490    0.040    0.000    0.040    0.000 var.py:168(polynomial_degree)
    44815    0.037    0.000    0.544    0.000 {sum}
       14    0.037    0.003    0.212    0.015 __init__.py:1(<module>)
     3650    0.036    0.000    0.080    0.000 constraint.py:313(add)
    40106    0.032    0.000    0.051    0.000 canonical_repn.py:627(canonical_is_constant)
   116221    0.030    0.000    0.030    0.000 numvalue.py:311(__call__)

Watson, Jean-paul

Sep 26, 2012, 1:49:58 PM
to coopr...@googlegroups.com
Thanks for the profile – the good news is that a good chunk of the time can be mitigated.

The basic issue relates to the "preprocess" call. What this is doing is translating from an expression tree into a "flat" representation, for use by a solver. This is inherently a non-trivial operation, but only really needs to be expensive once: after you instantiate a big model. If you are incrementally modifying a model (at least if you are only adding new constraints), then you only have to preprocess those new constraints. This will save you a ton of time, but we don't need to worry about that quite yet. When you are ready for "production" runs, let me know, and I'll guide you through the process of how to apply preprocessing incrementally. 
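The caching idea behind incremental preprocessing can be illustrated with a toy model in plain Python. This is not the Pyomo internals, just a sketch of the pattern: flatten every constraint once after instantiation, then flatten only constraints that have no cached flat representation yet.

```python
# Toy illustration of incremental preprocessing -- NOT the actual Pyomo API.

def flatten(expr):
    # Stand-in for the expensive per-constraint canonicalization.
    return sorted(expr)

class Model:
    def __init__(self):
        self.constraints = {}
        self.flat = {}  # cached "flat" representations, keyed by constraint name

    def add_constraint(self, name, expr):
        self.constraints[name] = expr

    def preprocess_all(self):
        # Full preprocess: re-flattens every constraint in the model.
        for name, expr in self.constraints.items():
            self.flat[name] = flatten(expr)

    def preprocess_new(self):
        # Incremental preprocess: only constraints not yet flattened.
        for name, expr in self.constraints.items():
            if name not in self.flat:
                self.flat[name] = flatten(expr)

m = Model()
for i in range(1000):
    m.add_constraint("c%d" % i, [i, i - 1])
m.preprocess_all()               # pay the full cost once, after instantiation
m.add_constraint("cut1", [3, 2, 1])
m.preprocess_new()               # only "cut1" is flattened here
```

In a Benders-style loop that adds one cut per iteration, this turns the per-iteration preprocessing cost from O(all constraints) into O(new constraints).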

Carlos Suazo M.

May 5, 2015, 2:44:15 PM
to pyomo...@googlegroups.com, coopr...@googlegroups.com
JP,

Do you have any documentation to apply preprocessing incrementally? Or maybe an example?

Carlos

Watson, Jean-Paul

May 5, 2015, 2:56:28 PM
to pyomo...@googlegroups.com, coopr...@googlegroups.com
Yes – I’ll try to post an example later today…

jpw


Watson, Jean-Paul

May 6, 2015, 11:37:55 AM
to pyomo...@googlegroups.com
Here’s what you can do. First:

from pyomo.repn.compute_canonical_repn import preprocess_constraint

Then you just need to invoke something like:

preprocess_constraint(instance, instance.Plin_cons)

for the constraint you just modified.
This should help in terms of preprocess time. The only real drawback is that we don’t do this per-index as of yet, but we could easily add that if it would be helpful.

jpw

