Running CBC Solver in parallel


Shawwash

Jun 27, 2018, 8:11:19 PM6/27/18
to AMPL Modeling Language
I am trying to run the COIN-OR CBC solver on a binary problem. The problem is not large.
I set the following CBC options:
cbc_options 'loglevel=50 maxN=25000';

Adjusted problem:
6046 variables:
        713 binary variables
        5333 linear variables
4278 constraints, all linear; 15955 nonzeros
        920 equality constraints
        3358 inequality constraints
1 linear objective; 714 nonzeros.

It takes about 235 seconds to solve, and I am trying to get it to solve in less than 60 seconds, so I tried solving the problem in parallel with the following CBC options:
cbc_options 'loglevel=50 maxN=25000 threads=12';

As CBC starts to solve the problem it uses 13 threads, and then it stalls.
I noticed that it stalls while it is using the CLP linear solver.
Here is the log at the point where it stalled:

Cbc0014I Cut generator 7 (ImplicationCuts) - 0 row cuts average 0.0 elements, 1 column cuts (1 active)
Clp0061I Crunch 3383 (-364) rows, 5848 (-182) columns and 25482 (-1302) elements
Clp1001I Initial range of elements is 2.0510979e-006 to 7376.4984
Clp1003I Final range of elements is 0.0023036211 to 100
Clp0000I Optimal - objective value 880916.2
Clp0006I 0  Obj 880916.2 Primal inf 0.45711629 (1)
.
.
Clp0006I 0  Obj 880916.2 Primal inf 0.11393885 (1)
Clp0006I 0  Obj 880916.2 Primal inf 0.19561808 (1)
Node 1 depth 1 unsatisfied 99 sum 3.7911e+014 obj 880916 guess 880934 branching on 5966
Coin0506I Presolve 1726 (-1317) rows, 4809 (-1221) columns and 10420 (-4284) elements
Cgl0009I 1 elements changed
Cgl0004I processed model has 1722 rows, 4807 columns (63 integer (63 of which binary)) and 10410 elements
Clp0014I Perturbing problem by 0.001 % of 698.95915 - largest nonzero change 7.0877675e-005 (% 0.030738378) - largest zero change 7.0866012e-005
Clp0006I 0  Obj 880898.98 Primal inf 0.69072781 (1)
Clp0000I Optimal - objective value 880906.31
Clp0014I Perturbing problem by 0.001 % of 698.95915 - largest nonzero change 7.0664295e-005 (% 0.023232201) - largest zero change 7.0642473e-005
Clp0006I 0  Obj 880906.31 Primal inf 46.547774 (51)
Clp0006I 97  Obj 880916.46
Clp0000I Optimal - objective value 880916.46
Clp0000I Optimal - objective value 880916.46
Clp0014I Perturbing problem by 0.001 % of 698.95915 - largest nonzero change 7.0485231e-005 (% 0.12733209) - largest zero change 7.0470547e-005
Clp0006I 0  Obj 880916.46 Primal inf 82.105354 (54)
Clp0006I 97  Obj 880


Any help on resolving this problem is much appreciated.
Cheers,
Ziad

AMPL Google Group

Jun 30, 2018, 6:52:37 PM6/30/18
to Ampl Modeling Language
Have you tried specifying a number of threads greater than 1 but less than 12? Perhaps a somewhat smaller number will still give you a substantial speedup without triggering the problem in CLP that you see.
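To compare a few thread counts, you could time each solve in a loop. Here is a hypothetical AMPL sketch (it assumes your model and data are already loaded; _solve_elapsed_time is AMPL's built-in timer for the most recent solve, and the thread counts shown are just examples):

    option solver cbc;
    for {t in {2, 4, 6, 8}} {
        # rebuild the option string for each thread count
        option cbc_options ('loglevel=50 maxN=25000 threads=' & t);
        solve;
        display t, _solve_elapsed_time;
    }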

If you want to get the situation with threads=12 fixed, however, then you will need to take the steps that are appropriate for getting support for an open-source solver. Consult the COIN-OR pages to find the contact information for the community that supports CBC/CLP, and then send them a problem report. For best results, you should include sample files that can be run to reproduce the problem, as well as a listing of the entire log from the CBC run that shows it. Also, if you put the statement "write gtest;" before "solve;", then AMPL will produce a problem file test.nl, which will allow your problem to be submitted to CBC even on a computer that does not have AMPL installed.
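A sketch of that workflow (the stub name "test" comes from "write gtest;"; the exact command-line form of the AMPL-enabled cbc driver may vary by build, so treat the invocation below as an assumption to check against your CBC documentation):

    ampl: write gtest;    # writes test.nl before solving
    ampl: solve;

Then, on any machine that has the AMPL-enabled cbc executable but no AMPL installation:

    cbc test -AMPL threads=12 loglevel=50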

--
Robert Fourer
am...@googlegroups.com


