Solving 20 LPs on 20 threads vs. solving them serially using 20 threads each

Hugh Medal

Aug 29, 2016, 12:14:51 PM
to Gurobi Optimization
Suppose I have 20 LPs I want to solve and I have 20 threads on my machine. I am wondering if you have any insight about which would be faster using Gurobi:

1. Solve them in parallel, each in its own process with a single thread (e.g., using Python's multiprocessing package).
2. Solve them serially, using 20 threads to solve each one (i.e., just calling model.optimize() and letting Gurobi use all of the available processors).

Before I ran some benchmark tests to find out, I wanted to see if you had any insights or experience about this.
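For concreteness, here is a minimal sketch of the two options, assuming the LPs are stored on disk as LP/MPS files and solved through gurobipy (the file names below are placeholders, not part of the original question):

import multiprocessing as mp
import gurobipy as gp

FILES = [f"lp_{i:02d}.lp" for i in range(20)]  # hypothetical model files

def solve_single_thread(path):
    # Option 1: one LP per worker process, one Gurobi thread each.
    m = gp.read(path)
    m.Params.Threads = 1
    m.optimize()
    return path, (m.ObjVal if m.SolCount > 0 else None)

def solve_serially_all_threads(paths, threads=20):
    # Option 2: solve the LPs one after another, 20 threads per solve.
    results = []
    for path in paths:
        m = gp.read(path)
        m.Params.Threads = threads
        m.optimize()
        results.append((path, m.ObjVal if m.SolCount > 0 else None))
    return results

if __name__ == "__main__":
    # Option 1: 20 worker processes, each solving one LP on a single thread.
    with mp.Pool(processes=20) as pool:
        parallel_results = pool.map(solve_single_thread, FILES)

    # Option 2: one LP at a time, letting Gurobi use all available threads.
    serial_results = solve_serially_all_threads(FILES)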

Thanks,
Hugh

des...@gurobi.com

Sep 13, 2016, 7:30:01 PM
to Gurobi Optimization

Hugh,

This is very problem-dependent, and it is hard to give a general recommendation either way. The best approach is to run some benchmarks and find the configuration that works best for your model.

The answer will depend on a number of factors, such as:

a) Is it an LP or a MIP?
b) Which algorithm works best for your model (barrier, dual simplex, or primal simplex)? Barrier is parallelizable, while simplex is not.
c) Do you need to run the crossover algorithm?
d) The size of your problem. Larger problems tend to parallelize better.
e) The sparsity of your constraint matrix (the A matrix).
f) Whether your constraint matrix has large blocks.
g) If you are solving MIPs, how well balanced the search tree is.

Also, I think the best configuration is possibly something between the two extreme configurations (one thread or 20 threads for a single LP). For example, solve 5 LPs, each using 4 threads, as sketched below.
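As a rough sketch of that intermediate configuration (again assuming gurobipy and placeholder model files on disk), it would look something like:

import multiprocessing as mp
import gurobipy as gp

FILES = [f"lp_{i:02d}.lp" for i in range(20)]  # hypothetical model files
N_WORKERS = 5           # 5 worker processes ...
THREADS_PER_SOLVE = 4   # ... times 4 threads each = 20 threads total

def solve(path):
    # Each worker solves its LPs with a limited thread count.
    m = gp.read(path)
    m.Params.Threads = THREADS_PER_SOLVE
    m.optimize()
    return path, m.Runtime

if __name__ == "__main__":
    with mp.Pool(processes=N_WORKERS) as pool:
        results = pool.map(solve, FILES)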


Dr. Amal de Silva
Gurobi Optimization

des...@gurobi.com

Sep 14, 2016, 12:48:45 PM
to Gurobi Optimization
Also, the Gurobi development team has done some tests and found that you get the best results when you run more threads than the number of cores. For example, something like 10 LPs using 4 threads each. Parallel efficiency varies during the solve, so having extra threads waiting around allows one solve to grab a CPU when another solve isn't using it.
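In terms of the multiprocessing sketch earlier in the thread, that would correspond to something like mp.Pool(processes=10) combined with m.Params.Threads = 4, so roughly 40 solver threads share the machine's 20 hardware threads, and whichever solve can use a free core at a given moment picks it up.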


Dr. Amal de Silva
Gurobi Optimization

