Using Python multiprocessing.Pool with Gurobi


Hugh Medal

Sep 24, 2013, 12:51:19 PM9/24/13
to gur...@googlegroups.com
Hello,

I am trying to use Python's multiprocessing library to solve many LPs in parallel (each LP has a different right-hand side). However, every time I run my code I get slightly different results for the optimal solutions of the LPs: below, "output" is not the same every time, and neither is "sum(output)". I am new to Python multiprocessing, so I could be missing something.

Thanks,
Hugh

Here is my code:

gurobiPool.py

from multiprocessing import Pool
import gurobipy

m = None

def f(x):
    m.optimize()
    m.getConstrs()[0].setAttr("rhs", 2.0 + x)
    return m.objVal
    
if __name__ == '__main__':
    #global gurobiModel
    
    # Create a new model
    m = gurobipy.Model("model")

    # Create variables
    x = m.addVar(vtype=gurobipy.GRB.BINARY, name="x")
    y = m.addVar(vtype=gurobipy.GRB.BINARY, name="y")
    z = m.addVar(vtype=gurobipy.GRB.BINARY, name="z")

    # Integrate new variables
    m.update()

    # Set objective
    m.setObjective(x + y + 2 * z, gurobipy.GRB.MAXIMIZE)

    # Add constraint: x + 2 y + 3 z <= 4
    m.addConstr(x + 2 * y + 3 * z <= 4, "c0")

    # Add constraint: x + y >= 1
    m.addConstr(x + y >= 1, "c1")
    m.setParam('OutputFlag', False )
    pool = Pool(processes=4)              # start 4 worker processes
    output = pool.map(f, range(10))
    print output
    print sum(output)

Jakob Sch.

Sep 24, 2013, 4:10:41 PM9/24/13
to gur...@googlegroups.com
Hi Hugh,
I haven't tested your code but I would suggest that you only have in reality one model and each subprocess changes the righthandside of that one model. You may want to copy your model (deepcopy that is) and then try to run your code, so every subprocess has it's own model.

Best regards,
Jakob

Jakob Sch.

Sep 24, 2013, 4:33:58 PM9/24/13
to gur...@googlegroups.com
Appendix:

Here is the code that should do what you need:

from multiprocessing import Pool
import gurobipy

m = None


def f(i):
    model = m.copy()
    model.getConstrs()[0].setAttr("rhs", 2.0 + i)
    model.optimize()
    # print model.objVal
    return model.objVal

   
if __name__ == '__main__':
    #global gurobiModel
   
    # Create a new model
    m = gurobipy.Model("model")

    # Create variables
    x = m.addVar(vtype=gurobipy.GRB.BINARY, name="x")
    y = m.addVar(vtype=gurobipy.GRB.BINARY, name="y")
    z = m.addVar(vtype=gurobipy.GRB.BINARY, name="z")

    # Integrate new variables
    m.update()

    # Set objective
    m.setObjective(x + y + 2 * z, gurobipy.GRB.MAXIMIZE)

    # Add constraint: x + 2 y + 3 z <= 4
    m.addConstr(x + 2 * y + 3 * z <= 4, "c0")

    # Add constraint: x + y >= 1
    m.addConstr(x + y >= 1, "c1")
    m.setParam('OutputFlag', False)
    m.update()

    pool = Pool(processes=4)              # start 4 worker processes
    output = pool.map(f, range(10))
    print output
    print sum(output)
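Since model.copy() can be expensive for large models, a variant of the pattern above is to copy once per worker process, via Pool's initializer, rather than once per task. This is only a sketch of that pattern: a plain dict stands in for the Gurobi model, so nothing here depends on gurobipy, and the names init_worker and _worker_model are made up for illustration.

```python
from multiprocessing import Pool

_worker_model = None

def init_worker(base):
    # Runs once in each worker process at startup: make a private
    # per-process copy (stand-in for base_model.copy() in gurobipy).
    global _worker_model
    _worker_model = dict(base)

def f(i):
    # Mutate only this worker's private copy, then "solve" it.
    _worker_model["rhs"] = 2.0 + i
    return _worker_model["rhs"]   # stand-in for model.objVal

if __name__ == '__main__':
    base = {"rhs": 2.0}
    pool = Pool(processes=4, initializer=init_worker, initargs=(base,))
    output = pool.map(f, range(10))
    print(output)
```

Each task then only pays for mutating and re-solving its worker's copy, not for rebuilding it.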

anne...@gmail.com

Jun 23, 2015, 8:37:30 AM6/23/15
to gur...@googlegroups.com
Do you always have to do such a deep copy? I use multiprocessing in a genetic algorithm that calls my Gurobi model. Now I am wondering whether a separate MIP is actually created and solved for every set of parameters handed to it, or whether they might get mixed up. Let's say the genetic algorithm creates 10 sets of parameters at the same time and thus calls the MIP 10 times. Do I need a deep copy for every MIP so that each one really only uses "its" parameters?

On Tuesday, September 24, 2013 at 18:51:19 UTC+2, Hugh Medal wrote:

anne...@gmail.com

Jun 23, 2015, 8:37:34 AM6/23/15
to gur...@googlegroups.com
I forgot something (that might be pretty silly): I have set the Threads parameter manually to 10 (out of 12). Do I need to consider anything when using the multiprocessing module?
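Oversubscription is the thing to watch here: with the Threads parameter at 10 per model and several worker processes, the total thread demand can far exceed the available cores. A small helper along these lines (the name thread_budget is made up for this sketch) splits the cores evenly; one would then pass the result to each model via m.setParam('Threads', ...).

```python
import multiprocessing

def thread_budget(n_workers, total_cores=None):
    """Threads each model should use so that n_workers * threads
    per model does not exceed the machine's core count."""
    if total_cores is None:
        total_cores = multiprocessing.cpu_count()
    return max(1, total_cores // n_workers)

# e.g. 12 cores shared by 4 worker processes -> 3 threads per model
```

With 10 threads per model and more than one worker on a 12-core machine, this budget would instead be 1 or 2 threads per model.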

On Tuesday, September 24, 2013 at 18:51:19 UTC+2, Hugh Medal wrote:

anne...@gmail.com

Jun 24, 2015, 6:00:53 AM6/24/15
to gur...@googlegroups.com
I got the impression that a deep copy is needed. While running my algorithm multiple times using multiprocessing, it throws a "Model is infeasible" error at some point. If I then hard-code the parameters a, b, c that had been handed to the MIP automatically, it solves perfectly fine. So I guess the parameters get mixed up between the differently parameterised MIPs.
I am still pretty new to programming, and therefore I am unsure where to put the model.copy() command and how to implement it correctly.
The master algorithm at some point calls the MIP module like this:
MIPvalue = MIP.runMIP(a,b,c)

MIP.py only contains the MIP model, which is wrapped in the function runMIP(a,b,c). Can someone please assist me with this?
Thanks a lot in advance for your support!
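One common way to avoid shared state in a setup like this is to build the model from scratch inside runMIP itself, so every call (and hence every process) owns all of its objects and no copy is needed at all. Sketched structurally below: the arithmetic is a placeholder for the gurobipy model-building and optimize() calls, and the names a, b, c are taken from the question above.

```python
from multiprocessing import Pool

def runMIP(a, b, c):
    # Everything the solve touches is created inside this function,
    # so concurrent calls cannot share or mix up parameters.
    # (The dict and the objective below are placeholders standing in
    # for building a gurobipy model from a, b, c and returning m.objVal.)
    model = {"a": a, "b": b, "c": c}
    return model["a"] + 2 * model["b"] + 3 * model["c"]

def runMIP_star(args):
    # Small wrapper so Pool.map can pass a tuple of parameters.
    return runMIP(*args)

if __name__ == '__main__':
    pool = Pool(processes=4)
    params = [(1, 1, 1), (2, 0, 1), (0, 3, 2)]
    print(pool.map(runMIP_star, params))
```

As long as runMIP reads only its arguments and local variables, each parameter set is solved with exactly "its" parameters.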