Support for pickle in the python API


Stuart Mitchell

Apr 14, 2014, 10:33:43 PM
to gur...@googlegroups.com
Hi, would it be possible to support pickling of a Gurobi model in the Python API?

I would like to use the multiprocessing module in python and it requires that the arguments to the function that is forked into a new process are picklable. 

At the moment, this is what happens when I try to pickle a Gurobi model:

>>> import gurobipy
>>> m = gurobipy.Model()
>>> import pickle
>>> pickle.dumps(m)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/pickle.py", line 1374, in dumps
    Pickler(file, protocol).dump(obj)
  File "/usr/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/usr/lib/python2.7/pickle.py", line 306, in save
    rv = reduce(self.proto)
  File "/usr/lib/python2.7/copy_reg.py", line 74, in _reduce_ex
    getstate = self.__getstate__
  File "model.pxi", line 118, in gurobipy.Model.__getattr__ (../../src/python/gurobipy.c:30205)
  File "model.pxi", line 1077, in gurobipy.Model.getAttr (../../src/python/gurobipy.c:38499)
gurobipy.GurobiError: Unknown attribute '__getstate__'
>>> 

Thanks stu

--
Stuart Mitchell
PhD Engineering Science
Extraordinary Freelance Programmer and Optimisation Guru

Greg Glockner

Apr 14, 2014, 10:43:58 PM
to gur...@googlegroups.com
Stu:

> Hi would it be possible to support the pickling of a gurobi model in the python api.

I'm sure the developers will consider it, but in the meantime you can get 99% of what pickling would give you via something like:

import gurobipy
m = gurobipy.Model()
# ...
m.write('mymodel.mps')
# optionally save state:
# parameters:
m.write('mymodel.prm')
# for MIP:
m.write('mymodel.mst') # MIP start
# for LP:
m.write('mymodel.bas') # LP basis

To read, do something like the following:
m = gurobipy.read('mymodel.mps')
m.read('mymodel.prm')
m.read('mymodel.mst') # MIP start
m.read('mymodel.bas') # LP basis

Since it's Python, I'd probably be lazy and just read/write all files, ignoring exceptions.

There are some edge cases where this will fail, mostly involving poor choice of variable or constraint names. Bottom line: don't go crazy when naming variables or constraints.
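The lazy read/write-everything idea could be sketched as a small helper. This is only a sketch: `save_state` is a hypothetical name, and with a real gurobipy model you would catch `gurobipy.GurobiError` rather than a bare `Exception`.

```python
def save_state(model, basename, exts=('mps', 'prm', 'mst', 'bas'),
               errors=(Exception,)):
    # Try every known format and ignore the ones that don't apply
    # (e.g. no .bas for a MIP, no .mst for a pure LP).
    written = []
    for ext in exts:
        try:
            model.write(basename + '.' + ext)
            written.append(ext)
        except errors:
            pass
    return written
```

Restoring would mirror this: `gurobipy.read` for the .mps, then `Model.read` for whichever of the other files exist, again ignoring failures.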

arkha

Sep 23, 2015, 3:13:56 AM
to Gurobi Optimization
I encountered the same issue as Stuart.

While writing models to .mps files and then reading them back is a valid workaround, the problem is that I then have to read from and write to disk, which causes major slowdowns.

On the other hand, if it were possible to pickle Gurobi models, then all the operations could be kept in main memory. Any recommendations?
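One way to keep the round trip in memory on Linux (a sketch, not gurobipy-specific advice): write the file to a RAM-backed filesystem such as /dev/shm, so the file API is satisfied but no physical disk I/O happens.

```python
import os
import tempfile

# /dev/shm is a RAM-backed tmpfs on most Linux systems; fall back to the
# ordinary temp directory elsewhere.
ram_dir = '/dev/shm' if os.path.isdir('/dev/shm') else tempfile.gettempdir()
path = os.path.join(ram_dir, 'mymodel.mps')

# With a real model, the round trip would then be:
#   m.write(path)
#   m2 = gurobipy.read(path)
```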

Thanks

Shawn

Sep 24, 2015, 1:04:12 AM
to Gurobi Optimization
Hi Stu,

We have a very large model written in Python, and more time is consumed building the dictionaries and generating the model than actually solving it, so we might be in a similar situation.  Our partial workaround for cutting down on the load and build time is to pickle the dictionaries, so those at least do not need to be rebuilt every time.

Here is an example with one of the larger dictionaries we build.

import os
import pickle

# server (a DB cursor) and planning_areas are defined elsewhere.
# Adjacency matrix gets loaded from file, if it exists. Otherwise, it gets pulled from the database.
if os.path.isfile(r"../storage/adjacency_matrix.p"):
    print("Loading adjacency matrix from '../storage/adjacency_matrix.p'")
    adjacency_matrix = pickle.load(open(r'../storage/adjacency_matrix.p', 'rb'))
else:
    print("Adjacency matrix file ('../storage/adjacency_matrix.p') not found; loading from database.")
    server.execute("""select rtrim(cast(planning_area_i as char)) i
                           , rtrim(cast(planning_area_j as char)) j
                        from dbo.EDGE_LIST""")
    result = server.fetchall()

    edge_list = [(i, j) for i, j in result]
    adjacency_matrix = {}

    for i in planning_areas:
        for j in planning_areas:
            adjacency_matrix[i, j] = 1 if (i, j) in edge_list else 0

    print("Saving adjacency matrix to '../storage/adjacency_matrix.p' for future use")
    pickle.dump(adjacency_matrix, open(r'../storage/adjacency_matrix.p', 'wb'))


Good luck,
Shawn

Jamie Noss

Feb 8, 2019, 10:51:30 AM
to Gurobi Optimization
I too would like this, pretty please?


It would also be cool if we could write the files in a binary format. Is there a binary file format?

At present, I'm tasked with increasing the performance of our usage of gurobipy. I have the Python side sorted, i.e. model building etc.; however, I need to investigate the Gurobi side better. Our current model is ~150GB and takes 5hrs to produce (due to math/simulations). To save memory, I add variables and constraints to the model as they are built, rather than producing everything first and building the entire model at the end, thus avoiding doubling the total memory footprint.

What I want to do is dump the model to disk so that I can explore Gurobi's optimization performance without having to wait 5hrs each time to build the model. Building the model as I go along means I am now dependent on dumping gurobi.Model to disk, which doesn't look feasible, especially since the output is ASCII and NOT binary.

I can work around this myself by pickling the data as it chugs along, instead of adding it to the model, and doing the same in reverse when reading to build the model, still avoiding total footprint duplication. It would just be nice if I didn't have to ;)
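The incremental-pickle workaround described above could look something like this. A sketch only: `generate_rows` stands in for the real math/simulation step, and the record format is made up for illustration.

```python
import pickle

def generate_rows():
    # Stand-in for the real row generator (the 5-hour math/simulation step).
    yield ([1.0, 2.0], 3.0)
    yield ([4.0, 5.0], 6.0)

# Append each record to the pickle file as it is produced, instead of
# adding it straight to the model.
with open('model_rows.p', 'wb') as f:
    for row in generate_rows():
        pickle.dump(row, f)

# Later: stream the records back one at a time and feed each to the model
# builder, so the raw data and the Gurobi model never fully coexist in memory.
def load_rows(path):
    with open(path, 'rb') as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

rows = list(load_rows('model_rows.p'))
```

Because each record is a separate `pickle.dump` call, rebuilding the model only ever holds one record in memory at a time on the reading side.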

Cheers,
   Jamie Noss