Faster MPS or API


Mainak Ghosh

Sep 20, 2016, 12:43:29 AM
to Gurobi Optimization
Hello,

I am new to Gurobi and have written this LP using the Gurobi APIs in C#. Unfortunately, the model creation runs out of memory. Does using MPS give any advantage in terms of memory or execution time over using the APIs?

Thanks and Regards,
Mainak

Tobias Achterberg

Sep 20, 2016, 5:27:00 AM
to gur...@googlegroups.com
Hi Mainak,

I doubt that you really want to write MPS files as a means of creating models.
Constructing the model through an API should just work.

How many variables and how many constraints do you add? Are you using a 32 bit
operating system? And how much memory does your machine have?


Regards,

Tobias

Mainak Ghosh

Sep 20, 2016, 3:34:18 PM
to Gurobi Optimization
Hello,

Around 15 billion vertices, and a similar number of constraints. We are using a 64-bit machine with 140 GB of memory.

Mainak

Tobias Achterberg

Sep 21, 2016, 3:49:10 AM
to gur...@googlegroups.com
This seems way too big for the amount of memory you have.

I guess 15 billion vertices means 15 billion variables in the model. For each
variable we store at least the following:
- 3 doubles (8 byte): lower bound, upper bound, objective coefficient
- 1 size_t (8 byte): begin of column vector in non-zero index/value arrays
- 1 int (4 byte): length of column vector
- 1 char (1 byte): type of variable

This is already 37 bytes per variable. Often, we need more data (like another
double for the scaling value, another char for marking bound flips), and if you
use names for the variables the situation obviously gets worse.

For each constraint we store at least:
- 3 size_t (8 byte): start/end positions of non-zero index/value arrays
- 1 double (8 byte): right hand side
- 1 char (1 byte): sense

This is 33 bytes per constraint. Again, scaling and row names increase the
memory footprint.

Thus, for your model we are talking about 37*15e+9 + 33*15e+9 = 1.05e+12 bytes
(about 1 TB) of memory just to store the column and row data. And you still
need to store the non-zero matrix coefficients!

Each non-zero matrix element needs 2 ints and 2 doubles (we need both the
column- and row-wise representations), for a total of 24 bytes. Assuming that
you have 3 non-zeros for each variable, this gives another 3*24*15e+9 =
1.08e+12 bytes. Thus, you would already need about 2 TB of RAM just to store
the model, and you have not even tried to solve it yet.
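As a sanity check, the estimate can be reproduced directly. The byte counts below are the assumed minimums stated in this post (no variable or constraint names, no scaling data):

```python
# Rough memory estimate for the model described above, using the
# per-item byte counts from this thread (assumed minimums, no names).
n = 15_000_000_000                 # ~15 billion variables and constraints
nnz_per_var = 3                    # assumed non-zeros per column

bytes_per_var = 3 * 8 + 8 + 4 + 1  # 3 doubles + size_t + int + char = 37
bytes_per_con = 3 * 8 + 8 + 1      # 3 size_t + double + char = 33
bytes_per_nnz = 2 * 4 + 2 * 8      # 2 ints + 2 doubles = 24 (row + col form)

col_row_data = (bytes_per_var + bytes_per_con) * n   # column and row data
nnz_data = nnz_per_var * bytes_per_nnz * n           # matrix coefficients

print(f"columns/rows: {col_row_data / 1e12:.2f} TB")  # 1.05 TB
print(f"non-zeros:    {nnz_data / 1e12:.2f} TB")      # 1.08 TB
print(f"total:        {(col_row_data + nnz_data) / 1e12:.2f} TB")
```

So roughly 2.13 TB just for the model data, before presolve or factorization.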

Solving a model usually involves presolve, where another copy of the problem is
created (i.e., another 2 TB in your case). Then, we need to factorize the basis
matrix. The memory consumption for this greatly depends on the structure of your
problem, but for a model that has at least as many constraints as variables, the
factor of the basis matrix will certainly not be smaller than the original
problem matrix.

Overall, to be able to solve such a huge model, I estimate the memory
requirements to be at least 20 TB. And even if you have such a big machine, it
will take a very long time.

Btw: I wonder how you are able to store your input graph in memory. I guess you
also need a few bytes for each vertex and each edge in the graph, so that the
graph itself should easily be larger than 140 GB.


Regards,

Tobias

Michael Winkler

Sep 21, 2016, 8:22:15 AM
to Gurobi Optimization
Please note that Gurobi is also only "capable" of handling 2 billion variables and 2 billion constraints; only the number of non-zero entries in the constraint matrix can be larger. Also, even if you could read in/create such a huge instance, you would probably get an out-of-memory error.

David Nehme

Oct 19, 2016, 12:35:42 AM
to Gurobi Optimization
If you are running out of memory while you are constructing your model, you likely need to use some sort of delayed column and row generation techniques.  Roughly, you start by giving Gurobi a small fraction of the variables in your model, then iteratively add variables and constraints to the model based on solutions to the smaller problem.
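The loop this describes can be sketched with a toy row-generation example. This is not the Gurobi API; the "solver" here is just a max over the active constraints, purely to show the shape of the iteration:

```python
import random

# Toy illustration of delayed row generation: minimize x subject to
# x >= a_i for a huge family of constraints. Rather than building all
# constraints up front, we keep a small "restricted" set and only add
# the most violated constraint after each solve.
random.seed(42)
a = [random.uniform(0.0, 100.0) for _ in range(1_000_000)]

active = [0]                  # constraints currently in the restricted model
while True:
    x = max(a[i] for i in active)          # "solve" the restricted model
    # separation step: find the most violated constraint in the full set
    worst = max(range(len(a)), key=a.__getitem__)
    if a[worst] <= x + 1e-9:               # nothing violated: x is optimal
        break
    active.append(worst)                   # add the violated row, re-solve

print(f"optimal x = {x:.4f}, "
      f"constraints materialized: {len(active)} of {len(a)}")
```

In the real setting the restricted model would be a Gurobi model and the separation/pricing step would use dual values or violation checks, but the pattern is the same: only a tiny fraction of the full model ever needs to be materialized in memory.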

