Re: automatically killed


Ed Rothberg

Oct 8, 2012, 11:54:35 AM
to gur...@googlegroups.com

A 'Killed' message from the OS usually indicates that you ran out of memory.


Jiho Yoon

Oct 8, 2012, 1:19:48 PM
to gur...@googlegroups.com
Actually, I am using 256GB (not MB) of memory... could there be a different reason for this message?



--

Jiho Yoon
PhD Graduate Student
Department of Supply Chain Management
The Eli Broad Graduate School of Management
N468 North Business Complex
Michigan State University
East Lansing, MI 48824-1121
(e) yo...@bus.msu.edu

Ed Rothberg

Oct 8, 2012, 1:41:42 PM
to gur...@googlegroups.com

256GB is a lot, but 256GB/7500 models is only 34MB per model.  If you aren't throwing away models when you are done with them, you could easily exhaust 256GB.

Can you confirm that you are using the win64 version of Gurobi 5.0.1? What programming language are you using? Are you calling GRBfreemodel() or model.dispose() after you are done with each of the 7500 models you solve? What does the task manager say about how much memory is in use when the process is killed?


Greg Glockner

Oct 8, 2012, 1:56:27 PM
to gur...@googlegroups.com
Also, did you check if you have any limits set on your account? Check the output of ulimit -a
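For a sketch of how to check this from inside the process itself, Python's standard-library resource module (Unix only) exposes the same limits that `ulimit -a` reports at the shell:

```python
# A minimal sketch: inspect per-process resource limits from within
# Python (Unix only), mirroring part of what `ulimit -a` shows.
import resource

# RLIMIT_AS is the cap on the process's virtual address space; the OS
# can kill the process when this limit (or physical memory) is exhausted.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# resource.RLIM_INFINITY means no limit is set for this resource.
for name, value in (("soft", soft), ("hard", hard)):
    if value == resource.RLIM_INFINITY:
        print("%s address-space limit: unlimited" % name)
    else:
        print("%s address-space limit: %d bytes" % (name, value))
```

If the soft limit is far below 256GB, the kill can happen long before physical memory runs out.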


Jiho Yoon

Oct 8, 2012, 2:13:32 PM
to gur...@googlegroups.com
Yes, it is Gurobi 5.0.1, and I'm using Python 2.7... would you let me know how to reduce the memory usage, or how to clear the memory after each iteration is done?



Ed Rothberg

Oct 8, 2012, 2:30:14 PM
to gur...@googlegroups.com
> yes it is gurobi 5.0.1 and I'm using python 2.7

64-bit?

> ... would you let me know how to reduce the memory usage?

Python uses reference counting, so you just need to make sure that all references to a model are gone when you are done with it.

To be sure, you can explicitly set your model variable to None:

m = read('test.mps')
m.optimize()
...
m = None
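The mechanism behind this advice can be sketched without gurobipy at all, since it is CPython's reference counting doing the work. In this sketch a plain FakeModel class stands in for a gurobipy Model (hypothetical, for illustration only):

```python
# A sketch of the reference-counting behavior described above, using a
# plain class as a stand-in for a gurobipy Model.
import weakref

class FakeModel(object):
    """Stand-in for a solver model holding a large buffer."""
    def __init__(self):
        self.data = bytearray(10 * 1024 * 1024)  # pretend this is solver memory

m = FakeModel()
alive = weakref.ref(m)       # weak reference: does not keep m alive

assert alive() is not None   # the model still exists
m = None                     # drop the only strong reference...
assert alive() is None       # ...and CPython frees it immediately
```

The catch is the word "only": if any other object (a results list, a cached solution, an exception traceback, the interactive `_` variable) still points at the model, setting `m = None` does not free anything.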


Jiho Yoon

Oct 15, 2012, 12:57:59 PM
to gur...@googlegroups.com
What about changing parameters from their default values, such as the tolerances...?


Jiho Yoon

Oct 15, 2012, 12:04:45 PM
to gur...@googlegroups.com
Yes, mine is 64-bit.

I tried applying m = None... but the same problem still occurs...


Ed Rothberg

Oct 22, 2012, 10:46:21 AM
to gur...@googlegroups.com

I seriously doubt that one model is using 256GB of memory.  My guess is still that you have a memory leak.  You may want to read up on the 'gc' module in Python (http://docs.python.org/library/gc.html), which is quite helpful for debugging leaks.
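To make that concrete, here is a minimal leak-hunting sketch with the gc module. The Leaky class is hypothetical, standing in for a gurobipy Model; with real models you would count Model instances the same way:

```python
# A minimal sketch of using the stdlib gc module to hunt for a leak:
# count how many objects of a given type survive each iteration.
import gc

class Leaky(object):
    pass

cache = []                       # an accidental extra reference, e.g. a log list

def count_instances(cls):
    """Count live instances of cls among all objects gc tracks."""
    return sum(1 for obj in gc.get_objects() if isinstance(obj, cls))

for i in range(3):
    obj = Leaky()
    cache.append(obj)            # the hidden reference that causes the "leak"
    obj = None                   # looks like cleanup, but the cache still holds it

print(count_instances(Leaky))    # 3 survivors reveal the leak
```

If the instance count grows with each of the 7500 iterations instead of staying flat, something is still holding a reference to the old models.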

Ed



Stuart Mitchell

Oct 22, 2012, 6:34:57 PM
to gur...@googlegroups.com
Also, objgraph is very good; see my blog post on this.


Stu



--
Stuart Mitchell
PhD Engineering Science
Extraordinary Freelance Programmer and Optimisation Guru
