ERROR: memory mapping failed: Cannot allocate memory


Marc Williams

Apr 10, 2015, 8:00:00 AM
to julia...@googlegroups.com
Hi,

I'm doing some analysis where I need to compute some large matrices (up to about 50,000 × 50,000), so I quickly run out of RAM. A memory-mapped array seemed like a useful approach, so I compute the matrices, save them to a binary file, and then read them back in using mmap_array(). For the smaller matrices everything works fine, but when the size of the binary file is greater than the amount of RAM available I get the following error:

ERROR: memory mapping failed: Cannot allocate memory

Once I've read a matrix in using mmap_array(), I do some basic calculations, such as computing the sum over all the elements and the sum over each row, and I access every element a couple of times.
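
In case it helps, here's a simplified sketch of the whole workflow (a small size and a made-up filename for illustration; the real matrices are computed in pieces and are much larger):

    # Write: store the matrix dimension as a Float64 header, then the raw data.
    n = 1000                             # in reality up to about 50,000
    A = rand(n, n)
    s = open("matrixfile.bin", "w")
    write(s, float64(n))                 # dimension header
    write(s, A)                          # column-major element data
    close(s)

    # Read the header back, then mmap the rest of the file as an n-by-n matrix.
    s = open("matrixfile.bin")
    n = int64(read(s, Float64))
    W = mmap_array(Float64, (n, n), s)   # mapping starts at the current stream offset
    close(s)

    total   = sum(W)                     # sum over all elements
    rowsums = sum(W, 2)                  # sum over each row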

I've not used memory mapping before, so am I using it in the right way, and is there anything I'm missing that I need to do to make this a solution to my RAM issue?

Many Thanks
Marc

Tom Short

Apr 10, 2015, 10:12:14 AM
to julia...@googlegroups.com
More information would help, especially a concise reproducible example.

Marc Williams

Apr 11, 2015, 3:04:03 PM
to julia...@googlegroups.com

So I get the error when I call mmap_array() as follows:

    s = open("matrixfile.bin")
    m = read(s, Float64)                                   # first value in the file is the matrix dimension
    weight = mmap_array(Float64, (int64(m), int64(m)), s)  # map the rest of the file as an m-by-m matrix
    close(s)

When my "matrixfile.bin" is small everything works fine, but once the file size is similar to the amount of RAM available I get the following error:

ERROR: memory mapping failed: Cannot allocate memory
 in mmap at mmap.jl:35
 in mmap_array at mmap.jl:110
 in readinfile at none:4

Tim Holy

Apr 11, 2015, 4:01:20 PM
to julia...@googlegroups.com
I'll bet you're working within a constrained environment. If you're on a Unix platform, what does 'ulimit -a' say?

Best,
--Tim

Marc Williams

Apr 11, 2015, 4:34:04 PM
to julia...@googlegroups.com
I'm working on a managed cluster that runs Scientific Linux. I get the following from ulimit -a when I've requested a session with 4 GB:

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 191971
max locked memory       (kbytes, -l) 137435659880
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 191971
virtual memory          (kbytes, -v) 4194304
file locks                      (-x) unlimited

Thanks
Marc

Marc Williams

Apr 11, 2015, 5:31:07 PM
to julia...@googlegroups.com
Yes, this must be the problem. If I run locally and access the files on the server through sshfs, everything seems to work fine. I'd be interested to know what the underlying problem is, though, and whether there are any workarounds.

Thanks
Marc

Jameson Nash

Apr 11, 2015, 7:40:56 PM
to julia...@googlegroups.com
mmap consumes virtual memory, and presumably that is counted against the process's limit. According to that ulimit output, you've got a cap of 4 GB in total.
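
As a rough sanity check (my own arithmetic, using the sizes mentioned earlier in the thread):

    n = 50000
    needed = n^2 * 8        # Float64 matrix: 8 bytes per element, about 20 GB
    limit = 4194304 * 1024  # ulimit -v reports 4194304 kbytes, about 4.3 GB
    needed > limit          # true: the mapping can't fit in the address space

So the workaround is to request a session with enough virtual memory to cover the full mapping (plus Julia's own overhead), or to ask the cluster admins to relax the -v limit.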