shared memory usage of STAR


ShengLin Mei

Jun 17, 2014, 2:39:35 PM
to rna-...@googlegroups.com
Hi, Alex,
I have a question about memory usage.
I have a lot of data to process. For a single process, STAR occupies about 10% of my server's memory. I noticed that the --genomeLoad parameter controls how the genome is loaded into memory, so I tested some data with --genomeLoad LoadAndKeep, but the memory usage for each process is still 10%. Is this normal?
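
For reference, each job is launched roughly like this (the genome path, file names, and thread count below are placeholders):

STAR --genomeLoad LoadAndKeep \
     --genomeDir /path/to/genomeDir \
     --readFilesIn sample1_R1.fastq sample1_R2.fastq \
     --runThreadN 4 \
     --outFileNamePrefix sample1_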



Best 
Shenglin

ShengLin Mei

Jun 18, 2014, 11:44:53 AM
to rna-...@googlegroups.com
Hi, Alex,

Another thing I want to know about is memory overflow. Will the program stop when memory is insufficient, and what kind of error message will be generated? Thanks!


Best
Shenglin




On Tuesday, June 17, 2014 at 2:39:35 PM UTC-4, ShengLin Mei wrote:

Alexander Dobin

Jun 18, 2014, 7:25:10 PM
to rna-...@googlegroups.com
Hi ShengLin,

The memory usage here is reported per process: out of the 24GB of resident memory (RES) shown for each process, 22GB are shared (SHR), so the genome itself is held in RAM only once.
You can check that all these processes attach to the same shared memory segment with the 'ipcs' command.
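
For example (illustrative output; the key, id, and size will differ on your system):

ipcs -m
------ Shared Memory Segments --------
key        shmid  owner  perms  bytes        nattch
0x00004dbe 65538  user   666    23000000000  4

The nattch column shows how many processes, here the STAR jobs, are attached to the one genome segment.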

STAR does not check for memory "overflow"; this is not straightforward to do with RAM paging.
It can throw a "std::bad_alloc" error like this:
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

However, it could also just slow down or hang your system with excessive page-faulting.
To avoid this, I recommend setting ulimit -v in the shell from which you submit multiple STAR jobs. For example, ulimit -v 40000000 will limit the virtual memory for the shell to ~40GB (ulimit -v takes its value in kilobytes).
Once this limit is exceeded, STAR will reliably throw the "std::bad_alloc" error instead of hanging the system.
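
A minimal sketch of such a submission shell (sample names and the genome path are placeholders):

ulimit -v 40000000   # cap virtual memory at ~40GB; the value is in kilobytes
for fq in sample1.fastq sample2.fastq
do
    STAR --genomeLoad LoadAndKeep --genomeDir /path/to/genomeDir \
         --readFilesIn $fq --outFileNamePrefix ${fq%.fastq}_ &
done
wait   # all jobs inherit the ulimit and attach to one shared genome copy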

Cheers
Alex

ShengLin Mei

Jun 19, 2014, 3:18:17 PM
to rna-...@googlegroups.com
Thanks very much!


Best
Shenglin




On Wednesday, June 18, 2014 at 7:25:10 PM UTC-4, Alexander Dobin wrote: