Hi ShengLin,
The memory usage here is per process: out of the 24GB reported as resident (RES) for each process, 22GB are shared (SHR) among all of them.
You can verify that all these processes attach to the same shared memory segment with the 'ipcs' command.
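For example, on Linux you could list the System V shared-memory segments like this (the exact columns vary by distribution; with a shared genome you would expect one large segment whose attach count matches the number of STAR processes):

```shell
# List System V shared-memory segments.
# A genome loaded with --genomeLoad LoadAndKeep shows up as one large
# segment; the "nattch" column counts the processes attached to it.
ipcs -m
```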
STAR does not check for memory 'overflow', since that is not straightforward with RAM paging.
It may throw a 'std::bad_alloc' error like this:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
However, it could also just slow down/hang your system with excessive page-faulting.
To avoid this, I recommend setting ulimit -v in the shell from which you submit multiple STAR jobs. For example, ulimit -v 40000000 limits the shell's virtual memory to 40GB (the value is given in kilobytes).
Once this limit is exceeded, STAR will throw the "std::bad_alloc" error right away instead of hanging the system.
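A minimal sketch of that workflow (the STAR command line is just a placeholder; the ulimit applies to the shell and every child process it launches):

```shell
# Cap virtual memory for this shell and its children at 40 GB.
# ulimit -v takes kilobytes: 40000000 kB = 40 GB.
ulimit -v 40000000

# Confirm the limit took effect; prints the current cap in kB.
ulimit -v

# Placeholder STAR invocation: any job started from this shell that
# tries to allocate past 40 GB will get std::bad_alloc instead of
# driving the machine into swap.
# STAR --runThreadN 8 --genomeDir ./genome --readFilesIn reads.fq
```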
Cheers
Alex