According to that link, it seems the issues were fixed in the
latest RHEL 6.x-based kernels by adjusting the new parameters:
"
Now you can set two primary parameters: physpages and swappages,
while all the other beancounters become secondary and optional.
physpages
This parameter limits the physical memory (RAM) available to
processes inside a container. The barrier is not used and should
be set to 0; the limit sets the actual limit. Currently (as of
>= 042stab042) user memory, kernel memory and the page cache are
all accounted into physpages.
swappages
This parameter limits the amount of swap space which can be used
by processes inside a container. The barrier is not used and
should be set to 0; the limit sets the actual limit.
The sum of physpages.limit and swappages.limit limits the maximum
amount of allocated memory which can be used by a container. When
the physpages limit is reached, memory pages belonging to the
container are pushed out to so-called virtual swap (vswap). The
difference between normal swap and vswap is that with vswap no
actual disk I/O usually occurs. Instead, the container is
artificially slowed down to emulate the effect of real swapping.
Actual swap-out occurs only if there is a global memory shortage
on the system.
"
I need to read a little bit more about vSwap; the concept of slowing
down a VM to emulate swapping doesn't make much sense in my view.
Might have to look into KVM+KSM instead.
This is just for dev/test and a bit of research; I want to evaluate
the use of NFS with MongoDB.
I've seen notes that MongoDB advises against using NFS for data
files or journalling:
"we have found that some versions of NFS perform poorly, or simply
don't work, and do not suggest using NFS"
http://www.mongodb.org/display/DOCS/NFS
I want to understand the reasons and how it can scale; we run large
Oracle, DB2, MySQL and Informix DBs on NFS with no issues or only
minimal performance degradation, so I'd expect MongoDB to perform in
a similar manner.
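For the NFS side I'd probably start with a plain fstab entry along
these lines (server name and export path are placeholders, and the
options are the generic ones we use for our other database volumes,
nothing MongoDB-specific):

  # hypothetical NFS mount for the MongoDB data directory (default dbpath is /data/db)
  nfsserver:/export/mongodata  /data/db  nfs  rw,hard,intr,noatime,vers=3,tcp  0 0

and then see how journalling and the memory-mapped data files behave
on top of it.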
cheers for the update
jorge