Hi Laurent,
No, "sandboxing" isn't what I want - at least not usually. I want
to be able to limit the VM process itself ... particularly the heap
size but occasionally other things as well. I often have the need
to squeeze a Racket application into the corner of a small cloud VM,
and what I would like is more fine-grained control over Racket processes.
Also, sandboxing only notices the overrun when it's too late. If
the memory is known to be limited from the beginning, it would be used
differently, e.g., GC'd more often.
Without a lot of details about the memory use of various
features[*], "ulimit -H -d ..." at best is a guess. "ulimit -H -m
..." works to limit memory use, but it can't be used without swap,
and without limiting the data segment as well, it's easy to start
thrashing code vs data and kill performance.
"cgroups" helps with multiprocess applications, but it is
complicated to set up properly.
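For reference, a minimal cgroup v2 memory cap looks roughly like the following. This is a sketch under assumptions: it requires root, a unified hierarchy mounted at /sys/fs/cgroup, and the group name "racket-app" and 512 MiB figure are made up for illustration.

```shell
# Hedged sketch: create a cgroup, cap its memory, and move this shell
# into it so that everything launched afterwards inherits the limit.
mkdir /sys/fs/cgroup/racket-app
echo $((512 * 1024 * 1024)) > /sys/fs/cgroup/racket-app/memory.max
echo $$ > /sys/fs/cgroup/racket-app/cgroup.procs   # move this shell in
# any process started from this shell is now subject to the 512 MiB cap
```

On systemd machines, `systemd-run --scope -p MemoryMax=512M <cmd>` does equivalent bookkeeping without the manual filesystem writes, though it still needs appropriate privileges.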
On Windows, though, there is no built-in user control over resource use
... there are some 3rd-party utilities, but many admins won't permit
using them. I work a lot with various DBMSs, and things may get
easier now that SQL Server is available for Linux, but most people who
run it still run it on Windows.
And, of course, containers can limit (at least) memory and CPU, but
they have their own sets of issues, and the container system itself
can require substantial resources. Generally I prefer to avoid
containers and run on the bare machine wherever possible.
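For completeness, the container-level equivalent of the limits above is a one-liner; the image name "my-racket-image" is an assumption for illustration.

```shell
# Hedged sketch: Docker imposing per-container memory and CPU caps.
docker run --memory=512m --cpus=1.5 my-racket-image
```

That simplicity is the appeal, but as noted, the container runtime itself is an extra moving part with its own footprint.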
YMMV,
George
[*] particularly the JIT: e.g., files mapped by an application are
"data" from the POV of the OS regardless of whether the mapping is
executable. So JIT'd code really is data for "ulimit" purposes.