
Application-specific memory overcommitment


Anton Ertl

Oct 9, 2004, 2:54:07 AM
Just in case nobody has proposed this before, here's an idea for
dealing with the memory overcommitment and out-of-memory killer
problem:

Some applications handle failed memory allocations (and failed forks
etc.) gracefully, and would work well with a no-overcommitment
policy.

Other applications do not do anything useful on a failed allocation
(exiting with an "out-of-memory" error message is not really more
useful than being OOM-killed). These applications work better with
overcommitment, because the overcommitment delays the failure of the
application, usually until after the application has terminated.

My idea is to have two application classes (maybe identified by a bit
in the executable) with different overcommitment policies instead of
having a global overcommitment policy through
/proc/sys/vm/overcommit_memory:

- The no-overcommitment applications: for these applications the
memory commitment is accounted, and memory allocations fail as soon as
the committable memory is exhausted; these applications are never
(directly) killed by the OOM killer. An executable would be marked as
no-overcommit if it should not be OOM-killed, but only if it can
handle failed allocations gracefully.

- The overcommitment applications: these applications are not counted
in memory commitment accounting, and their memory allocations do not
fail for lack of memory; the OOM killer kills one of these
applications when the system runs out of memory. An executable should
be marked as overcommit if it cannot handle failed allocations in a
useful way, or if it's expendable anyway.
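
To make the accounting rule concrete, here is a minimal userspace
model in C (all names are hypothetical; this is nothing like actual
kernel code, just a sketch of the two-class policy):

#include <stdbool.h>
#include <stdio.h>

enum oc_class { NO_OVERCOMMIT, OVERCOMMIT };

static long committable = 1L << 20;  /* committable pages left */

static bool account_alloc(enum oc_class cls, long pages)
{
    if (cls == NO_OVERCOMMIT) {
        if (pages > committable)
            return false;       /* allocation fails, process survives */
        committable -= pages;   /* counted against the commit limit */
        return true;
    }
    return true;  /* OVERCOMMIT: unaccounted, but OOM-killable later */
}

int main(void)
{
    printf("%d\n", account_alloc(NO_OVERCOMMIT, 512));      /* 1 */
    printf("%d\n", account_alloc(OVERCOMMIT, 1L << 30));    /* 1 */
    printf("%d\n", account_alloc(NO_OVERCOMMIT, 1L << 21)); /* 0 */
    return 0;
}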

Ideally, all the important applications would be able to handle failed
allocations gracefully, and would be marked as no-overcommit, and thus
would be safe from the OOM killer.

One consequence of this design is that the no-overcommit applications
could theoretically consume all the physical memory and swap space,
and the OOM killer would have to kill all the overcommit applications.
I believe that this will not happen, because applications usually do
not use all the memory they allocate; but if one wants more fairness
for the overcommit applications, one could limit the committable memory
to, e.g., 80% of the physical memory and swap through a tunable system
parameter.
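
Such a limit could be computed much like the kernel's existing
overcommit_ratio, just applied to the committable pool; a sketch
(the 80 is only the example value from above):

/* Limit on what the no-overcommit class may commit, as a
   percentage of RAM+swap; the ratio would be a tunable. */
static long commit_limit(long ram_pages, long swap_pages)
{
    int ratio = 80;  /* example value, not a recommendation */
    return (ram_pages + swap_pages) * ratio / 100;
}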

- anton
--
M. Anton Ertl Some things have to be seen to be believed
an...@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html

Kasper Dupont

Oct 9, 2004, 4:11:42 AM
Anton Ertl wrote:
>
[snip]

I agree with most of that. But I don't think the choice
should depend (only) on a bit in the executable header.
It should be possible for the user to explicitly
specify if a particular instance should run with or
without overcommitment.

>
> One consequence of this design is that the no-overcommit applications
> could theoretically consume all the physical memory and swap space,
> and the OOM killer would have to kill all the overcommit applications.

You can improve on this situation if you try to commit
to the memory as soon as an overcommitted process tries to
use it.

--
Kasper Dupont

Anton Ertl

Oct 9, 2004, 5:27:39 AM
Kasper Dupont <kas...@daimi.au.dk> writes:
>Anton Ertl wrote:
>>
>[snip]
>
>I agree with most of that. But I don't think the choice
>should depend (only) on a bit in the executable header.
>It should be possible for the user to explicitly
>specify if a particular instance should run with or
>without overcommitment.

Yes, somewhat like nice. Certainly the option should be there, but I
guess that in most cases a given executable will always run with the
same policy. Certainly the reaction to failed allocations is
determined by the executable; the expendability of the process can
depend on the instance, but is probably also often the same for all
instances.
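
Something like a hypothetical prctl call could serve for per-instance
control (PR_SET_OVERCOMMIT is made up here; no such prctl option
exists):

#include <sys/prctl.h>
#include <unistd.h>

/* Hypothetical per-instance switch in the spirit of nice(1).
   PR_SET_OVERCOMMIT is invented for illustration only. */
#define PR_SET_OVERCOMMIT 1000   /* illustrative value only */
#define OVERCOMMIT_NEVER  0

int run_no_overcommit(const char *path, char *const argv[])
{
    if (prctl(PR_SET_OVERCOMMIT, OVERCOMMIT_NEVER, 0, 0, 0) == -1)
        return -1;               /* kernel lacks the feature */
    return execv(path, argv);    /* policy inherited across exec */
}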

>> One consequence of this design is that the no-overcommit applications
>> could theoretically consume all the physical memory and swap space,
>> and the OOM killer would have to kill all the overcommit applications.
>
>You can improve on this situation if you try to commit
>to the memory as soon as an overcommitted process tries to
>use it.

So the no-overcommit processes would commit the memory on allocation,
and the others would commit it on use, if there is committable memory
left; if there is no committable memory left, they would get unused
committed memory, and when that runs out, the OOM killer would do its
work (by killing an overcommit process).
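
A rough model of that fault-time rule (again with hypothetical names,
nothing kernel-accurate):

#include <stdbool.h>

static long committable = 4096;      /* uncommitted pages left */
static long committed_unused = 1024; /* committed but never-touched pages */

/* Called when an overcommit process first touches `pages` pages.
   Returns false when the OOM killer has to kill an overcommit
   process to keep the commitments. */
static bool commit_on_use(long pages)
{
    if (pages <= committable) {
        committable -= pages;        /* commit on first use */
        return true;
    }
    if (pages <= committed_unused) {
        committed_unused -= pages;   /* hand out memory committed
                                        but never actually used */
        return true;
    }
    return false;                    /* both pools empty: OOM-kill time */
}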

Essentially this would mean that for no-overcommit processes
allocations would fail earlier, limiting their usage to a lower
maximum.

I am not sure that this is always an improvement, because presumably
the no-overcommit processes are more important. I guess one should
make use-committing a system (or per-process) configuration option. In
any case, it adapts to the actual workload on the system, unlike the
memory ratio parameter I proposed.

Kasper Dupont

Oct 12, 2004, 3:12:41 PM
Anton Ertl wrote:
>
> Essentially this would mean that for no-overcommit processes
> allocations would fail earlier, limiting their usage to a lower
> maximum.

True. But at that time we already know that if we
commit to the memory, the only way we could live up
to the commitment would be by killing somebody.

--
Kasper Dupont
