
Linux is 'creating' memory ?!


mnij...@et.tudelft.nl

Feb 7, 1995, 11:26:06 AM
Linux & the memory.

I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
My compiler is GCC 2.5.8

As I was writing my program, I noticed an oddity (=bug?).
It's probably best explained by a simple program:

#include <stdlib.h>
int main(void) {
int i,*p;
/* 1st stage */
for(i=0;i<10000;i++) {
p[i]=malloc(4096)
if (p[i]==NULL) {
fprintf(stderr,"Out of memory\n");
exit(1);
}
}
/* 2nd stage */
for(i=0;i<10000;i++)
*(p[i])=1;
}

As you can see, the first stage tries to allocate 40Mb of memory. Since
I don't have that kind of memory, it should fail, of course. To my
surprise it didn't. (!)
Well then, the second stage tries to access the 40Mb. At this point
Linux figures out that that kind of memory isn't there, so it kind of
hangs. Not really: it just becomes incredibly slow. I was able to exit
the program with CTRL-C, but it did take a few minutes to do that.

BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
is faster than calloc, so I prefer malloc.

Am I doing something wrong ? Or is it a bug in Linux or GCC ?


Marc.


+-------------------------------------------------------------------+
| Marc Nijweide Delft University of Technology, Netherlands |
| M.Nij...@et.TUDelft.nl http://morra.et.tudelft.nl:80/~nijweide |
+-------------------------------------------------------------------+

If builders build things the way programmers write programs, the
first woodpecker that came along, would destroy civilisation.

iafi...@et.tudelft.nl

Feb 7, 1995, 3:59:28 PM
In article <1995Feb7.1...@tudedv.et.tudelft.nl>, mnij...@et.tudelft.nl writes:
> Linux & the memory.
>
> I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
> My compiler is GCC 2.5.8
>
> As I was writing my program, I noticed an oddity (=bug?).
> It's probably best explained by a simple program:
>
> #include <stdlib.h>
> int main(void) {
> int i,*p;

Has to be "int i, *p[10000];"

> /* 1st stage */
> for(i=0;i<10000;i++) {
> p[i]=malloc(4096)
> if (p[i]==NULL) {
> fprintf(stderr,"Out of memory\n");
> exit(1);
> }
> }
> /* 2nd stage */
> for(i=0;i<10000;i++)
> *(p[i])=1;
> }
>
> As you can see the first stage tries to allocate 40Mb of memory. Since
> I don't have that kind of memory it should fail ofcourse. To my
> surprise it didn't. (!)
> Well then, the second stage tries to access the 40Mb. At this point
> Linux figures out that that kind of memory isn't there, so it kind of
> hangs. Not really it just becomes increadably slow, I was able to exit
> the program with CTRL-C but it did take a few minutes to do that.
>
> BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
> is faster that calloc, so I prefer to malloc.
>
> Am I doing something wrong ? Or is it a bug in Linux or GCC ?
>
>
> Marc.
>

I have the same "problem".
The program top shows the memory you allocated as 'real', but it does not
actually exist.

Arjan


------------------------------------------
Arjan Filius
Email : IAfi...@et.tudelft.nl
------------------------------------------

Bill C. Riemers

Feb 7, 1995, 5:14:41 PM
In article <1995Feb7.1...@tudedv.et.tudelft.nl>,

Hmmm. I've been told that Linux doesn't really allocate the memory until
the first time you access it. I always wondered how it gracefully handled
out of memory errors this way. Now I guess I know. It doesn't... I'm
sure that there is supposed to be some clean way of handling this, but
I don't see how...

>BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
>is faster that calloc, so I prefer to malloc.

Yes, calloc() accesses the memory by clearing it. So I wouldn't expect it to
have problems.

>Am I doing something wrong ? Or is it a bug in Linux or GCC ?

Looks like it to me...

Bill


--
<A HREF=" http://physics.purdue.edu/~bcr/homepage.html ">
<EM><ADDRESS> Dr. Bill C. Riemers, b...@physics.purdue.edu </ADDRESS></EM></A>
<A HREF=" http://www.physics.purdue.edu/ ">
<EM> Department of Physics, Purdue University </EM></A>

Kevin Lentin

Feb 7, 1995, 5:47:00 PM
mnij...@et.tudelft.nl wrote:
> Linux & the memory.

> I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
> My compiler is GCC 2.5.8

> As I was writing my program, I noticed an oddity (=bug?).
> It's probably best explained by a simple program:

> #include <stdlib.h>
> int main(void) {
> int i,*p;
> /* 1st stage */
> for(i=0;i<10000;i++) {
> p[i]=malloc(4096)

Try allocating p first. Your pointer p points to random memory. It could be
anywhere. You're probably just lucky not to get an error on this line.
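
For reference, here is the test program with p declared as an array of
10000 pointers, along the lines Arjan suggested; a minimal sketch, not
necessarily the exact program Marc ran:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
        int i, *p[10000];       /* array of 10000 pointers, as suggested above */

        /* 1st stage: allocate 10000 x 4096 bytes = 40Mb */
        for (i = 0; i < 10000; i++) {
                p[i] = malloc(4096);
                if (p[i] == NULL) {
                        fprintf(stderr, "Out of memory\n");
                        exit(1);
                }
        }
        /* 2nd stage: touch each block, forcing a real page to be mapped */
        for (i = 0; i < 10000; i++)
                *(p[i]) = 1;
        return 0;
}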

--
[==================================================================]
[ Kevin Lentin |___/~\__/~\___/~~~~\__/~\__/~\_| ]
[ kev...@bruce.cs.monash.edu.au |___/~\/~\_____/~\______/~\/~\__| ]
[ Macintrash: 'Just say NO!' |___/~\__/~\___/~~~~\____/~~\___| ]
[==================================================================]

Jeffrey Sturm

Feb 7, 1995, 6:30:12 PM
mnij...@et.tudelft.nl wrote:

: I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
: My compiler is GCC 2.5.8

: As I was writing my program, I noticed an oddity (=bug?).
: It's probably best explained by a simple program:

: #include <stdlib.h>
: int main(void) {
: int i,*p;
: /* 1st stage */
: for(i=0;i<10000;i++) {
: p[i]=malloc(4096)
: if (p[i]==NULL) {
: fprintf(stderr,"Out of memory\n");
: exit(1);
: }
: }
: /* 2nd stage */
: for(i=0;i<10000;i++)
: *(p[i])=1;

: }

I don't think this is exactly the program you ran. It has several problems,
like trying to dereference p before it is initialized. Anyway, it won't
compile as it stands.

: As you can see the first stage tries to allocate 40Mb of memory. Since
: I don't have that kind of memory it should fail ofcourse. To my
: surprise it didn't. (!)

Linux has paged virtual memory. Even if you have only 40MB physical
memory, a program has almost the entire 4GB address space available to it.
Not until you access a page of memory does Linux try to map it to
physical memory.

: Well then, the second stage tries to access the 40Mb. At this point
: Linux figures out that that kind of memory isn't there, so it kind of
: hangs. Not really it just becomes increadably slow, I was able to exit
: the program with CTRL-C but it did take a few minutes to do that.

That's right. Linux will first run out of physical memory, then it begins
to fill your paging file. That's when it starts to get very slow.

: BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
: is faster that calloc, so I prefer to malloc.

That's because calloc() initializes the memory with zeros. This causes it
to be mapped immediately.

: Am I doing something wrong ? Or is it a bug in Linux or GCC ?

It's a "feature" in Linux, and in some other OS's too.

-Jeff

S. Joel Katz

Feb 8, 1995, 12:00:56 AM
In <1995Feb7.1...@tudedv.et.tudelft.nl> mnij...@et.tudelft.nl writes:

>Linux & the memory.

>I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
>My compiler is GCC 2.5.8

>As I was writing my program, I noticed an oddity (=bug?).
>It's probably best explained by a simple program:


[program deleted]

>As you can see the first stage tries to allocate 40Mb of memory. Since
>I don't have that kind of memory it should fail ofcourse. To my
>surprise it didn't. (!)
>Well then, the second stage tries to access the 40Mb. At this point
>Linux figures out that that kind of memory isn't there, so it kind of
>hangs. Not really it just becomes increadably slow, I was able to exit
>the program with CTRL-C but it did take a few minutes to do that.

>BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
>is faster that calloc, so I prefer to malloc.

>Am I doing something wrong ? Or is it a bug in Linux or GCC ?

It is a feature in the Linux C library and GCC and is seldom
appreciated and little used. Allocating or declaring storage does nothing
in Linux except advance the process' break point.

Linux does not actually allocate a page until a fault occurs,
such as when a read or write to the memory takes place. Then the fault
handler maps a page.

I use this all the time in programs to save the hassle of dynamic
allocation. If I 'might need' up to 10,000,000 ints for something, I
allocate 10,000,000, safe in the knowledge that the allocation will never
fail. Then I use the array as I need 'em.

For example, consider the following program

#include <stdio.h>

int get_num(void);      /* assumed: reads the next number, returns -1 at end of input */

int nums[10000000];
int num_count=0;

void main(void)
{
        int j;

        while((j=get_num())!=-1)
                nums[num_count++]=j;
        for(j=0; j<num_count; j++)
                printf("%d->%d\n",j,nums[j]);
}

Space allocated for up to 10,000,000 ints and it still won't
waste space if you only use a dozen. Damn convenient; no bug at all.
--

S. Joel Katz Information on Objectivism, Linux, 8031s, and atheism
Stim...@Panix.COM is available at http://www.panix.com/~stimpson/

John Henders

Feb 8, 1995, 5:28:55 AM

>Hmmm. I've been told that Linux doesn't really allocate the memory until
>the first time you access it. I always wondered how it gracefully handled
>out of memory errors this way. Now I guess I know. It doesn't... I'm
>sure that there is supposed to be some clean way of handling this, but
>I don't see how...

Actually, if you wait long enough, it will eventually announce
that there's not enough memory and kill the program. It just attempts to
give you the memory first, which leads to major swapping while it tries
to free enough pages.
At least that's how it worked the last time I checked, which was
pre-1.0.

--
GAT/MU/AE d- -p+(--) c++++ l++ u++ t- m---
e* s-/+ n-(?) h++ f+ g+ w+++ y*

S. Joel Katz

Feb 8, 1995, 8:04:11 AM

>>Hmmm. I've been told that Linux doesn't really allocate the memory until
>>the first time you access it. I always wondered how it gracefully handled
>>out of memory errors this way. Now I guess I know. It doesn't... I'm
>>sure that there is supposed to be some clean way of handling this, but
>>I don't see how...

It certainly does handle it gracefully. There _are_ no out of
memory errors unless you actually exceed the total amount of memory and
swap space available on the system. If _you_ do _that_, there _is_ no
graceful way to handle it!

> Actually, if you wait long enough, it will eventually announce
>that there's not enough memory and kill the program. It just attempts to
>give you the memory first, which leads ot major swapping while it tries
>to free enough pages.
> At least that's how it worked the last time I checked, which was
>pre 1.0.

This is not really true. You can allocate the memory and sit with
it forever, if you want. As I mentioned earlier, this is DAMN convenient
to replace dynamic allocation with huge static allocation for boosts in
performance and simplicity.

If you start to _use_ the memory, Linux will swap everything it
can to give it to you. The kernel has no way of knowing when you're going
to stop asking for memory, so it can do nothing but do its best to give
you what you ask for.

Monty H. Brekke

Feb 8, 1995, 3:16:11 PM
In article <3h9j68$5...@panix3.panix.com>,

S. Joel Katz <stim...@panix.com> wrote:
>
> It is a feature in the Linux C library and GCC and is seldom
>appreciated and little used. Allocating or declaring storage does nothing
>in Linux except advance the process' break point.
>
> Linux does not actually allocate a page until a fault occurs,
>such as when a read of write to the memory takes place. Then the fault
>handler maps a page.
>
> I use this all the time in programs to save the hassle of dynamic
>allocation. If I 'might need' up to 10,000,000 ints for something, I
>allocate 10,000,000, safe in the knowledge that the allocation will never
>fail. Then I use the array as I need 'em.
>
> For example, consider the following program
>
>int nums[10000000];
>int num_count=0;
>
> void main(void)
> {
> int j;
> while((j=get_num())!=-1)
> nums[num_count++]=j;
> for(j=0; j<num_count; j++)
> printf("%d->%d\n",j,nums[j];
> }
>
> Space allocated for up to 10,000,000 ints and it still won't
>waste space if you only use a dozen. Damn convenient; no bug at all.
>--

I've noticed this feature on other operating systems also. The thing
that bothers me is that if I request more memory than I have available
(physical + swap), my program has no way (as far as I can tell) of
knowing when/if an out-of-memory condition occurs. Say, for example,
that I have allocated space for 25,000,000 integers, at 4 bytes each.
That's 100,000,000 bytes of memory. I've got 16MB physical and 32MB of
swap. Clearly, then, the following loop will fail at some point.

for (i = 0; i < 25000000; ++i)
        huge_array[i] = 0;

How does my program know that this loop generated a memory fault?
Can I catch some signal? At any rate, it seems like it would be simpler
to be able to count on malloc()'s return value being correct. I can
understand the advantage of the current implementation when the amount
of memory requested is less than the total available, but I fail to
see why malloc() doesn't return a failure when I try to request more
memory than I can possibly allocate. Anyone?

--
===============================================================================
mhbr...@iastate.edu | "You don't have to thank me. I'm just trying
bre...@dopey.me.iastate.edu | to avoid getting a real job."
| --Dave Barry

Bill C. Riemers

Feb 8, 1995, 7:24:36 PM
In article <3h9j68$5...@panix3.panix.com>,
S. Joel Katz <stim...@panix.com> wrote:
>>Am I doing something wrong ? Or is it a bug in Linux or GCC ?
>
> It is a feature in the Linux C library and GCC and is seldom
>appreciated and little used. Allocating or declaring storage does nothing
>in Linux except advance the process' break point.
>
> Linux does not actually allocate a page until a fault occurs,
>such as when a read of write to the memory takes place. Then the fault
>handler maps a page.
>
> I use this all the time in programs to save the hassle of dynamic
>allocation. If I 'might need' up to 10,000,000 ints for something, I
>allocate 10,000,000, safe in the knowledge that the allocation will never
>fail. Then I use the array as I need 'em.
>
> For example, consider the following program
>
>int nums[10000000];
>int num_count=0;
>
> void main(void)
> {
> int j;
> while((j=get_num())!=-1)
> nums[num_count++]=j;
> for(j=0; j<num_count; j++)
> printf("%d->%d\n",j,nums[j];
> }
>
> Space allocated for up to 10,000,000 ints and it still won't
>waste space if you only use a dozen. Damn convenient; no bug at all.

Ahhh, but it is a bug. It could be that your program could recover if the
malloc() failed, and get by without the extra memory it requested...
As it is now, programs will always crash if you are out of memory.
Damn inconvenient when it is something important like inetd... I always
wondered why programs start crashing when I use too much swap without
logging anything in syslogd.

For example, consider if I write a program to convert picture formats.
It could be that I've included two algorithms. One algorithm would create
a new image in separate memory because then I could "undo". The other could
be intended for when I can't malloc() enough memory to overwrite the old
image with the new image.

Jesper Peterson

Feb 9, 1995, 1:44:20 AM
In article <3h8vq4$4pu$1...@heifetz.msen.com>,
Jeffrey Sturm <jst...@garnet.msen.com> wrote:
>> [re: malloc not grabbing a real page until memory is accessed]

>Linux has paged virtual memory. Even if you have only 40MB physical
>memory, a program has almost the entire 4GB address space available to it.
>Not until you access a page of memory does Linux try to map it to
>physical memory.

Is there any way of testing or accessing malloc'd memory in a non-blocking
fashion under these circumstances? e.g.:

ptr = malloc(BIGNUM);
if ( non_block_map(ptr+foo, size) )
read(ptr+foo, .....);
else
arrrgh;

This would make programs that use data structures with 'holes' (similar
to sparse files) more robust.
--
Jesper Peterson j...@digideas.com.au
j...@mtiame.mtia.oz.au
j...@io.com

Lars Hofhansl

Feb 8, 1995, 8:51:06 AM

In article <1995Feb7.1...@tudedv.et.tudelft.nl>, mnij...@et.tudelft.nl writes:
>As I was writing my program, I noticed an oddity (=bug?).
>It's probably best explained by a simple program:
>
>#include <stdlib.h>
>int main(void) {
> int i,*p;
> /* 1st stage */
> for(i=0;i<10000;i++) {
> p[i]=malloc(4096)
> if (p[i]==NULL) {
> fprintf(stderr,"Out of memory\n");
> exit(1);
> }
> }
> /* 2nd stage */
> for(i=0;i<10000;i++)
> *(p[i])=1;
>}
>
>As you can see the first stage tries to allocate 40Mb of memory. Since
>I don't have that kind of memory it should fail ofcourse. To my
>surprise it didn't. (!)
>Well then, the second stage tries to access the 40Mb. At this point
>Linux figures out that that kind of memory isn't there, so it kind of
>hangs. Not really it just becomes increadably slow, I was able to exit
>the program with CTRL-C but it did take a few minutes to do that.
>
>BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
>is faster that calloc, so I prefer to malloc.
>

There's nothing odd in this behavior (well, except that malloc should
check whether there is enough virtual memory or not).

Remember that your program runs in an environment of virtual memory !

Physical memory is only used when virtual memory is really accessed. That's, BTW,
the reason why you can run processes which require more memory than is
physically available: at any one moment it's only necessary to have in memory
the part which is currently being accessed.


In the 1st stage you malloc 40MB of RAM. Malloc does nothing more than
generate a new node in the (info-)list of allocated memory portions, to ensure
that succeeding mallocs don't allocate the same part of memory again.

The memory is not used (accessed) yet, so there is no need to page in (or out)
any part of the memory.
Now we reach the 2nd stage of your program. The allocated memory is
(write-)accessed now. Since it cannot fit into main memory as a whole, some
pages need to be paged out. That's why it becomes so incredibly slow.

The difference between malloc and calloc is that calloc tries to initialize
the allocated memory with 0. Maybe calloc also checks the size of available
virtual memory...
Anyway, because calloc does initialization, the memory is accessed as soon as
it is "calloced", so calloc should fail when it cannot initialize the
memory.


Lars

S. Joel Katz

Feb 9, 1995, 8:28:43 AM
In <3hb8qb$6...@news.iastate.edu> bre...@dopey.me.iastate.edu (Monty H. Brekke) writes:

> I've noticed this feature on other operating systems also. The thing
>that bothers me is that if I request more memory than I have available
>(phsical + swap), my program has no way (as far as I can tell) of
>knowing when/if an out-of-memory condition occurs. Say, for example,
>that I have allocated space for 25,000,000 integers, at 4 bytes each.
>That's 100,000,000 bytes of memory. I've got 16MB physical and 32MB of
>swap. Clearly, then, the following loop will fail at some point.

> for (i = 0; i < 25000000; ++i)
> huge_array[i] = 0;

> How does my program know that this loop generated a memory fault?
>Can I catch some signal? AT any rate, it seems like it would be simpler
>to be able to count on malloc()'s return value being correct. I can
>understand the advantage of the current implementation when the amount
>of memory requested is less than the total available, but I fail to
>see why malloc() doesn't return a failure when I try to request more
>memory than I can possibly allocate. Anyone?

The problem with malloc failing is it would break the program I
showed above. Programs often malloc huge arrays (larger than they will
ever need) and count on them working. If the program later really
requires more RAM than it allocated, of course, it will fail.

As a simple example, a 'disassociated press' program I wrote
allocates space for 10,000,000 word nodes at about 16 bytes apiece. This
program would fail on any system with less than 160M of virtual memory if
all of the memory was really allocated immediately.

If you want, you can write a '1' every 4K to force the memory to
be instantiated, but this is a horrible waste. Many programs allocate
memory they never use or do not use until much later in their execution.
Linux is very smart about this.
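
A minimal sketch of that "write a 1 every 4K" trick, assuming the i386
page size of 4096 bytes; the wrapper name malloc_touched is made up for
illustration, and if the pages cannot be backed it is the touch loop
itself that dies, not the call:

#include <stdlib.h>

#define PAGE_SIZE 4096          /* i386 page size assumed */

void *malloc_touched(size_t size)
{
        char *p = malloc(size);
        size_t off;

        if (p != NULL)
                for (off = 0; off < size; off += PAGE_SIZE)
                        p[off] = 1;     /* force the kernel to map a real page */
        return p;
}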

If you really care, you can always read /proc/meminfo and see how
much memory is available.
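
For what it's worth, a rough sketch of doing that from a program; the
exact layout of /proc/meminfo varies between kernel versions, so the
parsing below is illustrative only:

#include <stdio.h>

/* scan /proc/meminfo for the "Mem:" line and return its third number
   (free bytes); returns -1 if the file or the expected layout is missing */
long free_memory(void)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long total, used, freemem;

        if (f == NULL)
                return -1;
        while (fgets(line, sizeof(line), f) != NULL)
                if (sscanf(line, "Mem: %ld %ld %ld", &total, &used, &freemem) == 3) {
                        fclose(f);
                        return freemem;
                }
        fclose(f);
        return -1;
}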

I am quite happy with the present Linux implementation and find
taking advantage of it a win-win situation over dynamic allocation (which
has execution penalties) or truly allocating the maximum needed (which
has space penalties).

Though, a signal that a program could request, to be sent
to it if memory started to get 'low', might be nice. Though if you really
need the RAM (which you presumably do, since you wrote to it), what can you
do? Paging to disk is silly; that is what swap is for.

Bill C. Riemers

Feb 9, 1995, 2:54:41 PM
In article <3hd5ab$9...@panix3.panix.com>,

S. Joel Katz <stim...@panix.com> wrote:
> Though, a signal that a program could request that would be sent
>to it if memory started to get 'low' might be nice. Though, if you really
>need the RAM (which you presumably do since you wrote to it),what can you
>do. Paging to disk is silly, that is what swap was for.

Often programs can free up RAM if needed. For example, a CAD program
running out of memory could free some of the buffers it was keeping for
an undo function. Otherwise, there is no real point in even checking the
malloc() return value. If malloc() can never fail, why bother? Why even
bother passing a value to malloc? How about just always having malloc
allocate 10MB of space...

As near as I can tell, the only hope is to always use calloc() instead.
However, this leaves me wondering what happens if I want realloc() to increase a
buffer's size? Does Linux really allocate the new memory, or should I try
zeroing things to avoid crashing later? Normally I write programs to
only request memory they will be using and free it as soon as it is not
needed. So not really having the memory allocated is a useless feature
that just makes logically flawless programs crash.

Steven Buytaert

Feb 10, 1995, 4:31:16 AM
mnij...@et.tudelft.nl wrote:

: As I was writing my program, I noticed an oddity (=bug?).
: It's probably best explained by a simple program:

: for(i=0;i<10000;i++) {
: p[i]=malloc(4096)
: if (p[i]==NULL) {
: fprintf(stderr,"Out of memory\n");
: exit(1);

: }
: }
: for(i=0;i<10000;i++)
: *(p[i])=1;

: As you can see the first stage tries to allocate 40Mb of memory. Since
: I don't have that kind of memory it should fail ofcourse. To my
: surprise it didn't. (!)

: Well then, the second stage tries to access the 40Mb. [...]

The physical memory pages are not allocated until there is a reference
to the pages. Check out /usr/src/linux/mm/*.c for more precise information.
(When sbrk() is called during a malloc, a vm_area structure is enlarged
or created; it's not until a page fault that a page is really taken and
used.)

It's not a bug. IMHO, a program should allocate and use the storage as
it goes, not in chunks of 40 megabytes...

--
Steven Buytaert

WORK buyt...@imec.be
HOME buyt...@clever.be

'Imagination is more important than knowledge.'
(A. Einstein)

Arnt Gulbrandsen

Feb 10, 1995, 11:50:25 AM
In article <3hb8qb$6...@news.iastate.edu>,

Monty H. Brekke <bre...@dopey.me.iastate.edu> wrote:
> I've noticed this feature on other operating systems also. The thing
>that bothers me is that if I request more memory than I have available
>(phsical + swap), my program has no way (as far as I can tell) of
>knowing when/if an out-of-memory condition occurs.
<deletia>

Your program has no way to detect that five other users all start
memory-intensive applications while your program is running either.

This has been discussed both here and on the linux-kernel mailing
list. My impression from those threads is that if anyone writes
something intelligent, Linus will accept it. (Truism, I know.)

My thinking, FWIW, is that there ought to be two new signals,
SIGMEMLOW and SIGLOADHIGH, which default to SIG_IGN and may be sent to
any process (that has installed handlers) at any time, and which hint
to the process that memory is low or load high. The process would
then do something or not do anything, depending on the programmer's
whim. A loadable kernel module might wake up now and then (using
timers, not the scheduler, of course) and send some signals if its
criteria of high load or low memory are fulfilled.

I'm not going to write any monitoring module, but if there were such
signals I might patch my www/ftp daemons, and after 1.3 is out I may
actually write a (trivial) patch to add the signals and make them
default to SIG_IGN. It would at least change the discussions :)
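
As a sketch of what a daemon might do with such a hint: SIGMEMLOW does
not exist, so SIGUSR1 stands in for the proposed signal here, and the
handler merely sets a flag for the main loop to act on:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t mem_low = 0;

static void on_memlow(int sig)
{
        (void)sig;
        mem_low = 1;                    /* noted; the main loop sheds load */
}

int main(void)
{
        signal(SIGUSR1, on_memlow);     /* SIGUSR1 standing in for SIGMEMLOW */
        for (;;) {
                if (mem_low) {
                        /* e.g. free caches, refuse new connections */
                        mem_low = 0;
                }
                pause();                /* stand-in for the daemon's real event loop */
        }
}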

--Arnt

Cameron Hutchison

Feb 11, 1995, 9:33:57 AM
stim...@panix.com (S. Joel Katz) writes:
> If you really care [about not being able to malloc more memory
>than what exists], you can always read /proc/meminfo and see how
>much memory is available.

This won't work. You need some sort of atomic operation to check and malloc
memory. Otherwise you could read /proc/meminfo and find you have enough
memory, your quantum expires and another process takes the memory, and
you're back where you started.

It would be better to have a per process flag that indicated the memory
allocation policy you want for that process. How you manipulate this flag
is left as an exercise for the reader.

Cheers
--
Cameron Hutchison (ca...@nms.otc.com.au) | Beware of the clams
GCS d--@ -p+ c++(++++) l++ u+ e+ m+(-) s n- h++ f? !g w+ t r+

Lars Wirzenius

Feb 12, 1995, 10:35:19 AM
stim...@panix.com (S. Joel Katz) writes:
> The problem with malloc failing is it would break the program I showed above.

That would be a good thing. Seriously. If a program can't rely on the
memory it has allocated to actually be usable, it can't handle low memory
situations intelligently. Instant Microsoftware. Instant trashing systems.
Instant "Linux is unreliable, let's buy SCO". Instant end of the univ...,
er, forget that one, but it's not a good idea anyway.

There's more to writing good software than getting it through the
compiler. Error handling is one of those things, and Linux makes it impossible
to handle low memory conditions properly. Score -1 big design misfeature
for Linus.

> Programs often malloc huge arrays (larger than they will
> ever need) and count on them working.

I've never seen such a program, but they're buggy. Any program using
malloc and not checking its return value is buggy. Since malloc almost
always lies under Linux, all programs using malloc under Linux are
buggy.

This `lazy allocation' feature of Linux, and Linus's boneheadedness
about it, is about the only reason why I'm still not sure he isn't a
creature from outer space (oops, I'm going to be hit by a Koosh ball
the next time Linus comes to work :-). The lazy allocation is done, as
far as I can remember from earlier discussions, to avoid a fork+exec
from requiring, even temporarily, twice the amount of virtual memory,
which would be expensive for, say, Emacs. For this gain we sacrifice
reliability; not a very good sacrifice, in my opinion. I also don't buy the
argument that it's important to make it easy to write sparse arrays.
(Such arrays are not all that common, and it's easy enough to implement
them in traditional systems.)
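
One such traditional implementation, sketched here with made-up names
(sparse_set/sparse_get): a two-level table that only allocates the 4K
chunks actually written to, so an allocation failure can be reported at
the point of use:

#include <stdlib.h>

#define CHUNK 1024                      /* ints per chunk (4Kb) */
#define NCHUNKS 10000                   /* covers 10,240,000 elements */

static int *chunks[NCHUNKS];

/* store a value; returns -1 if a chunk cannot be allocated */
int sparse_set(long idx, int value)
{
        int *c = chunks[idx / CHUNK];

        if (c == NULL) {
                c = calloc(CHUNK, sizeof(int));
                if (c == NULL)
                        return -1;
                chunks[idx / CHUNK] = c;
        }
        c[idx % CHUNK] = value;
        return 0;
}

int sparse_get(long idx)
{
        int *c = chunks[idx / CHUNK];
        return c ? c[idx % CHUNK] : 0;  /* untouched elements read as 0 */
}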

What would be needed, in my opinion, is at least a kernel compilation
or bootup option that allows the sysadmin to specify the desired behaviour,
perhaps even having a special system call so that each process can
decide for itself. (That shouldn't even be all that difficult to write
for someone who rewrites the memory management in one day during a so
called code freeze.)

> As a simple example, a 'disassociated press' program I worte
> allocates space for 10,000,000 word nodes at about 16 bytes apiece. This
> program would fail on any system with less than 160M of virtual memory if
> all of the memory was really allocated immediately.

Guess what it does on any system with reliable virtual memory. Guess
what it does when you use more word nodes than there is memory for on
your Linux box.

> If you really care, you can always read /proc/meminfo and see how
> much memory is available.

No you can't. 1) The OS might not allow you to use all that memory, and
duplicating memory allocation in every application so that it can check
it properly is rather stupid. 2) During the time between the check and
the allocation, the situation might change radically; e.g., some other
application might have allocated memory. 3) The free memory might be
a lie, e.g., the OS might automatically allocate more swap if there is
some free disk space.

--
Lars.Wi...@helsinki.fi (finger wirz...@klaava.helsinki.fi)
Publib version 0.4: ftp://ftp.cs.helsinki.fi/pub/Software/Local/Publib/

Richard L. Goerwitz

Feb 14, 1995, 12:27:31 AM
In article <3hl9rn$t...@klaava.Helsinki.FI>, Lars Wirzenius <wirz...@cc.Helsinki.FI> wrote:
>
>This `lazy allocation' feature of Linux, and Linus's boneheadedness
>about it, is about the only reason why I'm still not sure he isn't a
>creature from outer space (oops, I'm going to be hit by a Koosh ball
>the next time Linus comes to work :-).

Geez, I'd hit you with more than that if you were my co-worker.
Boneheadedness?

--

Richard L. Goerwitz *** go...@midway.uchicago.edu

Peter Funk

Feb 14, 1995, 1:59:19 AM
In <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI (Lars Wirzenius) writes:
> [...] The lazy allocation is done, as
> far as I can remember from earlier discussions, to avoid a fork+exec
> from requiring, even temporarily, twice the amount of virtual memory,
> which would be expensive for, say, Emacs. For this gain we sacrifice
> reliability; not a very good sacrifice, in my opinion.

Wouldn't a 'vfork' solve this problem ? What's wrong with 'vfork' ?

Regards, Peter
-=-=-
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany
office: +49 421 2041921 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)

Ralf Schwedler

Feb 16, 1995, 4:30:17 AM

In article <1995Feb10.0...@imec.be>, buyt...@imec.be (Steven Buytaert) writes:
mnij...@et.tudelft.nl wrote:

: As I was writing my program, I noticed an oddity (=bug?).
: It's probably best explained by a simple program:

: for(i=0;i<10000;i++) {
: p[i]=malloc(4096)
: if (p[i]==NULL) {
: fprintf(stderr,"Out of memory\n");
: exit(1);
: }
: }
: for(i=0;i<10000;i++)
: *(p[i])=1;

: As you can see the first stage tries to allocate 40Mb of memory. Since
: I don't have that kind of memory it should fail ofcourse. To my
: surprise it didn't. (!)
: Well then, the second stage tries to access the 40Mb. [...]

I have read about all of this thread. I think I understand the (mainly
efficiency oriented) arguments which support this behaviour. It's
probably not useful to discuss changing this behaviour, as some software
may rely on this.

Anyhow, from the point of view of an application programmer, I consider
the way malloc is realized absolutely dangerous. I want to be able to
handle error conditions as close as possible to the point of their
origin. The definition of malloc is 'allocate memory', not
'intend to allocate memory'. I want to decide myself how to handle
memory overflow conditions; from that point of view I cannot accept
any program abort not controlled by my application. All hints given
so far (e.g. using some technique to find the amount of free memory)
are useless (If I understood it well, even calloc will abort in situations
where the memory is not available; please stop reading here if this is not
the case). Such methods would rely on friendly behaviour of other apps
running; which is not acceptable in a multitasking environment.

My question:

Is there a version of malloc available for Linux which guarantees
allocation of memory, or returns NULL (this is the functionality
which I consider safest for programming) ? Maybe -libnmalloc?

Thanks,

Ralf

--
#####################################################################
Dipl.-Phys. Ralf Schwedler Tel. +49-241-80-7908
Institut fuer Halbleitertechnik II Fax. +49-241-8888-246
Sommerfeldstrasse 24 ra...@fred.basl.rwth-aachen.de
D-52074 Aachen

Lars Wirzenius

Feb 19, 1995, 11:33:16 AM
go...@midway.uchicago.edu writes:
> Geez, I'd hit you with more than that if you were my co-worker.
> Boneheadedness?

As it happens, Linus seems to have missed my article altogether. I haven't
been hit by anything yet. :-)

Lars Wirzenius

Feb 19, 1995, 11:37:37 AM
p...@artcom0.north.de (Peter Funk) writes:
> Wouldn't a 'vfork' solve this problem ? What's wrong with 'vfork' ?

The problem with vfork is that it doesn't solve the problem for
programs that don't use it; many programs don't. Its semantics are
also stupid (although necessary). The same speed can be achieved with
copy-on-write and other memory management trickery.

Alan Cox

Feb 21, 1995, 1:41:40 PM
In article <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI (Lars Wirzenius) writes:
>situations intelligently. Instant Microsoftware. Instant trashing systems.
>Instant "Linux is unreliable, let's buy SCO". Instant end of the univ...,
>er, forget that one, but it's not a good idea anyway.

Tried SCO with any resource limits on the problem?

>There's more to writing good software than getting it through the
>compiler. Error handling is one of them, and Linux makes it impossible
>to handle low memory conditions properly. Score -1 big design misfeature
>for Linus.

Scientists like it that way, other people should read the limit/rusage
man pages.

Alan


--
..-----------,,----------------------------,,----------------------------,,
// Alan Cox // iia...@www.linux.org.uk // GW4PTS@GB7SWN.#45.GBR.EU //
``----------'`--[Anti Kibozing Signature]-'`----------------------------''
One two three: Kibo, Lawyer, Refugee :: Green card, Compaq come read me...

Marty Galyean

Feb 21, 1995, 12:48:48 PM
Lars Wirzenius (wirz...@cc.Helsinki.FI) wrote:

After reading this thread it seems there are two views at work...
the first says that a program should either get the memory it wants,
guaranteed, or be told it can't... while the other view is that the
previous view is too inefficient and that a program should rely on
swapping on demand to handle faults and just not worry about
a real situation of no memory, swap or otherwise, being available.

Neither of these seems very satisfying for all the reasons discussed
previously in the thread.

However, I kind of like the way Linux works. Here's why... People are fond
of presenting the fact that in a multitasking environment memory that was
available a moment before may not be there a moment later. But guys, the
opposite is also true... memory that did not appear available a moment before
might be *freed* a moment later, and thus be available... OS's are becoming
sophisticated enough that you just can't plan everything out
deterministically... your program has to go with the flow and adjust.

I also agree (with a previous post) that signals to indicate system load,
swap frequency, etc. would be nice...and integral to any program that does
'go with the flow'...
It would be nice if your program could just take a look around, see that
it's just too hard to get anything useful done, and stop with appropriate
messages... perhaps with the option of resuming where it left off later
automatically. This could be done just by looking at the system time
once in a while to measure lag... it doesn't really need OS support...
this would be gambling, of course.

I don't like the idea that if my program didn't look quick enough or
guessed wrong it could fail ungracefully when swap space ran out. It does
not seem right...new signals could make this a little easier, but the
unavoidable fact is that you can never guarantee you have access to 'your'
memory ...kind of like reservations on the airlines...I can't see
either of these as as ever being 'easy-to-error-handle' situations ;-)
Things like this keep things interesting though.

Marty
gal...@madnix.uucp

Doug DeJulio

Feb 22, 1995, 5:05:10 PM
In article <1995Feb21.1...@madnix.uucp>,

Marty Galyean <gal...@madnix.uucp> wrote:
>After reading this thread it seems there are two views at work...
>the first says that a program should either get the memory it wants
>guaranteed, or be told it can't...while the other view is that the
>previous view is too inefficient and that a program should rely on
>swapping on demand to handle fault and just not worry about
>a real situation of no memory, swap or otherwise, available.

Either behavior should be available. Both functionalities should be
present.

Any function defined by POSIX should conform exactly to the behavior
POSIX specifies. This is very important. We can't claim Linux is a
POSIX OS if it openly violates standards on purpose.

If the standard does not specify the exact way "malloc()" is supposed
to perform, then no POSIX-compliant C program can depend on either
behavior. You've got to write all your programs assuming either
behavior could occur, or they're not portable.

Any functionality not offered within the POSIX standard should be done
via extensions of some sort.

If you disagree with any of these assertions besides the first one
(that both behaviors should be present), you're basically saying that
it's not important that Linux attempt to conform to the POSIX
standard.

So, what *does* the POSIX standard say about the behavior of malloc()?
--
Doug DeJulio | R$+@$=W <-- sendmail.cf file
mailto:dd...@pitt.edu | {$/{{.+ <-- modem noise
http://www.pitt.edu/~ddj/ | !@#!@@! <-- Mr. Dithers swearing

Bruce Thompson

Feb 24, 1995, 2:27:58 AM
In article <3i7rsc$e...@kruuna.helsinki.fi>, wirz...@cc.Helsinki.FI (Lars
Wirzenius) wrote:

I hate to say it, but I agree. Sorry Linus, but Boneheadedness if you
consider this a feature.

A bit of history. I used to work on Apollo workstations. The Apollo had a
novel way of allocating swap space: it essentially created temporary files
that were automatically deleted on close. Under Aegis (the O/S) version
9.2, memory allocation was handled as Linux handles it now. Hardly
surprising that we had all kinds of untraceable errors when the disk
filled up. As of 9.5, backing store was _always_ allocated when a process'
brk value was increased. This allowed malloc to correctly return NULL when
no more memory was available.

The problem with the disk filling up still remained, but at least
processes could handle an out-of-memory condition gracefully.

I can understand Linus' reluctance to create a situation where a fork+exec
from emacs requires the duplication of megs of data-segment which will be
released immediately when the exec occurs. I see a few possibilities here.
The first idea that springs to mind is simply to _do_ just that. If a
process needs to do a fast fork+exec, there's always vfork. That's its
intended purpose. The second, and perhaps preferred, solution (but more
work) is to simply clone the page tables and mark both sets
"copy-on-write". Then when either process attempts to write to the page,
the page is cloned before the write is allowed. This is the method used in
SCO, and it's one of the few things that I think they've done right. The
difference between vfork and fork becomes minimal. The overhead of this
method is only a copy of the page tables, and some extra page-faults until
the working sets are actually copied.
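
For completeness, the vfork+exec pattern mentioned above looks roughly
like this; the parent is suspended until the child execs or exits, so
nothing needs to be copied (a minimal sketch, "ls" chosen arbitrarily):

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int spawn_ls(void)
{
        pid_t pid = vfork();

        if (pid < 0)
                return -1;
        if (pid == 0) {                 /* child: exec right away */
                execlp("ls", "ls", "-l", (char *)NULL);
                _exit(127);             /* exec failed; must not return */
        }
        waitpid(pid, NULL, 0);          /* parent resumes once the child has exec'd */
        return 0;
}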

On the 486, this could be easily implemented by protecting all the pages
and using one of the available bits in the page table entries to indicate
"copy-on-write". There may be some additional overhead, but I highly doubt
it's going to be all that bad.

It's absolutely critical that processes see that they've run out of memory
in a controlled manner. There are two defined ways of doing this. The
first is when sbrk is called, it's defined to return -1. The second method
is malloc returning NULL. I'd like to echo the opinions of others who've
said that any program that doesn't check the return values of malloc,
sbrk, new (C++) or _ANY_ library or system call _IS_BROKEN_. There's
frankly no excuse for not checking for errors. I freely admit that I don't
check the results as often as I should, but that doesn't excuse me. If my
programs fail because I'm not checking correctly the fault is purely my
own.

IMNSHO the arguments for keeping the current memory behavior so that you
can malloc 10M, use only a tiny bit, and get away with it are _not_ valid
arguments. As Lars pointed out, sparse matrices can be written in other
ways, and indeed should be. When malloc, or rather, when sbrk returns a
pointer to you, the system is telling you "that memory is yours." Another
way of putting it is that the system has made a commitment to you that you
can access the memory that you requested. The current kernel behavior
_violates_ that commitment.

Please, Linus (and/or other kernel hackers) let's fix this! Given the
current push for 1.2, let's at least commit to addressing this (pardon the
pun) during 1.3. Writing reliable software is difficult enough without
adding needless sources of potential error.

Cheers,
Bruce.

--
--------------------------------------------------------------------
Bruce Thompson | "Never put off till tomorrow
PIE Developer Information Group | what you can put off till next
Apple Computer Inc. | week".
AppleLink: bthompson | -- Unknown
Internet: br...@newton.apple.com

Bruce Thompson

Feb 24, 1995, 2:38:57 AM
I really hate it when I have to follow up my own posting. Damn. Everyone
together: Bruce, RTFM!


In article <bruce-23029...@17.255.39.192>, br...@newton.apple.com
(Bruce Thompson) wrote:

[ ... ]

> I can understand Linus' reluctance to create a situation where a fork+exec
> from emacs requires the duplication of megs of data-segment which will be
> released immediately when the exec occurs. I see a few possibilities here.
> The first idea that springs to mind to simply to _do_ just that. If a
> process needs to do a fast fork+exec, there's always vfork. That's it's
> intended purpose. The second, and perhaps preferred solution (but more
> work) is to simply clone the page tables and mark both sets
> "copy-on-write". Then when either process attempts to write to the page,
> the page is cloned before the write is allowed. This is the method used in
> SCO, and it's one of the few things that I think they've done right. The
> difference between vfork and fork becomes minimal. The overhead of this
> method is only a copy of the page tables, and some extra page-faults until
> the working sets are actually copied.
>
> On the 486, this could be easily implemented by protecting all the pages
> and using one of the available bits in the page table entries to indicate
> "copy-on-write". There may be some additional overhead, but I highly doubt
> it's going to be all that bad.

I just now read the fork(2) manpage, and it claims that Linux already uses
copy-on-write. Given that, can someone please tell me the justification
for not allocating page-frames when sbrk (malloc) is called? The only
possible justification I had been able to come up with doesn't actually
exist.

Thierry Bousch

Feb 24, 1995, 8:01:11 AM
Doug DeJulio (dd...@pitt.edu) wrote:

: So, what *does* the POSIX standard say about the behavior of malloc()?

Nothing. The malloc() function doesn't belong to the POSIX standard.
(It conforms to ANSI C).

The problem, unfortunately, is not only with malloc(). On most Unix systems,
the stack is automatically expanded when needed; therefore, any procedure
call is an implicit memory allocation; if it fails, how are you going to
report the error to the user? There is no way to handle this kind of
error gracefully; you have to suspend or kill the process.
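
A small illustration of that point: entering a function with a large
automatic array is itself an allocation, and there is no return value to
check if it cannot be satisfied (the sizes below are arbitrary):

/* each call allocates about 1Mb of automatic storage on entry */
void deep(int n)
{
        char scratch[1024 * 1024];

        scratch[0] = (char)n;
        if (n > 0)
                deep(n - 1);            /* each level grows the stack further */
}

int main(void)
{
        deep(100);                      /* ~100Mb of stack: far beyond 4Mb + 17Mb swap */
        return 0;
}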

Note also that if you really run out of virtual memory, the system is
probably already paging like hell, and you won't be able to do anything
useful on it; it's not very different from a frozen system, and you'll
probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
won't respond (in a reasonable time, that is).

Thierry.

Doug DeJulio

Feb 24, 1995, 7:10:15 PM
In article <3iklan$2...@linotte.republique.fr>,

Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>Doug DeJulio (dd...@pitt.edu) wrote:
>
>: So, what *does* the POSIX standard say about the behavior of malloc()?
>
>Nothing. The malloc() function doesn't belong to the POSIX standard.
>(It conforms to ANSI C).

What does ANSI C say about the behavior of malloc() then?

Doug DeJulio

Feb 24, 1995, 7:12:08 PM
In article <3iklan$2...@linotte.republique.fr>,
Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>Note also that if you really run out of virtual memory, the system is
>probably already paging like hell, and you won't be able to do anything
>useful on it; it's not very different from a freezed system, and you'll
>probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
>won't respond (in a reasonable time, that is).

But running out of virtual memory isn't the only reason malloc() could
fail. What about the per-process memory limit (as in "ulimit -a")?
What happens with that under Linux right now? A *process* can run out
of available memory even before the system starts paging.
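
One way to explore that question is to lower the limit yourself with
setrlimit() and see whether malloc() then reports failure. Whether this
works depends on the kernel enforcing RLIMIT_DATA for brk() and on malloc
using brk(), so treat it as an experiment, not a guarantee:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;
        char *p;

        rl.rlim_cur = rl.rlim_max = 8 * 1024 * 1024;    /* 8Mb data segment */
        if (setrlimit(RLIMIT_DATA, &rl) != 0)
                perror("setrlimit");

        p = malloc(16 * 1024 * 1024);                   /* ask for 16Mb */
        if (p == NULL)
                printf("malloc reported failure\n");
        else
                printf("malloc \"succeeded\" despite the limit\n");
        return 0;
}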

Greg Comeau

Feb 24, 1995, 7:24:39 PM
In article <3ilsh7$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>In article <3iklan$2...@linotte.republique.fr>,
>Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>>Doug DeJulio (dd...@pitt.edu) wrote:
>>
>>: So, what *does* the POSIX standard say about the behavior of malloc()?
>>
>>Nothing. The malloc() function doesn't belong to the POSIX standard.
>>(It conforms to ANSI C).
>
>What does ANSI C say about the behavior of malloc() then?

Very little. But I don't see the beginning of this thread:
What part of malloc()'s behavior are you interested in?

- Greg
--
Comeau Computing, 91-34 120th Street, Richmond Hill, NY, 11418-3214
Here:com...@csanta.attmail.com / BIX:comeau or com...@bix.com / CIS:72331,3421
Voice:718-945-0009 / Fax:718-441-2310 / Prodigy: tshp50a / WELL: comeau

Doug DeJulio

Feb 24, 1995, 9:04:01 PM
In article <3iltc7$l...@panix.com>,

Greg Comeau <com...@csanta.attmail.com> wrote:
>In article <3ilsh7$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>>In article <3iklan$2...@linotte.republique.fr>,
>>Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>>>Doug DeJulio (dd...@pitt.edu) wrote:
>>>
>>>: So, what *does* the POSIX standard say about the behavior of malloc()?
>>>
>>>Nothing. The malloc() function doesn't belong to the POSIX standard.
>>>(It conforms to ANSI C).
>>
>>What does ANSI C say about the behavior of malloc() then?
>
>Very little. But I don't see the beginning of this thread:
>What part of malloc()s behavior are you interested in?

Well, as I understand it, Linux's malloc() will basically always
succeed, even if there's much less virtual memory available than you
requested. It's only when you actually try to *use* the memory you've
been allocated that you get a problem. Apparently, the page isn't
actually allocated until it's touched.

The traditional Unix approach has been to have malloc() fail when you
try to allocate too much memory, so the application knows ahead of
time that it's not going to have the memory it wants.

I'm trying to figure out if both behaviors are compliant with relevant
standards. If so, portable software must be written assuming either
behavior could occur. If, on the other hand, Linux violates a
standard, that's good ammunition to use when lobbying for a change in
Linux's behavior.

I don't really care which behavior Linux uses, AS LONG AS it exactly
conforms to the written (not de-facto) standards.

S. Lee

Feb 25, 1995, 1:39:32 AM
In article <3ilsh7$a...@usenet.srv.cis.pitt.edu>,

Doug DeJulio <dd...@pitt.edu> wrote:
>
>What does ANSI C say about the behavior of malloc() then?

7.10.3 Memory management functions

The order and contiguity of storage allocated by successive calls
to the calloc, malloc, and realloc functions is unspecified. The pointer
returned if the allocation succeeds is suitably aligned.... If the space
cannot be allocated, a null pointer is returned....

Stephen
--
sl...@cornell.edu
Witty .sig under construction.

Bruce Thompson

Feb 25, 1995, 12:33:31 PM

It would, but in private discussions, someone (sorry, I can't remember
who) pointed out that vfork was developed originally to get around bugs in
the Copy-on-write implementation on VAXes. The Linux kernel apparently
already does copy-on-write on forks, so the difference between fork and
vfork is now irrelevant.

Either way, I can't see that there's a _valid_ reason for keeping the
behavior. I hate to beat a dead horse, but I have to. The job of the
kernel is to manage the resources of the machine. By allowing processes to
think they've received more memory than they actually have, the kernel is
abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure
I've mentioned it before, but it seems to me that a swap page could be
allocated (not written, just allocated) when pages are allocated to a
process. This would allow the kind of performance in the face of large
allocations that people may have come to expect. It would still ensure
that when the kernel told a process "here's a page" there actually _was_ a
page for that process. This last item is the whole point. Again, IMNSHO,
the kernel should never _EVER_ allocate resources it doesn't have.

Cheers,
Bruce.

--
--------------------------------------------------------------------
Bruce Thompson | "Never put off till tomorrow what
PIE Developer Information Group | you can comfortably put off till
Apple Computer Inc. | next week."
| -- Unknown
Usual Disclaimers Apply |

Damjan Lango

Feb 27, 1995, 3:20:55 PM
Bruce Thompson (br...@newton.apple.com) wrote:

: In article <57...@artcom0.north.de>, p...@artcom0.north.de (Peter Funk) wrote:

: > In <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI (Lars
: Wirzenius) writes:
: > [...] The lazy allocation is done, as
: > > far as I can remember from earlier discussions, to avoid a fork+exec
: > > from requiring, even temporarily, twice the amount of virtual memory,
: > > which would be expensive for, say, Emacs. For this gain we sacrifice
: > > reliability; not a very good sacrifice, in my opinion.
: >
: > Wouldn't a 'vfork' solve this problem ? What's wrong with 'vfork' ?

: It would, but in private discussions, someone (sorry, I can't remember
: who) pointed out that vfork was developed originally to get around bugs in
: the Copy-on-write implementation on VAXes. The Linux kernel apparently
: already does copy-on-write on forks, so the difference between fork and
: vfork is now irrelevant.

: Either way, I can't see that there's a _valid_ reason for keeping the
: behavior. I hate to beat a dead horse, but I have to. The job of the
: kernel is to manage the resources of the machine. By allowing processes to
: think they've received more memory than they actual have, the kernel is
: abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure
: I've mentioned it before, but it seems to me that a swap page could be
: allocated (not written, just allocated) when pages are allocated to a
: process. This would allow the kind of performance in the face of large
: allocations that people may have come to expect. It would still ensure
: that when the kernel told a process "here's a page" there actually _was_ a
: page for that process. This last item is the whole point. Again, IMNSHO,
: the kernel should never _EVER_ allocate resources it doesn't have.

: Cheers,
: Bruce.

Absolutely agree!
And I can't understand how this malloc bug came so far, up to 1.1.x.
It *must* be fixed before 1.2!!!
Even all those shitty OSes like dog windoze and NT do this the right way...
(well OK, dog doesn't have virtual mem, but NT does)
I would really like to see this fixed NOW, or people will start saying
hey, this Linux sux, it can't even do memory allocation right!

Maybe I should give an example of how it is done under NT if you want
this kind of behavior from malloc, but controlled, of course!
malloc is still malloc, but there is an additional VirtualAlloc.
I am not trying to say that there should be exactly a VirtualAlloc, but
the current malloc should at least be renamed to something like
hazard_malloc_with_hope and a new bug-free malloc written!

Well here is an example of NT VirtualAlloc for a very large bitmap
that has only a few pixels set:

BTW shouldn't we move this to comp.os.linux.development.system?

---8<---

#include <windows.h>
#include <assert.h>

#define PAGESIZE 4096
#define PAGELIMIT 100

class Bitmap{
private:
    BYTE *lpBits;
    BYTE *pages[PAGELIMIT];
    WORD width,height;
    WORD page;
public:
    Bitmap(WORD width,WORD height);
    ~Bitmap();

    void setPixel(WORD x,WORD y,BYTE c);
    void resetPixel(WORD x,WORD y);
    BYTE getPixel(WORD x,WORD y);
};

Bitmap::Bitmap(WORD w,WORD h){
    page=0;
    width=w;
    height=h;
    /* reserve address space only; no storage is committed yet */
    lpBits=(BYTE *)VirtualAlloc(NULL,           // start
                                (DWORD)w*h,     // size
                                MEM_RESERVE, PAGE_NOACCESS);
    assert(lpBits);
}

Bitmap::~Bitmap(){
    for(int i=0;i<page;i++) VirtualFree(pages[i],PAGESIZE,MEM_DECOMMIT);
    VirtualFree(lpBits,0,MEM_RELEASE);
}

void Bitmap::setPixel(WORD x,WORD y,BYTE c){
    __try{
        lpBits[(DWORD)y*width+x]=c;
    }
    __except(EXCEPTION_EXECUTE_HANDLER){
        /* page was never committed; commit it now and retry the write */
        assert(page<PAGELIMIT);
        pages[page]=(BYTE *)VirtualAlloc(
            lpBits+((DWORD)y*width+x)/PAGESIZE*PAGESIZE,   // start (page-aligned)
            PAGESIZE,                                      // size
            MEM_COMMIT, PAGE_READWRITE);
        assert(pages[page]);
        page++;
        lpBits[(DWORD)y*width+x]=c;
    }
}

void Bitmap::resetPixel(WORD x,WORD y){
    __try{
        lpBits[(DWORD)y*width+x]=0;
    }
    __except(EXCEPTION_EXECUTE_HANDLER){
        /* page never committed, so the pixel already reads as 0 */
    }
}

BYTE Bitmap::getPixel(WORD x,WORD y){
    BYTE bit;

    __try{
        bit=lpBits[(DWORD)y*width+x];
    }
    __except(EXCEPTION_EXECUTE_HANDLER){
        bit=0;
    }
    return bit;
}


void main(void){
    Bitmap &bmp=*new Bitmap(10000,10000);
    bmp.setPixel(0,0,1);
    bmp.setPixel(5000,5000,1);
    bmp.setPixel(9999,9999,1);
    delete &bmp;
}

---8<---
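
An approximate Linux analogue of the reserve/commit idea, assuming
mprotect() and mapping /dev/zero are available: reserve address space
with PROT_NONE and open up pages explicitly before use. Unlike
MEM_COMMIT, though, this only controls access; it does not guarantee
backing store:

#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PAGESIZE 4096

int main(void)
{
        long size = 10000L * 10000L;            /* 100Mb of address space */
        long off  = 5000L * 10000L + 5000;      /* one "pixel" */
        int fd = open("/dev/zero", O_RDWR);
        char *base;

        if (fd < 0)
                return 1;
        base = (char *)mmap(NULL, size, PROT_NONE, MAP_PRIVATE, fd, 0);
        if (base == (char *)-1)                 /* old-style MAP_FAILED check */
                return 1;

        /* "commit" the page holding that pixel before touching it */
        if (mprotect(base + (off / PAGESIZE) * PAGESIZE, PAGESIZE,
                     PROT_READ | PROT_WRITE) != 0)
                return 1;
        base[off] = 1;

        munmap(base, size);
        close(fd);
        return 0;
}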


bye
Damjan Lango

Hannes Reinecke

Feb 28, 1995, 7:57:26 AM
>>>>> "Ralf" == Ralf Schwedler <ra...@fred.basl.rwth-aachen.de> writes:

Ralf> In article <1995Feb10.0...@imec.be>,
Ralf> buyt...@imec.be (Steven Buytaert) writes:

[ malloc-prg deleted ]

Ralf> Anyhow, from the point of view of an application programmer,
Ralf> I consider the way malloc is realized absolutely
Ralf> dangerous. I want to be able to handle error conditions as
Ralf> close as possible to the point of their origin. The
Ralf> definition of malloc is 'allocate memory', not 'intend to
Ralf> allocate memory'.

Hmm. Having read this, I wondered whether you have heard about virtual
memory. _Every_ process has access to a so-called virtual memory
segment, which under Linux (i386) has a size of 3 GB
(cf. <asm/processor.h>). So, if you malloc() normally, you will get (in
the best case) this amount (unless the system crashes :-).
The amount of installed physical memory is merely a matter of speed.

Ralf> I want to decide myself how to handle
Ralf> memory overflow conditions; from that point of view I cannot
Ralf> accept any program abort not controlled by my
Ralf> application.

In normal conditions, in fact, you are the only one responsible for
out-of-memory cases created by your program; as far as the system is
concerned, it will simply refuse to give you any memory (i.e. malloc and
friends will return NULL).

Ralf> All hints given so far (e.g. using some
Ralf> technique to find the amount of free memory) are useless (If
Ralf> I understood it well, even calloc will abort in situations
Ralf> where the memory is not available; please stop reading here
Ralf> if this is not the case). Such methods would rely on
Ralf> friendly behaviour of other apps running; which is not
Ralf> acceptable in a multitasking environment.

Really ?

Have fun

Hannes
-------
Hannes Reinecke |
<ha...@vogon.mathi.uni-heidelberg.de> | XVII.: WHAT ?
|
PGP fingerprint available | T.Pratchett: Small Gods
see 'finger' for details |

Vivek Kalra

Feb 28, 1995, 10:38:14 AM
In article <3iklan$2...@linotte.republique.fr>,
Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>
>Note also that if you really run out of virtual memory, the system is
>probably already paging like hell, and you won't be able to do anything
>useful on it; it's not very different from a freezed system, and you'll
>probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
>won't respond (in a reasonable time, that is).
>
Okay, let's see: I have a machine with 8M of RAM and 12M of swap.
At this given moment, I have, say, 8 of those megs available. So I
run this super-duper image-processing program I have -- it checks
the current input size and determines that it needs 16M of memory
to do its thing on this input. So it malloc()s 16M and finds that
everything is fine and starts its thing, runs for three hours, and,
err, ooops, runs out of memory. Now, if malloc() had failed
earlier, I wouldn't have had to wait for three hours to find that
out, would I? Presumably, the program would have just told me at
the very beginning that not enough memory was available to do its
thing on the current input. And, no, the system before running
this program need not have been paging like hell, as you put it -- there was 8M of
memory available, remember?

Even worse, I might have a program that may already have modified
its input before finding out that it cannot finish its thing
because of lack of memory and so cannot write out the correct
output -- but the input is gone too. So now what?

The problems of not handling a NULL return from malloc() are well
known. To have a malloc() that might fail in a way that doesn't
give the programmer any chance to recover is just mind-boggling.

Vivek
--
Vivek email address signature
dsclmr: any ideas above, if there, are mine. All mine. And an illusion.
Oh, what a tangled web we weave, when first we practice to weave.
Quote for the day: '

Greg Comeau

unread,
Feb 28, 1995, 11:04:29 AM2/28/95
to
In article <3im36h$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>In article <3iltc7$l...@panix.com>,
>Greg Comeau <com...@csanta.attmail.com> wrote:
>>In article <3ilsh7$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>>>In article <3iklan$2...@linotte.republique.fr>,
>>>Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>>>>Doug DeJulio (dd...@pitt.edu) wrote:
>>>>
>>>>: So, what *does* the POSIX standard say about the behavior of malloc()?
>>>>
>>>>Nothing. The malloc() function doesn't belong to the POSIX standard.
>>>>(It conforms to ANSI C).
>>>
>>>What does ANSI C say about the behavior of malloc() then?
>>
>>Very little. But I don't see the beginning of this thread:
>>What part of malloc()s behavior are you interested in?
>
>Well, as I understand it, Linux's malloc() will basically always
>succeed, even if there's much less virtual memory available than you
>requested. It's only when you actually try to *use* the memory you've
>been allocated that you get a problem. Apparently, the page isn't
>actually allocated until it's touched.

Ok, ANSI malloc() doesn't actually say how the memory is obtained and stuff
like that since it's OS/environment specific. This sounds like perhaps
a RTL responsibility for sure then. That is, perhaps it should optimally
figure out how to touch the page, because the space for the object does
need to be available (in my interpretation) before the object returned
is used. If not, the null pointer is returned. As I recall, there is
some fuzziness about exactly what is meant by things like "object"
and "allocate", but IMO they do not interfere here. I'd post the
actual words but cannot find any of my copies of the standard at the
moment.

>The traditional Unix approach has been to have malloc() fail when you
>try to allocate too much memory, so the application knows ahead of
>time that it's not going to have the memory it wants.

Yes (well, not UNIX per se but the RTL). And this is what I believe the
standard says.

> I'm trying to figure out if both behaviors are compliant with relevant
>standards. If so, portable software must be written assuming either
>behavior could occur. If, on the other hand, Linux violates a
>standard, that's good ammunition to use when lobbying for a change in
>Linux's behavior.

>I don't really care which behavior Linux uses, AS LONG AS it exactly
>conforms to the written (not de-facto) standards.

If this is behaving as you describe it, I believe it violates.
Even despite the fuzziness, the commitment comes at the call,
because at that point either you have a "returned object" or not.
If not, there is no category of invalid pointer or invalid object here.

Ian McCloghrie

unread,
Feb 28, 1995, 12:39:39 PM2/28/95
to
lan...@ana.fer.uni-lj.si (Damjan Lango) writes:

>Bruce Thompson (br...@newton.apple.com) wrote:
>: In article <57...@artcom0.north.de>, p...@artcom0.north.de (Peter Funk) wrote:

>: Either way, I can't see that there's a _valid_ reason for keeping the
>: behavior. I hate to beat a dead horse, but I have to. The job of the
>: kernel is to manage the resources of the machine. By allowing processes to
>: think they've received more memory than they actual have, the kernel is
>: abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure

If I'm not mistaken, SVR4 has the same behaviour as Linux in this
respect. I've not tested it empirically, but I spent a little time
looking at the SVR4 vmem sources (secondary to a different project in
the fs) about a year ago, and that seemed to be what it was doing.
So it's not unheard-of behaviour.

--
Ian McCloghrie work: ia...@qualcomm.com home: i...@egbt.org
____ GCS d-- H- s+:+ !g p?+ au a- w+ v- C+++$ UL++++ US++$ P+>++
\bi/ L+++ 3 E+ N++ K--- !W--- M-- V-- -po+ Y+ t+ 5+++ jx R G''''
\/ tv- b+++ D- B--- e- u* h- f+ r n+ y*

The above represents my personal opinions and not necessarily those
of my employer, Qualcomm Inc.

Ivica Rogina

unread,
Feb 28, 1995, 2:35:18 PM2/28/95
to

ha...@mathi.uni-heidelberg.de (Hannes Reinecke) wrote:

> Hmm. Having read this, i wondered whether you have heard about virtual
> memory. _Every_ process has access to an so-called virtual memory
> segment, which has under linux(i386) the size of 3 GB
> (cf <asm/processor.h>). So, if you malloc() normally, you will get (in
> best cases) this amount (unless the system crashes :-).

This is not a matter of virtual memory. If I do a malloc(), I don't care
what the size of the RAM or the swap space or the virtual memory is.
Whatever it is, I want to be sure that I can use all the memory that was
assigned to me without having to wait for the sysop to push in another
couple-of-gigs-disc.
And, I don't want any user to be able to bring the entire system to a halt
by simply allocating a lot of memory.

Ivica

Thierry EXCOFFIER

unread,
Mar 1, 1995, 1:22:58 PM3/1/95
to
In article <bruce-25029...@17.205.4.52>, br...@newton.apple.com (Bruce Thompson) writes:

|> Either way, I can't see that there's a _valid_ reason for keeping the
|> behavior. I hate to beat a dead horse, but I have to. The job of the
|> kernel is to manage the resources of the machine. By allowing processes to
|> think they've received more memory than they actual have, the kernel is
|> abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure

A few months ago, the actual behaviour of "malloc" was removed.
I remember somebody saying:

Allocating a big chunk of memory (possibly greater than the available real
memory) is useful to avoid copy after copy after copy... into successively
larger tables.

{
char *t ;

t = malloc(10000000) ;
fgets(t,10000000,stdin) ;
t = realloc( t,strlen(t)+1 ) ;

return(t) ;
}

This function reads data of unknown size with a minimum number of copies.

Just to add my 2 centimes

Thierry.
--
If you are a UNIX user, type the following 2 lines to see my signature :
/bin/sh -c 'for I in xmosaic Mosaic midasWWW tkWWW chimera lynx perlWWW ; do
$I http://www710.univ-lyon1.fr/%7Eexco/ && break ; done'

Vivek Kalra

unread,
Mar 1, 1995, 2:20:51 PM3/1/95
to
In article <HARE.95Fe...@mathi.uni-heidelberg.de>,

Hannes Reinecke <ha...@mathi.uni-heidelberg.de> wrote:
>
>Ralf> In article <1995Feb10.0...@imec.be>,
>Ralf> buyt...@imec.be (Steven Buytaert) writes:
>
>Ralf> Anyhow, from the point of view of an application programmer,
>Ralf> I consider the way malloc is realized absolutely
>Ralf> dangerous. I want to be able to handle error conditions as
>Ralf> close as possible to the point of their origin. The
>Ralf> definition of malloc is 'allocate memory', not 'intend to
>Ralf> allocate memory'.
>
>Hmm. Having read this, i wondered whether you have heard about virtual
>memory. _Every_ process has access to an so-called virtual memory
>segment, which has under linux(i386) the size of 3 GB
>(cf <asm/processor.h>). So, if you malloc() normally, you will get (in
>best cases) this amount (unless the system crashes :-).
>The amount of installed physical memory is mere a matter of speed.
>
Hot damn! And I thought I was going to have to buy more memory for
my machine! So, let's see, I have 12Megs of RAM and 24Megs of swap
and they add up to 3GB of virtual memory? Where does the
difference come from? Microsoft?

>Ralf> I want to decide myself how to handle
>Ralf> memory overflow conditions; from that point of view I cannot
>Ralf> accept any program abort not controlled by my
>Ralf> application.
>
>In normal conditions, in fact you are the only one responsible for
>out-of-memory cases created by your program; as far as the system is
>concerned, it will only deny to give you any memory (i.e. malloc and
>friends will return NULL).
>

Huh? Clearly, *some* one here has *heard* about virtual memory.
I'd like to know just what that was...

Ivica Rogina

unread,
Mar 3, 1995, 1:01:34 PM3/3/95
to

agu...@nvg.unit.no (Arnt Gulbrandsen) wrote:

> FYI, malloc isn't part of the kernel.

That's not the issue. malloc() relies on what the kernel has to say about
sbrk. I believe that malloc is implemented correctly: it will actually return
a NULL pointer if the kernel doesn't allow the data segment of a process to
be increased. The problem is that the kernel will allow the data segment to
be increased without making sure that it can provide the granted resources.

> 1. Any process can run out of memory asynchronously, since it can
> run out of stack memory, and since root can shrink the ulimit (I'm
> not sure if this is implemented in linux yet).

Huh? What do you mean? Are you saying that root can take away allocated
memory from a running process? Never heard of that. Of course a vicious
root can even kill a process, but again, this is not the issue. All
Unices I've been working with (Sun OS, HP-UX, OSF, Ultrix) except Linux
guarantee that a process can use the memory it was granted. I've never
heard of memory being taken away. I don't object to not getting requested
memory (no matter if stack or data) but I strongly object to not fulfilling
a promise, and I regard malloc (i.e. sbrk) as a promise; what else is it
good for, if I don't have to check its return value?

> 2. People running programs that need to store lots of data but not
> access it very often need virtual memory.

So what? The above mentioned Unices have virtual memory too, and they still
have a working malloc/sbrk.

> Therefore, there's a good chance that your extremely robust program
> would be paralysed by swapping long before a hypothetical "safe
> malloc" detected out-of-VM.

Are you sure you mean what you are saying? "Out-of-VM"? I don't want malloc
to tell me it's out of VM; I want it to tell me that it's out of available
memory (RAM+swap).

For me, malloc/sbrk is kinda contract. The process is asking for memory, and
the kernel is granting that request. I don't want the kernel to say later:
"haha, April fool, I don't really have the memory that I've promised you".
That's really ridiculous. Name one program that takes advantage of the
Linux-style memory allocation and that can run on other Unices.

-- Ivica

Doug DeJulio

unread,
Mar 3, 1995, 6:55:55 PM3/3/95
to
In article <3j6fk8$5...@hydra.Helsinki.FI>,
Jussi Lahtinen <jmal...@cs.Helsinki.FI> wrote:

>In <D4pvF...@nntpa.cb.att.com> v...@rhea.cnet.att.com (Vivek Kalra) writes:
>
>>The problems of not handling a NULL return from malloc() are well
>>known. To have a malloc() that might fail in a way that doesn't
>>give the programmer any chance to recover is just mind-boggling.
>
>Malloc-checking is not enough. If you alloc all memory and then call
>a function which needs more stack space, your program will be dead and
>there is no way to prevent it.

If you alloc *all* memory, sure. But not if you alloc all *available*
memory.

You can have limits for mem and stack that are below the total VM of
your system (see sh's "ulimit" or csh's "limit"). You can set "mem"
and "stack" limits for a process. Use them.

Ian A. McCloghrie

unread,
Mar 3, 1995, 7:44:43 PM3/3/95
to
rog...@ira.uka.de (Ivica Rogina) writes:

>agu...@nvg.unit.no (Arnt Gulbrandsen) wrote:
>> 1. Any process can run out of memory asynchronously, since it can
>> run out of stack memory, and since root can shrink the ulimit (I'm
>> not sure if this is implemented in linux yet).
>Huh? What do you mean? Are you saying that root can take away allocated
>memory from a running process? Never heard of that. Of course a vicious

He's saying two things. One, when you make a function call,
saved registers and return addresses need to be pushed onto the stack,
and local variables for the new function need to be allocated (which
are also done on the stack). It's quite possible for the growing
stack to need an extra page of virtual memory and not be able to get
it.

Second, under most unixes, you can set certain resource limits on a
per-process basis, such as coredumpsize, cputime, stacksize,
and datasize.

>root can even kill a process, but again, this is not the issue. All
>Unices I've been working with (Sun OS, HP-UX, OSF, Ultrix) except Linux
>guarantee that a process can use the memory it was granted. I've never

SVR4 uses a similar allocation policy, I believe. So if the SunOS
you're referring to is Solaris 2, then they don't all guarantee it.

>heard of memory being taken away. I don't object to not getting requested
>memory (no matter if stack or data) but I strongly object to not fulfulling
>a promise, and I regard malloc (i.e. sbrk) as a promise, what else is it
>good for, if I don't have to check its return value.

If a process can die for other uncatchable resource problems (such
as no memory left to grow the stack), then what does it matter if it
can die for malloc()'d memory not really being available?

>> Therefore, there's a good chance that your extremely robust program
>> would be paralysed by swapping long before a hypothetical "safe
>> malloc" detected out-of-VM.
>Are you sure you mean what you are saying? "out-of-VM". I don't want malloc
>to tell me it's out-of-VM I want it to tell me that it's out of available
>memory (RAM+swap).

What do you think RAM+swap *is* if not VM?

>"haha, April fool, I don't really have the memory that I've promised you".
>That's really ridiculous. Name one program that takes advantage of the
>Linux-style memory allocation and that can run on other Unices.

Any program which allocates a large array and only uses scattered
parts of it (such as is often done in hash tables).
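(A toy example of that kind of program, assuming the lazy allocation works as described: a directly indexed table far bigger than RAM+swap of which only a handful of pages are ever written, so only those pages are ever committed. On a strict-accounting system the malloc() itself would fail.)

#include <stdio.h>
#include <stdlib.h>

#define SLOTS (64UL * 1024 * 1024)   /* 256 MB of ints -- far more than RAM+swap */

int main(void)
{
    unsigned long i;
    int *table = malloc(SLOTS * sizeof(int));

    if (table == NULL) {              /* a "fascist" malloc would fail here */
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    /* Touch only a few scattered slots; only those pages get committed. */
    for (i = 0; i < SLOTS; i += SLOTS / 8)
        table[i] = 42;

    printf("table[0] = %d\n", table[0]);
    free(table);
    return 0;
}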

IMHO, the whole question is silly anyway. Just make 32M of swap
(that's about $20 given today's disk prices), and your system
will swap itself into the ground well before you start getting
problems with allocated memory not really being there.

Ian A. McCloghrie

unread,
Mar 3, 1995, 7:46:33 PM3/3/95
to
dd...@pitt.edu (Doug DeJulio) writes:
>You can have limits for mem and stack that are below the total VM of
>your system (see sh's "ulimit" or csh's "limit"). You can set "mem"
>and "stack" limits for a process. Use them.

Ummm... limiting your program's stack size to 1M (say) doesn't help a
lot if there's another user running emacs and xv who just ate up
all of the ram except for 500K.

S. Lee

unread,
Mar 3, 1995, 9:30:45 PM3/3/95
to
In article <EWERLID.95...@frej.teknikum.uu.se>,
Ove Ewerlid <ewe...@frej.teknikum.uu.se> wrote:
>I've read this thread and I cannot see the problem wrt linux!
>If you attempt to allocate more then the available 'real' amount of
>memory malloc WILL return 0.

I have 16MB RAM+20MB Swap but I can malloc() two 20M arrays without
malloc() returning 0. Guess what would happen if I start filling them?

>If the system malloc (in libc) was changed to
>prefill the allocated memory with all zeroes, then no process would be able to
>"cheat" away pages. As the linux-libc is available in source, changing
>libc is trivial.

Filling the pages is inefficient. The kernel should be changed to reserve
pages for processes that called malloc(). Pages once reserved for a
process should not be assigned to another.

Stephen

Clark Cooper

unread,
Mar 3, 1995, 2:01:34 PM3/3/95
to

>There are two points which fascist-malloc proponents ought to
>consider very carefully:
> ...

>2. People running programs that need to store lots of data but not
>access it very often need virtual memory.

There seems to be a misunderstanding here that I've seen expressed in other
articles. Sbrk (which *is* a system call serviced by the kernel and upon
which malloc is built) returns its error value when *virtual* memory is
exhausted (also when limits are exceeded). It has nothing to do with
exhaustion of *physical* memory. Virtual memory wouldn't be very useful
if it did.

We malloc "fascists" simply want the kernel to tell us as early as possible
(when we request memory [which on a VM system is *virtual* memory]) that
it can't provide the resources.

We don't want to take away your virtual memory, honest.
--
--
Clark Cooper GE Industrial & Power Systems coop...@dsos00.sch.ge.com
(518) 385-8380 ASPL-2 Project, Bldg 273, Rm 3090
1 River Rd, Schenectady NY 12345

Ove Ewerlid

unread,
Mar 3, 1995, 8:38:36 PM3/3/95
to
In article <D4D59...@info.swan.ac.uk> iia...@iifeak.swan.ac.uk (Alan Cox) writes:

Alan Cox writes:
>In article <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI
>>There's more to writing good software than getting it through the
>>compiler. Error handling is one of them, and Linux makes it impossible
>>to handle low memory conditions properly. Score -1 big design misfeature
>>for Linus.
>
> Scientists like it that way, other people should read the limit/rusage
> man pages.

Seconded!

If I was writing an application were I needed to know, NOW,
if the memory allocated by malloc represented 'real' memory then
I'll add this to my malloc wrapper:

for (i = 0; i < size; i++)
        memory[i] = 0;

(BTW; My normal malloc wrapper checks if NULL is returned.)

Perhaps clearing the memory allocated by malloc is a good idea anyway
to avoid indeterministic behaviour (unless speed is critical).
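(A minimal sketch of such a wrapper: fail loudly if no address space is granted, then zero-fill so every page is committed immediately; writing one byte per page would be cheaper. As discussed later in the thread, if the commit fails during the fill the process still dies -- there is no error return to check at that point.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* malloc() wrapper along the lines described above: refuse to continue
   if no address space is granted, and zero-fill so every page is
   really committed right away. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);

    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory\n");
        exit(1);
    }
    memset(p, 0, size);
    return p;
}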

I've read this thread and I cannot see the problem wrt linux!
If you attempt to allocate more than the available 'real' amount of
memory malloc WILL return 0. If the system malloc (in libc) was changed to
prefill the allocated memory with all zeroes, then no process would be able to
"cheat" away pages. As the linux-libc is available in source, changing
libc is trivial.

In fact, if needed, the syscall interface to sbrk could be changed to
prefill with zeroes (e.g., in libc) should an application mess
with that directly.

To me, this problem seems to be a libc-problem.
The kernel is handling things as flexible as it can ...

Anyway, as stated in this thread, memory can run out due to
the stack and that is more tricky to detect/handle in a controlled
manner.

Cheers,
Ove

Ruurd Pels

unread,
Mar 2, 1995, 2:50:03 PM3/2/95
to
In article <3ivvvq$l...@tuba.cit.cornell.edu>, sl...@crux3.cit.cornell.edu (S. Lee) writes:

>>The problems of not handling a NULL return from malloc() are well
>>known. To have a malloc() that might fail in a way that doesn't
>>give the programmer any chance to recover is just mind-boggling.

>Agreed. This is bad behaviour. Is Linus aware of this? He doesn't seem
>to have said anything on this thread.

Well, modifying the kernel-part of memory allocation in order not to let it
do a 'lazy' allocation would probably make it significantly slower. That is
the downside of checking whether there is real memory available in the case
one might actually want to use the malloc()ed memory. However, it should be
possible to devise some in-between method, that is, let malloc() be lazy,
but, in the event that memory and swap are exhausted, create a swapfile on
the fly on a partition that has room enough. That should not be that
difficult to implement...

>P.S. Is this a kernel or glibc problem?

It's a kernel feature.
--
Grtz, RFP ;-)

|o| Ruurd Pels, Kamgras 187, 8935 EJ Leeuwarden, The Netherlands |o|
|o| GC2.1 GAT/!/?/CS/B -d+(---) H s !g p? a w--(+++) v--(+++) C++ UL+++ |o|
|o| P? L++ !3 E? N++ !V t+ !5 !j G? tv- b++ D B? u++(---) h-- f? y++++ |o|


Michael Shields

unread,
Mar 4, 1995, 4:37:49 PM3/4/95
to
In article <3j57hb$8...@foo.autpels.nl>,

Ruurd Pels <ru...@autpels.maxine.wlink.nl> wrote:
> Well, modifying the kernel-part of memory allocation in order not to let it
> do a 'lazy' allocation would probably make it significantly slower.

Why? It would just have to keep a count of total memory, raising it when
memory is freed or swap is added, and lowering it when memory is allocated
or swap removed. If the amount of memory you request is more than the
current count, the request fails. This seems like a trivial change.
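(An illustrative sketch of that counter, with invented names; real accounting would also have to cover fork(), stack growth and shared pages, which is what makes the problem harder than it looks -- see Linus' post further down.)

/* Hypothetical kernel-side accounting: one global count of pages that
   have not yet been promised to anyone.  Locking omitted; all names
   are invented for illustration. */
static long free_pages;          /* RAM + swap pages not yet promised */

int reserve_pages(long npages)
{
    if (npages > free_pages)
        return -1;               /* sbrk()/mmap() fails -> malloc() returns NULL */
    free_pages -= npages;
    return 0;
}

void release_pages(long npages)
{
    free_pages += npages;        /* on free(), exit(), swapon(), ... */
}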
--
Shields.

Michael Shields

unread,
Mar 4, 1995, 4:39:31 PM3/4/95
to
In article <D4wrK...@pe1chl.ampr.org>,
Rob Janssen <pe1...@wab-tis.rabobank.nl> wrote:
> Say, 500KB per process == 64MB of swap for the current kernel configuration.

Linux 1.1.95 raised the default NR_TASKS to 512.
--
Shields.

Michael Shields

unread,
Mar 4, 1995, 4:42:09 PM3/4/95
to
In article <D4wrE...@pe1chl.ampr.org>,
Rob Janssen <pe1...@wab-tis.rabobank.nl> wrote:
> Of course, what you can expect when this change is made: a lot of complaints
> saying "Linux is now using a lot more swap than it did before" and "why do
> I get 'cannot fork', 'cannot exec' and 'out of memory' messages while this
> system worked so beautifilly with last week's kernel".

Make it CONFIG_FASCIST_MALLOC, then.
--
Shields.

Mike Jagdis

unread,
Mar 5, 1995, 9:06:00 AM3/5/95
to
* In message <3ivttm$1...@nz12.rz.uni-karlsruhe.de>, Ivica Rogina said:

IR> This is not a matter of virtual memory. If I do a malloc(), I don't care
IR> what the size of the RAM or the swap space or the virtual memory is.
IR> Whatever it is, I want to be sure that I can use all the memory that was
IR> assigned to me without having to wait for the sysop to push in another
IR> couple-of-gigs-disc.

Then you *have* to dirty each page in the area you request yourself to
forcibly map them as individual, distinct pages.

What the less experienced application writers don't realise is that even
the kernel has no way of knowing just how much memory+swap is really usable
at any one time. Text regions may be paged from the executable file - they
may or may not require a physical memory page at any moment and *never*
require a swap page. Similarly the OS cannot know in advance which pages
will be shared and which will require a new page to be used, nor can it know
when a shared page will need to be split due to a copy on write.

The *only* way the OS could guarantee to have a page available for you is
to take the most pessimistic view and save a swap page for *every* possible
page used - i.e. every process requires text pages + data pages + shared
library pages of swap (shared libraries are shared in memory but require
distinct swap for each process). And then you have to figure out how to
handle stack allocations which can probably only be guaranteed by committing
plenty (a few meg? gig?) of pages...

Seriously, if your programmers cannot handle this they should be trained
or moved back to non-VM programming.

Mike

Arnt Gulbrandsen

unread,
Mar 3, 1995, 2:17:23 AM3/3/95
to
In article <bruce-25029...@17.205.4.52>,

Bruce Thompson <br...@newton.apple.com> wrote:
>Either way, I can't see that there's a _valid_ reason for keeping the
>behavior. I hate to beat a dead horse, but I have to. The job of the
>kernel is to manage the resources of the machine.

FYI, malloc isn't part of the kernel.

There are two points which fascist-malloc proponents ought to
consider very carefully:

1. Any process can run out of memory asynchronously, since it can
run out of stack memory, and since root can shrink the ulimit (I'm
not sure if this is implemented in linux yet).

Therefore, you can't _depend_ on catching all out-of-memory
situations by checking the return value of malloc, no matter how
malloc is implemented.

2. People running programs that need to store lots of data but not
access it very often need virtual memory.

Therefore, there's a good chance that your extremely robust program
would be paralysed by swapping long before a hypothetical "safe
malloc" detected out-of-VM.

--Arnt


Jussi Lahtinen

unread,
Mar 3, 1995, 2:14:16 AM3/3/95
to

>The problems of not handling a NULL return from malloc() are well
>known. To have a malloc() that might fail in a way that doesn't
>give the programmer any chance to recover is just mind-boggling.

Malloc-checking is not enough. If you alloc all memory and then call
a function which needs more stack space, your program will be dead and
there is no way to prevent it.

Jussi Lahtinen

Jim Balter

unread,
Mar 6, 1995, 6:54:11 AM3/6/95
to
In article <3jed9q$m...@due.unit.no>,
Arnt Gulbrandsen <agu...@nvg.unit.no> wrote:
>In article <D4s0E...@nntpa.cb.att.com>,

>Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>>In article <HARE.95Fe...@mathi.uni-heidelberg.de>,
>>Hannes Reinecke <ha...@mathi.uni-heidelberg.de> wrote:
>>>Hmm. Having read this, i wondered whether you have heard about virtual
>>>memory. _Every_ process has access to an so-called virtual memory
>>>segment, which has under linux(i386) the size of 3 GB
>>>(cf <asm/processor.h>).
>....

>>Hot damn! And I thought I was going to have to buy more memory for
>>my machine! So, let's see, I have 12Megs of RAM and 24Megs of swap
>>and they add up to 3GB of virtual memory? Where does the
>>difference come from? Microsoft?
>
>mmap() for instance. I routinely mmap() in 30MB files, and I think
>at least one rather common program (INN) mmap()'s in a far bigger
>file. Three such processes, and you'd have over 100MB of addressed
>memory on a 12+24MB machine.

mmapping doesn't count. The whole point is that the problem is with
virtual memory that is *not* mapped anywhere. If the total amount of
non-mapped memory exceeds the amount of potentially mappable memory,
either primary ("physical") memory or secondary ("swap") memory, then
the system is over-committed. You are then in the situation where
the processes may attempt to access so much memory that you run out
of total physical and swap space. What do you do then? You can either
send a signal to the process which kills it if it wasn't prepared for
it, or you can suspend the process. The former violates the ANSI C spec
for malloc (this has been discussed extensively in comp.std.c) and the
latter can lead to system deadlock. That's why systems designed to satisfy
ANSI/POSIX requirements keep count of the total amount of available
real memory, refuse allocation requests that exceed it, and decrement the
count when address space is allocated, even when no actual memory has
been committed. That's where the difference between fork and vfork comes
in; fork, even with copy-on-write, increases the potential demand for
memory and thus must check the count and decrement it, whereas vfork need
not.

>The lesson is: It isn't that simple. Linux is a complex, capable
>operating system, and simple assumptions about what it can and
>cannot do can be a long way from the truth.

Sweeping feel-good generalizations won't do. Linux violates POSIX by
violating the ANSI C spec upon which it is based. If Linux wants to
satisfy POSIX *and* provide a malloc that does not commit to providing
the memory when accessed, then it should provide another function or a
global switch or *something* to provide the distinction. But a system
that can randomly crash programs properly written to the POSIX spec
simply because they access malloc'ed memory (*any* access can do it;
you run your program that mallocs and accesses 1 byte while I'm
running my program that mallocs and accesses 30MB and *your* program
dies if I got the last byte ahead of you) is broken.
--
<J Q B>

Vivek Kalra

unread,
Mar 6, 1995, 3:19:40 PM3/6/95
to
In article <JEM.95Ma...@delta.hut.fi>,
Johan Myreen <j...@snakemail.hut.fi> wrote:

>In article <D4pvF...@nntpa.cb.att.com> v...@rhea.cnet.att.com (Vivek Kalra) writes:
>
>>Okay, let's see: I have a machine with 8M of RAM and 12M of swap.
>>At this given moment, I have, say, 8 of those megs available. So I
> ^^^^^^^^^^^^^^^^^^^^

>
>>run this super-duper image-processing program I have -- it checks
>>the current input size and determines that it needs 16M of memory
>>to do its thing on this input. So it malloc()s 16M and finds that
>>everything is fine and starts its thing, runs for three hours, and,
>>err, ooops, runs out of memory. Now, if malloc() had failed
>>earlier, I wouldn't have had to wait for three hours to find that
>>out, would I?
>
>I agree that this situation is not so good. But if you think of the
>other side of this, what if you had started your program and it would
>had refused to do anything, because it would have needed 16 Mbytes
>three hours later, and the memory *had* been available at that time?
>
But what if it was *not* three hours but three seconds? The point
is that ANSI/ISO *require* malloc() to return NULL if the asked for
memory cannot be allocated so the programmer can take appropriate
action. A program, for example, should be able to remove a file if
malloc() does not return NULL and the program is in the process of
recreating that file. To have a program simply fail (simply? Just
*how* does it fail? Geez, and I thought I was not using
MSWindoze...) after its destructive behaviour is simply not
acceptable. Would you like my bank-account program to fail without
updating your bank account when you deposit a check -- and not know
that it had?

bankaccount()
{
    t_record *new_record;

    if (new_record = malloc(sizeof (t_record)))
    {
        UpdateBankAccount (from_bank_account, IS_A_WITHDRAWAL, amount);

        /* do something */

        InitBankRecord (new_record, amount);

        /* do something else */

        UpdateBankAccount (to_bank_account, IS_A_DEPOSIT, amount);
    }
    else
        FatalError ("Yo! Get more memory!");
}

This brain-numb-if-not-dead function could bomb after withdrawing
the money from the source account but before depositing it in the
destination account because malloc() didn't return a NULL and yet
InitBankRecord() caused the program to fail. As far as I know, it
shouldn't fail simply because InitBankRecord() tries to write to
new_record -- not as far as ANSI/ISO are concerned.

>Let's compare memory usage to disk usage: cp does not (as far as I
>know) check in advance if there is enough disk space available when
>copying a file. That would make no sense, because the outcome would be
>totally worthless. Some other process could fill up the disk during
>the copy or it could free enough space to make the copy succeed, even
>if it looked impossible from the start.
>
A more appropriate example is probably mv: cp at least does not
destroy the original. Just what happens if you try to move a file
from one file-system to another and this other fs doesn't have
enough space for the file?

I don't know what POSIX says mv should do under these
circumstances; I do know what ANSI/ISO say about
malloc()/calloc()/realloc():

ANSI section 4.10.3 Memory Management Functions

... The pointer returned points to the start (lowest byte
address) of the allocated space. If the space cannot be
allocated, a null pointer is returned. ...

>To be safe, every process should have a "maximum file write size"
>attribute, and the kernel should refuse to start a process if the
>available space on any of the accessible file systems was less than
>the attribute.
>
There *is* something called ulimit in this universe...

Arnt Gulbrandsen

unread,
Mar 6, 1995, 2:23:38 AM3/6/95
to
In article <D4s0E...@nntpa.cb.att.com>,
Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>In article <HARE.95Fe...@mathi.uni-heidelberg.de>,
>Hannes Reinecke <ha...@mathi.uni-heidelberg.de> wrote:
>>Hmm. Having read this, i wondered whether you have heard about virtual
>>memory. _Every_ process has access to an so-called virtual memory
>>segment, which has under linux(i386) the size of 3 GB
>>(cf <asm/processor.h>).
....

>Hot damn! And I thought I was going to have to buy more memory for
>my machine! So, let's see, I have 12Megs of RAM and 24Megs of swap
>and they add up to 3GB of virtual memory? Where does the
>difference come from? Microsoft?

mmap() for instance. I routinely mmap() in 30MB files, and I think
at least one rather common program (INN) mmap()'s in a far bigger
file. Three such processes, and you'd have over 100MB of addressed
memory on a 12+24MB machine.

The lesson is: It isn't that simple. Linux is a complex, capable
operating system, and simple assumptions about what it can and
cannot do can be a long way from the truth.

--Arnt

Arnt Gulbrandsen

unread,
Mar 7, 1995, 1:06:16 AM3/7/95
to
In article <COOPERCL.9...@dso052.sch.ge.com>,

Clark Cooper <coop...@dso052.sch.ge.com> wrote:
>There seems to be a misunderstanding here that I've seen expressed in other
>articles. Sbrk (which *is* a system call serviced by the kernel and upon
>which malloc is built) returns its error value when *virtual* memory is
>exhausted (also when limits are exceeded).

It's not a misunderstanding, many modern malloc implementations are
built on top of mmap() rather than sbrk().

But you seem to ignore my point: No matter what you do to sbrk() or
malloc() it's possible to run out of memory asynchronously,
therefore it's better to simplify the model and ONLY run out of
memory asynchronously.

--Arnt

Fergus Henderson

unread,
Mar 6, 1995, 8:26:51 AM3/6/95
to
jmal...@cs.Helsinki.FI (Jussi Lahtinen) writes:

It's possible to statically determine your stack requirements
and to allocate the necessary amount of stack space in advance.

It's also possible to catch the segmentation violation when you
get a stack overflow and longjmp() out.
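(A sketch of the first idea, assuming the worst-case stack depth is known: touch that much stack once at start-up, so the pages are committed -- or the program dies -- before any real work has been done. The 256kB figure is made up.)

#include <stddef.h>

#define STACK_RESERVE (256 * 1024)   /* assumed worst-case stack need */

/* Force the kernel to grow and commit the stack now rather than at
   some arbitrary point deep inside the computation. */
static void reserve_stack(void)
{
    volatile char pad[STACK_RESERVE];
    size_t i;

    for (i = 0; i < sizeof pad; i += 4096)
        pad[i] = 0;                  /* one write per page is enough */
}

int main(void)
{
    reserve_stack();
    /* ... real work, with STACK_RESERVE bytes of stack already committed ... */
    return 0;
}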

--
Fergus Henderson - f...@munta.cs.mu.oz.au

S. Lee

unread,
Mar 6, 1995, 12:02:54 PM3/6/95
to
In article <820.2F...@purplet.demon.co.uk>,

Mike Jagdis <ja...@purplet.demon.co.uk> wrote:
>
>Then you *have* to dirty each page in the area you request yourself to
>forcibly map them as individual, distinct pages.

[...]


>
> Seriously, if your programmers cannot handle this they should be trained
>or moved back to non-VM programming.

My test program dies if it runs out of memory while dirtying the pages.
How do you suggest I should handle this?


Ivica Rogina

unread,
Mar 6, 1995, 12:34:32 PM3/6/95
to

ia...@qualcomm.com (Ian A. McCloghrie) wrote:

> >to tell me it's out-of-VM I want it to tell me that it's out of available
> >memory (RAM+swap).
>
> What do you think RAM+swap *is* if not VM?

VM is 3 Gig per process. RAM+swap is physical memory.
Anyway, I'm still waiting for a sample program that benefits
from the ill-designed Linux memory allocation and that also
runs on, say, HP-UX (of course without #ifdef LINUX / #ifdef HP-UX
and then doing two different things).
I dare you to produce such a program.

-- Ivica

Vivek Kalra

unread,
Mar 6, 1995, 2:44:54 PM3/6/95
to
In article <3jed9q$m...@due.unit.no>,
Arnt Gulbrandsen <agu...@nvg.unit.no> wrote:
>In article <D4s0E...@nntpa.cb.att.com>,
>Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>>In article <HARE.95Fe...@mathi.uni-heidelberg.de>,
>>Hannes Reinecke <ha...@mathi.uni-heidelberg.de> wrote:
>>>Hmm. Having read this, i wondered whether you have heard about virtual
>>>memory. _Every_ process has access to an so-called virtual memory
>>>segment, which has under linux(i386) the size of 3 GB
>>>(cf <asm/processor.h>).
>....
>>Hot damn! And I thought I was going to have to buy more memory for
>>my machine! So, let's see, I have 12Megs of RAM and 24Megs of swap
>>and they add up to 3GB of virtual memory? Where does the
>>difference come from? Microsoft?
>
>mmap() for instance. I routinely mmap() in 30MB files, and I think
>at least one rather common program (INN) mmap()'s in a far bigger
>file. Three such processes, and you'd have over 100MB of addressed
>memory on a 12+24MB machine.
>
Oh, Please. mmap() has nothing to do with what we're discussing.
I can have a simple program that malloc()s 6 bytes, and dumps core
when copying "hello" into it because malloc() didn't return NULL
even though the system was out of VM at the time because of
whatever other stuff was running on the system. What it boils down
to is that the ANSI/ISO spec very clearly specifies malloc()'s
behaviour and Linux seems to violate it.

>The lesson is: It isn't that simple. Linux is a complex, capable
>operating system, and simple assumptions about what it can and
>cannot do can be a long way from the truth.
>

Unfortunately, it *is* that simple: ANSI/ISO *requires* malloc() to
say what it means.

Zsoter Andras

unread,
Mar 6, 1995, 10:29:53 PM3/6/95
to
Ian A. McCloghrie (ia...@qualcomm.com) wrote:
>
> >"haha, April fool, I don't really have the memory that I've promised you".
> >That's really ridiculous. Name one program that takes advantage of the
> >Linux-style memory allocation and that can run on other Unices.
>
> Any program which allocates a large array and only uses scattered
> parts of it (such as is often done in hash tables).

I do not understand this whole thread. I am not very familiar with C
so I do not know if there is any specification that requires malloc() to
behave in the way claimed by many programmers here.
Or is it allowed to behave as it behaves under Linux (for me that is the
natural way but I might be a pervert)?
If the latter is true, is there any reliable way to catch the "Out of memory"
situations inside an application?

Andras

Arnt Gulbrandsen

unread,
Mar 7, 1995, 4:26:07 PM3/7/95
to
In article <D51Au...@nntpa.cb.att.com>,

Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>I can have a simple program that malloc()s 6 bytes, and dumps core
>when copying "hello" into it because malloc() didn't return NULL
>even though the system was out of VM at the time because of
>whatever other stuff was running on the system.

Well, when there aren't even 6 bytes free, it's not unlikely that the
process might also dump core because it couldn't build a stack frame
to call strcpy. So there we are again: You can't eliminate
asynchronous out-of-memory situations.

Or if your process is the one that uses all that memory, root might
find out and rlimit your process down so the other users can get
anything done: BANG says your process, asynchronously.

> What it boils down
>to is that the ANSI/ISO spec very clearly specifies malloc()'s
>behaviour and Linux seems to violate it.

Quote chapter and verse, will you?

>Unfortunately, it *is* that simple: ANSI/ISO *requires* malloc() to
>say what it means.

WHAT exactly does ANSI/ISO say?

--Arnt

Mike Jagdis

unread,
Mar 7, 1995, 3:03:00 PM3/7/95
to
* In message <3jff7u$m...@tuba.cit.cornell.edu>, S. Lee said:

SL> >Then you *have* to dirty each page in the area you request yourself to
SL> >forcibly map them as individual, distinct pages.

SL> [...]
SL> >
SL> > Seriously, if your programmers cannot handle this they should be
SL> > trained or moved back to non-VM programming.

SL> My test program dies if run out of memory while dirtying the
SL> pages. How do you suggest I should handle this?

Use a fault handler. If you need to guarantee the existence of those pages
you have no choice.
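(Roughly what such a fault handler could look like, under the big assumption -- disputed elsewhere in this thread -- that the kernel delivers a catchable SIGSEGV when a page cannot be committed rather than killing or wedging the process.)

#include <setjmp.h>
#include <signal.h>
#include <stddef.h>

static sigjmp_buf oom_jump;

static void segv_handler(int sig)
{
    (void)sig;
    siglongjmp(oom_jump, 1);
}

/* Dirty every page of a freshly malloc()ed block inside a fault handler.
   Returns 0 if all pages could really be committed, -1 otherwise. */
static int commit_pages(char *p, size_t size)
{
    struct sigaction sa, old;
    volatile char *q = p;
    size_t i;
    int rc = 0;

    sa.sa_handler = segv_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGSEGV, &sa, &old);

    if (sigsetjmp(oom_jump, 1) == 0) {
        for (i = 0; i < size; i += 4096)
            q[i] = 0;                /* faults here if the page isn't really there */
    } else {
        rc = -1;                     /* the promised memory never existed */
    }

    sigaction(SIGSEGV, &old, NULL);
    return rc;
}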

Mike

Mike Jagdis

unread,
Mar 8, 1995, 3:25:00 PM3/8/95
to
* In message <D51CG...@nntpa.cb.att.com>, Vivek Kalra said:

VK> The point is that ANSI/ISO *require* malloc() to return NULL if the
VK> asked for memory cannot be allocated so the programmer can take
VK> appropriate action.

The confusion is over the word "allocate". Malloc allocates a region of the
process memory space suitable for an object of the stated size but the OS
does not necessarily commit memory to that space until you dirty it.

If you are to have the OS avoid the seg fault trap for you then it *has*
to reserve a physical page for *every* possible page of every process - it
has a fence that specifies data size but there is no way for the OS to know
current stack requirements (unless it is Xenix 286 with stack probes enabled
:-) ).

Anything less simply sweeps the problem under the carpet. If it is
*really* a problem for you (i.e. your system is overloaded) then you are
*still* going to get shafted!

VK> [...]
VK> This brain-numb-if-not-dead function could bomb after withdrawing
VK> the money from the source account but before depositing it the
VK> destination account because malloc() didn't return a NULL
VK> and yet InitBankRecord() caused the program to fail.

This is just poorly designed code. Page allocation is just one of many ways
that the program could stop unexpectedly at any point for no clear reason.
If that is a problem you have to design around it.

Mike

Vivek Kalra

unread,
Mar 8, 1995, 4:12:49 PM3/8/95
to
In article <3jij1f$g...@due.unit.no>,

Arnt Gulbrandsen <agu...@nvg.unit.no> wrote:
>In article <D51Au...@nntpa.cb.att.com>,
>Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>
>> What it boils down
>>to is that the ANSI/ISO spec very clearly specifies malloc()'s
>>behaviour and Linux seems to violate it.
>
>Quote chapter and verse, will you?
>
Yes, Sir!

>>Unfortunately, it *is* that simple: ANSI/ISO *requires* malloc() to
>>say what it means.
>
>WHAT exactly does ANSI/ISO say?
>

ANSI:

section 4.10.3 Memory Management Functions

The order and contiguity of storage allocated by successive
calls to the calloc(), malloc(), and realloc() functions is
unspecified. ... Each such allocation shall yield a pointer to
an object disjoint from any other object. The pointer returned
points to the start (lowest byte address) of the allocated
space. If this space cannot be allocated, a null pointer is
returned. ...

S. Lee

unread,
Mar 8, 1995, 10:39:09 PM3/8/95
to
In article <823.2F...@purplet.demon.co.uk>,

We seem to fail to come to an agreement here.

My concept of malloc() is that once a process calls it, if the OS (in the
sense of the OS + library) returns a non-NULL pointer, the OS would have
committed a block of memory to the process, no matter whether it is dirtied or
not. It is the OS's job to make sure it has this memory available, not
the user's. You are telling me my program has to do the job of the OS.
It is like telling me 'if I want my program to access this device which is
not supported by the OS I should put the code into my program', which is
not right because it is the OS's job to support hardware devices. I
can write a driver which runs in kernel space that interfaces with the
device, then have my program call the driver, but it is not correct that I
have to put the hardware access code in my (user space) code.

Now, some people say a program can still run out of stack space, but I am
not aware of the standard saying (which doesn't mean that it doesn't say,
just that I don't know about it) that the OS/library has to provide a
process with X amount of stack space, so I do feel it is reasonable (but
not a good idea) for the OS to let a program to run out of stack, because
the OS didn't make the commitment of making stack space available.

I'm not criticizing Linux. I like it very much. I'm just saying this is a
Good Thing for Linux to do. Plus it is the POSIX (actually ANSI/ISO)
behaviour, and isn't Linux trying to be a POSIX OS?

I should probably shut up before somebody tells me to fix the kernel ;)
Not that I wouldn't do it, if only I had the knowledge...

S. Joel Katz

unread,
Mar 9, 1995, 5:13:56 AM3/9/95
to

>In article <3jij1f$g...@due.unit.no>,
>Arnt Gulbrandsen <agu...@nvg.unit.no> wrote:
>>In article <D51Au...@nntpa.cb.att.com>,
>>Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>>
>>> What it boils down
>>>to is that the ANSI/ISO spec very clearly specifies malloc()'s
>>>behaviour and Linux seems to violate it.
>>
>>Quote chapter and verse, will you?
>>
>Yes, Sir!

>>>Unfortunately, it *is* that simple: ANSI/ISO *requires* malloc() to
>>>say what it means.
>>
>>WHAT exactly does ANSI/ISO say?
>>
>ANSI:

>section 4.10.3 Memory Management Functions
> The order and contiguity of storage allocated by successive
> calls to the calloc(), malloc(), and realloc() functions is
> unspecified. ... Each such allocation shall yield a pointer to
> an object disjoint from any other object. The pointer returned
> points to the start (lowest byte address) of the allocated
> space. If this space cannot be allocated, a null pointer is
> returned. ...

But the whole point is that the Linux malloc _can_ allocate
memory in that process' virtual space even if there is not enough system
physical or virtual memory to allow it to be dirtied at that instant.

--

S. Joel Katz Information on Objectivism, Linux, 8031s, and atheism
Stim...@Panix.COM is available at http://www.panix.com/~stimpson/

S. Joel Katz

unread,
Mar 9, 1995, 5:15:52 AM3/9/95
to
In <ALBERT.95...@snowdon.ccs.neu.edu> alb...@snowdon.ccs.neu.edu (Albert Cahalan) writes:

>>>>>> "A" == Arnt Gulbrandsen <agu...@nvg.unit.no> writes:

>A> In article <D51Au...@nntpa.cb.att.com>, Vivek Kalra
>A> <v...@rhea.cnet.att.com> wrote:
>>> I can have a simple program that malloc()s 6 bytes, and dumps core when
>>> copying "hello" into it because malloc() didn't return NULL even though the
>>> system was out of VM at the time because of whatever other stuff was
>>> running on the system.

>A> Well, when there aren't even 6 bytes free, it's not unlikely that the
>A> process might also dump core because it couldn't build a stack frame to
>A> call strcpy. So there we are again: You can't eliminate asynchronous
>A> out-of-memory situations.

>Add a system call? Maybe call it allocstack(); it must run before
>anything important.

>A> Or if your process is the one that uses all that memory, root might find
>A> out and rlimit your process down so the other users can get anything done:
>A> BANG says your process, asynchronously.

>What if root can't even log in? There should be a user priority.
>When there is a shortage, instead of killing the process that needs
>a page, kill processes in a specific order:

>1. Normal processes die before root processes
>2. Users using more than x pages of memory get processes killed
>3. Remote logins are logged out; processes are killed
>4. Restartable servers are first root processes to get killed
>5. A small list of vital programs are kept, else panic

You can do this now. Just write a daemon that monitors virtual
memory and when it gets low, follows your priority scheme. I admit, it
would be nice to have some kernel support, however.
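(A toy sketch of such a daemon; it assumes a /proc/meminfo with MemFree:/SwapFree: lines in kB, which is not the format of 1995-era kernels, and the actual kill-by-priority policy is left as a stub.)

#include <stdio.h>
#include <unistd.h>

/* Toy memory watchdog: wake up now and then, read /proc/meminfo,
   and act when free RAM+swap drops below a threshold. */
#define THRESHOLD_KB (4 * 1024)

static long read_free_kb(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    long memfree = 0, swapfree = 0;

    if (f == NULL)
        return -1;
    while (fgets(line, sizeof line, f)) {
        sscanf(line, "MemFree: %ld", &memfree);
        sscanf(line, "SwapFree: %ld", &swapfree);
    }
    fclose(f);
    return memfree + swapfree;
}

int main(void)
{
    for (;;) {
        long free_kb = read_free_kb();
        if (free_kb >= 0 && free_kb < THRESHOLD_KB) {
            /* here: pick victims according to the priority scheme above */
            fprintf(stderr, "low memory: %ld kB free\n", free_kb);
        }
        sleep(5);
    }
}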

Bodo Moeller

unread,
Mar 8, 1995, 7:03:54 AM3/8/95
to
ru...@autpels.maxine.wlink.nl (Ruurd Pels) writes:
>sl...@crux3.cit.cornell.edu (S. Lee) writes:

>>>The problems of not handling a NULL return from malloc() are well
>>>known. To have a malloc() that might fail in a way that doesn't
>>>give the programmer any chance to recover is just mind-boggling.

>>Agreed. This is bad behaviour. Is Linus aware of this? He doesn't seem
>>to have said anything on this thread.

>Well, modifying the kernel-part of memory allocation in order not to let it
>do a 'lazy' allocation would probably make it significantly slower.

Why do you think so? Wouldn't it be enough to have a counter for
available pages of memory (RAM and swap)? When a new program/process
is started (exec or fork) or allocates more memory, the kernel would
have to decrement this counter by the number of data pages, but the
kernel would not have to actually decide where to put these data
pages. Of course, there would also have to be reserved memory for
(read-only) text pages. But text pages are different from data pages
(here "data pages" means all pages that might be changed; i.e. data
segments from executables, heap and stack): A shortage of text pages
will make the computer very slow, but the operating system does not
have to kill processes.

Of course, this approach does not solve all problems. Every program
that relies on stack memory being available when needed is still in
danger; but in critical cases, the programmers can try to avoid
problems (i.e. reserve stack space in advance and don't call functions
that might use too much of it).

Albert Cahalan

unread,
Mar 9, 1995, 1:53:29 AM3/9/95
to
>>>>> "A" == Arnt Gulbrandsen <agu...@nvg.unit.no> writes:

A> In article <D51Au...@nntpa.cb.att.com>, Vivek Kalra
A> <v...@rhea.cnet.att.com> wrote:
>> I can have a simple program that malloc()s 6 bytes, and dumps core when
>> copying "hello" into it because malloc() didn't return NULL even though the
>> system was out of VM at the time because of whatever other stuff was
>> running on the system.

A> Well, when there aren't even 6 bytes free, it's not unlikely that the
A> process might also dump core because it couldn't build a stack frame to
A> call strcpy. So there we are again: You can't eliminate asynchronous
A> out-of-memory situations.

Add a system call? Maybe call it allocstack(); it must run before
anything important.

A> Or if your process is the one that uses all that memory, root might find
A> out and rlimit your process down so the other users can get anything done:
A> BANG says your process, asynchronously.

What if root can't even log in? There should be a user priority.
When there is a shortage, instead of killing the process that needs
a page, kill processes in a specific order:

1. Normal processes die before root processes
2. Users using more than x pages of memory get processes killed
3. Remote logins are logged out; processes are killed
4. Restartable servers are first root processes to get killed
5. A small list of vital programs are kept, else panic

--

Albert Cahalan
alb...@ccs.neu.edu

Darin Johnson

unread,
Mar 9, 1995, 1:57:09 PM3/9/95
to
> I'm not critizing Linux. I like it very much. I'm just saying this is a
> Good Thing for Linux to do. Plus it is the POSIX (actually ANSI/ISO)
> behaviour, and isn't Linux trying to be a POSIX OS?

Hmm, I can't really tell if it's POSIX or not (I don't have an
ANSI C manual, but then, malloc is a UNIX library call originally,
and thus isn't under the authority of ANSI C). The POSIX manual
I have just says malloc returns NULL if the memory is not available,
but it doesn't say if this is physical+swap memory or just virtual
memory.

Plus, malloc works by calling brk usually, and this just increases the
size of the data segment (by just modifying the number that says what
the end is). Now brk can't go and verify that this new memory
actually exists, because there will be real applications that depend
upon a very large virtual address space, but won't necessarily use
that much swap (ie, a lisp process that grabs all of virtual memory
then manages it itself).

So to change this, malloc needs to be changed to verify the memory
after brk is called. It's a malloc problem, and not an OS problem.
Thus, complain to GNU, not Linux.
--
Darin Johnson
djoh...@ucsd.edu
"You used to be big."
"I am big. It's the pictures that got small."

Arnt Gulbrandsen

unread,
Mar 8, 1995, 11:41:46 PM3/8/95
to
In article <D5549...@nntpa.cb.att.com>,

Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>ANSI:
>
>section 4.10.3 Memory Management Functions
> The order and contiguity of storage allocated by successive
> calls to the calloc(), malloc(), and realloc() functions is
> unspecified. ... Each such allocation shall yield a pointer to
> an object disjoint from any other object. The pointer returned
> points to the start (lowest byte address) of the allocated
> space. If this space cannot be allocated, a null pointer is
> returned. ...

Subject to the parts you've omitted, that paragraph allows linux'
current behaviour: "space" isn't defined. If space isn't defined or
is explicitly defined as address space, linux gets it right. If
space is explicitly defined as current ram+swap that hasn't been
promised to this or any other process, linux gets it wrong.
(Current, because it's possible to swapon another partition or file
when swap gets tight.)

--Arnt

Arnt Gulbrandsen

unread,
Mar 8, 1995, 11:46:10 PM3/8/95
to
In article <3j57hb$8...@foo.autpels.nl>,
Ruurd Pels <ru...@autpels.maxine.wlink.nl> wrote:
> However, it should be
>possible to devise some in-between method, that is, let malloc() be lazy,
>but, in the event that memory and swap are exhausted, create a swapfile on
>the fly on a partition that has room enough. That should not be that diffi-
>cult to implement...

Not so difficult, but it would have the disadvantage that a fork
bomb or similar eat-all-ram bug would also kill the file system with
the new swap file. This was thrashed out in a long flamewar two and
a half years ago.

The practical lesson is simple: Allocate enough swap that the
machine will crawl before it exhausts swap.

--Arnt

Linus Torvalds

unread,
Mar 8, 1995, 4:41:23 AM3/8/95
to
In article <3itc77$9...@ninurta.fer.uni-lj.si>,
Damjan Lango <lan...@ana.fer.uni-lj.si> wrote:
>
>Absolutely agree!
>And I can't understand how this malloc bug came so far up to 1.1.x
>It *must* be fixed before 1.2!!!

Too late...

Anyway, it's not a simple matter of just checking how much free memory
there is: people seem to be completely unaware of how hard a problem
this actually is.

Please, read any book about resource dead-locks etc, and you'll find
that these dead-locks *can* be resolved, but at the cost of

- efficiency (to be sure you can handle any dead-lock, you'll have to
do a *lot* of work).
- usability (to make sure you never get any dead-lock, you have to say
no to somebody, and you'll have to say "no" a *lot* earlier than most
people seem to think).

In the case of the memory handling, actually counting pages isn't that
much of an overhead (we just have one resource, and one page is as good
as any and they don't much depend on each other, so the setup is
reasonably simple), but the usability factor is major.

As it stands, you can add these lines to your /etc/profile:

ulimit -d 8192
ulimit -s 2048

and it will limit your processes to 8MB of data space, and 2MB of stack.

And no, it doesn't guarantee anything at all, but hey, your malloc()
will return NULL.

Personally, I consider the linux mm handling a feature, not a bug (hmm..
I wrote it, so that's not too surprising). People can whine all they
want, but please at least send out patches to fix it at the same time.
You'll find that some people definitely do *not* want to use your
patches.

Handling malloc() together with fork() makes for problems, adding the
problem of the stack space makes it worse, and adding the complexity of
private file mappings doesn't help. Before complaining, *please* think
about at least the following example scenarios (and you're allowed to
think up more of your own):

1) a database process maps in a database privately into memory. The
database is 32MB in size, but you only have 16MB free memory/swap.
Do you accept the mmap()?

- The database program probably doesn't re-write the database in memory:
it may change a few records in-core, but the number of pages it needs
might be less than 10 (the pages it doesn't modify don't count as
usage, as we can always throw them out when we want the memory back).

- on the other hand, how does the kernel *know*? It might be a program
that just mmap()'s something and then starts writing to all the
pages.

2) GNU emacs (ugh) wants to start up a shell script. In the meantime,
GNU emacs has (as it's wont to do) grown to 17 MB, and you obviously
don't have much memory left. Do you accept the fork?

- emacs will happily do an exec afterwards, and will actually use only
10 pages before that in the child process (stack, mainly). Sure, let
it fork().

- How is the kernel supposed to know that it will fork? No way can it
fork, as we don't have the potential 17MB of memory that now gets
doubled.

- vfork() isn't an option. Trust me on this one. vfork is *ugly*.
Besides, we might actually want to run the same process concurrently.

3) you have a nice quiescent little program that uses about 100kB of
memory, and has been a good little boy for the last 5 minutes. Now
it obviously wants to do something, so it forks 10 times. Do we
accept it?

- hell yes, we have 10MB free, and 10 forks of this program only uses
about a megabyte of that. Go ahead.

- hell, no: what if this nice little program just tried to make us feel
secure, and after the forks turns into the Program From Hell (tm)? It
might get into a recursive loop, and start eating up stack space.
Wheee.. Our 10MB of memory are gone in 5 seconds flat, and the OS is
left stranded wondering what the hell hit it.

4) You have a nice little 4MB machine, no swap, and you don't run X.
Most programs use shared libraries, and everybody is happy. You
don't use GNU emacs, you use "ed", and you have your own trusted
small-C compiler that works well. Does the system accept this?

- why, of course it does. It's a tight setup, but there's no panic.

- NO, DEFINITELY NOT. Each shared library in place actually takes up
600kB+ of virtual memory, and the system doesn't *know* that nothing
starts using these pages in all the processes alive. Now, with just
10 processes (a small make, and all the daemons), the kernel is
actually juggling more than 6MB of virtual memory in the shared
libraries alone, although only a fraction of that is actually in use
at that time.
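
For reference, scenario 2 above is just the classic fork()/exec() idiom; a
minimal sketch (the shell command is only a placeholder), where the child
writes to a handful of pages before the exec replaces the whole 17MB image:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();     /* the parent may be huge; the child shares it copy-on-write */

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: touches almost no pages before the exec discards the image */
        execl("/bin/sh", "sh", "-c", "echo hello from the shell script", (char *) NULL);
        _exit(127);         /* only reached if the exec itself failed */
    }
    waitpid(pid, NULL, 0);  /* parent: wait for the script to finish */
    return 0;
}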

It's easy to make malloc() return NULL under DOS: you just see if you
have any of the 640kB free, and if you have, it's ok.

It's easy to make malloc() return NULL under Windows: there is no fork()
system call, and nobody expects the machine to stay up anyway, so who
cares? When you say "I wrote a program that crashed Windows", people
just stare at you blankly and say "Hey, I got those with the system,
*for free*".

It's also easy to make malloc() return NULL under some trusted large
UNIX server: people running those are /expected/ to have an absolute
minimum of 256MB of RAM, and double that of swap, so we really can make
sure that any emacs that wants to fork() must have the memory available
(if you're so short of memory that 17MB is even close to tight, it's ok
to say that emacs can't fork).

It's *not* easy to say no to malloc() when you have 8-32MB of memory,
and about as much swap-space, and fork/mmap/etc works. You can do it,
sure, but you'd prefer a system that doesn't.

Linus

Stephen J Bevan

unread,
Mar 10, 1995, 5:35:06 AM3/10/95
to
In article <3jm0ua$t...@due.unit.no> agu...@nvg.unit.no (Arnt Gulbrandsen) writes:
... Subject to the parts you've omitted, that paragraph allows linux'
current behaviour: "space" isn't defined. If space isn't defined or
is explicitly defined as address space, linux gets it right.

IMHO Linux gets it wrong, but rather than listen to unsubstantiated
opinions, how about clearly describing the current situation with respect to
Linux malloc and sending it to comp.std.c so the various language
lawyers there can fight over it? If the result isn't clear then it is
possible that one of said lawyers will submit an official request for
clarification to settle the problem.

Andreas Schwab

unread,
Mar 10, 1995, 8:48:26 AM3/10/95
to

I think comp.std.c is the wrong group, since ANSI C does not know
anything about ram and swap, it only talks about a virtual machine
that may have a totally different view of storage or space. It's
rather a POSIX question.
--
Andreas Schwab "And now for something
sch...@issan.informatik.uni-dortmund.de completely different"

Andreas Schwab

unread,
Mar 10, 1995, 9:09:15 AM3/10/95
to
In article <ALBERT.95M...@snowdon.ccs.neu.edu>, alb...@snowdon.ccs.neu.edu (Albert Cahalan) writes:

|>>>>>> "LT" == Linus Torvalds <torv...@cc.Helsinki.FI> writes:
|LT> 2) GNU emacs (ugh) wants to start up a shell script. In the meantime, GNU
|LT> emacs has (as it's wont to do) grown to 17 MB, and you obviously don't
|LT> have much memory left. Do you accept the fork?

|LT> - emacs will happily do an exec afterwards, and will actually use only 10
|LT> pages before that in the child process (stack, mainly). Sure, let it
|LT> fork().

|LT> - How is the kernel supposed to know that it will fork? No way can it
|LT> fork, as we don't have the potential 17MB of memory that now gets doubled.

|> Why must fork() and exec() be used? It would be better to have a spawn()
|> that would produce the same result by a different method.

With spawn you cannot change the environment of the child without
affecting the parent, unless you save and restore it around the call.
For example, redirections: you have to save the original meaning of
stdin and stdout before redirecting it for the child. It can be done,
but you have to rewrite every program that forks a filter, at least.
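
A minimal sketch of the point (the file and command names are only
placeholders): with fork() the child rearranges its own descriptors before
the exec, and the parent never notices:

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* child only: send stdout to a file, then run the filter */
        int fd = open("filtered.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            _exit(127);
        dup2(fd, STDOUT_FILENO);
        close(fd);
        execlp("sort", "sort", "input.txt", (char *) NULL);
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    printf("the parent's stdout is still the terminal\n");  /* unaffected by the child */
    return 0;
}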

MAEDA Atusi

unread,
Mar 10, 1995, 12:20:48 PM3/10/95
to
In article <3jlt8t$e...@tuba.cit.cornell.edu> sl...@crux4.cit.cornell.edu (S. Lee) writes:

>My concept of malloc() is that once a process calls it, if the OS (in the
>sense of the OS + library) returns a non-NULL pointer, the OS would have
>committed a block of memory to the process, no matter whether it is dirtied
>or not.

This is surely a weak point of Linux. And the problem is not limited
to malloc() or the stack.

Even if you zero out all of the pages returned by malloc(), your
program can still fail (e.g. after fork()). Or even if you don't use malloc()
at all, your program may fail at any write operation to memory, if it
is the first write to that page. Resource limits don't help at all,
because the memory shortage may be caused by other processes.
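
The usual workaround is to dirty every page by hand as soon as malloc()
returns, so that any failure happens immediately rather than at some random
later write; as noted above, on an overcommitting kernel that failure is
still a kill rather than a tidy NULL, and it does nothing for a later
fork(). A minimal sketch:

#include <stdlib.h>
#include <unistd.h>

/* Allocate 'size' bytes and write one byte into every page so the kernel
   has to back the whole region right away.  On an overcommitting kernel a
   shortage still shows up as the process being killed, not as NULL here. */
void *malloc_touched(size_t size)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    size_t off;
    char *p = malloc(size);

    if (p == NULL)
        return NULL;
    for (off = 0; off < size; off += (size_t) pagesize)
        p[off] = 0;     /* force a real page behind this address */
    return p;
}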

The problem is tightly coupled with many optimizations in the Linux kernel
(such as lazy allocation of pages and sharing user pages with buffers).

Linux uses memory very efficiently, but on the other hand there is no
guarantee of safe execution of programs because it's too optimistic in
allocating memory. Important system processes can die unexpectedly on
write to not-yet-written page. (In reality, Linux doesn't kill
processes so easily. It tries very hard finding free page and the
system almost locks up).

We could get a more robust system if Linux stopped sharing pages and
assigned a backing store block to every page (a la classic BSD systems).
But such an allocation scheme is too pessimistic and unacceptably
inefficient.

Personally I prefer efficiency over robustness when I use Linux at
home. But I wish someday Linux can get both of them.

`Reserving' pages at fork/malloc, as Bodo Moeller mentioned in
<3jk6fa$f...@rzsun02.rrz.uni-hamburg.de>, sounds like a nice idea.
This way we give up using copy-on-write as space optimization, but
still we can get a speed gain from c-o-w. It would be even nicer if we
could choose between an optimistic and a pessimistic memory allocation
scheme on a per-process basis, as well as the system default behavior.

Possibly adding RLIMIT_RESERVE_PERCENTAGE or UL_RESERVE_PERCENTAGE
could be the ultimate solution? Setting this parameter to zero, Linux
allocates a page only when it is needed (i.e. first write). Setting
this to 100, Linux reserves (but does not assign) all pages when required
(e.g. by sbrk() or fork()) so that the process can run safely. Setting
this param to, say, 30 makes Linux assume 30% of process memory is
actually written.
--mad

Albert Cahalan

unread,
Mar 10, 1995, 2:04:57 AM3/10/95
to
>>>>> "LT" == Linus Torvalds <torv...@cc.Helsinki.FI> writes:


LT> 2) GNU emacs (ugh) wants to start up a shell script. In the meantime, GNU
LT> emacs has (as it's wont to do) grown to 17 MB, and you obviously don't
LT> have much memory left. Do you accept the fork?

LT> - emacs will happily do an exec afterwards, and will actually use only 10
LT> pages before that in the child process (stack, mainly). Sure, let it
LT> fork().

LT> - How is the kernel supposed to know that it will fork? No way can it
LT> fork, as we don't have the potential 17MB of memory that now gets doubled.

Why must fork() and exec() be used? It would be better to have a spawn()
that would produce the same result by a different method.

--

Albert Cahalan
alb...@ccs.neu.edu

Vivek Kalra

unread,
Mar 10, 1995, 2:57:54 PM3/10/95
to
In article <3jmkd4$c...@panix3.panix.com>,

S. Joel Katz <stim...@panix.com> wrote:
>In <D5549...@nntpa.cb.att.com>
>v...@rhea.cnet.att.com (Vivek Kalra) writes:
>
>>In article <3jij1f$g...@due.unit.no>,
>>Arnt Gulbrandsen <agu...@nvg.unit.no> wrote:
>>>In article <D51Au...@nntpa.cb.att.com>,
>>>Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>>>
>>>> What it boils down
>>>>to is that the ANSI/ISO spec very clearly specifies malloc()'s
>>>>behaviour and Linux seems to violate it.
>>>
>
>>>>Unfortunately, it *is* that simple: ANSI/ISO *requires* malloc() to
>>>>say what it means.
>>>
>>>WHAT exactly does ANSI/ISO say?
>>>
>>ANSI:
>
>>section 4.10.3 Memory Management Functions
>> The order and contiguity of storage allocated by successive
>> calls to the calloc(), malloc(), and realloc() functions is
>> unspecified. ... Each such allocation shall yield a pointer to
>> an object disjoint from any other object. The pointer returned
>> points to the start (lowest byte address) of the allocated
>> space. If this space cannot be allocated, a null pointer is
>> returned. ...
>
> But the whole point is that the Linux malloc _can_ allocate
>memory in that process' virtual space even if there is no system
>physical memory or virtual memory to allow it to dirty at that instant.
>
Well, there is allocation and there is allocation: what you are
saying sounds to me like a *promise* to allocate memory (mptalloc()
anyone?) -- memory that may or may not be *actually available* for
*use* -- just what good is such a promise? We might as well start
ignoring the return value from malloc() since surely there's always
room in a process' virtual space, as you call it (isn't this number
2GB for Linux?), even though there isn't enough physical or virtual
memory to back up that promise. When I call malloc(), I'm not
saying that I *might* need this memory -- I'm saying that I do
indeed need this much memory, so tell me whether I can have it or
not.

Besides, what's the use of having a function called realloc() if
one were to follow your reasoning: I could always ask for the
maximum possible memory my program could ever need and have the OS
worry about extending the memory that I actually used.

S. Joel Katz

unread,
Mar 10, 1995, 4:26:12 PM3/10/95
to

Precisely.

That behaviour is extremely efficient (assuming objects are
much larger than or a multiple of 4K). That's one of the neat things
about the present implementation. After all, realloc might need to copy
and might fragment memory. The 386 has a sophisticated MMU for a _reason_
and Linux lets you use it.

The whole philosophy of C (and UNIX IMHO) has been to stress
efficiency. To let the programmer take his chances and do it _HIS_ way
and get the last drop of performance out of the system.

I don't think it would be a bad thing if there were a way to
prevent important processes from dying. A priority scheme with a user
daemon to do the killing or reserved pages would be nice.

But eliminating the lazy memory allocation would be like going to
Pascal.

Joern Rennecke

unread,
Mar 10, 1995, 9:42:15 PM3/10/95
to
torv...@cc.Helsinki.FI (Linus Torvalds) writes:

>In article <3itc77$9...@ninurta.fer.uni-lj.si>,
>Damjan Lango <lan...@ana.fer.uni-lj.si> wrote:
>>
>>Absolutely agree!
>>And I can't understand how this malloc bug came so far up to 1.1.x
>>It *must* be fixed before 1.2!!!

>Too late...

>Anyway, it's not a simple matter of just checking how much free memory
>there is: people seem to be completely unaware of how hard a problem
>this actually is.

>Please, read any book about resource dead-locks etc, and you'll find
>that these dead-locks *can* be resolved, but at the cost of

> - efficiency (to be sure you can handle any dead-lock, you'll have to
> do a *lot* of work).
> - usability (to make sure you never get any dead-lock, you have to say
> no to somebody, and you'll have to say "no" a *lot* earlier than most
> people seem to think).

>In the case of the memory handling, actually counting pages isn't that
>much of an overhead (we just have one resource, and one page is as good
>as any and they don't much depend on each other, so the setup is
>reasonably simple), but the usability factor is major.

As it stands, the problem seems to be that the kernel knows too little
about what pages should be counted. Therefore, I propose to have a new
system call that requests an amount of unshared physical memory/swap to be
reserved for the process. This call would return an error code if the
space can't be reserved.
A process that has used this system call would never be killed due to low
memory unless it needs more memory than it has reserved. Thus, all old
applications will run like they used to, but new ones have the option of
ensuring clean handling of resource exhaustion by calculating how much
space they need for data, stack and dirty mmapped pages.
If you want to implement a malloc that ensures actual allocation of memory,
you would then use this new system call along with sbrk() to add as much
claimed physical memory/swap as you add address space.
The reserved memory space would of course not be doubled by a fork; the
child would have to explicitly do this if this is desired, and send
the parent some notification if the request is denied.
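
To make that concrete (a purely hypothetical sketch: reserve_memory() is
the proposed call, not anything that exists in Linux), a "guaranteed"
allocator would grow the address space only after the kernel has agreed to
back it:

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical system call from the proposal above: reserve 'bytes' of
   unshared physical memory/swap for this process, returning 0 on success
   and -1 if the space cannot be reserved.  It does not exist in Linux. */
extern int reserve_memory(size_t bytes);

void *guaranteed_alloc(size_t size)
{
    void *old_brk;

    if (reserve_memory(size) < 0)
        return NULL;                    /* the kernel refused to commit the space */
    old_brk = sbrk((intptr_t) size);    /* now it is safe to grow the data segment */
    if (old_brk == (void *) -1)
        return NULL;
    return old_brk;
}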

Joern Rennecke

Jon Trulson

unread,
Mar 11, 1995, 12:03:17 AM3/11/95
to
Ian McCloghrie (ia...@qualcomm.com) wrote:
: lan...@ana.fer.uni-lj.si (Damjan Lango) writes:

: >Bruce Thompson (br...@newton.apple.com) wrote:
: >: In article <57...@artcom0.north.de>, p...@artcom0.north.de (Peter Funk) wrote:

[...much deletia regarding a bug (IMHO) in linux's malloc()...]

: If I'm not mistaken, SVR4 has the same behaviour as Linux in this
: respect. I've not tested it empirically, but I spent a little time
: looking at the SVR4 vmem sources (secondary to different project in
: the fs) about a year ago, and that seemed to be what it was doing.
: So it's not unheard-of behaviour.

Sorry to interject, but as a Unixware (SVR4.2) geek myself, I can safely
state that when you ask for 16MB of memory via malloc(), you either get it
all or you get NULL. I couldn't imagine trying to handle the case where my
program believes it has memory that might not really exist. What do you do?
Set up signal handlers to catch all the resulting coredumps??? ;-}

: --
: Ian McCloghrie work: ia...@qualcomm.com home: i...@egbt.org
: ____ GCS d-- H- s+:+ !g p?+ au a- w+ v- C+++$ UL++++ US++$ P+>++
: \bi/ L+++ 3 E+ N++ K--- !W--- M-- V-- -po+ Y+ t+ 5+++ jx R G''''
: \/ tv- b+++ D- B--- e- u* h- f+ r n+ y*

: The above represents my personal opinions and not necessarily those
: of my employer, Qualcomm Inc.

Albert Cahalan

unread,
Mar 10, 1995, 11:46:44 PM3/10/95
to
>>>>> "A" == Andreas Schwab <sch...@issan.informatik.uni-dortmund.de> writes:

A>> Why must fork() and exec() be used? It would be better to have a
A>> spawn() that would produce the same result by a different method.

A> With spawn you cannot change the environment of the child without affecting
A> the parent, unless you save and restore it around the call. For example,
A> redirections: you have to save the original meaning of stdin and stdout
A> before redirecting it for the child. It can be done, but you have to
A> rewrite every program that forks a filter, at least.

Use a parameter to pass a new environment. You only need to rewrite the
_large_ programs that start small filters.
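
For what it's worth, that is roughly the shape the interface eventually took
in POSIX with posix_spawn() (standardized years later): the redirections and
the environment are passed as parameters instead of being set up in a forked
child. A minimal sketch, with placeholder file and command names:

#include <fcntl.h>
#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "sort", "input.txt", NULL };   /* placeholder filter command */
    posix_spawn_file_actions_t fa;

    posix_spawn_file_actions_init(&fa);
    /* the redirection is a parameter of the spawn, not a change to the parent */
    posix_spawn_file_actions_addopen(&fa, STDOUT_FILENO, "filtered.out",
                                     O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (posix_spawnp(&pid, "sort", &fa, NULL, argv, environ) == 0)
        waitpid(pid, NULL, 0);

    posix_spawn_file_actions_destroy(&fa);
    return 0;
}
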
--

Albert Cahalan
alb...@ccs.neu.edu

Vivek Kalra

unread,
Mar 10, 1995, 3:07:51 PM3/10/95
to
In article <824.2F...@purplet.demon.co.uk>,

Mike Jagdis <ja...@purplet.demon.co.uk> wrote:
>* In message <D51CG...@nntpa.cb.att.com>, Vivek Kalra said:
>
>VK> The point is that ANSI/ISO *require* malloc() to return NULL if the
>VK> asked for memory cannot be allocated so the programmer can take
>VK> appropriate action.
>
>The confusion is over the word "allocate". Malloc allocates a region of the
>process memory space suitable for an object of the stated size but the OS
>does not necessarily commit memory to that space until you dirty it.
>
I agree that the confusion is over the word allocate: What you are
saying sounds to me not like malloc() -- *m*emory *alloc*ation, mind you
-- but a mere *promise* to *try* to allocate memory when actually
used. If malloc() returning a non-NULL had actually *allocated*
the memory it said it had, it would never fail when said memory was
actually used.

>
>VK> [...]
>VK> This brain-numb-if-not-dead function could bomb after withdrawing
>VK> the money from the source account but before depositing it the
>VK> destination account because malloc() didn't return a NULL
>VK> and yet InitBankRecord() caused the program to fail.
>
>This is just poorly designed code. Page allocation is just one of many ways
>that the program could stop unexpectedly at any point for no clear reason.
>If that is a problem you have to design around it.
>

As I said, not the smartest of codes lying around in a safe deposit
box. However, the point was that it was perfectly correct as far
as the ANSI/ISO spec is concerned -- and yet it could fail simply
because it trusted the return value of malloc(). Not A Good
Thing (tm), if you ask me. In such a world, we might as well
forget that the return value of malloc() has any meaning
whatsoever. And I, for one, am not going to be the one to say so
in comp.lang.c or comp.std.c... :-)

Joe Buck

unread,
Mar 12, 1995, 1:34:33 AM3/12/95
to

In <MITCHELL.9...@leadbelly.math.ufl.edu> mitc...@leadbelly.math.ufl.edu (Bill Mitchell) writes:
|>For what it's worth, this runs happily under sunOS 1.4x:

|>main()
|>{
|> int *ptr;
|>
|> ptr = calloc(1000000,1000000);
|> getchar();
|>}

stim...@panix.com (S. Joel Katz) writes:
> Did you check to see if ptr was NULL?

No, he didn't. I did (under SunOS 4.1.3); calloc returns NULL.
Go back to C school, Bill.
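
For reference, the checked version (a minimal sketch) makes the result
obvious:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *ptr = calloc(1000000, 1000000);   /* roughly a terabyte */

    if (ptr == NULL)
        printf("calloc returned NULL\n");
    else
        printf("calloc claims the space was allocated\n");
    return 0;
}
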
--
-- Joe Buck <jb...@synopsys.com> (not speaking for Synopsys, Inc)
Phone: +1 415 694 1729

Bruce Thompson

unread,
Mar 12, 1995, 1:32:10 PM3/12/95
to
In article <D4D59...@info.swan.ac.uk>, iia...@iifeak.swan.ac.uk (Alan
Cox) wrote:

> In article <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI (Lars
Wirzenius) writes:
> >situations intelligently. Instant Microsoftware. Instant trashing systems.
> >Instant "Linux is unreliable, let's buy SCO". Instant end of the univ...,
> >er, forget that one, but it's not a good idea anyway.
>
> Tried SCO with any resource limits on the problem.
>
> >There's more to writing good software than getting it through the
> >compiler. Error handling is one of them, and Linux makes it impossible
> >to handle low memory conditions properly. Score -1 big design misfeature
> >for Linus.
>
> Scientists like it that way, other people should read the limit/rusage
> man pages.

Scientists may like it that way, but what they _should_ do is use
something like the Map classes from libg++ which give the _effect_ they
want, without the danger!

I'm sorry, but there is absolutely _no_ excuse for malloc/sbrk not working
in a sane manner. Yes, some performance may be sacrificed in order to make
malloc/sbrk reliable. First off, I don't think the performance hit need be
that big. Second, a small sacrifice of performance in exchange for a more
reliable system is a _good_ thing!

Cheers,
Bruce

--
--------------------------------------------------------------------
Bruce Thompson | "Never put off till tomorrow what
PIE Developer Information Group | you can comfortably put off till
Apple Computer Inc. | next week."
| -- Unknown
Usual Disclaimers Apply |

Bruce Thompson

unread,
Mar 12, 1995, 1:52:41 PM3/12/95
to
In article <820.2F...@purplet.demon.co.uk>, ja...@purplet.demon.co.uk
(Mike Jagdis) wrote:

> * In message <3ivttm$1...@nz12.rz.uni-karlsruhe.de>, Ivica Rogina said:
>
> IR> This is not a matter of virtual memory. If I do a malloc(),
> IR> I don't care
> IR> what the size of the RAM or the swap space or the virtual
> IR> memory is.
> IR> Whatever it is, I want to be sure that I can use all the
> IR> memory that was
> IR> assigned to me without having to wait for the sysop to push
> IR> in another couple-of-gigs-disc.


>
> Then you *have* to dirty each page in the area you request yourself to
> forcibly map them as individual, distinct pages.
>

> What the less experienced application writers don't realise is that even
> the kernel has no way of knowing just how much memory+swap is really usable
> at any one time. Text regions may be paged from the executable file - they
> may or may not require a physical memory page at any moment and *never*
> require a swap page. Similarly the OS cannot know in advance which pages
> will be shared and which will require a new page to be used, nor can it know
> when a shared page will need to be split due to a copy on write.

Sorry, but I must take issue with a couple of your points here. At any
point in time the kernel _MUST_ know what resources are in use. True it
may not know what will be used in the future, but it simply _MUST_ know
what's in use right NOW! The management of resources is the kernel's job.
If there is a resource that the kernel is not managing correctly, then it
should be changed to manage it correctly!

>
> The *only* way the OS could guarantee to have a page available for you is
> to take the most pessimistic view and save a swap page for *every* possible
> page used - i.e. every process requires text pages + data pages + shared
> library pages of swap (shared libraries are shared in memory but require
> distinct swap for each process). And then you have to figure out how to
> handle stack allocations which can probably only be guaranteed by committing
> plenty (a few meg? gig?) of pages...

This is precisely what it _should_ be doing! Until the kernel is told that
a page is going to be something other than a swappable page, the page
should be allocated as if it was. Once the kernel knows that the page
frame isn't needed, it can free it.

Stack pages are a bit of a special case, as the stack tends to grow without
direct control over failure conditions. The whole stack argument seems
specious though. If you've run out of available VM, allocating a new stack
page will fail regardless of whether malloc/sbrk is operating correctly or
not! In general, it's not considered wise to be allocating megabytes on
the stack anyways, for precisely the reason that there's no way to recover
from running out of stack space! That's what the heap is for. You allocate
a large block on the heap, and if the space isn't available, you have an
opportunity to deal with the problem.

At _no_ time are you guaranteed a new page. No one is suggesting (I hope)
that the kernel be required to make guarantees about space availability.
It simply is not possible to predict future usage, as the previous poster
correctly pointed out.

>
> Seriously, if your programmers cannot handle this they should be trained
> or moved back to non-VM programming.

I'm sorry, but I found this statement extremely patronizing. The fact of
the matter is that when sbrk tells you that you have been allocated a
page, the kernel is making a contract. It is assigning a page of memory
for your use. From that point on, there should be no need for further
programmer intervention to force the page to be "really" allocated. The
defined kernel interface for adding pages to a process _is_ sbrk! If sbrk
isn't adding pages to the process then sbrk is broken, pure and simple.

For those that are wondering where sbrk came from in this discussion, sbrk
is called by malloc when malloc needs space. Malloc manages a pool of
memory that already belongs to the process. When the process needs new
pages, malloc calls sbrk.
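
As a minimal sketch of that relationship (nothing like a real malloc, which
keeps free lists, handles freeing, and so on), an allocator carves requests
out of a pool and only calls sbrk() when the pool runs dry:

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

static char  *pool;          /* start of the unused part of the current pool */
static size_t pool_left;     /* bytes still unused in the current pool */

void *toy_malloc(size_t size)
{
    size = (size + 15) & ~(size_t) 15;          /* keep 16-byte alignment */

    if (size > pool_left) {
        /* pool exhausted: ask the kernel for more data space */
        size_t chunk = size > 65536 ? size : 65536;
        void *space = sbrk((intptr_t) chunk);
        if (space == (void *) -1)
            return NULL;                        /* sbrk refused; report the failure */
        pool = space;
        pool_left = chunk;
    }
    pool      += size;
    pool_left -= size;
    return pool - size;
}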

>
> Mike
>

Cheers,
Bruce.

Bruce Thompson

unread,
Mar 12, 1995, 1:55:39 PM3/12/95
to
In article <3ivhid$l...@panix.com>, com...@csanta.attmail.com (Greg Comeau)
wrote:

> Ok, ANSI malloc() doesn't actually say how the memory is obtained and stuff
> like that since it's OS/environment specific. This sounds like perhaps
> a RTL responsibility for sure then. That is, perhaps it should optimally
> figure out how to touch the page, because the space for the object does
> need to be available (in my interpretation) before the object returned
> is used. If not, the null pointer is returned. As I recall, there is
> some fuzziness about exactly what is meant by things like "object"
> and "allocate", but IMO they do not interfere here. I'd post the
> actual words but cannot find any of my copies of the standard at the
> moment.
>

Have a look at sbrk(2). This is the function that actually requests more
space for the process. I haven't looked yet, but I believe that sbrk is
specified under POSIX.

> - Greg
> --
> Comeau Computing, 91-34 120th Street, Richmond Hill, NY, 11418-3214
> Here:com...@csanta.attmail.com / BIX:comeau or com...@bix.com /
CIS:72331,3421
> Voice:718-945-0009 / Fax:718-441-2310 / Prodigy: tshp50a / WELL: comeau

Bruce Thompson

unread,
Mar 12, 1995, 2:01:11 PM3/12/95
to
In article <950660...@mulga.cs.mu.OZ.AU>, f...@munta.cs.mu.OZ.AU (Fergus
Henderson) wrote:

> jmal...@cs.Helsinki.FI (Jussi Lahtinen) writes:


>
> >In <D4pvF...@nntpa.cb.att.com> v...@rhea.cnet.att.com (Vivek Kalra) writes:
> >
> >>The problems of not handling a NULL return from malloc() are well
> >>known. To have a malloc() that might fail in a way that doesn't
> >>give the programmer any chance to recover is just mind-boggling.
> >

> >Malloc-checking is not enough. If you alloc all memory and then call
> >a function which needs more stack space, your program will be dead and
> >there is no way to prevent it.
>
> It's possible to statically determine your stack requirements
> and to allocate the necessary amount of stack space in advance.

Ummm. Yes and no. In certain classes of program it is possible to statically
determine stack requirements, but in general this is not the case. It can
be shown that this is a variation on the classic "halting" problem, a
provably non-computable function.

>
> It's also possible to catch the segmentation violation when you
> get a stack overflow and longjmp() out.

You can indeed. The fact that stack overflows are difficult to handle, and
difficult to predict leads to the increased use of the heap, which should
be predictable.
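
A minimal sketch of that approach (it needs a separate signal stack, because
the normal stack is exactly what just ran out; how portable longjmp'ing out
of a SIGSEGV handler is depends on the system):

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf checkpoint;
static char handler_stack[64 * 1024];

static void on_segv(int sig)
{
    (void) sig;
    siglongjmp(checkpoint, 1);              /* unwind back to the checkpoint */
}

static int runaway(int depth)
{
    volatile char frame[16384];             /* burn 16KB of stack per call */
    frame[0] = (char) depth;
    return runaway(depth + 1) + frame[0];   /* not a tail call, so frames pile up */
}

int main(void)
{
    stack_t ss;
    struct sigaction sa;

    ss.ss_sp = handler_stack;               /* the handler must not run on the dead stack */
    ss.ss_size = sizeof handler_stack;
    ss.ss_flags = 0;
    sigaltstack(&ss, NULL);

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sa.sa_flags = SA_ONSTACK;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(checkpoint, 1) == 0)
        (void) runaway(0);
    else
        printf("caught the stack overflow and recovered\n");
    return 0;
}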

>
> --
> Fergus Henderson - f...@munta.cs.mu.oz.au

Cheers,
Bruce.
