Linux is 'creating' memory ?!


mnij...@et.tudelft.nl

Feb 7, 1995, 11:26:06 AM
Linux & the memory.

I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM and a 17Mb swap partition.
My compiler is GCC 2.5.8.

As I was writing my program, I noticed an oddity (=bug?).
It's probably best explained by a simple program:

#include <stdlib.h>
int main(void) {
    int i,*p;
    /* 1st stage */
    for(i=0;i<10000;i++) {
        p[i]=malloc(4096)
        if (p[i]==NULL) {
            fprintf(stderr,"Out of memory\n");
            exit(1);
        }
    }
    /* 2nd stage */
    for(i=0;i<10000;i++)
        *(p[i])=1;
}

As you can see, the first stage tries to allocate 40Mb of memory. Since
I don't have that kind of memory, it should of course fail. To my
surprise, it didn't. (!)
Well then, the second stage tries to access the 40Mb. At this point
Linux figures out that that kind of memory isn't there, so it kind of
hangs. Not really; it just becomes incredibly slow. I was able to exit
the program with CTRL-C, but it took a few minutes to do that.

BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
is faster than calloc, so I prefer malloc.

Am I doing something wrong? Or is it a bug in Linux or GCC?


Marc.


+-------------------------------------------------------------------+
| Marc Nijweide Delft University of Technology, Netherlands |
| M.Nij...@et.TUDelft.nl http://morra.et.tudelft.nl:80/~nijweide |
+-------------------------------------------------------------------+

If builders built things the way programmers write programs, the
first woodpecker that came along would destroy civilisation.

iafi...@et.tudelft.nl

Feb 7, 1995, 3:59:28 PM
In article <1995Feb7.1...@tudedv.et.tudelft.nl>, mnij...@et.tudelft.nl writes:
> Linux & the memory.
>
> I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
> My compiler is GCC 2.5.8
>
> As I was writing my program, I noticed an oddity (=bug?).
> It's probably best explained by a simple program:
>
> #include <stdlib.h>
> int main(void) {
> int i,*p;

Has to be "int i, *p[10000];"

> /* 1st stage */
> for(i=0;i<10000;i++) {
> p[i]=malloc(4096)
> if (p[i]==NULL) {
> fprintf(stderr,"Out of memory\n");
> exit(1);
> }
> }
> /* 2nd stage */
> for(i=0;i<10000;i++)
> *(p[i])=1;
> }
>
> As you can see the first stage tries to allocate 40Mb of memory. Since
> I don't have that kind of memory it should fail ofcourse. To my
> surprise it didn't. (!)
> Well then, the second stage tries to access the 40Mb. At this point
> Linux figures out that that kind of memory isn't there, so it kind of
> hangs. Not really it just becomes increadably slow, I was able to exit
> the program with CTRL-C but it did take a few minutes to do that.
>
> BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
> is faster that calloc, so I prefer to malloc.
>
> Am I doing something wrong ? Or is it a bug in Linux or GCC ?
>
>
> Marc.
>

I have the same "problem".
The program top shows the memory you allocated as 'real', but it does not
exist.

Arjan


------------------------------------------
Arjan Filius
Email : IAfi...@et.tudelft.nl
------------------------------------------

Bill C. Riemers

Feb 7, 1995, 5:14:41 PM
In article <1995Feb7.1...@tudedv.et.tudelft.nl>, mnij...@et.tudelft.nl writes:

Hmmm. I've been told that Linux doesn't really allocate the memory until
the first time you access it. I always wondered how it gracefully handled
out of memory errors this way. Now I guess I know. It doesn't... I'm
sure that there is supposed to be some clean way of handling this, but
I don't see how...

>BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
>is faster that calloc, so I prefer to malloc.

Yes, calloc() accesses the memory by clearing it. So I wouldn't expect it to
have problems.

>Am I doing something wrong ? Or is it a bug in Linux or GCC ?

Looks like it to me...

Bill


--
<A HREF=" http://physics.purdue.edu/~bcr/homepage.html ">
<EM><ADDRESS> Dr. Bill C. Riemers, b...@physics.purdue.edu </ADDRESS></EM></A>
<A HREF=" http://www.physics.purdue.edu/ ">
<EM> Department of Physics, Purdue University </EM></A>

Kevin Lentin

Feb 7, 1995, 5:47:00 PM
mnij...@et.tudelft.nl wrote:
> Linux & the memory.

> I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
> My compiler is GCC 2.5.8

> As I was writing my program, I noticed an oddity (=bug?).
> It's probably best explained by a simple program:

> #include <stdlib.h>
> int main(void) {
> int i,*p;
> /* 1st stage */
> for(i=0;i<10000;i++) {
> p[i]=malloc(4096)

Try allocating p first. Your pointer p points to random memory. It could be
anywhere. You're probably just lucky not to get an error on this line.

--
[==================================================================]
[ Kevin Lentin |___/~\__/~\___/~~~~\__/~\__/~\_| ]
[ kev...@bruce.cs.monash.edu.au |___/~\/~\_____/~\______/~\/~\__| ]
[ Macintrash: 'Just say NO!' |___/~\__/~\___/~~~~\____/~~\___| ]
[==================================================================]

Jeffrey Sturm

Feb 7, 1995, 6:30:12 PM
mnij...@et.tudelft.nl wrote:

: I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition


: My compiler is GCC 2.5.8

: As I was writing my program, I noticed an oddity (=bug?).
: It's probably best explained by a simple program:

: #include <stdlib.h>
: int main(void) {
: int i,*p;
: /* 1st stage */
: for(i=0;i<10000;i++) {
: p[i]=malloc(4096)
: if (p[i]==NULL) {
: fprintf(stderr,"Out of memory\n");
: exit(1);
: }
: }
: /* 2nd stage */
: for(i=0;i<10000;i++)
: *(p[i])=1;

: }

I don't think this is exactly the program you ran. It has several problems,
like trying to dereference p before it is initialized. Anyway, it won't
compile as it stands.

: As you can see the first stage tries to allocate 40Mb of memory. Since


: I don't have that kind of memory it should fail ofcourse. To my
: surprise it didn't. (!)

Linux has paged virtual memory. Even if you have only 40MB physical
memory, a program has almost the entire 4GB address space available to it.
Not until you access a page of memory does Linux try to map it to
physical memory.

: Well then, the second stage tries to access the 40Mb. At this point


: Linux figures out that that kind of memory isn't there, so it kind of
: hangs. Not really it just becomes increadably slow, I was able to exit
: the program with CTRL-C but it did take a few minutes to do that.

That's right. Linux will first run out of physical memory, then it begins
to fill your paging file. That's when it starts to get very slow.

: BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc


: is faster that calloc, so I prefer to malloc.

That's because calloc() initializes the memory with zeros. This causes it
to be mapped immediately.
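
(In other words, calloc() amounts to roughly the following; just a sketch,
and the real one also checks that the multiplication doesn't overflow:)

#include <stdlib.h>
#include <string.h>

/* sketch of what calloc() boils down to; "my_calloc" is just my name for it */
void *my_calloc(size_t nmemb, size_t size)
{
    void *p = malloc(nmemb * size);

    if (p != NULL)
        memset(p, 0, nmemb * size);   /* writing zeros touches every page */
    return p;
}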

: Am I doing something wrong ? Or is it a bug in Linux or GCC ?

It's a "feature" in Linux, and in some other OS's too.

-Jeff

S. Joel Katz

Feb 8, 1995, 12:00:56 AM
In <1995Feb7.1...@tudedv.et.tudelft.nl> mnij...@et.tudelft.nl writes:

>Linux & the memory.

>I'm running Linux 1.1.88 on a 386DX-40 with 4Mb RAM, 17Mb swap partition
>My compiler is GCC 2.5.8

>As I was writing my program, I noticed an oddity (=bug?).
>It's probably best explained by a simple program:


[program deleted]

>As you can see the first stage tries to allocate 40Mb of memory. Since
>I don't have that kind of memory it should fail ofcourse. To my
>surprise it didn't. (!)
>Well then, the second stage tries to access the 40Mb. At this point
>Linux figures out that that kind of memory isn't there, so it kind of
>hangs. Not really it just becomes increadably slow, I was able to exit
>the program with CTRL-C but it did take a few minutes to do that.

>BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
>is faster that calloc, so I prefer to malloc.

>Am I doing something wrong ? Or is it a bug in Linux or GCC ?

It is a feature in the Linux C library and GCC and is seldom
appreciated and little used. Allocating or declaring storage does nothing
in Linux except advance the process' break point.

Linux does not actually allocate a page until a fault occurs,
such as when a read or write to the memory takes place. Then the fault
handler maps a page.

I use this all the time in programs to save the hassle of dynamic
allocation. If I 'might need' up to 10,000,000 ints for something, I
allocate 10,000,000, safe in the knowledge that the allocation will never
fail. Then I use the array as I need 'em.

For example, consider the following program

#include <stdio.h>

/* get_num() is assumed to return the next number from input, or -1 at the end */
int nums[10000000];
int num_count=0;

void main(void)
{
    int j;
    while((j=get_num())!=-1)
        nums[num_count++]=j;
    for(j=0; j<num_count; j++)
        printf("%d->%d\n",j,nums[j]);
}

Space allocated for up to 10,000,000 ints and it still won't
waste space if you only use a dozen. Damn convenient; no bug at all.
--

S. Joel Katz Information on Objectivism, Linux, 8031s, and atheism
Stim...@Panix.COM is available at http://www.panix.com/~stimpson/

John Henders

Feb 8, 1995, 5:28:55 AM

>Hmmm. I've been told that Linux doesn't really allocate the memory until
>the first time you access it. I always wondered how it gracefully handled
>out of memory errors this way. Now I guess I know. It doesn't... I'm
>sure that there is supposed to be some clean way of handling this, but
>I don't see how...

Actually, if you wait long enough, it will eventually announce
that there's not enough memory and kill the program. It just attempts to
give you the memory first, which leads to major swapping while it tries
to free enough pages.
At least that's how it worked the last time I checked, which was
pre 1.0.

--
GAT/MU/AE d- -p+(--) c++++ l++ u++ t- m---
e* s-/+ n-(?) h++ f+ g+ w+++ y*

S. Joel Katz

Feb 8, 1995, 8:04:11 AM

>>Hmmm. I've been told that Linux doesn't really allocate the memory until
>>the first time you access it. I always wondered how it gracefully handled
>>out of memory errors this way. Now I guess I know. It doesn't... I'm
>>sure that there is supposed to be some clean way of handling this, but
>>I don't see how...

It certainly does handle it gracefully. There _are_ no out of
memory errors unless you actually exceed the total amount of memory and
swap space available on the system. If _you_ do _that_, there _is_ no
graceful way to handle it!

> Actually, if you wait long enough, it will eventually announce
>that there's not enough memory and kill the program. It just attempts to
>give you the memory first, which leads ot major swapping while it tries
>to free enough pages.
> At least that's how it worked the last time I checked, which was
>pre 1.0.

This is not really true. You can allocate the memory and sit with
it forever, if you want. As I mentioned earlier, this is DAMN convenient
to replace dynamic allocation with huge static allocation for boosts in
performance and simplicity.

If you start to _use_ the memory, Linux will swap everything it
can to give it to you. The kernel has no way of knowing when you're going
to stop asking for memory, so it can do nothing but do its best to give
you what you ask for.

Monty H. Brekke

Feb 8, 1995, 3:16:11 PM
In article <3h9j68$5...@panix3.panix.com>,

S. Joel Katz <stim...@panix.com> wrote:
>
> It is a feature in the Linux C library and GCC and is seldom
>appreciated and little used. Allocating or declaring storage does nothing
>in Linux except advance the process' break point.
>
> Linux does not actually allocate a page until a fault occurs,
>such as when a read of write to the memory takes place. Then the fault
>handler maps a page.
>
> I use this all the time in programs to save the hassle of dynamic
>allocation. If I 'might need' up to 10,000,000 ints for something, I
>allocate 10,000,000, safe in the knowledge that the allocation will never
>fail. Then I use the array as I need 'em.
>
> For example, consider the following program
>
>int nums[10000000];
>int num_count=0;
>
> void main(void)
> {
> int j;
> while((j=get_num())!=-1)
> nums[num_count++]=j;
> for(j=0; j<num_count; j++)
> printf("%d->%d\n",j,nums[j];
> }
>
> Space allocated for up to 10,000,000 ints and it still won't
>waste space if you only use a dozen. Damn convenient; no bug at all.
>--

I've noticed this feature on other operating systems also. The thing
that bothers me is that if I request more memory than I have available
(physical + swap), my program has no way (as far as I can tell) of
knowing when/if an out-of-memory condition occurs. Say, for example,
that I have allocated space for 25,000,000 integers, at 4 bytes each.
That's 100,000,000 bytes of memory. I've got 16MB physical and 32MB of
swap. Clearly, then, the following loop will fail at some point.

for (i = 0; i < 25000000; ++i)
    huge_array[i] = 0;

How does my program know that this loop generated a memory fault?
Can I catch some signal? At any rate, it seems like it would be simpler
to be able to count on malloc()'s return value being correct. I can
understand the advantage of the current implementation when the amount
of memory requested is less than the total available, but I fail to
see why malloc() doesn't return a failure when I try to request more
memory than I can possibly allocate. Anyone?

--
===============================================================================
mhbr...@iastate.edu | "You don't have to thank me. I'm just trying
bre...@dopey.me.iastate.edu | to avoid getting a real job."
| --Dave Barry

Bill C. Riemers

Feb 8, 1995, 7:24:36 PM
In article <3h9j68$5...@panix3.panix.com>,
S. Joel Katz <stim...@panix.com> wrote:
>>Am I doing something wrong ? Or is it a bug in Linux or GCC ?
>
> It is a feature in the Linux C library and GCC and is seldom
>appreciated and little used. Allocating or declaring storage does nothing
>in Linux except advance the process' break point.
>
> Linux does not actually allocate a page until a fault occurs,
>such as when a read of write to the memory takes place. Then the fault
>handler maps a page.
>
> I use this all the time in programs to save the hassle of dynamic
>allocation. If I 'might need' up to 10,000,000 ints for something, I
>allocate 10,000,000, safe in the knowledge that the allocation will never
>fail. Then I use the array as I need 'em.
>
> For example, consider the following program
>
>int nums[10000000];
>int num_count=0;
>
> void main(void)
> {
> int j;
> while((j=get_num())!=-1)
> nums[num_count++]=j;
> for(j=0; j<num_count; j++)
> printf("%d->%d\n",j,nums[j];
> }
>
> Space allocated for up to 10,000,000 ints and it still won't
>waste space if you only use a dozen. Damn convenient; no bug at all.

Ahhh, but it is a bug. It could be that your program could recover if the
malloc() failed, and get by without the extra memory it requested...
As it is now, programs will always crash if you are out of memory.
Damn inconvenient when it is something important like inetd... I always
wondered why programs start crashing when I use too much swap without
logging anything in syslogd.

For example, consider a program I might write to convert picture formats.
It could be that I've included two algorithms. One algorithm would create
a new image in separate memory, so that I could "undo". The other could
be intended for when I can't malloc() enough memory, and would instead
overwrite the old image with the new one.

Jesper Peterson

Feb 9, 1995, 1:44:20 AM
In article <3h8vq4$4pu$1...@heifetz.msen.com>,
Jeffrey Sturm <jst...@garnet.msen.com> wrote:
>> [re: malloc not grabbing a real page until memory is accessed]

>Linux has paged virtual memory. Even if you have only 40MB physical
>memory, a program has almost the entire 4GB address space available to it.
>Not until you access a page of memory does Linux try to map it to
>physical memory.

Is there any way of testing or accessing malloc'd memory in a non-blocking
fashion under these circumstances? e.g.:

ptr = malloc(BIGNUM);
if ( non_block_map(ptr+foo, size) )
read(ptr+foo, .....);
else
arrrgh;

This would make programs that use data structure with 'holes' (similar
to sparse files) more robust.
--
Jesper Peterson j...@digideas.com.au
j...@mtiame.mtia.oz.au
j...@io.com

Lars Hofhansl

Feb 8, 1995, 8:51:06 AM

In article <1995Feb7.1...@tudedv.et.tudelft.nl>, mnij...@et.tudelft.nl writes:
>As I was writing my program, I noticed an oddity (=bug?).
>It's probably best explained by a simple program:
>
>#include <stdlib.h>
>int main(void) {
> int i,*p;
> /* 1st stage */
> for(i=0;i<10000;i++) {
> p[i]=malloc(4096)
> if (p[i]==NULL) {
> fprintf(stderr,"Out of memory\n");
> exit(1);
> }
> }
> /* 2nd stage */
> for(i=0;i<10000;i++)
> *(p[i])=1;
>}
>
>As you can see the first stage tries to allocate 40Mb of memory. Since
>I don't have that kind of memory it should fail ofcourse. To my
>surprise it didn't. (!)
>Well then, the second stage tries to access the 40Mb. At this point
>Linux figures out that that kind of memory isn't there, so it kind of
>hangs. Not really it just becomes increadably slow, I was able to exit
>the program with CTRL-C but it did take a few minutes to do that.
>
>BTW, this doesn't happen if I use calloc() instead of malloc(), but malloc
>is faster that calloc, so I prefer to malloc.
>

There's nothing odd in this behavior (well, except that malloc should
check whether there is enough virtual memory or not).

Remember that your program runs in an environment of virtual memory!

Physical memory is only used when virtual memory is really accessed. That's BTW
the reason why you can run processes which require more memory than is
physically available; at any moment it's only necessary to have in memory
the part which is currently being accessed.


In the 1st stage you malloc 40MB of RAM. Malloc does nothing more than
generate a new node in the (info-)list of allocated memory portions, to ensure
that succeeding mallocs don't allocate the same part of memory again.

The memory is not used (accessed) yet, so there is no need to page in (or out)
any part of the memory.
Now we reach the 2nd stage of your program. The allocated memory is
(write-)accessed now. Since it cannot fit into main memory as a whole, some
pages need to be paged out. That's why it becomes so incredibly slow.

The difference between malloc and calloc is that calloc tries to initialize
the allocated memory with 0. Maybe calloc also checks the size of available
virtual memory...
Anyway, because calloc does the initialization, the memory is accessed right
when it is "calloced", so calloc should fail when it cannot initialize the
memory.


Lars

S. Joel Katz

Feb 9, 1995, 8:28:43 AM
In <3hb8qb$6...@news.iastate.edu> bre...@dopey.me.iastate.edu (Monty H. Brekke) writes:

> I've noticed this feature on other operating systems also. The thing
>that bothers me is that if I request more memory than I have available
>(phsical + swap), my program has no way (as far as I can tell) of
>knowing when/if an out-of-memory condition occurs. Say, for example,
>that I have allocated space for 25,000,000 integers, at 4 bytes each.
>That's 100,000,000 bytes of memory. I've got 16MB physical and 32MB of
>swap. Clearly, then, the following loop will fail at some point.

> for (i = 0; i < 25000000; ++i)
> huge_array[i] = 0;

> How does my program know that this loop generated a memory fault?
>Can I catch some signal? AT any rate, it seems like it would be simpler
>to be able to count on malloc()'s return value being correct. I can
>understand the advantage of the current implementation when the amount
>of memory requested is less than the total available, but I fail to
>see why malloc() doesn't return a failure when I try to request more
>memory than I can possibly allocate. Anyone?

The problem with malloc failing is it would break the program I
showed above. Programs often malloc huge arrays (larger than they will
ever need) and count on them working. If the program later really
requires more RAM than it allocated, of course, it will fail.

As a simple example, a 'disassociated press' program I wrote
allocates space for 10,000,000 word nodes at about 16 bytes apiece. This
program would fail on any system with less than 160M of virtual memory if
all of the memory were really allocated immediately.

If you want, you can write a '1' every 4K to force the memory to
be instantiated, but this is a horrible waste. Many programs allocate
memory they never use or do not use until much later in their execution.
Linux is very smart about this.

If you really care, you can always read /proc/meminfo and see how
much memory is available.

I am quite happy with the present Linux implementation and find
taking advantage of it a win-win situation over dynamic allocation (which
has execution penalties) or truly allocating the maximum needed (which
has space penalties).

Though, a signal that a program could request, and that would be sent
to it if memory started to get 'low', might be nice. Then again, if you really
need the RAM (which you presumably do since you wrote to it), what can you
do? Paging to disk is silly; that is what swap is for.
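
For what it's worth, the touch-a-byte-every-4K trick (plus a peek at
/proc/meminfo) looks something like this; an untested sketch, and the
/proc/meminfo format may differ between kernel versions:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

int main(void)
{
    size_t size = 40UL * 1024 * 1024;   /* 40Mb, as in the original example */
    char *p = malloc(size);
    size_t i;
    FILE *f;
    char line[128];

    if (p == NULL) {                    /* the break point couldn't even move */
        fprintf(stderr, "malloc itself failed\n");
        return 1;
    }

    f = fopen("/proc/meminfo", "r");    /* show what the kernel thinks is free */
    if (f != NULL) {
        while (fgets(line, sizeof(line), f) != NULL)
            fputs(line, stderr);
        fclose(f);
    }

    for (i = 0; i < size; i += PAGE_SIZE)
        p[i] = 1;                       /* force every page to be instantiated */

    return 0;
}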

Bill C. Riemers

Feb 9, 1995, 2:54:41 PM
In article <3hd5ab$9...@panix3.panix.com>,

S. Joel Katz <stim...@panix.com> wrote:
> Though, a signal that a program could request that would be sent
>to it if memory started to get 'low' might be nice. Though, if you really
>need the RAM (which you presumably do since you wrote to it),what can you
>do. Paging to disk is silly, that is what swap was for.

Often programs can free up RAM if needed. For example, a CAD program
running out of memory could free some of the buffers it was keeping for
an undo function. Otherwise, there is no real point in even checking the
malloc() return value. If malloc() can never fail, why bother? Why even
bother passing a value to malloc? How about just always having malloc
allocate 10MB of space...

As near as I can tell, the only hope is to always use calloc() instead.
However, this leaves me wondering what happens if I want realloc() to increase a
buffer's size. Does Linux really allocate the new memory, or should I try
zeroing things to avoid crashing later? Normally I write programs to
only request memory they will be using and free it as soon as it is not
needed. So not really having the memory allocated is a useless feature
that just makes logically flawless programs crash.
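
I.e., if zeroing is the answer, I imagine something along these lines for
realloc(); just a guess on my part, and I have no idea whether the zeroing
really forces the pages to exist:

#include <stdlib.h>
#include <string.h>

/* grow a buffer and touch (zero) the new tail, so that if the memory isn't
   really there the blow-up at least happens right here and not at some
   random later access; the name is mine, not a real library function */
void *grow_and_touch(void *old, size_t old_size, size_t new_size)
{
    char *p = realloc(old, new_size);

    if (p != NULL && new_size > old_size)
        memset(p + old_size, 0, new_size - old_size);
    return p;
}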

Steven Buytaert

Feb 10, 1995, 4:31:16 AM
mnij...@et.tudelft.nl wrote:

: As I was writing my program, I noticed an oddity (=bug?).


: It's probably best explained by a simple program:

: for(i=0;i<10000;i++) {


: p[i]=malloc(4096)
: if (p[i]==NULL) {
: fprintf(stderr,"Out of memory\n");
: exit(1);

: }
: }
: for(i=0;i<10000;i++)
: *(p[i])=1;

: As you can see the first stage tries to allocate 40Mb of memory. Since
: I don't have that kind of memory it should fail ofcourse. To my
: surprise it didn't. (!)

: Well then, the second stage tries to access the 40Mb. [...]

The physical memory pages are not allocated until there is a reference
to the pages. Check out /usr/src/linux/mm/*.c for more precise information.
(When sbrk() is called during a malloc, a vm_area structure is enlarged
or created; it's not until a page fault that a page is really claimed
for use.)

It's not a bug. IMHO, a program should allocate and use the storage as
it goes, not in chunks of 40 megabytes...

--
Steven Buytaert

WORK buyt...@imec.be
HOME buyt...@clever.be

'Imagination is more important than knowledge.'
(A. Einstein)

Arnt Gulbrandsen

Feb 10, 1995, 11:50:25 AM
In article <3hb8qb$6...@news.iastate.edu>,

Monty H. Brekke <bre...@dopey.me.iastate.edu> wrote:
> I've noticed this feature on other operating systems also. The thing
>that bothers me is that if I request more memory than I have available
>(phsical + swap), my program has no way (as far as I can tell) of
>knowing when/if an out-of-memory condition occurs.
<deletia>

Your program has no way to detect that five other users all start
memory-intensive applications while your program is running either.

This has been discussed both here and on the linux-kernel mailing
list. My impression from those threads is that if anyone writes
something intelligent, Linus will accept it. (Truism, I know.)

My thinking, FWIW, is that there ought to be two new signals,
SIGMEMLOW and SIGLOADHIGH, which default to SIG_IGN and may be sent to
any process (that has installed handlers) at any time, and which hint
to the process that memory is low or load is high. The process would
then do something or not do anything, depending on the programmer's
whim. A loadable kernel module might wake up now and then (using
timers, not the scheduler, of course) and send some signals if its
criteria of high load or low memory are fulfilled.

I'm not going to write any monitoring module, but if there were such
signals I might patch my www/ftp daemons, and after 1.3 is out I may
actually write a (trivial) patch to add the signals and make them
default to SIG_IGN. It would at least change the discussions :)
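
The application side would be trivial; something like this, with a made-up
signal number, since of course no such signal exists today:

#include <signal.h>
#include <unistd.h>

#define SIGMEMLOW 33    /* made-up number for the proposed signal */

static void memlow_handler(int sig)
{
    /* free caches, refuse new connections, whatever fits the program */
    write(2, "kernel hints that memory is low\n", 32);
}

int main(void)
{
    signal(SIGMEMLOW, memlow_handler);  /* default would remain SIG_IGN */
    /* ... normal work ... */
    return 0;
}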

--Arnt

Cameron Hutchison

Feb 11, 1995, 9:33:57 AM
stim...@panix.com (S. Joel Katz) writes:
> If you really care [about not being able to malloc more memory
>than what exists], you can always read /proc/meminfo and see how
>much memory is available.

This won't work. You need some sort of atomic operation to check and malloc
memory. Otherwise you could read /proc/meminfo and find you have enough
memory, your quantum expires and another process takes the memory, and
you're back where you started.

It would be better to have a per process flag that indicated the memory
allocation policy you want for that process. How you manipulate this flag
is left as an exercise for the reader.

Cheers
--
Cameron Hutchison (ca...@nms.otc.com.au) | Beware of the clams
GCS d--@ -p+ c++(++++) l++ u+ e+ m+(-) s n- h++ f? !g w+ t r+

Lars Wirzenius

Feb 12, 1995, 10:35:19 AM
stim...@panix.com (S. Joel Katz) writes:
> The problem with malloc failing is it would break the program I showed above.

That would be a good thing. Seriously. If a program can't rely on the
memory it has allocated to actually be usable, it can't handle low memory
situations intelligently. Instant Microsoftware. Instant trashing systems.
Instant "Linux is unreliable, let's buy SCO". Instant end of the univ...,
er, forget that one, but it's not a good idea anyway.

There's more to writing good software than getting it through the
compiler. Error handling is one such thing, and Linux makes it impossible
to handle low memory conditions properly. Score -1 big design misfeature
for Linus.

> Programs often malloc huge arrays (larger than they will
> ever need) and count on them working.

I've never seen such a program, but they're buggy. Any program using
malloc and not checking its return value is buggy. Since malloc almost
always lies under Linux, all programs using malloc under Linux are
buggy.

This `lazy allocation' feature of Linux, and Linus's boneheadedness
about it, is about the only reason why I'm still not sure he isn't a
creature from outer space (oops, I'm going to be hit by a Koosh ball
the next time Linus comes to work :-). The lazy allocation is done, as
far as I can remember from earlier discussions, to avoid a fork+exec
from requiring, even temporarily, twice the amount of virtual memory,
which would be expensive for, say, Emacs. For this gain we sacrifice
reliability; not a very good sacrifice, in my opinion. I also don't buy the
argument that it's important to make it easy to write sparse arrays.
(Such arrays are not all that common, and it's easy enough to implement
them in traditional systems.)
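
(A rough sketch of what I mean: a two-level table where each chunk is only
allocated on first use. The names and sizes are made up, and this is untested:)

#include <stdlib.h>

#define CHUNK   1024                    /* ints per chunk */
#define NCHUNKS 10000                   /* covers 10,240,000 elements */

static int *chunks[NCHUNKS];            /* NULL until a chunk is first written */

int sparse_set(long idx, int value)
{
    long c = idx / CHUNK;

    if (chunks[c] == NULL) {
        chunks[c] = calloc(CHUNK, sizeof(int));
        if (chunks[c] == NULL)
            return -1;                  /* caller gets a chance to handle it */
    }
    chunks[c][idx % CHUNK] = value;
    return 0;
}

int sparse_get(long idx)
{
    long c = idx / CHUNK;

    return chunks[c] != NULL ? chunks[c][idx % CHUNK] : 0;
}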

What would be needed, in my opinion, is at least a kernel compilation
or bootup option that allows the sysadmin to specify the desired behaviour,
perhaps even having a special system call so that each process can
decide for itself. (That shouldn't even be all that difficult to write
for someone who rewrites the memory management in one day during a so
called code freeze.)

> As a simple example, a 'disassociated press' program I worte
> allocates space for 10,000,000 word nodes at about 16 bytes apiece. This
> program would fail on any system with less than 160M of virtual memory if
> all of the memory was really allocated immediately.

Guess what it does on any system with reliable virtual memory. Guess
what it does when you use more word nodes than there is memory for on
your Linux box.

> If you really care, you can always read /proc/meminfo and see how
> much memory is available.

No you can't. 1) The OS might not allow you to use all that memory, and
duplicating memory allocation in every application so that it can check
it properly is rather stupid. 2) During the time between the check and
the allocation, the situation might change radically; e.g., some other
application might have allocated memory. 3) The free memory might be
a lie, e.g., the OS might automatically allocate more swap if there is
some free disk space.

--
Lars.Wi...@helsinki.fi (finger wirz...@klaava.helsinki.fi)
Publib version 0.4: ftp://ftp.cs.helsinki.fi/pub/Software/Local/Publib/

Richard L. Goerwitz

Feb 14, 1995, 12:27:31 AM
In article <3hl9rn$t...@klaava.Helsinki.FI>, Lars Wirzenius <wirz...@cc.Helsinki.FI> wrote:
>
>This `lazy allocation' feature of Linux, and Linus's boneheadedness
>about it, is about the only reason why I'm still not sure he isn't a
>creature from outer space (oops, I'm going to be hit by a Koosh ball
>the next time Linus comes to work :-).

Geez, I'd hit you with more than that if you were my co-worker.
Boneheadedness?

--

Richard L. Goerwitz *** go...@midway.uchicago.edu

Peter Funk

Feb 14, 1995, 1:59:19 AM
In <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI (Lars Wirzenius) writes:
> [...] The lazy allocation is done, as

> far as I can remember from earlier discussions, to avoid a fork+exec
> from requiring, even temporarily, twice the amount of virtual memory,
> which would be expensive for, say, Emacs. For this gain we sacrifice
> reliability; not a very good sacrifice, in my opinion.

Wouldn't a 'vfork' solve this problem ? What's wrong with 'vfork' ?

Regards, Peter
-=-=-
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany
office: +49 421 2041921 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)

Ralf Schwedler

Feb 16, 1995, 4:30:17 AM

In article <1995Feb10.0...@imec.be>, buyt...@imec.be (Steven Buytaert) writes:
mnij...@et.tudelft.nl wrote:

: As I was writing my program, I noticed an oddity (=bug?).
: It's probably best explained by a simple program:

: for(i=0;i<10000;i++) {
: p[i]=malloc(4096)
: if (p[i]==NULL) {
: fprintf(stderr,"Out of memory\n");
: exit(1);
: }
: }
: for(i=0;i<10000;i++)
: *(p[i])=1;

: As you can see the first stage tries to allocate 40Mb of memory. Since
: I don't have that kind of memory it should fail ofcourse. To my
: surprise it didn't. (!)
: Well then, the second stage tries to access the 40Mb. [...]

I have read just about all of this thread. I think I understand the (mainly
efficiency-oriented) arguments which support this behaviour. It's
probably not useful to discuss changing this behaviour, as some software
may rely on it.

Anyhow, from the point of view of an application programmer, I consider
the way malloc is realized absolutely dangerous. I want to be able to
handle error conditions as close as possible to the point of their
origin. The definition of malloc is 'allocate memory', not
'intend to allocate memory'. I want to decide myself how to handle
memory overflow conditions; from that point of view I cannot accept
any program abort not controlled by my application. All hints given
so far (e.g. using some technique to find the amount of free memory)
are useless (if I understood it well, even calloc will abort in situations
where the memory is not available; please stop reading here if this is not
the case). Such methods would rely on friendly behaviour of the other apps
running, which is not acceptable in a multitasking environment.

My question:

Is there a version of malloc available for Linux which guarantees
allocation of memory, or returns NULL (this is the functionality
which I consider safest for programming)? Maybe -libnmalloc?
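
(Failing that, I would settle for a trivial wrapper like the one below; it
relies on the claim made earlier in this thread that calloc really touches
the pages, which I have not verified, and the name is of course made up:)

#include <stdlib.h>

/* "honest" malloc: let calloc's zeroing force the pages to exist, so that
   failure (if it is reported at all) happens here and not much later */
void *checked_malloc(size_t size)
{
    return calloc(1, size);
}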

Thanks,

Ralf

--
#####################################################################
Dipl.-Phys. Ralf Schwedler Tel. +49-241-80-7908
Institut fuer Halbleitertechnik II Fax. +49-241-8888-246
Sommerfeldstrasse 24 ra...@fred.basl.rwth-aachen.de
D-52074 Aachen

Lars Wirzenius

Feb 19, 1995, 11:33:16 AM
go...@midway.uchicago.edu writes:
> Geez, I'd hit you with more than that if you were my co-worker.
> Boneheadedness?

As it happens, Linus seems to have missed my article altogether. I haven't
been hit by anything yet. :-)

Lars Wirzenius

Feb 19, 1995, 11:37:37 AM
p...@artcom0.north.de (Peter Funk) writes:
> Wouldn't a 'vfork' solve this problem ? What's wrong with 'vfork' ?

The problem with vfork is that it doesn't solve the problem for
programs that don't use it; many programs don't. Its semantics are
also stupid (although necessary). The same speed can be achieved with
copy-on-write and other memory management trickery.

Alan Cox

Feb 21, 1995, 1:41:40 PM
In article <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI (Lars Wirzenius) writes:
>situations intelligently. Instant Microsoftware. Instant trashing systems.
>Instant "Linux is unreliable, let's buy SCO". Instant end of the univ...,
>er, forget that one, but it's not a good idea anyway.

Tried SCO with any resource limits on the problem.

>There's more to writing good software than getting it through the
>compiler. Error handling is one of them, and Linux makes it impossible
>to handle low memory conditions properly. Score -1 big design misfeature
>for Linus.

Scientists like it that way, other people should read the limit/rusage
man pages.

Alan


--
..-----------,,----------------------------,,----------------------------,,
// Alan Cox // iia...@www.linux.org.uk // GW4PTS@GB7SWN.#45.GBR.EU //
``----------'`--[Anti Kibozing Signature]-'`----------------------------''
One two three: Kibo, Lawyer, Refugee :: Green card, Compaq come read me...

Marty Galyean

Feb 21, 1995, 12:48:48 PM
Lars Wirzenius (wirz...@cc.Helsinki.FI) wrote:

After reading this thread it seems there are two views at work...
the first says that a program should either get the memory it wants
guaranteed, or be told it can't... while the other view is that the
previous view is too inefficient and that a program should rely on
swapping on demand to handle faults and just not worry about
a real situation of no memory, swap or otherwise, available.

Neither of these seems very satisfying for all the reasons discussed
previously in the thread.

However, I kind of like the way Linux works. Here's why... People are fond
of presenting the fact that in a multitasking env memory that was avail a
moment before may not be there a moment later. But guys, the opposite is
also true...memory that did not appear available a moment before might be
*freed* a moment later, and thus be available...OS's are becoming
sophisticated enough that you just can't plan everything out
deterministically...your program has to go with the flow and adjust.

I also agree (with a previous post) that signals to indicate system load,
swap frequency, etc. would be nice... and integral to any program that does
'go with the flow'...
It would be nice if your program could just take a look around, see that
it's just too hard to get anything useful done, and stop with appropriate
messages... perhaps with the option of resuming where it left off later
automatically. This could be done just by looking at the system time
once in a while to measure lag... it doesn't really need OS support...
this would be gambling, of course.

I don't like the idea that if my program didn't look quick enough or
guessed wrong it could fail ungracefully when swap space ran out. It does
not seem right...new signals could make this a little easier, but the
unavoidable fact is that you can never guarantee you have access to 'your'
memory... kind of like reservations on the airlines... I can't see
either of these as ever being 'easy-to-error-handle' situations ;-)
Things like this keep things interesting though.

Marty
gal...@madnix.uucp

Doug DeJulio

Feb 22, 1995, 5:05:10 PM
In article <1995Feb21.1...@madnix.uucp>,

Marty Galyean <gal...@madnix.uucp> wrote:
>After reading this thread it seems there are two views at work...
>the first says that a program should either get the memory it wants
>guaranteed, or be told it can't...while the other view is that the
>previous view is too inefficient and that a program should rely on
>swapping on demand to handle fault and just not worry about
>a real situation of no memory, swap or otherwise, available.

Either behavior should be available. Both functionalities should be
present.

Any function defined by POSIX should conform exactly to the behavior
POSIX specifies. This is very important. We can't claim Linux is a
POSIX OS if it openly violates standards on purpose.

If the standard does not specify the exact way "malloc()" is supposed
to perform, then no POSIX-compliant C program can depend on either
behavior. You've got to write all your programs assuming either
behavior could occur, or they're not portable.

Any functionality not offered within the POSIX standard should be done
via extensions of some sort.

If you disagree with any of these assertions besides the first one
(that both behaviors should be present), you're basically saying that
it's not important that Linux attempt to conform to the POSIX
standard.

So, what *does* the POSIX standard say about the behavior of malloc()?
--
Doug DeJulio | R$+@$=W <-- sendmail.cf file
mailto:dd...@pitt.edu | {$/{{.+ <-- modem noise
http://www.pitt.edu/~ddj/ | !@#!@@! <-- Mr. Dithers swearing

Bruce Thompson

Feb 24, 1995, 2:27:58 AM
In article <3i7rsc$e...@kruuna.helsinki.fi>, wirz...@cc.Helsinki.FI (Lars
Wirzenius) wrote:

I hate to say it, but I agree. Sorry Linus, but it's boneheadedness if you
consider this a feature.

A bit of history. I used to work on Apollo workstations. The Apollo had a
novel way of allocating swap space: it essentially created temporary files
that were automatically deleted on close. Under Aegis (the O/S) version
9.2, memory allocation was handled as Linux handles it now. Hardly
surprising that we had all kinds of untraceable errors when the disk
filled up. As of 9.5, backing store was _always_ allocated when a process'
brk value was increased. This allowed malloc to correctly return NULL when
no more memory was available.

The problem with the disk filling up still remained, but at least
processes could handle an out-of-memory condition gracefully.

I can understand Linus' reluctance to create a situation where a fork+exec
from emacs requires the duplication of megs of data-segment which will be
released immediately when the exec occurs. I see a few possibilities here.
The first idea that springs to mind is simply to _do_ just that. If a
process needs to do a fast fork+exec, there's always vfork. That's its
intended purpose. The second, and perhaps preferred, solution (but more
work) is to simply clone the page tables and mark both sets
"copy-on-write". Then when either process attempts to write to the page,
the page is cloned before the write is allowed. This is the method used in
SCO, and it's one of the few things that I think they've done right. The
difference between vfork and fork becomes minimal. The overhead of this
method is only a copy of the page tables, and some extra page-faults until
the working sets are actually copied.

On the 486, this could be easily implemented by protecting all the pages
and using one of the available bits in the page table entries to indicate
"copy-on-write". There may be some additional overhead, but I highly doubt
it's going to be all that bad.

It's absolutely critical that processes see that they've run out of memory
in a controlled manner. There are two defined ways of doing this. The
first is when sbrk is called, it's defined to return -1. The second method
is malloc returning NULL. I'd like to echo the opinions of others who've
said that any program that doesn't check the return values of malloc,
sbrk, new (C++) or _ANY_ library or system call _IS_BROKEN_. There's
frankly no excuse for not checking for errors. I freely admit that I don't
check the results as often as I should, but that doesn't excuse me. If my
programs fail because I'm not checking correctly the fault is purely my
own.

IMNSHO the arguments against changing the memory behavior (that you can
malloc 10M, use only a tiny bit, and get away with it) are _not_ valid
arguments. As Lars pointed out, sparse matrices can be written in other
ways, and indeed should be. When malloc, or rather, when sbrk returns a
pointer to you, the system is telling you "that memory is yours." Another
way of putting it is that the system has made a commitment to you that you
can access the memory that you requested. The current kernel behavior
_violates_ that commitment.

Please, Linus (and/or other kernel hackers) let's fix this! Given the
current push for 1.2, let's at least commit to addressing this (pardon the
pun) during 1.3. Writing reliable software is difficult enough without
adding needless sources of potential error.

Cheers,
Bruce.

--
--------------------------------------------------------------------
Bruce Thompson | "Never put off till tomorrow
PIE Developer Information Group | what you can put off till next
Apple Computer Inc. | week".
AppleLink: bthompson | -- Unknown
Internet: br...@newton.apple.com

Bruce Thompson

Feb 24, 1995, 2:38:57 AM
I really hate it when I have to follow-up my own posting. Damn. Everyone
together: Bruce, RTFM!


In article <bruce-23029...@17.255.39.192>, br...@newton.apple.com
(Bruce Thompson) wrote:

[ ... ]

> I can understand Linus' reluctance to create a situation where a fork+exec
> from emacs requires the duplication of megs of data-segment which will be
> released immediately when the exec occurs. I see a few possibilities here.
> The first idea that springs to mind to simply to _do_ just that. If a
> process needs to do a fast fork+exec, there's always vfork. That's it's
> intended purpose. The second, and perhaps preferred solution (but more
> work) is to simply clone the page tables and mark both sets
> "copy-on-write". Then when either process attempts to write to the page,
> the page is cloned before the write is allowed. This is the method used in
> SCO, and it's one of the few things that I think they've done right. The
> difference between vfork and fork becomes minimal. The overhead of this
> method is only a copy of the page tables, and some extra page-faults until
> the working sets are actually copied.
>
> On the 486, this could be easily implemented by protecting all the pages
> and using one of the available bits in the page table entries to indicate
> "copy-on-write". There may be some additional overhead, but I highly doubt
> it's going to be all that bad.

I just now read the fork(2) manpage, and it claims that Linux already uses
copy-on-write. Given that, can someone please tell me the justification
for not allocating page-frames when sbrk (malloc) is called? The only
possible justification I had been able to come up with doesn't actually
exist.

Thierry Bousch

Feb 24, 1995, 8:01:11 AM
Doug DeJulio (dd...@pitt.edu) wrote:

: So, what *does* the POSIX standard say about the behavior of malloc()?

Nothing. The malloc() function doesn't belong to the POSIX standard.
(It conforms to ANSI C).

The problem, unfortunately, is not only with malloc(). On most Unix systems,
the stack is automatically expanded when needed; therefore, any procedure
call is an implicit memory allocation; if it fails, how are you going to
report the error to the user? There is no way to handle this kind of
error gracefully; you have to suspend or kill the process.

Note also that if you really run out of virtual memory, the system is
probably already paging like hell, and you won't be able to do anything
useful on it; it's not very different from a frozen system, and you'll
probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
won't respond (in a reasonable time, that is).

Thierry.

Doug DeJulio

Feb 24, 1995, 7:10:15 PM
In article <3iklan$2...@linotte.republique.fr>,

Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>Doug DeJulio (dd...@pitt.edu) wrote:
>
>: So, what *does* the POSIX standard say about the behavior of malloc()?
>
>Nothing. The malloc() function doesn't belong to the POSIX standard.
>(It conforms to ANSI C).

What does ANSI C say about the behavior of malloc() then?

Doug DeJulio

Feb 24, 1995, 7:12:08 PM
In article <3iklan$2...@linotte.republique.fr>,
Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>Note also that if you really run out of virtual memory, the system is
>probably already paging like hell, and you won't be able to do anything
>useful on it; it's not very different from a freezed system, and you'll
>probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
>won't respond (in a reasonable time, that is).

But running out of virtual memory isn't the only reason malloc() could
fail. What about the per-process memory limit (as in "ulimit -a")?
What happens with that under Linux right now? A *process* can run out
of available memory even before the system starts paging.

Greg Comeau

Feb 24, 1995, 7:24:39 PM
In article <3ilsh7$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>In article <3iklan$2...@linotte.republique.fr>,
>Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>>Doug DeJulio (dd...@pitt.edu) wrote:
>>
>>: So, what *does* the POSIX standard say about the behavior of malloc()?
>>
>>Nothing. The malloc() function doesn't belong to the POSIX standard.
>>(It conforms to ANSI C).
>
>What does ANSI C say about the behavior of malloc() then?

Very little. But I don't see the beginning of this thread:
What part of malloc()'s behavior are you interested in?

- Greg
--
Comeau Computing, 91-34 120th Street, Richmond Hill, NY, 11418-3214
Here:com...@csanta.attmail.com / BIX:comeau or com...@bix.com / CIS:72331,3421
Voice:718-945-0009 / Fax:718-441-2310 / Prodigy: tshp50a / WELL: comeau

Doug DeJulio

Feb 24, 1995, 9:04:01 PM
In article <3iltc7$l...@panix.com>,

Greg Comeau <com...@csanta.attmail.com> wrote:
>In article <3ilsh7$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>>In article <3iklan$2...@linotte.republique.fr>,
>>Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>>>Doug DeJulio (dd...@pitt.edu) wrote:
>>>
>>>: So, what *does* the POSIX standard say about the behavior of malloc()?
>>>
>>>Nothing. The malloc() function doesn't belong to the POSIX standard.
>>>(It conforms to ANSI C).
>>
>>What does ANSI C say about the behavior of malloc() then?
>
>Very little. But I don't see the beginning of this thread:
>What part of malloc()s behavior are you interested in?

Well, as I understand it, Linux's malloc() will basically always
succeed, even if there's much less virtual memory available than you
requested. It's only when you actually try to *use* the memory you've
been allocated that you get a problem. Apparently, the page isn't
actually allocated until it's touched.

The traditional Unix approach has been to have malloc() fail when you
try to allocate too much memory, so the application knows ahead of
time that it's not going to have the memory it wants.

I'm trying to figure out if both behaviors are compliant with relevant
standards. If so, portable software must be written assuming either
behavior could occur. If, on the other hand, Linux violates a
standard, that's good ammunition to use when lobbying for a change in
Linux's behavior.

I don't really care which behavior Linux uses, AS LONG AS it exactly
conforms to the written (not de-facto) standards.

S. Lee

Feb 25, 1995, 1:39:32 AM
In article <3ilsh7$a...@usenet.srv.cis.pitt.edu>,

Doug DeJulio <dd...@pitt.edu> wrote:
>
>What does ANSI C say about the behavior of malloc() then?

7.10.3 Memory management functions

The order and contiguity of storage allocated by successive calls
to the calloc, malloc, and realloc functions is unspecified. The pointer
returned if the allocation succeeds is suitably aligned.... If the space
cannot be allocated, a null pointer is returned....

Stephen
--
sl...@cornell.edu
Witty .sig under construction.

Bruce Thompson

Feb 25, 1995, 12:33:31 PM

It would, but in private discussions, someone (sorry, I can't remember
who) pointed out that vfork was developed originally to get around bugs in
the Copy-on-write implementation on VAXes. The Linux kernel apparently
already does copy-on-write on forks, so the difference between fork and
vfork is now irrelevant.

Either way, I can't see that there's a _valid_ reason for keeping the
behavior. I hate to beat a dead horse, but I have to. The job of the
kernel is to manage the resources of the machine. By allowing processes to
think they've received more memory than they actually have, the kernel is
abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure
I've mentioned it before, but it seems to me that a swap page could be
allocated (not written, just allocated) when pages are allocated to a
process. This would allow the kind of performance in the face of large
allocations that people may have come to expect. It would still ensure
that when the kernel told a process "here's a page" there actually _was_ a
page for that process. This last item is the whole point. Again, IMNSHO,
the kernel should never _EVER_ allocate resources it doesn't have.

Cheers,
Bruce.

--
--------------------------------------------------------------------
Bruce Thompson | "Never put off till tomorrow what
PIE Developer Information Group | you can comfortably put off till
Apple Computer Inc. | next week."
| -- Unknown
Usual Disclaimers Apply |

Damjan Lango

Feb 27, 1995, 3:20:55 PM
Bruce Thompson (br...@newton.apple.com) wrote:

: In article <57...@artcom0.north.de>, p...@artcom0.north.de (Peter Funk) wrote:

: > In <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI (Lars
: Wirzenius) writes:
: > [...] The lazy allocation is done, as
: > > far as I can remember from earlier discussions, to avoid a fork+exec
: > > from requiring, even temporarily, twice the amount of virtual memory,
: > > which would be expensive for, say, Emacs. For this gain we sacrifice
: > > reliability; not a very good sacrifice, in my opinion.
: >
: > Wouldn't a 'vfork' solve this problem ? What's wrong with 'vfork' ?

: It would, but in private discussions, someone (sorry, I can't remember


: who) pointed out that vfork was developed originally to get around bugs in
: the Copy-on-write implementation on VAXes. The Linux kernel apparently
: already does copy-on-write on forks, so the difference between fork and
: vfork is now irrelevant.

: Either way, I can't see that there's a _valid_ reason for keeping the
: behavior. I hate to beat a dead horse, but I have to. The job of the
: kernel is to manage the resources of the machine. By allowing processes to
: think they've received more memory than they actual have, the kernel is
: abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure
: I've mentioned it before, but it seems to me that a swap page could be
: allocated (not written, just allocated) when pages are allocated to a
: process. This would allow the kind of performance in the face of large
: allocations that people may have come to expect. It would still ensure
: that when the kernel told a process "here's a page" there actually _was_ a
: page for that process. This last item is the whole point. Again, IMNSHO,
: the kernel should never _EVER_ allocate resources it doesn't have.

: Cheers,
: Bruce.

Absolutely agree!
And I can't understand how this malloc bug made it this far, up to 1.1.x.
It *must* be fixed before 1.2!!!
Even all those shitty OSes like dog Windoze and NT do this the right way...
(well, OK, dog Windoze doesn't have virtual memory, but NT does)
I would really like to see this fixed NOW, or people will start saying,
hey, this Linux sux, it can't even do memory allocation right!

Maybe I should give an example of how it is done under NT if you want
this kind of behavior from malloc, but controlled, of course!
malloc is still malloc, but there is an additional VirtualAlloc.
I am not trying to say that there should be exactly a VirtualAlloc, but
the current malloc should at least be renamed to something like
hazard_malloc_with_hope and a new bug-free malloc written!

Well here is an example of NT VirtualAlloc for a very large bitmap
that has only a few pixels set:

BTW shouldn't we move this to comp.os.linux.development.system?

---8<---

#include <windows.h>
#include <assert.h>

#define PAGESIZE 4096
#define PAGELIMIT 100

class Bitmap{
private:
    BYTE *lpBits;
    BYTE *pages[PAGELIMIT];
    WORD width,height;
    WORD page;
public:
    Bitmap(WORD width,WORD height);
    ~Bitmap();

    void setPixel(WORD x,WORD y,BYTE c);
    void resetPixel(WORD x,WORD y);
    BYTE getPixel(WORD x,WORD y);
};

Bitmap::Bitmap(WORD w,WORD h){
    page=0;
    width=w;
    height=h;
    /* reserve address space only; no storage is committed yet */
    lpBits=(BYTE *)VirtualAlloc(NULL,          // start
                                (DWORD)w*h,    // size
                                MEM_RESERVE, PAGE_NOACCESS);
    assert(lpBits);
}

Bitmap::~Bitmap(){
    for(int i=0;i<page;i++) VirtualFree(pages[i],PAGESIZE,MEM_DECOMMIT);
    VirtualFree(lpBits,0,MEM_RELEASE);
}

void Bitmap::setPixel(WORD x,WORD y,BYTE c){
    __try{
        lpBits[y*width+x]=c;
    }
    __except(EXCEPTION_EXECUTE_HANDLER){
        /* first touch of this page: commit it, then retry the write */
        assert(page<PAGELIMIT);
        pages[page]=(BYTE *)VirtualAlloc(
            lpBits+((y*width+x)/PAGESIZE)*PAGESIZE, // start of the page
            PAGESIZE,                               // size
            MEM_COMMIT, PAGE_READWRITE);
        assert(pages[page]);
        page++;
        lpBits[y*width+x]=c;
    }
}

void Bitmap::resetPixel(WORD x,WORD y){
    __try{
        lpBits[y*width+x]=0;
    }
    __except(EXCEPTION_EXECUTE_HANDLER){
        /* page never committed, so the pixel is already 0 */
    }
}

BYTE Bitmap::getPixel(WORD x,WORD y){
    BYTE bit;

    __try{
        bit=lpBits[y*width+x];
    }
    __except(EXCEPTION_EXECUTE_HANDLER){
        bit=0;
    }
    return bit;
}


void main(void){
    Bitmap &bmp=*new Bitmap(10000,10000);
    bmp.setPixel(0,0,1);
    bmp.setPixel(5000,5000,1);
    bmp.setPixel(9999,9999,1);
    delete &bmp;
}

---8<---


bye
Damjan Lango

Hannes Reinecke

Feb 28, 1995, 7:57:26 AM
>>>>> "Ralf" == Ralf Schwedler <ra...@fred.basl.rwth-aachen.de> writes:

Ralf> In article <1995Feb10.0...@imec.be>,
Ralf> buyt...@imec.be (Steven Buytaert) writes:

[ malloc-prg deleted ]

Ralf> Anyhow, from the point of view of an application programmer,
Ralf> I consider the way malloc is realized absolutely
Ralf> dangerous. I want to be able to handle error conditions as
Ralf> close as possible to the point of their origin. The
Ralf> definition of malloc is 'allocate memory', not 'intend to
Ralf> allocate memory'.

Hmm. Having read this, I wondered whether you have heard about virtual
memory. _Every_ process has access to a so-called virtual memory
segment, which under Linux (i386) has a size of 3 GB
(cf. <asm/processor.h>). So, if you malloc() normally, you will get (in the
best case) this amount (unless the system crashes :-).
The amount of installed physical memory is merely a matter of speed.
Ralf> I want to decide myself how to handle
Ralf> memory overflow conditions; from that point of view I cannot
Ralf> accept any program abort not controlled by my
Ralf> application.

In normal conditions, you are in fact the only one responsible for
out-of-memory cases created by your program; as far as the system is
concerned, it will simply refuse to give you any memory (i.e. malloc and
friends will return NULL).

Ralf> All hints given so far (e.g. using some
Ralf> technique to find the amount of free memory) are useless (If
Ralf> I understood it well, even calloc will abort in situations
Ralf> where the memory is not available; please stop reading here
Ralf> if this is not the case). Such methods would rely on
Ralf> friendly behaviour of other apps running; which is not
Ralf> acceptable in a multitasking environment.

Really ?

Have fun

Hannes
-------
Hannes Reinecke |
<ha...@vogon.mathi.uni-heidelberg.de> | XVII.: WHAT ?
|
PGP fingerprint available | T.Pratchett: Small Gods
see 'finger' for details |

Vivek Kalra

Feb 28, 1995, 10:38:14 AM
In article <3iklan$2...@linotte.republique.fr>,
Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>
>Note also that if you really run out of virtual memory, the system is
>probably already paging like hell, and you won't be able to do anything
>useful on it; it's not very different from a frozen system, and you'll
>probably have to hit the Big Red Button anyway because even Ctrl-Alt-Del
>won't respond (in a reasonable time, that is).
>
Okay, let's see: I have a machine with 8M of RAM and 12M of swap.
At this given moment, I have, say, 8 of those megs available. So I
run this super-duper image-processing program I have -- it checks
the current input size and determines that it needs 16M of memory
to do its thing on this input. So it malloc()s 16M and finds that
everything is fine and starts its thing, runs for three hours, and,
err, ooops, runs out of memory. Now, if malloc() had failed
earlier, I wouldn't have had to wait for three hours to find that
out, would I? Presumably, the program would have just told me at
the very beginning that not enough memory was available to do its
thing on the current input. And, no, the system before running
this program need not have been paging like hell, as you put it -- there was 8M of
memory available, remember?
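
To make it concrete, here is roughly the check I want to be meaningful
(a sketch, untested, with the sizes made up to match the example above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t need = 16UL * 1024 * 1024;    /* the 16M from the example */
    char *buf = malloc(need);

    if (buf == NULL) {                   /* the failure I want to see NOW */
        fprintf(stderr, "not enough memory, giving up cleanly\n");
        return 1;
    }
    /* With lazy allocation the real failure is deferred until the pages
       are first touched, possibly hours into the run. */
    memset(buf, 0, need);
    free(buf);
    return 0;
}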

Even worse, I might have a program that may already have modified
its input before finding out that it cannot finish its thing
because of lack of memory and so cannot write out the correct
output -- but the input is gone too. So now what?

The problems of not handling a NULL return from malloc() are well
known. To have a malloc() that might fail in a way that doesn't
give the programmer any chance to recover is just mind-boggling.

Vivek
--
Vivek email address signature
dsclmr: any ideas above, if there, are mine. All mine. And an illusion.
Oh, what a tangled web we weave, when first we practice to weave.
Quote for the day: '

Greg Comeau

unread,
Feb 28, 1995, 11:04:29 AM2/28/95
to
In article <3im36h$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>In article <3iltc7$l...@panix.com>,
>Greg Comeau <com...@csanta.attmail.com> wrote:
>>In article <3ilsh7$a...@usenet.srv.cis.pitt.edu> dd...@pitt.edu (Doug DeJulio) writes:
>>>In article <3iklan$2...@linotte.republique.fr>,
>>>Thierry Bousch <bousch%linott...@topo.math.u-psud.fr> wrote:
>>>>Doug DeJulio (dd...@pitt.edu) wrote:
>>>>
>>>>: So, what *does* the POSIX standard say about the behavior of malloc()?
>>>>
>>>>Nothing. The malloc() function doesn't belong to the POSIX standard.
>>>>(It conforms to ANSI C).
>>>
>>>What does ANSI C say about the behavior of malloc() then?
>>
>>Very little. But I don't see the beginning of this thread:
>>What part of malloc()s behavior are you interested in?
>
>Well, as I understand it, Linux's malloc() will basically always
>succeed, even if there's much less virtual memory available than you
>requested. It's only when you actually try to *use* the memory you've
>been allocated that you get a problem. Apparently, the page isn't
>actually allocated until it's touched.

Ok, ANSI malloc() doesn't actually say how the memory is obtained and stuff
like that, since it's OS/environment specific. This sounds like perhaps
an RTL responsibility, then. That is, perhaps it should optimally
figure out how to touch the page, because the space for the object does
need to be available (in my interpretation) before the object returned
is used. If not, the null pointer is returned. As I recall, there is
some fuzziness about exactly what is meant by things like "object"
and "allocate", but IMO they do not interfere here. I'd post the
actual words but cannot find any of my copies of the standard at the
moment.

>The traditional Unix approach has been to have malloc() fail when you
>try to allocate too much memory, so the application knows ahead of
>time that it's not going to have the memory it wants.

Yes (well, not UNIX per se but the RTL). And this is what I believe the
standard says.

> I'm trying to figure out if both behaviors are compliant with relevant
>standards. If so, portable software must be written assuming either
>behavior could occur. If, on the other hand, Linux violates a
>standard, that's good ammunition to use when lobbying for a change in
>Linux's behavior.

> I don't really care which behavior Linux uses, AS LONG AS it exactly
>conforms to the written (not de-facto) standards.

If this is behaving as you describe it, I believe it violates the standard.
Even despite the fuzziness, the commitment comes at the call,
because at that point either you have a "returned object" or not.
If not, there is no category of invalid pointer or invalid object here.

Ian McCloghrie

unread,
Feb 28, 1995, 12:39:39 PM2/28/95
to
lan...@ana.fer.uni-lj.si (Damjan Lango) writes:

>Bruce Thompson (br...@newton.apple.com) wrote:
>: In article <57...@artcom0.north.de>, p...@artcom0.north.de (Peter Funk) wrote:

>: Either way, I can't see that there's a _valid_ reason for keeping the
>: behavior. I hate to beat a dead horse, but I have to. The job of the
>: kernel is to manage the resources of the machine. By allowing processes to
>: think they've received more memory than they actually have, the kernel is
>: abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure

If I'm not mistaken, SVR4 has the same behaviour as Linux in this
respect. I've not tested it empirically, but I spent a little time
looking at the SVR4 vmem sources (secondary to a different project in
the fs) about a year ago, and that seemed to be what it was doing.
So it's not unheard-of behaviour.

--
Ian McCloghrie work: ia...@qualcomm.com home: i...@egbt.org
____ GCS d-- H- s+:+ !g p?+ au a- w+ v- C+++$ UL++++ US++$ P+>++
\bi/ L+++ 3 E+ N++ K--- !W--- M-- V-- -po+ Y+ t+ 5+++ jx R G''''
\/ tv- b+++ D- B--- e- u* h- f+ r n+ y*

The above represents my personal opinions and not necessarily those
of my employer, Qualcomm Inc.

Ivica Rogina

unread,
Feb 28, 1995, 2:35:18 PM2/28/95
to

ha...@mathi.uni-heidelberg.de (Hannes Reinecke) wrote:

> Hmm. Having read this, I wondered whether you have heard about virtual
> memory. _Every_ process has access to a so-called virtual memory
> segment, which under Linux (i386) has a size of 3 GB
> (cf. <asm/processor.h>). So, if you malloc() normally, you can get (in
> the best case) up to this amount (unless the system crashes :-).

This is not a matter of virtual memory. If I do a malloc(), I don't care
what the size of the RAM or the swap space or the virtual memory is.
Whatever it is, I want to be sure that I can use all the memory that was
assigned to me without having to wait for the sysop to push in another
couple-of-gigs-disc.
And, I don't want any user to be able to bring the entire system to a halt
by simply allocating a lot of memory.

Ivica

Thierry EXCOFFIER

unread,
Mar 1, 1995, 1:22:58 PM3/1/95
to
In article <bruce-25029...@17.205.4.52>, br...@newton.apple.com (Bruce Thompson) writes:

|> Either way, I can't see that there's a _valid_ reason for keeping the
|> behavior. I hate to beat a dead horse, but I have to. The job of the
|> kernel is to manage the resources of the machine. By allowing processes to
|> think they've received more memory than they actually have, the kernel is
|> abdicating that responsibility. IMNSHO this is a Bad Thing(tm). I'm sure

A few months ago, removing the current behaviour of "malloc" was discussed.
I remember somebody saying:

Allocation of a big chunk of memory (possibly greater than virtual memory)
is useful to avoid copy after copy after copy into successively larger tables.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *read_big_line(void)   /* the name is arbitrary */
{
    char *t ;

    t = malloc(10000000) ;                /* one huge buffer up front */
    if (t == NULL || fgets(t, 10000000, stdin) == NULL) {
        free(t) ;                         /* free(NULL) is allowed */
        return NULL ;
    }
    t = realloc( t, strlen(t)+1 ) ;       /* shrink to what was actually read */

    return(t) ;
}

This function reads input of unknown length with a minimum number of copies.

Just to add my 2 centimes

Thierry.
--
If you are a UNIX user, type the following 2 lines to see my signature :
/bin/sh -c 'for I in xmosaic Mosaic midasWWW tkWWW chimera lynx perlWWW ; do
$I http://www710.univ-lyon1.fr/%7Eexco/ && break ; done'

Vivek Kalra

unread,
Mar 1, 1995, 2:20:51 PM3/1/95
to
In article <HARE.95Fe...@mathi.uni-heidelberg.de>,

Hannes Reinecke <ha...@mathi.uni-heidelberg.de> wrote:
>
>Ralf> In article <1995Feb10.0...@imec.be>,
>Ralf> buyt...@imec.be (Steven Buytaert) writes:
>
>Ralf> Anyhow, from the point of view of an application programmer,
>Ralf> I consider the way malloc is realized absolutely
>Ralf> dangerous. I want to be able to handle error conditions as
>Ralf> close as possible to the point of their origin. The
>Ralf> definition of malloc is 'allocate memory', not 'intend to
>Ralf> allocate memory'.
>
>Hmm. Having read this, I wondered whether you have heard about virtual
>memory. _Every_ process has access to a so-called virtual memory
>segment, which under Linux (i386) has a size of 3 GB
>(cf. <asm/processor.h>). So, if you malloc() normally, you can get (in
>the best case) up to this amount (unless the system crashes :-).
>The amount of installed physical memory is merely a matter of speed.
>
Hot damn! And I thought I was going to have to buy more memory for
my machine! So, let's see, I have 12Megs of RAM and 24Megs of swap
and they add up to 3GB of virtual memory? Where does the
difference come from? Microsoft?

>Ralf> I want to decide myself how to handle
>Ralf> memory overflow conditions; from that point of view I cannot
>Ralf> accept any program abort not controlled by my
>Ralf> application.
>
>Under normal conditions, you are in fact the only one responsible for
>out-of-memory cases created by your program; as far as the system is
>concerned, it will simply refuse to give you any more memory (i.e. malloc
>and friends will return NULL).
>

Huh? Clearly, *some* one here has *heard* about virtual memory.
I'd like to know just what that was...

Ivica Rogina

unread,
Mar 3, 1995, 1:01:34 PM3/3/95
to

agu...@nvg.unit.no (Arnt Gulbrandsen) wrote:

> FYI, malloc isn't part of the kernel.

That's not the issue. malloc() relies on what the kernel has to say about
sbrk. I believe that malloc is implemented correctly; it will actually return
a NULL pointer if the kernel refuses to increase the data segment of a
process. The problem is that the kernel will allow the data segment to grow
without making sure that it can provide the granted resources.

> 1. Any process can run out of memory asynchronously, since it can
> run out of stack memory, and since root can shrink the ulimit (I'm
> not sure if this is implemented in linux yet).

Huh? What do you mean? Are you saying that root can take away allocated
memory from a running process? Never heard of that. Of course a vicious
root can even kill a process, but again, this is not the issue. All
Unices I've been working with (SunOS, HP-UX, OSF, Ultrix) except Linux
guarantee that a process can use the memory it was granted. I've never
heard of memory being taken away. I don't object to not getting requested
memory (no matter if stack or data), but I strongly object to not fulfilling
a promise, and I regard malloc (i.e. sbrk) as a promise; what else is it
good for, if I don't have to check its return value?

> 2. People running programs that need to store lots of data but not
> access it very often need virtual memory.

So what? The above mentioned Unices have virtual memory too, and they still
have a working malloc/sbrk.

> Therefore, there's a good chance that your extremely robust program
> would be paralysed by swapping long before a hypothetical "safe
> malloc" detected out-of-VM.

Are you sure you mean what you are saying? "Out-of-VM"? I don't want malloc
to tell me it's out of VM; I want it to tell me that it's out of available
memory (RAM+swap).

For me, malloc/sbrk is kind of a contract. The process is asking for memory, and
the kernel is granting that request. I don't want the kernel to say later:
"haha, April fool, I don't really have the memory that I've promised you".
That's really ridiculous. Name one program that takes advantage of the
Linux-style memory allocation and that can run on other Unices.

-- Ivica

Doug DeJulio

unread,
Mar 3, 1995, 6:55:55 PM3/3/95
to
In article <3j6fk8$5...@hydra.Helsinki.FI>,
Jussi Lahtinen <jmal...@cs.Helsinki.FI> wrote:

>In <D4pvF...@nntpa.cb.att.com> v...@rhea.cnet.att.com (Vivek Kalra) writes:
>
>>The problems of not handling a NULL return from malloc() are well
>>known. To have a malloc() that might fail in a way that doesn't
>>give the programmer any chance to recover is just mind-boggling.
>
>Malloc-checking is not enough. If you alloc all memory and then call
>a function which needs more stack space, your program will be dead and
>there is no way to prevent it.

If you alloc *all* memory, sure. But not if you alloc all *available*
memory.

You can have limits for mem and stack that are below the total VM of
your system (see sh's "ulimit" or csh's "limit"). You can set "mem"
and "stack" limits for a process. Use them.

Ian A. McCloghrie

unread,
Mar 3, 1995, 7:44:43 PM3/3/95
to
rog...@ira.uka.de (Ivica Rogina) writes:

>agu...@nvg.unit.no (Arnt Gulbrandsen) wrote:
>> 1. Any process can run out of memory asynchronously, since it can
>> run out of stack memory, and since root can shrink the ulimit (I'm
>> not sure if this is implemented in linux yet).
>Huh? What do you mean? Are you saying that root can take away allocated
>memory from a running process? Never heard of that. Of course a vicious

He's saying two things. One, when you make a function call,
saved registers and the return address need to be pushed onto the stack,
and local variables for the new function need to be allocated (which
are also done on the stack). It's quite possible for the growing
stack to need an extra page of virtual memory and not be able to get
it.

Second, under most unixes, you can set certain resource limits on a
per-process basis, such as coredumpsize, cputime, stacksize,
and datasize.

>root can even kill a process, but again, this is not the issue. All
>Unices I've been working with (Sun OS, HP-UX, OSF, Ultrix) except Linux
>guarantee that a process can use the memory it was granted. I've never

SVR4 uses a similar allocation policy, I believe. So if the SunOS
you're referring to is Solaris 2, then they don't all guarantee it.

>heard of memory being taken away. I don't object to not getting requested
>memory (no matter if stack or data), but I strongly object to not fulfilling
>a promise, and I regard malloc (i.e. sbrk) as a promise; what else is it
>good for, if I don't have to check its return value?

If a process can die for other uncatchable resource problems (such
as no memory left to grow the stack), then what does it matter if it
can die for malloc()'d memory not really being available?

>> Therefore, there's a good chance that your extremely robust program
>> would be paralysed by swapping long before a hypothetical "safe
>> malloc" detected out-of-VM.
>Are you sure you mean what you are saying? "Out-of-VM"? I don't want malloc
>to tell me it's out of VM; I want it to tell me that it's out of available
>memory (RAM+swap).

What do you think RAM+swap *is* if not VM?

>"haha, April fool, I don't really have the memory that I've promised you".
>That's really ridiculous. Name one program that takes advantage of the
>Linux-style memory allocation and that can run on other Unices.

Any program which allocates a large array and only uses scattered
parts of it (such as is often done in hash tables).
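
For instance (sizes invented, untested):

#include <stdio.h>
#include <stdlib.h>

#define SLOTS (16UL * 1024 * 1024)    /* 64M worth of ints, used sparsely */

int main(void)
{
    int *table = malloc(SLOTS * sizeof(int));

    if (table == NULL)
        return 1;
    /* Only the few pages actually written to ever need real memory;
       the rest of the table is just unused address space. */
    table[42] = 1;
    table[9999999] = 2;
    printf("%d %d\n", table[42], table[9999999]);
    free(table);
    return 0;
}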

IMHO, the whole question is silly anyway. Just make 32M of swap
(that's about $20 given today's disk prices), and your system
will swap itself into the ground well before you start getting
problems with allocated memory not really being there.

Ian A. McCloghrie

unread,
Mar 3, 1995, 7:46:33 PM3/3/95
to
dd...@pitt.edu (Doug DeJulio) writes:
>You can have limits for mem and stack that are below the total VM of
>your system (see sh's "ulimit" or csh's "limit"). You can set "mem"
>and "stack" limits for a process. Use them.

Ummm... limiting your program's stack size to 1M (say) doesn't help a
lot if there's another user running emacs and xv who just ate up
all of the ram except for 500K.

S. Lee

unread,
Mar 3, 1995, 9:30:45 PM3/3/95
to
In article <EWERLID.95...@frej.teknikum.uu.se>,
Ove Ewerlid <ewe...@frej.teknikum.uu.se> wrote:
>I've read this thread and I cannot see the problem wrt linux!
>If you attempt to allocate more than the available 'real' amount of
>memory malloc WILL return 0.

I have 16MB RAM+20MB Swap but I can malloc() two 20M arrays without
malloc() returning 0. Guess what would happen if I start filling them?

>If the system malloc (in libc) was changed to
>prefill the allocated memory with all zeroes, then no process would be able to
>"cheat" away pages. As the linux-libc is available in source, changing
>libc is trivial.

Filling the pages is inefficient. The kernel should be changed to reserve
pages for processes that called malloc(). Pages once reserved for a
process should not be assigned to another.

Stephen

Clark Cooper

unread,
Mar 3, 1995, 2:01:34 PM3/3/95
to

>There are two points which fascist-malloc proponents ought to
>consider very carefully:
> ...

>2. People running programs that need to store lots of data but not
>access it very often need virtual memory.

There seems to be a misunderstanding here that I've seen expressed in other
articles. Sbrk (which *is* a system call serviced by the kernel and upon
which malloc is built) returns its error value when *virtual* memory is
exhausted (also when limits are exceeded). It has nothing to do with
exhaustion of *physical* memory. Virtual memory wouldn't be very useful
if it did.
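
At the lowest level that early failure looks like this (just to show where
the error value comes back; nobody should call sbrk directly in a real
program):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    void *old_brk = sbrk(16 * 1024 * 1024);    /* ask for 16M more */

    if (old_brk == (void *)-1) {
        perror("sbrk");    /* virtual memory (or a limit) is exhausted */
        return 1;
    }
    return 0;
}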

We malloc "fascists" simply want the kernel to tell us as early as possible
(when we request memory [which on a VM system is *virtual* memory]) that
it can't provide the resources.

We don't want to take away your virtual memory, honest.
--
--
Clark Cooper GE Industrial & Power Systems coop...@dsos00.sch.ge.com
(518) 385-8380 ASPL-2 Project, Bldg 273, Rm 3090
1 River Rd, Schenectady NY 12345

Ove Ewerlid

unread,
Mar 3, 1995, 8:38:36 PM3/3/95
to
In article <D4D59...@info.swan.ac.uk> iia...@iifeak.swan.ac.uk (Alan Cox) writes:

Alan Cox writes:
>In article <3hl9rn$t...@klaava.Helsinki.FI> wirz...@cc.Helsinki.FI
>>There's more to writing good software than getting it through the
>>compiler. Error handling is one of them, and Linux makes it impossible
>>to handle low memory conditions properly. Score -1 big design misfeature
>>for Linus.
>
> Scientists like it that way, other people should read the limit/rusage
> man pages.

Seconded!

If I were writing an application where I needed to know, NOW,
whether the memory allocated by malloc represented 'real' memory, then
I'd add this to my malloc wrapper:

    for (i = 0; i < size; i++)
        memory[i] = 0;

(BTW, my normal malloc wrapper checks if NULL is returned.)

Perhaps clearing the memory allocated by malloc is a good idea anyway,
to avoid nondeterministic behaviour (unless speed is critical).
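
Spelled out as a complete wrapper (untested; the name xmalloc is just what I
happen to call it):

#include <stdlib.h>

void *xmalloc(size_t size)
{
    size_t i;
    char *memory = malloc(size);

    if (memory == NULL)
        return NULL;
    /* Write every byte so the kernel has to back the whole block now,
       not at some random later touch. */
    for (i = 0; i < size; i++)
        memory[i] = 0;
    return memory;
}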

I've read this thread and I cannot see the problem wrt Linux!
If you attempt to allocate more than the available 'real' amount of
memory, malloc WILL return 0. If the system malloc (in libc) was changed to
prefill the allocated memory with all zeroes, then no process would be able to
"cheat" away pages. As the linux-libc is available in source, changing
libc is trivial.

In fact, if needed, the syscall interface to sbrk could be changed to
prefill with zeroes (e.g., in libc) should an application mess
with that directly.

To me, this problem seems to be a libc problem.
The kernel is handling things as flexibly as it can ...

Anyway, as stated in this thread, memory can run out due to
the stack, and that is trickier to detect/handle in a controlled
manner.

Cheers,
Ove

Ruurd Pels

unread,
Mar 2, 1995, 2:50:03 PM3/2/95
to
In article <3ivvvq$l...@tuba.cit.cornell.edu>, sl...@crux3.cit.cornell.edu (S. Lee) writes:

>>The problems of not handling a NULL return from malloc() are well
>>known. To have a malloc() that might fail in a way that doesn't
>>give the programmer any chance to recover is just mind-boggling.

>Agreed. This is bad behaviour. Is Linus aware of this? He doesn't seem
>to have said anything on this thread.

Well, modifying the kernel part of memory allocation so that it does not
do 'lazy' allocation would probably make it significantly slower. That is
the downside of checking whether real memory is available in case
one might actually want to use the malloc()ed memory. However, it should be
possible to devise some in-between method, that is, let malloc() be lazy,
but, in the event that memory and swap are exhausted, create a swapfile on
the fly on a partition that has enough room. That should not be that
difficult to implement...

>P.S. Is this a kernel or glibc problem?

It's a kernel feature.
--
Grtz, RFP ;-)

|o| Ruurd Pels, Kamgras 187, 8935 EJ Leeuwarden, The Netherlands |o|
|o| GC2.1 GAT/!/?/CS/B -d+(---) H s !g p? a w--(+++) v--(+++) C++ UL+++ |o|
|o| P? L++ !3 E? N++ !V t+ !5 !j G? tv- b++ D B? u++(---) h-- f? y++++ |o|


Michael Shields

unread,
Mar 4, 1995, 4:37:49 PM3/4/95
to
In article <3j57hb$8...@foo.autpels.nl>,

Ruurd Pels <ru...@autpels.maxine.wlink.nl> wrote:
> Well, modifying the kernel-part of memory allocation in order not to let it
> do a 'lazy' allocation would probably make it significantly slower.

Why? It would just have to keep a count of total memory, raising it when
memory is freed or swap is added, and lowering it when memory is allocated
or swap removed. If the amount of memory you request is more than the
current count, the request fails. This seems like a trivial change.
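
In pseudo-C (this is just the bookkeeping I mean, not actual kernel code):

static long pages_left;    /* raised when swap is added or pages are freed */

int reserve_pages(long npages)
{
    if (npages > pages_left)
        return -1;          /* refuse the request up front */
    pages_left -= npages;
    return 0;
}

void release_pages(long npages)
{
    pages_left += npages;
}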
--
Shields.

Michael Shields

unread,
Mar 4, 1995, 4:39:31 PM3/4/95
to
In article <D4wrK...@pe1chl.ampr.org>,
Rob Janssen <pe1...@wab-tis.rabobank.nl> wrote:
> Say, 500KB per process == 64MB of swap for the current kernel configuration.

Linux 1.1.95 raised the default NR_TASKS to 512.
--
Shields.

Michael Shields

unread,
Mar 4, 1995, 4:42:09 PM3/4/95
to
In article <D4wrE...@pe1chl.ampr.org>,
Rob Janssen <pe1...@wab-tis.rabobank.nl> wrote:
> Of course, what you can expect when this change is made: a lot of complaints
> saying "Linux is now using a lot more swap than it did before" and "why do
> I get 'cannot fork', 'cannot exec' and 'out of memory' messages while this
> system worked so beautifilly with last week's kernel".

Make it CONFIG_FASCIST_MALLOC, then.
--
Shields.

Mike Jagdis

unread,
Mar 5, 1995, 9:06:00 AM3/5/95
to
* In message <3ivttm$1...@nz12.rz.uni-karlsruhe.de>, Ivica Rogina said:

IR> This is not a matter of virtual memory. If I do a malloc(), I don't
IR> care what the size of the RAM or the swap space or the virtual
IR> memory is. Whatever it is, I want to be sure that I can use all the
IR> memory that was assigned to me without having to wait for the sysop
IR> to push in another couple-of-gigs-disc.

Then you *have* to dirty each page in the area you request yourself to
forcibly map them as individual, distinct pages.
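
Roughly like this (4096 being the i386 page size; untested):

#include <stddef.h>

#define PAGE_SIZE 4096

/* Touch one byte in every page of a freshly allocated block so each
   page gets its own distinct backing right away. */
void dirty_pages(char *p, size_t len)
{
    size_t off;

    for (off = 0; off < len; off += PAGE_SIZE)
        p[off] = 0;
    if (len > 0)
        p[len - 1] = 0;    /* make sure the last partial page is hit too */
}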

What the less experienced application writers don't realise is that even
the kernel has no way of knowing just how much memory+swap is really usable
at any one time. Text regions may be paged from the executable file - they
may or may not require a physical memory page at any moment and *never*
require a swap page. Similarly the OS cannot know in advance which pages
will be shared and which will require a new page to be used, nor can it know
when a shared page will need to be split due to a copy on write.

The *only* way the OS could guarantee to have a page available for you is
to take the most pessimistic view and save a swap page for *every* possible
page used - i.e. every process requires text pages + data pages + shared
library pages of swap (shared libraries are shared in memory but require
distinct swap for each process). And then you have to figure out how to
handle stack allocations which can probably only be guaranteed by committing
plenty (a few meg? gig?) of pages...

Seriously, if your programmers cannot handle this they should be trained
or moved back to non-VM programming.

Mike

Arnt Gulbrandsen

unread,
Mar 3, 1995, 2:17:23 AM3/3/95
to
In article <bruce-25029...@17.205.4.52>,

Bruce Thompson <br...@newton.apple.com> wrote:
>Either way, I can't see that there's a _valid_ reason for keeping the
>behavior. I hate to beat a dead horse, but I have to. The job of the
>kernel is to manage the resources of the machine.

FYI, malloc isn't part of the kernel.

There are two points which fascist-malloc proponents ought to
consider very carefully:

1. Any process can run out of memory asynchronously, since it can
run out of stack memory, and since root can shrink the ulimit (I'm
not sure if this is implemented in linux yet).

Therefore, you can't _depend_ on catching all out-of-memory
situations by checking the return value of malloc, no matter how
malloc is implemented.

2. People running programs that need to store lots of data but not
access it very often need virtual memory.

Therefore, there's a good chance that your extremely robust program
would be paralysed by swapping long before a hypothetical "safe
malloc" detected out-of-VM.

--Arnt


Jussi Lahtinen

unread,
Mar 3, 1995, 2:14:16 AM3/3/95
to

>The problems of not handling a NULL return from malloc() are well
>known. To have a malloc() that might fail in a way that doesn't
>give the programmer any chance to recover is just mind-boggling.

Malloc-checking is not enough. If you alloc all memory and then call
a function which needs more stack space, your program will be dead and
there is no way to prevent it.

Jussi Lahtinen

Jim Balter

unread,
Mar 6, 1995, 6:54:11 AM3/6/95
to
In article <3jed9q$m...@due.unit.no>,
Arnt Gulbrandsen <agu...@nvg.unit.no> wrote:
>In article <D4s0E...@nntpa.cb.att.com>,

>Vivek Kalra <v...@rhea.cnet.att.com> wrote:
>>In article <HARE.95Fe...@mathi.uni-heidelberg.de>,
>>Hannes Reinecke <ha...@mathi.uni-heidelberg.de> wrote:
>>>Hmm. Having read this, i wondered whether you have heard about virtual
>>>memory. _Every_ process has access to an so-called virtual memory
>>>segment, which has under linux(i386) the size of 3 GB
>>>(cf <asm/processor.h>).
>....

>>Hot damn! And I thought I was going to have to buy more memory for
>>my machine! So, let's see, I have 12Megs of RAM and 24Megs of swap
>>and they add up to 3GB of virtual memory? Where does the
>>difference come from? Microsoft?
>
>mmap() for instance. I routinely mmap() in 30MB files, and I think
>at least one rather common program (INN) mmap()'s in a far bigger
>file. Three such processes, and you'd have over 100MB of addressed
>memory on a 12+24MB machine.

mmapping doesn't count. The whole point is that the problem is with
virtual memory that is *not* mapped anywhere. If the total amount of
non-mapped memory exceeds the amount of potentially mappable memory,
either primary ("physical") memory or secondary ("swap") memory, then
the system is over-committed. You are then in the situation where
the processes may attempt to access so much memory that you run out
of total physical and swap space. What do you do then? You can either
send a signal to the process which kills it if it wasn't prepared for
it, or you can suspend the process. The former violates the ANSI C spec
for malloc (this has been discussed extensively in comp.std.c) and the
latter can lead to system deadlock. That's why systems designed to satisfy
ANSI/POSIX requirements keep count of the total amount of available
real memory, refuse allocation requests that exceed it, and decrement the
count when address space is allocated, even when no actual memory has
been committed. That's where the difference between fork and vfork comes
in; fork, even with copy-on-write, increases the potential demand for
memory and thus must check the count and decrement it, whereas vfork need
not.
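
(For reference, the usual vfork() idiom on systems where vfork really shares
the address space; the function name here is made up. The child only execs
or _exits, so no extra commitment is needed:)

#include <sys/types.h>
#include <unistd.h>

pid_t spawn(const char *prog)
{
    pid_t pid = vfork();

    if (pid == 0) {
        /* Child: borrows the parent's address space, so the kernel need
           not reserve swap for a copy of the parent's writable pages. */
        execlp(prog, prog, (char *)NULL);
        _exit(127);    /* exec failed; the child must not return */
    }
    return pid;        /* parent: child pid, or -1 on error */
}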

>The lesson is: It isn't that simple. Linux is a complex, capable
>operating system, and simple assumptions about what it can and
>cannot do can be a long way from the truth.

Sweeping feel-good generalizations won't do. Linux violates POSIX by
violating the ANSI C spec upon which it is based. If Linux wants to
satisfy POSIX *and* provide a malloc that does not commit to providing
the memory when accessed, then it should provide another function or a
global switch or *something* to provide the distinction. But a system
that can randomly crash programs properly written to the POSIX spec
simply because they access malloc'ed memory (*any* access can do it;
you run your program that mallocs and accesses 1 byte while I'm
running my program that mallocs and accesses 30MB and *your* program
dies if I got the last byte ahead of you) is broken.
--
<J Q B>

Vivek Kalra

unread,