
Implementing Malloc()


CJ

Nov 26, 2007, 4:40:52 PM

We were discussing implementing malloc(), in particular the following
situation.

Suppose the user requests 1Mb of memory. Unfortunately, we only have
512Kb available. In this situation, most mallocs() would return null.
The huge majority of programmers won't bother to check malloc() failure
for such a small allocation, so the program will crash with a SIGSEGV as
soon as the NULL pointer is dereferenced.

So why not just return a pointer to the 512Kb that's available? It's
quite possible that the user will never actually write into the upper
half of the memory he's allocated, in which case the program will have
continued successfully where before it would have crashed.

The worst thing that can happen is that the programmer _does_ write to
the end of the mallocated block. In this case, either there's a SIGSEGV
again (no worse off than before), or if the 512Kb is in the middle of
the heap malloc() is drawing from then the writes might well succeed,
and the program can continue albeit with some possible minor data
corruption.

Do any implementations of malloc() use a strategy like this?


=====================================
McCoy's a seducer galore,
And of virgins he has quite a score.
He tells them, "My dear,
You're the Final Frontier,
Where man never has gone before."

christian.bau

Nov 26, 2007, 4:43:16 PM

On Nov 26, 9:40 pm, CJ <nos...@nospam.invalid> wrote:
> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.
>
> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.
>
> Do any implementations of malloc() use a strategy like this?

I hope not.

Default User

Nov 26, 2007, 4:59:44 PM

CJ wrote:

> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc()
> failure for such a small allocation, so the program will crash with a
> SIGSEGV as soon as the NULL pointer is dereferenced.

This can be used safely by an intelligent programmer.

> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.
>

> The worst thing that can happen is that the programmer does write to
> the end of the mallocated block. In this case, either there's a
> SIGSEGV again (no worse off than before), or if the 512Kb is in the
> middle of the heap malloc() is drawing from then the writes might
> well succeed, and the program can continue albeit with some possible
> minor data corruption.

This cannot.

Brian

Shadowman

Nov 26, 2007, 5:00:20 PM

CJ wrote:
> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.
>
> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.
>

At least in the first case the programmer *can* detect if the call to
malloc() failed. This is not possible with your solution. With your
solution, every call to malloc() presents the possibility of a SIGSEGV
or data corruption without warning.


SM
rot13 for email

Marco Manfredini

Nov 26, 2007, 5:09:11 PM

CJ wrote:
> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.
>
> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.
>
> Do any implementations of malloc() use a strategy like this?

The Chernobyl mainframe?

Al Balmer

Nov 26, 2007, 5:15:53 PM

On Mon, 26 Nov 2007 22:40:52 +0100 (CET), CJ <nos...@nospam.invalid>
wrote:

>We were discussing implementing malloc(), in particular the following
>situation.
>
>Suppose the user requests 1Mb of memory. Unfortunately, we only have
>512Kb available. In this situation, most mallocs() would return null.
>The huge majority of programmers won't bother to check malloc() failure
>for such a small allocation, so the program will crash with a SIGSEGV as
>soon as the NULL pointer is dereferenced.

Nonsense. The vast majority of programmers I know always check for
malloc failure.

>
>So why not just return a pointer to the 512Kb that's available? It's
>quite possible that the user will never actually write into the upper
>half of the memory he's allocated, in which case the program will have
>continued successfully where before it would have crashed.

Good grief. Why not just do it right?


>
>The worst thing that can happen is that the programmer _does_ write to
>the end of the mallocated block. In this case, either there's a SIGSEGV
>again (no worse off than before), or if the 512Kb is in the middle of
>the heap malloc() is drawing from then the writes might well succeed,
>and the program can continue albeit with some possible minor data
>corruption.
>
>Do any implementations of malloc() use a strategy like this?
>

This is all a joke, isn't it?

--
Al Balmer
Sun City, AZ

James Kuyper

Nov 26, 2007, 5:18:06 PM

CJ wrote:
> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, ...

Only incompetent programmers who no sane person would hire would fail to
check for malloc() failure. If that's a "huge majority", then the C
programming world is in deep trouble.

> ... so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.

Because the C standard requires that if an implementation cannot
allocate the entire amount, then it must return a NULL pointer, a
feature which competent programmers rely upon.
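
The checking pattern competent programmers rely on is only a few lines. A minimal sketch (dup_string is just an illustrative helper name, not anything from the standard library):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative helper: copy a string, passing any allocation
   failure back to the caller instead of crashing on it. */
char *dup_string(const char *s)
{
    size_t n = strlen(s) + 1;
    char *copy = malloc(n);
    if (copy == NULL)        /* the NULL return the standard guarantees */
        return NULL;
    memcpy(copy, s, n);
    return copy;
}
```

The caller, in turn, checks dup_string's return the same way, so the failure propagates to whatever level can actually do something about it.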

Charlton Wilbur

Nov 26, 2007, 5:11:04 PM

>>>>> "CJ" == CJ <nos...@nospam.invalid> writes:

CJ> Suppose the user requests 1Mb of memory. Unfortunately, we
CJ> only have 512Kb available. In this situation, most mallocs()
CJ> would return null. The huge majority of programmers won't
CJ> bother to check malloc() failure for such a small allocation,
CJ> so the program will crash with a SIGSEGV as soon as the NULL
CJ> pointer is dereferenced.

CJ> So why not just return a pointer to the 512Kb that's
CJ> available?

Because the programmer knows what he's asking for, and why he's asking
for it, and it's not the place of the C library to determine whether
he really means it or not. If he only *wanted* 512K, he would,
presumably, only *ask for* 512K; if he *asks for* 1M, then the system
needs to either give him that 1M or tell him it can't. This is, after
all, the behavior the standard calls for.

And if the programmer is careless enough to not check the return value
of malloc(), he deserves the crash. Under your scheme, a responsible
programmer who *does* check the return value of malloc() and finds
that the allocation succeeded (since that is what a non-NULL return
value means, according to the standard) will be sometimes bitten by
strange unreproducible bugs. Trying to make C less fragile in the
hands of incompetents is a fine thing, but not at the cost of making
it work inconsistently in the hands of competent programmers.

CJ> Do any implementations of malloc() use a strategy like this?

I understand that the Linux virtual memory subsystem can allow
overcommitting (and does so by default on some distributions), but
that happens at a different level than malloc().

Charlton


--
Charlton Wilbur
cwi...@chromatico.net

jacob navia

Nov 26, 2007, 5:20:33 PM



Your function is a new function, not malloc, you should
call it differently, for instance

SYNOPSIS:
void *mallocTry(size_t size, size_t *new_size);

DESCRIPTION:
This function will return a valid pointer to a block of
size bytes if successful, or NULL if there is no block of
the requested size.

If this function fails, it will return in new_size
(if new_size is not a NULL pointer) the size of the
largest request that the malloc system is able to
find at the time of the call to mallocTry.

The user call sequence is like this:

size_t ns = 1024*1024;
char *p = mallocTry(ns, &ns);
if (p == NULL && ns > 256*1024) {
    p = mallocTry(ns, NULL);
    if (p == NULL) {
        fprintf(stderr, "No more memory\n");
        exit(-1);
    }
}
// Here ns is the size of the block and p is valid.
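
Sketched on top of a standard malloc(), such a mallocTry() might look like this (a hypothetical implementation, not from any library; the probed size is only an estimate and can be stale by the time the caller retries):

```c
#include <stdlib.h>

/* Hypothetical sketch of mallocTry(): return a block of size bytes,
   or NULL on failure.  On failure, report through *new_size (if it
   is not NULL) an estimate of the largest request that malloc() can
   currently satisfy, found by halving the request until one succeeds. */
void *mallocTry(size_t size, size_t *new_size)
{
    void *p = malloc(size);
    if (p != NULL)
        return p;
    if (new_size != NULL) {
        size_t probe = size;
        *new_size = 0;
        while (probe > 1) {
            probe /= 2;
            void *q = malloc(probe);
            if (q != NULL) {
                free(q);          /* just probing; give it back */
                *new_size = probe;
                break;
            }
        }
    }
    return NULL;
}
```

Note the probe-and-free is inherently racy in a multithreaded program: another thread may take the memory between the probe and the retry.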

--
jacob navia
jacob at jacob point remcomp point fr
logiciels/informatique
http://www.cs.virginia.edu/~lcc-win32

Francin...@googlemail.com

Nov 26, 2007, 5:28:59 PM

On Nov 26, 9:40 pm, CJ <nos...@nospam.invalid> wrote:
> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.
>
> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.
>
> Do any implementations of malloc() use a strategy like this?

I seem to remember reading that it's standard practise for C
implementations on Linux to overcommit memory in this way. Always
seemed a bit crazy to me :~

jacob navia

Nov 26, 2007, 5:31:22 PM

jacob navia wrote:
> The user call sequence is like this:
>
> size_t ns = 1024*1024;
> char *p = mallocTry(ns,&ns);
> if (p == NULL && ns > 256*1024) {
> p = mallocTry(ns,NULL);
> if (p == NULL) {
> fprintf(stderr,"No more memory\n");
> exit(-1);
> }
> }
> // Here ns is the size of the block and p is valid.

BUG:

If ns <= 256K the code above will fail. The correct sequence is:


size_t ns = 1024*1024;
char *p = mallocTry(ns, &ns);
if (p == NULL) {
    if (ns > 256*1024)
        p = mallocTry(ns, NULL);
    if (p == NULL) {
        fprintf(stderr, "No more memory\n");
        exit(-1);
    }
}
// Here ns is the size of the block and p is valid.

Excuse me for this oversight.

Eric Sosman

Nov 26, 2007, 5:34:46 PM

CJ wrote On 11/26/07 16:40,:

> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.
>
> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.
>
> Do any implementations of malloc() use a strategy like this?

This idea can be extended to produce the following
extremely efficient implementation of malloc() and its
companions:

#include <stdlib.h>

static unsigned long memory;

void *malloc(size_t bytes) {
    return &memory;
}

void *calloc(size_t esize, size_t ecount) {
    memory = 0;
    return &memory;
}

void *realloc(void *old, size_t bytes) {
    return old;
}

void free(void *ptr) {
#ifdef DEBUGGING
    memory = 0xDEADBEEF;
#endif
}

Not only does this implementation avoid the processing
overhead of maintaining potentially large data structures
describing the state of memory pools, but it also reduces
the "memory footprint" of every program that uses it, thus
lowering page fault rates, swap I/O rates, and out-of-memory
problems.

--
Eric....@sun.com

CBFalconer

Nov 26, 2007, 6:18:07 PM

CJ wrote:
>
> We were discussing implementing malloc(), in particular the
> following situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only
> have 512Kb available. In this situation, most mallocs() would
> return null. The huge majority of programmers won't bother to
> check malloc() failure for such a small allocation, so the
> program will crash with a SIGSEGV as soon as the NULL pointer
> is dereferenced.

If he doesn't check the return from malloc, he should be barred
from using the C compiler. He is obviously an idiot.

--
Chuck F (cbfalconer at maineline dot net)
<http://cbfalconer.home.att.net>
Try the download section.


--
Posted via a free Usenet account from http://www.teranews.com

Malcolm McLean

Nov 26, 2007, 6:27:22 PM


"James Kuyper" <james...@verizon.net> wrote in message

>> Suppose the user requests 1Mb of memory. Unfortunately, we only have
>> 512Kb available. In this situation, most mallocs() would return null.
>> The huge majority of programmers won't bother to check malloc() failure
>> for such a small allocation, ...
>
> Only incompetent programmers who no sane person would hire would fail to
> check for malloc() failure. If that's a "huge majority", then the C
> programming world is in deep trouble.
>
The problem is that, often, there is nothing you can do without imposing
unacceptable runtime overheads. This is especially true in windowing systems
where functions that need to allocate trivial amounts of memory are called by
indirection, often several layers deep. It is no longer possible to return
an error condition to the caller.
If all you can do is exit(EXIT_FAILURE); you might as well segfault, and
have more readable code.

That's why I introduced xmalloc(), the malloc() that never fails. It
achieves this by nagging for memory, until killed by the user as a last
resort when the cupboard is bare.
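
Stripped of the nagging loop, such a wrapper reduces to the familiar shape below (a minimal sketch that simply exits on failure; the version described above would instead prompt and retry before giving up):

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of an xmalloc()-style wrapper: never returns NULL.
   Unlike the nagging variant described above, this one just reports
   the failure and terminates the program. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory (%lu bytes)\n",
                (unsigned long)size);
        exit(EXIT_FAILURE);
    }
    return p;
}
```

Callers can then use the result unconditionally, which is the whole point: the check still happens, but in exactly one place.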

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Marco Manfredini

Nov 26, 2007, 6:40:06 PM

CBFalconer wrote:
> CJ wrote:
>> We were discussing implementing malloc(), in particular the
>> following situation.
>>
>> Suppose the user requests 1Mb of memory. Unfortunately, we only
>> have 512Kb available. In this situation, most mallocs() would
>> return null. The huge majority of programmers won't bother to
>> check malloc() failure for such a small allocation, so the
>> program will crash with a SIGSEGV as soon as the NULL pointer
>> is dereferenced.
>
> If he doesn't check the return from malloc, he should be disallowed
> to use the C compiler. He is obviously an idiot.
>
Two posters recalled the overcommit feature of Linux (and probably the
BSDs and AIX): once engaged, malloc() *never* returns NULL. Instead, if
the system runs out of VM, the kernel goes on a killing spree and
terminates all processes that razzed him. The malloc manpage even
contains instructions on how to fix this. The idiots are everywhere.

Flash Gordon

Nov 26, 2007, 6:45:28 PM

Malcolm McLean wrote, On 26/11/07 23:27:

>
> "James Kuyper" <james...@verizon.net> wrote in message
>>> Suppose the user requests 1Mb of memory. Unfortunately, we only have
>>> 512Kb available. In this situation, most mallocs() would return null.
>>> The huge majority of programmers won't bother to check malloc() failure
>>> for such a small allocation, ...
>>
>> Only incompetent programmers who no sane person would hire would fail to
>> check for malloc() failure. If that's a "huge majority", then the C
>> programming world is in deep trouble.
>>
> The problem is that, often, there is nothing you can do without imposing
> unacceptable runtime overheads.

There is always something you can do without large runtime overheads.
You can always terminate the program. Of course, that is not always
acceptable.

> This is especially true in windowing
> systems where function that need to allocate trivial amounts of memory
> are called by indirection, often several layers deep. It is no longer
> possible to return an error condition to the caller.

If you design it without mechanisms for returning error conditions that
is true. However, if you design it properly it is not true.

> If all you cna do is exit(EXIT_FAILURE); you might as well segfault, and
> have more readable code.

Complete and utter rubbish. One is predictable and occurs at the actual
point of failure the other is not guaranteed.

> That's why I introduced xmalloc(), the malloc() that never fails. It
> achieves this by nagging for memory, until killed by the user as a last
> resort when the cupboard is bare.

Which it must be doing by checking the value returned by malloc. How can
you be claiming it produces an unacceptable overhead and then actually
doing it?
--
Flash Gordon

Dik T. Winter

Nov 26, 2007, 7:48:34 PM

In article <fiflgn$5ds$1...@aioe.org> Marco Manfredini <ok_nos...@phoyd.net> writes:
...

> > If he doesn't check the return from malloc, he should be disallowed
> > to use the C compiler. He is obviously an idiot.
> >
> Two posters recalled the overcommit feature of Linux (and probably the
> BSD's and AIX),

As far as I remember, BSD Unix did *not* overcommit.

> once engaged malloc() *never* returns NULL. Instead, if
> the system runs out of VM, the kernel goes on a killing spree and
> terminates all processes that razzed him. The malloc manpage even
> contains instructions how to fix this. The idiots are everywhere.

I experienced it for the first time when we got our first SGIs (System V
based). It was my impression that the first program killed was fairly random.
I have seen X sessions killed due to this. Within a short time the default
to overcommit was changed on *all* those machines.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

Richard Tobin

Nov 26, 2007, 7:59:09 PM

In article <fiflgn$5ds$1...@aioe.org>,
Marco Manfredini <ok_nos...@phoyd.net> wrote:

>Two posters recalled the overcommit feature of Linux (and probably the
>BSD's and AIX), once engaged malloc() *never* returns NULL.

I don't think that's quite true. It may never return NULL because of
memory shortage, but it probably does for other reasons such as
impossible sizes and requests exceeding a settable limit.

-- Richard
--
"Consideration shall be given to the need for as many as 32 characters
in some alphabets" - X3.4, 1963.

Richard Tobin

Nov 26, 2007, 7:56:08 PM

In article <nd2s15x...@news.flash-gordon.me.uk>,
Flash Gordon <sp...@flash-gordon.me.uk> wrote:

>> This is especially true in windowing
>> systems where function that need to allocate trivial amounts of memory
>> are called by indirection, often several layers deep. It is no longer
>> possible to return an error condition to the caller.
>
>If you design it without mechanisms for returning error conditions that
>is true. However, if you design it properly it is not true.
>
>> If all you cna do is exit(EXIT_FAILURE); you might as well segfault, and
>> have more readable code.
>
>Complete and utter rubbish. One is predictable and occurs at the actual
>point of failure the other is not guaranteed.

Not guaranteed by the C standard, but probably guaranteed on whatever
you're writing a window system for. The C standard isn't the only
relevant source of guarantees for most programmers.

Not that I advocate doing it. If it really isn't useful to handle the
error gracefully (which is often the case), then something like
xmalloc() is the obvious solution.

Dik T. Winter

Nov 26, 2007, 7:51:07 PM

In article <nd2s15x...@news.flash-gordon.me.uk> Flash Gordon <sp...@flash-gordon.me.uk> writes:
> Malcolm McLean wrote, On 26/11/07 23:27:
...

> > That's why I introduced xmalloc(), the malloc() that never fails. It
> > achieves this by nagging for memory, until killed by the user as a last
> > resort when the cupboard is bare.
>
> Which it must be doing by checking the value returned by malloc. How can
> you be claiming it produces an unacceptable overhead and then actually
> doing it?

Not only that. It goes into a tight loop which is not user-friendly at
all, probably tying up many resources.

Gordon Burditt

Nov 26, 2007, 8:09:05 PM

>The worst thing that can happen is that the programmer _does_ write to
>the end of the mallocated block. In this case, either there's a SIGSEGV
>again (no worse off than before), or if the 512Kb is in the middle of
>the heap malloc() is drawing from then the writes might well succeed,
>and the program can continue albeit with some possible minor data
>corruption.

"possible minor data corruption", especially the kind you don't
notice, is about the worst case possible. You finally realize
what's happening, and then you discover that last year's backups
are corrupted and you've lost lots of work.

Remind me never to fly on any airplanes with your software running
them.

Remember, anything can run in zero time and zero memory if you don't
require the result to be correct.

James Fang

Nov 27, 2007, 3:50:15 AM

This is very bad engineering practice, especially when the error is
nondeterministic and it is impossible to find the root cause of such
an error.

Especially in some embedded systems without memory protection, this
kind of malloc implementation will make your program dance like a
drunk.

BRs
James Fang

Flash Gordon

Nov 27, 2007, 4:05:35 AM

Richard Tobin wrote, On 27/11/07 00:56:

> In article <nd2s15x...@news.flash-gordon.me.uk>,
> Flash Gordon <sp...@flash-gordon.me.uk> wrote:
>
>>> This is especially true in windowing
>>> systems where function that need to allocate trivial amounts of memory
>>> are called by indirection, often several layers deep. It is no longer
>>> possible to return an error condition to the caller.
>> If you design it without mechanisms for returning error conditions that
>> is true. However, if you design it properly it is not true.
>>
>>> If all you cna do is exit(EXIT_FAILURE); you might as well segfault, and
>>> have more readable code.
>> Complete and utter rubbish. One is predictable and occurs at the actual
>> point of failure the other is not guaranteed.
>
> Not guaranteed by the C standard, but probably guaranteed on whatever
> you're writing a window system for.

OK, let's try it with Windows 3.1, IIRC that did not have much in terms
of memory protection (and is probably still in use in some places since
I have very good reason to believe DOS is still used in some places on
PCs), or under Gem.

> The C standard isn't the only
> relevant source of guarantees for most programmers.

True. However it is better to rely on the C standard where it can
sensibly provide for what is wanted, and then work your way up through
the steadily less portable standards and system specifics only when
necessary.

> Not that I advocate doing it. If it really isn't useful to handle the
> error gracefully (which is often the case), then something like
> xmalloc() is the oobvious solution.

I accept that a "tidy up and exit" malloc wrapper is sometimes
appropriate, also a "prompt the user and then retry" wrapper is
sometimes appropriate. I know that my customers would be *much* happier
with either of those than a segmentation violation, since a segmentation
violation implies (correctly IMHO) a bug in the program.

So I think you (Richard) and I are in agreement :-)
--
Flash Gordon

Flash Gordon

Nov 27, 2007, 4:07:32 AM

Dik T. Winter wrote, On 27/11/07 00:51:

> In article <nd2s15x...@news.flash-gordon.me.uk> Flash Gordon <sp...@flash-gordon.me.uk> writes:
> > Malcolm McLean wrote, On 26/11/07 23:27:
> ...
> > > That's why I introduced xmalloc(), the malloc() that never fails. It
> > > achieves this by nagging for memory, until killed by the user as a last
> > > resort when the cupboard is bare.
> >
> > Which it must be doing by checking the value returned by malloc. How can
> > you be claiming it produces an unacceptable overhead and then actually
> > doing it?
>
> Not only that. It goes into a tight loop which is not user-friendly at
> all, probably tying up many resources.

I've not seen Malcolm's implementation. If it prompts the users to free
up space and then waits (preferably with options to retry or abort) then
it is useful. If, as you are implying, it is simply a loop repeatedly
calling *alloc, then I would also consider it completely unacceptable.
--
Flash Gordon

santosh

Nov 27, 2007, 5:24:39 AM

In article <1196116487.706075@news1nwk>, Eric Sosman

ROTFL

Spoon

Nov 27, 2007, 7:49:27 AM

CJ wrote:

> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available? It's
> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.
>
> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.

On a related note, the Linux kernel may be configured so as to
overcommit memory. (It is even the default.)

http://lxr.linux.no/source/Documentation/vm/overcommit-accounting
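
The policy is controlled through /proc (a configuration fragment for a Linux system, requiring root to change):

```shell
# Linux overcommit policy: 0 = heuristic overcommit (the default),
# 1 = always overcommit, 2 = strict accounting (don't overcommit).
cat /proc/sys/vm/overcommit_memory

# As root, switch to strict accounting so allocation failures are
# reported to malloc() up front instead of triggering the OOM
# killer later:
echo 2 > /proc/sys/vm/overcommit_memory
```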

CJ

Nov 27, 2007, 12:58:31 PM

On 26 Nov 2007 at 22:34, Eric Sosman wrote:
> Not only does this implementation avoid the processing
> overhead of maintaining potentially large data structures
> describing the state of memory pools, but it also reduces
> the "memory footprint" of every program that uses it, thus
> lowering page fault rates, swap I/O rates, and out-of-memory
> problems.

All very amusing, but I think you and some of the other posters have
misunderstood the context of the discussion.

Of course an implementation of malloc should return a pointer to as much
memory as requested whenever that's possible! The discussion was about
the rare failure case. Typically, programs assume malloc returns a
non-null pointer, so if it returns null on failure then the program will
crash and burn.

The idea is that instead of a guaranteed crash, isn't it better to make
a last-ditch effort to save the program's bacon by returning a pointer
to what memory there _is_ available? If the program's going to crash
anyway, it's got to be worth a shot.


=======================================
There once was an old man from Esser,
Whose knowledge grew lesser and lesser.
It at last grew so small,
He knew nothing at all,
And now he's a College Professor.

jacob navia

Nov 27, 2007, 1:01:14 PM


Well, I proposed the function mallocTry() that could give you the
best of both worlds. Why do you ignore that suggestion?

dj3v...@csclub.uwaterloo.ca.invalid

Nov 27, 2007, 1:17:53 PM

In article <slrnfkomlu...@nospam.invalid>,

CJ <nos...@nospam.invalid> wrote:
>On 26 Nov 2007 at 22:34, Eric Sosman wrote:
>> Not only does this implementation avoid the processing
>> overhead of maintaining potentially large data structures
>> describing the state of memory pools, but it also reduces
>> the "memory footprint" of every program that uses it, thus
>> lowering page fault rates, swap I/O rates, and out-of-memory
>> problems.
>
>All very amusing, but I think you and some of the other posters have
>misunderstood the context of the discussion.

[...]

>The idea is that instead of a guaranteed crash,

the programmer should handle a failure to allocate resources properly,
and the implementation should make it possible to do so.

I think you're the one who's missing the point.


dave

Stephen Sprunk

unread,
Nov 27, 2007, 1:03:26 PM11/27/07
to
"CJ" <nos...@nospam.invalid> wrote in message
news:slrnfkomlu...@nospam.invalid...

> Of course an implementation of malloc should return a pointer to as much
> memory as requested whenever that's possible! The discussion was about
> the rare failure case. Typically, programs assume malloc returns a
> non-null pointer, so if it returns null on failure then the program will
> crash and burn.

A program only crashes on malloc() returning NULL if the programmer is
incompetent.

> The idea is that instead of a guaranteed crash, isn't it better to make
> a last-ditch effort to save the program's bacon by returning a pointer
> to what memory there _is_ available? If the program's going to crash
> anyway, it's got to be worth a shot.

The change you propose would mean that malloc() _might_ save some
incompetent programmers but _definitely_ make it impossible for competent
programmers to write correct programs because there's no longer a way to
detect if they got the amount of memory they requested. That goes against
both the fundamental philosophy of C and common sense.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

santosh

unread,
Nov 27, 2007, 1:25:12 PM11/27/07
to
CJ wrote:

> On 26 Nov 2007 at 22:34, Eric Sosman wrote:
>> Not only does this implementation avoid the processing
>> overhead of maintaining potentially large data structures
>> describing the state of memory pools, but it also reduces
>> the "memory footprint" of every program that uses it, thus
>> lowering page fault rates, swap I/O rates, and out-of-memory
>> problems.
>
> All very amusing, but I think you and some of the other posters have
> misunderstood the context of the discussion.
>
> Of course an implementation of malloc should return a pointer to as
> much memory as requested whenever that's possible! The discussion was
> about the rare failure case. Typically, programs assume malloc returns
> a non-null pointer, so if it returns null on failure then the program
> will crash and burn.

A properly written non-trivial program is not going to "crash and burn"
when malloc() returns NULL. It will shutdown gracefully, or even notify
the user to free up some memory and perhaps try again.

> The idea is that instead of a guaranteed crash,

No. There is no guaranteed crash. Consider:

#include <stdint.h> /* for SIZE_MAX */
#include <stdlib.h>

int main(void) {
char *p = malloc(SIZE_MAX);
if (p) { free(p); return 0; }
else return EXIT_FAILURE;
}

Where is the "crash"?

> isn't it better to
> make a last-ditch effort to save the program's bacon by returning a
> pointer to what memory there _is_ available? If the program's going to
> crash anyway, it's got to be worth a shot.

If malloc() returns NULL for failure, the program can at least exit
gracefully or even try and recover some memory and keep going, but if
it is going to indicate success but return insufficient space, then a
memory overwrite is almost certainly going to lead to either hard to
debug data corruption or a core dump from a segmentation violation.
This is (for any serious program) worse than a clean, controlled exit.

Besides your function (for whatever it's worth) can easily be written on
top of malloc() for anyone mad enough to want it. No need to include
another gets() into the Standard.

Eric Sosman

unread,
Nov 27, 2007, 1:41:07 PM11/27/07
to
CJ wrote On 11/27/07 12:58,:

> On 26 Nov 2007 at 22:34, Eric Sosman wrote:
>
>> Not only does this implementation avoid the processing
>>overhead of maintaining potentially large data structures
>>describing the state of memory pools, but it also reduces
>>the "memory footprint" of every program that uses it, thus
>>lowering page fault rates, swap I/O rates, and out-of-memory
>>problems.
>
>
> All very amusing, but I think you and some of the other posters have
> misunderstood the context of the discussion.

Well, yes: I understood it as a joke. But if
you're actually serious ...

> Of course an implementation of malloc should return a pointer to as much
> memory as requested whenever that's possible! The discussion was about
> the rare failure case. Typically, programs assume malloc returns a
> non-null pointer, so if it returns null on failure then the program will
> crash and burn.

One could take issue with your assertion about
"typically," and even about "rare." But that's a
side-issue.

> The idea is that instead of a guaranteed crash, isn't it better to make
> a last-ditch effort to save the program's bacon by returning a pointer
> to what memory there _is_ available?

No. For starters, this makes what you consider an
"atypical" program as bad as a "typical" one, because
it becomes impossible to detect a malloc() failure: if
malloc() returns a non-NULL pointer, how can the caller
tell whether it was or wasn't able to supply the memory?

> If the program's going to crash
> anyway, it's got to be worth a shot.

Your unstated assumption is that a crash is worse
than all other outcomes, and I dispute that assumption.
A program that ceases to operate also ceases to produce
wrong answers. It probably also draws attention to the
fact that something has gone wrong, instead of plunging
silently ahead doing who-knows-what.

In my house I have smoke alarms that use nine-volt
batteries to supply their modest electrical needs. The
service life of the battery in this application is quite
long, well over a year, so the battery's "failure rate"
is quite low: When the alarm "requests more electricity"
from the battery, it nearly always succeeds. Yet on the
extremely rare occasion where the battery cannot deliver
enough, the alarm starts beeping at intervals to alert me
to the fact. Then I change the battery, and all is well.

If the battery were designed to work the way you want
malloc() to operate, this wouldn't happen. The battery
would (somehow) pretend to be able to supply the requested
voltage even when it could not, and the alarm would never
discover that the battery was dead. Instead, the alarm
would sit silently on my wall, giving the illusion of
performing its function without actually doing so, pretending
to protect me and my family while leaving us at risk. You
may not admire me much, but you know nothing of my family
and I request that you not condemn them to the flames.

--
Eric....@sun.com

Flash Gordon

unread,
Nov 27, 2007, 2:20:30 PM11/27/07
to
CJ wrote, On 27/11/07 17:58:

> On 26 Nov 2007 at 22:34, Eric Sosman wrote:
>> Not only does this implementation avoid the processing
>> overhead of maintaining potentially large data structures
>> describing the state of memory pools, but it also reduces
>> the "memory footprint" of every program that uses it, thus
>> lowering page fault rates, swap I/O rates, and out-of-memory
>> problems.
>
> All very amusing, but I think you and some of the other posters have
> misunderstood the context of the discussion.

No, I believe that Eric and most of the other respondents understood it
perfectly, and that is why they all considered it a terrible idea.

> Of course an implementation of malloc should return a pointer to as much
> memory as requested whenever that's possible! The discussion was about
> the rare failure case.

Rare to you maybe, but I actually use machines fairly hard and it is not
uncommon for them to run out of memory.

> Typically, programs assume malloc returns a
> non-null pointer, so if it returns null on failure then the program will
> crash and burn.

Not if it is written properly, and a lot of programs *are* written properly.

> The idea is that instead of a guaranteed crash, isn't it better to make
> a last-ditch effort to save the program's bacon by returning a pointer
> to what memory there _is_ available? If the program's going to crash
> anyway, it's got to be worth a shot.

Oh dear, you have just caused my copy of VMWare to crash possibly
causing corruption of the guest systems file system and forcing me to
restart a long and complex set of tests involving a number of VMware
sessions. You have also prevented one of the applications I develop from
tidying up cleanly behind itself and notifying the user, possibly
causing corruption of a company's corporate accounts.

No, it is FAR better to do what the standard requires and actually give
the application a shot at doing something sensible when it runs out of
memory, such as warning the user and giving them a chance to free some
memory up, or shutting down tidily so that data is not corrupted etc.
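[The recovery path described here -- warn the application, let it free something, and retry -- is only possible because malloc() honestly reports failure. A minimal sketch, with the cache-releasing hook being an illustrative assumption rather than anything from the thread:]

```c
#include <stdlib.h>

/* Sketch: attempt an allocation; on failure, invoke a caller-supplied
   hook that releases non-essential memory (caches, pools, ...) and
   retry once.  This pattern depends entirely on malloc() returning
   NULL rather than a silently short block. */
static void *malloc_with_recovery(size_t n, void (*release_caches)(void))
{
    void *p = malloc(n);
    if (p == NULL && release_caches != NULL) {
        release_caches();   /* give the application a chance to reclaim */
        p = malloc(n);      /* one retry after reclamation */
    }
    return p;
}
```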
--
Flash Gordon

John Gordon

unread,
Nov 27, 2007, 3:06:48 PM11/27/07
to
In <slrnfkomlu...@nospam.invalid> CJ <nos...@nospam.invalid> writes:

> Typically, programs assume malloc returns a non-null pointer, so if it
> returns null on failure then the program will crash and burn.

Huh? If malloc returns null, the program detects it and handles it as best
it can. It certainly doesn't *use* the null pointer gotten from malloc.

Where's this "guaranteed crash?"
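[The handling alluded to here is the standard idiom -- a sketch of a checking wrapper, commonly named xmalloc in real codebases (the name is conventional, not from this thread):]

```c
#include <stdio.h>
#include <stdlib.h>

/* Checking wrapper: on allocation failure, report the error and exit
   cleanly, instead of letting a null pointer be dereferenced somewhere
   far away from the allocation site. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%zu bytes requested)\n", n);
        exit(EXIT_FAILURE);
    }
    return p;
}
```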

--
John Gordon A is for Amy, who fell down the stairs
gor...@panix.com B is for Basil, assaulted by bears
-- Edward Gorey, "The Gashlycrumb Tinies"

Eric Sosman

unread,
Nov 27, 2007, 4:01:14 PM11/27/07
to
jacob navia wrote:
>
> Well, I proposed the function
> mallocTry
> that could give you the best of both worlds. Why do
> you ignore that suggestion?

He's begun by assuming a caller who's too [insert
derogatory adjective] even to compare malloc's value
to NULL. Would this [ida] person have the smarts to
make effective use of mallocTry?

--
Eric Sosman
eso...@ieee-dot-org.invalid

Shadowman

unread,
Nov 27, 2007, 4:35:20 PM11/27/07
to
Al Balmer wrote:

> On Mon, 26 Nov 2007 22:40:52 +0100 (CET), CJ <nos...@nospam.invalid>
> wrote:
>
>> The worst thing that can happen is that the programmer _does_ write to
>> the end of the mallocated block. In this case, either there's a SIGSEGV
>> again (no worse off than before), or if the 512Kb is in the middle of
>> the heap malloc() is drawing from then the writes might well succeed,
>> and the program can continue albeit with some possible minor data
>> corruption.
>>
>> Do any implementations of malloc() use a strategy like this?
>>
> This is all a joke, isn't it?
>

On that note:

http://groups.google.com/group/comp.lang.c/msg/f6d42922a80362a4?dmode=source

http://groups.google.com/group/comp.lang.c/msg/6d4f4b91af15c5e6?dmode=source

Same guy?

--
SM
rot13 for email

Default User

unread,
Nov 27, 2007, 4:55:22 PM11/27/07
to
CJ wrote:


> Of course an implementation of malloc should return a pointer to as
> much memory as requested whenever that's possible! The discussion was
> about the rare failure case. Typically, programs assume malloc
> returns a non-null pointer, so if it returns null on failure then the
> program will crash and burn.

You're either an idiot or a troll. You haven't understood, or have
chosen to ignore all the follow-ups.

As such, you're a waste of my time (the greatest usenet sin).

*plonk*

Brian

Kenneth Brody

unread,
Nov 27, 2007, 4:47:22 PM11/27/07
to
CJ wrote:
>
> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.

AFAIK, _all_ conforming mallocs return NULL. (With the possible
exception of things like Linux's overcommit scheme. However, it is my
understanding that it is the kernel that lies to malloc, and malloc
does, in fact, believe that the memory is available.)

> The huge majority of programmers won't bother to check malloc() failure

I have to take exception with that statement.

> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.

Someone who doesn't check for malloc() failure, especially on such
a large size, deserves what he gets.

Consider, too, the ease in debugging the NULL pointer dereference.

> So why not just return a pointer to the 512Kb that's available? It's

Eww...

> quite possible that the user will never actually write into the upper
> half of the memory he's allocated, in which case the program will have
> continued successfully where before it would have crashed.

I would consider that strategy "asking for trouble".

> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.

So continuing "with some possible minor data corruption" is a
viable programming strategy? You want to make buffer overruns a
design strategy?

Consider the difference in difficulty in "malloc returned NULL" to
"I malloced a million bytes, but something else is mysteriously
writing into my buffer, and writing into my buffer causes some
other code to mysteriously crash".

How many Windows updates were to fix "malicious code could allow an
attacker to take over your computer" bugs which were caused by such
buffer overruns?

> Do any implementations of malloc() use a strategy like this?

I sincerely hope not. I don't want my implementation lying to me.
("Lazy mallocs" are bad enough. We don't need "outright lying
mallocs".)

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody | www.hvcomputer.com | #include |
| kenbrody/at\spamcop.net | www.fptech.com | <std_disclaimer.h> |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:ThisIsA...@gmail.com>

user923005

unread,
Nov 27, 2007, 5:09:20 PM11/27/07
to
On Nov 27, 9:58 am, CJ <nos...@nospam.invalid> wrote:
> On 26 Nov 2007 at 22:34, Eric Sosman wrote:
>
> > Not only does this implementation avoid the processing
> > overhead of maintaining potentially large data structures
> > describing the state of memory pools, but it also reduces
> > the "memory footprint" of every program that uses it, thus
> > lowering page fault rates, swap I/O rates, and out-of-memory
> > problems.
>
> All very amusing, but I think you and some of the other posters have
> misunderstood the context of the discussion.
>
> Of course an implementation of malloc should return a pointer to as much
> memory as requested whenever that's possible! The discussion was about
> the rare failure case. Typically, programs assume malloc returns a
> non-null pointer, so if it returns null on failure then the program will
> crash and burn.
>
> The idea is that instead of a guaranteed crash, isn't it better to make
> a last-ditch effort to save the program's bacon by returning a pointer
> to what memory there _is_ available? If the program's going to crash
> anyway, it's got to be worth a shot.

Surely, surely. This is a troll.

If I asked for 4 MB and the implementation returned 2 MB, I can't
really imagine anything worse than that. Not only is it undefined
behavior, but the behavior is very unlikely even to be repeatable. Why
(on earth) would the program 'crash anyway'? Anyone who does not
check the return of malloc() in a production application is an
incompetent dimwit of the highest order. So there is no way that the
application is going to crash. But of course, anyone who ever did
real-life programming in C knows all of this already.

If malloc() fails, and I could succeed with less memory (e.g. if I am
creating a hash table with a small fill factor, I can increase it),
then I am going to try to allocate less memory. To simply return a
smaller block... you're a riot.
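[The fallback described above -- retrying with a denser hash table rather than accepting a silently short block -- might be sketched as follows; the halving policy and names are illustrative assumptions:]

```c
#include <stdlib.h>

/* Illustrative sketch: try to allocate a hash table at the desired
   bucket count; on failure, retry with half as many buckets (i.e. a
   higher fill factor), down to a workable minimum.  The caller always
   learns the bucket count actually allocated -- which is exactly what
   a malloc() that lies about the size would make impossible. */
static size_t alloc_table(size_t want_buckets, size_t min_buckets,
                          void ***table)
{
    for (size_t n = want_buckets; n >= min_buckets && n > 0; n /= 2) {
        *table = calloc(n, sizeof **table);
        if (*table != NULL)
            return n;   /* the real size, known to the caller */
    }
    *table = NULL;
    return 0;
}
```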

Quite an effective troll, sir. I congratulate you.

james...@verizon.net

unread,
Nov 27, 2007, 5:34:29 PM11/27/07
to
CJ wrote:
> On 26 Nov 2007 at 22:34, Eric Sosman wrote:
> > Not only does this implementation avoid the processing
> > overhead of maintaining potentially large data structures
> > describing the state of memory pools, but it also reduces
> > the "memory footprint" of every program that uses it, thus
> > lowering page fault rates, swap I/O rates, and out-of-memory
> > problems.
>
> All very amusing, but I think you and some of the other posters have
> misunderstood the context of the discussion.

No, we understood the context; we just have a well-justified lack of
respect for the attitudes that could create such a context.

> Of course an implementation of malloc should return a pointer to as much
> memory as requested whenever that's possible! The discussion was about
> the rare failure case. Typically, programs assume malloc returns a
> non-null pointer, so if it returns null on failure then the program will
> crash and burn.
>
> The idea is that instead of a guaranteed crash, isn't it better to make
> a last-ditch effort to save the program's bacon by returning a pointer
> to what memory there _is_ available? If the program's going to crash
> anyway, it's got to be worth a shot.

No, it is not. The sooner and more frequently such incompetently
written code fails, the sooner the programmer will realize why it was
a bad idea to ignore null pointers returned by malloc(). If the
programmer never realizes this, then the sooner and more frequently
the programs fail, the sooner the programmer will be fired and
replaced with someone more competent. Everyone who deserves to be will
be better off as a result. Even the fired programmer will be better
off in the long run, because getting fired will give the ex-programmer
an opportunity to change to a career which doesn't require as much
attention to detail as programming, which should make the ex-
programmer happier in the long run.

Competent programmers check for failed memory allocation, and take
appropriate action. If allocation fails, their programs will either
fail gracefully (NOT crash-and-burn), or they will free up memory
elsewhere and retry the allocation. Your proposed change would cause
competently written programs that rely upon the failure indication to
fail without warning, in order to provide very questionable protection
for incompetently written code.

Mark McIntyre

unread,
Nov 27, 2007, 5:49:00 PM11/27/07
to
CJ wrote:
> We were discussing implementing malloc(), in particular the following
> situation.
>
> Suppose the user requests 1Mb of memory. Unfortunately, we only have
> 512Kb available. In this situation, most mallocs() would return null.
> The huge majority of programmers won't bother to check malloc() failure
> for such a small allocation, so the program will crash with a SIGSEGV as
> soon as the NULL pointer is dereferenced.
>
> So why not just return a pointer to the 512Kb that's available?

Yike. Take this to the extreme. Imagine you only have one byte
available. Why not just return a pointer to that? How much use is /that/?

>It's


> quite possible that the user will never actually write into the upper
> half of the memory he's allocated,

Its also quite possible the programmer knew how much memory he needed,
and will be justifiably annoyed when his programme starts randomly failing.

Dik T. Winter

unread,
Nov 27, 2007, 6:58:30 PM11/27/07
to
In article <slrnfkomlu...@nospam.invalid> CJ <nos...@nospam.invalid> writes:
...

> The idea is that instead of a guaranteed crash, isn't it better to make
> a last-ditch effort to save the program's bacon by returning a pointer
> to what memory there _is_ available? If the program's going to crash
> anyway, it's got to be worth a shot.

In that case there is no way a program can actually *check* whether there
was sufficient memory, which a well written program should do. The result
is probably a fault, but all kinds of other nasty things can happen, like
completely wrong output.

J. J. Farrell

unread,
Nov 27, 2007, 7:39:40 PM11/27/07
to
CJ wrote:
> On 26 Nov 2007 at 22:34, Eric Sosman wrote:
>> Not only does this implementation avoid the processing
>> overhead of maintaining potentially large data structures
>> describing the state of memory pools, but it also reduces
>> the "memory footprint" of every program that uses it, thus
>> lowering page fault rates, swap I/O rates, and out-of-memory
>> problems.
>
> All very amusing, but I think you and some of the other posters have
> misunderstood the context of the discussion.
>
> Of course an implementation of malloc should return a pointer to as much
> memory as requested whenever that's possible! The discussion was about
> the rare failure case. Typically, programs assume malloc returns a
> non-null pointer, so if it returns null on failure then the program will
> crash and burn.

Why do you think such incompetently written buggy programs are "typical"?

> The idea is that instead of a guaranteed crash, isn't it better to make
> a last-ditch effort to save the program's bacon by returning a pointer
> to what memory there _is_ available? If the program's going to crash
> anyway, it's got to be worth a shot.

No, blatantly and obviously not. It is better for the programmer's
incompetence to cause the program to stop execution immediately rather
than allow it to go on with random behaviour doing who knows how much
damage to what. How does, for example, corrupting a few gigabytes of
valuable archive data "save the program's bacon"?

Malcolm McLean

unread,
Nov 28, 2007, 5:40:37 PM11/28/07
to
"J. J. Farrell" <j...@bcs.org.uk> wrote in message

>
> No, blatantly and obviously not. It is better for the programmer's
> incompetence to cause the program to stop execution immediately rather
> than allow it to go on with random behaviour doing who knows how much
> damage to what. How does, for example, corrupting a few gigabytes of
> valuable archive data "save the program's bacon"?
>
It does depend what the program is. For instance with a videogame there is
no point exiting with an error message. You might as well ignore the error,
and there's a chance the player won't notice. If he does notice, it won't
spoil his enjoyment any more than an exit.
Unless of course you manage to corrupt his saved character which he's spent
two years of solid gameplay bringing up to a high level. Then people get
annoyed.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

MisterE

unread,
Nov 28, 2007, 1:40:17 AM11/28/07
to

> The worst thing that can happen is that the programmer _does_ write to
> the end of the mallocated block. In this case, either there's a SIGSEGV
> again (no worse off than before), or if the 512Kb is in the middle of
> the heap malloc() is drawing from then the writes might well succeed,
> and the program can continue albeit with some possible minor data
> corruption.
>
> Do any implementations of malloc() use a strategy like this?

omg

this is the dumbest thing I have read on the internet


Keith Thompson

unread,
Nov 28, 2007, 9:34:31 PM11/28/07
to

Even in that context, the proposed malloc implementation where a
request for 1024 kbytes can allocate just 512 kbytes and not report an
error doesn't seem particularly useful.

And if it is useful, you can always implement it on top of malloc *and
call it something else*.

--
Keith Thompson (The_Other_Keith) <ks...@mib.org>
Looking for software development work in the San Diego area.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

christian.bau

unread,
Nov 29, 2007, 12:08:40 PM11/29/07
to
On Nov 28, 10:40 pm, "Malcolm McLean" <regniz...@btinternet.com>
wrote:

> It does depend what the program is. For instance with a videogame there is
> no point exiting with an error message. You might as well ignore the error,
> and there's a chance the player won't notice. If he does notice, it won't
> spoil his enjoyment any more than an exit.

I once played "Who wants to be a millionaire" on a Playstation. The
program crashed at the £8,000 question. It would have been more
enjoyable if the program had crashed immediately.
