
When to check the return value of malloc


sandeep
May 15, 2010, 2:52:59 PM
Hello friends~~

Think about malloc.

Obviously, for tiny allocations like 20 bytes to strcpy a filename,
there's no point putting in a check on the return value of malloc... if
there is so little memory then stack allocations will also be failing and
your program will be dead.

Whereas, if you're allocating a gigabyte for a large array, this might
easily fail, so you should definitely check for a NULL return.

So somewhere in between there must be a point where you stop ignoring the
return value, and start checking it. Where do you draw this line? It must
depend on whether you will deploy to a low memory or high memory
environment... but is there a good rule?

Regards~~

Tim Harig
May 15, 2010, 3:07:01 PM
On 2010-05-15, sandeep <nos...@nospam.com> wrote:
> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
> there's no point putting in a check on the return value of malloc... if
> there is so little memory then stack allocations will also be failing and
> your program will be dead.

Always check for error conditions. The heap and the stack use different areas
of memory, and the stack memory is likely already allocated. Therefore, it
is quite possible that you still have stack space left while you cannot
request any more from the heap.
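The point above can be seen in a minimal sketch (the helper name and the NULL-on-failure policy are mine, not from the post): even a tiny 20-odd-byte allocation gets checked, because heap exhaustion does not imply stack exhaustion.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: duplicate a filename, checking even this
 * tiny allocation.  Returns NULL on failure so the caller decides
 * what to do -- the heap can be exhausted while the stack is fine. */
char *copy_filename(const char *name)
{
    char *copy = malloc(strlen(name) + 1);
    if (copy == NULL)
        return NULL;
    strcpy(copy, name);
    return copy;
}
```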

Richard Heathfield
May 15, 2010, 3:13:06 PM
sandeep wrote:
> Hello friends~~
>
> Think about malloc.
>
> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
> there's no point putting in a check on the return value of malloc...

Obviously, for short trips like half a mile to the supermarket, there's
no point wearing a seatbelt.

> if
> there is so little memory then stack allocations will also be failing and
> your program will be dead.
>
> Whereas, if you're allocating a gigabyte for a large array, this might
> easily fail, so you should definitely check for a NULL return.
>
> So somewhere in between there must be a point where you stop ignoring the
> return value, and start checking it. Where do you draw this line?

I never waste my time bothering to check for a null pointer return from
malloc for any allocation under 1 byte. For 1 byte and over, I check it.

> It must
> depend on whether you will deploy to a low memory or high memory
> environment... but is there a good rule?

The good rule is: whenever you attempt to acquire an external resource,
don't assume you succeeded - check.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within

Tim Harig
May 15, 2010, 3:33:40 PM

BTW, if you are wondering how you can possibly go on without any more heap
memory: most users prefer a message telling them that you cannot perform an
operation because of insufficient memory rather than simply having the program
crash with a segfault. It also makes your troubleshooting and debugging
much easier.

Eric Sosman
May 15, 2010, 3:48:13 PM
On 5/15/2010 2:52 PM, sandeep wrote:
> Hello friends~~
>
> Think about malloc.
>
> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
> there's no point putting in a check on the return value of malloc...

Obviously, you are ignorant of the Sixth Commandment. The text
of all ten Commandments, along with learned commentary can be found
at <http://www.lysator.liu.se/c/ten-commandments.html>.

> if
> there is so little memory then stack allocations will also be failing and
> your program will be dead.

On many systems -- maybe even on most -- memory for auto variables
and memory for malloc() is drawn from different "pools," and one can
run out while the other still has ample space.

> Whereas, if you're allocating a gigabyte for a large array, this might
> easily fail, so you should definitely check for a NULL return.

Yes, you should definitely check for a NULL return.

> So somewhere in between there must be a point where you stop ignoring the
> return value, and start checking it. Where do you draw this line? It must
> depend on whether you will deploy to a low memory or high memory
> environment... but is there a good rule?

Yes. You can get away with not checking the value returned by
malloc(N) if N is evenly divisible by all prime numbers (you only
need to test divisors up to sqrt((size_t)-1); any larger primes can
be ignored). For all other values of N, you must check the value
returned by malloc() -- and similarly for calloc() and realloc(),
of course.

--
Eric Sosman
eso...@ieee-dot-org.invalid

bart.c
May 15, 2010, 3:47:26 PM

"sandeep" <nos...@nospam.com> wrote in message
news:hsmqib$c10$1...@speranza.aioe.org...

For a proper application, especially to be run by someone else on their own
machine, then you should check allocations of any size (and have the
machinery in place to deal with failures sensibly).

For anything else, where you don't expect a failure, or it is not a big
deal, then use a wrapper function around malloc(). That wrapper will itself
check, and abort in the unlikely event of a memory failure. But it means you
don't have to bother with it in your main code.

It is also possible to just call malloc() and assume it has worked. Your
experience will tell you when you can get away with that. But only do that
with your own programs...
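A minimal sketch of the wrapper described above (the name xmalloc is a common convention, not from the post): the check and the abort live in one place, so call sites stay clean.

```c
#include <stdio.h>
#include <stdlib.h>

/* Check-and-abort wrapper: callers never see NULL. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && size > 0) {
        fprintf(stderr, "xmalloc: allocation of %zu bytes failed\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}

/* usage: int *a = xmalloc(100 * sizeof *a);  -- no per-call check needed */
```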

--
Bartc

Seebs
May 15, 2010, 4:00:33 PM
On 2010-05-15, sandeep <nos...@nospam.com> wrote:
> Hello friends~~
>
> Think about malloc.
>
> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
> there's no point putting in a check on the return value of malloc... if
> there is so little memory then stack allocations will also be failing and
> your program will be dead.

Not necessarily true -- there is no reason to assume that "stack allocations"
and malloc() are using the same pool of memory. Furthermore, it's quite
possible for malloc to fail, not because 20 bytes aren't available, but
because it can't get enough space for the 20 bytes plus overhead.

> So somewhere in between there must be a point where you stop ignoring the
> return value, and start checking it. Where do you draw this line? It must
> depend on whether you will deploy to a low memory or high memory
> environment... but is there a good rule?

There is -- ALWAYS check.

-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!

Richard Bos
May 15, 2010, 4:25:41 PM
sandeep <nos...@nospam.com> wrote:

> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
> there's no point putting in a check on the return value of malloc...

Wrong.

Richard

Geoff
May 15, 2010, 4:30:35 PM
On Sat, 15 May 2010 18:52:59 +0000 (UTC), sandeep <nos...@nospam.com>
wrote:

>is there a good rule?

Yes. ALWAYS check for error returns.

What you do with an error depends on your application. If it's an
unpublished personal utility an error exit might be sufficient.
Otherwise, more sophisticated error diagnostic messages or recovery
might be needed for larger or published (production) programs.

christian.bau
May 15, 2010, 4:32:00 PM
There are implementations where malloc (n) returns a non-null pointer,
but when you try to use the allocated memory, the operating system
takes your program down. Not nice.

On the other hand, say you are using a Mac with 4 GB RAM and 500 GB
free on your hard drive, and you are writing a 64 bit application. And
you allocate and use ever growing amounts of memory in reasonably
small chunks without checking whether malloc returns NULL. At some
point you exceed the available RAM, and virtual memory starts swapping
data. In theory, you could use 500 GB of malloc'ed memory that way. In
practice, once you start swapping with 4 GB of RAM, things will be so
slow, you will never reach the point where malloc fails in the
lifetime of your computer. That is assuming that malloc will return
NULL when the OS runs out of swap space and not fail in some way.

Gene
May 15, 2010, 4:34:01 PM

As has been said, in production code, you always check. However this
does not mean you have to code a specific check for each allocation.

The usual approach is to define a wrapper around malloc() that checks for
null returns and deals with them through a callback mechanism, where
presumably resources are freed so the allocation can succeed, and/or a
longjmp() that terminates the application gracefully.

One thing that hasn't been mentioned about heap storage is
fragmentation. Even if you are allocating only 20 bytes, malloc() can
still fail with (theoretically) 19/20 = 95% of memory free. However
improbable that is, the moral is that when heap allocation fails, an app
can sometimes keep running by compacting the heap. If you aren't checking
each allocation, this doesn't work.
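The callback scheme described above might be sketched like this (all names are illustrative, not from the post; a real version could longjmp to a graceful shutdown instead of calling exit):

```c
#include <stdio.h>
#include <stdlib.h>

static void (*low_memory_handler)(void);  /* e.g. frees caches */

void set_low_memory_handler(void (*fn)(void))
{
    low_memory_handler = fn;
}

void *checked_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && low_memory_handler != NULL) {
        low_memory_handler();   /* give the app a chance to free memory */
        p = malloc(size);       /* one retry after cleanup */
    }
    if (p == NULL && size > 0) {
        fputs("checked_malloc: out of memory\n", stderr);
        exit(EXIT_FAILURE);     /* or longjmp for a graceful shutdown */
    }
    return p;
}
```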

sandeep
May 15, 2010, 5:02:36 PM
bart.c writes:

> "sandeep" <nos...@nospam.com> wrote in message
> news:hsmqib$c10$1...@speranza.aioe.org...
>> Hello friends~~
>>
>> Think about malloc.
>>
>> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
>> there's no point putting in a check on the return value of malloc... if
>> there is so little memory then stack allocations will also be failing
>> and your program will be dead.
>>
>> Whereas, if you're allocating a gigabyte for a large array, this might
>> easily fail, so you should definitely check for a NULL return.
>>
>> So somewhere in between there must be a point where you stop ignoring
>> the return value, and start checking it. Where do you draw this line?
>> It must depend on whether you will deploy to a low memory or high
>> memory environment... but is there a good rule?
>
> For a proper application, especially to be run by someone else on their
> own machine, then you should check allocations of any size (and have the
> machinery in place to deal with failures sensibly).
>
> For anything else, where you don't expect a failure, or it is not a big
> deal, then use a wrapper function around malloc(). That wrapper will
> itself check, and abort in the unlikely event of a memory failure. But
> it means you don't have to bother with it in your main code.

This is a good idea. I have just made a clever macro to do this - not as
easy as it seems due to void use problems and need for a temporary.

static void* __p;
#define safeMalloc(x) ((__p=malloc(x))?__p:\
(exit(printf("unspecified error")),(void*)0))

Eric Sosman
May 15, 2010, 5:12:28 PM

I often use this pair of functions in toy programs:

    void crash(const char *message) {
        if (message != NULL)
            perror(message);
        exit(EXIT_FAILURE);
    }

    void *getmem(size_t bytes) {
        void *new = malloc(bytes);
        if (new == NULL && bytes > 0)
            crash("malloc");
        return new;
    }

Note the limitation to "toy programs." In larger programs, low-
level functions like memory allocators lack information about the
context in which a failure occurs, and so can't make informed
decisions about what should be done. This getmem() is far too
ill-mannered and abrupt for "serious" use, because it can't tell
the difference between a recoverable failure ("Not enough memory;
close some windows and try again"), and an irrecoverable failure
("Not enough memory; shutting down"). Even in the latter case, the
program may want to do a few last-gasp things like saving the user's
work to disk before going away to push up daisies; a getmem() that
just called exit() and ripped the rug from underneath the rest of
the program would be unwelcome indeed.

... and a program that simply didn't check at all but died
with a SIGSEGV or equivalent would be even worse. Even my high-
handed getmem() allows for the possibility of atexit() routines,
while dereferencing NULL does not.
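One way to keep the policy decision with the caller, as argued above, is an allocator that reports the failure but still returns NULL (a sketch; the name is mine, not from the post):

```c
#include <stdio.h>
#include <stdlib.h>

/* Reports the failure but leaves the decision -- retry, degrade,
 * save the user's work, or exit -- to the caller. */
void *try_getmem(size_t bytes)
{
    void *p = malloc(bytes);
    if (p == NULL && bytes > 0)
        fprintf(stderr, "warning: allocation of %zu bytes failed\n", bytes);
    return p;
}

/* usage: a failed allocation for an optional cache can simply
 * disable the cache instead of killing the program. */
```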

--
Eric Sosman
eso...@ieee-dot-org.invalid

Seebs
May 15, 2010, 5:05:03 PM
On 2010-05-15, sandeep <nos...@nospam.com> wrote:
> This is a good idea. I have just made a clever macro to do this - not as
> easy as it seems due to void use problems and need for a temporary.

> static void* __p;
> #define safeMalloc(x) ((__p=malloc(x))?__p:\
> (exit(printf("unspecified error")),(void*)0))

Write a function, not a macro, it'll be easier to make effective use of.

Also.

1. Use fprintf(stderr,...) for error messages.
2. Terminate error messages with newlines.
3. Why the *HELL* would you use "unspecified error" as the error message
when you have ABSOLUTE CERTAINTY of what the error is? Why not:
   fprintf(stderr, "Allocation of %lu bytes failed.\n", (unsigned long) x);

The first two are comprehensible mistakes. The third isn't. Under what
POSSIBLE circumstances could you think that "unspecified error" is a better
diagnostic than something that in some way indicates that a memory allocation
failed?

I really don't understand this one. Please try to explain your reasoning,
because I regularly encounter software that fails with messages like these,
and I've always assumed it was something people did out of active malice --
they hate their users and want the users to suffer. If there is any other
possible reason to, given total certainty of what the problem is, refuse
to hint at it, I do not know what it is.

Ian Collins
May 15, 2010, 5:14:56 PM
On 05/16/10 09:02 AM, sandeep wrote:

> bart.c writes:
>>
>> For anything else, where you don't expect a failure, or it is not a big
>> deal, then use a wrapper function around malloc(). That wrapper will
>> itself check, and abort in the unlikely event of a memory failure. But
>> it means you don't have to bother with it in your main code.
>
> This is a good idea. I have just made a clever macro to do this - not as
> easy as it seems due to void use problems and need for a temporary.
>
> static void* __p;
> #define safeMalloc(x) ((__p=malloc(x))?__p:\
> (exit(printf("unspecified error")),(void*)0))

Why mess about with a macro when a function would do?

--
Ian Collins

sandeep
May 15, 2010, 5:21:19 PM
Seebs writes:
> Write a function, not a macro, it'll be easier to make effective use of.

??
How?

> Also.
>
> 1. Use fprintf(stderr,...) for error messages.
> 2. Terminate error messages with newlines.
> 3. Why the *HELL* would you use "unspecified error" as the error message
>    when you have ABSOLUTE CERTAINTY of what the error is? Why not:
>    fprintf(stderr, "Allocation of %lu bytes failed.\n", (unsigned long) x);
>
> The first two are comprehensible mistakes. The third isn't. Under what
> POSSIBLE circumstances could you think that "unspecified error" is a
> better diagnostic than something that in some way indicates that a
> memory allocation failed?
>
> I really don't understand this one. Please try to explain your
> reasoning, because I regularly encounter software that fails with
> messages like these, and I've always assumed it was something people did
> out of active malice -- they hate their users and want the users to
> suffer. If there is any other possible reason to, given total certainty
> of what the problem is, refuse to hint at it, I do not know what it is.

Many users will only be confused by technical error messages about memory
allocation etc. It's best not to get into unwanted details - the user
doesn't know about how my program allocates memory, it just needs to know
there was an error that needs a restart. I think in books they call it
leaking abstractions.

sandeep
May 15, 2010, 5:22:07 PM

Obviously for efficiency! malloc may be called many times in the course
of a program.

Richard Heathfield
May 15, 2010, 5:25:41 PM

If you're calling malloc anyway, function call overhead is the least of
your worries. You're trying to micro-optimise. Stop it at once.

"If you must do this damn silly thing, don't do it in this damn silly
way." - Sir Humphrey Appleby

Seebs
May 15, 2010, 5:25:47 PM
On 2010-05-15, sandeep <nos...@nospam.com> wrote:
> Seebs writes:
>> Write a function, not a macro, it'll be easier to make effective use of.

> ??
> How?

This question is too incoherent to answer.

What part of "a function" do you have trouble with? You know how to write
functions, right? You know how to call them, right?

Try adding some verbs. Questions like "how do I declare a function" or "how
do I use a function" might begin to be answerable. An explanation of what
you're having trouble with, specifically, would be even better.

> Many users will only be confused by technical error messages about memory
> allocation etc. It's best not to get into unwanted details - the user
> doesn't know about how my program allocates memory, it just needs to know
> there was an error that needs a restart. I think in books they call it
> leaking abstractions.

Wrong.

Users who are "confused" by an error message can accept that they got "an
error". MANY users, however, know enough to recognize that "out of memory"
is different from "file not found".

Stop trying to outsmart the user. Tell the truth, always. It's fine to
stop short of a register and stack dump, but at least tell people honestly
and accurately what happened.

Where did you get this bullshit? The above paragraph is by far the stupidest
thing I've ever seen you write. It's not just a little wrong; it's not just a
little stupid; it's not just a little callous or unthinking. It's one of
the most thoroughly, insidiously, wrong, stupid, and evil things you could
start thinking as a programmer.

Stop. Rethink. THINK AHEAD. For fuck's sake, JUST THINK EVEN A TINY BIT
AT ALL.

1. Users will report the error message to you. You need that error message
to give you the information you need to track down the problem.

2. "Error that needs a restart" is nearly always bullshit. If the program
is running out of memory because you made a mistake causing it to try to
allocate 4GB of memory on a 2GB machine, "restart" will not fix it. Nothing
will fix it until the user finds out what's wrong and submits a bug report
allowing the developer to fix it.

3. "Error that needs a restart" is at best a surrender to inability to fix
bugs. If restarting makes the error "go away", you have a serious bug that
you ought to fix.

4. The chances are very good that many of the prospective users of any
program will, in fact, be able to program at least a little, or will have
basic computer literacy. From what I've seen of your posts in this newsgroup,
I'd guess a large proportion of the users of any software you write will,
in fact, know more about computers than you do.

5. Trying to avoid "confusing" people is power-mad idiocy. Your job here
is not to imagine yourself some kind of arbiter-of-technology, preserving the
poor helpless idiots from the dangers of actual information. Your job is
to make a program which works as well as possible, and that includes CLEAR
statements of what failed.

6. You can never make a message so clear that every conceivable user will
understand it. However, a user who won't understand a simple message won't
understand an imprecise or flatly false one, either. There does not exist
a user who will have a clear idea of what went wrong and be able to react
accordingly when confronted with "unspecified error", but who will be utterly
paralyzed like a deer in headlights when confronted with "memory allocation
failed". As a result, even if we restrict our study to the set of users
who simply have no clue what those words mean, you STILL gain no benefit,
at all, from the bad message. But in the real world, you hurt many of your
users by denying them the information that would allow them to address
the issue (say, by closing other applications so that more memory becomes
available).

7. Again, don't lie to people. If you know it's a memory allocation failure,
say so.

Let me put it this way: If I saw that error message in a code sample
submitted by an applicant, that would be the end of the interview process.
I would never hire someone who wrote that. I would not want to take on
the full-time job of correcting badly-written error messages from a developer
who holds users in such contempt.

And make no mistake. By asserting that users will "just be confused", you
are indeed showing contempt for them. You're ignoring the fact that users
are, by and large, humans. If they can read the text "unspecified error",
they are well along the way to being smart enough to understand a lot more
than just that, and they probably are.

Furthermore, "many" is not "all". If writing an informative error message
required a week's effort, I could understand choosing not to. But in this
case, writing an informative message requires NO EFFORT AT ALL. You don't
have to do anything better to provide an informative message than to provide
an uninformative one. Sure, it could take more effort to produce a really
good message, such as "Allocation of 256 bytes failed at list.c, line 623".
But simply writing "memory allocation failed" is not noticeably harder than
writing "unspecified error". And yet, for every one of your users who *isn't*
a terrified, panicked, idiot, you've just condemned them to a useless error
message because you *think* they *might* be too stupid or ignorant to own
a computer.

In short, this is a catastrophically bad design approach. Abandon it. Reject
it. Anything you think you know which caused you to adopt this is probably
also utterly wrong, and dangerously so, and until you root all the madness
out and start afresh, your code will be dangerous, untrustworthy, and based
on a bad attitude.

Respect the user. Worst case, the user doesn't get it, but they're no worse
off. You could learn a lot from looking at, say, Blizzard Entertainment.
Their userbase consists largely of marginally-literate teenagers. And yet,
when they encounter errors, they give clear, detailed, error messages which
can be forwarded to their support team to allow the developers to fix
problems.

I have spent an occasional idle afternoon reading their support forums. I
have never, in all that time, seen a user say "help, World of Warcraft
crashed, and it used a word I don't know, is it okay for me to restart
it?"

Seebs
May 15, 2010, 5:28:00 PM
On 2010-05-15, sandeep <nos...@nospam.com> wrote:
> Obviously for efficiency! malloc may be called many times in the course
> of a program.

Please stop trying to outsmart the compiler.

The "cost" of using a function instead of a macro is likely to be so small
that you can't even measure it. If there is even a cost at all, which there
may not be.

Premature optimization is the root of a great deal of evil. Do not try
to optimize something like this until you have PROVEN that it is actually
a significant bottleneck. In general, it won't be. In fact, I do not
think I have ever, in something like twenty years of active use of C, seen
a single program in which such a thing would have been a measurable component
(say, over one hundredth of one percent) of runtime.

Geoff
May 15, 2010, 5:38:09 PM
On Sat, 15 May 2010 21:21:19 +0000 (UTC), sandeep <nos...@nospam.com>
wrote:

>Many users will only be confused by technical error messages about memory
>allocation etc.

How many times does "unspecified error" appear in your programs? How
many different causes are there? How is your product maintenance staff
expected to respond to reports of "unspecified error"?

>It's best not to get into unwanted details - the user
>doesn't know about how my program allocates memory, it just needs to know
>there was an error that needs a restart.

How many man-hours are you going to spend debugging "unspecified
error" reports from the field? How many man-hours and support costs
are you going to expend on non-informative error messages from
programs that failed for "unspecified" reasons that can't be traced or
reproduced? Gee, I remember an operating system that used to crash
regularly with an error code message. The screen was called Guru
Meditation. Users hated it.

>I think in books they call it leaking abstractions.

Stop reading those books immediately.

Ian Collins
May 15, 2010, 5:44:07 PM

Obviously any compiler worth using will inline such a trivial function away.
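For instance (a sketch, not from the post), declaring the wrapper `static inline` in a header gives the compiler every opportunity to expand it at each call site, removing even the theoretical overhead a macro was meant to avoid:

```c
#include <stdio.h>
#include <stdlib.h>

/* The compiler can expand this at each call site; there is no
 * reason to reach for a macro.  The name is illustrative. */
static inline void *safe_malloc_inline(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && size > 0) {
        fputs("memory allocation failure\n", stderr);
        exit(EXIT_FAILURE);
    }
    return p;
}
```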

--
Ian Collins

Keith Thompson
May 15, 2010, 5:59:35 PM
sandeep <nos...@nospam.com> writes:
[...]

> This is a good idea. I have just made a clever macro to do this - not as
> easy as it seems due to void use problems and need for a temporary.
>
> static void* __p;
> #define safeMalloc(x) ((__p=malloc(x))?__p:\
> (exit(printf("unspecified error")),(void*)0))

Names starting with two underscores, or with an underscore and
a capital letter, are reserved to the implementation for all
purposes. Other names starting with an underscore are reserved in
more limited ways. Basically you should *never* define your own
identifier starting with an underscore (unless you're writing a C
implementation yourself).

Seebs already covered the "unspecified error" message, and I agree
with him. Just say "memory allocation failure\n" (note the trailing
newline!) or something similar. Any naive users who don't understand
what that means will just read it as "unspecified error" anyway,
and users who do understand it just might be able to do something
about it (such as shutting down other programs and trying again,
or running the program with a smaller input file, or ...).

You might also consider adding a parameter to allow the caller to
pass in a string saying what was going on when the allocation failed.

printf() returns the number of characters printed. Passing this
value to exit() makes no sense at all. What is an exit status of
17 supposed to mean?

    void *safeMalloc(size_t size)
    {
        void *const result = malloc(size);
        if (result == NULL) {
            fputs("Memory allocation failure\n", stderr);
            exit(EXIT_FAILURE);
        }
        else {
            return result;
        }
    }

If you're concerned about performance, just declare it inline.
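The suggestion above of a caller-supplied context string might look like this (the extended name and the message format are hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>

/* Like safeMalloc, but the caller says what was being attempted,
 * so the failure message is informative to users and maintainers. */
void *safeMallocCtx(size_t size, const char *what)
{
    void *p = malloc(size);
    if (p == NULL && size > 0) {
        fprintf(stderr, "memory allocation failure (%zu bytes) while %s\n",
                size, what);
        exit(EXIT_FAILURE);
    }
    return p;
}

/* usage: char *buf = safeMallocCtx(4096, "reading the input file"); */
```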

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Keith Thompson
May 15, 2010, 6:03:21 PM
Seebs <usenet...@seebs.net> writes:
[...]

> Stop trying to outsmart the user. Tell the truth, always. It's fine to
> stop short of a register and stack dump, but at least tell people honestly
> and accurately what happened.
>
> Where did you get this bullshit? The above paragraph is by far the stupidest
> thing I've ever seen you write. It's not just a little wrong; it's not just a
> little stupid; it's not just a little callous or unthinking. It's one of
> the most thoroughly, insidiously, wrong, stupid, and evil things you could
> start thinking as a programmer.
>
> Stop. Rethink. THINK AHEAD. For fuck's sake, JUST THINK EVEN A TINY BIT
> AT ALL.

Um, Seebs, maybe you should try decaf?

Keith Thompson
May 15, 2010, 6:10:40 PM
Geoff <ge...@invalid.invalid> writes:
> On Sat, 15 May 2010 21:21:19 +0000 (UTC), sandeep <nos...@nospam.com>
> wrote:
>
>>Many users will only be confused by technical error messages about memory
>>allocation etc.
[...]

>>I think in books they call it leaking abstractions.
>
> Stop reading those books immediately.

At least until you can understand what they're saying. I rather
doubt that books discussing "leaking abstractions" (a useful concept
and something to avoid) would recommend an "unspecified error"
message over "memory allocation failed".

Tim Harig
May 15, 2010, 6:25:37 PM
On 2010-05-15, sandeep <nos...@nospam.com> wrote:
> bart.c writes:
>> "sandeep" <nos...@nospam.com> wrote in message
>> news:hsmqib$c10$1...@speranza.aioe.org...
>>> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
>>> there's no point putting in a check on the return value of malloc... if
>>> there is so little memory then stack allocations will also be failing
>>> and your program will be dead.
[SNIP]

>> For a proper application, especially to be run by someone else on their
>> own machine, then you should check allocations of any size (and have the
>> machinery in place to deal with failures sensibly).

Note this. As bart.c writes, not all insufficient-memory errors are causes
for termination. It may mean that the program cannot use one specific
feature due to its memory requirements; but that doesn't mean it cannot
do most anything else, that it might not be able to do so in the future,
or even that you might not be able to revert to a less optimized but less
memory-hungry algorithm. If there is user data on the line, the user is
going to be seriously pissed if your program closes without attempting to
save any data that it can. In general, most programs are expected to run
as best they can in spite of memory availability.

>> For anything else, where you don't expect a failure, or it is not a big
>> deal, then use a wrapper function around malloc(). That wrapper will
>> itself check, and abort in the unlikely event of a memory failure. But
>> it means you don't have to bother with it in your main code.
> This is a good idea. I have just made a clever macro to do this - not as
> easy as it seems due to void use problems and need for a temporary.

Unless you are in a severely restricted environment, automatic variables
will likely be allocated in a different section of memory (the stack) than
dynamically allocated variables (on the heap). Just because one area is out
of memory doesn't mean that the other is. Therefore, it is reasonable to
create a function (which requires at least a pointer on the stack) and even
to use temporary stack-allocated variables if necessary.

Also note that an out like this should be used judiciously. Whether or not
to terminate should be based upon the context of the program and whether
the program can otherwise function, not on how much memory was requested.
Most functions are far better off returning an error where it can be handled
by the main part of the program, which can make better decisions about which
errors are actually terminal.

> static void* __p;
> #define safeMalloc(x) ((__p=malloc(x))?__p:\
> (exit(printf("unspecified error")),(void*)0))

As has been stated by others, "unspecified error" is a rather poor error
message. Something like "Insufficient memory to continue with operation"
is much more informative to the user. This isn't leaking internals, as
most users *do* realize that programs require memory to operate. If you
are concerned that they do not, you could add a suggestion such as "Try
closing some windows and try again."

It is also useful to add a debugging version using the preprocessor
which provides more information while testing and debugging:

    #ifndef NDEBUG
        fprintf(stderr, "Unable to allocate memory to copy strings at line %d.\n", __LINE__);
    #else
        fprintf(stderr, "Insufficient memory. Try closing some windows and try again.\n");
    #endif

A function is much better suited to handle the kind of complexity that is
likely to arise from this operation. Premature optimization is a bad thing
and often turns out to be unsubstantiated, or a performance liability
because of failed assumptions. Never second-guess the optimizing abilities
of the compiler. Only optimize *after* you have confirmed that something is
actually a performance problem, with hard data to support that claim.

Seebs
May 15, 2010, 6:37:11 PM
On 2010-05-15, Keith Thompson <ks...@mib.org> wrote:
> Um, Seebs, maybe you should try decaf?

Hah! I've been on decaf for ages.

I guess it's just a pet peeve. There is little in this world more infuriating
than the scenario, which I'd guess everyone has been through:

You have some sort of time pressure or deadline. A piece of software fails.
And it gives you NO CLUE AT ALL what went wrong or what might be done to
address it. The error message is absent or uninformative, such that you
can't search on the message and get suggestions or ideas.

I have had programs fail in hundreds of ways, which required anything from
upgrading a kernel to switching from one kind of network to another in order
to address them. I have had programs destroy data.

But the only one that consistently, really, infuriates me, and makes me want
to whack developers upside the head with a lead pipe, is when the failure
has clearly been caught, and intercepted, and replaced with a completely
useless error message. I would rather get a segmentation fault than the
message "unspecified error"; at least I could *debug* that.

Now, to be fair, I'm not exactly at the typical "end user" level... But I
have watched people who have trouble with questions like "and what happens
when you double-click that little picture that looks like a piece of paper?"
interacting with software, and they have the same response. They can accept
that a program's error message will make no sense, but if you show them a
message which is completely legible and clearly designed to hide what went
wrong from them, they get pretty mad. It's insulting.

Sandeep's questions have led me to think that this is a person who would
love to become a good and effective programmer. For all that I think it's
absolutely ridiculous to use a macro instead of a function "for efficiency",
thinking about efficiency, while usually the wrong thing to do, is the
kind of thing that suggests someone who *wants* to become good at this stuff.

Whereupon, it's very important to address the importance of understanding
that, even if your target market is developmentally disabled people, it is
almost ALWAYS a horrific mistake to treat the user like an idiot. There is
no surer path to the uninstaller than wasting the user's time by saying "I
could tell you exactly what went wrong, but I don't think you're smart enough
to do anything about it, so I'll just lie to you."

William Ahern

May 15, 2010, 8:50:18 PM

Especially with strong hints like `static inline':

#define malloc(size) xmalloc((size), __func__, __LINE__)

static inline void *xmalloc(size_t size, const char *fn, int ln) {
void *p;
if ((p = (malloc)(size)))
return p;
fprintf(stderr, "unrecoverable system error at %s:%d: %s\n",
fn, ln, strerror(errno));
abort();
}

James Dow Allen

May 15, 2010, 9:51:29 PM
On May 16, 4:22 am, sandeep <nos...@nospam.com> wrote:

> Ian Collins writes:
> > Why mess about with a macro when a function would do?

> Obviously for efficiency! malloc may be called many times in the course
> of a program.

Others have given useful correct responses but as someone who
prides himself on writing efficient code, I'd like to answer
this part.

Calling malloc() enough to matter and then worrying about
micro-optimizations in your interface to it is like
putting on your best running-shoes just to ride a slow-moving
elevator.

James Dow Allen

Eric Sosman

May 15, 2010, 10:21:02 PM
On 5/15/2010 5:21 PM, sandeep wrote:
>
> Many users will only be confused by technical error messages about memory
> allocation etc. It's best not to get into unwanted details - the user
> doesn't know about how my program allocates memory, it just needs to know
> there was an error that needs a restart. I think in books they call it
> leaking abstractions.

*I* call it "Sandeep imagines he's smarter than everybody who
might ever use his program."

--
Eric Sosman
eso...@ieee-dot-org.invalid

Dennis (Icarus)

May 15, 2010, 10:12:25 PM

"sandeep" <nos...@nospam.com> wrote in message
news:hsn38f$rqg$1...@speranza.aioe.org...

You have the message, which folks can then use to find more information in
your help.

If you're running a 64-bit program and try to do something requiring 6 GB
when the RAM and swap total 3 GB, you'll run out of memory.
Telling the user what happened, and how to fix it (increasing RAM or the
swap file), will let them adjust the system settings and continue.
Otherwise they may well decide it's crap and uninstall it, then tell their
friends that it's crap, and so on.
Word of mouth spreads pretty quickly in this day of Twitter, Facebook,
blogs, Usenet, ....

Dennis

Dennis (Icarus)

May 15, 2010, 10:21:05 PM

"Seebs" <usenet...@seebs.net> wrote in message
news:slrnhuu50r.4tq...@guild.seebs.net...

> On 2010-05-15, sandeep <nos...@nospam.com> wrote:
>> Obviously for efficiency! malloc may be called many times in the course
>> of a program.
>
> Please stop trying to outsmart the compiler.
>
> The "cost" of using a function instead of a macro is likely to be so small
> that you can't even measure it. If there is even a cost at all, which
> there
> may not be.

memset actually showed up as a bottleneck in one of the programs I had to
debug. It was being called 2,000,000,000 times or so.

Converting from memset to a for loop writing integers helped a bit, but the
real fix was to correct the logic error so that it wasn't called that many
times. The memset was part of initializing an object, which was being
constructed during processing but was only needed a fraction of the time.....

Dennis

Malcolm McLean

May 16, 2010, 2:33:28 AM
On May 15, 9:52 pm, sandeep <nos...@nospam.com> wrote:
>
> So somewhere in between there must be a point where you stop ignoring the
> return value, and start checking it. Where do you draw this line? It must
> depend on whether you will deploy to a low memory or high memory
> environment... but is there a good rule?
>
Where it's statistically more likely that the computer will break than
run out of memory, you've got a good case for ignoring malloc() failures.

There are other arguments for ignoring malloc() checks. One is that a
request for 0 bytes can return either a null pointer or a non-null pointer
to zero bytes. It's much more likely that a zero-byte request will
legitimately be made, without it being obvious at writing time that
zero-sized requests are possible, than it is that a tiny request will
fail. So you are quite likely to trigger a spurious out-of-memory message,
and worse, this won't show up in testing if the platform always returns a
non-null pointer for zero bytes. The correct test, if(request != 0 &&
ptr == 0), can be messy and is often not seen in production code.

The other issue is that if an allocation request for, say, a few bytes to
hold a file path fails, it's more likely that there is some bug in the
program causing the path size to be set to a garbage number than it is
that the program has actually run out of memory. Printing an "out of
memory" message is then misleading, and might be expensive. For instance,
the user might try to run the program on a bigger computer, costing him a
whole day's work to beg the bigger computer from the neighbouring
department, recompile and reinstall the program, set it all up ...


sandeep

May 16, 2010, 5:05:35 AM
Seebs writes:
> In short, this is a catastrophically bad design approach. Abandon it.
> Reject it. Anything you think you know which caused you to adopt this
> is probably also utterly wrong, and dangerously so, and until you root
> all the madness out and start afresh, your code will be dangerous,
> untrustworthy, and based on a bad attitude.

This was quite a long rant!! I think I see your point but you have to
imagine your grannie when Word crashes. I think she will not like to see
a message like
"malloc failed at src\lib\unicode\mapper\table.c:762"
I think that will confuse her!

Anyway I have adopted your suggestion, also used a function instead of a
macro, and built in some extra functionality. Now it will keep some memory
back to use later on if allocations start failing for added robustness.

void* safeMalloc(size_t x)
{
static void* emrgcy=0;
void* x1;
#define ESIZE 0x40000000uLL
if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
#undef ESIZE
printf("WARNING running in unstable mode, program may crash at any "
" time....Close open programs, allow more virtual memory or "
" install extra RAM");
if(!(x1=malloc(x))) {
if(emrgcy) {
free(emrgcy);
x1=malloc(x);
} else {
printf("Severe memory failure, program cannot continue at line "
#define STRGFY(x) #x
STRGFY(__LINE__)
#undef STRGFY
" stack dump follows");
abort();
exit(1);
}
}
return x1;
}

Seebs

May 16, 2010, 5:38:42 AM
On 2010-05-16, sandeep <nos...@nospam.com> wrote:
> Seebs writes:
>> In short, this is a catastrophically bad design approach. Abandon it.
>> Reject it. Anything you think you know which caused you to adopt this
>> is probably also utterly wrong, and dangerously so, and until you root
>> all the madness out and start afresh, your code will be dangerous,
>> untrustworthy, and based on a bad attitude.

> This was quite a long rant!! I think I see your point but you have to
> imagine your grannie when Word crashes. I think she will not like to see
> a message like
> "malloc failed at src\lib\unicode\mapper\table.c:762"
> I think that will confuse her!

Yes, but "unspecified error" will either:
1. Confuse her.
or
2. Infuriate her.

It's *necessarily* worse. It cannot possibly be better. It can only, at
best, be about as bad.

And again: Imagine that such users are a majority. Should you absolutely
cripple 25% of your users, infuriating them and treating them with contempt,
so that 75% of them will be completely unsure what just happened instead
of being completely unsure what just happened?

> Anyway I have adopted your suggestion, also used a function instead of a
> macro, and built in some extra functionality. Now it will keep some memory
> back to use later on if allocations start failing for added robustness.

That technique is... well, dubious at best. It doesn't necessarily work,
and may well make things worse.


> void* safeMalloc(size_t x)
> {
> static void* emrgcy=0;
> void* x1;
> #define ESIZE 0x40000000uLL
> if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
> #undef ESIZE

Let me guess. Someone told you to define symbolic names for constants,
right?

This is not how you do it.

1. If you're going to define a constant, define it and leave it defined.
2. Don't use a name starting with a capital E followed by another capital
letter, those are reserved for errno values.
3. If you're only using it once, don't feel like you have to #define it.
4. Don't use "uLL" on a constant that's unambiguously within the size
range of an ordinary signed long. You don't need any qualifier at all,
although in theory a system could exist where that value is too big for
size_t, in which case you'd be allocating 0 bytes.
5. Don't get so clever. Try:

if (!emrgcy) {
emrgcy = malloc(0x40000000);
if (!emrgcy) {
fprintf(stderr, "Uh-oh, failed to allocate spare memory.\n");
}
}
6. Don't allocate a GIGABYTE of memory like that -- all this does is
massively increase the chance of catastrophic failure, as a likely response
from a system which overcommits is to determine that your process allocated
a TON of memory, doesn't use most of it, and is probably the best candidate
for being killed out of hand. A megabyte or two, sure, I guess.
7. Actually, even then, this is just cargo cult stuff. Don't do it, it
won't help.

> printf("WARNING running in unstable mode, program may crash at any "
> " time....Close open programs, allow more virtual memory or "
> " install extra RAM");

This is a really poor message, because this is not an "unstable" mode,
it's the normal state of affairs, where you don't have a spare 1GB allocation.

> if(!(x1=malloc(x))) {
> if(emrgcy) {
> free(emrgcy);
> x1=malloc(x);
> } else {
> printf("Severe memory failure, program cannot continue at line "
> #define STRGFY(x) #x
> STRGFY(__LINE__)
> #undef STRGFY

Again, don't do this. With extremely rare exceptions, you should NEVER
be using #undef on something you just defined.

Also, you're still using plain printf for error messages, which is bad for
the same reasons it was last time.

> " stack dump follows");

So's the missing newline.

> abort();

So's the assumption that abort() gives a "stack dump" -- it may not.

> exit(1);
> }
> }
> return x1;
> }

Finally, you've made a few other mistakes. You're freeing emrgcy, but you
don't set it to NULL, so your check for it is unlikely to be useful. You
don't check malloc() after calling it.

In short, this is full of cargo-cult superstitions. Here's a slightly
more realistic effort:

void *
failsafe_malloc(size_t size) {
static void *failsafe = NULL;
void *ret;

if (!failsafe) {
failsafe = malloc(1024 * 1024);
}
ret = malloc(size);
if (!ret && failsafe) {
free(failsafe);
failsafe = NULL;
ret = malloc(size);
}
if (!ret) {
fprintf(stderr, "failed to allocate %lld bytes of memory.\n",
(long long) size);
#ifndef NDEBUG
abort();
#endif
}
return ret;
}

A few things to note:

1. Picked a size that's much less likely to cause an immediate catastrophic
failure.
2. Don't bother the user with warnings about the supposed "failsafe", since
it's basically a pointless superstition anyway.
3. No trying to show off using an elaborate combination of ?: and assignment
to set something up.
4. Error message is terse, simple, and doesn't clutter the user's world.
A user who knows what it means can use it, a user who doesn't at least gets
a message that Something Went Wrong.
5. abort() is conditional on NDEBUG, for consistency with assert()'s
behavior. (I don't use assert because it yields useless messages.)
6. failsafe is correctly set to NULL when freed, and future calls will
try to reallocate it (which may work if something large has been freed
in the mean time).
7. No #define, use once, #undef hackery, because that's annoying and
generally pointless.

Reading your code, I get the impression you're trying to aim for some kind
of code density, with cool tricks you've seen all thrown in together to
make the code look more impressive. Don't do that. Write the absolute
simplest code you can that clearly expresses what you're doing. You'll
have fewer bugs (if you'd written this more simply, I bet you'd have caught
that you never set emrgcy to NULL after freeing it, but might continue
to test it), and you'll have an easier time fixing things and adding features.

Phil Carmody

May 16, 2010, 6:55:51 AM
Seebs <usenet...@seebs.net> writes:
> On 2010-05-15, sandeep <nos...@nospam.com> wrote:
>> Seebs writes:
>>> Write a function, not a macro, it'll be easier to make effective use of.
>
>> ??
>> How?
>
> This question is too incoherent to answer.

Is it "How will it be easier to make effective use of?"

Answer - it won't have multiple-evaluation issues, and it both looks
and behaves like a function.

> What part of "a function" do you have trouble with? You know how to write
> functions, right? You know how to call them, right?
>
> Try adding some verbs. Questions like "how do I declare a function" or "how
> do I use a function" might begin to be answerable. An explanation of what
> you're having trouble with, specifically, would be even better.
>
>> Many users will only be confused by technical error messages about memory
>> allocation etc. It's best not to get into unwanted details - the user
>> doesn't know about how my program allocates memory, it just needs to know
>> there was an error that needs a restart. I think in books they call it
>> leaking abstractions.
>
> Wrong.
>
> Users who are "confused" by an error message can accept that they got "an
> error". MANY users, however, know enough to recognize that "out of memory"
> is different from "file not found".
>
> Stop trying to outsmart the user.

I think this error is called "out-dumbing the user".

Phil

--
I find the easiest thing to do is to k/f myself and just troll away
-- David Melville on r.a.s.f1

sandeep

May 16, 2010, 6:57:53 AM
Phil Carmody writes:
> Seebs <usenet...@seebs.net> writes:
>> On 2010-05-15, sandeep <nos...@nospam.com> wrote:
>>> Seebs writes:
>>>> Write a function, not a macro, it'll be easier to make effective use
>>>> of.
>>
>>> ??
>>> How?
>>
>> This question is too incoherent to answer.
>
> Is it "How will it be easier to make effective use of?"

Yes.

> Answer - it won't have multiple-evaluation issues, and it both looks and
> behaves like a function.

Multiple evaluation is very unlikely in this case. This answer looks
spurious to me.

Phil Carmody

May 16, 2010, 7:07:21 AM
Keith Thompson <ks...@mib.org> writes:
> Geoff <ge...@invalid.invalid> writes:
>> On Sat, 15 May 2010 21:21:19 +0000 (UTC), sandeep <nos...@nospam.com>
>> wrote:
>>
>>>Many users will only be confused by technical error messages about memory
>>>allocation etc.
> [...]
>>>I think in books they call it leaking abstractions.
>>
>> Stop reading those books immediately.
>
> At least until you can understand what they're saying. I rather
> doubt that books discussing "leaking abstractions" (a useful concept
> and something to avoid) would recommend an "unspecified error"
> message over "memory allocation failed".

Leaking *abstractions* sounds a lot better than leaking implementation
details, or leaking specifics.

E.g.:
"Error: out of memory (attempting to clone image buffer)"
may leak a couple of abstractions, but is way better than:
"Error: s_malloc(8192100) returned NULL, called from foo/bar/img.c:6742"
or
"Error: imgbuf_s binary buddy-heap has no free blocks"
IMHO.

Phil Carmody

May 16, 2010, 7:10:06 AM

Forget everything you've learnt.

Start again.

Do not pick up idiocy like the above next time.

Phil Carmody

May 16, 2010, 7:11:51 AM

An object being 'constructed'? Are you sure you were using C?

Tim Harig

May 16, 2010, 7:20:13 AM
Sorry, Seebs, much of this is directed at sandeep, not you. You have made
good points. It is just easier to reply here so as not to duplicate some
things.

On 2010-05-16, Seebs <usenet...@seebs.net> wrote:
> On 2010-05-16, sandeep <nos...@nospam.com> wrote:
>> This was quite a long rant!! I think I see your point but you have to
>> imagine your grannie when Word crashes. I think she will not like to see
>> a message like
>> "malloc failed at src\lib\unicode\mapper\table.c:762"
>> I think that will confuse her!

Yes, but "unspecified error" is no less confusing.

The user should receive a message that is useful enough for them to have
a basic idea of what is happening giving them an idea of how to fix it
without a lot of technical reference or inside knowledge of the program.
What they really need to know is "Insufficient Memory." That isn't too
technical for almost any user. Most people, even grannies, know that
programs need memory to run.

Somebody debugging the program would definitely prefer the "malloc failed"
message, as it gives them some clue as to where the failure happened within
the program. This could be useful for tracking down other problems:
runaway functions, etc.

> And again: Imagine that such users are a majority. Should you absolutely
> cripple 25% of your users, infuriating them and treating them with contempt,
> so that 75% of them will be completely unsure what just happened instead
> of being completely unsure what just happened?

There are ways to satisfy both. The first is to use the preprocessor to
differentiate between test code, where detailed debugging information is
included, and production code, with errors more useful to the user. Another
is to provide a logging system appropriate to your target operating system,
where you log messages with additional information while still showing
user-oriented messages.

>> Anyway I have adopted your suggestion, also used a function instead of a
>> macro, and built in some extra functionality. Now it will keep some memory
>> back to use later on if allocations start failing for added robustness.
> That technique is... well, dubious at best. It doesn't necessarily work,
> and may well make things worse.

It is a poor choice. If an application truly cannot function, then it
should handle the error gracefully and exit. That said, while one feature
of a program may not work in low memory, other functionality is often
still possible. Some programs may be too critical to exit over what may
be a temporary memory shortage.

A server process, for instance, may not be able to handle a request that
requires more memory than is available. It should not just exit. It should
send an error to the client, free any data that was allocated for the
client's request, close the connection, and wait for another client. The
next client's request may not be as memory intensive, or another process
that was hogging memory may have released it.

In the end, the appropriate action depends on the application.

> 6. Don't allocate a GIGABYTE of memory like that -- all this does is
> massively increase the chance of catastrophic failure, as a likely response
> from a system which overcommits is to determine that your process allocated
> a TON of memory, doesn't use most of it, and is probably the best candidate
> for being killed out of hand. A megabyte or two, sure, I guess.

Unless you know the memory size of the target system, this is a crap shoot
at best.

> 5. abort() is conditional on NDEBUG, for consistency with assert()'s
> behavior. (I don't use assert because it yields useless messages.)

Assert is not designed to give useful messages, and it is not designed for
error handling. It is designed to crash the application if it notices
something is wrong. This helps the programmer to know about subtle bugs
that might otherwise go unseen until they produce noticeable problems.
Bugs of this nature can be difficult to detect until the software is
shipped, and can be difficult to debug, as they may not cause problems or
a crash until far later in the execution than where the actual bug
resides. Assert helps you catch them early, before they ship and closer
to when they actually happen.

The excellent book "Writing Solid Code" by Steve Maguire is a must-read
for any programmer. It contains great examples of how to use assert()
properly and effectively.

> Reading your code, I get the impression you're trying to aim for some kind
> of code density, with cool tricks you've seen all thrown in together to
> make the code look more impressive. Don't do that. Write the absolute
> simplest code you can that clearly expresses what you're doing. You'll
> have fewer bugs (if you'd written this more simply, I bet you'd have caught
> that you never set emrgcy to NULL after freeing it, but might continue
> to test it), and you'll have an easier time fixing things and adding features.

Note that increasing source-code density doesn't create a smaller or
faster binary. A concisely written ?: operator on a single line generates
the same code as the equivalent if/else statements spread across multiple
lines. The if/else version is almost always easier to read. "?:"s are
almost always the sign of an amateur programmer trying to show off their
"l337 skilz." Seasoned programmers have learned to avoid them.

jacob navia

May 16, 2010, 7:21:51 AM
Phil Carmody wrote:

>
> An object being 'constructed'? Are you sure you were using C?
>

As you (may) know, objects are allocated, constructed (initialized) in C
all the time.

For instance for some hypothetical structure "Foo":

Foo *newFoo(size_t length, double averageUse)
{
Foo *result;
if ((result=calloc(1,sizeof(Foo))) == NULL)
return NULL;
result->averageUse = averageUse;
result->length = length;
result->Stats = DEFAULT_STATS_VAL;
result->count = 0;
return result;
}

Nick Keighley

May 16, 2010, 7:42:19 AM
On 15 May, 22:25, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-05-15, sandeep <nos...@nospam.com> wrote:
> > Seebs writes:

<snip>

> > Many users will only be confused by technical error messages about memory
> > allocation etc. It's best not to get into unwanted details - the user
> > doesn't know about how my program allocates memory, it just needs to know
> > there was an error that needs a restart. I think in books they call it
> > leaking abstractions.
>
> Wrong.

I'm not a fan of putting too much "computer science" into user error
messages or expecting them to know program internals. But a short
succinct description of the cause of the failure is good. I'm a fan of
log files that provide the developer with more specific information.

Note that yesterday I encountered someone who got a "not enough memory to
perform operation" error and was surprised because their disk had plenty
of space.

<snip>

> Where did you get this bullshit?  The above paragraph is by far the stupidest
> thing I've ever seen you write.  It's not just a little wrong; it's not just a
> little stupid; it's not just a little callous or unthinking.  It's one of
> the most thoroughly, insideously, wrong, stupid, and evil things you could
> start thinking as a programmer.

wow. And you're on decaff?

<snip>

> 2.  "Error that needs a restart" is nearly always bullshit.  If the program
> is running out of memory because you made a mistake causing it to try to
> allocate 4GB of memory on a 2GB machine, "restart" will not fix it.  Nothing
> will fix it until the user finds out what's wrong and submits a bug report
> allowing the developer to fix it.

the other cause for memory error is that some other program has eaten
the memory

<snip>

> 4.  The chances are very good that many of the prospective users of any
> program will, in fact, be able to program at least a little,

what universe do you live in? Are most of the people you know
programmers?

> or will have basic computer literacy.

again quite strange. "basic computer literacy" can be *very* basic

<snip>

> 5.  Trying to avoid "confusing" people is power-mad idiocy.

I disagree. Have you seen Airport displays with Windows NT register
dumps?

> Your job here
> is not to imagine yourself some kind of arbiter-of-technology, preserving the
> poor helpless idiots from the dangers of actual information.  Your job is
> to make a program which works as well as possible, and that includes CLEAR
> statements of what failed.

well you draw the line at register dumps so I think this is a matter
of where the line is drawn


> 6.  You can never make a message so clear that every concievable user will
> understand it.  However, a user who won't understand a simple message won't
> understand an imprecise or flatly false one, either.  There does not exist
> a user who will have a clear idea of what went wrong and be able to react
> accordingly when confronted with "unspecified error", but who will be utterly
> paralyzed like a deer in headlights when confronted with "memory allocation
> failed".  As a result, even if we restrict our study to the set of users
> who simply have no clue what those words mean, you STILL gain no benefit,
> at all, from the bad message.  But in the real world, you hurt many of your
> users by denying them the information that would allow them to address
> the issue (say, by closing other applications so that more memory becomes
> available).

do you have a limit to this? "Database has deadlocked" "link layer
failure" "too many hash collisions"

<snip>


--
"In flipping pig mode again"
error string found in Mac resource fork

Moi

May 16, 2010, 7:45:34 AM
On Sat, 15 May 2010 21:21:19 +0000, sandeep wrote:

> Seebs writes:
>> Write a function, not a macro, it'll be easier to make effective use
>> of.

>>

>> 1. Use fprintf(stderr,...) for error messages. 2. Terminate error
>> messages with newlines. 3. Why the *HELL* would you use "unspecified
>> error" as the error message when you have ABSOLUTE CERTAINTY of what
>> the error is? Why not:
>> fprintf(stderr, "Allocation of %ld bytes failed.\n", (unsigned long)
>> x);
>>
>> The first two are comprehensible mistakes. The third isn't. Under
>> what POSSIBLE circumstances could you think that "unspecified error" is
>> a better diagnostic than something that in some way indicates that a
>> memory allocation failed?
>>

>

> Many users will only be confused by technical error messages about
> memory allocation etc. It's best not to get into unwanted details - the
> user doesn't know about how my program allocates memory, it just needs
> to know there was an error that needs a restart. I think in books they
> call it leaking abstractions.

Suppose your program is a filter, used in a (unix shell: sorry!) commandline like:

$ find . -name \*\.c -print | sort | uniq | yourprogram | lpr
FAILED $

What could the user do to help you solve *your* problem ?
Would he have liked it, if "FAILED" had been written to stdout ?
Will he ever attempt to use your program again ?

HTH,
AvK

Willem

May 16, 2010, 7:50:38 AM
Nick Keighley wrote:
) I'm not a fan of putting too much "computer science" into user error
) messages or expecting them to know program internals. But a short
) succinct description of the cause of the failure is good. I'm a fan of
) log files that provide the developer with more specific information.
)
) Note yesterday I encountered someone who got a "not enough memory to
) perform operation" error and were surprised because their disk had
) plenty of space.

Quite understandable, given that many OSes use disk space as virtual
memory.

) On 15 May, 22:25, Seebs <usenet-nos...@seebs.net> wrote:
)> 4. The chances are very good that many of the prospective users of any
)> program will, in fact, be able to program at least a little,
)
) what universe do you live in? Are most of the people you know
) programmers?

No, but 99% of the users are using 1% of the programs.
In other words: most users stick with the big, well-known software,
especially the less computer-literate ones, so any given program is
therefore most likely to be used by a computer-savvy person.


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT

Nick Keighley

May 16, 2010, 7:55:44 AM
On 16 May, 10:38, Seebs <usenet-nos...@seebs.net> wrote:
> On 2010-05-16, sandeep <nos...@nospam.com> wrote:
> > Seebs writes:

<snip>

> > void* safeMalloc(size_t x)
> > {
> >     static void* emrgcy=0;
> >     void* x1;
> > #define ESIZE 0x40000000uLL
> >     if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
> > #undef ESIZE
>
> Let me guess.  Someone told you to define symbolic names for constants,
> right?
>
> This is not how you do it.
>
> 1.  If you're going to define a constant, define it and leave it defined.
> 2.  Don't use a name starting with a capital E followed by another capital
> letter, those are reserved for errno values.
> 3.  If you're only using it once, don't feel like you have to #define it.

Oops! Don't agree. A named constant makes it clear what the constant is
for - it's good use of abstraction. Then there's multiple use of the same
number (less likely with really big numbers).

sm.state = 9;
write_port (9, 9);
for (equip_num = 0; equip_num < 9; equip_num++)
reset_equip (equip_num);

If you want to change one of those, you have to inspect every 9 in the
program.

<snip>

Phil Carmody

May 16, 2010, 8:44:06 AM

The operative words are "to me". We've already ascertained that
your perspective is desperately naive. Giving us more evidence
does not strengthen your case at all. The macro you suggested
gains you _nothing_, in particular it doesn't give you the things
that you mindlessly asserted that it does give you in anything
apart from the compilers with the poorest QoI.

Sorry to break this to you, but you are *not* cleverer than your
compiler. Stop pretending you are. And please stop doing it in
public, it's painful to watch.

Dennis (Icarus)

unread,
May 16, 2010, 8:19:55 AM5/16/10
to

"Phil Carmody" <thefatphi...@yahoo.co.uk> wrote in message
news:87mxw0d...@kilospaz.fatphil.org...

I know I was using C++. I doubt C would be any different in this respect.
:-)

Dennis

Nick Keighley

unread,
May 16, 2010, 9:42:21 AM5/16/10
to

I don't understand.

Nick Keighley

unread,
May 16, 2010, 9:47:47 AM5/16/10
to
On 16 May, 12:50, Willem <wil...@turtle.stack.nl> wrote:
> Nick Keighley wrote:

[not all users are programmers]

> ) Note yesterday I encountered someone who got a "not enough memory to
> ) perform operation" error and was surprised because their disk had
> ) plenty of space.
>
> Quite understandable, given that many OSes use disk space as virtual
> memory.

this user wouldn't have known what the term "virtual memory" meant.

Virtual memory doesn't usually eat your entire disk

Ersek, Laszlo

unread,
May 16, 2010, 10:32:37 AM5/16/10
to

I don't know what it is you don't understand, so I'll start with the pipeline.
- Create a list of *.c files present in the directory hierarchy rooted at
the current working directory,
- sort them lexicographically,
- eliminate duplicate lines (duplicate lines were unlikely in this case,
but I digress),
- process the remaining list with "yourprogram",
- pass the output of "yourprogram" to the line printer (ie. queue the
output as a print job -- further processing is possible "within" lpr,
ie. automagically determining whether the data is plain text,
PostScript, PDF and so on, and translating it to the configured printer's
language).

(These processes run in parallel.)

If "yourprogram" writes FAILED to stdout instead of stderr, then FAILED
will show up interleaved with (or after) the other data sent to "lpr".
Supposing "yourprogram" *either* produces data *or* it writes FAILED, and
that lpr knows to ignore an empty job, writing normal data and FAILED to
different output streams works correctly. Writing FAILED to stdout would
print a page with "FAILED (snicker snicker)" on it.

Of course, if "yourprogram" writes data to stdout before it emits FAILED
to stderr, you still end up with an incomplete page (or book).

Cheers,
lacos

Moi

unread,
May 16, 2010, 10:42:36 AM5/16/10
to
On Sun, 16 May 2010 16:32:37 +0200, Ersek, Laszlo wrote:

> On Sun, 16 May 2010, Nick Keighley wrote:
>
>> On 16 May, 12:45, Moi <r...@invalid.address.org> wrote:
>
>>> Suppose your program is a filter, used in a (unix shell: sorry!)
>>> commandline like:
>>>
>>> $ find . -name \*\.c -print | sort | uniq | yourprogram | lpr
>>> FAILED
>>> $
>>>

>> I don't understand.


>
> I don't know what it is you don't understand so I'll start with the
> pipeline. - Create a list of *.c files present in the directory
> hierarchy rooted at the current working directory,
> - sort them lexicographically,
> - eliminate duplicate lines (duplicate lines were unlikely in this case,
> but I digress),
> - process the remaining list with "yourprogram", - pass the output of
> "yourprogram" to the line printer (ie. queue the output as a print job
> -- further processing is possible "within" lpr, ie. automagically

>

> If "yourprogram" writes FAILED to stdout instead of stderr, then FAILED
> will show up interleaved with (or after) the other data sent to "lpr".
> Supposing "yourprogram" *either* produces data *or* it writes FAILED,
> and that lpr knows to ignore an empty job, writing normal data and
> FAILED to different output streams works correctly. Writing FAILED to
> stdout would print a page with "FAILED (snicker snicker)" on it.
>
> Of course, if "yourprogram" writes data to stdout before it emits FAILED
> to stderr, you still end up with an incomplete page (or book).
>

Exactly.
My whole point was that, if the program decides to fail
it could _at least_ try to minimize the damage.
Writing nonsense to stdout in some cases only increases the damage.

AvK

pete

unread,
May 16, 2010, 10:50:30 AM5/16/10
to
sandeep wrote:
>
> Hello friends~~
>
> Think about malloc.
>
> Obviously, for tiny allocations like 20 bytes to strcpy a filename,
> there's no point putting in a check on the return value of malloc... if
> there is so little memory then stack allocations will also be failing and
> your program will be dead.
>
> Whereas, if you're allocating a gigabyte for a large array, this might
> easily fail, so you should definitely check for a NULL return.

>
> So somewhere in between there must be a point where you stop ignoring the
> return value, and start checking it. Where do you draw this line? It must
> depend on whether you will deploy to a low memory or high memory
> environment... but is there a good rule?

Yes, there is a good rule.

If you have any idea of what it is that you would want
the program to do in the case that malloc returns a null pointer,
then you should write it into the program.

If you don't know what it is that you would want
the program to do in the case that malloc returns a null pointer,
then you should think about it until you do know,
and then write it into the program.

--
pete

Willem

unread,
May 16, 2010, 10:51:46 AM5/16/10
to
Nick Keighley wrote:
) On 16 May, 12:50, Willem <wil...@turtle.stack.nl> wrote:
)> Nick Keighley wrote:
)
) [not all users are programmers]
)
)> ) Note yesterday I encountered someone who got a "not enough memory to
)> ) perform operation" error and was surprised because their disk had
)> ) plenty of space.
)>
)> Quite understandable, given that many OSes use disk space as virtual
)> memory.
)
) this user wouldn't have known what the term "virtual memory" meant.

However, he could very well have known that you get out-of-memory errors
more often when your disk is full. Which is true on some OSes. Especially
the ones that are used by people who are not computer-savvy.

) Virtual memory doesn't usually eat your entire disk

There are OSes where it does. Especially older versions of the one that
is most used by non-computer-savvy users.

Keith Thompson

unread,
May 16, 2010, 11:22:18 AM5/16/10
to

But in the safeMalloc case, there's no point in using a #define. The
fact that it's immediately followed by #undef implies that the writer
wants it to exist only in a limited scope. So why not just declare it
that way?

...
static void *emrgcy = 0;
void *x1;
const size_t ESIZE = 0x40000000;
if (!(emrgcy = emrgcy ? emrgcy : malloc(ESIZE)))
...

Of course I wouldn't call it ESIZE, and I wouldn't write that ugly
condition, and I'd use a lot more whitespace, and I wouldn't do it
this way in the first place.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Keith Thompson

unread,
May 16, 2010, 11:23:55 AM5/16/10
to

The question you should be asking yourself is why you'd want to
write a macro rather than a function, not the other way around.

Use a function unless you have a good specific reason to use a
macro.

Lew Pitcher

unread,
May 16, 2010, 11:47:47 AM5/16/10
to

The shop I worked in (before I retired) had a number of "rules of
thumb" wrt good design and implementation. I agreed with most of those
rules, and utilized them quite regularly.

The above code (and the code it was derived from) violates one of
those "rules" by making an application policy decision within a
support/utility function. In other words, I don't like the invocation
of the exit() function here; a memory allocation utility function
should return allocated memory, or NULL, and leave the handling of a
"no memory available" condition to upper layers of logic.

It is acceptable to have this utility function /log/ the error. It is
not acceptable to have the utility function decide that the
application /cannot recover/ from this error.

--
Lew Pitcher
Master Codewright & JOAT-in-training | Registered Linux User #112576
Me: http://pitcher.digitalfreehold.ca/ | Just Linux: http://justlinux.ca/
---------- Slackware - Because I know what I'm doing.
------

Seebs

unread,
May 16, 2010, 1:15:55 PM5/16/10
to
On 2010-05-16, sandeep <nos...@nospam.com> wrote:
> Phil Carmody writes:
>> Is it "How will it be easier to make effective use of?"

> Yes.

Okay. The obvious answer is that you don't have to think about execution
context or use elaborate hacks with commas and ?:, you can just call it
like a function and it'll work.

>> Answer - it won't have multiple-evaluation issues, and it both looks and
>> behaves like a function.

> Multiple evaluation is very unlikely in this case. This answer looks
> spurious to me.

Again, you're trying way too hard. Don't spend a lot of time trying to
figure out whether a given macro is likely to cause trouble if you evaluate
its arguments too often. Just remember that this is a common risk, and
avoid it.

Anyway, it's not impossible to come up with a conceivable allocation where the
argument to the allocation includes a modifier.

extern int lengths[];
static int length_index = 0;

void *
next_block(void) {
    return malloc(lengths[length_index++]);
}

But mostly, the issue here is one of developing good habits. If you become
a programmer, you *will* end up having to write something quickly, on not
enough sleep, or while a bit sick, or something. You will make mistakes.
At that point, whether you succeed or fail will depend, in no small part,
on whether you have good *habits*. If the way you do something when you don't
have time or resources to think things through carefully and test everything
out works anyway, because it's conservative and straight-forward, you'll be
okay. If you default to creating macros unless you've already thought of
how multiple-evaluation issues will hurt you, you're going to get bitten,
badly.

Here's the thing. You say "multiple-evaluation is very unlikely". Things
that are "very unlikely" happen pretty often, in the course of a large
programming project. Which is to say, the chances are pretty good that,
in a large program, you'll get hit by it at least once.

This comes back to the problem with your evaluation that "many" users
won't understand an informative error message. You're picking the case you
think is the most common, and building everything around that, accepting
a very high risk of extremely bad outcomes in any case other than the most
common.

You have to remember the scale on which computers operate. They run extremely
fast, and there are a lot of them. No, a one in a million chance of failure
is NOT safe. My filesystem emulator, during the course of a typical build,
executes hundreds of thousands of fairly complex operations. If you turn on
logging, it can be ten million operations or more. Per run. We typically
build for about sixty targets per spin, which is roughly daily, plus we have
over a hundred developers doing several runs a day.

Something that is "vanishingly unlikely" typically happens 2-3 times a day,
at least.

You have the interest in programming to be good at this; put some time
into picking up a good philosophy of programming, and you'll have a great
time.

Seebs

unread,
May 16, 2010, 1:23:31 PM5/16/10
to
On 2010-05-16, Nick Keighley <nick_keigh...@hotmail.com> wrote:
> Note yesterday I encountered someone who got a "not enough memory to
> perform operation" error and was surprised because their disk had
> plenty of space.

Oh, sure, they were confused.

But! They could report the error to someone less confused, and the less
confused person could at least guess at what went wrong.

> wow. And you're on decaff?

I have had a lot of very negative experiences involving "an unspecified
error occurred" and similar things.

> the other cause for memory error is that some other program has eaten
> the memory

True!

>> 4. The chances are very good that many of the prospective users of any
>> program will, in fact, be able to program at least a little,

> what universe do you live in? Are most of the people you know
> programmers?

Consider, say, "World of Warcraft". Played by many people who are only
able to play it because someone smarter plugged in a mouse for them.

But out of eleven million users, there are hundreds to thousands of
programmers, at the least, which I consider to be "many" users.

>> 5. Trying to avoid "confusing" people is power-mad idiocy.

> I disagree. Have you seen Airport displays with Windows NT register
> dumps?

Occasionally. I think they are not noticeably worse than an airport
display containing an out-of-date schedule followed by the information
"unspecified error".

> do you have a limit to this? "Database has deadlocked" "link layer
> failure" "too many hash collisions"

My practical rule is usually: If you are targeting non-programmers,
and you don't have the resources to, say, spawn an error-handling application
that'll debug the executable automatically, package up a report, and
send it to the developers automatically, go for a simple textual explanation.
I usually aim for: If one of the non-technical users I know asked me "what
happened", what answers could I give them that I think would leave them
with a feeling that they'd gotten an answer which referred in some way to
an event... And then, if one of the technical users I know asked me "what
happened", what answers could I give them that I think would give them an
accurate (if not necessarily comprehensive) understanding of the failure.

The sets have, thus far, always had an intersection. It's okay to give less
information than someone who's been hired to debug the program would need,
and it's okay to give more information than someone who has only recently
been convinced that this is not actually magic would be able to understand.
The goal is to find something that's not hugely intimidating to reasonably
rational users (you can't do anything about that last five percent), and
not too vague to be useful at all for a more experienced user.

Seebs

unread,
May 16, 2010, 1:25:14 PM5/16/10
to
On 2010-05-16, Nick Keighley <nick_keigh...@hotmail.com> wrote:

True.

And come to think of it, every time I've used a value "only once", it's ended
up being "twice" within a week or two anyway.

William Ahern

unread,
May 16, 2010, 1:30:44 PM5/16/10
to
sandeep <nos...@nospam.com> wrote:
> Seebs writes:
> > In short, this is a catastrophically bad design approach. Abandon it.
> > Reject it. Anything you think you know which caused you to adopt this
> > is probably also utterly wrong, and dangerously so, and until you root
> > all the madness out and start afresh, your code will be dangerous,
> > untrustworthy, and based on a bad attitude.

> This was quite a long rant!! I think I see your point but you have to
> imagine your grannie when Word crashes. I think she will not like to see
> a message like
> "malloc failed at src\lib\unicode\mapper\table.c:762"
> I think that will confuse her!

If you're writing a complex application like Word you would never bail on a
malloc failure, at least not after the application has initialized itself
and reached a steady state. Such applications should have provisioned
resources for showing dialogues notifying users that a particular operation
cannot be completed, and that they're free to attempt to continue or exit.
The application or user may then choose to flush caches and release other
resources.

I think it's a fair presumption that any application bailing from malloc is
a command-line utility, where emitting a message on stderr is the analogue to
popping up a dialogue box in a GUI, and exiting is the only way to signal
failure of the operation.

On the other hand, libraries should never do any such thing. They should
return the error to the calling application directly, and should always
maintain a sane internal state.

Nick Keighley

unread,
May 16, 2010, 1:56:51 PM5/16/10
to
On 16 May, 15:32, "Ersek, Laszlo" <la...@caesar.elte.hu> wrote:
> On Sun, 16 May 2010, Nick Keighley wrote:
> > On 16 May, 12:45, Moi <r...@invalid.address.org> wrote:

reinsert snipped material
****


>
> > >> 1. Use fprintf(stderr,...) for error messages. 2. Terminate error
> > >> messages with newlines. 3. Why the *HELL* would you use "unspecified
> > >> error" as the error message when you have ABSOLUTE CERTAINTY of what
> > >> the error is? Why not:
> > >> fprintf(stderr, "Allocation of %ld bytes failed.\n", (unsigned long)
> > >> x);
>
> > >> The first two are comprehensible mistakes. The third isn't. Under
> > >> what POSSIBLE circumstances could you think that "unspecified error" is
> > >> a better diagnostic than something that in some way indicates that a
> > >> memory allocation failed?
>
> > > Many users will only be confused by technical error messages about
> > > memory allocation etc. It's best not to get into unwanted details - the
> > > user doesn't know about how my program allocates memory, it just needs
> > > to know there was an error that needs a restart. I think in books they
> > > call it leaking abstractions.

****

> >> Suppose your program is a filter, used in a (unix shell: sorry!) commandline like:
>
> >> $ find . -name \*\.c -print | sort | uniq | yourprogram | lpr
> >> FAILED $
>
> >> What could the user do to help you solve *your* problem ?
> >> Would he have liked it, if "FAILED" had been written to stdout ?
> >> Will he ever attempt to use your program again ?
>
> > I don't understand.
>
> I don't know what it is you don't understand so I'll start with the pipeline.

I suppose I was a little terse. I know what a pipeline is. I couldn't
understand what your post had to do with abstraction leakage. You were
only addressing the "don't write errors to stdout" bit?

<snip>

> If "yourprogram" writes FAILED to stdout instead of stderr, then FAILED
> will show up interleaved with (or after) the other data sent to "lpr".
> Supposing "yourprogram" *either* produces data *or* it writes FAILED, and
> that lpr knows to ignore an empty job, writing normal data and FAILED to
> different output streams works correctly. Writing FAILED to stdout would
> print a page with "FAILED (snicker snicker)" on it.
>
> Of course, if "yourprogram" writes data to stdout before it emits FAILED
> to stderr, you still end up with an incomplete page (or book).

well I don't write error data to stdout and I write occasional
filters but this seems a bit nit-picky. If it fails I don't get useful
output and where the error message goes isn't all that important. I
suppose "FAILED" on the printer is a bit poor! But then so is "MEMORY
ERROR"


sandeep

unread,
May 16, 2010, 4:40:36 PM5/16/10
to
Seebs writes:
> On 2010-05-16, sandeep <nos...@nospam.com> wrote:
>> #define ESIZE 0x40000000uLL
>> if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
>> #undef ESIZE
>
> Let me guess. Someone told you to define symbolic names for constants,
> right?

Yes, of course!

> This is not how you do it.
>
> 1. If you're going to define a constant, define it and leave it
> defined.

This is very bad practise. Localizing the #define is good for the same
reason that using local instead of global variables.

> 2. Don't use a name starting with a capital E followed by

> another capital letter, those are reserved for errno values. 3. If


> you're only using it once, don't feel like you have to #define it.

I would say that the #define is a self-documenting form of code, no?

> 4.
> Don't use "uLL" on a constant that's unambiguously within the size range
> of an ordinary signed long. You don't need any qualifier at all,
> although in theory a system could exist where that value is too big for
> size_t, in which case you'd be allocating 0 bytes.

I think this is wrong. With no "qualifier", the number will be
interpreted as an int and undergo default promotions. Because int may be
16 bits this could overflow.

> 5. Don't get so
> clever. Try:
>
>     if (!emrgcy) {
>         emrgcy = malloc(0x40000000);
>         if (!emrgcy) {
>             fprintf(stderr, "Uh-oh, failed to allocate spare memory.\n");
>         }
>     }

I think this is the same logic but with longer and more complicated
code...

> 6. Don't allocate a GIGABYTE of memory like that -- all this does is
> massively increase the chance of catastrophic failure, as a likely
> response from a system which overcommits is to determine that your
> process allocated a TON of memory, doesn't use most of it, and is
> probably the best candidate for being killed out of hand. A megabyte or
> two, sure, I guess.

I choose a gigabyte because most allocations will be less than 1 GB... if
you only allow a few MB there could easily be a late allocation for more
that the emergency memory can't satisfy.

> 7. Actually, even then, this is just cargo cult
> stuff.

I don't know what cargo cult stuff is.

>> #define STRGFY(x) #x
>> STRGFY(__LINE__)
>> #undef STRGFY
>

> Again, don't do this. With extremely rare exceptions, you should NEVER
> be using #undef on something you just defined.

By the same argument, with extremely rare exceptions you should NEVER be
using block-scope variables.

> Reading your code, I get the impression you're trying to aim for some
> kind of code density, with cool tricks you've seen all thrown in
> together to make the code look more impressive.

I like using advanced C features, yes. It makes programming fun. I think
all good programmers will be able to understand my code.

Geoff

unread,
May 16, 2010, 4:43:26 PM5/16/10
to
On Sun, 16 May 2010 20:40:36 +0000 (UTC), sandeep <nos...@nospam.com>
wrote:

>I like using advanced C features, yes. It makes programming fun. I think
>all good programmers will be able to understand my code.

Classic misconception.

Tim Harig

unread,
May 16, 2010, 5:09:58 PM5/16/10
to
On 2010-05-16, sandeep <nos...@nospam.com> wrote:
> Seebs writes:
>> On 2010-05-16, sandeep <nos...@nospam.com> wrote:
>>> #define ESIZE 0x40000000uLL
>>> if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
>>> #undef ESIZE
>> Let me guess. Someone told you to define symbolic names for constants,
>> right?
> Yes, of course!

I partially agree with removing so called "magic numbers" from the code in
favor of more descriptive names where it makes sense to do so.

>> 5. Don't get so
>> clever. Try:
>>
>>     if (!emrgcy) {
>>         emrgcy = malloc(0x40000000);
>>         if (!emrgcy) {
>>             fprintf(stderr, "Uh-oh, failed to allocate spare memory.\n");
>>         }
>>     }
>
> I think this is the same logic but with longer and more complicated
> code...

The logic may be the same; however, Seebs' version is much more intuitive
than yours. The extra spacing and indentation provide visual clues that
make it easy to pick up what is going on and are consistent with other
properly indented code. I have seen *many* bugs created when using "?:" to
obfuscate the code that could clearly be seen using the normal if/else
syntax.

>> 6. Don't allocate a GIGABYTE of memory like that -- all this does is
>> massively increase the chance of catastrophic failure, as a likely
>> response from a system which overcommits is to determine that your
>> process allocated a TON of memory, doesn't use most of it, and is
>> probably the best candidate for being killed out of hand. A megabyte or
>> two, sure, I guess.
> I choose a gigabyte because most allocations will be less than 1 GB... if
> you only allow a few MB there could easily be a late allocation for more
> that the emergency memory can't satisfy.

I work with many systems that *have* less than a gigabyte of memory. How
can you be sure that nobody will ever try to run your code on such a
system? How large is your memory? What if somebody runs multiple
instances of your program? You have already been told how this can
backfire on operating systems designed to overcommit.

The bottom line is that your "emergency memory" is a very bad idea. There
are much better ways of handling low memory conditions.

>> Reading your code, I get the impression you're trying to aim for some
>> kind of code density, with cool tricks you've seen all thrown in
>> together to make the code look more impressive.
>
> I like using advanced C features, yes. It makes programming fun. I think
> all good programmers will be able to understand my code.

Good code isn't clever. Good code is clear for whoever has to read and
maintain it. You may think that showing off your 1337 skilz is fun. The
guy who has to clean up the bugs you have written because you obfuscated
your code isn't going to have much fun.

The end effect is that you are creating write-once code. Write once
because it is designed to be thrown away and re-written rather than making
any effort to maintain such poorly written code.

christian.bau

unread,
May 16, 2010, 5:11:38 PM5/16/10
to
On May 16, 3:21 am, "Dennis \(Icarus\)" <nojunkm...@ever.invalid>
wrote:

> memset actually showed as a bottleneck in one of programs I had to debug.
> It was being called 2,000,000,000 times or so.
>
> Converting from memset to a for loop writing integers helped a bit, but the
> real fix was to correct the logic error so that it wasn't called that many
> times.

That's interesting. I found that I got substantial speed improvements
by replacing a loop filling an array with integers with a call to
memset.

Geoff

unread,
May 16, 2010, 5:18:36 PM5/16/10
to

I don't think the problem was the choice of initialization method, the
problem was the initialization was called repeatedly, apparently for
no legitimate reason.

The analogy might be calling memset 2x10^9 times to initialize 1 byte
each time versus calling memset once to initialize 2x10^9 bytes. The
overhead makes all the difference.

Keith Thompson

unread,
May 16, 2010, 5:20:04 PM5/16/10
to
Lew Pitcher <lpit...@teksavvy.com> writes:
[...]

> It is acceptable to have this utility function /log/ the error. It is
> not acceptable to have the utility function decide that the
> application /cannot recover/ from this error.

Another way to look at it is that the application decided, by calling
this particular utility function, that it could not recover from
a memory allocation failure. If it could, it should have called
malloc() or some other function that would permit recovery.

If allocating memory and aborting the program on failure is a
common operation, combining the two into a single utility function
makes sense.

(Figuring out how to recover from allocation failures makes even
more sense, but that can be non-trivial.)

Eric Sosman

unread,
May 16, 2010, 5:21:14 PM5/16/10
to
On 5/16/2010 4:40 PM, sandeep wrote:
> Seebs writes:
>> On 2010-05-16, sandeep<nos...@nospam.com> wrote:
>>> #define ESIZE 0x40000000uLL
>>> if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
>>> #undef ESIZE
>>
>> Let me guess. Someone told you to define symbolic names for constants,
>> right?
>
> Yes, of course!
>
>> This is not how you do it.
>>
>> 1. If you're going to define a constant, define it and leave it
>> defined.
>
> This is very bad practise. Localizing the #define is good for the same
> reason that using local instead of global variables.

There are few similarities between variables and macros, even
macros that are "manifest constants," so the criteria for what is
good or bad are dissimilar.

But, okay: Let's take your "localization" dictum as Truth, and
see where it leads us:

    #define SIZE 0x40000000uLL
    void *ptr = malloc(SIZE);
    #undef SIZE
    if (ptr == NULL) ...
    ... forty lines ...
    #define SIZE 0x40000000uLL
    char *buf = malloc(SIZE);
    #undef SIZE
    if (buf == NULL) ...
    ... one hundred lines ...
    #define SIZE 0x40000000uLL
    void *tmp = realloc(bigbuf, SIZE);
    #undef SIZE
    if (tmp == NULL) ...; else bigbuf = tmp;
    ... still more lines ...

Each definition and use of SIZE is now localized to its minimal scope.
But one day you decide that a gigabyte is the wrong amount, and want
to change to forty megabytes instead. Seebs makes a one-line change;
you're faced with three (or perhaps more) and the possibility of having
missed a few. Who's in better shape?

>> 4.
>> Don't use "uLL" on a constant that's unambiguously within the size range
>> of an ordinary signed long. You don't need any qualifier at all,
>> although in theory a system could exist where that value is too big for
>> size_t, in which case you'd be allocating 0 bytes.
>
> I think this is wrong. With no "qualifier", the number will be
> interpreted as an int and undergo default promotions. Because int may be
> 16 bits this could overflow.

You should re-read your C textbook or other reference, because
you are wrong about the treatment of literal constants in source code.

>> 7. Actually, even then, this is just cargo cult
>> stuff.
>
> I don't know what cargo cult stuff is.

<http://en.wikipedia.org/wiki/Cargo_cult_programming>

>>> #define STRGFY(x) #x
>>> STRGFY(__LINE__)
>>> #undef STRGFY
>>
>> Again, don't do this. With extremely rare exceptions, you should NEVER
>> be using #undef on something you just defined.
>
> By the same argument, with extremely rare exceptions you should NEVER be
> using block-scope variables.

Again, the dissimilarities outweigh the similarities, and the
claim that the same argument applies has little weight.

>> Reading your code, I get the impression you're trying to aim for some
>> kind of code density, with cool tricks you've seen all thrown in
>> together to make the code look more impressive.
>
> I like using advanced C features, yes. It makes programming fun. I think
> all good programmers will be able to understand my code.

In the first place, you're not using "advanced" features, just
unnecessary convolutions of normal features. (I don't think C
even *has* any "advanced" features -- there are seldom-used areas,
particularly in corners of the library -- but "rare" and "advanced"
are not synonyms.)

In the second place, you might do well to consider the words of
Brian Kernighan (the "K" of "K&R," in case you don't recognize the
name): "Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it." Yes, he was
probably being a bit facetious, but I think he has a point worth
pondering -- especially by someone who's already shown that he's
writing beyond the limits of his own cleverness.

--
Eric Sosman
eso...@ieee-dot-org.invalid

Seebs

unread,
May 16, 2010, 5:12:43 PM5/16/10
to
On 2010-05-16, sandeep <nos...@nospam.com> wrote:
> Seebs writes:
>> On 2010-05-16, sandeep <nos...@nospam.com> wrote:
>>> #define ESIZE 0x40000000uLL
>>> if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
>>> #undef ESIZE

>> Let me guess. Someone told you to define symbolic names for constants,
>> right?

> Yes, of course!

I figured.

>> This is not how you do it.
>>
>> 1. If you're going to define a constant, define it and leave it
>> defined.

> This is very bad practise. Localizing the #define is good for the same
> reason that using local instead of global variables.

Wrong.

The entire POINT of a symbolic constant is to have every usage be the same!

With your system, it is quite easy to imagine:
#define SIZE 1024
v = malloc(SIZE);
#undef SIZE

...

#define SIZE 2048
memcpy(v, src, SIZE);
#undef SIZE

Might I suggest that, since you are clearly at the very beginning newbie
level, you not go around telling people that something is "bad practice"
when they warn you that you're doing something very dangerous?

>> 3. If
>> you're only using it once, don't feel like you have to #define it.

> I would say that the #define is a self-documenting form of code, no?

Not as you used it.

> I think this is wrong. With no "qualifier", the number will be
> interpreted as an int and undergo default promotions. Because int may be
> 16 bits this could overflow.

Again, please consider the *remote* possibility that, with twenty years of
active experience using C, I might have a TINY bit of information.

Constants do not work that way. If a constant is too big to be an int,
it is AUTOMATICALLY made into a larger type, if needed. The constant
in question cannot overflow.

Furthermore, the rules for constants starting with 0x are different.

Furthermore, even if you needed to modify the type, "L" would be sufficient.

>> 5. Don't get so
>> clever. Try:
>>
>> if (!emrgcy) {
>> emrgcy = malloc(0x40000000);
>> if (!emrgcy) {
>> fprintf(stderr, "Uh-oh, failed to allocate spare memory.\n");
>> }
>> }

> I think this is the same logic but with longer and more complicated
> code...

No. It is the same logic (or close to it) with longer and SIMPLER code.

Always write things as simply as you can when first writing them.
If you need to do something fancy, do it after you've got the simple
version working.

> I choose a gigabyte because most allocations will be less than 1 GB...

I understood that. However, what you've done is cause many systems to
be unable to allocate that memory at all, and many more to fail
catastrophically because you allocated a gigabyte of memory you didn't
need, when they would have been fine without it.

> if
> you only allow a few MB there could easily be a late allocation for more
> that the emergency memory can't satisfy.

And the emergency memory *does not work* on many systems. At all. I have
used many systems on which your emergency memory would fail completely, or
cause the program to get killed preemptively by the OS. I have used many
on which freeing that "emergency" memory would have NO EFFECT AT ALL on
any allocation of under about 128MB, because the implementation treats
large allocations differently from small allocations.

The entire idea is just plain wrong. You have formed some kind of crazy
theory about "how malloc works", and that theory is incorrect, leading you
to do stuff that makes no sense.

This is like driving a car and making a point of manually triggering the
airbags before you even start the car, so that you'll be safe in the event
of a crash.

>> 7. Actually, even then, this is just cargo cult
>> stuff.

> I don't know what cargo cult stuff is.

During WWII, various forces set up staging areas on islands in the Pacific,
some of which were inhabited. Some primitive cultures on the more isolated
islands were unable to comprehend why suddenly there were planes and food
and stuff. They didn't know how planes worked, or where the food came from.
When they were hungry, they did their best to build things that looked sort
of like airstrips, because that would make more "cargo" come.

What you are doing is like this. You don't understand malloc, you've seen
something sort of like this somewhere, and you're imitating it without
understanding what it was, how it worked, or when it would (or wouldn't)
be useful.

Don't do that.

>>> #define STRGFY(x) #x
>>> STRGFY(__LINE__)
>> #undef STRGFY

>> Again, don't do this. With extremely rare exceptions, you should NEVER
>> be using #undef on something you just defined.

> By the same argument, with extremely rare exceptions you should NEVER be
> using block-scope variables.

No, not the same argument at all.

The preprocessor isn't scoped, and isn't supposed to be scoped. If you
are #defining something, it should be because you want to make sure that
any possible reference to it will get the same value.

>> Reading your code, I get the impression you're trying to aim for some
>> kind of code density, with cool tricks you've seen all thrown in
>> together to make the code look more impressive.

> I like using advanced C features, yes. It makes programming fun. I think
> all good programmers will be able to understand my code.

A couple of concerns.

1. Don't assume everyone will be a good programmer. You should write with
the intent that very inexperienced programmers will be able to understand
your code if at all possible. The fact is, someone will have to maintain it.
2. I understand it just fine, and it's bad, because you're trying to be
"clever".

I am a bit sympathetic to this, because I certainly did a bunch of crazy stuff
like this when I was first learning to program... But the best advice I ever
got was: "DON'T".

Having looked at that old code, and in a couple of cases tried to get it to
run on newer compilers, I am fully persuaded. Simple, clear, code is better.

The problem with using advanced features is that you have to know how to
use them well. Very good race drivers sometimes use the hand brake in a car
to control the car's behavior in unusual ways. This allows them to do
things that most of us could never do with a car. However, the solution
is not for me to, every time I pull up to a stop sign, use the hand brake
instead of the regular brakes. That would damage my car very severely,
very quickly.

If your skill at first aid extends about to applying band-aids, don't start
trying to do brain surgery.

Geoff
May 16, 2010, 5:37:33 PM
On Sun, 16 May 2010 14:20:04 -0700, Keith Thompson <ks...@mib.org>
wrote:

>(Figuring out how to recover from allocation failures makes even
>more sense, but that can be non-trivial.)

Which is another way of saying that if a supposedly safe malloc could
be written, it would never need an error return.

A safe malloc function could return a pointer to and a size of the
actually allocated space and let the program decide whether to
continue with the smaller allocation or emit an error.

Keith Thompson
May 16, 2010, 5:38:07 PM
sandeep <nos...@nospam.com> writes:
> Seebs writes:
>> On 2010-05-16, sandeep <nos...@nospam.com> wrote:
>>> #define ESIZE 0x40000000uLL
>>> if(!(emrgcy=emrgcy?emrgcy:malloc(ESIZE)))
>>> #undef ESIZE
>>
>> Let me guess. Someone told you to define symbolic names for constants,
>> right?
>
> Yes, of course!
>
>> This is not how you do it.
>>
>> 1. If you're going to define a constant, define it and leave it
>> defined.
>
> This is very bad practice. Localizing the #define is good for the same
> reason that using local variables instead of global ones is.

So use a local variable.

I don't think I've ever seen C code in which a macro is #define'd, then
used, then immediately #undef'ed. I can see the argument in favor of
doing it, but in practice most macros are global anyway.

But again, there's no good reason to use a macro rather than a
constant object declaration:

const size_t esize = 0x40000000;

If you want it scoped locally, use a feature that lets the compiler
handle it for you.

Even your #define/#undef pair doesn't do the same thing as a local
declaration; it clobbers any previous definition.

[...]

>> 4.
>> Don't use "uLL" on a constant that's unambiguously within the size range
>> of an ordinary signed long. You don't need any qualifier at all,
>> although in theory a system could exist where that value is too big for
>> size_t, in which case you'd be allocating 0 bytes.
>
> I think this is wrong. With no "qualifier", the number will be
> interpreted as an int and undergo default promotions. Because int may be
> 16 bits this could overflow.

Nope. An unqualified integer constant, unless it exceeds UINTMAX_MAX
(I think that's the right name), is always of some type into which its
value will fit.

[...]

>> 7. Actually, even then, this is just cargo cult
>> stuff.
>
> I don't know what cargo cult stuff is.

Google it.

Basically, "cargo cult programming" means programming by rote
without understanding what you're doing. (The roots of the term
are fascinating, but not really relevant.)

[...]

>> Reading your code, I get the impression you're trying to aim for some
>> kind of code density, with cool tricks you've seen all thrown in
>> together to make the code look more impressive.
>
> I like using advanced C features, yes. It makes programming fun. I think
> all good programmers will be able to understand my code.

A lot of good programmers have been reading your code. Not being able
to understand it isn't the problem.

Keith Thompson
May 16, 2010, 5:48:22 PM

No, that's not what I was saying at all. It didn't even occur to
me that the response to a failure to allocate a certain amount
of memory might be to allocate some smaller amount of memory.
I'm sure that approach makes sense in some cases (for example,
if you're allocating an in-memory file buffer, a smaller buffer is
better than none at all). And I can imagine a highly specialized
allocation function that allocates as much as it can up to what
was requested -- but I wouldn't call such a function a "safe malloc".

For most allocations, if you can't get what you asked for, that's
a failure, and you either need to fall back to some other approach
or prepare to shut down. If I'm trying to allocate a tree node,
half a node does me no good.

Seebs
May 16, 2010, 5:46:26 PM
On 2010-05-16, Keith Thompson <ks...@mib.org> wrote:
> I don't think I've ever seen C code in which a macro is #define'd, then
> used, then immediately #undef'ed. I can see the argument in favor of
> doing it, but in practice most macros are global anyway.

I've done it very rarely, in cases where I used a macro, say, ten or fifteen
times, because it was initialization code of some sort.

Something like:
#define FOO(x) { x, #x }
struct { int value; char *name; } lookup[] = {
FOO(VAL_MIN),
FOO(VAL_ONE),
FOO(VAL_TWO),
{ 0, 0 }
};
#undef FOO


> Basically, "cargo cult programming" means programming by rote
> without understanding what you're doing. (The roots of the term
> are fascinating, but not really relevant.)

I agree that they're not very relevant, but the image is so evocative I
like to explain it. :)

> A lot of good programmers have been reading your code. Not being able
> to understand it isn't the problem.

In a way, though, it is.

There are a couple of questions you must ask about any piece of code you
wish to truly understand:

1. What does it do?
2. What was it intended to do?
3. Why does it need to do that?
4. Why is it doing that in this particular way?

The problem here is not that people can't figure out the answer to question
#1 -- surely, most of us can read this code and see what it's doing. The
problem is that if you write something in an unusual way, it leads the reader
to wonder why you did it that way rather than in a more straightforward
way.

Consider:
void
copystring(char *to, char *from) {
void *t = (void *) to;
void *f = (void *) from;
union { int zero; char c; } u;
u.zero = 0L;
u.c = 0;
do {
memcpy(t, f, 1);
t = (void *) ((char *) t) + 1;
f = (void *) ((char *) f) + 1;
} while (memcmp(f, &u.c, 1));
return;
free(&u);
}

An experienced programmer can probably say that this (unless I screwed it
up, no promises) copies the contents of "from" to "to" up to but not
including the first NUL byte encountered in "from".

But that doesn't lead to *understanding* the code. Why do we use t and f
instead of to and from? Why are we using memcmp to see whether the next
byte is 0x0? Why is u a union? Why is u.zero initialized before u.c, and
why was it initialized at all? Why does the code contain a free of a
non-allocated address, but put this after a return statement so it can't
be executed?

All of these questions suggest that one of two things is at issue:

1. The person who wrote this code had a very primitive, at best,
understanding of pointers in C. The code is probably unreliable.
2. There is something extremely unusual going on which you need to
know about.

In fact, it's really:
3. It is a contrived example.

Sandeep's code reads like a contrived example of some sort -- it acts as
though something special is going on, when nothing can be found to justify
the choices made, and that is a big red flag usually.

Ben Bacarisse
May 16, 2010, 6:45:47 PM
Seebs <usenet...@seebs.net> writes:
<snip>

[About pre-allocating memory for a low-memory emergency.]



> The entire idea is just plain wrong. You have formed some kind of crazy
> theory about "how malloc works", and that theory is incorrect, leading you
> to do stuff that makes no sense.

This "crazy theory" might have been formed, in part, by reading the
standard. A straight-forward interpretation of 7.20.3.2 paragraph 2:

"The free function causes the space pointed to by ptr to be
deallocated, that is, made available for further allocation."

suggests that what the OP was doing would work.

You and I may be world-weary enough to know that such wording is never
straight-forward, and implementations are not always conforming, but
perhaps the OP took this (or some paraphrase of it in a textbook) at
face value.

> This is like driving a car and making a point of manually triggering the
> airbags before you even start the car, so that you'll be safe in the event
> of a crash.

Or it's like reading in the car manual "if you trigger the airbags early
they will keep you safe in future crashes" and believing it.

<snip>
--
Ben.

Seebs
May 16, 2010, 6:46:50 PM
On 2010-05-16, Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
> Seebs <usenet...@seebs.net> writes:
><snip>
> [About pre-allocating memory for a low-memory emergency.]
>> The entire idea is just plain wrong. You have formed some kind of crazy
>> theory about "how malloc works", and that theory is incorrect, leading you
>> to do stuff that makes no sense.

> This "crazy theory" might have been formed, in part, by reading the
> standard. A straight-forward interpretation of 7.20.3.2 paragraph 2:

> "The free function causes the space pointed to by ptr to be
> deallocated, that is, made available for further allocation."

> suggests that what the OP was doing would work.

Ahh, I see. Yes, that would imply that, except of course that "available"
doesn't mean "usable". But I do agree, it might seem that way at first.

>> This is like driving a car and making a point of manually triggering the
>> airbags before you even start the car, so that you'll be safe in the event
>> of a crash.

> Or it's like reading in the car manual "if you trigger the airbags early
> they will keep you safe in future crashes" and believing it.

I think possibly reading "The air bags ensure safety in a crash", and
concluding that you should trigger them to be sure you're safe.

Phil Carmody
May 17, 2010, 1:47:23 AM
"Dennis \(Icarus\)" <nojun...@ever.invalid> writes:
> "Phil Carmody" <thefatphi...@yahoo.co.uk> wrote in message
> news:87mxw0d...@kilospaz.fatphil.org...
>> "Dennis \(Icarus\)" <nojun...@ever.invalid> writes:
>>> "Seebs" <usenet...@seebs.net> wrote in message
>>> news:slrnhuu50r.4tq...@guild.seebs.net...

>>>> On 2010-05-15, sandeep <nos...@nospam.com> wrote:
>>>>> Obviously for efficiency! malloc may be called many times in the course
>>>>> of a program.
>>>>
>>>> Please stop trying to outsmart the compiler.
>>>>
>>>> The "cost" of using a function instead of a macro is likely to be
>>>> so small
>>>> that you can't even measure it. If there is even a cost at all,
>>>> which there
>>>> may not be.

>>>
>>> memset actually showed as a bottleneck in one of programs I had to debug.
>>> It was being called 2,000,000,000 times or so.
>>>
>>> Converting from memset to a for loop writing integers helped a bit,
>>> but the real fix was to correct the logic error so that it wasn't
>>> called that many times.
>>> the memset was part of initializing an object, which was being
>>> constructed during processing, but was only needed a fraction of the
>>> time.....
>>
>> An object being 'constructed'? Are you sure you were using C?
>
> I know I was using C++. I doubt C would be any different in this respect.
> :-)

All that stuff that's just done for you in C++ so you don't have to
worry about may be the stuff that you don't want to happen. Whilst
it's certainly possible, I think you'd have to go a little further
out of your way to get such behaviour in C, you would have to
deliberately design it in more.

However, thanks for sharing your C++ woes on comp.lang.c.

Phil
--
I find the easiest thing to do is to k/f myself and just troll away
-- David Melville on r.a.s.f1

Phil Carmody
May 17, 2010, 2:14:14 AM
sandeep <nos...@nospam.com> writes:
>> 4.
>> Don't use "uLL" on a constant that's unambiguously within the size range
>> of an ordinary signed long. You don't need any qualifier at all,
>> although in theory a system could exist where that value is too big for
>> size_t, in which case you'd be allocating 0 bytes.
>
> I think this is wrong. With no "qualifier", the number will be
> interpreted as an int and undergo default promotions. Because int may be
> 16 bits this could overflow.

On the other hand, you could unlearn all the nonsense you've learnt
and actually learn C properly this time. Next time pay more attention
to:
[#5] The type of an integer constant is the first of the
corresponding list in which its value can be represented.

>> 5. Don't get so
>> clever. Try:
>>
>> if (!emrgcy) {
>> emrgcy = malloc(0x40000000);
>> if (!emrgcy) {
>> fprintf(stderr, "Uh-oh, failed to allocate spare memory.\n");
>> }
>> }
>
> I think this is the same logic but with longer and more complicated
> code...

Nope. It's significantly easier-to-read code. Count the number of
references to the emrgcy variable, for example. You had 6. Seebs
has 4. 2 of yours are demonstrably completely unnecessary, and
are just warts in the code.

>> 7. Actually, even then, this is just cargo cult
>> stuff.
>
> I don't know what cargo cult stuff is.

It's stuff from somewhere external that is adopted without proper
understanding of what it is, and what it does, and therefore it is
almost always misused.

> I like using advanced C features, yes. It makes programming fun. I think
> all good programmers will be able to understand my code.

You are the classic example of "a little knowledge is a dangerous thing".

I pity the maintenance programmer who ever has to maintain your
unreadable illogical code. Which I guess is self-pity - I have
seen plenty of such code written by just-out-of-university or
student workers that was as bad, and been forced to fix it (at
the end of a pointy paycheque, of course, I'd not do that for
fun).

Phil Carmody
May 17, 2010, 2:19:01 AM

The main problem was that he wanted a "?:=" version of gcc's "?:"
operator. If he'd have actually wanted "? :", it wouldn't have been
so bad to use "? :" (I like dense code, I don't like redundant code).

And as an aside, I think ?: should be standardised.

(And please overlook the syntactical sloppiness, I didn't want to have
to explain ?:)

Richard Heathfield
May 17, 2010, 2:30:18 AM
Seebs wrote:

<snip>

> The problem with using advanced features is that you have to know how to
> use them well. Very good race drivers sometimes use the hand brake in a car
> to control the car's behavior in unusual ways. This allows them to do
> things that most of us could never do with a car. However, the solution
> is not for me to, every time I pull up to a stop sign, use the hand brake
> instead of the regular brakes. That would damage my car very severely,
> very quickly.

In the UK, at a STOP sign (black text in red triangle on white
background), you are *required* to use your hand-brake, since you must
bring the vehicle to a complete and safe stop. (But of course you don't
use it /instead/ of the regular brakes - you use the footbrake to stop
the car, and then the handbrake to keep it stopped.)

> If your skill at first aid extends about to applying band-aids, don't start
> trying to do brain surgery.

Perhaps sandeep will be persuaded by this argument: if you treat your
users like idiots, smart people will soon stop using your program.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within

Richard Heathfield
May 17, 2010, 2:38:52 AM
Phil Carmody wrote:
<snip>

>
> An object being 'constructed'? Are you sure you were using C?

I construct objects very often in C. What's the problem?


io_x
May 17, 2010, 6:16:58 AM

"Tim Harig" <use...@ilthio.net> wrote:
> Note that minimizing code density doesn't create a smaller or faster
> binary. A concisely written ?: operator on a single line generates the
> same code as the equivalent if/else operators spread across multiple lines.
> The if/else version is almost always easier to read. "?:"'s are almost
> always the sign of an amateur programmer trying to show off their "l337
> skilz." Seasoned programmers have learned to avoid them.

Programming is about reducing text size:
it is saying to the computer many little precise things all together.

If "?:" reduces the size of the program by at least one char, it is OK.

Geoff
May 17, 2010, 7:19:49 AM

Programming is about correctness of the solution to a problem, not
about terseness. To obtain correctness and ease of maintenance the
programmer should write it as though someone unfamiliar with the
solution will be maintaining the code. The compiler will optimize the
code. Yes, there are occasions when the solution has bottlenecks, but
these are conceptual problems with the implementation of the solution,
not the size of the code.

>
>If "?:" reduces the size of the program by at least one char, it is OK.
>

Simply false. As Tim stated above, if/else generates essentially the
same machine code as ?: yet the latter is harder to read and maintain.


pete
May 17, 2010, 7:42:54 AM
Geoff wrote:

> I made 3 trips to the UK and drove there for a total of 4 weeks, I
> can't remember encountering a stop sign. Lots of roundabouts and
> traffic lights, but no stop signs.

In the US, some intersections have stop signs on one road,
and yield signs on the intersecting road.

I have never been able to understand explanations
about what the yield signs are supposed to mean in such situations.

--
pete

Richard Bos
May 17, 2010, 8:47:45 AM
Keith Thompson <ks...@mib.org> wrote:

> Lew Pitcher <lpit...@teksavvy.com> writes:
> [...]
> > It is acceptable to have this utility function /log/ the error. It is
> > not acceptable to have the utility function decide that the
> > application /cannot recover/ from this error.
>
> Another way to look at it is that the application decided, by calling
> this particular utility function, that it could not recover from
> a memory allocation failure. If it could, it should have called
> some malloc() or other function that would permit recovery.

In theory, yes. In practice, when I've seen such functions used, either
the code is throwaway or command line utility code, or the programmer
has _assumed_, not decided, that that he could not recover from malloc()
failures. In the latter case, the programmer is almost always wrong. (In
the first ones, the decision is not so much "could not" as "considered
it too much bother", which, for programs of that size, is more often
justifiable.)

Richard

Richard Heathfield
May 17, 2010, 12:23:52 PM
Tim Streater wrote:
> In article <UYKdnZBMGKz-f23W...@bt.com>,

> Richard Heathfield <r...@see.sig.invalid> wrote:
>
>> Seebs wrote:
>>
>> <snip>
>>
>>> The problem with using advanced features is that you have to know how to
>>> use them well. Very good race drivers sometimes use the hand brake in a car
>>> to control the car's behavior in unusual ways. This allows them to do
>>> things that most of us could never do with a car. However, the solution
>>> is not for me to, every time I pull up to a stop sign, use the hand brake
>>> instead of the regular brakes. That would damage my car very severely,
>>> very quickly.
>> In the UK, at a STOP sign (black text in red triangle on white
>> background), you are *required* to use your hand-brake, since you must
>> bring the vehicle to a complete and safe stop. (But of course you don't
>> use it /instead/ of the regular brakes - you use the footbrake to stop
>> the car, and then the handbrake to keep it stopped.)
>
> The difference with the US is that in the UK we only use Stop signs
> where there is a junction with some danger to it (such as a steep slope
> to the junction, or hedges). So they are quite rare, since with most
> junctions a Give Way is quite sufficient.

Actually, they are /so/ rare in the UK that I misreported what they look
like! (They are actually octagonal, red with a thin white border, and
white text.) I don't recall ever coming across one that was there for
spurious reasons.

> The US has almost no Give Way signs and lots of Stop signs where they
> are not needed. Like, you're going along a road and there is a Stop sign
> serving no purpose whatever (not at a junction, not by a school, even).

Well, that's your problem, not mine. :-)

Keith Thompson
May 17, 2010, 12:24:53 PM
Phil Carmody <thefatphi...@yahoo.co.uk> writes:
[...]

> The main problem was that he wanted a "?:=" version of gcc's "?:"
> operator. If he'd have actually wanted "? :", it wouldn't have been
> so bad to use "? :" (I like dense code, I don't like redundant code).
>
> And as an aside, I think ?: should be standardised.
>
> (And please overlook the syntactical sloppiness, I didn't want to have
> to explain ?:)

Since you're talking about a gcc-specific extension (and since C already
has a ?: operator), perhaps explaining it would have been a good idea.

gcc allows the middle operand of the conditional operator to be
omitted. ``x ? : y'' is equivalent to ``x ? x : y'' except that x
is evaluated only once. It yields the value of x if x is non-zero,
otherwise it yields the value of y.

Keith Thompson
May 17, 2010, 12:26:04 PM
Richard Heathfield <r...@see.sig.invalid> writes:
> Phil Carmody wrote:
> <snip>
>>
>> An object being 'constructed'? Are you sure you were using C?
>
> I construct objects very often in C. What's the problem?

Who said there was a problem?

The word "constructed" is often (but by no means always) C++-specific
jargon. And it turned out the previous poster was using C++.

Richard Heathfield
May 17, 2010, 12:27:59 PM
io_x wrote:
> "Tim Harig" <use...@ilthio.net> wrote:
>> Note that minimizing code density doesn't create a smaller or faster
>> binary. A concisely written ?: operator on a single line generates the
>> same code as the equivalent if/else operators spread across multiple lines.
>> The if/else version is almost always easier to read. "?:"'s are almost
>> always the sign of an amateur programmer trying to show off their "l337
>> skilz." Seasoned programmers have learned to avoid them.
>
> Programming is about reducing text size

That's an interesting perspective, but not one that would keep you
employed for very long. Other than as an interesting intellectual
exercise (or, perhaps, as an IOCCC entry), a program that is optimised
for source code length is of no use whatsoever. Programs should be
readable, and readability *requires* redundancy.

> it is saying to the computer many little precise things all together.
>
> If "?:" reduces the size of the program by at least one char, it is OK.

Not if it is at the expense (and I do mean *expense*) of a reduction in
readability.

Richard Heathfield
May 17, 2010, 12:34:38 PM
Keith Thompson wrote:
> Richard Heathfield <r...@see.sig.invalid> writes:
>> Phil Carmody wrote:
>> <snip>
>>> An object being 'constructed'? Are you sure you were using C?
>> I construct objects very often in C. What's the problem?
>
> Who said there was a problem?

Nobody explicitly said there was a problem, but I inferred, from what
Phil Carmody said, that he thought there was a problem.

>
> The word "constructed" is often (but by no means always) C++-specific
> jargon.

<shrug> Precisely so. The fact that the word has a specific meaning in
C++, however, is surely irrelevant in a C newsgroup?

> And it turned out the previous poster was using C++.

Well, that's his problem. :-)

Seebs
May 17, 2010, 12:28:22 PM
On 2010-05-17, Tim Streater <timst...@waitrose.com> wrote:
> The difference with the US is that in the UK we only use Stop signs
> where there is a junction with some danger to it (such as a steep slope
> to the junction, or hedges). So they are quite rare, since with most
> junctions a Give Way is quite sufficient.

Ahh! That makes sense. Ours are used any time you're expected to come
to a stop, but there's no expectation of using the hand brake -- just
slowing down enough that you're clearly not-in-motion.

> The US has almost no Give Way signs and lots of Stop signs where they
> are not needed. Like, you're going along a road and there is a Stop sign
> serving no purpose whatever (not at a junction, not by a school, even).

I have never, ever, seen such a thing. I have only seen Stop signs at
intersections of some sort.


Bob Doherty
May 17, 2010, 12:57:42 PM
On Mon, 17 May 2010 10:26:06 +0100, Tim Streater
<timst...@waitrose.com> wrote:


>The US has almost no Give Way signs and lots of Stop signs where they
>are not needed. Like, you're going along a road and there is a Stop sign
>serving no purpose whatever (not at a junction, not by a school, even).

I assume that Give Way is the equivalent of US Yield signs.
Interestingly, when I was learning to drive in Massachusetts in the '50s,
a Stop sign did not imply a yield. This twist produced a lot of
testosterone-induced confrontations at intersections. Those who have
driven in Massachusetts, even now, will recognize the syndrome.
--
Bob Doherty

Seebs
May 17, 2010, 1:27:35 PM
On 2010-05-17, Tim Streater <timst...@waitrose.com> wrote:
> In article <slrnhv2s7g.ee7...@guild.seebs.net>,

> Seebs <usenet...@seebs.net> wrote:
>> I have never, ever, seen such a thing. I have only seen Stop signs at
>> intersections of some sort.

> Well, this *was* California.

Ahh.

Well, then we know what it was there for; safety! Kinetic energy is a
property of matter known to the State of California to increase the risk
of certain kinds of injury*.

-s
[*] In theory, the requirement is that any object which has kinetic energy
shall bear a sticker stating this, and no object which does not have kinetic
energy shall bear such a sticker, as it wouldn't do to alarm people unduly.
Most manufacturers have fine print somewhere in their product kinetic energy
declarations stating that the product shall constitute its own rest frame,
which is why you never see the stickers.

sandeep
May 17, 2010, 3:07:18 PM
Keith Thompson writes:
> For most allocations, if you can't get what you asked for, that's a
> failure, and you either need to fall back to some other approach or
> prepare to shut down. If I'm trying to allocate a tree node, half a
> node does me no good.

I think you are not seeing the whole picture here. You should think of
alternative situations too.

Maybe you want the allocation to perform a fast sort routine. If you
can't get enough memory, you can just fall back to a slower out-of-core
sorting routine.

sandeep
May 17, 2010, 3:09:16 PM
Keith Thompson writes:
> But again, there's no good reason to use a macro rather than a constant
> object declaration:
>
> const size_t esize = 0x40000000;

Constant variables in C are not really constants. You couldn't define an
array of size esize for example. Preprocessor defines are needed - but
making them globally visible is not needed and is harmful when they are
only used once.

Seebs
May 17, 2010, 3:05:35 PM

Oh, certainly.

But it's still more useful for the larger allocation to Just Fail, so you
can ask for a smaller one.

The point is, in many cases, you *can't* use a smaller allocation, so
automatically giving you a smaller allocation you may not be able to use
is pointless. Better to just succeed or fail, and let the caller decide
whether to try to allocate something else instead.

-s
