I wrote a C program with assert(str != NULL) to check whether char *str
is NULL or not.
Then I passed NULL as str into the assert, compiled it with gcc -g ...,
and sure enough I got the assertion failure.
However:
1. When I used gcc -O to compile the same program, I got the assertion
as well.
2. When I added #define NDEBUG after #include <assert.h> and used gcc -g
to compile the program, I got the assertion as well.
I thought that if we used -O or defined NDEBUG, the assert statement
would not be run, but it seems I am wrong, or someone was wrong?
Can anyone help me?
"after"
> I thought that if we used -O or defined NDEBUG, the assert statement
> would not be run, but it seems I am wrong, or someone was wrong?
Try defining NDEBUG *before* including <assert.h>.
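That is, a minimal sketch:

#define NDEBUG              /* must come before the include */
#include <assert.h>

int main(void)
{
    char *str = 0;
    assert(str != 0);       /* expands to ((void)0); never fires */
    return 0;
}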
-s
--
Copyright 2009, all wrongs reversed. Peter Seebach / usenet...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
Thanks, you are right: after I moved #define NDEBUG before
#include <assert.h>, the assert statement no longer fires.
But how about the -O option in gcc? Can that disable assert?
cuiyou...@gmail.com wrote:
The -O option in gcc must not change the behaviour of a program (unless it
has undefined or unspecified behaviour).
No, the "-O" option doesn't affect the behavior of assert. Why did
you think it would?
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
OK, then if I compile the C program in release mode, the assert
statement should not work; is that wrong?
If it is wrong, then how do I disable the assert statement with gcc?
> On Dec 26, 4:51 am, Keith Thompson <ks...@mib.org> wrote:
>> "cuiyouzhi0...@gmail.com" <cuiyouzhi0...@gmail.com> writes:
>> > On Dec 25, 8:06 pm, Seebs <usenet-nos...@seebs.net> wrote:
>> >> On 2009-12-25, cuiyouzhi0...@gmail.com <cuiyouzhi0...@gmail.com> wrote:
>>
>> >> > However:
>> >> > 1. When I used gcc -O to compile the same program, I got the assertion
>> >> > as well.
>> >> > 2. When I added #define NDEBUG after #include <assert.h> and used gcc -g
>> >> > to compile the program, I got the assertion as well.
>>
>> >> "after"
>>
>> >> > I thought that if we used -O or defined NDEBUG, the assert statement
>> >> > would not be run, but it seems I am wrong, or someone was wrong?
>>
>> >> Try defining NDEBUG *before* including <assert.h>.
>>
>> > Thanks, you are right: after I moved #define NDEBUG before
>> > #include <assert.h>, the assert statement no longer fires.
>>
>> > But how about the -O option in gcc? Can that disable assert?
>>
>> No, the "-O" option doesn't affect the behavior of assert.  Why did
>> you think it would?
> [Keith's signature snipped]
>
> OK, then if I compile the C program in release mode, the assert
> statement should not work; is that wrong?
> If it is wrong, then how do I disable the assert statement with gcc?
Can you avoid quoting people's signatures please?
That's right (although I remember heated debates about whether removing
asserts from releases is a good idea or not). But the way you do it is
not by changing optimisation level (I develop at high optimisation
levels - partly because of the better checking I get from the compiler -
only turning it down if debugging goes funny). You do it just the way
you've shown, by defining NDEBUG. You can do that with a compiler
option, you don't have to wire it into the program (-DNDEBUG for gcc).
--
Online waterways route planner: http://canalplan.org.uk
development version: http://canalplan.eu
Can you avoid putting your nannying in the wrong place please? It
belonged up above where you snipped.
>
> That's right (although I remember heated debates about whether removing
> asserts from releases is a good idea or not). But the way you do it
> is
It is. asserts suck and are the sign of sloppy programmers who only use
them to appear to be concerned about reliability. An assert condition
should simply not occur.
> not by changing optimisation level (I develop at high optimisation
> levels - partly because of the better checking I get from the compiler -
> only turning it down if debugging goes funny). You do it just the way
> you've shown, by defining NDEBUG. You can do that with a compiler
> option, you don't have to wire it into the program (-DNDEBUG for gcc).
--
"Avoid hyperbole at all costs, its the most destructive argument on
the planet" - Mark McIntyre in comp.lang.c
> > Can you avoid quoting people's signatures please?
OK, thanks
>
> > That's right (although I remember heated debates about whether removing
> > asserts from releases is a good idea or not).  But the way you do it
> > is
>
> It is. asserts suck and are the sign of sloppy programmers who only use
> them to appear to be concerned about reliability. An assert condition
> should simply not occur.
I don't agree that an assert condition can simply never occur.
For example, asserting a non-NULL pointer: I think a NULL pointer can
occur with high probability.
>
> > not by changing optimisation level (I develop at high optimisation
> > levels - partly because of the better checking I get from the compiler -
> > only turning it down if debugging goes funny).  You do it just the way
> > you've shown, by defining NDEBUG.  You can do that with a compiler
> > option, you don't have to wire it into the program (-DNDEBUG for gcc).
>
So, gcc -NDEBUG -g *.c *.c -o * will work without assert?
> On Dec 26, 10:31 pm, Richard <rgrd...@gmail.com> wrote:
>
>> > Can you avoid quoting people's signatures please?
>
> OK, thanks
>
>>
>> > That's right (although I remember heated debates about whether removing
>> > asserts from releases is a good idea or not).  But the way you do it
>> > is
>>
>> It is. asserts suck and are the sign of sloppy programmers who only use
>> them to appear to be concerned about reliability. An assert condition
>> should simply not occur.
>
> I don't agree that an assert condition can simply never occur.
> For example, asserting a non-NULL pointer: I think a NULL pointer can
> occur with high probability.
No it can't, if you know the program works. Your code can handle it more
gracefully, unless you KNOW a routine should never be called with a null
pointer.
Maybe "never" is a tad strong, but I only see them abused.
The thinking goes like this:
"I don't need to rigorously test my code flow and ensure things are only
called with valid parameters (e.g. non-null pointers) because if they are
called with null pointers the assert will trap it". It's for the lazy.
>
>>
>> > not by changing optimisation level (I develop at high optimisation
>> > levels - partly because of the better checking I get from the compiler -
>> > only turning it down if debugging goes funny).  You do it just the way
>> > you've shown, by defining NDEBUG.  You can do that with a compiler
>> > option, you don't have to wire it into the program (-DNDEBUG for gcc).
>>
>
> So, gcc -NDEBUG -g *.c *.c -o * will work without assert?
>
The claim that "asserts suck" sucks, but their purpose must
be properly understood: They are useful to the programmer, but
not to the end user. They can alert the programmer to programming
errors, but are not good for things like validating user input.
Since an unsuccessful assert() terminates the program (unless
you're doing tricky things with SIGABRT), an assert() that fires
with "high probability" means the program dies with that same
high probability -- which indicates that the program is not very
useful, because it keeps dying!
> So, gcc -NDEBUG -g *.c *.c -o * will work without assert?
The command line as shown will probably not work at all.
Changing it to something like "gcc -DNDEBUG -g *.c" will cause
the NDEBUG macro to be defined at the start of each .c file's
compilation. If it's still defined when <assert.h> is included
(that is, if you don't #undef it first), the assert() calls in
that file will expand to no-ops that have no effect on the
program's execution.
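For instance (a tiny made-up example; call the file t.c):

/* t.c */
#include <assert.h>

int main(void)
{
    char *str = 0;
    assert(str != 0);
    return 0;
}

$ gcc -g t.c -o t && ./t            (aborts with an assertion message)
$ gcc -DNDEBUG -g t.c -o t && ./t   (exits silently; assert is a no-op)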
--
Eric Sosman
eso...@ieee-dot-org.invalid
Asserts suck because of the termination. In 99% of test cases where that
assert triggers you can also merely run the program in a debugger and
get a backtrace there and then and "live debug". Asserts, like
littering code with printfs, are amateurish and suck in most cases where
I have come across them polluting the code.
Indeed. Asserts, like the regs in this newsgroup, are fossils left over
from a day and age where debuggers didn't exist.
It depends on what you mean by "release mode". That's not a term
defined either by the C language or, as far as I can tell, by gcc.
Defining the macro NDEBUG before the "#include <assert.h>"
disables asserts. You can do that by adding "#define NDEBUG"
to your source code, or by using some compiler-specific option.
For gcc (and some other compilers), the option is "-DNDEBUG".
So you mean that assert can be used to debug the program,
not for error handling.
On the debug side, we use assert to check whether all tests pass;
if they pass, we can disable the assert statements in non-debug mode
so they don't cost any time.
I am clear now, I think. We should design another
way to handle errors, such as by returning a status code. Am I right?
>
> > So, gcc -NDEBUG -g *.c *.c -o * will work without assert?
>
>      The command line as shown will probably not work at all.
> Changing it to something like "gcc -DNDEBUG -g *.c" will cause
> the NDEBUG macro to be defined at the start of each .c file's
> compilation.  If it's still defined when <assert.h> is included
> (that is, if you don't #undef it first), the assert() calls in
> that file will expand to no-ops that have no effect on the
> program's execution.
Ah, yes, we should add -D before NDEBUG, thanks.
<snip>
> So you mean that assert can be used to debug the program,
> not for error handling.
Assertions were never intended to handle errors. Their purpose is to
allow you to make explicit your assumptions about the state of your
program at a given point so that, if ever a situation arises where
those assumptions don't hold, you can get the program to tell you
about it - by aborting with a message giving you the file name and
line number of the broken assumption.
Nobody would ever want to use such a cumbersome "all or nothing"
mechanism for error-handling.
p = fgets(buf, sizeof buf, stdin);
assert(p != NULL); /* daft */
but for assumption-checking, it works well:
new = tree_insert(&tree, data, sizeof data);
if (new != NULL)
{
    assert(tree_balances(&tree));
    /* ... if okay, carry on... */
}
else
{
    /* ... handle insertion error ... */
}
> On the debug side, we use assert to check whether all tests pass;
> if they pass, we can disable the assert statements in non-debug mode
> so they don't cost any time.
Well, I wouldn't want to use assert() for all testing, or even for
most testing. Trying to fire assertions is a part of testing, but
only a small part.
> I am clear now, I think. We should design another
> way to handle errors, such as by returning a status code. Am I right?
Yes.
<snip>
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
> In
> <bccb957c-6c21-423d...@u16g2000pru.googlegroups.com>,
> cuiyou...@gmail.com wrote:
>
> <snip>
>
>> So you mean that assert can be used to debug the program,
>> not for error handling.
>
> Assertions were never intended to handle errors. Their purpose is to
> allow you to make explicit your assumptions about the state of your
> program at a given point so that, if ever a situation arises where
> those assumptions don't hold, you can get the program to tell you
> about it - by aborting with a message giving you the file name and
> line number of the broken assumption.
>
> Nobody would ever want to use such a cumbersome "all or nothing"
> mechanism for error-handling.
>
> p = fgets(buf, sizeof buf, stdin);
> assert(p != NULL); /* daft */
>
> but for assumption-checking, it works well:
>
> new = tree_insert(&tree, data, sizeof data);
> if (new != NULL)
> {
>     assert(tree_balances(&tree));
>     /* ... if okay, carry on... */
> }
> else
> {
>     /* ... handle insertion error ... */
> }
That also provides a bit of documentation - on reading the code it's
obvious that the programmer expects the tree to be balanced after
insertion, so you don't sit there thinking "I wonder if tree_insert
always returns a balanced tree?".
That doesn't make sense. A lot of times the assert gets triggered
under a very specific set of conditions that might not even be
something you can test in your lab environment. Running the program
in a debugger does nothing to reproduce that kind of problem.
Consider a watchdog task in an embedded system. If a task is hung due
to an unforeseen set of peculiar circumstances, the periodic assertion
that all tasks have checked in can be invaluable. I can't think of
any obvious alternative for debugging this kind of problem.
Where I work, a failed assertion generates a coredump in our
production code. The ability to see what state the device was in when
something has gone unexpectedly wrong in the field has proven to be
extremely valuable. It isn't about programmer laziness; quite the
opposite. Diligent sanity checking through the use of asserts is a
means of proactively providing visibility into the problems that will
invariably arise once the product is put to actual use.
Neither does an assert.
asserts clutter the code and encourage laziness. In my opinion of
course. Feel free to disagree. But then in c.l.c many think printfs are
the way to debug a program. Personally I would fire someone I caught
doing that if they had a modern debugger available. Note I mean printfs
to stdout with lines like "x==8" and not directed logging of system
status.
Anyone not knowing why littering code with printfs is bad can google it
up - not that any competent programmer should need it explaining.
Yes, more or less. The actual effect of an assert() is
to cause a programming error to behave predictably and quite
visibly, which is useful as a starting point for debugging.
> On the debug side, we use assert to check whether all tests pass;
> if they pass, we can disable the assert statements in non-debug mode
> so they don't cost any time.
There are two (or more) schools of thought on this topic.
One approach sprinkles assert() liberally throughout the code
during development and then disables them all for "release"
versions or when performance measurements become important.
Another says "ship what you tested" and leaves the assert()
calls enabled even in the final product, the idea being that
the end user may do things the test suite didn't anticipate.
In large projects a blended strategy may be used: Load the
code with tons and tons of assert() calls during development,
tagging each with an "importance" or "level." In the release
version, disable the less important calls but leave the critical
assertions intact. Sometimes a wrapper along the lines of
        extern enum { RELEASE, NORMAL, PARANOID } debugLevel;
        #define ASSERT(level,truth) \
            assert(debugLevel < (level) || (truth))
        ...
        session = idToSession(sessionID);
        ASSERT(RELEASE, session != NULL);
        ...
        ASSERT(PARANOID, expensiveSanityCheck());
        ...
... can be used for the "tagging."
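To make that concrete, here is a minimal compilable sketch of the
same idea (idToSession() and expensiveSanityCheck() are stand-ins,
not code from any real project):

#include <assert.h>
#include <stddef.h>

enum { RELEASE, NORMAL, PARANOID } debugLevel = NORMAL;

#define ASSERT(level,truth) \
    assert(debugLevel < (level) || (truth))

/* stand-in helpers, just so the sketch compiles */
static void *idToSession(int id) { return id == 42 ? &debugLevel : NULL; }
static int expensiveSanityCheck(void) { return 1; }

int main(void)
{
    void *session = idToSession(42);
    ASSERT(RELEASE, session != NULL);         /* always checked */
    ASSERT(PARANOID, expensiveSanityCheck()); /* skipped at NORMAL */
    return 0;
}

Note that || short-circuits, so a check tagged above the current
debugLevel costs only the level comparison; the check expression
itself is never evaluated.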
> I am clear now, I think. We should design another
> way to handle errors, such as by returning a status code. Am I right?
It depends on what you mean by "error." Usually, there
are many possible kinds of error, including (but not limited to)
- Logic errors: The programmer reasoned incorrectly or from
incorrect precepts, so the code does not behave as desired
- Implementation errors: The programmer chose the right
algorithm, but slipped up in coding it
- Environmental errors: The code is fine, but for some reason
the "configfile.dat" file can't be opened
- User errors: While running the program, the user entered
his date of birth as 1964-02-30
An assert() can be helpful in cases like the first two, but is
probably not appropriate for the final two.
>>> So, gcc -NDEBUG -g *.c *.c -o * will work without assert?
>>
>> The command line as shown will probably not work at all.
>> Changing it to something like "gcc -DNDEBUG -g *.c" will cause
>> the NDEBUG macro to be defined at the start of each .c file's
>> compilation. If it's still defined when <assert.h> is included
>> (that is, if you don't #undef it first), the assert() calls in
>> that file will expand to no-ops that have no effect on the
>> program's execution.
>
> Ah, yes, we should add -D before NDEBUG, thanks.
You might also want to review just how many .c files are
being compiled, and where the -o sends the output ...
--
Eric Sosman
eso...@ieee-dot-org.invalid
Bah, I just tripped one of my asserts just the other day. There were
two entities in the game that would interrogate one another for
information. I added a new state to one of the entities, but forgot to
prep the other one for it.
When it ran, the second entity saw the new state, didn't know what to do
with it, and asserted. And I thought, "Oh yeah--this guy needs to react
to that."
Whether lazy, sloppy, or just too late at night, I'm glad I put that
assert in there.
I agree that programmers should know to not rely on asserts for
production code.
-Beej
<snip>
> I agree that programmers should know to not rely on asserts for
> production code.
Likewise. They are a development-time tool. The ease with which they can
be disabled is wonderful, as it means that we can have all the benefits
of assertions during development, without any of the hassles - extra
code, extra time - in the production version.
You will, however, not be surprised to learn that there is a
considerable body of opinion to the effect that assertions should be
left *on* in production code. The analogy that this view's supporters
like to trot out is "you wouldn't take the lifeboats off a ship before
sending it out to sea"... which just goes to show that proof by analogy
is fraud. The counter-"proof" is that assertions are like scaffolding -
it's very useful while the building is undergoing construction or
repair, but you wouldn't want to leave it up during normal usage.
It looks like debugLevel here is assigned somewhere else.
>         #define ASSERT(level,truth) \
>             assert(debugLevel < (level) || (truth))
so ASSERT can be controlled by debugLevel.
>         ...
>         session = idToSession(sessionID);
>         ASSERT(RELEASE, session != NULL);
>         ...
>         ASSERT(PARANOID, expensiveSanityCheck());
>         ...
>
> ... can be used for the "tagging."
Indeed, a good way to control asserts in a large project.
>
> > I am clear now, I think. We should design another
> > way to handle errors, such as by returning a status code. Am I right?
>
>      It depends on what you mean by "error."  Usually, there
> are many possible kinds of error, including (but not limited to)
>
>      - Logic errors: The programmer reasoned incorrectly or from
>        incorrect precepts, so the code does not behave as desired
>
>      - Implementation errors: The programmer chose the right
>        algorithm, but slipped up in coding it
>
>      - Environmental errors: The code is fine, but for some reason
>        the "configfile.dat" file can't be opened
>
>      - User errors: While running the program, the user entered
>        his date of birth as 1964-02-30
>
> An assert() can be helpful in cases like the first two, but is
> probably not appropriate for the final two.
>
Good summary of "error". For the first two kinds, we use assert to
debug our programs; for the others, we really need an error-handling
method.
So, I agree with you about assert:
* assert can be used in debug or release mode; you decide by your
case (such as controlling asserts with the ASSERT macro above, or
disabling assert for performance reasons)
* assert is often used to debug what is wrong inside the program, such
as logic errors and implementation errors, while user errors and
other environment errors should be handled by error codes or exceptions.
* -DNODEBUG is used to disable assert with gcc, or define the NODEBUG macro
before #include<assert.h>
Anything else to add?
> Beej Jorgensen wrote:
>
> <snip>
>
>> I agree that programmers should know to not rely on asserts for
>> production code.
>
> Likewise. They are a development-time tool. The ease with which they
> can be disabled is wonderful, as it means that we can have all the
> benefits of assertions during development, without any of the hassles
> - extra code, extra time - in the production version.
>
> You will, however, not be surprised to learn that there is a
> considerable body of opinion to the effect that assertions should be
> left *on* in production code. The analogy that this view's supporters
> like to trot out is "you wouldn't take the lifeboats off a ship before
> sending it out to sea"... which just goes to show that proof by
> analogy is fraud. The counter-"proof" is that assertions are like
> scaffolding -
> it's very useful while the building is undergoing construction or
> repair, but you wouldn't want to leave it up during normal usage.
I think one of the reasons for the heat is that there are two different
reasons to use assert - and the arguments apply differently to each.
One is where you are carrying out a very-near-free check: that a pointer
passed to a function isn't NULL for example. Without the assert the
function is going to blow up when you dereference the NULL pointer, so
adding the assert just documents clearly that the pointer mustn't be
NULL, and guarantees a particular failure mode if it gets that way
through a bug elsewhere in the program. These seem good reasons to
leave this sort of assert turned on.
The second is your tree-balance-checking example, where there is
presumably some reasonable cost to carrying out the check. In that case
turning it off in the production code for efficiency reasons makes
sense. Also, perhaps, you might in many cases get away with the tree
being unbalanced, so blowing up in the face of the user isn't the best
response. These seem good reasons to turn this sort of assert off.
In fact, thinking about and writing this makes me wonder about using
assert for the latter and a simple if/abort() line for the former.
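Something like the following, say (process() is just an illustrative
name):

#include <stdlib.h>

void process(char *ptr)
{
    if (ptr == NULL)    /* always-on, near-free check */
        abort();
    /* ... use ptr ... */
}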
<snip>
> So, I agree with you about assert:
>
> * assert can be used in debug or release mode; you decide by your
> case (such as controlling asserts with the ASSERT macro above, or
> disabling assert for performance reasons)
If you need to leave the assertions in your production code, it is not
yet ready for release.
> * assert is often used to debug what is wrong inside the program, such
> as logic errors and implementation errors, while user errors and
> other environment errors should be handled by error codes or exceptions.
One might think of it this way: assertions are intended to reveal bugs
in the program, not bugs in the data. A wrong program needs the
programmer's attention, but wrong data is, at least in theory,
susceptible to correction by the user, so one should make it
(relatively) easy for the user to correct the data, for example by
offering another opportunity to provide it. One does not do this by
bombing out of the program.
> * -DNODEBUG is used to disable assert with gcc, or define the NODEBUG macro
In another of those mildly infuriating abbrvs, it's NDEBUG, not NODEBUG.
> before #include<assert.h>
Or, slightly more traditionally and readably:
#include <assert.h>
> Anything else to add?
Yes: don't believe everything you read.
<snip>
> I think one of the reasons for the heat is that there are two different
> reasons to use assert - and the arguments apply differently to each.
Sounds right to me. Eric's "levels" idea looks pretty solid in
principle, although I don't exactly agree with his choice of levels. :-)
If I want to keep the assert statements in production code, what else
do I need to do?
>
> > * assert is often used to debug what is wrong inside the program, such
> > as logic errors and implementation errors, while user errors and
> > other environment errors should be handled by error codes or exceptions.
>
> One might think of it this way: assertions are intended to reveal bugs
> in the program, not bugs in the data. A wrong program needs the
> programmer's attention, but wrong data is, at least in theory,
> susceptible to correction by the user, so one should make it
> (relatively) easy for the user to correct the data, for example by
> offering another opportunity to provide it. One does not do this by
> bombing out of the program.
Classifying by program error versus data error is better.
>
> > * -DNODEBUG is used to disable assert with gcc, or define the NODEBUG macro
>
> In another of those mildly infuriating abbrvs, it's NDEBUG, not NODEBUG.
>
> > before #include<assert.h>
>
> Or, slightly more traditionally and readably:
>
> #include <assert.h>
Thanks for correcting the details.
>
> > Anything else to add?
>
> Yes: don't believe everything you read.
>
sorry, I don't understand this line.
<snip>
>> If you need to leave the assertions in your production code, it is not
>> yet ready for release.
>
> If I want to keep the assert statements in production code, what else
> do I need to do?
In mechanical terms, you simply need to ensure that NDEBUG is NOT
defined. In terms of advice on good programming practice, you need first
to find an advisor who agrees with you that leaving the assertions in
place is the right thing to do. I'm the wrong person to ask, because my
advice would simply be "don't do that".
<snip>
>>> Anything else need to add?
>> Yes: don't believe everything you read.
>>
> sorry, I don't understand this line.
You will get conflicting advice from different sources. Don't assume
that every source is reliable. Many are not.
Also, when two sources disagree, do not assume that
one (at least) of them is unreliable. Even experts differ.
<off-topic reason=anecdote>
A customer once got upset because he heard differing advice
from a software vendor and from my employer about how to tune
my company's hardware to run the other company's software. I
used to employ automotive analogies in such cases, pointing
out that different sources give different advice about how
often to change the engine oil, what kind of oil to use, and
so on. But since this customer was in the automotive industry
I thought perhaps I should avoid that analogy, and went looking
for another.
To prepare for the conference call where this "scandal"
would be aired, I pulled out three cookbooks and looked up their
recipes for guacamole. On the call, I listed the credentials of
the cooks and chefs responsible for the recipes, to establish
their expertise in matters culinary, and then pointed out that
the three books called for different amounts of lemon juice (on
a per-avocado basis). Indeed, one of them used no lemon juice
at all, but recommended lime juice instead!
The point was made: Experts can disagree. And to this day
that customer knows me as The Guacamole Man.
</off-topic>
--
Eric Sosman
eso...@ieee-dot-org.invalid
I once (early 1980s) knew a lecturer who used to say of experts: "If
they agree, obviously they're all conspiring to conceal the truth. And
if they disagree, they don't know what they're talking about."
Those who have followed this newsgroup in recent months should have no
difficulty in guessing the lecturer's subject.
<anecdote reluctantly snipped>
> cuiyou...@gmail.com wrote:
>
><snip>
>
>> So, I agree with you about assert:
>>
>> * assert can be used in debug or release mode; you decide by your
>> case (such as controlling asserts with the ASSERT macro above, or
>> disabling assert for performance reasons)
>
> If you need to leave the assertions in your production code, it is not
> yet ready for release.
I agree and follow this as a general rule, but I'm a strong believer
in context.
Not really relevant rant follows.
Once on a project I had something like the following fail code review.
if (p == NULL) {
    fprintf(stderr, "Phil fix your !@@^%$# code you moron\n");
    kill(phil);
}
while this was accepted
assert(p != NULL); /* If you can read this, call Phil x5667 */
Sadly, the assert worked. The code was delivered, failed, and
corrected after a maintenance call. It was a military job and
sometimes you have to play the hand you're dealt.
>> * assert is often used to debug what is wrong inside the program, such
>> as logic errors and implementation errors, while user errors and
>> other environment errors should be handled by error codes or exceptions.
>
> One might think of it this way: assertions are intended to reveal bugs
> in the program, not bugs in the data. A wrong program needs the
> programmer's attention, but wrong data is, at least in theory,
> susceptible to correction by the user, so one should make it
> (relatively) easy for the user to correct the data, for example by
> offering another opportunity to provide it. One does not do this by
> bombing out of the program.
I like it but I'd include system resources in the misuses.
When code is compiled in release mode, an assert() call does nothing at
all. If an assert() fires, then code was delivered to a customer
without having NDEBUG defined at compile time.
So the burning question is, why is debug code being delivered in the
first place?
7.2 Diagnostics <assert.h>
1 The header <assert.h> defines the assert macro and refers to another
macro, NDEBUG which is not defined by <assert.h>. If NDEBUG is defined
as a macro name at the point in the source file where <assert.h> is
included, the assert macro is defined simply as
#define assert(ignore) ((void)0)
The assert macro is redefined according to the current state of NDEBUG
each time that <assert.h> is included.
Because the deliverers are negligent, having failed to
remove The Last Bug before delivery?
--
Eric Sosman
eso...@ieee-dot-org.invalid
<snip>
> When code is compiled in release mode, an assert() call does nothing at
> all. If an assert() fires, then code was delivered to a customer
> without having NDEBUG defined at compile time.
>
> So the burning question is, why is debug code being delivered in the
> first place?
Policy from on high. Most likely cause is the Peter Principle.
Or because, even in released software, having the program crash with
an error message might be considered better than having it continue to
execute with random results.
You mentioned that the code was written for a military
customer. In the USA, the military and NASA and various other
agencies are big believers in "Test what you'll fly; fly what
you tested."
The policy proved invaluable when Mars Pathfinder landed on
Mars and *then* started doing uncontrollable and unscheduled
reboots; the debugging code that was still onboard allowed
ground control to diagnose and patch the problem. Sending a
field engineer on a 240 million mile round trip would have
been inconvenient.
--
Eric Sosman
eso...@ieee-dot-org.invalid
Sorry; the distance is wrong. I got it from the Pathfinder
web site, where they've obviously converted kilometers to miles
incorrectly. There seems to be a history of units confusion on
Mars missions ...
--
Eric Sosman
eso...@ieee-dot-org.invalid
> So the burning question is, why is debug code being delivered in the
> first place?
There can be situations where it simply isn't possible to do
rigorous testing before delivery. I am often writing code for
controlling lab equipment that's far too expensive to have
around (besides not having room to put it nor, in some cases,
the necessary supply of LN2 or LHe to operate it;-). And
there's not enough time (and money) for doing in-depth
testing on-site. So the customer understands that there probably
still will be bugs in the code, both due to me getting it
wrong or the manuals for the equipment not being correct and/
or complete (which happens quite a lot, unfortunately). And
since bad code can, at least in some situations, damage the
equipment, I very much try to err on the side of having the
program trigger an assert() rather than having it do something
that may be very expensive to repair if I got it wrong...
Thus I put asserts in the code in all kinds of places I am
in principle convinced can never be reached, and I have
a framework in place that sends me an email with a backtrace
if an assert() is triggered anyway. That way the customer
and I both benefit: I get a rather reliable indication of
what went wrong, the customer's equipment doesn't get damaged,
s/he gets an error message that stands out, plus I can quickly
react without him/her having to explain what exactly they were
doing at the moment the assert() was triggered (which is often
very hard to figure out without a backtrace or lots of trial
and error to reproduce the exact same conditions). And I tend
to leave the asserts in even if after some time nothing
untoward has happened, since a) I never can be sure how much
of the code has really been used and thus tested and b) a
replacement device with new firmware may behave differently in
unpredictable ways from the original one.
Of course, that could be seen as a misuse of asserts, but
then they're simply convenient to use, and the time spent on
checking asserts is negligible compared to the time spent on
waiting for devices to react (and the code is also littered
with all kinds of tests for other things that could go wrong;
asserts are just an emergency brake if nothing else caught
the problem before, i.e. the cases that I assumed to be
impossible but wasn't 150% sure about).
Regards, Jens
--
\ Jens Thoms Toerring ___ j...@toerring.de
\__________________________ http://toerring.de
But since they didn't assert the last bug to be absent anyway, there is
no point in leaving the assertions in.
Sounds flash, but is of course nonsense. Nothing to do with the Peter
principle. It's to do with real life SW businesses - deliver and damn
the consequences rather than risk late delivery and contractual fines -
and if leaving debug in helps then fine - especially if there are on
site engineers who can utilise the debug info. Meanwhile keep beavering
away and hope to fix the bugs before the client finds them. Happens ALL
the time.
Then you can use an assert-like mechanism that isn't assert(). Using
assert() for testing conditions in production code is like using a
hammer for driving in screws. Although it works, there are better tools
for the job. In your situation, I would probably write a PRD_ASSERT
macro, with whatever semantics you need but which isn't affected by
NDEBUG. That would give you the best of both worlds.
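A rough sketch of such a macro (the exact semantics - logging, emailing,
whatever - would be up to you; this version just reports and aborts the
way assert() does, but ignores NDEBUG):

#include <stdio.h>
#include <stdlib.h>

#define PRD_ASSERT(cond)                                            \
    do {                                                            \
        if (!(cond)) {                                              \
            fprintf(stderr, "%s:%d: production check failed: %s\n", \
                    __FILE__, __LINE__, #cond);                     \
            abort();                                                \
        }                                                           \
    } while (0)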
<snip>
If the behavior of assert() is suitable for his purposes, what
advantage would a PRD_ASSERT macro have? (It's easy enough to refrain
from defining NDEBUG.)
<snip>
>>> There can be situations where it simply isn't possible to do
>>> rigorous testing before delivery.
>> Then you can use an assert-like mechanism that isn't assert(). Using
>> assert() for testing conditions in production code is like using a
>> hammer for driving in screws. Although it works, there are better
>> tools for the job. In your situation, I would probably write a
>> PRD_ASSERT macro, with whatever semantics you need but which isn't
>> affected by NDEBUG. That would give you the best of both worlds.
>>
>> <snip>
>
> If the behavior of assert() is suitable for his purposes, what
> advantage would a PRD_ASSERT macro have? (It's easy enough to refrain
> from defining NDEBUG.)
The macro would re-establish the distinction between development aids
and debugging aids. Personally, I find that distinction immensely
useful. The way I see it, if you want to check something in production
code, you can easily do that without using assertions. The assert()
macro is specifically designed to be capable of being switched off for
production code, and is unnecessarily aggressive if left /in/ production
code. If you don't trust the assertion never to fire in production even
if it were enabled, then I would argue that the proper course is to put
in code to handle the eventuality in a more graceful way.
Ok, that's a good point. For expensive checks (checking whether a
tree is balanced, or an array is sorted), it's good to be able to
disable them in production code, even if they're valuable during
development.
> The way I see it, if you want to check something in production
> code, you can easily do that without using assertions. The assert()
> macro is specifically designed to be capable of being switched off for
> production code, and is unnecessarily aggressive if left /in/
> production code. If you don't trust the assertion never to fire in
> production even if it were enabled, then I would argue that the proper
> course is to put in code to handle the eventuality in a more graceful
> way.
You're assuming that there's a more graceful way to handle the error
than aborting the program. Sometimes there is, sometimes there isn't.
Which is what I said at the beginning of this thread.
For a null pointer because of some crazy exec path, possibly there is:
ignore it. For a malloc fail, no way - Goodnight Irene.
<snip>
>> If you don't trust the assertion never to fire in
>> production even if it were enabled, then I would argue that the proper
>> course is to put in code to handle the eventuality in a more graceful
>> way.
>
> You're assuming that there's a more graceful way to handle the error
> than aborting the program. Sometimes there is, sometimes there isn't.
When there is, it makes sense to use it. I remember some perfectly
fiendish assertions firing in a VCS that I will refrain from naming (to
protect the guilty); the assertion messages were incomprehensible not
only to the layman but also to anyone without access to that VCS's
source, and the only way we had of reporting them was to get a window
dump of the dialog box and email it to the suppliers, in the (inevitably
vain) hope that they would do something about it. It would have been far
better to report the nature of the error in a way that made some kind of
sense to the user, and to provide a user-friendly mechanism for
reporting the details of the error to the supplier.
Actually it wasn't a military customer, I was active duty. I agree
with the policy mentioned and I wasn't overly torn by keeping assert
active in the final code, although I thought it was suboptimal. There
were few asserts that couldn't have been improved with better panic
information. At that point you're getting close to considering the
conditions as handled errors rather than inconsistent state.
My rant was more because we had an idiot who either wouldn't or
couldn't understand that you must check the returns and handle system
resource failures. The system being what it was, it was decided that
it was easier to simply include an assert to recognize an inevitable
failure rather than fix the known problem.
Your point has merit but my situation was a little different. In that
specific case the firing assert clearly indicated a failure of
management more than of code. The incident colors my view of asserts
to this day. Bottom line is that the assert fixed both the bad code
and the clueless management. As the saying went back then, "wrong
thing done right".
> The policy proved invaluable when Mars Pathfinder landed on
> Mars and *then* started doing uncontrollable and unscheduled
> reboots; the debugging code that was still onboard allowed
> ground control to diagnose and patch the problem. Sending a
> field engineer on a 240 million mile round trip would have
> been inconvenient.
But the per diem would be sweet!
I'm a big believer in that, too. The code should be tested with
assertions in place, and every assertion should have tests designed to
cause it to fire if possible. If and only if rigorous testing fails to
fire any assertions, they should be removed. And THEN you test AGAIN -
because, as you say, you should test what you ship and ship what you tested.
> The policy proved invaluable when Mars Pathfinder landed on
> Mars and *then* started doing uncontrollable and unscheduled
> reboots; the debugging code that was still onboard allowed
> ground control to diagnose and patch the problem. Sending a
> field engineer on a 240 million mile round trip would have
> been inconvenient.
It sounds like they had a good infrastructure design. It also sounds
like they didn't have assertions firing, since assertions don't reboot
the machine - they just abort the program.
Not sure I understand you: Are you saying an assert() that
never fired during testing can never fire in production?
Diligent developers test as thoroughly as they can, but it
is rare to be able to test *every* combination of circumstances
a program will confront. Or, as Roseanne Roseannadanna said,
"It's always something."
--
Eric Sosman
eso...@ieee-dot-org.invalid
No, I'm not saying that. After all, it would be trivial to disprove
(simply by deliberately designing an inadequate test pack). The above
comment specifically addressed your point - i.e. that leaving the
assertions in would not in any case have found The Last Bug. Nothing
/ever/ finds The Last Bug.
> Diligent developers test as thoroughly as they can, but it
> is rare to be able to test *every* combination of circumstances
> a program will confront. Or, as Roseanne Roseannadanna said,
> "It's always something."
Ideally, developers *should not be doing the testing*! There are people
who earn a very good living as specialised testers, and their first rule
is a simple one: "YOU do the debugging; WE will do the testing." This
isn't demarcation for the sake of it, but for the important reason that
a tester has a completely different approach to testing than a
developer. It's much the same as in comp.lang.c - although it sometimes
happens that a person requesting help with a problem finds his own
solution, it is far more normal for someone else to find it - a person
with a different set of eyes, and a more critical attitude. It is very
difficult to approach one's own code with a sufficiently critical eye.
I have, on occasion, had testers succeed in firing my assertions. They
were always heartily pleased with themselves for so doing (and rightly
so), and of course it's a bit deflating to see one's code broken in that
way, but it does result in more reliable software. After a competent
tester has thumped the code good and hard, the chance of an assertion
condition being met in production is vastly reduced. Can you eliminate
the possibility entirely? No, of course not. But you can reduce the risk
sufficiently that to worry about it would be disproportionate, compared
to all the other things that could go wrong. (Analogy: yes, you can
reduce your risk of a car accident by checking your tyre tread every
hundred yards or so, but to do so would be disproportionate - the payoff
is so epsilon it's practically zeta.) If your assertions are time-costly
(as many of mine are), the performance cost would be prohibitive.
I look at it this way - if I have the slightest concern that an
assertion could conceivably fire in production, I'll code around it (*as
well as* asserting it).
>> > But how about the -O option in gcc? Can that disable assert?
>>
>> No, the "-O" option doesn't affect the behavior of assert.  Why did
>> you think it would?
>
> OK, then if I compile the C program in release mode, the assert
> statement should not work; is that wrong?
> If it is wrong, then how do I disable the assert statement with gcc?
"release mode" is a concept popularised by some Windows IDEs. It isn't
defined by any standard, or recognised by compilers.
The aforementioned IDEs typically create two profiles whenever you create
a workspace. The "debug" profile enables debug info and disables
optimisations, while the "release" profile disables debug info (and
possibly assert()s) and enables some level of optimisation.
C compilers typically provide entirely separate options for debug info and
optimisations. E.g. with no switches, gcc doesn't provide debug info and
doesn't perform optimisation.
OTOH, autoconf-based configure scripts will normally set CFLAGS to
"-g -O2" (debug info generated and most optimisations enabled) if
the C compiler is gcc, unless overridden by the user.
The rationale is that -O2 is suitable for production binaries (whether
or not it's a good idea to use -O3 varies with the code, the platform and
the compiler version), and the added debug information might be useful (if
it isn't, and you actually need to reduce the size of binaries, you
can "strip" them later).
There is no performance issue with including debug info (it won't be
mapped into RAM for normal execution), and projects using autoconf are
mostly free software where you're not worried about the debug info
facilitating reverse engineering.
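So if you do want a release build with the asserts compiled out of an
autoconf-based package, the usual route is to override CFLAGS yourself
at configure time, e.g. (a typical invocation, not a universal recipe):

$ ./configure CFLAGS="-g -O2 -DNDEBUG"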
Assertions do whatever you want them to do. As I mentioned earlier in
the thread, my asserts generate a coredump and reboot the device.
This has been an extremely effective mechanism for debugging unforeseen
problems in the field.
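As a rough sketch of the idea (prepare_postmortem() is a stand-in for
whatever the platform actually provides - flush logs, arrange the core
dump, prime the watchdog):

#include <stdio.h>
#include <stdlib.h>

static void prepare_postmortem(const char *file, int line, const char *expr)
{
    fprintf(stderr, "%s:%d: assertion failed: %s\n", file, line, expr);
    /* ... flush logs, arrange core dump, prime watchdog ... */
}

/* Always-on assert: record state, then abort(); a supervisor or
   watchdog then restarts the device after the core dump. */
#define DEV_ASSERT(cond)                                   \
    do {                                                   \
        if (!(cond)) {                                     \
            prepare_postmortem(__FILE__, __LINE__, #cond); \
            abort();                                       \
        }                                                  \
    } while (0)

int main(void)
{
    int sensors_ok = 0;     /* pretend something went wrong */
    DEV_ASSERT(sensors_ok); /* logs, dumps core, dies */
    return 0;
}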
>You will, however, not be surprised to learn that there is a
>considerable body of opinion to the effect that assertions should be
>left *on* in production code.
I leave assertions on in the compiled versions of my code that I
distribute. On the other hand, I don't say anyone *should*
leave them on.
>The analogy that this view's supporters
>like to trot out is "you wouldn't take the lifeboats off a ship before
>sending it out to sea"...
I've never used such an analogy.
The assertions in my code are fairly cheap, and their presence makes
it more likely that I will find out about bugs. And for many of my
users undetected errors would be worse than an abort.
You might reasonably say that I should use something else rather than
assert, and perhaps some time I will go through the code changing it,
but it hasn't become a high priority.
-- Richard
--
Please remember to mention me / in tapes you leave behind.
>Asserts suck because of the termination. In 99% of test cases where that
>assert triggers you can also merely run the program in a debugger and
>get a backtrace there and then and "live debug".
I can't see what you're getting at here. In the cases where I use
assert, I want the program to terminate, to ensure that no-one uses
the (probably incorrect) results. If I then run the program under the
debugger, the abort resulting from assertion failure will be caught by
the debugger and I can debug from there.
>If you need to leave the assertions in your production code, it is not
>yet ready for release.
And yet many - probably almost all - released programs have errors
that are, or could be, caught by assertions. You may conclude that
most programs are released when they are not ready, but turning off
assertions is not going to fix that.
The only reason to turn off assertions is if their overhead outweighs
their advantage. If their overhead is small, why turn them off at all?
What do you gain?
>>> So the burning question is, why is debug code being delivered in the
>>> first place?
>> Because the deliverers are negligent, having failed to
>> remove The Last Bug before delivery?
>But since they didn't assert the last bug to be absent anyway, there is
>no point in leaving the assertions in.
This is ridiculous. It may mean that bugs are never noticed, and
incorrect results are used, or it may require much more effort to
debug, say, a segmentation fault.
>> You're assuming that there's a more graceful way to handle the error
>> than aborting the program. Sometimes there is, sometimes there isn't.
>For a null pointer because of some crazy exec path, possibly there is:
>ignore it.
A null pointer will generally provoke an error sooner or later. An
assertion is useful for making it happen sooner: several times I've
immediately been able to work out what the error is from an assertion
message, while a segmentation fault some time later would have
required substantial debugging.
And it doesn't *always* provoke an error.
For example, in at least one implementation (gcc with glibc), this
program:
#include <stdio.h>
int main(void)
{
char *nullptr = NULL;
printf("nullptr = %s\n", nullptr);
return 0;
}
prints
nullptr = (null)
Apparently the glibc implementation of printf's "%s" format
"helpfully" recognizes null pointers and prints "(null)" rather than
crashing. I've seen "(null)" appear in the output of production
programs because of this, an error that probably would have been
caught if the code had crashed.
Undefined behavior is undefined behavior; it doesn't mean "the program
will crash".
<snip>
>> It sounds like they had a good infrastructure design. It also sounds
>> like they didn't have assertions firing, since assertions don't reboot
>> the machine - they just abort the program.
>
> Assertions do whatever you want them to do. As I mentioned earlier in
> the thread, my asserts generate a coredump and reboot the device.
> This has been an extremely effective mechanism for debugging unforeseen
> problems in the field.
I'm talking specifically about C's assertions, i.e. assert(), which
prints a message and aborts the program. If you're talking about some
other kind of assertion, we are talking at cross-purposes.
If you leave assertions on and they never fire in production, you have a
performance cost for no gain. If you leave assertions on and they /do/
fire, you gain a bug report (possibly) at the expense of a (possibly)
annoyed and disaffected customer - assertion messages are unfailingly
user-hostile. If you turn assertions off and they would never have fired
anyway, you gain full performance at no cost. If you turn assertions off
and they /would/ have fired anyway, you gain performance but lose
correctness (i.e. a program assumption was incorrect, with heaven knows
what consequences).
So it is with assertions *off* that we have both the greatest potential
gain and the greatest potential loss. The question is how likely it is
that assertions will fire. Rigorous testing should make it
extraordinarily unlikely, and coding defensively around the least
unlikely problems should make the highly improbable extremely highly
improbable. If you make an event sufficiently improbable, there comes a
point where worrying about it is a waste of time. (Putting it another
way: if it's sufficiently worth worrying about to tempt you to leave
assertions on, you haven't yet made the event sufficiently improbable.)
So was the statement to which it was an answer.
> It may mean that bugs are never noticed,
The statement was about The Last Bug, presumably the *only* remaining
bug. So if you're talking now about bugs plural, you've changed the
subject. See my recent (a few moments before this) reply for a more
analytical explanation of my viewpoint.
<snip>
It is common practice, at least in my experience, to redefine the
assert macro. That is what I meant; I should have been more clear.
If it is commonplace to do this, I have been most fortunate indeed not
to encounter it before. (I've often seen people define their own
assert-/like/ macros, but with a different name.)
> > So, I agree with you about assert:
>
> > * assert can be used in debug or release mode; you decide by your
> > case (such as controlling asserts with the ASSERT macro above, or
> > disabling assert for performance reasons)
>
> If you need to leave the assertions in your production code, it is not
> yet ready for release.
do you never make mistakes? You may think your testing is perfect but
the real world can surprise you. The sooner the program detects it is
at fault and closes down the easier it is to find the problem.
> > * assert is often used to debug what is wrong inside the program, such
> > as logic errors and implementation errors, while user errors and
> > other environment errors should be handled by error codes or exceptions.
>
> One might think of it this way: assertions are intended to reveal bugs
> in the program, not bugs in the data. A wrong program needs the
> programmer's attention, but wrong data is, at least in theory,
> susceptible to correction by the user, so one should make it
> (relatively) easy for the user to correct the data, for example by
> offering another opportunity to provide it. One does not do this by
> bombing out of the program.
>
> > * -DNODEBUG is used to disable assert with gcc, or define the NODEBUG macro
>
> In another of those mildly infuriating abbrvs, it's NDEBUG, not NODEBUG.
>
> > before #include<assert.h>
>
> Or, slightly more traditionally and readably:
>
> #include <assert.h>
>
> > Anything else to add?
>
> Yes: don't believe everything you read.
> >>>> So the burning question is, why is debug code being delivered in the
> >>>> first place?
I don't have special magic "debug" code. Well, OK, there are versions
with more checking enabled; but the production version still has some
checks left in it. I'm not perfect.
> >>> Because the deliverers are negligent, having failed to
> >>> remove The Last Bug before delivery?
>
> >> But since they didn't assert the last bug to be absent anyway, there is
> >> no point in leaving the assertions in.
>
> >      Not sure I understand you: Are you saying an assert() that
> > never fired during testing can never fire in production?
>
> No, I'm not saying that. After all, it would be trivial to disprove
> (simply by deliberately designing an inadequate test pack). The above
> comment specifically addressed your point - i.e. that leaving the
> assertions in would not in any case have found The Last Bug. Nothing
> /ever/ finds The Last Bug.
>
> >      Diligent developers test as thoroughly as they can, but it
> > is rare to be able to test *every* combination of circumstances
> > a program will confront.  Or, as Roseanne Roseannadanna said,
> > "It's always something."
>
> Ideally, developers *should not be doing the testing*!
not everyone has this two-tier system available
> There are people
> who earn a very good living as specialised testers, and their first rule
> is a simple one: "YOU do the debugging; WE will do the testing."
done that. I'd recommend every software developer spends a stint in
test. It changes your point of view! I kept a list of Reasons Why This
Isn't A Bug. "but the user would never do that!" and "but the code
says that's supposed to happen!" being my favourites. Oh, and "I didn't
see that light flash".
> This
> isn't demarcation for the sake of it, but for the important reason that
> a tester has a completely different approach to testing than a
> developer.
the developer wants to see it work; the tester wants to Break It!
> It's much the same as in comp.lang.c - although it sometimes
> happens that a person requesting help with a problem finds his own
> solution, it is far more normal for someone else to find it - a person
> with a different set of eyes, and a more critical attitude. It is very
> difficult to approach one's own code with a sufficiently critical eye.
>
> I have, on occasion, had testers succeed in firing my assertions. They
> were always heartily pleased with themselves for so doing (and rightly
> so),
I would be!
> and of course it's a bit deflating to see one's code broken in that
> way,
oh definitely. An assert that fires in system test is deeply
embarrassing. Though substantially better than firing on a customer's
site!
> but it does result in more reliable software.
yes
> After a competent
> tester has thumped the code good and hard, the chance of an assertion
> condition being met in production is vastly reduced. Can you eliminate
> the possibility entirely? No, of course not. But you can reduce the risk
> sufficiently that to worry about it would be disproportionate, compared
> to all the other things that could go wrong. (Analogy: yes, you can
> reduce your risk of a car accident by checking your tyre tread every
> hundred yards or so, but to do so would be disproportionate - the payoff
> is so epsilon it's practically zeta.) If your assertions are time-costly
> (as many of mine are),
mine vary
> the performance cost would be prohibitive.
>
> I look at it this way - if I have the slightest concern that an
> assertion could conceivably fire in production, I'll code around it (*as
> well as* asserting it).
yes
>      The policy proved invaluable when Mars Pathfinder landed on
> Mars and *then* started doing uncontrollable and unscheduled
> reboots; the debugging code that was still onboard allowed
> ground control to diagnose and patch the problem.  Sending a
> field engineer on a 240 million mile round trip would have
> been inconvenient.
I got some installation procedures upgraded
N: these installation procedures are a bit difficult for a naive user
(I was blunter)
X: who cares? the system is always installed by a developer
N: you're going to <central continent> in <bad season> are you?
X: ah...
Yes, but leaving assertions in production code is not normally one of
them! :-)
> You may think your testing is perfect but the real world can surprise
> you.
I agree entirely with the second part of that sentence, but not of
course with the first.
> The sooner the program detects it is
> at fault and closes down the easier it is to find the problem.
Sure - but it can do so in a graceful way, that guides the user through
the reasoning for closing down the program, and explains in
user-friendly detail what to do about it. An abort is not graceful.
<snip>
> Richard Heathfield <r...@see.sig.invalid> writes:
>> Jens Thoms Toerring wrote:
>>> Dann Corbit <dco...@connx.com> wrote:
>>>> When code is compiled in release mode, an assert() call does
>>>> nothing at all. If an assert() fires, then code was delivered to a
>>>> customer without having NDEBUG defined at compile time.
Certainly the idea of leaving the asserts in is something that I would want
as a customer, but I'm not paying anyone to program. I do buy books about
C, and Plauger's work has asserts all over it.
>>>
>>>> So the burning question is, why is debug code being delivered in
>>>> the first place?
>>>
>>> There can be situations where it simply isn't possible to do
>>> rigorous testing before delivery.
>>
>> Then you can use an assert-like mechanism that isn't assert(). Using
>> assert() for testing conditions in production code is like using a
>> hammer for driving in screws. Although it works, there are better
>> tools for the job. In your situation, I would probably write a
>> PRD_ASSERT macro, with whatever semantics you need but which isn't
>> affected by NDEBUG. That would give you the best of both worlds.
Is your PRD_ASSERT home-grown? Searching the compleat manual gives:
Sorry, nothing was found in the compleat manual for search PRD_ASSERT
> If the behavior of assert() is suitable for his purposes, what
> advantage would a PRD_ASSERT macro have? (It's easy enough to refrain
> from defining NDEBUG.)
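(The usual way to flip that for a release build is on the command line
rather than in the source - e.g., for some hypothetical prog.c:

    gcc -DNDEBUG -O2 prog.c    (asserts compiled out)
    gcc -g prog.c              (asserts active)

-DNDEBUG takes effect before any line of the source is seen, so it
doesn't matter where <assert.h> is included.)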
Is it as easy as just undefining it before including assert.h?
/* Plauger-style test of <assert.h>: asserts are compiled out in
   dummy() (NDEBUG defined), then re-enabled for main(). */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

#define NDEBUG            /* disable assert() for dummy() below */
#include <assert.h>

static int val = 0;

static void field_abort(int sig)
{ /* the expected SIGABRT arrives here */
    if (val == 1)
        { puts("SUCCESS testing <assert.h>"); exit(EXIT_SUCCESS); }
    puts("FAILURE testing <assert.h>");
    exit(EXIT_FAILURE);
}

static void dummy(void)
{ /* test dummy assert macro: NDEBUG is in effect, both vanish */
    int i = 0;
    assert(i == 0);
    assert(i == 1);   /* would fire if asserts were enabled */
}

#undef NDEBUG             /* re-enable assert() from here on */
#include <assert.h>

int main(void)
{ /* test both dummy and working forms */
    assert(signal(SIGABRT, &field_abort) != SIG_ERR);
    dummy();
    assert(val == 0); /* should not abort */
    ++val;
    fputs("Sample assertion failure message --\n", stderr);
    assert(val == 0); /* should abort */
    puts("FAILURE testing <assert.h>");
    return EXIT_FAILURE;
}
--
frank
>> The sooner the program detects it is
>> at fault and closes down the easier it is to find the problem.
>
> Sure - but it can do so in a graceful way, that guides the user through
> the reasoning for closing down the program, and explains in
> user-friendly detail what to do about it. An abort is not graceful.
What about Jens's abort that sends an e-mail with the data that led to the
failure? I'm curious how he does this in C.
--
frank
I do it. It's completely undefined behaviour: I use signal handling, so
I catch segmentation faults as well. But although undefined (the
undefined bit is actually doing anything in a signal handler), it's
doable with (almost) completely standard C: trap the signal, collect
together the information and call an external email program. The
"almost" is because in practice I open a pipe to the program rather than
write the information to a file. Despite the fact you are doing all
sorts of things (like opening external programs) even after a
segmentation violation, it works remarkably well.
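In outline it's something like this - a sketch only: popen() and the
"mail" command are host services rather than standard C, the address is
invented, and (as I said) doing any of this from a handler is formally
undefined:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void crash_report(int sig)
{ /* formally UB in a handler, but works on many hosted systems */
    FILE *mailer = popen("mail -s 'crash report' dev@example.invalid", "w");
    if (mailer != NULL) {
        fprintf(mailer, "caught signal %d\n", sig);
        /* ...pipe in whatever state has been collected... */
        pclose(mailer);
    }
    _Exit(EXIT_FAILURE);
}

int main(void)
{
    signal(SIGSEGV, crash_report);
    signal(SIGABRT, crash_report);
    /* ...the program proper... */
    return 0;
}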
--
Online waterways route planner | http://canalplan.eu
Plan trips, see photos, check facilities | http://canalplan.org.uk
I do something similar, except I open a socket and talk SMTP to an email
server to send the email. This being on a long-running daemon it is
extremely useful to my customers that they get an email if the
unexpected happens and the daemon crashes.
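The skeleton of it, heavily simplified - POSIX sockets rather than
standard C, no checking of the server's replies, and every host and
address below is invented:

#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static void smtp_line(int fd, const char *line)
{ /* send one command, naively swallow one reply */
    char reply[512];
    write(fd, line, strlen(line));
    write(fd, "\r\n", 2);
    read(fd, reply, sizeof reply);
}

static int send_crash_mail(const char *host, const char *body)
{
    struct addrinfo hints, *res;
    char banner[512];
    int fd;

    memset(&hints, 0, sizeof hints);
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "25", &hints, &res) != 0)
        return -1;
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        if (fd >= 0)
            close(fd);
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);
    read(fd, banner, sizeof banner);            /* server greeting */
    smtp_line(fd, "HELO daemon.example.invalid");
    smtp_line(fd, "MAIL FROM:<daemon@example.invalid>");
    smtp_line(fd, "RCPT TO:<support@example.invalid>");
    smtp_line(fd, "DATA");
    write(fd, body, strlen(body));
    smtp_line(fd, ".");                         /* end of message */
    smtp_line(fd, "QUIT");
    close(fd);
    return 0;
}

int main(void)
{ /* in the real daemon this would run from the crash handler */
    return send_crash_mail("smtp.example.invalid",
                           "Subject: daemon crashed\r\n\r\n"
                           "the unexpected happened\r\n");
}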
--
Flash Gordon
>If you leave assertions on and they never fire in production, you have a
>performance cost for no gain.
In the case of my asserts, this is negligible, probably unmeasurable.
>If you leave assertions on and they /do/
>fire, you gain a bug report (possibly) at the expense of a (possibly)
>annoyed and disaffected customer - assertion messages are unfailingly
>user-hostile.
No. I suppose that might be your situation, but it seems a bit
odd. Would your customers prefer the program to continue when
something is wrong? I can imagine situations where that is true - a
computer game for example - but is that what you are doing?
By the way, like many programmers, I don't have customers, I
have users.
>If you turn assertions off and they would never have fired
>anyway, you gain full performance at no cost.
As above; there is no significant gain in my case.
>If you turn assertions off
>and they /would/ have fired anyway, you gain performance but lose
>correctness (i.e. a program assumption was incorrect, with heaven knows
>what consequences).
Given my users, the consequences of undetected errors would probably
be worse than even moderate costs; they might publish scientific
papers with incorrect conclusions, for example.
>So it is with assertions *off* that we have both the greatest potential
>gain
Not in my case.
>and the greatest potential loss.
Yes.
>The question is how likely it is
>that assertions will fire. Rigorous testing should make it
>extraordinarily unlikely,
That may be practical in your situation, but not mine. I have
no testers except me and my users, and no-one is paying me even to
produce the code; I just do it as part of my research.
>and coding defensively around the least
>unlikely problems should make the highly improbable extremely highly
>improbable.
Using assertions *is* coding defensively. It makes it more likely that
my errors will have a benign consequence, viz. an abort.
No. I would imagine that *any* user would prefer the program to stop
gracefully when something is wrong. What I dislike about assertions in
production code, as a programmer, is that they give no opportunity to
clean up before halting. And, as a user, I dislike their utterly
low-level error messages, which are incomprehensible without the source
code.
<snip>
> Given my users, the consequences of undetected errors would probably
> be worse than even moderate costs; they might publish scientific
> papers with incorrect conclusions, for example.
Yes, but I'm not saying don't stop the program. I'm saying don't stop
the program LIKE THAT. It's different.
<snip>
>> and coding defensively around the least
>> unlikely problems should make the highly improbable extremely highly
>> improbable.
>
> Using assertions *is* coding defensively.
I agree. That's why I use assertions myself. But leaving them in
production code isn't defensive; it's offensive.
> It makes it more likely that
> my errors will have a benign consequence, viz. an abort.
Your circumstances are unusual in that your users are very
technically-minded people who, I would guess, can get you on the phone
very easily(?). They don't *have* to puzzle out what an assertion
failure message means, and they don't have to worry about whether their
database integrity, say, is now screwed because the program aborted
instead of exiting.
Which is better: a customer/user ringing you up at three in the morning
to tell you that your program has just done serious damage to some data,
or being told that your program has just aborted with an error message?
Of course we would prefer neither but the second at least gives you a
clue about the problem and hopefully has not done any damage.
So as long as the asserts do not degrade the performance to a level that
hurts the customer, leaving them switched on seems a good option.
<snip>
> Which is better: a customer/user ringing you up at three in the morning
> to tell you that your program has just done serious damage to some data,
> or being told that your program has just aborted with an error message?
Which is better - a user ringing you up at three in the morning to tell
you that your program has just arbitrarily aborted, doing heaven knows
what damage to the data, or to tell you that your program has just
gracefully exited with an error message explaining what has occurred in
terms meaningful to the user, with helpful hints on what to do next?
> Of course we would prefer neither but the second at least gives you a
> clue about the problem and hopefully has not done any damage.
The clue about the problem can be communicated without aborting.
> So as long as the asserts do not degrade the performance to a level that
> hurts the customer, leaving them switched on seems a good option.
Clearly there is plenty of scope for disagreement! :-)
Yes, but sometimes you can know that the abort is fine; you just need to
stop the program if it ever gets here.
>
>> Of course we would prefer neither but the second at least gives you a
>> clue about the problem and hopefully has not done any damage.
>
> The clue about the problem can be communicated without aborting.
Actually I can envisage situations where doing anything other than
stopping at once would be dangerous. But I can also envisage cases where
an abort is dangerous.
>
>> So as long as the asserts do not degrade the performance to a level
>> that hurts the customer, leaving them switched on seems a good option.
>
> Clearly there is plenty of scope for disagreement! :-)
>
Yes, but there is also scope for picking the right solution to the
problem :)
Depends on the circumstances. Sometimes we wish the program to continue
as gracefully as possible when something is wrong...
-Beej
If you're going to do this, why would you not simply put in some real
error handling for all these cases, instead of using assert()?
-Beej
Bingo ...
--
"Avoid hyperbole at all costs, its the most destructive argument on
the planet" - Mark McIntyre in comp.lang.c
Precisely. The assert() macro is a development-time aid. We already have
a whole bundle of runtime aids to error management. They're collectively
called "the C language". Surely we can spare the developer one little
macro purely for his own use?
>Your circumstances are unusual in that your users are very
>technically-minded people who, I would guess, can get you on the phone
>very easily(?).
We have had e-mail for several years now...
If all else fails, try semaphore. :-)
Interprocess communication is off-topic.
:-)
Except when done by simply writing to a file which some other process
later reads... so write the error report to a file and tell the user to
send the file to the developer :-)
Not an entirely daft suggestion: sometimes it can be a very good way of
providing a user-friendly error message and a detailed report for the
developer. The problem with nice-seeming error reporting mechanisms is
that you also have to trap errors occurring in your error trapping and
handle that nicely... so in some of our code we have a handler on
SIGSEGV which keeps track of whether it has been called recursively and
each time it is nested tries to do less.
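The shape of it is roughly this (POSIX sigaction assumed, and the two
reporting functions are stand-ins for the real work):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static volatile sig_atomic_t depth = 0;

/* Stand-ins; in real code each of these might itself fault. */
static void write_full_report(void)    { fputs("full report...\n", stderr); }
static void write_minimal_report(void) { fputs("minimal report\n", stderr); }

static void segv_handler(int sig)
{ /* each nested entry tries to do less */
    (void)sig;
    switch (++depth) {
    case 1:  write_full_report();    break;
    case 2:  write_minimal_report(); break;
    default: break;                  /* give up quietly */
    }
    _Exit(EXIT_FAILURE);
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = segv_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_NODEFER;   /* permit re-entry if reporting faults */
    sigaction(SIGSEGV, &sa, NULL);
    /* ...the program proper... */
    return 0;
}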
--
Flash Gordon
> j...@toerring.de (Jens Thoms Toerring) writes:
> >There can be situations where it simply isn't possible to do
> >rigorous testing before delivery. I am often writing code for
> >controlling lab equipment that's far too expensive to have
> >around
>
> A report about software manufacturing for
> controlling expensive equipment:
>
> http://www.fastcompany.com/node/28121/print
Interesting. Here's what I think is the most (though certainly not the
only) important thing in that process: "accountability is a team
concept: no one person is ever solely responsible for writing or
inspecting code."
Richard
>> If you're going to do this, why would you not simply put in some real
>> error handling for all these cases, instead of using assert()?
>
> Bingo ...
Some errors are unrecoverable.
And an abort is better than an incorrect result.
I agree, but that doesn't necessarily mean calling C99 assert(). One
could call some more reasonable error reporting code, instead,
especially if the error is one that could occur in a production environment.
-Beej
Oh, true. I use an ASSERT() macro which prints the failed condition,
line number, source file, etc., to stdout and stderr.
Then it dumps core if supported by the platform.
*Then* it aborts.
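Roughly this shape - a sketch; on most systems the abort() itself is
what produces the core dump, where dumps are enabled:

#include <stdio.h>
#include <stdlib.h>

#define ASSERT(cond)                                                \
    do {                                                            \
        if (!(cond)) {                                              \
            printf("ASSERT(%s) failed, %s line %d\n",               \
                   #cond, __FILE__, __LINE__);                      \
            fflush(stdout);                                         \
            fprintf(stderr, "ASSERT(%s) failed, %s line %d\n",      \
                    #cond, __FILE__, __LINE__);                     \
            abort();   /* SIGABRT: core dump where supported */     \
        }                                                           \
    } while (0)

int main(void)
{
    int n = 42;
    ASSERT(n == 42);   /* passes silently */
    ASSERT(n == 0);    /* reports to both streams, then aborts */
    return 0;
}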
This point has been made over and over again - by you, by me, by Uncle
Tom Cobbley, and by all. It doesn't matter, though - the
assert()-in-production lobby thinks not asserting means not terminating,
and aren't about to let the facts confuse the matter.
Programs that send error messages to stdout are a pain. If you're
running a pipeline of programs, there's no guarantee that the
downstream program will get terminated before it tries to process the
error message as data, resulting in a cascade of errors from which it
is tedious to extract the real problem.
Of course, programs which send errors to stdout and *not* stderr
are even worse.
> Beej Jorgensen wrote:
>> On 01/04/2010 10:23 PM, Gareth Owen wrote:
>>> Some errors are unrecoverable. And an abort is better than an
>>> incorrect result.
>>
>> I agree, but that doesn't necessarily mean calling C99 assert(). One
>> could call some more reasonable error reporting code, instead,
>> especially if the error is one that could occur in a production environment.
>
> This point has been made over and over again - by you, by me, by Uncle
> Tom Cobbley, and by all. It doesn't matter, though - the
> assert()-in-production lobby thinks not asserting means not
> terminating, and aren't about to let the facts confuse the matter.
One of the (small) benefits is that everybody knows "assert" is "check
condition and die if not met", and is practically costless (even when
not turned off). When they come across My_Assert() or Gerferken() or
whatever they are going to have to look it up. Not a big issue, but a
real one nevertheless.
> Programs that send error messages to stdout are a pain. If you're
> running a pipeline of programs
Can you do that with my program? Wait ... if you don't know *anything*
about the code I'm talking about, how can you possibly know? And if you
don't know what my programs are (extremely specialist dynamically loaded
libraries, in this case), or how the logging is structured, how can you
make that kind of assertion?
Do you just assume you know better what is appropriate for my program?
Why not email the Linux Kernel Mailing List and tell them that logging
errors using printk() is inappropriate because you can't pipe the kernel
output?
Do I tell you how to write XML parsers? Sheesh. Why not give people
the benefit of the doubt that we might, theoretically, know what we're
doing. In programming, immutable rules are for people who cannot be
trusted to make smart decisions.
It isn't so much what you don't know that kills you, as what you know
that ain't so. Firstly, everybody may know that "assert" is "check
condition and die if not met", but in fact it means "if and only if
NDEBUG was not defined at this point in the program, check condition and
abort the program without any cleanup if the condition is not met",
which is rather different, and can be far from costless if the thing
it's asserting takes a long time to do. For example, consider asserting
that a red-black tree obeys all the rules, after every insertion and
every deletion.
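Concretely, such a check might look like this sketch (the binary-search
ordering rule is omitted for brevity); it is O(n) per call, fine as an
occasional test-time assertion but ruinous after every single insert
and delete:

#include <assert.h>
#include <stddef.h>

enum colour { RED, BLACK };
struct rb_node { enum colour colour; struct rb_node *left, *right; };

/* Black-height of the subtree, or -1 on any violation: a red node
   with a red child, or unequal black-heights in the two subtrees. */
static int rb_black_height(const struct rb_node *n)
{
    int lh, rh;
    if (n == NULL)
        return 1;                  /* NIL leaves count as black */
    if (n->colour == RED &&
        ((n->left != NULL && n->left->colour == RED) ||
         (n->right != NULL && n->right->colour == RED)))
        return -1;
    lh = rb_black_height(n->left);
    rh = rb_black_height(n->right);
    if (lh < 0 || rh < 0 || lh != rh)
        return -1;
    return lh + (n->colour == BLACK ? 1 : 0);
}

int main(void)
{ /* a deliberately broken tree: one side has greater black-height */
    struct rb_node leaf = { BLACK, NULL, NULL };
    struct rb_node root = { BLACK, &leaf, NULL };
    assert(rb_black_height(&root) == -1);
    /* in the tree code itself: assert(rb_black_height(top) > 0); */
    return 0;
}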
> When they come across My_Assert() or Gerferken() or
> whatever they are going to have to look it up. Not a big issue, but a
> real one nevertheless.
Yes, they will have to consult the documentation when they come across
in-house code that they haven't seen before. People write in-house code
when the standard library doesn't meet their needs, for whatever reason.
If assert() does meet your needs, fine, use it! It meets mine, too. But
if you need an assertion routine that is intended for use in production
code, then assert() almost certainly does not meet your needs, so you'll
want to write your own that does and, yes, people will have to look it
up in your docs the first time they meet it, but that's no bigger a deal
for My_Assert() than it is for My_Strcpy().
>> Programs that send error messages to stdout are a pain. If you're
>> running a pipeline of programs
>Can you do that with my program?
I've no idea. I assumed you were telling comp.lang.c what you do
because other people might find it useful. I'm telling them why it
might be a bad idea, which it is for most programs.
I must say the rest of your post is rather an overreaction!
> Richard Tobin wrote:
>> In article <49edne55AIID9aXW...@bt.com>,
>> Richard Heathfield <r...@see.sig.invalid> wrote:
>>
>>> If you need to leave the assertions in your production code, it is
>>> not yet ready for release.
>>
>> And yet many - probably almost all - released programs have errors
>> that are, or could be, caught by assertions. You may conclude that
>> most programs are released when they are not ready, but turning off
>> assertions is not going to fix that.
>>
>> The only reason to turn off assertions is if their overhead outweighs
>> their advantage. If their overhead is small, why turn them off at all?
>> What do you gain?
>
> If you leave assertions on and they never fire in production, you have
> a performance cost for no gain. [snip]
Not _no_ gain. If/when you get a bug/crash report from
a program out in the field, if assertions were left on
you know none of them were violated as part of the bug/crash.
Knowing that can be a huge help in reproducing or diagnosing
the problem. Personally I think that's an enormous gain,
but in any case it is some gain.
>> If you leave assertions on and they never fire in production, you have
>> a performance cost for no gain. [snip]
>
> Not _no_ gain. If/when you get a bug/crash report from
> a program out in the field, if assertions were left on
> you know none of them were violated as part of the bug/crash.
> Knowing that can be a huge help in reproducing or diagnosing
> the problem. Personally I think that's an enormous gain,
> but in any case it is some gain.
Although you haven't convinced me sufficiently to change my habits, the
point you make is a very good one.
> Tim Rentsch wrote:
>> Richard Heathfield <r...@see.sig.invalid> writes:
>>
> <snip>
>
>>> If you leave assertions on and they never fire in production, you have
>>> a performance cost for no gain. [snip]
>>
>> Not _no_ gain. If/when you get a bug/crash report from
>> a program out in the field, if assertions were left on
>> you know none of them were violated as part of the bug/crash.
>> Knowing that can be a huge help in reproducing or diagnosing
>> the problem. Personally I think that's an enormous gain,
>> but in any case it is some gain.
>
> Although you haven't convinced me sufficiently to change my habits,
> the point you make is a very good one.
I think we've all come to a common position that there are at least two
sorts of assertion - those that cost a lot and should be turned off in
production, and those that are cheap and which probably have some value
in production.
Richard's just about persuaded me that even if I don't have any of the
former in my code, I'm probably better off creating my own ALWAYS_ASSERT(x)
macro, even if it just defines to the standard assert, although I'm not
about to put changing it at the top of my planned changes list.
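i.e., for now nothing more than:

#include <assert.h>

/* just a marker at the call site for now; the definition can later be
   swapped for one that survives NDEBUG */
#define ALWAYS_ASSERT(x) assert(x)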
<snip>
> I think we've all come to a common position that there are at least two
> sorts of assertion - those that cost a lot and should be turned off in
> production, and those that are cheap and which probably have some value
> in production.
>
> Richard's just about persuaded me that even if I don't have any of the
> former in my code, I'm probably better off creating my own ALWAYS_ASSERT(x)
> macro, even if it just defines to the standard assert, although I'm not
> about to put changing it at the top of my planned changes list.
I'm not sure that it's necessary. The normal assert() macro will do for
expensive, dev-time-only assertions. As for the cheaper kind that you
can leave in, just add error-handling, and then you don't need the
assertion in the first place. :-)
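That is, something like this hypothetical (the function and its message
are made up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* duplicate a string, handling failure rather than asserting on it */
static char *dup_or_report(const char *s)
{
    char *p = malloc(strlen(s) + 1);
    /* a dev build might have said: assert(p != NULL); */
    if (p == NULL) {              /* production: handle it gracefully */
        fputs("out of memory; please close something and retry\n", stderr);
        return NULL;              /* the caller can recover or exit nicely */
    }
    strcpy(p, s);
    return p;
}

int main(void)
{
    char *copy = dup_or_report("hello");
    if (copy == NULL)
        return EXIT_FAILURE;
    puts(copy);
    free(copy);
    return 0;
}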