I was working on a trivial problem involving static variables and
pointers, intended to scan/read an array, when I took a hint from some
other code. So, an excerpt of the final version of my code is:
int func1(int *a = 0)
{
    static int *p;

    if(a) //<--*THIS LINE*
    {
        p = a;
        return(*p);
    }
    else
        if(*p != -1)
        {
            p = p + 1;
            return(*p);
        }
        else
            return(0);
}
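(For context, the calling code isn't shown here; presumably the function is
driven along these lines, with the array terminated by -1. This is a guess,
not code from the original post:)

    int main()
    {
        int data[] = { 3, 7, 42, -1 };      // -1 marks the end of the array
        int first = func1(data);            // a != 0: p is set to data, returns 3
        int next;
        while ((next = func1()) != 0)       // no argument: a defaults to 0,
        {                                   // so the static pointer advances
            // next takes the values 7, 42, then -1 itself;
            // the loop stops once func1() returns 0
        }
        return 0;
    }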
The function "func1" takes a pointer (named "a") to an int and the
default value of that pointer is zero. What interests me is the
meaning of the line "if(a)".
What I know is that if an argument is passed to the function "func1",
then the "if(a)" branch is taken and all the lines inside its scope are
run. BUT, I do not know what kind of result is produced by that line
(the "if(a)"). Is it a logical one (TRUE, FALSE)? Does a pointer have
a logical value, or something else, in a situation like that?
It seems strange to me that a pointer without the dereference operator
can be used in an "if" statement. :-/
I appreciate any explanation and further help.
Thank You!
Marcelo de Brito
The pointer is used without dereferencing to see whether the pointer is
a dangling pointer, or rather a pointer that is null. Basically it
checks the validity of the pointer.
> The function "func1" takes a pointer (named "a") to an int and the
> default value of that pointer is zero. What interests me is the
> meaning of the line "if(a)".
'if (a)' is a short cut for 'if (a != 0)'. This is the same for pointers
as for any other value a.
Btw., for clarification I would recommend writing
int func1(int *a = NULL)
and consequently
return NULL;
at the bottom.
Marcel
For historical reasons, a pointer implicitly converts to bool,
with a null pointer converting to false, and all other pointer
values converting to true. In general, it's better to avoid the
implicit conversion, and write what you mean:
if ( a != NULL ) ...
There's also some discussion over whether it's better to use 0
or NULL as a null pointer constant. I prefer NULL, but my
opinion here is far from universal, and there are serious
problems with both---the next version of the standard will
introduce a nullptr which should solve the problem.
Also, most programmers would consider "else if" a single
construct, and not indent additionally for the second if. (What
happens when you have a string of else/if?)
--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Will it solve the problem of false decaying to a null pointer?
Like in:
std::string str(false);
It throws a std::logic_error but compiles nonetheless.
--
Michael
Thank you very much for the quick and clarifying replies! :)
I have not been aware of that pointer implicit setting to a logical
value.
If someone else has something to add, feel free to do it!
Just a simple question (and a curiosity too): Where did you gather
that information of pointer setting implicitly to a logical value?
Could you, please, point the reference?
The validity of a pointer cannot be determined by examining its value,
since invalid values themselves cause UB, even to just examine. A null
pointer is a valid pointer value, albeit not one that points to any
object.
From the C++ standard:
4.12/1:
"(...) A zero value, null pointer value, or null member pointer value
is converted to false; any other value is converted to true."
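A tiny illustration of that rule (my own example, not part of the quote):

    struct S { int m; };

    int      i  = 0;
    int*     p  = 0;
    int S::* pm = 0;        // null member pointer

    bool b1 = i;            // false: zero value
    bool b2 = p;            // false: null pointer value
    bool b3 = pm;           // false: null member pointer value
    bool b4 = &i;           // true:  any other value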
Probably not. Anything that would solve that would also break
existing code:-(. (That could have been solved when bool was
introduced, by adding words to the definition of a null pointer
constant to the effect that it could not have type bool. But I
suspect that no one happened to think of it.)
> It throws a std::logic_error but compiles nonetheless.
It's undefined behavior; anything can happen. In any reasonably
good implementation, it will core dump immediately.
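For the curious, a minimal sketch of why that line compiles at all (my
reading of the overload resolution, not something stated in the posts
above): in C++03, 'false' is an integral constant expression evaluating
to zero, i.e. a null pointer constant, so it converts to const char* and
selects the basic_string(const char*) constructor with a null pointer.

    #include <string>

    int main()
    {
        // 'false' is a null pointer constant in C++03, so it converts to
        // 'const char*' and picks std::string(const char*); passing a null
        // pointer to that constructor is undefined behaviour.
        std::string s(false);
        return 0;
    }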
> > Just a simple question (and a curiosity too): Where did you
> > gather that information of pointer setting implicitly to a
> > logical value? Could you, please, point the reference?
First, it may be just a language problem (supposing Nosophorus
is not a native English speaker), but there is no "setting"
involved---it's an implicit conversion.
> From the C++ standard:
> 4.12/1:
> "(...) A zero value, null pointer value, or null member pointer value
> is converted to false; any other value is converted to true."
And before that, from the C standard, and before that from
Kernighan and Ritchie. It's been that way from the earliest days
of C (which inherited it from B, no doubt, where it actually
makes sense).
> For historical reasons, a pointer implicitly converts to bool,
> with a null pointer converting to false, and all other pointer
> values converting to true. In general, it's better to avoid the
> implicit conversion, and write what you mean:
> if ( a != NULL ) ...
Why do you think it's better? I don't think so, but maybe you can convince
me.
Don't know if this is James' reason, but relying on the implicit conversion here
one may acquire the habit of generally relying on implicit conversion to bool,
and e.g. the Visual C++ compiler produces silly-warnings (it actually thinks it
affects /performance/ :-) ) in some other situations, e.g. for a 'return'. And
without a clean compile, no warnings, one doesn't know whether one has ignored
some serious warning, such as e.g. comparison between signed/unsigned.
However, since I really dislike uppercase in source code I'd write
if( a != 0 )
Cheers,
- Alf
For the same reason implicit conversions in general are to be
avoided. A pointer isn't a bool, so it doesn't make sense to
use it as one; doing so only leads to confusion. Say what you
mean, and mean what you say.
(BTW: you're the co-author of the original proposal for bool.
And there, unless I'm remembering wrong, you proposed
deprecating the above conversion. Have you changed your mind?
And if so, what made you change it?)
And the obvious answer to that is that the compiler should not dictate
coding style. Who knows more about your application domain: the
compiler writer or you?
Turn off stupid warnings.
--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference
(www.petebecker.com/tr1book)
Sometimes, for some other warnings, that's the only practical recourse.
However, in this particular case just writing explicit code as one ideally
should anyway, makes that unnecessary, and not only for the original programmer
but for all using the code.
The code-should-be-explicit argument isn't, from my point of view, that
compelling on its own for the conversion of pointer to bool, which after all for
some is idiomatic style. But coupled with the (e.g. MSVC) silly-warnings, which
also on its own is a rather weak argument, I think it sums up to a good
argument. For AFAIK there's really no good reason to omit the comparison.
Cheers,
- Alf
> * Pete Becker:
>>
>> Turn off stupid warnings.
>
> Sometimes, for some other warnings, that's the only practical recourse.
>
> However, in this particular case just writing explicit code as one
> ideally should anyway, makes that unnecessary, and not only for the
> original programmer but for all using the code.
>
> The code-should-be-explicit argument isn't, from my point of view, that
> compelling on its own for the conversion of pointer to bool, which
> after all for some is idiomatic style. But coupled with the (e.g. MSVC)
> silly-warnings, which also on its own is a rather weak argument, I
> think it sums up to a good argument. For AFAIK there's really no good
> reason to omit the comparison.
>
Didn't your mother teach you: two wrongs don't make a right?
It makes the intent of the coder clearer?
If I see "if (a != NULL)" I immediately know that a is a pointer and that
we're checking that it's not a null pointer. Or more precisely, if this
is /not/ the case, then I'm dealing with such a perverse coder that all
bets are off...
If I see "if (a != 0)" then I have fewer clues what's going on and quite
likely have to check a lot of other code to find out.
--
Lionel B
Puhleez. Who names a pointer 'a', anyway? Seriously. You're basing
the need in one *style* rule on allowing another style rule to be violated.
Not to mention the practice Microsoft uses for all its API "handles"
(like HWND, HDC, HINSTANCE, etc.) allowing comparing them to NULL.
Since they aren't necessarily pointers (and they weren't in the
beginning, BTW), the expression
if (hMyWindow != NULL)
promoted by *tons* of code originating in MS itself is simply wrong
because the goal was to compare it against 0 (*zero*). It's only many
years later they devised a scheme where a 'HANDLE' would be compared to
'INVALID_HANDLE_VALUE' (or some such), thus breaking their own rule,
probably caused by the fact that those 'HANDLE' values with all bits
cleared were actually *valid*.
> and that
> we're checking that it's not a null pointer.
The statement
if (a)
does exactly the same thing, but more. To me (and to Andrew, probably)
it says that 'a' is something that can be 'false' ("wrong", "invalid",
whatever other meaning you can hang on the value 'false'), and we're
checking that it isn't, IOW we're checking that 'a' is "fine", "OK",
"valid", "in good standing" (or whatever other meaning you can hang onto
the value 'true'). Who cares that 'a' is a pointer? FWIW it could be a
class that *acts* like a pointer. And the class can have a defined
conversion to 'bool' that isn't comparing the stored address to some
special "invalid address" pattern.
The whole point of the implicit conversions is, well, duck typing.
Allowing pointers to be converted to bool provides us with the ability
to use pointers in logical expressions without having to write
.. != <somespecialexpression>
. If you don't want to use it, don't use it. But I use it, and will
continue using it because the benefits outweigh the problems it might
create for people who don't [want to] understand the language to its
fullest and need to maintain my code. Figuratively speaking, of course.
> Or more precisely, if this
> is /not/ the case, then I'm dealing with such a perverse coder that all
> bets are off...
>
> If I see "if (a != 0)" then I have fewer clues what's going on and quite
> likely have to check a lot of other code to find out.
<shrug> If that code is part of a template, it would work just fine for
*any type* that defines non-equality comparison to zero, pointers and
integrals alike. And why care what 'a' is. What's going on is the
comparison with zero, which probably has some meaning in the context.
If that's not a clue enough for you, then you probably don't understand
the actual context, and, yes, you would need "to check a lot of other
code to find out".
V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask
> Turn off stupid warnings.
>
I couldn't disagree more. There are too many easy mistakes you can make
which the compiler can warn you about long before you have to hunt them
down in debugging. Many of those mistakes can cost many hours if they
make it to that point.
One great example: forgetting to make your destructor virtual when it
should be.
> Lionel B wrote:
>> On Thu, 29 Jan 2009 17:07:41 +0000, Andrew Koenig wrote:
>>
>>> "James Kanze" <james...@gmail.com> wrote in message
>>> news:11b0ce6c-69ed-4764-aa5a-
c3f8b9...@s1g2000prg.googlegroups.com...
>>> On Jan 13, 8:46 am, Nosophorus <Nosopho...@gmail.com> wrote:
>>>
>>>> For historical reasons, a pointer implicitly converts to bool, with a
>>>> null pointer converting to false, and all other pointer values
>>>> converting to true. In general, it's better to avoid the implicit
>>>> conversion, and write what you mean:
>>>> if ( a != NULL ) ...
>>>
>>> Why do you think it's better? I don't think so, but maybe you can
>>> convince me.
>>
>> It makes the intent of the coder clearer?
>>
>> If I see "if (a != NULL)" I immediately know that a is a pointer
>
> Puhleez. Who names a pointer 'a', anyway?
Not me. I was quoting a previous poster.
Ok, maybe "immediately know" is a bit strong... "get a strong hint" might
be better.
> Seriously. You're basing
> the need in one *style* rule on allowing another style rule to be
> violated.
You lost me there.
> Not to mention the practice Microsoft uses for all its API "handles"
> (like HWND, HDC, HINSTANCE, etc.) allowing comparing them to NULL. Since
> they aren't necessarily pointers (and they weren't in the beginning,
> BTW), the expression
>
> if (hMyWindow != NULL)
>
> promoted by *tons* of code originating in MS itself is simply wrong
> because the goal was to compare it against 0 (*zero*). It's only many
> years later they devised a scheme where a 'HANDLE' would be compared to
> 'INVALID_HANDLE_VALUE' (or some such), thus breaking their own rule,
> probably caused by the fact that those 'HANDLE' values with all bits
> cleared were actually *valid*.
<shrug> I don't program on MS Windows... but exactly. My understanding is
that NULL (as inherited from C) is *specifically* to represent a null
pointer constant. If Microsoft choose/chose to subvert that then boo sucks
to them. I guess if I had to deal with it I would, grudgingly.
> > and that
>> we're checking that it's not a null pointer.
>
> The statement
>
> if (a)
>
> does exactly the same thing, but more. To me (and to Andrew, probably)
> it says that 'a' is something that can be 'false' ("wrong", "invalid",
> whatever other meaning you can hang on the value 'false'), and we're
> checking that it isn't, IOW we're checking that 'a' is "fine", "OK",
> "valid", "in good standing" (or whatever other meaning you can hang onto
> the value 'true').
To me it says that 'a' is something that has an implicit conversion to
'bool' ... and who knows what that boolean value represents in the mind of
the programmer? Not me, as a poor mind-reader, without checking (quite
possibly inadequately documented) code.
On the other hand, 'i != 0' or 'p != NULL' or 'hMyWindow !=
INVALID_HANDLE_VALUE' or 'myObject.is_valid()' or whatever simply expresses
intent far better.
> Who cares that 'a' is a pointer? FWIW it could be a
> class that *acts* like a pointer. And the class can have a defined
> conversion to 'bool' that isn't comparing the stored address to some
> special "invalid address" pattern.
Sure, so STL iterators "act like pointers" in some ways, but heaven knows
what 'if (myitr)' might mean... as opposed to, say, 'if (myitr !=
myContainer.end())'.
> The whole point of the implicit conversions is, well, duck typing.
> Allowing pointers to be converted to bool provides us with the ability
> to use pointers in logical expressions without having to write
>
> .. != <somespecialexpression>
>
> . If you don't want to use it, don't use it. But I use it, and will
> continue using it because the benefits outweigh the problems it might
> create for people who don't [want to] understand the language to its
> fullest and need to maintain my code. Figuratively speaking, of course.
My dirty little secret is that I have been known to use it - occasionally
out of laziness, more usually to maintain consistency with someone else's
code. It just smacks of an obfuscatory C-ism to me.
> > Or more precisely, if this
>> is /not/ the case, then I'm dealing with such a perverse coder that all
>> bets are off...
>>
>> If I see "if (a != 0)" then I have fewer clues what's going on and
>> quite likely have to check a lot of other code to find out.
I probably should have said If I see "if (a)" there.
> <shrug> If that code is part of a template, it would work just fine for
> *any type* that defines non-equality comparison to zero, pointers and
> integrals alike. And why care what 'a' is. What's going on is the
> comparison with zero, which probably has some meaning in the context.
Probably... until some coder comes along and instantiates the template
with some type for which the comparison is inappropriate. I don't think
that's so far-fetched. For instance I can envisage a signed integer-based
type for which the semantics of "valid" means > 0 (sure you say, maybe it
should not be signed then, but perhaps it needs to do signed arithmetic).
Then 'if (a)' - tucked away in some template - does *not* equate to
"valid". Ouch.
> If that's not a clue enough for you, then you probably don't understand
> the actual context, and, yes, you would need "to check a lot of other
> code to find out".
I would indeed.
--
Lionel B
> Pete Becker wrote:
>
>> Turn off stupid warnings.
>>
>
> I couldn't disagree more. There are too many easy mistakes you can
> make which the compiler can warn you about long before you have to hunt
> them down in debugging. Many of those mistakes can cost many hours if
> they make it to that point.
Perhaps you overlooked the adjective in my statement.
>
> One great example: forgetting to make your destructor virtual when it
> should be.
Shrug. Easy enough to test. That way you know that the destructor is
virtual when you want it to be, rather than relying on the compiler to
decide whether you wanted it to be virtual.
btw, if the example of the STL serves, several classes provide an implicit
conversion to bool to express the meaning of /valid/. Streams are an
example.
Ok, admittedly, the most relevant one to the present discussion
precisely lacks it (std::auto_ptr). However, I don't think your argument
holds for it either:
if( p.get() == 0 )
does not /mean/ more than:
if( p.get() )
boost::shared_ptr also defines an implicit conversion to bool. So if
Boost does it, it does not look like a bad practice to me.
To add to the length of this thread, I will mention that I have a
personal preference for:
p ? p->something : something_else;
over:
p != 0 ? p->something : something_else;
It's pretty clear what's going on in the former case, so no need to be over
redundant.
It's a bit like those who write:
if( condition == true )
or:
return condition ? true : false;
or even:
return condition == true ? true : false;
Anyway, that's probably one of those threads that will end up exactly as
it started. Nowhere! But I'm glad I will have contributed to it ;-)
--
Bertrand
>
> btw, if example of the STL serves, several classes provide an implicit
> conversion to bool to express the meaning of /valid/. streams are an
> example.
Well, an implicit conversion to void*, which in turn implicitly
converts to bool. But, yes, it's a standard idiom.
On the other hand, implicit conversion to void* worries folks with too
much time on their hands, since someone might delete that object. So
they insist that these things return something that can't be deleted.
Of course, now that the language allows "explicit" on a conversion,
we'll see all sorts of conversions being marked explicit. But explicit
conversions to bool are still applied in boolean contexts, so
if(my_obj)
will use the explicit conversion.
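A minimal sketch of what that looks like (the class and member names are
mine, purely for illustration, and it needs a compiler with the C++0x
feature):

    struct Handle
    {
        void* p;
        explicit operator bool() const { return p != 0; }
    };

    int main()
    {
        Handle h = { 0 };
        if (h) { }                       // OK: contextual conversion to bool
        bool ok = static_cast<bool>(h);  // OK: explicitly requested
        // bool bad = h;                 // error: the conversion is explicit
        (void)ok;
        return 0;
    }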
> Ok, admittedly, the most relevant one to the present discussion
> precisely lacks it (std::auto_ptr). However, I don't think your
> argument holds for it either:
> if( p.get() == 0 )
> does not /mean/ more than:
> if( p.get() )
>
> boost::shared_ptr also defines an implicit conversion to bool. So if
> Boost does it, it does not look like a bad practice to me.
>
> To add to the length of this thread, i will mention that I have a
> personal preference for:
> p ? p->something : something_else;
> over:
> p == 0 ? p->something : something_else;
>
> It's pretty clear what's going on in former case, so no need to be over
> redundant.
The term "over redundant" is brought to you by the Department of
Redundancy Department.
>
> It's a bit like those who write:
> if( condition == true )
> or:
> return condition ? true : false;
> or even:
> return condition == true ? true : false;
But why stop there?
if ((condition == true) == true)
ad infinitum.
> Lionel B wrote:
>> On Fri, 30 Jan 2009 10:02:29 -0500, Victor Bazarov wrote:
>>
>> (Snip)
>> <shrug> I don't program on MS Windows... but exactly. My understanding
>> is that NULL (as inherited from C) is *specifically* to represent a
>> null pointer constant. If Microsoft choose/chose to subvert that then
>> boo sucks to them. I guess if I had to deal with it I would,
>> grudgingly.
>
> I thought it has something to do with the fact that with C an invalid
> pointer was not necessarily set to 0 on all platforms, hence the need
> for a special value. Something that C++ decided not to carry on, and
> used 0 for invalid pointers; thus legitimating the usage of implicit
> conversion to bool.
Sure, I'm not claiming it's "wrong" to write 'if (p)', simply that 'if (p
== NULL)' - or, for that matter, 'if (p == 0)' - is clearer in its intent.
> btw, if example of the STL serves, several classes provide an implicit
> conversion to bool to express the meaning of /valid/. streams are an
> example.
> Ok, admittedly, the most relevant one to the present discussion
> precisely lacks it (std::auto_ptr). However, I don't think your argument
> holds for it either:
Actually my argument re. STL was about container iterators.
Sure the streams implicit conversion is useful and such a common idiom
that it seems perfectly natural in that context. I guess I'm arguing
against general usage where the context is not necessarily that clearcut.
> if( p.get() == 0 )
> does not /mean/ more than:
> if( p.get() )
I guess you meant 'if( !p.get() )' ... now which style was clearer? ;-)
[...]
> Anyway, that's probably one of those threads that will end up exactly as
> it started. Nowhere! But I'm glad I will have contributed to it ;-)
For sure. I guess it's all about personal preferences. I'm really not too
exercised about it.
--
Lionel B
> I couldn't disagree more. There are too many easy mistakes
> you can make which the compiler can warn you about long before
> you have to hunt them down in debugging. Many of those
> mistakes can cost many hours if they make it to that point.
He didn't say turn off all warnings. He said turn off stupid
warnings. For whatever reasons, every compiler I've ever used
has some really stupid warnings, alongside many very useful
ones. If you can turn them off with a compile line option,
fine; most of the time, we've ended up piping the compiler
output through sed, to get rid of some we couldn't turn off.
> One great example: forgetting to make your destructor virtual
> when it should be.
The problem is that the compiler generally can't know when it
should be, so it's almost impossible to get a good warning for
this. (Warning if there are virtual functions and the
destructor is public is probably close enough, however. It
shouldn't result in too many false warnings.)
--
James Kanze
That wasn't the point. While I also prefer NULL to 0 when
pointers are involved, there are serious problems with both
solutions, and I understand the arguments for 0. I don't think
that there is a conclusive argument one way or the other
there---it's a question of which problems you consider the most
serious. The issue is
if ( a == NULL )
or if ( a == 0 )
vs if (!a)
; the implicit conversion to bool.
--
James Kanze
>>> For historical reasons, a pointer implicitly converts to
>>> bool, with a null pointer converting to false, and all other
>>> pointer values converting to true. In general, it's better
>>> to avoid the implicit conversion, and write what you mean:
>>> if ( a != NULL ) ...
>> Why do you think it's better? I don't think so, but maybe you
>> can convince me.
> For the same reason implicit conversions in general are to be
> avoided. A pointer isn't a bool, so it doesn't make sense to
> use it as one; doing so only leads to confusion. Say what you
> mean, and mean what you say.
> (BTW: you're the co-author of the original proposal for bool.
> And there, unless I'm remembering wrong, you proposed
> deprecating the above conversion. Have you changed your mind?
> And if so, what made you change it?)
I'd love to deprecate conversion of pointers to bool ... but I don't
consider
if (p) { /* ... */ }
to be an instance of such a conversion. After all, this usage was around
for years before anyone considered the conversion!
Rather, I consider
if (p) { /* ... */ }
to be an abbreviation for
if (p != 0) { /* ... */ }
which is an implicit conversion of 0 to a pointer, not of p to bool.
I understand that the standard doesn't describe it that way, but that's just
a matter of descriptive convenience, and doesn't affect how I personally
think about it.
> It makes the intent of the coder clearer?
> If I see "if (a != NULL)" I immediately know that a is a pointer and that
> we're checking that it's not a null pointer. Or more precisely, if this
> is /not/ the case, then I'm dealing with such a perverse coder that all
> bets are off...
To me, "if (a != NULL)" is saying the same thing twice. That is... I don't
think that code should mention a variable's type more than once if it is
reasonably possible to avoid doing so.
> I consider
>
> if (p) { /* ... */ }
>
> to be an abbreviation for
>
> if (p != 0) { /* ... */ }
>
> which is an implicit conversion of 0 to a pointer, not of p to bool.
>
> I understand that the standard doesn't describe it that way, but that's just
> a matter of descriptive convenience, and doesn't affect how I personally
> think about it.
This may go without saying, but you obviously have a very deep
understanding of how C++ syntax can be used to reflect high-level
semantic meaning. I wish that level of understanding were more widespread.
To a human reader, the != 0 ought to be considered noise, AFAICS. The
ability to test things other than raw bools is not just for pointers,
either; e.g: while (std::cin >> s) { ... }. I don't consider p != 0 any
better than b != false. And don't even get me started on NULL. :)
This argument seems to be a perennial favorite on Usenet. It seems to
come down to a division between people who think generically, and others
who think (or at least code) orthogonally. The first of these camps,
the Generics, believe that syntax should convey abstract meaning, with
low-level details supplied by context. The second camp, the
Orthogonals, believe that a particular syntax should mean exactly the
same thing in all contexts, and that even (or especially) subtle things
should always look different.
I consider myself in the Generic camp. The advantage of this style is
that similar ideas are implemented by similar-looking code. A single
piece of source code can often be reused in multiple contexts, even if
the compiler generates wildly different object code for each of them.
For example, I like the fact that the following all look the same:
throw_if(!p, "null pointer");
throw_if(!cin, "input error");
throw_if(!divisor, "attempted division by zero");
Once I understand the pattern, I don't want to have to optically grep
the details of expressions like p == NULL, !cin.good(), and divisor ==
0. They all have the same high-level meaning, and that's usually what I
care about.
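(throw_if above is just a hypothetical helper of my own, something along
these lines:)

    #include <stdexcept>
    #include <string>

    // Hypothetical helper used in the examples above: throws when the
    // condition is true, whatever type the condition started out as.
    template <typename Cond>
    void throw_if(const Cond& cond, const std::string& what)
    {
        if (cond)
            throw std::runtime_error(what);
    }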
The apparent down side to this style is that subtle mistakes can hide in
it. It's easy for the reader to see the author's intent, but relatively
difficult to see the bit-level reality. However, I would argue that
this is the price we pay for abstraction, and attempts to include
context-specific information in our code without loss of abstraction
give only the illusion of clarity.
The ostensible advantage of something like p != NULL is that p is
clearly a pointer. Or is it? The compiler has no problem with a
comparison of the form std::cin != NULL. In p != NULL, the word NULL
seems to hint that p is a pointer, but that information is neither
relevant to the expression, nor enforceable by the compiler. When it
becomes necessary to understand that p is a pointer, the fact should
become evident, as in an expression like p->size(). Even then, p might
be a raw pointer, an iterator, or some kind of fancy smart pointer. In
decent C++ code, IMO, we shouldn't have to know.
> On 30 jan, 14:08, Lionel B <m...@privacy.net> wrote:
>> On Thu, 29 Jan 2009 17:07:41 +0000, Andrew Koenig wrote:
>> > "James Kanze" <james.ka...@gmail.com> wrote in message
>> >news:11b0ce6c-69ed-4764-aa5a-
c3f8b9...@s1g2000prg.googlegroups.com...
>> > On Jan 13, 8:46 am, Nosophorus <Nosopho...@gmail.com> wrote:
>
>> >> For historical reasons, a pointer implicitly converts to bool, with
>> >> a null pointer converting to false, and all other pointer values
>> >> converting to true. In general, it's better to avoid the implicit
>> >> conversion, and write what you mean:
>> >> if ( a != NULL ) ...
>
>> > Why do you think it's better? I don't think so, but maybe you can
>> > convince me.
>
>> It makes the intent of the coder clearer?
>
>> If I see "if (a != NULL)" I immediately know that a is a pointer and
>> that we're checking that it's not a null pointer. Or more precisely, if
>> this is /not/ the case, then I'm dealing with such a perverse coder
>> that all bets are off...
>
>> If I see "if (a != 0)" then I have fewer clues what's going on and
>> quite likely have to check a lot of other code to find out.
>
> That wasn't the point.
Yes, I jumped into the middle of the thread.
> While I also prefer NULL to 0 when pointers are
> involved, there are serious problems with both solutions, and I
> understand the arguments for 0. I don't think that there is a
> conclusive argument one way or the other there---it's a question of
> which problems you consider the most serious.
Sure, that's been done to death (repeatedly) on this ng.
> The issue is
> if ( a == NULL )
> or if ( a == 0 )
> vs if (!a)
> ; the implicit conversion to bool.
See my posts elsewhere in this thread.
--
Lionel B
> "Lionel B" <m...@privacy.net> wrote in message
> news:gluu3l$hhg$2...@south.jnrs.ja.net...
>
>> It makes the intent of the coder clearer?
>
>> If I see "if (a != NULL)" I immediately know that a is a pointer and
>> that we're checking that it's not a null pointer. Or more precisely, if
>> this is /not/ the case, then I'm dealing with such a perverse coder
>> that all bets are off...
>
> To me, "if (a != NULL)" is saying the same thing twice.
Pardon?
To me, it just looks like comparing 'a' with something called 'NULL'
which is a bit of a bodge, but nonetheless associated with pointers. So I
would /suspect/ that 'a' is something pointer-like, although to be sure I
would have to check some code.
> That is... I
> don't think that code should mention a variable's type more than once if
> it is reasonably possible to avoid doing so.
--
Lionel B
I have mixed feelings about if (ptr): when I began programming C, I
always wrote if (ptr != NULL). That was many years ago, and I have now
changed my attitude and consider it idiomatic (replacing NULL with 0 and
C with C++) and in line with other common shorthands such as while
(stream >> i).
What is a big shame in my opinion is the automatic conversion to (and
also from) bool: It has caused so many problems and inconveniences for
everyone. I wonder what the rationale was.
/Peter
> news:13127edb-2688-44f0...@i24g2000prf.googlegroups.com...
> On Jan 29, 6:07 pm, "Andrew Koenig" <a...@acm.org> wrote:
> >>> For historical reasons, a pointer implicitly converts to
> >>> bool, with a null pointer converting to false, and all other
> >>> pointer values converting to true. In general, it's better
> >>> to avoid the implicit conversion, and write what you mean:
> >>> if ( a != NULL ) ...
> >> Why do you think it's better? I don't think so, but maybe you
> >> can convince me.
> > For the same reason implicit conversions in general are to be
> > avoided. A pointer isn't a bool, so it doesn't make sense to
> > use it as one; doing so only leads to confusion. Say what you
> > mean, and mean what you say.
> > (BTW: you're the co-author of the original proposal for bool.
> > And there, unless I'm remembering wrong, you proposed
> > deprecating the above conversion. Have you changed your mind?
> > And if so, what made you change it?)
> I'd love to deprecate conversion of pointers to bool ... but I don't
> consider
> if (p) { /* ... */ }
> to be an instance of such a conversion.
But the standard does. And IIRC, so did your original paper.
> After all, this usage was around for years before anyone
> considered the conversion!
Yes, because early C was rather flippant about types. An
attitude left over from B, I suppose. (In B, and in other
"untyped" languages, like AWK, I have no problems with this.
Although even in AWK, if I'm using a variable as a number or a
string, rather than as a boolean, I'll write the test out.)
> Rather, I consider
> if (p) { /* ... */ }
> to be an abbreviation for
> if (p != 0) { /* ... */ }
Why?
> which is an implicit conversion of 0 to a pointer, not of p to
> bool.
> I understand that the standard doesn't describe it that way, but that's just
> a matter of descriptive convenience, and doesn't affect how I personally
> think about it.
I'm not sure I follow you: are you saying that anytime a
variable (pointer or arithmetic type) is used in a condition, it
should automatically be treated as if there was a != 0 behind
it? A sort of a short cut way of writing it?
--
James Kanze
> > if (p) { /* ... */ }
> > to be an abbreviation for
> > if (p != 0) { /* ... */ }
> > which is an implicit conversion of 0 to a pointer, not of p to bool.
> > I understand that the standard doesn't describe it that way,
> > but that's just a matter of descriptive convenience, and
> > doesn't affect how I personally think about it.
> This may go without saying, but you obviously have a very deep
> understanding of how C++ syntax can be used to reflect
> high-level semantic meaning. I wish that level of
> understanding were more widespread.
> To a human reader, the != 0 ought to be considered noise,
> AFAICS.
It specifies what you are testing for. If a pointer only had
two possible values, it would be noise. Since that's not the
case, it's important information.
> The ability to test things other than raw bools is not just
> for pointers, either; e.g: while (std::cin >> s) { ... }. I
> don't consider p != 0 any better than b != false. And don't
> even get me started on NULL. :)
> This argument seems to be a perennial favorite on Usenet. It
> seems to come down to a division between people who think
> generically, and others who think (or at least code)
> orthogonally.
The argument goes back long before generic programming. The
argument is between those who believe in static type checking,
and those who are too lazy to type.
Just kidding, of course---but the issue IS type checking.
With regards to "generic" programming, I can understand the need
for some sort of "generic" is valid. The problem is that we
don't have it. Whether you write "if ( p == NULL )", or just
"if (p)", the constraints on p are identical. The difference is
that in the first case, it's clearly apparent what those
constraints are.
> The first of these camps, the Generics, believe that syntax
> should convey abstract meaning, with low-level details
> supplied by context. The second camp, the Orthogonals,
> believe that a particular syntax should mean exactly the same
> thing in all contexts, and that even (or especially) subtle
> things should always look different.
I don't think you've understood the argument. In a well
designed language, pointers aren't boolean values (since they
have more than two values), and can't be used as such. And type
checking is strict, because that reduces the number of errors.
The problem is that the implicit conversion doesn't cover all of
the cases. Even in the case of pointers, we have three cases
which can legally occur: the pointer points to an object, it
points to one behind the end of an array of objects, or it is
null. So which one do you privilege by the conversion?
> I consider myself in the Generic camp. The advantage of this
> style is that similar ideas are implemented by similar-looking
> code. A single piece of source code can often be reused in
> multiple contexts, even if the compiler generates wildly
> different object code for each of them. For example, I like
> the fact that the following all look the same:
> throw_if(!p, "null pointer");
> throw_if(!cin, "input error");
> throw_if(!divisor, "attempted division by zero");
throw_if( p == 0, "null pointer");
throw_if( cin == 0, "input error");
throw_if( divisor == 0, "attempted division by zero");
They all look the same to me. Even better:
throw_if( !isValid( p ), "null pointer");
throw_if( !isValid( cin ), "input error");
throw_if( !isValid( divisor ), "attempted division by zero");
With isValid defined with an appropriate overload (which would
return false for a pointer one past the end as well---except
that I don't know how to implement that).
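(A rough sketch of the sort of overload set I have in mind; the names and
the exact overloads are made up here, and the one-past-the-end case is,
as noted, not handled:)

    #include <istream>

    // Each overload spells out what "valid" means for its own type.
    inline bool isValid(const void* p)          { return p != 0; }
    inline bool isValid(const std::istream& s)  { return !s.fail(); }
    inline bool isValid(int divisor)            { return divisor != 0; }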
> Once I understand the pattern, I don't want to have to
> optically grep the details of expressions like p == NULL,
> !cin.good(), and divisor == 0. They all have the same
> high-level meaning, and that's usually what I care about.
The problem is that no one really knows what that high-level
meaning is, since it is radically different for each type.
> The apparent down side to this style is that subtle mistakes
> can hide in it. It's easy for the reader to see the author's
> intent,
The problem is that it's impossible for the reader to see the
author's intent (except from the string literal in your
example---but that's not typically present). It's a recipe for
unreadable code.
> but relatively difficult to see the bit-level reality.
> However, I would argue that this is the price we pay for
> abstraction, and attempts to include context-specific
> information in our code without loss of abstraction give only
> the illusion of clarity.
> The ostensible advantage of something like p != NULL is that p
> is clearly a pointer.
No. The advantage is that it isn't "p != 0", and that 0 clearly
isn't a pointer.
The whole system is broken. The problem is that both "p != 0"
and "p != NULL" are lies. The next version of the standard will
fix it somewhat, with nullptr (but for historical reasons, of
course, 0 and NULL must still remain legal).
> Or is it? The compiler has no problem with a comparison of
> the form std::cin != NULL. In p != NULL, the word NULL seems
> to hint that p is a pointer, but that information is neither
> relevant to the expression, nor enforceable by the compiler.
G++ warns if you use NULL in a non-pointer context. It could
just as easily generate an error. So this is enforceable by the
compiler.
> When it becomes necessary to understand that p is a pointer,
> the fact should become evident, as in an expression like
> p->size(). Even then, p might be a raw pointer, an iterator,
> or some kind of fancy smart pointer. In decent C++ code, IMO,
> we shouldn't have to know.
Except that we do, since neither "if (p)" nor "if ( p != NULL )"
nor "if ( p != 0 )" work if p is an iterator.
The real solution here is an appropriate set of overloaded
functions. I use Gabi::begin(), and Gabi::end(), for example;
they work with C style arrays, and are easily made to work with
STL containers. (In other words, if the STL had been designed
for genericity, it would have defined a function begin(
std::vector<> ), and not std::vector<>::begin().)
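(Roughly along these lines; a simplified sketch of mine, not the actual
Gabi code. Much the same idea was later standardized as
std::begin/std::end.)

    #include <cstddef>

    // Free begin()/end() that work for containers and C-style arrays alike.
    template <typename C>
    typename C::iterator begin(C& c) { return c.begin(); }

    template <typename C>
    typename C::iterator end(C& c)   { return c.end(); }

    template <typename T, std::size_t N>
    T* begin(T (&a)[N]) { return a; }

    template <typename T, std::size_t N>
    T* end(T (&a)[N])   { return a + N; }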
--
James Kanze
Note that here a simple return p; would cause MSVC to issue the
``stupid'' performance warning mentioned elsewhere in this thread.
>
>> btw, if example of the STL serves, several classes provide an implicit
>> conversion to bool to express the meaning of /valid/. streams are an
>> example.
>> Ok, admittedly, the most relevant one to the present discussion
>> precisely lacks it (std::auto_ptr). However, I don't think your argument
>> holds for it either:
>
> Actually my argument re. STL was about container iterators.
No worries, I was looking for examples in a well accepted library, not
really replying to something specific you wrote.
>
> Sure the streams implicit conversion is useful and such a common idiom
> that it seems perfectly natural in that context. I guess I'm arguing
> against general usage where the context is not necessarily that clearcut.
>
>> if( p.get() == 0 )
>> does not /mean/ more than:
>> if( p.get() )
>
> I guess you meant 'if( !p.get() )' ... now which style was clearer? ;-)
Ah, ah, sure. It does not mean /more/, but it does mean something
/different/ then ;-)
I meant to write if( p.get() != 0 ) which was the follow up on the
initial if( a != 0 ) or if( a != NULL ). But it was already late for me...
Nevertheless, even with if( !p.get() ) I stick to my favourite style.
--
Bertrand
>> Rather, I consider
>
>> if (p) { /* ... */ }
>
>> to be an abbreviation for
>
>> if (p != 0) { /* ... */ }
>
> Why?
Because it's an idiom that has been in common usage for more than 30 years.
>> I understand that the standard doesn't describe it that way, but that's
>> just
>> a matter of descriptive convenience, and doesn't affect how I personally
>> think about it.
> I'm not sure I follow you: are you saying that anytime a
> variable (pointer or arithmetic type) is used in a condition, it
> should automatically be treated as if there was a != 0 behind
> it? A sort of a short cut way of writing it?
I'm saying that that's how people who have been programming in C or C++ for
a long time often think about it.
I find it easier to read
if (p && p->thing == "foo") { ... }
than to read
if (p != NULL && p->thing == "foo") { ... }
because the "!= NULL" in the second example is redundant and makes me stop
to think "Why did the author of that statement put the redundant comparison
in?"
>> To me, "if (a != NULL)" is saying the same thing twice.
> Pardon?
If the variable 'a' is defined as having a pointer type, then the "!= NULL"
is just restating something we already know.
> news:GnVgl.6084$Nn6....@newsfe03.ams2...
How's that? It's stating that we're comparing it with a null
pointer. And not, for example, comparing it with a pointer to
one past the end. We're testing whether a pointer has a
specific value: the != tells us that we consider the results
true if it *doesn't* have this value, and the NULL tells us that
the value in question is a null pointer. Both are very
pertinent information.
--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
> news:0611b478-c16a-4e5c...@t26g2000prh.googlegroups.com...
> >> Rather, I consider
> >> if (p) { /* ... */ }
> >> to be an abbreviation for
> >> if (p != 0) { /* ... */ }
> > Why?
> Because it's an idiom that has been in common usage for more
> than 30 years.
In some circles. In others, no. In the groups I've worked
with, ``if (p)'' has never been used. Personally, I find it
confusing, and I have to stop and think about it each time I see
it. (And I've probably got almost as much experience in C and
C++ as you do:-).)
> >> I understand that the standard doesn't describe it that
> >> way, but that's just a matter of descriptive convenience,
> >> and doesn't affect how I personally think about it.
> > I'm not sure I follow you: are you saying that anytime a
> > variable (pointer or arithmetic type) is used in a
> > condition, it should automatically be treated as if there
> > was a != 0 behind it? A sort of a short cut way of writing
> > it?
> I'm saying that that's how people who have been programming in
> C or C++ for a long time often think about it.
How some people think about it, perhaps. But it's certainly not
universal, and in itself, is confusing.
> I find it easier to read
> if (p && p->thing == "foo") { ... }
> than to read
> if (p != NULL && p->thing == "foo") { ... }
> because the "!= NULL" in the second example is redundant and
> makes me stop to think "Why did the author of that statement
> put the redundant comparison in?"
Maybe because he wanted to make it clear to others what he was
testing?
I think that the widespread adoption of the STL iterator idiom makes
this even more important. I don't write:
if ( iter && iter->... )
, for the obvious reason that I can't. People expect to see a
comparison when an iterator is used, and this expectation
carries over to pointers. If we accept the ``if (p)'' idiom,
then logically, we should insist on the GoF pattern for
iterators, with an implicit conversion to bool, returning
!isDone.
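(For concreteness, a bare-bones sketch of such a GoF-style iterator; the
interface names are approximate, not taken from the book:)

    // GoF-style iterator over an array of int (sketch).
    class IntIterator
    {
    public:
        IntIterator(const int* first, const int* last)
            : cur_(first), last_(last) {}
        bool isDone() const   { return cur_ == last_; }
        void next()           { ++cur_; }
        int  current() const  { return *cur_; }
        operator bool() const { return !isDone(); }  // the conversion in question
    private:
        const int* cur_;
        const int* last_;
    };

    // usage: for (IntIterator it(a, a + n); it; it.next()) { ... it.current() ... }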
While I prefer the GoF pattern myself (and use it at least as
often as the STL pattern, since I like filtering iterators and
such), I don't agree with the implicit conversion either. Say
whay you mean, and mean what you say---an iterator or a pointer
is not a bool, and I don't like to pretend it is.
Like he said, we know that already.
> And not, for example, comparing it with a pointer to
> one past the end.
We know that also.
> > On Feb 5, 11:39 pm, "Andrew Koenig" <a...@acm.org> wrote:
> >> "Lionel B" <m...@privacy.net> wrote in message
> >>news:GnVgl.6084$Nn6....@newsfe03.ams2...
> >>>> To me, "if (a != NULL)" is saying the same thing twice.
> >>> Pardon?
> >> If the variable 'a' is defined as having a pointer type,
> >> then the "!= NULL" is just restating something we already
> >> know.
> > How's that? It's stating that we're comparing it with a
> > null pointer.
> Like he said, we know that already.
Except that we don't. (Unless our mind has been perverted by
too many years of C.)
--
James Kanze (GABI Software) email:james.ka...@gmail.com
> I think that the widespread adoption of the STL iterator idiom makes
> this even more important. I don't write:
>
> if ( iter && iter->... )
That's an idiom I wish STL iterators supported. It would allow the
passing of a single iterator, rather than a pair of iterators, to
generic algorithms. Of course, raw pointers would cease to be valid
iterators, so I don't have high hopes.
>> If the variable 'a' is defined as having a pointer type, then
>> the "!= NULL" is just restating something we already know.
> How's that? It's stating that we're comparing it with a null
> pointer.
Right. Therefore, we are restating that 'a' has a type that can be compared
with a null pointer.
But if we have already defined 'a' as having a pointer type, we know that we
can compare 'a' with a null pointer, so I think we should not restate that
fact in our code if we can reasonably avoid doing so.
> I think that the widespread adoption of the STL iterator idiom makes
> this even more important. I don't write:
> if ( iter && iter->... )
> , for the obvious reason that I can't. People expect to see a
> comparison when an iterator is used, and this expectation
> carries over to pointers.
If you feel that way, then you shouldn't be writing
if (p != NULL)
either because there is no equivalent to a null pointer in the iterator
universe.
But then how would an algorithm pass off a subset of the range to
another function? It could use ranges, but then you'd need two
interfaces for each algorithm.
> news:1926fe2e-c568-4903...@v5g2000prm.googlegroups.com...
> if (p != NULL)
I don't use it very often; for reasons of consistency, I tend to
use the iterator idioms even when dealing with pointers. (Even
if I think that the idiom is not very optimal.) But you're
missing my point entirely---a pointer isn't a bool, because it
has more than two values. And because you can compare it to
many different things, you should state what you're comparing it
to. (I don't approve of things like "if ( booleanVariable ==
true )", for example. That shows a lack of understanding of the
type system. Just like "if (p)":-).)
Define an iterator which represents a range.
In practice, about the only time I've found that I need to pass
sub-ranges is when parsing. In which case, I either have a
string (and could use indexes), or an istreambuf_iterator (and
have to copy anyway). In addition, when parsing, it's usually
better if iterators have reference semantics; when a function
consumes characters, you want them consumed in the calling
function as well. So I tend to use C++'s other iterator idiom
for this: std::streambuf. (Actually, I've since implemented my
own ParserSource hierarchy. On one hand, to avoid all the extra
baggage of buffer management when not needed, and on the other,
to provide for automatic copying when needed.)
> >> If the variable 'a' is defined as having a pointer type, then
> >> the "!= NULL" is just restating something we already know.
> > How's that? It's stating that we're comparing it with a null
> > pointer.
> Right. Therefore, we are restating that 'a' has a type that
> can be compared with a null pointer.
Amongst other things. (Actually, we're not even stating that,
according to the standard.) But that's not the reason we use
it.
> But if we have already
> defined 'a' as having a pointer type, we know that we can
> compare 'a' with a null pointer, so I think we should not
> restate that fact in our code if we can reasonably avoid doing
> so.
But the purpose of the '== NULL' isn't to state anything about
a: it states what we're comparing it to (and how). A pointer
isn't true or false; it has a (conceptually) infinite number of
possible values. I can compare it to NULL (or 0), or to some
other pointer value.
The implicit conversion of a pointer to bool is a breach in the
type system. In addition, it's a lossy conversion. (But if I
understand your argument correctly, you're not arguing for the
conversion, but rather that "conditions" can take many different
types, and not just bool. Which sounds like an oxymoron to me.)
That is confusing the definitions of the standard with how one thinks about
language constructs. The implicit conversions are crafted so that
if ( expr ) { ... }
is for all practical purposes equivalent to
if ( expr != 0 ) { ... }
So, one may think about if-clauses as testing for non-zeroness. That the
standard accomplishes this by implicit conversions is immaterial. Also,
that the standard calls the syntactic element involved a "condition" is
immaterial.
It is possible that a chess player thinks that a knight's move is one step
diagonally followed by an outward one-step rook move. Now some rule book
could define it as a two-step rook move followed by a one-step rook move
in a perpendicular direction. There will never be any observable
disagreement between the player and the rule book. There is no reason the
player should change his way of thinking.
Best
Kai-Uwe Bux
I think this is like discussing the general issue of # of angels that can fit on
a dog's snout without causing the dog to sneeze.
But anyways, in a broader perspective it is about adopting consistent coding
conventions, and about deviating from such conventions in certain idiomatic cases.
For full consistency, allowing a later change from raw pointer to smart pointer,
one should ideally write e.g.
if( !isZero( p ) ) { ... }
and not as Andrew likes
if( p ) { ... }
or as James and I like
if( p != 0 ) { ... }
or as some have advocated elsewhere,
if( p != nullPtr )
But in reality the effort of using something like 'isZero' will mostly be
wasted: the pointer's type will probably never be changed. And wasted effort is
generally ungood. So the upshot is to do what's natural, what one's accustomed to,
unless there are coding guidelines that force some particular way.
Cheers,
- Alf
> [snip]
> > (But if I
> > understand your argument correctly, you're not arguing for the
> > conversion, but rather that "conditions" can take many different
> > types, and not just bool. Which sounds like an oxymoron to me.)
> That is confusing the definitions of the standard with how one
> thinks about language constructs. The implicit conversions are
> crafted so that
> if ( expr ) { ... }
> is for all practical purposes equivalent to
> if ( expr != 0 ) { ... }
Yes. But that's an implicit conversion (and a lossy one, at
that)---a bit of obfuscation, if you prefer, present mainly for
reasons of backwards compatibility. Or, perhaps, as a technical
means of supporting what Andy seems to be arguing for:
conditions that don't really require booleans. (Sort of like
saying, in English "if the address", rather than "if the address
is present", or "if the address is valid".)
> So, one may think about if-clauses as testing for
> non-zeroness. That the standard accomplishes this by implicit
> conversions is immaterial. Also, that the standard calls the
> syntactic element involved a "condition" is immaterial.
And the fact that it uses keywords like if and while, whose
meaning in English implies a predicate, is also irrelevant?
That's what I mean when I said it seems like an oxymoron.
Saying that we have an if that tests something that isn't a
predicate seems to me an internal contradiction.
I can sort of understand his point of view (although I still
don't agree) IF we accept the idea of "null", in the data base
sense. This has serious repercussions, however; if we implement
it systematically, all types should be "null-able", so even bool
ends up with three states (true, false and null), and "if
( aBool )" executes the if clause if aBool is true or false (but
not if it is null). I don't think that's really a direction C++
wants to take. (While I can partially see the argument for "if
(pointer)"---a null pointer is a very special, sentinal
value---I can't accept it at all for "if (number)", where number
is a double or an int. There's nothing particularly special
about 0, and for example, open(), under Unix, uses -1 as its
special return value.)
In the end, C++ has taken the route of increasing type safety
(compared to C). Not quite as much as it should, IMHO---I'd
like to see implicit conversions of double to int, for example,
dropped. But the type system is important in C++. And
explicitly providing a bool in a condition is part of that type
system; supporting things like "if (p)" is a crack in that type
system.
[...]
> For full consistency, allowing a later change from raw pointer
> to smart pointer, one should ideally write e.g.
> if( !isZero( p ) ) { ... }
> and not as Andrew likes
> if( p ) { ... }
> or as James and I like
>
> if( p != 0 ) { ... }
Well, I prefer "if ( p != NULL )":-). But the choice of 0 or
NULL is another argument---fundamentally, neither are really
satisfactory, for different reasons.
And of course, any smart pointer worth its salt will support
comparison with a null pointer constant. (All of the ones I've
written do.) Or implicit conversion to bool, if you really want
to take that route.
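(A skeleton of what that comparison support might look like; a sketch of
mine, not any particular library's smart pointer:)

    // Minimal smart pointer supporting "p != 0" / "p == 0" (C++03 style).
    template <typename T>
    class Ptr
    {
    public:
        explicit Ptr(T* p = 0) : p_(p) {}
        T& operator*() const  { return *p_; }
        T* operator->() const { return p_; }
        bool operator==(const T* rhs) const { return p_ == rhs; }
        bool operator!=(const T* rhs) const { return p_ != rhs; }
    private:
        T* p_;
    };

    // usage: Ptr<int> p;  if (p != 0) { ... }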
> or as some have advocated elsewhere,
> if( p != nullPtr )
Except that it will be spelled nullptr in C++0x (and apparently
enough people were dissatisfied with 0 or NULL to do something
about it).
> But in reality the effort of using something like 'isZero'
> will mostly be wasted: the pointer's type will probably never
> be changed. And wasted effort is generally ungood. So upshot
> is to do what's natural, what one's accustomed to, unless
> there are coding guidelines that force some particular way.
And we're discussing what should be in the coding guidelines:-).
> > <shrug> I don't program on MS Windows... but exactly. My understanding is
> > that NULL (as inherited from C) is *specifically* to represent a null
> > pointer constant. If Microsoft choose/chose to subvert that then boo sucks
> > to them. I guess if I had to deal with it I would, grudgingly.
>
> I thought it has something to do with the fact that with C an invalid
> pointer was not necessarily set to 0 on all platforms,
an invalid pointer can be set to anything (that isn't a valid
pointer), but a null pointer will have a specific value for
that implementation (usually all bits zero). Assigning 0 to
a pointer will set it to the null pointer value.
int* p = 0; // p is now set to a null pointer value
In C (and C++) this works
if (p == 0)
*no matter what value the null pointer value has*
Most of the time C (and C++) programmers don't have to
worry that the null pointer value may not be zero
unless they start breaking the type system
int i = 0;
int *p = (int*)i; // not guaranteed to be a null pointer!
> hence the need for a special value.
nope. NULL is just a convenience. And it *must* evaluate to
0 (or (void*)0 in C).
> Something that C++ decided not to carry on, and
> used 0 for invalid pointers; thus legitimating the usage of implicit
> conversion to bool.
C and C++ have (nearly) the same semantics in this area.
// all these are ok in C and C++
int* p = NULL;
p = 0;
if (p)
if (p == 0)
if (p == NULL)
// these work in C but not in C++
int* p = (void*)0;
p = malloc (127);
When C++ was young and stdlib was sometimes lifted from
C the NULL macro sometimes didn't work with C++ hence
C++ programmers used the unadorned 0 as that always works.
NULL should be ok now but is avoided for historical
reasons.
Some have suggested that 0 is a special keyword
that means null pointer...
--
Nick Keighley
From my point of view there are two uses of pointers:
- pointers-as-iterators, which always go around in pairs. The pointers
point to ranges of things and have no business having a null state, and
should never be checked for null-ness. A test for emptiness (begin ==
end) is the logical null state.
- pointers as nullable references. In this case null really is a
special value, exactly as in the database sense. On the other hand you
should never do pointer arithmetic on these kind of pointers: these
are not iterators.
These really are two different things; the fact that both usages are
encoded using pointers is a bit unfortunate, but of course it is due
to historical reasons.
> This has serious repercussions, however; if we implement
> it systematically, all types should be "null-able", so even bool
> ends up with three states (true, false and null), and "if
> ( aBool )" executes the if clause if aBool is true or false (but
> not if it is null).
Well, nothing forces you to implement it systematically. You can
always add an extra null state with the fallible idiom, if necessary
(heck, I've even used boost::optional<T&> as a nullable reference
parameter, but it is usually overkill).
The fact is that pointers do come with a builtin out-of-band-really-
can-t-be-used-for-anything-else null state; IMHO making the check
explicit doesn't add much.
> I don't think that's really a direction C++
> wants to take. (While I can partially see the argument for "if
> (pointer)"---a null pointer is a very special, sentinal
> value---I can't accept it at all for "if (number)", where number
> is a double or an int. There's nothing particularly special
> about 0, and for example, open(), under Unix, uses -1 as its
> special return value.)
for what is worth, while I routinely use the 'if(ptr)' idiom, I
(almost) always write 'if(integral == 0)' explicitly, exactly because
0 is not usually a special value for numbers.
>
> In the end, C++ has taken the route of increasing type safety
> (compared to C). Not quite as much as it should, IMHO---I'd
> like to see implicit conversions of double to int, for example,
> dropped. But the type system is important in C++. And
> explicitly providing a bool in a condition is part of that type
> system; supporting things like "if (p)" is a crack in that type
> system.
>
the crack is not the 'if(p)' itself, but all implicit conversions to
bool, which lose information. If all bool conversions were explicit
and 'if(p)' were considered an explicit bool conversion context (which
makes sense, as 'if' is obviously asking for a boolean question), it
wouldn't be a problem (in fact, IIRC, this is how the current draft
treats 'explicit operator bool()')
--
gpd
[Excellent description of the situation elided...]
> When C++ was young and stdlib was sometimes lifted from
> C the NULL macro sometimes didn't work with C++ hence
> C++ programmers used the unadorned 0 as that always works.
> NULL should be ok now but is avoided for historical
> reasons.
I'm under the impression that the original impetus for using 0
was also the occasional problems with the way NULL was defined.
But today, at least, other motivations exist: NULL can lead to
confusion in cases of function overload or templates:
void f( int ) ;
void f( char* ) ;
f( NULL ) ; // calls f( int ) !!!
People who use a lot of overloaded functions or a lot of
templates tend to prefer 0 to NULL for this reason. People who
overload more reasonably (i.e. who only overload when it doesn't
matter which function is called), and make more limited use of
templates (for whatever reasons) tend to prefer NULL. I'm in
the NULL camp, but I understand the arguments on the other side;
they're not just caught up in an anachronism. (None of those
reasons apply to C, of course, and I don't understand why one
would use 0 instead of NULL in the case of C.)
> Some have suggested that 0 is a special keyword
> that means null pointer...
Which works in C. According to the standard, there is a
conversion involved. In the case of C, I can't think of any
case where it makes a difference, but in C++, it affects
function overload resolution and template type deduction.
Given that both 0 and NULL have problems in C++, the
standardization committee has added a nullptr keyword, which
resolves to a constant expression of a special type, which isn't
an integer. G++ has implemented this for a long time, of
course, although its nullptr (spelled __null) is designed to
behave as an int, except that it generates a warning if it
doesn't get immediately converted to a pointer.
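(To make the overload issue concrete: a small example, assuming a compiler
that already supports the new nullptr keyword.)

    #include <cstddef>

    void f(int)   { }
    void f(char*) { }

    int main()
    {
        f(0);        // calls f(int)
        f(NULL);     // calls f(int) too, or is ambiguous, depending on
                     // how the implementation defines NULL
        f(nullptr);  // calls f(char*): nullptr is not an integer
        return 0;
    }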