
I don't get how the computer arrives at 2^31


Chad

Dec 13, 2005, 3:46:16 AM
The question is related to the following lines of code:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

The output is:

$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8


The value of 'a' (on my machine) is 2147483648 or 2^31. Is this in any
way related to the fact that an int in this case is 32 bits?

Thanks in advance.
Chad

sleb...@gmail.com

Dec 13, 2005, 3:53:20 AM

Try unsigned int. Unless this is simple test code, for the sizes of the
numbers you are handling I'd recommend long long.

Chad

Dec 13, 2005, 4:02:33 AM

Okay, I tried this. I also changed %u to %lu.

#include <stdio.h>
#include <math.h>

int main(void) {

unsigned int a = (unsigned int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

printf("The value of a is: %lu\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

return 0;
}

$./pw
The value of a is: 0
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8

Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.

Chad

mailu...@gmail.com

Dec 13, 2005, 4:07:20 AM
It is happening because the MSB of 'a' is used as the sign bit, so
"4294967296" can be stored only in the remaining bits of 'a'. Hence
the truncation to "2147483648"; i.e., right-shift 4294967296 once and
you get "2147483648".

Jordan Abel

Dec 13, 2005, 4:08:02 AM

%lu is for an unsigned long, not for an unsigned int - it should be %u

> Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.

it's an overflow - neither is big enough to contain 2^32 - the value of
it when it was an int was actually -2^31.

arun

Dec 13, 2005, 4:37:28 AM
The value you got for a is (2^32)/2.
This is because 1 int = 4 bytes = 4*8 = 32 bits.

2^32 = 4294967296. Now this is a signed integer (since a is declared
int), therefore the range is split into two (-ve & +ve),
i.e. 4294967296/2, or (2^32)/2 = 2^31.

arun

Dec 13, 2005, 4:40:22 AM
This is because you used %lu whereas you declared a as only unsigned int,
not unsigned long int.

sleb...@gmail.com

Dec 13, 2005, 4:44:31 AM

Ahh... that just means that on your platform an int is 32 bits:

4294967296 = (binary) 100000000000000000000000000000000
which is 33 bits. So when your computer does its calculation, it will
only store 32 of those 33 bits, resulting in:
(binary, first 1 chopped off) 00000000000000000000000000000000
which is zero.

The largest 32bit number is (2^32)-1. Of course, this number can't be
calculated on a 32bit CPU (unless using long long of course) since the
2^32 part will simply result in a zero.

Another interesting experiment to see if your CPU uses 2s complement
arithmetic:

unsigned int a = (int) -1;
printf("The value of a is: %u\n",a);

Anyway, all this is probably a little OT. And my code above is not
strictly portable. The best advice if you really want to be handling
large numbers is to use long long which will give you at least 64 bits.

sleb...@gmail.com

Dec 13, 2005, 4:53:27 AM
sleb...@yahoo.com wrote:
> Chad wrote:
> > sleb...@yahoo.com wrote:
> > > Chad wrote:
> > > > The question is related to the following lines of code:
> > > > <snip>

> > > > The output is:
> > > >
> > > > $./pw
> > > > The value of a is: 2147483648
> > > > The value of b is: 4294967296
> > > > The value of int is: 4
> > > > The value of double is: 8
> > > >
> > > > The value of 'a' (on my machine) is 2147483648 or 2^31. Is this anyway
> > > > related to the fact that 1 int in this case is 32 bits? I
> > >
> > > Try unsigned int. Unless this is a simple test code, for the sizes the
> > > numbers you are handling I'd recommend long long.
> >
> > Okay, I tried this. I also forgot to use %lu vs %u.
> > <snip>

> > $./pw
> > The value of a is: 0
> > The value of b is: 4294967296
> > The value of int is: 4
> > The value of double is: 8
> >
> > Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.
> >
>
> Ahh... that just means that on your platform an int is 32 bits:
>
> 4294967296 = (binary) 100000000000000000000000000000000
> which is 33 bits. So when your computer does its calculation, it will
> only store 32 of those 33 bits, resulting in:
> (binary, first 1 chopped off) 00000000000000000000000000000000
> which is zero.
>
> The largest 32bit number is (2^32)-1. Of course, this number can't be
> calculated on a 32bit CPU (unless using long long of course) since the
> 2^32 part will simply result in a zero.
>
> Another interesting experiment to see if your CPU uses 2s complement
> arithmetic:
>
> unsigned int a = (int) -1;
> printf("The value of a is: %u\n",a);
>

Forgot to mention, if you get (2^32)-1 which is 4294967295 then your
CPU uses 2's complement math. If instead you get 2^31 which is
2147483648 then your CPU uses a simple integer with sign bit.

Chad

Dec 13, 2005, 5:15:46 AM
> Ahh... that just means that on your platform an int is 32 bits:
>
> 4294967296 = (binary) 100000000000000000000000000000000
> which is 33 bits. So when your computer does its calculation, it will
> only store 32 of those 33 bits, resulting in:
> (binary, first 1 chopped off) 00000000000000000000000000000000
> which is zero.
>
> The largest 32bit number is (2^32)-1. Of course, this number can't be
> calculated on a 32bit CPU (unless using long long of course) since the
> 2^32 part will simply result in a zero.
>
> Another interesting experiment to see if your CPU uses 2s complement
> arithmetic:
>
> unsigned int a = (int) -1;
> printf("The value of a is: %u\n",a);
>
> Anyway, all this is probably a little OT. And my code above is not
> strictly portable. The best advice if you really want to be handling
> large numbers is to use long long which will give you at least 64 bits.

Speaking of long long, I tried this:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);

long long c = 4294967296;

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

printf("The value of c is: %llu\n",c>>1);
}

The output is:
$gcc pw.c -o pw -lm
pw.c: In function `main':
pw.c:8: warning: integer constant is too large for "long" type


$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8

The value of c is: 2147483648
$


Interesting.

Chad

Chad

Dec 13, 2005, 5:22:23 AM

Before I forget: in regards to using %lu vs %u, I need a day or two to
let it sink in.

Chad

Old Wolf

Dec 13, 2005, 5:30:32 PM
Chad wrote:

> The question is related to the following lines of code:
>
> #include <stdio.h>
> #include <math.h>
>
> int main(void) {
>
> int a = (int)pow(2.0 ,32.0);

Undefined behaviour -- the return value from pow() is greater
than INT_MAX .

> printf("The value of a is: %u\n",a);

Undefined behaviour -- %u is for unsigned ints.
Also, %llu or %Lu is for unsigned long longs
(in another message you used it for a signed long long).

> The value of a is: 2147483648

That isn't even a valid int value.

Jordan Abel

Dec 13, 2005, 6:20:53 PM
On 2005-12-13, Old Wolf <old...@inspire.net.nz> wrote:
int a /* = INT_MIN */;

>> printf("The value of a is: %u\n",a);
>
> Undefined behaviour -- %u is for unsigned ints.

it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.

> Also, %llu

yes.

> or %Lu

no.

pete

Dec 13, 2005, 6:52:08 PM
Jordan Abel wrote:
>
> On 2005-12-13, Old Wolf <old...@inspire.net.nz> wrote:
> int a /* = INT_MIN */;
> >> printf("The value of a is: %u\n",a);
> >
> > Undefined behaviour -- %u is for unsigned ints.
>
> it's not clear that it's undefined -

It's undefined because the standard says that

%u is for unsigned ints.

You don't need to be able to think of a failure mechanism,
for undefined code to be undefined.
Undefined behavior makes learning the language simpler.

For the case of
i = 0;
i = i++;
it makes sense to me that the final value of i
could be either 1 or 2,
but that code is not unspecified, it's undefined.

--
pete

Skarmander

Dec 13, 2005, 7:09:18 PM
Jordan Abel wrote:
> On 2005-12-13, Old Wolf <old...@inspire.net.nz> wrote:
> int a /* = INT_MIN */;
>>> printf("The value of a is: %u\n",a);
>> Undefined behaviour -- %u is for unsigned ints.
>
> it's not clear that it's undefined - an unsigned int and a signed int
> have the same size, and it's not clear that int can have any valid
> representations that are trap representations for unsigned.
>
No, this is pretty clear. The type of the argument required for a "%u"
format specifier is "unsigned int". From 7.9.16.1:

"If any argument is not the correct type for the corresponding conversion
specification, the behavior is undefined."

While, for the reasons you mention, most if not all platforms will treat
this as expected, the standard does not explicitly allow it. "int" is not
the correct type.

S.

Jordan Abel

Dec 13, 2005, 7:49:32 PM
On 2005-12-13, pete <pfi...@mindspring.com> wrote:
> Jordan Abel wrote:
>>
>> On 2005-12-13, Old Wolf <old...@inspire.net.nz> wrote:
>> int a /* = INT_MIN */;
>> >> printf("The value of a is: %u\n",a);
>> >
>> > Undefined behaviour -- %u is for unsigned ints.
>>
>> it's not clear that it's undefined -
>
> It's undefined because the standard says that %u is for unsigned ints.

Things are only undefined because the standard says that they are
undefined, not for any other reason. As it happens, this is in fact the
case, 7.19.6.1p9 "If any argument is not the correct type for the
corresponding conversion specification, the behavior is undefined".

However, if we assume that printf is a variadic function implemented
along the lines of the stdarg.h macros, we see 7.15.1.1p2 "...except for
the following cases: - one type is a signed integer type, the other type
is the corresponding unsigned integer type, and the value is
representable in both types" - In this case, the value is not, but "%u
is for unsigned ints" as a blanket statement would seem to be incorrect.
comp.std.c added - do the signed/unsigned exception, and the char*/void*
one, in the va_arg type rules also apply to printf?

Incidentally, my copy of some c89 draft says that %u takes an int and
converts it to unsigned. c99 changes this to take an unsigned int. There
are several possibilities: Perhaps they screwed up (but the c99
rationale does not comment on this), or they thought the difference was
insignificant enough not to matter.

Ben Pfaff

Dec 13, 2005, 8:12:01 PM
Jordan Abel <jma...@purdue.edu> writes:

> Things are only undefined because the standard says that they are
> undefined, not for any other reason.

That is not true. Here is the definition of undefined behavior:

1 undefined behavior
behavior, upon use of a nonportable or erroneous program
construct or of erroneous data, for which this International
Standard imposes no requirements

Anything that the Standard does not define is undefined.
--
Bite me! said C.

Jordan Abel

Dec 13, 2005, 8:28:41 PM

Then why does it go to such pains to declare things to be "undefined
behavior"? the word undefined appears 182 times in c99, and there are
191 points in J.2 [yes, i counted them. by hand. these things should
really be numbered - at least they contain pointers to the sections they
refer to.]

Name one instance of undefined behavior that is not explicitly declared
undefined by the standard?

Alternative hypothesis: the definition you quoted is meant to explain
that when they [explicitly] say something is undefined, that _means_ no
requirements can be inferred from other sections to apply to that
behavior.

Ben Pfaff

Dec 13, 2005, 8:41:05 PM
Jordan Abel <jma...@purdue.edu> writes:

> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
>> Anything that the Standard does not define is undefined.
>
> Then why does it go to such pains to declare things to be "undefined
> behavior"? the word undefined appears 182 times in c99, and there are
> 191 points in J.2 [yes, i counted them. by hand. these things should
> really be numbered - at least they contain pointers to the sections they
> refer to.]

In my opinion, it is important that we have clearly delineated
areas of doubt and uncertainty, where possible.

> Name one instance of undefined behavior that is not explicitly declared
> undefined by the standard?
>
> Alternative hypothesis: the definition you quoted is meant to explain
> that when they [explicitly] say something is undefined, that _means_ no
> requirements can be inferred from other sections to apply to that
> behavior.

There is an easy answer to this question. There are committee
members in comp.std.c. They can answer "yea" or "nay", should
they deign. Based on historical discussion in comp.lang.c, I
have one opinion. You have another.
--
"What is appropriate for the master is not appropriate for the novice.
You must understand the Tao before transcending structure."
--The Tao of Programming

pete

Dec 13, 2005, 9:41:02 PM
Jordan Abel wrote:

> Incidentally, my copy of some c89 draft says that %u takes an int and
> converts it to unsigned. c99 changes this to take an unsigned int.

ISO/IEC 9899: 1990

7.9.6.1 The fprintf function

o, u, x, X The unsigned int argument is converted to

--
pete

Wojtek Lerch

Dec 13, 2005, 9:47:22 PM
"Jordan Abel" <jma...@purdue.edu> wrote in message
news:slrndputfb...@random.yi.org...

> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
>>
>> Anything that the Standard does not define is undefined.
>
> Then why does it go to such pains to declare things to be "undefined
> behavior"? the word undefined appears 182 times in c99, and there are
> 191 points in J.2 [yes, i counted them. by hand. these things should
> really be numbered - at least they contain pointers to the sections they
> refer to.]

4#2 If a "shall" or "shall not" requirement that appears outside of a
constraint is violated, the behavior is undefined. Undefined behavior is
otherwise indicated in this International Standard by the words "undefined
behavior" or by the omission of any explicit definition of behavior. There
is no difference in emphasis among these three; they all describe "behavior
that is undefined".


Jordan Abel

Dec 13, 2005, 10:07:22 PM

I thought the claim i was disputing was that there could be undefined
behavior without the standard making any explicit statement, not that
the explicit statement could be worded some particular other way.

The standard doesn't define the effects of the phase of the moon on the
program - does that mean running a program while the moon is full is
undefined? how about the first quarter?

Ben Pfaff

Dec 13, 2005, 10:17:42 PM
Jordan Abel <jma...@purdue.edu> writes:

> On 2005-12-14, Wojtek Lerch <Wojt...@yahoo.ca> wrote:
>> 4#2 If a "shall" or "shall not" requirement that appears outside of a
>> constraint is violated, the behavior is undefined. Undefined behavior is
>> otherwise indicated in this International Standard by the words "undefined
>> behavior" or by the omission of any explicit definition of behavior. There
>> is no difference in emphasis among these three; they all describe "behavior
>> that is undefined".

Ah, there's the paragraph I was thinking of, but couldn't locate.

> I thought the claim i was disputing was that there could be undefined
> behavior without the standard making any explicit statement, not that
> the explicit statement could be worded some particular other way.

"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior? If
not, then I believe that we are at an impasse--I interpret that
phrase one way, and you do another.
--
"The fact that there is a holy war doesn't mean that one of the sides
doesn't suck - usually both do..."
--Alexander Viro

Jordan Abel

Dec 13, 2005, 10:46:18 PM
On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
> Jordan Abel <jma...@purdue.edu> writes:
>
>> On 2005-12-14, Wojtek Lerch <Wojt...@yahoo.ca> wrote:
>>> 4#2 If a "shall" or "shall not" requirement that appears outside of a
>>> constraint is violated, the behavior is undefined. Undefined behavior is
>>> otherwise indicated in this International Standard by the words "undefined
>>> behavior" or by the omission of any explicit definition of behavior. There
>>> is no difference in emphasis among these three; they all describe "behavior
>>> that is undefined".
>
> Ah, there's the paragraph I was thinking of, but couldn't locate.
>
>> I thought the claim i was disputing was that there could be undefined
>> behavior without the standard making any explicit statement, not that
>> the explicit statement could be worded some particular other way.
>
> "...or by the omission of any explicit definition of behavior"
> does not say that anything not defined is undefined behavior?

there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]

As it happens, a positive signed int is permitted in general for
variadic functions that take an unsigned int [same for an unsigned <
INT_MAX for a signed] - The reason i added comp.std.c, was for the
question of whether this same exception would apply to printf.

Wojtek Lerch

Dec 13, 2005, 11:00:43 PM

"Jordan Abel" <jma...@purdue.edu> wrote in message
news:slrndpv5hc...@random.yi.org...

> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
>> "...or by the omission of any explicit definition of behavior"
>> does not say that anything not defined is undefined behavior?
>
> there's no explicit definition of what effect the phase of the moon has
> on programs [which you not only did not reply to, but snipped.]

If the behaviour of a program or construct is explicitly defined by the
standard, then there's no omission of an explicit definition of behaviour,
even if the definition doesn't mention the phase of the moon.


Ben Pfaff

Dec 13, 2005, 11:08:50 PM
Jordan Abel <jma...@purdue.edu> writes:

> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
>> Jordan Abel <jma...@purdue.edu> writes:
>> "...or by the omission of any explicit definition of behavior"
>> does not say that anything not defined is undefined behavior?
>
> there's no explicit definition of what effect the phase of the moon has
> on programs [which you not only did not reply to, but snipped.]

The standard isn't defining the moon. The behavior of the moon
is indeed undefined by the standard. That doesn't mean that the
behavior of an implementation is dependent on the phase of the
moon.
--
"I should killfile you where you stand, worthless human." --Kaz

Jordan Abel

Dec 14, 2005, 1:50:14 AM
On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
> Jordan Abel <jma...@purdue.edu> writes:
>
>> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
>>> Jordan Abel <jma...@purdue.edu> writes:
>>> "...or by the omission of any explicit definition of behavior"
>>> does not say that anything not defined is undefined behavior?
>>
>> there's no explicit definition of what effect the phase of the moon has
>> on programs [which you not only did not reply to, but snipped.]
>
> The standard isn't defining the moon. The behavior of the moon
> is indeed undefined by the standard. That doesn't mean that the
> behavior of an implementation is dependent on the phase of the
> moon.

But it's not not permitted to be.

regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?

Chuck F.

Dec 14, 2005, 1:15:04 AM
Ben Pfaff wrote:
>
... snip ...

>
> In my opinion, it is important that we have clearly delineated
> areas of doubt and uncertainty, where possible.

That belongs in someone's sig file. You omitted fear.

--
Read about the Sony stealthware that is a security leak, phones
home, and is generally illegal in most parts of the world. Also
the apparent connivance of the various security software firms.
http://www.schneier.com/blog/archives/2005/11/sonys_drm_rootk.html

Richard Heathfield

Dec 14, 2005, 2:22:22 AM
Chuck F. said:

> Ben Pfaff wrote:
>>
> ... snip ...
>>
>> In my opinion, it is important that we have clearly delineated
>> areas of doubt and uncertainty, where possible.
>
> That belongs in someones sig files. You omitted fear.

The original is "rigidly defined areas of doubt and uncertainty", and is
from the "Hitch-hikers' Guide to the Galaxy", which was first broadcast in
1976 IIRC.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)

Ben Pfaff

Dec 14, 2005, 2:42:18 AM
Richard Heathfield <inv...@invalid.invalid> writes:

> Chuck F. said:
>
>> Ben Pfaff wrote:
>>> In my opinion, it is important that we have clearly delineated
>>> areas of doubt and uncertainty, where possible.
>>
>> That belongs in someones sig files. You omitted fear.
>
> The original is "rigidly defined areas of doubt and uncertainty", and is
> from the "Hitch-hikers' Guide to the Galaxy", which was first broadcast in
> 1976 IIRC.

I'm glad that *someone* is paying attention.
--
int main(void){char p[]="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.\
\n",*q="kl BIcNBFr.NKEzjwCIxNJC";int i=sizeof p/2;char *strchr();int putchar(\
);while(*q){i+=strchr(p,*q++)-p;if(i>=(int)sizeof p)i-=sizeof p-1;putchar(p[i]\
);}return 0;}

Tim Rentsch

Dec 14, 2005, 6:55:54 AM
"Old Wolf" <old...@inspire.net.nz> writes:

> Chad wrote:
>
> > The question is related to the following lines of code:
> >
> > #include <stdio.h>
> > #include <math.h>
> >
> > int main(void) {
> >
> > int a = (int)pow(2.0 ,32.0);
>
> Undefined behaviour -- the return value from pow() is greater
> than INT_MAX .

You mean implementation defined, not undefined. ("Implementation
defined" could mean raising an implementation defined signal in
this case, but still implementation defined.)

Tim Rentsch

Dec 14, 2005, 7:13:40 AM
Skarmander <inv...@dontmailme.com> writes:

Presumably you meant 7.19.6.1.

Reading the rules for va_arg in 7.15.1.1, it seems clear that the
Standard intends that an int argument should work for an unsigned int
specifier, if the argument value is representable as an unsigned int.
The way the va_arg rules work make an int argument be "the correct type"
in this case (again, assuming the value is representable as an unsigned
int).

Chad

Dec 14, 2005, 8:15:20 AM
Okay, maybe I'm going a bit off topic here, but I think I'm missing
it. When I do something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;

for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}

printf("The value of c is: %0.0f\n",sum);

return 0;

}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0) would
look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111

The first bit would be signed. This means that the value should be the
sum of:
1*2^0 + 1*2^1..... 1*3^30

Why is the value off by one?

Chad

Chad

Dec 14, 2005, 8:20:54 AM

I mean sum of 1*2^0 + 1*2^1..... 1*2^30.

Chad

Flash Gordon

Dec 14, 2005, 8:20:26 AM
Jordan Abel wrote:
> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
>> Jordan Abel <jma...@purdue.edu> writes:
>>
>>> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
>>>> Jordan Abel <jma...@purdue.edu> writes:
>>>> "...or by the omission of any explicit definition of behavior"
>>>> does not say that anything not defined is undefined behavior?
>>> there's no explicit definition of what effect the phase of the moon has
>>> on programs [which you not only did not reply to, but snipped.]
>> The standard isn't defining the moon. The behavior of the moon
>> is indeed undefined by the standard. That doesn't mean that the
>> behavior of an implementation is dependent on the phase of the
>> moon.
>
> But it's not not permitted to be.

It can indeed depend on the phase of the moon where the standard has not
otherwise defined the behaviour. For example, undefined behaviour may
only make daemons fly out of your nose if the moon is not full, and make
werewolves attack you instead on the full moon.

Equally, all calls to printf could fail on the new moon, because the
standard does not disallow it.

Equally it could change the order of evaluation of parameters depending
on the phase of the moon.

However, the number of bits in a char can't change depending on the
phase of the moon, nor can sizeof(int) nor any of the other items
explicitly defined by the standard or which the standard requires that
an implementation document.

> regardless, the question i meant to ask for comp.std.c is still
> unanswered - does the rule that allows va_arg to accept an unsigned for
> signed and vice versa if it's in the right range, also apply to printf?

I would argue that it is undefined because in the specific case of
printf it is explicitly stated that if the type is different it is
undefined.
make assumptions that code written in C is not allowed to make. Although
I can't think how it could manage to break this.
--
Flash Gordon
Living in interesting times.
Although my email address says spam, it is real and I read it.

Mark McIntyre

Dec 14, 2005, 9:59:03 AM
On Tue, 13 Dec 2005 23:42:18 -0800, in comp.lang.c , Ben Pfaff
<b...@cs.stanford.edu> wrote:

>Richard Heathfield <inv...@invalid.invalid> writes:
>
>> Chuck F. said:
>>
>>> Ben Pfaff wrote:
>>>> In my opinion, it is important that we have clearly delineated
>>>> areas of doubt and uncertainty, where possible.
>>>
>>> That belongs in someones sig files. You omitted fear.
>>
>> The original is "rigidly defined areas of doubt and uncertainty", and is
>> from the "Hitch-hikers' Guide to the Galaxy", which was first broadcast in
>> 1976 IIRC.
>
>I'm glad that *someone* is paying attention.

Though fear would add to our weapons... I'll come in again.


kuy...@wizard.net

Dec 14, 2005, 10:35:28 AM
Jordan Abel wrote:
...

> Things are only undefined because the standard says that they are
> undefined, not for any other reason. ...

As it happens, the standard explicitly states that when there is no
explicit definition of the behavior, it is undefined. There is no part
of the standard anywhere that specifies the behavior of the printf()
family of functions when there is a mis-match between the format code
and the actual promoted type of the arguments.

> ... As it happens, this is in fact the
> case, 7.19.6.1p9 "If any argument is not the correct type for the
> corresponding conversion specification, the behavior is undefined".
>
> However, if we assume that printf is a variadic function implemented
> along the lines of the stdarg.h macros, we see 7.15.1.1p2 "...except for
> the following cases: - one type is a signed integer type, the other type
> is the corresponding unsigned integer type, and the value is
> representable in both types" - In this case, the value is not, but "%u
> is for unsigned ints" as a blanket statement would seem to be incorrect.
> comp.std.c added - do the signed/unsigned exception, and the char*/void*
> one, to va_arg type rules also apply to printf?

The standard doesn't require the use of the <stdarg.h> macros. Whatever
method is used must be interface compatible with those macros,
otherwise suitably cast function pointers couldn't be used to invoke
printf(). However, since the standard specifies that the behavior for
printf() is undefined in this case, that allows it do something other
than, or in addition to, using stdarg.h macros. For instance, the
implementation might provide as an extension some way of querying what
the actual type an argument was, even though the standard provides no
method of doing so. If printf() is implemented using that extension, it
can identify the type mis-match, and because the behavior is explicitly
undefined when there is such a mismatch, it's permitted to do whatever
the implementors want it to do; the most plausible choice would be a
run-time diagnostic sent to stderr; though assert() and abort() are
other reasonable options.

> Incidentally, my copy of some c89 draft says that %u takes an int and
> converts it to unsigned. c99 changes this to take an unsigned int. There
> are several possibilities: Perhaps they screwed up (but the c99
> rationale does not comment on this), or they thought the difference was
> insignificant enough not to matter.

I think that it was felt that the difference was important, and
desirable, but not worthy of an explicit mention in the Rationale. Do
you think the C99 specification is undesirable?

kuy...@wizard.net

Dec 14, 2005, 10:42:22 AM
Jordan Abel wrote:
> On 2005-12-14, Wojtek Lerch <Wojt...@yahoo.ca> wrote:
...

> > 4#2 If a "shall" or "shall not" requirement that appears outside of a
> > constraint is violated, the behavior is undefined. Undefined behavior is
> > otherwise indicated in this International Standard by the words "undefined
> > behavior" or by the omission of any explicit definition of behavior. There
> > is no difference in emphasis among these three; they all describe "behavior
> > that is undefined".
>
> I thought the claim i was disputing was that there could be undefined
> behavior without the standard making any explicit statement, not that
> the explicit statement could be worded some particular other way.

Well, the "particular other way" being referred to in this case is "by
the omission of any explicit definition of behavior". That seems to
fit "without the standard making any explicit statement", as far as I
can see.

> The standard doesn't define the effects of the phase of the moon on the
> program - does that mean running a program while the moon is full is
> undefined? how about the first quarter?

The behavior of some C programs is defined by the standard, regardless
of the phase of the moon, so they must have that behavior. However, it
is indeed permitted for the phase of the moon to affect any aspect of
the behavior that is NOT specified by the standard. For instance, it's
permitted to affect the speed with which computations are carried out.
Whenever the behavior is implementation-defined, it is permissible for
the implementation to define the behavior as depending upon the phase
of the moon.

The behavior of printf() is defined only for those cases where the
types match. The standard nowhere defines what they do when there's a
mismatch.

Wojtek Lerch

unread,
Dec 14, 2005, 10:45:08 AM12/14/05
to
"Jordan Abel" <jma...@purdue.edu> wrote in message
news:slrndpvga9...@random.yi.org...

> regardless, the question i meant to ask for comp.std.c is still
> unanswered - does the rule that allows va_arg to accept an unsigned for
> signed and vice versa if it's in the right range, also apply to printf?

There was a discussion about that here in comp.std.c a while ago. The
bottom line is, I think, that it's pretty clear that the intention was to
allow it, but there's no agreement about whether the normative text actually
says it:

http://groups.google.com/group/comp.std.c/browse_frm/thread/22ec94bc9cf05460/216d6f482bc2a6fa


kuy...@wizard.net

unread,
Dec 14, 2005, 10:50:11 AM12/14/05
to

Jordan Abel wrote:
> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
> > Jordan Abel <jma...@purdue.edu> writes:
> >
> >> On 2005-12-14, Ben Pfaff <b...@cs.stanford.edu> wrote:
> >>> Jordan Abel <jma...@purdue.edu> writes:
> >>> "...or by the omission of any explicit definition of behavior"
> >>> does not say that anything not defined is undefined behavior?
> >>
> >> there's no explicit definition of what effect the phase of the moon has
> >> on programs [which you not only did not reply to, but snipped.]
> >
> > The standard isn't defining the moon. The behavior of the moon
> > is indeed undefined by the standard. That doesn't mean that the
> > behavior of an implementation is dependent on the phase of the
> > moon.
>
> But it's not not permitted to be.

Actually, it is. The standard nowhere specifies how fast a program must
execute; therefore it's permissible for a program's processing speed to
depend upon the phase of the moon. The order of evaluation of f() and
g() in the expression f()+g() is unspecified, so an implementation is
allowed to use a different order depending upon the phase of the moon.

> regardless, the question i meant to ask for comp.std.c is still
> unanswered - does the rule that allows va_arg to accept an unsigned for
> signed and vice versa if it's in the right range, also apply to printf?

No. printf() isn't required to use va_arg(). It must use something that
is binary-compatible with it, to permit separate compilation of code
that accesses printf() only indirectly, through a pointer. However,
whatever method it uses might have additional capabilities beyond those
defined by the standard for <stdarg.h>, capabilities not available to
strictly conforming user code.

Skarmander

unread,
Dec 14, 2005, 10:52:41 AM12/14/05
to
Tim Rentsch wrote:
> Skarmander <inv...@dontmailme.com> writes:
>
>> Jordan Abel wrote:
>>> On 2005-12-13, Old Wolf <old...@inspire.net.nz> wrote:
>>> int a /* = INT_MIN */;
>>>>> printf("The value of a is: %u\n",a);
>>>> Undefined behaviour -- %u is for unsigned ints.
>>> it's not clear that it's undefined - an unsigned int and a signed int
>>> have the same size, and it's not clear that int can have any valid
>>> representations that are trap representations for unsigned.
>>>
>> No, this is pretty clear. The type of the argument required for a "%u"
>> format specifier is "unsigned int". From 7.9.16.1:
>>
>> "If any argument is not the correct type for the corresponding conversion
>> specification, the behavior is undefined."
>>
>> While, for the reasons you mention, most if not all platforms will treat
>> this as expected, the standard does not explicitly allow it. "int" is not
>> the correct type.
>
> Presumably you meant 7.19.6.1.
>
Yes. Neat little shift on the 1 there.

> Reading the rules for va_arg in 7.15.1.1, it seems clear that the
> Standard intends that an int argument should work for an unsigned int
> specifier, if the argument value is representable as an unsigned int.
> The way the va_arg rules work make an int argument be "the correct type"
> in this case (again, assuming the value is representable as an unsigned
> int).
>

That's a very reasonable interpretation, though the standard should arguably
be clarified at this point with a footnote if the intent is to treat
printf() as "just another va_arg-using function" in this regard.

S.

Tim Rentsch

unread,
Dec 14, 2005, 11:43:20 AM12/14/05
to
Jordan Abel <jma...@purdue.edu> writes:

[snip]


>
> regardless, the question i meant to ask for comp.std.c is still
> unanswered - does the rule that allows va_arg to accept an unsigned for
> signed and vice versa if it's in the right range, also apply to printf?

I believe that's the most sensible interpretation, but to get
an authoritative answer rather than just statements of opinion
probably the best thing to do is submit a Defect Report.

Tim Rentsch

unread,
Dec 14, 2005, 11:53:11 AM12/14/05
to
Flash Gordon <sp...@flash-gordon.me.uk> writes:

> Jordan Abel wrote:
[snip]


> > regardless, the question i meant to ask for comp.std.c is still
> > unanswered - does the rule that allows va_arg to accept an unsigned for
> > signed and vice versa if it's in the right range, also apply to printf?
>
> I would argue that it is undefined because in the specific case of
> printf it is explicitly stated that if the type is different it is
> undefined. After all, printf might not be implemented in C and so might
> make assumptions that code written in C is not allowed to make. Although
> I can't think how it could manage to break this.

The Standard doesn't say that if the type is different then it's
undefined; what it does say is that if the argument is not of the
correct type then it's undefined. Absent an explicit indication to
the contrary, the most sensible interpretation of "the correct type"
would (IMO) be "the correct type after taking into account the rules
for function argument transmission". Of course, other interpretations
are possible; I just don't find any evidence to support the theory
that any other interpretation is what the Standard intends.

And, as I said in another response, the best way to get an
authoritative statement on the matter is to submit a Defect
Report.

Tim Rentsch

unread,
Dec 14, 2005, 11:55:02 AM12/14/05
to
Skarmander <inv...@dontmailme.com> writes:

Yes, I agree the wording in the Standard needs clarifying here.

Tim Rentsch

unread,
Dec 14, 2005, 12:01:58 PM12/14/05
to
kuy...@wizard.net writes:

[snip]


> The behavior of printf() is defined only for those cases where the
> types match. The standard nowhere defines what they do when there's a
> mismatch.

It doesn't say "where the types match", it says when an argument is
not of the correct type. Since the phrase "of the correct type" isn't
given any specific definition, the most sensible interpretation is
"the correct type after taking into account other rules for function
argument transmission". There isn't any evidence to support your
theory that the Standard intends anything else here. There is,
however, evidence to support the theory that it intends int's to
be usable as unsigned int's (obviously provided that the argument
values are suitable).

Antoine Leca

unread,
Dec 14, 2005, 1:17:54 PM12/14/05
to
In news:slrndputfb...@random.yi.org, Jordan Abel wrote:

> 191 points in J.2 [yes, i counted them. by hand. these things should
> really be numbered]

What a great idea! This way, every time a technical corrigendum introduced a
new explicit case of undefined behaviour, which would presumably have to be
inserted in its proper place in the J.2 list, the corrigendum would have to
re-list the whole rest of annex J.2 with new numbers.
As a result, those numbers would instantly become useless (since it would be
a great pain to track constantly changing numbers).

If you need a numerical gimmick to designate them, just use the
subclause&paragraph numbers (chapters and verses), as everybody does.


If it is just to help you counting them, assuming you are not proficient in
the use of sed/awk over Nxxxx.txt, just ask Larry, he does.


Antoine

tedu

unread,
Dec 14, 2005, 1:26:41 PM12/14/05
to
Chad wrote:

> Speaking of long long, I tried this:


>
> #include <stdio.h>
> #include <math.h>
>
> int main(void) {
>

> int a = (int)pow(2.0 ,32.0);
> double b = pow(2.0 , 32.0);
> long long c = 4294967296;


>
> printf("The value of a is: %u\n",a);

> printf("The value of b is: %0.0f\n",b);
> printf("The value of int is: %d\n", sizeof(int));
> printf("The value of double is: %d\n", sizeof(double));
>
> printf("The value of c is: %llu\n",c>>1);
> }
>
> The output is:
> $gcc pw.c -o pw -lm
> pw.c: In function `main':
> pw.c:8: warning: integer constant is too large for "long" type
> $./pw
> The value of a is: 2147483648
> The value of b is: 4294967296
> The value of int is: 4
> The value of double is: 8
> The value of c is: 2147483648
> $

try long long c = 4294967296LL;

Keith Thompson

unread,
Dec 14, 2005, 1:41:37 PM12/14/05
to

No, it's undefined.

C99 6.3.1.4p1:

When a finite value of real floating type is converted to an
integer type other than _Bool, the fractional part is discarded
(i.e., the value is truncated toward zero). If the value of the
integral part cannot be represented by the integer type, the
behavior is undefined.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

Keith Thompson

unread,
Dec 14, 2005, 1:52:09 PM12/14/05
to
"Chad" <cda...@gmail.com> writes:
> Okay, maybe I'm going a bit off topic here, but, I think I'm missing
> it. When I go something like:
>
> #include <stdio.h>
> #include <math.h>
>
> int main(void) {
> int i = 0;
> double sum = 0;
>
> for (i = 0; i <= 30; i++) {
> sum = pow(2.0, i) + sum;
> }
>
> printf("The value of c is: %0.0f\n",sum);
>
> return 0;
>
> }
>
> The output is:
> $./pw
> The value of c is: 2147483647 (not 2147483648).
>
> The way I understood this was that for 32 bits, pow(2.0, 31.0) would
> look something like the following:
>
> 1111 1111 1111 1111 1111 1111 1111 1111

No. The pow() function returns a result of type double; specifically,
it's 2147483648.0. In the code above, nothing is converted to any
32-bit integer type; it's all double, so it doesn't make much sense to
talk about the binary representation.

> The first bit would be signed. This means that the value should be the
> sum of:
> 1*2^0 + 1*2^1 + ... + 1*2^30
>
> Why is the value off by one?

The sign bit doesn't enter into this. Floating-point types do
typically have a sign bit, but all the values you're dealing with here
are representable, so all the computed values will match the
mathematical results.

You're computing

1.0 + 2.0 + 4.0 + 8.0 + ... + 1073741824.0

The result is 2147483647.0.

This might be a little clearer if you use a "%0.1f" format. The
"%0.0f" format makes the numbers look like integers; "%0.1f" makes it
clear that they're floating-point.

Flash Gordon

unread,
Dec 14, 2005, 1:39:00 PM12/14/05
to
Tim Rentsch wrote:
> Flash Gordon <sp...@flash-gordon.me.uk> writes:
>
>> Jordan Abel wrote:
> [snip]
>>> regardless, the question i meant to ask for comp.std.c is still
>>> unanswered - does the rule that allows va_arg to accept an unsigned for
>>> signed and vice versa if it's in the right range, also apply to printf?
>> I would argue that it is undefined because in the specific case of
>> printf it is explicitly stated that if the type is different it is
>> undefined. After all, printf might not be implemented in C and so might
>> make assumptions that code written in C is not allowed to make. Although
>> I can't think how it could manage to break this.
>
> The Standard doesn't say that if the type is different then it's
> undefined; what it does say is that if the argument is not of the
> correct type then it's undefined. Absent an explicit indication to
> the contrary, the most sensible interpretation of "the correct type"
> would (IMO) be "the correct type after taking into account the rules
> for function argument transmission".

I agree that is a reasonable position.

> Of course, other interpretations
> are possible; I just don't find any evidence to support the theory
> that any other interpretation is what the Standard intends.

I'm just an awkward sod sometimes ;-)

> And, as I said in another response, the best way to get an
> authoritative statement on the matter is to submit a Defect
> Report.

I'm not bothered enough by this one.

lawrenc...@ugs.com

unread,
Dec 14, 2005, 6:13:01 PM12/14/05
to
In comp.std.c Tim Rentsch <t...@alumnus.caltech.edu> wrote:
>
> The Standard doesn't say that if the type is different then it's
> undefined; what it does say is that if the argument is not of the
> correct type then it's undefined. Absent an explicit indication to
> the contrary, the most sensible interpretation of "the correct type"
> would (IMO) be "the correct type after taking into account the rules
> for function argument transmission".

Indeed. The difficulty of specifying the rules precisely is why the
committee weaseled out and used the fuzzy term "correct" instead of more
explicit language.

-Larry Jones

At times like these, all Mom can think of is how long she was in
labor with me. -- Calvin

Jordan Abel

unread,
Dec 14, 2005, 6:41:09 PM12/14/05
to
On 2005-12-14, kuy...@wizard.net <kuy...@wizard.net> wrote:
> No. printf() isn't required to use va_arg(). It must use something that
> is binary-compatible with it, to permit seperate compilation of code
> that accesses printf() only indirectly, through a pointer. However,
> whatever method it uses might have additional capabilities beyond those
> defined by the standard for <stdarg.h>, capabilities not available to
> strictly conforming user code.

However, it would be reasonable to think that the compatibility between
signed and unsigned integers where they have the same value is a
required part of the binary interface of variadic functions.

Chuck F.

unread,
Dec 14, 2005, 5:11:52 PM12/14/05
to
Chad wrote:
>
> Okay, maybe I'm going a bit off topic here, but, I think I'm
> missing it. When I go something like:
>
> #include <stdio.h>
> #include <math.h>
>
> int main(void) {
> int i = 0;
> double sum = 0;

Why initialize these, when the initial values will never be used?

>
> for (i = 0; i <= 30; i++) {
> sum = pow(2.0, i) + sum;
> }
> printf("The value of c is: %0.0f\n",sum);
> return 0;
> }
>
> The output is:
> $./pw
> The value of c is: 2147483647 (not 2147483648).
>
> The way I understood this was that for 32 bits, pow(2.0, 31.0)
> would look something like the following:
>
> 1111 1111 1111 1111 1111 1111 1111 1111
>
> The first bit would be signed. This means that the value should
> be the sum of:
> 1*2^0 + 1*2^1 + ... + 1*2^30
>
> Why is the value off by one?

Because a double is not an integral object. It expresses an
approximation to a value, and the printf format has truncated the
value. Just change the format to "%f\n" to see the difference.

--
Read about the Sony stealthware that is a security leak, phones
home, and is generally illegal in most parts of the world. Also
the apparent connivance of the various security software firms.
http://www.schneier.com/blog/archives/2005/11/sonys_drm_rootk.html

Keith Thompson

unread,
Dec 14, 2005, 9:05:38 PM12/14/05
to
"Chuck F. " <cbfal...@yahoo.com> writes:
> Chad wrote:
> >
>> Okay, maybe I'm going a bit off topic here, but, I think I'm
>> missing it. When I go something like:
>> #include <stdio.h>
>> #include <math.h>
>> int main(void) {
>> int i = 0;
>> double sum = 0;
>
> Why initialize these, when the initial values will never be used?
>
>> for (i = 0; i <= 30; i++) {
>> sum = pow(2.0, i) + sum;
>> }
>> printf("The value of c is: %0.0f\n",sum);
>> return 0;
>> }

These? The initial value of i isn't used; the initial value of sum is.

Wojtek Lerch

unread,
Dec 14, 2005, 10:09:18 PM12/14/05
to
"Jordan Abel" <jma...@purdue.edu> wrote in message
news:slrndq1bhr...@random.yi.org...

> However, it would be reasonable to think that the compatibility between
> signed and unsigned integers where they have the same value is a
> required part of the binary interface of variadic functions.

It would be unreasonable to think that that wasn't the intention, because
footnote 31 clearly says it was; but would it be reasonable to deny that the
normative text lacks clear words that state that requirement, either
directly or by mentioning va_arg(), either in the description of the
function call operator or in the description of fprintf()? Is it really
reasonable to believe that it's clear enough that a requirement explicitly
stated in the description of one interface (the argument to printf() must
have the correct type) is overriden by a promise in the description in a
different interface (it's OK for the argument to va_arg() to have a slightly
different type), even though the two descriptions don't have any references
to each other?

Think about a subset of C defined by what remains from C99 after removing
footnote 31, all the contents of <stdarg.h>, and the few functions from
<stdio.h> that take a va_list argument. This would significantly reduce the
usefulness of variadic functions defined in programs; but would it change
the semantics of printf()? Do you think it would still be reasonable to
believe that this modified C required printf() to tolerate mixing signed
with unsigned?


Tim Rentsch

unread,
Dec 14, 2005, 11:53:39 PM12/14/05
to
Keith Thompson <ks...@mib.org> writes:

> Tim Rentsch <t...@alumnus.caltech.edu> writes:
> > "Old Wolf" <old...@inspire.net.nz> writes:
> >> Chad wrote:
> >> > The question is related to the following lines of code:
> >> >
> >> > #include <stdio.h>
> >> > #include <math.h>
> >> >
> >> > int main(void) {
> >> >
> >> > int a = (int)pow(2.0 ,32.0);
> >>
> >> Undefined behaviour -- the return value from pow() is greater
> >> than INT_MAX .
> >
> > You mean implementation defined, not undefined. ("Implementation
> > defined" could mean raising an implementation defined signal in
> > this case, but still implmentation defined.)
>
> No, it's undefined.
>
> C99 6.3.1.4p1:
>
> When a finite value of real floating type is converted to an
> integer type other than _Bool, the fractional part is discarded
> (i.e., the value is truncated toward zero). If the value of the
> integral part cannot be represented by the integer type, the
> behavior is undefined.

You're absolutely right. Thank you for the correction.

Richard Heathfield

unread,
Dec 15, 2005, 12:43:13 AM12/15/05
to
Chuck F. said:

> Chad wrote:
>>
>> int main(void) {
>> int i = 0;
>> double sum = 0;
>
> Why initialize these, when the initial values will never be used?

I have read Keith's comment, but I'll address the question as if I had not
noticed it. I, personally, give objects a known, determinate initial value
when defining them because I think it makes a program easier to debug.
Twice now I've let indeterminate values screw up a production environment
under conditions that didn't occur in testing (which is a good indication
that neither the programming nor the testing were up to scratch). Twice is
twice too many. I'm not going to let that happen again.

And now add in Keith's comment. Since the value of sum given above /was/
used, to remove it arbitrarily (as some people may well have been tempted
to do if maintaining the code) after a brief perusal of the code had
*apparently* indicated that it was not used would have introduced a bug
that may well not have been spotted in testing.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)

Chuck F.

unread,
Dec 15, 2005, 3:30:58 AM12/15/05
to
Keith Thompson wrote:
> "Chuck F. " <cbfal...@yahoo.com> writes:
>> Chad wrote:
>>>
>>> Okay, maybe I'm going a bit off topic here, but, I think I'm
>>> missing it. When I go something like:
>>> include <stdio.h>
>>> #include <math.h>
>>> int main(void) {
>>> int i = 0;
>>> double sum = 0;
>>
>> Why initialize these, when the initial values will never be used?
>>
>>> for (i = 0; i <= 30; i++) {
>>> sum = pow(2.0, i) + sum;
>>> }
>>> printf("The value of c is: %0.0f\n",sum);
>>> return 0;
>>> }
>
> These? The initial value of i isn't used; the initial value of
> sum is.

Mea Culpa. However I consider the proper initialization point is
in the for statement, i.e. "for (i = 0, sum = 0; ...)". In other
words as close as possible to the point of use.

kuy...@wizard.net

unread,
Dec 15, 2005, 10:42:58 AM12/15/05
to
Chuck F. wrote:
> Chad wrote:
> >
> > Okay, maybe I'm going a bit off topic here, but, I think I'm
> > missing it. When I go something like:
> >
> > #include <stdio.h>
> > #include <math.h>
> >
> > int main(void) {
> > int i = 0;
> > double sum = 0;
>
> Why initialize these, when the initial values will never be used?

The initial value of 'i' isn't used, but the initial value of 'sum'
certainly is.

I personally wouldn't intialize 'i', but some people argue that doing
so is a safety measure. In my personal experience, this "safety"
measure frequently prevents the symptoms of incorrectly written code
from being serious enough to be noticed, which in my opinion is a bad
thing. If my code uses the value of an object before that object has
been given the value it was supposed to have at that point, I'd greatly
prefer it if the value it uses is one that's likely to make my program
fail in an easily noticeable way. 0 is often not such a value. An
indeterminate one is more likely to produce noticeable symptoms. A
well-chosen specific initializer could be even better, except for the
fact that it gives the incorrect impression that the initializing value
was intended to be used. This can be fixed by adding in a comment:

int i = INT_MAX; /* intended to guarantee problems if the
                    program incorrectly uses this value */

But I prefer the simplicity of:

int i;

> > for (i = 0; i <= 30; i++) {
> > sum = pow(2.0, i) + sum;
> > }
> > printf("The value of c is: %0.0f\n",sum);
> > return 0;
> > }
> >
> > The output is:
> > $./pw
> > The value of c is: 2147483647 (not 2147483648).
> >
> > The way I understood this was that for 32 bits, pow(2.0, 31.0)
> > would look something like the following:
> >
> > 1111 1111 1111 1111 1111 1111 1111 1111
> >
> > The first bit would be signed. This means that the value should
> > be the sum of:
> > 1*2^0 + 1*2^1 + ... + 1*2^30
> >
> > Why is the value off by one?
>
> Because a double is not an integral object. It expresses an
> approximation to a value, and the printf format has truncated the
> value. Just change the format to "%f\n" to see the difference.

Did you try that? I think you'll be surprised by the results. Just to
make things clearer, you might try using long double, and a LOT of
extra digits after the decimal point. If you've got a fully conforming
C99 implementation, it would be even clearer if you write it out in
hexadecimal floating point format.
Hint: it's not the program that's giving the wrong value for the sum of
this series.

Keith Thompson

unread,
Dec 15, 2005, 1:19:21 PM12/15/05
to
"Chuck F. " <cbfal...@yahoo.com> writes:
> Keith Thompson wrote:
[snip]

>> These? The initial value of i isn't used; the initial value of
>> sum is.
>
> Mea Culpa. However I consider the proper initialization point is in
> the for statement, i.e. "for (i = 0, sum = 0; ...)". In other words
> as close as possible to the point of use.

I don't know that I'd want to squeeze the initializations of both i
and sum into the for loop, but it's certainly an option.

In C99, you can declare i at its point of first use:

for (int i = 0; i <= 30; i ++) { ... }

Douglas A. Gwyn

unread,
Dec 15, 2005, 2:07:22 PM12/15/05
to
Jordan Abel wrote:

> On 2005-12-14, Wojtek Lerch <Wojt...@yahoo.ca> wrote:
> > 4#2 If a "shall" or "shall not" requirement that appears outside of a
> > constraint is violated, the behavior is undefined. Undefined behavior is
> > otherwise indicated in this International Standard by the words "undefined
> > behavior" or by the omission of any explicit definition of behavior. There
> > is no difference in emphasis among these three; they all describe "behavior
> > that is undefined".
> I thought the claim i was disputing was that there could be undefined
> behavior without the standard making any explicit statement, not that
> the explicit statement could be worded some particular other way.

The words that Wojtek just cited clearly state that one form
of undefined behavior occurs when the standard says nothing
at all about the behavior.

We tried to explicitly call out cases of undefined behavior
that are important to know about.

> The standard doesn't define the effects of the phase of the moon on the
> program - does that mean running a program while the moon is full is
> undefined? how about the first quarter?

The behavior of the moon is undefined according to the C
standard. Fortunately it is not something that a C
programmer or implementor needs to know.

Douglas A. Gwyn

unread,
Dec 15, 2005, 2:08:21 PM12/15/05
to
Jordan Abel wrote:
> As it happens, a positive signed int is permitted in general for
> variadic functions that take an unsigned int [same for an unsigned <
> INT_MAX for a signed] - The reason i added comp.std.c, was for the
> question of whether this same exception would apply to printf.

Yes, the interface semantics for printf are the same as for
any other variadic function.

Wojtek Lerch

unread,
Dec 15, 2005, 3:45:34 PM12/15/05
to
"Douglas A. Gwyn" <DAG...@null.net> wrote in message
news:43A1BF25...@null.net...

No, the semantics for any call to printf() are defined by the specifications
of the printf() function and the function call operator. The specification
of printf() refers to the promoted values and types of the arguments of the
function call. It does not talk about the type named in any invocation of
the va_arg() macro or refer to va_arg() in any other way. Even if the
entire contents of <stdarg.h> were removed from the standard, the
description of printf() would still make sense and be useful. And I don't
see any reason to doubt that it would define the same semantics.

The semantics for any non-standard variadic function are defined by various
parts of the standard, depending on details of the C code that defines the
function. If a variadic function completely ignores its variadic arguments,
programs are free to use any types of arguments in calls to that function;
but that doesn't imply that the same freedom applies to calls to printf(),
does it?

If some other variadic function uses va_arg() to fetch the values of the
arguments, it's the program's responsibility to ensure that the requirements
of va_arg() are satisfied. In particular, va_arg() requires that the type
named by its second argument must be compatible with the promoted type of
the corresponding argument of the function call, or one must be the signed
version of the other, or one must be a pointer to void and the other a
pointer to a character type. That's a *restriction* that the standard
places on programs that use va_arg(). It specifically refers to va_arg().
It is *not* a general promise that it's always OK to mix signed with
unsigned or void pointers with char pointers where the standard says that
some two types must be compatible, such as in the description of printf().

Douglas A. Gwyn

unread,
Dec 15, 2005, 10:43:46 PM12/15/05
to
Wojtek Lerch wrote:
> ... It does not talk about the type named in any invocation of

> the va_arg() macro or refer to va_arg() in any other way.

I didn't say that it did.
Signed and unsigned varieties of an integer type are punnable
(for nonnegative values) in the context of function arguments,
and a variadic function cannot determine which type was actually
used in the invocation.

> ... If a variadic function completely ignores its variadic arguments,


> programs are free to use any types of arguments in calls to that function;
> but that doesn't imply that the same freedom applies to calls to printf(),
> does it?

Yes, printf can be supplied with unused arguments, and it
often is (not always intentionally).

> It is *not* a general promise that it's always OK to mix signed with
> unsigned or void pointers with char pointers where the standard says that
> some two types must be compatible, such as in the description of printf().

I don't recall the fprintf spec requiring compatible types.

Wojtek Lerch

unread,
Dec 16, 2005, 12:31:09 AM12/16/05
to
"Douglas A. Gwyn" <DAG...@null.net> wrote in message
news:43A237F2...@null.net...

> Wojtek Lerch wrote:
>> ... It does not talk about the type named in any invocation of
>> the va_arg() macro or refer to va_arg() in any other way.
>
> I didn't say that it did.
> Signed and unsigned varieties of an integer type are punnable
> (for nonnegative values) in the context of function arguments,

They're intended to be, according to footnote 31; but AFAIK, there's no
normative text that actually makes that promise; is there?

> and a variadic function cannot determine which type was actually
> used in the invocation.

But of course it can, on any implementation that has a suitable extension to
allow it. C doesn't require printf() to be implemented as strictly
conforming C, does it?

>> ... If a variadic function completely ignores its variadic arguments,
>> programs are free to use any types of arguments in calls to that
>> function;
>> but that doesn't imply that the same freedom applies to calls to
>> printf(),
>> does it?
>
> Yes, printf can be supplied with unused arguments, and it
> often is (not always intentionally).

I was talking about the *complete* freedom to pass whatever you want to the
variadic arguments, regardless of the values you pass to the non-variadic
arguments. Some variadic functions allow that, but printf() does not.

>> It is *not* a general promise that it's always OK to mix signed with
>> unsigned or void pointers with char pointers where the standard says that
>> some two types must be compatible, such as in the description of
>> printf().
>
> I don't recall the fprintf spec requiring compatible types.

No, it requires "the correct type" (7.19.6.1#9); my mistake (I copied
"compatible" from va_arg()). That sounds even more restrictive, doesn't it?
Or do you mean that it's clear enough that "the correct type" is just a
shorter way of saying "one of the correct types, as explained in the
description of va_arg(), assuming that the type specified by 7.19.6.1#7 and
8 is used as the second argument to va_arg()"?


kuy...@wizard.net

unread,
Dec 16, 2005, 7:23:13 AM12/16/05
to
Douglas A. Gwyn wrote:
> Wojtek Lerch wrote:
> > ... It does not talk about the type named in any invocation of
> > the va_arg() macro or refer to va_arg() in any other way.
>
> I didn't say that it did.
> Signed and unsigned varieties of an integer type are punnable
> (for nonnegative values) in the context of function arguments,
> and a variadic function cannot determine which type was actually
> used in the invocation.

It can't determine the type by using only those mechanism which are
defined by the standard; however, a conforming implementation can
provide as an extension additional functionality that does allow such
determination, and it's legal for printf() to be implemented in a way
that makes use of such an extension.

Until such time as the committee chooses to replace "are meant to
imply" with "guarantees", and moves the text into the normative portion
of the standard, a conforming implementation can have different types
that are not interchangeable, despite the fact that they are required
to (and do) have the same representation and same alignment. Such an
implementation would be contrary to the explicitly expressed intent of
the committee, but that doesn't prevent it from being conforming, since
the actual requirements of the standard don't implement that intent.

Douglas A. Gwyn

unread,
Dec 16, 2005, 7:17:27 PM12/16/05
to
Wojtek Lerch wrote:
> ... C doesn't require printf() to be implemented as strictly
> conforming C, does it?

However, the specification is based on C function semantics,
and the prototype with the ,...) notation is even part of the
spec, so we know what the linkage interface has to be like.

> I was talking about the *complete* freedom to pass whatever you want to the
> variadic arguments, regardless of the values you pass to the non-variadic
> arguments. Some variadic functions allow that, but printf() does not.

That doesn't seem relevant. Obviously any specific function
has some restriction on its arguments, based on the definition
for the function. In the case of printf the arguments have to
match up well enough with the format so that the proper values
can be fetched for formatting.

> No, it requires "the correct type" ...


> That sounds even more restrictive, doesn't it?

What is "correct" has to be determined in other ways.

Douglas A. Gwyn

unread,
Dec 16, 2005, 7:21:37 PM12/16/05
to
kuy...@wizard.net wrote:
> ... Such an implementation would be contrary to the explicitly

> expressed intent of the committee, but that doesn't prevent it
> from being conforming, since the actual requirements of the
> standard don't implement that intent.

That's a spuriously legalistic notion. The C Standard uses a
variety of methods to convey the intent, including examples and
explanatory footnotes. It is evident from this thread that the
actual requirement is not certain (for some readers) from the
normative text alone, but can be clarified by referring to the
footnote that explains what the normative text means.

Wojtek Lerch

unread,
Dec 16, 2005, 10:37:28 PM12/16/05
to
"Douglas A. Gwyn" <DAG...@null.net> wrote in message
news:43A35917...@null.net...

> Wojtek Lerch wrote:
>> ... C doesn't require printf() to be implemented as strictly
>> conforming C, does it?
>
> However, the specification is based on C function semantics,
> and the prototype with the ,...) notation is even part of the
> spec, so we know what the linkage interface has to be like.

The "linkage interface"?  What is that, in standardese?  What exactly does
the standard say about it?  How would it be affected if <stdarg.h> were
removed from the standard? Is it really something that the standard
requires to exist, or is it merely a mechanism that compilers commonly use
to implement the required semantics?

Neither the specification of printf() nor the description of function
semantics refers to a "linkage interface".  The standard defines
semantics of printf() in terms of the promoted type of the argument. It
says, for instance, that the %u format requires an argument with type
unsigned int, and that if the argument doesn't have the "correct" type, the
behaviour is undefined. There's no hint anywhere that there might actually
be two different correct types for the %u format. There's no hint anywhere
that it's the description of va_arg() that defines what the set of "correct"
types is. There's no hint anywhere that removing <stdarg.h> from the
language could possibly affect the set of "correct" argument types for %u.

Or are all those hints actually there, and I just managed to miss them?

>> I was talking about the *complete* freedom to pass whatever you want to
>> the
>> variadic arguments, regardless of the values you pass to the non-variadic
>> arguments. Some variadic functions allow that, but printf() does not.
>
> That doesn't seem relevant. Obviously any specific function
> has some restriction on its arguments, based on the definition
> for the function.

It was just an illustration of the simple fact that what the restrictions on
the arguments are depends on how the semantics of the function are defined.
If the function doesn't use va_arg() or anything else to get the values,
there's no restriction whatsoever. If the function uses
va_arg(ap,unsigned), the restriction is as described for va_arg(): the
argument must be an unsigned int or a non-negative signed int, or else the
behaviour is undefined. If the function is printf() and the format
specifier is %u, the restriction is as described for printf(): the argument
should be an unsigned int and if it doesn't have the correct type, the
behaviour is undefined.

> In the case of printf the arguments have to
> match up well enough with the format so that the proper values
> can be fetched for formatting.

Yes, but how well is well enough? The standard doesn't say that they're
fetched via va_arg() (or "as if" via va_arg()), only that they must have
"the correct type" (notice the singular -- it doesn't say "one of the
correct types"), and names one type for each format specifier. It doesn't
say anything like "a type that has the same representation and alignment
requirements as the specified type", either.

>> No, it requires "the correct type" ...
>> That sounds even more restrictive, doesn't it?
>
> What is "correct" has to be determined in other ways.

Other than by looking it up in the description of printf()?


Wojtek Lerch

unread,
Dec 16, 2005, 10:48:20 PM12/16/05
to
"Douglas A. Gwyn" <DAG...@null.net> wrote in message
news:43A35A11...@null.net...

I know of two places in the normative text that describe situations where a
signed type and the corresponding unsigned type are interchangeable as
arguments to functions, and those two places are quite clear already:
6.5.2.2p6 (calls to a function defined without a prototype) and 7.15.1.1p2
(va_arg()). If there are supposed to be more such situations, then I'm
afraid the footnote itself needs to be clarified. In particular, if the
only difference between two function types T1 and T2 is in the signedness of
parameters, was the intent that the two types are compatible, despite
what 6.7.5.3p15 says?  If not, which of the following were intended to
apply, if any:

- it's OK to use an expression with type T1 to call a function that was
defined as T2, even though 6.5.2.2p6 says it's undefined behaviour?

- it's OK to declare the function as T1 in one translation unit and define
as T2 in another translation unit, even though 6.2.7p1 says it's undefined
behaviour?

- it's OK to define the function as T1 and then as T2 in *the same*
translation unit, even though 6.7p4 says it's a constraint violation?

What about interchangeability as return values from a function?  I haven't
found any normative text that implies this kind of interchangeability; which
of the above three situations are meant to apply if T1 and T2 have different
return types?


Emmanuel Delahaye

unread,
Dec 17, 2005, 5:49:40 PM12/17/05
to
Chad wrote:
> Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.

You seem to have a hard time matching the types to the format specifiers...

#include <stdio.h>
#include <math.h>

int main(void)
{

unsigned long a = (unsigned long) pow(2.0, 32.0);
double b = pow(2.0, 32.0);

printf("The value of a is: %lu\n", a);


printf("The value of b is: %0.0f\n", b);

printf("The value of int is: %u\n", (unsigned) sizeof(int));
printf("The value of double is: %u\n", (unsigned) sizeof(double));

return 0;
}

(Windows XP/Mingw)
The value of a is: 4294967295


The value of b is: 4294967296
The value of int is: 4
The value of double is: 8

--
A+

Emmanuel Delahaye

Wojtek Lerch

unread,
Dec 18, 2005, 8:50:26 AM12/18/05
to
Another thing I just realized is not quite clear: the exact consequences of
the fact that even though the standard guarantees that every signed
representation of a non-negative value is a valid representation of the same
value in the corresponding unsigned type, it doesn't work the other way
around.  There may exist other unsigned representations of the same value
that do not represent the same value in the signed type but instead either
are trap representations or represent a negative value.  Except for
va_arg(), are there any other situations where the standard guarantees that
storing a value through an unsigned type and then reading it back through
the corresponding signed type produces the original value?


Wojtek Lerch

unread,
Dec 18, 2005, 11:58:26 AM12/18/05
to
"Wojtek Lerch" <Wojt...@yahoo.ca> wrote in message
news:40l7ogF...@individual.net...

> Another thing I just realized is not quite clear is the exact consequences
> of the fact that even though the standard guarantees that every signed
> representation of a non-negative value is a valid representation of the
> same value in the corresponding unsigned type, it doesn't work the other
> way around. There may exist other unsigned representations of the same
> value that do not represent the same value in the signed type but instead
> either are trap representations or represent a negative value.

To continue this conversation with myself, the above is what 6.2.6.2p5 seems
to imply; on the other hand, 6.2.5p9 simply states that "the representation
of the same value in each type is the same", and has the footnote attached
to it that explains that that's meant to imply interchangeability. I don't
suppose that's meant to override the implication of the much more specific
6.2.6.2p5 and guarantee that the rule works both ways, is it?

pete

unread,
Dec 18, 2005, 3:20:54 PM12/18/05
to
Wojtek Lerch wrote:
>
> "Wojtek Lerch" <Wojt...@yahoo.ca> wrote in message
> news:40l7ogF...@individual.net...
> > Another thing I just realized is not quite clear
> > is the exact consequences
> > of the fact that even though the standard guarantees
> > that every signed
> > representation of a non-negative value
> > is a valid representation of the
> > same value in the corresponding unsigned type,
> > it doesn't work the other
> > way around.
> > There may exist other unsigned representations of the same
> > value that do not represent the same value in the
> > signed type but instead
> > either are trap representations or represent a negative value.
>
> To continue this conversation with myself,
> the above is what 6.2.6.2p5 seems
> to imply; on the other hand,
> 6.2.5p9 simply states that "the representation
> of the same value in each type is the same",
> and has the footnote attached
> to it that explains that that's meant to imply interchangeability.

That would have to put constraints on the values
of padding bits then, wouldn't it?


In a case where the type unsigned
had no padding bits and also two more value bits than type int,
CHAR_BIT == 17
UINT_MAX == 131071
INT_MAX == 32767
reading an object of type int with a %u specifier
would interpret the padding bit of the int type object
as an unsigned value bit.

In a case where int and unsigned had the same number
of value bits,
reading an object of type unsigned with a %d specifier
would interpret a padding bit as the sign bit.

--
pete

Dave Thompson

unread,
Dec 18, 2005, 10:20:40 PM12/18/05
to
On Tue, 13 Dec 2005 23:20:53 +0000 (UTC), Jordan Abel
<jma...@purdue.edu> wrote:

> On 2005-12-13, Old Wolf <old...@inspire.net.nz> wrote:
> int a /* = INT_MIN */;
> >> printf("The value of a is: %u\n",a);
> >
> > Undefined behaviour -- %u is for unsigned ints.
>
> it's not clear that it's undefined - an unsigned int and a signed int
> have the same size, and it's not clear that int can have any valid
> representations that are trap representations for unsigned.
>
All magnitude bits in signed must be the same, but the sign bit need
not be an additional (high) magnitude bit for unsigned; it can be a
padding bit, and a representation with that padding bit set may be a trap
representation.

This is why the unprototyped and vararg rules allow for corresponding
signed and unsigned integer types if (and only if) "the value is
representable in both types". (Plus technically it isn't clear the
vararg rules apply to the variadic routines in the standard library;
nothing explicitly says they use va_* or as-if, although presumably
that's the sensible thing for an implementor to do.)

- David.Thompson1 at worldnet.att.net

Wojtek Lerch

unread,
Dec 18, 2005, 11:37:59 PM12/18/05
to
"pete" <pfi...@mindspring.com> wrote in message
news:43A5C4...@mindspring.com...

> Wojtek Lerch wrote:
>> To continue this conversation with myself,
>> the above is what 6.2.6.2p5 seems
>> to imply; on the other hand,
>> 6.2.5p9 simply states that "the representation
>> of the same value in each type is the same",
>> and has the footnote attached
>> to it that explains that that's meant to imply interchangeability.
>
> That would have to put constraints on the values
> of padding bits then, wouldn't it?

Well yes, 6.2.6.2p5 says that very clearly. Any signed representation of a
non-negative value must represent the same value in the corresponding
unsigned type. If the signed type has padding bits that correspond to value
bits in the unsigned type, those padding bits must be set to zero in all
valid representations of non-negative values.

...


> In a case where int and unsigned had the same number
> of value bits,
> reading an object of type unsigned with a %d specifier
> would interpret a padding bit as the sign bit.

That's the situation I'm concerned about. From 6.2.6.2p5 it seems that it's
OK for the padding bit to be ignored. But 6.2.5p9 may be interpreted as
implying that the representation with the padding bit set to one must be
treated as a trap representation, because otherwise you would have a bit
pattern that represents a value in the range of both types when read through
the unsigned type, but a negative value when read through the signed type.


Jordan Abel

unread,
Dec 19, 2005, 11:24:15 PM12/19/05
to
On 2005-12-17, Wojtek Lerch <Wojt...@yahoo.ca> wrote:
> The "linkage interface"? What is that, in standardese?

Stage 8 of translation.

Wojtek Lerch

unread,
Dec 19, 2005, 11:34:36 PM12/19/05
to
"Jordan Abel" <jma...@purdue.edu> wrote in message
news:slrndqf1sq...@random.yi.org...

> On 2005-12-17, Wojtek Lerch <Wojt...@yahoo.ca> wrote:
>> The "linkage interface"? What is that, in standardese?
>
> Stage 8 of translation.

A stage of translation is an interface that implies how printf() must be
implemented? Somehow I doubt that's what he meant; but could you elaborate?

