The following code prints "not eq" on various compilers with various
settings on my 32-bit x86 machine:
#include <stdlib.h>
#include <stdio.h>

int main() {
    if ((double)atoi("1")/atoi("3") == (double)atoi("1")/atoi("3"))
        puts("eq");
    else
        puts("not eq");
    return 0;
}
With gcc it seems the first division is performed in the x87 80-bit
floating point registers but the second division is done with a
different instruction which I don't know exactly.

I know floating point equality checks are rarely adequate and there
are rounding errors etc., but does the standard allow such behaviour
in this case? I.e. is it a compiler bug or 'undefined behaviour'?
It's not undefined behavior, it's merely unspecified. The program
will print either "eq" or "not eq".
("int main()" is better written as "int main(void)".)
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Keith Thompson wrote:
> > if ((double)atoi("1")/atoi("3") == (double)atoi("1")/atoi("3"))
> >     puts("eq");
> > else
> >     puts("not eq");
>
> It's not undefined behavior, it's merely unspecified. The program
> will print either "eq" or "not eq".
>
I'm not a language lawyer, but there are certain requirements in the
standard, and it seemed suspicious to me that division (and the other
arithmetic operators) should be well defined as a requirement either
in C or in IEEE 754 (which is required by C99, AFAIK), where 'well
defined' means that the same operation on the same numbers of the same
type always gives the same result on a given architecture/compiler/...

Anyway, thanks for your reply.
> Keith Thompson wrote:
>> > if ((double)atoi("1")/atoi("3") == (double)atoi("1")/atoi("3"))
>> >     puts("eq");
>> > else
>> >     puts("not eq");
>>
>> It's not undefined behavior, it's merely unspecified. The program
>> will print either "eq" or "not eq".
>>
>
> I'm not a language lawyer, but there are certain requirements in the
> standard, and it seemed suspicious to me that division (and the other
> arithmetic operators) should be well defined as a requirement either
> in C or in IEEE 754 (which is required by C99, AFAIK)
C99 does not require IEEE 754 floating point (BTW I will call it IFP
because the alternative is both too long and the new IEC number is
less memorable). However, where the implementation is based on IFP, as
here, the C standard essentially defers to it. However, the only
requirement from the FP standard is on the minimum accuracy of the
result. C goes on to permit (but does require) temporary calculations
using more accuracy if available.
It is likely that whichever calculation is done second is done in 80
bits so that the registers involved have the maximum precision
available. If the earlier result is loaded from memory, it will be a
mere 64 bits. It is possible the compiler knew the result was to be
stored and hence it may have chosen a shorter divide operation. All
that is speculative -- I know very little about Intel FP -- but the
possibility that the same calculation does not compare equal to itself
is specifically permitted by C, provided both values are as accurate
as the IFP standard mandates.

The compare is not permitted to come out unequal because one of the
two values is not accurate enough; but one value may have too many
bits to compare equal to the other.
> where 'well defined' means that the same operation on the same
> numbers of the same type always gives the same result on a given
> architecture/compiler/...

In short, the result must always be good enough, but not always the
same, even on a single machine.
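
Incidentally, a minimal sketch of the usual workaround, assuming an
x87 target: force each quotient through an actual double object
(volatile here, so the store cannot be optimised away), which rounds
both sides to 64 bits before the compare.

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    /* the stores to a and b are observable side effects, so each
       80-bit intermediate result must be rounded to a 64-bit double
       before the comparison reads the values back */
    volatile double a = (double)atoi("1") / atoi("3");
    volatile double b = (double)atoi("1") / atoi("3");
    puts(a == b ? "eq" : "not eq");
    return 0;
}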
--
Ben.
You're right to be suspicious of that; it's not actually the case.
> ...as a requirement either
> in C or in IEEE 754 (which is required by C99, AFAIK)
I believe that IEEE 754 is not mentioned in C90; it is not mandatory
in C99. If an implementation pre#defines __STDC_IEC_559__ it is
supposed to conform to IEC 60559, which is equivalent to IEEE 754, but
otherwise it's free to do whatever it wants.
I don't have a copy of IEEE 754; my copy of IEEE 854 (which is a
radix-independent standard that is otherwise similar to IEEE 754) has
gone missing. However, IIRC, even if __STDC_IEC_559__ is pre#defined,
differences of 1ulp are allowed.
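
For what it's worth, here is a quick check of what an implementation
claims (a sketch; only the macro test comes from the standard, the
message strings are mine):

#include <stdio.h>

int main(void) {
#ifdef __STDC_IEC_559__
    puts("implementation claims IEC 60559 (IEEE 754) conformance");
#else
    puts("no IEC 60559 conformance claimed");
#endif
    return 0;
}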
> where 'well defined' means that the same operation on the same
> numbers of the same type always gives the same result on a given
> architecture/compiler/...
'undefined behavior' is a piece of C jargon that is very precisely
defined by the C standard to have a meaning that is not equivalent to
"not well-defined". Undefined behavior means, in the context of the C
standard, behavior on which the C standard imposes no requirements.
It's quite possible (and in many cases likely) that some other source
(the POSIX standard, a platform-specific ABI standard, the User's
guide for a given implementation of C) does in fact provide a
definition; but it's still "undefined behavior" as far as the C
standard is concerned. The C standard might define the behavior
incompletely, in which case the behavior is "unspecified", not
"undefined".
In this case, the C standard does require that the result of the
division must be, "the quotient from the division of the first operand
by the second" (6.5.5p5), so the behavior is not undefined. However,
the C standard does not specify the accuracy with which that quotient
is calculated, so the result is unspecified.
It does seem to be a 'bug' due to the mixed precision of registers and
memory, as I think Ben mentioned;

I tried an extra (double) cast around each of the expressions, and would
have expected this to have reduced both expressions down to 64 bits, but
perhaps it doesn't bother when the expression is already in an 80-bit fp
register. I don't think it's that easy, or efficient, to do this on the
x86 FPU. Or it decided the result was already 'double' even though one
side might really have been 'long double' at that point.
FWIW, lcc-win32 and DMC compilers for x86 both give an "eq" result on your
code.
--
Bart
Typo: you meant "(but does not require)".
jameskuyper wrote:
> In this case, the C standard does require that the result of the
> division must be, "the quotient from the division of the first operand
> by the second" (6.5.5p5), so the behavior is not undefined. However,
> the C standard does not specify the accuracy with which that quotient
> is calculated, so the result is unspecified..
You (and the standard) mention accuracy; however, accuracy is not the
issue here.

It does not matter if (double)1/3 evaluates to 0.3 or 0.4; what
matters is whether the expression is idempotent or not (always gives
the same answer).

From a theoretical point of view idempotency is very important: if it
is not present then calculations are not reproducible, the program is
not observable (adding a print statement might change the result), etc.

From an implementation or practical point of view requiring idempotency
might be too much; at least it seems that's what the standardisation
committee thought.
> Ben Bacarisse <ben.u...@bsb.me.uk> writes:
> [...]
>> However, the only
>> requirement from the FP standard is on the minimum accuracy of the
>> result. C goes on to permit (but does require) temporary calculations
>> using more accuracy if available.
> [...]
>
> Typo: you meant "(but does not require)".
*sigh* Yes. Thank you.
--
Ben.
The phrase you're looking for is probably "repeatable" or "consistent",
not "idempotent". When applied to a binary operation like division,
"idempotent" means that the result of applying the operator to equal
operands always produces a result equal to either operand. For instance,
maximum(a,b) is idempotent because maximum(a,a)==a. Division is most
certainly not idempotent; the only value of x for which x/x == x is 1.
The standard imposes no requirements on the accuracy; in particular, it
imposes no requirement that quotients involving identical operands be
evaluated with the same accuracy every time they are evaluated.
> From a theoretical point of view idempotency is very important: if it
> is not present then calculations are not reproducible, the program is
> not observable (adding a print statement might change the result), etc.
Consistency is a useful property to have in general, and I can't think
of any reason why an implementation would want to have this kind of
inconsistency. However, floating point operations are inherently inexact
in general, and any code which is written with the expectation that
mathematically equivalent expressions will be numerically exactly
equal is broken at a deep conceptual level.
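
To make that concrete, a sketch of the usual defensive alternative to
== (the helper name and the tolerance are mine, chosen purely for
illustration): compare against a relative tolerance scaled by the
operands rather than testing for exact equality.

#include <math.h>
#include <stdio.h>

/* returns nonzero if a and b agree to within rel_tol, relative to
   the larger magnitude; values near zero need separate handling */
static int nearly_equal(double a, double b, double rel_tol) {
    return fabs(a - b) <= rel_tol * fmax(fabs(a), fabs(b));
}

int main(void) {
    double x = 1.0 / 3.0;
    double y = 1.0 - 2.0 / 3.0;  /* mathematically equal to x */
    puts(nearly_equal(x, y, 1e-12) ? "close enough" : "different");
    return 0;
}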
James Kuyper wrote:
> > It does not matter if (double)1/3 evaluates to 0.3 or 0.4; what
> > matters is whether the expression is idempotent or not (always gives
> > the same answer).
>
> The phrase you're looking for is probably "repeatable" or "consistent",
> not "idempotent". When applied to a binary operation like division,
> "idempotent" means that the result of applying the operator to equal
> operands always produces a result equal to either operand. For instance,
> maximum(a,b) is idempotent because maximum(a,a)==a. Division is most
> certainly not idempotent; the only value of x for which x/x == x is 1.
Well, you are right; however, in computer science idempotence is used
with a slightly different meaning:
http://en.wikipedia.org/wiki/Idempotence#In_computing
Now I see "referentially transparent" or "pure function" would be a
better term for it.
> The standard imposes no requirements on the accuracy; in particular, it
> imposes no requirement that quotients involving identical operands be
> evaluated with the same accuracy every time they are evaluated.
I meant that it mentions accuracy, as in "the accuracy of operations
is implementation-specific", but not that "the operations might not be
consistent".
> in general, and any code which is written with the expectation that
> mathematically equivalent expressions will be numerically exactly
> equal is broken at a deep conceptual level.
OK, I can accept that, but there are cases where it would make life a
bit easier: assume a numerically unstable simulation; something
happens at the 1000th step that should not happen; the programmer
binds some subexpression of an internal calculation to a variable to
debug the code, which changes the outcome of the program, so the
problem is gone; the program is not debuggable.
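
A sketch of that scenario (hypothetical values; whether the two
results actually differ depends on the target and options, e.g. x87
versus SSE2):

#include <stdio.h>

int main(void) {
    double a = 1.0, b = 3.0, c = 1.0, d = 7.0;

    double direct = a / b + c / d;  /* may stay in 80-bit registers */

    double t = a / b;               /* the "debugging" assignment
                                       rounds t to a 64-bit double */
    double stepped = t + c / d;     /* so this can differ from direct
                                       in the last bit */

    puts(direct == stepped ? "same" : "different");
    return 0;
}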
Hmm, not really. It doesn't directly apply to a binary operator.
However, there are a couple of ways you can get to a unary operator
from a binary one, and then it can be applied. One way is duplication
of the single operand, which is what you describe, but the other way
is to curry the binary operator into a unary one by using one of the
operands.
The confusion between the two is probably because square matrices can
act upon themselves by multiplication (thus representing both the
operator and the single operand to that operator). Viewing a matrix as
an operator is effectively currying the multiplication.
Phil
--
If GML was an infant, SGML is the bright youngster far exceeds
expectations and made its parents too proud, but XML is the
drug-addicted gang member who had committed his first murder
before he had sex, which was rape. -- Erik Naggum (1965-2009)
<snip>
>> in general, and any code which is written with the expectation that
>> mathematically equivalent expressions will be numerically exactly
>> equal is broken at a deep conceptual level.
>
> OK, I can accept that, but there are cases where it would make life a
> bit easier: assume a numerically unstable simulation; something
> happens at the 1000th step that should not happen; the programmer
> binds some subexpression of an internal calculation to a variable to
> debug the code, which changes the outcome of the program, so the
> problem is gone; the program is not debuggable.
It is one instance where I would use a debugger. A decent debugger will
not interfere with the values (I've come across hardware
in-circuit emulators and simulators where things did misbehave, but
that is the exception, not the rule).

In any case, there are good reasons compilers do not always do the same
thing. The processor might be able to do more things in parallel if it
selects a different instruction, or it may save moving things out of
registers into RAM, or... the possible reasons are endless. This is
also why C99 introduced pragmas which allow you to tell the compiler to
*not* do some of these things: such optimisations are valuable enough
to be needed, but sometimes one needs the maths to be better behaved.
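
For example, one of those C99 pragmas is FP_CONTRACT; a sketch (how
completely compilers honour it varies):

#include <stdio.h>

/* C99 7.12.2: with FP_CONTRACT OFF, a*x + y may not be contracted
   into a single fused multiply-add; the product must be rounded to
   double before the addition */
#pragma STDC FP_CONTRACT OFF

static double axpy(double a, double x, double y) {
    return a * x + y;
}

int main(void) {
    printf("%.17g\n", axpy(1.0 / 3.0, 3.0, -1.0));
    return 0;
}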
--
Flash Gordon
If the standard imposed requirements on either the accuracy or the
consistency, it could in fact have chosen to separate the concepts. The
only thing it says is that the accuracy is implementation-defined, and
it goes so far as to say that the implementation-provided definition of
the accuracy can be "the accuracy is unknown". Given what the standard
fails to say about this issue, it would be entirely permissible for an
implementation to define that the accuracy is different depending upon
whether the operation is executed on an odd or even clock cycle. That's
just one of the most plausible of a great many inconvenient possibilities.
>> in general, and any code which is written with the expectation that
>> mathematically equivalent expressions will be numerically exactly
>> equal is broken at a deep conceptual level.
>
> OK, I can accept that, but there are cases where it would make life a
> bit easier: assume a numerically unstable simulation; something
> happens at the 1000th step that should not happen; the programmer
> binds some subexpression of an internal calculation to a variable to
> debug the code, which changes the outcome of the program, so the
> problem is gone; the program is not debuggable.
Yes, that's pretty much what happens. You often have to turn off
optimizations to make any sense of the behavior of a program. I've used
debuggers that allow you to monitor the behavior of optimized code - but
it doesn't make any sense: executing a statement that is supposed to
update the value of a variable does not in fact do so, because the
optimizer has moved the update of the variable's value to another point
in the code. Turn off optimizations, and the bug you're looking for
often disappears. This is a problem with all kinds of code, but it's
more of a problem with floating point code, because the inherent
inaccuracy of floating point operations gives implementations more
freedom to implement optimizations that actually change the result. This
is the way real compilers work, and the standard was written to
accommodate that fact.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double a, b;
    if ((a = (double)atoi("1")/atoi("3")) ==
        (b = (double)atoi("1")/atoi("3")))
        puts("equal");
    else
        puts("not equal");
    if (a == b)
        puts("equal");
    else
        puts("not equal");
    return 0;
}
Prints (gcc 3.1):

not equal
equal
--
Joe Wright
"If you think Health Care is expensive now, wait until it's free."
> "Szabolcs Nagy" <nsza...@gmail.com> wrote in message
> news:aa243d20-cad3-4135...@d32g2000yqh.googlegroups.com...
>> Hello, I get unexpected floating point behaviour.
>>
>> The following code prints "not eq" on various compilers with various
>> settings on my 32-bit x86 machine:
>>
>> #include <stdlib.h>
>> #include <stdio.h>
>>
>> int main() {
>>     if ((double)atoi("1")/atoi("3") == (double)atoi("1")/atoi("3"))
>>         puts("eq");
>>     else
>>         puts("not eq");
>>     return 0;
>> }
>>
>> With gcc it seems the first division is performed in the x87 80-bit
>> floating point registers but the second division is done with a
>> different instruction which I don't know exactly.
>>
>> I know floating point equality checks are rarely adequate and there
>> are rounding errors etc., but does the standard allow such behaviour
>> in this case? I.e. is it a compiler bug or 'undefined behaviour'?
>
> It does seem to be a 'bug' due to the mixed precision of registers and
> memory, as I think Ben mentioned;
I see 'bug' in scare quotes but just to be clear, I don't see this as
a bug in gcc.
> I tried an extra (double) cast around each of the expressions, and
> would have expected this to have reduced both expressions down to 64
> bits, but perhaps it doesn't bother when the expression is already in
> an 80-bit fp register. I don't think it's that easy, or efficient, to
> do this on the x86 FPU. Or it decided the result was already 'double'
> even though one side might really have been 'long double' at that
> point.
>
> FWIW, lcc-win32 and DMC compilers for x86 both give an "eq" result on
> your code.
I am not sure that says very much. For example:
gcc 4.3.3 -O0 gives not eq
gcc 4.3.3 -O0 -ffloat-store gives eq
gcc 4.3.3 -O[123] give eq
icc 10.1 -O[0123] give eq
--
Ben.
gcc 4.3.3
-O0 equal, equal
-O1 not equal, equal
-O1 -ffloat-store equal, equal
icc 10.0
-O0 equal, equal
-O1 not equal, not equal
You can get pretty much any result you like (although "equal" and "not
equal" will be less likely I think) but all are valid results in my
opinion.
--
Ben.
icl 11.1 win32
-prec-div equal, equal
SSE2 platforms have been the norm for 8 years, and most compilers have
caught up.
Can someone explain why this behaviour doesn't contravene
C99 5.1.2.3, paragraph 12? "... In particular, casts and
assignments are required to perform their specified
conversion"? I realize that 5.1.2.3p12 is an example, and
hence non-normative, but it does still seem to show that this
behaviour is counter to the *intent* of the standard.
Maybe I'm misreading this paragraph, but to me it reads as
saying that in Joe's example, a and b should have been
rounded to double from whatever extended precision they
were computed in, *before* either of the equalities is applied.
--
Mark
I believe that the FLT_EVAL_METHOD constant in <float.h>
may be what this is all about.
N869
5.2.4.2.2 Characteristics of floating types <float.h>
[#7] The values of operations with floating operands and
values subject to the usual arithmetic conversions and of
floating constants are evaluated to a format whose range and
precision may be greater than required by the type. The use
of evaluation formats is characterized by the value of
FLT_EVAL_METHOD:18)
 -1  indeterminable;

  0  evaluate all operations and constants just to the range and
     precision of the type;

  1  evaluate operations and constants of type float and double to
     the range and precision of the double type, evaluate long
     double operations and constants to the range and precision of
     the long double type;

  2  evaluate all operations and constants to the range and
     precision of the long double type.
All other negative values for FLT_EVAL_METHOD characterize
implementation-defined behavior.
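
A one-liner to see which of those values applies to a given compiler
and set of options (a sketch; C99 only, since C90's <float.h> has no
FLT_EVAL_METHOD):

#include <float.h>
#include <stdio.h>

int main(void) {
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}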
--
pete
The same applies to 4.3.3, of course, but I did not want to post a set
of unsurprising results!
--
Ben.
Yes, yes, all the world's a VAX, you're quite right.
> Ben Bacarisse wrote:
<snip>
>> The same applies to 4.3.3, of course, but I did not want to post a set
>> of unsurprising results!
>>
> I don't find "dodgy" results obtained by options for targets of more
> than 8 years ago so surprising either.
Someone posted because they were surprised (or, to be literal,
puzzled) by the output. That the SSE2 code does what you expect is
interesting but I wanted to show that even a change in optimisation
level can alter the result of the posted code. If I had access to the
hardware, I'd have tried other architectures as well.
> icc/ia32 got around to making
> P4 a default target a year ago.
> gcc has such a "surprising" default only for i386.
Personally, I don't find either the gcc default or the resulting
output surprising, but there seemed to me to be some value in posting
examples of compiler settings (default or otherwise) that reinforce
the idea that exact comparison of floating values is dangerous.
> I'm equally
> inclined to complain to my employer's IT for locking my laptop down to
> 32-bit Windows last week as to complain to c.l.c about gcc defaults.
> If you want to discuss why a certain compiler has obsolete defaults,
> if it is a currently maintained compiler, there should be a more
> suitable forum. gcc-...@gcc.gnu.org for example.
No, my point was all about what one might expect C code to do rather
than what compilers might do by default.
--
Ben.
I was all set to say that that makes no difference because of a
general permission to use more precision than required at any time,
but I thought it worth checking and I find (new to me):
5.2.4.2.2 p8:
Except for assignment and cast (which remove all extra range and
precision), the values of operations with floating operands and
values subject to the usual arithmetic conversions and of floating
constants are evaluated to a format whose range and precision may be
greater than required by the type. The use of evaluation formats is
characterized by the implementation-defined value of
FLT_EVAL_METHOD: [followed by the text already quoted by pete]
The first line has a change bar against it in my copy of n1256.pdf.
This means (I think) that some part of that text was changed after
publication. I don't have any other copy to hand so I don't know
exactly what has changed. It seems, anyway, that casts and assignment
are exempted and must remove extra precision.
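
To illustrate the exemption, here is the original comparison with
belt-and-braces casts; on an implementation that honours the corrected
wording this should print "eq", although, as discussed elsewhere in
the thread, some gcc configurations reportedly do not round here:

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    /* each outer cast is now required to discard any extra range
       and precision, per the corrected 5.2.4.2.2 */
    if ((double)((double)atoi("1") / atoi("3")) ==
        (double)((double)atoi("1") / atoi("3")))
        puts("eq");
    else
        puts("not eq");
    return 0;
}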
--
Ben.
> Joe Wright <joeww...@comcast.net> writes:
>
<snip>
> 5.2.4.2.2 p8:
>
> Except for assignment and cast (which remove all extra range and
> precision), the values of operations with floating operands
> and values subject to the usual arithmetic conversions and of
> floating constants are evaluated to a format whose range and
> precision may be greater than required by the type. The use of
> evaluation formats is characterized by the
> implementation-defined value of FLT_EVAL_METHOD: [followed by
> the text already quoted by pete]
>
> The first line has a change bar against it in my copy of
> n1256.pdf. This means (I think) that some part of that text was
> changed after
> publication. I don't have any other copy to hand so I don't know
> exactly what has changed. It seems, anyway, that casts and
> assignment are exempted and must remove extra precision.
If we compare the above to the original paragraph from ISO/IEC
9899:1999 (note that, in the original, it is actually para7, not
para8), we find that the revision has inserted the words "Except
for assignment and cast (which remove all extra range and
precision)". Otherwise, it is as quoted above.
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Forged article? See
http://www.cpax.org.uk/prg/usenet/comp.lang.c/msgauth.php
"Usenet is a strange place" - dmr 29 July 1999
> Ben Bacarisse said:
>
>> Joe Wright <joeww...@comcast.net> writes:
>>
> <snip>
>
>> 5.2.4.2.2 p8:
>>
>> Except for assignment and cast (which remove all extra range and
>> precision), the values of operations with floating operands
>> and values subject to the usual arithmetic conversions and of
>> floating constants are evaluated to a format whose range and
>> precision may be greater than required by the type. The use of
>> evaluation formats is characterized by the
>> implementation-defined value of FLT_EVAL_METHOD: [followed by
>> the text already quoted by pete]
>>
>> The first line has a change bar against it in my copy of
>> n1256.pdf. This means (I think) that some part of that text was
>> changed after
>> publication. I don't have any other copy to hand so I don't know
>> exactly what has changed. It seems, anyway, that casts and
>> assignment are exempted and must remove extra precision.
>
> If we compare the above to the original paragraph from ISO/IEC
> 9899:1999 (note that, in the original, it is actually para7, not
> para8), we find that the revision has inserted the words "Except
> for assignment and cast (which remove all extra range and
> precision)". Otherwise, it is as quoted above.
Thank you. I see now that pete quoted that whole paragraph (I did not
recognise it precisely because of the different number and initial
words).
No -std=c99 was given in any of the examples but it seems to make no
difference. The cast should throw away any extra precision, and I
think it is not doing so, at least with some combinations of compiler
options.
--
Ben.
What is n1256.pdf supposed to be?
Is it a revision of the last standard
or a proposal for the next?
--
pete
N1256 is a revision of C99, consisting of the text of the C99 standard
with the three Technical Corrigenda merged into it (and I think the
editor snuck in some minor editorial changes while he was at it).
It's a draft, not an "official" document.
There have been two (very) preliminary drafts of the upcoming C201X
standard. I don't have the document numbers handy. The biggest
change is a revamping of the sequence point rules.
It should be emphasized that all of the source documents for n1256 are
official. The Technical Corrigenda were approved in the form of changes
to be made to the C99 standard, and n1256 is intended only to show what
the standard would look like with those changes made to it. It has the
string "Septermber 7, 2007" at the top of every page of the document. To
the best of my knowledge, in the nearly two years since that date, the
only defect that has been reported against that document in the type
which appears in that date.
It may be unofficial, but by reason of combining all the official
documents together the way they were intended to be, I find it more
useful than any of the official documents.
Agreed.
Except that by "in the type", I presume you meant "is the typo".
<snip>
> [n1256] has the string "Septermber 7, 2007" at the top of
> every page of the document. To the best of my knowledge, in the
> nearly two years since that date, the only defect that has been
> reported against that document [is] the [typo] which appears in
> that date.
Ha! IIRC, it was I who first spotted that! Fame at last?
<snip>
This may have more to do with Windows than with SSE2:
I believe that Windows sets the x87 FPU precision to
53 bits by default, while many other x86 operating
systems (Linux, Solaris, OS X; not FreeBSD) leave
the x87 precision setting at its default of 64 bits.
To check, you can try compiling the following program:
if the division is implemented using SSE2, or using
x87 with 64-bit precision, you'll get a result of
4.9e-324. If it's using x87 with 53-bit precision
you'll get 9.9e-324 (except when the compiler
optimizes away the computation, as e.g., gcc-4.4 -O2
seems to do).
(Example stolen from David Monniaux's 'pitfalls' paper:
http://arxiv.org/pdf/cs/0701192v5 .)
#include <stdio.h>

static double div(double x, double y) {
    return x / y;
}

int main(void) {
    double x, y, z;
    x = 0x1.8000000000001p-1018;
    y = 0x1.0000000000001p+56;
    z = div(x, y);
    printf("z is %.2g\n", z);
    return 0;
}
--
Mark
And so the question is:
Is an implementation which conforms to the original C99 standard
but does not conform to n1256,
still a conforming implementation of C99?
--
pete
That depends: are you referring to an implementation that fails to
conform to one or more of the Technical Corrigenda (TCs), or one which
fails to conform to a (hopefully) hypothetical discrepancy between n1256
and C99 as amended by the three TCs?
The TCs are considered to have modified C99 itself, so an implementation
can no longer conform to C99 without conforming to all three TCs.
However, if there were a discrepancy between n1256 and C99+TC1+TC2+TC3,
then the official documents are correct, and the discrepancy would count
as a defect in n1256, so an implementation could fail to conform to that
defect and still be conforming; that's the difference between a draft
of the standard and an actually approved update to the standard.
It wouldn't be a proper spelling complaint without at least one
misspelling. I'll pretend it was deliberate, to satisfy tradition. :-)
Ah, yes. I hadn't thought too carefully about exactly when the 53-bit
precision got set. Thanks for the clarification.
> Windows 64-bit sets 53-bit precision mode before handing over control to
> .exe, even if that .exe is built by gcc. However, compilers for X64 use
> sse2 by default, so you won't see the "surprising" extra precision.
> 53-bit precision mode will avoid the so-called "surprising" results for
> this example, but not for analogous examples using float data types.
And also not for doubles in examples involving underflow or
overflow, since the exponent range of x87+53-bit precision
is still much larger than the exponent range of IEEE 754 doubles.
The sooner the x87 FPU becomes ancient history the better, as
far as I'm concerned. :)
> [...]
--
Mark
I'm referring to one which doesn't make the exception
for casts and assignments as stated in:
>>>>>>> 5.2.4.2.2 p8:
>>>>>>>
>>>>>>> Except for assignment and cast
>>>>>>> (which remove all extra range and precision), the values
> The TCs are considered to have modified C99 itself, so an implementation
> can no longer conform to C99 without conforming to all three TCs.
> However, if there were a discrepancy between n1256 and C99+TC1+TC2+TC3,
> then the official documents are correct, and the discrepancy would count
> as a defect in n1256, so an implementation could fail to conform to that
> defect and still be conforming; that's the difference between a draft
> of the standard and an actually approved update to the standard.
--
pete
Actually, there's one other known defect: the predefined macro
__STDC_MB_MIGHT_NEQ_WC__ should appear in 6.10.8p2 (optional predefined
macros) rather than p1 (required predefined macros).
--
Larry Jones
I don't need to do a better job. I need better P.R. on the job I DO.
-- Calvin
No, because it *does* contravene that clause, as well as the actual
requirements which are specified normatively in 6.3.1. Many compilers
(including gcc in most cases) do not conform to those requirements by
default because most programmers are so clueless about floating point
that the excess precision helps rather than hurts and it avoids the
performance penalty of removing it.
--
Larry Jones
I don't need to improve! Everyone ELSE does! -- Calvin
BTW, it seems this issue has a long history in the gcc bug tracker:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323
But note that it's not a bug. Both the original post's code and the
code in that "bug report" are permitted to behave as they do.

Yes, there has been a clarification to the published standard that
casts and assignment discard excess precision, but that does not apply
to either the original post's code or the "bug report" code.
The fact that I incorrectly extended the argument to the second posted
code (that had a belt-and-braces cast and an assignment) does not mean
I (or anyone else who said the same) was wrong about the original
code.
--
Ben.