print sprintf("%.16g\n", -0.0005000000000000000104);
print sprintf("%.16g\n", -0.00049999999999999999999999999999999999999);
print sprintf("%.16g\n", -0.0007999999);
print sprintf("%.16g\n", -0.00089999999999999999999999999999999999999);
print sprintf("%.16g\n", -0.0008999999999999999999999999999999999999);
print sprintf("%.16g\n", 8.000000000000000000);
print sprintf("%.16g\n", 8.0000000000000000000000000000000000000);
produces:
-0.0005
-0.0005000000000000001
-0.0007999999
-0.0008999999999999999
-0.0009
8
7.999999999999999
The same code works fine under Linux, except that if you compile with
debugging (-g) you then get the same (wrong) results. On Solaris it does
not seem to matter how you configure it (I have tried many permutations).
MM;
looking at
8.000000000000000000
vs
8.0000000000000000000000000000000000000
on my solaris box, they are certainly stored internally as different
values, which doesn't seem right:
$a = 8.000000000000000000;
SV = NV(0x1e3f20) at 0x1dee2c
REFCNT = 1
FLAGS = (NOK,pNOK)
NV = 8
0x1e3f30: 0x40200000
0x1e3f34: 0x00000000
$a = 8.0000000000000000000000000000000000000;
SV = NV(0x1e3f20) at 0x1dee2c
REFCNT = 1
FLAGS = (NOK,pNOK)
NV = 8
0x1e3f30: 0x401fffff
0x1e3f34: 0xffffffff
--
That he said that that that that is is is debatable, is debatable.
On further examination, I think you are just seeing the rounding errors
when the literal string is being converted to an NV during compilation.
Arguably Perl could do a better job in the 8.000.....0 case, but
it doesn't seem worth special-casing this.
[
for those who are interested, Perl converts
"111111111.222222222333333333444444444"
into an NV by doing a calculation along the lines of
(111111111*1e27 + 222222222*1e18 + 333333333*1e9 + 444444444*1e0)/1e27
The exact details vary based on integer size etc.
]
--
"Emacs isn't a bad OS once you get used to it.
It just lacks a decent editor."
MM;
There isn't such a thing as 'correct' output. The numbers you are using
are not exactly representable with a double (well, with the exception of
the 8.0000...), and perl 5.6.1 and perl 5.8.0 both make approximations -
they just happen to be different approximations.
For example, the following C code on my machine with gcc on linux x86
produces perl5.8.0-ish results:
$ cat c.c
#include <stdio.h>
int main(void) {
    printf("%.40g\n", -0.00049999999999999999999999999999999999999);
    return 0;
}
$ ./c
-0.0005000000000000000104083408558608425664715
If the tests in Math::Pari are failing, then I suspect they are making
unwarranted assumptions about floating-point representation.
Dave.
--
You live and learn (although usually you just live).
More to the point, I would expect *consistent* output. Using my example
with C, I get the *same* result regardless of the revision of gcc (2.x, 3.x)
and regardless of the platform (Solaris, Linux). Maybe I am anal, but I
like consistency. When I build 5.8 on Linux and Solaris and get different
results, and when I get different results between 5.6.1 and 5.8 on the same
platform, I take notice.
You can call it a bug, you can call it a feature, but it leads me to wonder
what other code out there that runs a certain way under 5.6.1 and 5.005 will
magically behave differently under 5.8.
....and I can run all my examples through bc and get the correct answer - an
answer without a rounding error.
MM;
-----Original Message-----
From: Dave Mitchell [mailto:da...@fdgroup.com]
Sent: Thursday, August 15, 2002 6:33 PM
To: Michael Minichino
Cc: 'perl5-...@perl.org'
Subject: Re: 5.8.0 sprintf (?) problem with floats?
I think I'd agree that the handling of an overprecise 8.0 is probably
an error - we can get demonstrably more correct results (at some cost
in speed not yet determined) by special-casing trailing zeroes after
a decimal point.
Note that the problem here is in parsing the input string, not in
sprintf.
:More to the point, I would expect *consistent* output. Using my example
:with C, I get the *same* result regardless of the revision of gcc (2.x, 3.x)
:and regardless of the platform (Solaris, Linux).
The changes in fp handling between 5.6 and 5.8 were primarily done
to make things more consistent: previously we were relying on system
library functions which in many cases were inaccurate and sometimes
wildly wrong. I believe what we have now is an improvement on what
went before in that respect.
:....and I can run all my examples through bc and get the correct answer - an
:answer without a rounding error.
bc supports arbitrary precision numbers. So does perl, by means of the
Math::Big* modules. If you truly want the precision you're specifying
in your examples, this (possibly using pari under the hood) is probably
the way to go. You'd probably also then want to specify the numbers as
strings, to ensure that the module rather than the perl parser takes
responsibility for parsing the number.
What I find particularly interesting is that you get different results
on Linux depending on whether you enabled debugging or not. We should
be able to track down why that happens, and the answer might well give
a clue to the rest of it.
Hugo
Moin,
>Has anyone seen this on Solaris with 5.8.0:
You might want to consider using Math::BigFloat. Perl stores the numbers
internally as a float (double or whatever) and the results may well vary
between platforms.
Best wishes,
Tels
Btw: Unlike Math::Pari, BigFloat avoids testing floating point input,
since the input gets "mangled" by Perl before BigFloat sees it. This is
why my testsuite has:
ok (Math::BigFloat->new('123.6'),'123.6');
as opposed to:
ok (Math::BigFloat->new(123.6),123.6);
--
perl -MMath::String -e 'print \
Math::String->from_number("215960156869840440586892398248"),"\n"'
http://bloodgate.com/perl My current Perl projects
PGP key available on http://bloodgate.com/tels.asc or via email.
No, rounding errors are one thing, but perl's current behaviour is too
unstable. Look at the values:
0.0005000000000000000104 -> 0.0005
0.00049999999999999999999999999999999999999 -> 0.0005000000000000001
So why is the smaller .00049... converted to a bigger value?
Anyway, the conversions work fine if one changes
/* combine components of mantissa */
for (i = 0; i <= ipart; ++i)
result += S_mulexp10((NV)part[ipart - i],
i ? offcount + (i - 1) * PARTSIZE : 0);
to
/* combine components of mantissa */
for (i = 0; i <= ipart; ++i)
result += S_mulexp10((NV)part[ipart - i],
(i ? offcount + (i - 1) * PARTSIZE : 0) + expextra);
expextra = 0;
Does anybody see a problem with that approach?
Cheers,
Michael.
--
Michael Schroeder mlsc...@informatik.uni-erlangen.de
main(_){while(_=~getchar())putchar(~_-1/(~(_|32)/13*2-11)*13);}
If you're going to do that, then a small rearrangement can also save
a call to S_mulexp10, as in the patch below.
On my linux machine here, I get the same (correct) results with or
without the patch and with or without debugging; if others find that
it improves things, I'll be happy to put it in.
Hugo
--- numeric.c Thu Jun 20 16:47:08 2002
+++ numeric.c.new Fri Aug 16 13:07:57 2002
@@ -908,11 +908,6 @@
}
}
- /* combine components of mantissa */
- for (i = 0; i <= ipart; ++i)
- result += S_mulexp10((NV)part[ipart - i],
- i ? offcount + (i - 1) * PARTSIZE : 0);
-
if (seendigit && (*s == 'e' || *s == 'E')) {
bool expnegative = 0;
@@ -929,10 +924,12 @@
if (expnegative)
exponent = -exponent;
}
-
- /* now apply the exponent */
exponent += expextra;
- result = S_mulexp10(result, exponent);
+
+ /* combine components of mantissa, factored by exponent */
+ for (i = 0; i <= ipart; ++i)
+ result += S_mulexp10((NV)part[ipart - i],
+ exponent + (i ? offcount + (i - 1) * PARTSIZE : 0));
/* now apply the sign */
if (negative)
../miniperl -Ilib configpm configpm.tmp
Perl v5003.0.0 required (did you mean v5003.000?)--
this is only v5.9.0, stopped at lib/Exporter.pm line 3.
I'm assuming I grabbed some intermediate version from
rsync://ftp.linux.activestate.com/perl-current/
when vstrings were flaky, but is that the right
place to go now for "the latest"? The patch I got was
17722.
Or should I just wait for perl5003 :-) -- jpl
Ah, sorry I didn't spot that.
In terms of representable values, an x86 double can hold the following
precise values in this area (as shown by bc):
(A) 10624DD2F1A9FB*(2^-3F)
.0004999999999999999019881236073103991657262668013572692871093750
(B) 10624DD2F1A9FC*(2^-3F)
.0005000000000000000104083408558608425664715468883514404296875000
(C) 10624DD2F1A9FD*(2^-3F)
.0005000000000000001188285581044112859672168269753456115722656250
It seems that Perl 5.8.0 is converting
"0.00049999999999999999999999999999999999999"
into (C), while 5.6.0 converts it to (B).
Since that value lies somewhere between (A) and (B), with (B) closest,
it seems that 5.8.0 is less than optimal.
Here's a suggestion which may fix it (and stuff like the 8.00....),
and may also make things infinitesimally faster.
Given that the mantissa of any NV is going to have somewhat less than
8*sizeof(NV) bits of precision, we should stop accumulating digits after
we've seen log10(2)*8*sizeof(NV) significant digits (or thereabouts). This
will avoid wasting time and stop accumulating rounding errors. Currently
we arbitrarily accumulate 6*8*sizeof(U32 or U64) significant bits,
typically way beyond the eventual precision of the typical destination NV.
Unless someone thinks this is a bad idea, I think I'll have a go at it
later today or tomorrow.
>
> Anyway, the conversions work fine if one changes
>
> /* combine components of mantissa */
> for (i = 0; i <= ipart; ++i)
> result += S_mulexp10((NV)part[ipart - i],
> i ? offcount + (i - 1) * PARTSIZE : 0);
>
> to
>
> /* combine components of mantissa */
> for (i = 0; i <= ipart; ++i)
> result += S_mulexp10((NV)part[ipart - i],
> (i ? offcount + (i - 1) * PARTSIZE : 0) + expextra);
> expextra = 0;
>
> Does anybody see a problem with that approach?
Well, expextra is likely to be negative, so the loop may involve
multiple multiplies by 10^-N, which is implemented as a division,
which may be considered undesirable.
Dave.
--
"There's something wrong with our bloody ships today, Chatfield."
Admiral Beatty at the Battle of Jutland, 31st May 1916.
Oh, and I fixed a typo in t/base/num.t that was making the ok()
function always succeed!
Questions:
1. is t/base/num.t the right place for these tests?
2. The old code had the test
#if defined(HAS_QUAD) && defined(USE_64_BIT_INT)
to decide whether to use a U32 or a U64, but AFAICT, UV is defined to
these values under the same conditions, so I've just unconditionally used
a UV. Is this the correct way of getting hold of the largest supported
integer type?
This patch could do with being tested on platforms with weird and
wonderful architectures. But I guess that's what smoking's for?
Dave.
--
"I do not resent criticism, even when, for the sake of emphasis,
it parts for the time with reality".
Winston Churchill, House of Commons, 22nd Jan 1941.
--- ./numeric.c- Fri Aug 16 15:41:47 2002
+++ ./numeric.c Fri Aug 16 23:11:30 2002
@@ -814,24 +814,38 @@ Perl_my_atof2(pTHX_ const char* orig, NV
NV result = 0.0;
char* s = (char*)orig;
#ifdef USE_PERL_ATOF
+ UV accumulator = 0;
bool negative = 0;
char* send = s + strlen(orig) - 1;
- bool seendigit = 0;
- I32 expextra = 0;
+ bool seen_digit = 0;
+ I32 exp_adjust = 0;
+ I32 exp_acc = 0; /* the current exponent adjust for the accumulator */
I32 exponent = 0;
- I32 i;
-/* this is arbitrary */
-#define PARTLIM 6
-/* we want the largest integers we can usefully use */
-#if defined(HAS_QUAD) && defined(USE_64_BIT_INT)
-# define PARTSIZE ((int)TYPE_DIGITS(U64)-1)
- U64 part[PARTLIM];
-#else
-# define PARTSIZE ((int)TYPE_DIGITS(U32)-1)
- U32 part[PARTLIM];
-#endif
- I32 ipart = 0; /* index into part[] */
- I32 offcount; /* number of digits in least significant part */
+ I32 seen_dp = 0;
+ I32 digit;
+ I32 sig_digits = 0; /* noof significant digits seen so far */
+
+/* There is no point in processing more significant digits
+ * than the NV can hold. Note that NV_DIG is a lower-bound value,
+ * while we need an upper-bound value. We add 2 to account for this;
+ * since it will have been conservative on both the first and last digit.
+ * For example a 32-bit mantissa with an exponent of 4 would have
+ * exact values in the set
+ * 4
+ * 8
+ * ..
+ * 17179869172
+ * 17179869176
+ * 17179869180
+ *
+ * where for the purposes of calculating NV_DIG we would have to discount
+ * both the first and last digit, since neither can hold all values from
+ * 0..9; but for calculating the value we must examine those two digits.
+ */
+#define MAX_SIG_DIGITS (NV_DIG+2)
+
+/* the max number we can accumulate in a UV, and still safely do 10*N+9 */
+#define MAX_ACCUMULATE ( (UV) ((UV_MAX - 9)/10))
/* leading whitespace */
while (isSPACE(*s))
@@ -846,74 +860,58 @@ Perl_my_atof2(pTHX_ const char* orig, NV
++s;
}
- part[0] = offcount = 0;
- if (isDIGIT(*s)) {
- seendigit = 1; /* get this over with */
+ /* we accumulate digits into an integer; when this becomes too
+ * large, we add the total to NV and start again */
- /* skip leading zeros */
- while (*s == '0')
- ++s;
- }
+ while (1) {
+ if (isDIGIT(*s)) {
+ seen_digit = 1;
+ digit = *s++ - '0';
+ exp_adjust -= seen_dp;
+
+ /* don't start counting until we see the first significant
+ * digit, eg the 5 in 0.00005... */
+ if (!sig_digits && digit == 0)
+ continue;
- /* integer digits */
- while (isDIGIT(*s)) {
- if (++offcount > PARTSIZE) {
- if (++ipart < PARTLIM) {
- part[ipart] = 0;
- offcount = 1; /* ++0 */
- }
- else {
+ if (++sig_digits > MAX_SIG_DIGITS) {
/* limits of precision reached */
- --ipart;
- --offcount;
- if (*s >= '5')
- ++part[ipart];
- while (isDIGIT(*s)) {
- ++expextra;
- ++s;
+ if (digit >= 5)
+ ++accumulator;
+ exp_adjust++;
+ if (seen_dp)
+ break;
+ else {
+ /* skip remaining integer part */
+ while (isDIGIT(*s)) {
+ s++;
+ exp_adjust++;
+ }
}
/* warn of loss of precision? */
- break;
}
- }
- part[ipart] = part[ipart] * 10 + (*s++ - '0');
- }
-
- /* decimal point */
- if (GROK_NUMERIC_RADIX((const char **)&s, send)) {
- if (isDIGIT(*s))
- seendigit = 1; /* get this over with */
-
- /* decimal digits */
- while (isDIGIT(*s)) {
- if (++offcount > PARTSIZE) {
- if (++ipart < PARTLIM) {
- part[ipart] = 0;
- offcount = 1; /* ++0 */
- }
- else {
- /* limits of precision reached */
- --ipart;
- --offcount;
- if (*s >= '5')
- ++part[ipart];
- while (isDIGIT(*s))
- ++s;
- /* warn of loss of precision? */
- break;
+ else {
+ if (accumulator > MAX_ACCUMULATE) {
+ /* add accumulator to result and start again */
+ result = S_mulexp10(result, exp_acc) + (NV)accumulator;
+ accumulator = 0;
+ exp_acc = 0;
}
+ accumulator = accumulator * 10 + digit;
+ exp_acc++;
}
- --expextra;
- part[ipart] = part[ipart] * 10 + (*s++ - '0');
+ }
+ else if (!seen_dp && GROK_NUMERIC_RADIX((const char **)&s, send)) {
+ seen_dp = 1;
+ }
+ else {
+ break;
}
}
- /* combine components of mantissa */
- for (i = 0; i <= ipart; ++i)
- result += S_mulexp10((NV)part[ipart - i],
- i ? offcount + (i - 1) * PARTSIZE : 0);
+ result = S_mulexp10(result, exp_acc) + (NV)accumulator;
- if (seendigit && (*s == 'e' || *s == 'E')) {
+ if (seen_digit && (*s == 'e' || *s == 'E')) {
bool expnegative = 0;
++s;
@@ -931,7 +929,7 @@ Perl_my_atof2(pTHX_ const char* orig, NV
}
/* now apply the exponent */
- exponent += expextra;
+ exponent += exp_adjust;
result = S_mulexp10(result, exponent);
/* now apply the sign */
--- ./t/base/num.t- Fri Aug 16 16:47:40 2002
+++ ./t/base/num.t Fri Aug 16 22:58:36 2002
@@ -1,6 +1,6 @@
#!./perl
-print "1..45\n";
+print "1..48\n";
# First test whether the number stringification works okay.
# (Testing with == would exercize the IV/NV part, not the PV.)
@@ -69,7 +69,7 @@
print $a + 1 == 0 ? "ok 19\n" : "not ok 19 #" . $a + 1 . "\n";
sub ok { # Can't assume too much of floating point numbers.
- my ($a, $b, $c);
+ my ($a, $b, $c) = @_;
abs($a - $b) <= $c;
}
@@ -164,3 +164,16 @@
$a = 1e34; "$a";
print $a eq "1e+34" || $a eq "1e+034" ? "ok 45\n" : "not ok 45 $a\n";
+
+# see bug #15073
+
+$a = 0.00049999999999999999999999999999999999999;
+$b = 0.0005000000000000000104;
+print $a <= $b ? "ok 46\n" : "not ok 46\n";
+
+$a = 0.00000000000000000000000000000000000000000000000000000000000000000001;
+print $a > 0 ? "ok 47\n" : "not ok 47\n";
+
+$a = 80000.0000000000000000000000000;
+print $a == 80000.0 ? "ok 48\n" : "not ok 48\n";
+
Thanks,
MM;
-----Original Message-----
From: h...@crypt.org [mailto:h...@crypt.org]
Sent: Friday, August 16, 2002 7:31 AM
To: Michael Schroeder
Cc: Michael Minichino; 'perl5-...@perl.org'
Subject: Re: 5.8.0 sprintf (?) problem with floats?
Thanks, applied as #17736.
I found one small problem which I fixed; see the additional test case.
:Oh, and I fixed a typo in t/base/num.t that was making the ok()
:function always succeed!
Whoops, good catch.
:1. is t/base/num.t the right place for these tests?
I think so. We could always add an op/num.t for less critical numeric
tests, but we're still really only testing core numeric functionality.
:2. The old code had the test
:
: #if defined(HAS_QUAD) && defined(USE_64_BIT_INT)
:
:to decide whether to use a U32 or a U64, but AFAICT, UV is defined to
:these values under the same conditions, so I've just unconditionally used
:a UV. Is this the correct way of getting hold of the largest supported
:integer type?
Well, defined(HAS_QUAD) might give you a 64-bit int even if USE_64_BIT_INT
isn't defined. But I think the code is fine as you've written it.
Hugo
Which would suggest it isn't printf that is at fault but the reading
in of the numbers.
--
Nick Ing-Simmons
http://www.ni-s.u-net.com/
Perhaps not, but there is such a thing as output that
is as close to exact as the floating point representation
on the machine will allow; most floating point I/O
routines, though, don't come anywhere close to even that goal :-(.
Several years ago there were two papers in some conference
proceedings (might have been one of the POPL conferences,
I can't remember for sure). One was on reading floating
point numbers, the other on writing them.
The main thing I remember about them is that it is
REALLY HARD to do it right. It involves subtle points
most people would never think of in a million years.
There are certain paths through the algorithm where
you absolutely have to use higher precision math
or you can't possibly get the most accurate result.
The papers presented the algorithms written in Scheme.
I don't know if anyone ever produced C code from them
or not.
I have both papers (on paper) at home somewhere.
>
>The main thing I remember about them is that it is
>REALLY HARD to do it right. It involves subtle points
>most people would never think of in a million years.
>There are certain paths through the algorithm where
>you absolutely have to use higher precision math
>or you can't possibly get the most accurate result.
One of the fun ones is something like:
7.99999999999999999999999555555555555555555556
Where the first stream of 9s takes you to the resolution of the representation.
To get it "right" you have to notice that the final "6" causes the previous 5
to "round up", which then propagates leftwards.
So the recent "we have enough bits" idea is not quite right.
>
>The papers presented the algorithms written in Scheme.
>I don't know if anyone ever produced C code from them
>or not.
I had a go at printing case - I have the code somewhere I think.
IIRC Solaris's ieee library is based on the ideas.
That particular example doesn't seem quite right. You would have already
rounded up anyway after seeing the first "5".
AFAIKT, if the digit following the last significant digit is 0..4, you
always round down, and if it's 5..9 you always round up, except for the
special case of 50000000000...., which is exactly in the middle, and some
arbitrary choice has to be made. One way being the evennesss of the LHD
(to avoid any overall bias). If you make the choice that 5.000000....
always rounds up, then the simple rule digit >= '5' works fine.
Unless I'm seeing things through rose-tinted specs?
--
My get-up-and-go just got up and went.
I think you mean the _second_ 5. The first one only sets up the case where you
might have to follow one of the more complicated rounding rules
(even/odd/trunc/+-inf), but the second 5 nails it as being a trivial round up.
John
--
John Peacock
Director of Information Research and Technology
Rowman & Littlefield Publishing Group
4720 Boston Way
Lanham, MD 20706
301-459-3366 x.5010
fax 301-429-5747
I said "something like", and it was a long time ago.
>You would have already
>rounded up anyway after seeing the first "5".
So maybe it is ....999499999....
>
>AFAIKT, if the digit following the last significant digit is 0..4, you
>always round down, and if it's 5..9 you always round up, except for the
>special case of 50000000000...., which is exactly in the middle, and some
>arbitrary choice has to be made.
It is that middle case I was trying to target - "exactly in the middle"
depends on the digits to the right.
>One way being the evenness of the LHD
I would have to hunt down what IEEE754 etc. say about it.
>(to avoid any overall bias). If you make the choice that 5.000000....
>always rounds up, then the simple rule digit >= '5' works fine.
>
>Unless I'm seeing things through rose-tinted specs?
As Tom said the papers had some points where you think "oh heck yes it would
do that..."
Remember too that this is binary/decimal conversion (or vice versa),
so you may be rounding a bit that is part way through the 3.322-ish bits
that represent the '5', or run out of (whole) bits before you know
whether the decimal digit is 4 or 5.
Apart from theoretical niceties, do you think my_atof() requires any more
work? Bearing in mind that any rounding here is a one-off during
conversion of a string constant, unlike the rounding during arithmetic,
which will accumulate errors?