In[8]:= Pi/10 //N
Out[8]= 0.314159
I know how to display results more accurately, using more digits, but
in the case below the result has to be computed using software
routines, as I've requested a result more accurate than the floating-
point processor can produce.
In[9]:= N[Pi/10,40]
Out[9]= 0.3141592653589793238462643383279502884197
Is there a way of displaying a result to the maximum accuracy
possible, without making use of software routines?
In[12]:= N[MachinePrecision]
Out[12]= 15.9546
I'm trying to work out why Sage gives a slightly different result on
Solaris 10 (SPARC) than on Linux (x86), and I was interested in what
Mathematica makes of it. I'm pretty sure the difference is due to
different rounding errors in the two CPUs. I believe the Intel/AMD
CPUs use 80 bits for computations, whereas the SPARCs only use 64.
Anyway, how can I get Pi/10 (or any result) to the maximum possible
accuracy, without resorting to the slower software methods to compute
the result?
Dave
> using more digits, but
> in the case below, the result will have to be computed using software
> routines, as I've requested the result more accurate than the floating
> point processor can produce.
>
> In[9]:= N[Pi/10,40]
>
> Out[9]= 0.3141592653589793238462643383279502884197
>
> Is there a way of displaying a result to the maximum accuracy
> possible, without making use of software routines?
Again, you probably mean precision.
If you want to display all the bits available in the IEEE floating-point
double format, you can do so in binary, hexadecimal, etc., but not
in decimal. There is also (sometimes) a double-extended format available,
as well as some other possibilities, e.g. quad precision.
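C99's "%a" conversion does exactly this sort of decimal-free display. A minimal sketch (the helper names are mine, not from the thread):

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Return the raw IEEE-754 bit pattern of a double as a 64-bit integer.
   memcpy avoids the strict-aliasing problems of pointer casts. */
uint64_t double_bits(double x) {
    uint64_t u;
    memcpy(&u, &x, sizeof u);
    return u;
}

/* Display a double without any decimal conversion: "%a" is C99's
   hexadecimal floating-point format, and the raw bit pattern is
   available in hex from double_bits(). */
void show_double(double x) {
    printf("%a  (bits: 0x%016llx)\n", x, (unsigned long long)double_bits(x));
}
```

In this notation the double nearest e displays as 0x1.5bf0a8b145769p+1: every significand bit is shown, with no decimal rounding involved.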
>
> In[12]:= N[MachinePrecision]
>
> Out[12]= 15.9546
>
...
I'm pretty sure the difference is due to different
> rounding errors in the two CPUs. I believe the Intel/AMD CPUs uses 80
> bits for computations, whereas the SPARCs only use 64.
This has certainly been the case in the past. I don't know about current
SPARC processors. Should a CPU use double-extended registers or not?
There are other possible differences not related to roundoff. e.g.
exponent range/ overflow.
>
> Anyway, how can I get Pi/10 (or any result) to the maximum possible
> accuracy,
If you want the 80 bits that are available in the registers in Intel
cpus, but you are using 64 bit registers on a SPARC, then you probably
need to use additional software. Too bad.
without resorting to the slower software methods to compute
> the result?
What's your rush?
Fair enough.
> I'm pretty sure the difference is due to different
>
> > rounding errors in the two CPUs. I believe the Intel/AMD CPUs uses 80
> > bits for computations, whereas the SPARCs only use 64.
>
> This has certainly been the case in the past. I don't know about current
> SPARC processors. Should a CPU use double-extended registers or not?
> There are other possible differences not related to roundoff. e.g.
> exponent range/ overflow.
This was on a Sun T5240 using the latest T2+ processor. The output
differs from that on some other box (I assume Linux, though I do not
actually know).
> > Anyway, how can I get Pi/10 (or any result) to the maximum possible
> > accuracy,
>
> If you want the 80 bits that are available in the registers in Intel
> cpus, but you are using 64 bit registers on a SPARC, then you probably
> need to use additional software. Too bad.
>
> without resorting to the slower software methods to compute
>
> > the result?
>
> What's your rush?
I'm not wanting to get 80 bits on a SPARC processor. I want
Mathematica to display the result to the greatest precision possible
(in base 10), without forcing it to use software emulation.
I would have thought that possible, but I do not know how. Perhaps
In[2]:= N[Pi/10,N[MachinePrecision]]
Out[2]= 0.3141592653589793
might be the way. Any MMA experts here?
Typing:
In[1]:= N[MachinePrecision,100]
Out[1]=
15.954589770191003346328161420398130418714063717491752689452655439736\
> 73403154449900280714436226386712
gives exactly the same output on a SPARC processor as it does on an
Intel processor.
Dave
why base 10?
I am guessing that you want a program that, given an IEEE binary double
float P, will produce a string of digits of minimal length which, when
read back in, will reproduce exactly the internal floating-point number P.
Stated this way, it is a classic problem in binary-decimal and
decimal-binary conversion. It would be entirely coincidental if someone
familiar with Mathematica were also conversant with this matter.
Your implicit assumption that this is easy, or has been solved by
Mathematica and all you need to know is the proper command, is most
likely false. (Though who knows what lurks in there :) )
For a paper on this topic, see
http://grouper.ieee.org/groups/754/email/pdfq3pavhBfih.pdf
for
How to Print Floating Point Numbers Accurately
by
Guy L. Steele Jr.
Sun Microsystems Laboratories
and
Jon L White
Lisp Wizard
.................
Incidentally, if you remove the base-10 requirement, your problem
disappears. If you want to communicate 64-bit quantities between
machines exactly, that's not difficult.
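RJF's point about exchanging 64-bit quantities exactly can be sketched in a few lines of C: ship the bit pattern as a hex string and rebuild it on the other side (the function names are mine, for illustration):

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>

/* Encode a double as a 16-character hex string of its bit pattern. */
void double_to_hex(double x, char out[17]) {
    uint64_t u;
    memcpy(&u, &x, sizeof u);
    snprintf(out, 17, "%016llx", (unsigned long long)u);
}

/* Decode the string back to exactly the same double, bit for bit. */
double hex_to_double(const char *s) {
    uint64_t u = (uint64_t)strtoull(s, NULL, 16);
    double x;
    memcpy(&x, &u, sizeof x);
    return x;
}
```

On the common platforms where integer and floating-point byte order agree, the string "3ff0000000000000" decodes to exactly 1.0 everywhere, and no decimal conversion ever touches the value.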
RJF
The problem started as something in Sage, where a result is printed to
the screen in base 10. A record is kept of that number, then used as a
test. (I assume someone determines the result is reasonable before
making it a test.)
Then when the program is run at a different date, with a different
version, a different operating system etc., the answers are compared.
Since the program shows numbers in base 10, the comparison is
necessarily of the base-10 output.
> I am guessing that you want a program that, given an IEEE binary double
> float P, will produce a string of digits of minimal length which, when
> read back in, will reproduce exactly the internal floating-point number P.
No, I was not looking for any such program.
What I wanted to do was to compute a result in Mathematica on two
different processors and determine whether Mathematica gives identical
answers on the two processors, and whether either one agrees with Sage
etc. on that particular processor.
The differences observed in Sage are probably a result of rounding
errors, but it could be a library function that is broken in Solaris.
Having another method of computing the result would be useful.
Recently MPFR was failing some tests on the Sun T5240 at the
University of Washington, but worked fine on my machine at home. I
thought it was a gcc bug, as it only happened with some versions of
gcc. It actually was a Solaris bug in the library function used on
that architecture (sun4v), but not on my machine (sun4u).
> Stated this way, it is a classic problem in binary-decimal and
> decimal-binary conversion. It would be entirely coincidental if someone
> familiar with mathematica was also conversant in this matter.
> Your implicit assumption that this is easy or has been solved by
> mathematica and all you need to know is the proper command, is most
> likely false. (Though who knows what lurks in there :) )
I'd be surprised if someone at Wolfram Research did not know quite a
bit about the subject. They clearly employ some pretty bright people,
and representing numbers in base 10 is clearly an important
requirement for a program like Mathematica. Printing the binary
representation of a number is generally less useful to humans.
Perhaps given I've already stated the reasons for wanting this, they
are not all queuing up to answer.
> For a paper on this topic, see http://grouper.ieee.org/groups/754/email/pdfq3pavhBfih.pdf
>
> for
> How to Print Floating Point Numbers Accurately
> by
> Guy L. Steele Jr.
> Sun Microsystems Laboratories
> and
> Jon L White
> Lisp Wizard
Thank you, I'll take a read.
> Incidentally, if you remove the base-10 requirement, your problem
> disappears. If you want to communicate 64-bit quantities between
> machines exactly, that's not difficult.
> RJF
No, but this was designed to compare numbers as they are displayed to
humans.
Use NumberForm with a large value for the second argument,
rather than N:
In[1]:= NumberForm[Pi/10.0, Infinity]
Out[1]//NumberForm= 0.3141592653589793
You might also look into the RealDigits command, depending on exactly
what you want to do.
> I would have thought that possible, but I do not know how. Perhaps
>
> In[2]:= N[Pi/10,N[MachinePrecision]]
> Out[2]= 0.3141592653589793
No, that will produce an arbitrary precision number whose precision
happens to be $MachinePrecision. It still works like an arbitrary
precision number in subsequent computations, however.
Mark McClure
Thank you Mark.
As you may have seen on sage-devel, gcc and Sun Studio give
different answers on SPARC, but they both give the same answer on
OpenSolaris and HP-UX.
dave
The truncation is due to an interface display setting, PrintPrecision,
which is set by default to 6. Change this to a larger number in the
Option Inspector, or from the command line via
SetOptions[$FrontEnd, PrintPrecision -> 100]
or
SetOptions[EvaluationNotebook[], PrintPrecision -> 100]
> What I wanted to do was to compute a result in Mathematica on two
> different processors and determine if Mathematica gives identical
> answers on the two processors, whether either one agrees with Sage etc
> on that particular processor.
>
> The differences observed in Sage are probably a result of rounding
> errors, but it could be a library function that is broken in Solaris.
> Having another method of computing the result would be useful.
>
>
Probably? Maybe? Your test may or may not be identifying differences
in numerical computation. It may be identifying differences in
binary-to-decimal and decimal-to-binary conversion on the two machines
(or even the same machine with different library versions).
If you want to see if two floating-point numbers have the same bit
pattern, then the simplest way is to compare their bit patterns.
Comparing the strings of characters that represent nearby decimal
numbers, or worse, reading those strings of characters and making them
into floats again, is hazardous.
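RJF's "compare the bit patterns" suggestion is only a few lines in C. A sketch (the function name is mine):

```c
#include <stdint.h>
#include <string.h>

/* Bitwise equality of two doubles: true only if the IEEE-754 bit
   patterns match exactly.  Unlike ==, this involves no decimal
   conversion at all, and it distinguishes +0.0 from -0.0. */
int same_bits(double a, double b) {
    uint64_t ua, ub;
    memcpy(&ua, &a, sizeof ua);
    memcpy(&ub, &b, sizeof ub);
    return ua == ub;
}
```

Applied to the two decimal strings seen in this thread, same_bits reports them as different doubles, which is exactly the information the string comparison was trying (hazardously) to recover.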
The number of people at WRI who qualify as experts in error analysis is
quite small. Perhaps zero.
RJF
InputForm is also useful for this. An advantage is that one need not
give a value to the number of digits.
In[1]:= InputForm[N[Pi]]
Out[1]//InputForm=
3.141592653589793
Another possibility, which does require a digits value, might be
SetSystemOptions["MachineRealPrintPrecision" -> 16];
Daniel Lichtblau
Wolfram Research
>
> InputForm is also useful for this. An advantage is that one need not
> give a value to the number of digits.
Unfortunately, Dave Kirby is trying to do something that he doesn't
exactly describe, and instead is trying to get people to solve a
(harder) problem that he believes (I think incorrectly) has to be solved
as a logical consequence of his initial task.
Mathematica is probably unsuitable for his real task, which is, I
believe to take the computed results of a "portable" computer software
system (Sage), and run test suites in different environments with
different hardware, operating systems, compilers, etc.
If the test suite results are not identical, he believes this is a flaw
and that one (or more) systems must be thrashed into obedience. This is
so difficult, that he's pinned his hopes on his hypothesis that if only
the systems displayed their results better, that the test suites would
have identical results, and he could have a happy new year.
Firstly, it is entirely conceivable that two numerical computations run
on similar but not identical systems can have results that are
different, and that both are correct.
Secondly, Mathematica is notably inadequate for the task of comparing
floating point numbers.
In Mma 6.0, consider these two numbers:
ee = 6121026514868073 * 2.0^(-51)
ff = 6121026514868074 * 2.0^(-51)
These are both Machine Precision numbers.
They are different.
They even print differently, namely (use FullForm)
2.7182818284590455
2.718281828459045
Their difference is non-zero, namely
4.44089 x 10^(-16), which seems a peculiarly precise number.
But it is just (1/2)^51, as a float.
But now ask Mma if ee is equal to ff.
ee==ff returns True, which of course they are not.
Good enough for government work, perhaps.
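For contrast, == on doubles in C is exact, so the same two numbers compare unequal there. A sketch using ldexp (both integers are below 2^53, so constructing the values involves no rounding; the helper names are mine):

```c
#include <math.h>

/* RJF's two neighbouring machine numbers, built exactly:
   integer * 2^-51 with the integer exactly representable. */
double make_ee(void) { return ldexp(6121026514868073.0, -51); }
double make_ff(void) { return ldexp(6121026514868074.0, -51); }
```

Their difference comes out as exactly 2^-51, the peculiarly precise 4.44089 x 10^-16: the subtraction of two doubles with the same exponent is itself exact.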
Now if Dave wants to compare two results in a test suite, he should
modify the test suite so that the results are expressed as exact integer
or rational or guaranteed-to-be-exactly-representable-as-float numbers.
(This includes 2.0 and 0.5 for binary systems.)
so instead of comparing the kind of iffy-converted numbers --
2.718281828459045 to
2.7182818284590455
one can compare
ee = 6121026514868073 to
ff = 6121026514868074 [both times 2.0^(-51)]
Now the question again as to whether the values of certain computations
might be different on (say) 64-bit-double-float Solaris vs
80-bit-extended-double-float Intel, and whether this constitutes a bug
in Sage or one of the platforms.... Not necessarily.
RJF
For this purpose, the best one can hope for is that the "reference"
system (Mathematica, in this case) behaves in a way that is similar to
the system being tested. That gives some level of confidence that either
both are correct, or both suffer from the same library bug. It more or
less rules out the possibility that the tested system has a bug all
its own.
> If the test suite results are not identical, he believes this is a flaw
> and that one (or more) systems must be thrashed into obedience. This is
> so difficult, that he's pinned his hopes on his hypothesis that if only
> the systems displayed their results better, that the test suites would
> have identical results, and he could have a happy new year.
>
> Firstly, it is entirely conceivable that two numerical computations run
> on similar but not identical systems can have results that are
> different, and that both are correct.
This can happen in cases where both are IEEE compliant and use 64 bit
double floats (not sure if it requires that one but not both use 80
bit float registers).
> Secondly, Mathematica is notably inadequate for the task of comparing
> floating point numbers.
Depends how you do the comparison...
> In Mma 6.0, consider these two numbers:
>
> ee = 6121026514868073 * 2.0^(-51)
> ff = 6121026514868074 * 2.0^(-51)
>
> These are both Machine Precision numbers.
> They are different.
> They even print differently, namely (use FullForm)
>
> 2.7182818284590455
> 2.718281828459045
>
> Their difference is non-zero, namely
> 4.44089 x 10^(-16). Which seems a peculiarly precise number.
> But it is just (1/2)^51, as a float.
>
> But now ask Mma if ee is equal to ff.
>
> ee==ff returns True, which of course it is not.
>
> Good enough for government work, perhaps.
Equal[] in Mathematica uses a bit (actually several bits...) of slop.
This is documented and I do not propose to argue whether it is a
beature or a fug.
One can instead compare the bits, if one so desires.
> Now if Dave wants to compare two results in a test suite, he should
> modify the test suite so that the results are expressed as exact integer
> or rational or guaranteed-to-be-exactly-representable-as float numbers.
> (This includes 2.0 and 0.5 for binary systems).
>
> so instead of comparing the kind of iffy-converted numbers --
> 2.718281828459045 to
> 2.7182818284590455
>
> one can compare
>
> ee = 6121026514868073 to
> ff = 6121026514868074 [both times 2.0^(-51)]
>
> Now the question again as to whether the values of certain computations
> might be different on (say) 64-bit-double-float Solaris vs
> 80-bit-extended-double-float Intel, and whether this constitutes a bug
> in Sage or one of the platforms.... Not necessarily.
>
> RJF
For this last to work, one needs access to the double's exponent field
in order to know what power of two to use. If you have that, chances
are you have the necessary functionality to get the entire set of bits
(ordered sequentially, that is, bypassing internal storage issues such
as endian-ness).
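Getting at the exponent field Daniel mentions can be sketched in C like this (the helper name is mine; memcpy to a 64-bit integer sidesteps the endianness question, since the shifts then see the bits in value order):

```c
#include <stdint.h>
#include <string.h>

/* Pull apart an IEEE-754 double into its three fields. */
void double_fields(double x, int *sign, int *exponent, uint64_t *significand) {
    uint64_t u;
    memcpy(&u, &x, sizeof u);
    *sign        = (int)(u >> 63);
    *exponent    = (int)((u >> 52) & 0x7FF);   /* biased by 1023 */
    *significand = u & 0xFFFFFFFFFFFFFULL;     /* 52 stored bits  */
}
```

For 1.0 the stored exponent is 1023 (the bias itself) and the 52 stored significand bits are all zero, since the leading 1 bit is implicit.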
My guess is that if the Sage code is identical on both platforms, you
can pretty much rule out a bug there, and ascribe the difference either
to processor differences or to a bug in library code. Figuring out which
of these is the case can be tricky.
Daniel Lichtblau
Wolfram Research
> My guess is if the Sage code is identical on both platforms, you can
> pretty much rule out a bug there and ascribe it either as a feature of
> processor differences or a bug in library code. Figuring out which of
> these is the case can be tricky.
>
> Daniel Lichtblau
> Wolfram Research
Thank you Daniel. The help with Mathematica was much appreciated. It's
always good to compare results with other systems, whether that be
Mathematica, or a simple C program on another architecture.
We have got to the bottom of this now I believe.
It would appear that the problem (if there is one), is with the exp()
implementation on SPARC. If one builds the following small C program
#include <math.h>
#include <stdio.h>
int main() {
printf("%.16lf\n",exp(1.0));
}
one gets a different answer using gcc, than one gets if one uses the
Sun compiler.
drkirkby@kestrel:~$ /opt/sunstudio12.1/bin/cc -lm exp.c
drkirkby@kestrel:~$ ./a.out
2.7182818284590455
drkirkby@kestrel:~$ gcc exp.c
drkirkby@kestrel:~$ ./a.out
2.7182818284590451
But that could not explain the problem, as Sage has been built with
GCC, not the Sun compiler.
Gonzalo Tornaria wrote a very slightly more complex C program
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
double x = 1.0;
if (argc>1)
x = atof(argv[1]);
printf("%.16lf\n",exp(x));
}
Using his program, both compilers produce the same answer
drkirkby@kestrel:~$ gcc -lm exp2.c
drkirkby@kestrel:~$ ./a.out
2.7182818284590455
drkirkby@kestrel:~$ /opt/sunstudio12.1/bin/cc -lm exp2.c
drkirkby@kestrel:~$ ./a.out
2.7182818284590455
It would appear that in the case of a simple call to exp(1.0), GCC
evaluates it at compile time (constant folding). In the more complex
program, where GCC could not know in advance what to compute, it had
to call the Sun library.
In fact, it's interesting that one can compile the first version with
gcc *without* linking the maths library in. However, the second
version needs the maths library.
drkirkby@kestrel:~$ gcc exp.c
drkirkby@kestrel:~$ gcc exp2.c
Undefined first referenced
symbol in file
exp /var/tmp//cc9rS9e9.o
ld: fatal: Symbol referencing errors. No output written to a.out
collect2: ld returned 1 exit status
drkirkby@kestrel:~$ gcc -lm exp2.c
drkirkby@kestrel:~$
I'm guessing that, given SPARC uses 64-bit registers for all
computations but Intel/AMD use 80 bits internally, Sun feels a
more accurate but presumably slower library is warranted on x86 than
on SPARC.
Dave
<SNIP>
> > Anyway, how can I get Pi/10 (or any result) to the maximum possible
> > accuracy, without resorting to the slower software methods to compute
> > the result?
>
> > Dave
>
> The truncation is an interface display setting PrintPrecision, which
> is set by default to 6. Change this to a large number in the option
> inspector, or from the command line via
> SetOptions[$FrontEnd, PrintPrecision -> 100]
> or
> SetOptions[EvaluationNotebook[], PrintPrecision -> 100]
Thank you Jon,
SetOptions[$FrontEnd, PrintPrecision -> 100]
does not want to work for me on the command line - by "command line" I
mean after typing
$ math
at a Unix prompt.
In[1]:= SetOptions[$FrontEnd, PrintPrecision -> 100]
SetOptions::optnf: PrintPrecision is not a known option for Null.
Out[1]= SetOptions[Null, PrintPrecision -> 100]
I also note from the manual
http://reference.wolfram.com/mathematica/ref/PrintPrecision.html
that "This function has not been fully integrated into the long-term
Mathematica system, and is subject to change."
Daniel's solution using MachineRealPrintPrecision or InputForm seem to
work better for me.
Dave
Daniel,
I should have replied in a bit more detail to your helpful post, but
if you notice the time I replied, and convert it to UK time, you might
guess I'd had a few drinks! Anyway, happy new year to you.
I'd appreciate it if you could answer the following couple of questions.
1) Given that E cannot be represented exactly as an IEEE machine-
precision number, i.e. E cannot be written exactly in the form
(-1)^s × c × 2^q
where
s = sign bit
c = significand (or 'coefficient')
q = exponent
(see http://en.wikipedia.org/wiki/IEEE_754-2008)
the following does not make a lot of sense to me.
bash-3.2$ math
Mathematica 7.0 for Sun Solaris x86 (64-bit)
Copyright 1988-2009 Wolfram Research, Inc.
In[1]:= SetSystemOptions["MachineRealPrintPrecision" -> 1000]
Out[1]= MachineRealPrintPrecision -> 17
In[2]:= machine=N[E]
Out[2]= 2.718281828459045
In[3]:= betterApproximation=N[E,100]
Out[3]=
2.7182818284590452353602874713526624977572470936999595749669676277240\
> 76630353547594571382178525166427
In[4]:= machine-betterApproximation
Out[4]= 0.
In[5]:= N[ machine-betterApproximation,100]
Out[5]= 0.
I would have expected that there was a difference of around 10^-16
between 'machine' and 'betterApproximation'. Perhaps Mathematica drops
the precision of all computations to machine precision once any of the
values is machine precision. I think that is what is done in Sage, but
I am not 100% sure. (I don't use Sage, but I have been involved in
porting it to Solaris.)
It would be good if there were an option in the front end of
Mathematica to show machine-precision numbers in a different colour or
a different font, to make it more obvious what is machine and what is
arbitrary precision. Integers are pretty easy to spot, but machine
vs arbitrary precision numbers are not so easy to tell apart.
Dave
> Equal[] in Mathematica uses a bit (actually several bits...) of slop.
> This is documented and I do not propose to argue whether it is a
> beature or a fug.
> Daniel Lichtblau
> Wolfram Research
Certainly in the case of
In[2]:= Equal[12.1, 12.2]
Out[2]= False
Equal[]'s slop could definitely be made a feature if it took an option
for the number of bits of slop, rather than the fixed 7 which is
documented.
It would not be very practical when using '==' though.
Dave
> Unfortunately, Dave Kirby is trying to do something that he doesn't
> exactly describe, and instead is trying to get people to solve a
> (harder) problem that he believes (I think incorrectly) has to be solved
> as a logical consequence of his initial task.
David Kirkby is not a mathematician. He just did enough maths to get
an electrical engineering degree and avoided as much maths as possible
when doing a PhD. Maths is not one of his strongest subjects. Neither
is he a computer scientist - his PhD is in medical physics. Neither
does he consider himself a Solaris expert. He admits to knowing no
SPARC assembler, though he does claim to have written assembly code on
the Intel x86/87 chips and VAX a couple of decades ago.
The fact he wished to check answers on different maths software
(Mathematica, Sage, simple C programs) different operating systems
(Solaris, Linux, OS X, AIX, HP-UX) and different compilers (GCC and
native compilers), does not make his efforts worthless, especially
when considering his background. He might not necessarily know the
best way to tackle a problem, he might not know the right terminology,
so he will have to rely on what he does know.
Perhaps if you helped him a bit more, rather than slagging him off and
sending him stupid private emails, things might be better. Your tone
does have a habit of winding people up.
David Kirkby.
> Equal[]'s slop could defiantly be made a feature if it took an option
> for the number of bits of slop, rather than the fixed 7 which is
> documented.
>
> It would not be very practical when using '==' though.
>
> Dave
The notion of "nearly equal" is traditionally made concrete in two ways.
Relative error and Absolute error ..
"7 bits of slop" is neither, unless the total number of bits is also
specified.
And "automatic precision tracking" in Mathematica makes a total hash of
the useful concepts defining error that have emerged from the work of
numerical analysts in the last 70 years or so.
Setting the number of bits of slop would definitely (or defiantly?)
be made a feature, but it would be embroidery on a bad idea.
If Mma examples and tutorials supported the suggestion of
"RelativeError[a,b]<c" rather than "Equal[a,b]" Or
"AbsoluteError[a,b]<c" {Abs[a-b]<c would do..}
it might result in much better programs and probably some better
educated programmers.
RJF
PS - WRI programmers may be employed to make this work with junk like
Infinity or intervals.
You can use the Precision function to determine the precision of a
number in Mathematica. In this particular case, if you execute
Precision[Out[5]], you get MachinePrecision. Generally, machine
precision numbers mixed with arbitrary precision numbers will yield
machine precision numbers. If you set machine = N[E,
$MachinePrecision], you might get what you were expecting.
Mark McClure
I thought that was probably the case.
> If you set machine = N[E,
> $MachinePrecision], you might get what you were expecting.
>
> Mark McClure
No, it did not.
In[1]:= machine = N[E,$MachinePrecision]
Out[1]= 2.718281828459045
In[2]:= betterApproximation=N[E,100]
Out[2]=
2.7182818284590452353602874713526624977572470936999595749669676277240\
> 76630353547594571382178525166427
In[3]:= machine-betterApproximation
Out[3]= 0. x 10^-15
I also tried:
In[8]:= goodApproximation=N[E,20]
Out[8]= 2.7182818284590452354
In[9]:= betterApproximate=N[E,40]
Out[9]= 2.718281828459045235360287471352662497757
In[10]:= goodApproximation - betterApproximate
Out[10]= 0. x 10^-19
I guess that since 'goodApproximation' only has 20 digits of
significance, the result has too. That makes sense when I think
about it.
Dave
OK, but what is the actual problem? I do not get it.
Can you state it in 3 or 4 lines, concretely - maybe with a specific
example - for someone like me who uses Maple, not Mathematica?
> Ok, but what is the actual problem? I do not get it:
>
> Can you state it in 3 or 4 lines in a concrete - may be by a special
> example - version for someone like me not using Mathematica (but Maple)?
Computing E in a simple C program
drkirkby@kestrel:~$ cat exp2.c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
double x = 1.0;
if (argc>1)
x = atof(argv[1]);
printf("%.16lf\n",exp(x));
}
on Solaris SPARC produces a value which differs from the value
produced by the same C program on other systems. They all agree the
value is
2.7182818284590451
Solaris 10 (SPARC processor) gives
2.7182818284590455
Systems which give the more accurate 2.7182818284590451 include
* Open Solaris (aka Solaris 11) on Intel Xeon processor
* AIX - unknown processor
* HP-UX - PA-RISC processor
* Linux - x86 processor
* OS X - unknown processor.
(The actual value of E, rounded to the same precision is
2.7182818284590452..., but that can't be represented as a double
precision number).
The problem was whether or not this was a bug in the maths library on
Solaris.
My original question on sci.math.symbolic was related to the fact that
I wanted to get another opinion on the value by testing with
Mathematica on Solaris SPARC. (Richard knew more than what I posted
here, since the thread started on the sage-devel mailing list, after I
noted a test failure in Sage's test suite on SPARC.)
Dave
For a different reason I stumbled over this:
http://panda.ispras.ru/~kuliamin/docs/MathTest-2009-en.pdf
Note, however, that the displayed and the internal value may differ
through binary <--> decimal conversion. *One* way would be to compute
exp(1) with increased precision, convert it to IEEE, and then decide
whether you want to see the binary value or the value produced by the
routines in the print command (rounding modes and bit-count issues
aside).
I think the nearest IEEE for exp(1) is 6121026514868073/9007199254740992*pow(2,2)
Yes, which differs from the real value by around 1.44565x10^-16. But
the value on Solaris SPARC is off by more than that; it's one bit less
accurate than it would ideally be.
It would be interesting to compute exp(x) for a million or so random
values of x using the floating-point processor, then compare the
results to more accurate values obtained via high-precision arithmetic.
The Sun implementation of exp(x) on the SPARC processor is less
accurate than any other implementation tested for the specific case
of x=1, but for other values of x it may be a completely different
story.
I suspect if one found some values of x where other implementations
are bad, they might well be better on SPARC. But without testing this,
I don't know, and I don't have the time to test it.
My main concern is to get Sage building on OpenSolaris. It does now
build on Solaris 10, though some issues need resolving.
Dave
Note that the 3 numbers 2.7182818284590451, 2.7182818284590455 and
4 * 6121026514868073/9007199254740992 do have the same binary value
in IEEE, 10.101101111110000101010001011000101000101011101101001 and
the error is ~ 2/3 absolute and ~ 1/4 relative w.r.t. DBL_EPSILON
(while the error can never be 0 as exp(1) is not rational).
The above binary approximation for exp(1) is 2.7182818284590450907
having DBL_MANT_DIG=53.
Compare before printing; otherwise you are just talking about the
results of printing. But ensure that the FPU's extended precision is
not used, just the 53-bit mantissa.
Roughly I would take 14 - 15 correct decimals (depending on the system)
plus 3 decimals, hence 18 decimal places for input, if I want
to compare (you have 1+16).
It also depends on the settings for the compiler (for OS and hardware);
certainly the handbook should talk about that.
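That rule of thumb matches the standard result that 17 significant decimal digits (C11's DBL_DECIMAL_DIG) always suffice to round-trip an IEEE double. A sketch (the function name is mine), assuming a correctly rounding strtod:

```c
#include <stdio.h>
#include <stdlib.h>

/* Print a double with 17 significant digits and read it back.
   With IEEE-754 binary64, the result is the original double,
   bit for bit. */
double roundtrip17(double x) {
    char buf[64];
    snprintf(buf, sizeof buf, "%.17g", x);
    return strtod(buf, NULL);
}
```

Fewer digits can collapse neighbouring doubles onto the same decimal string, which is exactly the hazard being discussed in this thread.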
For more on actual errors you can read in the linked article.
But all these implementation issues do not have much to do with
'sci.math.symbolic'.
*snip*
> The problem was whether or not this was a bug in the maths library on
> Solaris.
>
> My original question sci.math.symbolic was related to the fact I
> wanted to get another opinion on the value by testing with Mathematica
> on Solaris SPARC. (Richard knew more than what I posted here, since
> the thread started on the sage-devel mailing list, after I noted a
> test failure of Sage's test-suite on SPARC.)
Since you're trying to determine if the Solaris math libraries have a bug,
perhaps you should ask about this on the comp.unix.solaris or
sci.math.num-analysis newsgroups, if you haven't already; you can access
either via Google Groups (http://groups.google.com) if your news server
doesn't carry them.
--
Steve Lord
sl...@mathworks.com
comp.soft-sys.matlab (CSSM) FAQ: http://matlabwiki.mathworks.com/MATLAB_FAQ
> Perhaps if you helped him a bit more, rather than slag him off and
> send him stupid private emails, things might be better. Your tone does
> have a habit of winding people up.
>
> David Kirkby.
I (mistakenly, it appears) thought that instructing Dr. Kirby in private
email would be more productive.
1. Sage should not require that all platforms provide the same
to-the-last-bit numerical results from all manufacturer- or
third-party-supplied numerical libraries, since even good libraries may
differ. Slightly. So give it up.
2. Relying on decimal printouts of decimal digits that extend somewhat
beyond the implied precision of the internal binary bits of a number,
to kind of get a "whiff" of the trailing bit, is not a great policy,
when libraries can indeed differ as to how many such digits should be
printed, e.g. the minimum number, a fixed number, or a user-supplied
number of digits. If you want to compare the bits in a float, you can
do so, even in Mathematica, by arithmetic, and you don't need to print
a binary file.
3. Expertise in physics, computer science, mathematics suggests that you
could learn some numerical analysis / error analysis, perhaps by
reading. It does not suggest that you already know it.
RJF