I am obtaining different results when I run the same C source code
compiled under Borland C++ 5.2 and Microsoft Visual C++ 5.0 on a Pentium
Pro Windows NT 4.0 platform.
The program where it occurs consists of hundreds of iterations of
transcendental function calls and matrix inversions. It masks off
floating-point exceptions using _control87 and lets NaNs and INFs
propagate through. The differences do not appear in the first few
iterations, but only after several; they seem to start small, then grow.
The long double type is NOT being used anywhere.
Borland gives the same results when compiled 16-bit as it does when
compiled 32-bit.
I have been unable to reproduce this problem in a small test program
that looks at the limits of precision after calling some transcendental
functions.
The only difference I can find (other than the different-sized long
double types) is that Borland appears to use the floating-point control
word differently than Microsoft does. Borland uses the same 16-bit
control word that is specified in Intel's documentation; Microsoft uses
a 32-bit control word with different mask values.
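To make the control-word difference concrete: Borland's float.h constants match the Intel x87 hardware bit layout directly, while Microsoft's float.h defines its own abstract 32-bit values (in a different order) and _control87 translates them to hardware bits internally. The constants below are from the respective headers; the translation function itself is only an illustrative sketch, not either vendor's actual code.

```c
/* Intel x87 control word: exception mask bits 0-5
 * (Borland's EM_* constants in float.h use this same layout). */
#define X87_IM 0x0001  /* invalid operation   */
#define X87_DM 0x0002  /* denormal operand    */
#define X87_ZM 0x0004  /* zero divide         */
#define X87_OM 0x0008  /* overflow            */
#define X87_UM 0x0010  /* underflow           */
#define X87_PM 0x0020  /* precision (inexact) */

/* Microsoft float.h abstract mask values -- note the different order
 * and the denormal bit sitting far outside the low 16 bits. */
#define MS_EM_INEXACT    0x00000001
#define MS_EM_UNDERFLOW  0x00000002
#define MS_EM_OVERFLOW   0x00000004
#define MS_EM_ZERODIVIDE 0x00000008
#define MS_EM_INVALID    0x00000010
#define MS_EM_DENORMAL   0x00080000

/* Illustrative helper (not vendor code): map a Microsoft-style
 * exception mask to the raw x87 control-word bits. */
unsigned ms_to_x87(unsigned ms)
{
    unsigned hw = 0;
    if (ms & MS_EM_INVALID)    hw |= X87_IM;
    if (ms & MS_EM_DENORMAL)   hw |= X87_DM;
    if (ms & MS_EM_ZERODIVIDE) hw |= X87_ZM;
    if (ms & MS_EM_OVERFLOW)   hw |= X87_OM;
    if (ms & MS_EM_UNDERFLOW)  hw |= X87_UM;
    if (ms & MS_EM_INEXACT)    hw |= X87_PM;
    return hw;
}
```

So passing a Borland-style mask value straight to Microsoft's _control87 (or vice versa) would mask a different set of exceptions than intended, which is worth ruling out; by itself, though, a different mask encoding should not change results once the same hardware bits end up set.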
I would VERY much appreciate any help on this matter.
Thanks in advance,
Chris Bratteli
brat...@tc.umn.edu
For floating-point calculations, minor differences in coding will
almost always produce different results, and two different compilers
will almost always produce different machine code. Look at the
generated (and optimized) code from both to see.
How large are the differences, and which is closer to correct?
Josh
Chris Bratteli wrote in message <34775BB8...@tc.umn.edu>...
>I am obtaining different results when I run the same c-source code
>compiled under Borland c++ 5.2 and Microsoft Visual c++ 5.0 on a Pentium
>
>Pro Windows NT 4.0 platform.
>