This is regarding the floating-point issue and his analysis of why it happened.
I wanted to show here that doing the same thing in a CAS
removes the problem (as would be expected, nothing surprising,
since it uses pure integers). Of course one can argue about speed
and performance in real, large numerical problems and all of that.
But I just thought some here might like to see it.
It is finding the determinant of a 2 by 2 matrix of integers.
The correct answer is 1.
I tried the example shown on some software I have; here is
the result. I ran it on Matlab 2012a, Matlab 2012a/symbolic, and Octave.
>> This is regarding the floating-point issue and his analysis of why it happened.
>> ---- Matlab 2012a --------------
>> EDU>> X = [ 63245986, 102334155
>> 102334155, 165580141];
>> EDU>> det(X)
> The correct value in double precision, round-to-nearest,
> would be 2.0, while in long double it would be 1.0,
> so that is strange.
I do not know. Matlab uses double by default. I never
heard of a long double in Matlab.
But I find this whole thing annoying sometimes:
one types integers into these packages, but
because everything is done in floating point (1 is 1.0 in
Matlab, of course), one ends up losing accuracy compared to
using a CAS, where 1 is really 1 and not 1.0.
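To make that concrete (the following sketch is mine, in Python, not from
the original posts), here is the same 2 by 2 determinant done once in
IEEE doubles, which is what Matlab's default double type amounts to, and
once in exact integer arithmetic, the way a CAS would do it:

    a, b = 63245986, 102334155
    c, d = 102334155, 165580141

    # Both products are about 1.05e16, larger than 2**53, so as doubles
    # they are rounded before the subtraction ever happens.
    det_float = float(a) * float(d) - float(b) * float(c)

    # Python integers are arbitrary precision, so this is exact.
    det_exact = a * d - b * c

    print(det_float)   # 2.0
    print(det_exact)   # 1

The two products differ only in their last few units, and after each is
rounded to the nearest double the difference comes out as 2 instead of 1.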
Here is a small example. Cross-correlation between two
sequences of integers:
These small differences can carry over into other computations, of course,
causing further loss of accuracy.
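As a made-up illustration (mine, in Python) of the kind of discrepancy
meant: a short cross-correlation computed once in doubles and once in
exact integers, with hypothetical sequences (not the original ones)
chosen so that the products exceed 2**53.

    # Hypothetical integer sequences, large enough that their pairwise
    # products exceed 2**53 and therefore round when done in doubles.
    a = [2**31 + 1, 2**31 + 3, 2**31 + 7]
    b = [2**31 + 5, 2**31 + 11, 2**31 + 13]

    def xcorr(x, y, cast):
        # Cross-correlation r[k] = sum over i of x[i]*y[i+k], for all valid lags k.
        x = [cast(v) for v in x]
        y = [cast(v) for v in y]
        n, m = len(x), len(y)
        return [sum(x[i] * y[i + k] for i in range(max(0, -k), min(n, m - k)))
                for k in range(-(n - 1), m)]

    exact = xcorr(a, b, int)     # arbitrary-precision integers, no rounding
    approx = xcorr(a, b, float)  # IEEE doubles, like the Matlab/Octave default

    for r_exact, r_float in zip(exact, approx):
        print(r_exact, r_float, r_exact - int(r_float))   # last column: rounding error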
Maybe one day, when computers become really fast and
have lots more memory, one could do things like computational
fluid dynamics and weather forecasting using nothing
but exact rational arithmetic. No more floating point!
> Maybe one day, when computers become really fast and
> have lots more memory, one could do things like computational
> fluid dynamics and weather forecasting using nothing
> but exact rational arithmetic. No more floating point!
> Maybe in 100 years from now?
It is generally believed that the size of the individual rational-number
numerators and denominators would grow exponentially with the number
of operations, increasing the cost of the operations, the memory, and
the output display. Considering that the input data is unlikely to be
exact (it comes from physical measurements), this approach would
probably be a waste of effort anyway.
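As a rough illustration of that growth (a sketch of mine, not from the
post): iterate even a very simple non-linear recurrence in exact rational
arithmetic, and the printed size of the denominator roughly doubles at
every step.

    from fractions import Fraction

    # Logistic-style map x -> (7/2) * x * (1 - x), iterated exactly.
    # After k steps the reduced denominator is 3**(2**k), so its number
    # of digits roughly doubles per iteration -- growth that is
    # exponential in the number of operations, as described above.
    x = Fraction(1, 3)
    for step in range(1, 11):
        x = Fraction(7, 2) * x * (1 - x)
        print(step, len(str(x.denominator)), "digits in the denominator")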
If what annoys you is the binary <--> decimal conversion, you could find
a system that does decimal arithmetic (still, floating-point).
If what annoys you is the introduction of error, you could find a system
that does interval arithmetic (maybe 8X slower, and 2X the memory); a
bare-bones sketch of the idea is at the end of this post.
Or use Mathematica's software floats that try to do something similar
(significance arithmetic). I'm not a fan of that, however.
Or you could use programs that have been cleverly written to include
extra computations to estimate errors (maybe 2X slower).
Or you could wait 100 years and see what comes up...
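Returning to the interval-arithmetic suggestion above, here is a
bare-bones sketch of the idea (mine, not from the thread), applied to
the determinant example from earlier. A real interval package uses
directed rounding; this sketch simply computes in round-to-nearest and
then widens every result outward by one ulp, which still gives a valid,
if slightly pessimistic, enclosure.

    import math   # math.nextafter needs Python 3.9 or later

    def widen(lo, hi):
        # Push the endpoints outward by one ulp to cover the rounding error.
        return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

    def imul(x, y):
        p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
        return widen(min(p), max(p))

    def isub(x, y):
        return widen(x[0] - y[1], x[1] - y[0])

    def interval(v):
        return (float(v), float(v))

    a, b, c, d = 63245986, 102334155, 102334155, 165580141
    det = isub(imul(interval(a), interval(d)), imul(interval(b), interval(c)))
    print(det)   # roughly (-2.0, 6.0)

The enclosure contains the true value 1, and its width is a warning that
the plain double answer (2.0) carries a few units of rounding error.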