
Why is this result not zero?


Alexandru

Jun 28, 2022, 4:04:05 PM
Why is this result not zero?

set x -9408.8
puts [expr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}]

This is a simplified demo of a more complex equation:

set x -9408.8
set y -9408.8
set z -9408.8
puts [expr {($x*$x + $y*$y + $z*$z - $x*$y - $y*$z - $z*$x)}]

Harald Oehlmann

Jun 28, 2022, 4:49:56 PM
You always get rounding with double-precision floats.

-2.980232238769531e-8

If you want exact results, use integers with a defined decimal position.

set x -94088

gives 0, so no conversion issues.

I personally sometimes split a value into two variables: a numerator and a
denominator, where the denominator is always a power of 10.
That way there are no rounding effects.

- Multiplication: multiply the numerators and multiply the denominators.
- Addition: first multiply the numerator and denominator of the value with the
smaller denominator by 10 until both denominators match, then add the
numerators (a minimal sketch of this is below).
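
A minimal Tcl sketch of that scheme, with each value kept as a
{numerator denominator} pair (the proc names fixmul/fixadd and this list
representation are just illustrative choices):

# -9408.8 is stored as {-94088 10}; Tcl integers are arbitrary precision,
# so the products below never overflow.
proc fixmul {a b} {
    lassign $a na da
    lassign $b nb db
    list [expr {$na * $nb}] [expr {$da * $db}]
}

proc fixadd {a b} {
    lassign $a na da
    lassign $b nb db
    # Scale the value with the smaller denominator by 10 until both match,
    # then add the numerators.
    while {$da < $db} { set na [expr {$na * 10}]; set da [expr {$da * 10}] }
    while {$db < $da} { set nb [expr {$nb * 10}]; set db [expr {$db * 10}] }
    list [expr {$na + $nb}] $da
}

set x  {-94088 10}
set xx [fixmul $x $x]                    ;# x*x, exactly
lassign $xx nxx dxx
set negxx [list [expr {-$nxx}] $dxx]     ;# -(x*x)
set sum {0 1}
foreach term [list $xx $xx $xx $negxx $negxx $negxx] {
    set sum [fixadd $sum $term]
}
puts $sum                                ;# prints "0 100", i.e. exactly zero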

Financial software has data types with decimal point position at
position 2 or 3 for those issues.

Hope this helps,
Harald

Robert Heller

Jun 28, 2022, 5:06:25 PM
Floating point math 101: NEVER, EVER do an == compare of a floating point
computation to 0.0 as part of a loop condition. If you do, the
loop may never stop. ALWAYS do a < compare against some very small number (a "fuzz"
factor).
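
One way that advice might look in Tcl (the relative tolerance of 1e-12 is
only an illustrative choice, not a universal constant):

set x -9408.8
set result [expr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}]
# Scale the "fuzz" to the magnitude of the operands instead of using a fixed
# absolute value; a fixed tolerance like 1e-9 would still call this result nonzero.
set eps [expr {1e-12 * abs($x*$x)}]
if {abs($result) < $eps} {
    puts "effectively zero"
} else {
    puts "nonzero: $result"
}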

On my machine I got -2.980232238769531e-8, which is pretty close to
zero.

The magic words here are "roundoff error". Floating point math on a computer
is always going to be off by some amount of roundoff error, so you will almost
never get a result of *exactly* 0.0, and you should never count on getting 0.0,
even if mathematically the result should be exactly 0.0.

Oh, an important factoid: many "exact" *decimal* fractions have no finite
representation in binary. I expect that .8 is one of them.

Yep:

% set x .8
.8
% expr {$x*$x}
0.6400000000000001
................^ pesky roundoff bit!



--
Robert Heller -- Cell: 413-658-7953 GV: 978-633-5364
Deepwoods Software -- Custom Software Services
http://www.deepsoft.com/ -- Linux Administration Services
hel...@deepsoft.com -- Webhosting Services

Rich

Jun 28, 2022, 5:14:28 PM
See https://floating-point-gui.de/ (and read most of the material
there).

What you are seeing is /normal/ for binary floating point math.

Alexandru

Jun 28, 2022, 5:18:58 PM
Thank you all.
I'm actually aware of those numerical issues.
But 1e-8 surprised me a lot, because it's actually a large number in my opinion.
I wonder if the same operation in C would give better results.
Another interesting thing: changing the order of the terms leads to a clean zero:

expr $x*$x - $x*$y + $y*$y - $y*$z + $z*$z - $z*$x
is 0.0

Siri Cruise

Jun 28, 2022, 7:32:07 PM
In article
<f579a874-342c-42c7...@googlegroups.com>,
Alexandru <alexandr...@meshparts.de> wrote:

> Why is this result not zero?
>
> set x -9408.8
> puts [expr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}]

Because real arithmetic is not guaranteed to be utterly precise. The
whole field of numerical analysis is about dragging computers
kicking and screaming into sufficient precision.

If you subtract two nearly equal reals, the result has almost no
significant bits.
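
A quick Tcl session that shows the effect (the constants are arbitrary, and
the exact digits may vary slightly, but the pattern is the point):

% expr {1.0e8 + 0.1}
100000000.1
% expr {(1.0e8 + 0.1) - 1.0e8}
0.09999999403953552

The two values agree in nearly all of their leading bits, so the subtraction
leaves little more than the rounding error of storing 0.1 at that magnitude,
which shows up around 1e-8, the same scale as the error in the original
expression.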


Robert Heller

Jun 28, 2022, 7:38:25 PM
At Tue, 28 Jun 2022 14:18:53 -0700 (PDT) Alexandru <alexandr...@meshparts.de> wrote:

>
> Rich wrote on Tuesday, June 28, 2022 at 23:14:28 UTC+2:
> > Alexandru <alexandr...@meshparts.de> wrote:
> > > Why is this result not zero?
> > >
> > > set x -9408.8
> > > puts [expr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}]
> > >
> > > This is a simplified demo of a more complex equation:
> > >
> > > set x -9408.8
> > > set y -9408.8
> > > set z -9408.8
> > > puts [expr {($x*$x + $y*$y + $z*$z - $x*$y - $y*$z - $z*$x)}]
> > See https://floating-point-gui.de/ (and read most of the material
> > there).
> >
> > What you are seeing is /normal/ for binary floating point math.
>
> Thank you all.
> I'm actually aware of those numerical issues.
> But 1e-8 surprised me a lot, because it's actually a large number in my opinion.
> I wonder if the same operation in C would give better results.

Not likely. Tcl is coded in C, so the result should be the same. And the
underlying FPU is the same, so the bits would be the same.

> Another interesting thing: changing the order of the terms leads to a clean zero:
>
> expr $x*$x - $x*$y + $y*$y - $y*$z + $z*$z - $z*$x
> is 0.0




Siri Cruise

Jun 28, 2022, 7:39:01 PM
In article <9Y6dnbpANajV8Cb_...@giganews.com>,
Robert Heller <hel...@deepsoft.com> wrote:

> Floating point math 101: NEVER, EVER do an == compare of a floating point
> computation to 0.0 as part of a loop condition. If you do, the
> loop may never stop. ALWAYS do a < compare against some very small number (a "fuzz"
> factor).

while {abs(($a-$b)/($a+$b))>$epsilon} {...}
Dividing the difference by the sum means relative error instead
of absolute error.

Christian Gollwitzer

Jun 29, 2022, 1:07:25 AM
On 28.06.22 at 23:18, Alexandru wrote:
> Rich wrote on Tuesday, June 28, 2022 at 23:14:28 UTC+2:
>> Alexandru <alexandr...@meshparts.de> wrote:
>>> Why is this result not zero?
>>>
>>> set x -9408.8
>>> puts [expr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}]
>>>
>>> This is a simplified demo of a more complex equation:
>>>
>>> set x -9408.8
>>> set y -9408.8
>>> set z -9408.8
>>> puts [expr {($x*$x + $y*$y + $z*$z - $x*$y - $y*$z - $z*$x)}]
>> See https://floating-point-gui.de/ (and read most of the material
>> there).
>>
>> What you are seeing is /normal/ for binary floating point math.
>
> Thank you all.
> I'm actually aware of those numerical issues.
> But 1e-8 surprised me a lot, because it's actually a large number in my opinion.

Yes, it seems large, but the relative precision of double-precision FP is on
the order of 10^-16. When you square your number, which is ~10^4, you get
~10^8, and then you subtract two similar numbers of ~10^8 from each other,
so the result is 16 orders of magnitude smaller than the operands, hence
~10^-8. If you had used a number which is perfectly representable, like
-9408.5, then the result would be exact.
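
A rough back-of-the-envelope check of that in Tcl (2.0**-53 is the usual
relative rounding error of a double, so this is only an order-of-magnitude
estimate, not an exact prediction):

set x -9408.8
puts [expr {$x*$x}]                 ;# about 8.85e7, so the partial sums reach ~2.7e8
puts [expr {3*$x*$x * 2.0**-53}]    ;# about 3e-8: one rounding error at that magnitude

That lines up with the -2.98e-8 the original expression produced.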

> I wonder if the same operation in C would give better results.
> Another interesting thing: changing the order of the terms leads to a clean zero:

C is not magic: it uses the same evaluation for double-precision floats.
Both use the FPU (nowadays SSE instructions) to do math on 64-bit binary
IEEE floats.

You could use some higher precision library (which then runs, obviously,
slower) and raise the number of digits. If you have a running sum with
multiple terms, then there are some algorithms to improve the accuracy.
One of them is Kahan summation:

https://en.wikipedia.org/wiki/Kahan_summation_algorithm
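
A minimal Tcl sketch of Kahan summation, applied to the six terms of the
original expression (the proc name "kahansum" is just an illustrative choice):

proc kahansum {terms} {
    set sum 0.0
    set c   0.0                          ;# running compensation for lost low-order bits
    foreach t $terms {
        set y [expr {$t - $c}]
        set s [expr {$sum + $y}]         ;# big + small: low-order bits of y get lost here
        set c [expr {($s - $sum) - $y}]  ;# algebraically zero; captures the rounding error
        set sum $s
    }
    return $sum
}

set x -9408.8
set xx [expr {$x*$x}]
puts [kahansum [list $xx $xx $xx [expr {-$xx}] [expr {-$xx}] [expr {-$xx}]]]
# prints 0.0 here, and in general gives much less error than a naive left-to-right sum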

You could try whether that improves the situation. Also, I don't know where
the expression comes from, but if it has to do with 3D rotation, then e.g.
quaternion rotation accumulates less error over successive rotations than
matrix rotation does.

Christian

Yusuke Yamasaki

Jun 29, 2022, 3:18:23 AM
package require Mpexpr

set x -9408.8
puts [mpexpr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}]
# => 0.0

set x -9408.8
set y -9408.8
set z -9408.8
puts [mpexpr {($x*$x + $y*$y + $z*$z - $x*$y - $y*$z - $z*$x)}]
# => 0.0

Alexandru

Jun 29, 2022, 4:29:54 AM
Nice package. I'm afraid performance will suffer a lot, though, as the computation is done many times and speed is important.

Florian Murr

Jun 29, 2022, 4:34:12 AM
Probably the most profound answers to computing errors are currently given by John L. Gustafson, who in 2015 wrote the book "The End of Error - Unum Computing".
"Unum" stands for "universal number" and is a new alternative to the current floating point standard. While unums seem to be ideal, they have the disadvantage of having variable length, like Tcl strings.
This poses many problems when it comes to realising unums in hardware.
Later, John Gustafson invented another format called "posits", which also improves considerably on the IEEE 754 floating point standard and has fixed length.
I left the discussion a few years ago with the intention of coming back to unums once there was a viable C implementation to carry over into Tcl.
But the unum community seems to have settled on Julia instead of C.
If you look around with the keywords just given, you will find a lot of information and activity with respect to improving both performance and precision when it comes to computing.
Like
"The Great Debate" https://www.youtube.com/watch?v=LZAeZBVAzVw,
"The End of Error (CRC Press)" https://www.youtube.com/watch?v=26TwqOcGOuE,
"Stanford Seminar Beyond Floating Point Next Generation Computer Arithmetic" https://www.youtube.com/watch?v=aP0Y1uAA-2Y,
...

Florian

Florian Murr

Jun 29, 2022, 4:40:41 AM
... Sorry, I should have written "accuracy" instead of "precision"!
Florian

Ralf Fassel

Jun 29, 2022, 6:11:38 AM
* Alexandru <alexandr...@meshparts.de>
| Rich wrote on Tuesday, June 28, 2022 at 23:14:28 UTC+2:
| > What you are seeing is /normal/ for binary floating point math.
>
| Thank you all.
| I'm actually aware of those numerical issues.
| But 1e-8 surprised me a lot, because it's actually a large number in my opinion.
| I wonder if the same operation in C would give better results.

% cat t.c
#include <stdio.h>
int main() {

/* set x -9408.8 */
/* puts [expr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}] */
double x = -9408.8;
printf("%g\n", x*x + x*x + x*x - x*x - x*x - x*x);

/* set x -9408.8 */
/* set y -9408.8 */
/* set z -9408.8 */
/* puts [expr {($x*$x + $y*$y + $z*$z - $x*$y - $y*$z - $z*$x)}] */
double y = -9408.8;
double z = -9408.8;
printf("%g\n", x*x + y*y + z*z - x*y - y*z - z*x);

}
% gcc -o t t.c
% ./t
-2.98023e-08
-2.98023e-08

HTH
R'

Yusuke Yamasaki

Jun 29, 2022, 9:01:23 AM
On Wednesday, June 29, 2022 at 17:29:54 UTC+9, Alexandru wrote:
Yes. It's almost 30 times slower than [expr].
This package is an implementation of decimal floating point arithmetic.
So it is free from the errors caused by representing decimal numbers in binary floating point.
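
For reference, Tcl's built-in [time] command is one way to measure that kind
of ratio yourself (assuming the Mpexpr package is installed; the iteration
count of 10000 is arbitrary):

package require Mpexpr

set x -9408.8
puts [time {expr   {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}} 10000]
puts [time {mpexpr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}} 10000]
# each line reports "... microseconds per iteration"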

Michael Soyka

Jun 29, 2022, 11:22:27 AM
On 06/28/2022 5:18 PM, Alexandru wrote:
> Rich wrote on Tuesday, June 28, 2022 at 23:14:28 UTC+2:
>> Alexandru <alexandr...@meshparts.de> wrote:
>>> Why is this result not zero?
>>>
>>> set x -9408.8
>>> puts [expr {$x*$x + $x*$x + $x*$x - $x*$x - $x*$x - $x*$x}]

You don't get zero because ($x*$x + $x*$x + $x*$x) is substantially
larger in magnitude than x but both terms are represented by the same
number of bits, 64.

Add parentheses and you do get zero:
expr {($x*$x + $x*$x + $x*$x) - ($x*$x + $x*$x + $x*$x)}

>>>
>>> This is a simplified demo of a more complex equation:
>>>
>>> set x -9408.8
>>> set y -9408.8
>>> set z -9408.8
>>> puts [expr {($x*$x + $y*$y + $z*$z - $x*$y - $y*$z - $z*$x)}]
>> See https://floating-point-gui.de/ (and read most of the material
>> there).
>>
>> What you are seeing is /normal/ for binary floating point math.
>
> Thank you all.
> I actually aware of those numerical issues.
> But 1e-8 suprized me alot, because it's actually a large number in my opinion.
> I wonder, if same operation in C would give better results.
> Another intersting thing, changing order of terms, leads to clean zero:
>
> expr $x*$x - $x*$y + $y*$y - $y*$z + $z*$z - $z*$x
> is 0.0
and this makes sense because each term has the same 64-bit representation.

-mike
