
What do floating point words do with NAN?


Rick C

unread,
May 28, 2022, 12:48:49 AM5/28/22
to
I have some calculations that can result in -Infini or -NAN. In the case of -Infini, a comparison using F< appears to handle the value correctly, treating it as the most negative value possible, as I expect.

In the case of -NAN, it also treats it as a very negative number, less than -85 anyway. This confuses me. This value arises when the input data is all zeros because the unit under test is not seeing a signal and measures zero for all readings. Because there is a log of a ratio, when that ratio is zero the log produces -Infini. When all values are zero, the division is 0/0, resulting in -NAN both before and after the log.

This is not a big problem. The -Infini case is actually the unit working correctly and the test passes. When all the data measured is zeros, there is a significant problem with the UUT and it is not so important this test fails. There are other tests using the same data that will also fail. I'd just like it to be consistent across the tests.

I suppose I could simply have it not report anything after the first failure which is a check on the amplitude at 1 kHz, a baseline test I suppose.

Any idea why the division of 0/0 gives a -NAN rather than +NAN or whatever? I tried a few tests...

1e0 0e0 f/ f. Infini ok
-1e0 0e0 f/ f. -Infini ok
0e0 0e0 f/ f. -NAN ok
+0e0 0e0 f/ f. -NAN ok
0e0 +0e0 f/ f. -NAN ok
+0e0 +0e0 f/ f. -NAN ok

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209

dxforth

unread,
May 28, 2022, 1:31:31 AM5/28/22
to
On 28/05/2022 14:48, Rick C wrote:
> ...
> Any idea why the division of 0/0 gives a -NAN rather than +NAN or whatever? I tried a few tests...
>
> 1e0 0e0 f/ f. Infini ok
> -1e0 0e0 f/ f. -Infini ok
> 0e0 0e0 f/ f. -NAN ok
> +0e0 0e0 f/ f. -NAN ok
> 0e0 +0e0 f/ f. -NAN ok
> +0e0 +0e0 f/ f. -NAN ok

IEEE-754 doesn't recognize signed NAN (after all it's "Not-A-Number").
Printing NANs along with the sign bit is what many systems do, however.
As to why the negative bit was set for div-zero (a mathematically
undefined operation), one would have to ask Intel or Kahan. BTW it
would have been just as valid for the system to throw an error:

0e 0e f/ FLOAT_INVALID_OPERATION at 00489841 F/ +2

Anton Ertl

unread,
May 28, 2022, 1:36:28 AM5/28/22
to
Rick C <gnuarm.del...@gmail.com> writes:
>I have some calculations that can result in -Infini or -NAN. In the case o=
>f -Infini, a comparison using F< appears to handle the value correctly, tre=
>ating it as the most negative value possible I expect. =20
>
>In the case of -NAN, it also treats it as a very negative number, less than=
> -85 anyway.

What makes you think so? NaNs are unordered wrt other FP values
(including NaNs), so you get:

0e 0e f/ fconstant nan ok
nan f. NaN ok
nan -85e f>= . 0 ok
nan -85e f< . 0 ok

The only comparison that produces true when you compare with a NaN is F<>:

nan nan f<> . -1 ok

>Any idea why the division of 0/0 gives a -NAN rather than +NAN or whatever?=

The sign of a NaN does not matter.

- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: http://www.forth200x.org/forth200x.html
EuroForth 2021: https://euro.theforth.net/2021

minf...@arcor.de

unread,
May 28, 2022, 3:17:20 PM5/28/22
to
The sign just reflects the meaningless sign bit of the binary representation range of NaNs.

Comparison between NaN and any floating-point value x (including NaN and ±∞):

  NaN ≥ x   NaN ≤ x   NaN > x   NaN < x   NaN = x   NaN ≠ x
  False     False     False     False     False     True

From these rules, x ≠ x or x = x can be used to test whether x is NaN or non-NaN.
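
As a one-liner, this might look like the following (just a sketch; F<> is widely provided but not part of the standard word sets):

: NAN? ( F: x -- ) ( -- flag )  FDUP F<> ;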

BTW the Standard Forth test suite includes a file, ieee-arith-test.fs, for fp operations with NaNs.

Rick C

unread,
May 28, 2022, 3:51:13 PM5/28/22
to
In the end, I found that Win32Forth defines how the various operators handle NaNs. So I reversed the sense of the test so a NaN in the inputs would cause a test failure.
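
Something along these lines (just a sketch, not the actual test code; the names are made up): phrase the pass condition as a positive ordered comparison, so a NaN, which compares false against everything, drops into the FAIL branch.

: check-level ( F: meas limit -- )   \ pass only when meas is strictly above limit
   FSWAP F< IF ." PASS" ELSE ." FAIL" THEN ;  \ NaN input -> ordered compare is false -> FAIL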

Thanks for the info.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209

Krishna Myneni

unread,
Jul 14, 2022, 8:00:11 AM7/14/22
to
On 5/28/22 14:17, minf...@arcor.de wrote:

> ...
> Comparison between NaN and any floating-point value x (including NaN and ±∞)
> NaN ≥ x NaN ≤ x NaN > x NaN < x NaN = x NaN ≠ x
> False False False False False True
>
> From these rules, x ≠ x or x = x can be used to test whether x is NaN or non-NaN.
>
> BTW the Standard Forth test suite comprises a file ieee-arith-test.fs for fp operations with NaNs.

The ieee-arith-test.fs tests do not contain comparison tests with NaN --
at least the version which is bundled with kForth does not contain any
comparison tests.

To my chagrin, I discovered yesterday that kForth-64/32 does not conform
to the IEEE 754 rules for comparison of floating point values with NaN.
I found this out the long way, after a recent calculation I was doing
caused memory corruption because an unexpected NaN comparison caused the
program to branch along a different path than would have been taken had the
tests conformed to the ones listed above (although other problems would
have ensued as a result of the correct behavior). Debugging this problem
revealed the following non-conforming result:

0e 0e f/ fconstant NaN
ok
0e NaN f> . \ this test should return False
-1 ok

Within my calculation, the failure occurred when the program used the
FSL module, gaussj, to solve 2x2 matrix equations, of the form

/ a b \ / x1 \ / e \
| | | | = | |
\ c d / \ x2 / \ f /

where a--f are computed from prior calculations, and GAUSSJ is used to
solve for x1 and x2. Hundreds of such matrix equations were being
generated and solved by the program, and, unexpectedly, the calculations
for a--f occasionally generated a NaN. Although there was a determinant
test to check for a singular matrix before passing the data to GAUSSJ,
it failed to show anything peculiar.

For the case of a NaN occurring in the 2x2 matrix, GAUSSJ would tell me
that it couldn't find a solution, but it also stored data outside the
bounds of some housekeeping arrays, leading to memory corruption.
Interestingly, if the NaN comparison had conformed to IEEE 754 rules, it
appears that the memory corruption would not have occurred (although I
haven't tested this sufficiently yet), but the algorithm would not have
generated an error telling me that it could not solve the equation.
GAUSSJ was likely not written to consider what would happen if a NaN
existed within the matrix.

--
Krishna

minf...@arcor.de

unread,
Jul 14, 2022, 9:03:49 AM7/14/22
to
This made me wonder (again):
0e NaN f> . \ this test should return False
-1 ok

In IEEE754 encoding, NaN means an unordered bit pattern range.
Comparing a bit pattern range with number zero should be as looney as
comparing a kitchen knife with fog.

But unfortunately IEEE754 specified logical results for such comparisons.
https://en.wikipedia.org/wiki/NaN

IMO a historical kludge, because for integer numbers there are no reserved
bits to designate invalidity. Therefore I guess the IEEE754 people settled for
"false or true" after comparions regardless of how these might be encoded.

Interestingly, modern industrial control systems often add control or quality bits
to integer numbers, e.g. AD converter outputs. Ada can do similar things.

There is also good reason that some systems use "signalling" NaNs to trigger
warnings or interrupts. Other NaN ranges can be used for NaN-boxing,
e.g. to implement dynamic typing.

But standard Forth does not specify such things. IIRC there was just a short
notice in the appendix to REPRESENT.

Given this, what should be the outcome of: 0e NaN f> ?

C at least offers an isnan(x) macro to avoid such traps. IMO a Forth
system dealing with fp-numbers should offer a similar word.

Krishna Myneni

unread,
Jul 14, 2022, 10:01:31 AM7/14/22
to
On 7/14/22 08:03, minf...@arcor.de wrote:

>> ... Debugging this problem
>> revealed the following non-conforming result:
>>
>> 0e 0e f/ fconstant NaN
>> ok
>> 0e NaN f> . \ this test should return False
>> -1 ok
>> ...

>
> This made me wonder (again):
> 0e NaN f> . \ this test should return False
> -1 ok
>
> In IEEE754 encoding, NaN means an unordered bit pattern range.
> Comparing a bit pattern range with number zero should be as looney as
> comparing a kitchen knife with fog. ...
>

When making a comparison such as the above, between two different types
of information, a FALSE result seems more sensible than TRUE. In any
case, specifying the behavior in these cases is necessary in order to
analyze the flow of a program.

> C at least offers an isnan(x) macro to avoid such traps. IMO a Forth
> system dealing with fp-numbers should offer a similar word.

If the Forth system's F<> and F= conforms to IEEE 754, then it's easy to
define FNAN?

: FNAN? ( F: x -- ) ( -- flag) FDUP F<> ;

or

: FNAN? ( F: x -- ) ( -- flag) FDUP F= INVERT ;

These defs work in Gforth, when x is NaN.
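
A quick check of those definitions (assuming 0e 0e f/ yields a quiet NaN rather than trapping):

0e 0e f/ FNAN? .  \ -1
1e       FNAN? .  \ 0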

Alternatively, for double precision IEEE 754 numbers, the following
definitions should work on any ANS Forth system:

fvariable temp
BASE @
HEX

: FEXPONENT ( r -- u )
temp df! [ temp cell+ ] literal @ 14 rshift 7FF and ;

: FFRACTION ( r -- ud )
temp df! temp @ [ temp cell+ ] literal @ 000FFFFF and ;

: FNAN? ( r -- nan? )
fdup FEXPONENT 7FF = >r FFRACTION D0= invert r> and ;

BASE !

Have a look at

https://github.com/mynenik/kForth-32/blob/master/forth-src/ieee-754.4th

--
Krishna

minf...@arcor.de

unread,
Jul 14, 2022, 11:35:07 AM7/14/22
to
Isn't it a bit of a hen-and-egg problem? To implement F=/F<> (which are
non-standard words) correctly, you need isnan/FNAN? - or vice versa.

GCC does this via fpclassify(f) which is just a bit pattern matcher, similar
to your other example.

Krishna Myneni

unread,
Jul 14, 2022, 12:17:26 PM7/14/22
to
On 7/14/22 10:35, minf...@arcor.de wrote:
> Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 16:01:31 UTC+2:
...
> Isn't it a bit of a hen-and-egg problem? To implement F=/F<> (which are
> non-standard words) correctly, you need isnan/FNAN? - or vice versa.
>

At first, I didn't understand what you were saying. Then I went back to
look at the Forth-94 and Forth-2012 standards. It never occurred to me
that F= and F<> are nonstandard words. I'm not sure what the rationale
for this is. Even if there was a concern that the behavior of these
words depend on the particular floating point number implementation,
i.e. something other than IEEE 754, they can still be standardized with
caveats. Are there Forth systems which support floating point numbers
but do not provide F= and F<> ?

> GCC does this via fpclassify(f) which is just a bit pattern matcher, similar
> to your other example.

For a Forth system which supports IEEE 754 format numbers, I would use
FNAN? as a primitive which must be used to define F= and F<>. If a Forth
system does not use a NaN supporting format, then obviously FNAN? is not
needed.

--
Krishna

minf...@arcor.de

unread,
Jul 14, 2022, 12:43:56 PM7/14/22
to
Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 18:17:26 UTC+2:
> On 7/14/22 10:35, minf...@arcor.de wrote:
> > Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 16:01:31 UTC+2:
> ...
> > Isn't it a bit of a hen-and-egg problem? To implement F=/F<> (which are
> > non-standard words) correctly, you need isnan/FNAN? - or vice versa.
> >
> At first, I didn't understand what you were saying. Then I went back to
> look at the Forth-94 an Forth-2012 standards. It never occurred to me
> that F= and F<> are nonstandard words. I'm not sure what the rationale
> for this is. Even if there was a concern that the behavior of these
> words depend on the particular floating point number implementation,
> i.e. something other than IEEE 754, they can still be standardized with
> caveats. Are there Forth systems which support floating point numbers
> but do not provide F= and F<> ?

As you certainly know, finite bit-encoded fp-numbers are more often than not
only (poor) approximations. That's why F~ is there, like an engineer's estimation
for equality. IOW could you really trust an F= or F<> word to give a correct answer?

> > GCC does this via fpclassify(f) which is just a bit pattern matcher, similar
> > to your other example.
> For a Forth system which supports IEEE 754 format numbers, I would use
> FNAN? as a primitive which must be used to define F= and F<>. If a Forth
> system does not use a NaN supporting format, then obviously FNAN? is not
> needed.

That would mean throwing exceptions like for division by zero or for logarithm
of a negative number. For some applications this seems more useful than
throwing NaNs around.

Krishna Myneni

unread,
Jul 14, 2022, 6:21:13 PM7/14/22
to
On 7/14/22 11:43, minf...@arcor.de wrote:
> Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 18:17:26 UTC+2:
>> On 7/14/22 10:35, minf...@arcor.de wrote:
>>> Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 16:01:31 UTC+2:
>> ...
>>> Isn't it a bit of a hen-and-egg problem? To implement F=/F<> (which are
>>> non-standard words) correctly, you need isnan/FNAN? - or vice versa.
>>>
>> At first, I didn't understand what you were saying. Then I went back to
>> look at the Forth-94 an Forth-2012 standards. It never occurred to me
>> that F= and F<> are nonstandard words. I'm not sure what the rationale
>> for this is. Even if there was a concern that the behavior of these
>> words depend on the particular floating point number implementation,
>> i.e. something other than IEEE 754, they can still be standardized with
>> caveats. Are there Forth systems which support floating point numbers
>> but do not provide F= and F<> ?
>
> As you certainly know, finite bit-encoded fp-numbers are more often than not
> only (poor) approximations. That's why F~ is there, like an engineer's estimation
> for equality. IOW could you really trust an F= or F<> word to give a correct answer?
>

Looking through my code, and the FSL code, F= is used in algorithms for
checking exactly representable values, e.g. small integers, which are
needed for the computations, and, together with F<>, also used heavily
in testing floating point arithmetic. A particularly good example is
paranoia.4th:

https://github.com/mynenik/kForth-64/blob/master/forth-src/system-test/paranoia.4th
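
A small illustration of that kind of exact check (using F= as provided by the systems discussed here; small integers and their sums are exactly representable):

2e 1e F+ 3e F= .  \ -1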


>>> GCC does this via fpclassify(f) which is just a bit pattern matcher, similar
>>> to your other example.
>> For a Forth system which supports IEEE 754 format numbers, I would use
>> FNAN? as a primitive which must be used to define F= and F<>. If a Forth
>> system does not use a NaN supporting format, then obviously FNAN? is not
>> needed.
>
> That would mean throwing exceptions like for division by zero or for logarithm
> of a negative number. For some applications this seems more useful than
> throwing NaNs around.

As you've already pointed out, signalling NaNs are available in IEEE 754
for generating exceptions.

--
Krishna


dxforth

unread,
Jul 15, 2022, 12:04:49 AM7/15/22
to
On 14/07/2022 23:03, minf...@arcor.de wrote:
>
> In IEEE754 encoding, NaN means an unordered bit pattern range.
> Comparing a bit pattern range with number zero should be as looney as
> comparing a kitchen knife with fog.

Would you say the same about INF ? When logic fails and a result
is needed, convention steps in.

dxforth

unread,
Jul 15, 2022, 12:18:05 AM7/15/22
to
Never heard of fpclassify but created something similar:

\ implementation for 80387 double-precision - software stack
\ 1=Unsupp 2=NAN 5=norm 6=INF 65=zero 66=Empty 69=denorm
code FCLASS ( r -- x )
addr fsp ) di mov qword 0 [di] fld fxam ax fstsw
st(0) fstp $45 # ah and $04 # ah cmp 1 $ jnz
\ test memory copy for subnormal
$0FFE # 1 floats 2- [di] test 1 $ jnz $40 # ah or 1 $:
ah inc ( make non-zero) bx bx sub ah bl mov
1 floats # addr fsp ) add bx push next
end-code

2 constant FP-NAN
5 constant FP-NORMAL
6 constant FP-INFINITE
65 constant FP-ZERO
69 constant FP-SUBNORMAL

\ test for non-number incl. INF (used by REPRESENT)
: nan? ( r -- +n|0 )
fclass
fp-normal of 0 end
fp-subnormal of 0 end
fp-zero of 0 end ;

minf...@arcor.de

unread,
Jul 15, 2022, 4:50:31 AM7/15/22
to
Krishna Myneni schrieb am Freitag, 15. Juli 2022 um 00:21:13 UTC+2:
> On 7/14/22 11:43, minf...@arcor.de wrote:
> > Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 18:17:26 UTC+2:
> >> On 7/14/22 10:35, minf...@arcor.de wrote:
> >>> Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 16:01:31 UTC+2:
> >> ...
> >>> Isn't it a bit of a hen-and-egg problem? To implement F=/F<> (which are
> >>> non-standard words) correctly, you need isnan/FNAN? - or vice versa.
> >>>
> >> At first, I didn't understand what you were saying. Then I went back to
> >> look at the Forth-94 an Forth-2012 standards. It never occurred to me
> >> that F= and F<> are nonstandard words. I'm not sure what the rationale
> >> for this is. Even if there was a concern that the behavior of these
> >> words depend on the particular floating point number implementation,
> >> i.e. something other than IEEE 754, they can still be standardized with
> >> caveats. Are there Forth systems which support floating point numbers
> >> but do not provide F= and F<> ?
> >
> > As you certainly know, finite bit-encoded fp-numbers are more often than not
> > only (poor) approximations. That's why F~ is there, like an engineer's estimation
> > for equality. IOW could you really trust an F= or F<> word to give a correct answer?
> >
> Looking through my code, and the FSL code, F= is used in algorithms for
> checking exactly representable values, e.g. small integers, which are
> needed for the computations, and, together with F<>, also used heavily
> in testing floating point arithmetic.

A double fp-number has a mantissa width of 53 bits. So even bigger integers
can be represented losslessly. IIRC IEEE 754 requires that arithmetic operations
and comparisons within this integer range must be 100% correct.
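
For illustration (a Gforth-style session; F= is common but not in the standard word set): 2^53 is the first point where consecutive integers can no longer be distinguished in a double.

9007199254740992e 9007199254740993e f= .  \ -1  (2^53 and 2^53+1 round to the same double)
9007199254740992e 9007199254740994e f= .  \ 0   (2^53 and 2^53+2 are distinct)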

+0e and -0e must compare as equal, of course.

A probably nonsensical 32 bit Forth without an integer data stack
could do everything in fp-math.

none albert

unread,
Jul 15, 2022, 8:07:03 AM7/15/22
to
In article <55fa52eb-dcc3-42c9...@googlegroups.com>,
<SNIP>
>As you certainly know, finite bit-encoded fp-numbers are more often than not
>only (poor) approximations. That's why F~ is there, like an engineer's
>estimation
>for equality. IOW could you really trust an F= or F<> word to give a
>correct answer?

F= is useful. A lot of iteration schemes end in detecting
floating numbers are equal. F.i. bisection guarantees that you
end up in the confidence interval of a zero crossing.
ciforth supports F= . It is an oversight of the committee.
Why do you think the Intel fp processor supports it?
NaNs are useful to not have to inspect intermediate results.
Mostly the results are correct. At the end you discover a Nan,
and then you have to look closely.
A modern processor generates one floating point result per clock,
but that can only work if the processor works on different fp
instructions in a pipeline.

Groetjes Albert
--
"in our communism country Viet Nam, people are forced to be
alive and in the western country like US, people are free to
die from Covid 19 lol" duc ha
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst

minf...@arcor.de

unread,
Jul 15, 2022, 9:27:21 AM7/15/22
to
none albert schrieb am Freitag, 15. Juli 2022 um 14:07:03 UTC+2:
> In article <55fa52eb-dcc3-42c9...@googlegroups.com>,
> <SNIP>
> >As you certainly know, finite bit-encoded fp-numbers are more often than not
> >only (poor) approximations. That's why F~ is there, like an engineer's
> >estimation
> >for equality. IOW could you really trust an F= or F<> word to give a
> >correct answer?
> F= is useful. A lot of iteration schemes end in detecting
> floating numbers are equal. F.i. bisection guarantees that you
> end up in the confidence interval of a zero crossing.
> ciforth supports F= . It is an oversight of the committee.
> Why do you think the Intel fp processor supports it?

Do you have to set up a confidence interval, e.g. declare an epsilon, for F=?
Otherwise it would just check for equal bits that must not differ even in
the LSB, which cannot always be guaranteed, I fear.

> >Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 18:17:26 UTC+2:
> >> > GCC does this via fpclassify(f) which is just a bit pattern matcher,
> >similar
> >> > to your other example.
> >> For a Forth system which supports IEEE 754 format numbers, I would use
> >> FNAN? as a primitive which must be used to define F= and F<>. If a Forth
> >> system does not use a NaN supporting format, then obviously FNAN? is not
> >> needed.
> >
> >That would mean throwing exceptions like for division by zero or for logarithm
> >of a negative number. For some applications this seems more useful than
> >throwing NaNs around.
> NaNs are useful to not have to inspect intermediate results.
> Mostly the results are correct. At the end you discover a Nan,
> and then you have to look closely.

I understand that this economizes on checking intermediate results.
But I don't know whether it is ensured that eg log(-1.) throws a signalling
instead of a quiet NaN. If yes, throwing an exception would probably be even faster
than hand-checking the result.

dxforth

unread,
Jul 15, 2022, 10:20:19 PM7/15/22
to
On 15/07/2022 22:06, albert wrote:
> In article <55fa52eb-dcc3-42c9...@googlegroups.com>,
> <SNIP>
>>As you certainly know, finite bit-encoded fp-numbers are more often than not
>>only (poor) approximations. That's why F~ is there, like an engineer's
>>estimation
>>for equality. IOW could you really trust an F= or F<> word to give a
>>correct answer?
>
> F= is useful. A lot of iteration schemes end in detecting
> floating numbers are equal. F.i. bisection guarantees that you
> end up in the confidence interval of a zero crossing.
> ciforth supports F= . It is an oversight of the committee.
> Why do you think the Intel fp processor supports it?

More avoidance than oversight, IMO. Here's what the preceding FVG
(Forth Vendors Group) FP Standard had to say about F= :

"The logical operators will regard two real numbers as equal
if they differ only by a small amount. This "fuzz factor"
is related to the magnitude of the real numbers and is
implementation dependent."

F= r1 r2 --- f
True if floating point number r1 is equal to floating
point number r2. The real numbers are removed from the
floating point stack, and the flag is left on top of the
Forth parameter stack.

Anton Ertl

unread,
Jul 18, 2022, 11:29:26 AM7/18/22
to
"minf...@arcor.de" <minf...@arcor.de> writes:
>There is also good reason that some system use "signalling" NaNs to trigger
>warnings or interrupts.

One might think that they do that, but at least in the default
configuration of the FPU it does not:

\ AMD64-specific:
$7fc00000 pad ! pad sf@ fconstant qnan ok
$7fa00000 pad ! pad sf@ fconstant snan ok
qnan ok f:1
snan ok f:2
0e f+ ok f:2
snan 0e f< ok 1 f:2
qnan 0e f+ ok 1 f:3
. 0 ok f:3
f. NaN ok f:2
f. NaN ok f:1
f. NaN ok

>Given this, what should be the outcome of: 0e NaN f> ?

There is no Forth-standard-guaranteed outcome, but one would expect
that F> is the > operation of IEEE 754, and for that the outcome is
false, i.e., 0 in standard Forth.

>C at least offers an isnan(x) macro to avoid such traps. IMO a Forth
>system dealing with fp-numbers should offer a similar word.

In Gforth you do FDUP F<>

There has been an attempt at an IEEE 754 proposal for Forth
standardization, but I have not heard of that for several years.
New standard: https://forth-standard.org/
EuroForth 2022: http://www.euroforth.org/ef22/cfp.html

Anton Ertl

unread,
Jul 18, 2022, 11:48:05 AM7/18/22
to
"minf...@arcor.de" <minf...@arcor.de> writes:
>Isn't it a bit of a hen-and-egg problem? To implement F=/F<> (which are
>non-standard words) correctly, you need isnan/FNAN? - or vice versa.

I would expect that an implementation of F= and F<> just invokes the
corresponding CPU instructions without special-casing NaNs. And
indeed:

\ Gforth
see f=
Code f=
5623F8D636C2: mov $00[r13],r8
5623F8D636C6: xor r8d,r8d
5623F8D636C9: sub r13,$08
5623F8D636CD: mov edx,$0
5623F8D636D2: movsd xmm0,$08[r12]
5623F8D636D9: mov rax,r12
5623F8D636DC: lea r12,$10[r12]
5623F8D636E1: ucomisd xmm0,xmm15
5623F8D636E6: movsd xmm15,$10[rax]
5623F8D636EC: setnp r8lb
5623F8D636F0: cmovnz r8,rdx
5623F8D636F4: add r15,$08
5623F8D636F8: neg r8
5623F8D636FB: mov rcx,-$08[r15]
5623F8D636FF: jmp ecx
end-code

\ lxf
see f=
8691334 88DE3C4 29 88DE3AB 25 prim F=

88DE3C4 DFE9 fucomip ST(1)
88DE3C6 DDD8 fstp ST(0)
88DE3C8 0F9BC0 setnp al
88DE3CB 0FBEC0 movsx eax , al
88DE3CE 0F94C1 sete cl
88DE3D1 0FBEC9 movsx ecx , cl
88DE3D4 21C8 and eax , ecx
88DE3D6 F7D8 neg eax
88DE3D8 895DFC mov [ebp-4h] , ebx
88DE3DB 8BD8 mov ebx , eax
88DE3DD 8D6DFC lea ebp , [ebp-4h]
88DE3E0 C3 ret near

\ iForth
FORTH> see f=
Flags:
$10139AE0 : F= 488BC04883ED088F4500 H.@H.m..E.
$10139AEA f2poprev, 41DB6D10D9C941DB6D004D8D A[m.YIA[m.M.
6D20 m
$10139AF8 fcompp DED9 ^Y
$10139AFA fnstsw ax DFE0 _`
$10139AFC and rax, $00004400 d#
4881E000440000 H.`.D..
$10139B03 xor rax, $00004000 d#
4881F000400000 H.p.@..
$10139B0A sete al 0F94C0 ..@
$10139B0D movzx rbx, al 480FB6D8 H.6X
$10139B11 neg rbx 48F7DB Hw[
$10139B14 push rbx 53 S
$10139B15 ; 488B45004883C508FFE0 H.E.H.E..`

And here's an example of the pitfalls of FP numbers:

VFX Forth 64 5.11 RC2 [build 0112] 2021-05-02 for Linux x64
© MicroProcessor Engineering Ltd, 1998-2021

see f=
F=
( 004C4550 E8EBFBFFFF ) CALL 004C4140 F-
( 004C4555 E836FFFFFF ) CALL 004C4490 F0=
( 004C455A C3 ) RET/NEXT
( 11 bytes, 3 instructions )

Let's see how well this works:

1e 0e f/ fdup f= . \ 0 ok
SSE Exception 0000:1F85

By contrast:

gforth: 1e 0e f/ fdup f= . \ -1 ok
iForth: 1e 0e f/ fdup f= . \ -1 ok
lxf: 1e 0e f/ fdup f= . \ -1 ok

>GCC does this via fpclassify(f) which is just a bit pattern matcher, similar
>to your other example.

The code shown for Gforth comes out of gcc. No bit pattern matching
going on here.

Anton Ertl

unread,
Jul 18, 2022, 12:01:53 PM7/18/22
to
Krishna Myneni <krishna...@ccreweb.org> writes:
>On 7/14/22 10:35, minf...@arcor.de wrote:
>> Krishna Myneni schrieb am Donnerstag, 14. Juli 2022 um 16:01:31 UTC+2:
>...
>> Isn't it a bit of a hen-and-egg problem? To implement F=/F<> (which are
>> non-standard words) correctly, you need isnan/FNAN? - or vice versa.
>>
>
>At first, I didn't understand what you were saying. Then I went back to
>look at the Forth-94 an Forth-2012 standards. It never occurred to me
>that F= and F<> are nonstandard words. I'm not sure what the rationale
>for this is.


|A.12.6.2.1640 F~
|
|This provides the three types of "floating point equality" in common
|use — "close" in absolute terms, exact equality as represented, and
|"relatively close".

Except that 0e F~ is explicitly specified such that positive and
negative zero are unequal, which is not a common meaning of FP
equality these days (and probably not in 1994, either). So 0e F~ is
more expensive to implement and also does something different from F=.

My guess is that they did not provide F= and F<> because they are
occasionally used wrongly by naive programmers.

>Are there Forth systems which support floating point numbers
>but do not provide F= and F<> ?

Candidate for standardization?

>For a Forth system which supports IEEE 754 format numbers, I would use
>FNAN? as a primitive which must be used to define F= and F<>.

Why?

Anton Ertl

unread,
Jul 18, 2022, 12:13:53 PM7/18/22
to
albert@cherry.(none) (albert) writes:
>In article <55fa52eb-dcc3-42c9...@googlegroups.com>,
><SNIP>
>>As you certainly know, finite bit-encoded fp-numbers are more often than not
>>only (poor) approximations. That's why F~ is there, like an engineer's
>>estimation
>>for equality. IOW could you really trust an F= or F<> word to give a
>>correct answer?
>
>F= is useful. A lot of iteration schemes end in detecting
>floating numbers are equal.

Iterative approximation algorithms are an example where using F= is
dangerous: You can have cases where the algorithm eventually jitters
between two values, and an algorithm that terminates when the value no
longer changes according to F= does not terminate. Better use
approximate equality.
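
A minimal sketch of that advice using standard F~ for the termination test (the 1e-12 tolerance is purely illustrative):

: converged?     ( F: xold xnew -- ) ( -- flag )   1e-12 F~ ;  \ absolute closeness
: rel-converged? ( F: xold xnew -- ) ( -- flag )  -1e-12 F~ ;  \ relative closeness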

>It is an oversight of the committee.

Unlikely.

>Why do you think the Intel fp processor supports it?

Because it's easier to implement in hardware than F~ABS and F~REL.

Anton Ertl

unread,
Jul 18, 2022, 12:27:54 PM7/18/22
to
"minf...@arcor.de" <minf...@arcor.de> writes:
>Do you have to set up a confidence interval eg declare an epsilon for F=?

Not in any F= I am aware of.

>I understand that this economizes on checking intermediate results.
>But I don't know whether it is ensured that eg log(-1.) throws a signalling
>instead of a quiet NaN. If yes throwing an exception would probably be even faster
>than hand-checking result.

AFAIK the typical use for signaling NaNs is uninitialized data. But
at least in Gforth there is no difference in behaviour between
signaling and quiet NaNs; none of them causes a Forth exception when
performing an operation.

I think the idea behind these default settings of the FPU is that if
you get one NaN result in, say a 1000x1000 matrix, you still want the
999,999 other results rather than just an exception.

Marcel Hendrix

unread,
Jul 18, 2022, 2:44:55 PM7/18/22
to
On Monday, July 18, 2022 at 6:27:54 PM UTC+2, Anton Ertl wrote:
> AFAIK the typical use for signaling NaNs is uninitialized data. But
> at least in Gforth there is no difference in behaviour between
> signaling and quiet NaNs; none of them causes a Forth exception when
> performing an operation.

#10000 VALUE #times 0e FVALUE a 0e FVALUE b
: testoverhead CR TIMER-RESET #times 0 ?DO a b F/ FDROP LOOP .ELAPSED ;
: BENCH ( u -- ) TO #times
1e TO a 1e TO b testoverhead ." ( 1 / 1 ) "
0e TO a 1e TO b testoverhead ." ( 0 / 1 ) "
0e TO b testoverhead ." ( 0 / 0 ) " ;
FORTH> #1000000000 BENCH
1.697 seconds elapsed. ( 1 / 1 )
1.698 seconds elapsed. ( 0 / 1 )
1.697 seconds elapsed. ( 0 / 0 ) ok

I'm not sure other languages allow such a low overhead.
Calculations in SPICE / C can be awfully slow when
NaNs are generated.

> I think the idea behind these default settings of the FPU is that if
> you get one NaN result in, say a 1000x1000 matrix, you still want the
> 999,999 other results rather than just an exception.

You mean 999,000 results, of course :--)

-marcel

dxforth

unread,
Jul 18, 2022, 10:00:05 PM7/18/22
to
On 19/07/2022 01:48, Anton Ertl wrote:
>
> |A.12.6.2.1640 F~
> |
> |This provides the three types of "floating point equality" in common
> |use — "close" in absolute terms, exact equality as represented, and
> |"relatively close".
>
> Except that 0e F~ is explicitly specified such that positive and
> negative zero are unequal, which is not a common meaning of FP
> equality these days (and probably not in 1994, either).

But chose to provide it as hardware didn't? 'Negative zero' is only
meaningful if a system expressly supports it. E.g. there doesn't seem
much point having F~ do:

0e -0e 0e f~ . 0 ok

if one couldn't print it out:

-0e f. 0. ok

Paul Rubin

unread,
Jul 19, 2022, 1:26:08 AM7/19/22
to
dxforth <dxf...@gmail.com> writes:
> But chose to provide it as hardware didn't? 'Negative zero' is only
> meaningful if a system expressly supports it.

Negative zero is supported by essentially all floating point hardware
these days. But oh man, F~ is ugly. 0e and -0e have different
encodings (bit patterns) even though F= compares them as equal.

> E.g. there doesn't seem much point having F~ do:
> 0e -0e 0e f~ . 0 ok

On that system 0e and -0e have the same bit pattern, so

> if one couldn't print it out:
> -0e f. 0. ok

should operate as you describe. On an IEEE system (most systems now),
-0e f. prints -0.

dxforth

unread,
Jul 19, 2022, 1:57:42 AM7/19/22
to
Expected though not mandated. There's something re-assuring about not
seeing -0.0 in a table of values. Ideally a system that supports negative-
zero should be able to turn it off. When it's off, I'd expect 0e -0e 0e f~
to return a true result.

Anton Ertl

unread,
Jul 19, 2022, 4:18:32 AM7/19/22
to
Marcel Hendrix <m...@iae.nl> writes:
>On Monday, July 18, 2022 at 6:27:54 PM UTC+2, Anton Ertl wrote:
>> AFAIK the typical use for signaling NaNs is uninitialized data. But
>> at least in Gforth there is no difference in behaviour between
>> signaling and quiet NaNs; none of them causes a Forth exception when
>> performing an operation.
>
>#10000 VALUE #times 0e FVALUE a 0e FVALUE b
>: testoverhead CR TIMER-RESET #times 0 ?DO a b F/ FDROP LOOP .ELAPSED ;
>: BENCH ( u -- ) TO #times
> 1e TO a 1e TO b testoverhead ." ( 1 / 1 ) "
> 0e TO a 1e TO b testoverhead ." ( 0 / 1 ) "
> 0e TO b testoverhead ." ( 0 / 0 ) " ;
>FORTH> #1000000000 BENCH
>1.697 seconds elapsed. ( 1 / 1 )
>1.698 seconds elapsed. ( 0 / 1 )
>1.697 seconds elapsed. ( 0 / 0 ) ok
>
>I'm not sure other languages allow such a low overhead.
>Calculations in SPICE / C can be awfully slow when
>NaNs are generated.

Gforth implements FP operations by just performing the appropriate C
code and also sees uniform results for this benchmark. I see two
possibilities:

* The operations you perform involving NaNs that are slow are
something other than 0/0.

* The actual latency is indeed slower for, e.g., 0/0, but since you
FDROP the result, you don't measure the latency, only the
throughput. In your SPICE work, you actually also measure the
latency.
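
A sketch of a latency-sensitive variant, in the style of the quoted benchmark (reusing a, b, #times, TIMER-RESET and .ELAPSED from that code): the running quotient feeds the next division, so the dependency chain is timed rather than the throughput.

: testlatency CR TIMER-RESET a b F/ #times 0 ?DO b F/ LOOP FDROP .ELAPSED ;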

>> I think the idea behind these default settings of the FPU is that if
>> you get one NaN result in, say a 1000x1000 matrix, you still want the
>> 999,999 other results rather than just an exception.
>
>You mean 999,000 results, of course :--)

No, 1 element out of 1,000,000 is a NaN, so there are 999,999 non-NaN
elements left.

Anton Ertl

unread,
Jul 19, 2022, 4:23:02 AM7/19/22
to
dxforth <dxf...@gmail.com> writes:
>On 19/07/2022 01:48, Anton Ertl wrote:
>>
>> |A.12.6.2.1640 F~
>> |
>> |This provides the three types of "floating point equality" in common
>> |use — "close" in absolute terms, exact equality as represented, and
>> |"relatively close".
>>
>> Except that 0e F~ is explicitly specified such that positive and
>> negative zero are unequal, which is not a common meaning of FP
>> equality these days (and probably not in 1994, either).
>
>But chose to provide it as hardware didn't?

Pardon?

>'Negative zero' is only
>meaningful if a system expressly supports it.

Obviously.

>E.g. there doesn't seem
>much point having F~ do:
>
>0e -0e 0e f~ . 0 ok

Period! But that's how it is specified.

>if one couldn't print it out:
>
>-0e f. 0. ok

How a negative zero should be printed is a different discussion.

Anton Ertl

unread,
Jul 19, 2022, 4:26:43 AM7/19/22
to
Paul Rubin <no.e...@nospam.invalid> writes:
>On an IEEE system (most systems now),
>-0e f. prints -0.

Gforth prints (on AMD64, i.e., an IEEE 754 system):

-0e f. 0. ok
0e fnegate f. 0. ok

dxforth

unread,
Jul 19, 2022, 5:57:07 AM7/19/22
to
On 19/07/2022 18:18, Anton Ertl wrote:
> dxforth <dxf...@gmail.com> writes:
>>On 19/07/2022 01:48, Anton Ertl wrote:
>>>
>>> |A.12.6.2.1640 F~
>>> |
>>> |This provides the three types of "floating point equality" in common
>>> |use — "close" in absolute terms, exact equality as represented, and
>>> |"relatively close".
>>>
>>> Except that 0e F~ is explicitly specified such that positive and
>>> negative zero are unequal, which is not a common meaning of FP
>>> equality these days (and probably not in 1994, either).
>>
>>But chose to provide it as hardware didn't?
>
> Pardon?

FCOMP sees no difference between 0e -0e. Most likely because
mathematics doesn't.

>
>>'Negative zero' is only
>>meaningful if a system expressly supports it.
>
> Obviously.
>
>>E.g. there doesn't seem
>>much point having F~ do:
>>
>>0e -0e 0e f~ . 0 ok
>
> Period! But that's how it is specified.

With no explanation as to why. If anything it makes F~ less useful.

>
>>if one couldn't print it out:
>>
>>-0e f. 0. ok
>
> How a negative zero should be printed is a different discussion.

There's precedence for printing it differently to other numbers?

Paul Rubin

unread,
Jul 19, 2022, 7:59:29 AM7/19/22
to
an...@mips.complang.tuwien.ac.at (Anton Ertl) writes:
> Gforth prints (on AMD64, i.e., an IEEE 754 system):
> -0e f. 0. ok
> 0e fnegate f. 0. ok

Both of those print -0. under gforth 0.73 in Debian 11 x64. It's
interesting if something changed.

Marcel Hendrix

unread,
Jul 19, 2022, 8:03:55 AM7/19/22
to
On Tuesday, July 19, 2022 at 11:57:07 AM UTC+2, dxforth wrote:
> On 19/07/2022 18:18, Anton Ertl wrote:
> > dxforth <dxf...@gmail.com> writes:
[..]
> FCOMP sees no difference between 0e -0e. Most likely because
> mathematics doesn't.

It is useful in numerical procedures. It makes quite a bit of difference
when handling a hyperbolic branch, see D.N. Williams' home pages.

-marcel

Paul Rubin

unread,
Jul 19, 2022, 8:05:07 AM7/19/22
to
dxforth <dxf...@gmail.com> writes:
> FCOMP sees no difference between 0e -0e. Most likely because
> mathematics doesn't.

Gforth doesn't have FCOMP and I don't see it in forth-standard.org.
What is it supposed to do? Computer floating point arithmetic doesn't
work like the mathematical reals: for example, FP arithmetic doesn't
follow the associative law, there are rounding errors, etc. So there
are good reasons for having negative 0.
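
The usual illustration (assuming IEEE 754 semantics and non-trapping division by zero; the displayed text varies by system):

1e  0e f/ f.  \ +Inf
1e -0e f/ f.  \ -Inf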

W. Kahan (main designer of IEEE 754) has many articles and diatribes
about FP arithmetic on his web site,
https://people.eecs.berkeley.edu/~wkahan/

dxforth

unread,
Jul 19, 2022, 9:53:14 AM7/19/22
to
On 19/07/2022 22:05, Paul Rubin wrote:
> dxforth <dxf...@gmail.com> writes:
>> FCOMP sees no difference between 0e -0e. Most likely because
>> mathematics doesn't.
>
> Gforth doesn't have FCOMP and I don't see it in forth-standard.org.
> What is it supposed to do?

Intel FPU instructions FCOM/FCOMP/FCOMPP--Compare Real

dxforth

unread,
Jul 19, 2022, 10:34:31 AM7/19/22
to
Do you mean F~ ? Given F= and FSIGN that can pick the sign of zero,
then F~ and its restrictive spec on zero become redundant.

Marcel Hendrix

unread,
Jul 19, 2022, 12:32:44 PM7/19/22
to
On Tuesday, July 19, 2022 at 4:34:31 PM UTC+2, dxforth wrote:
[..]
> Do you mean F~ ? Given F= and FSIGN that can pick the sign of zero,
> then F~ and its restrictive spec on zero becomes redundant.

I did not comment on F~ but its use is indeed limited (see below).
F= is far more important.

Searching for: F~
D:\dfwforth\examples\dictaat\hbeb\lampmodels.frt(127): wanted 1e-3 F~ IF I (set-temperature) UNLOOP EXIT ENDIF
D:\dfwforth\examples\DOMI\domiapi.frt(441): S~ curl -o "json.txt" -D "response.txt" -F~
D:\dfwforth\examples\nrc\chapter2_Linear_Algebraic_Equations\apps\llc.frt(730): <VO> FSQR Rl F/ PWANTED -0.01e F~ ;
D:\dfwforth\examples\numeric\division.frt(34): 1e-17 F~ ?LEAVE
D:\dfwforth\examples\numeric\tpf.frt(206): wy wsize F~
D:\dfwforth\examples\numeric\tpf.frt(207): wx wsize F~ AND ; PRIVATE
D:\dfwforth\examples\numeric\tpfgr.frt(190): wy wsize F~
D:\dfwforth\examples\numeric\tpfgr.frt(191): wx wsize F~ AND ; PRIVATE
D:\dfwforth\examples\paranoia\paranoia.frt(326): 0E F~ ;
D:\dfwforth\examples\pl1\pl1b.frt(121): s" 0e F~ " evaluate ; immediate
D:\dfwforth\examples\pl1\pl1b.frt(124): s" 0e F~ INVERT " evaluate ; immediate
D:\dfwforth\examples\SPICE\Mag_Tool\readmag.frt(536): chk? IF varname EVALUATE ( F: rnew rold -- ) FOVER -0.01e F~
D:\dfwforth\examples\SPICE\old_ispice\ispice.frt(1251): STEPVAR3 STV3_end 1e-15 F~
D:\dfwforth\examples\SPICE\old_ispice\ispice.frt(1252): STEPVAR2 STV2_end 1e-15 F~ AND
D:\dfwforth\examples\SPICE\old_ispice\ispice.frt(1253): STEPVAR1 STV1_end 1e-15 F~ AND ; PRIVATE
D:\dfwforth\include\marquard.frt(266): -1e-3 F~ ;
D:\dfwforth\include\spifsim.frt(2943): F2DUP F= closeness F~ OR ; PRIVATE
Found 17 occurrence(s) in 22 file(s), 23706 ms

Searching for: F=
Found 468 occurrence(s) in 87 file(s), 1473 ms

Searching for: FSIGN
Found 32 occurrence(s) in 13 file(s), 1473 ms

-marcel

none albert

unread,
Jul 19, 2022, 3:55:18 PM7/19/22
to
In article <4e841f83-8b8f-47fc...@googlegroups.com>,
Unclear. "FCOMP seeing no difference between 0e and -0e".
Is that useful?
Or is FCOMP useful?

>
>-marcel

dxforth

unread,
Jul 19, 2022, 7:33:28 PM7/19/22
to
On 20/07/2022 05:55, albert wrote:
> In article <4e841f83-8b8f-47fc...@googlegroups.com>,
> Marcel Hendrix <m...@iae.nl> wrote:
>>On Tuesday, July 19, 2022 at 11:57:07 AM UTC+2, dxforth wrote:
>>> On 19/07/2022 18:18, Anton Ertl wrote:
>>> > dxforth <dxf...@gmail.com> writes:
>>[..]
>>> FCOMP sees no difference between 0e -0e. Most likely because
>>> mathematics doesn't.
>>
>>It is useful in numerical procedures. It makes quite a bit of difference
>>when handling an hyperbolic branch, see D.N. Williams' home pages.
>
> Unclear. "FCOMP seeing no difference between 0e and -0e".
> Is that useful?
> Or is FCOMP useful?

FCOMP is the Intel FPU instruction that compares two f/p values.
It doesn't discriminate between positive and negative zero.
Given there is no instruction that compares them as different,
we can conclude Intel saw the former as more useful. Forth's
question to answer is how it came to conclude the opposite.

dxforth

unread,
Jul 20, 2022, 12:00:57 AM7/20/22
to
Gforth 0.7.9_20200709

\ test if sign passed on input
0e pad f! pad 1 floats dump
6FFFFF877278: 00 00 00 00 00 00 00 00

-0e pad f! pad 1 floats dump
6FFFFF877278: 00 00 00 00 00 00 00 80

\ test if sign passed on output
0e pad 10 represent drop nip . 0 ok
-0e pad 10 represent drop nip . 0 ok

Anton Ertl

unread,
Jul 20, 2022, 7:48:32 AM7/20/22
to
One other thing that changed:

-0e 0e f/ f. \ -na ok \ 0.7.3
-0e 0e f/ f. \ NaN ok \ 0.7.9_20220707

Maybe these changes are related. If you want to know for sure, you
can bisect it :-).

Anton Ertl

unread,
Jul 20, 2022, 7:53:06 AM7/20/22
to
Not sure what you mean with "precedence" here, but anyway, negative
zero should be printed differently from, e.g., 1. Whether it should
be printed differently from positive zero is the question.

dxforth

unread,
Jul 20, 2022, 9:12:34 AM7/20/22
to
On 20/07/2022 21:48, Anton Ertl wrote:
> dxforth <dxf...@gmail.com> writes:
>>On 19/07/2022 18:18, Anton Ertl wrote:
>>> How a negative zero should be printed is a different discussion.
>>
>>There's precedence for printing it differently to other numbers?
>
> Not sure what you mean with "precedence" here, but anyway, negative
> zero should be printed differently from, e.g., 1.

By precedence I mean convention. ANS has a convention for inputting
and outputting negative numbers.

> Whether it should
> be printed differently from positive zero is the question.

I see nothing in the definition of F. which permits a change of sign.

minf...@arcor.de

unread,
Jul 20, 2022, 12:21:04 PM7/20/22
to
Anton Ertl schrieb am Mittwoch, 20. Juli 2022 um 13:53:06 UTC+2:
> dxforth <dxf...@gmail.com> writes:
> >On 19/07/2022 18:18, Anton Ertl wrote:
> >> How a negative zero should be printed is a different discussion.
> >
> >There's precedence for printing it differently to other numbers?
> Not sure what you mean with "precedence" here, but anyway, negative
> zero should be printed differently from, e.g., 1. Whether it should
> be printed differently from positive zero is the question.

How can this be questioned??
Branch cuts are an important thing eg in the field of trigonometric and
logarithmic functions, particularly in the complex plane.
+0e FDUP FLN f. f. -inf +0.
-0e FDUP FLN f. f. -nan -0.
The only thing that is undefined is the 'sign' of nan. The signs before 0.
are important.

Marcel Hendrix

unread,
Jul 20, 2022, 3:11:31 PM7/20/22
to
On Wednesday, July 20, 2022 at 6:21:04 PM UTC+2, minf...@arcor.de wrote:
> Anton Ertl schrieb am Mittwoch, 20. Juli 2022 um 13:53:06 UTC+2:
[..]
> How can this be qustioned??
> Branch cuts are an important thing eg in the field of trigonometric and
> logarithmic functions, particularly in the complex plane.
> +0e FDUP FLN f. f. -inf +0.
> -0e FDUP FLN f. -nan -0.
> The only thing that is undefined is the 'sign' of nan. The signs before 0.
> are important.

You had me worried for a moment but it isn't as simple as that :--)

MATLAB:
>> log(-0)
ans = -Inf
>> log(+0)
ans = -Inf

-marcel

Paul Rubin

unread,
Jul 20, 2022, 9:08:21 PM7/20/22
to
an...@mips.complang.tuwien.ac.at (Anton Ertl) writes:
>>There's precedence for printing it differently to other numbers?
> Not sure what you mean with "precedence" here

I think he meant "precedent". It means it has been that way in the
past, though maybe not consistently.

dxforth

unread,
Jul 20, 2022, 11:33:45 PM7/20/22
to
A system isn't required to support signed-zero (e.g. stock SwiftForth) but
if it chooses to, it has no option but to convey the result to the user.

-1e 0e f* f.

Anton Ertl

unread,
Jul 21, 2022, 10:00:44 AM7/21/22
to
dxforth <dxf...@gmail.com> writes:
>I see nothing in the definition of F. which permits a change of sign.

The specification of F. does not specify much about it's output. In
particular, it does not specify anything about which number is
displayed how.

Anyway, I gather from the reactions here that people think that in a
high-quality implementation

0e fnegate f.

should output "-0.". So that's what Gforth now does; see

http://git.savannah.gnu.org/cgit/gforth.git/commit/?id=ffb2db329329c0ad67a5914be7f577b67726301e

minf...@arcor.de

unread,
Jul 21, 2022, 10:37:24 AM7/21/22
to
Everyone is free and happy to cook her own soup ;-)

Wolfram Alpha does:
log(0.0) or log(-0.0) --> undefined, whereas
log(0+0i) -> -inf, whereas
log(0.0+0.0i) -> undefined

Marcel Hendrix

unread,
Jul 21, 2022, 1:25:07 PM7/21/22
to
On Thursday, July 21, 2022 at 4:37:24 PM UTC+2, minf...@arcor.de wrote:
> Marcel Hendrix schrieb am Mittwoch, 20. Juli 2022 um 21:11:31 UTC+2:
[..]
> > MATLAB:
> > >> log(-0)
> > ans = -Inf
> > >> log(+0)
> > ans = -Inf
> Everyone is free and happy to cook her own soup ;-)
>
> Wolfram Alpha does:
> log(0.0) or log(-0.0) --> undefined, whereas
> log(0+0i) -> -inf, whereas
> log(0.0+0.0i) -> undefined

Soup? More like a bitches' brew.

-marcel

dxforth

unread,
Jul 21, 2022, 10:38:04 PM7/21/22
to
VFX:
0e fln f. -NaN ok

DX-Forth:
0e fln f. -INF ok

As both use the same x87 instructions I'm guessing it's something
to do with NAN/INF decoding.



Marcel Hendrix

unread,
Jul 22, 2022, 1:05:42 AM7/22/22
to
Why Inf? What about this classic:

FORTH> +0e fdup fsin fswap f/ f. -NAN ok
FORTH> +0e fdup fcos fswap f/ f. +INF ok
FORTH> -0e fdup fcos fswap f/ f. -INF ok
FORTH> 0e fdup fcos fswap f/ f. +INF ok

-marcel

Anton Ertl

unread,
Jul 22, 2022, 1:18:21 AM7/22/22
to
dxforth <dxf...@gmail.com> writes:
>VFX:
>0e fln f. -NaN ok

VFX Forth for Linux IA32 Version: 4.72 [build 0555]
Including /usr/local/VfxLinEval/Lib/x86/Ndp387.fth
0e fln f. -Inf ok
NDP Potential Exception: NDP SW = 0004

VFX Forth 64 5.11 RC2 [build 0112] 2021-05-02 for Linux x64
© MicroProcessor Engineering Ltd, 1998-2021

0e fln f. Invalid argument to FLN/FLOG
-> 0e fln f.
^

dxforth

unread,
Jul 22, 2022, 2:39:00 AM7/22/22
to
On 22/07/2022 15:16, Anton Ertl wrote:
> dxforth <dxf...@gmail.com> writes:
>>VFX:
>>0e fln f. -NaN ok
>
> VFX Forth for Linux IA32 Version: 4.72 [build 0555]
> Including /usr/local/VfxLinEval/Lib/x86/Ndp387.fth
> 0e fln f. -Inf ok
> NDP Potential Exception: NDP SW = 0004
>
> VFX Forth 64 5.11 RC2 [build 0112] 2021-05-02 for Linux x64
> © MicroProcessor Engineering Ltd, 1998-2021
>
> 0e fln f. Invalid argument to FLN/FLOG
> -> 0e fln f.
> ^

Looks like 32->64 conversion issues (for Win x64 anyway).
I've sent Stephen an email and mentioned your result.

dxforth

unread,
Jul 22, 2022, 3:49:55 AM7/22/22
to
After applying a fix to VFX64 I get those, in addition to 0e FLN working
as expected. Why is it classic?

dxforth

unread,
Jul 24, 2022, 12:31:22 AM7/24/22
to
On 22/07/2022 15:16, Anton Ertl wrote:
> dxforth <dxf...@gmail.com> writes:
>>VFX:
>>0e fln f. -NaN ok
>
> VFX Forth for Linux IA32 Version: 4.72 [build 0555]
> Including /usr/local/VfxLinEval/Lib/x86/Ndp387.fth
> 0e fln f. -Inf ok
> NDP Potential Exception: NDP SW = 0004
>
> VFX Forth 64 5.11 RC2 [build 0112] 2021-05-02 for Linux x64
> © MicroProcessor Engineering Ltd, 1998-2021
>
> 0e fln f. Invalid argument to FLN/FLOG
> -> 0e fln f.
> ^

Stephen advises the fix will be in the next release.

minf...@arcor.de

unread,
Jul 30, 2022, 8:22:53 AM7/30/22
to
IEEE 754 section 9.2.1 specifies clearly
".. log(±0) is −∞ and signals the divideByZero exception .."

Reference page 43 in
https://irem.univ-reunion.fr/IMG/pdf/ieee-754-2008.pdf

Krishna Myneni

unread,
Jul 30, 2022, 12:31:38 PM7/30/22
to
The divideByZero exception must be unmasked (its mask bit in the FPU control
word cleared) in order for the fpu to signal the exception. It is masked by default.

In kForth-32/64,

\ Check ln(0)
0e f.
0 ok
0e fln f.
-inf ok

\ Check ln(-0)
-0e f.
-0 ok
-0e fln f.
-inf ok


In kForth-32, the FPU's exception mask bit for divideByZero may be set
as follows:

------
include ans-words
include strings
include modules
include syscalls
include mc
include asm-x86
include fpu-x86

cr
.( FLN of zero with default FPU exception mask ) cr

0e fln f. cr

.( Press a key to set the divideByZero FPU exception ) cr
key drop

FPU_CW_ZERODIVIDE FPU_CW_EXCEPTION_MASK modifyFPUStateX86

0e fln f.
------

Output of the above code in kForth-32:
-------
$ kforth32
kForth-32 v 2.4.0 (Build: 2022-06-30)
Copyright (c) 1998--2022 Krishna Myneni
Contributions by: dpw gd mu bk abs tn cmb bg dnw
Provided under the GNU Affero General Public License, v3.0 or later


Ready!
include fpu-signals-test

FLN of zero with default FPU exception mask
-inf
Press a key to set the divideByZero FPU exception
Floating point exception (core dumped)
$
------

We can, of course, trap the signal in kForth (see sigfpe.4th for an
example of how to trap a signal in kForth).

--
Krishna




minf...@arcor.de

unread,
Jul 30, 2022, 1:16:32 PM7/30/22
to
Well done!

BTW I was sometimes wondering about those strange exception codes
54 and 55 in table 9.1 of the standard document. They seem a half-cooked
rudiment and practically useless.

Krishna Myneni

unread,
Jul 30, 2022, 9:56:31 PM7/30/22
to
Since Forth 2012 doesn't mandate IEEE floating point arithmetic, error
codes 54 and 55 are not specific to IEEE arithmetic specs. My guess is
that they were intended for use by processors which can provide
interrupts on such generic errors. The floating point arithmetic
exceptions which can be enabled on the x86 fpus are

INVALID
DENORMAL
ZERODIVIDE
OVERFLOW
UNDERFLOW
INEXACT

All of these, except DENORMAL, map to the IEEE standard arithmetic
exceptions. From the IEEE Std 754-2008 document,

7.2 Invalid Operation
7.3 Division by zero
7.4 Overflow
7.5 Underflow
7.6 Inexact

With regard to Table 9.1 of the Forth-94 and 2012 standards, exception
code -54 can be thrown for an x86 fpu UNDERFLOW (7.5) error. I'm not
entirely sure about when to throw exception code -55.

Note that the standard also provides the following floating point error
codes:

-41 loss of precision (I assume this is floating point?)
-42 floating-point divide by zero
-43 floating-point result out of range
-46 floating-point invalid argument

It would be good to have common practice for mapping the x86 fpu
exceptions to the standard error codes.
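
One possible mapping, sketched as constants; the INEXACT and DENORMAL rows are guesses rather than established practice:

-46 CONSTANT THROW_FPU_INVALID     \ floating-point invalid argument
-42 CONSTANT THROW_FPU_ZERODIVIDE  \ floating-point divide by zero
-43 CONSTANT THROW_FPU_OVERFLOW    \ floating-point result out of range
-54 CONSTANT THROW_FPU_UNDERFLOW   \ floating-point underflow
-41 CONSTANT THROW_FPU_INEXACT     \ loss of precision (?)
-55 CONSTANT THROW_FPU_DENORMAL    \ floating-point unidentified fault (?)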

--
Krishna


Krishna Myneni

unread,
Jul 31, 2022, 12:03:02 AM7/31/22
to
The above line of code,

FPU_CW_ZERODIVIDE FPU_CW_EXCEPTION_MASK modifyFPUStateX86

is actually masking the divideByZero exception, rather than enabling it.
The program generates an exception for the odd reason that when I loaded
the strings.4th module, the fpu status register indicates an INVALID
error, caused by the following statement in strings.4th

0e 0e f/ fconstant NAN

used to define NAN for the word STRING>F .

The code is not generating an exception for the reason I thought. I
noticed this because strings.4th is not needed by any of the code, but
if I omit it, the code fails to generate an exception.

The fpu-x86.4th file permits examining the status and control registers
of the fpu as follows:

getFPUStatusX86 fpu-status ?
getFPUStateX86 fpu-control ?

Also, the word ClearFPUexceptionsX86 clears the fpu status register.

Bottom line is the code I intended to enable an exception on the fpu
divide by zero error is not correct, and only generates an exception due
to a leftover status error from somewhere else in the code. The word
modifyFPUStateX86 may be faulty.

Stay tuned.

--
KM

dxforth

unread,
Jul 31, 2022, 1:38:25 AM7/31/22
to
On 31/07/2022 14:02, Krishna Myneni wrote:
>
> FPU_CW_ZERODIVIDE FPU_CW_EXCEPTION_MASK modifyFPUStateX86
>
> is actually masking the divideByZero exception, rather than enabling it.
> The program generates an exception for the odd reason that when I loaded
> the strings.4th module, the fpu status register indicates an INVALID
> error, caused by the following statement in strings.4th
>
> 0e 0e f/ fconstant NAN
>
> used to define NAN for the word STRING>F .

The bit patterns for IEEE INF NAN are documented. Load them into PAD and F@
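
A minimal sketch of that approach for 64-bit doubles, assuming 32-bit cells, little-endian storage, and that PAD is sufficiently aligned for DF@:

HEX
0 PAD !  7FF00000 PAD CELL+ !  PAD DF@ FCONSTANT +INF
0 PAD !  7FF80000 PAD CELL+ !  PAD DF@ FCONSTANT QNAN
DECIMAL
QNAN F.  \ typically prints NaN (exact text is system-dependent)
+INF F.  \ typically prints Inf or similar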






Krishna Myneni

unread,
Jul 31, 2022, 8:30:07 AM7/31/22
to
On 7/30/22 23:02, Krishna Myneni wrote:
> On 7/30/22 11:31, Krishna Myneni wrote:
,,,
> FPU_CW_ZERODIVIDE FPU_CW_EXCEPTION_MASK modifyFPUStateX86
>
> is actually masking the divideByZero exception, rather than enabling it.
> The program generates an exception for the odd reason that when I loaded
> the strings.4th module, the fpu status register indicates an INVALID
> error, caused by
...
> Bottom line is the code I intended to enable an exception on the fpu
> divide by zero error is not correct, and only generates an exception due
> to a leftover status error from somewhere else in the code. The word
> modifyFPUStateX86 may be faulty.
...
The word modifyFPUStateX86 works but I used it incorrectly. The
following code illustrates the working method for unmasking fpu exceptions.

fpu-signals-test.4th
-------
include ans-words
include modules
include syscalls
include mc
include asm-x86
include fpu-x86
include dump
include ssd

: unmaskFPUexceptions ( bits -- )
invert getFPUStateX86 fpu-control @ and
FPU_CW_EXCEPTION_MASK modifyFPUStateX86 ;

cr .( Press a key to mask all FPU exceptions [default] )
cr .( and compute FLN of zero )
cr key drop

0 unmaskFPUexceptions \ mask all exceptions
0e fln f. cr

cr .( Press a key to unmask the divideByZero FPU exception )
cr .( and compute FLN of zero.)
cr key drop

\ Clear current fpu exceptions before unmasking exceptions

clearFPUexceptionsX86

FPU_CW_ZERODIVIDE unmaskFPUexceptions
0e fln f.
--------

Note that I define the word unmaskFPUexceptions which takes a bit
pattern for the exceptions to be unmasked. Several exceptions may be
unmasked by or'ing the mask constants together. Also note that any
pending exceptions in the fpu's status register will immediate signal a
floating point exception if that particular exception is unmasked. If
you don't want that to happen, use clearFPUexceptionsX86 before
executing unmaskFPUexceptions, as illustrated above.

Output of the above code in kforth32 is given below.

------
include fpu-signals-test

Press a key to mask all FPU exceptions [default]
and compute FLN of zero
-inf

Press a key to unmask the divideByZero FPU exception
and compute FLN of zero.
Floating point exception (core dumped)
$
--------

--
Krishna Myneni

Krishna Myneni

unread,
Jul 31, 2022, 8:34:51 AM7/31/22
to
Agreed that it's better to load the bit pattern for NAN than to actually
generate exception(s) which are stored in the FPU status register and
can later trigger a floating point exception if the particular
exception(s) are unmasked. The alternative is to clear the FPU status
register after generating the exceptions.

--
Krishna



Krishna Myneni

unread,
Jul 31, 2022, 9:04:06 AM7/31/22
to
On 7/31/22 07:30, Krishna Myneni wrote:
> On 7/30/22 23:02, Krishna Myneni wrote:
...
> 0 unmaskFPUexceptions \ mask all exceptions
...

Sigh. I just realized that the above line is bad logic. It does not mask
all fpu exceptions. It has no effect on the mask pattern. Corrected
code, using the definition maskAllFPUexceptions, is given below.

fpu-signals-test.4th
------
include ans-words
include modules
include syscalls
include mc
include asm-x86
include fpu-x86

: maskAllFPUexceptions ( -- )
FPU_CW_INVALID
FPU_CW_DENORMAL or
FPU_CW_ZERODIVIDE or
FPU_CW_OVERFLOW or
FPU_CW_UNDERFLOW or
FPU_CW_INEXACT or
FPU_CW_EXCEPTION_MASK modifyFPUStateX86 ;

: unmaskFPUexceptions ( bits -- )
invert getFPUStateX86 fpu-control @ and
FPU_CW_EXCEPTION_MASK modifyFPUStateX86 ;

cr .( Press a key to mask all FPU exceptions [default] )
cr .( and compute FLN of zero )
cr key drop

maskAllFPUexceptions
0e fln f. cr

cr .( Press a key to unmask the divideByZero FPU exception )
cr .( and compute FLN of zero.)
cr key drop

\ Clear current fpu exceptions before unmasking exceptions

clearFPUexceptionsX86

FPU_CW_ZERODIVIDE unmaskFPUexceptions
0e fln f.
-------

KM
