Hi all,
As discussed in yesterday's meeting, I asked Taylor Campbell about
the 'real?' issue. His response follows.
----- Forwarded message from Taylor R Campbell <camp...@mumble.net> -----
Date: Sat, 22 Nov 2025 03:00:21 +0000
From: Taylor R Campbell <camp...@mumble.net>
To: Wolfgang Corcoran-Mathe <w...@sigwinch.xyz>
Subject: Re: RFC on real? in R6RS & R7RS Scheme
> Date: Fri, 21 Nov 2025 19:03:19 -0500
> From: Wolfgang Corcoran-Mathe <w...@sigwinch.xyz>
>
> I'm hoping to get your opinion on the different definitions of 'real?'
> provided by R6RS & R7RS & on the way forward for this procedure & its
> fellows. John Cowan summarized the issues here:
>
> https://groups.google.com/g/scheme-reports-wg2/c/EAF0ZsFeqmU
>
> and the active issue-tracker page is here:
>
> https://codeberg.org/scheme/r7rs/issues/90
>
> The question before the R7RS WG2 is whether to (a) preserve the R5RS
> definition of 'real?' & possibly introduce a new predicate provisionally
> named 'compnum?'[1] for identifying inexact-inexact complex numbers,
> or (b) use the stricter R6RS definition of 'real?' and include R6RS's
> 'real-valued?'. Many of the WG members are uncertain & want to solicit
> expert opinions before making any binding decisions. (Lucky you, Mr.
> Expert.)
jcowan's claim (or the claim imputed by jcowan to the R6RS authors) is
false:
> The rationale here is that 0.0, considered as the result of an
> inexact computation, may not represent a true mathematical zero; in
> fact, it can be any number between 0 and the halfway point from 0 to
> the smallest representable positive inexact number.
A Scheme inexact zero is just that: (true mathematical) zero, with a
bit indicating that the computation leading to it may have been
approximated -- whether because the inputs were not known exactly, or
because the operation can't be computed exactly, or because the result
of the operation can't be represented exactly in any supported
representation.
It's just a finer-grained, per-value version of the floating-point
inexact-result sticky exception bit defined in IEEE 754 and supported
on essentially all application CPUs of the last four decades. You
could legitimately implement arithmetic in Scheme entirely with
floating-point numbers plus an exact/inexact bit, and simply tag the
result of an operation (flop x y) with the inexact bit if
(or <floating-point inexact-result exception bit was set by flop>
(inexact? x)
(inexact? y))
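A minimal sketch of that tagging scheme, for concreteness.  The names
`make-tagged', `tagged-op', and `flop-raised-inexact?' are hypothetical
helpers invented for this illustration, not part of any standard;
`flop-raised-inexact?' stands in for whatever mechanism the system uses
to read the IEEE 754 inexact-result sticky bit:

```scheme
;; Hypothetical representation: a floating-point value paired with
;; an inexactness flag.
(define (make-tagged value inexact?) (cons value inexact?))
(define (tagged-value n) (car n))
(define (tagged-inexact? n) (cdr n))

;; Combine two tagged numbers with a floating-point operation FLOP,
;; propagating inexactness from the inputs or from the operation
;; itself (via the sticky exception bit).
(define (tagged-op flop flop-raised-inexact? x y)
  (let ((result (flop (tagged-value x) (tagged-value y))))
    (make-tagged result
                 (or (flop-raised-inexact? result)
                     (tagged-inexact? x)
                     (tagged-inexact? y)))))
```

The point is that the flag records the provenance of the value, not an
error bound on it.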
A Scheme inexact zero does _not_ represent an interval any more than
the floating-point zero represents an interval; there's no language in
any of the reports, going back to gjs's 1985 numeric tower proposal
(https://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1985/msg00016.html),
in support of this interpretation, which is obviously absurd, because,
e.g., the true value of
(1 + 2^-53) - 1
is 2^-53, but the value of
(- (+ 1.0 (expt 2.0 -53)) 1.0)
in any Scheme system with IEEE 754 binary64 arithmetic is 0.0, yet
2^-53 certainly lies nowhere near the interval [-2^-1075, 2^-1075]
around zero halfway to the adjacent floating-point numbers which 0.0
allegedly represents under this interpretation.
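The contrast is easy to see in any Scheme with exact rationals
alongside IEEE 754 binary64 flonums:

```scheme
;; Exact arithmetic keeps the true difference:
(- (+ 1 (expt 2 -53)) 1)        ; => 1/9007199254740992, i.e. 2^-53

;; In binary64, (+ 1.0 (expt 2.0 -53)) rounds (to even) back to 1.0,
;; so the difference is an inexact zero -- a true zero whose history
;; involved rounding, not an interval around zero.
(- (+ 1.0 (expt 2.0 -53)) 1.0)  ; => 0.0
```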
(And if you actually used interval arithmetic -- which is not really
compatible with Scheme's numeric tower, because it can't yield a
true/false answer to questions like zero? -- you'd constantly end up
with (-\infty, +\infty) partway through your computation as usual.)
Back to the question at hand:
I'm not sure what the value of the R6RS real? is. I would suggest
surveying existing applications that use real?, whether under the R5RS
or the R6RS definition, finding what they are actually looking for, and
arguing one way or another on the basis of real-world use cases.
If you want a predicate for a particular representation as a condition
for compiler optimization, maybe you want a commitment to flonums
(which standard Scheme never had) and a flonum? predicate; the R6RS
real? certainly doesn't serve that purpose.
If you do want commitments to particular representations, `compnum?'
isn't quite right either; `recnum?' might be, but if I understand
correctly, `compnum?' is supposed to be true of both 1.+0.i and 1.@0.,
even if they have different representations -- and even then, the
conventional usage of the term `recnum' doesn't really imply
<flonum>+<flonum>i.
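For concreteness, the "inexact-inexact" reading of the provisional
`compnum?' might be sketched as follows.  This is only an illustration
of one possible definition, not a settled semantics:

```scheme
;; Sketch: #t when both parts of Z are inexact, e.g. for 1.+2.i,
;; and -- on the reading described above -- also for 1.+0.i and 1.@0.
;; Whether it should answer #t for a plain flonum like 1.0 (where
;; some systems report an exact 0 imaginary part and others 0.0) is
;; one of the open questions.
(define (compnum? z)
  (and (number? z)
       (inexact? (real-part z))
       (inexact? (imag-part z))))
```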
But the semantics should really be driven by real-world needs, not by
abstract theorizing about a hypothetical super-clever compiler: show
me the program for which the finer distinction is useful, and show me
the compiler and how it _can_ take advantage of that information,
before committing to weird semantic choices based on theories of how
hypothetical compilers _might_ take advantage.
(Really, I expect you'll get better results through vectorization --
via explicit vector operations, not fancy compiler techniques -- than
through carefully crafted predicates committing to representations of
individual numbers.)
----- End forwarded message -----
--
Wolfgang Corcoran-Mathe <w...@sigwinch.xyz>