Dear Frank,
I am vaguely familiar with your natural topology paper. I am afraid I
cannot devote time to your new paper just now because I have a rather
tall pile of refereeing/reviewing jobs to do (suffice it to say that
all of my digits are insufficient).
However, here are some quick remarks.
1. Seemingly innocuous theoretical moves are show-stoppers in practice.
You define reals in terms of intervals with rational endpoints. In
practice we do not use rationals but dyadic rationals. This seems like
a trivial change theoretically, but it is not: the rationals form a
field, whereas the dyadic rationals form only an *approximate* field,
since they are not closed under division.
Another example: interval-based theories of real-number computation
usually assume that the interval operations are performed exactly,
i.e., [a,b] + [c,d] = [a+c, b+d]. But in practice this does not happen
and one always has to round things up and down very carefully and chop
off extra bits to save space. This creates an annoying disconnect
between theory and implementation.
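To make the rounding concrete, here is a toy sketch in Haskell (all
names are mine, nothing from any standard library): dyadics m * 2^e
with outward rounding to p mantissa bits, so that interval addition
is not [a+c, b+d] on the nose.

    data Dyadic = Dyadic { mant :: Integer, expo :: Int }

    -- Number of bits in |n| (0 for n = 0).
    bits :: Integer -> Int
    bits n = length (takeWhile (> 0) (iterate (`div` 2) (abs n)))

    -- Round to at most p mantissa bits, downward resp. upward.
    roundDown, roundUp :: Int -> Dyadic -> Dyadic
    roundDown p (Dyadic m e)
      | k <= 0    = Dyadic m e
      | otherwise = Dyadic (m `div` 2 ^ k) (e + k)       -- floor
      where k = bits m - p
    roundUp p (Dyadic m e) =
      let Dyadic m' e' = roundDown p (Dyadic (negate m) e)
      in  Dyadic (negate m') e'                          -- ceiling via floor

    -- Exact addition: the mantissas grow, which is why one chops
    -- bits afterwards.
    addD :: Dyadic -> Dyadic -> Dyadic
    addD (Dyadic m1 e1) (Dyadic m2 e2) =
      let e = min e1 e2
      in  Dyadic (m1 * 2 ^ (e1 - e) + m2 * 2 ^ (e2 - e)) e

    data Interval = Interval Dyadic Dyadic               -- [lower, upper]

    -- Round the lower endpoint down and the upper one up, so the
    -- result encloses the true sum with p-bit mantissas.
    addI :: Int -> Interval -> Interval -> Interval
    addI p (Interval a b) (Interval c d) =
      Interval (roundDown p (addD a c)) (roundUp p (addD b d))

Division is where the field structure breaks outright: 1/3 has no
dyadic representation at all, so quotients can only ever be enclosed,
never computed exactly.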
So one ends up with "approximate everything": dyadic rationals with
approximate operations, rounding everywhere, and so on. Theoreticians
usually run away.
2. There are many types, not just the reals.
A comprehensive treatment of real-number computation must speak not
only about the reals, but also about a calculus of types, of which the
reals are an example. A programming language always has an algebra of
types (products, exponentials, etc.) and so a theory of computation
needs to account for this structure. One way to phrase this is: what
is the categorical structure of your ambient category?
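Purely as an illustration (continuing the toy Haskell sketch from
point 1), the type algebra immediately manufactures objects that the
theory must also explain:

    type R        = Interval          -- one enclosure; a real proper
                                      -- would be a stream of these
    type Plane    = (R, R)            -- a product
    type Path     = R -> Plane        -- an exponential: curves in the plane
    type Operator = (R -> R) -> R     -- second order: integration, sup, ...

A theory that accounts only for R is silent about what Path or
Operator mean computationally.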
3. What about function spaces?
Continuing from the previous point, it is desirable to have function
spaces as "first-class" objects. Higher-order functions (integration,
differentiation, quantifiers, selection functionals) cannot be treated
systematically unless their domains of definition exist. In practice
you can get a lot of mileage by handling spaces of real functions, as
that gives you most of analysis. But of course it is better if you
have a cartesian closed category.
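For example (continuing the same toy sketch, with the working
precision pulled out of thin air), once R -> R is an honest type,
integration over [0,1] is just an ordinary value of the Operator type
above:

    -- Enclose the integral of f over [0,1] by a Riemann sum on 2^n
    -- dyadic pieces; valid when f is an interval extension.
    integrate01 :: Int -> (Interval -> Interval) -> Interval
    integrate01 n f =
      foldr1 (addI p) [ scale (f (piece k)) | k <- [0 .. 2 ^ n - 1] ]
      where
        p = 64                                 -- arbitrary working precision
        piece k = Interval (Dyadic k (-n)) (Dyadic (k + 1) (-n))
        scale (Interval a b) = Interval (shift a) (shift b)
        shift (Dyadic m e) = Dyadic m (e - n)  -- multiply by 2^(-n)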
I consider the question "what is a good datatype for real functions?"
to be open. Should we use locally Lipschitz functions (à la Abbas
Edalat, with lower and upper constants taking values in the extended
reals)? Or moduli of continuity? Or Paul Taylor's theory?
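To fix ideas on the moduli option (a hypothetical interface, not a
claim about anyone's implementation), a real function could be
packaged together with its modulus:

    -- A function with a modulus of continuity: how many input bits
    -- are needed to obtain a desired number of output bits.
    data RealFun = RealFun
      { apply   :: Interval -> Interval  -- interval extension
      , modulus :: Int -> Int            -- output precision -> input precision
      }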
How does your theory fare with respect to the above three questions? I
recommend Paul Taylor's work as an example of a theory that is
sensitive to points 1 and 2, and at least aware of the importance of
3. But of course many others have worked in this area -- I am just
expressing my personal taste.
There is no shortage of theories of real-number computation, but there
is a severe shortage of implementations. With the reals we are where
the integers were in the 1980s. Nowadays it's common for programming
languages to have native support for big integers (Haskell, Python,
Java, OCaml), but one needs to install obscure libraries to get exact
reals. What we need is solid implementations of exact reals that can
be included as standard parts of programming languages. They will be
slow and ridiculed at first, but eventually someone will notice that
they don't give bogus results, unlike floating point.
With kind regards,
Andrej