On 12/07/15 23:43, Andrej Bauer wrote:
> On Sun, Jul 12, 2015 at 3:08 AM, Martin Escardo <
m.es...@cs.bham.ac.uk> wrote:
>> When I referee a paper, in mathematics or in computer science, I
>> expect all sentences to be meaningful (even when they are false). Raw
>> terms are ruled out. Grammatical errors are considered oversights,
>> to be fixed in the final version, if the submission has sufficiently
>> many true, interesting claims.
>
> But there are numerous occasions in your rich history of refereeing
> when the author spends time showing that a claim or a phrase is
> meaningful. For instance, he may speak about the supremum of a set in
> a Scott domain, and argue separately that the set is directed.
In type theory, the natural way to see this is as an operation from
directed subsets to points, rather than from arbitrary subsets to
points. The exception is when you only claim that directed suprema
("merely") exist, rather than that they can be found: then you need
more complex modes of expression to state e.g. Scott continuity, but
even this is doable without referring to meaningless terms.
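In Lean-style dependent type theory, this can be sketched as follows (a minimal sketch with hypothetical names and no library support; a real development would build on an existing order-theory library):

```lean
-- Minimal sketch (hypothetical names): the supremum operation takes the
-- directedness witness as an argument, so it can only ever be applied to
-- subsets that have been shown to be directed.
structure Poset where
  carrier : Type
  le : carrier → carrier → Prop

def Subset (P : Poset) : Type := P.carrier → Prop

def IsDirected (P : Poset) (S : Subset P) : Prop :=
  (∃ x, S x) ∧ ∀ x y, S x → S y → ∃ z, S z ∧ P.le x z ∧ P.le y z

-- A dcpo packages the supremum as an operation from *directed* subsets
-- to points, rather than from arbitrary subsets to points.
structure Dcpo extends Poset where
  sup : (S : Subset toPoset) → IsDirected toPoset S → toPoset.carrier
```

With this signature, a term `D.sup S h` is only well-formed together with the proof `h` of directedness, so no meaningless supremum term is ever written down.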
> This is a case of "raw syntax" becoming "honest syntax". We even
> teach children that division is raw until the denominator is shown to
> be non-zero.
Likewise, division is defined on the subset of pairs whose second
component is non-zero.
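In Lean notation, for instance, this can be sketched by making division take the non-vanishing proof as an argument (`divNZ` is a hypothetical name wrapping Lean's built-in truncated natural-number division):

```lean
-- Division that demands a proof that the denominator is non-zero, so the
-- raw term "a divided by 0" can never be formed in the first place.
def divNZ (a b : Nat) (_ : b ≠ 0) : Nat := a / b

-- The proof obligation is discharged at the call site:
example : divNZ 6 3 (by decide) = 2 := by decide
```

An attempted application `divNZ 6 0 h` is ruled out because no term `h : (0 : Nat) ≠ 0` can be constructed.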
But here is an interesting example from classical mathematics.
If you have a differential equation, the solution operator can be
considered as a function mapping initial conditions (functions) to
solutions (functions again).
But, in general, the domain of the solution is not fixed in advance:
part of solving the differential equation is finding the domain of the
solution, at the same time as we find the unknown function.
For example, in Carathéodory's existence theorem, the claim is that,
under certain hypotheses, there is a solution in some neighbourhood of
the initial condition.
In dependent type theory, one has to define the differential equation
with unknown y:D->R in such a way that both D and y are variables, but I
don't see any difficulty in formulating this with dependent types so
that only meaningful terms occur. Admittedly classical mathematicians
usually don't bother to make this precise, as what is meant is clear to
them (although not necessarily to the students).
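One way to sketch this with dependent types is to make the domain part of the data of a solution, as a dependent record (all names here are hypothetical, and the reals are merely postulated for the sake of the signature):

```lean
-- The reals and a positivity predicate are only postulated in this sketch.
axiom R : Type
axiom pos : R → Prop

-- A local solution near t₀ carries its own interval of existence: the
-- radius δ is part of the data, found together with the function y,
-- exactly as in Carathéodory's existence theorem.
structure LocalSolution (t₀ : R) where
  δ : R            -- radius of the domain: a component, not a parameter
  δ_pos : pos δ    -- the interval (t₀ - δ, t₀ + δ) is non-degenerate
  y : R → R        -- the unknown function, found together with its domain
  -- the (elided) condition that y satisfies the ODE on (t₀ - δ, t₀ + δ)
```

Here both the domain (via δ) and the function y are variables of the same record, so every term that occurs is meaningful.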
> Only in artificially designed situations is grammar so simple that we
> need no fancy processing to verify it. Human language possible raw
> syntax to use is, think of how children small speak to learn. Quite
> often correct application of grammatical rules requires semantic
> knowledge.
Human language has lots of flaws (for better or for worse) that we
deliberately avoid when designing precise mathematical languages, and
hence it should not be used as a justification. Part of the (historical)
development of mathematics is choosing a subset of natural language, and
making it precise/rigorous for mathematical purposes.
> Thus I would say that the complications involving the syntax of
> dependent types are rather the norm.
> The simple syntax of first-order
> logic and set theory is an idealization (in the sense of it being
> unrealistic and unusable in practice) made by logicians for
> logicians. Real mathematicians speak something that is closer to
> dependent type theory than to first-order logic and set theory (even
> though they are told otherwise).
I couldn't agree more.
But still, it should be possible to define the meaningful terms directly,
without having to carve them out of some choice of raw terms. At least
this is the impression that e.g. Martin-Löf's writings give: he seems to
generate only meaningful terms with his derivation rules, in his various
flavours of MLTT. Or is this just an optical illusion?
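For a simply typed fragment, this intrinsic style can be sketched as follows (a hypothetical Lean sketch of intrinsically well-typed syntax with de Bruijn indices, not Martin-Löf's own presentation):

```lean
-- Sketch: terms are indexed by a context and a type, so only meaningful
-- (well-typed) terms can be generated by the constructors; there is no
-- prior universe of raw terms to carve them out of.
inductive Ty where
  | base : Ty
  | arrow : Ty → Ty → Ty

-- Contexts are lists of types; variables are typed de Bruijn indices.
inductive Var : List Ty → Ty → Type where
  | zero : Var (a :: Γ) a
  | succ : Var Γ a → Var (b :: Γ) a

-- Ill-typed applications are not merely rejected: they are inexpressible.
inductive Term : List Ty → Ty → Type where
  | var : Var Γ a → Term Γ a
  | lam : Term (a :: Γ) b → Term Γ (.arrow a b)
  | app : Term Γ (.arrow a b) → Term Γ a → Term Γ b
```

Whether this style scales from the simply typed case to full dependent types, where typing and conversion are intertwined, is of course exactly the question at issue.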
Having said that, for the *practical* purposes of formalization in Agda
or Coq, it is very convenient to be able to enter a raw term and be
told if it is wrong, or else, in the case of Agda, be praised, with
an empty box as the prize presented at the bottom of the emacs screen.
Martin