I've been thinking about whether it's practical to do theory interpretation on a purely syntactic basis. The Python-prototype Peano axiomatization was one attempt, but with limitations: it lacked set.mm's axiom of substitution, which turns out to be not strictly necessary, but convenient.
Here is an interpretation of first-order Peano arithmetic in the ZFC (set.mm) universe. I think it works but am not 100% sure (I haven't actually proved it yet).
Basically, I'm using domains, with "bottom" representing a type mismatch error. Every nat-valued term in the Peano world translates to a set-valued term which is either in NN0 or is bottom. Every wff-valued term in the Peano world translates to a set-valued term which has representations for false, true, and bottom. The exact representations are flexible; for the sake of concreteness let's say false is the empty set, true is the singleton containing the empty set, and bottom is omega.
All operations and connectives are strict; for any arguments which are not properly typed (including having bottom value), the result is bottom. Thus, -. true is false, -. false is true, and -. anything-else is bottom.
Similarly for equality. Both arguments are expected to be nat. If so, then the value is true or false depending on whether they're equal. If not, the value is bottom.
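The strict connectives and strict equality described above can be sketched in Python, in the spirit of the Python prototype mentioned at the top. All names here are hypothetical, and `is_nat` is just a stand-in for membership in NN0:

```python
# Hypothetical Python sketch of the three-valued encoding.
# Sets are modeled with frozensets so they can nest.
FALSE = frozenset()          # the empty set
TRUE = frozenset({FALSE})    # singleton containing the empty set
BOTTOM = "omega"             # any third marker distinct from the other two

def is_nat(a):
    # stand-in for "a e. NN0" in the real translation
    return isinstance(a, int) and a >= 0

def nn_not(p):
    # strict negation: -. true is false, -. false is true,
    # and -. anything-else (including bottom) is bottom
    if p == TRUE:
        return FALSE
    if p == FALSE:
        return TRUE
    return BOTTOM

def nn_eq(a, b):
    # strict equality: bottom unless both arguments are in type
    if not (is_nat(a) and is_nat(b)):
        return BOTTOM
    return TRUE if a == b else FALSE
```

For example, `nn_eq(2, BOTTOM)` is bottom, since bottom is not a properly typed nat, so strictness propagates it.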
Perhaps the most interesting piece of the translation is the |- judgment. It simply asserts that its argument is not false. In particular, bottom is accepted. Informally, this means that if types aren't satisfied, it isn't possible to say anything about the statement.
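As a sketch (same hypothetical encoding as in the prototype, with false as the empty set), the judgment is just one comparison:

```python
FALSE = frozenset()  # the encoding of false: the empty set

def turnstile(p):
    # the |- judgment: assert only that p is not false.
    # In particular, bottom is accepted.
    return p != FALSE
```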
Let's consider "A >= 0". In the Peano world (where A has "kind" or type nat), this is a theorem. Obviously, by itself it is not a theorem in ZFC. In the translation I propose, it would have value true (singleton of empty set) if A is a natural number greater than or equal to 0, false (empty set) if it is a natural number less than 0, or bottom if A is not a natural number. Then |- A >= 0 would be true, because there is no value of A for which it is false.
The translation of A. x ph is essentially A. x (x e. NN0 -> ph), meaning that the bound variable ranges only over values of the proper type. The full definition is:
    if E. x ( x e. NN0 /\ [[ph]] = bottom ) then
        bottom
    else if E. x ( x e. NN0 /\ [[ph]] = false ) then
        false
    else
        true
This captures the idea that A. is strict. If the translation of ph is bottom for any reason other than the fact that x is out-of-type, then it evaluates to bottom. Otherwise it has the intended meaning.
It should be possible to verify that the expected axioms of first-order predicate calculus in the Peano world check. Generalization is a particularly easy case. If |- ph holds, then that means [[ph]] is _never_ false, whether x is of type nat or not. This clearly implies |- A. x ph, which states that [[ph]] is never false for any x of type nat.
Given a proof of a theorem written in terms of first-order Peano, then interpreted, it should be possible to connect the result back to the ZFC universe. If I use the "nn." prefix to indicate the Peano universe, then one can build up a correspondence between the two worlds, similar to building up a substitution.
A e. NN0 /\ B e. NN0 -> (A nn.= B) =/= bottom
A e. NN0 /\ B e. NN0 -> ( (A nn.= B) = true <-> A = B )
A e. NN0 /\ B e. NN0 -> (A nn.+ B) =/= bottom
A e. NN0 /\ B e. NN0 -> (A nn.+ B) = A + B
p e. {true, false} -> (nn.-. p) e. {true, false}
p e. {true, false} -> ( (nn.-. p) = true <-> -. (p = true) )
Thus, from |- A >= 0 in the Peano world, it is straightforward to derive A e. NN0 -> A >= 0.
Obviously, automation could help build such correspondences. However, with this translation, no such automation is required to work within a world.
I believe the above extends reasonably straightforwardly to types beyond natural numbers. For interpreting full HOL, I'd expect to use choice functions as in Mario's paper on conversion of HOL Light into Metamath.
I'm also thinking of an intermediate stage between Peano and set.mm in which explicit type guards would be required (e.g. A: NN0 -> A >= 0). This axiomatization would be more awkward for writing proofs, as it requires explicit manipulation of type assumptions, but easier for consuming proofs, as the "correspondence" discussed above could be purely syntactic. I'm also thinking this approach might be more effective for working with multiple types at the same time (not least of which is the numeric hierarchy).
This exploration is making me feel more confident that modules based on the lambda-pi-sigma calculus (as I discussed recently here) would be an effective approach to breaking a monolithic proof repository such as set.mm into chunks, each of which depends on a "least axiomatization" rather than all of ZFC. I expect to continue exploring that.
Raph