Progress: unification based checking in the new syntax

Raph Levien

Jun 17, 2017, 11:14:08 AM
to ghil...@googlegroups.com
The repo (https://github.com/raphlinus/ghilbert) has seen some good progress lately. There's a prototype of the new theorem syntax, based on unification. I haven't done anything with bound variables yet, but it's at the point where people can play with it using logics that don't use variables, such as reductions in the SKI combinator calculus.

The syntax follows my earlier sketch (though I haven't done precedence parsing yet so terms basically look like s-expressions). I am, so far, very happy with the unification based approach. In fact, I'm embarrassed by my earlier sorta-kinda-unification where I "matched" term variables against hypotheses but not conclusions. As I recall, Adam was encouraging me toward real unification. Sorry it's taken so many years for me to get the message.
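For the curious, the core move is small enough to sketch. This is not the repo's code, just a toy in Rust (the Term type and names are invented for the illustration) showing the kind of first-order unification the checker is organized around:

use std::collections::HashMap;

// Toy term language: a metavariable, or a constant applied to arguments.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(String),
    App(String, Vec<Term>),
}

// Apply the substitution built so far, chasing variable bindings.
fn resolve(t: &Term, subst: &HashMap<String, Term>) -> Term {
    match t {
        Term::Var(v) => match subst.get(v) {
            Some(bound) => resolve(bound, subst),
            None => t.clone(),
        },
        Term::App(f, args) => {
            Term::App(f.clone(), args.iter().map(|a| resolve(a, subst)).collect())
        }
    }
}

// First-order unification: extend `subst` so the two terms become equal,
// or report a mismatch. (Occurs check omitted to keep the sketch short.)
fn unify(a: &Term, b: &Term, subst: &mut HashMap<String, Term>) -> Result<(), String> {
    let (a, b) = (resolve(a, subst), resolve(b, subst));
    if a == b {
        return Ok(());
    }
    match (&a, &b) {
        (Term::Var(x), _) => {
            subst.insert(x.clone(), b.clone());
            Ok(())
        }
        (_, Term::Var(y)) => {
            subst.insert(y.clone(), a.clone());
            Ok(())
        }
        (Term::App(f, fargs), Term::App(g, gargs)) => {
            if f != g || fargs.len() != gargs.len() {
                return Err(format!("can't unify {} with {}", f, g));
            }
            for (x, y) in fargs.iter().zip(gargs.iter()) {
                unify(x, y, subst)?;
            }
            Ok(())
        }
    }
}

fn main() {
    // unify (-> ph ps) with (-> (-> p q) p): expect ph -> (-> p q), ps -> p
    let var = |s: &str| Term::Var(s.to_string());
    let imp = |x, y| Term::App("->".to_string(), vec![x, y]);
    let lhs = imp(var("ph"), var("ps"));
    let rhs = imp(imp(var("p"), var("q")), var("p"));
    let mut subst = HashMap::new();
    unify(&lhs, &rhs, &mut subst).unwrap();
    println!("{:?}", subst);
}

Here, unifying (-> ph ps) with (-> (-> p q) p) extends the substitution with ph bound to (-> p q) and ps bound to p, which is the flavor of matching a unification-based checker does when it lines up a proof step's conclusion with the hypotheses of whatever it invokes.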

There's a lot I can do to polish this fragment (the code is strictly prototype quality), but the next big advance is bound variables. I'm considering a spectrum of roughly four possibilities:

1. Pretty much exactly the same as the old Ghilbert: the abstract syntax for universal quantification is (A. x ph), and for substitution ([/] A x ph). In the latter case, x is a free variable in the expression iff it appears free in A. Thus, each term requires metadata about variable freeness.

2. Use lambda syntax in the abstract syntax, so universal quantification is (A. (\ x ph)) and substitution is ([/] A (\ x ph)). The logic for computing freeness becomes very simple: it bubbles up, except that \ removes the bound variable.

3. Allow these lambda expressions in more syntax, so you could write something like (-> (A. (\ x (<-> (ph x) (ps x)))) (<-> (A. ph) (A. ps)))

4. Basically use the above internal representation everywhere, so all term variables are represented as a lambda abstracting over all their free variables, and hypotheses and conclusion of deduction are closed over those variables. That basically brings you to a lambda-pi representation.

I'm leaning toward 2 here. The advantages, as I see them, of 4 are (a) it lets you express higher-order quantification more easily and (b) lambda-pi calculus can more easily express natural deduction (you quantify over assumptions).
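To make option 2 concrete, here is a toy sketch (Rust, purely illustrative, not the prototype's actual representation) of the freeness computation when the binder is explicit in the abstract syntax:

use std::collections::HashSet;

// Abstract syntax with an explicit binder, as in option 2:
// (A. (\ x ph)) would be App("A.", [Lambda("x", ph)]).
enum Expr {
    Var(String),                // a variable occurrence
    App(String, Vec<Expr>),     // a term constructor applied to arguments
    Lambda(String, Box<Expr>),  // \ x body
}

// Freeness just bubbles up, except that \ removes its bound variable.
fn free_vars(e: &Expr) -> HashSet<String> {
    match e {
        Expr::Var(x) => std::iter::once(x.clone()).collect(),
        Expr::App(_, args) => args.iter().flat_map(free_vars).collect(),
        Expr::Lambda(x, body) => {
            let mut fv = free_vars(body);
            fv.remove(x);
            fv
        }
    }
}

With this, ([/] A (\ x ph)) needs no freeness metadata at all: x is free in the whole term exactly when it is free in A, and that falls out of the Lambda case by itself.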

Feedback welcome.

Raph

Adam Bliss

Jun 17, 2017, 12:07:34 PM
to ghil...@googlegroups.com
On Sat, Jun 17, 2017 at 11:14 AM, Raph Levien <raph....@gmail.com> wrote:
The repo (https://github.com/raphlinus/ghilbert) has seen some good progress lately. There's a prototype of the new theorem syntax, based on unification. I haven't done anything with bound variables yet, but it's at the point where people can play with it using logics that don't use variables, such as reductions in the SKI combinator calculus.


Cool! I look forward to playing with it.

 
There's a lot I can do to polish this fragment (the code is strictly prototype quality), but the next big advance is bound variables. I'm considering a spectrum of roughly four possibilities:

1. Pretty much exactly the same as the old Ghilbert: the abstract syntax for universal quantification is (A. x ph), and for substitution ([/] A x ph). In the latter case, x is a free variable in the expression iff it appears free in A. Thus, each term requires metadata about variable freeness.

2. Use lambda syntax in the abstract syntax, so universal quantification is (A. (\ x ph)) and substitution is ([/] A (\ x ph)). The logic for computing freeness becomes very simple: it bubbles up, except that \ removes the bound variable.

3. Allow these lambda expressions in more syntax, so you could write something like (-> (A. (\ x (<-> (ph x) (ps x)))) (<-> (A. ph) (A. ps)))

4. Basically use the above internal representation everywhere, so all term variables are represented as a lambda abstracting over all their free variables, and hypotheses and conclusion of deduction are closed over those variables. That basically brings you to a lambda-pi representation.

I'm leaning toward 2 here. The advantages, as I see them, of 4 are (a) it lets you express higher-order quantification more easily and (b) lambda-pi calculus can more easily express natural deduction (you quantify over assumptions).


I'd like to vote against #1. The FreeMap approach of Ghilbert certainly works; I was able to use it, and (eventually, much later) understand it; but it never really felt "right".

Having just enjoyed my own epiphany in grokking the lambda-Pi calculus, I am excited about #4. I've been wondering though, since I read your presentation on the metamath group: given that Lean seems to be a very good lambda-Pi based prover (quite mature and well-documented), what are the advantages that you hope ghilbert2 will have over just using Lean to prove theorems? So far, I can see these possible areas for improvement:

A. simpler syntax (hide lambda-Pi from the user so that she doesn't have to understand dependent type theory)
B. remove tactics (so verifiers can be simple and deterministic)
C. enable metamath interoperability (though, since ghilbert1 never really got there, it maybe isn't a super important goal)
D. focus more on "little theories" which import and export interfaces
E. it's just more fun to write a new proof system rather than use one that exists (especially if you get to use your current favorite toy programming language :)

What are the main improvements you have in mind for ghilbert2? Understanding this would put me in a better place to weigh the options #2-#4 that you presented.

Cheers,
--Adam

Adam Bliss

Jun 17, 2017, 12:16:30 PM
to ghil...@googlegroups.com
On Sat, Jun 17, 2017 at 12:07 PM, Adam Bliss <abl...@gmail.com> wrote:
Cool! I look forward to playing with it.


PS: how do I build it? 

Raph Levien

Jun 17, 2017, 1:18:56 PM
to ghil...@googlegroups.com
Easiest way to build it is: install Rust (I recommend rustup.rs), then "cargo run -- ski.gh".

I'm with you on #1. The existing [/] is probably one of the weirdest things in existing Ghilbert, and very few people will find the typemap intuitive. So I think it's a lot easier to sell as sugar for lambda, with extremely simple rules for fv.

I'm also interested in exploring #4. The lambda-pi calculus is powerful and well understood. If nothing else, Lean shows how to adapt typeclasses to it.

So what I think I'll do is pause on the decision, and move the rest of the prototype forward (infix precedence syntax, better error reporting, etc).

Your answers to the question you asked me are spot-on. I think one of the main areas is interoperability with other systems, and this is why I'm _not_ attaching a programming language. I'll probably write something of a manifesto explaining my goals at some point, but right now it feels like I'm exploring what's possible, with an intuition that doing something really simple and clean (inspired by Metamath) has value. But you're absolutely right that Lean is very good.

Raph


Raph Levien

Jun 18, 2017, 8:13:48 PM
to ghil...@googlegroups.com
I've spent some time communing with the Lean documentation and have had more time to think about your questions. I'll answer in two parts, first about lambda-Pi, and second where I think Ghilbert has the opportunity to do things Lean misses.

The question of how lambda-Pi relates to Ghilbert breaks down to me into five sub-questions. I'll start with the one I consider a clear win.

1. Should Ghilbert proofs translate cleanly into lambda-Pi?

As I suggested, I'm sold on this one. I think it makes sense to even define valid Ghilbert proofs in terms of their translation to a lambda-Pi term. If a proof is accepted but it doesn't translate, that's a bug. If the resulting lambda-Pi term doesn't type-check, that's a bug. Actually producing the translation is a great way to keep the implementation honest.

There are other real benefits. I'd like to think that the Ghilbert proof library will grow to the point where it's useful to translate its theorems and import them into other systems. I'll probably start with the translation into Lean, but the star of dependently typed languages is rising, so Idris could be interesting, and possibly future versions of Haskell, etc.

I think there are other benefits in terms of reasoning. I've especially struggled with keeping definitions sound. The idea that a sound definition just translates to a closed term that type-checks is extremely appealing.
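To make the translation concrete, here is a sketch of the target in Lean 3 style syntax (the names wff, proof, imp, ax_mp are made up for the example, not a committed design): an axiom scheme becomes a constant of Pi type, and a proof becomes a closed term the kernel can type-check.

constant wff : Type
constant proof : wff → Type
constant imp : wff → wff → wff

-- an axiom scheme such as modus ponens becomes a constant of Pi type
constant ax_mp : Π (p q : wff), proof p → proof (imp p q) → proof q

-- a theorem's proof becomes a closed term; if it doesn't type-check,
-- the Ghilbert checker has a bug. A trivial example:
def double_mp (p q : wff) (h1 : proof p)
    (h2 : proof (imp p (imp p q))) : proof q :=
ax_mp p q h1 (ax_mp p (imp p q) h1 h2)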

2. Should hypotheses and conclusions in Ghilbert proofs be closed terms wrt binding variables?

This is where I struggle, and am leaning "no". My experience with the MM->Lean sketch a couple weeks ago was that manually adding lambdas was really tedious and hard to reason about. I can see it driving potential new users away. My gut feeling here is that adding lambdas to close terms is something that can and should be automated, and that the sweet spot for this automation is to start with something that looks like the source language for the ε translation presented in section 4.1 of the LF paper.

I'm open to the possibility there's a good way to do this, but would need to see something. So probably not.

3. Should Ghilbert be "adequate" wrt lambda-Pi?

In other words, for every lambda-Pi term should it be possible to express it in Ghilbert so that the original term results from translation, obviously modulo the usual lambda-calculus equivalences?

The answer here is, I'd like to if it's cleanly possible. Such a feature would obviously be pretty compelling. But I don't know enough yet to say either way, so it feels worth exploring. Probably one good way to explore it would be to write a unification-based lambda-Pi checker and see how it stacks up compared to the checker I just wrote. [Btw, that one feels pretty good. I have a patch locally that fixes most of the problems in the non-binding fragment, and can see how to add simple lambda binding without too much difficulty]

4. Should natural deduction be expressible?

This would seem to follow fairly directly from (3), as the LF paper and subsequent experience clearly demonstrate that natural deduction is expressible in lambda-Pi. But I'm absolutely not sold on its importance, as it seems to me that the experience of Metamath demonstrates equally clearly that natural deduction is not required for concise proofs.

This is probably the most subjective question on the list: it's the biggest single thing that would increase the rift with Metamath, but on the other hand people who have used natural deduction systems might well look at pure Hilbert deduction as primitive and barbaric.

Plus, if I did this, I'd probably have to change the name to "Gnatural."

5. Should Ghilbert grow types?

My gut feeling here is yes, it needs to. Metamath gets away without them, but even within Peano I can see how we need to express a number hierarchy (certainly up to rationals), finite lists, etc. In Z2 it's even stronger, as you'll at a minimum have two different ranges for quantifiers. I don't think my original plan of namespaces and kinds is going to cut it.

Types are obviously hard. We'll need subtypes in some form or other, to express the number hierarchy, but subtypes don't play well with other features.

If we accept (1), then it's very natural that the types should be based on lambda-Pi, because they'll need to translate into that. Also, we have the example of Lean, showing that we can express subtyping-style relations using typeclasses.
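To give a flavor of what I mean (an illustrative Lean-style sketch with invented names, not a design), the number hierarchy could be a chain of explicit embeddings mediated by a typeclass, in the spirit of Lean's coercions, rather than literal subtypes:

-- "A embeds in B", carrying the injection as data
class embeds (A B : Type) :=
(inj : A → B)

constants nat' int' rat' : Type
constant nat_to_int : nat' → int'
constant int_to_rat : int' → rat'

instance : embeds nat' int' := ⟨nat_to_int⟩
instance : embeds int' rat' := ⟨int_to_rat⟩

-- a lemma stated over int' can then be applied to a nat' via the injection
example (n : nat') : int' := embeds.inj n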

So my feeling here is that I'd like to do them, but I want to do so carefully. A major goal for my retreat this fall is to understand type theory well enough to make good decisions about how to incorporate them into Ghilbert.

Ok, onto the second set of thoughts, areas where I think Ghilbert can go farther than Lean.

1. A major goal of Ghilbert is publishing.

I now see Norm's "mmpegif" as visionary, using a simple static tool to produce a good Web page for every theorem in the library. I have ideas (some based on experience from Ghilbert 1, some new) to generate really good Web pages. I have in mind high-quality typesetting (starting with MathJax but then refining it further), folding/unfolding to hide subproofs (thanks Paul for that in the Gh1 prototype!), and better JS interactivity. I also think my new proof format (with lines and result terms) will lend itself to better presentation of proofs. But all this is a fairly conservative extension of what's already been done.

An equally important part of the original Ghilbert vision is inviting contributions. I planned on making a web-based IDE, but that was too ambitious. Now I plan to outsource the collaboration part to GitHub, which can take care of the really hard parts: managing accounts, providing a flow for commenting on and approving proposed changes, etc. Also, we can just use stock CI for checking the validity of changes. I think this is a low enough barrier to entry that I don't see myself coming back to the hand-rolled Git server any time soon. And I think a plugin for a local editor is going to be a better experience than a web-based IDE for a while. [That said, I am not ruling out web-based tools in the long run, and am looking at wasm as the way to get there.]

You obviously could build a publishing tool on top of Lean (and what they've done with their books is impressive, especially live editing), but it seems a pretty different direction.

2. Ghilbert should be very good at "trivial proofs".

All theorem provers have some relationship with computation, and some way of automating trivial proofs that the result of evaluating some expression is some reduced term. In Lean, the main such feature is "eval", but it seems to be quite limited in terms of large computations, even "eval (2227 + 9999) * 33" (an example from the book) causes the prover to blow up. Thus, Lean has "vm_eval" which is a reasonably efficient bytecode interpreter for its language, which overlaps greatly with the Calculus of Constructions that underlies its logic.

I think "vm_eval" makes sense for what Lean is trying to do, but there's a lot I find unsatisfying about it. The main thing is that there isn't a pure lambda-Pi term representing the result, so among other things you can't port it to other lambda-Pi based systems.

I'd like to develop a very robust evaluator for Ghilbert, the output of which is optimized traces. The proof of the above math expression should be about a couple hundred bytes; just the addition is "add1 (add1 (add1 2adc9 2adc9) 2adc9) 7add9" where I've defined add1 as (with a;b = 10*a+b):

axiom add1 (a+b+1 = c) (d+e = 1;f): (a;d + b;e = c;f);
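To spell out a single instance of that scheme (just arithmetic, using the a;b = 10*a+b reading above): take a=2, b=9, c=12, d=7, e=9, f=6. The hypotheses are 2+9+1 = 12 and 7+9 = 1;6, and the conclusion is 2;7 + 9;9 = 12;6, that is, 27 + 99 = 126, with the carry from the low digits feeding the "+1" on the high digits. Chaining such steps from the least significant digits up is what keeps the whole trace for 2227 + 9999 = 12226 down to a handful of axiom applications.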

In the usual paradigm of theorem provers, many people would think of optimized traces as silly. Why bother producing an artifact at all, when you can either have a proved-correct interpreter, or an interpreter that generates a trace to be consumed immediately (as in the LCF tradition)? I think the big advantage is interoperability, a direct consequence of the looser coupling between the module that generates the trace and the module that consumes it. I can take my trace and translate it into Lean or Idris or whatnot, and it will just work. And I think the cost of storing the trace explicitly is manageable; we routinely deal with media in the gigabytes.

I think optimized traces are intellectually interesting as well. The underlying "machine" is nondeterministic at heart, for example a witness that a number is composite is just the factors, and the trace needn't capture any of the grinding search. More interesting, tools such as Primo generate "primality certificates". I'd really love to prove the justifying theorems for ECPP (Elliptic Curve Primality Proving) and import those into Ghilbert.

The new unification-based checker should work even better than the Gh1 prototype for this, as it will facilitate incremental state updates by doing sparse manipulations in a tree, much like persistent data structures in a purely functional language.

3. Translation in general.

Not just trivial theorems, but carefully handcrafted proofs, should be portable. There is progress on this in the outside world (OpenTheory in the HOL space and Dedukti from Inria), but I think there's room for Ghilbert to push interoperability quite a bit further.

4. Other logics.

This is basically the "little theories" project. I keep seeing evidence that logics beyond first-order arithmetic or set theory are useful and relevant. The "Chalk" logic programming fragment from the Rust world uses a type of modal logic. Separation logic is increasingly used for verifying programs that mutate a store.

LF has some problems with these other logics, as it assumes the underlying consequence relation must satisfy weakening and contraction. There are ways to deal with this, but I think it boils down to using Hilbert-style deduction. Ghilbert should hopefully be pretty good at this.

5. Rust.

The choice of language does matter, and for a variety of reasons I think Rust is ideal for my goals. In fact, I think one of the reasons I felt stuck earlier is not being able to decide on the right language.

I don't want to blame my tools too much (poor craftsman and all that), but looking back I see that Python was something of a mixed bag. It was excellent for my early work, with mm_verify.py demonstrating that a Metamath checker could be done in just a few hundred lines of code. But I think it also led me astray a bit, in particular toward representing terms in the "natural" s-expression encoding of lists and strings. It's now extremely clear to me that verification requires a graph representation and unification as a core algorithm, and Python doesn't particularly help with those. By contrast, in Rust I had a choice of three union-find implementations on crates.io. I'm also really happy about Rust's performance.
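For a sense of scale, the data structure in question is tiny; a from-scratch sketch (illustrative, not one of the crates) looks like this:

// Union-find with path compression: nodes are indices into a term graph,
// and unifying two metavariable nodes merges their equivalence classes.
struct UnionFind {
    parent: Vec<usize>,
}

impl UnionFind {
    fn new(n: usize) -> Self {
        UnionFind { parent: (0..n).collect() }
    }

    // Representative of x's class, flattening the path as we go.
    fn find(&mut self, x: usize) -> usize {
        let p = self.parent[x];
        if p == x {
            return x;
        }
        let root = self.find(p);
        self.parent[x] = root;
        root
    }

    // Merge the classes containing x and y.
    fn union(&mut self, x: usize, y: usize) {
        let rx = self.find(x);
        let ry = self.find(y);
        self.parent[rx] = ry;
    }
}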

As I build more stuff (typesetting, evaluator, interactive editor plug-in), Rust feels like a good fit for all of these.

As I touched on above, I expect Rust to compile nicely into wasm, so that the tool can be used in a web-based environment.

Lastly, the Rust community is amazing, and I think there's special value in being one of the leading X projects written in Rust, for pretty much any value of X. I'm happy to apply the specialization [Theorem Checker/X] here.

Hope you found all this interesting,

Raph


Cris Perdue

Jun 19, 2017, 6:47:19 PM
to ghil...@googlegroups.com
On Sun, Jun 18, 2017 at 5:13 PM, Raph Levien <raph....@gmail.com> wrote:

4. Should natural deduction be expressible?

 
I believe you may wish to break this question down into components. And of course you need to consider carefully what your goals for your system are to be.

FWIW, it seems to me that today most people are relatively flexible about the "style" of system they use, if it helps them get where they want to go.

Different people seem to have somewhat different takes about what makes a "natural deduction" system.  I perceive the following as being, to significant degrees, associated in people's minds with natural deduction:

1. "Gentzen-style" systems using sequents with exactly one RHS term / consequent.

2. Systems with introduction and elimination rules, also "Gentzen-style".

3. Systems with relatively numerous rules of inference and potentially no axioms. 

For example, in Peter Andrews' textbook ("To Truth Through Proof", 2nd edition 2002) he presents a FOL Natural Deduction system with these comments:

Proofs of theorems of F (which is known as a Hilbert-style system) are rarely written out in full, since they tend to be long, awkward, and unpleasant to read. In practice, one uses proofs from hypotheses and derived rules of inference of F. In this section we introduce a system N of natural deduction, in which it is possible to give fairly natural and well structured proofs, and express the forms of arguments which arise in mathematical practice. Of course, the rules of inference of N are all derived rules of inference of F, and this section may be regarded simply as a summary of those derived rules of inference of F which are most useful for writing our proofs.

He describes the system "N" as having no axioms, though it seems to me he could equally well claim it has all tautologies as axioms.

4. Presentations of proof steps in which a set of assumptions is elided / implicit. This is characteristic, I think, of proving theorems of the form A1 & A2 & ... & An => C, where it is convenient to carry along assumptions through the proof.

Presumably you don't want repetitions of the assumptions to distract the reader.




Raph Levien

Jun 20, 2017, 10:37:14 AM
to ghil...@googlegroups.com
Cris: thanks for the feedback.

By "natural deduction" I mean very narrowly the ability to bind hypothesis occurrences to variables within a proof (the ξ variables of section 4.1 of the LF paper), combined with an axiom that identifies function types (non-dependent Π types) with implication. Or, to be slightly more precise, Lean identifies these concepts, while the LF paper provides axioms such as IMP-I that allow interconversion.

I had thought that the ability to encode arbitrary λΠ expressions would entail natural deduction, but now that I think about it a bit more, it's really the treatment of implication as an arrow type in the logic's axioms.
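To be concrete about that interconversion, here is an LF-style sketch in Lean syntax (constants invented for the illustration):

constant wff : Type
constant proof : wff → Type
constant imp : wff → wff → wff

-- IMP-I / IMP-E style interconversion between the arrow type and implication
constant imp_i : Π (p q : wff), (proof p → proof q) → proof (imp p q)
constant imp_e : Π (p q : wff), proof (imp p q) → proof p → proof q

With imp_i available, a derivation d of q from a hypothesis variable h : proof p becomes imp_i p q (λ h, d), which is exactly the deduction-theorem move; using such a lemma later amounts to a β-reduction.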

Regarding the meme that Hilbert style proofs are long and awkward, I have some thoughts. I absolutely agree that raw Hilbert proofs are long, in that each line is an instance of an axiom or the application of a primitive inference rule (generally one of modus ponens or generalization in FOL). The typical presentation of logic relies on three metatheorems to make proofs more palatable:

1. The deduction theorem.

2. Derived rules. A proof can be written as a scheme, with variables representing terms. Any (well-formed) instantiation of those variables is a proof, assuming the hypotheses have been proved. Thus it can be "pasted in" to a Hilbert style proof.

3. Definitions. The language of terms can be extended with a new expression that takes n arguments, expanding to a term that can reference those arguments but is otherwise closed. Terms containing the definiendum are equivalent to those containing the definiens, so any two proofs that contain these as subterms are equally valid.

Additional metatheorems are common, for example Margaris treats set.mm's ax-11 as a metatheorem that can be proved by structural induction over a well-formed formula. But I'm only focusing on the above 3.

Also, I have a suspicion that logic texts focus more attention on the deduction theorem than the latter two metatheorems because it contains actual mathematical content; the other two are mere formalities of notation.

Note the pattern: all three of these metatheorems have the same nature, "use a name to refer to some subcomponent of a proof, rather than writing it out in full." They also, not so coincidentally, are all three expressed in LF as β-reduction in the λΠ encoding of the proof.

Metamath can be seen as going all-in on the second metatheorem above (it's the source of the name, in fact). In practice, it also relies heavily on the third, but the C implementation doesn't check the soundness of definitions (and note the recent thread where a user got confused about soundness). Soundness of definitions is checked in mmj2, however, so I would say it's now firmly part of the Metamath ecosystem.

So, my question is: given the last two metatheorems, is the characterization of Hilbert proofs as "long, awkward, and unpleasant to read" still valid? I would say that Metamath gives pretty compelling evidence of "no." So I'm strongly considering building the metatheory of Ghilbert with just the latter two.

To these I'll add additional metatheorems regarding modularity. Note that doing a well-formed instantiation of a module and pasting it in has the same nature as the above 3 metatheorems, just at a much larger granularity. So at the least I expect to use a similar mechanism (β-reduction) to justify it. (Also note that the original Ghilbert has such a feature in the form of import/export commands, but as discussed I'm not happy with its usability).

A couple more notes on this thread. First, regarding possibility (3) in the original message, I think that's already possible just by introducing appropriate axioms, see peano/naive_set.ghi as an example (also see peano_new/set_min.ghi which has had a bit more development). Basically, {x | φ} corresponds to lambda-abstraction, A ∈ X to application. It's interesting then that ∀x φ is equivalent to {x | φ} = ω, and ∃x φ to {x | φ} ≠ ∅. Then you can write ax-5 quantifier-free as: X ⊆ Y → X = ω → Y = ω (a straightforward consequence of ssv in Metamath). The abstraction itself is easily justified as purely syntactical, but other properties, particularly extensionality and descriptions vs choice (see https://plato.stanford.edu/entries/type-theory-church/), would seem to require real axioms.

Lastly, the prototype has seen good progress, with the no-bound-variables fragment working (you just can't use dummies as subterms in result lines, they're all or nothing). I see a pretty clear path to adding bound variables (with semantics very similar to existing Ghilbert), so think I will do that next, even if I end up ripping those out and replacing them with a purer λΠ approach later.

Raph

Norman Megill

Jun 20, 2017, 11:29:35 AM
to Ghilbert
Mario Carneiro has described how to exploit wff metavariables to effectively accomplish the deduction theorem in set.mm:

http://us.metamath.org/downloads/natded.pdf
http://us.metamath.org/downloads/natded.mp3 (audio)

Essentially, we convert each conjunct in the antecedent to a hypothesis prefixed with "phi ->", and we replace the conjunction of antecedents in the conclusion with "phi".  In the proof, we reference these hypotheses rather than tediously extracting and reordering antecedent conjuncts to match the referenced theorem.  Most new proofs in set.mm now use this technique, with a significant savings in proof size as well as much better readability.  Yet strictly speaking it is still a Hilbert-style system, with no changes to the Metamath language nor the set.mm syntax and axiomatization.
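For instance, schematically: a natural-deduction step "from these assumptions infer ch" is rendered as the statement ⊢ (ph -> ch), with ph standing in for the conjunction of the assumptions; each available assumption is a hypothesis of the form ⊢ (ph -> ps), and inference rules are applied in their deduction form, e.g. from ⊢ (ph -> ps) and ⊢ (ph -> ch) one concludes ⊢ (ph -> (ps /\ ch)).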

Norm

Raph Levien

Jun 21, 2017, 9:18:29 AM
to ghil...@googlegroups.com
That's very interesting, and further evidence in favor of not needing to encode the deduction theorem into the metalogic.

Let me make sure I understand: from a purely technical point of view, this technique is for closed terms, i.e. it cannot be used to do forall introduction and elimination in a natural deduction way?

Raph



Norman Megill

Jun 21, 2017, 3:54:14 PM
to Ghilbert, meta...@googlegroups.com
On Wednesday, June 21, 2017 at 9:18:29 AM UTC-4, Raph wrote:
That's very interesting, and further evidence in favor of not needing to encode the deduction theorem into the metalogic.

Let me make sure I understand: from a purely technical point of view, this technique is for closed terms, i.e. it cannot be used to do forall introduction and elimination in a natural deduction way?

No, there is no requirement for any terms to be closed (if I understand you correctly).  To introduce forall x, you apply alrimi.  Of course x must not be free in the phi antecedent (as is required by alrimi), but this is exactly the natural deduction requirement that x not be free in any of the hypotheses.

David Wheeler and Mario put together a list of the set.mm theorems corresponding to the natural deduction rules here:

http://us.metamath.org/mpegif/natded.html

and there is a general write-up by David here:

http://us.metamath.org/mpegif/mmnatded.html

(In my previous post I said Mario "described" this method; to be clear, he is also its inventor.)

Norm
 