LF (lambda-pi) explorations


Raph Levien

Jul 13, 2017, 1:38:37 AM
to ghil...@googlegroups.com
I've been coding a prototype based on the lambda-pi calculus, more or less a straightforward implementation of the LF paper, and it's far enough along now that I think it's worth sharing and discussing:

https://github.com/raphlinus/ghilbert/tree/lf


To be clear about what's there: it's very much a pure _checker_ for lambda-pi; this version has no concept of a dummy subterm to be filled in through unification. In addition, it (for now) leaves out parallel beta reduction (Table 3 of the LF paper).

Here's a brief summary of what I've learned so far.

1. This looks doable and within the complexity budget I have in mind.

2. I'd thought of LF as necessarily being based on natural deduction, but that's entirely a choice in how the axioms are formulated. In my test.ghlf file I've not done that, and it looks very similar to the current (non-LF) Ghilbert prototype (which in turn is close enough to Metamath that I don't expect any serious problems with interconversion).

3. I was originally hopeful that I'd be able to do definitions using essentially the same mechanism as theorems, but it's not so simple. For a theorem, you can throw away the RHS after checking its type, and just record the type in the context. However, if you were to do that with, say, "def or : \ x : prop. \ y : prop. -. x => y;", that would just put "prop -> prop -> prop" in the context, which is not very useful. To make definitions actually work, I believe you have to extend the notion of type equality to handle definition expansion.
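
To make that concrete, here is a minimal sketch of what type equality modulo definition expansion could look like, in Rust since that's what the prototype is written in. None of these names are from the actual codebase; it's just the shape of the idea:

    // Minimal sketch, not the prototype's actual types: terms with
    // defined constants, and equality that unfolds definitions.
    // Assumes definitions are non-recursive, as they would be here.
    use std::collections::HashMap;

    #[derive(Clone, PartialEq)]
    enum Term {
        Const(String),               // a constant, possibly defined
        App(Box<Term>, Box<Term>),
        // lambdas, pis, and variables elided in this sketch
    }

    struct Ctx {
        defs: HashMap<String, Term>, // definition name -> RHS
    }

    // Equality up to definition expansion: if the terms don't match
    // syntactically, unfold a defined head constant and retry.
    fn types_equal(ctx: &Ctx, a: &Term, b: &Term) -> bool {
        if a == b {
            return true;
        }
        if let Some(a2) = unfold_head(ctx, a) {
            return types_equal(ctx, &a2, b);
        }
        if let Some(b2) = unfold_head(ctx, b) {
            return types_equal(ctx, a, &b2);
        }
        false
    }

    // Replace the head constant of an application spine by its RHS,
    // if it names a definition recorded in the context.
    fn unfold_head(ctx: &Ctx, t: &Term) -> Option<Term> {
        match t {
            Term::Const(name) => ctx.defs.get(name).cloned(),
            Term::App(f, arg) => unfold_head(ctx, f).map(|f2| {
                Term::App(Box::new(f2), Box::new((**arg).clone()))
            }),
        }
    }

Note that unfolding an applied definition like "or x y" exposes a redex, so in practice this also needs the beta step discussed further down the thread.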

The next step would definitely be to add _ to terms and fill them in automatically using some form of unification during checking (and to have a way to make arguments optional, automatically adding the dummies). This is standard in the proof world, but I don't see a clear path to choosing an appropriate unification algorithm. David Pym's PhD thesis presents an algorithm based on Huet's in Chapter 5. However, that algorithm looks complex enough that I'm not really looking forward to implementing it, and it's also only semi-decidable, which seems to me to go against the philosophy that Ghilbert inherits from Metamath: it should always be a straightforward question to verify a proof.

It might be worth exploring a much weaker form of unification, but it feels like thin ice theoretically.
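
For a sense of what "much weaker" could mean, here is a sketch of plain first-order unification over terms with holes: holes stand for whole subterms only, never for functions applied to arguments, so beta-reduction never enters the picture. Again hypothetical Rust, not anything in the repo:

    use std::collections::HashMap;

    #[derive(Clone, PartialEq)]
    enum Term {
        Var(String),               // variable or constant
        Hole(u32),                 // a `_` to be solved
        App(Box<Term>, Box<Term>),
    }

    type Subst = HashMap<u32, Term>;

    // Chase the substitution at the head, so solved holes are transparent.
    fn resolve(s: &Subst, t: &Term) -> Term {
        match t {
            Term::Hole(h) => match s.get(h) {
                Some(t2) => resolve(s, t2),
                None => t.clone(),
            },
            _ => t.clone(),
        }
    }

    // First-order unification: always decidable, with most general
    // unifiers, precisely because holes never get applied.
    fn unify(s: &mut Subst, a: &Term, b: &Term) -> bool {
        let (a, b) = (resolve(s, a), resolve(s, b));
        match (&a, &b) {
            (Term::Hole(h1), Term::Hole(h2)) if h1 == h2 => true,
            (Term::Hole(h), t) | (t, Term::Hole(h)) => {
                if occurs(s, *h, t) {
                    return false; // occurs check: no cyclic solutions
                }
                s.insert(*h, t.clone());
                true
            }
            (Term::Var(x), Term::Var(y)) => x == y,
            (Term::App(f1, a1), Term::App(f2, a2)) => {
                unify(s, f1, f2) && unify(s, a1, a2)
            }
            _ => false,
        }
    }

    fn occurs(s: &Subst, h: u32, t: &Term) -> bool {
        match resolve(s, t) {
            Term::Hole(h2) => h == h2,
            Term::App(f, a) => occurs(s, h, &f) || occurs(s, h, &a),
            Term::Var(_) => false,
        }
    }

The theoretical thin ice shows up as soon as binders are involved: deciding when a hole under a lambda may mention the bound variable is exactly where this stops being simple (that's the territory of Miller's pattern unification).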

I'm now seeing fairly clear pros and cons to the LF approach:

Pros:

* Can implement natural deduction easily
* Can support richer types directly
* Lower impedance mismatch with DTT systems such as Lean and Idris

Cons:

* Unification looks tricky
* Higher impedance mismatch with Metamath
* Overall system will probably be more complex

Right now, I'm leaning away from adopting this approach, but I want that decision to be based on a careful consideration of the alternatives. Another reason to keep exploring is to see whether I can cherry-pick things from the LF-based design. In particular, I want to explore whether it makes sense to assign richer types to variables than just a name introduced with a "kind" command (i.e. the same as $f in Metamath).

Raph

Adam Bliss

Jul 13, 2017, 11:03:20 AM
to ghil...@googlegroups.com
On Thu, Jul 13, 2017 at 1:38 AM, Raph Levien <raph....@gmail.com> wrote:
> I've been coding a prototype based on the lambda-pi calculus, more or less a straightforward implementation of the LF paper, and it's far enough along now that I think it's worth sharing and discussing:
>
> https://github.com/raphlinus/ghilbert/tree/lf


Nifty!
 
I've been playing more with Lean and really enjoying it--despite the unicode (which actually hasn't bothered me as much as I'd thought it would), the "natural" deduction presentation, and the emphasis on the "constructive" fragment of propositional logic. I still prefer Hilbert-style proofs, but the way they've integrated natural deduction with the "Calculus of Inductive Constructions" is really quite neat. 

I'm surprised to hear you say that unification may be a sticking point. I was guessing (based on an uninformed hunch) that type unification would not be much harder than term unification. I guess I should sit down and try to implement it in order to understand what's really hard about it (and I'll check out Pym's thesis, thanks).

Can you expand on your theoretical worries about "weaker" unification? What if the unification went in a "proof assistant" layer instead of the "verifier" layer? As long as there's some place to store the computed types which isn't too cumbersome, the verifier can still be quite simple, and the unification can be weak or strong or semidecidable or "assisted" (I'm thinking about the conflict-resolver in aptitude) and it shouldn't really matter to the design of the verifier.

(I also do like the idea of a middle ground where [meta]variables can have some polymorphism, and theorems can be applied using something like Lean's typeclasses, but perhaps without the full-blown lambda-Pi type jungle. But I don't have any clear vision of what that would look like.)
 
--Adam

Raph Levien

Jul 13, 2017, 4:52:48 PM
to ghil...@googlegroups.com
Quick notes; I'm still wrapping my head around some of these questions.

My intuition is that higher-order unification is hard because it can go through beta-reduction. This is not a serious problem for Metamath-style proofs (including the Ghilbert Rust master prototype), because they use beta reduction only in a highly constrained way. In fact, looking at the LF paper, it's really using the B-APP rules (which are still substitutions) and not the R-BETA ones, which is why I could get away with not implementing the latter at all and still be able to prove some nontrivial stuff.
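
The standard textbook illustration of the problem (not specific to LF): ask a unifier to solve

    F a  =  g a a

where F is a metavariable. Once beta-reduction is in play there are four distinct solutions, none an instance of another, so no most general unifier exists:

    F := \ x. g x x
    F := \ x. g a x
    F := \ x. g x a
    F := \ x. g a a

A Metamath-style substitution never faces this choice, because its metavariables are only ever replaced by terms, never applied to arguments.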

[Incidentally, adding beta reduction in that codebase shouldn't be hard: you just add a subst_reduce method to Term, which is the same as subst except that it applies beta-reduction if you get an application not in normal form.]
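
Roughly like this, with a toy named-variable Term rather than the repo's actual one, and alpha-renaming elided:

    #[derive(Clone)]
    enum Term {
        Var(String),
        Lam(String, Box<Term>),
        App(Box<Term>, Box<Term>),
    }

    impl Term {
        // Like ordinary capture-avoiding substitution (alpha-renaming
        // elided here for brevity), except that if substituting into an
        // application leaves a lambda in head position, the redex is
        // contracted on the spot.
        fn subst_reduce(&self, x: &str, arg: &Term) -> Term {
            match self {
                Term::Var(y) if y == x => arg.clone(),
                Term::Var(_) => self.clone(),
                Term::Lam(y, _) if y == x => self.clone(), // x shadowed
                Term::Lam(y, body) => Term::Lam(
                    y.clone(),
                    Box::new(body.subst_reduce(x, arg)),
                ),
                Term::App(f, a) => {
                    let f2 = f.subst_reduce(x, arg);
                    let a2 = a.subst_reduce(x, arg);
                    match f2 {
                        // Application not in normal form: beta-reduce.
                        Term::Lam(y, body) => body.subst_reduce(&y, &a2),
                        _ => Term::App(Box::new(f2), Box::new(a2)),
                    }
                }
            }
        }
    }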

You absolutely can do fancier (and potentially not even decidable) type inference in an assistant, and include fully elaborated types in the proof text checked by the verifier. In fact, that seems to be the standard approach for people trying to minimize the trusted kernel. But this is not at all what I'm trying to accomplish with Ghilbert; I want the proof text to be readable and easy to manipulate, even with minimal tooling. I believe that the gh-rust-master prototype is currently the local minimum to accomplish that goal. [I have more to write about _why_ I think this is so important, and this will be an important part of the public documentation, but I'll save that for now]

Thinking about this just a little more, I'm starting to think that "unification" isn't exactly the problem I should be trying to solve here, though it is closely related. What a type-centric system really needs in order to be practical is type inference. And there we have an excellent literature with lots of useful recent advances, especially around bidirectional type checking. I believe the go-to paper is "Complete and Easy Bidirectional Typechecking for Higher-Rank Polymorphism" by Dunfield and Krishnaswami. [I already have this printed out; I was going to use it for my toy language.] The other reason to be encouraged about bidirectional type checking is how well it handles subtyping, and at the back of my mind I feel I'm going to need that to handle the numeric hierarchy smoothly. Also see "Bidirectional Typing Rules: A Tutorial" by David Christiansen, which was recommended to me as the best introduction to the topic.
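
To give a flavor of the bidirectional split, here is a toy checker for simple types, in the spirit of those references but not taken from any of them: "infer" synthesizes a type from a term, "check" pushes an expected type into it, and unannotated lambdas are check-only.

    use std::collections::HashMap;

    #[derive(Clone, PartialEq)]
    enum Ty {
        Base(String),
        Arrow(Box<Ty>, Box<Ty>),
    }

    enum Expr {
        Var(String),
        Lam(String, Box<Expr>),    // unannotated lambda: check-only
        App(Box<Expr>, Box<Expr>),
        Ann(Box<Expr>, Ty),        // annotation: restores synthesis
    }

    type Env = HashMap<String, Ty>;

    // Synthesis: compute a type from the shape of the term.
    fn infer(env: &Env, e: &Expr) -> Option<Ty> {
        match e {
            Expr::Var(x) => env.get(x).cloned(),
            Expr::Ann(e, t) => check(env, e, t).then(|| t.clone()),
            Expr::App(f, a) => match infer(env, f)? {
                Ty::Arrow(dom, cod) => check(env, a, &dom).then(|| *cod),
                _ => None,
            },
            // A bare lambda can't be synthesized; it must be checked
            // against an arrow type (hence annotations on definitions).
            Expr::Lam(_, _) => None,
        }
    }

    // Checking: push an expected type into the term.
    fn check(env: &Env, e: &Expr, want: &Ty) -> bool {
        match (e, want) {
            (Expr::Lam(x, body), Ty::Arrow(dom, cod)) => {
                let mut env2 = env.clone();
                env2.insert(x.clone(), (**dom).clone());
                check(&env2, body, cod)
            }
            // Mode switch: synthesize, then compare.
            _ => infer(env, e).map_or(false, |got| &got == want),
        }
    }

The point of the discipline is that annotations are needed exactly where synthesis fails, which per the next paragraph is at redexes.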

Frank Pfenning writes in some lecture notes on bidirectional type checking: "One reason this seems to work reasonably well in practice [is] that code rarely contains explicit redexes. Programmers instead tend to turn them into definitions, which then need to be annotated." Bingo. I think a Metamath-style proof will have basically no redexes at all (a proof term is just a chain of applications of named axioms and theorems, so the only lambdas appear at the top of definitions, where the binder annotations supply the types), while of course an arbitrary term in the lambda-pi calculus can have as many redexes as you like. So I think there are two ways to handle this discrepancy: you can either constrain the language (possibly making things worse for those times when you do want a redex), or you can allow redexes but give a weaker guarantee of how much help you'll get from automation (in exchange for making the automation tractable). Both seem quite reasonable, and I'm going to explore the latter.

I also think that middle ground (Metamath-like metatheory + typeclasses of some kind) is worth exploring. I don't have a very clear picture of what it would look like either, but am (perhaps foolishly) not that scared.

Raph




