I've been coding a prototype based on lambda-pi calculus, more or less a straightforward implementation of the LF paper, and it's far enough along now that I think it's worth sharing and discussing:
To be clear about what's there: it's very much a pure _checker_ for lambda-pi, and this version doesn't have any concept of a dummy subterm that would be filled in through unification. In addition, it (for now) leaves out the parallel beta reduction (table 3 of the LF paper).
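To make the scope concrete, here's roughly what the core of such a checker amounts to. This is my own minimal Python sketch for illustration, not the prototype's actual code; all the names (`Var`, `Const`, `App`, `Lam`, `Pi`, `Type`, `subst`, `whnf`, `conv`, `infer`) are mine, and it cheats by assuming bound-variable names are globally distinct so substitution needn't rename.

```python
from dataclasses import dataclass

# Term language of lambda-pi: variables, declared constants,
# application, lambda, dependent function types, and the kind Type.
@dataclass(frozen=True)
class Type: pass                # the kind "Type"
@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Const: name: str          # a constant declared in the signature
@dataclass(frozen=True)
class App: fn: object; arg: object
@dataclass(frozen=True)
class Lam: var: str; ty: object; body: object
@dataclass(frozen=True)
class Pi: var: str; ty: object; body: object

def subst(t, x, v):
    """Replace Var(x) by v in t (no capture-avoidance; see note above)."""
    if isinstance(t, Var):
        return v if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fn, x, v), subst(t.arg, x, v))
    if isinstance(t, (Lam, Pi)):
        body = t.body if t.var == x else subst(t.body, x, v)
        return type(t)(t.var, subst(t.ty, x, v), body)
    return t                    # Const, Type

def whnf(t):
    """Weak head normal form: repeated beta steps at the head only."""
    while isinstance(t, App):
        fn = whnf(t.fn)
        if isinstance(fn, Lam):
            t = subst(fn.body, fn.var, t.arg)
        else:
            return App(fn, t.arg)
    return t

def conv(t, u):
    """Type equality = alpha/beta convertibility (no definitions yet)."""
    t, u = whnf(t), whnf(u)
    if type(t) is not type(u):
        return False
    if isinstance(t, (Var, Const)):
        return t.name == u.name
    if isinstance(t, App):
        return conv(t.fn, u.fn) and conv(t.arg, u.arg)
    if isinstance(t, (Lam, Pi)):
        return (conv(t.ty, u.ty)
                and conv(t.body, subst(u.body, u.var, Var(t.var))))
    return True                 # Type

def infer(ctx, t):
    """ctx maps variable/constant names to their types.  Kind-level
    checking (that the types themselves are well-formed) is elided."""
    if isinstance(t, (Var, Const)):
        return ctx[t.name]
    if isinstance(t, App):
        fn_ty = whnf(infer(ctx, t.fn))
        assert isinstance(fn_ty, Pi), 'applying a non-function'
        assert conv(infer(ctx, t.arg), fn_ty.ty), 'argument type mismatch'
        return subst(fn_ty.body, fn_ty.var, t.arg)
    if isinstance(t, Lam):
        return Pi(t.var, t.ty, infer({**ctx, t.var: t.ty}, t.body))
    if isinstance(t, Pi):
        infer({**ctx, t.var: t.ty}, t.body)   # body must be a type/kind
        return Type()
    return Type()
```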
Here's a brief summary of what I've learned so far.
1. This looks doable and within the complexity budget I have in mind.
2. I had thought of LF as necessarily being based on natural deduction, but that's entirely a choice in how the axioms are formulated. In my test.ghlf file, I haven't done that, and it looks very similar to the current (non-LF) Ghilbert prototype (which in turn is close enough to Metamath that I don't expect any serious problems with interconversion). The first sketch after this list shows what a Hilbert-style encoding looks like.
3. I was originally hopeful that I'd be able to do definitions using essentially the same mechanism as theorems, but it's not so simple. For a theorem, you can throw away the RHS after checking its type, and just record the type in the context. However, if you were to do that with, say, "def or : \ x : prop. \ y : prop. -. x => y;", that would just put "prop -> prop -> prop" in the context, which is not very useful. To make definitions actually work, I believe you have to extend the notion of type equality to handle definition expansion; the second sketch after this list shows one possible shape.
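To illustrate point 2: here is a Hilbert-style signature expressed with the sketch's AST above. The names (`prop`, `pf`, `imp`, `ax1`, `mp`) and the encoding are illustrative, not the actual contents of test.ghlf; the trick is the standard judgments-as-types one, where `pf p` is the type of proofs of `p`, so axiom schemes and modus ponens are just constants rather than natural-deduction rules.

```python
def arrow(a, b):                # non-dependent A -> B as a degenerate Pi
    return Pi('_', a, b)

def imp(p, q):                  # object-level implication p => q
    return App(App(Const('imp'), p), q)

def pf(p):                      # "pf p" = the type of proofs of p
    return App(Const('pf'), p)

# Hypothetical Hilbert-style signature (illustrative, not test.ghlf):
sig = {
    'prop': Type(),                                     # prop : Type
    'imp':  arrow(Const('prop'),
                  arrow(Const('prop'), Const('prop'))), # prop -> prop -> prop
    'pf':   arrow(Const('prop'), Type()),               # pf : prop -> Type
    # ax1 : pi x, y : prop. pf (x => (y => x))          -- axiom scheme K
    'ax1':  Pi('x', Const('prop'), Pi('y', Const('prop'),
               pf(imp(Var('x'), imp(Var('y'), Var('x')))))),
    # mp : pi x, y : prop. pf (x => y) -> pf x -> pf y  -- modus ponens
    'mp':   Pi('x', Const('prop'), Pi('y', Const('prop'),
               arrow(pf(imp(Var('x'), Var('y'))),
                     arrow(pf(Var('x')), pf(Var('y')))))),
}
```

With this in place, a proof is just a closed term built from `ax1` and `mp` whose inferred type is convertible with the stated `pf (...)`; nothing in the checker itself privileges natural deduction.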
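And to make point 3 concrete, here is one shape the fix could take (again a sketch, not a design commitment; `DEFS`, `whnf_delta`, and `conv_delta` are hypothetical names): record each definition's body alongside its type, and let the convertibility check unfold defined constants (delta reduction) when comparing types.

```python
DEFS = {}   # name -> defining term; e.g. for the "def or" above,
            # DEFS['or'] = Lam('x', Const('prop'), Lam('y', Const('prop'), ...))

def whnf_delta(t):
    """Like whnf, but also unfolds defined constants exposed at the head."""
    while True:
        t = whnf(t)
        if isinstance(t, Const) and t.name in DEFS:
            t = DEFS[t.name]                  # delta step
            continue
        if isinstance(t, App):
            fn = whnf_delta(t.fn)
            if isinstance(fn, Lam):           # unfolding exposed a beta redex
                t = App(fn, t.arg)
                continue
            return App(fn, t.arg)
        return t

def conv_delta(t, u):
    """Type equality modulo alpha, beta, and definition expansion."""
    t, u = whnf_delta(t), whnf_delta(u)
    if type(t) is not type(u):
        return False
    if isinstance(t, (Var, Const)):
        return t.name == u.name
    if isinstance(t, App):
        return conv_delta(t.fn, u.fn) and conv_delta(t.arg, u.arg)
    if isinstance(t, (Lam, Pi)):
        return (conv_delta(t.ty, u.ty)
                and conv_delta(t.body, subst(u.body, u.var, Var(t.var))))
    return True                               # Type
```

Eagerly unfolding at the head like this is the simplest correct thing; a real checker would presumably unfold more lazily to keep checking fast, but the notion of equality is the same.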
The next step would definitely be to add _ to terms and fill them in automatically using some form of unification during checking (and have a way to make arguments optional and automatically add the dummies). This is standard in the proof world, but I don't see a clear path to choosing an appropriate unification algorithm. David Pym's PhD thesis presents an algorithm based on Huet's in Chapter 5. However, that algorithm looks complex enough that I'm not really looking forward to implementing it, and it's also only semi-decidable, which seems to me to go against the philosophy that Ghilbert inherits from Metamath: it should always be a straightforward question to verify a proof.
It might be worth exploring a much weaker form of unification, but it feels like thin ice theoretically.
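For what it's worth, here's the flavor of "much weaker" I have in mind: plain first-order syntactic unification over terms with metavariable holes, building on the AST sketch above (`Meta`, `Unifail`, `zonk`, `occurs`, and `unify` are all hypothetical names of mine). It is decidable and easy to verify, and the comments mark where the theoretical thin ice actually is. A known middle ground would be Miller's pattern fragment, which is decidable but considerably more involved.

```python
@dataclass(frozen=True)
class Meta: name: str                  # a "_" hole to be solved

class Unifail(Exception): pass

def zonk(t, sol):
    """Chase a solved metavariable to its current value."""
    while isinstance(t, Meta) and t.name in sol:
        t = sol[t.name]
    return t

def occurs(m, t, sol):
    """True if metavariable m occurs in t (under the solution sol)."""
    t = zonk(t, sol)
    if isinstance(t, Meta):
        return t.name == m
    if isinstance(t, App):
        return occurs(m, t.fn, sol) or occurs(m, t.arg, sol)
    if isinstance(t, (Lam, Pi)):
        return occurs(m, t.ty, sol) or occurs(m, t.body, sol)
    return False

def unify(t, u, sol):
    """Return sol extended so that t == u syntactically, or raise Unifail.
    Purely first-order, hence the thin ice: there is no beta, so e.g.
    App(Meta('M'), Var('x')) vs Const('c') fails here even though
    M := \\x. c solves it up to beta; and binding a hole under a binder
    can capture (or leak) bound variables, with no scope discipline."""
    t, u = zonk(t, sol), zonk(u, sol)
    if isinstance(t, Meta) and isinstance(u, Meta) and t.name == u.name:
        return sol
    if isinstance(u, Meta):
        t, u = u, t                    # orient: meta on the left
    if isinstance(t, Meta):
        if occurs(t.name, u, sol):
            raise Unifail('occurs check: ' + t.name)
        return {**sol, t.name: u}
    if type(t) is not type(u):
        raise Unifail('head mismatch')
    if isinstance(t, (Var, Const)):
        if t.name != u.name:
            raise Unifail(t.name + ' vs ' + u.name)
        return sol
    if isinstance(t, App):
        return unify(t.arg, u.arg, unify(t.fn, u.fn, sol))
    if isinstance(t, (Lam, Pi)):
        return unify(t.body, subst(u.body, u.var, Var(t.var)),
                     unify(t.ty, u.ty, sol))
    return sol                         # Type
```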
I'm now seeing fairly clear pros and cons to the LF approach:

Pros:

* Can implement natural deduction easily
* Can support richer types directly
* Lower impedance mismatch with DTT systems such as Lean and Idris

Cons:

* Unification looks tricky
* Higher impedance mismatch with Metamath
* Overall system will probably be more complex
Right now, I'm leaning away from adopting this approach, but I want that decision to rest on a careful consideration of the alternatives. Another reason to keep exploring is to see whether I can cherry-pick ideas from the LF-based design. In particular, I want to explore whether it makes sense to assign richer types to variables than just a name bound by a "kind" command (i.e., the same role as $f in Metamath).