Updates?


Adam Bliss

Apr 19, 2017, 11:24:29 PM
to ghil...@googlegroups.com
Hey Raph, I just happened to look at https://github.com/raphlinus/ghilbert/commits/master today and I noticed you're embarking on a Rust version. Does this mean that Ghilbert may see some more activity soon? Are you thinking about deploying Rust to App Engine, or just a new command-line verifier?

By sheer coincidence, I've also been thinking about Ghilbert this week for the first time in a couple of years. Completely by chance I was reading https://plato.stanford.edu/entries/logic-combinatory and I was intrigued to read about Schönfinkel's reduction of all of first-order logic to a single operator ("nextand") using combinators. Now I'm itching to see whether I can pull this off in Ghilbert: starting from a .ghi with a single (term) and exporting all of pred_prop.ghi.
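If I'm reading the article right, the whole reduction rests on Schönfinkel's generalized stroke U (the "nextand"), which rolls negation, conjunction, and universal quantification into a single operator:

    U_x(f, g)  :=  \forall x \, \neg (f(x) \land g(x))

so that, when x is not free in p and q, \neg p comes out as U_x(p, p), p \to q as U_x(p, U_x(q, q)), and \forall x \, \varphi(x) as U_x(\neg \varphi, \neg \varphi). (That's my own notation, reconstructed from the article, so take the details with a grain of salt.)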


--Adam

Raph Levien

Apr 20, 2017, 1:51:26 AM
to ghil...@googlegroups.com
Hi Adam, good to hear from you!

Yes, I have started on a new version. It will be quite different in some important ways. Instead of s-expressions, I will use a precedence parser, and symbols will be Unicode, so source files will look pretty close to typeset math. For proofs, instead of RPN, there will be a more pleasant syntax that allows naming (and sharing) of intermediate proof steps, and hopefully allows more choices in expressing the structure of a proof. Including "mandatory hypothesis" terms in a proof will go away; they will all be filled in by unification.
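To give a flavor of the parsing side, here's a toy sketch of the precedence-climbing loop in Rust (the operator table, names, and binding powers are placeholders for illustration, not the real grammar):

    #[derive(Debug)]
    enum Expr {
        Var(String),
        BinOp(String, Box<Expr>, Box<Expr>),
    }

    // (left, right) binding powers; right < left makes an operator
    // right-associative, as implication should be.
    fn binding_power(op: &str) -> Option<(u8, u8)> {
        match op {
            "->" => Some((2, 1)),
            "\\/" => Some((3, 4)),
            "/\\" => Some((5, 6)),
            _ => None,
        }
    }

    fn parse_expr(
        tokens: &mut std::iter::Peekable<std::vec::IntoIter<String>>,
        min_bp: u8,
    ) -> Expr {
        let mut lhs = Expr::Var(tokens.next().expect("expected a term"));
        while let Some(op) = tokens.peek().cloned() {
            let (l_bp, r_bp) = match binding_power(&op) {
                Some(bp) => bp,
                None => break,
            };
            if l_bp < min_bp {
                break;
            }
            tokens.next(); // consume the operator
            let rhs = parse_expr(tokens, r_bp);
            lhs = Expr::BinOp(op, Box::new(lhs), Box::new(rhs));
        }
        lhs
    }

    fn main() {
        let toks: Vec<String> =
            ["p", "->", "q", "->", "r"].iter().map(|s| s.to_string()).collect();
        // Prints BinOp("->", Var("p"), BinOp("->", Var("q"), Var("r"))):
        // implication associates to the right, no parentheses needed.
        println!("{:?}", parse_expr(&mut toks.into_iter().peekable(), 0));
    }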

I'm still convinced that the semantics of import/export are right, but the syntax is super confusing. I'll try to make it look like generics/typeclasses.

I'm backing away from doing anything online, and will center development on GitHub (your GitHub account is your ID, submissions are done by PR, etc.). Rendering to static HTML will be done by webhook. This will be fast because Rust. If the thing flies we can look at online/interactive later (I still consider it interesting).

I find it _very_ difficult to get blocks of concentrated time long enough to work on this, but will take 3 months off from work in the fall, with the goal of making this real.

The Schönfinkel reduction stuff looks interesting; I'll try to take a look.

Take care!

Raph


Adam Bliss

Apr 20, 2017, 10:12:39 AM
to ghil...@googlegroups.com
On Apr 20, 2017 1:51 AM, "Raph Levien" <raph....@gmail.com> wrote:
Yes, I have started on a new version.

Ah, wonderful! I look forward to watching your progress.

It will be quite different in some important ways. Instead of s-expressions, I will use a precedence parser, and symbols will be Unicode, so source files will look pretty close to typeset math.

Hm, that sounds... ambitious! I know you didn't ask for input, but I hope you won't mind me sharing my reactions (as someone who has written a few Ghilbert proofs).

1. It seems to me that typeset math requires some pretty sophisticated visual details to render any kind of complicated expression in a readable way. I'm thinking about inter-line spacing (for superscripts and subscripts), varying parenthesis size (to track nesting levels), careful kerning and font-face choices (especially if multi-letter variables are in use), etc. Even with infix terms and the full suite of Unicode characters, I am skeptical that a fixed-width, newline-delimited source file will ever be readable "as math" to the unassisted human eye.

2. While sexps do take some getting used to, they are really nice for searchability. I've always said we need a proper theorem search engine (with some kind of sophisticated query input, hopefully in HTML/JS), but until we get one, using `isearch-forward` in Emacs (or `grep` on the command line) actually works pretty darn well for navigating a corpus in sexp notation. I'm not sure how well it will work with a precedence parser. (It seems you'd have to know, ahead of time, much more about the leaves of your desired expression.)

3. And finally, maybe I'm still stuck in the last century, but I still find it really unpleasant to edit source files with Unicode characters. There's something very satisfyingly immediate when every character in the file has a glyph on my keyboard and my fingers know exactly how to type it directly (without an extra level of abstraction for a fancy input method).

For proofs, instead of RPN, there will be a more pleasant syntax that allows naming (and sharing) of intermediate proof steps, and hopefully allows more choices in expressing the structure of a proof.

Sounds cool! I remember having a lot of trouble with the RPN notation when I first started. But now it's wormed its way so deeply into my brain that I can't imagine the alternative. I look forward to seeing more!

Including "mandatory hypothesis" terms in a proof will go away, it will all be filled in with unification.

This fills me with the greatest possible joy.


I'm still convinced that the semantics of import/export are right, but the syntax is super confusing. I'll try to make it look like generics/typeclasses.

Agreed on both points. I'm not sure I know quite what you mean about generics, but I think I get the drift, and I like it.

One thing I wondered this morning, in connection with that article I linked about combinators: what do you think about supporting a richer type system for `kind` statements? E.g. allowing a (K) term to have kind (A -> (B -> A)), where A and B can be any kinds. We already have a unifier for condensed detachment, so it shouldn't be too much extra work to use it for type checking as well. This might make "kindbind" unnecessary. (But it may well be more complexity than it's worth, and it would certainly widen the gulf from Metamath, which would be a shame.) 
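To spell out what I mean in ordinary typing notation (my own rendering, not a proposal for actual syntax):

    K : A -> (B -> A)
    S : (A -> (B -> C)) -> ((A -> B) -> (A -> C))

The same unifier that does condensed detachment would then check an application like S K K : A -> A by unifying the kind variables.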
 

I'm backing away from doing anything online, and will center development on GitHub (your GitHub account is your ID, submissions are done by PR, etc.). Rendering to static HTML will be done by webhook. This will be fast because Rust. If the thing flies we can look at online/interactive later (I still consider it interesting).

I think that makes some good sense. (In case anyone remembers my little attempt at a gamified web interface: I ended up rewriting it from scratch in a purely static way (plus a Firebase database for interactivity) and hosting it on GitHub. It's still very much a prototype though: https://abliss.github.io/tacro)

Cheers,
--Adam

Raph Levien

Jun 4, 2017, 5:20:21 PM
to ghil...@googlegroups.com
Sorry to let such a long time go by before answering. I'm glad to have your input; I've just had very limited time. That'll be changing :)

On Thu, Apr 20, 2017 at 7:12 AM, Adam Bliss <abl...@gmail.com> wrote:
On Apr 20, 2017 1:51 AM, "Raph Levien" <raph....@gmail.com> wrote:
Yes, I have started on a new version.

Ah, wonderful! I look forward to watching your progress.

It will be quite different in some important ways. Instead of s-expressions, I will use a precedence parser, and symbols will be Unicode, so source files will look pretty close to typeset math.

Hm, that sounds... ambitious! I know you didn't ask for input, but I hope you won't mind me sharing my reactions (as someone who has written a few Ghilbert proofs).

Yes please!
 
1. It seems to me that typeset math requires some pretty sophisticated visual details to render any kind of complicated expression in a readable way. I'm thinking about inter-line spacing (for superscripts and subscripts), varying parenthesis size (to track nesting levels), careful kerning and font-face choices (especially if multi-letter variables are in use), etc. Even with infix terms and the full suite of Unicode characters, I am skeptical that a fixed-width, newline-delimited source file will ever be readable "as math" to the unassisted human eye.

Right, I totally agree about the value of well-typeset math. My thinking here is not so much that the source file is "good enough," but rather that it minimizes the impedance between source and result. I believe that philosophy is much of the reason why markdown is successful in spite of its other shortcomings.
 
2. While sexps do take some getting used to, they are really nice for searchability. I've always said we need a proper theorem search engine (with some kind of sophisticated query input, hopefully in HTML/JS), but until we get one, using `isearch-forward` in Emacs (or `grep` on the command line) actually works pretty darn well for navigating a corpus in sexp notation. I'm not sure how well it will work with a precedence parser. (It seems you'd have to know, ahead of time, much more about the leaves of your desired expression.)

Good point.
 
3. And finally, maybe I'm still stuck in the last century, but I still find it really unpleasant to edit source files with Unicode characters. There's something very satisfyingly immediate when every character in the file has a glyph on my keyboard and my fingers know exactly how to type it directly (without an extra level of abstraction for a fancy input method).

I'm wrestling with this still, especially having spent some hours playing with Lean (see my Metamath group post on the subject). There are clear advantages and disadvantages on both sides. I got used to the Lean VSCode plugin pretty quickly (it's based on LaTeX-style input sequences, and gives tooltip help on how to type characters). That said, without careful font choice you get a "ransom note" effect where the ASCII doesn't match the math symbols.

I think there is tremendous value in being able to use just an editor and verifier. Actually, I feel the last iteration of Ghilbert was much better at this than, say, Metamath. While I did some proof development in the uncompressed MM format, it's quite tedious, and I feel like the compressed format is the true representation of MM proofs. Unfortunately, it's effectively a binary encoding and thus requires the use of a tool beyond an editor.

I'll do some more experiments with ASCII-only representations of the math. I think I would prefer a lightly adapted Metamath style (A. x ph) over, say, LaTeX style (\forall x \varphi) for a variety of reasons. I think asciimath is also good inspiration.
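For a rough side-by-side (the middle line being the kind of lightly adapted style I have in mind; exact spellings are still up in the air):

    Metamath:  ( A. x ph -> E. y ps )
    Adapted:   A. x ph -> E. y ps
    Unicode:   ∀x φ → ∃y ψ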

All that said, while I consider working with minimal tools important, I expect most users (especially most serious users) to use a tool, and the technology of choice for that today is an editor plugin (the Lean one for VSCode is an excellent example). Such a tool provides input fairly effortlessly: you end up typing basically the same thing as in an ASCII world, but get immediate feedback.
 
For proofs, instead of RPN, there will be a more pleasant syntax that allows naming (and sharing) of intermediate proof steps, and hopefully allows more choices in expressing the structure of a proof.

Sounds cool! I remember having a lot of trouble with the RPN notation when I first started. But now it's wormed its way so deeply into my brain that I can't imagine the alternative. I look forward to seeing more!

Stay tuned, I'll post my most recent sketch momentarily.

Regarding RPN (and s-expressions), I think it's something people can get used to, especially if they're used to thinking abstractly, but doing the translation imposes a constant low-level cognitive load. I spent many, many years doing RPN when I was maintainer of Ghostscript, and am not at all sad to put that behind me.
 
Including "mandatory hypothesis" terms in a proof will go away, it will all be filled in with unification.

This fills me with the greatest possible joy.


I'm still convinced that the semantics of import/export are right, but the syntax is super confusing. I'll try to make it look like generics/typeclasses.

Agreed on both points. I'm not sure I know quite what you mean about generics, but I think I get the drift, and I like it.

Still thinking about it; I don't have the answers, but am gaining confidence I can come up with something good. As I discussed in some detail in the post to the Metamath group, I've recently gained some enlightenment about the lambda-Pi calculus, and feel that it provides some guidance on how to do this kind of thing. Also, Lean has explicit typeclasses, and I think there's quite a bit to be learned from that as well.
 
One thing I wondered this morning, in connection with that article I linked about combinators: what do you think about supporting a richer type system for `kind` statements? E.g. allowing a (K) term to have kind (A -> (B -> A)), where A and B can be any kinds. We already have a unifier for condensed detachment, so it shouldn't be too much extra work to use it for type checking as well. This might make "kindbind" unnecessary. (But it may well be more complexity than it's worth, and it would certainly widen the gulf from Metamath, which would be a shame.)

I'm still very much wrestling with this. I think what you're describing is actually more like types than kinds. Obviously in the lambda-Pi calculus you can have types as much as you like (in fact, it's hard to swing a cat in lambda-Pi without being completely inundated with types).

In my pre-lambda-Pi thinking, the idea was that you would do this with kinds, and with import/export. In fact, I believe you can do something like this with the existing Ghilbert implementation, but it's painful to actually use. You define an interface for, say, predicate calculus, with a kind for the range of quantifiers. Then you import three instances of that interface: one for A, one for B, and one for A_arrow_B. Lastly, you import an interface that defines this arrow operator in terms of the kinds you've imported, defining new terms as expected for abstraction and application.

Of course, in the existing Ghilbert, you'd use string prefixes for namespacing, as you'd be dealing with multiple instances of the same interface, and notationally the whole thing is cumbersome to an extreme. I'm even more hopeful that I'll be able to come up with something nice, now that I can work from examples from Lean and LF-style embeddings of HOL into lambda-Pi. Even so, all this stuff is still just a hope rather than a concrete proposal.
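To illustrate the generics analogy in Rust-flavored terms (purely a sketch of the shape; none of this is actual or proposed Ghilbert syntax):

    use std::marker::PhantomData;

    // An interface (.ghi) is like a trait: it names a kind and the
    // operations/axioms over it.
    trait PredCalc {
        type Range; // the kind ranged over by the quantifiers
    }

    struct A;
    struct B;
    struct Arrow<X, Y>(PhantomData<(X, Y)>);

    // Importing three instances of the interface is like three impls;
    // the type system does the namespacing that string prefixes do today.
    impl PredCalc for A { type Range = A; }
    impl PredCalc for B { type Range = B; }
    impl<X: PredCalc, Y: PredCalc> PredCalc for Arrow<X, Y> {
        type Range = Arrow<X, Y>;
    }

    // A theorem proved against the interface is like a generic item:
    // it applies to every kind satisfying the bound.
    fn some_theorem<K: PredCalc>() { /* proof elided */ }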
 
 

I'm backing away from doing anything online, and will center development on GitHub (your GitHub account is your ID, submissions are done by PR, etc.). Rendering to static HTML will be done by webhook. This will be fast because Rust. If the thing flies we can look at online/interactive later (I still consider it interesting).

I think that makes some good sense. (In case anyone remembers my little attempt at a gamified web interface: I ended up rewriting it from scratch in a purely static way (plus a Firebase database for interactivity) and hosting it on GitHub. It's still very much a prototype though: https://abliss.github.io/tacro)

I played with this a bit but wasn't able to get very far. Looks fun though.
 
Cheers,
--Adam

Raph Levien

Jun 7, 2017, 2:57:59 AM
to ghil...@googlegroups.com
I've thought about the Unicode issue some more, and think I have an answer. I've also given a bit of thought to lexing in general, and implemented a first cut at that.

In trying Unicode, there's the obvious issue of needing to set up an input method. At the very least, there's having to translate between the visual appearance and the key sequence. Many symbols have uncomfortably long key sequences (I found "\varphi" especially painful, compared with "ph").

I also found a lot of paper cuts. For one, the visual presentation is going to vary a lot depending on font configuration, and in many cases will be fairly degraded. I think even in 2017 we'll see some missing symbols (especially in the U+1DXXX range).

I also found that the glyphs for U+03C6 (φ) and U+03D5 (ϕ) were inconsistent. Some fonts (Times, Arial) seemed to follow the Unicode recommendation, where the former is loopy and the latter has a vertical stroke. Other fonts (Consolas) were the reverse. Source Code Pro (an important and very fine font) seems to have the stroke form for both, with no way to get the loopy version. I'm also worried about U+2212 (minus), as it's so much more of a pain to type than U+002D (hyphen-minus).

U+0338 (combining long solidus overlay) renders poorly in Sublime (on Windows 10). Arrows (especially ↔) look too short in a monospace font. Math brackets (U+2329, U+232A) aren't monospaced in the fallback font, so the visual appearance is inconsistent.

That's a lot of problems, I think.

That said, some people will consider working in ASCII to be barbaric, and will prefer the visual feedback of symbols that look like their real mathematical counterparts. Such people would probably be willing to configure their editor, input method, and font choices to deal with Unicode math fairly well. I think the way to accommodate such users is to define a reversible transformation between ASCII and Unicode forms, and have the verifier (and any editor plugins) accept either. But I think the version checked into GitHub should be ASCII.
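Sketch of the reversible mapping in Rust (the pairings are illustrative only, not a final symbol list):

    use std::collections::HashMap;

    // One table drives both directions, so the transformation is
    // reversible by construction as long as each column is unique.
    const SYMS: &[(&str, &str)] = &[
        ("A.", "∀"),
        ("E.", "∃"),
        ("->", "→"),
        ("<->", "↔"),
        ("ph", "φ"),
        ("ps", "ψ"),
    ];

    fn tables() -> (
        HashMap<&'static str, &'static str>,
        HashMap<&'static str, &'static str>,
    ) {
        let to_uni: HashMap<_, _> = SYMS.iter().cloned().collect();
        let to_ascii: HashMap<_, _> = SYMS.iter().map(|&(a, u)| (u, a)).collect();
        // If either map is smaller than the table, a duplicate has
        // broken round-tripping; fail loudly.
        assert_eq!(to_uni.len(), SYMS.len());
        assert_eq!(to_ascii.len(), SYMS.len());
        (to_uni, to_ascii)
    }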

One challenge I ran into in adapting the Metamath ASCII symbol names is things like "(." (the original "(." referred to proper subclass and is now "C.", but "(." and ")." still exist as a form of bracket). Metamath basically uses space to delimit all tokens, which I find annoyingly airy, especially for expressions with lots of parentheses. I'd rather spaces be optional. The approach I'm exploring now is that lexing is trie-based and matches the longest known token. Introducing new tokens would follow different rules, basically consuming everything up to a space (though I can imagine tweaking this, so that commands like "kind set;" would work).
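Here's a toy version of the longest-match idea (a linear scan stands in for the trie, which changes the complexity but not the behavior):

    fn lex(mut input: &str, tokens: &[&str]) -> Vec<String> {
        let mut out = Vec::new();
        loop {
            input = input.trim_start(); // spaces allowed, never required
            if input.is_empty() {
                return out;
            }
            // Longest known token that prefixes the remaining input.
            let best = tokens
                .iter()
                .filter(|t| input.starts_with(**t))
                .max_by_key(|t| t.len());
            match best {
                Some(t) => {
                    out.push(t.to_string());
                    input = &input[t.len()..];
                }
                None => panic!("no known token at {:?}", input),
            }
        }
    }

    fn main() {
        let toks = ["(", ")", "(.", ").", "->", "C.", "x", "y"];
        // "(." and ")." win over "(" and ")" because they're longer:
        // prints ["(.", "x", "->", "y", ")."]
        println!("{:?}", lex("(.x->y).", &toks));
    }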

I'm debating whether to try to make syntax such as [a, b) work, or whether to enforce lexical bracket matching. In any case, it seems to me that syntax highlighting is going to require pretty deep syntax knowledge (more so in ASCII than in Unicode, I think).

I'll continue to experiment with all this. And hopefully there will be something concrete to play with before long.

Raph

Adam Bliss

Jun 9, 2017, 12:12:01 AM
to ghil...@googlegroups.com
On Sun, Jun 4, 2017 at 5:20 PM, Raph Levien <raph....@gmail.com> wrote: 
One thing I wondered this morning, in connection with that article I linked about combinators: what do you think about supporting a richer type system for `kind` statements? E.g. allowing a (K) term to have kind (A -> (B -> A)), where A and B can be any kinds. We already have a unifier for condensed detachment, so it shouldn't be too much extra work to use it for type checking as well. This might make "kindbind" unnecessary. (But it may well be more complexity than it's worth, and it would certainly widen the gulf from Metamath, which would be a shame.)

I'm still very much wrestling with this. I think what you're describing is actually more like types than kinds. Obviously in the lambda-Pi calculus you can have types as much as you like (in fact, it's hard to swing a cat in lambda-Pi without being completely inundated with types).

In my pre-lambda-Pi thinking, the idea was that you would do this with kinds, and with import/export. In fact, I believe you can do something like this with the existing Ghilbert implementation, but it's painful to actually use. You define an interface for, say, predicate calculus, with a kind for the range of quantifiers. Then you import three instances of that interface: one for A, one for B, and one for A_arrow_B. Lastly, you import an interface that defines this arrow operator in terms of the kinds you've imported, defining new terms as expected for abstraction and application.

Of course, in the existing Ghilbert, you'd use string prefixes for namespacing, as you'd be dealing with multiple instances of the same interface, and notationally the whole thing is cumbersome to an extreme. 

Well, I couldn't resist taking this on as a challenge, and I can verify that "painful" and "cumbersome to an extreme" are accurate assessments. Here's my implementation; tell me if it's what you had in mind:

Note that you mentioned "abstraction and application", but I'm only doing application, since the combinators have no abstraction. (Indeed, part of my fascination with the idea of using combinators in a .gh is the notion that it could embed FOL without any abstraction at all, and thus slice the Gordian knots of "substitution in the language or in the metalanguage" and "distinct variable constraints versus freemaps" that were being discussed on the Metamath group.)
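(For concreteness, the standard bracket-abstraction clauses are what make "no abstraction at all" possible; any \lambda x. M compiles away via

    [x] x     = I  (= S K K)
    [x] M     = K M                    if x is not free in M
    [x] (M N) = S ([x] M) ([x] N)

so the object language never contains anything but applications of S and K.)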

I assembled this monstrosity by hand, and it took a good few hours, but I think it actually could be done somewhat elegantly, using something like C preprocessor macros to generate all the imports and params (plus maybe a little type-unification script for assistance while writing proofs). But, of course, proper support for generic kinds (or proper types) in the Ghilbert spec would be best.

Now that I've done this in your "pre-lambda-Pi thinking" mindset, can you point me to your favorite introduction to lambda-Pi so I can try to understand where you're heading next? 

Thanks,
--Adam

Raph Levien

Jun 9, 2017, 1:06:28 AM
to ghil...@googlegroups.com
My favorite current sources are the original Logical Frameworks paper (http://homepages.inf.ed.ac.uk/gdp/publications/Framework_Def_Log.pdf) and Lean (leanprover.github.io). In particular, the "typeclass" idea from Lean (which I understand to be similar to Haskell typeclasses and Rust traits) seems really powerful and expressive. Also, the concept of using imports and exports to support interpretation of one axiomatic theory in terms of another is expressed clearly in Little Theories (http://imps.mcmaster.ca/doc/little-theories.pdf).

I should emphasize, I don't have a complete design right now. What I do have is a pretty strong gut feeling that a beautiful design is possible. These are not the same thing.

I'm also not sure how much lambda-Pi is going to be exposed in the final system. I'd like to simplify things a lot, and especially not require users to think about dependent types. I do like lambda-Pi as a way of reasoning about logical systems. For example, definitions and theorems are basically the same thing: one is a lambda expression that's term-valued, the other judgment-valued. Both express the concept of not introducing truly new things, just reassembling existing pieces.
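To make that concrete, in rough LF notation (the standard judgments-as-types encoding, not anything Ghilbert-specific):

    prop  : Type
    proof : prop -> Type
    imp   : prop -> prop -> prop
    mp    : \Pi p, q : prop. proof (imp p q) -> proof p -> proof q

A theorem thm : proof phi is then a closed lambda term of that type, while a definition is the same shape of term, just valued in prop rather than in a proof type.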

I do consider being able to produce lambda-Pi output a pretty valuable feature, as such proofs could fairly easily be imported into systems such as Lean. But I want to be careful not to go too far; for example, it also seems valuable to be able to take proofs in Peano and Z2 and import those into HOL, which is not lambda-Pi based.

Raph

Adam Bliss

Jun 11, 2017, 12:20:51 PM
to ghil...@googlegroups.com
Thanks, Raph. I found the LF paper to be a bit dense, but after working through chapters 1 and 2 of Theorem Proving in Lean (https://leanprover.github.io/theorem_proving_in_lean), I was able to understand your post to the metamath group, and I am now quite excited about it!