New community verifier?


Mario Carneiro

Nov 20, 2020, 3:02:44 PM
to metamath
Relocated from https://groups.google.com/g/metamath/c/NMZA_JRkU4U/m/ccP9bXZlAQAJ:

On Fri, Nov 20, 2020 at 9:37 AM Benoit <benoit...@gmail.com> wrote:
Mario, do you have some more detail about this project to write (I'm quoting you from github) "a verifier that would satisfy all the constraints of the CI system in terms of capabilities: checking all the typesetting stuff, definition checking, and full verification as efficiently as possible"?
Thanks in advance.
Benoît

The plan is to start a new "community verifier" and build it with those tasks in mind. I think there has been too much fragmentation in verifier land; this makes it very hard for newcomers, especially those that are not especially computer literate such as traditional mathematicians. mmj2 is pretty good, but it's also quite old and the architecture isn't something I really want to continue to build on. MM1 builds a lot of lean-esque features into a metamath-like foundation, and I would say as a proof assistant it's a smashing success, in usability if perhaps not in users. I've been spoiled by lean to an extent and don't really see current metamath tools as up to par by comparison, and building mm0-rs has been enough to convince me that it is feasible to do the same thing for metamath.

I value metamath's many verifiers; it's a great thing that a new verifier can be done as an afternoon project. But a proper proof assistant needs a lot more infrastructure to provide a high quality user experience, and the community is small enough as it is without additional fragmentation. We've managed to successfully collaborate on the set.mm github repo, but to date no verifier has received significant attention from >2 contributors, except mmj2 which has a couple drive-by contributions beyond the work by myself and Mel O'Cat (and I'm not actively developing the project anymore). There is no metamath verifier I would say is in a particularly healthy state of development.

As a starting point, I would like to replicate the main functions of metamath.exe, at least for one-shot tasks. That means reading and verifying a database, but also parsing it in such a way that edits can be applied, comments re-wrapped, and semantic information like document structure is available. Generating the web pages would also be a target. (This is quite difficult, as metamath.exe is a ball of hacks right now and I would like to start from a cleaner baseline.)

For the more advanced tasks, one design question that comes up early is whether to support "ungrammatical" databases. On the one hand, the metamath spec considers such databases valid, so they can't be rejected (unless the database contains the $j command asserting that it is grammatical). On the other hand, grammaticality enables an asymptotically more efficient structural storage (trees instead of strings), and also makes it possible to perform HTML construction in a saner way, where expressions are properly grouped rather than just appending tokens and hoping it looks well formed at the end. (This is where a lot of the hacks in metamath.exe come from.) Note that set.mm and iset.mm are grammatical, as well as most but not all of the smaller example databases. (I think miu.mm and lof.mm are the main counterexamples.) Perhaps we can degrade to a simpler verifier-only version in case the DB is not grammatical.
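
(To make the trees-vs-strings point concrete, here is a rough Rust sketch, with purely illustrative names not taken from any existing codebase, of the two representations: an ungrammatical database can only store an expression as a flat token string, while a grammatical one admits a parse tree keyed by the syntax axiom at each node, which is what enables the asymptotically better storage and sane HTML grouping.)

    // Illustrative only: how a math expression might be stored.
    enum Expr {
        // Fallback for ungrammatical databases: just the symbol string,
        // e.g. ["(", "ph", "->", "ps", ")"].
        Tokens(Vec<String>),
        // Grammatical databases: a node labeled by the syntax axiom that
        // produced it (e.g. "wi" for implication), with parsed subtrees.
        Node { syntax_axiom: String, args: Vec<Expr> },
        // A variable leaf such as "ph".
        Var(String),
    }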

As far as language choice goes, I would recommend Rust. Beyond being well suited for the speed and architectural requirements of a verifier / proof assistant, both myself and Raph Levien (if I can interest him in this project) are involved in the rust ecosystem; maybe we can use the Druid UI framework for the front end. (I want to put in a word of caution though: one thing I learned from mmj2 is that it's a bad idea to write a text editor unless that's the main goal. This is not a trivial task and will suck up all your research time. Outsourcing the editor work for MM1 to VSCode made so many things easier.)

But really, the choice is not only up to me, as community buy-in for this project would be important. The ultimate goal is to become the de-facto standard proof assistant for metamath, put all our best ideas in it, and finally get over the popular conception of metamath as not practical for real mathematicians unless you have a high tolerance for manual labor.

vvs

Nov 20, 2020, 5:31:34 PM
to Metamath
Whoa! And I'm already spoiled by what Leo is doing with Lean 4. The tiny core language and vastly extensible parser, with an entire automation library for building and proving formal systems, incremental compilation, and plans for interactive compiler guidance, are mind blowing. And that's setting aside your own exciting MM0 work. And I only have so much time to learn about all these things. I'm torn already :(

Raph Levien

Nov 20, 2020, 10:22:06 PM
to Metamath Mailing List
I am tentatively interested, but have fairly limited free time, as I have a ton of other super-interesting research projects. I certainly agree that Rust is a great language for this.

Regarding writing an editor: I agree it's a big task and hard to do right. That said, it's also something I'm interested in. I know there's at least one other editor project on Druid (not my own), and we also have interest in unbundling our (Druid's) multiline text widget so it's possible to integrate other functionality on top of it.

If you want help on specifically the Druid parts, you might be able to come by the #druid channel on xi.zulipchat.com and stir up some interest.

Raph




Mario Carneiro

Nov 20, 2020, 10:30:36 PM
to metamath
On Fri, Nov 20, 2020 at 10:22 PM Raph Levien <raph....@gmail.com> wrote:
Regarding writing an editor: I agree it's a big task and hard to do right. That said, it's also something I'm interested in. I know there's at least one other editor project on Druid (not my own), and we also have interest in unbundling our (Druid's) multiline text widget so it's possible to integrate other functionality on top of it.

Yes, I'm aware of the Xi editor and my comment wasn't at all meant to throw shade on it! I think it's a cool project, and from the demos I've seen it looks pretty slick, if a bit raw (I don't know how much it has progressed in the meantime). But I'm sure you can attest that it's not a job for the faint-hearted! Of course if the *goal* is to build a good text editor then it's a feasible project, but building a good text editor and also a good proof assistant at the same time is probably a poor separation of concerns.



Tom Houlé

Nov 21, 2020, 6:02:41 AM
to Metamath
Not a very productive contribution to the discussion, but I'm posting it anyway: as someone who is still relatively new to ITPs but investing a lot of my spare time into Lean for maths, I find this idea very exciting. I have been writing Rust professionally for a long time, and I also think it would be a good, practical implementation language. I also agree with the sentiment about writing a text editor. Definitely interested as a user and maybe more.

Glauco

Nov 21, 2020, 10:54:24 AM
to Metamath
I believe mmj2 is good as an IDE, and with a couple of improvements it would even boost my speed in writing proofs several times over (IntelliSense-style completion when proving backward being the most important missing feature).

I've tried a couple of times to set up Eclipse for mmj2, but got stuck when trying to step-debug the code after a Ctrl+U: a few threads are spawned and the debugger doesn't "attach" to at least one of them. I'm sure it depends on having the right Java / Eclipse environment, but I love to write proofs, so I've never even asked Mario / David for help setting up the right "environment" in order to help with mmj2 development.

I'm currently using a Mel O'Cat release that's no longer available online, because he improved the search page in that release. I believe it's good enough to write long and complex proofs.

I've always been of the opinion that the most effective strategy would have been to improve mmj2, rather than starting new projects.

I'd be really excited if we can come up with a better PA. I don't know anything about Rust, but I'm confident I can learn it. I hope we will set up a well-defined/isolated IDE: IMHO, the biggest problem with mmj2 is that it depends too heavily on the JVM version, the Eclipse version, and the Eclipse project version.
Recently, I tried to download the current mmj2 version, but I wasn't able to run it on either Win10 or Ubuntu. I got runtime errors, probably due to JVM version conflicts. I'm sure I could have fixed them, but I gave up and went back to my Mel O'Cat "special edition" :-)
Considering that I spend hours every day in mmj2, I can't imagine a newcomer facing these kinds of problems.

I completely agree with the "don't build the text editor" approach. I hope we can work on this new PA together. My main concern is that it will take a long time before it is comparable to mmj2, and until then nobody will use it for "real" work, which will slow down its development.

Glauco

vvs

Nov 21, 2020, 12:47:05 PM
to Metamath
I sense some confusion here. Let's define our terms first.

An IDE is just a text editor with a UI for syntax highlighting, autocompletion, code and documentation search, and integration of other tooling. It also has to be fast and robust. A verifier should just check soundness of definitions, well-formedness of terms, and correctness of proofs; working incrementally is a big advantage. A proof assistant should enable automated synthesis of proofs by inferring missing information, automatically coercing terms, filling in gaps, and making good suggestions (according to Andrej Bauer).

mmj2 has a very basic IDE with an integrated verifier. It's also a very rudimentary PA, as it has only a simple unification tool for synthesis. Granted, there are some basic tactics written in JS, but it isn't suited at all to being programmed by an average user. It's also buggy, slow, and monolithic, and it can't easily be used from other projects. I believe that's what prompted Mario to start this thread in the first place, right? There is also an established tradition of writing PAs in a functional programming language.

David A. Wheeler

Nov 21, 2020, 2:47:33 PM
to Metamath Mailing List
On Nov 20, 2020, at 3:02 PM, Mario Carneiro <di....@gmail.com> wrote:

Relocated from https://groups.google.com/g/metamath/c/NMZA_JRkU4U/m/ccP9bXZlAQAJ:

On Fri, Nov 20, 2020 at 9:37 AM Benoit <benoit...@gmail.com> wrote:
Mario, do you have some more detail about this project to write (I'm quoting you from github) "a verifier that would satisfy all the constraints of the CI system in terms of capabilities: checking all the typesetting stuff, definition checking, and full verification as efficiently as possible"?
Thanks in advance.
Benoît

The plan is to start a new "community verifier" and build it with those tasks in mind. I think there has been too much fragmentation in verifier land; this makes it very hard for newcomers, especially those that are not especially computer literate such as traditional mathematicians. mmj2 is pretty good, but it's also quite old and the architecture isn't something I really want to continue to build on. MM1 builds a lot of lean-esque features into a metamath-like foundation, and I would say as a proof assistant it's a smashing success, in usability if perhaps not in users. I've been spoiled by lean to an extent and don't really see current metamath tools as up to par by comparison, and building mm0-rs has been enough to convince me that it is feasible to do the same thing for metamath.

I value metamath's many verifiers; it's a great thing that a new verifier can be done as an afternoon project. But a proper proof assistant needs a lot more infrastructure to provide a high quality user experience, and the community is small enough as it is without additional fragmentation. We've managed to successfully collaborate on the set.mm github repo, but to date no verifier has received significant attention from >2 contributors, except mmj2 which has a couple drive-by contributions beyond the work by myself and Mel O'Cat (and I'm not actively developing the project anymore). There is no metamath verifier I would say is in a particularly healthy state of development.

Quick nit: I would distinguish “verifier” from “proof assistant” (as we already often do). I also value Metamath’s many verifiers. I don’t think the *verifiers* are in a bad state; they work just fine... but that’s mostly because our expectations for verifiers are limited.

It sounds like your real long-term desire is to have a “proper proof assistant”. In that case I agree with you, it would be *awesome* to have a much better proof assistant than the current options.

As a starting point, I would like to replicate the main functions of metamath.exe, at least for one-shot tasks. That means reading and verifying a database, but also parsing it in such a way that edits can be applied, comments re-wrapped, and semantic information like document structure is available. Generating the web pages would also be a target. (This is quite difficult, as metamath.exe is a ball of hacks right now and I would like to start from a cleaner baseline.)

Sounds good.

Some of the hackishness in Metamath-exe’s web page generation is because C is *terrible* at string handling. Metamath-exe is also relatively slow because it’s single threaded; if we could run the generators in parallel we could *dramatically* reduce HTML generation time.

For the more advanced tasks, one design question that comes up early is whether to support "ungrammatical" databases. On the one hand, the metamath spec considers such databases valid, so they can't be rejected (unless the database contains the $j command asserting that it is grammatical). On the other hand, grammaticality enables an asymptotically more efficient structural storage (trees instead of strings), and also makes it possible to perform HTML construction in a saner way, where expressions are properly grouped rather than just appending tokens and hoping it looks well formed at the end. (This is where a lot of the hacks in metamath.exe come from.) Note that set.mm and iset.mm are grammatical, as well as most but not all of the smaller example databases. (I think miu.mm and lof.mm are the main counterexamples.) Perhaps we can degrade to a simpler verifier-only version in case the DB is not grammatical.

I think only supporting trees is quite reasonable for a proof assistant. Almost all real metamath work uses the “grammatical” databases (is there a better term?). Degrading to a “verifier-only” version is a fine long-term plan. Supporting that could even be long delayed because there are many verifiers that can already handle “non-grammatical” databases; we don’t really need another one right now.

As far as language choice goes, I would recommend Rust. Beyond being well suited for the speed and architectural requirements of a verifier / proof assistant, both myself and Raph Levien (if I can interest him in this project) are involved in the rust ecosystem; maybe we can use the Druid UI framework for the front end. (I want to put in a word of caution though: one thing I learned from mmj2 is that it's a bad idea to write a text editor unless that's the main goal. This is not a trivial task and will suck up all your research time. Outsourcing the editor work for MM1 to VSCode made so many things easier.)

I agree that rewriting a text editor is a terrible idea. Reuse is vital to get things done in a reasonable time.

But really, the choice is not only up to me, as community buy-in for this project would be important. The ultimate goal is to become the de-facto standard proof assistant for metamath, put all our best ideas in it, and finally get over the popular conception of metamath as not practical for real mathematicians unless you have a high tolerance for manual labor

I think that those willing to put in the effort to get started also get to pick the tech stack :-).

Here are some traits I’d like to see in a better Metamath proof assistant:
1. Easily usable as a programmatic *library* from other languages. I should be able to import the library from most any other language, then use it to load databases, do queries, etc. The higher-level capabilities should be implemented by calling that interface & ideally be callable as well. That would make it much easier to build other things.
2. Implemented in a memory-safe language, and preferably statically typed. I like the confidence this could provide.
3. Has a small kernel implementing “legal actions”, which is distinguished from the many tactics it includes that allow trying things out without producing an invalid state as long as the tactic obeys simple static rules (the state may be *useless* but not *wrong*).
4. Support for forwards *and* backwards reasoning at least, and ideally middle-out, in its proof assistant. Some proof assistants can only go backwards; I really appreciate mmj2’s ability to do forwards, backwards, and even middle-out.
5. Lots of automation (eventually).
6. Ideally, it should include the ability to easily call out to other components written in other languages to help automation, e.g., the ability to call out to a machine learning module or some such. There are existing tools that can prove some claims; if they could be called and could then inject their proofs into a Metamath database, that’d be awesome.
7. Optional/lower importance: Perhaps enable simpler handling of parentheses within the human UI. It’d be nice to be able to use parens without surrounding each one with whitespace. I don’t think supporting precedence is important (it might even be undesirable), but it would be nice to simplify handling many parens. That may be challenging, since there are a number of constants with ( or ) in their name in set.mm such as (/) and (,).
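
(As a rough illustration of why item 7 seems tractable: whitespace-free lexing could be done by maximal munch against the database's declared constants, so that "(/)" and "(,)" win over a bare "(". The sketch below uses made-up names and is only meant to show the idea.)

    use std::collections::HashSet;

    // Hypothetical sketch: split a math string without requiring spaces
    // around parentheses. `constants` would be the symbols declared in the
    // loaded database; everything here is illustrative.
    fn tokenize(input: &str, constants: &HashSet<String>) -> Vec<String> {
        let mut tokens = Vec::new();
        let mut rest = input.trim_start();
        while !rest.is_empty() {
            // Prefer the longest declared constant starting here, so "(/)"
            // and "(,)" beat a lone "(".
            let best = constants
                .iter()
                .filter(|c| rest.starts_with(c.as_str()))
                .max_by_key(|c| c.len());
            let len = match best {
                Some(c) => c.len(),
                // Otherwise split off a single paren, or a run of characters
                // up to the next whitespace or paren.
                None if rest.starts_with('(') || rest.starts_with(')') => 1,
                None => rest
                    .find(|ch: char| ch.is_whitespace() || ch == '(' || ch == ')')
                    .unwrap_or(rest.len()),
            };
            tokens.push(rest[..len].to_string());
            rest = rest[len..].trim_start();
        }
        tokens
    }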

Rust is an interesting choice. It’s really easy to create a library others can call with Rust (#1), it’s memory-safe & static (#2), and it’d be easy to implement a small kernel (#3). It’s blazingly fast & easy to parallelize. If the goal is to implement something that can automatically do a lot, that’d be a good start.

I’ve played with Rust but not tried to build a proof assistant system with it. Would that be excessively hard? It’s true that Rust has lots of nice functional language syntactic sugar that makes some constructs easy. However, Rust doesn’t have a built-in garbage collector; its borrow system often makes that unnecessary, and that’s one reason why it plays nicely with other languages, but would we have to fight Rust too much because of that? Or is that not really an issue? Rust doesn’t natively like graphs very much (unless you disable its safeties); is that not a real problem in this case, or do alternatives such as using indices (e.g., https://smallcultfollowing.com/babysteps/blog/2015/04/06/modeling-graphs-in-rust-using-vector-indices/ ) resolve that concern?
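
(For reference, the vector-index pattern from that post, applied to something like a theorem dependency graph, might look like the rough sketch below; all names are hypothetical.)

    // Nodes live in vectors and edges are plain indices, so there are no
    // reference cycles and nothing for the borrow checker to object to.
    struct TheoremGraph {
        labels: Vec<String>,    // node id -> theorem label
        uses: Vec<Vec<usize>>,  // node id -> ids of theorems it depends on
    }

    impl TheoremGraph {
        fn add(&mut self, label: &str, uses: Vec<usize>) -> usize {
            let id = self.labels.len();
            self.labels.push(label.to_string());
            self.uses.push(uses);
            id // the caller keeps this index instead of a reference
        }
    }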

One problem with Rust is that there’s no built-in REPL. If it’s accessible via an easily imported library that may be a non-problem (e.g., make it easy to import into Python and use Python’s REPL).
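
(One plausible route, sketched below with made-up names, is a thin pyo3 wrapper so the Rust library shows up as an ordinary Python module; the exact attribute and signature details depend on the pyo3 release, so treat this as an assumption-laden sketch.)

    use pyo3::prelude::*;

    // Hypothetical binding: expose one library call to Python so it can be
    // driven from Python's REPL. Function and module names are made up.
    #[pyfunction]
    fn verify_database(path: &str) -> PyResult<bool> {
        // ... load and verify the .mm file here ...
        Ok(true)
    }

    #[pymodule]
    fn metamath_rs(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(verify_database, m)?)?;
        Ok(())
    }

    // From Python: import metamath_rs; metamath_rs.verify_database("set.mm")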

I might be willing to help out if it’s Rust. But I have only fiddled with Rust, not used it seriously for this kind of application, so it might be useful to hear whether it’s a decent match for the purpose. To be honest, I’ve been looking for an excuse to delve further into Rust, and this might be a good excuse :-). You’ve inspired me; I think I’ll go de-rust my Rust in case this goes anywhere.

However: Let’s imagine that the implementation is complete. What would interacting with it look like? What is its human UI? What is an example of a rough-draft library API?

--- David A. Wheeler

drjonp...@gmail.com

Nov 21, 2020, 3:33:59 PM
to Metamath
" MM1 builds a lot of lean-esque features into a metamath-like foundation, and I would say as a proof assistant it's a smashing success, in usability if perhaps not in users. "

One question re this, would some tutorials for helping people use MM1 accomplish quite a lot? How hard is it to transfer work done in MM1 to a classic mm proof? I mean if there is already a PA which you are happy with there then it feels a bit like others could just learn to use that. 

David's tutorials for mmj2 were super helpful for me for getting into using it, something similar might be cool.

David A. Wheeler

Nov 21, 2020, 6:01:54 PM
to Metamath Mailing List


> On Nov 20, 2020, at 3:02 PM, Mario Carneiro <di....@gmail.com> wrote:
> As far as language choice goes, I would recommend Rust.


I should add that I think it’d be valuable if a proof verifier/proof assistant was invokable through just a web browser. It doesn’t need to be full speed, but one you can try out can be really helpful.

If Rust is used, there’s an obvious way to do that: WebAssembly. Compile the Rust library into WebAssembly, then invoke it from JavaScript. That would mean that a single library could be used in different circumstances.
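
(A minimal sketch of what that could look like with wasm-bindgen; the function name and behavior here are placeholders, not an existing API.)

    use wasm_bindgen::prelude::*;

    // Compiled with wasm-pack, this becomes a function callable from
    // JavaScript in the browser.
    #[wasm_bindgen]
    pub fn verify_database(text: &str) -> bool {
        // ... parse and verify the database text here ...
        !text.is_empty()
    }

    // From JavaScript, roughly:
    //   const ok = wasm.verify_database(dbText);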

--- David A. Wheeler

Mario Carneiro

Nov 21, 2020, 6:08:18 PM
to metamath
On Sat, Nov 21, 2020 at 10:54 AM Glauco <glaform...@gmail.com> wrote:
I've tried a couple of times to set up Eclipse for mmj2, but got stuck when trying to step-debug the code after a Ctrl+U: a few threads are spawned and the debugger doesn't "attach" to at least one of them. I'm sure it depends on having the right Java / Eclipse environment, but I love to write proofs, so I've never even asked Mario / David for help setting up the right "environment" in order to help with mmj2 development.

I'm currently using a Mel O'Cat release that's no longer available online, because he improved the search page in that release. I believe it's good enough to write long and complex proofs.

I've always been of the opinion that the most effective strategy would have been to improve mmj2, rather than starting new projects.

I'd be really excited if we can come up with a better PA. I don't know anything about Rust, but I'm confident I can learn it. I hope we will set up a well-defined/isolated IDE: IMHO, the biggest problem with mmj2 is that it depends too heavily on the JVM version, the Eclipse version, and the Eclipse project version.
Recently, I tried to download the current mmj2 version, but I wasn't able to run it on either Win10 or Ubuntu. I got runtime errors, probably due to JVM version conflicts. I'm sure I could have fixed them, but I gave up and went back to my Mel O'Cat "special edition" :-)
Considering that I spend hours every day in mmj2, I can't imagine a newcomer facing these kinds of problems.

I think you can see why I'm not really interested in building on the architecture of mmj2. Sure, it has a lot of good features, but it is tied to way too many things about Java and the editor interaction itself that make it difficult to address these structural problems through ongoing maintenance. (I say this as someone who tried to do just that for a few years.) I don't think the features of mmj2 are so special that they can't be replicated elsewhere, but I also think there are some new avenues to explore, and I'm not sure I want to make an mmj2 clone in the first place. The "edit my code" interaction mode makes everything synchronous, which is not good (and also unlike every programming language IDE I've ever seen).

I completely agree with the "don't build the text editor" approach. I hope we can work on this new PA together. My main concern is that it will take a long time before it is comparable to mmj2, and until then nobody will use it for "real" work, which will slow down its development.

In that case, a good first goal would be to determine what the features of mmj2 *are*. Specifically, what features are actually used by people? There are a lot of dead relics. The main ones that come to mind are:
* unification (kind of a requirement for any proof assistant),
* filling in a step given all the type info (and also showing all matches in case some unification is possible),
* as well as the more recent "auto step" proof search, which applies any theorem matching some descent criterion (and not on a blacklist).
Anything else? There are a bunch of options in the RunParms that do stuff but I don't think any of them are really important (the definition checker excepted but that's already on the todo list). There is some documentation as well but it's a mixture of old stuff and new stuff and not very well organized. This is probably something even the less technically minded could help with.
 
Actually, even if we branch off to a different editor mode, maybe it would be good to support something roughly like mmj2 for "backward compatibility" / getting people to use it earlier so they can speed up development.

I completely agree with this. Building a good IDE is a hard and time consuming problem that we *should not* attempt. The major gains to be had are in the "proof assistant" parts. I want to call out the MM1 "tactic language" at this point, which I think is much better integrated than the JS macro language in mmj2. MM1 uses a scheme-like programming language inside "do" blocks, where you can just write stuff and they get evaluated on the spot as you type. This makes it usable for querying the system, i.e. show me the declaration/proof of "foo", what does this depend on, search for theorems matching this string, etc. Proofs are also scheme expressions, and while a lot of proofs are just big quoted literals containing the proof, you can always just start writing a program that generates the proof literal instead, or call a function that will be given the context and produces the proof. So for example the arithmetic evaluator "norm_num" is defined here ( https://github.com/digama0/mm0/blob/d07e5d64e68510095044f1eb12988f3b775c95ec/examples/peano_hex.mm1#L596-L601 ), and you can see on the succeeding lines how it is being applied to prove a bunch of basic theorems.

The downside of a scripting language like this is that they tend to be slower than native code, although I got my MM1 CI set up yesterday and the entire library still compiles in under 2 seconds so it's tolerable (and still way better than lean, which has a similar front end design to MM1).

Rust isn't a functional programming language, but the first implementation of MM1 was in Haskell and it had some memory usage and speed issues that would have required some significant concessions to imperative style to fix. I think Rust actually gets the important things right when it comes to the problems that FP languages solve, from a modeling and reasoning standpoint. I guess Scheme is a Lisp, which counts as FP? It's not particularly pure FP: at least my implementation uses references for metavariables, so they get used fairly heavily.



On Sat, Nov 21, 2020 at 2:47 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
On Nov 20, 2020, at 3:02 PM, Mario Carneiro <di....@gmail.com> wrote:
I value metamath's many verifiers; it's a great thing that a new verifier can be done as an afternoon project. But a proper proof assistant needs a lot more infrastructure to provide a high quality user experience, and the community is small enough as it is without additional fragmentation. We've managed to successfully collaborate on the set.mm github repo, but to date no verifier has received significant attention from >2 contributors, except mmj2 which has a couple drive-by contributions beyond the work by myself and Mel O'Cat (and I'm not actively developing the project anymore). There is no metamath verifier I would say is in a particularly healthy state of development.

Quick nit: I would distinguish “verifier” from “proof assistant” (as we already often do). I also value Metamath’s many verifiers. I don’t think the *verifiers* are in a bad state; they work just fine... but that’s mostly because our expectations for verifiers are limited.

I should amend "verifier" -> "verifier or proof assistant" in the last two sentences. Neither the verifiers nor the proof assistants are being developed. New verifiers occasionally come out, but there isn't a pressing need for them; it's mostly because a verifier is an easy thing to do as a personal project. You can argue that verifiers don't need upkeep, although most of them could still be improved by implementing more of metamath's optional features, but I agree the main problem is that the *proof assistants* aren't being developed.
 
Some of the hackishness in Metamath-exe’s web page generation is because C is *terrible* at string handling. Metamath-exe is also relatively slow because it’s single threaded; if we could run the generators in parallel we could *dramatically* reduce HTML generation time.

That all sounds doable, assuming we can get a handle on the web page generation task itself.

For the more advanced tasks, one design question that comes up early is whether to support "ungrammatical" databases. On the one hand, the metamath spec considers such databases valid, so they can't be rejected (unless the database contains the $j command asserting that it is grammatical). On the other hand, grammaticality enables an asymptotically more efficient structural storage (trees instead of strings), and also makes it possible to perform HTML construction in a saner way, where expressions are properly grouped rather than just appending tokens and hoping it looks well formed at the end. (This is where a lot of the hacks in metamath.exe come from.) Note that set.mm and iset.mm are grammatical, as well as most but not all of the smaller example databases. (I think miu.mm and lof.mm are the main counterexamples.) Perhaps we can degrade to a simpler verifier-only version in case the DB is not grammatical.

I think only supporting trees is quite reasonable for a proof assistant. Almost all real metamath work uses the “grammatical” databases (is there a better term?). Degrading to a “verifier-only” version is a fine long-term plan. Supporting that could even be long delayed because there are many verifiers that can already handle “non-grammatical” databases; we don’t really need another one right now.

FYI I defined the term "grammatical database" in my paper "Models for Metamath" and usually use it to refer to the combination of unambiguous grammar and context free productions for grammatical type codes (which is what you need to ensure that parsing works and has the intended meaning).
 
Here are some traits I’d like to see in a better Metamath proof assistant:
1. Easily usable as a programmatic *library* from other languages. I should be able to import the library from most any other language, then use it to load databases, do queries, etc. The higher-level capabilities should be implemented by calling that interface & ideally be callable as well. That would make it much easier to build other things.

I like this idea. Rust makes it easy to expose libraries to other programs via crates.io, and I've considered doing this for mm0-rs, although I don't have a particularly clear conception of what the most useful hooks would be. Perhaps this can be tackled as the project gets going.
 
2. Implemented in a memory-safe language, and preferably statically typed. I like the confidence this could provide.

I think this is the main reason it's best to write in something like C++ or its better cousin Rust - it's a big project so you want static types, modules, namespaces and all that jazz, but it's also performance sensitive so you want as much native code as you can get. (Java is okay in this regime, but these days I don't really see any reason to pick it over alternatives.)
 
3. Has a small kernel implementing “legal actions”, which is distinguished from the many tactics it includes that allow trying things out without producing an invalid state as long as the tactic obeys simple static rules (the state may be *useless* but not *wrong*).

In MM1 this is mostly implemented through functions on the "Environment" (the set of all proven theorems so far), which only does this work at the unit of entire theorems. Tactics are permitted to construct arbitrary proof terms and are expected to only produce well formed proof terms but you get a runtime error if not. Unlike lean and other LCF-style provers, I don't make any significant attempt to isolate the "kernel", because the source of truth is an external verifier. (If this verifier / proof assistant is to be a proper, trustworthy verifier, the verifier part should probably be isolated from the rest of the system.)
 
4. Support for forwards *and* backwards reasoning at least, and ideally middle-out, in its proof assistant. Some proof assistants can only go backwards; I really appreciate mmj2’s ability to do forwards, backwards, and even middle-out.

I'm not really sure where this impression comes from. I don't think I know any system that doesn't have some way of constructing proofs in all directions.

In MM1, proofs are provided as "refine scripts", which look basically like terms, where a theorem is the head of an application and the arguments are the subproofs. If you want to do a subproof, you can use the "have" tactic, where you say (have 'h $ type $ 'subproof) where $ type $ is the statement of the theorem and 'subproof is the proof. You can also use _ as the proof so that it becomes a new goal and you can prove it in your own time. 'h is the name of the newly proved step, which you can later refer to in the main part of the proof.

This is somewhat different from mmj2 style, where every step is on equal footing; here most steps have no names, so step reuse is confined to just a few important statements. (However, note that internally all steps are deduplicated, so any step that is proved twice will result in only one step being produced.)
 
5. Lots of automation (eventually).

I think the best way to get automation is to have a language that lets end users write the automation. That's worked quite well for lean, and MM1 uses many of the same techniques as I mentioned. There is of course also general purpose automation that can be implemented in the prover itself, but it's pretty hard to say anything about metamath databases in general. I have some abortive work on an FOL mode for MM1, where the tactic language itself gains some knowledge of the classical logical connectives so that you can use a lambda-calculus-like syntax to do proofs (this is bikesheddable though; lean uses tactics like "intro" and "cases" to do logical manipulations and that's an option too).

6. Ideally, it should include the ability to easily call out to other components written in other languages to help automation, e.g., the ability to call out to a machine learning module or some such. There are existing tools that can prove some claims; if they could be called and could then inject their proofs into a Metamath database, that’d be awesome.

I think this ties in to the earlier point about using it as a library. Interop is good.

7. Optional/lower importance: Perhaps enable simpler handling of parentheses within the human UI. It’d be nice to be able to use parens without surrounding each one with whitespace. I don’t think supporting precedence is important (it might even be undesirable), but it would be nice to simplify handling many parens. That may be challenging, since there are a number of constants with ( or ) in their name in set.mm such as (/) and (,).

One option would be to allow introducing new notations, for example "empty" for (/), that get baked out in the final proof. That can be part of a more comprehensive syntax mapping to something more flexible; it would be nice to get something a bit more like a regular programming language syntax. MM1 suggests a number of modifications in this direction but I'm not wedded to the particular design decisions there. We can even do like lean and go full unicode.

Rust is an interesting choice. It’s really easy to create a library others can call with Rust (#1), it’s memory-safe & static (#2), and it’d be easy to implement a small kernel (#3). It’s blazingly fast & easy to parallelize. If the goal is to implement something that can automatically do a lot, that’d be a good start.

I’ve played with Rust but not tried to build a proof assistant system with it. Would that be excessively hard? It’s true that Rust has lots of nice functional language syntactic sugar that makes some constructs easy. However, Rust doesn’t have a built-in garbage collector; its borrow system often makes that unnecessary, and that’s one reason why it plays nicely with other languages, but would we have to fight Rust too much because of that? Or is that not really an issue? Rust doesn’t natively like graphs very much (unless you disable its safeties); is that not a real problem in this case, or do alternatives such as using indices (e.g., https://smallcultfollowing.com/babysteps/blog/2015/04/06/modeling-graphs-in-rust-using-vector-indices/ ) resolve that concern?

Obviously as the author of mm0-rs I can speak to the effectiveness of Rust for writing proof assistants. For storing terms, I usually opt for a recursive struct using reference-counted pointers; this limits you to acyclic data structures but that's usually not a problem for a proof language. To store a "baked" proof that is not being manipulated and is fully deduplicated, I use a combination: the expression is really a tuple (Vec<Node>, Node) where Node is a recursive struct with Arcs as mentioned, but with a constructor Ref(i) that can be used to refer to a node on the "heap" (the Vec<Node>), and elements in the heap can similarly refer to earlier vectors on the heap. (See https://github.com/digama0/mm0/blob/d07e5d64e68510095044f1eb12988f3b775c95ec/mm0-rs/src/elab/environment.rs#L135-L158 .)

This is useful to explicitly represent subterm sharing, so that you can have a highly redundant proof (for example if each step i+1 refers to step i twice then the unshared proof will be exponential size) that you can still perform operations on (unification, type checking, etc) in linear time. It's possible to do the same thing with just Arcs, but you have to keep a hash table of visited nodes and if you forget and just recurse naturally then the entire system will grind to a halt. A lot of proof assistants suffer from this problem (including metamath.exe!)
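
(A stripped-down sketch of that layout, simplified from the description above rather than copied from mm0-rs, looks roughly like this.)

    // A "baked", fully deduplicated expression: a heap of shared subterms
    // plus a head node. Ref(i) points at heap[i], and heap entries may only
    // refer to earlier entries, so everything stays acyclic and can be
    // processed (unification, type checking, ...) in one linear pass.
    enum Node {
        Var(usize),                              // variable index
        App { term: usize, args: Box<[Node]> },  // term constructor + subterms
        Ref(usize),                              // shared subterm on the heap
    }

    struct Expr {
        heap: Vec<Node>,
        head: Node,
    }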
 
One problem with Rust is that there’s no built-in REPL. If it’s accessible via an easily imported library that may be a non-problem (e.g., make it easy to import into Python and use Python’s REPL).

As much as I like rust I don't think it lends itself well to a REPL, in the sense that the rust language shouldn't be typed line by line, it benefits a lot from scoping. MM1 uses Scheme instead for the REPL stuff, but all the heavy lifting is done in Rust.

I might be willing to help out if it’s Rust. But I have only fiddled with Rust, not used it seriously for this kind of application, so it might be useful to hear whether it’s a decent match for the purpose. To be honest, I’ve been looking for an excuse to delve further into Rust, and this might be a good excuse :-). You’ve inspired me; I think I’ll go de-rust my Rust in case this goes anywhere.

However: Let’s imagine that the implementation is complete. What would interacting with it look like? What is its human UI? What is an example of a rough-draft library API?

Since most of my experience in this area is coming from MM1 and the mm0-rs implementation, that would be my default answer to such questions, unless others object. You interact through vscode or another LSP-enabled editor, with a relatively small file (not set.mm) containing "code" in some TBD language that I will assume looks like MM1 :) . The server runs in the background and provides point info and diagnostics. By typing directives in a do block you can use it like a REPL. At the top of the file is likely "import "set.mm";" which says to preload set.mm and base the current development on it. (Perhaps it should instead be something like "altering "set.mm";" if the goal of the file is to construct an alteration to set.mm that may involve inserting theorems in places, renaming things and other such stuff.) The user can add lines to adjust set.mm notation to their preference, and the language can come with some presets that can be included so that we can collect an approved set of alterations for set.mm.

If you want to add a new theorem, you can declare one in the file, with a notation like MM1's but possibly with some modifications to support DV conditions that are not expressible in MM0 (for example if two set variables are not necessarily disjoint). You can write a literal expression if the proof is simple enough, or you can call a tactic, or you can pop open a (focus) block as in MM1 to enter tactic mode where you evolve a goal state through the use of imperative commands like applying theorems, switching between goals, calling automation and so on.

At the end of the day, you get a file that represents an alteration to set.mm, or a new .mm file. There is some command in the editor, or you can use the console version of the program such as "mm0-rs compile foo.mm1", and it will construct a new mm file as requested, making minimal formatting changes to the file and otherwise adding your new MM theorems, making some attempt to format things the way they would be formatted in a by hand edit. The user can then make their own formatting tweaks if they want and PR to set.mm repo.

For a library API, I think we should expose the main data structures, with programmatic versions of the things I've mentioned. In MM1 there are two main data structures: the Environment and the Elaborator. The Elaborator is the state that changes in the course of processing an .mm1 file, while the Environment is the "proof" content of the file; the elaborator is discarded once the file is finished and the "result" is the final Environment. The Elaborator stores things like the current status of file-local settings, the errors we need to report, and the file AST. I think most of the elaborator functions, corresponding to individual directives, tactics, and statements in the file, can be exposed as API methods on the Environment to external code. That way external code can say things like "add this notation", "add this theorem", "check this theorem" and so on. Additionally some proof state specific stuff should also be exposed so that external programs can act on a given proof state, allowing for sledgehammer-like tools.
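
(To make that concrete, a hypothetical shape for such a library surface might be the following; every name here is an assumption for illustration, not an existing interface in mm0-rs or anywhere else.)

    // The Environment holds the "proof" content; methods mirror the
    // directives described above. Error and return types are placeholders.
    pub struct Environment { /* theorems, notations, grammar, ... */ }
    pub struct ProofState { /* goals, metavariables, ... */ }

    impl Environment {
        /// Load a database and return the resulting environment.
        pub fn from_mm_file(path: &str) -> Result<Environment, String> { todo!() }

        /// "Add this notation", "add this theorem", "check this theorem", ...
        pub fn add_notation(&mut self, decl: &str) -> Result<(), String> { todo!() }
        pub fn add_theorem(&mut self, label: &str, stmt: &str, proof: &str) -> Result<(), String> { todo!() }
        pub fn check_theorem(&self, label: &str) -> Result<(), String> { todo!() }

        /// Expose proof states so external tools (a sledgehammer, an ML
        /// module) can act on a goal and hand back a proof.
        pub fn open_proof(&self, stmt: &str) -> Result<ProofState, String> { todo!() }
    }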



On Sat, Nov 21, 2020 at 3:34 PM drjonp...@gmail.com <drjonp...@gmail.com> wrote:
" MM1 builds a lot of lean-esque features into a metamath-like foundation, and I would say as a proof assistant it's a smashing success, in usability if perhaps not in users. "

One question re this, would some tutorials for helping people use MM1 accomplish quite a lot? How hard is it to transfer work done in MM1 to a classic mm proof? I mean if there is already a PA which you are happy with there then it feels a bit like others could just learn to use that. 

One issue is that MM1 is a proof assistant for MM0, it doesn't currently support MM. One way I would like to support this is to be able to "import "foo.mm";" in an MM1 file, which will convert the MM theorems into MM1 and import them into the current file; this will make it easy to build on set.mm from within an MM0 development, but it doesn't "give back" to the MM database, it is a one way conversion (at least so far), and even if the reverse conversion were added it would probably not be the inverse map, resulting in ugly proofs after round-tripping.

I think that it may well be possible to extend mm0-rs to allow swapping out the back end so that it is actually working with MM, and then most of the additions mentioned here are incremental changes on that baseline. The main goal with this new verifier is to attempt to address the needs of metamath users / set.mm contributors and not rock the boat so much by replacing the foundations as is done with MM0, but I agree that this could be incorporated as a sub-mode of mm0-rs usage.
 
David's tutorials for mmj2 were super helpful for me for getting into using it, something similar might be cool.
 
There are currently no tutorials for MM1, only a few scattered demos and a spec document (https://github.com/digama0/mm0/blob/master/mm0-hs/mm1.md). It's starting to get to a relatively stable place, so I don't mind moving to a more professional packaging if people are interested. There are probably a number of glaring problems for people who are not me that need to be addressed though, so I would encourage folks to try it out, get hopelessly stuck and then report your experience as a basis for documentation and improvements.



On Sat, Nov 21, 2020 at 6:01 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
I should add that I think it’d be valuable if a proof verifier/proof assistant was invokable through just a web browser. It doesn’t need to be full speed, but one you can try out can be really helpful.

If Rust is used, there’s an obvious way to do that: WebAssembly. Compile the Rust library into WebAssembly, then invoke it from JavaScript. That would mean that a single library could be used in different circumstances.

This is definitely on my radar. I don't even think it would be that hard to do with mm0-rs, although one has to pick a web IDE and make that work which is a bit tricky. Another cool idea I'm playing with is explorable MM1 documents, where you can view more information on proofs by clicking on them, kind of like the metamath web pages but more interactive and generated on the fly. I think all the technical factors to make this happen, even to generate everything from CI, are in place; there are just a lot of open design questions.

Mario

ML

Nov 21, 2020, 6:09:42 PM
to Metamath
Normally I'd be more conservative about starting from scratch vs improving the existing tools, but MMJ2 is very broken, so I'd be willing to participate in a rewrite.

(I'd like to mention that MMJ2 being this broken is partly Java's fault, for no longer being careful about backwards compatibility. Undo/redo is broken since Java 9, and as far as I can tell it would need to be reimplemented completely, because a Swing feature it relied on was removed. All the macros will also be broken eventually, because the Nashorn engine, which they rely on, has been removed in Java 15. And I suspect there's more examples of this.)

Still, much like how it's better to outsource text editing to an existing editor, it's better not to start from scratch if we can help it. So if Rust is really the language we decide to use, I'd recommend building off of the existing Rust verifier if possible. I assume that https://github.com/sorear/smetamath-rs is the Rust verifier that's been mentioned?

---

Improving Metamath tooling is something I definitely want to do. I was already working towards improving MMJ2: in my personal fork I fixed all the linter warnings, ported some of the old batch file unit tests, and also started porting the macros to use the Rhino JavaScript engine (in preparation for Nashorn's removal).

No one seems to be explicitly saying that they will start this project. I'm not sure I can take the lead, because I don't know much yet about how to implement a proof assistant, and because this is meant to be a community project but I haven't participated much in the community. Still, I've used Rust before, and as a first step, I could try to translate MMJ2's LRParser to smetamath-rs, and report on my progress here in a few weeks.

Mario Carneiro

Nov 21, 2020, 6:22:38 PM
to metamath
On Sat, Nov 21, 2020 at 6:09 PM 'ML' via Metamath <meta...@googlegroups.com> wrote:
Still, much like how it's better to outsource text editing to an existing editor, it's better not to start from scratch if we can help it. So if Rust is really the language we decide to use, I'd recommend building off of the existing Rust verifier if possible. I assume that https://github.com/sorear/smetamath-rs is the Rust verifier that's been mentioned?

As I mention in the other post, I think it is most likely to share more code with mm0-rs than a pure verifier like smm3, although it's possible we can remix some of the ideas from that. There are a lot of cool tricks in that program that power its best in class running time, but optimized string substitution doesn't translate well to a robust proof assistant, where you want to manipulate the data in a lot of ways beyond simple unification. Probably we should just leave smm3 as is and use a more expression based representation for terms.

Writing a verifier from scratch is not really a problem. It's everything else that is difficult.
 
Improving Metamath tooling is something I definitely want to do. I was already working towards improving MMJ2: in my personal fork I fixed all the linter warnings, ported some of the old batch file unit tests, and also started porting the macros to use the Rhino JavaScript engine (in preparation for Nashorn's removal).

No one seems to be explicitly saying that they will start this project. I'm not sure I can take the lead, because I don't know much yet about how to implement a proof assistant, and because this is meant to be a community project but I haven't participated much in the community. Still, I've used Rust before, and as a first step, I could try to translate MMJ2's LRParser to smetamath-rs, and report on my progress here in a few weeks.

If there is enough interest from others that I'm not the only one on the project, I don't mind taking the lead here. If it incorporates common code from mm0-rs then that will be a good excuse for me to split it off to a new project (the MM0 monorepo is getting pretty hefty) and maybe also expose a library (depending on whether this new verifier is literally mm0-rs under some settings or a separate executable that just links against a library with common code from mm0-rs).

That said, I am nearing the end of my PhD and probably shouldn't be devoting large quantities of time to something that doesn't move my thesis forward. :P

Mario

vvs

Nov 21, 2020, 7:10:14 PM
to Metamath
Rust isn't a functional programming language, but the first implementation of MM1 was in Haskell and it had some memory usage and speed issues that would have required some significant concessions to imperative style to fix.

Too bad for Haskell, I'd guess. I don't think there is no solution; here's how Lean 4 deals with a similar problem: https://arxiv.org/abs/2003.01685

David A. Wheeler

Nov 21, 2020, 7:48:14 PM
to Metamath Mailing List


> On Nov 21, 2020, at 5:14 PM, 'ML' via Metamath <meta...@googlegroups.com> wrote:
>
> Normally I'd be more conservative about starting from scratch vs improving the existing tools, but MMJ2 is very broken, so I'd be willing to participate in a rewrite.
>
> (I'd like to mention that MMJ2 being this broken is partly Java's fault, for no longer being careful about backwards compatibility. Undo/redo is broken since Java 9, and as far as I can tell it would need to be reimplemented completely, because a Swing feature it relied on was removed. All the macros will also be broken eventually, because the Nashorn engine, which they rely on, has been removed in Java 15. And I suspect there's more examples of this.)

I noticed that as well. The removal of the Nashorn engine is especially nasty, because it will *require* a significant modification of mmj2 for it to work on newer versions of Java :-(.

--- David A. Wheeler

Mario Carneiro

Nov 21, 2020, 7:50:46 PM
to metamath
On Sat, Nov 21, 2020 at 7:10 PM vvs <vvs...@gmail.com> wrote:
Rust isn't a functional programming language, but the first implementation of MM1 was in Haskell and it had some memory usage and speed issues that would have required some significant concessions to imperative style to fix.

Too bad for Haskell I'd guess. I don't think that there is no solution, here's how Lean 4 deals with similar problem: https://arxiv.org/abs/2003.01685

Yeah, I was somewhat involved in the research behind that paper. I think it's trying too hard to fit imperative style into a proofy language; it's difficult to be direct with that approach and there are certain kinds of constructions that are impossible with it (like getting the value you store in a pointer-keyed hash table back out later). You still need a good compiler to make the code efficient - this was a big problem in lean 3 and lean 4 is solving it at great development expense. Here I think we do better to just let the established programming languages do what they were designed to do. Research on a provably correct programming language is an entirely different matter, which I am working on (https://github.com/digama0/mm0/blob/master/mm0-rs/mmc.md), but it's really a separate question from a proof assistant, which is just a regular program to help people write proofs.

David A. Wheeler

Nov 21, 2020, 8:11:17 PM
to Metamath Mailing List


> On Nov 21, 2020, at 6:22 PM, Mario Carneiro <di....@gmail.com> wrote:
> If there is enough interest from others that I'm not the only one on the project, I don't mind taking the lead here. If it incorporates common code from mm0-rs then that will be a good excuse for me to split it off to a new project (the MM0 monorepo is getting pretty hefty) and maybe also expose a library (depending on whether this new verifier is literally mm0-rs under some settings or a separate executable that just links against a library with common code from mm0-rs).
>
> That said, I am nearing the end of my PhD and probably shouldn't be devoting large quantities of time to something that doesn't move my thesis forward. :P

:-). If there’s a way to combine them that’d be great.

> In that case, a good first goal would be to determine what the features of mmj2 *are*. Specifically, what features are actually used by people? There are a lot of dead relics. The main ones that come to mind are:
> * unification (kind of a requirement for any proof assistant),
> * filling in a step given all the type info (and also showing all matches in case some unification is possible),
> * as well as the more recent "auto step" proof search, which applies any theorem matching some descent criterion (and not on a blacklist).
> Anything else? There are a bunch of options in the RunParms that do stuff but I don't think any of them are really important (the definition checker excepted but that's already on the todo list). There is some documentation as well but it's a mixture of old stuff and new stuff and not very well organized. This is probably something even the less technically minded could help with.

Unification with past/next steps, and unification given the name of an existing theorem/axiom, are definitely minimums, but they’re also not too challenging to implement. Those are the keys for me. That includes, when specifying something, immediately creating its hypotheses & supporting work variables (or something like them for identifying “don’t know yet”).

Important item: quietly reordering rules when the proposed order cannot be unified but another order can be. The simple automation done by a leading “!” is really helpful. The “replace this step by that step” feature, e.g. “d1::#1”, is not used often but is very helpful when needed.

Being able to double-click on any step & show what *could* be unified (and what that would produce) is useful. I don’t use it as much as you might think.

I don’t really use mmj2’s fancy search system much. That could be added later & kept very simple to start.

The RunParms are mostly a pain, partly because they’re a nonstandard way to do things. Usually you just pass the filename of the file to edit. Optionally you use options like -? or --long_name, and/or an init file.

> Actually, even if we branch off to a different editor mode, maybe it would be good to support something roughly like mmj2 for "backward compatibility" / getting people to use it earlier so they can speed up development.


I think that *would* be a good idea, especially if as much as possible it were built on calls to a library. At least that would provide a familiar alternative, and a reason to use it instead. Hopefully it would use a text editor that can help pair parens :-).

--- David A. Wheeler

Mario Carneiro

unread,
Nov 21, 2020, 8:35:11 PM11/21/20
to metamath
On Sat, Nov 21, 2020 at 8:11 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:


> On Nov 21, 2020, at 6:22 PM, Mario Carneiro <di....@gmail.com> wrote:
> If there is enough interest from others that I'm not the only one on the project, I don't mind taking the lead here. If it incorporates common code from mm0-rs then that will be a good excuse for me to split it off to a new project (the MM0 monorepo is getting pretty hefty) and maybe also expose a library (depending on whether this new verifier is literally mm0-rs under some settings or a separate executable that just links against a library with common code from mm0-rs).
>
> That said, I am nearing the end of my PhD and probably shouldn't be devoting large quantities of time to something that doesn't move my thesis forward. :P

:-). If there’s a way to combine them that’d be great.

> In that case, a good first goal would be to determine what the features of mmj2 *are*. Specifically, what features are actually used by people? There are a lot of dead relics. The main ones that come to mind are:
> * unification (kind of a requirement for any proof assistant),
> * filling in a step given all the type info (and also showing all matches in case some unification is possible),
> * as well as the more recent "auto step" proof search, which applies any theorem matching some descent criterion (and not on a blacklist).
> Anything else? There are a bunch of options in the RunParms that do stuff but I don't think any of them are really important (the definition checker excepted but that's already on the todo list). There is some documentation as well but it's a mixture of old stuff and new stuff and not very well organized. This is probably something even the less technically minded could help with.

Unification with past/next steps, and unification given the name of an existing theorem/axiom, are definitely minimums, but they’re also not too challenging to implement. Those are the keys for me. That includes, when specifying something, immediately creating its hypotheses & supporting work variables (or something like them for identifying “don’t know yet”).

Yep, this is all what I would call "unification". When you apply a theorem, metavariables are created for all the variables in the theorem. The context against which the theorem is applied, or the known hypotheses being inserted into the application, yield equality constraints on the types, aka unification constraints, which are resolved by a unification procedure that assigns to the metavariables. MM1 does all of this from day 1.
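For illustration, a minimal first-order unification sketch in Rust (this is not MM1 or mm0-rs code; the Expr type, the constructor names, and the example in main are hypothetical, and the occurs check is omitted for brevity):

use std::collections::HashMap;

// A metavariable waiting to be solved, or a term constructor applied to arguments.
#[derive(Clone, Debug, PartialEq)]
enum Expr {
    MVar(u32),
    App(String, Vec<Expr>),
}

// Follow the assignment chain until we reach an unassigned metavariable or an App node.
fn resolve(e: &Expr, subst: &HashMap<u32, Expr>) -> Expr {
    match e {
        Expr::MVar(m) => match subst.get(m) {
            Some(e2) => resolve(e2, subst),
            None => e.clone(),
        },
        _ => e.clone(),
    }
}

// Solve the constraint a = b by extending `subst`; returns false on a clash.
fn unify(a: &Expr, b: &Expr, subst: &mut HashMap<u32, Expr>) -> bool {
    let (a, b) = (resolve(a, subst), resolve(b, subst));
    if a == b {
        return true;
    }
    match (&a, &b) {
        (Expr::MVar(m), _) => { subst.insert(*m, b.clone()); true }
        (_, Expr::MVar(m)) => { subst.insert(*m, a.clone()); true }
        (Expr::App(f, xs), Expr::App(g, ys)) =>
            f == g && xs.len() == ys.len()
                && xs.iter().zip(ys).all(|(x, y)| unify(x, y, subst)),
    }
}

fn main() {
    // Unify (-> ?0 ?1) against (-> a b): the unifier assigns ?0 := a and ?1 := b.
    let goal = Expr::App("->".into(), vec![Expr::MVar(0), Expr::MVar(1)]);
    let known = Expr::App("->".into(),
        vec![Expr::App("a".into(), vec![]), Expr::App("b".into(), vec![])]);
    let mut subst = HashMap::new();
    assert!(unify(&goal, &known, &mut subst));
    println!("{:?}", subst);
}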

Important item: quietly reordering rules when the proposed order cannot be unified but another order can be. The simple automation done by a leading “!” is really helpful. The “replace this step by that step” feature, e.g. “d1::#1”, is not used often but is very helpful when needed.

I didn't include in the list mmj2 solutions to mmj2 problems. Proof order is an mmj2-specific problem. If proofs are terms, then you are just passing terms around, and you just need to ensure that the term is not cyclic, which can be done as appropriate. Order of steps isn't really a thing until it actually gets "baked" into an MM proof.
 
Being able to double-click on any step & show what *could* be unified (and what that would produce) is useful. I don’t use it as much as you might think.

In vscode, this is supported via autocomplete. You can write a _ anywhere in your proof and mm0-rs will supply the expected type; you can then use ctrl-space and it gives a list of theorems. I had always intended to support listing *applicable* theorems here, but I never got around to implementing it.
 
The RunParms are mostly a pain, partly because they’re a nonstandard way to do things. Usually you just pass the filename of the file to edit. Optionally you use options like -? or --long_name, and/or an init file.

Yeah I don't see anything worth salvaging there. It's just a complicated way to set options. With MM1 style interaction you can load up your .mm1 file with configuration info, whether that info is standardized for everyone using set.mm or personal to you like a "worksheet" file would be.
 
> Actually, even if we branch off to a different editor mode, maybe it would be good to support something roughly like mmj2 for "backward compatibility" / getting people to use it earlier so they can speed up development.

I think that *would* be a good idea, especially if as much as possible it were built on calls to a library. At least that would provide a familiar alternative, and a reason to use it instead. Hopefully it would use a text editor that can help pair parens :-).

I don't know if you have looked at mm1 proof scripts, but they are mostly linear, using the (f @ g x) notation liberally to make most arguments work out as a sequence of theorems with one hypothesis. Marnix would be proud. :) Because anything off the main-line requires extra parentheses for the subproof, several MM theorems had to be slightly changed to put the hardest hypothesis last, and there are even two theorems for "syl" which differ only in hypothesis order.

theorem syl (h1: $ b -> c $) (h2: $ a -> b $): $ a -> c $ = '(mpd h2 (a1i h1));
theorem rsyl (h1: $ a -> b $) (h2: $ b -> c $): $ a -> c $ = '(syl h2 h1);

Usually, (syl T ...) is used to apply theorem T in a context, while (rsyl C ...) is used to apply a context manipulation theorem C (like simpl) to the context, changing it without changing the "goal". Even though they are the same theorem from the MM perspective, the linearization makes them "feel" rather different.

Mario

David A. Wheeler

unread,
Nov 21, 2020, 8:42:46 PM11/21/20
to Metamath Mailing List

On Sat, Nov 21, 2020 at 12:47 PM vvs <vvs...@gmail.com> wrote:
I sense some confusion here. Let's define our dictionary first.

An IDE is just a text editor with UI for syntax highlighting, autocompletion, code and documentation searching, and integrating other stuff. It also has to be fast and robust. A verifier should just check soundness of definitions, well-formedness of terms, and correctness of proofs. Working incrementally is a big advantage. A proof assistant should enable automated synthesis of proofs by inferring missing information, automatically coercing terms, filling in gaps, and making good suggestions (according to Andrej Bauer).

mmj2 has a very basic IDE with an integrated verifier. It's also a very rudimentary PA, as it has only a simple unification tool for synthesis. Granted, there are some basic tactics written in JS, but it isn't suited at all to be programmed by an average user. It's also buggy, slow, and monolithic, and it can't be easily used from other projects. I believe that's what prompted Mario to start this thread in the first place, right? There is also an established tradition of writing PAs in a functional programming language.

I completely agree with this. Building a good IDE is a hard and time consuming problem that we *should not* attempt. The major gains to be had are in the "proof assistant" parts. I want to call out the MM1 "tactic language" at this point, which I think is much better integrated than the JS macro language in mmj2. MM1 uses a scheme-like programming language inside "do" blocks, where you can just write stuff and they get evaluated on the spot as you type. This makes it usable for querying the system, i.e. show me the declaration/proof of "foo", what does this depend on, search for theorems matching this string, etc. Proofs are also scheme expressions, and while a lot of proofs are just big quoted literals containing the proof, you can always just start writing a program that generates the proof literal instead, or call a function that will be given the context and produces the proof. So for example the arithmetic evaluator "norm_num" is defined here ( https://github.com/digama0/mm0/blob/d07e5d64e68510095044f1eb12988f3b775c95ec/examples/peano_hex.mm1#L596-L601 ), and you can see on the succeeding lines how it is being applied to prove a bunch of basic theorems.

The downside of a scripting language like this is that they tend to be slower than native code, although I got my MM1 CI set up yesterday and the entire library still compiles in under 2 seconds so it's tolerable (and still way better than lean, which has a similar front end design to MM1).

Rust isn't a functional programming language, but the first implementation of MM1 was in Haskell and it had some memory usage and speed issues that would have required some significant concessions to imperative style to fix. I think Rust actually gets the important things right when it comes to the problems that FP languages solve, from a modeling and reasoning standpoint. I guess scheme is a lisp which is FP? It's not a particularly pure FP: at least my implementation uses references for metavariables so they get used fairly heavily.

Providing a Scheme-like interface is a reasonable approach. It’s easy to implement & provides a lot of flexibility. The big problem is that MANY people hate their syntax, e.g., “lots of irritating silly parentheses”. As you know, I think it’d be *much* better if it supported neoteric syntax (at least), so that you can use syntax like f(5) and {2 + 3}, as that eliminates the main complaint about Lisps while retaining all their benefits.

If you’re picky about types, you could even trivially implement a preprocessor to convert the Scheme-like language into Rust, and then compile it into the result. Then, once you’re happy with something, you could “freeze” it into a faster Rust implementation.

Scheme is a member of the Lisp family, and a *subset* of Scheme/Lisp is definitely in the set of FP languages (indeed, it’s the ancestor of all the rest). It’s not pure, but that doesn’t make it a bad thing.


Here are some traits I’d like to see in a better Metamath proof assistant:
1. Easily usable as a programmatic *library* from other languages. I should be able to import the library from most any other language, then use it to load databases, do queries, etc. The higher-level capabilities should be implemented by calling that interface & ideally be callable as well. That would make it much easier to build other things.

I like this idea. Rust makes it easy to expose libraries to other programs via crates.io, and I've considered doing this for mm0-rs, although I don't have a particularly clear conception of what the most useful hooks would be. Perhaps this can be tackled as the project gets going.

I expect that’s the only way to tackle it. The key is to agree early on that “this is a library, with a minimal UI on top”. If you can change the UI without changing what’s underneath, then that’s a big success.

 2. Implemented in a memory-safe language, and preferably statically typed. I like the confidence this could provide.

I think this is the main reason it's best to write in something like C++ or its better cousin Rust - it's a big project so you want static types, modules, namespaces and all that jazz, but it's also performance sensitive so you want as much native code as you can get. (Java is okay in this regime, but these days I don't really see any reason to pick it over alternatives.)

Neither C nor C++ are memory-safe. I’d prefer to work in memory-safe land.

Java has garbage collection & semantics that are very familiar to many (e.g., object orientation & references).


3. Has a small kernel implementing “legal actions”, distinguished from the many tactics built on top of it; tactics can be used to try things out without ever producing an invalid state, as long as they obey simple static rules (the state may be *useless* but not *wrong*).

In MM1 this is mostly implemented through functions on the "Environment" (the set of all proven theorems so far), which only does this work at the unit of entire theorems. Tactics are permitted to construct arbitrary proof terms and are expected to only produce well formed proof terms but you get a runtime error if not. Unlike lean and other LCF-style provers, I don't make any significant attempt to isolate the "kernel", because the source of truth is an external verifier. (If this verifier / proof assistant is to be a proper, trustworthy verifier, the verifier part should probably be isolated from the rest of the system.)

That’s definitely different than an LCF-style prover. Using a separate trustworthy verifier instead does make sense, though wouldn’t that delay detection of problems?

Are there really advantages to *not* having a kernel? I really do want to know.

... 
5. Lots of automation (eventually).

I think the best way to get automation is to have a language that lets end users write the automation.

Agreed.

I’ve played with Rust but not tried to build a proof assistant system with it. Would that be excessively hard? ...
Obviously as the author of mm0-rs I can speak to the effectiveness of Rust for writing proof assistants. For storing terms, I usually opt for a recursive struct using reference-counted pointers; this limits you to acyclic data structures but that's usually not a problem for a proof language. To store a "baked" proof that is not being manipulated and is fully deduplicated, I use a combination: the expression is really a tuple (Vec<Node>, Node) where Node is a recursive struct with Arcs as mentioned, but with a constructor Ref(i) that can be used to refer to a node on the "heap" (the Vec<Node>), and elements in the heap can similarly refer to earlier entries on the heap. (See https://github.com/digama0/mm0/blob/d07e5d64e68510095044f1eb12988f3b775c95ec/mm0-rs/src/elab/environment.rs#L135-L158 .)

This is useful to explicitly represent subterm sharing, so that you can have a highly redundant proof (for example if each step i+1 refers to step i twice then the unshared proof will be exponential size) that you can still perform operations on (unification, type checking, etc) in linear time. It's possible to do the same thing with just Arcs, but you have to keep a hash table of visited nodes and if you forget and just recurse naturally then the entire system will grind to a halt. A lot of proof assistants suffer from this problem (including metamath.exe!)
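For illustration, here is a minimal Rust sketch of the two layers described above (hypothetical type and constructor names, not the actual mm0-rs definitions): a "live" recursive expression built from reference-counted pointers, and a "baked" form whose Ref(i) nodes point into a heap vector so a shared subterm is stored exactly once:

use std::sync::Arc;

// Hypothetical identifiers for term constructors and bound variables.
type TermId = u32;
type VarId = u32;

// The "live" form: a recursive, acyclic expression using reference-counted pointers.
#[derive(Debug)]
enum Expr {
    Var(VarId),
    App(TermId, Vec<Arc<Expr>>),
}

// The "baked" form: a heap of nodes plus a head node. Node::Ref(i) points at heap[i],
// and heap entries may refer to earlier entries, so traversals (unification, type
// checking, ...) stay linear in the size of the shared representation.
#[derive(Debug)]
enum Node {
    Var(VarId),
    App(TermId, Vec<Node>),
    Ref(usize),
}

#[derive(Debug)]
struct BakedExpr {
    heap: Vec<Node>,
    head: Node,
}

fn main() {
    // head = (f s s) where s = (g x) is stored once and shared.
    let s = Arc::new(Expr::App(1, vec![Arc::new(Expr::Var(0))]));
    let live = Expr::App(2, vec![s.clone(), s]);
    let baked = BakedExpr {
        heap: vec![Node::App(1, vec![Node::Var(0)])],
        head: Node::App(2, vec![Node::Ref(0), Node::Ref(0)]),
    };
    println!("{:?}\n{:?}", live, baked);
}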

Good, that’s exactly what I wanted to know. Subterm sharing is especially important, and it sounds like the solution is well in hand.


One problem with Rust is that there’s no built-in REPL. If it’s accessible via an easily imported library that may be a non-problem (e.g., make it easy to import into Python and use Python’s REPL).

As much as I like rust, I don't think it lends itself well to a REPL, in the sense that the rust language shouldn't be typed line by line; it benefits a lot from scoping. MM1 uses Scheme instead for the REPL stuff, but all the heavy lifting is done in Rust.

Makes sense. There’s some work to try to have a Rust REPL, but it looks pretty dodgy.
As I noted elsewhere, I think it should support at least neoteric-expressions (an extension of S-expressions), but with that addition the biggest problems of Scheme syntax are resolved.

I might be willing to help out if it’s Rust. But I have only fiddled with Rust, not used it seriously for this kind of application, so it might be useful to hear if it’s a decent fit for the purpose. To be honest, I’ve been looking for an excuse to delve further into Rust, and this might be a good excuse :-). You’ve inspired me; I think I’ll go de-rust my Rust in case this goes anywhere.

However: Let’s imagine that the implementation is complete. What would interacting with it look like? What is its human UI? What is an example of a rough-draft library API?

Since most of my experience in this area is coming from MM1 and the mm0-rs implementation, that would be my default answer to such questions, unless others object. You interact through vscode or another LSP-enabled editor, with a relatively small file (not set.mm) containing "code" in some TBD language that I will assume looks like MM1 :) .

I presume LSP = Language Server Protocol.

The server runs in the background and provides point info and diagnostics. By typing directives in a do block you can use it like a REPL. At the top of the file is likely "import "set.mm";" which says to preload set.mm and base the current development on it. (Perhaps it should instead be something like "altering "set.mm";" if the goal of the file is to construct an alteration to set.mm that may involve inserting theorems in places, renaming things and other such stuff.) The user can add lines to adjust set.mm notation to their preference, and the language can come with some presets that can be included so that we can collect an approved set of alterations for set.mm.

If you want to add a new theorem, you can declare one in the file, with a notation like MM1's but possibly with some modifications to support DV conditions that are not expressible in MM0 (for example if two set variables are not necessarily disjoint). You can write a literal expression if the proof is simple enough, or you can call a tactic, or you can pop open a (focus) block as in MM1 to enter tactic mode where you evolve a goal state through the use of imperative commands like applying theorems, switching between goals, calling automation and so on.

At the end of the day, you get a file that represents an alteration to set.mm, or a new .mm file. There is some command in the editor, or you can use the console version of the program such as "mm0-rs compile foo.mm1", and it will construct a new .mm file as requested, making minimal formatting changes to the file and otherwise adding your new MM theorems, making some attempt to format things the way they would be formatted in a by-hand edit. The user can then make their own formatting tweaks if they want and PR to the set.mm repo.

I think I’d need to see this in action (e.g., watch a video of someone doing this) to fully grok this, but it *seems* plausible.

For a library API, I think we should expose the main data structures, with programmatic versions of the things I've mentioned. In MM1 there are two main data structures: the Environment and the Elaborator. The Elaborator is the state that changes in the course of processing an .mm1 file, while the Environment is the "proof" content of the file; the elaborator is discarded once the file is finished and the "result" is the final Environment. The Elaborator stores things like the current status of file-local settings, the errors we need to report, and the file AST. I think most of the elaborator functions, corresponding to individual directives, tactics, and statements in the file, can be exposed as API methods on the Environment to external code. That way external code can say things like "add this notation", "add this theorem", "check this theorem" and so on. Additionally some proof state specific stuff should also be exposed so that external programs can act on a given proof state, allowing for sledgehammer-like tools.

There might need to be some work so that the data structures are easy-to-navigate & not fragile. I wonder if methods would be useful (e.g., what appear to be “values” are really computed). I would hope a REPL could query their state, and in some cases change them (though maybe only in specific ways).


One issue is that MM1 is a proof assistant for MM0; it doesn't currently support MM. One way I would like to support this is to be able to "import "foo.mm";" in an MM1 file, which will convert the MM theorems into MM1 and import them into the current file; this will make it easy to build on set.mm from within an MM0 development, but it doesn't "give back" to the MM database. It is a one-way conversion (at least so far), and even if the reverse conversion were added it would probably not be the inverse map, resulting in ugly proofs after round-tripping.

I think that it may well be possible to extend mm0-rs to allow swapping out the back end so that it is actually working with MM, and then most of the additions mentioned here are incremental changes on that baseline. The main goal with this new verifier is to attempt to address the needs of metamath users / set.mm contributors and not rock the boat so much by replacing the foundations as is done with MM0, but I agree that this could be incorporated as a sub-mode of mm0-rs usage.

You’re best suited to know how doable that is.

 
David's tutorials for mmj2 were super helpful for me for getting into using it; something similar might be cool.
 
There are currently no tutorials for MM1, only a few scattered demos and a spec document (https://github.com/digama0/mm0/blob/master/mm0-hs/mm1.md). It's starting to get to a relatively stable place, so I don't mind moving to a more professional packaging if people are interested. There are probably a number of glaring problems for people who are not me that need to be addressed though, so I would encourage folks to try it out, get hopelessly stuck and then report your experience as a basis for documentation and improvements.

Okay. I’m good at getting hopelessly stuck :-).

--- David A. Wheeler

Mario Carneiro

unread,
Nov 21, 2020, 10:15:29 PM11/21/20
to metamath
On Sat, Nov 21, 2020 at 8:42 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
Providing a Scheme-like interface is a reasonable approach. It’s easy to implement & provides a lot of flexibility. The big problem is that MANY people hate their syntax, e.g., “lots of irritating silly parentheses”. As you know, I think it’d be *much* better if it supported neoteric syntax (at least), so that you can use syntax like f(5) and {2 + 3}, as that eliminates the main complaint about Lisps while retaining all their benefits.

I'm inclined not to spend much time on this complaint. Better to spend some time with it, get a sense for the way things are put together, and *then* propose improvements. Alternate syntax is not out of the question but it needs to fit within the other design constraints of the language. In particular, MM1 is in many ways an MVP for a metamath based proof assistant, enough to help me prove my big theorems and not much more. If this is to become some big flagship product then I don't mind investing more to make it convenient, but only after the more pressing problems are dealt with.

If you’re picky about types, you could even trivially implement a preprocessor to convert the Scheme-like language into Rust, and then compile it into the result. Then, once you’re happy with something, you could “freeze” it into a faster Rust implementation.

Indeed, I don't recommend going overboard with the metaprogramming because the result isn't especially maintainable without static types. Complicated tactics should be written as plugins, either in rust or dynamically linked from another language. The main strength of the language is for micro-tactics like when two branches of a proof are almost but not quite the same, and simple pattern matching tactics like norm_num.
 
I like this idea. Rust makes it easy to expose libraries to other programs via crates.io, and I've considered doing this for mm0-rs, although I don't have a particularly clear conception of what the most useful hooks would be. Perhaps this can be tackled as the project gets going.

I expect that’s the only way to tackle it. The key is to agree early on that “this is a library, with a minimal UI on top”. If you can change the UI without changing what’s underneath, then that’s a big success.

I don't have any plans to write an alternate UI for MM0 right now, which is why there hasn't been too much movement on this front for mm0-rs. If/when not-me people start using it, I'm sure this will become a bigger concern.
 2. Implemented in a memory-safe language, and preferably statically typed. I like the confidence this could provide.

I think this is the main reason it's best to write in something like C++ or its better cousin Rust - it's a big project so you want static types, modules, namespaces and all that jazz, but it's also performance sensitive so you want as much native code as you can get. (Java is okay in this regime, but these days I don't really see any reason to pick it over alternatives.)

Neither C nor C++ are memory-safe. I’d prefer to work in memory-safe land.

Memory safety is nice but kind of a low bar. I want my programs to be correct. Within the confines of existing languages, memory safety by default is a nice-to-have but not essential to making a usable program. The main issues I have with C++ are its ridiculous complexity and crufty syntax. They are doing much better these days with encouraging memory safe programming, but Rust is just put together better.
 
Java has garbage collection & semantics that are very familiar to many (e.g., object orientation & references).

Rust's approach to handling memory feels very similar to Java as an end user, but it's quite different under the hood (the latter is obvious if you hear the press, but the former might be more surprising). You almost never have to explicitly drop things, and when you do it's usually because you need to unlock a mutex or something like that that goes beyond regular memory management.
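For illustration, a small Rust sketch of that point: values are freed automatically when they go out of scope, and the rare explicit drop is usually about releasing a lock early rather than about memory:

use std::sync::Mutex;

fn main() {
    let table = Mutex::new(vec![1, 2, 3]);

    let mut guard = table.lock().unwrap();
    guard.push(4);
    // The one common case for an explicit drop: release the mutex early,
    // before doing unrelated work that shouldn't hold the lock.
    drop(guard);

    // Everything else is cleaned up when it goes out of scope; no free() calls.
    println!("{:?}", table.lock().unwrap());
}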
3. Has a small kernel implementing “legal actions”, distinguished from the many tactics built on top of it; tactics can be used to try things out without ever producing an invalid state, as long as they obey simple static rules (the state may be *useless* but not *wrong*).

In MM1 this is mostly implemented through functions on the "Environment" (the set of all proven theorems so far), which only does this work at the unit of entire theorems. Tactics are permitted to construct arbitrary proof terms and are expected to only produce well formed proof terms but you get a runtime error if not. Unlike lean and other LCF-style provers, I don't make any significant attempt to isolate the "kernel", because the source of truth is an external verifier. (If this verifier / proof assistant is to be a proper, trustworthy verifier, the verifier part should probably be isolated from the rest of the system.)

That’s definitely different than an LCF-style prover. Using a separate trustworthy verifier instead does make sense, though wouldn’t that delay detection of problems?

Are there really advantages to *not* having a kernel? I really do want to know.

Certainly if an error is missed by the front end and caught by the real external verifier, the error reporting quality takes a dive. So this is something to avoid. But the luxury that this affords you is that it's not *essential* that such errors are caught, only preferred, and it's okay if it only happens in some weird unlikely conjunction of circumstances.  We don't need to assume the worst of the author, and can help under the assumption that they are trying to make a correct proof, by contrast to the verifier's perspective where the proof author is the antagonist and we must defend against every possible misuse.

There are lots of verification-related technologies that don't have an LCF style approach; SAT and SMT solvers, model checkers, and even some proof languages like PVS and NuPRL come to mind. But I think MM1 is qualitatively different from these examples, in that MM1 is a tool for producing a proof at the end of the day, while the others are producing "correctness", with a yea/nay answer at the end and nothing more. If that's all you have, you really want to know that everything that goes into that yea/nay answer is trustworthy, and that's just what our verifiers are doing. But with a proof object, you can always just check it at your leisure.

In MM1, the proof objects are lisp data (of course) and you can build the objects however you like. The "refine" tactic, which is the main workhorse for unification, will report unification errors on the spot, which amount to 99% of errors that a user needs to deal with. But if you write your own tactic to construct proofs from whole cloth there is no unification check and you can produce ill-formed proofs if you want. Also, refine doesn't catch DV violations (it's a bit tricky to work out DV conditions on a setvar that is a metavariable). However, all proof constructions ultimately funnel through the Environment::add_decl() function that adds a theorem to the environment, which is where deduplication happens, and other typechecking can be included there. So while there is no kernel in MM1, it is not too far from having one, and this could be made more explicit if desired.

Regarding advantages to not having a kernel, it is all a matter of degree. In my list earlier I could have also included Coq, Lean, and Isabelle, which all have kernels which are not particularly small. (All of these are "true kernels" in the sense that they produce yea/nay answers that must be trusted for correctness. Even though Coq and Lean have proof terms, there is a large amount of reduction standing between those terms and a proper verification trace, and some intelligence is needed from the kernel to avoid age-of-the-universe verification times on real databases.) Isabelle's in particular can be contrasted with a real small kernel LCF system like HOL Light. Having a narrow API makes it easier to trust the kernel, but it also means that all computation must filter through those functions, and also there are more function calls overall, whereas if you implement more complicated composite steps you can improve performance. For MM/MM0 this question does not come up because the target is not a yea/nay but instead a proof object satisfying a rigid external specification. You can use whatever means you like to construct the proof, and the spec is linear time checkable on whatever you come up with.
 
In summary, I don't think it's important to have a kernel given how many backup plans we have, although any embedded verifier would be held to the usual standards of correctness, but it can be done if we care to pursue this.

For a library API, I think we should expose the main data structures, with programmatic versions of the things I've mentioned. In MM1 there are two main data structures: the Environment and the Elaborator. The Elaborator is the state that changes in the course of processing an .mm1 file, while the Environment is the "proof" content of the file; the elaborator is discarded once the file is finished and the "result" is the final Environment. The Elaborator stores things like the current status of file-local settings, the errors we need to report, and the file AST. I think most of the elaborator functions, corresponding to individual directives, tactics, and statements in the file, can be exposed as API methods on the Environment to external code. That way external code can say things like "add this notation", "add this theorem", "check this theorem" and so on. Additionally some proof state specific stuff should also be exposed so that external programs can act on a given proof state, allowing for sledgehammer-like tools.

There might need to be some work so that the data structures are easy-to-navigate & not fragile. I wonder if methods would be useful (e.g., what appear to be “values” are really computed). I would hope a REPL could query their state, and in some cases change them (though maybe only in specific ways).

In Rust, there is some focus placed on privacy as an abstraction mechanism. You can have a struct with private fields, and provide public functions as accessors to the data. This is especially important for anything around the "kernel" like the Environment struct, since in a public API we probably want to make it hard to misuse. Right now I think almost nothing in mm0-rs is public, although a lot of things are pub(crate) (meaning accessible by other code in the project but not from outside the crate). Making something public is to some extent a stability promise; there are semver rules related to changing the layout of a struct to add public fields and so on, so each of these things should be considered carefully once we have some user code that wants to interface with it.
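For illustration, a minimal sketch of that privacy pattern (illustrative names, not the actual mm0-rs API): the field is private, and external code can only go through accessors that preserve the invariants:

// The list of proven theorems is private; callers cannot bypass the accessors.
pub struct Environment {
    theorems: Vec<String>,
}

impl Environment {
    pub fn new() -> Self {
        Environment { theorems: Vec::new() }
    }

    /// Read-only view of what has been proven so far.
    pub fn theorems(&self) -> &[String] {
        &self.theorems
    }

    /// The single entry point for extending the environment;
    /// any well-formedness checks would be enforced here.
    pub fn add_theorem(&mut self, name: String) {
        self.theorems.push(name);
    }
}

fn main() {
    let mut env = Environment::new();
    env.add_theorem("syl".to_string());
    println!("{:?}", env.theorems());
}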
 
Mario

Jim Kingdon

unread,
Nov 21, 2020, 11:52:09 PM11/21/20
to meta...@googlegroups.com
Mario Carneiro wrote:

> If there is enough interest from others that I'm not the only one on the project, I don't mind taking the lead here.

I suppose this is the time to express interest or lack of interest. I
will express a small amount of interest. In terms of the value of the
concept, the interest comes from the idea that I can think of various
things (new features, cleaning up various rough edges, fixing buggy or
odd behaviors, etc) we might want which aren't in metamath.exe or mmj2 .
And that those two codebases - for all their usefulness - are I suspect
hitting a bit of a limit in terms of whether you'd want to do a
lot of new development to them.

I'll also express a small amount of interest in terms of whether I might
help (one thing which springs to mind is automated testing - in the
context of a verifier especially a testsuite of proofs which should be
rejected).

But emphasis on the word "small". My experience trying to work on the
software side of systems like JHilbert and ghilbert has been that (a)
software development is a lot of work relative to how much hobby time I
have for it, (b) it is a lot easier with a critical mass of users, other
contributors, etc, and there are few proof systems which get beyond the
stage of "one author" or "one author and the students in their lab". For
those reasons I have chosen to devote my energy to metamath, because it
already has this kind of critical mass and I get to work on mathematics
rather than tools.

So you'll need people other than me to satisfy "I'm not the only one".
Especially if you are trying to get a thesis done, I'd recommend that
you be realistic and set the bar fairly high in terms of how many offers
of participation (and real live participation) will be needed to get
this even to the "minimally useful at one set of tasks" stage (much less
the larger visions of what we want it to be eventually).

Mario Carneiro

unread,
Nov 22, 2020, 12:55:31 AM11/22/20
to metamath
On Sat, Nov 21, 2020 at 11:52 PM Jim Kingdon <kin...@panix.com> wrote:
Mario Carneiro wrote:

> If there is enough interest from others that I'm not the only one on the project, I don't mind taking the lead here.

I suppose this is the time to express interest or lack of interest. I
will express a small amount of interest. In terms of the value of the
concept, the interest comes from the idea that I can think of various
things (new features, cleaning up various rough edges, fixing buggy or
odd behaviors, etc) we might want which aren't in metamath.exe or mmj2 .
And that those two codebases - for all their usefulness - are I suspect
hitting a bit of a limit in terms of whether you'd want to do a
lot of new development to them.

I don't think it's important that the project get to critical mass in the near term. Building a theorem prover is a long-term project. The main goal here is to have a place where people can feel comfortable collaborating. I will chip in when I can with the time that I have, and it doesn't take a whole lot of additional participation for it to become the "place to go" if you are interested in improving the metamath experience generally. Once we have that, it will grow at a natural pace as these things do. The hard part, as you point out, is getting past the point where it is viewed as one person's project. Even just issues are useful if you don't want to get in on the software side. Lean 3 community development is an interesting parallel. After Leo left to work on lean 4, progress on the lean 3 codebase went into a freeze for about a year, until we had picked up too many bugfix workarounds in mathlib and eventually opened up the "community edition", which has seen development by 6 or so people. Certainly far fewer than the 80-some working on mathlib, but enough to keep it alive and release another 19(!) versions. To me that is what a healthy state of development looks like.

Metamath has far fewer contributors, but there are enough users to keep the issues coming, and one or two part-time developers are more than enough to make progress.

Mario

Thierry Arnoux

unread,
Nov 22, 2020, 8:59:09 AM11/22/20
to meta...@googlegroups.com, Mario Carneiro

Hi all,

This all sounds very interesting!

From the different posts, here is a possible breakdown I see:
  • A "core" crate, (could it be based on smm3 ?), acting as a library and providing services like:
    • parsing and verifying metamath expressions and .mm files,
    • listing theorems matching a given pattern,
    • performing elementary steps like unification,
    • manipulating proofs (adding a subgoal, adding a proof step, declaring the proof complete)
  • A "pa" (proof assistant) crate:
    • parsing and executing the (possibly scheme-like) proof scripts
    • giving real-time notifications to the UI
  • Several "tactics" crates, written as plug-ins in Rust.
  • A "ui" crate, handling everything UI related, e.g. based on Druid/Xi
Did I understand everything correctly?

I've never tried Rust; that could be an opportunity!

BR,
_
Thierry

Benoit

unread,
Nov 22, 2020, 9:59:13 AM11/22/20
to Metamath
This is indeed very interesting!
My language of choice would have been OCaml, which seems to meet many of the requirements expressed above.  But I'll certainly leave the choice to more experienced and active contributors.
Benoît

Glauco

unread,
Nov 22, 2020, 12:39:11 PM11/22/20
to Metamath
In that case, a good first goal would be to determine what the features of mmj2 *are*. Specifically, what features are actually used by people? There are a lot of dead relics. The main ones that come to mind are:
* unification (kind of a requirement for any proof assistant),
* filling in a step given all the type info (and also showing all matches in case some unification is possible),
* as well as the more recent "auto step" proof search, which applies any theorem matching some descent criterion (and not on a blacklist).
Anything else? There are a bunch of options in the RunParms that do stuff but I don't think any of them are really important (the definition checker excepted but that's already on the todo list). There is some documentation as well but it's a mixture of old stuff and new stuff and not very well organized. This is probably something even the less technically minded could help with.

Here is a short list of things that I heavily use in mmj2 (I'll throw in more if they come to mind):

- the General Search page: I really use it a lot (as I wrote, I use a Mel O'Cat version where the Search has been fixed/enhanced)

- ctrl+G > Theorem label (loads a proven theorem from the System Area and "translates" it into .mmp format; it can be fetched either from the loaded .mm file or from a .mmt file in the working folder)

- right click > Reformat Step: Swap Alt (takes a single line and shows it on multiple lines, with "nice" indentation; I wish there were a key shortcut for this one)

- ctrl+O (Reformat Step applied to all the lines in the current .mmp file)

- ctrl+R (when every statement is on a single line, ctrl+r indents them nicely, so that you can immediately see orphan steps)

- Edit > Set Soft DJ Vars Error Handling > options 3 and 2 (option 3 is my "normal" mode, but I use option 2 a lot, when I want to remove dv constraints)

- ProofAsstIncompleteStepCursor,First (this boosted my productivity n times... :-) ; after a ctrl+u it places the cursor at the first unproven line; I can type dozens of ctrl+u without having to leave the keyboard)


In the first phase I'll be happy to use the new tool and go back to mmj2 when some feature is still missing.

Please let me know what's the best environment to set up to contribute to the new tool:

- VSCode + Rust seems to be the candidate, right? (if it were possible to have a portable environment it would be a plus)

- at least Windows + Linux is a must, I guess


Glauco


David A. Wheeler

unread,
Nov 22, 2020, 5:00:20 PM11/22/20
to Metamath Mailing List

On Nov 21, 2020, at 10:15 PM, Mario Carneiro <di....@gmail.com> wrote:



On Sat, Nov 21, 2020 at 8:42 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
Providing a Scheme-like interface is a reasonable approach. It’s easy to implement & provides a lot of flexibility. The big problem is that MANY people hate their syntax, e.g., “lots of irritating silly parentheses”. As you know, I think it’d be *much* better if it supported neoteric syntax (at least), so that you can use syntax like f(5) and {2 + 3}, as that eliminates the main complaint about Lisps while retaining all their benefits.

I'm inclined not to spend much time on this complaint. Better to spend some time with it, get a sense for the way things are put together, and *then* propose improvements. Alternate syntax is not out of the question but it needs to fit within the other design constraints of the language. In particular, MM1 is in many ways an MVP for a metamath based proof assistant, enough to help me prove my big theorems and not much more. If this is to become some big flagship product then I don't mind investing more to make it convenient, but only after the more pressing problems are dealt with.

This is something I can easily do. I’ve implemented neoteric expressions in Scheme, Common Lisp, and Java. It turns out to be very short, *AND* they’re totally optional for use. That is, you can always say (foo bar baz) if you prefer it over foo(bar baz).


If you’re picky about types, you could even trivially implement a preprocessor to convert the Scheme-like language into Rust, and then compile it into the result. Then, once you’re happy with something, you could “freeze” it into a faster Rust implementation.

Indeed, I don't recommend going overboard with the metaprogramming because the result isn't especially maintainable without static types. Complicated tactics should be written as plugins, either in rust or dynamically linked from another language. The main strength of the language is for micro-tactics like when two branches of a proof are almost but not quite the same, and simple pattern matching tactics like norm_num.

Fair enough.

Java has garbage collection & semantics that are very familiar to many (e.g., object orientation & references).

Rust's approach to handling memory feels very similar to Java as an end user, but it's quite different under the hood (the latter is obvious if you hear the press, but the former might be more surprising). You almost never have to explicitly drop things, and when you do it's usually because you need to unlock a mutex or something like that that goes beyond regular memory management.

I would say Rust is more like Ada. Ada and C++ both don’t have built-in garbage collection, but instead use user-defined finalization to deallocate heap memory. Ada is *unlike* C++, and more like Rust, in that Ada has strict static typing & generally tries to prevent errors at compile time. Ada doesn’t have borrow checking per se, but its SPARK variant does, and I know there’s a proposal to add borrow checking to Ada: https://www.adacore.com/uploads/techPapers/Safe-Dynamic-Memory-Management-in-Ada-and-SPARK.pdf

There are lots of verification-related technologies that don't have an LCF style approach; SAT and SMT solvers, model checkers, and even some proof languages like PVS and NuPRL come to mind. But I think MM1 is qualitatively different from these examples, in that MM1 is a tool for producing a proof at the end of the day, while the others are producing "correctness", with a yea/nay answer at the end and nothing more. If that's all you have, you really want to know that everything that goes into that yea/nay answer is trustworthy, and that's just what our verifiers are doing. But with a proof object, you can always just check it at your leisure.

To be fair, some SMT solvers *do* have proof producing mechanisms (e.g., CVC4 and Z3). For comparisons, see: http://homepage.divms.uiowa.edu/~viswanathn/qual.pdf

...
In summary, I don't think it's important to have a kernel given how many backup plans we have, although any embedded verifier would be held to the usual standards of correctness, but it can be done if we care to pursue this.

Fair enough.

I think we should make some efforts to detect problems “as early as reasonable”, but clearly we have a very strong final defense.

There might need to be some work so that the data structures are easy-to-navigate & not fragile. I wonder if methods would be useful (e.g., what appear to be “values” are really computed). I would hope a REPL could query their state, and in some cases change them (though maybe only in specific ways).

In Rust, there is some focus placed on privacy as an abstraction mechanism. You can have a struct with private fields, and provide public functions as accessors to the data. This is especially important for anything around the "kernel" like the Environment struct, since in a public API we probably want to make it hard to misuse. Right now I think almost nothing in mm0-rs is public, although a lot of things are pub(crate) (meaning accessible by other code in the project but not from outside the crate). Making something public is to some extent a stability promise; there are semver rules related to changing the layout of a struct to add public fields and so on, so each of these things should be considered carefully once we have some user code that wants to interface with it.

Careful consideration is important, agreed. But I think we want to make sure that a lot of things are (eventually) accessible by a library call. If the library can be imported from (say) Python & JavaScript, and the library is eventually rich, then a whole lot of re-implementation does *not* need to happen.

--- David A. Wheeler

Mario Carneiro

unread,
Nov 22, 2020, 5:03:13 PM11/22/20
to metamath
On Sun, Nov 22, 2020 at 12:39 PM Glauco <glaform...@gmail.com> wrote:
Please, let me know what's the best environment to set up to contribute to the new tool:

- VSCode + RUST seems to be the candidate, right? (if it were possible to have a portable environment it would be a plus)

- at least Windows + Linux is a must, I guess

Note that Visual Studio Code is an open source editor platform by microsoft available on all operating systems. It has no relation to Visual Studio, which is a bloated proprietary piece of crap. I guess they liked the name.

Also, mm0-rs uses a standard called the "language server protocol" to communicate to VSCode, which is specifically intended to address the problem of many languages supporting many editors. In theory, any editor which supports LSP (which I believe includes most of the big ones at this point - vim and emacs for sure, haven't seen whether eclipse supports it yet but I wouldn't be surprised) can run any LSP-supporting language server. LSP is mostly pushed by microsoft so it probably has best support in vscode, though, and also I have only tested mm0-rs with vscode because I use it daily, but if someone is interested to maintain support for their preferred editor we can probably make it happen.

David A. Wheeler

unread,
Nov 22, 2020, 5:11:34 PM11/22/20
to Metamath Mailing List

On Nov 21, 2020, at 6:22 PM, Mario Carneiro <di....@gmail.com> wrote:
If there is enough interest from others that I'm not the only one on the project, I don't mind taking the lead here. If it incorporates common code from mm0-rs then that will be a good excuse for me to split it off to a new project (the MM0 monorepo is getting pretty hefty) and maybe also expose a library (depending on whether this new verifier is literally mm0-rs under some settings or a separate executable that just links against a library with common code from mm0-rs).

I’m definitely interested. If there’s a way I can help, I’ll help. My math chops are nowhere near yours, but I can sling code.

Since most of my experience in this area is coming from MM1 and the mm0-rs implementation, that would be my default answer to such questions, unless others object. You interact through vscode or another LSP-enabled editor, with a relatively small file (not set.mm) containing "code" in some TBD language that I will assume looks like MM1 :) . The server runs in the background and provides point info and diagnostics. By typing directives in a do block you can use it like a REPL. At the top of the file is likely "import "set.mm";" which says to preload set.mm and base the current development on it. (Perhaps it should instead be something like "altering "set.mm";" if the goal of the file is to construct an alteration to set.mm that may involve inserting theorems in places, renaming things and other such stuff.)

I think it should be like “import set.mm [[before|after] NAME]”, so you know what’s allowed & what is not.

The user can add lines to adjust set.mm notation to their preference, and the language can come with some presets that can be included so that we can collect an approved set of alterations for set.mm.

If you want to add a new theorem, you can declare one in the file, with a notation like MM1's but possibly with some modifications to support DV conditions that are not expressible in MM0 (for example if two set variables are not necessarily disjoint). You can write a literal expression if the proof is simple enough, or you can call a tactic, or you can pop open a (focus) block as in MM1 to enter tactic mode where you evolve a goal state through the use of imperative commands like applying theorems, switching between goals, calling automation and so on.

At the end of the day, you get a file that represents an alteration to set.mm, or a new .mm file.

This doesn’t seem very interactive. Can this be made fast enough to provide rapid feedback so someone can see what is (or is not) working? Is there a variant that would be easily interactive? Maybe we can cache results so that “if nothing has changed here’s the answer” to make this fast?

One thing I appreciate about mmj2 is its interactivity. One thing I do *not* like about mmj2 is extremely weak & inflexible automation. I’d like both interactivity & automation.

--- David A. Wheeler

David A. Wheeler

unread,
Nov 22, 2020, 5:25:30 PM11/22/20
to Metamath Mailing List, Mario Carneiro

On Nov 22, 2020, at 8:58 AM, Thierry Arnoux <thierry...@gmx.net> wrote:

Hi all,

This all sounds very interesting!

From the different posts, here is a possible breakdown I see:

In principle I think that’s right, at least in the sense that there would be multiple crates, each one focused on different tasks & building up on others. There will probably be changes in the details as we flesh it out.

I think that, for speed, there will probably need to be a separate crate for “bulk verification of a file’s contents”. This is a role performed by smm3. There are optimization tricks for a bulk verifier, and we verify whole files often enough that it makes sense to do that. Also, I think a PA may only accept grammatical databases, while the bulk verifier might be designed to handle non-grammatical ones too.

I’d like to see smm3 modified so that it’s easily called as a library, with parameters/hooks so that it can do “bulk verification” of in-memory long strings. But I expect that wouldn’t be too hard to do.
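For illustration, a hypothetical sketch of the kind of library entry point being suggested (these are not smm3's actual types or functions): verify a database that is already held in memory and return diagnostics to the caller instead of printing and exiting:

// Hypothetical diagnostic type; not smm3's actual API.
pub struct Diagnostic {
    pub label: String,   // statement in which the problem was found
    pub message: String, // human-readable description
}

/// Verify a whole database passed as an in-memory string.
/// Returns Ok(()) if every proof checks, or the diagnostics otherwise.
pub fn verify_database(source: &str) -> Result<(), Vec<Diagnostic>> {
    // The real parser and proof checker would go here; this stub only
    // illustrates the shape of the interface being proposed.
    let _ = source;
    Ok(())
}

fn main() {
    match verify_database("$c wff $.") {
        Ok(()) => println!("database verified"),
        Err(errors) => println!("{} errors found", errors.len()),
    }
}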

--- David A. Wheeler


Mario Carneiro

unread,
Nov 22, 2020, 5:35:30 PM11/22/20
to metamath
On Sun, Nov 22, 2020 at 5:00 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:


On Nov 21, 2020, at 10:15 PM, Mario Carneiro <di....@gmail.com> wrote:



On Sat, Nov 21, 2020 at 8:42 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
Providing a Scheme-like interface is a reasonable approach. It’s easy to implement & provides a lot of flexibility. The big problem is that MANY people hate their syntax, e.g., “lots of irritating silly parentheses”. As you know, I think it’d be *much* better if it supported neoteric syntax (at least), so that you can use syntax like f(5) and {2 + 3}, as that eliminates the main complaint about Lisps while retaining all their benefits.

I'm inclined not to spend much time on this complaint. Better to spend some time with it, get a sense for the way things are put together, and *then* propose improvements. Alternate syntax is not out of the question but it needs to fit within the other design constraints of the language. In particular, MM1 is in many ways an MVP for a metamath based proof assistant, enough to help me prove my big theorems and not much more. If this is to become some big flagship product then I don't mind investing more to make it convenient, but only after the more pressing problems are dealt with.

This is something I can easily do. I’ve implemented neoteric expressions in Scheme, Common Lisp, and Java. It turns out to be very short, *AND* they’re totally optional for use. That is, you can always say (foo bar baz) if you prefer it over foo(bar baz).

In MM1, I think it wouldn't be so great, because (1) it's way too easy to misspell (f g (a b) c) as (f g(a b) c), particularly when typing quickly; and (2) it doesn't mesh well with the @ syntax for cutting down parens. I call it scheme-like but it's sufficiently different that it is its own language with its own idioms, which is why I emphasize that you should try using it before proposing changes. (Also, there is a precedence issue in (f ,g(a) b), does that mean (f (,g a) b) or (f ,(g a) b)?)

I actually meant something more like switching to a completely different, non-lispy syntax. Maybe haskelly? Since not all of the language is lispy, it is useful to see at a glance, in a visually distinctive way, where the "code" is and where the math expressions are. I think this needs to be a separate discussion though; this is really a v2 problem.

To be fair, some SMT solvers *do* have proof producing mechanisms (e.g., CVC4 and Z3). For comparisons, see: http://homepage.divms.uiowa.edu/~viswanathn/qual.pdf

Aye, they have proof-producing mechanisms, but the proof formats have no spec, are unstandardized, and can't really be checked externally. My colleague Seul Baek has been working on an alternative: http://paar2020.gforge.inria.fr/papers/PAAR_2020_paper_17.pdf

Careful consideration is important, agreed. But I think we want to make sure that a lot of things are (eventually) accessible by a library call. If the library can be imported from (say) Python & JavaScript, and the library is eventually rich, then a whole lot of re-implementation does *not* need to happen.

Of course we want to avoid reimplementation of core components as much as possible. My plan is to wait for the dependent program to start existing, and then use its requirements to help determine what needs to be public.

On Sun, Nov 22, 2020 at 5:11 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
I’m definitely interested. If there’s a way I can help, I’ll help. My math chops are nowhere near yours, but I can sling code.

Great, I was worried it was all mathematicians and no coders in here. When building a proof assistant it's important to have a balance of both types of users.
 
Since most of my experience in this area is coming from MM1 and the mm0-rs implementation, that would be my default answer to such questions, unless others object. You interact through vscode or another LSP-enabled editor, with a relatively small file (not set.mm) containing "code" in some TBD language that I will assume looks like MM1 :) . The server runs in the background and provides point info and diagnostics. By typing directives in a do block you can use it like a REPL. At the top of the file is likely "import "set.mm";" which says to preload set.mm and base the current development on it. (Perhaps it should instead be something like "altering "set.mm";" if the goal of the file is to construct an alteration to set.mm that may involve inserting theorems in places, renaming things and other such stuff.)

I think it should be like “import set.mm [[before|after] NAME]”, so you know what’s allowed & what is not.

Your average set.mm modification, especially a maintenance change, may touch many theorems in many places. So I wouldn't want to require that they all fit in some contiguous block. Perhaps (before/after NAME) can be a freestanding command to change the context, or it can be a modifier on the theorem command to affect where an individual theorem goes.
 
The user can add lines to adjust set.mm notation to their preference, and the language can come with some presets that can be included so that we can collect an approved set of alterations for set.mm.

If you want to add a new theorem, you can declare one in the file, with a notation like MM1's but possibly with some modifications to support DV conditions that are not expressible in MM0 (for example if two set variables are not necessarily disjoint). You can write a literal expression if the proof is simple enough, or you can call a tactic, or you can pop open a (focus) block as in MM1 to enter tactic mode where you evolve a goal state through the use of imperative commands like applying theorems, switching between goals, calling automation and so on.

At the end of the day, you get a file that represents an alteration to set.mm, or a new .mm file.

This doesn’t seem very interactive. Can this be made fast enough to provide rapid feedback so someone can see what is (or is not) working? Is there a variant that would be easily interactive? Maybe we can cache results so that “if nothing has changed here’s the answer” to make this fast?

It's quite interactive. The script is executed continuously every time you type a character in the editor, so you don't have to press a command or anything to make it go. Peano.mm1 is starting to have noticeable slowdown (maybe .7 seconds) but it's already 6500 lines and almost 2000 theorems, and you can always speed it up by moving a prefix of the file to another file. (I'm planning to support more checkpointing within a file so that it's fast even if you are editing in the middle, but currently it just reads and executes the whole file.) I think that's more than fast enough for the average metamath proof. Honestly it's an order of magnitude faster than mmj2.



On Sun, Nov 22, 2020 at 5:25 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
In principle I think that’s right, at least in the sense that there would be multiple crates, each one focused on different tasks & building up on others. There will probably be changes in the details as we flesh it out.

I think that, for speed, there will probably need to be a separate crate for “bulk verification of a file’s contents”. This is a role performed by smm3. There are optimization tricks for a bulk verifier, and we verify whole files often enough that it makes sense to do that. Also, I think a PA may only accept grammatical databases, while the bulk verifier might be designed to handle non-grammatical ones too.

Yes, that's the "CI verifier" that started this discussion. I find it easier for maintenance purposes to support all the modes in one tool, using subcommands in the command line like "mm0-rs server" and "mm0-rs compile foo.mm1", although you can compile out the tools separately if one of them requires more dependencies or has too much code in it.
 
Mario

David A. Wheeler

unread,
Nov 22, 2020, 10:35:30 PM11/22/20
to Metamath Mailing List

On Nov 22, 2020, at 5:35 PM, Mario Carneiro <di....@gmail.com> wrote:

In MM1, I think it wouldn't be so great, because (1) it's way too easy to misspell (f g (a b) c) as (f g(a b) c), particularly when typing quickly;

In practice that doesn’t seem to be a problem. People tend to pay attention to spacing, after all, (g ab) is already different than (g a b)  :-). Yes, it’s not the identical circumstance, but in my experience this mistake is uncommon in practice. I did some sampling of large swathes of Lisp code years ago, and it turns out that typing (f g(a b)) to mean (f g (a b)) is essentially non-existent in real code bases. I think most people see (f g(a b) c) as “ugly” even though most S-expression readers would quietly accept it. Since people don’t normally use that format, let’s give that format a more useful meaning :-).

and (2) it doesn't mesh well with the @ syntax for cutting down parens.

My sweet-expressions notation implements those using Haskell’s “$” notation. E.g.:
f $ g a    ; Same as (f ( g a ))

That isn’t part of the neoteric-expression definition, but the constructs do work well together.

I call it scheme-like but it's sufficiently different that it is its own language with its own idioms, which is why I emphasize that you should try using it before proposing changes.

Fair enough. But since I devised a system for exactly this kind of circumstance, it’d be useful to at least consider it as a possibility.

(Also, there is a precedence issue in (f ,g(a) b), does that mean (f (,g a) b) or (f ,(g a) b)?)

Details like that are defined in its SRFI. I’ll switch to ' to make it simpler to see:
  (f 'g(a) b) <=> (f '(g a) b)
If you want just g to be quoted, the obvious way is traditional S-expressions:
  (f ('g a) b)


Of course we want to avoid reimplementation of core components as much as possible. My plan is to wait for the dependent program to start existing, and then use its requirements to help determine what needs to be public.

In practice it’ll be a 2-way street. As long as we focus on trying to create an externally-callable library, many other things can be worked out over time.

Your average set.mm modification, especially a maintenance change, may touch many theorems in many places. So I wouldn't want to require that they all fit in some contiguous block. Perhaps (before/after NAME) can be a freestanding command to change the context, or it can be a modifier on the theorem command to affect where an individual theorem goes.

YES. I like that *MUCH* better. I’m sold.

...
It's quite interactive. The script is executed continuously every time you type a character in the editor, so you don't have to press a command or anything to make it go. Peano.mm1 is starting to have noticeable slowdown (maybe .7 seconds) but it's already 6500 lines and almost 2000 theorems, and you can always speed it up by moving a prefix of the file to another file. (I'm planning to support more checkpointing within a file so that it's fast even if you are editing in the middle, but currently it just reads and executes the whole file.) I think that's more than fast enough for the average metamath proof. Honestly it's an order of magnitude faster than mmj2.

Wow, that’s MUCH faster than I was expecting.

However, I want to implement mechanisms that automate proofs that may take much, much longer. That said, it should be trivial to cache that information somewhere (e.g., filename.mmcache) so that the calculation only has to be redone when its inputs change and the cached answer no longer applies.
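
A minimal sketch of how such a cache might work in Rust, keyed on a hash of the automation inputs; `ProofTask` and `ProofResult` are hypothetical placeholder types, and a real tool would persist the map to something like the filename.mmcache mentioned above rather than keep it in memory:

```
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical input/output types for an expensive automation step.
#[derive(Hash)]
struct ProofTask { goal: String, hypotheses: Vec<String> }
#[derive(Clone)]
struct ProofResult { proof: String }

/// In-memory cache; a real tool would persist this map to disk and invalidate
/// entries whose inputs have changed.
struct ProofCache { entries: HashMap<u64, ProofResult> }

impl ProofCache {
    fn new() -> Self { ProofCache { entries: HashMap::new() } }

    /// Run the expensive search only when the hash of the inputs is not already known.
    fn get_or_compute(&mut self, task: &ProofTask, run: impl FnOnce() -> ProofResult) -> ProofResult {
        let mut h = DefaultHasher::new();
        task.hash(&mut h);
        self.entries.entry(h.finish()).or_insert_with(run).clone()
    }
}

fn main() {
    let mut cache = ProofCache::new();
    let task = ProofTask { goal: "( ph -> ph )".into(), hypotheses: vec![] };
    // The closure only runs on a cache miss; a second identical call is free.
    let result = cache.get_or_compute(&task, || ProofResult { proof: "id".into() });
    println!("{}", result.proof);
}
```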

Yes, that's the "CI verifier" that started this discussion. I find it easier for maintenance purposes to support all the modes in one tool, using subcommands in the command line like "mm0-rs server" and "mm0-rs compile foo.mm1", although you can compile out the tools separately if one of them requires more dependencies or has too much code in it.

Sounds reasonable, though I think the “bulk verifier” (smm3) should be accessible separately (e.g., for CI pipelines like ours). In addition, I’d like the tool to be as thin a veneer as reasonably possible, with practically everything important in the callable library. That way anyone can just import the library from any language & have it “ready to go”.
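
To illustrate the "thin veneer over a library" idea with subcommands, here is a hedged sketch assuming the clap crate (with its derive feature); the subcommand names echo the ones mentioned above, but everything else here is invented:

```
use clap::{Parser, Subcommand};

/// Hypothetical front end: a thin CLI whose subcommands just dispatch into the library.
#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Run as a language server for the editor front end.
    Server,
    /// One-shot check of a file (the "CI verifier" / bulk mode).
    Compile { path: String },
}

fn main() {
    match Cli::parse().command {
        Command::Server => println!("starting language server..."),
        Command::Compile { path } => println!("verifying {path}..."),
    }
}
```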

--- David A. Wheeler

Glauco

unread,
Nov 25, 2020, 8:58:51 AM11/25/20
to Metamath
Hi Mario,

here you show peano.mm1 in VSCode and you write
"The basic idea is that MM1 is a scripting language for assembling MM0 proofs and statements. It's untrusted, and functions as the IDE for MM0"

Looking at one of your presentations, I've seen that you've also completely translated set.mm to mm0, haven't you?

Would it be possible, for you, to video-record a session in which you prove a simple new theorem (starting from the mm0 translation of set.mm)?

No audio is needed, and there's no need to make the video available on youtube to everybody. Just a short session to show us what the workflow would be:
load available axioms/definitions/terms/theorems > create new theorem> write/verify proof > save created proof (add to the pre-existing "stuff")


Glauco

Mario Carneiro

unread,
Nov 25, 2020, 11:09:30 AM11/25/20
to metamath
On Wed, Nov 25, 2020 at 8:58 AM Glauco <glaform...@gmail.com> wrote:
Hi Mario,

here
you show peano.mm1 in VSCode and you write
"The basic idea is that MM1 is a scripting language for assembling MM0 proofs and statements. It's untrusted, and functions as the IDE for MM0"

Looking at one of your presentations I've seen you've also completely translated set.mm to mm0, isn't it?

Would it be possible, for you, to video-record a session in which you proof a simple new theorem (starting from the mm0 translation of set.mm) ?

There is actually quite some work before this becomes a really usable workflow. The problem with the translated set.mm is that there are *no* notations. Everything is s-expressions which makes it pretty difficult to read.

Most of my MM0 work has been inside peano.mm1, which is a set.mm style axiomatization of peano arithmetic with all the niceties that MM0 brings to bear. It wouldn't be so hard to demo a new theorem there as there is a pretty decent library of supporting theorems. Alternatively, I could use set.mm0, which is a kind of half baked hand translation of the set.mm axioms into MM0, including notations. I never finished the proof of set.mm1 because I needed a library of basic logic, but I could probably port the peano.mm1 library at this point (since the first part of peano.mm1, on basic logic, would be the same as set.mm).

More long term solutions for supporting set.mm would include auto-translating most notations into MM0 style, identifying the constructions that need some new notation, and making some hard decisions for those. MM0 has the requirement that every notation uses a distinct initial constant (or for infix notations, the first constant after the initial variable should be distinctive), which means that a lot of mixfix notations from set.mm like F : A --> B are probably not going to survive.
 
No audio is needed, and there's no need to make the video available on youtube to everybody. Just a short session to show us how it would be the workflow:
load available axioms/definitions/terms/theorems > create new theorem> write/verify proof > save created proof (add to the pre-existing "stuff")

Especially if you mean the end to end MM workflow, a lot of this is still very much a dream/plan/roadmap and not a current reality. What I can show you now with mm0-rs is the use of MM1 to construct a .mmb binary proof file that can be verified by external MM0 verifiers like mm0-c. I can probably declare a few notations and do the same thing starting from set.mm.mm0, but the result will still be an MM0 proof, not an MM proof. That's what the plan is all about.

Mario

Mario Carneiro

unread,
Nov 27, 2020, 12:14:24 AM11/27/20
to metamath
Hi All,

I've just put the finishing touches on the mm0-rs HTML generator, which makes pages such as https://digama0.github.io/mm0/peano/thms/addass.html . As should be pretty obvious, it's based on the metamath web page display, but I would like to make it a bit more interactive, possibly involving doing more verification work directly in JS. What do people want to see from a more "live" version of the web pages? Search seems like an important one, and step folding is also an option although I don't know how to make that not annoying. Another thing that is somewhat important for MM0 is "expr-folding", where subexpressions that are used in many contiguous steps (like the proof context) are abbreviated. MM/MM0 like to work with such expressions as they can be unified in constant time, but the standard metamath proof display format makes this look very repetitive (and it is also expensive to print), suggesting that somehow the print isn't capturing "the essence of the proof" because it appears to be more repetitive than it is.

Also, the web pages could benefit from some design work (I did my best but it's not my forte), and they also probably lack some essential components like a nav bar or something like it. The current design is pleasantly minimalistic but maybe some important elements are missing, so some votes on the most pressing issues would help.

Mario

David A. Wheeler

unread,
Nov 27, 2020, 1:17:46 AM11/27/20
to Mario Carneiro, metamath
On November 27, 2020 12:14:09 AM EST, Mario Carneiro <di....@gmail.com> wrote:
>Hi All,
>
>I've just put the finishing touches on the mm0-rs HTML generator, which makes pages such as https://digama0.github.io/mm0/peano/thms/addass.html . As should be pretty obvious, it's based on the metamath web page display, but I would like to make it a bit more interactive, possibly involving doing more verification work directly in JS. What do people want to see from a more "live" version of the web pages? Search seems like an important one, and step folding is also an option although I don't know how to make that not annoying.

If it can be done easily without JavaScript, you should do it that way instead. It's generally faster to do it that way, and it'd be nice to have some functionality work when client-side JavaScript is disabled. There are obviously diminishing returns where that doesn't make sense.

Basic text search is already built into browsers; I don't know if people need more than that for searching. Step expansion could be implemented using the HTML details tag.

> Another thing that is somewhat important for MM0 is "expr-folding", where subexpressions that are used in many contiguous steps (like the proof context) are abbreviated. MM/MM0 like to work with such expressions as they can be unified in constant time, but the standard metamath proof display format makes this look very repetitive (and it is also expensive to print), suggesting that somehow the print isn't capturing "the essence of the proof" because it appears to be more repetitive than is.

A fair point. The metamath proof display has this problem also.

In some texts you would show at the beginning some definitions to make notation simpler, followed by the text that uses it. Perhaps you could automatically detect sequences of symbols of a certain length that get reused more than a certain number of times, give them special symbols, and put them at the top as shorthand definitions. E.g., $1, $2, and so on. Each list would be local to a particular theorem. They could even use other shorthand definitions. I don't know if that would actually help, but I'm throwing it out there as an idea.
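
A rough sketch of the detection step, assuming each displayed proof step is available as a vector of tokens; the length and repetition thresholds are arbitrary and only illustrate the idea:

```
use std::collections::HashMap;

/// Count every contiguous token run of length `len` across all displayed steps,
/// and return the runs that occur at least `min_count` times as shorthand candidates.
fn shorthand_candidates(steps: &[Vec<String>], len: usize, min_count: usize) -> Vec<Vec<String>> {
    let mut counts: HashMap<&[String], usize> = HashMap::new();
    for step in steps {
        for window in step.windows(len) {
            *counts.entry(window).or_insert(0) += 1;
        }
    }
    counts.into_iter()
        .filter(|&(_, n)| n >= min_count)
        .map(|(run, _)| run.to_vec())
        .collect()
}

fn main() {
    let steps: Vec<Vec<String>> = vec![
        "( ph -> ( ps -> ph ) )".split_whitespace().map(String::from).collect(),
        "( ps -> ph )".split_whitespace().map(String::from).collect(),
    ];
    // Runs of 5 tokens seen at least twice would each get a local name like $1, $2, ...
    for run in shorthand_candidates(&steps, 5, 2) {
        println!("{}", run.join(" "));
    }
}
```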


>Also, the web pages could benefit from some design work (I did my best but it's not my forte), and it also probably lacks some essential components like a nav bar or something like it. The current design is pleasantly minimalistic but maybe some important elements are missing, so some votes on the most pressing issues would help.


--- David A. Wheeler

Mario Carneiro

unread,
Nov 27, 2020, 1:27:05 PM11/27/20
to David A. Wheeler, metamath
On Fri, Nov 27, 2020 at 1:17 AM David A. Wheeler <dwhe...@dwheeler.com> wrote:
On November 27, 2020 12:14:09 AM EST, Mario Carneiro <di....@gmail.com> wrote:
>step folding is also an option although I don't know how to make that not annoying.

If it can be done easily without JavaScript, you should do it that way instead. It's generally faster do to it that way, and it'd be nice to have some functionality work when client-side JavaScript is disabled. There are obviously diminishing returns where that doesn't make sense.

I'm not fussed about the technology used to achieve it, but my point is that if you use step folding you will be clicking on unfold icons all day to get through a 200 step proof. MM0 has a heavy linear sequence due to its use of @ , so perhaps we can fold steps off that linear path; the parenthesis nesting is usually only 3 or 4 (and often lower) except in some highly automated proofs, while the actual step depth increases linearly with the proof size (so it could be 20 or 30 for an average proof).
 
Basic text search is already built into browsers; I don't know if people need more than that for searching.

No? I use metamath's search function all the time when the web site can't help, and I guess URLs pretty often when metamath is not on hand. I'm talking about finding *theorems*, not steps. And anything with a fancy unification search isn't going to be usable with just browser search. Granted, the index page can probably be text-searched, but I doubt it will stay as one page forever; I think there are over 200 mmtheorems*.html pages.
 
> Another thing that is somewhat important for MM0 is "expr-folding", where subexpressions that are used in many contiguous steps (like the proof context) are abbreviated. MM/MM0 like to work with such expressions as they can be unified in constant time, but the standard metamath proof display format makes this look very repetitive (and it is also expensive to print), suggesting that somehow the print isn't capturing "the essence of the proof" because it appears to be more repetitive than is.

A fair point. The metamath proof display has this problem also.

In some texts you would show at the beginning some definitions to make notation simpler, followed by the text that uses it. Perhaps you could automatically detect sequences of symbols of a certain length that get reused more than a certain number of times, give them special symbols, and put them at the top as shorthand definitions. E.g., $1, $2, and so on. Each list would be local to a particular  theorem. They could even other shorthand definitions. I don't know if that would actually help, but I'm throwing it out there as an idea.

My idea is that the user selects a target step and a "width" (a configurable constant, say 10), and then all subexpressions that are not "destructured" in the steps which are less than distance 10 from the chosen point along the graph are abbreviated. (In other words, this is what you would get if you wrote all those steps in mmj2 without the formulas and let it find them by unification.) If you consider all the steps, you probably won't get any abbreviations, but the more "local" you make it the more aggressively it will shrink terms. This will require the pretty printer to be in JS, though.

Interestingly, this actually more closely reflects the way the proof author sees the proof they wrote; the author sees metavariables for the unwritten parts and they only go away gradually as they complete the proof; if you go back and look at the completed lines they are less readable than they were when the author looked at that same line.
 
Mario

David A. Wheeler

unread,
Nov 27, 2020, 2:02:48 PM11/27/20
to Metamath Mailing List

On Nov 27, 2020, at 1:26 PM, Mario Carneiro <di....@gmail.com> wrote:



On Fri, Nov 27, 2020 at 1:17 AM David A. Wheeler <dwhe...@dwheeler.com> wrote:
On November 27, 2020 12:14:09 AM EST, Mario Carneiro <di....@gmail.com> wrote:
>step folding is also an option although I don't know how to make that not annoying.

If it can be done easily without JavaScript, you should do it that way instead. It's generally faster do to it that way, and it'd be nice to have some functionality work when client-side JavaScript is disabled. There are obviously diminishing returns where that doesn't make sense.

I'm not fussed about the technology used to achieve it, but my point is that if you use step folding you will be clicking on unfold icons all day to get through a 200 step proof. MM0 has a heavy linear sequence due to its use of @ , so perhaps we can fold steps off that linear path; the parenthesis nesting is usually only 3 or 4 (and often lower) except in some highly automated proofs, while the actual step depth increases linearly with the proof size (so it could be 20 or 30 for an average proof).

I doubt there’s a “perfect heuristic” for step folding, but I imagine there’s a point at which there’s “too much” step folding. You could impose a maximum depth of folding before you just “show the rest” instead of using more folding.

You could also have a button that “shows all steps” that’s implemented with JavaScript. Users without JavaScript can still see the proof, and users with JavaScript (most of them) can easily see everything.



Basic text search is already built into browsers; I don't know if people need more than that for searching.

No? I use metamath's search function all the time when the web site can't help, and I guess URLs pretty often when metamath is not on hand. I'm talking about finding *theorems*, not steps. And anything with a fancy unification search isn't going to be usable with just browser search. Granted, the index page can probably be text-searched, but I doubt it will stay as one page forever; I think there are over 200 mmtheorems*.html pages.

Sorry, from the step folding I thought you were still talking about *one* proof.

Yes, searching for theorems is of course useful.

My idea is that the user selects a target step and a "width" (a configurable constant, say 10), and then all subexpressions that are not "destructured" in the steps which are less than distance 10 from the chosen point along the graph are abbreviated. (In other words, this is what you would get if you wrote all those steps in mmj2 without the formulas and let it find them by unification.) If you consider all the steps, you probably won't get any abbreviations, but the more "local" you make it the more aggressively it will shrink terms. This will require the pretty printer to be in JS, though.

Interesting. I suspect both approaches have value. If you’re focusing on a *specific* step, then I can imagine that could be helpful.

If you’re looking at the “proof as a whole” then I still think that defining certain common repeating constructs might be really helpful in making a proof easier to follow.

It’s probably impossible to be sure without trying it out, since we’re talking about “what is easier for humans to process”?

--- David A. Wheeler

Glauco

unread,
Nov 27, 2020, 2:14:01 PM11/27/20
to Metamath
I've just put the finishing touches on the mm0-rs HTML generator, which makes pages such as https://digama0.github.io/mm0/peano/thms/addass.html .

Nice page. Can hyperlinks be added to let the visitor "know more" about "nat" and "+" ?

I guess there's the concept of "definition" for those terms(?) : can it be rendered in a linked html page?

Glauco

Glauco

unread,
Nov 27, 2020, 4:51:33 PM11/27/20
to Metamath
I guess there's the concept of "definition" for those terms(?) : can it be rendered in a linked html page?

Browsing this page, I may be wrong, but it looks like "nat" and "+" in Peano are not eliminable, in the same way that "e." is not eliminable in ZF, so my question is probably ill-posed.

Mario Carneiro

unread,
Nov 27, 2020, 6:42:10 PM11/27/20
to metamath
It's certainly possible to have pages for terms and definitions, but I don't have much to show for them beyond what's already on the index page. If I can figure out how to do pretty printing with embedded links, I can also link all the uses of a symbol so that you can click on a "+" anywhere to go to its definition (or at least to the page that says they are an axiomatic term constructor with no definition).


savask

unread,
Nov 28, 2020, 4:04:38 AM11/28/20
to Metamath
I'm a bit late to the discussion, but do I understand correctly that this new community verifier is supposed to be a spin-off of the MM0/MM1 project? In particular, the MM1 code base will be the starting point, proofs won't look like mmj2's proof sheets, and most of the tactics/automation will be written in a custom language (as I understand it, MM1 uses a scheme-like language)? Also, as far as I understand, this new verifier should be able to replace the ensemble of tools we use now for CI (or is this a separate project which will include support for ungrammatical databases?).

When I first read this thread, I imagined the project roughly as depicted on Thierry's diagram (a universal core library agnostic of grammar, tactic language etc + PA program using that lib to imitate mmj2 or imitate MM1 and so on), but on the second reading I got the impression that the underlying idea was to base it on MM1 from the beginning. I'm not sure if this is a source of confusion for others, but it is definitely for me, so it would be nice if the intended connection between MM0/MM1 and the new verifier could be clarified.

Mario Carneiro

unread,
Nov 28, 2020, 5:23:22 AM11/28/20
to metamath
On Sat, Nov 28, 2020 at 4:04 AM savask <sav...@yandex.ru> wrote:
I'm a bit late to the discussion, but do I understand correctly that this new community verifier is supposed to be a spin-off to MM0/MM1 project? In particular, MM1 code base will be the starting point, proofs won't look like mmj2's proof sheets, and most of tactics/automation will be written in a custom language (as I understood, MM1 uses a scheme-like language)? Also as far as I understand this new verifier should be able to replace an ensemble of tools we use now for CI (or is this a separate project which will include support for ungrammatical databases?).

That all sounds about right. To support ungrammatical databases, the CI verifier would need a standard verifier embedded in it, or alternatively we could just use another verifier for that part. It's possible to detect this condition on the fly, too.

By the way, there is another possibility for tactics, which is to write them as Rust crates that get linked (statically or dynamically) into mm0-rs (or even in another language if that's your thing). This is probably a better choice for really big tactics because the scripting language, while good for on the spot ad-hoc tactics and simple pattern matching, gets somewhat unmaintainable at large scale because it's not typed and uses only s-exprs for structured data, and it's also interpreted so there is a speed penalty associated with that.
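
As a flavor of what that could look like, here is a hedged sketch of a statically linked tactic written as ordinary Rust; the `Tactic` trait, `Goal`, and `ProofStep` types are hypothetical stand-ins, not the actual mm0-rs interfaces:

```
// Hypothetical types standing in for whatever the proof assistant would expose.
struct Goal { statement: String }
struct ProofStep { theorem: String, args: Vec<String> }

/// A tactic written as ordinary Rust: typed, compiled, and linked into the tool,
/// in contrast to an interpreted s-expression script.
trait Tactic {
    fn name(&self) -> &str;
    fn apply(&self, goal: &Goal) -> Result<Vec<ProofStep>, String>;
}

/// Toy example: close a goal of the shape "ph -> ph" with a hypothetical `id` theorem.
struct IdTactic;

impl Tactic for IdTactic {
    fn name(&self) -> &str { "id" }
    fn apply(&self, goal: &Goal) -> Result<Vec<ProofStep>, String> {
        let parts: Vec<&str> = goal.statement.splitn(2, " -> ").collect();
        if parts.len() == 2 && parts[0] == parts[1] {
            Ok(vec![ProofStep { theorem: "id".into(), args: vec![parts[0].into()] }])
        } else {
            Err(format!("id tactic does not apply to: {}", goal.statement))
        }
    }
}

fn main() {
    let goal = Goal { statement: "ph -> ph".into() };
    match IdTactic.apply(&goal) {
        Ok(steps) => println!("{} closed the goal with {} step(s)", IdTactic.name(), steps.len()),
        Err(e) => println!("{}", e),
    }
}
```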
 
When I first read this thread, I imagined the project roughly as depicted on Thierry's diagram (a universal core library agnostic of grammar, tactic language etc + PA program using that lib to imitate mmj2 or imitate MM1 and so on), but on the second reading I got the impression that the underlying idea was to base it of MM1 from the beginning. I'm not sure if this is a source of confusion for others, but it is definitely for me, so it would be nice if the intended connection between MM0/MM1 and the new verifier could be clarified.

That's fair. I wasn't intending it to be based on MM1 from the beginning, but writing a verifier from scratch is an endeavor and especially if it's mostly me leading the charge I will want to share code with mm0-rs because it is currently under active development. However the aim here is not to transition to MM0 the formal system, or at least that's not my current proposal; rather it is to adapt mm0-rs so that it can also work with metamath as the foundation instead of MM0, which requires replacing a few foundational types but not nearly as much as you might expect, because the basic structure of terms is similar to MM.

David points out to me that even this is quite some work, and suggests translating to and from MM0 if the goal is simply to be able to use mm0-rs as a MM proof assistant. There is a bit more manual work involved in that, because library theorems will be somewhat mangled by the roundtrip and there are some limitations on what you can prove especially in early pred calc where non-distinct setvars play a role, but mostly I think this will work at the small scale. Probably we will want both eventually.

Mario

David A. Wheeler

unread,
Nov 28, 2020, 5:37:10 AM11/28/20
to Mario Carneiro, metamath
On November 28, 2020 5:23:07 AM EST, Mario Carneiro <di....@gmail.com> wrote:
>On Sat, Nov 28, 2020 at 4:04 AM savask <sav...@yandex.ru> wrote:
>
>> I'm a bit late to the discussion, but do I understand correctly that this new community verifier is supposed to be a spin-off to MM0/MM1 project?

I find Mario's MM0/MM1 work very intriguing, but I'm not currently ready to entirely switch to just that. But he's done a lot of interesting work, so the question is, can we build on that somehow?

One intriguing idea is doing more lower-level work in Rust. Rust is fast, it can be compiled to wasm (making it callable in a browser), and its C interface makes it callable from anything. I've started looking at Stefan O'Rear's Rust verifier with the goal of adding capabilities... at least definition checking. Having some library components in Rust could mean others could more easily write things in many languages.
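
For illustration, a hedged sketch of a C-callable entry point; the function and its behavior are invented, but the `#[no_mangle] extern "C"` pattern is the standard way a Rust library exposes itself to C, Python (via ctypes/cffi), or, compiled to wasm, a browser:

```
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

/// Hypothetical entry point: verify a database held in a NUL-terminated buffer,
/// returning 0 on success and a nonzero error code otherwise. An interface like
/// this lets the same crate be loaded from C, Python, JavaScript, and so on.
#[no_mangle]
pub extern "C" fn mm_verify(db: *const c_char) -> i32 {
    if db.is_null() {
        return 1;
    }
    // SAFETY: the caller promises `db` points to a valid NUL-terminated string.
    let text = unsafe { CStr::from_ptr(db) }.to_string_lossy();
    // Placeholder for the real verification pass.
    if text.contains("$p") { 0 } else { 2 }
}

fn main() {
    // Calling the exported function from Rust itself, just to exercise it.
    let db = CString::new("example $p |- ( ph -> ph ) $= ... $.").unwrap();
    println!("exit code: {}", mm_verify(db.as_ptr()));
}
```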


--- David A. Wheeler

David A. Wheeler

unread,
Nov 28, 2020, 8:55:12 PM11/28/20
to Metamath Mailing List

> On Nov 28, 2020, at 5:23 AM, Mario Carneiro <di....@gmail.com> wrote:
> That's fair. I wasn't intending it to be based on MM1 from the beginning, but writing a verifier from scratch is an endeavor and especially if it's mostly me leading the charge I will want to share code with mm0-rs because it is currently under active development. However the aim here is not to transition to MM0 the formal system, or at least that's not my current proposal; rather it is to adapt mm0-rs so that it can also work with metamath as the foundation instead of MM0, which requires replacing a few foundational types but not nearly as much as you might expect, because the basic structure of terms is similar to MM.

If you’re willing to do that, that sounds like a great thing, and I’d be happy to help you with that. If we can adapt mm0-rs so that it works both for standard Metamath & MM0 that’d be great, and the more overlap you can find, the more it might help your PhD work too.

My understanding is that this proof development system would only be able to handle “grammatical” Metamath databases like set.mm and iset.mm. However, the vast majority of Metamath work is in such databases anyway.

Rust is really fast, yet safe. I would like to develop some tactics beyond what mmj2 can do, e.g., mmj2 can easily figure out the theorem/axiom to use given two expressions, but I want to be able to say “try up to 3 steps to start from step X to produce step Y”. I realize that Metamath.exe has “IMPROVE/DEPTH”, but metamath.exe wants you to work strictly backwards, & I’m looking for greater flexibility.

--- David A. Wheeler

Glauco

unread,
Dec 2, 2020, 3:03:06 PM12/2/20
to Metamath
Hi Mario,

it'd be great if you could record a session where you start from set.mm.mm0 and write a MM0 proof.

I'm particularly interested in seeing how the autocomplete (ctrl+space) and the "Go to definition" work.

Can you please remind me of the kind of "notations" needed to translate from mm to mm0? An example?

Glauco

vvs

unread,
Dec 11, 2020, 8:47:05 AM12/11/20
to Metamath
Yeah, I was somewhat involved in the research behind that paper. I think it's trying too hard to fit imperative style into a proofy language; it's difficult to be direct with that approach and there are certain kinds of constructions that are impossible with it (like getting the value you store in a pointer-keyed hash table back out later). You still need a good compiler to make the code efficient - this was a big problem in lean 3 and lean 4 is solving it at great development expense. Here I think we do better to just let the established programming languages do what they were designed to do.

Another recent research paper on the subject:

"We introduce Perceus, an algorithm for precise reference counting with
reuse and specialization. Starting from a functional core language with
explicit control-flow, Perceus emits precise reference counting
instructions such that programs are _garbage free_, where only live
references are retained. This enables further optimizations,
like reuse analysis that allows for guaranteed in-place updates at
runtime. This in turn enables a novel programming paradigm that we call
_functional but in-place_ (FBIP). Much like tail-call optimization
enables writing loops with regular function calls, reuse analysis enables
writing in-place mutating algorithms in a purely functional way. We give
a novel formalization of reference counting in a linear resource
calculus, and prove that Perceus is sound and garbage free. We show
evidence that Perceus, as implemented in Koka, has good performance
and is competitive with other state-of-the-art memory collectors." 

Thierry Arnoux

unread,
Aug 15, 2021, 9:23:56 AM8/15/21
to meta...@googlegroups.com, David A. Wheeler, Stefan O'Rear

Hi Stefan, David and all,

Eight months after this discussion was launched I have made what I think is an important breakthrough: the "grammar" addition to SMM3 / metamath-knife can now parse all statements of set.mm, including the complex ones where look ahead is required, into tree-like "formula" structures. Code is as always available on my fork for those interested.

On my MacBookPro 2015, parsing the full database takes around 3s on one single thread, and 1s when multi-threaded. This is roughly the same cost as the "scopeck" step. Building the grammar for the parsing, based on the syntactical axioms, only takes around 13ms. I was surprised how easy it was to introduce multi-threading.

I believe this is the first step to make `metamath-knife` a library for a new community proof assistant. I want to continue to evolve the resulting "Formula" structure so that it supports basic functionalities such as substitution and unification. However, I don't want to go too far beyond that; I think further functionality shall be placed in the "application" part.
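
Not the actual metamath-knife types, but a minimal sketch of what a tree-shaped formula with substitution could look like, just to make the intent concrete:

```
use std::collections::HashMap;

/// A parsed statement as a tree: either a variable leaf or a syntax-axiom node
/// (e.g. `wss`) applied to its children.
#[derive(Clone, Debug)]
enum Formula {
    Var(String),
    Node { label: String, children: Vec<Formula> },
}

impl Formula {
    /// Replace every variable leaf by its image under `subst`, leaving
    /// unmapped variables untouched.
    fn substitute(&self, subst: &HashMap<String, Formula>) -> Formula {
        match self {
            Formula::Var(v) => subst.get(v).cloned().unwrap_or_else(|| self.clone()),
            Formula::Node { label, children } => Formula::Node {
                label: label.clone(),
                children: children.iter().map(|c| c.substitute(subst)).collect(),
            },
        }
    }
}

fn main() {
    // wss(cA, cB), i.e. " A C_ B " as a tree.
    let f = Formula::Node {
        label: "wss".into(),
        children: vec![Formula::Var("A".into()), Formula::Var("B".into())],
    };
    let mut subst = HashMap::new();
    subst.insert("B".into(), Formula::Var("C".into()));
    println!("{:?}", f.substitute(&subst)); // wss(A, C)
}
```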

Stefan, David, I would like to soon move on to pulling this code into an official Rust crate. You own the repositories for `smetamath` and `metamath-knife`, respectively.
How could we proceed?

Specifically:

  • I believe the 'statement parsing' functionality shall be part of a basic Metamath library. Do we have an agreement here? This would mean I could go ahead and issue a PR :-)
  • Which repository shall be used and evolve? Stefan's `smetamath`, or David's `metamath-knife`?
  • Since there was broad agreement to build a community project on Rust, could we move the repository into GitHub's Metamath organization? Norm, would you agree?
BR,
_
Thierry

PS. smetamath's TODO lists both a grammatical parser (the topic of this mail) and outline inference: I've also added outline inference in the branch 'outline'.

Mario Carneiro

unread,
Aug 15, 2021, 9:48:05 AM8/15/21
to metamath, David A. Wheeler, Stefan O'Rear
Exciting news!

Where did you get the grammar parsing algorithm from? Is it based on LRParser, a backtracking parser algorithm, or something tailored for set.mm?

Regarding the questions, my own opinions are: (1) Lots of bits of a metamath proof assistant can be separated into crates and put on crates.io for reuse. Especially if you have a second application, this is definitely a good idea, although it often requires reworking the code to make the library more modular and less entangled with the application. (2) I believe David has shown some interest in keeping up maintenance on the project, so I'm inclined toward the metamath-knife repo if it can be opened up to other contributors. And indeed, re: (3), putting it on the metamath organization would be a way to avoid having to pick sides and help establish it as a community project.

Mario


Thierry Arnoux

unread,
Aug 15, 2021, 12:00:23 PM8/15/21
to meta...@googlegroups.com, Mario Carneiro, David A. Wheeler, Stefan O'Rear

Hi Mario,

Where did you get the grammar parsing algorithm from?

I had already touched the parsing mechanism in my previous post. I have not followed any given architecture but built my own. After the fact, I see that I originally built a Moore machine, and then mixed it with a Mealy machine. The result is a parser which needs neither lookahead nor backtracking, so it works in O(n). This is probably thanks to the fact that

Here comes the longer explanation:

This works in two steps. First, I'm extracting the grammar information in the form of a tree. That tree represents the Moore Machine, where each node is a state of the automaton, and transitions are made between nodes based on input tokens. Once a terminal node is reached, a full construct has been read and a 'reduce' step is made. Pictures below are partial graphical representations of that tree / state machine. I encourage you to follow the link above and browse that tree / state machine!

The algorithm keeps two states:

  • a pointer to the current node in that tree (actually, there are several of these on the call stack whenever there are recursive calls),
  • a stack of sub-formulas already previously parsed, but not 'reduced' yet.

For example, when parsing " A C_ B ", the parser would first encounter the ` A ` token. This makes it shift that token and move its pointer to node 115, which tells it to 'reduce' a `cA` and push it onto its stack.

Next it restarts from the root node of the grammar tree. A class has been parsed, so it would then move to node 56 and then 'shift' the next token, ` C_ `. This leads to node 239. Then ` B ` is parsed recursively. This works just as it did for the ` A ` token, and ` cB ` is also pushed on the stack, which now contains [ cA, cB ]. ` cB ` is a class, so the machine state pointer moves to node 240, which tells it to reduce a ` wss `. This consumes both ` cA ` and ` cB ` nodes which were on the stack, and replaces them with a single ` wss[cA, cB] ` node. The stack only contains one formula node, and all tokens of the expression have been shifted, so parsing is over.
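
For readers who want the loop spelled out, here is a stripped-down sketch in Rust; it uses textbook shift/reduce bookkeeping (an explicit state stack) rather than the recursive machine described above, and the states and grammar are invented just so that " A C_ B " parses to wss[cA, cB]:

```
use std::collections::HashMap;

#[derive(Debug)]
enum Formula {
    Node { label: String, children: Vec<Formula> },
}

/// One state of a toy grammar automaton: `shift` is keyed by the next input token,
/// `goto` by the typecode of a formula that was just reduced, and `reduce`, if set,
/// gives (syntax axiom, its typecode, symbols to pop, children to take off the stack).
struct State {
    shift: HashMap<&'static str, usize>,
    goto: HashMap<&'static str, usize>,
    reduce: Option<(&'static str, &'static str, usize, usize)>,
}

fn st(
    shift: &[(&'static str, usize)],
    goto: &[(&'static str, usize)],
    reduce: Option<(&'static str, &'static str, usize, usize)>,
) -> State {
    State {
        shift: shift.iter().cloned().collect(),
        goto: goto.iter().cloned().collect(),
        reduce,
    }
}

fn parse(tokens: &[&str], states: &[State]) -> Option<Formula> {
    let mut state_stack = vec![0usize];           // pointer(s) into the automaton
    let mut formulas: Vec<Formula> = Vec::new();  // sub-formulas parsed but not yet consumed
    let mut pos = 0;
    loop {
        let cur = *state_stack.last().unwrap();
        if let Some((label, typecode, rhs_len, nchildren)) = states[cur].reduce {
            // Reduce: turn the sub-formulas on the stack into a new node, then
            // follow the typecode edge from the state uncovered by the pop.
            let children = formulas.split_off(formulas.len() - nchildren);
            formulas.push(Formula::Node { label: label.into(), children });
            for _ in 0..rhs_len { state_stack.pop(); }
            let uncovered = *state_stack.last().unwrap();
            state_stack.push(*states[uncovered].goto.get(typecode)?);
        } else if pos < tokens.len() {
            // Shift: consume the next token and move the state pointer.
            state_stack.push(*states[cur].shift.get(tokens[pos])?);
            pos += 1;
        } else {
            // Input exhausted: accept iff exactly one formula remains.
            return if formulas.len() == 1 { formulas.pop() } else { None };
        }
    }
}

fn main() {
    // Tiny invented grammar: class ::= "A" | "B" ;  wff ::= class "C_" class.
    let states = vec![
        st(&[("A", 1), ("B", 2)], &[("class", 3), ("wff", 6)], None),
        st(&[], &[], Some(("cA", "class", 1, 0))),
        st(&[], &[], Some(("cB", "class", 1, 0))),
        st(&[("C_", 4)], &[], None),
        st(&[("A", 1), ("B", 2)], &[("class", 5)], None),
        st(&[], &[], Some(("wss", "wff", 3, 2))),
        st(&[], &[], None), // accepting state: a complete wff has been parsed
    ];
    // Prints wss applied to [cA, cB], i.e. the tree for " A C_ B ".
    println!("{:?}", parse(&["A", "C_", "B"], &states));
}
```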


Actually parsing a statement is the easy part, and the complex part really lies in building the tree correctly. This is done in several passes. The first pass simply takes all syntax axioms, and adds them to the grammar tree. However, there are still several ambiguities to handle:

  • Everywhere where there is a class, it's also possible to first parse a setvar, and then reduce it using ` cv ` (cv states that any setvar is a class). I called this type conversion. The parser can detect the special format of ` cv `, because it has a single token which is also a variable. This leads to the special transitions ` setvar / cv ` which partially turned my Moore Machine into a Mealy Machine.
  • Then, there are several ambiguities in the language. For example,  ` ( x e. A ` could be a prefix for either ` ( x e. A |-> B ) ` or ` ( x e. A /\ y e. B ) `. In the first case, ` x ` and ` A ` shall not be reduced to ` x e. A ` (wcel) immediately, but only after the ` /\ ` token has been read. I've resolved this ambiguity by duplicating parts of the state machine until/when the decision can be made. In this case for example, this means that all possible wff primitives ( ` ->` ,  ` /\ ` , ... ) are added after the node corresponding to the partial state ` ( x e. A ` (currently numbered 304), in all cases with a `wcel` reduction on the way. This leads to the most complex parts of the machine.

Below are parser commands I needed to add to set.mm, in order to give hints to the parser about those ambiguities:

  $( Warn the parser about which particular formula prefixes are ambiguous $)
  $( $j ambiguous_prefix ( A   =>   ( ph ;
        type_conversions;
        ambiguous_prefix ( x e. A   =>   ( ph ;
        ambiguous_prefix { <.   =>   { A ;
        ambiguous_prefix { <. <.   =>   { A ;
  $)

It might have been possible to detect them automatically, but I took the shortcut to specify them in set.mm. I'll kindly ask those lines to be included in the official set.mm so that the parser works!

BR,
_
Thierry

PS. This also means that I have introduced a generic mechanism to parse $j statements - that shall also be part of the library.

Norman Megill

unread,
Aug 15, 2021, 1:10:00 PM8/15/21
to Metamath
>   Since there was broad agreement to build a community project on Rust, could we move the repository into GitHub's Metamath organization? Norm, would you agree?

That's fine by me.

Norm

David A. Wheeler

unread,
Aug 15, 2021, 2:59:39 PM8/15/21
to Metamath Mailing List, Stefan O'Rear

On Aug 15, 2021, at 9:23 AM, Thierry Arnoux <thierry...@gmx.net> wrote:

Hi Stefan, David and all,

Eight months after this discussion was launched I have made what I think is an important breakthrough: the "grammar" addition to SMM3 / metamath-knife can now parse all statements of set.mm, including the complex ones where look ahead is required, into tree-like "formula" structures. Code is as always available on my fork for those interested.


Cool!

I briefly looked though it here:

On my MacBookPro 2015, parsing the full database takes around 3s on one single thread, and 1s when multi-threaded. This is roughly the same cost as the "scopeck" step. Building the grammar for the parsing, based on the syntactical axioms, only takes around 13ms. I was surprised how easy it was to introduce multi-threading.


I think that’s one of the bigger arguments for Rust. Modern systems have multiple CPUs, but it’s often hard to safely use them. Rust’s compiler-enforced ownership model simplifies much while retaining both safety & speed.
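
As a flavor of how little ceremony that takes, a hedged sketch using the rayon crate as a dependency (not necessarily what metamath-knife does internally); `Statement` and `check` are placeholders:

```
use rayon::prelude::*;

// Placeholders for a parsed statement and its (potentially expensive) check.
struct Statement { label: String, proof_ok: bool }

fn check(stmt: &Statement) -> Result<(), String> {
    if stmt.proof_ok { Ok(()) } else { Err(format!("{} failed", stmt.label)) }
}

fn main() {
    let statements: Vec<Statement> = (0..10_000)
        .map(|i| Statement { label: format!("thm{}", i), proof_ok: true })
        .collect();

    // Swapping `iter` for `par_iter` is usually the whole change: rayon spreads the
    // work over a thread pool, and the borrow checker guarantees the closure is
    // safe to run from many threads at once.
    let errors: Vec<String> = statements
        .par_iter()
        .filter_map(|s| check(s).err())
        .collect();

    println!("{} statements checked, {} errors", statements.len(), errors.len());
}
```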

I believe this is the first step to make `metamath-knife` a library for a new community proof assistant. I want to continue to evolve the resulting "Formula" structure so that it supports basic functionalities such as substitution and unification. However I don't want to go too far beyond that, I think further functionality shall be placed in the "application" part.

Stefan, David, I would like to soon move on pulling this code into an official rust crate. You own the repositories for `smetamath`, respectively `metamath-knife`.
How could we proceed?

Specifically:

  • I believe the 'statement parsing' functionality shall be part of a basic Metamath library. Do we have an agreement here? This would mean I could go ahead an issue a PR :-)
I think that makes sense. At least send me a pull request. 
Well, I’m biased. Pick me, pick me :-). Though certainly feel free to propose it to Stefan’s smetamath as well.
It’s quite easy for different repos to bring in commits from other places.

And just to make sure credit is given: I’ve made minor tweaks to his work, but Stefan O’Rear *absolutely* did the heavy lifting with smetamath.

  • Since there was broad agreement to build a community project on Rust, could we move the repository into GitHub's Metamath organization? Norm, would you agree?
I think that makes sense. If it’s okay, I’d like to move metamath-knife into the metamath repo.
I see Norm agreed to that; is there a lot of opposition?

I’ve had less time to work on metamath-knife than expected recently due to work & my father having a lot of health problems. But I still think a library in Rust for Metamath has a lot of promise.

--- David A. Wheeler

Mario Carneiro

unread,
Aug 16, 2021, 12:44:51 AM8/16/21
to Thierry Arnoux, metamath, David A. Wheeler, Stefan O'Rear
It sounds like the algorithm is quite similar to the LRParser implementation (that I have called "KLR" before on the list). It is almost an LR(0) parser, but then set.mm has certain "ambiguities" (not really the right word, more like a garden path sentence) that require the grammar to have some productions duplicated in order to keep to the LR fragment. Currently, there is a $j comment that says something like "unambiguous 'klr 5';" which is supposed to indicate that compositing depth 5 is sufficient to eliminate all ambiguities in the grammar (without this, parse table generation is only a semi-decision procedure for unambiguity; there is no guarantee of termination).

On Sun, Aug 15, 2021 at 12:00 PM Thierry Arnoux <thierry...@gmx.net> wrote:
Below are parser commands I needed to add to set.mm, in order to give hints to the parser about those ambiguities:

  $( Warn the parser about which particular formula prefixes are ambiguous $)
  $( $j ambiguous_prefix ( A   =>   ( ph ;
        type_conversions;
        ambiguous_prefix ( x e. A   =>   ( ph ;
        ambiguous_prefix { <.   =>   { A ;
        ambiguous_prefix { <. <.   =>   { A ;
  $)

It might have been possible to detect them automatically, but I took the shortcut to specify them in set.mm. I'll kindly ask those lines to be included in the official set.mm so that the parser works!

What's the syntax of these commands? Meaning aside, I would request that it stick to only keywords (identifiers) and quoted strings before the semicolon. Arbitrary tokens will make a mess of things - what if you need a token that is a semicolon? E.g.:

  $( $j ambiguous_prefix '(' 'A'   to   '(' 'ph' ;
        type_conversions;
        ambiguous_prefix '(' 'x' 'e.' 'A'   to   '(' 'ph' ;
        ambiguous_prefix '{' '<.'   to   '{' 'A' ;
        ambiguous_prefix '{' '<.' '<.'   to   '{' 'A' ;
  $)

 But I'm not sure how to read them beyond this. The parser I wrote would just composite more whenever a conflict is detected during parse table generation (and it can't help but detect the conflict since it interferes with the generation), so it didn't need hints and can just figure this out. All other things equal I think it would be better for the parser to do this itself, since they will have to be kept up to date with the database and most people won't be in a position to know how to adjust the hints.

Mario

Thierry Arnoux

unread,
Aug 16, 2021, 11:56:28 PM8/16/21
to meta...@googlegroups.com, Mario Carneiro, David A. Wheeler, Stefan O'Rear

Hi Mario,

Indeed set.mm's grammar is unambiguous, and as you write, these are not true ambiguities, but just prefixes which my initial naive implementation did not parse correctly. The concept of "garden-path" was new to me, but it describes this problem well: do you think I should rename the keyword?

Unfortunately the way I wrote the parser, these are not detected naturally when building the grammar, and I don't see how they could be. The first pass simply adds all syntactic axioms to the machine. Each different construct is added independently; they don't interfere with each other. In the example of my last mail, there are 3 different paths, " setvar e. setvar ", " ( wff /\ wff ) ", and " ( setvar e. class |-> class ) ". Even if the last two share " ( " as a common prefix, they differ afterwards because one is followed by a setvar, while the other is followed by a class, so the state machine goes to diverging states at that point.

I completely agree that it would have been better if the parser could have figured them out purely based on the indication that the grammar is KLR(5). The way I initially thought about handling that was to let the parser consider each possible combination of production against each production, and see if there was a conflict. However this would have been difficult to implement and computationally very intensive. Giving those hints was the best compromise I could find. I'd be happy if you or anyone else wants to have a closer look at the algorithm and propose improvements! In any case, I'll try to write down detailed explanations so that it is not only in emails.

You have a valid point about the risk of breakage when a new syntax is introduced. But these prefix ambiguities (or "garden paths") arose in set.mm only with basic constructs like class builders and maps-to notations, which we don't introduce anymore, or only extremely rarely: df-opab was introduced in 1994, df-oprab in 1995, and df-mpt in 2008, and none have been modified since. So the introduction of such garden paths is a once-in-a-decade event :-)

The meaning of the parser command " ambiguous_prefix X => Y " is "X shadows Y", i.e., one may initially mistakenly read "X", although "Y" might also be valid for the same prefix. "X" is always a prefix of a valid set.mm expression, like " ( x e. A " is a prefix for a mapping-to expression " ( x e. A |-> B ) ".

As for the syntax of the parser commands, they could also have been put in quoted strings. The (MM) parser unquotes the string if it finds quotes, and you can escape quotes, so that the unquoted ( is equivalent to the quoted '('. For a semicolon, you could write ';', and '\'' for a single quote. The way you write them also works.

BR,
_
Thierry


Mario Carneiro

unread,
Aug 17, 2021, 1:10:20 AM8/17/21
to Thierry Arnoux, metamath, David A. Wheeler, Stefan O'Rear
On Mon, Aug 16, 2021 at 11:56 PM Thierry Arnoux <thierry...@gmx.net> wrote:

Hi Mario,

Indeed set.mm's grammar is unambiguous, and as you write, these are not true ambiguities, but just prefixes which my initial naive implementation did not parse correctly. The concept of "garden-path" was new to me, but describes well this problem: do you think I shall rename the keyword?

I'm not too fussed about the name of the keyword, although if it is something specific to your algorithm then that should probably be reflected in the keyword.

Unfortunately the way I wrote the parser, these are not detected naturally when building the grammar, and I don't see how they could. The first pass is simply adds all syntactic axioms to the machine. Each different construct is added independently, they don't interfere with each other. In the example of my last mail, there are 3 different paths, " setvar e. setvar ", " ( wff /\ wff ) ", and " ( setvar e. class |-> class ) ". Even if the last two share " ( " as a common prefix, they differ afterwards because one is followed by a setvar, while the other is followed by a class, so the state machine goes to diverging states at that point.

I completely agree that it would have been better if the parser could have figured them out purely based on the indication that the grammar is KLR(5). The way I initially though about handling that was to let the parser consider each possible combination of production against each production, and see if there was a conflict. However this would have been difficult to implement and computationally very intensive. Giving those hints was the best compromise I could find. I'd be happy if you or anyone else wants to have a closer look at the algorithm and propose improvements! In any case, I'll try to write down detailed explanations so that it is not only in emails.

I have a description of the KLR algorithm from a discussion with Stan Polu which I may or may not have posted here before. I'll put it at the bottom of this message. To summarize, it uses the standard LR(0) parse table generation algorithm to produce the tree of states. Doing this naively results in shift-reduce conflicts, and these conflicts are resolved by compositing the rule to be reduced so that both sides become shifts.

You have a valid point about the risk of breakage when a new syntax is introduced. But these prefix ambiguities (or "garden paths") arose in set.mm only with basic constructs like class builders and maps-to notations, which we don't introduce anymore, or extremely rarely: df-opab was introduced in 1994, df-oprab in 1995, and df-mpt in 2008, and never modified since. So the introduction of such garden paths is once-in-a-decade event :-)

The meaning of the parser command " ambiguous_prefix X => Y " is "X shadows Y", i.e., one may initially mistakenly read "X", although "Y" might also be valid for the same prefix. "X" is always a prefix of a valid set.mm expression, like " ( x e. A " is a prefix for a mapping-to expression " ( x e. A |-> B ) ".

As for the syntax of the parser commands, they could also have been put in quoted strings. The (MM) parser unquotes the string if it finds quotes, and you can escape quotes, so that the unquoted ( is equivalent with the quoted '('. For a semicolon, you could write ';', and '\'' for a single quote. The way you write them also works.

Is that parsing specific to the ambiguous_prefix command? Because I would prefer for the outer $j / $t parser to not have to be too smart and know all the commands (especially since $j commands are specifically specced to be ignorable if the parser doesn't know the head keyword). Since spaces are optional inside $j / $t blocks (at least around `;`), you need to be able to lex properly, and allowing arbitrary tokens would make this difficult. That is, I'm advocating for a restrictive outer grammar where everything is identifiers and strings, and then you can do some inner parsing specific to the ambiguous_prefix command if you want.
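
A minimal sketch of such a restrictive outer lexer, splitting a $j payload into identifiers, quoted strings, and `;` terminators; the backslash escaping follows the convention described earlier in the thread, and this is one possible reading rather than a spec:

```
/// Tokens of the outer $j / $t layer: bare identifiers, quoted strings, and ';'.
#[derive(Debug)]
enum Tok {
    Ident(String),
    Str(String),
    Semi,
}

/// Lex a $j payload into identifiers, quoted strings and semicolons.
/// Inside a quoted string, a backslash escapes the next character.
fn lex(input: &str) -> Result<Vec<Tok>, String> {
    let mut toks = Vec::new();
    let mut chars = input.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
        } else if c == ';' {
            chars.next();
            toks.push(Tok::Semi);
        } else if c == '\'' {
            chars.next();
            let mut s = String::new();
            loop {
                match chars.next() {
                    Some('\\') => s.push(chars.next().ok_or("dangling escape")?),
                    Some('\'') => break,
                    Some(ch) => s.push(ch),
                    None => return Err("unterminated string".into()),
                }
            }
            toks.push(Tok::Str(s));
        } else {
            // Bare identifier / keyword: read until whitespace, quote or semicolon.
            let mut s = String::new();
            while let Some(&ch) = chars.peek() {
                if ch.is_whitespace() || ch == ';' || ch == '\'' {
                    break;
                }
                s.push(ch);
                chars.next();
            }
            toks.push(Tok::Ident(s));
        }
    }
    Ok(toks)
}

fn main() {
    println!("{:?}", lex("ambiguous_prefix '(' 'A' to '(' 'ph' ;"));
}
```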
 
===========================================

[This](https://github.com/digama0/mmj2/blob/master/src/mmj/verify/LRParser.java#L151-L208) is the new part of the code that distinguishes the KLR parser from LR(0). A "conflict" is a place where an LR(0) parser would fail outright.

During parse table generation each state is associated with a bunch of partially read productions that agree on a common prefix, and in this case you are stuck at the state:
```
-> { o <. setvar , setvar >. | wff }
-> { o class }
```
This is known as a shift-reduce conflict, and usually shifting is the right answer, so that's built in as a heuristic, which is why lark takes the first option over the second. But neither choice is "correct", because this changes the grammar - you are now rejecting a string that should be valid to the grammar (`{ <. x , y >. }` in this case) - so essentially you are finding a "closest LALR(1) approximation" to the grammar when you use lark with this heuristic.

To resolve this, the question is what to do from that state if you read a `<.`. We haven't actually hit the conflict yet. In the first production it's clear that we should step to `-> { <. o setvar , setvar >. | wff }`, but the second production requires us to look at the `class` nonterminals that start with `<.`. (In fact, in the first state we also have all productions for the class nonterminal, like `-> o 1` and `-> o ( class u. class )` and many others. These are represented in a special way in LRParser.java to save space.) Let's step through the states that the example takes. The `shift <.` step takes us to:
```
-> { <. o setvar , setvar >. | wff }
-> <. o class , class >.
all setvar -> o rules
all class -> o rules
```
and `shift x` takes us to:
```
-> x o
```
Since we are now at the end of a production, we can reduce with `setvar -> x` at this point, and there are no competing productions so this is safe. This `reduce x` edge pops the stack and acts like a `shift setvar` edge from the previous step, leading to:
```
-> { <. setvar o , setvar >. | wff }
-> setvar o
```
The `-> setvar o` comes from the `class -> setvar` production. Now we are stuck, because we can both reduce with this production (which gives us a `cv` node) and shift a comma to continue with the first production. This is a shift-reduce conflict, and lark at this point will throw away the reduce option and shift here, leading to
```
-> { <. setvar , o setvar >. | wff }
all setvar -> o rules
```
which is not correct, as we have lost the ability to parse `{ <. x , y >. }`.

What I do instead to resolve this is "pre-compositing" the rules. We first got in trouble at
```
-> { <. setvar o , setvar >. | wff }
-> setvar o
```
which is a "bad state" because of the shift-reduce conflict. We want to remove the reduce node, and we do so by backing up to see how we got here. We obtained this state by `shift setvar` applied to
```
-> { <. o setvar , setvar >. | wff }
-> <. o class , class >.
all setvar -> o rules
all class -> o rules
```
and we want to repair this state so that we don't hit the train wreck one step from here. So we delete the offending rule `-> o setvar` and add the composition of `class -> setvar` with `class -> <. class , class >.` as a new "composite rule" which looks like a production `class -> <. setvar , class >.`, so that the "before" state instead looks like
```
-> { <. o setvar , setvar >. | wff }
-> <. o class , class >.
-> <. o setvar , class >.
all setvar -> o rules
all class -> o rules except -> o setvar
```
and we `shift setvar` from here instead, getting to
```
-> { <. setvar o , setvar >. | wff }
-> <. setvar o , class >.
```
and we have safely cleared the conflict. (The modified "before" state is not a real state, it is only used to calculate this new state. This state is replacing the original shift-reduce bad state as the result of `shift setvar` applied to
```
-> { <. o setvar , setvar >. | wff }
-> <. o class , class >.
all setvar -> o rules
all class -> o rules
```
.)
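As a side note, the compositing operation itself is easy to state on the grammar side: it is just a substitution of one production's right-hand side into another. Here is a small Rust sketch (made-up names, not the actual mmj2 or metamath-knife code):
```rust
#[derive(Clone, Debug, PartialEq)]
struct Production {
    lhs: String,
    rhs: Vec<String>,
}

/// Replace the nonterminal at position `pos` of `outer`'s right-hand side by
/// the whole right-hand side of `inner`. For example, compositing
/// `class -> <. class , class >.` with `class -> setvar` at position 1
/// yields the composite rule `class -> <. setvar , class >.`.
fn composite(outer: &Production, pos: usize, inner: &Production) -> Production {
    assert_eq!(outer.rhs[pos], inner.lhs);
    let mut rhs = outer.rhs.clone();
    rhs.splice(pos..=pos, inner.rhs.iter().cloned());
    Production { lhs: outer.lhs.clone(), rhs }
}

fn main() {
    let pair = Production {
        lhs: "class".into(),
        rhs: ["<.", "class", ",", "class", ">."].map(String::from).to_vec(),
    };
    let cv = Production { lhs: "class".into(), rhs: vec!["setvar".into()] };
    let composed = composite(&pair, 1, &cv);
    assert_eq!(composed.rhs, ["<.", "setvar", ",", "class", ">."]);
}
```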

To finish the example off, let's make it to the end. The next step is `shift ,` which takes us to
```
-> { <. setvar , o setvar >. | wff }
-> <. setvar , o class >.
all setvar -> o rules
all class -> o rules
```
(and `shift y` takes us to the simple `-> y o` state, so we plan to reduce there and come back here with `shift setvar`), and `shift setvar` from here takes us to
```
-> { <. setvar , setvar o >. | wff }
-> setvar o
```
which is another shift-reduce conflict. Again, we analyze the conflict to find out what to composite. We want to apply the `class -> setvar` production here, which was considered one step ago because closure over `-> <. setvar , o class >.` required us to add the `-> o setvar` production to the state. So we composite `class -> <. setvar , class >.` and `class -> setvar` to get a new `class -> <. setvar , setvar >.` production, create a temporary modified version of the previous state
```
-> { <. setvar , o setvar >. | wff }
-> <. setvar , o class >.
-> <. setvar , o setvar >.
all setvar -> o rules
all class -> o rules except -> o setvar
```
and use it to calculate the new result of `shift setvar`, which is
```
-> { <. setvar , setvar o >. | wff }
-> <. setvar , setvar o >.
```
and we have cleared another shift-reduce conflict. There is still one more to go, though, since the next step is `shift >.` which takes us to
```
-> { <. setvar , setvar >. o | wff }
-> <. setvar , setvar >. o
```
which is again a shift-reduce conflict. Now we must backtrack a lot, because we have to go back 5 steps (the length of the current reduce candidate) to find out which production required us to put `-> <. setvar , setvar >.` into the mix. In fact, five steps ago this production was not even `-> <. setvar , setvar >.` at all but rather `-> <. class , class >.`, but this makes no difference: we need two productions to do the compositing. This is the first state I posted, which looks like
```
-> { o <. setvar , setvar >. | wff }
-> { o class }
all class -> o rules
```
where among the `class` rules is `-> o <. class , class >.`. The class rules are there because of the `-> { o class }` rule, so we composite `class -> { class }` with `class -> <. setvar , setvar >.` to get the temporary state
```
-> { o <. setvar , setvar >. | wff }
-> { o class }
-> { o <. setvar , setvar >. }
all class -> o rules except -> o <. class , class >.
```
and now shift 5 steps forward along `<. setvar , setvar >.` to get the repaired state
```
-> { <. setvar , setvar >. o | wff }
-> { <. setvar , setvar >. o }
```
Now we have finally cleared the last hurdle: we can now either `shift |` or `shift }` depending on what we see next, which lets us parse both `{ <. x , y >. | ph }` and `{ <. x , y >. }`. For the purposes of the example, let's say we shift `}`, so we get to the state
```
-> { <. setvar , setvar >. } o
```
and a reduce is unambiguous.

But what are we reducing anyway? I've been talking about compositing rules, and what I haven't been showing here is that each production is associated with a partial expression tree. You can imagine them as lambda expressions. The original rules from the grammar will have a result like `\e1 e2. (cpr e1 e2)` for the production `class -> <. class , class >.`, which is to say that we take the two class expressions in the brackets and put them into the two arguments of a `cpr` node. The arguments aren't always in parse order; for example, I think `wal` takes its arguments in the order `wal ph x` (because the `$f` variable declaration of `vx` comes after `wph`), so the production `wff -> A. setvar wff` has result `\e1 e2. (wal e2 e1)`.

Now compositing rules has the effect of a composition of these two expressions. In the first part we composited `class -> <. class , class >.` with `class -> setvar`, with associated results `\e1 e2. (cpr e1 e2)` and `\e1. (cv e1)`, so we insert the `cv` expression in for `e1` of the `cpr` expression to get a new result `\e1 e2. (cpr (cv e1) e2)` for the new production `class -> <. setvar , class >.`. Similarly, the production `class -> <. setvar , setvar >.` is formed by inserting `cv` in for `e2` in this new production, resulting in `\e1 e2. (cpr (cv e1) (cv e2))`. And finally, we composited this expression with the `class -> { class }` production with result `\e1. (csn e1)`, and this composition yields, for the composite rule `class -> { <. setvar , setvar >. }`, the result `\e1 e2. (csn (cpr (cv e1) (cv e2)))`. This is what we reduce with.

So for the example `{ <. x , y >. }`, we first reduce using `setvar -> x := vx`, then `setvar -> y := vy`, then `class -> { <. setvar , setvar >. } := \e1 e2. (csn (cpr (cv e1) (cv e2)))` to get the final parse tree `(csn (cpr (cv vx) (cv vy)))`.
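To illustrate what these result templates and their composition can look like in code, here is a self-contained Rust sketch (a hypothetical representation, not metamath-knife's actual types). It rebuilds the `class -> { <. setvar , setvar >. }` template from the example above and checks that filling in `vx` and `vy` gives the expected parse tree:
```rust
// Illustrative only. A template is a partial expression tree with numbered
// holes for the arguments collected while parsing (the e1, e2 above).
#[derive(Clone, Debug, PartialEq)]
enum Template {
    Arg(usize),                        // hole for the i-th parsed argument
    Node(&'static str, Vec<Template>), // syntax node such as cpr, cv, csn
}
use Template::{Arg, Node};

// Shift every hole index up by `by` (used when an inner template is spliced
// into hole `k`: its own arguments start at position `k` of the composite).
fn shift(t: &Template, by: usize) -> Template {
    match t {
        Arg(i) => Arg(i + by),
        Node(label, cs) => Node(*label, cs.iter().map(|c| shift(c, by)).collect()),
    }
}

// Composite two templates: substitute `inner` (which has `inner_arity` holes)
// for hole `k` of `outer`, renumbering the remaining holes of `outer`.
fn compose(outer: &Template, k: usize, inner_arity: usize, inner: &Template) -> Template {
    match outer {
        Arg(i) if *i == k => shift(inner, k),
        Arg(i) if *i > k => Arg(i + inner_arity - 1),
        Arg(i) => Arg(*i),
        Node(label, cs) => {
            Node(*label, cs.iter().map(|c| compose(c, k, inner_arity, inner)).collect())
        }
    }
}

// Reduce: fill the holes with the subtrees already on the parse stack.
fn instantiate(t: &Template, args: &[Template]) -> Template {
    match t {
        Arg(i) => args[*i].clone(),
        Node(label, cs) => Node(*label, cs.iter().map(|c| instantiate(c, args)).collect()),
    }
}

fn main() {
    let cpr = Node("cpr", vec![Arg(0), Arg(1)]); // class -> <. class , class >.
    let cv = Node("cv", vec![Arg(0)]);           // class -> setvar
    let csn = Node("csn", vec![Arg(0)]);         // class -> { class }

    // \e1 e2. (cpr (cv e1) e2)            for  class -> <. setvar , class >.
    let step1 = compose(&cpr, 0, 1, &cv);
    // \e1 e2. (cpr (cv e1) (cv e2))       for  class -> <. setvar , setvar >.
    let step2 = compose(&step1, 1, 1, &cv);
    // \e1 e2. (csn (cpr (cv e1) (cv e2))) for  class -> { <. setvar , setvar >. }
    let braces = compose(&csn, 0, 2, &step2);

    // Reducing `{ <. x , y >. }` with this composite rule:
    let (vx, vy) = (Node("vx", vec![]), Node("vy", vec![]));
    let tree = instantiate(&braces, &[vx, vy]);
    assert_eq!(
        tree,
        Node("csn", vec![Node("cpr", vec![
            Node("cv", vec![Node("vx", vec![])]),
            Node("cv", vec![Node("vy", vec![])]),
        ])])
    );
}
```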

Thierry Arnoux

unread,
Aug 17, 2021, 4:46:07 AM8/17/21
to Mario Carneiro, metamath, David A. Wheeler, Stefan O'Rear

Many thanks for the description of the KLR parser! I'll take the time to read and understand it!

Concerning the parsing of $j comments, I've implemented it in two steps:

  • a first step, which is completely agnostic of how the commands will be used, and for which I've followed the Metamath book, chapter 4.4.3: Additional Information Comment ($j). This states:
    Within an additional information comment is a sequence of one or more commands of the form command arg arg ... ; where each of the zero or more arg values can be either a quoted string or a keyword. Note that every command ends in an unquoted semicolon.
    This step does not interpret the commands, but simply saves them as a list of arguments in the database. At this step, I'm removing the quotes if there were any.

  • only in a second step does the parser, if and when it is invoked, fetch the "ambiguous_prefix" commands and interpret their arguments in its own specific way. I'm also using the "$j syntax" parser commands to get the list of type codes, find out which one is the "provable" type code, and learn that provable statements shall ultimately parse as wff.

This way, any other part or user of the library also has access to all parser commands, pre-processed in a very minimal way.

BR,
_
Thierry

Mario Carneiro

unread,
Aug 17, 2021, 1:57:29 PM8/17/21
to Thierry Arnoux, metamath, David A. Wheeler, Stefan O'Rear
On Tue, Aug 17, 2021 at 4:46 AM Thierry Arnoux <thierry...@gmx.net> wrote:

Concerning the parsing of $j comments, I've implemented it in two steps:

  • a first step, which is completely agnostic of how the commands will be used, and for which I've followed the Metamath book, chapter 4.4.3: Additional Information Comment ($j). This states:
    Within an additional information comment is a sequence of one or more commands of the form command arg arg ... ; where each of the zero or more arg values can be either a quoted string or a keyword. Note that every command ends in an unquoted semicolon.
    This step does not interpret the commands, but simply saves them as a list of arguments in the database. At this step, I'm removing the quotes if there were any.
For this "outer parser", you should probably retain the distinction between a string (and store the decoded contents of the string), and a keyword (and store the literal text of the keyword), in an enum. The spec is ambiguous about what a "keyword" is (assuming there is no additional text in the spec on that topic) but I interpret it as an identifier (i.e. something that could be an MM label). Here (https://github.com/digama0/mm0/blob/master/mm0-hs/src/MM0/FromMM/Parser.hs#L319-L343) is the outer $j parser I wrote for MM0, and it tokenizes into the "Keyword(String)", "String(String)" and "Semi" variants, where identifiers are lexed using the built in haskell lexer (which is approximate, but should yield roughly the same results as MM label identifiers, with the exception that MM thinks 0fin is an identifier while most programming languages disagree).

Mario

David A. Wheeler

unread,
Aug 28, 2021, 1:45:31 PM8/28/21
to Metamath Mailing List, Thierry Arnoux, Stefan O'Rear, Mario Carneiro
Thierry Arnoux has created this big pull request here:
"This PR brings in the "grammar" changes discussed with Mario...
It also includes a mechanism for parsing parser commands."

I intend to merge this, but it’s a substantial change, so if someone thinks there’s
a serious problem I think we’d all like to know!

--- David A. Wheeler

Zheng Fan

unread,
Nov 4, 2021, 7:01:15 PM11/4/21
to Metamath
Hi all,

I downloaded Thierry Arnoux's metamath-knife repo and ran it on a version of set.mm (not the current one), and it runs for 10+ min without any output. What is really weird is that when I ran it on the predicate fragment (set-pred.mm), it finished in a fraction of a second. Could you help me figure out why? I am writing a metamath parser myself and I want to borrow some ideas from your implementation. Is there a write-up giving a high-level overview of the algorithm used? Thanks very much.

Thierry Arnoux

unread,
Nov 4, 2021, 11:54:14 PM11/4/21
to meta...@googlegroups.com, Zheng Fan

Hi Zheng Fan,

I'd be glad to help you use and understand that program. There have been several new developments since that email, and my personal fork might not be up to date; can you please try again with the code on David's branch?

    https://github.com/david-a-wheeler/metamath-knife

I hope that project will soon be in a state where it can be moved to the set.mm GitHub organization!


Also, could you please point me to the version of set.mm and the command line parameters you're using, so that I can try to reproduce the issue?

In order to avoid bothering this list with our possible back-and-forth trying to pinpoint the root cause of the issue, you could also open an issue in that GitHub repository, and we can keep our exchanges there.


A better description of the algorithm is due; I tried to introduce it in one of my posts here.


More generally, thank you very much for the attribution, but as the thread title suggests, the idea is to build a community verifier. I would not like to see this project as mine but rather as the community's. This is really a community effort, starting from Stefan O'Rear's foundations, with major contributions from Mario and David. In his original post, Mario expressed his wish to avoid fragmentation in the community.

The plan is to start a new "community verifier" and build it with those tasks in mind. I think there has been too much fragmentation in verifier land; this makes it very hard for newcomers, especially those that are not especially computer literate such as traditional mathematicians.
So the spirit would rather be that instead of building your own verifier/proof assistant, you're welcome to contribute to this community project :-)

BR,

_
Thierry


Thierry Arnoux

unread,
Feb 14, 2022, 12:02:45 AM2/14/22
to meta...@googlegroups.com, David A. Wheeler, Stefan O'Rear, Mario Carneiro

Hi all,

I'd like to log some modest progress on a Visual Studio Code assistant, which reuses a lot of Mario's work on MM0, and relies on metamath-knife.

I've put up a small screen recording here, demonstrating basic hover and go-to-definition support:

  • when hovering over a label, the statement and hypotheses are shown, as well as the statement's associated comment,
  • the go-to-definition command shall be self-explanatory!

BR,
_
Thierry


savask

unread,
Feb 14, 2022, 11:39:54 AM2/14/22
to Metamath
Thierry, great work! I was wondering, is your metamath LSP server tied to Visual Studio Code? Some other IDEs and editors support LSP (for example, emacs and vim with a plugin), and I'm curious how hard it would be to use your server with a different setup.

Thierry Arnoux

unread,
Feb 14, 2022, 12:33:16 PM2/14/22
to meta...@googlegroups.com, savask
It would probably be relatively easy to port: this is indeed an implementation of an LSP server, and it shall be portable to any other IDE with support for the LSP protocol.

There is a thin TypeScript layer for the VSCode extension which basically relays the messages to the LSP server itself, written in Rust (following the same architecture as MM0's assistant).
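(For anyone curious what the Rust side of such a setup can look like, below is a bare-bones sketch using the lsp-server and lsp-types crates, just to show the general shape of the stdio server loop; this is not metamath-vspa's actual code, and crate APIs may differ between versions.)
```rust
// Minimal illustrative LSP server skeleton over stdio (not metamath-vspa).
use lsp_server::{Connection, Message};
use lsp_types::{HoverProviderCapability, OneOf, ServerCapabilities};

fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // The editor extension (e.g. a thin TypeScript layer in VSCode) spawns
    // this binary and talks to it over stdin/stdout.
    let (connection, io_threads) = Connection::stdio();

    let capabilities = serde_json::to_value(ServerCapabilities {
        hover_provider: Some(HoverProviderCapability::Simple(true)),
        definition_provider: Some(OneOf::Left(true)),
        ..Default::default()
    })?;
    let _init_params = connection.initialize(capabilities)?;

    // Main loop: each hover / go-to-definition request would be answered by
    // querying the loaded database (e.g. via metamath-knife).
    for msg in &connection.receiver {
        match msg {
            Message::Request(req) => {
                if connection.handle_shutdown(&req)? {
                    break;
                }
                // dispatch on req.method: "textDocument/hover",
                // "textDocument/definition", ...
            }
            Message::Notification(_) | Message::Response(_) => {}
        }
    }
    io_threads.join()?;
    Ok(())
}
```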

Glauco

unread,
Feb 14, 2022, 2:54:21 PM2/14/22
to Metamath
Thierry, this is so cool!

Is an mmj2 Ctrl+U-like action available? Please feel free to ask if you need help with this one (I don't know anything about Rust, but it's now at the top of my list of things to learn).

Can't wait to build and try it!

Glauco

Thierry Arnoux

unread,
Feb 14, 2022, 9:40:33 PM2/14/22
to meta...@googlegroups.com
Hi Glauco,

MMJ2-style unification is certainly a goal! Many building blocks for this are actually already in place - not all, but we’re slowly getting there.

An even more remote goal would be to use this framework to build a more tactics-based proof assistant, but that’s in a much more distant future!

Right now metamath-knife’s API is still changing (and I expect it to keep changing). One thing you’ll have to be careful about when building metamath-vspa is to use the correct version. I hope we fix that soon.

Any help is certainly welcome! The choice of Rust was discussed in this forum a while ago, one motivation being that Raph and Mario both master it. I’ve been teaching myself Rust; I’m still not very good and I certainly still have a lot to learn, but I’ve had a lot of fun learning it. It has a lot of features I was hoping for when programming in C.

Each editor function (diagnostics, completion, references, search, formatting, etc.) is a separate feature that can be built in parallel and stay relatively independent, so it shall be very feasible to have several people working together on that project.

BR,
_
Thierry


> On 15 Feb 2022, at 03:54, Glauco <glaform...@gmail.com> wrote:
>
> Thierry, this is so cool!

Glauco

unread,
Apr 25, 2022, 9:25:58 AM4/25/22
to Metamath
Is there a formal spec for the mmp format? An EBNF like what we can find in Appendix E (of the Metamath book) for the mm spec?

I have a TypeScript parser/verifier for .mm/.mmp files that's working pretty well, but I'm adding new features, and I would like to standardize the names of my classes/properties/methods against an "official" spec.

If a BNF is not available, is there a "standard" name for the "blob" at the beginning of an mmp statement? I mean, this guy here:

72:71,70:mpbid

(I've looked at some mmj2 source code, but I've not been able to easily find a place where that blob is referenced with a name)

Thanks in advance
Glauco

Thierry Arnoux

unread,
Apr 25, 2022, 9:59:19 AM4/25/22
to meta...@googlegroups.com, Glauco

Hi Glauco,

In MMJ2, here are some methods for mmj.pa.ProofStepStmt:

    public String getStepName();
    public String[] getHypSteps();
    public void setRef(Stmt ref);
    public Stmt getRef();
    public String getRefLabel();
    public Formula getFormula();
    public boolean isHypothesisStep();
    public boolean isDerivationStep();

So:

  • "StepName" would be the name of the step, "72" in your example,
  • "HypSteps" would be the essential hypotheses, "71,70" in your example,
  • "RefLabel" would be the label of the theorem applied, like "mpbid",
  • and "formula" is the formula itself (not shown in your example).

Any documentation about MMJ2 should be here: https://github.com/digama0/mmj2/tree/master/doc

In metamath-vspa, I've been using similar names.

BR,
_
Thierry


Thierry Arnoux

unread,
Apr 25, 2022, 11:37:35 AM4/25/22
to meta...@googlegroups.com

Hi all,

FYI, I've added a few screenshots to the README file of the metamath-vspa project, including one with an example of unification at the bottom of the page (still a work in progress...)

So I'm still slowly getting there...

BR,
_
Thierry

Glauco

unread,
Apr 25, 2022, 11:47:33 AM4/25/22
to Metamath
Hi Thierry,

  • "StepName" would be the name of the step, "72" in your example,
  • "HypSteps" would be the essential hypotheses, "71,70" in your example,
  • "RefLabel" would be the label of the theorem applied, like "mpbid",
  • and "formula" is the formula itself (not shown in your example)

I've come up with names for the individual "components" of the blob, and for the formula, but I was looking for a name for the whole blob (from the standpoint of my whitespace-based tokenizer, it's the first token of an mmp statement, when present; then of course another parser breaks this token down into its components).

Are you aware of a formal syntax description of .mmp files?

Thanks again
Glauco

Thierry Arnoux

unread,
Apr 25, 2022, 12:20:53 PM4/25/22
to meta...@googlegroups.com, Glauco

Hi Glauco,

I think I found what you're looking for:

http://mmj2-doc.tirix.org/doc/ProofAssistantGUIDetailedInfo.html


As far as I'm concerned, I only distinguish the proof step as a whole and then these components, so I did not have to name that "prefix".

The comments in ProofWorksheet.loadWorksheet contain some information about what is being checked when loading a proof step, but that is not a definition either.

It could be useful to have such a definition of what a "Metamath Proof File" is. My previous Eclipse plugin (which was a shell around MMJ2) used the same format, metamath-knife can output that format, and I'm starting to use the exact same format in metamath-vspa.

Maybe we could simply write a Markdown (".md") file in set.mm to fill that purpose?

BR,
_
Thierry

Mario Carneiro

unread,
Apr 25, 2022, 1:46:32 PM4/25/22
to metamath, Glauco
Note that the syntax is a bit more flexible than that document implies; one of my first additions to mmj2 was to make the step:hyp:ref blob accept more abbreviated forms. Also, an optional "!" may precede the blob; it indicates whether the step is to be solved using the default automation.

The unabbreviated syntax is (!)?step:hyps:ref, where:
* step can be hN | qed | N | "" where N is any sequence of alphanumeric characters
* hyps is a comma-separated list of "" | ? | N where N is a step label
* ref is "" | ? | #N | T where N is a step label and T is a theorem label

In most places "" and ? are treated the same, but a hyp list [a,b,?] is not always the same as [a,b]; the latter is sometimes interpreted as a theorem which refers only to steps a and b while the former can also refer to other steps. (This is confusing and annoying in practice, so some parts of mmj2 ignore the distinction and always allow other steps.)

The abbreviations allow you to leave off some of the colons. Here are the abbreviations I recall:

#ref -> ::#ref
(!)?step -> (!)?step::
(!)?step:ref -> (!)?step::ref
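For what it's worth, here is a rough Rust sketch (hypothetical, not mmj2's code) of parsing the unabbreviated form of that prefix into the components Thierry listed earlier; handling the abbreviated forms above would just take a little extra case analysis on the number of colons:
```rust
#[derive(Debug, Default, PartialEq)]
struct StepPrefix {
    auto: bool,             // leading "!": solve using the default automation
    step_name: String,      // e.g. "72", "qed", "h1", or ""
    hyp_steps: Vec<String>, // e.g. ["71", "70"]; "?" entries are kept as written
    ref_label: String,      // e.g. "mpbid", "#3", "?", or ""
}

// Parse the unabbreviated (!)?step:hyps:ref form, e.g. "72:71,70:mpbid".
fn parse_step_prefix(blob: &str) -> StepPrefix {
    let mut rest = blob;
    let auto = rest.starts_with('!');
    if auto {
        rest = &rest[1..];
    }
    let mut parts = rest.splitn(3, ':');
    let step_name = parts.next().unwrap_or("").to_string();
    let hyps = parts.next().unwrap_or("");
    let ref_label = parts.next().unwrap_or("").to_string();
    let hyp_steps = if hyps.is_empty() {
        Vec::new()
    } else {
        hyps.split(',').map(String::from).collect()
    };
    StepPrefix { auto, step_name, hyp_steps, ref_label }
}

fn main() {
    let p = parse_step_prefix("72:71,70:mpbid");
    assert_eq!(p.step_name, "72");
    assert_eq!(p.hyp_steps, ["71", "70"]);
    assert_eq!(p.ref_label, "mpbid");
    assert!(!p.auto);
}
```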

