FYI: Started an Eclipse plug-in for Metamath, based on Xtext

Marnix Klooster

Oct 22, 2015, 9:29:07 AM
to Metamath
Hello all,

For your information: as a side / spare time project, I recently started https://github.com/marnix/metamath-eclipse-xtext.  From the README:

My goal is to be able to edit Metamath .mm files in an Eclipse-based editor, with proof assistance and everything one needs to create Metamath proofs. Hopefully most of the work can be done in the editor, in keeping with Metamath's philosophy of treating the .mm file as source code.

Please note that this project is in the very early stages, and doesn't do much yet. However, even if I don't find the time to continue on it, hopefully someone else might pick it up as a useful starting point...

[...]

Feel very free to create a GitHub issue, open a pull request, or drop me a line if you have any opinions, bug reports, requests, or anything else about this project.  Thanks!


Because Xtext does all of the heavy lifting, I was able to put this together fairly quickly.  In its current state, this Eclipse plugin can view and navigate .mm files if they are not too large.

Caveat emptor, and enjoy!

Groetjes,
 <><
Marnix
--
Marnix Klooster
marnix....@gmail.com

fl

Oct 22, 2015, 10:43:03 AM
to Metamath

My goal is to be able to edit Metamath .mm files in an Eclipse-based editor

With the size of set.mm?
--
FL

Mario Carneiro

Oct 22, 2015, 10:45:24 AM
to metamath
First, I'd like to say that this is a great idea, and if it pans out I will be very interested to use it for general work. I'd like some clarification on the "proof assistant" aspect, though. As you (presumably) know, mmj2 uses a separate syntax for working with incomplete proofs, the .mmp "proof worksheet" format. I don't think that .mm files are suitable for modifying incomplete proofs, because the format mixes machine-constructed parts (the proofs) with human-edited portions like the comments and theorem statements in a way unlike most common programming languages. How do you (intend to) handle this in Eclipse? One hopes for a means to "pop out" a proof in an .mm file into a separate .mmp (or alternative proof worksheet format, if you prefer) editing session, which somehow retains the context of the .mm file it came from. My main interest lies in good editor interaction for proof assistant work, but a good syntax highlighter for .mm files is a nice bonus as well.

One refactoring you might want to consider sooner rather than later, since it sounds like the large-file approach is causing you issues, is a complete breakdown by section. Norm has talked about his EXTRACT/MERGE approach, but I'm thinking something much simpler: for each section, make a new file, referring to the immediately previous section by an import. This yields a linear dependence tree, which is less than ideal, but it should help you keep the file size down and not break Eclipse resources.

What can the plugin do currently? Even with no modification Eclipse can open a .mm file, but only as text, and I'm guessing that at the very least your plugin has some understanding of what it is reading. Can it parse or verify the file? (I will probably have more to say once I get a chance to try the plugin myself.)

Mario


Marnix Klooster

Oct 22, 2015, 11:57:00 AM
to Metamath
Hi Mario,

Thanks for your interest!

Let me try to answer your questions. Part of this should go into the README
in some shape or form...

To start with your last question, currently the plugin does basic syntax
highlighting, navigation (click on a label, constant, or variable to go to its
declaration), search for references, basic validation (every
label/constant/variable needs to be declared), and the 'rename' refactoring.

See the README.md on GitHub for a TODO list of things it does _not_ do yet.
:-)

Now, about how I intend to use this as a proof assistant. I don't want to
have a separate text format like .mmp, and I'd like to avoid having a separate
(non-editor) UI. So I'd like this thing to be modeless.

My basic idea is to do most of the proof tasks using auto-complete and refactorings.

Concretely, I would render the structure of a finished proof in a
'calculational' proof format, roughly as follows:

${
$d x y A $. $d x y B $. $d x y C $. $d x y D $.
$( Subset theorem for cross product. Generalization of Theorem 101 of
[Suppes] p. 52. (The proof was shortened by Andrew Salmon,
27-Aug-2011.) $)
xpss12 $p |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) ) $=

cA cB wss cC cD wss wa vx cv cA wcel vy cv cC wcel wa vx vy copab vx cv
cB wcel vy cv cD wcel wa vx vy copab cA cC cxp cB cD cxp cA cB wss cC cD
wss wa vx cv cA wcel vy cv cC wcel wa vx cv cB wcel vy cv cD wcel wa vx
vy cA cB wss vx cv cA wcel vx cv cB wcel cC cD wss vy cv cC wcel vy cv cD
wcel cA cB vx cv

ssel
$( ` |- ( A C_ B -> ( x e. A -> x e. B ) ) ` $)

cC cD vy cv

ssel
$( ` |- ( C C_ D -> ( y e. C -> y e. D ) ) ` $)

im2anan9
$( ` |- ( ( A C_ B /\ C C_ D )
->
( ( x e. A /\ y e. C ) -> ( x e. B /\ y e. D ) ) ) ` $)
ssopab2dv
$( ` |- ( ( A C_ B /\ C C_ D )
->
{ <. x , y >. | ( x e. A /\ y e. C ) }
C_
{ <. x , y >. | ( x e. B /\ y e. D ) } ) ` $)

vx vy cA cC
df-xp
$( ` |- ( A X. C ) = { <. x , y >. | ( x e. A /\ y e. C ) } ` $)

vx vy cB cD
df-xp
$( ` |- ( B X. D ) = { <. x , y >. | ( x e. B /\ y e. D ) } ` $)

3sstr4g
$( ` |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) ) ` $)

$.
$( [27-Aug-2011] $) $( [26-Aug-1995] $)
$}

(Note that here, the indentation level of a non-syntax step is determined by
the number of non-syntax steps that are left on the stack after processing it.
I find this a _lot_ easier to read than the tree-structure indentation that is
used on metamath.org and in the /NORMAL rendering.)
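
As a rough sketch of that rule (just an illustration in Java, not code from the plugin; the Step record and its fields are names made up for this sketch): replay the RPN proof, and after each non-syntax step record how many non-syntax conclusions remain on the stack.

import java.util.*;

public class CalcIndent {

    /** One RPN proof step: how many stack entries it pops, and whether its
     *  conclusion is a syntax construction (wff/class/set var) rather than |- . */
    record Step(String label, int popCount, boolean isSyntax) {}

    /** Indentation level for each non-syntax step, in proof order. */
    static List<Integer> indentLevels(List<Step> rpnProof) {
        Deque<Boolean> stack = new ArrayDeque<>();   // true = syntax result
        List<Integer> indents = new ArrayList<>();
        for (Step s : rpnProof) {
            for (int i = 0; i < s.popCount(); i++) stack.pop();   // consume hypotheses
            stack.push(s.isSyntax());
            if (!s.isSyntax())
                indents.add((int) stack.stream().filter(b -> !b).count());
        }
        return indents;
    }

    public static void main(String[] args) {
        // toy shape of the end of the proof above: two premises, then a combining step
        List<Step> proof = List.of(
            new Step("ssel", 0, false),
            new Step("ssel", 0, false),
            new Step("im2anan9", 2, false));
        System.out.println(indentLevels(proof));   // [1, 2, 1]
    }
}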

With the proper highlighting, this would make the essence of the proof fairly
readable, I think.

Now, imagine also rendering _incomplete_ proofs in this form, starting from the
end/bottom, and leaving ?s in the right places, together with the expressions
they represent: these are the remaining proof obligations.

Then auto-complete on ?s can show a list of applicable statements. Since this
list will often be long, this is where searching on label and on statement
content comes in somehow. Of course, preference should be given to the
hypotheses.

Also, it should be possible to 'zoom in' on a subexpression. For example,
when starting the above proof of ~ xpss12 , I might select ` ( A X. C ) ` and
then ~ sseq1d

|- ( ph -> A = B ) ===> |- ( ph -> ( A C_ C <-> B C_ C ) )

could be suggested as the first step, based on the structure of the final
expression.

Finally, refactorings can be used to say things like, "make this current
subproof or ? step into a new $p statement", or the inverse, "inline the proof
of this step at this point." (For Eclipse users who program in Java: these
are Extract Method and Inline Method.) In that way the proofs can be
reorganized easily to have a structure that can clearly be understood.

Once the proof is complete, it can be refactored into a compressed proof,
and we're done.

Note that this does not give us a way to attach information to places in a
proof. However, if that is really required for practical use, then I'd
suggest an extension or a convention that would keep that information in
the presence of proof compression. I strongly prefer that over separate
.mmp files or something like that.

If it is necessary to work in a 'smaller context', then it would be sufficient
to create a separate .mm file which starts with $[ set.mm $] .  And yes, I've
already considered basically building in the script that Stefan uses to split
up set.mm for his https://github.com/sorear/set.mm-history repository, and
then reconstructing it using a single file with many include statements.

Does that give you an idea of my ideas?

Groetjes,
<><
Marnix
--
Marnix Klooster
marnix....@gmail.com

Mario Carneiro

Oct 22, 2015, 12:38:30 PM
to metamath
Okay, I think I see where you are coming from. The general idea of using heavily annotated $= blocks is actually a pretty good approach to inlining a worksheet, but I see some problems with the details. First, you should not rely on the "normal" proof mode: it is a pure tree and does not allow reusing subtrees, which the "compressed" format encodes and .mmp files represent with backreferences. Second, it does not allow orphan steps - I don't want to have to use dummylink just to keep steps that (temporarily, during proof construction) do not contribute to the final product.

These both make it unlikely that a pure "annotated normal-form proof" will do the job. Instead, I would advocate an "inlined-mmp" style, such as:


  ${
    $d x y A $.  $d x y B $.  $d x y C $.  $d x y D $.
    $( Subset theorem for cross product.  Generalization of Theorem 101 of
       [Suppes] p. 52.  (The proof was shortened by Andrew Salmon,
       27-Aug-2011.) $)
    xpss12 $p |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) ) $= ? $( $x
50::ssel              |- ( A C_ B -> ( x e. A -> x e. B ) )
51::ssel              |- ( C C_ D -> ( y e. C -> y e. D ) )
52:50,51:im2anan9    |- ( ( A C_ B /\ C C_ D ) -> ( ( x e. A /\ y e. C ) -> ( x e. B /\ y e. D ) ) )
53:52:ssopab2dv     |- ( ( A C_ B /\ C C_ D ) -> { <. x , y >. | ( x e. A /\ y e. C ) } C_ { <. x , y >. | ( x e. B /\ y e. D ) } )
54::df-xp           |- ( A X. C ) = { <. x , y >. | ( x e. A /\ y e. C ) }
55::df-xp           |- ( B X. D ) = { <. x , y >. | ( x e. B /\ y e. D ) }
qed:53,54,55:3sstr4g
                   |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) )
      $) $.
      $( [27-Aug-2011] $) $( [26-Aug-1995] $)
  $}

(The ? in the proof can be replaced with the real compressed proof once the proof block is complete, or even while it is incomplete, although I fear that this may be too distracting if it updates too often.) Note that I am not married to the mmp format in any sense, but I want to make sure that all the features are supported in an alternative before I abandon it. Alternative styling of essentially the same information as above is also possible.

Your alternative indentation style is something I have not seen before; I look forward to seeing it in action.

This is probably minor, but I also think that 80 chars is not sufficient for mmp-style proof display: since formulas are often long and repetitive from one line to the next, it is advantageous to push the line length as long as possible while still fitting within the user display. The rest of the file should stay as is, but these expanded proof blocks are much easier to read with a line length closer to 150-250 chars.

Mario

Marnix Klooster

Oct 22, 2015, 3:59:22 PM
to Metamath
Hi Mario,

First about the viability of using an 'annotated normal' proof format: I don't think the issues you raise are that much of a problem.

 * For reusing sub-proofs, if it is the reuse of a syntax tree (as it usually is, I think), then I don't really care: this will basically be a bunch of lightly colored fluff that the user should not need to concern himself with.  If it is the reuse of a 'real' statement, then that can easily (and probably should) be refactored out into a separate $p lemma.

 * For 'dummylink' and the order of developing subproofs, personally I probably would not mind using 'dummylink'.  However, I can see why you would want to avoid it, and I see at least two other options that fit the refactoring/autocomplete approach: create a separate $p that you later 'inline' into the main proof; or put _two_ proofs between $= ... $., let the formatter render this as two calculations, and then later copy one proof over a ? in the other.

Then about a lemmon-like proof format.  I don't much like the 'referencing' kind of proof format, of which .mmp and /LEMMON are examples.  I much prefer a 'calculational' format, which is where my indentation rule comes from; see http://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1300.html and https://en.wikipedia.org/wiki/Structured_derivations for my sources of inspiration.

And _if_ we want to use this format, it seems to me that we could just do something like this:

    ${
        $d x y A $.  $d x y B $.  $d x y C $.  $d x y D $.
        $( Subset theorem for cross product.  Generalization of Theorem 101 of
           [Suppes] p. 52.  (The proof was shortened by Andrew Salmon,
           27-Aug-2011.) $)
        xpss12 $p |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) ) $=
   
        cA cB wss cC cD wss wa vx cv cA wcel vy cv cC wcel wa vx vy copab vx cv
        cB wcel vy cv cD wcel wa vx vy copab cA cC cxp cB cD cxp cA cB wss cC cD
        wss wa vx cv cA wcel vy cv cC wcel wa vx cv cB wcel vy cv cD wcel wa vx
        vy cA cB wss vx cv cA wcel vx cv cB wcel cC cD wss vy cv cC wcel vy cv cD
        wcel cA cB vx cv
       
        $( 500 $)             ssel      $( ` |- ( A C_ B -> ( x e. A -> x e. B ) ) ` $)
 
        cC cD vy cv

        $( 540 $)             ssel      $( ` |- ( C C_ D -> ( y e. C -> y e. D ) )
        $( 550:500,540 $)     im2anan9  $( ` |- ( ( A C_ B /\ C C_ D ) -> ( ( x e. A /\ y e. C ) -> ( x e. B /\ y e. D ) ) ) ` $)
        $( 560:550 $)         ssopab2dv $( ` |- ( ( A C_ B /\ C C_ D ) -> { <. x , y >. | ( x e. A /\ y e. C ) } C_ { <. x , y >. | ( x e. B /\ y e. D ) } ) ` $)

        vx vy cA cC           

        $( 610 $)             df-xp     $( ` |- ( A X. C ) = { <. x , y >. | ( x e. A /\ y e. C ) } ` $)
       
        vx vy cB cD

        $( 660 $)             df-xp     $( ` |- ( B X. D ) = { <. x , y >. | ( x e. B /\ y e. D ) } ` $)
        $( 670:560,610,660 $) 3sstr4g   $( ` |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) ) ` $)


        $.
        $( [27-Aug-2011] $) $( [26-Aug-1995] $)
    $}

(This would be the 'lemmon format refactoring'.)  And then specific commands and search options, just like mmj2 has, to fill in ? steps and do the actual proof assisting.

However, depending on demand and supply, we might actually create a separate .mmp editor.  In that case, if someone can come up with a fairly formal description of the format, I'd be obliged: I still have to try and run mmj2, and so far I've been put off by the arcane set-up and configuration options.  All I know of it is basically in David Wheeler's YouTube video (@David: thanks!).

Finally, yes, in my previous mail I did indeed do some line-wrapping for my own benefit while writing that mail, and for you reading it, but I agree that that should not be necessary: if the formulas become wider than (say) 200 characters, then the proof likely can be split up using lemmas, and would become both shorter and less wide.

Groetjes,
 <><
Marnix

Mario Carneiro

Oct 22, 2015, 7:22:34 PM
to metamath
On Thu, Oct 22, 2015 at 3:59 PM, Marnix Klooster <marnix....@gmail.com> wrote:
First about the viability of using an 'annotated normal' proof format: I don't think the issues you raise are that much of a problem.

 * For reusing sub-proofs, if it is the reuse of a syntax tree (as it usually is, I think), then I don't really care: this will basically be a bunch of lightly colored fluff that the user should not need to concern himself with.  If it is the reuse of a 'real' statement, then that can easily (and probably should) be refactored out into a separate $p lemma.

I think you are underestimating the difference between compressed and normal proof format for "typical" large proofs in set.mm. All those syntax tree statements or closure proofs can easily make the normal form 10 times longer than "packed" mode (which is compressed mode without the labels-as-numbers encoding step), and Norm periodically sends me a list of worst offenders in need of refactoring, where the ratio can be more like 1000 or 10000 to 1. (The worst case is exponential blowup - Stefan wrote a program which creates these examples for smm testing.) The statements which are being reused here are usually not worth exporting to a lemma, and you definitely don't want to present the user with the same proof 5 times over - this makes modifications very difficult as the user is forced to do the same thing for each instance of the subproof.
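
As a toy illustration of that worst case (made-up numbers, not a set.mm measurement): if every step re-uses the previous step twice, the shared form grows linearly while the fully expanded tree doubles at each level.

public class Blowup {
    public static void main(String[] args) {
        long treeSize = 1;                    // depth 0: a single axiom step
        for (int depth = 1; depth <= 30; depth++) {
            treeSize = 2 * treeSize + 1;      // expand both uses of the previous subtree
            long dagSize = depth + 1;         // shared (DAG) form: one entry per step
            if (depth % 10 == 0)
                System.out.printf("depth %2d: shared=%d expanded=%d%n",
                    depth, dagSize, treeSize);
        }
    }
}

At depth 30 the shared proof has 31 steps while the expanded tree has over two billion.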
 
 * For 'dummylink' and the order of developing subproofs, personally I probably would not mind using 'dummylink'.  However, I can see why you would want to avoid it, and I see at least two other options that fit the refactoring/autocomplete approach: create a separate $p that you later 'inline' into the main proof; or put _two_ proofs between $= ... $., let the formatter render this as two calculations, and then later copy one proof over a ? in the other.

What is your preferred method for creating proofs currently? It seems that this would not give you nearly as much freedom with forward/backward proving as mmj2 offers.
 
Then about a lemmon-like proof format.  I don't much like the 'referencing' kind of proof format, of which .mmp and /LEMMON are examples.  I much prefer a 'calculational' format, which is where my indentation rule comes from; see http://www.cs.utexas.edu/users/EWD/transcriptions/EWD13xx/EWD1300.html and https://en.wikipedia.org/wiki/Structured_derivations for my sources of inspiration.

The reason that 'referencing' is necessary is because of this compressed/normal distinction. Metamath proofs are *not* trees, they are DAGs (directed acyclic graphs). Proof-theoretically it is equivalent to only consider trees, but not without that exponential blowup. There is no way to represent an arbitrary DAG using only indentation to indicate connections.

However, this doesn't mean that lemmon style is the only way to go. You might be able to find a compromise if you let step labels be optional and use some kind of tree formatting approach to create all the connections except for "long links" which refer back to specific steps. (This has some similarity to the actual storage format of the compressed style, where certain steps are marked with "Z" and only these marked steps are usable in backreferences.)
 
And _if_ we want to use this format, it seems to me that we could just do something like this:
...
(This would be the 'lemmon format refactoring'.)  And then specific commands and search options, just like mmj2 has, to fill in ? steps and do the actual proof assisting.

As mentioned, this does not allow for a step to be referenced multiple times, since the complete subtree will have to appear multiple times in the normal form proof.
 
However, depending on demand and supply, we might actually create a separate .mmp editor.  In that case, if someone can come up with a fairly formal description of the format, I'd be obliged:

Here's an example of a complete proof worksheet:

$( <MM> <PROOF_ASST> THEOREM=xpss12  LOC_AFTER=?

* Subset theorem for cross product.  Generalization of Theorem 101 of

  [Suppes] p. 52.  (The proof was shortened by Andrew Salmon,
  27-Aug-2011.)

50::ssel              |- ( A C_ B -> ( x e. A -> x e. B ) )
51::ssel              |- ( C C_ D -> ( y e. C -> y e. D ) )
52:50,51:im2anan9    |- ( ( A C_ B /\ C C_ D ) -> ( ( x e. A /\ y e. C ) -> ( x e. B /\ y e. D ) ) )
53:52:ssopab2dv     |- ( ( A C_ B /\ C C_ D ) -> { <. x , y >. | ( x e. A /\ y e. C ) } C_ { <. x , y >. | ( x e. B /\ y e. D ) } )
54::df-xp           |- ( A X. C ) = { <. x , y >. | ( x e. A /\ y e. C ) }
55::df-xp           |- ( B X. D ) = { <. x , y >. | ( x e. B /\ y e. D ) }
qed:53,54,55:3sstr4g
                   |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) )
$d x y A
$d x y B
$d x y C
$d x y D
$=    ( vx vy wss wa cv wcel copab cxp ssel im2anan9 ssopab2dv df-xp 3sstr4g )
      ABGZCDGZHZEIZAJZFIZCJZHZEFKUABJZUCDJZHZEFKACLBDLTUEUHEFRUBUFSUDUGABUAMCDU
      CMNOEFACPEFBDPQ $.
$)

Whitespace is ignored for the most part, except for separating tokens. The structure is broken into lines, with the first character of each line being relevant. Lines starting with a space are continuations of the previous line (you could find/replace "\n\s+" with " " and not affect interpretation).

* Lines starting with "$(" are the header, there is only one of these and it is the first line. It always takes the form "$( <MM> <PROOF_ASST> THEOREM=<thm>  LOC_AFTER=<after>" where <thm> is the name of the current theorem and <after> is an existing theorem label or ?.
* Lines starting with "$)" are the footer: it may not be the last line, but everything after this is ignored.
* Lines starting with * are comments, and blank lines can also be ignored.
* Lines starting with $d describe the disjoint variable conditions, same as $d's in a .mm file, with a newline replacing '$.' at the end of a group.
* Lines starting with $= show the completed proof - this is output only and is ignored by the parser.
* Other lines are proof steps. The first whitespace-separated token here is the step:hyp:ref field, and the rest is a formula for the step. The step:hyp:ref field contains exactly two colons, separating the three fields.
  * The step field denotes the name of the current step, with two special cases:
    * A step starting with 'h' denotes a hypothesis step; the 'h' is ignored for the step name in this case and the hyp field should be empty. The ref field should contain the name of the hypothesis and the formula should be the corresponding formula for this hypothesis as read from the .mm file.
    * A step named 'qed' is the conclusion; there is only one and the formula should match the target formula from the .mm file.
  * The hyp field is a comma-separated list of step names which appear earlier in the file; these are the hypotheses used in the theorem being applied in this step. Empty or '?' step references are permissible and indicate an unknown step reference.
  * The ref field is a reference to an earlier theorem, which is being applied in this step.

There are a few extensions of this basic structure which are used by mmj2 for advanced features or for recovering from "malformed input", but that should suffice for an initial attempt.
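
In code, reading one of those proof-step lines boils down to something like the following (a sketch of the description above only, not mmj2's actual classes; continuation lines, comments, and error recovery are ignored, and the names are made up):

import java.util.*;

public class MmpStepLine {
    final String stepName;        // without the leading 'h', if any
    final boolean isHypothesis;   // line started with 'h'
    final boolean isQed;          // step name is "qed"
    final List<String> hyps;      // earlier step names; may contain "?" entries
    final String ref;             // label of the theorem or hypothesis being applied
    final String formula;         // the rest of the line

    MmpStepLine(String line) {
        // first whitespace-separated token = step:hyp:ref, rest = formula
        String[] parts = line.trim().split("\\s+", 2);
        String[] shr = parts[0].split(":", -1);
        if (shr.length != 3)
            throw new IllegalArgumentException("expected exactly two colons: " + parts[0]);
        isHypothesis = shr[0].startsWith("h");
        stepName = isHypothesis ? shr[0].substring(1) : shr[0];
        isQed = stepName.equals("qed");
        hyps = shr[1].isEmpty() ? List.of() : Arrays.asList(shr[1].split(","));
        ref = shr[2];
        formula = parts.length > 1 ? parts[1] : "";
    }

    public static void main(String[] args) {
        MmpStepLine s = new MmpStepLine("52:50,51:im2anan9 |- ph");
        System.out.println(s.stepName + " " + s.hyps + " " + s.ref + " : " + s.formula);
        // prints: 52 [50, 51] im2anan9 : |- ph
    }
}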
 
I still have to try and run mmj2, and so far I've been put off by the arcane set-up and configuration options.  All I know of it is basically in David Wheeler's YouTube video (@David: thanks!).

It should run out of the box, provided you have Java installed. The only essential configuration is to set LoadFile,path\to\set.mm in the RunParms.txt file. If you have any specific problems with setup feel free to email me.

You should look at mmj2, because it is also a text editor-based IDE for Metamath, so it has already developed approaches to most of the issues you need to face; you will gain a lot by evaluating these approaches and seeing how many of them fit your ideas.
 
Finally, yes, in my previous mail I did indeed do some line-wrapping for my own benefit while writing that mail, and for you reading it, but I agree that that should not be necessary: if the formulas become wider than (say) 200 characters, then the proof likely can be split up using lemmas, and would become both shorter and less wide.

My target with that comment was actually for the default line length when performing refactorings. Would I be correct in assuming that the intended edit/process loop is very short, as it is in mmj2? That is, my typing for that proof of xpss12 might look like (where c-u is control-u, the unify command):

3sstr4g <c-u> <click> df-xp <down> df-xp <c-u> <click> ssopab2dv <c-u> <select> x <select> y <c-u> im2anan9 <c-u>

The resulting worksheet is shown in the example above. Notice that I press <c-u> after almost every word (which is basically a refactoring action), quite unlike the usual situation in Java coding, where you refactor only rarely, after typing many lines. Perhaps it is more comparable to an autocomplete operation, although the action is much more complicated than your usual autocomplete (since it completes the rest of the line, not just the current label).
 
Mario

Marnix Klooster

Oct 24, 2015, 5:06:03 AM
to Metamath
Hi Mario,

First, thanks for the documentation on mmj2's .mmp format, and for the encouragement and help on how to get it running!  I hope to get around to that soon.

*

I have to confess that in general I'm more of a tools guy, so the only Metamath proofs I've done so far were fairly simple ones, using the Metamath program, and not recently.  Recently I've tried to start again using the Metamath program, and then decided to try and build tools support for a calculational format and see how I'd fare with that.  That's how this Eclipse plugin got started.

*

Now, about proofs containing backreferences to subproofs.  This is all a bit thinking out loud and rambling, and clearly without practical experience to back me up, so feel free to keep telling me I'm too optimistic. :-)

What I've learned from your last email is that having compressed proofs is not just a size compression mechanism (as I knew it is), but that the sharing of subproofs adds information that is really used and useful while creating and reading proofs.

However, the strange thing is then that the user has no influence at all over _which_ subproofs are actually shared: when I re-compress a proof using the Metamath program, it could choose a different way of sharing than I originally used, and use that e.g. on the web pages.

So, it seems that the important thing is de-duplication amongst "trivial" parts of the proof, so that the proof author can keep the overview of the "real meat" of the proof, without drowning in the details.  Am I correct there?

*

Also, what is the difference --for the process of reading and writing proofs-- between sharing _inside_ a proof, and having temporary/local $p statements?  Because it seems to me that that is equivalent to what backreferences in /LEMMON and .mmp do.  Or am I still missing something?

You write:

The statements which are being reused here are usually not worth exporting to a lemma, and you definitely don't want to present the user with the same proof 5 times over - this makes modifications very difficult as the user is forced to do the same thing for each instance of the subproof.

I think that your "not worth exporting to a lemma" means that the _reader_ would not consider the backreferenced statement to be sufficiently important to stand on its own: I assume we're talking about proving some precondition (a set being non-empty, a number being non-zero) which straightforwardly follows from the $e hypotheses.  However, at the same time you're saying that the _author_ of the proof _does_ consider the backreferenced statement sufficiently important to reuse.  And as a reader of the resulting proof, I probably also would not want to wade through the duplication.

So (as you already emphasized) the proof author needs to be able to indicate reusable parts.  Inside the 'normal annotated as calculation' format, it seems that that can be done by making it easy to create a temporary 'sub-goal' $p statement, just before the 'main' $p statement, and referring to that in multiple places.

Experimentation with mmj2 will hopefully teach me something one way or the other.

*

Did I miss any topic or points from your previous emails?  In any case, I appreciate this discussion and what I'm learning from it!

Groetjes,
 <><
Marnix




--
Marnix Klooster
marnix....@gmail.com

Mario Carneiro

Oct 26, 2015, 2:01:36 AM
to metamath
On Sat, Oct 24, 2015 at 5:05 AM, Marnix Klooster <marnix....@gmail.com> wrote:
Hi Mario,

First, thanks for the documentation on mmj2's .mmp format, and for the encouragement and help on how to get it running!  I hope to get around to that soon.

*

I have to confess that in general I'm more of a tools guy, so the only Metamath proofs I've done so far were fairly simple ones, using the Metamath program, and not recently.  Recently I've tried to start again using the Metamath program, and then decided to try and build tools support for a calculational format and see how I'd fare with that.  That's how this Eclipse plugin got started.

*

Now, about proofs containing backreferences to subproofs.  This is all a bit thinking out loud and rambling, and clearly without practical experience to back me up, so feel free to keep telling me I'm too optimistic. :-)

What I've learned from your last email is that having compressed proofs is not just a size compression mechanism (as I knew it is), but that the sharing of subproofs adds information that is really used and useful while creating and reading proofs.

However, the strange thing is then that the user has no influence at all over _which_ subproofs are actually shared: when I re-compress a proof using the Metamath program, it could choose a different way of sharing than I originally used, and use that e.g. on the web pages.

So, it seems that the important thing is de-duplication amongst "trivial" parts of the proof, so that the proof author can keep the overview of the "real meat" of the proof, without drowning in the details.  Am I correct there?

The deduplication algorithm is entirely deterministic: Go through the list of steps (top down) and build a subtree proof for each step. Whenever you encounter a subtree which exactly matches another previously created subtree, merge them and make any pointers to either tree point to the merged version. Note that this does not include two subtrees which derive the same thing by different means - if you wrote an algorithm to deduplicate these and take the smaller proof it would be an optimization of the current method. One other exception is for depth-1 subtrees (leaf nodes), which are typically *not* deduplicated in the compressed format because it is shorter to refer directly to the 0-hyp step rather than mark the step as shared and refer to the index in the list of shared subtrees. However, it is better to deduplicate these as well in any presentation to the user (since for the user it makes no difference).
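
In code the exact-match merge amounts to a single bottom-up pass over the tree, keying each subtree by its structure (a sketch of the idea only, not metamath.exe's implementation; as noted, the real compressed format skips leaf nodes, and the Node record here is made up):

import java.util.*;

public class DedupProof {
    /** One proof step: the label applied plus its hypothesis subtrees, in order. */
    record Node(String label, List<Node> children) {}

    /** Rebuild the tree bottom-up; records compare structurally, so computeIfAbsent
     *  merges any subtree that exactly matches one seen before.  (The deep equals
     *  makes this sketch quadratic; a real implementation would hash incrementally.) */
    static Node dedup(Node n, Map<Node, Node> seen) {
        List<Node> kids = n.children().stream().map(c -> dedup(c, seen)).toList();
        return seen.computeIfAbsent(new Node(n.label(), kids), k -> k);
    }

    public static void main(String[] args) {
        // the same subtree written out twice; after dedup both uses are one object
        Node a = new Node("f", List.of(new Node("x", List.of())));
        Node b = new Node("f", List.of(new Node("x", List.of())));
        Node root = dedup(new Node("g", List.of(a, b)), new HashMap<>());
        System.out.println(root.children().get(0) == root.children().get(1));   // true
    }
}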

The approach mmj2 takes here is to allow the user to write deduplicated or non-deduplicated proofs as they prefer, but the compressed format will do any deduplication on export anyway, and when you load a compressed proof any deduplication is preserved in the generated worksheet, so this has the effect of forgetting whatever deduplication choices were made by the user when the complete proof is saved. (It is not the only thing that is not saved in passing through the compressed format - step names and step order are not saved either.)
 
Also, what is the difference --for the process of reading and writing proofs-- between sharing _inside_ a proof, and having temporary/local $p statements?  Because it seems to me that that is equivalent to what backreferences in /LEMMON and .mmp do.  Or am I still missing something?

Although it *is* possible to mimic the step sharing by using temporary $p statements, and indeed this may be the easier approach for you since you want to leverage Eclipse refactoring options, it is not nearly as efficient, and more to the point you will end up with almost as much duplication as you seek to eliminate. I mentioned that occasionally I break up proofs solely for the reason that the uncompressed proof is too long (i.e. there is an unusually high uncompressed/compressed proof length ratio).

One recent example is cantnflem1, from which I broke off cantnflem1a, cantnflem1b, cantnflem1c, cantnflem1d. The result barely decreased the size of the original theorem at all, and the new sub-lemmas are quite large as well. The reason for this is that even though I pulled out a subtree, there are still many lines in the lemmas which also show up in the main theorem. (I even wrote a program to find optimal cut-points, and this was the best it could do.) In order to really cut the duplication I would need a hundred or more sub-lemmas, and the resulting proof would be much larger. Also keep in mind that for each new sub-lemma there is a context of hypotheses, which you may or may not be able to share with other sub-lemmas, and these make up a large part of the overhead of adding a new $p statement.
 
You write:

The statements which are being reused here are usually not worth exporting to a lemma, and you definitely don't want to present the user with the same proof 5 times over - this makes modifications very difficult as the user is forced to do the same thing for each instance of the subproof.

I think that your "not worth exporting to a lemma" means that the _reader_ would not consider the backreferenced statement to be sufficiently important to stand on its own: I assume we're talking about proving some precondition (a set being non-empty, a number being non-zero) which straightforwardly follows from the $e hypotheses.  However, at the same time you're saying that the _author_ of the proof _does_ consider the backreferenced statement sufficiently important to reuse.  And as a reader of the resulting proof, I probably also would not want to wade through the duplication.

I don't usually think of it in terms of "the author considers the backreferenced statement sufficiently important to reuse". Rather, consider the following "backward proving" approach to constructing a proof: One starts with the goal, and applies a theorem to it to reduce to one or more subgoals, and works recursively down the tree. It is often the case that a statement is needed by multiple subgoals - closure and nonzero theorems are obvious examples of this - and mmj2 can be set up so that any newly generated subgoal that matches an existing one is automatically discharged by "deduplicating" on the spot. This means that the author never even has to think about deduplication because it is handled automatically - instead, he can simply prove once that some number is nonzero, and then any later theorems for division by the number will be valid with the appropriate subproofs being linked as needed.
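
The bookkeeping behind that automatic discharge can be as simple as a map from generated formulas to the steps that produced them (a sketch of the idea, not mmj2 source; the names are invented):

import java.util.*;

public class GoalPool {
    private final Map<String, String> stepByFormula = new HashMap<>();  // formula -> step name
    private int counter = 0;

    /** Step to cite for a subgoal: an existing step if this exact formula was
     *  already generated, otherwise a freshly numbered open step. */
    String subgoal(String formula) {
        return stepByFormula.computeIfAbsent(formula, f -> "d" + (++counter));
    }

    public static void main(String[] args) {
        GoalPool pool = new GoalPool();
        String first  = pool.subgoal("|- ( A e. CC /\\ A =/= 0 )");
        String second = pool.subgoal("|- ( A e. CC /\\ A =/= 0 )");   // same subgoal again
        System.out.println(first.equals(second));   // true: linked, not proved twice
    }
}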
 
So (as you already emphasized) the proof author needs to be able to indicate reusable parts.  Inside the 'normal annotated as calculation' format, it seems that that can be done by making it easy to create a temporary 'sub-goal' $p statement, just before the 'main' $p statement, and referring to that in multiple places.

As the above discussion may suggest, I find it better still if the program can automatically detect and apply the necessary "long links" by searching for matching statements, marking the appropriate statements as reusable. From the author's perspective, everything is (a candidate to be) reusable.

You should consider the "packed" format (output by "save proof xxx/packed") for doing annotated proof output, since it does deduplication but is otherwise almost as readable as "normal" output. Here's a mockup of what it might look like:



  ${
    $d x y A $.  $d x y B $.  $d x y C $.  $d x y D $.
    $( Subset theorem for cross product.  Generalization of Theorem 101 of

       [Suppes] p. 52.  (The proof was shortened by Andrew Salmon,
       27-Aug-2011.) $)
    xpss12 $p |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) ) $=
      cA cB 1:wss cC cD 2:wss 3:wa vx 4:cv cA 5:wcel vy 6:cv cC 7:wcel 8:wa vx
      vy copab 4 cB 9:wcel 6 cD 10:wcel 11:wa vx vy copab cA cC cxp cB cD cxp 3
      8 11 vx vy 1 5 9 2 7 10 cA cB 4

      ssel $( |- ( A C_ B -> ( x e. A -> x e. B ) ) $)

      cC cD 6

      ssel $( |- ( C C_ D -> ( y e. C -> y e. D ) ) $)
      im2anan9 $( |- ( ( A C_ B /\ C C_ D ) -> ( ( x e. A /\ y e. C ) -> ( x e. B /\ y e. D ) ) ) $)
      ssopab2dv $( |- ( ( A C_ B /\ C C_ D ) -> { <. x , y >. | ( x e. A /\ y e. C ) } C_ { <. x , y >. | ( x e. B /\ y e. D ) } ) $)

      vx vy cA cC

      df-xp $( |- ( A X. C ) = { <. x , y >. | ( x e. A /\ y e. C ) } $)

      vx vy cB cD

      df-xp $( |- ( B X. D ) = { <. x , y >. | ( x e. B /\ y e. D ) } $)
      3sstr4g $( |- ( ( A C_ B /\ C C_ D ) -> ( A X. C ) C_ ( B X. D ) ) $)

      $.
      $( [27-Aug-2011] $) $( [26-Aug-1995] $)
  $}

One thing I don't like about this approach is that the syntax proofs are visible - ideally these should be completely hidden except in compressed proofs or other non-human-readable output. If I am to be typing in and around this, I think it will be too fragile to keep them around (the step formulas are a much more reliable human-readable way to keep the syntax proofs straight).

Don't forget that the target is integration with Eclipse. I think that it would be best to have a more code-like imperative interface for constructing proofs, because Eclipse will be able to make more sense of that. If you use my original suggestion of just stubbing out the proof and inserting a $( $x $) comment, you can feel free to write in any language you can think of, and then just transition back to standard metamath once the proof is done. This will put you in the same league as Isabelle (https://isabelle.in.tum.de/) or Lean (http://leanprover.github.io/) input, so studying those may also be useful.
 
Mario

Marnix Klooster

Oct 29, 2015, 2:58:01 AM
to Metamath
Hi Mario,

Thanks for your extensive response, input, and example; I'm digesting and trying to think things through before coding -- I don't have nearly enough time to try and code up several options. :-)

I might respond to some other parts of your mail later, but I have a question about one part you write, about the step sharing that is done by mmj2 .mmp worksheets, /LEMMON and /PACKED (which I didn't know about):


Although it *is* possible to mimic the step sharing by using temporary $p statements, and indeed this may be the easier approach for you since you want to leverage Eclipse refactoring options, it is not nearly as efficient, and more to the point you will end up with almost as much duplication as you seek to eliminate. I mentioned that occasionally I break up proofs solely for the reason that the uncompressed proof is too long (i.e. there is an unusually high uncompressed/compressed proof length ratio).

In what sense do you use the word "efficient" here?

My goal here (like yours, I believe) is to help the proof author in the process of creating the proof.  They should not need to handle duplicate things while writing the proof.  So, there is a need for a mechanism --during proof writing-- to point to the same proof tree from multiple places.  For this we can choose $p statements, or steps in an .mmp worksheet, or steps in a packed representation. What is the difference in 'efficiency' between these?  I don't see any, except that $p statements will take up more vertical screen space while writing the proof, which might be inconvenient.

Or are you talking about space-efficiency of the resulting compressed proof?  In that case, after I've produced some proof in Eclipse using my envisioned method, with temporary $p statements perhaps even being automatically extracted based on auto-deduplication, I'd just convert it to a compressed proof, with all the temporary $p statements automatically inlined and gone. Just like the standard mmj2 process.

So I have the feeling that we're misunderstanding each other somewhere, but I can't yet see where.

Oh, and about hiding syntax proofs and having a separate in-comment proof authoring sub-language: it might ultimately be the best option, but let's just say I'd like to avoid that for now: language design is hard, and I'd like to see how far I can get with what is already there.  If I were to do a separate language, I'd probably reuse mmj2's.  Also, for now I want to try and see how 'invisible' I can make syntax proofs during proof development: by highlighting (e.g., giving them a smaller font and a softer color), by allowing a ??? token to mean 'push as many items as are necessary for the $f hypotheses of the following step', by not line-wrapping them, and perhaps by other mechanisms.  So yes, this is an open design issue. :-)

Thanks again!

Groetjes,
 <><
Marnix




--
Marnix Klooster
marnix....@gmail.com

Mario Carneiro

Oct 29, 2015, 1:29:33 PM
to metamath
On Thu, Oct 29, 2015 at 2:57 AM, Marnix Klooster <marnix....@gmail.com> wrote:
Hi Mario,

Thanks for your extensive response, input, and example; I'm digesting and trying to think things through before coding -- I don't have nearly enough time to try and code up several options. :-)

I might respond to some other parts of your mail later, but I have a question about one part you write, about the step sharing that is done by mmj2 .mmp worksheets, /LEMMON and /PACKED (which I didn't know about):

Although it *is* possible to mimic the step sharing by using temporary $p statements, and indeed this may be the easier approach for you since you want to leverage Eclipse refactoring options, it is not nearly as efficient, and more to the point you will end up with almost as much duplication as you seek to eliminate. I mentioned that occasionally I break up proofs solely for the reason that the uncompressed proof is too long (i.e. there is an unusually high uncompressed/compressed proof length ratio).

In what sense do you use the word "efficient" here?

My goal here (like yours, I believe) is to help the proof author in the process of creating the proof.  They should not need to handle duplicate things while writing the proof.  So, there is a need for a mechanism --during proof writing-- to point to the same proof tree from multiple places.  For this we can choose $p statements, or steps in an .mmp worksheet, or steps in a packed representation. What is the difference in 'efficiency' between these?  I don't see any, except that $p statements will take up more vertical screen space while writing the proof, which might be inconvenient.

Or are you talking about space-efficiency of the resulting compressed proof?  In that case, after I've produced some proof in Eclipse using my envisioned method, with temporary $p statements perhaps even being automatically extracted based on auto-deduplication, I'd just convert it to a compressed proof, with all the temporary $p statements automatically inlined and gone. Just like the standard mmj2 process.

So I have the feeling that we're misunderstanding each other somewhere, but I can't yet see where.

I usually measure 'efficiency' by counting the total number of characters in the .mm file, which is not without its faults but gets a ballpark figure on most measures of interest. In your case this might not be a fair measure because the extra pop-out $p statements are intended as temporary things only, but it makes the $p option look much worse than the other two (especially if intermediate step formulas are stripped).

Here's a rough explanation: Imagine that you did a full decomposition of the statement, with every step in a new $p. (I actually had plans to do this for the opentheory converter, because OT does not have any distinction between steps of different proofs, only one huge line of steps with backreferences in the form of saving and loading from a dictionary, with a small number of the steps highlighted as "theorems".) Since we conventionally do not store $p steps for syntax proofs, I will assume these are all inline. Now suppose that several steps involve the same large term. (The way metamath works causes this to be a very common situation.) The derivation of this term will have to be present in the step-$p statements of each of the involved steps, while in the other situations it would be many references to one derivation. Even if syntax proofs are elided, this term shows up as a string (the length of the string is usually comparable to the length of its derivation, unless it is highly self-similar) in the $p formula of each step-$p proof. (But at this point we are clearly being unfair, since we stripped the intermediate step formulas in the other formats.)

But to get back to the point, if these are only temporary constructs it probably doesn't matter if they are blown up and huge - indeed it may be easier for the reader to follow,  provided that all those extra characters convey something useful to the user. The idea of using the $p formulas as a surrogate for the step formulas is a new one as well, and you might even be able to get a layout similar to an .mmp worksheet with the following setup:

${
  hyp1 $e |- Hyp1 $.
  hyp2 $e |- Hyp2 $.
  thm-3 $p |- step3 $= hyp1 a $( h1:a $) $.
  thm-4 $p |- step4 $= hyp2 hyp1 hyp2 thm-3 b $( h2,3:b $) $.
  thm $p |- Thm $= hyp1 hyp2 thm-4 c $( 4:c $) $.
$}

which is equivalent to the .mmp worksheet

h1::hyp1 |- Hyp1
h2::hyp2 |- Hyp2
3:1:a |- step3
4:2,3:b |- step4
qed:4:c |- Thm

The first version is a valid normal-form proof (if the theorems a,b,c existed), except that it does not provide any syntax proofs (but as we are all aware these can be generated). The comment after each step-proof is designed to be sufficient to generate the rest of the proof, so the proofs could all be stubbed out safely for a more compact presentation, and it is based on the mmj2 step:hyp:ref line without the step part, which is already written before the $p.

This gives me another idea, for syntax proof-free output, but it is of sufficiently general interest that I will discuss it in another thread.

Mario

Jens-Wolfhard Schicke-Uffmann

Nov 2, 2015, 9:20:00 AM
to meta...@googlegroups.com
Hi,

On Sat, Oct 24, 2015 at 11:05:42AM +0200, Marnix Klooster wrote:
> So (as you already emphasized) the proof author needs to be able to indicate
> reusable parts.  Inside the 'normal annotated as calculation' format, it seems
> that that can be done by making it easy to create a temporary 'sub-goal' $p
> statement, just before the 'main' $p statement, and referring to that in
> multiple places.
my Igor proof assistant usually automatically re-uses any fully proven
subgoal when possible, without any user interaction.

I am aiming more for an Isar-Style (as in Isabelle) proof input, i.e. an
Igor "proof" is just a script nudging the proof assistant in the right
direction. This creates shorter proof inputs, possibly at the expense of
longer proofs. Also, the interactive interface can be used in a
Gentzen-style way, if desired.

Some examples (possibly no longer working, as Igor is still very much a work in
progress): https://gist.github.com/Drahflow/22c9936394c4a0de2992
The Igor input for dfrtrcl2 is 44 lines; the Metamath proof has 107
essential steps.

If you are aiming for efficient proof input, I think you should abstract
away many menial things (like solving boring ` ( x = y -> ( ph <-> ps ) ) `
subgoals).


Regards,
Drahflow