Yet another independent-ish client-side JS parser, verifier and explorer

Samuel Goto

Nov 29, 2022, 4:47:09 AM
to Metamath
Hey all,

   I accidentally ran into Metamath and immediately fell in love with the concept. I started building a parser, a verifier and some visualizations to help me understand how and why it works.

   It is not much different from what has already been posted before (except, maybe, that it's all done client-side on the web), but I figure you'd all appreciate another independent parser, verifier and visualizer out there.

   So, here it goes:

   Source Code

    Set.mm's client-side verification (takes ~1 min on my MacBook M1)

    2p2e4 Visualization (takes a while to load, but once it does, you can navigate easily)

    Hofstadter's systems:

    If you dig into my blog, you'll likely find out why I ran into Metamath and why I find it so interesting: https://code.sgo.to/

    Good stuff you all,

    Hope this helps,

    Sam
    
     

Antony Bartlett

Nov 29, 2022, 4:16:02 PM
to meta...@googlegroups.com
Yes, very impressive!  And yes, I believe the client-side proof explorer does break new ground - I was edging in that direction, but quite slowly.  Is the (React) source code for that around anywhere?  I see you're trying to get it all streaming, presumably so you can parse and display proofs as soon as that part of the file is downloaded without having to wait for the whole thing to arrive (a download progress bar would be great too). All things I was planning to do eventually.

Nice to see standard lexer and parsing tools in your verifier too.  Would you consider adding a licence to the package.json file?  It looks very reusable!

   Best regards,

        Antony


sgoto

Dec 2, 2022, 7:27:46 PM
to meta...@googlegroups.com
On Tue, Nov 29, 2022 at 1:16 PM Antony Bartlett <a...@akb.me.uk> wrote:
Yes, very impressive!  And yes, I believe the client-side proof explorer does break new ground - I was edging in that direction, but quite slowly.  Is the (React) source code for that around anywhere? 

The source code is available on my website (I don't obfuscate it) if you are curious enough to dig into it, but I can probably make it more accessible/reusable/decent if that's something that's desirable.

 
Is that something of interest?

I see you're trying to get it all streaming, presumably so you can parse and display proofs as soon as that part of the file is downloaded without having to wait for the whole thing to arrive (a download progress bar would be great too).

Ah, yes, I'm trying to make it so that I can download, tokenize, parse and verify in a streaming fashion. That makes my code slightly different from most other verifiers.
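
To give a flavour of the shape I'm going for (a sketch only, not my actual code; feedChunk stands in for whatever entry point the tokenizer ends up exposing):

// Sketch: fetch exposes the response body as a ReadableStream, so each
// network chunk can be decoded and handed to the tokenizer on arrival.
async function verifyStreaming(url: string, feedChunk: (text: string) => void) {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    feedChunk(decoder.decode(value, { stream: true })); // buffers any partial UTF-8
  }
  feedChunk(decoder.decode()); // flush any trailing bytes
}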

I wrote a couple of parsers (a BNF parser and a recursive descent parser): surprisingly, parsing is taking up most of the time. I need to think a bit more about how to make my recursive descent parser work in a streaming fashion.
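
The crux, as far as I can tell, is that streaming tends to make every parse function asynchronous, because any token read may have to wait for more input to arrive. Illustrative only, not my actual code:

// Illustrative: a recursive descent parse function over an async token
// source. Every token read is an await, because the next token may not
// have arrived yet; this is what makes naive streaming so invasive.
type TokenSource = { next(): Promise<string | null> };

async function parseConstants(tokens: TokenSource): Promise<string[]> {
  // assumes the caller has already consumed the "$c" keyword
  const symbols: string[] = [];
  for (;;) {
    const tok = await tokens.next();
    if (tok === null) throw new Error("unexpected end of input in $c statement");
    if (tok === "$.") return symbols;
    symbols.push(tok);
  }
}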
 
All things I was planning to do eventually.

Nice to see standard lexer and parsing tools in your verifier too.  Would you consider adding a licence to the package.json file?  It looks very reusable!


Ah, yes, will do! I just need to go through a few things first, but I'll report back with an open source license.

Cheers to you all, you guys rock,

Sam
 


--
f u cn rd ths u cn b a gd prgmr !

Antony Bartlett

Dec 3, 2022, 4:43:36 PM
to meta...@googlegroups.com
> Ah, yes, I'm trying to make it so that I can download, tokenize, parse and verify in a streaming fashion. That makes my code slightly different from most other verifiers.

I had a go at this.  You can see my attempt here: https://github.com/Antony74/mm-streaming-parser

I'm getting an error

set.mm downloaded 0.0MB/40.8MB
C:\mm\mm-streaming-parser\node_modules\nearley\lib\nearley.js:295
                var err = new Error(this.reportLexerError(e));
                          ^
Error: invalid syntax at line 22 col 1:

20
21
22  $( !
    ^
23  #*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
24    Metamath source file for logic and set theory
Unexpected input (lexer error). Instead, I was expecting to see one of the following:

This looks a lot like the following bug report: https://github.com/kach/nearley/issues/575

The theory is that

> nearley is probably able to handle streaming properly. But moo is not.

Still, I feel like I was close.  A streaming parser would solve all of the issues I was bemoaning here


It would also make streaming a concern of the parser rather than something we have to worry about, which would be lovely!

    Best regards,

        Antony

sgoto

Dec 3, 2022, 8:49:36 PM
to meta...@googlegroups.com
On Sat, Dec 3, 2022 at 1:43 PM Antony Bartlett <a...@akb.me.uk> wrote:
> Ah, yes, I'm trying to make it so that I can download, tokenize, parse and verify in a streaming fashion. That makes my code slightly different from most other verifiers.

I had a go at this.  You can see my attempt here: https://github.com/Antony74/mm-streaming-parser

Oh, it would be really neat to make nearley able to parse set.mm! I had to disable mine, because it ran out of memory and it was also taking too long:


I wrote a recursive descent parser which is much faster, but I don't know how to make it streamable:

 

I'm getting an error

set.mm downloaded 0.0MB/40.8MB
C:\mm\mm-streaming-parser\node_modules\nearley\lib\nearley.js:295
                var err = new Error(this.reportLexerError(e));
                          ^
Error: invalid syntax at line 22 col 1:

I haven't looked at your code yet, but did you get a chance to try and run mine? My nearley grammar may give you some help, as well as the testing infrastructure I set up. Here is mine:


And if you do manage to make this fast, I'd love to reuse it because making nearley streamable is easy!
 

sgoto

Dec 3, 2022, 8:49:49 PM
to meta...@googlegroups.com
I also need to be better at writing instructions, but mine should work with:

cd metamath
npm install
npm test <-- this runs all the tests, which should pass; the tests parse set.mm with a couple of my parsers

does this help?

Antony Bartlett

Dec 4, 2022, 8:38:31 AM
to meta...@googlegroups.com
I've put in a workaround for the problem I mentioned, and can now reproduce your out-of-memory issue.  It might be easier to debug while streaming to the parser, by watching exactly what is growing more than it should, rather than throwing the whole file at it so that it thinks for a while and then keels over.  (The slowness is almost certainly another symptom of the same problem.  Something's wrong; there's no fundamental reason why a general-purpose parser-generator shouldn't be able to handle set.mm.)
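
The gist of the workaround (simplified here, and not necessarily line-for-line what's in the repo) is to make sure a feed boundary never falls inside a token.  Metamath tokens are whitespace-separated, so it's enough to hold back any incomplete trailing token before each call to parser.feed:

import nearley from "nearley";
import grammar from "./grammar"; // assumed: a compiled nearley grammar

const parser = new nearley.Parser(nearley.Grammar.fromCompiled(grammar));
let pending = "";

// Feed only up to the start of any incomplete trailing token, so the
// moo lexer never sees half a token at a chunk boundary.
function feedChunk(chunk: string) {
  pending += chunk;
  const cut = pending.search(/\S*$/); // index where the trailing token starts
  parser.feed(pending.slice(0, cut));
  pending = pending.slice(cut);
}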

> I haven't looked at your code yet, but did you get a chance to try and run mine?

Yes.  It's your nearley parser and moo lexer I was trying to get working via streaming.

> I also need to be better at writing instructions

It's a well laid out npm project - thanks, but I didn't need instructions; they're almost always identical, e.g. for mine:

cd mm-streaming-parser
npm install
npm start

    Best regards,

        Antony


Glauco

Dec 4, 2022, 12:46:24 PM
to Metamath
Hi Antony,

here
I'm creating the grammar rules for set.mm that I will then use for diagnostics, unification, etc. for .mmp files.
The rules are for Nearley.js.
The grammar created (for my local set.mm file) has 1624 rules (3 of which are for working vars), so I'm pretty sure Nearley.js has no problem with large-ish grammars.

I've never used moo: in the same folder you can see a bunch of lexers / tokenizers that implement the required interface. Maybe you can try with one of those.

Since you are there, you can also try the extension, I guess :-)  (if you know mmj2, you should feel at home)

Please consider that this is my first TypeScript (and Node.js) project, so expect weird code :-)

Glauco

Antony Bartlett

Dec 5, 2022, 7:33:24 AM
to meta...@googlegroups.com
Hi Glauco,

Thank you very much for sharing your code.

> Please, consider it is my first Typescript (and Node.js) project, so expect weird code :-)

You've got nothing at all to worry about there.  It's clear from your coding style that you're an experienced developer, even though this is your first time with node/ts (your OO style suggests to me your main language from other projects is probably Java).  Also, to paraphrase something Voltaire actually never said, I may occasionally disagree with the way you code something, but I will defend to the death your right as a hobbyist programmer to code things in whatever way you please.  So if I do make suggestions, I'll endeavour to be constructive, and you're of course free to take or leave whatever advice I may offer.

> Since you are there, you can also try the extension, I guess :-)  (if you know mmj2, you should feel at home)

Yes, I would like to see your extension running.  That would be cool all by itself, but also I find it helps me understand code better sometimes if I can see it running. I'm afraid I'm not familiar with mmj2.  Now according to your instructions:

- Run `npm install` in this folder. This installs all necessary npm modules in both the client and server folder
- Open an .mmp file (this triggers the extension)

However, when I install your project and run Visual Studio Code, I don't see any new extension, and nothing noticeably special changes when I open a .mm file.  I'm also unsure what a .mmp file is - there don't seem to be any in your repository.  Sorry for not being able to figure this out.

> The grammar created (for my local set.mm file) has 1624 rules (3 of which are for working vars)

So the grammar updates itself programmatically as it finds certain constructs in the .mm file?  That's pretty cool, I'm looking forward to seeing what the output parse tree (AST) looks like.  I did notice when I was browsing your code that the grammar is set programmatically - there's no Nearley file with the .ne extension.  That's cool, but it does make it slightly harder to pop the parser out of your project and into one of my own, like I already did with Samuel's parser.

The first step would be to change this line to point at your repo rather than his


then I would have to update the beginning of this script to create one of your parsers instead of one of Samuel's


Hmm... interesting... I imagine I could probably figure that out...

Using a git repo as an npm package, by the way, is only intended as a temporary measure, to identify and test the reusability of code before publishing it (in this case your parser) as a separate package.  Ideally you would be involved in such a process, and your yamma package, Samuel, myself, and others would all become potential users of such a package, and I'm some way off proposing any such thing.  I only mention it because you might not have come across the use of a git repo as an npm package, and I think it's important to understand it's not good practice, except temporarily for development purposes.

mm-streaming-parser is intended as a simple example of how to get piecemeal updates from the fetch API and pass them into a Nearley parser.  It's always good when writing web content to be able to display something before everything has loaded, but this is a pressing concern with Metamath content because of how set.mm is structured as a single monolithic file (I believe this has been raised relatively recently in another thread), and particularly once we've gone past the proof we're currently interested in, it seems a shame to have to wait for the rest of the file to download.

    Best regards,

        Antony


Glauco

Dec 5, 2022, 9:08:07 AM
to Metamath
Hi Antony,

feel free to open issues in the repo, I'll be glad to learn from your experience, and give further support to try the extension (see below)

Yes, I would like to see your extension running.  That would be cool all by itself, but also I find it helps me understand code better sometimes if I can see it running. I'm afraid I'm not familiar with mmj2.  Now according to your instructions:

- Run `npm install` in this folder. This installs all necessary npm modules in both the client and server folder
- Open an .mmp file (this triggers the extension)


- the final goal is to publish it as an extension, in the VSCode marketplace (then it will work by simply installing it)

- but for the time being, you should launch it "as a developer" ( F5 / "launch client" is enough, to see it work)

- it should open a new VSCode instance, named "Extension Host" (and move your "focus" to the new window); this new VSCode instance has yamma installed as an extension

- there you should open an .mmp file (further details below); be sure to have a set.mm file in the same folder (it is used, by default, as the main theory)

- this will activate the extension

- the extension has several configuration parameters: the most important is a path to a .mm file; by default, it looks for set.mm in the same folder of the .mmp file you've opened

- yamma parses set.mm (progress notifications are displayed in the bottom left corner; on my low-resource Linux VMs it takes about 10 seconds; it also computes a number of statistics that are used by the extension later on)

- a notification is shown when the theory has been parsed

- then you will see diagnostics in your .mmp file (for instance, if you are using labels that don't exist, if there are hard or soft dv constraints violated / missing, if there are wff syntax errors and where, and so on)

- you will get "on hover" information and semantic tokenization (these are language server protocol "notions")

- you should be able to use "step suggestions" (but to get "smart-ish" suggestions, you need a "model": a .mms file that takes about an hour to be updated; there's a script to build it; maybe I will create a repository for it); this feature is the main reason why I wanted to write yamma

- you will see syntax suggestions while you write the wff(s), and diagnostics appear as soon as you write a character that "invalidates" the wff

- you will get unification and "step derivation" as in mmj2

- you will have some "quick fix" (for instance, to add missing dv constraints)

- you will be able to search the theory

About .mmp files:
this is the format used by mmj2 (I use a slight variation for "meta" instructions, but the proof step format is identical).
If you are interested, I suggest looking at David's amazing video tutorials on YouTube.
Here's an example of a .mmp file:

* A double modus ponens inference.  (Contributed by NM, 5-Apr-1994.)

h50::mp2.1          |- ph
h51::mp2.2          |- ps
h52::mp2.3          |- ( ph -> ( ps -> ch ) )
53:50,52:ax-mp      |- ( ps -> ch )
qed:51,53:ax-mp     |- ch

You can create a .mmp file for any theorem in your .mm file:
- launch mmj2
- ctrl + g
- it asks for a label
- insert the name of the theorem > enter
- done


As a final note, it's clearly not yet ready for prime time; I'm only writing these pseudo-instructions for you, Antony, because you're familiar with TypeScript / npm (much more than I am).
If/when it is published as an extension, everybody will be able to use it by simply installing the extension from the marketplace.

As I wrote above, feel free to open issues on the repo, I'll be more than happy (with your help) to write detailed instructions, at least to build / launch it "as a developer".

Thanks
Glauco

 


Samuel Goto

Dec 21, 2022, 7:03:13 PM
to Metamath
Just to report back on this thread:

(a) I did start the process within my company to apply the appropriate open source licenses to my JS verifier. It takes some time, but should be fine. I'll report back.
(b) I did manage to make my parser and lexer streamable [1] (I rewrote the lexer as a state machine), which is good, but it made it substantially slower (I used async/await in JS, which I think creates a Promise for every single token -- likely too much for a file the size of set.mm), which is bad. I have something else in mind that I wanted to try, which is different from making the parser/lexer streamable, so I'll have to report back on this too.
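
One direction that might cut the Promise overhead while staying incremental (a sketch only, not what's in my repo): await once per downloaded chunk and lex each chunk synchronously, so the cost is paid per chunk rather than per token. Metamath tokens are whitespace-delimited, which makes the buffering easy:

// Sketch: one await per network chunk; each chunk is lexed synchronously.
async function lexStream(
  chunks: AsyncIterable<string>,
  onTokens: (tokens: string[]) => void
) {
  let pending = "";
  for await (const chunk of chunks) { // one Promise per chunk, not per token
    pending += chunk;
    const cut = pending.search(/\S*$/); // hold back an incomplete trailing token
    onTokens(pending.slice(0, cut).split(/\s+/).filter(Boolean));
    pending = pending.slice(cut);
  }
  if (pending) onTokens([pending]);
}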

Antony Bartlett

Dec 22, 2022, 6:47:11 AM
to meta...@googlegroups.com
Samuel Goto wrote:
> (a) I did start the process within my company to apply the appropriate open source licenses to my JS verifier. It takes some time, but should be fine. I'll report back.

Sorry, I wouldn't have requested this if I'd known it wasn't your IP and there'd be a process to go through.  But perhaps it is worth testing the process to see what happens if you've never tried it before.  In the meantime I suggest marking any package.json in public repositories with "private": true, just so there's no doubt over the status of the code you're sharing.

So you've got one of those programming gigs that allows you to work on whatever project you like on certain days (every Friday is usually cited as an example)?  Nice!  How do I land one of those? ;-)

> (b) I did manage to make my parser and lexer streamable [1] (I rewrote the lexer as a state machine), which is good, but it made it substantially slower (I used async/await in JS, which I think creates a Promise for every single token -- likely too much for a file the size of set.mm), which is bad. I have something else in mind that I wanted to try, which is different from making the parser/lexer streamable, so I'll have to report back on this too.

Yes, I've tried awaiting every single token in set.mm and it reduced my performance three-fold.


My preferred solution is still a Nearley parser, because then it owns a bunch of concerns that I don't think are particularly straightforward to deal with (but if that's the challenge you want, then hey, you've already written a working recursive descent parser).  I know you abandoned your Nearley parser, but if you point it at demo0.mm rather than set.mm, I think you'll see the problem is that it's not generating any parser.results.  I think trying to handle something it considers ambiguous is what ultimately leads to it running out of memory on larger files.
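
A quick way to see it (a sketch, assuming the grammar module compiles): nearley reports the outcome directly in parser.results, which is empty when there is no complete parse and has more than one entry when the grammar is ambiguous.

import nearley from "nearley";
import grammar from "./grammar"; // assumed: compiled from the .ne spec
import { readFileSync } from "fs";

const parser = new nearley.Parser(nearley.Grammar.fromCompiled(grammar));
parser.feed(readFileSync("demo0.mm", "utf8"));

// Zero results: no complete parse. More than one: ambiguity, which is
// what seems to blow up time and memory on larger files.
if (parser.results.length === 0) {
  console.error("no complete parse of demo0.mm");
} else if (parser.results.length > 1) {
  console.warn(`ambiguous grammar: ${parser.results.length} distinct parses`);
}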

I'm still hoping I can figure out how to use Glauco's parser, but I realise I've been frittering away my spare time writing Docker containers full of Metamath command line tools, and working through David A. Wheeler's video in mmj2 and yamma as Glauco recommended, instead.  However, I do intend to be back on the parsing issue that's perhaps become a bit of a blocker for both of us, soon.

    Best regards,

        Antony

Glauco

Dec 22, 2022, 4:04:55 PM
to Metamath
I'm still hoping I can figure out how to use Glauco's parser, but I realise I've been frittering away my spare time writing Docker containers full of Metamath command line tools, and working through David A. Wheeler's video in mmj2 and yamma as Glauco recommended, instead.  However, I do intend to be back on the parsing issue that's perhaps become a bit of a blocker for both of us, soon.

Hi Antony, feel free to open issues in the yamma repository, I'll be more than happy to help you to try a "Run Client" session :-)

Antony Bartlett

Jan 4, 2023, 2:18:26 PM
to meta...@googlegroups.com
On Sat, Dec 3, 2022 at 2:10 PM sgoto <samue...@gmail.com> wrote:
Oh, it would be really neat to make nearley able to parse set.mm! I had to disable mine, because it ran out of memory and it was also taking too long

I've had a go at this myself.  I'm going to report back now because my Christmas leave has just ended, so it's going to be much slower going from here.  It is distressingly easy to blow up nearley's memory usage - I think it happens if you give it something with an infinite number of possible interpretations.  So it's not immediately obvious how to tell the difference between a bug in my parser spec and literally running out of memory.  That can happen too: I've got a good parse of demo0.mm, but my JSON parse tree is one hundred times bigger than the original file!  Multiply the size of set.mm by a hundred and that's four gig, and by default Node.js is limited to a little over one gig, I think.  Yes, that's probably why we're both running out of memory even after debugging our parser specs to the point where they can parse demo0.mm.  There's just too much in each individual moo token, and rather a lot of arrays inside arrays too, actually.

The solution, which I haven't started to implement, is to massage the parse tree as it's being generated.  This will end up as a more pleasing tree anyway.  Nearley itself does this as it parses our parser spec


That's the JavaScript function you see on the right hand side, after each bit of EBNF-like code on the left hand side.
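
Something along these lines, purely illustrative and not yet in my repo, where the postprocessor collapses the matched children into one compact node instead of nested arrays:

# Illustrative nearley rule: without the postprocessor, the parse tree
# for a $c statement would be arrays nested inside arrays.
constant_stmt -> "$c" math_symbol:+ "$."
    {% ([, symbols]) => ({ type: "$c", symbols }) %}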

So that's my plan.  I've still got a lot of work to do on my .mm parser however, but you can see it here


The project says 'parsers' (plural) because I also intend to add a parser for .mmp files.

The format of a .mm file is formally specified by some EBNF code given at the back of the Metamath book.  I'm trying to produce a semi-canonical parser by cleaving as close to this spec as I can. 

(by the way, I think there's a bracket missing from the EBNF: https://github.com/metamath/metamath-book/pull/241)

I say semi-canonical because I don't want to skip the comments and the whitespace: there's useful information in the comments, and I've got a Prettier plugin for formatting .mm files, so I'm interested in whitespace too.  For some operations performed on a .mm file it's likely to be desirable to preserve whitespace.  Finally, for performance reasons it's impractical to use a parser without a lexer.  This deviates considerably from the appearance of the EBNF.

I also notice that the .mm lexemes COMPRESSED_PROOF_BLOCK, LABEL, and MATH_SYMBOL are not lexically distinct.  That means there's some overlap: you can't tell which of the three you are looking at just from the text, though it's easy to tell from the context of where it appears in the file.  This is a solvable problem, but I haven't got it right yet - my parser's not as strict as it should be in terms of what characters it will accept for these lexemes.
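
Spelling out the character classes makes the overlap plain.  The regexes below are my reading of the book's EBNF, so worth double-checking:

// My reading of the spec's lexeme character classes (double-check!):
const LABEL = /^[A-Za-z0-9._-]+$/;              // letters, digits, '.', '_', '-'
const MATH_SYMBOL = /^[\x21-\x23\x25-\x7e]+$/;  // printable ASCII except '$'
const COMPRESSED_PROOF_BLOCK = /^[A-Z?]+$/;     // capital letters plus '?'
// Every LABEL and every COMPRESSED_PROOF_BLOCK also matches MATH_SYMBOL,
// so only the grammatical context can tell the three apart.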

However, the railroad diagram that nearley generates is very nice, and probably a lot more pleasant to read than the original EBNF.


    Best regards,

        Antony



Antony Bartlett

Jan 8, 2023, 3:53:03 PM
to meta...@googlegroups.com
On Sun, Dec 4, 2022 at 5:46 PM Glauco <glaform...@gmail.com> wrote:
here
I'm creating the grammar rules for set.mm that I will then use for diagnostics, unification, etc. for .mmp files
The rules are for Nearley.js

Yes, if I've read the Yamma source code correctly, you parse assertions (i.e. axioms, definitions, and the part of the proof that asserts the thing being proved).  The rules for how assertions are constructed are contained in the mm file itself, which is why your grammar rules are dynamically created when the mm file is parsed.  The mm parser itself is hand-coded, not Nearley-generated.  That's what took me a while to figure out even though it's all there in your email, and why I first said I was investigating your parsers, then later said I'm writing my own based upon Nearley and the EBNF at the back of the Metamath book.  Sorry for any consternation I might have caused while I was trying to get myself orientated around a parsing strategy.

> The grammar created (for my local set.mm file) has 1624 rules (3 of which are for working vars), so I'm pretty sure Nearley.js has no problem with large-ish grammars.

No, it's the size of the file it's parsing, not the size of the grammar, that I'm saying may be problematic.  I'm making progress on the issues I outlined in my 4th Jan email, but it's still too early to tell.  You're unlikely to see the issue Sam and I have reported even if you (possibly inadvisably) try to keep the parse tree for every assertion in set.mm in memory.

    Best regards,

        Antony

Glauco

Jan 9, 2023, 4:13:15 PM
to Metamath
Hi Antony,

On Sun, Dec 4, 2022 at 5:46 PM Glauco  wrote:
here
I'm creating the grammar rules for set.mm that I will then use for diagnostics, unification, etc. for .mmp files
The rules are for Nearley.js

Yes, if I've read the Yamma source code correctly, you parse assertions (i.e. axioms, definitions, and the part of the proof that asserts the thing being proved).

I use the "syntax axioms" (see page 127 of the Metamath book) to dynamically build the grammar for the specific theory (i.e. the $a statements with a typecode other than '|-').
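
A simplified sketch of the idea (not the actual yamma code): each syntax axiom contributes one rule whose left-hand side is the axiom's typecode and whose body swaps each variable for the nonterminal of that variable's type.

import { Rule } from "nearley";

// E.g. `wi $a wff ( ph -> ps ) $.` becomes, in effect,
//   wff -> "(" wff "->" wff ")"
// with a postprocessor that records which axiom produced the node.
function ruleFromSyntaxAxiom(
  label: string,                 // "wi"
  typecode: string,              // "wff"
  symbols: string[],             // ["(", "ph", "->", "ps", ")"]
  varType: Map<string, string>   // ph -> "wff", ps -> "wff"
): Rule {
  const body = symbols.map((s) =>
    varType.has(s) ? varType.get(s)! : { literal: s }
  );
  return new Rule(typecode, body, (children: unknown[]) => ({ label, children }));
}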

 The mm parser itself is hand-coded, not Nearley-generated.  That's what took me a while to figure out even though it's all there in your email, and why I first said I was investigating your parsers, then later said I'm writing my own based upon Nearley and the EBNF at the back of the Metamath book.  Sorry for any consternation I might have caused while I was trying to get myself orientated around a parsing strategy.

To parse the .mm file, I use a TypeScript "translation" of the C# version, which is itself a translation of the Python version (I guess). It is hand-coded, and it should be linear w.r.t. the file size.
 
> The grammar created (for my local set.mm file) has 1624 rules (3 of which are for working vars), so I'm pretty sure Nearley.js has no problem with large-ish grammars.

No, it's the size of the file it's parsing, not the size of the grammar, that I'm saying may be problematic. 

Is it problematic in time or in space? (From what I read, if you can express the grammar left-recursively, even Earley parsers should be linear in time.)

Glauco


 

Antony Bartlett

Jan 21, 2023, 5:23:49 AM
to meta...@googlegroups.com
On Mon, Jan 9, 2023 at 9:13 PM Glauco <glaform...@gmail.com> wrote:
> The grammar created (for my local set.mm file) has 1624 rules (3 of which are for working vars), so I'm pretty sure Nearley.js has no problem with large-ish grammars.

No, it's the size of the file it's parsing, not the size of the grammar, that I'm saying may be problematic. 

Is it problematic in time or in space? (From what I read, if you can express the grammar left-recursively, even Earley parsers should be linear in time.)
 
Space.  Sorry for the late answer, but I couldn't be entirely certain it wasn't both, because running out of memory can be a time-consuming business.  But when I set up my parser to generate an empty parse tree, it zipped over set.mm in an eyeblink.  Hopefully the problems were just down to bugs in my code and an inefficient parse tree.  It was dragging me down for a while, but after this insight a couple of nights ago I feel like I'm winning with this problem once again.

    Best regards,

        Antony

sgoto

Aug 7, 2023, 4:03:03 PM
to meta...@googlegroups.com
On Wed, Dec 21, 2022 at 4:03 PM Samuel Goto <samue...@gmail.com> wrote:
Just to report back on this thread:

(a) I did start the process within my company to apply the appropriate open source licenses to my JS verifier. It takes some time, but should be fine. I'll report back.

Ok, just to report back on this thread, I did manage to get this pushed out as an open source project. Here it goes:


I'm still playing with the renderer, but here is a demo:


You can find more examples of me using this library for other sub-axiomatic systems here:


I also found the compression and decompression of proofs poorly documented, so I tried to encapsulate that into something that others would be able to use to debug their verifiers.
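
For the curious, the integer encoding at the heart of it is small once written down: A to T are final base-20 digits, U to Y are leading base-5 digits, and Z tags the previous step for later reuse. A sketch of just the number decoding, separate from mapping the numbers to hypotheses and labels:

// Decode the number stream of a compressed proof (the letters between
// the closing ")" and "$."). A-T close a number (base 20), U-Y are
// higher-order digits (base 5), Z marks the previous step for reuse.
function decodeCompressedProof(letters: string): (number | "Z")[] {
  const out: (number | "Z")[] = [];
  let n = 0;
  for (const ch of letters) {
    if (ch >= "U" && ch <= "Y") {
      n = n * 5 + (ch.charCodeAt(0) - 84);        // 'U' -> 1 ... 'Y' -> 5
    } else if (ch >= "A" && ch <= "T") {
      out.push(n * 20 + (ch.charCodeAt(0) - 64)); // 'A' -> 1 ... 'T' -> 20
      n = 0;
    } else if (ch === "Z") {
      out.push("Z");
    }
  }
  return out;
}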
 
(b) I did manage to make my parser and lexer streamable [1] (I rewrote the lexer as a state machine), which is good, but it made it substantially slower (I used async/await in JS, which I think creates a Promise for every single token -- likely too much for a file the size of set.mm), which is bad. I have something else in mind that I wanted to try, which is different from making the parser/lexer streamable, so I'll have to report back on this too.

So, I did experiment with a bunch of different ways to accomplish a good and scalable mechanism to download, parse and verify Metamath in the browser: an asynchronous parser, an asynchronous verifier, etc.

At some point, I gave up on that approach and pursued another one: break down set.mm into smaller self-sufficient files, and process (download, parse and verify) it incrementally in the browser rather than as a whole.

This led me to develop some modularization syntax for Metamath, so that theorems and axioms can be scoped per file, rather than within one large set.mm database file. It helped me write Metamath for other things, so maybe that's something you all would be curious/interested in (or maybe not).

I'll write a bit more on what that looks like and why it makes processing Metamath incrementally easier (especially inside browsers), and report back to this group.

In the meantime, if you are curious about this and have the energy to read from source, you can probably see what I did here:

 