Hi all,
I notice there are several independent Metamath parsers/verifiers
in the making these days.
On one hand that's a sign of a vibrant community and that's very
good. On the other hand these are fragmented efforts (to
paraphrase Mario) and we might achieve more if we were all working
on a single project.
There is of course a cost to teamwork: we need to agree on a
direction and on design choices, and to coordinate our efforts. This
also means sometimes making concessions, which is never easy. GitHub
is a wonderful tool which can help a lot here.
Personally I know the time I can spend on Metamath is limited, so
I want to make it count - and like everyone I also want to have
fun!
My questions to the community are:
As for the project I've chosen to work on: the goal of `metamath-knife` and the Metamath VSCode extension is to provide a feature-rich proof assistant in VSCode, in the spirit of Lean or mmj2, or at least to bring to VSCode the functionality I had in my Eclipse plugin (I made a video about that). I learned Rust because it was the programming language Mario suggested, and because `metamath-knife` was seen as a "new community verifier".
All work in progress is on GitHub, and external contributions are
of course welcome, though so far only Mario and I are contributing.
I'd love to learn more about the other projects (Benoît's, Glauco's), even though we've already discussed a bit directly: what are your goals and aspirations? Would you welcome external contributions?
BR,
I also have been working on a verifier, which is at
https://git.sr.ht/~philipwhite/mmweb, if anyone is interested. The
implementation includes a CLI for verifying files, but the main point
of the parser/verifier is to power a web UI, which I am currently
hosting at https://trailingwhite.space/proofs/ if you want to take a
look (it downloads set.mm on page load, which takes several seconds on
my connection, but it gets cached by the browser so that future page
loads are faster). All the design decisions are aimed at enabling the
web UI to be as good as possible.
One reason I started working on my own verifier instead of
contributing to an existing one is that I thought it would be a fun
project with a limited enough scope that I could finish it pretty
quickly. I picked OCaml because that is the language I like best; the
goal wasn't to make the version that everyone should use.
After I got to the point where the verifier worked well enough, I
started thinking about interesting things I could do with such a
library, and the idea of a web UI came up. At that point, I had to
answer the question, "why not switch to using a more hardcore and
battle-tested library like metamath-knife?" I can think of a lot of
answers to this:
- Using a thing that I wrote and that nobody depends on means that I
can change it to fit my needs whenever I want, without asking
anyone else.
- It wouldn't at all surprise me if a general-purpose API is difficult
to make suitable for web user interfaces, and in particular for the
type of user-interface code that I write. Bonsai, the library I am
using, combines the virtual-DOM approach popularized by React with a
library called Incremental, which preserves work from previous frames
whenever it can. To make Incremental happy, it's a good idea to use
persistent data structures and to make updates in a way that shares
as much memory structure as possible, since that allows Incremental
to short-circuit computations it has already done (see the sketch
after this list). However, this approach seems awful if your goal is
to make a verifier that goes as fast as possible over the entire file.
- My build can be all in one language, and I don't have to worry
about how to link Rust and OCaml together. Maybe it's not that hard
to "get it to work", but polyglot systems are always at least a
little less convenient; for instance, you only get to support the
intersection of the platforms that all the languages support.
In my case, binding OCaml to Rust and making it all work in the
browser does not sound like fun.
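To make the structural-sharing point from the second bullet concrete,
here is a minimal OCaml sketch (plain stdlib Map, not the actual
mmweb or Bonsai code; the names and data are invented for
illustration):

```ocaml
(* Each "frame" of UI state is a persistent map. Adding a binding
   creates a new map, but every untouched subtree is shared with the
   previous frame rather than copied. *)
module M = Map.Make (Int)

let frame0 = M.empty |> M.add 1 "wff ph" |> M.add 2 "|- ph"

(* One step later: only the path to the new key is rebuilt. *)
let frame1 = M.add 3 "|- ( ph -> ph )" frame0

let () =
  (* Entry 1 is physically the same value in both frames, so an
     incremental renderer can detect "nothing to redo" with [==]
     instead of re-rendering. *)
  assert (M.find 1 frame0 == M.find 1 frame1)
```

An incremental framework can exploit exactly this kind of sharing:
when a value is physically unchanged between frames, the computations
that depended on it can be skipped.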
I admit it would be a shame if lots of people did a bunch of redundant
work, and then we end up with a bunch of different verifiers with the
same feature-set.
However, if what's actually happening is that a bunch of people are
writing verifiers so that they can then explore interesting ideas on
top of them, that seems like a great situation to me.
Haskell was the result of lots of people focusing their efforts
together to do research on lazy functional programming (I think). This
seems like a big win because decent programming languages take a while
to build, so it helps a lot if researchers have an already built
language as a starting point.
In comparison, Metamath verifiers are stupid easy to write. (I don't
mean you're stupid if you can't make one, but that it's much, much
easier to make one than to make a programming language compiler.)
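To give a feel for why: checking a Metamath proof in normal
(uncompressed) form is essentially a small stack machine. Here is a
hedged OCaml sketch of the assertion step; the types and names are
invented, it assumes floating hypotheses precede essential ones, and
it omits disjoint-variable checks, proof parsing, and compressed
proofs.

```ocaml
type sym = string
type expr = sym list    (* e.g. ["|-"; "("; "ph"; "->"; "ph"; ")"] *)

(* What an assertion contributes: its floating hypotheses
   (typecode, variable), essential hypotheses, and conclusion. *)
type frame = {
  floats : (sym * sym) list;
  essentials : expr list;
  conclusion : expr;
}

(* Apply a substitution (variable -> expression) to an expression. *)
let subst (s : (sym * expr) list) (e : expr) : expr =
  List.concat_map
    (fun tok ->
      match List.assoc_opt tok s with
      | Some replacement -> replacement
      | None -> [ tok ])
    e

(* One proof step for an assertion: pop its hypotheses off the stack,
   read the substitution from the floating hypotheses, check that the
   essential hypotheses match under that substitution, and push the
   substituted conclusion. *)
let apply_assertion (fr : frame) (stack : expr list) : expr list =
  let nf = List.length fr.floats in
  let n = nf + List.length fr.essentials in
  let popped = List.filteri (fun i _ -> i < n) stack in
  let rest = List.filteri (fun i _ -> i >= n) stack in
  (* The top of the stack is the *last* hypothesis. *)
  let args = List.rev popped in
  let float_args = List.filteri (fun i _ -> i < nf) args in
  let ess_args = List.filteri (fun i _ -> i >= nf) args in
  let s =
    List.map2
      (fun (tc, v) arg ->
        match arg with
        | tc' :: e when tc' = tc -> (v, e)
        | _ -> failwith "typecode mismatch")
      fr.floats float_args
  in
  List.iter2
    (fun hyp arg ->
      if subst s hyp <> arg then failwith "hypothesis mismatch")
    fr.essentials ess_args;
  subst s fr.conclusion :: rest
```

That, plus tokenizing and scoping, really is most of a verifier.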
In my
estimation, at this point, if everybody exploring interesting ideas
were working on the same codebase, it could be rather inefficient.
Eventually, once ideas mature, it makes sense to me to think about how
to combine them all into one system, but now feels too early.
To clarify, what I really mean is that if you like Rust and
metamath-knife matches closely enough with the thing you're working
on, then that's a good project to work with. However, if, like me,
you don't really like Rust (it's fine, I just don't really want to do
low-level programming), then writing your own in a language you enjoy
seems perfectly reasonable.
> FYI: I tried loading the page, I get "Loaded N" where N rapidly
> counts up to 42869 and then nothing happens. This happens even after
> a reload, set.txt is cached so that doesn't seem to be the issue.
Yup. The text above the loading count gives you the keyboard
shortcuts for doing anything. I know it's a terribly undiscoverable
UI at the moment; that description wasn't there until an hour or two
ago, when I added it so the page would meet the bare minimum of
shareability. Anyway, if the description doesn't help, just press the
forward slash key, and a fuzzy-finder window should pop up in which
you can search through all the assertion labels that have been
loaded.
> This kind of approach is fine for toy projects but for 'real'
> projects that you want to be used by the community this is
> not-invented-here syndrome. Even if you don't want to write rust
> code, one thing you can do is document your implementation strategy
> and feature set so that the code can be ported by others.
I agree that it's a good idea to document the interesting parts of
the project so that other efforts can benefit as well. Some of my
motivation is intentionally NIH, since making things helps me learn
how they work, and I also find using libraries that I've written more
fun than using libraries other people have written. I disagree that
building a new library when one already exists is always a bad idea;
if you don't think that either, then maybe we only disagree on the
criteria for when to do it.
Interesting, sounds like I'm mistaken. The incrementality I'm talking
about is pretty pervasive. If you can figure out how the demo I
linked works, then you'll be able to step forward and backward
through proofs and see the stack and heap growing and shrinking as
the steps are executed. The stack and heap are represented with
immutable maps that share most of their structure between steps, so
that you can step forward with minimal recomputation of the
virtual-DOM tree; that is, you only need to render the items newly
added to the stack, instead of recomputing the whole page again.
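For flavor, here is a rough OCaml sketch of that snapshot scheme
(illustrative only, not the actual mmweb code): each step's state is
a persistent map, the whole history is kept, and stepping backward is
just an index lookup rather than an undo operation.

```ocaml
module Heap = Map.Make (Int)

(* The state after each proof step: an execution stack plus a heap of
   saved subproofs. Both are immutable, so consecutive snapshots
   share almost all of their structure. *)
type state = { stack : string list; heap : string Heap.t }

(* Execute one (stubbed) proof step, returning a new state that
   shares everything but the new entry with the old one. *)
let step (i : int) (st : state) : state =
  let e = Printf.sprintf "expr-%d" i in
  { stack = e :: st.stack; heap = Heap.add i e st.heap }

(* History of snapshots: element k is the state after k steps. *)
let history (n : int) : state array =
  let states = Array.make (n + 1) { stack = []; heap = Heap.empty } in
  for i = 1 to n do
    states.(i) <- step i states.(i - 1)
  done;
  states

(* "Step backward" in the UI just renders an earlier snapshot; the
   virtual-DOM diff then touches only the entries that differ. *)
let at (h : state array) (k : int) : state = h.(k)
```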
Does mm-knife have incrementality during the execution of a single
proof? I could be wrong, but my approach sounds like it would not
achieve the lightning speeds that I've heard mm-knife is capable of. In
contrast, my verifier takes about 13 seconds to verify set.mm.
There's an argument to be made that virtual-dom + persistent data
structures + all these other incrementality abstractions +
js_of_ocaml's slow compilation scheme are making my program more
complicated and slow. Maybe if I were to just do DOM manipulations it
would be faster and less code. Maybe, but I'm skeptical, and I also
don't think I would enjoy working on it as much, and I would probably
write more bugs.
> If you are compiling to web assembly, I don't think the system support
> really matters, you are ultimately constrained by the browser in the
> end. Besides, I don't think target support is an issue these days no
> matter what language you choose.
>
> Interestingly, if you are using web assembly (EDIT: or compiling to
> javascript, if the *.bc.js files are any indication), you might not
> even need to link them together directly, since they can both just
> separately talk to javascript.
Yeah, that's right. I'm using js_of_ocaml, which translates the
bytecode output of the OCaml compiler into JavaScript. I'm sure
you're right that it could be done. I would still contend that having
bindings is inconvenient, because you have to think about how each
language's runtime will represent the different pieces of data. For
example, you probably can't define a variant type in Rust and have it
be translated to a variant type in OCaml (unless you write that
marshalling code yourself).
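As a hedged illustration of what writing that marshalling yourself
looks like (the type and the encoding below are invented for this
example): a variant has no runtime-neutral representation, so both
sides have to agree on an encoding and implement each direction by
hand.

```ocaml
(* A variant we might want to pass across the language boundary. *)
type step_result =
  | Ok_step of string               (* expression pushed *)
  | Mismatch of string * string     (* expected vs. actual *)

(* Encode to a tagged string that any language (or plain JavaScript)
   can produce and consume. *)
let encode = function
  | Ok_step e -> "ok:" ^ e
  | Mismatch (want, got) -> "mismatch:" ^ want ^ "|" ^ got

(* Decode, returning None on malformed input; this direction has to
   be written (and kept in sync) by hand too. *)
let decode (s : string) : step_result option =
  match String.index_opt s ':' with
  | None -> None
  | Some i ->
    let tag = String.sub s 0 i in
    let rest = String.sub s (i + 1) (String.length s - i - 1) in
    (match tag with
     | "ok" -> Some (Ok_step rest)
     | "mismatch" ->
       (match String.index_opt rest '|' with
        | None -> None
        | Some j ->
          let want = String.sub rest 0 j in
          let got =
            String.sub rest (j + 1) (String.length rest - j - 1)
          in
          Some (Mismatch (want, got)))
     | _ -> None)
```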
> Metamath *verifiers* are easy to write, but metamath *proof
> assistants* not so much. I think the need for centralizing our fairly
> small pool of manpower is because the proof assistant-styled features
> are not nearly powerful enough in any system except to some extent
> mmj2 and MM-PA, both of which are severely hampered by losing their
> principal architect / maintainer.
Fair point, although all the "fracturing" that's been mentioned seems
to be about the proliferation of verifiers, and not proof assistants.
My experiment is currently just in the direction of an interface for
visualization and exploration, which I *think* is orthogonal to
building a proof assistant?
On 5/31/22 17:34, Mario Carneiro wrote:
> I don't think proliferation of verifiers is a problem at all. I would
> like to see metamath verifiers on Rosetta Code and written in 200
> languages. It's a weekend project so it doesn't have the same risks
> unless you want to start adding fancy features like math parsing or
> markup rendering, or anything that is not just read-only on the mm file.
Hmm, by that criterion a second implementation of the definition
checker would seem to be "fancy". Which is too bad, since (at least
to my way of thinking) the definition checker is part of the
"kernel", in the sense of being needed to confirm soundness.