# Metamath Zero

### Mario Carneiro

Feb 20, 2019, 6:32:56 AM
to metamath
I'm experimenting with a new language, Metamath Zero, an amalgamation of Metamath, Ghilbert and Lean syntax. The goal is to balance human readability and ease of writing a verifier, while also making some attempts to patch the bugs in metamath soundness (not that it's wrong, but there is a lot of soundness justification currently happening outside the metamath verifier kernel). Without giving too much explanation of what the syntax means (it's all fictional syntax at the moment, there is no parser yet), I would like to use some of you as guinea pigs for the "readability" aspect. How obvious is it what things mean? I will put some more explanatory comments at the end.

nonempty sort wff;

var ph ps ch: wff*;

term wi (ph ps: wff): wff; infixr wi: -> 25;

term wn (ph: wff): wff; prefix wn: ~ 40;

axiom ax-1: << ph -> ps -> ph >>

(wi ph (wi ps ph));

axiom ax-2: << (ph -> ps -> ch) -> (ph -> ps) -> ph -> ch >>

(wi (wi ph (wi ps ch)) (wi (wi ph ps) (wi ph ch)));

axiom ax-3: << (~ph -> ~ps) -> ps -> ph >>

(wi (wi (wn ph) (wn ps)) (wi ps ph));

axiom ax-mp:

<< ph >> ph ->

<< ph -> ps >> (wi ph ps) ->

<< ps >> ps;

theorem a1i: << ph >> ph -> << ps -> ph >> (wi ps ph);

def wb (ph ps: wff): wff :=

<< ~((ph -> ps) -> ~(ps -> ph)) >>

(wn (wi (wi ph ps) (wn (wi ps ph)))) {

infix wb: <-> 20;

theorem bi1: << (ph <-> ps) -> ph -> ps >>

(wi (wb ph ps) (wi ph ps));

theorem bi2: << (ph <-> ps) -> ps -> ph >>

(wi (wb ph ps) (wi ps ph));

theorem bi3: << (ph -> ps) -> (ps -> ph) -> (ph <-> ps) >>

(wi (wi ph ps) (wi (wi ps ph) (wb ph ps)));

}

def wa (ph ps: wff): wff := << ~(ph -> ~ps) >> (wn (wi ph (wn ps))) {

infix wa: /\ 20;

theorem df-an: << (ph /\ ps) <-> ~(ph -> ~ps) >>

(wb (wa ph ps) (wn (wi ph (wn ps))));

}

def wtru (bound p: wff): wff := << p <-> p >> (wb p p) {

notation wtru := << T. >>;

theorem df-tru: << T. <-> (ph <-> ph) >> (wb wtru (wb ph ph));

}

pure nonempty sort set;

bound x y z w: set;

term wal (x: set) (ph: wff x): wff; prefix wal: A. 30;

def wex (x: set) (ph: wff x): wff :=

<< ~A. x ~ph >> (wn (wal x (wn ph))) {

prefix wex: E. 30;

theorem df-ex: << E. x ph <-> ~A. x ~ph >>

(wb (wex x ph) (wn (wal x (wn ph))));

}

axiom ax-gen: << ph >> ph -> << A. x ph >> (wal x ph);

axiom ax-4: << A. x (ph -> ps) -> A. x ph -> A. x ps >>

(wi (wal x (wi ph ps)) (wi (wal x ph) (wal x ps)));

axiom ax-5 (ph: wff): << ph -> A. x ph >> (wi ph (wal x ph));

var a b c: set;

term weq (a b: set): wff;

term wel (a b: set): wff;

{

local infix weq: = 50;

local infix wel: e. 40;

def weu (x: set) (ph: wff x): wff :=

<< E. y A. x (ph <-> x = y) >>

(wex y (wal x (wb ph (weq x y)))) {

prefix weu: E! 30;

theorem df-eu: << E! x ph <-> E. y A. x (ph <-> x = y) >>

(wb (weu x ph) (wex y (wal x (wb ph (weq x y)))));

}

axiom ax-6: << E. x x = a >> (wex x (weq x a));

axiom ax-7: << a = b -> a = c -> b = c >>

(wi (weq a b) (wi (weq a c) (weq b c)));

axiom ax-8: << a = b -> a e. c -> b e. c >>

(wi (weq a b) (wi (wel a c) (wel b c)));

axiom ax-9: << a = b -> c e. a -> c e. b >>

(wi (weq a b) (wi (wel c a) (wel c b)));

axiom ax-10: << ~A. x ph -> A. x ~ A. x ph >>

(wi (wn (wal x ph)) (wal x (wn (wal x ph))));

axiom ax-11: << A. x A. y ph -> A. y A. x ph >>

(wi (wal x (wal y ph)) (wal y (wal x ph)));

axiom ax-12 (ph: wff y): << x = y -> A. y ph -> A. x (x = y -> ph) >>

(wi (weq x y) (wi (wal y ph) (wal x (wi (weq x y) ph))));

axiom ax-ext: << A. x (x e. a <-> x e. b) -> a = b >>

(wi (wal x (wb (wel x a) (wel x b))) (weq a b));

axiom ax-rep (ph: wff y z):

<< A. y E. x A. z (ph -> z = x) ->

E. x A. z (z e. x <-> E. y (y e. a /\ ph)) >>

(wi (wal y (wex x (wal z (wi ph (weq z x)))))

(wex x (wal z (wb (wel z x) (wex y (wa (wel y a) ph))))));

axiom ax-pow: << E. x A. y (A. z (z e. y -> z e. a) -> y e. x) >>

(wex x (wal y (wi (wal z (wi (wel z y) (wel z a))) (wel y x))));

axiom ax-un: << E. x A. y (E. z (y e. z /\ z e. a) -> y e. x) >>

(wex x (wal y (wi (wex z (wa (wel y z) (wel z a))) (wel y x))));

axiom ax-reg:

<< E. x x e. a -> E. x (x e. a /\ A. y (y e. x -> ~ y e. a)) >>

(wi (wex x (wel x a))

(wex x (wa (wel x a) (wal y (wi (wel y x) (wn (wel y a)))))));

axiom ax-inf:

<< E. x (a e. x /\ A. y (y e. x -> E. z (y e. z /\ z e. x))) >>

(wex x (wa (wel a x) (wal y (wi (wel y x)

(wex z (wa (wel y z) (wel z x)))))));

axiom ax-ac:

<< E. x A. y (y e. a -> E. z z e. y ->

E! z (z e. y /\ E. w (w e. x /\ y e. w /\ z e. w))) >>

(wex x (wal y (wi (wel y a) (wi (wex z (wel z y))

(weu z (wa (wel z y) (wex w

(wa (wa (wel w x) (wel y w)) (wel z w)))))))));

}

nonempty sort class;

var A B: class*;

term cab (x: set) (ph: wff x): class;

term welc (a: set) (A: class): wff;

notation cab (x: set) (ph: wff x) := << { x | ph } >>;

{

local infix welc: e. 50;

axiom ax-8b: << a = b -> a e. A -> b e. A >>

(wi (weq a b) (wi (welc a A) (welc b A)));

axiom ax-clab: << x e. {x | ph} <-> ph >>

(wb (welc x (cab x ph)) ph);

def wceq (A B: class): wff :=

<< A. x (x e. A <-> x e. B) >>

(wal x (wb (welc x A) (welc x B))) {

infix wceq: = 50;

theorem df-ceq: << A = B <-> A. x (x e. A <-> x e. B) >>

(wb (wceq A B) (wal x (wb (welc x A) (welc x B))));

}

def cv (a: set): class := << {x | wel x a} >>

(cab x (wel x a)) {

coercion cv: set -> class;

theorem df-cv: << a e. b <-> wel a b >>

(wb (welc a (cv b)) (wel a b));

}

}

def wcel (A B: class): wff :=

<< E. x (x = A /\ welc x B) >>

(wex x (wa (wceq (cv x) A) (welc x B))) {

infix wcel: e. 50;

theorem df-cel: << A e. B <-> E. x (x = A /\ welc x B) >>

(wb (wcel A B) (wex x (wa (wceq (cv x) A) (welc x B))));

}

* The "sort" keyword declares a new typecode.
* "var" is like "$f".
* "bound" is another kind of variable, that can appear in binders. (Each mm0 sort is represented by two mm sorts, one for bound variables and one for terms. Only bound variables can participate in DV conditions.)
* "term" declares a new axiomatic term constructor.
* DV conditions are implicitly declared by saying what a variable depends on. The global wff variables are declared with type " wff* " meaning that they are open terms, they are not disjoint from anything. The global set variables a,b are declared as "set" meaning that they are disjoint from all bound variables in theorems containing these variables. Bound variables are always disjoint from each other.
* "def" declares a definition. Different from both Ghilbert and Metamath, definitions here are "limited lifetime unfolded". Each def has a block after it containing one or more theorems about the definition. These theorems are proven where the definition is replaced by the definiens, and once the block is closed these theorems are the only access to the definition. (This enables some information privacy if desired.) See wb for example, which uses characterizing theorems instead of an iff.
* The notation system turned out a bit more complicated than I'd hoped. Each "math expression" is stated twice, once in chevrons and once in s-expression notation. The intent is that the system actually uses the s-expressions to represent the axioms (so there is no parse tree funny business), and because they form part of the trust base they are displayed to the user. The notation in the chevrons is checked only in the reverse direction - the s-expression is printed and compared to the given string. (In an IDE there would be a mode where you only type the notation and the computer generates the s-expression.) There is still a danger of ambiguous parsing, but at least you can see the parse directly in the case of a mismatch, and the worst effect of a bad parse is that the theorem being proven is not what the user thought - the ambiguity can't be further exploited in the proof.
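
To make the implicit-DV convention concrete, here is a small sketch in the same (fictional) syntax, assembled from the declarations earlier in this post; the contrast between the two axiom statements is the point:

```
bound x: set;
var ph ps ch: wff*;

axiom ax-gen: << ph >> ph -> << A. x ph >> (wal x ph);

axiom ax-5 (ph: wff): << ph -> A. x ph >> (wi ph (wal x ph));
```

In ax-gen, ph has its global type wff* and so may mention the bound variable x; in ax-5, the local binder (ph: wff) strips the star, which amounts to a DV condition between ph and x - exactly the side condition that makes ax-5 sound.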

You might wonder where the proofs are. They go in a separate file, not human readable. The idea is that this file contains all the information that is necessary to appraise the content and correctness of a theorem, and the proof file is just directions to the verifier to produce a proof of the claimed theorems. In particular, if this "specification file" is inspected and correct, then there is no possible input for the proof file that makes the verifier falsely accept a bad theorem. (So no adding axioms, although new definitions are okay.)

I'm curious to hear what you all think of a project such as this. I'm especially interested in ways to simplify the grammar if we can retain readability.

Mario Carneiro

### ookami

Feb 20, 2019, 8:11:20 AM
to Metamath

On Wednesday, February 20, 2019 at 12:32:56 PM UTC+1, Mario Carneiro wrote:
> I'm experimenting with a new language, Metamath Zero, an amalgamation of Metamath, Ghilbert and Lean syntax. The goal is to balance human readability and ease of writing a verifier, while also making some attempts to patch the bugs in metamath soundness (not that it's wrong, but there is a lot of soundness justification currently happening outside the metamath verifier kernel). Without giving too much explanation of what the syntax means (it's all fictional syntax at the moment, there is no parser yet), I would like to use some of you as guinea pigs for the "readability" aspect. How obvious is it what things mean? I will put some more explanatory comments at the end.

> nonempty sort wff;
>
> var ph ps ch: wff*;
>
> term wi (ph ps: wff): wff; infixr wi: -> 25;

What does the number say? Is it a precedence weight? Is the right-associativity hinted somewhere?

> term wn (ph: wff): wff; prefix wn: ~ 40;
>
> axiom ax-1: << ph -> ps -> ph >>
>
> (wi ph (wi ps ph));

Is the line above an interpretation of the human-readable infix, right-associative implication chain within double-angle brackets? In RPN format?

> axiom ax-2: << (ph -> ps -> ch) -> (ph -> ps) -> ph -> ch >>
>
> (wi (wi ph (wi ps ch)) (wi (wi ph ps) (wi ph ch)));
>
> axiom ax-3: << (~ph -> ~ps) -> ps -> ph >>
>
> (wi (wi (wn ph) (wn ps)) (wi ps ph));
>
> axiom ax-mp:
>
> << ph >> ph ->
>
> << ph -> ps >> (wi ph ps) ->
>
> << ps >> ps;

I'm at work, and have no time to go through the whole grammar file now, but I will after work.

Wolf

### Sauer, Andrew Jacob

Feb 20, 2019, 11:00:26 AM
That looks awesome! And it's not much harder to read than Metamath, if at all.


### Raph Levien

Feb 20, 2019, 11:08:57 AM
to Metamath Mailing List
I haven't dug deeply into understanding this, but it warms my heart that ideas from Ghilbert are being considered to push Metamath forward, and this direction looks fairly solid to me. I completely agree that the soundness story in current Metamath is rough, and this was one of the main motivations for Ghilbert. On the other hand, I bit off more than I could chew, and mostly never quite figured out an appropriate module system, or a way to reconcile types (of a dependent flavor) with the mostly untyped approach of Metamath. I think an intermediate approach like this could fly.

Raph

### David A. Wheeler

Feb 20, 2019, 1:48:33 PM
to metamath, metamath
On Wed, 20 Feb 2019 06:32:39 -0500, Mario Carneiro <di....@gmail.com> wrote:
> I'm experimenting with a new language, Metamath Zero, an amalgamation of
> Metamath, Ghilbert and Lean syntax.

I'm hoping it'll be designed so databases like set.mm and iset.mm can be
automatically converted into it. It looks like that's the case.

Syntax should be simple, though "simple" is in the eye of the beholder :-).

> var ph ps ch: wff*;

I don't see why you need a "*" here.

> term wi (ph ps: wff): wff; infixr wi: -> 25;

Why do you need to repeat "wi:" here?
I wonder if adding a "precedence: 25" would help, else the number is a mystery.

> axiom ax-1: << ph -> ps -> ph >>
> (wi ph (wi ps ph));

I see the s-expression below the axiom, but isn't that trivial to re-discover?
I think you should apply "Don't repeat yourself (DRY)", or a verifier will
have to recheck something that shouldn't need to be said at all.

> axiom ax-mp:
>
> << ph >> ph ->
> << ph -> ps >> (wi ph ps) ->
> << ps >> ps;

That's a lot of chevrons, and that seems much harder to follow
than metamath's current approach.
Axioms, in particular, are a big deal. It should be
"as obvious as possible" to the reader that they're correct.

I'm fine with s-expressions, but a lot of people DO NOT understand them.
If you make people input s-expressions, you might lose
some people entirely. In cases where the s-expression can be easily determined,
you shouldn't make a human create them.

You might consider using neoteric expressions instead of s-expressions.
They have the same meaning, but look more like math.
Basically "x(a b c)" means the same thing as "(x a b c)".
More details are available from Readable Lisp (my project).

That would convert this:
> (wn (wi (wi ph ps) (wn (wi ps ph))))
into this:
> wn(wi(wi(ph ps) wn(wi(ps ph))))

> theorem bi2: << (ph <-> ps) -> ps -> ph >>
> (wi (wb ph ps) (wi ps ph));

Again, I don't see why you need to expressly state the syntax proof.
It's noise for the human, and the computer can recalculate & check that trivially.

I see infixr. Are you planning to support arbitrary-length expressions of the
same operator, e.g., "( 1 + 2 + 3 + 4 )"?

> * The notation system turned out a bit more complicated than I'd hoped.
> Each "math expression" is stated twice, once in chevrons and once in
> s-expression notation. The intent is that the system actually uses the
> s-expressions to represent the axioms (so there is no parse tree funny
> business), and because they form part of the trust base they are displayed
> to the user. The notation in the chevrons is checked only in the reverse
> direction - the s-expression is printed and compared to the given string.
> (In an IDE there would be a mode where you only type the notation and the
> computer generates the s-expression.) There is still a danger of ambiguous
> parsing, but at least you can see the parse directly in the case of a
> mismatch, and the worst effect of a bad parse is that the theorem being
> proven is not what the user thought - the ambiguity can't be further
> exploited in the proof.

Wouldn't it be simpler to simply require that you cannot add a new
axiom or definition if it results in a syntactic ambiguity?
I *think* that would get rid of the "funny business" while eliminating duplicate
declarations. But that depends on what "funny business" you mean,
and I'm not sure I know what you're referring to.

> You might wonder where the proofs are. They go in a separate file, not
> human readable. The idea is that this file contains all the information
> that is necessary to appraise the content and correctness of a theorem, and
> the proof file is just directions to the verifier to produce a proof of the
> claimed theorems.

I think it's important that the proofs include the specific step-by-step instructions
as metamath does now - that makes it trivial to verify by humans and computers.
If it *also* has higher-level instructions (say, to try to reconstruct a proof if some
definitions have changed) that's fine.

--- David A. Wheeler

### David A. Wheeler

Feb 20, 2019, 10:40:36 PM
to metamath, metamath
On Wed, 20 Feb 2019 06:32:39 -0500, Mario Carneiro <di....@gmail.com> wrote:
> I'm experimenting with a new language, Metamath Zero, an amalgamation of
> Metamath, Ghilbert and Lean syntax. The goal is to balance human
> readability and ease of writing a verifier, while also making some attempts
> to patch the bugs in metamath soundness (not that it's wrong, but there is
> a lot of soundness justification currently happening outside the metamath
> verifier kernel).

If possible, I think it'd be good to have an additional reason to create a new language,
or at least argue why similar results can't be done with tweaks to
existing mechanisms like $j.

1. Putting more soundness justifications into the underlying verifier
is indeed a worthy goal. But most of set.mm is proofs, not definitions,
and there are very few "axioms" in the traditional sense... so it's not clear
how much of that the verifiers really need to check.
I can also imagine tweaking existing systems instead of creating a new language.

2. Making the input "more readable" is worthwhile, but I'm not sure that
by *itself* this is enough. Making it so that we can be more flexible
about parentheses would *certainly* be a nice improvement.
Supporting ph /\ ps /\ ch /\ ... would be nice as well.

Expanding on #2: If it can manage to easily support arbitrary-length arguments like
ph /\ ps /\ ch /\ th /\ ... then I can suddenly see that it's able to do something
that Metamath today can't. Is that easy? Worth it?
I don't know, but that *would* go beyond our current setup.
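For instance (sketching in the proposed syntax, using a hypothetical "infixl" keyword that the draft doesn't define), a chain could simply be notation-level sugar for left-nested binary applications:

```
infixl wa: /\ 20;

<< ph /\ ps /\ ch >>
(wa (wa ph ps) ch)
```

The verifier would then still only ever see the binary wa term; only the notation layer would need to change.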

No matter what, I recommend that you do NOT depend
too much on precedence, even when available.
"Developer beliefs about binary operator precedence (part 1)” by Derek M. Jones
http://www.knosof.co.uk/cbook/accu06.html
"a survey of developers at the 2006 ACCU conference to determine if they correctly applied the precedence and associativity of the binary operators common to C, C++, C#, Java, Perl, and PHP. In this experiment only 66.7% of the answers were correct (standard deviation 8.8, poorest performer 45.2% correct, best performer 80.5% correct); this was not much better than random chance (50%). Even many widely-used operator pairs had relatively poor results; only 94% of the problems were correctly solved when they combined * and +, 92% when they combined * and -, and 69% when they combined / and +."
Professional mathematicians probably do get precedence correct, but where possible I'd like the results to be understood by others.

When we developed my "Readable Lisp" we supported infix but decided NOT to support precedence. The trade-offs are very different here, so I don't think that reasoning carries over directly. But my point is that supporting lots of precedence levels isn't really, by itself, a big win.
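To ground this in the proposal's own grammar: assuming higher numbers bind tighter (as the declarations "prefix wn: ~ 40" and "infixr wi: -> 25" suggest), a reader has to know both numbers to parse even a simple formula:

```
<< ~ ph -> ps >>
(wi (wn ph) ps)
```

A reader who misremembered the relative precedences would instead read it as (wn (wi ph ps)).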

> * "def" declares a definition. Different from both Ghilbert and Metamath,
> definitions here are "limited lifetime unfolded". Each def has a block
> after it containing one or more theorems about the definition. These
> theorems are proven where the definition is replaced by the definiens,
> and once the block is closed these theorems are the only access to the
> definition. (This enables some information privacy if desired.) See wb for
> example, which uses characterizing theorems instead of an iff.

That makes sense. However, in some cases you will want a way to say
"don't use this theorem", so you can prove statements without exposing them for use elsewhere.

But what about proving theorems that have to "reach into" multiple definitions to work?
You might need a way to express that you're intentionally breaking the rules.

> * The notation system turned out a bit more complicated than I'd hoped.
> Each "math expression" is stated twice, once in chevrons and once in
> s-expression notation.

As I noted earlier, stating things twice is undesirable,
and I don't think it's necessary in this case.

--- David A. Wheeler

### Mario Carneiro

Feb 20, 2019, 10:48:01 PM
to metamath
Thank you everyone for all the suggestions. I will see if I can respond to each of them.

On Wed, Feb 20, 2019 at 8:11 AM 'ookami' via Metamath <meta...@googlegroups.com> wrote:
> > nonempty sort wff;
> >
> > var ph ps ch: wff*;
> >
> > term wi (ph ps: wff): wff; infixr wi: -> 25;
>
> What does the number say? Is it a precedence weight? Is the right-associativity hinted somewhere?

It is indeed a precedence number. The right associativity is the "r" suffix, but this command and its cousins have been called out by several people, so I think I need to rethink it. Maybe even having precedences in a numeric order is a bad idea, although I am really wary about making the notation system any more complicated than it already is (and if I can simplify it dramatically that would be even better).

term wn (ph: wff): wff; prefix wn: ~