tauto.mm1

Gino Giotto

Dec 31, 2025, 11:23:54 PM (10 days ago)
to Metamath
I made an MM1 tactic for finding proofs in classical propositional logic. The tactic is located in this repository. It supports all the main symbols used in set.mm, specifically: ->, -., /\, \/, <->, -/\, -\/, \/_, T., F., if-, hadd, cadd (the symbols are slightly different in MM0; I made an explanatory table here). I took the name "tauto" from Lean's tauto tactic, which serves a similar purpose.

The goal statements can be of any shape or form you like, as long as they are provable. The only limitation is that it does not support rules of inference, so only statements in closed form can be proved (for now).
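
For instance (a hypothetical declaration, written only to illustrate the limitation), a rule of inference with hypotheses such as

theorem mp_example (ph ps: wff) (h1: $ ph $) (h2: $ ph -> ps $): $ ps $= tauto;

would not be handled, whereas the corresponding closed statement $ ( ph -> ( ( ph -> ps ) -> ps ) ) $ can be given to tauto.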

You need the latest MM0 version (commit 6d5f0d4) from the GitHub repo. The MM0 extension on the marketplace won't work, because the tauto tactic heavily depends on this issue being solved.

To prove a theorem, you simply write "tauto" as its proof. Examples:

theorem id (ph: wff): $ ph -> ph $= tauto;
theorem simplll (ph ps ch th: wff): $ ( ( ( ( ph /\ ps ) /\ ch ) /\ th ) -> ph ) $= tauto;
theorem trujust {x y: set}: $ ( ( A. x x =s x -> A. x x =s x ) <-> ( A. y y =s y -> A. y y =s y ) ) $= tauto;
theorem falnantru: $ ( ( F. -/\ T. ) <-> T. ) $= tauto;
theorem ifpfal (ph ps ch: wff): $ ( ~ ph -> ( ifp ph ps ch <-> ch ) ) $= tauto;
theorem cadnot (ph ps ch: wff): $ ( ~ cad ph ps ch <-> cad (~ph) (~ps) (~ch) ) $= tauto;
You can also attempt wild statements like:

theorem _ (ph ps ch: wff): $ ~((~~((~~((ph -> ps) -> ~(ps -> ph)) -> ch) -> ~(ch -> ~~((ph -> ps) -> ~(ps -> ph))))
-> ~~(ph -> ~~((ps -> ch) -> ~(ch -> ps))) -> ~(~ph -> ~~~((ps -> ch) -> ~(ch -> ps)))) -> ~((~~(ph -> ~~((ps -> ch)
-> ~(ch -> ps))) -> ~(~ph -> ~~~((ps -> ch) -> ~(ch -> ps)))) -> ~~((~~((ph -> ps) -> ~(ps -> ph)) -> ch) -> ~(ch ->
~~((ph -> ps) -> ~(ps -> ph)))))) $= tauto;

To view the generated proofs, you can use the tactic below (borrowed from peano.mm1). A tooltip will display the proof when hovering over the code.

do {
  (def (get-proof x) (nth 1 @ @ nth 6 @ get-decl x))
  (display @ pp @ get-proof 'pm2_48)
};

In the example above, the generated proof of pm2_48 is:

(:conv
  (im (not (or ph ps)) (or ph (not ps)))
  (im
    (not (:unfold or (ph ps) (im (not ph) ps)))
    (:unfold or (ph (not ps)) (im (not ph) (not ps))))
  (impneg (~ph) ps (~ph -> ~ps)
    (expn (~ph) ps (~ph -> ~ps)
      (expn (~(~ph -> ps)) ph (~ps)
        (impneg (~(~ph -> ps)) ph (~ps)
          (impneg (~ph) ps (~ph -> ~ps)
            (com23 (~ph) (~ph) (~ps) (~ps)
              (expi (~ph) (~ph) (~ps -> ~ps)
                (idd (~(~ph -> ~~ph)) (~ps))))))))))

The tactic is designed for generality rather than for producing short proofs. Most generated proofs will be longer than their set.mm counterparts.  

I tested the tactic against 786 theorems of set.mm (all theorems in closed form preceding the predicate logic section), and it successfully found a proof for all of them. To check that the tactic was indeed successful, you can verify the proofs with the following commands (if mm0-c prints nothing, the databases have no errors):

./mm0-rs compile -W tauto.mm1 tauto.mmb
./mm0-c tauto.mmb < tauto.mm0

It's interesting to compare the MM1 tauto tactic with the Rumm tactics I wrote a year ago. The latter were tested against 112 theorems; solver1.rmm took about 2 seconds to compute all the proofs combined, while solver2.rmm took about 4 seconds. Note that the Rumm tactics only support primitive symbols, so every statement has to be written using only implication "->" and negation "-.". The general algorithm I used in MM1 is roughly the same as the Rumm one: the tactic splits the main goal into statements in implicational normal form and then proves them. One of the main reasons I wanted to try out MM1 is that it supports work variables (see this issue), but I was also curious to see whether MM1's definitional unfolding makes proof development easier (the only unfortunate aspect of MM1 is that it does not have a translator to MM, see this discussion).
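
As a rough illustration of the idea (this is not the actual tactic code): to prove a goal like

$ ph \/ ~ ph $

the tactic first unfolds the definition of \/ (a \/ b unfolds to ~ a -> b, as can be seen in the :unfold steps of the proof term above), leaving the implicational goal

$ ~ ph -> ~ ph $

which the implicational part of the search then closes (here it is just an instance of id).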

I'm not sure how to measure the time MM1 needs to generate proofs. One approach that I found reasonable is to run "mm0-rs compile tauto.mm1 tauto.mmu", which produces an MMU file containing all the proof objects generated from an MM1 file in a readable format. This command completes in about 2-3 seconds on my machine, so maybe I can claim that my tauto tactic proved all those 786 theorems in that amount of time.
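
For reference, assuming a Unix-like shell with the standard time utility, the same command can be timed directly:

time mm0-rs compile tauto.mm1 tauto.mmu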

It would also be interesting to compare it to Lean's tauto tactic. I don’t know much about Lean at the moment, so I’m putting this forward as a question to people with more experience: how quickly would Lean’s tauto prove those 786 theorems from set.mm? What are its limitations? (I guess the Lean version is likely to be a lot more efficient than mine, given that it has a very active community of hundreds if not thousands of people constantly updating and optimizing the mathlib4 database).

Perhaps one idea to make the MM1 tauto faster would be to avoid doing definition unfolding explicitly. Normally, the MM1 refine tactic is able to unfold definitions on its own. The issue is that the match tactic does not unfold a goal when matching it against a pattern, which is the main reason why I had to write the unfolding myself. Such a feature could look like this:

(match $ ,x /\ ,y $
[ $ ~(,x -> ~,y) $ #f]
[ (unfold $ ~(,x -> ~,y) $) #t])

Here the match would return #t rather than a matching failure.

The MM1 tauto tactic can also be used as a SAT solver. To determine the satisfiability of a statement, it is sufficient to ask tauto to prove the negation of that statement. If it finds a proof, the original statement is unsatisfiable; if it does not find a proof, the statement is satisfiable. When tested against an unprovable statement, tauto should return the error "Statement not provable". If it returns an overflow error instead, then it means I missed something (indeed, the "Statement not provable" error was added precisely to avoid overflow).
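
For example (hypothetical declarations, made up for illustration): asking tauto for

theorem unsat_check (ph: wff): $ ~ ( ph /\ ~ ph ) $= tauto;

succeeds, so ( ph /\ ~ ph ) is unsatisfiable, whereas a goal like $ ~ ( ph \/ ps ) $ should fail with "Statement not provable", meaning ( ph \/ ps ) is satisfiable.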

Due to MM0's approach to definitions, some theorems beyond propositional logic can be proved more easily with tauto. For example:

We first introduce class terms and define class equality (as done in the set.mm0 file of the MM0 repo):

strict sort class;
term welc (a: set) (A: class): wff; infixl welc: $ec.$ prec 50;
def wceq {.x: set} (A B: class): wff = $ A. x (x ec. A <-> x ec. B) $;
infixl wceq: $=$ prec 50;

And then we can prove some theorems of class equality with the help of tauto:

theorem eqcid (A: class): $ A = A $= '(!! ax_gen x ,tauto);
theorem eqscom (A B: class): $ A = B -> B = A $ = '(!! alimi x ,tauto);
theorem eqscomb (A B: class): $ A = B <-> B = A $ = '(!! albii x ,tauto);
theorem eqstr (A B C: class): $ A = B -> B = C -> A = C $ = '(!! al2imi x ,tauto);
theorem eqstr2 (A B C: class): $ A = B -> B = C -> C = A $ = '(!! al2imi x ,tauto);
theorem eqstr3 (A B C: class): $ B = A -> B = C -> A = C $ = '(!! al2imi x ,tauto);
theorem eqstr4 (A B C: class): $ A = B -> C = B -> A = C $ = '(!! al2imi x ,tauto);

In set.mm, those theorems would require all axioms up to ax-ext, while here they only require axioms up to ax-4. This is a very basic example of how one could use the tauto tactic for later developments.

Mario Carneiro

Jan 1, 2026, 12:35:09 AM (10 days ago)
to meta...@googlegroups.com
1. Here's what I get if I try compiling mmu vs mmb:

$ hyperfine -- "mm0-rs compile tauto.mm1 tauto.mmb" "mm0-rs compile tauto.mm1 tauto.mmu"
Benchmark 1: mm0-rs compile tauto.mm1 tauto.mmb
  Time (mean ± σ):     879.7 ms ±  34.9 ms    [User: 858.7 ms, System: 19.7 ms]
  Range (min … max):   813.5 ms … 929.4 ms    10 runs
 
Benchmark 2: mm0-rs compile tauto.mm1 tauto.mmu
  Time (mean ± σ):      1.002 s ±  0.086 s    [User: 0.972 s, System: 0.028 s]
  Range (min … max):    0.883 s …  1.165 s    10 runs
 
Summary
  mm0-rs compile tauto.mm1 tauto.mmb ran
    1.14 ± 0.11 times faster than mm0-rs compile tauto.mm1 tauto.mmu


If you leave off either mmb or mmu you get just the cost of compiling the file and running the tactics without any export, so that is going to reflect more of your code.
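
For example:

$ hyperfine -- "mm0-rs compile tauto.mm1"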

2. MM0 generally produces better (shorter) proofs if you don't use its conversion facility. You also won't be able to translate these proofs easily to metamath; it's more straightforward if you only use unfolding in helper lemmas and have straight theorem applications in tactic-generated proofs.

3. You can use MM1 to automatically generate conversion proofs if you just assert the type inside `refine`, like so:
 
  (def (tauto refine t) (refine t '(:verb ,(start refine (trans t)))))

That way you don't need trans1 (although you might still want something like trans1 if you follow advice 2).

4. If you do follow 2, you can use a basis other than -> and ~, which is probably preferable for a sat solver like this.

5. You have several occurrences of e.g. "(fn (refine t) (conclusion refine t))"; this can be reduced to just "conclusion".

6. You have used -- comments before declarations; if you use --| comments then they will show up as doc comments.
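
For instance (with a made-up declaration name):

-- a plain comment like this is just a comment
--| a doc comment like this is picked up as documentation for the declaration below
theorem my_lemma (ph: wff): $ ph -> ph $= tauto;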

7. My guess is that the overall program will be faster if you don't use metavariables etc. and just generate the complete proof. This is of course more syntactically cumbersome though, so I don't blame you for using the existing facilities to do what they are designed to do.

I'll write back with my own take on this tactic later.


Gino Giotto

Jan 1, 2026, 7:20:13 PM (9 days ago)
to Metamath

Thanks for the PR. The tactic is now at least 6x faster than my original implementation, and I can definitely feel the difference on the VS Code LSP server side.

> If you leave off either mmb or mmu you get just the cost of compiling the file and running the tactics without any export, so that is going to reflect more of your code.

This answers my question. Therefore, I will use "mm0-rs compile tauto.mm1" to estimate the time the tactics spend computing proofs. This will be useful for upcoming attempts.

> MM0 generally produces better (shorter) proofs if you don't use its conversion facility. You also won't be able to translate these proofs easily to metamath; it's more straightforward if you only use unfolding in helper lemmas and have straight theorem applications in tactic-generated proofs.

Looking at your code, I noticed that tauto_nhad is introduced as a standalone lemma even though its proof is just idi + unfolding. So it seems that one should delegate definitional unfolding to already-proved theorems as much as possible, even when refine could figure it out on its own. I had initially wondered which was the better performance trade-off: using conversion proofs or attempting more match clauses.

> You can use MM1 to automatically generate conversion proofs if you just assert the type inside `refine`, like so:

This would have been quite useful for my original implementation, since it took me a while to understand the syntax of conversion proofs. I initially looked at peano.mm1 as a reference, but :conv appears only twice there and the unfolding seemed obscure. I eventually found an explicit example by accident in an MMU file among the examples.

> You have several occurrences of e.g. "(fn (refine t) (conclusion refine t))", this can be reduced to just "conclusion".

You're right. I initially thought that binding a new refine was required to make the next tactic capture the goal at the current location, but the placement of the tactic inside the proof already gives that information, so calling it directly is enough.
