Proof Design based on Natural Deduction

samiro.d...@rwth-aachen.de

Jun 7, 2025, 3:20:04 PM
to Metamath

Roughly a fortnight ago, I wrote a C++ procedure to convert Fitch-style natural deduction proofs of propositional theorems to Hilbert-style proofs.
I released it today as part of pmGenerator 1.2.2.

Since Metamath is concerned with Hilbert systems, I'd like to share a few insights I learned along the way, apart from presenting the new feature. In particular, I noticed that Mario Carneiro gave a talk on "Natural Deduction in the Metamath Proof Language", but it doesn't seem to have gained much traction; at least I do not see many theorems in Metamath that would be crucial to apply proof design based on natural deduction.

In my opinion, the option to design such proofs is a great advantage, since they are very easy for humans to find (at least at the propositional level), whereas Hilbert-style proofs are rather complicated (even at the propositional level).

For demonstration, here's a proof constructor to try out natural deduction proof design: https://mrieppel.github.io/FitchFX/
My converter uses the same syntax (with "Deduction Rules for TFL" only). Some example proofs — of the seven minimal single axioms of classical propositional logic with operators {→,¬} — are m_ffx.txt, w1_ffx.txt, …, w6_ffx.txt. These files can also be imported via copy/paste into the FitchFX tool under the "Export" tab.

Key Concepts

[Note: In the following, exponents ⁿ (or ^n) mean n-fold concatenation of sequences, and D stands for (2-ary) condensed detachment in prefix notation, i.e. most general application of modus ponens, taking a proof of the conditional as first and a proof of the antecedent as second argument.]
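
To illustrate the notation, here is a small worked example (with the unifications written out informally): let 1 := (ax-1), i.e. p→(q→p), and 2 := (ax-2), i.e. (p→(q→r))→((p→q)→(p→r)). Then

  D21   : unifying the antecedent of 2 with 1 (r ↦ p) yields (p→q)→(p→p)
  DD211 : unifying the antecedent p→q of that result with 1 (q ↦ q→p) yields p→p, i.e. set.mm's (id)

So DD211 is a D-proof of p→p; it reappears in the constructions below.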

  • Most rules can be enabled by using a corresponding theorem. For example, p→(q→(p∧q)) (pm3.2) can be used — in combination with two modus ponens applications — to apply conjunction introduction, i.e. ∧I: Γ∪{p,q}⊢(p∧q). There may be multiple rule-enabling theorems, for example p→(q→(q∧p)) (pm3.21) can accomplish the same thing by changing the order of arguments. I provided a table of rule-enabling theorems at nd/NdConverter.h, many of which do not have set.mm correspondents: Of six ∨E-enabling theorems there is only (jao), and I didn't find any of (p→⊥)→¬p, ¬p→(p→⊥), p→(¬p→⊥), (¬p→⊥)→p, i.e. no enabling theorems for any of ¬I, ¬E, IP.
  • However, in natural deduction proofs, there are blocks at certain depths, each starting with an assumption.
    For example, ∧I: Γ∪{p,q}⊢(p∧q) at depth 3 is actually Γ∪{a→(b→(c→p)), a→(b→(c→q))}⊢a→(b→(c→(p∧q))). Fortunately, such variants can easily be constructed from the zero-depth rule-enabling theorems:
    For symbols 1 := (ax-1) and 2 := (ax-2), the proof σ_mpd(d) for σ_mpd(0) := D and σ_mpd(n+1) := (σ_mpd(n))²(D1)ⁿ2 can be used to apply modus ponens at depth d. For example, σ_mpd(0) is (ax-mp), σ_mpd(1) is (mpd), and σ_mpd(2) is (mpdd). But σ_mpd(d) for d ≥ 3 appears to still be missing from Metamath. (Both this recurrence and a1_a1i below are sketched in code after this list.)
    Every theorem can be shifted one deeper by adding an antecedent via preceding its proof with D1, i.e. with a single application of (a1i).
    In combination with σ_mpd, rule-enabling theorems can thereby be applied at any depth. I gave detailed constructions of all supported rules at nd/NdConverter.cpp#L538-L769.
  • We cannot simply use some rule-enabling theorem to translate conditional introduction, i.e. →I: from Γ∪{p}⊢q infer Γ⊢(p→q), since this rule handles the elimination of blocks and depth, which is necessary because Hilbert-style proofs operate in a single global scope. The other rules just call it in order to eliminate a block and then operate on the resulting conditional.
    To eliminate an assumption p for a caller at depth d, we can replace it with an appropriate proof a1_a1i(n, m) with d = n+m+1 of either a₁→(…→(aₘ→(p→p))…) for n = 0, or a₁→(…→(aₘ→(p→(q₀→(q₁→(…→(qₙ→p)…)))))…) for n > 0, when the assumption is used from a position n deeper than the assumption's depth m+1.
    We can construct a1_a1i(n, m) based on 1 := (ax-1) and 2 := (ax-2) via a1_a1i(0, m) := (D1)^mDD211, and a1_a1i(n, m) := (D1)^m(DD2D11)ⁿ1 for n > 0. Note that DD211 and D2D11 are just proofs of p→p and (p→q)→(p→(r→q)), respectively. Oddly, the theorem of the second proof is not part of set.mm; I only found two different proofs of it, (mercolem3) and (merco1lem15), for both of which new usage is discouraged. In combination with modus ponens, the theorem can be used with conditionals to slip in additional antecedents.
  • In general, we can use (p→q)→(p→(r→q)) in combination with (a1i) to construct proofs slip(n, m, σ) from proofs σ to slip in m new antecedents after n known antecedents for a known conclusion. This makes the implementation — in particular due to the possible use of reiteration steps — much simpler: regardless of the depth from which a line is called and how many assumptions it shares with the caller, the appropriate numbers of antecedents can be slipped in where they are needed in order to rebuild σ's theorem to match the caller's context.
  • Since the final line in the Fitch-style proof makes the initial call and (for correct proofs without premises) must be in the global scope, all lines can be fully decontextualized, i.e. transformed into theorems, in this manner.
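
To make the σ_mpd and a1_a1i recurrences above concrete, here is a minimal C++ sketch (not taken from pmGenerator's sources; the helper names simply mirror the notation above) that assembles the corresponding D-proof strings:

#include <iostream>
#include <string>

// sigma_mpd(0) = "D", sigma_mpd(n+1) = sigma_mpd(n)^2 + ("D1")^n + "2"
std::string sigma_mpd(unsigned d) {
    std::string s = "D";
    for (unsigned n = 0; n < d; ++n) {
        std::string pad;                         // ("D1")^n
        for (unsigned i = 0; i < n; ++i) pad += "D1";
        s = s + s + pad + "2";
    }
    return s;
}

// a1_a1i(0, m) = ("D1")^m + "DD211"; a1_a1i(n, m) = ("D1")^m + ("DD2D11")^n + "1" for n > 0
std::string a1_a1i(unsigned n, unsigned m) {
    std::string s;
    for (unsigned i = 0; i < m; ++i) s += "D1";
    if (n == 0) return s + "DD211";
    for (unsigned i = 0; i < n; ++i) s += "DD2D11";
    return s + "1";
}

int main() {
    std::cout << sigma_mpd(0) << '\n';           // D          -- (ax-mp)
    std::cout << sigma_mpd(1) << '\n';           // DD2        -- (mpd)
    std::cout << sigma_mpd(2) << '\n';           // DD2DD2D12  -- (mpdd)
    std::cout << a1_a1i(1, 2) << '\n';           // D1D1DD2D111
}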

The core of the translation algorithm can be found at nd/NdConverter.cpp#L815-L947 (definition and call of recursive lambda function translateNdProof).

Usage

Unlike Metamath, my converter (pmGenerator --ndconvert) by default uses aliases (as mentioned in the header file) rather than treating connectives other than {→,¬} as real symbols and operators, with the same aliases that are used by pmproofs.txt. There is the option -h to use heterogeneous language (i.e. with extra axioms to define additional operators), but then the user must also provide rule-enabling theorems in order to enable their corresponding rules for translation.

My procedure can translate into all kinds of propositional Hilbert systems, as long as the user provides proofs of (ax-1) and (ax-2) together with sufficient information for the rules that are used. When using {→,¬}-pure language, providing a proof of (ax-3) in addition to (ax-1) and (ax-2) is already sufficient to enable all rules.

For example, m.txt (which is data/m.txt in the release files) can be used via

pmGenerator --ndconvert input.txt -n -b data/m.txt -o result.txt

to generate a proof based on (meredith) as the sole axiom for whatever theorem has a FitchFX proof in input.txt. All rules are supported since m.txt contains proofs of (ax-1), (ax-2), and (ax-3). Since it also contains a proof of p→p that is shorter than building one based on DD211, resulting proofs use the corresponding shortcut.

Results can then be transformed via

pmGenerator --transform result.txt -f -n […options…] -o transformedResult.txt

and optionally be compressed with -z or -x to potentially find fundamentally shorter proofs. When exploring new systems, the hardest part can be to find the first proofs of sufficient theorems (or figure out they don't exist).

Question

Regarding Metamath's databases, shouldn't there be more elements to support proof design based on natural deduction? One cannot prepare all the elements, of course, since translations of Fitch-style proofs of unlimited depth require infinitely many derivations — at least one per depth per rule-enabling theorem.

The lack of more intuitive tools for proof design in set.mm is something that bothered me when I discovered Metamath years ago; back then, I couldn't quite make use of it. I hope we can shed some light on the matter.

— Samiro Discher

Mario Carneiro

Jun 8, 2025, 9:52:20 AM
to meta...@googlegroups.com
On Sat, Jun 7, 2025 at 8:20 PM samiro.d...@rwth-aachen.de <samiro....@rwth-aachen.de> wrote:

In particular, I noticed that Mario Carneiro gave a talk on "Natural Deduction in the Metamath Proof Language", but it doesn't seem to have gained much traction; at least I do not see many theorems in Metamath that would be crucial to apply proof design based on natural deduction.

To the contrary, I would say that the idea behind that talk was quite influential on set.mm; writing large theorems in "deduction style" basically became the default, replacing the more limited usage of inference style + weak deduction theorem as well as "theorem style" closed inferences which do a lot of context manipulation. Personally I consider writing deduction proofs in metamath / set.mm to be basically a solved problem, enough that I felt no need to introduce deduction style features into MM0 (which adds complexity to the kernel) and instead did it all in a simulation layer styled after the set.mm approach.
  • Most rules can be enabled by using a corresponding theorem. For example, p→(q→(p∧q)) (pm3.2) can be used — in combination with two modus ponens applications — to apply conjunction introduction, i.e. ∧I: Γ∪{p,q}⊢(p∧q). There may be multiple rule-enabling theorems, for example p→(q→(q∧p)) (pm3.21) can accomplish the same thing by changing the order of arguments. I provided a table of rule-enabling theorems at nd/NdConverter.h, many of which do not have set.mm correspondents: Of six ∨E-enabling theorems there is only (jao), and I didn't find any of (p→⊥)→¬p, ¬p→(p→⊥), p→(¬p→⊥), (¬p→⊥)→p, i.e. no enabling theorems for any of ¬I, ¬E, IP.
* If I want to do conjunction introduction, I use jca (which is named iand in MM0, a name I find rather more intuitive).
* For or elimination I use jaod or jaodan (and again, I find eord to be a better name)
* For not elimination/introduction I will usually use absurd (~a -> a -> b) and cases ({a -> b, ~a -> b} |- b). It can also be used in various other ways via lemmas like contraposition; I would say that using the natded axiom directly is not always the most convenient option here.

In general, you should check out the table https://us.metamath.org/mpeuni/natded.html , which contains equivalents for every intuitionistic logic axiomatization we have seen in the literature.
  • However, in natural deduction proofs, there are blocks at certain depths, each starting with an assumption.
    For example, ∧I: Γ∪{p,q}⊢(p∧q) at depth 3 is actually Γ∪{a→(b→(c→p)), a→(b→(c→q))}⊢a→(b→(c→(p∧q))). Fortunately, such variants can easily be constructed from the zero-depth rule-enabling theorems:
    For symbols 1 := (ax-1) and 2 := (ax-2), the proof σ_mpd(d) for σ_mpd(0) := D and σ_mpd(n+1) := (σ_mpd(n))²(D1)ⁿ2 can be used to apply modus ponens at depth d. For example, σ_mpd(0) is (ax-mp), σ_mpd(1) is (mpd), and σ_mpd(2) is (mpdd). But σ_mpd(d) for d ≥ 3 appears to still be missing from Metamath.
    Every theorem can be shifted one deeper by adding an antecedent via preceding its proof with D1, i.e. with a single application of (a1i).
    In combination with σ_mpd, rule-enabling theorems can thereby be applied at any depth. I gave detailed constructions of all supported rules at nd/NdConverter.cpp#L538-L769.
As I explain in the talk, you don't actually need depth > 1 since you can use conjunction instead. That's exactly the reason why this whole setup works pragmatically and doesn't require an infinite list of axioms. Instead of stacking things up as

ph -> a -> b -> c -> p,
ph -> a -> b -> c -> q
|- ph -> a -> b -> c -> (p /\ q)

if you use

ph /\ a /\ b /\ c -> p,
ph /\ a /\ b /\ c -> q
|- ph /\ a /\ b /\ c -> (p /\ q)

then this is just a substitution instance of jca again. You use imp/ex in order to push and pop assumptions to the context but otherwise you never have to worry about larger depths.
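
To spell that out (a minimal sketch using the names above, reading /\ as left-associated):

ph -> a -> b -> c -> p,   ph -> a -> b -> c -> q
    -- imp, three times each -->
ph /\ a /\ b /\ c -> p,   ph /\ a /\ b /\ c -> q
    -- jca -->
ph /\ a /\ b /\ c -> (p /\ q)
    -- ex, three times -->
ph -> a -> b -> c -> (p /\ q)
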
  • We cannot simply make use of some rule-enabling theorem to translate conditional introduction, i.e. →I: from Γ∪{p}⊢q infer Γ⊢(p→q), since it handles the elimination of blocks and depth, which is necessary because Hilbert-style proofs operate on a global scope everywhere. Other rules just call it in order to eliminate a block and then operate on the resulting conditional.
I don't understand this sentence, but I think I just answered it: the equivalent of ->I is ex: {Γ /\ p -> q} |- Γ -> p -> q.

Question

Regarding Metamath's databases, shouldn't there be more elements to support proof design based on natural deduction? One cannot prepare all the elements, of course, since translations of Fitch-style proofs of unlimited depth require infinitely many derivations — at least one per depth per rule-enabling theorem.

I hope I've already answered this, but just to reiterate: One *can* prepare all the elements, and set.mm actually does so; it is used in practice and is incredibly effective at making it unnecessary to play with the context nearly as much as used to be the case. The only infinite families you really need are the simp-*r and ad*antlr series, for implementing the hypothesis rule. (You can still live with a finite axiomatization, you just need to use ad10antr when you reach the limit.) (PS: I also didn't like these names, and use an*lr and anw*l respectively in MM0.)

The way I think of this is that in metamath, there is a low "cost" associated with manipulating expressions which are near the root of the expression tree. Therefore ((a /\ b) /\ c) -> d is optimized for manipulating c and d, while a and b are harder to get to, so if I have a bunch of deduction theorems to apply, I don't want everything to be in the form a -> (b -> (c -> d)), where now a and b are easiest to reach and c and d are farther away from the root, since d is the key thing I want to manipulate. This is the reason why high-depth intuitionistic-equivalent theorems like {a -> b -> c -> p, a -> b -> c -> q} |- a -> b -> c -> (p /\ q) are a pain and should be avoided.

Mario

Thierry Arnoux

Jun 8, 2025, 11:59:55 AM
to meta...@googlegroups.com, Mario Carneiro

On 08/06/2025 15:52, Mario Carneiro wrote:

On Sat, Jun 7, 2025 at 8:20 PM samiro.d...@rwth-aachen.de <samiro....@rwth-aachen.de> wrote:

In particular, I noticed that Mario Carneiro gave a talk on "Natural Deduction in the Metamath Proof Language", but it doesn't seem to have gained much traction; at least I do not see many theorems in Metamath that would be crucial to apply proof design based on natural deduction.

To the contrary, I would say that the idea behind that talk was quite influential on set.mm; writing large theorems in "deduction style" basically became the default, replacing the more limited usage of inference style + weak deduction theorem as well as "theorem style" closed inferences which do a lot of context manipulation. Personally I consider writing deduction proofs in metamath / set.mm to be basically a solved problem.

This was also my first reaction.

As far as I'm concerned, virtually all of my recent additions are deduction style.

Here is a discussion which basically calls for even more deduction from theorems: https://github.com/metamath/set.mm/issues/3990, with some strong support.

A quick look at changes-set.mm will show that recently we've been moving a lot of deduction-style proofs (those with labels ending in `d`) to the main part of set.mm.

BR,
Thierry


samiro.d...@rwth-aachen.de

Jun 8, 2025, 1:06:14 PM
to Metamath
On 08/06/2025 15:52, Mario Carneiro wrote:
As I explain in the talk, you don't actually need depth > 1 since you can use conjunction instead. That's exactly the reason why this whole setup works pragmatically and doesn't require an infinite list of axioms. Instead of stacking things up as

ph -> a -> b -> c -> p,
ph -> a -> b -> c -> q
|- ph -> a -> b -> c -> (p /\ q)

if you use

ph /\ a /\ b /\ c -> p,
ph /\ a /\ b /\ c -> q
|- ph /\ a /\ b /\ c -> (p /\ q)

then this is just a substitution instance of jca again. You use imp/ex in order to push and pop assumptions to the context but otherwise you never have to worry about larger depths.

Ah, I missed the generality part. I only found the talk after I had already completed my converter, and I didn't follow it carefully.

So the idea is to limit oneself to a certain flat kind of natural deduction proof that can simulate deeper blocks via conjunction rules, so that those proofs can then be simulated with fewer translation rules.

And this became a standard approach that people are satisfied with, so nobody really needs broader support for natural deduction.
At least it makes sense to me now!


Thierry Arnoux wrote on Sunday, June 8, 2025 at 17:59:55 UTC+2:

> Here is a discussion which basically calls for even more deduction from theorems: https://github.com/metamath/set.mm/issues/3990, with some strong support.

I am under the impression that this post comes from a similar place as mine.

If the theorems around https://us.metamath.org/mpeuni/natded.html (which I've also seen before) are really meant as general tools for natural-deduction-based proof design, shouldn't the page mention this right away (and then link to corresponding tutorials)?

David A. Wheeler

Jun 8, 2025, 3:37:56 PM
to Metamath Mailing List
For a broader discussion about natural deduction in set.mm (and iset.mm!), see:
https://us.metamath.org/mpeuni/mmnatded.html

That in turn links to:
https://us.metamath.org/mpeuni/natded.html

--- David A. Wheeler

Meta Kunt

Jun 9, 2025, 6:55:21 AM
to meta...@googlegroups.com
Natural deduction is so good, so so good. Context depth 1 makes everything work simply because you get free autocomplete.
I have a small list of theorems to use, so that if a certain problem pops up I know how to manipulate it in the right way.

I have even made the point of moving all theorems from closed to deduction form, as the application of those would be much simpler. You get free autocompletes and unifications, as the theorems are in a certain "normal form".
 



