Removing dependencies on some logically redundant axioms

Benoit

May 28, 2019, 11:47:27 AM
to Metamath
Hi all,

I wanted to draw attention to Wolf Lammen's recent work in the "existential uniqueness" section.  His proof of mo2v (compare it with mo2) removed the dependency on ax-11 and ax-13 from moim, mo2icl, and several other theorems depending on them.

This was made possible by his new proof of eu5 (compare eu5OLD), which removes the dependencies on ax-7, ax-10, ax-11, ax-12, and ax-13 from eu5 and from several theorems depending on it, including eumo.  (Also note that the new proof of eu5 uses no dummy variable; in fact, it only uses euex and propositional calculus.)

Probably the same thing can be done by proving some other "dv-versions" of existing theorems: mov, mo3v, eu1v, eu2v, eu3v, sb8euv, eumo0v (after checking that each "nf-version" indeed requires the additional axioms).  Similarly, proving mo4 directly (and not from mo4f) should remove some dependencies as well, including from eu4 (and maybe they could be relabelled mo4 / mo4v to follow the same pattern, at least in this section --- the pattern xxxf / xxx coexists in other sections).

Indeed, there is a common pattern: using a non-freeness $e-hypothesis instead of a dv condition requires ax-11 (typically through nfal) and ax-13 (typically through hbs1).  Wolf has certainly already seen that, but I wanted to give it more publicity by posting it on the discussion group.  I also think that in this case both the "nf-version" and the "dv-version" can coexist in the main part of set.mm, given that they are in a relatively early part of set.mm, that each has its use, that the nf-version is slightly more general, and that the dv-version requires fewer axioms.
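
To make the pattern concrete, the pair mo2 / mo2v is roughly of the following shape (I am quoting from memory, so the exact statements in set.mm may differ in inessential details; both versions also carry the usual $d x y condition):

mo2.1 $e |- F/ y ph $.
mo2   $p |- ( E* x ph <-> E. y A. x ( ph -> x = y ) ) $.

mo2v  $p |- ( E* x ph <-> E. y A. x ( ph -> x = y ) ) $.
      (same assertion, but with a dv condition on y and ph in place of the non-freeness hypothesis)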

Two remarks:
* I think that some dependencies on ax-12 can be removed (as well as some dependencies on ax-10) by adopting an alternate definition of non-freeness, see df-bj-nf and its comment, and compare bj-nfn with nfn.
* With the same goal as Wolf, I recently added vtoclg1f in order to remove dependency on ax-11 and ax-13 from the important vtoclg and several theorems depending on it (including sbc8g).

I'll be interested in your comments, especially on adopting the alternate definition for non-freeness.

Benoit

Mario Carneiro

May 28, 2019, 12:02:38 PM
to metamath
I'm okay with the alternative df-bj-nf definition. If the use of the defined symbol E. is undesirable, here are some more alternatives:

$a |- ( F/1 x ph <-> A. x ( ph -> A. x ph ) )
$a |- ( F/2 x ph <-> ( E. x ph -> A. x ph ) )
$a |- ( F/3 x ph <-> ( -. A. x ph -> A. x -. ph ) )
$a |- ( F/4 x ph <-> ( A. x ph \/ A. x -. ph ) )

F/1 is the original definition, F/2 is Benoit's. F/3 and F/4 are equivalent to F/2 up to df-ex and propositional logic. F/3 has the advantage that it uses only primitive symbols, and appears as a commutation. F/4 has fewer negations and is easy to understand in terms of ph being always true or always false. And F/2 has no negations and uses the dual quantifier instead.
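
To spell out the claimed equivalences: unfolding df-ex turns F/2 into ( -. A. x -. ph -> A. x ph ); taking the contrapositive and removing the double negation gives F/3; and rewriting the implication in F/3 as a disjunction gives F/4.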

Benoit

May 28, 2019, 12:10:24 PM
to Metamath
On Tuesday, May 28, 2019 at 6:02:38 PM UTC+2, Mario Carneiro wrote:
I'm okay with the alternative df-bj-nf definition. If the use of the defined symbol E. is undesirable, here are some more alternatives:

$a |- ( F/1 x ph <-> A. x ( ph -> A. x ph ) )
$a |- ( F/2 x ph <-> ( E. x ph -> A. x ph ) )
$a |- ( F/3 x ph <-> ( -. A. x ph -> A. x -. ph ) )
$a |- ( F/4 x ph <-> ( A. x ph \/ A. x -. ph ) )

F/1 is the original definition, F/2 is Benoit's. F/3 and F/4 are equivalent to F/2 up to df-ex and propositional logic. F/3 has the advantage that it uses only primitive symbols, and appears as a commutation. F/4 has fewer negations and is easy to understand in terms of ph being always true or always false. And F/2 has no negations and uses the dual quantifier instead.


Thanks Mario.  The form F/4 is already there as bj-nf3; the form F/3 is interesting and I will add it, together with your remarks on comparative advantages.

Norman Megill

May 28, 2019, 6:28:40 PM
to Metamath
On Tuesday, May 28, 2019 at 12:10:24 PM UTC-4, Benoit wrote:
On Tuesday, May 28, 2019 at 6:02:38 PM UTC+2, Mario Carneiro wrote:
I'm okay with the alternative df-bj-nf definition. If the use of the defined symbol E. is undesirable, here are some more alternatives:

$a |- ( F/1 x ph <-> A. x ( ph -> A. x ph ) )
$a |- ( F/2 x ph <-> ( E. x ph -> A. x ph ) )
$a |- ( F/3 x ph <-> ( -. A. x ph -> A. x -. ph ) )
$a |- ( F/4 x ph <-> ( A. x ph \/ A. x -. ph ) )

I think my preference for the official definition would be F/2 or F/4, then F/1, then F/3.  F/1 (our current definition) has the slight inelegance of nested quantification, and F/3 is somewhat non-intuitive (to me).

If you want to replace our official df-nf with F/2 or F/4, that's OK with me.  I'm not bothered by the use of df-ex but F/4 seems the "most" intuitive to me.

Norm
 

Mario Carneiro

May 28, 2019, 6:56:10 PM
to metamath
Given Norm's preferences, I will put my vote in for F/4. It's not very intuitionistically satisfying, but that's a problem for another database. (I would use F/2 in iset.mm.)

Benoit

May 28, 2019, 7:04:55 PM
to Metamath
I was considering the same thing... Intuitionistically, one has

( A. x ph \/ A. x -. ph ) => ( E. x ph -> A. x ph ) => ( -. A. x ph -> A. x -. ph )

To me, this was an argument that ( A. x ph \/ A. x -. ph ) is too strong, and that the correct characterization of "non-free" should be ( E. x ph -> A. x ph ).  I still have to think about it, though...  Do people who have some iset.mm experience see a practical advantage to either?
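
(A quick sketch of why both implications hold intuitionistically: for the first, the disjunct A. x ph gives the conclusion directly, while the disjunct A. x -. ph contradicts E. x ph, so A. x ph follows by ex falso; for the second, from ( E. x ph -> A. x ph ) and -. A. x ph one gets -. E. x ph by modus tollens, and -. E. x ph -> A. x -. ph is one of the intuitionistically valid quantifier De Morgan directions.)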

Benoit

May 28, 2019, 7:08:05 PM
to Metamath
Sorry, I misread Mario's post.  It looks like we agree on F/2 from an intuitionistic point of view.  Then why not take the same definition for set.mm?

In any case, I will not do this right now and I would first write a draft in my mathbox before making any change.

Mario Carneiro

May 28, 2019, 7:11:38 PM
to metamath
I think you are right - the appropriate intuitionistic version is F/2, which is equivalent to F/1 so is least disruptive. But in set.mm, we're being classical anyway so if the "naive" reading is simpler (always true or always false) then that seems best. The intuitionistic version is a little lopsided - if it's sometimes true then it's always true - not as easy to grasp. The best intuitive version is A. x A. y ph(x) <-> ph(y) but that requires more axioms than we are assuming at this point.

David A. Wheeler

May 28, 2019, 10:11:38 PM
to metamath, metamath
On Tue, 28 May 2019 19:11:24 -0400, Mario Carneiro <di....@gmail.com> wrote:
> I think you are right - the appropriate intuitionistic version is F/2,
> which is equivalent to F/1 so is least disruptive. But in set.mm, we're
> being classical anyway so if the "naive" reading is simpler (always true or
> always false) then that seems best.

In cases like this, where there's not a "really obviously winning definition",
I'd prefer to use a definition that would also work in intuitionistic logic (iset.mm).

That would make the iset.mm work a little easier, and there's also the
argument that if it's still valid in intuitionistic logic (which is strictly weaker)
then it's a definition that makes "fewer" assumptions.
That would move this to F/2.

I don't want to go too far with this idea, because there are lots of cases where
that's too hard / not worth it. But if it's relatively painless, then using the
definition that ports more easily seems reasonable. You can always prove
the equivalence of one definition with another, as we've done in many other cases.

--- David A. Wheeler

Jim Kingdon

May 28, 2019, 10:47:28 PM
to Benoit, Metamath
I don't really have an opinion about whether set.mm and iset.mm should use the same definition as each other or not, but as for the options in iset.mm, we have http://us2.metamath.org/ileuni/nf3.html http://us2.metamath.org/ileuni/nf2.html and http://us2.metamath.org/ileuni/df-nf.html . Since we've shown these are equivalent in iset.mm, there isn't a practical difference between them (given the current axioms, anyway).

The nf4 one only holds in one direction: http://us2.metamath.org/ileuni/nf4r.html

To the extent that this is all about what the axioms are and which axioms are needed for what, I think iset.mm has been much less explored than set.mm that way. I wouldn't be surprised if we still have redundant predicate logic axioms (even after having eliminated a few), for example.

ookami

May 30, 2019, 1:29:13 PM
to Metamath
Since my name is mentioned in the opening post, I am going to give a quick answer here:

On Tuesday, May 28, 2019 at 5:47:27 PM UTC+2, Benoit wrote:
...
Indeed, there is a common pattern: using a non-freeness $e-hypothesis instead of a dv condition requires ax-11 (typically through nfal) and ax-13 (typically through hbs1).  Wolf has certainly already seen that, but I wanted to give it more publicity by posting it on the discussion group.  I also think that in this case both the "nf-version" and the "dv-version" can coexist in the main part of set.mm, given that they are in a relatively early part of set.mm, that each has its use, that the nf-version is slightly more general, and that the dv-version requires fewer axioms.

You are right, I know that nfal is a key theorem injecting a dependency on ax-11.  I avoid it as much as possible.  As far as ax-13 is concerned, almost all sb theorems use this axiom.  This is necessary because of the 'bundling' nature of df-sb.  I use the tautology [ y / x ] T. as a reminder of this fact.  Note that T. is free of both x and y, but this does not help as long as you do not know that the variables x and y in the substitution are distinct.  I think one has to derive a complete theory of a 'substitution light', defined as A. x ( x = y -> ph ) or E. x ( x = y /\ ph ) (I don't know which to prefer) together with a dv condition on x and y, in order to see how far one gets without ax-13.
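
Just to fix ideas, such a 'substitution light' could be set up as follows (the bracket token and the label below are made up for this message, not actual set.mm syntax):

$d x y $.
df-sbv $a |- ( [ y / x ]v ph <-> A. x ( x = y -> ph ) ) $.

Under the $d x y condition, the A.-form and the E.-form are provably equivalent (if I remember the label correctly, that equivalence is what sb56 states), so the choice between the two shapes should mostly be a matter of proof convenience.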

Two remarks:
* I think that some dependencies on ax-12 can be removed (as well as some dependencies on ax-10) by adopting an alternate definition of non-freeness, see df-bj-nf and its comment, and compare bj-nfn with nfn.
* With the same goal as Wolf, I recently added vtoclg1f in order to remove dependency on ax-11 and ax-13 from the important vtoclg and several theorems depending on it (including sbc8g).

Yes, nf2 is a good replacement for df-nf.

Benoit

While I am at it, I'd like to point out that the propositional section is in some sense incomplete. I have known for some years that all propositional expressions can be dissected into implication chains like

( ph -> ( ps -> ( ... -> ka ) ... ) )

where each node in the chain is either a wff variable or the negation thereof.  An expression like ( ( ph -> ps ) -> ch ) can map to more than one such chain (here ( ps -> ch ) and ( -. ph -> ch )).  In any case, this process is reversible (see ja, bija for example).  This lets such chains serve as normalized expressions covering the whole of propositional logic.  A few basic operations on chains, like reordering nodes (com12, con1i), adding nodes (a1i), or dropping nodes (syllogisms, pm2.18, pm2.61), are sufficient to do all you need in propositional logic.  Why do I emphasize this?  Should you ever strive for automated theorem proving, such normalized chains are a good operational basis.
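
As a toy illustration of this normalization, here is a small Python sketch (my own throwaway code, unrelated to any Metamath tool; the tuple encoding of wffs is made up for the example):

# Wffs are nested tuples such as ('->', ('->', 'ph', 'ps'), 'ch')
# for ( ( ph -> ps ) -> ch ).

def nnf(f, positive=True):
    """Push negations down to the propositional variables (negation normal form)."""
    if isinstance(f, str):                        # a propositional variable
        return f if positive else ('-.', f)
    op = f[0]
    if op == '-.':
        return nnf(f[1], not positive)
    if op == '->':                                # ( a -> b ) is ( -. a \/ b )
        if positive:
            return ('\\/', nnf(f[1], False), nnf(f[2], True))
        return ('/\\', nnf(f[1], True), nnf(f[2], False))
    # op is /\ or \/ ; under a negation the two are swapped (De Morgan)
    newop = op if positive else ('\\/' if op == '/\\' else '/\\')
    return (newop, nnf(f[1], positive), nnf(f[2], positive))

def clauses(f):
    """CNF clauses (lists of literals) of an NNF formula, by naive distribution."""
    if isinstance(f, str) or f[0] == '-.':
        return [[f]]
    if f[0] == '/\\':
        return clauses(f[1]) + clauses(f[2])
    return [a + b for a in clauses(f[1]) for b in clauses(f[2])]   # the \/ case

def as_chain(clause):
    """Read a clause as an implication chain; each node is a variable or its negation."""
    negate = lambda lit: lit[1] if isinstance(lit, tuple) else ('-.', lit)
    chain = clause[-1]                            # the last literal becomes the head
    for lit in reversed(clause[:-1]):
        chain = ('->', negate(lit), chain)
    return chain

def show(f):
    """Render a wff in the usual ASCII notation."""
    if isinstance(f, str):
        return f
    if f[0] == '-.':
        return '-. ' + show(f[1])
    return '( %s %s %s )' % (show(f[1]), f[0], show(f[2]))

wff = ('->', ('->', 'ph', 'ps'), 'ch')            # ( ( ph -> ps ) -> ch )
for c in clauses(nnf(wff)):
    print(show(as_chain(c)))                      # ( -. ph -> ch )  and  ( ps -> ch )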

Wolf

Mario Carneiro

May 30, 2019, 1:45:20 PM
to metamath
On Thu, May 30, 2019 at 1:29 PM 'ookami' via Metamath <meta...@googlegroups.com> wrote:
While I am at it, I'd like to point out that the propositional section is in some sense incomplete. I have known for some years that all propositional expressions can be dissected into implication chains like

( ph -> ( ps -> ( ... -> ka ) ... ) )

where each node in the chain is either a wff variable or the negation thereof.  An expression like ( ( ph -> ps ) -> ch ) can map to more than one such chain (here ( ps -> ch ) and ( -. ph -> ch )).  In any case, this process is reversible (see ja, bija for example).  This lets such chains serve as normalized expressions covering the whole of propositional logic.  A few basic operations on chains, like reordering nodes (com12, con1i), adding nodes (a1i), or dropping nodes (syllogisms, pm2.18, pm2.61), are sufficient to do all you need in propositional logic.  Why do I emphasize this?  Should you ever strive for automated theorem proving, such normalized chains are a good operational basis.

In the literature, this is usually called conjunctive normal form (CNF), where the implication chain is replaced by a disjunction ( -. ph \/ -. ps \/ ... \/ ka ). Most SAT solvers are based on this representation. The particular form that usually comes up in deduction proofs is called a Horn clause, in which at most one disjunct is a plain (un-negated) variable and all the others are negations. Such a clause can be written as ( ph -> ( ps -> ... -> ka ) ) or ( ( ph /\ ps /\ ... ) -> ka ), and in this case there is actually an efficient algorithm for SAT (essentially unit propagation). The general case requires lots of case splits and is worst-case exponential.
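
For what it's worth, here is a minimal Python sketch of that decision procedure for Horn clauses (my own toy code, with an invented clause format; the textbook linear-time algorithm keeps a counter per clause instead of rescanning):

# A Horn clause ( ( p1 /\ ... /\ pn ) -> q ) is a pair (frozenset of premises, head);
# a head of None stands for "false", i.e. a clause with no positive literal.

def horn_sat(horn_clauses):
    """Return the minimal set of variables forced true, or None if unsatisfiable."""
    true = set()
    changed = True
    while changed:                                 # simple quadratic fixpoint loop
        changed = False
        for premises, head in horn_clauses:
            if premises <= true:
                if head is None:                   # all premises hold, head is false
                    return None                    # unsatisfiable
                if head not in true:
                    true.add(head)                 # unit propagation
                    changed = True
    return true                                    # everything not forced stays false

example = [
    (frozenset(), 'ph'),                           # ph
    (frozenset({'ph'}), 'ps'),                     # ( ph -> ps )
    (frozenset({'ph', 'ps'}), 'ch'),               # ( ( ph /\ ps ) -> ch )
    (frozenset({'ch'}), None),                     # ( -. ch )
]
print(horn_sat(example))                           # None: this set is unsatisfiable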

Mario
 

Giovanni Mascellani

May 30, 2019, 2:29:42 PM
to meta...@googlegroups.com
Hi,

On 30/05/19 19:45, Mario Carneiro wrote:
> In the literature, this is usually called conjunctive normal form
> (CNF), where the implication chain is replaced by a disjunction
> ( -. ph \/ -. ps \/ ... \/ ka ). Most SAT solvers are based on this
> representation. The particular form that usually comes up in deduction
> proofs is called a Horn clause, in which at most one disjunct is a
> plain (un-negated) variable and all the others are negations. Such a
> clause can be written as ( ph -> ( ps -> ... -> ka ) ) or
> ( ( ph /\ ps /\ ... ) -> ka ), and in this case there is actually an
> efficient algorithm for SAT (essentially unit propagation). The general
> case requires lots of case splits and is worst-case exponential.

BTW, one of the strategies implemented by my program mmpp is actually
based on this construction. Most of the absurd mess happening in the
proof of [1] is actually an instance of a SAT proof converted by mmpp to
Metamath. The conversion from a generic WFF to CNF is done via the
Tseitin transformation (which is not the only possibility, but, unlike
the "obvious" recursive algorithm, it has the advantage that the size of
the generated CNF is linear, rather than exponential, in the length of
the original formula).

[1] http://us.metamath.org/mpeuni/sbcom3OLD.html
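
For readers who have not met it, here is a rough toy sketch of the Tseitin encoding in Python (my own code, not mmpp's; each compound subformula gets a fresh DIMACS-style label variable, and a handful of clauses per subformula keep the output equisatisfiable with, though not equivalent to, the input):

from itertools import count

class Tseitin:
    """Toy Tseitin encoder: wffs are nested tuples, output is DIMACS-style clauses."""

    def __init__(self):
        self.ids = {}                  # variable name -> positive integer
        self.fresh = count(1)          # shared counter for input variables and labels
        self.clauses = []              # list of clauses, each a list of integers

    def var(self, name):
        if name not in self.ids:
            self.ids[name] = next(self.fresh)
        return self.ids[name]

    def lit(self, f):
        """Return an integer literal labelling wff f, emitting its defining clauses."""
        if isinstance(f, str):
            return self.var(f)
        if f[0] == '-.':
            return -self.lit(f[1])
        a, b = self.lit(f[1]), self.lit(f[2])
        x = next(self.fresh)           # fresh label for this compound subformula
        if f[0] == '->':               # clauses for x <-> ( a -> b )
            self.clauses += [[-x, -a, b], [a, x], [-b, x]]
        elif f[0] == '/\\':            # clauses for x <-> ( a /\ b )
            self.clauses += [[-x, a], [-x, b], [-a, -b, x]]
        elif f[0] == '\\/':            # clauses for x <-> ( a \/ b )
            self.clauses += [[-x, a, b], [-a, x], [-b, x]]
        else:
            raise ValueError('unknown connective: %r' % (f[0],))
        return x

    def cnf(self, f):
        """Clauses equisatisfiable with f: the defining clauses plus f's own label."""
        self.clauses.append([self.lit(f)])
        return self.clauses

enc = Tseitin()
for clause in enc.cnf(('->', ('->', 'ph', 'ps'), 'ch')):   # ( ( ph -> ps ) -> ch )
    print(clause)                      # 3 + 3 + 1 clauses, linear in the formula size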

Giovanni.
--
Giovanni Mascellani <g.masc...@gmail.com>
Postdoc researcher - Université Libre de Bruxelles