Hi Mario,
I did an experiment that seems applicable to what we discussed last summer, which is to maximally unbundle all set variables for easier automated translation of set.mm to another FOL language. I think you were experimenting with that in ax-7d through ax-11d.
I made every set variable in set.mm distinct from every other globally, for all axioms and all theorems, and ran 'verify proof *' to see what failed. I ignored errors caused by $d's on weq, wel, wsb (due to the global scope of my new $d's). There were 731 error messages in 424 proofs, all of them of course "variable ... in common" failures. There were 77 previously-bundled axioms and theorems whose $d violations caused these errors.
To fix these, we would provide alternate versions of the 77 violated statements, reproved with bundled variables made the same, such as `-. A. x -. x = x` (call it, say, ax9-same-xy) to eliminate the ax-9 violations. Then we would do 'minimize ax9-same-xy/allow' or equivalent to replace ax-9 in proofs where it now causes a failure. (For this to always work, I would have to add an option to 'minimize' to ignore $d failures when there are multiple previously-bundled statements causing them.)
Some cases, like ax9-same-xy, can be proved as theorems; others might need additional axioms.
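For concreteness, here is roughly what the unbundled ax-9 and its collapsed companion might look like in .mm syntax (the label ax-9v and the exact placement are my sketch, not settled choices):

  ${ $d x y $.
     ax-9v $a |- -. A. x -. x = y $.
  $}
  $( the instance with x and y identified, provided separately $)
  ax9-same-xy $a |- -. A. x -. x = x $.

(As noted above, ax9-same-xy happens to be provable as a theorem, so the $a here is only for illustration.)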
So unless I'm overlooking something, it shouldn't be an insurmountable task to obtain universal $d's on set variables.
Here are the 77 bundled statements that cause failures when universal $d's are added, sorted in set.mm order:
ax-8 ax-13 ax-14 ax-12o ax10lem27 ax-9 hbequid ax67 ax467 19.2 ax-9o equtr equequ1 equequ2 elequ1 elequ2 hbae hbae-o hbnae hbnae-o hbnaes dral2 dral2-o drex2 cbv1 drsb1 sbequ12 sbequ5 aev aev-o drsb2 a16gb df-clab hbab eqsb3 hbeq hbel cbvral2v hbeqd hbned hbsbcg sbcabel copab zfrepclf zfpair2 elopab epel dfid3 djudisj xporderlem rdglem1 eqerlem cbvixp pwfilem fin23lem32 suprlub suprleub infmrgelb btwnz isfild metequiv evthicc norm1exi norm1hex elmzpcl ax4567 bnj1156 bnj1309 bnj1307 bnj1326 bnj1329 bnj1418 ax9lem1 cdlemk22 erngdvlem4 erngdvlem4-rN cdlemn2a
I would not want to do this to the standard set.mm but instead to a modified version specifically for a project requiring unbundling. It might be possible to keep it in sync with 3-way merges.
As for wff and class variables, I don't know too much about what features other proof languages have, but I think some (maybe Isabelle?) can have predicates and terms with fixed arguments, such as phi(x) shown for ax-sep in http://us.metamath.org/mpegif/ax-sep.html .
Feel free to post publicly if you want.
Norm
On 2/15/16 1:53 PM, Mario Carneiro wrote:
Could you make a branch off of develop for this test? That way we can play around with it and if necessary split it off to its own file later.
It is a rather trivial change; it just adds 3 $d statements:
$d x y z w v u t $. above wal
$d f g x y z w v u t $. in cbvex4v (for the local f,g)
$d a b c d e f g h i j k l m n o p q r s t u v w x y z $. above df-clab
A final version would omit $d's on wel, weq, wsb, maybe by moving those theorems up.
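Concretely, the first of these would look something like the following in the .mm source (my sketch; wal is set.mm's actual syntax axiom, but the exact variable roster would track whatever set variables are declared):

  $d x y z w v u t $.
  wal $a wff A. x ph $.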
Question: Would this actually be used, or is it just hypothetical? In particular, are there other unresolved issues that would prevent the translation(s) you have in mind? I recall we had talked about ax-gen and others where bundling can occur as a side effect of substitutions.
I did this experiment because it was quick and easy, just to get an idea of what would be involved, and was pleasantly surprised that it wasn't as bad as I feared. But it would still take a few days of work, which I wouldn't want to spend if it will never be used for anything. Or perhaps it would make sense to postpone it until we're closer to using it and understand the issues more clearly.
// set variables
set: type
// well-formed formulas
wff: type
// all variables (disjoint union of set and wff)
var: type
set-var: set => var
wff-var: wff => var
// connectives
weq: set => set => wff # 1 = 2 prec 10
wel: set => set => wff # 1 e. 2 prec 10
wi: wff => wff => wff # 1 -> 2 prec 5
wn: wff => wff # -. 1 prec 5
wal: wff => set => wff # A. 2 1 prec 5
// disjoint variable pair judgment
djset: set => var => type # $d 1 2 prec 0
// provability judgment
proof: wff => type # |- 1 prec 0
// prop calc
ax-1: {ph:wff,ps:wff} |- ( ph -> ( ps -> ph ) )
ax-2: {ph:wff,ps:wff,ch:wff} |- ( ( ph -> ( ps -> ch ) ) -> ( ( ph -> ps ) -> ( ph -> ch ) ) )
ax-3: {ph:wff,ps:wff} |- ( ( -. ph -> -. ps ) -> ( ps -> ph ) )
ax-mp: {ph:wff,ps:wff} |- ph => |- ( ph -> ps ) => |- ps
// Example proof
a1i: {ph:wff,ps:wff} |- ph => |- ( ps -> ph )
= [ph][ps][H] ax-mp ph (ps -> ph) H (ax-1 ph ps)
// pred calc
ax-5: {ph:wff,ps:wff,x:set} |- ( A. x ( ph -> ps ) -> ( A. x ph -> A. x ps ) )
ax-6: {ph:wff,x:set} |- ( -. A. x ph -> A. x -. A. x ph )
ax-7: {ph:wff,x:set,y:set} |- ( A. x A. y ph -> A. y A. x ph )
ax-gen: {ph:wff,x:set} |- ph => |- A. x ph
ax-8: {x:set,y:set,z:set} |- ( x = y -> ( x = z -> y = z ) )
ax-9: {x:set,y:set} |- -. A. x -. x = y
ax-11: {ph:wff,x:set,y:set} |- ( x = y -> ( A. y ph -> A. x ( x = y -> ph ) ) )
ax-12: {x:set,y:set,z:set} |- ( -. x = y -> ( y = z -> A. x y = z ) )
ax-13: {x:set,y:set,z:set} |- ( x = y -> ( x e. z -> y e. z ) )
ax-14: {x:set,y:set,z:set} |- ( x = y -> ( z e. x -> z e. y ) )
ax-17: {ph:wff,x:set} $d x ph => |- ( ph -> A. x ph )
// Tricky: forget dummy vars. The hypothesis "fresh x s" means that x is not
// any of the set variables in s.
// Inductive definition for a list of variables
vars: type
vars0: vars # [] prec 10
varsp1: var => vars => vars # 1 . 2 prec 10
// I omit the inductive definitions of these operations on vars, treated as
// sets of elements
in: var => vars => type
subset: vars => vars => type
union: vars => vars => vars
varsin: var => vars
varsin-set: {x:set} varsin x = x . []
varsin-weq: {x:set,y:set} varsin (x = y) = (varsin x) union (varsin y)
varsin-wel: {x:set,y:set} varsin (x e. y) = (varsin x) union (varsin y)
varsin-wi: {ph:wff,ps:wff} varsin (ph -> ps) = (varsin ph) union (varsin ps)
varsin-wn: {ph:wff} varsin (-. ph) = varsin ph
varsin-wal: {ph:wff,x:set} varsin (A. x ph) = (varsin x) union (varsin ph)
// deduce disjointness
dj: {x:set,ph:var} ({a:var} a in (varsin ph) => $d x a) => $d x ph
fresh: set => vars => type
freshdj: {x:set,y:var,s:vars} fresh x s => y in s => $d x y
forget: {ph:wff,x:set,s:vars} not (x in s) =>
((varsin ph) subset s) => (fresh x s => |- ph) => |- ph
I can't resist. I didn't know Lean. I've come across this site. If this person used set.mm, he would know the conjecture he is trying to prove is not a theorem :-)
lemma inconsistent : false :=
assert h : 0 == 0 → (∀ x, x == 0), from (lma num (λ n, n==0) 0),
assert e : 1 == 0, begin
refine (h _ 1),
reflexivity
end,
show false, by cases e
Strictly speaking, this is not necessarily true, and making this approach rigorous would require infinitary logic.
In particular, Lean (usually) uses constructive logic, so a simple truth table may not always be enough to prove a tautology.
Your breakdown assumes that x = 0 -> F. evaluates to F., but if the antecedent is also false it might still be provable.
But I agree that my explanation has as little rigour as his own.
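For example (my sketch in Lean 3 syntax, not code from the thread): Peirce's law passes every truth table, yet its Lean proof needs the classical excluded middle:

-- Peirce's law: classically a tautology, but not provable constructively.
theorem peirce (p q : Prop) : ((p → q) → p) → p :=
λ h, (classical.em p).elim id (λ hnp, h (λ hp, absurd hp hnp))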
Thanks, Giovanni.
--
Giovanni Mascellani <g.masc...@gmail.com>
Postdoc researcher - Université Libre de Bruxelles
On Sun, 21 Apr 2019 18:33:23 -0700 (PDT), Norman Megill <n...@alum.mit.edu> wrote:
> ... Of course Metamath
> proofs are not formal proofs as defined in textbooks (where proof steps are
> object-language instantiations of the axiom and inference rule schemes) ...
I disagree.
Metamath proofs absolutely *are* formal proofs using the usual definition.
"A formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference."
https://en.wikipedia.org/wiki/Formal_proof and The Cambridge Dictionary of Philosophy, deduction
Metamath proofs completely meet that definition, full stop, no qualifiers.
It's true that most computer systems for formalized math only work at
object-language instantiations. HOWEVER, I think mathematicians don't actually
take that fact all that seriously. For example, let's say someone proved this claim:
∀ x p(x) # CLAIM 1
Imagine someone who said, "Ah, but that's totally different than what I wanted
to prove, because that claim does not prove the following":
∀ y p(y) # CLAIM 2
If you *seriously* believe that your system only works at the object-language instantiation
level, then strictly speaking that is correct. After all, you have to take an extra step to
convert claim 1 into claim 2. But that's absurd. NO ONE would take that seriously.
Once you prove claim 1, no one would bother to try to prove claim 2 unless their tool required it.
I think that while mathematical books & tools often formalize only the object level,
in *practice* mathematicians actually work more at the metamath level.
That is, they presume that you can swap out variables (subject to certain constraints)
and that those expressions in the two claims prove "the same" thing.
> Now I'm not saying that set.mm would become much larger if we unbundle the
> axioms. Practically it probably wouldn't, as our experiments have shown.
> But doing this seems like we are artificially forcing setvar metavariables
> to pretend to be object-language variables, which they are not.
I don't think Mario is claiming that they are the same thing.
Instead, I think what he's arguing for is to intentionally use forms that
are easier to convert to other systems & back, and are easier for other
mathematicians to reason about.
Intentionally limiting the use of some capability,
to gain some other advantage, seems very much worth considering.
I'd like to see it considered further, in particular, I'd like to see an example or two.
> I understand the desire or need to have an unbundled version of set.mm that
> emulates object-language variables in order to provide a practical
> interface to other proof languages. However IMO the proper way to do this
> would be with an automated tool that transforms the existing set.mm into
> another .mm file with the required unbundling. Even if it results in a few
> "silly" proofs, this shouldn't matter because essentially it would just be
> an intermediate file in a longer process.
If that's easy, that'd be great.
But if that's not easy, I can see a real advantage to modifying set.mm to make it
easier to import/export proofs with other systems.
One advantage of providing unbundled forms, even if there's an automated process,
is that it'd make the correspondence easier to understand.
Another *potential* advantage would be if it made
the system more acceptable to more mathematicians, by making "most expressions"
more like what they're familiar with.
I think it'd be key to understand what the result would be from Mario's idea.
Axiom ax-7 is probably a good case in point. Would the idea be to create multiple
axioms for each unbundled variant? Or create the bundled form, prove
all the unbundled variants, and then mark the bundled version as
"Further usage is discouraged"? What would the impact be?
There are a *lot* of words expended to try to help people understand bundling.
It's powerful, and allows for some simplifications, but it's clearly confusing to many people.
If *not* using that capability results in a system that's easier to use
and import/export, then those are good arguments for reconsideration.
It's true that Metamath variables aren't the object variables. But in *practice*
mathematicians use an object system for "x", and mentally note that that
really applies regardless of what the variable really is... which is closer to what
Metamath does in the first place :-). If by no longer using bundling
the system would be easier to understand, accept, and import/export, then
we should think about doing so. Bundling doesn't seem as powerful in *practice*
as these other potential advantages (though the proof is in the pudding).
On Monday, April 22, 2019 at 1:03:43 PM UTC-4, David A. Wheeler wrote:
On Sun, 21 Apr 2019 18:33:23 -0700 (PDT), Norman Megill wrote:
> ... Of course Metamath
> proofs are not formal proofs as defined in textbooks (where proof steps are
> object-language instantiations of the axiom and inference rule schemes) ...
I disagree.
Metamath proofs absolutely *are* formal proofs using the usual definition.
"A formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference."
https://en.wikipedia.org/wiki/Formal_proof and The Cambridge Dictionary of Philosophy, deduction
Metamath proofs completely meet that definition, full stop, no qualifiers.
The key phrase is "each of which is an axiom...", not "each of which is an axiom scheme...". The "axioms" in set.mm correspond to axiom schemes in first-order logic: http://us.metamath.org/mpeuni/mmset.html#schemes
On the other hand, Metamath can be considered its own formal language, and from that point of view it does display formal proofs in that formal language. But they aren't, strictly speaking, formal proofs in FOL.
I agree the distinction is somewhat pedantic, but I think it is useful to understand the difference, which I don't think is hard to grasp.
If that's easy, that'd be great.
But if that's not easy, I can see a real advantage to modifying set.mm to make it
easier to import/export proofs with other systems.
The algorithm to do that should be straightforward, if somewhat tedious to write if we want to discard unbundled instances that aren't used, preserve the niceties of set.mm formatting, and so on. But once written it should run very fast.
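To illustrate with one of the 77 statements listed earlier, the tool's output for the bundled equtr might look something like this (the labels equtrv and equtr-xz are hypothetical, and the proofs are elided):

  ${ $d x y $.  $d y z $.  $d x z $.
     equtrv $p |- ( x = y -> ( y = z -> x = z ) ) $= ... $.
  $}
  ${ $d x y $.
     equtr-xz $p |- ( x = y -> ( y = x -> x = x ) ) $= ... $.
  $}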
Going back from unbundled to bundled, though, is nontrivial, and all the work doing that would be lost if we adopted the longer unbundled version.
The only reason I can see to unbundle is to emulate object-language variables for easier export of set.mm proofs. Actually, I would think that translating theorems/proofs that have wff variables would be more problematic than the (somewhat straightforward) issue of unbundling, because no other language AFAIK has the general-purpose wff variables that set.mm does. Mario has probably thought about that and may wish to comment.
For importing proofs, unbundled theorems would in principle automatically work and can be just added to set.mm. A more serious problem with importing proofs is that AFAIK no other prover can output formal proofs, and an automated conversion of steps in their internal algorithms to formal proofs may lead to astronomically long proofs. I don't know what the solution to that is. At this point, proofs in other languages can only be used as hints to create a Metamath version manually.
There are a *lot* of words expended to try to help people understand bundling.
It's powerful, and allows for some simplifications, but it's clearly confusing to many people.
I don't see how it is confusing provided you spend 5 minutes reading my link above. Certainly that is a far less onerous prerequisite than for almost every other proof language.
I think, and I believe this is the correct reading of Norman Megill, that when he writes "object language" he doesn't mean models and interpretations, but effectively another formal language, which houses the "real" proofs.
That this understanding makes a difference can easily be seen with variables of type wff. Some theorem provers **cannot** produce a theorem of the form:
⊢ (𝜑 → ((𝜑 → 𝜓) → 𝜓))
http://us.metamath.org/mpeuni/pm2.27.html
For example, a resolution theorem prover, where every literal must carry a predicate name and arity, strictly speaking cannot produce such a result.
On Tue, Apr 23, 2019 at 12:14:39PM -0700, Norman Megill wrote:
> On Monday, April 22, 2019 at 1:03:43 PM UTC-4, David A. Wheeler wrote:
> There are a *lot* of words expended to try to help people understand
> bundling.
> It's powerful, and allows for some simplifications, but it's clearly
> confusing to many people.
> I don't see how it is confusing provided you spend 5 minutes reading my link
> above. Certainly that is a far less onerous prerequisite than for almost every
> other proof language.
Also, one should keep in mind that the set of people interested in formal mathematics at the level of computer-verified proofs tends to have a high tolerance for spending some time on pedantries.
I think throwing away the more powerful and harder proofs just to remove an automated translation layer is a bad idea.
On Mon, Apr 22, 2019 at 1:03 PM David A. Wheeler wrote:
On Sun, 21 Apr 2019 18:33:23 -0700 (PDT), Norman Megill wrote:
> ... Of course Metamath
> proofs are not formal proofs as defined in textbooks (where proof steps are
> object-language instantiations of the axiom and inference rule schemes) ...
I disagree.
Metamath proofs absolutely *are* formal proofs using the usual definition.
"A formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference."
https://en.wikipedia.org/wiki/Formal_proof and The Cambridge Dictionary of Philosophy, deduction
Metamath proofs completely meet that definition, full stop, no qualifiers.
I agree with David here, and I think this is an important point. Metamath is built on an intuition that it is formalizing metavariable mathematics, and it has theorems that are schemes over some unstated "object language".
There is a real impedance mismatch here with logicians, as I've tried to stress. A logician knows how variables work - you don't need to tell them. What's more, you show them a system where 99% of the time variables are acting just like the variables they know. And then in 1% of cases variables do this weird collapsing bound variable capture thing, and they are running for the door. My goal with the Metamath Zero variable rules is to capture the things about metamath variables that make substitution dead simple, while eliminating the weird variable capture thing that scares all the serious mathematicians away.
I think there is some great utility in treating variables the way metamath does, and direct substitution is an important factor in the blazing fast speed of metamath verification, but it's not good for the intuition. We need a good story for how we can go from a system that is fast to verify to a logic that is well understood and studied. Saying "Tarski did it" doesn't seem to cut it for most people.
The "new variable sort" is a very recent proposal.
On Tuesday, April 23, 2019 at 6:13:07 PM UTC-4, Mario Carneiro wrote:
On Mon, Apr 22, 2019 at 1:03 PM David A. Wheeler wrote:
On Sun, 21 Apr 2019 18:33:23 -0700 (PDT), Norman Megill wrote:
> ... Of course Metamath
> proofs are not formal proofs as defined in textbooks (where proof steps are
> object-language instantiations of the axiom and inference rule schemes) ...
I disagree.
Metamath proofs absolutely *are* formal proofs using the usual definition.
"A formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference."
https://en.wikipedia.org/wiki/Formal_proof and The Cambridge Dictionary of Philosophy, deduction
Metamath proofs completely meet that definition, full stop, no qualifiers.

I agree with David here, and I think this is an important point. Metamath is built on an intuition that it is formalizing metavariable mathematics, and it has theorems that are schemes over some unstated "object language".
Maybe I'm misunderstanding, but the intended object language isn't unstated; it is described in English in mmset.html. We define object-language wffs as expressions built from variables v1,v2,... and connectives ->, -., A., =, e. . The setvar and wff (meta)variables in set.mm range over these variables and formulas. It is no different from a logic textbook that says, "Let phi and psi be wffs. The axioms are all formulas of the following forms: (phi -> (psi -> phi)) [etc.]".
If what you mean is that the object language is not a formal part of the set.mm language, that is true, but there is no need because the object language is never used there.
I suppose we could extend the Metamath language with a mechanism to instantiate its schemes and proof steps into object-language formulas, but that seems unnecessarily pedantic since it is obvious to anyone who has read the English-language description and looked at its examples. It would just be needless complexity that probably no one would ever use.
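For instance (my illustration, using the object variables v1, v2, ... from the mmset.html description), one object-language instance of the ax-1 scheme ( ph -> ( ps -> ph ) ) is

  ( v1 = v2 -> ( v1 e. v3 -> v1 = v2 ) )

obtained by instantiating ph with v1 = v2 and ps with v1 e. v3; a single Metamath proof step stands for all such instances at once.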
There is a real impedance mismatch here with logicians, as I've tried to stress. A logician knows how variables work - you don't need to tell them. What's more, you show them a system where 99% of the time variables are acting just like the variables they know. And then in 1% of cases variables do this weird collapsing bound variable capture thing, and they are running for the door. My goal with the Metamath Zero variable rules is to capture the things about metamath variables that make substitution dead simple, while eliminating the weird variable capture thing that scares all the serious mathematicians away.
A logician should know the difference between metavariables and variables, and if not they'll have to spend 5 minutes reading the mmset.html link I provided. What is so difficult about that?
It is hard to believe that bundling "scares all the serious mathematicians away." Are they also scared away from the papers of Tarski and colleagues that use bundling freely? Bundling is a trivial concept and the whole issue seems to be making a mountain out of a molehill. The advanced theorems in set.mm that they would be interested in are unlikely to have bundling anyway, so if they are scared away it probably isn't because of that.
I think there is some great utility in treating variables the way metamath does, and direct substitution is an important factor in the blazing fast speed of metamath verification, but it's not good for the intuition. We need a good story for how we can go from a system that is fast to verify to a logic that is well understood and studied. Saying "Tarski did it" doesn't seem to cut it for most people.

Tarski was a well-known leader in the field who people take seriously.
But independently of that, I chose his system because I found it very compelling due to the extreme simplicity of its primitive notions. A reviewer of Tarski's 1965 paper (Shepherdson, JSL, 1974) said "the author's observations provide the most elegant and most simply describable axiom systems for predicate logic with identity".
The proposal to revamp set.mm to prohibit bundling, add a new variable sort for sets with additional axioms, and add a new collection of variable names for them, would change it from a simple and pure metalanguage to a kind of hybrid of metalanguage and quasi-object language that is more complex. To me that is not as elegant and is even somewhat pointless.
On Fri, 26 Apr 2019 05:14:05 -0700 (PDT), Norman Megill <n...@alum.mit.edu> wrote:
> It is hard to believe that bundling "scares all the serious mathematicians
> away." Are they also scared away from the papers of Tarski and colleagues
> that use bundling freely? Bundling is a trivial concept and the whole
> issue seems to be making a mountain out of a molehill. The advanced
> theorems in set.mm that they would be interested in are unlikely to have
> bundling anyway, so if they are scared away it probably isn't because of that.
I expect that "all" is somewhat hyperbolic, but I believe Mario's report of
his own experiences, so I believe it's true in at least *some* situations.
The "new variable sort" is a very recent proposal. I have serious concerns
about the first cut proposal (in particular all those "duplicate" symbols), so I'm
certainly not sold on that. On the other hand, it's
*already* the case that set variables, sets, and classes (which can be
proper classes) are handled differently in set.mm, so having a new typecode
for sets (different from setvar and class) is not insane on its face.
There is such a thing as "mainstream logic", and while I wasn't around when Tarski and contemporaries were developing these metavariable systems, I can say with some confidence that this is not mainstream logic anymore, if it ever was. People expect to see FOL, its subsystems, and extensions. There is nothing a priori wrong with investigating other things; indeed that's usually where all the interesting research directions are, but if you are trying to cater to the mainstream then this is a downside.
I don't think bound variables are simply metavariables that can be freely renamed. Renaming also has to obey some rules.
Renaming is alpha conversion, which, like beta conversion, is based on substitution, and consequently needs to be capture-avoiding. Here is an example of a theorem that goes wrong when an incorrect alpha conversion is done:
Well, I should write the proof more precisely:

---------------- (Reflexivity of =)
|- f(v0) = f(v0)
---------------------------- (Exists Intro)
|- exists v1 f(v0) = v1
-------------------------------------- (Forall Intro)
|- forall v0 exists v1 f(v0) = v1
I do not exclude the possibility that, if metavariables are used in the above proof in place of v0 and v1, or in any other proof with the same outcome, a side condition x <> y might result. Is this demonstrable in Metamath?
Ok, a different proof. And some curiosity: this reminds me of Chapter 14, "Propositional Quantifiers", by Dov M. Gabbay, with his exists p (p <-> A). How else do you introduce the exists quantifier in Metamath?
On Saturday, April 27, 2019 at 02:33:23 UTC+2, Mario Carneiro wrote:
This theorem is easily provable in metamath. Since f(v0) is a term, and set.mm's FOL= fragment doesn't have any terms, we need the extension to classes. Then we can write ( F ` x ) in place of f(v0). So the proof looks like this:

$d x y $.  $d y F $.
1::fvex      |- ( F ` x ) e. _V
2:1:isseti   |- E. y y = ( F ` x )
3:2:ax-gen   |- A. x E. y y = ( F ` x )

Because isseti has a DV condition between its two variables, disjointness of y and ( F ` x ) requires that y be disjoint from F and from x. So you can't substitute x and y for the same variable and thus prove |- A. x E. x x = ( F ` x ) using this theorem.
On Saturday, April 27, 2019 at 02:19:28 UTC+2, Mario Carneiro wrote:
On Fri, Apr 26, 2019 at 8:01 PM j4n bur53 <burs...@gmail.com> wrote:

> I don't think bound variables are simply metavariables that can be
> freely renamed. Renaming also has to obey some rules.
> Renaming is alpha conversion, which, like beta conversion,
> is based on substitution, and consequently needs to be capture-avoiding.
> Here is an example of a theorem that goes wrong when an incorrect
> alpha conversion is done:

I think this is a decent example of the kind of confusion that can result from misunderstanding how metamath variables work by using intuitions from FOL. In this case, the metamath theorem would have a disjoint variable condition between x and y that would prevent substitution of the same variable for both. So that FOL theorem is still expressible in metamath, but it has a proviso that is not present, or at least not explicit, in the FOL version: those variables can be "alpha renamed", but they can't be renamed to the same thing, which avoids the problem.

In metamath, substitution is always direct, and capture avoidance is achieved by disallowing any substitutions that would potentially be problematic with respect to bound variable capture.

Norm: I want to point out that Jan is using natural deduction in his example. When I was taught logic at CMU, they used natural deduction. My proof theory course used natural deduction for first order logic (along with many variations, like one-sided vs two-sided calculi). My modal logic course used a Hilbert calculus without explicit commitment to axioms ("all classical tautologies" was one of the schemata); we only briefly touched on first order logic. My model theory class, in a different department, used FOL (the proof theory wasn't important so it was not introduced). And of course the type theorists all use natural deduction and the simply typed lambda calculus to draw the Curry-Howard correspondence. My higher order typed compilation course used a variety of type systems, but variables are treated using de Bruijn indices and/or explicit substitution calculi. At least where I live, this is what "mainstream" looks like. Explicit variables are in the minority, Hilbert logics are in the minority, and metavariable logics are nonexistent except when I talk to people about metamath.

Mario
BTW: Metamath has a notable mention in this paper:

Substitutionless First-Order Logic: A Formal Soundness Proof
Andreas Halkjær From, John Bruntse Larsen, Anders Schlichtkrull and Jørgen Villadsen
DTU Compute, AlgoLoG, Technical University of Denmark, 2800 Kongens Lyngby, Denmark
https://files.sketis.net/Isabelle_Workshop_2018/Isabelle_2018_paper_3.pdf
On Saturday, April 27, 2019 at 02:01:54 UTC+2, j4n bur53 wrote:
I don't think bound variables are simply metavariables that can be freely renamed. Renaming also has to obey some rules.
Renaming is alpha conversion, which, like beta conversion, is based on substitution, and consequently needs to be capture-avoiding. Here is an example of a theorem that goes wrong when an incorrect alpha conversion is done:
FOL Theorem, provable in FOL:
forall x exists y f(x) = y
Here is a proof, in natural deduction:
---------------- (Reflexivity of =)
|- f(x) = f(x)
------------------------ (Exists Intro)
|- exists y f(x) = y
--------------------------------- (Forall Intro)
|- forall x exists y f(x) = y
And here is what happens if I blindly rename x to y:
|- forall y exists y f(y) = y
Which is no longer a theorem of FOL.
Maybe it's a quick rip-off to get a foot in the door quickly. Since I couldn't find anything longer than the abstract, and since I don't understand their GitHub, I stopped looking at it. In their abstract they refer to (Monk, 1965) as an early inspiration. Maybe one would need to start a study there...?
Ah, you are right, the C1-8 axioms are directly from Monk (1965). The three versions of C5 are needed because Monk's version is

C5  |- p → (∀x. p),  x e/ FV(p)

and they are trying to avoid the FV predicate. I'm not sure if the idea to combine the rules for negation and forall into one rule is original with metamath or is in one of these older logics. Norm will probably know.
Now these claims deeply confuse me.
Well, metamath is a metavariable system, but not a metavariable logic, since the metavariable logic itself is not formalized; it is only used. I guess what happens with variable disjointness conditions etc. is whatever the verifier does with a text based on the many keywords $c, $v, etc., so one would need to look at the verifier. Also, test cases are not really a formal specification. A formal specification would be some axioms and definitions of the metavariable logic in some logic, so as to make it a logic. Basically, all that would need to be formalized is the verifier itself to get a metavariable logic; maybe the Haskell implementation of the verifier can be used to produce a metavariable logic:
HmmImpl.hs:
{-
In the following definition we compute the DVRSet on top of the stack as
follows:
(1) First we collect all DVRs of the hypotheses that are popped off the
stack.
(2) We add the DVRs of the assertion that we are processing, after
applying the substitution to it.
(3) We select only those DVRs that occur in the new expression on top
of the stack.
-}
Maybe there are shortcuts, or it can be done indirectly. And maybe ultimately the metamath metavariable logic could be done directly inside metamath, kind of reflecting the metamath metavariable logic in metamath itself. Such things already have a long tradition. Like 10 years ago HOAS was in vogue; I don't know what the latest trend is. Ultimately a metavariable logic would maybe reify the DVRs somehow. Not sure. I know about another metavariable logic based on de Bruijn indices; there is a paper by Huet with what he calls "The Prism Theorem" in it. Just doing a fresh googling, I found this quite interesting paper, which I had not yet seen, and which shows various takes on metavariable logics; it might be more accessible than Huet's:

Proof Pearl: De Bruijn Terms Really Do Work
Michael Norrish and René Vestergaard
https://www.researchgate.net/publication/221302248_Proof_Pearl_De_Bruijn_Terms_Really_Do_Work

Does Metamath have such a metavariable logic specification, where the verifier itself is axiomatized?
But it could also be the case that metamath is somehow resilient, since in the other example Mario Carneiro used x e V, in the forall x exists y (f(x)=y) versus exists y (f(y)=y) example.
So if conditions such as x e V are still hanging around, then this rule does not apply:

(3) We select only those DVRs that occur in the new expression on top
    of the stack.
But I might also misinterpret (3). I interpret it as a kind
of garbage collection(*) of DVRs?
(*) Which makes it an interesting problem also for
system implementors, not only logicians.
On Tuesday, April 30, 2019 at 09:31:50 UTC+2, j4n bur53 wrote:

Coming from constraint solving, this step here in the verifier:

(3) We select only those DVRs that occur in the new expression on top
    of the stack.

could be problematic. I know some examples from SMT where this dropping of constraints gives wrong answers, false positives. Unfortunately it would take me some time to dig them out.
But I guess I have found an example outside SMT, and inside metamath,
which could show some pathological behaviour:
|- ALL(x):ALL(z):[{x}=z <=> ALL(u):[u ε z <=> x=u]]
--------------------------------------------------- (ALL Elim)
|- ALL(z):[{x}=z <=> ALL(u):[u ε z <=> x=u]]
-------------------------------------------- (ALL Elim)    ---------- (Refl)
|- {x}={x} <=> ALL(u):[u ε {x} <=> x=u]]                    |- {x}={x}
------------------------------------------------------------------- (MP)
|- ALL(u):[u ε {x} <=> x=u]]
----------------------------- (ALL Elim)
|- y ε {x} <=> x=y

The second (ALL Elim) step, where I use the substitution z --> {x}, would provoke a capture-avoiding variable constraint u not in x. But since u does not appear in the result y ε {x} <=> x=y, the constraint would be dropped. But what if, later in the derivation, u were nevertheless used again?
So a verified verifier could be useful. The verification of the verifier itself could fail because of such problematic cases.
Hi Mario,
On 03/05/19 10:06, Mario Carneiro wrote:
> Good news: The MM -> MM0 translation works! It produces unbundled copies
> of MM theorems and runs them successfully through the MM0 verifier. (So
> mm0-hs finally counts as another MM verifier, if in a very roundabout
> way.)
That's very nice news! :-)
I tried it, and I managed to do the translation and verification too with your Haskell program.
your Haskell program.
Since I believe that your MM0 specification might now be considered
nearly frozen (what is your opinion on this?),
I brushed off my project
of writing an independent verifier in G. In preparation for that I began
writing a Python implementation, to understand the specification and to
use it as a model for the G implementation. I'm working on a clone of
your repository which is here:
https://github.com/giomasce/mm0/blob/master/mm0-py/parser.py
A question: in the specs it is said that dummy variables are always bound. Does this mean that it is illegal to declare a dummy variable with round brackets, or that if a dummy variable is declared in round brackets it is automatically considered as declared in braces? If so, is there a difference between a dummy variable in round brackets and one in braces (i.e., "(.x : set)" vs "{.x : set}")? For example, in
examples/set.mm0 definition wtru has "(.p: wff)". Now, if p must be
considered bound, then it violates the requirement that a bound variable
cannot have a strict sort; if p must be considered unbound, then it
violates the requirement that a dummy variable must be bound. What am I
missing?
Regarding the G compiler from asmc, I wrote a simple emulator for a subspec of 32-bit x86 machine code that is enough to execute the asmc OS together with its G compiler and the code produced by the G compiler, and therefore ideally the MM0 verifier implementation in G once it exists. The emulator is about 1000 lines of rather clean and readable C++, so it is helpful for pinpointing exactly what asmc assumes from its platform. Therefore, if we want to prove correctness for asmc, G and the G MM0 checker, we know what we must assume as the execution environment, and it is reasonable (insofar as x86 can be reasonable).
Hi,
On 03/05/19 12:14, Mario Carneiro wrote:
> Dummy variables are always considered bound, regardless of the kind of
> brackets used. Both bracket kinds are legal but it makes no difference.
But then, why is this line in examples/set.mm0 valid?
def wtru (.p: wff): wff = $ p <-> p $;
Dummy variables are always bound, but bound variables cannot have strict
type, and wff is strict.
Hi Mario,
On 05/05/19 11:17, Mario Carneiro wrote:
> Given all these considerations, I think the most reasonable course of
> action is to just drop this whole corner of the language. It is slightly
> necessary for metamath definitions which is why I added it in the first
> place, but there are only a handful of occurrences of the technique -
> wtru is the primary (only?) example, and it's not worth inheriting
> metamath weirdness for the sake of one example. Most logicians will tell
> you that T. is not a definition, it is a primitive term constructor (or
> equivalently F. is primitive), and there is an easy argument that given
> only implication and negation there is no way to show that there are any
> ground wffs at all, so being able to produce one seems suspicious.
That's unfortunate (and I am curious about this easy argument that you
mention: do you have a reference for it, or can you repeat it on the
mailing list?), but it's true that dummy variables are probably mostly
used for set variables.
> So what is the consequence for the language? Let's drop the "free" sort
> modifier, and make it be always true. That is, definitions can't just
> drop variables like this. That means that wtru is no longer a valid
> definition. The MM -> MM0 translation is not affected because I haven't
> implemented definitions yet, but the fallback was always just to leave
> it as an axiom. (Getting all the things marked as "definitions" in MM to
> actually be definitions is a really really hard problem, requiring lots
> of heuristics and decision procedures, plus many $j annotations to do
> the proofs.) Dummy variables are treated as bound variables, so they
> cannot have strict type. I'll remove that example from set.mm0.
I see that in the specs you opted for a lighter version of this change,
retaining the "strict" specifier[1] and mandating that a dummy cannot be
strict (instead of dropping "strict" and dummies altogether). Is this
definitive?
Norm's decision to call df-tru a definition does not originate with him, of course - there are logicians who make such a definition. The usual way this is done is that we have a set of wffs, with a countable set of propositional variables, and we define T. to be either (1) p0 <-> p0, or (2) p <-> p for an arbitrary propositional variable p, and then show that the result doesn't depend on the choice.
The constant F. is used as a primitive in some propositional calculus axiomatizations, and for classical logic at least, I think -. phi is typically defined as (phi -> F.) early on and the F. abandoned for practical development.
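In Metamath notation, that F.-primitive style would look something like the following (df-not-alt is a hypothetical label; set.mm itself takes -. as primitive and goes the other way, defining F. from T. via df-fal):

  df-not-alt $a |- ( -. ph <-> ( ph -> F. ) ) $.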
On Mon, 6 May 2019 12:00:21 -0700 (PDT), Norman Megill wrote:
> Early on there were objections to df-tru because it has a "free" wff
> variable on one side that isn't on the other. Andrew Salmon felt that A. x
> x = x would be a better definition, but that would have required that
> df-tru be moved to the predicate calculus section.
>
> I have no fundamental objection to defining T. as A. x x = x and moving it
> to pred calc. ...
That's rather unfortunate, because T. and F. really belong
in the propositional calculus section, not in the later predicate calculation section.
Is there no alternative for keeping T. and F. within propositional calculus?
Perhaps an alternative is to simply state it axiomatically, something like this:
df-tru $a |- T. $.
|- T. <-> ( A. x x = x <-> A. x x = x )
On Monday, May 6, 2019 at 3:22:42 PM UTC-4, David A. Wheeler wrote:
On Mon, 6 May 2019 12:00:21 -0700 (PDT), Norman Megill wrote:
> Early on there were objections to df-tru because it has a "free" wff
> variable on one side that isn't on the other. Andrew Salmon felt that A. x
> x = x would be a better definition, but that would have required that
> df-tru be moved to the predicate calculus section.
>
> I have no fundamental objection to defining T. as A. x x = x and moving it
> to pred calc. ...
That's rather unfortunate, because T. and F. really belong
in the propositional calculus section, not in the later predicate calculation section.
Is there no alternative for keeping T. and F. within propositional calculus?
Perhaps an alternative is to simply state it axiomatically, something like this:
df-tru $a |- T. $.
This doesn't qualify as a definition but would be an axiom. The problem with adding it as an axiom is that it implies that ax-1,2,3 is not a complete axiomatization for prop calc because it can't prove this df-tru. Or that df-tru is an extralogical notion beyond the reach of prop calc.
Another possibility is T. <-> A. x ( x = x <-> x = x ). Then we would only need ax-gen to prove trutru (the only theorem we need from any df-tru). Since ax-gen is the very first axiom in pred calc, this df-tru can be added as the very first section in pred calc, immediately after prop calc. Even though we are using pred calc (very mildly), we aren't introducing any new axioms.
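Written out, that variant would be (my sketch of the $a form, not an adopted statement):

  df-tru $a |- ( T. <-> A. x ( x = x <-> x = x ) ) $.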
> |- T. <-> ( A. x x = x <-> A. x x = x )

I think this should really be avoided. The current definition is better. Apparently, my message from earlier didn't get through, so I copy it here, hoping this message does:

------------------------

Digression regarding "T.": I think Mario described the situation clearly. This is one of the reasons why I had proposed last year (attachment to https://groups.google.com/d/msg/metamath/mYchAks747s/TDntf_0CAAAJ) to introduce T. via the axiom

ax-tru $a |- T. $.
On Mon, May 6, 2019 at 3:38 PM Norman Megill wrote:
On Monday, May 6, 2019 at 3:22:42 PM UTC-4, David A. Wheeler wrote:
On Mon, 6 May 2019 12:00:21 -0700 (PDT), Norman Megill wrote:
> Early on there were objections to df-tru because it has a "free" wff
> variable on one side that isn't on the other. Andrew Salmon felt that A. x
> x = x would be a better definition, but that would have required that
> df-tru be moved to the predicate calculus section.
>
> I have no fundamental objection to defining T. as A. x x = x and moving it
> to pred calc. ...
That's rather unfortunate, because T. and F. really belong
in the propositional calculus section, not in the later predicate calculation section.
Is there no alternative for keeping T. and F. within propositional calculus?
Perhaps an alternative is to simply state it axiomatically, something like this:
df-tru $a |- T. $.
This doesn't qualify as a definition but would be an axiom. The problem with adding it as an axiom is that it implies that ax-1,2,3 is not a complete axiomatization for prop calc because it can't prove this df-tru. Or that df-tru is an extralogical notion beyond the reach of prop calc.

I think we should get beyond thinking that just because an axiom uses <-> means it's a definition. I think this new df-tru is no worse than the current df-tru as a definition. We have a simple check for conservativity of most set.mm definitions, but df-tru already doesn't qualify. This axiom is a conservative extension (whether it is stated like df-tru or tru), so in the broad sense it is still a definition, but neither one strictly follows the rules of definition construction. The set.mm convention is to name things df-* when they are a conservative extension, i.e. there is *some* way to justify they are conservative, even if they fail the syntactic check, so I'm comfortable with adopting Benoit's and David's suggestion.

We're already doing this with df-bi, by the way. Presumably you wouldn't argue the same thing about people worrying that <-> is an extralogical notion, yet it involves a new axiom different from ax-1,2,3.

As long as the new axiom is still labeled df-tru, there won't be any adverse effects - it will still be considered a definition by metamath.exe, so it will be classed as such in the HTML webpages. The only downside is that mmj2 will need to have an exception added for df-tru. You might wonder why it's accepting it now, and that's because it's picking up on the justification theorem. But as I've tried to explain, the metatheory of the justification theorem approach leads to existential metavariables which, while possible, increases the logical strength of the metamath core in ways that we haven't really considered properly.
While we are on the subject of mind-bending definitions, here's another one that works without moving to pred calc:

|- T. <-> ( T. -> T. )

I want to emphasize that if you think this is not a definition because it uses some trickery, you're right - now look at all the other definitions and notice similar trickery.
Another possibility is T. <-> A. x ( x = x <-> x = x ). Then we would only need ax-gen to prove trutru (the only theorem we need from any df-tru). Since ax-gen is the very first axiom in pred calc, this df-tru can be added as the very first section in pred calc, immediately after prop calc. Even though we are using pred calc (very mildly), we aren't introducing any new axioms.

I disagree with moving it to pred calc. "T." is logically a part of prop calc, and this incurs a dependence of wtru on wceq and setvar and class(!). This will make the levels even more muddled than they are currently.
As I noted earlier, I like this definition of constant true:
df-tru $a |- T. $.
On Monday, May 6, 2019 at 7:49:12 PM UTC-4, Mario Carneiro wrote:

As long as the new axiom is still labeled df-tru, there won't be any adverse effects - it will still be considered a definition by metamath.exe, so it will be classed as such in the HTML webpages. The only downside is that mmj2 will need to have an exception added for df-tru. You might wonder why it's accepting it now, and that's because it's picking up on the justification theorem. But as I've tried to explain, the metatheory of the justification theorem approach leads to existential metavariables which, while possible, increases the logical strength of the metamath core in ways that we haven't really considered properly.
Yes, I was wondering about mmj2 and df-tru. Is df-tru the only one (not counting the 4 in the exception list, df-bi, df-clel, df-cleq, df-clab) that requires a separate set.mm justification theorem?
T. <-> ( A. x x = x <-> A. x x = x ) would not have that problem since no second-order metatheory would be needed for its justification. Also, wouldn't this provide a cleaner interface with the mm0 translator (which is what brought this up in the first place)? Maybe we should recognize that prop calc simply isn't well suited for definitions involving dummy variables (because of the 2nd order justification required). I don't see anything wrong theoretically with defining it using pred calc syntax; it mainly seems an aesthetic issue.
The 4 exceptions for the mmj2 checker are very important definitions crucial for practical development of set.mm, and we go out of our way to provide extensive verbal explanations and justifications. But df-tru is not that important in the big picture. Except for 56 theorems about T. and F., set.mm would barely be affected if we got rid of df-tru completely. To me it doesn't seem worth making it a 5th exception or having special logic in the checking algorithm specifically for it, especially since (unlike the other 4) there _is_ a way to avoid putting it in the exception list.
> > Another possibility is T. <-> A. x ( x = x <-> x = x ). Then we would only need ax-gen to prove trutru (the only theorem we need from any df-tru). Since ax-gen is the very first axiom in pred calc, this df-tru can be added as the very first section in pred calc, immediately after prop calc. Even though we are using pred calc (very mildly), we aren't introducing any new axioms.
>
> I disagree with moving it to pred calc. "T." is logically a part of prop calc, and this incurs a dependence of wtru on wceq and setvar and class(!). This will make the levels even more muddled than they are currently.
It is not really "moving it to pred calc" because no pred calc axioms are involved (for the T. <-> ( A. x x = x <-> A. x x = x ) variant). We are just moving up some syntax definitions in order to make its justification first order instead of second order, so it can be checked with the same algorithm used to check pred calc definitions. Most people wouldn't even notice the change outside of the definition itself, which we can explain in its description. It's similar to how we moved up the set theory wceq, wcel into pred calc. It affects only one definition and one proof, and if we decide it was a mistake it is easily reverted.
But there is another aspect to this that I haven't yet mentioned, and that's using free wff/class metavariables in proofs. Metamath allows it, but I think it's considered bad practice, and once you have T. and _V, they are always eliminable. My HOL translation is almost to the point where it will report these as errors, so I will find out then what needs cleaning up.

A more contentious case is using free set variables. These are also permitted by Metamath, but they are probably necessary in some cases and will need support from the language. The ability to drop free set variables basically amounts to the assertion that the collection of all sets is nonempty, which is assumed pretty early on in pred calc. For example, we can prove the following (theorem 19.2):

1          |- ( A. x ph -> ph )
2          |- ( ph -> E. x ph )
3:1,2:syl  |- ( A. x ph -> E. x ph )

Without a proper analysis of the variable dependencies here, it's not obvious what just happened, but this theorem claims that there exists an element in the universe (we can use T. for ph to discharge the assumption). The problem isn't with step 1 or step 2, which are both true in FOL without any nonempty assumption,
On Tuesday, May 7, 2019 at 8:28:26 PM UTC-4, Mario Carneiro wrote:
> ... A more contentious case is using free set variables. ... For example, we can prove the following (theorem 19.2):
>
> 1          |- ( A. x ph -> ph )
> 2          |- ( ph -> E. x ph )
> 3:1,2:syl  |- ( A. x ph -> E. x ph )
>
> ... this theorem claims that there exists an element in the universe (we can use T. for ph to discharge the assumption). The problem isn't with step 1 or step 2, which are both true in FOL without any nonempty assumption,
Steps 1 and 2 don't hold in a free logic (a logic that allows an empty domain).
I disagree. If I'm wrong about this the HOL translation needs a major rewrite, but step 1 should hold in a free logic, because it's true for all values of the free argument x. Presumably you agree that the following statement is true in a free logic (this is just FOL, not metamath):
> On 5/6/19 12:00 PM, Norman Megill wrote:
> > There are another 361 uses of wtru beyond prop calc, and from a quick
> > check it seems they are mainly used to convert deductions into inferences.
There are a lot of *indirect* uses of the definition of ` T. ` , though.
For example, constant false is defined in df-fal as ` ( F. <-> -. T. ) ` ,
and constant false shows up in many places.
I ran:
metamath 'read set.mm' 'show usage df-tru /recursive' quit
and it reported:
Statement "df-tru" directly or indirectly affects the proofs of 28606 statements:
...
So I think we definitely need to keep T. (and F.).
The latest set.mm now discourages the use of df-tru, and df-tru is only used to
prove tru which is ` |- T. ` . Everything else depends on tru.
That will make it easier to change the definition of T. if we do so.
Now we just need to decide
if we're going to change the definition of T. (and to what if it's changed).
There's actually a lot of use of T. and F. in the prop calc section,
beyond just their definitions and the truth tables.
Other sections that use T. or F. include:
* "Derive the Lukasiewicz Axioms from the Tarski-Bernays-Wajsberg Axioms"
* "Derive the Tarski-Bernays-Wajsberg axioms from Meredith's First CO Axiom"
* "Derive the Tarski-Bernays-Wajsberg axioms from Meredith's Second CO Axiom"