It looks like you've done some impressive work. When you are ready,
send me an email with a link to the latest version, and I will create a
mathbox for you in the official set.mm.
On proof size: While there is no hard and fast rule for the maximum
size of a proof, I prefer proofs that compress in under a second using
'save proof xxx /compressed' in the metamath program, in order to make
my routine maintenance of set.mm easier. If it takes longer than that,
you can break up the proof into two or more smaller lemmas.
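For example, from the MM> prompt, the check is just (with 'mythm' standing
in for the label of your theorem):

    MM> read set.mm
    MM> save proof mythm /compressed
    MM> write source set.mm

If the 'save proof' step pauses noticeably, that is a sign the proof is a
candidate for splitting.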
Mario: Liouville's theorem (presumably you mean ~ aaliou ) indeed looks like a nice one. I might give that a try later.
Do we have a list of surface syntax conventions to follow, beyond those that are enforced by rewrap? I found a description of the section break comment syntax hidden in "HELP TEX". I've been breaking after connectives — I'll switch to before for consistency.
What constitutes "ready" in this case? I expect that late next week or so I'll be revisiting most of the old proofs to remove mathboxes and apply any "lessons learned". Would after that be a good point?
(So far, I have basically not touched working proofs. If it verifies and the statement is correct, I've left it be. This will be changing soon.)
Looking at the revision log in set.mm, it seems like active contributors feed their changes back every couple weeks. Is that a good model to follow?
How do you all manage your changes that haven't been fed back? Do you just have a modified copy of set.mm? I started a new file with $[ set.mm $] at the top, and I've been keeping that open in a text editor since it's substantially smaller than set.mm. Relatedly, I have not been using WRITE SOURCE in my process; proofs just get copied and pasted out (and occasionally into the wrong theorems, with hilarious results). I've been keeping said file in version control locally so I can revert anything I damage, and running a V P * before every commit to catch blatant editing errors. (The other reason I have a split out file is that git has issues with repeated small edits to multi-megabyte files like a copy of set.mm would be.)
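(For concreteness, the split-out file is nothing fancy - just the include directive followed by my own statements, along the lines of:

    $[ set.mm $]

    $( Work in progress goes below the include. $)
    dummy1 $p |- ( ph -> ph ) $=
      wph id $.

where dummy1 is of course a placeholder, not one of my actual theorems.)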
If you use modified set.mms, what's the standard practice for updating them when they change upstream?
Something else I haven't come to a satisfactory opinion on is naming, especially for definitions. I feel rather dirty claiming three-letter names like rmX and rmY in a global namespace like we have for constants. None of my names are very long because typing them would get annoying quickly, but many of them are rather cryptic (would you have any idea what a Pell14QR is? The set of solutions of a Pell equation coded as reals and restricted to the right half-plane, i.e. quadrants 1 and 4.) I could probably be using the temporary definition idiom (I mean things like the $e statement ~ vitali.1 ; is there another name for this?) much more than I am, but there are also cases where it doesn't seem to apply. What can be said about community best practices for naming concepts and the responsible use of the global namespace?
> On proof size: While there is no hard and fast rule for the maximum
> size of a proof, I prefer proofs that compress in under a second using
> 'save proof xxx /compressed' in the metamath program, in order to make
> my routine maintenance of set.mm easier. If it takes longer than that,
> you can break up the proof into two or more smaller lemmas.

I can't tell if this is in response to something specific, so some general thoughts. My prejudice is that if (after set w 79; sh n/c) the proof is taller than it is wide, it's too long. I've noticed that the proof assistant becomes quite sluggish around 30k uncompressed steps, which has happened twice so far and resulted in an immediate sidetrack to look for places to split. I'll apply the compression-time heuristic from now on as well.
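(Decoded, since those abbreviations are a bit opaque: that's

    MM> set width 79
    MM> prove mythm
    MM-PA> show new_proof /compressed

with 'mythm' standing in for whatever I happen to be working on.)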
I seem to have a particular problem with proofs that rely heavily on algebraic manipulation. I don't suppose you know any magic bullets for those. (I do have an idea I've been exploring, but it involves a change to ~ df-op and that's probably overstepping my bounds as a newbie juuuust a tad.)
Hopefully this isn't too many questions for today. Part of my motivation for asking all this here is so that the answers are openly visible for the next person to come along. (One last: How do you pronounce Megill?)
I usually break before, but there are some that break after.
Hi Stefan,
> Do we have a list of surface syntax conventions to follow, beyond those
> that are enforced by rewrap? I found a description of the section break
> comment syntax hidden in "HELP TEX". I've been breaking after
> connectives — I'll switch to before for consistency.
Early theorems in set.mm tend to break after the connective.
A few years ago, someone convinced me it was more natural (consistent
with mathematical texts) to break before the connective, such as
a string of equalities in the literature with each line beginning
with "=", and I've adopted that practice ever since, even retrofitting
some old ones occasionally.
> 10. experiment with the proof shrinking potential of deduction versions of the algebra theorems
> 10b. <. A , B >. = if ( B e. _V , { A , { A , B } } , 1o )

I'm not sure what the idea is with this definition. Why 1o = { (/) } ? What problem does this definition solve? The issue with using <. A , B >. = { A , { A , B } } is that you need ax-reg to prove opth, because otherwise if x = { y } and y = { x } then <. x , x >. = <. y , y >. , and this is not desirable (ax-reg affects very little mathematically, and we like to keep it that way). Furthermore, even though definitions are usually subject to change (and I have been responsible for MANY definition changes over the time I've been contributing), practically speaking the Kuratowski definition of df-op is holding up a lot more theorems than you would think - it doesn't just stop at opth - for example ovprc2 or brrelex, also rankop, which are themselves holding up many more theorems.
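(Spelling that out with the definition exactly as written: if x = { y } and y = { x } with x =/= y , then

    <. x , x >. = { x , { x , x } } = { x , { x } } = { x , y }
    <. y , y >. = { y , { y , y } } = { y , { y } } = { y , x } = { x , y }

so the two pairs coincide even though their components differ, which is exactly the situation ax-reg rules out.)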
> 2. WRITE SOURCE with $[ $] would make my life much easier
This is on my to-do list (has been for many years). I'll move up its
priority.
> 3. Namespaces - see separate doc
See the 13-May-04 entry in http://us.metamath.org/mpegif/mmnotes2004.txt
for the rationale behind label naming at that time. At the top of
set.mm, you can see there has been fine-tuning of thousands of label
names over time to improve consistency.
I notice that you use upper case in some labels. So far, almost all
labels are lower case for ease of typing, although this is a
convention rather than a prohibition. Others may wish to discuss this.
> 20. Blank lines before/after SHOW PROOF would help
The metamath program is very stingy with blank lines in order to fit as
much info on the screen as possible. Historically, terminal windows
were fixed at 24x80 and are still the typical default. If you still
consider this problematic after more experience w/ the program, bring it
up again.
> 21. backrefs in SEARCH
Not sure what you mean.
> 26. ASSIGN with $e does not work as described. It would be much more useful if out-of-scope $e's were excluded
Can you be more specific? When assigning hypotheses, I often do 'assign
last *.a' if the local hypothesis is 'abc.a' and there are no global
labels matching *.a.
> 27. /WIDTH to SHOW PROOF
'set width' sets the default width for all output, including proofs.
> 28. LET [VAR] $1 = $2 = ...
Not sure how often this would be used. Hopefully you use rlwrap to
call metamath so that the uparrow key will recall the last command,
then just erase the last $2 to make it $3 or whatever.
> 29. LET STEP seems possibly nonfunctional, need to investigate
Let me know if you can pin down a problem. Personally I rarely use it
(like almost never) so it's possible it has a bug I missed.
> 30. V P {MY-MATHBOX}
Not sure if it is worth the effort - is 'verify proof *' too slow? I also use
'verify proof */s' to find my unproved theorems quickly. Currently there is
nothing that identifies "your" mathbox except when you are inside of MM-PA.
> 31. MMPE: hover over GIFs to see original math symbol?
Actually, this would be a set.mm thing: add 'title=' in addition to
'alt=' in the htmldef. Or can we just replace 'alt' with 'title' to
help conserve page size? Anyone know whether this would cause problems
with some browsers?
Note that 'alt=' causes copy/paste into a text editor to show the
original symbol.
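For reference, the htmldef entries in the $t comment currently look
something like this (attribute values quoted from memory, so they may be
slightly off):

    htmldef "ph" as "<IMG SRC='_varphi.gif' WIDTH=11 HEIGHT=19 ALT=' ph'>";

so the change under discussion is just adding TITLE='ph' (or substituting
it for ALT=' ph') inside the IMG tag.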
On Wednesday, October 15, 2014 8:08:16 PM UTC-7, Mario Carneiro wrote:

> > 10b. <. A , B >. = if ( B e. _V , { A , { A , B } } , 1o )
>
> I'm not sure what the idea is with this definition. Why 1o = { (/) } ? What problem does this definition solve? [...]

Since you asked, the point is actually to break ovprc2. The second A is supposed to be { A } - that's just a typo. The intent here is that the pair becomes symmetrical in its treatment of proper classes; <. _V , x >. is already 1o, and under the notional change <. x , _V >. is also 1o, so we can conclude ( <. A , B >. =/= 1o <-> ( A e. _V /\ B e. _V ) ). This in turn allows for a strong symmetrical form of brrelex: ( ( Rel R /\ A R B ) -> ( A e. _V /\ B e. _V ) ), and also gives us ( -. ( A e. CC /\ B e. CC ) -> ( A + B ) = (/) ), which means that addcom, etc. can be proven with no antecedents at all.
Throw in ax-cnmario $a -. (/) e. CC $. from earlier this year, and we can strengthen this to ( ( A + B ) e. CC <-> ( A e. CC /\ B e. CC ) ), which allows all polynomial identities where both sides have the same two or more variables (i.e. not npcan, but would allow stuff like muladd and sub4) to become similarly antecedent-free. Effectively what has happened is that (/) is now playing the role of an IEEE 754 NaN value; any arithmetic formula where some operations don't make sense evaluates to (/), and since (/) is not a complex number, this propagates up to the top and satisfies the equality. We don't get npcan because ( ( 1 - _V ) + _V ) = (/) =/= 1, although we could (very likely a net loss, so will not pursue this sub-suggestion further) slightly weaken the antecedent on npcan to A e. ( CC u. 1o ) which is interesting as the domain of absolute closure of arithmetic operations.
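(Writing the npcan failure out step by step under the notional rules:

    ( 1 - _V ) = (/)        since _V e/ CC
    ( (/) + _V ) = (/)      since (/) e/ CC (ax-cnmario) and _V e/ CC
    (/) =/= 1               since 1 e. CC but (/) e/ CC

so the left-hand side of npcan collapses to (/) while the right-hand side stays 1 .)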
In non-algebra parts of set.mm, ensymg loses its antecedent, while numth gains one; so far, the only example I've found where antecedents increase is numth and other theorems which use the same construction, but I'm sure I'd find a lot more if I ever started this project in earnest.
I tried for about an hour earlier to retrofit set.mm to use Wiener pairs, which have the same symmetric proper class property, and did indeed discover that it broke rankop, but also broke the U. U. idiom for extracting domain and range for a relation, which seemed like a major pain to rework. Since this version reduces to the current definition when B is a set (and in particular when B is a set variable), there is a glimmer of hope it could be less disruptive. I haven't spent any significant time on the current iteration of the proposal; I expect it to be dismissed quickly for most of the same reasons from the -. (/) e. CC thread, and also because there's a significant principle-of-least-surprise violation potential for <. A , B >. being anything other than a basic Kuratowski pair. (Although I could also argue that proper classes are "undefined behavior" for Kuratowski and we're free to do whatever is most convenient in that case...)
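(For anyone who hasn't run into them: the Wiener pair, if I'm remembering the definition correctly, is <. A , B >. = { { { A } , (/) } , { { B } } } .)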
On a mostly unrelated note, the current lttrd and friends have 2-3 unnecessary antecedents: from A < B we can already infer A e. RR* and B e. RR* (since < is irreflexive, it doesn't matter whether we use the current opprc2 or the notional new one).
So. Ridiculous proposal? Or worth further discussion? Still feeling rather self-conscious about being that guy who shows up and immediately demands that everything be changed.
> > 3. Namespaces - see separate doc
>
> See the 13-May-04 entry in http://us.metamath.org/mpegif/mmnotes2004.txt
> for the rationale behind label naming at that time. At the top of
> set.mm, you can see there has been fine-tuning of thousands of label
> names over time to improve consistency.
This item was intended to refer more to math symbols, as in "what if two unrelated fields want to define a J function"?
On Thu, Oct 16, 2014 at 1:02 AM, Stefan O'Rear <stef...@cox.net> wrote:

> Throw in ax-cnmario $a -. (/) e. CC $. from earlier this year, and we can strengthen this to ( ( A + B ) e. CC <-> ( A e. CC /\ B e. CC ) ), which allows all polynomial identities where both sides have the same two or more variables (i.e. not npcan, but would allow stuff like muladd and sub4) to become similarly antecedent-free. [...]

As the result of that thread suggests, you probably won't get far with this part. The best approach is to see what is derivable assuming that the truth of (/) e. CC is unknown, and also not assuming ax-addopr if you can help it. Indeed this was the main reason I wanted to introduce that axiom in the first place, but I respect wanting to have an "axiomatic" treatment of the reals, so that's that.
(Another antecedent-simplifying modification would be defining (x/0)=0, but Norm likes to follow the literature whenever possible, and the usual language suggests that the function is not defined at zero.)
> On a mostly unrelated note, the current lttrd and friends have 2-3 unnecessary antecedents: from A < B we can already infer A e. RR* and B e. RR* (since < is irreflexive, it doesn't matter whether we use the current opprc2 or the notional new one).

I've noticed this, and fixing it is on my to-do list, although Norm may object because this relies on < being a binary op and not a syntax construction. Since < is a derived operation, though (the axiomatic relation is <RR ), you may be okay here.
On Wed, Oct 15, 2014 at 12:14 PM, <n...@alum.mit.edu> wrote:

> Hi Stefan,
>
> > Do we have a list of surface syntax conventions to follow, beyond those
> > that are enforced by rewrap? I found a description of the section break
> > comment syntax hidden in "HELP TEX". I've been breaking after
> > connectives — I'll switch to before for consistency.
>
> Early theorems in set.mm tend to break after the connective.
> A few years ago, someone convinced me it was more natural (consistent
> with mathematical texts) to break before the connective, such as
> a string of equalities in the literature with each line beginning
> with "=", and I've adopted that practice ever since, even retrofitting
> some old ones occasionally.

Hm, I didn't know this bit of history. It's worth mentioning that the Python style guide you linked to also recommends breaking after a connective
(although opinions tend to be split in programming languages - http://yohanan.org/steve/projects/java-code-conventions/ recommends breaking before every connective except a comma).
Regarding mathematical texts, I would argue that the behavior of breaking before equals is limited to the equals sign or other binary relations like < , while operations like + tend to be either at the end of the line or on both lines (although I dislike the practice of putting it on both lines since then it doesn't make any sense if you delete the line breaks).
On a different topic, in the far far future, Metamath may evolve into a different language (hopefully just as simple but perhaps cleaner in some way), but our proofs will be timeless [...]
> On a different topic, in the far far future, Metamath may evolve into a different language (hopefully just as simple but perhaps cleaner in some way), but our proofs will be timeless [...]
Or forgotten for the rest of eternity. People are so negligent. Who is reading "Principia Mathematica" nowadays?
Um... we are? I don't think that's a very good example for your point. Where do you think theorem 19.26 came from?
So who is reading Principia Mathematica? Nobody, I'm afraid.
No, really, I don't believe that eternity or universality can be envisioned seriously.
Norm has made us a very beautiful present and must be thanked for that.
I'm not sure, but I think you are thinking too ego-centrically about this.
I don't mean that in a bad way, but my point is that the proofs themselves can live long past their attachment to us as individuals. For example, consider the invention/discovery of gunpowder.
Sure. A definitive contribution to the history of humanity.
> I'm not sure, [...]
(There is something that annoys me. In hol.mm you definitely have metavariables, by the construction of Metamath. They shouldn't exist in HOL. Can they have a side effect on the theory?)
I just wanted to point people to http://levien.com/garden/ghilbert/hol/hol-zfc.gh if they are not already aware of it.