I'm just a newbie (indirect) metamathematician, so my opinion
(from the point of view of a metamath tool writer) might not be
worth much:
- Simplicity is great (one of the best features of metamath)
- Deduplication is awesome (it reduces the memory and time cost
of writing proofs)
There has recently been a debate about having underscores
in ids. My (tool writer) opinion is that this debate should not
need to happen:
In software development, there is a generally held opinion that
ids should be opaque tokens.
I fear that you are mixing two roles:
- ids, which should uniquely identify a theorem forever
- (indexed) fields, which help humans find theorems in the set.mm
database
This happens because Metamath was designed to be easy to work with
from a basic text editor with search/replace capabilities.
That has worked great (metamath is successful, after all), but it is
probably not the optimal setup for metamath:
An illustration: there are plenty of languages that tried to be
the next Java (Groovy, Scala, C#, Kotlin), but the one that is
really succeeding right now is Kotlin, because its designers
realized that
- they had to design a better language than Java
- they had to write great tools alongside it to make it
easy/enticing for developers to adopt/migrate to their new
platform.
By now, Android developers (we are talking about millions of
developers) have largely switched from Java to Kotlin, and Kotlin
is still aiming to take over other traditional areas of software
development.
My point, and my opinion, is: Metamath is really awesome, but if
we developed tooling for it, alongside it, that would remove some
of its constraints and maybe make it even better.
I think people should not have to work on metamath in text
editors; computers should provide the help they need when it comes
to displaying theorems, searching for them, etc.
For example, why isn't there a simple text-search tool
(that could interact with mmj2 or other software) that lets
people input a string (say "( a + b ) =" or "( a in CC )") and
returns matching theorem ids?
Theorems could be hashed and indexed in many ways, with
customizable settings, so that people who would rather see a_in_CC
than aInCC could be happy.
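As a rough illustration of the kind of search tool I mean (a minimal sketch: the two-theorem "database" is hand-written for the example, and the underscore-rendering convention is invented; a real tool would load set.mm through a parser, or talk to mmj2 or another verifier):

```python
# Minimal sketch of a statement-to-id search tool. The tiny "database"
# below is hand-written for illustration; a real tool would load set.mm
# via a parser (e.g. through mmj2 or another verifier).

THEOREMS = {
    "addcl": "|- ( ( A e. CC /\\ B e. CC ) -> ( A + B ) e. CC )",
    "addcom": "|- ( ( A e. CC /\\ B e. CC ) -> ( A + B ) = ( B + A ) )",
}

def find_ids(fragment: str) -> list[str]:
    """Return the ids of all theorems whose statement contains the fragment."""
    return sorted(tid for tid, stmt in THEOREMS.items() if fragment in stmt)

def render_id(tid: str, style: str = "plain") -> str:
    """Display an id according to a user preference, while the stored
    (opaque) id stays untouched -- e.g. 'addcl' shown as 'add_cl'.
    The suffix-splitting rule here is an invented convention."""
    if style == "underscore":
        for suffix in ("com", "cl"):
            if tid.endswith(suffix):
                return tid[: -len(suffix)] + "_" + suffix
    return tid

print(find_ids("( A + B ) e. CC"))       # search by statement fragment
print(render_id("addcl", "underscore"))  # render the id to taste
```

The point of the split is exactly the "two roles" above: `find_ids` treats ids as opaque lookup results, while `render_id` is a presentation-layer choice each user can configure.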
People should NOT have to learn and remember theorem ids; this is
a useless skill in real life, and it prevents more people from
working with metamath.
Also, relaxing the constraint that metamath must be usable in text
editors, or must be nice to human eyes, could allow us to adopt a
syntax aimed at computers: one that ensures coherency, makes
parsing easier, avoids pitfalls, and allows definitions that are
not axioms.
It would then be the job of the tools to display the results in a
nice manner for humans, and to make working with metamath pleasant
for humans.
Well, just a rant :)
As a side note, I managed to use antlr-kotlin to port the ANTLR
metamath parser I was using to Kotlin multiplatform.
That was the only roadblock preventing releases of Mephistolus for
the browser/Android/iOS/Linux/Windows/Mac.
I'm going to work on the browser/JavaScript target now.
Best regards,
Olivier
Tierry, Alexander, and Benoit have asked for clarification of the goals of set.mm. Here are some of my opinions. I am moving the discussion in https://github.com/metamath/set.mm/issues/1431 to here for wider readership.
1. There is no "goal" per se for set.mm. People work on what they are interested in. Typically work starts off in mathboxes, where people pretty much have freedom to do whatever they want. Two situations that typically will result in the work being moved to the main part of set.mm are (1) more than one person wants to use or develop the work and (2) the work is an mm100 theorem. There are other factors, though, that I'll discuss. And in any case it is a judgment call by me and others that may not be perfect. Just because we choose not to import from a mathbox right away doesn't mean the value of the work is less, it just means that in our possibly flawed judgment it either doesn't quite fit in with the set.mm philosophy or it hasn't found its place there yet.
So what I will primarily discuss is what I see as the "philosophy" that has been generally used so far in set.mm. Hopefully that can guide the goals that people set for themselves as they contribute to set.mm.
2. A very important thing to me is the philosophy of striving for simplicity wherever possible. This particularly impacts what definitions we choose.
"Rules" for introducing new definitions
Definitions are at the heart of achieving simplicity. While developing set.mm, two rules of thumb that evolved in my own work were (1) there shouldn't be two definitions that mean almost the same thing and (2) definitions shouldn't be introduced unless there is an actual need in the work at hand. Maybe I should add (3), don't formally define something that doesn't help proofs and can be stated simply in English in a theorem's comment.
Concerning the 1st rule, very early on, when I was starting to work with ordinals, I noticed that many books would redefine "e." (epsilon) as "<" and "C_" (subset) as "<_" (less than or equal) since those were the ordering roles they played in von Neumann ordinal theory. I figured since books did it, it must be important, so I also did that in the early set.mm. (I may have even used those symbols because reals were a distant vision at that point.) Books casually interchanged them in theorems and proofs, but formalization meant I might have 2 or more versions of many theorems. It was a constant nuisance to have to convert back and forth inside of proofs, making proofs longer. I eventually decided to use just "e." and "C_", and it made formalizations much more pleasant with fewer theorems and shorter proofs, sometimes providing more general theorems that could be used outside of ordinal theory. I never missed not having the ordering symbols to remind me of their roles. I liked the feeling of being closer to the underlying set theory rather than being an abstraction layer away from it. If the "less than" role of "e." was important for human understanding in a particular case, I could simply use the English words "less than" in the comment (3rd rule) while continuing to use "e." in the $p statement.
As for the 2nd rule, there are often different choices that can be made in the details of how something is defined. It is sometimes hard or impossible to anticipate what choice is optimal until we actually start using the definition for serious work. There is also the question of whether the definition is even necessary. More than once I've found that a definition I thought might be needed (for example was used in the literature proof) actually wasn't. Sometimes, like the "e." vs. "<" case above, the definition was even a hindrance and made proofs longer. Other times an intermediate definition of my own invention that isn't in the literature turned out to be advantageous for shorter proofs. It can be best to let the work drive the need for the definition and its precise details.
Another example of a literature definition we purposely don't use, and one that illustrates the 3rd rule, is Takeuti/Zaring's Russell class with symbol "Ru", which they formally define as "{ x | x e/ x }". The only theorem that uses it (in set.mm and in T/Z) is ~ ru. For the purposes of set.mm, it is wasteful and pointless since we can just define the Russell class verbally in the comment of ~ ru.
A problem that has arisen in the past is where a person has added a set of definitions from the literature, proved some simple property theorems, then is disappointed that the work isn't imported into the main set.mm. Sometimes we do and sometimes we don't, but the main issue is whether the 2nd rule above is being followed. Without knowing exactly how the definition is going to be applied, very often it won't be optimal and will have to be adjusted, and sometimes it is more efficient just to start from scratch with a definition in the form that is needed. I prefer to see some "serious" theorems proved from a definition before importing it.
As a somewhat crude example of why a definition in isolation may not be optimal, suppose someone defined sine as "$a |- sin A = ..." (not a straw man; this kind of thing has been attempted in the past) i.e. a 1-place function-like "sin A" rather than a stand-alone object "sin". While this can be used to prove a few things, it is very limiting since the symbol "sin" in isolation has no meaning, and we can't prove e.g. "sin : CC --> CC". It is an easy mistake to make if you don't understand the set-theoretical picture but are blindly copying a textbook definition. Here it is pretty obvious, but in general such issues can be subtle and may depend on the application. Also, there are often multiple ways to define something, some of which are easy to start from and others which would require a lot of background development. The best choice can't always be anticipated without knowing what background will be available.
3. Let me comment on the recent proposal to import df-mgm, df-asslaw, df-sgrp. These are my opinions, and correct me if I am wrong.
df-mgm: A magma is nothing but an empty shell with a binary operation. There are no significant theorems that follow from it. All the properties of binary operations that we have needed have already been proven. If additional properties are needed in the future it seems it would be better to state them directly because they will be more useful generally. Just because it appears in Bourbaki or other texts is not to me a justification: it also needs to be useful and be driven by an actual need for it. There are many definitions in textbooks that have turned out to be unnecessary.
df-asslaw: All this is doing is encapsulating the class of associative operations. The only thing that can be derived from it AFAIK is the associative law. I don't see how it would benefit proofs since additional work is needed to convert to and from it. I don't see how it helps human understanding because it is easier for a human to see "show the associative law" in the description of a theorem than be asked to learn yet another definition (an example of the 3rd rule).
df-sgrp: This is basically df-asslaw converted to an extensible structure. So in essence it duplicates df-asslaw (or vice versa depending on your point of view), violating the first rule (no redundancy). More importantly, AFAIK there is nothing that can be derived from it except the associative law, unlike say df-grp where deep and non-obvious theorems are possible. Yes, it can be used as a "layer" on top of which we can slightly simplify df-mnd and df-rng, but is it really worth having an entire, otherwise useless definitional layer just to avoid having to state "(x+y)+z=x+(y+z)"? The fact that it simplifies two definitions is a slight positive, but I'm still torn about it. It doesn't quite pay for itself in terms of reducing set.mm size, and it increases the burden on the reader who has to learn another definition and drill down another layer to understand df-mnd and df-rng. Personally I'd prefer to see "(x+y)+z=x+(y+z)" directly because it is immediately obvious without having to look at a deeper layer. In the case of the description in df-rng0, it would be more direct and less obscure to state "associative operation" instead of "semigroup operation", since most people (at the level of studying that definition) would know what "associative" means, but fewer would know what "semigroup" means.
I can't say that these will never be imported. Possibly something will drive a need for them, maybe something related to category theory where even a magma might be a useful object. But I would want to see the definitions be driven by that need when it arises; we don't even know yet whether the present extensible structure form can be adapted for that purpose. It is fine to store them in a mathbox indefinitely, but I'd like to see a true need driving their use before importing.
4. As for adding an exhaustive list of all possible definitions one can find in Bourbaki or whatever, as someone suggested, I don't think something like that belongs in set.mm, for all the reasons above. There are other resources that already list these (Wikipedia, Planetmath, nLab). Their precise formalization will depend on the context in which they are needed in set.mm, which is unknown until they are used. In isolation (perhaps with a couple of property theorems that basically check syntax) there is no guarantee that they are correct, even in set.mm. To restate, I think the philosophy should be that definitions should be added based on need, not as busywork. Adding a definition when needed is one of the smallest parts of building a series of significant proofs. It doesn't have to be done in advance, and I don't think it is generally productive or useful to do so.
5. Finally, let me touch on the issue of > (greater than).
There are many non-symmetrical relations that use reversed symbols to swap arguments: "B contains A" with reversed epsilon, "B includes A" with reversed subset symbol, "Q if P" with reversed arrow, etc. I've seen all of these in the literature. If we really feel the reader will encounter them and expect set.mm to explain their meaning (which is likely explained in their textbook anyway), we could mention informally the reversed symbol usage when we introduce the forward symbol. But we don't add $a's for them because we will never use them. That is because either we would have to add a huge number of theorems containing all possible forward and reversed combinations of the symbols, or we would have to constantly convert between them inside of proofs. Both of those are contrary to a philosophy of simplicity.
IMO the same should be done with >, mentioning what it means in the description for <. Introducing a formal $a statement for > that will never be used is unnecessary and wasteful of resources. If we want to be excessively pedantic, we could mention in the description for < that the formal definition would be "|- > = `' <", although that seems less intuitive than simply saying that "in theorem descriptions, we occasionally use the terminology 'A is greater than B' to mean 'B is less than A'." A grade school student can (and does) easily understand that.
Basically, ">" violates all 3 "rules" for new definitions I proposed above.
Norm
--
You received this message because you are subscribed to the Google Groups "Metamath" group.
To unsubscribe from this group and stop receiving emails from it, send an email to metamath+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/metamath/6e9144b6-5b8f-488d-bf19-83e5a2a250e0%40googlegroups.com.
2. Provide a stable archive of this information for all future time.
I don't think people appreciate how ephemeral information is.
The main logic system of the western world for centuries was
Stoic logic; we know very little about it, because practically
everything has been lost since. Modern civilization is even *less*
good at archiving; most storage devices only last around 10 years,
not thousands. Yet many civilizations have collapsed over the centuries,
and it is hubris to think it could not happen again.
I want to see etched copies of these databases
put in caves, deep in the sea, on the moon, and sent out to space
so it's possible for others to recover them after much else is lost.
There are organizations like the Arch Mission Foundation
(a non-profit organization that archives the knowledge and species
of Earth for future generations, https://www.archmission.org/ ) - but
they have to have material to work with.
I have no wish (I'll adapt to what is available), but I have a
vision:
1) Metamath (or a descendant) is the universal computer format
for maths (instead of LaTeX).
2) A proof can be exported to TeX/LaTeX/ConTeXt (pdf for print),
MathJax/html/svg/css (for the web), and mp4/youtube (tutorials,
video lessons), in 50+ human languages.
3) Math research is about publishing computer-checkable proofs
that can be exported to printable proofs (in any human language).
4) There is a decentralized database of what has been proven in
maths, with proof checkers that commit/push things
(no more peer reviews, with flawed humans and all their flaws).
5) Everyone on earth can do math/research/access the knowledge
base
6) There are computer games powered by a math engine that let
children enjoy themselves by solving math puzzles.
The games actually teach them things that are useful in life.
7) Math teachers write their material in a computer-checkable
format instead of TeX/LaTeX/ConTeXt.
8) You can assign work to math students; they have to write
computer-checkable proofs (no more grading student papers).
9) Your LinkedIn profile shows the math level you have achieved,
so playing math games can get you a job.
Ok, I may be crazy.
Still, I started to put my master plan in motion about a year
ago and, to me, it looks doable, so I'm making it happen ^^.
Because metamath/mm0 are already quite up to the task, the first
step is to build a proof-authoring tool that is good
enough/great.
I fail if I cannot make expert metamathematicians (who seem to
need better tools) give up mmj2 and switch to Mephistolus.
If metamath/mm0 become even better, I will like that a lot, though.
Best regards,
Olivier
Giovanni Mascellani:
> To me MM100 is an interesting goal, but definitely an intermediate
> one. The (admittedly, very ambitious) goal that I have in mind for
> Metamath (or MM0, or whatever incarnation of it we will have in the
> future) is that virtually every published mathematical result is
> available in set.mm or some reverse dependency of it. I envision that if
> you search for the comment "(Based on ArXiv paper: xxx.yyy)" you find
> all the main theorems from that ArXiv paper.
My goal is similar. I would like to see future descendants of the
Metamath databases (including set.mm and iset.mm)
have essentially *all* mathematics formalized.
I want to associate myself with Giovanni and David: the goal should be a database in which "every published mathematical result is available" and which has "essentially *all* mathematics formalized".
Concerning simplicity, however, a knowledge base containing much/most of mathematics cannot be simple in general, because mathematics is not simple. But *using* this knowledge base can (and should) be made simple by providing appropriate tools and "views". Because of this, there can be complex definitions in set.mm. Actually there are a lot of them, which only become clear through their corresponding "defining theorems" - e.g. ~ df-mamu:
$a |- maMul = ( r e. _V , o e. _V |-> [_ ( 1st ` ( 1st ` o ) ) / m ]_ [_ ( 2nd ` ( 1st ` o ) ) / n ]_ [_ ( 2nd ` o ) / p ]_ ( x e. ( ( Base ` r ) ^m ( m X. n ) ) , y e. ( ( Base ` r ) ^m ( n X. p ) ) |-> ( i e. m , k e. p |-> ( r gsum ( j e. n |-> ( ( i x j ) ( .r ` r ) ( j y k ) ) ) ) ) ) ) $.

which becomes clear only after looking at theorem ~ mamuval:

$p |- ( ph -> ( X F Y ) = ( i e. M , k e. P |-> ( R gsum ( j e. N |-> ( ( i X j ) .x. ( j Y k ) ) ) ) ) )

with F = ( R maMul <. M , N , P >. )
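To unpack ~ mamuval: the (i, k) entry of the product is the ring sum over j e. N of ( i X j ) times ( j Y k ). Here is that recipe as a plain Python sketch over the integers (a hedged illustration only: set.mm's maMul works over an arbitrary ring R via gsum and over arbitrary index sets M, N, P; the function and variable names below are invented):

```python
# Sketch of the maMul recipe from ~ mamuval, specialized to integers.
# Matrices are dicts keyed by index pairs, mirroring functions on
# ( m X. n ) and ( n X. p ) in set.mm.

def ma_mul(x, y, m, n, p):
    """Entry (i, k) of the product is the sum over j in n of x[i,j] * y[j,k]."""
    return {(i, k): sum(x[i, j] * y[j, k] for j in n)
            for i in m for k in p}

# Tiny check: multiplying by the 2x2 identity gives the matrix back.
m = n = p = {0, 1}
ident = {(i, j): 1 if i == j else 0 for i in m for j in n}
a = {(0, 0): 2, (0, 1): 3, (1, 0): 5, (1, 1): 7}
assert ma_mul(ident, a, m, n, p) == a
```

Once this shape is visible, the extra complexity of df-mamu itself is just the bookkeeping that unpacks the triple <. M , N , P >. into the three index sets.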
On Sat, 25 Jan 2020 18:40:53 -0800 (PST), Norman Megill <n...@alum.mit.edu> wrote:
> I know I keep harping on not wanting to add a catalog of definitions (with
> shallow theorems) that are otherwise unused, but it has been done in the
> past with the work essentially wasted, sometimes causing resentment, and I
> want to avoid that. With an ambitious goal like "all of mathematics," it
> will be very tempting to start collecting definitions, leading to a false
> sense of accomplishment to see the syntax errors ironed out and simple
> property theorems proved.
Not just tempting. Necessary.
> But in the end it is busywork with a good chance
> of being discarded later. Moreover, a few shallow value and property
> theorems don't ensure that the definition is correct, possibly misleading
> someone who might want to use set.mm as a reference source that can be
> trusted to be rigorously correct.
*Nothing* other than human review ensures a definition is correct.
If we defined 9 as "( 8 + 3 )" there is no tool that could detect the error.
Tools can detect if a definition is syntactically valid,
but no tool can figure out "was this really what you meant?"
The best solution for that is to provide definitions & enable multi-person
human review to *detect* those problems. Failing to formalize those
definitions means that the needed multi-person review can't ever happen.
A useful additional step would be to prove some "expected" properties to
ensure that they "do what's expected". Again, that requires that they
be formalized in the database itself somewhere.
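A toy sketch of the point above (invented names, nothing tied to any real verifier): a pure syntax check accepts a wrong definition of 9 just as happily as the right one, while an "expected property" check tells them apart:

```python
# A syntax check only asks "is this well-formed?", so it cannot distinguish
# a correct definition from a wrong one. Names here are illustrative only.

def is_well_formed(defn: str) -> bool:
    """Accept any syntactically valid expression, right or wrong."""
    try:
        compile(defn, "<defn>", "eval")
        return True
    except SyntaxError:
        return False

good_nine = "8 + 1"   # correct definition of 9
bad_nine = "8 + 3"    # syntactically fine, semantically wrong

# Both pass the syntax check...
assert is_well_formed(good_nine) and is_well_formed(bad_nine)

# ...but an expected-property check (the analogue of proving a few
# "expected" theorems about a new definition) catches the bad one.
assert eval(good_nine) == 9
assert eval(bad_nine) != 9
```

The property check is only as good as the properties chosen, which is why multi-person human review is still needed on top of it.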
> This is the motivation for rule 2 I
> proposed, "definitions shouldn't be introduced unless there is an actual
> need in the work at hand."
I would add:
... or if the definition will *never* be used by the rest of set.mm, and it is
there solely to help translate widely-used constructs used in the literature.
Also, why not put in set.mm a definition of ">" relating it to the
negation of "<_", and add a note saying "you are absolutely
forbidden to use this definition to write theorems; usage is
discouraged, and you'll be cursed and haunted by a ghost if you
do!", with some explanation?
(arghh, I'm meddling ^^)
Happy funny Sunday, everyone :)
Could this be an approach to find an acceptable way to achieve everybody's short/mid-term goals? What would remain to be discussed are the criteria for definitions (and/or theorems) serving the 1st-level goals, i.e. which definitions are acceptable to be added to the main body/Volume I of set.mm...
To have only one additional "Volume" instead of two may respect the concerns of vvs (see https://groups.google.com/d/msg/metamath/mnkBQV1_cXc/Xkrv9_9AAQAJ) a little bit...
I want to come back to the first post of this topic, and I want to concentrate on the "definition" parts (2 and 4):
On Friday, January 24, 2020 at 5:58:14 AM UTC+1, Norman Megill wrote:...
2. A very important thing to me is the philosophy of striving for simplicity wherever possible. This particularly impacts what definitions we choose.
"Rules" for introducing new definitions
Definitions are at the heart of achieving simplicity. While developing set.mm, two rules of thumb that evolved in my own work were (1) there shouldn't be two definitions that mean almost the same thing and (2) definitions shouldn't be introduced unless there is an actual need in the work at hand. Maybe I should add (3), don't formally define something that doesn't help proofs and can be stated simply in English in a theorem's comment.

I fully agree with this in principle, but what exactly do "almost the same", "actual need" and "help proofs" mean? There are major discrepancies in how these terms can be interpreted.
Concerning the 1st rule, very early on, when I was starting to work with ordinals, I noticed that many books would redefine "e." (epsilon) as "<" and "C_" (subset) as "<_" (less than or equal) since those were the ordering roles they played in von Neumann ordinal theory. I figured since books did it, it must be important, so I also did that in the early set.mm. (I may have even used those symbols because reals were a distant vision at that point.) Books casually interchanged them in theorems and proofs, but formalization meant I might have 2 or more versions of many theorems. It was a constant nuisance to have to convert back and forth inside of proofs, making proofs longer. I eventually decided to use just "e." and "C_", and it made formalizations much more pleasant with fewer theorems and shorter proofs, sometimes providing more general theorems that could be used outside of ordinal theory. I never missed not having the ordering symbols to remind me of their roles. I liked the feeling of being closer to the underlying set theory rather than being an abstraction layer away from it. If the "less than" role of "e." was important for human understanding in a particular case, I could simply use the English words "less than" in the comment (3rd rule) while continuing to use "e." in the $p statement.

Yes, that's absolutely right. These would be exactly identical definitions for the same concept, only using different symbols, so here it is clear what "almost the same" means (namely "exactly the same"). In this case, the symbols "<" and "C_" are used to denote different things anyway. But do you think the definition of ">" would be "almost the same" as the available definition of "<"?
As for the 2nd rule, there are often different choices that can be made in the details of how something is defined. It is sometimes hard or impossible to anticipate what choice is optimal until we actually start using the definition for serious work. There is also the question of whether the definition is even necessary. More than once I've found that a definition I thought might be needed (for example was used in the literature proof) actually wasn't. Sometimes, like the "e." vs. "<" case above, the definition was even a hindrance and made proofs longer. Other times an intermediate definition of my own invention that isn't in the literature turned out to be advantageous for shorter proofs. It can be best to let the work drive the need for the definition and its precise details.
Another example of a literature definition we purposely don't use and illustrates the 3rd rule is Takeuti/Zaring's Russell class with symbol "Ru", which they formally define as "{ x | x e/ x }". The only theorem that uses it (in set.mm and in T/Z) is ~ ru. For the purposes of set.mm, it is wasteful and pointless since we can just define the Russell class verbally in the comment of ~ ru.
A problem that has arisen in the past is where a person has added a set of definitions from the literature, proved some simple property theorems, then is disappointed that the work isn't imported into the main set.mm. Sometimes we do and sometimes we don't, but the main issue is whether the 2nd rule above is being followed. Without knowing exactly how the definition is going to be applied, very often it won't be optimal and will have to be adjusted, and sometimes it is more efficient just to start from scratch with a definition in the form that is needed. I prefer to see some "serious" theorems proved from a definition before importing it.

Here you again use criteria which could be interpreted differently: what is "serious"?
Is it the number of theorems using (directly or indirectly) a definition,
or must it be a theorem from the 100 theorem list (or the list of theorems in Wikipedia) using the definition?
Or is it sufficient that the theorem using the definition is mentioned in a book, maybe labelled as theorem (or proposition etc.)?
As a somewhat crude example of why a definition in isolation may not be optimal, suppose someone defined sine as "$a |- sin A = ..." (not a straw man; this kind of thing has been attempted in the past), i.e. a 1-place function-like "sin A" rather than a stand-alone object "sin". While this can be used to prove a few things, it is very limiting since the symbol "sin" in isolation has no meaning, and we can't prove e.g. "sin : CC --> CC". It is an easy mistake to make if you don't understand the set-theoretical picture but are blindly copying a textbook definition. Here it is pretty obvious, but in general such issues can be subtle and may depend on the application. Also, there are often multiple ways to define something, some of which are easy to start from and others which would require a lot of background development. The best choice can't always be anticipated without knowing what background will be available.

Absolutely correct, but here we have rules for how (new) definitions should look: they usually should be binary relations or functions/operations, with exceptions only in very specially justified cases. I think these are not documented yet (in section "Conventions"), but these conventions for definitions came up as I started with Graph Theory, in a discussion (in this Google group) mainly with Mario. By such conventions, crude definitions as described above can be avoided.
3. Let me comment on the recent proposal to import df-mgm, df-asslaw, df-sgrp. These are my opinions, and correct me if I am wrong.
df-mgm: A magma is nothing but an empty shell with a binary operation. There are no significant theorems that follow from it. All the properties of binary operations that we have needed have already been proven. If additional properties are needed in the future it seems it would be better to state them directly because they will be more useful generally. Just because it appears in Bourbaki or other texts is not to me a justification: it also needs to be useful and be driven by an actual need for it. There are many definitions in textbooks that have turned out to be unnecessary.

We already have several theorems valid for magmas which are proven for monoids, or at least in the context of monoids (because we have nothing else). Proving them for monoids, however, could confuse a reader, because he does not see whether an identity is required for the proof or not. Additionally, the term "magma" is used 20 times in the main part of set.mm, so it has some significance. Furthermore, the definition of a monoid could be based on the definition of a magma (or of a semigroup, see below), as is done for the definition of groups based on the definition of monoids, e.g.

MndALT1 = { g e. Mgm | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( A. x e. b A. y e. b A. z e. b ( ( x p y ) p z ) = ( x p ( y p z ) ) /\ E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) ) }

or, using the definition of a semigroup,

MndALT2 = { g e. SGrp | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) }
For me, these are already serious reasons to have these definitions. To be concrete, however, would it be sufficient to prove the statement in ~ xrs1mnd that RR*s is neither a monoid nor a semigroup (it is "only" a magma) to justify the definition of a magma?
df-asslaw: All this is doing is encapsulating the class of associative operations. The only thing that can be derived from it AFAIK is the associative law. I don't see how it would benefit proofs since additional work is needed to convert to and from it. I don't see how it helps human understanding because it is easier for a human to see "show the associative law" in the description of a theorem than be asked to learn yet another definition (an example of the 3rd rule).

The main reason why I (still want to) introduce these definitions is that they can be used as building blocks for very different structures.
If you look at Wikipedia (e.g. the article "Magma"), you can find a table classifying all "Group-like structures" - the "law"-theorems correspond to the columns of this table.
By these, you could characterize a monoid by three of these laws. In the current definition

df-mnd $a |- Mnd = { g | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( A. x e. b A. y e. b A. z e. b ( ( x p y ) e. b /\ ( ( x p y ) p z ) = ( x p ( y p z ) ) ) /\ E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) ) }

you do not see immediately that it contains closure, associativity and the existence of an identity! This would be different with a definition like
df-mndalt $a |- MndALT = { g | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( p clLaw b /\ p assLaw b /\ p idLaw b ) }
Concerning human understanding, we have (and we keep) the corresponding theorems, e.g.

mndass $p |- ( ( G e. Mnd /\ ( X e. B /\ Y e. B /\ Z e. B ) ) -> ( ( X .+ Y ) .+ Z ) = ( X .+ ( Y .+ Z ) ) ) $=

But in parallel, we could have theorems in the short form, e.g.

mndasslaw $p |- ( G e. MndALT -> .+ assLaw B ) $=

df-sgrp: This is basically df-asslaw converted to an extensible structure. So in essence it duplicates df-asslaw (or vice versa depending on your point of view), violating the first rule (no redundancy). More importantly, AFAIK there is nothing that can be derived from it except the associative law, unlike say df-grp where deep and non-obvious theorems are possible. Yes, it can be used as a "layer" on top of which we can slightly simplify df-mnd and df-rng, but is it really worth having an entire, otherwise useless definitional layer just to avoid having to state "(x+y)+z=x+(y+z)"? The fact that it simplifies two definitions is a slight positive, but I'm still torn about it. It doesn't quite pay for itself in terms of reducing set.mm size, and it increases the burden on the reader who has to learn another definition and drill down another layer to understand df-mnd and df-rng. Personally I'd prefer to see "(x+y)+z=x+(y+z)" directly because it is immediately obvious without having to look at a deeper layer. In the case of the description in df-rng0, it would be more direct and less obscure to state "associative operation" instead of "semigroup operation", since most people (at the level of studying that definition) would know what "associative" means, but fewer would know what "semigroup" means.

In my opinion, this is not a duplication: df-asslaw would be part of df-sgrp, characterizing the operation of the extensible structure. Concerning the reader: yes, for some it is a burden, for others it is a pleasure to have a stringent hierarchy of definitions.
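The idea of composing a monoid from separate "law" building blocks can be illustrated with a small executable sketch. This is plain Python, not Metamath: the names cl_law, ass_law, id_law and is_monoid are merely hypothetical analogues of the proposed clLaw/assLaw/idLaw predicates and of a df-mndalt-style definition, restricted to finite carriers.

```python
from itertools import product

# Hypothetical finite analogues of the proposed "law" predicates.
def cl_law(op, b):
    # closure: x op y stays in the base set b
    return all(op(x, y) in b for x, y in product(b, repeat=2))

def ass_law(op, b):
    # associativity: (x op y) op z = x op (y op z)
    return all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(b, repeat=3))

def id_law(op, b):
    # existence of a two-sided identity element e
    return any(all(op(e, x) == x and op(x, e) == x for x in b) for e in b)

# A df-mndalt-style definition: a monoid is the conjunction of three laws.
def is_monoid(op, b):
    return cl_law(op, b) and ass_law(op, b) and id_law(op, b)

b = {0, 1, 2}
print(is_monoid(lambda x, y: (x + y) % 3, b))  # True: (Z/3Z, +) is a monoid
print(is_monoid(lambda x, y: (x - y) % 3, b))  # False: subtraction mod 3 is not associative
```

The point of the sketch is only that the conjunction of three named laws is easier to read than one long quantified formula; it takes no position on whether the extensible-structure form can be factored the same way.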
Finally, I think there will be a lot of serious theorems which could be proven (there is a complete "Semigroup Theory"). Waiting until such theorems are formally proven within set.mm is not necessary in this case, because basic theorems can already be provided - and they cannot be wrong, because they are also valid for monoids as currently defined.
I can't say that these will never be imported. Possibly something will drive a need for them, maybe something related to category theory where even a magma might be a useful object. But I would want to see the definitions be driven by that need when it arises; we don't even know yet whether the present extensible structure form can be adapted for that purpose. It is fine to store them in a mathbox indefinitely, but I'd like to see a true need driving their use before importing.
4. As for adding an exhaustive list of all possible definitions one can find in Bourbaki or whatever, as someone suggested, I don't think something like that belongs in set.mm, for all the reasons above. There are other resources that already list these (Wikipedia, Planetmath, nLab). Their precise formalization will depend on the context in which they are needed in set.mm, which is unknown until they are used. In isolation (perhaps with a couple of property theorems that basically check syntax) there is no guarantee that they are correct, even in set.mm. To restate, I think the philosophy should be that definitions should be added based on need, not as busywork. Adding a definition when needed is one of the smallest parts of building a series of significant proofs. It doesn't have to be done in advance, and I don't think it is generally productive or useful to do so.

This is also something we can agree on immediately: Definitions without usage (i.e. without theorems, or only with theorems for syntax checks) are not useful (within set.mm) and should not be contained in the main body of set.mm.

In summary, we agree in most cases, and the differences are only in the interpretation of the rules. Regarding the "law" definitions and the definitions for magmas, semigroups and (non-unital) rings, I still would like to have them in the main body of set.mm, making it more concise and consistent (especially if you look at the theorems with labels containing "rng", which are valid for unital rings only - this might be confusing). From my point of view, the benefits for a user are greater than the disadvantages.
Hi all,
Here is my take on the "goals and philosophy" for set.mm:
Overall, I'm also seeing the end goal as set.mm covering "all of mathematics", which means I would not exclude any theorem.
As for the audience, I think the goal shall be that any
interested person, even with limited background, can grasp our
definitions. So, they shall be as simple as possible, and rely on
basic concepts whenever possible. That does not prevent us from
providing alternative definitions or equivalents relying on more
advanced or less-used concepts.
About which theorems shall enter the main part of set.mm,
I think Norm and Mario's philosophy of "based on need" is good. To
give a concrete example, when I wanted to prove general things
about probability theory, with the help of people on the mailing
list, I found out I needed to define the Bochner integral, which
required defining the canonical homomorphism from the real
numbers `RRHom`, which in turn required extension by continuity
`CnExt` and uniform structures `UnifSt` and `metUnif`, in turn
requiring pseudometrics `PsMet`.
So, I ended up adding pseudo-metrics to set.mm, not in order to
be exhaustive, but because I had a practical end-goal for that. I
have not yet come back to the Bochner integral, but there are
several interesting theorems about extension by continuity I'm
happy I added, and which will probably be useful for other things.
Just like for groups, there are other generalizations of metrics than pseudometrics (quasi-, meta-, semi-, pre-, etc.), I introduced only pseudometrics, and for a good reason. When introducing the concept of pseudometric, I did not change the definition of metric so that metric is seen as a specialization of pseudometrics, but kept the generic definition, and just added the generalization theorem ~xmetpsmet.
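The distinction recorded by ~xmetpsmet (every metric is a pseudometric, but not conversely) can be made concrete with a small finite check. This is an informal Python sketch under my own naming, not the set.mm formalization: a pseudometric drops only the requirement that distinct points have nonzero distance.

```python
from itertools import product

def is_pseudometric(d, X):
    # d(x,x) = 0, symmetry, nonnegativity, triangle inequality
    return (all(d(x, x) == 0 for x in X)
            and all(d(x, y) == d(y, x) >= 0 for x, y in product(X, repeat=2))
            and all(d(x, z) <= d(x, y) + d(y, z)
                    for x, y, z in product(X, repeat=3)))

def is_metric(d, X):
    # a metric additionally separates points: d(x,y) = 0 implies x = y
    return is_pseudometric(d, X) and all(
        x == y for x, y in product(X, repeat=2) if d(x, y) == 0)

X = [(0, 0), (0, 1), (1, 0)]
d_first = lambda p, q: abs(p[0] - q[0])  # distance via first coordinate only
print(is_pseudometric(d_first, X))  # True
print(is_metric(d_first, X))        # False: (0,0) and (0,1) are at distance 0
```

Here d_first is the classic counterexample: it satisfies all pseudometric axioms but identifies distinct points, so keeping the generic metric definition and adding the generalization theorem (rather than redefining metrics in terms of pseudometrics) matches how the two notions actually relate.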
More generally about new definitions, I also think
interesting definitions are those which summarize complex concepts
into a simpler form. In the example given by Norm, ` A C_ B `
stands for ` ( A i^i B ) = A ` (~df-ss) or ` A. x ( x e. A -> x
e. B ) ` (~dfss2). The form ` A C_ B ` is shorter and more
convenient than both, and is widely used. In ` A > B <->
B < A `, the new form "greater than" is exactly as long and as
convenient as ` B < A `, so I see no added value in using
it.
It may still be interesting to define it, just to show
what it would look like if we had to define this concept in
set.mm, but I believe we shall explain in the comments why we
don't use it in set.mm, and refer to the "less than" definition.
Another example which comes to my mind is "countable". I have a
few theorems about countable sets in my
mathbox. We could imagine defining the class of countable
sets, like `Count = { x | x ~<_ _om }`. However, stating that a
set is countable using `A e. Count` would be just as long and
convenient as stating `A ~<_ _om`. I think this is another good
example where we don't need to introduce a new definition,
although it may not be obvious at first (it was not to me) that
`A ~<_ _om` means "A is countable", because none of the
concepts there (the dominance relation, the natural numbers) are
simple concepts.
Concerning the structure of set.mm: while I like the idea of adding definitions in a specific section, I'm afraid that this would break its overall structure. We could consider another option, where "other concepts" not developed in set.mm would still be placed in the corresponding chapter (like putting magmas in the algebra section, just defined, not developed).
BR,
_
Thierry
semigroup is more frequent.
For the last week I have been "actively working" on providing "applications" of the definitions and related theorems for magmas and semigroups (I've spent almost all my spare time on it). Although the results may be trivial, they are "serious", at least in my opinion.

I've provided the results in my latest two PRs (they can be found in my mathbox). Here they are (in brief):
- There is a small magma (with a base set containing two elements) which is not a semigroup: ~mgm2nsgrp
- There is a small semigroup (with a base set containing two elements) which is not a monoid: ~sgrp2nmnd
- The previous results prove the proper inclusion chain `Mnd C. SGrp C. Mgm`: ~mndsssgrp and ~sgrpssmgm
- The (additive group of the) extended reals is a magma, but not a semigroup. Formalization of the statements in the comments of ~xrsmcmn and ~xrs1mnd: ~xrsmgmdifsgrp
- A stronger version of ~ gsumpropd if at least one of the involved structures is a magma: see ~ gsumpropd2
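For readers unfamiliar with such counterexamples, the existence claims behind ~mgm2nsgrp and ~sgrp2nmnd can be confirmed by brute force over all 16 binary operations on a two-element set. This is only an illustrative Python sketch with made-up helper names, not the set.mm proofs:

```python
from itertools import product

def is_associative(op, base):
    # op is a dict mapping pairs (x, y) to elements of base
    return all(op[(op[(x, y)], z)] == op[(x, op[(y, z)])]
               for x, y, z in product(base, repeat=3))

def has_identity(op, base):
    return any(all(op[(e, x)] == x and op[(x, e)] == x for x in base)
               for e in base)

base = (0, 1)
pairs = list(product(base, repeat=2))

# Every total operation base x base -> base is (the operation of) a magma;
# there are 2^4 = 16 of them on a two-element base set.
tables = [dict(zip(pairs, values))
          for values in product(base, repeat=len(pairs))]

nonassoc = [t for t in tables if not is_associative(t, base)]
sgrp_no_id = [t for t in tables
              if is_associative(t, base) and not has_identity(t, base)]

print(len(nonassoc) > 0)    # True: a 2-element magma that is no semigroup
print(len(sgrp_no_id) > 0)  # True: a 2-element semigroup that is no monoid
```

For instance, the constant operation x op y = 0 lands in sgrp_no_id, and the "implication" table lands in nonassoc; these finite witnesses are the informal content of the two counterexample theorems above.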
On January 31, 2020 3:39:46 PM PST, Norman Megill wrote:
>I would like to hear other opinions on whether "magma" and "semigroup"
>should become part of the main set.mm body.
I don't have a strong opinion one way or the other. I suspect Norm is right that these are a whole lot more obscure than groups and rings and fields and the other usual suspects. On the other hand, I guess things are arranged so that a proof about groups doesn't really need to know about magmas and semigroups?
That might point us to a "harmless even if not strictly necessary" stance.
Regarding the possible introduction in the main part of semigroups and magmas:
When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy. The abundance of parentheses and conjunctions makes it hard to parse.
Did you notice that the clause stating closedness is actually quantified over an extra z? (This is of course innocuous, but still strange.) I find it much easier to look successively at http://us2.metamath.org/mpeuni/df-mgmALT.html and http://us2.metamath.org/mpeuni/df-sgrp.html and [the definition of Monoid from Semigroup -- Alexander: can you add it?].

So, to me, there is already a pedagogical benefit, even if no theorems are proved. I find these piecemeal definitions easier to learn. Once you grasp the final concept, you can forget about the intermediate steps, i.e., you do not have to remember what a magma or a semigroup is if you don't want to; they were only an aid in the process of getting to the definition of a group.
Regarding Norm's remark on Bourbaki: Indeed, their set theory is known to be awkward. None of its members were logicians or set theorists, and they just wanted to "get this done" before moving on to what they considered more interesting math. By the way, you actually use a lot of Bourbaki's terminology and notation in set.mm: the symbol for the empty set, the blackboard bold typeface, the terms "ball" and "sphere" in metric spaces, the terms injection/surjection/bijection, etc. Their new terms were carefully chosen, as much as possible simple words from everyday life: "magma" (because volcanic magma looks structureless), "ball", even when balls are not round (the previous term was some impossible jargon), in/sur/bijection (the previous terms were similar jargon), and also terms like "barrel" (a barrel is closed, convex, balanced, absorbing: this makes as much sense in everyday life as in formal math), "bornivorous", etc.
On Friday, January 31, 2020 at 8:23:08 PM UTC-5, Benoit wrote:

Regarding the possible introduction in the main part of semigroups and magmas:
When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy. The abundance of parentheses and conjunctions makes it hard to parse.
If you already know about magmas, semigroups, and monoids, of course you'd feel that way. Look at it from the perspective of someone who's never heard of these and really doesn't care to learn, because all they came for was groups. Just the names of these things are somewhat intimidating and have little or nothing to do with what they really are.
As I wrote earlier:
"The user knows from the description and from the property theorems that are referenced. But even without that, I do see all of these properties pretty immediately - they're right there in the definition: ( x p y ) e. b for closure, ( ( x p y ) p z ) = ( x p ( y p z ) ) for associativity, and E. e ... ( ( e p x ) = x /\ ( x p e ) = x ) for left and right identity."
I acknowledge that Bourbaki has had a lot of influence, and I have no problem using its notation and terminology - if it has survived and is still used in the mainstream literature after 80+ years. What I think is not a good idea is the blind use of Bourbaki as the "bible" of mathematics.
On Saturday, February 1, 2020 at 3:37:57 AM UTC+1, Norman Megill wrote:

On Friday, January 31, 2020 at 8:23:08 PM UTC-5, Benoit wrote:

Regarding the possible introduction in the main part of semigroups and magmas:
When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy. The abundance of parentheses and conjunctions makes it hard to parse.
If you already know about magmas, semigroups, and monoids, of course you'd feel that way. Look at it from the perspective of someone who's never heard of these and really doesn't care to learn, because all they came for was groups. Just the names of these things are somewhat intimidating and have little or nothing to do with what they really are.

I think we will not agree on this. I prefer to face difficulties one by one, instead of all three at the same time. I personally still find the long chain of characters in http://us2.metamath.org/mpeuni/df-mnd.html intimidating. It's always difficult to look from the perspective of someone who's completely new to it. I may be underestimating the difficulty of facing several notions, and you may be underestimating the difficulty of reading such long expressions of formal math.
(I'm just saying, if you want to learn group theory from Metamath,
there's a lot of stuff you're going to be fighting, and standard
monoids, semigroups and magmas are hardly going to be your breaking
point.)
I might agree to it anyway just to get the Bourbaki people off my back. :)
Your counterexamples showing non-membership are interesting. I appreciate your work doing this, and certainly it would accompany the magma and semigroup structures should we decide to import them. However, showing the proper subset relationship still doesn't in itself show us that we have a true need for these structures.
For example, if we defined EE as the set of even numbers, we could have a theorem showing it is a proper subset of ZZ, but such a theorem isn't enough in itself to justify introducing it. We would want a sufficient number of theorems about even numbers whose statements would become significantly shorter and clearer. That hasn't happened and probably won't. Just because the terminology "even number" appears in the literature doesn't mean we have to have a special symbol for it; instead, "2 || N" suffices.
Regarding Bourbaki: A long time ago, before Metamath, I eagerly and naively started off trying to learn the Theory of Sets before knowing any formal logic or set theory. Its bizarre "assemblies" (tau notation) are extremely inefficient and seem to ignore all the developments in FOL and axiomatic set theory that were already essentially mature and standard in the rest of the world. With much effort and the assistance of a computer program I wrote, I worked out the thousands of symbols needed to define the empty set. Then I gave up when I realized the number 1 would be unfeasible (it turns out to require trillions of symbols). Even with intermediate definitions to shorten things, I was unable to gain an intuitive understanding of what was going on.
After I learned "real" set theory, I became somewhat less than enamored with Bourbaki, so I mention this experience in the spirit of full disclosure. :) With that said, the feeling I get is that Bourbaki sometimes does its own thing that may or may not appear in mainstream mathematical literature. My personal preference is not to use Bourbaki as a primary reference for terminology and notation, unless it is also standard in modern textbooks.
Two relatively modern and well-regarded textbooks are Lang's "Undergraduate algebra" (~400 pp) and his graduate-level "Algebra" (900 pp). The first book starts off defining "group", and the second starts with "monoid". Neither book mentions "magma" or "semigroup". Even "monoid" is apparently a topic too specialized for Lang's undergraduate book, which doesn't mention it at all.
Part I The Language of Algebra
1 Glossary of Basic Algebraic Structures
1.1 Binary Operations
1.2 Semigroups and Monoids
1.2.1 Semigroups
1.2.2 Monoids
1.3 Groups
1.4 Rings and Fields
1.4.1 Rings
…
I want to comment on the following post in more detail:
@benoit and @alexander - Thank you for your comments. Go ahead and import semigroup (assuming no one else objects). Hopefully, though, I've communicated my point that we don't want this to lead to a flurry of shallow definitions that people must learn, just to address a perceived difficulty in understanding the df-* statements (which are technical definitions not always meant to be read directly by humans).
--
You received this message because you are subscribed to the Google Groups "Metamath" group.
To unsubscribe from this group and stop receiving emails from it, send an email to metamath+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/metamath/751ddf1b-388e-49ce-a32c-ec09d8fdaecd%40googlegroups.com.
If somebody could find a copy of "the File" and make it available I think it would be good for the history of mathematics.