Philosophy and goals for set.mm


Norman Megill

Jan 23, 2020, 11:58:14 PM
to meta...@googlegroups.com
Tierry, Alexander, and Benoit have asked for clarification of the goals of set.mm.  Here are some of my opinions.  I am moving the discussion in https://github.com/metamath/set.mm/issues/1431 to here for wider readership.

1. There is no "goal" per se for set.mm.  People work on what they are interested in.  Typically work starts off in mathboxes, where people pretty much have freedom to do whatever they want.  Two situations that typically will result in the work being moved to the main part of set.mm are (1) more than one person wants to use or develop the work and (2) the work is an mm100 theorem.  There are other factors, though, that I'll discuss.  And in any case it is a judgment call by me and others that may not be perfect.  Just because we choose not to import from a mathbox right away doesn't mean the value of the work is less; it just means that in our possibly flawed judgment it either doesn't quite fit in with the set.mm philosophy or it hasn't found its place there yet.

So what I will primarily discuss is what I see as the "philosophy" that has been generally used so far in set.mm.  Hopefully that can guide the goals that people set for themselves as they contribute to set.mm.


2. A very important thing to me is the philosophy of striving for simplicity wherever possible.  This particularly impacts what definitions we choose.

"Rules" for introducing new definitions


Definitions are at the heart of achieving simplicity.  While developing set.mm, two rules of thumb that evolved in my own work were (1) there shouldn't be two definitions that mean almost the same thing and (2) definitions shouldn't be introduced unless there is an actual need in the work at hand.  Maybe I should add (3), don't formally define something that doesn't help proofs and can be stated simply in English in a theorem's comment.

Concerning the 1st rule, very early on, when I was starting to work with ordinals, I noticed that many books would redefine "e." (epsilon) as "<" and "C_" (subset) as "<_" (less than or equal) since those were the ordering roles they played in von Neumann ordinal theory.  I figured since books did it, it must be important, so I also did that in the early set.mm.  (I may have even used those symbols because reals were a distant vision at that point.)  Books casually interchanged them in theorems and proofs, but formalization meant I might have two or more versions of many theorems.  It was a constant nuisance to have to convert back and forth inside of proofs, making proofs longer.  I eventually decided to use just "e." and "C_", and it made formalizations much more pleasant with fewer theorems and shorter proofs, sometimes providing more general theorems that could be used outside of ordinal theory.  I never missed having the ordering symbols to remind me of their roles.  I liked the feeling of being closer to the underlying set theory rather than being an abstraction layer away from it.  If the "less than" role of "e." was important for human understanding in a particular case, I could simply use the English words "less than" in the comment (3rd rule) while continuing to use "e." in the $p statement.
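To make this concrete, here is the shape that ordinal trichotomy takes when "e." plays the "less than" role directly (a paraphrase from memory, not the verbatim set.mm statement):

  |- ( ( A e. On /\ B e. On ) -> ( A e. B \/ A = B \/ B e. A ) )

No separate "<" symbol is needed; membership itself is the ordering.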

As for the 2nd rule, there are often different choices that can be made in the details of how something is defined.  It is sometimes hard or impossible to anticipate what choice is optimal until we actually start using the definition for serious work.  There is also the question of whether the definition is even necessary.  More than once I've found that a definition I thought might be needed (for example was used in the literature proof) actually wasn't.  Sometimes, like the "e." vs.  "<" case above, the definition was even a hindrance and made proofs longer.  Other times an intermediate definition of my own invention that isn't in the literature turned out to be advantageous for shorter proofs.  It can be best to let the work drive the need for the definition and its precise details.

Another example of a literature definition that we purposely don't use, and one that illustrates the 3rd rule, is Takeuti/Zaring's Russell class with symbol "Ru", which they formally define as "{ x | x e/ x }".  The only theorem that uses it (in set.mm and in T/Z) is ~ ru.  For the purposes of set.mm, it is wasteful and pointless since we can just describe the Russell class verbally in the comment of ~ ru.
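For reference, the statement of ~ ru (quoted from memory, so check the database for the exact text) needs no "Ru" symbol at all:

  ru $p |- { x | x e/ x } e/ _V $.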

A problem that has arisen in the past is where a person adds a set of definitions from the literature, proves some simple property theorems, and then is disappointed that the work isn't imported into the main set.mm.  Sometimes we do import it and sometimes we don't, but the main issue is whether the 2nd rule above is being followed.  Without knowing exactly how the definition is going to be applied, very often it won't be optimal and will have to be adjusted, and sometimes it is more efficient just to start from scratch with a definition in the form that is needed.  I prefer to see some "serious" theorems proved from a definition before importing it.

As a somewhat crude example of why a definition in isolation may not be optimal, suppose someone defined sine as "$a |- sin A = ..." (not a straw man; this kind of thing has been attempted in the past), i.e. a 1-place function-like "sin A" rather than a stand-alone object "sin".  While this can be used to prove a few things, it is very limiting since the symbol "sin" in isolation has no meaning, and we can't prove e.g. "sin : CC --> CC".  It is an easy mistake to make if you don't understand the set-theoretical picture but are blindly copying a textbook definition.  Here it is pretty obvious, but in general such issues can be subtle and may depend on the application.  Also, there are often multiple ways to define something, some of which are easy to start from and others of which would require a lot of background development.  The best choice can't always be anticipated without knowing what background will be available.
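To make the contrast concrete, here is a hedged sketch with hypothetical labels and elided right-hand sides (not actual set.mm text):

  $( Function-like form: the token "sin" has no meaning in isolation. $)
  df-sin-bad $a |- sin A = ... $.

  $( Stand-alone object form: "sin" is a class in its own right, so
     statements about the object itself, like "sin : CC --> CC",
     become provable. $)
  df-sin-good $a |- sin = ( x e. CC |-> ... ) $.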


3. Let me comment on the recent proposal to import df-mgm, df-asslaw, df-sgrp.  These are my opinions, and correct me if I am wrong.

df-mgm:  A magma is nothing but an empty shell with a binary operation.  There are no significant theorems that follow from it.  All the properties of binary operations that we have needed have already been proven.  If additional properties are needed in the future it seems it would be better to state them directly because they will be more useful generally.  Just because it appears in Bourbaki or other texts is not to me a justification:  it also needs to be useful and be driven by an actual need for it.  There are many definitions in textbooks that have turned out to be unnecessary.

df-asslaw:  All this is doing is encapsulating the class of associative operations.  The only thing that can be derived from it AFAIK is the associative law.  I don't see how it would benefit proofs since additional work is needed to convert to and from it.  I don't see how it helps human understanding because it is easier for a human to see "show the associative law" in the description of a theorem than be asked to learn yet another definition (an example of the 3rd rule).

df-sgrp:  This is basically df-asslaw converted to an extensible structure.  So in essence it duplicates df-asslaw (or vice versa depending on your point of view), violating the first rule (no redundancy).  More importantly, AFAIK there is nothing that can be derived from it except the associative law, unlike say df-grp where deep and non-obvious theorems are possible.  Yes, it can be used as a "layer" on top of which we can slightly simplify df-mnd and df-rng, but is it really worth having an entire, otherwise useless definitional layer just to avoid having to state "(x+y)+z=x+(y+z)"?  The fact that it simplifies two definitions is a slight positive, but I'm still torn about it.  It doesn't quite pay for itself in terms of reducing set.mm size, and it increases the burden on the reader, who has to learn another definition and drill down another layer to understand df-mnd and df-rng.  Personally I'd prefer to see "(x+y)+z=x+(y+z)" directly because it is immediately obvious without having to look at a deeper layer.  In the case of the description in df-rng0, it would be more direct and less obscure to state "associative operation" instead of "semigroup operation", since most people (at the level of studying that definition) would know what "associative" means, but fewer would know what "semigroup" means.
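For context, the proposed definition has roughly the following shape (my paraphrase of the mathbox version, not the literal text), which shows that it adds little beyond the associativity condition itself:

  Sgrp = { g e. Mgm | A. x e. b A. y e. b A. z e. b
            ( ( x o y ) o z ) = ( x o ( y o z ) ) }
  where b = ( Base ` g ) and o = ( +g ` g )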

I can't say that these will never be imported.  Possibly something will drive a need for them, maybe something related to category theory where even a magma might be a useful object.  But I would want to see the definitions be driven by that need when it arises; we don't even know yet whether the present extensible structure form can be adapted for that purpose.  It is fine to store them in a mathbox indefinitely, but I'd like to see a true need driving their use before importing.


4. As for adding an exhaustive list of all possible definitions one can find in Bourbaki or whatever, as someone suggested, I don't think something like that belongs in set.mm, for all the reasons above.  There are other resources that already list these (Wikipedia, Planetmath, nLab).  Their precise formalization will depend on the context in which they are needed in set.mm, which is unknown until they are used.  In isolation (perhaps with a couple of property theorems that basically check syntax) there is no guarantee that they are correct, even in set.mm.  To restate, I think the philosophy should be that definitions should be added based on need, not as busywork.  Adding a definition when needed is one of the smallest parts of building a series of significant proofs.  It doesn't have to be done in advance, and I don't think it is generally productive or useful to do so.


5. Finally, let me touch on the issue of > (greater than).

There are many non-symmetrical relations that use reversed symbols to swap arguments: "B contains A" with reversed epsilon, "B includes A" with reversed subset symbol, "Q if P" with reversed arrow, etc.  I've seen all of these in the literature.  If we really feel the reader will encounter them and expect set.mm to explain their meaning (which is likely explained in their textbook anyway), we could mention informally the reversed symbol usage when we introduce the forward symbol.  But we don't add $a's for them because we will never use them.  That is because either we would have to add a huge number of theorems containing all possible forward and reversed combinations of the symbols, or we would have to constantly convert between them inside of proofs.  Both of those are contrary to a philosophy of simplicity.

IMO the same should be done with >, mentioning what it means in the description for <.  Introducing a formal $a statement for > that will never be used is unnecessary and wasteful of resources.  If we want to be excessively pedantic, we could mention in the description for < that the formal definition would be "|- > = `' <", although that seems less intuitive than simply saying that "in theorem descriptions, we occasionally use the terminology 'A is greater than B' to mean 'B is less than A'."  A grade school student can (and does) easily understand that.
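Spelled out, the axiom being declined here would be a single converse-relation statement (hypothetical label; "`'" is the converse operation):

  $( Hypothetical definition of 'greater than'; deliberately NOT in set.mm. $)
  df-gt $a |- > = `' < $.

and every theorem stated with < would then invite a mirrored twin stated with >.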

Basically, ">" violates all 3 "rules" for new definitions I proposed above.

Norm

OlivierBinda

Jan 24, 2020, 3:55:49 AM
to meta...@googlegroups.com

I'm just a newbie (indirect) metamathematician, so my opinion (from the point of view of a Metamath tool writer) might not be worth much:

- Simplicity is great (one of the best features of Metamath)
- Deduplication is awesome (it reduces the (memory and time) cost of writing proofs)

Recently there has been a debate about having underscores in ids. My (tool writer) opinion is that this debate should not happen:

In software development, there is a generally held opinion that ids should be opaque tokens.
I fear that you are mixing two roles:
- ids that should uniquely determine a theorem forever
- (indexed) fields that help humans find theorems in the set.mm database

This happens because Metamath was designed to be easy to work with from a basic text editor with search/replace capabilities.

This worked great (Metamath is successful after all) but this is probably not the optimal setup for Metamath:

An illustration: there are plenty of languages that tried to be the next Java (Groovy, Scala, C#, Kotlin),
but the one that is really succeeding right now is Kotlin, because its designers realized that
- they had to design a better language than Java
- they had to write great tools alongside it to make it easy/enticing for developers to adopt/migrate to their new platform.

So far, all Android developers (we are talking millions of developers) have switched from Java to Kotlin. And Kotlin is still aiming to take over other traditional spaces of software development.

My point and my opinion is:
Metamath is really awesome, but if we developed tooling for it, alongside it, it would remove some constraints it has and maybe become even better.

I think that people should not have to work on Metamath in text editors;
computers should provide the help they need when it comes to displaying, searching for theorems, etc.

For example, why isn't there a simple text-search tool (that could interact with mmj2 or other software) that allows people to input some string (say "( a + b ) =" or "( a in CC )") and that returns a theorem id?

Theorems could be hashed and indexed in many ways, with customized settings, so that people who would rather have a_in_CC than aInCC could be happy.

People should NOT have to learn and remember theorem ids; this is a useless skill in real life,
and it prevents more people from working with Metamath.

Also, relaxing the constraint that Metamath should be usable in text editors or be nice to human eyes might allow us to
adopt a syntax aimed at computers: one that ensures coherence, makes parsing easier, avoids pitfalls, allows definitions that are not axioms...

It would be the job of the tools to display the results in a nice manner and to make working with Metamath pleasant for humans...

Well, just a rant :)

As a side note, I managed to use antlr-kotlin to port the ANTLR Metamath parser I was using to Kotlin multiplatform.
It was the only roadblock preventing releases of Mephistolus for the browser/android/ios/linux/windows/mac...
I'm going to work on the browser/JavaScript target now.

Best regards,
Olivier


fl

Jan 24, 2020, 6:05:40 AM
to Metamath
  Just because it appears in Bourbaki or other texts is not to me a justification: 


We never say that. Never. At no moment.


We say something else.

-- 
FL

fl

Jan 24, 2020, 6:08:16 AM
to Metamath

4. As for adding an exhaustive list of all possible definitions one can find in Bourbaki or whatever, as someone suggested,

We never say that. Never. At no moment.

-- 
FL 

vvs

Jan 24, 2020, 7:40:31 AM
to Metamath
"Rules" for introducing new definitions

Definitions are at the heart of achieving simplicity.  While developing set.mm, two rules of thumb that evolved in my own work were (1) there shouldn't be two definitions that mean almost the same thing and (2) definitions shouldn't be introduced unless there is an actual need in the work at hand.  Maybe I should add (3), don't formally define something that doesn't help proofs and can be stated simply in English in a theorem's comment.

Definitions are opaque in Metamath - that's a restriction which was part of the language from the beginning. And that prevents automatic unfolding of definitions using definitional equivalence. It's possible to do just that using conventions and external tools, but we don't have it at the moment.

Mario could probably express a more authoritative opinion, especially in light of his work on MM0.

N.B. Yes, such technical details affect philosophical goals after all and must be taken into account.

Mario Carneiro

Jan 24, 2020, 8:26:01 AM
to metamath
I'm actually deliberately not answering the header question, as that's Norm's place and not mine. But I can give my take on definitions and the way metamath has historically handled them (following Norm's vision outlined here).

As you may know, MM0 departs from metamath in its handling of definitions. While the MM definition mechanism is beautifully simple (and I think that MM0 has missed the simplicity mark the most when it comes to this aspect), it's not really addressing the problem when it declares that definitions are "just axioms", because they aren't just axioms, and in particular every new axiom is another potential unsoundness that requires its own review. One can argue that the definition checker solves this problem, but in some ways it's not that much better, because the mmj2 definition checker, at least, massively increases the trust base: mmj2 is not a simple program, and the definition checker leans on a lot of mmj2 code like the grammar parsing and automatic proof generation mechanisms. This is a part of the core language, and it does not deserve to be treated as a second class citizen.

Baking definitions into the kernel also means that it is more obvious how to do automatic definition unfolding, but this is a double-edged sword, as it makes one definition preferred over others. Most MM definitions you wouldn't actually want to unfold; there is pretty much always some "definitional theorem" that should be used in preference to the actual definition. It would be nice if some automation generated this theorem.

But this isn't what Norm is talking about. I agree with the overall concept of parsimony of definitions, and I practice it whenever I can, although I have to admit that definitions are much lower cost when you have definition unfolding for "free". Achieving something similar in metamath requires a big step up in either annotations or annotation inference for things like definitional theorems and general context management. I'm building something similar to this as a tactic mode for MM0, but I think the ideas may be back portable to MM.

But even if definition making was easy, there would be the responsibility to "complete the graph" of relations between defined concepts. I think it is irresponsible to create a definition and not relate it to everything else. That is my metric for proper cohesion of the library, ensuring that you can painlessly work with multiple parts of the library at once. For example, if you define a semigroup, then you had better prove that a monoid is a semigroup, and you should also migrate any monoid theorems that are actually about semigroups to be about semigroups. (This came up in the part 18 discussion: there was a theorem about unital magmas, and we have that theorem but the assumptions have been "rounded up" to monoid, because that's the weakest structure that is actually in set.mm that satisfies the conditions. If we get unital magmas then this theorem will be changed appropriately, but it's not an argument for them on its own.)
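(Concretely, "completing the graph" for the semigroup example would mean proving inclusion theorems of roughly this shape, paraphrased rather than quoted from any database:

  |- ( G e. Mnd -> G e. Sgrp )

together with migrating each monoid theorem whose proof only uses associativity.)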

I am a *huge* fan of opaque definitions though, and I need to bake it a bit more explicitly into MM1. Dependent type theory has strengthened my view on this by showing all the bad things that can happen when you can't hide anything. Luckily for MM0, while there is definition unfolding, it's not needed for any typing, and there is an explicit proof term for it, so it's really just a matter of selectively turning off the MM1 feature that generates these proofs.


fl

Jan 24, 2020, 9:30:19 AM
to Metamath


 Two situations that typically will result in the work being moved to the main part of set.mm are (1) more than one person wants to use or develop the work and


And concerning your "survival of the fittest" ideology, you have never respected your own rules: Carneiro's work has not been imported that way, proofs related to Tarskian geometry have not been imported that way, and the proofs related to the mm100 have not been imported that way.

90% of the main part has not been imported that way.

-- 
FL

fl

Jan 24, 2020, 9:36:33 AM
to Metamath

Even the extensible structures have not been imported that way.
 
-- 
FL

Norman Megill

Jan 24, 2020, 10:29:18 PM
to Metamath
My essay yesterday represents the opinion I formed after looking back and reflecting on some things I felt were accomplished successfully and others not so successfully with set.mm.  I try to keep an open mind, and my opinion can change if I can be convinced another way is better.

Careful choice of definitions seems essential to the success of set.mm and its philosophy, and that is what I focused on.  My purpose is not to dictate mandatory rules but to open up a discussion of what the philosophy should be, particularly with regard to definitions, using my suggested rules as a starting point and a possible guide.

I didn't intend to diminish the value of anyone's work with the specific examples I brought up.  If I was perceived as too critical I apologize for that.  I simply used them as convenient illustrations for the discussion.  And it is possible I am wrong.  I appreciate the contributions made by everyone and recognize that it has involved serious, hard work with good intentions.  In any case, the definitional "rules" didn't even exist before I wrote them down last night, so it's hardly fair to have expected anyone to have followed them.


----

As for goals for set.mm (as opposed to "philosophy"), coming up with a long-term master plan of some sort is difficult because people's interests vary widely.  I would fear that vague things like "build an algebraic hierarchy" would end up as a voluminous collection of definitions without a lot of substance.  Instead, I would prefer to see some significant theorems (such as named theorems in the literature) as goals, which for example might drive the creation of an appropriate algebraic hierarchy when it becomes essential to have it.

We do have the MM100 list if we want some concrete targets, and anyone can focus on those (and make David, and me, very happy...).  We might as well call those official goals.

Curiously, if I type "list of named theorems" in Google, Freek's list (the MM100 list) appears at the top in a big box.  So I guess Google thinks it's important. :)  It certainly gives Metamath a lot of visibility, being in 3rd place.  Anyway, the next link down is to the page https://en.wikipedia.org/wiki/List_of_theorems with a huge list of a thousand or more named theorems.  So there are plenty to choose from.

Personally, if a magic genie were to offer me a set.mm proof of any theorem I chose, I would probably pick FLT.  (And I might as well exploit the genie's magic and ask for the shortest possible proof.)  I don't know if it is realistic to consider FLT as a possible "flyspeck" (Kepler conjecture) type project, but the huge amount of background material that would have to be developed would be amazing and probably useful for many other things long before the final goal is reached (if it ever is).

What would other people like to see (magic genie or not)?

Norm




Giovanni Mascellani

Jan 25, 2020, 3:38:01 AM
to meta...@googlegroups.com
Hi,

I have not been contributing anything for quite some time, so far be it
from me to dictate what Metamath's goals should be, but since one of the
reasons I like this place is that experienced people always seem
supportive and interested in beginners' ideas, I'll share my thoughts.

On 25/01/20 04:29, Norman Megill wrote:
> We do have the MM100 list if we want some concrete targets, and anyone
> can focus on those (and make David, and me, very happy...).  We might as
> well call those official goals.

To me MM100 is an interesting goal, but definitely an intermediate
one. The (admittedly, very ambitious) goal that I have in mind for
Metamath (or MM0, or whatever incarnation of it we will have in the
future) is that virtually every published mathematical result is
available in set.mm or some reverse dependency of it. I envision that if
you search for the comment "(Based on ArXiv paper: xxx.yyy)" you find
all the main theorems from that ArXiv paper.

(a requisite for this to happen, of course, is that tools for
formalizing theorems become much much more powerful in terms of reducing
the De Bruijn constant; interesting results are already appearing on
this list, but we have a long way to go)

> Personally, if a magic genie were to offer me a set.mm proof of any
> theorem I chose, I would probably pick FLT.  (And I might as well
> exploit the genie's magic and ask for the shortest possible proof.)  I
> don't know if it is realistic to consider FLT as a possible "flyspeck"
> (Kepler conjecture) type project, but the huge amount of background
> material that would have to be developed would be amazing and probably
> useful for many other things long before the final goal is reached (if
> it ever is).

My usual approach with wish-bounded genies is instead to ask for wishes
that are as complex as possible, so that I can pack as much wishable
material in them as possible. For example, I would rather ask for a
proof of FLT that is much much longer than it needs to be, hoping that
at some point there is also a proof of Poincaré's conjecture (possibly
discarded with a dummylink, unless Poincaré's conjecture is actually a
useful step in proving FLT, which would make it even more interesting).

Next genie I find, I'll tell you if it worked.

Giovanni.
--
Giovanni Mascellani <g.masc...@gmail.com>
Postdoc researcher - Université Libre de Bruxelles


vvs

Jan 25, 2020, 8:51:03 AM
to Metamath
There is still some confusion here. I'd separate Metamath as a language (implementations, proof assistants, UI) vs. formalization (database, taxonomy, conventions). Clearly their philosophy and goals should be different.

Jon P

Jan 25, 2020, 9:04:07 AM
to Metamath
I also think the future of mathematics lies in formal proofs. If there were a database of all known proofs, creating new proofs would be much easier and more reliable. It would also reduce duplication of effort, which must happen a lot.

One thing I think is that there is probably a network effect issue. Given there are many formal proof systems, my guess is that one will win out and become the standard, and then anyone who is choosing which to use will default to that one because it is the one everyone uses.

For that reason I think things like David's new tutorial video are very powerful for the future of the system. Another is the idea of having a web-based editor that is easy to access.

David A. Wheeler

Jan 25, 2020, 9:12:17 AM
to metamath, Metamath
Giovanni Mascellani:
> To me MM100 is an interesting goal, but definitely an intermediate
> one. The (admittedly, very ambitious) goal that I have in mind for
> Metamath (or MM0, or whatever incarnation of it we will have in the
> future) is that virtually every published mathematical result is
> available in set.mm or some reverse dependency of it. I envision that if
> you search for the comment "(Based on ArXiv paper: xxx.yyy)" you find
> all the main theorems from that ArXiv paper.

My goal is similar. I would like to see a future descendant of Metamath
databases (including set.mm and iset.mm)
have essentially *all* mathematics formalized.
I would like to see that for at least two reasons:

1. Provide *far* more confidence that what we think is true
(given the axioms) is actually true. Mathematics is full of "proofs"
that weren't, and as mathematics has become more complex, depending
on humans to be perfect is becoming unjustifiable.
There are other tools that have a lot going for them;
I'll note that I like Coq. However, in most alternatives you never see
what I call the "actual proofs"; you only see assertions that a tool
thinks it found the actual proofs. Metamath is nearly unique in
its focus that "every step must be visibly justified by a previous step",
and that is what draws me strongly to Metamath.
2. Provide a stable archive of this information for all future time.
I don't think people appreciate how ephemeral information is.
The main logic system of the western world for centuries was
Stoic logic; we know very little about it, because practically
everything has been lost since. Modern civilization is even *less*
good at archiving; most storage devices only last around 10 years,
not thousands. Yet many civilizations have collapsed over the centuries,
and it is hubris to think it could not happen again.
I want to see etched copies of these databases
put in caves, deep in the sea, on the moon, and sent out to space
so it's possible for others to recover them after much else is lost.
There are organizations like the Arch Mission Foundation
(a non-profit organization that archives the knowledge and species
of Earth for future generations, https://www.archmission.org/ ) - but
they have to have material to work with.

The Metamath 100 is merely an intermediate goal from this perspective,
but I think it's useful. "Everything" is hard to achieve, while hard
but achievable & measurable goals are useful.
(Fermat's Last Theorem probably won't be formalized for a while, but
that's okay. What's a heaven for?)

I think different people will have different goals.
That is normal and expected. Hopefully we can find ways to
work together; I think we can, because much has been
accomplished so far.

I intend to post more on this later, but other duties call for now :-).

--- David A. Wheeler

vvs

Jan 25, 2020, 9:32:40 AM
to Metamath
2. Provide a stable archive of this information for all future time.
   I don't think people appreciate how ephemeral information is.
   The main logic system of the western world for centuries was
   Stoic logic; we know very little about it, because practically
   everything has been lost since. Modern civilization is even *less*
   good at archiving; most storage devices only last around 10 years,
   not thousands.  Yet many civilizations have collapsed over the centuries,
   and it is hubris to think it could not happen again.
   I want to see etched copies of these databases
   put in caves, deep in the sea, on the moon, and sent out to space
   so it's possible for others to recover them after much else is lost.
   There are organizations like the Arch Mission Foundation
   (a non-profit organization that archives the knowledge and species
   of Earth for future generations, https://www.archmission.org/ ) - but
   they have to have material to work with.

I share this concern but let's not fool ourselves. There are enough dead languages already that nobody understands. And without a Rosetta stone, formal systems are as easy or even easier to lose. Just ask yourself what would happen if all computers became obsolete and no books survived either (books are already mostly in digital form). We should admit that information is still transferred from one human to another within a single culture, and without that, semantics is lost. I don't believe in transcendent semantics, so I'm not sure we could understand alien mathematics, because soundness doesn't exist in syntax alone.

fl

Jan 25, 2020, 10:07:19 AM
to Metamath

   The main logic system of the western world for centuries was
   Stoic logic; we know very little about it, because practically
   everything has been lost since. Modern civilization is even *less*
   good at archiving; most storage devices only last around 10 years,
   not thousands.  

On the other hand, the Gilgamesh epic has been preserved. It's dated to the second or
third millennium BC. Nothing better than clay tablets.

    I want to see etched copies of these databases
   put in caves, deep in the sea, on the moon, and sent out to space
   so it's possible for others to recover them after much else is lost.
 

What is it good for?

--
FL

OlivierBinda

Jan 25, 2020, 10:35:23 AM
to meta...@googlegroups.com

I have no wish (I'll adapt to what is available) but I have a vision:

1) Metamath (or a descendant) is the universal computer format for maths (instead of LaTeX).
2) A proof can be exported to TeX/LaTeX/ConTeXt (pdf for print), MathJax/html/svg/css (for the web),
mp4/youtube (tutorials, video lessons) in 50+ human languages
3) math research is about publishing computer-checkable proofs, which can be exported to a printable proof (in any human language)
4) There is a decentralized database of what has been proven for maths, with proof checkers that commit/push things
(no more peer reviews, with flawed humans and all their flaws)
5) Everyone on earth can do math/research/access the knowledge base
6) there are computer games powered by a math engine that let children enjoy themselves by solving math puzzles.
The games actually teach them useful things for life
7) math teachers write their stuff in computer-checkable format instead of TeX/LaTeX/ConTeXt
8) you can assign work to math students, and they have to write computer-checkable proofs (no more grading student papers)
9) your profile on LinkedIn tells the math level you have achieved, so playing games about math can get you a job.

Ok, I may be crazy.
Still, I have started to put my master plan in motion like 1 year ago and, to me, it looks doable so I'm making it happen ^^.

Because metamath/mm0 are already quite up to the task, the first step is to be able to build a proof authoring tool that is good enough/great.
I fail if I cannot make expert metamathematicians (who seem to need better tools) give up mmj2 and switch to Mephistolus.


If metamath/mm0 themselves become even better, I will like that a lot, though.

Best regards,
Olivier


Alexander van der Vekens

Jan 25, 2020, 5:10:21 PM
to Metamath
On Saturday, January 25, 2020 at 3:12:17 PM UTC+1, David A. Wheeler wrote:
Giovanni Mascellani:
> To me MM100 is an interesting goal, but definitely an intermediate
> one. The (admittedly, very ambitious) goal that I have in mind for
> Metamath (or MM0, or whatever incarnation of it we will have in the
> future) is that virtually every published mathematical result is
> available in set.mm or some reverse dependency of it. I envision that if
> you search for the comment "(Based on ArXiv paper: xxx.yyy)" you find
> all the main theorems from that ArXiv paper.

My goal is similar. I would like to see a future descendant of Metamath
databases (including set.mm and iset.mm)
have essentially *all* mathematics formalized.


I want to associate myself with Giovanni and David: the goal should be a database in which "every published mathematical result is available" and which has "essentially *all* mathematics formalized". And this database is (currently) set.mm, and it should remain set.mm (maybe not physically, e.g. as a single file, but logically, concerning its contents). Having this goal, set.mm is a knowledge base and a library: on the one hand to be read by interested people, who should find everything they are looking for (regarding mathematics, of course) and be confident that what they find is (formally) correct (yes, they should also find a formal definition of a magma, and theorems which only use the properties of a magma, asking themselves the question: do I really need an identity element to prove this?). Metamath (language and tools) provides an excellent basis for that (and this topic is not about the goal of Metamath, which is pretty clear and beyond all possible doubt - it's about the goal of *set.mm* and how to continue to improve/expand it).

To summarize this aspect, Giovanni, David and I request the following property of the database set.mm: Completeness (as far as possible)! Another property would be "Beauty" (see also section 1.1.4 in the metamath book): not only should the proofs be beautiful (being short, elegant, tricky, etc.), but also the definitions and theorems themselves. And concerning the entirety of the database, its structure should also be beautiful! The properties "Simplicity" and "Rigor" (see also sections 1.1.5 and 1.1.6 in the metamath book) are "inherited" from Metamath (and the axioms provided within set.mm), so we do not need to question those. Concerning simplicity, however, a knowledge base containing much/most of mathematics cannot be simple in general, because mathematics is not simple. But using this knowledge base can (and should) be *made* simple by providing appropriate tools and "views". Because of this, there could be complex definitions in set.mm. Actually there are a lot of them, which only become clear through the corresponding "defining theorems" - e.g. ~df-mamu:
$a |- maMul = ( r e. _V , o e. _V |-> [_ ( 1st ` ( 1st ` o ) ) / m ]_ [_ ( 2nd ` ( 1st ` o ) ) / n ]_ [_ ( 2nd ` o ) / p ]_ ( x e. ( ( Base ` r ) ^m ( m X. n ) ) , y e. ( ( Base ` r ) ^m ( n X. p ) ) |-> ( i e. m , k e. p |-> ( r gsum ( j e. n |-> ( ( i x j ) ( .r ` r ) ( j y k ) ) ) ) ) ) ) $.
which becomes clear only after looking at theorem ~mamuval:
$p |- ( ph -> ( X F Y ) = ( i e. M , k e. P |-> ( R gsum ( j e. N |-> ( ( i X j ) .x. ( j Y k ) ) ) ) ) ) with F = ( R maMul <. M , N , P >. )

By providing tools and views, the underlying data/knowledge base can be used by non-specialists (using simple views) as well as by experts (using advanced views).

On the other hand, if set.mm is used as a library (or tool box) to provide and prove new theorems, it is also very helpful if it is as complete as possible: I do not want to have to provide hundreds of auxiliary theorems first; I expect that they (or at least most of them) are simply there (OK, this is a vision, but we should work on it). If set.mm becomes too big to find the adequate theorems easily, the tools and views need to be improved. But this is again not a problem of the data/knowledge base.

To conclude: to answer the question about the goals for set.mm, we have to agree on the purposes (e.g. knowledge base/library vs. pedagogical textbook) and the properties (completeness, beauty, simplicity, brevity, etc.) first.

-- Alexander

David A. Wheeler

Jan 25, 2020, 5:43:57 PM
to metamath
On Fri, 24 Jan 2020 19:29:17 -0800 (PST), Norman Megill <n...@alum.mit.edu> wrote:
> My essay yesterday represents the opinion I formed after looking back and
> reflecting on some things I felt were accomplished successfully and others
> not so successfully with set.mm. I try to keep an open mind, and my
> opinion can change if I can be convinced another way is better.

Norman Megill, 2020-01-23 point 1:
> There is no "goal" per se for set.mm.
> People work on what they are interested in.

I think that's a necessary side effect of working with many people.
People will have different goals, and where we *can* find
ways to work together it's great to do so. In general I think
the results are stronger for it when we can make that happen.


Norman Megill, 2020-01-23 point 2:
> A very important thing to me is the philosophy of striving for
> simplicity wherever possible. This particularly impacts what definitions
> we choose.

For me, simplicity is not a goal in itself, but it *is* necessary
to support other goals.
My real goal is having strong formal proofs of the majority of significant
mathematical claims, recorded in a way that it can be archived for the future.
Relative simplicity is a necessary part of that.

I think you can go too far in some notions of simplicity.
For example, I appreciate MM0's ability to add arbitrary parentheses
(not available in straight Metamath). But those are not critical, and
there is so much to appreciate about Metamath.

So let's talk about definitions in set.mm, which is really a question about
how to best create certain Metamath databases. After all,
all agree that the Metamath language & toolsets support
adding definitions. Having many definitions has problems as noted:

> Definitions are at the heart of achieving simplicity. While developing
> set.mm, two rules of thumb that evolved in my own work were (1) there
> shouldn't be two definitions that mean almost the same thing and (2)
> definitions shouldn't be introduced unless there is an actual need in the
> work at hand. Maybe I should add (3), don't formally define something that
> doesn't help proofs and can be stated simply in English in a theorem's
> comment.

There are good reasons to do this. For example, with multiple similar definitions:

> It was a constant nuisance to have to convert back and forth
> inside of proofs, making proofs longer. I eventually decided to use just
> "e." and "C_", and it made formalizations much more pleasant with fewer
> theorems and shorter proofs...

Some may expect me to disagree with this. But no,
I strongly agree with Norm on this point. Every time there's a set of similar
definitions in wide *use* in set.mm, there's a risk of having to do a
lot of back-and-forth conversions. Having conventions prefer one form
over another simplifies proofs and reduces the number of necessary theorems.

> *4.* As for adding an exhaustive list of all possible definitions one can
> find in Bourbaki or whatever, as someone suggested, I don't think something
> like that belongs in set.mm, for all the reasons above. There are other
> resources that already list these (Wikipedia, Planetmath, nLab). Their
> precise formalization will depend on the context in which they are needed
> in set.mm, which is unknown until they are used. In isolation (perhaps
> with a couple of property theorems that basically check syntax) there is no
> guarantee that they are correct, even in set.mm. To restate, I think the
> philosophy should be that definitions should be added by need, not as
> busywork.
> Adding a definition when needed is one of the smallest parts of
> building a series of significant proofs. It doesn't have to be done in
> advance, and I don't think it is generally productive or useful to do so.

I think we should not try to create an "exhaustive list of definitions".
I have not proposed such a thing.
But I *do* think that we *should* provide over time
a set of standardized "definitions with significant use in mathematics" along
with a relevant expression in set.mm (and iset.mm).

This reflects a difference in our goals. My goal is not simplicity for its own sake.
My goal is to formalize all significant mathematical works so that
they are far more certain & can be archived for all future time.

From that point of view, failing to formalize commonly-used terms
such as ">" is just that: a *failure*. And it's an unnecessary failure.
Adding definitions of common terms is one of the smallest parts of
building a set of significant proofs. Thus, defining these terms is one of the most
*productive* things that can be done to formalize common mathematics,
*because* it takes less time than some other activities to produce results.

I don't accept that "There are other resources that already list these".
Most of them do not formalize definitions to the same degree as set.mm, and
there's no guarantee that their terminology matches set.mm
even when they are formal.

In addition, I think it's unwise to assume that these resources will
necessarily exist in the future. Almost all of Stoic logic is lost forever, even
though that was the primary system of logic for centuries.
We've lost about 2/3rds of Aristotle's works, and people *tried* to keep them.
More recent works are much *more* likely to be lost, since recording
devices only last ~10 years and copyright laws often make archiving illegal.
Closer to home: many old Metamath discussions have been lost forever
due to one of the old websites suddenly imploding.
While set.mm *should* give credit to all external resources, we should not
assume that those external resources will always exist. In the long term
it is unlikely that everything we reference will be available.

It will be much easier to archive set.mm than other sources, *because*
it can provide a single uniform archive. That would be fantastic.

> IMO the same should be done with >, mentioning what it means in the
> description for <. Introducing a formal $a statement for > that will never
> be used is unnecessary and wasteful of resources.

No, I don't agree at all, because this depends on what your goals are.
If your goal is to record all major mathematical
constructs, then recording ">" is absolutely necessary.

> If we want to be
> excessively pedantic, we could mention in the description for < that the
> formal definition would be "|- > = `' < ", although that seems less
> intuitive than simply saying that "in theorem descriptions, we occasionally
> use the terminology 'A is greater than B' to mean 'B is less than A'."

No, that would be useless. Because it's in a comment, it would *not*
go through formal checks, and we couldn't prove a few simple theorems to
show it works as intended (e.g., "4 > 3"). So including that information in
a comment is *much* worse than using $a.

Now, regarding:
> wasteful of resources.

I think what I have in mind is *not* a waste of resources.

I also think there's a potential misunderstanding here.
Perhaps a clarification will help.

I *agree* that set.mm should *not* try to *use* definitions like ">"
in arbitrary ways. As noted above, that leads to much longer & more complex
proofs (because the proofs have to do many conversions), as well as
a huge number of theorems necessary to support the use of such definitions.
For example, to seriously *use* ">" you can't just create the definition; you
also have to create a lot of supporting theorems that make it convenient to use.

I do *not* advocate adding definitions like ">" to be used in this way,
for all the reasons listed above. So I *agree* with your concerns, and
I think we can do something else that will avoid those problems.

What I'm advocating for is a section near the end of set.mm, let's call it
"Other definitions", where definitions of other commonly-used mathematical
constructs are formally defined *and* are expressly marked as
"(New usage is discouraged.)". The purpose of this section would *not* be
to create definitions for use (that would be expressly discouraged!).
The purpose of the section would be to formally define and record, for future readers,
the meaning of other common mathematical constructs using set.mm concepts.
It would also help people understand what constructs they *should* use by
convention - they can look up what they planned to use, and see what is
recommended instead. Think of it as a Rosetta Stone.
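
As a concrete sketch of what an entry in that section could look like (the
label df-gt is tentative and the check theorem is hypothetical; the exact
markup would need review):

$( Define the "greater than" relation as the converse of "less than".
   Provided for reference only.  (New usage is discouraged.) $)
df-gt $a |- > = `' < $.

$( Demonstrate that ">" has the expected meaning, cf. "4 > 3" above.
   (New usage is discouraged.) $)
4gt3 $p |- 4 > 3 $=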

This would have practically no resource cost. Each construct would have a definition,
plus maybe 1-3 other theorems (e.g., to express it more clearly, relate it to
other expressions that would be used in set.mm instead by convention, or
to demonstrate that it has the "expected meaning"). These would be a vanishingly
small subset of the definitions and theorems in the rest of the database
(especially as it starts approaching "all of mathematics").

Please note that this would *not* have the problems noted earlier.
There would be no "converting back and forth" in normal proofs, because
these definitions would be expressly marked as symbols not to be used
in the normal case. You wouldn't normally see these other constructs in formal
statements unless you were looking for them. It *would* make it easier to use
these other symbols in comments, because they *would* be defined, and now
those symbols would have "real" definitions if someone wanted to learn
exactly what was intended.

Anyway, I hope this helps clarify the difference.
I'm hoping to talk you eventually into letting me add that new section :-).

--- David A. Wheeler

Norman Megill

unread,
Jan 25, 2020, 9:40:54 PM1/25/20
to Metamath
On Saturday, January 25, 2020 at 5:10:21 PM UTC-5, Alexander van der Vekens wrote:
On Saturday, January 25, 2020 at 3:12:17 PM UTC+1, David A. Wheeler wrote:
Giovanni Mascellani:
> To me MM100 is an interesting goal, but definitely an intermediate
> one. The (admittedly, very ambitious) goal that I have in mind for
> Metamath (or MM0, or whatever incarnation of it we will have in the
> future) is that virtually every published mathematical result is
> available in set.mm or some reverse dependency of it. I envision that if
> you search for the comment "(Based on ArXiv paper: xxx.yyy)" you find
> all the main theorems from that ArXiv paper.

My goal is similar. I would like to see a future descendant of Metamath
databases (including set.mm and iset.mm)
have essentially *all* mathematics formalized.


I want to associate myself with Giovanni and David: the goal should be a database in which "every published mathematical result is available" and which has "essentially *all* mathematics formalized".

This is a great goal and we can dream it, but it won't happen for decades or a century or maybe never.  In the meantime, what is the first step?  What specifically should people work on now?  Perhaps we should look at what we would like to achieve in say 1 year, 2 years, 5 years.  Of course people will want to work on what interests them, but having specific and realistic goals to select from might be useful to guide them.

While the MM100 list is simply someone's opinion of important theorems, I was surprised at how visible it is (with Freek's list at the top of the Google search for "list of named theorems"), and it is definitely publicity for Metamath.  A few years ago Metamath was rarely noticed by anyone in the field, whereas more and more I see it mentioned in slide presentations and article references.  The more people who are aware of Metamath, the more contributors we will probably have, moving faster towards "all of mathematics".

In 2016 I wrote on this newsgroup,

"...by far the most important part of the Metamath project is not the language but the proofs people have contributed.  While it is interesting to speculate how Metamath might evolve in 100 years with advances in computer technology, proofs that I formalize now will, in some form, live on forever as a foundation that will be built upon and always have my name associated with it.  They are permanent.  This contrasts to almost everything else in the software world, which with few exceptions become obsolete and forgotten in a decade or two..."

Whether or not that will turn out to be true, believing it gives me a great deal of satisfaction and keeps me inspired.  Already with MM0 and its translators to Lean and others, we are close to a point where "my" proofs (and yours) may become important in other proof languages as well.
 
...
 
Concerning simplicity, however, a knowledge base containing much/most of mathematics cannot be simple in general, because mathematics is not simple. But *using* this knowledge base can (and should) be made simple by providing appropriate tools and "views". Because of this, there can be complex definitions in set.mm. Actually there are a lot of them, which become clear only through the corresponding "defining theorems" - e.g. ~df-mamu:
$a |- maMul = ( r e. _V , o e. _V |-> [_ ( 1st ` ( 1st ` o ) ) / m ]_ [_ ( 2nd ` ( 1st ` o ) ) / n ]_ [_ ( 2nd ` o ) / p ]_ ( x e. ( ( Base ` r ) ^m ( m X. n ) ) , y e. ( ( Base ` r ) ^m ( n X. p ) ) |-> ( i e. m , k e. p |-> ( r gsum ( j e. n |-> ( ( i x j ) ( .r ` r ) ( j y k ) ) ) ) ) ) ) $.
which becomes clear only after looking at theorem ~mamuval:
$p |- ( ph -> ( X F Y ) = ( i e. M , k e. P |-> ( R gsum ( j e. N |-> ( ( i X j ) .x. ( j Y k ) ) ) ) ) ) with F = ( R maMul <. M , N , P >. )

I agree that ~ df-mamu (and ~ mamuval for its practical use) is a good example.  In particular, it is used directly or indirectly by 224 theorems, so there is no question that it is useful and even essential.  BTW it would be a good idea to reference ~ mamuval in the description of df-mamu.  We often do that for value theorems (or at least I have done so in e.g. ~ df-re).

df-mamu illustrates another point.  If we look at the reference cited (Lang p. 504), the actual form of df-mamu cannot easily be deduced from the definition printed on that page (a mixture of informal language and some formulas).  My guess is that df-mamu was need-driven (its author S.O. would have to answer that), i.e. it was added as the theorems needing it were being proved and it was needed to move forward.  It likely required an intimate knowledge of how it was going to be applied along with knowledge of the set.mm objects it was going to be applied to.

Thus I think it is unlikely that someone simply transcribing definitions from a book would come up with exactly df-mamu.  In addition, while I don't know the history of df-mamu specifically, my own experience is that I've often had to fine-tune complex definitions to achieve what I wanted from them.  Sometimes problems can be determined only by actually using the definition in its intended application.

I know I keep harping on not wanting to add a catalog of definitions (with shallow theorems) that are otherwise unused, but it has been done in the past with the work essentially wasted, sometimes causing resentment, and I want to avoid that.  With an ambitious goal like "all of mathematics," it will be very tempting to start collecting definitions, leading to a false sense of accomplishment to see the syntax errors ironed out and simple property theorems proved.  But in the end it is busywork with a good chance of being discarded later. Moreover, a few shallow value and property theorems don't ensure that the definition is correct, possibly misleading someone who might want to use set.mm as a reference source that can be trusted to be rigorously correct.  This is the motivation for rule 2 I proposed, "definitions shouldn't be introduced unless there is an actual need in the work at hand."

One thing I have done in the past is to add definitions I expect to use eventually in commented-out form, in order to serve as a reference to help guide the project's direction.  When they become needed, I uncomment and adjust them based on how the project and its notation actually evolved.  Quite often definitions not in the literature will be needed to help fill in gaps in the informal proof.  Other times definitions in the literature end up not being useful or have to be modified for the needs of the formalization.  We have to remember that authors of written proofs are human, they usually have no idea their work will be formalized nor appreciate what kind of definitions are needed for that, and they may gloss over edge cases and other details that affect a rigorous formalization. 

Norm

David A. Wheeler

unread,
Jan 25, 2020, 10:55:02 PM1/25/20
to metamath
On Saturday, January 25, 2020 at 5:10:21 PM UTC-5, Alexander van der Vekens wrote:
> > the goal should be a
> > database in which "every published mathematical result is available" and
> > which has "essentially *all* mathematics formalized".

On Sat, 25 Jan 2020 18:40:53 -0800 (PST), Norman Megill wrote:
> This is a great goal and we can dream it, but it won't happen for decades
> or a century or maybe never. In the meantime, what is the first step?
> What specifically should people work on now? ... Of course
> people will want to work on what interests them, but having specific and
> realistic goals to select from might be useful to guide them.
>
> While the MM100 list is simply someone's opinion of important theorems, I
> was surprised at how visible it is (with Freek's list at the top of the
> Google search for "list of named theorems"), and it is definitely publicity
> for Metamath. A few years ago Metamath was rarely noticed by anyone in the
> field, whereas more and more I see it mentioned in slide presentations and
> article references. The more people who are aware of Metamath, the more
> contributors we will probably have, moving faster towards "all of
> mathematics".

I completely agree. I think goals that are achievable in the "more near term"
are very valuable. I think the MM100 list has given Metamath a lot of visibility AND
it's encouraged development of a lot of intermediate theorems that are
important and useful elsewhere.

> In 2016 I wrote on this newsgroup,
>
> "...by far the most important part of the Metamath project is not the
> language but the proofs people have contributed. While it is interesting
> to speculate how Metamath might evolve in 100 years with advances in
> computer technology, proofs that I formalize now will, in some form, live
> on forever as a foundation that will be built upon and always have my name
> associated with it. They are permanent. This contrasts to almost
> everything else in the software world, which with few exceptions become
> obsolete and forgotten in a decade or two..."
>
> Whether or not that will turn out to be true, believing it gives me a great
> deal of satisfaction and keeps me inspired. Already with MM0 and its
> translators to Lean and others, we are close to a point where "my" proofs
> (and yours) may become important in other proof languages as well.

I think that as the set of proofs in Metamath increases, the likelihood of it
continuing to grow increases. It isn't a big deal to create a few small proofs.
But as more and more proofs are added to the database, anyone else who
wants to formalize mathematics will be attracted to building on that
database instead of starting from scratch. The Metamath language may change,
or the databases may get translated into some newer system (like MM0),
but that means the proof work endures.

--- David A. Wheeler

Jim Kingdon

unread,
Jan 25, 2020, 10:57:24 PM1/25/20
to meta...@googlegroups.com
On 1/25/20 6:04 AM, Jon P wrote:

> I also think the future of mathematics lies in formal proofs.

I've been (pleasantly) surprised to find that at least occasionally the
PRESENT of mathematics lies in formal proofs.

From the preface to the HoTT book: 'The present work has its origins in
our collective attempts to develop a new style of “informal type theory”
that can be read and understood by a human being, as a complement to a
formal proof that can be checked by a machine. . . . much of the material
presented here was actually done first in the fully formalized setting
inside a proof assistant, and only later “unformalized” to arrive at the
presentation you find before you — a remarkable inversion of the usual
state of affairs in formalized mathematics.'


David A. Wheeler

unread,
Jan 25, 2020, 11:23:19 PM1/25/20
to metamath
On Sat, 25 Jan 2020 14:10:21 -0800 (PST), "'Alexander van der Vekens' via Metamath" <meta...@googlegroups.com> wrote:
> the goal should be a
> database in which "every published mathematical result is available" and
> which has "essentially *all* mathematics formalized".

That's a big dream, and for the very long term, a worthy (and incredibly
ambitious) goal.

We'll need to find some smaller & shorter-term goals.
I've been championing filling in the Metamath 100, and I think it's
been a useful example of that. I think other shorter-term goals could be useful.
For example, we could have the goal of fully formalizing certain works.
An obvious example is to complete the formalization of
elementary geometry based on Tarski's axioms, following [Schwabhauser].
Formalizing certain works will help us develop the key infrastructure
needed to support many other works.

I think in *reality* there will be multiple databases for different axiom sets.
E.g., if you don't believe in classical logic, iset.mm (or its descendant) may
be what you care about. But that's okay, it just means that there will really
be a *collection* of such databases.

[Desirable properties proposed are...]
> Completeness (as far as possible)! ...

I don't think it will ever *actually* be complete, but sure, it's a worthy
long-term goal.

> "Beauty" (see also section 1.1.4 in the metamath book):
> Not only the proofs should be beautiful (being short, elegant, tricky,
> etc.), but also the definitions and theorems [and] structure...

That's a reasonable desire, though beauty is in the eye of the beholder.
In any case, I think efforts to do that are useful, as long as we
acknowledge that there will always be differences of opinions &
imperfections.

> "Simplicity" and "Rigor" ...

Desirable, sure.

But in the end, I think actually *having* proofs (leaning towards
completeness) is the more important part. Without proofs, there's
no point in it.

> To conclude: To answer the question about the goals for set.mm, we have to
> agree on the purposes (e.g. knowledge base/library vs. pedagogical textbook)
> and the properties (completeness, beauty, simplicity, brevity, etc.) first.

I don't really agree with that statement. Different people may have different
goals and still work together. At a high level people will agree on those generic
properties; the issue will be working them out in practice.

As far as *purpose* goes, if it must be one, it should be knowledge base/library.
But I think that's a false dichotomy: I don't think it should be just
one or the other. A knowledge base has great pedagogical value.
One problem with traditional pedagogical systems is that students *cannot*
follow the "whole string" in any area that interests them.
Indeed, some areas can be hard to understand *because* the full
story is never told. Metamath can enable solutions to this.

If I may, I have a computer science analogy. In the early 1970s
people who wanted to study operating systems studied the code of Unix,
because that was available to study. Then AT&T decided to make it
hard to *legally* study Unix. So people created "toy" operating systems
like Minix and Xinu, and there was a whole fad focused on
microkernels, because these systems were the kind that
academics could easily create themselves. But many academics
quickly became completely disconnected from studying *actual*
operating systems that many people used. An incredible number of
academics studied schedulers (because that was easy to study on
toy systems) & increasingly many had no idea how *actual*
operating systems worked (e.g., in *real* operating systems the
majority of code was in drivers, a key fact not visible to most academics).
When Linux grew in capability in the *real* world (and to some extent this
was true of the *BSDs as well), academics could once again study real systems.
(Yes, I'm aware that QNX is a microkernel system, but there are reasons
such systems are rare.)

In short: Simple models for teaching purposes do have their place.
But for serious academic work, there's nothing like a *real* thing.
In the math context, the "real" thing is proofs of the actual
math statements you care about, including the statement and
handling of special cases. Once you have a database of the
"real" definitions and proofs, you can use various pedagogical
approaches to learn about it... and the student can have confidence
that they are learning about something "real".
(Well, as "real" as things get in math. We can have the
realism/nominalism argument another time :-). )

We've already had at least one high school student submit many set.mm
proofs. So it's clearly possible to learn from "the real thing".

--- David A. Wheeler

David A. Wheeler

unread,
Jan 25, 2020, 11:36:37 PM1/25/20
to metamath
On Sat, 25 Jan 2020 18:40:53 -0800 (PST), Norman Megill <n...@alum.mit.edu> wrote:
> I know I keep harping on not wanting to add a catalog of definitions (with
> shallow theorems) that are otherwise unused, but it has been done in the
> past with the work essentially wasted, sometimes causing resentment, and I
> want to avoid that. With an ambitious goal like "all of mathematics," it
> will be very tempting to start collecting definitions, leading to a false
> sense of accomplishment to see the syntax errors ironed out and simple
> property theorems proved.

Not just tempting. Necessary.

> But in the end it is busywork with a good chance
> of being discarded later. Moreover, a few shallow value and property
> theorems don't ensure that the definition is correct, possibly misleading
> someone who might want to use set.mm as a reference source that can be
> trusted to be rigorously correct.

*Nothing* other than human review ensures a definition is correct.
If we defined 9 as "( 8 + 3 )" there is no tool that could detect the error.
Tools can detect if a definition is syntactically valid,
but no tool can figure out "was this really what you meant?"

The best solution for that is to provide definitions & enable multi-person
human review to *detect* those problems. Failing to formalize those
definitions means that the needed multi-person review can't ever happen.

A useful additional step would be to prove some "expected" properties to
ensure that they "do what's expected". Again, that requires that they
be formalized in the database itself somewhere.

> This is the motivation for rule 2 I
> proposed, "definitions shouldn't be introduced unless there is an actual
> need in the work at hand."

I would add:
... or if the definition will *never* be used by the rest of set.mm, and it is
there solely to help translate widely-used constructs used in the literature.

BTW, a possible place for my proposed "Other definitions" section would be
*after* all the mathboxes. That way it'd be impossible for someone to
accidentally use those definitions within a mathbox. That could also make
it crystal-clear that these definitions are not for use within the regular
database, but are instead provided to ensure that there are definitions
for the *other* widely-used terms conventionally avoided in set.mm.

--- David A. Wheeler

Mario Carneiro

unread,
Jan 26, 2020, 2:50:44 AM1/26/20
to metamath
On Sat, Jan 25, 2020 at 11:36 PM David A. Wheeler <dwhe...@dwheeler.com> wrote:
On Sat, 25 Jan 2020 18:40:53 -0800 (PST), Norman Megill <n...@alum.mit.edu> wrote:
> I know I keep harping on not wanting to add a catalog of definitions (with
> shallow theorems) that are otherwise unused, but it has been done in the
> past with the work essentially wasted, sometimes causing resentment, and I
> want to avoid that.  With an ambitious goal like "all of mathematics," it
> will be very tempting to start collecting definitions, leading to a false
> sense of accomplishment to see the syntax errors ironed out and simple
> property theorems proved.

Not just tempting. Necessary.

I think I would also advocate a "theorems first" approach here. Assuming that mathematicians have done their work right, definitions should naturally fall out of the commonalities between theorems that are stated in the literature. We don't need to give them any help in this - good definitions make themselves known. After writing the same sequence of characters enough times you get tired and want to compact it to something more manageable.

I think the point about wasted work causing resentment also requires acknowledgement. I would give at least a 50% chance, maybe more, that an untested definition will need to be rewritten when one really gets around to using it, and attribution at this point becomes messy, because the original author may feel slighted by a rewrite (they shouldn't, set.mm is public domain and we have always recognized that your theorems may be rewritten to suit future work, but sometimes we get attached to things anyway). While I don't really want to enforce a policy along these lines, I certainly place the worth of untested definitions very low. They can be recreated on first use with little trouble.

> But in the end it is busywork with a good chance
> of being discarded later. Moreover, a few shallow value and property
> theorems don't ensure that the definition is correct, possibly misleading
> someone who might want to use set.mm as a reference source that can be
> trusted to be rigorously correct.

*Nothing* other than human review ensures a definition is correct.
If we defined 9 as "( 8 + 3 )" there is no tool that could detect the error.
Tools can detect if a definition is syntactically valid,
but no tool can figure out "was this really what you meant?"

I disagree with this sentiment. If you define 9 to be 8 + 3, then you will later stumble when trying to prove that 9 * 9 is 81, or 9 + 1 = 10, or 9 is the first odd composite greater than 1. Basically, proving theorems about a definition boxes it in, making it much less likely that you got the statement wrong if all the expected proofs go through. If the only proof you do is the definitional theorem, you only get a very weak version of this, because the definitional theorem is much more likely to be correlated with the definition itself, so it is quite possible for errors to make it past both the axiom and the definitional theorem.
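
To make the "boxing in" concrete (sketching from memory; set.mm's actual digit definition has this shape, though labels and markup should be double-checked):

df-9 $a |- 9 = ( 8 + 1 ) $.     $( the actual style of definition $)
df-9bad $a |- 9 = ( 8 + 3 ) $.  $( the hypothetical wrong variant $)

With df-9bad in place, the expected fact ( 8 + 1 ) = 9 and statements like ( 9 + 1 ) = 10 mentioned above become unprovable, so each expected proof that does go through makes a hidden error in the definition less likely.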

The best solution for that is to provide definitions & enable multi-person
human review to *detect* those problems. Failing to formalize those
definitions means that the needed multi-person review can't ever happen.

If it isn't in set.mm, it doesn't need to be reviewed. Review only happens *after* some development is submitted. And again, I can't properly appraise the appropriateness of a definition without seeing how it is going to be used.

A useful additional step would be to prove some "expected" properties to
ensure that they "do what's expected". Again, that requires that they
be formalized in the database itself somewhere.

If you are talking about "unit tests" like 9 + 3 = 12, that's good, but it is no substitute for general theorems, which constrain the definition in a much broader way.

> This is the motivation for rule 2 I
> proposed, "definitions shouldn't be introduced unless there is an actual
> need in the work at hand."

I would add:
... or if the definition will *never* be used by the rest of set.mm, and it is
there solely to help translate widely-used constructs used in the literature.

I can actually see a use for this, in the sense that having a definition in the main part blocks future attempts to add that definition. If we have a definition for greater-than, then we won't get repeated requests to add it because it's already there. At the same time, that's a bunch of unnecessary axioms to add to the database, and we would have to guard them against use, which sounds like extra complexity for the review process. (Yes I know we have discouraged theorems, but it's still something we have to monitor.)

Mario

OlivierBinda

unread,
Jan 26, 2020, 3:45:54 AM1/26/20
to meta...@googlegroups.com
About ">"...

It isn't really necessary, as set.mm has done just fine without it using
negation and "<_", right?

Still, the general public knows about > and >_ and will want to use them.
But they can, with the right tools.

I know of two aspects of math: content and presentation.
You get both in MathML.

So, imo, it is fine if set.mm only has "<" and if the tool has an option
to display "the negation of <_" as ">".

For example, in the video for mephistolus M6 (with different variable names)
I display the sum sum_ k e. ( 0 ... n ) ( x ^ k ) with

\sum_{k \in (0...n)} x^k

But people will want to see and work with

\sum_{k=0}^n x^k

I thought about it and it is completely possible. (It does require an
improved version of the selection widget: with the current version, if you
want to select ( 0 ... n ) with a mouse, you end up selecting the whole
sum. But this is doable with some additional control/display logic.)

Presentational math is just an aspect/projection/shade of content math.
There are many possible shades, and selecting such a shade should be a
presentation option.

Now, as a tool writer, I really love deduplication of
theorems/definitions, because it means half the memory and time cost.
And cost grows exponentially with the depth of automated proof search.


I am also considering what to do about commutativity, etc.
It may be possible for the tool to get a decent job done with a single
powerful theorem that gets projected to many shades,

like |- ( 5 x. 7 ) = ; 3 5

and |- ( 7 x. 5 ) = ; 3 5


I think I will go that way

But this is just my opinion, and I actually don't really want to
interfere with the metamath/mm0 direction, because:
just as math teachers have to select a subset of a mathematical corpus to
teach students and to work with it,

Mephistolus should/will have to do that too. And so, it should be
able to adapt to metamath/mm0 (and not the other way around)

Best regards,
Olivier

OlivierBinda

unread,
Jan 26, 2020, 3:51:51 AM1/26/20
to meta...@googlegroups.com

Also,

Why not put in set.mm
a definition of ">" relating it to the "negation of <_",
and put a "you are absolutely forbidden to use this definition to write theorems,
usage is discouraged, you'll be cursed and a ghost will haunt you if you do!"

With some explanation?

(arghh I'm meddling ^^)

Happy funny sunday every one :)


Alexander van der Vekens

unread,
Jan 26, 2020, 5:47:45 AM1/26/20
to Metamath
I want to try to draw some conclusions from (or to express just my impressions of) the discussion so far:

Differentiation between long-term goals and short/mid-term goals

Whereas the long-term goals seem to be clear and acceptable to everybody (to have a knowledge base containing *all* known mathematics formalized), there are different opinions about the short/mid-term goals: how shall we continue working on set.mm? Therefore, this discussion should concentrate on these short/mid-term goals.

Different "levels" of (short/mid-term) goals

It seems that there are different "levels" of (short/mid-term) goals:
  • level 1 - general goals: These goals are universal for (the main body of) set.mm itself and all users of set.mm. They should be expressed as part of the "Conventions". We may also need two levels of conventions: overall conventions valid for the whole set.mm, and conventions to achieve the 1st level goals. Anybody not agreeing with the overall conventions (at least to some extent) should not use set.mm, but should create a separate database to achieve his/her goals. The conventions for the 1st level goals contain the criteria for moving definitions/theorems into the main body of set.mm. By this, the conventions can be regarded as analogous to the "Universal Declaration of Human Rights" or to the "Ten Commandments" in the bible (this comparison is extremely exaggerated, I know).
    To describe such 1st level goals, we can make the following statements about the purposes and properties of set.mm:
    • Purpose: knowledge base, which also can be used as library and for teaching/research
    • Completeness: should cover most important theorems of mathematics (Attention: this must be clarified/specified in more detail, because this seems to be the main reason for the current discussion...)
    • Beauty, Simplicity: Here we should give (and already have) some precedence rules, e.g. "shorter proofs are preferred", or "proofs using fewer axioms are preferred",...
    • Rigor: currently assured, but must be kept whatever happens
  • level 2 - common goals: These are goals shared by a group of users who want to work collectively on reaching these goals. As was expressed in several posts, different users can have different opinions on what is "beautiful", "an elegant proof", "a good structure", a "useful theorem", etc.
  • level 3 - personal goals: as long as these goals do not contradict the overall conventions, they could be whatever the user wants.
High level restructuring of set.mm

Although it is clear that the main body of set.mm should serve the general goals, it is not yet clear how to handle 2nd level goals. A proposal (which was given in several ways by several people) could be to restructure set.mm at a high level: currently we have two "parts"/realms of set.mm ("part" not in the meaning as it is currently used in set.mm - maybe the currently used "parts" should become "chapters"...):
  • main "part"/body of set.mm
  • mathboxes
While it is clear where the 3rd level goals belong (in the mathboxes!), it is not clear where to work on the 2nd level goals (and this is the main reason for the current discussion). For some users it is not sufficient to work on them in the mathboxes, and for other users it is not
acceptable that these are contained in the main body of set.mm. To be specific, I would propose the following structure (the titles can/should be discussed):

  • Volume I. main body of set.mm, consisting of parts 1-17
  • Volume II. supplementary material, consisting of parts/chapters like "Set Theory according to Bourbaki" (Benoit's hobbyhorse) or "Algebraic Structures according to Bourbaki" (my hobbyhorse), etc.
  • Volume III. mathboxes (current part 21)
  • Volume IV. alternative definitions and theorems: the current parts 18-20 could be placed here, but also a part/chapter containing David's definitions and theorems for ">".
Definitions/theorems contained in Volume IV. need not be marked as "usage discouraged", because they cannot be used in proofs of Volumes I.-III., but it should be possible to use them within Volume IV. without restriction. By the way, the ALT-definitions/theorems currently contained in the main body may also be moved into this Volume IV.

Could this be an approach to find an acceptable way to achieve everybody's short/mid-term goals? What would remain to be discussed are the criteria for definitions (and/or theorems) serving the 1st level goals, i.e. which definitions are acceptable to be added to the main body/Volume I.  of set.mm...

vvs

unread,
Jan 26, 2020, 7:33:34 AM1/26/20
to Metamath
Could this be an approach to find an acceptable way to achieve everybody's short/mid-term goals? What would remain to be discussed are the criteria for definitions (and/or theorems) serving the 1st level goals, i.e. which definitions are acceptable to be added to the main body/Volume I.  of set.mm...

The CS engineer in me has a feeling that this is a clumsy half-and-half resolution. Wouldn't it make the zoo of existing informal agreements even less maintainable? The huge database size and single name space hinder scalability. Is there hope that it could be fixed by something more conventional once and for all?

fl

unread,
Jan 26, 2020, 10:04:55 AM1/26/20
to Metamath

I know I keep harping on not wanting to add a catalog of definitions (with shallow theorems) that are otherwise unused, but it has been done in the past with the work essentially wasted, sometimes causing resentment, and I want to avoid that.



I just don't see how a "catalog" of definitions could cause resentment because as far as I know the definitions have always been carefully kept in the mathboxes.

A funny explanation, where one gives, as the cause of something one has always wanted, something which has never existed.

And to be honest we have never given a "catalog" of definitions. The definitions in the topology part were useful and used to prove theorems contrary to what you say. Those in the algebra part were useful too. The rare ones that are useless can be easily removed since they were not used.

As far as I remember, you didn't want the definition of a module. I insisted on keeping it. Perhaps that is the resentment you are speaking of. But in that case it is yours. 
I just don't understand why you make such a fuss over the definition of a module. It is in use in current mathematics. If the extensible structures must only be reduced 
to the chain "group - ring - vector space" I think they are not very extensible. But if you insist we can just stop using the extensible structures.

-- 
FL

David A. Wheeler

unread,
Jan 26, 2020, 8:43:37 PM1/26/20
to metamath
On Sun, 26 Jan 2020 02:50:31 -0500, Mario Carneiro <di....@gmail.com> wrote:
> I think I would also advocate a "theorems first" approach here. Assuming
> that mathematicians have done their work right, definitions should
> naturally fall out of the commonalities between theorems that are stated in
> the literature.
...
> I can't properly appraise
> the appropriateness of a definition without seeing how it is going to be used.

We seem to talking past each other.

I *agree* with you & Norm that if a definition is going to be used,
we should focus on ensuring there are a number of theorems that will use it,
and show that the definitions are useful before accepting it.
A definition intended for use needs to earn its keep.

That is *not* what I'm talking about. I'm not talking about
creating definitions intended for use in later proofs.
I'm talking about definitions created *specifically* to *not* use them,
but instead to define them so that people will know what *other*
materials & set.mm comments mean by those expressions.
For example, these definitions can serve
as a kind of Rosetta stone to explain others' notations, as well as
formalizing the use of such terms in set.mm comments.
For example, a lot of set.mm comments refer to "greater than",
but we do not formally define the expression.
That concept should be defined as rigorously as we can reasonably do it.

David A. Wheeler:
> > This is the motivation for rule 2 I
> > > proposed, "definitions shouldn't be introduced unless there is an actual
> > > need in the work at hand."
> >
> > I would add:
> > ... or if the definition will *never* be used by the rest of set.mm, and
> > it is there solely to help translate widely-used constructs used in the
> > literature.

Mario:
> I can actually see a use for this, in the sense that having a definition in
> main blocks future attempts to add that definition.
> If we have a definition
> for greater-than, then we won't get repeated requests to add it because
> it's already there.

Yes. That's not my *primary* motivation, but it's a good side-effect.

> At the same time, that's a bunch of unnecessary axioms
> to add to the database, and we would have to guard them against use, which
> sounds like extra complexity for the review process. (Yes I know we have
> discouraged theorems, but it's still something we have to monitor.)

That's easy: Add a new section to set.mm at the end, *after* the mathboxes.
Call it "Other definitions" or some such. All definitions there
are discouraged. We'd want a few theorems to show that each of
the definitions have the "expected" behavior, but there'd be no need
to create all the "support" theorems that make such definitions easy to use.
Then the tools will ensure that those definitions can't be used by anything
else, since they're at the end & would be marked to discourage further use.
But they *can* be discovered, typeset, referred to
if someone wants to know what a term in a comment means, etc.
If it turns out they're more useful than originally expected, they can be
moved up per our usual process.

I'll try to mock up in my mathbox what I have in mind.
I really already have a starting point with ">" and the hyperbolic functions.

--- David A. Wheeler

Alexander van der Vekens

unread,
Jan 27, 2020, 1:49:01 AM1/27/20
to Metamath
This corresponds to the proposal I made yesterday - to restructure set.mm at a high level (see https://groups.google.com/d/msg/metamath/mnkBQV1_cXc/A5OEsTk7AQAJ):

  • Volume I. main body of set.mm, consisting of parts 1-17
  • Volume II. supplementary material, consisting of parts/chapters like "Set Theory according to Bourbaki" (Benoit's hobbyhorse) or "Algebraic Structures according to Bourbaki" (my hobbyhorse), etc.
  • Volume III. mathboxes (current part 21)
  • Volume IV. alternative definitions and theorems: the current parts 18-20 could be placed here, but also a part/chapter containing David's definitions and theorems for ">".
So my "Volume IV." corresponds to David's section "Other definitions". But maybe it is sufficient to have an area between the current main body of set.mm and the mathboxes (Volume II.). To mark definitions and theorems for e.g. ">" should be sufficient to prevent that these are used arbitrarily, but they could be used in the mathboxes if somebody wants to experiment with them.
Hence, the following could be (an example for) a substructure of Volume II.:
  • Immature definitions and theorems
      • "Set Theory according to Bourbaki"
      • "Algebraic Structures according to Bourbaki"
      • ...
    • Deprecated definitions and theorems
      • (PART 18)  ADDITIONAL MATERIAL ON GROUPS, RINGS, AND FIELDS (DEPRECATED) - if not deleted completely meanwhile...
      • (PART 19)  COMPLEX TOPOLOGICAL VECTOR SPACES (DEPRECATED)
      • (PART 20)  COMPLEX HILBERT SPACE EXPLORER (DEPRECATED)
    • Alternative definitions and theorems
      • Greater than (">") and greater than or equal to (">_").
    Having only one additional "Volume" instead of two may address the concerns of vvs (see https://groups.google.com/d/msg/metamath/mnkBQV1_cXc/Xkrv9_9AAQAJ) a little bit...

    vvs

    unread,
    Jan 27, 2020, 7:32:19 AM1/27/20
    to Metamath
    Having only one additional "Volume" instead of two may address the concerns of vvs (see https://groups.google.com/d/msg/metamath/mnkBQV1_cXc/Xkrv9_9AAQAJ) a little bit...

    No, that wasn't my concern. My concern is that Metamath's collection of informal conventions and rules already goes way overboard. This looks extremely weird for software whose primary purpose is to formalize things.

    Instead of adding yet another informal agreement, why not just split the database into several files and put them into different name spaces? AFAIK there is already rudimentary support for including other files, and it would be worth extending it and enforcing its use formally through software. That way we could kill several birds with one stone: reduce the size of the main database section and separate different kinds of work into different sections. This would make git easier to use concurrently and would formally meet your concerns at the same time.
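
    For what it's worth, the Metamath language already has a file-inclusion statement that could express such a split (the file names below are hypothetical):

    $( set.mm reduced to a thin master file: $)
    $[ set-main.mm $]        $( main body $)
    $[ set-supplement.mm $]  $( supplementary material $)
    $[ set-mathboxes.mm $]   $( mathboxes $)
    $[ set-other-defs.mm $]  $( alternative definitions $)

    The remaining work would be tooling that enforces which files may depend on which.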

    Alexander van der Vekens

    unread,
    Jan 27, 2020, 8:33:39 AM1/27/20
    to Metamath
    I fully agree, splitting set.mm into smaller files would be very helpful and, in my opinion, possible since the split command is available. But this should be discussed in a separate thread. My proposal, however, could be a first step in this direction, splitting set.mm into 4 files according to my "Volumes"...

    fl

    unread,
    Jan 28, 2020, 11:40:24 AM1/28/20
    to meta...@googlegroups.com




    > And to be honest we have never given a "catalog" of definitions. The definitions in the topology part were useful and used to prove theorems contrary to what you say. 
    > Those in the algebra part were useful too. The rare ones that are useless can be easily removed since they were not used.

    In any case, I don't feel able to code a theorem like FLT that would use theorems from many areas of mathematics without explaining them anywhere. 
    I need to understand what I'm coding. And for that I need a well-structured, systematic and progressive book. I don't think I'm the only one.

    Focusing on the definition of a magma doesn't seem relevant to me. You've chosen the worst-case scenario. Apart from the magma, which doesn't even appear in the set.mm
    main part, I wonder what definitions sound so stupid to you.

    -- 
    FL

    fl

    unread,
    Jan 28, 2020, 11:56:46 AM1/28/20
    to Metamath
    And the challenge side of it bores me deeply.

    -- 
    FL

    Alexander van der Vekens

    unread,
    Jan 28, 2020, 1:53:14 PM1/28/20
    to Metamath
    I want to come back to the first post of this topic, and I want to concentrate on the "definition" parts (2 and 4):

    On Friday, January 24, 2020 at 5:58:14 AM UTC+1, Norman Megill wrote:
    ...
    2. A very important thing to me is the philosophy of striving for simplicity wherever possible.  This particularly impacts what definitions we choose.

    "Rules" for introducing new definitions


    Definitions are at the heart of achieving simplicity.  While developing set.mm, two rules of thumb that evolved in my own work were (1) there shouldn't be two definitions that mean almost the same thing and (2) definitions shouldn't be introduced unless there is an actual need in the work at hand.  Maybe I should add (3), don't formally define something that doesn't help proofs and can be stated simply in English in a theorem's comment.
    I fully agree with this in principle, but what does "almost the same", "actual need" and "help proofs" exactly mean? There are major discrepancies in how to interpret these terms.

    Concerning the 1st rule, very early on, when I was starting to work with ordinals, I noticed that many books would redefine "e."  (epsilon) as "<" and "C_" (subset) as "<_" (less than or equal) since those were the ordering roles they played in von Neumann ordinal theory.  I figured since books did it, it must be important, so I also did that in the early set.mm.  (I may have even used those symbols because reals were a distant vision at that point.)  Books casually interchanged them in theorems and proofs, but formalization meant I might have 2 or more versions of many theorems.  It was a constant nuisance to have to convert back and forth inside of proofs, making proofs longer.  I eventually decided to use just "e." and "C_", and it made formalizations much more pleasant with fewer theorems and shorter proofs, sometimes providing more general theorems that could be used outside of ordinal theory.  I never missed not having the ordering symbols to remind me of their roles.  I liked the feeling of being closer to the underlying set theory rather than being an abstraction layer away from it.  If the "less than" role of "e." was important for human understanding in a particular case, I could simply use the English language "less than" the comment (3rd rule) while continuing to use "e." in the $p statement.
    Yes, that's absolutely right. These would be exactly identical definitions for the same concept, only using different symbols, so here it is clear what "almost the same" means (namely "exactly the same"). In this case, the symbols "<" and "C_" are used to denote different things anyway. But do you think the definition of ">" would be "almost the same" as the available definition of "<"?

    As for the 2nd rule, there are often different choices that can be made in the details of how something is defined.  It is sometimes hard or impossible to anticipate what choice is optimal until we actually start using the definition for serious work.  There is also the question of whether the definition is even necessary.  More than once I've found that a definition I thought might be needed (for example was used in the literature proof) actually wasn't.  Sometimes, like the "e." vs.  "<" case above, the definition was even a hindrance and made proofs longer.  Other times an intermediate definition of my own invention that isn't in the literature turned out to be advantageous for shorter proofs.  It can be best to let the work drive the need for the definition and its precise details.

    Another example of a literature definition we purposely don't use and illustrates the 3rd rule is Takeuti/Zaring's Russell class with symbol "Ru", which they formally define as "{ x | x e/ x }".  The only theorem that uses it (in set.mm and in T/Z) is ~ ru.  For the purposes of set.mm, it is wasteful and pointless since we can just define the Russell class verbally in the comment of ~ ru.

    A problem that has arisen in the past is where a person has added a set of definitions from the literature, proved some simple property theorems, then is disappointed that the work isn't imported into the main set.mm.  Sometimes we do and sometimes we don't, but the main issue is whether the 2nd rule above is being followed.  Without knowing exactly how the definition is going to be applied, very often it won't be optimal and will have to be adjusted, and sometimes it is more efficient just to start from scratch with a definition in the form that is needed.  I prefer to see some "serious" theorems proved from a definition before importing it.
    Here again you use criteria that could be interpreted differently: what is "serious"? Is it the number of theorems using (directly or indirectly) a definition, or must it be a theorem from the 100 theorem list (or the list of theorems in Wikipedia) using the definition? Or is it sufficient that the theorem using the definition is mentioned in a book, maybe labelled as a theorem (or proposition etc.)?

    As a somewhat crude example of why a definition in isolation may not be optimal, suppose someone defined sine as "$a |- sin A = ..."  (not a straw man; this kind of thing has been attempted in the past) i.e. a 1-place function-like "sin A" rather than a stand-alone object "sin".  While this can be used to prove a few things, it is very limiting since the symbol "sin" in isolation has no meaning, and we can't prove e.g.  "sin :  CC --> CC".  It is an easy mistake to make if you don't understand the set-theoretical picture but are blindly copying a textbook definition.  Here it is pretty obvious, but in general such issues can be subtle and may depend on the application.  Also, there are often multiple ways to define something, some of which are easy to start from and others which would require a lot of background development.  The best choice can't always be anticipated without knowing what background will be available.
    Absolutely correct, but here we have rules for what (new) definitions should look like: they should usually be binary relations or functions/operations, with exceptions only in very specially justified cases - I think these are not yet documented (in the section "Conventions"), but these conventions for definitions came up as I started with Graph Theory, in a discussion (in this Google group) mainly with Mario. By such a convention, crude definitions as described above can be avoided.
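
    To spell out the sine contrast in Metamath terms (labels are hypothetical and set.mm's actual statements differ in detail):

    $( Limiting form: only the whole expression "sin A" is a class, so
       "sin" by itself is meaningless and "sin : CC --> CC" cannot even
       be stated. $)
    csinbad $a class sin A $.

    $( set.mm's approach: "sin" is itself a class (a function), applied
       with the function-value operator as ( sin ` A ), so its type can
       be proved: $)
    csin $a class sin $.
    sinf $p |- sin : CC --> CC $=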


    3. Let me comment on the recent proposal to import df-mgm, df-asslaw, df-sgrp.  These are my opinions, and correct me if I am wrong.

    df-mgm:  A magma is nothing but an empty shell with a binary operation.  There are no significant theorems that follow from it.  All the properties of binary operations that we have needed have already been proven.  If additional properties are needed in the future it seems it would be better to state them directly because they will be more useful generally.  Just because it appears in Bourbaki or other texts is not to me a justification:  it also needs to be useful and be driven by an actual need for it.  There are many definitions in textbooks that have turned out to be unnecessary.
    We already have several theorems valid for magmas, which are proven for monoids, or at least in the context of monoids (because we have nothing else). Proving them for monoids, however, could confuse a reader, who cannot see whether an identity is required for the proof or not. Additionally, the term "magma" is used 20 times in main set.mm, so it has some significance. Furthermore, the definition of a monoid could be based on the definition of a magma (or semigroup, see below), as is done for the definition of groups based on the definition of monoids, e.g.
    MndALT1 = { g e. Mgm | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( A. x e. b A. y e. b A. z e. b ( ( x p y ) p z ) = ( x p ( y p z ) ) /\ E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) ) }
    or using the definition of a semigroup
    MndALT2 = { g e. SGrp | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) }

    For me, these are already serious reasons to have these definitions. To be concrete, however, would it be sufficient to prove the statement in ~ xrs1mnd that RR*s is neither a monoid nor a semigroup (it is "only" a magma) to justify the definition of a magma?


    df-asslaw:  All this is doing is encapsulating the class of associative operations.  The only thing that can be derived from it AFAIK is the associative law.  I don't see how it would benefit proofs since additional work is needed to convert to and from it.  I don't see how it helps human understanding because it is easier for a human to see "show the associative law" in the description of a theorem than be asked to learn yet another definition (an example of the 3rd rule).
    The main reason why I (still) want to introduce these definitions is that they can be used as building blocks for very different structures. If you look at Wikipedia (e.g. the article "Magma"), you can find a table classifying all "Group-like structures" - the "law"-theorems correspond to the columns of this table. By these, you could characterize a monoid by three of these laws. Instead of defining a monoid as it is done today by
    df-mnd $a |- Mnd = { g | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( A. x e. b A. y e. b A. z e. b ( ( x p y ) e. b
                /\ ( ( x p y ) p z ) = ( x p ( y p z ) ) ) /\ E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) ) }
    you do not see immediately that it comprises closure, associativity and the existence of an identity! This would be different with a definition like
    df-mndalt $a |- MndALT = { g | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( p clLaw b /\ p assLaw b /\ p idLaw b ) }

    Concerning the human understanding, we have (and we keep) corresponding theorems, e.g.
    mndass $p |- ( ( G e. Mnd /\ ( X e. B /\ Y e. B /\ Z e. B ) ) -> ( ( X .+ Y ) .+ Z ) = ( X .+ ( Y .+ Z ) ) ) $=
    But in parallel, we could have theorems in the short form, e.g.:
    mndasslaw $p |- ( G e. MndALT -> .+ assLaw B ) $=


    df-sgrp:  This is basically df-asslaw converted to an extensible structure.  So in essence it duplicates df-asslaw (or vice versa depending on your point of view), violating the first rule (no redundancy).  More importantly, AFAIK there is nothing that can be derived from it except the associative law, unlike say df-grp where deep and non-obvious theorems are possible.  Yes, it can be used as a "layer" on top of which we can slightly simplify df-mnd and df-rng, but is it really worth having an entire, otherwise useless definitional layer just to avoid having to state "(x+y)+z=x+(y+z)"?  The fact that it simplifies two definitions is a slight positive, but I'm still torn about it.  It doesn't quite pay for itself in terms of reducing set.mm size, and it increases the burden on the reader who has to learn another definition and drill down another layer to understand df-mnd and df-rng.  Personally I'd prefer to see "(x+y)+z=x+(y+z)" directly because it is immediately obvious without having to look at a deeper layer.  In the case of the description in df-rng0, it would be more direct and less obscure to state "associative operation" instead of "semigroup operation", since most people (at the level of studying that definition) would know what "associative" means, but fewer would know what "semigroup" means.
    In my opinion, this is not a duplication: df-asslaw would be part of df-sgrp, characterizing the operation of the extensible structure. Concerning the reader: yes, for some it is a burden, for others it is a pleasure to have a stringent hierarchy of definitions. Finally, I think there will be a lot of serious theorems which could be proven (there is a complete "Semigroup Theory"). Waiting until such theorems are formally proven within set.mm is not necessary in this case, because basic theorems can already be provided - and they cannot be wrong, because they are also valid for monoids in their currently defined form.

    I can't say that these will never be imported.  Possibly something will drive a need for them, maybe something related to category theory where even a magma might be a useful object.  But I would want to see the definitions be driven by that need when it arises; we don't even know yet whether the present extensible structure form can be adapted for that purpose.  It is fine to store them in a mathbox indefinitely, but I'd like to see a true need driving their use before importing.


    4. As for adding an exhaustive list of all possible definitions one can find in Bourbaki or whatever, as someone suggested, I don't think something like that belongs in set.mm, for all the reasons above.  There are other resources that already list these (Wikipedia, Planetmath, nLab).  Their precise formalization will depend on the context in which they are needed in set.mm, which is unknown until they are used.  In isolation (perhaps with a couple of property theorems that basically check syntax) there is no guarantee that they are correct, even in set.mm.  To restate, I think the philosophy should be that definitions should be added based on need, not as busywork.  Adding a definition when needed is one of the smallest parts of building a series of significant proofs.  It doesn't have to be done in advance, and I don't think it is generally productive or useful to do so.
    This is also something we can agree on immediately: Definitions without usage (i.e. without theorems, or only with theorems for syntax checks) are not useful (within set.mm) and should not be contained in the main body of set.mm.

    In summary, we agree in most cases, and differences are only in the interpretation of the rules. Regarding the "law" definitions and definitions for magmas, semigroups and (non-unital) rings, I still would like to have them in the main body of set.mm, making it
    more concise and consistent (especially if you look at the theorems with labels containing "rng" which are valid for unital rings only, which might be confusing). From my point of view, the benefits for a user are greater than the disadvantages.

    Olivier Binda

    unread,
    Jan 28, 2020, 5:14:21 PM1/28/20
    to meta...@googlegroups.com
    If metamath/mm0 become even better, I will like that a lot though :)

    Best regards,
    Olivier

    Norman Megill

    unread,
    Jan 29, 2020, 11:07:40 PM1/29/20
    to Metamath
    On Tuesday, January 28, 2020 at 1:53:14 PM UTC-5, Alexander van der Vekens wrote:
    I want to come back to the first post of this topic, and I want to concentrate on the "definition" parts (2 and 4):

    Thanks for your comments!
     

    On Friday, January 24, 2020 at 5:58:14 AM UTC+1, Norman Megill wrote:
    ...
    2. A very important thing to me is the philosophy of striving for simplicity wherever possible.  This particularly impacts what definitions we choose.

    "Rules" for introducing new definitions


    Definitions are at the heart of achieving simplicity.  While developing set.mm, two rules of thumb that evolved in my own work were (1) there shouldn't be two definitions that mean almost the same thing and (2) definitions shouldn't be introduced unless there is an actual need in the work at hand.  Maybe I should add (3), don't formally define something that doesn't help proofs and can be stated simply in English in a theorem's comment.
    I fully agree with this in principle, but what do "almost the same", "actual need" and "help proofs" mean exactly? There are major discrepancies in how to interpret these terms.

    I agree they are vague at this point, partly to initiate discussion, and partly because there will always be borderline cases where a subjective decision is required.  Also, perhaps I should have called them "guidelines" rather than "rules" (they certainly aren't official rules at this point).

    "Almost the same" is somewhat subjective, but "B > A" the very clearly the same as "A < B" except for argument order.  Similarly, we don't introduce backwards epsilon even though we verbally sometimes say "B contains A" to mean "A e. B".  We don't try to introduce a new notation for every possible phrase that occurs in a comment.  Instead, we explain it in the comment when necessary or when the notation is introduced.  See e.g. the comment of ~wel where we describe 4 different English phrases for "e." but don't introduce 4 notations.

    "Actual need" - If we want a strict, well-defined rule, we could require that the definition reduces the file size of set.mm (or will reduce it eventually with its expected usage).  That's probably too strict though.  A better meaning of "actual need" would be a definition whose need is driven by the work in progress in order to simplify that work.  Sometimes it will be a definition that might not even be in the literature, and sometimes literature definitions will not be needed.

    "Helps proofs" means the definition should ideally lead to shorter proofs.  For example, defining A C_ B (subset) clearly makes proofs simpler by not having to work with A. x ( x e. A -> x e. B ) all the time.  If we replaced A C_ B by its definiens everywhere, many proofs would be significantly larger.


    Concerning the 1st rule, very early on, when I was starting to work with ordinals, I noticed that many books would redefine "e."  (epsilon) as "<" and "C_" (subset) as "<_" (less than or equal) since those were the ordering roles they played in von Neumann ordinal theory.  I figured since books did it, it must be important, so I also did that in the early set.mm.  (I may have even used those symbols because reals were a distant vision at that point.)  Books casually interchanged them in theorems and proofs, but formalization meant I might have 2 or more versions of many theorems.  It was a constant nuisance to have to convert back and forth inside of proofs, making proofs longer.  I eventually decided to use just "e." and "C_", and it made formalizations much more pleasant with fewer theorems and shorter proofs, sometimes providing more general theorems that could be used outside of ordinal theory.  I never missed not having the ordering symbols to remind me of their roles.  I liked the feeling of being closer to the underlying set theory rather than being an abstraction layer away from it.  If the "less than" role of "e." was important for human understanding in a particular case, I could simply use the English language "less than" the comment (3rd rule) while continuing to use "e." in the $p statement.
    Yes, that's absolutely right. These would be exactly identical definitions for the same concept, only using different symbols, so here it is clear what "almost the same" means (namely "exactly the same"). In this case, the symbols "<" and "C_" are used to denote different things anyway. But do you think the definition of ">" would be "almost the same" as the available definition of "<"?

    Absolutely.  It is exactly the same except for the order of the arguments.  We could, and perhaps should, add to the comment of df-ltxr something like, "In theorem descriptions, we occasionally use the terminology 'A is greater than B' to mean 'B is less than A'."  We haven't done so because literally no one has expressed confusion about what 'A is greater than B' means in a comment when the theorem itself says 'B < A'.  Elementary school children are expected to understand that; their books have word problems that interchange terminology like that all the time.
     


    As for the 2nd rule, there are often different choices that can be made in the details of how something is defined.  It is sometimes hard or impossible to anticipate what choice is optimal until we actually start using the definition for serious work.  There is also the question of whether the definition is even necessary.  More than once I've found that a definition I thought might be needed (for example was used in the literature proof) actually wasn't.  Sometimes, like the "e." vs.  "<" case above, the definition was even a hindrance and made proofs longer.  Other times an intermediate definition of my own invention that isn't in the literature turned out to be advantageous for shorter proofs.  It can be best to let the work drive the need for the definition and its precise details.

    Another example of a literature definition we purposely don't use, and one that illustrates the 3rd rule, is Takeuti/Zaring's Russell class with symbol "Ru", which they formally define as "{ x | x e/ x }".  The only theorem that uses it (in set.mm and in T/Z) is ~ ru.  For the purposes of set.mm, it is wasteful and pointless since we can just define the Russell class verbally in the comment of ~ ru.
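    That is, rather than introducing the formal layer
    $( Hypothetical label for T/Z's definition, which set.mm omits. $)
    df-ru $a |- Ru = { x | x e/ x } $.
    set.mm states Russell's paradox directly, roughly as
    ru $p |- { x | x e/ x } e/ _V $=
    with "Russell class" appearing only in the comment.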

    A problem that has arisen in the past is where a person has added a set of definitions from the literature, proved some simple property theorems, then is disappointed that the work isn't imported into the main set.mm.  Sometimes we do and sometimes we don't, but the main issue is whether the 2nd rule above is being followed.  Without knowing exactly how the definition is going to be applied, very often it won't be optimal and will have to be adjusted, and sometimes it is more efficient just to start from scratch with a definition in the form that is needed.  I prefer to see some "serious" theorems proved from a definition before importing it.
    Here again you use criteria which could be interpreted differently: what is "serious"?

    This is also subjective, but I would say "serious" means that the definition leads to something other than just restatements of its properties, i.e., something other than what I would call "shallow" theorems.  For example, assLaw can't prove anything deeper than the associative law AFAIK, which is an immediately obvious consequence.  Something like uniqueness of the identity element in Grp might be an example that is slightly beyond "shallow", since it isn't immediately obvious to everyone.
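    For the record, that uniqueness statement has roughly the shape
    $( Uniqueness of the two-sided identity in a group; something like ~ grpideu. $)
    grpideu $p |- ( G e. Grp -> E! u e. B A. x e. B ( ( u .+ x ) = x /\ ( x .+ u ) = x ) ) $=
    and proving it takes an actual (if short) argument rather than just unfolding the definition.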
     
    Is it the number of theorems using (directly or indirectly) a definition,

    Not in itself.  We could add a theorem for ">" reproducing every theorem now using "<", and many more with all possible combinations of the two, but that wouldn't justify introducing a definition for it.
     
    or must it be a theorem from the 100 theorem list (or the list of theorems in Wikipedia) using the definition?
     
    No, it doesn't have to.
     
    Or is it sufficient that the theorem using the definition is mentioned in a book, maybe labelled as a theorem (or proposition, etc.)?

    No, that isn't sufficient (or necessary).

    I would say the most important thing is that it involves material that is actively being worked on, and its introduction is driven by that work and practically essential for its continued development.  Sometimes this may even mean that we introduce definitions that aren't in any books.
     

    As a somewhat crude example of why a definition in isolation may not be optimal, suppose someone defined sine as "$a |- sin A = ..."  (not a straw man; this kind of thing has been attempted in the past) i.e. a 1-place function-like "sin A" rather than a stand-alone object "sin".  While this can be used to prove a few things, it is very limiting since the symbol "sin" in isolation has no meaning, and we can't prove e.g.  "sin :  CC --> CC".  It is an easy mistake to make if you don't understand the set-theoretical picture but are blindly copying a textbook definition.  Here it is pretty obvious, but in general such issues can be subtle and may depend on the application.  Also, there are often multiple ways to define something, some of which are easy to start from and others which would require a lot of background development.  The best choice can't always be anticipated without knowing what background will be available.
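    Schematically, the difference is between
    $( Hypothetical "bad" form: "sin" has no meaning in isolation.  Both right-hand sides are elided. $)
    df-sinBAD $a |- sin A = ... $.
    and the stand-alone object actually used,
    df-sin $a |- sin = ( x e. CC |-> ... ) $.
    where "sin" denotes a set in its own right, so that "sin : CC --> CC" is even expressible, let alone provable.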
    Absolutely correct, but here we have rules for how (new) definitions should look: they usually should be binary relations or functions/operations, with exceptions only in very specially justified cases. I think these are not documented yet (in section "Conventions"), but these conventions for definitions came up as I started with Graph Theory, in a discussion (in this Google group) mainly with Mario. With such a convention, crude definitions as described above can be avoided.


    3. Let me comment on the recent proposal to import df-mgm, df-asslaw, df-sgrp.  These are my opinions, and correct me if I am wrong.

    df-mgm:  A magma is nothing but an empty shell with a binary operation.  There are no significant theorems that follow from it.  All the properties of binary operations that we have needed have already been proven.  If additional properties are needed in the future it seems it would be better to state them directly because they will be more useful generally.  Just because it appears in Bourbaki or other texts is not to me a justification:  it also needs to be useful and be driven by an actual need for it.  There are many definitions in textbooks that have turned out to be unnecessary.
    We already have several theorems valid for magmas which are proven for monoids, or at least in the context of monoids (because we have nothing else). Proving them for monoids, however, could confuse a reader, because he does not see whether an identity is required for the proof or not. Additionally, the term "magma" is used 20 times in the main part of set.mm, so it has some significance. Furthermore, the definition of a monoid could be based on the definition of a magma (or semigroup, see below), as is done for the definition of groups based on the definition of monoids, e.g.
    MndALT1 = { g e. Mgm | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( A. x e. b A. y e. b A. z e. b ( ( x p y ) p z ) = ( x p ( y p z ) ) /\ E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) ) }
    or using the definition of a semigroup
    MndALT2 = { g e. SGrp | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) }

    Unless they are used for many things other than monoids, I don't see the advantage of either of these over just df-mnd.  Neither magma nor semigroup has any "serious" (non-shallow) consequences AFAIK.  The overhead of introducing them just to define 2 layers under monoid is far more than the overhead of just df-mnd by itself.

    Finally, df-mnd is self-contained allowing the reader to grasp it immediately, and the comment in df-mnd explains it very clearly, so I think df-mnd is much less of a burden on the reader, especially one who has never heard of magma and semigroup and has no reason to care about them.


    For me, these are already serious reasons to have these definitions. To be concrete, however, would it be sufficient to prove the statement in the comment of ~ xrs1mnd that RR*s is neither a monoid nor a semigroup (it is "only" a magma) to justify the definition of a magma?


    df-asslaw:  All this is doing is encapsulating the class of associative operations.  The only thing that can be derived from it AFAIK is the associative law.  I don't see how it would benefit proofs since additional work is needed to convert to and from it.  I don't see how it helps human understanding because it is easier for a human to see "show the associative law" in the description of a theorem than be asked to learn yet another definition (an example of the 3rd rule).
    The main reason why I (still) want to introduce these definitions is that they can be used as building blocks for very different structures.

    Then I think we should wait until we have a need for those structures.  We don't right now.  If and when it makes sense to change df-mnd to incorporate them, we can do it then.  This was done with df-grp to incorporate monoid.  It's a relatively trivial retrofit.
     
    If you look at Wikipedia (e.g. the article "Magma"), you can find a table classifying all "Group-like structures" - the "law"-theorems correspond to the columns of this table.

    The article also mentions about 25 variations of magmas obtained by adding various properties.  Many of them don't even link to Wikipedia pages, suggesting they are probably not widely used if used at all.  I don't think they should be added to set.mm until a need for them arises, driven by work in progress.

     
    Using these, you could characterize a monoid by three of these laws. Instead of defining a monoid as it is done today by
    df-mnd $a |- Mnd = { g | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( A. x e. b A. y e. b A. z e. b ( ( x p y ) e. b
                /\ ( ( x p y ) p z ) = ( x p ( y p z ) ) ) /\ E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) ) }
    you do not see immediately that it contains closure, associativity and the existence of an identity! This would be different with a definition like

    The user knows from the description and from the property theorems that are referenced.  But even without that, I do see all of these properties pretty immediately - they're right there in the definition: ( x p y ) e. b for closure,  ( ( x p y ) p z ) = ( x p ( y p z ) ) for associativity, and E. e ... ( ( e p x ) = x /\ ( x p e ) = x ) for left and right identity.
     
    df-mndalt $a |- MndALT = { g | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. ( p clLaw b /\ p assLaw b /\ p idLaw b ) }

    The user would not know what this means without looking up all the auxiliary definitions.  It isn't obvious that idLaw means both left and right identity.  If avoiding the quantifiers that clutter df-mnd is the goal, they are going to encounter those anyway when drilling down to learn the extra definitions.

    To me, these kinds of definitions are better explained in the English description.  They are more things to burden the reader with learning.  The description of df-mnd very clearly states these properties.
     

    Concerning human understanding, we have (and we keep) corresponding theorems, e.g.
    mndass $p |- ( ( G e. Mnd /\ ( X e. B /\ Y e. B /\ Z e. B ) ) -> ( ( X .+ Y ) .+ Z ) = ( X .+ ( Y .+ Z ) ) ) $=
    But in parallel, we could have theorems in the short form, e.g.:
    mndasslaw $p |- ( G e. MndALT -> .+ assLaw B ) $=


    df-sgrp:  This is basically df-asslaw converted to an extensible structure.  So in essence it duplicates df-asslaw (or vice versa depending on your point of view), violating the first rule (no redundancy).  More importantly, AFAIK there is nothing that can be derived from it except the associative law, unlike say df-grp where deep and non-obvious theorems are possible.  Yes, it can be used as a "layer" on top of which we can slightly simplify df-mnd and df-rng, but is it really worth having an entire, otherwise useless definitional layer just to avoid having to state "(x+y)+z=x+(y+z)"?  The fact that it simplifies two definitions is a slight positive, but I'm still torn about it.  It doesn't quite pay for itself in terms of reducing set.mm size, and it increases the burden on the reader who has to learn another definition and drill down another layer to understand df-mnd and df-rng.  Personally I'd prefer to see "(x+y)+z=x+(y+z)" directly because it is immediately obvious without having to look at a deeper layer.  In the case of the description in df-rng0, it would be more direct and less obscure to state "associative operation" instead of "semigroup operation", since most people (at the level of studying that definition) would know what "associative" means, but fewer would know what "semigroup" means.
    In my opinion, this is not a duplication: df-asslaw would be part of df-sgrp, characterizing the operation of the extensible structure. Concerning the reader: yes, for some it is a burden, for others it is a pleasure to have a stringent hierarchy of definitions.

    This is a good point.  How do we choose which reader to satisfy?  Mathematicians who already know the stuff probably would find the hierarchy more pleasing, and readers who are trying to learn unfamiliar material would find more definitions to learn a burden.

     
    Finally, I think there will be a lot of serious theorems which could be proven (there is a complete "Semigroup Theory"). To wait until such theorems are formally proven within set.mm is not necessary, in this case, because basic theorems can already be provided - and they cannot be wrong, because they are also valid for monoids as currently defined.

    I can't say that these will never be imported.  Possibly something will drive a need for them, maybe something related to category theory where even a magma might be a useful object.  But I would want to see the definitions be driven by that need when it arises; we don't even know yet whether the present extensible structure form can be adapted for that purpose.  It is fine to store them in a mathbox indefinitely, but I'd like to see a true need driving their use before importing.


    4. As for adding an exhaustive list of all possible definitions one can find in Bourbaki or whatever, as someone suggested, I don't think something like that belongs in set.mm, for all the reasons above.  There are other resources that already list these (Wikipedia, Planetmath, nLab).  Their precise formalization will depend on the context in which they are needed in set.mm, which is unknown until they are used.  In isolation (perhaps with a couple of property theorems that basically check syntax) there is no guarantee that they are correct, even in set.mm.  To restate, I think the philosophy should be that definitions should be added based on need, not as busywork.  Adding a definition when needed is one of the smallest parts of building a series of significant proofs.  It doesn't have to be done in advance, and I don't think it is generally productive or useful to do so.
    This is also something we can agree on immediately: Definitions without usage (i.e. without theorems or only with theorems for syntax checks) are not useful (within set.mm) and should not be contained in the main body of set.mm.

    In summary, we agree in most cases, and differences are only in the interpretation of the rules. Regarding the "law" definitions and definitions for magmas, semigroups and (non-unital) rings, I still would like to have them in the main body of set.mm, making it more concise and consistent (especially if you look at the theorems with labels containing "rng" which are valid for unital rings only, which might be confusing). From my point of view, the benefits for a user are greater than the disadvantages.

    "rng" was chosen for the label abbreviation because I wasn't aware that non-unital rings had their own symbol.  It is a simple substitution to change the names.  We still have no application driving the introduction of Rng.

    Norm

    fl

    unread,
    Jan 30, 2020, 7:57:57 AM1/30/20
    to Metamath

    I'm fascinated by discussions about usefulness. For if there is one theorem that is perfectly useless, it is Fermat's last theorem.
    No one knew if it was true for three centuries and no one had a problem with it.

    So let's be honest: "useful" means that Norm finds something interesting and "useless" means that Norm finds that something is not interesting.

    The problem, again, is that Fermat's theorem is a big thing and it's laid out in a way that will make it difficult to collaborate.

    You should at least try to identify the concepts or theories that are involved in the proof.

    -- 
    FL

    Thierry Arnoux

    unread,
    Jan 30, 2020, 8:01:34 AM1/30/20
    to meta...@googlegroups.com

    Hi all,

    Here is my take on the "goals and philosophy" for set.mm:

    Overall, I'm also seeing the end goal as set.mm covering "all of mathematics", which means I would not exclude any theorem.

    As for the audience, I think the goal shall be that any interested person, with even limited background, can grasp our definitions. So, they shall be as simple as possible, and rely on basic concepts whenever possible. That does not prevent us from providing alternative definitions or equivalents relying on more advanced or less-used concepts.

    About which theorems shall enter the main part of set.mm, I think Norm and Mario's philosophy of "based on need" is good. To give a concrete example, when I wanted to prove general things about probability theory, with the help of people on the mailing list, I found out I needed to define the Bochner integral, which required defining the canonical homomorphism from the real numbers `RRHom`, which in turn required extension by continuity `CnExt` and uniform structures `UnifSt` and `metUnif`, in turn requiring pseudometrics `PsMet`.

    So, I ended up adding pseudo-metrics to set.mm, not in order to be exhaustive, but because I had a practical end-goal for that. I have not yet come back to the Bochner integral, but there are several interesting theorems about extension by continuity I'm happy I added, and which will probably be useful for other things.

    Just like for groups, there are other generalizations of metrics besides pseudometrics (quasi-, meta-, semi-, pre-, etc.); I introduced only pseudometrics, and for a good reason. When introducing the concept of pseudometric, I did not change the definition of metric so that metric would be seen as a specialization of pseudometrics, but kept the generic definition and just added the generalization theorem ~xmetpsmet.

    More generally about new definitions, I also think interesting definitions are those which summarize complex concepts into a simpler form. In the example given by Norm, ` A C_ B ` stands for ` ( A i^i B ) = A ` (~df-ss) or ` A. x ( x e. A -> x e. B ) ` (~dfss2). The form ` A C_ B ` is shorter and more convenient than both, and is widely used. In ` A > B <-> B < A `, the new form "greater than" is exactly as long and as convenient as ` B < A `; I see no added value in using it.

    It may still be interesting to define it, just to express "how it would look if we had to define this concept in set.mm", but I believe we shall explain in the comments why we don't use it in set.mm, and refer to the "less than" definition.

    Another example which comes to my mind is "countable". I have a few theorems about countable sets in my mathbox. We could imagine defining the class of countable sets, like `Count = { x | x ~<_ _om }`. However, stating that a set is countable using `A e. Count` would be just as long and convenient as stating `A ~<_ _om`. I think this is another good example where we don't need to introduce a new definition, although it may not be obvious at first (it was not to me) that `A ~<_ _om` means "A is countable", because none of the concepts there (dominance relation, natural numbers) are simple concepts.

    Concerning the structure of set.mm, while I like the idea of adding definitions in a specific section, I'm afraid that this would break its overall structure. We could consider another option, where "other concepts" not developed in set.mm would still be in the corresponding chapter (like putting magmas in the algebra section, just defined, not developed).

    BR,
    _
    Thierry

    fl

    unread,
    Jan 30, 2020, 8:59:05 AM1/30/20
    to Metamath

    Arxiv query on the presence of the word "semigroup" in the abstract.


    Usefulness is a relative concept.

    -- 
    FL

    fl

    unread,
    Jan 30, 2020, 12:27:25 PM1/30/20
    to Metamath
    I also notice that in a project like Mizar, having a new structure is never a problem for them. Only here
    there's a lot of debate about whether we should have a module or a semigroup.

    Gosh!

    -- 
    FL

    fl

    unread,
    Jan 30, 2020, 12:51:14 PM1/30/20
    to Metamath
    Arxiv query on the presence of the word "monoid" in the abstract.

    3,532 results.


    semigroup is more frequent.

    -- 
    FL

    fl

    unread,
    Jan 30, 2020, 1:33:25 PM1/30/20
    to Metamath

    semigroup is more frequent.


    If I remember correctly, it was not monoids that were in the hierarchy, it was semigroups. They were replaced without explanation.
     
    -- 
    FL

    Alexander van der Vekens

    unread,
    Jan 31, 2020, 2:01:03 AM1/31/20
    to Metamath
    For the last week I have been "actively working" on providing "applications" of the definitions and related theorems for magmas and semigroups (I've spent almost all my spare time on it). Although the results may be trivial, they are "serious", at least in my opinion.

    I've provided the results in my latest two PRs (they can be found in my mathbox). Here they are (in brief):
    • There is a small magma (with a base set containing two elements) which is not a semigroup: ~mgm2nsgrp
    • There is a small semigroup (with a base set containing two elements) which is not a monoid: ~sgrp2nmnd
    • The previous results prove the proper inclusion chain `Mnd C. SGrp C. Mgm`: ~mndsssgrp and  ~sgrpssmgm
    • The (additive group of the) extended reals is a magma, but not a semigroup. Formalization of the statements in the comments of ~xrsmcmn and ~xrs1mnd: ~xrsmgmdifsgrp
    • A stronger version of ~ gsumpropd if at least one of the involved structures is a magma, see ~gsumpropd2: ~gsummgmpropd
    Especially the proof of ~sgrp2nmnd was time-consuming, although not difficult/complex: 32 "sums" had to be "calculated" to show the associativity (see ~sgrp2nmndlem4).

    Now I want to ask if these results are sufficient to move the definitions and basic theorems for magmas and semigroups (and the results themselves, of course) to the main body of set.mm? From my point of view, they are, at least in their plain variants (~df-mgmALT, ~df-sgrp), without using the "law" definitions (although I like them very much, there is still no "serious" use of them, and they can be introduced easily later when they are really required).  And afterwards, many theorems proven for/in the context of monoids could be tidied up accordingly.

    For rings (non-unital vs. unital) I plan to provide similar results, maybe also a result in category theory ("the category of rngs has a zero object while the category of rings does not" as proposed by Benoit) - but unfortunately, even the category of rings is not available in set.mm yet...

    savask

    unread,
    Jan 31, 2020, 5:11:15 AM1/31/20
    to Metamath
    I'm definitely not a regular contributor to set.mm, yet the topic discussed is so interesting that I would like to comment on it.

    I think defining the "goal of set.mm" is of little use, since goals must be achievable and things like "formalize all of mathematics" imply that either mathematics will cease to develop or all created mathematics will somehow find a place in set.mm, which doesn't sound very realistic. We don't see software libraries containing all possible functions and procedures or mathematical treatises covering the whole of mathematics (even "Elements of mathematics" don't come close), so in my opinion neither should set.mm try to formalize the whole body of mathematical knowledge.

    I personally would like to see set.mm as an attempt to formalize "standard" mathematics, to form a mathematical "core library" which can be used to leverage the formalization of other fields. It is mostly old mathematics for sure, but even formalizing basics of algebra, analysis, topology, geometry etc. looks like a worthwhile goal for set.mm. I think the Metamath 100 list serves as a good guide to which topics should be covered and right now it is still a stable source of feasible tasks for formalization. I don't remember if this has been suggested before, but we could also form our own, say, "Metamath 200" list which will contain other results metamath community deems important.

    As for moving contributions to the main part, I fully support the "move when used by someone else" strategy. As Alexander demonstrated, it is hard to judge whether something is "serious" or "important" without being subjective, so probably strict rules like "move on use" are more preferable. Recently it was agreed that Metamath 100 results are also important enough to be moved into the main part, and I think these rules are explicit enough to reduce any disputable judgements to a minimum. There are certain cases when "big" results are moved into the main part as an exception from mentioned rules. Since such cases seem to generate much discussion, maybe those rules should be enforced in all cases? That might slow down the accumulation of results in the main part, but at least it will be fair to everyone.

    Another solution would be to follow a fixed set of books (say, Bourbaki), since it would automatically solve the problem of structuring sections, choosing definitions and determining which results belong to the main part (I think FL suggested that before). I don't think it is that easy to pull that off in practice; for instance, many definitions in set.mm are tweaked to fit formalization needs (so they don't look like their mathematical counterparts). I still think the idea is important, and maybe we could adapt it in a modified way. Say, we prescribe a set of books to each section (for example, Lang for abstract algebra, Munkres for general topology, etc.), we try to structure these sections according to the book chosen and we import results to the main part if they match some theorem/proposition/etc. from the book. That still leaves the question of how all definitions coming from the chosen book should be formalized in set.mm, but at least it puts the burden of deciding whether something is useful on the book author :-)

    Finally, I would like to comment on David's catalogue of definitions. Maybe instead of moving it to the end of set.mm and adding "discouraged" tags, it could be a separate database importing set.mm? It seems like the goal of cataloguing widely used symbols is not essential to set.mm, so keeping the unused definitions list in the main database is unnecessary. But, the catalogue could be a nice project by itself, which could benefit from being a separate database (for instance, one can use a completely different sectioning, say, alphabetical resembling a handbook). Also it might be one of the first good cases for using database imports.

    David A. Wheeler

    unread,
    Jan 31, 2020, 11:13:06 AM1/31/20
    to metamath, Metamath
    Given this discussion, I plan to keep the definitions for ">" etc. in my mathbox
    and drop the PR to move it into the main body of set.mm.
    It can still be referred to in comments, of course.

    I'll probably do some minor work in my mathbox to show the utility
    of "definitions not in the main body". My hope is that this
    will convince others later that there's value in doing so.
    But that's for another time, and clearly not accepted at this time.

    --- David A. Wheeler

    Norman Megill

    unread,
    Jan 31, 2020, 6:39:46 PM1/31/20
    to Metamath
    Your counterexamples to show non-membership are interesting.  I appreciate your work doing this, and certainly it would accompany the magma and semigroup structures should we decide to import them.  However, showing the proper subset relationship in and of itself still doesn't quite show us that we have a true need for these structures.

    For example, if we defined EE as the set of even numbers, we could have a theorem showing it is a proper subset of ZZ, but such a theorem isn't enough in itself to justify introducing it.  We would want a sufficient number of theorems about even numbers whose statements would become significantly shorter and clearer.  That hasn't happened and probably won't.  Just because the terminology "even number" appears in the literature doesn't mean we have to have a special symbol for it; instead, "2 || N" suffices.
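    To sketch it (both labels are hypothetical):
    df-even $a |- EE = { n e. ZZ | 2 || n } $.
    evenssz $p |- EE C. ZZ $=
    The proper-subset theorem would be true but shallow; unless "EE" made a good number of statements significantly shorter, the definition wouldn't pay for itself.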

    Regarding Bourbaki:  A long time ago, before Metamath, I eagerly and naively started off trying to learn the Theory of Sets before knowing any formal logic or set theory.  Its bizarre "assemblies" (tau notation) are extremely inefficient and seem to ignore all the developments in FOL and axiomatic set theory that were already essentially mature and standard in the rest of the world.  With much effort and the assistance of a computer program I wrote, I worked out the thousands of symbols needed to define the empty set.  Then I gave up when I realized the number 1 would be unfeasible (it turns out to require trillions of symbols).  Even with intermediate definitions to shorten things, I was unable to gain an intuitive understanding of what was going on.

    After I learned "real" set theory, I became somewhat less than enamored with Bourbaki, so I mention this experience in the spirit of full disclosure. :)  With that said, the feeling I get is that Bourbaki sometimes does its own thing that may or may not appear in mainstream mathematical literature.  My personal preference is not to use Bourbaki as a primary reference for terminology and notation, unless it is also standard in modern textbooks.

    Two relatively modern and well-regarded textbooks are Lang's "Undergraduate algebra" (~400 pp) and his graduate-level "Algebra" (900 pp).  The first book starts off defining "group", and the second starts with "monoid".  Neither book mentions "magma" or "semigroup".  Even "monoid" is apparently a topic too specialized for Lang's undergraduate book, which doesn't mention it at all.

    I would like to hear other opinions on whether "magma" and "semigroup" should become part of the main set.mm body.

    Norm




    Jim Kingdon

    unread,
    Jan 31, 2020, 8:19:00 PM1/31/20
    to Norman Megill, Metamath


    On January 31, 2020 3:39:46 PM PST, Norman Megill <n...@alum.mit.edu> wrote:

    >I would like to hear other opinions on whether "magma" and "semigroup"
    >should become part of the main set.mm body.

    I don't have a strong opinion one way or the other. I suspect Norm is right that these are a whole lot more obscure than groups and rings and fields and the other usual suspects. On the other hand, I guess things are arranged so that a proof about groups doesn't really need to know about magmas and semigroups? (Sorry if that is naive, I haven't done much with extensible structures myself). That might point us to a "harmless even if not strictly necessary" stance.

    Benoit

    unread,
    Jan 31, 2020, 8:23:08 PM1/31/20
    to Metamath
    Regarding the possible introduction in the main part of semigroups and magmas:
    When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy.  The abundance of parentheses and conjunctions makes it hard to parse.  Did you notice that the clause stating closedness is actually quantified over an extra z ? (This is of course innocuous, but still strange.)  I find it much easier to look successively at http://us2.metamath.org/mpeuni/df-mgmALT.html and http://us2.metamath.org/mpeuni/df-sgrp.html and [the definition of Monoid from Semigroup -- Alexander: can you add it?].  So, to me, there is already a pedagogical benefit, even if no theorems are proved.  I find these piecemeal definitions easier to learn.  Once you grab the final concept, you can forget about the intermediate steps. I.e., you do not have to remember what a magma or semigroup is, if you don't want.  This was only an aid in the process of getting the definition of a group.

    Regarding Norm's remark on Bourbaki: Indeed, their set theory is known to be awkward.  None of its members were logicians or set theorists, and they just wanted to "get this done" before moving to what they considered more interesting math.  By the way: you actually use a lot of Bourbaki's terminology and notation in set.mm: the symbol for the empty set, the blackboard bold typeface, the terms "ball" and "sphere" in metric spaces, the terms injection/surjection/bijection, etc.  Actually, new terms were carefully chosen, and as much as possible simple words from everyday life, like "magma" (because volcanic magma looks structureless), "ball", even when they are not round (the previous term was some impossible jargon), in/sur/bijection (previous terms were similar jargon).  The terms "barrel" (a barrel is closed, convex, balanced, absorbing: this makes as much sense in everyday life as in formal math), bornivorous, etc.

    I think that Bourbaki was somehow victim of its success: it was so influential that after a few years or even decades, people took it for granted that there was a common language and notation for all the branches of mathematics.  But I think it is easy now to underestimate the novelty that it was in the 1930s and 1940s.  Of course there was already work on logic.  And of course each specialist in his domain was ahead of what was meant to be a treatise of "dead mathematics".  Discoveries also spread slower at the time: the internet was a bit slower in those days, notwithstanding political considerations... 

    Benoît

    Norman Megill

    unread,
    Jan 31, 2020, 9:07:05 PM1/31/20
    to Metamath
    On Friday, January 31, 2020 at 8:19:00 PM UTC-5, Jim Kingdon wrote:


    On January 31, 2020 3:39:46 PM PST, Norman Megill wrote:

    >I would like to hear other opinions on whether "magma" and "semigroup"
    >should become part of the main set.mm body.
     
    I don't have a strong opinion one way or the other. I suspect Norm is right that these are a whole lot more obscure than groups and rings and fields and the other usual suspects. On the other hand, I guess things are arranged so that a proof about groups doesn't really need to know about magmas and semigroups?

    Yes, it is simple to arrange it so that groups don't have to know about magmas and semigroups:  we can just leave the group definition and proofs alone, since they don't depend on them now.
     
    That might point us to a "harmless even if not strictly necessary" stance.

    If we leave groups alone, then importing magma and semigroup will be "harmless" in principle.

    However, logically they would belong before groups are introduced.  This means that it is likely that the proof of the group associative law will start to borrow from the same theorem about semigroups in order to shorten the proof (as we always do with layered structures).  This in principle is also harmless.

    What I fear happening is that we've introduced unnecessary additional complexity for anyone just wanting to learn about groups (such as an engineer).  What was a straightforward definition turns into a tower of associated structures they've never heard of (monoid, semigroup, magma) and don't care about, but are forced to learn.  Will they give up and go elsewhere (e.g. an undergraduate book on group theory that simply starts with groups at the beginning)?  Even monoids are apparently a graduate-level topic based on Lang's books.

    We could put semigroups and monoids after df-grp to prevent "contamination", but that is also weird and somewhat contrary to the set.mm philosophy.

    In their pure form, magmas and semigroups have no serious applications.  Most semigroup applications (in e.g. FL's search) concern semigroups with additional properties, not pure semigroups.  When we start to have those kinds of applications, we can revisit this and see if it makes sense to share the common pure semigroup base they're built on.  But we don't have such applications now.
     
    Norm

    Norman Megill

    unread,
    Jan 31, 2020, 9:37:57 PM1/31/20
    to Metamath
    On Friday, January 31, 2020 at 8:23:08 PM UTC-5, Benoit wrote:
    Regarding the possible introduction in the main part of semigroups and magmas:
    When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy.  The abundance of parentheses and conjunctions makes it hard to parse.

    If you already know about magmas, semigroups, and monoids, of course you'd feel that way.  Look at it from the perspective of someone who's never heard of these and really doesn't care to learn, because all they came for was groups.  Just the names of these things are somewhat intimidating and have little or nothing to do with what they really are.

    As I wrote earlier:


    "The user knows from the description and from the property theorems that are referenced.  But even without that, I do see all of these properties pretty immediately - they're right there in the definition: ( x p y ) e. b for closure,  ( ( x p y ) p z ) = ( x p ( y p z ) ) for associativity, and E. e ... ( ( e p x ) = x /\ ( x p e ) = x ) for left and right identity."

    We can put that in the comment if people find the conjunction of them confusing.  In any case the current definition has the advantage of being able to see everything at once in one place, right in front of you.  You don't have to open another tab for semigroups and another tab for magmas and keep going back and forth between them, trying to remember what those terms mean while trying to keep everything in your head all at once to see the whole picture.
     
      Did you notice that the clause stating closedness is actually quantified over an extra z ? (This is of course innocuous, but still strange.)  I find it much easier to look successively at http://us2.metamath.org/mpeuni/df-mgmALT.html and http://us2.metamath.org/mpeuni/df-sgrp.html and [the definition of Monoid from Semigroup -- Alexander: can you add it?].  So, to me, there is already a pedagogical benefit, even if no theorems are proved.  I find these piecemeal definitions easier to learn.  Once you grab the final concept, you can forget about the intermediate steps. I.e., you do not have to remember what a magma or semigroup is, if you don't want.  This was only an aid in the process of getting the definition of a group.

    Regarding Norm's remark on Bourbaki: Indeed, their set theory is known to be awkward.  None of its members were logicians or set theorists, and they just wanted to "get this done" before moving to what they considered more interesting math.  By the way: you actually use a lot of Bourbaki's terminology and notation in set.mm: the symbol for the empty set, the blackboard bold typeface, the terms "ball" and "sphere" in metric spaces, the terms injection/surjection/bijection, etc.  Actually, new terms were carefully chosen, and as much as possible simple words from everyday life, like "magma" (because volcanic magma looks structureless), "ball", even when they are not round (the previous term was some impossible jargon), in/sur/bijection (previous terms were similar jargon).  The terms "barrel" (a barrel is closed, convex, balanced, absorbing: this makes as much sense in everyday life as in formal math), bornivorous, etc.

    I acknowledge that Bourbaki has had a lot of influence, and I have no problem using its notation and terminology - if it has survived and is still used in the mainstream literature after 80+ years.  What I think is not a good idea is the blind use of Bourbaki as being the "bible" of mathematics.

    Norm

    Alexander van der Vekens

    unread,
    Feb 1, 2020, 2:36:02 AM2/1/20
    to Metamath
    Just a short comment before I have to go out (see below)...


    On Saturday, February 1, 2020 at 2:23:08 AM UTC+1, Benoit wrote:
    Regarding the possible introduction in the main part of semigroups and magmas:
    When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy.  The abundance of parentheses and conjunctions makes it hard to parse.  Did you notice that the clause stating closedness is actually quantified over an extra z ? (This is of course innocuous, but still strange.)  I find it much easier to look successively at http://us2.metamath.org/mpeuni/df-mgmALT.html and http://us2.metamath.org/mpeuni/df-sgrp.html and [the definition of Monoid from Semigroup -- Alexander: can you add it?]. 
     
    Yes, I'll add it! I had this in mind for some time, but I didn't consider it necessary. But now I think it may be the sticking point for this discussion. Taking a first look, I discovered that ~df-mnd is currently used in theorem ~ismnd only (in the main body of set.mm)! Therefore, it can be simply replaced by
    { g e. SGrp | [. ( Base ` g ) / b ]. [. ( +g ` g ) / p ]. E. e e. b A. x e. b ( ( e p x ) = x /\ ( x p e ) = x ) },
    and the proof of ~ ismnd can be rewritten to use this definition instead of the current one. Therefore, all theorems using ~ismnd are not affected at all. And as with many definitions in set.mm, the definition need not be understood by beginners (see my ~df-mamu example above) - if they look at the defining theorem ~ismnd, they find everything they need in a clear way (much clearer than in the current ~df-mnd). Furthermore, the definition of groups ~df-grp is also not affected at all, because it is already based on the definition of monoids ( `Grp = { g e. Mnd | ... `).

    As a conclusion, there will be more definitions and theorems about magmas and semigroups (for interested readers), but only one significant change (the definition of monoids) for beginners/readers not interested in magmas and semigroups, which they can, however, ignore if they only look at ~ismnd (if at all - if they are interested in groups only, actually nothing will change).

    fl

    unread,
    Feb 1, 2020, 6:26:48 AM2/1/20
    to Metamath


    Who has spoken of "blind use"? You don't want it, that's all. You speak of the set theory book when nobody mentions it. I just don't understand what you call "real" set theory. I let you imagine the size of any theorem in set.mm when reduced to the primitive symbols. Semigroups are a problem for you mainly because they are not in Lang.

    Serge Lang: born in 1925, died in 2005. First edition of his algebra book: 1965.


    Very modern indeed.

    I just let you discover Bourbaki's chronology.

    --
    FL

    Alexander van der Vekens

    unread,
    Feb 1, 2020, 7:28:43 AM2/1/20
    to Metamath

    On Saturday, February 1, 2020 at 2:23:08 AM UTC+1, Benoit wrote:
    Regarding the possible introduction in the main part of semigroups and magmas:
    When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy.  The abundance of parentheses and conjunctions makes it hard to parse.  Did you notice that the clause stating closedness is actually quantified over an extra z ? (This is of course innocuous, but still strange.) 
     
    That's really not nice and may be confusing. It's the same with ~ismnd. In a new version of ~ismnd we should/can avoid this. Since ~ismnd is used only 5 times in the main body of set.mm (and 2 times in mathboxes), it could be replaced without much effort.

    Benoit

    unread,
    Feb 1, 2020, 10:54:31 AM2/1/20
    to Metamath
    On Saturday, February 1, 2020 at 3:37:57 AM UTC+1, Norman Megill wrote:
    On Friday, January 31, 2020 at 8:23:08 PM UTC-5, Benoit wrote:
    Regarding the possible introduction in the main part of semigroups and magmas:
    When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy.  The abundance of parentheses and conjunctions makes it hard to parse.

    If you already know about magmas, semigroups, and monoids, of course you'd feel that way.  Look at it from the perspective of someone who's never heard of these and really doesn't care to learn, because all they came for was groups.  Just the names of these things are somewhat intimidating and have little or nothing to do with what they really are.

    I think we will not agree on this.  I prefer to face difficulties one by one, instead of all three at the same time.  I personally still find the long chain of characters in http://us2.metamath.org/mpeuni/df-mnd.html intimidating.  It's always difficult to look from the perspective of someone who's completely new to it.  I may be underestimating the difficulty of facing several notions, and you may be underestimating the difficulty of reading such long expressions of formal math.  As Groucho Marx said: "Send someone to fetch a child of five." Or actually, fetch at least a thousand, half of them learning one way and half the other way, and see which group does best!  But most probably, each type of learning will suit different people.
     
    As I wrote earlier:

    "The user knows from the description and from the property theorems that are referenced.  But even without that, I do see all of these properties pretty immediately - they're right there in the definition: ( x p y ) e. b for closure,  ( ( x p y ) p z ) = ( x p ( y p z ) ) for associativity, and E. e ... ( ( e p x ) = x /\ ( x p e ) = x ) for left and right identity."

    You see them immediately because you've been reading expressions like this every day for more than a decade.  By the way: you are doing in the above sentence exactly what the intermediate definitions do: you are clearly separating the three properties of closure, associativity and unitality from each other, so that we can digest each of them successively. This is indeed clearer, and that's exactly the point.
     
    I acknowledge that Bourbaki has had a lot of influence, and I have no problem using its notation and terminology - if it has survived and is still used in the mainstream literature after 80+ years.  What I think is not a good idea is the blind use of Bourbaki as being the "bible" of mathematics.

    I fully agree: no book should be followed blindly, nor used as a bible.

    Benoît

    Norman Megill

    unread,
    Feb 1, 2020, 8:48:18 PM2/1/20
    to Metamath
    On Saturday, February 1, 2020 at 10:54:31 AM UTC-5, Benoit wrote:
    On Saturday, February 1, 2020 at 3:37:57 AM UTC+1, Norman Megill wrote:
    On Friday, January 31, 2020 at 8:23:08 PM UTC-5, Benoit wrote:
    Regarding the possible introduction in the main part of semigroups and magmas:
    When I look at the page http://us2.metamath.org/mpeuni/df-mnd.html I feel a bit dizzy.  The abundance of parentheses and conjunctions makes it hard to parse.

    If you already know about magmas, semigroups, and monoids, of course you'd feel that way.  Look at it from the perspective of someone who's never heard of these and really doesn't care to learn, because all they came for was groups.  Just the names of these things are somewhat intimidating and have little or nothing to do with what they really are.

    I think we will not agree on this.  I prefer to face difficulties one by one, instead of all three at the same time.  I personnally still find the long chain of characters in http://us2.metamath.org/mpeuni/df-mnd.html intimidating.  It's always difficult to look from the perspective of someone who's completely new to it.  I may be underestimating the difficulty of facing several notions, and you may be underestimating the difficulty of reading such long expressions of formal math

    Question:  why do you think that Lang's Algebra (2002 edition) doesn't define or mention "semigroup" in its 900 pages?

    Norm

    David Starner

    unread,
    Feb 1, 2020, 8:53:26 PM2/1/20
    to meta...@googlegroups.com
    On Fri, Jan 31, 2020 at 6:07 PM Norman Megill <n...@alum.mit.edu> wrote:
    > What I fear happening is that we've introduced unnecessary additional complexity for anyone just wanting to learn about groups (such as an engineer). What was a straightforward definition turns into a tower of associated structures they've never heard of (monoid, semigroup, magma) and don't care about, but are forced to learn. Will they give up and go elsewhere (e.g. an undergraduate book on group theory that simply starts with groups at the beginning)? Even monoids are apparently a graduate-level topic based on Lang's books.

    I'm here in part because I failed to make the step from undergraduate
    mathematics to graduate-level mathematics; I was forced to take my
    B.S. in math and go. Group theory/modern algebra is something I'd like
    to learn, and sort of have been faffing about for many years now. From
    my perspective, Metamath introduces a whole pile of stuff into group
    theory; it's constantly worrying about predicate logic and set theory
    at a level introductory modern algebra just ignores, combined with a
    whole bunch of Metamath details and stuff; it's actively harder to
    read than raw TeX, and I suspect your engineer doesn't like reading
    raw TeX, either. I'm here because if I can prove some of this stuff, I
    can feel like I'm doing something of value and it will hopefully help
    me nail down exactly what it means in my head. There's a good argument
    that for me, Metamath is yak shaving, a way of avoiding actually
    working on learning any one field of math intensely. I would actively
    discourage anyone from using Metamath to learn group theory, maybe
    unless they were of the same inclination, to use it as a way to have
    full understanding of the subject formally checked. I don't think
    that's engineers; I think that's mainly people like me who have read
    books on set theory (including New Foundations) and the consequences
    of the Axiom of Choice and Gödel's theorems, those that are looking at
    group theory from a hardcore math perspective, not an engineering
    perspective, and I don't think adding monoids, semigroups, and magmas
    are going to make much difference.

    (And if you are worried, take CycGrp = {𝑔 ∈ Grp ∣ ∃𝑥 ∈ (Base‘𝑔)ran
    (𝑛 ∈ ℤ ↦ (𝑛(.g‘𝑔)𝑥)) = (Base‘𝑔)}. Grp is obvious enough, but
    CycGrp? ran?* I could derive what (𝑛(.g‘𝑔)𝑥) means from context,
    but it took a lot of clicking on definitions to fully get that
    (𝑛(.g‘𝑔)𝑥) is applying a binary function to n and x, and that
    function .g is group exponentiation, not the group operation. I'm just
    saying, if you want to learn group theory from Metamath, there's a lot
    of stuff you're going to be fighting, and standard monoids, semigroups
    and magmas are hardly going to be your breaking point.)
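
    A rough Python model of the df-cyg membership condition may save some of that clicking.  This is a sketch only -- the helper names are made up, it is not set.mm syntax, and it assumes a finite group given by its operation, in which case the nonnegative multiples of x already exhaust ran (𝑛 ∈ ℤ ↦ (𝑛(.g‘𝑔)𝑥)):

        def multiples(op, identity, x):
            # the set { n .g x : n >= 0 }, built by iterating the group law
            seen, acc = {identity}, identity
            while True:
                acc = op(acc, x)          # (n+1) .g x = (n .g x) op x
                if acc in seen:
                    return seen
                seen.add(acc)

        def is_cyclic(base, op, identity):
            # brute force: does some x satisfy ran(n |-> n .g x) = base?
            return any(multiples(op, identity, x) == set(base) for x in base)

        print(is_cyclic(range(4), lambda a, b: (a + b) % 4, 0))  # True: Z/4Z
        klein = [(0, 0), (0, 1), (1, 0), (1, 1)]
        print(is_cyclic(klein, lambda a, b: (a[0] ^ b[0], a[1] ^ b[1]), (0, 0)))  # False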

    (Oh, and your engineer, depending on discipline, may be familiar with
    monoids, since they're big in functional programming; see e.g.
    https://fsharpforfunandprofit.com/posts/monoids-without-tears/ )

    * Ken Thompson was once asked what he would do differently if he were
    redesigning the UNIX system. His reply: "I'd spell creat with an e."

    --
    Kie ekzistas vivo, ekzistas espero.  (Where there is life, there is hope.)

    Norman Megill

    unread,
    Feb 1, 2020, 10:13:38 PM2/1/20
    to meta...@googlegroups.com
    On Saturday, February 1, 2020 at 8:53:26 PM UTC-5, David Starner wrote:

    ...


    I'm just
    saying, if you want to learn group theory from Metamath, there's a lot
    of stuff you're going to be fighting, and standard monoids, semigroups
    and magmas are hardly going to be your breaking point.)

    Thanks for your comments.

    I question whether semigroup is a standard thing that algebra students learn, since it is often omitted from textbooks -- whether because it is considered too trivial or too specialized, I'm not sure.  But in any case importing it isn't that big a deal in and of itself and would not have much impact on the big picture.  I might agree to it anyway just to get the Bourbaki people off my back.  :)  (I think magma would be going too far, though.)

    My main concern is that it sets a precedent of introducing definitions that have no mathematical need driving them.  There is no significant theory that follows from the trivial definition of semigroups; you show the associative law and you're done.  The main reason given so far for its introduction is that it simplifies the monoid definition one layer up, because otherwise the monoid definition is too confusing to understand, being a jumble of parentheses and conjunctions.

    First, df-mnd isn't even in the top 100 most complex definitions.  If we set a precedent of introducing subdefinitions to simplify parts of bigger ones, where do we stop?  Would we add hundreds of definitions for trivial little pieces of larger definitions just because the larger definitions make some people dizzy?  If we did, the reader would then have to learn both the name of each new definition and the property it specifies, instead of just the property as presented in the parent definition.  I think it is usually friendlier to describe the properties informally in the description than to introduce a somewhat cryptic acronym for each property.

    Second, df-mnd and other structure definitions aren't really meant to be studied directly in their raw form; that raw form is a technical requirement for having nice self-contained objects that make things easier later on.  Instead, the description of the definition should, and usually does, point to theorems that show the main properties to be studied, such as the associative law in ~ mndass, which I think is quite easy to read.  If we need to improve the descriptions to make that clearer, I'm in favor of it.

    Norm

    fl

    unread,
    Feb 2, 2020, 7:16:56 AM2/2/20
    to Metamath
     
      I might agree to it anyway just to get the Bourbaki people off my back.  :)

    How charming! Let's say what the problem is. The problem is not a dilemma between "semigroup" and not "semigroup". The problem is that you let people believe that they are free to choose their subject, and in the end, when they have spent hours on something, you systematically block their work
    using one pretext or another: the mental "deficiencies" of the poor engineers, the mental overload of the lonesome undergraduates,
    mathematical Darwinism, usefulness according to yourself, etc. It's not possible to do that. You deceive them. Once and for all, clarify your choices.
    You say that you do not want a definition above the undergraduate level to appear in the official part, and at the
    same time you want Fermat's last theorem to be formalized. You say that you leave people perfectly free, but at
    the same time, if something is not in a certain undergraduate textbook popular in the USA, you make a fuss about it. You don't know what you want.

    -- 
    FL

    Benoit

    unread,
    Feb 2, 2020, 11:06:05 AM2/2/20
    to Metamath
    To answer FL's latest message: the first and foremost things that Norm has done and still does for us are creating the Metamath language, continuously improving the Metamath C program (the "official implementation" of the Metamath language), and starting the database set.mm, of which he remains the main contributor (in number of theorems contributed).  And he's doing all of this voluntarily, benevolently, with open and free licensing.

    Then, and only then, FL might have a point in that sometimes it feels like Norm picks the one reason that will get a definition rejected from the main part.  By now, we know the usual reasons: not pedagogical for engineers/undergrads, not enough applications in set.mm, does not simplify other proofs enough, can be expressed without too much difficulty in terms of previous definitions, etc.  Since there will most often be at least one criterion not fulfilled by a given definition, it is easy to pick one of these reasons to reject almost any definition.  This is why I asked about the goals of set.mm, to have some clarity.

    As for Norm's question: "Question:  why do you think that Lang's Algebra (2002 edition) doesn't define or mention "semigroup" in its 900 pages?":

    [I'm using your edition for the page numbers.]  On page 3, he defines (terms in boldface):
    - law of composition
    - associative
    - unit element
    and finally he defines, verbatim, "A monoid is a set G, with a law of composition which is associative, and having a unit element".
    So, even if the words "magma" and "semigroup" are not there, this looks very much like a piecemeal definition, built in three successive steps, and the steps correspond respectively to "magma", "semigroup", and "monoid".  He doesn't introduce these terms because he does not study these structures (they are of course less important than groups, rings, etc.).

    Also, note that the symbols \forall and \exists almost never (perhaps never) appear in the 900 pages.  Therefore, his readers are not faced with intimidating long strings of formal symbols, and he doesn't need to introduce "magma" and "semigroup" as aids.

    By the way: I just noticed that he uses the symbol \subset in a reflexive sense (page ix).  Therefore, I'm happy that I insisted a few months ago, against the opinions of several people here (https://groups.google.com/d/topic/metamath/_OqalbIKkgE/discussion), to remove this symbol from set.mm because of its ambiguity, and to replace it with \subsetneq (because that's what it meant in set.mm!) -- the symbol for the reflexive relation remaining \subseteq.

    Benoît

    Alexander van der Vekens

    unread,
    Feb 2, 2020, 11:11:27 AM2/2/20
    to Metamath
    I want to comment on the following post in more detail:


    On Saturday, February 1, 2020 at 12:39:46 AM UTC+1, Norman Megill wrote:
    Your counterexamples to show non-membership are interesting.  I appreciate your work doing this, and certainly it would accompany the magma and semigroup structures should we decide to import them.  However, showing the proper subset relationship in and of itself still doesn't quite show us that we have a true need for these structures.

    I agree that the proper subset relationship alone would not be enough, but together with the other results there are, in my opinion, sufficiently many theorems to justify the definitions. Especially the theorem about the extended reals (~xrsmgmdifsgrp): its proof is sketched within the comment of ~xrs1mnd, which actually calls for a formalization.
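
    The failure of associativity behind ~xrsmgmdifsgrp can be seen concretely.  Here is a minimal Python sketch, assuming only set.mm's df-xadd convention that +oo +e -oo = 0 (the function name xadd is made up):

        INF, NINF = float('inf'), float('-inf')

        def xadd(a, b):
            # informal model of +e; df-xadd sets +oo +e -oo = 0
            if {a, b} == {INF, NINF}:
                return 0
            return a + b

        print(xadd(xadd(INF, NINF), 1))  # (+oo +e -oo) +e 1 = 1
        print(xadd(INF, xadd(NINF, 1)))  # +oo +e (-oo +e 1) = 0

    So the extended reals are closed under +e but not associative: a magma that is not a semigroup.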


    For example, if we defined EE as the set of even numbers, we could have a theorem showing it is a proper subset of ZZ, but such a theorem isn't enough in itself to justify introducing it.  We would want a sufficient number of theorems about even numbers whose statements would become significantly shorter and clearer.  That hasn't happened and probably won't.  Just because the terminology "even number" appears in the literature doesn't mean we have to have a special symbol for it; instead, "2 || N" suffices.

    I have already started to prove that the even numbers/integers are a non-unital ring (trivial, in principle, because 1 is not even ;) ). I do not plan to provide a definition for them, although this would be appealing: I have already proved some "lemmas" which could become theorems if there were a definition of even numbers.
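
    A quick finite spot check of the non-unital ring claim (illustrative Python, not a proof; the names are made up):

        evens = [2 * k for k in range(-50, 51)]

        closed_add = all((a + b) % 2 == 0 for a in evens for b in evens)
        closed_mul = all((a * b) % 2 == 0 for a in evens for b in evens)
        # a unit e would need e * x = x for all x; e * 2 = 2 forces e = 1, which is odd
        has_unit = any(all(e * x == x for x in evens) for e in evens)
        print(closed_add, closed_mul, has_unit)  # True True False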
     

    Regarding Bourbaki:  A long time ago, before Metamath, I eagerly and naively started off trying to learn the Theory of Sets before knowing any formal logic or set theory.  Its bizarre "assemblies" (tau notation) are extremely inefficient and seem to ignore all the developments in FOL and axiomatic set theory that were already essentially mature and standard in the rest of the world.  With much effort and the assistance of a computer program I wrote, I worked out the thousands of symbols needed to define the empty set.  Then I gave up when I realized the number 1 would be unfeasible (it turns out to require trillions of symbols).  Even with intermediate definitions to shorten things, I was unable to gain an intuitive understanding of what was going on.

    After I learned "real" set theory, I became somewhat less than enamored with Bourbaki, so I mention this experience in the spirit of full disclosure. :)  With that said, the feeling I get is that Bourbaki sometimes does its own thing that may or may not appear in mainstream mathematical literature.  My personal preference is not to use Bourbaki as a primary reference for terminology and notation, unless it is also standard in modern textbooks.

    I do not know Bourbaki's Theory of Sets, so I cannot judge it. But just because you had bad experiences with Bourbaki's set theory does not mean that everything written by Bourbaki is bad.


    Two relatively modern and well-regarded textbooks are Lang's "Undergraduate algebra" (~400 pp) and his graduate-level "Algebra" (900 pp).  The first book starts off defining "group", and the second starts with "monoid".  Neither book mentions "magma" or "semigroup".  Even "monoid" is apparently a topic too specialized for Lang's undergraduate book, which doesn't mention it at all.

    I found a book, also called "Undergraduate Algebra", and it was published in 2019! The author is Matej Bresar from Slovenia (ISBN 978-3-030-14052-6). Its table of contents starts with:

    Part I The Language of Algebra

    1 Glossary of Basic Algebraic Structures
       1.1 Binary Operations
       1.2 Semigroups and Monoids
          1.2.1 Semigroups
          1.2.2 Monoids
       1.3 Groups
       1.4 Rings and Fields
          1.4.1 Rings


    and this is exactly in line with our proposal: in Section 1.1, the first definition is that of a binary operation as a mapping from S X S to S. In the following text, he differentiates it from an "external binary operation", introducing the definition of closure. Although the word "magma" is not used, I think the first two pages are about magmas in the sense we use the term. Afterwards (2nd definition), the associative law is defined, followed by the commutative law (3rd definition). Finally, an identity/neutral element is defined (4th def.). Section 1.2 starts with the definition of a semigroup. In summary, Bresar first defines the "laws" (properties of binary operations) and then, based on them, the corresponding algebraic structures.

    This is an excellent example of a very modern book, aimed at readers with only basic mathematical knowledge...
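
    Bresar's layering translates almost word for word into nested predicates.  A small Python sketch over finite operation tables (the names are illustrative, not set.mm definitions):

        def is_magma(base, op):
            # closure: a binary operation from base X base into base
            return all(op(a, b) in base for a in base for b in base)

        def is_semigroup(base, op):
            # magma + associative law
            return is_magma(base, op) and all(
                op(op(a, b), c) == op(a, op(b, c))
                for a in base for b in base for c in base)

        def is_monoid(base, op):
            # semigroup + two-sided identity element
            return is_semigroup(base, op) and any(
                all(op(e, x) == x and op(x, e) == x for x in base)
                for e in base)

        mod4, add4 = set(range(4)), lambda a, b: (a + b) % 4
        print(is_magma(mod4, add4), is_semigroup(mod4, add4), is_monoid(mod4, add4))
        # True True True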

    Alexander

    Norman Megill

    unread,
    Feb 2, 2020, 11:52:48 AM2/2/20
    to Metamath
    @benoit and @alexander - Thank you for your comments.  Go ahead and import semigroup (assuming no one else objects).  Hopefully, though, I've communicated my point that we don't want this to lead to a flurry of shallow definitions that people must learn, just to address a perceived difficulty in understanding the df-* statements (which are technical definitions not always meant to be read directly by humans).

    Norm


    On Sunday, February 2, 2020 at 11:11:27 AM UTC-5, Alexander van der Vekens wrote:
    I want to comment on the following post in more detail:


    ....

    Alexander van der Vekens

    unread,
    Feb 2, 2020, 1:59:55 PM2/2/20
    to Metamath
    Thanks, Norm, for being so obliging in this case. I think this was a special case, with a lot of arguments for and against the definitions, and the discussion was an appropriate process for quality assurance. I have the feeling that we had a tie in our argumentation, so there are no winners and no losers. I also see no danger that a lot of (unneeded, superfluous, or even harmful) definitions will be added to (the main body of) set.mm: each request to add such a definition should be intensively discussed (as was done in this case) - the current agreement is no blank check for future cases.

    Nevertheless, this special case was a good example to illustrate and substantiate the goals of set.mm.

    I'll start moving the material into the main body of set.mm tomorrow (evening); fine-tuning can be done via GitHub comments. This should also support the cleanup of Part 18, see https://groups.google.com/d/msg/metamath/0O19iCOc7LA/ypVqkxC_FAAJ.

    Alexander

    fl

    unread,
    Feb 2, 2020, 2:03:57 PM2/2/20
    to Metamath


    @benoit and @alexander - Thank you for your comments.  Go ahead and import semigroup (assuming no one else objects).  Hopefully, though, I've communicated my point that we don't want this to lead to flurry of shallow definitions that people must learn, just to address a perceived difficulty in understanding the df-* statements (that are technical definitions not always meant to be read directly by humans).



    I think you have made the right decision. Your contributors will be grateful.
     
    -- 
    FL

    Mario Carneiro

    unread,
    Feb 2, 2020, 3:54:56 PM2/2/20
    to metamath
    I am still not really convinced that semigroup is worth the trouble, but I'm happy to follow the crowd here, and I don't want to belabor any points made by Norm. I just wanted to remark that if you want to prove that Monoid C. Semigroup (every monoid is a semigroup, but not conversely), the easiest example (that already exists!) is NN.
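
    An illustrative Python spot check of that example (a finite sample, not a proof):

        from itertools import product

        sample = range(1, 20)  # a finite window into NN = {1, 2, 3, ...}
        assoc = all((a + b) + c == a + (b + c)
                    for a, b, c in product(sample, repeat=3))
        # an identity e would need e + 1 = 1, i.e. e = 0, which is not in NN
        has_identity = any(e + 1 == 1 for e in sample)
        print(assoc, has_identity)  # True False

    So (NN, +) is associative but has no identity element: a semigroup that is not a monoid.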


    fl

    unread,
    Feb 2, 2020, 4:58:24 PM2/2/20
    to meta...@googlegroups.com
    I've been trying to find out who Serge Lang was. I found this. Born in Saint-Germain-en-Laye (an upper-middle-class suburb of Paris) or in Paris (the sources disagree). He then emigrated to the United States.  He was a member of the Bourbaki group (impossible to find the dates). But Bourbaki's Algebra book is from 1942 and Lang's Algebra book is from 1965, i.e., he copied Bourbaki. He was famous for his taste for intellectual polemics. He gave exercises to his students to prove the inanity of Huntington's theses in "The Clash of Civilizations" (1996). At the end of his life he would engage in more dubious polemics. Never mind. He was also a specialist in number theory and algebraic geometry.

    If somebody could find a copy of "the File" and make it available I think it would be good for the history of mathematics.

    -- 
    FL

    vvs

    unread,
    Feb 2, 2020, 5:18:42 PM2/2/20
    to Metamath
    If somebody could find a copy of "the File" and make it available I think it would be good for the history of mathematics.

    If you don't mind using a famous pirate library, here's the link (the book is copyrighted):

    If there are objections, this post could be removed.
    Reply all
    Reply to author
    Forward
    0 new messages