Thanks Robert for a thought-provoking talk and thanks to participants for a lively discussion.
There were a few claims I wanted to make that were probably too in-depth to cover during last night's discussion:
1. Many distinct monoids exist for Int, all satisfying the monoid laws. Therefore, we can't say Int "is a" monoid, but rather that Int "has" monoids, each one a specific choice of identity element and operation. Here are 7 examples of Int monoids:
- Addition: (Int, 0, +)
- Multiplication: (Int, 1, *)
- Minimum with +∞: (Int ∪ {+∞}, +∞, min)
- Maximum with −∞: (Int ∪ {−∞}, −∞, max)
- Bitwise XOR: (Int, 0, ^)
- Greatest common divisor: (Int≥0, 0, gcd)
- Least common multiple: (Int≥0, 1, lcm)
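To make this concrete, here's a minimal Scala sketch (illustrative names, not any particular library's API) of a Monoid typeclass with a few of the lawful Int instances above:

```scala
// A minimal, illustrative sketch of a Monoid typeclass with several lawful
// Int instances. Names here are made up for the example.
trait Monoid[A]:
  def empty: A                  // the identity element
  def combine(x: A, y: A): A    // the associative operation

val addition: Monoid[Int] = new Monoid[Int]:
  def empty = 0
  def combine(x: Int, y: Int) = x + y

val multiplication: Monoid[Int] = new Monoid[Int]:
  def empty = 1
  def combine(x: Int, y: Int) = x * y

val xor: Monoid[Int] = new Monoid[Int]:
  def empty = 0
  def combine(x: Int, y: Int) = x ^ y

// Machine Ints have no +∞, so Int.MaxValue stands in as min's identity here.
val minimum: Monoid[Int] = new Monoid[Int]:
  def empty = Int.MaxValue
  def combine(x: Int, y: Int) = x min y
```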
2. Mathematics doesn't prescribe addition as the "canonical" monoid for Int. Haskell (and Rust) require a single canonical instance of a typeclass per type, a guarantee often called "Typeclass Coherence", but this is a programming language design choice, not a mathematical inevitability. Scala takes a different approach: we can define a default monoid for Int, but override it in specific scopes (call-trees). Coherence is an interesting & open area of debate as there are pros and cons to both approaches; it's a "damned if you do, damned if you don't" situation IMO.
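For example, a rough Scala 3 sketch of the "default plus override" style might look like this (all names here are illustrative, and the Monoid trait is repeated from the sketch above):

```scala
// A hedged sketch of defining a default Monoid[Int] and overriding it per call.
trait Monoid[A]:
  def empty: A
  def combine(x: A, y: A): A

def combineAll[A](xs: List[A])(using m: Monoid[A]): A =
  xs.foldLeft(m.empty)(m.combine)

// The instance this codebase nominates as the default for Int.
given additionMonoid: Monoid[Int] with
  def empty = 0
  def combine(x: Int, y: Int) = x + y

// An alternative instance, kept as a plain value so it never conflicts.
val maxMonoid: Monoid[Int] = new Monoid[Int]:
  def empty = Int.MinValue
  def combine(x: Int, y: Int) = x max y

@main def demo(): Unit =
  println(combineAll(List(1, 2, 3)))                   // 6: the default (addition)
  println(combineAll(List(1, 2, 3))(using maxMonoid))  // 3: overridden for this call
```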
3. What is the definition of addition anyway? If many monoids exist for Int that aren't addition, then clearly the laws of addition aren't captured by the monoid laws alone. You may recall I mentioned algebraic "Rings" last night. Rings (actually, Semirings) [https://en.wikipedia.org/wiki/Semiring] take a step towards the definition of addition by pairing two monoids together, one called "addition" and one called "multiplication", and adding laws that treat the two asymmetrically:
- addition must be commutative
- multiplication must distribute over addition
- the additive identity must annihilate multiplication: 0 * a = a * 0 = 0
Intuitively, semirings describe addition and multiplication relative to one another. However, the addition story doesn't stop there: there are many lawful semirings where the addition slot is filled by something other than ordinary addition. One example is so-called "tropical algebra", where "addition" is defined as min(x, y) and "multiplication" is ordinary + [https://en.wikipedia.org/wiki/Tropical_semiring].
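Here's a rough Scala sketch of a semiring as a pair of monoids, with the tropical semiring as an instance (over Double rather than Int, since machine Ints have no +∞; names are illustrative, and the laws live only in comments):

```scala
// A hedged sketch pairing two monoids into a Semiring, with the asymmetric
// laws above stated as comments (nothing here enforces them).
trait Semiring[A]:
  def zero: A                // identity of "addition"; must annihilate times
  def one: A                 // identity of "multiplication"
  def plus(x: A, y: A): A    // associative and commutative
  def times(x: A, y: A): A   // associative; distributes over plus

// The tropical (min-plus) semiring over Double: "addition" is min with
// identity +∞, and "multiplication" is ordinary + with identity 0.
val tropical: Semiring[Double] = new Semiring[Double]:
  def zero = Double.PositiveInfinity
  def one = 0.0
  def plus(x: Double, y: Double) = x min y
  def times(x: Double, y: Double) = x + y
```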
If we progress past semirings, we layer on additional laws (see the typeclass sketch after this list), e.g.:
- Rings (vs semirings) [https://en.wikipedia.org/wiki/Ring_(mathematics)] require the addition operator to have an additive inverse (aka negation, which gives us subtraction)
- Fields [https://en.wikipedia.org/wiki/Field_(mathematics)] require every nonzero element to have a multiplicative inverse (aka reciprocal, which gives us division)
- Exponential fields [https://en.wikipedia.org/wiki/Exponential_field] relate addition and multiplication together again via a newly introduced exponentiation function E, requiring: E(a + b) = E(a) * E(b)
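To give a flavour of the layering, here's an illustrative Scala sketch of how these structures might stack as typeclasses (again, names are made up and the laws live only in comments):

```scala
// A hedged sketch of the tower of structures, each layer adding operations
// that must satisfy extra laws relative to the layers below.
trait Semiring[A]:
  def zero: A
  def one: A
  def plus(x: A, y: A): A
  def times(x: A, y: A): A

trait Ring[A] extends Semiring[A]:
  def negate(x: A): A       // additive inverse: plus(x, negate(x)) == zero

trait Field[A] extends Ring[A]:
  def reciprocal(x: A): A   // multiplicative inverse, defined for x != zero

trait ExponentialField[A] extends Field[A]:
  def exp(x: A): A          // law: exp(plus(a, b)) == times(exp(a), exp(b))
```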
As we layer on these additional constraints, it becomes harder and harder for the "addition monoid" slot in the structure to be filled by anything other than ordinary addition. So the definition of everyday arithmetic addition emerges gradually, and it takes far more machinery than the monoid laws alone to specify it unambiguously.
In closing, the "property-centric" approach to thinking about classification of data & operators in programs, as advocated by Robert last night, resonates with me. I think it's basically the "right approach".
Just wanted to highlight:
- Abstractions like Monoid inherently admit multiple valid implementations for a given data type. A language designer needs to think about how a programmer specifies which implementation should be used in a given context. Does one declare one as canonical, like the addition monoid? But then, how does one use a different one?
- Subtleties emerge when one tries to specify properties precisely. As we saw above, specifying arithmetic addition requires a good deal more algebraic machinery than monoids alone.
-Ben