Follow up from Robert's talk on Subtyping in a Functional Language


Ben Hutchison

Sep 18, 2025, 7:48:08 PM
to Melbourne Compose Group
Thanks Robert for a thought-provoking talk and thanks to participants for a lively discussion.

There were a few claims I wanted to make that were probably too in-depth to cover during last night's discussion:

1. Many valid monoids exist for Int that all satisfy the monoid properties/laws. Therefore, we can't say Int "is a" monoid, but rather "has a" monoid, with specific properties. Here are 7 examples of Int monoids:

Addition: (Int, 0, +)
Multiplication: (Int, 1, *)
Minimum with ∞: (Int ∪ {+∞}, +∞, min)
Maximum with -∞: (Int ∪ {−∞}, −∞, max)
Bitwise XOR: (Int, 0, ^)
Greatest Common Divisor: (Int≥0, 0, gcd)
Least Common Multiple: (Int≥0, 1, lcm)
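Two of these that Haskell's base library doesn't already wrap can be sketched as newtype monoids (`Xor` and `Gcd` are names invented here; base provides Sum, Product, Min and Max but not these two):

```haskell
import Data.Bits (xor)

-- Hypothetical wrappers: each newtype selects one of the monoids above.
newtype Xor = Xor Int deriving (Eq, Show)
instance Semigroup Xor where Xor a <> Xor b = Xor (a `xor` b)
instance Monoid Xor where mempty = Xor 0        -- x `xor` 0 == x

newtype Gcd = Gcd Int deriving (Eq, Show)
instance Semigroup Gcd where Gcd a <> Gcd b = Gcd (gcd a b)
instance Monoid Gcd where mempty = Gcd 0        -- gcd 0 n == n, so 0 is the unit

main :: IO ()
main = do
  print (mconcat (map Xor [1, 2, 3]))     -- Xor 0
  print (mconcat (map Gcd [12, 18, 30]))  -- Gcd 6
```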


2. Mathematics doesn't prescribe Addition as the "canonical" monoid for Int. Haskell (and Rust) have a practice of defining a canonical monoid for Int, but this is a programming language design choice, not a mathematical inevitability. Scala makes a different choice, where we can define a default monoid for Int but override it in specific scopes (call-trees). This design choice, often called "Typeclass Coherence", is an interesting & open area of debate, as there are pros and cons of both approaches; it's a "damned if you do, damned if you don't" situation IMO.
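For what it's worth, Haskell's standard library sidesteps the canonicality question for numbers with newtype wrappers in Data.Monoid and Data.Semigroup, so the programmer names the monoid at each use site:

```haskell
import Data.Monoid (Sum (..), Product (..))
import Data.Semigroup (Min (..), Max (..))

-- Each wrapper picks a different monoid over the same underlying Int.
main :: IO ()
main = do
  print (getSum     (foldMap Sum     [1, 2, 3 :: Int]))  -- 6
  print (getProduct (foldMap Product [1, 2, 3 :: Int]))  -- 6
  print (getMin     (foldMap Min     [5, 2, 9 :: Int]))  -- 2
  print (getMax     (foldMap Max     [5, 2, 9 :: Int]))  -- 9
```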


3. What is the definition of addition anyway? If many monoids exist for Int that aren't addition, then clearly the laws of addition aren't merely the monoid laws. You may recall I mentioned algebraic "Rings" last night. Rings (actually, Semirings) [https://en.wikipedia.org/wiki/Semiring] take a step towards the definition of addition by pairing two monoids together, one called "addition" and one called "multiplication", and adding specific asymmetric laws:
- commutativity of addition
- multiplication distributes over addition.

Intuitively, semirings describe addition and multiplication relative to one another. However, the addition story doesn't stop there. There are many possible lawful semirings where the addition slot is filled by something other than regular addition. One example is the so-called "tropical semiring", where addition is defined as: min(x, y)   [https://en.wikipedia.org/wiki/Tropical_semiring]
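The tropical distributivity law is easy to spot-check. A minimal sketch over plain Int (ignoring the +∞ unit that the full tropical semiring needs; all names invented here):

```haskell
-- Tropical "addition" is min; tropical "multiplication" is ordinary +.
tadd, tmul :: Int -> Int -> Int
tadd = min
tmul = (+)

-- Semiring law: multiplication distributes over addition.
distributes :: Int -> Int -> Int -> Bool
distributes a b c =
  a `tmul` (b `tadd` c) == (a `tmul` b) `tadd` (a `tmul` c)

main :: IO ()
main = print (and [ distributes a b c
                  | a <- [-3 .. 3], b <- [-3 .. 3], c <- [-3 .. 3] ])  -- True
```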

If we progress past semirings, we layer on additional laws, eg:
- Rings (vs semirings) [https://en.wikipedia.org/wiki/Ring_(mathematics)] require the addition operator to have an additive inverse (aka negation aka subtraction) 
- Fields [https://en.wikipedia.org/wiki/Field_(mathematics)] require the multiplicative operator to have a multiplicative inverse (aka division)
- Exponential fields [https://en.wikipedia.org/wiki/Exponential_field] relate addition and multiplication together again via a newly introduced Exponentiation function E, requiring: E(a + b) = E(a) * E(b)
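The exponential-field law can be illustrated by taking E(n) = 2^n over the integers (a toy choice of E, made here only for the check):

```haskell
-- Toy exponential: E n = 2^n, restricted to non-negative exponents.
e :: Integer -> Integer
e n = 2 ^ n

-- Exponential-field law: E(a + b) = E(a) * E(b).
main :: IO ()
main = print (and [ e (a + b) == e a * e b | a <- [0 .. 8], b <- [0 .. 8] ])  -- True
```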

As we layer on these additional constraints, it becomes harder and harder for the "addition monoid" slot in the semiring to be filled by anything other than regular addition. So the definition of everyday arithmetic addition emerges gradually, and it takes far more concepts than the monoid laws alone to specify it unambiguously.


In closing, the "property-centric" approach to thinking about classification of data & operators in programs, as advocated by Robert last night, resonates with me. I think it's basically the "right approach".

Just wanted to highlight:

- Structures like Monoid are inherently "polymorphic" over data types, i.e., there are multiple valid implementations. A language designer needs to think about how a programmer specifies which implementation should be used in a given context. Does one declare one as canonical, like the addition monoid? But then, how does one use a different one?

- Some subtleties that emerge when one tries to specify properties. As we saw above, specifying arithmetic addition requires a good deal more algebraic concepts than just monoids alone.

-Ben

Jack Kelly

Sep 18, 2025, 8:56:03 PM
to Ben Hutchison, Melbourne Compose Group
Thanks for a really interesting email; sounds like it was a fun night. When's the October event? I will be visiting Melbourne in October, perhaps there's a chance I can attend.

September 19, 2025 at 9:47 AM, "Ben Hutchison" <brhut...@gmail.com> wrote:

> 2. Mathematics doesn't prescribe Addition as the "canonical" monoid for Int. Haskell (and Rust) have a practice of defining a canonical monoid for Int, but this is a programming language design choice not a mathematical inevitability.

Haskell does have this capability, but there's no instance for `Semigroup Int` defined in `base`:

ghci> 1 <> 2 :: Int

<interactive>:1:3: error: [GHC-39999]
• No instance for ‘Semigroup Int’ arising from a use of ‘<>’
• In the expression: 1 <> 2 :: Int
In an equation for ‘it’: it = 1 <> 2 :: Int

> 3. What is the definition of addition anyway? If many monoids exist for Int that aren't addition, then clearly the laws of addition aren't merely defined by monoid laws. You may recall I mentioned algebraic "Rings" last night. Rings (actually, Semirings) [https://en.wikipedia.org/wiki/Semiring] take a step towards the definition of addition by pairing two monoids together, one called "addition" and one called "multiplication", and adding specific non-symmetrical laws:
> - commutativity of addition
> - multiplication distributes over addition,
>
> Intuitively, semirings describe addition and multiplication relative to one another.

https://web.archive.org/web/20190521022847/https://stedolan.net/research/semirings.pdf is a fun paper about some other things you can describe with semirings.

> However, the addition story doesn't stop there. There are many possible lawful rings where the addition slot is filled by something other than regular addition. One example is the field of so-called "tropical algebra" where addition is defined as: min(x, y)   [https://en.wikipedia.org/wiki/Tropical_semiring]
>
> If we progress past semirings, we layer on additional laws, eg:
> - Rings (vs semirings) [https://en.wikipedia.org/wiki/Ring_(mathematics)] require the addition operator to have an additive inverse (aka negation aka subtraction) 

I was taught about groups before rings; groups are monoids in which every element has an inverse. Then you can work with that without necessarily having the second monoid you need to make a full ring. This is also useful from a system design perspective, because when you talk about groups and group actions, you can implement fun things like "undo" as the inverse of some operation, and property test that it's a true inverse, etc...
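A minimal sketch of that "undo" property test, assuming the simplest possible group action (integer offsets acting on a counter; all names here are made up):

```haskell
-- The group (Int, 0, +) acting on an Int state.
newtype Offset = Offset Int deriving (Eq, Show)

invert :: Offset -> Offset
invert (Offset n) = Offset (negate n)

act :: Offset -> Int -> Int
act (Offset n) s = s + n

-- Property: acting with an offset and then with its inverse is a no-op.
undoes :: Offset -> Int -> Bool
undoes o s = act (invert o) (act o s) == s

main :: IO ()
main = print (and [ undoes (Offset o) s | o <- [-5 .. 5], s <- [-5 .. 5] ])  -- True
```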

Brent Yorgey's "Monoids: Theme and Variations" is a lovely paper http://ozark.hendrix.edu/~yorgey/pub/monoid-pearl.pdf . It starts with monoids but also shows the utility of monoid homomorphisms and actions in program design.

There are even more interesting structures between Monoids and full Groups. One that I find particularly cute is the Inverse Semigroup, where instead of your inverse operation being a complete inverse (a + a^-1 = 0), it instead satisfies (a + a^-1 + a = a and a^-1 + a + a^-1 = a^-1). This shows up e.g. when you lift an inverse semigroup or a group `g` into a `Map k g` (a dictionary/map with key type `k` and value type `g`): invert by inverting all of the values, and use your semigroup operator to merge elements when keys collide. It is not a group because if you combine a map with its inverse, you get a map whose values are all identity elements instead of the empty map.
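A quick sketch of that Map construction, using the group (Int, 0, +) for the values (containers' Data.Map; the helper names `madd` and `minv` are invented here):

```haskell
import qualified Data.Map.Strict as M

-- Lift the group (Int, 0, +) into Map String Int:
-- merge on key collision, invert pointwise.
madd :: M.Map String Int -> M.Map String Int -> M.Map String Int
madd = M.unionWith (+)

minv :: M.Map String Int -> M.Map String Int
minv = M.map negate

main :: IO ()
main = do
  let a = M.fromList [("x", 2), ("y", 5)]
  -- Inverse-semigroup law: a + a^-1 + a = a.
  print (madd a (madd (minv a) a) == a)  -- True
  -- But a + a^-1 is a map full of zeros, not the empty map, so not a group.
  print (madd a (minv a) == M.empty)     -- False
```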

Ed Kmett gave a good talk on Inverse Semigroups: https://www.youtube.com/watch?v=HGi5AxmQUwU , and one on "monoidal parsing" https://www.youtube.com/watch?v=Txf7swrcLYs that detours into language design before bringing in some interesting semidirect products and actions.

> In closing, the "property-centric" approach to thinking about classification of data & operators in programs, as advocated by Robert last night, resonates with me. I think it's basically the "right approach".
>
> Just wanted to highlight:
>
> - Operators like Monoid are inherently "polymorphic" over data types, ie, there are multiple valid implementations. A language designer needs to think about how a programmer specifies which implementation should be used in a given context. Does one declare one as canonical, like the addition monoid? But then, how to use a different one?

My unhinged "if I got a magic wand" wish for Haskell would be to make `(+)` the operator of `class Semigroup`, `(-)` an operator introduced by `class Group`, etc., so that traditional arithmetic looks "normal" but if you were working in other algebras you could ask for only the power you needed.
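A sketch of what that wished-for hierarchy might look like (all the class names here are invented, and it shadows the Prelude operators):

```haskell
import Prelude hiding ((+), (-))
import qualified Prelude as P

-- Hypothetical hierarchy: (+) lives in the semigroup class,
-- and (-) only appears once you have a group.
class Semigroup' a where
  (+) :: a -> a -> a

class Semigroup' a => Monoid' a where
  zero :: a

class Monoid' a => Group a where
  neg :: a -> a
  (-) :: a -> a -> a
  x - y = x + neg y

instance Semigroup' Int where (+) = (P.+)
instance Monoid' Int where zero = 0
instance Group Int where neg = P.negate

main :: IO ()
main = do
  print ((1 + 2) :: Int)  -- 3
  print ((5 - 2) :: Int)  -- 3
```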

-- Jack

Ben Hutchison

Sep 18, 2025, 9:58:51 PM
to Melbourne Compose Group

---------- Forwarded message ---------
From: Ben Hutchison <brhut...@gmail.com>
Date: Fri, 19 Sept 2025, 11:58 am
Subject: Re: Follow up from Robert's talk on Subtyping in a Functional Language
To: Jack Kelly <ja...@jackkelly.name>


Hi Jack 

It'd be great to have you along. It's always the 3rd Thursday of the month, so October 16...

Hope you make it. 

-Ben 

Robert Smart

Sep 19, 2025, 11:32:54 PM
to Melbourne Compose Group
> 1. Many valid monoids exist for Int that all satisfy the monoid properties/laws. Therefore, we can't say Int "is a" monoid, but rather "has a" monoid, with specific properties. Here are 7 examples of Int monoids:
>
> Addition: (Int, 0, +)
> Multiplication: (Int, 1, *)
> ...
Right! Which I handle by giving them slightly different names: Monoid.add, Monoid.mul, and programmers are free to invent their own. The way I imagine it, the standard library will (a) define a generic Monoid that can carry different secondary names to distinguish instances; (b) declare and prove that Nat and Int, and possibly others, conformTo Monoid.add via their respective addition rules. Then if you want to define a new type, Quaternion, above Nat, and say that it conformsTo Monoid.add (so that you can use the "+" infix operator), then it has to be compatible with Nat's Monoid.add. I.e., if you add two Nats and then convert the result to a Quaternion, you get the same answer as if you convert the Nats to Quaternions and then add. (It's a commuting diagram.)
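That compatibility condition is the monoid-homomorphism law for the conversion function. A minimal sketch, with fromIntegral :: Int -> Integer standing in for the hypothetical Nat -> Quaternion conversion:

```haskell
-- The embedding between the two types carrying Monoid.add.
conv :: Int -> Integer
conv = fromIntegral

-- Commuting-diagram law: add then convert == convert then add.
commutes :: Int -> Int -> Bool
commutes a b = conv (a + b) == conv a + conv b

main :: IO ()
main = print (and [ commutes a b | a <- [-10 .. 10], b <- [-10 .. 10] ])  -- True
```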
 
> 2. Mathematics doesn't prescribe Addition as the "canonical" monoid for Int. Haskell (and Rust) have a practice of defining a canonical monoid for Int, but this is a programming language design choice, not a mathematical inevitability. Scala makes a different choice, where we can define a default monoid for Int but override it in specific scopes (call-trees). This design choice, often called "Typeclass Coherence", is an interesting & open area of debate, as there are pros and cons of both approaches; it's a "damned if you do, damned if you don't" situation IMO.

I can handle multiple uses of the monoid structure at once. That seems likely to be better. Or maybe I don't understand the use cases. I'm not claiming that Monoid.add is more canonical than other Monoids, only that its connection to Nat and Int in the standard library gives it a bit of special status.

> 3. What is the definition of addition anyway?

I don't claim to have a definition of add. It is only when two types are in an X isa Y relation that the same Behaviours have to be compatible. The definition of add in the standard library then imposes a compatibility constraint on some other uses of Monoid.add. However, it doesn't impose a constraint on Monoid.add(String), because String is not in any relation to Nat. So we can define it to be concatenation, giving + another useful, natural, and not confusing meaning.

Robert

Robert Smart

Sep 20, 2025, 8:09:35 PM
to Melbourne Compose Group
OK, here's approximately how I envisaged it working. Actually, I'd rather see the subname stuff separated from the mathematical definition, so this is going to change. Indeed, I need a general subnaming system.

`Monoid `subn:Atom = # ` in front of an identifier introduces a new identifier
  behaviour of Type by
    { Type `T =>> StructureType :  # StructureType, like Prop, is a subtype of Type
      Structure :                # `[ ... ] is an alias for Structure ... EndStructure
        `subName : Atom/{Atom `x =>> Prop: x .= subn} ;    # subn will be an atom, such as .add, .mul, or from Atom.new
        `unit : T ;              # e.g. 0 for addition
        `binop : T*T=>>T ;       # binary operation
        `leftId : forall { T `t =>> Prop: binop(unit,t) .= t } ; # .= creates a Prop
        `rightId : forall { T `t =>> Prop: binop(t,unit) .= t } ;
      EndStructure
    };
`NatDefine = behaviour of Type by
    { Type `Nat =>> StructureType :
      `[ :
        `zero:Nat;
        `succ:Nat<=>>Nat;        # TEmbed, backwards fails for zero.
        `succNotZero:∀{Nat`n=>>Prop: Neq(Nat)(succ n,zero)};
        `induct:∀{(Nat=>>Type)`p =>> Prop:
                    (p(zero) And ∀{Nat`n=>>Prop: p(n) Implies p(succ n)})
                    Implies ∀{Nat`n=>>Prop: p(n)} };
      ] };
`Nat = newType NatDefine; # defining properties

`addNat: Nat*Nat =>> Nat;
addNat (Nat.zero,`n:Nat) = n;
addNat (Nat.succ(`m:Nat),`n:Nat) = Nat.succ(addNat(m,n));
Nat conformsTo Monoid.add by
    @`[Monoid.add.by(Nat): # we start from the Structure type we will instantiate
        subName = .add;
        unit = Nat.zero;
        binop = addNat;
        leftId = toForall { Nat `t =>> typeOf(leftId).at(t): binop(unit,t) = addNat(Nat.zero,t) @.= t };
        rightId = toForall { Nat `t =>> typeOf(rightId).at(t): ... use Nat.induct I think ... };
     ];