Extending number systems (i.e. how to go from natural numbers to integers, to rational numbers, etc.)


Lucas Brunno Luna

unread,
May 23, 2020, 4:57:32 PM5/23/20
to Calculational Mathematics
Dear colleagues,

Greetings.

Quite some time ago, I tried to develop the theory of arithmetic from the ground up à la Peano. That is, starting with the set of axioms below, I proceeded to prove the familiar properties of the arithmetic operators on natural numbers, i.e. commutativity, associativity, distributivity, etc.

(Note: The variables x and y below range over the natural numbers, and S denotes the so-called "successor function", i.e. a function that associates each natural number to its successor, which is also a natural number.)

Ax. S.x = S.y  ->  x = y
       
(i.e. S is injective)

Ax. P.0 /\ (A x :: P.x -> P.(S.x))  ->  (A x :: P.x)
       
(a.k.a. the induction axiom)

Ax. x + 0    =  x
Ax. x + S.y  =  S.(x + y)

Ax. x * 0    =  0
Ax. x * S.y  =  x + x * y

Ax. 1 = S.0
       
(Definition of 1, just for convenience.)
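These axioms can be animated directly in a program. The sketch below is my own illustrative aside (the representation and all names are my choices, not from any source discussed here): a natural number is an iterated application of a constructor S to zero, and + and * are defined by recursion on the second argument, following the four arithmetic axioms above.

```python
# A hedged sketch: Peano naturals as nested tuples, with + and *
# defined by recursion on the second argument, mirroring the axioms.

ZERO = None                  # the natural number 0

def S(x):
    """Successor: S.x"""
    return ('S', x)

def add(x, y):
    # Ax. x + 0   = x
    # Ax. x + S.y = S.(x + y)
    return x if y is ZERO else S(add(x, y[1]))

def mul(x, y):
    # Ax. x * 0   = 0
    # Ax. x * S.y = x + x * y
    return ZERO if y is ZERO else add(x, mul(x, y[1]))

ONE = S(ZERO)                # Ax. 1 = S.0

def to_int(x):
    """Interpret a Peano numeral as a Python int, for inspection."""
    return 0 if x is ZERO else 1 + to_int(x[1])
```

With this, e.g. to_int(mul(S(ONE), S(S(ONE)))) evaluates 2 * 3; properties like commutativity can of course only be spot-checked this way, not proved — that is what the induction axiom is for.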

For the record, this exercise had three purposes, viz.

0. To make myself more familiar with mathematical theory building;
1. To try to get my understanding about numbers on more rigorous footing;
2. To finally understand how mathematical induction works, something that had always puzzled me. This desire was especially triggered by attempting to understand some of the proofs in Dijkstra's "A Discipline of Programming". (The semantic definition of the repetitive construct had left me completely baffled.) I consider that this strategy worked out very well, in the sense that I am no longer afraid of mathematical induction, have actually written many proofs appealing to it, and am able to follow the proofs in Dijkstra's book.

So far, so good. That was already a long time ago. Later on I tried to extend my domain of natural numbers to include also the negative numbers, but with seemingly little success. I had seen EWD923a, and was especially struck by the paragraph where he says "For quite some time now, roots of equations seems to me the paradigm par excellence for definitions." So I tried to follow such an approach and postulated that, for all natural x, the equation

y: x + y = 0

has at least one solution.

But is the above postulate sufficient to prove that the familiar properties of addition, multiplication, etc. extend to all integers? In the same vein, how can one perform further extensions to the rational numbers, the reals, etc.?

I didn't seem to be able to succeed even in trying to prove that the above equation has a unique solution corresponding to each natural x. Every time I try to do so I become unsure of whether I have sufficient ground to assume that the expressions I'm calculating with are even defined.
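For comparison, one classical construction (my own aside here, going beyond the single root-of-equation postulate) sidesteps the definedness worry: build the integers as pairs of naturals (a, b) read as the difference a - b, where (a, b) and (c, d) denote the same integer iff a + d = c + b. Under that representation every equation y: x + y = 0 is solvable by construction. A rough Python sketch, with representation and names my own choices:

```python
# A hedged sketch: integers as pairs of naturals (a, b), read as a - b.

def norm(p):
    """Canonical representative: reduce (a, b) so that min(a, b) = 0."""
    a, b = p
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    # (a - b) + (c - d) = (a + c) - (b + d)
    (a, b), (c, d) = p, q
    return norm((a + c, b + d))

def mul(p, q):
    # (a - b) * (c - d) = (a*c + b*d) - (a*d + b*c)
    (a, b), (c, d) = p, q
    return norm((a * c + b * d, a * d + b * c))

def neg(p):
    # The root of  y: x + y = 0  always exists: swap the components.
    a, b = p
    return (b, a)
```

For example, with x = (3, 0) representing 3, add(x, neg(x)) yields (0, 0), the zero; uniqueness of the root then amounts to cancellation of +, which holds for this construction.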

Also, as a "bonus" issue, with the relation <= defined on natural numbers as:

Ax. x <= y  ==  (E r :: x + r = y)

How can one prove that it is antisymmetric and transitive? I think that I already know how to prove that it is reflexive, viz. by observing for any x:

    x <= x
==    { Definition of <= }
    (E r :: x + r = x)
<-    { (Read below) }
    x + 0 = x
==    { Identity of + }
    x = x
==    { = reflexive }
    true

The second manipulation above seems to appeal to a rule roughly described by "using a witness" for an existentially quantified variable, but I'm not very familiar with that. It seems to be the "dual" (forgive my ignorance, I don't understand much about that either) analogue of the rule of instantiation for universal quantification; the two rules seem to be generalizations of the rules of absorption involving disjunction/conjunction and implication.
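As a quick sanity check of the definition (a Python sketch of my own, with ordinary machine naturals standing in for Peano numerals and the existential quantifier bounded by a finite search):

```python
# Brute-force model of  x <= y  ==  (E r :: x + r = y).

N = 20  # finite search bound standing in for the unbounded existential

def leq(x, y):
    """x <= y, via existence of a witness r with x + r = y."""
    return any(x + r == y for r in range(N + 1))

# Reflexivity holds via the witness r = 0, as in the calculation above.
reflexive_ok = all(leq(x, x) for x in range(N))
```

Over the searched range this agrees with the usual order, and the r = 0 witness is exactly the "(Read below)" step of the calculation.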

I would appreciate any suggestions, comments, or general insights on these problems.

Thank you all in advance!
All the best.

(PS.: Do we need to only use ASCII characters to write down formulae in these messages?)

Wolfram Kahl

unread,
May 23, 2020, 5:36:55 PM5/23/20
to calculationa...@googlegroups.com
On Sat, May 23, 2020 at 01:57:32PM -0700, Lucas Brunno Luna wrote:
> Quite some time ago, I tried to develop the theory of arithmetic from
> the ground up à la Peano. I.e. Starting with the set of axioms below, I
> proceeded to prove the familiar properties of the arithmetic operators on
> natural numbers, i.e. commutativity, associativity, distributivity, etc.
>
> (Note: The variables x and y below range over the natural numbers, and S
> denotes the so-called "successor function", i.e. a function that associates
> each natural number to its successor, which is also a natural number.)
>
> Ax. S.x = S.y -> x = y
> (i.e. S is injective)
>
> Ax. P.0 /\ (A x :: P.x -> P.(S.x)) -> (A x :: P.x)
> (a.k.a. the induction axiom)
>
> Ax. x + 0 = x
> Ax. x + S.y = S.(x + y)
>
> Ax. x * 0 = 0
> Ax. x * S.y = x + x * y
>
> Ax. 1 = S.0
> (Definition of 1, just for convenience.)

(You can play with an essentially-equivalent approach in CalcCheck

http://calccheck.mcmaster.ca/
http://calccheck.mcmaster.ca/CalcCheckDoc/2DM3-2019-instances.html

, see:

Homework 5: Natural Numbers and Induction
http://130.113.68.214:9423/

, but check out at least

http://calccheck.mcmaster.ca/CalcCheckDoc/GettingStartedWithCalcCheck.html
http://calccheck.mcmaster.ca/CalcCheckDoc/GettingStartedWithCalcCheckWeb.html

first, and possibly also some of the earlier homeworks and exercises.

For actually using the induction axiom, see:

Homework 20: Using the Induction Principle for `suc` in $\mathbb{N}$
http://130.113.68.214:9493/

Exercise 12.1: Proving Induction Principles for Natural Numbers
http://130.113.68.214:9494/
)

> So far, so good. That was already a long time ago. Later on I tried to
> extend my domain of natural numbers to include also the negative numbers,
> but with seemingly little success. I had seen EWD923a, and was especially
> struck by the paragraph where he says "For quite some time now, roots of
> equations seems to me the paradigm par excellence for definitions." So I
> tried to follow such an approach and postulated that, for all natural x,
> the equation
>
> y: x + y = 0
>
> Has at least one solution.

Gries and Schneider (1993) take the same approach in LADM chapter 15,
and proceed to prove that an appropriately axiomatised unary minus
provides those (unique) solutions.

The list of theorems around that is also in:

Extra Notebook: A Theory of Integers 1: http://130.113.68.214:9405/

(With all the proofs left as online-checkable exercises for the reader.

The crucial
Theorem (15.8) ``Cancellation of +'': a + b = a + c ≡ b = c
requires quantifier reasoning.
)

> But is the above postulate sufficient to prove that the familiar properties
> of addition, multiplication, etc. extend to all integers? In the same vein,
> how can one perform further extensions to the rational numbers, the reals,
> etc.?

(The above-mentioned development does not attempt to connect the inductive view
of natural numbers with the theory of integral domains from LADM chapter 15.)



Wolfram

Jeremy Weissmann

unread,
May 23, 2020, 10:21:00 PM5/23/20
to mathmeth

So I tried to follow such an approach and postulated that, for all natural x, the equation

y: x + y = 0

Has at least one solution.

But is the above postulate sufficient to prove that the familiar properties of addition, multiplication, etc. extend to all integers? In the same vein, how can one perform further extensions to the rational numbers, the reals, etc.?

Which axioms are you extending to these new solutions?  Not induction, I hope?  The problem is, this equation is not solvable in the naturals.
 

I didn't seem to be able to succeed even in trying to prove that the above equation has a unique solution corresponding to each natural x. Every time I try to do so I become unsure of whether I have sufficient ground to assume that the expressions I'm calculating with are even defined.

Depending on what you assume, I'm not sure you will be able to prove that the solutions are unique.  One could imagine the natural numbers extended with several copies of the negative integers.

Some years ago I got very involved in a (sad) Facebook conversation trying to figure out what you'd need to assume in order to extend the naturals via a unary operator - .  We didn't come to any easy conclusions.

Trying to axiomatize "The Integers" is not easy.  The same is true for any domain.  It's much nicer to take an algebraic approach to some aspect of the integers, like, say, the ordering relation, and prove what you can in that mini-domain.  When we say, for example, that the integers under addition form a group, or that they form with multiplication a commutative ring, we're not axiomatizing the domain — there are tons of groups and commutative rings!  We're not trying to pin down the mathematical object in our minds uniquely, we're just studying its structure.

It's the difference between trying to define precisely what a human being is, versus just studying aspects of our bodies and behavior.

 

Also, as a "bonus" issue, with relation <= defined on natural numbers as:

Ax. x <= y  ==  (E r :: x + r = y)

How can one prove that it is antisymmetric and transitive? I think that I already know how to prove that it is reflexive, viz. by observing for any x:

    x <= x
==    { Definition of <= }
    (E r :: x + r = x)
<-    { (Read below) }
    x + 0 = x
==    { Identity of + }
    x = x
==    { = reflexive }
    true

The second manipulation above seems to appeal to a rule roughly described by "using a witness" for a existentially quantified variable, but I'm not very familiar with that. It seems to be the "dual" (forgive my ignorance, I don't understand much about that either) analogue of the rule of instantiation for universal quantification; the two rules seem to be generalizations of the rules of absorption involving disjunction/conjunction and implication.

Yes, the rule of instantiation is important to use in both directions:  (Ex :: f.x) <= f.c  and  (Ax :: f.x) => f.c .   When you are constructing a strengthening chain and you have existential quantification, you should be on the lookout for a witness — just as if you are constructing a weakening chain and you have universal quantification, you should be on the lookout for a useful instantiation.

As for antisymmetric, how about:

     x ≤ y /\ y ≤ x
==    { definition of  ≤ ,  twice }
     (Er,s :: x+r = y /\ y+s = x)
=>    { Leibniz }
     (Er,s :: x+r = y /\ y+s = x /\ (x+r)+s = x )
==    { algebra }
     (Er,s :: x+r = y /\ y+s = x /\ r+s = 0 )
==    { r,s ≥ 0 }
     (Er,s :: x+r = y /\ y+s = x /\ r = 0 /\ s = 0)
==    { one-point rule }
     x+0=y /\ y+0=x
==    { algebra }
     x = y     .

I don't particularly like this calculation.  It's too wide.  How about:

|[ Context:  x ≤ y

     y ≤ x
==   { • x+r = y }
     x+r ≤ x
==   { algebra (you can prove this easily) }
     r ≤ 0
==   { • r ≥ 0 }
     r = 0
==   { algebra }
     x+r = x
==   { • x+r = y }
     y = x

]|

The heuristic of missing symbols tells us that if we calculate  R (which doesn't depend on c)  using property  p.c ,  then we can conclude  R  as long as the context implies  (Ec :: p.c) .   In our case we calculated  y ≤ x == y = x  (which doesn't depend on r)  using  r ≥ 0 /\ x+r = y ,  so we can conclude  y ≤ x == y = x as long as the context (x ≤ y) implies  (Er :: r ≥ 0 /\ x+r = y) , which of course it does.  So now I've proved:

     x ≤ y   =>   ( y ≤ x  ==  y = x )

which is stronger than antisymmetry.

   By the way, the  "you can prove this easily"  lemma is:   x ≤ y == x+c ≤ y+c  which I think will not be hard for you to prove.
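Both the lemma and the strengthened antisymmetry claim are easy to spot-check by brute force (a Python aside of my own, with the built-in order on small machine naturals standing in for ≤):

```python
# Finite checks of:
#   lemma:       x <= y  ==  x+c <= y+c
#   strengthened antisymmetry:  x <= y  =>  (y <= x  ==  y = x)

R = range(8)  # small finite range of "naturals"

lemma_ok = all((x <= y) == (x + c <= y + c)
               for x in R for y in R for c in R)

antisym_ok = all((not (x <= y)) or ((y <= x) == (y == x))
                 for x in R for y in R)
```

A finite check is of course no proof, but it is a cheap way to catch a mis-stated demonstrandum before calculating.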

   The calculational style is not always great for working with witnesses and existential quantifications.  Sometimes it's just easier to say,  "let  r  and  s  satisfy  x+r = y  and  y+s = x"  and then work from there.

   As for transitivity, I'm just thinking off the cuff here about how I might use the heuristic of missing symbols again:

|[ context: x ≤ y /\ y ≤ z

     x ≤ z
==    { def }
     (Er :: x+r = z)
==    { • x+s = y ,  adding  s  to both sides }
     (Er :: y+r = z+s)
==    { • y+t = z ,  adding  t  to both sides }
     (Er :: z+r = z+s+t)
==    { algebra }
     (Er :: r = s+t)
==    { positive numbers are closed under addition }
     true

]|

Something like that, anyway.
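The witness bookkeeping in that transitivity calculation can also be checked mechanically (again a Python aside of my own, machine naturals standing in for the abstract ones): given witnesses s and t with x+s = y and y+t = z, the combined witness r = s+t solves x+r = z.

```python
# Brute-force check of the transitivity argument's combined witness r = s + t.

R = range(10)

def check_transitivity():
    for x in R:
        for s in R:            # witness for x <= y:  x + s = y
            y = x + s
            for t in R:        # witness for y <= z:  y + t = z
                z = y + t
                # the combined witness s + t establishes x <= z:
                assert x + (s + t) == z
    return True
```

The assertion never fires, which is just associativity of + doing the work, as the "algebra" hint in the calculation says.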

   My suggestion is narrow your focus!

+j

Jeremy Weissmann

unread,
May 24, 2020, 1:07:10 AM5/24/20
to mathmeth
Just thinking a bit more, I don’t dig a definition like:

x ≤ y ≡ ⟨∃a :: x+a = y⟩

because it’s only correct if a is nonnegative, but if we have a notion of positive then why not just define

x ≤ y ≡ y−x is nonnegative ?


Jeremy Weissmann

unread,
May 24, 2020, 9:40:11 AM5/24/20
to calculationa...@googlegroups.com
Wolfram,

What incredible resources you’ve created. I don’t understand how they work but pretty incredible nonetheless!

Can we see your proof of a·b = 0 ≡ a = 0 ∨ b = 0 without mutual implication or case analysis?

+j

> On May 23, 2020, at 17:36, Wolfram Kahl <ka...@cas.mcmaster.ca> wrote:
> --
> You received this message because you are subscribed the mathmeth.com mailing list.
> To unsubscribe from this group, send email to Calculational-Math...@googlegroups.com
> For more options, visit this group at http://groups.google.com/group/Calculational-Mathematics?hl=en
> ---
> You received this message because you are subscribed to the Google Groups "Calculational Mathematics" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to calculational-math...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/calculational-mathematics/20200523213653.GJ18658%40ritchie.cas.mcmaster.ca.

Jeremy Weissmann

unread,
May 24, 2020, 9:42:24 AM5/24/20
to calculationa...@googlegroups.com
(I think I have one with case analysis.)


Wolfram Kahl

unread,
May 24, 2020, 9:54:51 AM5/24/20
to calculationa...@googlegroups.com
Jeremy,

On Sun, May 24, 2020 at 09:40:09AM -0400, Jeremy Weissmann wrote:
> Can we see your proof of a·b = 0 ≡ a = 0 ∨ b = 0 without mutual implication or case analysis?

My proof also uses case analysis --- I just wrote “without mutual implication”...

Theorem (15.11): a · b = 0 ≡ a = 0 ∨ b = 0
Proof:
  By cases: `a = 0`, `a ≠ 0`
    Completeness: By “LEM” and “Definition of ≠”
  Case `a = 0`:
      a · b = 0 ≡ a = 0 ∨ b = 0
    ≡⟨ Assumption `a = 0` ⟩
      0 · b = 0 ≡ true ∨ b = 0
    ≡⟨ “Zero of ·”, “Zero of ∨” ⟩
      true
  Case `a ≠ 0`:
      a · b = 0
    ≡⟨ “Zero of ·” ⟩
      a · b = a · 0
    ≡⟨ “Cancellation of ·” with assumption `a ≠ 0` ⟩
      b = 0
    ≡⟨ “Identity of ∨”, assumption `a ≠ 0` with “Definition of ≠” ⟩
      a = 0 ∨ b = 0
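(The theorem itself is also trivial to spot-check mechanically over a range of ordinary machine integers; a small brute-force aside of my own:)

```python
# Finite check of Theorem (15.11):  a * b = 0  ≡  a = 0  ∨  b = 0

R = range(-6, 7)
thm_ok = all((a * b == 0) == (a == 0 or b == 0) for a in R for b in R)
```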


Wolfram



P.S.:

> I don’t understand how they work

The papers linked from

http://calccheck.mcmaster.ca/

should help a bit with that.

Jeremy Weissmann

unread,
May 24, 2020, 12:33:03 PM5/24/20
to calculationa...@googlegroups.com
Those were my proofs almost exactly. My second was:

a·b = 0 ≡ a = 0 ∨ b = 0
≡ { a ≠ 0, algebra, pred. calc. }
a·b = a·0 ≡ b = 0
≡ { a ≠ 0 }
true



Wolfram Kahl

unread,
May 24, 2020, 12:48:03 PM5/24/20
to calculationa...@googlegroups.com
On Sun, May 24, 2020 at 12:24:40PM -0400, Jeremy Weissmann wrote:
> Those were my proofs almost exactly. My second was:
>
> a·b = 0 ≡ a = 0 ∨ b = 0
> ≡ { a ≠ 0, algebra, pred. calc. }
> a·b = a·0 ≡ b = 0
> ≡ { a ≠ 0 }
> true

Now you only need to get that into a shape that CalcCheck accepts.

;-)

(You can try it right there in http://130.113.68.214:9405/ .)

(Expressions and hints frequently get longer if you prove

p ≡ q
≡ ⟨ ? ⟩
true

instead of

p
≡ ⟨ ? ⟩
q

.
In my proof, I still did the former in the first case
since it allowed me to invoke

Assumption `a = 0`

only once for using it twice;
for the second case, there are no such savings.

In the end, this is all a matter of taste, and easy to transform.
)


Wolfram

Jeremy Weissmann

unread,
May 24, 2020, 1:58:51 PM5/24/20
to calculationa...@googlegroups.com
If I was concerned about correctness I would!


Jeremy Weissmann

unread,
May 24, 2020, 8:14:30 PM5/24/20
to mathmeth
Dear Wolfram,

   My first step is perhaps a bit overloaded. It doesn’t even reflect how I wrote it on the page. My actual proofs were

|[ Context:  a = 0

      a·b = 0   ==   a = 0  \/  b = 0
==     {  a = 0 ,  twice }
      0·b = 0   ==   0 = 0  \/  b = 0
==     { property of  0 ;  equality }
      true  ==  true  \/  b = 0
==     {  true  is the zero element of  \/  }
      true

]|

and:

|[ Context:  a ≠ 0

      a·b = 0   ==   a = 0  \/  b = 0
==     { a ≠ 0 ;  false  is the unit element of  \/  }
      a·b = 0  ==  b = 0
==     { property of  0  }
      a·b = a·0  ==  b = 0
==     { cancellation, using  a ≠ 0  }
      true

]|     .

Of course these hints are not tasteful  —I don't consider it appropriate to put algebraic and predicate calculus detail in the same calculation— ,  but I wanted to be explicit.  In fact, it is the first proof which is  "overloaded" ,  in that we're doing a mass substitution of  a = 0  and simplifying everywhere.  Wim Feijen's proof format is not really suited to such  "blackboard"  simplifications.  The second calculation by contrast is lovely and has nice heuristics for the middle step.

   I can't seem to escape the feeling that this case analysis is a hidden mutual implication argument.  Probably because the two calculations use the same properties we would if we proved the theorem by mutual implication.  Something to think about there.

   The consideration that motivates our proof by case analysis is that the case  a = 0  should offer a great simplification to the demonstrandum.  The cancellation property is  "what's left"  after we deal with the first case.  This evening I decided how the proof would go if we started with the cancellation property, which is indeed suggested by the shape of the demonstrandum:

      true
==     { cancellation }
      a ≠ 0  =>  (a·b = 0 == b = 0)
==     {  =>  into  \/  }
      a = 0  \/  (a·b = 0 == b = 0)
==     {  \/  over  == ,  to form part of the demonstrandum }
      a = 0  \/  a·b = 0   ==   a = 0  \/  b = 0
==     {  !!  }
      a·b = 0   ==   a = 0  \/  b = 0     .

Now, that would be nice, but in the hint marked  !!  we used:

a = 0  \/  a·b = 0   ==   a·b = 0     ,

which by predicate calculus is equivalent to:

a = 0  =>  a·b = 0     .

Fortunately, we can dispatch this easily, like so:

|[ Context:  a = 0

     a·b
=     {  a = 0  }
     0·b
=     { property of  0  }
     0

]|     .

   Now what do you think of that!

+j

Wolfram Kahl

unread,
May 24, 2020, 9:22:54 PM5/24/20
to calculationa...@googlegroups.com
Dear Jeremy,

> The consideration that motivates our proof by case analysis is that the
> case a = 0 should offer a great simplification to the demonstrandum. The
> cancellation property is "what's left" after we deal with the first
> case. This evening I decided how the proof would go if we started with the
> cancellation property, which is indeed suggested by the shape of the
> demonstrandum:
>
> true
>
> == { cancellation }

Not just cancellation...

> a ≠ 0 => (a·b = 0 == b = 0)
>
> == { => into \/ }
>
> a = 0 \/ (a·b = 0 == b = 0)
>
> == { \/ over == , to form part of the demonstrandum }
>
> a = 0 \/ a·b = 0 == a = 0 \/ b = 0
>
> == { !! }
>
> a·b = 0 == a = 0 \/ b = 0 .
>
>
> Now, that would be nice, but in the hint marked !! we used:
>
> a = 0 \/ a·b = 0 == a·b = 0 ,
>
>
> which by predicate calculus is equivalent to:

(No predicate calculus needed, just basic propositional calculus.)

> a = 0 => a·b = 0 .
>
>
> Fortunately, we can dispatch this easily, like so:
>
> |[ Context: a = 0
>
> a·b
> = { a = 0 }
> 0·b
> = { property of 0 }
> 0
>
>
> ]| .
>
>
> Now what do you think of that!

Nice!


How do you put that ``Context'' construction into the place
where the ``!!'' is?


In CalcCheck, you can use subproofs:

Theorem (15.11): a · b = 0 ≡ a = 0 ∨ b = 0
Proof:
    a ≠ 0 ⇒ (a · b = a · 0 ≡ b = 0) — This is “Cancellation of ·”
  ≡⟨ “Zero of ·”, “Definition of ≠” ⟩
    ¬ (a = 0) ⇒ (a · b = 0 ≡ b = 0)
  ≡⟨ “Definition of ⇒” (3.59), “Double negation” ⟩
    a = 0 ∨ (a · b = 0 ≡ b = 0)
  ≡⟨ “Distributivity of ∨ over ≡” ⟩
    a = 0 ∨ a · b = 0 ≡ a = 0 ∨ b = 0
  ≡⟨ “Definition of ⇒” (3.57) with subproof for `a = 0 ⇒ a · b = 0`:
        Assuming `a = 0`: By assumption `a = 0` and “Zero of ·” ⟩
    a · b = 0 ≡ a = 0 ∨ b = 0


And you have lots of verbosity, spacing, and line break options, e.g.:


Theorem (15.11): a · b = 0 ≡ a = 0 ∨ b = 0
Proof:
    a ≠ 0 ⇒ (a · b = a · 0 ≡ b = 0) — This is “Cancellation of ·”
  ≡ ⟨ “Zero of ·”, “Definition of ≠” ⟩
    ¬ (a = 0) ⇒ (a · b = 0 ≡ b = 0)
  ≡ ⟨ “Definition of ⇒” (3.59), “Double negation” ⟩
    a = 0 ∨ (a · b = 0 ≡ b = 0)
  ≡ ⟨ “Distributivity of ∨ over ≡” ⟩
    a = 0 ∨ a · b = 0 ≡ a = 0 ∨ b = 0
  ≡ ⟨ “Definition of ⇒” (3.57) with subproof:
        Assuming `a = 0`:
            a · b
          = ⟨ Assumption `a = 0`, “Zero of ·” ⟩
            0
    ⟩
    a · b = 0 ≡ a = 0 ∨ b = 0



Wolfram

Jeremy Weissmann

unread,
May 24, 2020, 9:48:48 PM5/24/20
to mathmeth
Dear Wolfram,
 
> ==     { cancellation }

Not just cancellation...

   Fair enough but I'm not going for machine-checkable proofs here, I'm communicating in the context of the rest of the email!
 
> which by predicate calculus is equivalent to:

(No predicate calculus needed, just basic propositional calculus.)

   My nomenclature comes from Dijkstra, Feijen, et al., who call this 'predicate calculus'.
 
How do you put that ``Context'' construction into the place
where the ``!!'' is?

   Why would I want to do that?!  I prefer and advocate for separate calculations for separate concerns.  If I prove  X == Y  by mutual implication, then one way to do this is to prove  X => Y  and, separately, to prove  Y => X .   If that is my design shape, then I do justice to that shape best, I feel, by presenting the calculations separately.

   In my case, on the road to calculating the demonstrandum, I discovered I needed a little lemma.  I noted the lemma and proved it.  The lemma has a life of its own!  Cut the umbilical cord!
 
In CalcCheck, you can use subproofs:
[...]

And you have lots of verbosity, spacing, and line break options, e.g.:

   I certainly prefer the second, but as I say, my preference is to simply keep them as separate calculations and express the overarching logical argument in natural language.  A similar case is when I have to prove  (Ax :: p.x) .   I might manipulate the whole expression, or I might calculate  p.x  in the context of  x's  type, which would be expressed as,  "for all  x ,  we have [etc.]" .

   But again this is only my personal preference.

+j

Wolfram Kahl

unread,
May 24, 2020, 10:28:23 PM5/24/20
to calculationa...@googlegroups.com
Dear Jeremy,

> In my case, on the road to calculating the demonstrandum, I discovered I
> needed a little lemma. I noted the lemma and proved it. The lemma has a
> life of its own! Cut the umbilical cord!

Certainly, if you want to give a lemma a life of its own,
then you can do that also in CalcCheck.
But for single-use lemmas, subproofs are frequently less distracting.

> > In CalcCheck, you can use subproofs:
> > [...]
> > And you have lots of verbosity, spacing, and line break options, e.g.:
> >
>
> I certainly prefer the second, but as I say, my preference is to simply
> keep them as separate calculations and express the overarching logical
> argument in natural language.

I noticed the same tendency in LADM (Gries and Schneider, 1993):
They do present proof structures in chapter 4, but they hardly ever
use them later, and instead ``express the overarching logical argument
in natural language''.

I think that there is a case to be made for actually using
formally-structured proofs, and LADM actually frequently argues for
use of formalism, e.g., on p. 121:

| Formal proofs are often simpler than their informal counterparts in
| English, because inferences can be based purely on well-defined
| syntactic manipulation.

And on p. 78:

| That the formal proof is much simpler is no accident. Using formal
| tools, and not even letting contorted English sentences come into
| one's thoughts, can often lead to simpler arguments. Let the formal
| tools do the work.

CalcCheck demonstrates that for the proof style of LADM,
including structured proofs in the spirit of chapter 4,
the “de Bruijn factor” (http://www.cs.ru.nl/~freek/factor/index.html)
can be not much above 1, and in many cases where the informal proof
``expresses the overarching logical argument in natural language'',
it can actually be quite a bit below 1...

And machine-checkability has advantages even for professionals:
I know colleagues who worked in sub-theories of relation algebras
in the usual rigorous calculational style, and inadvertently
used properties not available in the sub-theory under consideration.



Wolfram

Jeremy Weissmann

unread,
May 24, 2020, 11:30:30 PM5/24/20
to mathmeth
Dear Wolfram,

But for single-use lemmas, subproofs are frequently less distracting.

   For me it is all part of the design.  I'm not writing to allow someone to verify the correctness of a proof — well, not only that — ;  I'm giving an exposition of a design of a calculation to be read and followed by a human being.  In my design, the larger concern is calculating the demonstrandum.  I want the reader to focus on how I got there — including steps where I'm making a conjecture.  Then, after the main calculation (attempt) is complete, we can revisit the conjecture and see if it's even true.  For all I know it might not be!  (But that may inspire a case analysis; etc.)

I noticed the same tendency in LADM (Gries and Schneider, 1993):
They do present proof structures in chapter 4, but they hardly ever
use them later, and instead ``express the overarching logical argument
in natural language''.

   I confess to being pretty ignorant of that book.
 
I think that there is a case to be made for actually using
formally-structured proofs, and LADM actually frequently argues for
use of formalism, e.g., on p. 121:

 | Formal proofs are often simpler than their informal counterparts in
 | English, because inferences can be based purely on well-defined
 | syntactic manipulation.

   That point is not at all clear to me, and actually crystallizes my main complaint with LADM.  That book, to my tastes, confuses  'correct'  with  'well-designed and easy to follow' .   "If it's true, it's easy to follow!  If every step refers to one of 100 axioms or theorems earlier in the book, it's correct and hence a well-designed proof!"  Uh, no.  That is traditional, conceptual mathematics, presented in symbols.

   Don't get me wrong: I'm all for synthesizing arguments and using calculational reasoning to eliminate needless conceptual reasoning.  But I firmly believe that humans understand structurally — not just in a random sequence of logical steps which are in principle verifiable.  For a human to follow calculations only, with no natural language clarifying the structure of the argument, they have to have all the relevant definitions, properties, theorems, and what have you, at hand.  That is hugely burdensome for the reader.  Sure, it's great for someone who knows (say) LADM back to front, but that's a narrow audience.

   If I say, "I'm going to draw a picture of a pony; then I'm going to juggle three apples; then I'm going to feed one of the apples to the pony." ,  I think it's pretty clear to any human what I'm going to do.  So why shouldn't I also say,  "I'm going to present a proof of  X == Y  by mutual implication.  First I will prove  X => Y ;  then I will prove  Y => X ." ?   I don't believe anything is gained by eliminating the natural language (the structure of the argument), and putting everything into a calculation.
 
And on p. 78:

 | That the formal proof is much simpler is no accident. Using formal
 | tools, and not even letting contorted English sentences come into
 | one's thoughts, can often lead to simpler arguments. Let the formal
 | tools do the work.

   Yes, I agree wholeheartedly.  There are no general rules, only principles.  If you hand me a paper, say nothing, and all it has on it is a calculation, I'm handing that paper back to you.  I don't know any of the conventions you're using, and you haven't even told me what you're showing me.  You've given me no context.  But of course I use calculations in nearly every mathematical argument I make, and would never think of casting them in English.

   By the way, LADM has tons of contorted English sentences, as well as very ugly, difficult-to-follow calculations.
 
CalcCheck demonstrates that for the proof style of LADM,
including structured proofs in the spirit of chapter 4,
the “de Bruijn factor” (http://www.cs.ru.nl/~freek/factor/index.html)
can be not much above 1, and in many cases where the informal proof
``expresses the overarching logical argument in natural language'',
it can actually be quite a bit below 1...

   I'm not hugely into quantifying good design.

And machine-checkability has advantages even for professionals:
I know colleagues who worked in sub-theories of relation algebras
in the usual rigorous calculational style, and inadvertently
used properties not available in the sub-theory under consideration.

    Machine checkability is certainly cool.  But I confess I'm not so interested in truth; just design.

+j

Lucas Brunno Luna

unread,
May 25, 2020, 10:41:17 AM5/25/20
to Calculational Mathematics
Dear Wolfram and Jeremy, I thank you both for your thoughtful answers!

To Wolfram:

Thank you for your openness in sharing those pages; I did read most of them.

Truly, the treatment of the natural numbers is essentially equivalent to the one I was pursuing. I gave the proof verification mechanism a try, but to me it seemed to be more trouble than it's worth; apologies, dear friend!

I think I will make a few observations concerning Homework 5, for what they're worth. The first one is that one can prove a variant of the "Successor" theorem, viz. S.x = 1 + x (using my notation), without induction, by observing for any x:

    S.x
=    { Left-identity of + }
    S.(0 + x)
=    { Definition of + for S }
    S.0 + x
=    { Definition of 1 }
    1 + x
   
On the other hand, if one first proves the "Adding the Successor" theorem (viz. x + S.y = S.(x + y)), then the "Successor" theorem, as presented on the page, can also be proved without induction (using, besides "Adding the Successor", also the "Right-identity of +").

As for the "Even double" theorem, one can easily prove it without induction by appealing to the distribution of even over +, viz.:

Th. even.(x + y)  ==  even.x  ==  even.y

which, in turn, can be proved via induction.
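That distribution theorem, too, is easy to spot-check by brute force (a Python aside of my own; note that the chain reads as even.(x + y) == (even.x == even.y)):

```python
# Finite check of:  even.(x + y)  ==  (even.x == even.y)

def even(n):
    return n % 2 == 0

dist_ok = all(even(x + y) == (even(x) == even(y))
              for x in range(30) for y in range(30))
```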

The book by Gries and Schneider that you mentioned, viz. A Logical Approach to Discrete Math, does indeed seem like a fine text that I would enjoy very much. Too bad I cannot acquire it right now, since I'm totally broke, have no source of income for myself, and I don't even know how to buy stuff off the internet. (I also don't have a credit card.) By the way, I couldn't even acquire Dijkstra's book, viz. A Discipline of Programming, by myself: Someone bought it for me as a birthday gift. (And I must say that I'm really glad for that, since that book is an extremely important piece of computing science writing and also a masterpiece in general.)

Speaking of LADM, I have observed that, although I'm still very new to abstract algebra, the axiomatization for the integers as presented in the Extra Notebook seems to be equivalent to that of a commutative ring, except perhaps for the "Cancellation of multiplication" law, for which I don't know whether a derivation from the commutative ring axioms alone can be given or not; in any case, I see that said law is stated there as an axiom.

To Jeremy:


Which axioms are you extending to these new solutions?

See, my plan was not to extend any of the original axioms, in principle. The new domain would have the natural numbers as a subdomain, and the variables in the original axioms were to be understood as still ranging over that subdomain only. The new axiom was in principle the only statement that mentioned the rest of the points of the new domain at all, viz. by postulating that, for every natural x, there exists an integer y such that x + y = 0. (Perhaps this axiom should have been strengthened from the start by letting x range over all points of the domain. I don't know.) I guess I didn't make this very clear in my original message; sorry about that, haha.

In any case, I was having problems with that approach. Hence my question.



Depending on what you assume, I'm not sure you will be able to prove that the solutions are unique.  One could imagine the natural numbers extended with several copies of the negative integers.

Some years ago I got very involved in a (sad) Facebook conversation trying to figure out what you'd need to assume in order to extend the naturals via a unary operator - .  We didn't come to any easy conclusions.

Well, if you so desire, feel free to share anything about the conclusions you've arrived at by occasion of that discussion; there's no need even to be too specific.



Yes, the rule of instantiation is important to use in both directions:  (Ex :: f.x) <= f.c  and  (Ax :: f.x) => f.c .   When you are constructing a strengthening chain and you have existential quantification, you should be on the lookout for a witness — just as if you are constructing a weakening chain and you have universal quantification, you should be on the lookout for a useful instantiation.

All right, thank you for those remarks!



As for antisymmetric, how about:

     x ≤ y /\ y ≤ x
==    { definition of  ≤ ,  twice }
     (Er,s :: x+r = y /\ y+s = x)
=>    { Leibniz }
     (Er,s :: x+r = y /\ y+s = x /\ (x+r)+s = x )
==    { algebra }
     (Er,s :: x+r = y /\ y+s = x /\ r+s = 0 )
==    { r,s ≥ 0 }
     (Er,s :: x+r = y /\ y+s = x /\ r = 0 /\ s = 0)
==    { one-point rule }
     x+0=y /\ y+0=x
==    { algebra }
     x = y     .
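As a quick empirical companion to this chain, one can brute-force the witness definition of ≤ over a small grid of naturals; `leq` and its search bound are illustrative choices of mine, not part of the thread:

```python
# Antisymmetry check, with  x ≤ y  ==  (Er :: x + r = y)  and the
# existential search bounded (the bound comfortably covers the grid).
def leq(x, y, bound=100):
    return any(x + r == y for r in range(bound))

assert all((leq(x, y) and leq(y, x)) == (x == y)
           for x in range(20) for y in range(20))
```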

See, shortly after I had posted this question, I managed to come up with a proof appealing to the very same manipulations. That whole juggling with Leibniz's principle/predicate calculus, together with the fact that I had not yet tried to prove the injectivity of +, got me a little confused there and hence I got lazy. But, in the end, the proof is actually very simple. My bad, I'll try not to do that again.

By the way, one of the steps (i.e. the fourth) in the proof you've presented seems to rely on an axiom which I actually did postulate but had not shared in my original message, viz. for any natural number x:

Ax. S.x ≠ 0

I actually have no problem at all with that particular proof. In any case, you offer an alternative proof which is also not bad at all. It's definitely another very valid approach, and I thank you for sharing it! I add that I wasn't familiar with that heuristic of missing symbols.



By the way, the  "you can prove this easily"  lemma is:   x ≤ y == x+c ≤ y+c  which I think will not be hard for you to prove.

True, that was actually very easy. The only additional theorem that I needed was the injectivity of +, which is a consequence of the injectivity of S and can be established via induction.
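The lemma also checks out numerically; the sketch below (names and bound mine) tests it on a small grid:

```python
# The "you can prove this easily" lemma:  x ≤ y  ==  x+c ≤ y+c ,
# with ≤ via the bounded witness definition over the naturals.
def leq(x, y, bound=200):
    return any(x + r == y for r in range(bound))

assert all(leq(x, y) == leq(x + c, y + c)
           for x in range(15) for y in range(15) for c in range(15))
```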



The calculational style is not always great for working with witnesses and existential quantifications.  Sometimes it's just easier to say,  "let  r  and  s  satisfy  x+r = y  and  y+s = x"  and then work from there.

I think that's interesting to keep in mind. Thanks!



As for transitivity, I'm just thinking off the cuff here about how I might use the heuristic of missing symbols again:

|[ context: x ≤ y /\ y ≤ z

     x ≤ z
==    { def }
     (Er :: x+r = z)
==    { • x+s = y ,  adding  s  to both sides }
     (Er :: y+r = z+s)
==    { • y+t = z ,  adding  t  to both sides }
     (Er :: z+r = z+s+t)
==    { algebra }
     (Er :: r = s+t)
==    { positive numbers are closed under addition }
     true

]|

Something like that, anyway.

That's one way to go about it; I liked that. But for the sake of the discussion, I mention that I have also come up with the prototype of a proof more in line with my other proofs for reflexivity and antisymmetry, viz.

    x ≤ y  ∧  x ≤ z
≡      { Definition of ≤ twice; predicate calculus }
    (E r, s :: x + r = y  ∧  y + s = z)
≡      { Leibniz (+ injective) }
    (E r, s :: x + r + s = y + s  ∧  y + s = z)
≡      { = transitive }
    (E r, s :: x + r + s = z)
≡      { ??? }
    (E t :: x + t = z)
≡      { Definition of ≤ }
    x ≤ z

The trouble is that I still don't know how to justify the fourth step.



Just thinking a bit more, I don’t dig a definition like:

     x ≤ y  ≡  ⟨∃a :: x+a = y⟩

because it’s only correct if  a  is nonnegative, but if we have a notion of positive then why not just define

     x ≤ y  ≡  y−x  is nonnegative       ?

Well, the definition that used existential quantification seemed most natural for the natural numbers (no pun intended), which constituted the domain I was working with originally. (And, hence, also the r (your a) was to be implicitly understood as having the natural numbers as its range.) As soon as we try to extend that relation to the integers, we seem to be faced once again with the issue presented in the title of my original message. But if we are working with all integers from the beginning, perhaps your suggestion would present a line of reasoning to be pursued.



Trying to axiomatize "The Integers" is not easy.  The same is true for any domain.  It's much nicer to take an algebraic approach to some aspect of the integers, like, say, the ordering relation, and prove what you can in that mini-domain.  When we say, for example, that the integers under addition form a group, or that they form with multiplication a commutative ring, we're not axiomatizing the domain — there are tons of groups and commutative rings!  We're not trying to pin down the mathematical object in our minds uniquely, we're just studying its structure.

Those are actually very interesting remarks that got me thinking. I guess you have a point, for, after all, is it not but the structure of a theory that justifies both its construction and its applications? Furthermore, I do believe that there's much to be gained in avoiding overspecification, and your suggestion sounds like it would help with that.



My suggestion is narrow your focus!

I'll definitely try to keep that in mind. Thank you once again!

Lucas Brunno Luna

unread,
May 25, 2020, 1:09:44 PM5/25/20
to Calculational Mathematics
Dear all,

I have just realized that I made a typo in the first line of the prototype for my proof of the transitivity of ≤. The x in the right conjunct should be a y. Here is the corrected version of said calculation:

    x ≤ y  ∧  y ≤ z
≡      { Definition of ≤ twice; predicate calculus }
    (E r, s :: x + r = y  ∧  y + s = z)
≡      { Leibniz (+ injective) }
    (E r, s :: x + r + s = y + s  ∧  y + s = z)
≡      { = transitive }
    (E r, s :: x + r + s = z)
≡      { ??? }
    (E t :: x + t = z)
≡      { Definition of ≤ }
    x ≤ z

Sorry about that blemish.

Jeremy Weissmann

unread,
May 25, 2020, 3:59:31 PM5/25/20
to mathmeth
Dear Lucas,

   Some scattered comments.

I think I will make a few observations concerning Homework 5, for what they're worth. The first one is that one can prove a variant of the "Successor" theorem [...]

   In line with my comments to you yesterday, I would say, rather than thinking about  S  as being part of an axiomatization of the naturals, simply consider it to be part of an algebraic system, and see what you can prove about it!  Do you see the difference?  In the axiomatization approach, you are trying to make sure you assume enough about  S  to prove that it can only mean what you mean it to mean, namely, the successor function.  In the algebraic approach, you are studying the consequences of assumptions about  S ,  regardless of what instantiations you can think of for it.

   There is nothing wrong with the former line of inquiry — but that's not really what the calculational style is about.  The calculational style is about algebraic calculation of formulae.  Axiomatic systems are about studying those systems — for example, proving limitations of those systems, etc.

 
As for the "Even double" theorem, one can easily prove it without induction by appealing to the distribution of even over +, viz.:

Th. even.(x + y)  ≡  even.x ≡ even.y

which, in turn, can be proved via induction.

   I can't help but comment again.  Please, go ahead and wonder how you might prove this theorem using induction.  Maybe even do it!  But part of the program laid out by Dijkstra et al (which is in keeping with a long, long tradition in mathematics and philosophy, for what that's worth) is to consider what we're looking at (a formula, a theorem, a collection of symbols, some mathematical concepts, etc) and ask yourself what sort of framework or approach best serves it.  That is careful pondering, thoughtful design, the art of effective reasoning.

   When I look at your formula, I see natural numbers, so of course induction is a possible approach to proof.  But I also see two variables, so I wonder if I'll need a double induction, which doesn't sound fun.  Also, I see a sort of distributivity over addition.  Induction wouldn't do justice to that.  (See similar comments in JAW2.)  So if I'm going to consider how I might prove your theorem, I want to be guided by the formula.

   Can we think of any sort of axiomatization of the predicate  even  which also enjoys some nice properties with respect to  + ?   Well, we often say a number is even if it is evenly divisible by 2.  And divisibility enjoys nice properties with respect to  + !   (If we didn't know this, we might wish to explore it.)  Letting  ⊑  denote  'divides' ,  we have the following  "standard"  property:

(0)     x ⊑ y  /\  x ⊑ z   =>   x ⊑ y+z     ,

but a calculational mathematician (actually also a good traditional mathematician) also knows the more flexible:

(1)     d ⊑ x   =>   ( d ⊑ y  ==  d ⊑ a·x+b·y )     or equivalently
(1')    d ⊑ x  \/  d ⊑ y   ==   d ⊑ x  \/  d ⊑ a·x + b·y     .

   How do we prove these?  A separate concern, but again we want to stick to operators that work well with  +  and  · .   I'd suggest something like:

x ⊑ y  ==  (En : integer.n : x·n = y)     ,

which caters to lovely proofs of  (0)  and  (1) .
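Numerically, (0) and the b = 1 instance of (1) are easy to spot-check with this interface (a sketch of mine; the general a, b form of (1) needs more care about which multiples of d arise, so I stick to b = 1 here):

```python
# Spot-check of (0) and the b = 1 instance of (1) on small positive
# integers, using  x ⊑ y  ==  (En :: x·n = y),  implemented as y % x == 0.
def divides(x, y):
    return y % x == 0

nums = range(1, 13)
# (0)  x ⊑ y  /\  x ⊑ z   =>   x ⊑ y+z
assert all(divides(x, y + z)
           for x in nums for y in nums for z in nums
           if divides(x, y) and divides(x, z))
# (1) with b = 1:  d ⊑ x   =>   ( d ⊑ y  ==  d ⊑ a·x + y )
assert all(divides(d, y) == divides(d, a * x + y)
           for d in nums for x in nums for y in nums for a in range(4)
           if divides(d, x))
```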

   But back to the theorem at hand.  Using our interface, we rewrite   even.x  ==  2 ⊑ x .   But if you try now to prove your theorem using this interface and  (0)/(1) ,  you will discover that you need more!  That is,  d ⊑ x+y  ==  d ⊑ x  ==  d ⊑ y  is not a theorem unless  d = 2 .   Calculating what remains to be shown, then figuring out what new machinery to introduce (and lemmata to be proven!) is a wonderful exercise.

   Proving your theorem via induction is just an exercise.
 
The book by Gries and Schneider that you mentioned, viz. A Logical Approach to Discrete Math, does indeed seem like a fine text that I would enjoy very much. Too bad I cannot acquire it right now, since I'm totally broke, have no source of income for myself, and I don't even know how to buy stuff off the internet. (I also don't have a credit card.) By the way, I couldn't even acquire Dijkstra's book, viz. A Discipline of Programming, by myself: Someone bought it for me as a birthday gift. (And I must say that I'm really glad for that, since that book is an extremely important piece of computing science writing and also a masterpiece in general.)

   Yes.  Unfortunately these books are all quite expensive.  I had meant to write my own introduction to the calculational style many years ago, but my path diverged somewhat.  I'd love to get back to it someday.  Perhaps I'm inching closer!

   That being said, there is lots of ugliness in LADM, and there is plenty to soak up from the free  EWDs  and  WFs  (and even JAWs, if I may be so bold)  online.  Consider having a look at some of the free resources here:  http://www.mathmeth.com/read.shtml .   We're all happy to help you work through them.
 
Speaking of LADM, I have observed that, although I'm still very new to abstract algebra, the axiomatization for the integers as presented in the Extra Notebook seems to be equivalent to that of a commutative ring, except perhaps for the "Cancellation of multiplication" law, for which I don't know whether a derivation from the commutative ring axioms alone can be given or not; in any case, I see that said law is stated there as an axiom.

   A commutative ring with cancellable multiplication is indeed something new.
 
To Jeremy:

Which axioms are you extending to these new solutions?

See, my plan was not to extend any of the original axioms, in principle.

   Okay, but then I don't think you can do what you hope to do.

   For example, you've extended the domain but you've also silently done so for  +  as well, when you write   x + y = 0 ,  right?  One of those symbols is a natural number, while the other is a new creature.  But which addition symbol is that?  Is it the one that enjoys the axiom of commutativity?  Yes or no!  If yes, then you already have commutativity.  If not, then you have two additions floating around, one commutative, one (perhaps) not.  What is the relationship between the two?  If you don't write down some properties, I'm not sure you'll be able to prove anything.  For example, do you propose:

x ++ y  =  x + y     if  x,y  are natural    ?

Etc.  Start thinking about what you're really assuming about your domain, and then you can start asking real questions!

   If all you have is  x ++ y  =  x + y  for  x,y  natural, then I'm not sure you have enough.  Because your only introduced axiom about  ++  is that for all (natural)  x ,  there exists  y  such that  x ++ y = 0 .   I'm not sure what prevents the possibility that   x ++ y = 0  for all other cases.

 
By the way, the  "you can prove this easily"  lemma is:   x ≤ y == x+c ≤ y+c  which I think will not be hard for you to prove.

True, that was actually very easy. The only additional theorem that I needed was the injectivity of +, which is a consequence of the injectivity of S and can be established via induction.


   If you want to prove any theorems about  +  and  ≤ ,  you need to do more theory building.   Theorems regarding  +  and  ≤  should depend on properties of those symbols — not properties of  S .   If you then want to implement them in terms of  S  and prove new theorems, go right ahead, but keep your concerns separate... make a firewall!

   In my understanding, Peano's postulates are not about proving things about addition.  We already know addition is commutative and injective.  If Peano had come up with some postulates for  S  and then defined  +  in terms of  S  so that it wasn't commutative or wasn't injective, what would people have said?  "These are not suitable axioms.  They don't correctly model the concept."   Not,  "Wow, I guess addition isn't commutative!" .

   Peano's postulates are about creating an axiomatization for arithmetic, so that if we wish to study the logical power/limitations of arithmetic, prove (in)completeness theorems, compare it to other similar systems, explore nonstandard models (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic), etc, we can do so.  But they are not suitable for actually doing mathematics.
 
    x ≤ y  ∧  x ≤ z
≡      { Definition of ≤ twice; predicate calculus }
    (E r, s :: x + r = y  ∧  y + s = z)
≡      { Leibniz (+ injective) }
    (E r, s :: x + r + s = y + s  ∧  y + s = z)
≡      { = transitive }
    (E r, s :: x + r + s = z)
≡      { ??? }
    (E t :: x + t = z)
≡      { Definition of ≤ }
    x ≤ z

The trouble is that I still don't know how to justify the fourth step.

   Very nice!  In the fourth step, you are rewriting the dummy under  t = r+s .   But in fact you don't have  ==  in the fourth step, because the change from  (r,s)  to  t  is not invertible.  (For example, if  x = 2  and  y = 3 ,  then the only option is  t = 1  and there is no corresponding solution in  (r,s)  since they both must be positive.)

   You should have changed from  (Er,s : r ≥ 1 /\ s ≥ 1 : ... )  to  (Et : t ≥ 2 : ... ) .   Then you weaken the range to  t ≥ 1 .

Just thinking a bit more, I don’t dig a definition like:

     x ≤ y  ≡  ⟨∃a :: x+a = y⟩

because it’s only correct if  a  is nonnegative, but if we have a notion of positive then why not just define

     x ≤ y  ≡  y−x  is nonnegative       ?

Well, the definition that used existential quantification seemed most natural for the natural numbers (no pun intended), which constituted the domain I was working with originally. (And, hence, also the r (your a) was to be implicitly understood as having the natural numbers as its range.) As soon as we try to extend that relation to the integers, we seem to be faced once again with the issue presented in the title of my original message. But if we are working with all integers from the beginning, perhaps your suggestion would present a line of reasoning to be pursued.

   Aha, I see!  Sorry for my confusion.  I'd have to rethink the calculations then.

+j

Jeremy Weissmann

unread,
May 25, 2020, 4:39:17 PM5/25/20
to mathmeth
Not sure why I replaced  ≤  with  <  in my mind and started insisting on r,s > 0. 

Sorry about that!

On May 25, 2020, at 15:59, Jeremy Weissmann <plusj...@gmail.com> wrote:



Lucas Brunno Luna

unread,
May 25, 2020, 5:59:47 PM5/25/20
to Calculational Mathematics
Dear Jeremy,

I would like to have some more time to think about the response you have just given, which, I should say from the outset, contains some very interesting points. I'm just writing this email to mention, in passing, a few small remarks.



As for the "Even double" theorem, one can easily prove it without induction by appealing to the distribution of even over +, viz.:

Th. even.(x + y)  ≡  even.x ≡ even.y

which, in turn, can be proved via induction.

   I can't help but comment again.  Please, go ahead and wonder how you might prove this theorem using induction.  Maybe even do it!

I had already proved this theorem before writing that, haha. It is really simple.

With predicate even defined on natural numbers by (essentially the axiomatization given in the page linked by Wolfram):
Ax. even.0
Ax. even.(S.x)  ≡  ¬even.x


We observe for natural x, y:
Th. even.(x + y)  ≡  even.x ≡ even.y

The proof is by induction over y.
Base step:
    even.(x + 0)
≡      { Identity of + }
    even.x
≡      { Identity of ≡ }
    even.x ≡ true
≡      { even axiom }
    even.x ≡ even.0

Induction step:
    even.(x + S.y)
≡      { Successor axiom }
    even.(S.(x + y))
≡      { even axiom }
    ¬even.(x + y)
≡      { Induction hypothesis }
    ¬(even.x ≡ even.y)
≡      { Predicate calculus }
    even.x ≡ ¬even.y
≡      { even axiom }
    even.x ≡ even.(S.y)

Also:


    x ≤ y  ∧  x ≤ z
≡      { Definition of ≤ twice; predicate calculus }
    (E r, s :: x + r = y  ∧  y + s = z)
≡      { Leibniz (+ injective) }
    (E r, s :: x + r + s = y + s  ∧  y + s = z)
≡      { = transitive }
    (E r, s :: x + r + s = z)
≡      { ??? }
    (E t :: x + t = z)
≡      { Definition of ≤ }
    x ≤ z

The trouble is that I still don't know how to justify the fourth step.

   Very nice!  In the fourth step, you are rewriting the dummy under  t = r+s .

Thanks!

But truly, that's just it? Rewriting the dummy? I used to think that would only allow one to replace an atomic dummy identifier by another, such as replacing j by k, say.

(For the record:


Not sure why I replaced  ≤  with  <  in my mind and started insisting on r,s > 0. 

Sorry about that!

I got a little confused when I first read the corresponding part in the previous email, but that explains it; so, no problem.)

Jeremy Weissmann

unread,
May 25, 2020, 8:33:39 PM5/25/20
to mathmeth
Dear Lucas,

I had already proved this theorem before writing that, haha. It is really simple.

   I see.  With a tool like:

(*)     even.(S.x) == ¬even.x

at your disposal, induction definitely suggests itself!

   However, there is a symptom here worth looking out for:  if you view the natural numbers as  "fundamentally"  being the mathematical objects which can be obtained by repeatedly applying  S  to  0 ,  then you will be inclined to phrase your properties and theorems in order to accommodate this view.  Case in point, the property  (*) .   Of course  (*)  is true!  But let us ask ourselves why Gries and Schneider have chosen it as an axiom.  Is this really a fundamental and useful way to characterize  even ?   Or is it simply the way they chose to define it because they decided to prove their theorems using induction?

   I'm not trying to say that  G&S  have done anything sneaky.  If you're working in a particular framework, then you need your tools to reflect that framework.  The question is:  is that framework well-chosen?  Usually, in an algebraic approach to mathematics, we need more than one framework.  Induction is important — but so is divisibility and prime factorization.  And ultimately,  even  is a concept that is fundamentally related to divisibility and prime factorization, not induction.

   Hopefully I'm making myself clear.  To be even more explicit, viewing  even.x == 2 ⊑ x  means that we have a host of theorems about  even  at our disposal.  Using the simple interface  x ⊑ y  ==  (En : n integer : x·n = y) ,  we can prove a host of simple theorems about the  ⊑  relation, and hence about  even .   For example,  even.x /\ even.y  =>  even.(x+y)  or  even.(x+y)  =>  (even.x == even.y) ,  both of which have nothing to do with the number  2 .   What is special about  even  is that:

¬even.x /\ ¬even.y  =>  even.(x+y)

is also a theorem, which is not true in general.
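This special fact, and its failure for other divisors, can be confirmed mechanically (a sketch of mine):

```python
# Special to 2: two non-multiples of 2 always sum to a multiple, i.e.
#   ¬even.x /\ ¬even.y  =>  even.(x+y),
# while the analogous claim fails for, e.g., divisibility by 3.
def divides(d, n):
    return n % d == 0

assert all(divides(2, x + y)
           for x in range(40) for y in range(40)
           if not (divides(2, x) or divides(2, y)))
# d = 3 counterexample: 1 is not a multiple of 3, and neither is 1 + 1.
assert not divides(3, 1) and not divides(3, 1 + 1)
```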

   But even this may be a bit of a smokescreen, because perhaps the most powerful way to reason is in terms of  ≈ ,  or modular equivalence:

x ≈ y  ==  d ⊑ y–x       ,

a definition which easily lends itself to proving a host of useful theorems about  ≈ .   Then with the parameter  d = 2 ,  we have  even.x == (x ≈ 0) .   And we have for any  x ,  x ≈ 0 ≢ x ≈ 1 ,  which is of course only true with  d = 2 .
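A small sketch of this interface at the parameter d = 2 (my own encoding, using Python's remainder for the divisibility test):

```python
# Modular equivalence  x ≈ y  ==  d ⊑ y−x,  instantiated at d = 2:
# even.x == (x ≈ 0), and x ≈ 0 and x ≈ 1 never agree.
d = 2

def approx(x, y):
    return (y - x) % d == 0

assert all((x % 2 == 0) == approx(x, 0) for x in range(30))
assert all(approx(x, 0) != approx(x, 1) for x in range(30))
```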

   I'm not saying any of these interfaces is  'right'  or  'wrong' .   But I do feel that these interfaces get at something deeper than the  G&S  axioms for  even .   You don't want to use induction for every proof relating to number theory!  (I don't want to use it for ANY proof relating to number theory, hehe.)

   Very nice!  In the fourth step, you are rewriting the dummy under  t = r+s .

But truly, that's just it? Rewriting the dummy? I used to think that would only allow one to replace an atomic dummy identifier by another, such as replacing j by k, say.

   There are rules for rewriting the dummy!  Replacing  j  by  k  is also not straightforward:  if you wish to retain equality, they have to be of the same type.  It is also okay to replace  j  by  k  if  k  satisfies  j's  type, but then in general the range will shrink/strengthen.

   One way to view your transformation in more detail is:

      (Er,s :: ... r+s ... )
==     { one-point rule,  r+s  satisfies  t's  type }
      (Er,s :: (Et : t = r+s : ... r+s ... ))
==     { Leibniz }
      (Er,s :: (Et : t = r+s : ... t ... ))
==     { quantifier interchange }
      (Et :: (Er,s : t = r+s : ... t ... ))
==     { vacuous quantification, assuming  (Er,s :: t = r+s)  }
      (Et :: ... t ... )          .

So in our case,  r,s,t  are all of type natural.  Certainly  r+s  satisfies  t's  type.  As for the last step, we need  (Er,s :: t = r+s) ,  so we can take  r,s := t,0 .   That's a little ugly, but it's what we need.
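Collapsed into one statement, the five steps say that over the naturals the two quantifications agree; the check below is my own sketch (bounds chosen so the sample predicates fall well inside both search ranges):

```python
# With r, s, t ranging over the naturals:
#   (Er,s :: P.(r+s))  ==  (Et :: P.t) ,
# the last step supplied by the witness  r,s := t,0 .
def exists_rs(P, bound=40):
    return any(P(r + s) for r in range(bound) for s in range(bound))

def exists_t(P, bound=40):
    return any(P(t) for t in range(bound))

for P in (lambda n: n == 7, lambda n: n * n == 25, lambda n: False):
    assert exists_rs(P) == exists_t(P)
```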

   The simplest kind of transformation is an invertible one, where each set of dummy variables has a solution in terms of the other set.  In that case it really is just a renaming.

   Probably it would be good to write a note on these sorts of transformations.

+j
 

Wolfram Kahl

unread,
May 25, 2020, 10:06:33 PM5/25/20
to calculationa...@googlegroups.com
On Mon, May 25, 2020 at 08:33:34PM -0400, Jeremy Weissmann wrote:
> I see. With an tool like:
>
> (*) even.(S.x) == ¬even.x
>
> at your disposal, induction definitely suggests itself!
>
> [...] But let us ask ourselves why Gries and Schneider have chosen it
> as an axiom.

I must admit that I have chosen this axiom for this Homework,
which follows a lecture in Week 3 that introduces induction
without reference to LADM chapter 12 “Mathematical Induction”,
and after having done only equational calculations on integers
and in propositional calculus.
(With a quick search, I couldn't even find any definition for `even` in LADM.)
I introduce induction early because it is easy, and I want students
to know that it is easy, since otherwise some of them are unnecessarily scared.

I chose this example and this axiom in order to have something different from
just addition, subtraction, and multiplication, and something different
from last year. I may choose a different axiom next year. And even
within the same year, the same concept may at different times be
introduced via different axioms --- with the ``old axioms'' typically
turning into exercises.

This is in a second-year course for students in Computer Science and
in Software Engineering programs; the latter have had ``Engineering Math''
(2 calculus courses and 1 linear algebra course) and one programming course
using Python in their first year.

Students there need practice, practice, practice, including
opportunity to try their skills on different problems, and to try
different approaches once they are available to them. The esthetics
can only be appreciated once the alternatives are seen. I do
emphasise the esthetics, and some of the students appreciate it, but
all of them have to learn to use logics as a tool.

The CalcCheck notebooks on

http://calccheck.mcmaster.ca/CalcCheckDoc/2DM3-2019-instances.html

are just one part of the teaching materials for that course;
in lectures I normally interact with CalcCheck only very occasionally.
(In 2018 I had blackboards, and my most important teaching tools
were in a box of coloured chalk; CalcCheck is mainly a learning tool
for the students.
)


> To be even more explicit, viewing
> even.x == 2 ⊑ x means that we have a host of theorems about even at our
> disposal.

(After we have introduced a host of other theorems.
My axiom (*) had the advantage that it did not need that.
)

> Using the simple interface x ⊑ y == (En : n integer : x·n = y) ,

(“simple” is relative...
I only did quantification two weeks later.
)


All the best,

Wolfram

Jeremy Weissmann

unread,
May 25, 2020, 10:20:42 PM5/25/20
to calculationa...@googlegroups.com
Ah sorry I thought this was all from LADM.

I understand your needs and priorities, and I can’t speak to them and would never try to. But I have my priorities as well :)

I found out about Dijkstra and the calculational style outside of school. I’m grateful for what I learned and discovered (though sad that there weren’t more structured materials). I don’t have any answers for how to package these ideas into a semester and a curriculum. Perhaps, if they belong there at all, you’ve done the best job of that.

But for those of us who are not in school, and who have an interest in the art and science of effective reasoning, I put my tastes forward as well!

Incidentally, “use mathematical induction and the following properties of the predicate even to prove [...]” seems like a fine exercise. I consider that separate from notions of “axiomatization”. That word is heavy enough that, to my mind, it needs to be used with care.

+j

> On May 25, 2020, at 22:06, Wolfram Kahl <ka...@cas.mcmaster.ca> wrote:

Lucas Brunno Luna

unread,
May 26, 2020, 5:32:26 PM5/26/20
to Calculational Mathematics
Dear Jeremy,

Let us continue our explorations.


I think I will make a few observations concerning Homework 5, for what they're worth. The first one is that one can prove a variant of the "Successor" theorem [...]

   In line with my comments to you yesterday, I would say, rather than thinking about  S  as being part of an axiomatization of the naturals, simply consider it to be part of an algebraic system, and see what you can prove about it!  Do you see the difference?  In the axiomatization approach, you are trying to make sure you assume enough about  S  to prove that it can only mean what you mean it to mean, namely, the successor function.  In the algebraic approach, you are studying the consequences of assumptions about  S ,  regardless of what instantiations you can think of for it.

Well, I can see that a function like S could lend itself to some kind of generalization, all right. The most essential, distinguishing aspect about such a function seems to be the fact that it can be applied to its result. So, formally, I guess we could say that we are considering a class of entities f such that:

Ax. f: X → Y                  i.e. f is a function from some domain X to some codomain Y
Ax. (A x : x ∈ X : f.x ∈ X)   i.e. f can be applied to its result

Now, I did investigate the above axioms a little, and tried to find out whether, together, they could tell me something about X and Y. The one thing I found out is that the range of the function is a subset of their intersection, which still doesn't sound very impressive...
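To make that finding concrete, here is a toy instance (sets and f chosen arbitrarily by me):

```python
# A function f: X -> Y that can be applied to its own result;
# consequently its range must land inside X ∩ Y.
X = {0, 1, 2}
Y = {1, 2, 9}
f = {0: 1, 1: 2, 2: 1}                       # f as a finite map

assert set(f) == X and set(f.values()) <= Y  # f is a function from X to Y
assert all(f[x] in X for x in X)             # f.x is in X, so f.(f.x) exists
assert set(f.values()) <= (X & Y)            # hence range.f is inside X ∩ Y
```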

Should I continue to explore these axioms, or perhaps introduce others? I must say, while I can see that these explorations about generalizing S could indeed have some value, I'm still not really sure why I'm doing this in the first place, but, since I'm enjoying the exercise, I'm willing to go along with it if you suggest that there's something in it for us to learn.

Of course, we don't need to restrict ourselves to functions, we could generalize S to a relation as well. The trouble is that I'm still not very used to relations; neither to the relational calculus, for that matter. My whole dilemma has been trying to come to grips with J, ~ and ; without an underlying model... I now understand that, in the standard model,

0. Relations are modeled by boolean functions defined on the Cartesian square of some set;
1. J is modeled by x(J)y ≡ x = y;
2. ~ is modeled by x(~R)y ≡ y(R)x;
3. And ; is modeled by x(R;S)y ≡ (E z :: x(R)z ∧ z(S)y).
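That standard model is itself easy to play with; the sketch below (a toy carrier and relation of my choosing) checks a few of the familiar laws:

```python
# Relations as sets of pairs over a small carrier.
J = {(x, x) for x in range(4)}                      # x(J)y  ≡  x = y

def conv(R):                                        # x(~R)y ≡ y(R)x
    return {(y, x) for (x, y) in R}

def comp(R, S):                                     # x(R;S)y ≡ (E z :: x(R)z ∧ z(S)y)
    return {(x, y) for (x, z) in R for (w, y) in S if z == w}

R = {(0, 1), (1, 2), (2, 2)}
assert comp(J, R) == R == comp(R, J)                # J is the identity of ;
assert conv(conv(R)) == R                           # ~ is an involution
assert conv(comp(R, R)) == comp(conv(R), conv(R))   # ~(R;R) = ~R ; ~R
```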

But I've still got to carefully go through the relevant passage in EWD1123 that develops the relational calculus from more general axioms for the two new operators and the new constant. If you have any tips on coming to grips with these axioms in their own right, I'm listening. (Reading, haha.) I guess I could adopt an approach such as going "OK, he's introducing these new operators and a constant with such and such properties, and he's going to derive some theorems from that", but without the model I had been feeling like I didn't have enough incentive to go through the trouble of learning the calculus.



   There is nothing wrong with the former line of inquiry — but that's not really what the calculational style is about.  The calculational style is about algebraic calculation of formulae.  Axiomatic systems are about studying those systems — for example, proving limitations of those systems, etc.

Hmm, that's informative...



   But part of the program laid out by Dijkstra et al (which is in keeping with a long, long tradition in mathematics and philosophy, for what that's worth) is to consider what we're looking at (a formula, a theorem, a collection of symbols, some mathematical concepts, etc) and ask yourself what sort of framework or approach best serves it.  That is careful pondering, thoughtful design, the art of effective reasoning.

   When I look at your formula, I see natural numbers, so of course induction is a possible approach to proof.  But I also see two variables, so I wonder if I'll need a double induction, which doesn't sound fun.  Also, I see a sort of distributivity over addition.  Induction wouldn't do justice to that.  (See similar comments in JAW2.)  So if I'm going to consider how I might prove your theorem, I want to be guided by the formula.

   Can we think of any sort of axiomatization of the predicate  even  which also enjoys some nice properties with respect to  + ?   Well, we often say a number is even if it's evenly divisible by 2.  And divisibility enjoys nice properties with respect to  + !   (If we didn't know this, we might wish to explore it.)  Letting  ⊑  denote  'divides' ,  we have the following  "standard"  property:

(0)     x ⊑ y  /\  x ⊑ z   =>   x ⊑ y+z     ,

but a calculational mathematician (actually also a good traditional mathematician) also knows the more flexible:

(1)     d ⊑ x   =>   ( d ⊑ y  ==  d ⊑ a·x+y )     or equivalently
(1')    ( d ⊑ x  =>  d ⊑ y )   ==   ( d ⊑ x  =>  d ⊑ a·x + y )     .

   How do we prove these?  A separate concern, but again we want to stick to operators that work well with  +  and  · .   I'd suggest something like:

x ⊑ y  ==  (En : integer.n : x·n = y)     ,

which caters to lovely proofs of  (0)  and  (1) .
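These properties are also easy to spot-check by brute force on small numbers. Here is a throwaway sketch of my own (the name divides is ad-hoc), checking (0) and the rule  d ⊑ x => (d ⊑ y == d ⊑ a·x+y) :

```python
# Brute-force spot-check of the divisibility interface above.
# divides(x, y) models  x ⊑ y  ==  (En : integer.n : x·n = y) .

def divides(x, y):
    # For x = 0 the existential definition reduces to y = 0.
    return y == 0 if x == 0 else y % x == 0

ns = range(-6, 7)

# (0):  x ⊑ y  /\  x ⊑ z   =>   x ⊑ y+z
assert all(divides(x, y + z)
           for x in ns for y in ns for z in ns
           if divides(x, y) and divides(x, z))

# d ⊑ x   =>   ( d ⊑ y  ==  d ⊑ a·x + y )
assert all(divides(d, y) == divides(d, a * x + y)
           for d in ns for x in ns for y in ns for a in range(-3, 4)
           if divides(d, x))
```

Again, a check over a finite range is not a proof, but it is a cheap sanity test of an interface before investing in proofs about it.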

   But back to the theorem at hand.  Using our interface, we rewrite   even.x  ==  2 ⊑ x .   But if you try now to prove your theorem using this interface and  (0)/(1) ,  you will discover that you need more!  That is,  d ⊑ x+y  ==  d ⊑ x  ==  d ⊑ y  is not a theorem unless  d = 2 .   Calculating what remains to be shown, then figuring out what new machinery to introduce (and lemmata to be proven!) is a wonderful exercise.

Well, that is a sensible approach to take for defining the predicate even on the integers in general. I will not pursue those investigations right now, but I'm thankful for the fact that you took the time to make those remarks!



   Yes.  Unfortunately these books are all quite expensive.  I had meant to write my own introduction to the calculational style many years ago, but my path diverged somewhat.  I'd love to get back to it someday.  Perhaps I'm inching closer!

I do think that there is value in having a reasonably self-contained document explicitly presenting the motivations for/virtues of the calculational style; as opposed to having to gather that from what is scattered throughout the EWD's. What I understand of the calculational style is that, for instance, it is pretty effective in making reasoning more explicit, and thereby more rigorous, because each step can, in principle, be easily checked. For the proof designer it provides valuable heuristics and helps in refraining from making avoidable mistakes in reasoning. A great highlight of calculation for me is how it helps in reasoning about abstract concepts, and also its compatibility with the discipline of coming to grips with huge sets by relying on the definition of a set as a whole. Another aspect that I find interesting is that the manipulation of formulae by application of discrete steps from some repertoire helps in exposing the isomorphism (Dijkstra actually uses this terminology in EWD1243, for instance) between designing a proof and programming a subroutine. Of course, one can also mention that a great motivation for the calculational style is improving the efficiency of reasoning, "by reducing the use of the brain and letting the symbols do the work."

Though, in this discussion, you've definitely been shedding light on some aspects of the calculational style that I had hardly realized.



That being said, there is lots of ugliness in LADM, and there is plenty to soak up from the free  EWDs  and  WFs  (and even JAWs, if I may be so bold)  online.  Consider having a look at some of the free resources here:  http://www.mathmeth.com/read.shtml .   We're all happy to help you work through them.

OK, noted.



A commutative ring with cancellable multiplication is indeed something new.

Hm, I see.


To Jeremy:

Which axioms are you extending to these new solutions?

See, my plan was not to extend any of the original axioms, in principle.

   Okay, but then I don't think you can do what you hope to do.

   For example, you've extended the domain but you've also silently done so for  +  as well, when you write   x + y = 0 ,  right?

(Yes, that's correct.)


One of those symbols is a natural number, while the other is a new creature.  But which addition symbol is that?  Is it the one that enjoys the axiom of commutativity?  Yes or no!  If yes, then you already have commutativity.  If not, then you have two additions floating around, one commutative, one (perhaps) not.  What is the relationship between the two?  If you don't write down some properties, I'm not sure you'll be able to prove anything.  For example, do you propose:

x ++ y  =  x + y     if  x,y  are natural    ?

Etc.  Start thinking about what you're really assuming about your domain, and then you can start asking real questions!

   If all you have is  x ++ y  =  x + y  for  x,y  natural, then I'm not sure you have enough.  Because your only introduced axiom about  ++  is that for all (natural)  x ,  there exists  y  such that  x ++ y = 0 .   I'm not sure what prevents the possibility that   x ++ y = 0  for all other cases.

That is all very illuminating, because it lies at the heart of what had me confused originally. I would have been less confused had I introduced a new symbol such as ++ in the manner that you have shown us. Perhaps I'll continue examining these issues in specific connection with the integers in more detail later, but the general lesson learned here is certainly something worth taking notice!



By the way, the  "you can prove this easily"  lemma is:   x ≤ y == x+c ≤ y+c  which I think will not be hard for you to prove.

True, that was actually very easy. The only additional theorem that I needed was the injectivity of +, which is a consequence of the injectivity of S and can be established via induction.
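For what it's worth, the Peano definitions from the start of this thread are small enough to model directly, so a lemma like the injectivity of + can also be spot-checked mechanically. A throwaway sketch of mine (numerals as nested tuples is purely my own encoding):

```python
# Tiny model of the Peano axioms from earlier in the thread.
# Numerals are nested tuples:  0 is () ,  S.x is (x,) .

ZERO = ()

def S(x):
    return (x,)

def plus(x, y):
    # Ax. x + 0 = x ;  Ax. x + S.y = S.(x + y)
    return x if y == ZERO else S(plus(x, y[0]))

def num(n):
    # Shorthand: the numeral for the ordinary natural n.
    return ZERO if n == 0 else S(num(n - 1))

# Injectivity of + in its left argument:  x+c = y+c  =>  x = y ,
# checked by brute force on small numerals.
small = [num(n) for n in range(6)]
assert all(x == y
           for c in small for x in small for y in small
           if plus(x, c) == plus(y, c))
```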


   If you want to prove any theorems about  +  and  ≤ ,  you need to do more theory building.   Theorems regarding  +  and  ≤  should depend on properties of those symbols — not properties of  S .   If you then want to implement them in terms of  S  and prove new theorems, go right ahead, but keep your concerns separate... make a firewall!
 
OK, I think that is some sound advice.



   In my understanding, Peano's postulates are not about proving things about addition.  We already know addition is commutative and injective.  If Peano had come up with some postulates for  S  and then defined  +  in terms of  S  so that it wasn't commutative or wasn't injective, what would people have said?  "These are not suitable axioms.  They don't correctly model the concept."   Not,  "Wow, I guess addition isn't commutative!" .

   Peano's postulates are about creating an axiomatization for arithmetic, so that if we wish to study the logical power/limitations of arithmetic, prove (in)completeness theorems, compare it to other similar systems, explore nonstandard models (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic), etc, we can do so.  But they are not suitable for actually doing mathematics.

Again, that is pretty informative. And I didn't know about those non-standard models of arithmetic. I'm thankful for those observations, though I cannot add anything to that discussion at the moment.


If you're working in a particular framework, then you need your tools to reflect that framework.  The question is:  is that framework well-chosen?  Usually, in an algebraic approach to mathematics, we need more than one framework.  Induction is important — but so is divisibility and prime factorization.  And ultimately,  even  is a concept that is fundamentally related to divisibility and prime factorization, not induction.

 That makes sense and is interesting to think about, all right.



What is special about  even  is that:

¬even.x /\ ¬even.y  =>  even.(x+y)

is also a theorem, which is not true in general.

I must admit that I did not understand what you meant here at all. (At all.) The theorem seems to state that the sum of two odd numbers is even; what do you mean by "not true in general?" Care to try to explain that again?



But even this may be a bit of a smokescreen, because perhaps the most powerful way to reason is in terms of  ≈ ,  or modular equivalence:

x ≈ y  ==  d ⊑ y–x       ,

a definition which easily lends itself to proving a host of useful theorems about  ≈ .   Then with the parameter  d = 2 ,  we have  even.x == (x ≈ 0) .   And we have for any  x ,  x ≈ 0 ≠≠ x ≈ 1 ,  which is of course only true with  d = 2 .
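A quick brute-force sketch of mine (the name equiv is ad-hoc) confirms the advertised behaviour of this definition on small integers: ≈ is reflexive and symmetric, it respects addition, and with d = 2 it captures evenness:

```python
# Spot-check of modular equivalence  x ≈ y  ==  d ⊑ y-x  on small integers.

def divides(d, n):
    return n == 0 if d == 0 else n % d == 0

def equiv(d, x, y):
    return divides(d, y - x)

ns = range(-8, 9)
d = 2

# ≈ is an equivalence relation (reflexivity, symmetry shown) and respects + :
assert all(equiv(d, x, x) for x in ns)
assert all(equiv(d, y, x) for x in ns for y in ns if equiv(d, x, y))
assert all(equiv(d, x + u, y + v)
           for x in ns for y in ns for u in ns for v in ns
           if equiv(d, x, y) and equiv(d, u, v))

# With d = 2 :  even.x == (x ≈ 0) ,  and  x ≈ 0  differs from  x ≈ 1 :
assert all((x % 2 == 0) == equiv(2, x, 0) for x in ns)
assert all(equiv(2, x, 0) != equiv(2, x, 1) for x in ns)
```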

   I'm not saying any of these interfaces is  'right'  or  'wrong' .   But I do feel that these interfaces get at something deeper than the  G&S  axioms for  even .   You don't want to use induction for every proof relating to number theory!  (I don't want to use it for ANY proof relating to number theory, hehe.)

I'm still not very used to the modular equivalence. I have already seen you using it as a connective in some proofs, though. A prominent example that comes to mind was in a proof for a theorem in graph theory, which, if I remember correctly, had been communicated by a user called João. I remember that I found your solution a very good one. (In any case, just for the sake of curiosity, I think I had come up with another proof for the same theorem, but using invariants. I tried sharing that solution in a comment under João's post on his blog wherein he had shared what you both had done on that problem. That was already many months ago, but the comment never got published; I guess João doesn't really update that blog anymore. Anyhow, this was my proof: http://www.mediafire.com/file/kkbe81u3zoabjk6/even-number-of-odd-nodes-in-graphs-again.txt/file)

Also, I would like to say that

x ≈ y  ==  d ⊑ y–x

does indeed seem like a nice way to define modular equivalence (modulo d, I presume?).

By the way, what is number theory all about, really? I'd appreciate it if you could tell me.


There are rules for rewriting the dummy!  Replacing  j  by  k  is also not straightforward:  if you wish to retain equality, they have to be of the same type.  It is also okay to replace  j  by  k  if  k  satisfies  j's  type, but then in general the range will shrink/strengthen.

   One way to view your transformation in more detail is:

      (Er,s :: ... r+s ... )
==     { one-point rule,  r+s  satisfies  t's  type }
      (Er,s :: (Et : t = r+s : ... r+s ... ))
==     { Leibniz }
      (Er,s :: (Et : t = r+s : ... t ... ))
==     { quantifier interchange }
      (Et :: (Er,s : t = r+s : ... t ... ))
==     { vacuous quantification, assuming  (Er,s :: t = r+s)  }
      (Et :: ... t ... )          .

So in our case,  r,s,t  are all of type natural.  Certainly  r+s  satisfies  t's  type.  As for the last step, we need  (Er,s :: t = r+s) ,  so we can take  r,s := t,0 .   That's a little ugly, but it's what we need.
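The net effect of the whole calculation, (Er,s :: ... r+s ...) == (Et :: ... t ...), can also be checked exhaustively on a bounded fragment of the naturals. A throwaway check of mine: with r,s ranging over 0..N-1, the sums r+s cover exactly 0..2N-2, so t must range over that set, and then the equivalence holds for every predicate P:

```python
# Exhaustive check of  (Er,s :: P.(r+s))  ==  (Et :: P.t)
# on a bounded fragment of the naturals.

from itertools import product

N = 5
rs = range(N)            # r, s range over 0..N-1
ts = range(2 * N - 1)    # the sums r+s cover exactly 0..2N-2

# Try every predicate P on 0..2N-2, encoded as a tuple of booleans:
for bits in product([False, True], repeat=2 * N - 1):
    P = lambda t: bits[t]
    lhs = any(P(r + s) for r in rs for s in rs)
    rhs = any(P(t) for t in ts)
    assert lhs == rhs
```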

   The simplest kind of transformation is an invertible one, where each set of dummy variables has a solution in terms of the other set.  In that case it really is just a renaming.

That was a great explanation; thanks!



Probably it would be good to write a note on these sorts of transformations.

That could be useful.

Jeremy Weissmann

unread,
May 26, 2020, 9:29:16 PM5/26/20
to mathmeth
Dear Lucas,

Well, I can see that a function like S could lend itself to some kind of generalization, all right. The most essential, distinguishing aspect about such a function seems to be the fact that it can be applied to its result. So, formally, I guess we could say that we are considering a class of entities f such that:

   You may be taking me a little too literally.   S  may not have many applications outside of the usual one.  What I'm saying is: who cares?  Once you have enough properties of a symbol to make a nice calculus, go exploring!

 
Ax. f: X → Y                  i.e. f is a function from some domain X to some codomain Y
Ax. (A x : x ∈ X : f.x ∈ X)   i.e. f can be applied to its result

   Might be just simpler to say  f : X -> X .

 
Of course, we don't need to restrict ourselves to functions, we could generalize S to a relation as well.

   Not following you here.   S  is a unary operation on the natural numbers (if you like, natural structures).
 
The trouble is that I'm still not very used to relations; neither to the relational calculus, for that matter.

   Relational calculus is very hard!  Have you mastered simple proofs in the predicate calculus?  I would certainly go next to explore some lattice theory (there are EWDs and WFs on the subject), and then maybe try some simple number theory (like the divisibility stuff I mentioned earlier).

   Relational calculus is varsity-level.  The design of such proofs is hard.
 
   There is nothing wrong with the former line of inquiry — but that's not really what the calculational style is about.  The calculational style is about algebraic calculation of formulae.  Axiomatic systems are about studying those systems — for example, proving limitations of those systems, etc.

Hmm, that's informative...

   I mean, I certainly exaggerate the case.  Non-standard models exist for most axiomatizations.  The point, though, is that the concerns of a logician axiomatizing arithmetic are not the concerns of a number theorist.  Logicians want to axiomatize arithmetic as simply as possible so they can (for example) prove Gödel's incompleteness theorem.  Number theorists need to surround themselves with many powerful tools to prove theorems, and the distinction between axiom and theorem is totally irrelevant.  For example, I consider it a curiosity at best that

(*)     x /\ (y \/ z)  ==  (x /\ y) \/ (x /\ z)          (for all  x,y,z)

can be proved from:

     x \/ (y == z)  ==  x \/ y  ==  x \/ z
     x /\ y == x == y == x \/ y

and the symmetry and associativity of  \/  and  == .   From my perspective,  (*)  is fundamental and indispensable.  Trying to prove it from  "simpler"  assumptions about the symbols is nothing more than an exercise in theory development.  It is meaningless to the working mathematician because we already accept that it is true.  This echoes a point from an earlier email:  if we took different axioms and  (*)  wasn't true, we wouldn't say,  "Wow, actually  (*)  is not true!" ,  we would say,  "Your axioms are wrong." .   The axioms serve our needs, not the other way around.
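As a small aside of mine: all three formulas here (the two axioms and (*)) are plain boolean tautologies, which a four-line truth-table check confirms. This is of course not the derivation in question, just a sanity check:

```python
# Truth-table check that the two quoted axioms and (*) are tautologies.
# Python's == on booleans plays the role of the connective  == ,
# which is associative on booleans, so any grouping may be checked.

from itertools import product

for x, y, z in product([False, True], repeat=3):
    # axiom:  x \/ (y == z)  ==  x \/ y  ==  x \/ z
    assert (x or (y == z)) == ((x or y) == (x or z))
    # golden rule:  x /\ y == x == y == x \/ y
    assert ((x and y) == x) == (y == (x or y))
    # (*):  x /\ (y \/ z)  ==  (x /\ y) \/ (x /\ z)
    assert (x and (y or z)) == ((x and y) or (x and z))
```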
 
Well, that is a sensible approach to take for defining the predicate even on the integers in general. I will not pursue those investigations right now, but I'm thankful for the fact that you took the time to make those remarks!

   You could also define it on the integers by the same formula  even.x == (En :: x = 2*n) ,  where  n  is restricted to naturals.  You will be able to prove all the same theorems.

 
I do think that there is value in having a reasonably self-contained document explicitly presenting the motivations for/virtues of the calculational style; as opposed to having to gather that from what is scattered throughout the EWD's. [...]

Though, in this discussion, you've definitely been shedding light on some aspects of the calculational style that I had hardly realized.

   I appreciate the comment.  For me  —and I think I am right to say that I am carrying the torch of Dijkstra in some modest way— ,  what we call  'the calculational style'  is not fundamentally about calculation.  It is about the art of effective reasoning (EWD473, 619, etc), and calculation itself is an extreme synthesis of thought where concepts have been transcended and reasoning can be carried out in terms of symbol manipulation.  Much of the time, calculation is not possible, but we can always strive to think clearly and powerfully.

   I apologize for not having written a book yet.  :P

 
One of those symbols is a natural number, while the other is a new creature.  But which addition symbol is that?
That is all very illuminating, because it lies at the heart of what had me confused originally. I would have been less confused had I introduced a new symbol such as ++ in the manner that you have shown us. Perhaps I'll continue examining these issues in specific connection with the integers in more detail later, but the general lesson learned here is certainly something worth taking notice!

   One (traditional) construction of the integers from the naturals goes as follows.  On ordered pairs of naturals define a relation  (a,b) ~ (x,y)  ==  a+y = b+x .   We can prove that  ~  is an equivalence relation (reflexive, symmetric, transitive), and then we may denote by  [(a,b)]  the  'equivalence class'  containing all ordered pairs related to  (a,b)  by  ~ .   (This is a common sort of construction.)   One can now define addition on the equivalence classes like so:

     [(a,b)] ++ [(x,y)]  =  [(a+x,b+y)]         .

It can be proved easily that  ++  is well-defined (that it satisfies Leibniz), and that  ++  is symmetric and associative.  If one identifies natural  n  with  [(n,0)] ,  then you can now specify  -n = [(0,n)]  and everything you might want to prove can be proved.
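A throwaway rendering of this construction, entirely my own: instead of carrying whole equivalence classes around, each class [(a,b)] is represented by its canonical member (the pair whose minimum is 0), so equality of representatives models ~ and well-definedness of ++ holds by construction.

```python
# Sketch of the classes-of-pairs construction of the integers.
# The class  [(a,b)]  is represented canonically by  (a-b, 0)  or  (0, b-a) .

def canon(p):
    a, b = p
    m = min(a, b)
    return (a - m, b - m)

def related(p, q):          # (a,b) ~ (x,y)  ==  a+y = b+x
    (a, b), (x, y) = p, q
    return a + y == b + x

def plus(p, q):             # [(a,b)] ++ [(x,y)]  =  [(a+x, b+y)]
    (a, b), (x, y) = canon(p), canon(q)
    return canon((a + x, b + y))

def embed(n):               # natural  n  identified with  [(n,0)]
    return (n, 0)

def neg(n):                 # -n  specified as  [(0,n)]
    return (0, n)

# ~ identifies exactly the pairs with equal canonical forms:
assert related((3, 1), (5, 3)) and canon((3, 1)) == canon((5, 3))

# ++ is symmetric, and  n ++ (-n) = 0 :
assert plus(embed(2), neg(5)) == plus(neg(5), embed(2)) == (0, 3)
assert plus(embed(4), neg(4)) == embed(0)
```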

   BUT.  This is overspecific.  It's a beautiful construction, but it is not necessary to consider integers as equivalence classes of ordered pairs of natural numbers (that's a mouthful!) to be able to do math with integers.

   The point in sharing this construction is that the operation  ++  really is something new in this case.  It's related to  +  but it's new.   If you don't want to do that, you're trying to prove some sort of canonical extension theorem, the general case of which might be something like:

    "Suppose you have a binary operation  %  on a domain  X ,  and further suppose that there is a subdomain  S ⊆ X  such that  %  has such-and-such properties.  Then  %  has a unique extension to all of  X  which preserves all the properties."

The question is, with  X = "integers"  and  S = "naturals"  and  % = + ,  what are the such-and-such properties you need?  (I don't know the answer.)  (This is a question for logicians, by the way, not number theorists.)
 
OK, I think that is some sound advice.

   (credit: Dijkstra)
 
Again, that is pretty informative. And I didn't know about those non-standard models of arithmetic. I'm thankful for those observations, though I cannot add anything to that discussion at the moment.

    Long ago I was a mathematical logician.  I really love mathematical logic and find it fascinating.  That being said, I don't find any intersection between that and calculational mathematics.  But non-standard models are really cool!

What is special about  even  is that:

¬even.x /\ ¬even.y  =>  even.(x+y)

is also a theorem, which is not true in general.

I must admit that I did not understand what you meant here at all. (At all.) The theorem seems to state that the sum of two odd numbers is even; what do you mean by "not true in general?" Care to try to explain that again?

    Yes, I mean that

¬(d ⊑ x) /\ ¬(d ⊑ y)  =>  d ⊑ x+y

is only true when  d = 2 .   Sorry for the confusion.  The point was that if we recast the theorem in question in terms of divisibility:

2 ⊑ x  ==  2 ⊑ y  ==  2 ⊑ x+y

much of the content of the theorem is true even when replacing  2  by any natural number.  Only a small piece of it depends on the number  2 .
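Both halves of this remark can be confirmed by brute force on small numbers. A throwaway check of mine: the implication  d ⊑ x /\ d ⊑ y => d ⊑ x+y  (part of the content of the ==-chain) holds for every d, while the "odd plus odd is even" part singles out d = 2:

```python
# Which part of the parity theorem is special to the number 2?
# divides(d, n) models  d ⊑ n .

def divides(d, n):
    return n == 0 if d == 0 else n % d == 0

ns = range(40)

def odd_sum_even(d):
    # ¬(d ⊑ x) /\ ¬(d ⊑ y)  =>  d ⊑ x+y
    return all(divides(d, x + y)
               for x in ns for y in ns
               if not divides(d, x) and not divides(d, y))

def sum_rule(d):
    # d ⊑ x /\ d ⊑ y  =>  d ⊑ x+y  --  this half holds for every d
    return all(divides(d, x + y)
               for x in ns for y in ns
               if divides(d, x) and divides(d, y))

assert all(sum_rule(d) for d in range(1, 10))
# Among d >= 2 (d = 1 satisfies the antecedent vacuously), only d = 2
# makes the sum of two non-multiples a multiple:
assert [d for d in range(2, 10) if odd_sum_even(d)] == [2]
```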

 
I'm still not very used to the modular equivalence. I have already seen you using it as a connective in some proofs, though. A prominent example that comes to mind was in a proof for a theorem in graph theory, which, if I remember correctly, had been communicated by a user called João. I remember that I found your solution a very good one. (In any case, just for the sake of curiosity, I think I had come up with another proof for the same theorem, but using invariants. I tried sharing that solution in a comment under João's post on his blog wherein he had shared what you both had done on that problem. That was already many months ago, but the comment never got published; I guess João doesn't really update that blog anymore. Anyhow, this was my proof: http://www.mediafire.com/file/kkbe81u3zoabjk6/even-number-of-odd-nodes-in-graphs-again.txt/file)

   Oh man!  That was a long time ago!  2011?  Having trouble getting to the file in that link though.

 
 Also, I would like to say that

x ≈ y  ==  d ⊑ y–x

does indeed seem like a nice way to define modular equivalence (modulo d, I presume?).

By the way, what is number theory all about, really? I'd appreciate it if you could tell me.

   Uh, numbers?  ;)  Cutting edge number theory is not really about numbers, and I don't know enough to cogently answer the question.  But elementary number theory is definitely the study of natural numbers, integers, and equations involving these.  The wikipedia page will give you some notion of the diversity of the subject!  When I use the term, though, I'm just talking about simple properties of numbers as we use them in basic calculations.
 
Probably it would be good to write a note on these sorts of transformations.

That could be useful.

Sigh.  :)

+j 

Lucas Brunno Luna

unread,
May 26, 2020, 11:23:48 PM5/26/20
to Calculational Mathematics
Hi Jeremy,

Again, thank you very much for your answer. I will try replying to it properly later. For now, I'll just attach the file whose download link I had provided, but is not working for some reason; I apologize for the inconvenience.

Feel free to make any comments that you might have about that file.
proof-of-even-number-of-odd-nodes-in-graphs.txt

Lucas Brunno Luna

unread,
May 27, 2020, 4:16:25 PM5/27/20
to Calculational Mathematics
Dear Jeremy,



You may be taking me a little too literally.   S  may not have many applications outside of the usual one.  What I'm saying is: who cares?  Once you have enough properties of a symbol to make a nice calculus, go exploring!

That sounds like it would be of some interest, though I'm having a bit of a hard time trying to see how I am going to apply your suggestion in practice.



Ax. f: X → Y                  i.e. f is a function from some domain X to some codomain Y
Ax. (A x : x ∈ X : f.x ∈ X)   i.e. f can be applied to its result

   Might be just simpler to say  f : X -> X .

Perhaps that route could turn out to be more profitable. My reasoning was just that at this stage of the game insisting on X = Y would be premature, so I opted for that weaker axiomatization. What is more, by investigating the composition of two functions from that same class (i.e. on the same domain to the same codomain and applicable to their results) I observed that said composition is defined and the new function thus formed is also a member of the same class. The general lesson learned about any two functions (not just from the class that we were just considering) was that, for the composition g∘f to be defined, it is sufficient that the range of f be a subset of the domain of g; it is not necessary that the domain of g should match (i.e. be equal to) the codomain of f, as is suggested by certain formulations of functional composition which assume that f and g are such that:

f: X → Y
g: Y → Z

(Question: Would a better verbal rendering for an expression like f: X → Y be "f is a function on X to Y" rather than "f is a function 'from' X to Y"? Sorry, I don't know whether there's any difference.)
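The observation about composition (that g∘f is defined as soon as the range of f is a subset of the domain of g, with no need for g's domain to equal f's codomain) can be illustrated with finite functions coded as dicts; a throwaway sketch of mine:

```python
# Finite functions as dicts.  g∘f is defined as soon as the *range* of f
# is a subset of the *domain* of g; g's domain need not equal f's codomain.

def compose(g, f):
    assert set(f.values()) <= set(g.keys())   # range.f ⊆ domain.g
    return {x: g[f[x]] for x in f}

f = {0: 'a', 1: 'b'}             # f on {0,1}, to some large codomain of strings
g = {'a': 10, 'b': 20, 'c': 30}  # g's domain {'a','b','c'} includes range.f

assert compose(g, f) == {0: 10, 1: 20}
```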


Of course, we don't need to restrict ourselves to functions, we could generalize S to a relation as well.

   Not following you here.   S  is a unary operation on the natural numbers (if you like, natural structures).

Well, what I had in mind is quite simple: Why insist that the output of S should be a unique function of its input? It could very well relate various outputs to the same input, and that is just what a relation might do. Thus, non-deterministic mechanisms, such as we find in programming, would also provide a model for S. (I.e. when taking the initial state as the input and the final state as the output, one could arrive at different final states after different invocations of the mechanism with the same initial state.)

I don't know what else to say about that, though.

Also, perhaps I should like to point out that, as you can see, I was more interested in generalizing S in such a way that it could be defined on any domain, rather than just natural numbers or natural structures, as you put it. I confess that I have not yet seen the potential of investigating the latter special case. (If only because the original S in the presented axiomatization of the natural numbers only served to introduce the usual operators, which were defined in terms of it. I think it would have been perfectly fine to just have used "+ 1" in place of it.) Do you have anything in mind?



Have you mastered simple proofs in the predicate calculus?

I would say that I'm comfortable with it at this point, yes.


I would certainly go next to explore some lattice theory (there are EWDs and WFs on the subject)

I've been working on that; I certainly find lattice theory interesting, and I guess I can decently deal with it without a model, though being presented with various examples of instantiations for ⊑ and the corresponding ↑/↓ has certainly been an incentive, at least initially, to study the theory, such as the relations "at most" and "divides" with the respective max/min and scm/gcd on integers, the "containment" relation and the corresponding convex hull/intersection of convex figures, and also the ordinary implication with disjunction/conjunction. These models were not my only incentive, though: its structural properties, such as the "dual" versions of formulae, and the fact that ↑ and ↓ generalize to quantifications, have also attracted me; but what I have found most fascinating is the light it sheds on the design of certain proofs, with special mention to how it helps in formulating elegant definitions and avoiding case analyses.


and then maybe try some simple number theory (like the divisibility stuff I mentioned earlier).

That sounds like a good idea.


   There is nothing wrong with the former line of inquiry — but that's not really what the calculational style is about.  The calculational style is about algebraic calculation of formulae.  Axiomatic systems are about studying those systems — for example, proving limitations of those systems, etc.

Hmm, that's informative...

   I mean, I certainly exaggerate the case.  Non-standard models exist for most axiomatizations.  The point, though, is that the concerns of a logician axiomatizing arithmetic are not the concerns of a number theorist.  Logicians want to axiomatize arithmetic as simply as possible so they can (for example) prove Gödel's incompleteness theorem.  Number theorists need to surround themselves with many powerful tools to prove theorems, and the distinction between axiom and theorem is totally irrelevant.  For example, I consider it a curiosity at best that

(*)     x /\ (y \/ z)  ==  (x /\ y) \/ (x /\ z)          (for all  x,y,z)

can be proved from:

     x \/ (y == z)  ==  x \/ y  ==  x \/ z
     x /\ y == x == y == x \/ y

and the symmetry and associativity of  \/  and  == .   From my perspective,  (*)  is fundamental and indispensable.  Trying to prove it from  "simpler"  assumptions about the symbols is nothing more than an exercise in theory development.  It is meaningless to the working mathematician because we already accept that it is true.  This echoes a point from an earlier email:  if we took different axioms and  (*)  wasn't true, we wouldn't say,  "Wow, actually  (*)  is not true!" ,  we would say,  "Your axioms are wrong." .   The axioms serve our needs, not the other way around.

That's an interesting perspective to think about.



Well, that is a sensible approach to take for defining the predicate even on the integers in general. I will not pursue those investigations right now, but I'm thankful for the fact that you took the time to make those remarks!

   You could also define it on the integers by the same formula  even.x == (En :: x = 2*n) ,  where  n  is restricted to naturals.  You will be able to prove all the same theorems.

Really? But then

   even.(-2)
==   (E n :: -2 = 2 * n)
==   false .


I appreciate the comment.  For me  —and I think I am right to say that I am carrying the torch of Dijkstra in some modest way— ,  what we call  'the calculational style'  is not fundamentally about calculation.  It is about the art of effective reasoning (EWD473, 619, etc), and calculation itself is an extreme synthesis of thought where concepts have been transcended and reasoning can be carried out in terms of symbol manipulation.  Much of the time, calculation is not possible, but we can always strive to think clearly and powerfully.

What you describe generally as the calculational style sounds like it would be more aptly termed "mathematical methodology", no? In any case, I reckon that "the art and science of effective reasoning" was a phrase used by Dijkstra to define mathematics in general, is that not right?


   One (traditional) construction of the integers from the naturals goes as follows.  On ordered pairs of naturals define a relation  (a,b) ~ (x,y)  ==  a+y = b+x .   We can prove that  ~  is an equivalence relation (reflexive, symmetric, transitive), and then we may denote by  [(a,b)]  the  'equivalence class'  containing all ordered pairs related to  (a,b)  by  ~ .   (This is a common sort of construction.)   One can now define addition on the equivalence classes like so:

     [(a,b)] ++ [(x,y)]  =  [(a+x,b+y)]         .

It can be proved easily that  ++  is well-defined (that it satisfies Leibniz), and that  ++  is symmetric and associative.  If one identifies natural  n  with  [(n,0)] ,  then you can now specify  -n = [(0,n)]  and everything you might want to prove can be proved.

   BUT.  This is overspecific.  It's a beautiful construction, but it is not necessary to consider integers as equivalence classes of ordered pairs of natural numbers (that's a mouthful!) to be able to do math with integers.

   The point in sharing this construction is that the operation  ++  really is something new in this case.  It's related to  +  but it's new.   If you don't want to do that, you're trying to prove some sort of canonical extension theorem, the general case of which might be something like:

    "Suppose you have a binary operation  %  on a domain  X ,  and further suppose that there is a subdomain  S ⊆ X  such that  %  has such-and-such properties.  Then  %  has a unique extension to all of  X  which preserves all the properties."

The question is, with  X = "integers"  and  S = "naturals"  and  % = + ,  what are the such-and-such properties you need?  (I don't know the answer.)  (This is a question for logicians, by the way, not number theorists.)

That was excellent! Thanks!



Yes, I mean that

¬(d ⊑ x) /\ ¬(d ⊑ y)  =>  d ⊑ x+y

is only true when  d = 2 .   Sorry for the confusion.  The point was that if we recast the theorem in question in terms of divisibility:

2 ⊑ x  ==  2 ⊑ y  ==  2 ⊑ x+y

much of the content of the theorem is true even when replacing  2  by any natural number.  Only a small piece of it depends on the number  2 .
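A finite check of this claim (a sketch; chain_holds is a name I've introduced for the purpose):

```python
# Checking the chain  d ⊑ x  ==  d ⊑ y  ==  d ⊑ x+y  over a finite range:
# it holds for every x, y when d = 2, but fails for other d (e.g. d = 3, x = y = 1).
def chain_holds(d, limit=30):
    return all(
        ((x % d == 0) == (y % d == 0)) == ((x + y) % d == 0)
        for x in range(limit) for y in range(limit)
    )

assert chain_holds(2)
assert not chain_holds(3)
```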

I think I understand what you mean now; thanks for the clarification! That is interesting, by the way.



By the way, what is number theory all about, really? I'd appreciate it if you could tell me.

   Uh, numbers?  ;)  Cutting edge number theory is not really about numbers, and I don't know enough to cogently answer the question.  But elementary number theory is definitely the study of natural numbers, integers, and equations involving these.  The wikipedia page will give you some notion of the diversity of the subject!  When I use the term, though, I'm just talking about simple properties of numbers as we use them in basic calculations.

OK, it's just that I don't really know the difference between number theory and arithmetic. What is more, there are other kinds of numbers, e.g. irrational numbers, complex numbers... and my impression is that neither arithmetic nor what is called "number theory" seem to deal with these other kinds of numbers. What should one study if one wants to know more about them?

Best regards, everyone.

Jeremy Weissmann

May 27, 2020, 10:58:12 PM
to mathmeth
Dear Lucas, 
 
Ax. f: X → Y                  i.e. f is a function from some domain X to some codomain Y
Ax. (A x : x ∈ X : f.x ∈ X)   i.e. f can be applied to its result

   Might be just simpler to say  f : X -> X .

Perhaps that route could turn out to be more profitable. My reasoning was just that at this stage of the game insisting on X = Y would be premature [...]

   The properties you postulated give  f(X) ⊆ X ,  so the only part of  Y  that relates to  f  is the part that is in  X !   More of interest is the difference between  X  and  f(X) .   Also, I consider that  S  is an operator on the natural numbers, so the appropriate generalization to my tastes is still an arbitrary operator.

 
Well, that is a sensible approach to take for defining the predicate even on the integers in general. I will not pursue those investigations right now, but I'm thankful for the fact that you took the time to make those remarks!

   You could also define it on the integers by the same formula  even.x == (En :: x = 2*n) ,  where  n  is restricted to naturals.  You will be able to prove all the same theorems.

Really? but then

   even.(-2)
==    { definition of even }
   (E n :: -2 = 2 * n)
==    { no natural n satisfies -2 = 2 * n }
   false .

   But  -2  isn't in the natural numbers.  I'm saying the same definition works for the naturals if we restrict ourselves to the naturals.


I appreciate the comment.  For me  —and I think I am right to say that I am carrying the torch of Dijkstra in some modest way— ,  what we call  'the calculational style'  is not fundamentally about calculation.  It is about the art of effective reasoning (EWD473, 619, etc), and calculation itself is an extreme synthesis of thought where concepts have been transcended and reasoning can be carried out in terms of symbol manipulation.  Much of the time, calculation is not possible, but we can always strive to think clearly and powerfully.

What you describe generally as the calculational style sounds like it would be more aptly termed "mathematical methodology", no? In any case, I reckon that "the art and science of effective reasoning" was a phrase used by Dijkstra to define mathematics in general, is that not right?

   I think it's even deeper than mathematical methodology!  And yes, you're right, that was Dijkstra's definition of mathematics, but  —and this may just be an issue of semantics— ,  I think mathematics (like calculation) is an extreme example of synthesized thought.  But there's something deeper here.  Just about organizing our perceptions.
 
OK, it's just that I don't really know the difference between number theory and arithmetic. What is more, there are other kinds of numbers, e.g. irrational numbers, complex numbers... and my impression is that neither arithmetic nor what is called "number theory" seem to deal with these other kinds of numbers. What should one study if one wants to know more about them?

   Tee hee.  Number theory involves a lot of complex analysis.  :)

   More on the successor function to come.

+j 

Lucas Brunno Luna

May 28, 2020, 12:07:26 PM
to Calculational Mathematics
Dear Jeremy,


The properties you postulated give  f(X) ⊆ X ,  so the only part of  Y  that relates to  f  is the part that is in  X !   More of interest is the difference between  X  and  f(X) .

That is true; I can see that.

I think it's worth remarking that I have also realized a few other things that could be of interest.

Firstly, I think neither of the generalizations which I had proposed for S, viz. as a function or as a relation, is necessarily better than the other: both can be worth investigating in their own right, depending on which additional structure one wants to explore. One might want to conduct these further explorations assuming uniqueness of the results of applications or not.

Secondly, as a result of our preceding explorations, I have finally grasped something which had me somewhat confused for a long time, and that has made me learn some very general lessons.

The consideration of functional composition led me to have a look at the Wikipedia article about the concept (so I could get the unicode character for the composition sign to put in my formulae, haha), and there I found a section about "composition monoids", wherein I read that functions of a type of the form X → X are called "transformations". For a long time now, I have seen not only functions but also matrices being called transformations, specifically in the context of linear algebra. But are the transformations functions or matrices, after all? I had never quite grasped that. And now I have realized:

The matrices are the functions. (Literally.)

Think about all the algebraic properties of functions. What are they? I can think of (with f, g, x, y, X, and Y of the appropriate types):

Ax. f: X → Y   ==   (x ∈ X  =>  f.x ∈ Y)
Ax. x = y      =>   f.x = f.y
Ax. (f∘g).x    =    f.(g.x)
Ax. f = g      ==   (A x :: f.x = g.x)

I now see that, when one takes matrices (standing for linear transformations) for functions, the matrix multiplication operator both for the application dot and for the composition sign, and the domains/codomains for sets of (column) vectors, then one obtains a model of the theory axiomatized above!

So, matrices can be a possible model for the notion of a function, and this is clear when one regards that a "function" is simply anything that satisfies the above algebraic properties. Structure is the key! But why did it take me so long to realize that? I think I can find a few explanations...

First of all, the interpretation of a function as a procedure has been with me for a long time, and, while in school, I was presented with the notion of regarding functions as sets of ordered pairs. But what I realize now is that both views are nothing more than models for the notion of a function. Even when I think of the most general verbal description of a function that I had known until now, viz. a rule which associates every element of some domain to a unique element of some codomain, I now regard that even that could be considered overspecific. Furthermore, I speculate that the fact that functional application is usually denoted without recourse to an explicit operator such as the infix dot adopted by Dijkstra et al. might have something to do with my slowness to realize the above.

Truly, the essence of a function is subsumed in the algebraic properties of the concept, and when one thinks of those properties, one can clearly recognize its models as such. In particular, Leibniz's principle seems to be the most essential property in this context, because it subsumes the fundamental relationship between the key notions of argument, application, and function. To illustrate this point with a more concrete example, think about how one could associate a pair of integers to a -- unique -- integer. Now, what integer should that be? Is it their sum, their product, their difference, their smallest common multiple, or something else? Thus, one applies a function to the pair of integers to obtain the sought integer, and the function is quite simply that which determines which integer it should be. In the context of linear algebra, one wants to associate column vectors to other column vectors, and functional application to that end is performed by matrix multiplication, and the left matrix is the thing that determines which vector should be the result for any one argument.
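The matrix instantiation can be made concrete. Below is a small Python sketch (2×2 matrices as nested tuples; the names apply and compose are mine) in which both application and composition are matrix multiplication, and the composition axiom (f∘g).x = f.(g.x) holds:

```python
# Matrices as a model of "function": application M.v and composition M∘N
# are both given by matrix multiplication.
def apply(M, v):
    # Function application  M.v  as matrix-vector multiplication.
    return (M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1])

def compose(M, N):
    # Composition  M∘N  as matrix-matrix multiplication.
    return ((M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]),
            (M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]))

A = ((1, 2), (3, 4))   # an arbitrary linear transformation
B = ((0, 1), (1, 0))   # swaps the two coordinates
v = (5, 7)

# The composition axiom  (M∘N).v = M.(N.v) :
assert apply(compose(A, B), v) == apply(A, apply(B, v))
```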

I think those are valuable lessons, and, on this account (but also in others), I'm grateful for having these discussions with you, Jeremy.


You could also define it on the integers by the same formula  even.x == (En :: x = 2*n) ,  where  n  is restricted to naturals.  You will be able to prove all the same theorems.
Really? but then

   even.(-2)
==    { definition of even }
   (E n :: -2 = 2 * n)
==    { no natural n satisfies -2 = 2 * n }
   false .

   But  -2  isn't in the natural numbers.  I'm saying the same definition works for the naturals if we restrict ourselves to the naturals.


OK, what had me confused was just that you said that we could define it on the integers using the same formula, viz. even.x ≡ (E n :: x = 2·n), but you had only said that n was restricted to the natural numbers; I understood x to range over the integers. But no problem, you have clarified it now; and it wasn't a big deal, anyway, haha. Of course, we also know that the formula works for the integers if we let both x and n range over the integers, which is equivalent to saying that 2 divides the integer x.



I think it's even deeper than mathematical methodology!  And yes, you're right, that was Dijkstra's definition of mathematics, but  —and this may just be an issue of semantics— ,  I think mathematics (like calculation) is an extreme example of synthesized thought.  But there's something deeper here.  Just about organizing our perceptions.

Hmm...


More of interest is the difference between  X  and  f(X) .   Also, I consider that  S  is an operator on the natural numbers, so the appropriate generalization to my tastes is still an arbitrary operator.

[...]


More on the successor function to come.

All right, looking forward to your next email.

Jeremy Weissmann

May 28, 2020, 1:31:50 PM
to mathmeth
Dear Lucas,

   Happy to see you're having some insights!  Some comments:

 
But are the transformations functions or matrices, after all?

   First, surely you appreciate that it's possible for a word to be used in multiple contexts and have a different meaning.  Like  'graph'  as the collection of points  (x, f.x)  for some function  f ,  but also  'graph'  for a network of vertices and edges.   'Natural'  as in  'natural numbers'  but also as in  'natural extension' .   Etc.  So it could simply have been that this was a coincidence.

   Of course, it's not, as you realized.  Function application interpreted as matrix multiplication, and functions as matrices.  That indeed is one instantiation for functions.  Well, linear functions.
 
Even when I think of the most general verbal description of a function that I had known until now, viz. a rule which associates every element of some domain to a unique element of some codomain, I now regard that even that could be considered overspecific.

   Sure.  I think Dijkstra put it best that a function is something that satisfies Leibniz's rule:  x = y  =>  f.x = f.y .   (If types are involved then we also need to say something about types, like  if  f  is a function from  A  to  B  and  a ∈ A ,  then  f.a ∈ B .   And we would restrict Leibniz's rule to  x,y ∈ A .)
 
OK, what had me confused was just that you said that we could define it on the integers using the same formula

   If I said that, it was a typo.  I meant,  "on the naturals" .   Oops!

 +j

Lucas Brunno Luna

May 28, 2020, 3:08:51 PM
to Calculational Mathematics
Dear Jeremy,

I really appreciate your comments, I reckon that you made good reminders.



OK, what had me confused was just that you said that we could define it on the integers using the same formula

   If I said that, it was a typo.  I meant,  "on the naturals" .   Oops!

No problem!

Jeremy Weissmann

May 28, 2020, 3:20:06 PM
to mathmeth
Dear Lucas,

   When it comes to quantification, we have the split-off rule:

(*)
     (Qx : x ∈ R : f.x)
=     { assuming  c ∈ R  }
     (Qx : x ∈ R\{c} : f.x)  Q  f.c

which is nice, but there is nothing special about splitting off one element (though admittedly it comes up often).  We want to know the following range split rule just as well:

(**)
     (Qx : x ∈ R : f.x)
=     {  R  is the disjoint union of  S  and  T  }
     (Qx : x ∈ S : f.x)  Q  (Qx : x ∈ T : f.x)

where quantification over an empty range is assumed equal to the identity element of  Q  (which we may arbitrarily define for convenience if none exists).
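For a concrete quantifier, say Q = Σ (summation), rule (**) can be checked directly; the sets S and T below are arbitrary choices for illustration:

```python
# Range split for Q = summation: the sum over R equals the sum over S
# plus the sum over T, when R is the disjoint union of S and T.
f = lambda x: x * x
R = set(range(10))
S = {x for x in R if x % 3 == 0}
T = R - S                      # so R is the disjoint union of S and T

assert sum(f(x) for x in R) == sum(f(x) for x in S) + sum(f(x) for x in T)

# Quantification over the empty range yields the identity element of + :
assert sum(f(x) for x in set()) == 0
```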

   Similarly, I have been racking my brain trying to figure out how to write something about your successor function.  Its role in the logical study of the natural numbers and systems derived therefrom is clear:  we want to axiomatize the natural numbers using as few symbols and as simple properties as possible.  However, from a calculational perspective, even from the perspective of a budding mathematician practicing calculation, I just don't get it.

   But I must be wrong; the proof being that Wolfram has come up with lots of lovely little exercises using the successor function.  So there!

   Nevertheless, if I look past the ways in which I am wrong, I feel I am right.  ;)

   When I wish to understand numbers additively, what seems clear is that the (positive) natural numbers are the smallest set containing something called  1  and which are closed under some associative operation  + .   This  "smallest set etc"  or   "the set generated by such-and-such"  is an extremely common and powerful construction in mathematics.  In general, we look at sets containing certain elements which are closed under certain operations; these are typically called inductive sets.  The intersection of all inductive sets is what we mean by the smallest inductive set, or the set generated by the certain elements and operations.

   I prefer this understanding because I don't see the merit in breaking symmetries to define a concept, only to restore them later.  Viz your definition of  +  which is defined by induction in its right argument, only to prove later that you could have defined it by induction on its left argument.  I like a good challenge as much as the next person, but here I really have to throw up my hands and ask,  "Why?" .   My definition also has the added bonus (not at all necessary) that it conforms to the way we treat naturals under addition in practice:  as collections (sums) of a single unit (1) that can be freely rearranged at will.

   Similar to defining the positives (shall we say) in this way, we could define the natural numbers to be the set generated by  1 ,  + ,  and  0  — the identity element of  + .   The integers could be defined as being the set generated by  1 ,  + ,  0 ,  and a unary operator  - .

   In my definition of the positives above, by construction we have the induction scheme that a property which holds for  1  and which is closed under  +  holds for all positives.  So for example we can prove immediately that every positive is a sum of  1s .   From which symmetry of  +  follows immediately.  We can also now prove that all positives are either  1  or  p+1  for positive  p ,  from which traditional mathematical induction follows.

   Of course, each of these approaches will require some additional assumptions of the symbols in question.  For example, we probably want to assume  x + 1 = y + 1  =>  x = y ,  from which arbitrary cancellation holds.  And we probably want  -  to be injective and to satisfy  1 + (-1) = (-1) + 1 = 0 ,  from which the general property  x + (-x) = (-x) + x = 0  holds, as well as  -(x + y) = (-x) + (-y) ,  as well as the fact that  -  is an involution, etc.  (Also group theory could help us here.)  And all of these approaches may yield modular arithmetic unless we rule out loops like  0 = 1 + 1 + 1 ,  etc.
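The "smallest set generated by such-and-such" can be illustrated by brute force: inside a finite universe, iterate closure under + starting from {1} until a fixed point is reached (a sketch; the bound 20 is an arbitrary choice of mine):

```python
# The set generated by 1 under +, computed by iterating closure
# inside a finite universe (naturals below 20) until a fixed point.
universe = set(range(20))
gen = {1}
while True:
    step = gen | {x + y for x in gen for y in gen if x + y in universe}
    if step == gen:            # fixed point: gen is closed under +
        break
    gen = step

# Inside the universe, the generated set is exactly the positives,
# matching the claim that every positive is a sum of 1s.
assert gen == {n for n in universe if n >= 1}
```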

   In any case, my point is that I prefer definitions like  (**) ,  not  (*) .

   As for multiplication, if I were exploring it in isolation of course I would be looking at the set generated by the set  P  of primes under an associative, symmetric operator  · .   But this is not a good definition for combining  +  and  · .

   Where do I start with multiplication?  It goes without saying:  the law of distributivity.  What is so special about your definition:

x · (y + 1)  =  x·y + x·1

as opposed to:

x · (y + z)  =  x·y + x·z          ?

The only merit of your axiom is that it is tailor-made to your other axioms!  It is not substantively more minimal, from my perspective.  As for me, I want a fundamental property that looks like  (**) ,  not like  (*) .   (Whenever I am trying to axiomatize an associative, symmetric operator.)

* * *

   So, to step back for a moment from my rant (!), I really put the question to you about this successor business:  what exactly are you trying to accomplish with such an axiomatization?  I don't ask that rhetorically.  I really want to understand what is the goal of your exercise.

+j

On Sat, May 23, 2020 at 4:57 PM Lucas Brunno Luna <lucasl...@hotmail.com> wrote:
Dear colleagues,

Greetings.

Quite some time ago, I tried to develop the theory of arithmetic from the ground up à la Peano. I.e., starting with the set of axioms below, I proceeded to prove the familiar properties of the arithmetic operators on natural numbers, i.e. commutativity, associativity, distributivity, etc.

(Note: The variables x and y below range over the natural numbers, and S denotes the so-called "successor function", i.e. a function that associates each natural number to its successor, which is also a natural number.)

Ax. S.x = S.y  ->  x = y
       
(i.e. S is injective)

Ax. P.0 /\ (A x :: P.x -> P.(S.x))  ->  (A x :: P.x)
       
(a.k.a. the induction axiom)

Ax. x + 0    =  x
Ax. x + S.y  =  S.(x + y)

Ax. x * 0    =  0
Ax. x * S.y  =  x + x * y

Ax. 1 = S.0
       
(Definition of 1, just for convenience.)

For the record, this exercise had three purposes, viz.

0. To make myself more familiar with mathematical theory building;
1. To try to get my understanding about numbers on more rigorous footing;
2. To finally understand how mathematical induction works, something that had always puzzled me. This desire was especially triggered by attempting to understand some of the proofs in Dijkstra's "A Discipline of Programming". (The semantic definition of the repetitive construct had already left me completely baffled.) I regard that this strategy did work out very well, in the sense that I am no longer afraid of mathematical induction, have actually written many proofs appealing to it, and am able to follow the proofs in Dijkstra's book.

So far, so good. That was already a long time ago. Later on I tried to extend my domain of natural numbers to include also the negative numbers, but with seemingly little success. I had seen EWD923a, and was especially struck by the paragraph where he says "For quite some time now, roots of equations seems to me the paradigm par excellence for definitions." So I tried to follow such an approach and postulated that, for all natural x, the equation

y: x + y = 0

has at least one solution.

But is the above postulate sufficient to prove that the familiar properties of addition, multiplication, etc. extend to all integers? In the same vein, how can one perform further extensions to the rational numbers, the reals, etc.?

I didn't seem to be able to succeed even in trying to prove that the above equation has a unique solution corresponding to each natural x. Every time I try to do so I become unsure of whether I have sufficient ground to assume that the expressions I'm calculating with are even defined.

Also, as a "bonus" issue, with relation <= defined on natural numbers as:

Ax. x <= y  ==  (E r :: x + r = y)

How can one prove that it is antisymmetric and transitive? I think that I already know how to prove that it is reflexive, viz. by observing for any x:

    x <= x
==    { Definition of <= }
    (E r :: x + r = x)
<-    { (Read below) }
    x + 0 = x
==    { Identity of + }
    x = x
==    { = reflexive }
    true

The second manipulation above seems to appeal to a rule roughly described by "using a witness" for an existentially quantified variable, but I'm not very familiar with that. It seems to be the "dual" (forgive my ignorance, I don't understand much about that either) analogue of the rule of instantiation for universal quantification; the two rules seem to be generalizations of the rules of absorption involving disjunction/conjunction and implication.
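The definition of <= by an existential, and the three properties in question, can be checked over a finite range of naturals (a Python sketch; leq is a name I've introduced, and the witness range is bounded so the check stays finite):

```python
# x <= y  ==  (E r :: x + r = y),  checked on naturals below 20.
N = range(20)

def leq(x, y):
    # the existential: some natural r (here bounded) with x + r = y
    return any(x + r == y for r in N)

assert all(leq(x, x) for x in N)                                   # reflexive
assert all(not (leq(x, y) and leq(y, x)) or x == y
           for x in N for y in N)                                  # antisymmetric
assert all(not (leq(x, y) and leq(y, z)) or leq(x, z)
           for x in N for y in N for z in N)                       # transitive
```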

I would appreciate any suggestions, comments, or general insights on these problems.

Thank you all in advance!
All the best.

(PS.: Do we need to only use ASCII characters to write down formulae in these messages?)

--
You received this message because you are subscribed the mathmeth.com mailing list.
To unsubscribe from this group, send email to Calculational-Math...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/Calculational-Mathematics?hl=en
---
You received this message because you are subscribed to the Google Groups "Calculational Mathematics" group.
To unsubscribe from this group and stop receiving emails from it, send an email to calculational-math...@googlegroups.com.

Wolfram Kahl

May 28, 2020, 3:43:22 PM
to calculationa...@googlegroups.com
On Thu, May 28, 2020 at 03:19:54PM -0400, Jeremy Weissmann wrote:
> Where do I start with multiplication? It goes without saying: the law
> of distributivity. What is so special about your definition:
>
> x · (y + 1) = x·y + x·1
>
>
> as opposed to:
>
> x · (y + z) = x·y + x·z ?
>
>
> The only merit of your axiom is that it is tailor-made to your other
> axioms! It is not substantively more minimal, from my perspective.

If you use the distributivity law as definition of multiplication,
you have the proof obligation that this defines a function,
whereas this is immediate from the shape of the (primitive) recursive definition

> > Ax. x * 0 = 0
> > Ax. x * S.y = x + x * y

.
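That shape can be transcribed directly, e.g. as a Python sketch (termination is evident because y decreases at each recursive call):

```python
# The primitive recursive definition of multiplication, transcribed directly;
# its shape guarantees a total function, which we check against built-in * .
def mul(x, y):
    # Ax. x * 0   = 0
    # Ax. x * S.y = x + x * y
    if y == 0:
        return 0
    return x + mul(x, y - 1)

assert all(mul(x, y) == x * y for x in range(15) for y in range(15))
```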


Wolfram

Jeremy Weissmann

May 28, 2020, 4:20:36 PM
to calculationa...@googlegroups.com
I don’t think I’m even suggesting to use distributivity as a definition. I’m arguing that as a user of calculational systems, range splitting like (**) or distributivity is the fundamental property.

However, since distributivity fairly trivially proves both those axioms, is this even a problem?

Also, I have no obligation to prove that multiplication is a function. We know it is. If we introduce properties under which multiplication isn’t a function (which is different from being able to prove it is) then our properties do not capture multiplication.

If what we want to do is study integer multiplication as an axiomatic system, then I’m more inclined to agree with you. Our concerns overlap but are not identical.

> On May 28, 2020, at 15:43, Wolfram Kahl <ka...@cas.mcmaster.ca> wrote:

Lucas Brunno Luna

May 29, 2020, 4:39:25 PM
to Calculational Mathematics
Dear all,

Once again, greetings.

Wolfram, I appreciate your wish to bring something to the discussion. Feel free to do so whenever you like. I hope you're somehow getting something out of these conversations.

Jeremy, in connection with:


Similarly, I have been racking my brain trying to figure out how to write something about your successor function.  Its role in the logical study of the natural numbers and systems derived therefrom is clear:  we want to axiomatize the natural numbers using as few symbols and as simple properties as possible.  However, from a calculational perspective, even from the perspective of a budding mathematician practicing calculation, I just don't get it.

Perhaps it will be worthwhile to remind ourselves that this whole attention devoted to the successor function has been triggered by the following suggestion of yours, viz.:


In line with my comments to you yesterday, I would say, rather than thinking about  S  as being part of an axiomatization of the naturals, simply consider it to be part of an algebraic system, and see what you can prove about it!  Do you see the difference?  In the axiomatization approach, you are trying to make sure you assume enough about  S  to prove that it can only mean what you mean it to mean, namely, the successor function.  In the algebraic approach, you are studying the consequences of assumptions about  S ,  regardless of what instantiations you can think of for it.

To which I replied:

Well, I can see that a function like S could lend itself to some kind of generalization, all right. The most essential, distinguishing aspect about such a function seems to be the fact that it can be applied to its result. [...]

And I proceeded to investigate the consequences of the above. As you remember, I've learned a few lessons in the course of these explorations; but I thought that you had literally suggested that I conduct them because you had something specific in mind that you intended for me to learn that way. It sounds like this was but a misunderstanding on my part, in which case maybe I should apologize: You say that you've been racking your brain, so I hope I didn't give you too much of a setback.

Moving on, you will probably like to know that I have been seriously pondering about your text. At first, I was going to put forward a few questionings, but, reflecting more and more upon my own questions, I think that you have given me some new insights; what I propose to do now is to present explicitly what my considerations have been, and feel free to let me know what you think.

First of all, I have to admit that it took me a while to appreciate your point about multiplication and distribution. I did not realize this before, but with your help I could now see that, in a vein similar to how we have abstracted from our specific successor function to transformations in general, there is an essential, general concept for which the multiplication of numbers provides but an instantiation (as you would put it), and that is distribution. I suppose that is also the idea behind using "multiplication" to embed all those different units into formulae in physics (I had never quite grasped that before), despite the name by which the operation is called.

Honestly, this is quite curious to think about. Perhaps this is similar to the "crazy" abstractions behind the axioms of the relational calculus.

And secondly, you mentioned that, when Peano provided his axiomatization of arithmetic, people already knew that addition was associative and commutative. But, even to this day, the same cannot be said about the associativity of equivalence, or the fact that disjunction distributes over it. If people were to be told that these are general laws of the logic of propositions, they'd probably want some sort of proof. What is more, there are even forms of logic, such as the "continuous logic" discussed in EWD931, wherein these laws do not hold. Yet, Dijkstra has both of these laws as axioms in his development of the predicate calculus. How come?
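(As an aside, both laws are propositional and so can be verified by truth table; here is a quick check in Python, with == playing the role of equivalence and or the role of disjunction:)

```python
# Truth-table checks, assuming classical two-valued logic, of the two laws
# mentioned: associativity of equivalence, and distribution of \/ over it.
from itertools import product

bools = (False, True)

# ((p == q) == r)  ==  (p == (q == r))
assert all(((p == q) == r) == (p == (q == r))
           for p, q, r in product(bools, repeat=3))

# (p \/ (q == r))  ==  ((p \/ q) == (p \/ r))
assert all((p or (q == r)) == ((p or q) == (p or r))
           for p, q, r in product(bools, repeat=3))
```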

I reckon that one of the big justifications for mathematics in the first place is to have more reasonable grounds on which we can base our confidence in certain propositions, so that, in a sense, we don't have to draw conclusions based purely on things such as "poor man's induction". On the other hand, my understanding of the motivation for postulating axioms is that they serve as an interface between the concerns of one who is exploring their consequences and/or is trying to erect a calculus, versus the concerns of one who is trying to apply the theory to a specific situation.

In view of what we have discussed so far in these emails, I now have a better appreciation for why Dijkstra developed the predicate calculus the way he did. He wasn't merely providing a number of axioms from which one could derive the laws of sentential logic, he was synthesizing a structure by subsuming it in an algebraic system, for which the ordinary logic of sentences would provide only an instantiation. By explicitly postulating axioms such as the associativity of equivalence and the distribution of disjunction over it, he deliberately ruled out the "continuous logic" mentioned above as a possible instantiation. His remark that the choice between axioms and theorems is somewhat arbitrary means that he could have postulated a number of the propositions marked as theorems and proved other propositions marked as axioms, so that in the end he would still end up with the same structure that he wanted.

Thus, if one is concerned about whether the calculus is applicable to replace whatever framework of sentential logic one is using, it suffices to show that this logic somehow satisfies the axioms of the calculus. But here I add that sentential logic is not the only instantiation for the calculus: another one would be provided by an interpretation of predicates as figures in the Euclidean plane; negation of a figure could then correspond to the figure subtended by the remainder of the plane that is not covered by the original figure, disjunction of two figures could be the figure subtended by that part of the plane that is covered by either of them, etc.

That was it.



   When I wish to understand numbers additively, what seems clear is that the (positive) natural numbers are the smallest set containing something called  1  and which are closed under some associative operation  + .   This  "smallest set etc"  or   "the set generated by such-and-such"  is an extremely common and powerful construction in mathematics.  In general, we look at sets containing certain elements which are closed under certain operations; these are typically called inductive sets.  The intersection of all inductive sets is what we mean by the smallest inductive set, or the set generated by the certain elements and operations.

   I prefer this understanding because I don't see the merit in breaking symmetries to define a concept, only to restore them later.  Viz your definition of  +  which is defined by induction in its right argument, only to prove later that you could have defined it by induction on its left argument.  I like a good challenge as much as the next person, but here I really have to throw up my hands and ask,  "Why?" .   My definition also has the added bonus (not at all necessary) that it conforms to the way we treat naturals under addition in practice:  as collections (sums) of a single unit (1) that can be freely rearranged at will.

   Similar to defining the positives (shall we say) in this way, we could define the natural numbers to be the set generated by  1 ,  + ,  and  0  — the identity element of  + .   The integers could be defined as being the set generated by  1 ,  + ,  0 ,  and a unary operator  - .

   In my definition of the positives above, by construction we have the induction scheme that a property which holds for  1  and which is closed under  +  holds for all positives.  So for example we can prove immediately that every positive is a sum of  1s .   From which symmetry of  +  follows immediately.  We can also now prove that all positives are either  1  or  p+1  for positive  p ,  from which traditional mathematical induction follows.

You certainly make an interesting case there. Unfortunately, I'm not yet very familiar with concepts such as generating sets, inductive sets, and a property being closed under an operation.



So, to step back for a moment from my rant (!), I really put the question to you about this successor business:  what exactly are you trying to accomplish with such an axiomatization?  I don't ask that rhetorically.  I really want to understand what is the goal of your exercise.

Well... I quote from the very first email in this thread:


For the record, this exercise had three purposes, viz.

0. To make myself more familiar with mathematical theory building;
1. To try to get my understanding about numbers on more rigorous footing;
2. To finally understand how mathematical induction works, something that had always puzzled me. This desire was especially triggered by attempting to understand some of the proofs in Dijkstra's "A Discipline of Programming". (The semantic definition of the repetitive construct had already left me completely baffled.) I regard that this strategy did work out very well, in the sense that I am no longer afraid of mathematical induction, have actually written many proofs appealing to it, and am able to follow the proofs in Dijkstra's book.

As said in that quotation, I have already achieved the objective described in (2). And I have gained many insights in connection with motivation (0), especially in the course of our discussion.

We are left with (1). Really, what I wanted was to have some framework with which I could better reason about numbers and prove theorems about them, such as the stuff one is apparently supposed to know if one wants to perform well in the Mathematical Olympiads.

As always, Wolfram, Jeremy, I thank you very much for these very productive discussions.

Jeremy Weissmann

unread,
May 29, 2020, 7:41:33 PM5/29/20
to mathmeth
Dear Lucas,

Perhaps it will be worthwhile to remind ourselves that this whole attention devoted to the successor function has been triggered by the following suggestion of yours, viz.:

   Yes, we got a little in the weeds with that discussion.  You definitely clarified things at the end of your last email where you brought us back to the very beginning and reminded me what your goals are.  As far as theory-building,  S  is... okay.  Again I think there are more substantive approaches.  As for mathematical induction,  S  undoubtedly provides some nice exercises, but I think you need to get an  "Introduction to Proof"-style  book  (there are many available free online)  and embark on many exercises.

   While you're at it, it's worth getting comfortable with induction in the general case, looking at sets generated by elements and operations.  Mathematical induction is only a special case of this.

   I wasn't trying to get you to generalize  S  per se.  I was just suggesting that you take the approach of writing down properties and seeing what you can derive from them (or picking a goal theorem to prove, and designing what you need to assume to prove it), rather than worrying about whether or not  S  axiomatizes what you think it does.  I was just exhorting you to treat  S  algebraically.  As a consequence, you'll be proving something more general.  Sorry for the confusion, but it sounds like you got a lot out of it.  :)
 
Moving on, you will probably like to know that I have been seriously pondering your text. At first, I was going to put forward a few questions, but, reflecting more and more upon my own questions, I think that you have given me some new insights; what I propose to do now is to present explicitly what my considerations have been, and feel free to let me know what you think.

   I look forward to whatever you have to say!
 
And secondly, you mentioned that, when Peano provided his axiomatization of arithmetic, people already knew that addition was associative and commutative. But, even to this day, we could not say the same about the associativity of equivalence, or the fact that disjunction distributes over it. If people were to be told that these are general laws of the logic of propositions, they'd probably want some sort of proof. What is more, there are even forms of logic, such as the "continuous logic" discussed in EWD931, wherein these laws do not hold. Yet, Dijkstra has both of these laws as axioms in his development of the predicate calculus. How come?

   Well here we start to see things a bit differently, and it feels like you might not be separating your concerns enough.  (Or, I'm just misunderstanding!)

   There is an operation which we call  'addition' ,  and it is associative.  You can imagine other addition-like operations which are not associative, and they might be incredibly interesting to explore!  But those are something different.

   Now, doing calculational mathematics means I want to be able to let symbols do the work for me whenever possible.  Since I use addition a lot, I need a symbol for it, and I need to have properties of that symbol at my disposal.  Associativity is one of these properties.  If I'm going to work on some number theory, say, exploring some properties of divisibility and modular arithmetic, I need a symbol  + ,  and I need it to be associative, because that's the mathematical concept I want to use.

   If you're not using addition, but you're just wondering,  "If I didn't assume associativity of  + ,  how might I prove it?" ,  then this is indeed something you could explore.  So you could try to derive it from some other properties.  But if you can't prove associativity from those other properties, all that means is you've failed to find an interesting equivalent formulation of associativity.  It doesn't mean that addition is not associative.

   The equivalence we use in traditional mathematics is associative.  That fact can be established in a minute or two using a truth table.  And associativity of equivalence is not only true, it is useful!  That's why Dijkstra postulates this associativity of  == .   (When calculational mathematicians talk about  'postulating a property' ,  we really just mean we're making it available in any context.  We don't need a notion of  'truth' .   There are just contexts and calculations in contexts.)
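The truth-table check mentioned above can be carried out mechanically. A quick sketch (mine, not from the thread), modelling boolean equivalence by Python's `==` and checking all eight assignments:

```python
# Exhaustive truth-table check that equivalence is associative,
# and that disjunction distributes over equivalence.
from itertools import product

for p, q, r in product([False, True], repeat=3):
    # (p == q) == r   ==   p == (q == r)     (associativity of ==)
    assert ((p == q) == r) == (p == (q == r))
    # p \/ (q == r)   ==   (p \/ q) == (p \/ r)     (\/ over ==)
    assert (p or (q == r)) == ((p or q) == (p or r))
```

Eight cases suffice precisely because the operators are truth-functional, which is also why the check takes "a minute or two" by hand.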

   You can explore logics where equivalence doesn't satisfy associativity, and that might be interesting!  But Dijkstra isn't including associativity to  "rule out different interpretations" ;  he's including it because he wants to use it.
 
I reckon that one of the big justifications for mathematics in the first place is to have more reasonable grounds on which we can base our confidence in certain propositions, so that, in a sense, we don't have to draw conclusions by basing ourselves purely on things such as "poor man's induction".

   That is a perfectly valid perspective that I don't happen to share.  :)  Or at least, I feel ambivalent about it.  I would say it can help us to be more clear in our thinking and our communication.  But I like to stay away from notions like  'truth'  (you didn't say the word but it felt implied) .
  
On the other hand, my understanding of the motivation for postulating axioms is that they serve as an interface between the concerns of one who is exploring their consequences and/or is trying to erect a calculus, versus the concerns of one who is trying to apply the theory to a specific situation.

   No objections here.
 
You certainly make an interesting case there. Unfortunately, I'm not yet very familiar with concepts such as generating sets, inductive sets, and a property being closed under an operation.

   You should get familiar!  Let  S  be the smallest set (i.e. the intersection of all such sets) which contains  1  and which is closed under an associative  + .   Then if we prove  P.1  and  P.x /\ P.y => P.(x+y)  for all  x,y ∈ S ,  then  P  holds for all  x ∈ S .   (This follows from the definition of  S .)  So for example, all elements of  S  are sums of  1s ,  because  1  is a sum of  1s  (definitionally, perhaps) ,  and if  x  and  y  are sums of  1s ,  then so is  x+y .   And if we assume  x+1 = y+1  =>  x = y ,  then we can prove general cancellation, since if  z  and  w  are cancellable, then so is  z+w :

      x + (z+w)  =  y + (z+w)
   ==    { + is associative }
      (x+z) + w  =  (y+z) + w
   =>    { w is cancellable }
      x+z  =  y+z
   =>    { z is cancellable }
      x  =  y  .
 
   But this construction is very general.
 
0. To make myself more familiar with mathematical theory building;

   Okay.
 
1. To try to get my understanding about numbers on more rigorous footing;

   I feel so-so on this point.  I don't think you should feel that an operation distributing over  y  and  1  is more  "rigorous"  than it distributing over  y  and  z .

2. To finally understand how mathematical induction works, something that had always puzzled me.

   Good!  But please do more exercises in induction.
  
We are left with (1). Really, what I wanted was to have some framework with which I could better reason about numbers and prove theorems about them, such as the stuff one is apparently supposed to know if one wants to perform well in the Mathematical Olympiads.

   Oh, the successor function will have nothing to do with this.  I suggest you find a book on elementary number theory and try working through it, perhaps even trying to recast some of the traditional arguments calculationally!  It will be a great exercise (and I'd be happy to join you in your efforts).

+j
 

Lucas Brunno Luna

unread,
May 30, 2020, 3:33:10 PM5/30/20
to Calculational Mathematics
Dear all,

I salute you one last time for the time being. These discussions have been very worthwhile for me, and even funny at times, I might say. It is my sincere hope that everyone could get something out of them, but I believe that the time has come for me to conduct my own further studies in isolation, as well as dedicate myself more fully to other matters.

Jeremy, you have helped me considerably, and you know what I mean. I thank you very much for your comments and suggestions for further study. For now, I just wish to leave some last thoughts.

In the first place, regarding the techniques you mentioned in connection with constructing the natural numbers, I found your presentation of them curious and I think that they are valid in their own right. And perhaps they can indeed be applied in this context to good advantage, though sometimes you also reminded me of a passage from the preface of A Discipline of Programming wherein Dijkstra mentions people wanting to define repetitive constructs in terms of recursive procedures. I believe that recursion is a very valid concept in its own right; I believe Dijkstra did too. I also found it curious to read his remark in connection with the approach adopted by these people that he didn't like to crack an egg with a sledgehammer, no matter how effective the sledgehammer is for doing so.

Secondly, I appreciate your insights about the development of the predicate calculus. In any case, I also appreciate the fact that Dijkstra's axioms rule out things like the continuous logic discussed in EWD931, since, as shown in that document, the equivalence is basically useless within that framework, because not only is it not associative, it is not even transitive, making that framework hostile to one of the cornerstones for the deliverance of extremely elegant proofs in the predicate calculus. As an aside, the law of the excluded middle doesn't hold there, either.

And finally, well, if you find that keeping your distance from concepts such as "truth" serves you better, who am I to contest that? Nevertheless, you reminded me of an incident from yesteryear that happened in a city a couple hundred kilometers from where I live. A series of miscalculations led to the collapse of a dam classified as "low risk", leading to the release of enormous amounts of mud that suddenly buried over 200 people to their deaths. I think that, at that moment, truth ended up staring all of these poor victims in the face, and perhaps some of them had even for a long time preferred to keep their distance from it.

OK, Jeremy, Wolfram; I reckon it is high time I hit the road. I thank all users once again, and wish good luck to all of you.
Cheers!

Jeremy Weissmann

unread,
May 30, 2020, 4:39:59 PM5/30/20
to calculationa...@googlegroups.com
Thanks for the thanks! This conversation has really got the wheels turning. 

I very much look forward to more of your mathematical efforts!

On May 30, 2020, at 15:33, Lucas Brunno Luna <lucasl...@hotmail.com> wrote:


--
You received this message because you are subscribed the mathmeth.com mailing list.
To unsubscribe from this group, send email to Calculational-Math...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/Calculational-Mathematics?hl=en
---
You received this message because you are subscribed to the Google Groups "Calculational Mathematics" group.
To unsubscribe from this group and stop receiving emails from it, send an email to calculational-math...@googlegroups.com.

Jeremy Weissmann

unread,
May 30, 2020, 5:02:10 PM5/30/20
to mathmeth
One last thing,

 
And finally, well, if you find that keeping your distance from concepts such as "truth" serves you better, who am I to contest that? Nevertheless, you reminded me of an incident from yesteryear that happened in a city a couple hundred kilometers from where I live. A series of miscalculations led to the collapse of a dam classified as "low risk", leading to the release of enormous amounts of mud that suddenly buried over 200 people to their deaths. I think that, at that moment, truth ended up staring all of these poor victims in the face, and perhaps some of them had even for a long time preferred to keep their distance from it.


If I in any way communicated that incidents such as these are meaningless to me, I apologize.  I am, in fact, twisted into knots over the state of the world and the plight of many within it.

However, your example exemplifies to me why I am extremely wary of using the concept of  'truth'  outside of mathematical contexts  —even though our perception of  'truth'  in the real world surely motivated the mathematical concept!— .   Establishing something as true is like multiplying by zero: context is erased and we are left only with an inscrutable truth value.  Truth limits thought and care.  It ends conversation and questioning.  As such it is my feeling that truth is a concept used by those with power and money to influence and control those without it. 

I don't want to get super political, but it is the same sort of wariness I have in acknowledging that the death penalty could in principle be a suitable punishment for certain crimes, but I could never support it in practice.  We are associating a punishment of infinite cost with a verdict drawn from finite reasoning.  The case you point out is also like a death penalty of sorts.

What I said earlier is that I see math as a tool to help us achieve greater clarity and care in thought and communication.  If that infinite process of refinement is our goal, then I believe we will have fewer of these horrible outcomes.  But if we have access to  'truth' ,  well... funny how everyone seems to think they know what the truth is.

In any case, that truly sucks.

Be well,

+j


Lucas Brunno Luna

unread,
May 30, 2020, 5:35:57 PM5/30/20
to Calculational Mathematics
You remind me of a man.

Jeremy Weissmann

unread,
May 30, 2020, 5:36:28 PM5/30/20
to calculationa...@googlegroups.com
I am one...?

On May 30, 2020, at 17:35, Lucas Brunno Luna <lucasl...@hotmail.com> wrote:


You remind me of a man.


Lucas Brunno Luna

unread,
May 30, 2020, 5:38:29 PM5/30/20
to Calculational Mathematics
Another man.

Lucas Brunno Luna

unread,
May 30, 2020, 5:40:30 PM5/30/20
to Calculational Mathematics
The man with the power.

Jeremy Weissmann

unread,
May 30, 2020, 5:41:57 PM5/30/20
to calculationa...@googlegroups.com
Oh no. That’s a horrible comparison. :(

On May 30, 2020, at 17:40, Lucas Brunno Luna <lucasl...@hotmail.com> wrote:


The man with the power.


Jeremy Weissmann

unread,
May 30, 2020, 5:46:00 PM5/30/20
to calculationa...@googlegroups.com

I confess I don’t see any similarities, but I can’t begrudge you for what you see. 
On May 30, 2020, at 17:41, Jeremy Weissmann <plusj...@gmail.com> wrote:



Lucas Brunno Luna

unread,
May 30, 2020, 5:49:01 PM5/30/20
to Calculational Mathematics
Perhaps this could be a subject...

for the future.

That is all, folks! ;D

Jeremy Weissmann

unread,
May 30, 2020, 5:52:12 PM5/30/20
to calculationa...@googlegroups.com
Well okay but I will say, making a comparison between someone and a disgusting demagogue and then saying “that’s all folks” is not a great look.

+j

On May 30, 2020, at 17:49, Lucas Brunno Luna <lucasl...@hotmail.com> wrote:


Perhaps this could be a subject...

for the future.

That is all, folks! ;D


Jeremy Weissmann

unread,
May 30, 2020, 6:00:02 PM5/30/20
to calculationa...@googlegroups.com
If you would like to follow up personally I’m happy to discuss. 

On May 30, 2020, at 17:52, Jeremy Weissmann <plusj...@gmail.com> wrote:

