Ax. S.x = S.y -> x = y
(i.e. S is injective)
Ax. P.0 /\ (A x :: P.x -> P.(S.x)) -> (A x :: P.x)
(a.k.a. the induction axiom)
Ax. x + 0 = x
Ax. x + S.y = S.(x + y)
Ax. x * 0 = 0
Ax. x * S.y = x + x * y
Ax. 1 = S.0
(Definition of 1, just for convenience.)
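For what it's worth, these axioms can be transcribed directly into executable form. Here is a small Python sketch (the names S, add, mul, nat, val are mine, not part of the axiomatization) that defines + and * by recursion on the axioms and spot-checks a few instances:

```python
# A direct transcription of the axioms above, representing S.x as ('S', x).
ZERO = 0

def S(x):            # successor
    return ('S', x)

def add(x, y):       # Ax. x + 0 = x ;  Ax. x + S.y = S.(x + y)
    if y == ZERO:
        return x
    return S(add(x, y[1]))

def mul(x, y):       # Ax. x * 0 = 0 ;  Ax. x * S.y = x + x * y
    if y == ZERO:
        return ZERO
    return add(x, mul(x, y[1]))

def nat(n):          # build the numeral S^n.0 from a Python int
    return ZERO if n == 0 else S(nat(n - 1))

def val(x):          # decode back to a Python int, for checking
    return 0 if x == ZERO else 1 + val(x[1])

assert val(add(nat(2), nat(3))) == 5
assert val(mul(nat(3), nat(4))) == 12
assert nat(1) == S(ZERO)      # Ax. 1 = S.0
```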
So I tried to follow such an approach and postulated that, for all natural x, the equation
y: x + y = 0
has at least one solution.
But is the above postulate sufficient to prove that the familiar properties of addition, multiplication, etc. extend to all integers? In the same vein, how can one perform further extensions to the rational numbers, the reals, etc.?
I didn't seem able to succeed even in proving that the above equation has a unique solution corresponding to each natural x. Every time I try to do so, I become unsure of whether I have sufficient grounds to assume that the expressions I'm calculating with are even defined.
Also, as a "bonus" issue, with the relation <= defined on the natural numbers as:
Ax. x <= y == (E r :: x + r = y)
How can one prove that it is antisymmetric and transitive? I think that I already know how to prove that it is reflexive, viz. by observing for any x:
x <= x
== { Definition of <= }
(E r :: x + r = x)
<- { (Read below) }
x + 0 = x
== { Identity of + }
x = x
== { = reflexive }
true
The second manipulation above seems to appeal to a rule roughly described as "using a witness" for an existentially quantified variable, but I'm not very familiar with that. It seems to be the "dual" (forgive my ignorance, I don't understand much about that either) analogue of the rule of instantiation for universal quantification; the two rules seem to be generalizations of the rules of absorption involving disjunction/conjunction and implication.
|[ Context: a = 0
a·b = 0 == a = 0 \/ b = 0
== { a = 0 , twice }
0·b = 0 == 0 = 0 \/ b = 0
== { property of 0 ; equality }
true == true \/ b = 0
== { true is the zero element of \/ }
true
]|
|[ Context: a ≠ 0
a·b = 0 == a = 0 \/ b = 0
== { a ≠ 0 ; false is the unit element of \/ }
a·b = 0 == b = 0
== { property of 0 }
a·b = a·0 == b = 0
== { cancellation, using a ≠ 0 }
true
]| .
true
== { cancellation }
a ≠ 0 => (a·b = 0 == b = 0)
== { => into \/ }
a = 0 \/ (a·b = 0 == b = 0)
== { \/ over == , to form part of the demonstrandum }
a = 0 \/ a·b = 0 == a = 0 \/ b = 0
== { !! }
a·b = 0 == a = 0 \/ b = 0 .
a = 0 \/ a·b = 0 == a·b = 0 ,
a = 0 => a·b = 0 .
|[ Context: a = 0
a·b
= { a = 0 }
0·b
= { property of 0 }
0
]| .
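As a sanity check, the demonstrandum a·b = 0 == a = 0 \/ b = 0 can also be verified by brute force over a range of integers. A Python sketch (the bound is my arbitrary choice; this is evidence, not a proof):

```python
# Brute-force check of  a*b = 0  ==  a = 0 \/ b = 0  over a small integer range.
def zero_product_holds(lo=-10, hi=10):
    return all((a * b == 0) == (a == 0 or b == 0)
               for a in range(lo, hi + 1)
               for b in range(lo, hi + 1))

assert zero_product_holds()
```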
> == { cancellation }
Not just cancellation...
> which by predicate calculus is equivalent to:
(No predicate calculus needed, just basic propositional calculus.)
How do you put that ``Context'' construction into the place
where the ``!!'' is?
In CalcCheck, you can use subproofs:
[...]
And you have lots of verbosity, spacing, and line break options, e.g.:
But for single-use lemmas, subproofs are frequently less distracting.
I noticed the same tendency in LADM (Gries and Schneider, 1993):
They do present proof structures in chapter 4, but they hardly ever
use them later, and instead ``express the overarching logical argument
in natural language''.
I think that there is a case to be made for actually using
formally-structured proofs, and LADM actually frequently argues for
use of formalism, e.g., on p. 121:
| Formal proofs are often simpler than their informal counterparts in
| English, because inferences can be based purely on well-defined
| syntactic manipulation.
And on p. 78:
| That the formal proof is much simpler is no accident. Using formal
| tools, and not even letting contorted English sentences come into
| one's thoughts, can often lead to simpler arguments. Let the formal
| tools do the work.
CalcCheck demonstrates that for the proof style of LADM,
including structured proofs in the spirit of chapter 4,
the “de Bruijn factor” (http://www.cs.ru.nl/~freek/factor/index.html)
can be not much above 1, and in many cases where the informal proof
``expresses the overarching logical argument in natural language'',
it can actually be quite a bit below 1...
And machine-checkability has advantages even for professionals:
I know colleagues who worked in sub-theories of relation algebras
in the usual rigorous calculational style, and inadvertently
used properties not available in the sub-theory under consideration.
S.x
= { Left-identity of + }
S.(0 + x)
= { Definition of + for S }
S.0 + x
= { Definition of 1 }
1 + x
Th. even.(x + y) ≡ even.x ≡ even.y
Which axioms are you extending to these new solutions?
Depending on what you assume, I'm not sure you will be able to prove that the solutions are unique. One could imagine the natural numbers extended with several copies of the negative integers.

Some years ago I got very involved in a (sad) Facebook conversation trying to figure out what you'd need to assume in order to extend the naturals via a unary operator - . We didn't come to any easy conclusions.
Yes, the rule of instantiation is important to use in both directions: (Ex :: f.x) <= f.c and (Ax :: f.x) => f.c . When you are constructing a strengthening chain and you have existential quantification, you should be on the lookout for a witness — just as if you are constructing a weakening chain and you have universal quantification, you should be on the lookout for a useful instantiation.
As for antisymmetric, how about:

x ≤ y /\ y ≤ x
== { definition of ≤ , twice }
(Er,s :: x+r = y /\ y+s = x)
=> { Leibniz }
(Er,s :: x+r = y /\ y+s = x /\ (x+r)+s = x)
== { algebra }
(Er,s :: x+r = y /\ y+s = x /\ r+s = 0)
== { r,s ≥ 0 }
(Er,s :: x+r = y /\ y+s = x /\ r = 0 /\ s = 0)
== { one-point rule }
x+0 = y /\ y+0 = x
== { algebra }
x = y .
Ax. S.x ≠ 0
By the way, the "you can prove this easily" lemma is: x ≤ y == x+c ≤ y+c which I think will not be hard for you to prove.
The calculational style is not always great for working with witnesses and existential quantifications. Sometimes it's just easier to say, "let r and s satisfy x+r = y and y+s = x" and then work from there.
As for transitivity, I'm just thinking off the cuff here about how I might use the heuristic of missing symbols again:

|[ Context: x ≤ y /\ y ≤ z
x ≤ z
== { def }
(Er :: x+r = z)
== { • x+s = y , adding s to both sides }
(Er :: y+r = z+s)
== { • y+t = z , adding t to both sides }
(Er :: z+r = z+s+t)
== { algebra }
(Er :: r = s+t)
== { positive numbers are closed under addition }
true
]|

Something like that, anyway.
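Since ≤ is defined by an existential, its claimed properties can at least be spot-checked by bounded search. A Python sketch (the bound N is my arbitrary choice; this is evidence, not a proof):

```python
# x <= y  ==  (E r :: x + r = y), with the witness r searched over 0..N.
N = 20

def leq(x, y):
    return any(x + r == y for r in range(N + 1))

for x in range(N):
    assert leq(x, x)                      # reflexivity (witness r = 0)
    for y in range(N):
        if leq(x, y) and leq(y, x):
            assert x == y                 # antisymmetry
        for z in range(N):
            if leq(x, y) and leq(y, z):
                assert leq(x, z)          # transitivity (witness r = s+t)
```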
Just thinking a bit more, I don’t dig a definition like:
x ≤ y ≡ ⟨∃a :: x+a = y⟩
because it’s only correct if a is nonnegative, but if we have a notion of positive then why not just define
x ≤ y ≡ y−x is nonnegative ?
Trying to axiomatize "The Integers" is not easy. The same is true for any domain. It's much nicer to take an algebraic approach to some aspect of the integers, like, say, the ordering relation, and prove what you can in that mini-domain. When we say, for example, that the integers under addition form a group, or that they form with multiplication a commutative ring, we're not axiomatizing the domain — there are tons of groups and commutative rings! We're not trying to pin down the mathematical object in our minds uniquely, we're just studying its structure.
My suggestion is narrow your focus!
I think I will make a few observations concerning Homework 5, for what they're worth. The first one is that one can prove a variant of the "Successor" theorem [...]
As for the "Even double" theorem, one can easily prove it without induction by appealing to the distribution of even over +, viz.:
Th. even.(x + y) ≡ even.x ≡ even.y
which, in turn, can be proved via induction.
The book by Gries and Schneider that you mentioned, viz. A Logical Approach to Discrete Math, does indeed seem like a fine text that I would enjoy very much. Too bad I cannot acquire it right now, since I'm totally broke, have no source of income for myself, and I don't even know how to buy stuff off the internet. (I also don't have a credit card.) By the way, I couldn't even acquire Dijkstra's book, viz. A Discipline of Programming, by myself: Someone bought it for me as a birthday gift. (And I must say that I'm really glad for that, since that book is an extremely important piece of computing science writing and also a masterpiece in general.)
Speaking of LADM, I have observed that, although I'm still very new to abstract algebra, the axiomatization for the integers as presented in the Extra Notebook seems to be equivalent to that of a commutative ring, except perhaps for the "Cancellation of multiplication" law, for which I don't know whether a derivation from the commutative ring axioms alone can be given or not; in any case, I see that said law is stated there as an axiom.
To Jeremy:
Which axioms are you extending to these new solutions?
See, my plan was not to extend any of the original axioms, in principle.
By the way, the "you can prove this easily" lemma is: x ≤ y == x+c ≤ y+c which I think will not be hard for you to prove.
True, that was actually very easy. The only additional theorem that I needed was the injectivity of +, which is a consequence of the injectivity of S and can be established via induction.
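That lemma, too, admits a quick bounded check. A Python sketch (bounds mine):

```python
# Bounded check of the lemma  x <= y  ==  x+c <= y+c ,
# with <= defined via an existential witness over a finite search window.
N = 15

def leq(x, y):
    return any(x + r == y for r in range(2 * N + 1))

for x in range(N):
    for y in range(N):
        for c in range(N):
            assert leq(x, y) == leq(x + c, y + c)
```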
x ≤ y ∧ y ≤ z
≡ { Definition of ≤ twice; predicate calculus }
(E r, s :: x + r = y ∧ y + s = z)
≡ { Leibniz (+ injective) }
(E r, s :: x + r + s = y + s ∧ y + s = z)
⇒ { = transitive }
(E r, s :: x + r + s = z)
≡ { ??? }
(E t :: x + t = z)
≡ { Definition of ≤ }
x ≤ z
The trouble is that I still don't know how to justify the fourth step.
Just thinking a bit more, I don’t dig a definition like:
x ≤ y ≡ ⟨∃a :: x+a = y⟩
because it’s only correct if a is nonnegative, but if we have a notion of positive then why not just define
x ≤ y ≡ y−x is nonnegative ?
Well, the definition that used existential quantification seemed most natural for the natural numbers (no pun intended), which constituted the domain I was working with originally. (And, hence, also the r (your a) was to be implicitly understood as having the natural numbers as its range.) As soon as we try to extend that relation to the integers, we seem to be faced once again with the issue presented in the title of my original message. But if we are working with all integers from the beginning, perhaps your suggestion would present a line of reasoning to be pursued.
On May 25, 2020, at 15:59, Jeremy Weissmann <plusj...@gmail.com> wrote:
As for the "Even double" theorem, one can easily prove it without induction by appealing to the distribution of even over +, viz.:
Th. even.(x + y) ≡ even.x ≡ even.y
which, in turn, can be proved via induction.

I can't help but comment again. Please, go ahead and wonder how you might prove this theorem using induction. Maybe even do it!
Ax. even.0
Ax. even.(S.x) ≡ ¬even.x
Th. even.(x + y) ≡ even.x ≡ even.y
The proof is by induction over y.
Base step:
even.(x + 0)
≡ { Identity of + }
even.x
≡ { Identity of ≡ }
even.x ≡ true
≡ { even axiom }
even.x ≡ even.0
Induction step:
even.(x + S.y)
≡ { Successor axiom }
even.(S.(x + y))
≡ { even axiom }
¬even.(x + y)
≡ { Induction hypothesis }
¬(even.x ≡ even.y)
≡ { Predicate calculus }
even.x ≡ ¬even.y
≡ { even axiom }
even.x ≡ even.(S.y)
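The two even axioms and the theorem can also be checked mechanically over a range of naturals. A Python sketch (bounds mine):

```python
# even.0 ; even.(S.x) == ~even.x ; and Th. even.(x+y) == even.x == even.y .
def even(x):
    return x % 2 == 0

assert even(0)                                   # even axiom for 0
for x in range(50):
    assert even(x + 1) == (not even(x))          # even axiom for S
    for y in range(50):
        # == (equivalence) is associative, so parenthesization is immaterial
        assert even(x + y) == (even(x) == even(y))
```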
x ≤ y ∧ y ≤ z
≡ { Definition of ≤ twice; predicate calculus }
(E r, s :: x + r = y ∧ y + s = z)
≡ { Leibniz (+ injective) }
(E r, s :: x + r + s = y + s ∧ y + s = z)
⇒ { = transitive }
(E r, s :: x + r + s = z)
≡ { ??? }
(E t :: x + t = z)
≡ { Definition of ≤ }
x ≤ z
The trouble is that I still don't know how to justify the fourth step.

Very nice! In the fourth step, you are rewriting the dummy under t = r+s .
Not sure why I replaced ≤ with < in my mind and started insisting on r,s > 0. Sorry about that!
I had already proved this theorem before writing that, haha. It is really simple.
Very nice! In the fourth step, you are rewriting the dummy under t = r+s .
But truly, that's just it? Rewriting the dummy? I used to think that would only allow one to replace an atomic dummy identifier by another, such as replacing j by k, say.
I think I will make a few observations concerning Homework 5, for what they're worth. The first one is that one can prove a variant of the "Successor" theorem [...]

In line with my comments to you yesterday, I would say, rather than thinking about S as being part of an axiomatization of the naturals, simply consider it to be part of an algebraic system, and see what you can prove about it! Do you see the difference? In the axiomatization approach, you are trying to make sure you assume enough about S to prove that it can only mean what you mean it to mean, namely, the successor function. In the algebraic approach, you are studying the consequences of assumptions about S , regardless of what instantiations you can think of for it.
There is nothing wrong with the former line of inquiry — but that's not really what the calculational style is about. The calculational style is about algebraic calculation of formulae. Axiomatic systems are about studying those systems — for example, proving limitations of those systems, etc.
But part of the program laid out by Dijkstra et al (which is in keeping with a long, long tradition in mathematics and philosophy, for what that's worth) is to consider what we're looking at (a formula, a theorem, a collection of symbols, some mathematical concepts, etc) and ask yourself what sort of framework or approach best serves it. That is careful pondering, thoughtful design, the art of effective reasoning.

When I look at your formula, I see natural numbers, so of course induction is a possible approach to proof. But I also see two variables, so I wonder if I'll need a double induction, which doesn't sound fun. Also, I see a sort of distributivity over addition. Induction wouldn't do justice to that. (See similar comments in JAW2.) So if I'm going to consider how I might prove your theorem, I want to be guided by the formula.

Can we think of any sort of axiomatization of the predicate even which also enjoys some nice properties with respect to + ? Well, we often say a number is even if it's evenly divisible by 2. And divisibility enjoys nice properties with respect to + ! (If we didn't know this, we might wish to explore it.) Letting ⊑ denote 'divides' , we have the following "standard" property:

(0)  x ⊑ y /\ x ⊑ z => x ⊑ y+z ,

but a calculational mathematician (actually also a good traditional mathematician) also knows the more flexible:

(1)  d ⊑ x => ( d ⊑ y == d ⊑ a·x+b·y )

or equivalently

(1')  d ⊑ x \/ d ⊑ y == d ⊑ x \/ d ⊑ a·x + b·y .

How do we prove these? A separate concern, but again we want to stick to operators that work well with + and · . I'd suggest something like:

x ⊑ y == (En : integer.n : x·n = y) ,

which caters to lovely proofs of (0) and (1) .

But back to the theorem at hand. Using our interface, we rewrite even.x == 2 ⊑ x . But if you try now to prove your theorem using this interface and (0)/(1) , you will discover that you need more! That is, d ⊑ x+y == d ⊑ x == d ⊑ y is not a theorem unless d = 2 .
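The divisibility properties and the special role of d = 2 lend themselves to bounded checking. A Python sketch (bounds mine; for (1) I take b = 1, since for general b the equivalence needs b coprime to d):

```python
def dvd(d, x):                 # d 'divides' x
    return x % d == 0

R = range(1, 25)

# (0)  x | y /\ x | z  =>  x | y+z
for x in R:
    for y in R:
        for z in R:
            if dvd(x, y) and dvd(x, z):
                assert dvd(x, y + z)

# (1) with b = 1:  d | x  =>  ( d | y == d | a*x + y )
for d in R:
    for x in R:
        if dvd(d, x):
            for y in R:
                for a in range(5):
                    assert dvd(d, y) == dvd(d, a * x + y)

# d | x+y == d | x == d | y  holds for d = 2 ...
for x in R:
    for y in R:
        assert dvd(2, x + y) == (dvd(2, x) == dvd(2, y))

# ... but fails for every d > 2 (e.g. at x = y = 1)
for d in range(3, 10):
    assert any(dvd(d, x + y) != (dvd(d, x) == dvd(d, y)) for x in R for y in R)
```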
Calculating what remains to be shown, then figuring out what new machinery to introduce (and lemmata to be proven!) is a wonderful exercise.
Yes. Unfortunately these books are all quite expensive. I had meant to write my own introduction to the calculational style many years ago, but my path diverged somewhat. I'd love to get back to it someday. Perhaps I'm inching closer!
That being said, there is lots of ugliness in LADM, and there is plenty to soak up from the free EWDs and WFs (and even JAWs, if I may be so bold) online. Consider having a look at some of the free resources here: http://www.mathmeth.com/read.shtml . We're all happy to help you work through them.
A commutative ring with cancellable multiplication is indeed something new.
To Jeremy:
Which axioms are you extending to these new solutions?
See, my plan was not to extend any of the original axioms, in principle.

Okay, but then I don't think you can do what you hope to do.
For example, you've extended the domain but you've also silently done so for + as well, when you write x + y = 0 , right?
One of those symbols is a natural number, while the other is a new creature. But which addition symbol is that? Is it the one that enjoys the axiom of commutativity? Yes or no! If yes, then you already have commutativity. If not, then you have two additions floating around, one commutative, one (perhaps) not. What is the relationship between the two? If you don't write down some properties, I'm not sure you'll be able to prove anything. For example, do you propose:

x ++ y = x + y if x,y are natural ?

Etc. Start thinking about what you're really assuming about your domain, and then you can start asking real questions!

If all you have is x ++ y = x + y for x,y natural, then I'm not sure you have enough. Because your only introduced axiom about ++ is that for all (natural) x , there exists y such that x ++ y = 0 . I'm not sure what prevents the possibility that x ++ y = 0 for all other cases.
By the way, the "you can prove this easily" lemma is: x ≤ y == x+c ≤ y+c which I think will not be hard for you to prove.
True, that was actually very easy. The only additional theorem that I needed was the injectivity of +, which is a consequence of the injectivity of S and can be established via induction.

If you want to prove any theorems about + and ≤ , you need to do more theory building. Theorems regarding + and ≤ should depend on properties of those symbols — not properties of S . If you then want to implement them in terms of S and prove new theorems, go right ahead, but keep your concerns separate... make a firewall!
In my understanding, Peano's postulates are not about proving things about addition. We already know addition is commutative and injective. If Peano had come up with some postulates for S and then defined + in terms of S so that it wasn't commutative or wasn't injective, what would people have said? "These are not suitable axioms. They don't correctly model the concept." Not, "Wow, I guess addition isn't commutative!" .

Peano's postulates are about creating an axiomatization for arithmetic, so that if we wish to study the logical power/limitations of arithmetic, prove (in)completeness theorems, compare it to other similar systems, explore nonstandard models (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic), etc, we can do so. But they are not suitable for actually doing mathematics.
If you're working in a particular framework, then you need your tools to reflect that framework. The question is: is that framework well-chosen? Usually, in an algebraic approach to mathematics, we need more than one framework. Induction is important — but so is divisibility and prime factorization. And ultimately, even is a concept that is fundamentally related to divisibility and prime factorization, not induction.
What is special about even is that:

¬even.x /\ ¬even.y => even.(x+y)

is also a theorem, which is not true in general.
But even this may be a bit of a smokescreen, because perhaps the most powerful way to reason is in terms of ≈ , or modular equivalence:

x ≈ y == d ⊑ y–x ,

a definition which easily lends itself to proving a host of useful theorems about ≈ . Then with the parameter d = 2 , we have even.x == (x ≈ 0) . And we have for any x , x ≈ 0 ≠≠ x ≈ 1 , which is of course only true with d = 2 .

I'm not saying any of these interfaces is 'right' or 'wrong' . But I do feel that these interfaces get at something deeper than the G&S axioms for even . You don't want to use induction for every proof relating to number theory! (I don't want to use it for ANY proof relating to number theory, hehe.)
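This modular-equivalence interface can be spot-checked too. A Python sketch (I take the parameter d to be the modulus, so even corresponds to d = 2; bounds mine):

```python
# Bounded check that  x ~ y == d | (y - x)  is an equivalence relation,
# and that with d = 2 it captures evenness:  even.x == (x ~ 0).
def approx(d, x, y):
    # Python's % returns a result in [0, d) for d > 0, also for negatives
    return (y - x) % d == 0

R = range(-15, 16)

for d in range(1, 8):
    for x in R:
        assert approx(d, x, x)                         # reflexive
        for y in R:
            assert approx(d, x, y) == approx(d, y, x)  # symmetric
            for z in R:
                if approx(d, x, y) and approx(d, y, z):
                    assert approx(d, x, z)             # transitive

for x in R:
    assert (x % 2 == 0) == approx(2, x, 0)             # even.x == (x ~ 0)
    assert approx(2, x, 0) != approx(2, x, 1)          # exactly one holds
```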
There are rules for rewriting the dummy! Replacing j by k is also not straightforward: if you wish to retain equality, they have to be of the same type. It is also okay to replace j by k if k satisfies j's type, but then in general the range will shrink/strengthen.

One way to view your transformation in more detail is:

(Er,s :: ... r+s ... )
== { one-point rule, r+s satisfies t's type }
(Er,s :: (Et : t = r+s : ... r+s ... ))
== { Leibniz }
(Er,s :: (Et : t = r+s : ... t ... ))
== { quantifier interchange }
(Et :: (Er,s : t = r+s : ... t ... ))
== { vacuous quantification, assuming (Er,s :: t = r+s) }
(Et :: ... t ... ) .

So in our case, r,s,t are all of type natural. Certainly r+s satisfies t's type. As for the last step, we need (Er,s :: t = r+s) , so we can take r,s := t,0 . That's a little ugly, but it's what we need.

The simplest kind of transformation is an invertible one, where each set of dummy variables has a solution in terms of the other set. In that case it really is just a renaming.
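The equivalence underlying this dummy transformation can be sanity-checked over bounded naturals. A Python sketch (bound mine; the sample predicates are chosen to be false outside the search window, so the bounded search is faithful):

```python
# Bounded check of  (E r,s :: P.(r+s))  ==  (E t :: P.t)  over naturals,
# which relies on every natural t being expressible as r+s (take r,s := t,0).
N = 30

def check(P):
    lhs = any(P(r + s) for r in range(N) for s in range(N))
    rhs = any(P(t) for t in range(N))
    return lhs == rhs

assert check(lambda n: n == 7)
assert check(lambda n: 3 < n < 10 and n % 2 == 0)
assert check(lambda n: False)
```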
Probably it would be good to write a note on these sorts of transformations.
Well, I can see that a function like S could lend itself to some kind of generalization, all right. The most essential, distinguishing aspect about such a function seems to be the fact that it can be applied to its result. So, formally, I guess we could say that we are considering a class of entities f such that:
Ax. f: X → Y i.e. f is a function from some domain X to some codomain Y
Ax. (A x : x ∈ X : f.x ∈ X) i.e. f can be applied to its result
Of course, we don't need to restrict ourselves to functions, we could generalize S to a relation as well.
The trouble is that I'm still not very used to relations; neither to the relational calculus, for that matter.
There is nothing wrong with the former line of inquiry — but that's not really what the calculational style is about. The calculational style is about algebraic calculation of formulae. Axiomatic systems are about studying those systems — for example, proving limitations of those systems, etc.
Hmm, that's informative...
Well, that is a sensible approach to take for defining the predicate even on the integers in general. I will not pursue those investigations right now, but I'm thankful for the fact that you took the time to make those remarks!
I do think that there is value in having a reasonably self-contained document explicitly presenting the motivations for/virtues of the calculational style; as opposed to having to gather that from what is scattered throughout the EWD's. [...]
Though, in this discussion, you've definitely been shedding light on some aspects of the calculational style that I had hardly realized.
One of those symbols is a natural number, while the other is a new creature. But which addition symbol is that?
That is all very illuminating, because it lies at the heart of what had me confused originally. I would have been less confused had I introduced a new symbol such as ++ in the manner that you have shown us. Perhaps I'll continue examining these issues in specific connection with the integers in more detail later, but the general lesson learned here is certainly something worth taking notice!
OK, I think that is some sound advice.
Again, that is pretty informative. And I didn't know about those non-standard models of arithmetic. I'm thankful for those observations, though I cannot add anything to that discussion at the moment.
What is special about even is that:

¬even.x /\ ¬even.y => even.(x+y)

is also a theorem, which is not true in general.
I must admit that I did not understand what you meant here at all. (At all.) The theorem seems to state that the sum of two odd numbers is even; what do you mean by "not true in general?" Care to try to explain that again?
I'm still not very used to modular equivalence. I have already seen you using it as a connective in some proofs, though. A prominent example that comes to mind was in a proof of a theorem in graph theory, which, if I remember correctly, had been communicated by a user called João. I remember that I found your solution a very good one. (In any case, just for the sake of curiosity, I think I had come up with another proof of the same theorem, but using invariants. I tried sharing that solution in a comment under João's post on his blog wherein he had shared what you both had done on that problem. That was already many months ago, but the comment never got published; I guess João doesn't really update that blog anymore. Anyhow, this was my proof: http://www.mediafire.com/file/kkbe81u3zoabjk6/even-number-of-odd-nodes-in-graphs-again.txt/file)
Also, I would like to say that
x ≈ y ≡ d ⊑ y – x
does indeed seem like a nice way to define modular equivalence (modulo d, I presume?).
By the way, what is number theory all about, really? I'd appreciate it if you could tell me.
Probably it would be good to write a note on these sorts of transformations.
That could be useful.
You may be taking me a little too literally. S may not have many applications outside of the usual one. What I'm saying is: who cares? Once you have enough properties of a symbol to make a nice calculus, go exploring!
Ax. f: X → Y i.e. f is a function from some domain X to some codomain Y
Ax. (A x : x ∈ X : f.x ∈ X) i.e. f can be applied to its result

Might be just simpler to say f : X -> X .
f: X → Y
g: Y → Z
Of course, we don't need to restrict ourselves to functions, we could generalize S to a relation as well.

Not following you here. S is a unary operation on the natural numbers (if you like, natural structures).
Have you mastered simple proofs in the predicate calculus?
I would certainly go next to explore some lattice theory (there are EWDs and WFs on the subject)
and then maybe try some simple number theory (like the divisibility stuff I mentioned earlier).
There is nothing wrong with the former line of inquiry — but that's not really what the calculational style is about. The calculational style is about algebraic calculation of formulae. Axiomatic systems are about studying those systems — for example, proving limitations of those systems, etc.
Hmm, that's informative...

I mean, I certainly exaggerate the case. Non-standard models exist for most axiomatizations. The point, though, is that the concerns of a logician axiomatizing arithmetic are not the concerns of a number theorist. Logicians want to axiomatize arithmetic as simply as possible so they can (for example) prove Gödel's incompleteness theorem. Number theorists need to surround themselves with many powerful tools to prove theorems, and the distinction between axiom and theorem is totally irrelevant. For example, I consider it a curiosity at best that

(*)  x /\ (y \/ z) == (x /\ y) \/ (x /\ z)   (for all x,y,z)

can be proved from:

x \/ (y == z) == x \/ y == x \/ z
x /\ y == x == y == x \/ y

and the symmetry and associativity of \/ and == . From my perspective, (*) is fundamental and indispensable. Trying to prove it from "simpler" assumptions about the symbols is nothing more than an exercise in theory development. It is meaningless to the working mathematician because we already accept that it is true. This echoes a point from an earlier email: if we took different axioms and (*) wasn't true, we wouldn't say, "Wow, actually (*) is not true!" , we would say, "Your axioms are wrong." . The axioms serve our needs, not the other way around.
Well, that is a sensible approach to take for defining the predicate even on the integers in general. I will not pursue those investigations right now, but I'm thankful for the fact that you took the time to make those remarks!

You could also define it on the integers by the same formula even.x == (En :: x = 2*n) , where n is restricted to naturals. You will be able to prove all the same theorems.
I appreciate the comment. For me —and I think I am right to say that I am carrying the torch of Dijkstra in some modest way— , what we call 'the calculational style' is not fundamentally about calculation. It is about the art of effective reasoning (EWD473, 619, etc), and calculation itself is an extreme synthesis of thought where concepts have been transcended and reasoning can be carried out in terms of symbol manipulation. Much of the time, calculation is not possible, but we can always strive to think clearly and powerfully.
One (traditional) construction of the integers from the naturals goes as follows. On ordered pairs of naturals define a relation (a,b) ~ (x,y) == a+y = b+x . We can prove that ~ is an equivalence relation (reflexive, symmetric, transitive), and then we may denote by [(a,b)] the 'equivalence class' containing all ordered pairs related to (a,b) by ~ . (This is a common sort of construction.) One can now define addition on the equivalence classes like so:

[(a,b)] ++ [(x,y)] = [(a+x,b+y)] .

It can be proved easily that ++ is well-defined (that it satisfies Leibniz), and that ++ is symmetric and associative. If one identifies natural n with [(n,0)] , then you can now specify -n = [(0,n)] and everything you might want to prove can be proved.

BUT. This is overspecific. It's a beautiful construction, but it is not necessary to consider integers as equivalence classes of ordered pairs of natural numbers (that's a mouthful!) to be able to do math with integers.

The point in sharing this construction is that the operation ++ really is something new in this case. It's related to + but it's new. If you don't want to do that, you're trying to prove some sort of canonical extension theorem, the general case of which might be something like:

"Suppose you have a binary operation % on a domain X , and further suppose that there is a subdomain S ⊆ X such that % has such-and-such properties. Then % has a unique extension to all of X which preserves all the properties."

The question is, with X = "integers" and S = "naturals" and % = + , what are the such-and-such properties you need? (I don't know the answer.) (This is a question for logicians, by the way, not number theorists.)
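The construction can be sketched executably. A hedged Python version (the names norm, equiv, plusplus are mine), using a canonical representative for each equivalence class:

```python
# Integers as equivalence classes of pairs of naturals,
# with (a,b) ~ (x,y) == a+y = b+x , and [(a,b)] ++ [(x,y)] = [(a+x,b+y)].

def norm(p):                      # canonical representative of [(a,b)]
    a, b = p
    m = min(a, b)
    return (a - m, b - m)         # (a,b) ~ (a-m, b-m), one component is 0

def equiv(p, q):                  # (a,b) ~ (x,y) == a+y = b+x
    (a, b), (x, y) = p, q
    return a + y == b + x

def plusplus(p, q):               # ++ computed on representatives
    (a, b), (x, y) = p, q
    return norm((a + x, b + y))

# ++ is well-defined: the result does not depend on the representative chosen
assert equiv((3, 1), (7, 5))
assert plusplus((3, 1), (2, 6)) == plusplus((7, 5), (2, 6))

# identifying natural n with [(n,0)] and -n with [(0,n)]:
assert plusplus((5, 0), (0, 5)) == (0, 0)     # n ++ (-n) = 0
assert plusplus((2, 0), (3, 0)) == (5, 0)     # agrees with + on naturals
```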
Yes, I mean that

¬(d ⊑ x) /\ ¬(d ⊑ y) => d ⊑ x+y

is only true when d = 2 . Sorry for the confusion. The point was that if we recast the theorem in question in terms of divisibility:

2 ⊑ x == 2 ⊑ y == 2 ⊑ x+y

much of the content of the theorem is true even when replacing 2 by any natural number. Only a small piece of it depends on the number 2 .
By the way, what is number theory all about, really? I'd appreciate it if you could tell me.

Uh, numbers? ;) Cutting edge number theory is not really about numbers, and I don't know enough to cogently answer the question. But elementary number theory is definitely the study of natural numbers, integers, and equations involving these. The wikipedia page will give you some notion of the diversity of the subject! When I use the term, though, I'm just talking about simple properties of numbers as we use them in basic calculations.
Ax. f: X → Y i.e. f is a function from some domain X to some codomain Y
Ax. (A x : x ∈ X : f.x ∈ X) i.e. f can be applied to its result

Might be just simpler to say f : X -> X .
Perhaps that route could turn out to be more profitable. My reasoning was just that at this stage of the game insisting on X = Y would be premature [...]
Well, that is a sensible approach to take for defining the predicate even on the integers in general. I will not pursue those investigations right now, but I'm thankful that you took the time to make those remarks!

You could also define it on the integers by the same formula, even.x == (E n :: x = 2*n) , where n is restricted to naturals. You will be able to prove all the same theorems.
Really? But then
even.(-2)
≡ (E n :: -2 = 2 * n)
≡ false .
I appreciate the comment. For me —and I think I am right to say that I am carrying the torch of Dijkstra in some modest way— , what we call 'the calculational style' is not fundamentally about calculation. It is about the art of effective reasoning (EWD473, 619, etc), and calculation itself is an extreme synthesis of thought where concepts have been transcended and reasoning can be carried out in terms of symbol manipulation. Much of the time, calculation is not possible, but we can always strive to think clearly and powerfully.
What you describe generally as the calculational style sounds like it would be more aptly termed "mathematical methodology", no? In any case, I reckon that "the art and science of effective reasoning" was a phrase used by Dijkstra to define mathematics in general, is that not right?
OK, it's just that I don't really know the difference between number theory and arithmetic. What is more, there are other kinds of numbers, e.g. irrational numbers, complex numbers... and my impression is that neither arithmetic nor what is called "number theory" seem to deal with these other kinds of numbers. What should one study if one wants to know more about them?
The properties you postulated give f(X) ⊆ X , so the only part of Y that relates to f is the part that is in X ! More of interest is the difference between X and f(X) .
Ax. f: X → Y ⇒ (x ∈ X ≡ f.x ∈ Y)
Ax. x = y ⇒ f.x = f.y
Ax. (f∘g).x = f.(g.x)
Ax. f = g ≡ (A x :: f.x = g.x)
You could also define it on the integers by the same formula even.x == (E n :: x = 2*n) , where n is restricted to naturals. You will be able to prove all the same theorems.

Really? But then
even.(-2)
≡ (E n :: -2 = 2 * n)
≡ false .

But -2 isn't in the natural numbers. I'm saying the same definition works for the naturals if we restrict ourselves to the naturals.
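The remark can be illustrated with a small bounded search for the existential witness (the bound is only there so the check terminates; `even` is my own name for the predicate):

```python
# even.x == (E n : n a natural : x = 2*n), with the naturals cut off
# at an arbitrary bound purely so the search terminates.

def even(x, bound=100):
    return any(x == 2 * n for n in range(bound))

# On the naturals, this picks out exactly the even numbers:
assert even(0) and even(2) and even(10)
assert not even(3)

# For a negative argument no natural witness exists, since 2*n is
# never negative; so the definition only serves for the naturals.
assert not even(-2)
```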
I think it's even deeper than mathematical methodology! And yes, you're right, that was Dijkstra's definition of mathematics, but —and this may just be an issue of semantics— , I think mathematics (like calculation) is an extreme example of synthesized thought. But there's something deeper here, something about organizing our perceptions.
More of interest is the difference between X and f(X) . Also, I consider that S is an operator on the natural numbers, so the appropriate generalization to my tastes is still an arbitrary operator.
[...]
More on the successor function to come.
But are the transformations functions or matrices, after all?
Even when I think of the most general verbal description of a function that I had known until now, viz. a rule which associates every element of some domain to a unique element of some codomain, I now regard even that as possibly overspecific.
OK, what had me confused was just that you said that we could define it on the integers using the same formula
OK, what had me confused was just that you said that we could define it on the integers using the same formula

If I said that, it was a typo. I meant, "on the naturals" . Oops!
(Qx : x ∈ R : f.x)
= { assuming c ∈ R }
(Qx : x ∈ R\{c} : f.x) Q f.c
(Qx : x ∈ R : f.x)
= { R is the disjoint union of S and T }
(Qx : x ∈ S : f.x) Q (Qx : x ∈ T : f.x)
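A concrete instance of the two rules above, taking Q to be summation with its identity 0 (a sketch; the particular sets and function are mine):

```python
# Instantiate the quantifier rules with Q := + (summation).

R = {1, 2, 3, 4, 5}
f = lambda x: x * x

# Split off one term:
#   (+x : x in R : f.x) = (+x : x in R\{c} : f.x) + f.c ,  for c in R
c = 3
assert sum(f(x) for x in R) == sum(f(x) for x in R - {c}) + f(c)

# Range split over a disjoint union:
#   (+x : x in R : f.x) = (+x : x in S : f.x) + (+x : x in T : f.x)
S, T = {1, 2}, {3, 4, 5}
assert S | T == R and not (S & T)      # R is the disjoint union of S and T
assert sum(f(x) for x in R) == sum(f(x) for x in S) + sum(f(x) for x in T)
```

The same two shapes hold for any symmetric, associative Q with an identity element (e.g. products, conjunctions, maxima), which is what the abstract notation is capturing.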
x · (y + 1) = x·y + x·1
x · (y + z) = x·y + x·z ?
Dear colleagues,
Greetings.
Quite some time ago, I tried to develop the theory of arithmetic from the ground up à la Peano. I.e., starting with the set of axioms below, I proceeded to prove the familiar properties of the arithmetic operators on natural numbers, i.e. commutativity, associativity, distributivity, etc.
(Note: The variables x and y below range over the natural numbers, and S denotes the so-called "successor function", i.e. a function that associates each natural number to its successor, which is also a natural number.)
Ax. S.x = S.y -> x = y
(i.e. S is injective)
Ax. P.0 /\ (A x :: P.x -> P.(S.x)) -> (A x :: P.x)
(a.k.a. the induction axiom)
Ax. x + 0 = x
Ax. x + S.y = S.(x + y)
Ax. x * 0 = 0
Ax. x * S.y = x + x * y
Ax. 1 = S.0
(Definition of 1, just for convenience.)
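For what it's worth, the axioms for + and * can be rendered directly as recursive definitions and spot-checked on small numbers. This is only a sketch: I use Python's non-negative integers in place of the abstract naturals, and `y - 1` in place of pattern-matching on S.y .

```python
def S(x):
    """The successor function."""
    return x + 1

def add(x, y):
    """x + 0 = x  ;  x + S.y = S.(x + y)"""
    return x if y == 0 else S(add(x, y - 1))

def mul(x, y):
    """x * 0 = 0  ;  x * S.y = x + x * y"""
    return 0 if y == 0 else add(x, mul(x, y - 1))

ONE = S(0)        # Ax. 1 = S.0

# Spot-check the familiar properties on a small range (checks, not proofs;
# the proofs go by induction):
ns = range(6)
assert all(add(x, y) == add(y, x) for x in ns for y in ns)    # + symmetric
assert all(add(add(x, y), z) == add(x, add(y, z))
           for x in ns for y in ns for z in ns)               # + associative
assert all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
           for x in ns for y in ns for z in ns)               # distributivity
assert all(mul(x, ONE) == x for x in ns)                      # 1 is identity of *
```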
For the record, this exercise had three purposes, viz.
0. To make myself more familiar with mathematical theory building;
1. To try to get my understanding about numbers on more rigorous footing;
2. To finally understand how mathematical induction works, something that had always puzzled me. This desire was especially triggered by attempting to understand some of the proofs in Dijkstra's "A Discipline of Programming". (The semantic definition of the repetitive construct had already left me completely baffled.) I regard that this strategy did work out very well, in the sense that I am no longer afraid of mathematical induction, have actually written many proofs appealing to it, and am able to follow the proofs in Dijkstra's book.
So far, so good. That was already a long time ago. Later on I tried to extend my domain of natural numbers to include also the negative numbers, but with seemingly little success. I had seen EWD923a, and was especially struck by the paragraph where he says "For quite some time now, roots of equations seems to me the paradigm par excellence for definitions." So I tried to follow such an approach and postulated that, for all natural x, the equation
y: x + y = 0
has at least one solution.
But is the above postulate sufficient to prove that the familiar properties of addition, multiplication, etc. extend to all integers? In the same vein, how can one perform further extensions to the rational numbers, the reals, etc.?
I did not even seem able to prove that the above equation has a unique solution corresponding to each natural x. Every time I try to do so, I become unsure of whether I have sufficient grounds to assume that the expressions I'm calculating with are even defined.
Also, as a "bonus" issue, with relation <= defined on natural numbers as:
Ax. x <= y == (E r :: x + r = y)
how can one prove that it is antisymmetric and transitive? I think that I already know how to prove that it is reflexive, viz. by observing for any x:
x <= x
== { Definition of <= }
(E r :: x + r = x)
<- { (Read below) }
x + 0 = x
== { Identity of + }
x = x
== { = reflexive }
true
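The definition of <= above can likewise be spot-checked. Here the existential is decided by a bounded witness search, which suffices because any witness r satisfies r <= y (the name `leq` is mine):

```python
# x <= y  ==  (E r :: x + r = y), decided by bounded search over naturals.

def leq(x, y):
    return any(x + r == y for r in range(y + 1))

ns = range(8)
assert all(leq(x, x) for x in ns)                              # reflexive
assert all(not (leq(x, y) and leq(y, x)) or x == y
           for x in ns for y in ns)                            # antisymmetric
assert all(not (leq(x, y) and leq(y, z)) or leq(x, z)
           for x in ns for y in ns for z in ns)                # transitive
```

Again these are only checks on a finite range; the proofs themselves go by calculation, e.g. antisymmetry via x + r + s = x forcing r = s = 0 (which in turn rests on the cancellation property of + ).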
The second manipulation above seems to appeal to a rule roughly described by "using a witness" for an existentially quantified variable, but I'm not very familiar with that. It seems to be the "dual" (forgive my ignorance, I don't understand much about that either) analogue of the rule of instantiation for universal quantification; the two rules seem to be generalizations of the rules of absorption involving disjunction/conjunction and implication.
I would appreciate any suggestions, comments, or general insights on these problems.
Thank you all in advance!
All the best.
(P.S.: Do we need to use only ASCII characters to write down formulae in these messages?)
--
You received this message because you are subscribed to the mathmeth.com mailing list.
To unsubscribe from this group, send email to Calculational-Math...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/Calculational-Mathematics?hl=en
---
You received this message because you are subscribed to the Google Groups "Calculational Mathematics" group.
To unsubscribe from this group and stop receiving emails from it, send an email to calculational-math...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/calculational-mathematics/b8a927cb-0899-4760-8bce-4a2e8ac00170%40googlegroups.com.
Similarly, I have been racking my brain trying to figure out how to write something about your successor function. Its role in the logical study of the natural numbers and systems derived therefrom is clear: we want to axiomatize the natural numbers using as few symbols and as simple properties as possible. However, from a calculational perspective, even from the perspective of a budding mathematician practicing calculation, I just don't get it.
In line with my comments to you yesterday, I would say, rather than thinking about S as being part of an axiomatization of the naturals, simply consider it to be part of an algebraic system, and see what you can prove about it! Do you see the difference? In the axiomatization approach, you are trying to make sure you assume enough about S to prove that it can only mean what you mean it to mean, namely, the successor function. In the algebraic approach, you are studying the consequences of assumptions about S , regardless of what instantiations you can think of for it.
Well, I can see that a function like S could lend itself to some kind of generalization, all right. The most essential, distinguishing aspect about such a function seems to be the fact that it can be applied to its result. [...]
When I wish to understand numbers additively, what seems clear is that the (positive) natural numbers are the smallest set containing something called 1 and which is closed under some associative operation + . This "smallest set etc" or "the set generated by such-and-such" is an extremely common and powerful construction in mathematics. In general, we look at sets containing certain elements which are closed under certain operations; these are typically called inductive sets. The intersection of all inductive sets is what we mean by the smallest inductive set, or the set generated by the certain elements and operations.

I prefer this understanding because I don't see the merit in breaking symmetries to define a concept, only to restore them later. Viz. your definition of + , which is defined by induction on its right argument, only to prove later that you could have defined it by induction on its left argument. I like a good challenge as much as the next person, but here I really have to throw up my hands and ask, "Why?" . My definition also has the added bonus (not at all necessary) that it conforms to the way we treat naturals under addition in practice: as collections (sums) of a single unit (1) that can be freely rearranged at will.

Similar to defining the positives (shall we say) in this way, we could define the natural numbers to be the set generated by 1 , + , and 0 — the identity element of + . The integers could be defined as the set generated by 1 , + , 0 , and a unary operator - .

In my definition of the positives above, by construction we have the induction scheme that a property which holds for 1 and which is closed under + holds for all positives. So for example we can prove immediately that every positive is a sum of 1s, from which symmetry of + follows immediately. We can also now prove that every positive is either 1 or p+1 for positive p , from which traditional mathematical induction follows.
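The "set generated by" construction can be illustrated by computing the closure up to a cutoff (the cutoff and the helper's name are my own assumptions, there only to keep the computation finite):

```python
# Smallest subset of {0..cutoff} containing `seed` and closed under `op`
# (restricted to results <= cutoff), computed by iterating the closure.

def generated(seed, op, cutoff):
    s = set(seed)
    while True:
        new = {op(a, b) for a in s for b in s if op(a, b) <= cutoff} - s
        if not new:
            return s
        s |= new

# The positives: generated by 1 under + ...
positives = generated({1}, lambda a, b: a + b, 20)
assert positives == set(range(1, 21))   # every positive is a sum of 1s

# ... and adding 0, the identity of +, generates the naturals:
naturals = generated({0, 1}, lambda a, b: a + b, 20)
assert naturals == set(range(0, 21))
```

The induction scheme mentioned above is visible in the loop: any property that holds on the seed and is preserved by `op` holds for everything the loop ever adds.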
So, to step back for a moment from my rant (!), I really put the question to you about this successor business: what exactly are you trying to accomplish with such an axiomatization? I don't ask that rhetorically. I really want to understand what is the goal of your exercise.
For the record, this exercise had three purposes, viz.
0. To make myself more familiar with mathematical theory building;
1. To try to get my understanding about numbers on more rigorous footing;
2. To finally understand how mathematical induction works, something that had always puzzled me. This desire was especially triggered by attempting to understand some of the proofs in Dijkstra's "A Discipline of Programming". (The semantic definition of the repetitive construct had already left me completely baffled.) I regard that this strategy did work out very well, in the sense that I am no longer afraid of mathematical induction, have actually written many proofs appealing to it, and am able to follow the proofs in Dijkstra's book.
Perhaps it will be worthwhile to remind ourselves that this whole attention devoted to the successor function has been triggered by the following suggestion of yours, viz.:
Moving on, you will probably like to know that I have been seriously pondering your text. At first, I was going to put forward a few questions, but, reflecting more and more upon my own questions, I think that you have given me some new insights; what I propose to do now is to present explicitly what my considerations have been. Feel free to let me know what you think.
And secondly, you mentioned that, when Peano provided his axiomatization of arithmetic, people already knew that addition was associative and commutative. But, even to this day, we could not say the same about the associativity of equivalence, or the fact that disjunction distributes over it. If people were to be told that these are general laws of the logic of propositions, they'd probably want some sort of proof. What is more, there are even forms of logic, such as the "continuous logic" discussed in EWD931, wherein these laws do not hold. Yet, Dijkstra has both of these laws as axioms in his development of the predicate calculus. How come?
I reckon that one of the big justifications for mathematics in the first place is to have more reasonable grounds on which we can base our confidence in certain propositions, so that, in a sense, we don't have to draw conclusions based purely on things such as "poor man's induction".
On the other hand, my understanding of the motivation for postulating axioms is that they serve as an interface between the concerns of one who is exploring their consequences and/or is trying to erect a calculus, versus the concerns of one who is trying to apply the theory to a specific situation.
You certainly make an interesting case there. Unfortunately, I'm not yet very familiar with concepts such as generating sets, inductive sets, and a property being closed under an operation.
0. To make myself more familiar with mathematical theory building;
1. To try to get my understanding about numbers on more rigorous footing;
2. To finally understand how mathematical induction works, something that had always puzzled me.
We are left with (1). Really, what I wanted was to have some framework with which I could better reason about numbers and prove theorems about them. Such as the stuff one is apparently supposed to know if one wants to perform well in the Mathematical Olympiad.
On May 30, 2020, at 15:33, Lucas Brunno Luna <lucasl...@hotmail.com> wrote:
And finally, well, if you find that keeping your distance from concepts such as "truth" serves you better, who am I to contest that? Nevertheless, you reminded me of an incident from yesteryear that happened in a city a couple hundred kilometers from where I live. A series of miscalculations led to the collapse of a dam classified under "low risk", leading to the release of enormous amounts of mud that suddenly buried over 200 people to their deaths. I think that, at that moment, truth ended up staring all of these poor victims in the face, and perhaps some of them had even for a long time preferred to keep their distance from it.
You remind me of a man.
On May 30, 2020, at 17:40, Lucas Brunno Luna <lucasl...@hotmail.com> wrote:
The man with the power.
On May 30, 2020, at 17:41, Jeremy Weissmann <plusj...@gmail.com> wrote:
On May 30, 2020, at 17:49, Lucas Brunno Luna <lucasl...@hotmail.com> wrote:
Perhaps this could be a subject...
for the future.
That is all, folks! ;D
On May 30, 2020, at 17:52, Jeremy Weissmann <plusj...@gmail.com> wrote: