Could you please expound on the difference between a logic and an algebra? [...] for instance, how come the need for inference rules was rejected by Prof. Dijkstra, while Prof. Gries kept employing them (e.g., in his paper "Foundations for Calculational Logic"), even after circulation of the open letter? [...]
What follows is a very unstructured, but hopefully understandable summary of my thoughts on the matter. I apologize in advance for the lack of organization.
There's no question that algebraic logic is correct, just as there's no question that traditional approaches to logic are correct. In my opinion, they reflect totally different use cases. I say this as someone who used to study mathematical logic, even as I used the "algebraic" approach in my writings — sometimes even in my writings about mathematical logic!
When in logic we create something called an "inference system", we are creating a model of human reasoning which is fit for mathematical study. Thus we specify certain "axioms" and "inference rules", define "proof" in terms of these, and then proceed to reason about what can be "proved" in this "system", or even about what cannot be "proved"! Logical systems are mathematical objects used to study (that is, they are 'abstractions of') axiomatic reasoning.
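To make the "mathematical object" point concrete, here is a minimal sketch — my own illustration, not any system from the literature — of an inference system represented as data in Python. Formulas are strings or nested tuples, modus ponens is the sole inference rule, and a "proof" is just a list of formulas, each of which is an axiom or follows from earlier lines. The point is that the system is something we inspect and reason *about*, not a tool anyone would want to reason *with*:

```python
# Hypothetical toy inference system: formulas are strings (atoms) or
# 3-tuples (p, '=>', q); a proof is a finite sequence of formulas.

def is_axiom(f, axioms):
    """A formula is justified outright if it is one of the axioms."""
    return f in axioms

def follows_by_modus_ponens(f, earlier):
    """f is justified if some earlier line is (p, '=>', f) and p also
    appears earlier: the rule 'from X and X => Y, conclude Y'."""
    return any(e[1] == '=>' and e[2] == f and e[0] in earlier
               for e in earlier if isinstance(e, tuple) and len(e) == 3)

def is_proof(lines, axioms):
    """Check that every line is an axiom or follows by modus ponens
    from strictly earlier lines."""
    for i, f in enumerate(lines):
        if not (is_axiom(f, axioms) or
                follows_by_modus_ponens(f, lines[:i])):
            return False
    return True

# From axioms P and P => Q, the sequence [P, P => Q, Q] is a proof of Q.
P, Q = 'P', 'Q'
axioms = {P, (P, '=>', Q)}
print(is_proof([P, (P, '=>', Q), Q], axioms))  # prints True
print(is_proof([Q], axioms))                   # prints False
```

Note how deliberately impoverished this is: no notion of "obvious step", no appeal to meaning, nothing a working mathematician would tolerate. That minimality is exactly what makes such systems tractable objects of study.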
Note that in the preceding sentences I used the words 'reason' and 'reasoning'. I didn't use the word 'proof' because I didn't want to confuse semantic levels, but 'reasoning' is what we mean by 'proof' in a mathematical context. It is no different from abstract algebra, where we might define the "zero element" of a ring, which we use to study/generalize the number 0 in the ring of integers. Similarly, 'inference systems' are abstractions of human reasoning that we can use to study our own reasoning using mathematics.
So, how do we carry out this reasoning? We could use inference systems, but this is a Bad Idea. Inference systems are intentionally defined in a minimalistic way, because they are not meant to be used — they are meant to be studied. Gries and Schneider attempted to "soup up" their inference system to make it more "user-friendly", but I would still maintain that they used the wrong tool for the job. (You can make the butt of a screwdriver heavy and metallic and use it to drive nails into wood, but a hammer will be a better tool for the job because of how it exploits rotational force.)
Mostly we carry out reasoning in natural language. Natural language can be an incredibly powerful communication tool, but often it is simply not up to the task of dealing with highly abstract concepts related in highly specific, precise ways. Thus, over the millennia, we invented symbolic representations of mathematical concepts, and over the last few centuries we developed algebraic systems: calculation with uninterpreted formulae. All professional mathematicians (including logicians) use algebraic calculation to some extent in their proofs. The algebraic approach was so powerful and wide-ranging that it even enabled us to mathematicize mathematical reasoning itself! Mathematical logic simply could not have existed until we had already seen how mathematical reasoning could be synthesized into a symbolic form.
What Dijkstra, Feijen, et al. did was to apply calculation more broadly; to make calculational tools more streamlined, flexible, and powerful; and to show that calculation was not only a useful shorthand, but could also drive proof construction. In this sense, though, they were just expanding on a very mathematical tradition.
Make no mistake about it: symbolic reasoning still comprises only a small portion of human mathematical reasoning. Dijkstra's writings are full of English words and explanations. Gries and Schneider's inference system is defined using the English language. Sure: within a particular framework, it's possible to state a theorem and prove it without any natural language reasoning, but in order to string those frameworks together to develop a theory, one needs natural language. I would bet that the extinction of the human race would come way before any significant progress in turning all of human reasoning into symbols. The reason is that symbolic reasoning reflects a deep understanding and great synthesis of ideas, understanding that takes centuries if not millennia to reach. Humans did arithmetic for millennia before number systems were developed, and then even longer before algebra could be developed. These early mathematicians were not dummies; they were great geniuses with an exceptional command of mathematical concepts. They never could have imagined that these esoteric concepts could be so systematized that someday they could be taught to five-year-olds.
But I digress. So, it is not the case that Gries and Schneider's inference system is any more correct than the traditional mathematician's (or Dijkstra's) algebraic approach. It is a human system designed in terms of human concepts, which makes it as susceptible to error as any other human-designed system. And yet it is also exceptionally cumbersome and limited. So, I see no real benefits to their system.
I think there was a real misunderstanding between Dijkstra and Gries. When Dijkstra mathematicized logic, it was because he felt —as someone steeped in a great tradition of mathematics— that it was beneficial not merely to manipulate x and y and integrals and so forth algebraically, but also to manipulate the structure of the arguments themselves, using predicate calculus. As far as I can tell, Dijkstra had no interest in "mathematical logic" per se; that is, he had no interest in designing systems to study reasoning and the limitations of reasoning. Nevertheless, if he had, he surely would have used algebraic calculation to structure and streamline his arguments about mathematical logic — because that is what any professional mathematician would do. And then, indeed, he would have needed to introduce more symbols and make more distinctions, because the study of mathematical logic would require it. The *use* of mathematical logic, however, does not.
A final point. There is no human reasoning without "postulates" and "inference rules". But by these terms I don't mean the mathematical ones; I mean the human concepts which the mathematical ones model. A good example is the Law of Modus Ponens: from X and X => Y we can conclude Y. On page 10 of EWD1123, Dijkstra proves a symbolic version of Modus Ponens, and even remarks that it no longer requires a special name. Well, that may be true, but Dijkstra has already used the natural-language version of Modus Ponens a dozen times by the time he gets to page 10! His first calculation, on page 6, has three steps, each an equality. He shows A = B, B = C, and C = D, and wants us to conclude A = D. Well, what is this but the Law of Modus Ponens applied to the transitivity of equality, which states that X = Y and Y = Z implies X = Z? I suppose one could avoid this problem by turning the transitivity of equality into an inference rule, but then we still have an inference rule. Similar remarks can be made for the Law of Instantiation: r.x /\ (Ay : r.y : t.y) => t.x. We can and should prove and use symbolic versions of the Law of Instantiation, but we simply cannot do mathematics without using it as an "inference rule".
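The same point survives full mechanization. Here is the page-6-style chain rendered in Lean 4 (my own sketch; the propositional variables are declared as `Prop`s purely for illustration). The `calc` block looks like a pure string of equalities, but Lean glues the steps together by silently applying transitivity at each link — the "inference rule" has not disappeared, it has merely been absorbed into the proof checker:

```lean
-- A three-step equality chain, as in Dijkstra's calculational format.
-- Each `_ = ... := h` step is joined to the previous one by an implicit
-- appeal to transitivity of equality.
example (A B C D : Prop) (h1 : A = B) (h2 : B = C) (h3 : C = D) : A = D :=
  calc A = B := h1
       _ = C := h2
       _ = D := h3
```

So even a machine-checked calculation rests on rules of inference that are applied rather than written down.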
Gries and Schneider (and possibly other logicians) get things wrong when they think that their inference system is able to avoid this problem. For they use Modus Ponens and the Law of Instantiation and other forms of logical reasoning to set up their inference system in the first place.
To sum up: Human reasoning uses certain agreed-upon principles of logic, and these cannot be dispensed with — they *are* the definition of human reasoning. Symbols are used to simplify human reasoning to make it more powerful. Mathematical logic is a system of symbols used to study human reasoning.
+j