I’m trying to understand if TLA+ is useful if your primary implementation language is a functional one like Haskell. So I’m going to make some assertions and come to a conclusion, in the hopes that people will point out where I’m going wrong or right. I’m half-way through the TLA+ book, so I apologize if I get some things wrong.
Specs in TLA+ take the form of temporal formulas with the following structure:
Spec == Init /\ [][Next]_vars /\ Liveness
Here, Spec can be said to be satisfied by a certain class of behaviors. A behavior is a sequence of states. Crucially, a state is a set of variables whose values can change over time.
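To make that shape concrete, here is a minimal spec of this form — the hour-clock example standard in the TLA+ literature, with the Liveness conjunct omitted:

---- MODULE HourClock ----
EXTENDS Naturals
VARIABLE hr

Init == hr \in 1..12
Next == hr' = IF hr = 12 THEN 1 ELSE hr + 1

Spec == Init /\ [][Next]_hr
====

Here a behavior is any sequence of states in which hr starts somewhere in 1..12 and each step either advances the clock by one hour (wrapping at 12) or leaves hr unchanged (a stuttering step).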
Functional languages like Haskell heavily discourage the use of variable mutation. So is this a problem for TLA+?
There are two cases to analyze.
First case: You are breaking the rules of Haskell and introducing mutable state into your program anyway. In Haskell this is discouraged but allowed. In this easy case, TLA+ can help you reason about your partially stateful program as usual (i.e., TLA+ will be just as useful to you as it would have been had you written your program in C or Java).
Second case: Now suppose your program is completely pure. What then is the use of TLA+? It seems to me that the next-state formula

(x = x')

would be trivially satisfied by all behaviors. [][Next]_vars /\ Liveness can therefore be stripped from TLA+ specs, since they cannot express any interesting properties (i.e., they will either be vacuously true or false).

Thus, if you are writing a purely functional program, all of your TLA+ specs will be purely non-temporal, in the sense that they will take the form of first-order logic sentences with no temporal symbols like [] or the prime (apostrophe). But, I conclude, writing such a spec will still be helpful, for it will force you to reason about the properties of your (purely functional) program in the mathematical language of first-order logic. This is because it will force you to think of your program at a higher level than even Haskell.
To summarize: if your program is purely functional, then TLA+ forces you to write a non-temporal spec. But reducing your spec to a non-temporal formula still forces you to think and write in terms of math (i.e., first-order logic and set theory), which seems useful. Finally, if your functional program requires the use of mutable state anyway (which can be done in Haskell, despite being discouraged), then a TLA+ spec will be just as useful to you as it would have been had you written your program in a non-functional language like Java or C.
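As an illustration of what such a non-temporal spec might look like (my own sketch, not something from the book): a purely functional sorting program could be specified entirely in first-order logic and set theory, with no temporal operators at all. Assuming the standard Sequences and Naturals modules, and defining Perms(S) as the set of bijections on a finite set S:

Perms(S) == { f \in [S -> S] : \A y \in S : \E x \in S : f[x] = y }

IsSortingOf(t, s) ==
    /\ Len(t) = Len(s)
    /\ \A i, j \in 1..Len(t) : i < j => t[i] <= t[j]
    /\ \E p \in Perms(1..Len(s)) : \A i \in 1..Len(s) : t[i] = s[p[i]]

This says t is a sorted permutation of s — a statement about what the program computes, with no mention of how or in what order.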
So do people think this line of reasoning is sound?
I find your argument sound, though the subject of the posting is
misleading. As you make clear, the language you're using is
irrelevant. It's whether you're writing a simple functional program
or a reactive program--that is, implementing a system.
You should mention that the TLA+ tools were not optimized for
evaluating functions. While writing a mathematical specification of a
functional program may help you understand what you're doing, the
tools might not be useful in helping you to debug it. If the tools
can't find errors in your spec, you might be better off writing an
informal spec rather than a formal TLA+ spec.
A benefit of using TLA+ is that it teaches you to think
mathematically, which teaches you to write better informal specs. In
particular, it teaches you to think above the code level--meaning at a
level at which the language you're using is irrelevant. An example
is the TLA+ pretty-printer in my lecture "Programming Should be More
than Coding", which is at
https://www.youtube.com/watch?v=6QsTfL-uXd8
One other possible use of TLA+ for functional programs that you didn't
mention is in coming up with an algorithm to compute the function.
The complexity of an algorithm is how many steps it takes to execute.
The complexity of a function is the smallest number of steps it takes
to compute it. That's not a practically useful concept of complexity.
The complexity of sorting is N log N, but if I write bubble sort in a
functional language, the complexity of the function I've defined is
N log N, but no compiler is going to produce a program with that
complexity. I expect that there are many algorithms that are easier
to think of imperatively than functionally. It may be because
I'm used to programming imperatively.
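To make the contrast concrete (my sketch, not Leslie's): the imperative idea behind bubble sort — keep swapping adjacent out-of-order elements until none remain — is naturally described as a state machine of the form Init /\ [][Next]_vars. A nondeterministic exchange-sort version (bubble sort proper would additionally fix the order in which i is chosen):

VARIABLE seq
Init == seq = input    \* input is the sequence to be sorted
Next == \E i \in 1..(Len(seq) - 1) :
            /\ seq[i] > seq[i + 1]
            /\ seq' = [seq EXCEPT ![i] = seq[i + 1], ![i + 1] = seq[i]]
Spec == Init /\ [][Next]_seq

Each Next step performs one swap; the spec deadlocks (only stuttering steps remain enabled) exactly when seq is sorted. The number of steps a behavior takes is the algorithm's complexity, which is quadratic here even though the function being computed is the same one an N log N sort computes.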
Leslie
Leslie — I’m watching the video you posted. It’s great.
As you make clear, the language you’re using is
irrelevant. It’s whether you’re writing a simple functional program
or a reactive program—that is, implementing a system.
Why is a (purely) functional program not considered a system? Does “system” imply state transitioning?
One other possible use of TLA+ for functional programs that you didn’t
mention is in coming up with an algorithm to compute the function.
The complexity of an algorithm is how many steps it takes to execute.
The complexity of a function is the smallest number of steps it takes
to compute it. That’s not a practically useful concept of complexity.
The complexity of sorting is N log N, but if I write bubble sort in a
functional language, the complexity of the function I’ve defined is
N log N, but no compiler is going to produce a program with that
complexity. I expect that there are many algorithms that are easier
to think of imperatively than functionally. It may be because
I’m used to programming imperatively.
I’m assuming here your point is that (as it relates to TLA+), in a purely functional language, one can wrap a pure function interface over an imperative algorithm and, when doing so, use TLA+ to help specify various safety and liveness properties of the algorithm.
Why is a (purely) functional program not considered a system? Does
system imply state transitioning?
A program is something that gets executed on a computer. A program
can be described as a state machine--something specified by a TLA
formula Init /\ [][Next]_vars. A functional language is a particular
way of writing programs. A purely functional language, which can
implement only functional specifications, allows a lot of freedom to a
compiler--arguments can be evaluated in arbitrary order, it can
perform common subexpression elimination, etc. This means that
- It is probably inconvenient to describe the meaning of
such a program as a state machine.
- Different compilers can translate the same purely functional
program into code with very different computational complexity.
Those two observations are not unrelated.
one can wrap a pure function interface over an imperative
algorithm
An algorithm is a collection of behaviors. I presume by an imperative
algorithm you mean one described in an imperative language, or as
a TLA formula Init /\ [][Next]_vars. I don't know what it means to
wrap an interface over an algorithm. Perhaps you mean that the algorithm
can implement a functional property. In TLA, that means that the
specification obtained from Init /\ [][Next]_vars by hiding all
the variables except those that describe the communication between
the system and its environment--the specification written
\EE some_vars : Init /\ [][Next]_vars
implies the functional specification (the one that allows behaviors with
those two non-stuttering steps).
What all this boils down to is that functional programs (ones that
just compute a function) are a particular subclass of systems. Since
all systems can be described in TLA by temporal formulas, this
subclass can as well. However, that doesn't mean that it's a useful
way of describing that subclass.
I'm sorry if this is incoherent, I don't have time to try to make
it clearer.
Leslie
Crucially, a state is a set of variables whose values can change over time.
gcd :: Int -> Int -> Int
gcd x y = if x == y then x
          else if x > y then gcd (x - y) y
          else gcd x (y - x)
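A non-temporal TLA+ specification of what this program computes — as opposed to how it computes it — might look like the following sketch, for positive x and y:

Divides(d, n) == \E q \in Nat : n = d * q
GCD(x, y) == CHOOSE d \in 1..x :
                 /\ Divides(d, x) /\ Divides(d, y)
                 /\ \A e \in 1..x : (Divides(e, x) /\ Divides(e, y)) => e <= d

The Haskell definition above is one algorithm (subtraction-based Euclid) for computing this function; the spec says nothing about recursion, evaluation order, or number of steps.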