I've posted my slides from Friday's talk on the (static) pattern calculus:
http://ww2.cs.mu.oz.au/~bjpop/slides/static_pattern_calculus.pdf
Haskell code for my interpreter for the static pattern calculus is available on patch-tag:
http://patch-tag.com/r/bjpop/static-pattern-calculus
You can browse the code online, or download it with darcs:
darcs get http://patch-tag.com/r/bjpop/static-pattern-calculus
Cheers,
Bernie.
Here is the syntax of the dynamic pattern calculus (page 47):
t ::=
x (variable)
x^ (matchable symbol)
t t (application)
[theta] t -> t (case)
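The grammar above transcribes directly into a Haskell data type, something like the following sketch (the constructor names are just illustrative, not taken from my interpreter):

```haskell
type Name = String

-- Terms of the dynamic pattern calculus.
data Term
  = Var Name              -- x     (variable)
  | Matchable Name        -- x^    (matchable symbol)
  | App Term Term         -- t t   (application)
  | Case [Name] Term Term -- [theta] p -> s (case, with binders theta)
  deriving (Eq, Show)
```

Note that the pattern part of a case is just another Term, which is what makes "patterns" computable.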
There are several notable aspects of the language:
- Patterns and terms are identified (they now belong to the same syntactic class).
- Data constructors are gone.
- Matchable symbols are distinguished from variables (matchable symbols have a "hat" above them, written here as x^).
- The left hand side of a case contains a term, which effectively means that "patterns" can be computed.
- Binding occurrences of variables are indicated by [theta], which is attached to case terms.
Two things in the language are entirely conventional:
1) Variables behave as they always do; they undergo substitution during reduction. What makes them slightly more interesting is that they can now appear inside the "pattern" part of a case.
2) Term application hasn't changed.
The most unusual aspect of the language is the matchable symbols. In the static calculus the matchable symbols behave in a straightforward way, more or less as pattern variables do in Haskell. The only real difference between the static pattern calculus and Haskell is that the former allows the head of a compound pattern to be a matchable symbol, whereas the latter requires it to be a constructor. In the dynamic pattern calculus matchable symbols are first-class terms, and as such they can be involved in reductions.
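To see what matching with a matchable at the head of a compound pattern looks like, here is a minimal self-contained sketch (my own simplification, with illustrative names; in particular I elide variables and cases from the term type):

```haskell
type Name = String

data Term
  = Match Name      -- matchable symbol x^
  | Atom Name       -- an inert atom, standing in for other terms
  | App Term Term   -- application
  deriving (Eq, Show)

-- A matchable symbol matches anything and binds it. A compound
-- pattern matches a compound term componentwise -- crucially, the
-- head of the pattern may itself be a matchable, which is exactly
-- what Haskell patterns disallow.
match :: Term -> Term -> Maybe [(Name, Term)]
match (Match x) t         = Just [(x, t)]
match (App p q) (App t u) = (++) <$> match p t <*> match q u
match p t                 = if p == t then Just [] else Nothing
```

For example, the pattern f^ x^ matches any compound, binding its head and argument: match (App (Match "f") (Match "x")) (App (Atom "cons") (Atom "one")) yields Just [("f", Atom "cons"), ("x", Atom "one")].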
On page 46 Barry makes several remarks about the difference between variables and matchable symbols. He points out that matchable symbols really have two roles to play: 1) matching and 2) binding. The big issue to address is that reduction in a pattern can eliminate occurrences of matchable symbols. This is a potential problem if the eliminated symbols are also treated as binding occurrences of variables: the scope of a variable/symbol would become a dynamic property of the program, rather than a static one. He says "That is, bindings must be static, even though patterns are dynamic". Unfortunately he doesn't say /why/ bindings must be static (beyond the problem of dynamic scope). Of course dynamic scope is generally frowned upon today; nevertheless I think there is a missing justification that should have been given. To make bindings static he requires that binding occurrences of variables be annotated by [theta] on case terms.
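The payoff of the [theta] annotation is that scope can be read off the syntax alone, regardless of how the pattern later reduces. A free-variables function makes this concrete (again a self-contained sketch with my own names):

```haskell
import Data.List (nub, (\\))

type Name = String

data Term
  = Var Name              -- x
  | Matchable Name        -- x^
  | App Term Term         -- t t
  | Case [Name] Term Term -- [theta] p -> s

-- Because the binders are listed in theta on the case itself,
-- computing free variables is purely syntactic: theta is removed
-- from the free variables of both the pattern and the body, even
-- if reduction later eliminates matchables from the pattern.
freeVars :: Term -> [Name]
freeVars (Var x)          = [x]
freeVars (Matchable _)    = []
freeVars (App t u)        = nub (freeVars t ++ freeVars u)
freeVars (Case theta p s) = nub (freeVars p ++ freeVars s) \\ theta
```

For instance, in [x] x^ -> x the variable x is bound (no free variables), and that fact does not depend on the pattern x^ surviving reduction.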
Curiously, matchable symbols that do not appear in binding positions act just like data constructors, so they play that role too (and data constructors are no longer needed as a separate entity).
Cheers,
Bernie.