I'm wondering if the rules described below can be found in existing
literature. In what follows we use the "=" sign to indicate that both
sides of the equation can be beta-reduced to one and the same term.
The classic definition of beta reduction is well known:
(\x. M) A -> M[x := A]. (*)
Here substitution is determined by the following four rules:
i) x[x := A] = A;
ii) y[x := A] = y (for y distinct from x);
iii) (\y. M)[x := A] = \y. M[x := A] (with the usual proviso that y is distinct from x and not free in A);
iv) (M N)[x := A] = (M[x := A] N[x := A]).
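To make the rules concrete, here is a minimal Haskell sketch (an illustration only; the datatype and function names are chosen freely, not taken from any source):

data Term = Var String          -- x
          | Lam String Term     -- \x. M
          | App Term Term       -- (M N)
  deriving (Show, Eq)

-- subst x a m computes M[x := A], following (i)-(iv) literally.
-- No bound variables are renamed, so rule (iii) is applied here
-- without its proviso (y distinct from x and not free in A).
subst :: String -> Term -> Term -> Term
subst x a (Var y)
  | y == x    = a                            -- rule (i)
  | otherwise = Var y                        -- rule (ii)
subst x a (Lam y m) = Lam y (subst x a m)    -- rule (iii)
subst x a (App m n) =
  App (subst x a m) (subst x a n)            -- rule (iv)

-- The beta step (*): (\x. M) A -> M[x := A]
beta :: Term -> Maybe Term
beta (App (Lam x m) a) = Just (subst x a m)
beta _                 = Nothing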
One can observe that
(\x. x) A = A, (1)
and
(\x. y) A = y. (2)
Further, due to (*)
(\x. \y. M) A -> (\y. M)[x := A] =
\y. M[x := A] <- \y. ((\x. M) A),
hence
(\x. \y. M) A = \y. ((\x. M) A). (3)
Finally, using (*) one can write
(\x. M N) A -> (M N)[x := A] =
(M[x := A] N[x := A]) <- ((\x. M) A) ((\x. N) A),
hence
(\x. M N) A = ((\x. M) A) ((\x. N) A). (4)
Using rules (1)-(4) could simplify the construction of a machine
for automated evaluation of lambda expressions: if each of the above
rules is applied within one clock cycle, then after every cycle the
memory will contain a valid lambda expression.
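For what it is worth, one such machine cycle might look as follows in the same Haskell sketch. The leftmost-outermost search order is an assumption here (the rules themselves do not fix a strategy), and note that every intermediate result is an ordinary lambda term, which is the property mentioned above:

-- One "clock cycle": apply the first of rules (1)-(4) that matches,
-- searching leftmost-outermost.  Reuses Term from the sketch above.
step :: Term -> Maybe Term
step (App (Lam x (Var y)) a)
  | y == x    = Just a                                  -- rule (1)
  | otherwise = Just (Var y)                            -- rule (2)
step (App (Lam x (Lam y m)) a) =
  Just (Lam y (App (Lam x m) a))                        -- rule (3)
step (App (Lam x (App m n)) a) =
  Just (App (App (Lam x m) a) (App (Lam x n) a))        -- rule (4)
step (App m n) = case step m of                         -- descend left,
  Just m' -> Just (App m' n)                            -- then right
  Nothing -> App m <$> step n
step (Lam x m) = Lam x <$> step m                       -- descend under \x
step (Var _)   = Nothing                                -- nothing to do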
I will be very grateful for any information about where this or a
similar approach to reduction of lambda expressions was considered in
the literature.
There was also a spate of programming languages in the 1960's and 70's
time frame with names like Planner, Conniver, and Schemer with
interesting twists. The names of routines (pieces of code) were
descriptions of the kinds of problems that the code could solve - not
name symbols. The "calls" on these pieces of code were descriptions of
the types of (sub)problems a piece of code wanted solved. If you think
about it for a while, this is a generalization of syntax-determined
substitution rules and may (or may not) be of interest to you.
-- Jeff Barnett
It appears that this set of rules does not behave well when the
classic "leftmost outermost" reduction strategy is used.
Let us apply the rules (1)-(4) to the following lambda expression:
((\a. \b. a) a) b -(rule 3)->
(\b. ((\a. a) a)) b -(rule 4)->
((\b. \a. a) b) ((\b. a) b) -(rule 3)->
(\a. ((\b. a) b)) ((\b. a) b) -(rule 4)->
((\a. \b. a) ((\b. a) b)) ((\a. b) ((\b. a) b)) -(rule 3)->
...
It is easy to see that the expression grows without bound under
further application of the rules, and the normal form is never
reached, even though ordinary beta reduction gives
((\a. \b. a) a) b -> (\b. a) b -> a in just two steps. This is
because the third rule creates "parasitic" redexes.
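The growth is easy to observe mechanically with the step function sketched earlier (again only an illustration; the term sizes are a measure chosen here for convenience):

-- Term size, to watch the growth.
size :: Term -> Int
size (Var _)   = 1
size (Lam _ m) = 1 + size m
size (App m n) = 1 + size m + size n

-- ((\a. \b. a) a) b, whose beta-normal form is simply a.
example :: Term
example = App (App (Lam "a" (Lam "b" (Var "a"))) (Var "a")) (Var "b")

-- Prints the sizes of successive terms: rule (3) preserves size,
-- rule (4) increases it, and the sequence never stops growing.
main :: IO ()
main = mapM_ (print . size) (take 10 steps)
  where steps = iterateMaybe example
        iterateMaybe t = t : maybe [] iterateMaybe (step t)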
Perhaps there exists another reduction strategy that reaches the
normal form of every normalizable expression and is "compatible" with
these rules. I would be very grateful for any ideas on how such a
strategy could be discovered.
These rules were discussed in the paper "A two-level approach to logic
plus functional programming integration" by M. Bellia et al.