
Looking for references on a set of expression transformation rules


Anton Salikhmetov

Mar 4, 2008, 10:30:02 AM
Hello,

I'm wondering if the rules described below can be found in existing
literature. In what follows we use the "=" sign to indicate that both
sides of the equation can be beta-reduced to one and the same term.

The classic definition of beta reduction is well known:

(\x. M A) -> M[x := A]. (*)

Here (\x. M A) denotes the abstraction \x. M applied to the argument
A, and the substitution M[x := A] is determined by the following four
rules:

i) x[x := A] = A;
ii) y[x := A] = y (for a variable y distinct from x);
iii) (\y. M)[x := A] = \y. M[x := A];
iv) (M N)[x := A] = (M[x := A] N[x := A]).
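
For concreteness, the definition above can be transcribed directly
into code. The following Haskell sketch (the Term type and the names
subst and betaStep are invented for this illustration, not part of
the definition) follows rules (i)-(iv) and (*) literally, with no
renaming to avoid variable capture:

-- A literal transcription of rules (i)-(iv) and the beta step (*).
data Term
  = Var String        -- a variable x
  | Lam String Term   -- an abstraction \x. M
  | App Term Term     -- an application (M N)
  deriving (Eq, Show)

-- The substitution M[x := A].
subst :: String -> Term -> Term -> Term
subst x a (Var y)
  | y == x    = a                                     -- (i)
  | otherwise = Var y                                 -- (ii)
subst x a (Lam y m) = Lam y (subst x a m)             -- (iii), no renaming
subst x a (App m n) = App (subst x a m) (subst x a n) -- (iv)

-- The beta step (*): (\x. M A) -> M[x := A].
betaStep :: Term -> Maybe Term
betaStep (App (Lam x m) a) = Just (subst x a m)
betaStep _                 = Nothing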

One can observe that

(\x. x A) = A, (1)

and

(\x. y A) = y. (2)

Further, due to (*)

(\x. \y. M A) -> (\y. M)[x := A] =
\y. M[x := A] <- \y. (\x. M A),

hence

(\x. \y. M A) = \y. (\x. M A). (3)

Finally, using (*) one can write

(\x. (M N) A) -> (M N)[x := A] =
(M[x := A] N[x := A]) <- ((\x. M A) (\x. N A)),

hence

(\x. (M N) A) = ((\x. M A) (\x. N A)). (4)

Using the rules (1)-(4) could simplify the construction of a machine
for automated evaluation of lambda expressions: if each of the above
rules is applied within one clock cycle, then after every cycle the
memory will contain a valid lambda expression.
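
To make the one-rule-per-cycle idea concrete, here is a sketch of
such a stepper in Haskell (it reuses the Term constructors Var, Lam
and App from the sketch above; the names applyRule, step and cycles
are invented for this illustration). Each cycle rewrites the
leftmost-outermost position at which one of the rules (1)-(4)
applies, so after every cycle the term held in memory is a complete,
valid lambda expression:

-- Try to apply one of the rules (1)-(4) at the root of a term.
applyRule :: Term -> Maybe Term
applyRule (App (Lam x (Var y)) a)
  | y == x    = Just a                              -- (1)  (\x. x A) = A
  | otherwise = Just (Var y)                        -- (2)  (\x. y A) = y
applyRule (App (Lam x (Lam y m)) a) =
  Just (Lam y (App (Lam x m) a))                    -- (3)
applyRule (App (Lam x (App m n)) a) =
  Just (App (App (Lam x m) a) (App (Lam x n) a))    -- (4)
applyRule _ = Nothing

-- One "clock cycle": rewrite the leftmost-outermost position
-- at which one of the rules applies.
step :: Term -> Maybe Term
step t = case applyRule t of
  Just t' -> Just t'
  Nothing -> case t of
    Var _   -> Nothing
    Lam x m -> Lam x <$> step m
    App m n -> case step m of
      Just m' -> Just (App m' n)
      Nothing -> App m <$> step n

-- The sequence of memory states, one per cycle.
cycles :: Term -> [Term]
cycles t = t : maybe [] cycles (step t)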

I will be very grateful for any information about where this or a
similar approach to reduction of lambda expressions was considered in
the literature.

Jeff Barnett

Mar 7, 2008, 12:30:01 PM
As you may (or may not) know, the programming language Lisp was invented
by the then MIT and now Stanford logician John McCarthy in the late
1950's. (Do not confuse the original Lisp concepts with modern Common
Lisp, Scheme, etc.) The semantic base for the original Lisp was Church's
lambda calculus and the rules of evaluation were meant to mimic lambda
substitution. N.B. the substitution rules could do the equivalent of
generating new operators. Because of this, it was possible to compactly
and perspicuously represent some extremely complicated calculations. I
would suggest searching around in the early Lisp literature to see if
you can find some papers to your taste.

There was also a spate of programming languages in the 1960's and 70's
time frame with names like Planner, Conniver, and Schemer with
interesting twists. The names of routines (pieces of code) were
descriptions of the kinds of problems that the code could solve - not
name symbols. The "calls" on these pieces of code were descriptions of
the types of (sub)problems a piece of code wanted solved. If you think
about it for a while, this is a generalization of syntax-determined
substitution rules and may (or may not) be of interest to you.

-- Jeff Barnett

Anton Salikhmetov

Mar 7, 2008, 12:30:18 PM

It appears that this set of rules does not work well when the classic
"leftmost outermost" reduction strategy is used.

Let us apply the rules (1)-(4) to the following lambda expression:

(\a. \b. a a) b -(rule 3)->

\b. (\a. a a) b -(rule 4)->

(\b. \a. a b) (\b. a b) -(rule 3)->

\a. (\b. a b) (\b. a b) -(rule 4)->

\a. \b. a (\b. a b) (\a. b (\b. a b)) -(rule 3)->

...

It is easy to see that the expression grows indefinitely under
further application of the rules, and the normal form is never
reached. This is because the third rule creates "parasitic" redexes.
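
Continuing the sketch from the first message (the Term type and the
step and cycles functions defined there; size, example and main below
are new names for this illustration), the growth is easy to observe
by printing the term sizes for the first few cycles:

-- Number of nodes in a term.
size :: Term -> Int
size (Var _)   = 1
size (Lam _ m) = 1 + size m
size (App m n) = 1 + size m + size n

-- The starting expression (\a. \b. a a) b, i.e. ((\a. \b. a) a) b.
example :: Term
example = App (App (Lam "a" (Lam "b" (Var "a"))) (Var "a")) (Var "b")

-- Print the sizes of the first dozen memory states.
main :: IO ()
main = mapM_ (print . size) (take 12 (cycles example))

The printed sizes start 7, 7, 10, 10, 16, 16, 25, ... and keep
growing, while ordinary beta reduction sends this expression to the
free variable a in two steps.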

Perhaps there exists another reduction strategy that is "compatible"
with these rules and yields the normal form of any expression that
has one. I would be very grateful for any ideas on how such a
strategy could be discovered.

Anton Salikhmetov

Mar 10, 2008, 8:33:20 PM
On Mar 4, 6:30 pm, Anton Salikhmetov <salikhme...@gmail.com> wrote:

These rules were discussed in the paper "A two-level approach to logic
plus functional programming integration" by M. Bellia et al.
