Semantics of typed lambda calculus


Russell Wallace

Feb 19, 2011, 8:56:42 PM
to one-...@googlegroups.com
Trying to make more progress on figuring out what this actually is, consider

(^[P]:P(2,2) & P(true,true))(=)

Here, we are passing the function template = to a function which
instantiates it twice, once for integers, once for booleans. In other
words, each occurrence of the symbol P has a different type.
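
For concreteness, OCaml draws exactly this line (a sketch of mine, not
anything from the thread's system): a lambda-bound variable is
monomorphic, so the direct transcription is rejected, while a let-bound
= gets a polymorphic scheme that each occurrence instantiates at its
own type.

(* Rejected by OCaml: p is lambda-bound, so both occurrences of p must
   share a single monomorphic type.
   let bad = (fun p -> p 2 2 && p true true) (=) *)

(* Accepted: p is let-bound, so its scheme 'a -> 'a -> bool is
   instantiated afresh at each occurrence -- once at int, once at bool. *)
let ok = let p = (=) in p 2 2 && p true true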

It seems to me the only way this can work is if every occurrence of
every symbol is distinct. In other words, typed lambda calculus has
pure reference semantics.

From an implementation viewpoint, that means absolutely no structure
sharing. It also means we can't compare a term with a known constant
symbol like & just with a pointer comparison; we have to look up the
contained symbol every time.

On the other hand, maybe we can make lemonade with a cache-friendly
representation that has high locality, perhaps by writing a given
formula entirely in a small contiguous block of memory.
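
Something like the following, say (all names hypothetical, just to pin
down the idea):

(* No hash-consing: every occurrence of every symbol is its own node. *)
type term =
  | Const of string
  | Var of string
  | App of term * term list

(* Without sharing there is no pointer identity to rely on: testing for
   a known constant like & means reading the contained symbol out of the
   node every time. *)
let is_conjunction = function
  | App (Const "&", _) -> true
  | _ -> false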

Russell Wallace

Feb 19, 2011, 9:41:19 PM
to one-...@googlegroups.com
To explore this a little further, given that we have pure reference
semantics, what exactly is the definition of syntactic equality of
terms?

Consider (^[X]:X)(2) and (^[X]:X)(true); are the two occurrences of
the identity function syntactically equal to each other? We want to be
able to say yes. The fact that the different occurrences of X are
distinct doesn't cause a problem, because lambda calculus uses the
notion of equality modulo renaming of bound variables.

But they have different types. Is this a problem?

To answer this, it seems to me we can fall back on our idea that the
foundation is untyped lambda calculus, with type checking being
something we do to make sure we aren't making any unsound inferences.
So from that perspective I think we can say the two occurrences of the
identity function are indeed equal -- the same function, used in two
different contexts, each of them valid. We can still use the same
notion of syntactic equality that we used in untyped lambda calculus.
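
One concrete way to keep that untyped notion of equality (my sketch,
nothing settled): represent terms with de Bruijn indices, under which
equality modulo renaming of bound variables is just structural equality.

type db =
  | Idx of int          (* bound variable, as distance to its binder *)
  | Cst of string
  | Ap of db * db
  | Lm of db

(* Alpha-equivalence collapses to structural comparison. *)
let rec alpha_eq a b =
  match a, b with
  | Idx i, Idx j -> i = j
  | Cst c, Cst d -> c = d
  | Ap (f, x), Ap (g, y) -> alpha_eq f g && alpha_eq x y
  | Lm t, Lm u -> alpha_eq t u
  | _, _ -> false

(* ^[X]:X is Lm (Idx 0) in both (^[X]:X)(2) and (^[X]:X)(true),
   so the two occurrences compare equal. *)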

Abram Demski

Feb 19, 2011, 11:39:59 PM
to one-...@googlegroups.com
Russell,

It seems to me that this is just Curry semantics vs. Church semantics again; Curry-style seems more appropriate for untyped logics, whereas Church-style is more appropriate for lambda-cube-style typed logics.

http://en.wikipedia.org/wiki/Simply_typed_lambda_calculus#Semantics

--Abram






--
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic

Russell Wallace

Feb 19, 2011, 11:54:39 PM
to one-...@googlegroups.com
Yes, as far as I can tell Church semantics is flatly useless since it
can't even represent =, so Curry semantics would hopefully be the way
to go. But I still don't understand what Curry semantics actually is,
and I've just realized the version I gave in the last couple of
e-mails appears to be wrong. Consider:

^X:f(g(X),h(X))

Suppose g takes a parameter of type integer and h takes a parameter
of type Boolean; then we need to reject this as ill-typed, right?

That means the type variable attached to the first occurrence of
X must be the same type variable as the one attached to the second
and third occurrences, right?
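
For what it's worth, OCaml behaves exactly this way (g and h here are
hypothetical stand-ins with clashing argument types):

let g (n : int) = n + 1
let h (b : bool) = not b

(* Rejected: the lambda-bound x has one type variable shared by every
   occurrence, and int and bool will not unify.
   let bad = fun x -> (g x, h x) *)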

But now consider my earlier example

(^[P]:P(2,2) & P(true,true))(=)

Now this can't work.

Any idea how to reconcile this?

Russell Wallace

Feb 20, 2011, 12:14:44 AM
to one-...@googlegroups.com
On Sun, Feb 20, 2011 at 4:54 AM, Russell Wallace
<russell...@gmail.com> wrote:
> But now consider my earlier example
>
> (^[P]:P(2,2) & P(true,true))(=)
>
> Now this can't work.

Ah! HOL Light says:

# `(\x.f (x (1=1) (1=1)) (x 2 2) )(=)`;;
Exception: Failure "typechecking error (initial type assignment)".

At least in the short to medium term, it's reasonable to take HOL
Light as defining the requirements, so that sets limits on what we
need to do. That's looking more promising!

Abram Demski

Feb 20, 2011, 12:15:32 AM
to one-...@googlegroups.com
Russell,

Yeah, I think you are right (unless of course Boolean is conveniently considered a subset of Int). Of course, we still have the example (2=2)&(true=true) to make the same point you made in your original post... at least for Church semantics.

For Curry semantics, it does seem like we'd want to say that the two "=" signs are equal... it's just that (^[P]:P(2,2) & P(true,true))(=) is not simply typed! (On the other hand, it's hopefully logically equivalent to the simply typed expression "true"... it's important to realise that well-typedness is not preserved even just by beta-equivalence, much less general logical equivalence.)

Russell Wallace

Feb 20, 2011, 12:24:57 AM
to one-...@googlegroups.com
On Sun, Feb 20, 2011 at 5:15 AM, Abram Demski <abram...@gmail.com> wrote:
> Russell,
>
> Yeah, I think you are right (unless of course Boolean is conveniently
> considered a subset of Int). Of course, we still have the example
> (2=2)&(true=true) to make the same point you made in your original post...
> at least for Church semantics.

Right, whereas for Curry semantics, it does work:

# `(2 = 2) /\ ((1=1) = (1=1))`;;
val it : term = `2 = 2 /\ (1 = 1 <=> 1 = 1)`

(at least, I assume HOL Light is using Curry semantics)

Okay, so what's the difference? It seems to me the difference is:

1. Every occurrence of a predefined constant symbol like = gets a
different set of type variables.

2. Every occurrence of a bound variable gets the same set of type variables.

3. What about user-defined constant symbols?

# `(eekwalz 2 2) /\ (eekwalz (1=1) (1=1))`;;
Exception: Failure "typechecking error (initial type assignment)".

Bleagh. So predefined template functions like = are special-case hacks.
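
If I'm reading the standard Hindley-Milner account correctly, points 1
and 2 are the generalize/instantiate split: a defined constant carries
a type *scheme* that is instantiated with fresh type variables at each
occurrence, while a lambda-bound variable carries one plain type. A
rough OCaml sketch of the instantiation step (types and names made up
for illustration):

type ty = TVar of int | TArrow of ty * ty | TConst of string
type scheme = { quantified : int list; body : ty }

let counter = ref 0
let fresh () = incr counter; TVar !counter

(* Point 1: each occurrence of a constant like = instantiates its
   scheme with a fresh set of type variables. *)
let instantiate { quantified; body } =
  let subst = List.map (fun q -> (q, fresh ())) quantified in
  let rec go = function
    | TVar i -> (try List.assoc i subst with Not_found -> TVar i)
    | TArrow (a, b) -> TArrow (go a, go b)
    | TConst c -> TConst c
  in
  go body

(* Point 2: a bound variable's scheme quantifies nothing, so every
   occurrence shares the same TVar. *)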

But this is the system Intel used for verifying floating-point units.
And what are the alternatives? If we try to move up the typing scale,
isn't the next stop on the way System F, where type inference is
undecidable?

Conjecture: the best way forward for the moment is to live with the
special-case hacks, proceed on this basis, and reevaluate once we've
got more experience using the resulting system. If anyone has a
better suggestion, it's not too late!

Abram Demski

Feb 20, 2011, 12:25:24 AM
to one-...@googlegroups.com
Russell,

Are you comfortable with extending the systems given in Hindley & Seldin chapters 11 and 12 (11 for combinators, 12 for lambda) to support HOL Light's axioms? It looked straightforward to me, but of course I may be missing critical details (since I didn't sit down and do it).

--Abram



Russell Wallace

Feb 20, 2011, 12:37:11 AM
to one-...@googlegroups.com
If you'd asked me a little while ago, I would've said I don't know
yet; I was still trying to figure out what they actually mean.

Now, assuming Hindley & Seldin Chapter 12 plus the special-case hacks
for predefined template functions is the system used by HOL Light, and
that its semantics are those I gave in my last e-mail, I think my
answer is tentatively yes, pending figuring out how to translate the
axioms into effective automatic inference algorithms.

Abram Demski

Feb 20, 2011, 12:45:46 AM
to one-...@googlegroups.com
Russell,

Would you be using chapter 11 or chapter 12 then?

My sense is that what this gets us is just slightly more than HOL Light, in that the resulting system has a bit of untyped logic surrounding the typed HOL Light logic.

--Abram

Russell Wallace

Feb 20, 2011, 12:48:31 AM
to one-...@googlegroups.com
On Sun, Feb 20, 2011 at 5:45 AM, Abram Demski <abram...@gmail.com> wrote:
> Russell,
>
> Would you be using chapter 11 or chapter 12 then?

Chapter 12. There are some hefty disadvantages to using combinatory
logic. I investigated using it anyway, in the hope that it might lead
to a system that's fundamentally simple, but since that's not going to
happen, there's no point paying the price for it.

> My sense is that what this gets us is just slightly more than HOL Light, in
> that the resulting system has a bit of untyped logic surrounding the typed
> HOL Light logic.

That wasn't my impression; I don't suppose you can nail down any specifics?

Abram Demski

Feb 20, 2011, 1:12:15 AM
to one-...@googlegroups.com
Russell,

A small example is in note 11.13 on page 125, which notes that one thing the Curry-style typing system has over the Church-style one of the previous chapter is that the system can answer questions of the form "if exp1 had type t1, what type would exp2 have?" That is, we can prove conditional well-typedness claims.

More exciting stuff happens in section 11k, where an untyped notion of equality is used.

I'd have to re-read 11 and read 12 to make very insightful comments... the relevant stuff just isn't loaded into my short-term memory at the moment. (12 appears to depend on the concepts from 11.)

--Abram



Russell Wallace

Feb 20, 2011, 1:44:42 AM
to one-...@googlegroups.com
On Sun, Feb 20, 2011 at 6:12 AM, Abram Demski <abram...@gmail.com> wrote:
> Russell,
>
> A small example is in note 11.13 on page 125, which notes that one thing the
> Curry-style typing system has over the Church-style one of the previous
> chapter is that the system can answer questions of the form "if exp1 had
> type t1, what type would exp2 have?" That is, we can prove conditional
> well-typedness claims.

Sure, but HOL Light can do that too, can't it? Put type annotations on
some components of a term and see what it infers for the rest? In
other words, doesn't HOL Light use Curry semantics?

> More exciting stuff happens in section 11k, where an untyped notion of
> equality is used.

Hmm, but that rule, as it says, is undecidable, and therefore so is
type inference in a system where that rule is used.

On the other hand, that rule would seem to be (hopefully/fortunately!)
unnecessary in lambda, because lambda has a stronger notion of
equality by reduction than CL has.

> I'd have to re-read 11 and read 12 to make very insightful comments... the
> relevant stuff just isn't loaded into my short-term memory at the moment.

*nods* I know how that goes.

Abram Demski

Feb 20, 2011, 9:24:45 AM
to one-...@googlegroups.com
On Sun, Feb 20, 2011 at 1:44 AM, Russell Wallace <russell...@gmail.com> wrote:
> On Sun, Feb 20, 2011 at 6:12 AM, Abram Demski <abram...@gmail.com> wrote:
> > Russell,
> >
> > A small example is in note 11.13 on page 125, which notes that one thing the
> > Curry-style typing system has over the Church-style one of the previous
> > chapter is that the system can answer questions of the form "if exp1 had
> > type t1, what type would exp2 have?" That is, we can prove conditional
> > well-typedness claims.
>
> Sure, but HOL Light can do that too, can't it? Put type annotations on
> some components of a term and see what it infers for the rest? In
> other words, doesn't HOL Light use Curry semantics?

What I mean is, the systems in 11/12 are reasoning explicitly about types, whereas HOL Light is not... at least not in the official set of inference rules! If HOL Light can do that, then OK, but it's not specified in the stated set of axioms and inference rules. (From those, HOL Light could be Curry-typed or Church-typed; it's just a matter of semantics!)

What this property tells us is that the system will play fairly well with new axioms about types; its typing knowledge is flexible.

> > More exciting stuff happens in section 11k, where an untyped notion of
> > equality is used.
>
> Hmm, but that rule, as it says, is undecidable, and therefore so is
> type inference in a system where that rule is used.

Which is exactly what you get in any reasonable untyped system! You can't perform a full check to see if a statement is meaningful; sometimes you've got to either loop forever on meaningless input (at which point the user says "what went wrong?") or give up and ask the user to provide the proof of well-typedness.

What the rule gets us is the ability to prove that some very reasonable terms are "safe" despite not being simply typeable, because if we reduce them we eventually get simply typeable terms. In other words, we extend the notion of well-typedness to force it to be preserved under some notion of equivalence, such as beta-convertibility.
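
Concretely, a single beta step already turns the problem term into a simply typeable one:

(^[P]:P(2,2) & P(true,true))(=)  -->  (2=2) & (true=true)

so the extended notion of well-typedness accepts the original term.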

It sounds like you're leaning towards wanting a typed logic? So why not just use HOL Light?
 

> On the other hand, that rule would seem to be (hopefully/fortunately!)
> unnecessary in lambda, because lambda has a stronger notion of
> equality by reduction than CL has.

I don't think so... the rule is not *necessary* in either case (since we get standard polymorphic typing without it), but it seems *useful* in either case; for example, it makes your term (^[P]:P(2,2) & P(true,true))(=) typeable as boolean!

> > I'd have to re-read 11 and read 12 to make very insightful comments... the
> > relevant stuff just isn't loaded into my short-term memory at the moment.
>
> *nods* I know how that goes.


Abram Demski

Feb 20, 2011, 9:31:27 AM
to one-...@googlegroups.com
Russell,

To give a little more intuition... the book proves a theorem in chapter 11 which says "terms never get less safe as execution proceeds"; i.e., something never loses its type as we beta-reduce it. The thing is, terms can get more safe as we beta-reduce; i.e., a term can become typeable thanks to reduction. This seems silly because we know it was really safe to begin with. Therefore, it seems reasonable to introduce the rule to force typing to be closed under beta-equivalence.

Which is why we can get slightly more than HOL Light.

"Slightly" comes from the theorems which are given after the rule, which show that the equality rule only weakly interacts with the other rules; we can always move the use of that rule to the end of the proof. This suggests that the resulting system is just a bit stronger.

--Abram

Russell Wallace

Feb 20, 2011, 10:51:34 AM
to one-...@googlegroups.com
On Sun, Feb 20, 2011 at 2:24 PM, Abram Demski <abram...@gmail.com> wrote:
> What I mean is, the systems in 11/12 are reasoning explicitly about types,
> whereas HOL Light is not... at least not in the official set of inference
> rules! If HOL Light can do that, then ok, but it's not specified in the
> stated set of axioms and inference rules.

Well hang on, let's distinguish between three kinds/levels of reasoning:

1. term inference, which is (at least potentially) both formal
(automatable) and general
2. type inference, which is formal, and (at least in the systems we
are talking about here such as Chapter 12 and HOL Light) specialized,
i.e. the HM type inference algorithm
3. meta-inference, which is informal, strictly the province of humans
(at least for the near future), but which can be even more general
than term inference

Chapter 12 does indeed contain explicit reasoning about types, but it
is a combination of #2 (which HOL Light also does: type inference) and
#3, which is of course not within the stated axioms and inference rules.

> (From those, HOL Light could be
> Curry-typed or Church-typed; it's just a matter of semantics!)

It seems to me that once you're doing type inference, you're doing
Church typing?

>> Hmm, but that rule, as it says, is undecidable, and therefore so is
>> type inference in a system where that rule is used.
>
> Which is exactly what you get in any reasonable untyped system! You can't
> perform a full check to see if a statement is meaningful; sometimes you've
> got to either loop forever on meaningless input (at which point the user
> says "what went wrong?") or give up and ask the user to provide the proof of
> well-typedness.
>
> What the rule gets us is the ability to prove that some very reasonable
> terms are "safe" despite not being simply typeable, because if we reduce
> them we eventually get simply typeable terms. In other words, we extend the
> notion of well-typedness to force it to be preserved under some notion of
> equivalence, such as beta-convertibility.

All of what you say here is true, but it's not feasible to do
everything in one shot; we need to do things in stages, with a
concrete target for each stage...

> It sounds like you're leaning towards wanting a typed logic? So why not just
> use HOL Light?

HOL Light is a checker/assistant for manual proof. I'm aiming for a
fully automated system.

To put version numbers on it, here's my current roadmap for Project Ayane:

3.0 - a fully automated prover for typed higher-order logic, the basic
idea being to do in principle everything HOL Light can currently do,
except automatically instead of manually; performance will be
mediocre, due to the difficulty imposed by the size of the search
space

4.0 - addition of learned heuristics for inference control, hopefully
greatly improving performance, as well as perhaps advancing the
state of the art in heuristic learning in domains that are not (as
heretofore) simple vectors

5.0 - extension of the resulting system to untyped logic with all the
benefits previously discussed

You're reminding me of the reasons we want version 5.0, and I agree
completely, but having said all that, what I need to do right _now_ is
concentrate on getting the design for version 3.0 nailed down so I can
actually implement it.

Abram Demski

Feb 20, 2011, 8:50:25 PM
to one-...@googlegroups.com
Russell,

OK! I did not realise that you intended to put off UHOL for so long (but it makes decent sense).

Agreed, the level at which the inference goes on is critical... it's certainly possible to use the system in the chapter as just a type inference system (that's exactly what it is).

The way I see it, HOL Light's typing system can be gotten by adding rules about the basic terms of HOL Light; indeed, all that needs to be added is a rule for polymorphic "=" and a rule for the choice operator, right? Oh, also, a type for theorems (that is, sequent statements) and types for finite sets of (boolean) expressions to go into sequents.
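
(For reference -- if I have HOL Light's setup right, its two primitive polymorphic constants are equality and Hilbert choice, with types roughly

= : A -> A -> bool
@ : (A -> bool) -> A

where A is a type variable.)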

The axioms and natural deduction rules of HOL Light are then added (with their appropriate type restrictions).

At this point, what's created is a system in which tactics for type inference and tactics for general inference are on an equal footing; the natural deduction rules for each exist at the same level. (I am ignorant; I don't know if HOL Light is like this.) There is, however, a strict division between typing statements (whose top-level operator is the type-assignment operator) and theorems (whose top-level operator is the sequent turnstile).

I guess that sort of system is your immediate goal, then implementing something like Huet's algorithm?

When I say that this sort of system gives us just a bit of untyped reasoning, what I have in mind is that the way in which the typing rules and the inference rules are "on the same level" tempts me to think about extensions which try to merge them further. The addition of the inference rule from section 11k would go a little way in this direction, particularly if we choose HOL Light's "=" as the notion of equality. However, even this is rather limited in its consequences. The real question -- the question for your stage 5 -- is how to safely treat typing statements as terms themselves, so that they can also be assigned types, abstracted over, and reasoned about.

Make sense? Am I missing anything?

--Abram



Russell Wallace

Feb 21, 2011, 3:42:09 AM
to one-...@googlegroups.com
Yes, that's a good summary of the issues. (In TPTP syntax, some other
functions like arithmetic are polymorphic like =, but that's fine; if
anything, it would be disappointing to have to implement that hack
just for the sake of one function :-))

Abram Demski

Feb 21, 2011, 1:05:23 PM
to one-...@googlegroups.com
Russell,

Browsing chapter 12, it actually looks like lambda has *worse* properties than combinatory logic when it comes to equivalence! That is, there is *more* reason to consider adding the rule =*. Whereas combinator reduction is guaranteed to preserve types, an explicit example is given where a type can be inferred for a lambda term but not for its reduction... type assumptions need to be restricted to "beta-inert terms" to get good properties!

Hopefully this restriction is not important when it comes to HOL Light.

Also, a slight surprise: the next chapter deals with Curry-style dependent types, a system which does in fact merge type terms with normal terms (but in a lambda-cube manner). I should really read more of this book. :)

--Abram

Russell Wallace

Feb 21, 2011, 1:16:21 PM
to one-...@googlegroups.com
On Mon, Feb 21, 2011 at 6:05 PM, Abram Demski <abram...@gmail.com> wrote:
> Russell,
>
> Browsing chapter 12, it actually looks like lambda has *worse* properties
> than combinatory logic when it comes to equivalence! That is, there is *more*
> reason to consider adding the rule =*. Whereas combinator reduction is
> guaranteed to preserve types, an explicit example is given where a type can
> be inferred for a lambda term but not for its reduction... type assumptions
> need to be restricted to "beta-inert terms" to get good properties!

That sounds disquieting; which page is the example on? I did a quick
skim of the chapter just now, but didn't see it.

> Also, a slight surprise, the next chapter deals with curry-style dependent
> types, a system which does in fact merge type terms with normal terms (but
> in a lambda-cube manner). I should really read more of this book. :)

It certainly rewards rereading; every time I reread a section of it, I
pick up at least one thing that didn't really register before.

Abram Demski

Feb 21, 2011, 5:47:44 PM
to one-...@googlegroups.com
Russell,

Page 169, in my copy. It's remark 12.20(b), referring to theorem 12.19. The definition of "inert" is several pages earlier, at the start of 12b.



Russell Wallace

Feb 21, 2011, 7:05:05 PM
to one-...@googlegroups.com
Hmm, I think I'm not understanding the problem. As far as I can see,
you might not be able to prove the exact concrete type of 1 in
isolation, without seeing what it's being applied to, but you can
prove it is typeable, and HOL Light agrees:

# type_of `\x y.x y`;;
Warning: inventing type variables
val it : hol_type = `:(?77399->?77397)->?77399->?77397`

What am I missing?

Abram Demski

Feb 21, 2011, 7:48:55 PM
to one-...@googlegroups.com
Russell,

Good point; that type assignment makes total sense, but it's not the type which the system fails to derive... that's reassuring.

What the system is failing to do is behave well for arbitrary additional type assumptions. We can derive that the type of \xy.xy is (a->b)->a->b, but if we additionally assume that the type of \xy.y is N and the type of \uxy.x(uxy) is N->N, we can't go ahead and prove that \xy.xy is of type N -- this despite being able to prove that (\uxy.x(uxy)) \xy.y is of type N, an expression which reduces to \xy.xy.

(This doesn't have to do with not being able to prove the exact concrete type of 1 in isolation.)
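
Incidentally, these are just the Church numerals: \xy.y is zero, \uxy.x(uxy) is successor, and \xy.xy is the numeral 1. A quick OCaml rendering (illustrative only):

let zero = fun _x y -> y               (* \x y. y *)
let succ n = fun x y -> x (n x y)      (* \u x y. x (u x y) *)
let one = fun x y -> x y               (* \x y. x y *)
(* succ zero beta-reduces to one: fun x y -> x (zero x y) = fun x y -> x y *)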

--Abram

Russell Wallace

Feb 21, 2011, 9:04:53 PM
to one-...@googlegroups.com
Hmm. *scratches head*

Okay, I see what you mean... on the other hand, I have a vague feeling
that we shouldn't expect to be able to prove that \xy.xy is of type N,
but I can't articulate why right now.

As far as I can see, it shouldn't cause a problem in practice?

Abram Demski

Feb 21, 2011, 10:21:04 PM
to one-...@googlegroups.com
On Mon, Feb 21, 2011 at 9:04 PM, Russell Wallace <russell...@gmail.com> wrote:
> Hmm. *scratches head*
>
> Okay, I see what you mean... on the other hand, I have a vague feeling
> that we shouldn't expect to be able to prove that \xy.xy is of type N,
> but I can't articulate why right now.

If you figure out an intuitive justification, I'd be glad to hear it!

> As far as I can see, it shouldn't cause a problem in practice?

What it means is that you need to make sure that any additional typing axioms which you add are of the forms which don't cause trouble. If it looks like a problem, it appears to be OK to use the solution which the remark hints at: expand it to a (potentially infinite) set of type declarations which all have atomic types.

Abram Demski

Feb 22, 2011, 12:51:24 PM
to one-...@googlegroups.com

> What it means is that you need to make sure that any additional typing axioms which you add are of the forms which don't cause trouble. If it looks like a problem, it appears to be OK to use the solution which the remark hints at: expand it to a (potentially infinite) set of type declarations which all have atomic types.

Scratch that! With that solution, we lose some information. In the example given, we won't be able to re-derive the type N->N for the successor function -- moreover, we shouldn't want the system to derive that type from the given information, because it really doesn't follow. (We don't know what other things might have type N, and they may well provide counterexamples.)

The moral of the story in this case seems to be that we need an actual inductive type system to define inductive types, and shouldn't expect it to work with just polymorphic types.

--Abram