(^[P]:P(2,2) & P(true,true))(=)
Here, we are passing the function template = to a function which
instantiates it twice, once for integers, once for booleans. In other
words, each occurrence of the symbol P has a different type.
It seems to me the only way this can work is if every occurrence of
every symbol is distinct. In other words, typed lambda calculus has
pure reference semantics.
From an implementation viewpoint, that means absolutely no structure
sharing. It also means we can't compare a term against a known constant
symbol like & with just a pointer comparison; we have to look up the
contained symbol every time.
On the other hand, maybe we can make lemonade with a cache-friendly
representation that has high locality, perhaps by writing a given
formula entirely into a small contiguous block of memory.
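To make that concrete, here's a minimal Python sketch of one such layout (the encoding and names are hypothetical, not from any existing implementation): a term is flattened in postfix order into one contiguous list, so a whole formula sits in one block of memory and syntactic equality becomes a single linear scan rather than a pointer chase.

```python
# Hypothetical flat encoding of terms: a term is either a symbol
# (a string) or an application (f, arg). Flattening emits postfix
# order, so the whole formula occupies one contiguous sequence.
SYM, APP = "sym", "app"

def flatten(term, out=None):
    """Encode a nested term into a flat postfix list."""
    if out is None:
        out = []
    if isinstance(term, tuple):          # application node (f, arg)
        flatten(term[0], out)
        flatten(term[1], out)
        out.append(APP)
    else:                                # symbol occurrence
        out.append((SYM, term))
    return out

# Two copies of (& p) q, built with no structure sharing at all:
t1 = flatten((("&", "p"), "q"))
t2 = flatten((("&", "p"), "q"))
assert t1 == t2    # equality = one sequential pass over contiguous data
```

The point of the sketch is only that with no sharing, equality and traversal degrade gracefully: both become cache-friendly linear scans.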
Consider (^[X]:X)(2) and (^[X]:X)(true); are the two occurrences of
the identity function syntactically equal to each other? We want to be
able to say yes. The fact that the different occurrences of X are
distinct doesn't cause a problem, because lambda calculus uses the
notion of equality modulo renaming of bound variables.
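Equality modulo renaming is easy to make precise; a standard way (sketched here in Python with a hypothetical term representation) is to translate named terms into de Bruijn indices, after which alpha-equivalent terms become literally identical:

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", f, a)
def debruijn(t, env=()):
    """Replace each bound variable by its distance to its binder,
    so alpha-equivalent terms get identical representations."""
    tag = t[0]
    if tag == "var":
        return ("var", env.index(t[1]))
    if tag == "lam":
        return ("lam", debruijn(t[2], (t[1],) + env))
    return ("app", debruijn(t[1], env), debruijn(t[2], env))

# (^[X]:X) and (^[Y]:Y): different bound-variable names, same function.
id_x = ("lam", "X", ("var", "X"))
id_y = ("lam", "Y", ("var", "Y"))
assert debruijn(id_x) == debruijn(id_y)
```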
But they have different types. Is this a problem?
To answer this, it seems to me we can fall back on our idea that the
foundation is untyped lambda calculus, with type checking being
something we do to make sure we aren't making any unsound inferences.
So from that perspective I think we can say the two occurrences of the
identity function are indeed equal, the same function, each being used
in a different context, each also being used in a valid context. So we
can still use the same notion of syntactic equality that we used in
untyped lambda calculus.
--
You received this message because you are subscribed to the Google Groups "One Logic" group.
To post to this group, send email to one-...@googlegroups.com.
To unsubscribe from this group, send email to one-logic+...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/one-logic?hl=en.
^X:f(g(X),h(X))
Suppose g takes a parameter of type integer and h takes a parameter
of type Boolean; then we need to reject this as ill-typed, right?
That means that the type variable attached to the first occurrence of
X must be the same type variable as the one attached to the second and
third occurrences, right?
But now consider my earlier example
(^[P]:P(2,2) & P(true,true))(=)
Now this can't work.
Any idea how to reconcile this?
Ah! HOL Light says:
# `(\x.f (x (1=1) (1=1)) (x 2 2) )(=)`;;
Exception: Failure "typechecking error (initial type assignment)".
At least in the short to medium term, it's reasonable to take HOL
Light as defining the requirements, so that sets limits on what we
need to do. That's looking more promising!
Right, whereas for Curry semantics, it does work:
# `(2 = 2) /\ ((1=1) = (1=1))`;;
val it : term = `2 = 2 /\ (1 = 1 <=> 1 = 1)`
(at least, I assume HOL Light is using Curry semantics)
Okay, so what's the difference? It seems to me the difference is:
1. Every occurrence of a predefined constant symbol like = gets a
different set of type variables.
2. Every occurrence of a bound variable gets the same set of type variables.
3. What about user-defined constant symbols?
# `(eekwalz 2 2) /\ (eekwalz (1=1) (1=1))`;;
Exception: Failure "typechecking error (initial type assignment)".
Bleagh. So predefined template functions like = are special case hacks.
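The two rules can be shown side by side with the same toy unifier (a Python sketch with hypothetical names; HOL Light's internals are certainly different): a predefined constant like = is instantiated with fresh type variables at each occurrence, while a lambda-bound variable like P keeps one fixed type variable everywhere, so the first usage pattern succeeds and the second is rejected.

```python
import itertools

_fresh = itertools.count()

def fresh_var():
    """A brand-new type variable, as minted per occurrence of =."""
    return ("?", next(_fresh))

def find(subst, t):
    while isinstance(t, tuple) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    a, b = find(subst, a), find(subst, b)
    if a == b:
        return subst
    if isinstance(a, tuple):
        return {**subst, a: b}
    if isinstance(b, tuple):
        return {**subst, b: a}
    raise TypeError(f"cannot unify {a} with {b}")

# Rule 1: each occurrence of = gets its own type variable, so using
# it at int in one conjunct and at bool in another is fine.
s = {}
s = unify(fresh_var(), "int", s)    # the = in 2 = 2
s = unify(fresh_var(), "bool", s)   # the = in (1=1) = (1=1)

# Rule 2: every occurrence of the bound P shares ONE type variable,
# so (^[P]:P(2,2) & P(true,true))(=) is rejected.
p_ty = fresh_var()
s = unify(p_ty, "int", s)
try:
    unify(p_ty, "bool", s)
    rejected = False
except TypeError:
    rejected = True
assert rejected
```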
But this is the system Intel used for verifying floating-point units.
And what are the alternatives? If we try to move up the typing scale,
isn't the next stop on the way System F, where type inference is
undecidable?
Conjecture: the best way forward for the moment is to live with the
special case hacks, proceed on this basis and reevaluate once we've
got more experience using the resulting system. If anyone has a
better suggestion, it's not too late!
Now, assuming Hindley & Seldin Chapter 12 plus the special case hacks
for predefined template functions is the system used by HOL Light, and
that its semantics are those I gave in my last e-mail, I think my
answer is tentatively yes, pending figuring out how to translate the
axioms into effective automatic inference algorithms.
Chapter 12. There are some hefty disadvantages to using combinator
logic; I investigated using it anyway in the hope that it might lead
to a system that's fundamentally simple, but since that's not going to
happen, there's no point paying the price for it.
> My sense is that what this gets us is just slightly more than HOL Light, in
> that the resulting system has a bit of untyped logic surrounding the typed
> HOL Light logic.
That wasn't my impression; I don't suppose you can nail down any specifics?
> A small example is in note 11.13 on page 125, which notes that one thing the
> Curry-style typing system has over the Church-style one of the previous
> chapter is that the system can answer questions of the form "if exp1 had
> type t1, what type would exp2 have?" That is, we can prove conditional
> well-typedness claims.
Sure, but HOL Light can do that too, can't it? Put type annotations on
some components of a term and see what it infers for the rest? In
other words, doesn't HOL Light use Curry semantics?
> More exciting stuff happens in section 11k, where an untyped notion of
> equality is used.
Hmm, but that rule, as it says, is undecidable, and therefore so is
type inference in a system where that rule is used.
On the other hand, that rule would seem to be (hopefully/fortunately!)
unnecessary in lambda, because lambda has a stronger notion of
equality by reduction than CL has.
> I'd have to re-read 11 and read 12 to make very insightful comments... the
> relevant stuff just isn't loaded into my short-term memory at the moment.
*nods* I know how that goes.
Well hang on, let's distinguish between three kinds/levels of reasoning:
1. term inference, which is (at least potentially) both formal
(automatable) and general
2. type inference, which is formal, and (at least in the systems we
are talking about here such as Chapter 12 and HOL Light) specialized,
i.e. the HM type inference algorithm
3. meta-inference, which is informal, strictly the province of humans
(at least for the near future), but which can be even more general
than term inference
Chapter 12 does indeed contain explicit reasoning about types, but it
is a combination of #2 (which HOL Light also does: type inference) and
#3, which is of course not within the stated axioms and inference rules.
> (From those, HOL Light could be
> curry typed or church typed; it's just a matter of semantics!)
It seems to me that once you're doing type inference, you're doing
Church typing?
>> Hmm, but that rule, as it says, is undecidable, and therefore so is
>> type inference in a system where that rule is used.
>
> Which is exactly what you get in any reasonable untyped system! You can't
> perform a full check to see if a statement is meaningful; sometimes you've
> got to either loop forever on meaningless input (at which point the user
> says "what went wrong?") or give up and ask the user to provide the proof of
> well-typedness.
>
> What the rule gets us is the ability to prove that some very reasonable
> terms are "safe" despite not being simply typeable, because if we reduce
> them we eventually get simply typeable terms. In other words, we extend the
> notion of well-typedness to force it to be preserved under some notion of
> equivalence, such as beta-convertibility.
All of what you say here is true, but it's not feasible to do
everything in one shot; we need to do things in stages, with a
concrete target for each stage...
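The quoted point about reduction can be made concrete (a Python sketch over de Bruijn terms; the representation is illustrative only): (^[X]:^[Y]:Y)(^[Z]:Z(Z)) is not simply typeable, since it contains the self-application Z(Z), yet a single beta step discards that argument and leaves the perfectly typeable ^[Y]:Y.

```python
# Terms: ("var", n) | ("lam", body) | ("app", f, a), de Bruijn indices.
def shift(t, d, cutoff=0):
    """Shift free variable indices in t by d."""
    tag = t[0]
    if tag == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if tag == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, s, j=0):
    """Substitute s for variable j in t, adjusting indices."""
    tag = t[0]
    if tag == "var":
        if t[1] == j:
            return s
        return ("var", t[1] - 1) if t[1] > j else t
    if tag == "lam":
        return ("lam", subst(t[1], shift(s, 1), j + 1))
    return ("app", subst(t[1], s, j), subst(t[2], s, j))

def beta(redex):
    """Contract a top-level redex ((lam body) arg)."""
    _, (_, body), arg = redex
    return subst(body, arg)

k_flip = ("lam", ("lam", ("var", 0)))                # ^[X]:^[Y]:Y
self_app = ("lam", ("app", ("var", 0), ("var", 0)))  # ^[Z]:Z(Z), untypeable
identity = ("lam", ("var", 0))                       # ^[Y]:Y
assert beta(("app", k_flip, self_app)) == identity
```

So a notion of well-typedness closed under beta-convertibility would accept the original term, even though the simple typing discipline rejects it outright.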
> It sounds like you're leaning towards wanting a typed logic? So why not just
> use HOL Light?
HOL Light is a checker/assistant for manual proof. I'm aiming for a
fully automated system.
To put version numbers on it, here's my current roadmap for Project Ayane:
3.0 - a fully automated prover for typed higher-order logic, the basic
idea being to do in principle everything HOL Light can currently do,
except automatically instead of manually; performance will be
mediocre, due to the difficulty imposed by the size of the search
space
4.0 - addition of learned heuristics for inference control, hopefully
greatly improving performance, and perhaps also advancing the state of
the art in heuristic learning in domains that are not (as heretofore)
simple vectors
5.0 - extension of the resulting system to untyped logic with all the
benefits previously discussed
You're reminding me of the reasons we want version 5.0, and I agree
completely, but having said all that, what I need to do right _now_ is
concentrate on getting the design for version 3.0 nailed down so I can
actually implement it.
That sounds disquieting, which page is the example on? I did a quick
skim of the chapter just now, but didn't see it.
> Also, a slight surprise, the next chapter deals with curry-style dependent
> types, a system which does in fact merge type terms with normal terms (but
> in a lambda-cube manner). I should really read more of this book. :)
It certainly rewards rereading; every time I reread a section of it, I
pick up at least one thing that didn't really register before.
# type_of `\x y.x y`;;
Warning: inventing type variables
val it : hol_type = `:(?77399->?77397)->?77399->?77397`
What am I missing?
Okay, I see what you mean... on the other hand, I have a vague feeling
that we shouldn't expect to be able to prove that \xy.xy is of type N,
but I can't articulate why right now.
As far as I can see, it shouldn't cause a problem in practice?
Hmm. *scratches head*
What it means is that you need to make sure that any additional typing
axioms you add are of the forms that don't cause trouble. If it looks
like a problem, it appears to be OK to use the solution the remark
hints at: expand it into a (potentially infinite) set of type
declarations that all have atomic types.