Clauses are written with a combination of named constants and
variables (and optionally literals, e.g. numbers). I'll use Prolog
notation, where lowercase letters denote constants and uppercase
letters denote variables.
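For example, in the clause p(X, a), where p and a are arbitrary
made-up symbols, a is a constant naming one particular individual,
while X is a variable open to substitution.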
What exactly is the semantic difference between constants and variables?
Intuitively, the answer would seem to be that constants are, well,
constant, whereas variables are variable; presumably that is the
justification for the usual rule that we can only substitute for
variables.
But only some constants are really constant in the sense of having
known values. We can write as many constants as we like, and the
values of the extra ones start off unknown. We can then say that a set
of clauses is satisfiable if there is some assignment of values to the
constants and variables that makes every clause true.
For example, in TPTP format, $sum is defined to be the addition
function, so that $sum(X, Y) always means X+Y. But we can also write
f(X, Y), where f is a symbol we just introduced, the name of some
unknown function.
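To illustrate in TPTP's typed first-order form (which is where the
arithmetic lives; the type declaration for f is made up for the
example):

  tff(f_type, type, f : ($int * $int) > $int).
  tff(known, axiom, $sum(1, 2) = 3).   % $sum is interpreted: this just says 1+2=3
  tff(unknown, axiom, f(1, 2) = 3).    % f is uninterpreted: some function with f(1,2)=3

The first axiom is outright true; the second merely constrains which
function f could be.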
Constants can represent functions of one or more arguments, whereas
variables can only represent values. But a function of zero arguments
is the same as a value. If we write the constant x, it could have any
value, just as if we write the variable X.
Constants can be of type boolean as well as object, whereas variables
can only be of type object. Also, a given constant can occur in more
than one clause, whereas a given variable is local to a single clause.
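For example, in

  cnf(c1, axiom, p(X, a)).
  cnf(c2, axiom, q(X, a)).

(p, q and a again being made-up symbols), the constant a denotes the
same thing in both clauses, but the two occurrences of X are
unrelated; renaming the X in c2 to Y changes nothing.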
In short, it would seem that constants are a strict superset of
variables in their capabilities. What would happen if we abolished
variables, and allowed substitution of constants? Would the resulting
system still be sound and complete?
In other words, is it the case that variables in first-order logic (at
least in clause normal form; quantified variables are another matter)
are just an optimization hint, a way of telling a prover "you only
need to bother trying to substitute these variables; the constants are
really variables as well, but you don't need to spend time
substituting them"?
The reason I ask is that I'm working on doing higher-order logic with
combinators, a setting in which, in one sense, there aren't any
variables; or, it seems to me, in another sense variables and
constants are the same thing. I'm wondering if I'm missing anything
fundamental.
However, if we focus specifically on clause normal form and the
question of satisfiability, everything is existentially quantified, so
x=0 and X=0 both have the same meaning and status: satisfiable.
Full first-order logic has quantifiers, which bind variables. Lambda
calculus likewise has a binder, the lambda, which binds variables.
Indeed, when you go to higher-order logic, it becomes clear that
quantified statements are really just statements about the equivalent
lambda functions.
I've been chewing for quite some time now on the problem of reducing
lambda calculus to combinator logic for purposes of higher-order
theorem proving, based on the gut feeling that the simplification of
doing away with bound variables is a huge win.
It now seems to me we can say that this process would be _the same as_
the process in first-order logic of reduction to clause normal form:
get rid of bound variables, make everything existential, then focus
purely on the question of satisfiability; and when you do that, the
difference between constants and variables disappears.
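For concreteness, the textbook bracket-abstraction rules perform
exactly this elimination, one binder at a time:

  ^[X]: X      =>  I
  ^[X]: T      =>  K T                       (X not free in T)
  ^[X]: (M N)  =>  S (^[X]: M) (^[X]: N)

So, for example, ^[X]: (f X X) becomes S (S (K f) I) I, a term with no
bound variables at all.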
Reduction to clause normal form has been an enormous boost to progress
in automated first-order theorem proving, roughly speaking because, by
flattening everything to a uniform representation, it lets the
automated systems run at full throttle.
If the above considerations are true, they suggest reduction to
combinators may have the potential to be a similar boost to progress
in automated higher-order theorem proving for similar reasons.
In the typical interpretation of first-order logic, x=0 and X=0 do
mean different things, but not in clause normal form, where the
question is one of satisfiability. If you feed X=0 into a first-order
theorem prover, it will give the result Satisfiable.
Concretely, I fed a prover:

  cnf(1, plain, X=0).

and it gave the result Satisfiable. But:

  cnf(1, plain, X!=0).

gives the result Unsatisfiable, which would appear to be inconsistent.
But do we need to worry about that for higher-order logic? We are
entitled to wrap every free variable in the input in a universal
quantifier, which translates into a lambda expression, which
translates into combinators with no variables.
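For example, the clause p(X) becomes ![X]: p(X), which can be
expressed as the equation (^[X]: p(X)) = (^[X]: $true); translating
both sides to combinators (and using eta to collapse ^[X]: p(X) to
just p) leaves the variable-free equation

  p = K $true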
A caveat is that we still need to reintroduce free variables for
certain purposes: e.g. to prove SKK=I, it is necessary to feed both
sides the same fresh variable and observe that they then reduce to the
same result.
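Spelling that reduction out, with x a fresh variable:

  S K K x -> K x (K x) -> x
  I x -> x

Both sides reduce to x, so S K K and I agree on every argument.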
In that case, are we effectively using variables with implicitly
universally quantified semantics? In a sense we are using them for
that purpose, but it doesn't seem to me we are doing so in such a way
as to make it necessary to introduce such semantics as a primitive.
Coming back to the apparent inconsistency above: to satisfy X=0, the
prover was merely using a model in which 0 is the only number; X!=0,
on the other hand, has no model at all, since X can always be
instantiated to 0.
(If your jaw didn't just hit the floor, I guess that means you're a
real mathematician :-))
![X]: p(X)  translates to  (^[X]: p(X)) = (^[X]: $true)
?[X]: p(X)  translates to  (^[X]: p(X)) != (^[X]: $false)
If we then translate to combinators, we get rid of variable binding,
which has the huge advantage that terms become context-free: the
meaning of a subterm does not depend on the larger term in which it is
embedded. (Caveat: I'm not yet as sure as I'd like to be that this
remains true after we bring types into the picture.)
One disadvantage of translating to CL is that it becomes a bit more
difficult to determine when terms are equal. There are also the
practical difficulties of debugging output by eyeball, and of keeping
term size from blowing up too much.
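To give a feel for the size problem: the naive translation turns the
two-binder term ^[X]: ^[Y]: (Y X) into

  S (K (S I)) (S (K K) I)

and the growth compounds as terms get bigger, which is why optimized
abstraction algorithms (e.g. Turner's, which adds combinators such as
B and C) matter in practice.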
I don't have the algorithms nailed down yet, but as far as I can see,
regardless of whether we translate to CL, we will need to shift things
around, feed in arguments and so on in order to do inference, as you say.