Dear all,
Today I had the pleasure of calculating some predicate calculus results which seemed intuitive, but which were previously unknown to me. Allow me to share them.
First, let us recall the following slightly unfamiliar distributivity property:
(0) (Ax : r.x : t.x => C) == (Ex : r.x : t.x) => C .
The validity of (0) may be seen from the following calculation:
(Ax : r.x : t.x => C)
== { => into ¬ and \/ }
(Ax : r.x : ¬t.x \/ C)
== { \/ over A }
(Ax : r.x : ¬t.x) \/ C
== { de Morgan }
¬(Ex : r.x : t.x) \/ C
== { ¬ and \/ into => }
(Ex : r.x : t.x) => C .
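(An aside for the mechanically inclined: the little Python sketch below — an illustration of my own devising, over an arbitrarily chosen three-element domain — brute-forces (0) for every range r , every term t , and both values of C .)

    from itertools import product

    DOMAIN = range(3)            # an arbitrary small domain
    BOOLS = (False, True)

    # enumerate every range r, every term t, and both values of C
    for r in product(BOOLS, repeat=len(DOMAIN)):
        for t in product(BOOLS, repeat=len(DOMAIN)):
            for C in BOOLS:
                # (Ax : r.x : t.x => C)
                lhs = all((not t[x]) or C for x in DOMAIN if r[x])
                # (Ex : r.x : t.x) => C
                rhs = (not any(t[x] for x in DOMAIN if r[x])) or C
                assert lhs == rhs
    print("(0) verified over a 3-element domain")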
* * *
The formula I was considering today had the shape:
(*) (Ax,y : r.x.y /\ s.y : t.y) .
Notice that the quantification is over x and y , but the range of quantification is somewhat “disentangled” with respect to these variables, and the term does not depend on x at all. It seemed clear to me that under certain conditions, the quantification over x could be considered “vacuous” and be eliminated. Intuitively, we need to at least guarantee the existence of some x satisfying r.x.y : if we take r.x.y == false in (*) , the whole formula equivales true (a universal quantification over an empty range), irrespective of s and t , and so no such simplification can hold unconditionally.
I decided to calculate to find the condition in question:
(Ax,y : r.x.y /\ s.y : t.y)
== { nesting }
(Ay :: (Ax : r.x.y /\ s.y : t.y) )
== { shunting }
(Ay :: (Ax :: r.x.y /\ s.y => t.y ) )
== { distributivity property (0) }
(Ay :: (Ex :: r.x.y /\ s.y) => t.y )
== { /\ over E }
(Ay :: (Ex :: r.x.y) /\ s.y => t.y )
== { • assuming s.y => (Ex :: r.x.y) }
(Ay :: s.y => t.y)
== { shunting }
(Ay : s.y : t.y) .
And so we have learned the “vacuous quantification” rule:
(1) (Ax,y : r.x.y /\ s.y : t.y) == (Ay : s.y : t.y) ,
valid provided s.y => (Ex :: r.x.y) for all y .
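(Once more for the mechanically inclined: the Python sketch below — with two-element types for x and y , chosen arbitrarily — checks rule (1) under its proviso, for every relation r and all predicates s and t .)

    from itertools import product

    X, Y = range(2), range(2)    # arbitrary small types for x and y
    BOOLS = (False, True)

    for r_bits in product(BOOLS, repeat=len(X) * len(Y)):
        r = {(x, y): r_bits[x * len(Y) + y] for x in X for y in Y}
        for s in product(BOOLS, repeat=len(Y)):
            for t in product(BOOLS, repeat=len(Y)):
                # proviso: s.y => (Ex :: r.x.y), for all y
                proviso = all((not s[y]) or any(r[(x, y)] for x in X) for y in Y)
                # (Ax,y : r.x.y /\ s.y : t.y)
                lhs = all(t[y] for x in X for y in Y if r[(x, y)] and s[y])
                # (Ay : s.y : t.y)
                rhs = all(t[y] for y in Y if s[y])
                if proviso:
                    assert lhs == rhs
    print("(1) verified under its proviso, over 2-element types")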
* * *
Finally, I will reveal the original context of these considerations. We borrow the basic language of (naive) set theory: elements, sets, the predicate ∈ , and so forth. (See Appendix.) Next, we consider a function p , with which we may form expressions like p.x , where the low dot denotes ‘application’ of p to the element x . We extend this notation and allow functions to be applied to sets by the following definition:
(2) y ∈ p.C == (Ex : x ∈ C : y = p.x) ,
from which we conclude the basic property:
(2') x ∈ C => p.x ∈ p.C .
Additionally, for each element y we define a set p⁻¹.y by:
(3) x ∈ p⁻¹.y == p.x = y .
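(To fix ideas, here is one way definitions (2) and (3) might be modelled in Python over a finite universe — the universe, the particular p , and the set C below are arbitrary choices of mine — together with a check of property (2') .)

    def image(p, C):
        # definition (2):  y ∈ p.C  ==  (Ex : x ∈ C : y = p.x)
        return {p(x) for x in C}

    def preimage(p, y, universe):
        # definition (3):  x ∈ p⁻¹.y  ==  p.x = y
        # (a universe is supplied only because Python sets must be finite)
        return {x for x in universe if p(x) == y}

    universe = range(6)

    def p(x):                    # an arbitrary example function
        return x % 3

    C = {0, 1, 4}

    # property (2'):  x ∈ C  =>  p.x ∈ p.C
    assert all(p(x) in image(p, C) for x in C)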
Finally, we wish to speak of a set C being ‘saturated’ by p . In his book Topology , James Munkres defines this notion by saying that C should contain every set p⁻¹.y that it intersects. A colorful description, but not great for calculation! Let’s see what we can do:
(Ay :: C ∩ p⁻¹.y ≠ ø => p⁻¹.y ⊂ C )
== { rewriting the antecedent, in the language of elements; (3) }
(Ay :: (Ez :: z ∈ C /\ p.z = y ) => p⁻¹.y ⊂ C )
== { distributivity property (0) , right to left ; nesting }
(Ay,z : z ∈ C /\ p.z = y : p⁻¹.y ⊂ C )
== { one-point rule to eliminate y }
(Az : z ∈ C : p⁻¹.(p.z) ⊂ C ) .
For my money, this is already a decent simplification! (I am thinking in particular of the homogenization and disentanglement of the symbols.) But let’s press further:
(Az : z ∈ C : p⁻¹.(p.z) ⊂ C )
== { expanding the term, in the language of elements; (3) }
(Az : z ∈ C : (Ax : p.x = p.z : x ∈ C) )
== { (un)nesting }
(Az,x : z ∈ C /\ p.x = p.z : x ∈ C )
== { preparing to eliminate z ; (2') and Leibniz }
(Az,x : z ∈ C /\ p.x = p.z /\ p.x ∈ p.C : x ∈ C )
== { • vacuous quantification rule (1) ; see deferred proof obligation below }
(Ax : p.x ∈ p.C : x ∈ C) .
And so we may equivalently and simply say that C is saturated by p if p.x ∈ p.C implies x ∈ C . Note that this is simply the converse of (2') given above, and constitutes a sort of cancellation property with respect to ∈ . Of course, we are only entitled to this simpler definition, provided we address our deferred proof obligation:
p.x ∈ p.C => (Ez :: z ∈ C /\ p.x = p.z ) .
But this is precisely definition (2) of p.C , with the dummy renamed and y instantiated to p.x .
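(And one last mechanical check: the Python sketch below — universe and p again arbitrary choices of mine — confirms that Munkres’s definition, the intermediate form, and the simple form above agree on every subset C of a small universe.)

    from itertools import chain, combinations

    universe = range(4)

    def p(x):                    # an arbitrary example function
        return x // 2

    def image(C):
        return {p(x) for x in C}

    def preimage(y):
        return {x for x in universe if p(x) == y}

    def saturated_munkres(C):
        # (Ay :: C ∩ p⁻¹.y ≠ ø  =>  p⁻¹.y ⊂ C)
        return all(not (C & preimage(y)) or preimage(y) <= C for y in universe)

    def saturated_mid(C):
        # (Az : z ∈ C : p⁻¹.(p.z) ⊂ C)
        return all(preimage(p(z)) <= C for z in C)

    def saturated_simple(C):
        # (Ax : p.x ∈ p.C : x ∈ C)
        return all(x in C for x in universe if p(x) in image(C))

    for tup in chain.from_iterable(
            combinations(universe, k) for k in range(len(universe) + 1)):
        C = set(tup)
        assert saturated_munkres(C) == saturated_mid(C) == saturated_simple(C)
    print("all three characterizations of 'saturated' agree")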
* * *
All best,
+j
APPENDIX
=========
We consider a universe of mathematical objects which come in two types: ‘elements’ and ‘sets’ . With x of type element and C of type set we have the predicate x ∈ C . We now introduce a new type, ‘function’ . With p of type function and x any mathematical object, we can form the expression p.x , which denotes the ‘application’ of p to x . Functions are required to satisfy the ‘Leibniz’ property:
x = y => p.x = p.y (for all x,y )
which (for example) predicates satisfy as well. (Indeed, in the final calculation above I used p.x = p.z => (p.x ∈ C == p.z ∈ C) , which is an example of the Leibniz property for predicates. Accordingly, one might consider predicates to be a sort of function.) Typically, a function only satisfies the Leibniz property for a collection of mathematical objects specified as the ‘domain’ of that function, though such specification is not necessary to follow the above discussion.
It bears mentioning that in certain axiomatic approaches to set theory, ‘element’ , ‘function’ , ‘domain’ , etc., are all taken to be subtypes of the type ‘set’ . For that matter, so is ‘predicate’ . Again, I consider all this overspecific, and useful primarily for the logical study of (the limitations of) set theory.
Sets satisfy the following axiomatic property:
(0) C = D == (Ax :: x ∈ C == x ∈ D) .
We define a relation ⊂ on sets by:
C ⊂ D == (Ax :: x ∈ C => x ∈ D) ,
so that we have a sort of mutual implication property for sets:
C = D == C ⊂ D /\ D ⊂ C .
Note that, thanks to (0) , a property like:
(*) x ∈ C == f.x
uniquely defines a set C , and we exploit this right away to define the empty set ø by:
x ∈ ø == false .
It follows that C ≠ ø == (Ex :: x ∈ C) . Similarly, we define operators ∪ and ∩ on sets by:
x ∈ C ∪ D == x ∈ C \/ x ∈ D
x ∈ C ∩ D == x ∈ C /\ x ∈ D .
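(Should one wish to experiment, the vocabulary above maps directly onto Python’s finite sets; a small illustration, with an arbitrary pair of sets of my choosing:)

    C = {1, 2}
    D = {1, 2, 3}

    # C ⊂ D  ==  (Ax :: x ∈ C => x ∈ D)      (Python's <= on sets)
    assert (C <= D) == all(x in D for x in C)

    # mutual implication property:  C = D  ==  C ⊂ D /\ D ⊂ C
    assert (C == D) == (C <= D and D <= C)

    # x ∈ C ∪ D == x ∈ C \/ x ∈ D   and   x ∈ C ∩ D == x ∈ C /\ x ∈ D
    assert all((x in C | D) == (x in C or x in D) for x in C | D)
    assert all((x in C & D) == (x in C and x in D) for x in C | D)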
(End of APPENDIX.)