
Definition of ordinal exponentiation


Norman Megill

Dec 22, 2004, 6:11:59 AM
There seems to be a disagreement about the definition of ordinal
exponentiation in the literature. For zero and successor exponents, all
books agree on:

a^0 = 1
a^(b+1) = (a^b).a

But there are two definitions used for limit exponents. Two books I
looked at use a simpler definition:

Kunen _Set Theory_ p. 26
Just & Weese _Discovering Modern Set Theory: I_ p. 167

a^b = Union_{c<b} a^c if b is a limit ordinal

Four books use a more complicated definition:

Mendelson _Intro. to Math. Logic_ p. 250
Takeuti & Zaring _Intro. to Axiom. Set Th._ 2nd ed. p. 67
Enderton _Elements of Set Th._ p. 232
Suppes _Axiomatic Set Th._ p. 215

a^b = Union_{c<b} a^c if b is a limit ordinal and a > 0
a^b = 0 if b is a limit ordinal and a = 0

Enderton says: "If we were to blindly follow [the simpler definition]
we would have 0^omega = 1. This is undesirable..." but he doesn't
say why it is undesirable.

And finally, Jech seems to avoid the issue:

Jech _Set Theory_ (1978) p. 20

a^b = lim c->b a^c if b is a limit ordinal

where lim c->b a^c is defined (p. 18) only for nondecreasing sequences
(therefore leaving 0^b undefined when b is a limit ordinal, since the
sequence 0^c is not nondecreasing)

The difference between the simple and complicated definitions is the
following:

Simple definition: 0^b = 0 when b is a successor and 1 otherwise
Complicated definition: 0^b = 1 when b=0 and 0 otherwise
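
To see the difference concretely, under the simple definition the 1
comes from the c = 0 term of the union:

   0^omega = Union_{c<omega} 0^c = Union {0^0, 0^1, 0^2, ...}
           = Union {1, 0, 0, ...} = 1

while the complicated definition sets 0^omega = 0 directly by its
a = 0 clause.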

The only "drawback" I can see of the simple definition is that 0^b jumps
between 0 and 1 as b increases (is not continuous). It goes to 1 at
each limit, then the next successor cuts it back down to 0. Perhaps it
is unaesthetic, but I don't see how it is a theoretical drawback. It
doesn't seem to make much difference in the standard theorems, although
I may have overlooked something.

On the other hand, the complicated definition can complicate proofs
because you have to prove the 0 case separately. By contrast, the
simple supremum is the "natural" thing that transfinite induction
expects to use (just as for ordinal addition and multiplication).
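
For comparison, the standard limit clauses for ordinal addition and
multiplication have the same union form, and there no special case at
a = 0 is needed:

   a+b = Union_{c<b} (a+c)   if b is a limit ordinal
   a.b = Union_{c<b} (a.c)   if b is a limit ordinal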

What other arguments are there in favor of or against the simple
definition?

David C. Ullrich

Dec 22, 2004, 7:47:44 AM

I wouldn't know, but I doubt that it really matters. But it seems
to me that continuity is exactly the point: if b is a limit ordinal
then a^b "should" be the _limit_ of a^c as c -> b from below - if
instead we take the supremum, i.e. the union, then 0^0 determines what
0^omega is, which doesn't seem to make much sense.

That "should" is not something one can prove, but it seems right to me
anyway.

>On the other hand, the complicated definition can complicate proofs
>because you have to prove the 0 case separately.

Doesn't make proofs much more complicated.

Otoh at least once there was a post here where someone was confused
by some text that seemed to be taking 0^omega = 0, although 0^omega =
1 was what followed from the definition.

>By contrast, the
>simple supremum is the "natural" thing that transfinite induction
>expects to use (just as for ordinal addition and multiplication).

Seems much more "natural" to me for transfinite induction to
involve limits instead of sups - the only reason, seems to me,
that the definition is given in terms of unions is because
that's simpler formally but equivalent except in this trivial
special case.

>What other arguments are there in favor of or against the simple
>definition?


************************

David C. Ullrich

Norman Megill

Dec 23, 2004, 11:43:13 AM
David C. Ullrich wrote:

> On Wed, 22 Dec 2004 06:11:59 -0500, Norman Megill
> <n...@see.signature.invalid> wrote:
>
>>There seems to be a disagreement about the definition of ordinal
>>exponentiation in the literature. [...]

Thanks for answering. To give you some perspective: I need to settle on
one of these definitions for my Metamath database and wanted to hear some
opinions.

> I wouldn't know, but I doubt that it really matters. But it seems
> to me that continuity is exactly the point: if b is a limit ordinal
> then a^b "should" be the _limit_ of a^c as c -> b from below - if
> instead we take the supremum, i.e. the union, then 0^0 determines what
> 0^omega is, which doesn't seem to make much sense.
>
> That "should" is not something one can prove, but it seems right to me
> anyway.
>
>>On the other hand, the complicated definition can complicate proofs
>>because you have to prove the 0 case separately.
>
> Doesn't make proofs much more complicated.

True, not much. But a little, especially for formal proofs. And special
cases annoy me. :)

> Otoh at least once there was a post here where someone was confused
> by some text that seemed to be taking 0^omega = 0, although 0^omega =
> 1 was what followed from the definition.

Thanks, I found the thread from Oct. 2 (I searched Google Groups for
"0^omega=1"). I wasn't aware of it, and it is very relevant.

>>By contrast, the
>>simple supremum is the "natural" thing that transfinite induction
>>expects to use (just as for ordinal addition and multiplication).
>
> Seems much more "natural" to me for transfinite induction to
> involve limits instead of sups - the only reason, seems to me,
> that the definition is given in terms of unions is because
> that's simpler formally but equivalent except in this trivial
> special case.

I kind of agree. But for Metamath definitions usually my inclination is
to go for the formally simplest, which often makes proofs easier.
What I am sensing is that this trivial case isn't all that important,
and besides people don't agree on it anyway.

So on the one hand I'm tempted to use the simplest to make life
easiest. From a purely formal point of view where simplicity counts, it
is the most elegant. Since Kunen says it's OK, I figure that would make
it acceptable. Personally I don't mind 0^omega=1; it kind of grows on
you, especially since 0 and limit ordinals share other properties, such
as being equal to their own union.

But on the other hand it seems a lot of people dislike 0^omega=1. I'm
still open to comments. What would sway me most would be an example of
a standard theorem where there is no special case now, but a special
case would arise if the simpler definition were used. An example
involving order types was brought up in the thread you referenced, but
one involving pure ordinals would also be nice. I'm not aware of any of
the latter.

If Herb Enderton is reading this, perhaps he could comment on the remark
in his book:

>> Enderton says: "If we were to blindly follow [the simpler definition]
>> we would have 0^omega = 1. This is undesirable..."

(_Elements of Set Theory_ p. 232)

Norman Megill

Dec 23, 2004, 11:47:54 AM
Argh, why is my sig not appearing? I'll put it in by hand:

--
Norm Megill nm at alum dot mit dot edu

David C. Ullrich

Dec 24, 2004, 7:51:55 AM
On Thu, 23 Dec 2004 11:43:13 -0500, Norman Megill
<n...@see.signature.invalid> wrote:

>[...]


>
>I kind of agree. But for Metamath definitions usually my inclination is
>to go for the formally simplest, which often makes proofs easier.

Ok, if that's the objective then just go ahead and define a^b = 0 for
all a and b. Will make all the proofs regarding exponentiation _much_
simpler.

Of course the exponential function will no longer have all the
properties that we want it to have. (But at least we'll have
0^omega = 0...)

Sorry, couldn't stop myself there once I started typing.

>What I am sensing is that this trivial case isn't all that important,
>and besides people don't agree on it anyway.
>
>So on the one hand I'm tempted to use the simplest to make life
>easiest. From a purely formal point of view where simplicity counts, it
>is the most elegant. Since Kunen says it's OK, I figure that would make
>it acceptable. Personally I don't mind 0^omega=1; it kind of grows on
>you, especially since 0 and limit ordinals share other properties, such
>as being equal to their own union.
>
>But on the other hand it seems a lot of people dislike 0^omega=1.

Not that my opinion matters, but as long as it's the only opinion
you're getting so far, 0^omega = 1 just seems clearly wrong. It's
a matter of definition, and there's no such thing as an actually
wrong definition, by definition. Except for this one.

>I'm
>still open to comments. What would sway me most would be an example of
>a standard theorem where there is no special case now, but a special
>case would arise if the simpler definition were used. An example
>involving order types was brought up in the thread you referenced, but
>one involving pure ordinals would also be nice. I'm not aware of any of
>the latter.
>
>If Herb Enderton is reading this, perhaps he could comment on the remark
>in his book:
>
> >> Enderton says: "If we were to blindly follow [the simpler definition]
> >> we would have 0^omega = 1. This is undesirable..."
>(_Elements of Set Theory_ p. 232)


************************

David C. Ullrich

David Moews

Dec 24, 2004, 7:37:52 PM
In article <-4WdnfeR8ds...@rcn.net>,

Norman Megill <n...@see.signature.invalid> wrote:
|There seems to be a disagreement about the definition of ordinal
|exponentiation in the literature. For zero and successor exponents, all
|books agree on:
|
| a^0 = 1
| a^(b+1) = (a^b).a
|
|But there are two definitions used for limit exponents. Two books I
|looked at use a simpler definition:
|
| Kunen _Set Theory_ p. 26
| Just & Weese _Discovering Modern Set Theory: I_ p. 167
|
| a^b = Union_{c<b} a^c if b is a limit ordinal
|
|Four books use a more complicated definition:
|
| Mendelson _Intro. to Math. Logic_ p. 250
| Takeuti & Zaring _Intro. to Axiom. Set Th._ 2nd ed. p. 67
| Enderton _Elements of Set Th._ p. 232
| Suppes _Axiomatic Set Th._ p. 215
|
| a^b = Union_{c<b} a^c if b is a limit ordinal and a > 0
| a^b = 0 if b is a limit ordinal and a = 0
|
| Enderton says: "If we were to blindly follow [the simpler definition]
| we would have 0^omega = 1. This is undesirable..." but he doesn't
| say why it is undesirable.
[...]

In Exercise I.7 (p. 43) Kunen defines a^b as the order type of the set of all
functions from b to a which are nonzero at only finitely many places, under
the order f < g iff f(x) < g(x) for the largest x such that f(x) != g(x).
This gives 0^b = 0 for all b > 0 (there are no functions at all from a
nonempty b into 0, so the set is empty), which I think is the better
definition.
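
For finite a and b this definition can be checked directly against
ordinary exponentiation; here is a quick Python sketch (the function
name and the restriction to the finite case are just for illustration):

  from itertools import product

  def finite_ordinal_exp(a, b):
      # All functions from b = {0,...,b-1} into a = {0,...,a-1}, as tuples.
      # For finite b every function trivially has finite support.
      # f < g iff f(x) < g(x) at the largest x where they differ, which
      # for tuples is lexicographic comparison of the reversed tuples.
      funcs = sorted(product(range(a), repeat=b),
                     key=lambda f: tuple(reversed(f)))
      # A finite linear order's order type is just its length.
      return len(funcs)

  assert finite_ordinal_exp(2, 3) == 8  # matches ordinary 2^3
  assert finite_ordinal_exp(3, 0) == 1  # only the empty function, so a^0 = 1
  assert finite_ordinal_exp(0, 2) == 0  # no functions from a nonempty set into 0
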
--
David Moews dmo...@xraysgi.ims.uconn.edu

H. Enderton

Dec 25, 2004, 10:07:57 PM
>On Thu, 23 Dec 2004 11:43:13 -0500, Norman Megill wrote:
>>If Herb Enderton is reading this, perhaps he could comment on the remark
>>in his book:
>> >> Enderton says: "If we were to blindly follow [the simpler definition]
>> >> we would have 0^omega = 1. This is undesirable..."
>>(_Elements of Set Theory_ p. 232)

I don't think I had anything deep in mind. Wouldn't it be weird to have
0^a = 0 for all successor ordinals a, and then suddenly have 0^a jump up
to 1 at each limit a?

Also, I bet there will be properties of ordinal exponentiation b^a that,
under your "simpler" definition, will have to be restated to handle the
b = 0 case separately.

--Herb Enderton


Norman Megill

Dec 30, 2004, 3:03:03 PM
David C. Ullrich wrote:

> On Thu, 23 Dec 2004 11:43:13 -0500, Norman Megill wrote:
>>I kind of agree. But for Metamath definitions usually my inclination is
>>to go for the formally simplest, which often makes proofs easier.
>
> Ok, if that's the objective then just go ahead and define a^b = 0 for
> all a and b. Will make all the proofs regarding exponentiation _much_
> simpler.

Yes, I thought of that too. A minor drawback is that ordinal and
cardinal exponentiation would no longer coincide for finite sets, but
otherwise it seems quite "natural," even somewhat elegant, in a purely
formal sense. But no book uses it that I know of, and I'm not so bold
as to be that unconventional. I've decided to resign myself to
the traditional 0^0=1 and 0^omega=0 definition.

--
Norman Megill nm at alum dot mit dot edu

Norman Megill

Dec 30, 2004, 3:09:42 PM
H. Enderton wrote:

The master has spoken. 0^omega=0 it shall be for my project.
(Also thanks to Dave Ullrich and David Moews for useful comments.)

Off-topic: An entertaining read about your _A Mathematical
Introduction to Logic_ ("The bible for logicians worldwide"):
http://www.tow.com/musings/20000806_philosophy160a/
Quote:
I'm not so sure about the [new edition's] cover. In my
opinion, the new cover lacks the seriousness, the defiant
statement of "I am a hardcore book!" of the first edition's.

--
Norm Megill nm at alum dot mit dot edu

Mike Oliver

Jan 15, 2005, 10:25:24 PM

I'm a little late here -- but sure, one example pops to mind
right away: ordinal exponentiation, as usually
defined, satisfies

alpha^(beta+gamma) = alpha^beta * alpha^gamma

which by the way is probably the best justification
for the current choice of ordering the multiplier and
multiplicand (note that this is backwards from Cantor's
original notation; he had omega*2 = omega, 2*omega > omega).

Now if 0^omega were equal to 1, this would fail:

1 = 0^omega = 0^(1+omega) != 0^1 * 0^omega = 0
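
Under the conventional definition both sides are 0 and the identity
survives:

   0^(1+omega) = 0^omega = 0 = 0 * 0 = 0^1 * 0^omega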

0 new messages