
My number is bigger than yours.


William Elliot

Jan 14, 2002, 9:43:32 AM

So you've got heap big number maker, increasing f:N -> N

let g(n) = f^f^..^f(n)..(n) (n) where #iterations = f^n (n)
Notation: f^f^f(n)(n) (n) = [f^ [f^f(n)](n) ](n)

If f(n) = n^n: f^2 (2) = f(4) = 256
g(2) = f^f^..^f(2)..(2) (2) with 256 iterations
f(2) = 4; f^f(2) (2) = f^4 (2) = f^3 (4) = f^2 (256)
= f(256^256) = (256^256)^(256^256) = 256^256^257 > 256^^3
f^f^f(2) (2)(2) = f^256^256^257 (2)
and etc. for 256 iterations
What approximately is g(2), g(3) ?
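
As a rough Python sketch of the definition above (not part of the original
post, and only practical for a tame f such as f(n) = n+1, since for
f(n) = n^n even g(2) already calls for 256 iterations of a tower-building
map):

def iterate(f, k, n):
    # f^k(n) in the notation above: apply f to n, k times
    for _ in range(k):
        n = f(n)
    return n

def g(f, n):
    # g(n) = f^(f^n(n))(n): iterate f on n, f^n(n) times
    return iterate(f, iterate(f, n, n), n)

print(g(lambda n: n + 1, 5))   # f(n) = n+1 gives g(n) = 3n, so 15
# With f(n) = n**n the same call is hopeless: g(2) alone needs
# f^2(2) = 256 iterations of n -> n**n starting from 2.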

-- easy going for the real big
g_0 (n) = f(n)
g_1 (n) = (g_0)^(g_0 (n)) (n) = f^f(n) (n)
g_(k+1) (n) = (g_k)^(g_k (n)) (n)

g_w (n) = g_n (n)
g_w+1 (n) = (g_w)^(g_w (n)) (n) = (g_n)^(g_n (n)) (n)
g_w+k+1 (n) = (g_w+k)^(g_w+k (n)) (n)

g_2w (n) = g_w+w (n) = (g_w+n) (n)
continue this way defining g_alpha for any ordinal alpha

g_alpha+1 (n) = (g_alpha)^(g_alpha (n)) (n)
g_(k+1)w (n) = g_(kw+n) (n)
g_w^2 (n) = g_nw (n)
g_w^2+w (n) = g_(w^2 + n) (n)
g_w^3 (n) = g_nw^2 (n)
g_w^k (n) = g_nw^(k-1) (n)
g_w^w (n) = g_w^n (n)
g_w^w^w (n) = g_w^w^n (n)
g_w^w^... (n) = g_(w^^n) (n) where w^^n = w^w^..^w, n times

What's g_w^w^... (1) for f(n) = n+1, g_w^w^... (2) for f(n) = n^n ?

proceed until W, the first uncountable ordinal.
What's g_W (1) for f(n) = n+1, g_W (2) for f(n) = n^n ?
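
A minimal Python sketch of the finite stages (my own reading of the
definitions, not the poster's code): only g_0, g_1, ... and the diagonal
g_w(n) = g_n(n) are directly computable; the ordinal stages beyond that
are pure notation.

def iterate(f, k, n):
    for _ in range(k):
        n = f(n)
    return n

def g_level(f, k):
    # g_0 = f, g_(k+1)(n) = (g_k)^(g_k(n))(n)
    if k == 0:
        return f
    prev = g_level(f, k - 1)
    return lambda n: iterate(prev, prev(n), n)

def g_omega(f, n):
    # g_w(n) = g_n(n)
    return g_level(f, n)(n)

# With f(n) = n + 1: g_0(2) = 3, g_1(2) = 5, g_2(2) = 95, and
# g_omega already dwarfs every fixed finite level.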

----



Ioannis

Jan 14, 2002, 10:08:28 AM

William Elliot wrote:
>
> So you've got heap big number maker, increasing f:N -> N
>
> let g(n) = f^f^..^f(n)..(n) (n) where #iterations = f^n (n)
> Notation: f^f^f(n)(n) (n) = [f^ [f^f(n)](n) ](n)

Look up the Ackermann function and start using a more efficient notation
for towers and hypertowers using the hyper operator and then maybe we'll
bother looking at all this.

Using your inefficient notation, you are wasting space. Many of the
operations in exponentiation and hyperexponentiation can be written
quite compactly using the Ackermann and/or the hyperexponentiation
operator.

n^n (exponent to the right)

n^n^...^n = n@n (exponent to the left) (some authors use n^^n exponent
to the right)

n@...@n = n%n, (tower left, exponent to the right)

n%...%n = n&n, (tower right, exponent to the left)

continue thusly.
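
A hedged Python sketch (mine, not from the post) mapping these symbols
onto one general hyperoperator, reading the towers right-nested: ^ is
level 1, @ (i.e. ^^) is level 2, % is level 3, & is level 4.

def hyper(level, a, b):
    # level 1: a**b; level k+1: b-fold right-nested application of level k
    if level == 1:
        return a ** b
    if b == 1:
        return a
    return hyper(level - 1, a, hyper(level, a, b - 1))

print(hyper(2, 3, 3))   # 3@3 = 3^3^3 = 7625597484987
# hyper(3, n, n) and hyper(4, n, n) stand for n%n and n&n; already
# hyper(2, 4, 4) = 4^4^4^4 has roughly 10^154 digits.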

[snip]


> proceed until W, the first uncountable ordinal.
> What's g_W (1) for f(n) = n+1, g_W (2) for f(n) = n^n ?
>
> ----

--
Ioannis
http://users.forthnet.gr/ath/jgal/
___________________________________________
Eventually, _everything_ is understandable.

William Elliot

Jan 15, 2002, 4:21:09 AM
to jg...@ath.forthnet.gr

William Elliot wrote: My number is bigger than yours.

> So you've got heap big number maker, increasing f:N -> N
>
> let g(n) = f^f^..^f(n)..(n) (n) where #iterations = f^n (n)
> Notation: f^f^f(n)(n) (n) = [f^ [f^f(n)](n) ](n)
_ Many of the operations in exponentiation and hyperexponentiation can
_ be written quite compactly using the Ackermann and/or the
_ hyperexponentiation operator.

_ n^n (exponent to the right)
_ n^n^...^n = n@n (exponent to the left)
_ (some authors use n^^n exponent to the right)
n^(n^(n^n)) = n^^4

_ n@...@n = n%n, (tower left, exponent to the right)
_ n%...%n = n&n, (tower right, exponent to the left)
_ continue thusly.
n^^(n^^(n^^n)) = n^^^4 = n%4; some like ^^^, ^^^^, etc
n%(n%(n%n)) = n^^^^4 = n&4

g1(n,m) = f(n,m) = n^m

g2(n,1) = n
g2(n,2) = f(n,g2(n,1)) = n^n
g2(n,3) = f(n,g2(n,2)) = n^n^n
g2(n,m) = f(n,g2(n,m-1)) = n^^m

g3(n,1) = n
g3(n,2) = g2(n,g3(n,1)) = n^^n
g3(n,3) = g2(n,g3(n,2)) = n^^n^^n
g3(n,m) = g2(n,g3(n,m-1)) = n^^^m

gk(n,1) = n, k' = k-1
gk(n,2) = gk'(n,gk(n,1))
gk(n,3) = gk'(n,gk(n,2))
gk(n,m) = gk'(n,gk(n,m-1)) = n^^..^m, (^ repeated k times)

Thus I've described the ^^...^ with notation for each one.
Beyond this, there's the 'omega-th' or w-th step: gw(n,m) = gm(n,m)

Repeat as above, using gw(n,m) in place of f(n,m)
This repetition can be repeated for the 'ordinals' w, w^2, w^3,.. w^n,..
w^w,.. w^w^n,.. w^w^w,.. w^^n,.. w^w^w^... = epsilon.

Thus g_epsilon(n,n) is much much larger than anything you suggested.
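
A direct Python transcription of the finite levels above (a sketch of my
own; only the smallest inputs terminate in practice):

def g(k, n, m):
    # g1 = ^, g2 = ^^, g3 = ^^^, ... as defined above
    if k == 1:
        return n ** m
    if m == 1:
        return n
    return g(k - 1, n, g(k, n, m - 1))

def g_w(n, m):
    # the 'omega-th' step: gw(n,m) = gm(n,m)
    return g(m, n, m)

print(g(2, 3, 3))   # 3^^3 = 3^3^3 = 7625597484987
print(g_w(2, 3))    # g3(2,3) = 2^^^3 = 2^^4 = 65536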

Ioannis

Jan 15, 2002, 5:31:40 AM

William Elliot wrote:
[snip]
William Elliot wrote:
[snip]

> n^^(n^^(n^^n)) = n^^^4 = n%4; some like ^^^, ^^^^, etc
> n%(n%(n%n)) = n^^^^4 = n&4
>
> g1(n,m) = f(n,m) = n^m
>
> g2(n,1) = n
> g2(n,2) = f(n,g2(n,1)) = n^n
> g2(n,3) = f(n,g2(n,2)) = n^n^n
> g2(n,m) = f(n,g2(n,m-1)) = n^^m
>
> g3(n,1) = n
> g3(n,2) = g2(n,g3(n,1)) = n^^n
> g3(n,3) = g2(n,g3(n,2)) = n^^n^^n
> g3(n,m) = g2(n,g3(n,m-1)) = n^^^m
>
> gk(n,1) = n, k' = k-1
> gk(n,2) = gk'(n,gk(n,1))
> gk(n,3) = gk'(n,gk(n,2))
> gk(n,m) = gk'(n,gk(n,m-1)) = n^^..^m, (^ repeated k times)
>
> Thus I've described the ^^...^ with notation for each one.
> Beyond this, there's the 'omega-th' or w-th step: gw(n,m) = gm(n,m)
>
> Repeat as above, using gw(n,m) in place of f(n,m)
> This repetition can be repeated for the 'ordinals' w, w^2, w^3,.. w^n,..
> w^w,.. w^w^n,.. w^w^w,.. w^^n,.. w^w^w^... = epsilon.
>
> Thus g_epsilon(n,n) is much much larger than anything you suggested.

Fine. However what's the point of going this high? You've got some big
numbers there, but they are only a theoretical curiosity as soon as you
start throwing ordinals in your ops, in the same way a "typical"
increasing sequence of ordinals is a theoretical curiosity.

From a computational standpoint as soon as you start dropping ordinals
in, the numbers cease to have an "actual" meaning, so you are just
handwaving.

Your "big number" argument, as soon as you reach ordinals, is akin to
some of the programmers in the "C BIGNUM BAKEOFF" thread, claiming that
the output ~OU in a "virtual machine" wins all programs. Of course it
does. But it's not producible. (In the case of the "virtual machine",
the algorithm will run out of time trying to shift all the bits into
1's).

I don't wish to impinge badly upon Set Theoreticians here, but I
consider all infinities except _two_ rather moot.

The only two infinities that the Mathematical human brain can clearly
perceive and differentiate _against_, are Aleph_0 and c [*]. The rest
are simply interesting mental constructs. (And incidentally, that's why
imo the Continuum Hypothesis creeps in: Precisely because Aleph_0 and c
are the actual "limits" in this mental differentiation [**]).

Sure, comparing the members of increasing sequences of cardinals and
ordinals acquires meaning, but only within the context of the
sequence construct. _Outside_ the construct, these "larger" infinities
are just theoretical curiosities.

No offense to Set Theorists here and in the following:

[*] Only comparisons between the following actually make "mental" sense,
imo:

F = {finite, infinite (Aleph_0), infinite (c)}

[**] If one claims that one understands the difference between (finite,
infinite (Aleph_0)) and (finite, infinite (Aleph_26878)), (that is, if
one claims that one understands the "difference of differences" between
the corresponding members) I'd be very reluctant to trust their
judgement, regardless of the fact that in "normal" math parlance, it is
true that Aleph_0 < Aleph_26878.

Other comparisons are equally impossible to fathom:

The "difference" between: (infinite (Aleph_10), infinite (Aleph_26)) has
a meaning only vis-a-vis the index. The entire comparison is moot, since
we cannot fathom how much "larger" Aleph_26 is from Aleph_10. (which
traces of course back to the brain short-circuit because of our
inability to resolve the CH to our liking. (By not switching axioms that
is)).

To conclude, the sentence "building increasing sequences of cardinals or
ordinals", should have quotes around the word "increasing" as well.

Trying to prevent backfiring from some of the experts here, no: I cannot
visualize the "difference of sizes" between, say, |R and PowerSet(|R). I
can perhaps visualize a _particular_ PowerSet of |R, but as far as the
difference of _sizes_ is concerned, the size of PowerSet(|R), for all
practical purposes, for me might as well have been c.

Of course, my inability to visualize between other infinite members not
in F, may very well be the result of my natural stupidity. :*(((((

Lovecraftesque

Jan 15, 2002, 2:31:23 PM

It looks involved, but not really that big. For obtaining mind-boggling
large numbers in a simple way have a look at the Ackermann function.

Ioannis

Jan 15, 2002, 3:30:54 PM

I wrote:
[snip]

> Trying to prevent backfiring from some of the experts here, no: I cannot
> visualize the "difference of sizes" between, say, |R and PowerSet(|R). I
> can perhaps visualize a _particular_ PowerSet of |R, but as far as the
> difference of _sizes_ is concerned, the size of PowerSet(|R), for all
> practical purposes, for me might as well have been c.
>
> Of course, my inability to visualize between other infinite members not
> in F, may very well be the result of my natural stupidity. :*(((((

And of course, the entire previous article concerns _infinite_ ordinals,
not all ordinals (i.e. finite ones). Substitute "infinite ordinal" for
"ordinal" to avoid misfirings of neurons.

Peter Webb

Jan 15, 2002, 9:50:32 PM

<SNIP>

"Trying to prevent backfiring from some of the experts here, no: I cannot
visualize the "difference of sizes" between, say, |R and PowerSet(|R). I
can perhaps visualize a _particular_ PowerSet of |R, but as far as the
difference of _sizes_ is concerned, the size of PowerSet(|R), for all
practical purposes, for me might as well have been c."

I, personally, find it hard to visualize the difference of sizes say between
10,000 and 20,000. Show me a big pile of pencils and I wouldn't really be
able to tell you whether there were 10,000 or 20,000 without counting or
working it out.

Doesn't mean there is no difference.

And I think "practical purposes" is rather shaky ground for a mathematical
critique!

.


Ioannis

Jan 16, 2002, 5:11:01 AM

Peter Webb wrote:
[snip]

> I, personally, find it hard to visualize the difference of sizes say between
> 10,000 and 20,000. Show me a big pile of pencils and I wouldn't really be
> able to tell you whether there were 10,000 or 20,000 without counting or
> working it out.
>
> Doesn't mean there is no difference.

I don't think you quite understand what I am talking about. The
differences:

(20,000, 10,000) and (Aleph_27, Aleph_10) to pick one particular
example, are not of the same "conceptual" order.

What I mean is that only difference combos from F = {finite,
inf(Aleph_0), inf(c)}

are "conceptually" meaningful. I am not doubting the "semantic"
differences between other infinite members not in F.

The difference (Aleph_27, Aleph_10) is "semantically" meaningful (via
the inherited or constructed cardinal order) but "conceptually" it is
nonsense.

Denis Feldmann

Jan 16, 2002, 6:14:40 AM

"Ioannis" <morp...@olympus.mons> wrote in message news:
3C4551...@olympus.mons...

> Peter Webb wrote:
> [snip]
> > I, personally, find it hard to visualize the difference of sizes say
> > between 10,000 and 20,000. Show me a big pile of pencils and I wouldn't
> > really be able to tell you whether there were 10,000 or 20,000 without
> > counting or working it out.
> >
> > Doesn't mean there is no difference.
>
> I don't think you quite understand what I am talking about. The
> differences:
>
> (20,000, 10,000) and (Aleph_27, Aleph_10) to pick one particular
> example, are not of the same "conceptual" order.
>
> What I mean is that only difference combos from F = {finite,
> inf(Aleph_0), inf(c)}
>
> are "conceptually" meaningful. I am not doubting the "semantic"
> differences between other infinite members not in F.

Are you sure? I, personally, have not much trouble to "visualize" the set
of real functions, of cardinality 2^c.

William Elliot

Jan 16, 2002, 6:44:27 AM

My number is bigger than yours.
Subject: Re: My number is bigger than yours.
Author: Dave L. Renfro <renf...@cmich.edu>
_ A lot more about the hierarchy Elliot was describing can be
_ found in these two sci.math threads:
_ http://mathforum.org/epigone/sci.math/gunsnersteh
_ http://mathforum.org/epigone/sci.math/shingvixtwimp
Ok you big headed guys, your number is bigger.
At least I'm on the right track.

_ I think Elliot's "gw(n,m)" in his Jan. 15 post (this thread)
_ is essentially the Ackermann function. But it does appear that
_ he's nowhere near the Ackermann function in his Jan. 14 post.
Basically the same ordinal thing using a function of one variable instead
of two. A little slower at first unless you seed it with something big.

Ackermann function:
1. A(1, n) = 2 + (n + 3) - 3
2. A(2, n) = 2 (n + 3) - 3
3. A(3, n) = 2 ^ (n + 3) - 3
4. A(4, n) = 2 ^^ (n + 3) - 3
5. A(5, n) = 2 ^^^ (n + 3) - 3
6. etc...
Ok, use gw(n,m) starting with g1(n,m) = A(n,m).
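
As a hedged sketch, the usual two-argument Ackermann-Peter recursion in
Python, which satisfies the identities listed above:

def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(1, 5))   # 7   = 2 + (5+3) - 3
print(A(2, 5))   # 13  = 2*(5+3) - 3
print(A(3, 5))   # 253 = 2^(5+3) - 3
# A(4, n) = 2^^(n+3) - 3 is already a power tower; the naive recursion
# above blows the stack long before it can return A(4, 2).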

A. My number is bigger than yours because it's 1 + your number.
B. No it's not, my number is bigger because it's twice your number.
A. Oh yea, mine is thrice yours.
B. So what, mine is yours squared.
A. Mine's even bigger, yours cubed a zillion times.
...

Once upon a time Cantor and Conway jousted to see who counted the most.
The Wizard of Oz was referee; only the unimaginable was considered foul.
Zeno and Peano were the judges and Turing was asked to tally the results.

Zeno was found biased in favor of Conway as he was fond of the thickness
of Conway's number. Peano on the other hand was mesmerized by Cantor for
it seemed to Peano that Cantor was endlessly repeating his whole numbers
again and again and again. In the end, when Turing's tally hung, the
Wizard of Oz declared foul, as the contest required an _unimaginable_
eternity.

Riddle of the Day: The contest was to have a break at half time.
When was half time?

Ioannis

Jan 16, 2002, 8:14:11 AM

Denis Feldmann wrote:
[snip]

> Are you sure? I, personally, have not much trouble to "visualize" the set
> of real functions, of cardinality 2^c.
[snip]

Denis, again, I am not doubting the fact that |P(S)|>|S|. This very
fact, which I of course accept unequivocably, is enough to convince one,
that indeed when one looks at the size of a given powerset |P(S)|, one
indeed has a larger cardinal than that of the size of |S|.

What I am doubting, is the "inherent" capability of the brain to
"differentiate" conceptually between (in this case) 2^c and c, contrary
to the way the mind can "see" the _difference_ between the sizes of the
sets |R and |N.

In other words, I can "see" how much larger |R is from |N, but I cannot
"see" how much larger P(|R) is from |R.

In my case, this fault probably stems from my laziness to properly
impose some sort of ordering on such a set (using the AC) and put it
side to side with a known set and "see" both at the same time. The key
word here is "comparison".

I don't know if there exists a strict enough language for me to express
what I want to say [*], but I suspect that it is not coincidental that
the AC is, if I recall correctly equivalent to the CH.

[**] Could you please demonstrate in some way how much larger |P(R)| is
from say, |R|?. I.e. could you give me a "measure" of the difference
which could easily be "visualized", preferably something that is more
elaborate than just: 2^c > c?

[*] Here is a nonsensical "pseudo-mathematical" explanation to how I
perceive infinities:

Let R(S_1, S_2) to be the regular equivalence relation that defines
finite/infinite cardinals.

Define R', as follows:

R'(S_i, S_j) <=>

{|S_i| = |S_j| + n, n in |N, iff BOTH S1 and S2 are finite,
R(S_i, S_j), otherwise}

Define S_i recursively as follows:

S_1 any finite set,
S_2 = any infinite set of cardinality Aleph_0
S_i+1 = P(S_i), for i>=2, [P(S) = PowerSet(S)]

Clearly, ~R'(S_i, S_j), for all i, j in I, i<>j

Main (insane) claim: "Conceptually", R' partitions the set of cardinals
into 3 classes only.

My mind conceptually perceives: R'(S_i+1, S_i), for i>3, so I perceive
only 3 classes. The class of finite sets, the class of sets of
cardinality Aleph_0 and the class of sets of cardinality > Aleph_0.

Of course the above perception is clearly "faulty" in math parlance,
since for any i: |S_i+1| > |S_i|, by Cantor, however see [**].

Thanks much.

Denis Feldmann

Jan 16, 2002, 12:57:46 PM

"Ioannis" <morp...@olympus.mons> wrote in message news:
3C457C...@olympus.mons...

> Denis Feldmann wrote:
> [snip]
> > Are you sure? I, personally, have not much trouble to "visualize" the
> > set of real functions, of cardinality 2^c.
> [snip]
>
> Denis, again, I am not doubting the fact that |P(S)|>|S|. This very
> fact, which I of course accept unequivocably, is enough to convince one,
> that indeed when one looks at the size of a given powerset |P(S)|, one
> indeed has a larger cardinal than that of the size of |S|.
>
> What I am doubting, is the "inherent" capability of the brain to
> "differentiate" conceptually between (in this case) 2^c and c, contrary
> to the way the mind can "see" the _difference_ between the sizes of the
> sets |R and |N.

I don't know exactly what you mean. Most people have trouble at first
admitting that |N^2| = |N| or that |R^2| = |R| (Cantor: "I see it, but I
don't believe it"). So I would say it is easy to admit there are many more
functions than reals (again, many people find it hard to believe at first
that there are only c continuous functions)


>
> In other words, I can "see" how much larger |R is from |N, but I cannot
> "see" how much larger P(|R) is from |R.

Do you? Again, I have found wildly different intuitions there (not to
mention cranky ones). My first reaction would be that IR is barely larger
than IN; I found it hard to believe not only that ZFC is consistent with
c=aleph_aleph_w, but also that the intuition of many mathematicians was
that c was weakly inaccessible, for instance.


>
> In my case, this fault probably stems from my laziness to properly
> impose some sort of ordering on such a set (using the AC) and put it
> side to side with a known set and "see" both at the same time. The key
> word here is "comparison".


Probably not. An ordering on functions is easy (lexicographic one, if you
can well-order IR). But do you visualize the result?


>
> I don't know if there exists a strict enough language for me to express
> what I want to say [*], but I suspect that it is not coincidental that
> the AC is, if I recall correctly equivalent to the CH.

But you recall wrongly. In fact, it is the other way around; there is
absolute independence between ZFC and all the possible versions of "not-CH"

Ioannis

Jan 16, 2002, 2:41:17 PM

Denis Feldmann wrote:
[snip]

> I don't know exactly what you mean. Most people have trouble at first
> admitting that |N^2| = |N| or that |R^2| = |R| (Cantor: "I see it, but I
> don't believe it"). So I would say it is easy to admit there are many more
> functions than reals (again, many people find it hard to believe at first
> that there are only c continuous functions)

No. I personally have no problem with finite operations on cardinals. In
fact they are for me now as "intuitive" as they were when I first saw
them 16 years ago.

> > In other words, I can "see" how much larger |R is from |N, but I cannot
> > "see" how much larger P(|R) is from |R.
>
> Do you? Again, I have found wildly different intuitions there (not to
> mention cranky ones). My first reaction would be that IR is barely larger
> than IN; I found it hard to believe not only that ZFC is consistent with
> c=aleph_aleph_w, but also that the intuition of many mathematicians was
> that c was weakly inaccessible, for instance.

Precisely. Isn't the above a major "intuitive" annoyance? If
consistent(ZFC,c=aleph_aleph_w), then what I claim makes even more
sense: Namely that "further" ordering (above 2^Aleph_0) is, in fact,
"conceptually" moot, apart from it being a theoretical construct.

As for your comment on whether one can "intuitively" perceive how much
larger c is than Aleph_0, you are right: c > Aleph_0, but "how much"
larger depends largely as you say on intuition.

> Probably not. An ordering on functions is easy (lexicographic one, if you
> can well-order IR). But do you visualize the result?

I can see such a lexicographic ordering, and I can see that |f's| > c,
but I honestly have no clue as to how to actually compare the size of
the generated list against the size of the reals. In this case, the
inequality |f's| > c, is to my mind as useful as, say, the True
statement P:

P = {Any number larger than the largest known prime at the time P is
uttered is either prime or composite}.

> > I don't know if there exists a strict enough language for me to express
> > what I want to say [*], but I suspect that it is not coincidental that
> > the AC is, if I recall correctly equivalent to the CH.
>
> But you recall wrongly. In fact, it is the other way around; there is
> absolute independency between ZFC and all the possible versions of "not-CH"

Yes, sorry, that was a blunder.

Anyway, I don't know if language is sufficient to describe what I mean,
but what I am saying is that the semantics and logic of the underlying
mathematical model that describes infinite cardinals eventually deviate
from what I would consider "humanly intuitive" or even "mathematically
intuitive" for me.

In other words, using my previous notation: Even though the semantics
say: ~R'(S_i+1, S_i) for all i>2, the very meaning of ~R'(S_i+1, S_i),
is conceptually moot.

Bill Taylor

Jan 17, 2002, 12:30:06 AM

"Denis Feldmann" <denis.f...@wanadoo.fr> writes:

|> Are you sure? I, personally, have not much trouble to "visualize" the set
|> of real functions, of cardinality 2^c.

We need to be careful here!

There may be no difficulty in visualising a SET of cardinality c or 2^c,
but still great difficulty in visualising a CARDINAL NUMBER of cardinality
c or 2^c ... as this requires visualising an ordinal of that cardinality.

Whether or not this was what the original poster had in mind I don't know.

------------------------------------------------------------------------------
Bill Taylor W.Ta...@math.canterbury.ac.nz
------------------------------------------------------------------------------
Set theory is a shotgun marriage - between well-ordering and power-set.
The two parties get along OK; but they hardly seem made for each other.
------------------------------------------------------------------------------

Ioannis

Jan 17, 2002, 8:09:17 AM

Bill Taylor wrote:
[snip]

> Whether or not this was what the original poster had in mind I don't know.

In an attempt to bring this thread back on track (since I was the one
responsible for derailing it in the first place), William's post could
be modified to create a "game" similar to the one suggested in the
"Contest: C BIGNUM BAKEOFF" thread.

I don't think I will attempt as precise a formulation of the rules as
David Moews did in the corresponding programming thread, so I hope my
specifications below do not contain too many inconsistencies.

Given some standard terminology, perhaps using hyperexponents a^^^...^b
and using as much space for preliminary definitions as one desires, it
would be interesting to see how big a FINITE number, say, in one line of
text, could be described, using "regular" ascii math text.

That is:

1) ASCII space for preliminary list of mathematical definitions: Unlimited,
but see 2).
2) Inductive list definitions are allowed provided they do not generate
infinite ordinals at ANY step.
3) ASCII space for the actual number description: k chars. (Say 10)
4) Names of functions and macros are taken from the set of all finite
strings OF LENGTH m<k (say 10) whose individual elements consist of the
chars: {'a'..'z'} of the English Alphabet, with indistinguishable
upper/lower case letters. (That is "Abc"="abc", etc)
5) No numeric chars allowed in function and macro names. (I.e. no
objects like "abc2dr(x)")
6) Assume the "regular" lexicographic ordering of finite strings of
maximum length m (say 10). That is:
a<aa<aaa...<aaaaaaaaaa(m chars)<b<ba<baa<...<bzz<bzza<...<bzzzzzzzzz(m
chars)<...<z<za<...<zzzzzzzzzz(m chars), etc.
7) Infinite ordinals or functions of infinite ordinals are not allowed.

Here is a sample entry for example:

Preliminary List:

Preliminary def (the 3 argument Ackermann function)

definition #1:

a(k,m,n)={m+n, if k=1,
m, if n=1,
a(k-1,m,a(k,m,n-1)), otherwise}

definition #2: (macro for arguments)

b(n) = a(n,n,n)

definition #3 (definition for iterations of b)
c^m(n) = b(b(...(b(n))...)) (m-times)

definition #4 (macro for iterations of c)
d(n) = c^n(n)

definition #5 (definition for iterations of d)
d^m(n) = d(d(...(d(n))...)) (m-times)

definition #6 (macro for iterations of d)
e(n) = d^n(n)

definition #7 (definition for iterations of e)
e^m(n) = e(e(...(e(n))...)) (m-times)
...

Continue thus, using all letters of the English Alphabet all the way
down to z.

Sample Entry:
z(9)
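
A hedged Python sketch of the preliminary list (my own transcription;
purely symbolic beyond the first couple of letters, since already
d(2) = b(b(2)) = a(4,4,4) = 4^4^4^4 is out of reach):

def a(k, m, n):
    # definition #1: a(1,m,n) = m+n, a(2,m,n) = m*n, a(3,m,n) = m^n, ...
    if k == 1:
        return m + n
    if n == 1:
        return m
    return a(k - 1, m, a(k, m, n - 1))

def b(n):                       # definition #2
    return a(n, n, n)

def iterate(f, times, n):       # the c^m / d^m pattern (definitions #3, #5)
    for _ in range(times):
        n = f(n)
    return n

def d(n):                       # definition #4: d(n) = c^n(n)
    return iterate(b, n, n)

def e(n):                       # definition #6: e(n) = d^n(n)
    return iterate(d, n, n)

print(b(2), b(3))   # a(2,2,2) = 4, a(3,3,3) = 27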

Compare ANY TWO of the following. For any such two comparisons, you may
assume a list similar to the above using your own definitions, provided
you extend your definitions all the way down to the maximum common
length in some way (But it is not necessary that the list definitions
are the same, as in my list above). That is, if you compare the
{ab(abc(9)), ab(abc(9))} entries, define your full lexicographic list
for words of length 3 (all the way to zzz). If you compare the {z^9(2),
z^2(9)} entries, assume the list has gone only down to z, as above.

z^9(2) (6 chars)
z^2(9) (6 chars)
aa(8) (5 chars)
z(z(z(9))) (10 chars)
abc(ab(5))
ab(abc(9))
ab(abc(9))
a(abcd(9))
zzzzzzz(9)
zzzzz^9(9)
zzzz^99(9)
zzz^999(9)
zz^9999(9)
z^99999(9)
z^z(9)(9)

Now compare ALL the previous together, assuming a FULL list of length 10
of all function and macro words has been built. :*)

Will the entry zzzzzzz(9) always be THE winning entry? Justify.

Heiner Marxen

Jan 17, 2002, 8:23:53 AM

In article <pan.2002.01.15.11...@yahoo.com>,

Lovecraftesque <Lovecra...@yahoo.com> wrote:
>It looks involved, but not really that big. For obtaining mind-boggling
>large numbers in a simple way have a look at the Ackermann function.

For even larger numbers look up "Goodstein sequences".
--
Heiner Marxen hei...@insel.de http://www.drb.insel.de/~heiner/

William Elliot

Jan 18, 2002, 3:06:07 AM
to morp...@olympus.mons

William's post could be modified to create a "game" similar
to the one suggested in the "Contest: C BIGNUM BAKEOFF" thread.

_ Given some standard terminology, <snip>
_ Will the entry zzzzzzz(9) always be THE winning entry?
Nope. What you did with (a,b,..z), I do with (a,b,..z;A,B,..Z)
Instead of using digits (0,1,2,3,4,5,6,7,8,9) I use hexadecimal
digits (0,1,2,3,4,5,6,7,8,9,!,@,#,$,%,&). Hence the big shot is
ZZZZZZZ(&)

_ Eventually, _everything_ is understandable.
The incomprehensible takes forever to understand.

Ioannis

Jan 18, 2002, 5:13:01 AM

William Elliot wrote:
>
> morp...@olympus.mons
> William's post could be modified to create a "game" similar
> to the one suggested in the "Contest: C BIGNUM BAKEOFF" thread.
>
> _ Given some standard terminology, <snip>
> _ Will the entry zzzzzzz(9) always be THE winning entry?
> Nope. What you did with (a,b,..z), I do with (a,b,..z;A,B,..Z)
> Instead of using digits (0,1,2,3,4,5,6,7,8,9) I use hexadecimal
> digits (0,1,2,3,4,5,6,7,8,9,!,@,#,$,%,&). Hence the big shot is
> ZZZZZZZ(&)

Ok, you have to be careful here: Using my construction rules as stated
explicitly in my previous article, zzzzzzz(9) is the "winning" entry.
Now if we extend those rules to include distinguishable upper/lower
letters and hexadecimal digits, then you are of course right: ZZZZZZZ(&)
is the "winning" entry.

Quotes around "winning", because I haven't actually checked it
rigorously.

> _ Eventually, _everything_ is understandable.
> The incomprehensible takes forever to understand.

[snip]

Dave L. Renfro

Jan 15, 2002, 5:09:01 PM

Lovecraftesque <Lovecra...@yahoo.com>
[sci.math Tue, 15 Jan 2002 11:31:23 -0800]
http://mathforum.org/epigone/sci.math/flelsmingsnix

wrote (in reply to William Elliot's Jan. 14 post)

> It looks involved, but not really that big. For obtaining
> mind-boggling large numbers in a simple way have a look at
> the Ackermann function.

I think Elliot's "gw(n,m)" in his Jan. 15 post (this thread)
is essentially the Ackermann function. But it does appear that
he's nowhere near the Ackermann function in his Jan. 14 post.

Dave L. Renfro

Dave L. Renfro

Jan 15, 2002, 10:36:09 PM

Ioannis <morp...@olympus.mons>
[sci.math Tue, 15 Jan 2002 12:31:40 +0200]
http://mathforum.org/epigone/sci.math/flelsmingsnix/3C4405...@olympus.mons

wrote (in part, about William Elliot's Jan. 15 post)

> Fine. However what's the point of going this high? You've got
> some big numbers there, but they are only a theoretical
> curiosity as soon as you start throwing ordinals in your ops,
> in the same way a "typical" increasing sequence of ordinals
> is a theoretical curiosity.
>
> From a computational standpoint as soon as you start
> dropping ordinals in, the numbers cease to have an "actual"
> meaning, so you are just handwaving.

William Elliot's use of ordinals was simply a notational
device--all the functions he gave are computable. [At least,
the general thrust of what he was doing is fine. I didn't
check to see if he got all the details correct.]

Suppose I define a sequence of increasingly faster growing
functions this way:

f_1(x) is x^1
f_2(x) is x^2
f_3(x) is x^3
*
*
*
f_n(x) is x^n
*
*
*

Now 2^x grows faster than any of these, but I've run out
of natural number subscripts. O-K, so I'll just use the symbol
'w' as a subscript. Nothing about 'w' itself has to be infinite.
If, for convenience, you want to consider 'w' as being the
ordinal "omega", fine, but you don't have to.

f_w(x) is 2^x.

Now consider 3^x. This grows infinitely faster than 2^x, so
how can I denote it in a way that preserves the sequential
listing I've done so far? How about this --->>>

f_{w+1}(x) is 3^x
f_{w+2}(x) is 4^x
*
*
*
f_{w+n}(x) is (n+2)^x
*
*
*

Again, this is just notation. When I write 'w+4', there is
no necessity to consider this as the ordinal "omega plus 4".
If you wish, you can use ordered pairs at this level, and
denote 'w+4' as (1,4). But after ordered pairs comes ordered
triples, then ordered 4-ples, ..., then square matrices, then
3-dimensional matrices, then . . . [Moral: Ordinal number
notation is much more convenient.]

Now consider 2^(x^2). This grows faster than anything
in the list I just finished, but now I've exhausted all
notational ways of writing 'w+n' for positive integers n.
So let's denote 2^(x^2) by f_{2w}(x). Continuing . . .

f_{2w+1}(x) is 2^(x^3)
f_{2w+2}(x) is 2^(x^4)
*
*
*
f_{2w+n}(x) is 2^(x^(n+2))
*
*
*
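
A small sketch (mine, hedged) making the "ordered pairs" remark literal:
up to this point no genuinely infinite object is needed, since a pair
(j, n) standing for j*w + n indexes every function named so far.

def F(sub, x):
    j, n = sub                  # the subscript j*w + n
    if j == 0:                  # f_n(x) = x^n, for n >= 1
        return x ** n
    if j == 1:                  # f_(w+n)(x) = (n+2)^x, so f_w(x) = 2^x
        return (n + 2) ** x
    if j == 2:                  # f_(2w+n)(x) = 2^(x^(n+2)), so f_2w(x) = 2^(x^2)
        return 2 ** (x ** (n + 2))
    raise ValueError("subscripts of w^2 and beyond need a richer notation")

print(F((0, 3), 5), F((1, 0), 5), F((2, 0), 5))   # 125, 32, 33554432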

Now consider 3^(x^2), 3^(x^3), 3^(x^4), ...
4^(x^2), 4^(x^3), 4^(x^4), ...
5^(x^2), 5^(x^3), 5^(x^4), ...
*
*
n^(x^2), n^(x^3), n^(x^4), ...
*
*
*

If I want to continue my growths past any of these,
say for example to something like 2^(2^(x^2)), 2^(2^(x^3)),
2^(2^(x^4)), ..., I'll need something that doesn't include
anything expressible as nw+m for positive integers n and m,
and which (in order to be useful in the present context) is
suggestive of something larger than any of these symbols.
A natural subscript for something "beyond anything that uses
the nw+m subscripts" is w^2.

Of course, as one moves up the notation scale, the functions
I'm describing don't increase anywhere near as fast as those
that William Elliot is describing. His w'th function is already
at the Ackermann function growth level, whereas what I'm doing
will take much, much longer to get to the Ackermann growth level.

However, if you go far enough, some really strange things
will happen. There will be computable ordinals \alpha such
that the \alpha'th level of what I was doing will equal the
\alpha'th level of what Elliot was doing. [I think this has
to do with something called the fixed-point theorem in
computability theory.] In fact, there are even much slower
growing hierarchies that will catch up "from time to time"
with what Elliot was doing. If anyone is interested in more
about this, try various word and phrase searches using
"slow growing hierarchy" and "fast growing hierarchy" at

http://www.google.com/advanced_search

A lot more about the hierarchy Elliot was describing can be
found in these two sci.math threads:

http://mathforum.org/epigone/sci.math/gunsnersteh
http://mathforum.org/epigone/sci.math/shingvixtwimp

Dave L. Renfro

Ioannis

Jan 19, 2002, 6:09:53 AM

[snip]

> > From a computational standpoint as soon as you start
> > dropping ordinals in, the numbers cease to have an "actual"
> > meaning, so you are just handwaving.
>
> William Elliot's use of ordinals was simply a notational
> device--all the functions he gave are computable. [At least,
> the general thrust of what he was doing is fine. I didn't
> check to see if he got all the details correct.]
>
> Suppose I define a sequence of increasingly faster growing
> functions this way:

[snip for brevity]

Thanks Dave. I can see now what you are implying after reading a couple
of threads in your refs. That the fact that f _names_ can use some
infinite ordinals notation-wise does not necessarily imply that the
function _range_ includes infinite ordinals. Aha!

Back to those threads...

> http://www.google.com/advanced_search
>
> A lot more about the hierarchy Elliot was describing can be
> found in these two sci.math threads:
>
> http://mathforum.org/epigone/sci.math/gunsnersteh
> http://mathforum.org/epigone/sci.math/shingvixtwimp
>
> Dave L. Renfro

--

Dave L. Renfro

Jan 20, 2002, 9:54:47 AM

Ioannis <morp...@olympus.mons>
[sci.math Sat, 19 Jan 2002 13:09:53 +0200]
http://mathforum.org/epigone/sci.math/flelsmingsnix/3C4953...@olympus.mons

wrote (in part, responding to an earlier post of mine):

>> William Elliot's use of ordinals was simply a notational
>> device--all the functions he gave are computable. [At least,
>> the general thrust of what he was doing is fine. I didn't
>> check to see if he got all the details correct.]
>>
>> Suppose I define a sequence of increasingly faster growing
>> functions this way:
>
> [snip for brevity]
>
> Thanks Dave. I can see now what you are implying after
> reading a couple of threads in your refs. That the fact
> that f _names_ can use some infinite ordinals notatiowise,
> does not necessarily imply that the function _range_ includes
> infinite ordinals. Aha!

Actually, to get really large ordinal subscripts, those that are
way past \epsilon_0 = w^w^w^..., we DO resort to using these
functions as ordinal-valued. If you have problems with ordinals
as completed quantities, or whatever, then what I wrote at

http://mathforum.org/epigone/sci.math/flelsmingsnix/zmhi097wz96w@legacy

will take you up to the \epsilon_0'th operation without having
to confront anything "essentially infinite". However, the
REALLY fast-growing functions require much larger ordinal
subscripts, and you can't get these without bringing some of
the "essentially infinite" aspects of ordinal number theory
onto the scene. [This vague statement can be made precise and
the precise version has been proved.] Here's one way to get
these larger ordinals--a way that makes use of previously defined
functions in this hierarchy.

In the same way that one can define addition, multiplication,
and exponentiation of ordinals using transfinite induction,
these higher order operations can be defined for ordinals.
A curious thing happens when we do this that will allow us
to leap-frog to even higher ordinals. These higher ordinals
can then be used to define even more rapidly growing functions
(way past \epsilon_0) on natural numbers, which can also be
extended in a natural way to really high order operations on
ordinals, which can then be used to obtain even larger
ordinals, . . .

It is fairly immediate to see that any subset of a well-ordered
set is itself well-ordered in the ordering that the subset inherits
from its parent set. [Recall that "well-ordered" means each
non-empty subset has a smallest element.] Since each well-ordered
set is, up to order-isomorphism, an ordinal, we can associate
ordinals to various subsets of a set of ordinals (i.e. obtain
the "order-type" of the subset).

What follows also holds for uncountable ordinals, but since our
focus is on countable ordinals (in fact, relatively small
initial segments of the recursive ordinals), assume that
everything takes place in w_1 (the set of countable ordinals).

Before looking at something that will take us to a really
large ordinal, let's warm up by looking at how the limit
ordinals behave:

w, 2w, 3w, 4w, ...

We can picture these this way. For each positive integer m, let
E_m be the set {m - 1/n: n = 1, 2, 3, ...}. Then, in the usual
ordering on the reals, we have (as equality of order-types)

w is E_1,
2w is E_1 union E_2,
3w is E_1 union E_2 union E_3, and so on.

If we only look at the collection {mw: m = 1, 2, 3, ...}, then
we have a collection of limit ordinals whose order type is w.

However, there are larger limit ordinals.

The first one is w^2 = sup{w, 2w, 3w, ...}. [Every non-empty set
of ordinals has a least upper bound, and when the set consists
of countably many countable ordinals, then that least upper
bound winds up being countable (the proof isn't very hard).]
The next one is w^2 + w, then comes w^2 + 2w, etc.

Incidentally, here's how you can visualize w^2 , w^2 + w, etc.

w^2 is immediate from the above, being the order-type of the
set "UNION(m=1 to infinity) of E_m".

But now we've used up the number line, in the sense that we
can't go further to the right. So to go past w^2, we need to
compress what we've done so far.

If we put a copy of E_1 in the interval [0, 1/2), a copy of
E_1 union E_2 in [1/2, 2/3), a copy of E_1 union E_2 union E_3
in [2/3, 3/4), etc., then we'll have a set in [0,1) whose
order-type is w^2. [At first glance, we have (by definition of
ordinal addition in terms of ordered set representatives)
w + (w + 2w) + (w + 2w + 3w) + .... However, it is easy to see
that this is w^2, since it's the sup of the set {w, w+w+2w = 4w,
w+w+2w+w+2w+3w = 10w, ...}] Call this set F_1.

Then F_1 union E_2 has order type w^2 + w,
F_1 union E_2 union E_3 has order type w^2 + 2w, etc.

To get 2*w^2, do in [1,2) what I did in [0,1) to get F_1, and
call the result F_2. Then F_1 union F_2 has order type 2*w^2,
F_1 union F_2 union E_3 has order type 2*w^2 + w, etc.
Continuing in this way we can obtain w^3 as the set
"UNION(m=1 to infinity) of F_m". Now compress everything we've
done so that it fits into [0,1), and start over again. If you
do a countable sequence of such compression stages and place
the results in the intervals [n, n+1) (higher values of n get
higher iterates of compressions), then we'll get w^w.

O-K, let's get back to the limit ordinals. The limit ordinals
less than w^2, (e.g. w, 2w, 3w, ...) have order-type w. The limit
ordinals less than w^3 (e.g. w, 2w, ..., w^2, w^2 + w, ..., 2*w^2,
2*w^2 + w, ..., 3*w^2, ... ... 4*w^2, ... ... 5*w^2, ... ...)
have order-type w^2. In general, the limit ordinals less than
w^(n+1), for each n = 1, 2, 3, ..., have order-type w^n.

Now we're ready to see something curious happen. The limit
ordinals less than w^w have order-type w^w. In other words, the
ordinal w^w is so large (or "complicated", if you wish) that
even after you prune it by taking out all the non-limit ordinals,
you haven't really changed it. Or, to put it another way, w^w is
the w^w'th limit ordinal.

Since limit ordinals are the outputs of the function

\alpha |--> w*(\alpha),

w^w winds up being a fixed point of this function.

Here's a function that prunes ordinals out at a MUCH higher rate:

\alpha |--> w^(\alpha).

The outputs of this function are w, w^2, w^3, ..., w^w, w^(w+1),
... ... w^w^w = w^(w^w), ....

This function also has a fixed point, but you have to go much
further to find the first one. The smallest fixed point is our
friend \epsilon_0 = sup{w, w^w, w^w^w, ...}.

Without going through the details, here's what can be done:

Each of the natural number operations *, ^, ^^, etc.
(let's denote these by the function names f_2, f_3, f_4, etc.)
can be extended in a natural way by transfinite induction as
functions on the countable ordinals (uncountable ordinals as well,
but let's keep everything in w_1 for this discussion), and the
outputs will all be countable ordinals. Moreover, each of these
2-variable functions will have fixed points when 'w' is placed in
the correct slot. [These operations are not commutative, and if
you put 'w' in the wrong place there will not be any fixed points.]

These fixed points quickly get very large. Already, the smallest
fixed point of f_3 can barely be described. The smallest fixed
points of both f_4 and f_5 require the use of "f_5 notation" to
describe them (just as the fixed points of f_2 and f_3 require
the use of "f_3 notation" to describe them), the smallest
fixed points of f_6 and f_7 require the use of "f_7 notation" to
describe them, and so on in pairs like this.

The Ackermann function A(m,n) is basically f_n evaluated at (m,n)
(or f_m evaluated at (m,n); since I haven't said precisely what
the f_n's are, I can't be more specific than this), and we'll
denote it by f_w. Using transfinite induction one can define
f_w for countable ordinals as well, and the outputs will be
countable ordinals. Also, f_w will have fixed points, and these
will be VERY large countable ordinals. There is a way to keep
defining f_b for b = w^2, w^3, ..., w^w, ..., w^w^w, ..., not
only for natural numbers, but also as operations on the countable
ordinals so that these ordinal-valued operations will have fixed
points.

\epsilon_0 is a fairly large countable ordinal. Here's a MUCH
LARGER countable ordinal.

Let b_2 be the smallest fixed point of f_2, let b_3 be the smallest
fixed point of f_(b_2), let b_4 be the smallest fixed point of
f_(b_3), etc. Then

gamma_0 = sup{b_2, b_3, ...} has the following property:

gamma_0 is the smallest ordinal 'b' such that f_b evaluated at
'b' is equal to 'b'. Or, as can also be shown, whenever a pair
of ordinals 'a' and 'b' are less than \gamma_0, then f_b evaluated
at 'a' winds up being less than \gamma_0. Thus, \gamma_0 is, in a
certain sense, not reachable from below using the f_b hierarchy
of functions. To get \gamma_0 using *, ^, ^^, ^^^, Ackermann, etc.
type operations requires either an input of at least \gamma_0 or
an operation-level of at least \gamma_0. [To compare this with an
earlier example, to reach w^w using limit ordinals requires at
least w^w limit ordinals.]

Now consider f_(\gamma_0) as a function from the positive integers
to the positive integers. f_(\gamma_0) evaluated at 3 is larger
than anything anyone else has described in this thread.

Dave L. Renfro

Dave L. Renfro

Jan 20, 2002, 1:23:40 PM

Dave L. Renfro <renf...@cmich.edu>
[sci.math 19 Jan 02 23:59:20 -0500 (EST)]
http://mathforum.org/epigone/sci.math/flelsmingsnix/ncaq6f7qe86x@legacy

wrote (in part)

> Actually, to get really large ordinal subscripts, those that are
> way past \epsilon_0 = w^w^w^..., we DO resort to using these
> functions as ordinal-valued. If you have problems with ordinals
> as completed quantities, or whatever, then what I wrote at
>
> http://mathforum.org/epigone/sci.math/flelsmingsnix/zmhi097wz96w@legacy
>
> will take you up to the \epsilon_0'th operation without having
> to confront anything "essentially infinite". However, the
> REALLY fast-growing functions require much larger ordinal
> subscripts, and you can't get these without bringing some of
> the "essentially infinite" aspects of ordinal number theory
> onto the scene.

In case this still isn't clear, here's a short version:

Each of the operations f_2, f_3, ..., f_w, ... (in general,
f_b for certain recursive ordinals 'b') takes a pair of positive
integers to a positive integer. f_2 is *, f_3 is ^, f_4 is
iterated exponentiation ^^, ..., f_w ('w' is intended to be the
Greek letter "\omega") is Ackermann's function, etc.

However, we can also EXTEND these operations so that they take
a pair of countably infinite ordinals to a countably infinite
ordinal. When we do this, we can get some VERY LARGE countable
ordinals by looking at fixed points of these operations when
one of the two inputs is 'w'. These VERY LARGE countable ordinals
can now be utilized as subscripts of functions in our operation
hierarchy to obtain some EXTREMELY RAPIDLY GROWING functions from
the positive integers into the positive integers. But that's not
the end of the story. It's also possible to define these new
EXTREMELY HIGH-ORDER OPERATIONS on countable ordinals as well,
and what we previously did (look for fixed points of that
operation when one of the inputs is 'w') can be repeated,
giving EVEN LARGER ordinals, which can then be used to define
EVEN MORE EXTREMELY RAPIDLY GROWING functions from the
positive integers into the positive integers, and so on,
and so on, and so on, ...

In the same way that the ordinal \epsilon_0 is a natural ceiling
for those ordinals that can be described in a finite way using
the operations +, *, and ^, the ordinal \gamma_0 is a natural
ceiling for those ordinals that can be described in a finite way
using the process outlined in the previous paragraph.

\gamma_0 is sometimes called the Feferman-Schütte ordinal.
For a survey paper on \gamma_0, see

Jean H. Gallier, "What's so Special about Kruskal's Theorem and
the Ordinal Gamma[0]? A Survey of Some Results in Proof Theory",
Ann. Pure and Appl. Logic 53 (1991), 199-260. [See APAL 89 (1997),
page 275 for a correction to the proof of theorem 4.5. Both the
Mathematics Review and the Zentralblatt Review of this paper
are given at the end of my post.]

I believe someone mentioned Goodstein sequences in this thread.

http://mathworld.wolfram.com/GoodsteinSequence.html
http://www.maths.uq.edu.au/~krm/goodstein.html
http://www.math.niu.edu/~rusin/known-math/95/independence
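
For concreteness, a small Python sketch (mine, hedged) of the sequences
those links describe: write n in hereditary base b, bump every b to b+1,
subtract 1, and repeat with the next base.

def bump(n, b1, b2):
    # rewrite n from hereditary base b1 notation to base b2
    if n == 0:
        return 0
    total, exp = 0, 0
    while n > 0:
        digit = n % b1
        if digit:
            total += digit * b2 ** bump(exp, b1, b2)
        n //= b1
        exp += 1
    return total

def goodstein(n, terms=6):
    seq, b = [n], 2
    while n > 0 and len(seq) < terms:
        n = bump(n, b, b + 1) - 1
        b += 1
        seq.append(n)
    return seq

print(goodstein(3))   # [3, 3, 3, 2, 1, 0], reaches 0 quickly
print(goodstein(4))   # [4, 26, 41, 60, 83, 109], takes absurdly long to die out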

The growth rate associated with Goodstein sequences is at the
level of f_b for b = \epsilon_0. The function I mentioned in my
previous post was f_b for b = \gamma_0, which is virtually
incomprehensibly further out in this hierarchy of operations
on the natural numbers. Nonetheless, there are several
"naturally occurring" applications in which f_b shows up for
b = \gamma_0! [See the Gallier paper cited above.]

This hierarchy of operations can be extended much further out
than \gamma_0. A MUCH larger ordinal 'b' for which the function
f_b plays an important role in proof theory is the Howard ordinal
(also called the Bachmann-Howard ordinal). For a discussion of the
Howard ordinal, see David Libert's sci.math post at

http://mathforum.org/epigone/sci.math/shingvixtwimp/9g9f86$kuj$1...@freenet9.carleton.ca
http://groups.google.com/groups?selm=9g9f86%24kuj%241%40freenet9.carleton.ca

In trying to imagine how large some of the numbers associated
even with small inputs to just f_w (Ackermann's function), I've
found the following remarks by Harvey Friedman very helpful:

*********************************

Excerpt from item #19 in "4. Special FOM e-mail Postings" at
http://www.math.ohio-state.edu/foundations/manuscripts.html

"I submit that A_4(4) is a ridiculously large number, but it
is not an incomprehensibly large number. One can imagine a
tower of 2 of a large height, where that height is 65,536,
and 65,536 is not ridiculously large.

However, if we go much further, then a profound level of
incomprehensibility emerges. The definitions are not
incomprehensible, but the largeness is incomprehensible.
These higher levels of largeness blur, where one is unable
to sense one level of largeness from another."

*********************************

The following gives more information about these ideas:

Fast-Growing Functions and Unprovable Theorems
http://www.maths.bris.ac.uk/~maadb/research/seminars/online/fgfut/

A lot of information about this stuff can be found by doing
phrase searches at http://www.google.com/advanced_search
using the following phrases:

"Grzegorczyk hierarchy", "Wainer hierarchy", "Veblen hierarchy",
"Hardy hierarchy", "fast growing hierarchy"


Dave L. Renfro

##################################################################
##################################################################

MR 93b:03096 by Helmut Schwichtenberg of the paper

Jean H. Gallier, "What's so Special about Kruskal's Theorem and
the Ordinal Gamma[0]? A Survey of Some Results in Proof Theory",
Ann. Pure and Appl. Logic 53 (1991), 199-260.

The stated purpose of the paper is to survey results of H. Friedman
concerning proof-theoretic aspects of various forms of Kruskal's
theorem, in particular in connection with the ordinal $\Gamma\sb 0$.
However, to make the paper self-contained the author found it
necessary to include some other topics also: normal functions on
the countable ordinals, Veblen hierarchies, subsystems of
second-order logic, fast-growing and slow-growing hierarchies,
and Goodstein sequences.

Section 2 introduces the basic theory of well quasi-orders (wqo's),
including a key lemma due to Nash-Williams. An oversight occurs in
the proof of Lemma 2.4, where the proof of $(2)\Rightarrow(3)$ does
not seem to go as smoothly as the author suggests. In Section 3,
wqo's on strings are treated and a simplified proof of Higman's
theorem (due to J. J. Levy) is given. Section 4 treats wqo's on
trees. It contains proofs of two versions of Kruskal's theorem
(the first one due to Nash-Williams, and the second one due to
Levy), both as usual based on Higman's theorem. In Section 5
Friedman's finite miniaturization of Kruskal's theorem is proved,
following a hint given by Simpson.

Section 6 deals with normal functions on the countable ordinals.
As the author states, he has followed the "particularly convenient
and elegant" presentation by K. Schutte [Proof theory, English
translation, Springer, Berlin, 1977; MR 58 #21497]. There are some
small points where a more streamlined presentation is possible
(e.g., the proof of Proposition 6.4 does not require transfinite
induction but follows from the remark that from
$\alpha\not\le\beta$ one gets $\beta<\alpha$ by totality and
hence a contradiction; the proof of Proposition 6.36 could be
simplified by remarking after Corollary 6.16 that the set of
fixed points of a normal function is closed and unbounded). At
the end of Section 6, where it is argued that the definition of
the ordinal $\Gamma\sb 0$ is impredicative, a reference to the
well-known proof-theoretic results of Feferman and Schutte to
that effect might have been helpful. In Section 7, Veblen
hierarchies are briefly discussed, and a reference to a survey
paper by L. W. Miller [J. Symbolic Logic 41 (1976), no. 2,
439--459; MR 53 #12894] is given. Again a reference to a more
original paper [e.g., W. Buchholz, Ann. Pure Appl.
Logic 32 (1986), no. 3, 195--207; MR 88g:03078] might have been
helpful here. Section 8 presents normal forms for the ordinals
$<\Gamma\sb 0$, again following Schutte. A rather disturbing
misprint appears in Lemma 8.10(3), which should read
"$\alpha\sb 2<\alpha\sb 1$ and
$\psi(\alpha\sb 1,\beta\sb 1)\le\beta\sb 2$". In Section 9 it is
proved that Kruskal's theorem implies the validity of transfinite
induction on $\Gamma\sb 0$. This result is now superseded by
recent work of M. Rathjen and A. Weiermann ["Proof theoretic
investigations on Kruskal's theorem", submitted], where a
complete proof-theoretic characterization of Kruskal's theorem
in terms of ordinal notation systems, subsystems of second-order
analysis and Kripke-Platek set theory is given. In particular,
it is shown there that the so-called Ackermann ordinal corresponds
exactly to Kruskal's theorem. On p. 239, work of D. H. J. de Jongh
and R. Parikh [Indag. Math. 39 (1977), no. 3, 195-207; MR 56 #5371]
is mentioned concerning maximal order types of
well-partial-orderings (i.e., wqo's that are also partial orders).
It might have been added here that this work has been extended
considerably by D. Schmidt [J. Combin. Theory Ser. B 31 (1981),
no. 2, 183--189; MR 82i:06002]. Section 10 describes the subsystems
${\rm ACA}\sb 0$, ${\rm ATR}\sb 0$ and
$\Pi\sp 1\sb 1$-${\rm CA}\sb 0$ of second-order analysis. At the
end, an extension of the miniature version of Kruskal's theorem
(using a gap condition) is mentioned, which Friedman proved to be
unprovable in $\Pi\sp 1\sb 1$-${\rm CA}\sb 0$. For the proof the
reader is referred to a paper by S. G. Simpson [in Harvey
Friedman's research on the foundations of mathematics, 87--117,
North-Holland, Amsterdam, 1985; MR 87i:03122a]. Again Buchholz'
paper mentioned above might have been referred to here, since it
provides an essential step in Simpson's proof. Section 11 gives a
brief introduction to term orderings, and Section 12 gives a
glimpse at hierarchies of fast- and slow-growing functions.
(On p. 255 the obvious condition needed to ensure
$h\sb {\alpha+\beta}(n)=h\sb \alpha(h\sb \beta(n))$ has been left
out.) The final Section 13 gives some references to constructive
proofs of Higman's lemma.

To summarize, this is a rather long survey paper containing a lot
of material and references to other surveys and---to a lesser
extent---original work. It brings together a variety of subjects
revolving around a common theme: ordinal notations and their use
in mathematical logic. It may very well, as the author hopes,
"help in making some beautiful but seemingly rather arcane tools
and techniques known to more researchers in logic and computer
science".

##################################################################
##################################################################

Zbl 0758.03025 by Mitsuru Yasuhara of the paper

Jean H. Gallier, "What's so Special about Kruskal's Theorem and
the Ordinal Gamma[0]? A Survey of Some Results in Proof Theory",
Ann. Pure and Appl. Logic 53 (1991), 199-260.

As the subtitle indicates, this is a survey of, mainly, H. Friedman's
results on the unprovability of Kruskal's tree theorem and its
variations (which says that well-orderedness of a relation among
trees is inherited by its ``natural'' extension). The paper begins
with the explanation and the proof of the above theorem, and then
moves on to discuss ordinal notations and normal functions including
Schütte-Feferman $\Gamma\sb 0$. The connection between Kruskal's
theorem and the ordinals $< \Gamma\sb 0$ is explained. Then the
nature of the paper changes to, mostly, citations of definitions
and results. They include formal systems $\text{ATR}\sb 0$, etc.,
and unprovability results (in them) by Friedman, Simpson, et al.;
term orderings [Dershowitz, Okada]; hierarchies of fast- and
slow-growing functions [Wainer, Cichon]; and constructive proof
of Higman's theorem about the extension of ordering to finite
sequences.\par This paper is written in lecture note style: the
definition of concatenations of finite sequences is given, two
pages are devoted to motivational material for critical ordinals,
$\Gamma\sb 0$, etc., and the paper abounds with the instructor's
comments like ``the key notion is ...'', ``we are already quite
dizzy [in connection with ordinal notations]'', and so forth. Hence
one should not be intimidated by the length of the paper! The
author gives a modified definition of the isomorphism of trees to
the ordinals [Def. 9.1.] that surmounts the difficulty in Simpson's
and Smorynski's papers.

##################################################################
##################################################################

Ioannis

Jan 20, 2002, 3:07:10 PM

Dave L. Renfro wrote:
>
> Dave L. Renfro <renf...@cmich.edu>
> [sci.math 19 Jan 02 23:59:20 -0500 (EST)]
> http://mathforum.org/epigone/sci.math/flelsmingsnix/ncaq6f7qe86x@legacy
>
> wrote (in part)
>
> > Actually, to get really large ordinal subscripts, those that are
> > way past \epsilon_0 = w^w^w^..., we DO resort to using these
> > functions as ordinal-valued. If you have problems with ordinals
> > as completed quantities, or whatever, then what I wrote at
> >
> >
> http://mathforum.org/epigone/sci.math/flelsmingsnix/zmhi097wz96w@legacy

[snip for brevity excellent explanations]

Thanks again. Your explanations and refs easily beat those in my
Kaplansky text.

It will take me a while to digest all the above.

Bill Taylor

Jan 23, 2002, 2:09:29 AM

renf...@cmich.edu ("Dave L. Renfro") writes:

|> will take you up to the \epsilon_0'th operation without having
|> to confront anything "essentially infinite". However, the
|> REALLY fast-growing functions require much larger ordinal
|> subscripts, and you can't get these without bringing some of
|> the "essentially infinite" aspects of ordinal number theory
|> onto the scene. [This vague statement can be made precise and
|> the precise version has been proved.]

Dave - can you please tell us a little about how this is made precise?

And quote us the theorem that proves it? Thanks.


|> Now we're ready to see something curious happen. The limit
|> ordinals less than w^w have order-type w^w. In other words, the
|> ordinal w^w is so large (or "complicated", if you wish) that
|> even after you prune it by taking out all the non-limit ordinals,
|> you haven't really changed it. Or, to put it another way, w^w is
|> the w^w'th limit ordinal.

That's a very neat way of putting it! I like it. Why have I never seen
that last sentence before, I wonder? It also seems to suggest that limit
ordinals are just the first concept along an endless path of "superlimit"
ordinals and so forth. Can this also be made precise?


|> Each of the natural number operations *, ^, ^^, etc.
|> (let's denote these by the function names f_2, f_3, f_4, etc.)

|> ...


|> (just as the fixed points of f_2 and f_3 require
|> the use of "f_3 notation" to describe them),

Hooo... wait a moment. I don't think this is really true. You say the fixed
points of * require ^ to describe them? I don't think so - not any more
than the fixed points of ^ require ^^. I don't think this "pairing" operates.

I presume you mean that the first fixed point of ^ is w^w^w^w^...
and that this only requires the ^ operation?

But we could equally well say that the first fixed point of * is not w^w,
but just w*w*w*w*... which requires only *, not ^, to describe it.

And so on all the way up. I don't see any natural pairing of operations.

What am I missing here?

------------------------------------------------------------------------------
Bill Taylor W.Ta...@math.canterbury.ac.nz
------------------------------------------------------------------------------

And God said
Let there be ordinals,
And there *were* ordinals.
Odd and even created he them,
He said to them be fruitful and increase beyond all limits;
And he commanded them to keep the laws of transfinite induction.
------------------------------------------------------------------------------

Keith Ramsay

Feb 4, 2002, 1:49:12 AM

In article <a2lnj9$s0p$1...@cantuc.canterbury.ac.nz>,

mat...@math.canterbury.ac.nz (Bill Taylor) writes:
||>Or, to put it another way, w^w is
||> the w^w'th limit ordinal.
|
|That's a very neat way of putting it! I like it. Why have I never seen
|that last sentence before, I wonder?

Well, where have you been reading about ordinals? You get all of your
information about ordinals from usenet, don't you? :-)

There is defined a two-place function theta(x,y) on ordinals
something like this. We define theta(0,y) to be the y-th limit
ordinal. We define theta(x+1,.) to enumerate the fixed points of
theta(x,.). If x is a limit ordinal I think we define theta(x,.)
to enumerate the ordinals that are fixed points of all the
theta(x',.) for x'<x.

So theta(1,0) is w^w, and theta(2,y)=epsilon_y "and so on".

The first fixed point of x->theta(x,0) is that ordinal Gamma_0
we've mentioned before.

Keith Ramsay

David C. Ullrich

Feb 4, 2002, 9:22:36 AM

On 04 Feb 2002 06:49:12 GMT, kra...@aol.commangled (Keith Ramsay)
wrote:

>In article <a2lnj9$s0p$1...@cantuc.canterbury.ac.nz>,
>mat...@math.canterbury.ac.nz (Bill Taylor) writes:
>||>Or, to put it another way, w^w is
>||> the w^w'th limit ordinal.
>|
>|That's a very neat way of putting it! I like it. Why have I never seen
>|that last sentence before, I wonder?
>
>Well, where have you been reading about ordinals? You get all of your
>information about ordinals from usenet, don't you? :-)

LOL. Where else would one get information on ordinals? Is there
like a web site out there or something?


David C. Ullrich
