> I can't find the answer to this stupid question anywhere,
> please help!
>
> It seems that there is a natural progression of operators
> as follows:
>
> 1. (addition, subtraction)
> 2. (multiplication, division)
> 3. (powers, roots)
> 4. ?
>
> What is the fourth ordered pair in that progression?
>
> How is it that these operators necessarily do not
> come in triplets as follows?
>
> 1. (addition, subtraction, athigician)
> 2. (multiplication, division, duomatician)
> 3. (powers, roots, trigrission)
> 4. ?
>
> And if they could so come in triplets, how do we do
> athigician, duomatician, and trigrission?
Since I'm not sure of the answer myself, I make a joke...
"The Mock Turtle went on. 'We had the best of educations . . . the
different branches of Arithmetic -- Ambition, Distraction,
Uglification, and Derision.'"
One can, of course, go beyond powers as powers go beyond multiplication:
Just as 3^3 means 3*3*3, i.e. multiply 3 factors of 3, we can let (let's
see, which of these symbol thingies can I use?...)
3#3 be 3^3^3
Of course since ^ is not associative we have to decide whether we want
3^(3^3) or (3^3)^3
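For instance, the two readings already differ for 3: (3^3)^3 = 27^3 = 19683,
while 3^(3^3) = 3^27 = 7625597484987, so the choice matters.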
For the second part, what would be the third of the pairs (up, down, ?)
(in, out, ?), (back, front, ?), ...
Why do we only toast two sides of a slice of bread?
I DON'T KNOW! (he goes running down the street wailing and waving his
arms...)
It seems that there is a natural progression of operators
as follows:
1. (addition, subtraction)
2. (multiplication, division)
3. (powers, roots)
4. ?
What is the fourth ordered pair in that progression?
How is it that these operators necessarily do not
come in triplets as follows?
1. (addition, subtraction, athigician)
2. (multiplication, division, duomatician)
3. (powers, roots, trigrission)
4. ?
And if they could so come in triplets, how do we do
athigician, duomatician, and trigrission?
TIA
Seth Russell
> I can't find the answer to this stupid question anywhere,
> please help!
> [ ... ]
> How is it that these operators necessarily do not
> come in triplets as follows?
>
> 1. (addition, subtraction, athigician)
> 2. (multiplication, division, duomatician)
> 3. (powers, roots, trigrission) [ ... ]
Because, operation-wise, even Flatlanders have one more dimension than
we do. We have the operation and its inverse. (I'm only saying this
because, having said it, I'm sure to be proven wrong--can't wait, in
fact.)
After 3 it is difficult to define more. It is at point 3 (powers, roots) that
the operation is no longer commutative (a^b =/= b^a for most a, b) or even
associative. We could define a^^b as
a^(a^(a^(...^a)...))
i.e. exponentiation applied b times. And we could go on to propose a^^^b equals
a^^(a^^(a^^(...^^a)...))
i.e. ^^ applied b times. After this they have lost their usefulness, and even at
point 3 with exponentiation, the operator becomes awkward relative to addition
and multiplication.
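In Mathematica-flavored notation, a minimal sketch of that right-to-left
grouping might look like this ("tetrate" is just a made-up name here):

tetrate[a_, 1] := a
tetrate[a_, n_Integer /; n > 1] := a^tetrate[a, n - 1]

tetrate[3, 3]   (* 3^(3^3) = 3^27 = 7625597484987 *)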
---
Seraph-sama
16/m, so don't call me "sera" or nothin'
http://members.tripod.com/~SeraphSama/class.html for my research in solving
for f in g(x) = f(f(x))
Seth Russell wrote:
> I can't find the answer to this stupid question anywhere,
> please help!
>
> It seems that there is a natural progression of operators
> as follows:
>
> 1. (addition, subtraction)
> 2. (multiplication, division)
> 3. (powers, roots)
> 4. ?
>
> What is the fourth ordered pair in that progression?
I used to signify it (I called them superpowers) with the number sign
(#). a#2 = a^a. a#3 = a^(a^a) etc..
The continued fraction p(1) = 1 and p(x) =
Floor[f(x)]+1/p(x-Floor[f(x)]) gives division if f(x) = x, and it gives
log(a) (base b) if f(a) = a/b and superlog(a) (base b) if f(a) =
log(a)/log(b). Then you can use numerical methods to zero in on a
superpower by using its inverse superlog. It works for certain positive
values of a and b.
Cliff Nelson
>
>
> How is it that these operators necessarily do not
> come in triplets as follows?
>
> 1. (addition, subtraction, athigician)
> 2. (multiplication, division, duomatician)
> 3. (powers, roots, trigrission)
"Clifford J. Nelson" wrote:
> Seth Russell wrote:
>
> > I can't find the answer to this stupid question anywhere,
> > please help!
> >
> > It seems that there is a natural progression of operators
> > as follows:
> >
> > 1. (addition, subtraction)
> > 2. (multiplication, division)
> > 3. (powers, roots)
> > 4. ?
> >
> > What is the fourth ordered pair in that progression?
>
> I used to signify it (I called them superpowers) with the number sign
> (#). a#2 = a^a. a#3 = a^(a^a) etc..
>
> The continued fraction p(1) = 1 and p(x) =
> Floor[f(x)]+1/p(x-Floor[f(x)]) gives division if f(x) = x, and it gives
> log(a) (base b) if f(a) = a/b and superlog(a) (base b) if f(a) =
> log(a)/log(b). Then you can use numerical methods to zero in on a
> superpower by using its inverse superlog. It works for certain positive
> values of a and b.
That should be p(1) = 1 and p(x) =
Floor[f(x)]+1/p(1/(x-Floor[f(x)])).
"Clifford J. Nelson" wrote:
> "Clifford J. Nelson" wrote:
>
> > Seth Russell wrote:
> >
> > > I can't find the answer to this stupid question anywhere,
> > > please help!
> > >
> > > It seems that there is a natural progression of operators
> > > as follows:
> > >
> > > 1. (addition, subtraction)
> > > 2. (multiplication, division)
> > > 3. (powers, roots)
> > > 4. ?
> > >
> > > What is the fourth ordered pair in that progression?
> >
> > I used to signify it (I called them superpowers) with the number sign
> > (#). a#2 = a^a. a#3 = a^(a^a) etc..
> >
> [snip]
I apologize for the stuff I snipped.
It's been about ten years since I thought of this and my memory is very bad,
so I worked it out with Mathematica again.
If f[x,y] is x-y then p[x,y] is division x/y. If f[x,y] is x/y then p[x,y]
is log(x) (base y). If f[x,y] is Log[x]/Log[y] then p[x,y] is the superlog
of x (base y) which can be used to find the superpower with numerical
methods for certain positive values of x and y.
Clear[p]
(* f[x,y] must be defined first, e.g. f[x_,y_] := x - y for division *)
p[x_,x_] := 1
p[0,_] := 1
p[_,0] := 1
p[x_,1] := x
p[x_,y_] := 1 + p[Floor[f[x,y]],y] /; x >= y
p[x_,y_] := 1/p[y,x] /; x < y
The program above will sometimes exceed the recursion depth, so you stop it
and return the value of x as an approximation.
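For example, taking f to be ordinary subtraction (as in the description
above), the rules unwind a ratio into its continued fraction:

f[x_, y_] := x - y
p[7, 3]   (* 1 + (1 + 1/p[3, 1]) = 2 + 1/3 = 7/3 *)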
Cliff Nelson
Seth Russell <seth...@clickshop.com> wrote in article
<37CC1E83...@clickshop.com>...
> I can't find the answer to this stupid question anywhere,
> please help!
>
> It seems that there is a natural progression of operators
> as follows:
>
> 1. (addition, subtraction)
> 2. (multiplication, division)
> 3. (powers, roots)
> 4. ?
>
> What is the fourth ordered pair in that progression?
>
I don't know if there really is an answer for this (if so it escapes me at
the moment), but my guess would be exponents and logarithms. Well, at
least I have a better answer for your next question.
> How is it that these operators necessarily do not
> come in triplets as follows?
>
> 1. (addition, subtraction, athigician)
> 2. (multiplication, division, duomatician)
> 3. (powers, roots, trigrission)
> 4. ?
>
> And if they could so come in triplets, how do we do
> athigician, duomatician, and trigrission?
>
They don't come in triplets because we really only have two operations, +
and x. Each of these has an inverse that (almost) always undoes what the
first does. In fact, we're lucky to have an inverse defined for
multiplication over the real numbers; most other systems aren't so
fortunate.
> TIA
> Seth Russell
>
--
Jeff Bolz
ehb...@sprynet.com
"Clifford J. Nelson" wrote:
> "Clifford J. Nelson" wrote:
>
> > "Clifford J. Nelson" wrote:
> >
> > > Seth Russell wrote:
> > >
> > > > I can't find the answer to this stupid question anywhere,
> > > > please help!
> > > >
> > > > It seems that there is a natural progression of operators
> > > > as follows:
> > > >
> > > > 1. (addition, subtraction)
> > > > 2. (multiplication, division)
> > > > 3. (powers, roots)
> > > > 4. ?
> > > >
> > > > What is the fourth ordered pair in that progression?
> > >
> > > I used to signify it (I called them superpowers) with the number sign
> > > (#). a#2 = a^a. a#3 = a^(a^a) etc..
> > >
> > [snip]
>
> I apologize for the stuff I snipped.
>
> Its been about ten years since I thought of this and my memory is very bad,
> so, I worked it out with Mathematica again.
>
> If f[x,y] is x-y then p[x,y] is division x/y. If f[x,y] is x/y then p[x,y]
> is log(x) (base y). If f[x,y] is Log[x]/Log[y] then p[x,y] is the superlog
> of x (base y) which can be used to find the superpower with numerical
> methods for certain positive values of x and y.
>
> Clear[p]
>
> p[x_,x_] := 1
>
> p[0,_] := 1
>
> p[_,0] := 1
>
> p[x_,1] := x
>
> p[x_,y_] :=1+ p[Floor[f[x,y]],y] /; x >= y
Sorry again. Remove the Floor function around f[x,y].
Looks a lot like the first three levels of the Ackermann hierarchy.
And the inverse functions. For the Ackermann hierarchy, see a book
on computability theory.
--Herb Enderton
h...@math.ucla.edu
Ok, you've piqued my curiosity.
But I can't find "Ackermann hierarchy" on the web and can't get
to a library for a while.
Can somebody give me a hint?
>How is it that these operators necessarily do not
>come in triplets as follows?
>1. (addition, subtraction, athigician)
>2. (multiplication, division, duomatician)
>3. (powers, roots, trigrission)
>4. ?
Actually, they come in triplets:
1. (addition, subtraction, subtraction)
2. (multiplication, division, division)
3. (powers, roots, logarithms)
In the first two cases, you have a commutative operation, therefore both
"inverses" are equal.
--
Markus Redeker | c...@ibp.de Hamburg, Germany
wrote (in part)
>Ok, you've piqued my curiosity.
>But I can't find "Ackermann heirarchy" on the web and can't get
>to a library for a while.
I'm in a hurry now (I'll post more on these issues
this afternoon), but here's a website on Ackermann's
function:
<http://public.logica.com/~stepneys/cyc/a/ackermnn.htm>
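For concreteness, the usual two-variable version can be written down
recursively; here is a minimal Mathematica sketch (not taken from that
page):

ack[0, n_] := n + 1
ack[m_ /; m > 0, 0] := ack[m - 1, 1]
ack[m_ /; m > 0, n_ /; n > 0] := ack[m - 1, ack[m, n - 1]]

ack[2, 3]   (* 9; at level 2 the function behaves like 2n + 3,
               at level 3 like 2^(n + 3) - 3, and so on *)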
Some related websites ...
Repeated exponentiation is often called "tetration".
Since the operation of exponentiation is not
associative [see Brunson's paper below for just
how non-associative it is; see Golomb's paper for
commutativity, associativity, and distributivity
of the tetration operation (and beyond)], it
makes a difference as to how iterated exponentiation
is carried out. Tetration refers to carrying out
the process from the top on down. The operation
is often symbolized by a *left* superscript. Thus,
letting ^^ denote tetration, we have "Tex'edly"
(a guess) a^^b = {b}^a. As an example,
{3}^3 = 3^(3^3) = 3^27 = 7625597484987, and
{4}^3 = 3^(3^(3^3)) = 3^(3^27) = 3^7625597484987
= (approx.) 10^(3.638 trillion).
Some relatively elementary (i.e. undergraduate
level) articles dealing with tetration are:
J. M. Ash, "The limit of [{n}^x] as x tends to
zero", Mathematics Magazine 69 (1996), 207-209.
Nick Bromer, "Superexponentiation", Mathematics
Magazine 60 (1987), 169-173.
Barry W. Brunson, "The partial order of iterated
exponentials", Amer. Math. Monthly 93 (1986),
779-786.
Martin Gardner's "Mathematical Games" column
in the November 1977 issue of Scientific American.
Astrid E. Golomb, "Super-arithmetic", Journal of
Undergraduate Mathematics 9(1) (March 1977), 11-16.
R. A. Knoebel, "Exponentials reiterated", Amer.
Math. Monthly 88 (1981), 235-252.
Roger Voles, "An exploration of hyperpower equations
{n}^x = {n}^y", The Mathematical Gazette 83(497)
(July 1999), 210-215.
*****************************************
The sequence of operations "addition", "multiplication",
"exponentiation", "tetration", etc. can be continued
beyond any finite level using transfinite induction.
[For successor ordinals, iterate the preceding
operation. At the limit ordinals, diagonalize
over the preceding growth rates.] In particular,
for each countable ordinal there is such an operation.
Ackermann's function (in a weak asymptotic way)
appears at the w'th level. These extended operations
can also be carried out on the ordinals. [See the
Doner and Tarski paper; unlike the other papers
mentioned below, they are not concerned with the
computability of the operations.] For instance,
the least epsilon number, the ordinal which
is often symbolized in set theory texts as an
infinite (w-length, to be more precise) exponentiated
tower of w's, is simply w^^w. By making use of
these higher order operations, but not so high
as to be non-recursive, one can obtain some
truly huge countable (and recursive) ordinals.
These huge ordinals can then be used to define
extremely high recursive levels in this operation
hierarchy. Next, input some huge (but recursive)
ordinal (say, epsilon_{0}, or maybe the ordinal
giving the hierarchy level) into one of these
extremely high recursive operation levels, and
you'll obtain still bigger countable (and recursive)
ordinals. You can repeat this over and over,
any finite number of times. You can even surpass
any finite iteration of this process by a
diagonalization process (and you'll still be
dealing with recursive operations), as long
as you don't diagonalize a non-recursive number
of times.
Craig Smorynski's articles below are an excellent
introduction to these ideas. Moreover, he does
a good job of explaining WHY there would be any
mathematical significance to doing something
like this. [At face value, this seems nothing
more than a grown-up version of "By adding one
to any number you can think of, I can obtain
a larger number than you can."]
R. C. Buck, "Mathematical induction and recursive
definitions", Amer. Math. Monthly 70 (1963), 128-135.
John Doner and Alfred Tarski, "An extended
arithmetic of ordinal numbers", Fund. Math.
65 (1969), 95-127.
Stephen G. Simpson, "Unprovable theorems and
fast-growing functions", Contemporary Mathematics
65 (1987), 359-394.
Craig Smorynski, "Some rapidly growing functions",
The Mathematical Intelligencer 2(3) (1980), 149-154.
Craig Smorynski, "The varieties of arboreal
experience", The Mathematical Intelligencer
4(4) (1982), 182-189.
[A correction to the first lemma (part ii) on
page 188 is given in J. H. Gallier, "Kruskal's
theorem and the ordinal $\Gamma_{0}$", Annals
of Pure and Applied Logic 53 (1991), 235-239.]
Craig Smorynski, "'Big' news from Archimedes
to Friedman", pp. 353-366 in HARVEY FRIEDMAN'S
RESEARCH ON THE FOUNDATIONS OF MATHEMATICS edited
by L. A. Harrington, et al, Elsevier Science,
North Holland, 1985.
Joel Spencer, "Large numbers and unprovable
theorems", Amer. Math. Monthly 90 (1983), 669-675.
>[snip]
>... [At face value, this seems nothing
> more than a grown-up version of "By adding one
> to any number you can think of, I can obtain
> a larger number than you can."]
It's much more interesting than that. Countable tetration yields complex
numbers. So "larger number" becomes meaningless in this case.
See: <http://www2.crosswinds.net/athens/~jgal/Math/Exponents.html>
[snip]
It also appears as though these objects have a ring structure if one
defines (a+b)*=a*+b* for addition and (axb)*=(a*)x(b*).
--
Ioannis Galidakis jg...@ath.forthnet.gr
<http://www.crosswinds.net/athens/~jgal/main.html>
> The sequence of operations "addition", "multiplication",
> "exponentiation", "tetration", etc. can be continued
> beyond any finite level using transfinite induction.
What is the inverse of "tetration" ?
I'm still baffled by the second part of my original question.
Why do the operations come only in ordered pairs?
1. (addition, subtraction)
2. (multiplication, division)
3. (powers, roots)
4. (tetration, ?)
Why not ordered triplets? The number 2 seems so very
arbitrary here. Why couldn't we find 3 operations that
inverted each other as in a finite state machine with 3 states,
with addition and subtraction being the first two states?
What axiom prevents this? Or, if no axiom prevents it,
then are we to conclude that it is just the nature of numbers?
Seth Russell
Let's say there is a "triplet of operations" as you suggest. If I go from
state A to state B, why would I not want to go from state B back to state A?
I can do that by simply combining the operation that takes me from state B
to state C with the operation that takes me from state C to state A into one
operator making a doublet again.
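For example, if the operation from B to C is "divide by 2" and the
operation from C to A is "subtract 3", then "halve, then subtract 3" is
a single operation taking B straight back to A, so the triple collapses
to an operation and its inverse.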
Seth Russell wrote in message <37D0AC13...@clickshop.com>...
>"Dave L. Renfro" wrote:
>
>> The sequence of operations "addition", "multiplication",
>> "exponentiation", "tetration", etc. can be continued
>> beyond any finite level using transfinite induction.
>
The operations you list (1-4) each have two inverses.
For operation 3, "degrees" ("powers"), the inverses are the "radical"
(root) and the "log". For operation 4, the "superdegree" ("tetration"),
there are likewise two inverses: the "superradical" and the "superlog".
A table published in the magazine "Cybernetics" in March 1989
(http://oasis.fortunecity.com/andes/205/english/01.htm or
http://www2.crosswinds.net/russia/~rubcov/english/01.htm) and in the
"Mathematical library OnLine"
(http://oasis.fortunecity.com/andes/205/english/10.htm or
http://www2.crosswinds.net/russia/~rubcov/english/10.htm)
is reproduced below as Table 1; it shows how operations with index
numbers greater than 4, as well as operations "easier than addition",
can be derived.
My page gives an exposition of the "superdegree" ("tetration")
(http://oasis.fortunecity.com/andes/205/english/01/1e.html or
http://www2.crosswinds.net/russia/~rubcov/english/01/1e.html),
including negative values of the superdegree index.
More detailed information about the "superradical" and "superlog" is
given in the monograph
(http://oasis.fortunecity.com/andes/205/english/09.htm or
http://www2.crosswinds.net/russia/~rubcov/english/09.htm, pages 20-23
and elsewhere). Operations 1 and 2 are commutative on the familiar
number sets, so in each case their two inverse operations coincide.
The monograph also considers how the two inversions of division differ
(http://oasis.fortunecity.com/andes/205/english/09.htm or
http://www2.crosswinds.net/russia/~rubcov/english/09.htm, pages 183-186,
section 5.2), and gives an example of applying "division of the second
inversion" (http://oasis.fortunecity.com/andes/205/english/06/6e.html
or http://www2.crosswinds.net/russia/~rubcov/english/06/6e.html).
 n\i       1                2                     3
 ...      ...              ...                   ...
  0      aOb=c         c(delta)b=a           c(delta)a=b
  1      a+b=c           c-b=a                 c-a=b
  2      a*b=c           c//b=a                c/a=b
  3      a^b=c          c^(1/b)=a       log(c)/log(a)=log_a(c)
  4      a^^b=c       "superradical"         slog_a(c)
 ...      ...              ...                   ...
Note: because of the limited possibilities for representing formulas in
a plain-text newsgroup post, the 4th and 0th operations and their
inverses are shown only schematically.
The study of the superdegree at negative values of its index is
connected to the operations "easier than addition"
(http://oasis.fortunecity.com/andes/205/english/03/3e.html or
http://www2.crosswinds.net/russia/~rubcov/english/03/3e.html).
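As a rough illustration of the "slog" column, one can count whole
logarithm steps numerically (a sketch only; slogApprox is an invented
name, and the small tolerance just guards against floating-point drift):

slogApprox[x_?NumericQ, k_] :=
  If[N[x] <= 1 + 10^-9, 0, 1 + slogApprox[Log[k, N[x]], k]]

slogApprox[10^10, 10]   (* 2, since 10^10 = 10^^2 *)
slogApprox[10, 10]      (* 1, since 10 = 10^^1 *)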
--
Best regards,
Constantin Rubtsov, Belgorod, Russia (ICQ: 27004786)
wrote (in part)
>>... [At face value, this seems nothing
>> more than a grown-up version of "By adding one
>> to any number you can think of, I can obtain
>> a larger number than you can."]
>
>It's much more interesting than that. Countable tetration yields
>complex numbers. So "larger number" becomes meaningless in
>this case.
>See: <http://www2.crosswinds.net/athens/~jgal/Math/Exponents.html>
>[snip]
>It also appears as though these objects have a ring structure
>if one defines (a+b)*=a*+b* for addition and (axb)*=(a*)x(b*).
My "At face value ..." comment might have been
misunderstood. I had just finished mentioning that
there are some serious mathematical issues with the
things I discussed, and my comment was intended
to be taken as what seemed to me would be a
reader's natural question at this point. In other
words, this is not my personal summation of the
matter, but rather what I felt many readers might
be thinking.
Thanks for the reference. I've made a print copy.
[But I had to copy and paste the text material onto
Microsoft Word; it didn't print directly from
Netscape (only your neat graphs printed) for some
reason.]
However, it's "countable exponentiation" that you deal
with, not "countable tetration". Moreover, your process
is analogous to infinite sums and infinite products
in analysis, not the w'th (and higher) operations
I described (which were operations from pairs of
positive integers into the set of positive integers).
I did some net-surfing today and found some additional
material related to my already posted references at
<http://forum.swarthmore.edu/epigone/sci.math/frunyixblou/c9oc1q...@forum.swarthmore.edu>
[Hey, it's the weekend. Also, it rained here today.]
There is a nice discussion of large numbers by
Robert P. Munafo at
<http://www.mrob.com/largenum.html>
A selection from Rudy Rucker's book INFINITY AND
THE MIND [I was going to say which pages are transcribed,
but I can't find my copy of Rucker's book right now
(I recently moved and still haven't put everything
away)] is at
<http://www.anselm.edu/homepage/dbanach/infin.htm>
[Doesn't this violate some copyright law??]
Munafo's "class-0", "class-1", "class-2", etc. numbers
are essentially the initial finite levels in the
hierarchy I mentioned. There is a strange thing that
happens when you get deeper into these issues, and
the following quote from Douglas R. Hofstadter's
book METAMAGICAL THEMAS (from pp. 124-125) is a
good way to begin coming to grips with it (original
italics are represented via CAPITALS):
"If, perchance, you were to start dealing with
numbers having millions or billions of digits, the
numerals themselves (the colossal strings of digits)
would cease to be visualizable, and your perceptual
reality would be forced to take another leap upward
in abstraction--to the number that counts the digits
in the number that counts the digits in the number
that counts the objects concerned. Needless to say,
such third-order perceptual reality is highly abstract.
Moreover, it occurs very seldom, even in mathematics.
Still, you can imagine going far beyond it. Fourth-
and fifth-order perceptual realities would quickly
yield, in our purely abstract imagination, to
tenth-, hundredth-, and millionth-order perceptual
realities.
"By this time, of course, we would have lost track
of the EXACT number of levels we had shifted, and
we would be content with a mere ESTIMATE of that
number (accurate to within ten percent, of course).
'Oh, I'd say about two million levels of perceptual
shift were involved here, give or take a couple
of hundred thousand' would be a typical comment for
someone dealing with such unimaginably unimaginable
quantities. You can see where this is leading: to
multiple levels of abstraction in talking about
multiple levels of abstraction. If we were to
continue our discussion just one zillisecond
longer, we would find ourselves smack-dab in
the middle of the theory of recursive functions
and algorithmic complexity, and that would be too
abstract. So let's drop the topic right here."
Some of these issues might be dealt with in the
discussions under the topic "Largest Finite Number" at
<http://forum.swarthmore.edu/epigone/sci.math/storzazal>
However, I've only read a few of them. Most seemed
pointless or they miss a point that someone else made,
but Bill Dubuque's August 16, 1997 post [but apparently
sent in on Aug. 14, according to the date on the
actual message] might be of interest.
Hummm...I just did a search in sci.math using the
string "Bill Dubuque" and came up with a lot of
relevant (for what I've been writing about) posts.
Here's a link for his related posts that I found
in one of his posts. [The first URL is the post
I found it in, the second URL provides a list of
related links.]
<http://forum.swarthmore.edu/epigone/sci.math/neljarsmee/y8zogt5...@berne.ai.mit.edu>
<http://www.dejanews.com/dnquery.xp?QRY=smorynski%20rucker&groups=sci.math&ST=PS>
I haven't looked into this area in a while, but
W. A. Howard, "A system of abstract constructive ordinals",
Journal of Symbolic Logic 37 (1972), 355-374
is a nice survey paper on large recursive ordinals.
[More precisely, a survey on some schemes for
naming recursive ordinals that have applications
in proof theory.]
Norman Danner has the paper "Transfinite iteration
and ordinal arithmetic" at
<http://www.math.ucla.edu/~ndanner/personal/papers/index.html>
which includes the following in its abstract:
"... we investigate the relationship between (countable)
transfinite iteration and ordinal arithmetic. While there is
a nice connection between finite iteration and addition,
multiplication, and exponentiation, we show that this is
lost when passing to the transfinite and investigate a
new equivalence relation on ordinal functionals with respect
to which we restore it."
Norman Danner also recently posted the paper
"Transfinite iteration functionals and ordinal
arithmetic" at
<http://front.math.ucdavis.edu/math.LO/9908127>
Some preprints and papers dealing with (mathematical)
computability issues relating to extremely large ordinals
[all countable, I believe, but with various "levels
of computability"]:
<http://www.amsta.leeds.ac.uk/pure/staff/rathjen/preprints.html>
I give a lot of internet links to places where things
like this can be found in my August 11, 1999 post
on the sci.math topic "has anyone tried this?"
<http://forum.swarthmore.edu/epigone/sci.math/stoitwonsnen>
wrote
>"Dave L. Renfro" wrote:
>
>> The sequence of operations "addition", "multiplication",
>> "exponentiation", "tetration", etc. can be continued
>> beyond any finite level using transfinite induction.>
>
>What is the inverse of "tetration" ?
>
>I'm still baffled by the second part of my original question.
>Why do the operations come only in ordered pairs?
>
>1. (addition, subtraction)
>2. (multiplication, division)
>3. (powers, roots)
>4. (tetration, ?)
>
>Why not ordered triplets? The number 2 seems so very
>arbitrary here. Why couldn't we find 3 operations that
>inverted each other as in a finite state machine with 3 states,
>with addition and subtraction being the first two states?
>What axiom prevents this? Or, if no axiom prevents it,
>then are we to conclude that it is just the nature of numbers?
>
>
>Seth Russell
Sorry. I guess I got so carried away with what I
wrote in
<http://forum.swarthmore.edu/epigone/sci.math/frunyixblou/c9oc1q...@forum.swarthmore.edu>
that I completely forgot about your second part.
I'm not sure if the "ordered triplets" aspect has any
significance beyond the fact that the operator
{invertible functions} ------> {invertible functions}
is itself a function. For instance, the inverse of
"add 6" is "subtract 6". The inverse of "divide by 23"
is "multiply by 23". There is only one inverse because
[sorry to get technical if you don't know about this]
inverses are unique in groups. In fact, inverses (when
they exist) are unique in monoids.
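(The standard one-line argument: if b and c are both inverses of a, with
identity e, then b = b*e = b*(a*c) = (b*a)*c = e*c = c.)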
As to an inverse of tetration, I honestly don't know.
I suspect that the absence of a tetration property analogous
to a*(b+c) = (a*b)+(a*c) or a^(b*c) = (a^b)^c
prevents the possibility of a natural "inverse
tetration" operation. Maybe the formulation of
tetration by Ioannis Galidakis involving the
Lambert W function at
<http://www2.crosswinds.net/athens/~jgal/Math/Exponents.html>
provides a way. However, I haven't looked at Galidakis'
stuff carefully and I don't know anything about the
Lambert W function.
There are two posts in sci.math under the subject
"tetration" at
<http://forum.swarthmore.edu/epigone/sci.math/fraxclonsna>,
and there are five posts in sci.math.research under
the subject "interpolation for iterated exponentiation" at
<http://forum.swarthmore.edu/epigone/sci.math.research/snespermkhe>,
but none seem to address your specific question.
(However, I think you'll find it useful to read over
these posts.)
I should mention, since I've been finding a lot
of discussions of the Ackermann function in my
internet surfing today, the following historical fact
that seems to be little known.
Ackermann published his function in 1928 but, independent
of Ackermann, G. Sudan published a similar example of
a recursive function that isn't primitive recursive.
Gabriel Sudan, "Sur le nombre transfini $\omega^{\omega }$",
Bull. Math. Soc. Roum. Sci. 29 (1927), 11-30.
For more on Sudan's contribution, see
Cristian Calude, Solomon Marcus, and Ionel Tevy,
"The first example of a recursive function which is
not primitive recursive", Historia Math. 6 (1979), 380-384.
It does print, but yellow letters on a blue background make for a bad printer
mapping. :*(
(If anyone wants to print the article correctly, save a local html copy,
save the picts as well in the appropriate directories so they can be
loaded, then change the page properties of the web page to black
letters on a white background and it should print ok.)
>
> However, it's "countable exponentiation" that you deal
> with, not "countable tetration". Moreover, your process
> is analogous to infinite sums and infinite products
> in analysis, not the w'th (and higher) operations
> I described (which were operations from pairs of
> positive integers into the set of positive integers).
Perhaps I misunderstood what exactly is meant by "tetration". And I
quote from your article:
>Repeated exponentiation is often called "tetration".
>Since the operation of exponentiation is not
>associative [see Brunson's paper below for just
>how non-associative it is; see Golomb's paper for
>commutativity, associativity, and distributivity
>of the tetration operation (and beyond)], it
>makes a difference as to how iterated exponentiation
>is carried out. Tetration refers to carrying out
>the process from the top on down. The operation
>is often symbolized by a *left* superscript. Thus,
>letting ^^ denote tetration, we have "Tex'edly"
>(a guess) a^^b = {b}^a. As an example,
>{3}^3 = 3^(3^3) = 3^27 = 7625597484987, and
>{4}^3 = 3^(3^(3^3)) = 3^(3^27) = 3^7625597484987
= (approx.) 10^(3.638 trillion).
There appears to be some ambiguity as to the order of operations when
one writes:
c^c^c^c^c.
In my analysis, this is equivalent to the operations being carried out from
"right to left" (i.e. top to bottom).
The above is a shorthand for:
c^(c^(c^(c^c))).
So, even though for finite exponentials this ambiguity can easily be
resolved, it is not so easily resolved for infinite exponentials, that's
why the whole story was needed to find a new definition for the symbol:
c^c^c^c^...
In any case, am I missing something? I don't see how this cannot be
coined "countable tetration". Maybe "tetration" is meant to be for a
finite number of exponents only?
> I did some net-surfing today and found some additional
> material related to my already posted references at
> <http://forum.swarthmore.edu/epigone/sci.math/frunyixblou/c9oc1q...@forum.swarthmore.edu>
> [Hey, it's the weekend. Also, it rained here today.]
>
> There is a nice discussion of large numbers by
> Robert P. Munafo at
> <http://www.mrob.com/largenum.html>
Thanx for finding Munafo's address for me! I've been looking for this
guy for ages. Munafo, who is also an excellent programmer, was among the
first Mac programmers out there. (Some of his programs STILL work today
on modern machines, even though they were written in 1984!!!)
[snip rest of refs]
Cheerio
> I can't find the answer to this stupid question anywhere,
> please help!
>
> It seems that there is a natural progression of operators
> as follows:
>
> 1. (addition, subtraction)
> 2. (multiplication, division)
> 3. (powers, roots)
> 4. ?
>
> What is the fourth ordered pair in that progression?
How about the following. Define a sequence of binary operators
"<n>" recursively by:
a <0> b = a + b and
a <n+1> b = exp{log(a) <n> log(b)}.
This last formula can be written as
exp(a <n> b) = exp(a) <n+1> exp(b).
If the identity for <n> is written as Id(n), then
exp(Id(n)) = Id(n+1).
Let the inverse of a for <n> be Inv(n, a). Then
Inv(n+1, exp(a)) = exp(Inv(n, a)).
The pair of operations <n> and <n+1> satisfy all the basic
algebraic and ordering properties of + and * including the
distributive and commutative laws.
One can extend <n> to negative n starting with a <-1> b =
log(exp(a) + exp(b)). To make everything pretty, the real line
needs to be made "longer" at the negative infinity end.
Is this hierarchy of operators used in standard (or non-standard)
analysis?
Note: a^b = a <2> exp(b) seems to be betwixt and between.
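A minimal Mathematica sketch of this hierarchy (op[n] is just an
invented name for <n>, and the arguments are restricted to reals where
the nested logarithms stay real):

op[0][a_, b_] := a + b
op[n_Integer /; n > 0][a_, b_] := Exp[op[n - 1][Log[a], Log[b]]]

op[1][2., 3.]   (* approximately 6: <1> is ordinary multiplication *)
op[2][E, 8.]    (* approximately 8: E = exp(1) is the identity for <2> *)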
wrote (in part, beginning with a quote from my earlier post):
>> However, it's "countable exponentiation" that you deal
>> with, not "countable tetration". Moreover, your process
>> is analogous to infinite sums and infinite products
>> in analysis, not the w'th (and higher) operations
>> I described (which were operations from pairs of
>> positive integers into the set of positive integers).
>
>Perhaps I misunderstood what exactly is meant by "tetration".
>And I quote from your article:
>
>>Repeated exponentiation is often called "tetration".
>>Since the operation of exponentiation is not
>>associative [see Brunson's paper below for just
>>how non-associative it is; see Golomb's paper for
>>commutativity, associativity, and distributivity
>>of the tetration operation (and beyond)], it
>>makes a difference as to how iterated exponentiation
>>is carried out. Tetration refers to carrying out
>>the process from the top on down.
[snip]
>There appears to be some ambiguity as to the order of
>operations when one writes: c^c^c^c^c.
************************
Yes, and that's what I said (I thought). The Golomb
paper I mentioned discusses this matter a bit.
So do a lot of other papers I've seen. [I've come
across a lot more than I originally referenced in
my earlier post. I just gave citations to those
papers I was able to find in my "xerox'ed paper
archive".]
************************
>
>In my analysis, this is equivallent to the operations being
>carried from "right to left" (i.e. top to bottom)
>The above is a shorthand for:
>c^(c^(c^(c^c))).
>
>So, even though for finite exponentials this ambiguity can
>easily be resolved, it is not so easily resolved for
>infinite exponentials, that's why the whole story was
>needed to find a new definition for the symbol: c^c^c^c^...
>
>In any case, am I missing something? I don't see how
>this cannot be coined "countable tetration". Maybe
>"tetration" is meant to be for a finite number of exponents only?
************************
INFINITE SUM: Limit of a sequence (a,b,c,d,...) via
a, a+b, a+b+c, a+b+c+d, ...
INFINITE PRODUCT: Limit of a sequence (a,b,c,d,...) via
a, a*b, a*b*c, a*b*c*d, ...
INFINITE EXPONENTIATION (one way to define it):
Limit of a sequence (a,b,c,d,...) via
a, a^b, a^(b^c), a^(b^(c^d)), ...
You may have defined it differently (I haven't
looked at your paper carefully yet), but this
is the way I've seen it in a lot of papers.
Changing the method of associativity as one
takes the limit seems to me a highly nontrivial
undertaking.
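In the constant case a = b = c = ... the partial towers can be watched
numerically; a small Mathematica sketch with c = Sqrt[2] (the classical
convergence range for such constant towers is e^(-e) <= c <= e^(1/e)):

towers = NestList[Sqrt[2.]^# &, Sqrt[2.], 20];
Last[towers]   (* the partial towers increase toward the limit 2 *)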
I don't think we have any confusion over content,
just over the terms we used. Hope this clears up
what I wrote but, if not, let me know.
--
Your information is connected with omega-reflections (wi -> w0, i an
integer) of mathematical objects
(http://oasis.fortunecity.com/andes/205/english/10.htm or
http://www2.crosswinds.net/russia/~rubcov/english/10.htm). These
reflections grew out of research on operations of the "tetration" type
(http://oasis.fortunecity.com/andes/205/english/01/1e.html or
http://www2.crosswinds.net/russia/~rubcov/english/01/1e.html).
Omega-reflections make it possible to obtain images of any mathematical
objects (numbers, operations, functions, derivatives, integrals, etc.),
and thus to form omega-images of mathematical formulas, methods, whole
sections of mathematics, and so on.
Examples of reflections using the functions exp(x) or k^x (k = const),
for a = var and b = var:
1. a+b \w1->w0\ a*b
2. a*b \w1->w0\ exp(ln(a)*ln(b))
3. f(x1, ... , xn) \w1->w0\ exp(f(ln(x1), ... , ln(xn)))
4. If a = const, then a \wi->w0\ k^^(i+slog(a)), where slog(a) is the
superlog to base k.
5. The function f(x1, ... , xn) under repeated omega-reflections:
f(x1, ... , xn) \wi->w0\ k^^(i+slog f( k^^(-i+slog x1), ... ,
k^^(-i+slog xn) ))
6. From Euler's method for the numerical solution of differential
equations ( y_(i+1) = y_i + (x_(i+1) - x_i) * f(x_i, y_i) ), one
variant of an omega-image of this method can be written as:
y_(i+1) = y_i * (x_(i+1) / x_i)^(x_i * f(x_i, y_i)/y_i)
7. Under an omega-reflection of the function z = x+y we obtain:
x+y \w-1 -> w0\ ln(exp(x)+exp(y))
Detailed information about everything explained in this text can be
found in my monograph
(http://oasis.fortunecity.com/andes/205/english/09.htm or
http://www2.crosswinds.net/russia/~rubcov/english/09.htm).
Seth Russell wrote:
> I can't find the answer to this stupid question anywhere,
> please help!
>
> It seems that there is a natural progression of operators
> as follows:
>
> 1. (addition, subtraction)
> 2. (multiplication, division)
> 3. (powers, roots)
> 4. ?
>
> What is the fourth ordered pair in that progression?
Progressive exponentiation: x@y = x^x^...^x, y times.
The general sequence is given by the Ackermann function.
Bob Kolker
The Ackermann construction isn't the only one that generalizes the
sequence, however. This is because, although addition and multiplication
are commutative, exponentiation is not; thus by commuting terms in the
recursion relations, one gets a sequence that starts with addition,
multiplication, and exponentiation, but which differs from Ackermann's
at the next operation.
One can argue that even though it gives rise to a more slowly growing
diagonal function, this latter sequence is the "correct" one. It permits
an extension to a transfinite sequence of operations on the ordinal
numbers, while the Ackermann construction does not - at least not in any
obvious way.
For more, see "A natural variant of the Ackermann function", by Levitz
and Nichols, Mathematical Logic Quarterly 34 (1988), pp. 399-401.
Hilbert Levitz
Department of Computer Science
Florida State University
lev...@cs.fsu.edu