
[Q] integrate x^x

51 views

junhwi

Mar 22, 2002, 4:18:02 AM
what is the integration of x^x ?

x = 0 to X (consider only nonnegative values)

And, is it possible to construct the general form of the integral of f(x)^g(x) ?


Zeshan

Mar 22, 2002, 8:08:19 AM
On Fri, 22 Mar 2002 18:18:02 +0900, junhwi <jun...@modsim.co.kr> wrote:

> what is the integration of x^x ?
>
> x = 0 to X (consider only nonnegative values)

This is not integrable in terms of elementary functions.

> And, is it possible to construct the general form of the integral of
> f(x)^g(x) ?

Again, not in terms of elementary functions, unless you put severe
restrictions on f and g.

You could always just construct a special function to solve the
integral ;-)


Zeshan

Jaime Gaspar

Mar 22, 2002, 10:19:28 AM
"junhwi" <jun...@modsim.co.kr> escreveu na mensagem
news:a7estj$ts$1...@news.kreonet.re.kr...

> what is the integration of x^x ?

Try using this transformation:

x^x = e^ln(x^x) = e^(x*ln x)

I don't know if it helps.


Regards,

Jaime Gaspar
______________________________
Homepage: www.jaimegaspar.com
E-mail: e-m...@jaimegaspar.com
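Jaime's substitution is also the standard route to a numerical and series answer on [0,1]: expanding e^(x ln x) = sum_k (x ln x)^k / k! and integrating term by term gives the classical series sum_{n>=1} (-1)^(n+1)/n^n (the "sophomore's dream"). A quick Python sketch, illustrative only and not from the thread:

```python
def series_integral(n_terms=12):
    """Integral of x^x over [0,1] via the term-by-term expansion of
    e^(x ln x): integral_0^1 (x ln x)^k / k! dx = (-1)^k / (k+1)^(k+1),
    so the total is sum_{n>=1} (-1)^(n+1) / n^n."""
    return sum((-1) ** (n + 1) / n ** n for n in range(1, n_terms + 1))

def midpoint_rule(f, a, b, n=100_000):
    """Plain midpoint quadrature; fine here since x^x is bounded on [0,1]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

series = series_integral()
direct = midpoint_rule(lambda x: x ** x, 0.0, 1.0)
print(series, direct)  # both ≈ 0.78343
```

Both values agree to several decimal places, a reassuring sanity check even though neither route yields an elementary closed form.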


Zdislav V. Kovarik

Mar 22, 2002, 12:11:49 PM
In article <a7estj$ts$1...@news.kreonet.re.kr>,
junhwi <jun...@modsim.co.kr> wrote:
:what is the integration of x^x ?

:
:x = 0 to X (consider only nonnegative values)
:
:And, is it possible to construct the general form of
:the integral of f(x)^g(x) ?

I re-posted it many times before; is there an FAQ
pointer to it, or a more recent version?
Enjoy, ZVK(Slavek).

From: wee...@sagi.wistar.upenn.edu (Matthew P Wiener)
Subject: Re: integrate x^x ?

I really really really ought to polish this up for FAQ inclusion. I
am combining two old articles of mine, the first giving and sketching
the general Liouville theory, the second applying this theory to x^x:
========================================================================
We give a fairly complete sketch of the proof that certain functions,
including the asked for one, are not integrable in elementary terms. The
central theorem is due to Liouville in 1835. His proof was analytic. The
sketch below is mostly algebraic and is due to Maxwell Rosenlicht. See his
papers in the _Pacific Journal of Mathematics_, 54 (1968), 153-161 and
65 (1976), 485-492.

WARNING: Prerequisites for understanding the proof are a first-year
graduate course in algebra, and a little complex analysis. No deep
results are used, but I cannot take the time to explain standard notions
or all the deductions.

Notation: a^n is "a power n", a_n is "a sub n". C is the complex numbers;
for fields F, F[x] is the ring of polynomials in x OR an algebraic
extension of F, F(x) is the field of rational functions in a
transcendental x, and M is the field of meromorphic functions in one
variable. If f is a complex function, I(f) will denote an antiderivative
of f.

A differential field is a field F of characteristic 0 with a derivation.
Thus, in addition to the field operations + and *, there is a derivative
mapping ':F->F such that (a+b)'=a'+b' and (ab)'=a'b+ab'. Two standard
examples are C(z) and M with the usual derivative map. Notice a basic
identity (logarithmic differentiation) holds:

  [(a_1 ^ k_1) * ... * (a_n ^ k_n)]'         a_1'              a_n'
  ----------------------------------  =  k_1 ----  + ... + k_n ----
   (a_1 ^ k_1) * ... * (a_n ^ k_n)           a_1               a_n

The usual rules like the quotient rule also hold. If a in F satisfies
a'=0, we call a a constant of F. The set of constants of F is called
Con(F), and forms a subfield of F.

The basic idea in showing something has no elementary integral is to
reduce the problem to a sequence of differential fields F_0, F_1, etc.,
where F_0 = C(z), and F_(i+1) is obtained from F_i by adjoining one
new element t. t is obtained either algebraically, because t satisfies
some polynomial equation p(t)=0, or exponentially, because t'/t=s' for
some s in F_i, or logarithmically, because t'=s'/s is in F_i. Notice
that we don't actually take exponentials or logarithms, but only attach
abstract elements that have the appropriate derivatives. Thus a function
f is integrable in elementary terms iff such a sequence exists starting
with C(z).

Just so there is no confusion, there is no notion of "composition"
involved here. If you want to take log s, you adjoin a transcendental t
with the relation t'=s'/s. There is no log function running around, for
example, except as motivation, until we reach actual examples.

We need some easy lemmas. Throughout the lemmas, F is a differential
field and t is transcendental over F.

Lemma 1: If K is an algebraic extension field of F, then there exists a
unique way to extend the derivation map from F to K so as to make K into
a differential field.

Lemma 2: If K=F(t) is a differential field with derivation extending F's,
and t' is in F, then for any polynomial f(t) in F[t], f(t)' is a
polynomial in F[t] of the same degree (if the leading coefficient is not
in Con(F)) or of degree one less (if the leading coefficient is in
Con(F)).

Lemma 3: If K=F(t) is a differential field with derivation extending F's,
and t'/t is in F, then for any a in F and n a positive integer, there
exists h in F such that (a*t^n)'=h*t^n. More generally, if f(t) is any
polynomial in F[t], then f(t)' is of the same degree as f(t), and is a
multiple of f(t) iff f(t) is a monomial.

These are all fairly elementary. For example, (a*t^n)'=(a'+n*a*t'/t)*t^n
in lemma 3. The final 'iff' in lemma 3 is where transcendence of t comes
in. Lemma 1, in the usual case of subfields of M, can be proven
analytically using the implicit function theorem.
--------------------------------------------------------------------------
MAIN THEOREM. Let F and G be differential fields, let a be in F, let y
be in G, and suppose y'=a and G is an elementary differential extension
field of F, and Con(F)=Con(G). Then there exist c_1,...,c_n in Con(F)
and u_1,...,u_n, v in F such that

                   u_1'              u_n'
  (*)  a  =  c_1 ----  + ... + c_n ----  +  v'.
                   u_1               u_n

In other words, the only functions that have elementary anti-derivatives
are the ones that have this very specific form.
--------------------------------------------------------------------------
This is a very useful theorem for proving non-integrability. In the usual
case, F,G are subfields of M, so Con(F)=Con(G)=C always holds.

Proof:
By assumption there exists a finite chain of fields connecting F to G
such that the extension from one field to the next is given by performing
an algebraic, logarithmic, or exponential extension. We show that if the
form (*) can be satisfied with values in F2, and F2 is one of the three
kinds of allowable extensions of F1, then the form (*) can be satisfied
in F1. The form (*) is obviously satisfied in G: let all the c's be 0,
the u's be 1, and let v be the original y for which y'=a. Thus, if the
form (*) can be pulled down one field, we will be able to pull it down
to F, and the theorem holds.

So we may assume without loss of generality that G=F(t).

Case 1: t is algebraic over F. Say t is of degree k. Then there are
polynomials U_i and V such that U_i(t)=u_i and V(t)=v. So we have

             U_1(t)'               U_n(t)'
  a  =  c_1 -------  + ... +  c_n -------  +  V(t)'.
             U_1(t)                U_n(t)

Now, by the uniqueness of extensions of derivatives in the algebraic
case, we may replace t by any of its conjugates t_1,...,t_k, and the same
equation holds. In other words, because a is in F, it is fixed under the
Galois automorphisms. Summing up over the conjugates, and converting the
U'/U terms into products using logarithmic differentiation, we have

                [U_1(t_1)*...*U_1(t_k)]'
  k a  =  c_1 ------------------------  + ... +  [V(t_1)+...+V(t_k)]'.
                 U_1(t_1)*...*U_1(t_k)

But the expressions in [...] are symmetric polynomials in t_i, and as
they are polynomials with coefficients in F, the resulting expressions
are in F. So dividing by k gives us (*) holding in F.

Case 2: t is logarithmic over F. Because of logarithmic differentiation
we may assume that the u's are monic and irreducible in t and distinct.
Furthermore, we may assume v has been decomposed into partial fractions.
The fractions can only be of the form f/g^j, where deg(f) < deg(g) and g
is monic irreducible. The fact that no terms outside of F appear on the
left-hand side of (*), namely just a appears, means a lot of cancellation
must be occurring.

Let t'=s'/s, for some s in F. If f(t) is monic in F[t], then f(t)' is
also in F[t], of one less degree. Thus f(t) does not divide f(t)'. In
particular, all the u'/u terms are in lowest terms already. In the f/g^j
terms in v, we have a g^(j+1) denominator contribution in v' of the form
-j*f*g'/g^(j+1). But g doesn't divide f*g', so no cancellation occurs.
But no u'/u term can cancel, as the u's are irreducible, and no
(xx)/g^(j+1) term appears in a, because a is a member of F. Thus no
f/g^j term occurs at all in v. But then none of the u's can be outside
of F, since nothing can cancel them. (Remember the u's are distinct,
monic, and irreducible.) Thus each of the u's is in F already, and v is
a polynomial. But v' = a - (expression in u's), so v' is in F also.
Thus v = b*t + c for some b in Con(F), c in F, by lemma 2. Then

              u_1'              u_n'        s'
  a  =  c_1 ----  + ... + c_n ----  +  b ---  +  c'
              u_1               u_n         s

is the desired form. So case 2 holds.

Case 3: t is exponential over F. So let t'/t=s' for some s in F. As in
case 2 above, we may assume all the u's are monic, irreducible, and
distinct, and put v in partial fraction decomposition form. Indeed the
argument is identical to case 2 until we try to conclude what form v is.
Here lemma 3 tells us that v is a finite sum of terms b*t^j where each
coefficient is in F. Each of the u's is also in F, with the possible
exception that one of them may be t. Thus every u'/u term is in F, so
again we conclude v' is in F. By lemma 3, v is in F. So if every u is
in F, a is in the desired form. Otherwise, one of the u's, say u_n, is
actually t; then

              u_1'
  a  =  c_1 ----  + ... +  (c_n s + v)'
              u_1

is the desired form. So case 3 holds.
------------------------------------------------------------------QED------
This proof, by the way, is a LOT easier than it looks. Just work out
some examples, and you'll see what's going on. (If this were a real
expository paper, such examples would be provided. Maybe it's better
this way. Indeed, if anybody out there takes the time to work some out
and post them, I would be much obliged.)

So how do you actually go about using this theorem? Suppose you want to
integrate f*exp(g) for f,g in C(z), g nonzero. [This isn't yet the
asked-for problem.] Let t=exp(g), so t'/t=g'. Let F=C(z)(t), and let G
be any differential extension field containing an antiderivative of f*t.
[Note that t is in fact transcendental over C(z): g is rational and
nonzero, so it has a pole (possibly at infinity), and so t has an
essential singularity and can't be algebraic over C(z).] Is G an
elementary extension? If so, then

                u_1'              u_n'
  f*t  =  c_1 ----  + ... + c_n ----  +  v'
                u_1               u_n

where the c_i, u_i, and v are in F. Now the left hand side can be viewed
as a polynomial in C(z)[t] with exactly one term. We must identify the
coefficient of t in the right hand side and get an equation for f. But
the first n terms can be factored until the u_i's are linear (using the
logarithmic differentiation identity to preserve the abstract form). As
for the v' term, long divide and use partial fractions to conclude v is a
sum of monomials: if v had a linear denominator other than t, raised to
some power in its partial fraction decomposition, its derivative would be
one higher power, and so cannot be cancelled with anything from the u_i
terms. (As in the proof.) If w is the coefficient of t in v, we have
f=w'+wg' with w in C(z). Solving this first order ODE, we find that
w=exp(-g)*I(f*exp(g)). In other words, if an elementary antiderivative
can be found for f*exp(g), where f,g are rational functions, then it is of
the form w*exp(g) for some rational function w. [Notice that the
conclusion would fail for g equal to 0!]

For example, consider f=1 and g=-z^2. Now exp(z^2)*I(exp(-z^2)) has no
poles (except perhaps at infinity), so if it is a rational function, it
must be a polynomial. So let (p(z)*exp(-z^2))'=exp(-z^2). One quickly
verifies that p'-2zp=1. But the only solution to that ODE is the error
function I(exp(-z^2)) itself (within an additive constant somewhere)!
And the error function is NOT a polynomial! (Proof? OK, for one thing,
its Taylor series obtained by termwise integration is infinite. For
another, its derivative is an exponential.)
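The degree argument against p'-2zp=1 can also be made mechanical: for any nonzero polynomial p, the leading coefficient of p'-2zp sits at degree deg(p)+1 (it is -2 times p's leading coefficient), so p'-2zp can never equal the constant 1. A hypothetical Python sketch with coefficient lists (lowest degree first; the helper names are mine, not part of the original post):

```python
import random

def derivative(p):
    """p = [a0, a1, ...] represents a0 + a1*z + ...; return coefficients of p'."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def minus_2z_times(p):
    """Coefficients of -2*z*p(z)."""
    return [0] + [-2 * c for c in p]

def add(p, q):
    n = max(len(p), len(q))
    return [a + b for a, b in zip(p + [0] * (n - len(p)), q + [0] * (n - len(q)))]

def degree(p):
    """Degree ignoring trailing zeros; the zero polynomial gets -1."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

# For every nonzero p, deg(p' - 2zp) = deg(p) + 1 >= 1, so p' - 2zp = 1 is
# impossible: no polynomial p satisfies (p(z)*exp(-z^2))' = exp(-z^2).
for _ in range(200):
    p = [random.randint(-5, 5) for _ in range(random.randint(1, 8))]
    if degree(p) >= 0:
        assert degree(add(derivative(p), minus_2z_times(p))) == degree(p) + 1
```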

As an exercise, prove that I(exp(z)/z) is not elementary. Conclude that
neither is I(exp(exp(z))) and I(1/log(z)).

For a slightly harder exercise, prove that I(sin(z)/z) is not elementary.
Conclude that neither is I(sin(exp(z))).

Finally, we consider the case I(z^z).

So this time, let F=C(z,l)(t), the field of rational functions in z,l,t,
where l=log z and t=exp(zl)=z^z. Note that z,l,t are algebraically
independent. (Choose some appropriate domain of definition.) Then
t'=(1+l)t, so for a=t in the above situation, the partial fraction
analysis (of the sort done in the previous posts) shows that the only
possibility is for v=wt+... to be the source of the t term on the left,
with w in C(z,l).

So this means, equating t coefficients, 1=w'+(l+1)w. This is a first
order ODE, whose solution is w=I(z^z)/z^z. So we must prove that no such
w exists in C(z,l). So suppose (as in one of Ray Steiner's posts) w=P/Q,
with P,Q in C[z,l] having no common factors. Then z^z =
(z^z*P/Q)' = z^z*[(1+l)PQ + P'Q - PQ']/Q^2, or Q^2 = (1+l)PQ + P'Q - PQ'.
So Q|Q', meaning Q is a constant, which we may assume to be one. So we
have it down to P' + P + lP = 1.

Let P=Sum[P_i l^i], with P_i, i=0...n in C[z]. But then in our equation,
there's a dangling P_n l^(n+1) term, a contradiction.
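The ODE at the heart of this step can at least be checked numerically: with w(z) = I(z^z)/z^z computed by brute-force quadrature, the residual of w' + (1+ln z)w = 1 should vanish up to discretization error. A rough sketch (the helper names are mine, nothing here is from Wiener's post):

```python
import math

def integral_t_to_t(z, n=40_000):
    """Midpoint-rule approximation of I(t^t) = integral_0^z t^t dt."""
    h = z / n
    return h * sum(((i + 0.5) * h) ** ((i + 0.5) * h) for i in range(n))

def w(z):
    """Candidate solution w = I(z^z) / z^z of the ODE w' + (1 + ln z) w = 1."""
    return integral_t_to_t(z) / z ** z

z, h = 2.0, 1e-3
w_prime = (w(z + h) - w(z - h)) / (2 * h)  # central difference
residual = w_prime + (1 + math.log(z)) * w(z) - 1
print(abs(residual))  # ≈ 0, up to quadrature/finite-difference error
```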
--
-Matthew P Wiener (wee...@sagi.wistar.upenn.edu)


Jonathan Hoyle

Mar 22, 2002, 1:24:20 PM
"junhwi" <jun...@modsim.co.kr> wrote in message news:<a7estj$ts$1...@news.kreonet.re.kr>...

> what is the integration of x^x ?
>
> x = 0 to X (consider only nonnegative values)
>
> And, is it possible to construct the general form of the integral of f(x)^g(x) ?

The function f(x) = x^x for x>0 with f(0) = 1 is indeed continuous over
[0,oo), and therefore integrable. You can even extend this over the
whole real line by considering f(x) = |x|^x for x!=0, and f(0) = 1. I do
not know if there is an algebraic solution for the integral of |x|^x.

Lawrence V. Cipriani

Mar 22, 2002, 1:45:58 PM
In article <a7estj$ts$1...@news.kreonet.re.kr>,
junhwi <jun...@modsim.co.kr> wrote:

10 bonus points: find the minimum of x^x for x >= 0

Franz Fritsche

Mar 22, 2002, 2:10:47 PM
On 22 Mar 2002 18:45:58 GMT, l...@ww3.lucent.com (Lawrence V. Cipriani)
wrote:

>
> 10 bonus points: find the minimum of x^x for x >= 0
>

I'll concentrate on the minimum of x^x for x > 0.

---> Well let's ask MuPad...

plotfunc(x^x, x=0..1);

diff(x^x, x);

solve(%, x);

---> ...

:-)

F.

Jan Kristian Haugland

Mar 22, 2002, 3:57:30 PM

That's one point for you, and nine for your computer.
Seriously, what gives you the urge to announce that
you need a computer program to differentiate x^x ?

--

J K Haugland
http://hjem.sol.no/neutreeko

A N Neil

Mar 22, 2002, 4:18:20 PM
In article <a7estj$ts$1...@news.kreonet.re.kr>, junhwi
<jun...@modsim.co.kr> wrote:

> what is the integration of x^x ?
>
> x = 0 to X (consider only nonnegative values)

It is not an elementary function.
Is that the question?

Franz Fritsche

Mar 22, 2002, 4:24:55 PM
On Fri, 22 Mar 2002 21:57:30 +0100, Jan Kristian Haugland
<jkha...@stud.hia.no> wrote:

>
> That's one point for you, and nine for your computer.
>

Well to be more precise(ly?): at least five for MuPad! :-)

> Seriously, what gives you the urge to announce that
> you need a computer program to differentiate x^x ?
>

Who said that I announced that? Sorry, J K but you must be
dreaming... You claim things that are _neither said_ nor _expressed_
whatsoever.

Why is it SO difficult for you, just to stay with the facts?

F.

Franz Fritsche

Mar 22, 2002, 4:27:03 PM
Sorry, I forgot to mention...

>
> Why is it SO difficult for you, just to stay with the facts?
>

---> This is _extremely_ annoying.

F.

Doug Norris

Mar 22, 2002, 4:32:16 PM
in...@simple-line.de (Franz Fritsche) writes:

Nice to see Franz responding to his own post, mentioning that it's extremely
annoying. Although, it's the first thing I've agreed with him/her/it on
in quite some time.

Oh, I should throw in some smilies, just so that everyone feels better about
themselves :-) :-) :-) :-)

Doug

Jaime Gaspar

Mar 22, 2002, 4:52:44 PM
"Lawrence V. Cipriani" <l...@ww3.lucent.com> escreveu na mensagem
news:a7fu56$2...@nntpb.cb.lucent.com...

> 10 bonus points: find the minimum of x^x for x >= 0

Help: (x^x)' = (ln x + 1) * x^x
:-)


Regards

Bengt Månsson

Mar 22, 2002, 4:52:17 PM
Lawrence V. Cipriani wrote:


Min (1/e)^(1/e) when x = 1/e.

Dx^x = De^(x ln x) = e^(x ln x)*D(x ln x) = x^x*(ln x + 1) etc.

-

___________________________________________________________________
Bengt Månsson, Partille, Sweden http://www.algonet.se/~bengtmn/
PhD Theoretical Physics, speciality Classical Fields and Relativity
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

David C. Ullrich

Mar 22, 2002, 6:02:03 PM
On Fri, 22 Mar 2002 21:52:17 GMT, Bengt Månsson
<ben...@telia.com> wrote:

>Lawrence V. Cipriani wrote:
>
>> In article <a7estj$ts$1...@news.kreonet.re.kr>,
>> junhwi <jun...@modsim.co.kr> wrote:
>>
>> 10 bonus points: find the minimum of x^x for x >= 0
>
>
>Min (1/e)^(1/e) when x = 1/e.
>
>Dx^x = De^(x ln x) = e^(x ln x)*D(x ln x) = x^x*(ln x + 1) etc.

There's a more interesting way to find the derivative - worth
mentioning for situations where you can't rewrite things this
way: If c is a constant then the derivative of a^x is log(a) a^x
and the derivative of x^a is a x^(a-1); it follows from the
chain rule in several variables that the derivative of x^x
is log(x) x^x + x x^(x-1).

(Intuitively, the rate of change of x^x is the sum of the
changes due to the change in each of the x's, with the
other held constant. Officially: Let F(s,t) = s^t, and
denote the partial derivatives by F_1 and F_2. Then
x^x = F(x,x), so the derivative of x^x is F_1(x,x) + F_2(x,x),
by the chain rule.

For example, define f(x) = int_0^x t^x dt. You can
find f'(x) this way.)
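That two-term formula is easy to sanity-check against a central difference; a throwaway Python sketch (my own, just to illustrate the chain-rule decomposition):

```python
import math

def d_chain_rule(x):
    """Derivative of x^x as F_1(x,x) + F_2(x,x) for F(s,t) = s^t:
    the partial in the base is t*s^(t-1), the partial in the exponent
    is log(s)*s^t; evaluate both at s = t = x."""
    return x * x ** (x - 1) + math.log(x) * x ** x

def d_numeric(x, h=1e-6):
    """Central-difference approximation of (x^x)'."""
    return ((x + h) ** (x + h) - (x - h) ** (x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0, 3.7):
    assert abs(d_chain_rule(x) - d_numeric(x)) < 1e-3
```

Algebraically the two terms recombine into the familiar x^x * (1 + log x).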



David C. Ullrich

Franz Fritsche

Mar 22, 2002, 6:05:12 PM
Do(u)g barking again... :-)))

F.

P.S.
And so much mathematical context in his ahhh... whatever... (as
usual).

Franz Fritsche

Mar 22, 2002, 6:25:08 PM
>
> 10 bonus points: find the minimum of x^x for x >= 0
>

Well, that's an easy task.

Let x^x = 0 for x = 0. Since x^x > 0 for all x > 0, x^x has the
minimum 0 at x = 0.

:-)

F.

Franz Fritsche

Mar 22, 2002, 6:28:00 PM
On Fri, 22 Mar 2002 21:57:30 +0100, Jan Kristian Haugland
<jkha...@stud.hia.no> wrote:

< bla bla >

Could you id*ot restrict your comments to mathematical content next
time?

Thank you in advance.

F.

Doug Norris

Mar 22, 2002, 6:40:47 PM
in...@simple-line.de (Franz Fritsche) writes:

>Let x^x = 0 for x = 0. Since x^x > 0 for all x > 0, x^x has the
>minimum 0 at x = 0.

Speaking of a post with zero mathematical content...

Doug

Doug Norris

Mar 22, 2002, 7:11:47 PM
in...@simple-line.de (Franz Fritsche) writes:

>Could you id*ot restrict your comments to mathematical content next
>time?

You mean like your above comment?

Doug

David Kastrup

Mar 22, 2002, 7:36:36 PM
in...@simple-line.de (Franz Fritsche) writes:

Unfortunately, 0^0 = 1, and so is the limit of x^x for x -> 0+. So we
have to differentiate and solve (ln(x) + 1)*x^x = 0, which has the
obvious zero at x = 1/e.

--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Email: David....@t-online.de
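The claimed minimizer is easy to confirm numerically; here is a small sketch using a ternary search (my own illustration, assuming only that x^x is unimodal on (0, 1]):

```python
import math

def ternary_search_min(f, lo, hi, iters=200):
    """Shrink [lo, hi] around the minimum of a unimodal function."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x_min = ternary_search_min(lambda x: x ** x, 1e-9, 1.0)
print(x_min, x_min ** x_min)  # ≈ 0.3679 (= 1/e) and ≈ 0.6922 (= (1/e)^(1/e))
```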

Dave L. Renfro

Mar 23, 2002, 12:19:47 AM
Zdislav V. Kovarik <kov...@mcmail.cis.McMaster.CA>
[sci.math 22 Mar 2002 12:11:49 -0500]
http://mathforum.org/epigone/sci.math/claiglaxbloa/a7fokl$n...@mcmail.cis.mcmaster.ca

wrote (in part):

> I re-posted it many times before; is there an FAQ
> pointer to it, or a more recent version?
> Enjoy, ZVK(Slavek).

Below is a Dec. 3, 2001 ap-calculus post of mine where I
collected a lot of useful links on this topic. [I haven't
checked to see if all the links are still valid.]

Dave L. Renfro

*******************************************
*******************************************

http://mathforum.org/epigone/ap-calc/fryspencrim

Subject: [ap-calculus] Re: Is this an integrable function?
Author: Dave L. Renfro <dlre...@gateway.net>
Date: Mon, 03 Dec 2001 09:16:19 -0500

Ken Coulson <kcou...@lausd.k12.ca.us>
[ap-calculus Wed, 21 Nov 2001 19:42:21 -0800]
http://mathforum.org/epigone/ap-calc/zhehsmimyou

wrote

> Perhaps the group can assist me and one of my fellow teachers
> with the following:
>
> Is Integral (sin (x^2) )^2 dx integrable. ie is there
> an F(x) such that F'(x) equals the sine squared of x squared.
> I contend that it is not integrable. Am I wrong about this?

Several others have already responded (I was very busy last
week), so my comments are meant to supplement what has already
been posted.

As Mark Snyder essentially pointed out in his Nov. 27 post (his post
labeled Nov. 28 at <http://mathforum.org/epigone/ap-calc/fryspencrim>),
the integral of [sin(x^2)]^2 is (modulo an additive constant)
sqrt(2)*x - 2*sqrt(2)*J, where J is the integral of cos(x^2).
Since the integral of cos(x^2) is known to be a non-elementary
function, it follows that the integral of [sin(x^2)]^2 is also
a non-elementary function.

The situation is similar to sqrt(2) being irrational, which means
that sqrt(2) cannot be expressed using a finite sequence of the
four basic operations (+, -, *, /) applied to the integers.

[[ Actually, the situation is more akin to Pi being
transcendental, since "elementary function" includes
functions that cannot be expressed in a finite way
using "standard calculus functions" along with "standard
precalculus" operations (in the same way that "algebraic
number" includes solutions to polynomial equations having
integer coefficients that cannot be expressed using a
finite sequence of the four basic arithmetic operations
along with positive integer root extractions -- for more
about this, see my two Jan. 14, 2001 posts at
<http://mathforum.org/epigone/sci.math.symbolic/playzerdblar>). ]]

See the following web pages for more about integration in terms
of elementary functions --->>>

Matthew P Wiener [a sci.math post from Nov. 30, 1997]
http://www.mathsoft.com/asolve/constant/itrexp/wiener.html

G. A. Edgar's May 31, 1998 sci.math post at
http://mathforum.org/epigone/sci.math/khaxzezo

An outline of a proof that no antiderivative of exp(-x^2) is an
elementary function.
http://mathforum.org/epigone/sci.math/yoxkhaheh

Ask Dr. Math: "Symbolic simplification and integration"
[The Risch algorithm for CAS is mentioned near the bottom.]
http://mam2000.mathforum.org/dr.math/problems/macho11.30.98.html

Robert Israel, "Integration in Elementary Functions"
http://gamba.math.ubc.ca/coursedoc/m210/lesson17.html

Michael Singer's lectures on Symbolic Computation [NC State Univ.,
MA 591U, Spring 2001]
http://www4.ncsu.edu/~jeperry/Notes_In_Computer_Algebra/

Andy R. Magid, "Differential Galois Theory", Notices of the
American Mathematical Society 46(9) (Oct. 1999), 1041-1049.
http://www.ams.org/notices/199909/fea-magid.pdf


Dave L. Renfro

*******************************************
*******************************************

Navin Kadambi

Mar 23, 2002, 1:04:31 AM
Hi,
It's not always possible to integrate a function in elementary terms,
and it is very likely x^x is one such. It is curious to note that while
we can integrate (2x * e^(x^2)), we cannot integrate e^(x^2). Integration
is an art, while differentiation -- well, any twit can differentiate.

Let's differentiate x^x (which I was playing with in high school).

y = x^x

log y = x * log x

(1/y)dy/dx = x(1/x) + log x

dy/dx = y * (1 + log x)

Therefore, dy/dx = x^x * (1 + log x)

Sadly, integration is never this easy. Another curious example: you
should be able to integrate the result x^x * (1 + log x) back to
x^x -- even though you might really have to stretch your brain power to
integrate x^x by itself! A tougher-looking function is not necessarily
harder to integrate.

Regards,
N.K.B. Kadambi

A N Neil <ann...@nym.alias.net> wrote in message news:<220320021618205301%ann...@nym.alias.net>...
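The last remark is the fundamental theorem of calculus in action: since x^x * (1 + log x) is exactly (x^x)', its integral over [1, 2] must equal 2^2 - 1^1 = 3. A quick numerical sketch (illustration only, not anyone's posted code):

```python
import math

def midpoint_rule(f, a, b, n=100_000):
    """Plain midpoint quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The integrand is the derivative of x^x, so the integral telescopes
# to the endpoint values 2^2 - 1^1.
val = midpoint_rule(lambda x: x ** x * (1 + math.log(x)), 1.0, 2.0)
print(val)  # ≈ 3.0
```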

Bengt Månsson

Mar 23, 2002, 2:26:43 AM
Franz Fritsche wrote:


0^0 is usually left undefined. There are arguments for 0^0 = 1 as well
as for 0^0 = 0 but none is in common use since there are drawbacks to
both (btw there was a _long_ discussion on this in sci.math some years
ago).

The poster who asked for the minimum should have limited himself to the
open interval x > 0.

(BTW It was a little depressing to read that unprovoked attack by JKH
for you using a computer program. In particular since he is able to
write things of mathematical interest and often does.)


> F.

--

Franz Fritsche

Mar 23, 2002, 3:51:35 AM
Hi David!

>
> Unfortunately, 0^0=1
>
Sorry, but that's wrong IF you define 0^0 = 0 (and of course you have
the freedom to do so ;-) or if it's left undefined.

Read what the math FAQ has to say in respect to this question...

Sincerely,
F.

Franz Fritsche

Mar 23, 2002, 4:11:06 AM
>
> Speaking of a post with zero mathematical content...
>

Well, at least you seem to have some humor. ;-)

F.


... one of the main causes of the fall of the Roman Empire was that,
lacking zero, they had no way to indicate successful termination of
their C programs.

-- Robert Firth

Franz Fritsche

Mar 23, 2002, 4:14:24 AM
Hi Bengt!

>
> 0^0 is usually left undefined.
>

Until you _define_ it! :-)

>
> The poster who asked for the minimum should have limited
> himself to the open interval x > 0.
>

Sure... :-)

See my first posting to this thread... ;-)

F.

David Kastrup

Mar 23, 2002, 5:41:40 AM
in...@simple-line.de (Franz Fritsche) writes:

> Hi David!
>
> >
> > Unfortunately, 0^0=1
> >
> Sorry, but that's wrong IF you define 0^0 = 0 (and of course you have
> the freedom to do so.

You can also define 1^1=7. It just makes no sense.

> ;-) or if its left undefined.
>
> Read what the math FAQ has to say in respect to this question...

Yes, do. There is no interpretation or argument in it that would
justify setting x^x=0 at x=0. The limit is 1, and the common
definition is 1. The only mildly defensible point would be to leave
0^0 undefined (at the cost of making the polynomial a_2 x^2 + a_1 x^1
+ a_0 x^0 undefined for x=0), but certainly not setting it to 0.

G. A. Edgar

Mar 23, 2002, 6:24:45 AM
The teacher in me insists that answers should be like this:

The function x^x is continuous, so of course it has an indefinite
integral. However, ...

--
G. A. Edgar http://math.ohio-state.edu/~edgar/

Franz Fritsche

Mar 23, 2002, 6:29:34 AM
Hi David!

>>
>> Read what the math FAQ has to say in respect to this question...
>>
> Yes, do. There is no interpretation or argument in it that would
> justify setting x^x=0 at x=0.
>

Well, I give you one right here: x^0 = 0 for all x e IR, x != 0. If I
want to have this defined at x = 0, and do not like a discontinuity, I
think 0 is a good choice... ;-)


But the point was not to "justify" the arbitrary choice of 0^0 = 0 but
to show the meaninglessness of a proposition like:

"Unfortunately, 0^0=1..."

...seems that you missed this important point somehow... :-)


> and the common definition is 1.

There is no such thing as a "common" definition, even if you will
_frequently_ find that (in certain contexts) 0^0 is considered
(defined) to be 1.

But you can SET it to every value you want.

F.

P.S.
I thought you had read the FAQ?

Franz Fritsche

Mar 23, 2002, 6:34:29 AM
Sorry, a typo (thinko)

> Well, I give you one right here: 0^x = 0 for all x e IR, x > 0. If I


> want to have this defined at x = 0, and do not like a discontinuity, I
> think 0 is a good choice... ;-)
>

F.

Franz Fritsche

Mar 23, 2002, 6:45:08 AM
On Sat, 23 Mar 2002 11:34:29 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>
> Well, I give you one right here: 0^x = 0 for all x e IR, x > 0. If I
> want to have this defined at x = 0, and do not like a discontinuity, I
> think 0 is a good choice... ;-)
>

It's interesting what R. Graham, D. Knuth, and O. Patashnik have to say
in this respect:

From Concrete Mathematics p.162 (R. Graham, D. Knuth, O. Patashnik):

    Some textbooks leave the quantity 0^0 undefined, because the
    functions x^0 and 0^x have different limiting values when x
    decreases to 0. But this is a mistake. We must define x^0 = 1
    for all x, _if_ the binomial theorem is to be valid when
    x = 0, y = 0, and/or x = -y.

Well, no one doubts that...

    The theorem is too important to be arbitrarily restricted!

Well, one may have this opinion (or not...).

    By contrast, the function 0^x is quite unimportant.

But it is not SO unimportant that R. Graham, D. Knuth, and O. Patashnik
forget to mention it... ;-)

F.

David Kastrup

Mar 23, 2002, 7:51:12 AM
in...@simple-line.de (Franz Fritsche) writes:

It is a bad choice since you extend the range of continuity for just
a single point. That's not worth the trouble you reap.

for y<0: lim x->0+ x^y = infinity
for y>0: lim x->0+ x^y = 0

So there is no compelling reason to set a value for 0^0 based on the
exponent.
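The two one-sided limits are easy to exhibit numerically (a throwaway sketch, mine):

```python
# As x -> 0+, x^y diverges for y < 0 and vanishes for y > 0, so the
# behaviour of the exponent alone singles out no value for 0^0.
for x in (1e-2, 1e-6, 1e-10):
    print(x ** -0.5, x ** 0.5)  # first column grows without bound, second shrinks
```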

David Kastrup

Mar 23, 2002, 7:55:43 AM
in...@simple-line.de (Franz Fritsche) writes:

There is a difference in quality between fixing a single discontinuity
over the entire reals (as with x^0) and extending the range of
definition by a single measly point (as with 0^x), since 0^x is
undefined for negative x, and limiting expressions of that form will
be divergent.

Franz Fritsche

Mar 23, 2002, 8:06:53 AM
On 23 Mar 2002 13:55:43 +0100, David Kastrup
<David....@t-online.de> wrote:

>
> There is a difference in quality between fixing a single discontinuity
> over the entire reals (as with x^0) or with extending the range of
> definition by single measly point (as with 0^x), since 0^x is

> undefined for negative x, and...
>
Sure. We don't have to discuss this... :-)

I certainly (normally) would consider x^0 to be 1 for all x e IR. But,
as it happens, sometimes it's convenient to define 0^0 to be, say... 0.
[ The point is/was that, in this case, the OP did not fix it either
way... ;-) ]

F.

Franz Fritsche

Mar 23, 2002, 8:10:31 AM
On 23 Mar 2002 13:51:12 +0100, David Kastrup
<David....@t-online.de> wrote:

>
> So there is no compelling reason to set a value for 0^0 based on the
> exponent.
>

That's certainly true! No question. ( I certainly agree with the
comments of R. Graham, D. Knuth, O. Patashnik in this respect... ;-)

F.

Bengt Månsson

Mar 23, 2002, 8:43:17 AM
(This is for several posters; I just chose the latest.)

Franz Fritsche wrote:

I looked at the FAQ, a Swedish discussion group (or rather a web site
where you can send questions) and some computer algebra systems.

It seems that the most common choice is to define 0^0 = 1, and I agree
with the argument by Graham et al. On the other hand, what about this
(from the Swedish group):

x^(1/ln(x)) = e for all x > 0. And 1/ln(x) --> 0 when x --> 0+. Then if
we want the function f(x) = x^(1/ln(x)) to be continuous for x >= 0 we
must define 0^0 = e. That was not a serious proposal, just an example to
show that there is some argument for values other than 1 and 0. (The
poster preferred to leave 0^0 undefined.)
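The identity behind that example is exact for every positive x other than 1, since x^(1/ln x) = exp(ln(x)/ln(x)) = e. A quick check of my own:

```python
import math

# x^(1/ln x) = exp(ln(x) * (1/ln(x))) = e for every x > 0, x != 1,
# so letting x -> 0+ produces a "0^0 = e" limiting form.
vals = [x ** (1.0 / math.log(x)) for x in (0.5, 2.0, 1e-6, 1e6)]
assert all(abs(v - math.e) < 1e-9 for v in vals)
```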

I tested some CASs too. Yes, I know that's no mathematical argument, but
it showed the programmers' choices. 0^0 was defined as 1 by Derive,
Maple, Matlab and MuPad, but it was left undefined by MathCad and Mathematica.
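For what it's worth, Python (not among the systems the poster tried) makes the same choice as Derive and Maple:

```python
import math

# Python's ** operator and math.pow both return 1 for 0^0,
# for integer as well as floating-point zeros.
assert 0 ** 0 == 1
assert 0.0 ** 0.0 == 1.0
assert math.pow(0.0, 0.0) == 1.0
```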

(The Swedish group is at http://www.maths.lth.se/query/. It is in
Swedish but questions can be submitted and are then answered in English.)

David Kastrup

Mar 23, 2002, 8:49:24 AM
Bengt Månsson <ben...@telia.com> writes:

> It seems that the most common is to define 0^0 = 1 and I agree with
> the argument by Graham et al. On the other hand, what about this (from
> the Swedish group):
>
> x^(1/ln(x)) = e for all x > 0. And 1/ln(x) --> 0 when x --> 0+. Then
> if we want the function f(x) = x^(1/ln(x)) to be continuous for x >= 0
> we must define 0^0 = e. That was not a serious proposal, just an
> example to show that there is some argument for other values than 1
> and 0. (The poster prefered to leave 0^0 undefined.)

As with 0^x, this continuation buys you an extension of the
continuity by just a single point before the function becomes
undefined.

Apart from rather contrived examples, the only actually *useful*
continuation (namely one that not just changes a validity from an open
to a closed interval) is that based on completing the domain of x^0 in
the obvious way.

And you really don't want to have
sum(k=0, infinity) x^k a_k
be undefined for x=0.
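In concrete terms (my own sketch, not Kastrup's): evaluating a power series sum a_k x^k term by term at x = 0 should return a_0, and that silently uses 0^0 = 1.

```python
# Evaluate p(x) = 3 + 2x + 5x^2 termwise; p(0) = a_0 only because 0**0 == 1.
a = [3.0, 2.0, 5.0]

def poly(x):
    return sum(ak * x ** k for k, ak in enumerate(a))

assert poly(0.0) == 3.0   # the k = 0 term contributes a_0 * 0**0 = a_0
assert poly(1.0) == 10.0  # 3 + 2 + 5
```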

David C. Ullrich

Mar 23, 2002, 9:29:25 AM
On Fri, 22 Mar 2002 23:25:08 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>>

Uh, no - in a context like this we define the value of x^x for
x = 0 by continuity; then the value at x = 0 turns out to be 1.

>:-)
>
>F.


David C. Ullrich

Dave L. Renfro

Mar 23, 2002, 9:41:11 AM
G. A. Edgar <ed...@math.ohio-state.edu>
[sci.math Sat, 23 Mar 2002 06:24:45 -0500]
http://mathforum.org/epigone/sci.math/claiglaxbloa

wrote

> The teacher in me insists that answers should be like this:
>
> The function x^x is continuous, so of course it has an
> indefinite integral. However, ...

You'll be pleased to know that a lot of the replies in the thread
I cited,

http://mathforum.org/epigone/ap-calc/fryspencrim

took the original poster (in that ap-calculus thread) to task
over this matter. Since I came into the thread a bit late, I
didn't bother rehashing that issue. [But now we're in another
thread . . .] Typical of some of the replies in that ap-calculus
thread is the following --->>>

***************************************
***************************************

Subject: [ap-calculus] Re: Is this an integrable function?

Author: Jerry Uhl <ju...@cm.math.uiuc.edu>
Date: Tue, 27 Nov 2001 12:29:03 -0600

Sin[x^2] and its relative e^(-x^2) are integrable functions. Too many
AP students come to universities with the mistaken notion that these
two functions are not integrable. That's too bad, because the whole
theory of normal probability (bell curve) is based on integrating
e^(-x^2).

On the other hand the integrals of these functions are not
expressible in terms of the limited list of elementary functions
found in most calculus books. Think of it this way: If nobody had
ever heard of a logarithm, then everybody would say that 1/x is not
integrable in terms of elementary functions.

-Jerry Uhl

***************************************
***************************************
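Uhl's claim is easy to confirm numerically: the integral of e^(-x^2) over [0, oo) exists and equals sqrt(pi)/2, even though the antiderivative (erf, up to scaling) is not elementary. A midpoint-rule sketch of my own:

```python
import math

# Midpoint rule on [0, 10]; the tail beyond 10 is below e^(-100), negligible.
n, b = 200_000, 10.0
h = b / n
integral = h * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))

assert abs(integral - math.sqrt(math.pi) / 2) < 1e-8
```

The function integrates just fine; only the *closed form* in elementary functions is missing, which is exactly the 1/x-and-logarithm analogy above.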

Dave L. Renfro

Dave L. Renfro

Mar 23, 2002, 10:01:36 AM
Lawrence V. Cipriani <l...@ww3.lucent.com>
[sci.math 22 Mar 2002 18:45:58 GMT]
http://mathforum.org/epigone/sci.math/claiglaxbloa/a7fu56$2...@nntpb.cb.lucent.com

wrote

> 10 bonus points: find the minimum of x^x for x >= 0

which led to a lot of semi-mathematical follow-ups about
0^0. In case anyone not_interested_in 0^0 quibbles is still
reading this thread, here are some more extrema that can be
found in explicit form. [Assume x is restricted so that all
bases and all inputs of logarithms are positive.]

y = x^x local min @ x = e^(-1)

y = x^(1/x) local max @ x = e

y = x^[x*ln(x)] local max @ x = e^(-2), local min @ x = 1

y = (ln x)^(ln x) local min @ x = e^(1/e)

y = (ln x)^[-(ln x)] local max @ x = e^(1/e)

y = (1 + ln x)^(ln x) local min @ x = 1

y = [ln(ln x)]^[ln(ln x)] local min @ x = exp[e^(1/e)]

y = (ln x)^[ln(ln x)] local min @ x = e

y = [(ln x) - ln(ln x)]^x local min @ x = e

Dave L. Renfro
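The first two entries in the list are easy to spot-check with a brute-force grid; this sketch is my own, not Renfro's:

```python
import math

# Grid search on (0, 10): argmin of x^x and argmax of x^(1/x).
xs = [k / 10_000 for k in range(1, 100_000)]
argmin_xx = min(xs, key=lambda x: x ** x)
argmax_x1x = max(xs, key=lambda x: x ** (1.0 / x))

assert abs(argmin_xx - 1 / math.e) < 1e-3   # local min of x^x at e^(-1)
assert abs(argmax_x1x - math.e) < 1e-3      # local max of x^(1/x) at e
```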

Franz Fritsche

Mar 23, 2002, 10:08:03 AM
On Fri, 22 Mar 2002 23:25:08 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>
> Uh, no - in a context like this we define... <bla bla>
>

Sorry, Ullrich, you _surely_ mean: "...IF we define it like this and
that..." - thank god the mathematician is not confined by some
..ahhh... well, he's not confined in any way (here)... in this
respect.

Since the OP (of the question) did not mention whether x^x should be
continuous or not on x >= 0, this question is _completely_ open.

Ullrich, you shouldn't _always_ mistake the way _you_ would do things
for "the way things are done". :-)

It would surely be more accurate if you could begin sentences with:
"Well, *I* would do it not this way but..." - everything else is
extremely "misleading".

F.

David C. Ullrich

Mar 23, 2002, 10:26:34 AM
On Sat, 23 Mar 2002 15:08:03 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Fri, 22 Mar 2002 23:25:08 GMT, in...@simple-line.de (Franz Fritsche)
>wrote:
>
>>
>> Uh, no - in a context like this we define... <bla bla>
>>
>
>Sorry, Ullrich, you _surely_ mean: "...IF we define it like this and
>that..." - thanx god the mathematician is not confined by some
>..ahhh... well, its not confined in any way (here)... in this
>respect.

No, I meant exactly what I said - when the question is to
find the minimum of x^x for x >= 0 it's clear to everyone
but you that the value at the origin is supposed to be defined
by continuity.

>Since the OP (of the question) did not mention if x^x should be
>continuos or not on x >= 0, this question is _completely_ open.
>
>Ullrich, you shouldn't _always_ mistake the way _you_ would do things
>with "they way things are done". :-)

I don't.

>It would surely be more accurate if you could begin sentences with:
>"Well, *I* would do it not this way but..." - everything else is
>extremely "misleading".

No, what's misleading is suggesting that giving an arbitrary definition
of 0^0 counts as a sensible answer to the question.

>F.
>


David C. Ullrich

Franz Fritsche

Mar 23, 2002, 10:30:40 AM
On Sat, 23 Mar 2002 15:26:34 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> No, I meant exactly what I said - when the question is to
> find the minimum of x^x for x >= 0 it's clear to everyone
> but you that the value at the origin is supposed to be defined
> by continuity.
>

Well, that's a _bold_ assertion, and surely _not_ a question of
mathematics...; but I think it illustrates very nicely some specific
kind of ignorance...

>>
>> Since the OP (of the question) did not mention if x^x should be
>> continuos or not on x >= 0, this question is _completely_ open.
>>

>
> No, what's misleading is suggesting that giving an arbitrary definition
> of 0^0 counts as a sensible answer to the question.
>

Well, don't you think that this is a question related more to
"philosophy" (or whatever) than to mathematics itself; since the
answer is certainly VALID. If not _please_ prove me wrong!

:-)

Sincerely,
F.

P.S.
If one wanted to nitpick, one could even argue that the (tacit)
assumption that 0^0 = 1 (without mentioning it!) is faulty - and
proves some "solutions" wrong here. Since 0^0 *is not* just plainly 1.
( Of course you can give it this meaning... )

Jan Kristian Haugland

Mar 23, 2002, 10:38:44 AM

Bengt Månsson wrote:

> (BTW It was a little depressing to read that unprovoked attack by JKH
> for you using a computer program. In particular since he is able to
> write things of mathematical interest and often does.)

Sorry for causing you depression, but I find his
"look at me, I can write a C program to count the
number of solutions" style a _little_ annoying.
And this was along the same lines.

--

J K Haugland
http://hjem.sol.no/neutreeko

Franz Fritsche

Mar 23, 2002, 10:40:30 AM
>
> And this was along the same lines.
>
But only in your day dreams... :-)

F.

P.S.
You seriously seem to have missed ANY point
:-)))

Franz Fritsche

Mar 23, 2002, 10:38:35 AM
On Fri, 22 Mar 2002 23:02:03 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> ... it follows from the chain rule in several variables that the derivative
> of x^x is log(x) x^x + x x^(x-1).
>
Surely you will explain to me how this applies to the case x = 0.

// Since the question assumed x^x to be defined for x >= 0, and you
didn't mention any restrictions on the derivative of x^x above.

Thanx in advance!

F.

David C. Ullrich

Mar 23, 2002, 11:12:24 AM
On Sat, 23 Mar 2002 15:30:40 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Sat, 23 Mar 2002 15:26:34 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> No, I meant exactly what I said - when the question is to
>> find the minimum of x^x for x >= 0 it's clear to everyone
>> but you that the value at the origin is supposed to be defined
>> by continuity.
>>
>Well, that's a _bold_ assertion, and surely _not_ a question of
>mathematics...;

It's also _true_.

>but I think it illustrates very nicely some specific
>kind of ignorance...

Of course that's what you think - that's what you think
any time someone points out that something you said was
silly.

You should really change the "info" to something else.

>>>
>>> Since the OP (of the question) did not mention if x^x should be
>>> continuos or not on x >= 0, this question is _completely_ open.
>>>
>
>>
>> No, what's misleading is suggesting that giving an arbitrary definition
>> of 0^0 counts as a sensible answer to the question.
>>
>Well, don't you think that this is a question related more to
>"philosophy" (or whatever) than to mathematics itself; since the
>answer is certainly VALID. If not _please_ prove me wrong!
>
>:-)
>
>Sincerely,
>F.
>
>P.S.
>If one would be nitpick one could even argue that the (quiet)
>assumption that 0^0 = 1 (without mentioning it!)

I didn't assume any such thing, and I _did_ mention quite
explicitly what I did assume: _In_ a problem like this we
take the value at the point where the function is undefined
to be defined by continuity.

>is faulty - and
>proves some "solutions" wrong here. Since 0^0 *is not* just plainly 1.
>( Of course you can give it this meaning... )
>


David C. Ullrich

Franz Fritsche

Mar 23, 2002, 11:17:15 AM
>>
>> Well, that's a _bold_ assertion, and surely _not_ a question of
>> mathematics...;
>
>It's also _true_.
>
ANOTHER assertion...

*sigh*

>>
>>but I think it illustrates very nicely some specific
>>kind of ignorance...
>>
> Of course that's what you think - that's what you think
> any time someone points out that something you said was
> silly.
>

No, Ullrich, that is what I think if I see another silly message that
_you_ wrote. ( And this list is still growing... )


>>
>> If one would be nitpick one could even argue that the (quiet)
>> assumption that 0^0 = 1 (without mentioning it!)
>>
> I didn't assume any such thing, and I _did_ mention quite
> explicitly what I did assume: _In_ a problem like this we
> take the value at the point where the function is undefined
> to be defined by continuity.
>

???

Who we??? We the Ullrichs? Or what?
- You again don't make any sense here.

You again mistake the way _you_ would do things
for "the way things are done". :-)

So once more: The question is NOT what one "generally" would do here,
or what one would "normally" do here, etc. (since this can be argued
for or against... ) but what one CAN do here.

F.

David C. Ullrich

Mar 23, 2002, 11:38:17 AM
On Sat, 23 Mar 2002 16:17:15 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>>>
>>> Well, that's a _bold_ assertion, and surely _not_ a question of
>>> mathematics...;
>>
>>It's also _true_.
>>
>ANOTHER assertion...
>
>*sigh*
>
>>>
>>>but I think it illustrates very nicely some specific
>>>kind of ignorance...
>>>
>> Of course that's what you think - that's what you think
>> any time someone points out that something you said was
>> silly.
>>
>No, Ullrich, that is what I think if I see another silly message that
>_you_ wrote. ( And this list is still growing... )
>
>
>>>
>>> If one would be nitpick one could even argue that the (quiet)
>>> assumption that 0^0 = 1 (without mentioning it!)
>>>
>> I didn't assume any such thing, and I _did_ mention quite
>> explicitly what I did assume: _In_ a problem like this we
>> take the value at the point where the function is undefined
>> to be defined by continuity.
>>
>???
>
>Who we??? We the Ullrichs? Or what?

We here is anyone with a clue, who happens to be more
interested in math than in coming up with silly answers
and then insisting that they _could_ be considered correct.

>- You again don't make any sense her.
>
>You again mistake the way _you_ would do things
>with "they way things are done". :-)
>
>So once more: The question is NOT what one "generally" would do here;
>or what one would "normally" to here, etc. (since this can be argued
>for or against... ) but what one CAN do here.

Uh, no. The fact that it's not a mathematical question does
not prove that there's no such thing as what's generally done.
In a context like the present what's generally done is try to
interpret the question in the way the person asking it obviously
had in mind.

Have you ever read any analysis? Taking the value of a function
to be defined by continuity at a few points where the formula
defining it doesn't quite make sense _is_ a perfectly _standard_
convention. Instead of wasting your time here on sci.math you
should be writing letters to all the authors in the universe
who have ever referred to sin(z)/z as an entire function.

>F.
>


David C. Ullrich

Franz Fritsche

Mar 23, 2002, 11:31:17 AM
On Sat, 23 Mar 2002 14:29:25 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> Uh, no - in a context like this we define the value of x^x for
> x = 0 by continuity; then the value at x = 0 turns out to be 1.
>

Ullrich,

I would appreciate it if you could give your comments (regarding my posts)
some form that looks more like mathematics and less like "just
Ullrich's opinion".

The FOLLOWING would certainly be appropriate:

If we define the value of x^x for x = 0 by continuity,
then the value at x = 0 turns out to be 1.

WHO could say something against that?

F.

P.S.
Even the comment "And in contexts like this we _normally_ do this."
would be appropriate.

But a statement like "in a context like this _we_ do this and that"
doesn't make much sense. Since _every_ mathematician can do this
just as HE thinks is right.

It certainly would be a very _poor_ field if _you_ were the one who
defined what is to be considered proper action and what not... :-)))


Franz Fritsche

Mar 23, 2002, 11:36:00 AM
On Sat, 23 Mar 2002 16:38:17 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> < bla bla > insisting that they _could_ be considered correct.
>
To get you just right... : You claim my answer is WRONG?!!!

( PLEASE - PLEASE!!!)

F.

:-)))

David C. Ullrich

Mar 23, 2002, 11:47:50 AM
On Sat, 23 Mar 2002 16:31:17 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Sat, 23 Mar 2002 14:29:25 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> Uh, no - in a context like this we define the value of x^x for
>> x = 0 by continuity; then the value at x = 0 turns out to be 1.
>>
>
>Urrich,
>
>I would appreciate if you could give you comments (regarding my posts)
>some form that looks more like mathematics and less like "just
>Ullrich's opinion".
>
>The FOLLOWING would certainly be appropriate:
>
> If we define the value of x^x for x = 0 by continuity;
> then the value at x = 0 turns out to be 1.
>
>WHO could say something against that?

Anyone who was not familiar with your posts would just as well wonder
WHO could say anything against the statement "in a context like
this we define the value of x^x for x = 0 by continuity".

>F.
>
>P.S.
>Even the comment "And in contexts like this we _normally_ do this."
>would be appropriate.
>
>But an statement like "in a context like this _we_ do this and that"
>doesn't make much sense. Since _every_ mathematician can make this
>just like HE think that its right.

I didn't say anything about what a mathematician _can_ do, I
said something about what they _do_.

>It certainly would be a very _poor_ field if _you_ were the one that
>defined what is to be considered proper action and what not... :-)))

I didn't claim that anything was so because I said so.

David C. Ullrich

David C. Ullrich

Mar 23, 2002, 11:49:24 AM
On Sat, 23 Mar 2002 16:36:00 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Sat, 23 Mar 2002 16:38:17 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> < bla bla > insisting that they _could_ be considered correct.
>>
>To get you just right... : You claim my answer is WRONG?!!!

It's clearly a WRONG answer to the question that was intended, yes.

>( PLEASE - PLEASE!!!)

You're welcome.

>F.
>
>:-)))
>


David C. Ullrich

Franz Fritsche

Mar 23, 2002, 11:44:05 AM
>>
>> To get you just right... : You claim my answer is WRONG?!!!
>>
> It's clearly a WRONG answer to the question that was intended, yes.
>
Well, I appreciate this clear statement (free of any insult in any
form).

Well, I have to admit that I do think that my answer is a _valid_
solution. And if you think the opposite I can at least understand
your objection against it. ;-)

Sincerely,
F.

David C. Ullrich

Mar 23, 2002, 12:00:10 PM
On Sat, 23 Mar 2002 16:44:05 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>>>
>>> To get you just right... : You claim my answer is WRONG?!!!
>>>
>> It's clearly a WRONG answer to the question that was intended, yes.
>>
>Well, I appreciate this clear statement (free of any insult in any
>form).

You don't realize that when _you_ complain, even implicitly, about
other people being insulting, you induce waves of hilarious laughter
all over the planet? That's what happens, now you know (that's
another example of something that's true even though I can't
prove it.)

>Well, I have to admit that I do think that my answer is a _valid_
>solution. And if you think the opposite I can at least understand
>your objection against it. ;-)

I've answered all your questions. Now you answer mine: _Have_
you ever read any analysis? (Other than things like baby
textbooks, where the author is very careful to dot every i...)

>Sincerely,
>F.
>


David C. Ullrich

Franz Fritsche

Mar 23, 2002, 11:52:38 AM
On Sat, 23 Mar 2002 16:47:50 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> WHO could say anything against the statement "in a context like
> this we [normally -ff] define the value of x^x for x = 0 by continuity".
>

Look, Ullrich, it's not the statement in "isolation" that I question,
but the value of this statement as an "argument" against the
_possibility_ of setting x^x = 0 for x = 0.

Got me?

>
> I didn't say anything about what a mathematician _can_ do, I
> said something about what they _do_.
>

If so, WHY the hell are you questioning my solution?! It's just a
_possible_ solution, not "the obvious" (or "standard" or whatever)
solution.


>
> I didn't claim that anything was so because I said so.
>

Ok. I appreciate that. ( Even I would admit that you HAVE a lot of
knowledge in this field and can VERY OFTEN say what's fact and what's
not fact...; but I also would insist on the point that there certainly
are also "questions" where different _opinions_ are possible... ;-)

F.


Franz Fritsche

Mar 23, 2002, 12:02:05 PM
On Sat, 23 Mar 2002 17:00:10 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> You don't realize that when _you_ complain, even implicitly, about
> other people being insulting you induce waves of hilarious laughter
> all over the planet?
>

Are you serious? ;-) :-)


> ...now you know (that's another example of something
> that's true even though I can't prove it.)
>

Well, maybe, Ullrich, OUR personal troubles with each other arise from a
completely different USAGE of "language" (whatsoever?)

-------> *I* would call such a "thing" an _assertion_ (not more)!!!

( By "such a thing" I mean an instance of: "an example of something
that's true even though I can't prove it.")


Well "assertions" are helpful sometimes - especially in logic - but
certainly they should not be seen as (proven) facts...

>
> I've answered all your questions. Now you answer mine: _Have_
> you ever read any analysis?
>

You want to be funny? Or what?

F.

Franz Fritsche

Mar 23, 2002, 12:06:50 PM
>
> "an example of something that's true even though I can't prove it."
>

Well, maybe you had a Goedelian "sentence" in mind here? :-)

This would completely change things! :-)))

F.

Bengt Månsson

Mar 23, 2002, 12:09:58 PM
David C. Ullrich wrote:

> On Sat, 23 Mar 2002 15:08:03 GMT, in...@simple-line.de (Franz Fritsche)
> wrote:


[snip]

> find the minimum of x^x for x >= 0 it's clear to everyone
> but you that the value at the origin is supposed to be defined
> by continuity.


Sorry, but that isn't clear to me either. Surely the most reasonable way
to define 0^0 is 1 but not for the reason you mention.

What about this: Find the maximum of x^0 in the interval [-1,1] .


> David C. Ullrich

Bengt Månsson

Mar 23, 2002, 12:14:10 PM
Bengt Månsson wrote:

>
> What about this: Find the maximum of x^0 in the interval [-1,1] .

Ok, that was easy (typo). But then what about this:

Find the maximum of 0^x for x >= 0 .

Franz Fritsche

Mar 23, 2002, 12:19:10 PM
On Sat, 23 Mar 2002 17:09:58 GMT, Bengt =?ISO-8859-1?Q?M=E5nsson?=
<ben...@telia.com> wrote:

>>
>> find the minimum of x^x for x >= 0 - it's clear to everyone
>> but you that the value at the origin is supposed to be defined
>> by continuity.
>>
>
> Sorry, but that isn't clear to me either. Surely the most reasonable way
> to define 0^0 is 1 but not for the reason you mention.
>

So: proof failed by citing a counterexample... ;-)

F.


Bengt Månsson

Mar 23, 2002, 12:23:16 PM
Dave L. Renfro wrote:

> Lawrence V. Cipriani <l...@ww3.lucent.com>
> [sci.math 22 Mar 2002 18:45:58 GMT]
> http://mathforum.org/epigone/sci.math/claiglaxbloa/a7fu56$2...@nntpb.cb.lucent.com
>
> wrote
>

> which led to a lot of semi-mathematical follow-ups about
> 0^0.


Digressions, yes, but why "semi-mathematical"?

[snip]


BTW: Thanks for the list of links you provided about the original
poster's question.


> Dave L. Renfro

Bengt Månsson

Mar 23, 2002, 12:25:36 PM
Jan Kristian Haugland wrote:

> Bengt Månsson wrote:
>
> Sorry for causing you depression, but I find his
> "look at me, I can write a C program to count the
> number of solutions" style a _little_ annoying.
> And this was along the same lines.

So, don't read it. This is sci.math, it's not sci.math.research.

Franz Fritsche

Mar 23, 2002, 12:43:49 PM
On Sat, 23 Mar 2002 17:23:16 GMT, Bengt =?ISO-8859-1?Q?M=E5nsson?=
<ben...@telia.com> wrote:

>
> Digressions, yes, but why "semi-mathematical"?
>

For there seems to be a totally _valid_ mathematical "point" connected
with this "Digression". ( No one has mentioned it so far here... )

Well, of course nothing "revolutionary", but maybe worth mentioning. (?)

That is: The usual "method" of setting the derivative to zero just
delivers (at best) a LOCAL minimum (maximum); not necessarily a
"global" one!

So in this case when one sets 0^0 = 0 this matters!

Interestingly Dave L. Renfro just implicitly "mentioned" this fact...
( See his list of solutions!)
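The local-versus-global point can be made concrete; the following Python sketch is my own illustration of it, under the nonstandard convention 0^0 = 0 that the "solution" above uses:

```python
import math

# Under value-by-continuity (0^0 = 1) the stationary point x = 1/e gives
# the minimum of x^x on [0, oo).  Under the convention 0^0 = 0, the
# stationary point is only a LOCAL minimum: the endpoint beats it.
def xx(x):
    return 0.0 if x == 0 else x ** x

stationary_value = (1 / math.e) ** (1 / math.e)  # where the derivative vanishes
assert 0.69 < stationary_value < 0.70            # about 0.6922
assert xx(0.0) < stationary_value                # endpoint value 0 is smaller
```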

Sincerely,
F.

David Kastrup

Mar 23, 2002, 1:37:40 PM
Bengt Månsson <ben...@telia.com> writes:

> David C. Ullrich wrote:
>
> > On Sat, 23 Mar 2002 15:08:03 GMT, in...@simple-line.de (Franz Fritsche)
> > wrote:
>
>
> [snip]
>
> > find the minimum of x^x for x >= 0 it's clear to everyone
> > but you that the value at the origin is supposed to be defined
> > by continuity.
>
>
> Sorry, but that isn't clear to me either. Surely the most reasonable
> way to define 0^0 is 1 but not for the reason you mention.
>
> What about this: Find the maximum of x^0 in the interval [-1,1] .

Since 0^0 is undefined, we can clearly define it as 2, so the maximum
is at x=0. Sorry for employing Fritsche logic.

--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
Email: David....@t-online.de

David C. Ullrich

Mar 23, 2002, 1:59:18 PM
On Sat, 23 Mar 2002 15:38:35 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Fri, 22 Mar 2002 23:02:03 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> ... it follows from the chain rule in several variables that the derivative
>> of x^x is log(x) x^x + x x^(x-1).
>>
>Surely you will explain me how this applies to the case x = 0.

It doesn't.

>// Since the question assumed x^x to be defined for x >= 0, and you
>didn't mention any restrictions on the derivative of x^x above.

I assumed the reader was not an idiot.

If you want to find the minimum of f(x) for x >= 0 the derivative
of f at x = 0 is irrelevant.

>Thanx in advance!
>
>F.
>


David C. Ullrich

David C. Ullrich

Mar 23, 2002, 2:02:18 PM
On Sat, 23 Mar 2002 17:02:05 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Sat, 23 Mar 2002 17:00:10 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> You don't realize that when _you_ complain, even implicitly, about
>> other people being insulting you induce waves of hilarious laughter
>> all over the planet?
>>
>Are you serious? ;-) :-)
>
>
>> ...now you know (that's another example of something
>> that's true even though I can't prove it.)
>>
>We.., maybe Ullrich OUR personal troubles with each other arise from a
>completely different USAGE of "language" (whatsoever?)
>
>-------> *I* would call such a "thing" an _assertion_ (not more)!!!

Um, yes it's an assertion, just like everything I've ever said
to anyone and everything you've ever said to anyone. Good of you
to point out this deep fact.

>( By "such a thing" I mean an instance of: "an example of something
>that's true even though I can't prove it.")
>
>
>Well "assertions" are helpful sometimes - especially in logic - but
>certainly they should not be seen as (proven) facts...

Um, right. When I said that this was something I can't prove you
took that as a claim that I could prove it?

>> I've answered all your questions. Now you answer mine: _Have_
>> you ever read any analysis?
>>
>You want to be funny? Or what?

Was that a yes or a no?

>F.
>


David C. Ullrich

David C. Ullrich

unread,
Mar 23, 2002, 2:06:37 PM3/23/02
to
On Sat, 23 Mar 2002 17:09:58 GMT, Bengt =?ISO-8859-1?Q?M=E5nsson?=
<ben...@telia.com> wrote:

>David C. Ullrich wrote:
>
>> On Sat, 23 Mar 2002 15:08:03 GMT, in...@simple-line.de (Franz Fritsche)
>> wrote:
>
>
>[snip]
>
>> find the minimum of x^x for x >= 0 it's clear to everyone
>> but you that the value at the origin is supposed to be defined
>> by continuity.
>
>
>Sorry, but that isn't clear to me either.

Ok, then it's not clear to everyone but FF. It's true nonetheless.

>Surely the most reasonable way
>to define 0^0 is 1 but not for the reason you mention.
>
>What about this: Find the maximum of x^0 in the interval [-1,1] .

1. (Have you ever seen a polynomial described as the sum of a_n * x^n
for n=0 to N? Do you feel there's something controversial in
describing a polynomial that way, or that it raises questions about
whether a polynomial is continuous at the origin?)



>> David C. Ullrich
>>
>
>
>--
>
>___________________________________________________________________
>Bengt Månsson, Partille, Sweden http://www.algonet.se/~bengtmn/
>PhD Theoretical Physics, speciality Classical Fields and Relativity
>¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
>


David C. Ullrich

David C. Ullrich

Mar 23, 2002, 2:08:04 PM
On Sat, 23 Mar 2002 17:14:10 GMT, Bengt =?ISO-8859-1?Q?M=E5nsson?=
<ben...@telia.com> wrote:

>Bengt Månsson wrote:
>
>>
>> What about this: Find the maximum of x^0 in the interval [-1,1] .
>
>Ok, that's was easy (typo). But then what about this:
>
>Find the maximum of 0^x for x >= 0 .

_If_ that were an actual question that someone actually wanted
to know the answer to for some real reason the answer would
be 0, because the function is identically 0.



David C. Ullrich

David C. Ullrich

Mar 23, 2002, 2:13:06 PM
On Sat, 23 Mar 2002 16:52:38 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Sat, 23 Mar 2002 16:47:50 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> WHO could say anything against the statement "in a context like
>> this we [normally -ff] define the value of x^x for x = 0 by continuity".

No, I didn't write that. Don't say things that are not true.

>Look, Ullrich, its not the statement in "isolation" that I question,
>but the value of this statement to form an "argument" against the
>_possibility_ to set x^x = 0 for x = 0.
>
>Got me?
>
>>
>> I didn't say anything about what a mathematician _can_ do, I
>> said something about what they _do_.
>>
>If so. WHY the hell then are you questioning my solution?!

Because your solution was clearly WRONG (if intended as a solution
to the question that was obviously meant - if the point to your
"solution" was just to point out how smart you are because you
know that 0^0 is undefined then it was simply brilliant.)

> Its just a
>_possible_ solution, so not "the obvious" (or "standard" or whatever)
>solution.
>
>
>>
>> I didn't claim that anything was so because I said so.
>>
>Ok. I appreciate that. ( Even I would admit that you HAVE a lot of
>knowledge in this field and can VERY OFTEN say what's fact and what's
>not fact...; but I also would insist in the point that there certainly
>are also "questions" where different _opinions_ are possible... ;-)

We all know that you would insist that. If you thought that the
function in question was supposed to take the value 0 at the
origin you're wrong. Insist what you want.

You really _should_ write letters to everyone who's ever referred
to sin(z)/z as an entire function. Better get started right away;
it's going to be a lot of letters to write.

>F.
>
>


David C. Ullrich

Franz Fritsche

Mar 23, 2002, 2:09:39 PM
On Sat, 23 Mar 2002 18:59:18 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> If you want to find the minimum of f(x) for x >= 0 the derivative
> of f at x = 0 is irrelevant.
>

I admit.

Now regarding your statement:

>>>
>>> ... it follows from the chain rule in several variables that the derivative
>>> of x^x is log(x) x^x + x x^(x-1).
>>>

Surely you meant: "...the derivative of x^x for x > 0 is
log(x) x^x + x x^(x-1)"
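That formula (equivalently x^x (ln x + 1)) checks out against a central difference for x > 0; the following sketch is my own, not from the thread:

```python
import math

# d/dx x^x = ln(x) * x^x + x * x^(x-1) = x^x * (ln(x) + 1), valid for x > 0.
def f(x):
    return x ** x

def df_claimed(x):
    return math.log(x) * x ** x + x * x ** (x - 1)

h = 1e-6
errors = [abs((f(x + h) - f(x - h)) / (2 * h) - df_claimed(x))
          for x in (0.3, 1.0, 2.5)]
assert max(errors) < 1e-5
```

At x = 0 the formula is meaningless (log(0) is undefined), which is precisely the domain point at issue.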

Has nothing to do with being an "idiot" or not. But with being correct
or not - ESPECIALLY when discussing the question if giving x^x a
certain value for x = 0 or not.

Mentioning the domain where a certain function (in this case the
derivative of x^x) is defined is analysis basics...; don't you think
so?

F.
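The formula under discussion, d/dx x^x = log(x) x^x + x x^(x-1) = x^x (log x + 1) for x > 0, is easy to check against a finite difference. A minimal Python sketch (not posted in the thread; the function names are mine):

```python
import math

def f(x):
    # x^x for x > 0, computed as exp(x * ln x)
    return math.exp(x * math.log(x))

def df(x):
    # closed form: log(x) * x^x + x * x^(x-1) = x^x * (log(x) + 1)
    return f(x) * (math.log(x) + 1.0)

def numeric_df(x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

for x in (0.2, 0.5, 1.0, 2.0):
    assert abs(df(x) - numeric_df(x)) < 1e-5
```

Note the formula only makes sense for x > 0, which is exactly the point being argued.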

Franz Fritsche

unread,
Mar 23, 2002, 2:12:06 PM3/23/02
to
On 23 Mar 2002 19:37:40 +0100, David Kastrup
<David....@t-online.de> wrote:

>
> Since 0^0 is undefined, we can clearly define it as 2, so the maximum

> is at x=0. (...)
>
And where is the "problem" now???

F.

David C. Ullrich

unread,
Mar 23, 2002, 2:29:03 PM3/23/02
to
On Sat, 23 Mar 2002 19:09:39 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Sat, 23 Mar 2002 18:59:18 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> If you want to find the minimum of f(x) for x >= 0 the derivative
>> of f at x = 0 is irrelevant.
>>
>I admit.
>
>Now regarding your statement:
>
>>>>
>>>> ... it follows from the chain rule in several variables that the derivative
>>>> of x^x is log(x) x^x + x x^(x-1).
>>>>
>
>Surely you meant: "...the derivative of x^x for x > 0 is
>log(x) x^x + x x^(x-1)"

That's correct. It's also _obvious_ that that's what I meant.

>Has nothing to do with being an "idiot" or not. But with being correct
>or not - ESPECIALLY when discussing the question if giving x^x a
>certain value for x = 0 or not.

But when I wrote what I did about that derivative I was _not_
discussing the question of the definition of 0^0.

>Mentioning the domain where a certain function (in this case the
>derivative of x^x) is defined is analysis basics...; don't you think
>so?

Not when the domain is _clear_, no. Actual people in the actual
world do _not_ always _explicitly_ mention all the conditions
that are required to make what they're saying correct, when
those conditions are _obvious_.

>F.
>


David C. Ullrich

Franz Fritsche

unread,
Mar 23, 2002, 2:25:34 PM3/23/02
to
On Sat, 23 Mar 2002 19:06:37 GMT, ull...@math.okstate.edu (David C.
Ullrich) wrote:

>
> Have you ever seen a polynomial described as the sum of a_n * x^n
> for n=0 to N? Do you feel there's something controversial in
> describing a polynomial that way, or that it raises questions about
> whether a polynomial is continuous at the origin?
>

David, you may find this _very astonishing_ BUT this WAS (and CAN BE)
questioned...:

Quote from the math FAQ:

"Some people feel that giving a value to a function with an essential
discontinuity at a point, such as x^y at (0,0), is an inelegant patch
and should not be done. Others point out correctly that in
mathematics, usefulness and consistency are very important, and that
under these parameters 0^0 = 1 is the natural choice."

So some mathematicians feel that it is _necessary_ to ARGUE for
0^0 = 1:

"Some textbooks leave the quantity 0^0 undefined, because the
functions x^0 and 0^x have different limiting values when x decreases
to 0. But this is a mistake. We must define x^0 = 1 for all x, if the
binomial theorem is to be valid when x=0, y=0, and/or x=-y. The
theorem is too important to be arbitrarily restricted! By contrast,
the function 0^x is quite unimportant." (R. Graham, D. Knuth, O.
Patashnik, Concrete Mathematics, p.162)

And (unknown source):

"But no, no, ten thousand times no! Anybody who wants the binomial
theorem (x + y)^n = sum_(k = 0...n) (n over k) x^k y^(n - k) to hold
for at least one nonnegative integer n must believe that 0^0 = 1, for
we can plug in x = 0 and y = 1 to get 1 on the left and 0^0 on the
right."

From the math FAQ.

Sincerely,
F.
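The binomial-theorem argument quoted above can be checked mechanically: with x = 0 and y = 1 the right-hand sum collapses to the single k = 0 term, which is C(n,0) * 0^0 * 1^n, so the identity forces 0^0 = 1. A small Python sketch (Python's own `0 ** 0` happens to evaluate to 1, which is exactly the convention Graham/Knuth/Patashnik argue for):

```python
from math import comb

def binomial_sum(x, y, n):
    # sum_{k=0..n} C(n,k) x^k y^(n-k)
    return sum(comb(n, k) * x**k * y**(n - k) for k in range(n + 1))

# (x + y)^n must equal the sum; with x = 0, y = 1 only the k = 0
# term survives, and that term is C(n,0) * 0^0 * 1^n = 0^0.
for n in range(5):
    assert binomial_sum(0, 1, n) == (0 + 1) ** n == 1
```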

Bengt Månsson

unread,
Mar 23, 2002, 2:32:04 PM3/23/02
to
David C. Ullrich wrote:

> On Sat, 23 Mar 2002 17:14:10 GMT, Bengt Månsson
> <ben...@telia.com> wrote:
>
>>Bengt Månsson wrote:
>>
>>>What about this: Find the maximum of x^0 in the interval [-1,1] .
>>>
>>Ok, that was easy (typo). But then what about this:

>>Find the maximum of 0^x for x >= 0 .

> _If_ that were an actual question that someone actually wanted
> to know the answer to for some real reason the answer would
> be 0, because the function is identically 0.


Not if you define 0^0 = 1.


> David C. Ullrich

Franz Fritsche

unread,
Mar 23, 2002, 2:31:44 PM3/23/02
to
>>
>> If so. WHY the hell then are you questioning my solution?!
>>
> Because your solution was clearly WRONG ... <bla & bla>
>
Hell, don't talk bullshit; and tell me what the actual _error_ is.


> if intended as a solution to the question that was obviously meant
>

Since when does second-guessing count in mathematics??!
???


> if the point to your "solution" was just to <bla bla>
>
C'mon don't just talk shit, stupid.

F.

Franz Fritsche

unread,
Mar 23, 2002, 2:33:59 PM3/23/02
to
>>
>> if intended as a solution to the question that was obviously meant ...

>>
> Since when does second-guessing count in mathematics??!
> ???
>

Especially since the question was hardly meant seriously. [ But
that's also second-guessing now... :-))) ]

F.

David C. Ullrich

unread,
Mar 23, 2002, 2:47:05 PM3/23/02
to
On Sat, 23 Mar 2002 19:25:34 GMT, in...@simple-line.de (Franz Fritsche)
wrote:

>On Sat, 23 Mar 2002 19:06:37 GMT, ull...@math.okstate.edu (David C.
>Ullrich) wrote:
>
>>
>> Have you ever seen a polynomial described as the sum of a_n * x^n
>> for n=0 to N? Do you feel there's something controversial in
>> describing a polynomial that way, or that it raises questions about
>> whether a polynomial is continuous at the origin?
>>
>
>David, you may find this _very astonishing_ BUT this WAS (and CAN BE)
>questioned...:

What I still find astonishing is the way your replies are so
often utterly irrelevant to what you're replying to. Nothing
in any of the quotes below raises questions as to whether
a polynomial is continuous at the origin.

What else is astonishing is you don't seem to have noticed
that the people who _are_ saying that 0^0 should be defined
below are all saying it should equal 1.

>Quote from the math FAQ:
>
>"Some people feel that giving a value to a function with an essential
>discontinuity at a point, such as x^y at (0,0), is an inelegant patch
>and should not be done. Others point out correctly that in
>mathematics, usefulness and consistency are very important, and that
>under these parameters 0^0 = 1 is the natural choice."
>
>So some mathematicians feel that it is _necessary_ to ARGUE for
>0^0 = 1:

A moderately surprising aspect of all this is you haven't
noticed that I have _not_ been claiming in _any_ of this
that 0^0 should be defined to equal 1.

The function x^y (defined initially for x, y > 0) does not
have a limit at (0,0). The function x^x (x > 0) _does_ have
a limit at 0. That's why it's clear that the value of the
function at the origin in the original question was supposed
to be 1. Saying that does not say that 0^0 should be defined
to equal 1 in general.

>"Some textbooks leave the quantity 0^0 undefined, because the
>functions x^0 and 0^x have different limiting values when x decreases
>to 0. But this is a mistake. We must define x^0 = 1 for all x, if the
>binomial theorem is to be valid when x=0, y=0, and/or x=-y. The
>theorem is too important to be arbitrarily restricted! By contrast,
>the function 0^x is quite unimportant." (R. Graham, D. Knuth, O.
>Patashnik, Concrete Mathematics, p.162)
>
>And (unknown source):
>
>"But no, no, ten thousand times no! Anybody who wants the binomial
>theorem (x + y)^n = sum_(k = 0...n) (n over k) x^k y^(n - k) to hold
>for at least one nonnegative integer n must believe that 0^0 = 1, for
>we can plug in x = 0 and y = 1 to get 1 on the left and 0^0 on the
>right."
>
>From the math FAQ.

I've seen it before. What in the world is your point here, insisting
that you're free to define 0^0 = 0 and citing all these references
that say why they think it should be 1?
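Ullrich's distinction here, x^x having a limit at 0 while the two-variable x^y does not, can be illustrated numerically. A sketch (not from the thread; the curve x = exp(-1/y) is a standard path along which x^y stays at 1/e):

```python
import math

# x^x -> 1 as x -> 0+, so 1 is the natural value for the one-variable problem
for x in (1e-3, 1e-6, 1e-9):
    assert abs(math.exp(x * math.log(x)) - 1.0) < 1e-2

# but x^y has no limit at (0, 0): along the path x = exp(-1/y)
# the value is identically 1/e, not 1
for y in (0.5, 0.1, 0.01):
    x = math.exp(-1.0 / y)
    assert abs(x ** y - math.exp(-1.0)) < 1e-9

# and along the y-axis direction, 0^y = 0 for every y > 0
assert 0.0 ** 0.01 == 0.0
```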

Bengt Månsson

unread,
Mar 23, 2002, 2:40:20 PM3/23/02
to
David Kastrup wrote:

> Bengt Månsson <ben...@telia.com> writes:
>
>>David C. Ullrich wrote:
>>
>>>On Sat, 23 Mar 2002 15:08:03 GMT, in...@simple-line.de (Franz Fritsche)
>>>wrote:
>>
>>[snip]
>>
>>>find the minimum of x^x for x >= 0 it's clear to everyone
>>>but you that the value at the origin is supposed to be defined
>>>by continuity.

>>Sorry, but that isn't clear to me either. Surely the most reasonable
>>way to define 0^0 is 1 but not for the reason you mention.

>>What about this: Find the maximum of x^0 in the interval [-1,1] .
>>
>
> Since 0^0 is undefined, we can clearly define it as 2, so the maximum
> is at x=0. Sorry for employing Fritsche logic.

There was a typo, sorry, though I corrected it 5 minutes later.

I meant 0^x in the interval x >= 0. On the other hand that shouldn't
change your answer if you define 0^0 as 2 for this case.


But if the value in cases like this is always supposed to be defined by
continuity (which is what I'm questioning) then the answer should be 0.
If, on the other hand, we define 0^0 as 1 once and for all the answer
is 1. (BTW: What is "Fritsche logic"?)

Franz Fritsche

unread,
Mar 23, 2002, 2:42:52 PM3/23/02
to

Can't see what this "question" has to do with our "problem" here.

You are "mixing" things, Mr. Ullrich. That's not reasonable behavior.
PLEASE stay focused (if _in any way_ possible!)

F.

David C. Ullrich

unread,
Mar 23, 2002, 2:57:25 PM3/23/02
to
On Sat, 23 Mar 2002 19:32:04 GMT, Bengt Månsson
<ben...@telia.com> wrote:

>David C. Ullrich wrote:
>
>> On Sat, 23 Mar 2002 17:14:10 GMT, Bengt =?ISO-8859-1?Q?M=E5nsson?=
>> <ben...@telia.com> wrote:
>>
>>>Bengt Månsson wrote:
>>>
>>>>What about this: Find the maximum of x^0 in the interval [-1,1] .
>>>>
>>>Ok, that was easy (typo). But then what about this:
>
>>>Find the maximum of 0^x for x >= 0 .
>
>> _If_ that were an actual question that someone actually wanted
>> to know the answer to for some real reason the answer would
>> be 0, because the function is identically 0.
>
>
>Not if you define 0^0 = 1.

Of course not if you define 0^0 = 1. When did I do that?
Where does it say in any standard reference that _the_
definition of 0^0 _is_ 1?

It's very standard, in even mathematics that's even the
tiniest bit informal, which includes a _lot_ of published
work by esteemed mathematicians, to take the value of
a function at a removable singularity to be defined by
continuiity. People _do_ talk about the entire function
f(z) = sin(z)/z, saying that f(0) = 0 (without including
that explicitly in the definition of f.) When people
write that way, as _many_ people _do_ in _many_ contexts,
they are _not_ defining 0/0 = 1! They will also talk about
the entire function g(z) = sin(2z)/z, and say that g(0) = 2.

No, the sci.math faq is not quite a standard reference,
and in any case smart people saying why they feel the
definition of 0^0 _should_ be 1 is not the same as
saying that the definition _is_ 1, universally, regardless
of the context. In fact a bare 0^0 is undefined - that's
true even though the 0^0 that appears in many formulas
when you set x = 0 is taken to be 1. Actual mathematical
notation as actually used by actual mathematicians is
not a precise formal system.

>> David C. Ullrich
>
>--
>
>___________________________________________________________________
>Bengt Månsson, Partille, Sweden http://www.algonet.se/~bengtmn/
>PhD Theoretical Physics, speciality Classical Fields and Relativity
>¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
>


David C. Ullrich
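The sin(z)/z and sin(2z)/z examples above are ordinary removable singularities: the value at 0 is the limit, not a definition of "0/0". A quick numerical sketch of the point (illustrative Python, restricted to real arguments):

```python
import math

def f(x):
    # sin(x)/x, extended by continuity at 0 (limit is 1)
    return math.sin(x) / x if x != 0.0 else 1.0

def g(x):
    # sin(2x)/x, extended by continuity at 0 (limit is 2, not "0/0 = 1")
    return math.sin(2.0 * x) / x if x != 0.0 else 2.0

for h in (1e-2, 1e-4, 1e-6):
    assert abs(f(h) - f(0.0)) < 1e-3   # f is continuous at 0 with value 1
    assert abs(g(h) - g(0.0)) < 1e-3   # g is continuous at 0 with value 2
```

The fact that f(0) = 1 while g(0) = 2 is exactly why the extension cannot be read as a definition of 0/0.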

Franz Fritsche

unread,
Mar 23, 2002, 2:52:20 PM3/23/02
to
On Sat, 23 Mar 2002 19:40:20 GMT, Bengt Månsson
<ben...@telia.com> wrote:

>
> There was a typo, sorry, though I corrected it 5 minutes later.
>

You DID correct it.

But some people seem to prefer to answer the irrelevant part of a
message - just to be able to make some "point". :-(((

>
> I meant 0^x in the interval x >= 0. On the other hand that shouldn't
> change your answer if you define 0^0 as 2 for this case.
>

Well, to be honest, if you do not define 0^x for x = 0, surely an
arbitrary value could be chosen for it. Even 1. ;-)

F.

David Kastrup

unread,
Mar 23, 2002, 3:00:53 PM3/23/02
to
ull...@math.okstate.edu (David C. Ullrich) writes:

> On Sat, 23 Mar 2002 17:14:10 GMT, Bengt Månsson
> <ben...@telia.com> wrote:
>
> >Bengt Månsson wrote:
> >
> >>
> >> What about this: Find the maximum of x^0 in the interval [-1,1] .
> >
> >Ok, that's was easy (typo). But then what about this:
> >
> >Find the maximum of 0^x for x >= 0 .
>
> _If_ that were an actual question that someone actually wanted
> to know the answer to for some real reason the answer would
> be 0, because the function is identically 0.

In my book, it is 0 for x>0, but for x=0 it is 1.

Bengt Månsson

unread,
Mar 23, 2002, 2:57:36 PM3/23/02
to
David C. Ullrich wrote:

>
> Ok, then it's not clear to everyone but FF. It's true nonetheless.
>
> 1. (Have you ever seen a polynomial described as the sum of a_n * x^n
> for n=0 to N? Do you feel there's something controversial in
> describing a polynomial that way, or that it raises questions about
> whether a polynomial is continuous at the origin?)


Not if we define 0^0 = 1. Then a polynomial defined as an expression of
the form sum( a_n*x^n, n=0, ..., N ) will define a function continuous
for all (real or complex) x.

However this doesn't justify your claim that "the value at the origin is
supposed to be defined by continuity" in general.

David Kastrup

unread,
Mar 23, 2002, 3:15:26 PM3/23/02
to
Bengt Månsson <ben...@telia.com> writes:

> David C. Ullrich wrote:
>
> > Ok, then it's not clear to everyone but FF. It's true nonetheless.
>
> > 1. (Have you ever seen a polynomial described as the sum of a_n * x^n
> > for n=0 to N? Do you feel there's something controversial in
> > describing a polynomial that way, or that it raises questions about
> > whether a polynomial is continuous at the origin?)
>
>
> Not if we define 0^0 = 1. Then a polynomial defined as an expression
> of the form sum( a_n*x^n, n=0, ..., N ) will define a function
> continuous for all (real or complex) x.
>
> However this doesn't justify your claim that "the value at the origin
> is supposed to be defined by continuity" in general.

In particular, if the best one can achieve is one-sided continuity,
as is the case with 0^x as opposed to x^0.

David C. Ullrich

unread,
Mar 23, 2002, 3:29:23 PM3/23/02
to
On Sat, 23 Mar 2002 19:57:36 GMT, Bengt Månsson
<ben...@telia.com> wrote:

>David C. Ullrich wrote:
>
>>
>> Ok, then it's not clear to everyone but FF. It's true nonetheless.
>
>> 1. (Have you ever seen a polynomial described as the sum of a_n * x^n
>> for n=0 to N? Do you feel there's something controversial in
>> describing a polynomial that way, or that it raises questions about
>> whether a polynomial is continuous at the origin?)
>
>
>Not if we define 0^0 = 1. Then a polynomial defined as an expression of
>the form sum( a_n*x^n, n=0, ..., N ) will define a function continuous
>for all (real or complex) x.
>
>However this doesn't justify your claim that "the value at the origin is
>supposed to be defined by continuity" in general.

It doesn't _prove_ it - this is not the sort of thing that's
susceptible to proof. But there's also the example of sin(z)/z
equalling 1 when z = 0 (together with the example of sin(2z)/z
equalling 2 when z = 0, which shows that the reason sin(z)/z
is said to equal 1 when z = 0 is not because 0/0 has been
defined to be 1.)

>>David C. Ullrich
>
>


David C. Ullrich

Bengt Månsson

unread,
Mar 23, 2002, 3:21:54 PM3/23/02
to
David C. Ullrich wrote:

> On Sat, 23 Mar 2002 19:32:04 GMT, Bengt Månsson
> <ben...@telia.com> wrote:


[snip]


>>>Find the maximum of 0^x for x >= 0 .
>>>>
>>>_If_ that were an actual question that someone actually wanted
>>>to know the answer to for some real reason the answer would
>>>be 0, because the function is identically 0.


>>Not if you define 0^0 = 1.
>>
> Of course not if you define 0^0 = 1. When did I do that?
> Where does it say in any standard reference that _the_
> definition of 0^0 _is_ 1?

So you do not define 0^0 once and for all but use different values
depending on context. Or something equivalent, like assuming polynomial
functions or 0^x continuous for all x. Right? That's fine with me. What
I'm questioning is just whether it is common practice to assume "the value
at the origin is supposed to be defined by continuity" quite generally,
i.e. for any function, like x^x or 0^x.


> People _do_ talk about the entire function
> f(z) = sin(z)/z, saying that f(0) = 0 (without including
> that explicitly in the definition of f.)


( f(0) = 1 )

Yes, but then it is known from the context that they are talking about
_entire functions_. The poster who asked about the minimum of the
function x^x, x >= 0 didn't mention _continuous_ functions.

On the other hand lets's change context to elementary real analysis.
Then, if you ask for the greatest value of the function f(x) = sin(x)/x
for real x, I would say that there is none. The function takes all
values in [M,1[ where M is a certain negative number (appr
-0.2172336282). I've never heard that f(0) = 1 should be plugged in that
case.
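Bengt's constant M, the infimum of sin(x)/x on the reals, is the value at the first positive solution of tan x = x (near x = 4.4934). A crude grid search reproduces the figure he quotes (a sketch, not part of the thread):

```python
import math

# coarse grid search for the minimum of sin(x)/x over (0, 10];
# the global minimum sits near x = 4.4934, where tan(x) = x
xs = (i * 1e-4 for i in range(1, 100001))
m = min(math.sin(x) / x for x in xs)

assert abs(m - (-0.2172336282)) < 1e-6
# the supremum 1 is approached as x -> 0 but never attained for x != 0
assert m < 0 and math.sin(1e-4) / 1e-4 < 1.0
```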

Franz Fritsche

unread,
Mar 23, 2002, 3:29:37 PM3/23/02
to
On Sat, 23 Mar 2002 20:21:54 GMT, Bengt Månsson
<ben...@telia.com> wrote:

>
> Yes, but then it is known from the context that they are talking about
> _entire functions_. The poster who asked about the minimum of the
> function x^x, x >= 0 didn't mention _continuous_ functions.
>

Thanx for pointing this out Bengt - I am already tired of wasting my
time on pointless "arguments" with Mr. Ullrich about things that are
as simple (and clear) as that... [ As if _anybody_ had ever questioned
the important case of _entire functions_! ]

F.

Bengt Månsson

unread,
Mar 23, 2002, 3:39:50 PM3/23/02
to
David C. Ullrich wrote:

> On Sat, 23 Mar 2002 19:57:36 GMT, Bengt Månsson
> <ben...@telia.com> wrote:
>
>>Not if we define 0^0 = 1. Then a polynomial defined as an expression of
>>the form sum( a_n*x^n, n=0, ..., N ) will define a function continuous
>>for all (real or complex) x.
>>
>>However this doesn't justify your claim that "the value at the origin is
>>supposed to be defined by continuity" in general.
>
> It doesn't _prove_ it - this is not the sort of thing that's
> susceptible to proof.


I said "justify".

> But there's also the example of sin(z)/z
> equalling 1 when z = 0 (together with the example of sin(2z)/z
> equalling 2 when z = 0, which shows that the reason sin(z)/z
> is said to equal 1 when z = 0 is not because 0/0 has been
> defined to be 1.)


Hmm... You don't just say that the domain of the function f(z) = sin(z)/z
is extended to all of C by defining f(0) = 1. You say, literally,
"sin(z)/z equalling 1 when z = 0", i.e. sin(0)/0 = 1, and also
sin(2*0)/0 = 2. Since sin(2*0) = sin(0) that should prove that 1 = 2.

David C. Ullrich

unread,
Mar 23, 2002, 3:50:28 PM3/23/02
to
On Sat, 23 Mar 2002 20:21:54 GMT, Bengt Månsson
<ben...@telia.com> wrote:

>David C. Ullrich wrote:
>
>> On Sat, 23 Mar 2002 19:32:04 GMT, Bengt Månsson
>> <ben...@telia.com> wrote:
>
>
>[snip]
>
>
>>>>Find the maximum of 0^x for x >= 0 .
>>>>>
>>>>_If_ that were an actual question that someone actually wanted
>>>>to know the answer to for some real reason the answer would
>>>>be 0, because the function is identically 0.
>
>
>>>Not if you define 0^0 = 1.
>>>
>> Of course not if you define 0^0 = 1. When did I do that?
>> Where does it say in any standard reference that _the_
>> definition of 0^0 _is_ 1?
>
>
>So you do not define 0^0 once and for all but use different values
>depending on context. Or something equivalent, like assuming polynomial
>functions or 0^x continuous for all x. Right? That's fine with me.

Great.

>What
>I'm questioning is just whether it is common practice to assume "the value
>at the origin is supposed to be defined by continuity" quite generally,
>i.e. for any function, like x^x or 0^x.

Of course the value of a function at a point is not _always_ supposed
to be defined by continuity. In a context where the point was showing
that you understood that there is such a thing as a discontinuous
function, or a formula which is undefined at a point, taking the value
to be defined by continuity would be a blunder, totally missing the
point. On the other hand, in the context of an informal newsgroup
posting asking for the minimum of x^x for x >= 0, doing anything
other than taking the value at the origin to be defined by continuity
is just _silly_.

>> People _do_ talk about the entire function
>> f(z) = sin(z)/z, saying that f(0) = 0 (without including
>> that explicitly in the definition of f.)
>
>
>( f(0) = 1 )
>
>Yes, but then it is known from the context that they are talking about
>_entire functions_.

People talk this way all the time even without saying they're talking
about entire functions, just like people talk about the function
sum_0^N a_n x^n and expect the reader to realize they're talking
about a polynomial, not a function which is undefined at the origin.

>The poster who asked about the minimum of the
>function x^x, x >= 0 didn't mention _continuous_ functions.
>
>On the other hand lets's change context to elementary real analysis.
>Then, if you ask for the greatest value of the function f(x) = sin(x)/x
> for real x, I would say that there is none.

What the answer is depends on the _context_, because the meaning
of the question depends on the context. In some contexts asking
the _question_ "what is the greatest value of sin(x)/x for real
x" would be erroneous. In some contexts the answer would be
that there is none, and in some the answer would be that the
largest value is 1.

If "elementary real analysis" means a class in elementary real
analysis I'd say the question was _very_ poorly worded, at the
very least. If I were taking a test in elementary real analysis
I think I'd point out the ambiguity in the question and give
more than one answer, saying that if the question is meant
one way the answer is this, while if it's meant the other
way the answer is this.

The point to these marks on the page is to try to communicate.
Answering the question "what is the minimum of x^x for x >= 0"
with "0, because we can define 0^0 = 0" is not attempting to
communicate with the person asking the question. (Except
maybe in a context where it seems likely the person asking
the question does not realize there are problems with the
definition of 0^0, which doesn't seem likely here.)

>The function takes all
>values in [M,1[ where M is a certain negative number (appr
>-0.2172336282). I've never heard that f(0) = 1 should be plugged in that
>case.
>
>> David C. Ullrich
>


David C. Ullrich

Denis Feldmann

unread,
Mar 23, 2002, 3:39:49 PM3/23/02
to

"Franz Fritsche" <in...@simple-line.de> a écrit dans le message news:
3c9ce477...@news.t-online.de...

> On Sat, 23 Mar 2002 20:21:54 GMT, Bengt =?ISO-8859-1?Q?M=E5nsson?=
> <ben...@telia.com> wrote:
>
> >
> > Yes, but then it is known from the context that they are talking about
> > _entire functions_. The poster who asked about the minimum of the
> > function x^x, x >= 0

Was clearly asking, so was not interested in the answer 0, given *because*
it would be *possible* to define f as 0 at 0. So context here was
"probably" to give as answer exp(1/e). Anyway, all this is irrelevant, and
you all know it. I notice that many of the "arguing" contributors were unable
to answer the "primitive of x^x" question, nor were they polite enough to
notice the demonstration of the Liouville theorem which could have taught
them something.

didn't mention _continuous_ functions.
> >
>
> Thanx for pointing this out Bengt - I am tired already to waste my
> time with pointless "arguments" with Mr. Ullrich about things that are
> simple (and clear) as that... [ As if _anybody_ had ever questioned
> the important case of _entire functions_! ]

Are you acquainted with Mr Lounesto, I wonder?

>
> F.
>


David C. Ullrich

unread,
Mar 23, 2002, 4:02:10 PM3/23/02
to
On Sat, 23 Mar 2002 20:39:50 GMT, Bengt Månsson
<ben...@telia.com> wrote:

>David C. Ullrich wrote:
>
>> On Sat, 23 Mar 2002 19:57:36 GMT, Bengt Månsson
>> <ben...@telia.com> wrote:
>>
>>>Not if we define 0^0 = 1. Then a polynomial defined as an expression of
>>>the form sum( a_n*x^n, n=0, ..., N ) will define a function continuous
>>>for all (real or complex) x.
>>>
>>>However this doesn't justify your claim that "the value at the origin is
>>>supposed to be defined by continuity" in general.
>>
>> It doesn't _prove_ it - this is not the sort of thing that's
>> susceptible to proof.
>
>
>I said "justify".
>
> > But there's also the example of sin(z)/z
> > equalling 1 when z = 0 (together with the example of sin(2z)/z
> > equalling 2 when z = 0, which shows that the reason sin(z)/z
> > is said to equal 1 when z = 0 is not because 0/0 has been
> > defined to be 1.)
>
>
>Hmm... You don't just say that the domain of the function f(z) = sin(z)/z
>is extended to all of C by defining f(0) = 1. You say, literally,
>"sin(z)/z equalling 1 when z = 0", i.e. sin(0)/0 = 1, and also
>sin(2*0)/0 = 2. Since sin(2*0) = sin(0) that should prove that 1 = 2.

I don't think I've ever seen anyone say "sin(0)/0 = 1". But
people _do_ refer to the function f(z) = sin(z)/z, without
saying anything about how it should be defined at the origin,
and then take f(0) = 1.

That's because everyone knows what they mean when they
say "f(z) = sin(z)/z". Just like I would have thought
everyone would understand what was meant by "find the
minimum of x^x for x >= 0."

>> David C. Ullrich
>


David C. Ullrich

David C. Ullrich

unread,
Mar 23, 2002, 4:06:43 PM3/23/02
to
On Sat, 23 Mar 2002 21:39:49 +0100, "Denis Feldmann"
<denis.f...@wanadoo.fr> wrote:

>
>"Franz Fritsche" <in...@simple-line.de> a écrit dans le message news:
>3c9ce477...@news.t-online.de...
>> On Sat, 23 Mar 2002 20:21:54 GMT, Bengt Månsson
>> <ben...@telia.com> wrote:
>>
>> >
>> > Yes, but then it is known from the context that they are talking about
>> > _entire functions_. The poster who asked about the minimum of the
>> > function x^x, x >= 0
>
>Was clearly asking, so was not interested in the answer 0 ,given *because*
>it would be *possible* to define f as 0 at 0 . So context here was
>"probably" to give as answer exp(1/e). Anyway, all this is irrelevant, and
>you all know it. I notice that many of the "arguing" contributors were unable
>to answer the "primitive of x^x" question, nor were they polite enough to
>notice the demonstration of the Liouville theorem which could have taught
>them something.

I was mystified by the last remark for a few seconds. Then I recalled
that Liouville proved more than one theorem...

>didn't mention _continuous_ functions.
>> >
>>
>> Thanx for pointing this out Bengt - I am tired already to waste my
>> time with pointless "arguments" with Mr. Ullrich about things that are
>> simple (and clear) as that... [ As if _anybody_ had ever questioned
>> the important case of _entire functions_! ]
>
>Are you acquainted with Mr Lounesto, I wonder?
>
>>
>> F.
>>
>
>


David C. Ullrich

Dave L. Renfro

unread,
Mar 23, 2002, 4:13:04 PM3/23/02
to
Bengt Månsson <ben...@telia.com>
[sci.math Sat, 23 Mar 2002 17:23:16 GMT]
http://mathforum.org/epigone/sci.math/claiglaxbloa/3C9CBA47...@telia.com

wrote (in part, in response to an earlier post of mine):

>> which led to a lot of semi-mathematical follow-ups about
>> 0^0.
>
> Digressions, yes, but why "semi-mathematical"?

Well, I wanted to write "non-mathematical", but I decided to
be a bit more diplomatic. I guess I've read too many posts on
this topic over the past few years . . .

[[ But (Oh no, I've gone and done it again!
<http://mathforum.org/epigone/sci.math/shenkhazho>)
it's certainly a lot more mathematical than that
weird "GIVE OPIE THE OSCAR" post I responded to
earlier today. ]]

There is some mathematics involved with all this 0^0 stuff,
but it usually gets pushed into the background whenever the
topic comes up. For example, what do we mean by 9^3? It means
9*9*9. O-K, so what about 9^(1/2)? To be consistent with the
laws of exponents, we let this be 3. [Something that often gets
missed at this point is that it's easy to show this definition
is the only one possible if you want consistency with the laws
of exponents, but it still remains to be proved that *all* the
other various exponent laws, with all the various other possible
numerical inputs, will continue to work.] But now we have to be
careful. We have two operational definitions of something like
9^(12/4). We could see 12/4 as a natural number and use our
first method, or we could see 12/4 as a rational number and
apply the second method. Do we get the same thing? Well, this
time we do, but what about in general?

If we're considering the exponent as a rational number, then
12/4 equals 3, and the methods better give the same result or
else we don't have a well-defined operation. But I can imagine
some situations where we'd want to distinguish 12/4 from 3
(such as when we're constructing the rational numbers from the
integers), and if this distinction is important for what we're
doing, then it wouldn't matter as much if the two methods agree
or not, since in these cases "12/4" would be a mathematically
distinct object from "3".

I could continue and discuss what 9^(sqrt 2) means and whether
our definition for irrational exponents is consistent with
9^(sqrt 9) = 9^(12/4) = 9^3, but hopefully this will give you
the idea of what I'm driving at. Namely, in defining 0^0 it is
crucial to be explicit about whether we're looking at the
exponent '0' as a natural number, a rational number, or a
real number.

Dave L. Renfro
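The consistency Renfro describes, the natural-number, rational, and real definitions of the power all agreeing on common inputs, can be spot-checked. A sketch (the three `pow_*` helpers are illustrative names, not standard library functions, and only handle b > 0 with nonnegative numerators):

```python
import math
from fractions import Fraction

def pow_nat(b, n):
    # natural-number exponent: repeated multiplication
    r = 1.0
    for _ in range(n):
        r *= b
    return r

def pow_rat(b, q):
    # rational exponent p/q (b > 0): the q-th root of b^p
    q = Fraction(q)
    return pow_nat(b, q.numerator) ** (1.0 / q.denominator)

def pow_real(b, x):
    # real exponent (b > 0): exp(x * ln b)
    return math.exp(x * math.log(b))

# 9^3, 9^(12/4) and exp(3 ln 9) must all agree, as Renfro says
assert pow_nat(9.0, 3) == 729.0
assert abs(pow_rat(9.0, Fraction(12, 4)) - 729.0) < 1e-9
assert abs(pow_real(9.0, 3.0) - 729.0) < 1e-9
# and 9^(1/2) = 3 is forced by consistency with the exponent laws
assert abs(pow_rat(9.0, Fraction(1, 2)) - 3.0) < 1e-12
```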

Bengt Månsson

unread,
Mar 23, 2002, 5:15:16 PM3/23/02
to
Denis Feldmann wrote:

> "Franz Fritsche" <in...@simple-line.de> a écrit dans le message news:
> 3c9ce477...@news.t-online.de...
>

>>On Sat, 23 Mar 2002 20:21:54 GMT, Bengt Månsson
>><ben...@telia.com> wrote:
>
> Was clearly asking, so was not interested in the answer 0 ,given *because*
> it would be *possible* to define f as 0 at 0 .


Probably not, but that doesn't settle the question as to whether x^x,
x>=0 should be assumed continuous for x = 0.

> So context here was
> "probably" to give as answer exp(1/e).


Probably not. Probably exp(-1/e).

>>Anyway, all this is irrelevant, and you all know it.


Irrelevant to the original question yes (since whatever value you choose
for 0^0 doesn't affect the integral).

> I notice that many of the "arguing" contributors were unable
> to answer the "pritive of x^x" question, nor weree they polite enough to
> notice the demonstration of the Liouville theorem which could have taught
> them something.


Right, thanks are due to the poster of the demonstration of the
Liouville theorem (Zdislav Kovarik) and also the person sending a lot of
links (Dave Renfro). I think I _did_ thank the latter btw; maybe you
have not read everything. And I've printed it out and will read it. -
However I, for one, knew that x^x cannot be integrated in terms of
elementary functions, but when I started to read this thread that had
already been pointed out. - Digressions often occur in these threads.
So? In what way does that entitle you to assume that the posters are
unable to answer the original question, etc?

Bengt Månsson

unread,
Mar 23, 2002, 5:32:04 PM3/23/02
to
David C. Ullrich wrote:

> On Sat, 23 Mar 2002 20:21:54 GMT, Bengt Månsson
> <ben...@telia.com> wrote:
[snip]


> Of course the value of a function at a point is not _always_ supposed
> to be defined by continuity.


Great.

> On the other hand in the context of an informal newsgroup
> posting asking for the minimum of x^x for x >= 0, doing anything
> other than taking the value at the origin to be defined by continuity
> is just _silly_.


Ok, then I'm silly. Because I took it for granted that the poster really
meant x > 0 and didn't define a value at all at x = 0.

>
> What the answer is depends on the _context_, because the meaning
> of the question depends on the context. In some contexts asking
> the _question_ "what is the greatest value of sin(x)/x for real
> x" would be erroneous. In some contexts the answer would be
> that there is none, and in some the answer would be that the
> largest value is 1.


Never! - Because if you intend that, then you should have a function f(x)
= sin(x)/x for x != 0 and either (depending on context) take it for
granted that f(0) should be taken as 1 or state it explicitly. Then you
can say that the greatest value of f(x) in R is 1. But you still cannot
(or should not) say that the greatest value of sin(x)/x in R is 1.


> The point to these marks on the page is to try to communicate.


I thought the point was to discuss, answer questions, asking questions
etc in and about mathematics.


> Answering the question "what is the minimum of x^x for x >= 0"
> with "0, because we can define 0^0 = 0" is not attempting to
> communicate with the person asking the question.


It is a possibility albeit a trivial one. Other people post other things
and there are lots of digressions. So?

Bengt Månsson

unread,
Mar 23, 2002, 5:40:09 PM3/23/02
to
David C. Ullrich wrote:

> On Sat, 23 Mar 2002 20:39:50 GMT, Bengt Månsson
> <ben...@telia.com> wrote:


[snip]


>>>But there's also the example of sin(z)/z
>>>equalling 1 when z = 0

> I don't think I've ever seen anyone say "sin(0)/0 = 1".


But you just said it (above)! In words, but what's the difference?

[snip]

I agree that leaving 0^0 undefined is analogous to leaving 0/0
undefined. That makes me (on second thought) prefer to leave 0^0
undefined in general. - But there is a problem with the binomial
theorem. Expanding (0 + 0)^n (n positive integer) how do we avoid 0^0
in the first and last terms?
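[Editorial aside, not from the thread: this binomial-theorem point is exactly why many languages, Python among them, adopt the convention 0**0 == 1. With that convention the identity (a + b)^n = sum_k C(n,k) a^k b^(n-k) survives a = 0 or b = 0, since the k = 0 and k = n terms then contain a 0^0 factor:]

```python
from math import comb

# Python adopts the discrete convention 0**0 == 1.
assert 0 ** 0 == 1

# With that convention the binomial theorem holds even when one base is 0:
# for a = 0 the only surviving term is k = 0, namely C(n,0) * 0**0 * b**n.
for n in range(1, 6):
    a, b = 0, 3
    lhs = (a + b) ** n
    rhs = sum(comb(n, k) * a ** k * b ** (n - k) for k in range(n + 1))
    assert lhs == rhs                 # both equal 3**n
```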

Franz Fritsche

unread,
Mar 23, 2002, 5:45:37 PM3/23/02
to
Ullrich wrote:
>>
>> On the other hand in the context of an informal newsgroup
>> posting asking for the minimum of x^x for x >= 0 doing anything
>> other than taking the value at the origin to be defined by continuity
>> is just _silly_.
>>

It seems that this counts as a valid _mathematical_ argument for Mr.
Ullrich.

>
> Ok, then I'm silly. Because I took it for granted that the poster really
> meant x > 0 and didn't define a value at all at x = 0.
>

Well, I am silly too, since I think it also _could be_ that the poster
simply wasn't aware of the "problem" with x = 0 when posting the
question.

>
> I thought the point was to discuss, answer questions, asking questions
> etc in and about mathematics.
>

Agreed... [ Well, for some time now I have seriously begun to question
this "assumption"... :-( ]

>>
>> Answering the question "what is the minimum of x^x for x >= 0"
>> with "0, because we can define 0^0 = 0" is not attempting to
>> communicate with the person asking the question.
>>

??? Why not? Since the OTHER _possible_ solution was given 3 or 4
times already, I don't think that this provides "better" communication...

>
> It is a possibility
>
I would think so.


> albeit a trivial one.
>
I DON'T think so. If it _were_ trivial, why then all these digressions?
( Know what I mean? ;-)

F.

David Kastrup

unread,
Mar 23, 2002, 5:50:24 PM3/23/02
to
ull...@math.okstate.edu (David C. Ullrich) writes:


> >
> >However this doesn't justify your claim that "the value at the origin is
> >supposed to be defined by continuity" in general.
>
> It doesn't _prove_ it - this is not the sort of thing that's
> susceptible to proof. But there's also the example of sin(z)/z
> equalling 1 when z = 0 (together with the example of sin(2z)/z
> equalling 2 when z = 0, which shows that the reason sin(z)/z
> is said to equal 1 when z = 0 is not because 0/0 has been
> defined to be 1.)

"equalling" is the wrong expression. It doesn't equal it there, it
is just sloppy writing. When mathematicians write sin(z)/z, they
usually mean something different. Electrical engineers have the
function sinc(x) which is sin(pi x)/(pi x) except for the obvious
extension of continuity.
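[Editorial aside, not from the thread: the engineers' sinc mentioned above can be sketched in a few lines of Python. This mirrors the definition used by numpy.sinc, sin(pi x)/(pi x), with the continuity extension made explicit at x = 0:]

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), extended by continuity at 0."""
    if x == 0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

assert sinc(0) == 1.0                            # the continuity extension
assert math.isclose(sinc(0.5), 2 / math.pi)      # sin(pi/2) / (pi/2)
assert abs(sinc(1)) < 1e-15                      # zero at nonzero integers
```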

Where the sloppiness of mathematicians really excels is when they
write arctan(y/x), which often means something rather more complicated,
agreeing with the formal value of the expression only for x > 0.
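[Editorial aside, not from the thread: the "something more complicated" usually meant by arctan(y/x) is the two-argument arctangent, atan2, which recovers the full angle of the point (x, y). The two agree only in the right half-plane:]

```python
import math

# In the right half-plane (x > 0) atan(y/x) and atan2(y, x) agree:
x, y = 1.0, 1.0
assert math.isclose(math.atan(y / x), math.atan2(y, x))      # both pi/4

# For x < 0, atan(y/x) is off by pi; atan2 gives the true angle:
x, y = -1.0, 1.0
assert math.isclose(math.atan2(y, x), 3 * math.pi / 4)       # correct
assert math.isclose(math.atan(y / x), -math.pi / 4)          # wrong quadrant
```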

So the question boils down to whether x^x for x>=0 was supposed to be
sloppy writing for "substitute something continuous for x=0", or
whether it was supposed to say "use the definition of 0^0 at x=0",
whatever the original poster meant this definition should have been.
