According to my calculator,
7^sqrt(8) = 245.63951...
8^sqrt(7) = 245.10463...
So, 7^sqrt(8) > 8^sqrt(7).
Who can prove this fact only by algebra or analysis?
Many thanks.
Konstantin Knop.
---
----------------------------------------------------------------------
Konstantin Knop | Numbers 142072321123 + 1419763024680 i, 0 <= i <= 20
k...@knop.spb.ru | are primes. It's nice, isn't it?
: According to my calculator,
: 7^sqrt(8) = 245.63951...
: 8^sqrt(7) = 245.10463...
: So, 7^sqrt(8) > 8^sqrt(7).
: Who can prove this fact only by algebra or analysis?
:
Hmmm. The inequality 7^(sqrt(8)) ? 8^(sqrt(7)) holds in the same
sense as 7^(1/sqrt(7)) ? 8^(1/sqrt(8)) so it seems sensible to look at
the function f(x) = x^(1/sqrt(x)), defined for x > 0, and which
approaches the limit 0 as x --> 0+. One has
f'(x) = f(x) * x^(-3/2) * (2 - ln(x))/2.
It follows that f(x) is increasing for 0 < x < e^2 and decreasing for
x > e^2, so has an absolute maximum of e^(2/e) at x = e^2.
Since e^2 = 7.389, approximately, one sees that the points (7,f(7))
and (8,f(8)) are on opposite sides of the extreme point (e^2, e^(2/e)).
Dang! However, e^2 - 7 is about .389, while 8 - e^2 is about .611.
The greater distance of 8 than 7 from the abscissa of the extreme point
(which is an absolute maximum) at least makes it plausible that
f(8) < f(7), especially if you consider the Taylor expansion of f(x)
at x = e^2.
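For readers who want to sanity-check the calculus above, here is a small numerical sketch (a Python illustration of mine, not part of the original post) of f(x) = x^(1/sqrt(x)) and the derivative formula quoted above:

```python
import math

# f(x) = x^(1/sqrt(x)), the function analyzed above
def f(x):
    return x ** (1.0 / math.sqrt(x))

# f'(x) = f(x) * x^(-3/2) * (2 - ln(x))/2, which vanishes at x = e^2
def fprime(x):
    return f(x) * x ** (-1.5) * (2 - math.log(x)) / 2

e2 = math.e ** 2
print(7 < e2 < 8)        # True: the maximum sits between 7 and 8
print(fprime(7) > 0)     # True: f is still increasing at x = 7
print(fprime(8) < 0)     # True: f is already decreasing at x = 8
print(f(7) > f(8))       # True: equivalent to 7^sqrt(8) > 8^sqrt(7)
```

So the two points really do straddle the maximum, and the comparison is as close as the post suggests.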
:Hello!
:
:According to my calculator,
:7^sqrt(8) = 245.63951...
:8^sqrt(7) = 245.10463...
:
:So, 7^sqrt(8) > 8^sqrt(7).
:
:Who can prove this fact only by algebra or analysis?
:Many thanks.
Nice ;-)
It looks a little tough by anything but brute-force calculation. My
original thought was: this is equivalent to 7^(1/\sqrt{7}) >
8^(1/\sqrt{8}). Now the function f(x) = x^(1/x) is decreasing on the
interval [e,\infty) (just evaluate its derivative), and the inequality
just says that f(\sqrt{7}) > f(\sqrt{8}) (square the latter to get the
former). So, I thought, all you have to do is show \sqrt{7} > e.
Unfortunately, \sqrt{7} < e < \sqrt{8}.
So, it looks like doing it analytically requires going to (at least) the
second derivative, and knowing roughly where e is in (\sqrt{7},
\sqrt{8}). The Joneses live on one side of the hill, the Smiths on the
other, and which one is higher depends on the shape of the hill and how
far they are from its top.
Not worth the analysis. I hate to say it, but it looks like brute-force
wins. Unless somebody has something elegant to say about where e^2 lies
in (7,8)?
--Ron Bruck
Now 100% ISDN from this address
7 = 8^(ln 7 / ln 8), so show that sqrt(8) * ln 7 / ln 8 > sqrt(7), or
sqrt(8)/ln(8) > sqrt(7)/ln(7).
Take the derivative of sqrt(x)/ln(x). This should be (if I'm doing this right)
(ln(x)/(2*sqrt(x)) - sqrt(x)/x) / (ln(x))^2
If we can show that this is positive somewhere and non-negative everywhere
between x=7 and x=8 we will be done.
The denominator is clearly positive, so let's concentrate on the numerator.
((ln(x)/(2*sqrt(x)))/(sqrt(x)/x)) = x*ln(x)/2, which if x is greater than
e (which it is) is greater than x/2, which if x is greater than 2 (which it
is) is greater than 1. Therefore, ln(x)/(2*sqrt(x)) is greater than sqrt(x)/x
for all x between 7 and 8, and thus 7^sqrt(8) > 8^sqrt(7) QED.
-Travis
Wrong. That ratio is ((ln(x)/(2*sqrt(x))) / (sqrt(x)/x)) = ln(x)/2, not x*ln(x)/2.
Now ln(x)/2 = 1 when x = e^2 and 7 < e^2 < 8 as several other posters
have noted. So you reach the same roadblock as the previous posters
reached which is of course understandable since you are using the same
technique modulo a logarithm.
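To see Jeff's point numerically, here is a quick check (a Python sketch of my own): the sign of the derivative of sqrt(x)/ln(x) is the sign of ln(x) - 2, so the function dips to a minimum at e^2 inside (7,8); the endpoint comparison still comes out the right way, but not by monotonicity.

```python
import math

def g(x):
    return math.sqrt(x) / math.log(x)

# The derivative of g works out to (ln(x) - 2) / (2*sqrt(x)*ln(x)^2),
# so for x > 1 its sign is just the sign of ln(x) - 2.
def gprime_sign(x):
    return math.log(x) - 2

print(gprime_sign(7) < 0)   # True: g decreasing just past x = 7
print(gprime_sign(8) > 0)   # True: g increasing again by x = 8
print(g(8) > g(7))          # True: equivalent to 7^sqrt(8) > 8^sqrt(7)
```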
Jeff
Many prior posters tried and failed to apply calculus to this
problem. But it can be done purely arithmetically -- a triumph of
number theory over calculus. The trick is to replace the square
roots by good rational approximations obtained via continued
fractions. The proof below is easily verified using only simple
longhand integer arithmetic.
    8^sqrt(7) < 7^sqrt(8)
<=> 8^sqrt(14) < 7^4            [raise both sides to the power sqrt(2)]
but sqrt(14) < 116/31 since 14*31^2 - 116^2 = -2,
hence it suffices to show
    8^(116/31) < 7^4
or  8^29 < 7^31
or  (8/7)^30 < 56
but
    (8/7)^30 < (39^2/20^2)^3    since (8/7)^5 < 39/20
             < (153/40)^3       since 39^2 = 1521 < 1530
             < 56               since 153^3 < 56*40^3
QED
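Every "since ..." step above involves only integers, so the whole proof can be machine-checked in a few lines. A Python sketch (my own verification, using exact rational arithmetic):

```python
from fractions import Fraction

# sqrt(14) < 116/31 because 14*31^2 falls just short of 116^2
assert 14 * 31**2 - 116**2 == -2

# the reduction 8^(116/31) < 7^4 is equivalent to 8^29 < 7^31
assert 8**29 < 7**31

# the three "since" inequalities, with exact rationals
assert Fraction(8, 7)**5 < Fraction(39, 20)
assert 39**2 == 1521 and 1521 < 1530
assert 153**3 < 56 * 40**3

# and the combined claim (8/7)^30 < 56 itself
assert Fraction(8, 7)**30 < 56
print("all steps check out")
```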
The approximation sqrt(14) < 116/31 arises from applying to
sqrt(14) a few iterations of the continued fraction algorithm
(which uses only arithmetical operations and is easily carried
out by hand in this case). Alternatively, one may use the n=3
case of the classical formula
sqrt((n+1)^2 - 2) = [n; 1, n-1, 1, 2*n]   (the block 1, n-1, 1, 2*n repeating)
i.e.
sqrt(14) = [3; 1, 2, 1, 6, 1, 2, 1, 6, ...]
The approximant we used above arises from the 5th convergent
[ 3, 1, 2, 1, 6, 1 ], i.e.
3 + 1/(1 + 1/(2 + 1/(1 + 1/(6 + 1/1)))) = 116/31
or expressed matrix-theoretically
[ 3 1 ][ 1 1 ][ 2 1 ][ 1 1 ][ 6 1 ][ 1 1 ]   [ 116 101 ]
[ 1 0 ][ 1 0 ][ 1 0 ][ 1 0 ][ 1 0 ][ 1 0 ] = [  31  27 ]
so it follows that
101/27 < sqrt(14) < 116/31
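The continued fraction algorithm mentioned here is easy to run by machine as well. A Python sketch (helper names are mine, not from the post) using the standard integer recurrence for sqrt(n) and the usual convergent recurrence:

```python
from math import isqrt

# Continued fraction of sqrt(n) via the standard integer recurrence
# (n must not be a perfect square)
def sqrt_cf(n, terms):
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    cf = [a0]
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        cf.append(a)
    return cf

# Convergents h/k via h_i = a_i*h_{i-1} + h_{i-2}, k_i = a_i*k_{i-1} + k_{i-2}
def convergents(cf):
    h0, h1, k0, k1 = 1, cf[0], 0, 1
    out = [(h1, k1)]
    for a in cf[1:]:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append((h1, k1))
    return out

print(sqrt_cf(14, 9))                    # [3, 1, 2, 1, 6, 1, 2, 1, 6]
print(convergents(sqrt_cf(14, 6))[-2:])  # [(101, 27), (116, 31)]
```

The last two convergents reproduce the bounds 101/27 < sqrt(14) < 116/31 used above.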
Further information on continued fractions may be found in most
texts on elementary number theory, e.g. the texts of Stark or
Hardy and Wright. Many other inequalities involving sqrt's may
be similarly attacked.
Note that the complexity of even simpler problems involving
radicals is currently unknown. For example, no polynomial time
algorithm is known for determining the sign of a sum of real
radicals sum(ci*qi^(1/ri)) where ci, qi are rational numbers and
ri is a positive integer. Such sums play an important role in
various geometric problems (e.g. Euclidean shortest paths and
traveling salesman tours). Testing whether such a sum of radicals
is zero can be decided in polynomial time (e.g. see Bloemer's
papers on radical denesting in FOCS '91, '92). However, this is
of no help in determining the sign, it only shows that if sign
testing is in NP then it is already in NP /\ co-NP.
Finally, it should be mentioned that problems like Knop's may
be attacked via general decision procedures for real exponential
fields (such algorithms are generalizations of Tarski's classical
decision procedure for the first order theory of the reals, and
its subsequent effectivization by Collins in his Cylindrical
Algebraic Decomposition (CAD) algorithm). Following are some
references to recent work in this area.
-Bill
van den Dries, L. et al., The elementary theory of restricted analytic
fields with exponentiation, Annals of Math. 140 (1994) 183-205.
van den Dries, L., Alfred Tarski's elimination theory for real closed
fields, Jnl. Symbolic Logic 53 #1 (1988) 7-19.
van den Dries, L., A generalization of the Tarski-Seidenberg theorem,
and some nondefinability results, Bull. AMS 15 #2 (1986) 189-193.
van den Dries, L., Tarski's problem and Pfaffian functions,
Logic Colloquium '84 (J.B. Paris ed.) 59-90.
Khovanskii, A. G., Fewnomials and Pfaff manifolds, Proc's of
Int'l Congress of Mathematicians '83, 549-564.
Miller, C., Infinite differentiability in polynomially bounded o-minimal
structures, Proc's AMS 123 #8 (1995) 2551-2555.
Richardson, D., Wu's method and the Khovanskii finiteness theorem,
Jnl. Symbolic Computation 12 (1991) 127-141.
Richardson, D., A zero structure theorem for exponential polynomials,
Proc. ISSAC '93, 144-151.
Wolter, H., Consequences of Schanuel's condition for zeros of exponential
terms, Math. Logic Quarterly 39 (1993) 559-565.
Here is what appears to be a really easy mostly-algebraic proof, barely
"analytic" at all. It is very elementary, though not quite so much as Bill's.
I post it on behalf of my colleague John Spain. It's amazing that so simple
a proof hasn't appeared already, yet I can't find anything wrong with it.
A black mark for all of us regulars on sci.math!
-----------------------------------------------
Let k>0.
Use the well-known and easily provable fact that (1+k/n)^n ---> e^k, *increasing*.
So for *n>e* ...   (NOTE that hypothesis!)
     n^k > e^k > (1+k/n)^n
thus n^(n+k) > (n+k)^n
thus sqrt(7)^sqrt(8) > sqrt(8)^sqrt(7)
thus 7^sqrt(8) > 8^sqrt(7)
-----------------------------------------------
Well was that easy, or was it? (Thanks John.)
It ties in with that occasional thread about the minimum value for which x^y = y^x.
-------------------------------------------------------------------------------
Bill Taylor w...@math.canterbury.ac.nz
-------------------------------------------------------------------------------
Every problem has at least one solution which is elegant, neat - and wrong.
-------------------------------------------------------------------------------
:Let k>0.
:Use the well-known and easily provable fact that (1+k/n)^n ---> e^k, *increasing*.
:So for *n>e* ...
:     n^k > e^k > (1+k/n)^n
:thus n^(n+k) > (n+k)^n
:thus sqrt(7)^sqrt(8) > sqrt(8)^sqrt(7)
:thus 7^sqrt(8) > 8^sqrt(7)
:
:Well was that easy, or was it? (Thanks John.)
:
:It ties in with that occasional thread about the min value in which x^y = y^x.
Exercise: Keep the 7, replace 8 by 7.2 (allowed since k = 0.2 > 0) and
do the calculations again (a commercial calculator is expected to show at
least 5 correct digits). (The world would be a different place if sqrt(7)
were greater than e.)
Yes, it was amusing. Cheers, Slavek (ZVK).
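Slavek's exercise is easy to run (a Python sketch of mine; here "k = 0.2" loosely refers to replacing 8 by 7.2): the fallacious argument would predict 7^sqrt(7.2) > 7.2^sqrt(7), but the calculator disagrees.

```python
import math

a = 7 ** math.sqrt(7.2)       # what the fallacious proof claims is larger
b = 7.2 ** math.sqrt(7)
print(a < b)                  # True: the "proof" fails for 7 vs 7.2
print(math.sqrt(7) < math.e)  # True: because n = sqrt(7) is NOT > e
```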
>Let k>0.
>Use the well-known and easily provable fact that (1+k/n)^n ---> e^k, *increasing*.
>So for *n>e* ...
Here "n>e" is the problem (see below)
>     n^k > e^k > (1+k/n)^n
>
>thus n^(n+k) > (n+k)^n
>
>thus sqrt(7)^sqrt(8) > sqrt(8)^sqrt(7)
[snip!]
At this point, you have n=sqrt(7) and k=(sqrt(8)-sqrt(7)). This is easily
seen by comparing this with the previous statement.
The only problem is sqrt(7) < e, which is contrary to the original assumption
that I pointed out above.
(sqrt(7) = 2.6457... < 2.71... = e)
>thus 7^sqrt(8) > 8^sqrt(7)
And so this statement is no longer justified by this line of reasoning, since
the previous statement is invalid.
>Well was that easy, or was it? (Thanks John.)
No, it wasn't ;-)
--Matthew
One could reduce the size of the multiplicands in my earlier
proof -- I didn't try very hard to do such. For example,
picking up halfway through my earlier proof, instead of
     (8/7)^30 < (39^2/20^2)^3   since (8/7)^5 < 39/20
              < (153/40)^3      since 39^2 = 1521 < 1530
              < 56              since 153^3 < 56*40^3
use
     (8/7)^30 < ((103/69)^2)^5  since (8/7)^3 < 103/69
              < (29/13)^5       since (103/69)^2 < 29/13
              < 5*5*29/13       since (29/13)^2 < 5
              < 56              since 25*29 < 13*56
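The reduced chain is just as mechanically checkable as the first one. A Python sketch (my own verification, exact rationals throughout):

```python
from fractions import Fraction

F = Fraction
assert F(8, 7)**3 < F(103, 69)          # first "since"
assert F(103, 69)**2 < F(29, 13)        # second "since"
assert F(29, 13)**2 < 5                 # third "since"
assert 25 * 29 < 13 * 56                # fourth "since"
assert F(8, 7)**30 < 56                 # the combined conclusion
print("reduced chain verified")
```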
Although I didn't mention it in my earlier proof, I should
point out that the above inequalities were also generated
via continued fraction approximations.
In any case the main point of my proof is that problems like
this can be attacked purely arithmetically -- with proofs so
simple that even an elementary school student could easily
verify the arithmetic by hand -- e.g. the above proof is
verified by checking each inequality following "since ...".
Contrast this simplicity with the complexity of the various
proposed proofs based upon calculus (all of which turned out
to be incorrect -- a tribute to their complexity).
As I said, a triumph for number theory over calculus!
Gauss would surely be happy.
-Bill
>Here is what appears to be a really easy mostly-algebraic proof, barely
>"analytic" at all. It is very elementary, though not quite so much as Bill's.
>
>Let k>0.
>Use the well-known and easily provable fact that (1+k/n)^n ---> e^k, *increasing*.
>So for *n>e* ...
>     n^k > e^k > (1+k/n)^n
>thus n^(n+k) > (n+k)^n
>thus sqrt(7)^sqrt(8) > sqrt(8)^sqrt(7)
Okay, now this step is the problem. It looks like you are taking
n=sqrt(7) and k = sqrt(8) - sqrt(7).
The problem would be that, according to my calculator,
sqrt(7) = 2.6457... < e = 2.718...
Seeing as how, by hypothesis, n needed to be greater than e to make this proof
fly, we find that your .sig is very applicable.
Of course, this begs the question "Does someone have an analytic proof that
sqrt(7) < e?" ;)
>thus 7^sqrt(8) > 8^sqrt(7)
>-----------------------------------------------
I found another way of proof:
7^31 > 8^29 (= 2^87) is equivalent to 7^30/2^84 > 8/7.
7^30/2^84 = (7^5/2^14)^6 = (16807/16384)^6 > (1 + 400/16384)^6
          > 1 + 400*6/16384     (by the Bernoulli inequality)
          > 1 + 1/7             since 400*6*7 > 16384.
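This Bernoulli-inequality proof is also machine-checkable; a Python sketch of mine with exact rationals:

```python
from fractions import Fraction

assert 8**29 == 2**87                        # so the claim is 7^31 > 2^87
lhs = Fraction(7**30, 2**84)
assert lhs == Fraction(16807, 16384)**6      # 7^5 = 16807, 2^14 = 16384
assert Fraction(16807, 16384) > 1 + Fraction(400, 16384)
assert 400 * 6 * 7 > 16384                   # Bernoulli's bound exceeds 1 + 1/7
assert lhs > Fraction(8, 7)                  # hence 7^31 > 8^29
print("Bernoulli proof verified")
```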
Check this:
e^2 - 7 = (integral from 0 to 1) 4 * x^2 * (1 - x)^2 * e^(2*x) dx.
Have fun, ZVK (Slavek).
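Slavek's integral identity can be checked numerically; since the integrand is nonnegative on (0,1), the identity immediately gives e^2 > 7. A Simpson's-rule sketch in Python (my illustration, no external libraries assumed):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

integrand = lambda x: 4 * x**2 * (1 - x)**2 * math.exp(2 * x)
val = simpson(integrand, 0.0, 1.0)
print(abs(val - (math.e**2 - 7)) < 1e-9)   # True: the integral equals e^2 - 7
```

(The identity can of course also be proved exactly by repeated integration by parts.)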
This is related to my earlier "high-school algebra" proof below.
I claim this proof can be checked in less than two minutes by
purely mental arithmetic (no pencil, paper or calculator needed!).
Indeed, only arithmetic of two-by-one digit integers is needed.
Note 8*29^2 > 7*31^2 via x=30 in 8*(x-1)^2-7*(x+1)^2 = x^2-30*x+1
so sqrt(8) > 31/29 sqrt(7) by taking sqrt of above.
Thus to prove
     7^sqrt(8) > 8^sqrt(7)
it suffices to prove
     7^((31/29)*sqrt(7)) > 8^sqrt(7)
or   7^30/8^28 > 8/7     [note 8^28 = 2^84]
but
     (7^5/2^14)^6 > (42/41)^6   since 7^5 = 7*(50-1)^2 > 7*(2500-100) = 16800
                                and   2^14 = 2^4*2^10 < 2^4*1025 = 16400
                  > 1 + 6/41    since (1+x)^n > 1+n*x   [binomial theorem, x>0]
                  > 1 + 1/7     since 7*6 > 41
QED
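And the mental-arithmetic steps, machine-checked (a Python sketch of mine):

```python
# x = 30 in 8*(x-1)^2 - 7*(x+1)^2 = x^2 - 30*x + 1 gives 6728 > 6727:
assert 8 * 29**2 > 7 * 31**2
# 7^5 = 7*(50-1)^2 > 7*(2500-100) = 16800, while 2^14 < 16*1025 = 16400:
assert 7**5 > 16800 and 2**14 < 16400
# the binomial bound then only needs 7*6 > 41:
assert 7 * 6 > 41
# and the reduced target 7^30/8^28 > 8/7, i.e. 7^31 > 8^29:
assert 7**31 > 8**29
print("mental arithmetic verified")
```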
I did not supply this proof in my earlier post because I wanted to
emphasize how continued fraction approximations algorithmically
reduce the problem to a purely arithmetical problem, i.e. to "grade
school algebra" involving only addition and multiplication of integers
(which does not include the binomial theorem or related inequalities).
Note that the continued fraction approximation used in the above proof
sqrt(7/8) < 29/31 is equivalent to that in my prior proof
sqrt(14) < 4*29/31.
Note also the inherent symmetry: we reduce the original problem
     8^sqrt(7) < 7^sqrt(8)
to
     8^29 < 7^31
via
     8*29^2 > 7*31^2    [our continued fraction for sqrt(7/8)]
I leave to Ron Bruck or other resident sci.math analysts the (probably
hopeless) task of devising an (analytical) "college algebra" proof
as simple as those I supplied via grade school and high school algebra.
-Bill
P.S. The grade school vs. high school vs. college algebra distinction
follows the delightful algebraic geometry expositions of one of the
great algebraic manipulators of our time -- namely Shreeram Abhyankar.
See his classic paper "Historical ramblings in algebraic geometry and
related algebra", Amer. Math. Monthly 83 (1976), 409-448, or his more
recent lectures "Algebraic Geometry for Scientists and Engineers", AMS,
1990. With the latter in hand, even analysts have hope of understanding
Abhyankar's celebrated work on resolution of singularities! But start
with simpler problems like the above first :-)
I have tried to tell the story of algebraic geometry and to bring
out the poetry in it. -- Abhyankar, preface of said lectures.
If only mathematics had more poets!