
7^sqrt(8) > 8^sqrt(7) proof


Wieslaw Ardanowski

unread,
May 24, 1996, 3:00:00 AM5/24/96
to

How to prove that 7^sqrt(8) > 8^sqrt(7) ?

This is my quasiscientific proof.

I translate this inequality into:

 x^sqrt(x+b) > (x+b)^sqrt(x),

{ where b is a number with b > 0 and x > 0 }  <=== CONDITIONS!!!

let this transform into:

1. x^(x+b) > (x+b)^x

2. x^x * x^b > (x+b)^x

3. x^b > (x+b)^x / x^x

4. x^b > ( (x+b)/x )^x

5. x^b > ( 1 + b/x )^x

6. lim x^b = infinity  {as x goes to infinity}

   lim ( 1 + b/x )^x = a finite number (sometimes irrational)  {as x goes to infinity}

   if b=1 then lim ( 1 + b/x )^x = 2.71.. {the number e}  {as x goes to infinity}

Point 6. shows that x^b > ( 1 + b/x )^x ,


so for x=7 and b=1 it shows that 7^sqrt(8) > 8^sqrt(7),
which is what was to be proved.
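(For readers who just want to see the numbers, a small Python sketch -- purely illustrative, not part of the argument -- evaluating both sides and the step-6 comparison at x=7, b=1:)

  import math
  # evaluate both sides of the original inequality
  print(7 ** math.sqrt(8))            # ~245.65
  print(8 ** math.sqrt(7))            # ~245.10, so 7^sqrt(8) > 8^sqrt(7)
  # the step-6 comparison at x = 7, b = 1
  print(7 ** 1, (1 + 1.0/7) ** 7)     # 7 vs (8/7)^7 ~ 2.55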


!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!! !! !!!
!!! _________ !! Darth alias Vader !!!
!!! /....../ \.\ !! e-mail i...@bydg.pdi.net !!!
!!! / \ ....[ ]..\ !! !!!
!!! | |.....\ /....| !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!! | \____........| !!!
!!! | ____.....| ! ! !!!
!!! | .... \....| ! ! !!!
!!! | ...... |...| PARANORMAL !!! MEN WILL !!!
!!! \ .... \./ ACTIVITY ! ! STRIKE !!!
!!! \ _________ / EXISTS ! ! AGAIN! !!!
!!! !!!
!!! !!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Eric Gindrup

unread,
May 24, 1996, 3:00:00 AM5/24/96
to

Wieslaw Ardanowski wrote:
>
> How to prove that 7^sqrt(8) > 8^sqrt(7) ?
>
> This is my quasiscientific proof.
>

[... very complicated approach omitted...]

> !!! _________ !! Darth alias Vader !!!
> !!! /....../ \.\ !! e-mail i...@bydg.pdi.net !!!

Let's start by multiplying both sides of this inequality by 7 sqrt(8).
49 * 8 > 56 * sqrt(56) [where ">" is provisional]
Then we can divide both sides by 56.
7 > sqrt(56)

However, since:
1) 7^2 = 49,
2) 49 < 56,
3) sqrt() is a function that never decreases,
we know from putting 1) into 2) that
7^2 < 56
and then from 3) that
sqrt(7^2) < sqrt(56).
But, sqrt(7^2) = 7, so
7 < sqrt(56).
Therefore using the ">" symbol was incorrect and we need to correct it:
7 sqrt(8) < 8 sqrt(7).


This does it.
An equally good way is to divide both sides by 7 sqrt(7). This leaves:
sqrt(8)/sqrt(7) < 8/7
Or
sqrt(8/7) < 8/7
Since the square root of a positive number (other than 1) is closer to 1
than the number itself, sqrt(8/7) is closer to 1 than 8/7 is. Since 8/7 is
greater than 1, sqrt(8/7) is less than 8/7 (it's between 1 and 8/7).
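(A quick numerical check of the computations above, in Python; note it uses the product reading, 7 sqrt(8) versus 8 sqrt(7), which is how this reply interpreted the problem, not the exponential form in the original question:)

  import math
  print(7 * math.sqrt(8), 8 * math.sqrt(7))   # ~19.80 < ~21.17
  print(49 * 8, 56 * math.sqrt(56))           # 392 < ~419.07, so ">" was wrong
  print(math.sqrt(8.0/7), 8.0/7)              # ~1.069 < ~1.143
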
-- Eric Gindrup ! gin...@okway.okstate.edu

Ronald Bruck

unread,
May 24, 1996, 3:00:00 AM5/24/96
to

In article <4o4joj$3...@noghri.bydg.pdi.net>, i...@noghri.bydg.pdi.net
(Wieslaw Ardanowski) wrote:

:How to prove that 7^sqrt(8) > 8^sqrt(7) ?
:
:This is my quasiscientific proof.
:
:I translate this inequality into:
:
: x^sqrt(x+b) > (x+b)^sqrt(x),
:
:{ where b is a number with b > 0 and x > 0 }  <=== CONDITIONS!!!
:
:let this transform into:
:
:1. x^(x+b) > (x+b)^x
:
[etc.]
:
:5. x^b > ( 1 + b/x )^x
:
:6. lim x^b = infinity  {as x goes to infinity}
:
:   lim ( 1 + b/x )^x = a finite number  {as x goes to infinity}
:
:   if b=1 then lim ( 1 + b/x )^x = 2.71.. {the number e}  {as x goes to infinity}
:
:Point 6. shows that x^b > ( 1 + b/x )^x ,
:
:so for x=7 and b=1 it shows that 7^sqrt(8) > 8^sqrt(7),
:which is what was to be proved.

It shows nothing of the kind. The limit of 2^n as n goes to infinity is
bigger than the limit of 1000 + 1/n as n goes to infinity, but that
doesn't mean that 2^n is bigger than 1000 + 1/n when n = 1.

But it does suggest the following: consider the inequality

7^\sqrt{7+x} > (7+x)^\sqrt{7}.

By taking logs we can rewrite this as


   \sqrt{1 + x/7} > 1 + log(1+x/7)/log 7

and try to prove this inequality by series expansions. The series for the
square root is

1 + x/14 - x^2/392 + x^3/5488 - ...,

while the series for the logarithm is

1 + (x/7 - x^2/98 + x^3/1029 - ...)/log 7.

Since 7 log 7 < 14 (though not by much), we see the x/14 term beats the
x/(7 log 7) term, and thus for SUFFICIENTLY SMALL x > 0 the inequality is
true. But how small? Is x = 1 small enough?

If we keep |x| < 7, both series have terms which decrease in absolute
value, so it would be enough to show that

1 + x/14 - x^2/392 > 1 + x/(7 log 7)

when x = 1. This would be bad enough, but unfortunately, it's false, and
we have to go to another two terms for the second series! The resulting
inequality, 52/27 < log 7 if I haven't miscalculated, strikes me as even
more obscure than the original.
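(A quick look at the numbers in Python, just to see how tight things are at x = 1 -- an illustration, not a proof:)

  import math
  x = 1.0
  lhs = math.sqrt(1 + x/7)                    # ~1.069045
  rhs = 1 + math.log(1 + x/7)/math.log(7)     # ~1.068622
  print(lhs, rhs, lhs - rhs)                  # difference only ~4e-4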

Well, any brighter ideas? Other than a calculator that is ;-)

--Ron Bruck
Now 100% ISDN from this address

Ronald Bruck

unread,
May 24, 1996, 3:00:00 AM5/24/96
to

In article <bruck-24059...@max-22.pacificnet.net>,
br...@pacificnet.net (Ronald Bruck) wrote:

:But it does suggest the following: consider the inequality
:
:    7^\sqrt{7+x} > (7+x)^\sqrt{7}.
:
:By taking logs we can rewrite this as
:
:    \sqrt{1 + x/7} > 1 + log(1+x/7)/log 7
:
:and try to prove this inequality by series expansions. The series for the
:square root is
:
: 1 + x/14 - x^2/392 + x^3/5488 - ...,
:
:while the series for the logarithm is
:
: 1 + (x/7 - x^2/98 + x^3/1029 - ...)/log 7.
:
:Since 7 log 7 < 14 (though not by much), we see the x/14 term beats the
:x/(7 log 7) term, and thus for SUFFICIENTLY SMALL x > 0 the inequality is
:true. But how small? Is x = 1 small enough?

Oops! It goes just the OTHER way; for x > 0 sufficiently small, the
inequality is REVERSED; we ain't gonna get it this way, folks, because
we're looking near x = 1, where the behavior near 0 has already switched
around.

Dang! Let's see, DIVIDE by the SMALLER number, get the BIGGER quotient...

Hmmm. My watch is on my LEFT hand, so this must be my RIGHT hand...

Kurt Foster

unread,
May 25, 1996, 3:00:00 AM5/25/96
to

Wieslaw Ardanowski (i...@noghri.bydg.pdi.net) wrote:

: How to prove that 7^sqrt(8) > 8^sqrt(7) ?

: This is my quasiscientific proof. [snip]
The following approach (admittedly) involves at least as much
calculation as just blasting out the numerical answer, but perhaps it will
prove instructive.
Let

(I) x^(2/x) = y^(2/y) for 0 < x < y. If

(II) y/x = 1 + t, t > 0,

one obtains

(III) x = (1 + t)^(1/t), y = (1 + t)^(1 + 1/t).

A little logarithmic differentiation shows that x decreases and y
increases as t increases.
Taking y/x = sqrt(8/7), one can calculate x and y. one gets:

x = 2.630012491414314 y = 2.811601618802316

 One has sqrt(7) = 2.6457... > 2.6300... = x, so 7^(1/sqrt(7)) [= x^(2/x) with x = sqrt(7)]
corresponds to a value of y/x less than sqrt(8/7). It follows that the
y > sqrt(7) for which y^(2/y) = 7^(1/sqrt(7)) satisfies y < sqrt(8).
 The above expressions give values x < e and y > e. Thus x^(2/x)
increases as x increases, but y^(2/y) decreases as y increases. So, from
y < sqrt(8) above, we conclude that 8^(1/sqrt(8)) < 7^(1/sqrt(7)), or

8^(sqrt(7)) < 7^(sqrt(8)).

Questions left to the reader: What are the limiting values for x and y in
(III) as t --> 0+ ? What happens as t increases without bound?
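(As a purely numerical companion to (I)-(III), a short Python sketch that recovers the quoted values of x and y and the fact used above that sqrt(7) lies above x:)

  import math
  t = math.sqrt(8.0/7) - 1            # y/x = 1 + t = sqrt(8/7)
  x = (1 + t) ** (1/t)                # ~2.630012491...
  y = (1 + t) ** (1 + 1/t)            # ~2.811601618...
  print(x, y)
  print(x ** (2/x), y ** (2/y))       # equal, as (I) requires
  print(math.sqrt(7))                 # ~2.6458 > x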

Donald T. Davis

unread,
May 26, 1996, 3:00:00 AM5/26/96
to

the following proof relies less heavily on limits, though
i do use a power-series expansion:

let f(t) = -t ln t; then the problem reduces to showing
that f(7^-1/2) > f(8^-1/2). f is concave-downward on the
unit interval, vanishing at 0 and 1, and taking its
maximum at the fixed-point 1/e, which lies nearly halfway
between 8^-1/2 and 7^-1/2 . so, let 1/m denote the point
that is halfway between 7^-1/2 and 8^-1/2 , and expand f
in a taylor series about 1/m:

f(1/m + z) = (1/m) [ ln m + zm ln(m/e) - (zm)^2/2 + (zm)^3/(2*3)
                     - (zm)^4/(3*4) + (zm)^5/(4*5) - ... ]


let d = 7^-1/2 - 1/m . then f(7^-1/2) - f(8^-1/2) =

f(1/m + d) - f(1/m - d) = (2/m) [ dm ln(m/e) + (dm)^3/(2*3)
                                  + (dm)^5/(4*5) + (dm)^7/(6*7) + ... ]

as long as m > e, then all of the rhs' terms are positive.
otherwise, only the first term is nonpositive. so we have
only to show that m > e.

m = 2 sqrt(56) / (sqrt(7) + sqrt(8)) ,  so  m^2 = 4 * 56 / (7 + 8 + 2 sqrt(56))

sqrt(56) = sqrt(7^2 + 7) < sqrt(7^2 + 2*(1/2)*7 + 1/4) = 7.5

so, m^2 > 224/30 = 7 7/15 > e^2 .  this shows that the difference
series is positive, so f(1/sqrt(7)) > f(1/sqrt(8)), as desired.

-don davis, boston


Donald T. Davis

unread,
May 27, 1996, 3:00:00 AM5/27/96
to

(sorry to follow myself, but i improved the final inequality
a little bit, and clarified the conclusion. -- don)

the following proof avoids limits, except that i do use a
power-series expansion:

let f(t) = -t ln t; then the problem reduces to showing
that f(7^-1/2) > f(8^-1/2). f is concave-downward on the
unit interval, vanishing at 0 and 1, and taking its
maximum at the fixed-point 1/e, which lies nearly halfway
between 8^-1/2 and 7^-1/2 . so, let 1/m denote the point
that is halfway between 7^-1/2 and 8^-1/2 , and expand f
in a taylor series about 1/m:

f(1/m + z) = (1/m) [ ln m + zm ln(m/e) - (zm)^2/2 + (zm)^3/(2*3)
                     - (zm)^4/(3*4) + (zm)^5/(4*5) - ... ]


let d = 7^-1/2 - 1/m . then f(7^-1/2) - f(8^-1/2) =

f(1/m + d) - f(1/m - d) = (2/m) [ dm ln(m/e) + (dm)^3/(2*3)
                                  + (dm)^5/(4*5) + (dm)^7/(6*7) + ... ]

as long as m > e, then all of the rhs' terms are positive.

otherwise, only the first term is nonpositive. so we still
have to show that m > e :

m = 2 sqrt(56) / (sqrt(7) + sqrt(8)) ,  so  m^2 = 4 * 56 / (7 + 8 + 2 sqrt(56))

sqrt(56) = sqrt(7^2 + 7) < sqrt(7^2 + 2*(1/2)*7 + 1/4) = 7.5

so, m^2 > 224/30 = 896/120 > 900/121 , and m > 30/11 = 2.7273 > e.

this shows that the difference series is positive, and
that f(1/sqrt(7)) > f(1/sqrt(8)), as desired.

then exp(2 f(1/sqrt(7))) > exp(2 f(1/sqrt(8))) ;

that is, 7^(1/sqrt(7)) > 8^(1/sqrt(8)) ,

and hence 7^sqrt(8) > 8^sqrt(7) .
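(a small python sketch, illustrative only, checking the two numerical facts the argument uses -- m > e and f(1/sqrt 7) > f(1/sqrt 8):)

  import math
  f = lambda t: -t * math.log(t)
  m = 2 * math.sqrt(56) / (math.sqrt(7) + math.sqrt(8))
  print(m, 30.0/11, math.e)                      # ~2.7340 > 2.7273 > 2.71828
  print(f(1/math.sqrt(7)), f(1/math.sqrt(8)))    # ~0.367742 > ~0.367598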

-don davis, boston


Uatrongjit Sermsak 2850

unread,
May 27, 1996, 3:00:00 AM5/27/96
to

: How to prove that 7^sqrt(8) > 8^sqrt(7) ?

Let f(x) = sqrt(x)/ln x ;x > 0
Then,

f'(x) = [ ln x/(2 sqrt(x)) - sqrt(x)/x ] / (ln x)^2

      = ( 0.5 ln x - 1 ) / ( sqrt(x) (ln x)^2 )


when ln x > 2 or x > exp(2), f'(x) > 0.

And 7 > exp(2).
So, f(x) is an increasing function for x > exp(2).

We have,

sqrt(8)/ln 8 > sqrt(7)/ln 7
sqrt(8) ln 7 > sqrt(7) ln 8
or,
7^sqrt(8) > 8^sqrt(7)

----
s.s.


--
.
<<<.><><><><><><><><><><><><><><><><><><>---=Sermsak Uatrongjit=--<><><>
___________________}{ [] Department of Physical Electronic []
<<>__________________|*O======{) () Tokyo Institute of Technology ()
}{ [] . []
<><><>- b...@ss.titech.ac.jp ---<><><><><><><><><><><><><><><><><><><.>>>


Eric Gindrup

unread,
May 28, 1996, 3:00:00 AM5/28/96
to

Oops, I *did* make a catastrophic reading error in the problem. I thank
the many readers who pointed that out.

Tony2back

unread,
May 28, 1996, 3:00:00 AM5/28/96
to

In article <USERMSAK.96...@sc4.o.cc.titech.ac.jp>,

user...@o.cc.titech.ac.jp (Uatrongjit Sermsak 2850) writes:

>
> when ln x > 2 or x > exp(2), f'(x) > 0.
>
> And 7 > exp(2).
> So, f(x) ; x > exp(2) is increasing function.
>
>

Unfortunately 7<e^2, so f(x) is not increasing when x = 7. In fact it is
decreasing until x = 7.389 and then becomes an increasing function as x
increases. This is why the problem has attracted the number of posts that
it has.
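(For anyone who wants to see this numerically, a short Python illustration; the last line is just an observation that f(8) still comes out above f(7) despite the dip:)

  import math
  f = lambda x: math.sqrt(x) / math.log(x)
  fp = lambda x: (0.5 * math.log(x) - 1) / (math.sqrt(x) * math.log(x) ** 2)
  print(math.e ** 2)                  # ~7.389, so 7 < e^2 < 8
  print(fp(7), fp(8))                 # negative at 7, positive at 8
  print(f(7), f(math.e ** 2), f(8))   # dips at e^2, yet f(8) > f(7)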

Anthony Hugh Back

schl...@bbs.cruzio.com

unread,
May 28, 1996, 3:00:00 AM5/28/96
to

Doesn't anyone have a computer? Just calculate both sides!

Roger


Rene Bos

unread,
May 28, 1996, 3:00:00 AM5/28/96
to

>
> Doesn't anyone have a computer? Just calculate both sides!
>
> Roger
>

You may have missed the original posting, but it was a request for an
_analytic_ solution.

Rene

Earle D. Jones

unread,
May 29, 1996, 3:00:00 AM5/29/96
to

> Doesn't anyone have a computer? Just calculate both sides!
>
> Roger

==========

Dumb answer.

Is your mother Phyllis Schlafly?

__
__/\_\
/\_\/_/
\/_/\_\ earle
\/_/ jones

J. B. Rainsberger

unread,
May 30, 1996, 3:00:00 AM5/30/96
to

Let f(x) = sqrt(x); g(x) = ln x

g'(7) = 1/7 < 1/[2 sqrt(7)] = f'(7)
g'(8) = 1/8 < 1/[2 sqrt(8)] = f'(8)

Can we not then conclude, since f(x) and g(x) are continuous, that f
grows faster than g over [7, 8], and thus that f(7)/f(8) would be less than
g(7)/g(8)?

This means that

sqrt(7)/sqrt(8) < ln 7/ln 8
sqrt(7) ln 8 < sqrt(8) ln 7
8^[sqrt(7)] < 7^[sqrt(8)]

or am I not interpreting f' and g' correctly?
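(A quick Python check of the numbers above -- it confirms the derivative comparisons and the final ratio inequality numerically, though it says nothing about whether the inference between them is valid:)

  import math
  fp = lambda x: 1 / (2 * math.sqrt(x))     # f'(x) for f = sqrt
  gp = lambda x: 1.0 / x                    # g'(x) for g = ln
  print(gp(7), fp(7))                                         # 0.1429 < 0.1890
  print(gp(8), fp(8))                                         # 0.1250 < 0.1768
  print(math.sqrt(7)/math.sqrt(8), math.log(7)/math.log(8))   # 0.93541 < 0.93579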

Joe.

J. B. Rainsberger, | Any six-year-old who knows the
York University | words nefarious and extemp-
(collège Glendon) | oraneous gets my vote.
t7...@inforamp.net | -- Joe on Calvin.

schl...@bbs.cruzio.com

unread,
May 30, 1996, 3:00:00 AM5/30/96
to

In article <4ofs37$1...@dinkel.civ.utwente.nl>, bo...@cs.utwente.nl (Rene Bos) writes:
> In article <Ds4u1...@cruzio.com>, schl...@bbs.cruzio.com wrote:
>
> >
> > Doesn't anyone have a computer? Just calculate both sides!
> >
> > Roger
> >
>
> You may have missed the original posting, but it was a request for an
> _analytic_ solution.

And "analytic" means without computation? Ok, I've got a new
problem. Can anyone give an "analytic" proof that 3^2 > 2^3 ?

Roger

DBBBIGDOG

unread,
May 30, 1996, 3:00:00 AM5/30/96
to

Maybe I'm missing the point of the term "analytic", but I thought analytic
kinda referred more to a general form. For example, in the example
previously mentioned (3^2>2^3), is it possible that the heart of the
matter here would be to prove x^(x-1)>(x-1)^x for all x (which is false
anyway). Likewise, are we trying to prove this one specific case of
7^sqrt(8) > 8^sqrt(7), or are we actually trying to prove x^sqrt(x+1) >
(x+1)^sqrt(x) for some range? I never saw the original post, so I'm just
posing this as a possibility.


David "BIG DOG" Bykowski

Ronald Bruck

unread,
May 30, 1996, 3:00:00 AM5/30/96
to

In article <Ds8Cu...@cruzio.com>, schl...@bbs.cruzio.com wrote:

:In article <4ofs37$1...@dinkel.civ.utwente.nl>, bo...@cs.utwente.nl (Rene

It means "conceptual"; arithmetic is permitted, but only what a normal
person can do in his head. This plays to the mathematician's penchant for
clever solutions, the harder the better. It's a game.

One of my favorites (which I like to ask most beginning calculus courses,
posing it several times during the semester) is, which is bigger, \pi^e or
e^\pi?

Of course, anybody with a halfway-decent calculator can tell that it's
e^\pi. But if you do it that way, you'll miss the fact that \pi is a
COMPLETE RED HERRING. e^a is bigger than a^e for ANY positive real number
(except a = e, of course).

This is what I call a "separating" problem; it separates those who think
analytically from those who don't. ("Analytic" HERE means in the
classical sense, the opposite of "synthetic"; working the problem
backwards.)

R.J.Chapman

unread,
May 31, 1996, 3:00:00 AM5/31/96
to

d...@cam.ov.com writes:
> the following proof relies less heavily on limits, though
> i do use a power-series expansion:
>
> let f(t) = -t ln t; then the problem reduces to showing
> that f(7^-1/2) > f(8^-1/2). f is concave-downward on the
> unit interval, vanishing at 0 and 1, and taking its
> maximum at the fixed-point 1/e, which lies nearly halfway
> between 8^-1/2 and 7^-1/2 . so, let 1/m denote the point
> that is halfway between 7^-1/2 and 8^-1/2 , and expand f
> in a taylor series about 1/m:
>
> f(1/m + z) = (1/m) [ ln m + zm ln(m/e) - (zm)^2/2 + (zm)^3/(2*3)
>                      - (zm)^4/(3*4) + (zm)^5/(4*5) - ... ]
>
> let d = 7^-1/2 - 1/m . then f(7^-1/2) - f(8^-1/2) =
>
> f(1/m + d) - f(1/m - d) = (2/m) [ dm ln(m/e) + (dm)^3/(2*3)
>                                   + (dm)^5/(4*5) + (dm)^7/(6*7) + ... ]
>
> as long as m > e, then all of the rhs' terms are positive.
> otherwise, only the first term is nonpositive. so we have
> only to show that m > e.
>
> m = 2 sqrt(56) / (sqrt(7) + sqrt(8)) ,  so  m^2 = 4 * 56 / (7 + 8 + 2 sqrt(56))
>
> sqrt(56) = sqrt(7^2 + 7) < sqrt(7^2 + 2*(1/2)*7 + 1/4) = 7.5
>
> so, m^2 > 224/30 = 7 7/15 > e^2 .

But why is 224/30 = 112/15 > e^2? Here's a simple argument. The sum
1 + 2 + 4/2 + 8/6 + 16/24 + 32/120 of the first six terms of the
series for e^2 is 109/15. The next term is 64/720 = 4/45 and each
subsequent term is less than half the preceding one, so the error
in taking 109/15 for e^2 is less than 2 x 4/45 = 8/45. Hence
e^2 < 109/15 + 8/45 < 112/15.
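(The same arithmetic redone with exact fractions in Python, as a sanity check:)

  from fractions import Fraction
  import math
  terms = [Fraction(2)**n / math.factorial(n) for n in range(6)]   # 2^n/n!, n = 0..5
  partial = sum(terms)
  print(partial)                                        # 109/15
  print(partial + Fraction(8, 45), Fraction(112, 15))   # 67/9 < 112/15
  print(math.e ** 2)                                    # ~7.389, comfortably below both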

Robin Chapman
--
Robin J. Chapman
Department of Mathematics "... But there are full professors
University of Exeter, EX4 4QE, UK in this place who read nothing
r...@maths.exeter.ac.uk but cereal boxes."
http://www.maths.ex.ac.uk/~rjc/rjc.html Don Delillo--White Noise

schl...@bbs.cruzio.com

unread,
Jun 1, 1996, 3:00:00 AM6/1/96
to

In article <4okvlf$5...@newsbf02.news.aol.com>, dbbb...@aol.com (DBBBIGDOG) writes:
> Maybe I'm missing the point of the term "analytic", but I thought analytic
> kinda referred more to a general form. For example, in the example

I get the impression that "analytic" here means a difficult
proof for a trivial fact.

Roger

Zdislav V. Kovarik

unread,
Jun 1, 1996, 3:00:00 AM6/1/96
to

You can board a bus in Marathon, Greece, and disembark in Athens,
Greece. The whole operation is trivial but it's not a Marathon run.

Mathematics is a fascinating mix of facts, tools, problems, puzzles,
opinions, and who knows what else (art, ...).

For example, Steiner's investigations of Euclidean constructions using a
compass alone were (TTBOMK) not prompted by a sudden shortage of
straight-edges but by curiosity and aesthetic considerations. Today, using
appropriate graphic software may surpass ancient ways in a "trivial" way.

And the parallel (no pun intended) continues: just as the user of the
compass was under no pressure to know how the tool was manufactured, nor
is the software user.
All's fine unless something goes wrong, and you have to work around the
glitch. This happens when I use MATLAB to produce linear algebra
exercises: results which are allegedly integer are rejected by the gcd
routine as non-integer (round-off at work), and I have to insert rounding
steps, sometimes in a delicate way.

Back to "analytic": a good example is a math folklore equation
p(x) = 242 * x^4 + 113 * x - 23928 = 0
Since we are told (Lindemann's Theorem) that pi is transcendental, we
deduce that pi cannot be a root of the polynomial p (->analytic argument).
Yet, MATLAB shows that
p(pi) is approximately 8.4 * 10^(-8) (compare with 23928), and
if r is the positive root of p, we find that
r - pi is approximately -2.8 * 10^(-12).

These differences can be hidden from a user of an 11-digit calculator.
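(The MATLAB figures above are easy to reproduce; here is an equivalent Python sketch, for illustration -- the bisection loop is just a crude way to locate the root r:)

  import math
  p = lambda x: 242 * x**4 + 113 * x - 23928
  print(p(math.pi))                  # ~8.4e-8
  lo, hi = 3.0, 4.0                  # crude bisection for the positive root r
  for _ in range(60):
      mid = (lo + hi) / 2
      if p(mid) < 0: lo = mid
      else: hi = mid
  print(lo - math.pi)                # ~ -2.8e-12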

The difference between 7^sqrt(8) and 8^sqrt(7) is much larger (more
than 0.5 out of a total of about 245), but this just says that there is a price
(of hardware/software) attached to this concept of "triviality".
"Analytic" seems to refer to using more intellect and work with manually
manageable rational numbers, and less circuitry.

Is anyone working on expanding ln(x)/sqrt(x) around x_0=e^2 to obtain
the comparison?

Cheers, ZVK (Slavek).

schl...@bbs.cruzio.com

unread,
Jun 2, 1996, 3:00:00 AM6/2/96
to

In article <4oqkfe$f...@mcmail.CIS.McMaster.CA>, kov...@mcmail.cis.McMaster.CA (Zdislav V. Kovarik) writes:
> In article <DsBCH...@cruzio.com>, <schl...@bbs.cruzio.com> wrote:
> >In article <4okvlf$5...@newsbf02.news.aol.com>, dbbb...@aol.com (DBBBIGDOG) writes:
> >> Maybe I'm missing the point of the term "analytic", but I thought analytic
> >> kinda referred more to a general form. For example, in the example
> >
> >I get the impression that "analytic" here means a difficult
> >proof for a trivial fact.

> For example, Steiner's investigations of Euclidean constructions using a


> compass alone were (TTBOMK) not prompted by a sudden shortage of
> straight-edges but by curiosity and aesthetic considerations. Today, using
> appropriate graphic software may surpass ancient ways in a "trivial" way.

I am not criticizing the problem. I am questioning why people are
looking for difficult proofs when there is a trivial proof.



> Back to "analytic": a good example is a math folklore equation
> p(x) = 242 * x^4 + 113 * x - 23928 = 0
> Since we are told (Lindemann's Theorem) that pi is transcendental, we
> deduce that pi cannot be a root of the polynomial p (->analytic argument).
> Yet, MATLAB shows that
> p(pi) is approximately 8.4 * 10^(-8) (compare with 23928), and
> if r is the positive root of p, we find that
> r - pi is approximately -2.8 * 10^(-12).
>
> These differences can be hidden from a user of an 11-digit calculator.

What is this an example of? That 2 numbers can be close
without being equal?

Roger

Miguel Lerma

unread,
Jun 2, 1996, 3:00:00 AM6/2/96
to

schl...@bbs.cruzio.com wrote:
[...]
: I am not criticizing the problem. I am questioning why people are

: looking for difficult proofs when there is a trivial proof.

I do not think that the calculator really provides a "proof" in
the mathematical sense. A "mathematical proof" should consist of
a sequence of logical steps that anybody could check. When a part
of the alleged proof relies on the output of a mechanism such as
a calculator, then it does not fulfill the requirements to be
properly considered as a mathematical proof - call it "empirical
evidence" if you want, but not "proof".

Of course, you could provide a hand computation of suitable
approximations of 7^sqrt(8) and 8^sqrt(7), together with a
proof that the approximations used are enough to settle the
problem, but I am not sure that such a task is really trivial...

On the other hand, if you call "proof" any process that shows that
something is true beyond "reasonable doubt", perhaps the output of a
calculator carefully used, after suitable recheckings etc. (say, using
different calculators, repeating the computation in different ways, etc.),
could provide such a proof, but it would not be a "mathematical proof" yet.

It remains to discuss why we are interested in "mathematical" proofs,
instead of the usual kind of "empirical" evidence that is considered
enough in everyday life. Several different answers are possible for
that question, one of them the weakness and unreliability of usual
empirical evidence, but a much simpler answer could be this one:
why not?

: > Back to "analytic": a good example is a math folklore equation


: > p(x) = 242 * x^4 + 113 * x - 23928 = 0
: > Since we are told (Lindemann's Theorem) that pi is transcendental, we
: > deduce that pi cannot be a root of the polynomial p (->analytic argument).
: > Yet, MATLAB shows that
: > p(pi) is approximately 8.4 * 10^(-8) (compare with 23928), and
: > if r is the positive root of p, we find that
: > r - pi is approximately -2.8 * 10^(-12).
: >
: > These differences can be hidden from a user of an 11-digit calculator.

: What is this an example of? That 2 numbers can be close
: without being equal?

It is a good example of how the answer of a calculator can be
misleading.


Miguel A. Lerma


schl...@bbs.cruzio.com

unread,
Jun 5, 1996, 3:00:00 AM6/5/96
to

In article <4ot369$d...@geraldo.cc.utexas.edu>, mle...@arthur.ma.utexas.edu (Miguel Lerma) writes:
> schl...@bbs.cruzio.com wrote:
> [...]
> : I am not criticizing the problem. I am questioning why people are
> : looking for difficult proofs when there is a trivial proof.
>
> I do not think that the calculator really provides a "proof" in
> the mathematical sense. A "mathematical proof" should consist of
> a sequence of logical steps that anybody could check. When a part
> of the alleged proof relies on the output of a mechanism such as
> a calculator, then it does not fulfill the requirements to be
> properly considered as a mathematical proof - call it "empirical
> evidence" if you want, but not "proof".

My calculator uses a sequence of logical steps that anybody could
check.

> It remains to discuss why we are interested in "mathematical" proofs,

> instead in the usual kind of "empirical" evidence that is considered
> enough in everyday life. Several different answers are possible for
> that question, one of them the weakeness and unreliablility of usual
> empirical evidence, but a much simpler answer could be this one:
> why not?

Is your calculator unreliable?

> : What is this an example of? That 2 numbers can be close
> : without being equal?
>
> It is a good example of how the answer of a calculator can be
> missleading.

It is only misleading if you jump to a totally unwarranted conclusion.
That is, if you think 2 numbers are equal just because they are
close. Many analytical arguments are similarly misleading.

Roger


Sermsak Uatrongjit

unread,
Jun 5, 1996, 3:00:00 AM6/5/96
to

Here is another proof: (Please correct me if there is something wrong.)


Let f(x) = ln(1/x)/sqrt(x),
a = f(7), b = f(8),
s7 = sqrt(7), s8 = sqrt(8)

Consider

g(n) = \int_0^1 (1-x^n)/ln(x) dx = ln(1/(n+1))

[I found this formula in a book; I hope it's correct.]


Then we have

a = g(6)/s7 and b = g(7)/s8


Thus
a-b = \int_0^1 ((1-x^6)/s7 - (1-x^7)/s8)/ln(x) dx

By mean value theorem there is x0 (0 < x0 < 1) that gives

a-b = 1/ln(x0) \int_0^1 ((1-x^6)/s7 - (1-x^7)/s8) dx

= 1/ln(x0) ( 6/(7*s7) - 7/(8*s8) )

Since ln(x0) < 0, and with a little calculation we found that:

a-b = [Negative Value] * [ Positive Value ]
so a < b

or ln(7)/s7 > ln(8)/s8
7^s8 > 8^s7

Miguel Lerma

unread,
Jun 5, 1996, 3:00:00 AM6/5/96
to

schl...@bbs.cruzio.com wrote:
[...]

> My calculator uses a sequence of logical steps that anybody could
> check.

How do you check it? By studying the electronic circuitry and
deducing how it works? By looking at the programs used by the
designers, studying the algorithms involved and trusting that
they were actually implemented in the chips of your calculator?

[...]

> Is your calculator unreliable?

Who knows?

> > : What is this an example of? That 2 numbers can be close
> > : without being equal?
> >
> > It is a good example of how the answer of a calculator can be
> > missleading.

> It is only misleading if you jump to a totally unwarranted conclusion.
> That is, if you think 2 numbers are equal just because they are
> close. Many analytical arguments are similarly misleading.

That is not what I think, but what the calculator thinks.

Who knows which conclusions are warranted from the output of that
black box that we call an electronic calculator?

In practice I use calculators and computers a lot, but I am
very careful about writing a proof that includes a step of
the form "my calculator said...".


Miguel A. Lerma


Bill Dubuque

unread,
Jun 6, 1996, 3:00:00 AM6/6/96
to

Yet another erroneous proof via calculus! This one fails due to
an invalid application of the mean value theorem: the integrand

    (1 - x^6)/sqrt(7)  -  (1 - x^7)/sqrt(8)

changes sign just below 0.8, so the mean value theorem is
inapplicable over the interval [0,1].
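(If you want to see the sign change for yourself, a short Python check; the exact crossing point does not matter for the objection, only that one exists:)

  import math
  h = lambda x: (1 - x**6)/math.sqrt(7) - (1 - x**7)/math.sqrt(8)
  for x in (0.0, 0.5, 0.79, 0.80, 0.9):
      print(x, h(x))      # positive, positive, positive, negative, negative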

So far no one has succeeded in supplying a proof via calculus
(indeed, I haven't seen any other correct proof except the
purely arithmetical proof I supplied earlier).

Surely there must be someone on sci.math who can tackle this!

-Bill

From: b...@ss.titech.ac.jp (Sermsak Uatrongjit)
Date: Wed, 5 Jun 1996 07:13:01 GMT

Here is another proof: (Please correct me if there is something wrong.)


Let f(x) = ln(1/x)/sqrt(x),
a = f(7), b = f(8),
s7 = sqrt(7), s8 = sqrt(8)

Consider

g(n) = \int_0^1 (1-x^n)/ln(x) dx = ln(1/(n+1))

[I found these formula in a book, hope it's correct.]


Then we have

a = g(6)/s7 and b = g(7)/s8


Thus
a-b = \int_0^1 ((1-x^6)/s7 - (1-x^7)/s8)/ln(x) dx

By mean value theorem there is x0 (0 < x0 < 1) that gives

**** ERROR: MVT inapplicable here, see above ****

a-b = 1/ln(x0) \int_0^1 ((1-x^6)/s7 - (1-x^7)/s8) dx ...

Dr D F Holt

unread,
Jun 7, 1996, 3:00:00 AM6/7/96
to

In article <4ot369$d...@geraldo.cc.utexas.edu>,
mle...@arthur.ma.utexas.edu (Miguel Lerma) writes:
>schl...@bbs.cruzio.com wrote:
>[...]
>: I am not criticizing the problem. I am questioning why people are
>: looking for difficult proofs when there is a trivial proof.
>
>I do not think that the calculator really provides a "proof" in
>the mathematical sense. A "mathematical proof" should consist of
>a sequence of logical steps that anybody could check. When a part
>of the alleged proof relies on the output of a mechanism such as
>a calculator, then it does not fulfill the requirements to be
>properly considered as a mathematical proof - call it "empirical
>evidence" if you want, but not "proof".

That is certainly a possible opinion, but I would not agree with it.
I would still call it proof. Since many reputable people seem to consider
that the 4-colour theorem has been proved, I am not alone in that opinion.

>
>Of course, you could provide a hand computation of suitable
>approximations of 7^sqrt(8) and 8^sqrt(7), together with a
>proof that the approximations used are enough to settle the
>problem, but I am not sure that such a task if really trivial...
>

The point is that even if you could carry out such a calculation by hand,
it seems to me that there would be very little point or merit in doing it,
since it would be much more probable that you would make a mistake than
that your calculator would. You could probably also do it by hand by using
old fashioned log tables. Again, though, I can't see much point in doing this.

I feel certain beyond any reasonable doubt that the statement in question
is correct. The reason is that I have checked on a calculator in several
different equivalent formulations, and it is evident that several others have
too. On the other hand, the various attempts at analytical proof that have
been posted have been unconvincing in the extreme. Several of them have
contained errors, and others are probably correct, but are unpleasantly
complicated.
Do you feel certain that it is correct - if so, why?

With a less trivial example, like the 4-colour theorem, again I feel
reasonably confident that it is correct, but would not be completely
surprised if it contained errors. But I would be very surprised indeed
if these errors were due to repeated hardware or compiler error (which are the
parts of the computer calculation which most mathematicians have no control
over). Much more likely is that there could be minor or conceptual errors in
the programs used, but these are more akin to errors in a standard mathematical
proof.

I would agree that computer or calculator calculations introduce the
possibility of different sorts of error (such as hardware error) from what
mathematicians are traditionally accustomed to, and certainly adequate
precautions should be taken to reduce the probability of them occurring in
important calculations, but I cannot agree with your main thesis that there
is some profound philosophical distinction between proofs with and without
computational aid.

At the end of the day, the only meaningful question you can ask is how
confident can we be that the result is correct. And we can never be
completely certain, because any argument, with or without mechanical aid,
could conceivably contain an error that everybody has overlooked.

Having said all that, I still think there is some point in looking for an
analytic proof of the sqrt(8) inequality, but that is because it may shed some
light on the problem, or put the result in a more general setting.

Derek Holt.

schl...@bbs.cruzio.com

unread,
Jun 8, 1996, 3:00:00 AM6/8/96
to

In article <4p4e5q$b...@geraldo.cc.utexas.edu>, mle...@arthur.ma.utexas.edu (Miguel Lerma) writes:
>
> > > : What is this an example of? That 2 numbers can be close
> > > : without being equal?
> > >
> > > It is a good example of how the answer of a calculator can be
> > > missleading.
>
> > It is only misleading if you jump to a totally unwarranted conclusion.
> > That is, if you think 2 numbers are equal just because they are
> > close. Many analytical arguments are similarly misleading.
>
> That is not what I think, but what the calculator thinks.
>
> Who knows which conclusions are warranted from the output of that
> black box that we call an electronic calculator?

If your calculator thinks that two unequal numbers are equal,
then, yes, I'd say you shouldn't trust that calculator.
Most computers and calculators don't have that problem.

Roger


Rogerio Brito

unread,
Jun 8, 1996, 3:00:00 AM6/8/96
to

schl...@bbs.cruzio.com wrote:
>In article <4oqkfe$f...@mcmail.CIS.McMaster.CA>, kov...@mcmail.cis.McMaster.CA (Zdislav V. Kovarik) writes:

(..)

>> Since we are told (Lindemann's Theorem) that pi is transcendental, we
>> deduce that pi cannot be a root of the polynomial p (->analytic argument).
>> Yet, MATLAB shows that
>> p(pi) is approximately 8.4 * 10^(-8) (compare with 23928), and
>> if r is the positive root of p, we find that
>> r - pi is approximately -2.8 * 10^(-12).
>>
>> These differences can be hidden from a user of an 11-digit calculator.
>

>What is this an example of? That 2 numbers can be close
>without being equal?

No. It only shows that the real answer can be hidden if
you only rely on those machines. A (true) mathematical
proof (frequently) doesn't rely on machines (except for
very few cases, such as the Four Color Theorem, which many
dispute has *really* been proved). These
machines have limitations; this is what the
mathematicians don't accept.

Otherwise, mathematicians only say that it is a
"verification".

A proof is a sequence that *has* to be worked *only* with
pencil and paper (and, of course, imagination :) ).
Anything else is just a way to get insight to return to
the problem and to continue it with your pencil and
paper.

>Roger

[]z, Roger...

--
======================================================================
Rogerio Brito - Computer Science Student - University of Sao Paulo
e-mail: rbr...@ime.usp.br - home page: http://www.ime.usp.br/~rbrito
"Windows? Linux and X!" - Member of Linux Users Group in Brazil
======================================================================

Rogerio Brito

unread,
Jun 8, 1996, 3:00:00 AM6/8/96
to

schl...@bbs.cruzio.com wrote:
>
>Is your calculator unreliable?

I'd say every calculator is unreliable if I can make up
numbers to which it can give me wrong answers. And if it
based on digital systems with limited precision, we can
*always* come with these numbers.

Although I hate my Numerical Analysis classes (I only
care for the methods, and what I really *hate* is the
implementation with floating point numbers ;) ), they are
very important because they've taught me that we can
almost always find a more rational way to use our
equipment, and to understand the computer's
limitations when dealing with floating point numbers (and
I think this is the real objective of the course).

schl...@bbs.cruzio.com

unread,
Jun 9, 1996, 3:00:00 AM6/9/96
to

Yes. Funny how many of these "verifiable" analytic proofs are wrong.

Roger


Miguel Lerma

unread,
Jun 9, 1996, 3:00:00 AM6/9/96
to

Dr D F Holt (ma...@csv.warwick.ac.uk) wrote:
[...]

> That is certainly a possible opinion, but I would not agree with it.
> I would still call it proof. Since many reputable people seem to consider
> that the 4-colour theorem has been proved, I am not alone in that opinion.

I also think that the 4-color theorem has been proved. Whoever has
doubts about it should read Appel & Haken's paper "The Four Colour
Proof Suffices", The Mathematical Intelligencer, Vol.8 (1986), N.1,
pp.10-20. In this case the computer was an aid to get a proof, but
the authors gave many details of it, including hundreds of pages with
diagrams that anybody could study and verify. Some people have
worked on verifying the proof (including verification by hand).

An opposite example is Wu-Yi Hsiang's "proof" of the Kepler Conjecture
about the maximum density packing of spheres in R^3. Look at
Thomas C. Hales' article "The Status of the Kepler Conjecture",
The Mathematical Intelligencer, Vol.16 (1994), N.3, pp.47-58.
A part of the criticism has to do with lack of details and
"unproven" claims resting on computational analysis (see
footnote 3 in that article).

: >Of course, you could provide a hand computation of suitable

: >approximations of 7^sqrt(8) and 8^sqrt(7), together with a
: >proof that the approximations used are enough to settle the
: >problem, but I am not sure that such a task if really trivial...

: The point is that even if you could carry out such a calculation by hand,
: it seems to me that there would be very little point or merit in doing it,
: since it would be much more probable that you would make a mistake than
: that your calculator would. You could probably also do it by hand by using
: old fashioned log tables. Again, though, I can't see much point in doing this.

The point is not the probability of error. I am sure I am more likely
to make a mistake than the computer is. The point is how transparent
the proof is to human eyes. If I make a mistake, it will be there
exposed to everybody's eyes. If the computer makes a mistake, who
will notice it just by reading the step that says "the computer gave
the following answer..."?

An example of how little we should trust computers is R.G.E. Pinch's
review of primality tests implemented in several popular packages in
his article "Some Primality Testing Algorithms", Notices of the A.M.S.,
Vol.40 (1993), N.9, pp.1203-1210. There are many other examples, such
as the design flaw in the Pentium chip, etc.

[...]
: I would agree that computer or calculator calculations introduce the


: possibility of different sorts of error (such as hardware error) from what
: mathematicians are traditionally accustomed to, and certainly adequate
: precautions should be taken to reduce the probability of them occurring in
: important calcualtions, but I cannot agree with your main thesis that there
: is some profound philosophical distinction between proofs with and without
: computational aid.

Well, I didn't say that there is a "philosophical" distinction. I was
discussing just their transparency to human eyes. I don't care how
it was obtained; what I want is the proof to be expressed in such
a way that every step can be checked by anybody who wishes it. I still
think that just writing "the computer said..." is not explicit enough.


Miguel A. Lerma


Bill Dubuque

unread,
Jun 10, 1996, 3:00:00 AM6/10/96
to

From: mle...@arthur.ma.utexas.edu (Miguel Lerma)
Date: 9 Jun 1996 19:44:57 GMT

...The point is not the probablity of error. I am sure I are more likely

to make a mistake than the computer is. The point is how transparent
the proof is to human eyes.

See my most recent post in this problem for a proof that can be
verified by easy mental arithmetic in less than a couple minutes.

An example of how little we should trust computers is R.G.E. Pinch's
review of primality test implemented in several popular packages in
his article "Some Primality Testing Algorithms", Notices of the A.M.S.,
Vol.40 (1993), N.9, pp.1203-1210. There are many other examples, such
as the design flaw in the Pentium chip, etc.

Another good example: I found a bug in integer division in PARI
around the same time the Pentium bug was discovered (PARI is the
premier computational algebraic number theory package; its
unlimited precision integer (aka bignum) modules are used in many
other packages, e.g. Axiom, Macsyma, various Common Lisps, etc.).
This division bug would occur with probability roughly 1/2^32 if
I remember correctly. Nonetheless it was of course considered a
major bug and a new patched version of PARI was issued
immediately. It is of course possible that this bug might
invalidate previous calculations performed with PARI.

There are many bugs of this nature in most large symbolic
computation systems. Almost every time that Macsyma was ported
to a new Common Lisp platform, bugs were discovered in the
underlying Lisp (bignum) arithmetic, especially in bignum
division, which is notoriously difficult to implement correctly
(see Knuth's Seminumerical Algorithms, volume 2).

And this is only exact integer arithmetic. With floating
point calculations one has much bigger problems to consider,
especially stability.

One should always try to find independent ways to verify
computational proofs. I believe it was Askey (an expert
in special functions) who said that he never uses a formula
unless he has verified it himself. That's how I found the
above-mentioned PARI bug: I decided to prove the division
algorithm correct before I trusted it. Otherwise, it might
not have been discovered for many years -- if ever.

-Bill

Hauke Reddmann

unread,
Jun 10, 1996, 3:00:00 AM6/10/96
to

schl...@bbs.cruzio.com wrote:
:
: Yes. Funny how many of these "verifiaiable" analytic proofs are wrong.
:
Which only proves that we still haven't found the "insight" WHY
7^v8>8^v7. If I want to know IF 7^v8>8^v7, I use a pocket
calculator too. (And I trust the result - after I
mistyped the operations three times :-)
--
Hauke Reddmann <:-EX8
fc3...@math.uni-hamburg.de PRIVATE EMAIL
fc3...@rzaixsrv1.rrz.uni-hamburg.de BACKUP
redd...@chemie.uni-hamburg.de SCIENCE ONLY

Dr D F Holt

unread,
Jun 10, 1996, 3:00:00 AM6/10/96
to

In article <199606081656...@babyblue.cs.yale.edu>,

rbr...@ime.usp.br writes:
> No. It only shows that the real answer can be hidden if
> you only rely on those machines. A (true) mathematical
> proof (frequently) doesn't rely on machines (except for
> very few cases, such as the Four Color Theorem, that many
> disagree that it has been *really* proved).

It does not really matter whether they disagree or not -
if the proof contains no error, then it is correct, and the theorem has been
proved. Of course, it might have mistakes in it, and it might be a matter
of opinion whether the proof has been adequately checked.

> These
> machines have limitations, this is what the
> mathematicians don't accept.
>
> Otherwise, mathematicians only say that it is a
> "verification".

You should say "some mathematicians" since they do not all agree with you!

> A proof is a sequence that *has* to be worked *only* with
> pencil and paper (and, of course, imagination :) ).
> Anything else is just a way to get insight to return to
> the problem and to continue it with your pencil and
> paper.

I find this opinion unrealistic, idealistic and out-of-date.
I know there are problems with computer proofs, particularly where floating
point computations are involved, so just to keep things simple, let's
leave floating point calculations out of the discussion and stick to
whole number calculations. (As far as I know, the proof of the four color
theorem does not involve any real number arithmetic - correct me somebody
if I am wrong.)
Why do you attach such huge merit to pencil and paper proofs, which are often
much more likely to contain errors than the same thing done with computational
aid? To take a silly example, what is 17263921 x 4628710?
The answer is 79909683771910 (done by a calculator). Presumably, you do
not regard this result of mine as having been proved, and if for some
reason you needed to use this in the course of a mathematical proof, you would
do it by hand, make a few mistakes, and when you finally got the same
answer as the calculator, you would decide that you had proved it!
The plain fact is that, for many types of routine calculations, you are much
more likely to obtain a correct result with computational aid, than you are
without it.

As I said in an earlier post, the only meaningful question that you can ask
about a proof is how likely is it that it is correct, and that is independent
of the method used.

Do you believe that 7^sqrt(8) > 8^sqrt(7)? If so, why?
A couple of very nice methods of proving this using only arithmetic that
can be done on paper and pencil have been posted, but despite that, I
personally still find the straightforward calculator "proof" the most
convincing, and the most likely to be free of error.


Derek Holt.

Daniel A. Asimov

unread,
Jun 10, 1996, 3:00:00 AM6/10/96
to

In article <199606081753...@babyblue.cs.yale.edu> rbr...@ime.usp.br writes:
>
>...the real answer can be hidden if

>you only rely on those machines.
>
>[...]

>
>A proof is a sequence that *has* to be worked *only* with
>pencil and paper (and, of course, imagination :) ).
>Anything else is just a way to get insight to return to
>the problem and to continue it with your pencil and
>paper.
----------------------------------------------------------

The arithmetical example (deleted for space reasons) shows only that
careless use of computer arithmetic can result in erroneous results.

But carelessness in any aspect of a putative mathematical proof can
equally lead to erroneous results.

Careful use of computer arithmetic often involves rigorously determining
error bounds on machine calculations. If the result, when the error bounds
are considered, is a useful component of a mathematical proof, then
IMHO the use of computers has not in any way invalidated the proof.

--Dan Asimov

schl...@bbs.cruzio.com

unread,
Jun 10, 1996, 3:00:00 AM6/10/96
to


>> From: rbr...@ime.usp.br (Rogerio Brito) -(1)--(1)


>> I'd say every calculator is unreliable if I can make up
>> numbers to which it can give me wrong answers. And if it
>> based on digital systems with limited precision, we can
>> *always* come with these numbers.

Nonsense. It is only your interpretation of them which is wrong.
I am sure you are also capable of misinterpreting a pencil and
paper argument.

Roger

Ronald Bruck

unread,
Jun 10, 1996, 3:00:00 AM6/10/96
to

In article <Dssv3...@cruzio.com>, schl...@bbs.cruzio.com wrote:

:>> From: rbr...@ime.usp.br (Rogerio Brito) -(1)--(1)

Funny. I would have called his statement almost a tautology.

schl...@bbs.cruzio.com

unread,
Jun 11, 1996, 3:00:00 AM6/11/96
to

In article <4pf9jp$f...@geraldo.cc.utexas.edu>, mle...@arthur.ma.utexas.edu (Miguel Lerma) writes:
> Dr D F Holt (ma...@csv.warwick.ac.uk) wrote:
> [...]
> > That is certainly a possible opinion, but I would not agree with it.
> > I would still call it proof. Since many reputable people seem to consider
> > that the 4-colour theorem has been proved, I am not alone in that opinion.
>
> I also think that the 4-color theorem has been proved. Whoever has
> doubts about it should read Appel & Haken's paper "The Four Colour
> Proof Suffices", The Mathematical Inteligencer, Vol.8 (1986), N.1,
> pp.10-20. In this case the computer was an aid to get a proof, but
> the authors gave many details of it, including hundred of pages with
> diagrams that anybody could study and verify. Some people have
> worked in verifying the proof (including verification by hand).

I think it would be a whole lot easier and convincing to verify
their computer proof than to manually verify those hundreds of pages
of diagrams.

Roger


>
> An opposite example is Wu-Yi Hsiang "proof" of Kepler Conjecture
> about the maximum density packing of spheres in R^3. Look at
> Tomas C. Hales' article "The Status of the Kepler Conjecture",
> The Mathematical Intelligencer, Vol.16 (1994), N.3, pp.47-58.
> A part of the criticism has to do with lack of details and
> "unproven" claims resting on computational analysis (see
> footnote 3 in that article).

If Hsiang had a computer proof for those details, maybe people
wouldn't be complaining. It is usually the pencil and paper
proofs that skip details, not the computer proofs.

Roger


Massimo Tagliavini

unread,
Jun 11, 1996, 3:00:00 AM6/11/96
to schl...@bbs.cruzio.com

schl...@bbs.cruzio.com wrote:

>
> In article <4ot369$d...@geraldo.cc.utexas.edu>, mle...@arthur.ma.utexas.edu (Miguel Lerma) writes:
> > schl...@bbs.cruzio.com wrote:
> > [...]
> > : I am not criticizing the problem. I am questioning why people are
> > : looking for difficult proofs when there is a trivial proof.
> >
> > I do not think that the calculator really provides a "proof" in
> > the mathematical sense. A "mathematical proof" should consist of
> > a sequence of logical steps that anybody could check. When a part
> > of the alleged proof relies on the output of a mechanism such as
> > a calculator, then it does not fulfill the requirements to be
> > properly considered as a mathematical proof - call it "empirical
> > evidence" if you want, but not "proof".
>
> My calculator uses a sequence of logical steps that anybody could
> check.
>
> > It remains to discuss why we are interested in "mathematical" proofs,
> > instead in the usual kind of "empirical" evidence that is considered
> > enough in everyday life. Several different answers are possible for
> > that question, one of them the weakeness and unreliablility of usual
> > empirical evidence, but a much simpler answer could be this one:
> > why not?
>
> Is your calculator unreliable?
>
> > : What is this an example of? That 2 numbers can be close
> > : without being equal?
> >

> > It is a good example of how the answer of a calculator can be
> > missleading.
>
> It is only misleading if you jump to a totally unwarranted conclusion.
> That is, if you think 2 numbers are equal just because they are
> close. Many analytical arguments are similarly misleading.
>
> Roger

Hey: did any of you notice that sqrt of 441 = 21 and sqrt of 144 = 12?

sqrt of 961 = 31 and sqrt of 169 = 13! I found a lot of these inverse
symmetries. Numbers being "infinite", are they infinite? Prove it!

Cheers

Massimo


Massimo Tagliavini

unread,
Jun 11, 1996, 3:00:00 AM6/11/96
to rbr...@ime.usp.br

Rogerio Brito wrote:
>
> schl...@bbs.cruzio.com wrote:
> >In article <4oqkfe$f...@mcmail.CIS.McMaster.CA>, kov...@mcmail.cis.McMaster.CA (Zdislav V. Kovarik) writes:
>
> (..)
>
> >> Since we are told (Lindemann's Theorem) that pi is transcendental, we
> >> deduce that pi cannot be a root of the polynomial p (->analytic argument).
> >> Yet, MATLAB shows that
> >> p(pi) is approximately 8.4 * 10^(-8) (compare with 23928), and
> >> if r is the positive root of p, we find that
> >> r - pi is approximately -2.8 * 10^(-12).
> >>
> >> These differences can be hidden from a user of an 11-digit calculator.
> >
> >What is this an example of? That 2 numbers can be close
> >without being equal?
>
> No. It only shows that the real answer can be hidden if
> you only rely on those machines. A (true) mathematical
> proof (frequently) doesn't rely on machines (except for
> very few cases, such as the Four Color Theorem, that many
> disagree that it has been *really* proved). These

> machines have limitations, this is what the
> mathematicians don't accept.
>
> Otherwise, mathematicians only say that it is a
> "verification".
>
> A proof is a sequence that *has* to be worked *only* with
> pencil and paper (and, of course, imagination :) ).
> Anything else is just a way to get insight to return to
> the problem and to continue it with your pencil and
> paper.
>
> >Roger
>
> []z, Roger...
>
> --
> ======================================================================
> Rogerio Brito - Computer Science Student - University of Sao Paulo
> e-mail: rbr...@ime.usp.br - home page: http://www.ime.usp.br/~rbrito
> "Windows? Linux and X!" - Member of Linux Users Group in Brazil
> ======================================================================

Brilliant Rogerio !

Teach them the ol' European way they seem to be losing.

Machines DO help for insight!

But proof has to be based on Formal Logic.

Recommended Readings:

1) Peano

2) Lord Bertie Russell: An Introduction to Mathematical Philosophy, or
better: from the above author and Alfred North Whitehead, "Principia
Mathematica"

Regards

Massimo Tagliavini


Miguel Lerma

unread,
Jun 12, 1996, 3:00:00 AM6/12/96
to

schl...@bbs.cruzio.com wrote:

: I think it would be a whole lot easier and convincing to verify


: their computer proof than to manually verify those hundreds of pages
: of diagrams.

How do you do that? If you can actually verify the computer proof
that would be fine, but if the proof does not give enough details
about how the results were reached (by the computer or whatever),
there is no way to verify it.

Of course, we can add evidence by repeating the computations
with another computer, different software, etc. But that is
not the same as "checking the proof".

: If Hsiang had a computer proof for those details, maybe people


: wouldn't be complaining. It is usually the pencil and paper
: proofs that skips details, not the computer proofs.

The point is not how the proof was obtained. A proof made with
a computer is fine if the output gives enough details to check
the steps, or if those details are added later by the author.
Otherwise it is not a proof, but a claim, empirical evidence
or whatever.

Incidentally, the criticism about Hsiang's proof is caused by
lack of details only in part. It seems that some claims included
in the "proof" are actually wrong.


Miguel A. Lerma


Gerry Myerson

unread,
Jun 13, 1996, 3:00:00 AM6/13/96
to

In article <bruck-10069...@max-10.pacificnet.net>,
br...@pacificnet.net (Ronald Bruck) wrote:
=>
=> Funny. I would have called his statement almost a tautology.

Er, how can a statement be almost a tautology? Isn't that like being
almost pregnant?

GM


Per Erik Manne

unread,
Jun 13, 1996, 3:00:00 AM6/13/96
to

Miguel Lerma wrote:

> schl...@bbs.cruzio.com wrote:
>
> : If Hsiang had a computer proof for those details, maybe people
> : wouldn't be complaining. It is usually the pencil and paper
> : proofs that skips details, not the computer proofs.
>
(...)

>
> Incidentally, the criticism about Hsiang's proof is caused by
> lack of details only in part. It seems that some claims included
> in the "proof" are actually wrong.
>
> Miguel A. Lerma

This is in fact what Thomas Hales claims in an article in the
Math.Intelligencer (no.3, 1994). Hsiang has answered this criticism
in a later issue (no.1, 1995). I haven't seen any response to
this answer. In this subject I'm sufficiently non-expert to be
easily convinced by the last speaker, whoever that may be.
Also, I'm not sufficiently interested to start wrestling with
Hsiang's proof (90+ pages), especially if it is in addition short
on detail. Therefore it would be nice if the critics would continue
this debate and state which parts of their criticism they uphold.

Per Manne

Massimo Tagliavini

unread,
Jun 13, 1996, 3:00:00 AM6/13/96
to Dr D F Holt

Dr D F Holt wrote:
>
> In article <199606081656...@babyblue.cs.yale.edu>,
> rbr...@ime.usp.br writes:
> > No. It only shows that the real answer can be hidden if
> > you only rely on those machines. A (true) mathematical
> > proof (frequently) doesn't rely on machines (except for
> > very few cases, such as the Four Color Theorem, that many
> > disagree that it has been *really* proved).
>
> It does not really matter whether they disagree or not -
> if the proof contains no error, then it is correct, and the theorem has been
> proved. Of course, it might have mistakes in it, and it might be a matter
> of opinion whether the proof has been adequately checked.
>
> > These
> > machines have limitations, this is what the
> > mathematicians don't accept.
> >
> > Otherwise, mathematicians only say that it is a
> > "verification".
>
> You should say "some mathematicians" since they do not all agree with you!
>
> > A proof is a sequence that *has* to be worked *only* with
> > pencil and paper (and, of course, imagination :) ).
> > Anything else is just a way to get insight to return to
> > the problem and to continue it with your pencil and
> > paper.
>

Dr. Derek

Are you serious? "Proof" by calculator? Please! I recommend "Introduction
to the Development of Logic", published by the Gulbenkian Foundation,
by Wilfred and Wilma Kaplan.

Also Aristotle: Categories.

Also, Eco on semantics.

Regards

Massimo


Daniel A. Asimov

unread,
Jun 13, 1996, 3:00:00 AM6/13/96
to

Here's a half-baked idea I had for a calculus proof that 7^sqrt(8) > 8^sqrt(7).

(This problem is really starting to intrigue me because it seems so simple,
but a clean and simple calculus proof has to date seemed rather elusive!)

Letting y stand for sqrt(56)/x, consider the function

f(x) = ln(x^y/y^x).

If we can show that f(sqrt(7)) > f(sqrt(8)), this would imply that

(sqrt(7)^sqrt(8))^2 > (sqrt(8)^sqrt(7))^2,

which is equivalent to

7^sqrt(8) > 8^sqrt(7).

When I graph f(x) using Mathematica, it is visually unequivocal that

(*) f'(x) < 0 for sqrt(7) <= x <= sqrt(8)

SO -- if someone can come up with a calculus proof of (*), that would, I think, complete a calculus proof of the original problem.

Interestingly, it's also visually clear that f'(x) changes sign somewhere
inside [sqrt(7)-.2, sqrt(7)] and also somewhere inside [sqrt(8), sqrt(8)+.2].
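
For anyone who wants to reproduce the picture without Mathematica, here is a
minimal numerical sketch (assuming Python; the finite-difference slopes are
only a spot check of (*), not a proof of it):

import math

def f(x):
    # f(x) = ln(x^y / y^x) = y*ln(x) - x*ln(y), with y = sqrt(56)/x
    y = math.sqrt(56) / x
    return y * math.log(x) - x * math.log(y)

lo, hi, h = math.sqrt(7), math.sqrt(8), 1e-6
for i in range(11):
    x = lo + (hi - lo) * i / 10.0
    slope = (f(x + h) - f(x - h)) / (2 * h)    # central difference
    print(x, f(x), slope)

# Every sampled slope comes out negative (consistent with (*)), and
# f(sqrt(7)) > 0, i.e. sqrt(7)^sqrt(8) > sqrt(8)^sqrt(7), which squares
# to 7^sqrt(8) > 8^sqrt(7).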

--Dan Asimov

Rogerio Brito

unread,
Jun 13, 1996, 3:00:00 AM6/13/96
to

ma...@csv.warwick.ac.uk (Dr D F Holt) wrote:
(...)

>whole number calculations. (As far as I know, the proof of the four color
>theorem does not involve any real number arithmetic - correct me somebody
>if I am wrong.)

I did not say that the proof of this theorem involved any
floating point arithmetic, nor did I disagree with the
fact that it has been proved. I only stated that there are
people who do not accept it.

>As I said in an earlier post, the only meaningful question that you can ask
>about a proof is how likely is it that it is correct, and that is independent
>of the method used.

What's a proof? I think it's a sequence of symbols.
What's the process of verifying a proof? Checking whether
that sequence is "valid" given some rules. If the method
you used gives me a correct finite sequence of symbols,
that's what we expected. But since that sequence is
finite, it has the property that it can be checked with
paper and pencil only.



>Do you believe that 7^sqrt(8) > 8^sqrt(7)? If so, why?
>A couple of very nice methods of proving this using only arithmetic that
>can be done on paper and pencil have been posted, but despite that, I
>personally still find the straightforward calculator "proof" the most
>convincing, and the most likely to be free of error.

I think this is turning out to be a flame war on your
part; anyway, let's continue: if I don't have a
justification for the method, I can't accept the result.
And saying "my calculator `said' this" isn't a strong
justification. If you can show me why you accepted
every step in the proof you did with your
calculator, then there's no problem with the proof (but I
think this will be longer than the answer the original
poster asked for, for example). But if you can't, well,
that is an answer as wrong as saying that 0 = 1
(when we are dealing with real numbers, let me state this
before someone comes up with some weird algebraic structure :) ).

Rogerio Brito

unread,
Jun 13, 1996, 3:00:00 AM6/13/96
to

schl...@bbs.cruzio.com wrote:
>
>
>>> From: rbr...@ime.usp.br (Rogerio Brito) -(1)--(1)
>>> I'd say every calculator is unreliable if I can make up
>>> numbers to which it can give me wrong answers. And if it
>>> based on digital systems with limited precision, we can
>>> *always* come with these numbers.
>
>Nonsense. It is only your interpretation of them which is wrong.

When it comes to "believe it or not", we can't argue about
who is "right or wrong".

>I am sure you are also capable of misinterpreting a pencil and
>paper argument also.

I don't think you've read my post in full, have you?
What about "and if it [is] based on digital systems
with limited precision, we can *always* come up with
these numbers"?

My point is that a proof that is based on some computer
has to be carefully verified, and that this is not easy
to do. As I've said before, computers can give
wrong answers (I'm only saying that computers can give
wrong answers, not that humans can't err, in case you
think that's what I'm saying); one example is the Pentium
floating point unit bug.

And whether or not interpretations are important, formal
proofs only have to use a finite set of allowed rules.
Nothing else.

Maybe I'm a formalist.

>Roger

Travis Kidd

unread,
Jun 13, 1996, 3:00:00 AM6/13/96
to

Gerry Myerson <ge...@mpce.mq.edu.au> writes:
>=> Funny. I would have called his statement almost a tautology.
>Er, how can a statement be almost a tautology? Isn't that like being
>almost pregnant?
Easy! True in all but trivial cases :-)

>GM
-Travis


J. B. Rainsberger

unread,
Jun 14, 1996, 3:00:00 AM6/14/96
to

In article <4ppcgn$d...@cnn.nas.nasa.gov>,

asi...@nas.nasa.gov (Daniel A. Asimov) wrote:
>Here's a half-baked idea I had for a calculus proof that 7^sqrt(8) > 8^sqrt(7).
>
>(This problem is really starting to intrigue me because it seems so simple,
>but a clean and simple calculus proof has to date seemed rather elusive!)

Can we not simply consider the functions f(x) = ln x and g(x) = sqrt(x)?

Now, f'(x) = 1/x, g'(x) = 1/[2 sqrt(x)]

x^2 - 4x = 0 <==> x(x - 4) = 0 <==> x is in {0, 4}. To the right of the
root 4, x^2 - 4x > 0 since the coefficient of x^2 is positive. Therefore, for
all x in [7, 8]:

x^2 - 4x > 0 <==> 4x < x^2 <==> 2 sqrt(x) < |x|
1/|x| < 1/[2 sqrt(x)]

but since we are considering only positive values for x, |x| = x and thus

1/x < 1/[2 sqrt(x)]

Thus, for all x in [7, 8], f'(x) < g'(x) and thus f increases more slowly
than g. This means that f(7)/f(8) > g(7)/g(8) or:

sqrt(7)/sqrt(8) < ln(7)/ln(8) <==> ln(8) sqrt(7) < ln(7) sqrt(8)
<==> 8^sqrt(7) < 7^sqrt(8)

QED.
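
For what it's worth, the two ratios compared in the last step are easy to
spot-check numerically (a Python sketch; it checks the numbers only, not the
argument that leads to them):

import math
print(math.log(7) / math.log(8))    # about 0.93578
print(math.sqrt(7) / math.sqrt(8))  # about 0.93541, so sqrt(7)/sqrt(8) < ln(7)/ln(8)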

J. B. Rainsberger, | Any six-year-old who knows the
York University | words nefarious and extemp-
(collège Glendon) | oraneous gets my vote.
t7...@inforamp.net | -- Joe on Calvin.

Ronald Bruck

unread,
Jun 14, 1996, 3:00:00 AM6/14/96
to

In article <4pqn39$2...@news.inforamp.net>, t7...@inforamp.net (J. B.
Rainsberger) wrote:

:
:Can we not simply consider the functions f(x) = ln x and g(x) = sqrt(x)?


:
:Now, f'(x) = 1/x, g'(x) = 1/[2 sqrt(x)]
:
:x^2 - 4x = 0 <==> x(x - 4) = 0 <==> x is in {0, 4}. To the right of the
:root 4, x^2 - 4x > 0 since the coefficient of x is positive. Therefore, for
:all x in [7, 8]:
:
:x^2 - 4x > 0 <==> 4x < x^2 <==> 2 sqrt(x) < |x|
:1/|x| < 1/[2 sqrt(x)]
:
:but since we are considering only positive values for x, |x| = x and thus
:
:1/x < 1/[2 sqrt(x)]
:
:Thus, for all x in [7, 8], f'(x) < g'(x) and thus f increases more slowly
:than g. This means that f(7)/f(8) > g(7)/g(8) or:
:
:sqrt(7)/sqrt(8) < ln(7)/ln(8) <==> ln(8) sqrt(7) < ln(7) sqrt(8)
:<==> 8^sqrt(7) < 7^sqrt(8)

:

Er, but the fact that f' < g', while it does imply f is increasing more
slowly than g, does NOT imply that f(7)/f(8) > g(7)/g(8). Just take f(x)
= x, g(x) = cx for some c > 1. If you insist on f(7)/f(8) < g(7)/g(8)
(instead of accepting equality as a counterexample), try f(x) = x, g(x) =
x + a - 1/(b x), and adjust the values of a and b so that f(x) < g(x) on
an interval containing [7,8]; it's automatic that f'(x) < g'(x) if b > 0;
yet f(7)/f(8) < g(7)/g(8). Let's see, if I've done my arithmetic right
you need only

ab > 15/56, and a, b > 0.
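
A concrete instance, for anyone who wants to see numbers (a Python sketch;
a = b = 1 is just one convenient choice with ab > 15/56):

a, b = 1.0, 1.0
f = lambda x: x
g = lambda x: x + a - 1.0 / (b * x)

# f'(x) = 1 < g'(x) = 1 + 1/(b*x**2) for all x > 0, and f < g on [7, 8],
# yet the ratios go the other way:
print(f(7.0) / f(8.0))   # 0.875
print(g(7.0) / g(8.0))   # about 0.8853, so f(7)/f(8) < g(7)/g(8)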

Daniel A. Asimov

unread,
Jun 14, 1996, 3:00:00 AM6/14/96
to

In article <4pqn39$2...@news.inforamp.net> it was written:

>
>Can we not simply consider the functions f(x) = ln x and g(x) = sqrt(x)?
>
> [...]
>
>To the right of the root 4, x^2 - 4x > 0 [and so] 1/x < 1/[2 sqrt(x)].

>
>Thus, for all x in [7, 8], f'(x) < g'(x) and thus f increases more slowly
>than g. This means that f(7)/f(8) > g(7)/g(8) or:
^^^^^^^^^^^^^^^^^^^^^

>
>sqrt(7)/sqrt(8) < ln(7)/ln(8) <==> ln(8) sqrt(7) < ln(7) sqrt(8)
><==> 8^sqrt(7) < 7^sqrt(8)
-------------------------------------------------------------------------

Although the highlighted inequality is true, I don't see how it follows
from the previous sentence.

And also, it appears to me that the same reasoning, if valid, would apply not
only to 7 and 8 but to any numbers x, y with 4 < x < y.

But it happens that 6^sqrt(9) = 216, while 9^sqrt(6) = 217.474...
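
A one-line check, assuming Python (the left-hand value 6^sqrt(9) = 6^3 = 216
needs no machine at all):

print(6 ** (9 ** 0.5), 9 ** (6 ** 0.5))   # prints 216.0 and roughly 217.47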

--Dan Asimov

Dr D F Holt

unread,
Jun 15, 1996, 3:00:00 AM6/15/96
to

In article <199606131621...@babyblue.cs.yale.edu>,

rbr...@ime.usp.br writes:
>ma...@csv.warwick.ac.uk (Dr D F Holt) wrote:
>(...)
>
>>Do you believe that 7^sqrt(8) > 8^sqrt(7)? If so, why?
>>A couple of very nice methods of proving this using only arithmetic that
>>can be done on paper and pencil have been posted, but despite that, I
>>personally still find the straightforward calculator "proof" the most
>>convincing, and the most likely to be free of error.
>
> I think this is turning out to be a flame war from your
> part;

That is not my intention. The only point I was trying to make was that, for
many types of routine mathematical calculation, you get the correct answer more
quickly by using a calculator or computer, and in practice it is much less
likely to be wrong than by doing an equivalent calculation on paper.
(But of course it might still be wrong, and you need to be aware of all of
the possible sources of error in a computer proof.) In my own research, I
use computers regularly to decide, for example, whether a given finitely
presented group is finite or infinite, and in many cases I would have no idea
how to resolve the question without using a computer. (A typical situation is
that you have a family of groups G(n), you prove them infinite for n>=N by
some theoretical method, and then you resolve the cases n<N by computer.
Of course this often encourages people to refine the theoretical approaches,
so that they can handle some of the smaller values without computer, and that
is a good thing. But that is because they are improving their understanding
of the mathematics - it is not because they are repeating lengthy mechanical
calculations by hand.)

Anyway, so the real question is, what is the value of a paper and pencil
proof, without mechanical assistance. What I do NOT believe is that a
paper and pencil proof gives you any more certainty that the result is correct,
and this is really the only point that I am arguing about.

There are several good reasons for looking for pencil and paper proofs
however. The best reason is that it often provides more insight into the
problem. But conversely, if it provides no new insight, but merely
amounts to carrying out several pages of routine manipulations then it
does not seem to me to serve much useful purpose.

Another reason might be that a paper and pencil proof could conceivably provide
a more lasting and reliable record for posterity. Perhaps one day we will
all be forced to rely on more primitive methods...

Derek Holt.

Miguel Lerma

unread,
Jun 16, 1996, 3:00:00 AM6/16/96
to

Per Erik Manne (p...@hamilton.nhh.no) wrote:

> Miguel Lerma wrote:
> > Incidentally, the criticism about Hsiang's proof is caused by
> > lack of details only in part. It seems that some claims included
> > in the "proof" are actually wrong.
> >
> > Miguel A. Lerma

> This is in fact what Thomas Hales claims in an article in the
> Math.Intelligencer (no.3, 1994). Hsiang has answered this criticism
> in a later issue (no.1, 1995). I haven't seen any response to
> this answer. In this subject I'm sufficiently non-expert to be

Right, Hsiang answered Hales' criticism. I am curious to see whether
that answer is satisfactory to J.H. Conway, T.C. Hales, D.J. Muder,
and N.J.A. Sloane, who expressed their negative opinion in a letter
to the editor in The Math. Intell. Vol.16 (1994) N.2. Anyway, we may
wonder to what extent a proof that is not convincing to (or understood
by) the leading experts in the matter can actually be considered
"a proof".

In spite of having expressed some skepticism about "computer proofs",
I can see that they may play a role in the future of Mathematics.
I agree to some extent with C.W.H. Lam's opinions in his article
"How Reliable Is a Computer-Based Proof?", The Math. Intell. Vol.12
(1990) N.1, pp.8-12. Lam gave a computer based proof that there do not
exist any finite projective planes of order 10. The proof used several
thousand hours of supercomputer time, and it seems to be impossible to
check in the traditional sense of the word. However it can be independently
verified. Lam also gives some ideas about how to increase confidence in
a computed result.


Miguel A. Lerma

Mike McCarty

unread,
Jun 17, 1996, 3:00:00 AM6/17/96
to

In article <4q266c$2...@geraldo.cc.utexas.edu>,
Miguel Lerma <mle...@arthur.ma.utexas.edu> wrote:

[stuff cut]

)In inspite that I have expressed some skepticism about "computer proofs",
)I can see that they may play a role in the future of Mathematics.
)I agree up to some extend with C.W.H. Lam's opinions in his article
)"How Reliable Is a Computer-Based Proof?", The Math. Intell. Vol.12
)(1990) N.1, pp.8-12. Lam gave a computer based proof that there do not
)exist any finite projective planes of order 10. The proof used several
)thousand hours of supercomputer time, and it seems to be impossible to
)check in the traditional sense of the word. However it can be independently
)verified. Lam also gives some ideas about how to increase confidence in
)a computed result.

I think that this completely misses the point. The purpose of a proof is
not merely to determine whether some proposition be true under certain
conditions, but rather to -understand- why it is true.

That's why so many "important" theorems are proven over and over using
different techniques. The idea is to understand all the connections
between the "fact" and the rest of mathematics.

A "computer" proof which comprises enumeration of millions of possible
cases, and testing them for validity does -not- increase our
understanding of mathematics, and is, therefore, useless. I forget who
said "The purpose of computing is understanding, not numbers." I
disagree with this (it is very useful to compute some numbers in order
to build a bridge or whatever), but I definitely believe that it is true
of proofs.

Mike
--
----
char *p="char *p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}

I don't speak for DSC. <- They make me say that.

David Shield

unread,
Jun 18, 1996, 3:00:00 AM6/18/96
to J. B. Rainsberger

J.B.Rainsberger wrote:
>
> Thus, for all x in [7, 8], f'(x) < g'(x) and thus f increases more slowly
> than g. This means that f(7)/f(8) > g(7)/g(8) or:

Counterexample: f(x) = (x-1)/10 , g(x) = x,
f'(x) = 1/10, g'(x) = 1,
f(7)/f(8) = 6/7, g(7)/g(8) = 7/8.
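
The same check takes two lines (a Python sketch, for the record):

f = lambda x: (x - 1) / 10.0
g = lambda x: x * 1.0
print(f(7) / f(8), g(7) / g(8))   # 0.857... versus 0.875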


>
> sqrt(7)/sqrt(8) < ln(7)/ln(8) <==> ln(8) sqrt(7) < ln(7) sqrt(8)
> <==> 8^sqrt(7) < 7^sqrt(8)
>

Miguel Lerma

unread,
Jun 20, 1996, 3:00:00 AM6/20/96
to

Mike McCarty (jmcc...@sun1307.spd.dsccc.com) wrote:

> I think that this completely misses the point. The purpose of a proof is
> not merely to determine whether some proposition be true under certain
> conditions, but rather to -understand- why it is true.

I disagree. The purpose of a proof is just to determine whether a proposition
is true. If it also helps to understand "why" it is true, then we call it
a "beautiful proof".


Miguel A. Lerma

