
1.499999999


Mrrob3rts

Jan 23, 2002, 5:46:18 PM
By definition, 1.49999..... equals 1.5, so when rounding 1.49999.... to 1 s.f.
shouldn't we round it up to 2? If the answer is indeed yes, then doesn't the way
we teach rounding appear wrong?

Robert Low

Jan 24, 2002, 4:33:22 AM

Well, 1.4999... isn't equal to 1.5 'by definition', it's a theorem.

Other than that, what about 'the way we teach rounding' is wrong?

If a.b is a decimal representation, then you round to the nearest
integer unless a.b is precisely half-way between two, and in that
case you round up (nowadays, though I was originally taught to
round to the nearest even integer in the case of such a tie).
Works for me.
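[Aside: both tie-breaking conventions Rob mentions survive today; Python's standard decimal module implements them as ROUND_HALF_UP and ROUND_HALF_EVEN. A minimal sketch, with sample values of my own choosing:]

```python
# Compare the two tie-breaking conventions on exact halves.
# decimal, ROUND_HALF_UP and ROUND_HALF_EVEN are standard-library names.
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

for s in ("0.5", "1.5", "2.5", "3.5"):
    d = Decimal(s)
    up = d.quantize(Decimal("1"), rounding=ROUND_HALF_UP)      # ties always up
    even = d.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)  # ties to nearest even
    print(s, up, even)
```

[Only exact halves differ; anything not precisely half-way rounds the same under both rules.]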

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Horst Kraemer

Jan 24, 2002, 6:32:07 AM
On Thu, 24 Jan 2002 09:33:22 +0000, Robert Low <mtx...@coventry.ac.uk>
wrote:

> On 23 Jan 2002, Mrrob3rts wrote:
>
> > By definition, 1.49999..... equals 1.5, so when rounding 1.49999.... to 1 s.f.
> > shouldn't we round it up to 2? If the answer is indeed yes, then doesn't the way
> > we teach rounding appear wrong?
>
> Well, 1.4999... isn't equal to 1.5 'by definition', it's a theorem.

I don't think so. By asking if 1.49999.... is equal to 1.5 or not you
are assuming that the symbols "1.49999...." and "1.5" represent
numbers. You aren't asking if the _symbols_ are equal or not - they
definitely aren't.

Thus you are assigning implicitly a number to the symbol
"1.4999..." if you are asking if 1.4999... is equal to 1.5.
This assignment is a definition rather than a theorem.

The implicit definition of the symbol

a.c1c2c3...

is the limit of the partial sums of a + c1/10 + c2/100 + c3/1000 + c4/10000 + ...

and the limit of the partial sums of

1 + 4/10 + 9/100 + 9/1000 + ....

_is_ 1.5 (if you take this fact as a theorem then everything is a
theorem and there are no definitions ;-)

Thus 1.4999... is 1.5 by definition of the meaning of the symbol
1.4999...
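[A numerical illustration of this definition, mine rather than Horst's, using Python's exact fractions module: the partial sums of 1 + 4/10 + 9/100 + 9/1000 + ... close in on 3/2, the gap shrinking by a factor of 10 at each step.]

```python
# Partial sums of 1 + 4/10 + 9/100 + 9/1000 + ... in exact arithmetic.
from fractions import Fraction

s = Fraction(1) + Fraction(4, 10)
for k in range(2, 12):
    s += Fraction(9, 10 ** k)      # append one more 9
    print(s, Fraction(3, 2) - s)   # the gap is exactly 10**(-k)
```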


Regards
Horst

Robert Low

Jan 24, 2002, 8:40:49 AM
On Thu, 24 Jan 2002, Horst Kraemer wrote:
> I don't think so. By asking if 1.49999.... is equal to 1.5 or not you
> are assuming that the symbols "1.49999...." and "1.5" represent
> numbers. You aren't asking if the _symbols_ are equal or not - they
> definitely aren't.

Agreed. I was assuming that 1.4999... was a shorthand for

1.4 + lim_{N -> infinity} (9*10^{-2} + 9*10^{-3} + ... + 9*10^{-N-2})

> and the limit of the partial sums of
>
> 1 + 4/10 + 9/100 + 9/1000 + ....
>
> _is_ 1.5

Agreed. But that is clearly a theorem, as
we have to prove that the above sequence converges
to that limit.

> (if you take this fact as a theorem then everything is a
> theorem and there are no definitions ;-)

I strongly disagree. To show that the limit is 1.5 you
have to do work. The standard version is first to
observe that the limit cannot be greater than 1.5, and
then to show that adding any number, however small, to
the sum, gives a result greater than 1.5, so that it
cannot be less than 1.5 either: thus we conclude that
it is equal to 1.5. I don't know any more immediate way
to show that 1.4999... is equal to 1.5, and I certainly
don't know what set of definitions you're using to make
it the definition of 1.5.
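[The two halves of that argument can be checked numerically; a sketch of mine in Python, using exact rationals: every partial sum stays strictly below 3/2, yet for any positive tolerance eps the partial sums eventually come within eps of 3/2.]

```python
# Squeeze argument in miniature: partial sums are bounded above by 3/2,
# and they exceed 3/2 - eps for every positive eps once n is large enough.
from fractions import Fraction

def partial_sum(n):
    """1.4 followed by n nines, as an exact rational."""
    return Fraction(14, 10) + sum(Fraction(9, 10 ** (k + 2)) for k in range(n))

assert all(partial_sum(n) < Fraction(3, 2) for n in range(50))  # bounded above

eps = Fraction(1, 10 ** 6)
n = 0
while Fraction(3, 2) - partial_sum(n) >= eps:
    n += 1
print(n)  # the first n whose partial sum is within eps of 3/2
```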

Maybe you could elucidate by writing down your definitions
for the real that makes the claim a definition?

An outline of the system I'm using is that the reals
are by definition a complete ordered field. 1 is the
multiplicative identity. 2 is shorthand for 1+1,
3 for 1+2, and so on. 1.5 is by definition 1+5/10.
1.4999... is as stated above. Since the sequence of
partial sums is monotone increasing and bounded above,
the limit exists. But that this limit is equal to
1.5 is a theorem.

What axiomatization are you using?
---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Gazza

Jan 24, 2002, 9:24:35 AM

"Robert Low" <mtx...@coventry.ac.uk> wrote in message
news:Pine.LNX.4.33.020124...@alfgar.coventry.ac.uk...

>I don't know any more immediate way
> to show that 1.4999... is equal to 1.5

I think the argument the OP was using was the following:

Let x = 1.4999999...
then 10x = 14.99999...
Subtracting upwards gives 9x = 13.5
Solving for x gives x = 1.5; hence 1.4999999... = 1.5
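[The manipulation can be cross-checked in exact rational arithmetic; a sketch of mine: evaluating the repeating tail as the geometric series 9/100 + 9/1000 + ..., which sums to (9/100)/(1 - 1/10) = 1/10, gives x = 3/2 exactly, and then 10x - x is indeed 13.5.]

```python
# Evaluate 1.4999... via its geometric tail, then check the 10x - x step.
from fractions import Fraction

tail = Fraction(9, 100) / (1 - Fraction(1, 10))  # 9/100 + 9/1000 + ... = 1/10
x = Fraction(14, 10) + tail
print(x, 10 * x - x)  # 3/2 and 27/2 (i.e. 13.5)
```

[Of course, summing the geometric series in closed form quietly uses the very limit theorems the rest of the thread argues about.]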

Gazza


Robert Low

Jan 24, 2002, 12:17:58 PM

Even that is (a sketch of) a proof; it certainly isn't
a definition.

But in any case, it is incomplete: to fill in the detail
(where the devil resides) you need an algorithm for
subtracting non-terminating decimals. Either that, or
you need to show that if a_n and b_n are two Cauchy
sequences, then lim(a_n - b_n) = lim(a_n) - lim(b_n).
It isn't a huge amount of work, but it's no less than
what I outlined.

In either case, 1.4999... = 1.5 is a theorem, not
a definition.

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Dr A. N. Walker

Jan 24, 2002, 12:07:36 PM
In article <a2p5gh$70r$1...@newsg4.svr.pol.co.uk>,

Gazza <ga...@garyjones.co.uk.invalid> wrote:
>Let x = 1.4999999...
>then 10x = 14.99999...
>Subtracting upwards gives 9x = 13.5

Would you care to justify either of these steps [without
appealing to theorems of analysis]? Note that both multiplication
and subtraction are normally, for finite numbers of digits, done
from the RH end, which is here rather a long way down the line ....
Would you be just as happy with "2x = 2.999999..., so x = 1.5"?
Or with "7x = 10.4999999..., so 6x = 9"? Or with "Let y = 1.5,
then y/7 = 0.2142857142857..., so y/7*7 = 1.4999999... = y = 1.5"?

There are good reasons why reputable texts use theorems about
limits and not optically-attractive proofs like the above.

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.
a...@maths.nott.ac.uk

Clark

Jan 24, 2002, 1:11:00 PM

"Dr A. N. Walker" wrote:
>
> In article <a2p5gh$70r$1...@newsg4.svr.pol.co.uk>,
> Gazza <ga...@garyjones.co.uk.invalid> wrote:
> >Let x = 1.4999999...
> >then 10x = 14.99999...
> >Subtracting upwards gives 9x = 13.5
>
> Would you care to justify either of these steps [without
> appealing to theorems of analysis]?

What about

step 1: multiplying by 10 in the decimal system is effected by moving
every digit one place left while keeping the decimal marker fixed
step 2: subtracting equals from equals gives equals

Note that we only need to start subtracting from the right in case of
'carries' ... none of those here. And, sure, infinite lists are sometimes
tricky to work with, but it's plausible at least that if you just stick
to subtracting 9 from 9 you'll always get 0.

> Note that both multiplication
> and subtraction are normally, for finite numbers of digits, done
> from the RH end, which is here rather a long way down the line ....
> Would you be just as happy with "2x = 2.999999..., so x = 1.5"?
> Or with "7x = 10.4999999..., so 6x = 9"? Or with "Let y = 1.5,
> then y/7 = 0.2142857142857..., so y/7*7 = 1.4999999... = y = 1.5"?
>
> There are good reasons why reputable texts use theorems about
> limits and not optically-attractive proofs like the above.


Well, yes, but ... it depends on the level you're working at and what
you think a proof is, doesn't it? Think of a fifteen-year-old who hasn't
studied limits in any formal way yet (but who is beginning to think
about such things). Doesn't it make lots of sense to talk with such a
person about the _proof_ above that doesn't use an axiom system for the
reals, or formally develop Cauchy sequences and theorems about limits
and so on? If we don't allow ourselves to talk about 'proof' until we've
covered all possibilities, we'll have no proofs at all (a point Lakatos
long ago made convincingly, no?).

Guarded tolerance of forms of proof like this seems to me to make sense
as an attitude for schoolteachers (like me) to take. Do you disagree?

So ... maybe there are sometimes good reasons for accepting 'optically
attractive' proofs at face value.

[Agreed, it's a proof, not a definition. But it is a proof.]

Bob


Dr A. N. Walker

Jan 24, 2002, 3:33:44 PM
In article <3C504F13...@brutele.be>, Clark <cl...@brutele.be> wrote:
>> >Let x = 1.4999999...
>> >then 10x = 14.99999...
>> >Subtracting upwards gives 9x = 13.5
>> Would you care to justify either of these steps [without
>> appealing to theorems of analysis]?

>What about
>step 1: multiplying by 10 in the decimal system is effected by moving
>every digit one place left while keeping the decimal marker fixed

OK for finite digit strings, gets a bit murkier with infinite.
Especially as it's [in some loose sense] what's happening right down
at the far end that matters in this problem.

>step 2: subtracting equals from equals gives equals

No problem with that, but with the actual subtraction process.

>Note that we only need to start subtracting from the right in case of
>'carries' ... none of those here.

How do you know there are no carries [borrows]? Again, this
process depends crucially on what is happening at the far end. If
you subtract 1.49999999999999999999999999999999fuzzfuzzfuzz from
14.9999999999999999999999999999999fuzzfuzzfuzz [trying to avoid the
use of "..."!] you can't tell whether there is a borrow until you can
look at "fuzz". In this case, you can therefore never tell.

> And, sure, infinite lists are sometimes
>tricky to work with, but it's plausible at least that if you just stick
>to subtracting 9 from 9 you'll always get 0.

"Yes", except that when you multiplied by 10, you reduced the
number of 9's after the point by 1, so down at the far end something
different [arguably] happens.

>Well, yes, but ... it depends on the level you're working at and what
>you think a proof is, doesn't it? Think of a fifteen-year-old who hasn't
>studied limits in any formal way yet (but who is beginning to think
>about such things).

OK, such pupils need encouragement, but also warnings. Any
worries about 1.4999... presumably come after they've learned about
recurring decimals, so they should know how to convert them into
rationals -- which, lo and behold, gives the right answer [and the
analysis is disguised elsewhere].

> Doesn't it make lots of sense to talk with such a
>person about the _proof_

Well, I should have put "proof" in quotes, for the above is
*not* a proof, see below.

> above that doesn't use an axiom system for the
>reals, or formally develop Cauchy sequences and theorems about limits
>and so on? If we don't allow ourselves to talk about 'proof' until we've
>covered all possibilities, we'll have no proofs at all (a point Lakatos
>long ago made convincingly, no?).

Cauchy and limits can wait. I'm not sure that axioms for the
reals can. Just like Euclid's parallel axiom, there is one of the
axioms for the reals that is only partially intuitive -- the one that
says that there are no infinitesimals. We're somewhat brainwashed
into that, as most of us never learn about any other possibility.
But it is possible to set up number-like systems [non-standard
analysis, the surreals] in which there are indeed infinitesimals.
In such systems, it remains true that 1.4999... == 1.5 *if* the LHS
"means" some limit. But it is also true that there are numbers x
such that 1.4999... < x < 1.5 for *any* *finite* number of 9's on
the LHS [x is infinitesimally less than 1.5]. [And in such systems,
the very notion of limits becomes more problematic.]

It follows that every "optical" "proof" is wrong, for it does
not use the no-infinitesimals axiom, and therefore proves something that
is not true in other systems satisfying the remaining axioms.

>Guarded tolerance of forms of proof like this seems to me to make sense
>as an attitude for schoolteachers (like me) to take. Do you disagree?

Not at all. You have to judge your audience. But (a) it
concerns me that far too many maths teachers, esp but not only those
who were drafted in from biology, PE, whatever, are not themselves
aware of the intricacies of these apparently simple results [not
just in number systems, there are comparable pitfalls in statistics,
mechanics, geometry, calculus, ...]; and (b) I suspect that the
bright 15yo who is fascinated by 1.4999... would be even more
fascinated by a decent discussion of infinitesimals, infinities,
and related topics [such as games].

>So ... maybe there are sometimes good reasons for accepting 'optically
>attractive' proofs at face value.

Yes, we all have to teach engineers ...!

>[Agreed, it's a proof, not a definition. But it is a proof.]

But a *fallacious* "proof".

You might do better, BTW, with the following: "If 1.4999...
< 1.5, then there must be a number between them; what do you think
that number might be?" The usual answer [even from maths u/gs] is
"1.4999...5". *Then* you can start to explore what 1.4999...5 might
or might not mean. But I don't think you can deal with it properly
unless you're prepared to discuss infinitesimals, at some level.

Chris Holford

Jan 24, 2002, 5:25:30 PM
In article <Pine.LNX.4.33.020124...@alfgar.coventry.ac.uk>,
Robert Low <mtx...@coventry.ac.uk> writes
snip

>If a.b is a decimal representation, then you round to the nearest
>integer unless a.b is precisely half-way between two, and in that
>case you round up (nowadays, though I was originally taught to
>round to the nearest even integer in the case of such a tie).
Is there any reason to adopt the convention of rounding up for a.5?
I usually go for the 'nearest even number' rule, so that in a long list of
numbers (experimental results, for example) some numbers increase and
some decrease, and thus rounding errors tend to cancel out.
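[The cancellation effect is easy to see in a sketch (mine, with made-up data): round a list of exact halves both ways and compare the totals.]

```python
# Round a run of exact .5 ties under both conventions and total them up.
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

data = [Decimal(n) / 2 for n in range(1, 21, 2)]  # 0.5, 1.5, ..., 9.5
up = sum(d.quantize(Decimal("1"), rounding=ROUND_HALF_UP) for d in data)
even = sum(d.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN) for d in data)
print(sum(data), up, even)  # true total 50; round-up drifts to 55, round-even stays at 50
```

[With real data the exact ties are rarer, so both totals drift less, but the direction of the round-up bias is the same.]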
--
Chris Holford

Robert Low

Jan 24, 2002, 5:29:14 PM
On 24 Jan 2002, Dr A. N. Walker wrote:

> In article <a2p5gh$70r$1...@newsg4.svr.pol.co.uk>,
> Gazza <ga...@garyjones.co.uk.invalid> wrote:
> >Let x = 1.4999999...
> >then 10x = 14.99999...
> >Subtracting upwards gives 9x = 13.5
>
> Would you care to justify either of these steps [without
> appealing to theorems of analysis]? Note that both multiplication

I thought this thread might suck you in :-)

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Robert Low

Jan 24, 2002, 5:41:25 PM
On Thu, 24 Jan 2002, Chris Holford wrote:
> Is there any reason to adopt the convention of rounding up for a.5 ?

I don't know. Just, some time ago, I noticed that my students
(undergrad) all arrived pre-programmed with the 'round-up' rule.

> I usually go for the 'nearest even number' rule so that in a long list of
> numbers (experimental results for example) some numbers increase and
> some decrease and thus rounding errors tend to cancel out.

That's what (and why) I learned. Still seems more sensible
to me. If anybody has a good reason for 'round up', I'd like
to hear it too.

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Gazza

Jan 24, 2002, 7:35:37 PM

"Robert Low" <mtx...@coventry.ac.uk> wrote in message news:Pine.LNX.4.33.020124...@alfgar.coventry.ac.uk...

I was taught to round up if the (n+1)th digit was a 5. (Am now 21, to give
you a time scale.) I think the reason was that there was then a balance
between the digits that round up and those that round down:
i.e. instead of the unbalanced:
UP: 6, 7, 8, 9 [4 digits - potentially 5 digits]
DOWN: 0, 1, 2, 3, 4 (though obviously the 0 is the ambiguous case) [5 digits - potentially 6 digits]

...with a 5 being dependent on the unit number.

So would you round 2.59834 to 2 (1 s.f.)? (Or have I misunderstood your
teaching?) If you only look at the (n+1)th digit after the significant
figure, a 5, then you'd round to the nearest even number, 2...?

Gazza


Virgil

Jan 24, 2002, 9:28:37 PM
In article <a2q9a9$d49$1...@newsg1.svr.pol.co.uk>,
"Gazza" <ga...@garyjones.co.uk.invalid> wrote:

If what is to be cut off is a 5 plus anything (i.e. anything larger
than just a bare 5) you should round up. If it is less than 5 you
should round down.

If it is exactly 5, there are at least 2 different rules:

The common rule always rounds 5s up. This makes for simplicity: you
only have to look one digit beyond the cut-off point.

The statistical rule rounds exact 5s to the nearest even digit in
the next more significant place. This method avoids "systematic
bias" in rounding off.

Robert Low

Jan 25, 2002, 5:54:39 AM
On Fri, 25 Jan 2002, Gazza wrote:
> "Robert Low" <mtx...@coventry.ac.uk> wrote in message news:Pine.LNX.4.33.020124...@alfgar.coventry.ac.uk...
> > On Thu, 24 Jan 2002, Chris Holford wrote:
> > > Is there any reason to adopt the convention of rounding up for a.5 ?
> So would you round 2.59834 to 2 (1s.f) ? (or have I mis-understood your teaching?) If you only look at the nth+1 digit to the sig
> figure, a 5, then you'd round to the nearest even number, 2....?

I'd round 2.59834 to 3 because that's the nearest integer; it's
only in the case where there is no nearest integer (or rather,
two of 'em) that we need a rule, i.e. when the 5 is the only
digit after the decimal point. (Similar rules go for rounding
to some number of decimal places...)

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Gazza

Jan 25, 2002, 8:03:53 AM

"Robert Low" <mtx...@coventry.ac.uk> wrote in message
news:Pine.LNX.4.33.020125...@alfgar.coventry.ac.uk...

As I would have expected, only it didn't come across like that! :) Was just checking,
as I'd not heard about this statistical rounding.

Ta
Gazza


Mark Thakkar

Jan 25, 2002, 8:16:39 AM
Rob,

>> Is there any reason to adopt the convention of rounding up for a.5 ?
>
> I don't know. Just, some time ago, I noticed that my students
> (undergrad) all arrived pre-programmed with the 'round-up' rule.
>
>> I usually go for the 'nearest even number' rule so that in a long
>> list of numbers (experimental results for example) some numbers
>> increase and some decrease and thus rounding errors tend to cancel
>> out.
>
> That's what (and why) I learned. Still seems more sensible to me.
> If anybody has a good reason for 'round up', I'd like to hear it too.

As someone who's been pre-programmed with the 'round-up rule' but
doesn't like applying it, I find your rule more sensible. (Not that
I'd like having to apply your version either, but of the two...)

Mark.

Robert Low

Jan 25, 2002, 9:12:30 AM
On Fri, 25 Jan 2002, Gazza wrote:
> "Robert Low" <mtx...@coventry.ac.uk> wrote in message
> > I'd round 2.59834 to 3 because that's the nearest integer; it's
> As I would have expected, only it didn't come across like that! :)


OK: I'll try to be less ambiguous in future.

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Chris Holford

Jan 25, 2002, 7:01:00 PM
In article <MPG.16bb6b10d...@News.CIS.DFN.DE>, Mark Thakkar
<mark.t...@balliol.ox.ac.uk> writes
snip

>>> I usually go for the 'nearest even number' rule so that in a long
>>> list of numbers (experimental results for example) some numbers
>>> increase and some decrease and thus rounding errors tend to cancel
>>> out.
>>
>> That's what (and why) I learned. Still seems more sensible to me.
>> If anybody has a good reason for 'round up', I'd like to hear it too.
>
>As someone who's been pre-programmed with the 'round-up rule', but
>doesn't like applying it, your rule seems more sensible. (Not that
>I'd like having to apply your version either, but of the two...)
>
>Mark.
Now for the 2^5 dollar question......
Is the 'nearest even number' rule (as developed in this thread)
acceptable to examiners? I'd hate to tell the GCSE students I coach
something that would lose them marks.
--
Chris Holford

Clark

Jan 26, 2002, 10:53:51 AM
I disagree with you about some of this ...

"Dr A. N. Walker" wrote:

> In article <3C504F13...@brutele.be>, Clark <cl...@brutele.be> wrote:
> >> >Let x = 1.4999999...
> >> >then 10x = 14.99999...
> >> >Subtracting upwards gives 9x = 13.5
> >> Would you care to justify either of these steps [without
> >> appealing to theorems of analysis]?
> >What about
> >step 1: multiplying by 10 in the decimal system is effected by moving
> >every digit one place left while keeping the decimal marker fixed
>

> OK for finite digit strings, gets a bit murkier with infinite.
> Especially as it's [in some loose sense] what's happening right down
> at the far end that matters in this problem.

But there is no 'far end'. That's why the principle I gave still works
for repeating decimals. It's true (isn't it?) that
for any decimal digit a, 10*0.aaa... = a.aaa...
(with the usual meaning of '...'; see below.)
- That seems reasonably describable by talking of moving every digit one
place left, no?


>
> >step 2: subtracting equals from equals gives equals
>

> No problem with that, but with the actual subtraction process.
>

> >Note that we only need to start subtracting from the right in case of
> >'carries' ... none of those here.
>

> How do you know there are no carries [borrows?].

OK, 'borrows'. We know there aren't any because it's 9's all the way.
_That's_ a matter of definition: '1.4999...' means 'the number written
in decimal with 1 in the units place, 4 in the tenths place, and 9 in
every place after that'.

> Again, this
> process depends crucially on what is happening at the far end. If
> you subtract 1,49999999999999999999999999999999fuzzfuzzfuzz from
> 14.9999999999999999999999999999999fuzzfuzzfuzz [trying to avoid the
> use of "..."!] you can't tell whether there is a borrow until you can
> look at "fuzz". In this case, you can therefore never tell.

Once again, there is no 'far end'. Your 'fuzzfuzzfuzz' is misleading.
There's no fuzziness here at all. Every digit after the ones you've
written is a 9. No?

>
> > And, sure, infinite lists are sometimes
> >tricky to work with, but it's plausible at least that if you just stick
> >to subtracting 9 from 9 you'll always get 0.
>

> "Yes", except that when you multiplied by 10, you reduced the
> number of 9's after the point by 1

Nope. That's one of the tricky things about infinite lists. That would
be true if there _were_ a far end, because in that case the list would
be finite. For infinite lists like this, though, moving one place left
relative to the point is like taking one away: it leaves you with the
same number you started with after the point. (Aleph-null in this case.)

> , so down at the far end something
> different [arguably] happens.

Once again, what's this 'far end'? There isn't one. (I suppose you could
argue that there _is_ an end in something like the sense in which
infinite ordinals come after 'the end' of the finite ordinals. You want
to do that? It seems to me very much not the right sense for what we're
talking about just here.)

>
> >Well, yes, but ... it depends on the level you're working at and what
> >you think a proof is, doesn't it? Think of a fifteen-year-old who hasn't
> >studied limits in any formal way yet (but who is beginning to think
> >about such things).
>

> OK, such pupils need encouragement, but also warnings. Any
> worries about 1.4999... presumably come after they've learned about
> recurring decimals, so they should know how to convert them into
> rationals -- which, lo and behold, gives the right answer [and the
> analysis is disguised elsewhere].
>

> > Doesn't it make lots of sense to talk with such a
> >person about the _proof_
>

> Well, I should have put "proof" in quotes, for the above is
> *not* a proof, see below.
>

> > above that doesn't use an axiom system for the
> >reals, or formally develop Cauchy sequences and theorems about limits
> >and so on? If we don't allow ourselves to talk about 'proof' until we've
> >covered all possibilities, we'll have no proofs at all (a point Lakatos
> >long ago made convincingly, no?).
>

> Cauchy and limits can wait. I'm not sure that axioms for the
> reals can. Just like Euclid's parallel axiom, there is one of the
> axioms for the reals that is only partially intuitive -- the one that
> says that there are no infinitesimals. We're somewhat brainwashed
> into that, as most of us never learn about any other possibility.
> But it is possible to set up number-like systems [non-standard
> analysis, the surreals] in which there are indeed infinitesimals.
> In such systems, it remains true that 1.4999... == 1.5 *if* the LHS
> "means" some limit. But it is also true that there are numbers x
> such that 1.4999... < x < 1.5 for *any* *finite* number of 9's on
> the LHS [x is infinitesimally less than 1.5]. [And in such systems,
> the very notion of limits becomes more problematic.]
>
> It follows that every "optical" "proof" is wrong, for it does
> not use the infinitesimal axiom, and therefore proves something that
> is not true in other systems that follow the same axioms.
>

> >Guarded tolerance of forms of proof like this seems to me to make sense
> >as an attitude for schoolteachers (like me) to take. Do you disagree?
>

> Not at all, You have to judge your audience. But (a) it
> concerns me that far too many maths teachers, esp but not only those
> who were drafted in from biology, PE, whatever, are not themselves
> aware of the intricacies of these apparently simple results [not
> just in number systems, there are comparable pitfalls in statistics,
> mechanics, geometry, calculus, ...] and (b) I suspect that the
> bright 15yo who is fascinated by 1,4999... would be even more
> fascinated by a decent discussion of infinitesimals, infinities,
> and related topics [such as games].
>

> >So ... maybe there are sometimes good reasons for accepting 'optically
> >attractive' proofs at face value.
>

> Yes, we all have to teach engineers ...!

I think that's unfair to school maths teachers. If we're to emphasise
the importance of proof, we have to give _proofs_ (no scare-quotes, by
design!) that we know are going to be 'rigourised' in certain
directions, for certain of our students, later on. And there's nothing
wrong with that. What counts as a proof is relative to the audience,
whether that relativity be over time or just over level of mathematical
sophistication.

>
> >[Agreed, it's a proof, not a definition. But it is a proof.]
>

> But a *fallacious* "proof".

I don't think it is. Even your argument, at best, only supports it being
an enthymeme (it's missing some premises). And even that seems arguable.
Take the Lakatos point I mentioned. Are you really prepared to say that
Euclid's proof of, say, Pythagoras' theorem was fallacious because Euclid
wasn't Hilbert? Seems a bit extreme.

>
> You might do better, BTW, with the following: "If 1,4999...
> < 1.5, then there must be a number between them; what do you think
> that number might be?" The usual answer [even from maths u/gs] is
> "1.4999...5". *Then* you can start to explore what 1.4999...5 might
> or might not mean. But I don't think you can deal with it properly
> unless you're prepared to discuss infinitesimals, at some level.

I agree with that. As it happens (and I know this is probably rare for
colleauges in UK), many of my maths students also study philosophy with
me. When they've read some Leibniz (and of course Berkeley's attack on
him), of course it makes sense to talk about infinitesimals in maths too
... and to speculate, for instance, about whether, if Abraham Robinson
had been born 100 years earlier, we'd still think that every set of
reals bounded above has a lub and so on. (Of course that would have
necessitated Frege being born earlier, and, then, well, in a way his
development of logic (required for the development of non-standard
analysis) arose just from the problems in making general second order
epsilon-delta type statements, which themselves arose from traditional
analysis ...)

That's all very interesting stuff, I don't deny. I just don't think it
supports your contention that simple proofs like the one in question are
fallacious.

Bob

Mark Thakkar

Jan 26, 2002, 11:27:35 AM
Clark,

> If we're to emphasise the importance of proof, we have to give
> _proofs_ (no scare-quotes, by design!) that we know are going to be
> 'rigourised' in certain directions, for certain of our students,
> later on. And there's nothing wrong with that. What counts as a
> proof is relative to the audience, whether that relativity be over
> time or just over level of mathematical sophistication. [...]


> Take the Lakatos point I mentioned. Are you really prepared to say
> that Euclid's proof of, say Pythagoras' theorem was fallacious
> because Euclid wasn't Hilbert? Seems a bit extreme.

Mmm, Lakatos. Interesting stuff. Most other philosophers of maths
spend their time talking about maths in the abstract, with no regard
for how maths is /done/ (with the implicit assumption that this is
unimportant).

> many of my maths students also study philosophy with me. When
> they've read some Leibniz (and of course Berkeley's attack on him),
> of course it makes sense to talk about infinitesimals in maths too
> ...

Ian Stewart has an interesting-looking article on non-standard
analysis in the current "New Scientist". (It wasn't long enough to
justify the cover price, so I've not read it - anyone else?)

Mark.

Darrell

Jan 26, 2002, 6:01:44 PM
"Clark" <cl...@brutele.be> wrote in message
news:3C52D10F...@brutele.be...

> I disagree with you about some of this ...
>
> "Dr A. N. Walker" wrote:
>
> > In article <3C504F13...@brutele.be>, Clark <cl...@brutele.be>
wrote:
> > >> >Let x = 1.4999999...
> > >> >then 10x = 14.99999...
> > >> >Subtracting upwards gives 9x = 13.5
> > >> Would you care to justify either of these steps [without
> > >> appealing to theorems of analysis]?
> > >What about
> > >step 1: multiplying by 10 in the decimal system is effected by moving
> > >every digit one place left while keeping the decimal marker fixed
> >
> > OK for finite digit strings, gets a bit murkier with infinite.
> > Especially as it's [in some loose sense] what's happening right down
> > at the far end that matters in this problem.
>
> But there is no 'far end'. That's why the principle I gave still works
> for repeating decimals. It's true (isn't it?) that
> for any decimal digit a, 10*0.aaa... = a.aaa...
> (with the usual meaning of '...' See below.)
> - That seems reasonably describable by talking of moving every digit one
> place left, no?

Yes, it turns out this is correct: 10*0.aaa... = a.aaa... But recall you
also said "It's true (isn't it?)", which raises my point. Moving the
decimal point over one place to the right *is* effectively multiplying
by 10, but this is by definition of the decimal number system and place
value for *finite* strings. We can't just blindly assume the pattern
will hold for repeating decimals because "that's the way it works for
terminating ones." In order to *know* it's true, it needs to be proven,
which has yet to be done. It boils down to defining what a repeating
decimal *is.* The "visual" argument that goes like:


Let x = 1.4999999...
then 10x = 14.999...

assuming no prior knowledge of limits, infinite series, or the like, is
making a rather bold assumption: specifically, that moving the decimal
point around has the same effect for expressions of infinitely many
digits as it does for finitely many. As it turns out, of course, the
assumption is correct, but I believe the point being made is that this
is at best an informal argument, not a proof of its validity, and
proving its validity requires theorems of analysis.

Early on, we were taught things like:

123.678

means...

(1*10^2)+(2*10^1)+(3*10^0)+(6*10^-1)+(7*10^-2)+(8*10^-3)


...and we generalized by talking about the 10^+n or 10^-n place values. But
we did not talk about what happens when n is *infinite.* Only after some
discussion at an appropriate time (i.e. limits, the sum of an infinite
series, etc.) about what the very *meaning* of an infinite "n" is in this
context, can we attempt to define what 1.499... even *is*... much less
assume what its product with 10 is, etc. I don't see how this can be
accomplished without introducing at minimum *some* notion of limit,
infinitesimal, etc., be it formal or informal, because without doing so
we would still be adding terms to a sum that never ends, so we would
never know what its "sum" really is (we never finish the process), much
less what its product with 10 is. So, in some _loose sense_ we need to
discuss what happens down at the very "end" of 1.499... (again, we're
talking loosely here.) Of course, only once this discussion is complete
and we have developed the necessary machinery to *prove* that
1.499...*10 = 14.99... (or better yet, the general case) can we take it
to be true. The above "proof" makes that assumption without offering or
referencing proof of its validity. Apparently, the proof of its
validity is supposed to be self-evident based on similar patterns when
performing similar operations on terminating decimals.
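The place-value expansion above is a *finite* sum, so it can be checked
mechanically with exact rational arithmetic. A minimal sketch (the
helper name `place_value` is illustrative, not anything from the
thread):

```python
from fractions import Fraction

def place_value(digits, point):
    # Value of a *finite* digit string with the decimal marker after
    # `point` digits: the sum of digit * 10^(place) over all places.
    return sum(Fraction(int(d)) * Fraction(10) ** (point - 1 - i)
               for i, d in enumerate(digits))

# "123678" with the marker after 3 digits represents 123.678, i.e.
# (1*10^2)+(2*10^1)+(3*10^0)+(6*10^-1)+(7*10^-2)+(8*10^-3).
print(place_value("123678", 3) == Fraction(123678, 1000))  # True
```

Nothing here extends to infinitely many digits: the `sum` terminates
because the string does, which is exactly the point being made.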

>
>
> >
> > >step 2: subtracting equals from equals gives equals
> >
> > No problem with that, but with the actual subtraction process.

Similarly, I believe the point is this "actual subtraction process" is being
assumed to be valid without proof, and that's a fallacy. Again, turns out
it is true, but at this point it is just an assumption based on similar
results when performing arithmetic on terminating decimals. We can't just
say blindly, out of the blue, something like, "that's the way it works here
because that's the way it works for terminating decimals."


> >
> > >Note that we only need start subtracting from the right in case of
> > >'carrys' ... none of those here.
> >
> > How do you know there are no carries [borrows?].
>
> OK, 'borrows'. We know there aren't any because it's 9's all the way.
> _That's_ a matter of definition: '1.4999...' means 'the number written
> in decimal with one in the units place, 4 in the tenths place, and 9 in
> every place after that'.

But we have yet to give sufficient mathematical meaning to the phrase "every
place after that."

>
> > Again, this
> > process depends crucially on what is happening at the far end. If
> > you subtract 1,49999999999999999999999999999999fuzzfuzzfuzz from
> > 14.9999999999999999999999999999999fuzzfuzzfuzz [trying to avoid the
> > use of "..."!] you can't tell whether there is a borrow until you can
> > look at "fuzz". In this case, you can therefore never tell.
>
> Once again, there is no 'far end'. Your 'fuzzfuzzfuzz' is misleading.
> There's no fuzzyness here at all. Every digit after the ones you've
> written is a 9. No?

Again, I believe the point is, at this juncture, we really *are*
talking about something fuzzy, because we have yet to give sufficient
mathematical meaning to a decimal string that repeats indefinitely, nor
to the result we obtain by subtracting it from another similar
expression containing "fuzzfuzzfuzz" (or if you prefer, ...). That's
why there is "fuzzfuzzfuzz" at the end of that statement. We have yet
to define what a repeating decimal really *is*, much less talk about
its product with 10, etc. etc. This is all quite "fuzzy" for now...


>
> >
> > > And, sure, infinite lists are sometimes
> > >tricky to work with, but it's plausible at least that if you just stick
> > >to subtracting 9 from 9 you'll always get 0.
> >
> > "Yes", except that when you multiplied by 10, you reduced the
> > number of 9's after the point by 1
>
> Nope. That's one of the tricky things about infinite lists.

Err...yep. We reduced the number of 9's after the decimal point *precisely*
by 1.

> That would
> be true if there _were_ a far end, because in that case the list would
> be finite. For infinite lists like this, though, moving one place left
> relative to the point is like taking one away: it leaves you with the
> same number you started with after the point. (Aleph-null in this case.)

By your very own argument in an earlier post, it was not the decimal point
that moved but
rather the string in relation to the (fixed) decimal point. You cannot deny
that a single 9
"moved" from the rhs to the lhs of the decimal point (else where did it come
from?)

So, there is precisely one less 9 on the rhs than there was before. Of
course, it turns out there are still an infinite number of 9's on the
rhs of the decimal point (oo-1=oo), but that doesn't change the fact
that *one* of them moved over to the lhs (again, because oo-1=oo).

>
> , so down at the far end something
> > different [arguably] happens.
>
> Once again, what's this 'far end'? There isn't one.

...but we have yet to define this mathematically. Before the notion of
limit, series, etc. is introduced, we have yet to define the very meaning
(and sum) of an infinite addition
problem like:

1*10^0 + 4*10^-1 + 9*10^-2 + ... + 9*10^-n + ...

...we have only discussed the general case of terminating decimals, which
are different beasts.
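Before limits are available, all one can actually compute are the
partial sums of that series. A small sketch in exact rationals (the
function name is mine), showing that the n-th partial sum falls short
of 3/2 by exactly 1/10^n:

```python
from fractions import Fraction

def partial_sum(n):
    # 1*10^0 + 4*10^-1 + 9*10^-2 + ... + 9*10^-n, summed term by term.
    s = Fraction(1) + Fraction(4, 10)
    for k in range(2, n + 1):
        s += Fraction(9, 10 ** k)
    return s

for n in (1, 3, 6):
    print(n, partial_sum(n), Fraction(3, 2) - partial_sum(n))
# The gap 1/10^n is nonzero at every finite stage -- it is the limit
# concept that assigns the symbol 1.499... the value 3/2.
```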

> >
> > OK, such pupils need encouragement, but also warnings. Any
> > worries about 1.4999... presumably come after they've learned about
> > recurring decimals, so they should know how to convert them into
> > rationals -- which, lo and behold, gives the right answer [and the
> > analysis is disguised elsewhere].


Precisely. The analysis is disguised elsewhere and nowhere revealed in the
simple looking "proof." However, what the simple looking proof *does*
suggest (and in fact what you have also suggested) is that...to someone who
does not know better...equating 10*1.499... with 14.99..., etc. is based
solely on the "analysis" of performing similar operations on terminating
decimals.

>
> That's all very interesting stuff, I don't deny. I just don't think it
> supports your contention that simple proofs like the one in question are
> fallacious.

Arriving at a correct conclusion based on incorrect logic is pure
*luck*, hence fallacious. For it not to be fallacious, we need to prove
the assumptions that are being made. I would argue it is less expensive
to prove *directly* that 1.499...=1.5 than it is to prove
10*1.499...=14.99..., and that it would be tricky to prove the latter
without also proving the former (or its equivalent) in the process.
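For what that direct route looks like once the geometric-series formula
a/(1 - r) is in hand (a theorem of analysis, so exactly the machinery
under discussion), a sketch in exact rationals:

```python
from fractions import Fraction

# 1.4999... = 1 + 4/10 + (9/100 + 9/1000 + ...). The tail is geometric
# with first term 9/100 and ratio 1/10, so its sum is (9/100)/(1 - 1/10).
tail = Fraction(9, 100) / (1 - Fraction(1, 10))
total = Fraction(1) + Fraction(4, 10) + tail
print(tail, total)  # 1/10 3/2
```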

--
Darrell

implied odds

Jan 27, 2002, 11:23:20 AM
to
Haven't read all the threads, but I don't think we should get too hung
up on this. Our decimal approach (recurring representations) of
numbers is merely "our best attempt" to describe points on the
number line that cannot be described by n decimal places. The
'proof' involving 10x-x is probably satisfactory if a
proof-by-contradiction stance is undertaken. One should never try to
think too far to the right when dealing with infinite numbers. Infinity
is conceptual.

I can't draw the dot, but 0.3333 (recurring) = 1/3; the "dot symbol"
is our way of describing that fractional number.

0.33333 (rec) / 10 = 0.0333 (rec); because it is infinite there is no
"shift" at the end - it is infinite. This is why, IMO, it is an
adequate approach.

regards Steve

Darrell

Jan 27, 2002, 5:46:22 PM
to
"implied odds" <steve_w...@hotmail.com> wrote in message
news:401d2bfe.02012...@posting.google.com...

> Haven't read all the threads, but I don't think we should get too hung
> up on this. Our decimal approach (recurring representations) of
> numbers is merely "our best attempt" to describe points on the
> number line that cannot be described by n decimal places. The
> 'proof' involving 10x-x is probably satisfactory if a
> proof-by-contradiction stance is undertaken.

The "proof" involving 10x - x can never be "satisfactory" until one properly
defines what it means to sum an infinite number of terms. What *is*
1.499...? What *is* a repeating decimal minus another repeating decimal?
What *is* the product of 10 and this certain repeating decimal? I do not
believe it sufficient to say "merely our best attempt to describe points on
the number line, that can not be described by n decimal places" and blindly
assume that finite arithmetic on similar terminating decimals will work the
same way for repeating ones.

The repeating decimal 1.49... has a very precise mathematical meaning. It
is the sum of the infinite series:

1 + 4/10 + 9/100 + 9/1000 + 9/10000 + ...

Now, at this juncture, when all we have done is sum finitely many terms
(by adding the first two, then taking that result and adding the next
term, etc. until we reach the *end*), we have not yet developed the
necessary machinery to find this infinite sum. In order for that simple
proof to be "satisfactory", it has to be satisfactorily shown (i.e.
proven) that 10*1.49...=14.99..., and so forth for any other steps
involving arithmetic with a repeating decimal that have not yet been
proven. Again, I argue that it is less expensive to prove 1.49...=1.5
*directly* than it is to prove 10*1.499...=14.999...

Now, if by "satisfactory" you mean something to the effect of.....just
assume this step is valid without any prior proof (after all it looks like
it makes sense, doesn't it?) then please do not call it a "proof" when in
fact the proof is missing some crucial steps.

> One should never try to think
> too far to the right when dealing with infinite numbers. Infinity
> is conceptual.
>
> I can't draw the dot, but 0.3333 (recurring) = 1/3; the "dot symbol"
> is our way of describing that fractional number.
>
> 0.33333 (rec) / 10 = 0.0333 (rec); because it is infinite there is no
> "shift" at the end - it is infinite. This is why, IMO, it is an
> adequate approach.

But these observations are just that----observations. They have yet to
be proved mathematically. No one is arguing that the 10x-x approach
yields a correct conclusion......it does....but that does not imply
that correct reasoning was applied during every step of the process. In
order to equate 10*1.49... with 14.99..., we have to prove *why* that
result is valid. We can't just say this is "adequate" because that's
the way it works for terminating decimals. We are going from finding
the sum of a finite number of terms to finding the sum of an infinite
number of terms. We need "instructions" on how this is accomplished in
order to verify that 10*1.49... really is 14.999...


--
Darrell

Rich Bednarski

Jan 28, 2002, 1:46:33 AM
to
Those who would take a more casual view of the "proof" that 1.499... = 1.5
will run into difficulty when a student claims that 1.499... rounded to the
nearest integer is 1, which I believe was the original poster's point.
After all, they will be applying the same steps they would use for
rounding a number with a terminating decimal, namely looking at the
tenths place, rounding up if it is 5 or greater and truncating if 4 or
less. And when you then
say, "you can't use the terminating decimal rule when you are dealing with a
recurring decimal", what are you going to say if the student says, "but you
used the terminating decimal rules on a recurring decimal when you showed us
the proof that 1.499... = 1.5"? There is a reason that mathematicians are
very careful to distinguish between proofs and demonstrations or algorithms.
I think the 10X-X method would be much better, and more accurately,
presented as an algorithm than a proof. If the class is up to it you could
prove that the algorithm works and if the class is not up to it then you
simply present it as an algorithm without proof and at least you haven't
messed up the students' ideas about what constitutes a proof.

At least that is this Yank's opinion.
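The clash described above is easy to exhibit: the taught rounding
algorithm consumes a digit *string*, so two representations of the same
number can give different answers. A hedged sketch (the helper name is
mine; it assumes a positive input containing a decimal point):

```python
def round_half_up_string(s):
    # Textbook rule applied to the string: look at the tenths digit,
    # round up if it is 5 or more, truncate otherwise.
    whole, frac = s.split(".")
    return int(whole) + (1 if int(frac[0]) >= 5 else 0)

print(round_half_up_string("1.4999"))  # 1 -- any finite prefix of 1.4999...
print(round_half_up_string("1.5"))     # 2 -- the same number, written differently
```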

Rich

"Darrell" <dr6...@hotmail.com> wrote in message
news:2r%48.21146$ag5.2...@newsread2.prod.itd.earthlink.net...

implied odds

Jan 28, 2002, 4:06:30 AM
to
The proof is satisfactory, because you start with the assumption that
1.5 > 1.49999 (recurring), and any case that proves the contrary will
do to disprove it - that is the contradiction proof. We are not dealing
with limits here; we are not assessing the limit, we are defining a
number. There is a difference here that you have overlooked: the limit
of the sum is 1.5, but we are not summing - it is a precise definition.
Typically with the pi series - pi = lim of the sum.
Hence, 0.999(r) - 0.0999(r) = 0.9.

Steve


"Darrell" <dr6...@hotmail.com> wrote in message news:<2r%48.21146$ag5.2...@newsread2.prod.itd.earthlink.net>...

implied odds

Jan 28, 2002, 4:37:21 AM
to
If you assume that 1.5 > 1.4999(rec) and then conclude that, say,
13.5 > 13.5 or 1.5 > 1.5, then this is a proof, and it is a proof by
contradiction. An alternative approach lies, one suspects, in showing
that 1.499(rec) is not an irrational number and hence there exist p and
q s.t. etc.

regards Steve
"Rich Bednarski" <rbedn...@charter.net> wrote in message news:<u59suul...@corp.supernews.com>...

Darrell

Jan 28, 2002, 2:09:15 PM
to
"implied odds" <steve_w...@hotmail.com> wrote in message
news:401d2bfe.02012...@posting.google.com...
> The proof is satisfactory, because you start with the assumption that
> 1.5 > 1.49999 (recurring), and any case that proves the contrary will
> do to disprove it - that is the contradiction proof.

Makes sense to me, provided we can actually prove the contrary.

> We are not dealing with limits here; we are
> not assessing the limit, we are defining a number.

Then what is your "definition" of that number? You are saying much about
this proof by contradiction, yet you offer no specific example of such a
proof. Perhaps if you explicitly give us a proof by contradiction, your
point will be made clear (or refuted as the case may be.)

> There is a
> difference here that you have overlooked, the limit of the sum is 1.5,
> but we are not summing - it is a precise definition.

Err. An infinite sum *is* a limit. A sum is just that----a sum, i.e. a
number. A constant, if you will. Constants do not vary. They are exact.
The sum of an infinite series (call this S) is *by definition* the limit of
its sequence of partial sums. This does not mean "the limit of the sum" is
S. This means the sum *is* S. And yes, we *are* summing because by
definition of the very meaning of the expression:

1.49...

in the decimal number system this is:

(1*10^0)+(4*10^-1) + (9*10^-2) + ...

Not just for repeating decimals, but for *any* number. For example, the
expression 124 is *by definition*:

(1*10^2) + (2*10^1) + (4*10^0)

So, as you see, these really are "sums." Perhaps you are not clear on the
definition of the sum of a series. Yes, the definition is in terms of a
limit. But, the definition clearly states that this limit (the limit of the
sequence of partial sums) *is* the sum of the series. If you need more
specifics, see the alt.algebra.help FAQ at
http://home.earthlink.net/~aahfaq/. There is an article there about
.99...=1 which contains discussion not only of the 10x-x demonstration but
of the formal proof as well.

The bottom line is, and I agree with Rich--------the 10x-x
demonstration is just that---a demonstration. It is not a "proof." And
again, I reiterate that it is actually easier to prove 1.49...=1.5
directly using the definition than it is to prove 10*1.499...=14.999...
(a necessary and missing element in order for the 10x-x method to
constitute a "proof.")

Proof by contradiction? Yeah, but in order to prove this by contradiction
you actually have to, at some point, *prove* why one side of the equation
really is equal to the other side. Ultimately, this boils down to defining
what 1.49... (or equivalently .99..) actually *is.*

> Typically with
> the pi series - pi = lim of the sum.
> Hence, 0.999(r) - 0.0999(r) = 0.9.

Err. .99... - .99... = 0. This has been discussed in various newsgroups
ad nauseam, which is why I put it in the alt.algebra.help FAQ. If you
are under the impression that .99...<>1 then you do not understand what
an infinite sum *is* or how to find its value.

Regards,

--
Darrell

Darrell

Jan 28, 2002, 2:32:50 PM
to
"Darrell" <dr6...@hotmail.com> wrote in message
news:vlh58.1590$By6.2...@newsread2.prod.itd.earthlink.net...

> "implied odds" <steve_w...@hotmail.com> wrote in message

> Typically with


> > the pi series - pi = lim of the sum.
> > Hence, 0.999(r) - 0.0999(r) = 0.9.
>
> Err. .99... - .99... = 0. This has been discussed in various newsgroups
> ad nauseam, which is why I put it in the alt.algebra.help FAQ. If you are
> under the impression that .99...<>1 then you do not understand what an
> infinite sum *is* or how to find its value.

Obviously I misread your statement. Of course, .99...-.099...=.9 (I misread
the latter as .99...) But now, I have to ask why you mention this at all?
A sum is a constant, hence the "limit of the sum" is simply that constant.
In this case the sum of a pi series is, well, pi (else why is this called a
pi series?) So you have pi-pi which is 0. Similarly, .99... is the sum of
a series (in this case the sum is 1) and .09... is the sum of another series
(in this case the sum is .1) and 1-.1=.9.

Regards,

--
Darrell

implied odds

Jan 28, 2002, 7:24:49 PM
to
Well, the contradiction proof is fairly trivial.
Right, then: you appear to concede the point that 0.9999 (rec) -
0.0999(rec) = 0.9,
or more generally x.xxx(rec) * 10 exp(n) / 10 = x.xxx(rec) * 10
exp(n-1) (I).
If not, then this would start to violate all laws low down in number
theory.

Start off with the premise that for x, y > 0

y>x
hence
=> 9y>9x
ie
=> 10y-y >10x-x

Nothing earthshattering.
Substitute y = 1.5, x= 1.4999(rec)

and we have

1.5 >1.4999 (rec)

=> 13.5>13.5

which is clearly false, hence the initial assumption is wrong.
Similar proof for 1.4999(rec) > 1.5. Proof by contradiction.
Both assumptions are wrong, hence 1.4999(rec) = 1.5,
providing (I) holds.

(I assume you accept that 14.9999 (rec) - 1.4999 (rec) = 13.5, which
would stem from (I).)


It gets into the realms of mathematical philosophy, but I contend that
1.49999 (rec) is not a number, nor is 0.33 (rec). (If we think of
numbers as points on a line, then we are simply trying to use our
digital system to describe those points, but that is an aside.)

"A sum is a constant, hence the "limit of the sum" is simply that
constant."

I think you might want to rethink that statement.

ps
"If you are under the impression that .99...<>1 then you do not
understand what an infinite sum *is* or how to find its value."

I'm not sure why you said this, but I am happy with my understanding of
limits.

Limits apply as we let eg n-> inf,

it is perfectly accepted and rigorous to *sum to infinity*, ie to
accept a sum from i = 1 to inf, and define the result of the sum as
the limit.

regards
S

"Darrell" <dr6...@hotmail.com> wrote in message news:<CHh58.1658$By6.2...@newsread2.prod.itd.earthlink.net>...

implied odds

Jan 28, 2002, 7:29:55 PM
to
(I assume you accept that 14.9999 (rec) - 1.4999 (rec) = 13.5, which
would stem from (I).)

"Of course, .99...-.099...=.9 "
then clearly 14.9999 (rec) - 1.4999 (rec) = 13.5,
hence its inclusion and thus (I), and the proof by contradiction.

"Darrell" <dr6...@hotmail.com> wrote in message news:<CHh58.1658$By6.2...@newsread2.prod.itd.earthlink.net>...

Darrell

Jan 28, 2002, 8:20:15 PM
to
"implied odds" <steve_w...@hotmail.com> wrote in message
news:401d2bfe.02012...@posting.google.com...

> (I assume you accept that 14.9999 (rec) - 1.4999 (rec) = 13.5, which
> would stem from (I).)
>
> "Of course, .99...-.099...=.9 "
> then clearly 14.9999 (rec) - 1.4999 (rec) = 13.5,
> hence its inclusion and thus (I), and the proof by contradiction.

You miss the entire point. The 10x-x argument *assumes* this is true
without proof. That does not constitute a "satisfactory" proof. To prove
the statement true requires theorems of analysis, which is the whole point
of this sub-thread (at least that was the point at which I jumped in.)
Someone (I forgot who) argued that it is effectively true based on similar
results from performing similar arithmetic operations on *terminating*
decimals. And, they are simply assuming blindly that results will hold for
non-terminating decimals. This is faulty logic. Terminating decimals and
repeating decimals are different beasts, so before you can talk about what
the product of 10 and 1.49... is, you need first to define exactly what
1.499... *is*, i.e. how to calculate its value. That's going to be hard to
do without limits, etc. which is the entire point of this
discussion----------the claim has been made that the 10x-x proof is
"satisfactory" while assuming (not proving with theorems of analysis, but
*assuming*) that 10*1.49...=14.999... and similar assumptions for the step
where the subtraction occurs.

--
Darrell

Darrell

Jan 28, 2002, 8:40:36 PM
to
"implied odds" <steve_w...@hotmail.com> wrote in message
news:401d2bfe.02012...@posting.google.com...

> Well the contradiction proof is fairly trivial.
> Right then you appear to concede the point that 0.9999 (rec) -
> 0.0999(rec) = 0.9,

I am not "conceding" that .99... - .09... = .9. I am simply saying it
can be *proven* to be true using the definition of what .99..., .09...,
and .9 actually *are.*
However, nowhere (at least not yet) in the 10x-x argument that you claim to
be "satisfactory" is this step actually proven. That's the fallacy.
Furthermore, the claim has been made that it need *not* be proven in order
for this "proof" to be satisfactory. That's the context of this discussion.

> or more generally x.xxx(rec) * 10 exp(n) / 10 = x.xxx(rec) *10
> exp(n-1) (I)

Don't see what exp(n) has to do with anything.

> If not then this would start to violate all laws, low down in number
> theory.
>
> Start off with the premise that for x, y > 0
>
> y>x
> hence
> => 9y>9x
> ie
> => 10y-y >10x-x
>
> Nothing earthshattering.

Agreed. Nothing earthshattering.


> Substitute y = 1.5, x= 1.4999(rec)
>
> and we have
>
> 1.5 >1.4999 (rec)
>
> => 13.5>13.5


How did you get this?

>
> which is clearly false, hence the initial assumpton is wrong.

...but you have not shown what operation you performed to get 13.5>13.5 from
1.5>1.499... Again, please *show* your proof by contradiction, and remember
the context of this discussion is to do so without theorems of analysis
(i.e. limits, series, etc.) No one is arguing that it can or cannot be
proved using theorems of analysis (i.e. calculus).

> Similar proof for 1.4999(rec) > 1.5. Proof by contradiction.
> Both assumptions are wrong, hence 1.4999(rec) = 1.5,
> providing (I) holds.
>
> (I assume you accept that 14.9999 (rec) - 1.4999 (rec) = 13.5, which
> would stem from (I).)
>
>
> It gets into the realms of mathematical philosophy but I contend that
> 1.49999 (rec) is not a number, nor is 0.33 (rec).

If you believe it not to be a number, then I am wasting my time discussing
this with you. But here's something for you to think about...if 1.499... is
not a number, then why do you equate it to 1.5 (which I presume you believe
*is* a number?) The only contradictions here are your own statements.


> "A sum is a constant, hence the "limit of the sum" is simply that
> constant."
>
> I think you might want to rethink that statement.

No need to rethink it. The *definition* of an infinite sum *says* it's a
number (provided the sum exists) and tells us how to find this number. It
is the limit of the sequence of partial sums of the series. IOW, whatever
that limit is (of the sequence of partial sums), that's the value of the sum
of the series. Limits are constants, no? (rhetorical question, only answer
to yourself.)


> it is perfectly accepted and rigorous to *sum to infinity*, ie to
> accept a sum from i = 1 to inf,

Strange, a little earlier (above) you said this is not a number.


> and define the result of the sum as
> the limit.

Err. We define the limit of the sequence of partial sums as the *sum* of
the series. That's the "result" of the sum. And, this "sum" is a constant
and the limit of any constant is just that constant. Inserting phrases such
as "the limit of a sum", while technically not incorrect, I guess,
certainly adds nothing useful to the argument. Why not just say "sum"?
Again, these are just rhetorical questions. I do not wish to discuss
this further, since you have a "philosophical" problem with the notion
that 1.499... is a
number (yet, you seem to have no problem equating this to 1.5, which *is* a
number.) At best, your philosophy is not self-consistent.

--
Darrell

Rich Bednarski

Jan 29, 2002, 1:16:00 AM
to
How can I assume that 1.5>1.499(rec) when I don't even know for sure,
without the techniques of analysis, that 1.4999(rec) even exists? That is a
very large assumption to have as the basis for something we are going to
call a proof, no?

Rich

Clark

Jan 29, 2002, 3:52:02 AM
to

Yes. Just arrived in Brussels. I'll be giving copies to my students. One
complaint about this article (as about popular treatments of nsa in
general, really): it does no justice at all to its genesis in model
theory and non-standardness à la Skolem, which seems to me one of its
most interesting aspects.

Bob

Robert Low

Jan 29, 2002, 4:47:05 AM
to
On Tue, 29 Jan 2002, Clark wrote:
> Mark Thakkar wrote:
> > Ian Stewart has an interesting-looking article on non-standard
> > analysis in the current "New Scientist". (It wasn't long enough to
> > justify the cover price, so I've not read it - anyone else?)
> >
> Yes. Just arrived in Brussels. I'll be giving copies to my students. One
> complaint about this article (as about popular treatments of nsa in
> general, really): it does no justice at all to its genesis in model
> theory and non-standardness à la Skolem, which seems to me one of its
> most interesting aspects.

Doesn't that depend on why you're thinking about NSA? Chances
are that if what you want to do is get an easy route to results
in, for example, mathematical physics (the kind of stuff that
Arkeryd or Capinski do, for example) then it's enough to deal
with the (cough, cough) constructive approach where you think
of the nonstandard reals as sequences of standard reals with
equivalence given by an ultrafilter on the integers. This
gives you an effective way of using NSA. On the other hand,
if you want a deeper understanding of important ideas like
transference, the model theoretic approach is doubtless better.

Horses for courses...

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Clark

Jan 29, 2002, 5:19:35 AM
to
I'll rejoin the discussion at this point, if I may. The interesting part
of it (to me, at any rate) is about what constitutes a proof. I'm
suggesting that what counts as a proof is a relative matter, something I
take to be Imre Lakatos' point in 'Proofs and Refutations' and
elsewhere. I'm suggesting that there are some contexts in which it's
possible to prove (yes, really _prove_) that
1.499...=1.5
without first defining what a limit is or mentioning any Archimedean
axiom.

'Proofs and Refutations' (everyone reading this group has read it, I'm
sure. If not, read it now instead of wasting your time over this waffle
from me) in a nutshell: Question: is _this_ (->...) a valid proof of
Euler's relation? Answer: it depends.

OK. Now look. We (me and a group of pre-analysis students) know
something about recurring decimals. Try dividing 1 by 3 ... what do we
get? It's clear enough (blindingly obvious, in fact) that we get
0.333... (where '...' means 'and so on without end'). We also know that
when we divide 1 by 3 we get 1/3, the rational number. So, we've proved
that 1/3=0.333... That does for existence of recurring decimals. There's
lots of others too.

Now, how do these recurring decimals behave? Can we add them? Well, if
there are any 'carries' it might get a bit tricky, but if not, well,
isn't it clear that if I add 0.333... and 0.111... I'll get 0.444...?
(Not an assumption here, note: 3+1=4 in every place. We can see that
this works. There's only 3's and 1's to add ... so we'll always get 4.)
Subtract? Same thing: we might be able to think of a way of dealing with
'borrows', but in any case sometimes we won't need to. We can see that
0.333...-0.111...=0.222... in the same way (and for the same simple
reason: again, no assumption). Multiplication? ... and so on. (Details
left as an exercise.)
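Those carry-free digit-wise claims can be cross-checked against the
rational values the recurring decimals were shown to equal (a single
repeating digit d gives d/9):

```python
from fractions import Fraction

third = Fraction(1, 3)  # 0.333..., i.e. 3/9
ninth = Fraction(1, 9)  # 0.111...
print(third + ninth)    # 4/9 -> 0.444...
print(third - ninth)    # 2/9 -> 0.222...
```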

And so ... do some similar stuff for 1.4999... What's it equal to?
Convinced? If not, pinpoint where the reasoning fails. It's not enough
to simply assert that recurring decimals aren't the same as terminating
decimals. Of course in some ways they're the same and in some ways they
differ. You need to show that they differ in a _relevant_ way, a way
which shows this proof fallacious. (A Wittgenstein joke: "But isn't _the
same_ at least the same?")

Right. Is (<-) a proof of '1.499...=1.5'? Well, we know from Lakatos (if
he's right: I think he is) that an answer 'it depends' makes sense. So:
answer 'it depends'. Further, it depends on the context ... and in the
context I've described, the answer, I'm claiming, is 'yes'.

Now, distinguish two possible points of disagreement.

First, with Lakatos' 'it depends'. Some people just won't accept that as
a possible answer to 'Is such-and-such a proof?'. Such people, it seems
to me, commit themselves to the view that, for instance, all Euclid's
proofs are invalid in the light of our knowledge of Hilbert's
'Foundations of Geometry'. Worse, it looks like they may be committed to
the possibility that _no_ mathematical proof is valid because of the
possibility of Lakatosian 'monsters' as yet unthought of. That seems
absurd. (Although, interestingly, it does seem to be a (partly)
respectable view of scientific theories. Maths is different from
science, though, isn't it? ...)

The other possible point of disagreement (accepting the Lakatos 'it
depends') is regarding the actual proof in question. It seems to me
sound in the given context. Where, specifically, does it go wrong (in
the given context)?

Finally, it looks to me as though most posters who want to disagree with
what I've written actually do so in the first of these two ways, but
nevertheless try to explain their disagreement in a manner relevant to
the second. I hope this post might go some way towards clarifying the
point of disagreement at least.

Bob

Clark

Jan 29, 2002, 5:30:44 AM
to

Sure enough. I'm thinking about popularisations that will interest
school students, get them thinking about some extra-curricular maths,
and with luck stimulate them to do more. Which is what I think Stewart
is doing (and what New Scientist is for). Logic is such good fun, and
yet generally gets such a bad press, that I thought it a shame to pass
up a chance to redress the balance. That's all.

Bob

Robert Low

Jan 29, 2002, 5:37:00 AM
to
On Tue, 29 Jan 2002, Clark wrote:

> I'll rejoin the discussion at this point, if I may. The interesting part
> of it (to me, at any rate) is about what constitutes a proof. I'm

Agreed. Indeed, it's the core of most of the arguments about this
(in its more usual guise of 0.999... = 1).

> And so ... do some similar stuff for 1.4999... What's it equal to?
> Convinced? If not, pinpoint where the reasoning fails. It's not enough
> to simply assert that recurring decimals aren't the same as terminating
> decimals. Of course in some ways they're the same and in some ways they
> differ. You need to show that they differ in a _relevant_ way, a way

Isn't that misplacing the burden of proof? Once you know that they're
different in some ways, you can either:

assume that they're the same in every way that matters, until something
goes wrong

or

attempt to justify reasoning by analogy every time you do it.

After all,

1.4 < 1.5
1.49 < 1.5
1.499 < 1.5

This is obviously true no matter how many 9s there are, so

1.49... < 1.5

This seems just as plausible to me (until I do the analysis more
carefully and actually work out the limit).
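[Editorial aside: the tension in the two arguments can be sketched numerically. The snippet below, using exact rational arithmetic, is an illustration added for this transcript, not part of the original post: every partial sum 1.4, 1.49, 1.499, ... falls strictly short of 3/2, yet the shortfall shrinks to a single unit in the next decimal place at every step.]

```python
from fractions import Fraction

# Partial sums 1.4, 1.49, 1.499, ...: each one is strictly less
# than 3/2, yet the gap to 3/2 is exactly one unit in the next
# decimal place, so it shrinks by a factor of 10 per step.
x = Fraction(14, 10)
for n in range(1, 8):
    x += 9 * Fraction(1, 10 ** (n + 1))   # append one more 9
    gap = Fraction(3, 2) - x
    assert x < Fraction(3, 2)             # every partial sum is below 1.5
    assert gap == Fraction(1, 10 ** (n + 1))
```

Both observations are true of every finite truncation; the disagreement in the thread is about which one survives the passage to infinitely many 9s.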

So, given two arguments (each of which proceeds by analogy
with the finitely many place case) and which lead to contradictory
conclusions, we're left with the problem of finding a more
careful argument which clarifies why and where the intuition was
wrong. This will probably lead nicely to the question of
what we mean by real numbers, infinite series, convergence and
so on. (I mention this only to make some slight contact with the
news group's official raison d'etre :-))

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Robert Low

unread,
Jan 29, 2002, 5:41:11 AM1/29/02
to

Ah, yes, a good point. Just about the only popular treatments
of interesting logic issues one sees are on Goedel's incompleteness
result, and there are many more fascinating things going on
there. (My favourite is probably the Skolem-Loewenheim theorem,
which tells us that there is a countable model of the reals.)

Anyway, I suppose I should really have read the New Scientist
article before commenting---but I'm generally a couple of weeks
behind with it. :-(

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Darrell

unread,
Jan 29, 2002, 9:14:10 AM1/29/02
to
"Robert Low" <mtx...@coventry.ac.uk> wrote in message
news:Pine.LNX.4.33.020129...@alfgar.coventry.ac.uk...

> Isn't that misplacing the burden of proof? Once you know that they're
> different in some ways, you can either:
>
> assume that they're the same in every way that matters, until something
> goes wrong
>
> or
>
> attempt to justify reasoning by analogy every time you do it.
>
> After all,
>
> 1.4 < 1.5
> 1.49 < 1.5
> 1.499 < 1.5
>
> This is obviously true no matter how many 9s there are, so
>
> 1.49... < 1.5
>
> This seems just as plausible to me (until I do the analysis more
> carefully and actually work out the limit).

Precisely. It does not seem reasonable to me that one can use the argument
of performing operations on terminating decimals, then extend those results
(i.e. blindly assume they will hold) to repeating decimals. It is not
logical to argue that long dividing 1 by 3 yields .33... (in any rigorous sense,
that is) for the same reason it is not logical to deduce 1.49...<1.5 as
above. When we do long division and "conclude" that 1/3 = .33... we are
making a rather bold assumption. All this is really doing is demonstrating
the fact that the *longer* we carry out the algorithm, the *more* decimal
places we produce. Equating the *entire* decimal representation (usually
denoted as .33...) to *exactly* 1/3 is effectively saying, "I see a pattern
here. Although I have yet to properly define the expression .33... (i.e.
what does the phrase 'it keeps going and going' mean mathematically?) I see
a pattern in that if I were to stop the algorithm at any point, I get a
terminating decimal with all 3's. And, although each of these terminating
decimals is *less* than 1/3 I am going to make a big jump (without really
defining what operation I am performing) and say that the repeating decimal
is *equal* to 1/3 although I have yet to define in any mathematical sense
what it means for the expression .33... to repeat 'indefinitely.' I'm just
going to say something like 'it never ends.' Not only am I going to say all
that, but I am also going to assume I can perform additions and
multiplications with this expression although I have yet to define any
satisfactory method to perform such operations. I am just going to start at
the left end of the expression, assume such operations will never involve
any 'carries,' and add them digit by digit all the way until I reach the
right end." But the problem is, we have yet to define exactly what "the
right end" is and, obviously, we will never even get there using these
rudimentary tools, for the same reason we never really finish "dividing"
1 by 3 to get .33...

I am not going to argue that things like this should not be *investigated.*
After all, the calculus had to begin somewhere, surely, with thoughts along
these lines and other similar notions of infinite processes. But one can't
simply sum up the calculus with a simple definition like "long division of 1
by 3 yields .33..., so I am defining .333... to be 1/3." That simple
definition does not apply in the 1.49... case (you have to 'define' what
that is along similar lines, and addition and multiplication with it also).
So, what I am saying is, I guess it is OK to say 1.49...=1.5 *by definition*,
but that being the case we have not really *proved* anything, have we? We
have simply defined it that way. Hence, the 10x-x argument cannot be a "proof."

>
> So, given two arguments (each of which proceeds by analogy
> with the finitely many place case) and which lead to contradictory
> conclusions, we're left with the problem of finding a more
> careful argument which clarifies why and where the intuition was
> wrong. This will probably lead nicely to the question of
> what we mean by real numbers, infinite series, convergence and
> so on. (I mention this only to make some slight contact with the
> news group's official raison d'etre :-))

Precisely.

--
Darrell

Clark

unread,
Jan 29, 2002, 10:43:15 AM1/29/02
to

Robert Low wrote:
>
> On Tue, 29 Jan 2002, Clark wrote:

> > And so ... do some similar stuff for 1.4999... What's it equal to?
> > Convinced? If not, pinpoint where the reasoning fails. It's not enough
> > to simply assert that recurring decimals aren't the same as terminating
> > decimals. Of course in some ways they're the same and in some ways they
> > differ. You need to show that they differ in a _relevant_ way, a way
>
> Isn't that misplacing the burden of proof? Once you know that they're
> different in some ways, you can either:
>
> assume that they're the same in every way that matters, until something
> goes wrong
>
> or
>
> attempt to justify reasoning by analogy every time you do it.

That seems like a false dichotomy. Of course everything is different
from everything else in some ways. This '8' is different from this '8'
in _some_ ways. Do we _assume_ they're the same? Or do we _notice_ that
they're relevantly the same for most purposes?

I'm not clear where reasoning by analogy comes in. Divide 1 by 3: in
some ways that's like dividing 1 by 4, in others not. Likewise, dividing
1 by 4 is in some ways like dividing 1 by 8, in others not. Are the
differences relevant? Well, sure, there may be some (often implicit)
judgement going on here, just as there may be an implicit judgement that
I mean the same thing by this '8' as I did by that last '8'. Do I
_assume_ that '8' means the same as '8'? Well, in a way, I suppose I do,
but it's kind of an odd thing to say, no? Divide 1 by 3. What do I get?
0.333... (where the '...' means, let's define it, 'and so on without
end'). Do I _assume_ this is the same sort of thing as 1/4=0.25? Well, I
suppose so, in a way, but again it seems an odd thing to say. What's
wrong with saying, 'Look, can't you see, we always get a remainder of 1,
so it'll always give us a 3 in the next decimal place'? That seems more
natural than talk of assumptions or of reasoning by analogy. (And in
fact, we do know that arguing in infinite cases by analogy with finite
is liable to mislead us, so we'll be very careful about doing that.)

> After all,
>
> 1.4 < 1.5
> 1.49 < 1.5
> 1.499 < 1.5
>
> This is obviously true no matter how many 9s there are, so
>
> 1.49... < 1.5
>
> This seems just as plausible to me

But we've already proved that 1.49...=1.5 (!) So something must be going
wrong somewhere. Where? Perhaps it's the implied assumption that 'how
many 9's there are' makes sense when we have 1.499...('and so on without
end')? Yes, I think that's it. (Remember the context.) There _is_ no 'how
many 9's there are' in 1.499... Convinced?

> (until I do the analysis more
> carefully and actually work out the limit).
>
> So, given two arguments (each of which proceeds by analogy
> with the finitely many place case) and which lead to contradictory
> conclusions, we're left with the problem of finding a more
> careful argument which clarifies why and where the intuition was
> wrong. This will probably lead nicely to the question of
> what we mean by real numbers, infinite series, convergence and
> so on. (I mention this only to make some slight contact with the
> news group's official raison d'etre :-))

Indeed this is a good way of leading into that stuff. There we agree.
What do you think about the Lakatosian 'it depends' applied to proofs?


Bob

Robert Low

unread,
Jan 29, 2002, 11:12:59 AM1/29/02
to
On Tue, 29 Jan 2002, Clark wrote:
> Robert Low wrote:
> > Isn't that misplacing the burden of proof? Once you know that they're
> > different in some ways, you can either:
> > assume that they're the same in every way that matters, until something
> > goes wrong
> > or
> > attempt to justify reasoning by analogy every time you do it.
>
> That seems like a false dichotomy. Of course everything is different
> from everything else in some ways. This '8' is different from this '8'
> in _some_ ways. Do we _assume_ they're the same? Or do we _notice_ that
> they're relevantly the same for most purposes?

Personally, after the first time that I see 1.4999... behaving
differently from 1.49...9, I shift from 'assume it's OK' to
'assume it isn't'. I'll almost certainly be guided in my guesses
about how it behaves by my knowledge of terminating decimals,
but I'll be more cautious about when the 'obvious' is actually
true.

> I'm not clear where reasoning by analogy comes in. Divide 1 by 3: in

You're arguing that working with infinite decimals works the
same as working with a finite number of places: since
14.99...9 - 1.49...9 = 13.5 for all finite strings of 9s, it'll
'obviously' work the same way for infinitely many. Looks like
reasoning by analogy to me.
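[Editorial aside: the "for all finite strings of 9" claim can be checked exactly. The sketch below is an added illustration, not part of the original post; note that it truncates both 14.999... and 1.4999... at the *same* decimal place, which is exactly where the analogy smuggles in its assumption, since the truncation of 10x is not literally 10 times the truncation of x.]

```python
from fractions import Fraction

# Truncate 14.999... and 1.4999... at the same decimal place n:
# the trailing 9s line up column by column and cancel, so the
# finite subtraction gives exactly 13.5 at every depth.
for n in range(2, 12):
    ten_x_trunc = 15 - Fraction(1, 10 ** n)          # 14.99...9 (n places)
    x_trunc = Fraction(3, 2) - Fraction(1, 10 ** n)  # 1.49...9  (n places)
    assert ten_x_trunc - x_trunc == Fraction(27, 2)  # i.e. 13.5 exactly
```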

> > After all,
> >
> > 1.4 < 1.5
> > 1.49 < 1.5
> > 1.499 < 1.5
> >
> > This is obviously true no matter how many 9s there are, so
> >
> > 1.49... < 1.5
> >
> > This seems just as plausible to me
>
> But we've already proved that 1.49...=1.5 (!) So something must be going

Unfortunately, I did that proof first, so I'd already proved that
1.499... was less than 1.5, before you proved that it was equal.
I therefore maintain that it's your claim that the non-terminating
string of 9s after the decimal point 'obviously' behaves like the
terminating ones that must be mistaken. (Of course, I'm wrong, but I think I can
now claim that it isn't your conclusion that is wrong, only
the contention that it is obvious.)

Either way, something, as you say, must be wrong somewhere :-)
Also, either way, the only resolution is to start thinking more
carefully about what the 1.49... actually means, as opposed to
how I can perform syntactic manipulations on it to produce
decimals I do understand (because they terminate).

> Indeed this is a good way of leading into that stuff. There we agree.

Maybe we should leave it at that, then, for the 1.49... stuff.

> What do you think about the Lakatosian 'it depends' applied to proofs?

I think it's a way of showing that we often don't quite know what
we're talking about, and that more careful consideration can
make things less ambiguous, even if there's always the threat
of some remaining ambiguity (or some unnoticed lacunae in a
proof). I did like the tale of Cauchy publishing a proof
(or 'proof') of the Euler formula together with a collection
of counter-examples :-)


---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Clark

unread,
Jan 29, 2002, 11:23:00 AM1/29/02
to

Darrell wrote:
>
> "Robert Low" <mtx...@coventry.ac.uk> wrote in message
> news:Pine.LNX.4.33.020129...@alfgar.coventry.ac.uk...

> > This seems just as plausible to me (until I do the analysis more


> > carefully and actually work out the limit).
>
> Precisely. It does not seem reasonable to me that one can use the argument
> of performing operations on terminating decimals, then extend those results
> (i.e. blindly assume) they will hold for repeating decimals.

It's not a blind assumption, though, is it? You can _see_ how it works
(so I claim, anyway). Else why talk of 'optical' proofs in the first
place? Sure enough, sometimes what people claim to be able to see they
only think they can see. Sure, some so-called proofs turn out to be
fallacious. Not this one, though. So I claim.

> It is not
> logical to argue long dividing 1 by 3 yields .33... (in any rigorous sense,
> that is)

But that's begging the question. I'm not claiming that what we've got is
_rigorous_. What I'm claiming is that it doesn't have to be rigorous
to be a proof. What's a proof? A valid argument to some conclusion.
Divide 1 by 3. You keep getting a remainder of 1. In fact, (it's easy to
see) you'll always get a remainder of 1 no matter how many decimal
places you work to. So 1 divided by 3 is 0.333... Convinced? You should
be. The argument is perfectly valid.
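[Editorial aside: the 'always a remainder of 1' observation is mechanical, and easy to exhibit. The helper below is a sketch added for this transcript (the function name is the editor's, not the poster's): it runs the schoolbook long-division algorithm step by step and records the remainder after each digit.]

```python
def divide_digits(num, den, places):
    """Schoolbook long division of num/den (assumes 0 <= num < den):
    return the decimal digits produced and the remainder after each step."""
    digits, remainders = [], []
    r = num
    for _ in range(places):
        r *= 10                 # bring down a zero
        digits.append(r // den) # next decimal digit
        r %= den                # remainder carried to the next step
        remainders.append(r)
    return digits, remainders

digits, rems = divide_digits(1, 3, 8)
assert digits == [3] * 8   # a 3 in every place...
assert rems == [1] * 8     # ...because the remainder is always 1
```

For contrast, `divide_digits(1, 4, 4)` reaches remainder 0 and then produces only zeros, which is what a terminating decimal looks like in this algorithm.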

> for the same reason it is not logical to deduce 1.49...<1.5 as
> above. When we do long division and "conclude" that 1/3 = .33... we are
> making a rather bold assumption.

No assumption. We can look and see.

> All this is really doing is demonstrating
> the fact that the *longer* we carry out the algorithm, the *more* decimal
> places we produce. Equating the *entire* decimal representation (usually
> denoted as .33...) to *exactly* 1/3 is effectively saying, "I see a pattern
> here. Although I have yet to properly define the expression .33... (i.e.
> what does the phrase 'it keeps going and going' mean mathematically?) I see
> a pattern in that if I were to stop the algorithm at any point, I get a
> terminating decimal with all 3's. And, although each of these terminating
> decimals is *less* than 1/3 I am going to make a big jump (without really
> defining what operation I am performing) and say that the repeating decimal
> is *equal* to 1/3 although I have yet to define in any mathematical sense
> what it means for the expression .33... to repeat 'indefinitely.' I'm just
> going to say something like 'it never ends.' Not only am I going to say all
> that, but I am also going to assume I can perform additions and
> multiplications with this expression although I have yet to define any
> satisfactory method to perform such operations. I am just going to start at
> the left end of the expression, assume such operations will never involve
> any 'carries,'

Adding 1 to 3 will never give anything other than 4. When we add
0.333... to 0.111... we'll never be adding anything but 1 to 3 ... so
we'll never get a carry (since 4<10). No assumption there. Just a
trivial little (valid) argument.
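[Editorial aside: the no-carry point is small enough to mechanise. The sketch below is an added illustration under the post's own assumption that every column adds the same pair of digits: 3 + 1 = 4 < 10, so no column can ever generate a carry, whatever the depth.]

```python
# Add 0.333... and 0.111... column by column: every column is 3 + 1 = 4,
# and since 4 < 10 a carry can never arise, at any depth.
def add_columns(a, b, places):
    assert a + b < 10          # the no-carry condition
    return [a + b] * places    # the digits of the sum, place by place

assert add_columns(3, 1, 10) == [4] * 10   # i.e. 0.444... to 10 places
```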

> and add them digit by digit all the way until I reach the
> right end." But the problem is, we have yet to define exactly what "the
> right end"

I say there is no right end in 0.333... or 0.111...

> and, obviously, will never even get there using these rudimentary
> tools for the same reason we never really finish "dividing" 1 by 3 to get
> .33...

We'd never finish _writing_ the answer if we tried to use just decimal
digits. In that sense, you're right. But I can decide to stop as soon as
I see what the answer is, and write 1/3=0.333... Have I finished
dividing or not? well, in one sense, no, but in another sense, yes.

>
> I am not going to argue that things like this should not be *investigated.*
> After all, the calculus had to begin somewhere, surely, with thoughts along
> these lines and other similar notions of infinite processes. But, one can't
> simply sum of the calculus with a simple definition like "long division of 1
> by 3 yields .33..." so I am defining .333... to be 1/3."

But 1/3 just _is_ 0.333... Surely you're not saying I shouldn't explain
why this is so (ie prove it) to small children who have yet to come
across axioms for R?

> That simple
> definition does not apply in the 1.49... case (you have to 'define' what
> that is along similar lines and addition, multiplication with it also.) So,
> what I am saying is I guess it is OK to say 1.49...=1.5 *by definition*, but
> that being the case we have not really *proved* anything, have we?

Sure, a definition is only a proof in a very thin sense.

> We have
> simply defined it that way. Hence, the 10x-x argument cannot be a "proof."

I still say it's a proof. Of course you might like to explain why it's
invalid. (Which step doesn't follow?)

Knowing that theorems in Euclid rely on axioms only made explicit by
Hilbert, are Euclid's proofs of those theorems invalid? What do you
think?

Bob

Clark

unread,
Jan 29, 2002, 11:53:47 AM1/29/02
to

Robert Low wrote:
>
> On Tue, 29 Jan 2002, Clark wrote:
> > Robert Low wrote:
> > > Horses for courses...
> > >
> > Sure enough. I'm thinking about popularisations that will interest
> > school students, get them thinking about some extra-curricular maths,
> > and with luck stimulate them to do more. Which is what I think Stewart
> > is doing (and what New Scientist is for). Logic is such good fun, and
> > yet generally gets such a bad press, that I thought it a shame to pass
> > up a chance to redress the balance. That's all.
>
> Ah, yes, a good point. Just about the only popular treatments
> of interesting logic issues one sees are on Goedel's incompleteness
> result, and there are many more fascinating things going on
> there. (My favourite is probably the Skolem-Loewenheim theorem,
> which tells us that there is a countable model of the reals.)

Now there's an interesting apparent contradiction. Surely any fule kno
that the reals are uncountable?

Actually, I think I may be able to make a connection between
Lowenheim-Skolem and the relativity of proof that's going on in another
(the same? never mind) thread. Do you know that piece by Hilary Putnam
on L-S where he argues for (completely general, not confined to maths)
non-realist semantics? Well, anyway, something along those lines. I
dunno.

But I really ought to do some work. I'll be back in a couple of days.


Bob

Robert Low

unread,
Jan 29, 2002, 12:32:49 PM1/29/02
to
On Tue, 29 Jan 2002, Clark wrote:
> Robert Low wrote:
> > there. (My favourite is probably the Skolem-Loewenheim theorem,
> > which tells us that there is a countable model of the reals.)
>
> Now there's an interesting apparent contradiction. Surely any fule kno
> that the reals are uncountable?

Absolutely. That's why it's so high on my list of fun results.
In the interests of tantalizing anybody else still
reading the thread, I'll leave this for a while. If anybody
actually wants to get to grips with the resolution, I guess
I'll hear from them :-)

> (the same? never mind) thread. Do you know that piece by Hilary Putnam
> on L-S where he argues for (completely general, not confined to maths)
> non-realist semantics? Well, anyway, something along those lines. I
> dunno.

Nah, I don't know Putnam's work at all. I think I have a collection
of papers/essays somewhere, but never got round to reading them.
I'm bad.

> But I really ought to do some work. I'll be back in a couple of days.

We'll be here.

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Robert Low

unread,
Jan 29, 2002, 12:29:18 PM1/29/02
to
On Tue, 29 Jan 2002, Clark wrote:
> Darrell wrote:
> > simply defined it that way. Hence, the 10x-x argument cannot be a "proof."
>
> I still say it's a proof. Of course you might like to explain why it's
> invalid. (Which step doesn't follow?)

Not invalid. Incomplete, that's all.

> Knowing that theorems in Euclid rely on axioms only made explicit by
> Hilbert, are Euclid's proofs of those theorems invalid? What do you
> think?

Same thing there.
---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Darrell

unread,
Jan 29, 2002, 3:17:04 PM1/29/02
to
"Clark" <cl...@brutele.be> wrote in message
news:3C56CC64...@brutele.be...

> But that's begging the question. I'm not claiming that what we've got is
> _rigorous_. What I'm claiming is that it doesn't have to be rigorous
> to be a proof. What's a proof? A valid argument to some conclusion.

OK I see now (I think). Since the "proof" has been shown to be
unsatisfactory in any slightly rigorous way, we shall now focus on the very
"definition" of what constitutes a proof, in order for this "proof" to now
be satisfactory in this context. While we're at it, let's redefine the
entire universe, shall we :-)

> Divide 1 by 3. You keep getting a remainder of 1. In fact, (it's easy to
> see) you'll always get a remainder of 1 no matter how many decimal
> places you work to.

Yep. That's easy to see. You tell me how many decimal places to "work to"
and I'll tell you how many 3's are there. Problem is, every one of those
operations results in a terminating decimal that is less than 1/3, i.e. it
comes from a finite number of operations. That is a *very* different thing
from continuing the process "indefinitely," which is intuitively *impossible*
using the long division algorithm.

> So 1 divided by 3 is 0.333... Convinced? You should
> be. The argument is perfectly valid.

The conclusion is true. The argument, however, is not valid.

>
> > for the same reason it is not logical to deduce 1.49...<1.5 as
> > above. When we do long division and "conclude" that 1/3 = .33... we are
> > making a rather bold assumption.
>
> No assumption. We can look and see.

You can "see" the end of the decimal expansion? Really? You mean to tell
me you can "see" each and every one of the infinite number of iterations in
the long division algorithm? I only have 20/20 vision. What's yours,
infinity/20 ?

>
> Adding 1 to 3 will never give anything other than 4. When we add
> 0.333... to 0.111... we'll never be adding anything but 1 to 3 ... so
> we'll never get a carry (since 4<10). No assumption there. Just a
> trivial little (valid) argument.

Obviously you have still failed to grasp the meaning of what I and others
are saying. In your example above you are not adding .333... to .111...
Rather, you are arguing that you can add .3 to .1, then add that result to
the sum of .03 to .01, then add that result to the sum of .003 to .001, and
so on. You are doing finite arithmetic. I don't care *how* long you
continue that process, you will never reach the entire sum of
.3+.03+.003+.... because there never is an "end" to that process using that
method. Have you not studied calculus? If so, this should be obvious. If
not, that explains why you do not understand. Don't worry, you will, once
that bridge is crossed.

>
> > and add them digit by digit all the way until I reach the
> > right end." But the problem is, we have yet to define exactly what "the
> > right end"
>
> I say there is no right end in 0.333... or 0.111...

And so say I as well, therefore you can never complete your finite method of
addition, can you? All you are doing is summing a finite number of terms.
Have you not yet noticed that each sum you achieve is less than .44..? That
being the case, *how* (without calculus) can you possibly make the big jump
(i.e. bold assumption) of *equating* .333...+.111... with .44...? By your
very own argument (that is, keep adding each digit to the corresponding
digit in the other expression and keep going, and going) can you not see
that there will always be an infinite number of digits that you have not yet
added to each other? Yet, you still seem happy thinking that you have
summed *all* of them. You have not.

>
> > and, obviously, will never even get there using these rudimentary
> > tools for the same reason we never really finish "dividing" 1 by 3 to get
> > .33...
>
> We'd never finish _writing_ the answer if we tried to use just decimal
> digits. In that sense, you're right. But I can decide to stop as soon as
> I see what the answer is, and write 1/3=0.333... Have I finished
> dividing or not? well, in one sense, no, but in another sense, yes.

No, you have not finished dividing yet (nor will you ever finish using that
algorithm.) You are simply choosing not to continue with the algorithm
because you see that you can never finish it. If you're going to divide,
add, subtract, multiply, whatever on a digit by digit basis, in order to
*prove* something, then your result cannot be *shown* to be valid unless you
actually *complete* the process. You are in effect saying, "This can't be
done, so I'm just going to write .33... and say I have *proven* 1/3=.33..."
That's simply not good enough, or using your terminology, that is *not* a
valid conclusion to your argument.

> But 1/3 just _is_ 0.333... Surely you're not saying I shouldn't explain
> why this is so (ie prove it) to small children who have yet to come
> across axioms for R?

Of course not. It is becoming more and more apparent that you simply want
to redefine what a proof is and redefine the target audience, rather than
stick to the original context and the original target audience.

>
> > That simple
> > definition does not apply in the 1.49... case (you have to 'define' what
> > that is along similar lines and addition, multiplication with it also.) So,
> > what I am saying is I guess it is OK to say 1.49...=1.5 *by definition*, but
> > that being the case we have not really *proved* anything, have we?
>
> Sure, a definition is only a proof in a very thin sense.
>
> > We have
> > simply defined it that way. Hence, the 10x-x argument cannot be a
> > "proof."
>
> I still say it's a proof. Of course you might like to explain why it's
> invalid. (Which step doesn't follow?)

I have done that numerous times, including in this very post. I won't
repeat myself, rather you just look at the postings.

>
> Knowing that theorems in Euclid rely on axioms only made explicit by
> Hilbert, are Euclid's proofs of those theorems invalid? What do you
> think?

I do not have an opinion. That's not the topic of this discussion (or if it
was, I must have missed it somewhere.)

--
Darrell

Mark Thakkar

unread,
Jan 29, 2002, 3:59:05 PM1/29/02
to
Darrell,

>> Knowing that theorems in Euclid rely on axioms only made explicit
>> by Hilbert, are Euclid's proofs of those theorems invalid? What do
>> you think?
>
> I do not have an opinion. that's not the topic of this discussion
> (or if it was, I must have missed it somewhere.)

I think you have, to be honest. Bob's point can be illustrated by
asking the question, "Did mathematics begin in the late 19th century?"

Now do you see what he's driving at?

Mark.

Mark Thakkar

unread,
Jan 29, 2002, 4:02:37 PM1/29/02
to
Robert,

>>> My favourite is probably the Skolem-Loewenheim theorem, which
>>> tells us that there is a countable model of the reals.
>>

>> Now there's an interesting apparent contradiction. Surely any fule
>> kno that the reals are uncountable?
>
> Absolutely. That's why it's so high on my list of fun results.
> In the interests of tantalizing anybody else still reading the
> thread, I'll leave this for a while. If anybody actually wants to
> get to grips with the resolution, I guess I'll hear from them :-)

Skolem's Paradox? From what I've been able to find out (none of this
was in our Set Theory course, which I'm currently revising), this is
only really a problem for Platonists (any consistent axiomatic set
theory has a countable model, so no axiom system can adequately
represent the Platonist's world of sets) - though Putnam thinks it's
the death knell of "moderate realism", whatever that is. ("Models and
Reality", Journal of Symbolic Logic vol. 45 issue 3, Sept. 1980.)
For non-realists (fictionalists? formalists?), there's no problem,
because although the result /looks/ paradoxical, it's only so on the
surface.

The main point seems to be that countability (and otherwise) is a
/relative/ phenomenon. (What isn't, these days?) While we'd say
(from our point of view) that this model X of the reals is countable,
/within the model itself/ the set of reals-in-X /is/ uncountable.

The problem is that /we/ think the set of reals-in-X is countable,
because we can see that there's an injective function from the
reals-in-X to the naturals-in-X. However, such a function is not
contained within X; in X, there is no injective reals --> naturals
function (and hence no bijection). So the reals are uncountable in X.

No details here, of course. (Why is there no such function in X?)
But have I got the basic idea right?

Mark.

implied odds

unread,
Jan 29, 2002, 4:15:06 PM1/29/02
to
"A sum is a constant, hence the "limit of the sum" is simply that
constant." - this is incorrect: a sum is not necessarily a constant, and
neither is the limit of the sum.

The decimal system is our mechanism for interpreting the number line.
This is the essence of the problem. What do we mean by 1/3 ? We wish
to describe this point by performing operations or manipulations on
the number line. Our decimal system cannot handle root2, a number
that we can easily describe on our number line as we can 1/3. My
philosophy was not contradictory, it merely underpinned my argument
that our *recordings* in the decimal system are not numbers, they are
our attempt to represent points on the number line. Most numbers we
record and operate on in our decimal system we can describe accurately
back on to the number line. But at times we cannot, so we must
*symbolise* what we are trying to do. The *symbol* that is 1.499
(rec) serves to duplicate, imo, the number that 1.5 accurately reflects
on the number line.

This was

x.xxx(rec) * 10 exp(n) / 10 = x.xxx(rec) * 10 exp(n-1)    (I)

(exp ~ exponent, not 2.71...)

The assumption that I stated would lead (after interpretation) to
14.99 r - 1.49 r = 13.5.

This is presumably what you contend is unproven, I suspect
(you indicate it is not shown for e.g. 0.999rec - 0.0999rec = 0.9).
We have theories on numbers, not on our mechanisms for interpreting them.
Our decimal system cannot handle recurring or irrational numbers.
You state this cannot be proved for non-terminating decimals, only
finite ones.
Suppose that, rather than a number that is a terminating decimal, we
described a number that was

xxxxxxxxxxxx (rec), e.g. recurringly large

Now would all our theorems relating to a number n be incorrect?

i.e. would we question the validity of proof by induction
(i.e. for all n)?

Indeed, proof by induction could be used to show the following,
i.e. show that

0.999...999 (nth decimal place) - 0.0999...999 (nth decimal place) = 0.9

is true for all n; this can be done by induction. (III)

If we can concede in proof by induction that we can state that
sigma(n) = 0.5n(n+1) for all n, i.e. for n arbitrarily large, with no
hint of limits,

then we can also state that if (III) is true for all n, i.e. for n
arbitrarily large, then (III) indicates the existence of a number
with an 'infinitely large decimal place' (a number with an infinitely
large decimal place would satisfy my definition of a non-terminating
decimal, although I don't like the phrase '...') for which the
relationship holds.
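[Editorial aside: the finite identity labelled (III) can be verified mechanically with exact rational arithmetic; the sketch below is an added illustration, checking many instances rather than performing the induction itself.]

```python
from fractions import Fraction

# x_n = 0.99...9 (nines in places 1..n), y_n = 0.099...9 (nines in
# places 2..n).  The difference is exactly 9/10 for every n; the base
# case and each inductive step can be checked the same way.
for n in range(1, 15):
    x = Fraction(10 ** n - 1, 10 ** n)
    y = Fraction(10 ** (n - 1) - 1, 10 ** n)
    assert x - y == Fraction(9, 10)
```

This confirms (III) for each finite n; whether the statement then carries over to a non-terminating decimal is exactly the point in dispute in this thread.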

I would be prepared to indulge more time but your discourteous nature
doesn't warrant it, which is why I have not spent the effort dotting
the i's and crossing the t's. I can see why one or two decided to
disengage from this thread earlier. When you have experience in maths,
you have a sense for proofs and as such give intimations as to how it
might be done. I inferred that I believed it could probably be proven
by contradiction, as similar results were proved at uni - at which you
demanded, like a performing monkey, that I prove it. If you wish to
further your own and other people's understanding of mathematics, then
behave in a cordial and civilised way. Yes it's tough, but I managed
to restrain myself in this post.

regards

Clark <cl...@brutele.be> wrote in message news:<3C567737...@brutele.be>...

Darrell

unread,
Jan 29, 2002, 6:23:14 PM1/29/02
to
"Mark Thakkar" <mark.t...@balliol.ox.ac.uk> wrote in message
news:MPG.16c11d6f3...@News.CIS.DFN.DE...

Mark,

Like I stated, I do not have an opinion. I would have to review the topic
in order to have an opinion. And I don't see the point in doing that just
to argue whether or not the 10x-x approach using no calculus constitutes a
"proof." Of course, as you have seen, this is apparently not a widely
agreeable question. If "proof" depends strictly on context (i.e. how one
defines and what one accepts as proof) then you can "prove" anything you
want. E.g.:

1.4999... < 1.5 (to be proved or disproved)

1.4 + .09 < 1.5 true

1.4 + .09 + .009 < 1.5 true

1.4 + .09 + .009 + .0009 < 1.5 true

Now, by observing (i.e. "seeing" I believe was stated) the "pattern" of
these finite arithmetic processes that supposedly are *valid* (allegedly), I
could *just* as easily conclude that:

1.4 + .09 + .009 + .0009 + ... < 1.5

After all, the "pattern" seen in the additions is that the sum is always
less than 1.5, no?
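Both halves of that observation can be seen side by side with exact arithmetic; a small sketch of my own (not Darrell's):

```python
from fractions import Fraction

s = Fraction(14, 10)                  # the partial sum 1.4
for k in range(2, 12):
    s += Fraction(9, 10**k)           # append another 9: 1.49, 1.499, ...
    assert s < Fraction(3, 2)         # every partial sum is < 1.5 ...
    # ... but the shortfall is exactly 10^-k, shrinking toward nothing:
    assert Fraction(3, 2) - s == Fraction(1, 10**k)
```

So the "pattern" shows both that each partial sum is below 1.5 and that the gap vanishes; which conclusion one draws about the infinite string depends entirely on what "..." is taken to mean.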

That is every bit as legitimate (or illegitimate, I should say) as if I
were to conclude that:

1.4 + .09 + .009 + .0009 + ... = 1.5

Bottom line, the argument presented (analogy of similar operations on
terminating decimals, long division, etc. etc.) is not a "proof," I don't
care what (reasonable) definition of "proof" we are using, because I have
just "proved" two conflicting results using the exact same steps. It boils
down to what a repeating decimal actually *is*, i.e. what does it mean
*mathematically* to add an infinite number of terms, and how we are to
accomplish this (e.g. a definition) which has yet to be adequately given
(without calculus, that is.) Bottom line, in order to *prove* 1.499...=1.5
this question *must* be addressed at some point in the process. People are
arguing that no, it need not be, it is satisfactory as it is, and I have
time and time again shown that the logic used to arrive at the conclusions
they arrived at is simply invalid. I don't care *what* definition of proof
we use. It certainly cannot be a reasonable definition if we can draw
conflicting conclusions using the same logic. You see, it really depends
on how we *define* an infinite sum. Once this is done with the usual
method, it is not difficult to see that it is easier to prove
1.499... = 1.5 directly than it is to justify the step from x = 1.499...
to 10x = 14.999..., and the same for the subsequent subtraction step.

--
Darrell

Robert Low

unread,
Jan 29, 2002, 6:48:01 PM1/29/02
to
On Tue, 29 Jan 2002, Mark Thakkar wrote:
> The problem is that /we/ think the set of reals-in-X is countable,
> because we can see that there's an injective function from the
> reals-in-X to the naturals-in-X. However, such a function is not
> contained within X; in X, there is no injective reals --> naturals

That's right: the object which we think of as a set of ordered
pairs matching up the integers to the 'reals' isn't a set in
that universe. Sometimes it's convenient to talk about 'internal'
and 'external' sets. This set of ordered pairs is then an
external set, since it's there in our (bigger) universe; but
in the countable model that the L-S theorem guarantees, that
collection of objects is not a set.


> But have I got the basic idea right?

Yep. Bit of a brain-buster at first sight, isn't it?
(I hope you haven't spoiled the fun for anybody by
giving it away so quick :-))

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Robert Low

unread,
Jan 29, 2002, 7:01:23 PM1/29/02
to
On 29 Jan 2002, implied odds wrote:
> The decimal system is our mechanism for interpreting the number line.
> This is the essence of the problem. What do we mean by 1/3 ? We wish
> to describe this point by performing operations or manipulations on
> the number line. Our decimal system cannot handle root2, a number

Course it can. But in order for us to use non-terminating
decimals correctly we need to know how they behave. A
way to do this is to regard them as the limit of a sequence
of partial sums. You can then do the analysis and see
just what will work and what won't. (For example, it's
pretty easy to prove that 10*0.0999... - 0.0999... = 0.9;
but it requires a little work, as one has to manipulate
objects which arise as the limits of sequences, and check
that operating on the terms then taking the limit gives the
same result as operating on the limit.)
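Rob's point — that operating on the terms and then taking the limit needs checking against operating on the limit — can be illustrated on the partial sums of 0.0999...; a hedged sketch of my own:

```python
from fractions import Fraction

def t(n):
    """Partial sum 0.0999...9 with n nines: (10^n - 1) / 10^(n+1)."""
    return Fraction(10**n - 1, 10**(n + 1))

# Operate on the terms first: 10*t(n) - t(n) = 9*t(n).  No finite term
# equals 0.9, but the shortfall is exactly 9/10^(n+1), so the termwise
# results converge to 0.9 -- agreeing with operating on the limit itself.
for n in range(1, 20):
    assert Fraction(9, 10) - 9 * t(n) == Fraction(9, 10**(n + 1))
```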

> This is presumably what you contend is unproven, I suspect.
> (you indicate it is not shown for eg 0.999rec - 0.0999rec = 0.9)
> We have theories on numbers not our mechanisms for interpreting them.
> Our decimal system cannot handle recurring numbers/ irrational
> numbers.

See the statement above.

I think you've misinterpreted some of what's going on: it wasn't
whether one could deal with non-terminating decimal expansions,
it was the extent to which certain manipulations required further
justification that was under dispute.

> ie Would we question the validity of proof by induction ?
> ie for all n
>
> Indeed, proof by induction could be used to show the following.
> ie show
>
> 0.9999......999(nth dec place) - 0.0999...9999(nth decimal place) =
> 0.9
> is true for all n - this can be done by induction. (III)
>
> If we can concede in proof by induction that we can state that
> sigma n = 0.5n(n+1) for all n, i.e. for n infinitely large - no hint of
> limits.

Same proof method shows that 1.49... < 1.5.

> from this thread earlier. When you have experience in maths, you have
> a sense for proofs and as such give intimations as to how it might be
> done. I inferred that I believed it could probably be proven by
> contradiction, as similar results were proved at uni - at which you
> demanded, like a performing monkey, that I prove it. If you wish to further

I'm afraid that if you're going to hang about in maths discussion
groups, you'll have to get used to a response of 'show me' if you
claim that a certain result can be proved in a particular way.
It's the nature of the beast.

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Darrell

unread,
Jan 29, 2002, 9:42:07 PM1/29/02
to
"implied odds" <steve_w...@hotmail.com> wrote in message
news:401d2bfe.02012...@posting.google.com...
> "A sum is a constant, hence the "limit of the sum" is simply that
> constant." - is in correct a sum is not necessarily a constant,
> neither is the limit of the sum.

1.4 + .09 + .009 + ... is the sum of *constants* (a real number, assuming it
does not diverge that is.) We are not talking about a sum containing
variables....we are talking about the sum of constants. Last I checked, R
was closed under addition, so if you want to argue that this sum (or similar
sum) is not necessarily a constant, you will have to find someone else
willing to entertain this type of trolling argument, since it is obviously
an unreasonable argument...

Likewise for the limit. Limits are constants, and the limit of a constant
as x-->anything is just that constant. If you insist on arguing against
this, again I must refuse to entertain this type of trolling argument any
further. Enough bandwidth has been wasted on it already. If you *must*
have the last word, be my guest.

>
> The decimal system is our mechanism for interpreting the number line.
> This is the essence of the problem. What do we mean by 1/3 ? We wish
> to describe this point by performing operations or manipulations on
> the number line. Our decimal system cannot handle root2, a number
> that we can easily describe on our number line as we can 1/3.

I can "describe" sqrt(2) very precisely on the number line. It is the point
sqrt(2).

> My philosophy was not contradictory, it merely underpinned my argument
> that our *recordings* in the decimal system are not numbers, they are
> our attempt to represent points on the number line.

I don't see where you are going with this (no, that is not a request for
clarification.) Anything we write is simply an expression for a number; it
is not "the" number. "The" number lives in our minds, not on paper. And
yes, your philosophy is self-contradictory, as shown in my previous post,
which I will not repeat here.


> Most numbers we
> record and operate on in our decimal system we can describe accurately
> back onto the number line. But at times we cannot, so we must
> *symbolise* what we are trying to do.

Uh, *every* time we write a number down we are "symbolizing" something. So
what?


> The *symbol* that is 1.499
> (rec) serves to duplicate, imo, the number that 1.5 accurately reflects
> on the number line.

They both accurately reflect the same number. They are just different names,
that's all. Problem is, you seem to understand the language in which the
latter is written (i.e. you seem to understand what the expression 1.5
*means*) but you apparently do not understand so clearly the language in
which the first is written, since by your very own argument you are implying
that 1.499... does not reflect as *accurately* as 1.5. Ask yourself: if
1.499... is not as "accurate" an expression as 1.5, how can that label
"duplicate" the label 1.5?


>
> This was
> x.xxx(rec) * 10^n / 10 = x.xxx(rec) * 10^(n-1)    (I)
> (^ ~ exponent, not e = 2.71...)
>
> the assumption that I stated would lead (after interpretation) to
> 14.99 r - 1.49 r = 13.5.

...and this is what we have been trying to tell you. You are leaving out
the "interpretation." We need to know this "interpretation" of yours to
verify the step of going from 14.99...-1.49...=13.5.


>
> This is presumably what you contend is unproven, I suspect.
> (you indicate it is not shown for eg 0.999rec - 0.0999rec = 0.9)

Exactly.

> We have theories on numbers not our mechanisms for interpreting them.
> Our decimal system cannot handle recurring numbers/ irrational
> numbers.

Our "decimal system" is simply one number system. So is base 2, base 16,
etc. The "number" is independant of the language in which we assign it a
name.


> You state this cannot be proved for non-terminating decimals only
> finite ones.

No, I stated it cannot be proved using the logic you are using. It can be
proved quite easily with the *definition* of the sum of an infinite series.
But, that's calculus, and as you recall, the argument was that the 10x-x
"proof" does not need such notions in order to be "satisfactory." It is
being argued that it is a valid proof based solely on operations involving
finite arithmetic on similar, terminating decimals, and on *partial* (not
complete, partial) long division processes. Oddly enough, even using this
line of reasoning, the "pattern" in fact suggests intuitively that
1.49... is *less* than 1.5.


> Supposing rather than a number that is a terminating decimal we
> described a number that was
> xxxxxxxxxxxx (rec) eg ~ recurringly large
>
> Now would all our theorems relating to a number n be incorrect?

It's not clear why you say recurringly large. The fact is, it can be shown
that a repeating decimal is not only a real, but also a rational. So yeah,
it's a number. (Or rather I should say, since this is apparently not as
obvious to you as it is to everyone else, it is a *name* for a number.)
Just like "Tammy" and "Mom" are two different names for the same person, my
wife in this case.
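That a repeating decimal names a rational can be made concrete with the standard pre-period/period formula; a sketch of my own (the function name and string convention are mine, not anything from the thread):

```python
from fractions import Fraction

def repeating_to_fraction(pre, rep):
    """Exact value of the decimal 'pre' followed by the block 'rep'
    repeated forever, e.g. ('1.4', '9') for 1.4999...  The repeating
    tail contributes rep/(10^len(rep) - 1), scaled past the pre-period
    digits; the pre-period part is an ordinary terminating decimal."""
    int_part, _, frac_part = pre.partition('.')
    k, m = len(frac_part), len(rep)
    base = Fraction(int((int_part + frac_part) or '0'), 10**k)
    tail = Fraction(int(rep), (10**m - 1) * 10**k)
    return base + tail

assert repeating_to_fraction('1.4', '9') == Fraction(3, 2)      # 1.4999... = 3/2
assert repeating_to_fraction('0.', '142857') == Fraction(1, 7)  # 0.142857... = 1/7
```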

>
> ie Would we question the validity of proof by induction ?
> ie for all n
>
> Indeed, proof by induction could be used to show the following.
> ie show
>
> 0.9999......999(nth dec place) - 0.0999...9999(nth decimal place) =
> 0.9
> is true for all n - this can be done by induction. (III)
>
> If we can concede in proof by induction that we can state that
> sigma n = 0.5n(n+1) for all n, i.e. for n infinitely large - no hint of
> limits.

Uh, not quite. The very notation sum(n=1,oo) *is* BY DEFINITION a limit.
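Under that definition the value of 1.4999... follows from the geometric-series limit; a short sketch of my own in exact arithmetic:

```python
from fractions import Fraction

# 1.4999... means 14/10 + sum over k >= 2 of 9/10^k.  The 9s form a
# geometric series with first term a = 9/100 and ratio r = 1/10; the
# limit of its partial sums is a/(1 - r).
a, r = Fraction(9, 100), Fraction(1, 10)
tail = a / (1 - r)                               # = 1/10
assert Fraction(14, 10) + tail == Fraction(3, 2)  # i.e. 1.4999... = 1.5
```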

>
> Then we can also state that if (III) is true for all n, i.e. for n
> infinitely large, then (III) indicates the existence of a number
> with an 'infinitely large decimal place' - (a number with an infinitely
> large decimal place would satisfy my definition of a non-terminating
> decimal - although I don't like the phrase '..'.) for which the
> relationship holds.
>
> I would be prepared to indulge more time but your discourteous nature
> doesn't
> warrant it, which is why I have not spent the effort dotting the i's
> and crossing the t's. I can see why one or two decided to disengage
> from this thread earlier. When you have experience in maths, you have
> a sense for proofs and as such give intimations as to how it might be
> done. I inferred that I believed it could probably be proven by
> contradiction, as similar results were proved at uni - at which you
> demanded, like a performing monkey, that I prove it.

What do you expect? You think we should just accept it at face value
because you stated it? If you say you're going to prove something, then
prove it. The fact that you can't prove it cannot be so easily disguised
under the excuse of "I'm not gonna, cause ya got bad manners."


--
Darrell

Rich Bednarski

unread,
Jan 30, 2002, 12:22:50 AM1/30/02
to
How is this different than "proving" that there are more natural numbers
than even natural numbers by saying, "If I match the even natural numbers
with the even elements of the set of natural numbers I have a lot of
unmatched numbers in the set of natural numbers, therefore there are more
natural numbers than even natural numbers"? Or "proving" that there are
more points on a longer line than on a shorter line by arguing, "If I lay
the shorter line on the longer line all of the points coincide, so the
extra points on the part of the longer line not covered by the shorter
line show that the longer line has more points"?

You can't use an optical proof on something you can't see and you can't see
an infinite series of decimals. "Reasonable" and infinities, or
infinitesimals, don't go together, IMO. The infinity hotel demonstrates
(see, I didn't say proves [g]) that.

Rich

Dr A. N. Walker

unread,
Jan 30, 2002, 2:49:34 PM1/30/02
to
In article <3C52D10F...@brutele.be>, Clark <cl...@brutele.be> wrote:
>I disagree with you about some of this ...
[...]
>> OK for finite digit strings, gets a bit murkier with infinite.
>> Especially as it's [in some loose sense] what's happening right down
>> at the far end that matters in this problem.
>But there is no 'far end'.

Well, that's why I wrote "[in some loose sense]". But (a)
"there is no 'far end'" is itself an assumption, and (b) if you are
trying to explain stuff to 15yos, then spouting the official line
and refusing to consider any alternative is likely to be counter-
productive, unless the teenagers you know are different from the
ones I know. We're in Hilbert Hotel territory; and some students
love and accept the "official" view, and will delightedly show it
to friends, relatives, anyone who will listen, while others find it
a big turn-off, and an example of why most people don't like maths.

We can see the same problem in Zeno. How can Achilles
catch the tortoise? Many mathematicians simply observe that the
infinite sum converges, QED, and this makes them happy. But it
misses the point [or one of the many points of this profound
puzzle]. Yes, Achilles *could* catch up by using theorems of
analysis to make the sum converge. But he has chosen not to;
he runs to where the tortoise was, then to where it then was,
then ...; how *then* does he catch up? Is it really a triumph
of mathematics that in the 17thC he couldn't catch up, in the
18thC he pragmatically could, and only in the 19thC could we
prove rigorously that he did?

> That's why the principle I gave still works
>for repeating decimals. It's true (isn't it?) that
>for any decimal digit a, 10*0.aaa... = a.aaa...
>(with the usual meaning of '...' See below.)
>- That seems reasonably describable by talking of moving every digit one
>place left, no?

"Moving every digit" is an awful lot of work. Isn't it
more sensible to move the decimal point right? [Think of the
poor chambermaids in Hilbert's Hotel again, not to mention the
guests who sometimes have to move from room N to room 2N.]

Yes, of course it's true with the usual meanings of real
numbers and the operations on them. But it's not as obviously
true as you seem to think, and it worries me when educators
respond to [in this case hypothetical] worries of pupils by
simply repeating the official line.

>> How do you know there are no carries [borrows?].
>OK, 'borrows'. We know there aren't any because it's 9's all the way.
>_That's_ a matter of definition: '1.4999...' means 'the number written
>in decimal with one in the units place, 4 in the tenths place, and 9 in
>every place after that'.

There's a big gap somewhere between "1.4999... means [foo]"
and "we know there aren't any borrows", especially when what *you*
mean by 1.4999... [is probably pretty much what all of us in this
thread mean by 1.4999... but] may differ from what your 15yo may
mean by it.

I have just marked an exam script in which a physicist had
[correctly] sketched the graph of exp(x)/(x^2 - 2), and had labelled
the bit of the graph near the asymptote at x == sqrt(2) with "very
large but not infinite". Your 15yo may think of 1.4999... as having
a "very large but not infinite" number of 9's; "as many 9's as are
needed". That's pretty much what we do on our calculators and in
our computer programs, and in numerical analysis; and it works
perfectly well in practice. Some applications need more 9's than
others, of course. It just gives a different "pure maths" from the
one we usually teach to undergraduates.

>> "Yes", except that when you multiplied by 10, you reduced the
>> number of 9's after the point by 1
>Nope. That's one of the tricky things about infinite lists. That would
>be true if there _were_ a far end, because in that case the list would
>be finite. For infinite lists like this, though, moving one place left
>relative to the point is like taking one away: it leaves you with the
>same number you started with after the point. (Aleph-null in this case.)

Would you like to read what you've written again, and tell me
with a straight face that your 15yo is equipped to understand all
that but is not equipped to understand that there are no infinitesimal
real numbers? And is it not potentially interesting to a bright 15yo
to learn (a) that the Cantorian theory of infinity is not the only
possible one, (b) that the Archimedean theory of reals is not the only
possible one, (c) that alternative theories are just as interesting
and useful [but much less used], and (d) that mathematics is just as
subject to prevailing fashions as other disciplines, so that what is
true here today may not be true on Mars or 100 years hence?

>> >[Agreed, it's a proof, not a definition. But it is a proof.]
>> But a *fallacious* "proof".
>I don't think it is. Even your argument, at best, only supports it being
>an enthymeme (it's missing some premises).

No, it's [subtly] assuming what it purports to prove. Your
bright 15yo could blow it out of the water quite easily:

You, Clark, claim that 1.4999... == 1.5; I, Bright 15YO,
think it could be 1.5 - eps, where eps is very, very, small. So
we let 1.4999... == X, and we multiply by 10. You say this is
14.999..., and I more-or-less agree, but you are thinking 15 while
I am thinking 15 - 10eps. Now you subtract, and you get 9X == 13.5,
but then I notice that I am thinking 13.5-9eps or 13.4999.... The
reason you haven't noticed is because you think that 13.4999... and
13.5 are the same, whereas I am worried about the tiny, tiny error
all the way down among the epsilons. You tell me I shouldn't worry
because we never get there; but if we never get there, then how do
we know what's happening there? What happens if we do an Achilles
on this, and do the n-th decimal place at 10^(-n) seconds before
noon -- can we not complete these infinite operations? Meanwhile,
if as I claim, 14.999... - 1.4999... == 13.4999... [which is, after
all, just as good an answer as yours, by your rules, and a better
answer by my rules], then you have made no progress.

> And even that seems arguable.


>Take the Lakatos point I mentioned. Are you really prepared to say that
>Euclid's proof of, say Pythagoras' theorem was fallacious because Euclid
>wasn't Hilbert? Seems a bit extreme.

Euclid explicitly made the parallel axiom part of his
geometry; without that axiom, much of geometry is false, as we
came to appreciate when non-Euclidean geometries were discovered.
Without the Archimedean axiom, much of what we usually think about
numbers is false, as Bright 15YO might be discovering. There are
perfectly *reasonable* number systems and meanings of 1.4999...
in which 1.4999... < 1.5; it's just that these are not the systems
and meanings that conventional mathematics here and now happen to
use. In other words, it is perfectly conceivable that mathematicians
on Mars [or Alpha Centauri] are just as adept at doing trig, calculus,
NA, etc. as we are, but their Clark is telling Bright that 1.4999...
obviously means [whatever] and is [very slightly] less than 1.5;
and her "optical" proofs are just as convincing as yours. [In fact,
more so, for she doesn't have to talk about infinity, just look at
the tenths.]

> [...] When they've read some Leibniz (and of course Berkeley's attack on
> him), of course it makes sense to talk about infinitesimals in maths too
> ... and to speculate, for instance, about whether, if Abraham Robinson
> had been born 100 years earlier, we'd still think that every set of
> reals bounded above has a lub and so on. [...]

I'm not a great fan of AR and NSA. But luckily, we have been
presented over the last few decades with several untraditional ways of
thinking about numbers that may illuminate this discussion. We have
the surreals, and games. We have the concept of computability. We
have the telephone system. We have tree-labellings, and Dewey decimal
systems. We are much happier with digitisation, representations,
binary, balanced binary/ternary, alphanumeric strings, etc. So we
ought to find it easier to accept different world-views in which
engineering works as it should but in which pure maths is different.

>That's all very interesting stuff, I don't deny. I just don't think it
>supports your contention that simple proofs like the one in question are
>fallacious.

Well, they are either fallacious or else hopelessly obscure!
If you and Bright agree that there are no infinitesimals, then you
surely can simply point out that 1.4999... and 1.5 differ by less than
any positive number and therefore are equal. If you don't agree on that, then
he is not going to accept a proof that relies on it, and that leads
to incorrect results in [some] perfectly good number systems.

--
Andy Walker, School of MathSci., Univ. of Nott'm, UK.
a...@maths.nott.ac.uk

implied odds

unread,
Jan 30, 2002, 3:55:20 PM1/30/02
to
Dar: "Unless it's divergent" - a rather important qualification. One
can hardly declare a statement true in mathematics and then seek to
qualify it after.
Ex: 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ... is clearly divergent, and a sum
of constants.

Rob: To indicate that it is not germane to intimate a proof or suggest
an appropriate approach is in my opinion not at all sensible. As
both an undergrad and postgrad, for me this was the nature of
discussion with supervisors and colleagues. The rigour is important
but the development of the thought processes and the ability to
visualise how to extend and apply techniques is what is key. This is
why ideas can develop whilst asleep or after a drink, I couldn't solve
a problem analytically under these conditions but I often found a way
forward, an inspiration, release of a block etc. Of course ultimately
the proofs must follow, but often a seemingly intractable problem can
be reeled in with just a nudge. If I can believe that to be true,
when trying to find analytical matrix solutions to non-linear matrix
equations, where there is an abundance of rigour and not a number in
sight, then I can believe it is true elsewhere


Rob: My interpretation of the thread (abridged) was that there was
debate as to how the 'far right' was handled; that we cannot state
that non-terminating decimals obey the same rules of arithmetic
operation that terminating decimals do (e.g. 0.99r / 10 = 0.099r); and
indeed that if we could, then we could prove that e.g. 1.499(r) = 1.5
etc.

Rob: Yes I agree you could follow a similar argument with induction to
infer that 1.4999r < 1.5.

Rob: I reiterate that I don't believe our decimal system handles
numbers such as root2; the decimal system is our attempt to describe
points somewhat mechanistically, in an imaginary space. The fact
that the decimal system cannot contain root 2 in a finite way, in the
sense that surds can, is, imo, an indication of this. Furthermore,
the decimal system attempts to describe a continuous space in a
discrete way. I see the decimal system as an attempt to describe
points in an imaginary space, to facilitate the mathematical
operations that allow us to attempt to map those points back into space.

If there were an argument to indicate that 1.499r < 1.5 then one would
have to believe that 1.499r was irrational, since no two integers
p, q exist s.t. ...

Do we seek to define/describe numbers as the sum of their component
parts or in
their relationships to other numbers ?

Dar: Ref contradiction: it is evidently not a contradiction, given that
I have stated that I believe that the decimal system is trying to
reflect or represent space.

The nature of these discussions descends rapidly into a subjective
one. When I see a 'decimal number' I simply consider it as an attempt
to describe a point in space. Space is continuous, the decimal system
is discrete - this is why, imo, the 'mapping' goes 'astray'.

Perhaps there are some number theory or set theory, rather than limit
theory, arguments that can inform the debate - but it's been many a
year since I delved there. Though I agree that this shouldn't become
a non-terminating thread.

regards

Dr A. N. Walker

unread,
Jan 31, 2002, 10:07:36 AM1/31/02
to
In article <3C567737...@brutele.be>, Clark <cl...@brutele.be> wrote:
>I'll rejoin the discussion at this point, if I may. The interesting part
>of it (to me, at any rate) is about what constitutes a proof. I'm
>suggesting that what counts as a proof is a relative matter, something I
>take to be Imre Lakatos' point in 'Proofs and Refutations' and
>elsewhere.

Indeed, but I think you're also somewhat misrepresenting
that point. Note that as the discussion of [in P&R] the Euler
formula proceeds, the students are forced to contemplate "monsters"
which push the limits of what they mean by polyhedra. The formula
keeps breaking down and being revived. If you had barged in on
them in chapter 1 and shown them the modern form of the result,
they would have said "Yeah, whatever, but we're only interested
in polyhedra, not in your horrid objects". At times, they *do*
say that, and have to be persuaded that there is something new
and interesting out there; sometimes they face choices, and if
some quite arbitrary choices had been made differently, the final
results would have changed.

> I'm suggesting that there are some contexts in which it's
>possible to prove (yes, really _prove_) that
>1.499...=1.5
>without first defining what a limit is or mentioning any Archimedean
>axiom.

Life would get very tedious if we had to mention all the
axioms that we use every time we use results that depend on them.
A proof only needs to be convincing to its intended audience. But
that usually [always?] requires a shared set of [relevant] values.
If you don't have that, then you are producing merely a conjuring
trick. *You* produce your "10X" trick to "prove" that 1.4999...
== 1.5. In the next lesson, I "prove" that 1.4999... < 1.5 ["Well,
it's obvious: 1.4999... rounds to 1, but 1.5 rounds to 2, so they
differ; but you can see the same thing more formally by looking at
the partial sums, 1, 1.4, 1.49, 1.499, 1.4999, ..., which are all less
than 1.5"]. Why is your conjuring trick better than mine? As both
of these are tricks that your bright 15yo could invent for herself,
how will you resolve her difficulties [without using limits or the
axioms!]? [Bear in mind that my proof is also perfectly correct
for some axiom systems and meanings of "...".]

>OK. Now look. We (me and a group of pre-analysis students) know
>something about recurring decimals. Try dividing 1 by 3 ... what do we
>get? It's clear enough (blindingly obvious, in fact) that we get
>0.333...

Yes, ...

(where '...' means 'and so on without end').

..., but no. In different contexts, "..." could mean
"and now we see the pattern", or "etc" or "until we get tired"
or "as far as necessary" or "as far as we like" or "without
bound" or "limit of the partial sums as the number of digits
tends to infinity". All but the last of these are reasonably
comprehensible to even an unbright 15yo; but it's the last
that you intend [and that conventional maths decrees]. All
of these mean the same in practical applications; all differ
"in theory". Numerical analysis and computing are mostly
about *processes* [algorithms], for which the "necessary"
meaning makes much more sense than the limit: from [eg]

sin x = x - x^3/3! + x^5/5! - x^7/7! + ...

we write computer code [in some language] something like

real epsilon = 10^-10, sum = x, term = x
int n = 1
while abs(term) > epsilon
do n += 2; term *= -x^2/((n-1)*n); sum += term done
result = sum

[errors, omissions, translations to real languages, and numerical
improvements left as exercises]. Without the terminating condition,
it's no longer an algorithm, no longer useful, but much closer to
the "..." that you claim to be "blindingly obvious".

> We also know that
>when we divide 1 by 3 we get 1/3, the rational number. So, we've proved
>that 1/3=0.333...

[Note in brackets: although we get 0.333..., and 0.142857...
and lots of others this way, we *never* get 1.4999.... That should
alert you to the fact that 1.4999... is "special", and may not follow
the usual rules.]

>Now, how do these recurring decimals behave? Can we add them? Well, if
>there are any 'carry's' it might get a bit tricky, but if not, well,
>isn't it clear that if I add 0.333... and 0.111... I'll get 0.444...?

Yes; but note that we can add these "from the left" [as well
as converting back to rationals: 1/3 + 1/9 == 4/9 == 0.444...]:
0.333333X + 0.111111Y is either 0.444444Z or 0.444445W, no matter
what the digit-strings X and Y may be, so we can write down 0.44444
"now". Only if the leading digits of X and Y sum to 9 is there any
problem, so when we look at them and find they're 3 and 1 again, we
can write down another 4. Likewise, 0.777... + 0.571428... ==
1.349206... == 7/9 + 4/7 == 85/63. But if we add 0.142857... to
0.857142... then we don't know whether the answer is 0.999..., as
optics suggest if we assume there is never any carry, or 1.000...
if we assume that eventually a carry comes whooshing in from the
far end. The corresponding rational sum is 1/7 + 6/7 == 1 == 1.000...,
suggesting that optics can be misleading!
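That "add from the left" process can be sketched as a digit-streaming algorithm (my own illustration, assuming the sum stays below 1 so no carry escapes past the decimal point): hold the last undetermined digit, buffer any run of digit-sums equal to 9, and flush once a sum <= 8 (no carry) or >= 10 (carry) arrives.

```python
from fractions import Fraction
from itertools import islice

def digits(q):
    """Yield the decimal digits of a rational 0 <= q < 1 by long division."""
    num, den = q.numerator, q.denominator
    while True:
        num *= 10
        yield num // den
        num %= den

def add_from_left(xs, ys):
    """Stream the fractional digits of x + y (assumed < 1), emitting a
    digit only once no future carry can change it.  On 1/7 + 6/7 every
    digit-sum is 9, so nothing is ever emitted -- exactly the stall
    described above."""
    held = None    # last digit whose value still awaits a possible carry
    nines = 0      # run of digit-sums equal to 9 sitting behind it
    for a, b in zip(xs, ys):
        s = a + b
        if s == 9:                 # ambiguous: a later carry would flip it
            nines += 1
            continue
        carry = 1 if s >= 10 else 0
        if held is not None:
            yield held + carry             # the carry resolves the held digit
            for _ in range(nines):
                yield 0 if carry else 9    # ...and the buffered run of 9s
        held, nines = s % 10, 0

third, ninth = digits(Fraction(1, 3)), digits(Fraction(1, 9))
assert list(islice(add_from_left(third, ninth), 6)) == [4] * 6   # 0.444...
```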

Notes:
(a) This is a real problem in the theory of computation. If
x and y are computable [ie, there exist computer programs
that print them out], then so is x+y, and for most x and y
we can write down the x+y program: it's the composition
of the x and y programs with the "schoolboy" add process
but working from the left, as above. But this fails if
x+y is "in fact" a terminating decimal; x+y is still
computable, for it is rational, but we can't write down
its program from those for x and y.
(b) No-one ever asks whether 1 and 1.000... are equal, though
this is just as tricky as 0.999... [if 1 - eps = 0.999...,
then 1 + eps = 1.000..., for infinitesimal eps].

>And so ... do some similar stuff for 1.4999... What's it equal to?
>Convinced? If not, pinpoint where the reasoning fails. It's not enough
>to simply assert that recurring decimals aren't the same as terminating
>decimals.

But if you aren't prepared to discuss limits or infinitesimals,
then nor is it enough to assert that they *are* the same. Your 15yos
could easily be convinced either way, and may well be confused because
they have convinced themselves *both* ways. You can resolve this only
by pointing to authority: maths *could* have gone either way, we have
*chosen* to go *this* way, but *that* way is quite interesting, and is
even more useful for certain purposes [NA, computing, games -- but not
analysis].

>Right. Is (<-) a proof of '1.499...=1.5'? Well, we know from Lakatos (if
>he's right: I think he is) that an answer 'it depends' makes sense. So:
>answer 'it depends'. Further, it depends on the context ... and in the
>context I've described, the answer, I'm claiming, is 'yes'.

If you're educating cannon fodder, then you're probably right.
But if your bright 15yos really are bright, then you're short-changing
them. Maths is full of "what if"'s; lots of them are accessible to
surprisingly young people, especially when you can link it in to
games, or computing, or their ideas of infinity, or to interesting
features of numbers/functions. Maths is usually taught in a very
authoritarian way -- not least because most maths teachers, esp in
primary education, are not mathematicians and have never themselves
seen or thought about the alternatives -- which is one of the many
reasons why maths is "difficult" and "boring". Here you have a good
chance to do something better!

Nicholas Clarke

unread,
Feb 1, 2002, 12:09:24 PM2/1/02
to
Being the 'enthusiastic 15 year old' in question (I think) (Just found the
n/g) I want to make a few points that led to the discussion in question, and
other thoughts.

While arguments using both the forms I have been shown to attempt to prove
1.499999... = 1.5 seem to be correct (10x - x and sum of inf. series),
doesn't this merely demonstrate that conventional maths cannot work with
infinities or infinitesimals? For example, the question infinity -
1,000,000 clearly makes no sense when you find that the answer is still an
infinity.

For example, if a circle is made of an infinite number of infinitesimally
small sides, we get a problem if 1.5 = 1.4999... Doesn't the exact same
explanation prove that 0.0...1 = 0, given that the only possible other
explanation is 1.5 - 1.4999... = 0.0...1? (And wouldn't this be the
definition of an infinitesimal?) If 1.4999... = 1.5, then
1.5 - 1.499999... = 0, which by conventional maths makes a circle an
impossible shape, because its circumference would be infinity * 0.

Interestingly, I read the article on NSA today; I'd be interested to know
what (if any) light that would shed on this question.

Nick


Darrell

unread,
Feb 1, 2002, 11:34:56 PM2/1/02
to
Hello Nicholas. I will briefly attempt to address the issues you mention
from a standard analysis (i.e. conventional maths, not NSA) point of view.
Perhaps someone else will have something to say about NSA.

"Nicholas Clarke" <nichola...@blueyonder.co.uk> wrote in message
news:8Zz68.110$pB.2...@news1.cableinet.net...


> Being the 'enthusiastic 15 year old' in question (I think) (Just found the
> n/g) I want to make a few points that led to the discussion in question, and
> other thoughts.
>
> While arguments using both the forms I have been shown to attempt to prove
> 1.499999... = 1.5 seem to be correct (10x - x and sum of inf. series),
> doesn't this merely demonstrate that conventional maths cannot work with
> infinities or infinitesimals? For example, the question infinity -
> 1,000,000 clearly makes no sense when you find that the answer is still an
> infinity.

I guess I agree to some extent with this, if by "conventional" maths you
mean number systems that do not include infinite elements (e.g. the reals.)
There are, of course, branches of math that do deal more directly with
"infinity" but I won't address those here. In conventional math, "infinity"
simply does not exist (it's not a number), so not only does the answer to
the question you pose not make any sense, a strong argument can be made
that the question never gets asked at all.
However, in analysis (i.e. calculus) we formally define certain processes
that in a _loose_ sense use the notion of infinity. But even in this
context, there is no such thing as "infinity." But, we can talk about
whether or not a function approaches a certain value when x "approaches (or
goes to) infinity." But in careful examination of the definition of these
processes (the algebra behind the words) one can see that they are defined
quite satisfactorily in terms of only real numbers (i.e. conventional math.)
When we say something like "x goes to infinity" we do not mean that in a
literal sense, rather we mean x can be as big as we want/need it to be
(increases without bound.) For example, in calculus we never really ask
anything like "what is infinity-1,000,000" but rather we ask things like
"what is the limit of f(x) as x approaches infinity." x never need get
there, it only "approaches."

>
> For example, if a circle is made of an infinite number of infinitesimally
> small sides, we get a problem if 1.5 = 1.4999... Doesn't the exact same
> explanation prove that 0.0...1 = 0, given that the only possible other
> explanation is 1.5 - 1.4999... = 0.0...1? (And wouldn't this be the
> definition of an infinitesimal?) If 1.4999... = 1.5, then
> 1.5 - 1.499999... = 0, which by conventional maths makes a circle an
> impossible shape, because its circumference would be infinity * 0.

You are making some rather interesting observations. In short, it was the
development of the integral calculus that addresses these issues. (not NSA,
but "standard" analysis, i.e. calculus.) In this context, the number of
"sides" is not really infinity, but rather "approaches" infinity. And, as
this value "approaches" infinity, another value that we are really looking
for (e.g. the area) also "approaches" a certain value. It is this value
that is of concern to us (well, most of the time.)

So we really have no "problem" in this context, because the circle is never
actually made of infinitesimally small sides. You can speak _loosely_ and
say it is (and often we say just that), but in the strict algebraic sense it
is not, which is what makes the problem doable. It is made of a finite
number of sides, it's just that we let this value "approach infinity," which
again does not involve infinity directly but rather is just another way of
saying "if the number of sides keeps increasing and increasing and
increasing (without bound) the area of this region gets closer and closer
and closer to some fixed value (a real number.)" In this case, this fixed
value (called a "limit") represents the exact area.

Similar notion for 1.499.... It is the sum of an infinite series, and the
definition of this sum is given in very similar terms (i.e. it is a
"limit.") So we have no "problem" at all once we extend our notion of
"conventional" math to include the calculus. Of course, there are other
contexts in which to address the issues, but these are not standard analysis
rather something like NSA, some of which has been hinted at already in this
thread (i.e. 1-.99...=epsilon, an infinitesimal.)

--
Darrell

Clark

unread,
Feb 4, 2002, 12:31:07 PM2/4/02
to

The same as Euclid's proofs? I'll settle for that.

Bob

Dr A. N. Walker

unread,
Feb 4, 2002, 3:10:19 PM2/4/02
to
In article <8Zz68.110$pB.2...@news1.cableinet.net>,

Nicholas Clarke <nichola...@blueyonder.co.uk> wrote:
>While arguments using both the forms I have been shown to attempt to prove
>1.499999... = 1.5 seem to be correct (10x - x and sum of inf. series),
>doesn't this merely demonstrate that conventional maths cannot work with
>infinities or infinitesimals?

No; rather that conventional maths *chooses* not to work
with them -- that's the convention! Darrell has done a good job
of explaining the conventional position.

> For example, the question infinity -
>1,000,000 clearly makes no sense when you find that the answer is still an
>infinity.

One of the things you will discover is that "clearly" is a
very individual thing. It was clear to my daughter when she invented
infinity for herself that infinity + 1 was still infinity ["there
can't be anything bigger than the biggest number"], so was infinity
+ 1000000, and therefore so was infinity - 1000000.

>For example, if a circle is made of an infinite number of infinitesimally
>small sides, we get a problem if 1.5 = 1.4999... Doesn't the exact same
>explanation prove that 0.0...1 = 0, given that the only possible other
>explanation is 1.5 - 1.4999... = 0.0...1? (And wouldn't this be the
>definition of an infinitesimal?)

So many questions, so few electrons .... If your library has
them, you [and/or your teachers] should look at one or more of

On Numbers and Games, by J. H. Conway
Winning Ways, by Berlekamp, Conway and Guy
Surreal Numbers, by D. E. Knuth

ONAG is *very* difficult in places, but you should be able to read many
little bits of it. WW is mostly easier, but many parts will still be
too hard even for most maths graduates. SN is a "novelette". Don't
buy WW unless you're very rich -- the first edition was two large and
handsome but expensive volumes, the second edition is in four handsome
and incredibly expensive volumes. ONAG is also quite expensive, and
only in black-and-white, whereas WW is in colour. You need a Cambridge
sense of humour to make best sense of either.

If you want a free "taster", take a look at

http://www.maths.nott.ac.uk/personal/anw/G13GAM/Hack and
http://www.maths.nott.ac.uk/personal/anw/G13GAM/TAOG/0.Preamble/0cDom.html

[note that the 0's are zeros and the O's ohs, in case you type rather
than cut-and-paste]. The former shows a game which contains numbers
including some which correspond to recurring decimals, some which are
suspiciously like 1.4999... [but definitely not like 1.5], and some
which are infinite or infinitesimal .... The latter is a work in
progress, which shows some positions from a game called "Domineering"
together with commentary.

>Interestingly, I read the article on NSA today; I'd be interested to know
>what (if any) light that would shed on this question.

Like Robert, I haven't read it yet. But NSA is not an interest
of mine, so (a) I'll leave it to others to relate to this thread, and
(b) I'd welcome a summary!

Robert Low

unread,
Feb 4, 2002, 3:48:36 PM2/4/02
to
On Mon, 4 Feb 2002, Clark wrote:
> Robert Low wrote:
> > Clark wrote

> > > Knowing that theorems in Euclid rely on axioms only made explicit by
> > > Hilbert, are Euclid's proofs of those theorems invalid? What do you
> > > think?
> >
> > Same thing there.
>
> The same as Euclid's proofs? I'll settle for that.

Well, if pushed to it, I'd say that the 1.4999... proof by
multiplying by 10 was rather more incomplete than the Euclid
corpus. It takes a fair degree of sophistication to see
why Euclid needs more work, and relatively little to see that
1.4999... needs serious thinking about.

But as they say on Usenet, YMMV.

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Robert Low

unread,
Feb 5, 2002, 4:47:35 AM2/5/02
to
On 4 Feb 2002, Dr A. N. Walker wrote:

> Nicholas Clarke <nichola...@blueyonder.co.uk> wrote:
> >Interestingly, I read the article on NSA today; I'd be interested to know
> >what (if any) light that would shed on this question.
>
> Like Robert, I haven't read it yet. But NSA is not an interest
> of mine, so (a) I'll leave it to others to relate to this thread, and
> (b) I'd welcome a summary!

I've read it now...it's just a brief statement that (a) infinitesimals
(can be made to) make sense, and (b) you can do some pretty useful
stuff with them.

Regardless of that, the 1.499... thing is just the same in NSA. If
the '...' means 'a 9 for every integer in the NS universe' then
1.49... still equals 1.5. If you stop before that, it's a bit
smaller.

If anybody wants to find out more about NSA, there are various
expository articles available, from different points of view.
Two are:-

KD Stroyan 'The infinitesimal rule of three' in Developments in
nonstandard analysis, ed NJ Cutland et al, Longman

which takes an axiomatic approach, and

T Lindstrom 'An introduction to nonstandard analysis' in
Nonstandard analysis and its applications ed NJ Cutland, CUP

which is rather more constructive.

The horse's mouth is Abraham Robinson, and his book 'Non-standard
analysis' is in print with Princeton University Press; but
you'll need some fairly solid background in logic to read
that. Sometimes the horse's mouth isn't the best place
to start, though anybody at all serious ought to look there
eventually.

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/


Gazza

unread,
Feb 15, 2002, 8:44:21 PM2/15/02
to

"Gazza" <ga...@garyjones.co.uk.invalid> wrote in message
news:a2p5gh$70r$1...@newsg4.svr.pol.co.uk...

>
> "Robert Low" <mtx...@coventry.ac.uk> wrote in message
> news:Pine.LNX.4.33.020124...@alfgar.coventry.ac.uk...
>
> >I don't know any more immediate way
> > to show that 1.4999... is equal to 1.5
>
> I think that what the OP was using was the following:
>
> Let x = 1.4999999...
> then 10x = 14.99999...
> Subtracting upwards gives 9x = 13.5
> Solving for x, gives x = 1.5, hence 1.4999999...= 1.5
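[One way to see what that argument quietly assumes: for any finite
truncation x_n = 1.4999...9 (n nines), 10x - x is not 13.5 but
13.5 - 9/10^(n+1), and the subtraction step sends that deficit to zero
only in the limit. A sketch, not from the thread, using exact fractions:]

```python
from fractions import Fraction

# x_n = 1.4999...9 with n nines, i.e. 1.5 - 1/10^(n+1)
for n in (1, 3, 6):
    x = Fraction(3, 2) - Fraction(1, 10 ** (n + 1))
    print(n, float(10 * x - x))   # 13.41, 13.4991, 13.4999991 -> 13.5
```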

OK, well after showing my innocence about what exactly it was I posted
(both secondary teachers and university lecturers have taught me this "proof" on more
than one occasion, without once mentioning everything that the rest of the OP thread
mentions), I find out that I have to teach "Finding a fraction equivalent to a
recurring decimal", and the three examples given use the 10x-x / 100x-x / 1000x-x
method.

Should I teach it to this Yr 9 set 1 class as I was taught it, not mentioning
anything about limits, or the fact that subtraction for recurring decimals isn't the
same as for terminating ones, or should I at least mention that the 10x-x isn't a
definitive "proof" but say that for what they need to understand, it will suffice?
This will actually be my first lesson teaching them, so I don't want to give the
wrong impression :o)

Actually one of the questions is:
"Find the fraction which is equivalent to the recurring decimal 0.99999999... Explain
the significance of your result."
How ~should~ I answer that one?

Comments appreciated.

Gary


Virgil

unread,
Feb 15, 2002, 10:35:58 PM2/15/02
to
In article <a4kdhu$o8f$1...@news5.svr.pol.co.uk>,
"Gazza" <ga...@garyjones.co.uk.invalid> wrote:

A brief digression into geometric series might be helpful. I have
found that once these series are understood, problems about
repeating decimals become trivial.

The finite geometric series of n terms is
s(n) = a + a*r + a*r^2 + a*r^3 + ... + a*r^(n-1)
which has a closed form sum s(n) = a*(1 - r^n)/(1-r), when r <> 1.

This closed form is easily verified with a calculator for small
values of n, and follows from the algebraic identity
(1-r)*(1+r+...+r^(n-1)) = 1-r^n, also easily demonstrated for small
values of n in a way that makes it obvious for all values of n. It
is also easy to prove inductively, if the students can deal with
induction.

Then consider that r^n -> 0 as n -> +oo, for |r| < 1,
so s(n) -> a/(1-r) as n -> +oo.


It takes a bit of time to introduce the finite and converging
infinite geometric series and convince the students of their
properties, but once done, all those problems with repeating
decimals become trivial.

For example 0.999... = 0.9 + 0.9*(0.1) + 0.9*(0.1)^2 + ...,
so a = 0.9 and r = 0.1, and the infinite sum is
(0.9)/(1-0.1) = (0.9)/(0.9) = 1.
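For anyone who wants to watch the closed form and the limit with exact
arithmetic, a small check (Python with fractions; illustration only):

```python
from fractions import Fraction

def geometric_partial_sum(a, r, n):
    """s(n) = a + a*r + ... + a*r^(n-1), summed term by term."""
    total, term = Fraction(0), Fraction(a)
    for _ in range(n):
        total += term
        term *= r
    return total

# 0.999... as a geometric series: a = 9/10, r = 1/10
a, r = Fraction(9, 10), Fraction(1, 10)
for n in (1, 5, 20):
    closed = a * (1 - r ** n) / (1 - r)       # a*(1 - r^n)/(1-r)
    assert geometric_partial_sum(a, r, n) == closed
    print(n, float(closed))                   # climbs towards a/(1-r) = 1
```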

Darrell

unread,
Feb 15, 2002, 10:39:44 PM2/15/02
to
"Gazza" <ga...@garyjones.co.uk.invalid> wrote in message
news:a4kdhu$o8f$1...@news5.svr.pol.co.uk...

> Actually one of the questions is:
> "Find the fraction which is equivalent to the recurring decimal
> 0.99999999... Explain
> the significance of your result.
> How ~should~ I answer that one?

I'm on the other side of the pond, so I'm not sure exactly what type of
discussion would be suitable for a "9th yr. set 1 student," but you can look
at http://home.earthlink.net/~aahfaq/Is_999.html and see if anything there
is of use to you. It includes several informal demonstrations as well as a
proof. As already discussed, many of these methods are not "proofs," so if
you use one of the informal arguments you should mention that the result can
and will be proven more formally with other methods, once they progress to
the appropriate course in their studies.

I particularly like the 10x-x method already discussed (as an informal
demonstration to get them to intuitively "believe" the result) because it
can be generalized to an algorithm that converts *any* repeating decimal
into fractional form, i.e. dividing the period by the same number of 9's as
there are digits in the period, which can be a very useful tool.
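That algorithm, sketched in Python (the function name and the
prefix/period calling convention are mine, not standard):

```python
from fractions import Fraction

def repeating_to_fraction(prefix, period):
    """0.<prefix><period><period>... as an exact fraction.

    The repeating tail is the period over as many 9s as it has
    digits, shifted past the non-repeating prefix.
    """
    k, m = len(prefix), len(period)
    frac = Fraction(int(prefix) if prefix else 0, 10 ** k)
    frac += Fraction(int(period), 10 ** m - 1) / 10 ** k
    return frac

print(repeating_to_fraction("", "142857"))   # 1/7
print(repeating_to_fraction("", "9"))        # 1 -- 0.999... again
print(repeating_to_fraction("16", "6"))      # 0.1666... = 1/6
```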

--
Darrell

Robert Low

unread,
Feb 16, 2002, 5:28:14 AM2/16/02
to
On Sat, 16 Feb 2002, Gazza wrote:
> Should I teach it to this Yr 9 set 1 class as I was taught it, not mentioning
> anything about limits, or the fact that subtraction for recurring decimals isn't the
> same as for terminating ones, or should I at least mention that the 10x-x isn't a
> definitive "proof" but say that for what they need to understand, it will suffice?
> This will actually be my first lesson teaching them, so I don't want to give the
> wrong impression :o)

I'd probably settle for telling them that this stuff works, but
that a careful justification of it is something that they won't
see until they've done more maths. You could point them to
some of the decent maths resources on the Web if they want
to investigate further.

> Actually one of the questions is:
> "Find the fraction which is equivalent to the recurring decimal 0.99999999... Explain
> the significance of your result."
> How ~should~ I answer that one?

Personally, I find it significant that decimal representations aren't
unique. (Any terminating decimal expansion can be replaced by
one with an infinite string of 9s at the end which has the same
value.)

---
Rob. http://www.mis.coventry.ac.uk/~mtx014/

