
Almost Infinite Mill vs. Gaea's Blessing (again)


fwo...@monmouth.com

Jan 7, 1998, 3:00:00 AM

Inspired by Dave's recent weekly summary, a thought I had been
processing in the background finally surfaced. It's been ruled that if
you have the Earthcraft/Sacred Mesa/Wild Growthed Plains/Altar of
Dementia combo, and your opponent is playing with, say, four Gaea's
Blessings, then you can "cut to the chase" and leave your opponent
with just the 4 Gaea's Blessings as their library. Fine.

But let's add a new wrinkle. (Pause for the wailing and gnashing of
teeth to stop) Let's say that your opponent has an odd number of cards
in their library (including 4 Gaea's Blessings) and an even number in
their graveyard. Now add a Crusade. Uh oh.

Now, you actually have an arbitrarily large number of 2-card-Mill
effects that you can apply. Now, after any number of these effects,
there will be an odd number of cards in your opponent's library. The
above-mentioned ruling doesn't exactly cover this situation, does it?
You can't say that there are 3 Blessings left, as whichever one is in
the graveyard would have had a reshuffle effect. So let's say there
are 5 cards left, with 4 of them Blessings. Which is the fifth card?
An argument could be made that it's a random card, but I could also
argue as follows: I can do any number of mills, so I see every card in
your library. Therefore, I'll stop when I see every card except one I
choose (and the 4 Blessings of course) in your graveyard, so *I get to
choose* the fifth card. Note that changing the power of the generated
tokens could leave any number of unspecified cards (say I mill 7 cards
at a time. Can I pick the other three? How about the order?) And
speaking of order, in what order are the milled cards stacked in the
graveyard for any of the above cases? Using a similar argument, I
could say *I* choose the order of your graveyard.
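Frank's "I get to choose the fifth card" argument lends itself to a brute-force check. The sketch below is purely illustrative (the card names, deck make-up, and the mill_until_chosen helper are all invented): it mills two cards at a time, shuffles the graveyard back into the library whenever a Blessing is milled, and simply repeats until the library is exactly the four Blessings plus one designated card.

```python
import random

def mill_until_chosen(library, chosen="Chosen Card", blessings=4, seed=1):
    """Mill 2 cards at a time; whenever a Blessing hits the graveyard,
    shuffle the whole graveyard back into the library.  Repeat until the
    library is exactly the four Blessings plus the one chosen card."""
    rng = random.Random(seed)
    lib = list(library)
    rng.shuffle(lib)
    grave = []
    target = sorted(["Blessing"] * blessings + [chosen])
    while sorted(lib) != target:
        milled, lib = lib[:2], lib[2:]
        grave.extend(milled)
        if "Blessing" in milled:      # Gaea's Blessing trigger
            lib += grave
            grave = []
            rng.shuffle(lib)
    return sorted(lib)

deck = ["Blessing"] * 4 + ["Chosen Card"] + ["Filler %d" % i for i in range(10)]
print(mill_until_chosen(deck))
```

On a 15-card deck this takes a few thousand reshuffles on average before the lucky ordering turns up, which is exactly the "arbitrarily large number of mills" being invoked.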

Thoughts?

--
Frank J. T. Wojcik O-

Do not disrupt my carefully controlled pattern of hype,
or YOU WILL BE PUT IN A BOX WITH BILL GATES AND SHAKEN. -- K.

Mike Marcelais

Jan 7, 1998, 3:00:00 AM


fwo...@monmouth.com wrote in article <68ukcp$ijj$1...@news.monmouth.com>...

Agreed. Note that you cannot choose the order of the library, since you have
no way of knowing what the order of the five remaining cards is without
doing another mill, which destroys the library.

Another case that nobody has clued into yet: how many blessings does your
opponent have?

You can clearly know if your opponent has any. However, consider the
following situation: Library has 10 cards, Graveyard empty, milling two at a
time. You mill Crusade, Crusade, Crusade, Mesa Pegasus, Mesa Pegasus, Mesa
Pegasus, Gaea's Blessing, Gaea's Blessing, *reshuffle twice*.

Now, you know the "library" has 3 Crusades, 3 Pegasi, 2 Blessings, and 2 other
cards. But you have no way of knowing what those other cards are. They could
be two more Blessings, or they could be a Crusade and a Pegasus, but no amount
of shuffling will allow you to determine this because it is not possible to
distinguish between "I've been getting bad luck and can't ever mill 2 cards
out of the 4 left that are both not GB's" vs "There are 4 GB left in the library
(out of 4 cards).". The best that you can do is leave the library with 4
cards, which contain all of the GB left in the game, plus 2 other cards.

Note that _NON-LAZY_ evaluation might be able to solve the problem by
actually doing the actions a few times and determining the outcomes (eg, if
the second time through, you mill 4 Crusades, then you know one of the two
"hidden" cards is a Crusade), but lazy evaluation can't help you because only
one of the two outcomes can be "proven"; the other outcome remains "I don't
know".

--
+------------------------+----------------------+
| Mike Marcelais | MS Office Developer |
| mich...@microsoft.com | and Magic Rules Guru |
+------------------------+----------------------+
| Opinions expressed in this post are mine, and |
| do not necessarily reflect those of Microsoft |
+--= Moonstone Dragon =---------------= UDIC =--+

Kyle Nishioka

Jan 7, 1998, 3:00:00 AM

fwo...@monmouth.com wrote:
: Now, you actually have an arbitrarily large number of 2-card-Mill
: effects that you can apply. Now, after any number of these effects,
: there will be an odd number of cards in your opponent's library. The
: above-mentioned ruling doesn't exactly cover this situation, does it?

That particular question he asked never got a response, IIRC, but lazy
evaluation is allowed. So I'll assume the loop is allowed even though the
probability of it actually happening is slim.

: You can't say that there are 3 Blessings left, as whichever one is in
: the graveyard would have had a reshuffle effect. So let's say there
: are 5 cards left, with 4 of them Blessings. Which is the fifth card?
: An argument could be made that it's a random card, but I could also
: argue as follows: I can do any number of mills, so I see every card in
: your library. Therefore, I'll stop when I see every card except one I
: choose (and the 4 Blessings of course) in your graveyard, so *I get to
: choose* the fifth card. Note that changing the power of the generated
: tokens could leave any number of unspecified cards (say I mill 7 cards
: at a time. Can I pick the other three?

This sounds possible in theory.

: How about the order?

Of the cards in the library, no.

: And
: speaking of order, in what order are the milled cards stacked in the
: graveyard for any of the above cases. Using a similar argument, I
: could say *I* choose the order of your graveyard.

The owner of the cards chooses the order when two or more cards go to the
grave at the same time. The most you could do is choose the order of
pairs but the owner will choose the order of the cards in the pair.

--
Kyle
nk...@hawaii.edu

#include <std_disclaimer.h>
#include <blue_ribbon>

Kwyjibo200

Jan 7, 1998, 3:00:00 AM

Pardon me for sounding dumb, but if you had even two Blessings in your deck,
and they milled for infinite cards, wouldn't your graveyard be reshuffled
infinite times? This seems to me to be the outcome of the Blessings going to
your graveyard infinite times...

Mike Marcelais

Jan 8, 1998, 3:00:00 AM


Kwyjibo200 <kwyji...@aol.com> wrote in article
<19980107234...@ladder02.news.aol.com>...

Nope -- each "mill 2 cards" is a separate event and triggered effects happen
after each pair of millings. So what happens is you mill two cards. If one
of them is a Blessing, you reshuffle your library & graveyard. Repeat.
Eventually, you will have a library that has only two cards in it, both of
which are Blessings. At this point you stop.
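That repeat-until-done loop is easy to write down as a toy simulation (the deck size and card labels are invented, with "GB" standing in for Gaea's Blessing); it counts how many 2-card mill events occur before the library is exactly the two Blessings:

```python
import random

def mills_until_two_blessings(deck_size=20, seed=0):
    """Mill 2 at a time; on any milled Blessing ('GB'), shuffle the
    graveyard back into the library.  Return the number of mill events
    before the library is exactly the two Blessings."""
    rng = random.Random(seed)
    lib = ["GB", "GB"] + ["other"] * (deck_size - 2)
    rng.shuffle(lib)
    grave, mills = [], 0
    while lib != ["GB", "GB"]:
        milled, lib = lib[:2], lib[2:]
        grave.extend(milled)
        mills += 1
        if "GB" in milled:            # Blessing trigger: reshuffle
            lib += grave
            grave = []
            rng.shuffle(lib)
    return mills

print(mills_until_two_blessings())
```

Every run of this sketch terminates, but the count varies wildly from run to run, which is the crux of the argument later in the thread about whether "eventually" is good enough.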

Paul Miller

Jan 8, 1998, 3:00:00 AM

On 7 Jan 1998 17:26:38 GMT, nk...@Hawaii.Edu (Kyle Nishioka) wrote:

>fwo...@monmouth.com wrote:
>: Now, you actually have an arbitrarily large number of 2-card-Mill
>: effects that you can apply. Now, after any number of these effects,
>: there will be an odd number of cards in your opponent's library. The
>: above-mentioned ruling doesn't exactly cover this situation, does it?
>
>That particular question he asked never got a response, IIRC, but lazy
>evaluation is allowed. So I'll assume the loop is allowed even though the
>probability of it actually happening is slim.

Consider also the converse of this statement. That is, the probability of it
*not* happening is nonzero. Now since it can't be guaranteed that such a state
will be reached with certainty, I would say lazy evaluation can't apply.

>: You can't say that there are 3 Blessings left, as whichever one is in
>: the graveyard would have had a reshuffle effect. So let's say there
>: are 5 cards left, with 4 of them Blessings. Which is the fifth card?
>: An argument could be made that it's a random card, but I could also
>: argue as follows: I can do any number of mills, so I see every card in
>: your library. Therefore, I'll stop when I see every card except one I
>: choose (and the 4 Blessings of course) in your graveyard, so *I get to
>: choose* the fifth card. Note that changing the power of the generated
>: tokens could leave any number of unspecified cards (say I mill 7 cards
>: at a time. Can I pick the other three?
>
>This sounds possible in theory.

Again, you cannot guarantee such a state will be reached.

>: How about the order?

>The owner of the cards chooses the order when two or more cards go to the
>grave at the same time. The most you could do is choose the order of
>pairs but the owner will choose the order of the cards in the pair.
>

Assuming one agreed with you that lazy evaluation can apply to this case, and
one could mill one card at a time, then one *still* shouldn't be able to dictate
the order the cards go to the graveyard. If there are n blessings in the deck,
assume one mills all but n+1 cards into the graveyard, one at a time. Now say I
want to impose an ordering on my milling. Well that depends on your deck being
in a certain state at the time I begin the milling process. Since again there
is nonzero probability of not being in that state (the deck must be sufficiently
randomized at all times), you simply cannot declare "Well I'll mill all your
land into your graveyard," or somesuch.

Walter Goodwin

Jan 8, 1998, 3:00:00 AM

In article <34b46a5d...@nntp.lni.net>,
Paul Miller <pmiller@*DIESPAMMERS*.lni.net> wrote:

>Consider also the converse of this statement. That is, the probability of it
>*not* happening is nonzero. Now since it can't be guaranteed that such a state
>will be reached with certainty, I would say lazy evaluation can't apply.

Well, not to be pedantic, but this is slightly false :)

Each time the loop is performed, it has a very small chance of the desired
result happening. For the sake of argument, let's assume the chance of
that event happening is 1 in 10 ^ 100 times. This means that if reality
would follow probability, the desired result will happen once every
googolth repetition. This means that if the desired result was carried
out to infinity, it will happen. By adding infinity to the equation,
even the most unlikely situation becomes a certainty eventually.

>Assuming one agreed with you that lazy evaluation can apply to this case, and
>one could mill one card at a time, then one *still* shouldn't be able to dictate
>the order the cards go to the graveyard.

Sure you could. (by using the same argument above incidentally)

>If there are n blessings in the deck,
>assume one mills all but n+1 cards into the graveyard, one at a time. Now say I
>want to impose an ordering on my milling. Well that depends on your deck being
>in a certain state at the time I begin the milling process. Since again there
>is nonzero probability of not being in that state (the deck must be sufficiently
>randomized at all times), you simply cannot declare "Well I'll mill all your
>land into your graveyard," or somesuch.

Well, yes you could, assuming you knew how much land was in their deck.
Once infinity is added, then any feasible combination becomes possible.
(Although it has been noted that you still have no control over the order
of the library; at best you can dictate what cards are in it.)


Paul Miller

Jan 8, 1998, 3:00:00 AM

On 8 Jan 1998 01:06:43 GMT, "Mike Marcelais" <mich...@microsoft.com> wrote:

>
>
>Kwyjibo200 <kwyji...@aol.com> wrote in article
><19980107234...@ladder02.news.aol.com>...
>> Pardon me for sounding dumb, but if you had even two Blessings in your deck,
>> and they milled for infinite cards, wouldn't your graveyard be reshuffled
>> infinite times? This seems to me to be the outcome of the Blessings going to
>> your graveyard infinite times...
>
>Nope -- each "mill 2 cards" is a separate event and triggered effects happen
>after each pair of millings. So what happens is you mill two cards. If one
>of them is a Blessing, you reshuffle your library & graveyard. Repeat.
>Eventually, you will have a library that has only two cards in them, both of
>which are Blessings. At this point you stop.
>

Please demonstrate how you are _guaranteed_ to reach such a state. At each
reshuffling, the library is thrown into a random state. Now, if I can show that
at least one of those random states leads to another reshuffle, then that
implies that the probability of achieving the 2-card library is nonzero. Since
the library was initially in a reshuffle state, simply take the cards in the
same order, turn over the graveyard and put the original library on top, and
thus we have established a reshuffle loop.

This is an entirely different case from lazy evaluation, since we cannot
determine the ultimate result of our actions. Simply stating that you mill down
to a two-card library (assume you even know there are exactly two blessings in
that library) is not a legal play.

Kyle Nishioka

Jan 8, 1998, 3:00:00 AM

Paul Miller (pmiller@*DIESPAMMERS*.lni.net) wrote:
: On 8 Jan 1998 01:06:43 GMT, "Mike Marcelais" <mich...@microsoft.com> wrote:

: >Nope -- each "mill 2 cards" is a separate event and triggered effects happen
: >after each pair of millings. So what happens is you mill two cards. If one
: >of them is a Blessing, you reshuffle your library & graveyard. Repeat.
: >Eventually, you will have a library that has only two cards in them, both of
: >which are Blessings. At this point you stop.
: >

: Please demonstrate how you are _guaranteed_ to reach such a state.

You cannot prove that such a state is impossible, therefore you have to
allow for the possibility. The number of times you have to reshuffle
doesn't matter. A simpler example is the Goblin Bomb: there is no way to
guarantee that the Bomb will go off, but should it happen there is an
effect. We are discussing how to handle the situation should this
unlikely event occur. Whether or not lazy evaluation applies at all is
irrelevant.

James Wood

Jan 8, 1998, 3:00:00 AM

(The following is edited for brevity)

> >Each time the loop is performed, it has a very small chance of the desired
> >result happening. This means that if the desired result was carried
> >out to infinity, it will happen. By adding infinity to the equation,
> >even the most unlikely situation becomes a certainty eventually.
> >

> Simply because something has a nonzero probability of happening over a time
> period doesn't mean it will happen. In fact, probability doesn't tell us
> anything about whether that event will happen or not. Probability will only
> state that there is a 1 - (9/10) ^ 10 chance of that event happening at all.
>
> Now throw infinity into the mix. My previous post makes an attempt to show that
> there exists at least one possible state of the milled player's deck such that it
> is impossible to reach the two card library. Of course this is easy to produce.
> Now since each reshuffling of the deck randomizes it, presumably, each
> successive state after each reshuffling is independent from any other. So it is
> conceivable (never mind likely) that we could forever end up in the exact same
> state after each reshuffle. So the outcome after the entire sequence
> of millings is non-deterministic.

I have really enjoyed reading this thread. Obviously, you all have a good feel for
probibility. I have my own thoughts on probabbility.

1) It is probbable that you would lose any tournament round due to delay of game;
the judges cannot force your opponent to show you his deck before or during a game,
so you would have to do the infinite moves. (I'm not sure, but I think that would
take a long time.)

2) Further, it is probeble that you would be unable to convince your opponent, in a
friendly match, that you are correct. Again, time for infinite moves.

Therefore, the probibilitie that the combo would be of any practical use seems low.
Most importantly, it has been fun, and the disscusion was useful in coming to that
conclusion.

(I probule would spell "per-obb-e-Bill-itty" better if I used a dictionary, but with
more variations my probability of success increases doesn't it? :-) )


Paul Miller

Jan 9, 1998, 3:00:00 AM

On 8 Jan 1998 08:00:03 GMT, wgoo...@expert.cc.purdue.edu (Walter Goodwin)
wrote:

>In article <34b46a5d...@nntp.lni.net>,
>Paul Miller <pmiller@*DIESPAMMERS*.lni.net> wrote:
>
>>Consider also the converse of this statement. That is, the probability of it
>>*not* happening is nonzero. Now since it can't be guaranteed that such a state
>>will be reached with certainty, I would say lazy evaluation can't apply.
>
>Well, not to be pedantic, but this is slightly false :)
>

>Each time the loop is performed, it has a very small chance of the desired
>result happening. For the sake of argument, let's assume the chance of
>that event happening is 1 in 10 ^ 100 times. This means that if reality
>would follow probability, the desired result will happen once every
>googolth repetition. This means that if the desired result was carried
>out to infinity, it will happen. By adding infinity to the equation,
>even the most unlikely situation becomes a certainty eventually.
>

[snipped some because I agree with it assuming you accept that lazy evaluation
applies]

Simply because something has a nonzero probability of happening over a time
period doesn't mean it will happen. To take a simpler example, say we have an
event with 1/10 probability of happening and we take ten independent trials of
that event. Then, by your argument, that event *would* happen once out of the
ten chances. In fact, probability doesn't tell us anything about whether that
event will happen or not. Probability will only state that there is a 1 -
(9/10) ^ 10 chance of that event happening at all. That works out to about a
65% chance. Hardly a guarantee.
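That figure can be checked directly (and comes out closer to 65%); this is plain arithmetic, assuming nothing beyond independent trials:

```python
# Chance of at least one success in 10 independent trials,
# each with success probability 1/10.
p = 1 - (9 / 10) ** 10
print(round(p, 4))   # 0.6513, i.e. about a 65% chance
```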

Now throw infinity into the mix. My previous post makes an attempt to show that
there exists at least one possible state of the milled player's deck such that it
is impossible to reach the two card library. Of course this is easy to produce.
Now since each reshuffling of the deck randomizes it, presumably, each
successive state after each reshuffling is independent from any other. So it is
conceivable (never mind likely) that we could forever end up in the exact same
state after each reshuffle. I won't debate whether it's possible to achieve any
given state, because it certainly is. So the outcome after the entire sequence
of millings is non-deterministic. Hence lazy evaluation cannot apply.

Paul Miller

Jan 9, 1998, 3:00:00 AM

Kyle Nishioka wrote:

Sure it's not impossible. It's entirely conceivable that one could eventually
reach a 2 card library via such a milling process. But when Mr. Marcelais says
"Repeat. Eventually you will have a library that has only two cards in them,
..." he is essentially saying "Yes I am guaranteeing this process will
terminate eventually." If you buy that, I've got a bridge in Brooklyn to sell
you. :-) (no flame intended, Mike)

What we have here is a case of the halting problem stated in MTG terms. I've
given an argument showing that whether the process halts or not is undecidable,
which goes to the original question "Can I lazy evaluate a loop not infinitely,
but until some unlikely event happens?" Check Dave DeLaney's post on 12/20,
"Things Tom has said to the netreps" or somesuch, for Tom's answer. In short,
Tom says you have to guarantee you'll reach a terminating condition (if I am
interpreting Tom right; I'm open to suggestions as to what he actually means,
though).

Mike Marcelais

Jan 9, 1998, 3:00:00 AM

> [snipped some because I agree with it assuming you accept that lazy
> evaluation applies]
>
> Simply because something has a nonzero probability of happening over a time
> period doesn't mean it will happen.

Given an unlimited amount of time, yes it will happen.

> To take a simpler example, say we have an
> event with 1/10 probability of happening and we take ten independent trials
> of that event. Then, by your argument, that event *would* happen once out of
> the ten chances. In fact, probability doesn't tell us anything about whether
> that event will happen or not. Probability will only state that there is a 1 -
> (9/10) ^ 10 chance of that event happening at all. That works out to about
> a 65% chance. Hardly a guarantee.

But you aren't doing 10 trials. You are doing an arbitrarily large number of
trials. The chance (after X trials) of the event happening is 1 - 0.9^X.
As X gets larger and larger, that probability gets closer and closer to 1.
If you could actually do it an infinite number of times, the probability it
has happened _is_ 1. Since lazy evaluation lets you do something an infinite
number of times, AND you can easily verify if/when the deck is stacked like
you want, AND the first part of this paragraph proves that it will happen if
you do it an infinite number of times (although it could happen sooner), lazy
evaluation applies.


>
> Now throw infinity into the mix. My previous post makes an attempt to show
> that there exists at least one possible state of the milled player's deck such
> that it is impossible to reach the two card library. Of course this is easy to
> produce. Now since each reshuffling of the deck randomizes it, presumably, each
> successive state after each reshuffling is independent from any other. So
> it is conceivable (never mind likely) that we could forever end up in the exact
> same state after each reshuffle.

There is a certain probability that (after each shuffle) the deck will be
in an unsuitable state. However, given an infinite number of these shuffles,
you can show (very similarly to the previous paragraph) that the probability
that you get a suitable deck is 100% given an infinite number of shuffles.
Since you also have a way of determining when you have a suitable deck, you
can stop whenever you get such a deck.
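The limit argument here is the geometric distribution: if each shuffle independently succeeds with probability p, the chance of at least one success within n shuffles is 1 - (1-p)^n, which tends to 1 as n grows. A quick numeric illustration (the value of p is invented for the example):

```python
# Chance of at least one 'suitable' shuffle within n shuffles,
# for an illustrative per-shuffle success probability p.
p = 1e-6
for n in (10**6, 10**7, 10**8):
    print(n, 1 - (1 - p) ** n)   # climbs toward 1 as n grows
```

No finite n gives exactly 1, which is precisely the point of contention in this thread.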

Joemac69

Jan 9, 1998, 3:00:00 AM

If a deck is shuffled an infinite number of times,
and that deck contains only two Gaea's Blessings,
then a number of those shuffled states *will* be such
that the Blessings are the last two cards left.

You would have to then take into account conditions which would stop you from
milling down to only two cards (i.e. there is an odd number of cards in the deck,
forcing you to go to 3 cards left).

The question in this matter is not whether it can happen, because given enough
time (i.e. several thousand years) it definitely would. What is germane is
whether the (tournament) rules allow you to *jump* to that state.

On a slightly different note, I think the answer to this question would clarify
whether somebody who has a lock deck of some
sort can just say "you cannot win now because..." and then state why.

To explain: I had a chap who had a Chronatog-Stasis lock in force,
enough mana to cast 4 counterspells, and his opponent did not have enough mana
to cast a Scragnoth (which would have been useless once it attacked anyway) or
to cast any other spells that would not get countered. Stasis-man said, "You
will draw and then discard until you have nothing left and you're decked, or you
can try and cast something and I'll counter it; then, when you're tapped out,
you'll have no option but to deck yourself." His opponent then started to drag
his feet about what to discard. Stasis-man even offered to let him pick up his
whole deck and pick the 7 cards of his choice, in order to "cut to the chase" as
it were.

He was doing it within his minute of time and was just about timely, but I knew
he was taking just enough time to time out the game.

Should I
(1) have said "You've lost. Concede the match or I'll award it against you
anyway"? (there were 12 mins to go at this point)

(2) Say "There's little for you to think about except which card to discard.
I'm reducing your thinking time to 10 secs a draw. Get on with it or you'll be
out for cheating"?

(3) Tell Stasis-man "He can't beat you with the situation you've got, and
theoretically you must eventually win, but a time limit is a time limit, and if
he is using his specified thinking time then there's nothing I can do. Pick a
faster deck next time"?

or something else?

Please email me with thoughts

Joe McInally

Kytep

Jan 9, 1998, 3:00:00 AM

<snip: One player has the other totally locked down; the locked player can do
NOTHING about it; it is impossible for the locked player to win>

>Should I
>(1) have said "You've lost. Concede the match or I'll award it against you
>anyway"? (there were 12 mins to go at this point)
>
>(2) Say "There's little for you to think about except which card to discard.
>I'm reducing your thinking time to 10 secs a draw. Get on with it or you'll
>be out for cheating"?
>
>(3) Tell Stasis-man "He can't beat you with the situation you've got, and
>theoretically you must eventually win, but a time limit is a time limit, and if
>he is using his specified thinking time then there's nothing I can do. Pick a
>faster deck next time"?
>
>or something else?

I would say #1. It's not the control player's fault that the locked player is
not conceding or that the locked player is taking his/her own sweet time. Time
limits on tourneys are to keep the tourneys to a reasonable, predictable time
so everyone can go home at a decent hour, NOT to screw someone out of a win
because "TECHNICALLY, they didn't deck the person within the time constraints.
Nyah, nyah, nyah!" Whatever. If (and ONLY if) there is NO way for the locked
player to win (or the locking player to lose, or a tie to happen), then the
duel should be awarded to the locking player. A player should not HAVE to rely
on a "speed damage" deck to win, and have the time limits of a tourney hanging
over his/her head, forcing him/her to play a different kind of deck. This
takes away from the diversity of the game. When you can prove beyond a shadow
of a doubt you have won, then you have won. End of story. The only reason to
strictly impose the time limits is to a) Screw control players out of wins,
mostly because they piss people off and they love to see a control player
screwed this way, or b) Flat-out discourage control to be played at all in
tourneys, thereby reducing the field by one major deck archetype.

Allowing #1 is fine. It doesn't take any extra time, and EVERYONE knows who
won. It's like Al Capone getting off the hook because some idiot in the police
department didn't get the evidence in the absolute-most-proper-manner. Oooo,
guess he isn't guilty after all. My ass. When you have won, you have won.
Period.

Me,
Dave

Warrl kyree Tale'sedrin

Jan 10, 1998, 3:00:00 AM

Paul Miller wrote in rec.games.trading-cards.magic.rules:

>Sure it's not impossible. It's entirely conceivable that one could eventually
>reach a 2 card library via such a milling process. But when Mr. Marcelais says
>"Repeat. Eventually you will have a library that has only two cards in them,
>..." he is essentially saying "Yes I am guaranteeing this process will
>terminate eventually." If you buy that, I've got a bridge in Brooklyn to sell
>you. :-) (no flame intended, Mike)

You don't have enough interest in that bridge to be worth buying.

But when Mr. Marcelais says it is guaranteed that the process will
terminate eventually, he is effectively correct (assuming you actually
do create a new random order for the cards on each iteration). The
catch is that "eventually" is not defined; it could be one try, or one
try every ten seconds for the next gazillion years, or anywhere in
between.

Strictly speaking, it is not guaranteed that the process will ever
terminate; however, the chance of it not terminating within a hundred
years is vanishingly small. Unfortunately, it remains nonzero until
two weeks after eternity.


--------------------------------------------------------------
Pursuant to US Code, Title 47, Chapter 5, Subchapter II, §227,
any and all nonsolicited commercial E-mail sent to this address
is subject to a download and archival fee in the amount of $500
US. E-mailing denotes acceptance of these terms.

Jeffrey G. Montgomery

Jan 10, 1998, 3:00:00 AM

In article <01bd1d28$e43ad480$1fd1...@michmarc2.dns.microsoft.com>,

Mike Marcelais <mich...@microsoft.com> wrote:
>> Simply because something has a nonzero probability of happening over a time
>> period doesn't mean it will happen.
>
>Given an unlimited amount of time, yes it will happen.

As long as the probability is not zero, it will happen at some time, yes.
(I might win Saturday's lottery... I might not. If I play forever, and the
game goes on forever, eventually I'll win.)

>> To take a simpler example, say we have an
>> event with 1/10 probability of happening and we take ten independent trials
>> of that event. Then, by your argument, that event *would* happen once out of
>> the ten chances. In fact, probability doesn't tell us anything about whether
>> that event will happen or not. Probability will only state that there is a 1 -
>> (9/10) ^ 10 chance of that event happening at all. That works out to about
>> a 65% chance. Hardly a guarantee.

He never said the event WOULD happen. The more times you run an experiment,
the greater the chances of having an event you wish to occur to occur.
(We're talking about probability experiments; not mixing chemicals. :) )

>But you aren't doing 10 trials. You are doing an arbitrarily large number of
>trials. The chance (after X trials) of the event happening is 1 - 0.9^X.
>As X gets larger and larger, that probability gets closer and closer to 1.
>If you could actually do it an infinite number of times, the probability it
>has happened _is_ 1.

No, it is NOT 1 - it is just so CLOSE to 1 that we say it is.
What's 9 * 0? We really don't know, and I can prove it:

9 * 0 = 0
0 / 9 = ??????

What is 0? 1-0=1
The closer we get to 1, the less the difference is between 1 and the
current value. If we have .99999999999999999999999999999999999999..., we
call it 1 for ease of use, but it isn't 1 and never will be.
1-(.9^(infinity)) = 1-(almost 0)
We can never get 1 because .9^(infinity) never reaches 0 -- it just gets
REALLY REALLY close.

Therefore, the chances OF it happening are so close to 1 that we call it 1...
but in reality, it's not.

>Since lazy evaluation lets you do something an infinite
>number of times, AND you can easily verify if/when the deck is stacked like
>you want, AND the first part of this paragraph proves that it will happen if
>you do it an infinite number of times (although it could happen sooner), lazy
>evaluation applies.

Lazy evaluation is a way for us to look at an experiment and say "Which way
is this heading - 0 or 1?" You could shuffle from now until cows developed
the ability to swallow black holes... you might never get the deck the way
you want it.

>> Now throw infinity into the mix. My previous post makes an attempt to show
>> that there exists at least one possible state of the milled player's deck such
>> that it is impossible to reach the two card library. Of course this is easy to
>> produce.

I'll give you this one; but the question is, HOW do we check the state of
the deck without looking at it? :)

>> Now since each reshuffling of the deck randomizes it, presumably, each
>> successive state after each reshuffling is independent from any other. So
>> it is conceivable (never mind likely) that we could forever end up in the
>> exact same state after each reshuffle.

This is essentially what I said up above; but you're not the one I'm replying
to. :)

>There is a certain probability that (after each shuffle) the deck will be
>in an unsuitable state. However, given an infinite number of these shuffles,
>you can show (very similarly to the previous paragraph) that the probability
>that you get a suitable deck is 100% given an infinite number of shuffles.
>Since you also have a way of determining when you have a suitable deck, you
>can stop whenever you get such a deck.

Again, you will NEVER get 100%. You might get 99.99999999...%; but never
100%, because you are never multiplying by 1.

How do you determine if you have a suitable deck? Rules say no peeking. :)

Basic probability:
Chances of an event happening:
(number of desired outcomes)/(number of outcomes possible)
Chances of an event happening at least once in X tries:
1-([1-(number of desired outcomes)/(number of outcomes possible)]^X)
Chances of an event happening exactly ONCE in X tries:
X * (number of desired outcomes)/(number of outcomes possible) *
(1-[number of desired outcomes]/[number of outcomes possible])^(X-1)
Chances of an event happening exactly N times in X tries:
C(X,N) * ([number of desired outcomes]/[number of outcomes possible])^N *
(1-[number of desired outcomes]/[number of outcomes possible])^(X-N)
where C(X,N) = X!/(N!(X-N)!) counts the ways to arrange the N successes.

Examples:
Chances of winning a coin toss: 1/2
Chances of winning at least one toss in three: 1-([1-(1/2)]^3) =
1-([1/2]^3) =
1-(1/8) = 7/8
Chances of winning exactly one toss in three: 3 * (1/2) * ([1/2]^2) =
3 * 1/2 * 1/4 = 3/8
Chances of winning exactly five tosses in nine: C(9,5) * (1/2)^5 * (1/2)^4 =
126 * 1/512 = 126/512 = 63/256

Chances of winning exactly three die rolls in five, 2 or 5 being a win:
2 = number of winning outcomes, 6 = total number of possible outcomes.
2/6 = 1/3
C(5,3) * (1/3)^3 * ([2/3]^2) = 10 * 1/27 * 4/9 = 40/243

Chances of winning AT LEAST ONE die throw, 2 or 5 being a win:
1-([1-(1/3)]^5) = 1-([2/3]^5) = 1-(32/243) = 211/243

Conversely: chances of losing at least once (1, 3, 4, 6 are losses):
4 = number of losing outcomes, 6 = total number of possible outcomes.
4/6 = 2/3
1-([1-(2/3)]^5) = 1-([1/3]^5) = 1-(1/243) = 242/243
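These can all be checked exactly with a short Python sketch (mine, not part of the post; note the binomial factor C(X,N) in the "exactly N" cases, which counts the arrangements of the successes):

```python
from fractions import Fraction
from math import comb

def at_least_one(x, p):
    """P(at least one success in x independent trials at probability p)."""
    return 1 - (1 - p) ** x

def exactly(n, x, p):
    """P(exactly n successes in x independent trials at probability p)."""
    return comb(x, n) * p**n * (1 - p) ** (x - n)

half, third = Fraction(1, 2), Fraction(1, 3)
print(at_least_one(3, half))    # 7/8     at least one of three tosses
print(exactly(1, 3, half))      # 3/8     exactly one of three tosses
print(exactly(5, 9, half))      # 63/256  exactly five of nine tosses
print(exactly(3, 5, third))     # 40/243  exactly three of five die rolls
print(at_least_one(5, third))   # 211/243 at least one of five die rolls
```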

Probability is fun when you know how to do it. :)

Jeff

David DeLaney

Jan 10, 1998, 3:00:00 AM

In a previous article, fwo...@monmouth.com () says:
>Inspired by Dave's recent weekly summary, a thought I had been
>processing in the background finally surfaced. It's been ruled that if
>you have the Earthcraft/Sacred Mesa/Wild Growthed Plains/Altar of
>Dementia combo, and your opponent is playing with, say, four Gaea's
>Blessings, then you can "cut to the chase" and leave your opponent
>with just the 4 Gaea's Blessings as their library. Fine.

Did I say that got _ruled_ that way, or did I _ask_ about it in my
Summary? I can't see my Rulings file from here so I can't recall which
way Tom jumped when asked...

>Now, you actually have an arbitrarily large number of 2-card-Mill
>effects that you can apply. Now, after any number of these effects,
>there will be an odd number of cards in your opponent's library. The
>above-mentioned ruling doesn't exactly cover this situation, does it?

>You can't say that there are 3 Blessings left, as whichever one is in
>the graveyard would have had a reshuffle effect. So let's say there
>are 5 cards left, with 4 of them Blessings. Which is the fifth card?

You don't know, actually...

>An argument could be made that it's a random card, but I could also
>argue as follows: I can do any number of mills, so I see every card in
>your library.

Er: no. By your own argument, there's always one card, at least, you won't
see every time; it _could_ happen that that's always the same card, even
with an unbounded number of repetitions. So you can't say "I _will_ get
to a point where I have seen every card". [Though by the same token you
can't say "I _will_ get to a point where there's five cards in his library and
four are Blessings", now that I think about it.]

>Therefore, I'll stop when I see every card except one I
>choose (and the 4 Blessings of course) in your graveyard, so *I get to
>choose* the fifth card. Note that changing the power of the generated
>tokens could leave any number of unspecified cards (say I mill 7 cards
>at a time. Can I pick the other three?

I'll give you "picking the one/three".

>How about the order?

Nope. You haven't a clue what the _order_ of the cards in his library is,
at any time except when you're actually looking through it... and all
the things that let you do that make you shuffle afterwards as part of the
effect.

>And speaking of order, in what order are the milled cards stacked in the
>graveyard for any of the above cases. Using a similar argument, I
>could say *I* choose the order of your graveyard.

No, actually. The owner of the library chooses the order for each pair of
cards that is in the graveyard. You can determine, in the above nebulous
manner, which _pair_ of cards occupies each pair of positions, as well
as the card(s) left in the library ... but he gets to say "Of those two,
I chose for this one to go in first, then that one" any time they go together,
as Grindstone or Millstone do, and he can stick to that each time, so he
gets to determine the order of each pair.

Dave
--
\/David DeLaney d...@panacea.phys.utk.edu "It's not the pot that grows the flowe
It's not the clock that slows the hour The definition's plain for anyone to se
Love is all it takes to make a family" - R&P. VISUALIZE HAPPYNET VRbeable<BLINK
http://panacea.phys.utk.edu/~dbd/ - net.legends FAQ/ I WUV you in all CAPS! --K

David DeLaney

Jan 10, 1998, 3:00:00 AM

Okay, I went back to the Rulings/Things Tom Said file to see what he -did-
say, and it's interestingly informative. Near-quote:

"You can cut to the chase _if_ you can know for sure whether you've reached the
desired final state. For instance, it's usually possible to tell if there's one
Gaea's Blessing left in his library, but cards like Mangara's Tome can muck
this up."

He's not saying here that you can cut to the chase if there exists _no chance
whatsoever that you can fail to attain_ the desired final state. He's
saying something different: that if you can _tell_ whether you've _gotten to_
the desired final state, then you can cut through the having to do it over and
over again and possibly getting stuck in an infinitely unlikely but possible
loop of never having that particular configuration come up.

The example he gives notes that there _are_ times when you _can't_ see all
the cards he's got - when some are face down, set aside by Mangara's Tome,
for instance. So even if you've seen his whole library before in the game,
at that point you don't know if the Blessings you're looking for are _in_ it.

But given that he doesn't have cards set aside that you can't see ... and you
know what's in his hand for some reason ... and you know such-and-such
cards _are_ in his deck and aren't already out-of-game or in the ante or
in play ... then Tom's saying that you can in that case determine if you've
_reached_ the state you're looking for, so that if you can repeat the process
of trying to get things into that state infinitely with a non-zero chance
of success each time then you can cut to the chase and "lazily evaluate".
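Tom's distinction — not "the loop must be guaranteed to halt" but "you must be able to tell whether it has halted" — can be sketched as a loop with a checkable stopping test. This is a toy model of mine in Python, with made-up card names and a made-up six-card "library", not real Magic rules:

```python
import random

def repeat_until(trial, reached_goal, rng, max_steps=1_000_000):
    """Repeat a random trial until a condition we can *check* holds.

    We can't prove in advance that the loop halts, but at each step
    we can tell whether it has -- which is all the criterion asks for.
    """
    for step in range(1, max_steps + 1):
        state = trial(rng)
        if reached_goal(state):          # the checkable halting test
            return step, state
    raise RuntimeError("goal not reached in max_steps tries")

# Toy example: reshuffle a six-card "library" until both "Blessings"
# end up as the bottom two cards.
deck = ["GB", "GB", "Plains", "Mesa", "Crusade", "Island"]
steps, final = repeat_until(
    lambda rng: rng.sample(deck, len(deck)),   # one fair shuffle
    lambda d: set(d[-2:]) == {"GB"},           # visible success test
    random.Random(0),
)
print(steps, final[-2:])
```

The "cut to the chase" ruling amounts to skipping the loop body and declaring the goal state reached, which is only coherent because `reached_goal` is something both players can verify.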

At least that's what I'm interpreting the above as...

More questions?

David DeLaney

Jan 10, 1998, 3:00:00 AM

In a previous article, pmiller@*DIESPAMMERS*.lni.net (Paul Miller) says:
>What we have here is a case of the halting problem stated in MTG terms.

Correct.

And looking at what Tom _did_ say, he didn't say "It has to be guaranteed
to halt to do this"; instead he said the weaker "It has to be that you can
_tell if you have halted_ to do this". It's generally not a problem to tell
_if_ a program has halted - the problem comes in trying to figure out
if it _will_ halt in the future for a given input [without actually running
it to see].

> I've
>given an argument showing that whether the process halts or not is undecidable,
>which goes to the original question "Can I lazy evaluate a loop not infinitely,
>but until some unlikely event happens?" Check Dave DeLaney's post on 12/20
>"Things Tom has said to the netreps" or somesuch for Tom's answer. In short,
>Tom says you have to guarantee you'll reach a terminating condition (if I am
>interpreting Tom right. I'm open to suggestions as to what he actually means,
>though.)

Well, no; I believe I'd interpret that as the weaker "you have to be able to
_tell if you have reached_ the terminating condition". Not that you have to
guarantee you _will_, which _is_ essentially the halting problem with
a randomness in the program...

Dustin Wood

Jan 10, 1998, 3:00:00 AM

snippers applied freely :)

Jeffrey G. Montgomery wrote:

> >But you aren't doing 10 trials. You are doing an arbitrarily large number of
> >trials. The chance of the event happening (after X trials) is 1 - 0.9^X.
> > As X gets larger and larger, that probability gets closer and closer to 1.
> >If you could actually do it an infinite number of times, the probability it
> >has happened _is_ 1.
>
> No, it is NOT 1 - it is just so CLOSE to 1 that we say it is.
> What's 9 * 0? We really don't know, and I can prove it:
>
> 9 * 0 = 0
> 0 / 9 = ??????
>

0/9 = 0; it always has been. On the other hand, 9/0 is undefined.

> What is 0? 1-0=1
> The closer we get to 1, the less the difference is between 1 and the
> current value. If we have .99999999999999999999999999999999999999..., we
> call it 1 for ease of use, but it isn't 1 and never will be.
> 1-(.9^(infinity)) = 1-(almost 0)
> We can never get 1 because .9^(infinity) never reaches 0 -- it just gets
> REALLY REALLY close.
>

Wrong. .999999... out to infinity IS exactly equal to one. Let's see if I
remember my basic Algebra for converting repeating decimals to fractions.

Let's let x = 0.4545454545....
then 10x = 45.454545454545....
and 10x - x = 9x
and 45.45454545.... - 0.45454545.... = 45.0
then, by substitution, 9x = 45
using division we get x = 45 / 9
voila, .45454545..... exactly equals 45 / 9

now lets apply this to 0.999999999...
let x = 0.9999999...
10x = 9.99999999....
10x - x = 9x and 9.9999.... - 0.99999.... = 9.0
therefore 9x = 9 and x = 9 / 9 = 1
therefore 0.999999.... is EXACTLY equal to 1.0
If you still disagree, then go talk to any Mathematics teacher and they will tell
you the same thing.

> Therefore, the chances OF it happening are so close to 1 that we call it 1...
> but in reality, it's not.
>

Ahhh, but it is, as I just showed.

> Lazy evaluation is a way for us to look at an experiment and say "Which way
> is this heading - 0 or 1?" You could shuffle from now until cows developed
> the ability to swallow black holes... you might never get the deck the way
> you want it.

True, if we assume that cows will develop that ability, but if we go out to
infinity instead, then we will have experienced every possible card ordering
an infinite number of times.

> HOW do we check the state of
> the deck without looking at it? :)
>

The idea is to check the graveyard, which is legal, to see if the cards that you
want are there or not and to continue forcing shuffles and milling cards until you
do see the exact cards you want to see in the graveyard. You can't know the
_order_ of the cards in the library, but you can know _what_ cards are in the
library.

>
>
> >> Now since each reshuffling of the deck randomizes it, presumably, each
> >> successive state after each reshuffling is independent from any other. So it is
> >> conceivable (never mind likely) that we could forever end up in the exact same
> >> state after each reshuffle.

That statement is EXTREMELY misleading. Once you invoke infinity, you guarantee
that ALL combinations will eventually occur. This is the problem with infinity, it
guarantees itself too. By that I mean that given a particular order of the cards
after a shuffle, with infinite shuffles, sometime you will get that same shuffle
exactly twice in a row. You will also get a stretch of exactly 3, 4, 100 and yes
infinite consecutive shuffles yielding the same order of cards. Not only that but
ALL possible orderings will have an infinite number of shuffling sequences that
yield that same ordering an infinite number of consecutive times. I know this is a
paradox because one string of orderings can't start until the previous one ends,
but that is what happens when you start messing with infinity. EVERY possible
thing that _can_ happen, WILL happen. And since you _could_ get the same shuffle
infinitely, somewhere within that infinity _will_ be an infinite string of that
shuffle.
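For what it's worth, the chance of staying "stuck" can be put in numbers. This is a sketch of mine with a hypothetical d-card library: each fair shuffle picks one of d! equally likely orderings, so the chance of missing one particular ordering in every one of n independent shuffles is (1 - 1/d!)^n, which shrinks toward 0 as n grows but is nonzero for every finite n.

```python
from fractions import Fraction
from math import factorial

# Sketch: probability that one particular ordering of a d-card library
# is never seen in any of n independent fair shuffles. It tends to 0
# as n grows, but never reaches 0 for finite n.
def chance_never_seen(d, n):
    p = Fraction(1, factorial(d))    # chance of hitting it per shuffle
    return (1 - p) ** n

print(float(chance_never_seen(4, 100)))   # about 0.014
```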

> How do you determine if you have a suitable deck? Rules say no peeking. :)
>

See above about the graveyard.

> Probability is fun when you know how to do it. :)

And even more so if you really understand the concept of infinity :)

Dustin


Dustin Wood

Jan 10, 1998, 3:00:00 AM

Mike Marcelais wrote:

> There is a certain probability that (after each shuffle) the deck will be
> in an unsuitable state.

That I agree with, but it isn't what he said. He said that the deck would be in a
state where it is _impossible_ to reach the 2 card library. There is a
difference.

Dustin


Dustin Wood

Jan 10, 1998, 3:00:00 AM

Butchered all to..... :)

Paul Miller wrote:

> Simply because something has a nonzero probability of happening over a time
> period doesn't mean it will happen. To take a simpler example, say we have an
> event with 1/10 probability of happening and we take ten independent trials of
> that event. Then, by your argument, that event *would* happen once out of the
> ten chances.

He never said anything even close to that.

> In fact, probability doesn't tell us anything about whether that
> event will happen or not.

Sure it does: it tells us that _on average_ it will happen once for every ten
chances.

> Probability will only state that there is a 1 -
> (9/10) ^ 10 chance of that event happening at all. That works out to about a
> 61% chance. Hardly a guarantee.
>

But that is only for ten tries. Hardly a fair comparison to infinity.

> Now throw infinity into the mix. My previous post makes an attempt to show that
> there exists at least one possible state of the milled player's deck such that it
> is impossible to reach the two card library. Of course this is easy to produce.

I would like to see you produce even one! I understand what you meant, but what you
actually said is a different matter altogether. Only if the deck in question directly
leads to a situation that CANNOT lead to the two card deck (i.e. not the two cards
you want AND no way to reshuffle) can you say that you have a deck from which it is
impossible to reach the desired 2 card deck. As far as I can tell, given the defined
situation, you will ALWAYS be able to reshuffle, even with the desired two card deck.

> Now since each reshuffling of the deck randomizes it, presumably, each
> successive state after each reshuffling is independent from any other. So it is
> conceivable (never mind likely) that we could forever end up in the exact same
> state after each reshuffle. I won't debate whether it's possible to achieve any
> given state, because it certainly is. So the outcome after the entire sequence
> of millings is non-deterministic. Hence lazy evaluation cannot apply.

See one of my lower postings for a discussion on this :)

Dustin


Dustin Wood

Jan 10, 1998, 3:00:00 AM


Mike Marcelais wrote:

> Another case that nobody has clued into yet: how many blessings does your
> opponent have?
>

good question :)

> You can clearly know if your opponent has any. However, consider the
> following situation: Library has 10 cards, Graveyard empty, milling two at a
> time. You mill Crusade, Crusade, Crusade, Mesa Pegasus, Mesa Pegasus, Mesa
> Pegasus, Gaea's Blessing, Gaea's Blessing, *reshuffle twice*.
>
> Now, you know the "library" has 3 Crusades, 3 Pegasi, 2 Blessing, and 2 other
> cards. But you have no way of knowing what those other cards are. They could
> be two more Blessings, or they could be a Crusade and a Pegasus, but no amount
> of shuffling will allow you to determine this because it is not possible to
> distinguish between "I've been getting bad luck and can't ever mill 2 cards
> out of 4 left that are both not GB's" vs "There are 4 GB left in the library
> (out of 4 cards).". The best that you can do is leave the library with 4
> cards, which contain all of the GB left in the game, plus 2 other cards.
>
> Note that _NON-LAZY_ evaluation might be able to solve the problem by
> actually doing the actions a few times and determining the outcomes (eg, if
> the second time through, you mill 4 Crusades, then you know one of the two
> "hidden" cards is a Crusade), but Lazy evaluation can't help you because one
> of the two outcomes can be "proven"; with the other outcome being "I don't
> know".

hmmmm... just speaking about the infinity side and ignoring magic rules for a
sec..... Given an infinite number of shuffles you will know how many GB's are in
your opponent's deck because ALL possibilities WILL be exhausted an infinite number of
times. Therefore, if you never see beyond the four card mark, your opponent has
exactly four GB's.

The problem is that you can't actually do it an infinite number of times. So.....
somehow you have to be told/shown those last four cards. This goes against the
rules of having to reveal your deck to your opponent.

If it is decided that your opponent will have to let you know what he has, then we
can stop here. But if he/she doesn't have to show you, then we have your
example, but on a slightly bigger scale.

It seems that most of us feel that you should be able to arrange the cards in the
graveyard to your liking (at least in pairs of your liking anyway). But you only
get to arrange the cards you know about.

Basically the problem is this: Assume I get the combo set up. I run through your
deck until I hit the first GB. I now know some number X of your cards. If this
is all of them, then I am happy and can stop looking and start worrying about
setting your graveyard up. But what if it is only a few of your cards, or only 2
even. Then I am not so happy, I want you to shuffle so I can see more cards
before I invoke the lazy evaluation. In a tournament, how long should I be
allowed to do this before it is called stalling? Do I have to write down all the
cards and how many of each I've seen? This will lead to all kinds of problems :)
Looks like fun for the tournament judges if this works :)

Dustin :)


Dustin Wood

Jan 10, 1998, 3:00:00 AM


Warrl kyree Tale'sedrin wrote:

> Strictly speaking, it is not guaranteed that the process will ever
> terminate; however, the chance of it not terminating within a hundred
> years is vanishingly small. Unfortunately, it remains nonzero until
> two weeks after eternity.

Given that your opponent has only 2 GB's and the number of cards in the graveyard
and draw pile totals an even number, then Yes, it is guaranteed to happen if the
process is carried out to infinity. It is not guaranteed to happen within a
trillion years, though it would be extremely unlikely not to, but that is because a
trillion years is a finite amount of time. Given an infinite number of tries,
eventually all possibilities will occur. Choosing to stop when you have obtained a
specific one only means that any others may or may not have occurred before you got
that one. However, if all you want is that one, then you will eventually get it.

Dustin


Dustin Wood

Jan 10, 1998, 3:00:00 AM


David DeLaney wrote:

> >So let's say there
> >are 5 cards left, with 4 of them Blessings. Which is the fifth card?
>
> You don't know, actually...
>
> >An argument could be made that it's a random card, but I could also
> >argue as follows: I can do any number of mills, so I see every card in
> >your library.
>
> Er: no. By your own argument, there's always one card, at least, you won't
> see every time; it _could_ happen that that's always the same card, even
> with an unbounded number of repetitions. So you can't say "I _will_ get
> to a point where I have seen every card". [Though by the same token you
> can't say "I _will_ get to a point where there's five cards in his library and
> four are Blessings", now that I think about it.]

Sorry to keep harping on this, but with infinity, ALL arrangements will occur and
you WILL get to see all of his cards.

> You haven't a clue what the _order_ of the cards in his library is,
> at any tinme except when you're actually looking through it... and all
> the things that let you do that make you shuffle afterwards as part of the
> effect.

They do? How about Orcish Spy? Or Elemental Augury? Granted that's not his
entire library (unless he only has 3 cards left) but if all we want is to know
where that 1 card is, these should do the trick.

Dustin


David Wintheiser

Jan 10, 1998, 3:00:00 AM

Dustin Wood wrote:
>
> snippers applied freely :)

Nice idea. I'll do the same.

> Jeffrey G. Montgomery wrote:
>
> > We can never get 1 because .9^(infinity) never reaches 0 -- it just gets
> > REALLY REALLY close.
> >
>

> Wrong. .999999... out to infinity IS exactly equal to one. Let's see if I
> remember my basic Algebra for converting repeating decimals to fractions.
>
> Let's let x = 0.4545454545....
> then 10x = 45.454545454545....
> and 10x - x = 9x
> and 45.45454545.... - 0.45454545.... = 45.0
> then, by substitution, 9x = 45
> using division we get x = 45 / 9
> voila, .45454545..... exactly equals 45 / 9
>
> now lets apply this to 0.999999999...
> let x = 0.9999999...
> 10x = 9.99999999....
> 10x - x = 9x and 9.9999.... - 0.99999.... = 9.0
> therefore 9x = 9 and x = 9 / 9 = 1
> therefore 0.999999.... is EXACTLY equal to 1.0
> If you still disagree, then go talk to any Mathematics teacher and they will tell
> you the same thing.

A very elegant proof. Nice job.

> > HOW do we check the state of
> > the deck without looking at it? :)
> >
>

> The idea is to check the graveyard, which is legal, to see if the cards that you
> want are there or not and to continue forcing shuffles and milling cards until you
> do see the exact cards you want to see in the graveyard. You can't know the
> _order_ of the cards in the library, but you can know _what_ cards are in the
> library.

True, you can't "know" the order of the cards in the library, because it
isn't legal by the rules of Magic to look at them. You could, however,
mandate an order given some other condition--say you want to end this
lazy loop with all four of your opponent's Gaea's Blessings and one
other arbitrary card in his library, and you want the arbitrary card to
be on top of the library. Assuming that the total number of cards in
your opponent's graveyard and library equals an odd number, you can
perform lazy evaluation until those five cards are the only cards in the
library. With Field of Dreams in play (the top card of each player's
library is face-up), you could also see whether the top card is a
Blessing or not, and if it isn't, "restart" the lazy loop until the
combination of five cards with the arbitrarily chosen card on top is
reached.

There's another question involved here as well--how do you know how many
Blessings your opponent has in his/her library? Your opponent is not
required to tell you. With Altar of Dementia (or Millstone for that
matter, assuming there was a way to use Millstone for this infinite
loop), you will never see more than two cards from your opponent's
library at once; so long as your opponent has more than one Blessing in
the deck, you would, during your loop, see two Blessings in the same
mill from time to time, but without a way to look at the entire library
at once, you could never be sure if those two Blessings were the only
ones, or if they were merely a combination of two out of three or four
Blessings. Also (and this falls into the "no sh*t, Sherlock"
classification of observations), this loop won't affect any Blessings
your opponent is currently holding in his/her hand.

> I know this is a
> paradox because one string of orderings can't start until the previous one ends,
> but that is what happens when you start messing with infinity. EVERY possible
> thing that _can_ happen, WILL happen. And since you _could_ get the same shuffle
> infinitely, somewhere within that infinity _will_ be an infinite string of that
> shuffle.

Yes, this is confusing--the idea that infinity is infinitely larger than
infinity itself confuses anyone who hasn't had a fairly high level of
math instruction. In our group, when we started playing around with
Enduring Renewal engines, we made a distinction--whenever a loop
produced some commodity (mana from Ashnod's Altar, draws from
Inheritance, mills from Altar of Dementia), rather than calling it
"infinite", we started calling it "limitless", where "limitless" is
defined as "enough to achieve an arbitrary effect" like kill an opponent
regardless of how much finite damage prevention he/she has available.

You: "I complete the limitless mana loop by casting Ashnod's Altar,
generate limitless mana by repeatedly sacrificing Shield Sphere with
Enduring Renewal in play, and tap this mountain for R. I Fireball you
for limitless damage."
Opponent: "Guess this Healing Salve won't do much good. I'm dead."

Or, somewhat more humorously:

You: <same statement as first one above>
Opponent: "Ah, good thing I have this Honorable Passage. You're dead."
You: "Shoot."

> > How do you determine if you have a suitable deck? Rules say no peeking. :)
>

> See above about the graveyard.

And also Field of Dreams above. If anyone else can think of any
continuous or reusable library "peeking" abilities, let us know.

David Wintheiser

"It is not that you will go mad. It is that you will beg for madness."
-Volrath (from "Altar of Dementia")

James Wood

Jan 10, 1998, 3:00:00 AM


David Wintheiser wrote:

> Dustin Wood wrote:
> >
> > snippers applied freely :)

>
>
> Nice idea. I'll do the same.
>

I prefer the butcher knife approach.

> > Jeffrey G. Montgomery wrote:
> > > HOW do we check the state of
> > > the deck without looking at it? :)
> > >

> True, you can't "know" the order of the cards in the library, because it
> isn't legal by the rules of Magic to look at them. You could, however,
> mandate an order given some other condition--say you want to end this
> lazy loop with all four of your opponent's Gaea's Blessings and one
> other arbitrary card in his library, and you want the arbitrary card to
> be on top of the library. Assuming that the total number of cards in
> your opponent's graveyard and library equals an odd number, you can
> perform lazy evaluation until those five cards are the only cards in the
> library. With Field of Dreams in play (the top card of each players
> library is face-up), you could also see whether the top card is a
> Blessing or not, and if it isn't, "restart" the lazy loop until the
> combination of five cards with the arbitrarily chosen card on top is
> reached.
>
> There's another question involved here as well--how do you know how many
> Blessings your opponent has in his/her library? Your opponent is not required to tell
> you.

That's where Jester's clothing, Lobotomy, and Grinning Totems come into play. Of
course, these might also negate the need to worry about Blessings altogether. Then
we might actually 'mill' their deck completely. What a bummer.

> Also (and this falls into the "no sh*t, Sherlock"
> classification of observations), this loop won't affect any Blessings
> your opponent is currently holding in his/her hand.

(This also falls into that category) Even if the player has nothing but 4 Blessings left
in his deck/hand/graveyard, he could still cast the Blessings infinitely. Why do Blue
decks always get this complicated? This is why I stay away from playing Blue.(I know
Winter Orb or Abeyance also work, but the Orb is slower and you would need all 4 of your
Abeyances to be safe)

> If anyone else can think of any
> continuous or reusable library "peeking" abilities, let us know.
>
> David Wintheiser

Just off the top of my head: Orcish Spy, Elemental Augury, Zur's Weirding,
Breathstealer's Crypt and Precognition.

Four Precognitions would actually allow you to re-order his remaining library (something
we have been told cannot be done), while with Zur's you could finish decking him.
(Bazaar of Wonders would also let you deck him regardless of his library order.)

Precognition card text: "During your upkeep you may look at the top card of target
opponent's library. You may then place that card on the bottom of his or her library."

Dustin Wood

Jan 10, 1998, 3:00:00 AM

> Wrong. .999999... out to infinity IS exactly equal to one. Let's see if I
> remember my basic Algebra for converting repeating decimals to fractions.
>
> Let's let x = 0.4545454545....
> then 10x = 45.454545454545....
> and 10x - x = 9x
> and 45.45454545.... - 0.45454545.... = 45.0
> then, by substitution, 9x = 45
> using division we get x = 45 / 9
> voila, .45454545..... exactly equals 45 / 9
>
> now lets apply this to 0.999999999...
> let x = 0.9999999...
> 10x = 9.99999999....
> 10x - x = 9x and 9.9999.... - 0.99999.... = 9.0
> therefore 9x = 9 and x = 9 / 9 = 1
> therefore 0.999999.... is EXACTLY equal to 1.0
> If you still disagree, then go talk to any Mathematics teacher and they will tell
> you the same thing.

Oooops, sorry everyone. Guess I can't even remember elementary school math :) The
argument is sound, I simply put in the wrong numbers. The second example is correct;
however, the first example should look like this:

Let's let x = 0.4545454545....
then _100_x = 45.454545454545....
and _100_x - x = _99_x
and 45.45454545.... - 0.45454545.... = 45.0
then, by substitution, _99_x = 45
using division we get x = 45 / _99_
voila, .45454545..... exactly equals 45 / _99_

Sorry bout that :)
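The corrected algebra can be double-checked with exact fractions (a quick sketch of mine, not part of the original post):

```python
from fractions import Fraction

# If x = 0.454545..., then 100x - x = 45, so x = 45/99.
x = Fraction(45, 99)
assert 100 * x - x == 45          # the subtraction step
assert x == Fraction(5, 11)       # 45/99 in lowest terms

# Same trick for 0.999...: 10x - x = 9, so x = 9/9 = 1 exactly.
y = Fraction(9, 9)
assert 10 * y - y == 9
assert y == 1
print("both check out")
```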

Dustin


Dustin Wood

Jan 10, 1998, 3:00:00 AM


Paul Miller wrote:

> Mike Marcelais (mich...@microsoft.com) wrote:
>
> >There is a certain probability that (after each shuffle) the deck will be
> >in an unsuitable state. However, given an infinite number of these shuffles,
> >you can show (very similarly to the previous paragraph) that the probability
> >that you get a suitable deck is 100% given an infinite number of shuffles.
> >Since you also have a way of determining when you have a suitable deck, you
> >can stop whenever you get such a deck.
>

> Mike, don't take this the wrong way, but you're completely wrong. Look at it
> this way, you have two possible outcomes: A, the deck will never reach a
> suitable state, and B, the deck will reach a suitable state. Now what you're
> saying is A = B and B = 100%? Surely you can see the utter lack of logic to
> that conclusion. One thing is certain: someone's definition of probability is
> screwed.

Paul, don't take this the wrong way, but you're completely wrong :) Your problem is
that you are ignoring the properties of infinity. If I have two infinite strings of
events and I add them together what do I get? I get one infinite string of events.
I can then break that string into as many different, but still infinite, strings of
events as I want. I can also break it into an infinite number of finite strings of
events if I want.

The problem isn't the definition of probability, but what happens to that definition
when it encounters infinity.

> Now on to the crux of my argument. I don't care about probabilities at all.
> I'm claiming that there is no guarantee this process will ever terminate, and no
> way to tell if it will ever terminate, either in a finite or infinite number of
> steps, given only the available information. Apply this to any probabilistic
> situation in which the probability of your desired outcome is not certain.
>
> You: I do <such and such> until <such and such>. I'm using probabilistic lazy
> evaluation to guarantee that I'm reaching the desired state.
>
> Me: Well, I, too, am using probabilistic lazy evaluation, and I am lazy
> evaluating this process to <some other state>

You don't have control of this process, I do :-P

>

> At which point, the judge walks over and gives us both a warning for failure to
> agree on reality.
>
> In this situation, the only person I agree with here is the judge. :)

This is a totally irrational argument for the current thread. If I'm the one with
the combo, then I have the possibility of running you through your deck as many
times as I wish. There is no way for you (under the given situation) to do anything
as many times as you wish. The process stops only when I choose to stop, not when
you do.

If you throw something else in, where you can infinitely loop too, and somehow your
looping affects mine, then we have a problem, but _that is not the case here_.

Dustin


Dustin Wood

unread,
Jan 10, 1998, 3:00:00 AM1/10/98
to

hehehe, you are right, I screwed up, but so did you :)

Steen Bang-Madsen wrote:

> > Let's let x = 0.4545454545....
> > then 10x = 45.454545454545....
> > and 10x - x = 9x
> > and 45.45454545.... - 0.45454545.... = 45.0
> > then, by substitution, 9x = 45
> > using division we get x = 45 / 9
> > voila .45454545..... exactly equals 45 / 9
>

> Hmmm. Not quite. I've never had any algebra, but this looks
> wrong.
> In your first argument you start out by miscalculating. But
> still completing the piece you end up saying that
> .45454545.... exactly equals 45 / 9. Isn't 45 / 9 = 5 ?
> 10 x 0.45454545... isn't 45.454545... but 5.45454545

Nope :) we are both wrong here, it should be 4.54545454...

I also noticed my error. Unfortunately it wasn't until I was reading it on the
newsgroup. I have posted a correction, but thank you for keeping me on my toes :)

Dustin


Paul Miller

unread,
Jan 11, 1998, 3:00:00 AM1/11/98
to

Mike Marcelais (mich...@microsoft.com) wrote:

>There is a certain probability (after each shuffle) that the deck will be
>in an unsuitable state. However, given an infinite number of these shuffles,
>you can show (very similarly to the previous paragraph) that the probability
>that you get a suitable deck is 100% given an infinite number of shuffles.
>Since you also have a way of determining when you have a suitable deck, you
>can stop whenever you get such a deck.

Mike, don't take this the wrong way, but you're completely wrong. Look at it
this way, you have two possible outcomes: A, the deck will never reach a
suitable state, and B, the deck will reach a suitable state. Now what you're
saying is A = B and B = 100%? Surely you can see the utter lack of logic to
that conclusion. One thing is certain: someone's definition of probability is
screwed.

Now on to the crux of my argument. I don't care about probabilities at all.
I'm claiming that there is no guarantee this process will ever terminate, and no
way to tell if it will ever terminate, either in a finite or infinite number of
steps, given only the available information. Apply this to any probabilistic
situation in which the probability of your desired outcome is not certain.

You: I do <such and such> until <such and such>. I'm using probabilistic lazy
evaluation to guarantee that I'm reaching the desired state.

Me: Well, I, too, am using probabilistic lazy evaluation, and I am lazy
evaluating this process to <some other state>

At which point, the judge walks over and gives us both a warning for failure to
agree on reality.

In this situation, the only person I agree with here is the judge. :)

Steen Bang-Madsen

unread,
Jan 11, 1998, 3:00:00 AM1/11/98
to

> > We can never get 1 because .9^(infinity) never reaches 0 -- it just gets
> > REALLY REALLY close.
> >
>
> Wrong. .999999... out to infinity IS exactly equal to one. Let's see if I
> remember my basic Algebra for converting repeating decimals to fractions.
>
> Let's let x = 0.4545454545....
> then 10x = 45.454545454545....
> and 10x - x = 9x
> and 45.45454545.... - 0.45454545.... = 45.0
> then, by substitution, 9x = 45
> using division we get x = 45 / 9
> voila .45454545..... exactly equals 45 / 9
>
> now lets apply this to 0.999999999...
> let x = 0.9999999...
> 10x = 9.99999999....
> 10x - x = 9x and 9.9999.... - 0.99999.... = 9.0
> therefore 9x = 9 and x = 9 / 9 = 1
> therefore 0.999999.... is EXACTLY equal to 1.0
> If you still disagree, then go talk to any Mathematics teacher and they will tell
> you the same thing.

Hmmm. Not quite. I've never had any algebra, but this looks
wrong. In your first argument you start out by miscalculating. But
still completing the piece you end up saying that
.45454545.... exactly equals 45 / 9. Isn't 45 / 9 = 5 ?
10 x 0.45454545... isn't 45.454545... but 5.45454545 and
that makes it hard to complete the calculations. I think you
meant to multiply by 100, which would make the argument look
like:

Let's let x = 0.4545454545....
then 100x = 45.454545454545....
and 100x - x = 99x
and 45.45454545.... - 0.45454545.... = 45.0
then, by substitution, 99x = 45
using division we get x = 45 / 99
voila .45454545..... exactly equals 45 / 99

Not that it really matters, because the second calculation
was completely correct.
I simply wanted to correct the error.
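For what it's worth, the corrected arithmetic can be checked exactly with a few lines of Python (just a sketch; the long-division loop is only there to display the repeating digits):

```python
from fractions import Fraction

x = Fraction(45, 99)
assert 100 * x - x == 45     # the "100x - x = 45" step, exact
assert x == Fraction(5, 11)  # 45/99 in lowest terms

# Long division of 45 by 99, to show the repeating expansion:
digits, r = "", 45
for _ in range(10):
    r *= 10
    digits += str(r // 99)
    r %= 99
print("0." + digits + "...")  # 0.4545454545...
```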

See ya.

Walter Goodwin

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

In article <34b81755...@nntp.lni.net>,
Paul Miller <pmiller@*DIESPAMMERS*.lni.net> wrote:
>Mike, don't take this the wrong way, but you're completely wrong.

No, he's not. I'm first going to give a riddle, then tell a joke.

The Riddle:
If you have a string of numbers stretching from negative infinity to
positive infinity, start at 0, and either randomly subtract one or add
one from your position, and repeat this an infinite number of times, then
where are you going to end up, and which numbers will you have
traversed?

The Joke:
A Mathematician and a Physicist are taking part in a psychological trial.
The Mathematician steps into a room where he is told to stand on a big
X while a beautiful naked woman lies on a bed across the room. He is
told that when he hears them ring a bell, he may travel 1/2 of the
distance between them, and when he reaches her, he can do anything he
wants. He runs screaming from the room yelling that he'll never reach
her. The Physicist steps into the room and is told the same thing. He
gets really happy and lets out a "WooHoo". Confused, the psychiatrist
asks him if he noticed that he'd never actually reach her; the Physicist
then said, "That's technically correct, but for all practical purposes I
will reach her."

BTW, the answer to the riddle is, Anywhere you want, and all of them.

(No, I don't really find the joke that funny :)
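The riddle's walk is easy to play with in code, by the way. A quick Python sketch (the function name, seed, and step count are my own arbitrary choices):

```python
import random

def random_walk(steps, seed=0):
    """Start at 0; each step randomly add or subtract 1.
    Return the final position and the set of positions visited."""
    rng = random.Random(seed)
    pos, visited = 0, {0}
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        visited.add(pos)
    return pos, visited

final, visited = random_walk(100_000)
# The longer the walk runs, the wider the range it visits; where it
# "ends up" depends entirely on when you stop looking.
print(final, min(visited), max(visited))
```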

> Look at it
>this way, you have two possible outcomes: A, the deck will never reach a
>suitable state, and B, the deck will reach a suitable state. Now what you're
>saying is A = B and B = 100%? Surely you can see the utter lack of logic to
>that conclusion. One thing is certain: someone's definition of probability is
>screwed.

Actually, it is your understanding of infinity that is slightly lacking.
By adding infinity to the mix, all possible combinations will be reached,
including your hypothetical infinite string of positions where the desired
result can't be reached. However, since the miller is picking the terminating
point of the infinite loop, he can by-pass your string.

>Now on to the crux of my argument. I don't care about probabilities at all.
>I'm claiming that there is no guarantee this process will ever terminate, and no
>way to tell if it will ever terminate, either in a finite or infinite number of
>steps, given only the available information. Apply this to any probabilistic
>situation in which the probability of your desired outcome is not certain.

Given probability and a good understanding of infinity, it is certain that
a terminating point is reached eventually. Infinity encompasses all
possibilities.

>You: I do <such and such> until <such and such>. I'm using probabilistic lazy
>evaluation to guarantee that I'm reaching the desired state.

>Me: Well, I, too, am using probabilistic lazy evaluation, and I am lazy
>evaluating this process to <some other state>

Are you the one picking the loop's termination point? No? Then stop lazily
evaluating!!! :)

Ingo Warnke

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

Mike Marcelais (mich...@microsoft.com) wrote:
: > [snipped some because I agree with it assuming you accept that lazy
: > evaluation applies]
: >
: > Simply because something has a nonzero probability of happening over a time
: > period doesn't mean it will happen.

: Given an unlimited amount of time, yes it will happen.

It will happen with probability 1. Unfortunately, if there are infinitely many
possibilities at work, 'probability 1' doesn't mean 'sure', just as 'probability
0' doesn't mean 'never happens'.

Ingo Warnke

Ingo Warnke

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

Joemac69 (joem...@aol.com) wrote:
: If a deck is shuffled an infinite number of times
: and that deck contains only two gaeas blessings

: then a number of those shuffled states *will* be such
: that the blessings are the last two cards left.

Not necessarily. It will happen with probability one, but that doesn't mean
it will 'surely happen'. When there are infinitely many possibilities, that
something happens with probability 1 means it 'almost surely' happens, but it might
still turn out otherwise. For example, if you toss a coin infinitely many times, the
probability that you get 'head' at least once is 1. Nevertheless, it is still
possible to get all 'tail'.
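To make the 'probability 1 but not guaranteed' point concrete, here is a small Python sketch (seeds and trial counts are arbitrary choices of mine): every simulated trial happens to see a head after finitely many flips, yet nothing in the loop itself rules out an all-tails run.

```python
import random

def flips_until_head(seed):
    """Flip a fair coin until the first 'head'; return how many flips it took."""
    rng = random.Random(seed)
    n = 0
    while True:
        n += 1
        if rng.random() < 0.5:  # call this outcome 'head'
            return n

# Each seeded trial terminates after finitely many flips, but no finite
# bound on the wait is guaranteed in advance -- that is the whole point.
waits = [flips_until_head(s) for s in range(1000)]
print(max(waits), sum(waits) / len(waits))  # the mean hovers around 2
```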

: The question in this matter is not whether it can happen because given enough
: time (ie several thousand years etc..) it definitely would. What is germane is
: whether the (tournament) rules allow you to *jump* to that state.

As I said, it will not surely happen, so the game can't automatically advance.
My question is: if you do the milling over and over again, would this be
considered stalling?

Ingo Warnke

Paul Miller

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

On 12 Jan 1998 04:27:46 GMT, wgoo...@expert.cc.purdue.edu (Walter Goodwin)
wrote:

>In article <34b81755...@nntp.lni.net>,
>Paul Miller <pmiller@*DIESPAMMERS*.lni.net> wrote:

>Actually, it is your understanding of infinity that is slightly lacking.

I think I can be forgiven for not understanding something no one has defined (in
Magic terms). ;)

>By adding infinity to the mix, all possible combinations will be reached,

Prove it. This is a logical leap everyone is making, but I fail to see what
logic they are using. The set of all possible deck states over countably many
millings is most definitely infinite. It's just that some of the states occur
with probability zero in an axiomatic sense. That does not mean they aren't
still possible configurations, just "highly unlikely".

>including your hypothetical infinite string of positions where the desired
>result can't be reached. However, since the miller is picking the terminating

                                                                   ^^^^^^^^^
>point of the infinite loop, he can by-pass your string.
 ^^^^
In that case you have to show that it does terminate. That is impossible
because of my construction.

Now here might be the problem: the word "terminate" really means "terminate in
finitely many steps" in most cases. But, even if you allow a process to
"terminate" in "infinitely" many steps, there is still a problem, because at any
point you cannot know if you will ever "terminate." It's that pesky
non-determinism thingy again. And again, I utterly fail to see how one can lazy
evaluate a non-deterministic process. It's even more absurd than some of the
other silly rulings in Magic. :)

Kyle Nishioka

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

Paul Miller (pmiller@*DIESPAMMERS*.lni.net) wrote:
: On 12 Jan 1998 04:27:46 GMT, wgoo...@expert.cc.purdue.edu (Walter Goodwin)
: wrote:

: >In article <34b81755...@nntp.lni.net>,
: >Paul Miller <pmiller@*DIESPAMMERS*.lni.net> wrote:

: >Actually, it is your understanding of infinity that is slightly lacking.

: I think I can be forgiven for not understanding something noone has defined (in
: Magic terms). ;)

: >By adding infinity to the mix, all possible combinations will be reached,

: Prove it. This is a logical leap everyone is making, but I fail to see what
: logic they are using. The set of all possible deck states over countably many
: millings is most definitely infinite.

Nope, a finite number of cards yields a finite number of permutations.
The desired state is possible; the only problem is waiting for the state
to be reached by random shuffling.
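The finiteness is easy to make concrete (a Python sketch; the 60-card deck size is an assumption, being the usual constructed minimum):

```python
import math

# Distinct orderings of a 60-card deck, treating every card as
# distinguishable: 60! -- finite, but astronomically large.
orderings = math.factorial(60)
print(orderings)            # roughly 8.3 * 10^81
print(len(str(orderings)))  # 82 digits
```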

: It's just that some of the states occur
: with probability zero in an axiomatic sense. That does not mean they aren't
: still possible configurations, just "highly unlikely".

: >including your hypothetical infinite string of positions where the desired
: >result can't be reached. However, since the miller is picking the terminating

: ^^^^^^^^^
: >point of the infinite loop, he can by-pass your string.
: ^^^^
: In that case you have to show that it does terminate. That is impossible
: because of my construction.

The termination point exists because the conditions are possible. The
other points in the loop are irrelevant.

: Now here might be the problem: the word "terminate" really means "terminate in
: finitely many steps" in most cases. But, even if you allow a process to
: "terminate" in "infinitely" many steps, there is still a problem, because at any
: point you cannot know if you will ever "terminate."

Why not?

--
Kyle
nk...@hawaii.edu

#include <std_disclaimer.h>
#include <blue_ribbon>

Morgan Lewis

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

Dustin Wood wrote:
>
> > Wrong. .999999... out to infinity IS exactly equal to one. Let's see if I
> > remember my basic Algebra for converting repeating decimals to fractions.
> >
> > Let's let x = 0.4545454545....
> > then 10x = 45.454545454545....
> > and 10x - x = 9x
> > and 45.45454545.... - 0.45454545.... = 45.0
> > then, by substitution, 9x = 45
> > using division we get x = 45 / 9
> > voila .45454545..... exactly equals 45 / 9
> >
> > now lets apply this to 0.999999999...
> > let x = 0.9999999...
> > 10x = 9.99999999....
> > 10x - x = 9x and 9.9999.... - 0.99999.... = 9.0
> > therefore 9x = 9 and x = 9 / 9 = 1
> > therefore 0.999999.... is EXACTLY equal to 1.0
> > If you still disagree, then go talk to any Mathematics teacher and they will tell
> > you the same thing.
>
> Oooops, sorry everyone. Guess I can't even remember elementary school math :) The
> argument is sound, I simply put in the wrong numbers. The second example is correct,
> however the first example should look like this:
>
> Let's let x = 0.4545454545....
> then _100_x = 45.454545454545....
> and _100_x - x = _99_x
> and 45.45454545.... - 0.45454545.... = 45.0
> then, by substitution, _99_x = 45
> using division we get x = 45 / _99_
> voila .45454545..... exactly equals 45 / _99_
>
> Sorry bout that :)
>
> Dustin

Here's how my middle-school algebra teacher explained it to us:
3/3=1, as we all know.
1/3 = .33333333....
3*(1/3)= 3*(.3333....) by basic algebra
3/3 = .999999.....
1 = .9999.....

Simple. Elegant.
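The same argument can be replayed with exact rational arithmetic in Python (a sketch; the 20-term cutoff is an arbitrary choice of mine):

```python
from fractions import Fraction

one_third = Fraction(1, 3)  # the exact value written as 0.3333...
assert 3 * one_third == 1   # 3 * (1/3) = 3/3 = 1, exactly

# The partial sums 0.9, 0.99, 0.999, ... fall short of 1 by exactly
# 1/10^n; that gap shrinks to 0, and the limit -- which is what
# "0.999..." denotes -- is exactly 1.
partial = sum(Fraction(9, 10**k) for k in range(1, 21))
print(1 - partial == Fraction(1, 10**20))  # True
```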

Morgan
--
--------------------------------------------------------------------
My home page: http://www.geocities.com/Athens/Delphi/4360/index.html
My Eclectic Quotes page:
http://www.geocities.com/Athens/Delphi/4360/quotes.html

Phaedrus

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

In article <34ba5...@news.uni-rostock.de>,
Ingo Warnke <nfa...@hp710.math.uni-rostock.de> wrote:
>Mike Marcelais (mich...@microsoft.com) wrote:
>:>Simply because something has a nonzero probability of happening over a time
>:>period doesn't mean it will happen.

>: Given an unlimited amount of time, yes it will happen.
>It will happen with probability 1. Unfortunately, if there are infinitely
>many possibilities at work, 'probability 1' doesn't mean 'sure', just as
>'probability 0' doesn't mean 'never happens'.

Errrr, with all due respect, this is simply wrong.
Something that has probability 1 will always happen, every time, by
definition. If it won't happen every time, then it is not probability 1.
Something that has probability 0 will never happen, ever, by definition.
If there is any chance of it ever happening, then it is not probability 0.
For example, let's say that I pick a random integer. Assuming that it's
truly random, the odds of it being odd are 1/2. The odds of it being even
are 1/2. The odds of it being odd or even are 1. Are you saying that if I
pick enough random integers, I'll run into one that isn't odd or even?
The confusion here is this: Let's say that I pick X random integers.
The odds that at least one of those integers will be even are 1-(1/(2^X)).
If I use higher and higher values of X, then the odds get closer and closer to
1; the odds approach 1 as X approaches infinity. I can make the odds as close
to 1 as I want, by choosing a high enough value of X. But "approaching 1 as
X approaches infinity" is not the same as "1". Even if I make X "infinity",
there's always that one-in-two-to-the-infinite-power chance that I'll pick
nothing but odd numbers for the rest of eternity. So saying "It's certain that
I'll eventually pick an even number" is wrong.
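	The 1-(1/(2^X)) behavior is easy to tabulate exactly (a Python sketch; the X values are arbitrary). Note that exact fractions are needed here -- plain floats round 1-(1/2)^100 up to 1.0 and hide the point:

```python
from fractions import Fraction

# Probability that at least one of X random integers is even: 1 - (1/2)^X.
# It creeps toward 1 but stays strictly below 1 for every finite X.
for X in (1, 10, 100):
    p = 1 - Fraction(1, 2) ** X
    print(X, p < 1)  # always True
```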
--
\o\ If you're interested in books and stories with transformation themes, \o\
/o/ please have a look at <URL:http://www.halcyon.com/phaedrus/>. /o/
\o\ FC1.21:FC(W/C)p6arw A- C->++ D>++ H+ M>+ P R T++++ W** Z+ Sm RLCT \o\
/o/ a cmn++++$ d e++ f+++ h- i++wf p-- sm# /o/

David Wintheiser

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

Paul Miller wrote:
>
> On 12 Jan 1998 04:27:46 GMT, wgoo...@expert.cc.purdue.edu (Walter Goodwin)
> wrote:
>
> >By adding infinity to the mix, all possible combinations will be reached,
>
> Prove it.

Okay, I'm going to give this a try. Forgive me if I err; it is human,
after all. :)

> The set of all possible deck states over countably many
> millings is most definitely infinite.

Actually, it isn't. The number of cards in a deck is finite, and thus
the number of possible combinations of those cards in a deck is also
finite, though extremely large. By using the formula for combinations,
it is possible to determine how many combinations of cards are possible
from an original 60-card deck. This can be repeated for a deck
consisting of 59 of the original 60 cards, and again for 58 of the
original 60, and so on until every possible combination of every
possible number of cards is calculated; in the subsequent proof, I will
use "C" to indicate the total number of deck combinations possible from
any number of cards taken from an original 60-card deck. No matter how
many times you mill, or how many times you shuffle, your ending deck
state will wind up as one of these C combinations.

Ergo, the probability of reaching a specific single combination of a
specific number of cards in the deck (p) is 1/C. The probability of
-not- reaching that specific single combination of a specific number of
cards (p') is thus 1-(1/C).

Let us make a hypothesis that it is possible to perform infinitely many
millings and shuffles and never reach the specific single combination of
the specific number of cards we are looking for. This would mean that
we would succeed in -not- reaching that specific single combination of a
specific number of cards, and thus in the mathematics of probability,
p'=1. Since we have already calculated that p'=1-(1/C), we can use the
transitive property of equality to say:

1=1-(1/C)

Subtracting one from both sides of this equation leaves us with:

1/C=0

And multiplying both sides by C yields:

1=0

Since it is not the case that 1=0, our original hypothesis must be
incorrect (by reductio ad absurdum), and therefore it is not possible to
perform infinitely many millings and shuffles and never reach the
specific single combination of the specific number of cards we are
looking for. Thus, it -must- be possible to reach the specific single
combination at some point in time, and since it -must- be possible, we
are therefore justified in "cutting to the chase" in order to save time.
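A toy simulation shows the practical side of this, though it can't settle the certainty question (Python; the five-card "deck" and seed are my own choices to keep the run short):

```python
import random

def shuffles_until(target, seed=1):
    """Shuffle a small 'deck' repeatedly until it matches target;
    return how many shuffles that took."""
    rng = random.Random(seed)
    deck = list(target)
    count = 0
    while True:
        rng.shuffle(deck)
        count += 1
        if deck == target:
            return count

# With 5 cards there are 5! = 120 orderings, so each shuffle hits the
# target with probability 1/120. In practice the wait is short, but no
# finite bound on it is guaranteed before you start.
print(shuffles_until([3, 1, 4, 2, 5]))
```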

Criticism is welcome, though perhaps this entire discussion should be
moved to alt.math.problems.insane. ;)

David Wintheiser

Phaedrus

unread,
Jan 12, 1998, 3:00:00 AM1/12/98
to

I apologize for the big hunks of quoted text; but the context is
important here.

In article <34BB02...@isd.net>,
David Wintheiser <David.Wi...@ActiveSoftware.com> wrote:
>Paul Miller wrote:
>>
>> On 12 Jan 1998 04:27:46 GMT, wgoo...@expert.cc.purdue.edu (Walter Goodwin)
>> wrote:
>>
>> >By adding infinity to the mix, all possible combinations will be reached,
>>
>> Prove it.
>
>Okay, I'm going to give this a try. Forgive me if I err; it is human,
>after all. :)
>
>> The set of all possible deck states over countably many
>> millings is most definitely infinite.
>
>Actually, it isn't. The number of cards in a deck is finite, and thus
>the number of possible combinations of those cards in a deck is also
>finite, though extremely large. By using the formula for combinations,
>it is possible to determine how many combinations of cards are possible
>from an original 60-card deck. This can be repeated for a deck
>consisting of 59 of the original 60 cards, and again for 58 of the
>original 60, and so on until every possible combination of every
>possible number of cards is calculated; in the subsequent proof, I will
>use "C" to indicate the total number of deck combinations possible from
>any number of cards taken from an original 60-card deck. No matter how
>many times you mill, or how many times you shuffle, your ending deck
>state will wind up as one of these C combinations.

I'm with you so far.

>Ergo, the probability of reaching a specific single combination of a
>specific number of cards in the deck (p) is 1/C.

It's worth clarifying here: the probability of reaching that set
combination _in a single trial_ is 1/C. In other words, if I randomize the
cards once, I have a 1-in-C chance of the cards winding up in the
order I'm after.

> The probability of
>-not- reaching that specific single combination of a specific number of
>cards (p') is thus 1-(1/C).

Again, this is the chance of not reaching that combination in a single
trial.

>Let us make a hypothesis that it is possible to perform infinitely many
>millings and shuffles and never reach the specific single combination of
>the specific number of cards we are looking for. This would mean that
>we would succeed in -not- reaching that specific single combination of a
>specific number of cards, and thus in the mathematics of probability,
>p'=1.

It is here that your proof, regrettably, becomes mathematical garbage.
If something has a probability of 1, then that thing is _certain_ to
happen. Nobody here has made the argument that you're _certain_ to never
reach that combination of cards; that would be silly--it's clear that you can.
(Heck, it's clear that you can reach that combination in a single try, let
alone an infinite number.) It was argued that it is _possible_ to shuffle an
infinite number of times without ever reaching the desired order.

> Since we have already calculated that p'=1-(1/C),

Another problem. This is the odds for a single try; we weren't discussing
that--we were discussing an infinite number of tries.

> we can use the
>transitive property of equality to say:
>
>1=1-(1/C)
>
>Subtracting one from both sides of this equation leaves us with:
>
>1/C=0
>
>And multiplying both sides by C yields:
>
>1=0
>
>Since it is not the case that 1=0, our original hypothesis must be
>incorrect (by reductio ad absurdum), and therefore it is not possible to
>perform infinitely many millings and shuffles and never reach the
>specific single combination of the specific number of cards we are
>looking for.

Your math is sound; unfortunately, it's all based on a faulty initial
assumption, so it doesn't work. All you've proved is that it's not _certain_
that you won't arrive at the desired combination in _one_ try; but again, no
one is arguing that.

> Thus, it -must- be possible to reach the specific single
>combination at some point in time, and since it -must- be possible, we
>are therefore justified in "cutting to the chase" in order to save time.

Now, let me take a stab at proving that it is in fact possible to
shuffle forever and _never_ reach the desired order.
First of all, it's worth looking at the probabilities again. We've
already seen that the odds of getting our desired combo in one try are 1/C,
and the odds of not getting it in one try are 1-(1/C). What are the odds of
not reaching that combo in N tries? Well, since each try is independent
(we're assuming that we're shuffling thoroughly each time), the odds are
(1-(1/C))^N. And what are the odds of reaching that combo at least once in
N tries? Well, it's 1 minus the odds of _not_ reaching that combo in N
tries--so it's 1-((1-(1/C))^N).
So, what happens if we make N a really really high number? Well,
the odds of missing every time--(1-(1/C))^N-- get smaller and smaller, closer
and closer to 0. So the odds of hitting at least once get closer and closer
to 1. But, no matter how high we make N, the odds of hitting at least once
never quite reach 1. Even if we make N infinite, the odds of missing are still
an infinitesimal fraction away from 0, and so the odds of hitting at least
once are still just that one-in-infinity fraction away from 1.
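How fast the miss probability shrinks is easy to tabulate (a Python sketch; C = 1000 is a toy stand-in, since the real C for a 60-card deck is astronomically larger):

```python
# Probability of missing the target combination in every one of N
# independent shuffles, when a single shuffle hits with probability 1/C.
C = 1000
for N in (1_000, 10_000, 100_000):
    miss = (1 - 1 / C) ** N
    print(N, miss)  # small and shrinking, but positive for every finite N
```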
But let's get this away from math. Here's a couple of relatively simple
ways to prove that it's possible to go forever without hitting the combo
we're after.
First, there's proof by induction. Is it possible to shuffle the cards
once without hitting the combo we're after? Well, yes, certainly.
Now, given the fact that it's possible to shuffle the cards N times
without hitting our combo, is it possible to shuffle the cards N+1 times
without hitting our combo? Again, clearly yes--I just shuffle the cards
N times the wrong way, then shuffle them once more the wrong way.
So, by induction, it's possible to shuffle the cards the wrong way
any number of times, including infinity; no matter how many times I've
already done it wrong, I can always do it wrong once more.
Or, to look at it another way: You're saying that it's _impossible_--
in other words, probability zero--to go forever without hitting the desired
combo. But, for that to be true, that would mean that at some point it would
have to be _certain_--probability 1-- that I would hit the combo. But we've
just agreed that there's no such point--on any given try, the chance of
hitting the combo is only 1/C. So there's never a particular try during
which I'm certain to hit the combo, unless I'm using a one-card deck.
The interesting thing I've found about this sort of problem is: Ask
someone "Suppose I flip this coin an infinite number of times. Is it
possible that the flips will come up a head, then a tail, then a head, and
a tail, and so on forever?" Most people will scratch their head and think
about this for a while, and say "Well, I suppose it's possible--it's really
really _unlikely_, but it's possible." But ask someone "Suppose I flip this
coin an infinite number of times. Is it possible that the flips will just
keep coming up heads, forever?" And the answer you get will very probably
be "No--a tail would have to come up eventually." For some reason, the idea
of an infinite series in which every possible outcome happens in a particular
order doesn't bother us; but the idea of an infinite series in which a
particular outcome _always_ happens, or _never_ happens, strikes us as
fundamentally wrong. This is true even though the odds in each case are
exactly the same--a "head, tail, head, tail..." sequence is exactly as
likely as a "head, head, head, head..." sequence.

>Criticism is welcome, though perhaps this entire discussion should be
>moved to alt.math.problems.insane. ;)

Naaah. Magic is a mathematical game; it attracts a lot of mathematical
minds. The newsgroup goes through these "Magic proofs" phases; they end
eventually. "Before you end the phase, I have one more proof to play..." :-)

Paul Miller

unread,
Jan 13, 1998, 3:00:00 AM1/13/98
to

On 12 Jan 98 17:29:11 GMT, nfa...@hp710.math.uni-rostock.de (Ingo Warnke) wrote:

>Joemac69 (joem...@aol.com) wrote:
>: If a deck is shuffled an infinite number of times
>: and that deck contains only two gaeas blessings
>: then a number of those shuffled states *will* be such
>: that the blessings are the last two cards left.
>
>Not necessarily. It will happen with probability one, but that doesn't mean
>it will 'surely happen'. When there are infinitely many possibillities, that
>something happens with probability 1 means it 'almost surely' happens, but it might
>still come around otherwise. For example, If you toss a coin infinitely the probability
>that you get at least once 'head' is 1. Nevertheless, it is still possible to get
>all 'tail'.

Thank you Ingo. You're the first person I've seen who understands this subtle
distinction. I was beginning to think no one agreed with me. :)

>As I said it will not surely happen, so the game can't automatically advance.
>My question is: If you do the milling over and over again, would this be
>considered stalling?

I'd have to say no. Each milling is a legal and purposeful play. This is as
distinguished from the "I use my Soldier of Fortune 4 times each round" play, which
is a legal, but purposeless play unless some library manipulation has taken
place.

What I'd do is simply tell my opponent to turn over cards in his library until
he hits a Blessing, or until I decide he's milled enough cards, or until he can
somehow stop the process. As long as it's apparent that I can play this
sequence as many times as I wish, that seems to be the most expedient way to
"cut to the chase."

Ingo Kemper

unread,
Jan 13, 1998, 3:00:00 AM1/13/98
to

On 12 Jan 1998 19:25:51 -0800, phae...@halcyon.com (Phaedrus) wrote:

> Errrr, with all due respect, this is simply wrong.
> Something that has probability 1 will always happen, every time, by
>definition. If it won't happen every time, then it is not probability 1.
> Something that has probability 0 will never happen, ever, by definition.
>If there is any chance of it ever happening, then it is not probability 0.
>
> For example, let's say that I pick a random integer.

Well, even that's a problem. If it's truly random, each number should
have the same probability of showing up; but how many numbers are
there? If you randomly choose one of n objects, each has the
probability of 1/n of being chosen, but if you randomly choose one
element of an infinite set, each element has the probability of
1/infinite = 0 of showing up! (You might even argue that it is thus
impossible to choose a random integer...)

Ingo Kemper
--
__ _ __ __ __ __
__/ /_/ \/ /_/____/_ |___Sky...@uni-muenster.de___---===> \
/_/ /_/\_/ |__/ |__/ ~~~~~~~~~~~~~~~~~~~~~~~~~ ---===>__/

Ingo Warnke

unread,
Jan 13, 1998, 3:00:00 AM1/13/98
to

Dustin Wood (dw...@best.com) wrote:

: Warrl kyree Tale'sedrin wrote:

: > Strictly speaking, it is not guaranteed that the process will ever
: > terminate; however, the chance of it not terminating within a hundred
: > years is vanishingly small. Unfortunately, it remains nonzero until
: > two weeks after eternity.

: Given that your opponent has only 2 GB's and the number of cards in the graveyard
: and draw pile totals an even number, then Yes, it is guaranteed to happen if the
: process is carried out to infinity.

No it is *not* guaranteed. The probability is 1, but that *doesn't* mean it will
happen surely. If you flip a coin infinitely many times, the chance that you get
'head' at least once is 1; still, it is not impossible to come up with 'tail' every time.


Ingo Warnke

Ingo Warnke

unread,
Jan 13, 1998, 3:00:00 AM1/13/98
to

Phaedrus (phae...@halcyon.com) wrote:
: In article <34ba5...@news.uni-rostock.de>,
: Ingo Warnke <nfa...@hp710.math.uni-rostock.de> wrote:
: >Mike Marcelais (mich...@microsoft.com) wrote:
: >:>Simply because something has a nonzero probability of happening over a time
: >:>period doesn't mean it will happen.

: >: Given an unlimited amount of time, yes it will happen.
: >It will happen with probability 1. Unfortunately, if there are infinitely
: >many possibilities at work, 'probability 1' doesn't mean 'sure', just as
: >'probability 0' doesn't mean 'never happens'.

: Errrr, with all due respect, this is simply wrong.

I'm pretty sure I'm correct,

: Something that has probability 1 will always happen, every time, by
: definition.

Then you are using a different definition than I am, and not the one accepted by
mathematicians. That something which always happens has probability one
is correct; the converse is not. Think of somehow producing
an infinite series of zeros and ones by flipping a coin. Afterwards,
ask what the probability was that exactly this series would come up.
It is simply 0. Nevertheless, it *did* happen.

: If it won't happen every time, then it is not probability 1.

Also wrong. You are using your intuition and generalizing from the finite case.
That often leads to wrong conclusions when infinity is involved.

Ingo Warnke

Phaedrus

Jan 13, 1998

In article <69f4nl$t...@majestix.uni-muenster.de>,
Ingo Kemper <SkyG...@uni-muenster.de> wrote:
>On 12 Jan 1998 19:25:51 -0800, phae...@halcyon.com (Phaedrus) wrote:
>
>> Errrr, with all due respect, this is simply wrong.
>> Something that has probability 1 will always happen, every time, by
>>definition. If it won't happen every time, then it is not probability 1.
>> Something that has probability 0 will never happen, ever, by definition.
>>If there is any chance of it ever happening, then it is not probability 0.
>>
>> For example, let's say that I pick a random integer.
>
>Well, even that's a problem. If it's truly random, each number should
>have the same probability of showing up; but how many numbers are
>there? If you randomly choose one of n objects, each has the
>probability of 1/n of being chosen, but if you randomly choose one
>element of an infinite set, each element has the probability of
>1/infinite = 0 of showing up! (You might even argue that it is thus
>impossible to choose a random integer...)

You've just disproven your own hypothesis--namely, that 1/infinity is 0.
It's about as close to zero as you can get; but it ain't zero.
Another induction proof: Is 1/infinity zero? You're saying it is.
If N/infinity is zero, then is (N+1)/infinity zero? Well, it must be; because
to get from N/infinity to (N+1)/infinity, we add 1/infinity, which you've
just said is zero. So, by induction, any number over infinity--including
infinity/infinity--is zero. That makes no sense; infinity/infinity is
undefined, but it's not necessarily zero. So 1/infinity can't be zero.

Ingo Kemper

Jan 13, 1998

On 13 Jan 1998 09:30:38 -0800, phae...@halcyon.com (Phaedrus) wrote:

> Another induction proof: Is 1/infinity zero? You're saying it is.
>If N/infinity is zero, then is (N+1)/infinity zero? Well, it must be; because
>to get from N/infinity to (N+1)/infinity, we add 1/infinity, which you've
>just said is zero. So, by induction, any number over infinity--including
>infinity/infinity--is zero. That makes no sense; infinity/infinity is
>undefined, but it's not necessarily zero. So 1/infinity can't be zero.

Well, are you sure that complete induction proves that the statement
is true for infinity as well? I was under the impression that it only
shows that the statement is true for all natural numbers, but not
for infinity itself (so while n/infinity would indeed be zero for each
natural n, it would not necessarily follow for infinity/infinity).

For example, say I want to prove that for all naturals n, 2n is even. I
start with n=0 and find 2*0 = 0, which is indeed even. I recurse by
saying that if 2n is even, then 2(n+1) = 2n+2 must be even as well. This
proves that every 2n is even, but it doesn't prove that 2*infinity (=
infinity) is even, does it? (Otherwise I could do the same for 2n+1
and would find that infinity is odd as well...)

Phaedrus

Jan 13, 1998

In article <34bb5...@news.uni-rostock.de>,
Ingo Warnke <nfa...@hp710.math.uni-rostock.de> wrote:
>Phaedrus (phae...@halcyon.com) wrote:
>: In article <34ba5...@news.uni-rostock.de>,
>: Ingo Warnke <nfa...@hp710.math.uni-rostock.de> wrote:
>: >Mike Marcelais (mich...@microsoft.com) wrote:
>: >:>Simply because something has a nonzero probability of happening over a time
>: >:>period doesn't mean it will happen.
>: >: Given an unlimited amount of time, yes it will happen.
>: >It will happen with probability 1. Unfortunately, if there are infinitely
>: >many possibilities at work, 'probability 1' doesn't mean 'sure', just as
>: >'probability 0' doesn't mean 'never happens'.
>: Errrr, with all due respect, this is simply wrong.
>I'm pretty sure I'm correct,

I just called up a friend of mine who's a TA in a Probability class at
the University of Washington. If I'm wrong about "probability 1 is certain;
and certain is probability 1", then not only is he wrong too, but the two
reference books he checked are tragically misprinted... :-)

>: Something that has probability 1 will always happen, every time, by
>: definition.

>Then you use some other definition than me, not the one accepted by
>mathematicians.

Again, it seems to be accepted by at least a pretty hefty chunk of
mathematicians.

> That something that always happens has probability one
>is correct. Just the reverse is not true. Think of producing somehow
>an infinite series of zero's and one's by flipping a coin. After you did this,
>find out what the probability is that the just produced series comes up.
>It is simply 0. Nevertheless, it *did* happen.

You're making another faulty assumption here; namely, that the probability
of that series of flips is zero. It's not; it's about as close to zero as it's
possible to get, but it's not zero. If it were, then it would be _impossible_
to produce an infinite series of zeros and ones.
If I flip a coin N times, then the probability of any one particular set
of flips coming up--all heads, all tails, heads and tails alternating,
whatever--is 1/(2^N). As N gets larger and larger, this fraction gets smaller
and smaller; as N approaches infinity, the fraction's value approaches zero.
But "A function of N approaches some value as N approaches infinity" does _not_
mean "That function of N _reaches_ that value when N _reaches_ infinity."
1/N approaches zero as N approaches infinity; that does not mean that
1/infinity is zero. If 1/infinity were zero, then zero times infinity would
have to be 1, and that's not true; zero times _any_ number--even infinity--
is zero.

>: If it won't happen every time, then it is not probability 1.
>Also wrong. You use your intuition and generalize from the finite case.
>That's leads often to wrong conclusions if infinity is involved.

Again, this simply isn't true.
If I do some thing, then the sum of the probabilities of all the possible
outcomes of that thing has to be 1, by definition. If I roll a die, the sum
of the probabilities of the possible outcomes of that die roll has to be 1.
If there's six sides on the die, and the die is "fair", then the odds of any
given side coming up have to be 1/6--or slightly less, if we want to include
the odds of the die coming to rest on edge or spontaneously exploding in
midroll.
If I produce an infinite series of coin flips, then the sum of the
probabilities of all the possible series has to be 1, by definition. So it
_cannot be true_ that the probability of each individual series coming up is
0. It doesn't matter how many zeroes you add together, even an infinite
number of them; you're not going to get to 1.
If I flip a coin forever, the odds of it never coming up heads are almost
infinitely small--1/(2^infinity), to be precise. But the odds are not zero.
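The finite-N arithmetic in this post is easy to check directly (a quick sketch in Python, using exact rationals so no rounding intrudes; the disputed question of what happens *at* N = infinity is of course not settled by computing finite cases):

```python
from fractions import Fraction

# Probability of any one particular series of N fair coin flips is 1/2**N;
# in particular, the chance of getting no head in N flips is (1/2)**N.
# For every finite N that chance is strictly positive, and its complement
# (at least one head) is strictly below 1.
for n in [1, 10, 50, 100]:
    p_all_tails = Fraction(1, 2) ** n        # exact, no float rounding
    p_at_least_one_head = 1 - p_all_tails
    assert 0 < p_all_tails and p_at_least_one_head < 1
    print(n, float(p_all_tails))             # shrinks toward 0 as n grows
```

(Plain floats would misleadingly round `1 - 0.5**100` up to exactly `1.0`, which is why the sketch uses `Fraction`.)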

Paul Miller

Jan 13, 1998

On Mon, 12 Jan 1998 23:56:29 -0600, David Wintheiser <dwin...@isd.net> wrote:

>Paul Miller wrote:
>>
>> On 12 Jan 1998 04:27:46 GMT, wgoo...@expert.cc.purdue.edu (Walter Goodwin)
>> wrote:
>>
>> >By adding infinity to the mix, all possible combinations will be reached,
>>
>> Prove it.

>> The set of all possible deck states over countably many
>> millings is most definitely infinite.
>
>Actually, it isn't. The number of cards in a deck is finite, and thus
>the number of possible combinations of those cards in a deck is also
>finite, though extremely large.

Well, I am using "deck state" to mean the current arrangement of cards in the
deck, as well as all arrangements of the deck after said milling process was
started. I can produce an infinity of those easily enough.

> No matter how
>many times you mill, or how many times you shuffle, your ending deck
>state will wind up as one of these C combinations.

Right, and some of those combinations lead to another eventual reshuffle.

>Ergo, the probability of reaching a specific single combination of a
>specific number of cards in the deck (p) is 1/C. The probability of
>-not- reaching that specific single combination of a specific number of
>cards (p') is thus 1-(1/C).

This is right as well, just misapplied.

>Let us make a hypothesis that it is possible to perform infinitely many
>millings and shuffles and never reach the specific single combination of
>the specific number of cards we are looking for.

[Snip]

>
>Since it is not the case that 1=0, our original hypothesis must be
>incorrect (by reductio ad absurdum), and therefore it is not possible to
>perform infinitely many millings and shuffles and never reach the
>specific single combination of the specific number of cards we are
>looking for. Thus, it -must- be possible to reach the specific single
>combination at some point in time, and since it -must- be possible, we
>are therefore justified in "cutting to the chase" in order to save time.

Well, what you've shown is that the probability of an infinite reshuffle loop is
zero. I don't doubt it. Zero probability does not equal impossible, however.
It is merely "very unlikely". Likewise probability 1 is not "certain," just
"very nearly certain."

Okay, look at it this way: Say I tell you to pick an integer at random. Now what
is the probability that you pick 34223? Zero. By your argument, the probability
of picking some integer other than 34223 is 1 - probability of picking 34223,
which is correct. Your conclusion is that I can never pick 34223, which is
incorrect.
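There is no uniform distribution over all the integers, so this thought experiment can only be approximated; a sketch (with a hypothetical cap n standing in for "all integers") shows the probability of any fixed value vanishing as the range grows, even though every draw still lands on some value:

```python
import random

random.seed(1998)  # fixed seed so the run is reproducible

target = 34223
for n in [10**5, 10**7, 10**9]:
    # Uniform over {1, ..., n}: every value has probability exactly 1/n.
    draw = random.randint(1, n)
    print(f"n={n}: drew {draw}, P(any fixed value) = {1 / n}")

# As n grows without bound, 1/n -> 0, yet each draw produces some
# particular integer -- a value whose probability was "almost 0".
```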

Oh, and I don't know if alt.math.problems.insane would even be crazy enough for
this. ;)

Dustin Wood

Jan 13, 1998


Ingo Warnke wrote:

> : >It will happen with probability 1. Unfortunately, if there are infinitely
> : >many possibilities at work, 'probability 1' doesn't mean 'sure', just as
> : >'probability 0' doesn't mean 'never happens'.
>
> : Errrr, with all due respect, this is simply wrong.
>
> I'm pretty sure I'm correct,

I'm pretty sure you're wrong.

> : Something that has probability 1 will always happen, every time, by
> : definition.
>
> Then you use some other definition than me, not the one accepted by
> mathematicians. That something that always happens has probability one
> is correct. Just the reverse is not true. Think of producing somehow
> an infinite series of zero's and one's by flipping a coin. After you did this,
> find out what the probability is that the just produced series comes up.
> It is simply 0. Nevertheless, it *did* happen.

The problem is that you can't produce that infinite stream. And if you could, you
couldn't then check it to see what you had. I'm not sure that it matters though,
since the original thought behind this thread (which seems to have been discarded in
favor of a discussion on the realities of infinity vs. probability) was about
shuffling a deck infinitely. Given any deck, there is a finite number of ways it can
be shuffled. Several of those will yield all of the Gaea's Blessings on the bottom
of the deck. The probability of one of these coming up within the first X shuffles
is greater than 0 and less than 1. (Unless there are only GB's to start with, in
which case it is exactly 1.) However, as X gets larger, but still finite, the
probability gets closer and closer to 1. If X actually goes to infinity, then the
probability actually goes to 1. This does not mean that on any given shuffle you
will get an acceptable deck. It does mean that you will get an acceptable shuffle,
though. You will get an acceptable shuffle an infinite number of times. You simply
can't say when you will get them.
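The finite-X part of this is easy to illustrate by brute force (a sketch with an assumed 10-card deck containing 2 Blessings, which is not a deck from the thread; analytically, a fair shuffle leaves both Blessings as the bottom two cards with probability 2/(10*9), about 0.022):

```python
import random

random.seed(42)  # reproducible run
deck = ["GB", "GB"] + [f"card{i}" for i in range(8)]  # 10 cards, 2 Blessings

trials = 20000
hits = 0
for _ in range(trials):
    random.shuffle(deck)
    if deck[-2:] == ["GB", "GB"]:  # both Blessings on the bottom
        hits += 1

p_hat = hits / trials
print(p_hat)  # empirically close to 2/90, roughly 0.022
```

Since the per-shuffle chance is small but fixed, the chance of having hit at least once within X shuffles, 1-(1-2/90)^X, climbs toward 1 as X grows, which matches the "closer and closer to 1" claim above.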

Think about this if you really want a headache :)

I'm going to randomly pick an integer from the set of ALL integers.......OK I've got
one (don't ask how I got it, that question can't be answered)

Probability I got the integer 67343: zero
Probability I got the integer 2: zero
Probability I got an integer: one

Now I pick random integers an infinite number of times:
Probability that all of them are integers: one
Probability that all of them are the integer 2: zero
Probability that none of them are the integer 2: zero
Probability that the Xth one is the integer 2: zero
Probability that ANY of them are the integer 2: one
Total number of times that the integer 2 was chosen: infinity
Total number of times that the integer 2 wasn't chosen: infinity (a larger infinity
than the previous :)

What does all this mean? If I only pick 1 random integer, I will not get the number
2. I will not get any specific integer. I will get an integer though, and it must
have a value. So why can't that value be 2? Well, it can, it just won't be :)
Confused yet? It gets worse :) I picked an integer, it has a value. What is that
value? Infinity :) It must be infinity. There is no other value that will be
chosen. But if I choose an infinite number of random integers, then an infinite
number of them will have the value 2.

The problem only occurs if you try to mix something finite and something infinite.
The finite part will always get swallowed unless it is somehow anchored to a place
that we can start from (e.g. 45.67333333...; if the 4567 came after the 3's, then
infinity would have swallowed them).

> : If it won't happen every time, then it is not probability 1.
>
> Also wrong. You use your intuition and generalize from the finite case.
> That's leads often to wrong conclusions if infinity is involved.
>

Are you saying that something that won't happen every time can have a probability of
1? Are you sure you want to make that claim? Even with infinity thrown in, if it
has a probability of 1, then IT WILL HAPPEN. Every time. In the case of infinity,
_every time_ means every time infinity is invoked, not every instantiation within
that infinity. By that I mean that it will occur somewhere in the stream of
infinite events, but not necessarily at each event.

Infinity is so much fun. It really messes with the mind :)

Dustin


Dustin Wood

Jan 13, 1998


Paul Miller wrote:

> >: If a deck is shuffled an infinite number of times
> >: and that deck contains only two gaeas blessings
> >: then a number of those shuffled states *will* be such
> >: that the blessings are the last two cards left.
> >

> >Not necessarily. It will happen with probability one, but that doesn't mean
> >it will 'surely happen'.

Yes it does.

> >When there are infinitely many possibilities, that
> >something happens with probability 1 means it 'almost surely' happens, but it might
> >still come around otherwise.

No it can't.

> >For example, if you toss a coin infinitely the probability
> >that you get at least once 'head' is 1. Nevertheless, it is still possible to get
> >all 'tail'.

No it isn't. Well, to be exact it both is and isn't. With the infinity, you will get, at
some point, an infinite string of all tails. However, at some other point, you will get an
infinite stream of all heads. In fact, at different points of your infinite string of coin
flips, all infinite and finite substrings will occur. You can't say which will come first,
but you can say that all of them will come. Even if you start off with the infinite string
of tails first, because we are doing it an infinite number of times, we will get to all the
other possibilities too :)

> Thank you Ingo. You're the first person I've seen who understands this subtle
> distinction. I was beginning to think no one agreed with me. :)

It's not that I disagree with the possibility, in fact I agree that it is guaranteed to
happen. The problem is that so are all the other possibilities guaranteed to happen.

> >As I said it will not surely happen, so the game can't automatically advance.
> >My question is: If you do the milling over and over again, would this be
> >considered stalling?
>
> I'd have to say no. Each milling is a legal and purposeful play. This is as
> distinguished from the "I use my Soldier of Fortune" 4 times each round," which
> is a legal, but purposeless play unless some library manipulation has taken
> place.

If it is obvious that you are doing something to stall, then you are stalling. The 4
shuffles of your opponent's deck may be legal, but unless there is some purpose to it other
than to stall, I would have to say that you are stalling.

> What I'd do is simply tell my opponent to turn over cards in his library until
> he hits a Blessing, or until I decide he's milled enough cards, or until he can
> somehow stop the process. As long as it's apparent that I can play this
> sequence as many times as I wish, that seems to be the most expedient way to
> "cut to the chase."

How is that cutting to the chase???? Cutting to the chase means that we get to the point
where I want to stop because the deck is exactly how I want it to be. Cutting to the chase
is a finite maneuver. It happens immediately. Your method takes as long as it takes.


Dustin Wood

Jan 13, 1998


Ingo Warnke wrote:

> : Given that your opponent has only 2 GB's and the number of cards in the graveyard
> : and draw pile totals an even number, then Yes, it is guaranteed to happen if the
> : process is carried out to infinity.
>
> No it is *not* guaranteed. The probability is 1, but that *doesn't* mean it will happen
> surely. If you flip a coin infinitely, the chance that you at least once get 'head' is
> 1, still it is not impossible to come up with 'tail' all the time.

When carried out to infinity, you will get an infinite amount of heads and an infinite
amount of tails. You will have an infinite number of substrings of HTTTTHTTTH. You will
have an infinite number of substrings of infinite length comprised of every possible
combination.

Your argument (and several others throughout this thread) supposes that if the first thing
you come across is an infinite string of tails, then you will never get to anything else.
This is simply not true. Infinity has very strange properties, not the least of which is
to encompass itself. After you finish the infinite string of tails, you will start on some
other subset, whether infinite or finite makes no difference. Also the first thing you
will encounter after finishing the infinite string of tails will be a head because if it is
a tail, then you haven't finished yet. And you must get to the end of that infinite
string, because we have already declared that we are going to infinity.

Dustin


Dustin Wood

Jan 13, 1998

Okay, Here we go again. Let's see what can be done to (er with) Phaedrus' "proof"

Phaedrus wrote:

> I apologize for the big hunks of quoted text; but the context is
> important here.

Sorry, but I'm cutting much of it out :)

> Now, let me take a stab at proving that it is in fact possible to
> shuffle forever and _never_ reach the desired order.
> First of all, it's worth looking at the probabilities again. We've
> already seen that the odds of getting our desired combo in one try are 1/C,
> and the odds of not getting it in one try are 1-(1/C). What are the odds of
> not reaching that combo in N tries? Well, since each try is independent
> (we're assuming that we're shuffling thoroughly each time), the odds are
> (1-(1/C))^N, And what are the odds of reaching that combo at least once in
> N tries? Well, it's 1 minus the odds of _not_ reaching that combo in N
> tries--so it's 1-((1-(1/C))^N).
> So, what happens if we make N a really really high number? Well,
> the odds of missing every time--(1-(1/C))^N-- get smaller and smaller, closer
> and closer to 0. So the odds of hitting at least once get closer and closer
> to 1.

And once we let N go to infinity, it becomes 1.

> But, no matter how high we make N, the odds of hitting at least once
> never quite reach 1. Even if we make N infinite, the odds of missing are still
> an infinitesimal fraction away from 0, and so the odds of hitting at least
> once are still just that one-in-infinity fraction away from 1.

Wrong. Once you go to infinity, the odds of missing no longer merely approach zero. They
become zero. This can be proved the same way I proved that .999999... is exactly
equal to 1. If you refuse to accept that .999999... = 1, even after seeing a proof
that is accepted by every math professor in the world, then skip this whole thread,
because nothing will convince you. Your proof actually shows that with an infinite
number of tries, you MUST succeed. Thank you :)
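The finite-N behavior both posters agree on is easy to tabulate (a sketch; C = 1000 is a made-up stand-in for the number of deck arrangements, not a value from the thread):

```python
# Odds of missing a 1-in-C event on every one of N independent tries:
# (1 - 1/C)**N.  For fixed C this decays exponentially as N grows,
# while its complement 1 - (1 - 1/C)**N climbs toward 1.
C = 1000
for N in [10**3, 10**4, 10**5]:
    p_miss = (1 - 1 / C) ** N
    print(N, p_miss, 1 - p_miss)

# (1 - 1/1000)**100000 is about e**-100 -- astronomically small, yet
# still a positive number for this (and every) finite N.
```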

> But let's get this away from math. Here's a couple of relatively simple
> ways to prove that it's possible to go forever without hitting the combo
> we're after.
> First, there's proof by induction. Is it possible to shuffle the cards
> once without hitting the combo we're after? Well, yes, certainly.

Ok, no problem yet.

> Now, given the fact that it's possible to shuffle the cards N times
> without hitting our combo, is it possible to shuffle the cards N+1 times
> without hitting our combo? Again, clearly yes--I just shuffle the cards
> N times the wrong way, then shuffle them once more the wrong way.

Ok, still no problem.

> So, by induction, it's possible to shuffle the cards the wrong way
> any number of times, including infinity; no matter how many times I've
> already done it wrong, I can always do it wrong once more.

Oops, now we have a problem. You started with just one shuffle and then expanded
out to N shuffles. That only works if N is finite, because if N is finite, then
N+1 is finite, and if N is infinite, then N+1 is also infinite. At some point in
your proof you would have to go from a finite N to an infinite N+1. This is not
possible.

> Or, to look at it another way: You're saying that it's _impossible_--
> in other words, probability zero--to go forever without hitting the desired
> combo. But, for that to be true, that would mean that at some point it would
> have to be _certain_--probability 1-- that I would hit the combo. But we've
> just agreed that there's no such point--on any given try, the chance of
> hitting the combo is only 1/C. So there's never a particular try during
> which I'm certain to hit the combo, unless I'm using a one-card deck.

True, but not the whole story. On any given try you are not guaranteed to get any
particular configuration. But over the whole scope of shuffles, you are guaranteed
to get ALL combinations of both deck configurations and series of deck
configurations.

> >Criticism is welcome, though perhaps this entire discussion should be
> >moved to alt.math.problems.insane. ;)

I would tend to agree with this statement. Anyone that truly believes that they
fully understand infinity is insane. And since I truly believe that I understand
the concept of infinity, if not the reality, I am probably insane too ;)

Dustin


Phaedrus

Jan 13, 1998

>Dustin Wood (dw...@best.com) wrote:
>: Warrl kyree Tale'sedrin wrote:
>: > Strictly speaking, it is not guaranteed that the process will ever
>: > terminate; however, the chance of it not terminating within a hundred
>: > years is vanishingly small. Unfortunately, it remains nonzero until
>: > two weeks after eternity.
>: Given that your opponent has only 2 GB's and the number of cards in the
>: graveyard and draw pile totals an even number, then Yes, it is guaranteed
>: to happen if the process is carried out to infinity.

>No it is *not* guaranteed.

Agreed.

> The probability is 1,

No. The probability is infinitesimally _close_ to 1; but it is not 1.

> but that *doesn't* mean it will happen
>surely.

That's because the probability of it happening is not 1.

>If you flip a coin infinitely, the chance that you at least once get 'head' is
>1,

No.
The probability of getting at least 1 head in an infinite number of flips
is 1 minus the probability of flipping tails every time. The probability of
flipping tails every time is 1/(2^infinity); that's infinitesimally close to
zero, but it's not zero. So the odds of getting at least one head are
equally close to 1, but not quite 1.

> still it is not impossible to come up with 'tail' all the time.

That's because the probability of flipping all tails is not zero. There's
that one-in-two-to-the-infinite-power chance that you'll do it.

Phaedrus

Jan 13, 1998

In article <69gavg$u...@majestix.uni-muenster.de>,
Ingo Kemper <SkyG...@uni-muenster.de> wrote:
>On 13 Jan 1998 09:30:38 -0800, phae...@halcyon.com (Phaedrus) wrote:
>
>> Another induction proof: Is 1/infinity zero? You're saying it is.
>>If N/infinity is zero, then is (N+1)/infinity zero? Well, it must be; because
>>to get from N/infinity to (N+1)/infinity, we add 1/infinity, which you've
>>just said is zero. So, by induction, any number over infinity--including
>>infinity/infinity--is zero. That makes no sense; infinity/infinity is
>>undefined, but it's not necessarily zero. So 1/infinity can't be zero.
>
>Well, are you sure that complete induction proves that the statement
>is true for infinity as well? I was under the impression that it would
>only show that the statement is true for all natural numbers, but not
>for infinity itself (so while n/infinity would indeed be zero for each
>natural n, it would not necessarily be true for infinity/infinity).

You're right; this was a bogus proof on my part.
Proof of bogosity: 2*1 is finite. If 2*N is finite, then 2*(N+1) is
finite--after all, you're taking a finite number and adding 2. Therefore,
2*infinity is finite. This is clearly a problematic statement; therefore,
there must be something wrong with the idea that all inductive proofs extend
to infinity.
I still think I'm right that 1/infinity is not zero; but I'll have to
fall back and get some better ammunition. :-)

Phaedrus

Jan 13, 1998

In article <34BB862A...@best.com>, Dustin Wood <dw...@best.com> wrote:
>Okay, Here we go again. Let's see what can be done to (er with) Phaedrus' "proof"

Errr, if you're under the impression that I'm doing this to annoy you,
there's a probability of 1 that you're mistaken. I'm doing this because the
issue was raised, and because this is an area there are lots of
misconceptions about--and because, if I'm wrong, I want to know about it.
If you're not getting something out of the discussion, then feel free not to
continue it...

>Phaedrus wrote:
>> Now, let me take a stab at proving that it is in fact possible to
>> shuffle forever and _never_ reach the desired order.
>> First of all, it's worth looking at the probabilities again. We've
>> already seen that the odds of getting our desired combo in one try are 1/C,
>> and the odds of not getting it in one try are 1-(1/C). What are the odds of
>> not reaching that combo in N tries? Well, since each try is independent
>> (we're assuming that we're shuffling thoroughly each time), the odds are
>> (1-(1/C))^N, And what are the odds of reaching that combo at least once in
>> N tries? Well, it's 1 minus the odds of _not_ reaching that combo in N
>> tries--so it's 1-((1-(1/C))^N).
>> So, what happens if we make N a really really high number? Well,
>> the odds of missing every time--(1-(1/C))^N-- get smaller and smaller, closer
>> and closer to 0. So the odds of hitting at least once get closer and closer
>> to 1.
>

>And once we let N go to infinity, it becomes 1.

If you believe that, then all you have to do is prove it, and the
discussion will end very rapidly. :-)

>> But, no matter how high we make N, the odds of hitting at least once
>> never quite reach 1. Even if we make N infinite, the odds of missing are still
>> an infinitesimal fraction away from 0, and so the odds of hitting at least
>> once are still just that one-in-infinity fraction away from 1.
>

>Wrong. Once you go to infinity the odds of missing no longer approaches
>zero. It becomes zero.

This is provably wrong. The sum of the probabilities of all the outcomes
of an event has to equal 1--even if there's an infinite number of possible
outcomes. This is one of the basic axioms of probability. If I flip a coin
an infinite number of times, then there's a probability of 1 that I'll wind
up with an infinite number of coin flips.
If I flip a coin an infinite number of times, there's an infinite number
of possible series of flips that can result--an infinite number of possible
outcomes. If you pick _any_ particular series of flips, then the probability
of missing that series with at least one flip approaches 1 as the number of
flips approaches infinity, just like the odds of missing the "all heads"
series does. So if the probability of getting "all heads" with an infinite
number of flips is zero, then the probability of getting _any other_
particular series of flips is zero. But that can't be true; because if it
is, then the sum of the probabilities of all the possible outcomes can't be
1. No matter how many times I add zero to itself, even if I do it an
infinite number of times, I'm not going to get to 1.
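For any finite N, the bookkeeping appealed to here checks out exactly (a sketch using exact rational arithmetic; with N flips there are 2^N equally likely series, each with probability 1/2^N, and those probabilities sum to exactly 1):

```python
from fractions import Fraction

N = 10
p_each = Fraction(1, 2) ** N   # probability of any one particular series
num_series = 2 ** N            # number of distinct length-N series
total = num_series * p_each    # sum over all possible outcomes

assert total == 1              # the outcome probabilities sum to exactly 1
print(total)
```

Whether the same additivity carries over to the uncountably many *infinite* series is precisely the point the thread is arguing about; the sketch only verifies the finite case.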

>This can be proved the same way I proved that .999999... is exactly
>equal to 1. If you refuse to accept that .999999... = 1, even after seeing
>proof that is accepted by every math professor in the world, then skip this
>whole thread because nothing will convince you. Your proof actually shows
>that with an infinite number of tries, you MUST succeed. Thank you :)

Dustin, if you can prove that 1/infinity is 0, using the same proof that
you used for .999999=1, then I would truly, sincerely appreciate it if you
could demonstrate that for me.
Here's the first three lines of that proof:

Let X = 0.999999...
Then 100X = 99.999999...
And 100X - X = 99X

You can use proofs involving multiplication and subtraction and so on
when you're dealing with finite numbers--and "0.99999..." is a finite number,
even though it has an infinitely repeating decimal expansion. But when you're dealing
with infinity, these sorts of proofs simply break down. If I is infinity,
then 100*I is not 100I; it's just I. 100I-I is not 99I; it's undefined.
I've said that 1/infinity is infinitely close to zero, and that
1-1/infinity is infinitely close to 1. But, just because of that, you can't
say that 1-1/infinity is .999999... By your own proof, .999999... is not
"infinitely close to" 1; it _is_ 1.

[bogus proof of mine deleted...]

>> So, by induction, it's possible to shuffle the cards the wrong way
>> any number of times, including infinity; no matter how many times I've
>> already done it wrong, I can always do it wrong once more.
>

>Oops, now we have a problem.

Yes, we do. Those responsible have been sacked. :-)

>> Or, to look at it another way: You're saying that it's _impossible_--
>> in other words, probability zero--to go forever without hitting the desired
>> combo. But, for that to be true, that would mean that at some point it
>> would have to be _certain_--probability 1-- that I would hit the combo.
>> But we've just agreed that there's no such point--on any given try, the
>> chance of hitting the combo is only 1/C. So there's never a particular
>> try during which I'm certain to hit the combo, unless I'm using a one-card
>> deck.
>

>True, but not inclusive. For any given try you are not guaranteed of
>getting any particular configuration. But over the whole scope of shuffles,
>you are guaranteed of getting ALL combinations of both deck configurations
>and series of deck configurations.

You're assuming what you're trying to prove--namely, that you're
guaranteed to get all combinations. That's cheating. :-)
Here's another way of looking at it; maybe this one will go over better.
You're saying that, if I flip a coin an infinite number of times, that
it's impossible that I'll get nothing but heads. Okay, let's assume for
the moment that that's true.
Pick an infinite series of coin flips. Any series of coin flips. It
doesn't even have to be random. Pick any one you want. Go ahead. I'll wait.
First of all, I hope you'll concede that the series you just picked was
possible. If not, then you're saying that it's _impossible_ to produce an
infinite series of coin flips, which makes it meaningless to talk about the
probabilities of them at all.
Does that series have nothing but heads? Well, if it does, then it must
have been possible to get nothing but heads. But we assumed that that was
impossible, so we'll move on.
Since the series does not have nothing but heads, it has to have at least
one tail. In fact, by your own argument, it must have an infinite number of
tails. But let's just look at the first one--the first tail in the series.
(Even though the series is infinite--so there's no "last tail"--there's still
a first one.)
Was it _certain_ that a tail would be flipped there? Well, by definition,
no; there's a fifty-fifty chance of a head on any particular flip.
So, if the series you picked is possible, then there's another series--
the series you picked, but with the first tail replaced by a head--that's
also possible.
Now, does _that_ series have nothing but heads? If it does, then I'm
right--all heads was possible. If it doesn't, then that new series must also
have a first tail in it. And that tail can't have been certain either; a
head was possible there too. So, if I replace _that_ tail with a head, then
the resulting series must be possible too.
So I can go on, replacing every tail in the series with a head in this
way. It may take an infinite number of replacements; but eventually, I'll be
left with "head, head, head..." for infinity--and therefore, that series must
be possible.
And this is not a case where you can say "Yes, that works for any
finite number of replacements, but it doesn't work for infinity." By
definition, the number of tails in the series of flips can't be any larger
than the number of flips in the series. So, if there's such a thing as an
infinite series in the first place, then the "work" in converting that series
into a "head, head, head..." series can't be any greater than the "work" in
creating that series in the first place. And, again, if there's no such thing
as an infinite series of coin flips, then all-heads is still possible--after
all, it's clearly possible to come up with nothing but heads in any _finite_
series of flips.

>> >Criticism is welcome, though perhaps this entire discussion should be
>> >moved to alt.math.problems.insane. ;)

>I would tend to agree with this statement. Anyone that truly believes that
>they fully understand infinity is insane. And since I truly believe that I
>understand the concept of infinity, if not the reality, I am probably
>insane too ;)

Well, ya know, like the song says, "We're never gonna survive unless we
get a little crazy." :-)

Dustin Wood
Jan 13, 1998


Phaedrus wrote:

> > That something that always happens has probability one
> >is correct. Just the reverse is not true. Think of producing somehow
> >an infinite series of zero's and one's by flipping a coin. After you did this,
> >find out what the probability is that the just produced series comes up.
> >It is simply 0. Nevertheless, it *did* happen.
>

> You're making another faulty assumption here; namely, that the probability
> of that series of flips is zero. It's not; it's about as close to zero as it's
> possible to get, but it's not zero. If it were, then it would be _impossible_
> to produce an infinite series of zero's and one's.

No, only impossible to _randomly_ produce. The odds of flipping a coin and ending up
with any particular infinite series is zero. But that is because you have a
specified starting point. At some point, when carried out to infinity there will be
a deviation, even if that deviation is infinitely far down the stream, it will still
be there. On the other hand, the odds of that infinite series occurring somewhere
within our string of infinite coin flips is 1. It will happen somewhere, just
nowhere that you can get to.

> If I flip a coin N times, then the probability of any one particular set
> of flips coming up--all heads, all tails, heads and tails alternating,
> whatever--is 1/(2^N). As N gets larger and larger, this fraction gets smaller
> and smaller; as N approaches infinity, the fraction's value approaches zero.
> But "A function of N approaches some value as N approaches infinity" does _not_
> mean "That function of N _reaches_ that value when N _reaches_ infinity."
> 1/N approaches zero as N approaches infinity; that does not mean that
> 1/infinity is zero. If 1/infinity were zero, then zero times infinity would
> have to be 1, and that's not true; zero times _any_ number--even infinity--
> is zero.
>

I'm missing something. How does what you said show that zero times infinity would be
1? This doesn't seem to follow from anything you said. The only thing that I can
see from this would be a conclusion that: 1/infinity * infinity/1 = 1 (by the way,
this is not true unless the two infinities are identical, which they by no means have
to be)
For example, infinity + 1 = infinity. It is just not the same infinity.
infinity/1 = infinity
1/infinity = 0
infinity * 0 = undefined or zero depending on who you ask.
but infinity * 1/infinity is undefined.

> If I produce an infinite series of coin flips, then the sum of the
> probabilities of all the possible series has to be 1, by definition. So it
> _cannot be true_ that the probability of each individual series coming up is
> 0. It doesn't matter how many zeroes you add together, even an infinite
> number of them; you're not going to get to 1.

The probability of each series coming up is 1. The problem is that you are
mixing whether or not you have a place to start a particular series from. If
you do not look for a starting point, then the probability for each series,
finite or infinite, of occurring somewhere in the infinite flips is 1. If, on
the other hand, you have a point from which you are checking for a particular
series, then you get the following: The probability of any infinite series
from that point is zero. I understand that some infinite series must occur.
That doesn't change the fact that the probabilities are still zero. Any
infinite series that you specify _will not occur_.
The probability of a particular finite series occurring depends on the length
of that series.
The sum of all those probabilities is infinity, NOT 1.
The sum of all probabilities of all series of the same length IS 1.
The sum of all probabilities of all infinite length series is 1, even though all the
probabilities individually are zero. It is a property of infinity that causes these
strange things.

> If I flip a coin repeatedly, the odds of it coming up heads are almost
> infinitely small--1/(2^infinity), to be precise. But the odds are not zero.
>

I think you meant to say something different. The odds of it coming up heads _at
least once_ gets closer and closer to 1, not zero. You are saying that it becomes
almost impossible to flip a coin and have it come up heads. But for any given flip
the odds are still 1/2. The coin has no memory.

Dustin


Phaedrus
Jan 13, 1998

In article <34bd10bb....@news.charm.net>,
Kutulu <spam...@cyber-wizard.com> wrote:
>phae...@halcyon.com (Phaedrus) wrote:
>: You've just disproven your own hypothesis--namely, that 1/infinity is 0.
>: It's about as close to zero as you can get; but it ain't zero.

>In this case, 1/infinity is a definition. It is *defined* by mathematics to
>be zero. It's not neccessarily intuitive, but it is a definition taken for
>granted in calculus.

If it's "taken for granted", then can you please give me the title of a
calculus textbook that says so? As I've said, I'd really like to read up on
this if I'm wrong, and so would my friend at the UW.
1/infinity--in fact, any positive finite number over infinity--is so
_close_ to zero, that for the vast majority of problems it can be taken to be
zero. But that does not mean that it _is_ zero; and the summing of an infinite
series--which is what this problem boils down to--is one of the areas where you
_cannot_ make this assumption. If 1/infinity were zero, then the sum of an
infinite series of "1/infinities" would be zero. That's not true--and, when
we look at probability, we see that it _can't_ be true.
If I pick a random integer, then the probability that I've picked a random
integer is 1. The probability that I pick any particular integer is
1/infinity. If 1/infinity is zero, then the sum of the probabilities of
picking all the possible integers would be zero--again, no matter how many
zeroes you add together, you're going to get zero. But that would mean that
it's impossible to pick a random integer at all--that the probability of
doing that is zero.


>In the case you're trying to prove, you are saying that 1/x is really close
>to 0 when x = infinity. This isn't true. This is basic limit theory. As x
>APPROACHES infinity, 1/x gets infinitely (and arbitrarily) small. However,
>this always assumes that x is an arbitrarily large *finite* number, in which
>case 1/x is an arbitrarily small *finite* number. Once you decide that x
>*equals* infinity, the mathematical behavior of finite numbers no longer
>applies, and a definition is imposed on that construct.

Even for finite numbers, I don't believe that what you're saying is
correct. If I say "The limit of F(X), as X approaches zero, is 1", that does
not mean that F(0)=1. If you want an example, I can give you one: F(X)=X/X.
For every possible value of X _except_ zero, F(X)=1. By every definition,
the limit of F(X) as X approaches zero is 1. But that doesn't mean that 0/0=1;
0/0 is completely undefined.
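That distinction between a limit and a value is easy to see concretely; an illustrative Python sketch:

```python
# f(x) = x/x equals 1 for every nonzero x, so its limit as x -> 0 is 1 ...
def f(x):
    return x / x

values = [f(x) for x in (1.0, 0.01, 1e-9)]
print(values)

# ... but f(0) itself is undefined (0/0); evaluating it raises an error:
try:
    f(0)
    reached = True
except ZeroDivisionError:
    reached = False
print("f(0) defined?", reached)
```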

>In a similar fashion, given any number N, N/infinity is equal to 0, *as long
>as* that number is a finite number. This, again, is a definition based on the
>same concept you showed:

But again, if you believe this, then you're either saying that the
probabilities of all the possible outcomes of an event do not have to add up
to 1--which is going to send a lot of mathematicians back to the drawing
board--or you're saying that you can add up a bunch of zeroes and get 1--
which is going to send a lot of mathematicians to the nearest bar.

>: If N/infinity is zero, then is (N+1)/infinity zero? Well, it must be;
>: because to get from N/infinity to (N+1)/infinity, we add 1/infinity, which
>: you've just said is zero.

>However, as before, once N is set *equal* to infinity, which is not a finite
>number, that definition and mathematical behavior no longer applies. Again,
>mathematics has imposed a definition that infinity/infinity is not defined.

But wait a minute. In that case, you're saying that the limit of
N/infinity as N approaches infinity is zero--but that infinity/infinity is
undefined. Doesn't that contradict your whole point about limits earlier?

>The conceptual way to look at this is: You are trying to prove that you can
>shuffle a deck an infinite number of times without reaching a given card
>order. If you shuffle an infinite number of times, you must shuffle *until*
>you reach every card order. Stopping your shuffles before reaching any given
>card order makes the number of shuffles finite. In fact, you must shuffle
>until you've reached every combination an infinite number of times, and even
>beyond that, or else you will not have shuffled an infinite number of
>shuffles.

You're assuming your conclusion. If I'm right--and it's possible to
go forever without hitting that combo--then I _don't_ have to "shuffle until
you've reached every combination an infinite number of times." In fact, if
the combinations we're talking about are infinitely long, then I _can't_
shuffle until I hit them all.

Phaedrus
Jan 13, 1998

In article <34BB7B03...@best.com>, Dustin Wood <dw...@best.com> wrote:
>Paul Miller wrote:
>
>> >: If a deck is shuffled an infinite number of times
>> >: and that deck contains only two gaeas blessings
>> >: then a number of those shuffled states *will* be such
>> >: that the blessings are the last two cards left.
>> >
>> >Not necessarily. It will happen with probability one, but that doesn't mean
>> >it will 'surely happen'.
>
>Yes it does.

Agreed. If something's probability is 1, then it's certain. There's
no such thing as "probability-1 things that are more likely than others"; by
definition, all probability-1 things are equally likely--they all happen, every
time.
Likewise, if something's probability is zero, then it can't happen. If
it happens, then it can't have had a probability of zero.

>> > When there are infinitely many possibilities, that something happens
>> > with probability 1 means it 'almost surely' happens, but it might
>> > still come around otherwise.

>No it can't.

Agreed again. If something is "almost" certain to happen, then its
probability is not 1; it's "almost" 1.

>> >For example, If you toss a coin infinitely the probability that you get
>> >at least once 'head' is 1. Nevertheless, it is still possible to get
>> >all 'tail'.
>
>No it isn't. Well, to be exact it both is and isn't.

Errrrrr, it's impossible for something to be both possible and impossible.
(It's also certain that something can't be both certain and uncertain.)

>With the infinity, you will get, at some point, an infinite string of all
>tails. However, at some other point, you will get an infinite stream of all
>heads. In fact, at different points of your infinite string of coin flips,
>all infinite and finite substrings will occur. You can't say which will
>come first, but you can say that all of them will come. Even if you start
>off with the infinite string of tails first, because we are doing it an
>infinite number of times, we will get to all the other possibilities too :)

It is with deep and sincere regret that I inform you that this is total
gibberish. :-)
Let's look at three different infinite strings of coin flips--the
set "head, head, head, head...", the set "tail, tail, tail, tail...", and the
set "head, tail, head, tail..." You're saying that it's possible to include
all three of these infinite strings of coin flips in a single infinite string
of coin flips. That simply isn't so. (Well, make that "that isn't so";
there's nothing in this thread that's "simply" anything. :-) )
An infinite series, by definition, cannot be "An infinite number of
things following one rule, followed by an infinite number of things following
a different rule." The infinite series "head, head, {infinite heads}, tail,
tail, {infinite tails}, head, tail, {infinite heads and tails}..." cannot
exist. If there's some point at which I stop applying the first series and
start applying the second series, then by definition, the first series
wasn't infinite. If a series contains an infinite run of heads, then by
definition, there can't be anything _after_ that infinite run of heads in the
series; by definition, an infinite run of heads never stops.

Phaedrus
Jan 13, 1998

In article <69hi6h$o32$1...@news.iquest.net>,
Dan Johnson <pano...@iquest.net> wrote:

>In article <69h87r$7b5$1...@halcyon.com>, Phaedrus <phae...@halcyon.com> wrote:
>> 1/infinity--in fact, any positive finite number over infinity--is so
>>_close_ to zero, that for the vast majority of problems it can be taken to be
>>zero.
>
>If 1/x is not 0, x is NOT infinity.

I've already said "1 over infinity is not zero," and I've already given
several explanations and proofs for why that has to be. If you're going to
say "1 over infinity is zero," and not give any proof for why that has to be,
then there's not much I can say other than "Is not."

>> But that does not mean that it _is_ zero; and the summing of an infinite
>>series--which is what this problem boils down to--is one of the areas where you
>>_cannot_ make this assumption. If 1/infinity were zero, then the sum of an
>>infinite series of "1/infinities" would be zero.
>

>infinity * 0 is an indeterminate form.

Daniel, I'm going to ask you a question, and I'm not asking it to be
sarcastic; I'm asking it because I want to be sure that I understand your
point.
Are you, or are you not, saying that I can add 0 to itself an infinite
number of times, and get a number other than zero? (Please note that I
wasn't talking about multiplication; I was talking about the sum of an
infinite series.)

Phaedrus
Jan 13, 1998

In article <34BC1343...@best.com>, Dustin Wood <dw...@best.com> wrote:
>Phaedrus wrote:
>> [Someone whose attribution line was deleted wrote:]

>> > That something that always happens has probability one
>> >is correct. Just the reverse is not true. Think of producing somehow
>> >an infinite series of zero's and one's by flipping a coin. After you did
>> >this, find out what the probability is that the just produced series
>> >comes up. It is simply 0. Nevertheless, it *did* happen.

>> You're making another faulty assumption here; namely, that the probability
>> of that series of flips is zero. It's not; it's about as close to zero as
>> it's possible to get, but it's not zero. If it were, then it would be
>> _impossible_ to produce an infinite series of zero's and one's.

>No, only impossible to _randomly_ produce.

Are you saying that it's impossible to randomly produce an infinite
series of coin flips? (Note very carefully what I'm saying. I'm not asking
whether it's possible to randomly produce a particular series of coin flips
that I happen to be looking for; I'm asking whether it's possible to produce
any infinite series of coin flips at all.)
If you're saying "Yes; it's impossible to produce an infinite series of
coin flips at all," then I can only say "That's an interesting viewpoint,"
and leave you on your way. :-)
If you're saying "It's possible to randomly produce an infinite series
of coin flips; but it's not possible to produce the one that you already had
in mind," then I can prove to you that you're wrong.
Let's say that I randomly produce an infinite series of coin flips,
with no particular one in mind at all; I'll just take whatever flips come up.
Shortly thereafter, someone else produces his own infinite series of
coin flips.
Now, let me ask you a question: Is it possible that the two of us
will come up with the same series?
If your answer is "No," then I can only respectfully submit to you that
you're wrong. It's clearly possible to produce that particular series of
coin flips; _I just did it._ "It happened; therefore, it's possible" is
one of the few completely indisputable forms of argument there is. :-) There
is no law in the universe that says "It's possible for me to produce this
particular series of coin flips, but it's not possible for him to do the
same." Our coins are not karmically linked in any way.
If your answer is "Yes," then you're saying that it _is_ possible to
produce a predetermined, infinite series of coin flips--namely, the one that
I just produced. And unless you believe that there's something magical about
my coin, and the particular infinite series I produced, then it must be
possible to produce any other particular infinite series of coin flips.

> The odds of flipping a coin and ending up
>with any particular infinite series is zero.

In that case, the odds of producing an infinite series of coin flips
at all must be zero; after all, if I produce any infinite series of coin
flips at all, it's going to match _some_ particular infinite series of coin
flips.

> But that is because you have a
>specified starting point.

Why should that matter? Does the coin have a mind of its own? Does it
know the series I have in mind, and is it determined to foil me at some point?
:-)
As Dave said, you seem to be falling victim to the old gambler's fallacy
here; namely, that if the dice keep coming up 7's, that 7's somehow become
less likely in the future--that the dice have to "even out" somehow. The
dice have no memory, and neither do the cards once they're shuffled. The
fact that the coin has come up the way I wanted for the last umpteen bajillion
flips does not make it any more or less likely to come up the way I want next
time.

> At some point, when carried out to infinity there will be
>a deviation, even if that deviation is infinitely far down the stream, it
>will still be there.

Errrr, by definition, it's not possible for me to guess right an
infinite number of times, then guess wrong. It's not possible for the coin
to come up with an infinite number of heads, then a tail. If the run ever
stops, then it wasn't infinite.
(Yes, there are classes of infinity for which this isn't true. But
we're talking about "countable infinities" here, and it _is_ true for those.)

> On the other hand, the odds of that infinite series occurring somewhere
>within our string of infinite coin flips is 1. It will happen somewhere, just
>nowhere that you can get to.

Errrr, again, you seem to be mistaken about the nature of an infinite
series here. If I have an infinite series in mind, then it can't happen
"somewhere within" a larger infinite series. Well, that's not quite true;
it can happen at _the end_ of another infinite series. If the series I have
in mind is "an infinite number of heads", for example, then it falls somewhere
within the series "a finite number of tails, then an infinite number of
heads." But, by definition, it's impossible to have the series "Something
else, then the infinite series I have in mind, then something else." It's
fine to have that finite run of "something else" first; but if it ever stops
being "the infinite series I had in mind", then by definition, it wasn't
infinite.

>> If I flip a coin N times, then the probability of any one particular set
>> of flips coming up--all heads, all tails, heads and tails alternating,
>> whatever--is 1/(2^N). As N gets larger and larger, this fraction gets smaller
>> and smaller; as N approaches infinity, the fraction's value approaches zero.
>> But "A function of N approaches some value as N approaches infinity" does _not_
>> mean "That function of N _reaches_ that value when N _reaches_ infinity."
>> 1/N approaches zero as N approaches infinity; that does not mean that
>> 1/infinity is zero. If 1/infinity were zero, then zero times infinity would
>> have to be 1, and that's not true; zero times _any_ number--even infinity--
>> is zero.

>I'm missing something. How does what you said show that zero times infinity
>would be 1? This doesn't seem to follow from anything you said.

Sorry; I may have left this out. By definition, if an event can happen
in several different, mutually exclusive ways, then the probability of that
event happening is the sum of the probabilities of each of those possible ways
it can happen. For example, the chance that I'll roll a 1, 2, or 3 on a die
is "the chance that I'll roll 1, plus the chance that I'll roll 2, plus the
chance that I'll roll 3."
Likewise, the chance that I'll produce any infinite series of coin flips
at all is the sum of the probabilities of producing each of the series that I
might come up with. If the probability of producing any particular series is
zero, then the sum of the probabilities of producing all the series has to be
zero too--you can't add zeroes together and get a number other than zero, even
if we do it an infinite number of times. So, if the odds of producing any
particular series are zero, then the odds of producing a series at all has
to be zero.
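The additivity rule described above can be checked in exact arithmetic; an illustrative Python sketch:

```python
from fractions import Fraction

# Mutually exclusive outcomes add: the chance of rolling a 1, 2, or 3
# on a fair die is P(1) + P(2) + P(3) = 1/6 + 1/6 + 1/6 = 1/2.
p_each = Fraction(1, 6)
p_one_two_or_three = p_each + p_each + p_each
print(p_one_two_or_three)
```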


>The only thing that I can see from this would be a conclusion that:
>1/infinity * infinity/1 = 1 (by the way, this is not true unless the two
>infinities are identical, which they by no means have to be)

As Dave said, this gets into "infinitesimals" (and I'm going to run out
and get the book he mentioned tomorrow). If you're going to work with
problems like this at all and get sensible results, then you have to accept
that there can be numbers that are infinitely small, as well as numbers that
are infinitely large.

>For example, infinity + 1 = infinity. It is just not the same infinity.

Actually, it's _exactly_ the same infinity.

>infinity/1 = infinity

Infinity / any positive finite number is infinity. Infinity / any
negative finite number is negative infinity; infinity / 0 is undefined.
Infinity / infinity is undefined; but, in certain contexts, we can still
deal with it anyway. For example, look at f(x) = x/x. f(infinity) is
undefined; it's infinity divided by itself. But it's still clear that the
limit of f(x), as x approaches infinity, is 1. (By the way, this is another
argument against the idea of "If f(x) approaches n as x approaches infinity,
that means that f(infinity) must be n.")

>1/infinity = 0

Again, not true, at least not when dealing with this sort of problem.

>infinity * 0 = undefined or zero depending on who you ask.

Don't ask me. :-)

>but infinity * 1/infinity is undefined.

Again, in general, yes; but in the context of certain problems, we
can look at equations that involve infinity/infinity, and get meaningful
answers.

>> If I produce an infinite series of coin flips, then the sum of the
>> probabilities of all the possible series has to be 1, by definition. So it
>> _cannot be true_ that the probability of each individual series coming up is
>> 0. It doesn't matter how many zeroes you add together, even an infinite
>> number of them; you're not going to get to 1.
>
>The probability of each series coming up is each 1. The problem is that you
>are mixing whether or not you have a place to start a particular series from.
>If you do not look for a starting point, then the probability for each
>series, finite or infinite, of occurring somewhere in the infinite flips is 1.

Again, not so. It is simply not possible for an infinite run of heads,
and an infinite number of tails, to both appear in the same infinite sequence
of coin flips. It's not possible for _any_ two infinite series to both appear
in the same infinite series--unless one of the two series happens to be
"Some finite series, then the other infinite series."

>If on the other hand, you have a point from which you are checking for a
>particular series, then you get the following:The probability of any
>infinite series from that point is zero. I understand that some infinite
>series must occur. That doesn't change the fact that the probabilities are
>still zero.

So you're not just saying that you can add a bunch of impossibilities
together and get a possibility; you're saying that you can add a bunch of
impossibilities together and get a _certainty_? That simply makes no sense.

>Any infinite series that you specify _will not occur_.

But you just said that "some infinite series must occur." So now you're
saying that it's possible for an infinite series to occur, but only if I
didn't think of it first? Again, that simply makes no sense, unless the
coin in question has psychic powers and a grudge against me... :-)

>The probability of a particular finite series occurring is dependent upon the
>length of that series.
>The sum of all those probabilities is infinity, NOT 1.

When I say "The sum of all the probabilities is 1", here's what I'm
talking about:
If I flip a coin once, then there are two possible outcomes--a head,
and a tail. The odds of each outcome are 1/2. The odds of getting an outcome
at all is the sum of the probabilities of each outcome--or 1/2+1/2, or 1.
This is good, because by definition, the sum of the probabilities of all the
possible outcomes of an event has to be 1.
If I flip a coin N times, then there are 2^N possible outcomes.
The odds of each outcome are 1/2^N. The odds of getting an outcome at all
is the sum of the probabilities of each outcome--or 2^N additions of 1/2^N,
or 1.
Even if there are an infinite number of possible outcomes, then the
odds of getting an outcome at all are still the sum of the probabilities of
each outcome; and that sum had still better be 1, or a lot of mathematicians
are going to lose their Nobel Prizes.
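That bookkeeping is easy to verify exactly for small N; an illustrative Python sketch:

```python
from fractions import Fraction
from itertools import product

# n coin flips: 2**n equally likely outcomes, each with probability
# 1/2**n.  Summing that probability over every outcome gives exactly 1.
for n in (1, 3, 8):
    outcomes = list(product("HT", repeat=n))
    total = sum(Fraction(1, 2 ** n) for _ in outcomes)
    print(n, len(outcomes), total)
```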

>The sum of all probabilities of all series of the same length IS 1.
>The sum of all probabilities of all infinite length series is 1, even though
>all the probabilities individually are zero. It is a property of infinity
>that causes these strange things.

There is no property of infinity that says "You can add 0 to itself an
infinite number of times and get 1."

>> If I flip a coin repeatedly, the odds of it coming up heads are almost
>> infinitely small--1/(2^infinity), to be precise. But the odds are not zero.

>I think you meant to say something different. The odds of it coming up
>heads _at least once_ gets closer and closer to 1, not zero.

No; I meant to say "The odds of it coming up heads _all the time_ are
almost infinitely small..."

> You are saying that it becomes
>almost impossible to flip a coin and have it come up heads.

No; that's not what I'm saying. I'm saying that the coin has no memory;
that no matter how many times it's come up right so far, it's still no more
and no less likely to come up right the next time. You're the one that
seems to be saying that the coin _does_ have a memory--that, at some point,
it becomes certain that the flip will come up wrong.

> But for any given flip
>the odds are still 1/2. The coin has no memory.

Exactly.
And by the way, for those of you who have no interest in this topic,
I truly apologize; at least it's confined itself nicely to one thread, so you
can killfile it easily. But, based on the email I'm getting, a surprising-
to-me number of people are very interested in this topic. And, on a personal
level, I've found the discussion extremely interesting, even when I disagree
completely with the points being made.

Dustin Wood
Jan 13, 1998

Why does everyone continue to insist that .999999... does not equal 1? That is
basically what you said. Common sense might say that they are not equal, but
common sense is wrong here.

A cumulative probability of 1 means, by definition, that something within the set
of choices that were added to come up with the cumulative total of 1 is guaranteed
to happen. Likewise, a cumulative probability of zero means, by definition, that
nothing within that set will happen.

All of this is for a single attempt. It doesn't matter if our one attempt is
infinitely long or of finite length. The only time that I am aware of that this
fails to hold is when you mix multiple different infinities.

If you have an infinite series, then somewhere within that series, all
possibilities will occur. By definition infinity covers EVERYTHING, including
itself and all possible subsets. This includes all subsets of both finite and
infinite length.

From any given point, the odds of some infinite series occurring _starting at
that point_ is 1. Obviously some series must occur. But the odds of any
particular infinite series starting from that point is zero. This is the point
where this whole thread seems to be stuck. Everyone keeps saying that
0+0+0+0... can not equal 1. Under normal circumstances, any time you add zero
to something you get that thing back. But we are going from the finite to the
infinite. I.e., any time we stop the adding to see if it is still zero, we
have to do so at some finite point. And the sum will still be zero. It is not
possible to say, "OK, we will just check it after we have added infinite
zeroes," because the addition is only valid if we can check every step. There
is no way to go from something finite to something infinite without simply
being there. There is no transition point. Everything is either finite or
infinite, and there is no point where the two touch.

Dustin


Dustin Wood

Jan 13, 1998, 3:00:00 AM
to


Paul Miller wrote:

> Well, what you've shown is that the probability of an infinite reshuffle loop is
> zero. I don't doubt it. Zero probability does not equal impossible, however.
> It is merely "very unlikely". Likewise probability 1 is not "certain," just
> "very nearly certain."

Where do you get this junk? Suppose someone verbally gives me an integer between 1 and 2
inclusive, and I don't know ahead of time which number they will give me. Then the
odds of them telling me 1 is 1/2, the odds of them telling me 2 is 1/2, the odds of them
not telling me either 1 or 2 is zero, and the odds that they will tell me either 1 or
2 is 1/1, or 1. You can't say "well, they might say 3 instead." That doesn't work
because the problem states that they give me either 1 or 2; anything else is outside
the scope of the stated problem.

Now, according to you it is possible not to get either 1 or 2 even though the
probability is 1. I would like to know what would make you believe that nonsense.

> Okay look at it this way: Say I tell you pick an integer at random. Now what
> is the probability that you pick 34223? Zero. By your argument, the probability
> of picking some integer other than 34223 is 1 - probability of picking 34223,
> which is correct. Your conclusion is that I can never pick 34223, which is
> incorrect.

If you are truly picking from an infinite pool of numbers, then the odds of any
particular one emerging is zero and the odds of some one number emerging is 1.
These are not conflicting under infinity; they only seem to be to our ideas of
common sense. But as I said in an earlier post, common sense goes out the window
once infinity gets involved.


Dustin Wood

Jan 13, 1998, 3:00:00 AM
to


Phaedrus wrote:

> >Okay, Here we go again. Let's see what can be done to (er with) Phaedrus' "proof"
>
> Errr, if you're under the impression that I'm doing this to annoy you,
> there's a probability of 1 that you're mistaken. I'm doing this because the
> issue was raised, and because this is an area that there's lots of
> misconceptions about--and because, if I'm wrong, I want to know about it.
> If you're not getting something out of the discussion, then feel free not to
> continue it...

My apologies. I did not mean to come across as upset. I am actually enjoying this
thread immensely.

That is where you are wrong. If you add an infinite number of zeroes, or anything else
for that matter, then what you get is something that has to be defined. Everyone seems
to agree that if the numbers are all greater than zero (regardless of how close they
are), then you will get infinity for an answer. But what is infinity? It is only a
concept. It has no reality in any way that humans are capable of understanding. By
adding an infinite number of zeroes, you have in effect invoked a concept (infinity)
that skews the whole thing so that you can no longer be certain of what the outcome
will be with only the given datum. You must look to exterior bounds in order to
_possibly_ determine what the sum will be. In this case the exterior structure
dictates that the sum, when carried to infinity, must equal 1.

> >This can be proved the same way I proved that .999999... is exactly
> >equal to 1. If you refuse to accept that .999999... = 1, even after seeing
> >proof that is accepted by every math professor in the world, then skip this
> >whole thread because nothing will convince you. Your proof actually shows
> >that with an infinite number of tries, you MUST succeed. Thank you :)
>
> Dustin, if you can prove that 1/infinity is 0, using the same proof that
> you used for .999999=1, then I would truly, sincerely appreciate it if you
> could demonstrate that for me.
> Here's the first three lines of that proof:
>
> Let X = 0.999999...
> Then 100X = 99.999999...
> And 100X - X = 99X
>
> You can use proofs involving multiplication and subtraction and so on
> when you're dealing with finite numbers--and "0.99999..." is a finite number,
> even though it has an infinitely-repeating decimal expansion. But when you're dealing
> with infinity, these sorts of proofs simply break down. If I is infinity,
> then 100*I is not 100I; it's just I.

True, but it isn't the same I. I'm not sure that matters here though.

> 100I-I is not 99I; it's undefined.
> I've said that 1/infinity is infinitely close to zero, and that
> 1-1/infinity is infinitely close to 1. But, just because of that, you can't
> say that 1-1/infinity is .999999... By your own proof, .999999... is not
> "infinitely close to" 1; it _is_ 1.

Hmmmm... I suppose that you don't agree that .99999.... is the same as .99999.....7
They are exactly equal, but I can't think of how to prove that other than simply
realizing that infinite 9's are infinite 9's regardless of what you try to put after
them.

If you will accept that then read on :)

x = 0.000000....something
10x = 00.000000....something
10x - x = 9x
0.000....something - 00.000...something = 0.000...something
9x = 0.000....something
x = 0.000...something / 9
but 0.000...something = 0 (If you don't accept this, then we will simply have to agree
to disagree because I can't think of any way to prove that infinite zeroes is zero,
regardless of what follows it)
x = 0/9
x = 0

> >> So, by induction, it's possible to shuffle the cards the wrong way
> >> any number of times, including infinity; no matter how many times I've
> >> already done it wrong, I can always do it wrong once more.
> >
> >Oops, now we have a problem.
>
> Yes, we do. Those responsible have been sacked. :-)
>
> >> Or, to look at it another way: You're saying that it's _impossible_--
> >> in other words, probability zero--to go forever without hitting the desired
> >> combo. But, for that to be true, that would mean that at some point it
> >> would have to be _certain_--probability 1-- that I would hit the combo.
> >> But we've just agreed that there's no such point--on any given try, the
> >> chance of hitting the combo is only 1/C. So there's never a particular
> >> try during which I'm certain to hit the combo, unless I'm using a one-card
> >> deck.
> >
> >True, but not inclusive. For any given try you are not guaranteed of
> >getting any particular configuration. But over the whole scope of shuffles,
> >you are guaranteed of getting ALL combinations of both deck configurations
> >and series of deck configurations.
>
> You're assuming what you're trying to prove--namely, that you're
> guaranteed to get all combinations. That's cheating. :-)

It's not cheating, it's the definition of infinity. If I'm not allowed to use the
definition, then how can I be expected to do anything at all with infinity? The
definition of infinity is that it covers everything possible. If it can happen, then
over infinity, it will happen. The definition of infinity includes itself too.

> Here's another way of looking at it; maybe this one will go over better.
> You're saying that, if I flip a coin an infinite number of times, that
> it's impossible that I'll get nothing but heads. Okay, let's assume for
> the moment that that's true.

You keep misinterpreting me. Given a specific starting point, it is impossible to get
all heads (or any other infinite combination). Given an infinite string of flips, a
substring of infinite length of all heads will occur. But it has no starting point and
no ending point. This does not exclude it, nor does it exclude any other
possibilities. It is one of those quirky properties of infinity.

> Pick an infinite series of coin flips. Any series of coin flips. It
> doesn't even have to be random. Pick any one you want. Go ahead. I'll wait.
> First of all, I hope you'll concede that the series you just picked was
> possible. If not, then you're saying that it's _impossible_ to produce an
> infinite series of coin flips, which makes it meaningless to talk about the
> probabilities of them at all.
> Does that series have nothing but heads? Well, if it does, then it must
> have been possible to get nothing but heads. But we assumed that that was
> impossible, so we'll move on.

Not so fast.... I can produce any series I want. If the definition of the infinite
series I produce says that I cannot get a tails flip, then I will never get a tails
flip. A random flip does not exclude any possibilities, including the possibility of
all heads, all tails, or any other infinite series. But none of those series will
start from anywhere that we can get to. If we start flipping and get some infinite
series, what I am saying is that that series CAN NOT have any restrictions on it (like
all heads, all tails, or any combination thereof), because if it does have ANY
restrictions, then eventually we will get a flip that deviates. This must be so by the
definition of infinity, which says that everything that can happen will. And it
certainly can happen that we could flip the coin and get something that doesn't fit
with the restrictions, whatever they may be. On the other hand, if there are no
restrictions on the flips, then we can flip for eternity and never violate the
definition of infinity. All infinite streams will occur, just not with a beginning or
ending that is accessible.

> Since the series does not have nothing but heads, it has to have at least
> one tail. In fact, by your own argument, it must have an infinite number of
> tails. But let's just look at the first one--the first tail in the series.
> (Even though the series is infinite--so there's no "last tail"--there's still
> a first one.)

The first tail is just as inaccessible as the last one. Neither exist until you invoke
infinity, and then both exist but can not be attached to any fixed point.

> Was it _certain_ that a tail would be flipped there? Well, by definition,
> no; there's a fifty-fifty chance of a head on any particular flip.

The problem is where is "there". By the definition it was certain that a tail would be
flipped or the series wouldn't have started there. You are stuck on the idea that this
series has to start _somewhere_. It doesn't start anywhere that exists in any real
sense. It starts somewhere within infinity. You can not attach some finite point to
an infinite string.

> So, if the series you picked is possible, then there's another series--
> the series you picked, but with the first tail replaced by a head--that's
> also possible.
> Now, does _that_ series have nothing but heads? If it does, then I'm
> right--all heads was possible. If it doesn't, then that new series must also
> have a first tail in it. And that tail can't have been certain either; a
> head was possible there too. So, if I replace _that_ tail with a head, then
> the resulting series must be possible too.
> So I can go on, replacing every tail in the series with a head in this
> way. It may take an infinite number of replacements; but eventually, I'll be
> left with "head, head, head..." for infinity--and therefore, that series must
> be possible.
> And this is not a case where you can say "Yes, that works for any
> finite number of replacements, but it doesn't work for infinity." By
> definition, the number of tails in the series of flips can't be any larger
> than the number of flips in the series. So, if there's such a thing as an
> infinite series in the first place, then the "work" in converting that series
> into a "head, head, head..." series can't be any greater than the "work" in
> creating that series in the first place.

You can't change the first tail to a head because then it wouldn't be the first tail
anymore and you would have to go to the new first tail, but when you changed it it
would no longer be the first tail either. You can change them for infinity and still
never change them all. The series of infinite flips can not be produced. You would
have to start with one flip which is finite and somehow proceed through to the
infinite. The infinite series of flips must either exist or not. There is no way to
construct it in parts unless those parts also include at least one infinite series or
an infinite number of finite series. Either way you have to use infinity to build
itself.
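
Phaedrus's tail-replacement procedure is at least mechanical on finite prefixes, which is where the two sides agree. A small illustrative sketch (the function names are mine, not from the thread): repeatedly flip the first tail to a head; for any finite prefix this terminates in exactly as many steps as there are tails, and whether the procedure passes to the infinite case is precisely the disputed point.

```python
def first_tail(seq):
    """Index of the first 'T' in a finite flip sequence, or None."""
    for i, flip in enumerate(seq):
        if flip == 'T':
            return i
    return None

def replace_tails(seq):
    """Flip the first tail to a head, repeating until none remain.
    Terminates for any finite prefix, since each step strictly
    reduces the number of tails."""
    seq = list(seq)
    steps = 0
    while (i := first_tail(seq)) is not None:
        seq[i] = 'H'
        steps += 1
    return ''.join(seq), steps

result, steps = replace_tails('HTHHTTHT')
assert result == 'HHHHHHHH' and steps == 4
```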

> And, again, if there's no such thing
> as an infinite series of coin flips, then all-heads is still possible--after
> all, it's clearly possible to come up with nothing but heads in any _finite_
> series of flips.

All heads is possible in either a finite or an infinite series, and is actually
guaranteed in an infinite series. But you can not move between the infinite and the
finite series as part of your proof because the two are incompatible. The infinite can
contain the finite, but not be built from _only_ the finite.

Dustin


Dustin Wood

Jan 13, 1998, 3:00:00 AM
to


David DeLaney wrote:

> If you're adding infinite-size quantities into your number system, then
> if you've got division you also have to add in what's called "infinitesimals"...
> which are numbers that are infinitely _small_. One of them is 1/infinity...
> which is smaller than any non-infinitesimal number, but bigger than zero,
> in the same way that infinity is bigger than any non-infinite number.
>

I think you are wrong; 1/infinity has no real meaning on its own, but it has been
defined to be zero.

> Basically, you have infinity x 1/infinity = 1; infinity x 0 = 0; infinity^2
> x 1/infinity = infinity; etc.

This is completely untrue. Infinity x 1/infinity will equal one if and only if the
two infinities are EXACTLY the same. This definitely does not have to be the case
(just add one to either of them). The same applies to your third equation.

Dustin


Dustin Wood

Jan 13, 1998, 3:00:00 AM
to


David DeLaney wrote:

> >Sorry to keep harping on this, but with infinity, ALL arrangements will occur and
> >you WILL get to see all of his cards.
>
> No. You suffer from a fundamental misconception about infinity, I'm
> afraid. "I have one thing out of a set of three things happen, an
> infinite number of times" is quite different from "I guarantee that I
> will have all three things happen an infinite number of times" ... or
> even "I guarantee that I will have all three things happen". It is
> a perfectly possible sequence to have "{1,2,1,2,1,2,repeat forever}" happen,
> whether you have three or thirty or even an infinite number of things to
> choose from. It may be infinitely un_likely_ ... but it is _possible_
> to choose this as the sequence of things that happens, therefore it's possible
> for it to happen given an infinite set of choices. Whether or not they're
> made randomly.

I agree. I am saying that they will happen as a matter of fact. But I am saying that
it CAN'T happen starting from any specified point. Any infinite series within another
infinite series has beginning and ending points that only exist as concepts, not as
points that can be referenced.

> In another guise, your misconception is the classic gambler's misconception
> "If I bet on red / on 33 / on getting a five-card straight _long enough_,
> it _will_ come up". With a related misconception of "it somehow becomes
> more likely for event X to happen randomly each time it _fails to_ happen
> randomly".

I have NEVER promoted anything like that. If you bet on it forever (i.e. infinitely)
then it will come up infinitely many times. But you can't do that any more than you
can count to infinity. It is the concept that matters, not actually doing it. This
is not the same as thinking that somehow the odds change simply because something has
or hasn't occurred recently.

> The "all arrangements will happen" concept is called "completeness";
> it is a fairly different concept from "infinity". [For example, the "all
> arrangements of a 60-card deck will happen" number is 60!, or 60-factorial.
> Nothing like infinity.]

That has nothing to do with all arrangements happening. 60! is simply the total
number of possible arrangements of 60 distinct cards. But it says nothing about how
long it would take to randomly get all of those arrangements. There is no way to put
a finite upper bound on how long it will take to randomly get all of them. It
will take _at least_ 60! shuffles, and no more than infinitely many, to get all of the
arrangements through random shuffling.
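
How long random shuffling takes to hit every arrangement is, incidentally, the classic coupon-collector problem: with C equally likely arrangements, the _expected_ number of shuffles to see them all is C * (1/1 + 1/2 + ... + 1/C), roughly C * ln C -- finite on average, even though no finite number of shuffles guarantees it. A sketch for a toy 4-card deck (C = 4! = 24):

```python
from fractions import Fraction
from math import factorial, log

C = factorial(4)  # 24 distinct orderings of a toy 4-card deck

# Coupon-collector expectation: when j orderings are still unseen, the
# wait for a fresh one is geometric with mean C/j, so the expected
# number of shuffles to see all C orderings is C * (1/1 + 1/2 + ... + 1/C).
expected = C * sum(Fraction(1, j) for j in range(1, C + 1))

# Finite on average (about C * ln C), even though no finite number of
# shuffles *guarantees* seeing every ordering.
assert expected > C
assert float(expected) < 2 * C * (log(C) + 1)
```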


Dustin Wood

Jan 13, 1998, 3:00:00 AM
to


David DeLaney wrote:

> And, in fact, you can prove mathematically that the +size+ of the
> infinity that counts up the number of possible series is _larger_ than the
> size of the infinity that counts how many flips are in the series. Yes,
> there's more than one size of infinity available.
>

I agree that there is more than one size of infinity involved, but I think that you
have the order reversed. The infinite series of coin flips includes ALL possible
series of coin flips, be they finite or infinite.

> There's even a quite lovely proof [by Georg Cantor, yes that's spelled
> correctly] that this is so ... using the fact that if you try to _list_
> the possible series, counting along the numberline you're counting
> the flips along ... then you can always construct at least one series
> that is _not_ in the list. [Go down the diagonal of the square formed by
> the list of series ... and make the series that has each coin flipped
> oppositely to the coinflip that appears on the diagonal in that place. This
> series is different from every single one of the series in the list.]
>

There is a fundamental flaw in this proof in that it assumes a finite limit that
isn't there. If carried out to infinity, then the diagonals become meaningless.
Infinity interferes with them.

> > You're assuming what you're trying to prove--namely, that you're
> >guaranteed to get all combinations. That's cheating. :-)
>

> Agreed.
>

As I said in my response to Phaedrus, I have to be allowed to use the definition.
That is hardly cheating.

Dustin


Dustin Wood

Jan 13, 1998, 3:00:00 AM
to

Ok, I give up. I love this thread, but it seems that we are all going
around in circles. There is exactly one point to which all of the
disagreements can be boiled down.

All of these arguments hinge on what happens/how you can get to infinity
if you start with a single shuffle. I believe that you can't 'get' to
infinity from a finite point. You have to simply 'be' there at infinity,
and the two can't be reconciled in any meaningful way that we all will
accept.

Unless I see something new or outrageously wrong, I am giving up on this
thread. Anyone that would like to continue the discussion please send me
an email and I will be happy to respond.

Dustin


David DeLaney

Jan 14, 1998, 3:00:00 AM
to

In a previous article, dw...@best.com (Dustin Wood) says:
>David DeLaney wrote:
>> >So let's say there
>> >are 5 cards left, with 4 of them Blessings. Which is the fifth card?
>>
>> You don't know, actually...
>>
>> >An argument could be made that it's a random card, but I could also
>> >argue as follows: I can do any number of mills, so I see every card in
>> >your library.
>>
>> Er: no. By your own argument, there's always one card, at least, you won't
>> see every time; it _could_ happen that that's always the same card, even
>> with an unbounded number of repetitions. So you can't say "I _will_ get
>> to a point where I have seen every card". [Though by the same token you
>> can't say "I _will_ get to a point where there's five cards in his library and
>> four are Blessings", now that I think about it.]


>
>Sorry to keep harping on this, but with infinity, ALL arrangements will occur and
>you WILL get to see all of his cards.

No. You suffer from a fundamental misconception about infinity, I'm
afraid. "I have one thing out of a set of three things happen, an
infinite number of times" is quite different from "I guarantee that I
will have all three things happen an infinite number of times" ... or
even "I guarantee that I will have all three things happen". It is
a perfectly possible sequence to have "{1,2,1,2,1,2,repeat forever}" happen,
whether you have three or thirty or even an infinite number of things to
choose from. It may be infinitely un_likely_ ... but it is _possible_
to choose this as the sequence of things that happens, therefore it's possible
for it to happen given an infinite set of choices. Whether or not they're
made randomly.

In another guise, your misconception is the classic gambler's misconception
"If I bet on red / on 33 / on getting a five-card straight _long enough_,
it _will_ come up". With a related misconception of "it somehow becomes
more likely for event X to happen randomly each time it _fails to_ happen
randomly".

The "all arrangements will happen" concept is called "completeness";
it is a fairly different concept from "infinity". [For example, the "all
arrangements of a 60-card deck will happen" number is 60!, or 60-factorial.
Nothing like infinity.]

>They do? How about Orcish Spy? Or Elemental Augury? Granted that's not his
>entire library (unless he only has 3 cards left) but if all we want is to know
>where that 1 card is, these should do the trick.

Okay, granted. Field of Dreams can also help. Of course, none of these
goes _five_ cards deep ... but if you can incorporate a loop that
keeps infinitely recasting Visions somehow, then you've got it. [Farrelite
Priest/Ornithopter/Enduring Renewal [which was here anyway]/Ashnod's
Altar/three Gaea's Blessings of your own/Celestial Dawn/Jandor's Ring or
some such.]

Dave
--
\/David DeLaney d...@panacea.phys.utk.edu "It's not the pot that grows the flowe
It's not the clock that slows the hour The definition's plain for anyone to se
Love is all it takes to make a family" - R&P. VISUALIZE HAPPYNET VRbeable<BLINK
http://panacea.phys.utk.edu/~dbd/ - net.legends FAQ/ I WUV you in all CAPS! --K

David DeLaney

Jan 14, 1998, 3:00:00 AM
to

phae...@halcyon.com (Phaedrus) says:
> I still think I'm right that 1/infinity is not zero; but I'll have to
>fall back and get some better ammunition. :-)

If you're adding infinite-size quantities into your number system, then
if you've got division you also have to add in what's called "infinitesimals"...
which are numbers that are infinitely _small_. One of them is 1/infinity...
which is smaller than any non-infinitesimal number, but bigger than zero,
in the same way that infinity is bigger than any non-infinite number.

And note here that Q/0 is _not_ "infinity"; it is "undefined". It would
be larger than any possible infinite number if it existed, which
it doesn't.

Basically, you have infinity x 1/infinity = 1; infinity x 0 = 0; infinity^2
x 1/infinity = infinity; etc.

As I've often recommended before, the best _non_technical book I know on
the subject that comes anywhere close to imparting ideas on how to handle
infinity correctly _and_ is readily available in bookstores is a book
called "Infinity and the Mind", by one Rudy Rucker. Waldenbooks or B. Dalton
should easily be able to find it for you. There are _many_ elementary mistakes
that people who haven't done any studying of infinity or probability are
likely to make when trying to argue about infinity, I'm afraid.
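
In standard real analysis, by the way, the 1/infinity dispute dissolves into a statement about limits: 1/n is strictly positive for every finite n, yet it eventually drops below any positive tolerance you name, which is exactly what "the limit is 0" means. (Genuine infinitesimals live only in extended systems like the hyperreals Dave alludes to.) A minimal sketch, with a helper name of my own invention:

```python
from fractions import Fraction

# 1/n is strictly positive at every finite n ...
for n in (10, 10**6, 10**100):
    assert Fraction(1, n) > 0

def below(eps_num, eps_den):
    """Smallest power of ten n with 1/n < eps_num/eps_den
    (an illustrative helper, not a standard function)."""
    n = 1
    while Fraction(1, n) >= Fraction(eps_num, eps_den):
        n *= 10
    return n

# ... yet it falls under any positive tolerance, however small.
assert Fraction(1, below(1, 10**12)) < Fraction(1, 10**12)
```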

David DeLaney

Jan 14, 1998, 3:00:00 AM
to

phae...@halcyon.com (Phaedrus) says:
> If I flip a coin an infinite number of times, there's an infinite number
>of possible series of flips that can result--an infinite number of possible
>outcomes.

And, in fact, you can prove mathematically that the +size+ of the
infinity that counts up the number of possible series is _larger_ than the
size of the infinity that counts how many flips are in the series. Yes,
there's more than one size of infinity available.

There's even a quite lovely proof [by Georg Cantor, yes that's spelled
correctly] that this is so ... using the fact that if you try to _list_
the possible series, counting along the numberline you're counting
the flips along ... then you can always construct at least one series
that is _not_ in the list. [Go down the diagonal of the square formed by
the list of series ... and make the series that has each coin flipped
oppositely to the coinflip that appears on the diagonal in that place. This
series is different from every single one of the series in the list.]
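
The diagonal construction is concrete enough to sketch in a few lines (a finite list here, purely for illustration; Cantor applies the same move to any purported complete infinite list):

```python
def diagonal_flip(rows):
    """Cantor's diagonal: from a list of 0/1 sequences, build a
    sequence that differs from the i-th row at position i."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal_flip(rows)  # differs from row i in column i
for i, row in enumerate(rows):
    assert d[i] != row[i]  # so d cannot equal any listed row
```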

> You're assuming what you're trying to prove--namely, that you're
>guaranteed to get all combinations. That's cheating. :-)

Agreed.

Dan Johnson

Jan 14, 1998, 3:00:00 AM
to

In article <69h87r$7b5$1...@halcyon.com>, Phaedrus <phae...@halcyon.com> wrote:
> 1/infinity--in fact, any positive finite number over infinity--is so
>_close_ to zero, that for the vast majority of problems it can be taken to be
>zero.

If 1/x is not 0, x is NOT infinity.

> But that does not mean that it _is_ zero; and the summing of an infinite
>series--which is what this problem boils down to--is one of the areas where you
>_cannot_ make this assumption. If 1/infinity were zero, then the sum of an
>infinite series of "1/infinities" would be zero.

infinity * 0 is an indeterminate form.
--
Daniel W. Johnson
pano...@iquest.net
http://www.members.iquest.net/~panoptes/
039 53 36 N / 086 11 55 W

Jeffrey G. Montgomery

Jan 14, 1998, 3:00:00 AM
to

In article <69f28k$4id$1...@halcyon.com>, Phaedrus <phae...@halcyon.com> wrote:
> I apologize for the big hunks of quoted text; but the context is
>important here.
>
>In article <34BB02...@isd.net>,
>David Wintheiser <David.Wi...@ActiveSoftware.com> wrote:
>>Paul Miller wrote:
>>>
>>> On 12 Jan 1998 04:27:46 GMT, wgoo...@expert.cc.purdue.edu (Walter Goodwin)
>>> wrote:
>>>
>>> >By adding infinity to the mix, all possible combinations will be reached,
>>>
>>> Prove it.
>>
>>Okay, I'm going to give this a try. Forgive me if I err; it is human,
>>after all. :)

I thought an example would be in order.
60 cards consisting of:
Creatures: 5 types, 4 of each                    (20 cards; running total 20)
Lands:     2 types, 10 of each                   (20 cards; running total 40)
Artifacts: 3 types, 4 of each                    (12 cards; running total 52)
Spells:    3 types, 4 of one and 2 of each other ( 8 cards; running total 60)

Total possible combinations:
                        60!
------------------------------------------------
 (4!)^5  *  (10!)^2  *  (4!)^3  *  (4!) * (2!)^2
[creatures] [lands]  [artifacts]    [spells]

=
60*59*58*57*56*55*54*53*52*51*50*49*48*47*46*45*44*43*42*41*
40*39*38*37*36*35*34*33*32*31*30*29*28*27*26*25*24*23*22*21*
20*19*18*17*16*15*14*13*12*11*10*9*8*7*6*5*4*3*2*1
------------------------------------------------------------------------------
(4*3*2*1)^5 * (10*9*8*7*6*5*4*3*2*1)^2 * (4*3*2*1)^3 * (4*3*2*1) * (2*1)^2

= (I'm doing some canceling here to get rid of the denominator)
59*29*57*11*53*13*17*5*7*47*23*5*11*43*7*41*5*39*19*37*12*35*17*11*31*
29*7*9*26*25*23*11*7*20*19*3*17*16*15*14*13*12*11*10*9*7*6*5*3*1

=
59*57*53*47*43*41*39*37*35*31*29*29*26*25*23*23*20*19*19*17*17*17*16*
15*14*13*13*12*12*11*11*11*11*11*10*9*9*7*7*7*7*7*6*5*5*5*5*3*3*1

= [combine every two numbers]
3363*2491*1763*1443*1085*841*650*529*380*323*289*240*182*156*132*121*
121*90*63*49*49*30*25*15*3

= [again, repeat until done]
8377233*2544009*912485*343850*122740*69360*28392*15972*10890*3087*1470*1125

= (approximately) 1435155165962366500000000000000000000000000000000000000000
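
Hand-cancelling a factorial quotient of this size is easy to get wrong, for what it's worth; the same multinomial count can be computed exactly in a few lines (a sketch using Jeffrey's deck breakdown):

```python
from math import factorial

# Distinguishable orderings of a 60-card deck with repeated cards:
# 60! divided by (copies!) for each distinct card.
copies = [4] * 5 + [10] * 2 + [4] * 3 + [4] + [2] * 2  # creatures, lands, artifacts, spells
assert sum(copies) == 60

denom = 1
for c in copies:
    denom *= factorial(c)

count = factorial(60) // denom
assert factorial(60) % denom == 0  # the division is exact
print(f"{count:.3e} distinct orderings")
```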

>>> The set of all possible deck states over countably many
>>> millings is most definitely infinite.

>>Actually, it isn't. The number of cards in a deck is finite, and thus
>>the number of possible combinations of those cards in a deck is also
>>finite, though extremely large.

I just gave an example. The fewer repeats you have, the closer it is to
[number of cards]!.

>>By using the formula for combinations,
>>it is possible to determine how many combinations of cards are possible
>>from an original 60-card deck. This can be repeated for a deck
>>consisting of 59 of the original 60 cards, and again for 58 of the
>>original 60, and so on until every possible combination of every
>>possible number of cards is calculated; in the subsequent proof, I will
>>use "C" to indicate the total number of deck combinations possible from
>>any number of cards taken from an original 60-card deck. No matter how
>>many times you mill, or how many times you shuffle, your ending deck
>>state will wind up as one of these C combinations.
>

> I'm with you so far.

As am I... I got lost below; I'll try again. :)

>>Ergo, the probability of reaching a specific single combination of a
>>specific number of cards in the deck (p) is 1/C.

Gotcha...

> It's worth clarifying here: the probability of reaching that set
>combination _in a single trial_ is 1/C. In other words, if I randomize the
>cards once, I have a 1-in-C chance of the cards winding up in the
>order I'm after.

Right.

>> The probability of
>>-not- reaching that specific single combination of a specific number of
>>cards (p') is thus 1-(1/C).
>

> Again, this is the chance of not reaching that combination in a single
>trial.

Again, right.

>>Let us make a hypothesis that it is possible to perform infinitely many
>>millings and shuffles and never reach the specific single combination of
>>the specific number of cards we are looking for. This would mean that
>>we would succeed in -not- reaching that specific single combination of a
>>specific number of cards, and thus in the mathematics of probability,
>>p'=1.
>
> It is here that your proof, regrettably, becomes mathematical garbage.
> If something has a probability of 1, then that thing is _certain_ to
>happen. Nobody here has made the argument that you're _certain_ to never
>reach that combination of cards; that would be silly--it's clear that you can.
>(Heck, it's clear that you can reach that combination in a single try, let
>alone an infinite number.) It was argued that it is _possible_ to shuffle an
>infinite number of times without ever reaching the desired order.

Here was where I got lost - I misread his (the >> ) statement. Let me
try and clarify what you have here:

If you have p' = 1, then because p' = 1-(1/C),
1 = 1-(1/C)
0 = (1/C),
C must be infinite.

If you have a finite C, this cannot happen - ergo, p' < 1

>> Since we have already calculated that p'=1-(1/C),
>
> Another problem. This is the odds for a single try; we weren't discussing
>that--we were discussing an infinite number of tries.
>
>> we can use the
>>associative property of equality to say:
>>1=1-(1/C)
>>
>>Subtracting one from both sides of this equation leaves us with:
>>1/C=0
>>
>>And multiplying both sides by C yields:
>>1=0

Which is impossible

>>Since it is not the case that 1=0, our original hypothesis must be
>>incorrect (by reductio ad absurdum), and therefore it is not possible to
>>perform infinitely many millings and shuffles and never reach the
>>specific single combination of the specific number of cards we are
>>looking for.
>

> Your math is sound; unfortunately, it's all based on a faulty initial
>assumption, so it doesn't work. All you've proved is that it's not _certain_
>that you won't arrive at the desired combination in _one_ try; but again, no
>one is arguing that.

Actually, he did it right -- by showing that it IS possible to arrive
at the desired state in one try, he has shown that in an infinite
number of tries, it is CERTAIN that he can end up at that state.

It makes sense to me!

>> Thus, it -must- be possible to reach the specific single
>>combination at some point in time, and since it -must- be possible, we
>>are therefore justified in "cutting to the chase" in order to save time.

You just better hope he doesn't have something that forces a player to
place the top card of their library in their graveyard ... :)

> Now, let me take a stab at proving that it is in fact possible to
>shuffle forever and _never_ reach the desired order.

Since we've shown it is possible to do it in one shuffle, let's see you
try. :)

> First of all, it's worth looking at the probabilities again. We've
>already seen that the odds of getting our desired combo in one try are 1/C,
>and the odds of not getting it in one try are 1-(1/C).

Ok so far

>What are the odds of
>not reaching that combo in N tries? Well, since each try is independent
>(we're assuming that we're shuffling thoroughly each time), the odds are
>(1-(1/C))^N,

Again, OK so far

>And what are the odds of reaching that combo at least once in
>N tries? Well, it's 1 minus the odds of _not_ reaching that combo in N
>tries--so it's 1-((1-(1/C))^N).

Here's a summary:

Odds of getting the state in one shuffle: 1/C
Odds of NOT getting the state in one shuffle: 1-(1/C)
Odds of NOT getting the state in N shuffles: (1-(1/C))^N
Odds of getting the state in N shuffles: 1-(1-(1/C))^N

> So, what happens if we make N a really really high number?

Let's look at that...

IF C=1: P=0, P'=1
IF C>1:
N = 0: P'=0
N = 1: P'=1-(1-(1/C)) = 1/C
N = 2: P'=1-(1-(2/C)+(1/(C^2)))
N -> infinity: P'-> 1

as N increases, 1-(1-(1/C))^N (or P') approaches 1
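This convergence is easy to see numerically. A minimal Python sketch, with an illustrative C = 52 (any finite C behaves the same way):

```python
# Probability of hitting a specific deck state at least once in N shuffles,
# where each shuffle independently hits with probability 1/C.
C = 52  # illustrative number of equally likely states (an assumption)

def p_hit(N, C=C):
    """P' = 1 - (1 - 1/C)^N: odds of at least one hit in N tries."""
    return 1 - (1 - 1 / C) ** N

for N in (1, 10, 100, 1000):
    print(N, p_hit(N))
# P' climbs toward 1 as N grows, but never reaches it for any finite N.
```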

>Well,
>the odds of missing every time--(1-(1/C))^N-- get smaller and smaller, closer
>and closer to 0. So the odds of hitting at least once get closer and closer

>to 1. But, no matter how high we make N, the odds of hitting at least once
>never quite reach 1.

It never quite reaches 1, no - however, at the same time, it keeps
growing larger and larger, approaching 1. Therefore we can say that
AT infinity (which we can never calculate), the probability that we
have in fact reached the desired state IS 1.


>Even if we make N infinite, the odds of missing are still
>an infinitesimal fraction away from 0, and so the odds of hitting at least

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


>once are still just that one-in-infinity fraction away from 1.

No - it's an infinitesimal fraction from *1*! Therefore, the more
times you shuffle, the better the chance that you have reached that
desired state.

> But let's get this away from math. Here's a couple of relatively simple

>ways to prove that it's possible to go forever without hitting the combo
>we're after.

But hitting the combo is NOT impossible - if it were, P (the chance of
missing every time) would equal 1 for every N. It does not.



> First, there's proof by induction. Is it possible to shuffle the cards
>once without hitting the combo we're after? Well, yes, certainly.

Right.


> Now, given the fact that it's possible to shuffle the cards N times
>without hitting our combo, is it possible to shuffle the cards N+1 times
>without hitting our combo? Again, clearly yes--I just shuffle the cards
>N times the wrong way, then shuffle them once more the wrong way.

yeah yeah - and 1=2:
a=b
a^2=ab
a^2 - b^2 = ab - b^2
(a+b)(a-b)=b(a-b)
a+b=b
(since a = b, set a = b = 1, and:)
2=1
(The flawed step, of course, is cancelling (a-b), which is zero.)

You are just adding one more try - not taking the limit. As N
approaches infinity, the chance that you have reached the desired state
approaches 1.

> So, by induction, it's possible to shuffle the cards the wrong way
>any number of times, including infinity; no matter how many times I've
>already done it wrong, I can always do it wrong once more.

This is not a case for induction, though! AT infinity, P'=1.

> Or, to look at it another way: You're saying that it's _impossible_--
>in other words, probability zero--to go forever without hitting the desired
>combo. But, for that to be true, that would mean that at some point it would
>have to be _certain_--probability 1-- that I would hit the combo.

NO!
It would just have to be possible - probability > 0 - to hit the combo,
which it is: P(N=1) = 1/C.

>But we've
>just agreed that there's no such point

No; I proved you wrong.

>--on any given try, the chance of
>hitting the combo is only 1/C.

Right! Since 1/C is not 0, there is always a chance of getting that
state, meaning P' > 0!

> So there's never a particular try during
>which I'm certain to hit the combo, unless I'm using a one-card deck.

This is true - however, look at it this way: Two card deck:
Chances of cards being in order AB (A is first card, B = second) are:
P = probability of not getting that state
P'= probability of having reached that state already (including current
shuffle)
Try #1 : P=(1/2)^1 = 1/2, P'=1-P = 1/2
Try #2 : P=(1/2)^2 = 1/4, P'=1-P = 3/4
Try #3 : P=(1/2)^3 = 1/8, P'=7/8
Try #4 : P=(1/2)^4 = 1/16, P'=15/16
Try #5 : P=(1/2)^5 = 1/32, P'=31/32
...
Try #100: P=(1/2)^100 = 1/1024^10, P'=(1024^10 - 1)/1024^10
Try #(infinity): P=1/infinity (almost 0), P' = 1-(almost 0) = (almost 1)
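The two-card table can be reproduced exactly with rational arithmetic; a minimal Python sketch:

```python
from fractions import Fraction

# Two-card deck: each shuffle gives the order AB with probability 1/2.
# P  = probability of never having seen AB after n tries = (1/2)^n
# P' = probability of having seen it at least once       = 1 - P
for n in (1, 2, 3, 4, 5):
    P = Fraction(1, 2) ** n
    print(f"Try #{n}: P={P}, P'={1 - P}")
# Matches the table: P = 1/2, 1/4, 1/8, ... and P' = 1/2, 3/4, 7/8, ...
```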

> The interesting thing I've found about this sort of problem is: Ask
>someone "Suppose I flip this coin an infinite number of times. Is it
>possible that the flips will come up a head, then a tail, then a head, and
>a tail, and so on forever?" Most people will scratch their head and think
>about this for a while, and say "Well, I suppose it's possible--it's really
>really _unlikely_, but it's possible." But ask someone "Suppose I flip this
>coin an infinite number of times. Is it possible that the flips will just
>keep coming up heads, forever?" And the answer you get will very probably
>be "No--a tail would have to come up eventually." For some reason, the idea
>of an infinite series in which every possible outcome happens in a particular
>order doesn't bother us; but the idea of an infinite series in which a
>particular outcome _always_ happens, or _never_ happens, strikes us as
>fundamentally wrong. This is true even though the odds in each case are
>exactly the same--a "head, tail, head, tail..." sequence is exactly as
>likely as a "head, head, head, head..." sequence.

You forget: The probability of the order: H,T,H,T,...,H,T,... coming up
is the same as the probability of H,H,H,H,...,H,H,... coming up.
1/2 * 1/2 * 1/2 * 1/2 * 1/2 * .......
However, the chance of it NOT happening is: 1-(1/2 * 1/2 * 1/2 * .....),
which approaches 1 as you increase the number of flips.
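A quick sketch confirming that, for any finite prefix length n, the alternating and all-heads sequences are exactly equally likely (n = 10 is illustrative):

```python
from fractions import Fraction
from itertools import product

n = 10  # length of the finite prefix we check (illustrative)

def seq_prob(seq):
    # A fair coin: every specific sequence of flips has probability (1/2)^len.
    return Fraction(1, 2) ** len(seq)

alternating = "HT" * (n // 2)      # H,T,H,T,...
all_heads   = "H" * n              # H,H,H,H,...
assert seq_prob(alternating) == seq_prob(all_heads) == Fraction(1, 2 ** n)

# Sanity check: the probabilities of ALL 2^n sequences sum to exactly 1.
total = sum(seq_prob("".join(s)) for s in product("HT", repeat=n))
assert total == 1
```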

>>Criticism is welcome, though perhaps this entire discussion should be
>>moved to alt.math.problems.insane. ;)
>

> Naaah. Magic is a mathematical game; it attracts a lot of mathematical
>minds. The newsgroup goes through these "Magic proofs" phases; they end
>eventually. "Before you end the phase, I have one more proof to play..." :-)

This is no worse than figuring out the probability of the dealer having
Blackjack in Las Vegas. :)

Besides - I, and probably others (*wink*), enjoy proving people wrong. :)

Jeff

Jeffrey G. Montgomery

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In article <34bc6...@isc-newsserver.isc.rit.edu>,
Jeffrey G. Montgomery <jgm...@osfmail.isc.rit.edu> wrote:
>= (approximately) 1435155165962366500000000000000000000000000000000000000000

Just for clarification, that number is the possible number of states the
cards can be in. The chances of coming up with one specific version
of the deck is 1/[that number].

If you have a standard deck, the chances of the cards being in the order
AAAA222233334444....KKKK (where suit does not matter) is:
52! / (4!)^13 =


52*51*50*49*48*47*46*45*44*43*42*41*40*39*38*37*36*35*34*33*32*31*30*
29*28*27*26*25*24*23*22*21*20*19*18*17*16*15*14*13*12*11*10*9*8*7*6*5*
4*3*2
-----------------------------------------------------------------------------
4*4*4*4*4*4*4*4*4*4*4*4*4*3*3*3*3*3*3*3*3*3*3*3*3*3*2*2*2*2*2*2*2*2*2*2*2*2*2

= 270725*178365*61705*319865*28985*23751*3575*25840*32760*1247400
= (approximately) 92024242230271048000000000000000000000000000000000

(I used C programs to get these numbers... this is as close as the
computer can get without using 128 bit fields... (I *THINK* it's using
32 bit...))
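For what it's worth, a language with arbitrary-precision integers sidesteps the 32-bit limitation mentioned above. This Python sketch recomputes 52!/(4!)^13 exactly:

```python
import math

# Exact count of orderings of a 52-card deck when suits are ignored:
# 52! / (4!)^13, since each of the 13 ranks appears in 4 interchangeable suits.
states = math.factorial(52) // math.factorial(4) ** 13

print(states)            # exact 50-digit integer, ~9.2 * 10^49
print(len(str(states)))  # 50
```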

Therefore the chances of getting the cards in order, suits ignored, is
1 in [that number].

The chances of getting the cards in the order
A234567890JQKA234567890JQKA234567890JQKA234567890JQK (0 being 10)
is the same

Chances of getting them in ANY specific order (disregarding suit) is
the same.

Countability is important... The suits can be used in place of (for
example) WHICH counterspell you want there. If you want the
counterspells in a specific order, and you have 4 of them, you do that
as:

C(a,b) = a!/((a-b)! b!)

a items, choose b.

(a=4, b=4)

4!/(4-4)!4! = 4!/(0!)(4!) = 1

whereas if you just want all four and don't care what order, the number
of permutations is: (a=4, b=0)

4!/(4-0)!0! = 4!/(4!)(0!) = 1

Gee... they're the same! :)

Add another kind of card though...
Aces and Dueces, all four suits. (8 cards)
Order counts, suit doesn't:
AAAA2222

4!/(4-4)!4! * 4!/(4-4)!4! = 1 * 1 = 1 way to do it
However, if you only want them so that whatever comes first is also last,
A??????A or 2??????2
2*1* 6!/(6-6)!6! = 2
:)
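Those C(a,b) values are easy to verify; a small sketch using Python's factorials (math.comb is the built-in equivalent):

```python
from math import comb, factorial

def C(a, b):
    # C(a,b) = a!/((a-b)! b!): ways to choose b items from a, order ignored.
    return factorial(a) // (factorial(a - b) * factorial(b))

assert C(4, 4) == comb(4, 4) == 1   # choosing all four counterspells: one way
assert C(4, 0) == comb(4, 0) == 1   # choosing none of them: also one way
assert C(52, 4) == comb(52, 4) == 270725  # first factor in the suit-blind count above
```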

It's easy when you get the hang of it...

I recently posted the basic rules of probability... go check it out! :)

Jeff

John Dilick

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Yea, verily, on 13 Jan 1998 18:34:35 -0800, phae...@halcyon.com (Phaedrus)
proclaimed:

> If I pick a random integer, then the probability that I've picked a random
>integer is 1. The probability that I pick any particular integer is
>1/infinity. If 1/infinity is zero, then the sum of the probabilities of
>picking all the possible integers would be zero--again, no matter how many
>zeroes you add together, you're going to get zero. But that would mean that
>it's impossible to pick a random integer at all--that the probability of
>doing that is zero.

Once you put 'infinity' into the picture, standard mathematics (including
probability theory) gets thrown out of the window.

1/infinity is arbitrarily close to zero. If we wish to deal with the reals
alone, combined with a single infinity, it can be treated as zero. If we
wish to add an infinite number of 1/infinity terms together, then we have
an infinite number of 1's summed together, all divided by infinity. In
other words, infinity/infinity, which is undefined. Infinitesimals are
strange beasts.

> Even for finite numbers, I don't believe that what you're saying it
>correct. If I say "The limit of F(X), as X approaches zero, is 1", that does
>not mean that F(0)=1. If you want an example, I can give you one: F(X)=X/X.
>For every possible value of X _except_ zero, F(X)=1. By every definition,
>the limit of F(X) as X approaches zero is 1. But that doesn't mean that 0/0=1;
>0/0 is completely undefined.

Err, not quite. You cannot use limits to appropriately determine the value
of F(x)=X/X at x=0, not because limits are wrong, but because that
particular F(x) is _discontinuous_. For a continuous function, that is a
perfectly valid method of determining a value of a function at a point.

>>In a similar fashion, given any number N, N/infinity is equal to 0, *as long
>>as* that number is a finite number. This, again, is a definition based on the
>>same concept you showed:
>
> But again, if you believe this, then you're either saying that the
>probabilities of all the possible outcomes of an event do not have to add up
>to 1--which is going to send a lot of mathematicians back to the drawing
>board--or you're saying that you can add up a bunch of zeroes and get 1--
>which is going to send a lot of mathematicians to the nearest bar.

Transfinite mathematics does not obey the same rules as mathematics limited
to the reals. 1/infinity, under the reals, exactly equals 0, even though
it is an infinitesimal.

> But wait a minute. In that case, you're saying that the limit of
>N/infinity as X approaches infinity is zero--but that infinity/infinity is
>undefined. Doesn't that contradict your whole point about limits earlier?

No.

Limits indicate what the value is tending _towards_. For F(x)=1/x,
limit((x->infinity)(F(x)))=0. As we are talking about the reals, x must be
a real, and never actually _equal_ infinity. It just gets bigger and
bigger. If it were to actually _equal_ infinity, then it is undefined and
_another_ tool must be employed -- L'Hopital's Rule.

If F(x) = 1/x, then it can be re-written as F(x) = G(x)/H(x), where G(x) =
1 and H(x) = x. Then, at the point where F(x) is undefined, F(x) =
G'(x)/H'(x), where G'(x) is the derivative of G(x) and H'(x) is the
derivative of H(x). G'(x) = 0, H'(x) = 1, so F(x) (at x=infinity) = 0/1,
which is 0.

> You're assuming your conclusion. If I'm right--and it's possible to
>go forever without hitting that combo--then I _don't_ have to "shuffle until
>you've reached every combination an infinite number of times." In fact, if
>the combinations we're talking about are infinitely long, then I _can't_
>shuffle until I hit them all.

Not physically, no. However, if the state is _possible_, then given an
infinite amount of time, the state _will_ occur.

Now that I've bored everyone to tears with the math, let me add my $.02.

I hate the lazy evaluation move. Matches are timed, and played with
physical objects. This whole discussion is an example of why it should be
scrapped. Infinity is, at best, poorly understood, and at worst, horribly
misapplied. If you have a combo to be played, it should _have_ to be
played out until you reach the state you'd like. If it takes too long, it
should not be allowed at a tournament.

--
John Dilick
dili...@cris.com

Lasse Reichstein Nielsen

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Warning: This will get technical at times, mostly just to say that my
claims are backed by mathematics, and not just my own inventions.

In <34b6f...@isc-newsserver.isc.rit.edu> jgm...@osfmail.isc.rit.edu (Jeffrey G. Montgomery) writes:

>In article <01bd1d28$e43ad480$1fd1...@michmarc2.dns.microsoft.com>,
>Mike Marcelais <mich...@microsoft.com> wrote:
>>> Simply because something has a nonzero probability of happening over a time
>>> period doesn't mean it will happen.
>>
>>Given an unlimited amount of time, yes it will happen.

>As long as the probability is not zero, it will happen at some time, yes.
>(I might win Saturday's lottery... I might not. If I play forever, and the
>game goes on forever, eventually I'll win.)

Nitpick:
Not true... If you only bought one ticket per week, and each week they
printed twice as many tickets as the previous week, then your overall
chance of winning would be at most 2/N, where N was the number of
tickets printed the week you started. An infinite sum of positive
nonzero numbers might be finite (we knew that; it's a probability, so
it's at most 1) and less than 1 (but cannot be zero, of course).
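This nitpick can be checked numerically. In the sketch below (N = 10 is an illustrative assumption), week k has win probability 1/(N*2^(k-1)); the chance of ever winning is 1 minus the infinite product of the per-week miss probabilities, and it stays strictly below 1:

```python
N = 10  # tickets printed in the starting week (illustrative)

# Probability of never winning across the first 200 weeks:
# each week k has win probability 1/(N * 2^(k-1)), doubling tickets weekly.
p_never = 1.0
for k in range(1, 200):          # 200 terms is plenty for convergence here
    p_never *= 1 - 1 / (N * 2 ** (k - 1))

p_ever = 1 - p_never
print(p_ever)  # strictly positive, yet below the union bound 2/N = 0.2
```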

>He never said the event WOULD happen. The more times you run an experiment,
>the greater the chances of having an event you wish to occur to occur.
>(We're talking about probability experiments; not mixing chemicals. :) )

>>But you aren't doing 10 trials. You are doing an arbitrarily large number of
>>trails. The chance that (after X trials) of the event happening is 1 - 0.9^x.
>> As X gets larger and larger, that probability gets closer and closer to 1.
>>If you could actually do it an infinite number of times, the probability it
>>has happened _is_ 1.

>No, it is NOT 1 - it is just so CLOSE to 1 that we say it is.
>What's 9 * 0? We really don't know, and I can prove it:

>9 * 0 = 0
>0 / 9 = ??????

it's 0 (because 0 = 9*0... that's what defines division... a/b=c iff
a=b*c)! 9/0 is not defined (at least not in
the traditional number fields, N, Z, Q, R or C... you might make your own
algebraic structure where it is defined... though I doubt it will make
as much sense... if the additive neutral has a multiplicative inverse,
at least the zero law breaks (a*0=0 for all a) and what you have is not
an algebraic field)

>What is 0? 1-0=1

True. Because 0 is the additive neutral element (a+0=a for all a) and as
such its additive inverse (-0) is itself, so 1-0 =def= 1+(-0)=1+0=1.
Subtraction is defined as addition of the additive inverse
(a-b =def= a+(-b)).

>The closer we get to 1, the less the difference is between 1 and the
>current value. If we have .99999999999999999999999999999999999999..., we
>call it 1 for ease of use, but it isn't 1 and never will be.
>1-(.9^(infinity)) = 1-(almost 0)
>We can never get 1 because .9^(infinity) never reaches 0 -- it just gets
>REALLY REALLY close.
Is 0.999... (short for 0.9999....) a real number at all? I hope you
accept that it is, because it's the sum of 9/10^n for n=1 to infinity,
and the real numbers are closed under the limit of such sums (its
finite initial sums form a Cauchy sequence).
Which real number is it then? Is it 1? You say no, so it must be
different from 1. Thus

|1 - 0.999...| > 0

(or it wouldn't be different from 1). This is of course wrong, as the
difference is smaller than any nonzero positive real number. Therefore
1 = 0.999... if 0.999... is a real number at all. If not, what good is it?
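A small sketch with exact rationals makes the point: the difference between 1 and 0.9...9 (n nines) is exactly 1/10^n, which drops below any fixed positive real as n grows:

```python
from fractions import Fraction

def nines(n):
    # The partial sum 0.99...9 with n nines: sum of 9/10^k for k = 1..n.
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 5, 20):
    gap = 1 - nines(n)
    assert gap == Fraction(1, 10 ** n)  # the gap is exactly 10^-n
# The gap shrinks below every positive real, so the limit is exactly 1.
```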

(This have been argued to death on some math groups... i'll try to see
if I can find the faq.)

>Therefore, the chances OF it happening are so close to 1 that we call it 1...
>but in reality, it's not.

It is Exactly 1. Common sense breaks down around infinities... so if you
want to actually prove anything, you need some rules and definitions to
work from... and then ONLY use those. Common sense arguments are
worthless, except as a reason for trying to avoid the infinity to begin
with.

<snip>

>>There is a certain probability that (after each shuffle) that the deck will be
>>in an unsuitable state. However, given an infinite number of these shuffles,
>>you can show (very similarly to the previous paragraph) that the probability
>>that you get a suitable deck is 100% given an infinite number of shuffles.
>>Since you also have a way of determining when you have a suitable deck, you
>>can stop whenever you get such a deck.

Again, you will NEVER get 100%. You might get 99.99999999...%; but never
100%, based on the fact that you are not multiplying by 1.

Same thing. Infinities are usually defined by limits, i.e. 0.99999....
is the limit of 0.(9^n) [shorthand for 0. followed by n nines] for n
going towards infinity, or, as the supremum of the set
{0.(9^n) | n is a natural number}
Because it's a growing sequence, the limit is the supremum (notice, NOT
the maximum! That set doesn't have a maximum. The supremum is the
smallest number at least as large as everything in the set - the least
upper bound - and it is not necessarily in the set itself). That number is 1.

>How do you determine if you have a suitable deck? Rules say no peeking. :)

Mill until it's obvious, if I understand the original magic-problem
correctly. If it wasn't suitable, no harm was done... the deck was
reshuffled and you can try again.

>Basic probability:
>Chances of an event happening:
> (number of desired outcomes)/(number of outcomes possible)
>Chances of an event happening at least once in X tries:
> 1-([1-(number of desired outcomes)/(number of outcomes possible)]^X)

--snip--

>Probability is fun when you know how to do it. :)

Yes, and intuitive in the finite case. Infinities are all math, and
intuition often just gets in the way.
The probability of getting at least one success in n goes with
probability p for each go (as you said):
1-(1-p)^n
The probability of getting at least one success in an infinite
number of goes:
1-(1-p)^(inf) =def= lim[n->inf] (1-(1-p)^n) = 1 (Exactly!)

You can argue the definition of infinity as a limit, but I'd be
delighted to see another coherent definition.

Notice that I haven't commented on whether lazy evaluation applies or
not, as I have given up trying to use mathematical logic on Tom's
rulings. Sometimes it works, but too often it doesn't! (Just make ME
head judge and rules provider, and you would soon see limits and
fixpoints everywhere :).

/L "not a mathematician, just pedantic"


--
Lasse Reichstein Nielsen - l...@daimi.aau.dk
This message may be reproduced freely for non commercial purposes.
'It is error alone that needs the support of government. Truth can stand by itself'
'Faith without Judgement merely degrades the Spirit Divine..'

Lasse Reichstein Nielsen

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In <69f28k$4id$1...@halcyon.com> phae...@halcyon.com (Phaedrus) writes:
<snip>

> Now, let me take a stab at proving that it is in fact possible to
>shuffle forever and _never_ reach the desired order.

> First of all, it's worth looking at the probabilities again. We've


>already seen that the odds of getting our desired combo in one try are 1/C,

>and the odds of not getting it in one try are 1-(1/C). What are the odds of


>not reaching that combo in N tries? Well, since each try is independent
>(we're assuming that we're shuffling thoroughly each time), the odds are

>(1-(1/C))^N, And what are the odds of reaching that combo at least once in


>N tries? Well, it's 1 minus the odds of _not_ reaching that combo in N
>tries--so it's 1-((1-(1/C))^N).

> So, what happens if we make N a really really high number? Well,


>the odds of missing every time--(1-(1/C))^N-- get smaller and smaller, closer
>and closer to 0. So the odds of hitting at least once get closer and closer
>to 1. But, no matter how high we make N, the odds of hitting at least once

>never quite reach 1. Even if we make N infinite, the odds of missing are still


>an infinitesimal fraction away from 0, and so the odds of hitting at least

>once are still just that one-in-infinity fraction away from 1.

Wrong. There is no real number an infinitesimal fraction away from 0 or
1 (as an infinitesimal isn't a real number). Probabilities are real
numbers, so the argument is mathematically unsound.

> But let's get this away from math. Here's a couple of relatively simple
>ways to prove that it's possible to go forever without hitting the combo
>we're after.

> First, there's proof by induction. Is it possible to shuffle the cards
>once without hitting the combo we're after? Well, yes, certainly.

> Now, given the fact that it's possible to shuffle the cards N times
>without hitting our combo, is it possible to shuffle the cards N+1 times
>without hitting our combo? Again, clearly yes--I just shuffle the cards
>N times the wrong way, then shuffle them once more the wrong way.

> So, by induction, it's possible to shuffle the cards the wrong way
>any number of times,

Yes

>including infinity;

No! Absolutely No! That does NOT follow from the principle of induction.
By induction you can prove that a property holds for all natural
numbers... showing that this implies it holds for infinity (however
your proposition manages to incorporate this concept) is a separate step.

> no matter how many times I've
>already done it wrong, I can always do it wrong once more.

Right... and still have done it a finite number of times, proving what
about infinity?

> Or, to look at it another way: You're saying that it's _impossible_--
>in other words, probability zero--to go forever without hitting the desired
>combo. But, for that to be true, that would mean that at some point it would
>have to be _certain_--probability 1-- that I would hit the combo.

No. At least not at any FINITE point... because then we would get
probability zero without an infinite repetition, which we know we don't.
Tell me, what is an infinite point? Given a formal definition, perhaps I
can accept this argument... as it stands, it's meaningless.

> But we've
>just agreed that there's no such point--on any given try, the chance of
>hitting the combo is only 1/C. So there's never a particular try during


>which I'm certain to hit the combo, unless I'm using a one-card deck.

True... after any round, you are still not sure to be lucky...
The trouble here is that "infinity" isn't after any single round. It has
no direct predecessor, so this argument is meaningless again.

> The interesting thing I've found about this sort of problem is: Ask
>someone "Suppose I flip this coin an infinite number of times. Is it
>possible that the flips will come up a head, then a tail, then a head, and
>a tail, and so on forever?" Most people will scratch their head and think
>about this for a while, and say "Well, I suppose it's possible--it's really
>really _unlikely_, but it's possible." But ask someone "Suppose I flip this
>coin an infinite number of times. Is it possible that the flips will just
>keep coming up heads, forever?" And the answer you get will very probably
>be "No--a tail would have to come up eventually." For some reason, the idea
>of an infinite series in which every possible outcome happens in a particular
>order doesn't bother us; but the idea of an infinite series in which a
>particular outcome _always_ happens, or _never_ happens, strikes us as
>fundamentally wrong. This is true even though the odds in each case are
>exactly the same--a "head, tail, head, tail..." sequence is exactly as
>likely as a "head, head, head, head..." sequence.

This is true. Just goes to show that a little intuition is a dangerous
thing. With infinities, it's more likely to kick you in the groin than
help you.

>>Criticism is welcome, though perhaps this entire discussion should be
>>moved to alt.math.problems.insane. ;)

The problem is that people are trying to treat infinity as if it were a
number (I know it is in magic... but these are mathematical arguments).
Infinity isn't a number. Any definition of infinity in mathematics
usually uses limits:

1/infinity is defined as the limit of (1/n) for n going towards
infinity, or (in this case) infimum of the set {1/n | n is a natural number}
... which is 0, as we would want it to be.

1-(1-p)^inf = lim[n->inf] (1-(1-p)^n)

            = supremum { 1-(1-p)^n | n natural } = 1

What is the probability of any given infinite sequence of coin-tosses?
It's 0, because it's 1/infinity... (actually not even a countable
infinity) but one of them HAS to happen, right? That is, if you could
ever make an infinite sequence of coin tosses... Can you? Or, you can
make a measure on the set of natural numbers, and use heavy duty
mathematics to define an integral over them, so that the pointwise
probability is 0, but on all subsets of the natural number with measure
more than zero, you will get the probability of hitting something in
that set (perhaps nonzero).

Likewise, there is a more than countable infinite set of possible
infinite sequences of reshuffles (a subset of it is in bijective
correspondence with the interval [0,1] of the real numbers), when you
only remember if the result was good or bad for you. The probability of
any one of them is zero, but this is *not discrete probability* any more,
so you do not sum, you integrate, and the probability of a subset
depends on the size of the subset (or measure of it, depending how you
define "size").

Moral: As soon as you go to infinities, discrete probabilities might
not work (and probably won't). Continuous probability has pointwise 0
probability for all outcomes; you have to check some subset of nonzero
measure (in real numbers, at least more than countably many points).

/L "pedantic, almost mathematician"

Ingo Warnke

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Dustin Wood (dw...@best.com) wrote:

: > : Something that has probability 1 will always happen, every time, by
: > : definition.
: >
: > Then you use some other definition than me, not the one accepted by
: > mathematicians. That something that always happens has probability one


: > is correct. Just the reverse is not true. Think of producing somehow
: > an infinite series of zero's and one's by flipping a coin. After you did this,
: > find out what the probability is that the just produced series comes up.
: > It is simply 0. Nevertheless, it *did* happen.

: The problem is that you can't produce that infinite stream.

Of course that's practically right. But you can't shuffle infinitely often, either.

: I'm going to randomly pick an integer from the set of ALL integers.......OK I've got
: one (don't ask how I got it, that question can't be answered)

Actually, it can't be done if you want to give each number the same probability. It is
one of the most irritating facts that you sometimes *can't* assign probabilities to
some things without abandoning some of the rules that usually apply. But let's use your
own argument to show that you are wrong:

: Probability I got the integer 67343: zero
: Probability I got the integer 2: zero
: Probability I got an integer: one

What is the probability that you get an integer > 100000?

If it is less than one, please explain, as you said (I generalize here):

Probability you got the integer 0: zero
Probability you got the integer 1: zero
Probability you got the integer 2: zero
...
Probability you got the integer 99999: zero
Probability you got the integer 100000: zero.

If it *is* one, do you really think that it is *impossible* to not get a number
bigger than 100000?

As I said already, when infinitely many possibilities are at hand,
probability 1 does *not* mean sure. I agree that if you infinitely shuffle
you get with probability one to a point sometimes in between where the cards
are arranged as you want. It is 'almost sure' (a term used by mathematicians as
a synonym for 'has probability one'), but it is not 'sure'.

: Now I pick random integers an infinite number of times:
: Probability that all of them are integers: one
: Probability that all of them are the integer 2: zero
: Probability that none of them are the integer 2: zero
: Probability that the Xth one is the integer 2: zero
: Probability that ANY of them are the integer 2: one

All correct (well, that doesn't mean anything really, as I explained above: your method
of selecting integers does not exist). But the crucial point is that
probability one does *not* mean sure. If you disagree with this you are
simply wrong. If you don't believe me, ask a math prof near you.

Ingo Warnke

Ingo Warnke

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Dustin Wood (dw...@best.com) wrote:


: Ingo Warnke wrote:

: > : Given that your opponent has only 2 GB's and the number of cards in the graveyard


: > : and draw pile totals an even number, then Yes, it is guaranteed to happen if the
: > : process is carried out to infinity.
: >

: > No it is *not* guaranteed. The probability is 1, but that *doesn't* mean it will happen
: > surely. If you flip a coin infinitely, the chance that you at least once get 'head' is
: > 1, still it is not impossible to come up with 'tail' all the time.

: When carried out to infinity, you will get an infinite amount of heads and an infinite
: amount of tails.

This is *not* guaranteed. Sure, the probability that you get all heads is 0, but it still
can happen. Are you really saying that it is impossible? In fact, each given infinite
series has the same probability, namely 0.

: You will have an infinite number of substrings of HTTTTHTTTH. You will
: have an infinite number of substrings of infinite length comprised of every possible
: combination.

All this is only guaranteed with probability one. That doesn't mean it surely happens.

: Your argument (and several others throughout this thread) supposes that if the first thing
: you come across is an infinite string of tails, then you will never get to anything else.

I don't say any such thing. All I said was that it is possible in my experiment to get all
'tail'. I didn't 'encounter' anything first.

: This is simply not true. Infinity has very strange properties, not the least of which is
: to encompass itself.

And the other is to confuse many people. :-)

Ingo Warnke

Lasse Reichstein Nielsen

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In <69hlrh$dvr$1...@halcyon.com> phae...@halcyon.com (Phaedrus) writes:

>In article <69hi6h$o32$1...@news.iquest.net>,
>Dan Johnson <pano...@iquest.net> wrote:

>>In article <69h87r$7b5$1...@halcyon.com>, Phaedrus <phae...@halcyon.com> wrote:

>>> 1/infinity--in fact, any positive finite number over infinity--is so
>>>_close_ to zero, that for the vast majority of problems it can be taken to be
>>>zero.
>>

>>If 1/x is not 0, x is NOT infinity.

> I've already said "1 over infinity is not zero," and I've already given
>several explanations and proofs for why that has to be. If you're going to
>say "1 over infinity is zero," and not give any proof for why that has to be,
>then there's not much I can say other than "Is not."

Ok... As infinity isn't a real number, I'd like to know what division is
in this case. Probabilities are real numbers, so if 1/infinity is a
probability, it is a real number, and the most likely definition of it
is

lim (1/n) = 0.
n->inf

I assume you won't argue that it has a nonzero distance to 0,
and in the real numbers, that means it IS 0. If we are not using real
numbers, please tell me what we ARE using, 'cause I cannot begin to
guess.
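For what it's worth, the limit above can be spot-checked numerically; a minimal sketch in Python (the sample values of n are arbitrary):

```python
# 1/n gets arbitrarily close to 0 as n grows; it never has to "reach" it.
for n in [10, 1_000, 1_000_000]:
    print(n, 1 / n)

# For any tolerance eps > 0, picking any n > 1/eps puts 1/n below eps.
eps = 1e-9
n = 10**10
assert 1 / n < eps
```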

As for whether 0*infinity is anything, I need to know what * means
again.
It's not multiplication as in the real numbers, as infinity isn't real.
If it is defined as
lim (0*n) = 0
n->inf
then it's probably 0, but it depends on definitions.

When you add infinity to your number system, a lot of things break down,
unless you define them all yourself. Whether I'd call them numbers
afterwards is a question I'll wait to answer till I see it. In this case,
it's not applicable, as we are talking traditional probabilities (real
numbers), and the problem is that people are using the wrong theory
(discrete probability), not whether 1/inf is defined or not.

In continuous probability, a possible event can have probability 0. It
doesn't matter, for the probability of any set of events with "measure"
0 is always 0... and you could say that it's not a real probability
unless we are talking of sets of events with nonzero measure. (Sorry if
I'm too technical, or too simplistic... feel free to ask, and I'll go get
my book on probability theory... it's only been 5 years since I saw it last,
which explains the lack of details :)

/L

Ingo Warnke

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Phaedrus (phae...@halcyon.com) wrote:
: >Well, even that's a problem. If it's truly random, each number should
: >have the same probability of showing up; but how many numbers are
: >there? If you randomly choose one of n objects, each has the
: >probability of 1/n of being chosen, but if you randomly choose one
: >element of an infinite set, each element has the probability of
: >1/infinite = 0 of showing up! (You might even argue that it is thus
: >impossible to choose a random integer...)

: You've just disproven your own hypothesis--namely, that 1/infinity is 0.
: It's about as close to zero as you can get; but it ain't zero.

If you use some non-standard analysis, please say so. Otherwise Ingo was correct,
you can't 'pick an integer at random' and expect each integer to get the same
probability > 0.

: Another induction proof: Is 1/infinity zero? You're saying it is.

It's *undefined*. The simple reason is that if you try to define it generally
and for all cases, you get contradictions. Infinity isn't even a number. So
all your 'proofs' show just one thing: infinity and 1/infinity are not
mathematically reasonable terms (in standard analysis), *because they
lead to contradictions*.

Ingo Warnke

Dan Johnson

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In article <69hlrh$dvr$1...@halcyon.com>, Phaedrus <phae...@halcyon.com> wrote:
> I've already said "1 over infinity is not zero," and I've already given
>several explanations and proofs for why that has to be. If you're going to
>say "1 over infinity is zero," and not give any proof for why that has to be,
>then there's not much I can say other than "Is not."

If you divide a number with a limit of 1 by a number with a limit of
infinity, the result will have a limit of 0. I challenge you to provide
just one counter example. Remember, if you are using infinity in a
calculation, you ARE dealing with limits.

> Are you, or are you not, saying that I can add 0 to itself an infinite
>number of times, and get a number other than zero? (Please note that I
>wasn't talking about multiplication; I was talking about the sum of an
>infinite series.)

If you sum an infinite series whose terms all have 0 as a limit (remember,
this is the only way to meaningfully discuss 1/infinity), the sum can be
anything.
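A minimal sketch of that claim in Python (the choice of 1/n terms is just one illustrative case): each row below adds n terms that individually shrink toward 0, yet every row sums to 1.

```python
# n terms, each equal to 1/n: the terms tend to 0, but every sum is 1.
# So "a sum of many terms that each tend to 0" is not forced to be 0.
for n in [1, 10, 100, 10_000]:
    total = sum([1 / n] * n)
    print(n, 1 / n, total)
    assert abs(total - 1.0) < 1e-9
```

By picking the terms differently (say, 2/n each), the same shrinking-terms construction can be made to sum to any value at all, which is Dan's point.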

Ingo Warnke

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Phaedrus (phae...@halcyon.com) wrote:
: >: >It will happen with probability 1. Unfortunately, if there are infinitely
: >: >many possibilities at work, 'propability 1' doesn't mean 'sure', just as
: >: >'propability 0' doesn't mean 'never happens'.
: >: Errrr, with all due respect, this is simply wrong.
: >I'm pretty sure I'm correct,

: I just called up a friend of mine who's a TA in a Probability class at
: the University of Washington.

I'm not sure what a TA is. To get my 'certification' right: I have a diploma
(after 5 years of study at a university) in mathematics. I'm not really
sure how that ranks against American-style degrees.

: If I'm wrong about "probability 1 is certain;
: and certain is probability 1", then not only is he wrong too, but the two
: reference books he checked are tragically misprinted... :-)

Are you sure they don't say 'almost certain' and that they really deal with
infinite cases?

: > That something that always happens has probability one
: >is correct. Just the reverse is not true. Think of producing somehow
: >an infinite series of zero's and one's by flipping a coin. After you did this,
: >find out what the probability is that the just produced series comes up.
: >It is simply 0. Nevertheless, it *did* happen.

: You're making another faulty assumption here; namely, that the probability
: of that series of flips is zero. It's not; it's about as close to zero as it's
: possible to get, but it's not zero. If it were, then it would be _impossible_
: to produce an infinite series of zero's and one's.

Now you use your argument in a circle. Probability 1 does not mean sure, but
probability 0 does also not mean impossible. That's what we are discussing
here. And could you tell me what that magical number is that can get as close
to zero as possible but isn't quite zero? What about 1/2 of that number?
If you use some non-standard analysis which uses 'infinitely small' numbers,
please say so, so I can check that out. But with plain vanilla real numbers
such a number doesn't exist; you don't have the 'next greater' real number to
a given real number.

: If I flip a coin N times, then the probability of any one particular set
: of flips coming up--all heads, all tails, heads and tails alternating,
: whatever--is 1/(2^N). As N gets larger and larger, this fraction gets smaller
: and smaller; as N approaches infinity, the fraction's value approaches zero.

Right.

: But "A function of N approaches some value as N approaches infinity" does _not_


: mean "That function of N _reaches_ that value when N _reaches_ infinity."

Right. But it isn't forbidden, either.

: 1/N approaches zero as N approaches infinity; that does not mean that
: 1/infinity is zero.

What *is* infinity, please? How do you calculate with it? If it
were easy, we wouldn't have had this whole infinity problem in Magic
for such a long time.

: If I produce an infinite series of coin flips, then the sum of the
: probabilities of all the possible series has to be 1, by definition. So it
: _cannot be true_ that the probability of each individual series coming up is
: 0.

Good point. But the conclusion is wrong, and I didn't realize what it
really means until now:

You *can't* assign each possible outcome of an infinite coin tossing a
probability. That's nothing spectacular for a mathematician, but probably
rather shocking for somebody else. Similarly, you can't assign a
'volume' to each bounded set of points in three-dimensional space that
has the 'desired' properties, like being additive.

One of the biggest hurdles to start understanding measure theory (which
is the basis for probability theory) is that you can't measure everything.
This applies here.

Ingo Warnke

Ingo Warnke

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Dustin Wood (dw...@best.com) wrote:


: Paul Miller wrote:

: > Well, what you've shown is that the probability of an infinite reshuffle loop is
: > zero. I don't doubt it. Zero probability does not equal impossible, however.
: > It is merely "very unlikely". Likewise probability 1 is not "certain," just
: > "very nearly certain."

: Where do you get this junk?

Probably from a good mathematics course. Because it is not junk, but correct!

: If someone verbally gives me an integer between 1 and 2
: inclusive. And I don't know ahead of time which number they will give me. Then the
: odds of them telling me 1 is 1/2, the odds of them telling me 2 is 1/2, odds of them
: not telling me either 1 or 2 is zero and the odds that they will tell me either 1 or
: 2 is 1/1 or 1. You can't say well they might say 3 instead. That doesn't work
: because the problem states that they give me either 1 or 2, anything else is outside
: the scope of the stated problem.

: Now, according to you it is possible not to get either 1 or 2 even though the
: probability is 1. I would like to know what would make you believe that nonsense.

He doesn't say that. In your example probability one is the same as 'surely happens'.
That's always the case in finite settings. But when infinity comes into play, things
get more complicated. One of those things is that probability zero does not mean
'impossible' and probability one does not mean 'sure'.

Ingo Warnke

Alex Werner

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In article <69h87r$7b5$1...@halcyon.com>, Phaedrus <phae...@halcyon.com> wrote:
> 1/infinity--in fact, any positive finite number over infinity--is so
>_close_ to zero, that for the vast majority of problems it can be taken to be
>zero. But that does not mean that it _is_ zero; and the summing of an infinite
>series--which is what this problem boils down to--is one of the areas where you
>_cannot_ make this assumption. If 1/infinity were zero, then the sum of an
>infinite series of "1/infinities" would be zero. That's not true--and, when
>we look at probability, we see that it _can't_ be true.

> If I pick a random integer, then the probability that I've picked a random
>integer is 1. The probability that I pick any particular integer is
>1/infinity. If 1/infinity is zero, then the sum of the probabilities of
>picking all the possible integers would be zero--again, no matter how many
>zeroes you add together, you're going to get zero. But that would mean that
>it's impossible to pick a random integer at all--that the probability of
>doing that is zero.

as a matter of fact, it _is_ impossible to pick a random integer. In a
sense. It's impossible to have a flat probability distribution over all
positive integers. It is logically impossible to set up a theoretical or
practical method in which you randomly select one of the positive
integers, each with equal probability.

However, that is more or less a side point. The important point that I
want to stress is that, barring some bizarre and tangential branches of
mathematics, infinity is NOT A NUMBER! therefore, there is no such thing
as 1*infinity, infinity/0, 0/infinity, 3/infinity, etc. All of those
symbols (multiplication, division, etc) only have meaning if they're
mathematically defined, and they are defined only to work on certain very
precise sets (the whole numbers, the integers, the real numbers, etc.).
Infinity is not a member of any of those sets, and thus there is no such
thing as 1/infinity.

So, when someone says _anything_ about infinity, they always are talking
in terms of limits. So if someone says "1/infinity=0", either they don't
know what they're talking about, or they mean "the limit as x goes to
infinity of 1/x is 0", which is certainly true.


Furthermore, it's meaningless to talk about doing something an infinite
number of times. That concept just doesn't make sense. Of course, you
could do something an unbounded number of times, meaning that any time
anyone thought of any integer, you could say "OK, I'll do it more than
that many times". But that's not the same as infinite.


As for gaea's blessing and "infinite millstoning", if you pick some
arbitrarily large number, like, say, 10^(10^(10^(10^(10^(10^10))))),
then the probability of getting the shuffle you want somewhere in that
millstoning is far higher than, say, the probability of every proton and
electron in the card you're holding spontaneously decaying at the same
time. So, if I were a judge, I'd certainly accept the imprecise statement
"I'll do it an infinite number of times, and eventually I will arrive at
the outcome I want"
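The arbitrarily-large-but-finite point can be sketched numerically; here p is a made-up per-shuffle chance of hitting the arrangement you want:

```python
# Chance of missing the desired shuffle in every one of N independent tries:
# (1 - p)**N, which shrinks toward 0 for any fixed p > 0.
p = 1e-6  # hypothetical per-shuffle success probability
for N in [10**6, 10**7, 10**8]:
    print(N, (1 - p) ** N)
```

Even at these modest N (nowhere near 10^(10^...)), the miss probability is already astronomically small, which is exactly the judge's-ruling argument above.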


Phaedrus

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In article <34BC17A1...@best.com>, Dustin Wood <dw...@best.com> wrote:
>Why does everyone continue to insist that .999999.... does not equal 1?

As far as I can tell, no one has.

> That is
>basically what you said. Common sense might say that they are not equal, but
>common sense is wrong here.

Common sense is very dangerous to apply to infinity. For that matter,
_uncommon_ sense is very dangerous to apply to infinity.

>A cumulative probability of 1 means, by definition, that something within
>the set of choices that were added to come up with the cumulative total of 1
>is guaranteed to happen. Likewise, a cumulative probability of zero means,
>by definition, that nothing within that set will happen.

Exactly so.

>All of this is for a single attempt. It doesn't matter if our one attempt is
>infinitely long or of finite length. The only time, that I am aware of,
>that this fails to hold is when you mix multiple different infinities.

Well, to be exact, all of this is for "whatever number of attempts the
probability figures in question talk about."
The definition of "attempt" is nebulous in the first place; it has to
be defined for each problem. In this case, when we're talking about "Will
we eventually hit the combo?", we're talking about an infinite number of
attempts--or, if you prefer, a single attempt that consists of an infinite
number of shuffles/coin flips/whatever.

>If you have an infinite series, then somewhere within that series, all
>possibilities will occur. By definition infinity covers EVERYTHING, including
>itself and all possible subsets. This includes all subsets of both finite and
>infinite length.

First of all, again, this is not correct. Suppose three of the infinite
subsets I have in mind are "an infinite string of 1's", "an infinite string
of 2's", and "an infinite string of 3's". A single infinite string of numbers
_cannot_ contain all three of these infinite strings. The series "1, 1, 1...
1, 2, 2, 2...2, 3, 3, 3..." cannot exist. That would mean that the series of
2's has both a definite starting point and a definite stopping point; and a
series with a definite starting point and stopping point _cannot_ be infinite.
An infinite string can contain an infinite number of other infinite
strings, but only because they overlap--in fact, only because they're trivial
variations on each other. The series "1, 2, 3, 4, 5...", contains the series
"2, 3, 4, 5...", the series "3, 4, 5, 6...", etc. But, at the other extreme,
the series "2, 2, 2, 2..." doesn't include any other infinite series at all,
other than itself.

>From any given point the odds of any infinite series occuring _starting at
>that point_ is 1. Obviously some series must occur. But the odds of any
>particular infinite series starting from that point is zero.

But you've just said that the odds of _some_ infinite series happening
is 1. So you're saying that it's possible--certain, even--that I can produce
such a series. But you're now saying that, once I've produced that series,
the odds of producing that series _I just produced_ are zero? By definition,
that _cannot possibly be true_--something that just happened _cannot_ have had
a probability of zero.

> This is the point where this
>whole thread seems to be stuck. Everyone keeps saying that 0+0+0+0... can not
>equal 1.

Errrr, if _everyone_ is saying that, then it may just be because we're
right. Join us. Don't be scared. :-)

> Under normal circumstances, any time you add zero to something you
>get that thing back.

That is, in fact, part of the definition of zero.

> But we are going from the finite to the infinite.
>I.E. anytime we stop the adding to see if it is still zero, we have to do
>so at some finite point. And the sum will still be zero. It is not possible
>to say, ok we will just check it after we have added infinite zeroes because
>the addition is only valid if we can check every step. There is no way to
>go from something finite to something infinite without simply being there.
>there is no transition point. Everything is either finite or infinite and
>there is no point where the two touch.

Dustin, "We're going from the finite to the infinite--it doesn't work
the same way" doesn't address every problem with infinity. There are certain
constants that do work, even with infinity. And "0+0=0, even if you do it
an infinite number of times" is one of them.
Let's take a different series: 1/2+1/4+1/8+... We say that the sum
of this infinite series is 1. We say that, even though we can never actually
add it up and reach 1, because we can come _as close to_ 1 as we want to; if
you tell me how close to 1 you want to be, without actually reaching it, I
can tell you the number of terms of this infinite series that you need to add
up to get that close. If you say "I want to be within a million-billion-
trillionth of 1", I can say "Okay, add the first 47 bajillion terms of the
series, and you'll be there."
0+0+0+0... does not work in the same way. The sum of this infinite
series is not 1. It's not 1/2. It's not 1/100000. It's not anything other
than zero. In the case of the first series, we can say that the sum of the
series is 1, because we can see that we keep taking steps towards that "finish
line"; even though we can never reach it by actually adding up the terms, we
can get at close to it as we want. In the case of 0+0+0+0..., _we never even
leave the starting line._ No matter how many times we add it up, we make
no progress--not even infinitesimally small progress--towards any number other
than zero. There is no magic "But if I do it an infinite number of times,
I'll get somewhere!"; an infinite number of nothings is still nothing.
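The contrast between the two series can be sketched in a few lines of Python (50 terms is an arbitrary cutoff):

```python
# Partial sums: the geometric series closes in on 1; the zero series never moves.
geo = 0.0
zeros = 0.0
for k in range(1, 51):
    geo += 1 / 2**k   # 1/2 + 1/4 + 1/8 + ...
    zeros += 0.0      # 0 + 0 + 0 + ...
print(geo)    # within 2**-50 of 1 after 50 terms
print(zeros)  # still exactly 0
assert abs(1 - geo) <= 2**-49
assert zeros == 0.0
```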

Ingo Warnke

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Jay Lorch (lo...@ananke.CS.Berkeley.EDU) wrote:
: In conclusion, we don't even need to have the ability Tom has given us
: (to repeat the milling an infinite number of times)! The desired
: outcome will occur with probability one in a finite amount of time
: (although this amount of time could be arbitrarily large).

All correct. But one point in this discussion is whether 'this
happens with probability one' is or is not the same as 'this
happens guaranteed'. You showed correctly that with probability
one the desired card state will be reached (even in finite time).
But whether that is the same as 'the process is guaranteed to end'
is IMHO the key point in this discussion.

Ingo Warnke

Ingo Warnke

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Phaedrus (phae...@halcyon.com) wrote:

: Agreed. If something's probability is 1, then it's certain. There's
: no such thing as "probability-1 things that are more likely than others"; by
: definition, all probability-1 things are equally likely--they all happen, every
: time.

What definition do you use? I use Kolmogorov's, where a set A and a sigma algebra
S are given and the probability measure p is then a function from S into the real
numbers that has some properties. One of those properties is that
p(A)=1, that means that the sure event has probability one. But nowhere
is it required that p(X)=1 only for X=A.

Ingo Warnke

Alex Werner

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In article <34bcf...@news.uni-rostock.de>,

Ingo Warnke <nfa...@hp710.math.uni-rostock.de> wrote:
>As I said already, when infinitely many possibilities are at hand,
>probability 1 does *not* mean sure. I agree that if you infinitely shuffle
>you get with probability one to a point sometimes in between where the cards
>are arranged as you want. It is 'almost sure' (a term used by mathematicians as
>a synonym for 'has probability one'), but it is not 'sure'.

probability 1 always means certain. However, probabilities tending to 1
and probabilities being 1 are not the same thing.

If you flip a coin an infinite number of times, what's the probability of
getting at least one head?
1? no
very close to 1? no

the answer is, that question has no meaning. One can not flip a coin an
infinite number of times.

If, on the other hand, you flip a coin X times, take the probability of
there being at least one head, and move X towards infinity, what happens
to the probability? It goes towards one. That means, the more coins you
flip, the more likely it gets that there will be a head. It means no more
or no less than that.
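For finite X this is easy to tabulate; a quick sketch (the values of X are arbitrary):

```python
# P(at least one head in X fair flips) = 1 - (1/2)**X.
# It climbs toward 1 as X grows, but for every finite X it falls short of 1.
for X in [1, 10, 50]:
    print(X, 1 - 0.5**X)

assert 1 - 0.5**10 == 1023 / 1024  # exact in binary floating point
assert 0.5**50 > 0                 # all-tails is still a positive probability
```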

THERE IS NO SUCH THING AS FLIPPING A COIN AN INFINITE NUMBER OF TIMES.


(So if someone is using "probability 1" as a shorthand for "probability
tending to 1", then I suppose you might make some weird syntactic
argument that something with probability 1 won't always happen.)


Jeffrey G. Montgomery

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In article <34BC17A1...@best.com>, Dustin Wood <dw...@best.com> wrote:
>Why does everyone continue to insist that .999999.... does not equal 1? That is
>basically what you said. Common sense might say that they are not equal, but
>common sense is wrong here.
>

No it isn't -- the closer X (the number approaching 0) gets to 0, the
closer 1-X is to 1. However, if MULTIPLYING, unless EVERY FACTOR
INVOLVED is 1, you can NEVER get 1.
1*1*...1...1*1 = 1
1*.99999*1....1....*1 = .99999
Whoops!

>A cumulative probability of 1 means, by definition, that something within the set
>of choices that were added to come up with the cumulative total of 1 is guaranteed
>to happen. Likewise, a cumulative probability of zero means, by definition, that
>nothing within that set will happen.

And if you are multiplying .99999 by .99999 by .99999 ... and then subtract
whatever the product is from 1, you can't get 1!
.99999 * .99999 = .99998 * .99999 = .99997 * .99999 = .99996 ...
.99999^100 = .999000494-
.99999^200 = .998001988-
.99999^1000 = .990049784
.99999^1,000,000 = .000045397
.99999^1,000,000,000 = .0000000000- NOT ZERO
.99999^infinity = .0000000000000000000000000000000000000000000000000000000-
NOT ZERO

Therefore, EVEN if you run it an infinite amount of times, you will
never get zero - and therefore, 1-X will never be 1.
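The decay of .99999^N is easy to check directly; a quick sketch (past roughly N = 10^8 the true value drops below what an ordinary float can represent, so the sketch stops at a million):

```python
# 0.99999**N shrinks steadily but stays strictly positive for every finite N.
for N in [100, 1000, 1_000_000]:
    print(N, 0.99999**N)

assert abs(0.99999**100 - 0.999000495) < 1e-8
assert 0.99999**1_000_000 > 0
```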

>All of this is for a single attempt. It doesn't matter if our one attempt is
>infinitely long or of finite length. The only time, that I am aware of, that this
>fails to hold is when you mix multiple different infinities.

Infinity is infinity - *shrug*

infinity * infinity = infinity
infinity * 1 = infinity
infinity * 0 = 0

>If you have an infinite series, then somewhere within that series, all
>possibilities will occur. By definition infinity covers EVERYTHING, including
>itself and all possible subsets. This includes all subsets of both finite and
>infinite length.

Right: However, infinity + 1 = infinity
You can always run the calc again with one more... therefore, you never
reach 1.

>From any given point the odds of any infinite series occuring _starting at that
>point_ is 1. Obviously some series must occur. But the odds of any particular
>infinite series starting from that point is zero. This is the point where this
>whole thread seems to be stuck. Everyone keeps saying that 0+0+0+0... can not
>equal 1. Under normal circumstances, any time you add zero to something you get
>that thing back. But we are going from the finite to the infinite. I.E. anytime
>we stop the adding to see if it is still zero, we have to do so at some finite
>point. And the sum will still be zero. It is not possible to say, ok we will just
>check it after we have added infinite zeroes because the addition is only valid if
>we can check every step. There is no way to go from something finite to something
>infinite without simply being there. there is no transition point. Everything is
>either finite or infinite and there is no point where the two touch.

please limit your lines in the future to 70-75 characters.. :)

When you go from finite to infinite, you MUST take into consideration
that 1 is 1 and ALMOST 1 is NOT 1.

The Fibonacci series is {1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...}
I could do this for a year - there will ALWAYS be a next number. I will
never reach infinity even though, by definition, it's an infinite series.

1/2 = .5 /2 = .25 /2 = .125 ...
Again, I can do it forever - I will never reach zero! I'm constantly
multiplying by 1/2, but NEVER ZERO!

Any non-zero value multiplied by another non-zero value will come out
a non-zero value. This includes .0000000000000000000000000000...00001

It's math - you can't change the rules just to put your opponents' deck
in an order you want it to be.

>Dustin
>

Jeff

Stuart Smith

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Attempt to rephrase some of these arguments by replacing the word "infinity"
with the concept that is really being referred to:

In article 1...@halcyon.com, phae...@halcyon.com (Phaedrus) writes:
>In article <34BC1343...@best.com>, Dustin Wood <dw...@best.com> wrote:
>>Phaedrus wrote:
>>> [Someone whose attribution line was deleted wrote:]
>>> > That something that always happens has probability one
>>> >is correct. Just the reverse is not true. Think of producing somehow
>>> >an infinite series of zero's and one's by flipping a coin.

What do we really mean here? Obviously, we can't REALLY flip a coin an
"infinite" number of times. We can only flip a coin N times, where N is
a positive integer.

Perhaps he is intending to state that "there is a number between 0 and 1,
which I can write in binary as .001010011000... where, if you ask me what
the value of the nth digit is, I will flip a coin and tell you the nth
digit is a 1 if I get heads and a 0 if I get tails." However, this number
is rather poorly defined. I can't do any ordinary arithmetic with it
because I don't know what value the 10,000th digit is, for instance, until
I ask you to flip a coin and tell me the result.

Note that a number can have an infinite decimal expansion and still be a
well defined number. In fact, using decimal notation, the number 1/2 is
equal to .500000000000... Even though I can't actually write out the full
decimal notation for 1/2 I can give you a rule that states "the 1st digit
of the decimal expansion is 5 and all other digits are 0". This deterministically
sets the value of every digit and I can do some real math with the number.
The "infinite sequence" of coin flips is not a number and can't be made
into a number. It can't be used to make any kind of an arithmetic statement.
It can't be used to prove or disprove any points in this discussion.

>>> > After you did
>>> >this, find out what the probability is that the just produced series
>>> >comes up. It is simply 0. Nevertheless, it *did* happen.
>
>>> You're making another faulty assumption here; namely, that the probability
>>> of that series of flips is zero. It's not; it's about as close to zero as
>>> it's possible to get, but it's not zero. If it were, then it would be
>>> _impossible_ to produce an infinite series of zero's and one's.

Again, you can't really flip the coin an infinite number of times. If you
believe otherwise, go ahead and do it. Then come back and re-join the
discussion.

What you may really mean to say is that for any positive integer N, I can
flip a coin N times to produce a N-length sequence of zeros and ones.
I can also say that
"As N increases, the probability of someone else flipping a coin N times
and getting the same sequence as I did approaches zero." (where "approaches
zero" means that for any positive real number you select, I can select a
value of M for which any two sequences of N digits, where N > M, have a
probability of being the same less than the number you selected.)

>>No, only impossible to _randomly_ produce.
>
> Are you saying that it's impossible to randomly produce an infinite
>series of coin flips?

Yes.

This is patently obvious.

I can randomly produce a sequence of N coin flips where N is any positive
integer. However, infinity is NOT a positive integer. It is a concept
which is often rather sloppily used to describe situations.

> If you're saying "Yes; it's impossible to produce an infinite series of
>coin flips at all," then I can only say "That's an interesting viewpoint,"
>and leave you on your way. :-)

Bye :-)


OK, I couldn't leave yet.

>>> If I flip a coin N times, then the probability of any one particular set
>>> of flips coming up--all heads, all tails, heads and tails alternating,
>>> whatever--is 1/(2^N). As N gets larger, this fraction gets smaller
>>> and smaller; as N approaches infinity, the fraction's value approaches zero.

>>> But "A function of N approaches some value as N approaches infinity" does _not_
>>> mean "That function of N _reaches_ that value when N _reaches_ infinity."

What does the statement "That function of N reaches that value when N
reaches infinity" mean? This is NOT an arithmetic statement. In arithmetic,
this statement is undefined. You MUST re-phrase it as a statement that
does NOT include the word infinity to have a statement that can be
discussed.

>>> 1/N approaches zero as N approaches infinity; that does not mean that
>>> 1/infinity is zero.

"1/infinity is zero" does not mean ANYTHING until I explain what it means.
What it is usually DEFINED to mean is:

"1/N approaches zero as N approaches infinity"

Note that "approaches infinity" is also undefined. Whenever I increase N, it
is NO CLOSER TO INFINITY than it was before. Continuing with the elimination
of fuzzy infinities, what we really mean is:

"1/N approaches zero as N increases without bound"

Even this statement requires that we agree on what we mean by "approaches zero"
and "increases without bound". The mathematically provable statement is:

"For any real positive number D, there exists a positive integer M such
that for all N > M, the absolute value of 1/N is less than D."
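That epsilon-style statement is concrete enough to test mechanically; a small sketch where `witness_M` is a made-up helper name:

```python
import math

# For any positive real D, M = ceil(1/D) witnesses Stuart's statement:
# every N > M satisfies 1/N < D.
def witness_M(D):
    return math.ceil(1 / D)

for D in [0.1, 1e-3, 1e-9]:
    M = witness_M(D)
    for N in [M + 1, 2 * M, 10 * M]:
        assert 1 / N < D
print("checked")
```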

>>For example, infinity + 1 = infinity. It is just not the same infinity.

Can you define what you mean by this statement without using the word
infinity? If you can, we will be able to discuss whether it is true.

---
Stuart Smith
== Any opinions expressed are my own. You may share them for free. ==


Joachim Tabaczek

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

In article <69gk64$mq5$1...@halcyon.com>, phae...@halcyon.com (Phaedrus) writes:

|> >: Ingo Warnke <nfa...@hp710.math.uni-rostock.de> wrote:
|> >: >
|> >: >It will happen with probability 1. Unfortunately, if there are infinitely
|> >: >many possibilities at work, 'propability 1' doesn't mean 'sure', just as
|> >: >'propability 0' doesn't mean 'never happens'.
|>
|> I just called up a friend of mine who's a TA in a Probability class at
|> the University of Washington. If I'm wrong about "probability 1 is certain;
|> and certain is probability 1", then not only is he wrong too, but the two
|> reference books he checked are tragically misprinted... :-)

Probability 1 _is_ certain _if_ your experiment only has a
finite number of possible outcomes; for example, if you flip
a coin any finite number of times. But if you've got an
_infinite_ number of possible results, then this is no longer
true. I don't know which books your friend checked, but maybe
you should ask him again which of these cases they were
actually talking about.

|> > That something that always happens has probability one
|> >is correct. Just the reverse is not true. Think of producing somehow
|> >an infinite series of zero's and one's by flipping a coin. After you did this,
|> >find out what the probability is that the just produced series comes up.
|> >It is simply 0. Nevertheless, it *did* happen.
|>
|> You're making another faulty assumption here; namely, that the probability
|> of that series of flips is zero. It's not; it's about as close to zero as it's
|> possible to get, but it's not zero. If it were, then it would be _impossible_
|> to produce an infinite series of zero's and one's.

|> If I flip a coin N times, then the probability of any one particular set
|> of flips coming up--all heads, all tails, heads and tails alternating,
|> whatever--is 1/(2^N). As N gets larger and larger, this fraction gets smaller
|> and smaller; as N approaches infinity, the fraction's value approaches zero.
|> But "A function of N approaches some value as N approaches infinity" does _not_
|> mean "That function of N _reaches_ that value when N _reaches_ infinity."

|> 1/N approaches zero as N approaches infinity; that does not mean that
|> 1/infinity is zero.

Well, infinity isn't really a number, so a mathematician will
tell you that `1/infinity' is just a meaningless expression.
But as a physicist I will happily tell you that, yes, 1/infinity
_is_ zero. :-)

|> If 1/infinity were zero, then zero times infinity would
|> have to be 1, and that's not true; zero times _any_ number--even infinity--
|> is zero.

No it's not. `Zero times infinity' is simply undefined --
depending on what you're actually multiplying here, it could
be zero, infinity, or 42 [the most likely result :-)].
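The claim that `zero times infinity` can come out to anything is easy to see numerically. A quick Python sketch (my own, using a large finite n as a stand-in for the limiting process):

```python
# Pair a sequence a_n -> 0 with a sequence b_n -> infinity; the product
# a_n * b_n can be driven to 0, to infinity, or to 42, depending on the pair.
n = 10**6
products = {
    "0":        (1.0 / n**2) * n,   # (1/n^2) * n = 1/n  -> 0
    "infinity": (1.0 / n) * n**2,   # (1/n) * n^2 = n    -> infinity
    "42":       (42.0 / n) * n,     # (42/n) * n = 42    -> 42
}
for target, value in products.items():
    print("tends to", target, ":", value)
```

Each pair has a factor going to zero and a factor blowing up; what the product does depends entirely on how fast each one moves, which is why the form is called indeterminate.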

If you flip a coin an infinite number of times, then the
probability of any given chain of results is

lim_{N -> infinity} 1/(2^N) = 0

|> If I do something, then the sum of the probabilities of all the possible
|> outcomes of that thing has to be 1, by definition.

True.

|> If I roll a die, the sum
|> of the probabilities of the possible outcomes of that die roll has to be 1.
|> If there's six sides on the die, and the die is "fair", then the odds of any
|> given side coming up have to be 1/6--or slightly less, if we want to include
|> the odds of the die coming to rest on edge or spontaneously exploding in
|> midroll.

True as well.

|> If I produce an infinite series of coin flips, then the sum of the
|> probabilities of all the possible series has to be 1, by definition.

Still true...

|> So it
|> _cannot be true_ that the probability of each individual series coming up is
|> 0. It doesn't matter how many zeroes you add together, even an infinite
|> number of them; you're not going to get to 1.

... but this is where you're completely wrong. Adding up an
infinite amount of zeros can indeed give you a result of 1
[or any other result]. Integrals, for example, are basically
just that -- infinite sums of zeros. And you're not trying
to tell us that all integrals are zero by definition, are
you?
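The integral analogy can be made concrete in a few lines of Python (my own sketch): a Riemann sum for the integral of f(x) = 2x over [0, 1], whose exact value is 1. Each individual term shrinks toward zero as the number of slices N grows, yet the total stays pinned at 1.

```python
def riemann_sum(N):
    """Midpoint-rule Riemann sum for f(x) = 2x on [0, 1]; exact value is 1."""
    dx = 1.0 / N
    return sum(2.0 * (i + 0.5) * dx * dx for i in range(N))

for N in (10, 1000, 100000):
    largest_term = 2.0 * (N - 0.5) / (N * N)  # the biggest single slice
    print(N, riemann_sum(N), largest_term)
```

Every slice's contribution goes to zero as N grows, while the sum of those "almost zeros" holds at 1 -- the discrete cousin of "an integral is an infinite sum of zeros."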

|> If I flip a coin repeatedly, the odds of it never coming up heads are almost
|> infinitely small--1/(2^infinity), to be precise. But the odds are not zero.

Okay. Let's use your notation [which treats infinity as a
number and makes this much easier to write down -- any
mathematicians around are advised to stop reading _now_],
and assume that

1/(2^infinity) = x

Multiply this on both sides with 2, and you get

1/(2^(infinity-1)) = 2x

Since you will surely agree that infinity-1 = infinity,
this is equivalent to saying

1/(2^infinity) = 2x

Therefore x = 2x, to which the only solution is x = 0.
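The same halving argument rendered numerically in Python (my own sketch, with a large finite N in place of infinity): doubling 1/2^N just gives the previous term of the same sequence, and the whole sequence collapses toward 0.

```python
# x_N = 1/2^N; doubling it merely shifts the sequence: 2 * x_N == x_{N-1}.
# The only number left unchanged by doubling is 0, which is where the
# sequence is headed.
for N in (10, 50, 200):
    x = 2.0 ** -N
    assert 2 * x == 2.0 ** -(N - 1)   # the "infinity - 1 = infinity" shift
    print(N, x)
```

(Powers of two are exact in floating point, so the doubling identity holds exactly here, not just approximately.)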

Joachim

Paul Miller

unread,
Jan 14, 1998, 3:00:00 AM1/14/98
to

Ingo Kemper wrote:
>Well, are you sure that complete induction proves that the statement
>is true for infinity as well? I was under the impression that it would
>only show that the statement is true for all natural numbers, but not
>for infinity itself (so while n/infinity is indeed be zero for each
>natural n, it would not necessarily be true for infinity/infinity).

Ingo, you are correct that finite induction only proves a statement true for
each natural number (or, more generally, for each member of a well-ordered
countable set). However n/infinity is most definitely not
zero (nor anything), since infinity is not a number, and division only applies
to numbers. Likewise infinity/infinity is not anything.

My God, what has this thread degenerated to? ;-)

Stuart Smith

Jan 14, 1998, 3:00:00 AM

In article 1...@news.iquest.net, pano...@iquest.net (Dan Johnson) writes:
>In article <69h87r$7b5$1...@halcyon.com>, Phaedrus <phae...@halcyon.com> wrote:
>> 1/infinity--in fact, any positive finite number over infinity--is so
>>_close_ to zero, that for the vast majority of problems it can be taken to be
>>zero.
>
>If 1/x is not 0, x is NOT infinity.
>
>> But that does not mean that it _is_ zero; and the summing of an infinite
>>series--which is what this problem boils down to--is one of the areas where you
>>_cannot_ make this assumption. If 1/infinity were zero, then the sum of an
>>infinite series of "1/infinities" would be zero.
>
>infinity * 0 is an indeterminate form.

Much of this infinity discussion assumes that "infinity" is a number, just
like 0, 1, or 666. This is not true. Infinity is not a natural number.
You cannot use it in ordinary arithmetic statements. Trying to do so
leads to all kinds of false reasoning.

When I say that I have an "infinite" number of monkeys, I only mean that
"No matter how long I count, I won't ever finish counting up all the monkeys
I have" or "No matter what number you state, I can show you more monkeys
than that and I still won't have shown you all my monkeys."

When I informally say
1/infinity = 0
what I am really asserting by this non-arithmetical shorthand is that
1/n approaches 0 as n increases and that no matter what small non-zero
positive real number you select, I can show you that there is a value of
n for which 1/n is less than the number you selected and 1/m is also less
than the number you selected for all m > n.

Note that I am NOT stating that there exists a number k for which 1/k = 0.

You need to rephrase all equations with infinity into logical statements
that only deal with real numbers.

For instance, the assertion that
infinity * 0 is an indeterminate form.
has to be defined to have any meaning whatsoever.

I would interpret the statement
infinity * 0 = 0

to mean
n * 0 = 0 for all n, no matter how large n is.
This statement is true, provable, and clear.

What about the statement
1/infinity * infinity = 1
This can be interpreted in many ways, depending on what you plug in to the
meanings for the two non-arithmetic "infinity" words. One valid interpretation
may be
1/n * n = 1 for all n, where n is a positive integer, no matter how large n is.

If this is what you mean by the expression "1/infinity * infinity = 1" then
your expression is provably true.

If you interpret the statement to mean, instead,
1/n * m = 1 for all positive n and m
then the statement is patently false.
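The first two interpretations are easy to separate with exact rational arithmetic in Python (my own sketch, using the standard-library `fractions` module):

```python
from fractions import Fraction

# Interpretation 1: (1/n) * n = 1 for every positive integer n -- an identity,
# provable term by term with no "infinity" involved.
assert all(Fraction(1, n) * n == 1 for n in range(1, 10000))

# Interpretation 2: (1/n) * m with independent n and m is not pinned to 1.
print(Fraction(1, 3) * 7)   # 7/3 -- "1/infinity * infinity" has no single value
```

Same shorthand, two different logical statements, two different truth values -- which is the whole point of banishing the word "infinity" before arguing.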

If you interpret the statement to mean
1/n * m approaches 1 as n and m increase and that no matter what small
non-zero positive real number you select, I can show you that there are
values of M and N for which the absolute difference between 1/N * M and 1
is less than your number and all values of 1/n * m where n > N and
m > M are also "very close" to 1

then your statement is false.

My point here is that ANY assertion about infinity must be replaced by a
statement that does NOT include the word infinity before you can legitimately
attempt to prove or disprove it.

Stuart Smith

Jan 14, 1998, 3:00:00 AM

In article 1...@halcyon.com, phae...@halcyon.com (Phaedrus) writes:
>In article <34BB7B03...@best.com>, Dustin Wood <dw...@best.com> wrote:

> Errrrrr, it's impossible for something to be both possible and impossible.
>(It's also certain that something can't be both certain and uncertain.)

Interestingly, it IS possible for something to be both true and unprovable.

In fact, Gödel has proven that any consistent logical system of mathematics
that includes standard school arithmetic will have at least one statement that
is both true and unprovable.

Dustin Wood

Jan 14, 1998, 3:00:00 AM

Ok, I've thought of some new arguments, so I'm posting..... Again! Sheesh, will
this thing never die? :)

Phaedrus wrote:

>>If you have an infinite series, then somewhere within that series, all
>>possibilities will occur. By definition infinity covers EVERYTHING, including
>>itself and all possible subsets. This includes all subsets of both finite and
>>infinite length.

> First of all, again, this is not correct. Suppose three of the infinite
> subsets I have in mind are "an infinite string of 1's", "an infinite string
> of 2's", and "an infinite string of 3's". A single infinite string of numbers
> _cannot_ contain all three of these infinite strings. The series "1, 1, 1...
> 1, 2, 2, 2...2, 3, 3, 3..." cannot exist. That would mean that the series of
> 2's has both a definite starting point and a definite stopping point; and a
> series with a definite starting point and stopping point _cannot_ be infinite.


You have, once again, completely failed to grasp the concept I am trying to get
across. You put 111....1222.....223333.....33... That is not what I mean. What
I mean is more like:

start_of_series....11111....something_else....22222....something_else....33333....end_of_series

The infinite series are all contained. They have neither defined starting points
nor defined ending points. Yet, because we have taken all of infinity into
account, they don't have to have defined endpoints.

For an example of multiple infinities all within a single infinity, how about this?

Think of a 12 inch ruler. Suppose I take a measurement with it, and the length
of the object is no more than 12 inches but greater than nothing. What is the
probability that the length is at least 2 inches but less than 3 inches? It is
exactly 1/12 under the specified conditions. (We know nothing about the length
of the object ahead of time other than what is already stated.) But what is the
probability that the length is EXACTLY 2.5 inches? You are not rounding it to
the nearest sixteenth or quarter or anything. You are taking the EXACT
measurement. The probability of it being any specified length is 0. You can
round it to the nearest such and such or say that it is between such and such.
But you can not say what its exact length is.
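The ruler example can be simulated. A Monte Carlo sketch in Python (my own, with an arbitrary seed for reproducibility; floating-point draws are only a stand-in for the true continuum):

```python
import random

random.seed(1)  # arbitrary, so the run is reproducible
trials = 100_000
draws = [random.uniform(0.0, 12.0) for _ in range(trials)]

in_interval = sum(1 for x in draws if 2.0 <= x < 3.0)  # "at least 2, less than 3"
exactly_2_5 = sum(1 for x in draws if x == 2.5)        # exact equality

print("P(2 <= length < 3) ~", in_interval / trials)    # close to 1/12 ~ 0.0833
print("hits at exactly 2.5:", exactly_2_5)             # 0 in practice
```

The asymmetry shows up clearly: intervals get hit in proportion to their width, while any single exact value essentially never occurs.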

> > This is the point where this
> >whole thread seems to be stuck. Everyone keeps saying that 0+0+0+0... can not
> >equal 1.

> Errrr, if _everyone_ is saying that, then it may just be because we're
> right. Join us. Don't be scared. :-)

Well, not everyone is saying that anymore either :) It looks like someone went to
alt.math.problems.insane and recruited some mathematicians. Check some of the
other posts, they
bring up some great points :)

> Dustin, "We're going from the finite to the infinite--it doesn't work
> the same way" doesn't address every problem with infinity. There are certain
> constants that do work, even with infinity. And "0+0=0, even if you do it
> an infinite number of times" is one of them.

Sorry, but that just isn't true.

> 0+0+0+0... does not work in the same way. The sum of this infinite
> series is not 1. It's not 1/2. It's not 1/100000. It's not anything

That's true.

> other than zero.

That's not true.

Dustin


Dustin Wood

Jan 14, 1998, 3:00:00 AM


Jeffrey G. Montgomery wrote:

> No it isn't -- The closer you get to 0, the closer 1-X (X is the number
> approaching 0) is to 1. However, if MULTIPLYING, unless EVERY FACTOR
> INVOLVED is 1, you can NEVER get 1.
> 1*1*...1...1*1 = 1
> 1*.99999999999*1....1....*1 = .9999999999
> Whoops!
>

Whoops indeed. If that were .99999.... then it would indeed equal 1. As is, you
haven't included infinity. .999999999999 is a finite number.

> .99999^infinity = .0000000000000000000000000000000000000000000000000000000-
> NOT ZERO

It is zero. I give up. Go find a math professor and tell him. Normal rules of
math don't apply to infinity. Infinity is a concept, not a number. Nuff said.

Dustin


Phaedrus

Jan 14, 1998, 3:00:00 AM

In article <34bd0...@news.uni-rostock.de>,

I'll have to get back to you on that once I read that Rucker book. :-)
But really, the whole discussion of probability numbers is a sidetrack.
The original question was "Is it possible to go forever without hitting the
desired combination?" To answer that question, you don't need to express
the probability in numeric terms. It's a binary question; either it's
possible, or it's impossible. If it's possible, then it doesn't matter _how_
possible it is; if it's impossible, then it doesn't matter what number you
use to express that impossibility.

Phaedrus

Jan 14, 1998, 3:00:00 AM

Dustin, I hate to be a noodge. But _please_ figure out how to get your
browser to hold each line of your posts to 80 characters, and _please_
don't delete the attribution lines saying who said what. I'm on my knees.
I'm begging. Okay?

In article <34BD5B7A...@best.com>, Dustin Wood <dw...@best.com> wrote:
>Ok, I've thought of some new arguments,so I'm posting.....Again! sheesh,
>will this thing never die? :)
>
>Phaedrus wrote:

>>[Someone whose attribution line was deleted wrote:]

>>>If you have an infinite series, then somewhere within that series, all
>>>possibilities will occur. By definition infinity covers EVERYTHING,
>>>including itself and all possible subsets. This includes all subsets of
>>>both finite and infinite length.

>> First of all, again, this is not correct. Suppose three of the infinite
>>subsets I have in mind are "an infinite string of 1's", "an infinite string
>>of 2's", and "an infinite string of 3's". A single infinite string of
>>numbers _cannot_ contain all three of these infinite strings. The series
>>"1, 1, 1...1, 2, 2, 2...2, 3, 3, 3..." cannot exist. That would mean that
>>the series of 2's has both a definite starting point and a definite stopping
>>point; and a series with a definite starting point and stopping point
>>_cannot_ be infinite.

>You have, once again, completely failed to grasp the concept I am trying to
>get across.

On the contrary, I completely grasp the concept that you are trying to
get across. Honest. I really do. I understand exactly what you're saying.
It's just that what you're saying happens to be wrong. :-)

> You put
>111....1222.....223333.....33... That is not what I mean. What I mean is
>more like: start_of_series....11111....something_else....22222....
>something_else....33333....end_of_series.

Fine. So let me put it more simply. The series "1, 1, 1...1, _anything
other than 1_..." is impossible. It is illegal. By the very definition of
an infinite series, it cannot conceivably exist. It is just plain wrong.
It is an ex-series. :-)
Look again at what you wrote: 1, 1, 1....1, 2..." In order for that
series to exist, then there has to be a _last_ "1" in that infinite series of
1's--the 1 just before the 2. _An infinite series cannot have a last number.
Ever._ By definition, an infinite series never stops. There cannot be
anything after an infinite series, because that would mean that the infinite
series would have to end, and an infinite series cannot end.
It's fine to take an infinite series, and stick a finite number of things
in front of it; "1, 2, 3, 4, 4, 4, 4, 4..." is a perfectly fine infinite
series. But you cannot add anything--finite or infinite--to the _end_ of an
infinite series, because an infinite series does not have an end.

>The infinite series are all contained. They have neither defined starting
>points nor defined ending points.

If they don't have a defined ending point, then why do you keep trying
to tack things onto the end of them? :-)
And an infinite series _can_ have a defined starting point. The series
"1, 2, 3, 4, 5..." is a perfectly valid infinite series, and it certainly
has a defined starting point.

>Yet, because we have taken all of infinity into account, they don't have to
>have defined endpoints.
>
>For an example of multiple infinities all within a single infinity, how
>about this?

>Think of a 12 inch ruler.

Stop right there.
There are different classes of infinities. When we talk about infinity
in Magic, we are talking about the most basic of those classes--"countably
infinite." The series "1, 2, 3, 4, 5..." is infinite. But it's composed of
definite steps. I can never write out the entire series; but if you pick a
positive integer N, and ask me what the Nth number in the series is, I can tell
you. So it's countably infinite.
The set "The possible measurements on a 12-inch ruler" is not the same
thing at all. It is not countably infinite. "The set of real numbers between
0 and 12, inclusive" is _uncountably_ infinite. If I asked you "What's the
first number in that set?", you could tell me--it's 0. But if I asked you
"Okay, what's the second number in that set?", you could _not_ tell me--it
doesn't exist. After all, the second number in that set would have to be
"The real number closest to 0, but not actually 0"--and no such number exists.
Whatever number you give, I can give you another number that's closer to zero.
You can't count the number of real numbers between 0 and 1.
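The countable/uncountable contrast can be put in code form. A small Python sketch (my own illustration of the argument above, not a proof; the function names are mine):

```python
# Countable: the series 1, 2, 3, ... never ends, but every position N
# has a definite, nameable term.
def nth_term(N):
    return N

assert nth_term(1) == 1 and nth_term(10**9) == 10**9

# Uncountable: there is no "real number closest to 0 but not 0".
# Whatever candidate you offer, half of it is a strictly closer one.
def closer_to_zero(candidate):
    return candidate / 2

x = 1e-300
for _ in range(5):
    x = closer_to_zero(x)
assert 0 < x < 1e-300
```

No `nth_term`-style function can exist for the reals in [0, 12]: asking for "the second real after 0" already has no answer, which is exactly what makes the set uncountable.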
Uncountable infinities are as different from countable infinities as
countable infinities are from finite numbers. And, again, by definition,
any infinity that we talk about in Magic is countably infinite; because every
infinity we produce is the result of doing something--repeating some loop of
actions--an infinite number of times. (Counterexamples, anyone?) Even if
I take a countably-infinite number of countable-infinity/countable-infinity
creatures, and sacrifice them all to a Dracoplasm, the resulting Dracoplasm
is still countable-infinity/countable-infinity; it's not uncountable.
The fact that we're talking about "series" at all means that we're
talking about countable infinities; a series, by definition, is composed of
distinct terms--a first one, a second one, and so on. There's no such thing
as an uncountably infinite series--there are plenty of uncountably infinite
_sets_, but that's a different beastie.
So, if you're having a discussion about countable infinities, and you
even utter the word "ruler", the person you're talking with has a legal right
to slap you. :-)

> If I take a measurement with it and the length of the
>object is no more than 12 inches but greater than nothing. What is the
>probability that the length is at least 2 inches but less than 3 inches?
>It is exactly 1/12 under the specified conditions. (We know nothing about
>the length of the object ahead of time other than what is already stated).
>But what is the probability that the length is EXACTLY 2.5 inches. You are
>not rounding it to the nearest sixteenth or quarter or anything. You are
>taking the EXACT measurement. The probability of it being any specified
>length is 0. You can round it to the nearest such and such or say that it
>is between such and such. But you can not say what its exact length is.

Again, this is apples and oranges. "The set of all possible length
measurements"--even "The set of all possible length measurements between
2 and 2.000000000000000000001 inches"--is uncountably infinite. It's not
comparable to the card-shuffling problem.

>> >This is the point where this whole thread seems to be stuck. Everyone
>> >keeps saying that 0+0+0+0... can not equal 1.

>> Errrr, if _everyone_ is saying that, then it may just be because we're
>> right. Join us. Don't be scared. :-)

>Well, not everyone is saying that anymore either :) It looks like someone
>went to alt.math.problems.insane and recruited some mathematicians. Check
>some of the other posts, they bring up some great points :)
>

>> Dustin, "We're going from the finite to the infinite--it doesn't work
>>the same way" doesn't address every problem with infinity. There are certain
>>constants that do work, even with infinity. And "0+0=0, even if you do it
>>an infinite number of times" is one of them.
>

>Sorry, but that just isn't true.
>

>> 0+0+0+0... does not work in the same way. The sum of this infinite
>> series is not 1. It's not 1/2. It's not 1/100000. It's not anything
>

>That's true.
>
>> other than zero.
>
>That's not true.

Dustin, in my post, I gave you a detailed explanation for the reason
why 0+0+0+0... is different from other infinite series involving nonzero
numbers. If you think that that explanation is incorrect, then I would
sincerely welcome your telling me why. But when you snip out my entire
explanation without comment, cut directly to my conclusion, and then simply
say "That's not true," without saying why, then you're not providing much of
a meaningful discussion.
