
More math: Dembski's latest on specification


eNo

Jun 24, 2005, 6:27:09 PM
New article (take 341?) at:
http://www.designinference.com/documents/2005.06.Specification.pdf.

--
`昂,,,,喊`昂,,,,喊`昂,,,,喊`昂,,,,喊`昂,,,,喊
,,喊`昂,,,,喊`昂,,,,喊`昂,,,,喊`昂,,,,喊`昂,,
eNo
"Test everything; hold on to the good."

John Harshman

Jun 24, 2005, 6:44:07 PM
eNo wrote:

I've read through page 4, and so far he hasn't said anything, merely
made unsupported and silly statements about his ability to declare
proper levels of statistical significance objectively. Unlike Dembski,
Fisher realized that such levels are arbitrary. When we reject the null
hypothesis at the .05 level, Fisher realized quite properly that we will
be in error about one time in 20. Dembski goes on to try to render his
point rigorous by claiming that sufficiently small significance levels
(i.e. 1/2^100) are non-arbitrary. I guess that's supposed to be obvious
because he says nothing to support it, only a little story about
prisoners flipping coins that gains all its force (such as it has) by
entirely mistaking the statistical universe in which the story operates.

Does it get better after page 4?

Ken Shaw

Jun 24, 2005, 7:34:16 PM

No. On page 7 he pulls out this whopper:
"Suppose now that the die is thrown 6,000,000 times and that each face
appears exactly 1,000,000 times (as in the previous 6-tuple). Even if
the die is fair, there is something anomalous about getting exactly one
million appearances of each face of the die."
He then proceeds to throw around some very large and very small numbers
to try to make this crap significant. Nowhere does he make the point
that this result is the most likely result. I'm not clear on whether he
just doesn't understand combinatorial mathematics or is being
intentionally deceptive.

For those unaware of it, when a very large number of outcomes of an
event are possible it is very difficult to predict the outcome
beforehand and combinatorial math is used. OTOH one of those outcomes will
happen and that it is very very unlikely to have happened is not cause
to decide that the outcome was manipulated.
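
To put a number on it, here is a quick sketch of the multinomial
calculation behind that point (Python, standard library only; the
numbers are not taken from Dembski's paper): the perfectly even
outcome is the single most probable 6-tuple of face counts, yet its
probability is still absurdly small.

    from math import lgamma, log

    n, k = 6_000_000, 6   # total rolls, faces of the die
    m = n // k            # 1,000,000 appearances of each face
    # log P = log( n! / (m!)^k * (1/k)^n ): the multinomial pmf at its mode
    log_p = lgamma(n + 1) - k * lgamma(m + 1) + n * log(1 / k)
    print(log_p / log(10))   # about -16.6, i.e. P is roughly 2.5e-17

Every other 6-tuple of counts is rarer still; it is only the outcomes
"reasonably close to even", taken together, that carry almost all of
the probability.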

Ken

Kari Tikkanen

Jun 24, 2005, 9:20:31 PM
> "Suppose now that the die is thrown 6,000,000 times and that each face
> appears exactly 1,000,000 times (as in the previous 6-tuple). Even if
> the die is fair, there is something anomalous about getting exactly one
> million appearances of each face of the die."
> ...

> OTOH one of those outcomes will
> happen and that it is very very unlikely to have happened is not cause
> to decide that the outcome was manipulated.

But I think it is a good cause. Well, at least it depends on who you
suspect the manipulator to be.

Let's assume that I wander around the flea/junk markets of Paris as a
tourist.

Some salesman there shows me a couple of golden dice at a price of 300
euros ($400).
He claims that De Institute de repubmbmbmb.. (mumbling the rest of it)
has verified that each face appeared exactly 1,000,000 times for both
of these dice, thrown 6,000,000+6,000,000 times, so that they surely
are well balanced.
I can conclude quite surely that the outcome has been manipulated/made
up by a human.
I need not be absolutely sure, only sure enough that there is some
dishonesty (which ordinary laymen don't notice), and I would not buy
"gold" offered by dishonest guys.

But IDers claiming the manipulator to be supernatural set the problem
in a higher, different category. Sagan said:
"Extraordinary claims need extraordinary evidence". Each face appearing
exactly 1,000,000 times would not be *such* evidence, but perhaps huge
stony JHWH letters on the surfaces of moons and Genesis found coded
inside our DNA would be *such* evidence.

--
"'ALIEN %VV#5AF UPSETTING HUMANS' - SOCIOLOGICAL EXPERIMENT FOR
SCHOOL SCIENCE PROJECT GOING
ON. CONFIDENTIAL - DO NOT DISTURB." -huge star slogan in dark side of
galaxy

r norman

Jun 24, 2005, 9:47:49 PM

NO NO NO NO NO!!!!!

This is yet another example of trying to refute a silly argument
using bad mathematics and bad logic. The simple fact is that if you roll a
die 6 million times and you get the number 1 exactly 1 million times, not
somewhere close to 1 million but exactly 1 million, then something is
terribly wrong. You absolutely must reject the null hypothesis that
the roll is random. If a theory predicts that a particular outcome
is very, very unlikely to happen and then it does happen, that
most definitely IS cause to decide that the theory is wrong. In this
case, the theory is that of a fair die roll (or coin toss, in other of
Dembski's examples). Rejecting that hypothesis IS cause to conclude
that the outcome was manipulated. The fact that exactly 1 million is
the "most" likely outcome is irrelevant. It still is extremely
UNlikely.

What Dembski is arguing about significance levels is that, given a
very large number of trials, then even an event with relatively small
probability is likely to occur. There is nothing at all unusual or
unexpected about that. At a significance level of 0.01, you might
expect the result to occur on average once every hundred trials. If the
significance level is 10^-8, it might occur once in 100 million
trials. However, Dembski also argues quite correctly that if the
significance level is much, much smaller, say 10^-30 or 10^-60 or
whatever, then there is no way that you could ever conduct enough
trials in the entire history of the universe to expect the event to
occur. If it does occur under those conditions, you MUST reject the
hypothesis. Again, no surprise here. I have repeatedly tried to
remind people here to accept this simple fact about probability theory
and statistical testing.

The error in using probability theory arguments to reject evolution
lies not in trying to argue that very rare events actually do happen.
They really don't. The real response must be to reject the
calculation that leads to the tiny probability. If the probability
turns out to be merely very small, then it can, indeed, occur by
chance given enough trials (and enough time).


John Harshman

Jun 24, 2005, 10:07:55 PM
r norman wrote:

I find both of you to be a bit garbled. In the case of the 6 million
dice rolls, every possible outcome is vanishingly unlikely. Therefore,
according to your logic we must always reject any die being fair.
Exactly even numbers is indeed the most likely outcome, i.e. it's the
mode of the distribution, but it's still vanishingly unlikely.

But in this case I think Dembski is closer to right. We would be
surprised about exactly even numbers precisely because it's a
pre-specified outcome. We would be surprised if the count of rolls were
too far from it, but we would also be surprised if the count were too
close to it. Any other count of rolls could have been pre-specified, and
had we done so that would have been surprising as well, and we would
reject the fairness of a set of rolls that matched that count too
closely too. But if the count lay comfortably outside either tail and
comfortably far from exactly equal, we would have no reason to suspect
that the die were not fair, despite the extraordinarily low probability
of achieving just that count (or anything close to it).
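
For concreteness, here is what such a test could look like (a sketch
using scipy.stats, reducing the problem to the count of 1's and picking
an arbitrary alpha; none of this construction appears in Dembski's
paper): reject if the count falls in either far tail or suspiciously
close to the expectation, splitting the rejection probability three
ways.

    from scipy.stats import binom

    n, p, alpha = 6_000_000, 1/6, 0.001
    dist = binom(n, p)
    mean = int(n * p)          # 1,000,000 expected 1's
    lo = dist.ppf(alpha / 3)   # lower-tail cutoff
    hi = dist.isf(alpha / 3)   # upper-tail cutoff
    w = 0                      # grow a "too perfect" window around the mean
    while dist.cdf(mean + w) - dist.cdf(mean - w - 1) < alpha / 3:
        w += 1
    print(int(lo), int(hi), w)

With these numbers the middle rejection region turns out to be the
single value 1,000,000 itself: hitting the expectation exactly already
uses up its share of the allotted probability.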

> In this
> case, the theory is that of a fair die roll (or coin toss, in other of
> Dembski's examples). Rejecting that hypothesis IS cause to conclude
> that the outcome was manipulated. The fact that exactly 1 million is
> the "most" likely outcome is irrelevant. It still is extremely
> UNlikely.
>
> What Dembski is arguing about significance levels is that, given a
> very large number of trials, then even an event with relatively small
> probability is likely to occur. There is nothing at all unusual or
> unexpected about that. At a significance level of 0.01, you might
> expect the result to occur on average once every hundred trials. If the
> significance level is 10^-8, it might occur once in 100 million
> trials.

All true, but Dembski draws some fake significance from this observation.

> However, Dembski also argues quite correctly that if the
> significance level is much, much smaller, say 10^-30 or 10^-60 or
> whatever, then there is no way that you could ever conduct enough
> trials in the entire history of the universe to expect the event to
> occur. If it does occur under those conditions, you MUST reject the
> hypothesis. Again, no surprise here. I have repeatedly tried to
> remind people here to accept this simple fact about probability theory
> and statistical testing.

Or, to put it more sensibly, higher levels of significance (smaller P)
let us reject the null hypothesis with greater confidence. Whoop-te-doo.

> The error in using probability theory arguments to reject evolution
> lies not in trying to argue that very rare events actually do happen.
> They really don't. The real response must be to reject the
> calculation that leads to the tiny probability. If the probability
> turns out to be merely very small, then it can, indeed, occur by
> chance given enough trials (and enough time).

The greater error is in assuming that evolution is claimed to proceed by
chance.

Ken Shaw

Jun 24, 2005, 10:23:45 PM

r norman wrote:

By your logic any specific outcome is evidence of manipulation. Of all
the possible outcomes of rolling a die 6 million times the single most
probable outcome is an exactly even distribution. It has very low
probability but is still the most likely.

>
> What Dembski is arguing about significance levels is that, given a
> very large number of trials, then even an event with relatively small
> probability is likely to occur. There is nothing at all unusual or
> unexpected about that. At a significance level of 0.01, you might
> expect the result to occur on average once every hundred trials. If the
> significance level is 10^-8, it might occur once in 100 million
> trials. However, Dembski also argues quite correctly that if the
> significance level is much, much smaller, say 10^-30 or 10^-60 or
> whatever, then there is no way that you could ever conduct enough
> trials in the entire history of the universe to expect the event to
> occur. If it does occur under those conditions, you MUST reject the
> hypothesis. Again, no surprise here. I have repeatedly tried to
> remind people here to accept this simple fact about probability theory
> and statistical testing.

You are reversing things. You are correct if a single specific outcome
of the event is desired. Since evolution doesn't work that way it is
irrelevant to this discussion.

>
> The error in using probability theory arguments to reject evolution
> lies not in trying to argue that very rare events actually do happen.
> They really don't. The real response must be to reject the
> calculation that leads to the tiny probability. If the probability
> turns out to be merely very small, then it can, indeed, occur by
> chance given enough trials (and enough time).

Consider: the chances of my parents producing germ cells with exactly
the mix of genetic material to produce me are unbelievably small.
Since I exist, the very rare event did occur. The calculation of the odds
of my exact genetic pattern coming from my parents' genomes is correct.

Ken

r norman

Jun 24, 2005, 11:24:53 PM

I most emphatically DO reject any one specific outcome. However, what
I do NOT reject is an outcome within 5% of the expected value. Or any
other reasonable range of values about the expected value. I cannot
accept an outcome that is equal to the expected value plus or minus
0.0001% . A value equal to the expected value plus or minus 1% would
be unusual, making me suspect something. Dembski is perfectly correct
in saying that Mendel must have manipulated his numbers in order to
get results so astonishingly close to the expected value. That is old
hat, now, in the history of science. Whether he made up the numbers,
or simply only counted those results that agreed with his expectations
is not relevant -- either way is now unacceptable scientifically.

>But in this case I think Dembski is closer to right. We would be
>surprised about exactly even numbers precisely because it's a
>pre-specified outcome. We would be surprised if the count of rolls were
>too far from it, but we would also be surprised if the count were too
>close to it. Any other count of rolls could have been pre-specified, and
>had we done so that would have been surprising as well, and we would
>reject the fairness of a set of rolls that matched that count too
>closely too. But if the count lay comfortably outside either tail and
>comfortably far from exactly equal, we would have no reason to suspect
>that the die were not fair, despite the extraordinarily low probability
>of achieving just that count (or anything close to it).

You don't have to have an outcome lying way out on the tail of the
distribution. The probability of an event is the area under the
probability curve for all outcomes that match the event. If you make
the width of the acceptable area so narrow (one possibility only), you
get a tiny area, just as if you took the total area of everything
beyond some point far out in the tail.

Exactly -- what that means is, in other words, that the calculation of
a probability is totally invalid.


r norman

Jun 24, 2005, 11:44:29 PM

See my answer to John Harshman. That is exactly true: finding any one
prespecified outcome is evidence of manipulation. Whether it is the
most likely or not is irrelevant.

Suppose you have one million lottery tickets in a giant pot, mix them
up thoroughly, and draw one at random. However, somebody sneaked in
an extra copy of their number. So drawing that particular number is
the most likely event -- twice as likely as drawing any other specific
number. It still is extremely unlikely -- a chance of 1 in 500,000
that you will hit it. If number 253884 was the specific number that
was doubled and the lottery drew the number 253884, I would accuse the
lottery system of rigging the drawing. That accusation is completely
independent of the irrelevant fact that the holder of that particular
number also cheated in a different (and much lesser) way by having two
tickets instead of one.

Yes, somebody is going to win the lottery with one million tickets
sold. But "somebody" is not the same as "this specific person".

>> What Dembski is arguing about significance levels is that, given a
>> very large number of trials, then even an event with relatively small
>> probability is likely to occur. There is nothing at all unusual or
>> unexpected about that. At a significance level of 0.01, you might
>> expect the result to occur on average once every hundred trials. If the
>> significance level is 10^-8, it might occur once in 100 million
>> trials. However, Dembski also argues quite correctly that if the
>> significance level is much, much smaller, say 10^-30 or 10^-60 or
>> whatever, then there is no way that you could ever conduct enough
>> trials in the entire history of the universe to expect the event to
>> occur. If it does occur under those conditions, you MUST reject the
>> hypothesis. Again, no surprise here. I have repeatedly tried to
>> remind people here to accept this simple fact about probability theory
>> and statistical testing.
>
>You are reversing things. You are correct if a single specific outcome
>of the event is desired. Since evolution doesn't work that way it is
>irrelevant to this discussion.

I have always argued that evolution doesn't work that way. So counter
Dembski with that argument, not by using erroneous logic in trying to
counter his correct probability statements.


>> The error in using probability theory arguments to reject evolution
>> lies not in trying to argue that very rare events actually do happen.
>> They really don't. The real response must be to reject the
>> calculation that leads to the tiny probability. If the probability
>> turns out to be merely very small, then it can, indeed, occur by
>> chance given enough trials (and enough time).
>
>Consider: the chances of my parents producing germ cells with exactly
>the mix of genetic material to produce me are unbelievably small.
>Since I exist, the very rare event did occur. The calculation of the odds
>of my exact genetic pattern coming from my parents' genomes is correct.
>

I have demolished this argument previously using the example of
producing a specific result when shuffling a deck of cards. The
problem is that, in order to interpret probabilities correctly, you
must specify the exact outcome BEFORE seeing what happens. If
somebody wrote down your specific DNA sequence before you were
conceived, and then your parents produced that exact genome, I would
have to believe in miracles, probably divine intervention. It simply
will not happen. It does NOT count to say that your parents must
produce "some" genetic combination, to be described later. Then,
after you were born, determine your genome sequence and claim: "See,
THAT is the one I really meant!". The probability that your parents
will produce "some" genetic combination, but totally unspecified, is
one. You were born and actually did have "some" genetic combination.
So an event with probability one actually happened. I am not terribly
surprised or shocked.

You are not allowed to specify your genome by saying "the genome that
turned out to be me", equivalent to "the genome that results from this
trial". Phrased that way, you are saying "what is the probability of
getting the exact result that occurs when I do the trial". If you do
the trial and get the result that results from that trial, are you
surprised?
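
The deck-of-cards arithmetic alluded to above is a one-liner to check
(a sketch; the 52! figure is standard combinatorics, not something
from this thread):

    from math import factorial, log10

    # A pre-specified ordering of a 52-card deck has probability 1/52!
    print(log10(factorial(52)))   # ~67.9, so P is about 10^-68

Getting "some" ordering, unspecified in advance, has probability one.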


Ken Shaw

Jun 24, 2005, 11:59:12 PM

I said this:

>>>>For those unaware of it, when a very large number of outcomes of an
>>>>event are possible it is very difficult to predict the outcome
>>>>beforehand and combinatorial math is used. OTOH one of those outcomes will
>>>>happen and that it is very very unlikely to have happened is not cause
>>>>to decide that the outcome was manipulated.

You said this:


>>>The error in using probability theory arguments to reject evolution
>>>lies not in trying to argue that very rare events actually do happen.
>>>They really don't. The real response must be to reject the
>>>calculation that leads to the tiny probability. If the probability
>>>turns out to be merely very small, then it can, indeed, occur by
>>>chance given enough trials (and enough time).

and then this:


> I have demolished this argument previously using the example of
> producing a specific result when shuffling a deck of cards. The
> problem is that, in order to interpret probabilities correctly, you
> must specify the exact outcome BEFORE seeing what happens. If
> somebody wrote down your specific DNA sequence before you were
> conceived, and then your parents produced that exact genome, I would
> have to believe in miracles, probably divine intervention. It simply
> will not happen. It does NOT count to say that your parents must
> produce "some" genetic combination, to be described later. Then,
> after you were born, determine your genome sequence and claim: "See,
> THAT is the one I really meant!". The probability that your parents
> will produce "some" genetic combination, but totally unspecified, is
> one. You were born and actually did have "some" genetic combination.
> So an event with probability one actually happened. I am not terribly
> surprised or shocked.
>
> You are not allowed to specify your genome by saying "the genome that
> turned out to be me", equivalent to "the genome that results from this
> trial". Phrased that way, you are saying "what is the probability of
> getting the exact result that occurs when I do the trial". If you do
> the trial and get the result that results from that trial, are you
> surprised?
>

Beyond saying it in a lot more words, what you are saying is exactly what
I said originally.

Ken

Stuart

Jun 25, 2005, 12:10:17 AM

But the problem is, any other distribution is less probable. It seems
weird to you, because you attach "significance" to that particular
outcome, simply because it has symmetry.

But why should that outcome have any more significance than one in which
we obtained 999,998 1's, 1,000,002 6's, and 10^6 of everything else?

IMHO, you're making the same mistake Waldo makes.

Dembski attaches particular significance to certain arrangements of
amino acids cuz the proteins they form have "significance", or
"function". Now, given yay amino acids, you can make a large number of
different proteins. Many of them will have a function. But Dembski
doesn't attach "significance" to them cuz we don't find them in
organisms.

But this is where the argument fails. It fails because there is no a
priori imperative that any particular protein, much less organism, be
found in nature. Dembski's argument only makes sense if the current
complement of proteins found in nature are the only ones which could
exist on a world capable of sustaining life. This is clearly false. Had
the Permian extinction occurred 100 million years earlier or 100
million years later, life on Earth would be different... certainly many
organisms would be recognizable to us, others not.

Evolution is not a blind man throwing darts at a dart board and getting
a bullseye. That's the caricature.

Instead, the blind man throws a dart at the wall, and evolution paints
the target around the dart.


Stuart

John Harshman

Jun 25, 2005, 10:06:37 AM
r norman wrote:

You have a very odd way of putting things here. I notice you have added
the word "specific", by which presumably you mean what I (and Dembski)
mean by "pre-specified". I think that addition is crucial. Your choice
of what to accept or reject would of course depend on what P-value you
have chosen, and you want the rejected regions of the distribution (both
tails, modal region) to sum to that proportion of the distribution.

And Mendel could easily have produced his odd results with a slight,
unconscious bias in scoring doubtful peas.

>>But in this case I think Dembski is closer to right. We would be
>>surprised about exactly even numbers precisely because it's a
>>pre-specified outcome. We would be surprised if the count of rolls were
>>too far from it, but we would also be surprised if the count were too
>>close to it. Any other count of rolls could have been pre-specified, and
>>had we done so that would have been surprising as well, and we would
>>reject the fairness of a set of rolls that matched that count too
>>closely too. But if the count lay comfortably outside either tail and
>>comfortably far from exactly equal, we would have no reason to suspect
>>that the die were not fair, despite the extraordinarily low probability
>>of achieving just that count (or anything close to it).
>
> You don't have to have an outcome lying way out on the tail of the
> distribution. The probability of an event is the area under the
> probability curve for all outcomes that match the event. If you make
> the width of the acceptable area so narrow (one possibility only), you
> get a tiny area, just as if you took the total area of everything
> beyond some point far out in the tail.

I'm sure you have a point here, but I don't know what it's supposed to
be, or how it relates to what I said. Are you misreading something I
wrote here?

By the way, we are visualizing this distribution as a continuous,
univariate one, but it's really a multinomial, thus discrete, and any
plot of it would be 5-dimensional.

r norman

Jun 25, 2005, 10:03:59 AM
On Sat, 25 Jun 2005 03:59:12 GMT, Ken Shaw <non...@your.biz> wrote:

>
>I said this:
>>>>>For those unaware of it, when a very large number of outcomes of an
>>>>>event are possible it is very difficult to predict the outcome
>>>>>beforehand and combinatorial math is used. OTOH one of those outcomes will
>>>>>happen and that it is very very unlikely to have happened is not cause
>>>>>to decide that the outcome was manipulated.
>
>You said this:
>>>>The error in using probability theory arguments to reject evolution
>>>>lies not in trying to argue that very rare events actually do happen.
>>>>They really don't. The real response must be to reject the
>>>>calculation that leads to the tiny probability. If the probability
>>>>turns out to be merely very small, then it can, indeed, occur by
>>>>chance given enough trials (and enough time).

Your statement "that it is very very unlikely to have happened is not
cause to decide that the outcome was manipulated" is wrong. I say
"that it is very very unlikely to have happened IS cause to reject the
hypothesis that produces that probability". We differ very strongly on
this point. The words you wrote seem to suggest you reject
statistical hypothesis testing. What you really believe may be
something entirely different.

There are two problems here. One is in pure probability theory. The
probability of obtaining EXACTLY 1 million 1's in rolling a die 6
million times is very very small. If it happens, I reject that the
throw is random. On the other hand, the probability of getting a
number of 1's VERY CLOSE TO 1 million is quite high. You will almost
always get such a result. You will never (for all practical purposes)
get exactly 1 million.
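
For scale, that contrast can be computed directly (a sketch, standard
library only, treating just the count of 1's as a binomial, as in the
paragraph above):

    from math import erf, exp, lgamma, log, sqrt

    n, p, k = 6_000_000, 1/6, 1_000_000
    # P(exactly k ones): log of C(n,k) * p^k * (1-p)^(n-k)
    log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + k * log(p) + (n - k) * log(1 - p))
    print(exp(log_pmf))            # ~4.4e-4: the exact count is rare
    # P(within 1% of k), by the normal approximation
    sigma = sqrt(n * p * (1 - p))  # ~913, so 10,000 is about 11 sigma
    print(erf(10_000 / (sigma * sqrt(2))))   # ~1.0: almost certain

(The joint event that all six faces come out exactly even is far rarer,
around 2.5e-17.)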

The second problem has to do with the application of probability
theory to evolution. You and I agree that calculating a ridiculously
small probability for the evolution of a single protein, for example,
is totally bogus. Therefore we agree that the anti-evolutionists
argument is completely flawed. The difference is that I insist on
focusing on the second problem. Oftentimes the claim of the
creationist about probability theory is correct. In fact Dembski (who
was trained as a mathematician) is correct on that one technical
point. It is in the application of probability to evolution that they
fail.


r norman

Jun 25, 2005, 10:17:05 AM

As I indicate in my response to Ken Shaw, there are two problems here.
One of them is a question of pure probability theory, the question
posed by Dembski about throwing a die 6 million times. It was he who
said that the probability of obtaining a 1 exactly 1 million
times is very very low. That is a true statement. I attach no
"significance" to that particular result. The probability of
obtaining a 1 exactly 946893 times is also very very low, even lower.
All I claim, and what Dembski is claiming by using that example, is
that if a theory predicts that the probability of a particular set of
outcomes is very low and you get an outcome in that set, you must
reject the theory. In this case, the set has only one outcome.
That is a true statement of the way science uses statistics and
probability theory. Dembski is correct in that portion of his writing.

There is a second question, the application of probability theory to
evolution. Here, you and I are in complete agreement. The problem
is, as you say, evolution does not have to create "this specific
protein with this specific amino acid sequence". All it has to do is
produce "some protein, I don't care how long it is or what amino acids
it has in what sequence, as long as it has at least some small ability
to perform this specific function". That is an entirely different
story with an entirely different probability and the probability for
the real problem is not vanishingly small. Therefore you cannot
reject the hypothesis, evolution, that generates it.

What I rail against is the tendency, evidenced in the beginning of this
thread, for people to try to reject the anti-evolution probability
argument by denying probability theory and statistical hypothesis
testing. To paraphrase, they claim "just because an outcome has a
ridiculously small probability doesn't matter. Such outcomes happen
every day, for example, every time I shuffle a deck of cards." I
claim that this attempt to refute the creationists or IDers is totally
fallacious and must not be used. Instead, use a valid argument.


John Harshman

Jun 25, 2005, 10:29:03 AM
r norman wrote:

> On Sat, 25 Jun 2005 03:59:12 GMT, Ken Shaw <non...@your.biz> wrote:
>
>
>>I said this:
>>
>>>>>>For those unaware of it, when a very large number of outcomes of an
>>>>>>event are possible it is very difficult to predict the outcome
>>>>>>beforehand and combinatorial math is used. OTOH one of those outcomes will
>>>>>>happen and that it is very very unlikely to have happened is not cause
>>>>>>to decide that the outcome was manipulated.
>>
>>You said this:
>>
>>>>>The error in using probability theory arguments to reject evolution
>>>>>lies not in trying to argue that very rare events actually do happen.
>>>>>They really don't. The real response must be to reject the
>>>>>calculation that leads to the tiny probability. If the probability
>>>>>turns out to be merely very small, then it can, indeed, occur by
>>>>>chance given enough trials (and enough time).
>
> Your statement "that it is very very unlikely to have happened is not
> cause to decide that the outcome was manipulated" is wrong. I say
> "that it is very very unlikely to have happened IS cause to reject the
> hypothesis that produces that probability". We differ very strongly on
> this point. The words you wrote seem to suggest you reject
> statistical hypothesis testing. What you really believe may be
> something entirely different.

They don't suggest that to me. What you guys are really arguing about
here is the difference between an improbable result and a pre-specified
improbable result. You neglected to insert the term "pre-specified" or
its equivalent into your original post. I'm not sure you have any real
disagreement at all.

> There are two problems here. One is in pure probability theory. The
> probability of obtaining EXACTLY 1 million 1's in rolling a die 6
> million times is very very unlikely. If it happens, I reject that the
> throw is random. On the other hand, the probability of getting a
> number of 1's VERY CLOSE TO 1 million is quite high. You will almost
> always get such a result. You will never (for all practical purposes)
> get exactly 1 million.

And this is true for any single value of the distribution, though you
will indeed end up getting one of those values from a fair die. You need
to stress that exactly 1 million is surprising (and thus evidence of
fraud) merely because it's the pre-specified expectation.

> The second problem has to do with the application of probability
> theory to evolution. You and I agree that calculating a ridiculously
> small probability for the evolution of a single protein, for example,
> is totally bogus. Therefore we agree that the anti-evolutionists
> argument is completely flawed. The difference is that I insist on
> focussing on the second problem. Often times the claim of the
> creationist about probability theory is correct. In fact Dembski (who
> was trained as a mathematician) is correct on that one technical
> point. It is in the application of probability to evolution that they
> fail.

Though I think that, in that short article, he is wrong about several
other technical points of statistics, in particular this notion that you
can make the choice of the proper significance level a rigorous act.

Ken Shaw

Jun 25, 2005, 11:07:18 AM

Either you are having a problem reading for comprehension or you are
being intentionally argumentative.

You are railing against getting the exact result predicted beforehand.
Nowhere did I say anything even remotely implying that that was a valid
situation, nor did I ever make any statements about predicting the
outcome.

Go back and look at what Dembski wrote. He is talking about the result
of rolling a die 6 million times. He is not talking about predicting the
outcome beforehand; he is pointing to the obtained result that happens to
be the most likely result and saying it is anomalous. That is bunk.
Further, your arguments are apparently against some previous probability
calculation Dembski made. In this case Dembski is simply abusing
probability calculations for his benefit.

Ken

r norman

Jun 25, 2005, 11:30:40 AM
On Sat, 25 Jun 2005 14:06:37 GMT, John Harshman
<jharshman....@pacbell.net> wrote:

Yes, by "specific" I have always insisted that the event be
pre-specified. Otherwise, as I have tried apparently unsuccessfully
to explain, the word "specific" has no meaning. It doesn't make sense
to talk about "this specific event, but I won't tell you which event I
really mean until it happens". You can only say "this specific event
and here is the event I mean". That is, pre-specified.

You are also right about first selecting the P-value. However, a
P-value of ten to the minus a big number would have to be considered
significant by any reasonable observer. We are not talking about
whether or not to accept .05 or .01 here.

>And Mendel could easily have produced his odd results with a slight,
>unconscious bias in scoring doubtful peas.

It did not have to be deliberate deception. But it still is
suspiciously "too good".

>
>>>But in this case I think Dembski is closer to right. We would be
>>>surprised about exactly even numbers precisely because it's a
>>>pre-specified outcome. We would be surprised if the count of rolls were
>>>too far from it, but we would also be surprised if the count were too
>>>close to it. Any other count of rolls could have been pre-specified, and
>>>had we done so that would have been surprising as well, and we would
>>>reject the fairness of a set of rolls that matched that count too
>>>closely too. But if the count lay comfortably outside either tail and
>>>comfortably far from exactly equal, we would have no reason to suspect
>>>that the die were not fair, despite the extraordinarily low probability
>>>of achieving just that count (or anything close to it).
>>
>> You don't have to have an outcome lying way out on the tail of the
>> distribution. The probability of an event is the area under the
>> probability curve for all outcomes that match the event. If you make
>> the width of the acceptable area so narrow (one possibility only), you
>> get a tiny area, just as if you took the total area of everything
>> beyond some point far out in the tail.
>
>I'm sure you have a point here, but I don't know what it's supposed to
>be, or how it relates to what I said. Are you misreading something I
>wrote here?

Ordinarily hypothesis testing calculates the probability of getting a
result "this far away from the expected value or greater" or "this
far away from the expected or greater in either direction". That is,
a one-tailed or two-tailed test. In that case, the probability is the
area under the tail of the distribution. You referred to the tails
in the post I responded to and you again referred to the tails earlier
in this most recent post. I was simply pointing out that, the way the
problem of rolling a die was framed, the probability does not involve
the tail of the distribution but rather a particular area in the
center.

>By the way, we are visualizing this distribution as a continuous,
>univariate one, but it's really a multinomial, thus discrete, and any
>plot of it would be 5-dimensional.
>

I simplified the problem to getting a specified number of 1's, not a
specified number of each value to reduce the dimensionality. And
whether the distribution is continuous (hence involving areas) or
discrete (hence involving sums which are comparable to areas) doesn't
change the situation.

Clearly I am not making myself clear.

There is a difference between getting EXACTLY one million 1's or
getting one million 1's OR SOME VERY SIMILAR VALUE. There is a
difference between getting THIS PARTICULAR (pre)specified shuffle of
cards and getting SOME shuffle of a specified order but I won't tell
you what that order is until after it happens. There is a difference
between getting THIS PARTICULAR amino acid or nucleotide sequence and
getting SOME SEQUENCE THAT SEEMS TO WORK. In each case, the first
example might have vanishingly small probability, the second might
have merely a small probability or possibly a large probability. When
someone proposes a mathematical problem of the first kind you can't
argue against it by saying "rare events do happen". Rare events
rarely happen. On the other hand, when someone proposes that an
argument of the first kind should be used to invalidate a theory for
which the probability really is of the second kind, then you should
point out that error.


r norman

Jun 25, 2005, 11:33:36 AM

Yes, as in my other response -- by "specified" I always have meant
that you have to produce the specification in advance.

Yes, Dembski is wrong about rigorously choosing the proper significance
level. However, he is not wrong in everything he says, no matter how
much we dislike his arguments and conclusions.

John Harshman

Jun 25, 2005, 12:04:04 PM
r norman wrote:

All clear from the outset.

>>By the way, we are visualizing this distribution as a continuous,
>>univariate one, but it's really a multinomial, thus discrete, and any
>>plot of it would be 5-dimensional.
>
> I simplified the problem to getting a specified number of 1's, not a
> specified number of each value to reduce the dimensionality. And
> whether the distribution is continuous (hence involving areas) or
> discrete (hence involving sums which are comparable to areas) doesn't
> change the situation.

True.

r norman

Jun 25, 2005, 12:37:51 PM
On Sat, 25 Jun 2005 15:07:18 GMT, Ken Shaw <non...@your.biz> wrote:


>
>Either you are having a problem reading for comprehension or you are
>being intentionally argumentative.
>
>You are railing against getting the exact result predicted before hand.
>Nowhere did I say anything even remotely implying that that was a valid
>situation neither did I ever make any statements about predicting the
>outcome.
>
>Go back and look at what Dembski wrote. He is talking about the result
>of rolling a die 6 million times. He is not talking about predicting the
>outcome beforehand he is pointing to the obtained result that happens to
>be the most likely result and saying it is anomalous. That is bunk.
>Further your arguments are apparently against some previous probability
>calculation Dembski made. In this case Dembski is simply abusing
>probability calculations for his benefit.
>

I am reading what I see and I am also argumentative.

No, I am not railing against getting the exact result predicted
beforehand. What I am railing against is those who incorrectly answer
an argument based on the probability of getting the exact result
predicted beforehand. John Harshman has concluded that the difficulty
is in the word "pre-specified". My use of the term "exact result"
does, in fact, mean "exact result predicted beforehand", that is
"pre-specified". If you only mean an exact result whose specification
cannot be determined until the result is obtained, we have no
agreement. But then I question what you mean by "exact". Exactly
what?

You said "that it is very very unlikely to have happened is not
cause to decide that the outcome was manipulated". I say that the
event is not at all unlikely unless the outcome was (pre)specified.
So I interpreted your comment to mean getting an exact result
predicted beforehand. If that is not what you meant, then it is also
true that the unspecified event you are talking about is almost
certainly not unlikely. You can't talk about the probability of an
event unless you know what event (or what set of outcomes) you mean.
That is, unless you prespecify the set.

I did go back and look at what Dembski wrote. It is "Suppose now
that the die is thrown 6,000,000 times and that each face appears
exactly 1,000,000 times (as in the previous 6-tuple). Even if the die
is fair, there is something anomalous about getting exactly one
million appearances of each face of the die." I interpret that as
getting a specific result predicted beforehand. There is no abuse of
probability calculation here. You can calculate the probability of
that event occurring and it is very small. Dembski (whose name I
finally learned to spell) correctly argues here that if such an event
actually occurred it would be considered anomalous. I agree with
Dembski. Other people seem to claim that Dembski is wrong -- that
such a result would be unremarkable and could easily be passed off. I
claim that Dembski is correct in this particular instance. In other
instances, he does abuse probability calculations for his benefit. In
this instance, he does not. That is all I am arguing.


Stuart

Jun 25, 2005, 2:57:18 PM


But now Dembski is attaching "significance": "that if a theory
predicts that the probability of a particular set..."

What is the basis for choosing that particular set? If it could be
shown that the observed complement of proteins in the biosphere are the
only ones possible, then Dembski has a valid basis for choosing, as his
particular set, proteins found in extant organisms.


> In this case, the set has only one outcome.
> That is a true statement of the way science uses statistics and
> probability theory. Dembski is correct in that portion of his writing.
>
> There is a second question, the application of probability theory to
> evolution. Here, you and I are in complete agreement. The problem
> is, as you say, evolution does not have to create "this specific
> protein with this specific amino acid sequence". All it has to do is
> produce "some protein, I don't care how long it is or what amino acids
> it has in what sequence, as long as it has at least some small ability
> to perform this specific function". That is an entirely different
> story with an entirely different probability and the probability for
> the real problem is not vanishingly small. Therefore you cannot
> reject the hypothesis, evolution, that generates it.
>
>> What I rail against is the tendency, evidenced in the beginning of this
> thread, for people to try to reject the anti-evolution probability
> argument by denying probability theory and statistical hypothesis
> testing. To paraphrase, they claim "just because an outcome has a
> ridiculously small probability doesn't matter. Such outcomes happen
> every day, for example, every time I shuffle a deck of cards."

That depends. If you're playing poker, nobody is going to say one draw
is as good as another, and you'll be shocked if you draw a royal
flush.

But if you're not playing a game, one draw is as good as another.

> I
> claim that this attempt to refute the creationists or IDers is totally
> fallacious and must not be used. Instead, use a valid argument.

In this case, I don't think it is fallacious because there is no
rational a priori basis for choosing one outcome over another. This is a
sleight of hand Dembski uses to bamboozle his readers.

Anyway, that's my 2 cents.

Stuart

Ken Shaw

Jun 25, 2005, 4:09:03 PM

Well that settles everything. I'm reading what was written and expect
everyone to read what I wrote. You are extrapolating additional meaning
in order to grind your own ax.

In the future do me a favor when responding to my posts, don't add
anything at all to my plain meaning.

Ken

r norman

Jun 25, 2005, 5:01:25 PM
On Sat, 25 Jun 2005 20:09:03 GMT, Ken Shaw <non...@your.biz> wrote:


>Well that settles everything. I'm reading what was written and expect
>everyone to read what I wrote. You are extrapolating additional meaning
>in order to grind your own ax.
>
>In the future do me a favor when responding to my posts, don't add
>anything at all to my plain meaning.
>

I was trying to be generous and kind in saying what I really meant.
You say: "Consider, the chances of my parents producing germ cells


with exactly the mix of genetic material to produce me is unbelievably

unlikely." That is an example of an "exact result". Unless you
(pre)specify just what that result means, its probability has no
meaning. Either my interpretation of your comments (in this case and
in related ones) is correct or they have no meaning. Take your pick.

Ken Shaw

Jun 25, 2005, 6:15:18 PM

No. The rhetorical question was very specific. I exist. My genome
exists. What is the probability that my parents' genomes produced my
genome? Astronomically small. But it happened and there is no reason to
investigate for manipulation because no one was trying to produce my
genome. You seem to have severe problems reading English for comprehension.

I have now explained to you at least 3 times that I was not discussing
your pet peeve. That you do not simply accept that you read more into my
statement than was present is troubling.

Ken

r norman

Jun 25, 2005, 9:42:06 PM
On 25 Jun 2005 11:57:18 -0700, "Stuart" <bigd...@aol.com> wrote:

>But now Dembski is attaching "significance": "that if a theory
>predicts that the probability of a particular set..."
>
>What is the basis for choosing that particular set? If it could be
>shown that the observed complement of proteins in the biosphere are the
>only ones possible, then Dembski has a valid basis for choosing, as his
>particular set, proteins found in extant organisms.
>

That is the crucial point. Dembski says that IF the theory does
predict that the probability of an event is very small then you must
reject the theory. The fact is that the theory makes no such
prediction. There is no basis for choosing that particular set (or
outcome) just as you say. That is the real fallacy. Unfortunately,
other people try to argue against Dembski's statement as it stands and
that is the fallacious argument I complain about. It really is true
that IF the theory does make that prediction, then you must reject the
theory. Don't reject that correct statement -- reject the notion that
the theory makes any such prediction.

>> What I rail against is the tendency, evidenced in the beginning of this
>> thread, for people to try to reject the anti-evolution probability
>> argument by denying probability theory and statistical hypothesis
>> testing. To paraphrase, they claim "just because an outcome has a
>> ridiculously small probability doesn't matter. Such outcomes happen
>> every day, for example, every time I shuffle a deck of cards."
>
>That depends.. If you're playing poker nobody is going to say one draw
>is as good as another.. and you'll be shocked if you draw a ROyal
>flush.
>
>But if you're not playing a game, one draw is as good as another.
>

If you are playing poker, there is a predefined set of "special"
outcomes: royal flushes, four of a kind, etc. If you get dealt such a
hand, it is a very unusual and rare event. However, that event is not
of such small probability that you would think it could never happen. It
can occasionally happen given the thousands of poker games going on
all the time. If you don't have any particular game in mind, then
there is no prespecified "special" outcome to be concerned with. In
that situation, the probability simply of getting "some" unspecified
hand when dealing a shuffled deck of cards is one. And that is what
always happens. Everybody gets "some" hand.
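
For the record, that royal-flush probability is easy to compute (a
sketch assuming a plain five-card deal, no wild cards):

    from math import comb

    # 4 royal flushes among all C(52,5) possible five-card hands
    print(4 / comb(52, 5))   # ~1.5e-6

Rare for any one hand, but with thousands of games dealt every day it
is bound to turn up now and then.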

>In this case, I don't think it is fallacious because there is no
>rational a priori basis for choosing one outcome over another. This is a
>sleight of hand Dembski uses to bamboozle his readers.
>

Exactly. So complain about Dembski's sleight of hand. Don't argue
that if a theory truly predicts an outcome with ridiculously small
probability and that outcome happens you don't have to reject the
theory. (I am not talking about you, but about other posts here.)

All I am trying to say is that IF the theory REALLY DOES make that
prediction and the outcome happens you REALLY DO have to reject the
theory. However, the theory really does NOT make that prediction.


TomS

Jun 26, 2005, 9:23:45 AM
"On Sat, 25 Jun 2005 21:42:06 -0400, in article
<251sb1t8uokmvbmvh...@4ax.com>, r norman stated..."
[...snip...]

Perhaps this may be helpful:

The probability of a particular hand depends not only
on the particular cards that you are dealt, but also on what
"special" hand it represents.

If you are playing poker, then you must also consider the
particular rules that you are playing under. Are there wild cards?
Are you playing "high-low" (the highest hand and the lowest hand
split the pot)? Is it draw poker? Is there a card in the hole?

You have to place the particular hand that you are holding
within a context.

And the context could be that you're not playing poker, but
some other game -- for example, blackjack.


--
---Tom S. <http://talkreason.org/articles/chickegg.cfm>
..The Earth obey'd, and strait/Op'ning her fertil Woomb teem'd at a Birth/
Innumerous living Creatures, perfet formes,/Limb'd and full grown: out of the
ground up rose/As from his Laire the wilde Beast...
Milton, Paradise Lost. Book VII 453-457

Roger Coppock

Jun 26, 2005, 10:09:17 AM
According to this paper then, the basis for ID theory
is a rehash of basic statistics that probably was
plagiarized from an old statistics book and a cute
reference to the "Da Vinci Code." I've seen better
papers from undergraduate science students, and
those papers earned a "C."

Is this all there is to something that has become a
national issue? State legislatures, school boards,
and court rooms across the country are having hissy
fits about this, only this? Maybe we ought to keep
the real science under wraps, lest there be total
anarchy.

Andrew Arensburger

Jul 2, 2005, 1:35:27 AM
Ken Shaw <non...@your.biz> wrote:
> John Harshman wrote:
>> eNo wrote:
>>>New article (take 341?) at:
>>>http://www.designinference.com/documents/2005.06.Specification.pdf.

>> Does it get better after page 4?

Call me crazy, but I actually slogged through this. Well, I
skipped the addenda, and didn't look up any of the endnotes, and kinda
skimmed over a lot of the equations when it was obvious that they
weren't adding anything of value. But yeah, I just read the whole
thing.
Basically, it can be summed up as "Tornado in a junkyard."

I'm not sure who the intended audience is. On one hand, the
paper is long enough, and there are enough equations, to keep most lay
readers from reading it. On the other hand, the math is at an
undergraduate level (I could follow along) and he often pauses to
explain or illustrate simple concepts. So it wasn't written for
mathematicians or statisticians, either.

> No. On page 7 he pulls out this whopper:
> "Suppose now that the die is thrown 6,000,000 times and that each face
> appears exactly 1,000,000 times (as in the previous 6-tuple). Even if
> the die is fair, there is something anomalous about getting exactly one
> million appearances of each face of the die."
> He then proceeds to throw around some very large and very small numbers
> to try to make this crap significant. Nowhere does he make the point
> that this result is the most likely result. I'm not clear on whether he
> just doesn't understand combinatorial mathematics or is being
> intentionally deceptive.

Believe it or not, the bit you quote isn't even the stupid
part. While it's poorly written, it does make a certain amount of
sense: in real life, if you tossed a coin 100 times and it came up
heads each time, you would strongly suspect that the coin was biased,
even though that sequence is precisely as likely (given a fair coin)
as every other 100-toss sequence.
What makes us suspicious of a 100-heads sequence, he says, is
that it follows a simple pattern. He goes on at length about Chaitin-
Kolmogorov-Solomonoff information theory, and measuring the complexity
of a string by the length of the shortest computer program that can
generate that string, and so forth. But the basic idea seems sound:
that if there's a pattern, it's probably not random.
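
Kolmogorov complexity itself is uncomputable, but compressed length is
a common rough stand-in for "has a short description". A sketch of the
idea (Python standard library; the compression trick is a standard
illustration, not something from Dembski's paper):

    import os
    import zlib

    patterned = b"H" * 100        # "100 heads in a row": a simple pattern
    random_seq = os.urandom(100)  # 100 bytes with no pattern at all
    print(len(zlib.compress(patterned)))   # ~12 bytes: short description exists
    print(len(zlib.compress(random_seq)))  # >100 bytes: incompressible

The patterned string is the kind of outcome that arouses suspicion; the
random one is not, even though, as exact strings, both are equally
improbable.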
If he wants to argue that bacterial flagella, eyes, cells,
etc. don't just spontaneously self-assemble, but call for some other
explanation, I'll be the first to agree.

As has been pointed out in t.o before, given 100 data points,
any fool can write an equation with 100 terms that goes through each
point (the epicycle problem, for instance). A lot of what scientists
do consists in trying to come up with the _simplest_ equation that
goes through all 100 data points.
And yes, we could go on and on about human psychology and
probabilities and pattern-recognition and information theory and "what
constitutes a hit?", but at the end of the day, not all data sets look
equally random. If you plot the data and it forms a straight line or a
circle, you'll think that this isn't chance, that there's an
underlying reason for this distribution.

Where Dembski starts losing it is in the first paragraph on
p.18:

For a less artificial example of specificational resources in
action, imagine a dictionary of 100,000 (= 10^5) basic
concepts. There are then 10^5 1-level concepts, 10^10 2-level
concepts, 10^15 level concepts, and so on. If "bidirectional,"
"rotary," "motor-driven," and "propeller" are basic concepts,
then the molecular machine known as the bacterial flagellum
can be characterized as a 4-level concept of the form
"bidirectional rotary motor-driven propeller." Now, there are
approximately N = 10^20 concepts of level 4 or less, which
therefore constitute the specificational resources relevant to
characterizing the bacterial flagellum. Next, define p =
P(T|H) as the probability for the chance formation for the
bacterial flagellum. T, here, is conceived not as a pattern
but as the evolutionary event/pathway that brings about that
pattern (i.e., the bacterial flagellar structure). Moreover,
H, here, is the relevant chance hypothesis that takes into
account Darwinian and other material mechanisms. We may
therefore think of the specificational resources as allowing
as many as N = 10^20 possible targets for the chance formation
of the bacterial flagellum, where the probability of hitting
each target is not more than p. Factoring in these N
specificational resources then amounts to checking whether the
probability of hitting any of these targets by chance is
small, which in turn amounts to showing that the product Np is
small.
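
The bookkeeping in that paragraph is just this arithmetic (a sketch;
the value of p below is purely illustrative, since Dembski never
actually computes P(T|H)):

    D = 10 ** 5                  # basic concepts in the dictionary
    N = sum(D ** level for level in range(1, 5))
    print(N)                     # ~10^20 concepts of level 4 or less
    p = 1e-150                   # stand-in for P(T|H); not a real estimate
    print(N * p)                 # Dembski: chance is rejected if this is small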

Up until now, H (as in "P(T|H)", above) has been used for the chance
hypothesis, i.e., the idea that the observed sequence is due to chance
(coin flips, die tosses, etc).
And here, he says that evolution is just chance. In other
words, tornado in a junkyard.

Then follows some stuff that isn't completely stupid if H
really means "random chance". Then on p. 25 he's back to flagella, and
says that "H [...] is an evolutionary chance hypothesis that takes
into account Darwinian and other material mechanisms". Now that
there's a hint that maybe he understands that evolution != chance, he
engages in hand-waving:

Is P(T|H) in fact less than 1/4*10^-140, thus making T a
specification? The precise calculation of P(T|H) has yet to be
done. But some methods for decomposing this probability into a
product of more manageable probabilities as well as some
initial estimates for these probabilities are now in
place.[33] These preliminary indicators point to T's specified
complexity being greater than 1 and to T in fact constituting
a specification.

[Endnote 33]: See my article "Irreducible Complexity
Revisited" at www.designinference.com (last accessed June 17,
2005). See also section 5.10 of my book No Free Lunch.

In other words, the probability of the E. coli flagellum having
evolved is vanishingly small. No, really. Trust me on this.

I won't deny that this paper offers some insights (to a layman
such as myself, at least) on how to tell random sequences from
nonrandom ones, but this was never the issue in biology. No one ever
claimed that interesting features of living organisms were due to mere
chance. Isn't selection the very opposite of chance?

"You see?! You see?! Your stupid minds! Stupid! Stupid!"
-- Eros, "Plan 9 From Outer Space"

After that, the paper accelerates downhill.

p.27: "Objections only get raised against inferring design on
the basis of such small probability, chance elimination
arguments when the designers implicated by them are
unacceptable to a materialistic worldview, as happens at the
origin of life, whose designer could not be an intelligence
that evolved through purely materialistic processes."

Even if you adopt a broad definition of "design", broad enough to make
"chance vs. design" a true dichotomy, that still doesn't allow you to
claim that design implies intelligence. Yet here (pp.27-28), Dembski
makes this leap casually, without a word of explanation as to why.

Near the bottom of p.28, he says:

"Archaeologists sometimes unearth tools of unknown function,
but still reasonably draw the inference that these things are,
in fact, _tools_."[42] Sober's remark suggests that design
inferences may look strictly to features of designed objects
and thus presuppose no knowledge about the characteristics of
the designer.

Except, of course, that archaeologists study _humans_, which allows
them to make all sorts of assumptions and suppositions about what the
tool-makers (if any) were capable of doing and what they may have had
in mind.
This reminds me of some of Zoe's assertions about "hallmarks
of design". Except, of course, that Zoe has enough character to air
her views in a public forum where she knows that people will point out
every error.

I'm trying to come up with a charitable explanation for
Dembski writing forty pages of bafflegab that boil down to "The mean
atheist establishment doesn't like my tornado in a junkyard!", but I'm
drawing a blank.

--
Andrew Arensburger, Systems guy University of Maryland
arensb.no-...@umd.edu Office of Information Technology
Keeping freedom safe from democracy.

r norman

Jul 2, 2005, 10:44:33 AM

You are quite correct. Dembski obfuscates the issues by masking
them in mathematical mumbo-jumbo, thus giving a "scientific" basis for
his nonsense.

My problem (expressed at some great length earlier in this thread) is
with people who seem to seize on the mathematical facts he actually
states correctly. The problem is how he interprets and applies his
mathematics to situations where it is totally inapplicable. Many
of his mathematical statements, though, are quite true in isolation.
An event with microscopically small probability simply doesn't happen.
But you can't calculate the probability of evolution producing
products (like the bacterial flagellum) using the methods he promotes.
The probability is meaningless, hence his conclusions are meaningless.


Ken Shaw

Jul 2, 2005, 11:19:05 AM

Andrew Arensburger wrote:

If he had made an argument about a sequence, and not just the total
number of times each face of the die came up, I wouldn't have argued
with his point.

To make this clear: when flipping a coin 100 times, every sequence is
equally likely, and getting a predicted outcome is suspicious. However,
when examining the total results of the flips, some outcomes are more
likely than others. 50 heads and 50 tails is the most likely total, and
all heads or all tails are the least likely. Getting the most likely
result should not be surprising unless you predicted that outcome
beforehand.
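
A few lines of Python make the contrast explicit (math.comb needs
Python 3.8 or later; the numbers are easy to check by hand):

# Totals vs. sequences for 100 fair coin flips. Any *specific*
# sequence has probability 2**-100; the total "exactly 50 heads"
# lumps together C(100, 50) such sequences.
from math import comb

n = 100
p_seq = 0.5 ** n   # any one particular sequence
print(f"P(one sequence)      = {p_seq:.2e}")                 # ~7.89e-31
print(f"P(exactly 50 heads)  = {comb(n, 50) * p_seq:.4f}")   # ~0.0796
print(f"P(exactly 100 heads) = {comb(n, 100) * p_seq:.2e}")  # ~7.89e-31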

In his die example, Dembski isn't discussing sequences, he is
discussing totals, and he is blatantly incorrect.

Ken

r norman

Jul 2, 2005, 12:50:42 PM
On Sat, 02 Jul 2005 15:19:05 GMT, Ken Shaw <non...@your.biz> wrote:

>If he had made an argument about a sequence, and not just the total
>number of times each face of the die came up, I wouldn't have argued
>with his point.
>
>To make this clear: when flipping a coin 100 times, every sequence is
>equally likely, and getting a predicted outcome is suspicious. However,
>when examining the total results of the flips, some outcomes are more
>likely than others. 50 heads and 50 tails is the most likely total, and
>all heads or all tails are the least likely. Getting the most likely
>result should not be surprising unless you predicted that outcome
>beforehand.
>
>In his die example, Dembski isn't discussing sequences, he is
>discussing totals, and he is blatantly incorrect.
>

Just because an event is the _most_ likely does not make it likely at
all. Tossing a coin 100 times and getting exactly 50 heads and 50
tails is itself an unlikely event (probability about 0.08), and
getting a very unlikely event is surprising when it occurs. As you
say, surprise depends on how a prediction is made. If you look at the
result and then predict it, there is no surprise. If you predict in
advance and then get that exact result, that is a surprise no matter
whether the prediction is 50:50 or 49:51 or 72:28. However, human
nature being what it is, there are "predefined" special cases we have
in our heads. Getting exactly 50:50 is such a case. If it happens, we
take notice. Getting exactly 53:47 doesn't have the same impact on our
sensibilities and so doesn't attract any attention. In particular, to
use Dembski's example, getting exactly 1 million 1's on 6 million die
rolls (not to mention exactly 1 million 2's, 1 million 3's, etc.) is
exactly the type of thing that we have already predefined in our minds
as a very special situation. I predict in advance that the
probability of getting one of the "predefined" special cases is very,
very small. Then, if one of those cases does occur, it is a big
surprise and cause for casting doubt on the validity of the process.
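
For what it's worth, that probability is straightforward to compute in
log space (the factorials involved are far too large to evaluate
directly); a short Python sketch puts it near 2.5*10^-17:

# P(exactly 1,000,000 of each face in 6,000,000 fair-die rolls)
#   = 6000000! / (1000000!)^6 / 6^6000000, computed via log-gamma.
from math import lgamma, log

n, faces = 6_000_000, 6
k = n // faces   # 1,000,000 of each face
log_p = lgamma(n + 1) - faces * lgamma(k + 1) - n * log(faces)
print(f"log10 P = {log_p / log(10):.1f}")   # about -16.6, so P ~ 2.5e-17

Small enough that one does not expect ever to see it in honest
practice.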

The real problem is a failure to define carefully just what you are
doing when using probabilities. To do so correctly means defining exactly
_in advance_ just what set of outcomes you are calculating a
probability for and defining exactly _in advance_ just what type of
random process is generating the outcomes. If you define everything
correctly, there is no confusion in applying and interpreting
probabilities. And if you do that, Dembski's calculations fall apart
totally. If you don't do that, the probabilities you bandy about
don't mean anything at all. Either way, you are completely correct;
Dembski is blatantly incorrect.


Andrew Arensburger

Jul 2, 2005, 2:16:27 PM
Ken Shaw <non...@your.biz> wrote:

> Andrew Arensburger wrote:
>> What makes us suspicious of a 100-heads sequence, he says, is
>> that it follows a simple pattern. He goes on at length about Chaitin-
>> Kolmogorov-Solomonoff information theory, and measuring the complexity
>> of a string by the length of the shortest computer program that can
>> generate that string, and so forth. But the basic idea seems sound:
>> that if there's a pattern, it's probably not random.

> If he had made an argument about a sequence, and not just the total
> number of times each face of the die came up, I wouldn't have argued
> with his point.

[...]


> In his die example, Dembski isn't discussing sequences, he is
> discussing totals, and he is blatantly incorrect.

Actually, on pp.14-15, he does discuss sequences.
Specifically, he introduces a sequence of coin flips that looks
random, but is actually produced by (sorta) counting in base 2:
0
1
00
01
10
11
000
001
010
011
100
101
...

and yes, I agree that if a (sufficiently-long) series of coin-flips
produced this sequence, I would think that there was something fishy
going on.
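
For the curious: the generator really is tiny -- a few lines of Python
reproduce the whole sequence, which is the point of the Chaitin-
Kolmogorov-Solomonoff measure (a short program means low complexity):

# Regenerate the "pseudo-random" flips by concatenating the
# binary strings 0, 1, 00, 01, 10, 11, 000, 001, ...
from itertools import count, islice

def counting_bits():
    for width in count(1):
        for value in range(2 ** width):
            yield from format(value, f"0{width}b")

print("".join(islice(counting_bits(), 34)))
# -> 0100011011000001010011100101110111

Locally it looks like noise, but those few lines regenerate it
exactly -- which is precisely what would make a matching run of real
coin flips fishy.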

As for die-throw totals, I agree. You can apply Chaitin-
Kolmogorov-Solomonoff complexity to the result -- for instance, I
just tossed a virtual die 6,000,000 times and got
1: 1001234
2: 998797
3: 999660
4: 999996
5: 1000841
6: 999472
which is obviously less compressible than
1: 1000000
2: 1000000
3: 1000000
4: 1000000
5: 1000000
6: 1000000

--but since the strings in question are so short, I'd be suspicious of
the results. It seems true that highly-compressible strings exhibit
patterns, and we humans are good at noticing patterns. But the strings
are so short, and so many of them would constitute a "hit", that I
would suspect pareidolia before I suspected an underlying order.
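
To put a crude number on "less compressible": Kolmogorov complexity
itself is uncomputable, so take a general-purpose compressor as a
rough stand-in, bearing in mind that at these lengths the comparison
is suggestive at best:

# Compare zlib's best effort on the two tallies above. Only a
# proxy for Kolmogorov complexity, and a noisy one on strings
# this short, but the direction of the difference holds up.
import zlib

simulated = b"1001234 998797 999660 999996 1000841 999472"
uniform   = b"1000000 1000000 1000000 1000000 1000000 1000000"

for label, s in (("simulated", simulated), ("all 1000000s", uniform)):
    print(f"{label}: {len(s)} -> {len(zlib.compress(s, 9))} bytes")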

--
Andrew Arensburger, Systems guy University of Maryland
arensb.no-...@umd.edu Office of Information Technology

Show me a completely smooth operation and I'll show you
someone who's covering mistakes. Real boats rock.

catshark

Jul 2, 2005, 3:48:43 PM
On Sat, 2 Jul 2005 05:35:27 +0000 (UTC), Andrew Arensburger
<arensb.no-...@umd.edu> wrote:

>Ken Shaw <non...@your.biz> wrote:
>> John Harshman wrote:
>>> eNo wrote:
>>>>New article (take 341?) at:
>>>>http://www.designinference.com/documents/2005.06.Specification.pdf.
>
>>> Does it get better after page 4?
>
> Call me crazy, but I actually slogged through this.

I'd call it worthy of a POTM. Any seconds?

[...]

--
---------------
J. Pieret
---------------

In the name of the bee
And of the butterfly
And of the breeze, amen

- Emily Dickinson -

Ken Shaw

Jul 2, 2005, 3:56:58 PM

I was specifically talking about his example on pages 7 and 8. That he
elsewhere discusses sequences, perhaps correctly, seems to be evidence
that his use of incorrect but technical-sounding material is a
deliberate attempt to make the unsophisticated believe his claims.

If it weren't for my unwillingness to plagiarize someone else's work, I
would submit the article to some peer-reviewed math and information
science journals and make the reviewers' comments public.

Ken

Mark Isaak

Jul 2, 2005, 4:18:22 PM
On Sat, 2 Jul 2005 05:35:27 +0000 (UTC), Andrew Arensburger
<arensb.no-...@umd.edu> wrote:

>>> eNo wrote:
>>>>New [Dembski] article (take 341?) at:
>>>>http://www.designinference.com/documents/2005.06.Specification.pdf.
>

He is also guilty of drawing his targets after the arrow has been
shot. There is no a priori reason to include "bidirectional",
"rotary", and "motor-driven" among the concepts. Selection for
propulsion is interested only in the concept of propulsion; any
mechanism that propels will be selected for. And we know several
others have been.

My first thought upon reading his N = 10^20 number was, I wonder how
many combinations of four concepts can be interpreted to refer to
something. Just by poking my finger into a dictionary at arbitrary
places, I came up with "benevolence," "effective," "literature,"
"preface." Now, I cannot name any effective prefaces to benevolence
literature offhand, but I would be surprised if none existed. I would
guess that most of Dembski's 4-level concepts will hit a target
somewhere.

--
Mark Isaak eciton (at) earthlink (dot) net
"Voice or no voice, the people can always be brought to the bidding of
the leaders. That is easy. All you have to do is tell them they are
being attacked, and denounce the pacifists for lack of patriotism and
exposing the country to danger." -- Hermann Goering

John Wilkins

Jul 2, 2005, 5:12:38 PM
catshark wrote:
> On Sat, 2 Jul 2005 05:35:27 +0000 (UTC), Andrew Arensburger
> <arensb.no-...@umd.edu> wrote:
>
>
>>Ken Shaw <non...@your.biz> wrote:
>>
>>>John Harshman wrote:
>>>
>>>>eNo wrote:
>>>>
>>>>>New article (take 341?) at:
>>>>>http://www.designinference.com/documents/2005.06.Specification.pdf.
>>
>>>>Does it get better after page 4?
>>
>> Call me crazy, but I actually slogged through this.
>
>
> I'd call it worthy of a POTM. Any seconds?
>
> [...]
>
Second

--
John S. Wilkins, Postdoctoral Research Fellow, Biohumanities Project
University of Queensland - Blog: evolvethought.blogspot.com
"Darwin's theory has no more to do with philosophy than any other
hypothesis in natural science." Tractatus 4.1122

Andrew Arensburger

Jul 3, 2005, 1:39:56 AM
Ken Shaw <non...@your.biz> wrote:
> I was specifically talking about his example on pages 7 and 8. That he
> elsewhere actually, perhaps correctly, discusses sequences seems to be
> evidence of his attempts to use incorrect but technical material to make
> the unsophisticated believe his claims.

Ah, that one. Yeah, the math on that seemed fishy to me. At
one point, he seemed to be saying, "Let T be some set of events. Every
event in T is highly improbable. Therefore, the probability of getting
an event in T is very low." From which it follows that every time a
poker hand is dealt, it's a miracle.
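
The poker version, to drive it home:

# Every specific 5-card hand is a ~1-in-2.6-million event, yet
# *some* hand is dealt every time: the union of all those
# "miracles" has probability exactly 1.
from fractions import Fraction
from math import comb

n_hands = comb(52, 5)           # 2,598,960 distinct hands
p_each = Fraction(1, n_hands)   # ~3.85e-07 for any particular hand
print(f"each particular hand: {float(p_each):.2e}")
print(f"all hands together:   {n_hands * p_each}")   # prints 1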

--
Andrew Arensburger, Systems guy University of Maryland
arensb.no-...@umd.edu Office of Information Technology

I have noticed that the people who are late are often so much
jollier than the people who have to wait for them.

dene_...@yahoo.co.uk

Jul 3, 2005, 6:35:34 AM

I couldn't find mention of Dembski's concept of "fabrication" in this
paper. I wonder if it's still meant to apply.
