Dembski's Intelligent Design Hypothesis


Ivar Ylvisaker

Sep 11, 1999
Some recent posts led me to investigate some of Dembski's thinking
about intelligent design more carefully.

It helps to be concrete. Assume that the possible design event in
question is abiogenesis, the beginning of life on this planet. I know
of no event more promising than this for proving design.

Abiogenesis could be a natural event or a design event. Natural is my
term. Dembski divides my natural events into regular events and chance
events [TDI page 36]. Regular events -- "regularities" -- are those
explained by natural laws. Chance events are events like the outcomes
of honest lotteries. But there is a problem with Dembski's division.
Real events in the real world are always affected by many factors, some
of which are lawful and some of which are random. Abiogenesis, if it is
a natural event, depends on lawful events like the ability of atoms to
bind to one another and on chance events like weather and ocean
currents. Hence, natural events is a more realistic category.

If abiogenesis was a natural event, then we do NOT know how it
happened. Some people have ideas but no one, to my knowledge, has
advanced a theory that he or she thinks should be compelling to others.
I assume that abiogenesis, if natural, would be a stochastic process;
atoms and groups of atoms would build up over time until living cells
were present. The process could have started more than once and there
may be more than one way in which the process can succeed. The combination
of such processes is a chance process in that we could never predict
precisely when life would first establish itself or even if it would
ever establish itself. (But we know we are alive.)

Dembski's alternative to abiogenesis as a natural event is abiogenesis
as a design event. Dembski's writings are ambiguous about whether
intelligent design is a scientific theory (or hypothesis). The Design
Inference says no: "Indeed, confirming hypotheses is precisely what the
design inference does not do. The design inference is in the business of
eliminating hypotheses, not confirming them" [TDI page 68]. In "The
Design Inference," design events are what's left after regular and
chance events have been eliminated. But other writings imply yes: "But
in fact intelligent design can be formulated as a scientific theory
having empirical consequences and devoid of religious commitments"
[IDasInfo]. Regardless, Dembski argues that you can identify design by
recognizing choice. "The principal characteristic of intelligent agency
is choice. Whenever an intelligent agent acts, it chooses from a range
of competing possibilities" [S&D]. In some articles, Dembski calls this
"Actualization-Exclusion-Specification" [IDasInfo], in others
"complexity-specification" [S&D].

But the idea of abiogenesis as a design choice has problems. What are
the "range of competing possibilities" that the designer excluded? This
is an awkward question if the design agent is a man. It is even more
awkward if the design agent is a god. Dembski's answer is curious. He
does not clarify his design hypothesis; he points instead to the
competing natural hypothesis. Of course, this is awkward too because
there is no well-defined natural hypothesis for abiogenesis. So Dembski
invents one. Dembski feels that he has to show that the "choosing"
cannot be an accidental outcome, that an intelligent agent must have
acted. Therefore, the actual outcome must be improbable if the process
is a natural one: "The role of complexity in detecting design is now
clear since improbability is precisely what we mean by complexity"
[IDasInfo]. Dembski is vague about his version of the natural
hypothesis but he seems to be thinking of something like assembling the
first DNA by randomly stringing different bases together until a
functioning molecule appears. That would take a very long time; hence,
Dembski asserts that there must be an intelligent agent that knows a
shortcut.
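To put a number on that caricature: if the first functional molecule really had to be hit in a single uniform random draw of bases, the odds collapse immediately. A minimal sketch (the 100-base length is an arbitrary illustration, not a claim about any real minimal genome):

```python
# Odds of matching one prescribed n-base sequence in a single uniform
# random draw -- the "one-shot assembly" picture Dembski seems to assume.
# The sequence length used below is purely illustrative.
def chance_of_specific_sequence(n_bases, alphabet_size=4):
    """Probability that one random string of n_bases letters, drawn
    uniformly from an alphabet of alphabet_size, exactly matches a
    single prescribed target sequence."""
    return alphabet_size ** -n_bases

print(chance_of_specific_sequence(100))  # ~6.2e-61
```

Of course, the criticism made elsewhere in this thread is precisely that no serious origin-of-life model proposes such a one-shot draw.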

Specification also has a problem. A choice implies that the design
agent intended to choose one possibility and to reject the others.
Dembski implies that "specification" is his way of identifying the
selected possibility. Dembski uses the example of an archer who paints
a bull's-eye on a wall. The archer then backs off and shoots his arrows
into the bull's-eye, which demonstrates his intention (and ability) to
do so. But Dembski does not define specification in a way that assures
that specification reflects intention. In "The Design Inference," a
specification is defined as a pattern and a pattern is defined as "any
description that corresponds uniquely to some prescribed event" [TDI
page 36]. In this discussion, the event is abiogenesis. According to
Dembski: "Specification in biology always makes reference in some way
to an organism's function" [S&D]. Dembski quotes Richard Dawkins for an
example: "In the case of living things, the quality that is specified
in advance is . . . the ability to propagate genes in reproduction"
[S&D]. But even if living things can be specified, how does that tell
us that the creation of living things was intended? Put another way,
how can we be sure that a choice was made and, therefore, that an
intelligent chooser existed? Maybe God inadvertently left some garbage
when she visited Earth and we are the unintended consequence. As an
aside, Dembski again links an element of his design hypothesis to an
element of the competing natural hypothesis. He says that the
information that the specification is based on must not influence the
hypothesis that abiogenesis is a natural event.

A last comment. If a detailed, specific abiogenesis hypothesis is
proposed, how do we know whether to assign it to the natural category or
to the design category? Suppose something akin to natural selection is
proposed. Does that count as natural or design? If the implementing
being cannot talk, does that count as intelligent? Suppose the being is
a machine, a robot. What then?


Ivar


References:

[TDI] The Design Inference. Cambridge University Press 1998

[S&D] "Science and Design" First Things October 1998 (
http://www.firstthings.com/ftissues/ft9810/dembski.html )

[IDasInfo] "Intelligent Design as a Theory of Information" (
http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html )


Tim Tyler

Sep 24, 1999
Ivar Ylvisaker <ylvi...@erols.com> wrote:

: Some recent posts led me to investigate some of Dembski's thinking
: about intelligent design more carefully.

: It helps to be concrete. Assume that the possible design event in
: question is abiogenesis, the beginning of life on this planet. I know
: of no event more promising than this for proving design.

It seems to me few events are less promising. Computers, toasters,
washing machines, and venetian blinds are all far more likely to be
designed than the abiogenesis event, IMO.

: [S&D] "Science and Design" First Things October 1998 (
: http://www.firstthings.com/ftissues/ft9810/dembski.html )

: [IDasInfo] "Intelligent Design as a Theory of Information" (
: http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html )

Dembski's argument is essentially that of Michael Behe: that there's such
a thing as "irreducible complexity" that signals design events.

The idea that this notion has any relevance to biology is utter twaddle.
Myself and others have mashed it on a number of occasions, now.

To quote from DBB:

``An irreducibly complex system cannot be produced . . . by slight,
successive modifications of a precursor system, because any precursor
to an irreducibly complex system that is missing a part is by
definition nonfunctional. . . . Since natural selection can only
choose systems that are already working, then if a biological
system cannot be produced gradually it would have to arise as an
integrated unit, in one fell swoop, for natural selection to have
anything to act on.''

...but there are no such systems in biology.

Systems which have all their components in a complex interdependence
with one another don't /necessarily/ indicate design - they just indicate
the existence of a supporting structure that is no longer visible.

Consider a stone arch. Remove any stone in an arch and it collapses.
Every stone is a keystone.

Does this mean an arch is "irreducibly complex"? Of course not!

To build an arch, simply heap up a pile of stones, build the arch on top
of that and then remove the pile of stones.

This argument is due to A. G. Cairns-Smith.

Observing a system with complex interdependencies - where removing
individual components causes things to break is *not* sufficient
to conclude that an object has been designed. You have to show
that no conceivable chain of events could possibly have led to
the structure by small modifications.

*Even* then you can't conclude that intelligent design was
necessarily involved:

When the event concerned is abiogenesis, /even/ if the event
was fantastically improbable (which it was not), the (weak) anthropic
principle can be employed to conquer *any* odds for this particular
event. Intelligent design need not be invoked.

Behe's and Dembski's argument about "irreducible complexity" in
biology utterly collapses over such points.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com

Love is chemistry, sex is physics.


Jeff Patterson

Sep 24, 1999
Tim Tyler wrote:

> Ivar Ylvisaker <ylvi...@erols.com> wrote:
>
> : Some recent posts led me to investigate some of Dembski's thinking
> : about intelligent design more carefully.
>
> : It helps to be concrete. Assume that the possible design event in
> : question is abiogenesis, the beginning of life on this planet. I know
> : of no event more promising than this for proving design.
>

> It seems to me few events are less promising. Computers, toasters,
> wahshing machines, and venetian blinds are all far more likely to be
> designed than the abiogenesis event, IMO.
>

> : [S&D] "Science and Design" First Things October 1998 (

> Dembski's argument is essentially that of Michael Behe: that there's such
> a thing as "irreducible complexity" that signals design events.

Tim, given your generally erudite insights, I am surprised at the above
remark. You obviously have either not read or not understood Dembski.
Dembski's argument from complex specified information (CSI) in the above
reference has nothing at all to do with irreducible complexity except that
these phrases both contain a word with a common root. CSI is pattern-specific
information associated with low-probability events (actualizations, to use
Dembski's term). The central question is whether such information can be
generated by natural causes. Dembski's claim is that it cannot, since by
natural cause we mean arising either from a natural law or from a random
stochastic process. It is well accepted that information (much less CSI)
cannot be generated by an event whose outcome is known a priori, which
precludes those arising from natural law (e.g., no information is contained in
the observation that an apple, when dropped, hit the ground). The question thus
boils down to: can CSI be generated by a random stochastic process? But, so
the argument goes, CSI by its very nature fits a specified pattern which is
complex enough to preclude occurrence by random chance (such a filter has been
developed for use in the SETI project to determine whether signals received
from space are from an intelligent source), and thus cannot have arisen
randomly.

While the argument appears logically sound, its proof depends on the criteria
for distinguishing CSI from information in general. If a logically acceptable
bound on pattern complexity can be formulated, and CSI which meets this
criterion can be shown to exist, Dembski's conclusions necessarily follow,
with a confidence interval determined by the probability that the pattern was
indeed generated by chance.
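The elimination step described above can be reduced to a very small decision rule. A minimal sketch, using Dembski's universal probability bound of 10^-150 from TDI; note that the genuinely contested part -- certifying that a specification is independent -- is reduced here to a boolean input, which is exactly where critics press:

```python
# Dembski-style chance elimination, stripped to its skeleton.
# UNIVERSAL_BOUND is the 10^-150 figure from The Design Inference.
# How `independently_specified` gets certified is the contested part
# of the whole scheme, and is simply assumed as an input here.
UNIVERSAL_BOUND = 1e-150

def infers_design(p_under_chance, independently_specified):
    """Eliminate chance (and hence, on Dembski's scheme, infer design)
    only when the event matches an independent specification AND its
    probability under the chance hypothesis falls below the bound."""
    return independently_specified and p_under_chance < UNIVERSAL_BOUND
```

The sketch makes the structure of the objection visible: everything hangs on how `p_under_chance` and the specification flag are obtained, neither of which the filter itself supplies.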


> The idea that this notion has any relevance to biology is utter twaddle.
> Myself and others have mashed it on a number of occasions, now.
>
> To quote from DBB:
>
> ``An irreducibly complex system cannot be produced . . . by slight,
> successive modifications of a precursor system, because any precursor
> to an irreducibly complex system that is missing a part is by
> definition nonfunctional. . . . Since natural selection can only
> choose systems that are already working, then if a biological
> system cannot be produced gradually it would have to arise as an
> integrated unit, in one fell swoop, for natural selection to have
> anything to act on.''
>
> ...but there are no such systems in biology.
>
> Systems which have all their components in a complex interdependence
> with one another don't /necessarily/ indicate design - they just indicate
> the existence of a supporting structure that is no longer visible.
>
> Consider a stone arch. Remove any stone in an arch and it collapses.
> Every stone is a keystone.
>
> Does this mean an arch is "irreducibly complex"? Of course not!
>
> To build an arch, simply heap up a pile of stones, build the arch on top
> of that and then remove the pile of stones.

Of course the more arches you build which share a common capstone, the more
difficult it becomes to remove the rocks without the whole thing collapsing.
The arch argument ignores the inter-relatedness of functionality at the
cellular level.
[rest deleted]

Jeff


maff91

Sep 24, 1999
On 24 Sep 1999 20:50:12 -0400, Jeff Patterson <jp...@mpmd.com> wrote:

>Tim Tyler wrote:
>
>> Ivar Ylvisaker <ylvi...@erols.com> wrote:
>>

>> : Some recent posts led me to investigate some of Dembski's thinking
>> : about intelligent design more carefully.
>>
>> : It helps to be concrete. Assume that the possible design event in
>> : question is abiogenesis, the beginning of life on this planet. I know
>> : of no event more promising than this for proving design.
>>

>> It seems to me few events are less promising. Computers, toasters,
>> wahshing machines, and venetian blinds are all far more likely to be
>> designed than the abiogenesis event, IMO.
>>

>> : [S&D] "Science and Design" First Things October 1998 (

Try http://x33.deja.com/getdoc.xp?AN=519544184

[...]
--
L.P.#0000000001


Ivar Ylvisaker

Sep 25, 1999
On 24 Sep 1999 12:59:58 -0400, Tim Tyler <t...@cryogen.com> wrote:

>Ivar Ylvisaker <ylvi...@erols.com> wrote:
>
>: Some recent posts led me to investigate some of Dembski's thinking
>: about intelligent design more carefully.
>
>: It helps to be concrete. Assume that the possible design event in
>: question is abiogenesis, the beginning of life on this planet. I know
>: of no event more promising than this for proving design.
>

>It seems to me few events are less promising. Computers, toasters,
>wahshing machines, and venetian blinds are all far more likely to be
>designed than the abiogenesis event, IMO.

I don't think that Dembski is very interested in toasters as "possible
design events." I earlier wrote a post in which I asked whether there
were any events other than abiogenesis that would be interesting to
Dembski. In that post, I excluded events due to humans, animals, and
aliens from outer space. It would have been clearer if I had done the
same here.

[snip]

>When the event concerned is abiogenesis, /even/ if the event
>was fantastically improbable (which it was not), the (weak) anthropic
>principle can be employed to conquer *any* odds for this particular
>event. Intelligent design need not be invoked.


Richard Swinburne has an argument against this. Dembski, I think,
refers to it some place.

Suppose you are a prisoner. Your captor shows you a machine. The
machine will randomly generate a number between one and the number of
atoms in the universe. You have to guess the number. If you guess
correctly, your life will be spared. If you guess wrong, the machine
will kill you. You guess and live.

Swinburne says that this outcome is not credible.
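For scale, taking the conventional order-of-magnitude estimate of about 10^80 atoms in the observable universe (my figure, not one from the thread), the machine's odds look like this:

```python
import math

# One uniform guess among ~10^80 possibilities: Swinburne's machine.
# The atom count is the standard order-of-magnitude estimate, used
# here only for scale.
N_ATOMS = 10 ** 80
p_survive = 1 / N_ATOMS                   # ~1e-80
bits_of_surprise = -math.log2(p_survive)  # ~266 bits
```

The force of the example is that there is a single trial; the anthropic reply elsewhere in the thread turns on whether one trial is really the right model.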

Ivar


Ivar Ylvisaker

Sep 26, 1999
On 24 Sep 1999 21:27:31 -0400, Marty Fouts
<mathem...@usenet.nospam.fogey.com> wrote:

While I do not find Dembski's reasoning persuasive, he is a little
more logical than your post suggests.

>Therein lies the first problem with Dembski's reasoning. Low
>probability of occurrence does not equal complexity, nor does it need
>arise from design.

Dembski does equate improbability and complexity. His reasoning seems
to be that improbable events when they occur convey lots of
information, i.e., you have to use lots of bits to distinguish the
event that did occur from all the other events that might have
occurred. And an information measure is a kind of complexity.
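That equation of improbability with information is just the standard surprisal measure; a small sketch of the quantity being paraphrased (the measure itself is textbook information theory; reading Dembski this way is my interpretation):

```python
import math

# Surprisal: the information, in bits, conveyed by observing an event
# of probability p. Rarer events carry more bits -- this is the sense
# in which Dembski equates improbability with "complexity".
def surprisal_bits(p):
    return -math.log2(p)

surprisal_bits(0.5)     # 1.0 bit: a fair coin flip
surprisal_bits(1 / 52)  # ~5.7 bits: one card off the top of a shuffled deck
```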

>Using Dembski's definition of information, for example, any quantum
>state transition in a crystal qualifies as a low probability occurrence
>that contains information, and is clearly not a design artifact or
>evidence of complexity.
>
>One such source of causes is quantum events. Thus collapses Dembski's
>entire house of cards.

Dembski does recognize that complex events can occur that are not
designed. Your quantum events are one example. Shuffling a deck of
cards is another. Dembski's solution is to require "side information"
that somehow independently identifies -- i.e., specifies -- the
specific outcome. In the case of abiogenesis, the ability of all
living things to reproduce themselves is possible side information.

>But much information is gained from noting that this apple, not that
>one, hit the ground. Dembski and his followers routinely confuse
>abstraction with 'actualization', in their quest for 'information.

Events (actualizations) do generate information according to Dembski.
But you have to distinguish ordinary information from complex
specified information (CSI).

>The question [can CSI be generated by a random stochastic process]
>is poorly formed. The underlying question is "what is
>complexity"? The best Dembski can do is show that complexity of
>information is subjective, rather than objective, as when he uses the
>example of a speaker of Mandarin. (Apologies to Ruth Rendel fans.)

In "The Design Inference," Dembski has a chapter on complexity. He
briefly discusses various kinds of measures of complexity, including an
information measure. So far as I know, he does not actually calculate any
measure that matters. But the question of how one could
generate CSI -- or, say, abiogenesis -- using a random stochastic
process -- or, better, using any natural process -- is a reasonable
one. Of course, I cannot imagine how one would ever show that it is
impossible.

Why Ruth Rendel fans?

Ivar


Tim Tyler

Sep 26, 1999
Ivar Ylvisaker <ylvi...@erols.com> wrote:
: On 24 Sep 1999 12:59:58 -0400, Tim Tyler <t...@cryogen.com> wrote:

:>When the event concerned is abiogenesis, /even/ if the event
:>was fantastically improbable (which it was not), the (weak) anthropic
:>principle can be employed to conquer *any* odds for this particular
:>event. Intelligent design need not be invoked.

: Richard Swinburne has an argument against this. Dembski, I think,
: refers to it some place.

: Suppose you are a prisoner. Your captor show you a machine. The
: machine will randomly generate a number between one and the number of
: atoms in the universe. You have to guess the number. If you guess
: correctly, your life will be spared. If you guess wrong, the machine
: will kill you. You guess and live.

: Swinburne says that this outcome is not credible.

Richard Swinburne tries to use the anthropic principle to argue for the
existence of a divine creator!

Apart from the fact that this completely discredits all his views on the
subject, the argument you present seems to be logically flawed.

The correct analogy is with an *infinite* number of prisoners, each of whose
lives depends on the improbable event.

It is not the case that one /particular/ prisoner has to live - any of
them will do. If *any* of them survive - they will look at the resulting
universe and see it arranged in such a manner that life can exist.

There /may/ be an argument that if you pick a universe at random with
life in it, then the probability of abiogenesis in that universe is high.

However, /that/ argument is another story ;-)
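The many-prisoners point can be made numeric: an event hopeless in one trial becomes near-certain given enough independent trials. A sketch with illustrative numbers (nothing here estimates any physical quantity):

```python
import math

# Probability that at least one of n independent trials succeeds,
# i.e. 1 - (1 - p)^n, computed via log1p/expm1 so it stays accurate
# for tiny p and huge n.
def p_at_least_one(p, n):
    return -math.expm1(n * math.log1p(-p))

p_at_least_one(1e-12, 1)     # ~1e-12: any one prisoner almost surely dies
p_at_least_one(1e-12, 1e12)  # ~0.63: with enough prisoners, someone survives
```

And the anthropic twist is that only the surviving prisoners are around to marvel at their luck.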


--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com

I'm not a complete idiot - several parts are missing.


Tim Tyler

Sep 26, 1999
Ivar Ylvisaker <ylvi...@erols.com> wrote:
: <mathem...@usenet.nospam.fogey.com> wrote:
:>Ivar Ylvisaker filled the aether with:

:>> But I suspect that only one event is crucial to Dembski and that is
:>> abiogenesis. The first life had those long, unlikely DNA strings.
:>
:>Sorry, but you're making an assumption that may not need to be true.
:>We don't know how abiogenesis happened, and so there is no need to
:>require that DNA be present, let alone present in long strands.

: I should have said the first life that has been observed ....

No one who observed the first organisms is around to tell the tale today.

Abiogenesis does *not* require unlikely DNA strings, or anything terribly
unlikely, certainly not any of Dembski's "CSI".

The lowest-probability abiogenesis scenario that I'm aware of is A. G.
Cairns-Smith's theory of the mineral origin of life. This has primitive
evolutionary processes happening spontaneously *today* in bodies of water
around the planet, but never "taking off" due to problems relating to
organic molecules in the environment being "already spoken for".

Estimating the probability of such scenarios is a hairy business, though -
and the "Kauffman" crowd and regular "protocell" folks all seem to believe
that they have the probability of life's origin down to plausible levels.


--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com

Love is grand; divorce, forty grand.


Tim Tyler

Sep 26, 1999
Jeff Patterson <jp...@mpmd.com> wrote:
: Tim Tyler wrote:

:> Systems which have all their components in a complex interdependence
:> with one another don't /necessarily/ indicate design - they just indicate
:> the existence of a supporting structure that is no longer visible.
:>
:> Consider a stone arch. Remove any stone in an arch and it collapses.
:> Every stone is a keystone.
:>
:> Does this mean an arch is "irreducibly complex"? Of course not!
:>
:> To build an arch, simply heap up a pile of stones, build the arch on top
:> of that and then remove the pile of stones.

: Of course the more arches you build which share a common capstone, the more
: difficult it becomes to remove the rocks without the whole thing collapsing.
: The arch arguments ignores in inter-relatedness of functionality at the
: cellular level.

No, no! ;-)

The "arch" argument indicates that no matter /how/ complex and
interrelated things are at the cellular level, they *still* may have been
built by the use of elaborate supporting structures.

Showing that inter-dependence exists proves diddley-squat about whether
a system can be built by gradual processes.

You can build a large number of arches which share a common capstone
(moving only one stone at a time) if you first build a mound of rocks,
then build the arches, and then /carefully/ remove the "scaffolding".

Whether the arch collapses if you then remove a stone says nothing about
whether it was constructed by gradual processes. If you really want to
remove stones from the arch, moving a stone at a time and without the
arch collapsing, start by building a pile of rocks underneath it to act
as supporting scaffolding.

Similarly, arguing from the apparent interdependency of things at the
cellular level simply *ignores* the possibility that these were
constructed from simpler systems, which are now no longer visible.

See A. G. Cairns-Smith, 1982, 1985 for one idea concerning what form
these structures took.


--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com

Kilroy occupied these co-ordinates.


Ivar Ylvisaker

Sep 27, 1999
On 26 Sep 1999 21:55:51 -0400, Marty Fouts
<mathem...@usenet.nospam.fogey.com> wrote:

>Ivar Ylvisaker filled the aether with:
>

>> On 26 Sep 1999 12:32:13 -0400, Marty Fouts


>> <mathem...@usenet.nospam.fogey.com> wrote:
>
>>> Ivar Ylvisaker filled the aether with:
>>>
>>>> But I suspect that only one event is crucial to Dembski and that
>>>> is abiogenesis. The first life had those long, unlikely DNA
>>>> strings.
>>> Sorry, but you're making an assumption that may not need to be
>>> true. We don't know how abiogenesis happened, and so there is no
>>> need to require that DNA be present, let alone present in long
>>> strands.
>
>> I should have said the first life that has been observed ....
>

>is a long way down the path from abiogenesis.

There are fossil records of cells in rocks that are 3.5 billion years
old. You can't see the DNA but, if they didn't have it, you have
uncovered a new major crisis for evolution.

Ivar


Ivar Ylvisaker

Sep 27, 1999
On 26 Sep 1999 19:15:32 -0400, Tim Tyler <t...@cryogen.com> wrote:

>Ivar Ylvisaker <ylvi...@erols.com> wrote:


>: <mathem...@usenet.nospam.fogey.com> wrote:
>:>Ivar Ylvisaker filled the aether with:
>
>:>> But I suspect that only one event is crucial to Dembski and that is
>:>> abiogenesis. The first life had those long, unlikely DNA strings.
>:>
>:>Sorry, but you're making an assumption that may not need to be true.
>:>We don't know how abiogenesis happened, and so there is no need to
>:>require that DNA be present, let alone present in long strands.
>
>: I should have said the first life that has been observed ....
>

>No one who observed the first organisms is around to tell the tale today.
>
>Abiogeneisis does *not* require unlikely DNA strings, or anything terribly
>unlikely, certainly not any of Dembski's "CSI".
>
>The lowest probability abiogeneisis scenario that I'm aware of is A. G.
>Cairns-Smith's theory of the mineral origin of life. This has primitive
>evolutionary processes happeng spontaneously *today* in bodies of water
>around the planet, but never "taking off" due to problems relating to
>organic molecules in the environment being "already spoken for".
>
>Estimating probability of such scenarios is a hairy business, though - and
>the "Kauffman" crowd and regular "protocell" folks all seem to believe
>that they have the probability of life's origin down to plausible levels.

By unlikely, I meant something more like surprising or puzzling rather
than impossible.

Many people are trying to invent a plausible abiogenesis process. No
one, to my knowledge, has a theory that is very convincing to other
scientists.

Ivar


Ivar Ylvisaker

Sep 27, 1999
On 27 Sep 1999 01:47:48 -0400, Marty Fouts
<mathem...@usenet.nospam.fogey.com> wrote:


>Not relevant. Until you know *how* abiogenesis occurred, you can't argue
>about its properties. There is no evidence that requires that the
>first objects that we would recognize as alive had to have DNA, nor
>that the first with DNA had to have long strands of it.
>

I'm not talking about abiogenesis. All I'm saying is that nature had
to get to DNA somehow. Dembski is hoping that science will never find
a way (other than God).

But if you are postulating that DNA is a relatively modern phenomenon,
then a radical change to the theory of evolution will be necessary.
And to any theory of life.

Good night.

Ivar


Bigdakine

Sep 27, 1999
>Subject: Re: Dembski's Intelligent Design Hypothesis
>From: ylvi...@erols.com (Ivar Ylvisaker)
>Date: Sun, 26 September 1999 02:02 AM EDT
>Message-id: <37eda9a2...@news.erols.com>
>
>On 24 Sep 1999 21:27:31 -0400, Marty Fouts

><mathem...@usenet.nospam.fogey.com> wrote:
>
>While I do not find Dembski's reasoning persuasive, he is a little
>more logical than your post suggests.
>
>>Therein lies the first problem with Dembski's reasoning. Low
>>probability of occurrence does not equal complexity, nor does it need
>>arise from design.
>
>Dembski does equate improbability and complexity. His reasoning seems
>to be that improbable events when they occur convey lots of
>information, i.e., you have to use lots of bits to distinguish the
>event that did occur from all the other events that might have
>occurred. And an information measure is a kind of complexity.

The question is how much information was needed for the first
self-replicating molecule that led to the abiogenic process. Dembski's
argument assumes a priori that it was large; large enough to be improbable.
It's circular.

Stuart
Dr. Stuart A. Weinstein
Ewa Beach Institute of Tectonics
"To err is human, but to really foul things up
requires a creationist"


Tim Tyler

Sep 27, 1999
Ivar Ylvisaker <ylvi...@erols.com> wrote:

: On 26 Sep 1999 19:15:32 -0400, Tim Tyler <t...@cryogen.com> wrote:
:>Ivar Ylvisaker <ylvi...@erols.com> wrote:
:>: <mathem...@usenet.nospam.fogey.com> wrote:
:>:>Ivar Ylvisaker filled the aether with:

:>:>> But I suspect that only one event is crucial to Dembski and that is
:>:>> abiogenesis. The first life had those long, unlikely DNA strings.
:>:>
:>:>Sorry, but you're making an assumption that may not need to be true.
:>:>We don't know how abiogenesis happened, and so there is no need to
:>:>require that DNA be present, let alone present in long strands.
:>
:>: I should have said the first life that has been observed ....
:>
:>No one who observed the first organisms is around to tell the tale today.
:>
:>Abiogenesis does *not* require unlikely DNA strings, or anything terribly
:>unlikely, certainly not any of Dembski's "CSI".
:>
:>The lowest-probability abiogenesis scenario that I'm aware of is A. G.
:>Cairns-Smith's theory of the mineral origin of life. [...]

: By unlikely, I meant something more like surprising or puzzling rather
: than impossible.

Surprising or puzzling would be fine by me.

However, for Dembski's argument to apply to abiogenesis, the situation
*demands* that the event be not just improbable - but very, *very*
unlikely.

: Many people are trying to invent a plausible abiogenesis process. No
: one, to my knowledge, has a theory that is very convincing to other
: scientists.

...but there's more than one theory that claims life could come into
existence rather easily and gives details of the possible mechanism
involved.

The reason no one story has yet won out is not because the stories
involved are all very unlikely (the Cairns-Smith reference suggests
exactly the opposite is the case) but because the events are lost in
the mists of time and nobody can see which of the various possible
scenarios actually happened.


--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com

'i' before 'e', except in pleiotropy.


Michael

Sep 27, 1999
jpa...@my-deja.com wrote:

> Marty Fouts wrote:
>
> >Ivar Ylvisaker filled the aether with:
>

> >> Dembski does equate improbability and complexity. His reasoning
> >> seems to be that improbable events when they occur convey lots of
> >> information, i.e., you have to use lots of bits to distinguish the
> >> event that did occur from all the other events that might have
> >> occurred. And an information measure is a kind of complexity.
>

> >But Dembski, who should know better, is wrong. If the universe is
> >large and only one event can occur from it, then the the probability
> >of any event is low, with no correlation to complexity.
>
> First, you are confusing probability with probability density. You must
> INTEGRATE pd over some interval to obtain the probability of an
> observed datum being between that interval. The information contained in
> that event, and measured in a specified way, is calculated from the
> probability, not the pd. In the limit, as the interval goes to zero, it
> takes an infinite number of bits to convey the information,
> corresponding to the infinite number of significant digits it would take
> to specify an analog value precisely.
>
> Second, your example is muddled. Your premise states that one event can
> occur and then you go on to talk about the probability of _any_ event.
>
> Finally, you insist on claiming Dembski's arguments for ID rest on
> complexity alone. This is false. There are two types of complexity,
> specified and unspecified. Unspecified complexity can result from random
> stochastic processes while specified complexity, according to Dembski,
> cannot.
>
> You are walking along the beach and it begins to rain. You come across a
> large pile of drift wood that forms a crude shelter and so you crawl
> inside. Once in, you gaze across the beach and see a beautiful beach
> home built on the cliffs above. What distinguishes these two shelters?
> They are both complex. The probability of the driftwood being arranged
> in exactly that pattern is exceedingly low and the structure can therefore
> be said to be complex. The difference of course is that one of them is
> built according to a _plan_, the other is not. The beach home exhibits
> specified complexity which implies design. The driftwood shelter is
> complex but unspecified.
> [snip]


>
> >> Dembski does recognize that complex events can occur that are not
> >> designed. Your quantum events is one example. Shuffling a deck of
> >> cards is another. Dembski's solution is to require "side
> >> information" that somehow independently identifies -- i.e.,
> >> specifies -- the specific outcome. In the case of abiogenesis, the
> >> ability of all living things to reproduce themselves is possible
> >> side information.
>

> >He's got his cart and horse backwards here. He is positing 'side
> >information' without showing that it is necessary.
>
> Necessary for what? He's saying some information adheres to a specified
> pattern and some does not. That which does implies design. The more
> complex the pattern, the greater the implication for design. What about
> this simple concept confuses you?
> [snip]


>
> >> In "The Design Inference," Dembski has a chapter on complexity. He
> >> briefly discusses various kinds of measures of complexity including
> >> an information measure. He doesn't do any calculations of measures
> >> that are important so far as I know. But the question of how one
> >> can generate CSI -- or, say, abiogenesis -- using a random
> >> stochastic process -- or, better, using any natural process -- is a
> >> reasonable one. Of course, I cannot imagine how one would ever show
> >> that it is impossible.
>

> The beauty of Dembski's approach to this is that it gets around having
> to speculate about the proto cell. If CSI cannot, as Dembski claims, be
> generated by natural processes but only conveyed, it is enough to show
> that a modern cell contains CSI to implicate ID at the origin.
>
> >It's a classic fallacy. That CSI *might* exist is not proof that it
> >does,
>
> This is silly. Examples of CSI as defined by Dembski (he is free to
> define it however he wishes) abound, which obviously does prove that it
> exists. You may disagree with the inference but not the premise that CSI
> exists. It does, by definition.
>
> > while on the other hand, there is plenty of evidence that the
> >'side information' is not necessary.
>
> Please provide one example of complex specified information, (complex
> information with the 'side information' of specification) being
> _generated_ by a non-intelligent mechanism.
>

I applied Dembski's definition to the following:

http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html


The search space is 2^1800 different circuits. The specified information
is the functional criteria for the circuit. If you assume a uniform
probability distribution, the probability that a functional circuit could
arise in the time involved (allowing 5 seconds for each test over 3 weeks)
is about 1 in a billion, IIRC (I don't have my calculations with me).
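A back-of-the-envelope sketch of the numbers in that paragraph. The run length and test time are from the post; the per-test success probability is derived from the "1 in a billion" recollection, not from Thompson's paper:

```python
# Rough numbers from the post above: one candidate circuit evaluated
# every 5 seconds for three weeks, drawn from 2**1800 configurations.
SECONDS_PER_TEST = 5
RUN_SECONDS = 3 * 7 * 24 * 3600
tests = RUN_SECONDS // SECONDS_PER_TEST
print(tests)                      # 362880 circuits evaluated

# Under the uniform assumption each configuration has probability
# 2**-1800, i.e. a surprisal of 1800 bits.
SEARCH_SPACE_BITS = 1800

# The "about 1 in a billion" overall figure is the poster's
# recollection; the per-test success probability it would imply
# (my derivation, not the paper's) is:
p_overall = 1e-9
p_per_test = p_overall / tests
print(p_per_test)                 # roughly 2.8e-15
```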

There are a couple of ways to dispute the fact that CSI was created. One
(which Dembski seems to favor) is that the uniform density assumption is
violated by the evolutionary process, making a seemingly complex event
non-complex (read: more probable). By doing this, however, you essentially
define away the problem of evolving specified complexity since it by
definition is not complex. This doesn't help answer the question at all
because now seemingly complex events (DNA, for example) are not complex, if
they evolved. How can you tell the difference?


Mike


Jeff Patterson

Sep 27, 1999, 3:00:00 AM9/27/99
to
Michael wrote:

> [snip]

> > Please provide one example of complex specified information, (complex
> > information with the 'side information' of specification) being
> > _generated_ by a non-intelligent mechanism.
> >
>
> I applied Dembski's definition to the following:
>
> http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html
>
> The search space is 2^1800 different circuits. The specified information
> is the functional criteria for the circuit. If you assume a uniform
> probability distribution, the probability that a functional circuit could
> arise in the time involved (allowing 5 seconds for each test over 3 weeks)
> is about 1 in a billion, IIRC (I don't have my calculations with me).

A nice piece of work, one which, as an EE, I can really appreciate :>)

> There are a couple of ways to dispute the fact that CSI was created. One
> (which Dembski seems to favor) is that the uniform density assumption is
> violated by the evolutionary process, making a seemingly complex event,
> non-complex (read more probable). By doing this however, you essentially
> define away the problem of evolving specified complexity since it by
> definition is not complex. This doesn't help answer the question at all
> because now seemingly complex events (DNA, for example) are not complex, if
> they evolved. How can you tell the difference?
>

I think you've mis-stated Dembski's objection slightly. As I understand his
argument, which relates to genetic algorithms said to mimic natural evolution
and not to the evolutionary process itself, if a genetic algorithm stacks the
deck to make an outcome more likely (or, as in the case of Dawkins' "me
thinks.." evolver, a certainty), the algorithm cannot be said to have generated
information, but instead has merely mapped the information from input to
output. In essence, such systems move the event (of producing a specified
output) out of the stochastic universe and into the space of natural law. A
process that is destined to produce an outcome and no other cannot be said to
have generated CSI regardless of how complex the outcome is. All of the
information present in such a system was present in the initial conditions.
Likewise, a system that has a high probability of generating a specified
outcome cannot be used to make general statements about the evolutionary
process, which is claimed to be completely stochastic and without purpose.
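The Dawkins "me thinks.." evolver mentioned above is easy to sketch, and it illustrates the point: the full specification of the outcome sits inside the fitness function, so reaching the target is (for practical purposes) a certainty. A minimal version, with the population size and mutation rate as my own illustrative choices:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # fitness = number of characters matching the target;
    # the outcome's specification lives entirely in this function
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # copy the string, flipping each character with a small probability
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(5000):        # generous cap; converges far sooner
    if parent == TARGET:
        break
    # cumulative selection: keep the best of the parent plus 100 mutants
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)

print(parent)
```

Because the parent is always retained, the score never decreases and the process marches to the target, which is exactly why Dembski's side discounts it as generating anything new.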

Dembski's objection is that the GAs don't work in the same way as biologists
claim evolution works.
To put it another way, if evolution was bound to generate DNA and few (or no)
other possibilities existed given the initial conditions, then the specification
for DNA was evidently prescribed in the initial conditions. Where did that
specification come from?

To convincingly generate CSI stochastically would require that zero information
be contained in the fitness function. The problem with this is that GAs that
meet this criteria invariably do not evolve, they get trapped in local minima.
In your experiment, the constants k1 and k2 had to be empirically determined.
This amounts to front loading the fitness function with information. As you
describe in your paper, the zero information fitness function (k1=k2=1)
resulted in a local-minimum trap, precisely the problem alluded to above.
Once front loading occurs, the system evolves to convert the CSI from one form
(information contained in k1 and k2), into another (how to interconnect the
cells in the FPGA), but in this process it cannot be conclusively shown that
any new information was created.

Not to say that it isn't possible to do really interesting things with guided
evolution, as your circuit demonstrates. One question that concerns me though
is how the circuit made use of unspecified parameters (second order
interactions with neighboring cells) to solve the problem. In a practical
design, the overall performance bounds can be calculated from the part
specifications of the composite components. Allowing a circuit to make use of
unspecified parameters would make proving robustness challenging. Is it
possible somehow to preclude this reliance?

Jeff

Michael

Sep 27, 1999, 3:00:00 AM9/27/99
to
Jeff Patterson wrote:

> Michael wrote:
>
> > [snip]


>
> > > Please provide one example of complex specified information, (complex
> > > information with the 'side information' of specification) being
> > > _generated_ by a non-intelligent mechanism.
> > >
> >
> > I applied Dembski's definition to the following:
> >
> > http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html
> >
> > The search space is 2^1800 different circuits. The specified information
> > is the functional criteria for the circuit. If you assume a uniform
> > probability distribution, the probability that a functional circuit could
> > arise in the time involved (allowing 5 seconds for each test over 3 weeks)
> > is about 1 in a billion, IIRC (I don't have my calculations with me).
>

> A nice piece of work, one which, as a EE, I can really appreciate :>)

I'm an EE myself; perhaps that's what drew me to it.

>
> > There are a couple of ways to dispute the fact that CSI was created. One
> > (which Dembski seems to favor) is that the uniform density assumption is
> > violated by the evolutionary process, making a seemingly complex event
> > non-complex (read: more probable). By doing this, however, you essentially
> > define away the problem of evolving specified complexity since it by
> > definition is not complex. This doesn't help answer the question at all
> > because now seemingly complex events (DNA, for example) are not complex, if
> > they evolved. How can you tell the difference?
> >
>

> I think you've mis-stated Dembski's objection slightly. As I understand his
> argument, which relates to genetic algorithms said to mimic natural evolution
> and not to the evolutionary process itself, if a genetic algorithm stacks the
> deck to make an outcome more likely (or, as in the case of Dawkins' "me
> thinks.." evolver, a certainty), the algorithm cannot be said to have generated
> information, but instead has merely mapped the information from input to
> output. In essence, such systems move the event (of producing a specified
> output) out of the stochastic universe and into the space of natural law. A
> process that is destined to produce an outcome and no other cannot be said to
> have generated CSI regardless of how complex the outcome is. All of the
> information present in such a system was present in the initial conditions.
> Likewise, a system that has a high probability of generating a specified
> outcome cannot be used to make general statements about the evolutionary
> process, which is claimed to be completely stochastic and without purpose.

Thanks for the effort of reading the article and replying.

A couple of points are in order. I did not set out to prove that life evolved
per se. I realize that GA's are not the best models of evolution (Larry Moran
has beat me over the head with that quite a few times). But is a simplified
version of natural selection, don't you agree? Natural selection is undoubtably
a part of evolution, thus its deterministic traits are also a part of evolution.
This is why you will hear many posters say that life did not arrive by chance.

The results of the GA algorithm are not characterized by law because were we to
run the algorithm again, it is unlikely that we will get the same circuit.
According to Dembski's book, it thus can't be the work of law. I don't think you
can claim that the final circuit was present in the initial conditions. Its
functional traits, yes, but not the circuit itself.

I think the argument that the result is no longer complex (because it is now
likely to produce the specified event) can be made based on Dembski's technique,
but that leads us back to the problem I mentioned: how can you tell if it's
complex?

>
>
> Dembski's objection is that the GAs don't work in the same way as biologists
> claim evolution works.
> To put it another way, if evolution was bound to generate DNA and few (or no)
> other possibilities existed given the initial conditions, then the specification
> for DNA was evidently prescribed in the initial conditions. Where did that
> specification come from?
>
> To convincingly generate CSI stochastically would require that zero information
> be contained in the fitness function. The problem with this is that GAs that
> meet this criteria invariably do not evolve, they get trapped in local minima.
> In your experiment, the constants k1 and k2 had to be empirically determined.
> This amounts to front loading the fitness function with information. As you
> describe in your paper, the zero information fitness function (k1=k2=1)
> resulted in a local-minimum trap, precisely the problem alluded to above.

Not to be pedantic, but it's not my paper, it's Adrian Thompson's. I wouldn't want
to take credit for his work.


>
> Once front loading occurs, the system evolves to convert the CSI from one form
> (information contained in k1 and k2), into another (how to interconnect the
> cells in the FPGA), but in this process it cannot be conclusively shown that
> any new information was created.

However, even if k1 and k2 have to be determined, that is a much simpler set. I
don't think you can claim that k1 and k2 constitute CSI.
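Mike's point here can be made quantitative under an assumption about how finely k1 and k2 were tuned. Suppose, purely for illustration (the figure is not from the paper), that each constant was chosen from 256 candidate settings; the "front-loaded" information is then tiny compared with the circuit's configuration space:

```python
from math import log2

CIRCUIT_BITS = 1800              # FPGA configuration space: 2**1800 circuits

# Hypothetical resolution: each empirically tuned constant picked
# from 256 candidate settings (an assumed figure, for scale only).
SETTINGS_PER_CONSTANT = 256
bits_in_k1_k2 = 2 * log2(SETTINGS_PER_CONSTANT)

print(bits_in_k1_k2)                  # 16.0 bits of "front-loading"
print(CIRCUIT_BITS / bits_in_k1_k2)   # the circuit spec is ~112x larger
```

Under any plausible resolution for the constants, the gap is of this order, which is the substance of the objection that k1 and k2 alone cannot account for the CSI in the evolved circuit.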

>
>
> Not to say that it isn't possible to do really interesting things with guided
> evolution, as your circuit demonstrates. One question that concerns me though
> is how the circuit made use of unspecified parameters (second order
> interactions with neighboring cells) to solve the problem. In a practical
> design, the overall performance bounds can be calculated from the part
> specifications of the composite components. Allowing a circuit to make use of
> unspecified parameters would make proving robustness challenging. Is it
> possible somehow to preclude this reliance?
>

Some of his other papers are at

http://www.cogs.susx.ac.uk/users/adrianth/ade.html

where he looks at varying the temperature of the circuits while they are evolving
to make them more robust.


Mike


Jeff Patterson

Sep 27, 1999, 3:00:00 AM9/27/99
to

Bigdakine wrote:

> >Subject: Re: Dembski's Intelligent Design Hypothesis
> >From: ylvi...@erols.com (Ivar Ylvisaker)
> >Date: Sun, 26 September 1999 02:02 AM EDT
> >Message-id: <37eda9a2...@news.erols.com>
> >
> >On 24 Sep 1999 21:27:31 -0400, Marty Fouts
> ><mathem...@usenet.nospam.fogey.com> wrote:
> >
> >While I do not find Dembski's reasoning persuasive, he is a little
> >more logical than your post suggests.
> >
> >>There in lies the first problem with Dembski's reasoning. Low
> >>probability of occurrence does not equal complexity, nor does it need
> >>arise from design.
> >

> >Dembski does equate improbability and complexity. His reasoning seems
> >to be that improbable events when they occur convey lots of
> >information, i.e., you have to use lots of bits to distinguish the
> >event that did occur from all the other events that might have
> >occurred. And an information measure is a kind of complexity.
>

> The question is how much information was needed for that first
> self-replicating molecule that led to the abiogenic process. Dembski's argument
> assumes a priori that it was large; large enough to be improbable. It's
> circular.

Not quite. He argues that CSI is conserved. Thus if found today, it must have been
present at the start.

Jeff

Jeff Patterson

Sep 27, 1999, 3:00:00 AM9/27/99
to

Marty Fouts wrote:

> jpat41 filled the aether with:
>
> > Marty Fouts blessed us with:
>
> [snip]
>
> > And nowhere will you find Dembski making such an inane claim. What
> > he says is low probability of occurrence + a pre-specified pattern =
> > complex specified information. He specifically says low probability
> > alone does NOT equal CSI. In his archery analogy, the arrow hitting
> > the wall at a specific point was low probability occurrence but
> > without the pre-specified pattern (the target) the event contained
> > no complexity of information.
>
> But since the 'pre-specified pattern' is an imponderable,...

How so?

> >> Using Dembski's definition of information,
>

> > It's not Dembski's definition, it's Shannon's. You remember Shannon,
> > the father of information theory, that one.
>
> It is *not* Shannon's, and even Dembski has written that it is not.
> Communication channel, you remember communication channel.

That is what Shannon worked out, the information capacity of a
communication channel in the presence of noise. As to Dembski claiming it
is a different definition, please provide a reference. Here is mine:
" Thus we define the measure of information in an event of probability p
as -log2p (see Shannon and Weaver, 1949, p. 32; Hamming, 1986; or indeed
any mathematical introduction to information theory". (Intelligent Design
as a Theory of Information, William A. Dembski).

Classical Shannon, properly attributed.

> [snip]


>
> >>
> >> But much information is gained from noting that this apple, not
> >> that one, hit the ground.
>

> > Now you've gone from a natural law event to a random stochastic
> > event. Now show me a particular apple that hits the ground near a
> > particular spot at a particular point in time, all specified in
> > advance and I'll show you a Tree Shaker, unless you perhaps believe
> > in psychic power.
>
> I believe in QM. There is no tree shaker, not there, nor in
> evolution.

So you think QM can generate prespecification of events in time? This gets
stranger and stranger.

>
>
> [snip]
>
> > What I think he's getting at is that it is
> > possible to determine the pre-specified pattern after the pattern
> > has been generated, thus turning preceived complex but unspecified
> > information into CSI. In any case, the point that it took ID to
> > create the CSI in this case (Chinese speaker) is still valid.
>
> Human languages, for the most part are not designed, by the way, they
> evolve. The mechanism of the evolution is well studied.

Interesting. Do you think language would evolve in the absence of
intelligence? Language contains CSI. The CSI increases over time,
increasing its ability to express more complex ideas. All of this requires
intelligence.

> [snip]
>
> >> And yet artifacts that give every appearance of having been
> >> 'designed', such as arrowheads, are known to be both simple and to
> >> have arisen stochastically.
>
> > A high probability event (the crystalline structure of obsidian
> > makes arrow-head like creation events likely) is ruled out as a test
> > case.
>
> Sorry, this makes no sense. That obsidian forms flakes in such a way
> that it creates sharp edges is no more a 'high probability event' than
> any property of a physical material.

Try making an arrow head out of a sugar cube. Find one that did so on its
own, subject only to natural, random forces, and you'd have a case in
point.

> > Like I said, Dembski hasn't proven anything to me yet. I too
> > wonder whether a razor so fine exists as to definitively separate a
> > complex pattern from a non-complex one.
>
> It doesn't, nor is there a tool that can show the presence of the
'pre-specified pattern'. Meanwhile, at least in evolution, there
> are well known mechanisms that require neither.

I haven't read anything yet on the formalization of the pattern
recognition, so you may be correct. But while we wait for a rigorous
definition, a working definition is, like obscenity, "you'll know it when
you see it". In real life, all of us have confidence in being able to
determine the designed from the undesigned without formal criteria, and
the more complex the system, the less likely we are to be fooled.

> > What is clear though is that the probability of any given
> > pre-specified pattern occurring at random can be calculated if one
> > can estimate the probability density(-ies) involved.
>
> Not really. An aboriginal arrowhead is indistinguishable from a flint
> flake.

Probably because the aboriginals used flint flakes for arrow heads.

> The one is an arbitrary event the other an act of design.

Not too much design is involved here.

> Good anthropologists with much field experience have difficulty
> telling them apart.

As would be predicted from the lack of design necessary to produce one.

> Simple, uncomplex, highly probable objects exist
> that could be either coincident or design.

Of course, but what we are talking about is complex, highly improbable
objects!

[snip]

> It is easy to show that designs exist that are indistinguishable from
> random chance. That in itself kills half of Dembski's argument. It
> is impossible, without talking to the designer, to determine what a
> 'pre-specified pattern' is, let alone, that it bears any
> responsibility for the object in question.

Back to the heart of the matter. Dembski claims you can reliably infer the
prespecified complexity. I remain unconvinced but am not well enough
informed on the details yet. His ideas are intriguing though and deserve
critique on a fair reading, not self-serving straw men.


Ivar Ylvisaker

Sep 27, 1999, 3:00:00 AM9/27/99
to
On 27 Sep 1999 02:33:47 -0400, Marty Fouts
<mathem...@usenet.nospam.fogey.com> wrote:

>Ivar Ylvisaker filled the aether with:
>

>> I'm not talking about abiogenesis. All I'm saying is that nature
>> had to get to DNA somehow. Dembski is hoping that science will
>> never find a way (other than God).
>

>You're not talking about abiogenesis? Then why did you write, earlier
>in this exchange:


>
>>>> Ivar Ylvisaker filled the aether with:
>>>>

>>>>> But I suspect that only one event is crucial to Dembski and that
>>>>> is abiogenesis. The first life had those long, unlikely DNA
>>>>> strings.
>

>It was, after all, that specific paragraph that has led us directly to
>this post.

I think that we are using the word "abiogenesis" in somewhat different
ways. I was thinking of abiogenesis as a kind of black box. Before
the box was present, there was no life. After the black box did its
work, there was life incorporating those long DNA strings. Dembski's
best chance of demonstrating design is inside that black box (in my
opinion, Dembski may think otherwise). The alternative to design is
some kind of natural process. But I have no opinion on what that
natural process might be. I wasn't looking inside the box. You
wrote:

>until you know *how* abiogenesis occurred, you can't argue
>about its properties. There is no evidence that requires that the
>first objects that we would recognize as alive had to have DNA, nor
>that the first with DNA had to have long strands of it.

I wasn't concerned about the properties of abiogenesis inside the box
(assuming the process is natural) or when things inside the box might
be described as alive.

In short, I think we are interpreting the word abiogenesis from
different viewpoints.

Ivar


John Wilkins

Sep 27, 1999, 3:00:00 AM9/27/99
to
In article <wkpuz3q...@usenet.nospam.fogey.com>, Marty Fouts
<mathem...@usenet.nospam.fogey.com> wrote:

|Jeff Patterson filled the aether with:
|
|> Marty Fouts wrote:
|
|[snip]


|
|>> But since the 'pre-specified pattern' is an imponderable,...
|
|> How so?
|

|There's no way to distinguish after the fact whether an event had a
|pre-specified pattern.


|
|>> >> Using Dembski's definition of information,
|>>
|>> > It's not Dembski's definition, it's Shannon's. You remember
|>> Shannon, the father of information theory, that one.
|>>
|>> It is *not* Shannon's, and even Dembski has written that it is
|>> not. Communication channel, you remember communication channel.
|
|> That is what Shannon worked out, the information capacity of a
|> communication channel in the presence of noise. As to Dembski
|> claiming it is a different definition, please provide a
|> reference. Here is mine: " Thus we define the measure of information
|> in an event of probability p as -log2p (see Shannon and Weaver,
|> 1949, p. 32; Hamming, 1986; or indeed any mathematical introduction
|> to information theory". (Intelligent Design as a Theory of
|> Information, William A. Dembski).
|

|You've missed my point. The definition of information that I was
|using was *not* Shannon's definition, it was Dembski's. Dembski wrote
|in
|http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html
|
| What then is information? The fundamental intuition underlying
| information is not, as is sometimes thought, the transmission of
| signals across a communication channel, but rather, the
| actualization of one possibility to the exclusion of others.
|
|That is definitely not Shannon's definition.

Isn't it? I thought that Shannon's was the probability that the state of
the receiver would be the same as the state of the sender, which would
imply the exclusion of other states. Certainly that's how it was
interpreted at the time. I have a little essay of JZ Young's from 1954 (6
years after Shannon's classical paper) in which that is *exactly* how he
characterises Shannon information. It's in _Evolution as a Process_, eds
Huxley, Hardy and Ford.
<snip rest>

--
John Wilkins, Head, Graphic Production
The Walter and Eliza Hall Institute of Medical Research, Melbourne,
Australia <mailto:wil...@WEHI.EDU.AU><http://www.wehi.edu.au/~wilkins>
Homo homini aut deus aut lupus - Erasmus of Rotterdam


Jeff Patterson

Sep 27, 1999, 3:00:00 AM9/27/99
to
Marty Fouts wrote:

> Jeff Patterson filled the aether with:
>
> > Marty Fouts wrote:
>
> [snip]


>
> >> But since the 'pre-specified pattern' is an imponderable,...
>
> > How so?
>

> There's no way to distinguish after the fact whether an event had a
> pre-specified pattern.

That will come as news to cryptologists :>) Seriously, you can't mean for
your statement to be a generalization. If I hand you a piece of paper with
two strings of binary digits say 1000 digits long, each is equally likely
to have been generated by a fair coin toss. Say one was in fact so
generated. But if the other one is alternating, 101010...., you have no
trouble discerning the pattern the designer used to generate the string.
Just to make it clear, both are equally complex (same probability), one
has complex specificity, the other does not. In the one that does, the
pattern can be discerned.
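Jeff's two-strings example is easy to work through numerically: under fair coin tosses both strings carry exactly the same surprisal, yet only one matches a short, independently statable pattern. A sketch (the random seed and pattern-detector are my illustrative choices):

```python
from math import log2
import random

random.seed(0)
n = 1000
random_string = "".join(random.choice("01") for _ in range(n))
patterned = "10" * (n // 2)            # 1010...10, the alternating string

def surprisal_bits(s):
    # under a fair coin, every n-bit string has probability 2**-n
    return -log2(0.5) * len(s)

# Equally "complex" in the probabilistic sense: 1000 bits each.
print(surprisal_bits(random_string), surprisal_bits(patterned))

def is_alternating(s):
    # a short, independently statable description ("specification")
    return all(a != b for a, b in zip(s, s[1:]))

print(is_alternating(patterned))       # True
print(is_alternating(random_string))   # False, overwhelmingly likely
```

The point of the sketch is that the probability measure alone cannot separate the two; the separation comes from the existence of a compact description that one string fits and the other does not.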

> >> >> Using Dembski's definition of information,
> >>
> >> > It's not Dembski's definition, it's Shannon's. You remember
> >> Shannon, the father of information theory, that one.
> >>
> >> It is *not* Shannon's, and even Dembski has written that it is
> >> not. Communication channel, you remember communication channel.
>
> > That is what Shannon worked out, the information capacity of a
> > communication channel in the presence of noise. As to Dembski
> > claiming it is a different definition, please provide a
> > reference. Here is mine: " Thus we define the measure of information
> > in an event of probability p as -log2p (see Shannon and Weaver,
> > 1949, p. 32; Hamming, 1986; or indeed any mathematical introduction
> > to information theory". (Intelligent Design as a Theory of
> > Information, William A. Dembski).
>

> You've missed my point. The definition of information that I was
> using was *not* Shannon's definition, it was Dembski's. Dembski wrote
> in
> http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html
>
> What then is information? The fundamental intuition underlying
> information is not, as is sometimes thought, the transmission of
> signals across a communication channel, but rather, the
> actualization of one possibility to the exclusion of others.
>
> That is definitely not Shannon's definition.

Dembski has not adopted a novel definition of information here. Nor would
he group Shannon in with those who think the "fundamental intuition
underlying information is the transmission of signals across communication
channels". Shannon had to first formalize the definition of information
before he could apply that formulation to the question of corruption that
occurs in channels. Dembski is taking Shannon's formal definition, which
describes mathematically "the actualization of one possibility to the
exclusion of others", and using it to examine not its corruption but
instead its inception, a question that, as far as I know, was never taken up
by Shannon. Dembski is asking the fundamental question, "where did the
information come from?" In examining that issue he is still using the
universally accepted mathematical definition of information,
I(E) = -log2(P(E)), due to Shannon, and not some novel
definition. In any case, as long as we can agree that the above equation
is what Dembski means by information, the rest is semantics.
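The measure quoted above from Shannon and Weaver (-log2 of the probability) has two properties worth keeping in view for this thread: it is additive over independent events, and a certain event carries zero information, which is the sense in which a purely law-like process is said to "generate" nothing new. A quick sketch:

```python
from math import log2, isclose

def info_bits(p):
    # Shannon surprisal of an event with probability p
    return -log2(p)

# a fair coin flip carries 1 bit; a fair die roll carries log2(6) bits
print(info_bits(0.5))                 # 1.0
print(info_bits(1 / 6))               # about 2.585

# additivity for independent events: I(A and B) = I(A) + I(B)
pA, pB = 0.5, 1 / 6
print(isclose(info_bits(pA * pB), info_bits(pA) + info_bits(pB)))

# a certain event (probability 1) carries zero information
print(info_bits(1.0))                 # 0.0
```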

> [snip]


>
> >> I believe in QM. There is no tree shaker, not there, nor in
> >> evolution.
>
> > So you think QM can generate prespecification of events in time?
> > This gets stranger and stranger.
>

> You are right, your misinterpretation of my point gets stranger and
> stranger. I believe that 'prespecification' in Dembski's sense is a
> meaningless concept.

But wouldn't you agree that, at least sometimes, it is possible to discern
a specified pattern from the result with no a priori knowledge of the
specification, as in the coin toss sequence above or the cryptologist
who breaks the code?

> [snip]


>
> >> Human languages, for the most part are not designed, by the way,
> >> they evolve. The mechanism of the evolution is well studied.
>
> > Interesting. Do you think language would evolve in the absence of
> > intelligence?
>

> No. But I believe that a certain sophistication of language is a
> prerequisite for intelligence. Animals are capable of no greater
> intelligence than their languages allow.
>
> I also believe that neither language nor intelligence are binary, but
> rather are matters of degree.


>
> > Language contains CSI. The CSI increases over time, increasing its
> > ability to express more complex ideas. All of this requires
> > intelligence.
>

> you've got the cart and horse backwards. Language comes before
> intelligence. One first needs the medium of expression and then one
> expresses.
>

Well, for what it's worth, here's my stand on this chicken-and-egg question. The
utterance of whatever audible sound an organism is capable of making is
only worth the calories required to do so if the organism has some
reasonable expectation that another organism is going to understand the
meaning conveyed. This means the uttering organism knows that sounds can
represent things (the lesson it took Helen Keller so long to learn. She
had intelligence without language!) and thus has intelligence. The means
of utterance does not a language make. Encapsulating information into
audible symbols does, and the encapsulation takes intelligence, or so it
would appear to me.

> [snip]


>
> >> Sorry, this makes no sense. That obsidian forms flakes in such a
> >> way that it creates sharp edges is no more a 'high probability
> >> event' than any property of a physical material.
>
> > Try making an arrow head out of a sugar cube. Find one that did so
> > on its own, subject to only natural, randowm forces and you'd have a
> > case in point.
>

> Why? The arrowhead example is here to make the point that there are
> situations in which it is impossible to tell designed objects from
> coincidental objects. It also points out that design does not
> correlate with complexity, since arrowheads, even the most
> sophisticated designed ones of today, are hardly complex by any metric
> of complexity.
>

I think by correlate here you mean something more like "follows directly".
Correlation is an analog property, not a binary one. In that sense of the
word, objects which have the attribute of specified complexity are highly
correlated with objects designed by an intelligent agent. But all that is
beside the point. Dembski's evaluation filter is not immune to false
negatives (it can wrongly attribute a designed object to chance), and he
explicitly states this. Using his filter, your arrowhead would always be
attributed to chance. That attribution may be wrong but so what? The point
is that you can design a filter that never exhibits false positives
(attributing ID to an object generated by chance or law). This makes the
whole arrowhead argument mute.

> [snip]


>
> > I haven't read anything yet on the formalization of the pattern
> > recognition so you may be correct. But while we wait for a rigorous
> > definition, a working definition is, like obscenity, "you'll know it
> > when you see it".
>

> That's a subjective definition, and, in my humble opinion, the best
> that will ever be arrived at. Being subjective, rather than
> objective, takes it outside the realm of science.


>
> > In real life, all of us have confidence of being able to determine
> > the designed from the undesigned without formal criteria and the
> > more complex the system, the less likely we are to be fooled.
>

> But that's an intuition built by looking at a certain kind of design
> and noticing its complexity. It doesn't take into account either
> simple objects, such as arrowheads, or complex objects deliberately
> designed to look as if they weren't, such as certain kinds of
> camouflage.

In both cases though, you could not make the mistake, using the filter
described by Dembski, of attributing a chance occurrence to ID. The point
is *not* that every complex thing is generated by chance but that at least
some are not. If some of those things that aren't include occurrences
which have no human intervention, the obvious question arises, where did
they come from? You claim without proof that Natural Selection can be the
root cause of CSI, if CSI is ever found to exist in biological systems.
Dembski claims with proof that NS (or any stochastic, deterministic or
hybrid process) can never generate CSI, only transmute it. This follows
logically from the definition of CSI which, if I may use very rough terms,
is that information not produced by chance or law. If that definition
stood by itself, your points about circularity would be well taken; it
excludes chance a priori. But if we have another, independent test of CSI,
the circularity is broken. We can use the second test to determine the
presence of CSI and use the first definition to exclude chance and law.
Now I think you have implicitly agreed in your earlier remarks that
prespecification of pattern provides that independent test, but hold that
this is useless because it is impossible to determine the prespecified
pattern from the event. But clearly this is not so; I can certainly think
of some sequences where the pattern can be discerned with certainty. Does
life fall into this category? Who knows. But if you could be convinced
somehow, that some aspect of life, that is required for something to be
deemed alive, did indeed fall into that category, would you allow that it
follows that NS cannot be responsible for the observed pattern
specificity?
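The two-step test described here, an independent specification check plus an improbability calculation, can be made concrete for coin tosses. This is only a toy sketch; the pattern, the bit threshold, and the helper name are invented for the example.

```python
import math

def is_specified_and_complex(sequence: str, prespecified: str, min_bits: float) -> bool:
    """Sketch of the two-part test described above:
    (1) check the outcome against a pattern given *in advance*, and
    (2) check that the outcome is improbable (complex) enough under
    the chance hypothesis of fair coin tosses."""
    matches_spec = sequence == prespecified
    bits = -math.log2(0.5 ** len(sequence))  # n fair tosses carry n bits
    return matches_spec and bits >= min_bits

# A 100-toss sequence matching a pattern written down beforehand passes;
# the same kind of sequence with no matching prior specification does not.
pattern = "HT" * 50
print(is_specified_and_complex("HT" * 50, pattern, min_bits=100))
print(is_specified_and_complex("TH" * 50, pattern, min_bits=100))
```

The contested step in the thread is, of course, how `prespecified` could ever be obtained independently of the event itself.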

> [snip]


>
> >> Not really. An aboriginal arrowhead is indistinguishable from a
> >> flint flake.
>
> > Probably because the aboriginals used flint flakes for arrow heads.
>

> Sometimes they used found flakes, sometimes they deliberately worked
> the material. It is only as the working of the material becomes more
> obvious, as when tool marks become found on the arrowhead, that the
> anthropologists are able to tell the difference.
>
> Your assertions about noticing complexity really amount to noticing
> the hallmark of the workman, and I've seen complex designed objects
> that are, by intention, indistinguishable from complex undesigned
> objects.

Again, all of this presents no problem for the CSI detection algorithm. It
would, at worst, yield a false negative, which is of no consequence. Find a
case where the procedure outlined by Dembski wrongly attributes to design
something generated by chance, and I agree the whole thing falls apart.

> The presence of the hallmarking that makes it seem to people that
> design is so easily detectable is an artifact of how primitive our
> design skills are, not a feature inherent in design.


>
> >> The one is an arbitrary event the other an act of design.
>
> > Not too much design is involved here.
>

> Well yes, but some. Thus, at one end of the spectrum, undetectable
> design.

Design may be undetectable, but at issue is whether chance can masquerade
as design. I think it is practical to limit this possibility to an
acceptable infinitesimal by suitable choice of the threshold of specified
complexity required to be inherent in the event under scrutiny. Dembski
goes so far as to posit an absolute probability so low that it equates
with impossibility, derived from bounds on the number of atoms in the
universe, the age of the universe, and the known minimum time for phase
transitions.
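The arithmetic behind that bound is short. Dembski's published figures are roughly 10^80 elementary particles, 10^45 state transitions per second, and 10^25 seconds; the sketch below uses those round numbers as his estimates, not as measured constants.

```python
# Sketch of Dembski's "universal probability bound": an event whose
# probability falls below this is treated as effectively impossible,
# because the universe cannot have run enough trials to produce it.
particles_in_universe = 10**80   # upper bound on elementary particles
transitions_per_second = 10**45  # max state transitions/sec (Planck-time scale)
seconds_available = 10**25       # generous bound on the age of the universe

max_trials = particles_in_universe * transitions_per_second * seconds_available
universal_bound = 1.0 / max_trials  # about 1 in 10^150

print("max possible trials: 1e%d" % (len(str(max_trials)) - 1))
```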

> Also, and more importantly, as noted above, a demonstration
> that design does not correlate with complexity.

It is not necessary for design to follow from complexity to show that
design does follow from specified complexity.

> [snip]


>
> >> Simple, uncomplex, highly probable objects exist that could be
> >> either coincident or design.
>
> > Of course but what we are taking about is complex, highly unprobable
> > objects!
>

> we are talking about a theory that claims that design is noticeable
> because of complexity and improbability. I have show a case in which
> design is neither complex nor improbable, thus demonstrating that the
> theory has at least one flaw.

The flaw is in your reasoning. Complexity and improbability are different
measures of the same thing. Design is noticeable because of *specified*
complexity and improbability. Over and over you want to knock down this
complexity straw man. I keep telling you, complexity alone is not
sufficient, and Dembski never says it is. It has to be complexity that
adheres to a pattern known in advance.

> > [snip]
>
> >> It is easy to show that designs exist that are indistinguishable
> >> from random chance. That in itself kills half of Dembski's
> >> argument.

No it doesn't. What would kill Dembski's argument is proving chance caused
a pattern that is indistinguishable from design.

> It is impossible, without talking to the designer, to
> >> determine what a 'pre-specified pattern' is, let alone, that it
> >> bears any responsibility for the object in question.
>
> > Back to the heart of the matter. Dembski claims you can reliably
> > infer the prespecified complexity.
>

> Dembski is wrong. He is ignoring several things, including the
> cultural error of assuming that the state of the art of human design
> is a good metric and the design error of ignoring those designs which
> are deliberately camouflage.

There are at least some patterns for which this is not true. I gave you
two in this post.

> But that only covers the false negative case, in which design is
> present but not inferred by Dembski. The false positive case, in
> which design is not present but claimed is just as likely, taking his
> criteria, which you have admitted to be the subjective "I don't know
> what it is, but I'll know it when I see it."

I admitted to being ill-informed on the claim of objective criteria made by
Dembski and therefore to being unable to comment on its rigor. That is not
the same as admitting it is subjective.

Jeff


Jeff Patterson

Sep 27, 1999

Tim Tyler wrote:

> Jeff Patterson <jp...@mpmd.com> wrote:
> : Tim Tyler wrote:
>

> :[snip]


> :>
> :> To build an arch, simply heap up a pile of stones, build the arch on top
> :> of that and then remove the pile of stones.
>
> : Of course the more arches you build which share a common capstone, the more
> : difficult it becomes to remove the rocks without the whole thing collapsing.
> : The arch argument ignores the inter-relatedness of functionality at the
> : cellular level.
>
> No, no! ;-)
>
> The "arch" argument indicates that no matter /how/ complex and inter
> related things are at the cellular level, they *still* may have been
> built by the use of elaborate supporting structures.
>
> Showing that inter-dependence exists proves diddley-squat about whether
> a system can be built by gradual processes.
>
> You can build a large number of arches which share a common capstone
> (moving only one stone at a time) if you first build a mound of rocks,
> then build the arches, and then /carefully/ remove the "scaffolding".

It was this scaffolding I was referring to when I was talking about removing
rocks in my analogy. You seem to think I was talking about removing rocks
from the arches themselves. Just as with arches, it becomes increasingly
difficult to remove the scaffolding without the whole thing collapsing as
more and more arches share a common keystone (if you don't believe it, try
it). Subtractive evolutionary just-so stories may be able to account for a
single irreducibly complex function. I have yet to see one that accounts
for the interrelationships between functions at the cellular level, each of
which is irreducibly complex, or that explains why IC is the rule and not
the exception in these functions.

Jeff

Ivar Ylvisaker

Sep 27, 1999
On 27 Sep 1999 20:50:29 -0400, wil...@wehi.edu.au (John Wilkins)
wrote:

> |Jeff Patterson filled the aether with:
> |
> |> Marty Fouts wrote:
> |
> |[snip]


> |
> |>> But since the 'pre-specified pattern' is an imponderable,...
> |
> |> How so?
> |

> |There's no way to distinguish after the fact whether an event had a
> |pre-specified pattern.
> |

> |>> >> Using Dembski's definition of information,
> |>>
> |>> > It's not Dembski's definition, it's Shannon's. You remember
> |>> Shannon, the father of information theory, that one.
> |>>
> |>> It is *not* Shannon's, and even Dembski has written that it is
> |>> not. Communication channel, you remember communication channel.
> |
> |> That is what Shannon worked out, the information capacity of a
> |> communication channel in the presence of noise. As to Dembski
> |> claiming it is a different definition, please provide a
> |> reference. Here is mine: " Thus we define the measure of information
> |> in an event of probability p as -log2p (see Shannon and Weaver,
> |> 1949, p. 32; Hamming, 1986; or indeed any mathematical introduction
> |> to information theory". (Intelligent Design as a Theory of
> |> Information, William A. Dembski).
> |

> |You've missed my point. The definition of information that I was
> |using was *not* Shannon's definition, it was Dembski's. Dembski wrote
> |in
> |http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html
> |
> | What then is information? The fundamental intuition underlying
> | information is not, as is sometimes thought, the transmission of
> | signals across a communication channel, but rather, the
> | actualization of one possibility to the exclusion of others.
> |
> |That is definitely not Shannon's definition.
>

>Isn't it? I thought that Shannon's was the probability that the state of
>the receiver would be the same as the state of the sender, which would
>imply the exclusion of other states. Certainly that's how it was
>interpreted at the time. I have a little essay of JZ Young's from 1954 (6
>years after Shannon's classical paper) in which that is *exactly* how he
>characterises Shannon information. It's in _Evolution as a Process_, eds
>Huxley, Hardy and Ford.
><snip rest>
>

Shannon's classic paper is available on the web:
http://www.math.washington.edu/~hillman/Entropy/infcode.html

As far as I know, Shannon didn't worry much about a definition of
information. He specifically said he didn't care about meaning.
Shannon's interest was in clever ways to transmit information and in
establishing a bound on how clever one could get. In the beginning of
his paper, there is a diagram with a box labeled information source.
He includes a few examples and that's about it insofar as an
explanation of the source of information is concerned. He did define
a measure of information, that -log2p stuff, but it applies only after
the information source produces some information. Defining
information is different from defining a measure of information.
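That -log2 p measure is easy to make concrete. The sketch below is illustrative only; the example probabilities are not drawn from anything in this thread.

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon's measure of the information in an event of
    probability p: -log2(p). Rarer events carry more bits;
    a certain event carries zero."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

print(surprisal_bits(0.5))      # a fair coin flip: exactly one bit
print(surprisal_bits(1.0))      # a certainty: zero bits
print(surprisal_bits(2**-10))   # a 1-in-1024 event: ten bits
```

Note that the function says nothing about where the event came from or what it means, which is exactly Ivar's point: it measures information only after a source has produced it.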

Information theory (and other theories) may help clarify and
ultimately resolve the problem of abiogenesis. But they won't make it
disappear.

Ivar


Bigdakine

Sep 28, 1999
>Subject: Re: Dembski's Intelligent Design Hypothesis
>From: Jeff Patterson jp...@mpmd.com
>Date: Mon, 27 September 1999 05:42 PM EDT
>Message-id: <37EFE5EB...@mpmd.com>

>
>
>
>Bigdakine wrote:
>
>> >Subject: Re: Dembski's Intelligent Design Hypothesis
>> >From: ylvi...@erols.com (Ivar Ylvisaker)
>> >Date: Sun, 26 September 1999 02:02 AM EDT
>> >Message-id: <37eda9a2...@news.erols.com>
>> >
>> >On 24 Sep 1999 21:27:31 -0400, Marty Fouts

>> ><mathem...@usenet.nospam.fogey.com> wrote:
>> >
>> >While I do not find Dembski's reasoning persuasive, he is a little
>> >more logical than your post suggests.
>> >
>> >>There in lies the first problem with Dembski's reasoning. Low
>> >>probability of occurrence does not equal complexity, nor does it need
>> >>arise from design.
>> >
>> >Dembski does equate improbability and complexity. His reasoning seems
>> >to be that improbable events when they occur convey lots of
>> >information, i.e., you have to use lots of bits to distinguish the
>> >event that did occur from all the other events that might have
>> >occurred. And an information measure is a kind of complexity.
>>
>> The question is how much information was needed for that first
>> self-replicating molecule that led to the abiogenic process. Dembski's
>> argument assumes a priori that it was large; large enough to be
>> improbable. It's circular.
>
>Not quite. He argues that CSI is conserved. Thus if found today, it must
>have been present at the start.
>
>Jeff
>
How does that mesh with Big-Bang cosmology?

Wesley R. Elsberry

Sep 28, 1999
In article <37EFC51A...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
>Michael wrote:

JP> [snip]

JP> Please provide one example of complex specified information, (complex
JP> information with the 'side information' of specification) being
JP> _generated_ by a non-intelligent mechanism.

MK> I applied Dembski's definition to the following:

MK> http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html

MK> The search space is 2^1800 different circuits. The
MK> specified information is the functional criteria for the
MK> circuit. If you assume a uniform probability distribution,
MK> the probability that a functional circuit could arise in the
MK> time involved (allowing 5 seconds for each test over 3
MK> weeks) is about 1 in a billion, IIRC (I don't have my
MK> calculations with me).

JP>A nice piece of work, one which, as a EE, I can really
JP>appreciate :>)

MK> There are a couple of ways to dispute the fact that CSI was
MK> created. One (which Dembski seems to favor) is that the
MK> uniform density assumption is violated by the evolutionary
MK> process, making a seemingly complex event, non-complex (read
MK> more probable). By doing this however, you essentially
MK> define away the problem of evolving specified complexity
MK> since it by definition is not complex. This doesn't help
MK> answer the question at all because now seemingly complex
MK> events (DNA, for example) are not complex, if they evolved.
MK> How can you tell the difference?

JP>I think you've mis-stated Dembski's objection slightly.

I think that Michael hit the nail on the head. Of course,
I had already said essentially the same thing in regard to
picking two identical solutions apart where one is due to
an algorithm and the other due to an intelligent agent.

JP>As I understand his argument, which relates to genetic
JP>algorithms said to mimic natural evolution and not to the
JP>evolutionary process itself, if a genetic algorithm stacks
JP>the deck to make an outcome more likely (or, as in the case of
JP>Dawkins' "me thinks.." evolver, a certainty), the algorithm
JP>cannot be said to have generated information, but instead has
JP>merely mapped the information from input to output. In
JP>essence, such systems move the event (of producing a
JP>specified output) out of the stochastic universe and into the
JP>space of natural law.

This critique applies to intelligent agents as well as
unintelligent processes. If a task is made more likely to be
solved because intelligence is applied, then it would seem to
me that the result must also be classed as "apparent CSI"
rather than "actual CSI" on that head. By this consistent
view, an omnipotent, omniscient hyperintelligence would never
be able to produce anything but "apparent CSI", and thus would
be indistinguishable in that regard from algorithmic
'probability amplifiers'.

JP>A process that is destined to produce an outcome and no
JP>other cannot be said to have generated CSI regardless of how
JP>complex the outcome is.

See comment upon omnipotent omniscient hyperintelligences
above.

JP>All of the information present in such a system was present
JP>in the initial conditions.

I've challenged Dembski with explaining the information
contained in the final state of a GA that solves a 100-city
tour of the Traveling Salesman Problem. I've analyzed this
case to show that the solution state is *not* contained in the
initial conditions. See
<http://inia.cls.org/~welsberr/zgists/wre/papers/antiec.html>.

JP>Likewise, a system that has a high probability of
JP>generating a specified outcome cannot be used to make
JP>general statements about the evolutionary process which is
JP>claimed to be completely stochastic and without purpose.

I wish to resolve a somewhat less ambitious claim first, that
evolutionary computation algorithms are capable of producing
CSI.

JP>Dembski's objection is that the GAs don't work in the same
JP>way as biologists claim evolution works.

Is it? In 1997, Dembski's objection was that the information
of the TSP tour found by GA was somehow "infused" by the
intelligence that went into the operating system, programming
system, and the GA program itself. My analysis (see URL
above) presents a (IMO) compelling argument against such a
stance.

In his "reiterations" post, Dembski presented another
objection, the 'probability amplifier' objection. But it
would appear that Dembski does not wish to emphasize the
differences between GAs and evolutionary processes. Rather,
it appears to me that Dembski wants to apply the 'probability
amplifier' conclusion to natural selection based upon how well
GAs and natural selection correspond.

[Quote]

WAD>The Darwinian mutation-selection mechanism, neural nets,
WAD>and genetic algorithms all fall within this broad
WAD>definition of evolutionary algorithms.

[End Quote - WA Dembski, "Explaining Specified Complexity"]

JP>To put it another way, if evolution was bound to generate DNA
JP>and few (or no) other possibilities existed given the initial
JP>conditions, then the specification for DNA was evidently
JP>prescribed in the initial conditions. Where did that
JP>specification come from?

Why is it that things that must move through substances that
can be treated as fluids typically have a fusiform body shape?
Where did that specification come from? Why does falling
water take on a characteristic teardrop shape? Where did that
specification come from?

JP>To convincingly generate CSI stochastically would require
JP>that zero information be contained in the fitness function.

Why? That reduces any evolutionary computation to a random
walk. It isn't a random walk that we want to model. Apply
the same restriction to intelligent agents, and one will
find that those agents also end up doing blind search.

The evaluation function plays the role of the environment in a
GA. Thus, an evaluation function should provide no more
information than is provided in the interaction of organism
and environment. (Environment in this sense includes possible
interactions with other organisms.) How much information does
environment give with respect to fitness? It doesn't give in
one step the information of a best-fit organism, but it at
least gives a relative measure across organisms of which are
better and which are worse. And that is all that is necessary
to make evolutionary computation different from a random walk.

JP>The problem with this is that GAs that meet this criterion
JP>invariably do not evolve; they get trapped in local minima.

Bullshit. Random walks are not trapped in local minima.
Local minima are *defined* by the information in the
evaluation function. If there is no information, there
necessarily are no minima. If only one solution state has a
positive value, and all other states share the same lower
value, there likewise is no *local* minimum. Random walks
also do not represent natural selection, but rather would
correspond to genetic drift.

JP>In your experiment, the constants k1 and k2 had to be
JP>empirically determined. This amounts to front loading the
JP>fitness function with information. As you describe in your
JP>paper, the zero-information fitness function (k1=k2=1)
JP>resulted in a local-minimum trap, precisely the problem
JP>alluded to above. Once front loading occurs, the system
JP>evolves to convert the CSI from one form (information
JP>contained in k1 and k2), into another (how to interconnect
JP>the cells in the FPGA), but in this process it cannot be
JP>conclusively shown that any new information was created.

The objection that one goes from an antecedent set of
information in one form to a CSI solution of another form is
no objection at all. An intelligent agent would have no hope
of solving a 100-city tour of the TSP in the absence of the
distance information, or at least would only have the same
expectations as one would obtain from blind search. The
algorithm and the agent operate to generate CSI that did not
exist before from the information that is available. This is
another way of saying that intelligence and algorithms have
the same dependencies when generating CSI.

Let's posit that I have a 100-city tour of a TSP generated by
GA. (BTW, I have done the simulation, and it does work.) It
is the minimum closed path distance. This meets Dembski's
criteria for CSI. The solution state has CSI, whether one
calls it "apparent" or "actual", it still does the job. The
fitness function used simply returns the closed path length
of each candidate solution. The initial candidate population
of solutions had much longer closed path lengths than the end
solution. I assert that this example shows that evolutionary
computation can generate CSI. Further, any and all arguments
that would set aside the production of "actual CSI" by
algorithms would lead to the very same conclusion if
consistently applied to intelligent agents.
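Elsberry's actual 100-city simulation is not reproduced here, but the kind of GA he describes can be sketched briefly. The city layout, population size, and mutation operator below are all illustrative assumptions; the fitness function, as in his description, returns only the closed path length.

```python
import random

random.seed(0)

# Hypothetical small instance; Elsberry's run used 100 cities.
N_CITIES = 12
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N_CITIES)]

def tour_length(tour):
    """Closed-path length: the only information the fitness function returns."""
    return sum(
        ((cities[a][0] - cities[b][0]) ** 2 + (cities[a][1] - cities[b][1]) ** 2) ** 0.5
        for a, b in zip(tour, tour[1:] + tour[:1])
    )

def mutate(tour):
    """Reverse a random segment of the tour (a 2-opt style move)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

# Selection uses only *relative* fitness: shorter tours breed, the worst dies.
population = [random.sample(range(N_CITIES), N_CITIES) for _ in range(30)]
for _ in range(2000):
    population.sort(key=tour_length)
    parent = random.choice(population[:10])
    child = mutate(parent)
    if tour_length(child) < tour_length(population[-1]):
        population[-1] = child

best = min(population, key=tour_length)
print("best tour length: %.1f" % tour_length(best))
```

The point at issue in the thread is that nothing in `tour_length` encodes the solution tour; it only ranks candidates, which is the analogue of the environment grading organisms as relatively better or worse.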

We know what is desired by the IDCs: a bright-line demarcation
between what can be accomplished by natural process and what
can be accomplished by intelligent agency. Unfortunately,
Dembski's original analysis failed to provide such a
demarcation, and it is my opinion that the problems in the
original analysis led to the arguments made in the
"reiterations" post. Those arguments, too, fail to provide
that demarcation. As I argue above, each factor urged as a
reason to exclude algorithms as sources of "actual CSI" can
(and should, if one is consistent in these things) be urged as
a reason to exclude intelligent agency as a source of "actual
CSI".

The whole "apparent CSI"/"actual CSI" distinction seems to me
to be an act of desperation to shore up an otherwise
collapsing argument. More on why later.

JP>Not to say that it isn't possible to do really interesting
JP>things with guided evolution, as your circuit
JP>demonstrates. One question that concerns me though is how the
JP>circuit made use of unspecified parameters (second order
JP>interactions with neighboring cells) to solve the problem. In
JP>a practical design, the overall performance bounds can be
JP>calculated from the part specifications of the composite
JP>components. Allowing a circuit to make use of unspecified
JP>parameters would make proving robustness challenging. Is it
JP>possible somehow to preclude this reliance?

Put the constraint into the fitness function by favoring
solutions that minimize reliance on second order interactions.
Whether it is possible to detect that condition or not is
another matter.


Repeating from prior posts:

Dembski wants us to use the evidence of biological phenomena
to conclude that life was designed, or that certain features
of living systems were designed. (See his First Things
article from October 1998.) In those cases, we do not have
definitive evidence that shows what kind of process produced
the systems in question. Thus, what gets fed to Dembski's
Explanatory Filter is by necessity the produced object alone.
The level of specified complexity inherent in that object is
our guide to whether one must conclude regularity, chance, or
design. We cannot feed the process that produced the object
to Dembski's Explanatory Filter without presupposing the
answer, and thus begging the question.
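The decision structure being described can be sketched as a function of the object alone. The probability thresholds and the boolean `is_specified` input below are placeholder stand-ins; Dembski's actual criteria are considerably more elaborate.

```python
def explanatory_filter(probability: float, is_specified: bool) -> str:
    """Sketch of the Explanatory Filter as discussed above: an object's
    probability (under non-design hypotheses) and its specification
    status are all that get fed in -- not the producing process.
    The 0.5 cutoff is an arbitrary stand-in for 'high probability';
    1e-150 is Dembski's universal probability bound."""
    if probability > 0.5:
        return "regularity"      # high probability: attribute to law
    if probability > 1e-150:
        return "chance"          # intermediate probability
    if is_specified:
        return "design"          # small probability plus specification
    return "chance"              # small probability, no specification

print(explanatory_filter(0.9, False))
print(explanatory_filter(1e-20, False))
print(explanatory_filter(1e-200, True))
```

The sketch makes the begged question visible: the producing process appears nowhere in the signature, so any verdict of "design" rests entirely on how `probability` and `is_specified` were assessed beforehand.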

Feeding an object into Dembski's Explanatory Filter determines
whether the object has the attribute of high probability,
intermediate or low probability without specification, or
"complexity-specification". Dembski's "reiterations" post now
implies that while feeding an object into his Explanatory
Filter may find "complexity-specification" in that object,
feeding the event that produced the object may find only
regularity, and thus only "apparent CSI".

But Dembski previously claimed that his
"complexity-specification" was a completely reliable indicator
of the action of an intelligent agent back in the "First
Things" article. His "reiterations" post stance completely
obviates that claim. If the determination of "actual CSI" or
"apparent CSI" requires the evidence of what sort of process
produced the object in question, then finding that an object
itself has CSI is necessarily ambiguous and uninformative on
the issue of whether it was produced by an intelligent agent
or an unintelligent natural process.

My review of TDI showed that natural selection shared the same
triad of attributes that Dembski claimed for intelligent
agents alone. It appears that Dembski must concur with me,
given his recent post and its distinction between "actual CSI"
and "apparent CSI".


[Quote]

The apparent, but unstated, logic behind the move from design to
agency can be given as follows:

1. There exists an attribute in common of some subset of objects
known to be designed by an intelligent agent.
2. This attribute is never found in objects known not to be
designed by an intelligent agent.
3. The attribute encapsulates the property of directed contingency
or choice.
4. For all objects, if this attribute is found in an object, then
we may conclude that the object was designed by an intelligent agent.

This is an inductive argument. Notice that by the second step,
one must eliminate from consideration precisely those
biological phenomena which Dembski wishes to categorize. In
order to conclude intelligent agency for biological examples,
the possibility that intelligent agency is not operative is
excluded a priori. One large problem is that directed
contingency or choice is not solely an attribute of events due
to the intervention of an intelligent agent. The
"actualization-exclusion-specification" triad mentioned above
also fits natural selection rather precisely. One might thus
conclude that Dembski's argument establishes that natural
selection can be recognized as an intelligent agent.

[End Quote - WR Elsberry, Review of TDI, RNCSE March/April 1999]

Dembski has adopted a new stance that still allows him to
claim that algorithms cannot produce CSI: change the
definition of CSI such that algorithms cannot produce it by
definition. This is easy: just add the qualifiers "actual"
and "apparent". "Actual CSI", then, is the CSI that
intelligent agents come up with. "Apparent CSI" is the CSI
that algorithms come up with.

A solution to a problem that is deemed "Actual CSI" when
a human does it may be identical to the solution found
by algorithm that gets labelled as "Apparent CSI". The
solution is just as complex and works just as well in
either case, but now those algorithms don't get in the
way of a good apologetic.

There are some changes that this makes. The attribute from
(1) in my list now becomes "Actual CSI" rather than just
"CSI". One can, unfortunately, no longer simply "Do the
calculation." (TDI, p.228.) One must know the causal story
beforehand in order to know which of the two qualifiers
("Actual" or "Apparent") is to be prepended to "CSI" before
one even gets so far as figuring out whether "CSI" applies.
And this confirms that my statement, that the possibility that
intelligent design is *not* operative is excluded a priori, was
precisely right. Dembski's penultimate paragraph shows this
very clearly, as he inverts the search order of his own
Explanatory Filter, and insists that *Design* has to be
eliminated from consideration first, rather than flowing as a
conclusion from the data.

[Quote]

Does Davies's original problem of finding radically new laws
to generate specified complexity thus turn into the slightly
modified problem of finding radically new laws that
generate apparent-but not actual-specified complexity in
nature? If so, then the scientific community faces a logically
prior question, namely, whether nature exhibits actual
specified complexity. Only after we have confirmed that nature
does not exhibit actual specified complexity can it be safe to
dispense with design and focus all our attentions on natural
laws and how they might explain the appearance of specified
complexity in nature.

[End Quote -- WA Dembski, "reiterations" post]

--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
If cucumbers & watermelons had antigravity, sunsets would be more interesting.


Wesley R. Elsberry

Sep 28, 1999
In article <37F03609...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
>Marty Fouts wrote:
>> Jeff Patterson filled the aether with:
>> > Marty Fouts wrote:

[...]

JP>I think by correlate here you mean something more like
JP>"follows directly". Correlation is an analog property, not a
JP>binary one. In that sense of the word, objects which have the
JP>attribute of specified complexity are highly correlated with
JP>objects designed by an intelligent agent. But all that is
JP>beside the point. Dembski's evaluation filter is not immune
JP>to false negatives (it can wrongly attribute a designed
JP>object to chance), and he explicitly states this. Using his
JP>filter, your arrowhead would always be attributed to chance.
JP>That attribution may be wrong but so what? The point is that
JP>you can design a filter that never exhibits false positives,
JP>attributing ID to an object generated by chance or law. This
JP>makes the whole arrowhead argument mute.

"Mute"? I'll assume "moot" was meant. Even then, Jeff will
find that "moot" means "arguable".

So far, though, I have not seen anything that would lead me to
believe that signal detection theory has been overturned by
Dembski. Nor have I seen anything that would indicate that
Dembski's Explanatory Filter is an instance of such a design.
In fact, I list several ways in which Dembski's Explanatory
Filter fails to accurately capture how humans go about finding
design in day-to-day life in my review of TDI.

[...]

JP>In both cases though, you could not make the mistake, using
JP>the filter described by Dembski, of attributing a chance
JP>occurrence to ID. The point is *not* that every complex thing
JP>is generated by chance but that at least some are not. If
JP>some of those things that aren't include occurrences which
JP>have no human intervention, the obvious question arises,
JP>where did they come from? You claim without proof that
JP>Natural Selection can be the root cause of CSI, if CSI is
JP>ever found to exist in biological systems. Dembski claims
JP>with proof that NS (or any stochastic, deterministic or
JP>hybrid process) can never generate CSI, only transmute
JP>it. This follows logically from the definition of CSI which,
JP>if I may use very rough terms, is that information not
JP>produced by chance or law.

If that were how Dembski defined CSI, then we could mumble,
"Begs the question," and go home. It is not how he did so,
and thus there is more to argue about. At least, Dembski
did not so define it before his "reiterations" post. With
the introduction of the unspecified qualifiers "apparent"
and "actual" to be prepended to "CSI", it looks like Dembski
may indeed have simply slipped into publicly begging the
question.

Dembski talked about natural selection in his 1997 paper given
at the NTSE. In it, one will find a reference to an extended
analysis of natural selection from which the points given in
the paper were taken. Dembski said that that analysis
appeared in Section 6.3 of "The Design Inference". Bill
Jefferys brought up Dembski's analysis of natural selection
during the discussion period. Bill Jefferys' comments seemed
to have an effect: Section 6.3 of what was published as TDI
does *not* contain an extended discussion of natural
selection. Nor did it get moved to another section of the
book. It disappeared entirely.

So, I would ask Jeff where this "proof" of Dembski's
concerning natural selection is to be found. I know that
Dembski is working on a book that supposedly will give his
extended analysis of both evolutionary computation and
natural selection, but it is not yet here. Nor do I accept
that it is valid in the absence of being able to review it
myself. Where's the proof?

Again, the objections raised to algorithms as sources of CSI
seem to be handled preferentially: they are not applied to
intelligent agents, and yet that application seems both fair
and reasonable. Dembski uses the case of Nick Caputo as an
instance of CSI that implies a design. And yet, Nick Caputo's
design consisted entirely of converting the
previously-existing information of party affiliation into a
preferential placement on voting ballot. People who
plagiarize do no more than *copy* previously existing
information, and yet this is another of Dembski's examples of
design. If one excludes things from producing CSI on the
basis of transmutation of information, then it seems that one
must exclude *both* algorithms and intelligent agents, or at
least intelligent agents who use rational thought in producing
solutions.

JP>If that definition stood by itself, you points about
JP>circularity would be well taken, it excludes chance a
JP>priori. But if we have another, independent test of CSI, the
JP>circularity is broken. We can use the second test to
JP>determine the presence of CSI and use the first definition to
JP>exclude chance and law. Now I think you have implicitly
JP>agreed in your earlier remarks that prespecification of
JP>pattern provides that independent test, but hold that this is
JP>useless because it is impossible to determine the
JP>prespecified pattern from the event. But clearly this is not
JP>so, I can certainly think of some sequences where the pattern
JP>can be discerned with certainty. Does life fall into this
JP>category? Who knows. But if you could be convinced somehow,
JP>that some aspect of life, that is required for something to
JP>be deemed alive, did indeed fall into that category, would
JP>you allow that it follows that NS cannot be responsible for
JP>the observed pattern specificity?

No. The presence of CSI does not imply intelligent agency.
Dembski says this no fewer than three different times in
TDI. One can look at NS and find that it conforms to the
triad of attributes Dembski says define what intelligent
agents do: actualization-exclusion-specification.

[...]

JP>Design may be undetectable, but at issue is whether chance
JP>can masquerade as design.

No it isn't. The issue is whether natural selection can
produce events that have the attribute of CSI. As I point
out in my review, the Design Inference classifies events,
not causes.

JP>I think it is practical to limit this possibility to an
JP>acceptable infinitesimal by suitable choice of the threshold
JP>of specified complexity required to be inherent in the event
JP>under scrutiny. Dembski goes so far as to posit an absolute
JP>probability so low that it equates with impossible, derived
JP>from bounds on the number of atoms in the universe the age of
JP>the universe and the known minimum time for phase
JP>transitions.

Yes. Dembski's 500-bit threshold for CSI is met by the
solution of a 100-city tour of the TSP by genetic algorithm.
(Actually, 97 cities will get us over the CSI threshold, but
100 is a nice round number.)
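[Editor's aside: the arithmetic behind both numbers in this paragraph can be checked in a few lines. This is my own sketch, assuming the complexity of a tour is counted as log2(n!) orderings of n cities; Wesley's exact accounting may differ.]

```python
# Check of two figures from the post (my own sketch, not Wesley's code):
# Dembski's ~500-bit universal bound, and the smallest TSP size whose
# solution exceeds it when a tour's complexity is taken as log2(n!) bits.
import math

# Universal probability bound: 10^80 atoms x 10^45 transitions/sec x 10^25
# seconds ~ 10^150 trials, i.e. roughly 500 bits of improbability.
bound_bits = 150 * math.log2(10)
print(f"universal bound: {bound_bits:.1f} bits")   # about 498.3

def tour_bits(n: int) -> float:
    """log2(n!) bits for one ordering of n cities, via lgamma to avoid overflow."""
    return math.lgamma(n + 1) / math.log(2)

# Smallest city count whose tour complexity crosses 500 bits.
n = 2
while tour_bits(n) < 500:
    n += 1
print(n, f"cities -> {tour_bits(n):.1f} bits")     # 97 cities, ~504.9 bits
```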

[...]

JP>The flaw is in your reasoning. Complexity and improbability
JP>are different measures of the same thing. Design is
JP>noticeable because of *specified* complexity and
JP>improbability. Over and over you want to knock down this
JP>complexity straw man. I keep telling you over and over,
JP>complexity alone is not sufficient and Dembski never says it
JP>is. It has to complexity that adheres to a pattern known in
JP>advance.

Actually, Dembski goes to some trouble to say that the pattern
must be independent of the event. If it is known in advance
of the event, then independence is a given. But Dembski does
hold that such independence can be shown even for producing
specifications for events that have already happened. Else
CSI would not be of much use for Dembski's purposes.

[...]

JP>No it doesn't. What would kill Dembski's argument is proving
JP>chance caused an pattern that is indistinguishable from
JP>design.

And showing that processes like natural selection can produce
CSI means that the argument becomes irrelevant.

[...]

--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.

"sing your faith in what you get to eat right up to the minute you are eaten"-a.


Mark Isaak

Sep 28, 1999, 3:00:00 AM9/28/99
to
In article <1999092805...@cx33978-a.dt1.sdca.home.com>,

Wesley R. Elsberry <w...@cx33978-a.dt1.sdca.home.com> wrote:
>Why does falling water take on a characteristic teardrop shape?

It doesn't. The falling drop is rounded on top and flattened on the
bottom. The teardrop shape is merely a cultural icon to represent
something that people don't normally see.
--
Mark Isaak atta @ best.com http://www.best.com/~atta
"My determination is not to remain stubbornly with my ideas but
I'll leave them and go over to others as soon as I am shown
plausible reason which I can grasp." - Antony Leeuwenhoek


Mark Isaak

Sep 28, 1999, 3:00:00 AM9/28/99
to
In article <wkk8pbp...@usenet.nospam.fogey.com>,

Marty Fouts <mathem...@usenet.nospam.fogey.com> wrote:
>Jeff Patterson filled the aether with:
>> If I hand you a piece of
>> paper with two strings of binary digits say 1000 digits long, each
>> is equally likely to have been generated by a fair coin toss. Say
>> one was in fact so generated. But if the other one is alternating,
>> 101010...., you have no trouble discerning the pattern the designer
>> used to generate the string.
>
>Sorry? The above does not make sense. Because humans like patterns,
>and want them to have meaning, seeing the string 101010... gives me a
>very subjective desire to believe that a 'design' was involved, but
>the fact is that wanting to believe that is not proof that it did
>*not* arise from a sequence of coin tosses. It is, after all, exactly
>as likely as any other string of the same length.

On the other hand, the 101010... string could easily have come about
naturally without design, since there are lots of oscillators in nature
which can produce such a signal. Dembski's filter tries to rule out both
natural law and randomness. One problem is that he doesn't give us a clue
about how to rule out natural law.
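[Editor's aside: Marty's point above can be made concrete. This is my own illustration, not anything from the thread: under a fair-coin model every *specific* 1000-bit string has probability 2^-1000, yet 101010... stands out because a very short description regenerates it, crudely approximated here by zlib compression.]

```python
# Both strings below are equally improbable as individual coin-toss outcomes
# (2^-1000 each); what differs is how compressible -- how patterned -- they are.
import random
import zlib

random.seed(1)
coin_flips = "".join(random.choice("01") for _ in range(1000))
alternating = "10" * 500

for name, s in [("random flips", coin_flips), ("alternating", alternating)]:
    compressed = len(zlib.compress(s.encode()))
    print(f"{name}: {len(s)} chars -> {compressed} bytes compressed")
# The alternating string compresses to a handful of bytes; the random one
# cannot be squeezed much below its ~1000 bits of entropy.
```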

Tim Tyler

Sep 28, 1999, 3:00:00 AM9/28/99
to
Marty Fouts <mathem...@usenet.nospam.fogey.com> wrote:
: Jeff Patterson filled the aether with:

:> Interesting. Do you think language would evolve in the absence of
:> intelligence?

: No. But I believe that a certain sophistication of language is a
: prerequisite for intelligence. Animals are capable of no greater
: intelligence than their languages allow.

[...]

: Language comes before intelligence. One first needs the medium of
: expression and then one expresses.

I don't much like such ideas.

For example, I believe it would be possible for a wandering, asexual nomad
to develop high intelligence, spatial reasoning, tool use, etc., even in
the absence of any significant social contact.

I don't doubt that language accelerates the evolution of intelligence -
and can't really point to a natural example of my wandering nomad.

What is commonly called "intelligence" does have a linguistic component,
of course, but also seems to me to involve spatial reasoning, deduction
and other faculties which have little to do with linguistic competence.


--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com

UART what UEAT.


Ivar Ylvisaker

Sep 29, 1999, 3:00:00 AM9/29/99
to
On 26 Sep 1999 10:06:54 -0400, jpa...@my-deja.com wrote:

>The beauty of Dembski's approach to this is that it gets around having
>to speculate about the proto cell. If CSI can not, as Dembski claims be
>generated by natural processes but only conveyed, it is enough to show
>that a modern cell contains CSI to implicate ID at the origin.

In this case, the beauty is in the eye of the beholder.

There are serious problems with Dembski's approach.

First, Dembski is not proposing a scientific theory of the design that
some see in nature. There will be no experiments and no observations
that confirm or refute his hypothesis. There will be no amplifying
scientific investigations of design. On the other hand, in "Mere
Creation," Dembski indicates that theologians may have a role.

Second, Dembski does not actually argue that there is design in
nature. He only outlines an approach for showing that design is
necessary. His approach requires that all possible alternative
natural hypotheses be examined and found inadequate. But this is an
impossible task.

Third, Dembski's model of an intelligent agent is man. He is arguing
that complexity implies intelligent design because man can design
complicated things. His argument for the nature of the design agent
is from analogy.

Fourth, the term design is incomplete. One cannot detect design.
What one can detect is the implementation of a design. Dembski's
designer must also be a builder. A plausible design event for Dembski
is the origin of life on this planet. But building life requires
detailed manipulations at the molecular level. Dembski's designer not
only requires intelligence; he (or she or it) also requires magical
powers.

Ancient peoples commonly invented gods that were men with magical
powers. Dembski is doing the same thing today, disguising his
invention in a veneer of mathematics and science.

Ivar


howard hershey

Sep 29, 1999, 3:00:00 AM9/29/99
to
Ivar Ylvisaker wrote:
>
> On 27 Sep 1999 01:47:48 -0400, Marty Fouts
> <mathem...@usenet.nospam.fogey.com> wrote:
>
> >not relevant. until you know *how* abiogenesis occured, you can't argue

> >about its properties. There is no evidence that requires that the
> >first objects that we would recognize as alive had to have DNA, nor
> >that the first with DNA had to have long strands of it.
> >
>
> I'm not talking about abiogenesis. All I'm saying is that nature had
> to get to DNA somehow. Dembski is hoping that science will never find
> a way (other than God).
>
> But if you are postulating that DNA is a relatively modern phenomenum,
> then a radical change to the theory of evolution will be necessary.
> And to any theory of life.

Compared to some other biochemical features of life (like the
development of the 'universal' genetic code and translation) one can
indeed argue that DNA, and especially the long multigenic strands of DNA
we use as a genome, are relative latecomers in the molecular biologic
history of life. Comparing DNA replication and chromosomal
conformation, there are clear differences and more variability in the
basic mechanisms of replication initiation, the conformation of
chromosomes, and other features of DNA metabolism between the major
superkingdoms than there is in features like translation. There are
also similarities in all three superkingdoms, but for molecular features
it is hard to distinguish secondary modification of a common system from
independent evolution. It is quite likely that the initial DNA genome was
fragmented and gene sized and present in multiple copies per cell, with
distribution of genes during cell division being initially on the basis
of chance rather than a precise distribution mechanism. I would suspect
that DNA replication and the DNA takeover of genomic functions probably
occurred reasonably close to the time of the final divergence of the
superkingdoms (with eucaryotes retaining more of the primitive state and
eubacteria becoming secondarily simplified). Close enough to allow some
horizontal exchange among these groups (and horizontal transfer was
likely more common at that time), but also to permit the independent
evolution of different strategies for utilizing and fully incorporating
this novel feature of DNA based genomes.

The earliest fossil forms of life had a superficial morphology similar
to that of certain eubacteria. But superficial morphologic similarity
to modern organisms is no more a clue to common internal features than
the superficial morphological similarity of tuna and dolphin is a clue
that implies a common set of internal features of bone structure.
>
> Good night.
>
> Ivar


Jeff Patterson

Sep 29, 1999, 3:00:00 AM9/29/99
to
Wesley R. Elsberry wrote:

> [snip]

> JP>This makes the whole arrowhead argument mute.

> "Mute"? I'll assume "moot" was meant.

No, mute, as in unable to speak to the issue. Oh, all right, it was late and I
was tired.

> Even then, Jeff will
> find that "moot" means "arguable"

Ironic, isn't it, that the common meaning has become the opposite of the
formal one? Even Webster has raised the white flag: "having no legal substance".

> So far, though, I have not seen anything that would lead me to
> believe that signal detection theory has been overturned by
> Dembski.

I'm not sure what you mean here, Wes. I don't think I implied anything at all
about this. I was just trying to establish that Dembski's definition of
information was the same one Shannon developed for signal detection, to use
your term. Dembski applies the definition to inquire about the origin of
information, a question Shannon didn't address. Shannon starts by postulating
an information source. Must we grind every point to powder?

> Nor have I seen anything that would indicate that
> Dembski's Explanatory Filter is an instance of such a design.

Sorry, I'm lost. What do you mean by "such a design"? What category of design
do you have in mind here?

> [snip]
>
> JP> Dembski claims
> JP>with proof that NS (or any stochastic, deterministic or
> JP>hybrid process) can never generate CSI, only transmute
> JP>it. This follows logically from the definition of CSI which,
> JP>if I may use very rough terms, is that information not
> JP>produced by chance or law.
>
> If that were how Dembski defined CSI, then we could mumble,
> "Begs the question," and go home. It is not how he did so,
> and thus there is more to argue about.

For some reason, you chose to split my definition right at the point where I
resolved the "begs the question" issue. But as far as the first part of the
definition goes, at one point you agreed with me. From your published review
of TDI <http://www.rtis.com/nat/user/elsberry/zgists/wre/papers/dembski7.html>:

"From the set of all possible explanations, he first eliminates the explanatory
categories of regularity and chance; then whatever is left is by definition design.
Since all three categories complete the set, design is the set-theoretical
complement of regularity and chance. "

Unless you are quibbling about my use of the word law (which I clearly equate to
regularity below), I don't see how your review summary of Dembski's argument differs
from mine.
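[Editor's aside: the elimination logic that both summaries describe can be put in schematic form. This is my own sketch; the thresholds and predicate arguments are placeholders, not Dembski's actual criteria.]

```python
# Schematic of the Explanatory Filter as the quoted review summarizes it:
# design is simply the set-theoretic complement of regularity and chance.
# All thresholds and inputs here are illustrative placeholders.
def explanatory_filter(event, law_prob, chance_prob, specified,
                       universal_bound=2 ** -500):
    """Classify an event by elimination; `event` is just a label."""
    if law_prob > 0.99:                # high-probability: attribute to regularity
        return "regularity"
    if chance_prob > universal_bound or not specified:
        return "chance"                # not improbable enough, or no independent pattern
    return "design"                    # whatever is left over

# An improbable-but-not-impossible specified event still lands in "chance",
# because 2^-20 is nowhere near the universal bound.
print(explanatory_filter("20 heads in a row", law_prob=0.0,
                         chance_prob=2 ** -20, specified=True))
```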

> At least, Dembski
> did not so define it before his "reiterations" post. With
> the introduction of the unspecified qualifiers "apparent"
> and "actual" to be prepended to "CSI", it looks like Dembski
> may indeed have simply slipped into publicly begging the
> question.

You have me at a disadvantage here as I haven't seen the post you reference and
haven't run across him using the qualifiers you mentioned in the articles I have
read. A pointer would be most appreciated.

> Dembski talked about natural selection in his 1997 paper given
> at the NTSE. In it, one will find a reference to an extended
> analysis of natural selection from which the points given in
> the paper were taken. Dembski said that that analysis
> appeared in Section 6.3 of "The Design Inference". Bill
> Jefferys brought up Dembski's analysis of natural selection
> during the discussion period. Bill Jefferys' comments seemed
> to have an effect: Section 6.3 of what was published as TDI
> does *not* contain an extended discussion of natural
> selection. Nor did it get moved to another section of the
> book. It disappeared entirely.
>
> So, I would ask Jeff where this "proof" of Dembski's
> concerning natural selection is to be found. I know that
> Dembski is working on a book that supposedly will give his
> extended analysis of both evolutionary computation and
> natural selection, but it is not yet here. Nor do I accept
> that it is valid in the absence of being able to review it
> myself. Where's the proof?

Here we are largely in agreement. I hope you haven't construed my refutations of
what I believe to be fallacious arguments as an implication of my belief that
Dembski has proved his case. The glaring weakness I find in his argument comes down
to an unjustifiable assumption that superposition applies. That is, eliminating
chance OR regularity as cause of CSI (which he has done to my satisfaction) is not
sufficient to disprove chance AND regularity as cause, if the system which binds
these processes is non-linear. Now this would seem an obvious objection and one
which I assume has been raised before. I just haven't been able to find it and so
post it now so that you know I remain unconvinced.

> Again, the objections raised to algorithms as sources of CSI
> seem to be handled preferentially: they are not applied to
> intelligent agents, and yet that application seems both fair
> and reasonable.

I have some thoughts on this matter and would be interested in your response. We
started down this road on another post and got sidetracked. In the meantime, I have
refined an analogy I think is illustrative. Let's use Dembski's archer-as-agent
analogy as a starting point. Recall he uses a target painted on a wall as an example
of CSI, in that hitting any point on the wall is a zero-probability event (for
this argument I assume that the target is a point in the mathematical sense,
i.e. has zero area), which makes the information associated with that event
complex; singling out a particular point in advance of the shot makes the
information specified.
Now I propose to evolve an archer. To do so I randomly choose an ensemble of vectors
each comprised of a direction and an angle relative to the horizon from the space of
such vectors which intersect the wall (or I could enclose the whole thought
experiment in a sphere and then the space would be all such vectors). Next we shoot
an arrow according to the information contained in each vector. Note that these
vectors contain complex but unspecified information. We measure the distance from
each arrow to the target and choose a subset of the vectors which are closest.
For each vector in the subset we flip a coin to decide which of the two
elements to change and then randomly change the selected element. We repeat the
experiment for this new generation of vectors. I maintain such a system will evolve
a perfect archer given a large enough initial ensemble, because in the limit as the
ensemble grows to encompass all of the available information space, it contains the
specified point with probability 1.
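[Editor's aside: the procedure just described fits in a few lines of code. This is my own simplification of it: the direction/angle vectors are reduced to 2-D aim points on the wall, and a small Gaussian perturbation stands in for "randomly change the selected element".]

```python
# Minimal sketch of the archer-evolver: keep the arrows nearest the target,
# flip a coin to pick which coordinate of each survivor to mutate, repeat.
import random

random.seed(0)
TARGET = (3.7, 1.2)            # the prespecified point on the wall

def distance(p):
    return ((p[0] - TARGET[0]) ** 2 + (p[1] - TARGET[1]) ** 2) ** 0.5

# Random initial ensemble of 200 shots anywhere on a 10 x 10 wall.
population = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]

for generation in range(200):
    survivors = sorted(population, key=distance)[:50]   # closest quarter live
    population = []
    for x, y in survivors:
        for _ in range(4):                              # refill the ensemble
            if random.random() < 0.5:                   # coin flip: which element
                population.append((x + random.gauss(0, 0.1), y))
            else:
                population.append((x, y + random.gauss(0, 0.1)))

best = min(population, key=distance)
print(f"best arrow lands {distance(best):.4f} units from the target")
```

The selection step pulls the ensemble toward the target exactly as a ball is pulled toward the earth; whether that counts as creating or merely finding the specified point is the question at issue.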

So we've evolved our archer with a reasonable, if simplified, model of genetic
algorithms in general. We could add complications like sex between the
surviving vectors, multiple mutations, etc., but that would not change the
result. Given enough time and a large enough ensemble, we eventually can get
arbitrarily close to the target. Have we produced CSI? Only in the sense that
a machine designed to find a needle in a haystack can produce the needle. It
does not, however, create the needle. The needle was there to begin with. In
the same way, GA's can produce *knowledge* by scouring a predefined
information space to identify a given piece of CSI. They cannot create the
information themselves. They are attracted to the solution as surely as a ball
dropped from a height is attracted to the earth's center of gravity. This is
what Dembski meant by GA's being probability amplifiers, an exceedingly poor
choice of words. His idea here, though, is clearly that if a machine is
designed to find a piece of CSI, it surely belongs in the regularity set, even
if it uses stochastic processes in its implementation.

You have reasonably asked, in another GA context, how one can tell the
difference between Dembski's archer-as-agent and my evolved archer. If Dembski
offers "apparent" vs. "real" CSI as an answer, then IMHO he's raised the white
flag too soon. My answer would be that you cannot tell the difference because
BOTH were designed, that is, IF he could prove that the real archer was in
fact designed. He may well have to raise the flag at some point on that issue
-- but the archer-evolver is a design as surely as is a bubble sort algorithm.

> Dembski uses the case of Nick Caputo as an
> instance of CSI that implies a design. And yet, Nick Caputo's
> design consisted entirely of converting the
> previously-existing information of party affiliation into a
> preferential placement on voting ballot. People who
> plagiarize do no more than *copy* previously existing
> information, and yet this is another of Dembski's examples of
> design. If one excludes things from producing CSI on the
> basis of transmutation of information, then it seems that one
> must exclude *both* algorithms and intelligent agents, or at
> least intelligent agents who use rational thought in producing
> solutions.

I would include both agents and algorithms as things designed- or at least if one
is, the other is as well.

> JP>If that definition stood by itself, you points about
> JP>circularity would be well taken, it excludes chance a
> JP>priori.

BTW, here's where I answered your begging the question objection...

> JP>But if we have another, independent test of CSI, the
> JP>circularity is broken. We can use the second test to
> JP>determine the presence of CSI and use the first definition to
> JP>exclude chance and law. Now I think you have implicitly
> JP>agreed in your earlier remarks that prespecification of
> JP>pattern provides that independent test, but hold that this is
> JP>useless because it is impossible to determine the
> JP>prespecified pattern from the event.

[snip]

> No. The presence of CSI does not imply intelligent agency.
> Dembski says this no fewer than three different times in
> TDI. One can look at NS and find that it conforms to the
> triad of attributes Dembski says define what intelligent
> agents do: actualization-exclusion-specification.

I think I must surrender on this issue. It is possible that NS falls into the
category where the superposition assumption I think Dembski erroneously makes breaks
down. Note that that is not the same as saying I think NS is capable of producing
CSI. I am actually quite skeptical that it can. I will allow, though, that
when coupled with a non-linear system, which I presume describes genetic
inheritance, it cannot be excluded, on a set-theoretical basis and using
Dembski's definition, as a possible producer of CSI.

> [...]
>
> [snip]


>
> JP>I think it is practical to limit this possibility to an
> JP>acceptable infinitesimal by suitable choice of the threshold
> JP>of specified complexity required to be inherent in the event
> JP>under scrutiny. Dembski goes so far as to posit an absolute
> JP>probability so low that it equates with impossible, derived
> JP>from bounds on the number of atoms in the universe the age of
> JP>the universe and the known minimum time for phase
> JP>transitions.
>
> Yes. Dembski's 500-bit threshold for CSI is met by the
> solution of a 100-city tour of the TSP by genetic algorithm.
> (Actually, 97 cities will get us over the CSI threshold, but
> 100 is a nice round number.)
>

I haven't reviewed your paper on this algorithm (would you mind reposting a
pointer to it? I can't find it), but I suspect it is another needle finder.
The target is
the shortest length path and the fitness function allows the subset of paths whose
lengths are minimized to live another day. It has one interesting wrinkle though
that I will give some thought to. Being discrete (the segments between cities are of
fixed lengths), it allows for the possibility of multiple solutions, if it so
happens that multiple permutations yield the same path length. At first blush it
seems that this is equivalent to placing multiple needles in the haystack but this
is based merely on intuition.

> [...]
>
> JP>The flaw is in your reasoning. Complexity and improbability
> JP>are different measures of the same thing. Design is
> JP>noticeable because of *specified* complexity and
> JP>improbability. Over and over you want to knock down this
> JP>complexity straw man. I keep telling you over and over,
> JP>complexity alone is not sufficient and Dembski never says it
> JP>is. It has to complexity that adheres to a pattern known in
> JP>advance.
>
> Actually, Dembski goes to some trouble to say that the pattern
> must be independent of the event.

As it must be, to avoid the "begs the question" objection already discussed.
Question: given that Dembski specifies pattern independence as a necessary
condition, do you agree that the "begs the question" objection is (un)moot
:>) ?

> If it is known in advance
> of the event, then independence is a given. But Dembski does
> hold that such independence can be shown even for producing
> specifications for events that have already happened. Else
> CSI would not be of much use for Dembski's purposes.

I agree with both points (yours and Dembski's). I think the best example of
Dembski's is the cryptologist who cracks a code. If suddenly the gibberish
turns into Hamlet, one can with near certainty assume that one has found the
prespecified encryption pattern.
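[Editor's aside: a toy version of that cryptologic test, my own construction rather than Dembski's example: try every Caesar shift and flag the one decryption that matches an independently given specification, here a small English word list standing in for "turns into Hamlet".]

```python
# Every key yields *some* string; only one matches the prespecified pattern
# of reading as English, which is how the cryptologist knows the code is cracked.
ENGLISH_WORDS = {"to", "be", "or", "not", "that", "is", "the", "question"}

def shift(text: str, k: int) -> str:
    """Caesar-shift lowercase letters by k, leaving other characters alone."""
    return "".join(
        chr((ord(c) - 97 + k) % 26 + 97) if c.isalpha() else c
        for c in text
    )

ciphertext = shift("to be or not to be that is the question", 7)

def english_score(text: str) -> int:
    """Count words matching the independently specified pattern."""
    return sum(word in ENGLISH_WORDS for word in text.split())

best_key = max(range(26), key=lambda k: english_score(shift(ciphertext, -k)))
print(best_key, "->", shift(ciphertext, -best_key))   # key 7 recovers the plaintext
```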

> [...]
>
> JP>No it doesn't. What would kill Dembski's argument is proving
> JP>chance caused an pattern that is indistinguishable from
> JP>design.
>
> And showing that processes like natural selection can produce
> CSI means that the argument becomes irrelevant.

If you have done this, I missed it. If you offer GA's as an attempt to prove
this, I think you fall well short.

Jeff


Jeff Patterson

Sep 29, 1999, 3:00:00 AM9/29/99
to

Jeff Patterson wrote:

> Wesley R. Elsberry wrote:
>
>
> > Dembski uses the case of Nick Caputo as an
> > instance of CSI that implies a design. And yet, Nick Caputo's
> > design consisted entirely of converting the
> > previously-existing information of party affiliation into a
> > preferential placement on voting ballot. People who
> > plagiarize do no more than *copy* previously existing
> > information, and yet this is another of Dembski's examples of
> > design. If one excludes things from producing CSI on the
> > basis of transmutation of information, then it seems that one
> > must exclude *both* algorithms and intelligent agents, or at
> > least intelligent agents who use rational thought in producing
> > solutions.
>
> I would include both agents and algorithms as things designed- or at least if one
> is, the other is as well.
>

This was sloppy on my part. What I should have said, in conformance with my
earlier remarks, was that the algorithm was designed. That the agent was
designed remains speculative.

Jeff

Jeff Patterson

Sep 29, 1999, 3:00:00 AM9/29/99
to
Tim Tyler wrote:

> ...but there's more than one theory that claims life could come into
> existence rather easily and gives details of the possible mechanism
> involved.

If it's so goddam easy, why doesn't somebody just do it (create life from
inanimate material) and end at least the "impossible" part of the debate?

Jeff


Jeff Patterson

Sep 29, 1999, 3:00:00 AM9/29/99
to
Marty Fouts wrote:

>
>
> Language precedes intelligence. All those forms of 'reasoning' have
> linguistic components. You can not reason _about_ something unless you
> reason _in_ a language.
>

My dog has this annoying habit of reasoning his way out of his pen. Do you
suppose he talks in his sleep?

Jeff


Jeff Patterson

Sep 29, 1999, 3:00:00 AM9/29/99
to
Ivar Ylvisaker wrote:

> On 26 Sep 1999 10:06:54 -0400, jpa...@my-deja.com wrote:
>
> >The beauty of Dembski's approach to this is that it gets around having
> >to speculate about the proto cell. If CSI can not, as Dembski claims be
> >generated by natural processes but only conveyed, it is enough to show
> >that a modern cell contains CSI to implicate ID at the origin.
>
> In this case, the beauty is in the eye of the beholder.
>
> There are serious problems with Dembski's approach.
>
> First, Dembski is not proposing a scientific theory of the design that
> some see in nature. There will be no experiments and no observations
> that confirm or refute his hypothesis.

Wrong. The experiments will attempt to prove or disprove that an observable
event contains a measurable entity, namely CSI.

> There will be no amplifying
> scientific investigations of design. On the other hand, in "Mere
> Creation," Dembski indicates that theologians may have a role.

I am sick of this one. To most people, the overriding concern is arriving at
a closer and closer approximation to the truth; they couldn't care less about
the taxonomy of the source. If science wants to define itself in a manner
that allows it to ignore the 400-pound gorilla sitting in the living room,
fine. The rest of us will leave you to play in your Escherian sandbox and
march merrily along.

> Second, Dembski does not actually argue that there is design in
> nature. He only outlines an approach for showing that design is
> necessary.

More accurately that there are things that can't be explained by chance or
regularity (natural law). He (wrongly in my view), thinks that all that is
left is design. I think this relies on an unjustifiable assumption of
superposition and thus does not eliminate the *possibility* that chance AND
regularity, bound together in some non-linear way, could generate CSI.

> His approach requires that all possible alternative
> natural hypotheses be examined and found inadequate. But this is an
> impossible task.

Again not so. He has formulated an algorithm for doing so. Others may
improve or refine it. Any resemblance this activity has to science is merely
coincidental.

> Third, Dembski's model of an intelligent agent is man. He is arguing
> that complexity implies intelligent design because man can design
> complicated things.

Not in any way. His argument is from set theory. Regularity, chance and
design form the universe of creative power. His argument attempts to
eliminate two of these for certain classes of creation events, leaving
design. Using this logic though, I believe we must include chance AND
regularity in a non-linear system as an element of the design set, unless
and until he explicitly removes them by proof.

> His argument for the nature of the design agent
> is from analogy.

I don't think he has speculated at all on the nature of the design agent.
Despite the fervent hopes that he fall into this trap, he has scrupulously
avoided it.

> Fourth, the term design is incomplete.

Perhaps we agree here.

> One cannot detect design.
> What one can detect is the implementation of a design.

Of course Dembski doesn't attempt to detect design but a type of information
that he feels implicates design.

> Dembski's
> designer must also be a builder. A plausible design event for Dembski
> is the origin of life on this planet. But building life requires
> detailed manipulations at the molecular level. Dembski's designer not
> only requires intelligence; he (or she or it) also requires magical
> powers.

Do molecular biologists possess magical powers? If not, are you saying that
creation of life in the laboratory is impossible? If not, why can one agent
do without the magic you require of the other?

"When you believe in things that you don't understand then you suffer,
superstition ain't the way" -Stevie Wonder

Jeff


maff91

Sep 29, 1999, 3:00:00 AM
to

Ivar Ylvisaker

Sep 29, 1999, 3:00:00 AM
to
On 29 Sep 1999 20:18:08 -0400, Jeff Patterson <jp...@mpmd.com> wrote:

>Ivar Ylvisaker wrote:
>

>> First, Dembski is not proposing a scientific theory of the design that
>> some see in nature. There will be no experiments and no observations
>> that confirm or refute his hypothesis.
>
>Wrong. The experiments will attempt to prove or disprove that an observable
>event contains a measurable entity, namely CSI.

Dembski is attempting to show that there is design in nature and,
hence, there must be (or have been) a designer (or designers). But
Dembski offers no hypothesis about this design or about the
designer(s). Moreover, he specifically says he doesn't intend to.
From the beginning of chapter 3 of The Design Inference: "Indeed,
confirming hypotheses is precisely what the design inference does not
do. The design inference is in the business of eliminating hypotheses,
not confirming them."

With respect to CSI, see the comment about Dembski's algorithm below.

>> There will be no amplifying
>> scientific investigations of design. On the other hand, in "Mere
>> Creation," Dembski indicates that theologians may have a role.
>
>I am sick of this one. To most people, the overriding concern is arriving at
>a closer and closer approximation to the truth and couldn't care less about
>the taxonomy of the source. If science wants to define itself in a manner
>that allows them to ignore the 400 pound gorilla sitting in the living room,
>fine. The rest of us will leave you to play in your Escherian sandbox and
>march merrily along.

I don't understand this comment. Which one does "this one" refer to?

>> Second, Dembski does not actually argue that there is design in
>> nature. He only outlines an approach for showing that design is
>> necessary.
>
>More accurately that there are things that can't be explained by chance or
>regularity (natural law). He (wrongly in my view), thinks that all that is
>left is design. I think this relies on an unjustifiable assumption of
>superposition and thus does not eliminate the *possibility* that chance AND
>regularity, bound together in some non-linear way, could generate CSI.
>
>> His approach requires that all possible alternative
>> natural hypotheses be examined and found inadequate. But this is an
>> impossible task.
>
>Again not so. He has formulated an algorithm for doing so. Others may
>improve or refine it. Any resemblance this activity has to science is merely
>coincidental.

What algorithm? Look at his summary of The Design Inference on page
222 of his book by the same name. It begins "Suppose a subject S has
identified all the relevant chance hypotheses H that could be
responsible for some event E." What algorithm does a scientist use to
do this?

>> Third, Dembski's model of an intelligent agent is man. He is arguing
>> that complexity implies intelligent design because man can design
>> complicated things.
>
>Not in any way. His argument is from set theory. Regularity, chance and
>design form the universe of creative power. His argument attempts to
>eliminate two of these for certain classes of creation events, leaving
>design. Using this logic though, I believe we must include chance AND
>regularity in a non-linear system as an element of the design set, unless
>and until he explicitly removes them by proof.
>
>> His argument for the nature of the design agent
>> is from analogy.
>
>I don't think he has speculated at all on the nature of the design agent.
>Despite the fervent hopes that he fall into this trap, he has scrupulously
>avoided it.

But he says the agent is intelligent. The one agent that we know is
intelligent is man. What does intelligent mean if it does not mean
man-like?

>> Fourth, the term design is incomplete.
>
>Perhaps we agree here.
>
>> One cannot detect design.
>> What one can detect is the implementation of a design.
>
>Of course Dembski doesn't attempt to detect design but a type of information
>that he feels implicates design.
>
>> Dembski's
>> designer must also be a builder. A plausible design event for Dembski
>> is the origin of life on this planet. But building life requires
>> detailed manipulations at the molecular level. Dembski's designer not
>> only requires intelligence; he (or she or it) also requires magical
>> powers.
>
>Do molecular biologists possess magical powers? If not, are you saying that
>creation of life in the laboratory is impossible? If not, why can one agent
>do without the magic you require of the other?

I was trying to emphasize what a broad claim Dembski is making.
Dembski is implying that he has found a crack into the realm of the
supernatural.

Ivar


jpa...@my-deja.com

Sep 30, 1999, 3:00:00 AM
to
In article <37f2b3da...@news.erols.com>,

ylvi...@erols.com (Ivar Ylvisaker) wrote:
> On 29 Sep 1999 20:18:08 -0400, Jeff Patterson <jp...@mpmd.com> wrote:
>
> >Ivar Ylvisaker wrote:
> >
>
> >> First, Dembski is not proposing a scientific theory of the design that
> >> some see in nature. There will be no experiments and no observations
> >> that confirm or refute his hypothesis.
> >
> >Wrong. The experiments will attempt to prove or disprove that an observable
> >event contains a measurable entity, namely CSI.
>
> Dembski is attempting to show that there is design in nature and,
> hence, there must be (or have been) a designer (or designers). But
> Dembski offers no hypothesis about this design or about the
> designer(s). Moreover, he specifically says he doesn't intend to.
> From the beginning of chapter 3 of The Design Inference: "Indeed,
> confirming hypotheses is precisely what the design inference does not
> do. The design inference is in the business of eliminating hypotheses,
> not confirming them."

And you don't see eliminating scientific hypotheses as legitimate
science??

>
> With respect to CSI, see the comment about Dembski's algorithm below.
>
> >> There will be no amplifying
> >> scientific investigations of design. On the other hand, in "Mere
> >> Creation," Dembski indicates that theologians may have a role.
> >
> >I am sick of this one. To most people, the overriding concern is arriving at
> >a closer and closer approximation to the truth and couldn't care less about
> >the taxonomy of the source. If science wants to define itself in a manner
> >that allows them to ignore the 400 pound gorilla sitting in the living room,
> >fine. The rest of us will leave you to play in your Escherian sandbox and
> >march merrily along.
>
> I don't understand this comment. Which one does "this one" refer to?

"This one" refers to the various attempts to define away the whole issue
of ID as being outside the bounds of science.

Dembski has published an "Explanatory Filter"
http://www.arn.org/docs/dembski/wd_explfilter.htm which is an algorithm
to determine whether to attribute an event to chance, law, or design. A
variation of it is being developed at SETI, further evidence that this
is indeed science by any reasonable meaning of the word.
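The filter's decision logic can be sketched as a simple three-way test. This is my own sketch of the idea as described in the thread; the function name and the threshold values are hypothetical, not taken from Dembski's paper:

```python
# Sketch of the Explanatory Filter's decision structure as described
# above (the thresholds "high" and "low" are my own placeholders, not
# Dembski's numbers): high-probability events go to "regularity",
# intermediate ones to "chance", and only low-probability *specified*
# events get attributed to "design".

def explanatory_filter(probability, specified, high=0.5, low=1e-10):
    if probability >= high:
        return "regularity"
    if probability > low:
        return "chance"
    return "design" if specified else "chance"

print(explanatory_filter(0.9, False))    # regularity
print(explanatory_filter(0.01, False))   # chance
print(explanatory_filter(1e-50, True))   # design
print(explanatory_filter(1e-50, False))  # chance
```

The key structural point is the last branch: low probability alone never yields "design" — the event must also match an independently given specification.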

>
> >> Third, Dembski's model of an intelligent agent is man. He is arguing
> >> that complexity implies intelligent design because man can design
> >> complicated things.
> >
> >Not in any way. His argument is from set theory. Regularity, chance and
> >design form the universe of creative power. His argument attempts to
> >eliminate two of these for certain classes of creation events, leaving
> >design. Using this logic though, I believe we must include chance AND
> >regularity in a non-linear system as an element of the design set, unless
> >and until he explicitly removes them by proof.
> >
> >> His argument for the nature of the design agent
> >> is from analogy.
> >
> >I don't think he has speculated at all on the nature of the design agent.
> >Despite the fervent hopes that he fall into this trap, he has scrupulously
> >avoided it.
>
> But he says the agent is intelligent. The one agent that we know is
> intelligent is man. What does intelligent mean if it does not mean
> man-like?

Asking that question is precisely what you characterize as a "serious
problem in Dembski's approach", as if investigation of other possible
forms of intelligent agents in or outside the universe is not
legitimate science.

>
> >> Fourth, the term design is incomplete.
> >
> >Perhaps we agree here.
> >
> >> One cannot detect design.
> >> What one can detect is the implementation of a design.
> >
> >Of course Dembski doesn't attempt to detect design but a type of information
> >that he feels implicates design.
> >
> >> Dembski's
> >> designer must also be a builder. A plausible design event for Dembski
> >> is the origin of life on this planet. But building life requires
> >> detailed manipulations at the molecular level. Dembski's designer not
> >> only requires intelligence; he (or she or it) also requires magical
> >> powers.
> >
> >Do molecular biologists possess magical powers? If not, are you saying that
> >creation of life in the laboratory is impossible? If not, why can one agent
> >do without the magic you require of the other?
>
> I was trying to emphasize what a broad claim Dembski is making.

What broad claim is that? BTW, you didn't answer the questions.

> Dembski is implying that he has found a crack into the realm of the
> supernatural.

I don't find that implication at all. What I find is unreasoned fear
among mainstream scientists that this may be so. Again, Dembski
steadfastly refuses to speculate on the characteristics of the design
agent that may be implicated by his theories.

Jeff




Michael

Sep 30, 1999, 3:00:00 AM
to
Jeff Patterson wrote:

> Here we are largely in agreement. I hope you haven't construed my refutations of
> what I believe to be fallacious arguments as an implication of my belief that
> Dembski has proved his case. The glaring weakness I find in his argument comes down
> to an unjustifiable assumption that superposition applies. That is, eliminating
> chance OR regularity as cause of CSI (which he has done to my satisfaction) is not
> sufficient to disprove chance AND regularity as cause, if the system which binds
> these processes is non-linear. Now this would seem an obvious objection and one
> which I assume has been raised before. I just haven't been able to find it and so
> post it now so that you know I remain unconvinced.

Actually, chance and regularity can be reduced to chance. Note that
Dembski does not require a uniform probability distribution (though
all of his examples seem to use it). You should be able to represent
the combination of law and chance as a non-uniform density. In
this sense, I agree with Dembski that GA's are probability amplifiers.
They make finding a solution much more probable. As long as
Dembski defines complexity as improbability, then he has
"defined away" the whole question.


[SNIP for brevity]


>
> > You have reasonably asked in another GA context, how one can tell the
> > difference between Dembski's archer-as-agent and my evolved archer? If Dembski
> > offers "apparent" vs. "real" CSI as an answer then IMHO he's raised the white flag
> > too soon. My answer would be that you cannot tell the difference because BOTH were
> designed, that is, IF he could prove that the real archer was in fact designed. He
> may well have to raise the flag at some point on that issue -- but the
> archer-evolver is a design as surely as is a bubble sort algorithm.
>

Where does the assumption that the algorithm was designed come from?
Consider a flat plain with a circle drawn on it. Toss a ball into the air and it
lands inside the circle. This is analogous to firing an arrow at a wall.
Assuming a large plain, the circle is CSI. Now consider a plain that is
sloped. The circle occupies the lowest part of the plain. Now it becomes
very likely that the ball will end up inside the circle. What has happened?
Is gravity designed (your argument, IIUC)? Or is the circle not really
CSI? Something else?
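The sloped-plain thought experiment can be simulated directly. This sketch is my own construction (the plain size, circle radius, and sample count are arbitrary), illustrating how folding a deterministic "law" into a chance process concentrates the outcome distribution:

```python
import random

# Sketch (mine, not from the thread): a ball lands uniformly on a
# 100 x 100 plain; the circle of radius 1 at the centre is hit rarely.
# Add a slope that rolls every ball to the lowest point (inside the
# circle) and the hit rate goes to 1 -- law plus chance behaves like a
# single, highly non-uniform distribution.

random.seed(2)

def toss():
    return random.uniform(-50, 50), random.uniform(-50, 50)

def inside_circle(p):
    x, y = p
    return x * x + y * y <= 1.0

def roll_downhill(p):
    # The sloped plain deterministically carries every ball to the bottom.
    return (0.0, 0.0)

n = 10_000
flat_hits = sum(inside_circle(toss()) for _ in range(n)) / n
sloped_hits = sum(inside_circle(roll_downhill(toss())) for _ in range(n)) / n
print(flat_hits, sloped_hits)  # flat: near pi/10000; sloped: 1.0
```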


Mike

Jeff Patterson

Sep 30, 1999, 3:00:00 AM
to
Marty Fouts wrote:

> Jeff Patterson filled the aether with:
>
> > Marty Fouts wrote:
>
> >[snip]...Also, don't confuse problem solving with reasoning.

I realize that to those for whom Natural Selection is the answer to
everything, problem solving without reason comes as second nature. To
those of us in the real world, one is required to accomplish the other.

Jeff


Mark Isaak

Sep 30, 1999, 3:00:00 AM
to
In article <37F2AD97...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
>Ivar Ylvisaker wrote:
>> First, Dembski is not proposing a scientific theory of the design that
>> some see in nature. There will be no experiments and no observations
>> that confirm or refute his hypothesis.
>
>Wrong. The experiments will attempt to prove or disprove that an observable
>event contains a measurable entity, namely CSI.

As I see it, Dembski has already proposed the scientific hypothesis that
CSI won't be detectable in anything which occurs naturally. That
hypothesis has been falsified.

>> ... Dembski indicates that theologians may have a role.


>
>I am sick of this one. To most people, the overriding concern is arriving at
>a closer and closer approximation to the truth and couldn't care less about
>the taxonomy of the source. If science wants to define itself in a manner
>that allows them to ignore the 400 pound gorilla sitting in the living room,
>fine. The rest of us will leave you to play in your Escherian sandbox and
>march merrily along.

Since the 400 pound gorilla is invisible, weightless, and otherwise
undetectable, why shouldn't we ignore it?

>> Second, Dembski does not actually argue that there is design in
>> nature. He only outlines an approach for showing that design is
>> necessary.
>
>More accurately that there are things that can't be explained by chance or
>regularity (natural law).

He refuses to recognize, though, that there is a big difference between
"isn't explained" and "can't be explained." His entire thesis reduces to
the ever-popular argument from incredulity.

Stephen F. Schaffner

Sep 30, 1999, 3:00:00 AM
to
In article <wkiu4sn...@usenet.nospam.fogey.com>,
Marty Fouts <mathem...@usenet.nospam.fogey.com> wrote:

>'problem solving' is *not* reasoning, nor is reasoning always required
>for problem solving. Rote, brute force, and exhaustive search do very
>well at times, for example. And none of those things are "natural
>selection", either.

My question is, which do you do more frequently, solve the problem
without reasoning or reason without solving the problem?

-------
Steve Schaffner s...@genome.wi.mit.edu
SLAC and I have a deal: they don't || Immediate assurance is an excellent sign
pay me, and I don't speak for them. || of probable lack of insight into the
|| topic. Josiah Royce


Jeff Patterson

Sep 30, 1999, 3:00:00 AM
to

Michael wrote:

> Jeff Patterson wrote:
>
> > Here we are largely in agreement. I hope you haven't construed my refutations of
> > what I believe to be fallacious arguments as an implication of my belief that
> > Dembski has proved his case. The glaring weakness I find in his argument comes down
> > to an unjustifiable assumption that superposition applies. That is, eliminating
> > chance OR regularity as cause of CSI (which he has done to my satisfaction) is not
> > sufficient to disprove chance AND regularity as cause, if the system which binds
> > these processes is non-linear. Now this would seem an obvious objection and one
> > which I assume has been raised before. I just haven't been able to find it and so
> > post it now so that you know I remain unconvinced.
>

> Actaully, chance and regularity can be reduced to chance. Note that
> Dembski does not require a uniform probability distribution (though
> all of his examples seem to use it). You should be able to represent
> the combination of law and chance as a non-uniform density.

For a large class of systems this is true. Signal flow theory tells us we can represent
such a system as a stochastic source driving a system function to one or more outputs.
Thus if at some output y we have y=g(x) as the system function and X is some r.v. with
pdf fx(x), then the pdf fy(y) can be shown to be (cf. Papoulis, p. 126)

fy(y) = fx(x1)/|g'(x1)| + fx(x2)/|g'(x2)| + ... + fx(xn)/|g'(xn)|

where x1, x2, ... xn are the real roots of y = g(x),

and fy(y) = 0

for any y such that no real roots of y = g(x) exist. But there are certain restrictions on
g(x): it must have a countable number of discontinuities and must not be constant over
any interval.
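As a sanity check, the formula can be verified numerically. This is my own sketch (not from the post), using g(x) = x^2 with X standard normal, where the roots of y = g(x) are x = +/- sqrt(y) and g'(x) = 2x:

```python
import math
import random

# Sketch: check fy(y) = sum_i fx(xi)/|g'(xi)| for g(x) = x^2, X ~ N(0,1).

def fx(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def fy(y):
    if y <= 0:
        return 0.0  # no real roots of y = x^2 for y < 0
    r = math.sqrt(y)
    return fx(r) / (2 * r) + fx(-r) / (2 * r)

# Monte Carlo estimate of P(a < Y < b) versus the formula integrated
# over [a, b] with a simple midpoint rule.
random.seed(0)
n = 200_000
a, b = 0.5, 1.5
hits = sum(1 for _ in range(n) if a < random.gauss(0, 1) ** 2 < b)
mc = hits / n
steps = 1000
quad = sum(fy(a + (k + 0.5) * (b - a) / steps) for k in range(steps)) * (b - a) / steps
print(mc, quad)  # the two estimates should agree to about 2 decimal places
```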

Now suppose g(x) is a quantizer (staircase function), which violates the above. For a
continuous X, the outcome Y will also be quantized. This means Fy(y) (the probability that
Y <= y) will be a staircase, which means the pdf fy(y) will be a series of impulses located at
each staircase transition. These impulsive pdfs may present a problem for Dembski because
they represent concentrations of information, i.e., in these cases the information space
would be punctuated with points where P(Y=y) is non-zero. Whether this does in fact
present a problem of generality for Dembski is not clear to me, but I know from (painful)
experience that wrapping a quantizer in a feedback loop (so-called delta-sigma modulators)
can result in structured limit cycles that are anything but random. Another important
case is where y=g(x) has only complex roots, which gives fy(y)=0 for all y.
Oscillators, which fall into this category, require noise as input to start but have
outputs which are quite structured.
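A tiny simulation (my own, with arbitrary parameters) makes the point-mass behaviour concrete:

```python
import math
import random
from collections import Counter

# Sketch (not from the post): a staircase g produces point masses in the
# output distribution -- P(Y = k) > 0 at each quantizer level -- whereas
# a continuous pdf has P(Y = y) = 0 for every single value y.

random.seed(1)
samples = [random.gauss(0, 1) for _ in range(100_000)]
quantized = [math.floor(x) for x in samples]  # unit-step quantizer

mass = Counter(quantized)
p_zero = mass[0] / len(quantized)  # estimates P(0 <= X < 1), about 0.34
print(round(p_zero, 2))
```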

> In
> this sense, I agree with Dembski that GA's are probability amplifiers.
> They make finding a solution much more probable. As long as
> Dembski defines complexity as improbability, then he has
> "defined away" the whole question.

If by question you mean whether GAs can create CSI, I don't think the answer depends on
the definition of complexity. In my example which shows how GA's "find" CSI that already
exists, I did not use such a definition.

> [SNIP for brevity]


>
> >
> > You have reasonably asked in another GA context, how one can tell the
> > difference between Dembski's archer-as-agent and my evolved archer? If Dembski
> > offers "apparent" vs. "real" CSI as an answer then IMHO he's raised the white flag
> > too soon. My answer would be that you cannot tell the difference because BOTH were
> > designed, that is, IF he could prove that the real archer was in fact designed. He
> > may well have to raise the flag at some point on that issue -- but the
> > archer-evolver is a design as surely as is a bubble sort algorithm.
> >
>

> Where does the assumption that the algorithm was designed come from?

Uh... 'cause I designed it just as Dawkins designed his "methinks" evolver and just like
all GAs are designed.

> Consider a flat plain with a circle drawn on it. Toss a ball into the air and it
> lands inside the circle. This is analogous to firing an arrow at a wall.
> Assuming a large plain, the circle is CSI. Now consider a plain that is
> sloped. The circle occupies the lowest part of the plain.

You've just designed a system.

> Now it becomes
> very likely that he ball will end up inside the circle. What has happened?

A number of things. You've demonstrated a basic precept of information theory, that is
that information is always tied to a particular experiment. If instead of throwing a ball
you shoot an arrow we are right back to the archer analogy. You've also shown that by
introducing regularity you have removed the uncertainty from the system, a result that
seems law-like. A system whose output is certain cannot convey information much less
create it. Finally, you've given me an excellent mental picture of the impulsive pdf I
was describing above. The pdf for the system you've described contains an impulse at the
location of the dimple.

> Is gravity designed (your argument, IIUC)?

Careful, you're stepping outside the bounds of science :>)

> Or is the circle not really CSI?

Depends on the experiment. If you're throwing balls it is not CSI, if you are shooting
arrows it is.

Jeff

hrgr...@my-deja.com

Sep 30, 1999, 3:00:00 AM
to
In article <37F39762...@mpmd.com>,

A stone falling down from a mountain and a light ray refracted by a
water surface solve the variational problem of minimizing action.
Reasoning does not come into it.

HRG.

Tim Tyler

Sep 30, 1999, 3:00:00 AM
to
Jeff Patterson <jp...@mpmd.com> wrote:
: Tim Tyler wrote:

:> ...but there's more than one theory that claims life could come into
:> existence rather easily and gives details of the possible mechanism
:> involved.

: If it's so goddam easy why doesn't somebody just do it (create life from
: inanimate material) and end at least the "impossible" part of the debate?

It depends on what you mean by "life"?

Small-scale evolutionary processes are likely to be coming into existence
on a daily basis in pools all over the planet as part of the mechanics of
crystal growth.

Many people regularly create evolving systems in virtual worlds.

Simple evolutionary processes take place among computer viruses
distributed around the internet.

Of course these latter examples are strictly "life coming from life"...

What would it take to make you happy?


--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com

Anarchy is against the law.


Tracy P. Hamilton

Sep 30, 1999, 3:00:00 AM
to

Jeff Patterson wrote in message <37F3B0D2...@mpmd.com>...

>
>
>Michael wrote:
>
>> Jeff Patterson wrote:


[snip]

>> Consider a flat plain with a circle drawn on it. Toss a ball into the air and it
>> lands inside the circle. This is analogous to firing an arrow at a wall.
>> Assuming a large plain, the circle is CSI. Now consider a plain that is
>> sloped. The circle occupies the lowest part of the plain.
>
>You've just designed a system.


Not if you draw the circle after the fact. Which is exactly what
Dembski and others do.

They see the ball going to only a specific place on a complex
landscape, and say "Ooooh, CSI!"

There are physical reasons that a particular (specified) biopolymer
is chosen over a large number of those having equivalent probability
of being formed at random. They aren't formed at random.

>> Now it becomes
>> very likely that the ball will end up inside the circle. What has happened?
>
>A number of things. You've demonstrated a basic precept of information theory, that is
>that information is always tied to a particular experiment.

How about this one: Just because information theory can be
applied to a system does not mean it had information "put" there
by intelligence.

[snip]

Tracy P. Hamilton

Jeff Patterson

Sep 30, 1999, 3:00:00 AM
to
Mark Isaak wrote:

> In article <37F2AD97...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
> >Ivar Ylvisaker wrote:
> >> First, Dembski is not proposing a scientific theory of the design that
> >>