It helps to be concrete. Assume that the possible design event in
question is abiogenesis, the beginning of life on this planet. I know
of no event more promising than this for proving design.
Abiogenesis could be a natural event or a design event. Natural is my
term. Dembski divides my natural events into regular events and chance
events [TDI page 36]. Regular events -- "regularities" -- are those
explained by natural laws. Chance events are events like the outcomes
of honest lotteries. But there is a problem with Dembski's division.
Real events in the real world are always affected by many factors, some
of which are lawful and some of which are random. Abiogenesis, if it is
a natural event, depends on lawful events like the ability of atoms to
bind to one another and on chance events like weather and ocean
currents. Hence, "natural events" is a more realistic category.
If abiogenesis was a natural event, then we do NOT know how it
happened. Some people have ideas but no one, to my knowledge, has
advanced a theory that he or she thinks should be compelling to others.
I assume that abiogenesis, if natural, would be a stochastic process;
atoms and groups of atoms would build up over time until living cells
were present. The process could have started more than once and there
may be more than one way in which the process can succeed. The combination
of such processes is a chance process in that we could never predict
precisely when life would first establish itself or even if it would
ever establish itself. (But we know we are alive.)
Dembski's alternative to abiogenesis as a natural event is abiogenesis
as a design event. Dembski's writings are ambiguous about whether
intelligent design is a scientific theory (or hypothesis). The Design
Inference says no: "Indeed, confirming hypotheses is precisely what the
design inference does not do. The design inference is in the business of
eliminating hypotheses, not confirming them" [TDI page 68]. In "The
Design Inference," design events are what's left after regular and
chance events have been eliminated. But other writings imply yes: "But
in fact intelligent design can be formulated as a scientific theory
having empirical consequences and devoid of religious commitments"
[IDasInfo]. Regardless, Dembski argues that you can identify design by
recognizing choice. "The principal characteristic of intelligent agency
is choice. Whenever an intelligent agent acts, it chooses from a range
of competing possibilities" [S&D]. In some articles, Dembski calls this
"Actualization-Exclusion-Specification" [IDasInfo], in others
"complexity-specification" [S&D].
But the idea of abiogenesis as a design choice has problems. What are
the "range of competing possibilities" that the designer excluded? This
is an awkward question if the design agent is a man. It is even more
awkward if the design agent is a god. Dembski's answer is curious. He
does not clarify his design hypothesis; he points instead to the
competing natural hypothesis. Of course, this is awkward too because
there is no well defined natural hypothesis for abiogenesis. So Dembski
invents one. Dembski feels that he has to show that the "choosing"
cannot be an accidental outcome, that an intelligent agent must have
acted. Therefore, the actual outcome must be improbable if the process
is a natural one: "The role of complexity in detecting design is now
clear since improbability is precisely what we mean by complexity"
[IDasInfo]. Dembski is vague about his version of the natural
hypothesis but he seems to be thinking of something like assembling the
first DNA by randomly stringing different bases together until a
functioning molecule appears. That would take a very long time; hence,
Dembski asserts that there must be an intelligent agent that knows a
shortcut.
Specification also has a problem. A choice implies that the design
agent intended to choose one possibility and to reject the others.
Dembski implies that "specification" is his way of identifying the
selected possibility. Dembski uses the example of an archer who paints
a bull's-eye on a wall. The archer then backs off and shoots his arrows
into the bull's-eye, which demonstrates his intention (and ability) to
do so. But Dembski does not define specification in a way that assures
that specification reflects intention. In "The Design Inference," a
specification is defined as a pattern and a pattern is defined as "any
description that corresponds uniquely to some prescribed event" [TDI
page 36]. In this discussion, the event is abiogenesis. According to
Dembski: "Specification in biology always makes reference in some way
to an organism's function" [S&D]. Dembski quotes Richard Dawkins for an
example: "In the case of living things, the quality that is specified
in advance is . . . the ability to propagate genes in reproduction"
[S&D]. But even if living things can be specified, how does that tell
us that the creation of living things was intended? Put another way,
how can we be sure that a choice was made and, therefore, that an
intelligent chooser existed? Maybe God inadvertently left some garbage
when she visited Earth and we are the unintended consequence. As an
aside, Dembski again links an element of his design hypothesis to an
element of the competing natural hypothesis. He says that the
information that the specification is based on must not influence the
hypothesis that abiogenesis is a natural event.
A last comment. If a detailed, specific abiogenesis hypothesis is
proposed, how do we know whether to assign it to the natural category or
to the design category? Suppose something akin to natural selection is
proposed. Does that count as natural or design? If the implementing
being cannot talk, does that count as intelligent? Suppose the being is
a machine, a robot. What then?
Ivar
References:
[TDI] The Design Inference. Cambridge University Press 1998
[S&D] "Science and Design" First Things October 1998 (
http://www.firstthings.com/ftissues/ft9810/dembski.html )
[IDasInfo] "Intelligent Design as a Theory of Information" (
http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html )
: Some recent posts led me to investigate some of Dembski's thinking
: about intelligent design more carefully.
: It helps to be concrete. Assume that the possible design event in
: question is abiogenesis, the beginning of life on this planet. I know
: of no event more promising than this for proving design.
It seems to me few events are less promising. Computers, toasters,
washing machines, and venetian blinds are all far more likely to be
designed than the abiogenesis event, IMO.
: [S&D] "Science and Design" First Things October 1998 (
: http://www.firstthings.com/ftissues/ft9810/dembski.html )
: [IDasInfo] "Intelligent Design as a Theory of Information" (
: http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html )
Dembski's argument is essentially that of Michael Behe: that there's such
a thing as "irreducible complexity" that signals design events.
The idea that this notion has any relevance to biology is utter twaddle.
I and others have mashed it on a number of occasions now.
To quote from DBB:
``An irreducibly complex system cannot be produced . . . by slight,
successive modifications of a precursor system, because any precursor
to an irreducibly complex system that is missing a part is by
definition nonfunctional. . . . Since natural selection can only
choose systems that are already working, then if a biological
system cannot be produced gradually it would have to arise as an
integrated unit, in one fell swoop, for natural selection to have
anything to act on.''
...but there are no such systems in biology.
Systems which have all their components in a complex interdependence
with one another don't /necessarily/ indicate design - they just indicate
the existence of a supporting structure that is no longer visible.
Consider a stone arch. Remove any stone in an arch and it collapses.
Every stone is a keystone.
Does this mean an arch is "irreducibly complex"? Of course not!
To build an arch, simply heap up a pile of stones, build the arch on top
of that and then remove the pile of stones.
This argument is due to A. G. Cairns-Smith.
Observing a system with complex interdependencies - where removing
individual components causes things to break - is *not* sufficient
to conclude that an object has been designed. You have to show
that no conceivable chain of events could possibly have led to
the structure by small modifications.
*Even* then you can't conclude that intelligent design was
necessarily involved:
When the event concerned is abiogenesis, /even/ if the event
was fantastically improbable (which it was not), the (weak) anthropic
principle can be employed to conquer *any* odds for this particular
event. Intelligent design need not be invoked.
Behe's and Dembski's argument about "irreducible complexity" in
biology utterly collapses over such points.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
Love is chemistry, sex is physics.
> Ivar Ylvisaker <ylvi...@erols.com> wrote:
>
> : Some recent posts led me to investigate some of Dembski's thinking
> : about intelligent design more carefully.
>
> : It helps to be concrete. Assume that the possible design event in
> : question is abiogenesis, the beginning of life on this planet. I know
> : of no event more promising than this for proving design.
>
> It seems to me few events are less promising. Computers, toasters,
> washing machines, and venetian blinds are all far more likely to be
> designed than the abiogenesis event, IMO.
>
> : [S&D] "Science and Design" First Things October 1998 (
> : http://www.firstthings.com/ftissues/ft9810/dembski.html )
>
> : [IDasInfo] "Intelligent Design as a Theory of Information" (
> : http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html )
>
> Dembski's argument is essentially that of Michael Behe: that there's such
> a thing as "irreducible complexity" that signals design events.
Tim, given your generally erudite insights, I am surprised at the above
remark. You obviously have either not read or not understood Dembski.
Dembski's argument from complex specified information (CSI) in the above
reference has nothing at all to do with irreducible complexity except that
these phrases both contain a word with a common root. CSI is pattern-specific
information associated with low probability events (actualizations to use
Dembski's term). The central question is whether such information can be
generated by natural causes. Dembski's claim is that it cannot since by
natural cause we mean arising from either a natural law or from a random
stochastic process. It is well accepted that information (much less CSI)
cannot be generated from an event whose outcome is known a priori, which
precludes those arising from natural law (e.g., no information is contained in
the observation that an apple, when dropped, hit the ground). The question thus
boils down to, can CSI be generated by a random stochastic process. But, so
the argument goes, CSI by its very nature fits a specified pattern which is
complex enough to preclude occurrence by random chance (such a filter has been
developed for use in the SETI project to determine if signals received from
space are from an intelligent source), and thus cannot have arisen randomly.
While the argument appears logically sound, its proof depends on the criteria
for distinguishing CSI from information in general. If a logically acceptable
bound on the pattern complexity can be formulated and CSI which meets this
criterion can be shown to exist, Dembski's conclusions necessarily follow,
with a confidence determined by the probability that the pattern was
indeed generated by chance.
> The idea that this notion has any relevance to biology is utter twaddle.
> I and others have mashed it on a number of occasions now.
>
> To quote from DBB:
>
> ``An irreducibly complex system cannot be produced . . . by slight,
> successive modifications of a precursor system, because any precursor
> to an irreducibly complex system that is missing a part is by
> definition nonfunctional. . . . Since natural selection can only
> choose systems that are already working, then if a biological
> system cannot be produced gradually it would have to arise as an
> integrated unit, in one fell swoop, for natural selection to have
> anything to act on.''
>
> ...but there are no such systems in biology.
>
> Systems which have all their components in a complex interdependence
> with one another don't /necessarily/ indicate design - they just indicate
> the existence of a supporting structure that is no longer visible.
>
> Consider a stone arch. Remove any stone in an arch and it collapses.
> Every stone is a keystone.
>
> Does this mean an arch is "irreducibly complex"? Of course not!
>
> To build an arch, simply heap up a pile of stones, build the arch on top
> of that and then remove the pile of stones.
Of course the more arches you build which share a common capstone, the more
difficult it becomes to remove the rocks without the whole thing collapsing.
The arch argument ignores the inter-relatedness of functionality at the
cellular level.
[rest deleted]
Jeff
>Tim Tyler wrote:
>
>> Ivar Ylvisaker <ylvi...@erols.com> wrote:
>>
>> : Some recent posts led me to investigate some of Dembski's thinking
>> : about intelligent design more carefully.
>>
>> : It helps to be concrete. Assume that the possible design event in
>> : question is abiogenesis, the beginning of life on this planet. I know
>> : of no event more promising than this for proving design.
>>
>> It seems to me few events are less promising. Computers, toasters,
>> washing machines, and venetian blinds are all far more likely to be
>> designed than the abiogenesis event, IMO.
>>
>> : [S&D] "Science and Design" First Things October 1998 (
>> : http://www.firstthings.com/ftissues/ft9810/dembski.html )
>>
>> : [IDasInfo] "Intelligent Design as a Theory of Information" (
>> : http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html )
>>
Try http://x33.deja.com/getdoc.xp?AN=519544184
[...]
--
L.P.#0000000001
>Ivar Ylvisaker <ylvi...@erols.com> wrote:
>
>: Some recent posts led me to investigate some of Dembski's thinking
>: about intelligent design more carefully.
>
>: It helps to be concrete. Assume that the possible design event in
>: question is abiogenesis, the beginning of life on this planet. I know
>: of no event more promising than this for proving design.
>
>It seems to me few events are less promising. Computers, toasters,
>washing machines, and venetian blinds are all far more likely to be
>designed than the abiogenesis event, IMO.
I don't think that Dembski is very interested in toasters as "possible
design events." I earlier wrote a post in which I asked whether there
were any events other than abiogenesis that would be interesting to
Dembski. In that post, I excluded events due to humans, animals, and
aliens from outer space. It would have been clearer if I had done the
same here.
[snip]
>When the event concerned is abiogenesis, /even/ if the event
>was fantastically improbable (which it was not), the (weak) anthropic
>principle can be employed to conquer *any* odds for this particular
>event. Intelligent design need not be invoked.
Richard Swinburne has an argument against this. Dembski, I think,
refers to it some place.
Suppose you are a prisoner. Your captor shows you a machine. The
machine will randomly generate a number between one and the number of
atoms in the universe. You have to guess the number. If you guess
correctly, your life will be spared. If you guess wrong, the machine
will kill you. You guess and live.
Swinburne says that this outcome is not credible.
Ivar
While I do not find Dembski's reasoning persuasive, he is a little
more logical than your post suggests.
>There in lies the first problem with Dembski's reasoning. Low
>probability of occurrence does not equal complexity, nor does it need
>arise from design.
Dembski does equate improbability and complexity. His reasoning seems
to be that improbable events when they occur convey lots of
information, i.e., you have to use lots of bits to distinguish the
event that did occur from all the other events that might have
occurred. And an information measure is a kind of complexity.
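To make that equivalence concrete, here is a minimal sketch in Python
(my own illustration of the -log2(p) measure, not anything from Dembski):

  import math

  def bits(p):
      # Dembski-style "complexity" of an event of probability p:
      # the bits needed to single that event out from the alternatives.
      return -math.log2(p)

  print(bits(0.5))    # a fair coin flip: 1.0 bit -- not complex
  print(bits(1e-9))   # a one-in-a-billion event: ~29.9 bits
  # For one outcome out of 2**n equally likely ones, the measure is
  # exactly n bits, so improbability and bit-count rise together.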
>Using Dembski's definition of information, for example, any quantum
>state transition in a crystal qualifies as a low probability occurrence
>that contains information, and is clearly not a design artifact or
>evidence of complexity.
>
>One such source of causes is quantum events. Thus collapses Dembski's
>entire house of cards.
Dembski does recognize that complex events can occur that are not
designed. Your quantum events is one example. Shuffling a deck of
cards is another. Dembski's solution is to require "side information"
that somehow independently identifies -- i.e., specifies -- the
specific outcome. In the case of abiogenesis, the ability of all
living things to reproduce themselves is possible side information.
>But much information is gained from noting that this apple, not that
>one, hit the ground. Dembski and his followers routinely confuse
>abstraction with 'actualization', in their quest for 'information.
Events (actualizations) do generate information according to Dembski.
But you have to distinguish ordinary information from complex
specified information (CSI).
>The question [can CSI be generated by a random stochastic process]
>is poorly formed. The underlying question is "what is
>complexity"? The best Dembski can do is show that complexity of
>information is subjective, rather than objective, as when he uses the
>example of a speaker of Mandarin. (Apologies to Ruth Rendel fans.)
In "The Design Inference," Dembski has a chapter on complexity. He
briefly discusses various kinds of measures of complexity including an
information measure. He doesn't do any calculations of measures that
are important so far as I know. But the question of how one can
generate CSI -- or, say, abiogenesis -- using a random stochastic
process -- or, better, using any natural process -- is a reasonable
one. Of course, I cannot imagine how one would ever show that it is
impossible.
Why Ruth Rendel fans?
Ivar
:>When the event concerned is abiogenesis, /even/ if the event
:>was fantastically improbable (which it was not), the (weak) anthropic
:>principle can be employed to conquer *any* odds for this particular
:>event. Intelligent design need not be invoked.
: Richard Swinburne has an argument against this. Dembski, I think,
: refers to it some place.
: Suppose you are a prisoner. Your captor shows you a machine. The
: machine will randomly generate a number between one and the number of
: atoms in the universe. You have to guess the number. If you guess
: correctly, your life will be spared. If you guess wrong, the machine
: will kill you. You guess and live.
: Swinburne says that this outcome is not credible.
Richard Swinburne tries to use the anthropic principle to argue for the
existence of a divine creator!
Apart from the fact that this completely discredits all his views on the
subject, the argument you present seems to be logically flawed.
The correct analogy is with *infinite* numbers of prisoners, each of whose
lives depends on the improbable event.
It is not the case that one /particular/ prisoner has to live - any of
them will do. If *any* of them survive - they will look at a resulting
universe and see it arranged in such a manner that life can exist.
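A toy simulation of the point, in Python (the finite population and the
odds are my stand-ins for Tim's infinite ensemble, chosen only to make
the effect visible):

  import random

  ODDS = 100000          # each prisoner survives with chance 1 in ODDS
  PRISONERS = 1000000    # many independent prisoners (universes)

  survivors = sum(1 for _ in range(PRISONERS)
                  if random.randrange(ODDS) == 0)

  # Roughly PRISONERS/ODDS (~10) survivors exist despite the long odds,
  # and every one of them observes "I survived" -- so, conditioned on
  # being around to make the observation, it has no force against chance.
  print(survivors)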
There /may/ be an argument that if you pick a universe at random with
life in it, then the probability of abiogenesis in that universe is high.
However, /that/ argument is another story ;-)
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
I'm not a complete idiot - several parts are missing.
:>> But I suspect that only one event is crucial to Dembski and that is
:>> abiogenesis. The first life had those long, unlikely DNA strings.
:>
:>Sorry, but you're making an assumption that may not need to be true.
:>We don't know how abiogenesis happened, and so there is no need to
:>require that DNA be present, let alone present in long strands.
: I should have said the first life that has been observed ....
No one who observed the first organisms is around to tell the tale today.
Abiogenesis does *not* require unlikely DNA strings, or anything terribly
unlikely, certainly not any of Dembski's "CSI".
The lowest probability abiogenesis scenario that I'm aware of is A. G.
Cairns-Smith's theory of the mineral origin of life. This has primitive
evolutionary processes happening spontaneously *today* in bodies of water
around the planet, but never "taking off" due to problems relating to
organic molecules in the environment being "already spoken for".
Estimating probability of such scenarios is a hairy business, though - and
the "Kauffman" crowd and regular "protocell" folks all seem to believe
that they have the probability of life's origin down to plausible levels.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
Love is grand; divorce, forty grand.
:> Systems which have all their components in a complex interdependence
:> with one another don't /necessarily/ indicate design - they just indicate
:> the existence of a supporting structure that is no longer visible.
:>
:> Consider a stone arch. Remove any stone in an arch and it collapses.
:> Every stone is a keystone.
:>
:> Does this mean an arch is "irreducibly complex"? Of course not!
:>
:> To build an arch, simply heap up a pile of stones, build the arch on top
:> of that and then remove the pile of stones.
: Of course the more arches you build which share a common capstone, the more
: difficult it becomes to remove the rocks without the whole thing collapsing.
: The arch argument ignores the inter-relatedness of functionality at the
: cellular level.
No, no! ;-)
The "arch" argument indicates that no matter /how/ complex and inter
related things are at the cellular level, they *still* may have been
built by the use of elaborate supporting structures.
Showing that inter-dependence exists proves diddley-squat about whether
a system can be built by gradual processes.
You can build a large number of arches which share a common capstone
(moving only one stone at a time) if you first build a mound of rocks,
then build the arches, and then /carefully/ remove the "scaffolding".
Whether the arch collapses if you then remove a stone says nothing about
whether it was constructed by gradual processes. If you really want to
remove stones from the arch, moving a stone at a time and without the
arch collapsing, start by building a pile of rocks underneath it to act
as supporting scaffolding.
Similarly arguing from the apparent interdependency of things at the
cellular level simply *ignores* the possibility that these were
constructed from simpler systems, which are now no longer visible.
See A. G. Cairns-Smith, 1982, 1985 for one idea concerning what form
these structures took.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
Kilroy occupied these co-ordinates.
>Ivar Ylvisaker filled the aether with:
>
>> On 26 Sep 1999 12:32:13 -0400, Marty Fouts
>> <mathem...@usenet.nospam.fogey.com> wrote:
>
>>> Ivar Ylvisaker filled the aether with:
>>>
>>>> But I suspect that only one event is crucial to Dembski and that
>>>> is abiogenesis. The first life had those long, unlikely DNA
>>>> strings.
>>> Sorry, but you're making an assumption that may not need to be
>>> true. We don't know how abiogenesis happened, and so there is no
>>> need to require that DNA be present, let alone present in long
>>> strands.
>
>> I should have said the first life that has been observed ....
>
>is a long way down the path from abiogenesis.
There are fossil records of cells in rocks that are 3.5 billion years
old. You can't see the DNA but, if they didn't have it, you have
uncovered a new major crisis for evolution.
Ivar
>Ivar Ylvisaker <ylvi...@erols.com> wrote:
>: <mathem...@usenet.nospam.fogey.com> wrote:
>:>Ivar Ylvisaker filled the aether with:
>
>:>> But I suspect that only one event is crucial to Dembski and that is
>:>> abiogenesis. The first life had those long, unlikely DNA strings.
>:>
>:>Sorry, but you're making an assumption that may not need to be true.
>:>We don't know how abiogenesis happened, and so there is no need to
>:>require that DNA be present, let alone present in long strands.
>
>: I should have said the first life that has been observed ....
>
>No one who observed the first organisms is around to tell the tale today.
>
>Abiogenesis does *not* require unlikely DNA strings, or anything terribly
>unlikely, certainly not any of Dembski's "CSI".
>
>The lowest probability abiogenesis scenario that I'm aware of is A. G.
>Cairns-Smith's theory of the mineral origin of life. This has primitive
>evolutionary processes happening spontaneously *today* in bodies of water
>around the planet, but never "taking off" due to problems relating to
>organic molecules in the environment being "already spoken for".
>
>Estimating probability of such scenarios is a hairy business, though - and
>the "Kauffman" crowd and regular "protocell" folks all seem to believe
>that they have the probability of life's origin down to plausible levels.
By unlikely, I meant something more like surprising or puzzling rather
than impossible.
Many people are trying to invent a plausible abiogenesis process. No
one, to my knowledge, has a theory that is very convincing to other
scientists.
Ivar
>not relevant. until you know *how* abiogenesis occurred, you can't argue
>about its properties. There is no evidence that requires that the
>first objects that we would recognize as alive had to have DNA, nor
>that the first with DNA had to have long strands of it.
>
I'm not talking about abiogenesis. All I'm saying is that nature had
to get to DNA somehow. Dembski is hoping that science will never find
a way (other than God).
But if you are postulating that DNA is a relatively modern phenomenon,
then a radical change to the theory of evolution will be necessary.
And to any theory of life.
Good night.
Ivar
The question is how much information was needed for that first
self-replicating molecule that led to the abiogenic process. Dembski's argument
assumes a priori that it was large; large enough to be improbable. It's
circular.
Stuart
Dr. Stuart A. Weinstein
Ewa Beach Institute of Tectonics
"To err is human, but to really foul things up
requires a creationist"
:>:>> But I suspect that only one event is crucial to Dembski and that is
:>:>> abiogenesis. The first life had those long, unlikely DNA strings.
:>:>
:>:>Sorry, but you're making an assumption that may not need to be true.
:>:>We don't know how abiogenesis happened, and so there is no need to
:>:>require that DNA be present, let alone present in long strands.
:>
:>: I should have said the first life that has been observed ....
:>
:>No one who observed the first organisms is around to tell the tale today.
:>
:>Abiogenesis does *not* require unlikely DNA strings, or anything terribly
:>unlikely, certainly not any of Dembski's "CSI".
:>
:>The lowest probability abiogenesis scenario that I'm aware of is A. G.
:>Cairns-Smith's theory of the mineral origin of life. [...]
: By unlikely, I meant something more like surprising or puzzling rather
: than impossible.
Surprising or puzzling would be fine by me.
However, for Dembski's argument to apply to abiogenesis, the situation
*demands* that the event be not just improbable - but very, *very*
unlikely.
: Many people are trying to invent a plausible abiogenesis process. No
: one, to my knowledge, has a theory that is very convincing to other
: scientists.
...but there's more than one theory that claims life could come into
existence rather easily and gives details of the possible mechanism
involved.
The reason no one story has yet won out is not because the stories
involved are all very unlikely (the Cairns-Smith reference suggests
exactly the opposite is the case) but because the events are lost in
the mists of time and nobody can see which of the various possible
scenarios actually happened.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
'i' before 'e', except in pleiotropy.
> Marty Fouts wrote:
>
> >Ivar Ylvisaker filled the aether with:
>
> >> Dembski does equate improbability and complexity. His reasoning
> >> seems to be that improbable events when they occur convey lots of
> >> information, i.e., you have to use lots of bits to distinguish the
> >> event that did occur from all the other events that might have
> >> occurred. And an information measure is a kind of complexity.
>
> >But Dembski, who should know better, is wrong. If the universe is
> >large and only one event can occur from it, then the probability
> >of any event is low, with no correlation to complexity.
>
> First, you are confusing probability with probability density. You must
> INTEGRATE pd over some interval to obtain the probability of an
> observed datum being between that interval. The information contained in
> that event, and measured in a specified way, is calculated from the
> probability, not the pd. In the limit, as the interval goes to zero, it
> takes an infinite number of bits to convey the information,
> corresponding to the infinite number of significant digits it would take
> to specify an analog value precisely.
>
> Second, your example is muddled. Your premise states that one event can
> occur and then you go on to talk about the probability of _any_ event.
>
> Finally, you insist on claiming Dembski's arguments for ID rest on
> complexity alone. This is false. There are two types of complexity,
> specified and unspecified. Unspecified complexity can result from random
> stochastic processes while specifed complexity, according to Dembski,
> can not.
>
> You are walking along the beach and it begins to rain. You come across a
> large pile of drift wood that forms a crude shelter and so you crawl
> inside. Once in, you gaze across the beach and see a beautiful beach
> home built on the cliffs above. What distinguishes these two shelters?
> They are both complex. The probability of the driftwood being arranged
> in exactly that pattern is exceedingly low and the structure can therefore
> be said to be complex. The difference of course is that one of them is
> built according to a _plan_, the other is not. The beach home exhibits
> specified complexity which implies design. The driftwood shelter is
> complex but unspecified.
> [snip]
>
> >> Dembski does recognize that complex events can occur that are not
> >> designed. Your quantum events is one example. Shuffling a deck of
> >> cards is another. Dembski's solution is to require "side
> >> information" that somehow independently identifies -- i.e.,
> >> specifies -- the specific outcome. In the case of abiogenesis, the
> >> ability of all living things to reproduce themselves is possible
> >> side information.
>
> >He's got his cart and horse backwards here. He is positing 'side
> >information' without showing that it is necessary.
>
> Necessary for what? He's saying some information adheres to a specified
> pattern and some does not. That which does implies design. The more
> complex the pattern, the greater the implication for design. What about
> this simple concept confuses you?
> [snip]
>
> >> In "The Design Inference," Dembski has a chapter on complexity. He
> >> briefly discusses various kinds of measures of complexity including
> >> an information measure. He doesn't do any calculations of measures
> >> that are important so far as I know. But the question of how one
> >> can generate CSI -- or, say, abiogenesis -- using a random
> >> stochastic process -- or, better, using any natural process -- is a
> >> reasonable one. Of course, I cannot imagine how one would ever show
> >> that it is impossible.
>
> The beauty of Dembski's approach to this is that it gets around having
> to speculate about the proto-cell. If CSI cannot, as Dembski claims, be
> generated by natural processes but only conveyed, it is enough to show
> that a modern cell contains CSI to implicate ID at the origin.
>
> >It's a classic fallacy. That CSI *might* exist is not proof that it
> >does,
>
> This is silly. Examples of CSI as defined by Dembski (he is free to
> define it however he wishes) abound, which obviously does prove that it
> exists. You may disagree with the inference but not the premise that CSI
> exists. It does, by definition.
>
> > while on the other hand, there is plenty of evidence that the
> >'side information' is not necessary.
>
> Please provide one example of complex specified information, (complex
> information with the 'side information' of specification) being
> _generated_ by a non-intelligent mechanism.
>
I applied Dembski's definition to the following:
http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html
The search space is 2^1800 different circuits. The specified information
is the functional criteria for the circuit. If you assume a uniform
probability distribution, the probability that a functional circuit could
arise in the time involved (allowing 5 seconds for each test over 3 weeks)
is about 1 in a billion, IIRC (I don't have my calculations with me).
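For what it's worth, here is the arithmetic implied by those numbers, in
Python (the trial budget is my naive reconstruction; the 1-in-a-billion
figure itself is Mike's recollection, which this does not verify):

  # Trial budget implied by the quoted numbers:
  seconds = 3 * 7 * 24 * 3600   # three weeks
  trials = seconds // 5         # 5 seconds per fitness test
  print(trials)                 # 362880 tests

  # Under a uniform distribution over 2**1800 circuits, the chance that
  # ~363k blind draws hit any set of k "functional" circuits is roughly
  # trials * k / 2**1800 -- negligible unless k is astronomically large,
  # which is why the uniform-probability assumption does all the work.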
There are a couple of ways to dispute the fact that CSI was created. One
(which Dembski seems to favor) is that the uniform density assumption is
violated by the evolutionary process, making a seemingly complex event,
non-complex (read more probable). By doing this however, you essentially
define away the problem of evolving specified complexity since it by
definition is not complex. This doesn't help answer the question at all
because now seemingly complex events (DNA, for example) are not complex, if
they evolved. How can you tell the difference?
Mike
> [snip]
> > Please provide one example of complex specified information, (complex
> > information with the 'side information' of specification) being
> > _generated_ by a non-intelligent mechanism.
> >
>
> I applied Dembski's definition to the following:
>
> http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html
>
> The search space is 2^1800 different circuits. The specified information
> is the functional criteria for the circuit. If you assume a uniform
> probability distribution, the probability that a functional circuit could
> arise in the time involved (allowing 5 seconds for each test over 3 weeks)
> is about 1 in a billion, IIRC (I don't have my calculations with me).
A nice piece of work, one which, as a EE, I can really appreciate :>)
> There are a couple of ways to dispute the fact that CSI was created. One
> (which Dembski seems to favor) is that the uniform density assumption is
> violated by the evolutionary process, making a seemingly complex event,
> non-complex (read more probable). By doing this however, you essentially
> define away the problem of evolving specified complexity since it by
> definition is not complex. This doesn't help answer the question at all
> because now seemingly complex events (DNA, for example) are not complex, if
> they evolved. How can you tell the difference?
>
I think you've mis-stated Dembski's objection slightly. As I understand his
argument, which relates to genetic algorithms said to mimic natural evolution
and not to the evolutionary process itself, if a genetic algorithm stacks the
deck to make an outcome more likely (or as in the case of Dawkins' "me
thinks.." evolver, a certainty), the algorithm cannot be said to have generated
information, but instead has merely mapped the information from input to
output. In essence, such systems move the event (of producing a specified
output) out of the stochastic universe and into the space of natural law. A
process that is destined to produce an outcome and no other cannot be said to
have generated CSI regardless of how complex the outcome is. All of the
information present in such a system was present in the initial conditions.
Likewise, a system that has a high probability of generating a specified
outcome cannot be used to make general statements about the evolutionary
process which is claimed to be completely stochastic and without purpose.
Dembski's objection is that the GAs don't work in the same way as biologists
claim evolution works.
To put it another way, if evolution was bound to generate DNA and few (or no)
other possibilities existed given the initial conditions, then the specification
for DNA was evidently prescribed in the initial conditions. Where did that
specification come from?
To convincingly generate CSI stochastically would require that zero information
be contained in the fitness function. The problem with this is that GAs that
meet this criteria invariably do not evolve, they get trapped in local minima.
In your experiment, the constants k1 and k2 had to be empirically determined.
This amounts to front loading the fitness function with information. As you
describe in your paper, the zero information fitness function (k1=k2=1)
resulted in a local minima trap, precisely the problem alluded to above.
Once front loading occurs, the system evolves to convert the CSI from one form
(information contained in k1 and k2), into another (how to interconnect the
cells in the FPGA), but in this process it cannot be conclusively shown that
any new information was created.
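To make the front-loading worry concrete, here is a minimal fitness
sketch in Python (the two-term form and the constants are my hypothetical
stand-ins for Thompson's k1 and k2, not his actual code):

  def fitness(on_score, off_score, k1=1.0, k2=1.0):
      # Reward circuit output when the input tone is present and
      # penalize output when it is absent; k1 and k2 weight the cases.
      return k1 * on_score - k2 * off_score

  # With k1 = k2 = 1 (the "zero information" weighting) the search is
  # said to stall in a local optimum; empirically tuned k1 and k2 get it
  # out -- which is exactly the sense in which the experimenter's
  # knowledge is loaded into the fitness function before evolution runs.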
Not to say that it isn't possible to do really interesting things with guided
evolution, as your circuit demonstrates. One question that concerns me though
is how the circuit made use of unspecified parameters (second order
interactions with neighboring cells) to solve the problem. In a practical
design, the overall performance bounds can be calculated from the part
specifications of the composite components. Allowing a circuit to make use of
unspecified parameters would make proving robustness challenging. Is it
possible somehow to preclude this reliance?
Jeff
> Michael wrote:
>
> > [snip]
>
> > > Please provide one example of complex specified information, (complex
> > > information with the 'side information' of specification) being
> > > _generated_ by a non-intelligent mechanism.
> > >
> >
> > I applied Dembski's definition to the following:
> >
> > http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html
> >
> > The search space is 2^1800 different circuits. The specified information
> > is the functional criteria for the circuit. If you assume a uniform
> > probability distribution, the probability that a functional circuit could
> > arise in the time involved (allowing 5 seconds for each test over 3 weeks)
> > is about 1 in a billion, IIRC (I don't have my calculations with me).
>
> A nice piece of work, one which, as a EE, I can really appreciate :>)
I'm an EE myself; perhaps that's what drew me to it.
>
> > There are a couple of ways to dispute the fact that CSI was created. One
> > (which Dembski seems to favor) is that the uniform density assumption is
> > violated by the evolutionary process, making a seemingly complex event,
> > non-complex (read more probable). By doing this however, you essentially
> > define away the problem of evolving specified complexity since it by
> > definition is not complex. This doesn't help answer the question at all
> > because now seemingly complex events (DNA, for example) are not complex, if
> > they evolved. How can you tell the difference?
> >
>
> I think you've mis-stated Dembski's objection slightly. As I understand his
> argument, which relates to genetic algorithms said to mimic natural evolution
> and not to the evolutionary process itself, if a genetic algorithm stacks the
> deck to make an outcome more likely (or as in the case of Dawkins' "me
> thinks.." evolver, a certainty), the algorithm cannot be said to have generated
> information, but instead has merely mapped the information from input to
> output. In essence, such systems move the event (of producing a specified
> output) out of the stochastic universe and into the space of natural law. A
> process that is destined to produce an outcome and no other cannot be said to
> have generated CSI regardless of how complex the outcome is. All of the
> information present in such a system was present in the initial conditions.
> Likewise, a system that has a high probability of generating a specified
> outcome cannot be used to make general statements about the evolutionary
> process which is claimed to be completely stochastic and without purpose.
Thanks for the effort of reading the article and replying.
A couple of points are in order. I did not set out to prove that life evolved
per se. I realize that GA's are not the best models of evolution (Larry Moran
has beat me over the head with that quite a few times). But is a simplified
version of natural selection, don't you agree? Natural selection is undoubtably
a part of evolution, thus its deterministic traits are also a part of evolution.
This is why you will hear many posters say that life did not arrive by chance.
The results of the GA algorithm are not characterized by law because were we to
run the algorithm again, it is unlikely that we will get the same circuit.
According to Dembski's book, it thus can't be the work of law. I don't think you
can claim that the final circuit was present in the initial conditions. Its
functional traits, yes, but not the circuit itself.
I think the argument that the result is no longer complex (because it is now
likely to produce the specified event) can be made based on Dembski's technique,
but that leads us back to the problem I mentioned: how can you tell if it's
complex?
>
>
> Dembski's objection is that the GAs don't work in the same way as biologists
> claim evolution works.
> To put it another way, if evolution was bound to generate DNA and few (or no)
> other possibilities existed given the initial conditions, then the specification
> for DNA was evidently prescribed in the initial conditions. Where did that
> specification come from?
>
> To convincingly generate CSI stochastically would require that zero information
> be contained in the fitness function. The problem with this is that GAs that
> meet this criteria invariably do not evolve, they get trapped in local minima.
> In your experiment, the constants k1 and k2 had to be empirically determined.
> This amounts to front loading the fitness function with information. As you
> describe in your paper, the zero information fitness function (k1=k2=1)
> resulted in a local minima trap, precisely the problem alluded to above.
Not to be pedantic, but it's not my paper, it's Adrian Thompson's. I wouldn't want
to take credit for his work.
>
> Once front loading occurs, the system evolves to convert the CSI from one form
> (information contained in k1 and k2), into another (how to interconnect the
> cells in the FPGA), but in this process it cannot be conclusively shown that
> any new information was created.
However, even if k1 and k2 have to be determined, that is a much simpler set. I
don't think you can claim that k1 and k2 constitute CSI.
>
>
> Not to say that it isn't possible to do really interesting things with guided
> evolution, as your circuit demonstrates. One question that concerns me though
> is how the circuit made use of unspecified parameters (second order
> interactions with neighboring cells) to solve the problem. In a practical
> design, the overall performance bounds can be calculated from the part
> specifications of the composite components. Allowing a circuit to make use of
> unspecified parameters would make proving robustness challenging. Is it
> possible somehow to preclude this reliance?
>
Some of his other papers are at
http://www.cogs.susx.ac.uk/users/adrianth/ade.html
where he looks at varying the temperature of the circuits while they are evolving
to make them more robust.
Mike
Bigdakine wrote:
> >Subject: Re: Dembski's Intelligent Design Hypothesis
> >From: ylvi...@erols.com (Ivar Ylvisaker)
> >Date: Sun, 26 September 1999 02:02 AM EDT
> >Message-id: <37eda9a2...@news.erols.com>
> >
> >On 24 Sep 1999 21:27:31 -0400, Marty Fouts
> ><mathem...@usenet.nospam.fogey.com> wrote:
> >
> >While I do not find Dembski's reasoning persuasive, he is a little
> >more logical than your post suggests.
> >
> >>There in lies the first problem with Dembski's reasoning. Low
> >>probability of occurrence does not equal complexity, nor does it need
> >>arise from design.
> >
> >Dembski does equate improbability and complexity. His reasoning seems
> >to be that improbable events when they occur convey lots of
> >information, i.e., you have to use lots of bits to distinguish the
> >event that did occur from all the other events that might have
> >occurred. And an information measure is a kind of complexity.
>
> The question is how much information was needed for that first
> self-replicating molecule that led to the abiogenic process. Dembski's argument
> assumes a priori that it was large; large enough to be improbable. It's
> circular.
Not quite. He argues that CSI is conserved. Thus if found today, it must have been
present at the start.
Jeff
Marty Fouts wrote:
> jpat41 filled the aether with:
>
> > Marty Fouts blessed us with:
>
> [snip]
>
> > And nowhere will you find Dembski making such an inane claim. What
> > he says is low probability of occurrence + a pre-specified pattern =
> > complex specified information. He specifically says low probability
> > alone does NOT equal CSI. In his archery analogy, the arrow hitting
> > the wall at a specific point was low probability occurrence but
> > without the pre-specified pattern (the target) the event contained
> > no complexity of information.
>
> But since the 'pre-specified pattern' is an imponderable,...
How so?
> >> Using Dembski's definition of information,
>
> > It's not Dembski's definition, it's Shannon's. You remember Shannon,
> > the father of information theory, that one.
>
> It is *not* Shannon's, and even Dembski has written that it is not.
> Communication channel, you remember communication channel.
That is what Shannon worked out, the information capacity of a
communication channel in the presence of noise. As to Dembski claiming it
is a different definition, please provide a reference. Here is mine:
" Thus we define the measure of information in an event of probability p
as -log2p (see Shannon and Weaver, 1949, p. 32; Hamming, 1986; or indeed
any mathematical introduction to information theory". (Intelligent Design
as a Theory of Information, William A. Dembski).
Classical Shannon, properly attributed.
> [snip]
>
> >>
> >> But much information is gained from noting that this apple, not
> >> that one, hit the ground.
>
> > Now you've gone from a natural law event to a random stochastic
> > event. Now show me a particular apple that hits the ground near a
> > particular spot at a particular point in time, all specified in
> > advance and I'll show you a Tree Shaker, unless you perhaps believe
> > in psychic power.
>
> I believe in QM. There is no tree shaker, not there, nor in
> evolution.
So you think QM can generate prespecification of events in time? This gets
stranger and stranger.
>
>
> [snip]
>
> > What I think he's getting at is that it is
> > possible to determine the pre-specified pattern after the pattern
> > has been generated, thus turning perceived complex but unspecified
> > information into CSI. In any case, the point that it took ID to
> > create the CSI in this case (Chinese speaker) is still valid.
>
> Human languages, for the most part are not designed, by the way, they
> evolve. The mechanism of the evolution is well studied.
Interesting. Do you think language would evolve in the absence of
intelligence? Language contains CSI. The CSI increases over time,
increasing its ability to express more complex ideas. All of this requires
intelligence.
> [snip]
>
> >> And yet artifacts that give every appearance of having been
> >> 'designed', such as arrowheads, are known to be both simple and to
> >> have arisen stochastically.
>
> > A high probability event (the crystalline structure of obsidian
> > makes arrow-head like creation events likely) is ruled out as a test
> > case.
>
> Sorry, this makes no sense. That obsidian forms flakes in such a way
> that it creates sharp edges is no more a 'high probability event' than
> any property of a physical material.
Try making an arrow head out of a sugar cube. Find one that did so on its
own, subject to only natural, random forces and you'd have a case in
point.
> > Like I said, Dembski hasn't proven anything to me yet. I too
> > wonder whether a razor so fine exists as to definitively separate a
> > complex pattern from a non-complex one.
>
> It doesn't, nor is there a tool that can show the presence of the
> 'pre-specificied pattern'. Meanwhile, at least in evolution, there
> are well known mechanisms that require neither.
I haven't read anything yet on the formalization of the pattern
recognition so you may be correct. But while we wait for a rigorous
definition, a working definition is, like obscenity, "you'll know it when
you see it". In real life, all of us have confidence in being able to
determine the designed from the undesigned without formal criteria and
the more complex the system, the less likely we are to be fooled.
> > What is clear though is that the probability of any given
> > pre-specified pattern occurring at random can be calculated if one
> > can estimate the probability density(-ies) involved.
>
> Not really. An aboriginal arrowhead is indistinguishable from a flint
> flake.
Probably because the aboriginals used flint flakes for arrow heads.
> The one is an arbitrary event the other an act of design.
Not too much design is involved here.
> Good anthropologists with much field experience have difficulty
> telling them apart.
As would be predicted from the lack of design necessary to produce one.
> Simple, uncomplex, highly probable objects exist
> that could be either coincident or design.
Of course, but what we are talking about is complex, highly improbable
objects!
[snip]
> It is easy to show that designs exist that are indistinguishable from
> random chance. That in itself kills half of Dembski's argument. It
> is impossible, without talking to the designer, to determine what a
> 'pre-specified pattern' is, let alone, that it bears any
> responsibility for the object in question.
Back to the heart of the matter. Dembski claims you can reliably infer the
prespecified complexity. I remain unconvinced but am not well enough
informed on the details yet. His ideas are intriguing though and deserve
critique on a fair reading, not self-serving straw men.
>Ivar Ylvisaker filled the aether with:
>
>> I'm not talking about abiogenesis. All I'm saying is that nature
>> had to get to DNA somehow. Dembski is hoping that science will
>> never find a way (other than God).
>
>You're not talking about abiogenesis? Then why did you write, earlier
>in this exchange:
>
>>>> Ivar Ylvisaker filled the aether with:
>>>>
>>>>> But I suspect that only one event is crucial to Dembski and that
>>>>> is abiogenesis. The first life had those long, unlikely DNA
>>>>> strings.
>
>It was, after all, that specific paragraph that has led us directly to
>this post.
I think that we are using the word "abiogenesis" in somewhat different
ways. I was thinking of abiogenesis as a kind of black box. Before
the box was present, there was no life. After the black box did its
work, there was life incorporating those long DNA strings. Dembski's
best chance of demonstrating design is inside that black box (in my
opinion, Dembski may think otherwise). The alternative to design is
some kind of natural process. But I have no opinion on what that
natural process might be. I wasn't looking inside the box. You
wrote:
>until you know *how* abiogenesis occurred, you can't argue
>about its properties. There is no evidence that requires that the
>first objects that we would recognize as alive had to have DNA, nor
>that the first with DNA had to have long strands of it.
I wasn't concerned about the properties of abiogenesis inside the box
(assuming the process is natural) or when things inside the box might
be described as alive.
In short, I think we are interpreting the word abiogenesis from
different viewpoints.
Ivar
|Jeff Patterson filled the aether with:
|
|> Marty Fouts wrote:
|
|[snip]
|
|>> But since the 'pre-specified pattern' is an imponderable,...
|
|> How so?
|
|There's no way to distinguish after the fact whether an event had a
|pre-specified pattern.
|
|>> >> Using Dembski's definition of information,
|>>
|>> > It's not Dembski's definition, it's Shannon's. You remember
|>> Shannon, the father of information theory, that one.
|>>
|>> It is *not* Shannon's, and even Dembski has written that it is
|>> not. Communication channel, you remember communication channel.
|
|> That is what Shannon worked out, the information capacity of a
|> communication channel in the presence of noise. As to Dembski
|> claiming it is a different definition, please provide a
|> reference. Here is mine: " Thus we define the measure of information
|> in an event of probability p as -log2p (see Shannon and Weaver,
|> 1949, p. 32; Hamming, 1986; or indeed any mathematical introduction
|> to information theory". (Intelligent Design as a Theory of
|> Information, William A. Dembski).
|
|You've missed my point. The definition of information that I was
|using was *not* Shannon's definition, it was Dembski's. Dembski wrote
|in
|http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html
|
| What then is information? The fundamental intuition underlying
| information is not, as is sometimes thought, the transmission of
| signals across a communication channel, but rather, the
| actualization of one possibility to the exclusion of others.
|
|That is definitely not Shannon's definition.
Isn't it? I thought that Shannon's was the probability that the state of
the receiver would be the same as the state of the sender, which would
imply the exclusion of other states. Certainly that's how it was
interpreted at the time. I have a little essay of JZ Young's from 1954 (6
years after Shannon's classical paper) in which that is *exactly* how he
characterises Shannon information. It's in _Evolution as a Process_, eds
Huxley, Hardy and Ford.
<snip rest>
--
John Wilkins, Head, Graphic Production
The Walter and Eliza Hall Institute of Medical Research, Melbourne,
Australia <mailto:wil...@WEHI.EDU.AU><http://www.wehi.edu.au/~wilkins>
Homo homini aut deus aut lupus - Erasmus of Rotterdam
> Jeff Patterson filled the aether with:
>
> > Marty Fouts wrote:
>
> [snip]
>
> >> But since the 'pre-specified pattern' is an imponderable,...
>
> > How so?
>
> There's no way to distinguish after the fact whether an event had a
> pre-specified pattern.
That will come as news to cryptologists :>) Seriously you can't mean for
your statement to be a generalization. If I hand you a piece of paper with
two strings of binary digits say 1000 digits long, each is equally likely
to have been generated by a fair coin toss. Say one was in fact so
generated. But if the other one is alternating, 101010...., you have no
trouble discerning the pattern the designer used to generate the string.
Just to make it clear, both are equally complex (same probability), one
has complex specificity, the other does not. In the one that does, the
pattern can be discerned.
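The two strings can in fact be told apart mechanically; a sketch in
Python (the short-description test is my illustration of what
specificity buys, not Dembski's formal apparatus):

  import random

  coin_flips = ''.join(random.choice('01') for _ in range(1000))
  alternating = '10' * 500

  # Both outcomes have probability 2**-1000 under fair coin flips, so
  # both carry 1000 bits of "complexity" by the improbability measure.

  def is_alternating(s):
      # A pattern statable independently of either string.
      return all(a != b for a, b in zip(s, s[1:]))

  print(is_alternating(coin_flips))   # almost surely False
  print(is_alternating(alternating))  # True -- the specified string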
> >> >> Using Dembski's definition of information,
> >>
> >> > It's not Dembski's definition, it's Shannon's. You remember
> >> Shannon, the father of information theory, that one.
> >>
> >> It is *not* Shannon's, and even Dembski has written that it is
> >> not. Communication channel, you remember communication channel.
>
> > That is what Shannon worked out, the information capacity of a
> > communication channel in the presence of noise. As to Dembski
> > claiming it is a different definition, please provide a
> > reference. Here is mine: " Thus we define the measure of information
> > in an event of probability p as -log2p (see Shannon and Weaver,
> > 1949, p. 32; Hamming, 1986; or indeed any mathematical introduction
> > to information theory". (Intelligent Design as a Theory of
> > Information, William A. Dembski).
>
> You've missed my point. The definition of information that I was
> using was *not* Shannon's definition, it was Dembski's. Dembski wrote
> in
> http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html
>
> What then is information? The fundamental intuition underlying
> information is not, as is sometimes thought, the transmission of
> signals across a communication channel, but rather, the
> actualization of one possibility to the exclusion of others.
>
> That is definitely not Shannon's definition.
Dembski has not adopted a novel definition of information here. Nor would
he group Shannon with those who think the "fundamental intuition
underlying information is the transmission of signals across communication
channels". Shannon had first to formalize the definition of information
before he could apply that formulation to the question of the corruption
that occurs in channels. Dembski is taking Shannon's formal definition,
which describes mathematically "the actualization of one possibility to
the exclusion of others", and using it to examine not information's
corruption but its inception, a question which, as far as I know, was
never taken up by Shannon. Dembski is asking the fundamental question,
"where did the information come from?" In examining that issue he is
still using the universally accepted mathematical measure due to Shannon,
I(A) = -log2(P(A)) (or, given side information B, I(A|B) = -log2(P(A|B))),
and not some novel definition. In any case, as long as we can agree that
this equation is what Dembski means by information, the rest is semantics.
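In code, the measure and its conditional form are one line each (an
illustrative sketch with made-up probabilities):

import math

def info_bits(p):
    # Shannon's measure of the information in an event of probability p.
    return -math.log2(p)

p_A = 2.0 ** -1000   # one particular 1000-digit binary string, by chance
p_A_given_B = 1.0    # the same string once the generating pattern B is known
print(info_bits(p_A))          # 1000.0 bits
print(info_bits(p_A_given_B))  # -0.0, i.e. zero bits: the side
                               # information accounts for it all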
> [snip]
>
> >> I believe in QM. There is no tree shaker, not there, nor in
> >> evolution.
>
> > So you think QM can generate prespecification of events in time?
> > This gets stranger and stranger.
>
> You are right, your misinterpretation of my point gets stranger and
> stranger. I believe that 'prespecification' in Dembski's sense is a
> meaningless concept.
But wouldn't you agree that, at least sometimes, it is possible to discern
a specified pattern from the result with no a priori knowledge of the
specification -- as in the coin-toss sequence above or the cryptologist
who breaks a code?
> [snip]
>
> >> Human languages, for the most part are not designed, by the way,
> >> they evolve. The mechanism of the evolution is well studied.
>
> > Interesting. Do you think language would evolve in the absence of
> > intelligence?
>
> No. But I believe that a certain sophistication of language is a
> prerequisite for intelligence. Animals are capable of no greater
> intelligence than their languages allow.
>
> I also believe that neither language nor intelligence are binary, but
> rather are matters of degree.
>
> > Language contains CSI. The CSI increases over time, increasing its
> > ability to express more complex ideas. All of this requires
> > intelligence.
>
> you've got the cart and horse backwards. Language comes before
> intelligence. One first needs the medium of expression and then one
> expresses.
>
Well, for what it's worth, here's my stand on this chicken-and-egg
question. The utterance of whatever audible sound an organism is capable
of making is only worth the calories required to produce it if the
organism has some reasonable expectation that another organism is going to
understand the meaning conveyed. This means the uttering organism knows
that sounds can represent things (the lesson it took Helen Keller so long
to learn -- she had intelligence without language!) and thus has
intelligence. The means of utterance does not a language make.
Encapsulating information into audible symbols does, and the encapsulation
takes intelligence, or so it would appear to me.
> [snip]
>
> >> Sorry, this makes no sense. That obsidian forms flakes in such a
> >> way that it creates sharp edges is no more a 'high probability
> >> event' than any property of a physical material.
>
> > Try making an arrowhead out of a sugar cube. Find one that did so
> > on its own, subject only to natural, random forces, and you'd have a
> > case in point.
>
> Why? The arrowhead example is here to make the point that there are
> situations in which it is impossible to tell designed objects from
> coincidental objects. It also points out that design does not
> correlate with complexity, since arrowheads, even the most
> sophisticated designed ones of today, are hardly complex by any metric
> of complexity.
>
I think by correlate here you mean something more like "follows directly".
Correlation is an analog property, not a binary one. In that sense of the
word, objects which have the attribute of specified complexity are highly
correlated with objects designed by an intelligent agent. But all that is
beside the point. Dembski's evaluation filter is not immune to false
negatives (it can wrongly attribute a designed object to chance), and he
explicitly states this. Using his filter, your arrowhead would always be
attributed to chance. That attribution may be wrong but so what? The point
is that you can design a filter that never exhibits false positives, never
attributing ID to an object generated by chance or law. This makes the
whole arrowhead argument mute.
> [snip]
>
> > I haven't read anything yet on the formalization of the pattern
> > recognition so you may be correct. But while we wait for a rigorous
> > definition, a working definition is, like obscenity, "you'll know it
> > when you see it".
>
> That's a subjective definition, and, in my humble opinion, the best
> that will ever be arrived at. Being subjective, rather than
> objective, takes it outside the realm of science.
>
> > In real life, all of us have confidence of being able to determine
> > the designed from the undesigned without a formal criteria and the
> > more complex the system, the less likely we are to be fooled.
>
> But that's an intuition built by looking at a certain kind of design
> and noticing its complexity. It doesn't take into account either
> simple objects, such as arrowheads, or complex objects deliberately
> designed to look as if they weren't, such as certain kinds of
> camouflage.
In both cases, though, you could not make the mistake, using the filter
described by Dembski, of attributing a chance occurrence to ID. The point
is *not* that every complex thing is generated by chance but that at least
some are not. If some of those things that aren't include occurrences
which have no human intervention, the obvious question arises: where did
they come from? You claim without proof that Natural Selection can be the
root cause of CSI, if CSI is ever found to exist in biological systems.
Dembski claims with proof that NS (or any stochastic, deterministic or
hybrid process) can never generate CSI, only transmute it. This follows
logically from the definition of CSI which, if I may use very rough terms,
is that information which is not produced by chance or law. If that
definition stood by itself, your points about circularity would be well
taken; it excludes chance a priori. But if we have another, independent
test of CSI, the circularity is broken. We can use the second test to
determine the presence of CSI and use the first definition to exclude
chance and law. Now I think you have implicitly agreed in your earlier
remarks that prespecification of pattern provides that independent test,
but hold that this is useless because it is impossible to determine the
prespecified pattern from the event. But clearly this is not so; I can
certainly think of some sequences where the pattern can be discerned with
certainty. Does life fall into this category? Who knows. But if you could
somehow be convinced that some aspect of life, one that is required for
something to be deemed alive, did indeed fall into that category, would
you allow that it follows that NS cannot be responsible for the observed
pattern specificity?
> [snip]
>
> >> Not really. An aboriginal arrowhead is indistinguishable from a
> >> flint flake.
>
> > Probably because the aboriginals used flint flakes for arrow heads.
>
> Sometimes they used found flakes, sometimes they deliberately worked
> the material. It is only as the working of the material becomes more
> obvious, as when tool marks become found on the arrowhead, that the
> anthropologists are able to tell the difference.
>
> Your assertions about noticing complexity really amount to noticing
> the hallmark of the workman, and I've seen complex designed objects
> that are, by intention, indistinguishable from complex undesigned
> objects.
Again, all of this presents no problem for the CSI detection algorithm. It
would, at worst, yield a false negative, which is of no consequence. Find a
case where ID is wrongly attributed to a chance-generated object, using the
procedure outlined by Dembski, and I agree the whole thing falls apart.
> The presence of the hallmarking that makes it seem to people that
> design is so easily detectable is an artifact of how primitive our
> design skills are, not a feature inherent in design.
>
> >> The one is an arbitrary event the other an act of design.
>
> > Not too much design is involved here.
>
> Well yes, but some. Thus, at one end of the spectrum, undetectable
> design.
Design may be undetectable, but at issue is whether chance can masquerade
as design. I think it is practical to limit this possibility to an
acceptable infinitesimal by suitable choice of the threshold of specified
complexity required to be inherent in the event under scrutiny. Dembski
goes so far as to posit an absolute probability so low that it equates
with impossible, derived from bounds on the number of atoms in the
universe, the age of the universe, and the known minimum time for phase
transitions.
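The arithmetic behind that bound is easy to reproduce (a sketch using the
figures as I understand them from TDI: roughly 10^80 particles, 10^45
state transitions per second, and 10^25 seconds):

import math

particles = 10 ** 80  # elementary particles in the observable universe
rate = 10 ** 45       # fastest possible state transitions per second
seconds = 10 ** 25    # generous upper bound on the age of the universe

trials = particles * rate * seconds  # 10^150 probabilistic resources
print(math.log10(trials))            # 150 (so 10^150 trials)
print(-math.log2(1.0 / trials))      # about 498.3 bits -> the 500-bit threshold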
> Also, and more importantly, as noted above, a demonstration
> that design does not correlate with complexity.
It is not necessary for design to follow from complexity to show that
design does follow from specified complexity.
> [snip]
>
> >> Simple, uncomplex, highly probable objects exist that could be
> >> either coincident or design.
>
> > Of course, but what we are talking about is complex, highly improbable
> > objects!
>
> we are talking about a theory that claims that design is noticeable
> because of complexity and improbability. I have shown a case in which
> design is neither complex nor improbable, thus demonstrating that the
> theory has at least one flaw.
The flaw is in your reasoning. Complexity and improbability are different
measures of the same thing. Design is noticeable because of *specified*
complexity and improbability. Over and over you want to knock down this
complexity straw man. I keep telling you over and over: complexity alone
is not sufficient, and Dembski never says it is. It has to be complexity
that adheres to a pattern known in advance.
> > [snip]
>
> >> It is easy to show that designs exist that are indistinguishable
> >> from random chance. That in itself kills half of Dembski's
> >> argument.
No it doesn't. What would kill Dembski's argument is proving chance caused
a pattern that is indistinguishable from design.
> It is impossible, without talking to the designer, to
> >> determine what a 'pre-specified pattern' is, let alone, that it
> >> bears any responsibility for the object in question.
>
> > Back to the heart of the matter. Dembski claims you can reliably
> > infer the prespecified complexity.
>
> Dembski is wrong. He is ignoring several things, including the
> cultural error of assuming that the state of the art of human design
> is a good metric and the design error of ignoring those designs which
> are deliberately camouflage.
There are at least some patterns for which this is not true. I gave you
two in this post.
> But that only covers the false negative case, in which design is
> present but not inferred by Dembski. The false positive case, in
> which design is not present but claimed is just as likely, taking his
> criteria, which you have admitted to be the subjective "I don't know
> what it is, but I'll know it when I see it."
I admitted to being ill-informed about the claim of objective criteria made
by Dembski and therefore to being unable to comment on its rigor. That
is not the same as admitting it is subjective.
Jeff
Tim Tyler wrote:
> Jeff Patterson <jp...@mpmd.com> wrote:
> : Tim Tyler wrote:
>
> :[snip]
> :>
> :> To build an arch, simply heap up a pile of stones, build the arch on top
> :> of that and then remove the pile of stones.
>
> : Of course the more arches you build which share a common capstone, the more
> : difficult it becomes to remove the rocks without the whole thing collapsing.
> : The arch argument ignores the inter-relatedness of functionality at the
> : cellular level.
>
> No, no! ;-)
>
> The "arch" argument indicates that no matter /how/ complex and inter
> related things are at the cellular level, they *still* may have been
> built by the use of elaborate supporting structures.
>
> Showing that inter-dependence exists proves diddley-squat about whether
> a system can be built by gradual processes.
>
> You can build a large number of arches which share a common capstone
> (moving only one stone at a time) if you first build a mound of rocks,
> then build the arches, and then /carefully/ remove the "scaffolding".
It was this scaffolding I was referring to when I was talking about removing rocks
in my analogy. You seem to think I was talking about removing rocks from the arches
themselves. Just as with arches, it becomes increasingly difficult to remove the
scaffolding as more and more arches share a common keystone (if you don't
believe it, try it), without the whole thing collapsing. Subtractive evolutionary
just-so stories may be able to account for a single irreducibly complex function.
I have yet to see one that accounts for the interrelationships between functions
at the cellular level, each of which is irreducibly complex, or that explains why
IC is the rule and not the exception in these functions.
Jeff
>In article <wkpuz3q...@usenet.nospam.fogey.com>, Marty Fouts
><mathem...@usenet.nospam.fogey.com> wrote:
>
> |Jeff Patterson filled the aether with:
> |
> |> Marty Fouts wrote:
> |
> |[snip]
> |
> |>> But since the 'pre-specified pattern' is an imponderable,...
> |
> |> How so?
> |
> |There's no way to distinguish after the fact whether an event had a
> |pre-specified pattern.
> |
> |>> >> Using Dembski's definition of information,
> |>>
> |>> > It's not Dembski's definition, it's Shannon's. You remember
> |>> Shannon, the father of information theory, that one.
> |>>
> |>> It is *not* Shannon's, and even Dembski has written that it is
> |>> not. Communication channel, you remember communication channel.
> |
> |> That is what Shannon worked out, the information capacity of a
> |> communication channel in the presence of noise. As to Dembski
> |> claiming it is a different definition, please provide a
> |> reference. Here is mine: " Thus we define the measure of information
> |> in an event of probability p as -log2p (see Shannon and Weaver,
> |> 1949, p. 32; Hamming, 1986; or indeed any mathematical introduction
> |> to information theory". (Intelligent Design as a Theory of
> |> Information, William A. Dembski).
> |
> |You've missed my point. The definition of information that I was
> |using was *not* Shannon's definition, it was Dembski's. Dembski wrote
> |in
> |http://www.discovery.org/w3/discovery.org/crsc/crscviews/idesign2.html
> |
> | What then is information? The fundamental intuition underlying
> | information is not, as is sometimes thought, the transmission of
> | signals across a communication channel, but rather, the
> | actualization of one possibility to the exclusion of others.
> |
> |That is definitely not Shannon's definition.
>
>Isn't it? I thought that Shannon's was the probability that the state of
>the receiver would be the same as the state of the sender, which would
>imply the exclusion of other states. Certainly that's how it was
>interpreted at the time. I have a little essay of JZ Young's from 1954 (6
>years after Shannon's classical paper) in which that is *exactly* how he
>characterises Shannon information. It's in _Evolution as a Process_, eds
>Huxley, Hardy and Ford.
><snip rest>
>
Shannon's classic paper is available on the web:
http://www.math.washington.edu/~hillman/Entropy/infcode.html
As far as I know, Shannon didn't worry much about a definition of
information. He specifically said he didn't care about meaning.
Shannon's interest was in clever ways to transmit information and in
establishing a bound on how clever one could get. In the beginning of
his paper, there is a diagram with a box labeled information source.
He includes a few examples and that's about it insofar as an
explanation of the source of information is concerned. He did define
a measure of information, that -log2p stuff, but it applies only after
the information source produces some information. Defining
information is different than defining a measure of information.
Information theory (and other theories) may help clarify and
ultimately resolve the problem of abiogenesis. But they won't make it
disappear.
Ivar
JP> [snip]
JP> Please provide one example of complex specified information (complex
JP> information with the 'side information' of specification) being
JP> _generated_ by a non-intelligent mechanism.
MK> I applied Dembski's definition to the following:
MK> http://www.cogs.susx.ac.uk/users/adrianth/ices96/node5.html
MK> The search space is 2^1800 different circuits. The
MK> specified information is the functional criteria for the
MK> circuit. If you assume a uniform probability distribution,
MK> the probability that a functional circuit could arise in the
MK> time involved (allowing 5 seconds for each test over 3
MK> weeks) is about 1 in a billion, IIRC (I don't have my
MK> calculations with me).
JP>A nice piece of work, one which, as an EE, I can really
JP>appreciate :>)
MK> There are a couple of ways to dispute the fact that CSI was
MK> created. One (which Dembski seems to favor) is that the
MK> uniform density assumption is violated by the evolutionary
MK> process, making a seemingly complex event, non-complex (read
MK> more probable). By doing this however, you essentially
MK> define away the problem of evolving specified complexity
MK> since it by definition is not complex. This doesn't help
MK> answer the question at all because now seemingly complex
MK> events (DNA, for example) are not complex, if they evolved.
MK> How can you tell the difference?
JP>I think you've mis-stated Dembski's objection slightly.
I think that Michael hit the nail on the head. Of course,
I had already said essentially the same thing in regard to
picking two identical solutions apart where one is due to
an algorithm and the other due to an intelligent agent.
JP>As I understand his argument, which relates to genetic
JP>algoritms said to mimic natural evolution and not to the
JP>evolutionary process itself, if a genetic algorithm stacks
JP>the deck to make an outcome more likely (or as in the case of
JP>Dawkins' "me thinks.." evolver, a certainty), the algorithm
JP>cannot be said to have generated information, but instead has
JP>merely mapped the information from input to output. In
JP>essence, such systems move the event (of producing a
JP>specified output) out of the stochastic universe and into the
JP>space of natural law.
This critique applies to intelligent agents as well as
unintelligent processes. If a task is made more likely to be
solved because intelligence is applied, then it would seem to
me that the result must also be classed as "apparent CSI"
rather than "actual CSI" on that head. By this consistent
view, an omnipotent, omniscient hyperintelligence would never
be able to produce anything but "apparent CSI", and thus would
be indistinguishable in that regard from algorithmic
'probability amplifiers'.
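For concreteness, the "me thinks" evolver mentioned above takes only a few
lines to sketch (illustrative Python of my own, not Dawkins' code; the
population size and mutation rate are arbitrary). Note where the target
string sits: inside the fitness function. That is the front-loading, or
'probability amplification', being argued over.

import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # The target is wired into the fitness function itself.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(LETTERS) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(LETTERS) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Breed 100 mutant copies and keep the fittest.
    parent = max((mutate(parent) for _ in range(100)), key=fitness)
print(generation, parent)

Cumulative selection homes in on the target in tens of generations, where
blind search would face on the order of 27^28 equiprobable strings.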
JP>A process that is destined to produce an outcome and no
JP>other cannot be said to have generated CSI regardless of how
JP>complex the outcome is.
See comment upon omnipotent omniscient hyperintelligences
above.
JP>All of the information present in such a system was present
JP>in the initial conditions.
I've challenged Dembski with explaining the information
contained in the final state of a GA that solves a 100-city
tour of the Traveling Salesman Problem. I've analyzed this
case to show that the solution state is *not* contained in the
initial conditions. See
<http://inia.cls.org/~welsberr/zgists/wre/papers/antiec.html>.
JP>Likewise, a system that has a high probability of
JP>generating a specified outcome cannot be used to make
JP>general statements about the evolutionary process which is
JP>claimed to be completely stochastic and without purpose.
I wish to resolve a somewhat less ambitious claim first, that
evolutionary computation algorithms are capable of producing
CSI.
JP>Dembski's objection is that the GAs don't work in the same
JP>way as biologists claim evolution works.
Is it? In 1997, Dembski's objection was that the information
of the TSP tour found by GA was somehow "infused" by the
intelligence that went into the operating system, programming
system, and the GA program itself. My analysis (see URL
above) presents a (IMO) compelling argument against such a
stance.
In his "reiterations" post, Dembski presented another
objection, the 'probability amplifier' objection. But it
would appear that Dembski does not wish to emphasize the
differences between GAs and evolutionary processes. Rather,
it appears to me that Dembski wants to apply the 'probability
amplifier' conclusion to natural selection based upon how well
GAs and natural selection correspond.
[Quote]
WAD>The Darwinian mutation-selection mechanism, neural nets,
WAD>and genetic algorithms all fall within this broad
WAD>definition of evolutionary algorithms.
[End Quote - WA Dembski, "Explaining Specified Complexity"]
JP>To put it another way, if evolution was bound to generate DNA
JP>and few (or no) other possibilities existed given the initial
JP>conditions, then the specification for DNA was evidently
JP>prescribed in the initial conditions. Where did that
JP>specification come from?
Why is it that things that must move through substances that
can be treated as fluids typically have a fusiform body shape?
Where did that specification come from? Why does falling
water take on a characteristic teardrop shape? Where did that
specification come from?
JP>To convincingly generate CSI stochastically would require
JP>that zero information be contained in the fitness function.
Why? That reduces any evolutionary computation to a random
walk. It isn't a random walk that we want to model. Apply
the same restriction to intelligent agents, and one will
find that those agents also end up doing blind search.
The evaluation function plays the role of the environment in a
GA. Thus, an evaluation function should provide no more
information than is provided in the interaction of organism
and environment. (Environment in this sense includes possible
interactions with other organisms.) How much information does
environment give with respect to fitness? It doesn't give in
one step the information of a best-fit organism, but it at
least gives a relative measure across organisms of which are
better and which are worse. And that is all that is necessary
to make evolutionary computation different from a random walk.
JP>The problem with this is that GAs that meet this criterion
JP>invariably do not evolve; they get trapped in local minima.
Bullshit. Random walks are not trapped in local minima.
Local minima are *defined* by the information in the
evaluation function. If there is no information, there
necessarily are no minima. If only one solution state has a
positive value, and all other states share the same lower
value, there likewise is no *local* minimum. Random walks
also do not represent natural selection, but rather would
correspond to genetic drift.
JP>In your experiment, the constants k1 and k2 had to be
JP>empirically determined. This amounts to front loading the
JP>fitness function with information. As you describe in your
JP>paper, the zero information fitness function (k1=k2=1)
JP>resulted in a local minima trap, precisely the problem
JP>alluded to above. Once front loading occurs, the system
JP>evolves to convert the CSI from one form (information
JP>contained in k1 and k2), into another (how to interconnect
JP>the cells in the FPGA), but in this process it cannot be
JP>conclusively shown that any new information was created.
The objection that one goes from an antecedent set of
information in one form to a CSI solution of another form is
no objection at all. An intelligent agent would have no hope
of solving a 100-city tour of the TSP in the absence of the
distance information, or at least would only have the same
expectations as one would obtain from blind search. The
algorithm and the agent operate to generate CSI that did not
exist before from the information that is available. This is
another way of saying that intelligence and algorithms have
the same dependencies when generating CSI.
Let's posit that I have a 100-city tour of a TSP generated by
GA. (BTW, I have done the simulation, and it does work.) It
is the minimum closed path distance. This meets Dembski's
criteria for CSI. The solution state has CSI, whether one
calls it "apparent" or "actual", it still does the job. The
fitness function used simply returns the closed path length
of each candidate solution. The initial candidate population
of solutions had much longer closed path lengths than the end
solution. I assert that this example shows that evolutionary
computation can generate CSI. Further, any and all arguments
that would set aside the production of "actual CSI" by
algorithms would lead to the very same conclusion if
consistently applied to intelligent agents.
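For anyone who wants to try this at home, a stripped-down version of such
a GA might look like the following (a sketch under my own arbitrary
choices of population size and mutation operator, not the code of my
actual simulation; the fitness function returns nothing but closed path
length):

import math
import random

random.seed(0)
N = 100
cities = [(random.random(), random.random()) for _ in range(N)]

def tour_length(tour):
    # Fitness: the length of the closed path (lower is better).
    return sum(math.dist(cities[a], cities[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def mutate(tour):
    # Reverse a random segment of the tour (a 2-opt style move).
    i, j = sorted(random.sample(range(N), 2))
    return tour[:i] + tour[i:j][::-1] + tour[j:]

population = [random.sample(range(N), N) for _ in range(50)]
for generation in range(2000):
    population.sort(key=tour_length)
    survivors = population[:10]          # selection on path length alone
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]

print(tour_length(min(population, key=tour_length)))
# Far shorter than any tour in the initial random population.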
We know what is desired by the IDCs: a bright-line demarcation
between what can be accomplished by natural process and what
can be accomplished by intelligent agency. Unfortunately,
Dembski's original analysis failed to provide such a
demarcation, and it is my opinion that the problems in the
original analysis led to the arguments made in the
"reiterations" post. Those arguments, too, fail to provide
that demarcation. As I argue above, each factor urged as a
reason to exclude algorithms as sources of "actual CSI" can
(and should, if one is consistent in these things) be urged as
a reason to exclude intelligent agency as a source of "actual
CSI".
The whole "apparent CSI"/"actual CSI" distinction seems to me
to be an act of desperation to shore up an otherwise
collapsing argument. More on why later.
JP>Not to say that it isn't possible to do really interesting
JP>things with guided evolution, as your circuit
JP>demonstrates. One question that concerns me though is how the
JP>circuit made use of unspecified parameters (second order
JP>interactions with neighboring cells) to solve the problem. In
JP>a practical design, the overall performance bounds can be
JP>calculated from the part specifications of the composite
JP>components. Allowing a circuit to make use of unspecified
JP>parameters would make proving robustness challenging. Is it
JP>possible somehow to preclude this reliance?
Put the constraint into the fitness function by favoring
solutions that minimize reliance on second order interactions.
Whether it is possible to detect that condition or not is
another matter.
Repeating from prior posts:
Dembski wants us to use the evidence of biological phenomena
to conclude that life was designed, or that certain features
of living systems were designed. (See his First Things
article from October 1998.) In those cases, we do not have
definitive evidence that shows what kind of process produced
the systems in question. Thus, what gets fed to Dembski's
Explanatory Filter is by necessity the produced object alone.
The level of specified complexity inherent in that object is
our guide to whether one must conclude regularity, chance, or
design. We cannot feed the process that produced the object
to Dembski's Explanatory Filter without presupposing the
answer, and thus begging the question.
Feeding an object into Dembski's Explanatory Filter determines
whether the object has the attribute of high probability,
intermediate or low probability without specification, or
"complexity-specification". Dembski's "reiterations" post now
implies that while feeding an object into his Explanatory
Filter may find "complexity-specification" in that object,
feeding the event that produced the object may find only
regularity, and thus only "apparent CSI".
But Dembski previously claimed that his
"complexity-specification" was a completely reliable indicator
of the action of an intelligent agent back in the "First
Things" article. His "reiterations" post stance completely
obviates that claim. If the determination of "actual CSI" or
"apparent CSI" requires the evidence of what sort of process
produced the object in question, then finding that an object
itself has CSI is necessarily ambiguous and uninformative on
the issue of whether it was produced by an intelligent agent
or an unintelligent natural process.
My review of TDI showed that natural selection shared the same
triad of attributes that Dembski claimed for intelligent
agents alone. It appears that Dembski must concur with me,
given his recent post and its distinction between "actual CSI"
and "apparent CSI".
[Quote]
The apparent, but unstated, logic behind the move from design to
agency can be given as follows:
1.There exists an attribute in common of some subset of objects
known to be designed by an intelligent agent.
2.This attribute is never found in objects known not to be
designed by an intelligent agent.
3.The attribute encapsulates the property of directed contingency
or choice.
4.For all objects, if this attribute is found in an object, then
we may conclude that the object was designed by an intelligent agent.
This is an inductive argument. Notice that by the second step,
one must eliminate from consideration precisely those
biological phenomena which Dembski wishes to categorize. In
order to conclude intelligent agency for biological examples,
the possibility that intelligent agency is not operative is
excluded a priori. One large problem is that directed
contingency or choice is not solely an attribute of events due
to the intervention of an intelligent agent. The
"actualization-exclusion-specification" triad mentioned above
also fits natural selection rather precisely. One might thus
conclude that Dembski's argument establishes that natural
selection can be recognized as an intelligent agent.
[End Quote - WR Elsberry, Review of TDI, RNCSE March/April 1999]
Dembski has adopted a new stance that still allows him to
claim that algorithms cannot produce CSI: change the
definition of CSI such that algorithms cannot produce it by
definition. This is easy: just add the qualifiers "actual"
and "apparent". "Actual CSI", then, is the CSI that
intelligent agents come up with. "Apparent CSI" is the CSI
that algorithms come up with.
A solution to a problem that is deemed "Actual CSI" when
a human does it may be identical to the solution found
by algorithm that gets labelled as "Apparent CSI". The
solution is just as complex and works just as well in
either case, but now those algorithms don't get in the
way of a good apologetic.
There are some changes that this makes. The attribute from
(1) in my list now becomes "Actual CSI" rather than just
"CSI". One can, unfortunately, no longer simply "Do the
calculation." (TDI, p.228.) One must know the causal story
beforehand in order to know which of the two qualifiers
("Actual" or "Apparent") is to be prepended to "CSI" before
one even gets so far as figuring out whether "CSI" applies.
And this confirms that my statement -- that the possibility that
intelligent design is *not* operative is excluded a priori -- was
precisely right. Dembski's penultimate paragraph shows this
very clearly, as he inverts the search order of his own
Explanatory Filter, and insists that *Design* has to be
eliminated from consideration first, rather than flowing as a
conclusion from the data.
[Quote]
Does Davies's original problem of finding radically new laws
to generate specified complexity thus turn into the slightly
modified problem of finding radically new laws that
generate apparent -- but not actual -- specified complexity in
nature? If so, then the scientific community faces a logically
prior question, namely, whether nature exhibits actual
specified complexity. Only after we have confirmed that nature
does not exhibit actual specified complexity can it be safe to
dispense with design and focus all our attentions on natural
laws and how they might explain the appearance of specified
complexity in nature.
[End Quote -- WA Dembski, "reiterations" post]
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
If cucumbers & watermelons had antigravity, sunsets would be more interesting.
[...]
JP>I think by correlate here you mean something more like "follows
JP>directly". Correlation is an analog property, not a binary
JP>one. In that sense of the word, objects which have the
JP>attribute of specified complexity are highly correlated with
JP>objects designed by an intelligent agent. But all that is
JP>beside the point. Dembski's evaluation filter is not immune
JP>to false negatives (it can wrongly attribute a designed object
JP>to chance), and he explicitly states this. Using his filter,
JP>your arrowhead would always be attributed to chance. That
JP>attribution may be wrong but so what? The point is that you
JP>can design a filter that never exhibits false positives, never
JP>attributing ID to an object generated by chance or law. This
JP>makes the whole arrowhead argument mute.
"Mute"? I'll assume "moot" was meant. Even then, Jeff will
find that "moot" means "arguable".
So far, though, I have not seen anything that would lead me to
believe that signal detection theory has been overturned by
Dembski. Nor have I seen anything that would indicate that
Dembski's Explanatory Filter is an instance of such a design.
In fact, I list several ways in which Dembski's Explanatory
Filter fails to accurately capture how humans go about finding
design in day-to-day life in my review of TDI.
[...]
JP>In both cases though, you could not make the mistake, using
JP>the filter described by Dembski, of attributing a chance
JP>occurrence to ID. The point is *not* that every complex thing
JP>is generated by chance but that at least some are not. If
JP>some of those things that aren't include occurrences which
JP>have no human intervention, the obvious question arises,
JP>where did they come from? You claim without proof that
JP>Natural Selection can be the root cause of CSI, if CSI is
JP>ever found to exist in biological systems. Dembski claims
JP>with proof that NS (or any stochastic, deterministic or
JP>hybrid process) can never generate CSI, only transmute
JP>it. This follows logically from the definition of CSI which,
JP>if I may use very rough terms, is that information which is not
JP>produced by chance or law.
If that were how Dembski defined CSI, then we could mumble,
"Begs the question," and go home. It is not how he did so,
and thus there is more to argue about. At least, Dembski
did not so define it before his "reiterations" post. With
the introduction of the unspecified qualifiers "apparent"
and "actual" to be prepended to "CSI", it looks like Dembski
may indeed have simply slipped into publicly begging the
question.
Dembski talked about natural selection in his 1997 paper given
at the NTSE. In it, one will find a reference to an extended
analysis of natural selection from which the points given in
the paper were taken. Dembski said that that analysis
appeared in Section 6.3 of "The Design Inference". Bill
Jefferys brought up Dembski's analysis of natural selection
during the discussion period. Bill Jefferys' comments seemed
to have an effect: Section 6.3 of what was published as TDI
does *not* contain an extended discussion of natural
selection. Nor did it get moved to another section of the
book. It disappeared entirely.
So, I would ask Jeff where this "proof" of Dembski's
concerning natural selection is to be found. I know that
Dembski is working on a book that supposedly will give his
extended analysis of both evolutionary computation and
natural selection, but it is not yet here. Nor do I accept
that it is valid in the absence of being able to review it
myself. Where's the proof?
Again, the objections raised to algorithms as sources of CSI
seem to be handled preferentially: they are not applied to
intelligent agents, and yet that application seems both fair
and reasonable. Dembski uses the case of Nick Caputo as an
instance of CSI that implies a design. And yet, Nick Caputo's
design consisted entirely of converting the
previously-existing information of party affiliation into a
preferential placement on voting ballot. People who
plagiarize do no more than *copy* previously existing
information, and yet this is another of Dembski's examples of
design. If one excludes things from producing CSI on the
basis of transmutation of information, then it seems that one
must exclude *both* algorithms and intelligent agents, or at
least intelligent agents who use rational thought in producing
solutions.
JP>If that definition stood by itself, your points about
JP>circularity would be well taken, it excludes chance a
JP>priori. But if we have another, independent test of CSI, the
JP>circularity is broken. We can use the second test to
JP>determine the presence of CSI and use the first definition to
JP>exclude chance and law. Now I think you have implicitly
JP>agreed in your earlier remarks that prespecification of
JP>pattern provides that independent test, but hold that this is
JP>useless because it is impossible to determine the
JP>prespecified pattern from the event. But clearly this is not
JP>so, I can certainly think of some sequences where the pattern
JP>can be discerned with certainty. Does life fall into this
JP>category? Who knows. But if you could be convinced somehow,
JP>that some aspect of life, that is required for something to
JP>be deemed alive, did indeed fall into that category, would
JP>you allow that it follows that NS cannot be responsible for
JP>the observed pattern specificity?
No. The presence of CSI does not imply intelligent agency.
Dembski says this no fewer than three different times in
TDI. One can look at NS and find that it conforms to the
triad of attributes Dembski says define what intelligent
agents do: actualization-exclusion-specification.
[...]
JP>Design may be undetectable, but at issue is whether chance
JP>can masquerade as design.
No it isn't. The issue is whether natural selection can
produce events that have the attribute of CSI. As I point
out in my review, the Design Inference classifies events,
not causes.
JP>I think it is practical to limit this possibility to an
JP>acceptable infinitesimal by suitable choice of the threshold
JP>of specified complexity required to be inherent in the event
JP>under scrutiny. Dembski goes so far as to posit an absolute
JP>probability so low that it equates with impossible, derived
JP>from bounds on the number of atoms in the universe, the age of
JP>the universe, and the known minimum time for phase
JP>transitions.
Yes. Dembski's 500-bit threshold for CSI is met by the
solution of a 100-city tour of the TSP by genetic algorithm.
(Actually, 97 cities will get us over the CSI threshold, but
100 is a nice round number.)
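The arithmetic behind that parenthetical, for anyone who wants to check it
(assuming the specification picks out one ordering of the n cities from
the n! possible orderings):

import math

def tour_bits(n):
    # log2(n!): bits needed to single out one ordering of n cities.
    return math.lgamma(n + 1) / math.log(2)

print(tour_bits(100))  # about 524.8 bits
print(tour_bits(97))   # about 504.9 bits, still over the 500-bit threshold
print(tour_bits(96))   # about 498.3 bits, just under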
[...]
JP>The flaw is in your reasoning. Complexity and improbability
JP>are different measures of the same thing. Design is
JP>noticeable because of *specified* complexity and
JP>improbability. Over and over you want to knock down this
JP>complexity straw man. I keep telling you over and over,
JP>complexity alone is not sufficient and Dembski never says it
JP>is. It has to be complexity that adheres to a pattern known in
JP>advance.
Actually, Dembski goes to some trouble to say that the pattern
must be independent of the event. If it is known in advance
of the event, then independence is a given. But Dembski does
hold that such independence can be shown even for producing
specifications for events that have already happened. Else
CSI would not be of much use for Dembski's purposes.
[...]
JP>No it doesn't. What would kill Dembski's argument is proving
JP>chance caused a pattern that is indistinguishable from
JP>design.
And showing that processes like natural selection can produce
CSI means that the argument becomes irrelevant.
[...]
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
"sing your faith in what you get to eat right up to the minute you are eaten"-a.
It doesn't. The falling drop is rounded on top and flattened on the
bottom. The teardrop shape is merely a cultural icon to represent
something that people don't normally see.
--
Mark Isaak atta @ best.com http://www.best.com/~atta
"My determination is not to remain stubbornly with my ideas but
I'll leave them and go over to others as soon as I am shown
plausible reason which I can grasp." - Antony Leeuwenhoek
On the other hand, the 101010... string could easily have come about
naturally without design, since there are lots of oscillators in nature
which can produce such a signal. Dembski's filter tries to rule out both
natural law and randomness. One problem is that he doesn't give us a clue
about how to rule out natural law.
:> Interesting. Do you think language would evolve in the absence of
:> intelligence?
: No. But I believe that a certain sophistication of language is a
: prerequisite for intelligence. Animals are capable of no greater
: intelligence than their languages allow.
[...]
: Language comes before intelligence. One first needs the medium of
: expression and then one expresses.
I don't much like such ideas.
For example, I believe it would be possible for a wandering, asexual nomad
to develop high intelligence, spatial reasoning, tool using etc, even in
the absence of any significant social contact.
I don't doubt that language accelerates the evolution of intelligence -
and can't really point to a natural example of my wandering nomad.
What is commonly called "intelligence" does have a linguistic component,
of course, but also seems to me to involve spatial reasoning, deduction
and other events which have little to do with linguistic competence.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
UART what UEAT.
>The beauty of Dembski's approach to this is that it gets around having
>to speculate about the proto cell. If CSI cannot, as Dembski claims, be
>generated by natural processes but only conveyed, it is enough to show
>that a modern cell contains CSI to implicate ID at the origin.
In this case, the beauty is in the eye of the beholder.
There are serious problems with Dembski's approach.
First, Dembski is not proposing a scientific theory of the design that
some see in nature. There will be no experiments and no observations
that confirm or refute his hypothesis. There will be no amplifying
scientific investigations of design. On the other hand, in "Mere
Creation," Dembski indicates that theologians may have a role.
Second, Dembski does not actually argue that there is design in
nature. He only outlines an approach for showing that design is
necessary. His approach requires that all possible alternative
natural hypotheses be examined and found inadequate. But this is an
impossible task.
Third, Dembski's model of an intelligent agent is man. He is arguing
that complexity implies intelligent design because man can design
complicated things. His argument for the nature of the design agent
is from analogy.
Fourth, the term design is incomplete. One cannot detect design.
What one can detect is the implementation of a design. Dembski's
designer must also be a builder. A plausible design event for Dembski
is the origin of life on this planet. But building life requires
detailed manipulations at the molecular level. Dembski's designer not
only requires intelligence; he (or she or it) also requires magical
powers.
Ancient peoples commonly invented gods that were men with magical
powers. Dembski is doing the same thing today, disguising his
invention in a veneer of mathematics and science.
Ivar
Compared to some other biochemical features of life (like the
development of the 'universal' genetic code and translation) one can
indeed argue that DNA, and especially the long multigenic strands of DNA
we use as a genome, are relative latecomers in the molecular biologic
history of life. Comparing DNA replication and chromosomal
conformation, there are clear differences and more variability in the
basic mechanisms of replication initiation, the conformation of
chromosomes, and other features of DNA metabolism between the major
superkingdoms than there is in features like translation. There are
also similarities in all three superkingdoms, but for molecular features
it is hard to distinguish secondary modification of a common system from
independent evolution. It is quite likely that the initial DNA genome was
fragmented and gene sized and present in multiple copies per cell, with
distribution of genes during cell division being initially on the basis
of chance rather than a precise distribution mechanism. I would suspect
that DNA replication and the DNA takeover of genomic functions probably
occurred reasonably close to the time of the final divergence of the
superkingdoms (with eucaryotes retaining more of the primitive state and
eubacteria becoming secondarily simplified). Close enough to allow some
horizontal exchange among these groups (and horizontal transfer was
likely more common at that time), but also permit the independent
evolution of different strategies for utilizing and fully incorporating
this novel feature of DNA based genomes.
The earliest fossil forms of life had a superficial morphology similar
to that of certain eubacteria. But superficial morphologic similarity
to modern organisms is no more a clue to common internal features than
the superficial morphological similarity of tuna and dolphin is a clue
that implies a common set of internal features of bone structure.
>
> Good night.
>
> Ivar
> [snip]
> JP>This makes the whole arrowhead argument mute.
> "Mute"? I'll assume "moot" was meant.
No, mute -- as in unable to speak to the issue. Oh, alright, it was late and I was
tired.
> Even then, Jeff will
> find that "moot" means "arguable".
Ironic, isn't it, that the common meaning has become the opposite of the formal one?
Even Webster has raised the white flag: "having no legal substance".
> So far, though, I have not seen anything that would lead me to
> believe that signal detection theory has been overturned by
> Dembski.
I'm not sure what you mean here, Wes. I don't think I implied anything at all about
this. I was just trying to establish that Dembski's definition of information was
the same one Shannon developed for signal detection, to use your term. Dembski
applies the definition to inquire about the origin of information, a question
Shannon didn't address. Shannon starts by postulating an information source. Must we
grind every point to powder?
> Nor have I seen anything that would indicate that
> Dembski's Explanatory Filter is an instance of such a design.
Sorry. I'm lost. What do you mean by "such a design"? What category of design do you
have in mind here?
> [snip]
>
> JP> Dembski claims
> JP>with proof that NS (or any stochastic, deterministic or
> JP>hybrid process) can never generate CSI, only transmute
> JP>it. This follows logically from the definition of CSI which,
> JP>if I may use very rough terms, is that information not
> JP>produced by chance or law.
>
> If that were how Dembski defined CSI, then we could mumble,
> "Begs the question," and go home. It is not how he did so,
> and thus there is more to argue about.
For some reason, you chose to split my definition right at the point where I
resolved the "begs the question" issue. But as far as the first part of the
definition goes, at one point you agreed with me. From your published review of
TDI <http://www.rtis.com/nat/user/elsberry/zgists/wre/papers/dembski7.html>:
"From the set of all possible explanations, he first eliminates the explanatory
categories of regularity and chance; then whatever is left is by definition design.
Since all three categories complete the set, design is the set-theoretical
complement of regularity and chance. "
Unless you are quibbling about my use of the word law (which I clearly equate to
regularity below), I don't see how your review summary of Dembski's argument differs
from mine.
> At least, Dembski
> did not so define it before his "reiterations" post. With
> the introduction of the unspecified qualifiers "apparent"
> and "actual" to be prepended to "CSI", it looks like Dembski
> may indeed have simply slipped into publicly begging the
> question.
You have me at a disadvantage here as I haven't seen the post you reference and
haven't run across him using the qualifiers you mentioned in the articles I have
read. A pointer would be most appreciated.
> Dembski talked about natural selection in his 1997 paper given
> at the NTSE. In it, one will find a reference to an extended
> analysis of natural selection from which the points given in
> the paper were taken. Dembski said that that analysis
> appeared in Section 6.3 of "The Design Inference". Bill
> Jefferys brought up Dembski's analysis of natural selection
> during the discussion period. Bill Jefferys' comments seemed
> to have an effect: Section 6.3 of what was published as TDI
> does *not* contain an extended discussion of natural
> selection. Nor did it get moved to another section of the
> book. It disappeared entirely.
>
> So, I would ask Jeff where this "proof" of Dembski's
> concerning natural selection is to be found. I know that
> Dembski is working on a book that supposedly will give his
> extended analysis of both evolutionary computation and
> natural selection, but it is not yet here. Nor do I accept
> that it is valid in the absence of being able to review it
> myself. Where's the proof?
Here we are largely in agreement. I hope you haven't construed my refutations of
what I believe to be fallacious arguments as an implication of my belief that
Dembski has proved his case. The glaring weakness I find in his argument comes down
to an unjustifiable assumption that superposition applies. That is, eliminating
chance OR regularity as cause of CSI (which he has done to my satisfaction) is not
sufficient to disprove chance AND regularity as cause, if the system which binds
these processes is non-linear. Now this would seem an obvious objection and one
which I assume has been raised before. I just haven't been able to find it and so
post it now so that you know I remain unconvinced.
> Again, the objections raised to algorithms as sources of CSI
> seem to be handled preferentially: they are not applied to
> intelligent agents, and yet that application seems both fair
> and reasonable.
I have some thoughts on this matter and would be interested in your response. We
started down this road in another post and got sidetracked. In the meantime, I have
refined an analogy I think is illustrative. Let's use Dembski's archer-as-agent
analogy as a starting point. Recall he uses a target painted on a wall as an example
of CSI, in that hitting any given point on the wall is a zero-probability event (for
this argument I assume that the target is a point in the mathematical sense, i.e.
has zero area), which makes the information associated with that event complex;
singling out a particular point in advance of the shot makes the information
specified. Now I propose to evolve an archer. To do so I randomly choose an ensemble
of vectors, each comprised of a direction and an angle relative to the horizon, from
the space of such vectors which intersect the wall (or I could enclose the whole
thought experiment in a sphere, and then the space would be all such vectors). Next
we shoot an arrow according to the information contained in each vector. Note that
these vectors contain complex but unspecified information. We measure the distance
from each arrow to the target and choose the subset of vectors which come closest.
For each vector in the subset we flip a coin to decide which of the two elements to
change and then randomly change that randomly selected element. We repeat the
experiment for this new generation of vectors. I maintain such a system will evolve
a perfect archer given a large enough initial ensemble, because in the limit as the
ensemble grows to encompass all of the available information space, it contains the
specified point with probability 1.
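Here is the experiment in miniature (a sketch only: I collapse the wall to
a plane with the target at the origin, and use a small random tweak for
the mutation step):

import random

random.seed(7)
TARGET = (0.0, 0.0)  # the prespecified point on the wall

def miss_distance(v):
    # Toy model: the aim vector maps directly to a point of impact.
    return ((v[0] - TARGET[0]) ** 2 + (v[1] - TARGET[1]) ** 2) ** 0.5

# The initial ensemble of random aim vectors: complex, unspecified information.
ensemble = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

for generation in range(500):
    ensemble.sort(key=miss_distance)
    survivors = ensemble[:20]        # keep the shots that came closest
    ensemble = list(survivors)
    for v in survivors:
        for _ in range(9):           # each survivor begets nine mutants
            w = list(v)
            # The coin flip picks which element to change; change it randomly.
            w[random.randrange(2)] += random.gauss(0.0, 0.05)
            ensemble.append(tuple(w))

print(miss_distance(min(ensemble, key=miss_distance)))  # approaches zero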
So we've evolved our archer with a reasonable, if simplified, model of genetic
algorithms in general. We could add complications like sex between the surviving
vectors, multiple mutations, etc., but that would not change the result. Given
enough time and a large enough ensemble, we eventually can get arbitrarily close to
the target. Have we produced CSI? Only in the sense that a machine designed to find
a needle in a haystack can produce the needle. It does not, however, create the
needle. The needle was there to begin with. In the same way GAs can produce
*knowledge* by scouring a predefined information space to identify a given piece of
CSI. They cannot create the information themselves. They are attracted to the
solution as surely as a ball dropped from a height is attracted to the earth's
center of gravity. This is what Dembski meant by GAs being probability amplifiers,
an exceedingly poor choice of words. His idea here, though, is clearly that if a
machine is designed to find a piece of CSI, it surely belongs in the regularity set,
even if it uses stochastic processes in its implementation.
You have reasonably asked, in another GA context, how one can tell the difference
between Dembski's archer-as-agent and my evolved archer. If Dembski offers
"apparent" vs. "actual" CSI as an answer then IMHO he's raised the white flag too
soon. My answer would be that you cannot tell the difference because BOTH were
designed -- that is, IF he could prove that the real archer was in fact designed. He
may well have to raise the flag at some point on that issue -- but the
archer-evolver is a design as surely as is a bubble sort algorithm.
> Dembski uses the case of Nick Caputo as an
> instance of CSI that implies a design. And yet, Nick Caputo's
> design consisted entirely of converting the
> previously-existing information of party affiliation into a
> preferential placement on voting ballot. People who
> plagiarize do no more than *copy* previously existing
> information, and yet this is another of Dembski's examples of
> design. If one excludes things from producing CSI on the
> basis of transmutation of information, then it seems that one
> must exclude *both* algorithms and intelligent agents, or at
> least intelligent agents who use rational thought in producing
> solutions.
I would include both agents and algorithms as things designed -- or at least, if one
is, the other is as well.
> JP>If that definition stood by itself, your points about
> JP>circularity would be well taken, it excludes chance a
> JP>priori.
BTW, here's where I answered your begging the question objection...
> But if we have another, independent test of CSI, the
> JP>circularity is broken. We can use the second test to
> JP>determine the presence of CSI and use the first definition to
> JP>exclude chance and law. Now I think you have implicitly
> JP>agreed in your earlier remarks that prespecification of
> JP>pattern provides that independent test, but hold that this is
> JP>useless because it is impossible to determine the
> JP>prespecified pattern from the event.
[snip]
> No. The presence of CSI does not imply intelligent agency.
> Dembski says this no fewer than three different times in
> TDI. One can look at NS and find that it conforms to the
> triad of attributes Dembski says define what intelligent
> agents do: actualization-exclusion-specification.
I think I must surrender on this issue. It is possible that NS falls into the
category where the superposition assumption I think Dembski erroneously makes breaks
down. Note that that is not the same as saying I think NS is capable of producing
CSI. I am actually quite skeptical that it can. I will allow, though, that when
coupled with a non-linear system, which I presume describes genetic inheritance, it
is not excluded on a set-theoretical basis, using Dembski's definition, as being
incapable of producing CSI.
> [...]
>
> [snip]
>
> JP>I think it is practical to limit this possibility to an
> JP>acceptable infinitesimal by suitable choice of the threshold
> JP>of specified complexity required to be inherent in the event
> JP>under scrutiny. Dembski goes so far as to posit an absolute
> JP>probability so low that it equates with impossible, derived
> JP>from bounds on the number of atoms in the universe, the age of
> JP>the universe, and the known minimum time for phase
> JP>transitions.
>
> Yes. Dembski's 500-bit threshold for CSI is met by the
> solution of a 100-city tour of the TSP by genetic algorithm.
> (Actually, 97 cities will get us over the CSI threshold, but
> 100 is a nice round number.)
>
I haven't reviewed your paper on this algorithm (would you mind reposting a pointer
to it? I can't find it.) but I suspect it is another needle finder. The target is
the shortest length path and the fitness function allows the subset of paths whose
lengths are minimized to live another day. It has one interesting wrinkle though
that I will give some thought to. Being discrete (the segments between cities are of
fixed lengths), it allows for the possibility of multiple solutions, if it so
happens that multiple permutations yield the same path length. At first blush it
seems that this is equivalent to placing multiple needles in the haystack but this
is based merely on intuition.
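Incidentally, the 500-bit arithmetic is easy to check if a tour of n labeled
cities is counted as one of n! permutations. A quick sketch (Python, purely
illustrative; the n! count is the naive one -- folding out rotations and
reflections shaves a few bits without changing the conclusion):

    import math

    def tour_bits(n):
        # a tour of n labeled cities is one of n! permutations, so its
        # naive information content is log2(n!) bits
        return math.lgamma(n + 1) / math.log(2)

    for n in (96, 97, 100):
        print(n, "cities:", round(tour_bits(n), 1), "bits")
    # 96 cities: 498.3 bits; 97 cities: 504.9; 100 cities: 524.8.
    # Dembski's universal probability bound of 10^-150 is about
    # 150 * log2(10) ~ 498.3 bits, rounded up to 500 -- so 97 cities is
    # indeed the first tour size that clears the 500-bit threshold.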
> [...]
>
> JP>The flaw is in your reasoning. Complexity and improbability
> JP>are different measures of the same thing. Design is
> JP>noticeable because of *specified* complexity and
> JP>improbability. Over and over you want to knock down this
> JP>complexity straw man. I keep telling you over and over,
> JP>complexity alone is not sufficient and Dembski never says it
> JP>is. It has to be complexity that adheres to a pattern known in
> JP>advance.
>
> Actually, Dembski goes to some trouble to say that the pattern
> must be independent of the event.
As it must be, to avoid the "begs the question" objection already discussed. Question:
given that Dembski specifies pattern independence as a necessary condition, do you
agree that the "begs the question" objection is (un)moot :>) ?
> If it is known in advance
> of the event, then independence is a given. But Dembski does
> hold that such independence can be shown even for producing
> specifications for events that have already happened. Else
> CSI would not be of much use for Dembski's purposes.
I agree with both points (yours and Dembski's). I think the best example of
Dembski's is the cryptologist who cracks the code. If suddenly the gibberish turns
into Hamlet, one can with near certainty assume that one has found the prespecified
encryption pattern.
> [...]
>
> JP>No it doesn't. What would kill Dembski's argument is proving
> JP>chance caused a pattern that is indistinguishable from
> JP>design.
>
> And showing that processes like natural selection can produce
> CSI means that the argument becomes irrelevant.
If you have done this I missed it. If you offer GA's as an attempt to prove this, I
think you fall well short.
Jeff
Jeff Patterson wrote:
> Wesley R. Elsberry wrote:
>
>
> > Dembski uses the case of Nick Caputo as an
> > instance of CSI that implies a design. And yet, Nick Caputo's
> > design consisted entirely of converting the
> > previously-existing information of party affiliation into a
> > preferential placement on voting ballot. People who
> > plagiarize do no more than *copy* previously existing
> > information, and yet this is another of Dembski's examples of
> > design. If one excludes things from producing CSI on the
> > basis of transmutation of information, then it seems that one
> > must exclude *both* algorithms and intelligent agents, or at
> > least intelligent agents who use rational thought in producing
> > solutions.
>
> I would include both agents and algorithms as things designed -- or at least if one
> is, the other is as well.
>
This was sloppy on my part. What I should have said in conformance with my earlier
remarks was that the algorithm was designed. That the agent was designed remains
speculative.
Jeff
> ...but there's more than one theory that claims life could come into
> existence rather easily and gives details of the possible mechanism
> involved.
If it's so goddam easy why doesn't somebody just do it (create life from
inanimate material) and end at least the "impossible" part of the debate?
Jeff
>
>
> Language precedes intelligence. All those forms of 'reasoning' have
> linguistic components. You can not reason _about_ something unless you
> reason _in_ a language.
>
My dog has this annoying habit of reasoning his way out of his pen. Do you
suppose he talks in his sleep?
Jeff
> On 26 Sep 1999 10:06:54 -0400, jpa...@my-deja.com wrote:
>
> >The beauty of Dembski's approach to this is that it gets around having
> >to speculate about the proto cell. If CSI can not, as Dembski claims, be
> >generated by natural processes but only conveyed, it is enough to show
> >that a modern cell contains CSI to implicate ID at the origin.
>
> In this case, the beauty is in the eye of the beholder.
>
> There are serious problems with Dembski's approach.
>
> First, Dembski is not proposing a scientific theory of the design that
> some see in nature. There will be no experiments and no observations
> that confirm or refute his hypothesis.
Wrong. The experiments will attempt to prove or disprove that an observable
event contains a measurable entity, namely CSI.
> There will be no amplifying
> scientific investigations of design. On the other hand, in "Mere
> Creation," Dembski indicates that theologians may have a role.
I am sick of this one. To most people, the overriding concern is arriving at
a closer and closer approximation to the truth; they couldn't care less about
the taxonomy of the source. If science wants to define itself in a manner
that allows it to ignore the 400 pound gorilla sitting in the living room,
fine. The rest of us will leave you to play in your Escherian sandbox and
march merrily along.
> Second, Dembski does not actually argue that there is design in
> nature. He only outlines an approach for showing that design is
> necessary.
More accurately that there are things that can't be explained by chance or
regularity (natural law). He (wrongly in my view) thinks that all that is
left is design. I think this relies on an unjustifiable assumption of
superposition and thus does not eliminate the *possibility* that chance AND
regularity, bound together in some non-linear way, could generate CSI.
> His approach requires that all possible alternative
> natural hypotheses be examined and found inadequate. But this is an
> impossible task.
Again not so. He has formulated an algorithm for doing so. Others may
improve or refine it. Any resemblance this activity has to science is merely
coincidental.
> Third, Dembski's model of an intelligent agent is man. He is arguing
> that complexity implies intelligent design because man can design
> complicated things.
Not in any way. His argument is from set theory. Regularity, chance and
design form the universe of creative power. His argument attempts to
eliminate two of these for certain classes of creation events, leaving
design. Using this logic though, I believe we must include chance AND
regularity in a non-linear system as an element of the design set, unless
and until he explicitly removes them by proof.
> His argument for the nature of the design agent
> is from analogy.
I don't think he has speculated at all on the nature of the design agent.
Despite the fervent hopes that he fall into this trap, he has scrupulously
avoided it.
> Fourth, the term design is incomplete.
Perhaps we agree here.
> One cannot detect design.
> What one can detect is the implementation of a design.
Of course Dembski doesn't attempt to detect design but a type of information
that he feels implicates design.
> Dembski's
> designer must also be a builder. A plausible design event for Dembski
> is the origin of life on this planet. But building life requires
> detailed manipulations at the molecular level. Dembski's designer not
> only requires intelligence; he (or she or it) also requires magical
> powers.
Do molecular biologists possess magical powers? If not, are you saying that
creation of life in the laboratory is impossible? If not, why can one agent
do without the magic you require of the other?
"When you believe in things that you don't understand then you suffer,
superstition ain't the way" -Stevie Wonder
Jeff
http://www.lineone.net/express/99/09/11/news/n0100splash-d.html
http://x22.deja.com/=dnc/getdoc.xp?AN=424558267
You better pray hard for Rapture.
>
>Jeff
--
L.P.#0000000001
>Ivar Ylvisaker wrote:
>
>> First, Dembski is not proposing a scientific theory of the design that
>> some see in nature. There will be no experiments and no observations
>> that confirm or refute his hypothesis.
>
>Wrong. The experiments will attempt to prove or disprove that an observable
>event contains a measurable entity, namely CSI.
Dembski is attempting to show that there is design in nature and,
hence, there must be (or have been) a designer (or designers). But
Dembski offers no hypothesis about this design or about the
designer(s). Moreover, he specifically says he doesn't intend to.
From the beginning of chapter 3 of The Design Inference: "Indeed,
confirming hypotheses is precisely what the design inference does not
do. The design inference is in the business of eliminating hypotheses,
not confirming them."
With respect to CSI, see the comment about Dembski's algorithm below.
>> There will be no amplifying
>> scientific investigations of design. On the other hand, in "Mere
>> Creation," Dembski indicates that theologians may have a role.
>
>I am sick of this one. To most people, the overriding concern is arriving at
>a closer and closer approximation to the truth; they couldn't care less about
>the taxonomy of the source. If science wants to define itself in a manner
>that allows it to ignore the 400 pound gorilla sitting in the living room,
>fine. The rest of us will leave you to play in your Escherian sandbox and
>march merrily along.
I don't understand this comment. Which one does "this one" refer to?
>> Second, Dembski does not actually argue that there is design in
>> nature. He only outlines an approach for showing that design is
>> necessary.
>
>More accurately that there are things that can't be explained by chance or
>regularity (natural law). He (wrongly in my view) thinks that all that is
>left is design. I think this relies on an unjustifiable assumption of
>superposition and thus does not eliminate the *possibility* that chance AND
>regularity, bound together in some non-linear way, could generate CSI.
>
>> His approach requires that all possible alternative
>> natural hypotheses be examined and found inadequate. But this is an
>> impossible task.
>
>Again not so. He has formulated an algorithm for doing so. Others may
>improve or refine it. Any resemblance this activity has to science is merely
>coincidental.
What algorithm? Look at his summary of The Design Inference on page
222 of his book by the same name. It begins "Suppose a subject S has
identified all the relevant chance hypotheses H that could be
responsible for some event E." What algorithm does a scientist use to
do this?
>> Third, Dembski's model of an intelligent agent is man. He is arguing
>> that complexity implies intelligent design because man can design
>> complicated things.
>
>Not in any way. His argument is from set theory. Regularity, chance and
>design form the universe of creative power. His argument attempts to
>eliminate two of these for certain classes of creation events, leaving
>design. Using this logic though, I believe we must include chance AND
>regularity in a non-linear system as an element of the design set, unless
>and until he explicitly removes them by proof.
>
>> His argument for the nature of the design agent
>> is from analogy.
>
>I don't think he has speculated at all on the nature of the design agent.
>Despite the fervent hopes that he fall into this trap, he has scrupulously
>avoided it.
But he says the agent is intelligent. The one agent that we know is
intelligent is man. What does intelligent mean if it does not mean
man-like?
>> Fourth, the term design is incomplete.
>
>Perhaps we agree here.
>
>> One cannot detect design.
>> What one can detect is the implementation of a design.
>
>Of course Dembski doesn't attempt to detect design but a type of information
>that he feels implicates design.
>
>> Dembski's
>> designer must also be a builder. A plausible design event for Dembski
>> is the origin of life on this planet. But building life requires
>> detailed manipulations at the molecular level. Dembski's designer not
>> only requires intelligence; he (or she or it) also requires magical
>> powers.
>
>Do molecular biologists possess magical powers? If not, are you saying that
>creation of life in the laboratory is impossible? If not, why can one agent
>do without the magic you require of the other?
I was trying to emphasize what a broad claim Dembski is making.
Dembski is implying that he has found a crack into the realm of the
supernatural.
Ivar
And you don't see eliminating scientific hypotheses as legitimate
science??
>
> With respect to CSI, see the comment about Dembski's algorithm below.
>
> >> There will be no amplifying
> >> scientific investigations of design. On the other hand, in "Mere
> >> Creation," Dembski indicates that theologians may have a role.
> >
> >I am sick of this one. To most people, the overriding concern is
> >arriving at a closer and closer approximation to the truth; they
> >couldn't care less about the taxonomy of the source. If science wants
> >to define itself in a manner that allows it to ignore the 400 pound
> >gorilla sitting in the living room, fine. The rest of us will leave
> >you to play in your Escherian sandbox and march merrily along.
>
> I don't understand this comment. Which one does "this one" refer to?
"This one" refers to the various attempts to define away the whole issue
of ID as being outside the bounds of science.
Dembski has published an "Explanatory Filter"
http://www.arn.org/docs/dembski/wd_explfilter.htm which is an algorithm
to determine whether to attribute an event to chance, law or design. A
variation of it is being developed at SETI, further evidence that this
is indeed science by any reasonable meaning of the word.
>
> >> Third, Dembski's model of an intelligent agent is man. He is
> >> arguing that complexity implies intelligent design because man can
> >> design complicated things.
> >
> >Not in any way. His argument is from set theory. Regularity, chance
> >and design form the universe of creative power. His argument attempts
> >to eliminate two of these for certain classes of creation events,
> >leaving design. Using this logic though, I believe we must include
> >chance AND regularity in a non-linear system as an element of the
> >design set, unless and until he explicitly removes them by proof.
> >
> >> His argument for the nature of the design agent
> >> is from analogy.
> >
> >I don't think he has speculated at all on the nature of the design
> >agent. Despite the fervent hopes that he fall into this trap, he has
> >scrupulously avoided it.
>
> But he says the agent is intelligent. The one agent that we know is
> intelligent is man. What does intelligent mean if it does not mean
> man-like?
Asking that question is precisely what you characterize as a "serious
problem in Dembski's approach", as if investigation of other possible
forms of intelligent agents in or outside the universe is not
legitimate science.
>
> >> Fourth, the term design is incomplete.
> >
> >Perhaps we agree here.
> >
> >> One cannot detect design.
> >> What one can detect is the implementation of a design.
> >
> >Of course Dembski doesn't attempt to detect design but a type of
> >information that he feels implicates design.
> >
> >> Dembski's
> >> designer must also be a builder. A plausible design event for
> >> Dembski is the origin of life on this planet. But building life
> >> requires detailed manipulations at the molecular level. Dembski's
> >> designer not only requires intelligence; he (or she or it) also
> >> requires magical powers.
> >
> >Do molecular biologists possess magical powers? If not, are you
> >saying that creation of life in the laboratory is impossible? If not,
> >why can one agent do without the magic you require of the other?
>
> I was trying to emphasize what a broad claim Dembski is making.
What broad claim is that? BTW, you didn't answer the questions.
> Dembski is implying that he has found a crack into the realm of the
> supernatural.
I don't find that implication at all. What I find is unreasoned fear
among mainstream scientists that this may be so. Again, Dembski
steadfastly refuses to speculate on the characteristics of the design
agent that may be implicated by his theories.
Jeff
> Here we are largely in agreement. I hope you haven't construed my refutations of
> what I believe to be fallacious arguments as an implication of my belief that
> Dembski has proved his case. The glaring weakness I find in his argument comes down
> to an unjustifiable assumption that superposition applies. That is, eliminating
> chance OR regularity as cause of CSI (which he has done to my satisfaction) is not
> sufficient to disprove chance AND regularity as cause, if the system which binds
> these processes is non-linear. Now this would seem an obvious objection and one
> which I assume has been raised before. I just haven't been able to find it and so
> post it now so that you know I remain unconvinced.
Actually, chance and regularity can be reduced to chance. Note that
Dembski does not require a uniform probability distribution (though
all of his examples seem to use it). You should be able to represent
the combination of law and chance as a non-uniform density. In
this sense, I agree with Dembski that GA's are probability amplifiers.
They make finding a solution much more probable. As long as
Dembski defines complexity as improbability, then he has
"defined away" the whole question.
[SNIP for brevity]
>
> > You have reasonably asked in another GA context, how one can tell the
> > difference between Dembski's archer-as-agent and my evolved archer? If Dembski
> > offers "apparent" vs. "real" CSI as an answer then IMHO he's raised the white flag
> > too soon. My answer would be that you cannot tell the difference because BOTH were
> designed, that is, IF he could prove that the real archer was in fact designed. He
> may well have to raise the flag at some point on that issue -- but the
> archer-evolver is a design as surely as is a bubble sort algorithm.
>
Where does the assumption that the algorithm was designed come from?
Consider a flat plain with a circle drawn on it. Toss a ball into the air and it
lands inside the circle. This is analogous to firing an arrow at a wall.
Assuming a large plain, the circle is CSI. Now consider a plain that is
sloped. The circle occupies the lowest part of the plain. Now it becomes
very likely that the ball will end up inside the circle. What has happened?
Is gravity designed (your argument, IIUC)? Or is the circle not really
CSI? Something else?
Mike
> Jeff Patterson filled the aether with:
>
> > Marty Fouts wrote:
>
> >[snip]...Also, don't confuse problem solving with reasoning.
I realize that to those for whom Natural Selection is the answer to
everything, problem solving without reason comes as second nature. To
those of us in the real world, one is required to accomplish the other.
Jeff
As I see it, Dembski has already proposed the scientific hypothesis that
CSI won't be detectable in anything which occurs naturally. That
hypothesis has been falsified.
>> ... Dembski indicates that theologians may have a role.
>
>I am sick of this one. To most people, the overriding concern is arriving at
>a closer and closer approximation to the truth; they couldn't care less about
>the taxonomy of the source. If science wants to define itself in a manner
>that allows it to ignore the 400 pound gorilla sitting in the living room,
>fine. The rest of us will leave you to play in your Escherian sandbox and
>march merrily along.
Since the 400 pound gorilla is invisible, weightless, and otherwise
undetectable, why shouldn't we ignore it?
>> Second, Dembski does not actually argue that there is design in
>> nature. He only outlines an approach for showing that design is
>> necessary.
>
>More accurately that there are things that can't be explained by chance or
>regularity (natural law).
He refuses to recognize, though, that there is a big difference between
"isn't explained" and "can't be explained." His entire thesis reduces to
the ever-popular argument from incredulity.
>'problem solving' is *not* reasoning, nor is reasoning always required
>for problem solving. Rote, brute force and exhaustive search do very
>well at times, for example. And none of those things are "natural
>selection", either.
My question is, which do you do more frequently, solve the problem
without reasoning or reason without solving the problem?
-------
Steve Schaffner s...@genome.wi.mit.edu
SLAC and I have a deal: they don't || Immediate assurance is an excellent sign
pay me, and I don't speak for them. || of probable lack of insight into the
|| topic. Josiah Royce
Michael wrote:
> Jeff Patterson wrote:
>
> > Here we are largely in agreement. I hope you haven't construed my refutations of
> > what I believe to be fallacious arguments as an implication of my belief that
> > Dembski has proved his case. The glaring weakness I find in his argument comes down
> > to an unjustifiable assumption that superposition applies. That is, eliminating
> > chance OR regularity as cause of CSI (which he has done to my satisfaction) is not
> > sufficient to disprove chance AND regularity as cause, if the system which binds
> > these processes is non-linear. Now this would seem an obvious objection and one
> > which I assume has been raised before. I just haven't been able to find it and so
> > post it now so that you know I remain unconvinced.
>
> Actually, chance and regularity can be reduced to chance. Note that
> Dembski does not require a uniform probability distribution (though
> all of his examples seem to use it). You should be able to represent
> the combination of law and chance as a non-uniform density.
For a large class of systems this is true. Signal flow theory tells us we can represent
such a system as a stochastic source driving a system function to one or more outputs.
Thus if at some output y we have y=g(x) as the system function and X is some r.v. with
pdf fx(x), then the pdf fy(y) can be shown to be (cf Papoulis pg. 126)
fy(y) = fx(x1)/|g'(x1)| + fx(x2)/|g'(x2)| + ... + fx(xn)/|g'(xn)|
where x1, x2, ..., xn are the real roots of y = g(x),
and fy(y) = 0
for any y such that no real roots exist for y = g(x). But there are certain restrictions on
g(x). It must have a countable number of discontinuities and must not be a constant over
any interval.
Now suppose g(x) is a quantizer (staircase function) which violates the above. For a
continuous X, the outcome Y will also be quantized. This means Fy(y) (the probability that
Y <= y) will be a staircase, which means the pdf fy(y) will be a series of impulses located at
each staircase transition. These impulsive pdfs may present a problem for Dembski because
they represent concentrations of information, i.e. in these cases the information space
would be punctuated with points where P(Y=y) is non-zero. Whether this does in fact
present a problem of generality to Dembski is not clear to me but I know from (painful)
experience that wrapping a quantizer in a feedback loop (so-called delta-sigma modulators)
can result in structured limit cycles that are anything but random. Another important
case is where y = g(x) has only complex roots, which gives fy(y) = 0 for all such y.
Oscillators, which fall into this category, require noise as input to start but have
outputs which are quite structured.
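To make the quantizer case concrete, here is a minimal numerical sketch
(Python, purely illustrative): push a continuous Gaussian source through a
staircase g(x) and the output probability collapses onto a handful of
discrete levels -- the impulsive pdf described above.

    import random
    from collections import Counter

    def quantize(x, step=0.5):
        # staircase function: constant over every interval between steps,
        # violating the restriction on g(x) noted above
        return step * round(x / step)

    random.seed(1)
    out = Counter(quantize(random.gauss(0.0, 1.0)) for _ in range(100000))
    for level in sorted(out):
        print("y = %+.1f   P(Y = y) ~ %.3f" % (level, out[level] / 100000.0))
    # All of the probability sits at the staircase levels: Fy(y) is a
    # staircase and fy(y) is a train of impulses, with P(Y = y) non-zero
    # at discrete points.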
> In
> this sense, I agree with Dembski that GA's are probability amplifiers.
> They make finding a solution much more probable. As long as
> Dembski defines complexity as improbability, then he has
> "defined away" the whole question.
If by question you mean whether GAs can create CSI, I don't think the answer depends on
the definition of complexity. In my example which shows how GA's "find" CSI that already
exists, I did not use such a definition.
> [SNIP for brevity]
>
> >
> > You have reasonably asked in another GA context, how one can tell the
> > difference between Dembski's archer-as-agent and my evolved archer? If Dembski
> > offers "apparent" vs. "real" CSI as an answer then IMHO he's raised the white flag
> > too soon. My answer would be that you cannot tell the difference because BOTH were
> > designed, that is, IF he could prove that the real archer was in fact designed. He
> > may well have to raise the flag at some point on that issue -- but the
> > archer-evolver is a design as surely as is a bubble sort algorithm.
> >
>
> Where does the assumption that the algorithm was designed come from?
Uh... 'cause I designed it just as Dawkins designed his "methinks" evolver and just like
all GAs are designed.
> Consider a flat plain with a circle drawn on it. Toss a ball into the air and it
> lands inside the circle. This is analogous to firing an arrow at a wall.
> Assuming a large plain, the circle is CSI. Now consider a plain that is
> sloped. The circle occupies the lowest part of the plain.
You've just designed a system.
> Now it becomes
> very likely that the ball will end up inside the circle. What has happened?
A number of things. You've demonstrated a basic precept of information theory, that is
that information is always tied to a particular experiment. If instead of throwing a ball
you shoot an arrow we are right back to the archer analogy. You've also shown that by
introducing regularity you have removed the uncertainty from the system, a result that
seems law-like. A system whose output is certain cannot convey information much less
create it. Finally, you've given me an excellent mental picture of the impulsive pdf I
was describing above. The pdf for the system you've described contains an impulse at the
location of the dimple.
> Is gravity designed (your argument, IIUC)?
Careful, you're stepping outside the bounds of science :>)
> Or is the circle not really CSI?
Depends on the experiment. If you're throwing balls it is not CSI, if you are shooting
arrows it is.
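For what it's worth, your two experiments are easy to simulate. A minimal
sketch (Python, illustrative only; the slope is modeled crudely as the ball
rolling 99% of the way toward the low point):

    import random

    def toss(sloped, trials=100000):
        hits = 0
        for _ in range(trials):
            # the ball lands uniformly on a 200 x 200 plain;
            # the circle has radius 1 and sits at the origin
            x = random.uniform(-100.0, 100.0)
            y = random.uniform(-100.0, 100.0)
            if sloped:
                # crude slope: the ball rolls 99% of the way toward
                # the low point at the circle's center
                x, y = 0.01 * x, 0.01 * y
            hits += (x * x + y * y <= 1.0)
        return hits / float(trials)

    print("flat plain  :", toss(False))   # ~ pi/40000, about 8e-5
    print("sloped plain:", toss(True))    # ~ pi/4, about 0.785

Four orders of magnitude of amplification from the regularity alone, which
is just what "probability amplifier" ought to mean.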
Jeff
A stone falling down from a mountain and a light ray refracted by a
water surface solve the variational problem of minimizing action.
Reasoning does not come into it.
HRG.
:> ...but there's more than one theory that claims life could come into
:> existence rather easily and gives details of the possible mechanism
:> involved.
: If it's so goddam easy why doesn't somebody just do it (create life from
: inanimate material) and end at least the "impossible" part of the debate?
It depends on what you mean by "life".
Small-scale evolutionary processes are likely to be coming into existence
on a daily basis in pools all over the planet as part of the mechanics of
crystal growth.
Many people regularly create evolving systems in virtual worlds.
Simple evolutionary processes take place among computer viruses
distributed around the internet.
Of course these latter examples are strictly "life coming from life"...
What would it take to make you happy?
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
Anarchy is against the law.
[snip]
>> Consider a flat plain with a circle drawn on it. Toss a ball into the
>> air and it lands inside the circle. This is analogous to firing an
>> arrow at a wall.
>> Assuming a large plain, the circle is CSI. Now consider a plain that is
>> sloped. The circle occupies the lowest part of the plain.
>
>You've just designed a system.
Not if you draw the circle after the fact. Which is exactly what
Dembski and others do.
They see the ball going to only a specific place on a complex
landscape, and say "Ooooh, CSI!"
There are physical reasons that a particular (specified) biopolymer
is chosen over a large number of those having equivalent probability
of being formed at random. They aren't formed at random.
>> Now it becomes
>> very likely that the ball will end up inside the circle. What has
>> happened?
>
>A number of things. You've demonstrated a basic precept of information
>theory, that is that information is always tied to a particular
>experiment.
How about this one: Just because information theory can be
applied to a system does not mean it had information "put" there
by intelligence.
[snip]
Tracy P. Hamilton
> In article <37F2AD97...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
> >Ivar Ylvisaker wrote:
> >> First, Dembski is not proposing a scientific theory of the design that
> >> some see in nature. There will be no experiments and no observations
> >> that confirm or refute his hypothesis.
> >
> >Wrong. The experiments will attempt to prove or disprove that an observable
> >event contains a measurable entity, namely CSI.
>
> As I see it, Dembski has already proposed the scientific hypothesis that
> CSI won't be detectable in anything which occurs naturally. That
> hypothesis has been falsified.
No, that's not his hypothesis at all. His hypothesis is that CSI cannot be
*created* by chance or regularity. Once created, CSI can be manipulated by any
number of "natural" systems and could in practice be detected in these systems. The
human genome may be one example.
> >> ... Dembski indicates that theologians may have a role.
> >
> >I am sick of this one. To most people, the overriding concern is arriving at
> >a closer and closer approximation to the truth; they couldn't care less about
> >the taxonomy of the source. If science wants to define itself in a manner
> >that allows it to ignore the 400 pound gorilla sitting in the living room,
> >fine. The rest of us will leave you to play in your Escherian sandbox and
> >march merrily along.
>
> Since the 400 pound gorilla is invisible, weightless, and otherwise
> undetectable, why shouldn't we ignore it?
ID may be invisible and weightless but as for detectable, the jury is still out.
> >> Second, Dembski does not actually argue that there is design in
> >> nature. He only outlines an approach for showing that design is
> >> necessary.
> >
> >More accurately that there are things that can't be explained by chance or
> >regularity (natural law).
>
> He refuses to recognize, though, that there is a big difference between
> "isn't explained" and "can't be explained."
I think he understands the distinction perfectly well. After all, it is the latter
he is trying to prove.
>His entire thesis reduces to the ever-popular argument from incredulity.
Fine by me if he can support it mathematically.
Jeff
Tracy P. Hamilton wrote:
> Jeff Patterson wrote in message <37F3B0D2...@mpmd.com>...
> >
> >
> >Michael wrote:
> >
> >> Jeff Patterson wrote:
>
> [snip]
>
> >> Consider a flat plain with a circle drawn on it. Toss a ball into the
> >> air and it lands inside the circle. This is analogous to firing an
> >> arrow at a wall.
> >> Assuming a large plain, the circle is CSI. Now consider a plain that is
> >> sloped. The circle occupies the lowest part of the plain.
> >
> >You've just designed a system.
>
> Not if you draw the circle after the fact. Which is exactly what
> Dembski and others do.
...when they are demonstrating that ex post facto specifications do not specify
CSI.
> They see the ball going to only a specific place on a complex
> landscape, and say "Ooooh, CSI!"
Well... yes, if the specific place was specified in advance then the ball didn't
likely get there by accident. The more balls there are in their respective
prespecified spots, the less likely it is.
> There are physical reasons that a particular (specified) biopolymer
> is chosen over a large number of those having equivalent probability
> of being formed at random. They aren't formed at random.
I don't understand your point or pretend to know what the significance of a
biopolymer is. If they aren't formed by chance though then I'll wager (without
even knowing what they are) that they are formed by regulation or design.
> >> Now it becomes
> >> very likely that the ball will end up inside the circle. What has
> >> happened?
> >
> >A number of things. You've demonstrated a basic precept of information
> >theory, that is that information is always tied to a particular
> >experiment.
>
> How about this one: Just because information theory can be
> applied to a system does not mean it had information "put" there
> by intelligence.
If you are positing some sort of law I'd suggest you refine it a tad before
publishing. Just a suggestion.
Jeff
> [snip]
>
I'd say so.
Marty Fouts wrote:
> Jeff Patterson filled the aether with:
>
> > Marty Fouts wrote:
>
> >> Jeff Patterson filled the aether with:
> >>
> >> > Marty Fouts wrote:
> >>
> >> [snip]
> >>
> >> >> But since the 'pre-specified pattern' is an imponderable,...
> >>
> >> > How so?
> >>
> >> There's no way to distinguish after the fact whether an event had
> >> a pre-specified pattern.
>
> > That will come as news to cryptologists :>) Seriously you can't mean
> > for your statement to be a generalization. If I hand you a piece of
> > paper with two strings of binary digits say 1000 digits long, each
> > is equally likely to have been generated by a fair coin toss. Say
> > one was in fact so generated. But if the other one is alternating,
> > 101010...., you have no trouble discerning the pattern the designer
> > used to generate the string.
>
> Sorry? The above does not make sense. Because humans like patterns,
> and want them to have meaning, seeing the string 101010... gives me a
> very subjective desire to believe that a 'design' was involved.
What is subjective about a 1 in 2^1000 chance that this string was generated at
random?
Jeff
Bigdakine wrote:
> >Subject: Re: Dembski's Intelligent Design Hypothesis
> >From: Jeff Patterson jp...@mpmd.com
> >Date: Mon, 27 September 1999 05:42 PM EDT
> >Message-id: <37EFE5EB...@mpmd.com>
> >
> >
> >
> >Bigdakine wrote:
> >
> >> >Subject: Re: Dembski's Intelligent Design Hypothesis
> >> >From: ylvi...@erols.com (Ivar Ylvisaker)
> >> >Date: Sun, 26 September 1999 02:02 AM EDT
> >> >Message-id: <37eda9a2...@news.erols.com>
> >> >
> >> >On 24 Sep 1999 21:27:31 -0400, Marty Fouts
> >> ><mathem...@usenet.nospam.fogey.com> wrote:
> >> >
> >> >While I do not find Dembski's reasoning persuasive, he is a little
> >> >more logical than your post suggests.
> >> >
> >> >>There in lies the first problem with Dembski's reasoning. Low
> >> >>probability of occurrence does not equal complexity, nor does it need
> >> >>arise from design.
> >> >
> >> >Dembski does equate improbability and complexity. His reasoning seems
> >> >to be that improbable events when they occur convey lots of
> >> >information, i.e., you have to use lots of bits to distinguish the
> >> >event that did occur from all the other events that might have
> >> >occurred. And an information measure is a kind of complexity.
> >>
> >> The question is, is how much information was needed for that first
> >> self-replicating molecule that led to the abiogenic process. Dembski's
> >argument
> >> assumes a priori that it was large; large enough to be improbable. Its
> >> circular.
> >
> >Not quite. He argues that CSI is conserved. Thus if found today, it must have
> >been
> >present at the start.
> >
> >Jeff
> >
> How does that mesh with Big-Bang cosmology?
>
> Stuart
Quite well actually. For an event that created space and time, a little CSI would
be child's play
Jeff
> Jeff Patterson filled the aether with:
>
> > Marty Fouts wrote:
>
> >> Jeff Patterson filled the aether with:
> >>
> >> > Marty Fouts wrote:
> >>
> >> >> Jeff Patterson filled the aether with: Marty Fouts wrote: [snip]
> >> But since the 'pre-specified pattern' is an imponderable,... How
> >> so? There's no way to distinguish after the fact whether an event
> >> had a pre-specified pattern.
> >>
> >> > That will come as news to cryptologists :>) Seriously you can't
> >> mean for your statement to be a generalization. If I hand you a
> >> piece of paper with two strings of binary digits say 1000 digits
> >> long, each is equally likely to have been generated by a fair coin
> >> toss. Say one was in fact so generated. But if the other one is
> >> alternating, 101010...., you have no trouble discerning the pattern
> >> the designer used to generate the string.
> >>
> >> Sorry? The above does not make sense. Because humans like
> >> patterns, and want them to have meaning, seeing the string
> >> 101010... gives me a very subjective desire to believe that a
> >> 'design' was involved.
>
> > What is subjective about a 1 in 2^1000 chance that this string was
> > generated at random?
>
> nothing. Why are you trolling, rather than trying to have a
> conversation?
Not sure what trolling means but I thought we were having a conversation.
You claimed that it was only subjective intuition that led you to suspect a
pattern. I was trying to point out that there is something much stronger
to compel you in that direction, namely the staggering odds against it
being anything but a pattern.
If trolling has something to do with sarcasm, it's because I grow weary of
the never give an inch, never admit the obvious, always grind every point
to powder debating style that seems to be the norm here. I have ideas, you
have ideas - I'm not out to change minds or to do battle for battle's
sake. Just want a fair and open debate, and an honest objective critique.
Cheers,
Jeff
I've never seen a case where Dembski, or other proponents of CSI/ID,
have pre-specified the points of interest. Instead, all I've seen
is where people point to something after-the-fact and say "that's
complex". In other words, they all fit the "perfect marksman" joke.
[A man is driving one day and sees a barn covered with many painted
bulls-eyes. Intrigued, he gets out and looks, and finds that each
bulls-eye has a bullet hole in the exact center. Just then the
farmer shows up, and the man asks who did the shooting. "My cousin
Clem," the farmer answers. "He must be a great marksman," the man
says. "Nope," the farmer replies, "he just paints the bulls-eye
around the hole after he shoots."]
> > There are physical reasons that a particular (specified) biopolymer
> > is chosen over a large number of those having equivalent probability
> > of being formed at random. They aren't formed at random.
>
> I don't understand your point or pretend to know what the significance of a
> biopolymer is. If they aren't formed by chance though then I'll wager (without
> even knowing what they are) that they are formed by regulation or design.
Biopolymers include DNA, RNA, and proteins. Their formation from
the components is entirely in accord with chemistry and physics,
so "regulation or design" is involved only if it is also involved
in, say, the formation of a snowflake.
By the way, completely specifying a particular snowflake would take
a huge amount of information, even with the symmetry. Do you think,
looking at a snowflake, that there must be some design(er) involved
because of the huge amount of complexly-specified-information needed
to produce *exactly* that snowflake?
--
Ken Cox k...@research.bell-labs.com
> What is subjective about a 1 in 2^1000 chance that this string was generated at
> random?
Somehow I don't think you really mean that. Here, let me give
you a string of 1000 bits:
EBD5627A11676BA248B5259CAFEBFCFD7C2EEA8FBF72A14EA0
3C5C8FC6AB398A5A8441300A2AC6866B2C7BB316E406F51795
8C587723EA945B0D6C082F915B663FCBC14396671FB6AC3184
9A35B812F41BFDC205A6CB5810451C9A6CF1403345E42B7301
9DCF5EF9305CAB89CCA00E5E279AAAB4783C62592A5C5D519F
I hope you don't mind my compressing it by using hexadecimal.
Now, what is the pattern that you discern that the designer used
to generate the string? After all, if your previous logic holds,
such a designer and such a pattern must exist since the odds of
that string being generated at random are 1 in 2^1000.
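If you want an operational stand-in for "discerning the pattern," try
compressing the two strings. A sketch (Python; zlib as a crude proxy for
algorithmic compressibility, and os.urandom standing in for the hex string
above):

    import os
    import zlib

    patterned = b"\xaa" * 125    # 1000 bits of 101010..., packed as bytes
    random_s = os.urandom(125)   # 1000 coin-flip bits
    for name, s in (("alternating", patterned), ("random", random_s)):
        print(name, len(s), "bytes ->", len(zlib.compress(s, 9)), "compressed")
    # The alternating string collapses to about a dozen bytes; the random
    # one does not compress at all.  Both outcomes have probability 2^-1000
    # under a fair-coin model; only one matches a short, independently
    # statable pattern.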
--
Ken Cox k...@research.bell-labs.com
[snip]
>And you don't see eliminating scientific hypotheses as legitimate
>science??
I have no problem with scientists refuting an hypothesis. I do have a
problem with scientists claiming one hypothesis is true merely because
another hypothesis is false. If only two hypotheses are logically
possible, this might be OK, but, in science, this is rarely, if ever,
the case.
[snip]
>"This one" refers to the various attempts to define away the whole issue
>of ID as being outside the bounds of science.
I have no problem including intelligent design in science. Men, lab
rats, aliens from outer space, gods: all are acceptable hypotheses.
Any time you or Dembski wants to offer such an hypothesis, I'll
consider it. Of course, I'll want to know what your evidence is.
[snip]
>Dembski has published an "Explanatory Filter"
>http://www.arn.org/docs/dembski/wd_explfilter.htm which is an algorithm
>to determine whether to attribute an event to chance, law or design. A
>variation of it is being developed at SETI, further evidence that this
>is indeed science by any reasonable meaning of the word.
An algorithm is a step-by-step recipe for solving a problem.
Dembski's filter essentially asserts that one proves design by showing
that all natural explanations are unthinkable. (I'm lumping chance
and lawful explanations together here as natural.) Dembski's
description of his filter leaves out almost all the steps (some of
which will not be known for centuries). Dembski's filter is much too
vague to be called an algorithm.
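To see how little is specified, here is the filter's entire top-level
structure as the ARN page describes it, with every hard step left as a stub
(a sketch of mine; the predicate names are not Dembski's):

    def explanatory_filter(event):
        # the three-way decision: regularity, then chance, then design;
        # every predicate below is where the real work would go, and
        # none of them is given as an effective procedure
        if explained_by_natural_law(event):         # checked against which laws?
            return "regularity"
        if probable_under_chance(event):            # under which hypotheses?
            return "chance"
        if specified(event) and improbable(event):  # specification found how?
            return "design"
        return "chance"  # improbable but unspecified events default here

    def explained_by_natural_law(event):
        raise NotImplementedError("requires every relevant natural law")

    def probable_under_chance(event):
        raise NotImplementedError("requires all relevant chance hypotheses")

    def specified(event):
        raise NotImplementedError("requires an independent pattern")

    def improbable(event):
        raise NotImplementedError("requires a probability bound, e.g. 2^-500")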
[snip]
>Asking that question is precisely what you characterize as a "serious
>problem in Dembski's approach", as if investigation of other possible
>forms of intelligent agents in or outside the universe is not
>legitimate science.
See above.
[snip]
>>>Do molecular biologists possess magical powers? If not, are you
>>>saying that creation of life in the laboratory is impossible? If not, why
>>>can one agent do without the magic you require of the other?
>>
>>[Ivar] I was trying to emphasize what a broad claim Dembski is making.
>
>What broad claim is that? BTW, you didn't answer the questions.
The real alternative to a natural explanation is a supernatural
explanation. Dembski's term "design" is a euphemism that obscures the
controversial implications of Dembski's thesis.
The difference between science and magic probably depends on your
viewpoint. But Dembski clearly hopes that his design agent will prove
to be God. The first chapter of Genesis says "And God said, 'Let the
earth bring forth living creatures....' And it was so." I'd describe
those words as a magical incantation.
[snip]
>I don't find that implication at all. What I find is unreasoned fear
>among mainstream scientists that this may be so. Again, Dembski
>steadfastly refuses to speculate on the characteristics of the design
>agent that may be implicated by his theories.
The use of the term "design agent" is itself speculation.
Ivar
JP> [snip]
This was the point at issue. Deleting it makes no sense.
[Restored]
JP>The point is that you
JP>can design a filter that never exhibits false positives,
JP>attributing ID to an object generated by chance or law.
[End Restoration]
JP>This makes the whole arrowhead argument mute.
WRE> "Mute"? I'll assume "moot" was meant.
JP>No, mute -- as in unable to speak to the issue. Oh alright, it
JP>was late and I was tired.
WRE> Even then, Jeff will find that "moot" means "arguable"
JP>Ironic isn't it? - that the common meaning has become the
JP>opposite of the formal? Even webster has raised the white
JP>flag, "having no legal substance".
WRE> So far, though, I have not seen anything that would lead me to
WRE> believe that signal detection theory has been overturned by
WRE> Dembski.
JP>I'm not sure what you mean here Wes. I don't think I implied
JP>anything at all about this.
Jeff claimed that a filter could be designed to yield no false
positives. That is, I'll grant, a *claim* rather than an
*implication*. The relationship of this claim to signal
detection theory is pretty obvious.
JP>I was just trying to establish
JP>that Dembski's definition of information was the same one
JP>Shannon developed for signal detection, to use your
JP>term. Dembski applies the definition to inquire about the
JP>origin of information, a question Shannon didn't
JP>address. Shannon starts by postulating an information
JP>source. Must we grind every point to powder?
Signal detection theory says that there is no filter which has
no false positives, nor one which entirely eliminates false
negatives. Either Dembski has overturned signal detection
theory, or his Explanatory Filter can produce false positives.
That's what I mean.
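The tradeoff is easy to exhibit numerically. Model "chance" and "design"
scores as any two overlapping distributions -- unit normals one sigma apart
will do for illustration -- and sweep the decision threshold (a Python
sketch with made-up distributions):

    import random

    random.seed(1)
    chance = [random.gauss(0.0, 1.0) for _ in range(100000)]
    design = [random.gauss(1.0, 1.0) for _ in range(100000)]

    for t in (0.0, 1.0, 2.0, 3.0, 4.0):
        fp = sum(x > t for x in chance) / float(len(chance))   # false positive rate
        fn = sum(x <= t for x in design) / float(len(design))  # false negative rate
        print("threshold %.1f: FP %.5f  FN %.5f" % (t, fp, fn))
    # Raising the threshold drives false positives toward zero only by
    # driving false negatives toward one; with overlapping distributions
    # no setting makes both zero at once.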
I reserve the right to object to incorrect information being
disseminated, and to do so at my own pace and in my own way.
If Jeff does not like it, he is free to ignore my posts. He
won't be the first or the last to do so.
WRE> Nor have I seen anything that would indicate that Dembski's
WRE> Explanatory Filter is an instance of such a design.
JP>Sorry. I'm lost. What do you mean by "such a design". What
JP>category of design do you have in mind here?
Of the category of filters with the property of finding no
false positives.
JP> [snip]
[Restore]
In fact, I list several ways in which Dembski's Explanatory
Filter fails to accurately capture how humans go about finding
design in day-to-day life in my review of TDI.
[End Restoration]
And no comment on that from Jeff. Given that Dembski relies
heavily upon analogy to human detection of design in
day-to-day life in making his argument for proceeding to infer
intelligent agency once "design" is found as an attribute of
an event, I'd think that making an accurate description of the
process in question would be pretty important.
I'm not the only one to critique Dembski on his accuracy of
capturing human design detection procedures. Elliott Sober et
alia's "How Not to Detect Design" makes strong criticisms of
Dembski's Explanatory Filter and extended logical arguments.
JP> Dembski claims
JP>with proof that NS (or any stochastic, deterministic or
JP>hybrid process) can never generate CSI, only transmute
JP>it. This follows logically from the definition of CSI which,
JP>if I may use very rough terms, is that information not
JP>produced by chance or law.
WRE> If that were how Dembski defined CSI, then we could mumble,
WRE> "Begs the question," and go home. It is not how he did so,
WRE> and thus there is more to argue about.
JP>For some reason, you chose to split my definition right at
JP>the point where I resolved the "begs the question" issue. But
JP>as far as the first part of the definition goes, at one point
JP>you agreed with me. From your published review of
JP>TDI<http://www.rtis.com/nat/user/elsberry/zgists/wre/papers/dembski7.html>:
JP>"From the set of all possible explanations, he first
JP>eliminates the explanatory categories of regularity and
JP>chance; then whatever is left is by definition design. Since
JP>all three categories complete the set, design is the
JP>set-theoretical complement of regularity and chance. "
JP>Unless you are quibbling about my use of the word law (which
JP>I clearly equate to regularity below), I don't see how your
JP>review summary of Dembski's argument differs from mine.
Our topic of discussion was the definition of CSI given by
Dembski. The quote Jeff cites from my review concerns
"design". "CSI" or one of its synonymous phrases is
conspicuous by its absence from the quote. Jeff is invited to
come back to the topic. If Jeff has some citation that would
indicate that Dembski at some point provided a
question-begging version of a definition of CSI, I'd like to
see it. Until such time as Jeff ponies up the documentation
of his claim, though, I will continue to treat it as a
chimera.
WRE> At least, Dembski
WRE> did not so define it before his "reiterations" post. With
WRE> the introduction of the unspecified qualifiers "apparent"
WRE> and "actual" to be prepended to "CSI", it looks like Dembski
WRE> may indeed have simply slipped into publicly begging the
WRE> question.
JP>You have me at a disadvantage here as I haven't seen the post
JP>you reference and haven't run across him using the qualifiers
JP>you mentioned in the articles I have read. A pointer would be
JP>most appreciated.
<http://x44.deja.com/=dnc/getdoc.xp?AN=525224114>
WRE> Dembski talked about natural selection in his 1997 paper given
WRE> at the NTSE. In it, one will find a reference to an extended
WRE> analysis of natural selection from which the points given in
WRE> the paper were taken. Dembski said that that analysis
WRE> appeared in Section 6.3 of "The Design Inference". Bill
WRE> Jefferys brought up Dembski's analysis of natural selection
WRE> during the discussion period. Bill Jefferys' comments seemed
WRE> to have an effect: Section 6.3 of what was published as TDI
WRE> does *not* contain an extended discussion of natural
WRE> selection. Nor did it get moved to another section of the
WRE> book. It disappeared entirely.
WRE> So, I would ask Jeff where this "proof" of Dembski's
WRE> concerning natural selection is to be found. I know that
WRE> Dembski is working on a book that supposedly will give his
WRE> extended analysis of both evolutionary computation and
WRE> natural selection, but it is not yet here. Nor do I accept
WRE> that it is valid in the absence of being able to review it
WRE> myself. Where's the proof?
JP>Here we are largely in agreement. I hope you haven't
JP>construed my refutations of what I believe to be fallacious
JP>arguments as an implication of my belief that Dembski has
JP>proved his case.
The closing statement of Jeff's post makes it appear that
Jeff is leaning that way, though.
JP>The glaring weakness I find in his argument comes down to an
JP>unjustifiable assumption that superposition applies. That is,
JP>eliminating chance OR regularity as cause of CSI (which he
JP>has done to my satisfaction) is not sufficient to disprove
JP>chance AND regularity as cause, if the system which binds
JP>these processes is non-linear. Now this would seem an obvious
JP>objection and one which I assume has been raised before. I
JP>just haven't been able to find it and so post it now so that
JP>you know I remain unconvinced.
Dembski's "Design as a Theory of Information" attempts to
disqualify chance AND regularity as a cause of CSI. I didn't
buy it, though.
WRE> Again, the objections raised to algorithms as sources of CSI
WRE> seem to be handled preferentially: they are not applied to
WRE> intelligent agents, and yet that application seems both fair
WRE> and reasonable.
JP>I have some thoughts on this matter and would be interested
JP>in your response. We started down this road on another post
JP>and got sidetracked. In the meantime, I have refined an
JP>analogy I think is illustrative. Let's use Dembski's
JP>archer-as-agent analogy as a starting point. Recall he uses a
JP>target painted on a wall as an example of CSI, in that
JP>hitting any point on the wall is a zero probability event
JP>(for this argument I assume that the target is a point in the
JP>mathematical sense, i.e. has zero area), which makes the
JP>information associated with that event complex; to single out
JP>a particular point in advance of the shot makes the
JP>information specified. Now I propose to evolve an archer. To
JP>do so I randomly choose an ensemble of vectors each comprised
JP>of a direction and an angle relative to the horizon from the
JP>space of such vectors which intersect the wall (or I could
JP>enclose the whole thought experiment in a sphere and then the
JP>space would be all such vectors). Next we shoot an arrow
JP>according to the information contained in each vector. Note
JP>that these vectors contain complex but unspecified
JP>information. We measure the distance from each arrow to the
JP>target and choose a subset of the vectors which are
JP>closest. For each vector in the subset we flip a coin to
JP>decide which of the two elements to change and then randomly
JP>change that randomly selected element. We repeat the
JP>experiment for this new generation of vectors. I maintain
JP>such a system will evolve a perfect archer given a large
JP>enough initial ensemble, because in the limit as the ensemble
JP>grows to encompass all of the available information space, it
JP>contains the specified point with probability 1.
This doesn't fit the properties of GAs that I have run. In
Perl scripts that I have done, I try to keep track of the
unique solutions which have been tried by the algorithm and
produce a ratio of the number of solution states examined to
the number of solution states in the problem space. This is
typically a tiny, tiny fraction. The GA does *not* typically
explore the complete problem space. For the toy "WEASEL"
problem that Dembski criticizes, the typical ratio in my runs
is about 2E-37.
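For concreteness, here is a minimal rendition of Jeff's archer-evolver with
the same bookkeeping (a Python sketch; the grid size, population numbers,
and the local mutation step -- substituted for Jeff's full re-randomization
of an element so the run stays short -- are illustrative choices of mine):

    import random

    random.seed(2)
    GRID = 10**6                    # each angle discretized to 10^6 steps
    TARGET = (random.randrange(GRID), random.randrange(GRID))
    visited = set()                 # unique vectors actually examined

    def fitness(v):
        visited.add(v)
        return abs(v[0] - TARGET[0]) + abs(v[1] - TARGET[1])

    def mutate(v):
        # flip a coin for which element to change, then perturb it
        i = random.randrange(2)
        w = list(v)
        w[i] = min(GRID - 1, max(0, w[i] + int(random.gauss(0, 1000))))
        return tuple(w)

    pop = [(random.randrange(GRID), random.randrange(GRID))
           for _ in range(200)]
    for gen in range(1000):
        pop.sort(key=fitness)
        pop = pop[:40] + [mutate(v) for v in pop[:40] for _ in range(4)]

    pop.sort(key=fitness)
    print("closest miss:", fitness(pop[0]), "grid steps from the target")
    print("explored fraction:", len(visited) / float(GRID ** 2))
    # The archer closes in on the prespecified point while examining on
    # the order of 1e-7 of the ~1e12 possible vectors -- nothing remotely
    # like a sweep of the whole information space.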
JP>So we've evolved our archer with a reasonable, if simplified,
JP>model of genetic algorithms in general. We could add
JP>complications like sex between the surviving vectors,
JP>multiple mutations etc., but that would not change the
JP>result. Given enough time and a large enough ensemble, we
JP>eventually can get arbitrarily close to the target. Have we
JP>produced CSI? Only in the sense that a machine designed to
JP>find a needle in a haystack can produce the needle. It does
JP>not however, create the needle. The needle was there to
JP>begin with. In the same way GA's can produce *knowledge* by
JP>scouring a predefined information space to identify a given
JP>piece of CSI. They cannot create the information
JP>themselves. They are attracted to the solution as surely as a
JP>ball dropped from a height is attracted to the earth's center
JP>of gravity. This is what Dembski meant by GA's being
JP>probability amplifiers, an exceedingly poor choice of
JP>words. His idea here though is clearly that if a machine is
JP>designed to find a piece of CSI, it surely belongs in the
JP>regularity set, even if it uses stochastic processes in its
JP>implementation.
This is a confusion of *cause* and *event*. The event is the
placement of an arrow into a prespecified location. Whether
one posits a human archer or a robot archer, the event
contains CSI. Dembski's Explanatory Filter categorizes
*events*, not *causes*.
This business of information transformation is one that I
explained earlier, but I can repeat that explanation easily
enough.
The objection that one goes from an antecedent set of
information in one form to a CSI solution of another form is
no objection at all. An intelligent agent would have no hope
of solving a 100-city tour of the TSP in the absence of the
distance information, or at least would only have the same
expectations as one would obtain from blind search. The
algorithm and the agent operate to generate CSI that did not
exist before from the information that is available. This is
another way of saying that rational intelligence and
algorithms have the same dependencies when generating CSI.
JP>You have reasonably asked in another GA context, how one can
JP>tell the difference between Dembski's archer-as-agent and
JP>my evolved archer? If Dembski offers "apparent" vs. "real"
JP>CSI as an answer then IMHO he's raised the white flag too
JP>soon. My answer would be that you cannot tell the difference
JP>because BOTH were designed, that is, IF he could prove that
JP>the real archer was in fact designed. He may well have to
JP>raise the flag at some point on that issue -- but the
JP>archer-evolver is a design as surely as is a bubble sort
JP>algorithm.
It does not matter whether the agent producing the CSI is
"designed" or not. What Dembski seeks to establish is the
necessity of an *intelligent* agent as the sole class of
sources of CSI. While a bubble sort is designed, it fails to
convince most people that it itself is intelligent. Does Jeff
believe a bubble sort algorithm is itself intelligent? Does
this cause Jeff moral qualms when he shuts off his computer?
WRE> Dembski uses the case of Nick Caputo as an
WRE> instance of CSI that implies a design. And yet, Nick Caputo's
WRE> design consisted entirely of converting the
WRE> previously-existing information of party affiliation into a
WRE> preferential placement on voting ballot. People who
WRE> plagiarize do no more than *copy* previously existing
WRE> information, and yet this is another of Dembski's examples of
WRE> design. If one excludes things from producing CSI on the
WRE> basis of transmutation of information, then it seems that one
WRE> must exclude *both* algorithms and intelligent agents, or at
WRE> least intelligent agents who use rational thought in producing
WRE> solutions.
JP>I would include both agents and algorithms as things
JP>designed- or at least if one is, the other is as well.
That is not the issue. Does Jeff consider both to be
*intelligent*?
Probably the most agitated I saw Dembski get was in the
discussion period of Ann Foerst's presentation at the NTSE
conference. Foerst's topic was the "Imago Dei", and how this
gift of God to humans was transferable to some products of
human design. Dembski objected strenuously, and cited his
theological alma mater as superior to Foerst's theological
alma mater. It was quite a show. But IIRC Dembski took issue
with even the consideration that some robotic product might be
considered intelligent, with the same kinds of prospects for
agency in design that Dembski attributed to humans and God.
JP>If that definition stood by itself, your points about
JP>circularity would be well taken; it excludes chance a
JP>priori.
JP>BTW, here's where I answered your begging the question objection...
JP> But if we have another, independent test of CSI, the
JP>circularity is broken. We can use the second test to
JP>determine the presence of CSI and use the first definition to
JP>exclude chance and law. Now I think you have implicitly
JP>agreed in your earlier remarks that prespecification of
JP>pattern provides that independent test, but hold that this is
JP>useless because it is impossible to determine the
JP>prespecified pattern from the event.
My reference to "begging the question" concerned the
characterization that Jeff gave of Dembski's definition of
"CSI". That characterization was incorrect: Dembski did not
define the phrase in a question-begging way. Just because
Jeff provides a reason to set aside the mischaracterization
doesn't make it go away. It isn't only Dembski's fans who can
point out lapses in correct characterization of Dembski.
My take on CSI is that it is an attribute that can be
recognized, but which does not necessitate an intelligent
agent as a cause. I know that Dembski argues that CSI does
provide a reliable basis for inferring an intelligent agent,
but I have not been convinced by his argument. Ugly little
facts can kill beautiful theories. One ugly little fact is
that algorithms produce CSI, whether one calls it
"transformation" or anything else. The only way rational
intelligent agents produce CSI is through the very same
"transformations" that cause such scoffing when an algorithm
is claimed to do the job. One might then step back to a
position that only non-rational intelligent agents (or
intelligent agents acting non-rationally) produce CSI, but I
don't think that is what Dembski wants to claim.
JP>[snip]
My answer doesn't make much sense without the question, so
why delete it?
[Restored]
JP>But clearly this is not
JP>so, I can certainly think of some sequences where the pattern
JP>can be discerned with certainty. Does life fall into this
JP>category? Who knows. But if you could be convinced somehow,
JP>that some aspect of life, that is required for something to
JP>be deemed alive, did indeed fall into that category, would
JP>you allow that it follows that NS cannot be responsible for
JP>the observed pattern specificity?
[End Restoration]
WRE> No. The presence of CSI does not imply intelligent agency.
WRE> Dembski says this no fewer than three different times in
WRE> TDI. One can look at NS and find that it conforms to the
WRE> triad of attributes Dembski says define what intelligent
WRE> agents do: actualization-exclusion-specification.
JP>I think I must surrender on this issue.
Wise move.
JP>It is possible that NS falls into the category where the
JP>superposition assumption I think Dembski erroneously makes
JP>breaks down. Note that that is not the same as saying I think
JP>NS is capable of producing CSI. I am actually quite
JP>skeptical that it can. I will allow though that when coupled
JP>with a non-linear system, which I presume describes genetic
JP>inheritance, it is not excluded on a set-theoretical basis
JP>and using Dembski's definition, as being incapable of
JP>producing CSI.
Ahem. Why would anyone believe that Dembski's set-theoretical
basis *would* exclude natural selection as a cause of events
which can be shown to have "design" as a property (sensu
Dembski)?
WRE> [...]
JP> [snip]
[Restored]
JP>Design may be undetectable, but at issue is whether chance
JP>can masquerade as design.
WRE>No it isn't. The issue is whether natural selection can
WRE>produce events that have the attribute of CSI. As I point
WRE>out in my review, the Design Inference classifies events,
WRE>not causes.
[End Restoration]
JP>I think it is practical to limit this possibility to an
JP>acceptable infinitesimal by suitable choice of the threshold
JP>of specified complexity required to be inherent in the event
JP>under scrutiny. Dembski goes so far as to posit an absolute
JP>probability so low that it equates with impossible, derived
JP>from bounds on the number of atoms in the universe, the age of
JP>the universe, and the known minimum time for phase
JP>transitions.
WRE> Yes. Dembski's 500-bit threshold for CSI is met by the
WRE> solution of a 100-city tour of the TSP by genetic algorithm.
WRE> (Actually, 97 cities will get us over the CSI threshold, but
WRE> 100 is a nice round number.)
JP>I haven't reviewed your paper on this algorithm (would you
JP>mind reposting a pointer to it? I can't find it.) but I
JP>suspect it is another needle finder.
So what if it is? The solution state still represents CSI.
The method of generation of CSI still is not itself an
intelligent agent.
See
<http://inia.cls.org/~welsberr/zgists/wre/papers/antiec.html>.
JP>The target is the shortest length path and the fitness
JP>function allows the subset of paths whose lengths are
JP>minimized to live another day. It has one interesting wrinkle
JP>though that I will give some thought to. Being discrete (the
JP>segments between cities are of fixed lengths), it allows for
JP>the possibility of multiple solutions, if it so happens that
JP>multiple permutations yield the same path length. At first
JP>blush it seems that this is equivalent to placing multiple
JP>needles in the haystack but this is based merely on
JP>intuition.
A multiple solution is one thing. Are there more than 10 million
equally minimal tours in a 100-city TSP? That's roughly the
number one would need to exceed before the ratio of end
solutions to problem space would put clearing Dembski's 500-bit
CSI threshold in question.
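For the record, the arithmetic behind the 500-bit figure is easy
to check. A minimal sketch (my own back-of-the-envelope, not
Dembski's calculation; note the answer depends on whether one
counts n! city orderings or (n-1)!/2 distinct closed tours):

    import math

    def log2_factorial(n):
        # log2(n!) via the log-gamma function
        return math.lgamma(n + 1) / math.log(2)

    for n in (97, 100):
        orderings = log2_factorial(n)        # n! orderings
        tours = log2_factorial(n - 1) - 1.0  # (n-1)!/2 closed tours
        print(n, round(orderings, 1), round(tours, 1))
    # 97 cities: ~505 bits of orderings, ~497 bits of distinct tours
    # 100 cities: ~525 bits of orderings, ~517 bits of distinct tours

On these counts, somewhere between roughly 2^17 and 2^25 equally
minimal tours would be needed to pull the solution-to-space ratio
under the 500-bit line, which is the order of magnitude behind the
10 million figure above.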
WRE> [...]
JP>The flaw is in your reasoning. Complexity and improbability
JP>are different measures of the same thing. Design is
JP>noticeable because of *specified* complexity and
JP>improbability. Over and over you want to knock down this
JP>complexity straw man. I keep telling you over and over,
JP>complexity alone is not sufficient and Dembski never says it
JP>is. It has to be complexity that adheres to a pattern known in
JP>advance.
WRE> Actually, Dembski goes to some trouble to say that the pattern
WRE> must be independent of the event.
JP>As it must be to avoid the "begs the question" objection
JP>already discussed. Question: given that Dembski specifies
JP>pattern independence as a necessary condition, do you agree
JP>that the "begs the question" objection is (un)moot :>) ?
Let's get this straight, please. It was Jeff who broached the
possibility of question-begging in his characterization of
Dembski's *definition* of CSI. My objection was that Dembski's
definition of CSI did *not* incorporate a question-begging
premise. It is not me who has changed his position on this
matter.
WRE> If it is known in advance
WRE> of the event, then independence is a given. But Dembski does
WRE> hold that such independence can be shown even for producing
WRE> specifications for events that have already happened. Else
WRE> CSI would not be of much use for Dembski's purposes.
JP>I agree with both points (yours and Dembski's). I think the
JP>best example of Dembski's is the cryptologist who cracks the
JP>code. If suddenly the gibberish turns into Hamlet, one can
JP>with near certainty assume that one has found the
JP>prespecified encryption pattern.
And so when an intelligent agent finds this "needle in a
haystack", it is CSI, but if an algorithm found it, it isn't?
WRE> [...]
JP>No it doesn't. What would kill Dembski's argument is proving
JP>chance caused a pattern that is indistinguishable from
JP>design.
WRE> And showing that processes like natural selection can produce
WRE> CSI means that the argument becomes irrelevant.
JP>If you have done this I missed it. If you offer GA's as an
JP>attempt to prove this, I think you fall well short.
I'll settle for showing that GAs can produce CSI first. I
can argue the rest later.
Jeff claimed earlier to 1) not be convinced by Dembski's
argument concerning evolutionary algorithms and 2) not to have
seen my page dealing with criticisms of evolutionary
computation but 3) believes I "fall well short" given his
current state of non-knowledge of my argument. Interesting.
Is there a consistent explanation that brings these premises
together?
See
<http://inia.cls.org/~welsberr/zgists/wre/papers/antiec.html>.
For Dembski links, try out
<http://inia.cls.org/~welsberr/evobio/evc/ae/dembski_wa.html>.
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
"Curtains part and landscapes fall" - BOC
Stuart
Dr. Stuart A. Weinstein
Ewa Beach Institute of Tectonics
"To err is human, but to really foul things up
requires a creationist"
:> What is commonly called "intelligence" does have a linguistic
:> component, of course, but also seems to me to involve spatial
:> reasoning, deduction and other events which have little to do with
:> linguistic competence.
: Language precedes intelligence. All those forms of 'reasoning' have
: linguistic components. You can not reason _about_ something unless you
: reason _in_ a language.
We don't seem to agree.
Reasoning is not the only component of intelligence, nor do "languages"
which could be used for reasoning necessarily require there to be more
than one individual for their development. I'm sure the communication
between the hemispheres could easily be described in terms of a
"language" - and indeed the hemispheres of some individuals are very
much like separate minds.
If you say "language precedes intelligence" and it turns out that
you'd accept any symbolic serial internal representation as being a
"language", then what you said and how I interpreted it may have
diverged.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
Please refill this tagline dispenser.
Yes, that's what I was trying to say. According to Dembski, the human
genome, if it is CSI, did not occur naturally. Like I said, his
hypothesis has been falsified.
>> Since the 400 pound gorilla is invisible, weightless, and otherwise
>> undetectable, why shouldn't we ignore it?
>
>ID may be invisible and weightless but as for detectable, the jury is
>still out.
ID is easily, usually trivially, detectable. It's an intelligent designer
of life that can't be detected.
>> [Dembski] refuses to recognize, though, that there is a big difference
>> between "isn't explained" and "can't be explained."
>
>I think he understands the distinction perfectly well. After all, it is
>the latter he is trying to prove.
Then why did he leave unknown natural causes out of his filter?
Any and every collection of particles in the universe, without exception,
is a pattern. There is no possible way for anything not to be a pattern.
Granted, certain patterns occur more often in certain circumstances, but
those other occurrences are patterns, too. What does being a pattern have
to do with being designed?
I believe this subject started by comparing two strings, a random one
(say, 011100100110100110101111000110) and another one (namely,
010101010101010101010101010101). If I understand you correctly, you say
the latter string looks designed because it is unlikely to have originated
by chance. Yet both the above strings have exactly the same probability
of having originated by chance, so your reason fails. Please tell, if
possible, which of the two strings looks designed, and more important,
why.
You need a third source of bit streams: streams gathered from known
natural processes (tidal fluctuations, digitized thunder recordings,
temperature changes, etc.). I predict that many of these would be
indistinguishable from the designed streams.
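One crude way to triage such streams (a sketch using compressibility
as a Kolmogorov-style stand-in for "has a short describing pattern";
this is my own proxy, not Dembski's specification test):

    import random
    import zlib

    def packed_len(bits):
        # compressed length as a rough proxy for descriptive complexity
        return len(zlib.compress(bits.encode(), 9))

    random.seed(1)  # arbitrary seed, for repeatability
    noise = "".join(random.choice("01") for _ in range(3000))
    regular = "01" * 1500
    print(packed_len(noise), packed_len(regular))
    # The regular stream compresses to a few dozen bytes; the noise
    # stream does not, even though each particular 3000-bit string
    # is exactly as improbable as any other under fair coin flips.

Streams from tides or thunder would presumably land all over this
scale, which is consistent with the prediction above.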
The analysis of DNA as info is ex post facto by 4 billion years, so I
suppose that therefore it does not have CSI? I don't think so.
In the ball example, there is a PHYSICAL reason for it showing
up repeatedly at a certain spot even though it was thrown at random
spots.
This is an example of simple regular behavior. DNA replication
is an example of complex regular behavior, driven by a PHYSICAL
reason for its appearance (biochemistry).
>> They see the ball going to only a specific place on a complex
>> landscape, and say "Ooooh, CSI!"
>
>Well..yes, if the specific place was specified in advance then the ball
>didn't likely get there by accident. The more balls there are in their
>respective prespecified spots, the less likely it is.
Well, nobody EVER claimed that DNA replicates itself by accident.
The question is whether CSI is a clear, well thought out
and useful concept.
>> There are physical reasons that a particular (specified) biopolymer
>> is chosen over a large number of those having equivalent probability
>> of being formed at random. They aren't formed at random.
>
>I don't understand your point or pretend to know what the significance of a
>biopolymer is. If they aren't formed by chance though then I'll wager (without
>even knowing what they are) that they are formed by regulation or design.
Regulation by PHYSICAL forces. These physical forces
(primarily hydrogen bonding) explain why T and A pair up,
C and G in DNA replication. The notion of design adds NOTHING
to any insight about biological systems. Neither does CSI.
Life is complex. It is specific. Scientists already knew that.
How is CSI useful?
The only use I've seen for CSI is in making bogus design arguments.
I.e., assuming your conclusion (only designed things have CSI)
while ignoring the huge differences between life and human designed
things.
>> >> Now it becomes
>> >> very likely that the ball will end up inside the circle. What has
>> happened?
>> >
>> >A number of things. You've demonstrated a basic precept of information
>> >theory, that is that information is always tied to a particular
>> >experiment.
>>
>> How about this one: Just because information theory can be
>> applied to a system does not mean it had information "put" there
>> by intelligence.
>
>If you are positing some sort of law I'd suggest you refine it a tad
>before publishing. Just a suggestion.
Someone named Shannon and Weaver already beat me to it.
White noise has more information on a communication channel than
any other signal. Either white noise is put there intentionally by design
or what I said is true.
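Shannon's point in symbols (a minimal sketch of the standard formula
H = -sum(p * log2(p)), nothing specific to Dembski):

    import math

    def entropy(p):
        # Shannon entropy in bits per symbol of a distribution p;
        # the trailing + 0.0 normalizes -0.0 to 0.0
        return -sum(x * math.log2(x) for x in p if x > 0) + 0.0

    print(entropy([0.5, 0.5]))  # white noise on a binary channel: 1.0
    print(entropy([0.9, 0.1]))  # biased source: ~0.47
    print(entropy([1.0, 0.0]))  # constant signal: 0.0

The uniform distribution maximizes H, which is the precise sense in
which white noise carries the most information per symbol.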
Tracy P. Hamilton
> In article <37F28B4E...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
> >Wesley R. Elsberry wrote:
>
> JP> [snip]
>
> This was the point at issue. Deleting it makes no sense.
I apologize. I was not trying to be deceptive. I didn't understand the relevance of
the point you were trying to make. Now I do and have commented on it below.
> [Restored]
>
> JP>The point is that you
> JP>can design a filter that never exhibits false positives,
> JP>attributing ID to an object generated by chance or law.
>
> [End Restoration]
>
> JP>This makes the whole arrowhead argument mute.
>
> WRE> "Mute"? I'll assume "moot" was meant.
>
> JP>No, mute -as in unable to speak to the issue. Oh alright, it
> JP>was late and I was tired.
>
> WRE> Even then, Jeff will find that "moot" means "arguable".
>
> JP>Ironic isn't it? - that the common meaning has become the
> JP>opposite of the formal? Even webster has raised the white
> JP>flag, "having no legal substance".
>
> WRE> So far, though, I have not seen anything that would lead me to
> WRE> believe that signal detection theory has been overturned by
> WRE> Dembski.
>
> JP>I'm not sure what you mean here Wes. I don't think I implied
> JP>anything at all about this.
>
> Jeff claimed that a filter could be designed to yield no false
> positives. That is, I'll grant, a *claim* rather than an
> *implication*. The relationship of this claim to signal
> detection theory is pretty obvious.
Signal detection theory deals with minimizing errors in the reconstruction of
information that has become corrupted by transmission through a channel. The goal
of the explanatory filter is not to reconstruct the CSI, but only to detect its
presence. A diode detector whose purpose is to only detect the presence or absence
of a signal (and not to demodulate the information the signal might be carrying)
can be made arbitrarily immune to a false positive triggered by channel noise by
lengthening its time constant (which determines how long the diode must conduct
before we have positive identification of signal presence). The application to the
EF is direct. The more complex the pattern, the more immune we are to false
positive indication. The immunity can be set arbitrarily large by adjusting the
complexity threshold.
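The tradeoff is easy to simulate. A toy model of my own (pure
Gaussian channel noise, with an averaging window standing in for
the diode's time constant; the numbers are arbitrary):

    import random

    def false_positive_rate(n_samples, threshold, trials=20000):
        # how often pure noise keeps the integrator above threshold,
        # i.e., a false "signal present" indication
        hits = 0
        for _ in range(trials):
            avg = sum(random.gauss(0.0, 1.0)
                      for _ in range(n_samples)) / n_samples
            if avg > threshold:
                hits += 1
        return hits / trials

    for n in (1, 4, 16, 64):
        print(n, false_positive_rate(n, threshold=1.0))

Lengthening the window drives the false-positive rate toward zero
(the Gaussian tail shrinks like Q(sqrt(n))) without ever reaching
it -- which is Wesley's point; mine is that it can be made as small
as one likes.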
As an aside, I believe it may be the lack of an adequate channel model that has
prevented Dembski thus far from making a definitive claim as to whether life
contains CSI. Adaptive filters work by building a model of the channel over time.
Given enough time and a stationary channel (a theoretical ideal), they too can
become immune to false positives, because they have learned about the statistics of
the noise that is present. This may be a fruitful avenue of exploration for Dembski
as he investigates the transmission of CSI through a biological channel.
Finally, when we are dealing with questions concerning CSI at its source, signal
detection theory plays no part. Source information is by definition uncorrupted.
> JP>I was just trying to establish
> JP>that Dembski's definition of information was the same one
> JP>Shannon developed for signal detection, to use your
> JP>term. Dembski applies the definition to inquire about the
> JP>origin of information, a question Shannon didn't
> JP>address. Shannon starts by postulating an information
> JP>source. Must we grind every point to powder?
>
> Signal detection theory says that there is no filter which has
> no false positives, nor one which entirely eliminates false
> negatives. Either Dembski has overturned signal detection
> theory, or his Explanatory Filter can produce false positives.
> That's what I mean.
Again, when we are inquiring about source information, the restrictions of signal
detection do not apply.
> I reserve the right to object to incorrect information being
> disseminated, and to do so at my own pace and in my own way.
> If Jeff does not like it, he is free to ignore my posts. He
> won't be the first or the last to do so.
Yes, as I stated before, I missed your point and thought we were belaboring the
definition of information (which is what Marty and I were hashing out). It might be
helpful to emphasize a point a little more when you are changing the subject, but I
withdraw and apologize for my implication that you were being recalcitrant.
> WRE> Nor have I seen anything that would indicate that Dembski's
> WRE> Explanatory Filter is an instance of such a design.
>
> JP>Sorry. I'm lost. What do you mean by "such a design". What
> JP>category of design do you have in mind here?
>
> Of the category of filters with the property of finding no
> false positives.
I may have said "no" when I should have said "have an arbitrarily small chance
of"-- but I have made that distinction clearly in other places. If we buy Dembski's
claim of an absolute probability which equates to impossible then the claim is
correct, given that the complexity threshold can be set above this level.
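The bound itself is a one-liner to reproduce (a sketch using the
round numbers Dembski cites; the figures are his, the code is mine):

    import math

    particles = 10 ** 80  # elementary particles in the observable universe
    rate = 10 ** 45       # maximum state changes per particle per second
    seconds = 10 ** 25    # generous upper bound on available time

    trials = particles * rate * seconds  # 10^150 possible events
    print(math.log2(trials))             # ~498.3 bits

Anything more improbable than 1 in 10^150 -- beyond roughly 500
bits -- is what Dembski treats as effectively impossible by chance.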
> JP> [snip]
>
> [Restore]
>
> In fact, I list several ways in which Dembski's Explanatory
> Filter fails to accurately capture how humans go about finding
> design in day-to-day life in my review of TDI.
>
> [End Restoration]
>
> And no comment on that from Jeff. Given that Dembski relies
> heavily upon analogy to human detection of design in
> day-to-day life in making his argument for proceeding to infer
> intelligent agency once "design" is found as an attribute of
> an event, I'd think that making an accurate description of the
> process in question would be pretty important.
Frankly this is an issue I haven't thought a lot about. I glossed over your
objections not because I don't think they are relevant but because right now I'm
still grappling with the philosophical implications of CSI itself. I have thus far
been willing to accept as given that if CSI exists, a means can be found to
reliably detect it, even if Dembski's first cut turns out not to be the most
reliable. I read his paper on the EF and it struck an intuitive chord but I haven't
gone much beyond that. I guess I see the design of the filter as an engineering
challenge, not a theoretical one. If you would allow, I'd like to side-step this
issue and come back to it after I've read Dembski's paper and your review more
carefully. In the meantime, if you'd like to elaborate on why you think Dembski's
theory rests on a particular implementation of the filter, it would inform my
review.
> I'm not the only one to critique Dembski on his accuracy of
> capturing human design detection procedures. Elliott Sober et
> alia's "How Not to Detect Design" makes strong criticisms of
> Dembski's Explanatory Filter and extended logical arguments.
A pointer to Sober's paper would be helpful if you have it handy.
From your description of design and Dembski's assertion that CSI implicates
intelligent design, my description of Dembski's definition of CSI follows directly.
My attempt was not to quote verbatim Dembski's definition, but to split it into two
parts to make it clear to Marty why the definition does not "beg the question". You
took one of those parts out of the context of the other and made it appear that I
was giving a question begging definition. But the first part was *intended* to
isolate the question begging so I could clearly demonstrate that the second
independent part resolved it. In any case, we have no quarrel here. I think we both
agree that Dembski's definition of CSI as originally stated does not beg the
question.
> WRE> At least, Dembski
> WRE> did not so define it before his "reiterations" post. With
> WRE> the introduction of the unspecified qualifiers "apparent"
> WRE> and "actual" to be prepended to "CSI", it looks like Dembski
> WRE> may indeed have simply slipped into publicly begging the
> WRE> question.
>
> JP>You have me at a disadvantage here as I haven't seen the post
> JP>you reference and haven't run across him using the qualifiers
> JP>you mentioned in the articles I have read. A pointer would be
> JP>most appreciated.
>
> <http://x44.deja.com/=dnc/getdoc.xp?AN=525224114>
>
Thanks. This is the first time I have seen Dembski claim that life has the
attribute of CSI. Before, he characterized this as an open question. This leads me
to wonder what he has discovered to resolve this to his satisfaction.
I'd like to quote here what I think is the central point he tries to make:
"It follows that Dawkins's evolutionary algorithm, by vastly increasing the
probability of getting the target sequence, vastly decreases the complexity
inherent in that sequence."
I think Dembski goes badly astray here, which leads him to this muddle of apparent
vs. actual complexity. He has forgotten something he knew to be true in his earlier
writings, namely that the information content of an event is dependent on the
experiment one designs to observe the event. He demonstrates this clearly in his
"royal flush" analogy. Designing an experiment to detect a non-royal flush reduces
the information contained in the event of obtaining the sequence of cards that
makes up a royal flush to one bit. The result of such an experiment is not apparent
complexity; it is not complex at all. In Dawkins's GA, as in my archer-evolver, the
event is obtaining some piece of CSI. If the event is bound to occur, the event
ceases to contain information.
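For readers who haven't seen it, the algorithm is short enough to
sketch from memory (my reconstruction of the familiar "weasel"
scheme, with an arbitrary population size and mutation rate, not
Dawkins's original code):

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def score(s):
        # matching positions against the prespecified target
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        # copy the parent, flipping each character with small probability
        return "".join(random.choice(CHARS) if random.random() < rate
                       else c for c in s)

    parent = "".join(random.choice(CHARS) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        litter = [parent] + [mutate(parent) for _ in range(100)]
        parent = max(litter, key=score)  # cumulative selection
        generation += 1
    print(generation, parent)

It converges in a few hundred generations instead of the 27^28 or
so trials blind search would need -- precisely the "probability
amplification" whose status we are arguing about. The target was
fixed in advance, when Dawkins chose the phrase.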
We probably should start a new thread on this topic, that is, whether GA's can
create CSI, because it deserves its own heading and this thread is getting unwieldy,
but I don't know how to do so without losing the context of our current discussion,
so I'll press on. It is clear to me from your comments below that we are not going
to get anywhere without a clear understanding of what each of us means by
intelligence, so let me start there.
In "How the Mind Works", Pinker defines intelligence (pg. 62) as
"the ability to attain goals in the face of obstacles by means of decisions based
on rational (truth-obeying) rules."
I find this to be unsatisfactory. Notice that this definition would apply the
attribute of intelligence to genetic algorithms (indeed any optimization routine)
and perhaps even to evolution through natural selection. You make it clear below
that you do not feel GA's have intelligence, so I assume you agree thus far. What
is it, then, that makes this definition deficient? It is his use of the word
"attain". What separates intelligent agents from mere automatons is the ability to
*set* goals, not to attain the goals set (possibly) by others. I think Pinker was
well aware of this distinction and chose the word attain purposefully, because the
ability to set goals implicates desire, motivation, self-determination, in short
all of the things that do not fit neatly into his computational model for human
thought.
Why is this important? Because I maintain that it is at the precise moment that one
sets a goal, and at no other, that CSI can come into existence, and this is why I
think CSI does indeed implicate intelligence. Goal setting is when the choice is
made, when contingency is actualized, when the die is cast. It is when the archer
paints the bulls eye and picks one point among the infinite number that he could
have chosen that CSI comes about, not in the shooting of the arrow. The beach house
existed in the mind of the architect before he ever laid pencil to paper. The
design is simply the means to an end. Any number of ways can be devised (designed)
to achieve the goal, but none of them create CSI, even if the design includes the
actions of an intelligent agent.
Now if it is true that any design has as its end a goal, when we find a design we
are safe in assuming a goal existed beforehand which the design attempts to attain.
Intelligence is implicated by a goal AND by a design to attain that goal. So the
reasoning goes from specified order, which implicates design, which implicates a
goal. This chain includes both a design and a goal, which implicates intelligence
if and only if the goal is complex enough to preclude its having been attained by
chance. Thus not every goal creates CSI. I can choose an easily attainable goal
for which good fortune may account. I could choose to paint the whole wall black,
which while specified, is not complex.
Dawkins understands this well. It is why he is so adamant about the purposelessness
of the evolutionary process. To ascribe a goal is to implicate intelligence. It is
also why all the claims that purposeless evolution is not inconsistent with
religious belief are so much pabulum.
I think Dembski makes a mistake in not making this goal dependence more explicit.
It's there, but couched in terms like directed contingency and actualization, it
gets fogged. It gets totally obliterated by this silly distinction between real and
apparent CSI.
> [snip of discussions related to natural selection and superposition - You can
> readdress these issues if you like]
>
> WRE> Again, the objections raised to algorithms as sources of CSI
> WRE> seem to be handled preferentially: they are not applied to
> WRE> intelligent agents, and yet that application seems both fair
> WRE> and reasonable
With my definition of intelligence and the class of events capable of creating CSI
which follows from it, this is not the case. If either is merely attaining a goal,
no CSI is created. If either can formulate a goal, CSI is created. No preferential
treatment need be given.
These are measures of efficiency in attaining the goal. The CSI was created by you
when you set the task before your program. Dawkins created the CSI only once, when
he chose the phrase. Run the program as often as you like; it finds the CSI like a
rock finds its point of lowest potential energy. Neither the rock nor the law of
gravity cares a whit about the rock's position in space. No goal, no intelligence.
The only event of import here was the painting of the target. The cause of that
event was the archer's innate ability to formulate a goal (to hit the target with
an arrow). He had some purpose in mind and was not just painting black spots on a
whim. The only place intelligence is required in the whole thought experiment was
in forming this goal.
> This business of information transformation is one that I
> explained earlier, but I can repeat that explanation easily
> enough.
>
> The objection that one goes from an antecedent set of
> information in one form to a CSI solution of another form is
> no objection at all. An intelligent agent would have no hope
> of solving a 100-city tour of the TSP in the absence of the
> distance information, or at least would only have the same
> expectations as one would obtain from blind search. The
> algorithm and the agent operate to generate CSI that did not
> exist before from the information that is available. This is
> another way of saying that rational intelligence and
> algorithms have the same dependencies when generating CSI.
But it doesn't matter who or what solves the problem. The question is: who
formulated the problem to begin with?
> JP>You have reasonably asked in another GA context, how one can
> JP>tell the difference between Dembski's archer-as-agent and
> JP>my evolved archer? If Dembski offers "apparent" vs. "real"
> JP>CSI as an answer then IMHO he's raised the white flag too
> JP>soon. My answer would be that you cannot tell the difference
> JP>because BOTH were designed, that is, IF he could prove that
> JP>the real archer was in fact designed. He may well have to
> JP>raise the flag at some point on that issue -- but the
> JP>archer-evolver is a design as surely as is a bubble sort
> JP>algorithm.
>
> It does not matter whether the agent producing the CSI is
> "designed" or not. What Dembski seeks to establish is the
> necessity of an *intelligent* agent as the sole class of
> sources of CSI. While a bubble sort is designed, it fails to
> convince most people that it itself is intelligent. Does Jeff
> believe a bubble sort algorithm is itself intelligent? Does
> this cause Jeff moral qualms when he shuts off his computer?
LOL - good one but as you can see, my definition of intelligence lets me sleep with
a clear conscience. There is no difference between the archer and the robot in the
shooting of arrows. Both produce the same result. Only one can formulate the idea
of shooting arrows at a specified target to begin with. The bubble sort algorithm
has a clear goal and thus some intelligent agent had to have formulated that goal.
> WRE> Dembski uses the case of Nick Caputo as an
> WRE> instance of CSI that implies a design. And yet, Nick Caputo's
> WRE> design consisted entirely of converting the
> WRE> previously-existing information of party affiliation into a
> WRE> preferential placement on voting ballot.
The intelligence required though was in devising his nefarious end and an equally
nefarious means to that end.
> People who
> WRE> plagiarize do no more than *copy* previously existing
> WRE> information, and yet this is another of Dembski's examples of
> WRE> design. If one excludes things from producing CSI on the
> WRE> basis of transmutation of information, then it seems that one
> WRE> must exclude *both* algorithms and intelligent agents, or at
> WRE> least intelligent agents who use rational thought in producing
> WRE> solutions.
>
> JP>I would include both agents and algorithms as things
> JP>designed- or at least if one is, the other is as well.
>
> That is not the issue. Does Jeff consider both to be
> *intelligent*?
First we must be careful in this context when we use the word produce. As I pointed
out in my haystack analogy, this word can mean "present for inspection" as well as
"create". A detective must produce evidence; he should never create it. I assume you
are using produce to mean create in the above, and that we disagree for reasons that
should be obvious by now. It is not the use of an intelligent agent in a design
that is determinative. If you place a needle in a haystack and I find it, I have
not created the CSI, you have.
> Probably the most agitated I saw Dembski get was in the
> discussion period of Ann Foerst's presentation at the NTSE
> conference. Foerst's topic was the "Imago Dei", and how this
> gift of God to humans was transferable to some products of
> human design. Dembski objected strenuously, and cited his
> theological alma mater as superior to Foerst's theological
> alma mater. It was quite a show. But IIRC Dembski took issue
> with even the consideration that some robotic product might be
> considered intelligent, with the same kinds of prospects for
> agency in design that Dembski attributed to humans and God.
Until humans can create entities which can formulate goals independently, based on
their own robotic lusts, Dembski (and the rest of us for that matter) has nothing
to be concerned about.
> JP>If that definition stood by itself, your points about
> JP>circularity would be well taken; it excludes chance a
> JP>priori.
>
> JP>BTW, here's where I answered your begging the question objection...
>
> JP> But if we have another, independent test of CSI, the
> JP>circularity is broken. We can use the second test to
> JP>determine the presence of CSI and use the first definition to
> JP>exclude chance and law. Now I think you have implicitly
> JP>agreed in your earlier remarks that prespecification of
> JP>pattern provides that independent test, but hold that this is
> JP>useless because it is impossible to determine the
> JP>prespecified pattern from the event.
>
> My reference to "begging the question" concerned the
> characterization that Jeff gave of Dembski's definition of
> "CSI". That characterization was incorrect: Dembski did not
> define the phrase in a question-begging way. Just because
> Jeff provides a reason to set aside the mischaracterization
> doesn't make it go away. It isn't only Dembski's fans who can
> point out lapses in correct characterization of Dembski.
Granted, but I don't think Dembski would object to my two-part definition as a
mischaracterization.
> My take on CSI is that it is an attribute that can be
> recognized, but which does not necessitate an intelligent
> agent as a cause. I know that Dembski argues that CSI does
> provide a reliable basis for inferring an intelligent agent,
> but I have not been convinced by his argument.
OK, so now let's examine the design inference, which I argue is actually a goal
inference. The question becomes a chain: can CSI accurately infer design, and if so,
can design accurately infer goal setting, and can goal setting plus a design
accurately infer intelligence? As to the first link in the chain, CSI detected at
its source is independent of its design and infers intelligence directly. We see a
bulls eye and in some context recognize it as a goal. We don't need to inquire as
to how it got there to know that there is the will of an intelligent agent
somewhere down the line. But often we cannot detect CSI at its source but only as
it is encapsulated in some design. Thus we see the beach house and recognize it as
an embodiment of an ideal which was first conceived as pure thought. We don't need
to check if the house was built exactly according to plan (they never are) or
indeed if the plan exactly embodied the ideal first envisioned (mistakes in the
design process can occur) to rightly discern that the house represents the CSI even
if it only approximately attained the original goal. This is where your signal
detection analogy starts to come into play. The design can be thought of as a
channel by which the CSI can become distorted. But here again, we only need to
detect the presence of CSI, not reconstruct it with fidelity. Thus I think, given
that the design meets some complexity threshold, we can accurately infer that it is
in fact a design, attempting to express (however imperfectly) a goal and that that
goal required an intelligence to formulate.
But what if we can't see the embodiment of the design itself but only have what
appears to be a specification? We find the plans for the house, badly faded and
partially burned. Again, our intuition tells us an intelligent agent is behind this,
but that will not suffice as proof. We must answer the question as to whether
natural processes (chance and/or regularity) can produce things which are so
complex that they appear to have a goal in mind but in fact do not. It is
here that I presently am stuck. I don't know how one goes about proving this
negative. It is no disproof though to point to things whose origin is not known, a
class which presently includes all biological systems. Clearly though, if one can
start from scratch and catch an agent which relies only on chance and law in the
process of creating a complex goal and the means for achieving it, Dembski's theory
falls apart.
As for the second and third links of the chain, I think one must answer yes to both
questions, assuming that one is certain the design is truly a design and not
something that merely looks designed, which means if the objections found in the
first link can be overcome, the rest follows. Thus if Dembski, or someone else, can
show that it is impossible for chance + regularity to produce something which can
pass for CSI he will carry the argument. If on the other hand, someone can
demonstrate that chance+regularity can produce goal-setting plus a design to attain
that goal, he will not. Both appear to have their work cut out for them.
> Ugly little
> facts can kill beautiful theories. One ugly little fact is
> that algorithms produce CSI, whether one calls it
> "transformation" or anything else. The only way rational
> intelligent agents produce CSI is through the very same
> "transformations" that cause such scoffing when an algorithm
> is claimed to do the job. One might then step back to a
> position that only non-rational intelligent agents (or
> intelligent agents acting non-rationally) produce CSI, but I
> don't think that is what Dembski wants to claim.
I think a proper understanding of intelligence and CSI overcomes both of these
objections.
> JP>[snip]
>
> My answer doesn't make much sense without the question, so
> why delete it?
>
> [Restored]
>
> JP>But clearly this is not
> JP>so, I can certainly think of some sequences where the pattern
> JP>can be discerned with certainty. Does life fall into this
> JP>category? Who knows. But if you could be convinced somehow,
> JP>that some aspect of life, that is required for something to
> JP>be deemed alive, did indeed fall into that category, would
> JP>you allow that it follows that NS cannot be responsible for
> JP>the observed pattern specificity?
>
> [End Restoration]
>
> WRE> No. The presence of CSI does not imply intelligent agency.
> WRE> Dembski says this no fewer than three different times in
> WRE> TDI. One can look at NS and find that it conforms to the
> WRE> triad of attributes Dembski says define what intelligent
> WRE> agents do: actualization-exclusion-specification.
>
I disagree here, as would Dawkins, I assume. NS has no purpose in mind and thus
fails on the directed contingency (actualization + exclusion) requirement.
> JP>I think I must surrender on this issue.
>
> Wise move.
>
> JP>It is possible that NS falls into the category where the
> JP>superposition assumption I think Dembski erroneously makes
> JP>breaks down. Note that that is not the same as saying I think
> JP>NS is capable of producing CSI. I am actually quite
> JP>skeptical that it can. I will allow though that when coupled
> JP>with a non-linear system, which I presume describes genetic
> JP>inheritance, it is not excluded on a set-theoretical basis
> JP>and using Dembski's definition, as being incapable of
> JP>producing CSI.
>
> Ahem. Why would anyone believe that Dembski's set-theoretical
> basis *would* exclude natural selection as a cause of events
> which can be shown to have "design" as a property (sensu
> Dembski)?
Well first, I think Dembski does mean to exclude NS when he eliminates chance or
regularity as a root cause of CSI. But I am not exactly sure what you mean by "can
be shown to have design as a property". Events which are caused by NS cannot
definitively be shown to have a design attribute, at least as far as I am using the
term design. If they do, the battle is over for me, because for me design implicates
goals, which implicate intelligence.
> WRE> [...]
>
> JP> [snip]
>
> [Restored]
>
> JP>Design may be undetectable, but at issue is whether chance
> JP>can masquerade as design.
>
> WRE>No it isn't. The issue is whether natural selection can
> WRE>produce events that have the attribute of CSI. As I point
> WRE>out in my review, the Design Inference classifies events,
> WRE>not causes.
>
> [End Restoration]
Did you mean the Design Inference or the Explanatory Filter here? I thought the
Design Inference (haven't read the book yet, ordered it a few days ago) referred to
being able to infer design from CSI. In any case, I agree that an issue is whether
natural selection (or any other natural process) can generate goal-setting, goal
attaining behavior.
> JP>I think it is practical to limit this possibility to an
> JP>acceptable infinitesimal by suitable choice of the threshold
> JP>of specified complexity required to be inherent in the event
> JP>under scrutiny. Dembski goes so far as to posit an absolute
> JP>probability so low that it equates with impossible, derived
> JP>from bounds on the number of atoms in the universe, the age of
> JP>the universe, and the known minimum time for phase
> JP>transitions.
>
> WRE> Yes. Dembski's 500-bit threshold for CSI is met by the
> WRE> solution of a 100-city tour of the TSP by genetic algorithm.
> WRE> (Actually, 97 cities will get us over the CSI threshold, but
> WRE> 100 is a nice round number.)
>
> JP>I haven't reviewed your paper on this algorithm (would you
> JP>mind reposting a pointer to it? I can't find it.) but I
> JP>suspect it is another needle finder.
>
> So what if it is? The solution state still represents CSI.
Yes, represents the CSI you created when you set the goal.
> The method of generation of CSI still is not itself an
> intelligent agent.
Generation requires an intelligence; implementation does not. Fast forward 500
years into the future. We've developed the ability to convert mental images to
bits. We have an architect strapped to a machine that can do so. He envisions a
building. That's all, just a mental picture of a building he wants built. The
machine captures this image and passes it off to a computer program that is capable
of taking this image and converting it directly to a set of building plans, which
are then passed to an assembly line of robots at the building site who construct
the house. The only intelligence required was in creating the mental image and in
the will to do so. We see the house after completion. We do not have to know
anything about this chain of events to know that it came about because a designer
envisioned this particular house over the infinite houses one could envision. It is
therefore complex and specified. Where was the CSI created if not in the goal
setting?
Well not quite. I was attempting to formulate a two-part definition to address
Marty's characterization of question begging. The first part, by design, begged the
question, so when you grabbed only that part, it appeared that I was
mischaracterizing Dembski.
> My objection was that Dembski's
> definition of CSI did *not* incorporate a question-begging
> premise. It is not me who has changed his position on this
> matter.
>
> WRE> If it is known in advance
> WRE> of the event, then independence is a given. But Dembski does
> WRE> hold that such independence can be shown even for producing
> WRE> specifications for events that have already happened. Else
> WRE> CSI would not be of much use for Dembski's purposes.
>
> JP>I agree with both points (yours and Dembski's). I think the
> JP>best example of Dembski's is the cryptologist who cracks the
> JP>code. If suddenly the gibberish turns into Hamlet, one can
> JP>with near certainty assume that one has found the
> JP>prespecified encryption pattern.
>
> And so when an intelligent agent finds this "needle in a
> haystack", it is CSI, but if an algorithm found it, it isn't?
Nope. Either way it is found, it is conclusive evidence that an intelligent agent
designed the code, not because Hamlet popped out (it may have been just obscured by
noise) but because the code contained a precise complex specification. Replace each
letter by one looked up in a table indexed by the output of a random number
generator with a given seed, for instance. This case makes clear the distinction
between finding CSI, that is the intent of the designer, and creating CSI. Clearly
neither code-cracking agent created CSI in this case.
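To make the scheme concrete (a toy sketch of the cipher I just
described; the seed, plaintext, and search bound are made up for
illustration):

    import random
    import string

    ALPHA = string.ascii_uppercase

    def shift_text(text, seed, sign):
        # shift each letter by the next output of a seeded RNG;
        # sign=+1 encrypts, sign=-1 decrypts with the same keystream
        rng = random.Random(seed)
        return "".join(
            ALPHA[(ALPHA.index(c) + sign * rng.randrange(26)) % 26]
            for c in text)

    cipher = shift_text("TOBEORNOTTOBE", seed=4242, sign=+1)
    for seed in range(10000):
        # recognizing Hamlet stands in for this crib comparison
        if shift_text(cipher, seed, sign=-1) == "TOBEORNOTTOBE":
            print("found the prespecified pattern: seed", seed)
            break

The cracker finds the seed; he does not create it. The specification
was fixed when the designer of the code chose it.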
> WRE> [...]
>
> JP>No it doesn't. What would kill Dembski's argument is proving
> JP>chance caused a pattern that is indistinguishable from
> JP>design.
>
> WRE> And showing that processes like natural selection can produce
> WRE> CSI means that the argument becomes irrelevant.
>
> JP>If you have done this I missed it. If you offer GA's as an
> JP>attempt to prove this, I think you fall well short.
>
> I'll settle for showing that GAs can produce CSI first. I
> can argue the rest later.
>
> Jeff claimed earlier to 1) not be convinced by Dembski's
> argument concerning evolutionary algorithms and 2) not to have
> seen my page dealing with criticisms of evolutionary
> computation but 3) believes I "fall well short" given his
> current state of non-knowledge of my argument. Interesting.
> Is there a consistent explanation that brings these premises
> together?
I drew my conclusion confident in the assumption that no self-actuated,
goal-generating, means-devising algorithm has yet been developed. If that
assumption is wrong, I'd be most interested in seeing the counter-example.
Jeff
> TH> Not if you draw the circle after the fact. Which is exactly what
> TH> Dembski and others do.
> >
> JP>...when they are demonstrating that ex post facto specifications do not
> JP> specify CSI.
>
> TH> The analysis of DNA as info is ex post facto by 4 billion years, so I
> TH> suppose that therefore it does not have CSI? I don't think so.
It would be if someone were trying to say, "see, here is the pattern we were
shooting for all along". No one is saying that. No one is trying to reconstruct
the CSI that might be present in DNA, only to detect whether it exists, a much
simpler job from an information-theoretic standpoint.
> In the ball example, there is a PHYSICAL reason for it showing
> up repeatedly at a certain spot even though it was thrown at random
> spots.
How it showed up at certain spots is of no import. CSI comes about when an (as
far as we know thus far) intelligent agent sets a goal. It is in choosing the
goal(s) and excluding all others that an agent demonstrates intelligence.
> This is an example of simple regular behavior. DNA replication
> is an example of complex regular behavior, driven by a PHYSICAL
> reason for its appearance (biochemistry).
But again, the only question that matters is does DNA represent the goal of some
intelligent agent or did it come to exhibit its complex behavior by chance? That
it now exhibits complex behavior makes us want to (as even Dawkins allows)
include it in the set of all other things capable of similarly complex behavior,
namely those things designed by an intelligent agent. The task before us is to
convince ourselves why the obvious answer is not the correct one.
> >> They see the ball going to only a specific place on a complex
> >> landscape, and say "Ooooh, CSI!"
> >
> >Well..yes, if the specific place was specified in advance then the ball
> >didn't likely get there by accident. The more balls there are in their
> >respective prespecified spots, the less likely it is.
>
> Well, nobody EVER claimed that DNA replicates itself by accident.
> The question is whether CSI is a clear, well thought out
> and useful concept.
Well that's what we're hashing out here. Just stating that it isn't doesn't get
us any closer to the answer though.
> >> There are physical reasons that a particular (specified) biopolymer
> >> is chosen over a large number of those having equivalent probability
> >> of being formed at random. They aren't formed at random.
> >
> >I don't understand your point or pretend to know what the significance of a
> >biopolymer is. If they aren't formed by chance though then I'll wager (without
> >even knowing what they are) that they are formed by regulation or design.
>
> Regulation by PHYSICAL forces. These physical forces
> (primarily hydrogen bonding) explain why T and A pair up,
> C and G in DNA replication. The notion of design adds NOTHING
> to any insight about biological systems. Neither does CSI.
> Life is complex. It is specific. Scientists already knew that.
> How is CSI useful?
Because if you allow that life is indeed built from complex specific
information, encoded in DNA for example, the obvious question is where did that
information come from? We have a falsifiable hypothesis that such information
can come into existence by accident. Is it not legitimate to try and falsify it?
If it could be shown somehow that there wasn't enough energy in the universe to
sustain the catalytic reactions necessary for some current hypothesis on
abiogenesis, wouldn't you be interested in knowing that result? What is
different about examining it from an information-theoretic standpoint instead of
a thermodynamic standpoint? It may not be useful in determining, say, the
characteristics of a biological organism, but it may be very useful for giving
us a theoretical framework for discussing origins.
> The only use I've seen for CSI is in making bogus design arguments.
> I.e., assuming your conclusion (only designed things have CSI)
> while ignoring the huge differences between life and human designed
> things.
I think your assumptions are more readily apparent than mine. I have an open
mind on the subject of origins and find Dembski's theory intriguing. Besides
it's fun! You can play too but first you have to actually read Dembski's papers.
> MK > Now it becomes
> MK > very likely that the ball will end up inside the circle. What has
> MK> happened?
>
> JP > A number of things. You've demonstrated a basic precept of information
> JP > theory, that is that information is always tied to a particular
> JP > experiment.
>
> TH> How about this one: Just because information theory can be
> TH> applied to a system does not mean it had information "put" there
> TH> by intelligence.
>
> JP> If you are positing some sort of law I'd suggest you refine it a tad
> JP> before publishing. Just a suggestion.
>
> TH> Someone named Shannon and Weaver already beat me to it.
> TH> White noise has more information on a communication channel than
> TH> any other signal. Either white noise is put there intentionally by design
> TH> or what I said is true.
I don't think Shannon phrased it quite the way you did in your first post, or
that your truism about white noise embodies your first post. You may publish
without fear of plagiarism. In any case, the information contained in noise, or
more precisely, a noise-measuring event, is not and cannot be specified. Non-CSI
is assumed to be generated by chance, so your statement is true. But it has nothing
to do with whether or not information theory can be applied to a system.
CSI is about measuring the complexity of goals, which are chosen to the
exclusion of other possible (but not necessarily equally probable) goals. As far
as we know, only intelligent agents can accomplish this feat (which of course
opens up a means of falsification for Dembski's hypothesis). When these goals
get actualized in an object, we can try to infer the design goal from the
object. You seem to object to the application of this to biological systems, as
if they are somehow a special class of their own. In what way are they special
in this regard? Why shouldn't we look at them to see if they embody a goal or
are the result of blind chance?
Jeff
>The _Intelligent Design_ double blind filter test.
>
>1) Assume two sources of bit streams, one stochastic, the other
> presenting examples of 'designed' streams.
You are going to have to define your bit streams more precisely. A
stochastic process is a stream of random variables, 1's and 0's in
your case. The definition says nothing about the probability
distribution that governs the variables. Hence, a stochastic process
can consist of all 1's or all 0's. Your designed streams are special
cases of stochastic processes. This paragraph is the result of a
stochastic process.
Specifying bit streams that represent natural processes and designed
processes is not so easy. Bit streams that obviously come from men
such as those that come from the talk.origins news server are not the
kind that are interesting.
Ivar
Dembski uses almost the exact same example to show what is *not* CSI. In
his, the marksman is an archer. If the bulls eye is painted before the
shot, then choosing that particular target over all of the many points
he could have chosen is an event that is both complex and specified. He
then goes on to say that if the archer paints the target after the shot,
even if it is on the exact same spot, it is not complex information (he
calls it, I think, ad hoc).
> > > There are physical reasons that a particular (specified) biopolymer
> > > is chosen over a large number of those having equivalent probability
> > > of being formed at random. They aren't formed at random.
> >
> > I don't understand your point or pretend to know what the significance
> > of a biopolymer is. If they aren't formed by chance though then I'll
> > wager (without even knowing what they are) that they are formed by
> > regulation or design.
>
> Biopolymers include DNA, RNA, and proteins. Their formation from
> the components is entirely in accord with chemistry and physics,
> so "regulation or design" is involved only if it is also involved
> in, say, the formation of a snowflake.
>
> By the way, completely specifying a particular snowflake would take
> a huge amount of information, even with the symmetry. Do you think,
> looking at a snowflake, that there must be some design(er) involved
> because of the huge amount of complexly-specified-information needed
> to produce *exactly* that snowflake?
This makes it complex, but unspecified (since the pattern must be given
in advance or be evident from the object itself). Such information is
generated by nature all the time. But think of how surprised you would
be to discover that each and every snowflake was exactly the same and
just as complex. Nature seems incapable of generating such specified
order. That leads us to DNA in which the sequence must be just so in
order to specify a certain organism. The information is both complex and
specified. Again no known natural agent exists which has been shown to
produce such specified order.
Notice that once such information is generated, it may be transmitted by
natural causes. Thus an intelligent agent is not required to copy or
even change CSI. Showing a natural mechanism for the generation of DNA
is no more helpful than saying Maxwell's equations can account for the
picture on your television screen. It was transmitted by a natural
process but this begs the question as to the source of the information.
Jeff
I was a bit imprecise. I should have said specified pattern.
> Granted, certain patterns occur more often in certain circumstances, but
> those other occurrences are patterns, too. What does being a pattern have
> to do with being designed?
A pattern which is specified in advance or which is self-evident in the
object itself indicates it was the goal of some design. The forming of
goals and the means to achieve them implicates an intelligent agent.
>
> I believe this subject started by comparing two strings, a random one
> (say, 011100100110100110101111000110) and another one (namely,
> 010101010101010101010101010101). If I understand you correctly, you
> say the latter string looks designed because it is unlikely to have
> originated by chance.
Well, as someone astutely pointed out, the pattern wasn't complex enough
to implicate design because it could have been produced by regularity
(some natural law, like an oscillator). It was improbable enough, though,
to eliminate chance. Thus although specified, it is not complex.
> Yet both the above strings have exactly the same probability
> of having originated by chance, so your reason fails. Please tell, if
> possible, which of the two strings looks designed, and more important,
> why.
If I got you to believe that I flipped one of the sequences with a fair
coin, you'd be very gullible. If I told you I flipped the other one,
you'd have no trouble believing me. You tell me which is which. Which
sequence would you not believe I actually flipped? How did you make that
determination, since they both have equal probability?
Because there is *nothing left*. He is arguing from set exclusivity.
Events can be generated in three ways: randomly, by natural law, and by
design. Can you think of some other way to generate an event? So if he
can eliminate law and chance, you can't object, "What about some unknown
natural cause?"
Now before Wesley reads this and accuses me of arguing out of both sides
of my mouth, let me say I am not convinced he has completely eliminated
law and chance, but he has come pretty darn close. Specifically, there is
a class of non-linear laws (functions) which, when coupled with chance,
may not be eliminated. But even on the off-chance that I am right about
this, there is nothing to indicate that this class of non-linear
functions can generate something that would pass for CSI or that natural
selection falls into this category of non-linearity.
Look, Dembski is a very bright guy. He has three Ph.D.s: one in
mathematics, one in philosophy, and one in theology. I think I read that
his philosophy thesis was on logic. He held post-doc positions at M.I.T.
and Cornell in mathematics, the University of Chicago in physics, and
Princeton in computer science. His book on I.D. (The Design Inference)
was published by Cambridge University Press, not some creationist tract
house. He's published in peer-reviewed journals. This is not in any way
an appeal to authority but just to point out that it is highly unlikely
that he has made some bone-headed mistake in his basic logic. If he had,
we never would have heard of him. He'd be mincemeat by now. I think it is
safe to assume his basic reasoning is sound and we need to look beyond
the obvious for meaningful critique.
"Evident" is a dangerous word. Unless you give us a procedure to decide
specification (which Dembski hasn't done, IMHO), it amounts to "I know
it when I see it" ;-).
> Such information is
> generated by nature all the time. But think of how surprised you would
> be to discover that each and every snowflake was exactly the same and
> just as complex.
But most organisms are different in detail. Except for identical twins,
no two humans have the same DNA sequence.
And every uranium atom is exactly the same - and extremely complex.
Think of the quantum-mechanical 93-body problem ....
> Nature seems incapable of generating such specified
> order. That leads us to DNA in which the sequence must be just so in
> order to specify a certain organism.
And if the sequence would be different it would specify a different
organism. No kidding.
Saying that the set of current species (from _E._coli to _H._ sapiens)
is "specified" amounts exactly to painting the bull's eye around the
arrow. Unless you want to claim that our existence was predicted some 3
gigayears ago ....
The information is both complex and
> specified. Again no known natural agent exists which has been shown to
> produce such specified order.
And there is no result that says that nature cannot produce it.
> Notice that once such information is generated, it may be transmitted
> by natural causes. Thus an intelligent agent is not required to copy or
> even change CSI. Showing a natural mechanism for the generation of DNA
> is no more helpful than saying Maxwell's equations can account for the
> picture on your television screen.
But Newton's equation can account for the highly complex fractal
structure of the rings of Saturn. I'm sure some specification could be
invented on the spot.
> It was transmitted by a natural
> process but this begs the question as to the source of the information.
This would only be a problem if you could prove some conservation
theorem for information (like for electric charge etc.).
HRG.
Maybe he should learn more about Chemistry and Biochemistry.
>his philosophy thesis was on logic. He held post-doc positions at M.I.T.
>and Cornell in mathematics, the University of Chicago in physics, and
>Princeton in computer science. His book on I.D. (The Design Inference)
>was published by Cambridge University Press, not some creationist tract
>house. He's published in peer-reviewed journals. This is not in any way
>an appeal to authority but just to point out that it is highly unlikely
>that he has made some bone-headed mistake in his basic logic. If he had,
It's not logic but evidence which is lacking. When there was a crisis
in physics in the 19th century due to the precession of Mercury,
physicists didn't say "that does it; it was due to a designer". That's
not how science works.
>we never would have heard of him. He'd be mincemeat by now. I think it is
>safe to assume his basic reasoning is sound and we need to look beyond
>the obvious for meaningful critique.
Paley's arguments were refuted. Dembski's arguments are the same as
Paley's arguments in new garb.
>
>Jeff
--
L.P.#0000000001
:> :[snip]
:> :>
:> :> To build an arch, simply heap up a pile of stones, build the arch on top
:> :> of that and then remove the pile of stones.
:>
:> : Of course the more arches you build which share a common capstone, the
:> : more difficult it becomes to remove the rocks without the whole thing
:> : collapsing. The arch argument ignores the inter-relatedness of
:> : functionality at the cellular level.
:>
:> No, no! ;-)
:>
:> The "arch" argument indicates that no matter /how/ complex and inter
:> related things are at the cellular level, they *still* may have been
:> built by the use of elaborate supporting structures.
:>
:> Showing that inter-dependence exists proves diddley-squat about whether
:> a system can be built by gradual processes.
:>
:> You can build a large number of arches which share a common capstone
:> (moving only one stone at a time) if you first build a mound of rocks,
:> then build the arches, and then /carefully/ remove the "scaffolding".
: It was this scaffolding I was referring to when I was talking about
: removing rocks in my analogy. You seem to think I was talking about
: removing rocks from the arches themselves.
That's what I thought you meant...
: Just as with arches, it becomes increasingly difficult to remove
: the scaffolding as more and more arches share a common keystone (if you
: don't believe it, try it), without the whole thing collapsing.
This is an objection which can plausibly be applied to arches - but it
can't plausibly be applied to organisms in an evolving system.
If you remove some scaffolding and everything collapses, then the organism
concerned leaves no ancestors.
However, other organisms - related to the one that died - can continue to
reproduce, allowing the possibility of the scaffolding falling away in
a descendant.
For the Dembski/Behe argument to apply to biological systems it has to
be /impossible/ to find a gradual route from simple systems to
complex ones - not just very difficult.
If you are arguing that removing the support scaffolding might be a little
bit tricky, I would reply, "Fear not! Evolution will find a way" -
provided only one rock at a time needs to be touched.
You would need to argue that removing the support scaffolding a piece
at a time is *impossible* for any "irreducible complexity" to be present.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
I will never lie to you.
Well, yes, but there certainly are events which undeniably represent CSI.
We don't need to see the building plans to know the house we are looking
at did not self-assemble. Formalizing this ad hoc procedure seems doable
to me. Dembski's Explanatory Filter attempts to do just that, as does
the procedure SETI is using to determine if a signal represents
intelligent life. It needs to be stressed that it is not necessary for
this procedure to always attribute design to events that were in fact
designed, as long as it never attributes design to events that in fact
were not designed.
> > Such information is
> > generated by nature all the time. But think of how surprised you would
> > be to discover that each and every snowflake was exactly the same and
> > just as complex.
>
> But most organisms are different in detail. Except for identical twins,
> no two humans have the same DNA sequence.
>
> And every uranium atom is exactly the same - and extremely complex.
> Think of the quantum-mechanical 93-body problem ....
Sorry, I'm unfamiliar with this problem, but if you are claiming that
uranium atoms contain complex specified information you've got an even
larger problem. If we assume all uranium has remained the same since the
beginning of time (i.e. the properties of matter have remained constant),
where did the information come from?
>
> > Nature seems incapable of generating such specified
> > order. That leads us to DNA in which the sequence must be just so in
> > order to specify a certain organism.
>
> And if the sequence would be different it would specify a different
> organism. No kidding.
>
> Saying that the set of current species (from _E._coli to _H._ sapiens)
> is "specified" amounts exactly to painting the bull's eye around the
> arrow. Unless you want to claim that our existence was predicted some 3
> gigayears ago ....
That is one implication. I think Dembski would say that our existence
implicates a goal set in the distant past. As Dawkins points out, the
universe appears to be designed (design = means for attaining a goal)
and the fundamental task of science is to show why it is not. Of course
science has not achieved this noble end yet, which leaves open the
possibility that it was in fact designed.
>
> > The information is both complex and
> > specified. Again no known natural agent exists which has been shown to
> > produce such specified order.
>
> And there is no result that says that nature cannot produce it.
There is an argument from set exclusivity that says nature cannot
produce it. All it would take is one counter-example to reduce this
argument to rubble. Funny that no one has come up with one yet.
>
> > Notice that once such information is generated, it may be transmitted
> > by natural causes. Thus an intelligent agent is not required to copy or
> > even change CSI. Showing a natural mechanism for the generation of DNA
> > is no more helpful than saying Maxwell's equations can account for the
> > picture on your television screen.
>
> But Newton's equation can account for the highly complex fractal
> structure of the rings of Saturn. I'm sure some specification could be
> invented on the spot.
>
> > It was transmitted by a natural
> > process but this begs the question as to the source of the information.
>
> This would only be a problem if you could prove some conservation
> theorem for information (like for electric charge etc.).
>
As you said, there is a problem because that is exactly the conclusion
of Dembski's theory. From "Intelligent Design as a Theory of
Information",
"Natural causes are therefore incapable of generating CSI. This broad
conclusion I call the Law of Conservation of Information, or LCI for
short. LCI has profound implications for science. Among its corollaries
are the following: (1) The CSI in a closed system of natural causes
remains constant or decreases. (2) CSI cannot be generated
spontaneously, originate endogenously, or organize itself (as these
terms are used in origins-of-life research). (3) The CSI in a closed
system of natural causes either has been in the system eternally or was
at some point added exogenously (implying that the system though now
closed was not always closed). (4) In particular, any closed system of
natural causes that is also of finite duration received whatever CSI it
contains before it became a closed system."
This is what all the fuss is about. If Dembski is correct, Complex
Specified Information is entropic: natural causes can transmit or
degrade it but never create it. Note that this result is completely
independent of our ability to accurately detect it, so in a sense the
objections about the accuracy of the filter don't amount to much. If
*any* example event can be found in nature that is shown unarguably to
represent CSI, the conclusion of design will be inescapable.
Jeff
:> UNK> [...] no known natural agent exists which has been shown
:> UNK> to produce such specified order.
:>
:> And there is no result that says that nature cannot produce it.
: There is an argument from set exclusivity that says nature cannot
: produce it. All it would take is one counter-example to reduce this
: argument to rubble. Funny that no one has come up with one yet.
Nature can produce intelligence (via evolution).
An intelligent designer is still involved in constructing the house
(or other apparently-designed construct), but no intelligent designer is
needed to produce the intelligent designer who built the house.
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ t...@cryogen.com
Time's up - everyone out of the gene pool.
>
> It's not logic but evidence which is lacking.
Hmm, seems just the opposite to me. All of the observable evidence shows
that it takes intelligence to create CSI. Further, there is no observable
evidence to show that CSI can come about from natural causes. There is
broad agreement that life is complex and specified, one of the few
things both Dembski and Dawkins agree on. Now one could argue that it is
possible for a negligibly probable event to occur and produce an event
that is contrary to our universal experience, but that argument is made
from outside the bounds of science because science is supposed to be
based on observed phenomena, not improbable fantasia.
> When there was a crisis
> in physics in the 19th century due to the precession of Mercury,
> physicists didn't say "that does it; it was due to a designer". That's
> not how science works.
>
The rest of science does not build its foundation upon a highly
improbable, non-observable, non-falsifiable, unspecified hypothesis and
continue to cling to the fantastic despite all evidence to the contrary.
These days biology has replaced an untenable appeal to God with an
untenable appeal to chance. There is no substantive difference.
>
> Paley's arguments were refuted. Dembski's arguments are the same as
> Paley's arguments in new garb.
Paley's argument failed because it went from order to Divinity without a
scientific basis. That is, to use Hume's term, there is no "uniform
experience" of Divine power upon which to base the argument that order
implicates a Supreme Being. Dembski's argument is quite different. It is
precisely because it is based on an analogy from uniform experience (all
observed specified complexity implicates intelligence) that it can lay
claim to being the only truly scientific hypothesis about origins.
Current so-called scientific theories regarding origins are not based on
uniform experience but rather on an appeal to blind chance that is based
on no observable analogy.
Further, Dembski's theory gives us falsifiable corollaries such as those
contained in his Law of Conservation of Information. These corollaries
allow us to make predictions and test results. And that, my dear maff,
is exactly how science *does* work.
Jeff
>
> >
> >Jeff
>
> --
> L.P.#0000000001
Not complex or not specified?
>[...] Nature seems incapable of generating such specified
>order. That leads us to DNA in which the sequence must be just so in
>order to specify a certain organism. The information is both complex and
>specified. Again no known natural agent exists which has been shown to
>produce such specified order.
The specification came after the fact. You painted the target around the
DNA.
Besides, nature has ways of producing specified complexity. In Oregon
Caves, there are formations called clay worms which look like some kind of
alien writing. They give the result "designed" when passed through
Dembski's filter. Yet I find it inconceivable that they were designed,
because of the lack of a designer that could also manufacture them. They
almost certainly appeared naturally via unknown means. Which means
Dembski's ideas are completely worthless.
I understand "specified in advance", but what does "self-evident in the
object itself" mean? Is that a fancy way of saying "looks designed to
me"?
>> [011100100110100110101111000110 vs 010101010101010101010101010101]
>[...]
>If I got you to believe that I flipped one of the sequences with a fair
>coin, you'd be very gullible. If I told you I flipped the other one,
>you'd have no trouble believing me. You tell me which is which.
Neither. It is tempting to say that the second string is less likely to
have formed randomly, but in fact it is EXACTLY as likely to have formed
as the first string. What you really mean to say is, if you compared any
of various strings with more-or-less arbitrary runs of zeros and ones
against a highly ordered string such as 1111... or 011011011..., you would
think the first was more likely to be formed by random processes. But
that has nothing to do with the pattern per se; it is a consequence of the
former set being much larger than the latter. In other words, it is
because you specified the latter string beforehand, but not the former.
In short, the pattern doesn't matter a bit unless it's specified in
advance.
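To put rough numbers on this, here is a sketch in Python (the choice of
"constant or strictly alternating" as the ordered class is mine, purely
for illustration):

    n = 30
    total = 2 ** n   # all 30-bit outcomes, each with probability 2**-30
    ordered = 4      # 000...0, 111...1, 0101..., 1010...
    print(f"P(one particular string)   = {1 / total:.3e}")    # ~9.3e-10
    print(f"P(some ordered string)     = {ordered / total:.3e}")
    print(f"P(some non-ordered string) = {(total - ordered) / total:.8f}")

Every individual string is equally improbable; only the sizes of the two
classes differ, and that is all the "surprise" amounts to.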
The first step in his filter says, "Was it caused by natural law?"
That question is impossible to answer unless you can recognize the effects
of every last natural law. Do you believe all natural laws are known? I
sure don't. And if you don't know what the natural laws say, how are you
going to recognize their effects?
The only possible way to make that question answerable is to assume it
only refers to known natural laws and explicitly excludes unknown laws.
And since Dembski seems to think it is possible to get past that step to
step 3, I conclude he makes that exclusion.
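For concreteness, here is the filter as I read it, rendered as runnable
Python pseudocode (the probability bound and all three predicates are
placeholders of my own, not anything Dembski supplies):

    SMALL = 1e-9   # placeholder bound; Dembski's actual bound differs

    def explanatory_filter(event, known_laws, is_specified, probability):
        # Step 1 can only consult laws we already know -- which is
        # exactly the objection: "unknown natural law" has nowhere to go.
        if any(law(event) for law in known_laws):
            return "regularity"
        # Step 2: events of intermediate probability go to chance.
        if probability(event) > SMALL:
            return "chance"
        # Step 3: small probability plus an independent specification.
        return "design" if is_specified(event) else "chance"

    # Toy usage: a strictly alternating 30-bit string.
    event = "01" * 15
    laws = [lambda e: False]                      # we know no law for it
    prob = lambda e: 0.5 ** len(e)                # fair-coin hypothesis
    spec = lambda e: e in ("01" * 15, "10" * 15)  # given in advance
    print(explanatory_filter(event, laws, spec, prob))   # -> "design"

An oscillator is a law that produces the alternating string, but unless
it already sits in known_laws, the filter returns "design".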
Mark Isaak wrote:
> In article <7t4ai2$905$1...@nnrp1.deja.com>, <jpa...@my-deja.com> wrote:
> >In article <37f4dcb5$0$2...@nntp1.ba.best.com>,
> > at...@best.comNOSPAM (Mark Isaak) wrote:
> >> [The difference between "isn't explained" and "can't be explained" is
> >> what Dembski is trying to prove.]
> >>
> >> Then why did he omit unknown natural causes out of his filter?
> >
> >Because there is *nothing left*. He is arguing from set exclusivity.
> >Events can be generated in three ways, randomly, by natural law and by
> >design.
>
> The first step in his filter says, "Was it caused by natural law?"
> That question is impossible to answer unless you can recognize the effects
> of every last natural law. Do you believe all natural laws are known? I
> sure don't. And if you don't know what the natural laws say, how are you
> going to recognize their effects?
>
Appeal to an as yet unknown natural law is an appeal to ignorance, whereas
Dembski's design-detector is essentially an appeal to knowledge. His is not a
god-of-the-gaps argument. In fact, he doesn't even claim the design-detector
gives God, for it doesn't distinguish the influence of vast intelligence from
Infinite Intelligence.
For example, if we observed "John 3:16" cratered on the back side of the moon,
Dembski's explanatory filter (law, chance, design) would detect design
(specified complexity). The random pattern of craters actually there today,
and the "John 3:16", are both complex patterns (low p-value). But the "John
3:16" conforms to an independently specifiable pattern. Hence, we infer mere
design; but the filter cannot tell if it is the work of aliens or God.
Nor could it tell us if the cratered moon were written directly by the finger
of God, or if God so fine tuned the initial conditions of the universe that
"John 3:16" appeared by natural processes with no direct Intervention. It
reliably detects mere design, not the God of Abraham, Isaac, Jacob.
-Alan Wostenberg
>In article <xrr1NxOP=T34dEG7bkx+==leV...@4ax.com>,
> maff91 <maf...@nospam.my-deja.com> wrote:
>
>>
>> It's not logic but evidence which is lacking.
>
>Hmm, seems just the opposite to me. All of the observable evidence shows
>that it takes intelligence to create CSI. Further, there is no observable
>evidence to show that CSI can come about from natural causes. There is
Chemistry and Biochemistry are natural causes.
http://x22.deja.com/=dnc/getdoc.xp?AN=424558267
>broad agreement that life is complex and specified, one of the few
>things both Dembski and Dawkins agree on. Now one could argue that it is
It's not what any one scientist says which matters in science. They
have to show the scientific community that the evidence corresponds to the
proposed theory.
>possible for a negligibly probable event to occur and produce an event
>that is contrary to our universal experience but that argument is made
Creation is contrary to our universal experience.
http://x14.dejanews.com/getdoc.xp?AN=372243992
http://x14.dejanews.com/getdoc.xp?AN=372683542
>from outside the bounds of science because science is supposed to be
>based on observed phenomena, not improbable fantasia.
Are you projecting?
>
>> When there was a crisis
>> in physics in the 19th century due to the precession of Mercury,
>> physicists didn't say "that does it; it was due to a designer". That's
>> not how science works.
>>
>The rest of science does not build its foundation upon a highly
>improbable, non-observable, non-falsifiable, unspecified hypothesis and
>continue to cling to the fantastic despite all evidence to the contrary.
>These days biology has replaced an untenable appeal to God with an
>untenable appeal to chance. There is no substantive difference.
Evolution is not simply a result of random chance. It is also a result
of non-random selection. See the Evolution and Chance FAQ and
http://www.talkorigins.org/faqs/chance.html
the Five Major Misconceptions about Evolution FAQ.
http://www.talkorigins.org/faqs/faq-misconceptions.html#chance
Many people of Christian and other faiths accept evolution as the
scientific explanation for biodiversity. See the God and
http://www.talkorigins.org/faqs/faq-god.html
Evolution FAQ and the Interpretations of Genesis FAQ.
http://www.talkorigins.org/faqs/interpretations.html
>> Paley's arguments were refuted. Dembski's arguments are the same as
>> Paley's arguments in new garb.
>
>Paley's argument failed because it went from order to Divinity without a
>scientific basis. That is, to use Hume's term, there is no "uniform
>experience" of Divine power upon which to base the argument that order
>implicates a Supreme Being. Dembski's argument is quite different. It is
>precisely because it is based on an analogy from uniform experience (all
>observed specified complexity implicates intelligence) that it can lay
>claim to being the only truly scientific hypothesis about origins.
Analogy without evidence isn't very useful in science.
>Current so-called scientific theories regarding origins are not based on
>uniform experience but rather on an appeal to blind chance that is based
>on no observable analogy.
>
>Further, Dembski's theory gives us falsifiable corollaries such as those
>contained in his Law of Conservation of Information. These corollaries
>allow us to make predictions and test results. And that, my dear maff,
>is exactly how science *does* work.
It also means publishing in peer reviewed science journals and
awaiting confirmation from the scientific community.
It's not going to be addressed by preaching to the apologetic crowd
and getting politicians to define what science is.
http://x42.deja.com/getdoc.xp?AN=529240504
>
>Jeff
[...]
--
L.P.#0000000001
1. The distribution of bright stars is certainly *complex* information.
The Greeks had a clear *specification* for it: that they can be grouped
into constellations which obviously (well, it was obvious to them)
correspond to characters from their mythology.
2. In 1610, the set of all planetary observations (by Tycho Brahe et
al.) was certainly *complex* information. Kepler found out that there
was an obvious pattern in it (elliptic orbits): *specification*. He
knew no natural law that would produce such orbits.
This example shows that "unknown natural law" is missing from the
filter.
Actually when a scientist detects an obvious pattern in the data, he
should not cry "Design! Design!", but try to formulate a new natural
law explaining the pattern. That's, after all, what all *known* natural
laws amount to: explanations of observed patterns, right?
*Unless* there is independent evidence for intelligent designers, like
in archeology and paleoanthropology.
In article <7t5irv$2ei$1...@nnrp1.deja.com>,
Because we have seen house-builders at work. Thus there is no need to
introduce a new natural law. When Ohm found a specified pattern
(linearity) in the relationship between potential difference and
current, he formulated a new law; he had no evidence for minuscule
intelligent electron pushers .....
> Formalizing this ad hoc procedure seems doable
> to me. Dembski's Explanatory Filter attempts to do just that as does
> the procedure SETI is using to determine if a signal represents
> intelligent life.
SETI starts from some assumption about the methods and purposes of the
putative intelligent designers. It is not about "mere design".
> It needs to be stressed that it is not necessary for
> this procedure to always attribute design to events that were in fact
> designed as long as it never attributes design to events that in fact
> were not designed.
>
> > > Such information is
> > > generated by nature all the time. But think of how surprised you
> > > would be to discover that each and every snowflake was exactly the
> > > same and just as complex.
> >
> > But most organisms are different in detail. Except for identical twins,
> > no two humans have the same DNA sequence.
> >
> > And every uranium atom is exactly the same - and extremely complex.
> > Think of the quantum-mechanical 93-body problem ....
>
> Sorry I'm unfamiliar with this problem
1 nucleus and 92 electrons, interacting via relativistic Coulomb
forces. Don't forget all those spins interacting either ....
> but if you are claiming that
> uranium atoms contain complex specified information you've got an even
> larger problem. If we assume all uranium has remained the same since the
> beginning of time (i.e. the properties of matter have remained constant)
> where did the information come from?
Only if you assume that CSI cannot be generated. But since there is no
way for us to predict the details of the uranium spectrum ab initio
(the problem is too complex), *measuring* amounts to new information.
This example shows that "unknowable consequence of known natural law"
is missing from the filter as well.
> >
> > > Nature seems incapable of generating such specified
> > > order. That leads us to DNA in which the sequence must be just so in
> > > order to specify a certain organism.
> >
> > And if the sequence would be different it would specify a different
> > organism. No kidding.
> >
> > Saying that the set of current species (from _E._coli to _H._ sapiens)
> > is "specified" amounts exactly to painting the bull's eye around the
> > arrow. Unless you want to claim that our existence was predicted some 3
> > gigayears ago ....
>
> That is one implication. I think Dembski would say that our existence
> implicates a goal set in the distant past.
A very anthropocentric assumption. Does the existence of gas giant
planets implicate that the goal was to produce gas giants? Or think of
Haldane's remark about the Creator's inordinate fondness for beetles.
> As Dawkins points out, the
> universe appears to be designed (design = means for attaining a goal)
> and the fundamental task of science is to show why it is not. Of course
> science has not achieved this noble end yet, which leaves open the
> possibility that it was in fact designed.
An unlimited designer with unknown and inscrutable purposes is not a
scientific explanation, for the same reasons that Last Thursdayism is
not. It can be fitted to "explain" everything and thus explains nothing.
> >
> > > The information is both complex and
> > > specified. Again no known natural agent exists which has been shown
> > > to produce such specified order.
The Tierra simulation is an impressive counterexample.
I'd call this an assertion, not a conclusion - unless you include it in
the definition of CSI. If I understand Wesley Elsberry right, that's
what Dembski's newest claims (those introducing "apparent" CSI) amount
to.
Regards,
HRG.
>In article <7t487v$7kd$1...@nnrp1.deja.com>, <jpa...@my-deja.com> wrote:
>>A pattern which is specified in advance or which is self evident in the
>>object itself indicates it was the goal of some design.
>
>I understand "specified in advance", but what does "self-evident in the
>object itself" mean? Is that a fancy way of saying "looks designed to
>me"?
>
>>> [011100100110100110101111000110 vs 010101010101010101010101010101]
>>[...]
>>If I got you to believe that I flipped one of the sequences with a fair
>>coin, you'd be very gullible. If I told you I flipped the other one,
>>you'd have no trouble believing me. You tell me which is which.
>
>Neither. It is tempting to say that the second string is less likely to
>have formed randomly, but in fact it is EXACTLY as likely to have formed
>as the first string. What you really mean to say is, if you compared any
>of various strings with more-or-less arbitrary runs of zeros and ones
>against a highly ordered string such as 1111... or 011011011..., you would
>think the first was more likely to be formed by random processes. But
>that has nothing to do with the pattern per se; it is a consequence of the
>former set being much larger than the latter. In other words, it is
>because you specified the latter string beforehand, but not the former.
>In short, the pattern doesn't matter a bit unless it's specified in
>advance.
Well, no. A branch of statistical inference is hypothesis testing.
The hypothesis in this case is flipping a fair coin. When one
repeatedly flips a fair coin, one expects that heads comes up about
half the time and tails about half the time. If heads or tails
predominates, then one is suspicious that the coin is biased. Given
the hypothesis, one can calculate how likely an outcome is. The
counts of heads and tails are about equal here, so that will not
disqualify the hypothesis. However, the hypothesis also implies that
each flip is independent of the other flips. In the second sequence,
heads always follows tails and tails always follows heads. If a fair
coin is being flipped, this is a very unlikely outcome; the odds are
worse than 500 million to one. Hence, the hypothesis is almost
certainly false for the second sequence.
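The arithmetic is easy to check (a few lines of Python, assuming only
that the 29 transitions are independent under the fair-coin hypothesis):

    n = 30                # flips in each sequence
    p = 0.5 ** (n - 1)    # every one of the 29 transitions alternates
    print(p)              # ~1.86e-09
    print(round(1 / p))   # 536870912 -- worse than 500 million to one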
Ivar
Dembski's filter requires that all natural laws, including unknown
laws, be proven inadequate to explain a possible design event. No one
else is appealing to anything; the burden of exclusion is entirely
Dembski's.
But you raise an interesting point. Are there signs that would point
to an unknown designer, maybe God, maybe something else? "John 3:16"
has not been observed on the back side of the moon. But if such a
sign were found, it would certainly create a lot of interest.
Dembski's problem is that he is not looking for unambiguous signs like
"John 3:16." Rather he is trying to apply his filter to events such
as the origin of life. I don't think he'll ever get it to work.
Ivar
[...]
>
>Appeal to an as yet unknown natural law is an appeal to ignorance, whereas
There's no evidence for creation.
http://x14.dejanews.com/getdoc.xp?AN=372243992
http://x14.dejanews.com/getdoc.xp?AN=372683542
>Dembski's design-detector is essentially an appeal to knowledge. His is not a
>god-of-the-gaps argument. In fact, he doesn't even claim the design-detector
>gives God, for it doesn't distinguish the influence of vast intelligence from
>Infinite Intelligence.
>
>For example, if we observed "John 3:16" cratered on the back side of the moon,
>Dembski's explanatory filter (law, chance, design) would detect design
>(specified complexity). The random pattern of craters actually there today,
>and the "John 3:16", are both complex patterns (low p-value). But the "John
>3:16" conforms to an independently specifiable pattern. Hence, we infer mere
>design; but the filter cannot tell if it is the work of aliens or God.
>
>Nor could it tell us if the cratered moon were written directly by the finger
>of God, or if God so fine tuned the initial conditions of the universe that
>"John 3:16" appeared by natural processes with no direct Intervention. It
>reliably detects mere design, not the God of Abraham, Isaac, Jacob.
There isn't any evidence for design.
http://x33.deja.com/getdoc.xp?AN=519544184
>
>-Alan Wostenberg
--
L.P.#0000000001
The problem remains that any pattern is "independently specifiable" -
*after the fact*. See my post pointing out that the Greeks "specified"
the pattern of the brightest stars in our sky, by associating their
constellations with figures from their mythology.
Regards,
HRG.
> Hence, we infer mere
> design; but the filter cannot tell if it is the work of aliens or God.
>
> Nor could it tell us if the cratered moon were written directly by the
> finger of God, or if God so fine tuned the initial conditions of the
> universe that "John 3:16" appeared by natural processes with no direct
> Intervention. It reliably detects mere design, not the God of Abraham,
> Isaac, Jacob.
>
> -Alan Wostenberg
Dembski is also fond of both practical and hypothetical
illustrations to make his points. I'd like to propose a
hypothetical illustration to explore the utility of the
"apparent CSI"/"actual CSI" split.
Let's say that we have an intelligent agent in a room. The
room is equipped with all sorts of computers and tomes on
algorithms, including the complete works of Knuth. We'll call
this the "Algorithm Room". We pass a problem whose solution
would meet the criteria of CSI into the room (say, a 100-city
tour of the TSP or perhaps the Zeller congruence applied to
many dates). Enough time passes that our intelligent agent
could work the problem posed from first principles by hand
without recourse to references or other resources. The
correct solution is passed out of the room, with a statement
from the intelligent agent that no computational or reference
assistance was utilized.
our intelligent agent at a high consultant rate. But if our
intelligent agent simply used the references or computers, he
would get paid at the lowly computer operator rate. We
suspect that our intelligent agent not only utilized the
references or computers to accomplish the task, but that he
also used the time thus freed up to do some light reading,
like "Once is Not Enough".
There are four broad categories of possible explanation of the
solution that was passed back out of the "Algorithm Room".
First, our intelligent agent might have employed chance,
throwing dice to come up with the solution, and then waiting
an appropriate period to pass the solution out. Given that
the solution actually did solve the problem passed in, we can
be highly confident that this category of explanation is not
the actual one. Second, our intelligent agent might have
ignored every resource of the "Algorithm Room" and spent the
entire time working out the solution from the basic
information provided with the problem (distances between
cities or dates in question). Third, our intelligent agent
might have gone so far as to look up and apply, via pencil and
paper, some appropriate algorithm taken from one of the
reference books. In this case, the sole novel intelligent
action on our agent's part was looking up the algorithm.
Essentially, our agent utilized himself as a computer.
Fourth, our intelligent agent might simply have fed the basic
data into one of the computers and run an algorithm to pop out
the needed solution. Again, the intelligent agent's
deployment of intelligence stopped well short of being applied
to produce the actual solution to the problem at hand.
Because we suspect cheating, we wish to distinguish between a
solution that is the result of the third or fourth categories
of action, and a solution that is the result of the second
category of action of our intelligent agent. We have only the
attributes of the provided solution to the problem to go upon.
Can we make a determination as to whether cheating happened or
not?
Dembski's article, "Explaining Specified Complexity",
critiques a specific evolutionary algorithm. Dembski does not
dispute that the solution represents CSI, but categorizes the
result as "apparent CSI" because the specific algorithm
critiqued must necessarily produce it. Dembski then claims
that this same critique applies to all evolutionary
algorithms, and Dembski includes natural selection within that
category.
The question all this poses is whether Dembski's analytical
processes bearing upon CSI can, in the absence of further
information from inside the "Algorithm Room", decide whether
the solution received was actually the work of the intelligent
agent (and thus "actual CSI") or the product of an algorithm
falsely claimed to be the work of the intelligent agent (and
thus "apparent CSI")?
If Dembski's analytical techniques cannot resolve the issue of
possible cheating in the "Algorithm Room", how does he hope to
resolve the issue of whether certain features of biology are
necessarily the work of an intelligent agent or agents? If
Dembski has no solution to this dilemma, the Design Inference
is dead.
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
"when something digestible meets with an eager digestion" - mehitabel
In article <37F546C6...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
>Wesley R. Elsberry wrote:
>
>> In article <37F28B4E...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
>> >Wesley R. Elsberry wrote:
[...]
WRE> [Restore]
WRE> In fact, I list several ways in which Dembski's Explanatory
WRE> Filter fails to accurately capture how humans go about finding
WRE> design in day-to-day life in my review of TDI.
WRE> [End Restoration]
WRE> And no comment on that from Jeff. Given that Dembski relies
WRE> heavily upon analogy to human detection of design in
WRE> day-to-day life in making his argument for proceeding to infer
WRE> intelligent agency once "design" is found as an attribute of
WRE> an event, I'd think that making an accurate description of the
WRE> process in question would be pretty important.
JP>Frankly this is an issue I haven't thought a lot about. I
JP>glossed over your objections not because I don't think they
JP>are relevant but because right now I'm still grappling with
JP>the philosophical implications of CSI itself. I have thus
JP>far been willing to accept as given that if CSI exists, a
JP>means can be found to reliably detect it, even if Dembski's
JP>first cut turns out not to be the most reliable. I read his
JP>paper on the EF and it struck an intuitive chord but I
JP>haven't gone much beyond that. I guess I see the design of
JP>the filter as an engineering challenge, not a theoretical
JP>one. If you would allow, I'd like to side-step this issue
JP>and come back to it after I've read Dembski's paper and
JP>your review more carefully. In the meantime, if you'd like
JP>to elaborate on why you think Dembski's theory rests on a
JP>particular implementation of the filter, it would inform my
JP>review.
Dembski's argument in section 2.4 of TDI relies upon current
human design detection as representing a *general* method of
finding the action of intelligent agents. Dembski wants to
establish that once "design" is found, that one is justified
in then inferring that an intelligent agent produced the
event in question. If Dembski's description of human
methods of detecting design fails to capture important
aspects of those methods (like making allowances for the
category of "unknown causation" and the action of unknown
natural processes, both of which are characterizations of
the state of information available that are absent from
Dembski's EF), then Dembski can hardly use the generality of
the *human design detecting activities* as a basis to claim
generality for his particular EF. It is the argument in
section 2.4 which is at risk over this issue.
WRE> I'm not the only one to critique Dembski on his accuracy of
WRE> capturing human design detection procedures. Elliott Sober et
WRE> alia's "How Not to Detect Design" makes strong criticisms of
WRE> Dembski's Explanatory Filter and extended logical arguments.
JP>A pointer to Sober's paper would be helpful if you have it handy..
<http://philosophy.wisc.edu/sober/papers.htm>
The paper itself is a PDF document linked from there.
[...]
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
"Your voice goes funny when you're quoting"-SG
In article <37F546C6...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
>Wesley R. Elsberry wrote:
>
>> In article <37F28B4E...@mpmd.com>, Jeff Patterson <jp...@mpmd.com> wrote:
>> >Wesley R. Elsberry wrote:
[...]
WRE> At least, Dembski
WRE> did not so define it before his "reiterations" post. With
WRE> the introduction of the unspecified qualifiers "apparent"
WRE> and "actual" to be prepended to "CSI", it looks like Dembski
WRE> may indeed have simply slipped into publicly begging the
WRE> question.
JP>You have me at a disadvantage here as I haven't seen the post
JP>you reference and haven't run across him using the qualifiers
JP>you mentioned in the articles I have read. A pointer would be
JP>most appreciated.
WRE> <http://x44.deja.com/=dnc/getdoc.xp?AN=525224114>
JP>Thanks. This is the first time I have seen Dembski claim
JP>that life has the attribute of CSI. Before, he characterized
JP>this as an open question. This leads me to wonder what he
JP>has discovered to resolve this to his satisfaction.
JP>I'd like to quote here what I think is the central point he
JP>tries to make: "It follows that Dawkins's evolutionary
JP>algorithm, by vastly increasing the probability of getting
JP>the target sequence, vastly decreases the complexity
JP>inherent in that sequence."
And Dembski fails to analyze the probability of an
intelligent agent coming up with a solution as a measure of
complexity of that solution. This is yet another instance
of preferential treatment being given to intelligent agents so
that "actual CSI" will seem important and different from
"apparent CSI".
As I mentioned earlier, the probability that an omnipotent,
omniscient hyperintelligence will solve a problem is 1, and
thus any event produced by such an entity must necessarily (by
the reasoning given by Dembski when looking at algorithms)
mean that such an entity is incapable in principle of making
anything but "apparent CSI".
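The bookkeeping is easy to exhibit if one takes Dembski's "complexity
is improbability" at face value and measures it as log2(1/p). A sketch
(Python; the WEASEL parameters are the usual ones, and the even-handed
application to an agent is precisely the point at issue):

    import math

    phrase_len, alphabet = 28, 27      # "METHINKS IT IS LIKE A WEASEL"
    p_blind = alphabet ** -phrase_len  # one blind draw hits the target
    print(f"blind search: {math.log2(1 / p_blind):.1f} bits")   # ~133.1

    p_algorithm = 1.0                  # cumulative selection converges
    print(f"algorithm: {math.log2(1 / p_algorithm):.1f} bits")  # 0.0

    # Applied even-handedly, an agent who solves the problem with
    # probability 1 also scores 0.0 bits; "actual CSI" evaporates by
    # the identical argument.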
JP>I think Dembski goes badly astray here which leads him to
JP>this muddle of apparent vs actual complexity. He has
JP>forgotten something he knew to be true in his earlier
JP>writings, namely that the information content of an event
JP>is dependent on the experiment one designs to observe the
JP>event. He demonstrates this clearly in his "royal flush"
JP>analogy. Designing an experiment to detect a non-royal
JP>flush reduces the information contained in the event of
JP>obtaining the sequence of cards that makes up a royal flush
JP>to one bit.
Dembski uses this example to argue that the individuation of
card hands into "royal flush" and "not-royal-flush" categories
is invalid as a measure of information, and thus complexity.
Why would anyone select something that Dembski said was the
wrong way to do things as an argument that things *should* be
handled in that way?
JP>The result of such an experiment is not
JP>apparent complexity; it is not complex at all. In Dawkins's
JP>GA, as in my archer-evolver, the event is obtaining some
JP>piece of CSI. If the event is bound to occur, the event
JP>ceases to contain information.
I think this is all bafflegab to avoid the obvious: The Design
Inference fails to provide an empirically reliable means of
detecting the action of an intelligent agent when one *only*
has the information inherent in an event, and not the complete
knowledge of *how* the event was produced. That one can say
that intentionality or some other internal mental state alters
CSI in some manner makes little difference if one cannot
discern its influence in the solution produced. If we had
*complete* knowledge of how all events of interest were
caused, then we wouldn't need to discuss Explanatory Filters
and Design Inferences. That would all be superfluous. The
only circumstances in which Explanatory Filters and Design
Inferences could possibly be of any use is where we do not
have complete information. And when the information about an
event is limited to the produced event itself, it appears that
the Design Inference is incapable of distinguishing between a
solution produced by an algorithm and a solution produced by
an intelligent agent.
JP>We probably should start a new thread on this topic, that
JP>is whether GA's can create CSI, because it deserves its own
JP>heading and this thread is getting unwieldy but I don't
JP>know how to do so without losing the context of our current
JP>discussion so I'll press on. It is clear to me from your
JP>comments below that we are not going to get anywhere
JP>without a clear understanding of what each of us means by
JP>intelligence so let me start there.
I've made this a new thread.
JP>In "How the Mind Works", Pinker defines intelligence
JP>(pg. 62) as "the ability to attain goals in the face of
JP>obstacles by means of decisions based on rational
JP>(truth-obeying) rules.".
JP>I find this to be unsatisfactory. Notice that this
JP>definition would apply the attribute of intelligence to
JP>genetic algorithms (indeed any optimization routine) and
JP>perhaps even to evolution through natural selection. You
JP>make it clear below that you do not feel GA's have
JP>intelligence, so I assume you agree thus far. What is it then
JP>that makes this definition deficient? It is in his use of
JP>the word "attain". What separates intelligent agents from
JP>mere automatons is the ability to *set* goals, not to
JP>attain the goals set (possibly) by others. I think Pinker
JP>was well aware of this distinction and chose the word
JP>attain purposefully because the ability to set goals
JP>implicates desire, motivation, self-determination, in short
JP>all of the things that do not fit neatly into his
JP>computational model for human thought.
I prefer as a working definition of intelligence "emitting
appropriate behaviors under novel circumstances". One would
like a definition of intelligence to have *some* basis in
phenomena available for observation and possible test, which
intentionality definitely is not.
JP>Why is this important? Because I maintain that it is at the
JP>precise moment that one sets a goal, and at no other, that
JP>CSI can come into existence and this is why I think CSI
JP>does indeed implicate intelligence. Goal setting is when
JP>the choice is made, when contingency is actualized, when
JP>the die is cast. It is when the archer paints the
JP>bull's eye and picks one point among the infinite number
JP>that he could have chosen that CSI comes about, not in the
JP>shooting of the arrow. The beach house existed in the mind
JP>of the architect before he ever laid pencil to paper. The
JP>design is simply the means to an end. Any number of ways
JP>can be devised (designed) to achieve the goal, but none of
JP>them create CSI, even if the design includes the actions of
JP>an intelligent agent.
Setting a goal does *not* make CSI come into existence. It
isn't even a necessary precursor to the production of CSI by
intelligent agents, as the concept of "serendipity"
indicates. A goal as an internal and unevidenced mental
state of a designer can hardly be passed into Dembski's
Explanatory Filter. Dembski aims to pass things like Behe's
biological examples into his EF. According to Jeff's
statements above, this should not be allowed, as such things
would not bear upon whether CSI was present or not.
JP>Now if it is true that any design has as its end a goal,
JP>when we find a design we are safe in assuming a goal
JP>existed beforehand which the design attempts to attain.
This is backwards. A goal necessitates a design, not the
other way around. "Design" as deployed by Dembski does not
mean what Jeff references above. "Design" is the residue
after high-probability events and intermediate or
low-probability events without specifications are
eliminated. I didn't see anything about intelligence in
that. This confusion and conflation of the common meaning
of "design" with the specific connotation used by Dembski
was one of the things that I warned against in my review.
JP>Intelligence is implicated by a goal AND by a design to
JP>attain that goal. So the reasoning goes from specified
JP>order which implicates design which implicates a goal. This
JP>chain includes both a design and a goal which implicates
JP>intelligence if and only if the goal is complex enough to
JP>preclude its having been attained by chance. Thus not
JP>every goal creates CSI. I can choose an easily attainable
JP>goal for which good fortune may account. I could choose to
JP>paint the whole wall black which while specified, is not
JP>complex.
See my comments above.
JP>Dawkins understands this well. It is why he is so adamant
JP>about the purposelessness of the evolutionary process. To
JP>ascribe a goal is to implicate intelligence. It is also why
JP>all the claims that purposeless evolution is not
JP>inconsistent with religious belief are so much pabulum.
"Pabulum"?
Many religious people accept that evolutionary processes are
reflected in the history of life without accepting that all
the changes that occurred were purposeless. But most do not
try to argue that their interventionist view of God's role
in the unfolding of life's history is a scientific hypothesis
to be taught alongside or in place of evolutionary biology
as it is currently constituted.
JP>I think Dembski makes a mistake in not making this goal
JP>dependence more explicit.
I think Dembski did not make a mistake there. I think Dembski
did well to stay far, far away from making goal dependence
explicit.
JP>It's there, but couched in terms like directed contingency
JP>and actualization it gets fogged. It gets totally
JP>obliterated by this silly distinction between real and
JP>apparent CSI.
I agree on the "obliteration" comment. I don't see any way
out of this for Dembski other than a retraction of the
"Explaining Specified Complexity" essay's use of "apparent"
and "actual" qualifiers for CSI. That, of course, also
means that the conclusions derived from their deployment
also must be retracted.
JP> [snip of discussions related to natural selection and
JP> superposition - You can readress these issues if you like]
WRE> Again, the objections raised to algorithms as sources of CSI
WRE> seem to be handled preferentially: they are not applied to
WRE> intelligent agents, and yet that application seems both fair
WRE> and reasonable.
JP>With my definition of intelligence and the class of events
JP>capable of creating CSI which follows from it, this is not
JP>the case. If either is merely attaining a goal, no CSI is
JP>created. If either can formulate a goal, CSI is created. No
JP>preferential treatment need be given.
I think that Jeff's view and re-definition of CSI has many
more problems than Dembski's original. Dembski, after all,
could ground his assessment of CSI on empirical evidence of
an event. Jeff's CSI can only be invoked with special
information about an internal mental state of an intelligent
agent. The pitfalls in doing this are legion, not the least
of which is the tendency for human intelligent agents to
form post hoc rationalizations of their mental processes
leading to particular actions.
The fact remains that if we only have the event itself to
work from, distinguishing "apparent CSI" and "actual CSI"
appears to be a task upon which the Design Inference fails
to lend any assistance. That applies whether one uses
Dembski's writings as a basis or Jeff's reformulation.
WRE> This doesn't fit the properties of GAs that I have run. In
WRE> Perl scripts that I have done, I try to keep track of the
WRE> unique solutions which have been tried by the algorithm and
WRE> produce a ratio of the number of solutions states examined to
WRE> the number of solution states in the problem space. This is
WRE> typically a tiny, tiny fraction. The GA does *not* typically
WRE> explore the complete problem space. For the toy "WEASEL"
WRE> problem that Dembski criticizes, the typical ratio in my runs
WRE> is about 2E-37.
JP>These are measures of efficiency in attaining the goal.
Which are relevant because that was Jeff's argument for
saying that the complexity was low. Jeff claimed a
probability value near 1 for exploring the problem space
including the solution state; I stated the empirical
evidence of my actual runs of a program to indicate that the
proportion of the problem space examined is far closer to 0
than it is to 1. So now Jeff just wants to dump his
previous argument concerning efficiency and move on to his
new argument involving intentionality. Fine. I just want
to make sure that everyone recognizes that Jeff has indeed
abandoned his previous line of argument.
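For reference, here is a sketch of the kind of bookkeeping involved
(Python rather than the Perl of my scripts; the mutation rate, pool
size, and seed here are illustrative choices, not the originals):

    import random
    import string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "
    rng = random.Random(1)

    def score(s):
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        return "".join(rng.choice(ALPHABET) if rng.random() < rate
                       else c for c in s)

    seen = set()
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    while parent != TARGET:
        # Keep the parent in the pool (elitism) so fitness never regresses.
        pool = [parent] + [mutate(parent) for _ in range(100)]
        seen.update(pool)
        parent = max(pool, key=score)

    space = len(ALPHABET) ** len(TARGET)    # 27**28, about 1.2e40
    print(f"unique candidates examined: {len(seen)}")
    print(f"fraction of space explored: {len(seen) / space:.1e}")

The printed fraction is vanishingly small -- far closer to 0 than to 1,
which is the point.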
JP>The CSI was created by you when you set the task before
JP>your program. Dawkins created the CSI only once, when he
JP>chose the phrase. Run the program as often as you like; it
JP>finds the CSI like a rock finds its point of lowest
JP>potential energy. Neither the rock nor the law of gravity
JP>cares a whit about the rock's position in space. No goal, no
JP>intelligence.
The solution produced, however, is identical to that produced
by an intelligent agent with intentionality, and is thus
indistinguishable from that solution without the additional
(and unavailable) information about internal mental states of
the intelligent agent.
JP>So we've evolved our archer with a reasonable, if simplified,
JP>model of genetic algorithms in general. We could add
JP>complications like sex between the surviving vectors,
JP>multiple mutations etc., but that would not change the
JP>result. Given enough time and a large enough ensemble, we
JP>eventually can get arbitrarily close to the target. Have we
JP>produced CSI? Only in the sense that a machine designed to
JP>find a needle in a haystack can produce the needle. It does
JP>not, however, create the needle. The needle was there to
JP>begin with. In the same way GA's can produce *knowledge* by
JP>scouring a predefined information space to identify a given
JP>piece of CSI. They cannot create the information
JP>themselves. They are attracted to the solution as surely as a
JP>ball dropped from a height is attracted to the earth's center
JP>of gravity. This is what Dembski meant by GA's being
JP>probability amplifiers, an exceedingly poor choice of
JP>words. His idea here though is clearly that if a machine is
JP>designed to find a piece of CSI, it surely belongs in the
JP>regularity set, even if it uses stochastic processes in its
JP>implementation.
WRE> This is a confusion of *cause* and *event*. The event is the
WRE> placement of an arrow into a prespecified location. Whether
WRE> one posits a human archer or a robot archer, the event
WRE> contains CSI. Dembski's Explanatory Filter categorizes
WRE> *events*, not *causes*.
JP>The only event of import here was the painting of the
JP>target. The cause of that event was the archer's innate
JP>ability to formulate a goal (to hit the target with an
JP>arrow). He had some purpose in mind and was not just painting
JP>black spots on a whim. The only place intelligence is
JP>required in the whole thought experiment was in forming this
JP>goal.
Depending upon the problem, intelligence may not be required
even for that. Biological phenomena come to mind as an
example.
Dembski's archer example argues against Jeff's interpretation.
The placement of the arrow, Dembski says, tells us something
about the *skill* of the archer when the arrow is placed into
a pre-existing target, and not the skill of the archer in
painting the target beforehand. The target's placement is the
specification, the arrow hitting the wall is the event, and
the arrow's placement relative to the target indicates the
degree to which it has the property of CSI. Dembski's example
does not say that the *archer* himself *had* to paint the
target in order to have his arrow placement generate CSI.
(Dembski's illustration *does* have the archer painting the
target, though.)
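Jeff's evolving-archer model, for what it's worth, is easy to make
concrete. Here is a minimal Python sketch; the target coordinates,
population size, and mutation scale are illustrative assumptions.

    # Toy archer GA: aim vectors evolve toward a fixed target point.
    # All parameters are illustrative assumptions.
    import random

    TARGET = (3.7, -1.2)            # where the target was painted
    POP, SIGMA, GENS = 50, 0.1, 200

    def miss(v):
        # Euclidean distance from the aim point to the target.
        return ((v[0] - TARGET[0]) ** 2 + (v[1] - TARGET[1]) ** 2) ** 0.5

    pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(POP)]
    for _ in range(GENS):
        best = min(pop, key=miss)   # the surviving vector
        pop = [(best[0] + random.gauss(0, SIGMA),
                best[1] + random.gauss(0, SIGMA)) for _ in range(POP)]

    print("final miss distance: %.4f" % miss(min(pop, key=miss)))

The loop homes in on TARGET however TARGET was chosen, which is the
sense in which Jeff says the program is "attracted" to the solution;
the dispute above is over whether the resulting arrow placement is
thereby disqualified as CSI.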
WRE> This business of information transformation is one that I
WRE> explained earlier, but I can repeat that explanation easily
WRE> enough.
WRE> The objection that one goes from an antecedent set of
WRE> information in one form to a CSI solution of another form is
WRE> no objection at all. An intelligent agent would have no hope
WRE> of solving a 100-city tour of the TSP in the absence of the
WRE> distance information, or at least would only have the same
WRE> expectations as one would obtain from blind search. The
WRE> algorithm and the agent operate to generate CSI that did not
WRE> exist before from the information that is available. This is
WRE> another way of saying that rational intelligence and
WRE> algorithms have the same dependencies when generating CSI.
JP>But it doesn't matter who or what solves the problem. The
JP>question is who formulated the problem to begin with?
[...]
That better not be the question. If it is the question,
then the Design Inference as a means of detecting design
based upon the attributes of possibly designed events is
effectively dead. In order to make a design inference
under this understanding would *require* the examination
of the internal mental state of the designer of the
question. This is not an improvement over what Dembski
has so far given.
See my "Algorithm Room" challenge. I have isolated out
the formulation of the problem so that it has no bearing
on the essential question of whether the Design Inference
is *capable* of resolving whether some event yielding
CSI is an instance of "actual CSI" or "apparent CSI".
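As for the TSP point above, the shared dependency on distance
information is easy to demonstrate with a small experiment. Here is
a Python sketch comparing blind search against a search that
consults the distances; the city count, trial count, and the 2-opt
move are illustrative assumptions.

    # Blind search vs. distance-informed search on a small random TSP.
    import random

    random.seed(1)                  # reproducible example
    N = 20
    cities = [(random.random(), random.random()) for _ in range(N)]

    def tour_len(order):
        # Total length of the closed tour visiting cities in 'order'.
        return sum(((cities[order[i]][0] - cities[order[i - 1]][0]) ** 2 +
                    (cities[order[i]][1] - cities[order[i - 1]][1]) ** 2) ** 0.5
                   for i in range(N))

    # Blind search: score random permutations, ignoring structure.
    blind = min(tour_len(random.sample(range(N), N)) for _ in range(2000))

    # Informed search: 2-opt hill climb that consults the distances.
    order = random.sample(range(N), N)
    improved = True
    while improved:
        improved = False
        for i in range(1, N - 1):
            for j in range(i + 1, N):
                trial = order[:i] + order[i:j][::-1] + order[j:]
                if tour_len(trial) < tour_len(order):
                    order, improved = trial, True

    print("blind best: %.3f   2-opt: %.3f" % (blind, tour_len(order)))

On typical runs the informed search finds a far shorter tour than
thousands of blind guesses; deprived of the distance information,
it could do no better than the blind search.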
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
"i might have been a poet if i had kept away from the theatre" - archy
MI> [Dembski] refuses to recognize, though, that there is a big
MI>difference between "isn't explained" and "can't be explained."
JP>I think he understands the distinction perfectly well. After all, it
JP>is the latter he is trying to prove.
MI> Then why did he leave unknown natural causes out of his filter?
JP>Because there is *nothing left*. He is arguing from set
JP>exclusivity. Events can be generated in three ways:
JP>randomly, by natural law, and by design. Can you think of some
JP>other way to generate an event? So if he can eliminate law
JP>and chance, you can't say, "What about some unknown natural
JP>cause?"
I suggest that Jeff read TDI, p.53. It contains Dembski's
second and more detailed method for excluding regularity as a
class of causation. The method only takes into account
currently known natural laws.
JP>Now before Wesley reads this and accuses me of arguing out of
JP>both sides of my mouth, let me say I am not convinced he has
JP>completely eliminated law and chance but he has come pretty
JP>darn close. Specifically, there is a class of non-linear laws
JP>(functions) which when coupled with chance may not be
JP>eliminated. But even on the off-chance that I am right about
JP>this, there is nothing to indicate that this class of
JP>non-linear functions can generate something that would pass
JP>for CSI or that natural selection falls into this category of
JP>non-linearity.
Except, of course, for the examples that Jeff refuses to
accept. I'd like to ask Jeff what evidence he *would* find
convincing that natural selection or algorithms can produce
CSI, in contravention of Dembski's claims.
JP>Look, Dembski is a very bright guy. He has three Ph.D.'s; one
JP>in mathematics, one in philosophy and one in theology. I
JP>think I read that his philosophy thesis was on logic. He ran
JP>post-doc programs at M.I.T. and Cornell in mathematics, the
JP>University of Chicago in physics, and at Princeton in
JP>computer science. His book on I.D. (The Design Inference) was
JP>published by Cambridge University Press, not some creationist
JP>tract house. He's published in peer reviewed journals. This
JP>is not in any way an appeal to authority but just to point out
JP>that it is highly unlikely that he has made some bone-headed
JP>mistake in his basic logic. If he had, we never would have
JP>heard of him. He'd be mincemeat by now. I think it is safe
JP>to assume his basic reasoning is sound and we need to look
JP>beyond the obvious for meaningful critique.
I find this argument underwhelming. There is nothing that
magically insulates Dembski from having made errors at a
fundamental level. To simply brush aside criticisms based on
credentials *is* an argument from authority. Dembski's EF
would not necessarily have played a big role in Dembski's
academic programs, and thus not have been put forward for
attention there. If Jeff can more precisely show where
Dembski's EF was put forward for peer review, then we can
start assessing just how well Jeff's apologetic applies to the
actual circumstances. For it is Dembski's EF that is being
discussed, and not Dembski himself.
I believe that I have taken Dembski's arguments seriously, and
that I have been contributing meaningful critiques. One of my
critiques is in fact about the lack of the "unknown causation"
category in Dembski's Explanatory Filter. I note this as a
problem for Dembski in claiming that his Explanatory Filter
captures how humans detect design. I develop an alternative
EF in my review of TDI as an illustration.
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
"It's springtime now and cares subside\The planning's almost done" - BOC
Could someone explain to me how this filter works?
Let's suppose that I was around in 14th century Europe and
was wondering about how the stars moved through the heavens.
Could I use the filter to find out whether they moved in accord
with natural law, or randomly, or by design?
As I understand early astronomy, they couldn't quite get the
motions down perfectly. The predictions were always off by a bit.
Although summer and winter did recur, so the farmers were able,
more or less, to determine from the motions of the stars, when
were the good times to plant and when to reap, and when the cattle
would breed. Was that Complex Specified Information?
--
Tom Scharle scha...@nd.edu "standard disclaimer"
TS>[...snip...]
>: The first step in his filter says, "Was it caused by natural law?"
>: That question is impossible to answer unless you can recognize the effects
>: of every last natural law. Do you believe all natural laws are known? I
>: sure don't. And if you don't know what the natural laws say, how are you
>: going to recognize their effects?
>[...snip...]
TS> Could someone explain to me how this filter works?
Certainly. Dembski's original Explanatory Filter essay is
linked from
<http://inia.cls.org/~welsberr/evobio/evc/ae/dembski_wa.html>
and some of my previous comments on Dembski's EF are linked
from
<http://inia.cls.org/~welsberr/evobio/evc/argresp/design.html>.
TS> Let's suppose that I was around in 14th century Europe
TS>and was wondering about how the stars moved through the
TS>heavens. Could I use the filter to find out whether they
TS>moved in accord with natural law, or randomly, or by design?
TS> As I understand early astronomy, they couldn't quite get
TS>the motions down perfectly. The predictions were always off
TS>by a bit. Although summer and winter did recur, so the
TS>farmers were able, more or less, to determine from the
TS>motions of the stars, when were the good times to plant and
TS>when to reap, and when the cattle would breed. Was that
TS>Complex Specified Information?
I take up the concept of "regression testing" for Dembski's
EF in
<http://www.deja.com/getdoc.xp?AN=388825197>.
[Quote]
What I have proposed concerning testing of Dembski's filter is
familiar to any software engineer as "regression testing". I
find it notable that thus far Dembski and others who express
support for the ID inference have conspicuously avoided
applying Dembski's Explanatory Filter in this way. If it were
a reliable detector, it would be a fine point for
argumentation to be able to say that past false attributions
of design could have been avoided by application of Dembski's
EF. However, as one can see by examination, the outcome does
not support that, and that is why I think that we don't see
such testing of Dembski's EF. (Although I anticipate that
this point will be the focus of much special pleading in the
future, with many fine distinctions made concerning specified
complexity. Consider it a prediction.) On the other hand, I
welcome such testing for my EF, which would not have posited
fairies in the fairy rings.
[End Quote - WR Elsberry,
<http://www.deja.com/getdoc.xp?AN=388825197>]
As a bonus, it appears that my prediction from before the
publication of Dembski's "The Design Inference" was right on
the money. I'll cite Dembski's deployment of "apparent CSI"/
"actual CSI" and Patterson's "intentionality as CSI" as
instances where my prediction was fulfilled.
--
Wesley R. Elsberry, Student in Wildlife & Fisheries Sciences, Tx A&M U.
Visit the Online Zoologists page (http://www.rtis.com/nat/user/elsberry)
Email to this account is dumped to /dev/null, whose Spam appetite is capacious.
"slumber gently and sweet dreams fluffy dear i said" - mehitabel
I think the point was that this is not the case (and it isn't! I think
you'll see this if you think a little bit more about it). But anyway,
if John 3:16 was inscribed on the moon, we would suspect a designer
because designers have been known to write this message (*not* for the
reasons the other poster was giving, of course--something about
specifiable patterns, and utter nonsense). We would suspect that some
form of Christianity pre-dated the evolution of man by some two or
three billion years. It would be very interesting, and one wonders why
God hasn't done something like this, if he in fact exists.
>See my post pointing out that the Greeks "specified"
> the pattern of the brightest stars in our sky, by associating their
> constellations with figures from their mythology.
Yeah, I saw that; I wasn't sure if you really believed it (i.e., if you
were really thinking about what you were saying at that point), but now
that it's cropped up again (somehow in the wrong order, or something;
must be due to the fluctuations of the posting processes) here, as
well, I think I'll refute it, just for the record. The refutation is:
just because human beings can come to believe that non-random patterns
are found in obviously randomly arranged data does not mean that there
is no such thing as non-randomly arranged data. It's like using the
Bible Code nonsense to prove something or other of relevance to this
issue, which I don't feel like thinking about at present.
snip
--vince
This is ridiculous. Since I have seen designers build houses, when I see a
house I can infer it was designed by people. I do not need to conjure up a
design fairy for which there is no independent evidence pointing to its
existence. Design theory implicitly puts the cart before the horse. Design
theory assumes to be true what in fact it claims to be testing for.
>Formalizing this ad hoc procedure seems doable
>to me. Dembski's Explanatory Filter attempts to do just that as does
>the procedure SETI is using to determine if a signal represents
>intelligent life.
SETI has an arduous task ahead of them. The problem with design theory is that
design theorists won't offer defensible criteria for goodness of design (CSI
ain't one of them). As far as I am concerned that involves making guesses about
the nature of the designer. SETI has a similar problem. SETI implicitly makes
several assumptions:
Extraterrestrial intelligence exists
These intelligences are advanced enough to be found
and the clincher:
These intelligences understand the problems with inferring design the way that
we do.
The one chance design theory has is what I call the "Bill Sux" hypothesis.
Remember those INTEL engineers who burned "BIL SUX" into the intel chips? If we
could find something like PI to 100,000 digits encoded in our genes or
something, design theory would have a chance. That's it. That's the only chance.
CSI is just a ruse.
>It needs to be stressed that it is not necessary for
>this procedure to always attribute design to events that were in fact
>designed as long as it never attributes design to events that in fact
>were not designed.
Well that is a problem isn't it.
>
>> Such information is
>> > generated by nature all the time. But think of how surprised you
>would
>> > be to discover that each and every snowflake was exactly the same
>and
>> > just as complex.
>>
>> But most organisms are different in detail. Except for identical
>twins,
>> no two humans have the same DNA sequence.
>>
>> And every uranium atom is exactly the same - and extremely complex.
>> Think of the quantum-mechanical 93-body problem ....
>
>Sorry I'm unfamiliar with this problem but if you are claiming that
>uranium atoms contain complex specified information you've got an even
>larger problem. If we assume all uranium has remained the same since the
>beginning of time (i.e. the properties of matter have remained constant)
>where did the information come from?
Look, you said that Dembski claims that information is conserved. That's his
freakin' problem. I maintain that information is not a conserved property. I
also know that embedded in the laws of chemistry and physics are perhaps an
infinite number of recipes for creating what you call "information". You need
to answer this question. The answer is obvious, and that's why I asked you how
this crap ties into Big Bang. Instead of considering the question you said
something silly, like "it meshes really well"...
<SNIPPAGE>
>"Natural causes are therefore incapable of generating CSI. This broad
>conclusion I call the Law of Conservation of Information, or LCI for
>short. LCI has profound implications for science. Among its corollaries
>are the following: (1) The CSI in a closed system of natural causes
>remains constant or decreases. (2) CSI cannot be generated
>spontaneously, originate endogenously, or organize itself (as these
>terms are used in origins-of-life research). (3) The CSI in a closed
>system of natural causes either has been in the system eternally or was
>at some point added exogenously (implying that the system though now
>closed was not always closed). (4) In particular, any closed system of
>natural causes that is also of finite duration received whatever CSI it
>contains before it became a closed system."
Why does the above resemble the creationist laws of thermodynamics?
>
>This is what all the fuss is about. If Dembski is correct, Complex
>Specified Information is entropic.
He's not.
Stuart
Dr. Stuart A. Weinstein
Ewa Beach Institute of Tectonics
"To err is human, but to really foul things up
requires a creationist"
Saying "I don't know" is not an appeal to ignorance; it is an honest
admission that ignorance exists. Dembski does not make that admission.
Instead, he ignores all those unknown natural laws in the first step of
his filter and saves them instead for his conclusion--that anything
created by unknown natural causes must be designed. His argument is an
argument from ignorance in its purest, most pristine form.
>whereas Dembski's design-detector is essentially an appeal to knowledge.
You seem to think that ignoring ignorance is a double negative that
cancels itself out. Not so. The argument from ignorance relies on
sweeping the ignorance under the rug where people can disregard it, even
though it is a fatal flaw in the argument. Because Dembski ignores the
all-important existence of ignorance, Dembski's argument is worthless.
>
>
>Mark Isaak wrote:
>
>> In article <7t4ai2$905$1...@nnrp1.deja.com>, <jpa...@my-deja.com> wrote:
>> >In article <37f4dcb5$0$2...@nntp1.ba.best.com>,
>> > at...@best.comNOSPAM (Mark Isaak) wrote:
>> >> [The difference between "isn't explained" and "can't be explained" is
>> >> what Dembski is trying to prove.]
>> >>
>> >> Then why did he leave unknown natural causes out of his filter?
>> >
>> >Because there is *nothing left*. He is arguing from set exclusivity.
>> >Events can be generated in three ways, randomly, by natural law and by
>> >design.
>>
>> The first step in his filter says, "Was it caused by natural law?"
>> That question is impossible to answer unless you can recognize the effects
>> of every last natural law. Do you believe all natural laws are known? I
>> sure don't. And if you don't know what the natural laws say, how are you
>> going to recognize their effects?
>>
>
>Appeal to an as yet unknown natural law is an appeal to ignorance, whereas
>Dembski's design-detector is essentially an appeal to knowledge. His is not a
>god-of-the-gaps argument. In fact, he doesn't even claim the design-detector
>gives God, for it doesn't distinguish influence of vast intelligence from
>Infinite Intelligence.
I would not place any real hope in Dembski's filter. He is ignoring
one of his own fundamental principles (that the whole can be greater
than the sum of its parts) in its design. Even if you eliminate
random chance (as in white noise or statistically distributed noise)
and even if you eliminate naturally regulated processes, you still
cannot rule out some combination of the two.
>For example, if we observed "John 3:16" cratered on the back side of the moon,
>Dembski's explanatory filter (law, chance, design) would detect design
>(specified complexity). The random pattern of craters actually there today,
>and the "John 3:16", are both complex patterns (low p-value). But the "John
>3:16" conforms to an independently specifiable pattern. Hence, we infer mere
>design; but the filter cannot tell if it is the work of aliens or God.
Yes, but we have the specification via an entirely separate channel
for John 3:16.
>Nor could it tell us if the cratered moon were written directly by the finger
>of God, or if God so fine tuned the initial conditions of the universe that
>"John 3:16" appeared by natural processes with no direct Intervention. It
>reliably detects mere design, not the God of Abraham, Isaac, Jacob.
I don't see that it does. It merely detects complexity. The Designer
does not wish to be detectable in this manner.
Dave Oldridge
In this case, it is the hypothesis which must be specified in advance.
Hypothesis testing doesn't apply if you try to specify the hypothesis
after performing the test. Specifying the hypothesis is essentially the
way by which the pattern is specified. For example, the hypothesis of
flipping a fair coin specifies certain patterns and rules out many others
(statistically speaking; the inclusions and exclusions are likelihoods,
not certainties). The important thing is to specify your test in
advance. Neocreationist design theorists don't do that. Never have,
never will.
More to the point, "John 3:16" inscribed on the back of the moon would
indicate a human designer, because it looks like the sort of thing which
a human designer would make. More specifically, I would suspect a 20th
century human designer, because that particular design was popular with
humans then.
The point, which design theorists are intent upon missing, is that design
isn't detected by complexity, specification, or somesuch. It is detected
by comparison with known designs.
>It would be very interesting, and one wonders why
>God hasn't done something like this, if he in fact exists.
Yes. One implication of design theory is that God did not create the moon
in its current form, since it doesn't have obviously designed graffiti all
over it.
> In article <37f6d88a...@news.erols.com>,
> Ivar Ylvisaker <ylvi...@erols.com> wrote:
> >On 2 Oct 1999 21:03:14 -0400, at...@best.comNOSPAM (Mark Isaak) wrote:
> >>In short, the pattern doesn't matter a bit unless it's specified in
> >>advance.
> >
> >Well no. A branch of statistical inference is hypothesis testing.
> >The hypothesis in this case is flipping a fair coin. When one
> >repeatedly flips a fair coin, one expects that heads comes up about
> >half the time and tails about half the time. [...]
>
> In this case, it is the hypothesis which must be specified in advance.
> Hypothesis testing doesn't apply if you try to specify the hypothesis
> after performing the test. Specifying the hypothesis is essentially the
> way by which the pattern is specified. For example, the hypothesis of
> flipping a fair coin specifies certain patterns and rules out many others
> (statistically speaking; the inclusions and exclusions are likelihoods,
> not certainties). The important thing is to specify your test in
> advance.
Not true. But it is important to be clear about what events we are talking
about.
The hypothesis is that a fair coin is flipped 30 times. Independent flips
are assumed. In this case:
P( 1 | flip) = P( 0 | flip) = 0.5 assuming that head is 1 and tail is 0.
Two possible outcomes, A and B, are suggested:
A: 011100100110100110101111000110
B: 010101010101010101010101010101
Many events can be defined:
Event A occurred.
Event B occurred.
The number of heads in the outcome is 0.
The number of heads in the outcome is 1.
...
The number of heads in the outcome is 15.
...
The number of heads in the outcome is 30.
The number of heads following a heads is 0.
...
The number of heads following a heads is 29.
And so on.
The probability of each of these events is always well defined, given the
hypothesis. It doesn't matter if the hypothesis was stated before, during,
or after the coin flipping. We can always calculate the numbers. For
example:
The number of possible outcomes is 2^30 or 1,073,741,824 or approximately one
billion. ^ is exponent. Hence:
P( A ) = P( B ) = 2^-30, or approximately 10^-9.
This is also the probability that the number of heads is 0 or 30 because
there is only one outcome that is all heads and one that is all tails.
On the other hand, there are 155,117,520 ways to get 15 heads flipping coins
30 times -- look in a probability or statistics book for an explanation -- so
the probability of 15 heads is about 0.144. 14 and 16 heads each have a
probability of about 0.135.
There are only two outcomes that alternate, one starting with heads and one
with tails. Hence, the probability of an alternating outcome is one in 500
million.
You're right that every outcome is equally likely and equally rare. But
outcomes that have about the same number of heads as tails are not rare.
Outcome A is OK.
On the other hand, alternating heads and tails is rare and, hence, the
hypothesis is probably false for outcome B.
I always get confused by combinations and permutations so I do not guarantee
these results.
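The arithmetic is easy to check by machine, though. A minimal Python
sketch (math.comb requires Python 3.8 or later; the comments give the
expected values):

    # Check the coin-flip numbers above.
    from math import comb

    n = 30
    total = 2 ** n                           # 1,073,741,824 outcomes
    print(1 / total)                         # P(A) = P(B), about 9.3E-10
    print(comb(n, 15), comb(n, 15) / total)  # 155,117,520; about 0.144
    print(comb(n, 14) / total)               # about 0.135 (same for 16)
    print(2 / total)                         # alternating: ~1 in 500 million

These come out as stated above.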
Ivar
I'm not a fan of Dembski but I'm having problems following your
concerns about Dembski's use of the term "complex specified
information" or CSI.
As far as I can make out, CSI is an attribute of events that are
supposedly too complex to be generated by natural processes. A
natural process here means any process that does not depend on an
intelligent agent.
If an event happened and yet is characterized by CSI, then an
intelligent agent must have generated it, according to Dembski.
Dembski recognizes that there are algorithms such as evolutionary
algorithms that can generate apparently complex events. (Presumably,
these algorithms are models of what can happen in the real world.)
However, since these algorithms can generate solutions in a relatively
short period of time, they do not generate actual CSI and no
intelligent agent is required. Apparent CSI is not CSI.
Dembski essentially said this when responding to some of your posts on
the Calvin College HyperMail site (
http://www.calvin.edu/archive/evolution/199909/0383.html ):
"Design inferences are among other things eliminative arguments, and
what they must eliminate is a chance hypothesis (or more generally a
family of chance hypotheses). If the event under that chance
hypothesis has small probability, then it is per definitionem complex.
If not, then it isn't."
"Small probability" is Dembski's term for an extremely small
probability, e.g., something like ten to the minus 150th power.
I see no inconsistency in Dembski's terminology. I have doubts that
it is useful but it does not seem illogical.
Perhaps, you are trying to say something like natural processes
(including cleverly programmed computers) can sometimes imitate human
activities.
Ivar