
ID and the Difference Between Spheres and Cubes


Seanpit

May 8, 2006, 3:39:57 PM

Mujin wrote:

> > > But granite stones can be spherical and perfectly symmetrical.
> >
> > Not when it comes to irregularities where surface points at different
> > distances from the center of the stone are at the same distance as the
> > opposing point on the exact opposite side of the stone. A perfectly
> > symmetrical sphere does not have such irregularities since all surface
> > points are at the same distance from the center of the stone. You see,
> > I'm talking about symmetry with regard to irregularities here. That is
> > why I used a cube instead of a sphere for my illustration.
> >
> > < snip >
>
> I'm curious: are you proposing that stone polished deliberately by
> humans is perfectly symmetrical in respect to surface irregularities?
> Do you have an example of such an object?

I'm proposing that a highly symmetrical polished granite cube has a very
low degree of divergence (percentage wise) between the measurement of
one surface point from the center of the stone as compared to the exact
opposite surface point as measured from the center of the cube.

Such a cube has a wide range of different surface point distances. A
sphere does not. All of the points of a highly symmetrical sphere are
just about the same distance from the center of the stone sphere. A
cube is different in that some points are much farther from the center
than are other points of the surface of the cube. This creates the
irregularities (the corners) of the cube whereas the sphere has no such
irregularities. It is this high degree of reflective symmetry of these
irregularities that is beyond the abilities of non-deliberate forces
acting on granite or marble or flint - to a high degree of predictive
value.

In fact, the more irregularities there are, the better. For example, if
you have a very unusual, even a random-looking pattern etched into one
of the faces of the cube, it would be even more unlikely than the form
of the cube itself that this etched pattern would be reproduced, with
high fidelity, on the opposing face of the granite (or marble or flint)
cube by any non-deliberate force.

Sean Pitman
www.DetectingDesign.com
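The divergence measure Pitman describes (comparing each surface point's distance from the center with that of the point directly opposite) can be made concrete. The sketch below is a minimal illustration, not anything from the thread itself: the function name and the nearest-neighbour pairing of antipodal points are choices invented here.

```python
import numpy as np

def reflective_divergence(points, center):
    """Mean fractional difference between each surface point's distance
    from the center and the distance of the sampled point nearest to its
    antipode (the matching point on the opposite side of the stone)."""
    pts = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    radii = np.linalg.norm(pts, axis=1)
    divergences = []
    for p, r in zip(pts, radii):
        # Nearest sampled point to the exact opposite side of the stone.
        j = np.argmin(np.linalg.norm(pts - (-p), axis=1))
        divergences.append(abs(r - radii[j]) / max(r, radii[j]))
    return float(np.mean(divergences))

# 100 evenly spaced points on a circle: every antipode is in the sample
# and all radii are equal, so the divergence is essentially zero.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
sphere = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]
print(reflective_divergence(sphere, (0, 0, 0)))  # ≈ 0.0
```

By this measure a polished cube scores just as low as a sphere, which is the point of the cube example: low divergence between opposing irregularities, not roundness, is what is being offered as the design marker.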

Bobby D. Bryant

May 8, 2006, 4:19:44 PM

And when you look for divine intervention by applying that strange
analysis to a duck, you get...

--
Bobby Bryant
Austin, Texas

Deadrat

May 8, 2006, 4:34:17 PM

"Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
news:1147117197.4...@u72g2000cwu.googlegroups.com...

>
> Mujin wrote:
>
>> > > But granite stones can be spherical and perfectly symmetrical.
>> >
>> > Not when it comes to irregularities where surface points at different
>> > distances from the center of the stone are at the same distance as the
>> > opposing point on the exact opposite side of the stone. A perfectly
>> > symmetrical sphere does not have such irregularities since all surface
>> > points are at the same distance from the center of the stone. You see,
>> > I'm talking about symmetry with regard to irregularities here. That is
>> > why I used a cube instead of a sphere for my illustration.
>> >
>> > < snip >
>>
>> I'm curious: are you proposing that stone polished deliberately by
>> humans is perfectly symmetrical in respect to surface irregularities?
>> Do you have an example of such an object?
>
> I'm proposing that a highly symmetrical polished granite cube has a very
> low degree of divergence (percentage wise) between the measurement of
> one surface point from the center of the stone as compared to the exact
> opposite surface point as measured from the center of the cube.

Propose away. But this is simply some ad hoc rule that you selected
to make your point.

> Such a cube has a wide range of different surface point distances. A
> sphere does not.

So what? The surface-point distances from the center of the cube are
governed by highly regular trigonometric functions. What makes you
think that fact is less important than the range of distance values?

> All of the points of a highly symmetrical sphere are
> just about the same distance from the center of the stone sphere. A
> cube is different in that some points are much farther from the center
> than are other points of the surface of the cube. This creates the
> irregularities (the corners) of the cube whereas the sphere has no such
> irregularities.

Why is this an "irregularity"?

> It is this high degree of reflective symmetry of these
> irregularities that is beyond the abilities of non-deliberate forces
> acting on granite or marble or flint - to a high degree of predictive
> value.

Of course, this supposed "high degree of reflective symmetry of these
irregularities" is not beyond the abilities of "non-deliberate forces acting
on" iron pyrite.

Until you come up with some rigorous definitions and general rules,
you're just spewing words.

All of my objections must seem pointless to you, since (I'm guessing)
you think that all this is so obvious. But until you move beyond
definition and demonstration by example, you've got nothing scientific
to go on.

Deadrat

Mitch...@aol.com

May 8, 2006, 5:19:46 PM
So you are suggesting a polished sphere is obviously designed but a
polished cube is not?

Mark James

May 8, 2006, 5:50:44 PM

SPit wrote:
> In fact, the more irregularities there are, the better. For example, if
> you have a very unusual, even a random-looking pattern etched into one
> of the faces of the cube, it would be even more unlikely than the form
> of the cube itself that this etched pattern would be reproduced, with
> high fidelity, on the opposing face of the granite (or marble or flint)
> cube by any non-deliberate force.

Are you suggesting god invented dice? That would explain quite a bit
regarding my recent trip to the Indian reservation.

Bobby D. Bryant

May 8, 2006, 5:56:44 PM
On Mon, 08 May 2006, "Deadrat" <ephe...@sbcglobal.net> wrote:

> Until you come up with some rigorous definitions and general rules,
> you're just spewing words.
>
> All of my objections must seem pointless to you, since (I'm
> guessing) you think that all this is so obvious. But until you move
> beyond definition and demonstration by example, you've got nothing
> scientific to go on.

I think his deep-down problem is that he can't decide whether he
wants complexity or simplicity to be the indicator of designedness.

Desertphile

May 8, 2006, 7:02:51 PM
Bobby D. Bryant wrote:

> > Mujin wrote:

> >> > < snip >

Because she's made out of wood?

Seanpit

May 8, 2006, 7:38:42 PM

Mark James wrote:
> SPit wrote:
> > In fact, the more irregularities there are, the better. For example, if
> > you have a very unusual, even a random-looking pattern etched into one
> > of the faces of the cube, it would be even more unlikely than the form
> > of the cube itself that this etched pattern would be reproduced, with
> > high fidelity, on the opposing face of the granite (or marble or flint)
> > cube by any non-deliberate force.
>
> Are you suggesting god invented dice?

Some intelligent agent sure did! ; )

> That would explain quite a bit
> regarding my recent trip to the Indian reservation.

Perhaps those who build casinos are quite a bit smarter than those who
frequent them?

Sean Pitman
www.DetectingDesign.com

Seanpit

May 8, 2006, 7:36:27 PM

Mitch...@aol.com wrote:
> So you are suggesting a polished sphere is obviously designed but a
> polished cube is not?

Just the opposite . . .

Sean Pitman
www.DetectingDesign.com

Seanpit

May 8, 2006, 7:48:02 PM

Deadrat wrote:
> "Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
> news:1147117197.4...@u72g2000cwu.googlegroups.com...
> >
> > Mujin wrote:
> >
> >> > > But granite stones can be spherical and perfectly symmetrical.
> >> >
> >> > Not when it comes to irregularities where surface points at different
> >> > distances from the center of the stone are at the same distance as the
> >> > opposing point on the exact opposite side of the stone. A perfectly
> >> > symmetrical sphere does not have such irregularities since all surface
> >> > points are at the same distance from the center of the stone. You see,
> >> > I'm talking about symmetry with regard to irregularities here. That is
> >> > why I used a cube instead of a sphere for my illustration.
> >> >
> >> > < snip >
> >>
> >> I'm curious: are you proposing that stone polished deliberately by
> >> humans is perfectly symmetrical in respect to surface irregularities?
> >> Do you have an example of such an object?
> >
> > I'm proposing that a highly symmetrical polished granite cube has a very
> > low degree of divergence (percentage wise) between the measurement of
> > one surface point from the center of the stone as compared to the exact
> > opposite surface point as measured from the center of the cube.
>
> Propose away. But this is simply some ad hoc rule that you selected
> to make your point.

Prove me wrong then. Should be easy to falsify such an assertion if it
is in fact false.

> > Such a cube has a wide range of different surface point distances. A
> > sphere does not.
>
> So what? The surface-point distances from the center of the cube are
> governed by highly regular trigonometric functions. What makes you
> think that fact is less important than the range of distance values?

The irregularities in distances are what are interesting in that
non-deliberate processes do not produce irregularities with significant
reflective symmetry with the irregularities of the opposing side of the
stone.

> > All of the points of a highly symmetrical sphere are
> > just about the same distance from the center of the stone sphere. A
> > cube is different in that some points are much farther from the center
> > than are other points of the surface of the cube. This creates the
> > irregularities (the corners) of the cube whereas the sphere has no such
> > irregularities.
>
> Why is this an "irregularity"?

Call this feature whatever you want. The *point* is the same - no pun
intended ; )

> > It is this high degree of reflective symmetry of these
> > irregularities that is beyond the abilities of non-deliberate forces
> > acting on granite or marble or flint - to a high degree of predictive
> > value.
>
> Of course, this supposed "high degree of reflective symmetry of these
> irregularities" is not beyond the abilities of "non-deliberate forces acting
> on" iron pyrite.

That's right. Different materials behave differently in the presence of
random non-deliberate forces.

> Until you come up with some rigorous definitions and general rules,
> you're just spewing words.

I've given you very useful definitions and general rules. It is just
that a particular material must be investigated in fair detail before
the limits of what non-deliberate processes can achieve can be
adequately drawn.

> All of my objections must seem pointless to you, since (I'm guessing)
> you think that all this is so obvious. But until you move beyond
> definition and demonstration by example, you've got nothing scientific
> to go on.

I've given you several examples from highly symmetrical granite cubes
to identical etched patterns in opposing faces to French gardens. I've
explained why such examples go far beyond the very predictable limits
of non-deliberate processes as they interact with these materials.
These concepts are indeed obvious. I'm quite amazed, actually, that
most in this forum are so resistant to them.

> Deadrat

Sean Pitman
www.DetectingDesign.com

snex

May 8, 2006, 7:53:26 PM

Seanpit wrote:
> Deadrat wrote:
> > "Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
> > news:1147117197.4...@u72g2000cwu.googlegroups.com...
<snip>

>
> > > It is this high degree of reflective symmetry of these
> > > irregularities that is beyond the abilities of non-deliberate forces
> > > acting on granite or marble or flint - to a high degree of predictive
> > > value.
> >
> > Of course, this supposed "high degree of reflective symmetry of these
> > irregularities" is not beyond the abilities of "non-deliberate forces acting
> > on" iron pyrite.
>
> That's right. Different materials behave differently in the presence of
> random non-deliberate forces.
>

so how do complex organic molecules behave in the presence of random
non-deliberate forces? please supply an exhaustive list of behaviors.

<snip>

Deadrat

May 8, 2006, 8:08:17 PM

"Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
news:1147132082.3...@e56g2000cwe.googlegroups.com...

>
> Deadrat wrote:
>> "Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
>> news:1147117197.4...@u72g2000cwu.googlegroups.com...
>> >
>> > Mujin wrote:
>> >
>> >> > > But granite stones can be spherical and perfectly symmetrical.
>> >> >
>> >> > Not when it comes to irregularities where surface points at different
>> >> > distances from the center of the stone are at the same distance as the
>> >> > opposing point on the exact opposite side of the stone. A perfectly
>> >> > symmetrical sphere does not have such irregularities since all surface
>> >> > points are at the same distance from the center of the stone. You see,
>> >> > I'm talking about symmetry with regard to irregularities here. That is
>> >> > why I used a cube instead of a sphere for my illustration.
>> >> >
>> >> > < snip >
>> >>
>> >> I'm curious: are you proposing that stone polished deliberately by
>> >> humans is perfectly symmetrical in respect to surface irregularities?
>> >> Do you have an example of such an object?
>> >
>> > I'm proposing that a highly symmetrical polished granite cube has a very
>> > low degree of divergence (percentage wise) between the measurement of
>> > one surface point from the center of the stone as compared to the exact
>> > opposite surface point as measured from the center of the cube.
>>
>> Propose away. But this is simply some ad hoc rule that you selected
>> to make your point.

> Prove me wrong then. Should be easy to falsify such an assertion if it
> is in fact false.
>

Sorry, but that's not how the game is played. You have an assertion you
wish to support. It is up to you to define your terms clearly and provide
a way for others to repeat your observations. That is, if you want to do
science.

As an aside, it is extremely difficult to falsify your assertions because it's
almost impossible to pin down your ideas.

>> > Such a cube has a wide range of different surface point distances. A
>> > sphere does not.
>>
>> So what? The surface-point distances from the center of the cube are
>> governed by highly regular trigonometric functions. What makes you
>> think that fact is less important than the range of distance values?
>
> The irregularities in distances are what are interesting in that
> non-deliberate processes do not produce irregularities with significant
> reflective symmetry with the irregularities of the opposing side of the
> stone.

Not only do I not see what's "interesting" about this assertion, I don't
see any definition (except, again, by example) of what you mean by
"irregularities." Non-deliberate processes produce both spheroid
stones (e.g., by tumbling in rivers) and cubic forms (e.g., crystals of
iron pyrite).

>
>> > All of the points of a highly symmetrical sphere are
>> > just about the same distance from the center of the stone sphere. A
>> > cube is different in that some points are much farther from the center
>> > than are other points of the surface of the cube. This creates the
>> > irregularities (the corners) of the cube whereas the sphere has no such
>> > irregularities.
>>
>> Why is this an "irregularity"?
>
> Call this feature whatever you want. The *point* is the same - no pun
> intended ; )

Your point is evident ... to you. But it's not clear or clearly usable to
others. It's not the name that's lacking; it's the mathematical definition
of what you mean.

>> > It is this high degree of reflective symmetry of these
>> > irregularities that is beyond the abilities of non-deliberate forces
>> > acting on granite or marble or flint - to a high degree of predictive
>> > value.
>>
>> Of course, this supposed "high degree of reflective symmetry of these
>> irregularities" is not beyond the abilities of "non-deliberate forces acting
>> on" iron pyrite.
>
> That's right. Different materials behave differently in the presence of
> random non-deliberate forces.

Then somehow you're going to have to systematize this so that we
can recognize the work of deliberate forces. It's hard to do. Which
is why I suspect you haven't done it. What you've got now is "different
things react differently to different situations."

>> Until you come up with some rigorous definitions and general rules,
>> you're just spewing words.
>
> I've given you very useful definitions and general rules. It is just
> that a particular material must be investigated in fair detail before
> the limits of what non-deliberate processes can achieve can be
> adequately drawn.

This simply isn't true. You've given me absolutely *no* definitions
or general rules. You've got two examples so far, a polished granite
cube and a garden. Give me a definition or rule that I can apply. It
shouldn't mention granite or flowers.

>> All of my objections must seem pointless to you, since (I'm guessing)
>> you think that all this is so obvious. But until you move beyond
>> definition and demonstration by example, you've got nothing scientific
>> to go on.
>
> I've given you several examples from highly symmetrical granite cubes
> to identical etched patterns in opposing faces to French gardens. I've
> explained why such examples go far beyond the very predictable limits
> of non-deliberate processes as they interact with these materials.

You've *claimed* they've gone beyond limits. This isn't an explanation.
For your framework to produce a prediction, you need a function that
takes some inputs (type of material? rules for symmetry?) and gives a
numerical output, perhaps a number between 0 and 1, or perhaps either
0 or 1.
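In programming terms, Deadrat is asking for something with a definite signature. A toy sketch of what such a predicate would even look like follows; the name, the input, and the scoring rule are all invented here for illustration, since nothing in the thread supplies them.

```python
from typing import Sequence

def design_score(radii: Sequence[float]) -> float:
    """Map measurements of an object (here, surface-point distances from
    its center) to a number in [0, 1], as requested above. The rule is a
    placeholder: it rewards agreement between opposing radii."""
    rs = list(radii)
    n = len(rs)
    diffs = [
        abs(rs[i] - rs[(i + n // 2) % n]) / max(rs[i], rs[(i + n // 2) % n])
        for i in range(n)  # pair each radius with the "opposite" one
    ]
    return max(0.0, 1.0 - sum(diffs) / n)

print(design_score([1.0] * 8))  # perfectly mirror-symmetric: 1.0
print(design_score([1.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0]))  # lopsided: < 1.0
```

Until the inputs and the threshold separating "designed" from "undesigned" are specified, a function like this is arbitrary - which is precisely Deadrat's objection.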

> These concepts are indeed obvious. I'm quite amazed, actually, that
> most in this forum are so resistant to them.

This is because science has bumped into the "obvious" before: absolute
space and time, simultaneous position and momentum, objects seeking their
natural places of rest. None of these things turned out to be valid, and people
were astounded when they discovered this. That's why "most in this forum"
want a defined framework. It's not that we don't see the "obvious." We're
just suspicious of it.

Deadrat

> Sean Pitman
> www.DetectingDesign.com
>

John Wilkins

May 8, 2006, 8:50:54 PM
... very small stones?


I got better.

--
John S. Wilkins, Postdoctoral Research Fellow, Biohumanities Project
University of Queensland - Blog: evolvethought.blogspot.com
"He used... sarcasm. He knew all the tricks, dramatic irony, metaphor, bathos,
puns, parody, litotes and... satire. He was vicious."

wf3h

May 8, 2006, 8:56:02 PM

Seanpit wrote:
>>
>. It is this high degree of reflective symmetry of these
> irregularities that is beyond the abilities of non-deliberate forces
> acting on granite or marble or flint - to a high degree of predictive
> value.

such forces act within the laws of nature

creationists say creation does not

there's the contradiction.

Vend

May 8, 2006, 9:10:32 PM
No matter how improbable an outcome of a random process is, if its
probability is greater than zero, it will happen if the process is
repeated a sufficient number of times.
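Vend's claim is the standard waiting-time result: an event with per-trial probability p > 0 occurs at least once in n independent trials with probability 1 - (1 - p)^n, which tends to 1 as n grows. A quick illustration (the particular numbers are arbitrary):

```python
import random

p = 0.001   # per-trial probability of the "improbable" outcome
n = 10_000  # number of independent repetitions

# Analytic probability of seeing the outcome at least once in n trials.
p_at_least_once = 1 - (1 - p) ** n
print(p_at_least_once)  # ≈ 0.99995: near-certain despite p being tiny

# Monte Carlo check, seeded for reproducibility; the expected hit count
# is p * n = 10.
random.seed(1)
hits = sum(random.random() < p for _ in range(n))
print(hits)
```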

dysfunction

May 8, 2006, 9:19:32 PM

Yes of course, but IDists/Creationists will argue that even millions of
years is not long enough for random chance to produce life. They are
right in this regard, but totally wrong in supposing that evolution
occurs by sheer random chance. It involves chance, with cumulative
selection working on it. Random mutation alone amounts to single-step
selection and is not sufficient by itself to produce complexity.
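The contrast dysfunction draws between single-step and cumulative selection is the one Dawkins illustrated with his "weasel" program. Below is a sketch of that toy model; the target string, population size, and mutation rate are arbitrary choices, and the model illustrates cumulative selection only, not biology.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    # Number of positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def cumulative_selection(seed: int = 0, pop: int = 100,
                         mut_rate: float = 0.05) -> int:
    """Each generation, keep the best of `pop` mutated copies of the
    current string. Partial matches accumulate instead of being thrown
    away, which is what 'cumulative' means here."""
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                    for c in current)
            for _ in range(pop)
        ]
        current = max(children, key=score)
        generations += 1
    return generations

# Cumulative selection reaches the 28-character target in on the order of
# a hundred generations. Single-step selection (pure chance) would need
# on the order of 27**28, roughly 1.2e40, random strings.
print(cumulative_selection())
```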

Mark Iredell

May 8, 2006, 10:02:37 PM
<snipping body text, responding to subject line>
Why are salt crystals cubic?
Is it natural?
http://www.chem.cornell.edu/sl137/outreach.dir/salt1.html
Or designed?
http://www.choice.com.au/viewArticle.aspx?id=105123

Inez

May 9, 2006, 12:05:49 AM

I would no doubt assume a polished stone cube was man made, were I to
run across one. But there are plenty of crystals which are cubic and
smooth. It is not so much that "non-deliberate forces" can't make
cubes, it's that we know about what sorts of granite you see in the
natural world, and know that people are a type of creature that
occasionally polishes up a bit of stone.

Seanpit

May 9, 2006, 12:37:09 AM

You need to read some more of this discussion in other threads on this
topic. In short, why do you think I'm talking about granite instead of
something that can indeed form natural crystals? Hint: You have to
investigate the material in question to determine the potential and
limits of non-deliberate forces as they interact with that particular
material.

Sean Pitman
www.DetectingDesign.com

Seanpit

May 9, 2006, 12:39:30 AM

Non-deliberate forces cannot make highly symmetrical polished cubes
with identical etching on opposing faces when it comes to the material
of granite. We aren't talking about any other material here - just
granite.

Sean Pitman
www.DetectingDesign.com

Deadrat

May 9, 2006, 1:31:06 AM

"Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
news:1147149429....@i39g2000cwa.googlegroups.com...

Well, then, take your own hint. *You* have to investigate what it
is about granite that means a cube of it must have been made by
deliberate forces and could not have been made by non-deliberate
forces. *You* have to come up with a means to differentiate granite
from iron pyrite.

This will be difficult because the only reason that we know that
granite cubes are made by deliberate forces is that we know what
those forces are, since we know who makes granite cubes.

Deadrat

>
> Sean Pitman
> www.DetectingDesign.com
>

Deadrat

May 9, 2006, 1:34:04 AM

"Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
news:1147149570....@y43g2000cwc.googlegroups.com...
>
> Inez wrote:
<snip>

>> I would no doubt assume a polished stone cube was man made, were I to
>> run across one. But there are plenty of crystals which are cubic and
>> smooth. It is not so much that "non-deliberate forces" can't make
>> cubes, it's that we know about what sorts of granite you see in the
>> natural world, and know that people are a type of creature that
>> occasionally polishes up a bit of stone.
>
> Non-deliberate forces cannot make highly symmetrical polished cubes
> with identical etching on opposing faces when it comes to the material
> of granite. We aren't talking about any other material here - just
> granite.

This is a tough sell. Non-deliberate forces that we have encountered
don't make h.s.p.c.w.i.e.o.o.f. If you're going to demonstrate that no
non-deliberate forces can, then you're going to have to go beyond granite.

Deadrat

>
> Sean Pitman
> www.DetectingDesign.com
>

Timberwoof

May 9, 2006, 1:51:46 AM
In article <1147117197.4...@u72g2000cwu.googlegroups.com>,
"Seanpit" <seanpi...@naturalselection.0catch.com> wrote:

So a little cube of salt, like the oodles and oodles of little cubes of
salt in my salt shaker, is intelligently designed?

--
Timberwoof <me at timberwoof dot com> http://www.timberwoof.com

neverbetter

May 9, 2006, 3:53:56 AM

Glad that something is.

neverbetter

May 9, 2006, 4:03:21 AM

It's a good strategy.
How do you determine the potential and limit of non-deliberate forces
as they interact with granite?
You look at the kinds of granite that occur in nature and whose
creation process has not been tampered with deliberately by known
intelligent agents, as far as you know, and conclude that those things
occur naturally as a result of non-deliberate forces.

How do you determine the potential and limits of non-deliberate forces
as they interact with life?
Look at the kinds of life that occur in nature and whose
creation process has not been tampered with deliberately by known
intelligent agents, as far as you know, and conclude that those things
occur naturally as a result of non-deliberate forces? Ergo life is
undesigned.

No? Why not?

Carsten Troelsgaard

May 9, 2006, 5:02:11 AM

"Seanpit" <seanpi...@naturalselection.0catch.com> skrev i en meddelelse
news:1147132082.3...@e56g2000cwe.googlegroups.com...

It's your burden to support your position.

This "it's just that a particular material must be investigated in fair
detail before ..." /is/ the science that you're supposed to build a
foundation for - so far you've stacked ID on top of science, which I figure
could be proper for adding moral considerations to whatever one wants to do
with science (the status quo for science and religion).
You are redefining science, aren't you? If not, you could guide us to the
boundary between the two (implying that ID is outside science).

>> All of my objections must seem pointless to you, since (I'm guessing)
>> you think that all this is so obvious. But until you move beyond
>> definition and demonstration by example, you've got nothing scientific
>> to go on.
>
> I've given you several examples from highly symmetrical granite cubes
> to identical etched patterns in opposing faces to French gardens. I've
> explained why such examples go far beyond the very predictable limits
> of non-deliberate processes as they interact with these materials.
> These concepts are indeed obvious. I'm quite amazed, actually, that
> most in this forum are so resistant to them.
>
>> Deadrat
>
> Sean Pitman
> www.DetectingDesign.com

We ask what ID is, and we expect you to proceed along the line "ID is ...".
Instead, your answer seems to be "Can you guarantee that _object of
investigation_ is not ID'ed?"

In all honesty, how much light does that shed on what ID is?

Your guarantee that you can pose an even more difficult question (what is
this not?) in reply to the question "what is this?" is the only predictive
value of your "science". At least we look forward to you providing a solid,
comprehensible, usable generalisation for the new question you raise ... you
seem to let this part (for each object at hand) be a burden that rests on
the original questioner, and thus you effectively answer nothing.


Von R. Smith

May 9, 2006, 6:23:54 AM


So in other words:

Your hypothesis is that non-deliberate processes can't form cubes,
except when they can.

The premise of your inference is that you can generalize from specific
instances of non-deliberate processes to universal principles about
them, except when you can't.

To detect deliberate vs. non-deliberate processes, you need extensive,
even exhaustive knowledge of the materials being worked on and the
context in which it occurs, except when you don't.

TomS

May 9, 2006, 6:51:20 AM
"On Mon, 8 May 2006 21:56:44 +0000 (UTC), in article
<e3oeqr$e24$3...@geraldo.cc.utexas.edu>, Bobby D. Bryant stated..."

What decision?

Isn't that the essence of the "design" argument?


--
---Tom S. <http://talkreason.org/articles/chickegg.cfm>
"It is not too much to say that every indication of Design in the Kosmos is so
much evidence against the Omnipotence of the Designer. ... The evidences ... of
Natural Theology distinctly imply that the author of the Kosmos worked under
limitations..." John Stuart Mill, "Theism", Part II

ErikW

May 9, 2006, 7:04:45 AM

From the sidelines: I'd say that Sean doesn't know how to proceed and
he is probing the group for an acceptable way forward. I fail to see
how anything good will come out of ad hoc rationales for why a polished
cube and not a polished sphere is designed. The method will obviously
not work for a "designed" polished sphere or a flint axe.

So it looks like it's not an attempt at "doing science" but an attempt
at convincing individuals.

Von R. Smith

May 9, 2006, 8:40:05 AM


It's your hypothesis. Why don't you test it? You seem keen on
maintaining that your position is somehow scientifically valid, but
your behavior does not bear this out. Making up assertions and then
sitting back and defying other people to prove you wrong is not testing
an hypothesis. That is a rhetorical game played by barroom
smart-alecks, not scientists who are serious about their work.

Besides, you have still not told anybody *how* one could prove you
wrong. To be able to do so, one would need an agreed-upon test for
*independently* determining whether a process were deliberate or
non-deliberate. If I *were* to see a granite cube forming without
human agency, how would I determine whether the process producing it
was non-deliberate? If I found a granite cube in a place where human
agency could be reasonably eliminated as a likely cause (perhaps in
your Mars case) what tests would I run to see whether or not it had
been caused by a non-deliberate cause?

>
> > > Such a cube has a wide range of different surface point distances. A
> > > sphere does not.
> >
> > So what? The surface-point distances from the center of the cube are
> > governed by highly regular trigonometric functions. What makes you
> > think that fact is less important than the range of distance values?
>
> The irregularities in distances are what are interesting in that
> non-deliberate processes do not produce irregularities with significant
> reflective symmetry with the irregularities of the opposing side of the
> stone.

We already know that cubes can form naturally in other materials. You
admit this when pressed, yet you still persist in trying to create the
impression that there is some instructive *general* principle at work
here, as opposed to a purely local observation about one hand-picked
example. This is useless if you are trying to extrapolate some
parameter about the "limitations of all non-deliberate processes" that
could be applied in a general case, which if I understand you correctly
is what you are trying to do.

>
> > > All of the points of a highly symmetrical sphere are
> > > just about the same distance from the center of the stone sphere. A
> > > cube is different in that some points are much farther from the center
> > > than are other points of the surface of the cube. This creates the
> > > irregularities (the corners) of the cube whereas the sphere has no such
> > > irregularities.
> >
> > Why is this an "irregularity"?
>
> Call this feature whatever you want. The *point* is the same - no pun
> intended ; )
>
> > > It is this high degree of reflective symmetry of these
> > > irregularities that is beyond the abilities of non-deliberate forces
> > > acting on granite or marble or flint - to a high degree of predictive
> > > value.
> >
> > Of course, this supposed "high degree of reflective symmetry of these
> > irregularities" is not beyond the abilities of "non-deliberate forces acting
> > on" iron pyrite.
>
> That's right. Different materials behave differently in the presence of
> random non-deliberate forces.


They also behave differently in the presence of non-random
non-deliberate forces. And the same materials behave differently in
the presence of different non-deliberate forces. So tell us again your
theory about the limitations of all non-deliberate forces, and how you
derived it.


>
> > Until you come up with some rigorous definitions and general rules,
> > you're just spewing words.
>
> I've given you very useful definitions and general rules. It is just
> that a particular material must be investigated in fair detail before
> the limits of what non-deliberate processes can achieve can be
> adequately drawn.


So, you have given us very useful general rules that are useless
outside a specific (fairly specified?) set of circumstances.

Also, your emphasis on materials is misplaced. If something about the
materials prevented granite from forming cubes, we wouldn't be able to
form it into cubes, either. Tell us more about processes, not
materials.


>
> > All of my objections must seem pointless to you, since (I'm guessing)
> > you think that all this is so obvious. But until you move beyond
> > definition and demonstration by example, you've got nothing scientific
> > to go on.
>
> I've given you several examples from highly symmetrical granite cubes
> to identical etched patterns in opposing faces to French gardens. I've
> explained why such examples go far beyond the very predictable limits
> of non-deliberate processes as they interact with these materials.
> These concepts are indeed obvious. I'm quite amazed, actually, that
> most in this forum are so resistant to them.


Anybody can make up examples that appear to bear out some point. What
you are being asked for is a set of operational definitions that would
allow us to use your brilliant insights into deliberate and
non-deliberate processes under unforeseen circumstances. Behind this
curtain is something that nobody has ever seen or even imagined before.
Explain to me how, upon revealing it, I could then use your ID method
to determine whether it could have been produced by non-deliberate
processes.

hersheyhv

unread,
May 9, 2006, 9:45:56 AM5/9/06
to

As opposed to Sean's "argument"? Yes.

hersheyhv

unread,
May 9, 2006, 9:51:13 AM5/9/06
to

Keep in mind that "non-deliberate forces" is nothing more than "what
happens in the absence of a detectable active agent". The only way
that Sean can *know* that something is due to non-deliberate forces is
to explicitly exclude active agents. For example, I can say that x
happens (or doesn't) in the absence of humans because I can know when
humans are absent. That is how you go about testing the limits and
possibilities of "non-deliberate forces". How Sean does this with an
agent that is deliberately undetectable is beyond me.

> Sean Pitman
> www.DetectingDesign.com

hersheyhv

unread,
May 9, 2006, 9:59:57 AM5/9/06
to

IOW, there is no universal way of determining that something is or is
not due to deliberate action by some agent based on reflective
symmetry. That reason only holds for *some* materials and *some*
scales or sizes and doesn't hold for others. The *only* way to
directly determine that something was deliberately made is to find
evidence of the active agent with the appropriate minimal powers
needed to manufacture the object in question. The only way to *infer*
such an agent is to know that *in the absence of such an agent*, the
object in question will not form under reasonably similar conditions.
You have neither presented direct evidence of the active agent nor
presented a way to infer your agent, which seems to have no testable
properties and can never be excluded.
>
> Sean Pitman
> www.DetectingDesign.com

dwib...@gmail.com

unread,
May 9, 2006, 10:48:00 AM5/9/06
to
So you are suggesting that a polished cube is designed and a polished
sphere is not?

Natural way to make a polished sphere = tumbling (all too common)

Natural way to make a polished cube = normal crystallization of minerals
(like iron pyrite)

Dwib

Deadrat

unread,
May 9, 2006, 12:12:18 PM5/9/06
to

"Von R. Smith" <trak...@gmail.com> wrote in message
news:1147178405.8...@j73g2000cwa.googlegroups.com...

Gee, I wish I had written that.

Sean, read and re-read this post. It's clear and instructive. You claim that
you're onto a scientific idea about the biosphere. You're not the first to find
himself there. People have struggled with this before in attempts to come up
with organizing principles that explain things. At one time, people thought that
every terrestrial animal had an analog in the ocean -- for every quarter horse,
there was a sea horse. At one time, people thought that the shapes of stony
objects were the important factors in studying past life. Old collections mixed
fossils and non-fossils. It's easy now to think these ideas foolish, but that's because
we've got better ways to organize what was a confusing array of data. And that's
what you need -- a better way to organize the data you see, with general principles
that are not dependent on a short list of examples and that may be independently
employed to distinguish the designed and deliberate from the non-designed and
non-deliberate.

Deadrat

Seanpit

unread,
May 9, 2006, 2:50:59 PM5/9/06
to

Non-deliberate processes would have a much harder time doing this with
granite. We are talking about granite here, not pyrite. Why do you
think I'm talking about granite all the time and not some crystalline
material like pyrite? Perhaps you weren't clear on that? - somehow?

> Dwib

Sean Pitman
www.DetectingDesign.com

ttambo...@gmail.com

unread,
May 9, 2006, 4:00:45 PM5/9/06
to
Seanpit wrote:
> Non-deliberate processes would have a much harder time doing this with
> granite. We are talking about granite here, not pyrite. Why do you
> think I'm talking about granite all the time and not some crystalline
> material like pyrite? Perhaps you weren't clear on that? - somehow?
>
> > Dwib
>
> Sean Pitman
> www.DetectingDesign.com

OK, Sean, you assert that a cube of polished granite could not be
created by natural processes (on Earth, by wild speculation on your
behalf) and ask that talk.origins folks prove you wrong. That's
evidence for ID, you say. However, you haven't even found such a cube
on Earth.

My response is to posit that somewhere in the universe, there is a
cube of polished granite that was created by natural processes. The
granite is unknown as compared to granite on Earth, and the natural
processes that form the polished cube are equally unknown compared to
natural processes that occur on Earth. The natural processes that form
the polished cube do so under conditions that utilize the granite's
natural (yet still unknown to people on Earth) properties of
fracture, producing a polished cube. One could equally consider this
one of Plato's Ideal Forms - the universal cube of polished granite,
from which all other cubes of polished granite (including yours) are
just pale imitations. I say this is evidence against ID.

Can you prove me wrong?

/ttam

Seanpit

unread,
May 9, 2006, 4:23:05 PM5/9/06
to

ttambo...@gmail.com wrote:
> Seanpit wrote:
> > Non-deliberate processes would have a much harder time doing this with
> > granite. We are talking about granite here, not pyrite. Why do you
> > think I'm talking about granite all the time and not some crystalline
> > material like pyrite? Perhaps you weren't clear on that? - somehow?
> >
> > > Dwib
> >
> > Sean Pitman
> > www.DetectingDesign.com
>
> OK, Sean, you assert that a cube of polished granite could not be
> created by natural processes (on Earth, by wild speculation on your
> behalf) and ask that talk.origins folks prove you wrong. That's
> evidence for ID, you say. However, you haven't even found such a cube
> on Earth.

I have one on my desk ; ) Also, you can see pictures of them if you do
a Google search. They are all designed though.

For reasons that seem obvious only to me in this forum, no highly
symmetrical polished granite cube has ever been found being produced by
any non-deliberate process.

< snip >

Sean Pitman
www.DetectingDesign.com

ttambo...@gmail.com

unread,
May 9, 2006, 5:41:33 PM5/9/06
to

Fine, you've got one on your desk created by *men*, not God, G_d, gods
or an "Intel. Designer" (unless you can prove otherwise). And you
haven't found one produced by natural means on Earth - which does not
mean that they don't exist, somewhere. Just because you lack the
evidence, doesn't mean that the evidence doesn't exist.

And you failed to address my second point (a follow-up on the topic),
which was that:

Somewhere in the universe, there is a cube of polished granite that is

Deadrat

unread,
May 9, 2006, 5:43:53 PM5/9/06
to

"Seanpit" <seanpi...@naturalselection.0catch.com> wrote in message
news:1147206185....@e56g2000cwe.googlegroups.com...

The reason is perfectly obvious to everyone in this forum. Highly
symmetrical polished granite cubes are produced by the Toledo Paperweight
Company and companies like it. The jump from that fact to a generalization
about all non-deliberate processes in the universe is a leap that not everyone
in this forum wants to make. At least, not without some argument beyond
assertion.

How about giving it a try?

Deadrat

>
> < snip >
>
> Sean Pitman
> www.DetectingDesign.com
>

Vend

unread,
May 9, 2006, 5:50:06 PM5/9/06
to
Natural selection can't explain the origin of life. It can explain the
evolution of living beings but not the transition from inanimate and
living systems, because to be selected, a system must be capable of
reproduction, so must be alive.

hersheyhv

unread,
May 9, 2006, 6:50:09 PM5/9/06
to

How do you (or anyone) determine empirically that a process is
"non-deliberate"?

I know the answer (It occurs in the absence of a detectable active
intelligent agent.). I just want you to tell me. How do you determine
empirically that a process is "non-deliberate"? You keep acting as if
you had a way of telling, but you keep refusing to let us know what
your method is.

> > Dwib
>
> Sean Pitman
> www.DetectingDesign.com

nightlight

unread,
May 9, 2006, 7:31:37 PM5/9/06
to
> Natural selection can't explain the origin of life. It can explain
> the evolution of living beings ...

The random mutation (RM) + natural selection (NS) do not explain even
the microevolution, much less the macroevolution. The key missing piece
is a proof that RM can yield the observed rates of adaptation. In order
to prove that, one would first need to solve a combinatorial problem of
estimating how many possible configurations of DNA can arise by
applying RM to DNA, obtain an integer M, then estimate how many are
"favorable" (e.g. regarding some environmental challenge) DNA
combinations, call it an integer F, and then obtain the probability of
a favorable random mutation, e.g. via P=F/M. Then one would need to
estimate the number of possible tries T (the number of offspring) in a
given time interval, and compare the expected number of favorable
solutions P*T vs the observed number of favorable solutions O in that
same time interval.

The smaller the P*T number is compared to the O number, the smaller
the chance that a random mutation is a good model for the observed
adaptation. Neo-Darwinians have yet to prove even in a single example
that a _random_ mutation would reproduce the observed favorable figures
O. Merely showing that, say, bacteria adapt to antibiotics only
establishes numbers O and T, but not F, M and P. Without the latter
figures, they have no way of knowing whether the mutations which
provided the resistance were random or a result of some intelligent
agency/process.
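For what it's worth, the bookkeeping described above can be written down as a short sketch. Every number below is a made-up placeholder (the post's own point is that F and M are unknown in practice); the code only illustrates the P=F/M and P*T-versus-O comparison, nothing more.

```python
import math

def favorable_expectation(m_total, f_favorable, t_tries):
    """Expected count of favorable outcomes if mutations are purely random."""
    p = f_favorable / m_total   # P = F/M: chance a single try is favorable
    return p * t_tries          # P*T: expected favorable outcomes in T tries

# Hypothetical toy figures only -- none of these are measured values.
M = 10**9   # total DNA configurations reachable by random mutation
F = 10**3   # of those, the "favorable" configurations
T = 10**7   # number of tries (offspring) in the time interval
O = 50      # observed number of favorable outcomes in that interval

expected = favorable_expectation(M, F, T)
print(math.isclose(expected, 10.0))  # True: ~10 expected with these numbers
print(O > expected)                  # True: observed exceeds the expectation
```

On these toy numbers the observed count exceeds the random expectation fivefold; whether anything like that holds for real genomes is exactly the open question being argued.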

Note that even within a purely mechanistic/materialistic perspective
(the "hidden variable" interpretation of Quantum Theory), there are
enough orders of magnitude below our current "elementary" particles
(10^-16 cm) to accommodate complexity comparable to our own (built upon
some Planckian scale objects of 10^-33 cm) as there are between our
current elementary particles and ourselves. For all we know, the
"random" choices predicted by the present Quantum Theory may be
purposeful choices and solutions from some vast technological
civilization running 10^16 times faster than our own and living at the
scales between 10^-16 cm and 10^-33 cm (this region is considered an
'empty desert' by the present physics). For such sub-micro beings our
atomic laws (with their fine tuning seemingly optimized for life) might
be their advanced 'galactic' scale technology. And if you allow for
non-mechanistic models, which have been outdated by Quantum Theory (QT)
since 1920s anyway (hence, besides the laws of matter-energy, you might
also model the mind-stuff which appears necessary in some
interpretations of QT), there are many other possibilities to model
the observed adaptation figures O more accurately.

Of course, we already know that the intelligent solutions (e.g. the
molecular biology) and the intelligent agency (e.g. a brain of a
molecular biologist) do occur in nature at our scales, whatever the
mechanism of their emergence might be. There is no reason (let alone a
natural law) why the brain of a molecular biologist would be the sole
arrangement of atoms and fields in nature that can perform such a
function. (After all, computers might do it all by themselves in the
near future, as well.)

In any case, neo-Darwinian (ND) theory is not even close to proving
that RM+NS are a sufficiently likely mechanism of evolution for a given
number of tries (in a given time-space) at any level, from "simple"
adaptations to speciation and beyond. And if it turns out that random
mutations cannot predict the observed figures O, then the next nearest
possibility remaining is the non-random (i.e. "intelligent") mutations
aka the ID theory of evolution (which in turn opens the further
scientific questions about the nature of the intelligent agency).

The criteria which distinguish ND from ID (the computations of M, F, P,
T, the measurement of O, and the comparison of T*P with O) are quantitative
perfectly scientific, the handwaving and yelling from the ND
neo-Lysenkoists notwithstanding.

Vend

unread,
May 9, 2006, 7:49:26 PM5/9/06
to
> ...but not the transition from inanimate TO living systems...

Inez

unread,
May 9, 2006, 10:16:56 PM5/9/06
to

> > I would no doubt assume a polished stone cube was man made, were I to
> > run across one. But there are plenty of crystals which are cubic and
> > smooth. It is not so much that "non-deliberate forces" can't make
> > cubes, it's that we know about what sorts of granite you see in the
> > natural world, and know that people are a type of creature that
> > occasionally polishes up a bit of stone.
>
> Non-deliberate forces cannot make highly symmetrical polished cubes
> with identical etching on opposing faces when it comes to the material
> of granite. We aren't talking about any other material here - just
> granite.

Can't? Really? What force prevents them? Or is it just very unlikely
that they would?

R. Baldwin

unread,
May 9, 2006, 11:09:00 PM5/9/06
to
"John Wilkins" <jo...@wilkins.id.au> wrote in message
news:e3oovj$1e73$2...@bunyip2.cc.uq.edu.au...
> Bobby D. Bryant wrote:

> > On Mon, 08 May 2006, "Seanpit" <seanpi...@naturalselection.0catch.com> wrote:
> >
> >> Mujin wrote:
> >>
> >>>>> But granite stones can be spherical and perfectly symmetrical.
> >>>> Not when it comes to irregularities where surface points at different
> >>>> distances from the center of the stone are at the same distance as the
> >>>> opposing point on the exact opposite side of the stone. A perfectly
> >>>> symmetrical sphere does not have such irregularities since all surface
> >>>> points are at the same distance from the center of the stone. You see,
> >>>> I'm talking about symmetry with regard to irregularities here. That is
> >>>> why I used a cube instead of a sphere for my illustration.
> >>>>
> >>>> < snip >
> >>> I'm curious: are you proposing that stone polished deliberately by
> >>> humans is perfectly symmetrical in respect to surface irregularities?
> >>> Do you have an example of such an object?
> >> I'm proposing that a highly symmetrical polished granite cube has a very
> >> low degree of divergence (percentage wise) between the measurement of
> >> one surface point from the center of the stone as compared to the exact
> >> opposite surface point as measured from the center of the cube.
> >>
> >> Such a cube has a wide range of different surface point distances. A
> >> sphere does not. All of the points of a highly symmetrical sphere are

> >> just about the same distance from the center of the stone sphere. A
> >> cube is different in that some points are much farther from the center
> >> than are other points of the surface of the cube. This creates the
> >> irregularities (the corners) of the cube whereas the sphere has no such
> >> irregularities. It is this high degree of reflective symmetry of these

> >> irregularities that is beyond the abilities of non-deliberate forces
> >> acting on granite or marble or flint - to a high degree of predictive
> >> value.
> >>
> >> In fact, the more irregularities there are, the better. For example, if
> >> you have a very unusual, even a random-looking pattern etched into one
> >> of the faces of the cube, it would be even more unlikely than the form
> >> of the cube itself that this etched pattern would be reproduced, with
> >> high fidelity, on the opposing face of the granite (or marble or flint)
> >> cube by any non-deliberate force.
> >
> > And when you look for divine intervention by applying that strange
> > analysis to a duck, you get...
> >
> ... very small stones?
>
>
> I got better.
>

Who are you who are so wise in the ways of science?

ErikW

unread,
May 10, 2006, 2:57:12 AM5/10/06
to

If I saw a polished granite cube I'd say it had been man-made and be
fairly confident. Is there anything else we have to agree on before you
can proceed? :P

Do you care to offer an outlook as to where this will lead? I don't
understand the point of the exercise. Well, apart from the fact that
it will ultimately prove that god exists :)

ErikW


>
> < snip >
>
> Sean Pitman
> www.DetectingDesign.com

neverbetter

unread,
May 10, 2006, 4:14:20 AM5/10/06
to

From what I've gathered: Perfect polished spheres are within the
capabilities of non-deliberate processes because they're equally
symmetrical everywhere, so finding them isn't relevant for the ID
hypothesis. But if we find granite cubes that seemingly have occurred
naturally without obvious intervention from humans or aliens, they must
have been carved by God, because they have symmetrical surface
irregularities, especially if there are etchings on the sides. The same
doesn't hold for pyrites or salt, so if we find salt or pyrite cubes
that seemingly have occurred naturally it doesn't mean that they must
have been carved by God.

We know this by applying a methodology which allows us to know the
limits of non-deliberate processes. We do this by looking at the
material and making observations of what seemingly occurs naturally.
Anything that seemingly occurs in nature in the absence of human
influence is inferred to be within the capabilities of non-deliberate
processes, and things that we haven't observed occurring in nature
are declared beyond the capabilities of non-deliberate processes. It
might seem that mere observation of occurrence is not enough to
determine if intelligent design was involved, but Sean is very
adamant that one does not need to know the processes that are
involved to make the distinction. Granite cubes are definitely beyond
the realm of realistic possibilities for all unidentified
non-deliberate processes because they haven't been observed to occur
in the absence of human influence, and salt and pyrite cubes are
within realistic possibilities for unidentified non-deliberate
processes because they have been observed to occur in nature.

Knowledge of the process is not necessary; the only thing we need is
observations of whether it occurs or not. This means that anything
that occurs in nature in the absence of obvious interference by
humans, like salt and pyrite cubes, is considered within the realm of
possibility for non-deliberate processes and thus not proof of God.
We need to find something that doesn't occur in nature in the absence
of obvious interference by humans to prove that God was involved. But
we soon realize that this is a paradox. If we find something that
doesn't occur in nature without obvious human interference, we have
in fact found something that occurs in nature without obvious human
interference, and this means that it's in the realm of possibilities
for non-deliberate processes after all, and thus doesn't prove God
did it.

The next thing we should do is go looking for cubical lifeforms which
seemingly have occurred naturally. They are either like granite
(unobserved, and so must have been carved by God) or like salt and
pyrite (observed, and so don't need God to carve them), and we'll
find out which when, and if, we find them. Hope this helps.

Richard Forrest

unread,
May 10, 2006, 4:40:27 AM5/10/06
to

<snipped>


>
> We know this by applying a methodology which allows us to know the
> limits of non-deliberate processes.

You fail to appreciate the true extent of Sean's genius here:
He has a methodology which is so powerful that it can draw conclusions
by statistical means of what his methodology will show *without even
applying his methodology*!


It's......quantum


RF

<snipped>

Peter Barber

unread,
May 10, 2006, 5:26:46 AM5/10/06
to
On 2006-05-10 09:31:37 +1000, "nightlight" <night...@omegapoint.com> said:

> The random mutation (RM) + natural selection (NS) do not explain even
> the microevolution, much less the macroevolution. The key missing piece
> is a proof that RM can yield the observed rates of adaptation.
>

> <snip simplistic calculations>
>
> <snip tenuous link to quantum theory>

I do hope you're being humorous rather than thick.

> <snip bizarre comparison of the neo-Darwinian synthesis to Lysenkoism>

OK, you _are_ taking the piss! Thank goodness you don't actually
believe what you wrote.
--
Peter Barber

ErikW

unread,
May 10, 2006, 5:26:00 AM5/10/06
to

Oh, if that's what he thinks then the exercise really does seem
pointless.

Thanks for showing me what this is about. It's rather more information
than I deserve, just jumping in like that. I'm afraid I'm mostly gonna
skip the topic in the future but please do take that as a compliment.
You have saved me some time and I thank you for it.

neverbetter

unread,
May 10, 2006, 5:39:58 AM5/10/06
to

Well, since the methodology seems to consist of the claim that things
that can be statistically shown to exist and that aren't known to be
designed could occur naturally and things that aren't known to exist in
the wild must have been intelligently designed, it's a no-brainer.
Actually doing the research would just give us more statistics of
things that exist and therefore do not support the proof of God: "God
must exist because granite cubes on Mars and other objects which aren't
known to exist are impossible to create without intelligent design."

nightlight

unread,
May 10, 2006, 9:22:56 AM5/10/06
to
>> The key missing piece
>> is a proof that RM can yield the observed rates of adaptation.
>> <snip simplistic calculations>

Those weren't "calculations" but _definitions_ and symbolic
abbreviations of quantities that need to be calculated and compared in
order to justify the claim that RM+NS _explain_ any type of evolution
as the most probable mechanism. If you know of any published
calculation of the relevant combinatorial spaces and their
comparisons, you're welcome to cite it and provide the figures
obtained, showing that _random_ mutations can indeed replicate the
observed numbers of favorable solutions for the available population
sizes and the time available.

>> <snip tenuous link to quantum theory>

It may appear so from the position of ignorance. In any case, you can
check a somewhat more detailed argument on that point here:

http://www.uncommondescent.com/index.php/archives/1088#comment-33665
http://www.uncommondescent.com/index.php/archives/1088#comment-33670
http://www.uncommondescent.com/index.php/archives/1088#comment-33749
http://www.uncommondescent.com/index.php/archives/1088#comment-33778

(If you wish to argue QT proper, you're welcome to reply in the
moderated group sci.physics.research on some of my recent posts there.
Note that vacuous ad hominem posts, such as your present "reply", would
get bounced there.)

> I do hope you're being humorous rather than thick.

Well, having mistaken the definitions for a calculation and confused
the statement of a problem with its solution, you haven't really
touched (or even come close to) the substance of the question posed.

> <snip bizarre comparison of the neo-Darwinian synthesis to Lysenkoism>
> OK, you _are_ taking the piss! Thank goodness you don't actually
> believe what you wrote.

Thanks for supporting my point. The pseudo-authoritative vapidity and
high-pitched shrillness of your "rebuttal" illustrate quite nicely the
Lysenkoist nature of the neo-Darwinist priesthood.

hersheyhv

unread,
May 10, 2006, 11:55:45 AM5/10/06
to

nightlight wrote:
> > Natural selection can't explain the origin of life. It can explain
> > the evolution of living beings ...
>
> The random mutation (RM) + natural selection (NS) do not explain even
> the microevolution, much less the macroevolution. The key missing piece
> is a proof that RM can yield the observed rates of adaptation.

That is easy, if, of course, you accept standard timeframes. Humans
and chimpanzees diverged from a common ancestor some 5 million years
ago. Random mutation and *neutral drift* to fixation, unlike the much
faster random mutation plus positive selection, occurs at a relatively
constant (it does not need to be absolutely constant to make the point)
rate for selectively neutral sites. That rate of fixation is simply
the rate of mutation and it is roughly 1% per million years for amino
acid coding sites. The percent difference between chimps and humans is
known and is actually *less* than would be expected if all sites were
selectively neutral. This merely points out that most of the time
natural selection is a conservative force.

But it does mean that there is more than enough random mutation to
account for the difference between humans and chimps *even if* all the
difference between these species were merely random neutral fixation.
Again, given that most of the human genome is roughly selectively
neutral in sequence (most of it is non-coding and has little sequence
specificity requirements; even coding sequence is, at minimum, roughly
1/3 sequence neutral due to the degeneracy of the code), we would
expect the % difference between humans and chimps to be close to that
expected for neutral fixation with the effect of *selection* being
largely conservative.

If you look at the level of specific genes, or look for specific large
sequence differences in some coding sequence between humans and chimps,
you won't find much. Humans are missing one coding sequence due to a
90+ bp deletion. There are some transpositions and rearrangements.
But nothing that demands novel mutational mechanisms or rates of
mutation in the span of time available.
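The neutral-rate argument above reduces to one line of arithmetic. A hedged sketch: the 1%-per-million-years figure and the ~5-million-year divergence are taken loosely from this post, the "observed" value is a hypothetical placeholder, and details such as counting both lineages are deliberately glossed over, so treat this purely as an illustration of the logic.

```python
def expected_neutral_divergence(rate_pct_per_myr, myr):
    """Percent difference expected if every site drifted to fixation neutrally."""
    return rate_pct_per_myr * myr

# Figures taken loosely from the post; treat them as illustrative only.
neutral_ceiling = expected_neutral_divergence(1.0, 5.0)  # percent
observed = 1.2  # hypothetical observed coding difference, percent

print(neutral_ceiling)             # 5.0
# An observed difference below the neutral ceiling fits the claim that
# selection acts mostly as a conservative (purifying) force.
print(observed < neutral_ceiling)  # True
```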

In my book of science, actual observation of organisms trumps your
hypothetical mathematics. That your math can demonstrate the
impossibility of bumblebees flying is countered by the demonstrable
fact that they do. The fact is that you do not need any unusual
mechanism to generate the rate of change in genomes seen in actual
organisms in the time available. That the organisms in question (human
and chimp and their common ancestor) are/were *phenotypically*
different is unquestioned. That the organisms are *genotypically*
different and that some (small) fraction of this genotypic difference
accounts for the phenotypic difference because of selection for those
genotypic changes is also undoubtedly true. But the amount of such
change (which would be rapid relative to drift and fixation) is so
small relative to the amount of simple random fixation (not to mention
the amount of conservative selection) that it cannot even be observed
in the crude differences in % sequence change or in the differences in
specific genes. You can account for the totality of the sequence
difference between these organisms by mutation plus drift and fixation
alone.

dwib...@gmail.com

unread,
May 10, 2006, 1:27:48 PM5/10/06
to
Okay, I get your point about granite.

By the way, do you know a case of a cube of granite occurring in nature?

Although..... have you seen pictures of the rock growing out of Mt. St.
Helens? It's pretty darned smooth on one side. Designed? Or
extruded?

Dwib

Kim G. S. Øyhus

unread,
May 10, 2006, 1:52:09 PM5/10/06
to
>In fact, the more irregularities there are, the better. For example, if
>you have a very unusual, even a random-looking pattern etched into one
>of the faces of the cube, it would be even more unlikely than the form
>of the cube itself that this etched pattern would be reproduced, with
>high fidelity, on the opposing face of the granite (or marble or flint)
>cube by any non-deliberate force.

You described what is called point symmetry. This is relatively easy for
nature to make. All that has to happen is that a form is filled twice
with granite, and the two halves are put together.
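The "divergence" measure Sean described upthread is easy enough to state in code. A minimal sketch, with all shapes and numbers invented for illustration: for each surface point, compare its distance from the center with that of the diametrically opposite point; a point-symmetric solid (such as a twice-filled form) scores zero.

```python
import math

def opposite_point_divergence(surface_pairs, center=(0.0, 0.0, 0.0)):
    """Max percent difference in center-distance between opposing points.

    surface_pairs holds diametrically opposed pairs [(p, p_opposite), ...],
    each point a 3-tuple.
    """
    def dist(p):
        return math.sqrt(sum((a - c) ** 2 for a, c in zip(p, center)))

    worst = 0.0
    for p, q in surface_pairs:
        d1, d2 = dist(p), dist(q)
        worst = max(worst, abs(d1 - d2) / max(d1, d2) * 100.0)
    return worst

# A perfect cube: corners and face centers paired with their opposites.
cube_pairs = [((1, 1, 1), (-1, -1, -1)),   # opposing corners
              ((1, 0, 0), (-1, 0, 0))]     # opposing face centers
print(opposite_point_divergence(cube_pairs))   # 0.0 -- perfectly symmetric

# A chipped corner breaks the symmetry, and the measure picks it up.
chipped = [((0.9, 0.9, 0.9), (-1, -1, -1))]
print(opposite_point_divergence(chipped) > 0)  # True
```

Note the measure says nothing about *how* the symmetry arose; a mould filled twice scores exactly as well as a machined paperweight.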

Kim0

Kim G. S. Øyhus

unread,
May 10, 2006, 1:57:06 PM5/10/06
to
In article <1147172685....@i39g2000cwa.googlegroups.com>,
ErikW <bryo...@hotmail.com> wrote:
>
From the sidelines: I'd say that Sean doesn't know how to proceed and
>he is probing the group for an acceptable way forward. I fail to see
>how anything good will come out of ad hoc rationales for why a polished
>cube and not a polished sphere is designed. The method will obviously
>not work for a "designed" polished sphere or a flint axe.

Good point. I have seen perfect granite spheres made by nature and
made by humans, designed, as mechanical art. Their symmetry and
polish did not suffice to distinguish them from each other.

Kim0

Kim G. S. Øyhus

unread,
May 10, 2006, 1:52:35 PM5/10/06
to
In article <e3oeqr$e24$3...@geraldo.cc.utexas.edu>,
Bobby D. Bryant <bdbr...@mail.utexas.edu> wrote:
>
>I think his deep-down problem is that he can't decide whether he
>wants complexity or simplicity to be the indicator of designedness.

Hear hear!

Kim0

nightlight

unread,
May 10, 2006, 3:31:40 PM5/10/06
to
>> The random mutation (RM) + natural selection (NS) do
>> not explain even the microevolution, much less the
>> macroevolution. The key missing piece is a proof
>> that RM can yield the observed rates of adaptation.
>
>
> That is easy, if, of course, you accept standard timeframes.

My argument is not about or based on short times available
for evolution.

> Humans and chimpanzees diverged from a common ancestor some 5
> million years ago. Random mutation and *neutral drift* to
> fixation, unlike the much faster random mutation plus positive
> selection, occurs at a relatively constant (it does not need
> to be absolutely constant to make the point) rate for
> selectively neutral sites.

The (approximately) "constant rate" of DNA change is an empirical fact.
It tells you absolutely nothing about the presence or absence of any
guidance (or anticipation, foresight...) in the generation of those DNA
changes (which are then subject to natural selection). An intelligent
agency could model the possible DNA changes internally, performing
look-ahead and elimination/selection before committing to the physical
changes. While running as an internal model could be many orders of
magnitude faster than the corresponding physical process, its net
effect may well be precisely the _observed_ rate of evolutionary
novelty, be it for steady or for highly variable (e.g. Cambrian
explosion) forms of evolution.

After all, linguists use methods similar to those of evolutionary
molecular biologists to study the evolution of and relations among
natural languages
(which are created and transformed by intelligent/purposeful agents).
All the biological evolution patterns, such as those in your examples,
commonly trotted out by neo-Darwinists in support of the sufficiency of
"random" mutations, occur in the evolution of natural and artificial
(such as mathematical formalisms) languages, religions, arts,
scientific theories, technologies,... (just recall Dawkins memes).
Curiously, in all other observed instances of such evolutionary
patterns, there is always an intelligent agency behind. The sole
exception is, according to neo-Darwinist Gospel, the biological
evolution.

The constant (or variable) _observed_ rate of DNA change is only an
easy half of the criteria needed to discern between intelligent
(guided, anticipatory) and random generation of novelty. The present
molecular biology is still much too primitive to evaluate the other
half, since it cannot _compute_ how many total DNA configurations
could be produced in a given setting, much less how every DNA change
at the atomic level would affect the phenotype (in order to enumerate
the 'favorable' or at least neutral outcomes).

The problem is therefore too complex to tackle with the present
technological and scientific tools, hence the question of ID vs ND is
an open and perfectly legitimate scientific problem. The "irreducible
complexity" examples offered by the ID proponents are a qualitative
plausibility argument. Its effect, in terms of the more precise criteria
described in the previous posts, is to enlarge (exponentially in the
number of required changes) the space of possible configurations, thus
reducing the probability of a favorable combined DNA change for any given
number of tries. Since the full combinatorial spaces relevant for the
precise criteria are enormous (e.g. of 10^C kind, where C is of the
order of number of atoms in the DNA), the handfuls of components used
in "irreducible complexity" arguments merely add relatively tiny
numbers to the exponent C for the total number of combinations. Arguing
those examples is like arguing fiscal policies based only on the last
digit of the figure for US national debt expressed in millicent units.
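
To make the scale claim concrete, here is a minimal back-of-the-envelope
sketch; the genome size and component counts below are illustrative
assumptions, not figures from this thread:

```python
import math

# Illustrative assumption: a 3-billion-base genome has 4**N possible
# sequences, i.e. roughly 10**(N*log10(4)) configurations.
genome_bases = 3_000_000_000
exponent_total = genome_bases * math.log10(4)

# Hypothetical "irreducible complexity" argument: say 5 proteins of
# 300 codons each, constraining about 4500 extra bases.
ic_bases = 5 * 300 * 3
exponent_ic = ic_bases * math.log10(4)

print(f"total exponent  ~ {exponent_total:.3e}")
print(f"IC contribution ~ {exponent_ic:.0f} "
      f"({exponent_ic / exponent_total:.1e} of the total exponent)")
```

Whatever one makes of the underlying argument, the arithmetic itself is
uncontroversial: a few thousand constrained bases change the exponent by
about a part per million.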


> But it does mean that there is more than enough random
> mutation to account for the difference between humans
> and chimps *even if* all the difference between these
> species were merely random neutral fixation.

As explained, the empirical mutation rate is a fact orthogonal to the
question of whether the mutations which differentiated humans from apes
were random or guided (selected with look-ahead).

> In my book of science, actual observation of organisms
> trumps your hypothetical mathematics.

I am not arguing against the value of empirical data. After all, the
observed count of viable novelties O (in my first post) is the number
to be compared with the expected number of viable novelties from random
mutations (within given time & population sizes). The problem is that
there are no empirical facts that address the point of contention
between ID and ND.

Consider a simple analogy -- I tell you I am "randomly" tossing 10
coins behind a curtain and then you are allowed to check the outcome
and deduce whether I am tossing them "randomly" or putting them down by
hand. You are not allowed to look behind the curtain until I say the
coins have settled down. If we were to perform a "large" number of such
experiments, you could apply various random number tests on the
observed outcomes and deduce the probability of "random" tossing vs
putting the coins down by hand for the given number of tests. Since you
can't see or hear what is going on behind the curtain, the observation
is not enough -- you need a mathematical model for the "random coin
tossing" (binomial distribution), then you need to compute numbers
(various frequencies) within that model and compare them with the
observed numbers.
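
That curtain scenario can be sketched directly; a minimal, hedged
illustration (the round counts and the bias level are arbitrary choices
for the example) using an exact binomial test:

```python
import random
from math import comb

def binomial_pvalue(heads, n, p=0.5):
    # Exact two-sided binomial test: total probability, under a
    # fair-coin model, of outcomes no more likely than the observed one.
    probs = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    return sum(q for q in probs if q <= probs[heads] + 1e-15)

# 20 rounds of 10 coins "behind the curtain" = 200 tosses total.
random.seed(0)
n = 200
fair_heads = sum(random.random() < 0.5 for _ in range(n))

# A hand-placer with a 70% bias toward heads:
biased_heads = int(n * 0.7)

print("fair   p =", round(binomial_pvalue(fair_heads, n), 4))
print("biased p =", binomial_pvalue(biased_heads, n))
```

A genuinely random tosser gives a non-tiny p-value most of the time,
while the biased hand-placer is flagged with an astronomically small one.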

In nature, our current most fundamental theory (Quantum Theory; QT) is
irreducibly non-deterministic, i.e. you're seeing the outcomes of
quantum transitions, you know the probabilities of various outcomes,
but neither the theory nor any observation can "see" behind the quantum
curtain. Hence if you wish to show _scientifically_ (instead of relying
on Lysenkoist methods used by the present neo-Darwinian priesthood)
that the mutations (which eventually come down to quantum dice)
involved in evolution (at any scale, micro and macro) are "random" you
need to work out the mathematical model for truly random DNA changes
and compare its predictions with the observed outcomes as sketched in
my first post.

z

May 12, 2006, 2:47:39 AM
On 10 May 2006 12:31:40 -0700, "nightlight"
<night...@omegapoint.com> wrote:

What we see is easily accounted for based on the chemistry of DNA and
the ability of cells to repair DNA. There is no need for an outside
agency to account for the drift observed.

>After all, the linguists use similar methods as evolutionary molecular
>biologists to study the evolution and relations among natural languages
>(which are created and transformed by intelligent/purposeful agents).
>All the biological evolution patterns, such as those in your examples,
>commonly trotted out by neo-Darwinists in support of the sufficiency of
>"random" mutations, occur in the evolution of natural and artificial
>(such as mathematical formalisms) languages, religions, arts,
>scientific theories, technologies,... (just recall Dawkins memes).
>Curiously, in all other observed instances of such evolutionary
>patterns, there is always an intelligent agency behind. The sole
>exception is, according to neo-Darwinist Gospel, the biological
>evolution.

You are confusing models with reality.

>
>The constant (or variable) _observed_ rate of DNA change is only an
>easy half of the criteria needed to discern between intelligent
>(guided, anticipatory) and random generation of novelty. The present
>molecular biology is still much too primitive to evaluate the other
>half, since it cannot _compute_ either how many total DNA
>configurations could be produced in a given setting, much less how
>would every DNA change at the atomic level affect the phenotype (in
>order to enumerate the 'favorable' or at least neutral outcomes).
>

WTH are DNA configurations? That certainly is not a term used by us
mol bio folks. And is "change at the atomic level" just an awkward way
of saying mutation? DNA is pretty much DNA at an atomic level. You
gots your A's, C's, G's, and T's. There's some finessing in various
organisms with methyl groups and/or other modifications acting as
party hats, but DNA is not that exciting at an atomic level.

Evolution is certainly not guided; it is contingent. Look at the
Luria Delbruck experiment for a simple example.
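
For readers unfamiliar with it, the logic of the Luria-Delbruck
fluctuation test can be sketched in a toy simulation (all parameters
below are arbitrary illustrative choices): if mutations arise at random
during growth, rare early "jackpots" inflate the variance of mutant
counts across parallel cultures far above the mean, whereas mutations
induced only at selection time give Poisson-like counts with variance
roughly equal to the mean.

```python
import random

def grow_culture(generations=13, mu=5e-4, induced=False):
    # Grow one culture from a single cell; return the number of
    # resistant mutants at the end.
    # induced=False: mutations occur at random during growth and are
    # inherited by all descendants (the Luria-Delbruck picture).
    # induced=True: mutations appear only when selection is applied
    # (the "directed mutation" alternative).
    normal, resistant = 1, 0
    for _ in range(generations):
        if induced:
            normal *= 2
        else:
            daughters = 2 * normal
            mutants = sum(random.random() < mu for _ in range(daughters))
            normal = daughters - mutants
            resistant = 2 * resistant + mutants
    if induced:
        resistant = sum(random.random() < mu for _ in range(normal))
    return resistant

def var_over_mean(counts):
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return v / m if m else 0.0

random.seed(42)
random_counts  = [grow_culture(induced=False) for _ in range(200)]
induced_counts = [grow_culture(induced=True) for _ in range(200)]
print("random-during-growth var/mean:", round(var_over_mean(random_counts), 1))
print("induced-on-selection var/mean:", round(var_over_mean(induced_counts), 1))
```

The random-during-growth cultures show a variance-to-mean ratio far above
one, which is what Luria and Delbruck observed in real bacteria.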

>The problem is therefore too complex to tackle with the present
>technological and scientific tools, hence the question of ID vs ND is
>an open and a perfectly legitimate scientific problem. The "irreducible
>complexity" examples offered by the ID proponents are a qualitative
>plausibility argument. Its effect in terms of the more precise criteria
>described in the previous posts is to enlarge (exponentially in the
>number of required changes) the space of possible configurations, thus
>reduce the probability of favorable combined DNA change for any given
>number of tries. Since the full combinatorial spaces relevant for the
>precise criteria are enormous (e.g. of 10^C kind, where C is of the
>order of number of atoms in the DNA), the handfuls of components used
>in "irreducible complexity" arguments merely add relatively tiny
>numbers to the exponent C for the total number of combinations. Arguing
>those examples is like arguing fiscal policies based only on the last
>digit of the figure for US national debt expressed in millicent units.

Ahhh, no. You seem to be caught in the logical loop of "if DNA has X
# of atoms, then something like 10^X degrees of freedom are possible,
therefore evolution would take Y^10^X sample size". While you
certainly can shuffle numbers all you like you are confused about some
basic chemistry and biology. Using that sort of logic, we preclude any
cell from existing for more than a fraction of a second. Using that
sort of logic, there is no way that any enzyme could exist or
function. It seems to me that a reasonable person would notice that
life did not logically depoof at the original assertion, and look for
the error.

Organisms are not free to explore the entire phase space available
based on their genome size by randomly rearranging the DNA. There
is actually a relatively small area of the phase space to explore,
and it's almost always nearby (lateral gene transfer is an exception).
Kinda explains why species tend to go extinct.

>
>> But it does mean that there is more than enough random
>> mutation to account for the difference between humans
>> and chimps *even if* all the difference between these
>> species were merely random neutral fixation.
>
>As explained, the empirical mutation rate is a fact orthogonal to the
>question of whether the mutations which differentiated humans from apes
>were random or guided (selected with look-ahead).

What we see in comparison of the genomes shows that the changes are
biased towards genes implicated in neural development, diet, and
disease. Sort of what you would expect based on the biology. And
the linkage disequilibrium of genes near these genes showing strong
selection indicates either a) evolution or b) an
incompetent/malevolent/trickster designer.

Which is simpler, and which has actual evidence?

>
>> In my book of science, actual observation of organisms
>> trumps your hypothetical mathematics.
>
>I am not arguing against the value of empirical data. After all, the
>observed counts of viable novelties O (in my first post) is the number
>to be compared with the expected number of viable novelties from random
>mutations (within given time & population sizes). The problem is that
>there are no empirical facts that addresses the point of contention
>between ID and ND.

Bad model, and even worse conclusion.

>
>Consider a simple analogy -- I tell you I am "randomly" tossing 10
>coins behind a curtain and then you are allowed to check the outcome
>and deduce whether I am tossing them "randomly" or putting them down by
>hand. You are not allowed to look behind the curtain until I say the
>coins have settled down. If we were to perform a "large" number of such
>experiments, you could apply various random number tests on the
>observed outcomes and deduce the probability of "random" tossing vs
>putting the coins down by hand for the given number of tests. Since you
>can't see or hear what is going on behind the curtain, the observation
>is not enough -- you need a mathematical model for the "random coin
>tossing" (binomial distribution), then you need to compute numbers
>(various frequencies) within that model and compare them with the
>observed numbers.

>In nature, our current most fundamental theory (Quantum Theory; QT) is
>irreducibly non-deterministic i.e. you're seeing the outcomes of
>quantum transitions, you know the probabilities of various outcomes,
>but neither the theory nor any observation can "see" behind the quantum
>curtain. Hence if you wish to show _scientifically_ (instead of relying
>on Lysenkoists methods used by the present neo-Darwinian priesthood)
>that the mutations (which eventually come down to quantum dice)
>involved in evolution (at any scale, micro and macro) are "random" you
>need to work out the mathematical model for truly random DNA changes
>and compare its predictions with the observed outcomes as sketched in
>my first post.

Actually, most mutations fall into the macroscopic realm, at least for
QT. We are talking about relatively large chunks of stuff being
moved: deamination of cytosine, oxidative adducts, strand breakage and
repair, etc. You are blathering nonsense here if you think there are
any Schrodinger cats involved. On the off chance you think that
most mutations involve radioactive decay, you are very much mistaken.

And before you call someone a Lysenkoist, please learn what a
Lysenkoist is. He certainly did not believe in random changes as a
driving force for changes in organisms. If you are referring to his
suppression of what you would call "Neo-Darwinists" in the former USSR,
then you should use the term "Stalinist". Calling a biologist a
"Lysenkoist" is confusing.

As far as your math goes, GIGO. If you don't understand either the
biology or the chemistry, you really cannot make a sensible model.
Your model would make life impossible, so I suggest a change in model
parameters. Just a suggestion.

And as a final point, the real problem of ID is the invisible pink
unicorn (or its hipper cousin the FSM). Any agent that can be
postulated to do anything, in any way, and hide itself from
inspection, is by definition unscientific. No predictive basis, no
testability, no way to falsify.

As far as IC goes, it just means you are A) not current with biology a
la Behe or B) willing to assume we now know all that is ever
possible to know about a system a la Dembski. I am not willing to say
I am god-like in my knowledge, but am always amazed at the religious
folk who seem to think that ID is reasonable given that conclusion.
And that is the basis of most "scientific" ID- I can't explain it
fully, therefore God/IPU/FSM. To me, I can't explain it means that
that's something worth looking into. That's part of the joy, and most
of the frustration of being a scientist.


B Miller

hersheyhv

May 12, 2006, 11:39:26 AM

nightlight wrote:
> >> The random mutation (RM) + natural selection (NS) do
> >> not explain even the microevolution, much less the
> >> macroevolution. The key missing piece is a proof
> >> that RM can yield the observed rates of adaptation.
> >
> >
> > That is easy, if, of course, you accept standard timeframes.
>
> My argument is not about or based on short times available
> for evolution.

Meaning that you do accept standard timeframes?

> > Humans and chimpanzees diverged from a common ancestor some 5
> > million years ago. Random mutation and *neutral drift* to
> > fixation, unlike the much faster random mutation plus positive
> > selection, occurs at a relatively constant (it does not need
> > to be absolutely constant to make the point) rate for
> > selectively neutral sites.
>
> The (approximately) "constant rate" of DNA change is an empirical fact.
> It tells you absolutely nothing about the presence or absence of any
> guidance (or anticipation, foresight...) in the generation of those DNA
> changes (which are then subject to natural selection).

You mean, does science exclude the possibility of theistic evolution?
No. Nothing can exclude the possibility of an unspecified and
unobservable entity appearing to act via naturalistic methods.

> An intelligent
> agency could model the possible DNA changes internally, performing
> look-ahead and elimination/selection before committing to the physical
> changes. While running as an internal model could be many orders of
> magnitude faster than the corresponding physical process, its net
> effect may well be precisely the _observed_ rate of evolutionary
> novelty, be it for steady or for highly variable (e.g. Cambrian
> explosion) forms of evolution.

One can propose a magical entity as the cause for *anything*. But what
you have to do is demonstrate that such a magical entity is
*necessary*.

> After all, the linguists use similar methods as evolutionary molecular
> biologists to study the evolution and relations among natural languages
> (which are created and transformed by intelligent/purposeful agents).

Yes. There is a lot of similarity in the two processes. A difference,
of course, is that the intelligent/purposeful agents involved in the
evolution of languages are not undetectable or magical. It is known,
as a fact, that human languages cannot exist in the absence of humans
(or some of their artifacts, such as radio, tape, and print).

> All the biological evolution patterns, such as those in your examples,
> commonly trotted out by neo-Darwinists in support of the sufficiency of
> "random" mutations, occur in the evolution of natural and artificial
> (such as mathematical formalisms) languages, religions, arts,
> scientific theories, technologies,... (just recall Dawkins memes).
> Curiously, in all other observed instances of such evolutionary
> patterns, there is always an intelligent agency behind. The sole
> exception is, according to neo-Darwinist Gospel, the biological
> evolution.

Languages, unlike organisms, do not self-reproduce. They are
manufactured by a known intelligent agent.

> The constant (or variable) _observed_ rate of DNA change is only an
> easy half of the criteria needed to discern between intelligent
> (guided, anticipatory) and random generation of novelty. The present
> molecular biology is still much too primitive to evaluate the other
> half, since it cannot _compute_ either how many total DNA
> configurations could be produced in a given setting, much less how
> would every DNA change at the atomic level affect the phenotype (in
> order to enumerate the 'favorable' or at least neutral outcomes).

I have explicitly pointed out that no DNA configuration is so deviant
from configurations held in common by both chimps and humans that any
search of some hypothetical total sequence space of possible sequences
is ever likely to have occurred. *All* that is needed is
*modification* of the subset of total sequence space that existed in
the common ancestor.

Evolution is descent with (or by) modification, not a random search of
total sequence space. There is no mechanism that allows organisms to
search total sequence space.

> The problem is therefore too complex to tackle with the present
> technological and scientific tools, hence the question of ID vs ND is
> an open and a perfectly legitimate scientific problem. The "irreducible
> complexity" examples offered by the ID proponents are a qualitative
> plausibility argument.

The underlying assumption of these arguments is a specific
non-evolutionary mechanism that involves inventing a sequence from
total sequence space rather than modifying a sequence from an ancestor.
It is quite true that *if* evolution worked by poofing useful
sequences from some total sequence space it would be unlikely to have
happened. But only IDiots think that such a model has anything to do
with evolution.

> Its effect in terms of the more precise criteria
> described in the previous posts is to enlarge (exponentially in the
> number of required changes) the space of possible configurations, thus
> reduce the probability of favorable combined DNA change for any given
> number of tries. Since the full combinatorial spaces relevant for the
> precise criteria are enormous (e.g. of 10^C kind, where C is of the
> order of number of atoms in the DNA), the handfuls of components used
> in "irreducible complexity" arguments merely add relatively tiny
> numbers to the exponent C for the total number of combinations. Arguing
> those examples is like arguing fiscal policies based only on the last
> digit of the figure for US national debt expressed in millicent units.

Given that such calculations are utter nonsense wrt modelling how
evolution *actually* claims to work (and, from direct, observable
*evidence*, actually did so in the case of chimps and humans), what is
the use of presenting these ignorant "747 forming in a tornado"
calculations? Evolution works by descent with modification. What you
have to show is that there, in fact, is some sequence that
distinguishes chimp from human that requires mining total sequence
space rather than mining sequences present in the ancestor that only
needed to be modified slightly (and largely within the amounts
permitted by neutral drift alone, but there could well be some
sequences that have evolved more rapidly but within the rates permitted
by selection), as *evidenced* by the presence of similar sequences in
both presently living species.

> > But it does mean that there is more than enough random
> > mutation to account for the difference between humans
> > and chimps *even if* all the difference between these
> > species were merely random neutral fixation.
>
> As explained, the empirical mutation rate is a fact orthogonal to the
> question of whether the mutations which differentiated humans from apes
> were random or guided (selected with look-ahead).

Anyone can always claim that mutations occurred by a God producing
them, just as you can claim that Lady Luck is responsible for the cards
you received. But you seem to be claiming that the rate of such
mutations is too slow to be happening in the absence of a God (or that
you could not have gotten those cards in the absence of Lady Luck).
All I am doing is pointing out that there is no *need* for such an
outside agent. The empirical mutation rate and its distribution is
quite sufficient to account for *all* the observed genomic differences,
the neutral or the positively selected for, between humans and chimps.
There is, of course, in evolution, no searching of total sequence
space, but only modification of ancestral sequences. You need to
present evidence that your outside agent is *necessary* in spite of the
fact that the known rate and types of mutation can produce all of the
empirically observed differences by modifying ancestral sequences.
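
The "sufficient rate" half of this claim is easy to sanity-check with the
standard neutral-clock arithmetic; the rate and divergence-time values
below are rough, commonly cited figures assumed for illustration, not
numbers taken from this thread:

```python
# Rough neutral-clock sanity check (all values are assumptions).
mu_per_site_per_year = 1.0e-9   # approximate neutral substitution rate
split_years = 6.0e6             # assumed human-chimp divergence time

# Neutral theory: the substitution rate equals the mutation rate, and
# differences accumulate independently along both lineages.
expected_divergence = 2 * split_years * mu_per_site_per_year
print(f"expected neutral divergence ~ {expected_divergence:.1%}")
```

This lands in the same ballpark as the roughly 1% single-nucleotide
divergence usually quoted for the two genomes, which is the point: no
extra mechanism is needed to reach the observed amount of difference.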

> > In my book of science, actual observation of organisms
> > trumps your hypothetical mathematics.
>
> I am not arguing against the value of empirical data. After all, the
> observed counts of viable novelties O (in my first post) is the number
> to be compared with the expected number of viable novelties from random
> mutations (within given time & population sizes). The problem is that
> there are no empirical facts that addresses the point of contention
> between ID and ND.

Until ID presents a testable idea that would require the *necessity*
of action by some outside agent, my thesis stands: such an agent is
*unnecessary*, because *all* of the observed differences between humans
and chimps are consistent with known rates and types of mutation from
ancestral sequences. ID remains ignored by and irrelevant to science
because it fails the razor test.

> Consider a simple analogy -- I tell you I am "randomly" tossing 10
> coins behind a curtain and then you are allowed to check the outcome
> and deduce whether I am tossing them "randomly" or putting them down by
> hand. You are not allowed to look behind the curtain until I say the
> coins have settled down. If we were to perform a "large" number of such
> experiments, you could apply various random number tests on the
> observed outcomes and deduce the probability of "random" tossing vs
> putting the coins down by hand for the given number of tests. Since you
> can't see or hear what is going on behind the curtain, the observation
> is not enough -- you need a mathematical model for the "random coin
> tossing" (binomial distribution), then you need to compute numbers
> (various frequencies) within that model and compare them with the
> observed numbers.

And that is relevant to the modification of pre-existing useful
sequences how? That is an interesting model of the "747 in a tornado"
explanation. But evolution doesn't propose that it works by that
mechanism. So how is such a model relevant?

> In nature, our current most fundamental theory (Quantum Theory; QT) is
> irreducibly non-deterministic i.e. you're seeing the outcomes of
> quantum transitions, you know the probabilities of various outcomes,
> but neither the theory nor any observation can "see" behind the quantum
> curtain. Hence if you wish to show _scientifically_ (instead of relying
> on Lysenkoists methods used by the present neo-Darwinian priesthood)
> that the mutations (which eventually come down to quantum dice)
> involved in evolution (at any scale, micro and macro) are "random" you
> need to work out the mathematical model for truly random DNA changes
> and compare its predictions with the observed outcomes as sketched in
> my first post.

There have been many, many, many empirical tests of the idea that
mutation occurs *independently* of need for the mutation. In every
case, from the Luria/Delbruck experiments on, the finding is that
mutation occurs *independently* of the need for that mutation. [It is
selection by the local environment that distinguishes the reproductive
value of different mutations that have already occurred by chance.]
If I have an honest die, I would expect a 6 about 1/6th of the time.
Again and again and again that is what I see in honest dice. Your
claim amounts to saying that I need to invoke Lady Luck to explain the
fact that I get a 6 about 1/6th of the time. And when I do happen to
get a 6 under conditions where a 6 is valuable, your claim is that Lady
Luck was *necessary* for that to happen.

TomS

May 12, 2006, 11:53:26 AM
"On 12 May 2006 08:39:26 -0700, in article
<1147448366....@i39g2000cwa.googlegroups.com>, hersheyhv stated..."
[...snip...]

>One can propose a magical entity as the cause for *anything*. But what
>you have to do is demonstrate that such a magical entity is
>*necessary*.
[...snip...]

Even if the magical entity is necessary, that isn't enough.

The other half is that a magical entity is not *sufficient*.

That is to say that even if we accept the existence of such
an entity, we still need something in addition, to get to any
feature of the natural world. Why is it that way, rather than
some other way? Why did the magical entity do that? Is it
because it was unable to do anything else? (Such as, that there
is only one way that it could have designed an eye?) Is it
because it wanted to do it that way? (Such as, that the
designer wanted chimps and humans to have the same kind of eye?)

Magic alone is not enough of an explanation.


--
---Tom S. <http://talkreason.org/articles/chickegg.cfm>
"It is not too much to say that every indication of Design in the Kosmos is so
much evidence against the Omnipotence of the Designer. ... The evidences ... of
Natural Theology distinctly imply that the author of the Kosmos worked under
limitations..." John Stuart Mill, "Theism", Part II

Von R. Smith

May 12, 2006, 1:21:54 PM


It's academic writing in the post-modern world: First you write your
thesis, then the rest of the paper. Then you do a little research so
that you can put in a bibliography and edit in a few footnotes.
Then you come up with a catchy title like: "Romancing the Martian
Cube: The IDeological Construction of Intentionality in Sean Pitman's
On-line Essays".

Last but not least, you shove the completed paper under the eponymous
granite cube paperweight on your desk until the conference. Yet
another illustration of how the signifier can slide under the signified.

hersheyhv

May 12, 2006, 3:13:12 PM

At a meter per side, you would need some sturdy desk as well as some
teflon sliders.

Mujin

May 12, 2006, 10:14:52 PM
In article <1147454514.5...@i39g2000cwa.googlegroups.com>,
trak...@gmail.com says...

>
> It's academic writing in the post-modern world: First you write your
> thesis, then the rest of the paper. Then you do a little research so
> that you can put in a bibliography and edit it in a few footnotes.
> Then you come up with a catchy title like: "Romancing the Martian
> Cube: The IDeological Construction of Intentionality in Sean Pitman's
> On-line Essays".
>
> Last but not least, you shove the completed paper under the eponymous
> granite cube paperweight on your desk until the conference. Yet
> another illustration of how the signifier can slide under the signified.

Dear Mr. Smith,

We regret to inform you that your paper as submitted has not passed
peer review for our publication due to what reviewers consider
critical errors. However, all three reviewers have agreed that if you
are willing to make the following postmodern changes your paper should
be reconsidered:

1. The footnote on page 31 is poorly structured: please insert an
antonym-with-backslash in opposition to the subject of the sentence.

2. Several terms used in the paper have been obsolete since last
Thursday - please replace them with appropriate terms constructed
through apparently random aggregation of unrelated words. Remember:
just because a perfectly good word with a long history can be found in
the dictionary doesn't mean that it should be used.

3. Please alter your title to read as follows:

"Romancing the Martian Cube: The IDeological (de)Construction of

Intentionality in Sean Pitman's On-line Essays"

We look forward to seeing your revised submission.

Sincerely,
The Editor

nightlight

May 13, 2006, 1:18:08 AM
> What we see is easily accounted for based on the chemistry
> of DNA and the ability of cells to repair DNA.

That depends, to paraphrase one former US president, on what the
meaning of "is" is. Also on what you mean by "accounted for". As
explained in the earlier posts:

M1.
http://groups.google.com/group/talk.origins/msg/788d751e9ac239da?hl=en&
M2.
http://groups.google.com/group/talk.origins/msg/d6349ead1e3ff646?hl=en&

the "random mutation" half of the neo-Darwinian (ND) theory is a
gratuitous ideological assumption (Darwin himself was more guarded in
his writings). While one can certainly generate "random" mutations in
the lab, that doesn't imply that the mutations behind the observed
evolution (micro & macro) are "random" i.e. independent from the
environment in which they arose. After all, it is _trivially_ true that
mutation does depend on its physical cause (e.g. EM radiation, chemical
reactions, cosmic rays,...). Hence mutations are at least correlated
with the most immediate physical (quantum) state of the DNA and its
immediate environment, which in turn are correlated with the larger (in
space) future and past environments i.e. mutations are trivially
correlated with events and states of the past and the future
environments of the organism.

The neo-Darwinian "random mutation conjecture" (RMC) is that the
correlation between (evolutionary) mutations and the DNA environment
stops precisely at the immediate preceding physical causes. If the
physics were still the 19th century mechanistic theory, fixing the
physical state of DNA and its "immediate" environment (within the near
past light cone) automatically leads to a unique future state of the
DNA, hence to a unique mutation. Thus you wouldn't have to care about
wider correlations with future and past environments.

But the physics did advance from the 19th century mechanistic theory.
Our present fundamental physics is Quantum Theory (QT). The key
property of QT relevant for this discussion is non-determinism:
exactly the same physical state of the DNA and its environment generally
leads to different outcomes (mutations). The physics can only tell you
the probabilities of various outcomes (mutations), but nothing in the
most precise physical state of the DNA and its environment determines
what the specific outcome will be in any given instance.

Therefore, even if one were to grant to ND that the immediate DNA
environment (resulting in mutation) is independent from the physical
state of any larger context (which might allow for purposeful
mutations), there is always an impenetrable quantum curtain hiding the
selection of particular outcome/mutation even if the state of DNA and
its environment is known with theoretically maximum precision. Unlike
classical physics, where the 'first mover'/external intelligent agency
had to be constrained at the very beginning of the universe (to set up
the deterministic laws of physics and the initial conditions of all
particles and fields), the quantum physics provides an interface for an
external (or even an internal/subquantum system built upon Planckian
scale objects, see [M2]) intelligent agency to direct the universe
continuously and in a great deal of detail (within the statistical
constraints of QT, since these are empirically fixed).

Of course, one wouldn't grant ND even that the immediate physical
conditions (quantum state) causing mutations are uncorrelated with the
wider past and future contexts. That would need to be established.
Since at the most basic physical level each mutation _is_ always
correlated with the quantum state of the entire universe visible at
that location at the time of the mutation (since physical interactions
can travel at light speed, reach the mutation point and affect the DNA
environment), as well as with the state of the entire universe within
the future light cone of the mutation, one cannot a priori exclude
correlation of the mutation with higher-level patterns in the much
wider past and future environments. One could _in principle_ exclude
these types of correlations (and thus foresight) empirically if one
could control the immediate environment precisely enough (as allowed by
the QT state of the system). Since this is not technologically viable
at present, quite apart from the theoretical QT limitations noted
earlier, no one is in a position to declare that even these less
fundamental kinds of correlations have been excluded empirically.

In fact we know that there are explicit empirical counterexamples to
accepting even this weaker (since it ignores QT dice) RM conjecture.
Namely, there _are_ perfectly natural "intelligently guided" mutations
i.e. there are natural processes which model and anticipate the
phenotypic outcomes of DNA changes and preselect which changes to
perform based on some far reaching purposes well beyond the immediate
physical DNA environment. This "natural process" occurs e.g. in the
brains of molecular biologists. Hence the neo-Darwinian RM conjecture
_must_ be refined in order not to outright contradict the established
empirical facts -- RM conjecture can't do better than to state that all
mutations are random, _except_ (at least) those guided by the
computations in the brains of molecular biologists. Now, that's one
very messy and ugly kind of conjecture.

For example, how would one make such a necessary exception precise in a
formal scientific and mathematical language? What exactly is it about
the arrangements of atoms and fields in the brain of a molecular
biologist that makes them different from all other arrangements? Do you
include in such an arrangement the atoms and fields of the computers
the biologist uses to anticipate and guide his experiments?...

Additionally, on what scientific basis can ND claim that even the
vaguely stated exception is the sole exception to the RM conjecture?
After all, the brains of molecular biologists, or brains in general,
are not the sole instance of intelligent networks (complex systems).
Such intelligent networks are ubiquitous at all levels in nature, from
biochemical reaction webs within cells through social networks and
ecowebs. All such networks model their 'environment' internally, with
its punishments and rewards, play what-if games on the models and
select actions which optimize their particular punishments/rewards. The
neo-Darwinian RM dictum insists that in the entire spectrum of such
intelligent networks there is just a single point, the brain of a
molecular biologist, which purposefully guides mutations, and that all
other networks are entirely disinterested and uninvolved. While they're
at it, they might as well declare that exactly 12 angels can dance on
the head of a pin, and that if you doubt it you can forget about
publishing any papers in leading journals or getting any research
funding or academic position.


> There is no need for an outside agency to account for
> the drift observed.

Only if you use the _empirical_ mutation rate in your calculations. But
that doesn't tell you whether these mutations were "random" or
"intelligent" (purposeful, anticipatory...). For all you know,
"intelligent" mutations can yield exactly the same empirical rates as
those observed. After all, evolution patterns in other realms, such as
languages, exhibit analogous statistical phenomena and patterns, yet
they're clearly evolving through the purposeful activity of intelligent
agents.

An objective, ideologically and religiously neutral science would say
that the nature of the mutations (which resulted in the choices being
offered to natural selection) is not known. Unfortunately, the ND
priesthood has declared that it knows the answer, that the answer is
"random mutations", and that anyone questioning this dictum will have
their papers rejected and their academic career ruined. Well, thank
goodness at least they're not threatening to tear Dembski's limbs apart
and burn him along with the rest of the RM conjecture 'deniers' as in
the good old times.


>> All the biological evolution patterns, ... occur in
>> the evolution of natural and artificial (such as
>> mathematical formalisms) languages, religions, arts,
>> scientific theories, technologies,... (just recall
>> Dawkins memes). ...
>
> You are confusing models with reality.

Not at all. Your reasoning apparently lost steam after one step. The
classification into "object" and "model of that object" is not
absolute, i.e. the model M1(A) of some object A may itself be an object
for another model M2(M1(A))... etc. Hence, a language which models some
'reality' (object) may be an object of some other modeling scheme and
its language. {One can also define a model M3 via M3(A)=M2(M1(A)); M3
is then usually called a meta-language or meta-model of A.} Your
objection reminds me of one of Galileo's inquisitors accusing Galileo
of confusing rest with motion.


>> ... since it cannot _compute_ either how many total DNA
>> configurations could be produced in a given setting,
>> much less how would every DNA change at the atomic
>> level affect the phenotype (in order to enumerate
>> the 'favorable' or at least neutral outcomes).
>
> WTH are DNA configurations?

That was described in the 1st post, [M1]. To test the neo-Darwinian RM
hypothesis for a single mutation, you need to estimate how large the
combinatorial space is that contains all arrangements of the available
atoms which can arise (via the laws of QT) from a given DNA state upon
the action of a single mutagenic event (e.g. a cosmic ray, an alpha
particle...). (See [M1] & [M2] for a description of why you need such
a number.)

> That certainlly is not a term used by us mol bio folks.

As pointed out in [M1],[M2], the present mathematical and computational
tools are much too primitive for such computations. And since
questioning the RM conjecture is officially forbidden, there was no
need to create a term describing quantities no one in academia is
allowed to think about.

> And is "change at the atomic level" just an akward way
> of saying mutation?

It was stripped down to the minimum required by the combinatorial
reasoning in order to avoid any extraneous connotations and arguments
by dictum. Why drag in all the baggage not needed for the combinatorics
and waste time on irrelevant tangents and associations, as your next
sentence illustrates:

> DNA is pretty much DNA at an atomic level. You gots your
> A's, C's, G's. and T's.

That comment illustrates the problem noted above. Once you eliminate
all possible molecules which can arise e.g. upon a single alpha
particle event, you have drastically reduced the combinatorial space of
all possible configurations in the 1-mutation neighborhood of a given
DNA state; hence you have reduced the denominator in the expression
P=F/M (cf. [M1]) for the probability of a favorable mutation,
artificially inflating this probability.

Most mutagenic events will likely result in molecules which are not
proper DNA molecules. To test the RM conjecture, you need to allow all
such configurations (all possible molecules which can arise from the
given physical causes) and give them equal a priori probability as
outcomes, whether they are a proper DNA molecule or some other
'improper DNA' molecule. After all, all such events take time and
consume reproductive resources. The RM conjecture requires that you
give all such events equal a priori probability, P1=1/M. What you
suggest is a pre-selection based on consequences expected later, which
removes _upfront_ all mutagenic events which result in improper DNA
(i.e. create some other molecules) and sets their cost (in time and
resources) to 0. That is cheating, since it contradicts the RM
conjecture (which you wish to prove/test) by setting the probabilities
of the 'excluded' outcomes to 0. It is also a blatantly teleological
exclusion, which further contradicts the RM conjecture.
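The inflation effect is simple arithmetic. Here is a toy sketch of it
(all counts are hypothetical, chosen only to illustrate how shrinking
the denominator of P=F/M inflates the apparent probability):

```python
# Toy illustration (hypothetical numbers) of the P = F/M argument:
# pre-excluding "improper DNA" outcomes shrinks the denominator and
# inflates the apparent probability of a favorable mutation.

F = 10               # favorable outcomes (assumed)
M_proper = 10**4     # outcomes that are still proper DNA (assumed)
M_improper = 10**6   # outcomes that are other molecules (assumed)

M = M_proper + M_improper   # full combinatorial space of outcomes

P_full = F / M              # equal a priori weight to every outcome
P_inflated = F / M_proper   # denominator after teleological exclusion

print(f"P with full space:     {P_full:.2e}")
print(f"P after pre-selection: {P_inflated:.2e}")
print(f"inflation factor:      {P_inflated / P_full:.0f}x")
```

With these made-up counts the pre-selection inflates the probability by
a factor of about a hundred; the point is the direction of the bias,
not the particular numbers.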

> Evolution is certainlly not guided, it is contingent.
> Look at the Luria Delbruck experiment for a simple example.

That and later similar experiments only show that the phage resistance
arose via mutations in the exposed cultures (i.e. it wasn't a
pre-existing property). The results don't address at all the question
of the nature of the mutations, i.e. whether the mutations were
"random" or "purposeful". Consider a counterexample where you get
exactly the same adaptation pattern even though the innovation was
clearly purposeful -- technological evolution. Say a sudden shortage of
some vital raw material places a set of companies in danger of
bankruptcy unless they can find a substitute or some other product
design. These companies are analogous to the separate Petri dishes in
the LD experiment, and the shortage is analogous to the phage challenge
(or, an even closer analogy, to the sugar in Cairns' experiment). The
affected companies would switch into crisis mode, brainstorm, bring in
new, more creative people, ... etc. In the end, depending on the
difficulty of the problem and the time constraints, you may have one or
more companies that solved the problem and survived while all the rest
went under. (If the problem was easy, all of them may solve it.) Hence
you could get the same type of results as LD, yet here the "mutations"
in the manufacturing process were "intelligent" (purposeful,
guided....). Hence the type of evolutionary pattern observed by LD
counts neither for nor against the "random" mutation conjecture.

>> Ahhh, no. You seem to be caught in the logical loop of
>> "if DNA has X # of atoms, then something like 10^X degrees
>> of freedom are possible, therefore evolution would take
>> Y^10^X sample size". While you certainly can shuffle
>> numbers all you like you are confused about some basic
>> chemistry and biology. Using that sort of logic we
>> preclude any cell from existing for more than an fraction
>> of a second. Using that sort of logic, there is no way
>> that any enzyme could exist or function.

You've lost the context of the 10^X combinatorial space (see the
explanation of "configuration" above). You need to count all outcomes
of mutagenic events according to the time and resources they consume,
hence count all possible molecules which can arise as the result of a
mutagenic cause from a given initial state. You can, of course, take
into account how long it takes to eliminate a particular configuration,
e.g. some may destroy a cell right away, while others may allow it to
live longer and propagate further. But you can't arbitrarily set the
cost of gross failures to 0. That would be equivalent to introducing
some kind of intelligent agency/process which can avoid such gross
failures and their costs upfront (before consuming physical resources
and time). That then contradicts your objective, which was to prove the
RM conjecture.

For the RM conjecture, you cannot assume that anything is eliminating
gross failures upfront based on some foresight about consequences --
within RM, all outcomes of mutagenic events are taken to be independent
of the later consequences. In your reasoning, you explicitly invoke
later consequences to eliminate _upfront_ non-functioning enzymes etc.,
and declare that you do not wish to count such configurations as
possible physical/chemical outcomes of a mutagenic event (say an alpha
particle impact). So you have already changed sides and crossed into
the ID camp.

> Organisms are not free to explore the entire phase
> space available based on their genome size and
> randomlly rearranging the DNA.

Organisms are not free, but the molecules arising from mutagenic events
are free to explore the combinatorial space of all possible molecules
implied by the QT model of the mutagenic events. For the RM conjecture
to be true, you cannot eliminate upfront any such configurations based
on much later consequences for the organism. Your reasoning is clearly
siding with ID.

>> As explained, the empirical mutation rate is a fact
>> orthogonal to the question of whether the mutations
>> which differentiated humans from apes were random
>> or guided (selected with look-ahead).
>
> What we see in comparison of the genomes shows that
> the changes are biased towards genes implicated in
> neural development, diet, and disease. Sort of what
> you would expect based on the biology.

How does that relate to the quantitative criteria (described above and
in [M1]) needed to decide between "random" vs "intelligent"? Apply your
reasoning to the evolution of technology or languages, which are
driven/guided by intelligent agents, and show how the analogous
characterization fails there. Your argument is simply a non sequitur.
Purposeful mutations in no way contradict the existence of gradual
change (consider human languages).

> b) incompetant/malevolent/trickster designer.

Again, look at the evolution of the QWERTY keyboard layout or Windows
OS. Both are very kludgy, yet "intelligent" agents were behind them.
The presence of foresight doesn't mean the presence of perfect
foresight. You are increasingly clinging to strawman arguments (perfect
designer, supernatural, magic,... etc). Not a good sign for the
strength of your case.

> And linkage disequilibrium of genes near these genes
> showing strong selection indicate either a) evolution
> or b) incompetant/malevolent/trickster designer.
>
> Which is simpler, and which has actual evidence?

Simplicity is a useful criterion only when everything else is equal,
i.e. all candidates are equally correct (e.g. an arithmetic in which
addition A+B always yields 1 for any A and B might be simpler than the
conventional one, yet it would be useless).

> Bad model, and even worse conclusion.

You need to provide a counter argument that holds for longer than one
post before we reach that conclusion.

> Actually, most mutations fall into the macroscopic realm,
> at least for QT. We are talking about relatively large
> chuncks of stuff being moved- deamination of cytosine,
> oxidative adducts, strand breakage and repair etc. You
> are blathering nonsense here if you think there are any
> Schrodinger cats involved. On the off chance you and
> think that most mutations involve radioactive decay,
> you are very much mistaken.

You have again lost the context for the QT part of the argument (see
[M2] and the links there for more details). QT is relevant in the sense
that precisely the same initial and boundary state of a DNA molecule
and its environment can yield different outcomes, and that there is
nothing in the physical state (as presently understood) that dictates
which of the outcomes will occur. It is a curtain that no empirical
means can peek behind. Hence, as explained in [M2] (with the analogy of
10 coins, where you're trying to figure out whether I am throwing the
coins or laying them out by hand behind the curtain), you need a
combinatorial model (as sketched in [M1],[M2]) to test whether the RM
conjecture can reproduce the observed rates of evolutionary innovation.
In the 10-coin example, the combinatorics is simple enough to test the
nature of the tosses (random tosses or put down by hand). With the RM
conjecture, the combinatorics & physics are too complex for our present
techniques.
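For the 10-coin case the "random tosses" model really is trivial to
work out. A minimal sketch (the comparison scenario at the end is a
hypothetical illustration, not data from any experiment):

```python
# Under the "genuine tosses" model, the number of heads in a round of
# 10 fair coins follows a Binomial(10, 1/2) distribution, which can
# be computed exactly and compared against observed per-round counts.

from math import comb

def binom_pmf(n, k, p=0.5):
    """P(exactly k heads in n tosses with heads-probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Model prediction for 10 coins:
for k in range(11):
    print(f"P({k} heads) = {binom_pmf(10, k):.4f}")

# Hand-laid rounds that always show exactly 5 heads would match the
# mean, but grossly over-represent k = 5, whose predicted share is
# only about 24.6%; tallying rounds per k exposes the discrepancy.
```

The analogous tally for mutations would require the combinatorial model
of the outcome space, which, as noted, is presently out of reach.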

Therefore, intelligent vs random mutations is an open question. The ID
arguments via "irreducible complexity" are merely plausibility
arguments for ID, not the decisive criterion sketched in [M1].

> And before you call someone a Lysonkoist, please learn
> what a Lysenkoist is. He certainlly did not beleive
> in random changes as a driving force for changes in
> organisms. If you are refering to his suppresion of
> what you would call "Neo-Darwinists" in the former USSR,
> then you should use the term "Stalinist". Calling a
> biologist a "Lysenkoist" is confusing.

ND is neo-Lysenkoism, the "new and improved" Lysenkoism. Yet you can
still see the same kind of hyper-sensitive, overly defensive,
zero-tolerance totalitarian mindset in both the old one and the new
one. Just listen to Ward's "arguments" (dictums) and arrogant
totalitarian style in the two Ward-Meyer debates. It makes one kind of
sad to think what kind of intellects teach kids nowadays.

> Your model would make life impossible, so I suggest a
> change in model parameters.

I didn't propose any model but merely sketched the quantitative
criteria needed to distinguish between random vs intelligent mutations.
This is no different in principle from the criteria one uses to test
random number generators. The RM conjecture implies certain
probabilities for favorable mutations, for which we have empirical
frequencies; hence once we can estimate these implied probabilities
(which we cannot do at present), we can decide whether the conjecture
is valid. Therefore the validity of RM is an open and legitimate
question.

> And as a final point, the real problem of ID is the
> invisible pink unicorn (or it's hipper cousin the FSM).
> Any agent that can be postulated to do anything, in
> any way, and hide itself from inspection, is by
> definition unscientific. No predictive basis, no
> testibility, no way to falsify.

Why does the intelligent agent behind evolution have to be forever
inaccessible? While it is conceivable that what we think of as the
"universe" and ourselves may be a simulation in some gigantic game of
Life, that is by no means the only possibility.

Even if one limits oneself to the present laws of matter-energy (thus
ignoring the existence of mind stuff), it is perfectly conceivable that
various biochemical networks (including "junk" DNA) are intelligent
systems which can internally model DNA modifications and their possible
phenotypic results in order to select the "optimum" (within the
limitations of their computational capacity) mutation to make in the
actual DNA.

That such a possibility is not far-fetched at all is suggested by
already existing examples of genetic look-ahead by natural processes
(besides the brains of molecular biologists or dog breeders). For
example, we know that societies, through customs, folklore, religions,
arts, laws, etc., also apply foresight regarding the future genetic
combinations they wish to create. E.g. consider the stigma (and all its
manifestations) of a criminal past -- society does not wish to
propagate the genes of a criminal. Even the mere act of locking someone
up for several years performs a similar genetic look-ahead function (it
reduces the mating chances of prisoners). The content of our prisons
shows which kind of genetic material the society wishes to eliminate.

Hence we know that such look-ahead regarding the future genetic content
certainly exists at the level of individuals and the level of society.
There is no reason or argument why these two levels are all there is. I
think similar genetic look-ahead occurs at all levels, above and below
(possibly even below our present "elementary" particles), and in a
multitude of ways. The neo-Darwinian dogmatic insistence on "random"
mutations (which is rooted in 19th century mechanistic materialism,
where everything was modeled as 'machines'), along with its Lysenkoist
methods of enforcing thought conformity, is a drag on science.

If one also recognizes the existence of mind stuff, i.e. that it may be
possible to scientifically model answers to questions like 'what is it
like to be this particular arrangement of atoms and fields that makes
up you', other possibilities of 'intelligent agency' open up. For
example, it is conceivable that what in our present QT are
'irreducibly' non-deterministic choices are actually choices made by a
future model's counterparts of the mind stuff.

nightlight

May 13, 2006, 1:54:28 AM
>> My argument is not about or based on short times available
>> for evolution.
>
> Meaning that you do accept standard timeframes?

Of course. Anything that has empirical support is fine within its
uncertainty margins (provided you state them). That wasn't my argument.
The argument was about the sufficiency, at the mathematical and the
empirical level, of "random" mutations in replicating the observed
rates and quality (complexity) of evolutionary novelties. You are
welcome to point out what data and what calculations show that "random"
mutations can replicate the observed rates of novelty. Note that you
can't avoid the mathematical modeling part due, at the very least, to
the quantum dice curtain, which doesn't allow you to see the precise
mechanism behind the individual choices (QT only gives you the
probabilities of the various choices, and no theory or present
observation will let you peek behind the QT curtain). In practice, and
at the present technological level, the domain of ignorance of the
precise details of individual choices is much larger than what the
final quantum curtain prescribes.

>> The (approximately) "constant rate" of DNA change is an
>> empirical fact. It tells you absolutely nothing about
>> the presence or absence of any guidance (or anticipation,
>> foresight...) in the generation of those DNA changes
>> (which are then subject to natural selection).
>
> You mean does science exclude the possiblity of
> theistic evolution?

That's not what I meant and it has no relation to what I said. If you
are looking at a sequence of, say, presumed coin tosses, just counting,
say, the totals for heads and tails doesn't tell you whether the
sequence came from genuine "random" coin tosses or from hand-placed
outcomes. You can eliminate (in the probabilistic sense) the latter by
comparing the predictions of the "random" model (the binomial
distribution for genuine coin tosses) with the observed frequencies of
various subsequences (e.g. counts of 01 vs 00, etc.). Your RM argument
seems to be that since you don't see any pattern by just eyeballing the
sequence of 0's and 1's, it must be due to random coin tosses. My
argument is that, since you're not allowed to watch the tossing but
only to see the results (in biology, due to quantum and technological
indeterminism), you need a mathematical model for the generator, such
as the binomial distribution for coin tosses. Then you need to compare
whether its predictions match (and how well) the empirical counts you
can obtain from the actual sequence. That was not done for the RM
conjecture in ND theory. Just declaring it so, and applying Lysenkoist
methods to academically destroy anyone doubting the RM conjecture,
merely emphasizes its fundamental hollowness and sterility.
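The subsequence-count idea can be sketched concretely. A minimal
version (the sequences, seed and thresholds below are illustrative
assumptions): for fair tosses the four digrams 00, 01, 10, 11 should
each appear with frequency about 1/4, and a Pearson chi-square
statistic flags sequences whose digram counts deviate too much.

```python
# Compare a "random tosses" model prediction against observed digram
# counts; hand-laid patterns that look innocent by eyeball fail badly.

import random
from collections import Counter

def digram_chi2(seq):
    """Chi-square statistic of overlapping digram counts against the
    uniform prediction of the 'genuine random tosses' model."""
    digrams = [seq[i:i+2] for i in range(len(seq) - 1)]
    counts = Counter(digrams)
    expected = len(digrams) / 4
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in ("00", "01", "10", "11"))

rng = random.Random(42)
tossed = "".join(rng.choice("01") for _ in range(10_000))
by_hand = "01" * 5_000          # a hand-laid alternating pattern

# With 3 degrees of freedom, chi2 above ~11.3 rejects "random" at the
# 1% level; the alternating pattern scores in the thousands.
print(f"random tosses: chi2 = {digram_chi2(tossed):.1f}")
print(f"hand pattern:  chi2 = {digram_chi2(by_hand):.1f}")
```

For the RM conjecture the analogous comparison needs the (presently
uncomputable) combinatorial model in place of the binomial one.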

> No. Nothing can exclude the possibility of an unspecified
> and unobservable appearing to act via naturalistic methods.

Well, if logic, facts and imagination fail you, there is always that
strawman of the 'supernatural' to fall back on.

> One can propose a magical entity as the cause for
> *anything*. But what you have to do is demonstrate
> that such a magical entity is *necessary*.

Again the same strawman. You're the only one bringing in magic. I am
asking you to show results that confirm the "random" nature of
mutations, specifically predictions of adaptation/innovation rates
based on the "random" model and their comparison to empirical
frequencies. The difference between the two would imply the degree of
likelihood of the "random" mutation model of the observed evolution
rates (micro and macro). If you don't have any, then the question of
whether the mutations are random or not is open (which is my position
here).

In case they're not "random" (poor predictions), there are plenty of
alternative models (some of which I pointed out before, e.g. subquantum
complexity between the current "elementary" particles, 10^-16 cm, down
to Planckian scale objects, 10^-33 cm). In fact you don't even need
subquantum complexity, since you already know that there _are_ natural
processes guiding non-random, even purposeful and anticipatory,
generation of genetic novelty (e.g. the brain of a molecular biologist,
or for that matter even the brain and senses of any animal looking for
a mate).

There are no magic or supernatural elements in any of them (assuming
you exclude the existence of "mind stuff" which, even though it exists,
is presently not modeled by the natural science of matter-energy
transformations). I see no reason, much less a proof, that those must
be the sole examples of such purposeful natural processes. I don't even
see how one would express such an exception in a precise scientific
mathematical language so one could model it.

Yet the neo-Darwinist RM dogma claims it is so, that those exceptions
(which it can't even describe in a precise mathematical language) are
the only exceptions to the RM conjecture. Why is that so? My guess is
that such intelligent, purposeful processes exist at all levels, at
least from the ecoweb scale down to the biochemical reaction webs
within cells (including the "junk" DNA, which I think is a key part of
a highly intelligent neural network that models and evaluates
internally the possible transformations of DNA and their phenotypic
consequences before committing its best pick to the much more costly
physical implementation/offspring).

>> After all, the linguists use similar methods as
>> evolutionary molecular biologists to study the
>> evolution and relations among natural languages
>> (which are created and transformed by intelligent/
>> purposeful agents).
>
> Yes. There is a lot of similarity in the two processes.
> A difference, of course, is that the intelligent/purposeful
> agent involved in the evolution of languages are not
> undetectable or magical.

That little strawman is quite busy in this debate. Just because your
imagination fails to come up with anything besides magic for a given
phenomenon, it doesn't mean that everyone else must also be assuming
magic as the "answer".

There is nothing magical or grossly undetectable in the examples above
of natural processes which we all know to exist and which intelligently
and purposefully guide DNA transformations from generation to
generation. There is also nothing mathematically or computationally
special about the intelligent networks which act as the intelligent
agents behind those DNA transformations. Such networks exist at all
scales, above and below the human brain. They all interact in some way
with genomes; hence they build (as we do at our own level) their own
internal models of genomes and their environments. The most natural
position is to assume that they all exert their anticipatory influences
on the genome in some way, each one affecting the aspects of DNA
relevant for its own interactions with it and the related
punishments/rewards. The ND conjecture -- that only one particular
network (the brain of the 'molecular biologist') can guide mutations,
and all others are somehow excluded (despite interactions), so that
except for that one unique case the mutations must be "random" -- is
strained, unnatural and ambiguous (e.g. how do you give a precise
mathematical characterization of, or model, this single odd
exception?).

Note also that molecular physics, which is applied quantum mechanics,
is subject to the quantum dice behind the curtain. No theoretical
result or existing experiment gives any clue as to what (if anything)
is behind the quantum indeterminism, i.e. how quantum outcomes get
picked. There are also plenty of technological and mathematical
modeling unknowns in molecular biology, e.g. what does the "junk" DNA
do and how does it do it? How do the biochemical reaction webs which
include the "junk" DNA work, what do they model, what do they "think"?
Just because you can sequence it doesn't mean you know or understand
how it works, how it affects the phenotype and by which chain of
reactions.

The present level of knowledge in molecular biology is not much better
than that of a caveman looking at a computer -- he can figure out that
by pressing this or that key he can get some light dots on the screen,
or that pulling the cord darkens the screen, and other such superficial
correlations. But he has no clue (or even a language to conceptualize
what the question is) about the underlying processes that make such
correlations happen, or about the (perfectly natural) intelligent
agency (modern technology & science) that designed and built it. For
him it's magic, yet we know that no magic is needed. Assigning magic to
phenomena you may not understand, as you repeatedly do, is a primitive
form of thinking which short-circuits any further inquiry by attaching
the empty label "magic" to the puzzling as a sedative. Scientific
thinking is to look at it as a puzzle or an open question, a challenge
to one's imagination.

> Languages, unlike organisms, do not self-reproduce.

Neither do ecosystems (or species). But the elements/building blocks of
languages do reproduce, just as the building blocks of ecosystems or
species (organisms) reproduce. For example, you add the suffix "ed" to
verbs to form the past participle. The pattern "ed" is thus reproducing
itself (in its particular realm). Numerous forms of pattern
reproduction exist at every level (e.g. semantic, grammatical,
phonetic) of natural and artificial (mathematical, scientific)
languages, propagating via analogies. There is an even more elemental
reproduction (analogous to cellular reproduction) of patterns -- every
time someone learns or uses such a pattern, the pattern has managed to
reproduce. How many times have the patterns "DNA" or "is" or "the"
reproduced in this post? Of course, this reproduction uses humans,
books and computers, instead of atoms and fields, as its substratum,
but that is a matter of network implementation. The basic mathematical
properties of such networks with adaptable (to punishments & rewards)
links are deduced independently of the implementation of the links,
nodes and punishments/rewards.

> They are manufactured by a known intelligent agent.

Known? Really. That's like the caveman with the computer saying he
"knows" how it works, and to demonstrate that he "knows", he shows how
when he hits a key the light pattern pops up, just as he predicted it
would. As far as he knows, he "knows" it. Similarly with humans
generating languages -- for example, there is no scientific model at
present which tells you anything about the mind-stuff (what is it like
to be some arrangement of atoms and fields? what is redness like? what
is it like to get an idea?). We don't know whether the mind-stuff is an
epiphenomenon or a causal agent (which can affect matter-energy
transformations). I am not talking about telekinesis but about the
quantum dice. We don't know how the brain picks its quantum outcomes
when it thinks (perhaps the mind-stuff does the job, as von Neumann and
Wigner conjectured). Hence, you don't know the "intelligent agent"
behind language much better than the caveman knows the one behind the
computer. Just as he can predict the pattern from a button press, we
can roughly predict the neurological and behavioral effects of
stimulating certain brain regions. In both cases there is also some
"little" stuff going on that is sort of mysterious, but that probably
doesn't matter much. We've got this handful of correlations, so we both
"know" the agent. We can also name it as "proof" that we know it. Yeah,
sure. (The same goes for "knowing" how evolution works.)


> I have explicitly pointed out that no DNA configuration is
> so deviant from configurations held in common by both
> chimps and humans that any search of some hypothetical
> total sequence space of possible sequences ever is likely
> to have occurred. *All* that is needed is *modification*
> of the subset of total sequence space that existed in
> the common ancestor.

You should read more carefully before responding. I was not talking
about space of all possible arrangements of a given number of atoms
but about all arrangements reachable by a single (or by some number n)
mutagenic event from a given initial state. That space (labeled as M in
the first post) is huge, too. To test the "random" mutation conjecture
behind evolution, for single or for some n>1 mutations, you still need
to have combinatorial or probabilistic model of that "n-mutation
neighborhood" space and compare its predictions with the empirically
observed frequencies of favorable novelties at the distance of
n-mutations from the same initial state.
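For intuition about how big that n-mutation neighborhood gets, here is a rough sketch (my own illustration, not from the post): counting only single-base substitutions, a sequence of length L has C(L, n) * 3^n distinct n-step neighbors; insertions, deletions and duplications would enlarge the space further.

```python
from math import comb

def point_mutation_neighborhood(length, n):
    """Distinct sequences reachable from a DNA sequence of the given
    length by exactly n single-base substitutions: choose the n sites
    that mutate, then pick one of the 3 alternative bases at each."""
    return comb(length, n) * 3 ** n

# Even a modest gene and a handful of mutations give an enormous space:
print(point_mutation_neighborhood(1000, 1))  # 3000 one-step neighbors
print(point_mutation_neighborhood(1000, 5))  # about 2e15 five-step neighbors
```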

The ID conjecture is that the empirical frequencies of favorable
novelties (being offered to natural selection to weed out) are much
greater than what a "random" mutation model would predict. The
neo-Darwinian conjecture is that the observed empirical frequencies are
exactly what the "random" mutation model would predict. Neither side
can at present work out the math and computations of the model to prove
their case (due to our present limited computational and modeling
capabilities). The ND priesthood has not won this argument but has
merely bullied the alternatives out of academia through Lysenkoist
style censorship, economic and professional intimidation. The side
which has to reach for these kinds of "scientific" methods in order to
cling on just a bit longer, is as a rule the weaker one on the
substance (recall USSR and its communism).

> The underlying assumption of these arguments is a
> specific non-evolutionary mechanism that involves
> inventing a sequence from total sequence space rather
> than modifying a sequence from an ancestor.

Your sloppy reading again. The space M of possible DNA configurations
is the space at distance n mutagenic events from the original sequence,
where n=1,2... It is not a space of all possible configurations of a
given number of atoms (which would be much bigger).

> Given that such calculations are utter nonsense wrt
> to modelling how evolution *actually* claims to work (and,
> from direct, observable *evidence*, actually did so in
> the case of chimps and humans), what is the use of
> presenting these ignorant "747 forming in a tornado"
> calculations? Evolution works by descent with modification.

You're getting quite a mileage out of that strawman. Read again in the
1st post what the 'total space of configurations' and M mean.


> Anyone can always claim that mutations occurred by a
> God producing them, just as you can claim that Lady
> Luck is responsible for the cards you received.

There are quantitative criteria to distinguish whether the deck and
shuffling are stacked or fair. This is no different from testing the
quality of a random number generator by applying random number tests.
The idea is to use the presumed model of the generator (such as a
binomial or multinomial distribution), produce its predictions, and
compare them with the empirical counts obtained from the sequence. That
lets you decide whether the "random" model is valid, and how well.
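As a toy version of such a test (my own sketch, not from the post): even when a doctored bit sequence matches a fair coin on total heads and tails, a chi-square comparison of its digram (pair) counts against the fair-coin prediction exposes it.

```python
import random
from collections import Counter

def digram_chi_square(bits):
    """Chi-square statistic comparing observed overlapping-digram counts
    (00, 01, 10, 11) against the fair-coin model, which predicts each
    digram in 1/4 of the pairs. Small values fit the random model;
    large values reject it."""
    n = len(bits) - 1
    pairs = Counter(bits[i:i + 2] for i in range(n))
    expected = n / 4
    return sum((pairs[d] - expected) ** 2 / expected
               for d in ("00", "01", "10", "11"))

random.seed(0)
fair = "".join(random.choice("01") for _ in range(10000))
stacked = "01" * 5000  # "hand-put" outcomes: heads/tails totals still match

print(digram_chi_square(fair))     # small: consistent with random tosses
print(digram_chi_square(stacked))  # enormous: the random model is rejected
```

The same logic scales up: replace digrams with counts of mutation classes, and the fair-coin model with whatever "random mutation" distribution is being claimed.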

> But you seem to be claiming that the rate of such mutations
> is too slow to be happening in the absence of a God (or
> that you could not have gotten those cards in the absence
> of Lady Luck).

No, I am only claiming that no one has computed whether "random"
mutations are too slow to generate the observed rates and quality of
evolutionary novelties. I am saying that this is an unsolved problem
which has, at least given sufficient computational and modeling
resources, a precise answer.

> All I am doing is pointing out that there is no *need* for
> such an outside agent.

You can "point out" whatever you want. That doesn't show that the
"random" mutations are just right. To show that there is no need for
anything but "random" mutations, you need to make predictions from the
statistical/combinatorial model of random DNA mutations and
compare them with the observed frequencies.

As to "outside agent" that's your strawman again. "Outside" of what?
Outside of the backward & forward light cones of a DNA molecule (the
space-time regions which can exchange interactions with it at up to
light speed)? Well, that would cover the entire visible universe.

> Until ID presents a testable idea that would require the
> *necessity* of the action by some outside agent, my thesis
> that such an agent is *unnecessary* because *all* of
> the observed differences between humans and chimps are
> consistent with known rates and types of mutation from
> ancestral sequences, it remains ignored by and irrelevant
> to science because it fails the razor test.

I described a test. It is the same kind of test you would use to test
"random" generator for any other sequence, such as that of coin tosses.
Make predictions from that RM model of DNA and let's see whether it
matches the empirical frequencies of the evolutionary novelties being
submitted to the natural selection. There is plenty of unknown, from
quantum dice to present limitations of technology and mathematics, for
an entire universe of intelligent agents to fit in and causally interface
to the DNA mutations.


> And that is relevant to the modification of pre-existing
> useful sequences how? That is an interesting model of
> the "747 in a tornado" explanation.

Shadow boxing again...

> There have been many, many, many empirical tests of the
> idea that mutation occurs *independently* of need for
> the mutation. In every case, from the Luria/Delbruck
> experiments on, the finding is that mutation occurs
> *independently* of the need for that mutation.

I responded to LD and similar experiments in the previous post:

http://groups.google.com/group/talk.origins/msg/748587b6e86a28c8?hl=en&

These experiments have nothing to do with the question of "random" mutation
vs guided (anticipatory, intelligent,...) mutation. Namely, the same
kind of statistical behaviour can be observed for technological
innovations which clearly do have intelligent agents behind them (see my
previous post for details on LD). Generally, any kind of statistical
phenomena and patterns you can observe in the technological evolution
(or of science, languages, cultures, etc) where the intelligent agents
are clearly behind them, cannot be claimed as a proof that the similar or
analogous statistical phenomena and patterns imply "random" mutations
in the biological evolution.

Richard Forrest

May 13, 2006, 4:16:36 AM

nightlight wrote:
> > What we see is easily accounted for based on the chemistry
> > of DNA and the ability of cells to repair DNA.
>
> That depends, to paraphrase one former US president, on what the meaning
> of "is" is. Also on what you mean by "accounted for". As explained in
> the earlier posts:
>
> M1.
> http://groups.google.com/group/talk.origins/msg/788d751e9ac239da?hl=en&
> M2.
> http://groups.google.com/group/talk.origins/msg/d6349ead1e3ff646?hl=en&
>
> the "random mutation" half of the neo-Darwinian (ND) theory is a
> gratuitous ideological assumption (Darwin himself was more guarded in
> his writings).

There's nothing "ideological" about the observation that mutations
occur at random in respect to reproductive fitness. It's an hypothesis
which can be tested using the scientific method.

As for Darwin being "more guarded in his writings", that's because he
didn't know about genetics, and hence could not have made any comment
on mutation patterns.

> While one can certainly generate "random" mutations in
> the lab, that doesn't imply that the mutations behind the observed
> evolution (micro & macro) are "random" i.e. independent from the
> environment in which they arose.

The independence of mutations from their environment is an hypothesis
which can be tested using the tools of science. It has been tested, and
there is little in the results to suggest any correlation.

> After all, it is _trivially_ true that
> mutation does depend on its physical cause (e.g. EM radiation, chemical
> reactions, cosmic rays,...). Hence mutations are at least correlated
> with the most immediate physical (quantum) state of the DNA

What the hell is the "quantum state" of DNA?
The term is meaningless at the scales of macromolecules.
Stop using scientific terms you don't understand.

> and its
> immediate environment, which in turn are correlated with the larger (in
> space) future and past environments i.e. mutations are trivially
> correlated with events and states of the past and the future
> environments of the organism.

So now you are suggesting that there is a correlation between mutations
and *future* events?

Would you care to provide some evidence to back up this assertion?

>
> The neo-Darwinian "random mutation conjecture" (RMC) is that the
> correlation between (evolutionary) mutations and the DNA environment
> stops precisely at the immediate preceding physical causes.

If you mean that the process does not react to events which have not
yet happened, that is the case for all processes in all fields of
science (with the possible exception of quantum physics, but then, as
Richard Feynman said, nobody understands quantum physics).

> If the
> physics were still the 19th century mechanistic theory, fixing the
> physical state of DNA and its "immediate" environment (within the near
> past light cone) automatically leads to a unique future state of the
> DNA, hence to a unique mutation. Thus you wouldn't have to care about
> wider correlations with future and past environments.
>

More gobbledegook.

> But the physics did advance from the 19th century mechanistic theory.
> Our present fundamental physics is Quantum Theory (QT). The key
> property of QT relevant for this discussion is non-determinism: the
> exactly same physical state of the DNA and its environment generally
> leads to different outcomes (mutations). The physics can only tell you
> the probabilities of various outcomes (mutations), but nothing in the
> most precise physical state of the DNA and its environment determines
> what the specific outcome will be in any given instance.
>

So the universe is inherently unpredictable.
What is your point?

> Therefore, even if one were to grant to ND that the immediate DNA
> environment (resulting in mutation) is independent from the physical
> state of any larger context (which might allow for purposeful
> mutations), there is always an impenetrable quantum curtain hiding the
> selection of particular outcome/mutation even if the state of DNA and
> its environment is known with theoretically maximum precision. Unlike
> classical physics, where the 'first mover'/external intelligent agency
> had to be constrained at the very beginning of the universe (to set up
> the deterministic laws of physics and the initial conditions of all
> particles and fields), the quantum physics provides an interface for an
> external (or even an internal/subquantum system built upon Planckian
> scale objects, see [M2]) intelligent agency to direct the universe
> continuously and in great deal of detail (within the statistical
> constraints of QT, since these are empirically fixed).

In other words, you are asserting from your poor understanding of
quantum physics that God fiddles with the universe via quantum events.

This does not work. The completely random nature of such events is what
allows quantum physics to make extremely accurate predictions about the
behaviour of matter and energy at larger scales.

So tough shit. Your assertion does not hold up to scrutiny.

>
> Of course, one wouldn't grant ND even that the immediate physical
> conditions (quantum state) causing mutations are uncorrelated with the
> wider past and future contexts.

Do you have any idea of the differences in scale between that at which
quantum events occur and macromolecules such as DNA?

I suggest that you find out.

> That would need to be established.
> Since at the most basic physical level each mutation _is_ always
> correlated with the quantum state of entire universe visible at that
> location at the time of the mutation (since physical interactions can
> travel at light speed, reach the mutation point and affect DNA
> environment), as well as with the state of entire universe within the
future light cone of the mutation, one cannot a priori exclude
> correlation of the mutation with higher level patterns in the past and
> future much wider environments. One could _in principle_ exclude these
> types of correlations (thus a foresight) empirically if one could
> control the immediate environment precisely enough (as allowed by the
> QT state of the system). Since this is technologically not viable at
> present, even without the theoretical QT limitations noted earlier,
> there are technological limitations precluding anyone from declaring
> that even these less fundamental kinds of correlations have been
> excluded empirically.

And this word-salad means what?
It is fundamental to quantum physics that quantum events are purely
random.
This simple fact blows your wordy assertions apart.

<remainder snipped on the grounds that life is too short to waste it
reading badly worded bullshit>

RF

nightlight

May 13, 2006, 12:08:11 PM
> There's nothing "ideological" about the observation
> that mutations occur at random in respect to
> reproductive fitness. It's an hypothesis
> which can be tested using the scientific method.

Testable scientific hypotheses are often supported or opposed
based on ideology (or religion, tastes, greed, etc). You seem to live
in a glass bubble if you believe otherwise. Just peruse some science
on, say, genetics of race or gender or sexual orientation, IQ, to see
how ideology and scientific hypotheses and research interact in real
life. After all, science needs money and who pays the piper calls the
tune.

> As for Darwin being "more guarded in his writings",
> that's because he didn't know about genetics, and hence
> could not have made any comment on mutation patterns.

The RM conjecture isn't in any essential way reliant on the particular
molecule that transmits heritable traits. One can formulate RM with
respect to some generic carriers of heritable traits without knowing
what molecules implement such a carrier.


>> Hence mutations are at least correlated with the most
>> immediate physical (quantum) state of the DNA
>
> What the hell is the "quantum state" of DNA?
> The term is meaningless at the scales of macromolecules.

The quantum state of a composite system is a vector (or generally a
statistical operator) in the Hilbert space constructed as a direct
product of Hilbert spaces of its subsystems. The term is perfectly
meaningful for any composite system. After all a classical system of
charged particles could not form a stable molecule at all (cf.
Earnshaw's theorem). In fact, the puzzle of stability of atoms was a
key impetus for creation of quantum mechanics.
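In standard notation (my own summary, not text from the post), the definition being invoked is:

```latex
% State space of a composite system A+B: the tensor ("direct") product
% of the subsystem Hilbert spaces.
\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B
% A pure state is a unit vector in H_AB; the general case is a
% statistical operator (density matrix) rho acting on H_AB:
|\psi\rangle \in \mathcal{H}_{AB}, \qquad
\rho : \mathcal{H}_{AB} \to \mathcal{H}_{AB}, \quad
\rho \ge 0, \quad \mathrm{Tr}\,\rho = 1
```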

> Stop using scientific terms you don't understand.

Please, teach me, master. You can perhaps join some recent threads in
sci.physics.research where I discussed quantum states of composite
systems, so you can teach me there:

http://groups.google.com/group/sci.physics.research/search?q=nightlight&start=0&filter=0

Since my thesis was on quantum correlations & related paradoxes, that
would be the area I would be interested to learn more about.

>> and its
>> immediate environment, which in turn are correlated with
>> the larger (in space) future and past environments i.e.
>> mutations are trivially correlated with events and states
>> of the past and the future environments of the organism.
>
> So now you are suggesting that there is a correlation
> between mutations and *future* events?
>

The future and past physical states of any system are correlated. After
all that is what physical laws are -- a concise mathematical expression
of such correlations. The quantum (or classical) state of DNA is
correlated with states of all particles in its past and future light
cones (these are regions of space-time reachable from some point by
signals travelling no faster than light).

> Would you care to provide some evidence to back up
> this assertion?

Check any elementary physics textbook.

>> The neo-Darwinian "random mutation conjecture" (RMC) is
>> that the correlation between (evolutionary) mutations
>> and the DNA environment stops precisely at the immediate
>> preceding physical causes.
>
> If you mean that the process does not react to events
> which have not yet happened, that is the case for all
> processes in all fields of science

Did you ever watch any sport with a ball? Or boxing? Any system which
can anticipate its environment (via modeling) does so precisely so that
it can react to future states of that environment.

>> If the physics were still the 19th century mechanistic
>> theory,...
>
> More gobbledegook.

If you wish to talk physics, you would need to learn some first. I am
sorry for discussing matters that you don't know much about. You can
always discuss politics, though.


> So the universe is inherently unpredictable.
> What is your point?

The point is that anyone who claims that they know the mechanism of
mutations, including the precise state of the DNA environment which
contains the physical causes of mutations (in order to claim that there
is no interface or degrees of freedom via which mutations can be
directed) is making things up. Such claims are false both in quantum
and classical models.

>> the quantum physics provides an interface for an
>> external (or even an internal/subquantum system built
>> upon Planckian scale objects, see [M2]) intelligent
>> agency to direct the universe continuously and in great
>> deal of detail (within the statistical constraints of
>> QT, since these are empirically fixed).
>
> In other words, you are asserting from your poor
> understanding of quantum physics that God fiddles
> with the universe via quantum events.
>

I am saying that QT leaves much more room for intervention from systems
not modeled by the present knowledge. In deterministic models, your
only free choices are the initial and boundary conditions. In
non-deterministic models the free choices are allowed at any point
where the model maps single past state into multiple future states (as
the projection postulate does in QT).

> Do you have any idea of the differences in scale between
> that at which quantum events occur and macromolecules
> such as DNA?

Do you know that human or animal eye can detect a single photon? Do you
know the difference in scale between human brain and a photon? Do you
know that a single alpha particle can kill a human? Did you ever hear
of an avalanche? Did you ever hear of the noun "amplification"? ...

> And this word-salad means what?

You should search for a "(physics or math) tutor" on Google. Sorry, but
I haven't done any tutoring since graduate school.


Deadrat

May 13, 2006, 1:06:19 PM

"nightlight" <night...@omegapoint.com> wrote in message
news:1147536491.2...@y43g2000cwc.googlegroups.com...

>> There's nothing "ideological" about the observation
>> that mutations occur at random in respect to
>> reproductive fitness. It's an hypothesis
>> which can be tested using the scientific method.
>
> Testable scientific hypotheses are often supported or opposed
> based on ideology (or religion, tastes, greed, etc). You seem to live
> in a glass bubble if you believe otherwise. Just peruse some science
> on, say, genetics of race or gender or sexual orientation, IQ, to see
> how ideology and scientific hypotheses and research interact in real
> life. After all, science needs money and who pays the piper calls the
> tune.

It is almost impossible to tell whether you know what you're talking
about. Everything you say has some truth and relevance. Yes,
scientists are influenced by their biases (including those in favor of
their funders), and yes, scientists have advanced faulty hypotheses
based on ideology. Science is, after all, a human endeavor. But it
also tends to be self-correcting -- if your peers can't reproduce your
results, then you and your biases are out of luck. Do you have some
evidence that the hypothesis of random mutation is ideologically grounded?

>
>> As for Darwin being "more guarded in his writings",
>> that's because he didn't know about genetics, and hence
>> could not have made any comment on mutation patterns.
>
> The RM conjecture isn't in any essential way reliant on the particular
> molecule that transmits heritable traits. One can formulate RM with
> respect to some generic carriers of heritable traits without knowing
> what molecules implement such a carrier.

True, but Darwin had no reliable biochemical models whatsoever.
As I recall, he dabbled a bit with "blended" characteristics, but hindsight
tells us this was clearly off base.

>>> Hence mutations are at least correlated with the most
>>> immediate physical (quantum) state of the DNA
>>
>> What the hell is the "quantum state" of DNA?
>> The term is meaningless at the scales of macromolecules.
>
> The quantum state of a composite system is a vector (or generally a
> statistical operator) in the Hilbert space constructed as a direct
> product of Hilbert spaces of its subsystems. The term is perfectly
> meaningful for any composite system. After all a classical system of
> charged particles could not form a stable molecule at all (cf.
> Earnshaw's theorem). In fact, the puzzle of stability of atoms was a
> key impetus for creation of quantum mechanics.
>
>> Stop using scientific terms you don't understand.
>
> Please, teach me, master. You can perhaps join some recent threads in
> sci.physics.research where I discussed quantum states of composite
> systems, so you can teach me there:
>
> http://groups.google.com/group/sci.physics.research/search?q=nightlight&start=0&filter=0
>
> Since my thesis was on quantum correlations & related paradoxes, that
> would be the area I would be interested to learn more about.
>

Again, it's impossible to tell if you are a talented student of physics
or just a master of the obvious.

>>> and its
>>> immediate environment, which in turn are correlated with
>>> the larger (in space) future and past environments i.e.
>>> mutations are trivially correlated with events and states
>>> of the past and the future environments of the organism.
>>
>> So now you are suggesting that there is a correlation
>> between mutations and *future* events?
>>
> The future and past physical states of any system are correlated. After
> all that is what physical laws are -- a concise mathematical expression
> of such correlations. The quantum (or classical) state of DNA is
> correlated with states of all particles in its past and future light
> cones (these are regions of space-time reachable from some point by
> signals travelling no faster than light).
>
>> Would you care to provide some evidence to back up
>> this assertion?
>
> Check any elementary physics textbook.

I doubt anyone would argue that the future and past states of any
system are not correlated. The question is whether one can find
a strong enough correlation between the environment -- say, the
food available to seed-eating birds -- and the resulting genetic pool
-- say, the birds' beak sizes -- to declare a causal link. Suppose
there were. Then we could check to see whether a scanty food
supply gives rise to a generation of larger-beaked offspring (who
could crack and eat the tougher seeds). But that's not what we
see. There's a large range of beak sizes around the mean, but in
a famine the larger beaks are more likely to survive. That is to
say that there's continuing variability in the production of beak sizes,
and the mean moves up in response to selection and not some
imagined quantum connection between the mass of seeds and the
reproductive processes of the birds.

>>> The neo-Darwinian "random mutation conjecture" (RMC) is
>>> that the correlation between (evolutionary) mutations
>>> and the DNA environment stops precisely at the immediate
>>> preceding physical causes.
>>
>> If you mean that the process does not react to events
>> which have not yet happened, that is the case for all
>> processes in all fields of science
>
> Did you ever watch any sport with a ball? Or boxing? Any system which
> can anticipate its environment (via modeling) does so precisely so that
> it can react to future states of that environment.
>
>>> If the physics were still the 19th century mechanistic
>>> theory,...
>>
>> More gobbledegook.
>
> If you wish to talk physics, you would need to learn some first. I am
> sorry for discussing matters that you don't know much about. You can
> always discuss politics, though.

This is a bad sign.

>> So the universe is inherently unpredictable.
>> What is your point?
>
> The point is that anyone who claims that they know the mechanism of
> mutations, including the precise state of the DNA environment which
> contains the physical causes of mutations (in order to claim that there
> is no interface or degrees of freedom via which mutations can be
> directed) is making things up. Such claims are false both in quantum
> and classical models.

Yes, but we don't need to know the mechanism of mutation, if you
mean what causes the quantum states of the bonds of the base pairs
of the DNA.

<snip>

Deadrat

hersheyhv

May 13, 2006, 3:22:13 PM
nightlight wrote:
> >> My argument is not about or based on short times available
> >> for evolution.
> >
> > Meaning that you do accept standard timeframes?
>
> Of course. Anything that has empirical support is fine within its
> uncertainty margins (provided you state them). That wasn't my argument.
> The argument was about the sufficiency, at the mathematical and the
> empirical level, of the "random" mutations in replicating the observed
> rates and quality (complexity) of evolutionary novelties.

I have presented empirical support for the sufficiency of known
*mutation rates* to be able to generate the evolutionary novelty of a
human being from its common ancestor with chimpanzees. If you want
evidence that all known mutation is *random wrt need* and that the
types of mutations that produced the novelty of a human being do not
differ in anyway from the types of mutation that have been repeatedly
and experimentally demonstrated to be generated at random wrt need,
there is a massive amount of experimentation that shows that.

But, as long as you posit a hypothetical director of mutation that
appears to be acting within this apparent randomness wrt need, such an
hypothesis is untestable and logically possible.

> You are
> welcome to point out what data and what calculations show that "random"
> mutations can replicate the observed rates of novelty.

I have presented empirical evidence that, at both the level of the
entire genome and (to the extent of our current knowlege) at the level
of specific genes, the known *rate* of mutation and fixation can
account, easily, for the entirety of the difference between chimps and
humans.

If you have some evidence for a type of mutation that is *non-random*
wrt need, I sure would like you to present it. All the empirical
*evidence* I have seen says that mutation, of every single type seen in
nature, is random wrt need.

The experiment that first demonstrated this (the Luria/Delbruck
experiment) is very simple and convincing. At the time there was a
question of whether or not the resistant bacteria seen were due to a
rare response to the selective agent (that is, the selective agent
*caused* the resistance) or was due to rare lucky mutants that occurred
at random in the population *before* the selective agent was added. If
it were the latter, if you took a large population and divided part of
it into much smaller pools and let these small pools grow up to the
same size as the original pool, some of these smaller pools would, by
chance, have mutants and would be rich in resistant colonies and others
would, by chance, be poor in colonies. Sampling the original population
gives a Poisson number of pre-existing mutants per pool (which is what
you get when you randomly sample a large population with a small number
of rare types), and growth then amplifies the early mutants into
"jack-pots". If, OTOH, the selecting agent were inducing resistance as
a rare response, there should be no significant difference between the
pools. In fact, what you see is the first pattern: Poisson sampling
punctuated by rare "jack-pots", so the mutants arose before selection.
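The argument can be sketched as a small simulation (my own illustration with made-up parameters, not from the post). Under the "induced" hypothesis every final cell converts independently, so resistant counts across cultures have variance roughly equal to their mean; under the "pre-existing mutation" hypothesis a mutation early in growth founds a whole resistant clone, so the variance-to-mean ratio blows up.

```python
import random

def induced_counts(cultures, final_cells, p):
    # Induced hypothesis: each of the final cells independently becomes
    # resistant on exposure -> counts vary like a Poisson sample.
    return [sum(random.random() < p for _ in range(final_cells))
            for _ in range(cultures)]

def fluctuation_counts(cultures, generations, mu):
    # Pre-existing-mutation hypothesis: mutations strike at random during
    # growth; an early mutant's clone doubles every later generation,
    # producing the occasional "jack-pot" culture.
    counts = []
    for _ in range(cultures):
        resistant = 0
        for g in range(generations):
            resistant *= 2  # existing mutant clones double with the culture
            resistant += sum(random.random() < mu for _ in range(2 ** g))
        counts.append(resistant)
    return counts

def var_to_mean(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / m

random.seed(1)
induced = induced_counts(300, 4096, 0.018)    # mean of roughly 74 mutants
jackpot = fluctuation_counts(300, 12, 0.003)  # comparable mean by design

print(var_to_mean(induced))  # close to 1: Poisson-like
print(var_to_mean(jackpot))  # far above 1: the jack-pot signature
```

With the parameters above the two scenarios have comparable means, so the variance-to-mean ratio alone separates them, which is essentially the Luria/Delbruck statistic.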

And every other experiment done in the last half century (in all kinds
of organisms) has further demonstrated that mutation is random wrt
need. If your claim is that mutation is NOT random wrt need, you need
to present some evidence that such an event is possible and observable.

> Note that you
> can't avoid the mathematical modeling part due, at the very least, to
> the quantum dice curtain which doesn't allow you to see the precise
> mechanism behind the individual choices (QT only lets you have
> probabilities of various choices and no theory or present observation
> will let you peak behind the QT curtain). In practice and at the
> present technological level the domain of ignorance of the precise
> detail of individual choices is much larger than what the final quantum
> curtain prescribes.
>
> >> The (approximately) "constant rate" of DNA change is an
> >> empirical fact. It tells you absolutely nothing about
> >> the presence or absence of any guidance (or anticipation,
> >> foresight...) in the generation of those DNA changes
> >> (which are then subject to natural selection).
> >
> > You mean does science exclude the possiblity of
> > theistic evolution?
>
> That's not what I meant and it has no relation to what I said. If you
> are looking at a sequence of say, presumed coin tosses, just counting,
say, totals for heads and tails doesn't tell you whether the sequence
was from genuine "random" coin tosses or from hand-put outcomes.

If you can't tell the difference between "random" and "hand-put"
outcomes, there is no difference. That makes your hypothetical agent
one that works in a way that is indistinguishable from pure chance
mutation and subsequent selection. That is theistic or theistically
guided evolution that is indistinguishable from what you would expect
by natural mechanisms alone.

> You
> can eliminate (in the probabilistic sense) the latter by comparing the
> prediction of "random" model (binomial distribution for genuine coin
> tosses) with the observed frequencies of various subsequences (e.g.
> counts of 01 vs 00 etc). Your RM argument seems to be that since you
> don't see any pattern by just eyeballing the sequence of 0's and 1's,
> then it must be due to random coin tosses.

I am not just "eyeballing" the sequences of A, T, G, and C. At every
level of analysis, from total genome to single genes or sequences (to
the extent that the two genomes have been compared) there is little or
no evidence that one need even *invoke* the faster rate of fixation of
mutations that natural selection provides over neutral fixation, which
means that the amount of genome change in these species due to
non-conservative selection is small, so small it is hard to find even
when you look for it. And one sees no evidence of any special kind of
mutation that has not been experimentally observed to happen (meaning
in time-frames much smaller than 5 million years) being required.

> My argument is that, since
> you're not allowed to watch the tossing but only to see the results (in
> biology, due to quantum and technological indeterminisms), you need a
> mathematical model for the generator, such as binomial distribution for
> coin tosses. Then you need to compare whether its predictions match
> (and how well) the empirical counts you can obtain from the actual
> sequence.

That is exactly what the evidence says: no special form of mutation and
no special rate of mutation fixation or mutation generation is required
to explain the difference between humans and chimps at *any* level of
analysis of the genomes.

> That was not done for RM conjecture in ND theory. Just
> declaring it so and applying Lysenkoist methods to academically destroy
> anyone doubting the RM conjecture merely emphasizes its fundamental
> hollowness and sterility.
>
> > No. Nothing can exclude the possibility of an unspecified
> > and unobservable appearing to act via naturalistic methods.
>
> Well, if logic, facts and imagination fail you, there is always that
> strawman of 'supernatural' to fall on.

I am not the one proposing something supernatural. I am specifically
stating that no special rate of mutation, no special rate of mutation
fixation, and no special type of mutation is required to explain any of
the observed differences in genomes seen between chimps and humans
regardless of the level of analysis (total genome or single gene). In
fact, one does not even need to invoke the much faster rate of fixation
for positive selection (at least in most of the genes that have been
compared). [Positive selection does not increase the rate of mutation,
of course. It only increases the likelihood and rate of fixation of a
mutation.]

> > One can propose a magical entity as the cause for
> > *anything*. But what you have to do is demonstrate
> > that such a magical entity is *necessary*.
>
> Again the same strawman. You're the only one bringing in the magic. I
> am asking you to show results that confirm "random" nature of
> mutations, specifically the predictions of adaptation/innovation rates
> based on "random" model and their comparison to empirical frequencies.

To the extent that you consider humans to be "innovations" relative to
their common ancestor with chimps, I have done so. All the innovations
of humans can be easily accounted for by known mutation rates and
fixation rates at all levels of analysis of the respective chimp and
human genomes. No special type or radically different rate of mutation
is required to produce this difference. Of course, you may not
consider humans to be an "innovation" relative to chimps or our common
ancestor.

As to the random nature of the mutations, all I can do is state that
one does not *need* to introduce the idea of non-random mutation to
explain the result, and that experimental evidence of mutations
indicates that they are random wrt need. If there is non-random
mutation occurring in the evolution of humans, it is indistinguishable,
by any measure brought to bear, from what one would expect if mutation
were random. But how can I (or anyone) rule out non-random mutation
that looks exactly like (is indistinguishable from) what would happen
if mutation were random, and that specifically shows no sign of the
expectations of non-random mutation?

> The difference between the two would imply the degree of likelihood of
> the "random" mutation model of observed evolution rates (micro and
> macro). If you don't have any, then the question is open whether the
> mutations are random or not (which is my position here).
>
> In the case they're not "random" (poor predictions), there are plenty
> of alternative models (some of which I pointed out before e.g.
> subquantum complexity between the current "elementary" particles,
> 10^-16 cm, down to Planckian scale objects, 10^-33 cm). In fact you
> don't even need subquantum complexity, since you already know that
> there _are_ natural processes guiding non-random, even purposeful and
> anticipatory, generation of genetic novelty (e.g. brain of a molecular
> biologist, or for that matter even the brain and senses of any animal
> looking for a mate).

Is the above word-salad supposed to be taken seriously?

> There is no magic or supernatural elements (assuming you exclude the
> existence of "mind stuff" which, even though it exists, is presently
> not modeled by the natural science of matter-energy transformations) in
> any of them.

I am not proposing any magic or supernatural elements. I am
specifically saying that the *observed* genomic differences (and hence
the phenotypic difference) between chimps and humans can be explained
entirely by known natural processes (mutation random wrt need)
occurring at known rates over the available time frame of 5 million
years. I am not the one claiming that some special process or rate is
required; you are. Yet you cannot seem to point to any feature of the
difference between these two species which requires the special process
or rate.

> I see no reason, much less a proof, that those must be the
> sole examples of such purposeful natural processes. I don't even see
> how would one express such exception in a precise scientific
> mathematical language so one can model it.
>
> Yet the neo-Darwinist RM dogma claims it is so, that those exceptions

What exceptions?

> (which it can't even describe in a precise mathematical language) are
> the only exceptions to RM conjecture.

What exceptions?

> Why is it so? My guess is that
> such intelligent, purposeful processes exist at all levels, at least
> from the ecoweb scale down to the biochemical reaction webs within
> cells (including the "junk" DNA which I think is a key part of a highly
> intelligent neural network which models and evaluates internally the
> possible transformations of DNA and their phenotypic consequences
> before committing its best pick to the much more costly physical
> implementation/offspring).

Evidence? If I remove your brain, that affects your "intelligent
neural network". If I remove a lot of the "junk" (and the fugu has
done so in nature), I see "no effect". If I introduce more "junk"
(ferns), I see "no effect". [The above is an overstatement. There are
non-coding sequences that are important. But, by definition, they are
not "junk". There is dispensible DNA. That would be "junk".]

> >> After all, the linguists use similar methods as
> >> evolutionary molecular biologists to study the
> >> evolution and relations among natural languages
> >> (which are created and transformed by intelligent/
> >> purposeful agents).
> >
> > Yes. There is a lot of similarity in the two processes.
> > A difference, of course, is that the intelligent/purposeful
> > agent involved in the evolution of languages are not
> > undetectable or magical.
>
> That little strawman is quite busy in the debate here. Just because
> your imagination fails to come up with anything else besides magic for
> a given phenomenon, it doesn't mean that everyone else must also be
> assuming magic as the "answer".

I am not saying that the agent involved in the evolution of languages
is undetectable and magical. Quite the opposite. But the putative
designer of humans (as opposed to chimps) from their common ancestor
does seem rather unnecessary and undetectable in that he/she/it/they
seem to be working entirely within the type and rate constraints of
random mutation and fixation of a fraction of those mutations.

> There is nothing magical or grossly undetectable in the examples above
> of natural processes, that we all know to exist and which intelligently
> and purposefully guide DNA transformations from generation to
> generation.

You have empirical evidence of mutation being an intelligently guided
process? Do tell. What is your evidence?

> There is also nothing mathematically or computationally
> special about the intelligent networks which act as intelligent agents
> behind those DNA transformations. Such networks exist at all scales,
> above and below human brain. They all interact in some way with
> genomes, hence they build (as we do at our own level) their own
> internal models of genomes and their environments.

Your case would definitely be helped if you could tell us the "some
way" in which these undetectable hypothetical intelligent networks
interact with genomes. Be specific.

> The most natural
> position is to assume that they all exert in some way their
> anticipatory influences on the genome, each one affecting the aspects
> of DNA relevant for its own interactions with it and the related
> punishments/rewards. The ND conjecture -- that only one particular
> network (the brain of 'molecular biologist') can guide mutations, and
> all others are somehow excluded (despite interactions) so that except
> for that one unique case, the mutations must be "random" -- is
> strained, unnatural and ambiguous (e.g. how do you give precise
> mathematical characterization or model this single odd exception).

Well, discovering this disembodied intelligence that vaguely exists
somehow in "junk" and directs future evolution would certainly help.
How does the fugu survive without this disembodied intelligence?

> Note also that molecular physics, which is an applied quantum
> mechanics, is subject to quantum dice behind the curtain. No
> theoretical result or existent experiment gives any clue what (if
> anything) is behind the quantum indeterminism i.e. how do quantum
> outcomes get picked. There are also plenty of technological and
> mathematical modeling unknowns in molecular biology e.g. what does the
> "junk" DNA do and how does it do it?

The algorithms that *real* scientists use to discover "junk" DNA that
actually has some sequence-specific functional utility are based
entirely on evolutionary conservation of such sequences.

> How do biochemical reaction webs
> which include the "junk" DNA work, what do they model, what do they
> "think"? Just because you can sequence it, it doesn't mean you know or
> understand how it works, how it affects phenotype and by which chain of
> reactions.

Vitalism and mystical magic is what you seem to be proposing.

> The present level of knowledge in the molecular biology is not much
> better than that of a caveman looking at a computer -- he can figure
> out that by pressing this or that key he can get some light dots on the
> screen or that pulling the cord darkens the screen and other such
> superficial correlations.

The present level of knowledge of molecular biology is much, much, much
greater than your knowledge of this field.

> But he has no clue (or even a language to
> conceptualize what the question is) about the underlying processes that
> make such correlations happen or the (perfectly natural) intelligent
> agency (modern technology & science) that designed it and built it. For
> him it's a magic, yet we know that no magic is needed. Assigning magic
> to phenomena you may not understand, as you repeatedly do, is a
> primitive form of thinking which short-circuits any further query by
> attaching to the puzzling the empty label "magic" as an sedative.
> Scientific thinking is to look at it as a puzzle or an open question, a
> challenge to ones imagination.

I am not the one proposing magic here. You are. I am proposing merely
the extension of known processes at known rates. You are proposing
mystical Kabbalah-like information written into genomic detritus and
viral invaders.

> > Languages, unlike organisms, do not self-reproduce.
>
> Neither do ecosystems (or species).

Right. Organisms are not ecosystems nor species. But the latter are
composed of the former.

> But the elements/building blocks of
> languages do reproduce, just as the building blocks of ecosystems or
> species (organisms) reproduce.

They do not reproduce in the absence of the humans that produce
language. Language is manufactured. It does not reproduce
independently of humans.

> For example, you add suffix "ed" to
> verbs to form past participle.

Sometimes. Sometimes this is showed to be incorrect. ;-) BTW, most
Asian languages do not have this way of indicating past tense. They
also lack plurals.

> The pattern "ed" is thus reproducing
> itself (in its particular realm).

No. It is being reproduced by humans mimicking each other. The
pattern specifically does not reproduce itself.

> Numerous forms of pattern
> reproduction exist at every level (e.g. semantic, grammatical,
> phonetic) of natural and artificial (mathematical, scientific)
> languages, propagating via analogies. There is an even more elemental
> reproduction (analogous to cellular reproduction) of patterns -- every
> time someone learns or uses such patterns the pattern has managed to
> reproduce. How many times have patterns DNA or "is" or "the" reproduced
> in this post? Of course, this reproduction uses humans, books and
> computers, instead of atoms and fields, as its substratum, but that is
> a matter of network implementation. The basic mathematical properties
> of such networks with adaptible (to punishments & rewards) links are
> deduced independently of the implementation for the links, nodes and
> punishments/rewards.
>
> > They are manufactured by a known intelligent agent.
>
> Known? Really. That's like the cavemen with computer saying he "knows"
> how it works and to demonstrate that he "knows", he shows how when he
> hits a key the light pattern pops up, just as he predicted it would. As
> far as he knows he "knows" it. Similarly, with humans generating
> languages -- for example, there is no scientific model at present which
> tells you anything about the mind-stuff (what is it like to be some
> arrangement of atoms and fields? what is redness like? what is it like
> to get an idea?). We don't know whether the mind-stuff is an
> epiphenomenon or a causal agent (which can affect the matter-energy
> transformations).

Your mind-stuff is empirically a product of the biochemistry of your
brain. It does not exist as a disembodied entity in its own right. I
won't even suggest the way to demonstrate that this is so. Too
gruesome.

> I am not talking about telekinesis but about quantum
> dice. We don't know how it picks its quantum outcomes when "brain"
> thinks (perhaps the mind-stuff does the job as von Neumann and Wigner
> conjectured). Hence, you don't know the "intelligent agent" behind
> language much better than the cavemen knows the one behind the
> computer. Just as he can predict pattern from a button pressed we can
> roughly predict neurological and behavioral effects of stimulation of
> certain brain regions. In both cases there is also some "little" stuff
> going on that is sort of mysterious, but that probably doesn't matter
> much. We got these handful of correlations, so we both "know" the
> agent. We can also name it as a "proof" that we know it. Yeah, sure.
> (The same goes for "knowing" how evolution works.)

> > I have explicitly pointed out that no DNA configuration is
> > so deviant from configurations held in common by both
> > chimps and humans that any search of some hypothetical
> > total sequence space of possible sequences ever is likely
> > to have occurred. *All* that is needed is *modification*
> > of the subset of total sequence space that existed in
> > the common ancestor.
>
> You should read more carefully before responding.

I do.

> I was not talking
> about space of all possible arrangements of a given numbers of atoms
> but about all arrangements reachable by a single (or by some number n)
> mutagenic event from a given initial state. That space (labeled as M in
> the first post) is huge, too. To test "random" mutation conjecture
> behind evolution, for single or for some n>1 mutations, you still need
> to have combinatorial or probabilistic model of that "n-mutation
> neighborhood" space and compare its predictions with the empirically
> observed frequencies of favorable novelties at the distance of
> n-mutations from the same initial state.

No I don't. All I need to do is demonstrate that known rates of change
and fixation are sufficient to produce the genomic difference we
actually observe from a common ancestor. And that no single site
requires a rate of change that is faster than is possible given the
time available. I can, in fact, do that for chimps and humans. But,
of course, I am proposing that any differences seen are due to
differential modification of pre-existing ancestral sequences rather
than by generation of a sequence from some random sequence in both
humans and chimps. IOW, I am proposing "descent with modification"
rather than "creating a 747 by means of a tornado" as you seem to be
doing.
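The rate argument above can be put in back-of-envelope form. Under
neutrality the expected substitution rate per site equals the mutation
rate, independent of population size, so two lineages diverge at
roughly 2 x mu per generation. The figures below (mutation rate,
divergence time, generation time) are illustrative round numbers, not
values from this thread:

```python
# Expected neutral sequence divergence between two lineages since their
# common ancestor. Under neutrality the per-site fixation rate equals
# the per-site mutation rate, independent of population size.

mu = 2e-8        # mutations per site per generation (illustrative)
years = 6e6      # time since the chimp/human split (illustrative)
gen_time = 20.0  # years per generation (illustrative)

generations = years / gen_time
# Both lineages accumulate substitutions independently, hence the 2.
expected_divergence = 2 * mu * generations

print(f"expected divergence: {expected_divergence:.2%}")
```

That lands on the order of 1%, the same order as the observed
single-nucleotide difference between the two genomes, which is the
point being made: no exotic rate is needed.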

> The ID conjecture is that the empirical frequencies of favorable
> novelties (being offered to natural selection to weed out) are much
> greater than what a "random" mutation model would predict.

And, for the changes that produced the differences between humans and
chimps, they are empirically wrong. Which is why they use arguments
about bacterial flagella. But, of course, they are wrong there too.
The *phenotype* of rotary motility of the bacterial flagellum can, both
in principle and experimentally, be generated from a reasonable
expectation of what the closest ancestral species lacking the
*phenotype* could have looked like.

> The
> neo-Darwinian conjecture is that the observed empirical frequencies are
> exactly what the "random" mutation model would predict. Neither side
> can at present work out the math and computations of the model to prove
> their case (due to our present limited computational and modeling
> capabilities).

I have just demonstrated that the difference in genome between human
and chimp is quite well within observed rates of mutation and fixation
at both the genomic and individual smaller sequence level. That I
chose not to use bogus calculations based on "747 in a tornado" models
and used empirical observations instead makes my results that much
stronger. You can disprove the idea that a gene can be generated by
the "747 in a tornado" process all you want. Since that is not the
process of "descent with modification", all such calculations merely
disprove what evolution never claimed.

> The ND priesthood had not won this argument but has
> merely bullied the alternatives out of academia through Lysenkoist
> style censorship, economic and professional intimidation. The side
> which has to reach for these kinds of "scientific" methods in order to
> cling on just a bit longer, is as a rule the weaker one on the
> substance (recall USSR and its communism).

Paranoid fantasy is a sure sign of kookdom. If you have evidence,
present it. If you have nothing but a WAG based on silly assumptions,
it is hardly a conspiracy that keeps your ideas merely laughed at.

> > The underlying assumption of these arguments is a
> > specific non-evolutionary mechanism that involves
> > inventing a sequence from total sequence space rather
> > than modifying a sequence from an ancestor.
>
> Your sloppy reading again. The space M of possible DNA configurations
> is the space at distance n mutagenic events from the original sequence,
> where n=1,2... It is not a space of all possible configurations of
> given numbers of atoms (which would be much bigger).

Again, the empirical evidence says that the difference in sequence
space that *has been* reached by humans and chimps after divergence
from their common ancestor is just about what you would expect given
the known rates of mutation and neutral fixation, without even needing
to plead the faster rate of selection. What reason do we have for
saying that some other unknown process is necessary?

> > Given that such calculations are utter nonsense wrt
> > to modelling how evolution *actually* claims to work (and,
> > from direct, observable *evidence*, actually did so in
> > the case of chimps and humans), what is the use of
> > presenting these ignorant "747 forming in a tornado"
> > calculations? Evolution works by descent with modification.
>
> You're getting quite a mileage out of that strawman. Read again in the
> 1st post what the 'total space of configurations' and M mean.

Understand that you *are* saying that one cannot reach the difference
between humans and chimp genomes in the time available by any known
natural mechanism. You are wrong. Wrong. Wrong. Empirically wrong.
Ignorantly wrong. You are *necessarily* implying that the difference
between the starting point sequence and the current sequence is so
great that you cannot get from the ancestor to the current organisms.
The evidence says otherwise based on a comparison of two species that
evolved independently of each other after divergence. There is no
place in the genome where one must posit an ancestral genome so
different that the current genome cannot be reached. Most (if not all)
of the current genome can be reached by the slower selectively neutral
process rather than even needing to invoke selection. So whatever
values you give to the number of changes required, you simply cannot
assume that they are more than is possible by known mechanisms and
rates when the evidence says otherwise. Yet that is what you seem to
be doing. My only guess is that you are so wedded to the idea that a
large number of changes are needed that you cannot comprehend that this
assumption is demonstrably and empirically wrong; the difference
between humans and chimps does not involve a large and certainly not an
impossibly large number of changes or an impossibly high rate of change
and fixation.

> > Anyone can always claim that mutations occcurred by a
> > God producing them, just as you can claim that Lady
> > Luck is responsible for the cards you received.
>
> There are quantitative criteria to distinguish whether the deck and
> shuffling are stacked or fair. This is no different than testing a
> quality of random number generator by applying random number tests. The
> idea is to use presumed model of the random generator (such as binomial
> or multinomial distribution) and produce its predictions for the
> empirical counts obtained from the sequence. That lets you decide
> whether the "random" model is valid and how valid.

And that is exactly what you can do and has been done in comparisons of
human and chimp genomes. You can look for shorter stretches of
sequence which exhibit rates of change either significantly faster than
the mean rate of change or significantly slower than the mean rate of
change. The fact is that most of what you find is the latter (most
selection is conservative) and that the amount of the former is quite
small (and mostly due to things like deletions or insertions that
produce a large change in one fell swoop). Places where there has been
significantly more rapid change are where you would look for changes
that were *selected for* in one lineage and not the other.
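The scan described above can be caricatured in a few lines: slide a
window along an alignment, compute per-window divergence, and flag
windows far above the mean (candidates for lineage-specific selection)
or far below it (candidates for conservative selection). The window
size, the factor-of-three threshold, and the toy sequences are all made
up for illustration; a real analysis would use genome alignments and a
proper neutral model:

```python
def window_divergence(seq_a, seq_b, window=100):
    """Fraction of mismatched sites in each non-overlapping window of
    two aligned, equal-length sequences."""
    assert len(seq_a) == len(seq_b)
    rates = []
    for start in range(0, len(seq_a) - window + 1, window):
        a = seq_a[start:start + window]
        b = seq_b[start:start + window]
        rates.append(sum(x != y for x, y in zip(a, b)) / window)
    return rates

def outlier_windows(rates, factor=3):
    """Indexes of windows changing much faster or much slower than the
    mean rate across all windows."""
    mean = sum(rates) / len(rates)
    fast = [i for i, r in enumerate(rates) if r > factor * mean]
    slow = [i for i, r in enumerate(rates) if r < mean / factor]
    return fast, slow

# Toy alignment: 10 windows of 100 sites, baseline 2 substitutions per
# window, window 0 fully conserved, window 5 unusually fast.
ancestor = "A" * 1000
derived = list(ancestor)
for w in range(1, 10):
    for k in range(30 if w == 5 else 2):
        derived[w * 100 + k] = "C"
rates = window_divergence(ancestor, "".join(derived))
print(outlier_windows(rates))  # prints ([5], [0])
```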

> > But you seem to be claiming that the rate of such mutations
> > is too slow to be happening in the absence of a God (or
> > that you could not have gotten those cards in the absence
> > of Lady Luck).
>
> No, I am only claiming that no one has computed whether "random"
> mutations are too slow to generate the observed rates and quality of
> evolutionary novelties. I am saying that this is an unsolved problem
> which has, at least given sufficient computational and modeling
> resources, a precise answer.

And I will repeat. The evidence I am presenting *specifically* says
that the rate of random mutation and fixation (largely neutral) is
quite clearly not too slow to produce any of the novelties that
distinguish humans from chimps at either the whole genome or individual
gene level.

> > All I am doing is pointing out that there is no *need* for
> > such an outside agent.
>
> You can "point out" whatever you want. That doesn't show that the
> "random" mutations are just right. To show that there is no need for
> any other but "random" mutations, you need to make prediction from the
> random mutation statistical/combinatorial model of DNA mutations and
> compare them with the observed frequencies.

No I don't. I need to predict whether the observed amount of change in
these two species from the time since divergence is consistent with the
rates that are known from experiment. It is. And I don't even need to
invoke selection.

> As to "outside agent" that's your strawman again. "Outside" of what?
> Outside of backward & forward light cones of a DNA molecule (which is
> the space-time region which can propagate interactions at maximum light
> speed)? Well that would cover the entire visible universe.
>
> > Until ID presents a testable idea that would require the
> > *necessity* of the action by some outside agent, my thesis
> > that such an agent is *unnecessary* because *all* of
> > the observed differences between humans and chimps are
> > consistent with known rates and types of mutation from
> > ancestral sequences, it remains ignored by and irrelevant
> > to science because it fails the razor test.
>
> I described a test. It is the same kind of test you would use to test
> "random" generator for any other sequence, such as that of coin tosses.
> Make prediction from that RM model of DNA and lets see whether it
> matches the empirical frequencies of the evolutionary novelties being
> submitted to the natural selection.

What is the evolutionary novelty that I should be looking for when I
compare chimps and humans? Do you have evidence of some gene that is
present in one species and not the other? I don't. Evidence that the
difference between the two involves any "novelty" at all?

> There is plenty of unknown, from
> quantum dice to present limitations of technology and mathematics, for
> entire universe of intelligent agents to fit in and causally interface
> to the DNA mutations.
>
>
> > And that is relevant to the modification of pre-existing
> > useful sequences how? That is an interesting model of
> > the "747 in a tornado" explanation.
>
> Shadow boxing again...
>
> > There have been many, many, many empirical tests of the
> > idea that mutation occurs *independently* of need for
> > the mutation. In every case, from the Luria/Delbruck
> > experiments on, the finding is that mutation occurs
> > *independently* of the need for that mutation.
>
> I responded to LD and similar experiments in the previous post:
>
> http://groups.google.com/group/talk.origins/msg/748587b6e86a28c8?hl=en&
>
> These experiments have nothing to do with question of "random" mutation
> vs guided (anticipatory, intelligent,...) mutation.

That is *exactly* what the experiments explore. And they do
demonstrate that mutation is random wrt need.
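For readers unfamiliar with Luria and Delbruck: the fluctuation test
distinguishes the two hypotheses by the spread of mutant counts across
parallel cultures. If mutations arise at random during growth, rare
early "jackpot" mutations make the counts wildly variable (variance far
above the mean); if mutations were induced by the selective challenge
itself, the counts would be roughly Poisson (variance close to the
mean). A toy simulation of the random-during-growth case, with
illustrative parameters:

```python
import random

def luria_delbruck(cultures=100, generations=15, mut_rate=1e-4, seed=1):
    """Simulate final mutant counts in parallel cultures when mutations
    arise at random during growth, before any selection is applied."""
    rng = random.Random(seed)
    counts = []
    for _ in range(cultures):
        normal, mutant = 1, 0
        for _ in range(generations):
            # Every cell divides; each normal daughter may mutate.
            daughters = 2 * normal
            new_mut = sum(1 for _ in range(daughters)
                          if rng.random() < mut_rate)
            normal = daughters - new_mut
            mutant = 2 * mutant + new_mut  # mutants breed true: jackpots
        counts.append(mutant)
    return counts

counts = luria_delbruck()
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Induced (need-driven) mutation would give var/mean near 1; random
# mutation during growth gives a ratio far above 1.
print(var / mean)
```

Luria and Delbruck observed the jackpot-style spread in real bacterial
cultures, which is why the experiment is read as showing that mutation
is random with respect to need.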

> Namely, the same
> kind of statistical behaviour can be observed for technological
> innovations which clearly do have intelligent agents behind (see my
> previous post for details on LD).

So your argument is that the bacteria in the smaller pools actually
have some pools that are more intelligent than others and thus respond
quicker to produce the needed mutations? Or is it that the smaller
pools have some bacteria that have greater ESP and predict the need
better?

nightlight
May 13, 2006, 5:23:45 PM
>> The RM conjecture isn't in any essential way
>> reliant on the particular molecule that transmits
>> heritable traits. One can formulate RM with
>> respect to some generic carriers of heritable traits
>> without knowing what molecules implement such carrier.
>
> True, but Darwin had no reliable biochemical
> models whatsoever.

He understood that it was some material component of
a cell. One can formulate an equivalent of RM for
any such mechanism without knowing its structure.
The basic question being debated between ID and ND
is whether random alteration of that trait-transmitting
component can account for the observed rates of
successful evolutionary innovation.


> The question is whether one can find a strong
> enough correlation between the environment -- say,
> the food available to seed-eating birds -- and
> the resulting genetic pool -- say, the birds'
> beak sizes -- to declare a causal link.

You're setting up a strawman ID by presuming an omniscient and
omnipotent entity capable of turning out optimum traits on demand. ID
only assumes the existence of some intelligent agency (of unspecified
nature and computational capacity) which performs pre-selection among
the physically accessible variations of the DNA, based on some internal
modeling process which takes into account the present state of the DNA
and its environment.

You also presume to know what the optimum solution ought to be. There
could be much more subtle solutions to food shortage than modifying
beak sizes. There are also competing intelligent agencies on behalf of
other organisms in the same ecosystems and these may have
countermeasures to block some more obvious solutions.

The basic fact is that adaptations do occur at some empirically
observed rates. The question discussed is whether random alterations
of DNA can produce such rates or not. My point is that there are no
calculations of such predictions from a random DNA change model (the RM
conjecture). Hence there is no basis to claim that RM is an established
fact. Even as a conjecture, RM is weak, since there isn't a single
established quantitative fact going in its favor.

If RM cannot reproduce the empirically observed rates, then there must
be an additional pre-selection process which can model the requirements
(the environment, the DNA and its phenotypic expressions) and eliminate
much more quickly large classes of physically accessible but
phenotypically unviable DNA alternatives (within the limits of its
knowledge and computational capability). I suspect that at least one
such intelligent agency is the biochemical reaction web of the "junk"
DNA which can "sense" various challenges from the organism's
environment (transmitted via hunger, thirst, heat, cold,...) and
compute solutions (based on accumulated library of useful solutions
from its ancestors, going back to dawn of life). These solutions can
alter multiple genes, spanning entire genome, simultaneously. Such
solutions may not be the best, or may not work or may even result in
worse traits than the original ones. This is no different than any of
us solving our own problems. Dealing with problems, strategizing and
applying foresight will on average beat not dealing with them.

You should also recall that intelligently guided alteration of the
genome is an extremely common phenomenon. I mentioned several examples
in previous posts. The obvious ones are molecular biology in research
and genetic engineering in agriculture. In fact every organism performs
such 'genetic engineering' when selecting a mate. Of course, the
organisms don't use DNA sequencing or molecular biology to purposefully
transform DNA to improved configurations. But that is just a matter of
instruments and technology. These are all natural processes using
foresight to reshape genome.


>> ... anyone who claims that they know the mechanism
>> of mutations, including the precise state of the
>> DNA environment which contains the physical causes of
>> mutations... Such claims are false both in quantum
>> and classical models.
>
> Yes, but we don't need to know the mechanism of mutation,
> if you mean what causes the quantum states of the bonds
> of the base pairs of the DNA.

You have missed the point. The argument I was countering (by pointing
out quantum and other uncertainties) is the claim that we allegedly
know exactly what is happening with DNA and that there is no room, or
interface, for any 'intelligent agency' to affect or guide mutations. I
am saying that this is not correct, since there is more than enough
freedom for an intelligent agency to affect DNA changes in a purposeful
manner.

Vend
May 13, 2006, 5:21:07 PM
nightlight wrote:

...

> M1.
> http://groups.google.com/group/talk.origins/msg/788d751e9ac239da?hl=en&
> M2.
> http://groups.google.com/group/talk.origins/msg/d6349ead1e3ff646?hl=en&
>
> the "random mutation" half of the neo-Darwinian (ND) theory is a
> gratuitous ideological assumption (Darwin himself was more guarded in
> his writings). While one can certainly generate "random" mutations in
> the lab, that doesn't imply that the mutations behind the observed
> evolution (micro & macro) are "random" i.e. independent from the
> environment in which they arose. After all, it is _trivially_ true that
> mutation does depend on its physical cause (e.g. EM radiation, chemical
> reactions, cosmic rays,...). Hence mutations are at least correlated
> with the most immediate physical (quantum) state of the DNA and its
> immediate environment, which in turn are correlated with the larger (in
> space) future and past environments i.e. mutations are trivially
> correlated with events and states of the past and the future
> environments of the organism.

Irrelevant

> The neo-Darwinian "random mutation conjecture" (RMC) is that the
> correlation between (evolutionary) mutations and the DNA environment
> stops precisely at the immediate preceding physical causes. If the
> physics were still the 19th century mechanistic theory, fixing the
> physical state of DNA and its "immediate" environment (within the near
> past light cone) automatically leads to a unique future state of the
> DNA, hence to a unique mutation. Thus you wouldn't have to care about
> wider correlations with future and past environments.

I don't understand this clearly, but I still think that there is at
least an error: you are talking about light cones, which are a concept
of Einsteinian relativity, which wasn't available before 1905. So it
wasn't 19th century physics.

> But the physics did advance from the 19th century mechanistic theory.
> Our present fundamental physics is Quantum Theory (QT). The key
> property of QT relevant for this discussion is non-determinism:

Ok.

> the
> exactly same physical state of the DNA and its environment generally
> leads to different outcomes (mutations). The physics can only tell you
> the probabilities of various outcomes (mutations), but nothing in the
> most precise physical state of the DNA and its environment determines
> what the specific outcome will be in any given instance.

It's widely understood that quantum effects are not relevant at the
scale of organic molecules, like DNA, because of a phenomenon called
"decoherence". If you were truly as knowledgeable about quantum
mechanics as you pretend to be, you would have known it. Statistical
mechanics is better suited to describe the behaviour of these systems.


> Therefore, even if one were to grant to ND that the immediate DNA
> environment (resulting in mutation) is independent from the physical
> state of any larger context (which might allow for purposeful
> mutations), there is always an impenetrable quantum curtain hiding the
> selection of particular outcome/mutation even if the state of DNA and
> its environment is known with theoretically maximum precision. Unlike
> classical physics, where the 'first mover'/external intelligent agency
> had to be constrained at the very beginning of the universe (to set up
> the deterministic laws of physics and the initial conditions of all
> particles and fields), the quantum physics provides an interface for an
> external (or even an internal/subquantum system built upon Planckian
> scale objects, see [M2]) intelligent agency to direct the universe
> continuously and in great deal of detail (within the statistical
> constraints of QT, since these are empirically fixed).

The "first mover" or the "quantum outcome decider" are metaphysical
hypotheses. They are neither testable nor falsifiable, so they don't
belong to the realm of scientific questions.

Anyway I think that you are thinking of something slightly different.
You are asserting that the DNA mutation events are biased towards
favorable outcomes. This would imply that both statistical and quantum
mechanics were wrong, since they predict that the random events follow
certain probability distributions that are inconsistent with
persistently mostly favorable outcomes.

For instance, imagine that a coin is tossed multiple times. Your coin
toss model tells you that the tosses are fair (0.5 probability of heads
and independence of different tosses).
Now assume that for every head you win 1$ and for every tail you lose
1$. So heads is the favourable outcome (for you).

You claim that a mysterious force biases the tosses towards the
favourable outcome, but this is clearly inconsistent with the
assumptions of the coin toss model. If the bias is present, then the
model is wrong, and a statistical randomness test on the sequence of
outcomes is likely to detect it.
There is no way for an external force to favourably influence the
tosses and still adhere to the theoretical binomial distribution.
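This point can be made concrete with a short, self-contained sketch (the language, bias level, and sample size are my own illustrative choices, not anything from the thread): an exact binomial test flags a coin whose tosses are persistently nudged toward the favourable outcome.

```python
import math
import random

def binomial_p_value(heads, n, p=0.5):
    """Two-sided exact binomial test: total probability, under a fair
    model, of any outcome at least as far from the mean as `heads`."""
    def log_pmf(k):
        # log of C(n, k) * p^k * (1-p)^(n-k), computed via lgamma to
        # avoid overflow for large n
        return (math.lgamma(n + 1) - math.lgamma(k + 1)
                - math.lgamma(n - k + 1)
                + k * math.log(p) + (n - k) * math.log(1 - p))
    mean = n * p
    return min(1.0, sum(math.exp(log_pmf(k)) for k in range(n + 1)
                        if abs(k - mean) >= abs(heads - mean)))

random.seed(42)
n = 10_000
fair = sum(random.random() < 0.50 for _ in range(n))    # honest tosses
biased = sum(random.random() < 0.55 for _ in range(n))  # 5% nudge toward heads

# The biased sequence is incompatible with the fair-coin model; this is
# the sense in which a hidden favourable influence is detectable.
print("fair  :", binomial_p_value(fair, n))
print("biased:", binomial_p_value(biased, n))
```

Any persistent bias large enough to matter will, given enough tosses, fail such a test; that is the content of the claim that a biasing force contradicts the binomial model.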

Quantum and statistical mechanics are the "coin toss" model for random
events that happen in nature. If an influencing force
(/process/agency/whatever) acts biasing the DNA mutations, the
fundamental physics theories must be wrong, and this should be
detectable by some randomness test.

Thus, unless you provide adequate evidence that the DNA mutations are
biased, we should hold that our present scientific knowledge is correct
and you are wrong.

> Of course, one wouldn't grant ND even that the immediate physical
> conditions (quantum state) causing mutations are uncorrelated with the
> wider past and future contexts. That would need to be established.
> Since at the most basic physical level each mutation _is_ always
> correlated with the quantum state of entire universe visible at that
> location at the time of the mutation (since physical interactions can
> travel at light speed, reach the mutation point and affect DNA
> environment), as well as with the state of entire universe within the
> future light cone of the mutation, one cannot a priori exclude
> correlation of the mutation with higher level patterns in the past and
> future much wider environments. One could _in principle_ exclude these
> types of correlations (thus a foresight) empirically if one could
> control the immediate environment precisely enough (as allowed by the
> QT state of the system). Since this is technologically not viable at
> present, even without the theoretical QT limitations noted earlier,
> there are technological limitations precluding anyone from declaring
> that even these less fundamental kinds of correlations have been
> excluded empirically.

The theory of evolution doesn't exclude correlations in general. It
excludes persistently favourable events.

> In fact we know that there are explicit empirical counterexamples to
> accepting even this weaker (since it ignores QT dice) RM conjecture.
> Namely, there _are_ perfectly natural "intelligently guided" mutations
> i.e. there are natural processes which model and anticipate the
> phenotypic outcomes of DNA changes and preselect which changes to
> perform based on some far reaching purposes well beyond the immediate
> physical DNA environment. This "natural process" occurs e.g. in the
> brains of molecular biologists. Hence the neo-Darwinian RM conjecture
> _must_ be refined in order not to outright contradict the established
> empirical facts -- RM conjecture can't do better than to state that all
> mutations are random, _except_ (at least) those guided by the
> computations in the brains of molecular biologists. Now, that's one
> very messy and ugly kind of conjecture.

Strictly speaking, organisms created by biotechnologists are not
biologically evolved. They are evolved if you consider evolution in a
broader sense, which includes both biological and cultural evolution.

Anyway this is irrelevant, since biological evolution can never prove
that an organism has not been designed by some "intelligent" entity. It
simply gives an acceptable explanation for its origin that can be
considered true until evidence is found that proves the design.

If you found that a new species of bacteria appeared during your
lifetime, your default explanation for its origin would be evolution.

If you later found some evidence pointing out that it had been created
in some biotech lab, you would change your explanation.

Since we have no reason to believe that humans and the other "old"
organisms were created in a biotech lab, we assume that they evolved. If
we later find some alien biotechnologist, god, invisible pink unicorn,
flying spaghetti monster, or tiny Planck-creature, that claims (and
provides adequate evidence) that it did the job, we'll revise our
explanation.

Scientific explanations are always tentative, but this doesn't mean
that we can question their value without good evidence as ID does.

> ...

> > There is no need for an outside agency to account for
> > the drift observed.
>

> Only if you use the _empirical_ mutation rate in your calculations. But
> that doesn't tell you whether these mutations were "random" or
> "intelligent" (purposeful, anticipatory...). For all you know, the
> "intelligent" mutations can yield the same exact empirical rates as
> those observed.

As I said before, this is impossible.

>After all evolution patterns in other realms, such as
> languages, exhibit analogous statistical phenomena and patterns, yet
> they're clearly evolving through a purposeful activity of intelligent
> agents.

If evolution works with random unbiased mutations, it works better with
biased mutations. That's why these theories can apply concepts of
biological evolution in fields where the mutations are somewhat biased
towards favourable outcomes.

Anyway, the bias is usually small, since humans have only very limited
foresight on events that extend past their lifetime and spatial
location. Humans can be good at selecting favorable outcomes when they
happen (thus they are "intelligent selectors"), but the generation of
outcomes in these large-scale phenomena can be considered close to
random.

> The objective, ideologically and religiously neutral science should say
> that the nature of the mutations (which resulted in choices being
> offered to the natural selection) is not known. Unfortunately, the ND
> priesthood declared that it knows the answer and that the answer is
> "random mutations", and anyone questioning this dictum will have their
> papers rejected, their academic career ruined. Well, thanks goodness at
> least they're not threatening to tear the Dembski's limbs apart and
> burn him along with the rest of the RM conjecture 'deniers' as in good
> old times.
>

This seems paranoid.

> That was described in the 1st post, [M1]. To test the neo-Darwinian RM
> hypothesis for a single mutation, you need to estimate how large is the
> combinatorial space containing all arrangements of the available atoms
> which can arise (via laws of QT) from a given DNA state upon action of
> a single mutagenic event (e.g. a cosmic ray, an alpha particle...).
> (See [M1] & [M2] for description on why you need such number.)

I think that a DNA configuration is described, for biological purposes,
by its sequence of bases. You don't need the exact configuration of the
atoms.

> As pointed out in [M1],[M2], the present mathematical and computational
> tools are much too primitive for such computations.

This means: "I don't have any evidence for claiming that ID is correct
and evolution is false but someone might have in the future".

Until you have some evidence that falsifies evolution, science should
assume that evolution is correct.

>And since
> questioning the RM conjecture is officially forbidden, there was no
> need to create a term describing quantities no one in academia is
> allowed to think about.

Again, paranoid.

> ...

> > DNA is pretty much DNA at an atomic level. You gots your
> > A's, C's, G's. and T's.
>
> That comment illustrates the problem noted above. Once you eliminate
> all possible molecules which can arise e.g. upon a single alpha
> particle event, you have drastically reduced combinatorial space of all
> possible configurations in the 1-mutation neighborhood of a given DNA
> state, hence you reduced the denominator in the expression P=F/M (cf.
> [M1]) for probability of favorable mutation, artificially inflating
> this probability.
>
> Most mutagenic events will likely result in molecules which are not DNA
> molecule proper.

This is not called DNA mutation. This is called DNA destruction.
Usually the cell dies.

>To test the RM conjecture, you need to allow all such
> configurations (all possible molecules which can arise from given
> physical causes) and give them equal a priori probability as outcomes,
> whether they are proper DNA molecule or some other 'improper DNA'
> molecule. After all, all such events take time and consume reproductive
> resources. The RM conjecture requires that you give all such events
> equal a priori probability, P1=1/M. What you suggest is a pre-selection
> based on consequences expected later, which removes _upfront_ all
> mutagenic events which result in improper DNA (i.e. create some other
> molecules), sets their cost (in time and resources) to 0. That is
> cheating since it contradicts the RM conjecture (which you wish to
> prove/test) by setting the probabilities of the 'excluded' outcomes to
> 0. It is also a blatantly teleological exclusion, which further
> contradicts the RM conjecture.

No one has ever claimed such things.
These DNA-breaking events go under the probability of dying at a
certain instant.

> > Evolution is certainlly not guided, it is contingent.
> > Look at the Luria Delbruck experiment for a simple example.
>
> That and later similar experiments only show that the phage resistance
> arose via mutations in the exposed cultures (i.e. it wasn't
> pre-existent property). The results don't address at all the question
> of the nature of the mutation i.e. whether the mutations were "random"
> or "purposeful".

Unscientific question.

>Consider a counterexample where you get exactly the
> same adaptation pattern even though the innovation was clearly
> purposeful -- the technological evolution. Say, a sudden shortage of
> some vital raw material places some set of companies in danger of
> bankruptcy unless they can find a substitute or find some other product
> design. These companies are analogous to separate Petri dishes in LD
> experiment and the shortage is analogous to phage challenge (or even
> closer analogy to sugar in Cairn's experiment). The affected companies
> would switch into crisis mode, brainstorm, bring in new, more creative
> people, ... etc. At the end, depending on difficulty of the problem and
> the time constraints, you may have one or more companies that solved
> the problem and survived while all the rest went under. (If the problem
> was easy all of them may solve it.) Hence, you could get the same type
> of results as LD, yet here the "mutations" in the manufacturing process
> were "intelligent" (purposeful, guided....). Hence, this type of
> evolutionary pattern as observed by LD doesn't go either in favor or
> against the "random" mutation conjecture.

Read above for cultural evolution.

> You've lost the context of 10^X combinatorial space (see the
> explanation of "configuration" above). You need to count all outcomes
> of mutagenic events according to time and resources they consume, hence
> count all possible molecules which can arise as result of a mutagenic
> cause from a given initial state. You can, of course, take into account
> how long it takes to eliminate particular configuration e.g. some may
> destroy a cell right away, while others may allow it to live longer and
> propagate further. But you can't arbitrarily set to 0 cost of gross
> failures. That would be equivalent to introducing some kind of
> intelligent agency/process which can avoid such gross failures and
> their costs upfront (before consuming physical resources and time).
> That then contradicts your objective, to prove the RM conjecture.

Your calculations clearly cannot be carried out.
As I said before, the Random Mutation hypothesis can be derived from
statistical mechanics, which can in turn be derived from quantum
mechanics. If RM is false, QM is probably false. Since we trust QM very
much, we also trust RM until it's proven false. There is no need for
impossible calculations.

> ...

> Organisms are not free, but the molecules arising from mutagenic events
> are free to explore the combinatorial space of all possible molecules
> implied by QT model of the mutagenic events. For RM conjecture to be
> true, you cannot eliminate upfront any such configurations based upon
> much later consequences for the organism. Your reasoning is clearly
> siding with ID.
>

Again, no one did.

> How does that relate to quantitative criteria (described above and in
> M1) needed to decide between "random" vs "intelligent"?

Not true. At best these criteria could prove QM false. This wouldn't
imply ID = true.

> ...

> > b) incompetent/malevolent/trickster designer.
>
> Again look at the evolution of QWERTY keyboard layout or Windows OS.
> Both very kludgy, yet "intelligent" agents were behind it. The presence
> of foresight doesn't mean presence of perfect foresight. You are
> increasingly clinging onto strawman arguments (perfect designer,
> supernatural, magic,...etc). Not a good sign for the strength of your
> case.

What's wrong with the QWERTY layout?
Anyway the incompetent/malevolent/trickster argument is not unsound:
you claim that mutations are biased towards favorable outcomes; to do
so, the biasing force must have an incredible foresight. But still,
despite natural selection, we observe a lot of unfavourable mutations
that any silly human would never make. This seems a problem for your
theory.

> > And linkage disequilibrium of genes near these genes
> > showing strong selection indicate either a) evolution
> > or b) incompetent/malevolent/trickster designer.
> >
> > Which is simpler, and which has actual evidence?
>
> Simplicity is a useful criterion only when everything else is equal
> i.e. all candidates are equally correct (e.g. an arithmetic in which
> addition A+B always yields 1 for any A and B, might be simpler than the
> conventional one, yet it would be useless).

So?

>...


> > Actually, most mutations fall into the macroscopic realm,
> > at least for QT. We are talking about relatively large
> > chunks of stuff being moved- deamination of cytosine,
> > oxidative adducts, strand breakage and repair etc. You
> > are blathering nonsense here if you think there are any
> > Schrodinger cats involved. On the off chance you
> > think that most mutations involve radioactive decay,
> > you are very much mistaken.
>
> You have again lost the context for the QT part of the argument (see
> [M2] and links there for more details). QT is relevant in the sense
> that precisely same initial and boundary state of a DNA and its
> environment can yield different outcomes and that there is nothing in
> the physical state (as presently understood) that dictated which of the
> outcomes will occur. It is a curtain that no empirical means can peek
> behind. Hence, as explained in [M2] (with the analogy of 10 coins where
> you're trying to figure out whether I am throwing the coins or laying
> them out by hand behind the curtain) you need a combinatorial model (as
> sketched in M1,M2) to test whether the RM conjecture can reproduce the
> observed rates of evolutionary innovation. In the 10 coin example, the
> combinatorics is simple enough to test the nature of the tosses (random
> tosses or put by hand). With RM conjecture the combinatorics & physics
> is too complex for our present techniques.

No, the postulates of Quantum Mechanics can be tested in systems much
simpler than biological systems, and the result is that the random
events should be unbiased towards anyone and anything.
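The 10-coin curtain analogy quoted above is simple enough to simulate; the sketch below is my own illustration (the 10% "hand-laying" rate and trial count are invented), showing how merely counting favorable arrangements distinguishes random tossing from occasional deliberate placement.

```python
import random

random.seed(1)
TRIALS = 10_000
FAVORABLE = (1,) * 10  # the one "favorable" arrangement: ten heads

def toss():
    """One honest throw of ten fair coins."""
    return tuple(random.randint(0, 1) for _ in range(10))

# Pure random tossing: the favorable pattern occurs with probability 1/2**10.
random_hits = sum(toss() == FAVORABLE for _ in range(TRIALS))

# Hand behind the curtain: lays out the favorable pattern 10% of the time
# (an invented rate) and tosses honestly otherwise.
guided_hits = sum((FAVORABLE if random.random() < 0.10 else toss()) == FAVORABLE
                  for _ in range(TRIALS))

expected = TRIALS / 2**10  # favorable outcomes expected under pure chance
print(random_hits, guided_hits, expected)
```

With ten coins the combinatorics is tractable, which is exactly why the analogy is testable; the post's point is that the corresponding count for DNA configurations is not.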

> Therefore, the intelligent vs random mutations is an open question.

"Intelligent mutation" is an ill-defined term to me. I understand
"biased mutation".
Occam's razor rules out biased mutation until it's proven necessary to
explain the evidence.

> The
> ID arguments via "irreducible complexity" are merely plausibility
> arguments for ID, not a decisive criterion as sketched in [M1].

The arguments via "irreducible complexity" are all flawed.

> ND is neo-Lysenkoism, the "new and improved" Lysenkoism. Yet, you
> can still see the same kind of hyper-sensitive, overly defensive,
> zero-tolerance totalitarian mindset in the old and the new one. Just
> listen to Ward's "arguments" (dictums) and arrogant totalitarian style
> in the two Ward-Meyer debates. It makes one kind of sad to think what
> kind of intellects teach kids nowadays.

Paranoid.

> > Your model would make life impossible, so I suggest a
> > change in model parameters.
>
> I didn't propose any model but merely sketched quantitative criteria
> needed to distinguish between random vs intelligent mutations. This is
> no different in principle than criteria one uses to test random number
> generators. The RM conjecture implies certain probabilities for
> favorable mutations for which we have empirical frequencies, hence once
> we can estimate these implied probabilities (which we cannot do at
> present) we can decide whether the conjecture is valid. Therefore the
> validity of RM is an open and legitimate question.

No, you proposed some unusable criteria to claim that ID vs Evolution
is an open question. But the burden of proof is on you to prove
that Evolution is wrong.

> > And as a final point, the real problem of ID is the
> > invisible pink unicorn (or it's hipper cousin the FSM).
> > Any agent that can be postulated to do anything, in
> > any way, and hide itself from inspection, is by
> > definition unscientific. No predictive basis, no
> > testibility, no way to falsify.
>
> Why does the intelligent agent behind evolution have to be forever
> inaccessible? While it is conceivable that what we think of as the
> "universe" and ourselves may be a simulation in some gigantic game of
> Life, that is by no means the only possibility.

Metaphysical hypothesis.

> Even if one limits oneself to the present laws of matter-energy (thus
> ignoring the existence of mind stuff), it is perfectly conceivable that
> various biochemical networks (including "junk" DNA) are intelligent
> systems which can internally model DNA modifications and their possible
> phenotypic results in order to select the "optimum" (within the
> limitations of their computational capacity) mutation to make in the
> actual DNA.

Please define "mind-stuff". You can't? That's why science doesn't deal
with it.

> That such possibility is not far fetched at all is suggested by the
> already existent examples of genetic look-ahead by natural processes
> (besides the brains of molecular biologist or dog breeders). For
> example, we know that societies through customs, folklore, religions,
> arts, laws. etc also apply foresight regarding the future genetic
> combinations it wishes to create. E.g. consider the stigma (and all its
> manifestations) of criminal past - society does not wish to propagate
> the genes of a criminal. Even the mere act of locking up someone for
> several years performs the similar genetic look-ahead function (it
> reduces mating chances of prisoners). The content of our prisons shows
> which kind of genetic material the society wishes to eliminate.

That's called "artificial selection".

> Hence we know that such look-ahead regarding the future genetic content
> certainly exists at the level of individuals and the level of society.
> There is no reason or an argument why these two levels are all there
> is. I think similar genetic look-ahead occurs at all levels, above and
> below (possibly even below our present "elementary" particles), and in
> a multitude of ways. The neo-Darwinian dogmatic insistence on "random"
> mutations (which is rooted in 19th century mechanistic materialism,
> where everything was modeled as 'machines') along with its Lysenkoists
> methods of enforcing the thought conformity is a drag on science.

There is no satisfying definition of intelligence, but the only
intelligent beings we know of for sure are (certain) humans and perhaps
some animals. So why are you claiming that intelligence exists at other
levels?

> If one also recognizes the existence of mind stuff i.e. that it may be
> possible to scientifically model answers to questions 'what is it like
> to be this particular arrangement of atoms and fields that make up
> you', other possibilities of 'intelligent agency' open. For example, it
> is conceivable that what in our present QT are 'irreducibly'
> non-deterministic choices are actually choices made by the future
> model's counterparts of the mind stuff.

Argument from ignorance.

Deadrat

May 13, 2006, 6:51:13 PM

"nightlight" <night...@omegapoint.com> wrote in message
news:1147555425....@u72g2000cwu.googlegroups.com...

>>> The RM conjecture isn't in any essential way
>>> reliant on the particular molecule that transmits
>>> heritable traits. One can formulate RM with
>>> respect to some generic carriers of heritable traits
>>> without knowing what molecules implement such carrier.
>>
>> True, but Darwin had no reliable biochemical
>> models whatsoever.
>
> He understood that it was some material component of
> a cell. One can formulate an equivalent of RM for
> any such mechanism without knowing its structure.

Supposing only "some material component of a cell"? I doubt it.
That's hindsight talking.

> The basic question being debated between ID and ND
> is whether the random alternation of that trait
> transmitting component can account for the observed
> rates of successful evolutionary innovation.

There's little question that the alterations (which is what I
think you mean) of that trait-transmitting component account
for the changes in alleles that we see. Someone with more
knowledge of the research will have to quote you studies
on the randomness of the changes. I know there are "hot
spots" on chromosomes, so the distribution isn't
uniform. But most mutations are neutral or deleterious,
so it seems unlikely that they are part of some directed
project.

>> The question is whether one can find a strong
>> enough correlation between the environment -- say,
>> the food available to seed-eating birds -- and
>> the resulting genetic pool -- say, the birds'
>> beak sizes -- to declare a causal link.
>
> You're setting up a strawman ID by presuming an omniscient and
> omnipotent entity capable of turning out optimum traits on demand.

I do nothing of the sort. If there were some detectable intelligence
at work to push birds' beak sizes in some direction, then there wouldn't
be the variation that we see.

> The ID only assumes
> existence of some intelligent agency (of unspecified nature and
> computational capacity) which performs pre-selection among the
> physically accessible variations of the DNA based on some internal
> modeling process which takes into account the present state of the DNA
> and its environment.

And unfortunately for IDiocy, this "intelligent agency" acts only in a
way that is completely undetectable, as it is identical to the agency of
inherited variation and natural selection.

> You also presume to know what the optimum solution ought to be.

I do nothing of the sort. No "optimum" solution is necessary; the good
enough will do.

> There
> could be much more subtle solutions to food shortage than modifying
> beak sizes.

There could be. Beak size was simply an example. The argument applies
to whatever trait is selected. There will be variation around a mean, and
the environment will prune the results.

> There are also competing intelligent agencies on behalf of
> other organisms in the same ecosystems and these may have
> countermeasures to block some more obvious solutions.

There are no intelligent agencies (unless you're talking about husbandry),
let alone competing intelligent agencies. And whatever countermeasures there
are that "block" solutions, these are part of the environment that prunes the
randomly produced genetic changes.

> The basic fact is that the adaptations do occur at some empirically
> observed rates. The question discussed is whether the random
> alternations of DNA can produce such rates or not.

Random inherited variations in DNA plus natural selection.

> My point is that
> there are no calculations of such predictions from a random DNA change
> model (the RM conjecture). Hence there is no basis to claim that RM is
> an established fact.

Faulty replication is an established fact.

> Even as a conjecture, the RM is weak since there
> isn't a single established quantitative fact going in its favor.

Well, I guess if you don't count population studies and genetics.

> If RM cannot reproduce the empirically observed rates, then there must
> be additional pre-selection process which can model the requirements
> (the environment, the DNA and its phenotypic expressions) and eliminate
> much more quickly large classes of physically accessible but
> phenotypically unviable DNA alternatives (within the limits of its
> knowledge and computational capability). I suspect that at least one
> such intelligent agency is the biochemical reaction web of the "junk"
> DNA which can "sense" various challenges from the organism's
> environment (transmitted via hunger, thirst, heat, cold,...)

Might we have some evidence that junk DNA is a sensory or
computational apparatus?

> and
> compute solutions (based on accumulated library of useful solutions
> from its ancestors, going back to dawn of life). These solutions can
> alter multiple genes, spanning entire genome, simultaneously. Such
> solutions may not be the best, or may not work or may even result in
> worse traits than the original ones. This is no different than any of
> us solving our own problems. Dealing with problems, strategizing and
> applying foresight will on average beat not dealing with them.
>
> You should also recall that the intelligently guided alternation of the
> genome is an extremely common phenomenon. I mentioned several examples
> in previous posts. The obvious ones are molecular biology in research
> and genetic engineering in agriculture. In fact every organism performs
> such 'genetic engineering' when selecting a mate.

I believe you mean to say that *no* organism performs such
"genetic engineering." Except in an indirect and metaphorical way.
Mates are selected based on the expressions of DNA.

> Of course, the
> organisms don't use DNA sequencing or molecular biology to purposefully
> transform DNA to improved configurations. But that is just a matter of
> instruments and technology. These are all natural processes using
> foresight to reshape genome.

This is a perverse use of the word "foresight."

>>> ... anyone who claims that they know the mechanism
>>> of mutations, including the precise state of the
>>> DNA environment which contains the physical causes of
>>> mutations... Such claims are false both in quantum
>>> and classical models.
>>
>> Yes, but we don't need to know the mechanism of mutation,
>> if you mean what causes the quantum states of the bonds
>> of the base pairs of the DNA.
>
> You have missed the point. The argument I was countering (by pointing out
> quantum & other uncertainties) is a claim that we allegedly know
> exactly what is happening with DNA and that there is no room, or
> interface, for any 'intelligent agency' to affect or guide mutations. I
> am saying that this is not correct since there is more than enough
> freedom for an intelligent agency to affect DNA changes in a purposeful
> manner.

I may be forgiven for missing a point that talks about "freedom" to
affect DNA changes in a "purposeful" manner.

Because it's nonsense.

Deadrat

nightlight

May 13, 2006, 7:40:35 PM
>> The argument was about the sufficiency, at the mathematical
>> and the empirical level, of the "random" mutations in
>> replicating the observed rates and quality (complexity)
>> of evolutionary novelties.
>
> I have presented empirical support for the sufficiency of
> known *mutation rates* to be able to generate the
> evolutionary novelty of a human being from its common
> ancestor with chimpanzees.

You have completely misunderstood the question and your story of humans
and chimpanzees is a non sequitur (the rest of your comments follow the
same track from this wrong turn at the start). The process being
discussed is the change of DNA (from parent to offspring) and its
relation to the organism's environment.

For a given parent DNA, there is a set of all possible changes to its
DNA molecules due to mutations. Denote this set of all possible
resulting molecules (some of which may be grossly unviable or not even
proper DNA molecules) as {M} and its size (the number of elements in
the set) as M. Consider now a subset of {M} containing all molecules
which result in "favorable" (or at least not "harmful") phenotypes for
a given environment. Denote this "favorable" subset as {F} and its size
as F. The precise size F obviously depends on the specific definition
of "favorable" (i.e. how "harmful" a mutation must be before the
resulting DNA is excluded from {F}).

With these definitions, the neo-Darwinian "random mutation" (RM)
conjecture is a model in which we assign each element of {M} the same
a priori probability of occurring, PM = 1/M, i.e. there is no a priori
greater or smaller preference for any particular change of DNA to
occur. { Note that even though some DNA changes may result in a
grossly unviable organism (or may lead to the death of the parent
organism), the RM conjecture doesn't allow the use of a teleological
filter, hence one can't discount all such grossly unviable DNA changes.}

The probability PF of a changed DNA belonging to set {F} will be
PF=F/M. With this value PF and the rate of tries RT (computed from the
population size and the reproduction rate), one can compute the rate
of favorable mutations RF=RT*PF. This RF is the rate of favorable
mutations predicted by the RM model. One can then measure the
empirical rate of all mutations EA and, within it, the empirical rate
of favorable mutations EF (hence EF<EA), applying of course the same
specific criteria of "favorable" as the ones used in the definition of
the set {F} in the mathematical model.

If the RM-based model is correct, EF will be "close" to RF ("close" is
defined by the required likelihood margins). Hence, if RF is not close
to EF, the RM-based model is not correct. The ID model requires that EF
be much larger than RF, i.e. favorable mutations are much more frequent
than random causes would imply. For completeness, we can also introduce
a 'malicious designer' model, MD, in which EF is much smaller than RF,
i.e. the designer is purposefully making life harder for the organisms.
In short:

ID => EF > RF .... (1)
RM => EF = RF .... (2)
MD => EF < RF .... (3)
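
As a toy illustration of the test implied by criteria (1)-(3), here is a
small sketch; all numbers (M, F, RT) are hypothetical and invented purely
for illustration, not estimates for any real organism:

```python
import random

# Toy sketch of the RM-model test. All numbers are hypothetical.
M = 1_000_000   # size of {M}: all possible DNA changes (assumed)
F = 50          # size of {F}: the "favorable" subset (assumed)
RT = 1_000_000  # number of mutation "tries" observed (assumed)

PF = F / M      # probability that one equiprobable change lands in {F}
RF = RT * PF    # favorable-mutation rate predicted by the RM model

# Simulate RT equiprobable draws from {M} and count hits in {F};
# this count plays the role of the empirical rate EF under the RM model.
random.seed(0)
EF = sum(1 for _ in range(RT) if random.randrange(M) < F)

print("predicted RF:", RF, " simulated EF:", EF)
# EF >> RF would match criterion (1), EF ~ RF criterion (2),
# EF << RF criterion (3).
```

Under the RM model the simulated EF fluctuates binomially around RF, which
is exactly the "close to RF" comparison described above.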

At present no one can compute the numbers M and F for any remotely
realistic organism (although one can do it for much simpler 'artificial
life' models), hence we don't know which of the three types of
statistical models of mutations, RM, ID or MD, is the best one. It is
an open question. Empirical rates of favorable mutations, EF, do not by
themselves allow you to decide which of the cases (1)-(3) is the best
model. For that, you need the model's prediction, the value RF, and the
result of comparing EF and RF.

Your argument simply talks about properties of EF (the empirical rate
of favorable mutations) and EA (the empirical rate of all mutations).
That covers only the left side of the criteria (1)-(3). To decide
whether RM is a good model, you need to know what the model predicts,
the value RF. Hopefully, this is clear enough for you to revisit your
arguments and correct the trivial misunderstandings.

> The experiment that first demonstrated this (the Luria/Delbruck
> experiment) is very simple and convincing.

As already explained, the LD and similar experiments have no relation
to the RM vs ID criteria (1)-(3). LD only shows that the adaptive
mutations occurred after exposure to phage, as opposed to adaptation
being an infrequent activation of an existent response mechanism.
There is nothing in the LD experiment that differentiates random
mutation from purposeful mutation of the same empirical rate EF.
Exactly the same statistical pattern as in LD would occur in
technological evolution as described earlier.

z

May 13, 2006, 9:02:20 PM
to
On 12 May 2006 22:18:08 -0700, "nightlight"
<night...@omegapoint.com> wrote:

Gibberish. We can most certainly determine what the mutations are,
after they occur. We do not need to invoke any quantum effects.


>
>Therefore, even if one were to grant to ND that the immediate DNA
>environment (resulting in mutation) is independent from the physical
>state of any larger context (which might allow for purposeful
>mutations), there is always an impenetrable quantum curtain hiding the
>selection of particular outcome/mutation even if the state of DNA and
>its environment is known with theoretically maximum precision. Unlike
>classical physics, where the 'first mover'/external intelligent agency
>had to be constrained at the very beginning of the universe (to set up
>the deterministic laws of physics and the initial conditions of all
>particles and fields), the quantum physics provides an interface for an
>external (or even an internal/subquantum system built upon Planckian
>scale objects, see [M2]) intelligent agency to direct the universe
>continuously and in a great deal of detail (within the statistical
>constraints of QT, since these are empirically fixed).

A whole lot of nonsense. Random mutations are what drives evolution.
There are no intelligent agencies behind them.

>Of course, one wouldn't grant ND even that the immediate physical
>conditions (quantum state) causing mutations are uncorrelated with the
>wider past and future contexts. That would need to be established.
>Since at the most basic physical level each mutation _is_ always
>correlated with the quantum state of entire universe visible at that
>location at the time of the mutation (since physical interactions can
>travel at light speed, reach the mutation point and affect DNA
>environment), as well as with the state of entire universe within the
>future light cone of the mutation, one cannot a priori exclude
>correlation of the mutation with higher level patterns in the past and
>future much wider environments. One could _in principle_ exclude these
>types of correlations (and thus foresight) empirically if one could
>control the immediate environment precisely enough (as allowed by the
>QT state of the system). Since this is technologically not viable at
>present, even without the theoretical QT limitations noted earlier,
>there are technological limitations precluding anyone from declaring
>that even these less fundamental kinds of correlations have been
>excluded empirically.

More gibberish. There is no need to determine the correlation between
the organism and the rest of the universe. It either reproduces or it
doesn't.

>
>In fact we know that there are explicit empirical counterexamples to
>accepting even this weaker (since it ignores QT dice) RM conjecture.
>Namely, there _are_ perfectly natural "intelligently guided" mutations
>i.e. there are natural processes which model and anticipate the
>phenotypic outcomes of DNA changes and preselect which changes to
>perform based on some far reaching purposes well beyond the immediate
>physical DNA environment. This "natural process" occurs e.g. in the
>brains of molecular biologists. Hence the neo-Darwinian RM conjecture
>_must_ be refined in order not to outright contradict the established
>empirical facts -- RM conjecture can't do better than to state that all
>mutations are random, _except_ (at least) those guided by the
>computations in the brains of molecular biologists. Now, that's one
>very messy and ugly kind of conjecture.

Irrelevant. Sure, I can and do introduce mutations to reach a desired
goal. This has nothing to do with ND.


>
>For example, how would one make such a necessary exception precise in a
>formal scientific and mathematical language? What is it exactly about
>the arrangements of atoms and fields in the brain of a molecular
>biologist that makes it different from all other arrangements? Do you
>include into such arrangement the atoms and fields of the computers
>biologist uses to anticipate and guide his experiments?...

Random vs non-random. See? Very easy.


>
>Additionally, on what scientific basis can ND claim that even the
>vaguely stated exception is the sole exception to the RM conjecture?
>After all the brains of molecular biologists, or brains in general, are
>not the sole instance of intelligent networks (complex systems). Such
>intelligent networks are ubiquitous at all levels in nature, from
>biochemical reaction webs within cells through social networks and
>ecowebs. All such networks model internally their 'environment' with
>its punishments and rewards, play what-if game on the models and select
>actions which optimize their particular punishments/rewards. The
>neo-Darwinian RM dictum insists that in the entire spectrum of such
>intelligent networks there is just a single point, the brain of a
>molecular biologist, which purposefully guides mutations, and all other
>networks are supposed to be entirely disinterested and uninvolved. While
>at it, they might as well declare that there are exactly 12 angels that
>can dance on the head of a pin, and if you doubt it you can forget about
>publishing any papers in leading journals or getting any research
>funding or any academic position.

You are confusing ND with site-directed mutagenesis.


>
>
>> There is no need for an outside agency to account for
>> the drift observed.
>
>Only if you use the _empirical_ mutation rate in your calculations. But
>that doesn't tell you whether these mutations were "random" or
>"intelligent" (purposeful, anticipatory...). For all you know, the
>"intelligent" mutations can yield the same exact empirical rates as
>those observed. After all evolution patterns in other realms, such as
>languages, exhibit analogous statistical phenomena and patterns, yet
>they're clearly evolving through a purposeful activity of intelligent
>agents.

So, you postulate a supernatural entity that precisely mimics ND.
Again the choices are between an entirely natural and explicable
process, or a malevolent/incompetent/trickster fairy.

>The objective, ideologically and religiously neutral science should say
>that the nature of the mutations (which resulted in choices being
>offered to the natural selection) is not known. Unfortunately, the ND
>priesthood declared that it knows the answer and that the answer is
>"random mutations", and anyone questioning this dictum will have their
>papers rejected, their academic career ruined. Well, thank goodness at
>least they're not threatening to tear Dembski's limbs apart and
>burn him along with the rest of the RM conjecture 'deniers' as in good
>old times.

No, anyone questioning this theory had better have a better theory.
Yes, stating "the DNA genie did it!" will definitely hurt your chances
come tenure time.

Historically it's been the scientists who have been burnt at the stake
by fundamentalists, not the other way around. We are not persecuting
anyone, just doing things the way science does them. Propose a theory
that is predictive, testable, and falsifiable. Dembski can't even get
the math right (other than the bits he copied).


>
>>> All the biological evolution patterns, ... occur in
>>> the evolution of natural and artificial (such as
>>> mathematical formalisms) languages, religions, arts,
>>> scientific theories, technologies,... (just recall
>>> Dawkins memes). ...
>>
>> You are confusing models with reality.
>
>Not at all. Your reasoning apparently lost steam after one step. The
>classification to "object" and "model of that object" is not absolute
>i.e. the model M1(A) of some object A may be itself an object for
>another model M2(M1(A))... etc. Hence, a language which models some
>'reality' (object) may be an object of some other modeling scheme and
>its language. {One can also define a model M3 via M3(A)=M2(M1(A)) and
>M3 is then usually called a meta-language or meta-model of A.} Your
>objection reminds me of one of Galileo's inquisitors accusing Galileo
>of confusing rest with motion.

Again, that is irrelevant. If your model does not reflect reality,
it's very difficult to draw conclusions about reality from it.


>
>
>>>... since it cannot _compute_ either how many total DNA
>>> configurations could be produced in a given setting,
>>> much less how every DNA change at the atomic
>>> level would affect the phenotype (in order to enumerate
>>> the 'favorable' or at least neutral outcomes).
>>
>> WTH are DNA configurations?
>
>That was described in the 1st post, [M1]. To test the neo-Darwinian RM
>hypothesis for a single mutation, you need to estimate the size of the
>combinatorial space containing all arrangements of the available atoms
>which can arise (via laws of QT) from a given DNA state upon action of
>a single mutagenic event (e.g. a cosmic ray, an alpha particle...).
>(See [M1] & [M2] for description on why you need such number.)

No, you most certainly do not have to estimate how large the
combinatorial space of all possible arrangements of the DNA's atoms is.
As a matter of fact, that would be an exceptionally silly and
pointless exercise.


>
>> That certainly is not a term used by us mol bio folks.
>
>As pointed out in [M1],[M2], the present mathematical and computational
>tools are much too primitive for such computations. And since
>questioning the RM conjecture is officially forbidden, there was no
>need to create a term describing quantities no one in academia is
>allowed to think about.
>
>> And is "change at the atomic level" just an awkward way
>> of saying mutation?
>
>It was stripped down to the minimum required by the combinatorial
>reasoning in order to avoid any extraneous connotations and arguments
>by dictums. Why drag all the baggage not needed for the combinatorics
>and waste time on irrelevant tangents and associations, as your next
>sentence illustrates:
>
>> DNA is pretty much DNA at an atomic level. You gots your
>> A's, C's, G's, and T's.
>
>That comment illustrates the problem noted above. Once you eliminate
>all possible molecules which can arise e.g. upon a single alpha
>particle event, you have drastically reduced the combinatorial space of
>all possible configurations in the 1-mutation neighborhood of a given
>DNA state, hence you reduced the denominator in the expression P=F/M (cf.
>[M1]) for probability of favorable mutation, artificially inflating
>this probability.

Actually, we have a fair idea about the types of mutations that are
induced by alpha particles.


>
>Most mutagenic events will likely result in molecules which are not
>proper DNA molecules. To test the RM conjecture, you need to allow all such
>configurations (all possible molecules which can arise from given
>physical causes) and give them equal a priori probability as outcomes,
>whether they are proper DNA molecule or some other 'improper DNA'
>molecule. After all, all such events take time and consume reproductive
>resources. The RM conjecture requires that you give all such events
>equal a priori probability, P1=1/M. What you suggest is a pre-selection
>based on consequences expected later, which removes _upfront_ all
>mutagenic events which result in improper DNA (i.e. create some other
>molecules), and sets their cost (in time and resources) to 0. That is
>cheating since it contradicts the RM conjecture (which you wish to
>prove/test) by setting the probabilities of the 'excluded' outcomes to
>0. It is also a blatantly teleological exclusion, which further
>contradicts the RM conjecture.

Uhm, mutations are defined as being changes in the organism's DNA.
It's not cheating, it's the definition. And again, DNA is DNA. It is
neither proper nor improper at any stage.


>
>> Evolution is certainlly not guided, it is contingent.
>> Look at the Luria Delbruck experiment for a simple example.
>
>That and later similar experiments only show that the phage resistance
>arose via mutations in the exposed cultures (i.e. it wasn't
>pre-existent property).

No, the experiment did show that the mutations were pre-existing.
It's a famously simple experiment whose meaning you seem not to have
grasped.

>The results don't address at all the question
>of the nature of the mutation i.e. whether the mutations were "random"
>or "purposeful". Consider a counterexample where you get exactly the
>same adaptation pattern even though the innovation was clearly
>purposeful -- the technological evolution. Say, a sudden shortage of
>some vital raw material places some set of companies in danger of
>bankruptcy unless they can find a substitute or find some other product
>design. These companies are analogous to separate Petri dishes in LD
>experiment and the shortage is analogous to the phage challenge (or,
>an even closer analogy, to the sugar in Cairns's experiment).

The Cairns experiments exposed an interesting bit of bacterial biology.
Some members of a population deliberately suppress the normal error
correction built into the replication machinery when the population is
subject to severe stress. As far as we can tell it's a stochastic
behavior switch.

No, you miss the point. We count the survivors because they survive.
For simple organisms and simple experiments, I can calculate, based on
the average mutation rate for that organism, how many survivors I
would predict.
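
For illustration, the logic behind that kind of prediction can be sketched
with a toy Luria-Delbruck-style fluctuation simulation; the mutation rate,
culture size, and culture count below are invented, not taken from any
real experiment:

```python
import random
import statistics

# Toy sketch of the Luria-Delbruck fluctuation-test logic. If resistance
# mutations arise at random *during growth*, rare early mutants found
# large resistant clones ("jackpots"), so the variance of mutant counts
# across parallel cultures greatly exceeds the mean. Mutations induced
# only *on exposure* would instead give roughly Poisson counts
# (variance ~ mean). All parameters are invented for illustration.

def grow_culture(generations=12, mu=1e-4):
    """Grow one culture from a single cell; return final resistant count."""
    normal, resistant = 1, 0
    for _ in range(generations):
        resistant *= 2                     # resistant cells breed true
        daughters = 2 * normal             # every normal cell divides
        mutants = sum(1 for _ in range(daughters) if random.random() < mu)
        normal = daughters - mutants
        resistant += mutants
    return resistant

random.seed(1)
counts = [grow_culture() for _ in range(200)]
mean = statistics.mean(counts)
var = statistics.pvariance(counts)
print("mean:", mean, " variance:", var)
# variance >> mean is the fluctuation-test signature that the mutations
# arose before the selective agent was applied.
```

The jackpot cultures are what make the variance blow up relative to the
mean, which is how the original experiment distinguished pre-existing
mutations from induced ones.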

>
>For the RM conjecture, you cannot assume that anything is eliminating
>gross failures upfront based on some foresight about consequences --
>within RM all outcomes of mutagenic events are taken to be independent
>from the later consequences. In your reasoning, you explicitly invoke
>later consequences to eliminate _upfront_ non-functioning enzymes etc.,
>and declare that you do not wish to count such configurations as
>possible physical/chemical outcomes of mutagenic events (say an alpha
>particle impact). So, you have already changed sides and crossed into
>the ID camp.

Nope, you have trouble following the logic. I suspect you have very
little knowledge about the biology. We don't assume that anything is
"eliminating gross failures", we run an experiment and see what
happens.

>
>> Organisms are not free to explore the entire phase
>> space available based on their genome size and
>> randomlly rearranging the DNA.
>
>Organisms are not free, but the molecules arising from mutagenic events
>are free to explore the combinatorial space of all possible molecules
>implied by the QT model of the mutagenic events. For the RM conjecture
>to be true, you cannot eliminate upfront any such configurations based
>upon much later consequences for the organism. Your reasoning is clearly
>siding with ID.
>

No, you simply don't understand the biology behind DNA. ID'ists are
the folks who make wildly misguided assumptions about exploring all
possible combinations as proof that a designer made genomes.

Let's do a hypothetical, shall we? Somewhere in the genome of a
bacterium, a 5-methylcytosine spontaneously deaminates. You now have
one strand that has a G across from a deaminated methyl-C, aka a T.
Come replication time, you are gonna create two daughter strands that
differ at that position. No mysteries involved, and I didn't have to
consult the rest of the universe.
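
That hypothetical can be played out as a toy string model; the sequence is
invented and base-pairing is reduced to plain character complementation
(no strand orientation), just to show the one-site divergence:

```python
# Toy illustration of the deamination hypothetical above. The sequence is
# invented; pairing is simplified to character-by-character complementation.
# One unrepaired G.T mismatch resolves, at replication, into two daughter
# duplexes that differ at exactly one position.

def complement(strand):
    return strand.translate(str.maketrans("ACGT", "TGCA"))

top = "AAGCTT"                      # the G at index 2 pairs with a C below
bottom = complement(top)            # "TTCGAA"

# spontaneous deamination of the (methylated) C opposite that G -> a T,
# leaving a G.T mismatch in the duplex:
bottom = bottom[:2] + "T" + bottom[3:]

# replication: each strand templates a fresh complementary partner
daughter1 = (top, complement(top))          # restores the original G.C pair
daughter2 = (complement(bottom), bottom)    # fixes the change as A.T

diffs = [i for i, (a, b) in enumerate(zip(daughter1[0], daughter2[0]))
         if a != b]
print("daughter top strands:", daughter1[0], daughter2[0], "differ at", diffs)
```

One daughter duplex keeps the original pair; the other carries the point
mutation, exactly as described.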

>>> As explained, the empirical mutation rate is a fact
>>> orthogonal to the question of whether the mutations
>>> which differentiated humans from apes were random
>>> or guided (selected with look-ahead).
>>
>> What we see in comparison of the genomes shows that
>> the changes are biased towards genes implicated in
>> neural development, diet, and disease. Sort of what
>> you would expect based on the biology.
>
>How does that relate to the quantitative criteria (described above and
>in M1) needed to decide between "random" vs "intelligent"? After all,
>follow your reasoning through on the evolution of technology or
>languages, which are driven/guided by intelligent agents, and show how
>a characterization analogous to the one you give fails there. Your
>argument is simply a non sequitur. Purposeful mutations in no way
>contradict the existence of gradual change (consider human languages).

I never discussed the evolution of languages- you must be arguing with
someone else.


>
>> b) incompetent/malevolent/trickster designer.
>
>Again, look at the evolution of the QWERTY keyboard layout or Windows
>OS. Both are very kludgy, yet "intelligent" agents were behind them.
>The presence of foresight doesn't mean the presence of perfect
>foresight. You are increasingly clinging to strawman arguments
>(perfect designer, supernatural, magic, etc.). Not a good sign for the
>strength of your case.

The QWERTY keyboard was deliberately designed to be inefficient.

>> And linkage disequilibrium of genes near these genes
>> showing strong selection indicate either a) evolution
>> or b) incompetent/malevolent/trickster designer.
>>
>> Which is simpler, and which has actual evidence?
>
>Simplicity is a useful criterion only when everything else is equal
>i.e. all candidates are equally correct (e.g. an arithmetic in which
>addition A+B always yields 1 for any A and B, might be simpler than the
>conventional one, yet it would be useless).
>
>> Bad model, and even worse conclusion.
>
>You need to provide a counter argument that holds for longer than one
>post before we reach that conclusion.

No, the model of ND as a part of the ToE is sort of the starting
point. It is incumbent upon you to provide a better one.

Regardless, a bad model.

>
>> Actually, most mutations fall into the macroscopic realm,
>> at least for QT. We are talking about relatively large
>> chunks of stuff being moved - deamination of cytosine,
>> oxidative adducts, strand breakage and repair etc. You
>> are blathering nonsense here if you think there are any
>> Schrodinger cats involved. On the off chance you and
>> think that most mutations involve radioactive decay,
>> you are very much mistaken.
>
>You have again lost the context for the QT part of the argument (see
>[M2] and links there for more details). QT is relevant in the sense
>that exactly the same initial and boundary state of a DNA and its
>environment can yield different outcomes, and that there is nothing in
>the physical state (as presently understood) that dictates which of the
>outcomes will occur. It is a curtain that no empirical means can peek
>behind. Hence, as explained in [M2] (with the analogy of 10 coins where
>you're trying to figure out whether I am throwing the coins or laying
>them out by hand behind the curtain) you need a combinatorial model (as
>sketched in M1,M2) to test whether the RM conjecture can reproduce the
>observed rates of evolutionary innovation. In the 10 coin example, the
>combinatorics is simple enough to test the nature of the tosses (random
>tosses or laid by hand). With the RM conjecture the combinatorics &
>physics are too complex for our present techniques.

See above.

>Therefore, intelligent vs random mutations is an open question. The
>ID arguments via "irreducible complexity" are merely plausibility
>arguments for ID, not the decisive criterion sketched in [M1].
>
>> And before you call someone a Lysenkoist, please learn
>> what a Lysenkoist is. He certainly did not believe
>> in random changes as a driving force for changes in
>> organisms. If you are referring to his suppression of
>> what you would call "Neo-Darwinists" in the former USSR,
>> then you should use the term "Stalinist". Calling a
>> biologist a "Lysenkoist" is confusing.
>
>ND is neo-Lysenkoism, the "new and improved" Lysenkoism. Yet, you
>can still see the same kind of hyper-sensitive, overly defensive,
>zero-tolerance totalitarian mindset in the old and the new one. Just
>listen to Ward's "arguments" (dictums) and arrogant totalitarian style
>in the two Ward-Meyer debates. It makes one kind of sad to think what
>kind of intellects teach kids nowadays.

The only point where I have zero tolerance is where someone tries to
pass off creationism as science. If you can come up with a better
model, then I would be happy to learn it. But saying the invisible
pink unicorn is responsible for the diversity of life on this planet
is definitely not gonna make me think differently.

>> Your model would make life impossible, so I suggest a
>> change in model parameters.
>
>I didn't propose any model but merely sketched the quantitative
>criteria needed to distinguish between random vs intelligent mutations.
>This is no different in principle from the criteria one uses to test
>random number generators. The RM conjecture implies certain
>probabilities for favorable mutations, for which we have empirical
>frequencies; hence once we can estimate these implied probabilities
>(which we cannot do at present) we can decide whether the conjecture is
>valid. Therefore the validity of RM is an open and legitimate question.

No, your model's assumption about how DNA actually functions is what
would make life impossible.

>
>> And as a final point, the real problem of ID is the
>> invisible pink unicorn (or it's hipper cousin the FSM).
>> Any agent that can be postulated to do anything, in
>> any way, and hide itself from inspection, is by
>> definition unscientific. No predictive basis, no
>> testibility, no way to falsify.
>
>Why does the intelligent agent behind evolution have to be forever
>inaccessible? While it is conceivable that what we think of as the
>"universe" and ourselves may be a simulation in some gigantic game of
>Life, that is by no means the only possibility.

Neo, is that you?

>
>Even if one limits oneself to the present laws of matter-energy (thus
>ignoring the existence of mind stuff), it is perfectly conceivable that
>various biochemical networks (including "junk" DNA) are intelligent
>systems which can internally model DNA modifications and their possible
>phenotypic results in order to select the "optimum" (within the
>limitations of their computational capacity) mutation to make in the
>actual DNA.

"Mind stuff"?

Uhm, not to the extent that you are thinking. Organisms don't
generally mutagenize their own DNA. That would be bad in most cases.
Again, you need to learn some of the biology going on here.

I'm not saying organisms never muck with their DNA. It's either to a
very limited extent (imprinting, which doesn't change the sequence) or
done only in non-reproductive cells.

>That such possibility is not far fetched at all is suggested by the
>already existent examples of genetic look-ahead by natural processes
>(besides the brains of molecular biologist or dog breeders). For
>example, we know that societies through customs, folklore, religions,
>arts, laws. etc also apply foresight regarding the future genetic
>combinations it wishes to create. E.g. consider the stigma (and all its
>manifestations) of criminal past - society does not wish to propagate
>the genes of a criminal. Even the mere act of locking up someone for
>several years performs a similar genetic look-ahead function (it
>reduces mating chances of prisoners). The content of our prisons shows
>which kind of genetic material the society wishes to eliminate.

Really, really faulty analogy, showing a general lack of knowledge
of genetics.

>Hence we know that such look-ahead regarding the future genetic content
>certainly exists at the level of individuals and the level of society.
>There is no reason or argument why these two levels are all there
>is. I think similar genetic look-ahead occurs at all levels, above and
>below (possibly even below our present "elementary" particles), and in
>a multitude of ways. The neo-Darwinian dogmatic insistence on "random"
>mutations (which is rooted in 19th century mechanistic materialism,
>where everything was modeled as 'machines') along with its Lysenkoist
>methods of enforcing the thought conformity is a drag on science.
>

If you had an actual clue about the biology, you would be on the way
to making a point. However, we know that there is no "look ahead
function" in selection. The closest thing is a process called
cooption. But cooption actually strengthens the ToE.

Again with the Lysenkoism. Scientists are the least conformist people
on the planet in many respects. Gather three scientists and you are
certain to get 4 different opinions for any novel result. In general,
they will not fall back on some mysterious hidden designer working
mischief behind their backs.

>If one also recognizes the existence of mind stuff i.e. that it may be
>possible to scientifically model answers to questions 'what is it like
>to be this particular arrangement of atoms and fields that make up
>you', other possibilities of 'intelligent agency' open. For example, it
>is conceivable that what in our present QT are 'irreducibly'
>non-deterministic choices are actually choices made by the future
>model's counterparts of the mind stuff.

Again with the mind stuff. Just because we have only a limited
understanding of how the brain works does not automatically mean that
there is anything other than natural processes at work.

B Miller

z

May 13, 2006, 9:15:39 PM
to
On 13 May 2006 16:40:35 -0700, "nightlight"
<night...@omegapoint.com> wrote:


<snip>

>As already explained, the LD and similar experiments have no relation
>to the RM vs ID criteria (1)-(2). LD only shows that the adaptive
>mutations occurred after exposure to phage, as opposed to adaptation
>being an infrequent activation of an existent response mechanism. There
>is nothing in LD experiment that differentiates random mutation vs
>purposeful mutation of the same empirical rate EF. The exactly same
>statistical pattern as in LD would occur in technological evolution as
>described earlier.

You really missed the point of LD. They showed that the mutations
exist prior to exposure to phage.

I suppose that one could postulate that a supernatural agent chose to
modify the bugs in such a way as to make the outcome look exactly like
what would be expected by chance alone. If one goes down that road,
where do you draw the line at such interference? For example, all the
data supportive of QM may also be due to little quark fairies
manipulating the results. Maybe they were pissed at Newton for not
playing more with his alchemy set. They could even be manipulating
your thoughts, choosing what you are going to have for dinner.

We could even invoke this as a legal defence. "Your Honor, my client
is innocent because the FSM touched him with His Noodly Appendage, and
forced him to rob that little old lady."

I prefer to live in a rational universe.

B Miller

nightlight

May 14, 2006, 1:11:19 AM
to
>> Hence mutations are at least correlated
>> with the most immediate physical (quantum)
>> state of the DNA and its immediate environment,
>> which in turn are correlated with the larger (in
>> space) future and past environments i.e. mutations
>> are trivially correlated with events and states of
>> the past and the future environments of the organism.
>
> Irrelevant.

It is relevant against the unconditional claim (to which it responded)
that mutations are absolutely not correlated with any future states of
the organism's environment.

> I don't understand this clearly but I still think that
> there is at least an error: you are talking about light
> cones, which are a concept of Einsteinian relativity,
> which wasn't available before 1905. So it wasn't
> 19th century physics.

The term "light cone" comes from Minkowski's 4D formalism for
Einstein's SR. But the speed of light (the speed of EM waves) was
known throughout the 19th century.


> It's widely understood that quantum effects are not
> relevant at the scale of organic molecules, like DNA,
> because of a phenomenon called "decoherence".

That is a crude, low-resolution statement. The 'decoherence' washes out
the aspects of QM which cannot even in principle be simulated by a
classical model, such as violations of the Bell inequality. But this
was not the aspect being used in my argument. The QM aspect I used was
QM indeterminism, i.e. that even the theoretically most precise
knowledge of a state does not allow you in general to predict the
outcome. { The Bell theorem and its tests are used to support the QM
claim that this indeterminism is fundamental.} A larger system in which
decoherence would normally be significant would not violate the Bell
inequality, hence it could be simulated with some classical
indeterministic system (e.g. where classical noise may play the role of
QM indeterminism). But my argument only required indeterminism as such,
which would apply to the exact quantum model as well as to its
classical simulation.

Whether some system with a large number of particles will exhibit QM
indeterminism depends on the structure of the system. For example, a
photodetector, which is a macroscopic system much larger than DNA,
exhibits QM indeterminism at the macroscopic level. The inelastic
scattering processes (of photons or atoms/ions on DNA atoms) which
yield mutations are generally indeterministic processes.


> Anyway I think that you are thinking of something
> slightly different. You are asserting that the dna
> mutation events are biased towards favorable outcomes.
> This would imply that both statistical and quantum
> mechanics were wrong, since they predict that the
> random events follow certain probabilty distributions
> that are inconsistent with persistently mostly
> favorable outcomes.

It wouldn't mean they are wrong. For example, if QM predicts on average
F occurrences of some event E in a sample of N measurements, then if
you repeat the N-measurement test, you won't get exactly F events E in
each N-sample. You will get a binomial distribution with an average
number of events F, but any number of events K from 0 to N has some
nonzero probability:

P(K) = p^K * (1-p)^(N-K) * C(N,K),

where p = F/N and C(N,K) = N!/(K!*(N-K)!). Additionally, QM doesn't
tell you which K measurements out of N will produce event E. There are
C(N,K) ways to pick these K out. Hence you can satisfy the
probabilistic constraints and still have plenty of freedom to send a
message in a sequence of outcomes.
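The binomial formula above is easy to check numerically. A minimal sketch (the values of N and p are arbitrary illustrative choices of mine):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(K = k) = p^k * (1-p)^(n-k) * C(n, k), as in the formula above."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: N = 10 trials, per-trial probability p = 0.3 of event E.
n, p = 10, 0.3
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]

# Every K from 0 to N has nonzero probability, the distribution sums
# to 1, and the mean number of events is F = p*N = 3.
assert all(q > 0 for q in pmf)
assert abs(sum(pmf) - 1.0) < 1e-9
mean = sum(k * q for k, q in enumerate(pmf))
assert abs(mean - p * n) < 1e-9
```

Note that the assertions only constrain the aggregate counts, not which particular trials produce E, which is the freedom the argument relies on.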

Note that this is similar to observing empirical probabilities of
various letters in English texts. For a given writer these tend to be
fixed (even word frequencies remain fixed and can be used to help
identify the author). Hence a series of books by a writer will have
letter and word frequencies which comply with his signature
probabilities, yet the texts are entirely different.

Further, market research and the social sciences routinely use
statistical/probabilistic methods, even though their subjects seem free
to choose what to do. A good poll of 5000 'likely voters' will
generally predict the election outcome even though the remaining 300
million minus 5000 are free to decide as they please.

> There is no way for an external force to influence
> favourably the tosses and still adhere to the
> theoretical binomial random distribution.

An agency with memory, which keeps track of frequencies and can buffer
blocks of multiple symbols before producing adjusted output, can send
messages without violating the binomial distribution. This is also
trivial to do if you let the encoded message grow proportionally to
sqrt(N), where N is the length of the full sequence. Similarly, if you
compress your message optimally and then encrypt it, it would be
indistinguishable (except to a cryptanalyst) from a genuine Bernoulli
sequence.
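The encrypt-then-transmit point can be illustrated with a toy one-time pad: XORing a message with random pad bits yields a stream that looks like fair coin flips to an outside observer, yet the intended receiver recovers the message exactly. A sketch (the payload and pad seed are arbitrary choices of mine):

```python
import random

message = b"directed"                       # arbitrary example payload
bits = [(byte >> i) & 1 for byte in message for i in range(8)]

rng = random.Random(42)                     # pad seed shared by sender/receiver
pad = [rng.randrange(2) for _ in bits]

# Ciphertext: statistically resembles a Bernoulli(0.5) sequence.
cipher = [b ^ p for b, p in zip(bits, pad)]

# Receiver with the same pad recovers the message bits exactly.
decoded_bits = [c ^ p for c, p in zip(cipher, pad)]
assert decoded_bits == bits

# The frequency of 1s in the ciphertext sits near 0.5 regardless of
# what the message says.
freq = sum(cipher) / len(cipher)
assert 0.2 < freq < 0.8
```

The design point: the statistical constraints (here, bit frequencies) are satisfied, yet the sequence carries a message, mirroring the claim in the text.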

> Thus, unless you provide adequate evidence that the
> dna mutation are biased, we should hold that our
> present scientific knowledge is correct and you are wrong.

As explained above, there is no need to violate QM or statistical
mechanics in order to have directed mutations. After all, directed
mutations already exist (e.g. those produced by molecular biologists)
and that instance violates no natural laws. There is no a priori reason
why other intelligent networks (besides the brains of molecular
biologists), from biochemical reaction networks within cells through
ecowebs, could not do the same. They can all interact at the physical
level (through any number of intermediate interaction chains) with the
cellular DNA, just as the brain of a molecular biologist does.

Finally, I have explained several times the explicit quantitative
criterion which distinguishes whether the "random" mutation model is
capable of reproducing the observed rates of evolutionary novelties or
not. Hence, whether such a computation is presently practical or not,
the mere existence of a precise quantitative criterion implies that the
question of ID vs RM is a perfectly valid scientific question. There
may be other, more accessible tests, but they are not necessary to
falsify the neo-Darwinian claim that ID vs RM is not a legitimate
scientific question. It is, and there exist algorithms which
distinguish between the two.

> The theory of evolution doesn't exclude correlations
> in general. It excludes persistently favourable events.

To put it more precisely, it is the RM conjecture that excludes any
statistically significant "excess" of favorable mutations. And the
"excess" is understood to be with respect to the rates that the RM
model would predict. Thanks for agreeing that this is a perfectly valid
scientific question which can in principle be decided (once we can
compute what the predictions of RM are and how they compare to the
empirical rates of favorable mutations).

>> This "natural process" occurs e.g. in the brains of molecular
>> biologists. Hence the neo-Darwinian RM conjecture _must_ be
>> refined in order not to outright contradict the established
>> empirical facts -- RM conjecture can't do better than to
>> state that all mutations are random, _except_ (at least)
>> those guided by the computations in the brains of molecular
>> biologists. Now, that's one very messy and ugly kind of
>> conjecture.

> Strictly speaking, organisms created by biotechnologists
> are not biologicaly evolved.

You're trying to weasel out by switching to a debate about the
definition of "biologically evolved". I am simply saying that "natural
processes" exist which do perform purposeful mutations. The RM
conjecture must take that fact into account. An RM conjecture which
agrees with the empirical facts cannot be defined or stated without
introducing messy exceptions (which are too vague for a scientific
theory).

> Anyway this is irrelevant, since biological evolution can
> never prove that an organism has not been designed by
> some "intelligent" entity.

But the RM conjecture is testable, in principle. Should it fail the
test, the remaining options are ID (intelligent benevolent designer) or
MD (malicious designer, which would make the favorable empirical rates
lower than the RM model prediction).

>> After all evolution patterns in other realms, such as
>> languages, exhibit analogous statistical phenomena and
>> patterns, yet they're clearly evolving through a
>> purposeful activity of intelligent agents.
>
> If evolution works with random unbiased mutations,
> it works better with biased mutations. That's why
> these theories can apply concepts of biological evolution
> in fields where the mutations are somewhat biased towards
> favourable outcomes.

The point was that you cannot use statistical patterns found in the
evolution of technologies, languages, religions, human societies, etc.
as proof that biological mutations are undirected/random when you find
analogous patterns in the biological domain. If you bring up some
statistical pattern from biology as a proof or indicator in favor of
the RM conjecture, and someone can point to an analogous pattern from
these domains, your proof is invalid and your indicator is irrelevant.

> Anyway, the bias is usually small, since humans have only
> very limited foresight on events that extend past their
> lifetime and spatial location.

Why would the intelligent agency/agencies guiding evolutionary
mutations be any smarter or more accurate? Are you perhaps trying to
sneak in the "perfect designer" strawman?

>> Unfortunately, the ND priesthood declared that it knows
>> the answer and that the answer is "random mutations",
>> and anyone questioning this dictum will have their
>> papers rejected and their academic career ruined. Well,
>> thank goodness at least they're not threatening to tear
>> Dembski's limbs apart and burn him along with the
>> rest of the RM conjecture 'deniers' as in the good
>> old times.
>
> This seems paranoid.

The official public threats to destroy the careers of anyone supporting
ID are no secret. There have been plenty of discussions of that on
pro-ID sites such as Dembski's:
http://www.uncommondescent.com/

where someone suggested that such blatant discrimination, in
conjunction with recent court decisions which labeled ID as religion,
could be used to sue universities for religious discrimination.

> I think that "dna configuration" is described by a sequence
> of bases for biological purposes. You don't need the
> exact configuration of the atoms.

The term was used to denote all possible outcomes of all possible
mutagenic events on a given DNA (as part of the suggested test of the
RM model's predictions).


>> As pointed out in [M1],[M2], the present mathematical
>> and computational tools are much too primitive for
>> such computations.
>
> This means: "I don't have any evidence for claiming that
> ID is correct and evolution is false but someone might
> have in the future".

You missed the point. As explained a few paragraphs ago, as long as
there _exists_ an algorithm/test which can distinguish the RM model
from the ID model, regardless of whether it is practical at present,
its mere existence invalidates the key neo-Darwinian dogma that ID vs
RM cannot be tested even in principle (hence that ID is not, and cannot
ever be, a scientific position).

> Until you have some evidence that falsify evolution
> science should assume that evolution is correct.

Strawman. The RM conjecture is not equivalent to "evolution". The RM
conjecture is falsifiable (at least in principle).

> Again, paranoid.

Nope. You're simply uninformed.

>> Most mutagenic events will likely result in molecules
>> which are not DNA molecule proper.
>
> This is not called dna mutation. This is called
> dna destruction. Usually the cell dies.

You can't use a teleological argument (as you do) to eliminate such
configurations upfront. The test was for the RM conjecture (not for
natural selection), hence all such mutagenic events have equal a priori
probabilities, regardless of what may happen later to the cell. You
can't use teleological reasons within RM (the model whose predictions
are being tested) to eliminate such events from the count. When
counting all DNA configurations reachable via a single mutagenic event
from a fixed initial DNA state, a DNA change which subsequently kills
the cell has equal weight to the most favorable one.


> Your calculations are clearly not carriable.

The point is that algorithm _exists_ which can test/falsify RM. This
existence falsifies the ND dogma that ID is not a science.

> As I said before,
> the Random Mutation hypothesis can be derived from statistical
> mechanics, which can in turn be derived from quantum mechanics.
> If RM is false QM is probably false. Since we trust QM very
> much, we trust also RM until it's proven false. No need of
> impossible calculations.

For any QM calculation you need to _put in by hand_ some initial and
boundary conditions (to solve the partial differential equations for
the system's evolution). Since we know that there are perfectly natural
processes which can guide mutations (the brains of molecular
biologists), it follows that there exist boundary conditions consistent
with guided mutations. Hence, you cannot flatly say that a QM
calculation can prove the RM conjecture. The guidance can be done via
boundary conditions, and these are put in by hand. Therefore you need
more steps in your suggested derivation, and these are precisely the
ones I was describing.

Namely, this QM calculation is merely a special case of a part of the
algorithm I have been describing from the first post here (I didn't
specify how much detailed physics is needed to compute the set of DNA
configurations which are one mutagenic event away from a given DNA
state).

But in order to prove or falsify the RM conjecture, you also need to
derive the number of favorable (by some criterion) configurations and
use that to compute the rates of "favorable" mutations for a given
population size and reproductive rate. The result would be the RM
model's prediction for the favorable mutation rate, RF. Then you would
measure the empirical rate of "favorable" (in the same sense as used
for RF) mutations, EF, and compare the two. The ID conjecture is that
EF > RF, while the RM conjecture requires EF = RF. A malicious designer
(MD) corresponds to EF < RF.

Of course, to compute the number of favorable mutations, you would need
a model which allows computation of the phenotypic consequences of
arbitrary DNA changes, plus some conventional threshold of success
defining "favorable". Present molecular biology is far from having any
such models. The present models for such a mapping are not much better
than the model a caveman would have of a computer if allowed to play at
it and watch the screen's responses.
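The EF-versus-RF comparison described above could in principle be cast as a standard significance test. A hypothetical sketch, where every number (rf, n_mut, k_fav) is a made-up placeholder, since, as the text notes, the real RF is not presently computable:

```python
from math import comb

def binom_upper_tail(k, n, p):
    """P(K >= k) under Binomial(n, p): the chance of observing k or more
    favorable mutations if the model's per-mutation probability is p."""
    return 1.0 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k))

rf = 1e-4          # hypothetical RM-predicted favorable probability per mutation
n_mut = 100_000    # hypothetical number of observed mutations
k_fav = 25         # hypothetical observed count of favorable mutations

ef = k_fav / n_mut                       # empirical favorable rate
p_value = binom_upper_tail(k_fav, n_mut, rf)

# EF > RF with a tiny tail probability would count against RM; EF < RF
# would correspond to the "malicious designer" case described above.
assert ef > rf
assert 0 < p_value < 0.01
```

This is only the statistical shell of the test; the hard part, as the text says, is computing RF from a model of phenotypic consequences.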


> What's wrong with the QWERTY layout?

It was designed to slow down typing and prevent mechanical jams. So it
takes more effort and movement than the more optimized Dvorak layout
(or so they say; I've never tried it).

> Anyway the incompetent/malevolent/trickster argument
> is not unsound:
>
> You claim that mutations are biased towards favorable
> outcomes; to do so the biasing force must have an
> incredible foresight.

Not at all. It may only appear to be "incredible foresight" to our
present molecular biology, but not to the intelligent agency itself
(which may well be, at least in part, implemented in the "junk" DNA).
After all, it would seem incredibly difficult for our present
technology to create a live cell from scratch. Yet cells do it with
ease: you put one cell in the required substratum, you check later, and
there are millions of cells. Now take the same substratum and put in
(or around it) the whole biotech industry and science, all the best
brains and biggest computers and labs, but without allowing them to
call a cell for help or to borrow a cell's tools and materials (hence
they can't use anything produced by cells or viruses; that would be
cheating), and you can wait all you want: there won't be any live cells
in the dish. There won't be even an organelle.

> we observe a lot of unfavourable mutations
> that any silly human would never make. This seems
> a problem for your theory.

It is not a problem at all. Different intelligent agencies have
different computational capabilities, different optimization
algorithms, and different utility functions being optimized. To a cell,
all molecular biologists would appear inept (after the thought
experiment described above).

> No, postulates of Quantum Mechanics can be tested in
> systems much simpler than biological systems, and the
> result is that the random events should be unbiased
> towards anyone and anything.

As explained, guided mutations need not contradict QM. First, guided
mutations already exist (via biotechnology) without contradicting QM.

Second, the statistical constraints of QM are too weak to preclude an
intelligent agency operating without violating QM probabilities. There
could in principle be an entire subquantum realm through which an
underlying intelligent agency simulates (perhaps as a side effect of
some other optimization) QM probabilities at our level, along with the
rest of our physical laws. After all, there are as many orders of
magnitude between our elementary particles (10^-16 cm) and us as there
are between Planck-scale objects (10^-33 cm) and our present elementary
particles. Hence, there is no shortage of room for as much complexity
to arise in a subquantum realm built upon Planck-scale objects as there
is at our level.

Third, the boundary conditions used for QM computations of any system
are put in by hand -- they are free parameters. As noted above, one can
guide mutations entirely via boundary conditions without even dealing
with the encoding of the signals under the constraints of fixed QM
probabilities.


> "Intelligent mutation" is a term ill-defined to me.
> I understand biased mutation.

In this context "intelligent" means that there was some look-ahead (in
the generation of mutations) by some agency capable of internally
modelling the effects of DNA transformations, which allows it to
eliminate many possible transformations upfront, within the model
itself, before implementing its choice on the actual DNA.
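As a toy illustration of this look-ahead notion (entirely my own construction, not a biological model): an agency with an internal scoring model can pre-screen several candidate changes and implement only the best, while a blind process implements whichever single candidate the dice produce. The target string, scoring rule, and parameters are all arbitrary:

```python
import random

TARGET = "GATTACA"

def score(genome):
    """Internal model: count positions matching a fixed target pattern."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rng):
    """Change one random position to a random base."""
    i = rng.randrange(len(genome))
    return genome[:i] + rng.choice("ACGT") + genome[i + 1:]

def evolve(lookahead, steps=1000, seed=0):
    rng = random.Random(seed)
    g = "AAAAAAA"
    for _ in range(steps):
        candidates = [mutate(g, rng) for _ in range(8 if lookahead else 1)]
        # Look-ahead: evaluate candidates in the internal model first and
        # implement only the highest-scoring one; blind mode takes the
        # single candidate as-is.
        best = max(candidates, key=score) if lookahead else candidates[0]
        if score(best) >= score(g):   # selection acts in both cases
            g = best
    return score(g)

assert evolve(lookahead=True) == len(TARGET)
assert evolve(lookahead=True) >= evolve(lookahead=False)
```

The point of the sketch is only that pre-screening in an internal model discards bad transformations before they are "implemented", which is the defining feature of look-ahead described above.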

> Occam's razors rules out biased mutation until it's
> proven necessary to explain evidence.

Occam's razor is relevant when selecting among theories which are
equally correct but differ in complexity. But ND and ID cannot both be
correct, since there exists an algorithm (practical or not) which can
distinguish between the two. Hence Occam's razor is inapplicable.

> No, you proposed some unusable criteria to claim that
> ID vs Evolution is an open question. But the burden
> of the proof is on you to prove that Evolution is wrong.

The "usability" of the algorithm doesn't matter for the argument; only
the existence of an algorithm matters. Also, it is not Evolution being
tested but only the RM conjecture.


> Please define "mind-stuf".

Mind-stuff is an element of reality responsible for the phenomena
described by questions such as: what is it like to be this particular
arrangement of atoms (e.g. the one which makes "you")? We may not know
the answer in terms of our present laws of matter-energy, but we
certainly know that there is something it is like to be, at least for
some arrangements. The phenomenon exists, but our present natural laws
do not capture it. We do have plenty of words and literature referring
to and describing the phenomenon.

> You can't? That's why science doesn't deal with it.

Science doesn't deal with it because it is not advanced enough, not
because the phenomenon doesn't exist. It will be figured out
eventually. My conjecture is that all objects, from elementary
particles on up, have mind-stuff.

>> E.g. consider the stigma (and all its manifestations)
>> of criminal past - society does not wish to propagate
>> the genes of a criminal. Even the mere act of locking
>> up someone for several years performs the similar
>> genetic look-ahead function (it reduces mating chances
>> of prisoners).
>
> That's called "artificial selection".

So? The same events can belong to multiple, related and unrelated,
patterns of purposeful action. The event of your reading this sentence
at your computer is also part of the process of your ISP making money
by selling you internet access. The one doesn't exclude the other.

> There is no satisfying definition of intelligence, but
> the only intelligent beings we know for sure are
> (certain) humans and perhaps some animals. So why are
> you claiming that intelligence exists at other levels?

I am using the term "intelligence" to mean a process which uses
foresight (look-ahead) to optimize some gain/utility function. Neural
networks, which are a mathematical abstraction capturing the common
regularities of a variety of adaptable networks (complex systems) in
nature, are intelligent agencies. Simply by being exposed to 'sensory'
inputs from the environment and to punishments & rewards from a utility
function (which slightly modify individual links), they converge to a
state which optimizes the punishments/rewards. In this "learned" phase,
they internally model their environment and the 'self' and play a
what-if look-ahead game to discover the actions of the self actor which
optimize their punishments & rewards. Our brains are just one example
of such an "intelligent" system. Even languages (natural, and
artificial such as mathematical formalisms) form such intelligent
networks, living on humans as their substratum. There is even a paper
by Eugene Wigner titled "The Unreasonable Effectiveness of Mathematics
in the Natural Sciences"

http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

marvelling over the apparent intelligence of mathematical formalism,
which somehow anticipates, by decades or longer, seemingly unrelated
empirical discoveries in physics. Mathematicians themselves merely
follow the aesthetics of the formalism as a motivation for
developments, without any particular knowledge or anticipation of the
phenomena in physics which get discovered decades or centuries later
and for which the already-developed formalism (pursued for its beauty)
just happens to come out as a perfect model.

nightlight

unread,
May 14, 2006, 3:08:46 AM5/14/06
to
>> The physics can only tell you the probabilities of
>> various outcomes (mutations), but nothing in the
>> most precise physical state of the DNA and its
>> environment determines what the specific outcome
>> will be in any given instance.
>
> Gibberish. We can most certainly determine what
> the mutations are, after they occur. We do not
> need to invoke any quantum effects.

You're arguing completely out of context and consequently talking
nonsense. The point of the QM argument is that one cannot claim that
the initial state, even if known with maximum precision, fixes the
outcome. It is trivially true that once you examine ("measure" in QM)
the outcome, it is known. The original reason for pointing out that the
outcome is not determined by the initial state is to demonstrate that
there is at least one free element (within the probabilistic
constraints of QM) in any model, which can serve as an interface for
intervention in the selection of mutations.

In addition to this free element (the unavoidable QM dice), there are
the boundary conditions needed to solve the QM equations, and these are
values put in by hand, hence free parameters, used to compute the QM
state (wave functions) and its transformations. These free parameters
would cover any conventional/classical types of causes which could be
activated by an intelligent agency to produce a desired mutation (or
make it more likely than others, or prevent mutation).

> A whole lot of nonsense. Random mutations are what
> drives evolution. There are no intelligent agencies
> behind them.

You have flipped here, and for much of the rest of your post, into
dictum-proclamation mode, sprinkled here and there with simple-minded,
out-of-context bumbling due to pinhole-vision reading, and with spouted
trivialities such as the previous non sequitur comments on QM. I'll
skip the bulk of the nonsense.

>> As already explained, the LD and similar experiments have
>> no relation to the RM vs ID criteria (1)-(2). LD only
>> shows that the adaptive mutations occurred after exposure
>> to phage, as opposed to adaptation being an infrequent
>> activation of an existent response mechanism....
>
> You really missed the point of LD. They showed that the
> mutations exist prior to exposure to phage.

The statistics of the LD populations is not sensitive to the presence
of the mutation in the initial dish, since such mutations occur at some
low rate (as a Poisson process). As soon as one bacterium acquires the
resistance, the partition of the 'mother' dish into subsequent
'daughter' dishes will automatically bring the resistant bacterium into
a single 'daughter' dish R and no resistant bacteria into all the other
'daughter' dishes S. Then the next splits of the new 'daughter' dishes
will yield more resistant 'daughter' dishes from the R dish than from
the S dishes (more than the Poisson rate implied by new mutations
occurring in all dishes independently of their 'mother' dishes). The
reason for that excess in the 'descendant' tree of the R dish is that
the reproduction rate of the resistant bacteria was much larger than
the empirical mutation rate of the non-resistant bacteria. The
experiment does not compare random vs guided mutation rates but the
rate of reproduction of resistant bacteria vs the rate of mutation of
non-resistant bacteria. Hence, as stated, this experiment is irrelevant
to this discussion.

To test between random and guided mutations, you need an experiment
such as Cairns's lactose enzyme mutation experiment, where it was
indeed found that the rate of a specific favorable double mutation was
higher when it was needed than when it was not. The interpretations of
these experiments from different quarters are conflicting and not worth
rearguing here.
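The statistical signature under discussion (a few "jackpot" lineages descended from early mutants, versus Poisson-like counts if mutation were induced by exposure) is what the fluctuation analysis detects. A rough simulation sketch; the growth depth, mutation rate, and number of cultures are arbitrary illustrative choices:

```python
import random

def grow_culture(generations, mu, rng):
    """Grow one culture by binary fission. Pre-existing mutants keep
    doubling each generation, so early mutations yield 'jackpot' counts."""
    normal, resistant = 1, 0
    for _ in range(generations):
        # each normal cell divides into two daughters; each daughter
        # independently mutates to resistance with probability mu
        mutants = sum(1 for _ in range(2 * normal) if rng.random() < mu)
        normal = 2 * normal - mutants
        resistant = 2 * resistant + mutants
    return resistant

rng = random.Random(1)
counts = [grow_culture(generations=12, mu=1e-4, rng=rng) for _ in range(200)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)

# If resistance arose only upon exposure, counts would be Poisson with
# variance ~ mean; pre-exposure mutation gives a much fatter tail.
assert mean > 0
assert var > 2 * mean
```

The variance-to-mean ratio is the crude discriminator here; the actual Luria-Delbrück analysis uses the full count distribution, but the jackpot-driven over-dispersion is the same effect.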

nightlight

unread,
May 14, 2006, 4:16:07 AM5/14/06
to
>> The basic question being debated between ID and ND
>> is whether the random alternation of that trait
>
> There's little question that the alterations (which
> is what I think you mean)

Yep, thanks.

> And unfortunately for IDiocy, this "intelligent agency"
> acts only in way that is completely undetectable as it
> is identical to the agency of inherited variation and
> natural selection.

People mostly find what they're looking for. Under the current RM
dogma, fundamental research is not focused on finding such effects.
Whenever guided mutations are found (as in the Cairns experiments),
great controversy gets stirred up and the ND noise drowns out the ID
signal.

>> My point is that there are no calculations of such
>> predictions from a random DNA change model (the RM
>> conjecture). Hence there is no basis to claim that
>> RM is an established fact.
>
> Faulty replication is an established fact.

So? Mutations are an established fact. Just because most are harmful,
it doesn't mean that there is no guidance. Most day traders lose money
even though they are trying to anticipate the market's fluctuations.
The presence of intelligent guidance does not preclude failure; it only
makes failure less probable. But to detect the presence of guidance you
need to know what the non-guided probability was (which is the
computation I have been describing). Just because the guided
probability of success is small, it doesn't mean that it can't be
larger than some even smaller non-guided probability of success.

>> Even as a conjecture, the RM is weak since there
>> isn't a single established quantitative fact going in its favor.
>
> Well, I guess if you don't count population studies
> and genetics.

Give me one quantitative fact supporting RM.

>> I suspect that at least one such intelligent agency
>> is the biochemical reaction web of the "junk"
>> DNA which can "sense" various challenges from the organism's
>> environment (transmitted via hunger, thirst, heat, cold,...)
>
> Might we have some evidence that junk DNA is a sensory or
> computational apparatus?

All of DNA is such a computational apparatus (an intelligent network,
or complex system). It is a network of biochemical reactions with
adaptable links (the reaction pathways), exposed to punishments and
rewards. As a rule, such networks are distributed computers.

>> You should also recall that the intelligently
>> guided alternation of the genome is an extremely
>> common phenomenon. I mentioned several examples
>> in previous posts. The obvious ones are molecular
>> biology in research and genetic engineering in
>> agriculture. In fact every organism performs
>> such 'genetic engineering' when selecting a mate.

> I believe you mean to say that *no* organism performs
> such "genetic engineering." Except in an indirect
> and metaphorical way. Mates are selected based on
> the expressions of DNA.

The genetic engineering is indirect, too. No one is picking up a DNA
molecule by hand and tweaking its atoms this way or that. The molecular
biologist simply uses a different chain of indirection, based on some
other correlations between the structure of DNA and some other
perceptible events.

In the case of the molecular biologist, these correlations are
encapsulated in the physical laws underlying the design of the
experiments, instruments, and computer algorithms involved.

In the case of mate selection, the correlations are between the DNA
structure and its perceptible phenotypic expression, i.e. the organism
itself is used as the amplifying instrument which correlates humanly
perceptible events (the perceived phenotypic traits) with the DNA
structure of interest.

The biologists simply use a different amplifying system based on some
other correlations. But neither is a direct manipulation of the DNA.
And in both cases you have a perfectly natural process which uses
anticipation/foresight in order to shape the DNA structure of the
organisms being produced. Of course, the internal models
(conceptualization, understanding) of their own activity and the means
used to implement their plans are entirely different. But that's no
different (regarding the essential aspect of the process being
discussed, the intelligent guidance of DNA changes) than the difference
between the genetic engineer of today and the genetic engineer a few
hundred years from now. The technology, conceptual models, instruments,
experimental setups, computers and programs will all be very different
and hardly recognizable.

Hence, I didn't mean it in a metaphoric way but quite literally.

hersheyhv

unread,
May 14, 2006, 9:09:35 AM5/14/06
to
nightlight wrote:
> > What we see is easily accounted for based on the chemistry
> > of DNA and the ability of cells to repair DNA.
>
> That depends, to paraphrase one former US president, on what the meaning
> of "is" is. Also on what you mean by "accounted for". As explained in
> the earlier posts:
>
> M1.
> http://groups.google.com/group/talk.origins/msg/788d751e9ac239da?hl=en&
> M2.
> http://groups.google.com/group/talk.origins/msg/d6349ead1e3ff646?hl=en&
>
> the "random mutation" half of the neo-Darwinian (ND) theory is a
> gratuitous ideological assumption (Darwin himself was more guarded in
> his writings).

Now I will define what I mean by random in this context. Mutation is
random wrt need. That does not mean that the *rate* of mutation is the
same everywhere in the genome. It does not mean that all types of
mutation are equally likely. It means that a specific mutation is not
occurring because the organism has or anticipates the need for that
specific mutation. There is no biological mechanism by which an
organism can *anticipate* the need for a specific mutation, as
organisms are not in the divination business. That leaves the idea
that specific mutations of need occur *after* and *because* the
environment has changed to produce the need for that mutation.

> While one can certainly generate "random" mutations in
> the lab, that doesn't imply that the mutations behind the observed
> evolution (micro & macro) are "random" i.e. independent from the
> environment in which they arose. After all, it is _trivially_ true that
> mutation does depend on its physical cause (e.g. EM radiation, chemical
> reactions, cosmic rays,...).

These factors increase the *rate* of mutation. They increase the
relative frequency of different *types* of mutation. They do not lead
to directed mutations that an organism needs.

> Hence mutations are at least correlated
> with the most immediate physical (quantum) state of the DNA and its
> immediate environment, which in turn are correlated with the larger (in
> space) future and past environments i.e. mutations are trivially
> correlated with events and states of the past and the future
> environments of the organism.
>
> The neo-Darwinian "random mutation conjecture" (RMC) is that the
> correlation between (evolutionary) mutations and the DNA environment
> stops precisely at the immediate preceding physical causes. If the
> physics were still the 19th century mechanistic theory, fixing the
> physical state of DNA and its "immediate" environment (within the near
> past light cone) automatically leads to a unique future state of the
> DNA, hence to a unique mutation. Thus you wouldn't have to care about
> wider correlations with future and past environments.
>
> But the physics did advance from the 19th century mechanistic theory.
> Our present fundamental physics is Quantum Theory (QT). The key
> property of QT relevant for this discussion is non-determinism: the
> exactly same physical state of the DNA and its environment generally
> leads to different outcomes (mutations). The physics can only tell you
> the probabilities of various outcomes (mutations), but nothing in the
> most precise physical state of the DNA and its environment determines
> what the specific outcome will be in any given instance.

And this is evidence in favor of directed or Lamarckian mutation
exactly how? The fact remains that mutation *is* demonstrably
independent of the need for that mutation. At best, there might be a
few phenomena that might potentially skirt the edge of being directed
mutation (the localized high rate of *somatic* mutation in various
cells like immunoglobulin formation, mating type switches in yeast,
etc.), but even most of these merely affect the *rate* of specific
types of mutation or generate high rates of mutation at specific
enzymatically controlled sites. But these features are either highly
revertible (as in mating type switches in yeast, which are local
transpositions) or are somatic and die with the organism
(immunoglobulins).

A simple form of evidence is that mutations with very high rates of
occurrence that we are aware of, such as the point mutation that
produces achondroplastic dwarfism, are not those that produce
*beneficial* effects. If there were some mechanism for generating
beneficial mutations at times of need, why would these deleterious ones
keep recurring and recurring?
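The recurrence of deleterious mutations like this is exactly what mutation-selection balance predicts: recurrent mutation replenishes what selection removes, no foresight required. A minimal sketch of the arithmetic (the rate and fitness-cost numbers below are illustrative assumptions, not measured figures for achondroplasia):

```python
# Mutation-selection balance for a deleterious dominant allele:
# selection removes carriers at a rate proportional to the fitness
# cost s, while recurrent mutation re-creates the allele at rate mu
# per gamete. At equilibrium the allele frequency is roughly q = mu/s.
mu = 1.0e-5   # assumed mutation rate per gamete (hypothetical value)
s = 0.8       # assumed fitness cost to carriers (hypothetical value)

q_equilibrium = mu / s
# Affected births are heterozygotes, so their frequency is roughly 2q.
affected_births = 2 * q_equilibrium

print(f"equilibrium allele frequency: {q_equilibrium:.2e}")
print(f"approx. affected births:      {affected_births:.2e}")
```

The point of the calculation is that a steady trickle of new cases persists indefinitely however harmful the allele is, which is what blind recurrent mutation predicts and what a need-directed mechanism would not.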

> Therefore, even if one were to grant to ND that the immediate DNA
> environment (resulting in mutation) is independent from the physical
> state of any larger context (which might allow for purposeful
> mutations), there is always an impenetrable quantum curtain hiding the
> selection of particular outcome/mutation even if the state of DNA and
> its environment is known with theoretically maximum precision. Unlike
> classical physics, where the 'first mover'/external intelligent agency
> had to be constrained at the very beginning of the universe (to set up
> the deterministic laws of physics and the initial conditions of all
> particles and fields), the quantum physics provides an interface for an
> external (or even an internal/subquantum system built upon Planckian
> scale objects, see [M2]) intelligent agency to direct the universe
> continuously and in a great deal of detail (within the statistical
> constraints of QT, since these are empirically fixed).

What a load of horseshit unsupported by *any* actual evidence. Do you
somehow think that it is impossible to test whether or not mutation
occurs "as needed" or occurs independently of need?

> Of course, one wouldn't grant ND even that the immediate physical
> conditions (quantum state) causing mutations are uncorrelated with the
> wider past and future contexts.

Mutations certainly are affected by the past: they occur in existing
genomes rather than in total sequence space. What you are
claiming is that mutations occur with foresight of the need for the
mutation. That is what evidence specifically rules out.

> That would need to be established.
> Since at the most basic physical level each mutation _is_ always
> correlated with the quantum state of entire universe visible at that
> location at the time of the mutation (since physical interactions can
> travel at light speed, reach the mutation point and affect DNA
> environment), as well as with the state of entire universe within the
> future light cone of the mutation, one cannot a priori exclude
> correlation of the mutation with higher level patterns in the past and
> future much wider environments. One could _in principle_ exclude these
> types of correlations (thus a foresight) empirically if one could
> control the immediate environment precisely enough (as allowed by the
> QT state of the system). Since this is technologically not viable at
> present, even without the theoretical QT limitations noted earlier,
> there are technological limitations precluding anyone from declaring
> that even these less fundamental kinds of correlations have been
> excluded empirically.

Are you actually claiming that we have to know the QT state of things
to know what chemical reactions have occurred or to distinguish between
environments?

> In fact we know that there are explicit empirical counterexamples to
> accepting even this weaker (since it ignores QT dice) RM conjecture.
> Namely, there _are_ perfectly natural "intelligently guided" mutations
> i.e. there are natural processes which model and anticipate the
> phenotypic outcomes of DNA changes and preselect which changes to
> perform based on some far reaching purposes well beyond the immediate
> physical DNA environment. This "natural process" occurs e.g. in the
> brains of molecular biologists. Hence the neo-Darwinian RM conjecture
> _must_ be refined in order not to outright contradict the established
> empirical facts -- RM conjecture can't do better than to state that all
> mutations are random, _except_ (at least) those guided by the
> computations in the brains of molecular biologists. Now, that's one
> very messy and ugly kind of conjecture.

The Luria/Delbruck experiment's results were not a consequence of the
action of any detectable intelligent agent. Are you assuming that
everything that happens in organisms happens because of the
undetectable actions of undetectable molecular biologists?

> For example, how would one make such necessary exception precise in a
> formal scientific and mathematical language? What is it exactly about
> the arrangements of atoms and fields in the brain of a molecular
> biologist that makes it different from all other arrangements? Do you
> include into such arrangement the atoms and fields of the computers
> the biologist uses to anticipate and guide his experiments?...

How is this relevant to anything unless you are assuming the invisible
action of invisible fairies? The fact remains that mutations occur
without any foresight for the need for specific mutations, and are
random in that sense.

> Additionally, on what scientific basis can ND claim that even the
> vaguely stated exception is the sole exception to the RM conjecture?
> After all, the brains of molecular biologists, or brains in general, are
> not the sole instance of intelligent networks (complex systems). Such
> intelligent networks are ubiquitous at all levels in nature, from
> biochemical reaction webs within cells through social networks and
> ecowebs.

Do you actually have some evidence to support your hypothesis that
there are all these disembodied intelligences? What makes you think
that complex systems are necessarily intelligent entities?

> All such networks model internally their 'environment' with
> its punishments and rewards, play a what-if game on the models and select
> actions which optimize their particular punishments/rewards. The
> neo-Darwinian RM dictum insists that in the entire spectrum of such
> intelligent networks there is just a single point, the brain of a
> molecular biologist, which purposefully guides mutations, and all other
> networks are supposed to be entirely disinterested and uninvolved. While
> they're at it, they might as well declare that there are exactly 12 angels
> that can dance on the head of a pin, and if you doubt it you can forget about
> publishing any papers in leading journals or getting any research
> funding or any academic position.
>
>
> > There is no need for an outside agency to account for
> > the drift observed.
>
> Only if you use the _empirical_ mutation rate in your calculations.

As opposed to nonempirical mutation rates? I.e., rates which are
fantasies that you make up? Science, it may surprise you, is quite
firmly rooted in and addicted to using empirical information from the
real world rather than made up fantasy numbers that prove whatever
point one wants to make.

> But
> that doesn't tell you whether these mutations were "random" or
> "intelligent" (purposeful, anticipatory...). For all you know, the
> "intelligent" mutations can yield the same exact empirical rates as
> those observed. After all evolution patterns in other realms, such as
> languages, exhibit analogous statistical phenomena and patterns, yet
> they're clearly evolving through a purposeful activity of intelligent
> agents.
>
> The objective, ideologically and religiously neutral science should say
> that the nature of the mutations (which resulted in choices being
> offered to the natural selection) is not known. Unfortunately, the ND
> priesthood declared that it knows the answer and that the answer is
> "random mutations", and anyone questioning this dictum will have their
> papers rejected, their academic career ruined.

Well, duh. The reason why scientists say that mutations are random wrt
need (have no foresight or no increased frequency at times of need) is
that that is what 60+ years of replicated experiments say. And it is
not that the idea has gone unchallenged. Look up FRED, which
*initially* looked like it might be specific mutation as a consequence
of need but turned out to be merely an increase in mutation rate plus
selection.

> Well, thank goodness at
> least they're not threatening to tear Dembski's limbs apart and
> burn him along with the rest of the RM conjecture 'deniers' as in good
> old times.

Dembski, to the best of my knowledge, makes no comment on whether or
not mutation is random wrt need; neither does Behe. Dembski does give
the pig-ignorant "747 in a tornado" argument when he calculates CSI.
Behe gives the pig-ignorant argument that life starts with all the
genetic information that all of history requires and selectively uses
it at times of need. So where does Dembski make the particular
ignorant argument you have him making? I sure wouldn't want to blame
him for your argument.

> >> All the biological evolution patterns, ... occur in
> >> the evolution of natural and artificial (such as
> >> mathematical formalisms) languages, religions, arts,
> >> scientific theories, technologies,... (just recall
> >> Dawkins memes). ...
> >
> > You are confusing models with reality.
>
> Not at all. Your reasoning apparently lost steam after one step. The
> classification to "object" and "model of that object" is not absolute
> i.e. the model M1(A) of some object A may be itself an object for
> another model M2(M1(A))... etc. Hence, a language which models some
> 'reality' (object) may be an object of some other modeling scheme and
> its language. {One can also define a model M3 via M3(A)=M2(M1(A)) and
> M3 is then usually called a meta-language or meta-model of A.} Your
> objection reminds me of one of Galileo's inquisitors accusing Galileo of
> confusing the rest with motion.

Ah. Paranoid delusions of the massive conspiracy. The sure sign of
kookdom.

> >>... since it cannot _compute_ either how many total DNA
> >> configurations could be produced in a given setting,
> >> much less how would every DNA change at the atomic
> >> level affect the phenotype (in order to enumerate
> >> the 'favorable' or at least neutral outcomes).
> >
> > WTH are DNA configurations?
>
> That was described in the 1st post, [M1]. To test the neo-Darwinian RM
> hypothesis for a single mutation, you need to estimate how large is the
> combinatorial space containing all arrangements of the available atoms
> which can arise (via laws of QT) from a given DNA state upon action of
> a single mutagenic event (e.g. a cosmic ray, an alpha particle...).
> (See [M1] & [M2] for description on why you need such number.)
>
> > That certainly is not a term used by us mol bio folks.
>
> As pointed out in [M1],[M2], the present mathematical and computational
> tools are much too primitive for such computations. And since
> questioning the RM conjecture is officially forbidden, there was no
> need to create a term describing quantities no one in academia is
> allowed to think about.

The present mathematical and computational tools are quite capable of
demonstrating that mutation is random wrt need. It also can show that
mutation rates can vary, that different types of mutation occur at
different rates, and pretty much anything else one needs. It can even
measure subtle differences in *rate* of mutation dependent upon whether
or not the gene in question is being transcribed. But that is an
increase in *rate*, not an increase in specificity that is correlated
with need.

> > And is "change at the atomic level" just an awkward way
> > of saying mutation?
>
> It was stripped down to the minimum required by the combinatorial
> reasoning in order to avoid any extraneous connotations and arguments
> by dictums. Why drag all the baggage not needed for the combinatorics
> and waste time on irrelevant tangents and associations, as your next
> sentence illustrates:
>
> > DNA is pretty much DNA at an atomic level. You gots your
> > A's, C's, G's, and T's.
>
> That comment illustrates the problem noted above. Once you eliminate
> all possible molecules which can arise e.g. upon a single alpha
> particle event, you have drastically reduced the combinatorial space of all
> possible configurations in the 1-mutation neighborhood of a given DNA
> state, hence you reduced the denominator in the expression P=F/M (cf.
> [M1]) for probability of favorable mutation, artificially inflating
> this probability.

Is the above supposed to carry intelligent meaning?

> Most mutagenic events will likely result in molecules which are not the
> DNA molecule proper. To test the RM conjecture, you need to allow all such
> configurations (all possible molecules which can arise from given
> physical causes) and give them equal a priori probability as outcomes,
> whether they are proper DNA molecule or some other 'improper DNA'
> molecule.

The chemistry of mutational events, even those due to radiation, is
hardly a mystery. It has long been worked out. You know, all that
stuff you read about the formation of thymine dimers and alkylation
adducts. These transient intermediates do get resolved into standard
DNA, through replication if not earlier through repair. I don't see
how this chemistry can become "foresightful" or "anticipatory" and
generate specific mutations in greater amounts when some mutation fairy
sees the future need for that mutation. So that leaves the possibility
that *specific* needed mutations appear in greater relative amounts
when, but after, there is a need for these specific mutations. That
requires a feedback mechanism through which the cell senses the need
for specific mutation and produces those specific needed mutations in
response (to a greater extent than all the non-needed mutations). That
is, specific needed mutations occur adaptively as a response to
environmental stimuli. Not a generalized increase in all mutations,
but a specific differential increase in the specific needed genes.
That is testable. And it has been shown not to happen. Again and
again and again and again.

> After all, all such events take time and consume reproductive
> resources. The RM conjecture requires that you give all such events
> equal a priori probability, P1=1/M. What you suggest is a pre-selection
> based on consequences expected later, which removes _upfront_ all
> mutagenic events which result in improper DNA (i.e. create some other
> molecules), sets their cost (in time and resources) to 0. That is
> cheating since it contradicts the RM conjecture (which you wish to
> prove/test) by setting the probabilities of the 'excluded' outcomes to
> 0. It is also a blatantly teleological exclusion, which further
> contradicts the RM conjecture.
>
> > Evolution is certainly not guided, it is contingent.
> > Look at the Luria Delbruck experiment for a simple example.
>
> That and later similar experiments only show that the phage resistance
> arose via mutations in the exposed cultures (i.e. it wasn't
> a pre-existing property).

No. It shows that the mutations arose *before* exposure. You clearly
do not understand these experiments if you are claiming that the
resistance only occurred *after* exposure. The whole purpose of the
design is to demonstrate that mutations to resistance occurred prior to
the selection; that is, they occurred spontaneously without being
induced by the selective environment.

> The results don't address at all the question
> of the nature of the mutation i.e. whether the mutations were "random"
> or "purposeful".

Read it again. You are dead wrong. Completely wrong. So wrong as to
bizarrely hold the very opposite of what the results were.

> Consider a counterexample where you get exactly the
> same adaptation pattern even though the innovation was clearly
> purposeful -- the technological evolution. Say, a sudden shortage of
> some vital raw material places some set of companies in danger of
> bankruptcy unless they can find a substitute or find some other product
> design. These companies are analogous to separate Petri dishes in LD
> experiment and the shortage is analogous to phage challenge (or even
> closer analogy to sugar in Cairns's experiment). The affected companies
> would switch into crisis mode, brainstorm, bring in new, more creative
> people, ... etc. At the end, depending on difficulty of the problem and
> the time constraints, you may have one or more companies that solved
> the problem and survived while all the rest went under. (If the problem
> was easy all of them may solve it.) Hence, you could get the same type
> of results as LD, yet here the "mutations" in the manufacturing process
> were "intelligent" (purposeful, guided....). Hence, this type of
> evolutionary pattern as observed by LD doesn't go either in favor or
> against the "random" mutation conjecture.

Again, this is a bizarre misstatement and analogy to the real
experiments. Let me give you a real description. Take an overnight
growth of bacteria. Your claim would be that none of these bacteria
would have the *beneficial* mutation, because the *beneficial* mutation
only occurs *after* exposure to the stressor. That is, the stressor
*induces* adaptive mutation. If this is true and you take half of the
overnight growth and divide it into small tubes containing, say, 100
cells each and allow these to grow up to the same concentration as the
overnight, *all* your tubes should have no cells with the *beneficial*
mutation, since none of them have been exposed to the stressor. Now,
if you plate 10^10 cells from either the original overnight or from any
of the tubes you grew up from 100 cells on a plate with the stressor
(the selective agent), this will be the first time that *any* of the
cultures have been exposed to the selective agent. If the beneficial
mutations do not occur until after exposure to the selective agent, all
the plates should have roughly the same number of surviving colonies.
That is what one expects if the selective agent *causes* or induces
beneficial (of need) mutation and such mutation is a rare response to
the agent.

Now, if instead, mutation is occurring during the time the population
is growing, the large tube will have a mean number of beneficial
mutants *before* exposure to the selective agent. The selective agent
merely reveals how many individuals were already resistant. It does
not induce the needed mutation. Now, in the small cultures that you
grow up, new mutations will occur. Some of these will be late in the
growth process; others will be early in the growth process. In some
cases, the 100 cells may already have, by chance, a mutant from
(already present in) the parent culture. Thus, instead of the
selective agent acting on a population that uniformly lacks the needed
mutation, the agent is acting on populations that will vary widely in
the frequency of mutations they carry. When you plate the 10^10 cells on
the selective plates (the first time the cells are exposed to the
selective agent), the large culture will give about the mean number.
But the other smaller cultures will vary widely and in a Poisson
distribution if mutation occurs randomly. Some cultures will have
significantly fewer than the mean. Others will be "jack-pots" (because
they either initially got a pre-existing mutant or the mutation
occurred early) with much, much higher than the mean frequency of
beneficial mutations. This, of course, is what was found.

Note that the Poisson distribution also means that the cultures
apparently have no way of predicting beforehand that they will be
exposed to the agent.
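The fluctuation-test logic above is easy to sketch in code. The simulation below is a minimal illustration, not a reconstruction of Luria and Delbrück's 1943 protocol: the doubling model, mutation rate, culture size, and number of cultures are all assumed values chosen to make the contrast visible. If mutations arise during growth, blind to selection, parallel cultures show jackpots and a variance far above the mean; if resistance were induced only upon exposure, the counts would be Poisson, with variance roughly equal to the mean.

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Draw from a Poisson distribution (Knuth's method; adequate for
    the modest means that arise in this sketch)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def grow_culture(n0, generations, mu, rng):
    """Grow one culture from n0 sensitive cells, doubling each generation.
    Resistant mutants arise at rate mu per cell per generation, with no
    knowledge of any future selection; mutants breed true and double too."""
    sensitive, resistant = n0, 0
    for _ in range(generations):
        sensitive *= 2
        resistant *= 2
        new_mutants = poisson(sensitive * mu, rng)
        sensitive -= new_mutants
        resistant += new_mutants
    return resistant

rng = random.Random(42)
N0, GENS, MU, CULTURES = 100, 20, 2e-8, 60   # illustrative parameters

# Mutation-before-selection model: grow many parallel cultures, then
# count the resistant cells as if plating each on the selective agent.
ld_counts = [grow_culture(N0, GENS, MU, rng) for _ in range(CULTURES)]

# Rival induced-mutation model: resistance appears only upon exposure,
# so per-culture counts are Poisson with the same expected mean.
mean_resistant = GENS * MU * N0 * 2 ** GENS
induced_counts = [poisson(mean_resistant, rng) for _ in range(CULTURES)]

def var_to_mean(counts):
    """Fluctuation statistic: ~1 for Poisson, >> 1 for jackpot cultures."""
    return statistics.variance(counts) / statistics.fmean(counts)

print("variance/mean, mutation-first model:", round(var_to_mean(ld_counts), 1))
print("variance/mean, induced model:       ", round(var_to_mean(induced_counts), 1))
```

Running this typically prints a variance-to-mean ratio far above 1 for the mutation-first model and close to 1 for the induced model, which is the signature the fluctuation test detects.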

> >> Ahhh, no. You seem to be caught in the logical loop of
> >> "if DNA has X # of atoms, then something like 10^X degrees
> >> of freedom are possible, therefore evolution would take
> >> Y^10^X sample size". While you certainly can shuffle
> >> numbers all you like you are confused about some basic
> >> chemistry and biology. Using that sort of logic we
> >> preclude any cell from existing for more than a fraction
> >> of a second. Using that sort of logic, there is no way
> >> that any enzyme could exist or function.
>
> You've lost the context of 10^X combinatorial space (see the
> explanation of "configuration" above). You need to count all outcomes
> of mutagenic events according to time and resources they consume, hence
> count all possible molecules which can arise as result of a mutagenic
> cause from a given initial state.

I prefer actual evidence to such nonsense calculations. I most
certainly do not need to "count all the outcomes of mutagenic events"
to determine whether or not the rate of mutation and fixation is
adequate to account for the differences between organisms.

> You can, of course, take into account
> how long it takes to eliminate particular configuration e.g. some may
> destroy a cell right away, while others may allow it to live longer and
> propagate further. But you can't arbitrarily set the cost of gross
> failures to 0. That would be equivalent to introducing some kind of
> intelligent agency/process which can avoid such gross failures and
> their costs upfront (before consuming physical resources and time).
> That then contradicts your objective, to prove the RM conjecture.
>
> For RM conjecture, you cannot assume that anything is eliminating gross
> failures upfront based on some foresight about consequences -- within
> RM all outcomes of mutagenic events are taken to be independent from
> the later consequences. In your reasoning, you explicitly invoke later
> consequences to eliminate _upfront_ non-functioning enzymes etc., and
> declare that you do not wish to count such configurations as possible
> physical/chemical outcomes of mutagenic event (say an alpha particle
> impact). So, you have already changed the sides and crossed into the ID
> camp.

Lethal mutations happen. They usually have little consequence because
they usually occur early in development and few resources are wasted.
In fact, it is undoubtedly true that many more conceptuses occur than
even detectable fetuses. How many is hard to tell. But, because few
resources are lost, early lethals are pretty insignificant in the long
or short run. I am not excluding lethal mutations. But, of course,
non-lethal, in fact non-selective mutations are the most common type
when you look at the selective consequences of mutations.

But that would affect the *rate* of mutation, not the specificity for
benefit.

*Who* is proposing supernatural magic? Not I. You are. You need
supernatural magic to generate the putative foresight you claim exists
(or to generate the invisible and undetectable feedback that produces
beneficial mutation *after* exposure to the selective agent).

You know what Lysenko was proposing? He was proposing that there was
feedback from the selective environment that produced beneficial
mutations. Isn't that what you are proposing? Whatever ND is in your
brain, it is not Lysenkoism. Neodarwinism was the integration of
Mendelian genetics into evolution. Lysenkoism rejected, specifically,
the corrupt ideas of Mendel and the idea that gene mutation and
selection result in changes in populations rather than that changes in
the environment cause changes in genes. Lysenkoism and Lamarckism have a
lot in common. And your ideas also have a lot in common with both
Lysenkoism and Lamarckism.

> Yet, you
> can still see the same kind of hyper-sensitive, overly defensive,
> zero-tolerance totalitarian mindset in the old and the new one. Just
> listen to Ward's "arguments" (dictums) and arrogant totalitarian style in
> the two Ward-Meyer debates. It makes one kind of sad to think what kind
> of intellects teach kids nowadays.

It makes me sad that you think you have an intellect. It isn't what
you don't know; it is what you think you do know.

You apparently read too much science fiction and too little science
fact.

> That such possibility is not far fetched at all is suggested by the
> already existent examples of genetic look-ahead by natural processes
> (besides the brains of molecular biologist or dog breeders). For
> example, we know that societies, through customs, folklore, religions,
> arts, laws, etc., also apply foresight regarding the future genetic
> combinations they wish to create. E.g. consider the stigma (and all its
> manifestations) of a criminal past - society does not wish to propagate
> the genes of a criminal. Even the mere act of locking up someone for
> several years performs a similar genetic look-ahead function (it
> reduces mating chances of prisoners).

Except where conjugal visits are allowed.

> The content of our prisons shows
> which kind of genetic material the society wishes to eliminate.

The putative "eugenic" benefits of prison are pretty far down the list
for why we have prisons. If it were "eugenics" that was the reason,
why did we empty all those mental asylums? Don't mental illnesses
have some genetic component that, if anything, is stronger than the
association of crime with a genetic component? The only real
association of crime with genes is that many prisoners in for street
crimes have lower intelligence than average. But that may be selective
justice. White collar criminals tend to be underrepresented.

> Hence we know that such look-ahead regarding the future genetic content
> certainly exists at the level of individuals and the level of society.
> There is no reason or argument why these two levels are all there
> is. I think similar genetic look-ahead occurs at all levels, above and
> below (possibly even below our present "elementary" particles), and in
> a multitude of ways. The neo-Darwinian dogmatic insistence on "random"
> mutations (which is rooted in 19th century mechanistic materialism,
> where everything was modeled as 'machines') along with its Lysenkoists
> methods of enforcing the thought conformity is a drag on science.

Actually, the idea that mutation is random wrt need is much more
recent. Luria/Delbruck is only 60+ years old. Before that, it was
quite reasonable to think that there might be differentially selective
mutation or some kind of feedback, especially in bacteria. But the
idea was still being tested quite recently (less than 20 years ago).
It is not dogma to insist that overwhelming evidence consistent with
the idea that mutation is random wrt need (and that selection merely
judges rather than creates) be accorded more respect than fantasies
contradicted by evidence, like your ideas. Science does not give all
ideas equal weight. Some ideas are simply better supported than
others. Your ideas, for example, are not supported.


>
> If one also recognizes the existence of mind stuff i.e. that it may be
> possible to scientifically model answers to questions 'what is it like
> to be this particular arrangement of atoms and fields that make up
> you', other possibilities of 'intelligent agency' open. For example, it
> is conceivable that what in our present QT are 'irreducibly'
> non-deterministic choices are actually choices made by the future
> model's counterparts of the mind stuff.

This QT bullshit talk of yours may con some people. It is wearing thin
here, however.

hersheyhv
May 14, 2006, 9:53:17 AM

nightlight wrote:
> >> The basic question being debated between ID and ND
> >> is whether the random alternation of that trait
> >
> > There's little question that the alterations (which
> > is what I think you mean)
>
> Yep, thanks.
>
> > And unfortunately for IDiocy, this "intelligent agency"
> > acts only in way that is completely undetectable as it
> > is identical to the agency of inherited variation and
> > natural selection.
>
> People mostly find what they're looking for. Within current RM dogma
> the fundamental research is not focused on finding such effects.
> Whenever guided mutations are found (such as the Cairns experiments),
> great controversy gets stirred up and ND noise drowns the ID signal.

Again, you have presented a paranoid conspiracy-theory rather than
evidence. The Cairns experiments produced great *interest* in the
scientific community and much further research precisely because it
seemed to be a rare example of Lamarckian adaptive mutation, standing
out against the near unanimity of experiments showing classic random
mutation plus selection (which only selects among the randomly
generated mutants). Further study and actual real experiments
demonstrated that this apparent exception was really another example of
random mutation plus selection. The elusive (invisible) ID signal
remains elusive (invisible).

> >> My point is that there are no calculations of such
> >> predictions from a random DNA change model (the RM
> >> conjecture). Hence there is no basis to claim that
> >> RM is an established fact.
> >
> > Faulty replication is an established fact.
>
> So? Mutations are established fact. Just because most are harmful, it
> doesn't mean that there is no guidance. Most day traders lose money
> even though they are trying to anticipate the market fluctuations.

So you mean that your guiding agent is so incompetent that
he/she/it/they look as if they, like most day traders, were merely
monkeys throwing darts at the target, with many more failures than
successes?

> Presence of intelligent guidance does not preclude a failure. It only
> makes failure less probable.

If the intelligent guidance is indistinguishable from random mutation,
how can you tell that there has been intelligent guidance?

> But to detect the presence of guidance you
> need to know what was the non-guided probability (which is the
> computation that I am describing). Just because the guided probability
> of success is small it doesn't mean that it can't be larger than some
> even smaller non-guided probability of success.

So what you need to be looking at are the rare sequences in humans that
show signs of having evolved significantly faster than the average
sequence relative to the sequences in the chimp (or point out the,
AFAIK, nonexistent sequences that must be invented from scratch in
humans and are missing in chimps -- I do know of one gene that is
*functionally* missing in humans but is present in chimps; that one is
due to a single deletion mutation). If you are claiming that these
sequences *must* be guided, you have to then demonstrate that the rate
of change above the 3% or so that could be due to drift alone is too
rapid to have been by selection. That, of course, is a fool's errand,
because no such sequence exists.

> >> Even as a conjecture, the RM is weak since there
> >> isn't a single established quantitative fact going in its favor.

The whole *purpose* of the Luria/Delbruck experiment (which you badly
misinterpret) and its many repetitions throughout the 20th and into the
21st century is to rule out non-random directed mutation that is either
induced by the selective agent or occurs by divination of future need.
These experiments are quite quantitative in nature.

> > Well, I guess if you don't count population studies
> > and genetics.
>
> Give me one quantitative fact supporting RM.

The observed data from the Luria/Delbruck experiment for starters. The
total absence of any mechanism by which information from the
environment can get converted into specific directed mutation aside
from a few rare inversion or transposon switches which are more like
domesticated viruses.
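
The logic of the fluctuation test can be sketched in a few lines. If mutation happens during growth, early mutations found large resistant clones ("jackpots"), so the mutant count per culture has a variance far above its mean; if mutation were instead induced at plating, each cell would mutate independently and the counts would be near-Poisson (variance about equal to the mean). The parameters below are arbitrary illustrative values, not Luria and Delbruck's data:

```python
import math
import random
import statistics

rng = random.Random(42)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the small rates used here."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

MU, GENERATIONS, CULTURES = 1e-7, 20, 2000

def mutants_grown_culture():
    """Mutants in one culture when mutation happens during growth."""
    cells, mutants = 1, 0
    for _ in range(GENERATIONS):
        mutants *= 2                    # existing mutant clones double
        mutants += poisson(MU * cells)  # new mutations in this round of divisions
        cells *= 2
    return mutants

# Match the expected number of mutants per culture under both hypotheses
mean_mutants = GENERATIONS * MU * 2 ** (GENERATIONS - 1)

grown = [mutants_grown_culture() for _ in range(CULTURES)]
induced = [poisson(mean_mutants) for _ in range(CULTURES)]  # mutation only at plating

grown_ratio = statistics.variance(grown) / statistics.mean(grown)
induced_ratio = statistics.variance(induced) / statistics.mean(induced)
print(f"variance/mean, mutation during growth:      {grown_ratio:.1f}")
print(f"variance/mean, mutation induced at plating: {induced_ratio:.1f}")
```

The jackpot-driven excess variance in the first ratio is the quantitative signature Luria and Delbruck actually observed.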

> >> I suspect that at least one such intelligent agency
> >> is the biochemical reaction web of the "junk"
> >> DNA which can "sense" various challenges from the organism's
> >> environment (transmitted via hunger, thirst, heat, cold,...)
> >
> > Might we have some evidence that junk DNA is a sensory or
> > computational apparatus?
>
> All of DNA is such computational apparatus (intelligent network or
> complex system). It is a network of biochemical reactions with
> adaptable links (the reaction pathways), exposed to punishment and
> rewards. As a rule such networks are distributed computers.

He asked for *evidence*, not your unsubstantiated and ignorant opinion.
DNA is not a computational apparatus. It is a chemical which contains
a sequence. DNA is not a network of biochemical reactions with
adaptable links. An *organism* may be considered a network of
biochemical reactions with adaptable links exposed to punishment and
rewards from the environment. But DNA is not an *organism*. It is a
sequence storage chemical that plays a role in the ability of organisms
to adapt to the environment by storing information for particular
proteins which can get transcribed or not dependent upon environmental
conditions. But there is no mechanism by which those environmental
conditions, in addition to changing transcription, also specifically
targets mutations of need. The dogma you are opposed to is not
evolution; it is the "central dogma": that DNA <--> RNA -->
protein. [The way that RNA, rarely, gets converted into DNA, reverse
transcription, ensures that this process cannot act as your intelligent
mutator. But if you have evidence that it does, please present it.
There is no reverse translation.]

> >> You should also recall that the intelligently
> >> guided alternation of the genome is an extremely
> >> common phenomenon.

It is also an extremely recent phenomenon. Your examples first
required the evolution of humans.

> > > I mentioned several examples
> >> in previous posts. The obvious ones are molecular
> >> biology in research and genetic engineering in
> >> agriculture. In fact every organism performs
> >> such 'genetic engineering' when selecting a mate.

Proximity is genetic engineering? And bacteria do not "select" mates.
They don't mate at all. And remember that life on this planet was
bacterial for much longer than the amount of time eucaryotes have
existed (much less multicellulars that select mates).

> > I believe you mean to say that *no* organism performs
> > such "genetic engineering." Except in an indirect
> > and metaphorical way. Mates are selected based on
> > the expressions of DNA.
>
> The genetic engineering is indirect, too. No one is picking a DNA
> molecule by hand and tweaking its atoms this or that way. A molecular
> biologist simply uses a different chain of indirection, based on some
> other correlations between the structure of DNA and some other perceptible
> events.

Molecular biologists use natural biochemistry to generate, splice in,
and introduce changes in organisms that suit the molecular biologist's
intent. All these changes occur in nature in the absence of molecular
biologists, but not with any specific intent. Sometimes these changes
that occur in the absence of molecular biologists are unintentionally
useful in specific environments. Mostly they are either neutral or
deleterious.

> In the case of molecular biologist, these correlations are encapsulated
> in the physical laws which were used in the design of the experiments,
> instruments and the computer algorithms being used.
>
> In the case of mate selection, the correlations are between the DNA
> structure and its perceptible phenotypic expression, i.e. the organism
> itself is used as the amplifying instrument which correlates humanly
> perceptible events (the perceived phenotypic traits) with the DNA
> structure of interest.
>
> The biologists simply use different amplifying system based on some
> other correlations. But neither is a direct manipulation of the DNA.
> And in both cases you have a perfectly natural process which uses
> anticipation/foresight in order to shape the structure of DNA of
> organisms they are producing. Of course, the internal models
> (conceptualization, understanding) of their own activity and the means
> they use to implement their plans are entirely different. But that's no
> different (regarding the essential aspects of the process being
> discussed, the intelligent guidance of DNA changes) than the difference
> between the genetic engineer of today vs genetic engineer few hundred
> years from now. The technology, conceptual models, instruments,
> experimental setups, computers and programs will all be very different
> and hardly recognizable.
>
> Hence, I didn't mean it in metaphoric way but quite literally.

By anthropomorphizing DNA so that it has the characteristics of a human
rather than the characteristics of a *molecule*, you are either being a
"literal fool" or dealing in metaphor. DNA is not a little intelligent
homunculus. It is a storage molecule.

hersheyhv
May 14, 2006, 10:15:24 AM

nightlight wrote:
> >>The physics can only tell you the probabilities of
> >> various outcomes (mutations), but nothing in the
> >> most precise physical state of the DNA and its
> >> environment determines what the specific outcome
> >> will be in any given instance.
> >
> > Gibberrish. We can most certainly determine what
> > the mutations are, after they occur. We do not
> > need to invoke any quantum effects.
>
> You're arguing completely out of context and consequently talking
> nonsense. The point of QM argument is that one cannot claim that
> initial state, even if known with maximum precision, fixes the outcome.

At the micro (atomic or subatomic) level, perhaps, where individual QM
events are examined. Not at the macro level of simple mass biochemistry, where
stochastic averaging makes outcomes quite predictable from knowledge of
initial states.

In your case, a little knowledge seems to have festered into a
dangerous thing.

Then the replica plating equivalent of Luria/Delbruck will work. Plate
10^10 bacteria (derived from a single cloned bacterium and hence a
clone except for putative new mutations) on a non-selective plate.
Replica plate these bacteria to two or three or five selective plates.
If, as you claim, the mutations do not arise until *after* exposure to
the selective agent, the colonies that grow on each selective plate
should be different on each selective plate. If all the selective
plate is doing is selecting for colonies that already were resistant on
the non-selective plate, the colonies should be at the same position on
each of the selective plates.

Moreover, you can go back to the non-selective plate and take small
scrapings of cells from either the region where a resistant colony was
found on the selective plates or take it from one of the regions where
no resistant colony was found, grow up the cells and test it for the
relative frequency of resistant mutants. Guess which one will have the
highest frequency? This, BTW, is the basis for being able to isolate
mutants that cannot be directly selected *for* but only selected
*against* and was used early on to identify and isolate Hfr strains of
E. coli.

Guess which result has been seen in experiment after experiment? No
evidence of mutation as an adaptive response to the selective
environment. Evidence of selection merely choosing pre-existing
mutations that have occurred without need. This, of course, does not
eliminate the possibility of divination or foresight on the part of the
bacteria, but one can replica plate the original to several different
non-selective plates and (after growth has stopped; growth being
required for mutation to fix in) using a random number generator to
decide which plates get replica plated to the selective plates. Then
one can go to one of the other non-selective plates and test whether
that region is highly enriched in resistant cells.

Furthermore, if you go back to an original non-selective plate that
was NOT used to replica plate to a selective plate and do the scraping
tests described above, you will find that the resistant mutations are
exactly where they would be expected if mutation had occurred without
any idea that it would be needed in the future and not present in the
other places on the same plate.
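
The positional logic above is easy to simulate. If resistance pre-exists on the master plate, every replica shows resistant colonies at the same coordinates; if resistance only arose after exposure, the replicas would disagree. The colony count and mutation probability below are made-up illustrative numbers:

```python
import random

rng = random.Random(7)
COLONIES = 10_000   # colony positions on the master (non-selective) plate
P_RESIST = 0.001    # chance a given position carries a resistance mutation

# Hypothesis 1: resistance pre-exists on the master plate, so every
# replica inherits the very same set of resistant positions.
master = {i for i in range(COLONIES) if rng.random() < P_RESIST}
replicas_pre = [set(master) for _ in range(3)]

# Hypothesis 2: resistance arises only after exposure to the selective
# agent, so each replica gets an independent set of resistant positions.
replicas_post = [{i for i in range(COLONIES) if rng.random() < P_RESIST}
                 for _ in range(3)]

def agreement(plates):
    """Fraction of all resistant positions that appear on every plate."""
    union = set().union(*plates)
    shared = set(plates[0]).intersection(*plates[1:])
    return len(shared) / len(union) if union else 1.0

print("pre-existing mutations, positional agreement:", agreement(replicas_pre))
print("post-plating mutations, positional agreement:", agreement(replicas_post))
```

Perfect positional agreement across replicas is what selection of pre-existing mutants predicts; near-zero agreement is what mutation-on-demand would predict.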

> To test between random vs guided mutations, you need an experiment such
> as Cairn's lactose enzyme mutation, where indeed it was found that the
> rate of specific favorable double mutations was higher when they were
> needed than when they were not needed. The interpretations of these
> experiments from different quarters are conflicting and not worth
> rearguing.

There *is* a considerable body of research that has been done on this
specific question. And it was demonstrated to all scientific audiences
(but obviously not to wishful thinkers like you) that this is not the
long-sought exception to the rule. Mutation remains random wrt need.

hersheyhv
May 14, 2006, 10:39:00 AM

nightlight wrote:

[snip]

> You're setting up a strawman ID by presuming an omniscient and omnipotent
> entity capable of turning optimum traits on demand. The ID only assumes
> existence of some intelligent agency (of unspecified nature and
> computational capacity) which performs pre-selection among the
> physically accessible variations of the DNA based on some internal
> modeling process which takes into account the present state of the DNA
> and its environment.

And your evidence for this and how it *differs* from random mutation
without pre-selection and no internal modelling process? I certainly
agree that mutations do modify whatever DNA sequence previously existed
(rather than invent genes by poofing them into existence or from random
sequences obtained from somewhere or from total sequence space). And
the environment certainly is the selective agent (the agent that
determines retention in a population) and can affect the rate or type
of mutation that occurs. What evidence do you have that there is some
non-environmental pre-selective agency?

> You also presume to know what the optimum solution ought to be. There
> could be much more subtle solutions to food shortage than modifying
> beak sizes. There are also competing intelligent agencies on behalf of
> other organisms in the same ecosystems and these may have
> countermeasures to block some more obvious solutions.

You keep imagining these intelligent fairies to explain whatever you
want to explain without even trying to see if known processes that do
not include such fairies can explain it.

> The basic fact is that the adaptations do occur at some empirically
> observed rates. The question discussed is whether the random
> alternations of DNA can produce such rates or not. My point is that
> there are no calculations of such predictions from a random DNA change
> model (the RM conjecture). Hence there is no basis to claim that RM is
> an established fact. Even as a conjecture, the RM is weak since there
> isn't a single established quantitative fact going in its favor.
>
> If RM cannot reproduce the empirically observed rates, then there must
> be additional pre-selection process which can model the requirements
> (the environment, the DNA and its phenotypic expressions) and eliminate
> much more quickly large classes of physically accessible but
> phenotypically unviable DNA alternatives (within the limits of its
> knowledge and computational capability).

Any evidence that RM cannot reproduce empirically observed rates of
mutation? What class of physically accessible but phenotypically
inviable mutation is there? Is there a type of mutation that could
happen chemically but, rather than occurring and then being selected
against, never happens at all because of "pre-selection"?

> I suspect that at least one
> such intelligent agency is the biochemical reaction web of the "junk"
> DNA which can "sense" various challenges from the organism's
> environment (transmitted via hunger, thirst, heat, cold,...) and
> compute solutions (based on accumulated library of useful solutions
> from its ancestors, going back to dawn of life).

DNA is a storage molecule, not a little homunculus. How are you
proposing that DNA compute solutions, much less cause them to happen?
Since many organisms, and particularly the bacteria that have existed
since the dawn of life, have very little "junk", how did the bacteria
evolve all the much wider variations in metabolism than eucaryotes
have?

> These solutions can
> alter multiple genes, spanning entire genome, simultaneously. Such
> solutions may not be the best, or may not work or may even result in
> worse traits than the original ones. This is no different than any of
> us solving our own problems. Dealing with problems, strategizing and
> applying foresight will on average beat not dealing with them.

You seem to be anthropomorphizing DNA. DNA is a storage molecule that
gets acted upon. It does little acting itself. You have to have
*some* evidence that your mechanism exists. WAGs are not acceptable.

> You should also recall that the intelligently guided alternation of the
> genome is an extremely common phenomenon. I mentioned several examples
> in previous posts. The obvious ones are molecular biology in research
> and genetic engineering in agriculture. In fact every organism performs
> such 'genetic engineering' when selecting a mate. Of course, the
> organisms don't use DNA sequencing or molecular biology to purposefully
> transform DNA to improved configurations. But that is just a matter of
> instruments and technology. These are all natural processes using
> foresight to reshape genome.

Again, bacteria have existed from the dawn of time. Your *real*
genetic engineers certainly have not.

> >> ... anyone who claims that they know the mechanism
> >> of mutations, including the precise state of the
> >> DNA environment which contains the physical causes of
> >> mutations... Such claims are false both in quantum
> >> and classical models.
> >
> > Yes, but we don't need to know the mechanism of mutation,
> > if you mean what causes the quantum states of the bonds
> > of the base pairs of the DNA.
>
> You have missed the point. Argument I was countering (by pointing out
> quantum & other uncertainties) is a claim that we allegedly know
> exactly what is happening with DNA and that there is no room, or
> interface, for any 'intelligent agency' to affect or guide mutations.

There is "room" for 'intelligent agency' to affect or guide mutations
so long as it does so by mechanisms that do not differ significantly
from random mutation. "Junk" DNA most certainly is not an intelligent
agency.

Vend
May 14, 2006, 2:50:06 PM

nightlight wrote:

> It is relevant against the unconditional claim (to which it responded)
> that mutations are absolutely not correlated with any future states of
> the organism's environment.

Of course present events are correlated to future events. But they are
most likely not caused by them, since, as far as we know, physical
laws are causal.

> > It's widely understood that quantum effects are not
> > relevant at the scale of organic molecules, like dna,
> > due to a phenomenon called "decoherence".
>
> That is a crude, low resolution statement. The 'decoherence' washes out
> the aspects of QM which cannot even in principle be simulated by a
> classical model, such as violations of Bell inequality. But this was
> not the aspect being used in my argument. The QM aspect I used was QM
> indeterminism, i.e. that even the theoretically most precise knowledge
> of a state does not allow you in general to predict the outcome. { The
> Bell's theorem and its tests are used to support QM claim that this
> indeterminism is fundamental.} A larger system in which decoherence
> would normally be significant, would not violate Bell's inequality,
> hence it could be simulated with some classical indeterministic system
> (e.g. where classical noise may play role of QM indeterminism). But my
> argument only required necessary indeterminism, which would apply to
> exact quantum model as well to its classical simulation.

It's ok, but then I don't understand why you need to use quantum
mechanics, since statistical mechanics, which deals with indeterminism
at a classical level, would be fine.

> ...


I don't clearly understand why you started talking about messages. This
doesn't seem an appropriate formalism unless you explain better how
"messages" (and Information Theory, I suppose) are related to
favourable/neutral/unfavourable outcomes.

Quantum events cannot both be biased towards favourable outcomes and
follow exactly the theoretical distributions, because a bias would make
certain regions of the outcome space (the favourable regions) more
probable than the other regions.

Anyway, I agree that quantum events might not be following the
theoretical distributions but some other probability distributions, or
even deterministic algorithmic processes, immensely more complex, that
produce results indistinguishable from the theoretical distributions by
any known statistical test.

In fact, deterministic processes that generate apparently random
sequences, the pseudo-random number generators, exist and are widely
used in computer science.
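
A classic example is the linear congruential generator. The sketch below uses the well-known Numerical Recipes constants; its output is completely determined by the seed, yet looks uniform to a crude binning check:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: fully deterministic, seed-driven,
    yet its output passes naive uniformity checks."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m   # scale into [0, 1)

stream = lcg(seed=12345)
sample = [next(stream) for _ in range(100_000)]

mean = sum(sample) / len(sample)
bins = [0] * 10
for u in sample:
    bins[int(u * 10)] += 1   # count how many draws land in each tenth

print(f"sample mean = {mean:.3f} (a uniform source gives 0.500)")
print("draws per tenth of [0,1):", bins)
```

Knowing the seed and constants, every "random" draw is perfectly predictable, which is exactly the distinction being made here between apparent and genuine randomness.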

However, such algorithms are usually relatively simple, while an
algorithm capable of generating pseudo-random sequences of mutations
biased towards some outcomes which are favourable for an organism,
would be extremely complex.

Thus Occam's razor tells us that, since there is no empirical
evidence to believe that such a complex process is at work, we should
stick with the simpler "purely random" (unbiased) distributions.

> > Thus, unless you provide adequate evidence that the
> > dna mutation are biased, we should hold that our
> > present scientific knowledge is correct and you are wrong.
>
> As explained above, there is no need to violate QM or statistical
> mechanics in order to have directed mutations. After all the directed
> mutations already exist (e.g. produced by molecular biologist) and that
> instance violates no natural laws. There is no a priori reason why
> other (besides the brains of molecular biologists) intelligent
> networks, from biochemical reaction networks within cells through
> ecowebs, could not do the same. They all can interact at the physical
> level (through any number of intermediate interaction chains) with the
> cellular DNA, just as the brain of a molecular biologist does.

This seems confusing. I thought you were claiming that the source of
"intelligence/purpose/foresight" in DNA mutation was a bias in the
underlying quantum random events.

We know that humans can mutate DNA "intelligently", but we have no
reason to believe that this implies any sort of "quantum bias". In
fact, as far as we know, human intelligence is just an emergent
property of a large number of random events that happen in the human
brain, which are, themselves, unbiased.

So the fact that human biotechnologists can make "intelligent"
mutations in no way implies that such biased events exist.

> Finally, I have explained several times the explicit quantitative
> criterion which distinguishes whether the "random" mutation model is
> capable of reproducing the observed rates of evolutionary novelties or
> not. Hence, whether such computation is presently practical or not, the
> mere existence of a precise quantitative criterion implies that the
> question of ID vs RM is a perfectly valid scientific question. There
> may be other more accessible tests, but they are not necessary to
> falsify the neo-Darwinian claim that ID vs RM is not a legitimate
> scientific question. It is, and there exist algorithms which
> distinguish between the two.

I think that ID is not science in general because it's always
impossible to prove or disprove that something was designed by an
"intelligent" entity/process without asserting any property of the
process, and without even defining the term "intelligent", so ID is, in
general, unfalsifiable.

However, individual claims of ID can be falsifiable.
The test you proposed, which I think is too simplistic, could in
principle be carried out.
Even if it wasn't simplistic and if it proved that our model of RM is
not true, it would be difficult to claim that this had proved ID.

It could be simply that our model of Random Mutation, and perhaps even
Quantum Mechanics were not correct. This would be a very interesting
scientific result, but claiming that this was the proof of Intelligent
Design would be an Argument from Ignorance.

(And here I apologize for my Ignorance of English :) )


> To put it more precisely, it is the RM conjecture that excludes any
> statistically significant "excess" of favorable mutations. And the
> "excess" is understood to be with respect to the rates that the RM
> model would predict. Thanks for agreeing that this is a perfectly valid
> scientific question which can in principle be decided (once we can
> compute what the predictions of RM are and how they compare to the
> empirical rates of favorable mutations).

Ok.

> You're trying to weasel out by switching to debate about definition of
> "biologically evolved". I am simply saying that "natural processes"
> exist which do perform purposeful mutations. RM conjecture must take
> that fact into account. The RM which agrees with empirical facts cannot
> be defined or stated without introducing messy exceptions (which are
> too vague to be a scientific theory).

No. The Theory of Evolution (RM+NS) doesn't claim that ALL living
organisms MUST be generated by RM + NS. It proposes a reasonable process
which can explain the origins of organisms without introducing unknown
forces/entities/processes. If we have evidence that some organism
was "designed" by some known entity (for instance a human
biotechnologist), we accept the latter explanation. This in no way
contradicts the theory.

Again, it's an example of the use of Occam's razor.

> But the RM conjecture is testable, in principle. Should it fail the
> test, the remaining options are ID (intelligent benevolent designer) or
> MD (malicious designer, which makes the favorable empirical rates lower
> than the RM model prediction).

As I said before, implying Intelligent (or Malevolent) Design from the
failure of RM, would be an Argument from Ignorance.

> The point was that you cannot use any statistical patterns which can be
> found in evolution of technologies, languages, religions, human
> societies,... as a proof that the biological mutations are
> undirected/random, when you find analogous patterns in the biological
> domain. If you bring up some statistical pattern from biology as a
> proof or indicator in favor of RM conjecture, and if someone can point
> out to an analogous pattern from these domains, your proof is invalid
> and your indicator is irrelevant.

Ok.

> Why would the intelligent agency/agencies guiding evolutionary
> mutations be any smarter or more accurate? Are you perhaps trying to
> sneak in the "perfect designer" strawman?

No, I was just saying that using a random mutation model in these
fields doesn't lead to a high approximation error.

> The official public threats to destroy careers of anyone supporting ID
> are no secret. There has been plenty of discussions on that on the pro
> ID sites such as Dembski's:
> http://www.uncommondescent.com/
>
> where someone suggested that such blatant discrimination in conjunction
> with recent court decisions which labeled ID as religion, could be used
> to sue universities for religious discrimination.

It seems to me that these guys are more interested in legal questions
than scientific ones.

> > Until you have some evidence that falsify evolution
> > science should assume that evolution is correct.
>
> Strawman. The RM conjecture is not equivalent to "evolution". The RM is
> falsifiable (at least in principle).

Until you have some evidence that falsify RM, science should assume
that RM is correct.

> You can't use teleological argument (as you do) to eliminate such
> configurations upfront. The test was for RM conjecture (not for natural
> selection), hence all such mutagenic events have equal a priori
> probabilities, regardless what may happen later to the cell. You can't
> use teleological reasons within RM (the model whose predictions are
> being tested) to eliminate such events from counting. When counting all
> DNA configurations reachable via a single mutagenic event from a fixed
> initial DNA state, the DNA change which subsequently kills the cell has
> equal weight as the most favorable one.

I was not using a teleological argument. I was just saying that the
events that break the DNA, changing it into something that isn't DNA
anymore, and thus killing the cell, should go under the "probability of
dying" factor, which seems not to be present in your model.

> ...

> Not at all. It only may appear "incredible foresight" to our present
> molecular biology. But not to the intelligent agency itself (which may
> well be, at least in part, implemented in the "junk" DNA). After all,
> it would seem incredibly difficult for our present technology to create
> a live cell from scratch. Yet cells do it with ease -- you put in one
> cell with the required substratum and you check later and there are
> millions of cells. Now you take in the same substratum and put in (or
> around) the whole biotech industry and science, all the best brains and
> biggest computers and labs, but without allowing them to call a cell
> for help or to borrow cell's tools and materials (hence they can't use
> anything produced by cells or viruses; that would be cheating), and you
> can wait all you want, there won't be any live cells in the dish. There
> won't be even an organelle.

So the "Intelligent Designer", if it exists, is more intelligent and
expert than modern biologists.

> > we observe a lot of unfavourable mutations
> > that any silly human would never make. ..This seems
> > a problem for your theory.
>
> It is not problem at all. Different intelligent agencies have different
> computational capabilities and different optimization algorithms and
> utility functions being optimized. To a cell all molecular biologists
> would appear inept (after the thought experiment described above).

Still, he wouldn't have made gross errors like breaking the vitamin C
gene in the human/chimp ancestor.

>...


> > Occam's razors rules out biased mutation until it's
> > proven necessary to explain evidence.
>
> The Occam's razor is relevant when selecting among theories which are
> equally correct but differ in complexity. But the ND and ID cannot be
> both correct since there exists an algorithm (practical or not) which
> can distinguish between the two. Hence Occam's razor is inapplicable.

Until the test is carried out, they are both consistent with empirical
evidence, and RM is much simpler than ID.

> ...


> The science doesn't deal with it because it is not advanced enough, not
> because it doesn't exist. It will be figured out eventually. My
> conjecture is that all objects, from elementary particles and up, have
> mind-stuff.

But until you can define "mind-stuff" in scientific terms, this
remains a metaphysical conjecture.

> > That's called "artificial selection".
>
> So? The same events can belong to multiple, related and unrelated
> patterns of purposeful actions. The event of you reading this sentence
> at your computer is also part of a process of your ISP making money
> from selling you internet access. The one doesn't exclude the other.

It's not "guided mutation" but "guided selection" on quasi-random
mutations.

> I am using the term "intelligence" to mean process which uses
> foresight, look-ahead to optimize some gain/utility function. The
> neural networks, which are a mathematical abstraction capturing the
> common regularities of variety of adaptible networks (complex systems)
> in nature, are intelligent agencies.

I studied neural networks extensively in Artificial Intelligence and
Robotics courses. These are not models of natural processes but
mathematical objects "inspired" by nature.
And, in general, they have no foresight or look-ahead.

>Simply by being exposed to
> 'sensory' inputs from environment and to punishments & rewards from
> utility function (which slightly modify individual links), they
> converge to a state which optimizes the punishments/rewards.

If you are lucky.
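
For what it's worth, the convergence-by-punishment-and-reward idea can be demonstrated on a toy case (luck permitting, as noted). A single perceptron learning the linearly separable AND function adjusts its weights from a purely local error signal, with no foresight or look-ahead anywhere; this sketch is only an illustration, not a model of anything biological:

```python
# Perceptron learning AND from a per-example error ("punishment") signal.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for epoch in range(100):
    errors = 0
    for (x1, x2), target in data:
        err = target - predict(x1, x2)   # -1, 0 or +1: the local feedback
        if err:
            errors += 1
            w[0] += lr * err * x1        # purely local weight updates,
            w[1] += lr * err * x2        # no model of future inputs
            b += lr * err
    if errors == 0:                      # converged: every example correct
        break

print(f"converged after {epoch + 1} epochs: w={w}, b={b:.1f}")
```

The perceptron convergence theorem guarantees this terminates for linearly separable data; for non-separable data it never settles, which is where the luck comes in.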

>In this
> "learned" phase, they internally model their environment and the 'self'
> and play what-if look-ahead game to discover the actions of the self
> actor which optimize their punishments & rewards. Our brains are just
> one example of such "intelligent" system.

Currently, the only one known that works.

> Even languages (natural and
> artificial such as mathematical formalisms) form such intelligent
> networks, living on humans as their substratum. There is even a paper
> by Eugene Wigner titled "The Unreasonable Effectiveness of Mathematics
> in the Natural Sciences"
>
> http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html
>
> marvelling over the apparent intelligence of mathematical formalism,
> which somehow anticipates by decades or longer seemingly unrelated
> empirical discoveries in physics. Mathematicians themselves merely
> follow the aesthetics of the formalism as a motivation for
> developments, without any particular knowledge or anticipation of the
> phenomena in physics which get discovered decades or centuries later
> and for which the already developed formalism (for its beauty) just
> happens to come out as a perfect model.

These theories about "macro-organisms" (like Gaia) are surely
fascinating, but they seem irrelevant to the debate.

z
May 15, 2006, 3:19:00 AM

On 14 May 2006 00:08:46 -0700, "nightlight"
<night...@omegapoint.com> wrote:

The experiments are not conflicting, and you really need to know the
biology behind the experiments. Let's look at LD. Infection by phage
T1 is rapid, and lethal. Resistance is caused by mutation of the
protein in the E. coli cell wall that T1 binds to. So
for a cell to be resistant to T1 infection, it has to have pretty much
no wild-type T1-receptors on it.

So, if you plate a population of cells onto a plate full of phage,
either the cells are resistant prior to plating, or they die. Or a
magic E. coli fairy guards a certain percentage of cells from
infection and then magically tinkers with them to make their offspring
resistant.

You could argue that there is a time window right after plating where
the bacteria could "adapt". The E. coli, sensing danger, mutagenizes
its DNA to make a resistant receptor and lives on. Some slight
problems with that scenario however. Phage binding to a cell at the
concentrations used is literally within seconds. Once bound, game
over for our plucky bacterium. Another problem is that phage
sensitivity is dominant. In other words, if you have a cell that has
both wild-type and mutant Ton (the receptor), the cell is sensitive to
infection by T1.

So, a cell that has a brand spanking new copy of the resistant Ton
gene is still sensitive to T1 infection. You might be asking yourself
"How often do E. coli proteins get replaced?". The answer, for cell
wall proteins like Ton, is: by dilution. Divide enough times and the
progeny will have no wild-type copies of the protein.
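
That dilution arithmetic is straightforward: with synthesis of the wild-type protein stopped and molecules partitioned roughly evenly between daughters, the expected per-cell count halves each division. The starting copy number below is a hypothetical round figure, not a measured value for Ton:

```python
import math

initial_copies = 1000   # hypothetical wild-type receptor copies in the mutant cell

# Expected wild-type copies per descendant after n symmetric divisions
for n in (0, 1, 5, 10):
    print(f"after {n:2d} divisions: ~{initial_copies / 2 ** n:.2f} copies/cell")

# Divisions needed before the expected count drops below one molecule
divisions_to_clear = math.ceil(math.log2(initial_copies))
print(f"expected count falls below 1 after {divisions_to_clear} divisions")
```

So even a generous starting copy number is effectively gone within a dozen generations, which is why the resistant phenotype only appears in the mutant's descendants.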

Again, knowing the biology, you get two choices. Mystical phage
slaying fairies, or random pre-existing mutations. Substitute any
desired mechanism for the MPSF as you like as it amounts to the same
thing. A nondeterministic supernatural event.

My description would play better with small children though.

The Cairns experiments were positive selections: can you grow on
this medium? Yes; E. coli does not starve as gracefully as some other
bugs, and has a fairly steep death curve when you deprive it of usable
carbon sources. You get a lot of lysis. That releases low amounts of
nutrients in the culture, enough to keep a small population of cells
alive long enough. If the same experiments are performed in strains
that are devoid of PolIV, you get a lower frequency of revertants.
This enzyme is one of two of the so-called error prone DNA polymerases
involved in repairing damage that is bad enough to stall the "normal"
polymerases. The kicker is you almost eliminate the Cairns-type
mutations by mutating RpoS. RpoS is a transcription factor that E. coli
uses to turn on the so called general stress response.

Oh, and the strains isolated as survivors from a Cairns-type
experiment turn out to have a much higher level of mutation in
non-selected genes when compared to their parental strains. The
technology of the time prevented Cairns from looking at neutral
mutations. Science is like that, we get odd results now and then.
This one was a head scratcher when it first came out. It wasn't
ignored even though it appeared to be in direct conflict with
neutral evolution theories. People spent a lot of time on figuring
out what was going on.

And no fairies were harmed at any point in the process.

If you want to get down and dirty at the sequence level, we can do
that. But I think that would really bore people. The Cairns effect
is very interesting in its own right. And it is real, at least in
enteric bacteria. But it is not directed in the sense that I think
you mean.

Oh, and at no time in figuring out what was happening with Cairns-type
experiments was QM invoked. At least by any of the folks that
actually figured it out.

B Miller

nightlight

May 15, 2006, 9:02:06 AM
to
> The experiments are not conflicting, and you really need to know
> the biology behind the experiments. Let's look at LD.
> Infection by phage T1 is rapid, and lethal. Resistance is
> caused by a mutation in the gene for the protein in the E.
> coli cell wall that T1 binds to. So for a cell to be
> resistant to T1 infection, it has to have pretty much
> no wild-type T1-receptors on it. ...

Yes, "pretty much". All that LD shows is that the receptor mutation is
rare enough so that its occurrence can be modeled as a Poissonian point
process (which is tautological for any very low rate mutation, anyway).
The LD does not compare rates of this mutation with and without phage
challenge, which is what an RM conjecture test would require. It only
compares (indirectly via implied statistics) the rates of mutation
(which were low) with the rates of reproduction of phage resistant
mutants (which were high). That is a completely irrelevant comparison
for testing the RM conjecture.

You seem to be confusing the fact that mutation doesn't arise sharply
as all or none (but rather with some probability PM < 1), with the RM
conjecture. While there is randomness involved in mutations, it is the
wrong kind of randomness to be relevant for the RM conjecture. In fact
guided mutations can as easily arise with probability PM < 1 as do
non-guided ones. It doesn't mean that if there is an intelligent
process behind them, it has to be 100% successful in solving any
challenge presented in any given time and population sizes. The
parameters (various rates) and the design of the LD experiment were
simply not suitable for testing RM vs non-RM. If you can see how LD
discriminates between RM and non-RM, you are welcome to share it. Your
attempts so far merely alternate between non sequitur and ad hominem
arguments.

> The Cairns experiments ... If the same experiments
> are performed in strains that are devoid of PolIV,
> you get a lower frequency of revertants. This enzyme
> is one of two of the so-called error prone DNA polymerases
> involved in repairing damage that is bad enough to stall
> the "normal" polymerases. The kicker is you almost
> eliminate the Cairns-type mutations by mutating RpoS.
> RpoS is a transcription factor that E. coli uses to turn
> on the so called general stress response. ... Oh, and
> the strains isolated as survivors from a Cairns type
> experiment turn out to have a much higher level of
> mutation in non-selected genes when compared to their
> parental strains. The technology of the time prevented
> Cairns from looking at neutral mutations.

So what? The intelligent agency doesn't have as perfect an instant
solution to the challenge as the entire community of experts can
conceive _in hindsight_, years later. That doesn't affect the fact that the
Cairns experiment clearly demonstrated that the probability of
favorable mutation was greater under challenge than without challenge.
That the probabilities of other mutations increased as well merely
shows that the intelligent solution was not overly specific to the
challenge, but more generic (thus favorable for a variety of challenges).
Increasing the generality of a solution tends to come with tradeoffs
and downsides (as it does in any domain). After all, even intelligently
designed drugs and medical procedures have downsides.

Note also that in the Cairns case, we're dealing with artificially selected
strains of bacteria exposed to a contrived challenge, hence the solution
was a fallback to a more general type of solution. The experiences with
antibiotic resistance suggest that we can reasonably expect that after
some decades of exposure to the challenge, the Cairns survivors would
evolve better and more specific solutions.

> The kicker is you almost eliminate the Cairns-type mutations
> by mutating RpoS. RpoS a transcription factor that E. coli
> uses to turn on the so called general stress response.

That's a gross non sequitur. Yep, you can also eliminate various
aspects of human intelligence by damaging various parts of human brain.
Or even easier, you can make a championship chess program (or any AI
program) become so dumb as to crash right away by 'mutating' a single byte
(or even a single bit) of its executable.

Just because one can discover how some intelligent response was
implemented in the cellular biochemical web (hence one can disable it),
it doesn't mean that the response wasn't intelligent in the first place
(since the solution came as a response to the challenge and the odds of
survival increased compared to either non-response or to the opposite
response, the lowering of the mutation rates). By the same rigged
criteria, shifty standards, and sophistry used to muffle the
implications of the Cairns experiment, no computer program could be
said to behave intelligently (try playing chess against the current
chess programs).

As far as we know, any intelligent agency, a process with foresight,
has to have some material embodiment/implementation and once this
implementation is known well enough one may be able to tamper with it
(e.g. damage or disable it). The ID theory does not postulate
'supernatural' and 'omnipotent' tamper-proof intelligence behind
mutations. It simply says that the neo-Darwinian RM conjecture is
overlooking an important intelligent agency/agencies behind the
biological evolution and origins of life. The full nature of that
agency is something that can be a subject of research once the ND
priesthood's spell on academia has been lifted.

The Cairns outcome summary -- the neo-Darwinists are left clinging to a
straw: the intelligent agency is not perfect, it is not omnipotent and
it is not invisible (its functioning is not undetectable). First, the
ID doesn't claim that the overlooked intelligent agency is perfect (to
find optimum solution arbitrarily quickly) or omnipotent (e.g. to
create tamper-proof solutions). Finally, if its functioning were not
detectable, then the neo-Darwinists would crow triumphantly: See! It's
not a science since we can't detect its workings!

Stooping down to this kind of cheap sophistry is a sure indicator that
ND has already lost on the substance, that it knows it and that it is
merely blowing smoke to obscure the fall. All that obfuscation may also
gain it just enough time to come up with a convincing, or at least
obscure enough, verbiage to ease its unceremonious slither to the other
side, while appearing to the public as if everything is still exactly
the way they always thought it to be.

hersheyhv

May 15, 2006, 11:07:08 AM
to
nightlight wrote:
> > The experiments are not conflicting, and you really need to know
> > the biology behind the experiments. Let's look at LD.
> > Infection by phage T1 is rapid, and lethal. Resistance is
> > caused by a mutation in the gene for the protein in the E.
> > coli cell wall that T1 binds to. So for a cell to be
> > resistant to T1 infection, it has to have pretty much
> > no wild-type T1-receptors on it. ...
>
> Yes, "pretty much". All that LD shows is that the receptor mutation is
> rare enough so that its occurrence can be modeled as a Poissonian point
> process (which is tautological for any very low rate mutation, anyway).
> The LD does not compare rates of this mutation with and without phage
> challenge, which is what an RM conjecture test would require.

LD (and the equivalent tests using replica plating) asks the question
of whether the mutations to resistance occur *in response to* exposure
or *in anticipation of* future exposure to the selective agent. The
clear evidence from these experiments is that mutation to resistance
occurs during the time when cells are NOT exposed to the selective
agent (not after exposure) and that the rate of such mutation is
uncorrelated to whether or not the population will be exposed to the
selective agent. That is, mutation to resistance is an independent
variable that is uncorrelated with exposure to a selective agent. Or,
more simply, mutation to resistance occurs without respect to need.
Cells do not produce *more* resistance after exposure to these rapidly
killing agents and they certainly do not produce more in prescient
anticipation of exposure. So, at least for rapidly selective agents,
it is quite clear. Mutation is random wrt need.
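The distinction above is exactly what the fluctuation-test statistics pick up. A toy simulation (a sketch only, with arbitrary made-up parameters, not the original protocol) contrasts the two hypotheses: if resistance mutations arise spontaneously during growth, rare early "jackpots" make the per-culture counts wildly over-dispersed, while mutation induced only at exposure would give Poisson counts with variance roughly equal to the mean:

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def grow_culture(generations=20, mu=1e-6):
    """Grow one culture from a single cell. Mutations to resistance occur
    at rate mu per cell per division, BEFORE any phage is present; each
    mutant founds a clone that doubles along with the whole population."""
    pop, resistant = 1, 0
    for _ in range(generations):
        new_mutants = poisson(pop * mu)  # mutations in this division round
        pop *= 2
        resistant = 2 * resistant + new_mutants
    return resistant  # resistant cells present at plating time

n = 1000
spontaneous = [grow_culture() for _ in range(n)]
mean_s = sum(spontaneous) / n
var_s = sum((x - mean_s) ** 2 for x in spontaneous) / n

# Induced-mutation hypothesis: each plated cell independently mutates on
# exposure, so per-culture counts are Poisson (variance roughly = mean).
induced = [poisson(mean_s) for _ in range(n)]
mean_i = sum(induced) / n
var_i = sum((x - mean_i) ** 2 for x in induced) / n

print(f"spontaneous: mean {mean_s:.1f}, variance/mean {var_s / mean_s:.1f}")
print(f"induced:     mean {mean_i:.1f}, variance/mean {var_i / mean_i:.1f}")
```

Run it a few times: the spontaneous-mutation cultures show a variance-to-mean ratio far above 1, which is the jackpot signature Luria and Delbrück observed; the induced model stays near 1.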

BTW, the *rate* of such mutation to resistance is indeed a constant for
a particular strain grown in a particular environment. One can alter
the *rate* of mutation by changing those conditions (the strain can be
mutant in repair; you can add a mutagen), but one cannot produce an
increase in the specific resistance gene by using the selective agent.
Changing the *rate* of mutation generally (or for specific types) is
not evidence against the randomness of mutation wrt need. Neither is
the fact that some sites are more mutable than others (e.g.
achondroplastic dwarfism). What you need to show is not merely a
change in *rate* of mutation, but specifically a *differential rate
change* favoring *specifically* mutations to resistance. That is what
Cairns seemed to have shown initially. Further analysis of the
biological mechanism showed that what was really happening was a
non-specific increase in mutation rate in desperate dying cells. But a
non-specific increase in the *rate* of mutation to resistance is not
what you need. What you need is a *differential* rate increase
*specific* to the needed resistance.
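The "differential rate change" requirement can be put in numbers. With invented counts (purely illustrative), comparing the fold-increase at the selected locus against a neutral reporter locus is what distinguishes a general mutator response from a genuinely adaptive one:

```python
# Hypothetical counts (illustrative only): mutants per 10^8 cells at the
# selected (needed) locus and at an unselected reporter locus, with and
# without stress. A general mutator response scales both loci alike.
unstressed = {"selected": 4, "reporter": 20}
stressed   = {"selected": 80, "reporter": 400}

fold_selected = stressed["selected"] / unstressed["selected"]  # 20x
fold_reporter = stressed["reporter"] / unstressed["reporter"]  # 20x

# A ratio of fold-changes near 1 means the increase is non-specific:
# the *rate* went up everywhere, with no differential favoring the
# needed gene, which is what the post-Cairns work found.
specificity = fold_selected / fold_reporter
print(f"fold change, selected locus: {fold_selected:.0f}x")
print(f"fold change, reporter locus: {fold_reporter:.0f}x")
print(f"specificity (differential) index: {specificity:.2f}")
```

Adaptive mutation would require the specificity index to be well above 1; a value near 1 is just a rising tide of random mutation.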

BTW, even Cairns recognized that his finding was *different* from what
one observes with rapid selective agents (like phage or antibiotics).
He noted that *if* this was Lamarckian inheritance, it only occurred
because bacteria do not die quickly in his selective conditions
(absence of added amino acids).

> It only
> compares (indirectly via implied statistics) the rates of mutation
> (which were low) with the rates of reproduction of phage resistant
> mutants (which were high). That is a completely irrelevant comparison
> for testing the RM conjecture.

What LD showed is that the mutations occurred in the non-selective
environment before there was any need for resistance, not as an
adaptive response to the need for resistance. That is, it showed that
mutation to resistance (at least for fast acting selective agents) is
random wrt need. The Cairns experiments *and* the follow-up
experiments (that you ignore) that explored the phenomenon he pointed
out (apparent adaptive mutation in response to need) revealed a
cell-based system of greatly increasing mutation rate (but not
specifically at adaptive sites) in stressful conditions. IOW, the *rate*
of mutation increases in Cairns-like selection, but not the specificity
that non-random or adaptive mutation requires. Mutation is still
random wrt need.

> You seem to be confusing the fact that mutation doesn't arise sharply
> as all or none (but rather with some probability PM < 1), with the RM
> conjecture.

Can you explain what you think you mean by the above? Mutation is a
rare event.

> While there is randmoness involved in mutations, it is the
> wrong kind of randomness to be relevant for the RM conjecture.

I would point out, in contrast, that while there is non-randomness
associated with mutation, with certain sites being more or less mutable
(over a range of 10^5-fold differences) and certain types of mutations
being more or less likely (transitions >> transversions) and rates that
can be varied by the genotype of the organism and the environment, we
have consistently found (even in the Cairns-like examples, once fully
explored) that mutation is random wrt need and is not a specific
adaptive response to a specific selective agent. And that *is* the
right kind of randomness relevant to what biologists mean by random
mutation.

> In fact
> guided mutations can as easily arise with probability PM < 1 as do
> non-guided ones.

In order to demonstrate that mutations are "guided" you have to
demonstrate that their appearance is correlated with and in response to
(or in anticipation of) need. The Cairns phenomena were initially
thought to be an indication of adaptive mutation (mutation in response
to a need). But the real experiments that the finding produced showed
that what was happening was simply a change in mutation *rate*, not a
change in mutation specificity.

> It doesn't mean that if there is an intelligent
> process behind, it has to be 100% percent successful in solving any
> challenge presented in any given time and population sizes. The
> parameters (various rates) and the design of the LD experiment were
> simply not suitable for testing RM vs non-RM.

For rapidly selecting agents, they most certainly were. They (and the
replica plating experiments confirmed it) clearly demonstrated that
mutation to resistance occurred *prior to* any exposure to a selective
agent and not adaptively *in response* to these agents. Adding certain
mutagens (but not the selective agent and unrelated to it) can
certainly increase the *rate* of mutation to resistance prior to adding
the selective agent. But it does so by lifting all boats (increasing
mutation in all genes), not by lifting a single one.

> If you can see how LD
> discriminates between RM and non-RM, you are welcome to share it. Your
> attempts so far merely alternate between non sequitur and ad hominem
> arguments.

I have done so by specifically pointing out the replica plating
extension of LD and the different expectations if what selective agents
did was to *induce* mutation to resistance (specifically lead to an
increase of mutation in resistance) or instead merely selected among
pre-existing mutants that formed in the absence of the selective agent.
Again, what we are concerned about is either *adaptive* mutation
(mutation that is correlated with and caused by the presence of the
selective agent) or *spooky* mutation (mutation that occurs because an
organism anticipates that it will need it). Mere *rate* of mutation to
resistance is irrelevant. What you are interested in (and what does
not exist, AFAIK, except among certain domesticated transposons, and
even those processes started out as a random event) is, at minimum, a
*differential* rate of specific mutation correlated with exposure to
selective agents.

> > The Cairns experiments ... If the same experiments
> > are performed in strains that are devoid of PolIV,
> > you get a lower frequency of revertants. This enzyme
> > is one of two of the so-called error prone DNA polymerases
> > involved in repairing damage that is bad enough to stall
> > the "normal" polymerases. The kicker is you almost
> > eliminate the Cairns-type mutations by mutating RpoS.
> RpoS is a transcription factor that E. coli uses to turn
> > on the so called general stress response. ... Oh, and
> > the strains isolated as survivors from a Cairns type
> > experiment turn out to have a much higher level of
> > mutation in non-selected genes when compared to their
> > parental strains. The technology of the time prevented
> > Cairns from looking at neutral mutations.
>
> So what? The intelligent agency doesn't have as perfect instant
> solution to the challenge as the entire community of experts can
> conceive _in hindsight_, years later.

The explanation was not conceived "in hindsight". It took years of
experimental work to dissect what was really happening in the Cairns
phenomena.

> That doesn't affect the fact that
> Cairns experiment clearly demonstrated that the probability of
> favorable mutation was greater under challenge than without challenge.

That is just it. It was, in fact, discovered that the Cairns phenomena
involved a hidden change in *rate* of mutation, not an adaptive change
in the *specificity* of the mutation. You cannot simply point to the
Cairns phenomena and ignore all the research that it spawned that
showed that what Cairns saw was NOT adaptive mutation.

> That the probabilities of other mutations increased as well merely
> shows that the intelligent solution was not overly specific to the
> challenge, but more generic (thus favorable for variety of challenges).

Again, to show *adaptive* mutation, you have to demonstrate a
*differential* change in the rate of mutation, specifically favoring
resistance (or in this case, reversion) of the specific gene of need.
If the increase in frequency of resistance is merely due to an increase
in *all* or non-specific mutation (as seems to be the case), that
undercuts the idea that the mutations are adaptive and supports the
idea that mutation is random wrt need and all selection does is to
select among the variants rather than to specifically induce their
production.

> Increasing generality of a solution tends to be a tradeoff with
> downsides (as it is in any domain). After all, even the intelligently
> designed drugs and medical procedures have downsides.
>
> Note also that in Cairns case, we're dealing with artificially selected
> strains of bacteria exposed to contrived challenge, hence the solution
> was a fallback to a more general type of solution. The experiences with
> antibiotic resistance suggest that we can reasonably expect that after
> some decades of exposure to the challenge, the Cairns survivors would
> evolve better and more specific solutions.

So now you are claiming that the Cairns experiment doesn't show what
you want because it involves artificially selected strains of bacteria
exposed to a contrived challenge? That particular whiny response
would rule out *any* experiment.

> > The kicker is you almost eliminate the Cairns-type mutations
> > by mutating RpoS. RpoS is a transcription factor that E. coli
> > uses to turn on the so called general stress response.
>
> That's a gross non sequitur. Yep, you can also eliminate various
> aspects of human intelligence by damaging various parts of human brain.

So now the general stress response is the equivalent of the human
brain? Again, the point is that the stress response results in an
increased overall *rate* of mutation in the stressed cells. It does
not result in a specific overproduction of specific needed mutations.
And it is the latter that your argument against random mutation (wrt
need) requires. So your argument against random mutation is a failure
both when we look at rapid selecting agents and also in slower ones.
Do you have *any* evidence supporting adaptive or spooky mutation?

> Or even easier, you can make a championship chess program (or any AI
> program) become so dumb as to crash right away by 'mutating' a single byte
> (or even a single bit) of its executable.
>
> Just because one can discover how was some intelligent response
> implemented in the cellular biochemical web (hence one can disable it)
> it doesn't mean that the response wasn't intelligent in the first place
> (since the solution came as a response to the challenge and the odds of
> survival increased compared to either non-response or to the opposite
> response, the lowering of the mutation rates).

The stress response does not "intelligently" (that is, differentially)
produce needed mutations. It dumbly and randomly and non-specifically
increases the *rate* of mutation in all kinds of sequences. Selection,
then, merely chooses the ones that happened to be mutants in the right
spot. IOW, random mutation plus selection after the fact.

> By the same rigged criteria, shifty standards, and sophistry used to
> muffle the implications of the Cairns experiment, no computer program
> could be said to behave intelligently (try playing chess against the
> current chess programs).

The sophistry is all yours. You ignore all the post-Cairns work that
explored this phenomena and pretend that Cairns is the last and only
word. When it is pointed out that the process behind the Cairns
phenomena works not by increasing specificity of mutation but by
increasing the overall rate of mutation, you then claim that the
"intelligent" process is sloppy. So sloppy that it just so happens to
be indistinguishable from random mutation.

> As far as we know, any intelligent agency, a process with foresight,
> has to have some material embodiment/implementation and once this
> implementation is known well enough one may be able to tamper with it
> (e.g. damage or disable it).

Are you really claiming that the stress response is the equivalent of
"intelligence" despite the evidence that it does not show the
specificity required to be a cause of *adaptive* mutation?

> The ID theory does not postulate
> 'supernatural' and 'omnipotent' tamper-proof intelligence behind
> mutations. It simply says that the neo-Darwinian RM conjecture is
> overlooking an important intelligent agency/agencies behind the
> biological evolution and origins of life. The full nature of that
> agency is something that can be a subject of research once the ND
> priesthood's spell on academia has been lifted.

The community of real scientists took the Cairns phenomena seriously,
studied it, analysed it, and found out that it was not due to
*adaptive* mutation after all.

> The Cairns outcome summary -- the neo-Darwinists are left clinging to a
> straw: the intelligent agency is not perfect, it is not omnipotent and
> it is not invisible (its functioning is not undetectable).

Your "intelligent" agent is, however, indistinguishable from an agency
that produces random (wrt need) mutations that then get selected among.
So, in that way, it is invisible. It is indistinguishable from random
(wrt need) mutation.

> First, the
> ID doesn't claim that the overlooked intelligent agency is perfect (to
> find optimum solution arbitrarily quickly) or omnipotent (e.g. to
> create tamper-proof solutions). Finally, if its functioning were not
> detectable, then the neo-Darwinists would crow triumphantly: See! It's
> not a science since we can't detect its workings!

"See! It's not a science since we can't detect its workings!" (which
would require evidence of *differential* mutation in favor of
*specific* sites and all we see is a change in the *rate* of mutation).
There is no evidence of adaptive mutation that requires an unseen
*intelligent* agent. It can all be explained at the experimental
molecular level where we can see that all the mutation observed is
non-adaptive and non-spooky, with selection merely determining which
mutants survive, not inducing or causing them. It can be explained at
the level of evolution because the amount of change required to
generate, for example, humans and chimps from a common ancestor, is
well within what non-intelligent processes can accommodate.

> Stooping down to this kind of cheap sophistry is a sure indicator that
> ND has already lost on the substance, that it knows it and that it is
> merely blowing smoke to obscure the fall.

You obviously know all about sophistry and blowing smoke. That is why
you ignore or blow smoke to obscure the fact that the Cairns phenomena
has been explained by further experiment to be merely an increase in
rate of overall mutation, not the differential increase required of
adaptive mutation.

> All that obfuscation may also
> gain it just enough time to come up with a convincing, or at least
> obscure enough, verbiage to ease its unceremonious slither to the other
> side, while appearing to the public as if everything is still exactly
> the way they always thought it to be.

Where is your evidence that the process of stress-induced mutation is
adaptive?

nightlight

May 15, 2006, 3:54:35 PM
to
>> The LD does not compare rates of this mutation with
>> and without phage challenge, which is what an RM
>> conjecture test would require.
>
> The clear evidence from these experiments is that
> mutation to resistance occurs during the time when
> cells are NOT exposed to the selective agent (not
> after exposure) and that the rate of such mutation
> is uncorrelated to whether or not the population
> will be exposed to the selective agent.

That a mutation can occur now which later may turn out to be favorable
in some environment is trivial. You don't need an experiment to know
that. For example, having a hand with dexterous fingers is quite
advantageous for typing on computers (e.g. compared to paws or hooves),
even though the mutations resulting in that trait occurred long before
there were computers.

The problem is that LD does not compare rates of mutation before and
after exposure to phage. They only compare reproduction rate of mutants
(via statistics of resistant colonies in the 'descendant' dishes) with
the rate of mutation during the exposure. In this instance, the
mutations were much slower (even though there was an exposure) than the
reproduction of mutants. So how would you know whether the
introduction of the phage challenge changed the rates of favorable
mutations? You need to know what these rates were before and after the
introduction of the challenge and compare the two before you can
declare that they didn't change. Only then can you say that they
didn't increase in response to the phage, if that is what the two
rates show.
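The comparison being demanded here amounts to a standard two-sample Poisson rate test. A minimal sketch (the counts are invented for illustration; 3.84 is the 5% critical value of the chi-square distribution with one degree of freedom):

```python
import math

def poisson_rate_lrt(k1, n1, k2, n2):
    """Likelihood-ratio test for equal Poisson mutation rates in two
    samples: k mutations observed over n cell-divisions, without vs with
    the challenge. Compare the returned statistic against 3.84
    (chi-square, 1 df, 5% level)."""
    def ll(k, lam):
        # Poisson log-likelihood of k events with expectation lam,
        # constant term dropped (it cancels in the ratio)
        if lam == 0:
            return 0.0 if k == 0 else float("-inf")
        return k * math.log(lam) - lam
    pooled = (k1 + k2) / (n1 + n2)             # MLE under "rates equal"
    null = ll(k1, pooled * n1) + ll(k2, pooled * n2)
    alt = ll(k1, k1) + ll(k2, k2)              # unrestricted MLEs: lam_i = k_i
    return 2 * (alt - null)

# Hypothetical counts (made up for illustration): 12 resistance mutations
# in 10^9 divisions without phage vs 15 in 10^9 divisions with phage.
stat = poisson_rate_lrt(12, 1e9, 15, 1e9)
print(f"LRT statistic: {stat:.2f}  (significant at 5% only if > 3.84)")
```

With counts like these the statistic is small, i.e. the data cannot distinguish the two rates; only a differential like 10 vs 100 would clear the threshold.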

Further, even if one were to augment the experiment and compare the two
mutation rates, and if one were to find no difference, the only
implication is that the particular intelligent agency of that strain of
bacteria isn't equipped to deal with that particular challenge. This
would be the same phenomenon as an unprepared student taking a multiple
choice test and doing no perceptibly better than chance (within some
convention for error margins). The same student may do quite well on
some other test.

The absence of evidence for a phenomenon in a small fraction of instances
is no evidence of absence of phenomenon in all instances. The LD
experiment didn't even get to the point to demonstrate the absence of
evidence (for different rates of mutations, since it didn't compare the
two mutation rates) in that single instance, much less show the
evidence of absence (of such difference generally).


> Changing the *rate* of mutation generally (or for
> specific types) is not evidence against the randomness
> of mutation wrt need.

Of course it is. If the result of such a response (increased mutation
rates for some or all sites) is a statistically significant increase
in fitness in that environment (compared to the fitness in that
environment for unchanged or lowered mutation rates), then the response
was guided by a (benevolent) intelligence. The response clearly
increased the rate of mutation favorable in that environment. It is
irrelevant whether there are also some side effects (such as excess of
unrelated harmful mutations), as long as the net probability of
survival and propagation is increased. After all, most intelligently
guided actions in every other realm tend to have tradeoffs and negative
side effects. ID doesn't postulate perfect intelligence.

The pertinent characteristic of an intelligent response is that the
response increases the chances of survival under a given challenge
(compared to non-response under the same challenge). You are simply
trying to shift the goalposts by declaring that this increased fitness
wasn't achieved in the most economical way conceivable (via exclusive
single site mutation), hence the response didn't increase the fitness
to its absolute maximum value, so you won't call it an intelligent
response. With that kind of shifty criteria, a student who doesn't make
1600 on the SAT cannot be classified as an "intelligent agency".
Similarly, any drug or medical procedure with side effects cannot be
classified as a result of foresight. When you are forced to slide down
into that kind of sophistry, you've lost the argument.

> Further analysis of the biological mechanism showed that
> what was really happening was a non-specific increase
> in mutation rate in desperate dying cells. But a
> non-specific increase in the *rate* of mutation to
> resistance is not what you need. What you need is
> a *differential* rate increase *specific* to the
> needed resistance.

Why? What exactly is the hypothesis that you're trying to refute with
that particular threshold for a favorable response? A perfect omniscient
and omnipotent intelligence? To show that Pat Robertson is wrong?... I
thought your goal was to refute ID. You're certainly not refuting the
existence of the intelligent mutagenic response (the increased
probability of favorable mutation wrt that environment), which is in
fact what Cairns found.

>> You seem to be confusing the fact that mutation doesn't
>> arise sharply as all or none (but rather with some
>> probability PM < 1), with the RM conjecture.
>
> Can you explain what you think you mean by the above?
> Mutation is a rare event.

I mean that he and other ND defenders here argue that since there is a
chance that a favorable mutation (for given challenge) may not occur in
the time & with the population sizes available, then it is "random"
mutation, hence these mutations are "random", hence the "random
mutation" conjecture holds here. The error is of the same trivial kind
as arguing that since a loaded die doesn't yield a favorable outcome
on each roll, the outcome of a roll is random, hence the die roll is
random, hence the die is not loaded.
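The loaded-die point is easy to demonstrate: each roll is random, yet the bias shows up unmistakably in the aggregate frequencies. A toy sketch (the 3x loading on six is an arbitrary choice):

```python
import random

random.seed(7)

# A die loaded to favor six: per-roll outcomes are still random, but the
# bias is obvious from aggregate frequencies, not from any single roll.
weights = [1, 1, 1, 1, 1, 3]  # hypothetical loading: six is 3x as likely
rolls = random.choices(range(1, 7), weights=weights, k=10_000)

freq_six = rolls.count(6) / len(rolls)
print(f"frequency of six: {freq_six:.3f} (a fair die would give ~0.167)")
```

The expected frequency here is 3/8 = 0.375, far from the fair 1/6, even though no individual roll is predictable.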

> we have consistently found (even in the Cairns-like examples,
> once fully explored) that mutation is random wrt need and
> is not a specific adaptive response to a specific
> selective agent.

Why does mutagenic response have to be "specific" to the challenge?
That is not an ID hypothesis. ID only requires that the mutagenic
response is favorable under the given challenge. After all, intelligent
agencies of all kinds routinely perform tradeoffs in which they decide
on a simpler, more generic response over the most specific response,
which might be more economical on resources and have fewer side
effects. That's simply a matter of the choice of weights assigned to
simplicity and quickness vs economy of resources and complexity of a
solution. ID has no particular a priori commitment to some particular
weights.

> In order to demonstrate that mutations are "guided" you
> have to demonstrate that their appearance is correlated
> with and in response to (or in anticipation of) need.

That's what Cairns found.

> But the real experiments that the finding produced
> showed that what was happening was simply a change in
> mutation *rate*, not a change in mutation specificity.
>

Now you have moved the goalposts from your previous sentence. Cairns
found that the appearance of favorable mutation is correlated with and
in response to the need, which was the goalpost in the first sentence.
The net effect of the mutagenic response is increased fitness. But now
you suddenly insist that the response also must not have any side
effects (such as an increased rate of other, including harmful,
mutations). As explained, this new requirement is fine if you're trying
to refute Pat Robertson and his perfect designer. But, as explained, it
is a non sequitur regarding the ID theory.


>> The parameters (various rates) and the design of the
>> LD experiment were simply not suitable for testing
>> RM vs non-RM.
>
> For rapidly selecting agents, they most certainly were.
> They (and the replica plating experiments confirmed it)
> clearly demonstrated that mutation to resistance
> occurred *prior to* any exposure to a selective agent
> and not adaptively *in response* to these agents.

That was already countered at the top. It is trivial that a mutation
can occur now which will be useful later. That doesn't logically
exclude the possibility that the mutation may occur faster when it is
needed.

> Adding certain mutagens (but not the selective agent
> and unrelated to it) can certainly increase the *rate*
> of mutation to resistance prior to adding the selective
> agent.

So? The trick is that when that is consistently done to increase the
odds of survival under a given challenge, then it is an intelligent
mutagenic response to the challenge. That someone can also perform the
same mutagenic action in a manner unrelated to the challenge is a non
sequitur.

> What you are interested in (and what does not exist, AFAIK,
> except among certain domesticated transposons, and even those
> processes started out as a random event) is, at minimum, a
> *differential* rate of specific mutation correlated
> with exposure to selective agents.

No, ID does not require a differential (increased) rate of _specific_
mutation. It only requires an increased rate of favorable mutation.
Inserting the qualifier "specific" is a strawman argument (Pat
Robertson's theory of evolution).

>> So what? The intelligent agency doesn't have as perfect
>> instant solution to the challenge as the entire community
>> of experts can conceive _in hindsight_, years later.
>
> The explanation was not conceived "in hindsight". It

> took years of experimental work to dissect...

It also took years to move the goalposts slowly enough (to obfuscate
the fall) and make up the imagined ID that supposedly also requires
specific mutation. As explained, ID only requires that the net effect
of the mutagenic response be favorable, not that it also must be the
most economical favorable response.

>> That doesn't affect the fact that Cairns experiment clearly
>> demonstrated that the probability of favorable mutation was
>> greater under challenge than without challenge.
>
> That is just it. It was, in fact, discovered that the
> Cairns phenomena involved a hidden change in *rate* of
> mutation, not an adaptive change in the *specificity* of
> the mutation.

Can you point out where ID theory prohibits an intelligent agency from
using any particular mutagenic mechanism (including changing the rates
of multiple mutations) to increase fitness under a given challenge?
You're making up such silly requirements.

>> Note also that in Cairns case, we're dealing with artificially
>> selected strains of bacteria exposed to contrived challenge,
>> hence the solution was a fallback to a more general type of
>> solution.
>

> So now you are claiming that the Cairns experiment doesn't
> show what you want because it involves artificially selected
> strains of bacteria exposed to a contrived challenge?
>

No, that's clearly not what I am saying. The point being made is that
the Cairns experiment didn't demonstrate the most economical favorable
mutagenic response. It only demonstrated a favorable mutagenic
response, which is all that ID requires. You're battling your own
"perfect designer" strawman.

> So now the general stress response is the equivalent of
> the human brain? Again, the point is that the stress
> response results in an increased overall *rate* of
> mutation in the stressed cells. It does not result
> in a specific overproduction of specific needed mutations.

ID doesn't require "specific overproduction". It only requires a
mutagenic response to a challenge which yields a net improvement in
fitness under that challenge. It doesn't need to be the cheapest
mutagenic response. Attaching the label "general stress response" or
any other, doesn't change the ID requirements or the experimental facts
supporting them.

> The stress response does not "intelligently" (that is,
> differentially) produce needed mutations. It dumbly
> and randomly and non-specifically increases the *rate*
> of mutation in all kinds of sequences.

It increased the fitness under the challenge. That's what ID requires.
If you wish to define some new PID theory that requires a perfect
response, so you can declare bacteria dumb in comparison, go ahead;
have fun with your PID bashing.

> You ignore all the post-Cairns work that explored this
> phenomena and pretend that Cairns is the last and only
> word.

I didn't ignore it. Half the post was about the later results.

> When it is pointed out that the process behind
> the Cairns phenomena works not by increasing specificity
> of mutation but by increasing the overall rate of mutation,
> you then claim that the "intelligent" process is sloppy.

I only said that ID does not have any postulates requiring the
optimality of the mutagenic response. You're simply making up such a
requirement, some other PerfectID theory, so you can have something to
refute, even if it is your own strawman.

> So sloppy that it just so happens to be indistinguishable
> from random mutation.

It's not the optimum, but it's not quite that sloppy either. After
all, that's precisely what the Cairns experiment shows -- the
probability of favorable mutation did increase with respect to the
non-response rate of favorable mutations. That's all that ID requires.


>> As far as we know, any intelligent agency, a process with
>> foresight, has to have some material embodiment /
>> implementation and once this implementation is known
>> well enough one may be able to tamper with it (e.g. damage
>> or disable it).
>
> Are you really claiming that the stress response is
> the equivalent of "intelligence"

It is not "equivalent" but it is a special case, an instance of an
intelligent mutagenic response (a response which increased its fitness
under the challenge). "Equivalent" would also require that every
"intelligence" be implemented as (or act through) a general stress
response. ID has no such requirement on "intelligence".

Symbolically: A <=> B means A => B and B => A. In this case we only
have A => B (where A is the favorable stress response, and B is
"intelligence"). But ID does not require/postulate B => A.
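The one-way implication can be checked mechanically; a brute-force
sweep of the four truth assignments (plain propositional logic, nothing
biological) exhibits the case that separates A => B from B => A:

```python
from itertools import product

# implies(p, q) is the material conditional p -> q.
implies = lambda p, q: (not p) or q

# Search all truth assignments for a case where A -> B holds
# but B -> A fails; its existence shows A => B does not give B => A.
counterexamples = [(a, b) for a, b in product([False, True], repeat=2)
                   if implies(a, b) and not implies(b, a)]
print(counterexamples)  # the single case A = False, B = True
```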

> despite the evidence that it does not show the
> specificity required to be a cause of *adaptive* mutation?

ID has no "specificity" requirement for mutagenic response to a
challenge. Only that the mutagenic response is net favorable response
to a challenge.

> The community of real scientists took the Cairns phenomena
> seriously, studied it, analysed it, and found out that it
> was not due to *adaptive* mutation after all.

But it did confirm ID theory, since ID does not require the most
economical favorable response, silly semantic games notwithstanding.

>> The Cairns outcome summary -- the neo-Darwinists are
>> left clinging to a straw: the intelligent agency is
>> not perfect, it is not omnipotent and it is not invisible
>> (its functioning is not undetectable).
>
> Your "intelligent" agent is, however, indistinguishable from
> an agency that produces random (wrt need) mutations that
> then get selected among. So, in that way, it is invisible.
> It is indistinguishable from random (wrt need) mutation.

It is quite distinguishable and visible. That was the whole point of
the experiment and its results. It shows that the favorable double
mutation has a much higher rate under the challenge (for which it is
favorable) than the random rate of the same double mutation measured
with that challenge absent. That's why the whole uproar.

> There is no evidence of adaptive mutation that requires
> an unseen *intelligent* agent.

ID does not have a postulate requiring that the "intelligent agent"
must be invisible. That's another strawman you just made up. There is
evidence of a response via a favorable mutation, which is what ID says
exists.

> It can all be explained at the experimental molecular
> level where we can see that all the mutation observed
> is non-adaptive and non-spooky, with selection merely
> determining which mutants survive, not inducing or
> causing them.

You seem to be making up new requirements as you go, ever more ridiculous.
ID does not require that the implementations of the mutagenic responses
must be "spooky" or "hidden" or that subsequent 'natural selection' is
prohibited. It seems you are still arguing with Pat Robertson.

Just because you can find out how the intelligent mutagenic response
is implemented (carried out), that does not mean that it ceases to be
an intelligent response (the phenomenon of a net excess of favorable
outcomes resulting from the mutagenic response). After all, you can
take a chess program and examine its source code -- does it mean it has
lost its foresight if you can figure out how it works? Will humans
cease to be 'intelligent agents' as soon as we have a more detailed
description of what the brain does when humans solve problems? Is the
loss of 'intelligent agent' status sudden, or does it fade away
gradually, going down a notch with each new datum on how the brain
works?

Your argument is analogous to a thief explaining to the judge, "No,
your honor, I didn't steal that watch. It was just my neurotransmitters
which activated the motor centers in my brain, which then sent
electrical pulses to the nerves in my hand, causing muscles to move the
watch molecules into the vicinity of my pocket molecules. See, there is
no stealing, just electric pulses and molecules doing their little
things."

Deadrat

unread,
May 15, 2006, 6:07:10 PM5/15/06
to

"nightlight" <night...@omegapoint.com> wrote in message
news:1147722875....@j33g2000cwa.googlegroups.com...
<snip>

In the category: OK, so the Intelligent Designer muffed the pop quiz,
but he's hoping his midterm grades bring up his average:

> Further, even if one were to augment the experiment and compare the two
> mutation rates, and if one were to find no difference, the only
> implication is that the particular intelligent agency of that strain of
> bacteria isn't equipped to deal with that particular challenge. This
> would be the same phenomenon as an unprepared student taking a multiple
> choice test and doing no perceptibly better than chance (within some
> convention for error margins). The same student may do quite well on
> some other test.
> The absence of evidence for a phenomenon in small fraction of instances
> is no evidence of absence of phenomenon in all instances. The LD
> experiment didn't even get to the point to demonstrate the absence of
> evidence (for different rates of mutations, since it didn't compare the
> two mutation rates) in that single instance, much less show the
> evidence of absence (of such difference generally).
>

<end Chez Watt nomination>

Why don't their heads just explode from all the cognitive dissonance?
The experiment doesn't show any evidence of an Intelligent Designer?
Must mean the Intelligent Designer just wasn't in the mood to design
that day. Or maybe he was home sick or something.

Deadrat

<snip>

Deadrat

unread,
May 15, 2006, 6:17:19 PM5/15/06
to

"nightlight" <night...@omegapoint.com> wrote in message
news:1147722875....@j33g2000cwa.googlegroups.com...
<snip>

<snip>

>
>> In order to demonstrate that mutations are "guided" you
>> have to demonstrate that their appearance is correlated
>> with and in response to (or in anticipation of) need.
>
> That's what Cairns found.
>
>> But the real experiments that the finding produced
>> showed that what was happening was simply a change in
>> mutation *rate*, not a change in mutation specificity.
>>
>
> Now you moved the goalposts from your previous sentence. The Cairns
> found that the appearance of favorable mutation is correlated with and
> in response to the need, which was the goalpost in the first sentence.
> The net effect of the mutagenic response is increased fitness. But, now
> you suddenly insist that the response also must not have any side
> effects (such as increased rate of other, including harmful mutations).
> As explained, this new requirement is fine if you're trying to refute
> Pat Robertson and his perfect designer. But, as explained, it is non
> sequitur regarding the ID theory.

This isn't a sudden insistence. If the increased favorable mutations come
about from an increased mutagenic rate, then your Intelligent Designer
is forever hidden behind the statistics. Especially since you can claim
that the absence of an IDiotic signal is meaningless as evidence since
it just might mean that the Intelligent Designer was home sick that day.
IDiocy is non-refutable because it's not a scientific notion. Any evidence
is compatible with it.

Deadrat

AC

unread,
May 15, 2006, 6:38:17 PM5/15/06
to
On 9 May 2006 14:50:06 -0700,
Vend <ven...@virgilio.it> wrote:
> Natural selection can't explain the origin of life. It can explain the
> evolution of living beings but not the transition from inanimate to
> living systems, because to be selected, a system must be capable of
> reproduction, so must be alive.

I think that's carrying it too far, mainly because it limits reproduction
to chemical systems that we consider "alive".

--
Aaron Clausen
mightym...@gmail.com

Vend

unread,
May 15, 2006, 7:02:43 PM5/15/06
to
I thought that you were asserting that the Intelligent agent/process
acted by biasing the dna mutations, increasing the likelihood of
favourable outcomes.

So merely increasing the rate of mutation doesn't do the job, just as
tossing a coin more often doesn't change the probabilities of the
outcomes.

No experiment produced any evidence that suggest that the mutations are
biased towards favourable outcomes.

So the Occam's razor cuts your Intelligent Designer head.

nightlight

unread,
May 15, 2006, 10:11:17 PM5/15/06
to
> I thought that you were asserting that the Intelligent
> agent/process acted by biasing the dna mutations, increasing
> the likelihood of favourable outcomes.

That is precisely what the Cairns experiment shows. There is no ID
postulate that this net increase has to be the maximal gain
conceivable, much less that it must be achieved in arbitrary time and
with the population sizes given, or in every try. The intelligent
agency simply does the
best it can do within its computational limits and with the resources
and time available. The outcome of its attempts is probabilistic, with
a probability distribution over fitness values, where each resulting
fitness value has some probability (the distribution can be obtained
empirically from multiple runs of the experiment). Hence there may well
be nonzero probabilities in the negative fitness region, as it happens
in the Cairns experiment.

The requirement ID needs to meet in order to claim the existence of an
intelligent agency guiding the mutagenic response is that the
expectation value FI of the fitness over this probability distribution
is greater than the corresponding expected fitness FR under the
empirical mutation rates measured in the absence of the challenge.

If FI > FR, we have detected a benevolent ID; if FI = FR, we have a
random mutagenic response under the challenge (the RM conjecture, i.e.
no ID detectable in the setup); and if FI < FR, we have detected a
malevolent ID.
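As a minimal sketch of this decision rule (the fitness samples below
are invented placeholders for repeated runs of the experiment, and a
real analysis would include a significance test rather than a bare
comparison of means):

```python
from statistics import mean

def classify_response(fitness_challenged, fitness_baseline):
    """Compare the expected fitness FI measured under the challenge with
    the expected fitness FR implied by the no-challenge mutation rates.
    (A real test would add error bars; this sketch just compares means.)"""
    fi, fr = mean(fitness_challenged), mean(fitness_baseline)
    if fi > fr:
        return "FI > FR: benevolent ID detected"
    if fi < fr:
        return "FI < FR: malevolent ID detected"
    return "FI = FR: indistinguishable from random mutation"

# Fitness samples below are invented, standing in for repeated runs.
verdict = classify_response([0.9, 1.1, 1.4, 1.2], [0.2, 0.4, 0.3, 0.5])
print(verdict)
```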

> No experiment produced any evidence that suggest that
> the mutations are biased towards favourable outcomes.

You must mean something else above than the "presence of intelligent
agency" criterium described above. Perhaps you require some more
perfect 'intelligent agency' than what the regular ID requires (as
described above). You're welcome to explain or sketch what is the
algorithm which computes whether (and by how much) "the mutations are
biased towards favorable outcome" for your variant of the intelligent
agency.

z

unread,
May 16, 2006, 1:00:45 AM5/16/06
to
On 15 May 2006 19:11:17 -0700, "nightlight"
<night...@omegapoint.com> wrote:

>> I thought that you were asserting that the Intelligent
>> agent/process acted by biasing the dna mutations, increasing
>> the likelihood of favourable outcomes.
>
>That is precisely what Cairns experiment shows. There is no ID
>postulate that this net increase has to be maximal gain conceivable,
>much less that it must be achieved in arbitrary time and population
>sizes given or in every try. The intelligent agency simply does the
>best it can do within its computational limits and with the resources
>and time available. The outcome of its attempts is probabilistic, with
>a probability distribution over fitness values, where each resulting
>fitness value has some probability (the distribution can be obtained
>empirically from multiple runs of the experiment). Hence there may well
>be nonzero probabilities in the negative fitness region, as it happens
>in the Cairns experiment.
>
>The requirement ID needs to meet in order to claim the existence of an
>intelligent agency guiding the mutagenic response is that the
>expectation value FI of the fitness over this probability distribution
>is greater than the corresponding expected fitness FR under the
>empirical mutation rates measured in the absence of the challenge.
>
>If FI > FR, we have detected a benevolent ID, if FI = FR we have a
>random mutagenic response under the challenge (RM conjecture or no ID
>detectable in the setup) and if FI < FR we have detected a malevolent
>ID.
>

The Cairns experiments have been shown to result from an increase in
the number of random mutations in stressed cells. So, from your
assertion, this is strong evidence of a lack of design since there is
no skew toward the mutations that would be expected to increase the
fitness of the bugs.

And your comments comparing RpoS to a brain are simply laughable.
RpoS is a switch between two states. If your pixie can jump in and
change the correct genes at will, it most certainly can grab another
sigma factor if it is not allowed to bring its own tools to the job
site. After all, this is an agent that must be able to recognize DNA
in order to find the "right" gene.

As far as your complaint that the mutation rate in LD was not
determined in the absence of phage, for that particular experiment this
is indeed true. The fact that the counts follow a Poisson distribution
is a powerful hint that this must be due to chance. Since this is a
rapid and lethal selection, the only explanation that fits with the
biochemistry is
that these chance events had to occur prior to plating. The other
alternative is your magic pixies bravely defending these E. coli from
the evil scientists and their thuggish phage goons. Notice the
biochemistry bit. We can measure this stuff.
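The fluctuation-test logic behind the LD design can be sketched in a
toy simulation (all parameters invented): if resistance were induced
only at exposure, per-culture counts would be Poisson-like (variance
near the mean), while pre-existing mutations arising during growth
produce jackpot cultures (variance far above the mean):

```python
import numpy as np

rng = np.random.default_rng(0)
GENS = 20       # doublings per culture (~1e6 cells at plating); invented
MU = 1e-5       # mutation probability per cell division; invented
CULTURES = 1000 # parallel cultures plated on phage

# Hypothesis A (induced at exposure): each culture's resistant count is
# Poisson with mean final_cells * MU, so variance is close to the mean.
final_cells = 2 ** GENS
induced = rng.poisson(final_cells * MU, size=CULTURES)

def grown_culture():
    # Hypothesis B (pre-existing): a mutant arising in generation g
    # leaves 2**(GENS-g-1) resistant descendants at plating. Simplified:
    # further mutation within resistant clones is ignored.
    resistant = 0
    for g in range(GENS):
        new_mutants = rng.poisson((2 ** g) * MU)
        resistant += new_mutants * 2 ** (GENS - g - 1)
    return resistant

pre_existing = np.array([grown_culture() for _ in range(CULTURES)])

ratios = {}
for name, counts in [("induced", induced), ("pre-existing", pre_existing)]:
    ratios[name] = counts.var() / counts.mean()
    print(name, "variance/mean =", round(float(ratios[name]), 1))
```

The dispersion (variance-to-mean ratio), not the mean itself, is what
separates the two hypotheses, which is the point the replica-plating
and fluctuation analyses turn on.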

On the one side we have experimental evidence that indicates that the
basal mutation rate in E. coli is in line with what we expect from the
biology of replication and repair. On the other we have a belief that
there are supernatural agents that like to just make it look that way.

>> No experiment produced any evidence that suggest that
>> the mutations are biased towards favourable outcomes.
>
>You must mean something else above than the "presence of intelligent
>agency" criterium described above. Perhaps you require some more
>perfect 'intelligent agency' than what the regular ID requires (as
>described above). You're welcome to explain or sketch what is the
>algorithm which computes whether (and by how much) "the mutations are
>biased towards favorable outcome" for your variant of the intelligent
>agency.

So your agent gets pressed for time? Maybe they are union pixies and
have to take mandatory breaks. Or are you saying the bar is so
moveable you can never disprove the pixies existence?

Just for grins, tell me how you would do the LD to actually prove the
pixies exist. Don't go on about which fitness value would be higher,
enlighten us with a real metric. Step A, step B etc. Give concrete
examples of what you aim to test and how you'd measure it- I want
defined metrics. If you come back with "FI > FR" without defined
metrics for those fitness values, you've failed to answer the
question. Since you evidently have limited grasp on the biology, feel
free to ask questions about E. coli, selective agents, assays, etc.

Since my first guess (probably wrong) is that you will want to
sequence large numbers of E. coli chromosomes in the absence of
selection, I'll throw out some conservative numbers for you to model.
We will assume a genome size of 4 E6 bp for E. coli, our predictive
basal mutation rate of 1 E-8 per genome per generation, a sequencer
error rate of 1 E-4 per base per run, and $0.01 per base. The
sequencer error rate and cost are off the charts good in this model.
For throughput, assume 1 E5 bp per sequencer per day. I dunno if we
can get pixies to run the damn things, but assume one person can run
10 sequencers, and that's all you need. Ballpark infrastructure costs
at 2X your final result- and that's a real steal.

And that's your baseline. Since you don't seem to like biological
readouts, you are gonna have to do that for every experiment, and
every test.
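For reference, the arithmetic implied by those quoted figures can be
laid out explicitly (a sketch using only the numbers given in the
post):

```python
# Back-of-the-envelope using only the figures quoted above (the post's
# assumed model numbers, not measured values).
GENOME_BP = 4e6        # E. coli genome size, bp
MUT_RATE = 1e-8        # assumed basal mutations per genome per generation
SEQ_ERR = 1e-4         # sequencer errors per base per run
COST_PER_BASE = 0.01   # dollars per base
BP_PER_DAY = 1e5       # throughput per sequencer per day

cost_per_genome = GENOME_BP * COST_PER_BASE   # dollars to read one genome
days_per_genome = GENOME_BP / BP_PER_DAY      # sequencer-days per genome
false_calls = GENOME_BP * SEQ_ERR             # expected sequencing errors

print(f"${cost_per_genome:,.0f} and {days_per_genome:.0f} sequencer-days "
      f"per genome, with ~{false_calls:.0f} false calls against "
      f"{MUT_RATE} expected real mutations per genome per generation")
```

On these assumptions the sequencing noise swamps the signal by many
orders of magnitude, which is the thrust of the cost argument above.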

B Miller

nightlight

unread,
May 16, 2006, 2:01:44 AM5/16/06
to
>> But, now
>> you suddenly insist that the response also must not have any side
>> effects (such as increased rate of other, including harmful mutations).
>
> This isn't a sudden insistence. If the increased favorable mutations
> come about from an increased mutagenic rate, then your Intelligent
> Designer is forever hidden behind the statistics.

You are one response cycle behind, since the very post you cite
answers that very objection a few paragraphs later.

----- Excerpt:

It is quite distinguishable and visible. That was the whole point of
the experiment and its results. It shows that the favorable double
mutation has much higher rate under the challenge (for which it is
favorable), than the random rate of the same double mutation measured

with that challenge absent. That's why the whole uproar.
------------

> Especially since you can claim that the absence of an IDiotic
> signal is meaningless as evidence since it just might
> mean that the Intelligent Designer was home sick that day.

That question was also dealt with in that same post. ID does not
postulate its 'intelligent agency' (or agencies) as an omniscient,
omnipotent entity. It simply points out that the RM conjecture has
overlooked an important 'intelligent agency' capable of initiating a
mutagenic response to a challenge, such that the response increases the
fitness of the organism compared to the non-response (i.e. where the
mutation rates are kept at their pre-challenge levels).

Obviously to test directly the fitness of undirected mutations under
the same challenge, one needs to reverse engineer the mutagenic
response mechanism (which changes the mutation rates from the baseline
state, the state in the absence of challenge) and to disable that
mechanism, so the mutation rates remain at the baseline even in the
presence of challenge. Then you compare the resulting fitnesses and
establish whether the original mutagenic response increases the
fitness, at least on average. If it does, as shown in the Cairns
experiment and its followups, you have demonstrated the existence of an
intelligent mutagenic response. "Intelligence" here means only that
under a
challenge some (intelligent) agency can respond by triggering mutations
which improve organism's fitness under that challenge compared to the
non-response fitness.

This doesn't imply that the response must be perfect and improve the
fitness to the theoretical maximum, or work every time, or under every
conceivable challenge, or under any time and population size
constraints, or that the mechanism implementing the response must be
hidden/undetectable or tamper-proof... (to list a few of the usual ND
strawman arguments). All that ID says is that the mutagenic response
will, on average, increase the fitness for that challenge. The
neo-Darwinian "random mutation" conjecture of evolution prohibits such
favorable mutagenic responses. The Cairns experiment with its followups
shows that ID is correct here.

Whatever label you attach to this 'mutation-improvement agency',
"intelligent" or something else, the point is that it exists and that
it performs as claimed.

nightlight

unread,
May 16, 2006, 2:54:02 AM5/16/06
to
> since there is no skew toward the mutations that
> would be expected to increase the fitness of the bugs.
...

> If your pixie can jump in and change the correct genes
> at will, it most certainly can grab another sigma factor
> if it not allowed to bring it's own tools to the job site. After
> all this is an agent that must be able to recognize DNA
> in order to find the "right" gene.
...

> The fact that the counts follow a Poisson distribution is a powerful
> hint that this must be due to chance.
...

> the only explanation that fits with the biochemistry
> is that these chance events had to occur prior to plating.
...

> On the other we have a belief that there are supernatural
> agents that like to just make it look that way.
...

> Just for grins, tell me how you would do the LD to actually
> prove the pixies exist.

... and so on...

You have recycled nearly every strawman tossed into the debate so far
by the desperate ND defenders (including yourself). Well, since all of
these were disposed of, several times each, in the earlier posts, you
can go and have a tea party with your strawmen all by yourself. Sorry,
but dancing in circles with strawmen, 'round and 'round, isn't
something I care to spend time on. When you're done with your little
party and if you happen to come up with some more pertinent
counterarguments to the earlier posts, you are welcome to resume the
discussion. (I don't hold grudges, so don't worry about having to
apologize.)

hersheyhv

unread,
May 16, 2006, 9:50:43 AM5/16/06
to
nightlight wrote:
> >> The LD does not compare rates of this mutation with
> >> and without phage challenge, which is what an RM
> >> conjecture test would require.
> >
> > The clear evidence from these experiments is that
> > mutation to resistance occurs during the time when
> > cells are NOT exposed to the selective agent (not
> > after exposure) and that the rate of such mutation
> > is uncorrelated to whether or not the population
> > will be exposed to the selective agent.
>
> That a mutation can occur now which later may turn out to be favorable
> in some environment is trivial.

That it does it *all* the time is not trivial. It tells us that
mutation is random wrt need.

> You don't need an experiment to know
> that. For example, having a hand with dexterous fingers is quite
> advantageous for typing on computers (e.g. compared to paws or hooves),
> even though the mutations resulting in that trait occured long before
> there were computers.

IOW, having an environment with computer keyboards designed for ten
finger typing is not the cause of the adaptive mutations that generated
a hand with dexterous fingers. [In this poor example, however, even
dolts like me can recognize that the design of computer keyboards has
more to do with the physical structure of the designer and user; that
is, the fact that we have ten fingers determined the design of computer
keyboards rather than the other way around.] OTOH, having a mutation
to antibiotic resistance in non-selective environments specifically
means that that mutation is occurring in the absence of need for it.
You are essentially saying that these mutations that occur in the
absence of need are trivial *despite* the fact that they are the only
types of mutations we actually observe.

> The problem is that LD does not compare rates of mutation before and
> after exposure to phage.

The rate after is essentially zero, since most bacteria are dead meat
after exposure to phage. A small percentage of sensitive cells might
survive a round or two of replication and undergo mutation, but it
would be quite unusual for any *new* mutations to arise. And, of
course, there is no mechanism to ensure that the new mutations will
occur specifically and differentially in the specific resistance gene.
The replica
plating variant of LD shows that the resistant cells were essentially
*all* present before selection. Resistance was specifically shown NOT
to be a response that occurred *after* (as a rare response to) exposure
to the selective agent. It had occurred before exposure to the
selective agent, as demonstrated by the fact that the *same* colonies
are resistant in duplicate replica platings. *If* exposure to the
selective agent had *any* effect in causing or inducing resistance, one
would expect different colonies (the ones in which the resistance was
induced) to be resistant in each replica plating. That is not what we
see. This has nothing to do with rates. It has to do with when the
mutation occurred: before or after being in the selective environment.
And it has to do with *differential* mutation (specifically more of the
specific needed mutation than of all the other unneeded mutations -- an
increase in *all* mutations of the type needed is not *differential*
adaptive mutation) under those circumstances. LD shows unequivocally
that mutation for strongly selected traits (like phage resistance and
antibiotic resistance) occurs *before* the strong selection, making the
question of *differential* mutation moot. What the Cairns phenomenon
did was test for the possibility of *differential* mutation when the
selective pressure was weaker and slower. Cairns observed an increase
in the number of resistant colonies that allowed for the *possibility*
of differential mutation of needed mutants. The further analysis of
what caused this phenomenon showed that what was happening was not a
*differential* mutation of needed mutants after exposure to the weaker
selective agent, but rather a non-differential increase in mutations of
all types (a raising of all boats by the rising tide, so to speak,
rather than the raising of the single one you need by a crane) due to a
previously unexpected stress response. A non-differential increase in
mutations in such a hypermutagenic state is no different from the fact
that you get more mutants when you expose your bacteria to mutagens
than when you don't. Mutagens do not *specifically* and
*differentially* produce only the mutants you need. They increase all
kinds of mutations (which kinds depend on the specific chemistry or
physics of the mutagen), including the one that is needed.
Selection weeds out all the other mutants.

> They only compare reproduction rate of mutants
> (via statistics of resistant colonies in the 'descendant' dishes) with
> the rate of mutation during the exposure. In this instance, the
> mutations were much slower (even though there was an exposure) than the
> reproduction of mutants.

This is not relevant to the plating experiments. They directly
demonstrate that the mutations in these strong selective conditions are
due to mutations that have occurred in the absence of need for the
mutation. Selection does not cause specific mutations (and usually
doesn't cause any mutation at all). It determines which mutations
survive.

> So, how would you know whether the
> introduction of phage challenge changed the rates of favorable
> mutations. You need to know what these rates were before and after the
> introduction of challenge and compare the two before you can declare
> that they didn't change. Only then you can say that they didn't
> increase in response to phage, if that is what the two rates show.

There are no "two rates" with strong selection. Only the previously
existing mutations present prior to selection survive. And, again, to
demonstrate non-randomness, you need not only an increase in the amount
of selectant mutants, but also specificity of mutation. Merely
changing the overall *rate* of mutation is NOT adaptive or specific
mutation induced by selection.

> Further, even if one were to augment the experiment and compare the two
> mutation rates, and if one were to find no difference, the only
> implication is that the particular intelligent agency of that strain of
> bacteria isn't equipped to deal with that particular challenge.

You obviously do not understand even the basic LD experiment. What
determined that the mutations had occurred prior to selection rather
than after is not a change in rate after selection (there was none).
It is a dispersion in frequency distribution that matters. If specific
mutation is induced by selection, then we should get roughly the same
frequency of mutants whether the initial culture started with a large
or small inoculum because the same number of cells, in each case, are
being exposed to the selective agent which is supposed to be inducing
the mutation. That is because, as far as being exposed to the
selective environment, all the samples, regardless of origin, are
equivalent at the time of exposure. You would expect a narrow
bell-shaped curve of distributions.

If, OTOH, the mutations are due to random generation *before* exposure
to the selective agent then the size of the initial inoculum *does*
matter. Not in the mean rate (which would be and is the same in both),
but in the distribution of numbers of mutant frequencies. That is what
the skewed, non-Poissonian distribution and the occasional "jackpot"
tubes show.
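The contrast between the two hypotheses can be sketched in a toy fluctuation-test simulation (parameters are invented for illustration, not Luria and Delbruck's actual numbers). Each culture grows from one cell through G doublings with a per-division mutation probability mu:

```python
import random

def induced_culture(final_size, mu, rng):
    # Hypothesis A: selection induces mutation at plating time, so every
    # plated cell has the same independent chance mu of being a mutant.
    return sum(1 for _ in range(final_size) if rng.random() < mu)

def spontaneous_culture(generations, mu, rng):
    # Hypothesis B: mutations arise at random during growth, before any
    # selection; an early mutant founds a large clone (a "jackpot" tube).
    # Rough bookkeeping: mutant clones simply double each generation.
    mutants = 0
    for g in range(generations):
        dividing = 2 ** g
        new_mutants = sum(1 for _ in range(dividing) if rng.random() < mu)
        mutants = mutants * 2 + new_mutants
    return mutants

def var_over_mean(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / m

rng = random.Random(1)
G, mu = 14, 1e-3                      # final culture size 2**14 cells
induced = [induced_culture(2 ** G, mu, rng) for _ in range(50)]
spont = [spontaneous_culture(G, mu, rng) for _ in range(50)]

# Induced mutation predicts variance ~ mean (narrow, Poisson-like);
# pre-existing random mutation predicts variance >> mean (jackpots).
print(var_over_mean(induced), var_over_mean(spont))
```

The variance-to-mean ratio is the telltale statistic: near 1 if selection induces the mutations, far above 1 if the mutations predate selection.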

> This
> would be the same phenomenon as an unprepared student taking a multiple
> choice test and doing no perceptibly better than chance (within some
> convention for error margins). The same student may do quite well on
> some other test.
> The absence of evidence for a phenomenon in small fraction of instances
> is no evidence of absence of phenomenon in all instances.

We are not talking about a small fraction of instances. We are talking
about a highly repeatable phenomenon which has been tested and
re-tested every which way to Sunday. The plating versions are
particularly telling, since they do not require even the knowledge of
the difference between rate and distribution that you seem to lack.

> The LD
> experiment didn't even get to the point to demonstrate the absence of
> evidence (for different rates of mutations, since it didn't compare the
> two mutation rates) in that single instance, much less show the
> evidence of absence (of such difference generally).
>
>
> > Changing the *rate* of mutation generally (or for
> > specific types) is not evidence against the randomness
> > of mutation wrt need.
>
> Of course it is. If the result of such response (increased mutation
> rates for some or all sites) is statistically significant increased
> fitness in that environment (compared to the fitness in that
> environment for unchanged or lowered mutation rates), then the response
> was guided by a (benevolent) intelligence.

Then all your 'benevolent' intelligence would need to do is knock out
all our repair systems and add lots more mutagens to our environment.
That would increase the overall mutation rate. So, what the hey, let's
destroy the ozone layer and increase our mutation rate. That would be
beneficial. The problem with increasing all mutations is that you wind
up with all types of mutations and not just the ones you need. I don't
know about you, but I can see a downside to too much mutation. And
unless you have *differential* mutation of the needed mutation, you can
hardly call the increase in the needed mutation by increasing all kinds
of mutation an "intelligent" response. And, of course, unless you have
*differential* mutation of the needed mutation, you would still have
that mutation occurring at random wrt need. You need to demonstrate
*specificity* to claim that the selective environment is interacting
with the genome to generate *specifically* the mutations needed to
survive in that environment. A mere increase in overall mutation rate
is no more "intelligent" a way to get more of the desired mutation than
getting zapped by X-rays or exposed to ethyl methanesulfonate (not that
I would recommend either) is. No intelligent *choice* or
*differentiation between options* is being made. There is merely more
changing of everything, in the expectation that creating more changes
will increase the number of changes the environment will select.

Think of mutation as car repair by taking a monkey wrench and randomly
banging on the motors of a thousand cars ten times each.
Occasionally, but not too often, doing so will cause a rare car to run
better in a specific environment. Some of the cars will be destroyed,
many will run worse, and most will probably be unchanged (engines are
pretty sturdy). But you get to test the motors and choose the ones
that work best.

What you are calling "intelligent" mutation in the Cairns phenomenon is
car repair by taking the monkey wrench and banging on the motors of a
thousand cars a hundred times. Doing so will, in some cases, cause an
increase in the frequency of the change needed so that more cars may
run better (and more will have been permanently destroyed and more will
run worse, with fewer remaining unchanged). Because you get to choose
just the ones that work, you think you are pretty smart because you now
have more cars that run better *after* selection.

Doing the same thing a thousand times (more mutation, more, more I say)
per car, however, will result in a *decrease* in usable cars and
improved cars. Too much random mutation is not a good thing.
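The monkey-wrench analogy above has a simple arithmetic core, sketched here with invented per-bang probabilities: each bang has a small chance of making the one change that helps and a larger chance of wrecking the engine outright.

```python
# Invented illustrative numbers, not measured rates:
p_fix, p_break = 0.01, 0.002

def fraction_improved(n_bangs):
    # improved AND still running: got the helpful change at least once,
    # and never took a wrecking hit
    got_fix = 1 - (1 - p_fix) ** n_bangs
    survived = (1 - p_break) ** n_bangs
    return got_fix * survived

few, some, many = (fraction_improved(n) for n in (10, 100, 1000))

# A moderate number of bangs beats both extremes: 100 bangs yields more
# improved, running cars than 10, but 1000 destroys more than it fixes.
print(few, some, many)
```

The function rises and then falls with the number of bangs, which is the point of the paragraph: too much random mutation lowers the yield of improved survivors even though it raises the raw number of changes.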

Real mutation by *intelligent* agency would diagnose the problem and
bang on the specific part that causes the improved function. Bang just
once, not everywhere, but *differentially* only where you get the
result you need. That, alas, is not the way that *real* mutation
works. And it isn't the way that the Cairns phenomenon works.

> The response clearly
> increased the rate of mutation favorable in that environment. It is
> irrelevant whether there are also some side effects (such as excess of
> unrelated harmful mutations), as long as the net probability of
> survival and propagation is increased. After all, most intelligently
> guided actions in every other realm tend to have tradeoffs and negative
> side effects. ID doesn't postulate perfect intelligence.

A non-differential increase in mutation rate cannot be considered to be
due to non-random mutation, precisely because it is non-differential.

> The pertinent characteristic of an intelligent response is that the
> response increases the chances of survival under a given challenge
> (compared to non-response under the same challenge). You are simply
> trying to shift the goalposts by declaring that this increased fitness
> wasn't achieved in the most economical way conceivable (via exclusive
> single site mutation), hence the response didn't increase the fitness
> to its absolute maximum value, so you won't call it an intelligent
> response.

I am not shifting the goal posts. Intelligent or adaptive response
requires the *differential* production of the desired mutant.
Otherwise the mutation is still produced at random wrt need. Unless
you consider mutagens to be intelligent agents.

> With that kind of shifty criteria, a student who doesn't make
> 1600 on SAT cannot be classified as an "intelligent agency".

A 1600 on the SAT only indicates that the student is more intelligent
(the analogue of a mutant being differentially produced according to
need) if most people do not score 1600. If everyone scores 800 on a
test, you cannot
claim that any one student is smarter than any other. If everyone
scores 1600 on a test, you cannot claim that any one student is smarter
than any other. What your claim requires is differential production of
the desired mutant *relative to the production of other mutants*
according to need for that mutant after selection. A mere increase in
the amount of the desired mutant is not the same thing as saying that
that mutation is *specifically* being increased relative to other
mutants.

> Similarly
> any drug or medical procedure with any side-effects cannot be
> classified as result of foresight. When you are forced to slide down
> into that kind of sophistry, you've lost the argument.

You are the one engaged in sophistry. You hand wave vague ramblings
about QM when we are dealing with phenomena on an entirely different
scale. You claim that mere increase in mutation rate is the same thing
as specific action by an intelligent agent. *That* is sophistry. You
also misunderstand the point of the LD experiments (and apparently are
ignorant of the extensions of it) and purposely ignore the work that
has explored the Cairns phenomenon and explained it correctly. That is
either ignorance or willful blindness.

> > Further analysis of the biological mechanism showed that
> > what was really happening was a non-specific increase
> > in mutation rate in desperate dying cells. But a
> > non-specific increase in the *rate* of mutation to
> > resistance is not what you need. What you need is
> > a *differential* rate increase *specific* to the
> > needed resistance.
>
> Why? What is exactly the hypothesis that you're trying to refute with
> that particular threshold for favorable response? A perfect omniscient
> and omnipotent intelligence? To show that Pat Robertson is wrong?

Showing a *differential* rate increase is *required* for the result to
be a specific response to a specific selective requirement. And a
specific response is required for the response to indicate intelligence
that can distinguish different ways of responding. Merely increasing
the amount of a dumb, blind, random mutation unrelated to the specific
threat is not an intelligent response.

> ... I
> though your goal was to refute ID. You're certainly not refuting the
> existence of the intelligent mutagenic response (the increased
> probability of favorable mutation wrt that environment), which is in
> fact what Cairns found.

The same thing can be done by adding mutagenic chemicals. How does the
action of mutagenic chemicals (which also increases mutation rates)
represent an "intelligent" non-random process?

> >> You seem to be confusing the fact that mutation doesn't
> >> arise sharply as all or none (but rather with some
> >> probability PM < 1), with the RM conjecture.
> >
> > Can you explain what you think you mean by the above?
> > Mutation is a rare event.
>
> I mean that he and other ND defenders here argue that since there is a
> chance that a favorable mutation (for given challenge) may not occur in
> the time & with the population sizes available, then it is "random"
> mutation, hence these mutations are "random", hence the "random
> mutation" conjecture holds here. The error is of the same trivial kind
> as arguing that since a loaded dice doesn't yield a favorable outcome
> on each roll, the outcome of a roll is random, hence the dice roll is
> random, hence the dice is not loaded.

The argument is whether or not *specific* desirable mutations (as
opposed to all mutations) occur as a *consequence* of need. That is,
whether there is some mechanism that can recognize the need for a
particular mutation and can specifically and intelligently produce
*that* mutation. That requires something more than merely a change in
the rate of all mutations. It requires *differential* production of
specific mutations dependent upon the type of selective environment
they are exposed to.

> > we have consistently found (even in the Cairns-like examples,
> > once fully explored) that mutation is random wrt need and
> > is not a specific adaptive response to a specific
> > selective agent.
>
> Why does mutagenic response have to be "specific" to the challenge?

That is what YOU have proposed. That mutation is non-random wrt need.
For mutation to be non-random wrt need it must be *specifically*
increased when needed. Not indirectly increased because all sorts of
mutations (including those not needed) are increased. Nothing less is
required for "intelligently directed" mutation.

> That is not an ID hypothesis. ID only requires that the mutagenic
> response is favorable under the given challenge.

An increase in the frequency of a mutagenic response can be either
random or non-random. If the increase is a consequence of an increase
in all mutations, the mutation is still random mutation not
*differentially* related to the selective conditions.

If the increase in frequency is a consequence of the specific inducing
agent leading to a *differential* increase in the specific mutation
needed to survive in that selective environment, then the mutation is
non-random wrt need. That is the mutation is *differentially* or
*intelligently* related to the selective conditions.

All the evidence tells us that mutation is random and is not
*differentially* or *intelligently* related to the selective
conditions. The selective conditions merely choose among mutations
that occur without any differential relationship or attention to the
need for those specific mutations.

> After all, intelligent
> agencies of all kinds routinely perform tradeoffs in which they decide
> on a simpler more generic response in favor of the most specific
> response which may be more economical on resources and have fewer side
> effects. That's simply a matter of choice of weights being assigned to
> simplicity and quickness vs economy of resources and complexity of a
> solution. ID has no particular a priori commitment to some particular
> weights.

How is doing more of the same dumb and blind thing "intelligent" or
"guided"?

> > In order to demonstrate that mutations are "guided" you
> > have to demonstrate that their appearance is correlated
> > with and in response to (or in anticipation of) need.
>
> That's what Cairns found.

No. It was a *possibility* that his experiment raised. Further work
demonstrated that the mutations were not guided by the selective
conditions but only randomly (wrt the selective need) increased by an
error-prone mechanism. No "guiding" was done. No "intelligent"
discrimination was done. Nothing was done that adding a mutagen could
not also do.

> > But the real experiments that the finding produced
> > showed that what was happening was simply a change in
> > mutation *rate*, not a change in mutation specificity.
> >
>
> Now you moved the goalposts from your previous sentence. The Cairns
> found that the appearance of favorable mutation is correlated with and
> in response to the need, which was the goalpost in the first sentence.

Cairns found an *increase* (not "the appearance of") in the *frequency*
of favorable double mutants, to be specific, when the selective
pressure was not rapidly lethal. Again, an *increase in frequency* in
mutants does not mean that the specific beneficial mutants are guided
or intelligently increased *relative* to all other mutations. Further
experiment showed that specific guidance or specific
intelligent increase in the frequency of beneficial mutants was not, in
fact, the mechanism by which the *frequency* increased. Rather, the
frequency increased because of an increase in pure random non-specific
mutation rates. If you want to claim that a pure dumb blind stress
response that acts like a major dollop of mutagen is the sign of what
you consider a guiding intelligence, feel free. But don't even hint
that the mutations that occur are occurring in a non-random (or guided)
way in the Cairns phenomenon. Nor make claims that somehow the "junk
DNA" of bacteria (very little "junk" there) somehow "thinks" and causes
*specific* needed mutations.

Feel free to describe what the Cairns phenomenon *really* is: an
increase in overall mutation rate under stress, produced by activation
of a biochemical system involving an error-prone polymerase, with no
specificity or selective targeting of genes as a function of the need
for mutation in those genes. Merely a generalized increase in mutation
rates.

By your argument a river is more intelligently designed and less random
than a straight manufactured irrigation ditch because it has a larger
amount of water flowing past per hour.

> The net effect of the mutagenic response is increased fitness.

Fitness is not generally defined or determined by the *frequency* of
mutants in any population, but by how well the mutants do relative to
the non-mutants in a specified environment. Which is why they call it
"relative fitness". In this case, the revertant phenotype is fitter
than the mutant phenotype.

The net fitness of the population (which may be what you mean) is
the sum of the individual fitnesses of all its members divided by
population size. It is in extreme flux, changing dramatically each
generation in the case of the slower change of the Cairns phenomenon.
In the rapid cases, the fitness of the w.t. non-resistant changes from
1 to 0 in a single generation.

> But, now
> you suddenly insist that the response also must not have any side
> effects (such as increased rate of other, including harmful mutations).

It can have side effects. But the process still must *differentially*
affect mutation rates of the beneficial changes in order to be
non-random production of beneficial changes as a consequence of the
presence of the selective conditions.

> As explained, this new requirement is fine if you're trying to refute
> Pat Robertson and his perfect designer. But, as explained, it is non
> sequitur regarding the ID theory.

Except that you have no evidence of any sort of design or non-random
mutation. There is no *differential* effect of the selective
environment on the types of mutation that appear. The selective
environment is still only choosing among randomly generated mutations,
only a higher level of randomly generated mutations.

> >> The parameters (various rates) and the design of the
> >> LD experiment were simply not suitable for testing
> >> RM vs non-RM.
> >
> > For rapidly selecting agents, they most certainly were.
> > They (and the replica plating experiments confirmed it)
> > clearly demonstrated that mutation to resistance
> > occurred *prior to* any exposure to a selective agent
> > and not adaptively *in response* to these agents.
>
> That was already countered at the top. It is trivial that a mutation
> can occur now which will be useful later.

Do you have an argument or evidence that these mutations that occur now
do so *because* the cell (its "junk" DNA brain?) can foresee a future
need for them? It is trivial that a mutation can occur now which will
be harmful later as well. Does that mean that potentially harmful
mutations occur now *because* the same "intelligent" something in the
cell can foresee their future deleterious effects? In both cases, the
important point is that the mutations occur *now* without being induced
by the future selective conditions and without foreknowledge of them.
That is, all the mutations that occur before the selective conditions
are occurring at random wrt the future selective conditions. In the LD
experiments, these are the only mutations seen. The ones that occur
before selection. Because selection is rapid.

In the Cairns experiment we see some mutations that occur after
selection (selection is slower) and, as we now know, the rate of random
(non-specific) mutation is increased during this time of stress. But
the mutations are still random mutations and not mutations specific to
the selective conditions used. If I use a phe-, trp- double mutant
(and initially select for phe+ revertants, but not trp+ revertants) I
will get the same % of trp+ co-revertants among the phe+ revertants as
I would if I initially selected for both phe+ and trp+. That is,
whether or not I select for trp+, I will get the same % which means
that mutation to trp+ under these conditions is not correlated with
whether or not I need (select for) it.
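The phe-/trp- co-revertant argument above can be sketched numerically (reversion rates invented). Under random mutation, each cell independently reverts phe- and trp- regardless of what is selected, so the trp+ fraction among phe+ revertants matches the trp+ fraction overall:

```python
import random

# Invented per-cell reversion probabilities, for illustration only.
rng = random.Random(42)
p_phe, p_trp = 0.02, 0.02

cells = [(rng.random() < p_phe, rng.random() < p_trp)
         for _ in range(200_000)]

# trp+ fraction in the whole population vs among phe+ revertants only:
trp_overall = sum(1 for phe, trp in cells if trp) / len(cells)
phe_plus = [(phe, trp) for phe, trp in cells if phe]
trp_among_phe = sum(1 for phe, trp in phe_plus if trp) / len(phe_plus)

# If mutation to trp+ is random wrt need, selecting for phe+ neither
# enriches nor depletes trp+: the two fractions agree within noise.
print(trp_overall, trp_among_phe)
```

That agreement, whether or not trp+ is selected, is exactly the "same %" observation the paragraph describes: mutation to trp+ is uncorrelated with the need for it.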

> That doesn't logically
> exclude the possibility that the mutation may occur faster when it is
> needed.

Where is the evidence that any mutation occurs *differentially* faster
when that specific mutation is needed? If you want *guided* or
*designed* mutation or *adaptive* mutation, you need to differentially
produce the mutants you need, not just raise all the boats (increasing
the rate so that you get *more* completely random mutation). Raising
all the boats is not guided or specifically producing the mutations you
need.

> > Adding certain mutagens (but not the selective agent
> > and unrelated to it) can certainly increase the *rate*
> > of mutation to resistance prior to adding the selective
> > agent.
>
> So? The trick is that when that is consistently done to increase the
> odds of survival under a given challenge, then it is an intelligent
> mutagenic response to the challenge. That someone can also perform the
> same mutagenic action in a manner unrelated to the challenge is non
> sequitur.

So then, what it comes down to is the idea that because *some* cells
have a desperation move that increases the rate of random mutation when
they are dying, that that is evidence of intelligent design? What
biochemical pathway or process, then, couldn't be asserted to be the
result of intelligent design? The SOS response is no more or less
brilliant than any other biochemical pathway that cells use to survive.
I thought your argument was based on the idea that mutation was
non-random. Now it is based on the idea that mutation *is* random, but
that there are biochemical pathways that can increase the amount of
random mutation in stressful conditions (so long as the selective
conditions don't happen too quickly).

> > What you are interested in (and what does not exist, AFAIK,
> > except among certain domesticated transposons, and even those
> > processes started out as a random event) is, at minimum, a
> > *differential* rate of specific mutation correlated
> > with exposure to selective agents.
>
> No, the ID does not require differential (increased) rate of _specific_
> mutation. It only requires increased rate of favorable mutation.
> Inserting the qualifier "specific" is a strawman argument (Pat
> Robertson's theory of evolution).

Not if you want anything that anyone would recognize as being *guided*
or *intelligently directed* mutation. What is *guided* or
*intelligently directed* or *non-random* about the mutations that are
being observed? That there are more of them (and every other kind of
mutation as well)?

> >> So what? The intelligent agency doesn't have as perfect
> >> instant solution to the challenge as the entire community
> >> of experts can conceive _in hindsight_, years later.
> >
> > The explanation was not conceived "in hindsight". It
> > took years of experimental work to disect...
>
> It also took years to move the goalposts slowly enough (to obfuscate
> the fall) and make up the imagined ID that supposedly also requires
> specific mutation. As explained, ID only requires that the net effect
> of the mutagenic response be favorable, not that it also must the most
> economical favorable response.

If the only thing that *Intelligent* Design requires is *more* pure
dumb *undirected* and non-specific random mutation, why do you call it
"Intelligent"? What is intelligent here? What is guided? In fact,
the goalpost that Cairns first thought he had was *real* directed
(specific) mutation related to need. You apparently still do, but have
moved the goalposts so that you are willing to now claim that merely
*more* pure dumb random mutation that produces *more* of all kinds of
mutants is what you regard as "intelligent". What a pathetic whimper
of an intelligent designer that is. From engineering life and
*directing* (intelligently via "junk DNA") or *guiding* the formation
of *specific* genes one needs to acting like a pathetic mutagenic
chemical. And not even able to detect the guiding because all the
results look like random mutation before selection.

> >> That doesn't affect the fact that Cairns experiment clearly
> >> demonstrated that the probability of favorable mutation was
> >> greater under challenge than without challenge.
> >
> > That is just it. It was, in fact, discovered that the
> > Cairns phenomena involved a hidden change in *rate* of
> > mutation, not an adaptive change in the *specificity* of
> > the mutation.
>
> Can you point out where does ID theory prohibit an intelligent agency
> from using any particular mutagenic mechanism (including the changing
> the rates of multiple mutations) to increase the fitness under given
> challenge? You're making up such silly requirements.

Is it that you have this idea that all *bad* mutations are due to
random events and it is only *good* mutations that the ID fairy
"guides"? I ask because the random mutation of which I speak doesn't
*just* produce mutations that increase fitness. In fact, if you
artificially trigger the stress system without there being any
selection for specific required features, you will get
the same increase in mutations of all sorts, but now the net effect on
fitness will be to decrease it relative to the w.t.

> >> Note also that in Cairns case, we're dealing with artificially
> >> selected strains of bacteria exposed to contrived challenge,
> >> hence the solution was a fallback to a more general type of
> >> solution.
> >
> > So now you are claiming that the Cairns experiment doesn't
> > show what you want because it involves artificially selected
> > strains of bacteria exposed to a contrived challenge?
> >
> No, that's clearly not what I am saying. The point being made is that
> the Cairns experiment didn't demonstrate the most economical favorable
> mutagenic response. It only demonstrated a favorable mutagenic
> response, which is all that ID requires. You're battling your own
> "perfect designer" strawman.

I am perfectly willing to accept multiple changes that affect the
selective phenotype. I don't care which gene(s) is/are
*differentially* mutated to produce the phenotypic effect. So long as
there is a differential favoring of genes that produce the needed
beneficial effect.


>
> > So now the general stress response is the equivalent of
> > the human brain? Again, the point is that the stress
> > response results in an increased overall *rate* of
> > mutation in the stressed cells. It does not result
> > in a specific overproduction of specific needed mutations.
>
> ID doesn't require "specific overproduction". It only requires a
> mutagenic response to a challenge which yields the net improvement of
> fitness under that challenge. It doesn't need to be the cheapest
> mutagenic response. Attaching the label "general stress response" or
> any other, doesn't change the ID requirements or the experimental facts
> supporting them.

How can you claim that there is any sort of intelligent *guiding*
process if there is no differential effect? Dumbly increasing the rate
of random mutation produces more bad than good results. It is still
random mutation *followed* by selection which is what discriminates
among the random mutations. Until you can convince me that the
selective environment not only discriminates but specifically and
differentially produces beneficial changes. It can still produce
detrimental changes, but it does have to have a significant or
detectable differential *production* of beneficial mutations and not
just an overall increase in all kinds of mutations. Otherwise you
cannot claim that any guiding of the types of mutations is going on.
You are still merely producing random mutations and selection is still
acting only after mutation, not as an inducer of needed mutations,
specifically.

> > The stress response does not "intelligently" (that is,
> > differentially) produce needed mutations. It dumbly
> > and randomly and non-specifically increases the *rate*
> > of mutation in all kinds of sequences.
>
> It increased the fitness under the challenge. That's what ID requires.

ID requires *intelligent* guidance, not simply more dumb random
mutation. Even if more dumb random mutation, after selection, produces
more winners. [That only occurs under certain conditions, of course.]

Oh, BTW, what sort of "fitness" are you measuring and when and how?
And why can't the same process we are discussing be aptly described as
random mutation followed by selection for useful variants? Are you
claiming that more random mutation plus selection cannot increase
fitness? How are you distinguishing more random mutation plus
selection due to "intelligent guidance" and more random mutation plus
selection without "intelligent guidance"?

> If you wish to define some new PID theory, that requires perfect
> response, so you can declare bacteria dumb in comparison, go ahead,
> have fun with your PID bashing.

I am most certainly not requiring PID, I am merely requiring
detectability of the hypothesized *intelligence* or *guidance* of the
mutational events. All I hear is that *more* completely random
mutation followed by selection is a sign of intelligent guidance
because it increased fitness. So are you claiming that more completely
random mutation followed by selection that decreased fitness would be
what happens in the absence of your hypothetical ID?

> > You ignore all the post-Cairns work that explored this
> > phenomena and pretend that Cairns is the last and only
> > word.
>
> I didn't ignore it. Half the post was about the later results.

You mean your assertions that the conclusions are part of an evil plot
by neo-Darwinists to deny what Cairns found and reinterpret it into the
LD evil conspiracy?

> > When it is pointed out that the process behind
> > the Cairns phenomena works not by increasing specificity
> > of mutation but by increasing the overall rate of mutation,
> > you then claim that the "intelligent" process is sloppy.
>
> I only said that there ID does not have any postulates requiring
> optimality of mutagenic response.

I am not asking for optimality. I am asking for some evidence of
*specificity* and *intelligent guidance* in the types of mutations that
occur. I am asking for *detectability*. For *significance*. Not
perfection. Right now there is no difference between your more random
mutation plus subsequent (or concomitant) selection due to intelligent
design and more random mutation plus subsequent (or concomitant)
selection without any intelligent agent other than your assertion that
the mutations are not random.

> You're simply making ups such
> requirement, some other PerfectID theory so you can have something to
> refute, even if it is your own strawmen.
>
> > So sloppy that it just so happens to be indistinguishable
> > from random mutation.
>
> It's not the optimum, but it's not quite that sloppy either. After all,
> that's precisely what Cairns experiment shows -- the probability of
> favorable mutation did increase with respect to the non-response rate
> of favorable mutations. That's all that ID requires.

No. Not the *probability* of favorable mutation. That would require
*differential* production of favorable mutations. What increased was
the *frequency* of favorable mutations, due, as we now know, to an
increase in the rate of *all* mutations, favorable and unfavorable,
concomitant with any selection that induces activation of the stress
response. The *probability* of favorable
mutations relative to unfavorable is not changed.

> >> As far as we know, any intelligent agency, a process with
> >> foresight, has to have some material embodiment /
> >> implementation and once this implementation is known
> >> well enough one may be able to tamper with it (e.g. damage
> >> or disable it).
> >
> > Are you really claiming that the stress response is
> > the equivalent of "intelligence"
>
> It is not "equivalent" but it is a special case, an instance of
> intelligent mutagenic response (the response which increased it fitness
> under the challenge). "Equivalent" would also require that every
> "intelligence" must be implemented as (or act through) a general stress
> response. ID doesn't have such requirement on "intelligence".
>
> Symbolically: A <=> B means A => B and B => A. In this case we only
> have A => B (where A is favorable stress response, and B is
> "intelligence). But ID does not require/postulate B => A.

The stress response does NOT produce only favorable mutations. It
increases the rate of *all* mutations.

> > despite the evidence that it does not show the
> > specificity required to be a cause of *adaptive* mutation?
>
> ID has no "specificity" requirement for mutagenic response to a
> challenge. Only that the mutagenic response is net favorable response
> to a challenge.

Look. What I (and any biologist) mean by random wrt need is the
following: regardless of the conditions of the experiment, the
mutations that get selected for (the beneficial ones) have no
privileged position. The formal way to express this is to generate the
following table.

Variable 1 is the type of mutation, with two categories (mutations that
are beneficial and mutations that are NOT beneficial). Variable 2 is
the selective condition: some condition that you think makes a
difference and leads to mutation that is non-random wrt need (which
specifically means that you get proportionally more *beneficial*
mutations than non-beneficial ones), plus a control category of what
happens in the absence of selection. So variable 1 has the categories
'beneficial' or 'not beneficial' and variable 2 has 'after selection'
or 'no selection'.

At the time that the Cairns phenomenon was observed (in the 1980s --
why are creationists always just finding out about 20-year-old
research?), we did not have the technical ability to measure the
frequency of relevant non-beneficial mutations under non-selective
conditions. Now we do. It is easy to measure the molecular nature of
the changes that produced the beneficial mutation. After you select
for the beneficial phenotype, you examine the nature of the mutations
that produce the beneficial effect. For example, if the only way to
generate a revertant of trp- to trp+ is a transition of A to G at
position 36 that reverts the original mutation, you essentially know
what kind of mutation represents a "beneficial" mutation and can
measure the rate at which it occurs. Then, if you look at another
sequence where a transition of A to G has no phenotypic effect on trp
metabolism (or anything at all), you have a way of measuring whether
there is a significant non-random wrt benefit effect. That is, do
variables 1 and 2 interact, or are they independent? You measure the
number of beneficial A to G transitions per 10^6 cells and the number
of non-beneficial A to G transitions that occur under the same
conditions. You do this in a population of cells that has *never*
undergone selection and a population that has undergone Cairns-like
selection. (The very rapid selection seen in LD experiments basically
makes this type of experiment unnecessary for demonstrating that
selection is not *causing* or even correlated with an increase in
beneficial mutations, but it can also be done by looking among the
survivors for the frequency of the non-beneficial transitions.)

                   beneficial site    non-beneficial site
non-selective            500                  750
Cairns selection        1500                 2250

Then you do a simple contingency test. In this example, you would
clearly come to the correct conclusion that Cairns selection or no
selection does not interact with whether or not the observed trait is
beneficial. That is, whether a mutation is beneficial or not is not
affected by selection or non-selection but is random wrt the type of
selection. There is no evidence that the selection process did
anything other than increase the amount of random mutation. It
specifically does NOT show that mutation is, under the Cairns
conditions, non-random wrt need. For that, you would need the
following type of result:


                   beneficial site    non-beneficial site
non-selective            500                  750
Cairns selection        1500                 1250

If this were actually the result, then you could accurately claim
that Cairns selection is producing a non-random increase in the
frequency of beneficial mutations. Note that this is not perfect
production of beneficial mutations, just a significant effect on the
production of beneficial mutations (that is, mutation that is
non-random wrt need). This, of course, is not what has actually been
found.
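As a minimal sketch (pure Python, using the hypothetical counts from
the two example tables above), the contingency test works like this;
the closed-form 2x2 chi-square formula stands in for a full statistics
package:

```python
def chi2_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (1 degree of freedom)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: non-selective / Cairns selection; cols: beneficial / non-beneficial.

# Table 1: the mutation rate rises uniformly under Cairns selection.
independent = chi2_2x2(500, 750, 1500, 2250)
print(independent)            # 0.0 -> no interaction at all: mutation
                              # is random wrt need.

# Table 2: the hypothetical result that WOULD support non-random
# (wrt need) mutation.
interacting = chi2_2x2(500, 750, 1500, 1250)
print(round(interacting, 1))  # ~72.7, far above the 3.84 critical
                              # value (p = 0.05, 1 df): a significant
                              # interaction between the variables.
```

With the first table the statistic is exactly zero because the Cairns
row is just three times the non-selective row: selection changed the
amount of mutation, not the beneficial/non-beneficial ratio.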

> > The community of real scientists took the Cairns phenomena
> > seriously, studied it, analysed it, and found out that it
> > was not due to *adaptive* mutation after all.
>
> But it did confirm ID theory, since ID does not require the most
> economical favorable response, silly semantic games notwithstanding.

ID that implies that it works by intelligently guiding or directing the
types of mutations depending on the need for them *does* require a
statistically significant preference in favor of beneficial mutation
when such is needed. What do you think we mean when we say that
mutation is random wrt need? We mean that there is no differential
preference for beneficial mutations when such are needed.

But, hell, you even reject LD (the Luria-Delbruck experiment), where,
because selection is so rapid (chosen precisely to determine whether
the mutations being found were already present or were being, albeit
rarely, induced by the selective conditions), there cannot even be
mutation in response to the selective agent. The evidence *clearly*
and *unequivocally* shows that all selection does in these cases is
choose the winners among mutations that occurred without the need for
those mutations (during the time of non-selection). That is beyond
mere ignorance and into willful ignorance.

> >> The Cairns outcome summary -- the neo-Darwinists are
> >> left clinging to a straw: the intelligent agency is
> >> not perfect, it is not omnipotent and it is not invisible
> >> (its functioning is not undetectable).
> >
> > Your "intelligent" agent is, however, indistinguishable from
> > an agency that produces random (wrt need) mutations that
> > then get selected among. So, in that way, it is invisible.
> > It is indistinguishable from random (wrt need) mutation.
>
> It is quite distinguishable and visible. That was the whole point of
> the experiment and its results. It shows that the favorable double
> mutation has a much higher rate under the challenge (for which it is
> favorable) than the random rate of the same double mutation measured
> with that challenge absent. That's why the whole uproar.

And, as pointed out, further research demonstrates that the most
radical interpretation (adaptive mutation) was wrong. What happened
was an increase in sloppiness during replication and repair that
increased the mutation rate. Mutation was *still* random wrt need.
All the selection did was choose the winners of this random wrt need
mutation.

> > There is no evidence of adaptive mutation that requires
> > an unseen *intelligent* agent.
>
> ID does not have a postulate requiring that the "intelligent agent"
> must be invisible. That's another strawman you just made up. There is
> an evidence for a response via a favorable mutation, which is what ID
> says to exist.

What I am saying is that the mutations in the Cairns phenomena are NOT
evidence of non-random (wrt need) mutation. It is evidence of an
increase in the frequency of random (wrt need) mutations. You keep
misinterpreting the Cairns experiment by claiming that somehow an
increase in the frequency of random (wrt need) mutations is really a
*non-random* generation of beneficial mutations; claiming that the
process somehow intelligently produces *intentionally* more beneficial
mutations than non-beneficial ones. That is NOT what the process does.
Mutation in this process is still random wrt need. There is simply
more *random wrt need* mutation, enough to produce a higher frequency
of winners (but not so much that all the cells die). There is
absolutely no evidence of a direction or guidance to the mutational
process so as to produce differentially more beneficial mutations.
Mutation is *still* random wrt need.

> > It can all be explained at the experimental molecular
> > level where we can see that all the mutation observed
> > is non-adaptive and non-spooky, with selection merely
> > determining which mutants survive, not inducing or
> > causing them.
>
> You seem to be making up new requirements as you go, ever more ridiculous.

No. I am being quite consistent in requiring that you demonstrate
*non-random wrt need* mutation. So far all you have done is
demonstrate that when you increase the rate of *random wrt need*
mutation you get more mutations of the type you are selecting for. But
that does not demonstrate that the selection is changing the nature of
mutation in favor of beneficial mutations as opposed to non-beneficial
mutations.

> ID does not require that the implementations of the mutagenic responses
> must be "spooky" or "hidden" or that subsequent 'natural selection' is
> prohibited. It seems you are still arguing with Pat Robertson.

No. I am demanding that you present evidence that the mutagenic
process in the Cairns phenomena represents mutation that is *non-random
wrt need*. That qualifier requires a significant or detectable
increase in *beneficial* mutation relative to *non-beneficial*
mutation. I am not asking for perfect interference by a guided
intelligent process; merely a detectable one that exhibits a detectable
preference for beneficial mutation over non-beneficial. You seem to
think that merely increasing the amount of random wrt need mutation
represents a guided or intelligent process. I do not regard a rate
change to be either intelligent or guided if the mutation is still
random wrt need. But you seem to have a different understanding of
what it means to have non-random wrt need mutation.

> Just because you can find how the intelligent mutagenic response is
> implemented (carried out),

I am saying that you have NOT presented any evidence of an
*intelligent* or goal-oriented mutation process -- one that
distinguishes between beneficial and non-beneficial mutation in favor
of those that are beneficial. All I require is that the difference be
*detectable* and *significant*. All you do is argue that *more* random
wrt need mutation represents an intelligent or guided process of
mutagenesis which is non-random wrt need. But whether the mutagenesis
process is *random* wrt need or *non-random* wrt need is determined
independently of the rate of mutation. It is determined by whether or
not the process can discriminate between beneficial and non-beneficial
mutation and selectively (significantly, not perfectly) produce one
type over the other when the conditions require it. The only process
that can, AFAIK, distinguish between beneficial and non-beneficial
mutants is the selection process, not the mutagenesis process. And
there remains no way that the selection process can induce *non-random
wrt need* mutations. It can induce higher rates of mutation, but that
is not a guided or intelligent preference for needed beneficial
mutation.

hersheyhv

May 16, 2006, 10:15:58 AM

Vend wrote:
> I thought that you were asserting that the Intelligent agent/process
> acted by biasing the dna mutations, increasing the likelihood of
> favourable outcomes.
>
> So merely increasing the rate of mutation doesn't do the job, as
> tossing a coin more often doesn't change the probabilities of the
> outcomes.

It does, however, increase the number of heads. Which he takes as
evidence of intelligent guidance.
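The coin analogy can be sketched directly (a toy simulation; the seed
and sample sizes are arbitrary):

```python
import random

def toss(n, p=0.5, seed=0):
    """Toss a coin with heads-probability p a total of n times;
    return (number of heads, proportion of heads)."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(n))
    return heads, heads / n

few_heads, few_prop = toss(100)
many_heads, many_prop = toss(10_000)

# More tosses give more heads in absolute number (more "winners" for
# selection to find), but the *proportion* of heads stays near p:
# tossing more often does not bias the coin toward favorable outcomes.
print(few_heads, many_heads)
print(round(few_prop, 2), round(many_prop, 2))
```

That is the distinction at issue: a higher mutation rate raises the
count of beneficial mutants without changing their probability.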

> No experiment produced any evidence that suggest that the mutations are
> biased towards favourable outcomes.

Nope. But he seems to think that the process that *randomly* produces
*more* mutation under stress is the intelligent design at work. He
just weirdly mislabels the process by claiming that it is
*non-randomly* producing favorable outcomes rather than randomly
producing more mutants (variants) for selection to weed through.
