
Evidence for Big Leaps?

140 views

michael...@worldnet.att.net

Jul 6, 2006, 5:36:44 PM

July 07, 2006
An Evolution Saga: Beach Mice Mutate and Survive

It's a pitiless lesson (adapt or die), but the sand-colored mice that
scurry around the beaches of Florida's Gulf Coast seem to have learned
the lesson well. Now researchers have identified a genetic mutation
that underlies natural selection for the sand-matching coat color of
the beach mice, an adaptive trait that camouflages them from aerial
predators.

In the July 7, 2006, issue of the journal Science, evolutionary
geneticist Hopi Hoekstra and colleagues at the University of
California, San Diego, report that a single mutation causes the
lifesaving color variation in beach mice (Peromyscus polionotus) and
provides evidence that evolution can occur in big leaps.

"This is a striking example of how protein-coding changes can play a
role in adaptation and divergence in populations, and ultimately
species."
Hopi Hoekstra

The Gulf Coast barrier islands of Florida and Alabama where the beach
mice are found are less than 6,000 years old, quite young from an
evolutionary standpoint. Hoekstra said that the identification of a
single mutation that contributes to the color change that has arisen in
these animals argues for a model of evolution in which populations
diverge in big steps.

This model, in which change is driven by large effects produced by
individual mutations, contrasts with a popular model that sees
populations diverging via small changes accumulated over long periods
of time.

More at:
http://www.hhmi.org/news/hoekstra20060707.html

Inez

Jul 6, 2006, 5:50:57 PM

michael...@worldnet.att.net wrote:
> July 07, 2006
> An Evolution Saga: Beach Mice Mutate and Survive
>
> It's a pitiless lesson-adapt or die-but the sand-colored mice that
> scurry around the beaches of Florida's Gulf Coast seem to have learned
> the lesson well. Now researchers have identified a genetic mutation
> that underlies natural selection for the sand-matching coat color of
> the beach mice, an adaptive trait that camouflages them from aerial
> predators.
>
> In the July 7, 2006, issue of the journal Science, evolutionary
> geneticist Hopi Hoekstra and colleagues at the University of
> California, San Diego, report that a single mutation causes the
> lifesaving color variation in beach mice (Peromyscus polionotus) and
> provides evidence that evolution can occur in big leaps.
>
That doesn't seem like such a drastic change to me. For example, my
hair has gone from blonde to red and back to blonde again.

R Brown

Jul 6, 2006, 6:01:22 PM

<michael...@worldnet.att.net> wrote in message
news:1152221804.5...@j8g2000cwa.googlegroups.com...
I can hear it now: "Yes, but they're still mice. Call us when one of them
gives birth to a cat."

bullpup

Jul 6, 2006, 7:15:26 PM

<michael...@worldnet.att.net> wrote in message
news:1152221804.5...@j8g2000cwa.googlegroups.com...
>

<Obligatory creationist dismissal>

1 But it's still a mouse!

2 Moths, mice, same difference, big deal!

3 All mutations are bad! (But the mutation increases the probability of
successful reproduction.) All mutations are bad!

4 (Stuffs head up arse) <muffled mumbling> I can't hear you!</muffled
mumbling>

5 The genetic information for sand covered fur was always there!

6 So, where's the new and novel structure required for proof of evolution?
Mice already had hair to begin with!

7 <Nashty-poop> So, how does that save anyone's life? </Nashty-poop>

8 <nando> Who owns the decision point of the chance if the mutation where
things could go one way or the other, why didn't spiritual love appear in
the research? It's nothing but secular propaganda because it doesn't make
me feel *special*!</nando>

9 It was designed that way!

10 <ave> What does a different color of hare have to do with beach mice?
</ave>

11 <ed> I have a rock shaped like a mouse penis for sale on eBay, and it's
five feet long!</ed>

12 <McNameless> Ron Wyatt found the Ark-shaped rock, and since they said the
islands were only about 6,000 years old, that proves the Wyatt Ark is *the*
Ark!</McNameless>

</Obligatory creationist dismissal>

Boikat

michael...@worldnet.att.net

Jul 6, 2006, 7:41:23 PM

bullpup wrote:
>
>[snip]
<Obligatory creationist dismissal>
>
> Boikat

There is a picture with the original article, so we can add:

It's obviously a stuffed mouse that has been spray-painted with Glidden
interior enamel.

-- Mike Palmer

Desertphile

Jul 6, 2006, 10:14:53 PM
michael...@worldnet.att.net wrote:

> July 07, 2006
> An Evolution Saga: Beach Mice Mutate and Survive
>
> It's a pitiless lesson-adapt or die-but the sand-colored mice that
> scurry around the beaches of Florida's Gulf Coast seem to have learned
> the lesson well. Now researchers have identified a genetic mutation
> that underlies natural selection for the sand-matching coat color of
> the beach mice, an adaptive trait that camouflages them from aerial
> predators.

Yes, but they are STILL moths!

Oh, wait. That's a different argument.

> In the July 7, 2006, issue of the journal Science, evolutionary
> geneticist Hopi Hoekstra and colleagues at the University of

"Hopi Hoekstra?!" What a great name!

> California, San Diego, report that a single mutation causes the
> lifesaving color variation in beach mice (Peromyscus polionotus) and
> provides evidence that evolution can occur in big leaps.

Note that it is also a single gene in humans that controls skin color.

Desertphile

Jul 6, 2006, 10:22:29 PM
bullpup wrote:

Awww hell: you beat me to it.

> 3 All mutations are bad! (But the mutation increases the probability of
> sucessful reproduction.) All mutatons are bad!

3b "It was a loss of information that made the mice sand-colored."

> 4 (Stuffs head up arse) <muffled mumbling> I can't hear you!</muffled
> mumbling>

That's 90% of Creationists right there.

> 5 The genetic information for sand covered fur was always there!

5b "It survived the flood in Noah's sister-in-law, along with
gonorrhea, tuberculosis, and Mycobacterium leprae!"

> 6 So, where's the new and novel structure requred for proof of evolution?
> Mice already had hair to begin with!

Hee! That's a good 'un.

> 7 <Nashty-poop> So, how does that [save] anyone's life? </Nashty-poop>

7b "And duz the mice wan' fries wid' 'dat?"

> 8 <nando> Who owns the decision point of the chance if the mutation where
> things could go one way or the other, why didn't spiritual love appear in
> the research? It's nothing but secular propaganda because it doesn't make
> me feel *special*!</nando>

Wow. You got him down pat!

> 9 It was designed that way!
>
> 10 <ave> What does a different color of hare have to do with beach mice?
> </ave>
>
> 11 <ed> I have a rock shaped like a mouse penis for sale on eBay, and it's
> five feet long!</ed>

11b "That wasn't a mouse: it was Bill Clinton."

> 12 <McNameless> Ron Wyatt found the Ark shaped rock, and since they said the
> islands were only about 6000 years old, that proves it Wyatt Ark is *the*
> Ark!</McNameless>

13 <Ray Martinez>I'm writing a research paper on these mice right now
proving they did not evolve and are not brown and actually have six
legs and I'll publish this research paper in April.</Ray Martinez>

> </Obligatory creationist dismissal>
>
> Boikat

Nic

Jul 6, 2006, 10:40:32 PM

Not bad for a single mutation though. Does anyone know how many loci
are different for the Manchester moths?

David Wilson

Jul 7, 2006, 1:24:07 PM
In article <1152229283.0...@s26g2000cwa.googlegroups.com> on
July 6th in talk.origins michael...@worldnet.att.net wrote:

> bullpup wrote:
> >
> >[snip]
> <Obligatory creationist dismissal>
> >
> > Boikat
>

> There is a picture with the original article, ...

It was faked by gluing dead mice to the tree trunks.

---------------------------------------------------------------------
David Wilson

SPAMMERS_fingers@WILL_BE_fwi_PROSECUTED_.net.au
(Remove underlines and upper case letters to obtain my email address.)

Kermit

Jul 7, 2006, 1:27:12 PM

bullpup wrote:
> <michael...@worldnet.att.net> wrote in message
<snip>

>
> <Obligatory creationist dismissal>
>
> 1 But it's still a mouse!
>
> 2 Moths, mice, same difference, big deal!
>
> 3 All mutations are bad! (But the mutation increases the probability of
> sucessful reproduction.) All mutatons are bad!
>
> 4 (Stuffs head up arse) <muffled mumbling> I can't hear you!</muffled
> mumbling>
>
> 5 The genetic information for sand covered fur was always there!
>
> 6 So, where's the new and novel structure requred for proof of evolution?
> Mice already had hair to begin with!
>
> 7 <Nashty-poop> So, how does that same anyone's life? </Nashty-poop>
>
> 8 <nando> Who owns the decision point of the chance if the mutation where
> things could go one way or the other, why didn't spiritual love appear in
> the research? It's nothing but secular propaganda because it doesn't make
> me feel *special*!</nando>
>
> 9 It was designed that way!
>
> 10 <ave> What does a different color of hare have to do with beach mice?
> </ave>
>
> 11 <ed> I have a rock shaped like a mouse penis for sale on eBay, and it's
> five feet long!</ed>
>
> 12 <McNameless> Ron Wyatt found the Ark shaped rock, and since they said the
> islands were only about 6000 years old, that proves it Wyatt Ark is *the*
> Ark!</McNameless>
>
> </Obligatory creationist dismissal>
>

14 <robin> I have posted here many times, trying to be fair and
explaining that I embrace many possibilities, but the Creationists are
upset. While it's true that Darwin didn't disown the racist
implications in his books, he really didn't do worse than many of his
contemporaries, and the English gentlemen were hard pressed to explain
themselves in any event. I think there are many possibilities in
evolutionism, just as there are in the glory of True Christian love. I
wish you people could consider the possibility that God used evolution
to accomplish his goal, that science is not necessarily unrighteous, and
Darwin, while unwitting and flawed, was his spokesman on this issue.
Compare Darwin's racism to Thomas Merton's "Seven Storey Mountain", and
see how science could be. </robin>


> Boikat
Kermit

Dwib

Jul 7, 2006, 1:52:45 PM
michael...@worldnet.att.net wrote:
> July 07, 2006
> An Evolution Saga: Beach Mice Mutate and Survive
> It's a pitiless lesson-adapt or die-but the sand-colored mice that
> scurry around the beaches of Florida's Gulf Coast seem to have learned
> the lesson well. Now researchers have identified a genetic mutation
> that underlies natural selection for the sand-matching coat color of
> the beach mice, an adaptive trait that camouflages them from aerial
> predators.

This sounds more like an example of "survival of the fittest".

I mean, it's not like "the mice adapted a new hair color". The hair
color was selected because those mice survived predation.

Perhaps I'm splitting hairs.

Dwib

hersheyhv

Jul 7, 2006, 9:19:29 PM

Life's a beach and if you are the wrong color, you die.
>
> More at:
> http://www.hhmi.org/news/hoekstra20060707.html

TCE

Jul 7, 2006, 9:39:25 PM

15 <topmind> This is exactly what SETI can't prove and I've been
telling you that all along. Those mice were genetically altered by
alien spores easily found by generic pattern searching based on an
algorithm I don't understand. It also proves you're all part of a
conspiracy to suppress the true nature of the universe. Talk about
calling the camel a pot. </topmind>

---
Strange

nightlight

Jul 7, 2006, 10:51:49 PM
michael...@worldnet.att.net wrote:

> Now researchers have identified a genetic mutation
> that underlies natural selection for the sand-matching
> coat color of the beach mice, an adaptive trait that

> camouflages them from aerial predators....

It doesn't appear they have shown that the mutation
was _random_. They only found a variant of a gene
which is responsible for the lighter color.

a) If the given population size for a given time can
produce altogether N mutations for all individuals,

b) and if there are total of T theoretically possible
mutations at a comparable distance from the baseline
mice genome as the Mc1r mutation they found,

then the probability that this mutation will happen by
chance at least once in these N available tries is:

P(N,T) = 1 - (1-1/T)^N ~ 1 - exp(-N/T) ... (1)

For small values of N/T this is approximately:

P(N,T) ~ N/T ... (2)

(If there are F favorable color mutations among T, then
N/T in (2) would be multiplied by F. That refinement is
irrelevant for the point being made below.)
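[Numerically, eq. (1) and its approximation eq. (2) are easy to sanity-check; a minimal sketch, where N and T are arbitrary illustrative values, not estimates of the actual sets SN and S1 from the study:]

```python
import math

def p_exact(N, T):
    # Eq. (1): probability that one specific mutation out of T equally
    # likely possibilities occurs at least once in N independent tries.
    return 1.0 - (1.0 - 1.0 / T) ** N

def p_approx(N, T):
    # Exponential form of eq. (1).
    return 1.0 - math.exp(-N / T)

def p_linear(N, T):
    # Eq. (2): linearization, valid for small N/T.
    return N / T

N, T = 1_000, 1_000_000  # arbitrary illustrative sizes, not real estimates
print(p_exact(N, T), p_approx(N, T), p_linear(N, T))
# all three agree closely when N/T is small
```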

I don't see in the article any hint of an estimate for P(N,T)
that would establish the odds that a _random_ mutation
can "find" the right solution (any of the F favorable color mutations)
in the given number of tries N.

They need to estimate P(N,T), then show that P(N,T) is fairly
high e.g. 50% or above in order to claim that neo-Darwinian model
(ND = RM + NS = Random Mutation + Natural Selection) has better
than 50% chance of producing this adaptation. Otherwise,
what is left is guided (intelligent) mutation.

Hence, this particular discovery, as it stands, seems at
least as favorable to ID = IM + NS as to ND = RM + NS,
since all they have shown is the existence of M + NS,
a fact consistent with both the ID and ND models of evolution.


Windy

Jul 7, 2006, 11:41:22 PM

nightlight wrote:
> michael...@worldnet.att.net wrote:
> > Now researchers have identified a genetic mutation
> > that underlies natural selection for the sand-matching
> > coat color of the beach mice, an adaptive trait that
> > camouflages them from aerial predators....
>
> It doesn't appear they have shown that the mutation
> was _random_. They only found a variant of a gene
> which is responsible for the lighter color.
>
> a) If the given population size for a given time can
> produce altogether N mutations for all individuals,
>
> b) and if there are total of T theoretically possible
> mutations at a comparable distance from the baseline
> mice genome as the Mc1r mutation they found,

Ooo, fancy words. What is this "baseline mice genome"? Are the baseline
mice kept in a vault somewhere for comparisons?

> then the probability that this mutation will happen by
> chance at least once in these N available tries is:
>
> P(N,T) = 1 - (1-1/T)^N ~ 1 - exp(-N/T) ... (1)
>
> For small values of N/T this is approximately:
>
> P(N,T) ~ N/T ... (2)

No, the number of possible independent events (T) does not bring down the
probability of a given nucleotide mutating, moron.

> They need to estimate P(N,T), then show that P(N,T) is fairly
> high e.g. 50% or above in order to claim that neo-Darwinian model
> (ND = RM + NS = Random Mutation + Natural Selection) has better
> than 50% chance of producing this adaptation. Otherwise,
> what is left is guided (intelligent) mutation.

The goalposts are moving in a completely new direction, I'll give you
that.

> Hence, this particular discovery as it stands, seems at
> least as favorable to ID = IM + NS as to ND = RM + NS,
> since all they have shown is the existence of M + NS,
> a fact consistent with ID and ND models of evolution.

So the designer actively intervened in the evolution of these mice
during the last 6000 years? Interesting.... how, exactly?

-- w.

nightlight

Jul 8, 2006, 2:59:45 AM
Windy wrote:
>
>>a) If the given population size for a given time can
>> produce altogether N mutations for all individuals,
>>
>>b) and if there are total of T theoretically possible
>> mutations at a comparable distance from the baseline
>> mice genome as the Mc1r mutation they found,
>
>
> Ooo, fancy words. What is this "baseline mice genome"? Are the baseline
> mice kept in a vault somewhere for comparisons?

I called "baseline genome" the initial state of the genome of
the entire population. That is a set of DNA configurations, call
it S0 (initial set), which is a subset of the combinatorial space
S1, consisting of all DNA configurations which are at a distance
of 1 mutation away from elements of S0.

The number of elements in the set S1 was denoted in (b) as T. Now,
the adaptation process for the given population size and the
given duration will be able to explore some set of points (DNA
configurations) from the set S1, call it SN, starting from
various points in S0. The size of the set SN was denoted as
N in (a).

For the argument being made, it doesn't matter whether we can
compute numbers T and N in practice with present-day technology.
We are discussing different mathematical models of the observed
process and we only need to know that such finite sets S0, S1
and SN exist (and thus have sizes, such as integers T and N),
as mathematical elements of the models. The neo-Darwinian model
is one conceivable way to construct set SN from S0 -- it says
that the N points of SN are chosen randomly from the set S1
(of all configurations which are 1 mutation away from S0).

That model then implies a certain probability for the adaptation to
occur, as indicated in eq. (1). Again, it doesn't matter whether
we can presently compute P(N,T). It only matters that neo-Darwinian
model mathematically implies some P(N,T), a number that exists
whether we can calculate it at present or not.

In contrast, the ID model says that those N points are not
chosen randomly from S1, but are guided by some 'intelligent
agency' which allows it to find favorable configurations
from S1 faster than the random search does i.e. the ID model
says that if we observe a series of such adaptation processes,
then the rate of observed favorable adaptations will be
greater than the rate predicted by the neo-Darwinian model
(implied by particular P(N,T) in each process instance).

The point of my argument is that there is nothing in the article
for neo-Darwinians to crow about. All that was shown is that
the adaptation process consisted of a certain mutation plus the
natural selection i.e. they observed M + NS. The neo-Darwinian
model requires _RM_ + NS.

To show that their observation favors neo-Darwinian model, they
would have to obtain prediction of that model, at least some
rough estimate for P(N,T), and compare it to the observed facts
(the appearance of the favorable mutation for the given population
size and duration). If the ND model estimate for P(N,T) turns
out much smaller than 1, then the ND model is not a probable
mechanism behind the observed adaptation. Of course, the ND
model is not excluded by the P(N,T) << 1, since random mutation
can in principle produce the observed result. It can only be
shown to be a highly improbable mechanism for the observed
adaptation and one would have to consider the remaining
more probable possibility, the guided mutations.

>
>>then the probability that this mutation will happen by
>>chance at least once in these N available tries is:
>>
>> P(N,T) = 1 - (1-1/T)^N ~ 1 - exp(-N/T) ... (1)
>>
>>For small values of N/T this is approximately:
>>
>> P(N,T) ~ N/T ... (2)
>
>
> No, the possibility of independent events (T) does not bring down the
> probability of a given nucleotide from mutating, moron.
>

Where did I "bring down" any probability? If you roll a die with T=6
possible outcomes N=2 times, the probability that you will _not_ get,
say, the number 5 in those 2 throws is P(no_5) = (1-1/6)^2. Hence the
probability that you will get 5 at least once is 1 - P(no_5) =
1 - (1-1/6)^2, which is what eq. (1) gives for general numbers T
and N (the sizes of sets S1 and SN). Now, Mr. Genius, show
us your formulae for this type of probability.
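[The die arithmetic above checks out both analytically and by simulation; a quick sketch:]

```python
import random

# Chance of at least one 5 in N=2 rolls of a fair die (T=6 outcomes)
p_analytic = 1.0 - (1.0 - 1.0 / 6.0) ** 2  # = 11/36, about 0.3056

# Monte Carlo cross-check
random.seed(0)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if 5 in (random.randint(1, 6), random.randint(1, 6))
)
p_simulated = hits / trials
print(p_analytic, p_simulated)  # the two should agree to ~2 decimals
```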

>>They need to estimate P(N,T), then show that P(N,T) is fairly
>>high e.g. 50% or above in order to claim that neo-Darwinian model
>>(ND = RM + NS = Random Mutation + Natural Selection) has better
>>than 50% chance of producing this adaptation. Otherwise,
>>what is left is guided (intelligent) mutation.
>
>
> The goalposts are moving in a completely new direction, I'll give you
> that.
>

What goalpost has moved? I am saying that there is nothing in the
reported result that suggests _random_ mutation is responsible for
the observed adaptation. At best it shows that the particular
mutation they found was responsible. But they show nothing that
would indicate whether the mutation was random or guided/intelligent.

As already explained, the neo-Darwinian random mutation model
implies a certain probability P(N,T) of occurrence of a particular
adaptation. To claim that their observation supports the
neo-Darwinian model, they would need to estimate this P(N,T)
and compare its implied rates of favorable mutations to those
observed. They do nothing of the sort. Their empirical result
is completely non-discriminating between the ID and ND models
of the observed adaptation.


>
>>Hence, this particular discovery as it stands, seems at
>>least as favorable to ID = IM + NS as to ND = RM + NS,
>>since all they have shown is the existence of M + NS,
>>a fact consistent with ID and ND models of evolution.
>
>
> So the designer actively intervened in the evolution of these mice
> during the last 6000 years? Interesting.... how, exactly?

There could be an intelligent agency guiding mutation faster than
random search toward the favorable DNA configuration. We do know
that numerous 'intelligent agencies' do exist in nature, such
as human or animal brains, immune systems and variety of other
intelligent networks (complex systems) at all scales. There is
no a priori reason why the biochemical reaction network of a
cell, which in turn is a sub-net in a hierarchy of larger
adaptable networks (organism, population, ecosystem), cannot
be at least a part of the 'intelligent agency' guiding mutations
faster than a purely random process. After all, we know that
some other perfectly natural processes do exist which guide
mutations purposefully (e.g. those occurring in brains of
molecular biologists while designing some new GM plant).
There is no fundamental reason why those known intelligent
processes guiding mutations are the only ones that can
exist.

There is also no a priori reason why the biochemical
reaction network of a cell has to be the innermost/smallest
or the most important such network implementing the
'intelligent agency' which guides the mutations. For
example, our physical space-time is meaningful (within
present-day physics) down to Planck scale which is 1e-33m.
The elementary particles, which are elemental building
blocks for atoms, molecules, cells, organs, organisms,
human brains, human societies, sciences and technologies,
are above the scale of 1e-16m. Hence, there are as many
orders of magnitude between the Planckian scale objects
and our 'elementary' particles as there are between the
'elementary' particles and the intelligent agencies
built from them (e.g. human brain which is at the
scale ~ 1e-1m). Thus, there is as much space for
Planckian scale objects to build complex intelligent
networks below the levels of our elementary particles
as there is between these particles and us.

The frequencies at which these Planckian objects
interact are 1e16 times faster than those occurring
between our elementary particles i.e. any complex
networks (distributed computers) built up on Planckian
scale objects would run 1e16 times faster than the
networks built from our 'elementary particles' (our
brains and technologies). Now, try extrapolating where
the evolution at our scale, including our science and
technology, might be at some future time which is
1e16 times longer than the few billion years we had
to evolve, biologically and culturally, so far. It is
beyond what we could imagine.

Hence, it is perfectly conceivable that our physical,
chemical, biological... laws are an extremely crude
picture of an activity by an unimaginably powerful
underlying intelligence (vast distributed computer
running 1e16 times faster and having (1e16)^3 ~ 1e50
times more components than the intelligent processes
we are familiar with at our level). In addition to
providing support for ID model of evolution, this
kind of model could also be a rational alternative
to the 'anthropic principle' in explaining the fine
tuning of physical constants.


In any case, these are just a couple of ways one might
conceive of the nature of the 'intelligent agency'
guiding mutations and evolution.


Windy

Jul 8, 2006, 8:54:47 AM

OK... if favorable adaptations occur at a high rate, this is evidence
for ID? What if favorable adaptations did not occur, or occurred at a
lower rate? Is that evidence against ID?

> The point of my argument is that there is nothing in the article
> for neo-Darwinians to crow about.

Sez you.

> All that was shown is that
> the adaptation process consisted of a certain mutation plus the
> natural selection i.e. they observed M + NS. The neo-Darwinian
> model requires _RM_ + NS.

> To show that their observation favors neo-Darwinian model, they
> would have to obtain prediction of that model, at least some
> rough estimate for P(N,T), and compare it to the observed facts
> (the appearance of the favorable mutation for the given population
> size and duration).

No, *you* are adding an unnecessary intelligent agent into the equation
and making the explanation more complex. It is your job to prove that
the process is non-random.

> >>then the probability that this mutation will happen by
> >>chance at least once in these N available tries is:
> >> P(N,T) = 1 - (1-1/T)^N ~ 1 - exp(-N/T) ... (1)
> >>
> >>For small values of N/T this is approximately:
> >>
> >> P(N,T) ~ N/T ... (2)
> >
> > No, the possibility of independent events (T) does not bring down the
> > probability of a given nucleotide from mutating, moron.
>
> Where did I "bring down" any probability. If you roll a dice with T=6
> possible outcomes, N=2 times, the probability that you will _not_ get,
> say, number 5 in those 2 throws, is P(no_5) = (1-1/6)^2. Hence the
> probability that you will get 5 at least once is 1 - P(no_5) =
> 1 - (1-1/6)^2, which is what eq. (1) is for some general numbers T
> and N (which are the sizes of sets S1 and SN). Now Mr. Genius, show
> us your formulae for this type of probabilities.

The genome is not a single die. Mutations can occur independently of
other mutations (on rare occasions they don't, but that is irrelevant
here). At the very least you would have to represent each nucleotide
with its own die. Then *each* nucleotide has some probability of
mutating, given by the mutation rate. Adding more nucleotides to your
observation is like adding more dice, not adding more sides to one die
(which is what you are doing now with your assumption that T = possible
outcomes).
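[The one-die-per-nucleotide picture can be sketched in code; the mutation rate and site count below are assumptions made up for illustration, not measured values:]

```python
# One "die" per nucleotide: each target site mutates independently with
# a small probability mu per offspring (both numbers are assumed).
mu = 1e-8    # assumed per-site, per-generation mutation rate
sites = 3    # assumed number of sites that could yield the variant

# Probability that at least one target site mutates in one offspring:
# complement of "no site mutates", one independent trial per site.
p_one_offspring = 1.0 - (1.0 - mu) ** sites
print(p_one_offspring)  # ~ mu * sites = 3e-8 for small mu
```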

> >>They need to estimate P(N,T), then show that P(N,T) is fairly
> >>high e.g. 50% or above in order to claim that neo-Darwinian model
> >>(ND = RM + NS = Random Mutation + Natural Selection) has better
> >>than 50% chance of producing this adaptation. Otherwise,
> >>what is left is guided (intelligent) mutation.
> >
> > The goalposts are moving in a completely new direction, I'll give you
> > that.
>
> What goalpost has moved? I am saying that there is nothing in the
> reported result that suggests _random_ mutation is responsible for
> the observed adaptation. At best it shows that the particular
> mutation they found was responsible. But they show nothing that
> would indicate whether the mutation was random or guided/intelligent.

We can consider guided mutation if someone presents a hypothesis that
explains the observed adaptation better. Random mutation is the null
hypothesis and you have to present evidence against it yourself.

> >>Hence, this particular discovery as it stands, seems at
> >>least as favorable to ID = IM + NS as to ND = RM + NS,
> >>since all they have shown is the existence of M + NS,
> >>a fact consistent with ID and ND models of evolution.
> >
> > So the designer actively intervened in the evolution of these mice
> > during the last 6000 years? Interesting.... how, exactly?
>
> There could be an intelligent agency guiding mutation faster than
> random search toward the favorable DNA configuration.

And *how* is it doing that? Devise an experiment to test it.

What are you smoking?

> Hence, it is perfectly conceivable that our physical,
> chemical, biological... laws are an extremely crude
> picture of an activity by an unimaginably powerful
> underlying intelligence (vast distributed computer
> running 1e16 times faster and having (1e16)^3 ~ 1e50
> times more components than the intelligent processes
> we are familiar with at our level). In addition to
> providing support for ID model of evolution, this
> kind of model could also be a rational alternative
> to the 'anthropic principle' in explaining the fine
> tuning of physical constants.

So we're in the Matrix?

-- w.

Kermit

Jul 8, 2006, 9:42:42 AM

Windy wrote:
> nightlight wrote:
> > Windy wrote:

<snip OC pseudo-equations>

Apparently it's time to acknowledge a new class of anti-evolutionists -
the Matrixers (Matrices?). They may or may not be religious, but if so
they're more mystics ("Is that a real poncho, or is that a Sears
poncho?") than biblical literalists.

This is not the first time I have heard of a "universe as computer"
scenario, but they seem to be on the rise since those movies came out.

I have often presented a Matrix as one alternative to mainstream
evolutionary science, but I always present it as a non-testable model -
it fits the facts, but there is no way to get supporting *or* refuting
evidence, so it doesn't matter how you try to present the null
hypothesis.

I suggest God is a nerd living in his Mom's basement, who can't get a
date, who is playing The Sims version 47 or somesuch. The ones who take
it seriously always see the computer itself as wise and conscious, and
something with which we will someday meld or something which dispenses
power or knowledge, etc. It may be true, but until Norbert the
Omnipotent Nerdness manifests himself to us subroutines, there would be
no way to confirm the universe as virtual reality, even using math :P

Besides, like aliens seeding life on Earth, it just pushes the ultimate
questions back further out of reach. I do not see this as a desirable
possibility.

>
> -- w.

Kermit,
who needs both hands to bend the spoon, just like Uri Geller

hersheyhv

Jul 8, 2006, 11:54:44 AM

nightlight wrote:
> michael...@worldnet.att.net wrote:
>
> > Now researchers have identified a genetic mutation
> > that underlies natural selection for the sand-matching
> > coat color of the beach mice, an adaptive trait that
> > camouflages them from aerial predators....
>
> It doesn't appear they have shown that the mutation
> was _random_. They only found a variant of a gene
> which is responsible for the lighter color.

All variants of genes start out as mutations. Although there are some
"domesticated" mutational processes that occur at a higher (sometimes
strikingly higher) rate in particular cells (e.g. our immune system or
the switch of mating type in haploid yeast), even these are largely
random (wrt need) events. There is no reason to believe that the
single mutational event that produced the color variant described here
is any more or less random (wrt need) than the mutation of bacteria to
strep resistance or the mutation(s) --there are several -- that
produces melanic moths or melanic mice in the desert southwest. The
mutation that produces this color variant happens from time to time.
Since you are the one claiming that this mutation is somehow different
from the vast, nay, overwhelming (in the sense that 99.99999% or more
is overwhelming) majority of mutations, which occur from time to time
without respect to the need for the mutation, it is *your*
responsibility to demonstrate why this mutation is different from
almost all other mutations.

That said, if the mutation is a point mutation, the likely probability
of it occurring in any one individual offspring is between 10^-6 and
10^-11 (with a modal peak around 10^-8 to 10^-9). That is, admittedly,
a wide range of probabilities (because the rate of mutation *is*
dependent upon local features of sequence and the type of point
mutation involved -- e.g. transversions are rarer than transitions, not
because the rate varies according to need).

It would also help to know whether the phenotypic effect is dominant
or recessive. [If recessive, there is some probability that the
*population* of mice that invaded these islands already carried the
allele.] If dominant, OTOH, its phenotypic effect would be observed
(or, in this case, not observed by predators) as soon as it occurred
in the island population.

This per-offspring probability (strictly, this applies only if the
trait is dominant) needs to be multiplied by the mean number of mice
on the islands and the number of generations until the mutation
occurred. Then there would have to be a factor to take account of
the probability that, on that particular occasion, the beneficial
mutation would be lost by chance. That would give you a very rough
idea of the probability of this event. Without even knowing the
other numbers, I can say that such a mutational event occurring
sometime within 6000 years is certainly plausible.
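This back-of-the-envelope argument can be sketched in a few lines of
Python. Every number below (per-offspring mutation rate, population
size, generations per year) is an assumed illustrative value taken
from the ranges mentioned above, not a measurement from the paper:

```python
# Illustrative assumptions, drawn from the ranges quoted in the post.
mu = 1e-8            # chance of this specific point mutation per offspring
pop_size = 10_000    # assumed mean number of mice on the islands
gens_per_year = 2.5  # assumed 2-3 generations per year
years = 6_000        # maximum age of the barrier islands

# Total offspring in which the mutation could have arisen.
trials = int(pop_size * gens_per_year * years)

# Probability the mutation arises at least once in that many
# independent trials: 1 - (1 - mu)^trials.
p_at_least_once = 1 - (1 - mu) ** trials

print(f"expected occurrences: {mu * trials:.2f}")
print(f"P(at least one occurrence): {p_at_least_once:.3f}")
```

Note this sketch omits the factor mentioned above for a new mutant
being lost by chance, so the real figure would be somewhat lower; it
only shows that the order of magnitude is not prohibitive.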

For recessive traits, the more probable starting point is the frequency
of the allele in the founding population on the islands, which can be
significantly higher than the mutation frequency. Then the amount of
inbreeding would play a significant role in producing the mice with the
now beneficial variant trait, resulting in their exposure to the
selective environment.

> a) If the given population size for a given time can
> produce altogether N mutations for all individuals,
>
> b) and if there are total of T theoretically possible
> mutations at a comparable distance from the baseline
> mice genome as the Mc1r mutation they found,
>
> then the probability that this mutation will happen by
> chance at least once in these N available tries is:
>
> P(N,T) = 1 - (1-1/T)^N ~ 1 - exp(-N/T) ... (1)
>
> For small values of N/T this is approximately:
>
> P(N,T) ~ N/T ... (2)
>
> (If there are F favorable color mutations among T, then
> N/T in (2) would be multiplied by F. That refinement is
> irrelevant for the point being made below.)
>
> I don't see in the article any hint of estimates for P(N,T)
> in order to establish what are the odds that a _random_ mutation
> can "find" the right solution (any of the F favorable color mutations)
> in the given number of tries N.
>
> They need to estimate P(N,T), then show that P(N,T) is fairly
> high e.g. 50% or above in order to claim that neo-Darwinian model
> (ND = RM + NS = Random Mutation + Natural Selection) has better
> than 50% chance of producing this adaptation. Otherwise,
> what is left is guided (intelligent) mutation.
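As a purely numerical aside, the quoted formulas (1) and (2) behave as
stated. A quick check, with N and T as arbitrary illustrative values
(not estimates of the actual mutational search space):

```python
import math

def p_exact(N, T):
    # 1 - (1 - 1/T)^N: chance of drawing one specific configuration
    # at least once in N independent uniform draws from T possibilities.
    return 1 - (1 - 1 / T) ** N

def p_approx(N, T):
    # The exponential form from eq. (1): 1 - exp(-N/T).
    return 1 - math.exp(-N / T)

# For N much smaller than T, both reduce to roughly N/T, as eq. (2) says.
N, T = 1_000, 1_000_000
print(p_exact(N, T), p_approx(N, T), N / T)
```

With these values all three quantities agree to within about one part
in a thousand, which is all eq. (2) claims for small N/T.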

No mutation has ever been observed to be guided by any intelligence.
You are proposing a mechanism (guided mutation or mutation that occurs
as a result of need) that has been diligently searched for and has
never been observed. Lamarckism is dead. Killed by a lack of
evidence. If you can resurrect that particular rancid corpse by
presenting evidence that *some* specific point mutations (which is
what we are talking about) occur as a result of need, please do so.
Until then, you are proposing a mechanism that has uniformly and
consistently been shown not to occur in nature. If your claim is
that an intelligent agent capable of altering the genome of the
island mice was responsible, one way to support your claim would be
to present empirical evidence of such an intelligent agent, capable
of performing such genomic magic, having existed at the right time
and place.

> Hence, this particular discovery as it stands, seems at
> least as favorable to ID = IM + NS as to ND = RM + NS,
> since all they have shown is the existence of M + NS,
> a fact consistent with ID and ND models of evolution.

Not really. ID = IM + NS requires the existence of the hypothetical
"I" agent, for which there is zero evidence. RM, OTOH, is consistent
with all known mechanisms of natural mutation. And the probability of
RM producing a point mutation for any normal point mutation variant
within, say, 500 years, with a mean population size of, say, 10,000
mice with 2-3 generations per year, is actually pretty good. And if
the mutant allele is recessive, the starting point might not involve a
"new" mutation but a variant that is already present (but it would be a
deleterious variant in the other environment). That the same variant
can be deleterious in one environment and beneficial in another would
simply mean something that creationists have a hard time grasping, that
the terms "deleterious" and "beneficial" are conditional adjectives,
not inherent properties, of the variant being described.

Because the ID = IM + NS hypothesis requires something that is neither
supported by independent evidence (unlike the 'random wrt need' nature
of point mutations) nor is such an entity necessary to explain what has
been observed, the famous blade of Ockham must slice it off.

nightlight

Jul 8, 2006, 12:06:37 PM7/8/06
Windy wrote:

>>In contrast, the ID model says that those N points are not
>>chosen randomly from S1, but are guided by some 'intelligent
>>agency' which allows it to find favorable configurations
>>from S1 faster than the random search does i.e. the ID model
>>says that if we observe a series of such adaptation processes,
>>then the rate of observed favorable adaptations will be
>>greater than the rate predicted by the neo-Darwinian model
>>(implied by particular P(N,T) in each process instance).
>
>
> OK.. if favorable adaptations occur at a high rate, this is evidence
> for ID?

It would be evidence that the neo-Darwinian model (RM+NS) is
an unlikely explanation of the observed adaptation and that
a more efficient (than random) search algorithm is responsible
for the adaptation. By convention, we can call this more
efficient algorithm an 'intelligent' or guided mutation,
or generally an ID algorithm.

> What about if favorable adaptations did not occur or occur at a
> lower rate? Is that evidence against ID?
>

I think you meant "higher rate" not "lower rate" above (otherwise
what you wrote is gibberish). Assuming this correction,
it would be evidence that the 'intelligent agency' (IA) model
is not necessary to explain that particular adaptation. That
doesn't exclude a need for the IA model in order to explain
some other adaptations. It doesn't even exclude the IA
participation in that example. It only means that what
was observed does not discriminate for or against IA and
that one might need additional data to make such a discrimination.

(The latter observation is important, since a coherent IA theory
should not cherry-pick which processes the IA participates in
and which it chooses not to be involved with.)

Consider an 'intelligent agency' which we know to exist in nature
(human brain) applying its efforts to stock market trading. One
could trade using all his knowledge and foresight and not do
any better than chance on any particular day or a week or
throughout the whole trading career. Could you declare that
he was picking randomly if his gains don't exceed random
ones for some given span of time? You can't. You would
need more data, and possibly a different kind of data (such as
direct observation of his trading or an interview), to support
such a conclusion.

In any case, what is your point? How is your question related
to my observation that there is nothing in the article, no
calculation or estimation of predictions of any mutation model
(random or any other), let alone any comparison of such
predictions with the empirical facts observed? There is simply
nothing there for neo-Darwinians to crow about. If you saw
something to crow about, you haven't shown as yet what that
might be.

Instead, so far you have been quite desperate in trying
to divert the discussion to your little collection of pet
strawmen while showing off your repertoire of expletives.
Either say something of substance or stay quiet and let
someone who may have a better understanding of the subject
being discussed have a turn defending the neo-Darwinian
model. Your response was and continues to be so inpet that
you may well be a supporter of Pat Robertson's theory of
evolution acting here under the false flag, trying to
make neo-Darwinians look stupid and primitive.

>
>>The point of my argument is that there is nothing in the article
>>for neo-Darwinians to crow about.
>
>
> Sez you.

Well, can you cite some calculation from the paper demonstrating
that the observed mutation was 'most probably' (e.g. 50% or
better) random?

>>Where did I "bring down" any probability. If you roll a dice with T=6
>>possible outcomes, N=2 times, the probability that you will _not_ get,
>>say, number 5 in those 2 throws, is P(no_5) = (1-1/6)^2. Hence the
>>probability that you will get 5 at least once is 1 - P(no_5) =
>>1 - (1-1/6)^2, which is what eq. (1) is for some general numbers T
>>and N (which are the sizes of sets S1 and SN). Now Mr. Genius, show
>>us your formulae for this type of probabilities.
>
>
> The genome is not a die. Mutations can occur independently of other
> mutations (but in rare occasions don't, but that is irrelevant here).
> At the very least you would have to represent all nucleotides with
> their own dice. Then *each* nucleotide has some probability of mutating
> represented by the mutation rate. Adding more nucleotides to your
> observation is like adding more dice, not adding more sides to the die.
> (Which is what you are doing now with your assumption that T=possible
> outcomes).

In this case they were talking about a particular _single_
nucleotide mutation (not some chain of mutations compounding
in the presence of natural selection which would require conditional
probabilities in the reasoning). The number of sides for
the neo-Darwinian dice in the case of single nucleotide mutation
is then simply the number of all DNA configurations which differ
from the starting configuration by a single nucleotide. Hence,
to roughly estimate the size of this set of configurations, the
number T, one would need to multiply the number of nucleotides
by some average number of variants per nucleotide.
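That count of T can be made explicit. Both numbers below (a
mouse-scale genome length and 3 alternative bases per site) are
assumed illustrative figures, not values taken from the paper:

```python
# Assumed, illustrative figures -- not from the paper.
genome_length = 2_500_000_000  # rough mouse-scale genome size, in bp
variants_per_site = 3          # each nucleotide can change to 3 others

# T: the number of DNA configurations exactly one substitution
# away from the starting genome.
T = genome_length * variants_per_site
print(f"T = {T:.1e}")  # 7.5e+09
```

This crude count treats every substitution as equally accessible,
which the thread itself notes is false (transversions are rarer than
transitions); it is only the combinatorial size the argument needs.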

Note also that since the random mutation model does not have a
'look-ahead' capability, and since natural selection occurs _after_
the mutation, you cannot reduce the size of T by discarding
'non-viable' mutations in your count of configurations that
RM has to explore. You have one configuration available,
viable or otherwise, per random mutation, and only natural
selection can decide what is viable and what is unviable (hence
each try costs you one potentially available offspring).

Let's also note here that you're again not answering the original
objection: where does the paper compute any 'random mutation'
model predictions, much less compare such predictions with
the observed data? The argument you are pursuing, about what
is the best way to obtain such estimates, is entirely irrelevant,
since they haven't shown any way, optimal or suboptimal, to be
debated.


>>What goalpost has moved? I am saying that there is nothing in the
>>reported result that suggests _random_ mutation is responsible for
>>the observed adaptation. At best it shows that the particular
>>mutation they found was responsible. But they show nothing that
>>would indicate whether the mutation was random or guided/intelligent.
>
>
> We can consider guided mutation if someone presents a hypothesis that
> explains the observed adaptation better. Random mutation is the null
> hypothesis and you have to present evidence against it yourself.

The article does not make any estimates as to how any hypothesis,
random or any other, performs against the observed data. Hence
it provides no evidence in support or against any particular
mutation model. If you wish to crow that their empirical data
support 'RM' model, you need to calculate prediction of the RM
model and compare them to the data. Otherwise, in the absence of
any model calculations and comparisons to the data, the data is
neutral with respect to the mutation model. Absent any model
predictions, your "RM is the null hypothesis" is a vacuous
euphemism, providing precisely zero bits of information about
the nature of the observed mutation.


>>>So the designer actively intervened in the evolution of these mice
>>>during the last 6000 years? Interesting.... how, exactly?
>>
>>There could be an intelligent agency guiding mutation faster than
>>random search toward the favorable DNA configuration.
>
>
> And *how* is it doing that? Devise an experiment to test it.


There are plenty of ways that an 'intelligent agency' could exist
and perform directed mutations. I merely pointed out a couple of
possibilities which are not a priori excluded by the presently
known laws of physics.

We also already know that natural processes which intelligently
guide mutations exist in nature (e.g. brains of molecular
biologists). The empirically established existence of such
processes is a direct counter-example to a conjecture
that such natural processes are excluded by laws of nature.
They are obviously not excluded by the laws of nature since
they do exist in nature.

Hence your "point" about '6000 years', which is what I was
responding to above, is a vacuous strawman. Pat Robertson's
model of evolution is certainly not the sole alternative
to the neo-Darwinian model.


>>Hence, it is perfectly conceivable that our physical,
>>chemical, biological... laws are an extremely crude
>>picture of an activity by an unimaginably powerful
>>underlying intelligence (vast distributed computer
>>running 1e16 times faster and having (1e16)^3 ~ 1e50
>>times more components than the intelligent processes
>>we are familiar with at our level). In addition to
>>providing support for ID model of evolution, this
>>kind of model could also be a rational alternative
>>to the 'anthropic principle' in explaining the fine
>>tuning of physical constants.
>
>
> So we're in the Matrix?

I suppose, if one had to put it in terms understandable to
a simpleton whose science education consists of going to
the movies, one might put it that way.

nightlight

Jul 8, 2006, 1:34:29 PM7/8/06
hersheyhv wrote:

>>It doesn't appear they have shown that the mutation
>>was _random_. They only found a variant of a gene
>>which is responsible for the lighter color.
>
>

> random (wrt need) events. There is no reason to believe that the
> single mutational event that produced the color variant described here
> is any more or less random (wrt need) than the mutation of bacteria to
> strep resistance or the mutation(s) --there are several -- that
> produces melanic moths or melanic mice in the desert southwest.

You are confusing the absence of evidence, due to 'no reason to
believe' (as far as you know), with evidence of the absence of
such a phenomenon (directed mutation).

> That said, if the mutation is a point mutation, the likely probability
> of it occurring in any one individual offspring is between 10^-6 and
> 10^-11 (with a modal peak around 10^-8 to 10^-9).

You are confusing empirically observed rates of mutations
with the predictions of such rates by some mathematical model,
such as random mutation model. As explained in the first post,
the RM model is an assumption about the algorithm used to explore
the space of DNA configurations. Specifically, the RM assumes
that the 'next' configuration to be explored is randomly selected
among all accessible configurations (consistent with physical laws).
Like any such search algorithm, the RM assumption implies a certain
probability of success for a given space of physically accessible
configurations (size of which was denoted as number T in (b))
and given number of tries (denoted as number N in (a)). To test
whether the RM search algorithm is a likely explanation of the
observed adaptation (under the given population size and time
constraints) one needs to estimate these spaces and compute
the RM implied probability. Only if the computed probability is
high enough can you crow that the observation confirms the
RM model (as the likely mechanism). Neither you nor the article
provide any such RM model prediction.

The figures for empirical rates of mutations bear no relation
to their origin. After all, there is also some empirical
rate of mutations which are known to be intelligently guided,
such as those within the bio-technology (in agriculture,
academic research etc). By your logic, merely measuring
the empirical rate of these mutations and citing the obtained
figure, "proves" that these mutations are random. Of course not.

The empirical rate of mutation is relevant only if you wish
to show that a mutation (of whatever nature) is responsible
for the observed adaptation. That is not being debated here.
We take for granted that the paper has demonstrated not just
the genetic nature of this adaptation but also found the
specific mutation responsible. Hence, your citations of
the empirical rates of mutations are a strawman argument in
the context of this debate.


> No mutation has ever been observed to be guided by any intelligence.

How about those in bio-technology? That merely shows that
natural processes which guide mutations intelligently do
not contradict the natural laws. Such natural processes can
exist and do exist.

> You are proposing a mechanism (guided mutation or mutation that occurs
> as a result of need) that has been diligently searched for and has
> never been observed.

Again, you're confusing absence of evidence, or even the failure
to examine and recognize the evidence, with the evidence of
absence.

If you want to demonstrate that the mouse adaptation discussed
shows that the search algorithm in the space of DNA configurations
was random (this was just a single nucleotide mutation, the
simplest case you can have), go ahead. Compute the probability
of success for such a random search algorithm for the given space
of possibilities (all configurations one nucleotide change
away from the initial state) and the given number of tries
(which requires population sizes, conception rates, duration)
and show that this algorithm has a 'good' probability of success
under the constraints given.

Recall that ID and ND differ for these simple adaptations only
in their search algorithm, in how each picks the 'next'
configuration to explore (which in turn implies probabilities
of success of each algorithm under any given constraints).


> If your claim is that an intelligent agent capable of
> altering the genome of the island mice was responsible,
> one way to support your claim would be to present
> empirical evidence of such an intelligent agent capable
> of performing such genomic magic having existed at the
> right time and place.

My claim is that the article shows nothing in favor of or against
the ND = RM + NS vs ID = IM + NS models of the adaptation observed.
It shows only M + NS, which is common to both models. Hence
there is nothing here for neo-Darwinians to crow about.

As to what might be the nature of 'intelligent agency' guiding
the mutations, that is a separate issue from the point I was
making. We all agree that natural processes exist on Earth
which guide mutations intelligently (the natural processes
we call bio-technology or genetic engineering). Your argument
above seems to be based on the conjecture that no such natural
process is consistent with natural laws. The direct
counter-example (pointing you to the existence of such
natural processes) invalidates your conjecture. Try
something better.


>>Hence, this particular discovery as it stands, seems at
>>least as favorable to ID = IM + NS as to ND = RM + NS,
>>since all they have shown is the existence of M + NS,
>>a fact consistent with ID and ND models of evolution.
>
>
> Not really. ID = IM + NS requires the existence of the hypothetical
> "I" agent, for which there is zero evidence.

This paper, which is what I was referring to above, has no
discussion, let alone tests, of the I-agency hypothesis. Now, you
and I may have this or that view on the existence or plausibility
of the I-agency, but that is unrelated to what the paper and its
empirical facts show, or fail to show. In particular they do not
show that the observed adaptation and its genetic mechanism
favor any particular search algorithm, the RM or IM based method,
in the space of DNA configurations accessible to mutations.


> Because the ID = IM + NS hypothesis requires something that is neither
> supported by independent evidence (unlike the 'random wrt need' nature
> of point mutations) nor is such an entity necessary to explain
> what has been observed, the famous blade of Ockham must slice it
> off.


Ockham's razor doesn't apply unless you can show that the
alternative models are equally consistent with the empirical
evidence. Merely failing to evaluate the consistency of the
alternative models A and B against the empirical evidence
does not allow you to declare that A is simpler, hence it
is a better model by Ockham's razor.

Otherwise, someone could have a "theory" A of free fall, which
says that all objects fall at a constant speed, then refuse
to make a specific prediction of his "theory" since by
_your_ interpretation of Ockham's razor, that prediction is
not necessary because his "theory" is preferable to the
alternative theory B, which says that the free fall motion
is accelerated, since theory A is simpler than theory B.


nightlight

Jul 8, 2006, 1:47:10 PM7/8/06
nightlight wrote:

> Windy wrote:
>
>> What about if favorable adaptations did not occur or occur at a
>> lower rate? Is that evidence against ID?
>>
>
> I think you meant "higher rate" not "lower rate" above (otherwise
> what you wrote is gibberish). Assuming this correction, ...

Oops: your wording was merely grammatically awkward (mixed-up
tenses), not semantically ambiguous as I suggested. Hence,
scratch my correction; the reply that followed it responded
to what you actually meant to say above.

Lee Bowman

Jul 8, 2006, 2:51:34 PM7/8/06
On Sat, 08 Jul 2006 13:34:29 -0400, nightlight
<nightli...@skip.omegapoint.com> wrote:

>hersheyhv wrote:

>This paper, which is what I was referring to above, has no
>discussion, let alone tests, of the I-agency hypothesis. Now, you
>and I may have this or that view on the existence or plausibility
>of the I-agency, but that is unrelated to what the paper and its
>empirical facts show, or fail to show. In particular they do not
>show that the observed adaptation and its genetic mechanism
>favor any particular search algorithm, the RM or IM based method,
>in the space of DNA configurations accessible to mutations.

To go a step further with the interventionary model (ID = IM + NS),
would you consider as a possibility ID = IM + IS?
(or + ISS, 'Intelligently Specified Selection')?

> > Because the ID = IM + NS hypothesis requires something that is neither
> > supported by independent evidence (unlike the 'random wrt need' nature
> > of point mutations) nor is such an entity necessary to explain
> > what has been observed, the famous blade of Ockham must slice it
> > off.
>
>
>Ockham's razor doesn't apply unless you can show that the
>alternative models are equally consistent with the empirical
>evidence. Merely failing to evaluate the consistency of the
>alternative models A and B against the empirical evidence
>does not allow you to declare that A is simpler, hence it
>is a better model by Ockham's razor.
>
>Otherwise, someone could have a "theory" A of free fall, which
>says that all objects fall at a constant speed, then refuse
>to make a specific prediction of his "theory" since by
>_your_ interpretation of Ockham's razor, that prediction is
>not necessary because his "theory" is preferable to the
>alternative theory B, which says that the free fall motion
>is accelerated, since theory A is simpler than theory B.

I agree. Given the observed complexity of biologic systems, I feel
that Ockham's Razor is an outdated philosophy, and quite risky to
invoke. Richard Dawkins once appealed to it to rule out supernatural
causation of life, stating that a naturalistic cause was simpler, and
thus more viable.

i.e. "God as an unnecessary rider in an otherwise perfectly acceptable
scientific theory of life's origins"

Didn't mean to take this thread in a new direction by the above
philosophical rant. Primarily wanted to get your (and others') take
on:

ID = IM + IS or
ID = IM + ISS

*assuming* an intelligent agency other than human existed

Windy

Jul 8, 2006, 4:52:25 PM7/8/06

nightlight wrote:
> Windy wrote:
(snip)

> > What about if favorable adaptations did not occur or occur at a
> > lower rate? Is that evidence against ID?
> >
> I think you meant "higher rate" not "lower rate" above (otherwise
> what you wrote is gibberish). Assuming this correction,
> it would be evidence that the 'intelligent agency' (IA) model
> is not necessary to explain that particular adaptation. That
> doesn't exclude a need for the IA model in order to explain
> some other adaptations. It doesn't even exclude the IA
> participation in that example. It only means that what
> was observed does not discriminate for or against IA and
> that one might need additional data to make such a discrimination.
> (The latter observation is important since a coherent IA theory
> should not cherry pick in which processes the IA participates
> and in which it choses not to be involved with.)

Good. But you are aware, I trust, that a low rate of favourable
adaptations emerging in nature is often cited as evidence against
evolution? I was therefore wondering why now a high rate of favourable
adaptations is evidence against evolution, too. It couldn't be because
evolution skeptics are twisting their theories to accommodate all
possible results, now could it?

> There is simply
> nothing there for neo-Darwinians to crow about. If you saw
> something to crow about, you haven't shown as yet what that
> might be.

You haven't shown as yet why the null hypothesis of random mutation
should be discarded. Like it or not, that is the null hypothesis at
present. If you don't like it, do your own research.

> Instead, so far you have been quite desperate in trying
> to divert the discussion to your little collection of pet
> strawmen while showing off your repertoire of expletives.

Now I'm hurt. I would have used much better expletives if I'd known you
were counting, douchebag.

> Either say something of substance or stay quiet

Says a veritable veteran with a total of five posts.

> and let
> someone who may have a better understanding of the subject
> being discussed have a turn defending the neo-Darwinian
> model.

I imagine most people are rather turned off by your wordy,
self-important bullshit.

> Your response was and continues to be so inpet that

Perhaps you mean "inept"?

> you may well be a supporter of Pat Robertson's theory of
> evolution acting here under the false flag, trying to
> make neo-Darwinians look stupid and primitive.

Fuck you and the mouse you rode in on.

> >>Where did I "bring down" any probability. If you roll a dice with T=6
> >>possible outcomes, N=2 times, the probability that you will _not_ get,
> >>say, number 5 in those 2 throws, is P(no_5) = (1-1/6)^2. Hence the
> >>probability that you will get 5 at least once is 1 - P(no_5) =
> >>1 - (1-1/6)^2, which is what eq. (1) is for some general numbers T
> >>and N (which are the sizes of sets S1 and SN). Now Mr. Genius, show
> >>us your formulae for this type of probabilities.
> >
> > The genome is not a die. Mutations can occur independently of other
> > mutations (but in rare occasions don't, but that is irrelevant here).
> > At the very least you would have to represent all nucleotides with
> > their own dice. Then *each* nucleotide has some probability of mutating
> > represented by the mutation rate. Adding more nucleotides to your
> > observation is like adding more dice, not adding more sides to the die.
> > (Which is what you are doing now with your assumption that T=possible
> > outcomes).
>
> In this case they were talking about a particular _single_
> nucleotide mutation (not some chain of mutations compounding
> in the presence of natural selection which would require conditional
> probabilities in the reasoning). The number of sides for
> the neo-Darwinian dice in the case of single nucleotide mutation
> is then simply the number of all DNA configurations which differ
> from the starting configuration by a single nucleotide.

No, it's not the "number of sides", and I just explained why. Fine,
stay stupid.

> Let's also note here that you're again not answering the original
> objection: where does the paper compute any 'random mutation'
> model predictions, much less compare such predictions with
> the observed data?

Such computations are not customary since the body of evidence points
to mutations being random. Again, if you don't like it, do your own
research.

> >>What goalpost has moved? I am saying that there is nothing in the
> >>reported result that suggests _random_ mutation is responsible for
> >>the observed adaptation. At best it shows that the particular
> >>mutation they found was responsible. But they show nothing that
> >>would indicate whether the mutation was random or guided/intelligent.
> >
> > We can consider guided mutation if someone presents a hypothesis that
> > explains the observed adaptation better. Random mutation is the null
> > hypothesis and you have to present evidence against it yourself.
>
> The article does not make any estimates as to how any hypothesis,
> random or any other, performs against the observed data. Hence
> it provides no evidence in support or against any particular
> mutation model. If you wish to crow that their empirical data
> support 'RM' model, you need to calculate prediction of the RM
> model and compare them to the data. Otherwise, in the absence of
> any model calculations and comparisons to the data, the data is
> neutral with respect to the mutation model. Absent any model
> predictions, your "RM is the null hypothesis" is a vacuous
> euphemism, providing precisely zero bits of information about
> the nature of the observed mutation.

Sour grapes.

> >>>So the designer actively intervened in the evolution of these mice
> >>>during the last 6000 years? Interesting.... how, exactly?
> >>
> >>There could be an intelligent agency guiding mutation faster than
> >>random search toward the favorable DNA configuration.
> >
> > And *how* is it doing that? Devise an experiment to test it.
>
> There are plenty of ways that an 'intelligent agency' could exist
> and perform directed mutations. I merely pointed out a couple of
> possibilities which are not a priori excluded by the presently
> known laws of physics.
>
> We also already know that natural processes which intelligently
> guide mutations exist in nature (e.g. brains of molecular
> biologists).

They can't "guide mutations"; they either accelerate the rate of random
mutations using mutagens or develop transgenic organisms through a
laborious process. The second approach tends to leave plenty of
physical evidence.

> Hence your "point" about '6000 years', which is what I was
> responding to above, is a vacuous strawman. Pat Robertson's
> model of evolution is certainly not the sole alternative
> to the neo-Darwinian model.

The mice have existed on the dunes for at most 6000 years (real years,
not biblical), hence the time limit for the adaptation. Way to read for
comprehension, dunce.

> >>Hence, it is perfectly conceivable that our physical,
> >>chemical, biological... laws are an extremely crude
> >>picture of an activity by an unimaginably powerful
> >>underlying intelligence (vast distributed computer
> >>running 1e16 times faster and having (1e16)^3 ~ 1e48
> >>times more components than the intelligent processes
> >>we are familiar with at our level). In addition to
> >>providing support for ID model of evolution, this
> >>kind of model could also be a rational alternative
> >>to the 'anthropic principle' in explaining the fine
> >>tuning of physical constants.
> >
> > So we're in the Matrix?
>
> I suppose, if one had to put it in terms understandable to
> a simpleton whose science education consists of going to
> the movies, one might put it that way.

Not really, but better that than a moron pushing idiotic word salad
without understanding a single thing about the probability of
mutations.

-- w.

nightlight

Jul 8, 2006, 6:29:22 PM
Lee Bowman wrote:

> On Sat, 08 Jul 2006 13:34:29 -0400, nightlight
> <nightli...@skip.omegapoint.com> wrote:
>
>
>>hersheyhv wrote:
>
>
> >>This paper, which is what I was referring to above, has no
> >>discussion, let alone tests, of the I-agency hypothesis. Now, you
>>and I may have this or that view on the existence or plausibility
>>of the I-agency, but that is unrelated to what the paper and its
>>empirical facts show, or fail to show. In particular they do not
>>show that the observed adaptation and its genetic mechanism
>>favor any particular search algorithm, the RM or IM based method,
>>in the space of DNA configurations accessible to mutations.
>
>
> To go a step further with the interventionary model (ID = IM + NS),
> would you consider as a possibility ID = IM + IS?
> (or + ISS, 'Intelligently Specified Selection)?
>
>
>>>Because the ID = IM + NS hypothesis requires something that is neither
>>>supported by independent evidence (unlike the 'random wrt need' nature
>>>of point mutations) nor is such an entity necessary to explain
>>>what has been observed, the famous blade of Ockham must slice it
>>>off.
>>
>>

>

> I agree. Given the observed complexity of biologic systems, I feel
> that Ockham's Razor is an outdated philosophy, and quite risky to
> invoke. Richard Dawkins once referred to it to rule out a supernatural
> causation of life, stating that a naturalistic cause was simpler, and
> thus more viable.
> i.e. "God as an unnecessary rider in an otherwise perfectly acceptable
> scientific theory of life's origins"

He was misapplying Ockham's razor. The proper caveat was best
stated by A. Einstein: "Make everything as simple as possible,
but not simpler."

> Didn't mean to take this thread in a new direction by the above
> philosophical rant. Primarily wanted to get your (and others) take
> on:
>
> ID = IM + IS or
> ID = IM + ISS
>
> *assuming* an intelligent agency other than human existed

The selection of reproductive mates is an 'intelligent selection'
(IS & ISS). In my view the 'intelligence' (foresight, strategizing)
and the 'mind stuff' (the stuff that answers for 'you' a question:
what is it like to be such and such arrangement of atoms and
fields that make up 'you') is manifest not just in humans or
higher animals, and not just in 'live' or 'material' nature,
but in everything from the most elemental 'elementary' particles
and fields, through the larger complex systems (intelligent
networks) in the material and the abstract realms, such as
biochemical reaction webs within cells, immune systems, animal
and human brains, technologies, sciences, languages, cultures,
religions, human societies, gene pools, ecosystems,...

I visualize the overall structure of this vast hierarchy of
mutually permeating, overlapping and nesting intelligent
networks and sub-networks, sharing the same elements
and motions, each network in 'pursuit of its own happiness',
as a gigantic, multi-dimensional crossword puzzle, with words
at one level serving as letters at the next level, and where
each 'letter' belongs to multitudes of 'words', 'super-words'...
all in unceasing, busy little motions at all levels and along
all dimensions, endlessly harmonizing itself in pursuit of an
ever more perfect 'solution', at ever higher levels. As the
near perfect harmonization is achieved at the lower/inner
levels, the motions of their elements become increasingly
more regular and repetitive (such as the simple periodic
oscillations at the level of physical particles and fields),
their creative spark extinguished and the disquietudes of
fragile individuality given away in return for a safe,
predictable harmony and eternal fulfillment in serving
obediently the increasingly more delicate flame as it
advances its ever narrowing edge to the levels above.

In this kind of larger picture, the 'natural selection' and
'mutations' can be seen as slightly different forms of these
little harmonizing motions, where each organism is continually
adapting to (or harmonizing with) its environment and the
environment adapting/harmonizing with the organism. Each
seeks to become more predictable to the other by harmonizing
its own motions to the internal model the other has of these
motions, while the respective models in turn are seeking to
anticipate the motions of the other as closely as possible.

Nic

Jul 8, 2006, 6:51:06 PM

Not sure I see.

Intelligent selection is often used when an intelligent agent cannot
influence what is originally on offer. I mean if you're a would-be
meddler, then you can meddlingly select from a set you can't meddlingly
control, or you can simply meddlingly control. You wouldn't need to do
both.

Or am I missing the point?

Lee Bowman

Jul 8, 2006, 7:33:03 PM
On 8 Jul 2006 15:51:06 -0700, "Nic" <harris...@hotmail.com> wrote:

>
>Lee Bowman wrote:

>>
>> ID = IM + IS or
>> ID = IM + ISS
>>
>> *assuming* an intelligent agency other than human existed
>
>Not sure I see.
>
>Intelligent selection is often used when an intelligent agent cannot
>influence what is originally on offer. I mean if you're a would-be
>meddler, then you can meddlingly select from a set you can't meddlingly
>control, or you can simply meddlingly control. You wouldn't need to do
>both.

Gene tweaking could entail an allele alteration (mutation) followed by
manual selection (IS or ISS).

Hmm ... how about ID = IAA + ISS

>
>Or am I missing the point?
>

What point? My ramblings are often pointless.

I know, how about IM instead of ID
(Intelligent Meddling)

Lee Bowman

Jul 8, 2006, 7:49:11 PM
On Sat, 08 Jul 2006 18:29:22 -0400, nightlight
<nightli...@skip.omegapoint.com> wrote:

>Lee Bowman wrote:

>> i.e. "God as an unnecessary rider in an otherwise perfectly acceptable
>> scientific theory of life's origins"
>
>He was misapplying Ockham's razor. The proper caveat was best
>stated by A. Einstein: "Make everything as simple as possible,
>but not simpler."

Right. Or its corollary, KISS (keep it simple stupid). It probably
refers more to organizing your investments, office, projects, etc.
than to cosmology.

How about KICK (keep it cogent, kiddo).

>> Didn't mean to take this thread in a new direction by the above
>> philosophical rant. Primarily wanted to get your (and others) take
>> on:
>>
>> ID = IM + IS or
>> ID = IM + ISS
>>
>> *assuming* an intelligent agency other than human existed

>The selection of reproductive mates is an 'intelligent selection'
>(IS & ISS). In my view the 'intelligence' (foresight, strategizing)
>and the 'mind stuff' (the stuff that answers for 'you' a question:
>what is it like to be such and such arrangement of atoms and

>fields ...

<snip>

>... each seeking to

>become more predictable to the other by harmonizing
>its own motions to the internal model the other has of these
>motions, while the respective models in turn are seeking to
>anticipate the motions of the other as closely as possible.

Hey, didn't I read that in "Hitchhiker's Guide to the Astral Plane??

Nic

Jul 8, 2006, 7:54:14 PM

Lee Bowman wrote:
> On 8 Jul 2006 15:51:06 -0700, "Nic" <harris...@hotmail.com> wrote:
>
> >
> >Lee Bowman wrote:
>
> >>
> >> ID = IM + IS or
> >> ID = IM + ISS
> >>
> >> *assuming* an intelligent agency other than human existed
> >
> >Not sure I see.
> >
> >Intelligent selection is often used when an intelligent agent cannot
> >influence what is originally on offer. I mean if you're a would-be
> >meddler, then you can meddlingly select from a set you can't meddlingly
> >control, or you can simply meddlingly control. You wouldn't need to do
> >both.
>
> Gene tweaking could entail an allele alteration (mutation) followed by
> manual selection (IS or ISS).

In management speak, that would be micro-meddling! If that goes on,
then there's no possible excuse for there being any evil in the world.

nightlight

Jul 8, 2006, 8:56:54 PM
Lee Bowman wrote:
> Hey, didn't I read that in "Hitchhiker's Guide to the
> Astral Plane??

Never heard of that book, although I have probably
absorbed those images from somewhere. As to Astral
Projection, I haven't read about that subject
since a brief interest in high school.

John Wilkins

Jul 8, 2006, 10:15:37 PM
nightlight <nightli...@skip.omegapoint.com> wrote:

Don't go there...
--
John S. Wilkins, Postdoctoral Research Fellow, Biohumanities Project
University of Queensland - Blog: scienceblogs.com/evolvingthoughts
"He used... sarcasm. He knew all the tricks, dramatic irony, metaphor,
bathos, puns, parody, litotes and... satire. He was vicious."

hersheyhv

Jul 9, 2006, 1:50:42 AM

nightlight wrote:
> hersheyhv wrote:
>
> >>It doesn't appear they have shown that the mutation
> >>was _random_. They only found a variant of a gene
> >>which is responsible for the lighter color.
> >
> >
> > random (wrt need) events. There is no reason to believe that the
> > single mutational event that produced the color variant described here
> > is any more or less random (wrt need) than the mutation of bacteria to
> > strep resistance or the mutation(s) --there are several -- that
> > produces melanic moths or melanic mice in the desert southwest.
>
> You are confusing the absence of evidence due to 'no reason to
> believe' (as far as you know) with the evidence of absence of
> such phenomenon (directed mutation).

I am NOT confusing the absence of evidence with my statement. I am
doing a standard scientific inference. I am specifically *including*
as evidence relevant to this particular case the clear and
quantitatively massive evidence that no, zero, nada point mutational
event in nature that has actually been studied has ever been observed
to be guided. Saying that mutation is random wrt need is like saying
that the sun rises in the east. The sun has repeatedly been observed
to rise in the east.

Your claim that this particular point mutation could have been due to a
magical mutation fairy makes as much sense in science as saying that
the sun rose in the west on July 4th in 1777. You can certainly make
such a claim, and make up all sorts of excuses as to why nobody alive
at the time noticed, but no one would take such a claim seriously. In
science, if you are proposing an extraordinary explanation, *you* are
responsible for demonstrating that such an explanation is both possible
and at least as likely as one that is consistent with the way that
things are known to work. You are *specifically* claiming that this
mutation, unlike other similar point mutations in DNA from organisms
across the phylogenetic spectrum, is just as likely to have been poofed
into existence by a mutation fairy as to have occurred by known
mechanisms that produce point mutations. *You* need something to back
up such an extraordinary and unnecessary claim, such as independent
evidence that there actually is a mutation fairy that could have done
what you claim was done. Without that, your explanation is most
definitely NOT as good as an explanation that uses known mechanisms
without needing to posit the apparently hypothetical mutation fairy.

> > That said, if the mutation is a point mutation, the likely probability
> > of it occurring in any one individual offspring is between 10^-6 and
> > 10^-11 (with a modal peak around 10^-8 to 10^-9).
>
> You are confusing empirically observed rates of mutations
> with the predictions of such rates by some mathematical model,
> such as random mutation model.

No. I am saying that this is the *empirically observed* rate of point
mutation seen in a wide range of organisms and a wide range of genes
and a wide range of possible point mutations. Unless you have a better
estimate of the probability of point mutations, this is the range to
use. BTW, if the mutation needed to produce the phenotype is *any*
mutation that knocks out the gene's function, the rate of mutation to
the phenotype would be more frequent by a factor of 10-1000 since any
of the mutations that knock out the gene would produce the selective
phenotype.
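The arithmetic behind these figures is simple enough to sketch. With an empirically observed per-nucleotide mutation rate and a population size, the expected number of offspring per generation carrying a given new point mutation is just their product. All numeric values below are illustrative assumptions for the sake of the sketch, not figures from the paper:

```python
# Expected number of offspring per generation carrying a new point
# mutation, under the random-mutation model.
# All parameter values are illustrative assumptions, not data from the paper.

mu_point = 1e-8      # assumed per-nucleotide mutation rate per generation
pop_size = 10_000    # assumed breeding population of beach mice

# Case 1: one specific nucleotide change is required.
expected_mutants = mu_point * 1 * pop_size
print(f"specific point mutation: {expected_mutants:.1e} new mutants/generation")

# Case 2: *any* mutation knocking out the gene gives the phenotype,
# so the mutational target is many sites (here assumed ~300),
# raising the rate by the 10-1000x factor mentioned above.
expected_knockouts = mu_point * 300 * pop_size
print(f"any knockout mutation:   {expected_knockouts:.1e} new mutants/generation")
```

Even for a single specific site, a modest population produces the required mutant every few thousand generations on average, which is the point being made about the size of the mutational target.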

> As explained in the first post,
> the RM model is an assumption about the algorithm used to explore
> the space of DNA configurations.

And the probability I gave is the rate of point mutation at a
particular nucleotide per generation. That is the rate of exploration
of the DNA configuration of the ancestral organism by point mutations.
And this example is an example of a single point mutation.

> Specifically, the RM assumes
> that the 'next' configuration to be explored is randomly selected
> among all accessible configurations (consistent with physical laws).
> Like any such search algorithm, the RM assumption implies a certain
> probability of success for a given space of physically accessible
> configurations (size of which was denoted as number T in (b))
> and given number of tries (denoted as number N in (a)). To test
> whether the RM search algorithm is likely explanation of the
> observed adaptation (under the given population size and time
> constraints) one needs to estimate these spaces and compute
> the RM implied probability. Only if the computed probability is
> high enough, you can crow that the observation confirms the
> RM model (as the likely mechanism). Neither you nor the article
> provide any such RM model prediction.

You are engaged in babbling incoherently. RM is the rate of change in
the genome. In this case, all that is needed is the probability of
point mutations per nucleotide.

> The figures of empirical rates of mutations have no relation
> with their origin. After all, there is also some empirical
> rate of mutations which are known to be intelligently guided,
> such as those within the bio-technology (in agriculture,
> academic research etc).

The vast majority of mutations in bio-tech have been random mutations
and the intelligent guidance came about by arranging conditions that
selected for the desired mutations by virtue of their phenotype.
Directed mutation has been a rather recent phenomenon. But there is no
mechanism of this type of directed mutation accounting for mutations in
the absence of very recent modern humans. The agent (the mutation
fairy) necessary to perform directed mutation on the gene in question
is unevidenced (as well as being unnecessary).

> By your logic, merely measuring
> the empirical rate of these mutations and citing the obtained
> figure, "proves" that these mutations are random. Of course not.

No. All I claim is that there is no need to posit a mutation fairy to
explain the result. The known rate of random mutation (followed by
selection) is quite capable of producing the observed results. That
is, the observed phenomenon does not need a mutation fairy. It can be
explained without positing one, by using *known* mutational mechanisms
and *known* empirically observed mutation rates. If an event can be
explained by *known* mechanisms, positing other mechanisms is
unnecessary *unless* you have some independent evidence that the other
mechanism actually did produce the result. Positing an unnecessary and
unevidenced mutation fairy explanation is not good science.

> The empirical rate of mutation is relevant only if you wish
> to show that a mutation (of whatever nature) is responsible
> for the observed adaptation. That is not being debated here.

All it shows is that known mechanisms of RM and NS are sufficient to
explain this result. There is no need to posit a mutation fairy
*unless* you have some independent evidence that it was involved.

> We take for granted that the paper has demonstrated not just
> the genetic nature of this adaptation but also found the
> specific mutation responsible.

Of course they found the specific mutation responsible for the color
change. They undoubtedly did the genetic experiments that show that
this allele is responsible and showed where the two alleles differ in
sequence. Are you claiming that they are lying about the color
difference being due to a point mutation in a particular gene to
produce a variant allele?

> Hence, your citations of
> the empirical rates of mutations are a strawman argument in
> the context of this debate.

No they aren't. They are quite relevant to showing that there is no
*need* to posit a mutation fairy. That RM and NS can explain these
observations just fine without the need to posit an unnecessary and
unevidenced guided mutation fairy.

> > No mutation has ever been observed to be guided by any intelligence.
>
> How about those in bio-technology?

For most of human history and that of bio-tech, mutation was always
randomly generated wrt need (although the rate of *all* mutations was
sometimes increased by adding mutagens, and different mutagens do cause
different types of mutation -- in the sense of transition or
frameshift, not mutations specific according to need). Directed
mutation requires a lab and an intelligent agent with modern human
capabilities existing at the right time and place to perform the
directed mutation. At present there is no independent evidence that
such an agent (the mutation fairy) existed. And no evidence has been
presented that such an agent is necessary. Such an agent is
superfluous and unnecessary. One can always posit a mutation fairy to
do whatever one wants done.

> That merely shows that a
> natural processes which guide mutations intelligently do
> not contradict the natural laws. Such natural processes can
> exist and do exist.

How, then, if intelligently guided mutations occur by natural processes
and at rates that are indistinguishable from unintelligently (as far as
we can tell) unguided mutations, can we distinguish between them? The
only way I can see is to have direct independent evidence that some
mutations are directed and others aren't. We don't have any such
evidence (except for recent human endeavors, and humans are clearly not
the agent you are interested in). That makes intelligently guided and
unintelligently unguided mutations indistinguishable and makes the
results of intelligent guidance indistinguishable from the results of
random chance mutation. How does making intelligent guidance
indistinguishable from chance help you?

> > You are proposing a mechanism (guided mutation or mutation that occurs
> > as a result of need) that has been diligently searched for and has
> > never been observed.
>
> Again, you're confusing absence of evidence, or even the failure
> to examine and recognize the evidence, with the evidence of
> absence.

The sun still rises in the east. And, I predict, will do so tomorrow
and tomorrow and tomorrow as it creeps in this petty pace. I certainly
agree that just because all the flamingos I have seen (at least the
ones fed the right diet) are pink doesn't mean that non-pink flamingos
cannot exist. But *you* are the one that has to demonstrate that they
do. Until then it remains an empirical inference *from massive amounts
of evidence* that the sun rises in the east, flamingos fed shrimp are
pink, and mutations occur at random wrt need. If someone asks me which
direction the sun will rise, what color the next flamingo will be, or
whether any particular mutation occurred at random wrt need, I (unlike
you) will not reply that the sun is equally likely to rise from the
east or west or north or south, that the next flamingo will be just as
likely to be blue or green or purple as pink, or that any mutation I
did not directly observe was just as likely to have been produced by
the mutation fairy as by chance chemical or radiological events.

> If you want to demonstrate that the mice adaptation discussed
> shows that the search algorithm in the space of DNA configuration
> was random (this was just a single nucleotide mutation, the
> simplest case you can have), go ahead. Compute the prediction
> of success for such random search algorithm for the given space
> of possibilities (all configuration with one nucleotide change
> away from the initial state) and the given number of tries
> (which requires population sizes, conception rates, duration)
> and show that this algorithm has a 'good' probability of success
> under the constraints given.

That is just what I did. I gave an empirically determined rate for
mutation per nucleotide per generation. You didn't like that, even
though it is clearly the best estimate one can give in this case.
Since we are talking about a single nucleotide change, that is the
right rate for "searching" the DNA configuration of the ancestor. I
might be able to narrow it down some, by knowing, say, if the mutation
is a transition (more frequent) or transversion (less frequent). I
also pointed out that if the mutation is recessive, the frequency of
the recessive mutation in the founder population might be more
important. And I certainly estimated the population size, the number
of generations per year, and the probability of fixation of a new
mutation. All of those, and the intensity of selection for phenotype,
affect the probability of a new mutation becoming fixed.
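The estimate described in this paragraph can be put into a few lines. Under the standard population-genetics approximations (new copies of a mutation arise at roughly 2Nμ per generation in a diploid population, and each new beneficial copy fixes with probability of roughly 2s), one can ask how likely the adaptation is to arise and fix within the ~6000-year window. Every parameter value below is an illustrative assumption, not a figure from the paper or this thread:

```python
import math

# Probability that a beneficial point mutation arises and fixes within
# the available time, under random mutation plus natural selection.
# Standard approximations; all parameter values are illustrative assumptions.

mu = 1e-8          # assumed per-nucleotide mutation rate per generation
N = 10_000         # assumed breeding population size
s = 0.1            # assumed selective advantage of the light coat
gens_per_year = 2  # assumed generations per year for Peromyscus
years = 6000       # maximum age of the dune habitat

G = gens_per_year * years
new_copies_per_gen = 2 * N * mu   # new mutant alleles appearing each generation
p_fix = 2 * s                     # fixation probability of each beneficial copy
expected_successes = new_copies_per_gen * p_fix * G

# Treat successful origins as a Poisson process over G generations.
p_at_least_one = 1 - math.exp(-expected_successes)
print(f"expected successful origins: {expected_successes:.2f}")
print(f"P(adaptation within {years} yr): {p_at_least_one:.2f}")
```

With these (assumed) numbers the random-mutation model gives a substantial probability of the adaptation arising within the time limit, which is exactly the kind of model-versus-data comparison being argued over in this thread; different assumed values of N, s, or the generation time would move the answer up or down.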

I don't know what algorithm one can use for a magical mutation fairy
poofing a new mutation into existence, because no such mechanism has
ever been observed. One could use human directed mutation, but since
humans clearly are not our magical mutation fairy, there is no evidence
that this mutation was produced by directed mutation, and there is not
a shred of evidence that a magical mutation fairy is needed or exists,
it would be rather pointless.

> Recall that ID and ND differ for these simple adaptations only
> in the their search algorithm, in how each picks the 'next'
> configuration to explore (which in turn implies probabilities
> of success of each algorithm under any given constraints).
>
>
> > If your claim is that an intelligent agent capable of
> > altering the genome of the island mice was responsible,
> > one way to support your claim would be to present
> > empirical evidence of such an intelligent agent capable
> > of performing such genomic magic having existed at the
> > right time and place.
>
> My claim is that the article shows nothing in favor or against
> the ND = RM + NS vs ID = IM + NS models of the adaptation observed.
> It shows only M + NS, which is common to both models. Hence
> there is nothing here for neo-Darwinians to crow about.

The article does not propose IM because there was no such mechanism in
existence at the time and proposing it is unnecessary, using, as it
does, a hypothetical unevidenced mutation fairy to do whatever the
poser wants done. One can always posit a magical mutation fairy to
explain anything. Such an explanation is not science until one has
evidence that mutation fairies existed at the right time and place.
Otherwise, the mutation fairy is a superfluous and unnecessary
explanation (in addition to being unevidenced).

> As to what might be the nature of 'intelligent agency' guiding
> the mutations, that is a separate issue from the point I was
> making.

That is why I call it the magical mutation fairy. The agent, if there
is one, *is* the issue.

> We all agree that natural processes exist on Earth
> which guide mutations intelligently (the natural processes
> we call bio-technology or genetic engineering).

And we can all agree that those agents have only existed on the Earth,
as far as we know, in the last 10-15 years. And that they are modern
humans in science labs. As far as we know, no such agent existed even
25 years ago. Before that time humans *did* intelligently direct
selection, but did not intelligently direct mutation.

> Your argument
> above seems to be based on conjecture that no such natural
> process is consistent with natural laws. The direct
> counter-example (pointing you to the existence of such
> natural processes) invalidates your conjecture. Try
> something better.

No. My argument is based on the fact that no known agent capable of
doing what you claim existed at the time and place and that such an
agency is unnecessary and superfluous and unevidenced (a trifecta).
Moreover, *known* natural mechanisms working at *known* empirically
observed rates can produce the observed results.

> >>Hence, this particular discovery as it stands, seems at
> >>least as favorable to ID = IM + NS as to ND = RM + NS,
> >>since all they have shown is the existence of M + NS,
> >>a fact consistent with ID and ND models of evolution.
> >
> >
> > Not really. ID = IM + NS requires the existence of the hypothetical
> > "I" agent, for which there is zero evidence.
>
> This paper, which is what I was referring to above, has no
> discussion, let alone tests, of the I-agency hypothesis.

How can a legitimate scientific article say "Oh, and by the way, a
magical mutation fairy could have poofed this mutation into existence."
when such an explanation is unnecessary, superfluous, and unevidenced?

> Now, you
> and I may have this or that view on the existence or plausibility
> of the I-agency, but that is unrelated to what the paper and its
> empirical facts show, or fail to show. In particular they do not
> show that the observed adaptation and its genetic mechanism
> favor any particular search algorithm, the RM or IM based method,
> in the space of DNA configurations accessible to mutations.

How does one go about producing evidence for your magical mutation
fairy when you say that one cannot distinguish between the results of
such a fairy and RM wrt the rates at which they occur?

> > Because the ID = IM + NS hypothesis requires something that is neither
> > supported by independent evidence (unlike the 'random wrt need' nature
> > of point mutations) nor is such an entity necessary to explain
> > what has been observed, the famous blade of Ockham must slice it
> > off.
>
>
> Ockham razor doesn't apply unless you can show that the
> alternative models are equally consistent with the empirical
> evidence.

I said, specifically, that the observed results are quite consistent
with known point mutation rates. One does not need to posit a magical
mutation fairy. The magical mutation fairy is unnecessary,
superfluous, and unevidenced. Unlike the random (wrt need) nature of
natural mutation, which has evidence up its whazoo.

> Merely failing to evaluate the consistency of the
> alternative models A and B against the empirical evidence
> does not allow you to declare that A is simpler, hence it
> is a better model by Ockham's razor.

How does one model a magical mutation fairy poofing mutations that you
choose at will? You refuse the only type of test that would actually
work -- namely independent tests that an agent capable of doing what
you claim existed at the right time and place.

nightlight

Jul 9, 2006, 4:29:01 PM
hersheyhv wrote:

> I am NOT confusing the absence of evidence with my statement. I am
> doing a standard scientific inference. I am specifically *including*
> as evidence relevant to this particular case the clear and
> quantitatively massive evidence that no, zero, nada point mutational
> event in nature that has actually been studied has ever been observed
> to be guided.

To decide whether mutation is "guided", you would need to
know what it is being guided toward, or the utility function
being optimized. Having more offspring one or a few generations
forward may not be the same function as having more offspring
farther into the future, just as in chess, where grabbing an
opponent's pawn now may cost you a game thirty moves later.

Further, multitudes of adaptable networks (in the ecosystem,
societies) overlap and permeate each other, with variety of
objectives being pursued by different networks. Some networks
are in the 'material' realm (such as biochemical reaction
networks within cells, immune systems, brains, various social
networks), while others are networks in the 'abstract'
realms, such as cultures, languages, religions, sciences,...

Any element of the system (such as individual cell or an
individual organism) is a member of multitudes of such
networks, overlapping at all levels (e.g. you may be
a node or a link or their building block, in the 'Christian'
or 'Muslim' network, American economy network, biologist network,
English language network, your ethnic group network, gene pool
network,...), with all networks optimizing their own punishments
and rewards through the movements/actions of their elements,
using strategies and means, internal models and languages
mostly unknown to us. The count of 'selfish genes' few or many
generations ahead is merely one element of one utility function
optimized by one (the intra-cellular genetic network) of
many networks sharing the particular organism and each
guiding its actions for its own benefit, in subtle ways
largely imperceptible to us. The science is just beginning
to recognize these intelligent networks (complex systems)
and to uncover some of their laws, structural and functional
patterns, but it is still quite an immature discipline. Claiming
detailed knowledge of all the utility functions that DNA,
cells or organisms are optimizing is preposterous, hence
claiming that there is no correlation between mutations
and any utility function is vacuous boasting.

Consider, for example, giving a 19th century scientist a modern
computer with instruction to check whether electric & magnetic
fields they observe make any sense or have any relation to the
pictures shown on the screen. He will hook up his best voltmeters
at various points, get hundreds of thousands of readings and he
will find _no statistically significant_ correlation between what
is shown on the computer screen and his voltage readings.

The voltages will appear completely random. He may observe
that disconnecting some connectors will make pictures or sounds
go away, some may shut everything down. He may also notice
the particular frequencies some of the lines carry. But he will
be clueless as to how the content he can see on the screen, or
the algorithms executed by the computer, are related to the
thousands of recorded voltages. A single voltage oscillation
lasting less than one thousandth of one billionth of a second,
out of trillions such oscillations from various points, going
constantly up or down, may mean a single bit being 1 or 0,
which in turn may mean entirely different program and
algorithms running thousands of billions oscillations later.
Hence, there would be no _statistically_ significant
correlation here between present voltages and the later
behaviors of the computer. But there would certainly be
a relation (understood by a team of engineers and programmers
who designed the machine and wrote the programs), that is
invisible through the statistical significance pinhole.

Molecular biology is in no better position with respect to
the extremely complex multi-layered web of chemical reactions,
tightly interwoven with a complex chains of electro-mechanical
interactions, classical and quantum, at microscopic level,
along with the innumerable interactions with even more complex
networks (of which the intra-cellular network is a tiny sub-net)
at the level of tissues, organs, organism, populations...

As someone looking from the perspective of a much 'harder'
science (theoretical physics) than the comparatively 'softer'
disciplines of biology and biochemistry, I know that even
far simpler systems, with just a few electrons and protons
and for behaviors spanning only a few tiny fractions of a
second, are essentially unsolvable puzzles, especially if
one wishes to know the dynamical/time-dependent non-periodic
behaviors (in contrast to periodic and static properties,
such as energy spectra, which are simpler to compute),
with the longer time spans being much more difficult
to model.

Biological processes are time-dependent problems with tens
of orders of magnitude more interacting components, and tens
of orders of magnitude longer time intervals of relevant
non-periodic dynamics, than what is already very difficult
or impossible to model. The statistical physics or solid
state physics models which physicists use for systems with
large numbers of particles are completely useless for
biological systems, since the core simplifying assumptions
made for those physical models (such as ergodicity,
quasi-stationarity, closed systems, small fluctuations around
thermodynamic equilibrium without amplification, negative
feedback) are violated by biological systems. The best one
can do with biological systems, with present computational
and mathematical tools, is extremely coarse-grained, crude
modeling covering the tip of a tip ... of a tip of the
iceberg that constitutes these phenomena.

Of course, if one talks to molecular biologists, one may get
the impression that they are in fairly complete control of
all the key phenomena, and are just filling in a few smaller
details here and there. But if one were to talk to the
ancient Egyptian priesthoods, one would have gotten an
equally self-assured response, claiming full knowledge of
all things, from calendar seasons and floods, through
knowledge of the secrets of health and illness, life and
death, Earth and stars. If there was anything they couldn't
answer safely, then those things were declared intrinsically
'random', un-answerable, un-knowable, the will of gods.

In retrospect, they were a handful of clever guys who
managed to pick out a few genuine patterns about the
calendar seasons and floods. The rest was self-promotion,
mostly self-delusional, i.e. they actually believed they
knew what they were talking about (especially those
still learning the secrets). The disciples had to go
through a long process of testing and elimination of the
unsuitable (those not bright enough and those lacking
the gift for self-deception needed to uphold convincingly
enough the pretense of omniscience), learning the arcane
symbolism and secret language of the discipline,...
before their initiation into the inner circle. That's
human nature, back then as it is today. Scientists in
various disciplines are essentially our modern-day
priesthoods, with patterns of behavior, pretense,
self-promotion and self-deception similar to those of
the ancient Egyptian priesthoods.

Hence, I take all excessively self-assured declarations
that there are no patterns in the mutations (or, in general,
in the transformation of DNA from generation to generation)
correlating them with future states of the environment
the same way I would take Egyptian priests assuring me
that any phenomenon for which they see no pattern
_has no_ pattern, that it is irreducibly random.
Doubly suspicious are any such declarations which are
also backed up by censorship, lawsuits, intimidation...


> Your claim that this particular point mutation could have been due to a
> magical mutation fairy makes as much sense in science as saying that
> the sun rose in the west on July 4th in 1777.

I was only claiming that the 'random' nature of the mutation
behind the color adaptation reported in the article was not
established in any way. All that was established was that an
adaptation was due to a particular very small mutation
(a single nucleotide).

There is nothing in the paper, or in any subsequent discussion,
that shows how they established that the astronomically tiny
fraction of all possible DNA configurations (those one
nucleotide change away from the initial configuration, the
set S0 in my original post in this thread) that were explored
by the comparatively small number of tries available
constitutes a 'random' set of configurations, _unbiased in
any way_ toward finding a suitable solution to the survival
problem.

Just saying that there were so many mutations in a given
time, as you keep doing, says nothing about their relation
to the problem being solved by the genetic search algorithm
or the nature of the algorithm. Both the guided (intelligent)
and the 'random' mutations will have some number of mutations
per given time. The _only_ difference you could extract
statistically is that the intelligent mutations will find
the solution faster on average than the random ones. Now, to
check whether the search was faster than random, you cannot
get around the task of estimating what the random _model_
predicts for the expected number of tries needed to solve
the problem. Only then can you say whether the _empirically
observed_ search and solution time is comparable to that
predicted by the random search model, or whether it is slower
(malicious intelligence) or faster (benevolent intelligence).

Just pointing at the empirically observed rates of
mutations, without any comparison to the search space
being explored, tells you nothing about the efficiency
of the search compared to a random search.

For example, if you know that I can guess a number that
another player writes down in ten tries on average, that
doesn't tell you whether I am guessing randomly or using
some more intelligent strategy. You wouldn't even know
whether a strategy used, if any, was aiming to speed up
or to slow down the guessing. To know any of that, you
also need to know the size of the space being explored
by the search, in this case the maximum range of the
numbers being guessed.

If the maximum allowed number is 20, then a random search
will find it on average in about 10 tries. But if the maximum
number allowed is 1000, then a random search will not
find it in ten tries on average (but in about 500 tries),
hence one would conclude that the algorithm used was
not random guessing but some more intelligent strategy
(such as a binary search). But if the maximum allowed number
is 1 million, then even a binary search will not work
with the observed success rate, and one would need to
look for a different hypothesis as to how the observed
success rate might have been achieved (human psychology,
cheating, sub-conscious clues, etc.). Further, if the
maximum number was 12, then you could conclude that I
was (in some way) purposefully avoiding the correct
guess, and if the maximum number was 10 or lower, you
would also know how I was avoiding it (by repeating
the same incorrect guess multiple times).
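The arithmetic above is easy to check with a quick simulation
(a sketch only; the function names are mine, and I assume, as
the averages above imply, that the random guesser never repeats
a guess):

```python
import random

def random_guess_tries(n_max, secret):
    # Random guessing without repeating a guess; count tries until the hit.
    candidates = list(range(1, n_max + 1))
    random.shuffle(candidates)
    return candidates.index(secret) + 1

def binary_search_tries(n_max, secret):
    # Halve the remaining interval on each guess; count tries until the hit.
    lo, hi, tries = 1, n_max, 0
    while True:
        tries += 1
        mid = (lo + hi) // 2
        if mid == secret:
            return tries
        if mid < secret:
            lo = mid + 1
        else:
            hi = mid - 1

def average_tries(strategy, n_max, trials=10000):
    # Average number of tries over many games with random secrets.
    return sum(strategy(n_max, random.randint(1, n_max))
               for _ in range(trials)) / trials

print(average_tries(random_guess_tries, 20))     # about (20+1)/2 = 10.5
print(average_tries(random_guess_tries, 1000))   # about 500
print(average_tries(binary_search_tries, 1000))  # about 9: fits ten tries
```

So the same observed "ten tries on average" is consistent with
random guessing over 20 numbers, or binary search over 1000,
which is exactly why the size of the search space matters.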

The kind of answer that you have been repeating here (for
the genetic search) is that since you can empirically
observe ten tries on average before the solution is found,
the search must be random (since there were many more
failures than successes), and that you don't need to
know the size of the space being searched and the size
of the 'solution' sub-set, let alone compute how well
a random search would perform here on average.

My point is that you can't say whether the guessing was
random or intelligent unless you estimate the sizes of
the search and solution spaces and compute the performance
characteristics (such as the average number of guesses
until the solution) of different search algorithms, then
compare this theoretically obtained number with the
empirically observed average number of guesses. Just
citing the empirically observed number of guesses tells
you nothing about the type of algorithm used to guess the
number.

All the other points you make rest on this fundamental
flaw in your methodology. Until you understand why,
in the number guessing game, you need to know the
sizes of the search and solution spaces, and not just
the numbers of tries (which are the equivalent of the
empirical mutation rates you keep repeating as the
"answer"), in order to declare that the observed
success rate of ten tries on average can be explained
by the random guessing model, you won't understand the
objection to the neo-Darwinian RM+NS model being made
here.

All I am saying (in this thread and the earlier ones)
is that you need estimates for the sizes of the search
and solution spaces, and not just the empirically
observed number of tries (or mutation rates and
population sizes), before you can make declarations
about the kind of search algorithm being used in
evolution at _all_ levels, from the simplest minor
adaptations, through new species and new body plans,
up to the origin of life.

Even if we were to observe, from start to end, the
emergence of an entirely new phylum in some habitat,
that would still leave the question of what search
algorithm was used by the genetic networks (and the
numerous other networks involved) to find solutions to
the great many problems that such a gigantic
transformation would create. Just because the observers
didn't see an old guy with a white beard, in a mideastern
robe and with the voice of Charlton Heston, materialize
from the thundering clouds and snap his fingers at the
original creatures, that doesn't imply that the algorithm
was a random search, or that the religiously and
ideologically loaded neo-Darwinian dogma (of RM+NS being
the sole algorithm behind evolution) was confirmed by
the observation. As suggested at the end of an earlier
post in this thread:

http://groups.google.com/group/talk.origins/msg/651222ff530cbe4e

there are plenty of perfectly natural ways (even restricting
ourselves to what we know as natural laws at present,
knowledge which will be laughable in a few centuries) that
a) an 'intelligent agency' with intelligence many orders of
magnitude greater than our own can exist, b) which can guide
the genetic transformations of organisms, c) without
appearing to observers as some human look-alike (in size,
shape, methods and objectives) intelligence.

Depending on how subtle that kind of process may be, or how
small or large its 'gears' are, it may not be directly
recognizable as such, or even perceptible directly at
all, and one would have to infer its existence, properties
and role in the evolution via mathematical modeling, as we
already do for most of the objects and phenomena being
researched in high energy physics.

Hence, repeatedly trotting out Pat Robertson's "theory
of evolution" as the sole alternative to neo-Darwinism,
as is reflexively done here by you and other defenders
of the neo-Darwinian dogma, is a childish strawman which,
being a clear indicator of ultimate desperation and
retreat from rational argument, only further emphasizes
the fundamental weakness of the theory you are defending.

The nature of the search algorithm behind evolution (how
close to or how far from a random search is it?) is a
perfectly legitimate scientific question that presently
has no answer. It is also a fact which the neo-Darwinian
priesthood is fighting tooth and nail, by all means
available, to keep from being recognized outside of the
priesthood (even the fact that there is a question):
through censorship, lawsuits, bureaucratic and social
intimidation, threats to academic careers, funding,...

hersheyhv

Jul 10, 2006, 12:42:22 AM7/10/06

nightlight wrote:
> hersheyhv wrote:
>
> > I am NOT confusing the absence of evidence with my statement. I am
> > doing a standard scientific inference. I am specifically *including*
> > as evidence relevant to this particular case the clear and
> > quantitatively massive evidence that no, zero, nada point mutational
> > event in nature that has actually been studied has ever been observed
> > to be guided.
>
> To decide whether mutation is "guided", you would need to
> know what is it being guided toward, or the utility function
> being optimized. Having more offspring one of few generations
> forward may not be the same function as having more offspring
> farther into the future, just as in chess, where grabbing an
> opponent's pawn now may cost you a game thirty moves later.

IOW, you have to posit (because you have no evidence) the existence of
a teleological goal. _Ex post facto_ determination of what the goal or
utility function was is a snap: whatever actually exists now was the
goal of the process. Anybody can always claim that whatever currently
exists was the purpose that the magic mutation fairy had in mind. It
is easy to draw a bull's eye around the holes you shot in the barn's
side and call yourself a sharpshooter extraordinaire. Painting the
bull's eye first and subsequently hitting its center is what is
difficult.

> Further, multitudes of adaptable networks (in the ecosystem,
> societies) overlap and permeate each other, with variety of
> objectives being pursued by different networks. Some networks
> are in the 'material' realm (such as biochemical reaction
> networks within cells, immune systems, brains, various social
> networks), while others are networks in the 'abstract'
> realms, such as cultures, languages, religions, sciences,...

And this is supposed to be relevant how...?

> Any element of the system (such as individual cell or an
> individual organism) is a member of multitudes of such
> networks, overlapping at all levels (e.g. you may be be
> a node or a link or their building block, in the 'Christian'
> or 'Muslim' network, American economy network, biologist network,
> English language network, your ethnic group network, gene pool
> network,...), with all networks optimizing their own punishments
> and rewards through the movements/actions of their elements,
> using strategies and means, internal models and languages
> mostly unknown to us. The count of 'selfish genes' few or many
> generations ahead is merely one element of one utility function
> optimized by one (the intra-cellular genetic network) of
> many networks sharing the particular organism and each
> guiding its actions for its own benefit, in subtle ways
> largely imperceptible to us.

It always seems that that which is imperceptible to us (aka ignorance)
is the evidence that IDeologues use.

> The science is just beginning
> to recognize these intelligent networks (complex systems)
> and to uncover some of their laws, structural and functional
> patterns, but it is still quite immature discipline. Claiming
> detailed knowledge of all the utility functions that DNA,
> cells or organisms are optimizing is preposterous, hence
> claiming that there is no correlation between mutations
> and any utility function is vacuous boasting.

Science has several mechanisms for dissecting out whether a particular
feature, out of many possible environmental features, has a significant
impact on a result. You might try learning some of them. This is the
very basis of controlled experiment and/or factor analysis. But all
you are doing is claiming that all the repeated experiments in the
world that show that mutation is random wrt need are irrelevant if you
can somehow *believe* that there is something more going on.

Where do you come up with this utter bullshit? Do you have a bull in
the corner? Until you have some actual *evidence* that mutation is
non-random wrt need, that Lamarck was somehow right, you have to go
with the actual *evidence* that we do have: namely, that to all
intents and purposes mutation (and point mutation specifically) is
random wrt need. Waving your hands and wildly claiming that, because
*you* don't understand (It's too complex!) or don't want to believe
the results of controlled experiment, you get to assert that it is
just as likely that the sun will set in the north or west or south
tomorrow doesn't make that an *equally likely* result. Just because
you want to believe that there is a mutation fairy that makes some
mutations (but just the beneficial ones) that are indistinguishable
from all those naturally made (and hence bad) mutations doesn't make
the mutation fairy explanation just as likely as the explanation that
mutations occur at random wrt need.

> As someone looking from the perspective of a much 'harder'
> science (theoretical physics) than comparatively 'softer'
> disciplines of biology and biochemistry, I know that even
> far simpler systems, with just few electrons and protons
> and for behaviors spanning only few tiny fractions of a
> second, are essentially unsolvable puzzles, especially if
> one wishes to know the dynamical/time dependent non-periodic
> behaviors (in contrast to periodic and static properties,
> such as energy spectra, which are simpler to compute),
> with the longer time spans being much more difficult
> to model.

Everything is a mystery to the mind of an IDeologue who does not want
to accept the empirical evidence, which clearly says: Mutation is
random wrt need. Let me repeat. Mutation is random wrt need. There
is NO difference in this between mutations that are selectively
beneficial and mutations that are selectively detrimental. In fact, the
very same mutation can be selectively beneficial in one environment and
detrimental in a different one and neutral in yet a third environment.

> The biological processes are time dependent problems with
> tens of orders of magnitude greater number of interacting
> components and tens of orders of magnitude longer time
> intervals of relevant non-periodic dynamics, than what
> is already very difficult to impossible to model. The
> statistical physics or solid state physics models, which
> physicists use for systems with large number of particles,
> are completely useless for biological systems since the
> core simplifying assumptions made for those physical
> models (such as ergodicity, quasi-stationarity, closed
> systems, small fluctuations around the thermodynamical
> equilibrium without amplification, negative feedback)
> are violated by the biological systems. The best one
> can do with biological systems with present computational
> and mathematical tools is extremely coarse grained, crude
> modeling covering the tip of a tip ... of a tip of the
> iceberg making these phenomena.

You mean that you cannot accept the results of controlled experiment.
So you blabber and obfuscate that it is, oh, so complex that
*anything* is possible. What utter bs. The fact remains that the
finding that mutation is random wrt need has been about as convincingly
demonstrated as the idea that the sun rises in the east. Maybe not
quite as convincingly, but any deviant mutations are rare exceptions
and not the rule. You also seem to be deluded in thinking that the
'beneficial' or 'detrimental' adjectives applied to particular
mutations are inherent in the mutation rather than conditional to a
particular environment and its interaction with the phenotypes these
alleles produce.

> Of course, if one talks to molecular biologists, one may
> get an impression they're in a fairly complete control
> of all the key phenomena, and are just filling in few
> smaller details here and there. Of course, if one were
> to talk to the ancient Egyptian priesthoods, one would
> have gotten equally self-assured response, claiming
> full knowledge of all things, from calendar seasons
> and floods, through knowledge of secrets of health
> and illness, life and death, Earth and stars. If
> there was anything they couldn't answer safely, then
> those things were declared intrinsically 'random',
> un-answerable, un-knowable, the will of gods.

*When* you have something other than your personal incredulity and
your misunderstanding of the difference between *scientists* and
*post-modernist priesthoods* (one will change its understanding when
presented with evidence, which you are NOT presenting; the other is
wedded to the idea that everything is a mystery and any explanation is
as good as any other), get back to me. I am perfectly willing to
examine any actual *evidence* you have that mutations are not random
wrt need. But you haven't presented any. All you have done is wave
some fancy, schmantsy mumbo jumbo and assert, like a good
post-modernist, that there is no empirical reality one has to really
deal with. Any explanation is as good as any other, you keep
repeating. I say that that is the sort of belief that can get you
killed, as when you believe that Comet Tuttle is hiding a spaceship
that will whisk you out of here.

> In retrospect, they were a handful of clever guys who
> managed to pick out few genuine patterns about the
> calendar seasons and floods. The rest was self-promotion,
> mostly self-delusional i.e. they actually believed they
> knew what they were talking about (especially those
> still learning the secrets). The disciples had to go
> through long process of testing and elimination of
> unsuitables (those not bright enough and those lacking
> a gift for self-deception needed to uphold convincingly
> enough the pretense of omniscience), learning the arcane
> symbolism and secret language of the discipline,...
> before his initiation into the inner circle. That's the
> human nature, back then and as it is today. The
> scientists in various disciplines are essentially our
> modern day priesthoods with similar patterns of behavior,
> pretense, self-promotion, self-deception as those of
> ancient Egyptian priesthoods.

So you cannot tell the difference between scientists and voodoo
priests, eh. I agree. You probably can't.

> Hence, I take all excessively self-assured declarations
> that there are no patterns in the mutations (or in general
> transformation of DNA from generation to generation)
> correlating them with future states of the environment
> the same way I would take Egyptian priests assuring me
> that any phenomenon for which they don't see any
> patterns _has no_ pattern, it is irreducibly random.
> Doubly suspicious are any such declarations which are
> also backed up by censorship, lawsuits, intimidations...

Oh, mutation in fact does have a very specific pattern. It is a
pattern which is indistinguishable from the specific pattern one would
expect if mutation were random wrt need. It does not have a pattern
which, at any degree of sensitivity one wishes to apply or has
applied, shows the slightest indication of any significant deviation
from the specific expectations of randomness in this regard. There
are specific tests that have been applied. The easiest to understand
is replica plating of bacterial colonies to selective or
non-selective plates. The rate of mutation to resistance is
independent of whether or not the colony is replica plated to
selective or non-selective plates.
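The logic of that test can be caricatured in a few lines of code (a
toy model only: the mutation rate `MU`, the colony size, and the
function names are invented for illustration, not taken from any real
experiment). Mutations arise during growth, before any plating, so
colonies destined for selective plates accumulate the same mutant
counts as colonies destined for non-selective ones:

```python
import random

MU = 1e-3          # assumed per-daughter-cell mutation rate (illustrative)
GENERATIONS = 10   # grow each colony from 1 cell to ~1024 cells

def grow_colony():
    # Mutation to resistance happens during growth, with no knowledge
    # of whether this colony will later be replica-plated to selection.
    normal, mutant = 1, 0
    for _ in range(GENERATIONS):
        daughters = 2 * normal
        new_mutants = sum(1 for _ in range(daughters) if random.random() < MU)
        normal = daughters - new_mutants
        mutant = 2 * mutant + new_mutants
    return mutant

random.seed(1)
# Decide *in advance* which colonies go to selective plates; the growth
# step above never sees that decision, so the counts cannot depend on it.
for_selective    = [grow_colony() for _ in range(1000)]
for_nonselective = [grow_colony() for _ in range(1000)]

mean = lambda xs: sum(xs) / len(xs)
print(mean(for_selective), mean(for_nonselective))
```

The two means come out statistically indistinguishable, which is the
shape of the actual experimental result: selection reveals mutants,
it does not induce them.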

> > Your claim that this particular point mutation could have been due to a
> > magical mutation fairy makes as much sense in science as saying that
> > the sun rose in the west on July 4th in 1777.
>
> I was only claiming that the 'random' nature of the mutation
> behind the color adaptation being reported in the article
> was not established in any way. All that was established was
> that an adaptation was due to particular very small mutation
> (a single nucleotide).

You are saying, in fact, that this particular mutation could have
arisen in a way that is equivalent to the sun rising in the west (by
the never-seen and unevidenced mechanism of a mutation fairy), and
that that explanation is just as likely as the idea that the sun rose
in the east (by natural, random wrt need mutation). You are wrong. The
probability that this mutation arose by standard random mutation is
very high. The probability that this mutation arose by a mechanism of
intelligently guided mutation (a mechanism with no evidence, no agency,
and one that is unnecessarily complex because it requires such a
mechanism when known natural processes would suffice) is extremely low.
Now, if you had some actual evidence that this particular mutation, of
all the mutations in the world, *required* or actually was produced by
a mutation fairy, you would have something. But you don't. All you
have is the hand-waving post-modernist assertion, rejected by all
scientists, that *any* explanation is as good as any other. It's all a
mystery to you.

> There is nothing in the paper, or in any subsequent discussion,
> that shows how did they establish that the astronomically
> tiny fraction among the all possible DNA configurations (which
> are one nucleotide change away from the initial configuration,
> the set S0 in my original post in this thread) that were explored
> by the given comparatively small number of tries available,
> constitutes a 'random' set of configurations, _unbiased in
> any way_ to find the suitable solution to the survival
> problem.

The point I am making is that your whole thesis is based on a false
dissection of the problem. There is no genetic search algorithm
searching through all possible DNA configurations. There is no
teleological goal in the process. There is a single point mutation
altering one *pre-existing* gene by a single nucleotide change
resulting in a change in the phenotype of the allele in certain
genotypic combinations. This altered phenotype then underwent
selection in the local environment. The rate of point mutation that
produces this type of change (since mutations in other sites could well
produce the same phenotype) in the *pre-existing* gene is the relevant
rate. Your calculation of a genetic search through *all possible DNA
configurations* is utterly irrelevant.

> Just saying that there were so many mutations in given a
> time, as you keep doing, says nothing about their relation
> to the problem being solved by the genetic search algorithm
> or the nature of the algorithm.

Mutation is not for the purpose of solving a genetic search algorithm.
It is simply something that happens due to the nature of the chemistry
of DNA and its replication.

> Both, the guided (intelligent)
> and the 'random' mutations will have some number of mutations
> per given time. The _only_ difference you could extract
> statistically is that the intelligent mutations will find
> the solution faster on average than the random ones.

IOW, there is no detectable difference. Exactly how would you
determine that any given mutation is "intelligently designed"? We are
talking about a single nucleotide change here, not some long drawn out
search of all DNA sequence space. Are you saying that mutations that
occur at a higher frequency are more likely to be beneficial than
mutations that occur at low frequency in a population? You would, of
course, be wrong as the 80% of achondroplastic dwarfs due to new
mutation (one which occurs at very high frequency) could tell you.

> Now, to
> check whether the search was faster than random, you cannot
> get around the task of estimating what the random _model_
> predicts for the expected number of tries needed to solve
> the problem.

The random model does not predict the rate of mutation for any given
mutational event (there are local sequence features that affect rates).
That is something that is empirically determined, not established as
a feature of theory.

> Only then you can say whether the _empirically
> observed_ search and solution time is comparable to that
> predicted by the random search model, or whether it is slower
> (malicious intelligence) or faster (benevolent intelligence).

Again. There is no correlation between the rate of mutation that
produces a given phenotype and the need for that mutation. Nor is
there any correlation between the rates of mutation and whether or not
the effects are generally deleterious. Nor is there any correlation
between the rate of mutation and the *degree* of
deleteriousness/benefit. Nor is there any evidence of intelligent
agency involved at all. One *can* speed up the rate (and determine the
general type of mutational event) by adding mutagens or reduce the
rates by amplifying repair enzymes (although at a cost). But neither
of these can speed up the rates of mutation of beneficial over
detrimental mutations. And, as I keep pointing out, there is no
evidence of any intelligent agent capable of doing this at the right
time and place.

> Just pointing out at the empirically observed rates of
> mutations, without any comparison to the search space
> being explored, tells you nothing about the efficiency
> of the search compared to the random search.

The search space in this case is a single nucleotide in a pre-existing
gene (although it is possible that other mutations could have produced
the same phenotype). The rate of mutation at that site is quite
clearly the correct rate for the search of this particular search
space. Were you claiming that one must re-invent the entire gene from
scratch, as creationism claims to do, rather than produce a modified
gene by descent, as *evolution* would?

> For example, if you know that I can guess a number that
> another player writes down, in ten tries on average, that
> doesn't tell you whether I am guessing randomly or using
> some more intelligent strategy.

I could certainly tell whether or not you were rotating through the
numbers in a way significantly different from what would occur by
chance. In the simplest case, if all you did was choose 3 each time,
it would be obvious that you were using a strategy, or that you could
not think of another number and thus were extraordinarily stupid
rather than intelligent. Even if you tried to produce a pattern that
mimicked a random pattern, it is likely that your false human
intuition of what a random pattern looks like would betray you
eventually. I would have more trouble if you chose a non-repeating,
random-looking sequence, like the digits of pi, unless I happened to
guess what sequence you were using.

But all that is irrelevant.

> You wouldn't even know,
> whether a strategy used, if any, was aiming speed up or
> to slow down the guessing. To know any of that, you also
> need to know the size of space being explored by the
> search, in this case the maximum range of numbers
> being guessed.
>
> If the maximum allowed number is 20, then a random search
> will find it on average in 10 tries. But if the maximum
> number allowed is 1000, then a random search will not
> find it in ten tries on average (but in 500 tries),
> hence one would conclude that the algorithm used was
> not random guessing but some more intelligent strategy
> (such a binary search). But if the maximum allowed number
> is 1 million, then even the binary search will not work
> with the observed success rate, and one would need to
> look for a different hypothesis as how the observed
> success rate might have been achieved (human psychology,
> cheating, sub-conscious clues etc). Further, if the
> maximum number was 12, then you could conclude that I
> was (in some way) purposefully avoiding the correct
> guess and if the maximum number was 10 or lower, you
> would also know how I was avoiding it (by repeating
> the same incorrect guess multiple times).

Again, I could quite often determine whether you were rotating through
your numbers in a random fashion. I would only have significant
difficulty if you actually were "intelligently" using a pattern which
exactly mimicked some random pattern and was thus indistinguishable
from a random pattern. But if you are using a randomly generated
pattern, it is still a randomly generated pattern, intelligent or not.
If your "intelligently" generated mutations occur in a pattern
completely indistinguishable from that expected of random mutation wrt
need, a pattern which would also occur with "unintelligent",
"undirected" mutation, how can you tell it was "intelligently"
generated?

> The kind of answer that you have been repeating here (for
> the genetic search) is that since you can empirically
> observe ten tries on average before the solution is found,
> the search must be random (since there were many more
> failures than successes), and that you don't need to
> know the size of the space being searched and the size
> of the 'solution' sub-set, let alone compute how
> well a random search would perform here on average.
>
> My point is that you can't say whether the guessing was
> random or intelligent unless you estimate the sizes of
> the search and solution spaces and compute the performance
> characteristics (such as the average number of guesses
> until the solution) of different search algorithms, then
> compare this theoretically obtained number with the
> empirically observed average number of guesses. Just
> citing the empirically observed number of guesses tells
> you nothing about the type of algorithm used to guess the
> number.

I can certainly learn whether the pattern of numbers is randomly
generated among the numbers being rotated among. Obviously, the larger
the set, the longer it would take to detect a pattern.
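One concrete way to "detect a pattern" in a guess sequence is a chi-square goodness-of-fit check against the uniform distribution; a minimal sketch, with purely illustrative sequences (the biased guesser systematically avoids one number):

```python
import random
from collections import Counter

def chi_square_uniform(guesses, k):
    """Chi-square statistic of guess counts against uniform over 1..k.
    Larger values mean the sequence looks less like uniform random picking."""
    expected = len(guesses) / k
    counts = Counter(guesses)
    return sum((counts.get(i, 0) - expected) ** 2 / expected
               for i in range(1, k + 1))

rng = random.Random(7)
k, n = 10, 1000
uniform_seq = [rng.randint(1, k) for _ in range(n)]
# A guesser that never guesses 7 -- a non-random "strategy":
biased_seq = [rng.choice([x for x in range(1, k + 1) if x != 7])
              for _ in range(n)]

# The biased sequence produces a far larger statistic than the uniform one.
print(chi_square_uniform(uniform_seq, k) < chi_square_uniform(biased_seq, k))  # True
```

As the post notes, the larger the set of numbers, the more guesses are needed before such a test gains any power.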

> All the other points you make rest on this fundamental
> flaw in your methodology. Until you understand why
> in the number guessing game you need to know the
> sizes of search & solution spaces, and not just the
> numbers of tries (which are an equivalent of the
> empirical mutation rates you keep repeating as the
> "answer"), in order to declare that the observed
> success rate of ten tries on average can be
> explained by the random guessing model, you won't
> get what is the objection to the neo-Darwinian RM+NS
> model being made here.

In our particular example, the size of the search and solution space is
the single nucleotide that needs to be changed. The rate of change at
that site is all you need.

> All I am saying (in this thread and the earlier ones)
> is that you do need the estimates for the sizes
> of the search and solution spaces, and not just the
> empirically observed number of tries (or mutation rates
> and population sizes), before you can make declarations
> about the kind of the search algorithm being used in
> the evolution at _all_ levels, from the simplest minor
> adaptations, through new species and new body plans,
> up to the origin of life.

There is no search algorithm because there is no teleologic goal.

> Even if we were to observe, from start to end, the
> emergence of an entirely new phylum in some habitat,
> that would still leave a question of what search algorithm
> was used by the genetic networks (and numerous other
> networks involved) to find solutions to the great many
> problems that such gigantic transformation would create.
> Just because the observers didn't see an old guy with a
> white beard, in a mideastern robe and a voice of Charlton
> Heston, materialize from the thundering clouds and snap
> his fingers at the original creatures, that doesn't
> imply that the algorithm was a random search or that
> the religiously and ideologically loaded neo-Darwinian
> dogma (of RM+NS being the sole algorithm behind evolution)
> was confirmed by the observation. As suggested at the
> end of an earlier post in this thread:
>
> http://groups.google.com/group/talk.origins/msg/651222ff530cbe4e
>
> there are plenty of perfectly natural ways (even restricting
> ourselves to what we know as natural laws at present, the
> knowledge which will be laughable in few centuries) that
> a) an 'intelligent agency' with intelligence many orders of
> magnitude greater than our own can exist b) which can guide
> the genetic transformations of organisms c) without
> appearing to observers as some human look-alike (in size,
> shape, methods and objectives) intelligence.

Yes. He/it/she/they can be the mutation fairies. Any evidence?


> Depending on how subtle that kind of process may be, or how
> small or large its 'gears' are, it may not be directly
> recognizable as such, or even perceptible directly at
> all, and one would have to infer its existence, properties
> and role in the evolution via mathematical modeling, as we
> already do for most of the objects and phenomena being
> researched in high energy physics.
>
> Hence, repeatedly trotting out the Pat Robertson's "theory
> of evolution"

Pat Robertson doesn't have a "theory of evolution". He believes in the
mutation fairy and even believes in the magical creation fairy.

> as the sole alternative to the neo-Darwinism,

I am perfectly willing to look at any *evidenced* alternative you can
present. So far all I see is hand-waving and post-modernist "any idea
is as good as any other" claptrap.

> as it is reflexively done here by you and other defenders
> of the neo-Darwinian dogma, is a childish strawman which,
> being a clear indicator of ultimate desperation and
> retreat from a rational argument, only further emphasizes
> the fundamental weakness of the theory you are defending.
>
> The nature of the search algorithm behind evolution (how
> close or how far from the random search is it?)

Since evolution is constrained to search only within the genomes of
ancestral organisms (for vertical evolution anyway), the non-search
(since there is no teleologic goal) that happens to produce new alleles
or new genes is decidedly non-random. If a new feature cannot be
produced by some modification of a pre-existing feature (descent with
modification), it generally won't happen. In creationism, of course,
there are no limits. The magical mutation fairy can produce whatever
it wants and, lo and behold, miracle of miracles, the magical mutation
fairy you posit just happens to produce what actually exists. Just by
positing it. See, wasn't that easy?

michael...@worldnet.att.net

Jul 10, 2006, 1:08:59 AM

nightlight wrote:
>
> P(N,T) = 1 - (1-1/T)^N ~ 1 - exp(-N/T) ... (1)
>
> For small values of N/T this is approximately:
>
> P(N,T) ~ N/T ... (2)
> [considerable snipping]

> (If there are F favorable color mutations among T, then
> N/T in (2) would be multiplied by F. That refinement is
> irrelevant for the point being made below.)
>
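The quoted formulas (1) and (2) can be verified numerically; this sketch just confirms that 1 - (1-1/T)^N, 1 - exp(-N/T), and N/T agree in the small-N/T regime:

```python
import math

def p_exact(N, T):
    """P(N,T): chance that at least one of N uniform random picks from T
    equally likely configurations hits a single target configuration."""
    return 1 - (1 - 1 / T) ** N

def p_poisson(N, T):
    """The 1 - exp(-N/T) approximation from the quoted formula (1)."""
    return 1 - math.exp(-N / T)

N, T = 1_000, 1_000_000
print(p_exact(N, T))    # about 0.0009995
print(p_poisson(N, T))  # about 0.0009995
print(N / T)            # 0.001 -- the linear approximation (2)
```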

Let the nightlight special shine his heaven of a light on me.

-- Mike Palmer

Windy

Jul 10, 2006, 3:49:38 AM

hersheyhv wrote:

> nightlight wrote:
> > Just pointing out at the empirically observed rates of
> > mutations, without any comparison to the search space
> > being explored, tells you nothing about the efficiency
> > of the search compared to the random search.
>
> The search space in this case is a single nucleotide in a pre-existing
> gene (although it is possible that other mutations could have produced
> the same phenotype).

And very likely.

> The rate of mutation at that site is quite
> clearly the correct rate for the search of this particular search
> space. Were you claiming that one must re-invent the entire gene from
> scratch like creation claims to do rather than produce a modified gene
> by descent as *evolution* would?

No, he is talking about a single nucleotide change, he just wrongly
thinks that the probability of the single change is 1/(number of all
theoretically possible point mutations).
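The contrast Windy draws can be made concrete: under the standard model, the chance that a specific single-nucleotide change arises somewhere among N offspring is governed by the per-site mutation rate, not by 1/(number of all theoretically possible point mutations). A minimal sketch; the rate and population figures are illustrative assumptions, not values from the paper:

```python
def expected_occurrences(mu_per_site, offspring):
    """Expected number of offspring carrying one specific point mutation,
    given a per-site per-generation mutation rate and total offspring count."""
    return mu_per_site * offspring

mu = 1e-8   # illustrative per-site per-generation point-mutation rate
N = 10**8   # illustrative total offspring over the adaptation period
print(expected_occurrences(mu, N))  # about 1: the change is expected to arise
```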

-- w.

Andrew McClure

Jul 10, 2006, 5:07:11 AM
michael...@worldnet.att.net wrote:
> July 07, 2006
> An Evolution Saga: Beach Mice Mutate and Survive
>
> It's a pitiless lesson-adapt or die-but the sand-colored mice that
> scurry around the beaches of Florida's Gulf Coast seem to have learned
> the lesson well. Now researchers have identified a genetic mutation

> that underlies natural selection for the sand-matching coat color of
> the beach mice, an adaptive trait that camouflages them from aerial
> predators.
>
> In the July 7, 2006, issue of the journal Science, evolutionary
> geneticist Hopi Hoekstra and colleagues at the University of
> California, San Diego, report that a single mutation causes the
> lifesaving color variation in beach mice (Peromyscus polionotus) and
> provides evidence that evolution can occur in big leaps.
>
> "This is a striking example of how protein-coding changes can play a
> role in adaptation and divergence in populations, and ultimately
> species."
> Hopi Hoekstra
>
> The Gulf Coast barrier islands of Florida and Alabama where the beach
> mice are found are less than 6,000 years old-quite young from an
> evolutionary standpoint. Hoekstra said that the identification of a
> single mutation that contributes to the color change that has arisen in
> these animals argues for a model of evolution in which populations
> diverge in big steps.
>
> This model, in which change is driven by large effects produced by
> individual mutations, contrasts with a popular model that sees
> populations diverging via small changes accumulated over long periods
> of time.
>
> More at:
> http://www.hhmi.org/news/hoekstra20060707.html

That's one small step for mouse, one giant leap for mousekind


Ba dum ching

Richard Forrest

Jul 8, 2006, 1:32:03 PM

nightlight wrote:
> Windy wrote:
>
> >>In contrast, the ID model says that those N points are not
> >>chosen randomly from S1, but are guided by some 'intelligent
> >>agency' which allows it to find favorable configurations
> >>from S1 faster than the random search does i.e. the ID model
> >>says that if we observe a series of such adaptation processes,
> >>then the rate of observed favorable adaptations will be
> >>greater than the rate predicted by the neo-Darwinian model
> >>(implied by particular P(N,T) in each process instance).
> >
> >
> > OK.. if favorable adaptations occur at a high rate, this is evidence
> > for ID?
>
> It would be an evidence that neo-Darwinian model (RM+NS) is
> an unlikely explanation of the observed adaptation and that
> a more efficient (than random) search algorithm is responsible
> for the adaptation. By convention, we can call this more
> efficient algorithm an 'intelligent' or guided mutation,
> or generally an ID algorithm.
>

No we don't.
Not in science, anyway.
If, in science, we don't know what is responsible, we say
"We don't know what's responsible. How can we find out?"

In ID, which isn't science, they say
"We don't know what's responsible, so GodImeananintelligentdesigner
must be responsible. Well, now that's settled, let's go and spend those
fat cheques the DI has sent us."

> > What about if favorable adaptations did not occur or occur at a
> > lower rate? Is that evidence against ID?
> >
>
> I think you meant "higher rate" not "lower rate" above (otherwise
> what you wrote is gibberish). Assuming this correction,
> it would be an evidence that the 'intelligent agency' (IA) model
> is not necessary to explain that particular adaptation.

Oh, I see.
So the position is that there *must* be an intelligent designer, but if
any particular test doesn't uncover his actions, it's the wrong test.

Hard to falsify.

> That
> doesn't exclude a need for the IA model in order to explain
> some other adaptations. It doesn't even exclude the IA
> participation in that example. It only means that what
> was observed does not discriminate for or against IA and
> that one might need additional data to make such discrimination.
>
> (The latter observation is important since a coherent IA theory
> should not cherry pick in which processes the IA participates
> and in which it choses not to be involved with.)

To form a theory, you have to start by testing hypotheses. So far, no
IDer has tested a single hypothesis about the action of any intelligent
designer.

This is called trying to run before you can even crawl. Or more
accurately, trying to run before you are even conceived.


>
> Consider an 'intelligent agency' which we know to exist in nature
> (human brain) applying its efforts to stock market trading. One
> could trade using all his knowledge and foresight and not do
> any better than chance on any particular day or a week or
> throughout the whole trading career. Could you declare that
> he was picking randomly if his gains don't exceed random
> ones for some given span of time? You can't. You would
> need more data and possibly different kind of data (such as
> direct observation of his trading or an interview) to support
> such conclusion.

Quite what the relevance of this is to evolution is a mystery to me.
I suppose that if you have not one scrap or tittle of evidence, then all
you can do is to argue from analogy.

By the way, science does not construct arguments from analogy. It
constructs them from evidence.

>
> In any case, what is your point? How is your question related
> to my observation that there is nothing in the article, no
> calculation or estimation of predictions of any mutation model
> (random or any other), let alone any comparison of such
> prediction with the empirical facts observed. There is simply
> nothing there for neo-Darwinians to crow about. If you saw
> something to crow about, you haven't shown as yet what that
> might be.
>
> Instead, so far you have been quite desperate in trying
> to divert the discussion to your little collection of pet
> strawmen while showing off your repertoire of expletives.
> Either say something of substance or stay quiet and let
> someone who may have a better understanding of the subject
> being discussed have a turn defending the neo-Darwinian
> model. Your response was and continues to be so inept that
> you may well be a supporter of Pat Robertson's theory of
> evolution acting here under the false flag, trying to
> make neo-Darwinians look stupid and primitive.
>

So what was the hypothesis which ID is testing?

> >
> >>The point of my argument is that there is nothing in the article
> >>for neo-Darwinians to crow about.
> >
> >
> > Sez you.
>
> Well, can you cite some calculation from the paper demonstrating
> that the observed mutation was 'most probably' (e.g. 50% or
> better) random.

And this demonstrates what, exactly?

On the other hand, there is plenty of other evidence to support
evolution by natural selection, and no evidence whatsoever to support
the idea that an "intelligent designer" occasionally interferes with
normal biological processes.

Science forms paradigms because they form a consistent and coherent
model of how systems behave, and builds those paradigms because that is
where the evidence leads. Whether you like it or not, evolution by
natural selection is the ruling paradigm in the biological sciences
because it is supported by a vast amount of evidence.

>
> >>>So the designer actively intervened in the evolution of these mice
> >>>during the last 6000 years? Interesting.... how, exactly?
> >>
> >>There could be an intelligent agency guiding mutation faster than
> >>random search toward the favorable DNA configuration.
> >
> >
> > And *how* is it doing that? Devise an experiment to test it.
>
>
> There are plenty of ways that an 'intelligent agency' could exist
> and perform directed mutations. I merely pointed out couple
> possibilities which are not a priori excluded by the presently
> known laws of physics.

So propose a test which could *falsify* the intervention of an
intelligent agency.

>
> We also already know that natural processes which intelligently
> guide mutations exist in nature (e.g. brains of molecular
> biologists). The empirically established existence of such
> processes is a direct counter-example to a conjecture
> that such natural processes are excluded by laws of nature.
> They are obviously not excluded by the laws of nature since
> they do exist in nature.

So propose a test which could *falsify* the intervention of an
intelligent agency.


>
> Hence your "point" about '6000 years', which is what I was
> responding to above, is a vacuous strawman. Pat Robertson's
> model of evolution is certainly not the sole alternative
> to the neo-Darwinian model.

What was that scientific alternative to evolution by natural selection
again?
It isn't ID - Michael Behe, one of its leaders, conceded under oath
that it isn't science, so you must be talking about something else.

>
>
> >>Hence, it is perfectly conceivable that our physical,
> >>chemical, biological... laws are an extremely crude
> >>picture of an activity by an unimaginably powerful
> >>underlying intelligence (vast distributed computer
> >>running 1e16 times faster and having (1e16)^3 ~ 1e50
> >>times more components than the intelligent processes
> >>we are familiar with at our level). In addition to
> >>providing support for ID model of evolution, this
> >>kind of model could also be a rational alternative
> >>to the 'anthropic principle' in explaining the fine
> >>tuning of physical constants.
> >
> >
> > So we're in the Matrix?
>
> I suppose, if one had to put it in terms understandable to
> a simpleton whose science education consists of going to
> the movies, one might put it that way.

Cutting, cutting.
What is *your* education in science, by the way?

RF

Windy

Jul 10, 2006, 2:13:51 PM
Richard Forrest wrote:

> nightlight wrote:
> > >>Hence, it is perfectly conceivable that our physical,
> > >>chemical, biological... laws are an extremely crude
> > >>picture of an activity by an unimaginably powerful
> > >>underlying intelligence (vast distributed computer
> > >>running 1e16 times faster...

> > >
> > > So we're in the Matrix?
> >
> > I suppose, if one had to put it in terms understandable to
> > a simpleton whose science education consists of going to
> > the movies, one might put it that way.
>
> Cutting, cutting.
> What is *your* education in science, by the way?

He wrote something about that in a later post:

"As someone looking from the perspective of a much 'harder'
science (theoretical physics) than comparatively 'softer'
disciplines of biology and biochemistry, I know that even
far simpler systems, with just few electrons and protons
and for behaviors spanning only few tiny fractions of a
second, are essentially unsolvable puzzles..."

Too bad, since many physicists' insights into biology have been
valuable. I guess there are always a few filled with nothing but hot
air, like John Barrow and the dimly glowing nightlight.

And shouldn't persons educated in theoretical physics know how to apply
statistics?

-- w.

nightlight

Jul 10, 2006, 6:13:34 PM
> The point I am making is that your whole thesis is based
> on a false dissection of the problem. There is no genetic
> search algorithm searching through all possible DNA
> configurations. There is no teleologic goal in the
> process.

Term "algorithm" here is merely a shorthand for a specific
sequence of steps which transforms a system from state A to
state B. Similarly the "search algorithm" in this context is
a sequence of steps which transforms the genetic network
(possibly spanning multiple generations) from state A which
does not have the particular favorable mutation to state B
which does have it.

For adaptable networks (such as brain, genetic network, neural
network) and the sequences of steps they go through one often
uses term "algorithm" (e.g. the pattern recognition algorithms
or search algorithms realized via neural networks) since such
sequences of state transformations can be mapped into conventional
computer algorithms solving the same problems. No teleological
premise is introduced by use of such shorthand. It is simply a
terminological convenience since the theory of computer
algorithms brings in cleanly defined, high resolution terms
and concepts particularly suitable for analyzing such
sequences of steps, states they traverse, their effects,
performance characteristics, alternatives.... etc.


>>Both, the guided (intelligent)
>>and the 'random' mutations will have some number of mutations
>>per given time. The _only_ difference you could extract
>>statistically is that the intelligent mutations will find
>>the solution faster on average than the random ones.
>
>
> IOW, there is no detectable difference. Exactly how would you
> determine that any given mutation is "intelligently designed"?

Statistical difference is detectable. The difference here would
be between the theoretically predicted success rate (or probability
of success for a given number of tries) of a 'random search' in
the space of DNA configuration (accessible via 1 nucleotide change)
vs the empirically observed success rate of the actual algorithm.

> We are
> talking about a single nucleotide change here, not some long drawn out
> search of all DNA sequence space.

It may have slipped your mind, but we are discussing this single
nucleotide change in the context of its alleged support for
the neo-Darwinian conjecture, which prescribes a particular
way/algorithm as to how such single nucleotide change is picked
out (e.g. by the biochemical reaction web of the cell) among
all possible single nucleotide changes that could be picked
out from a given initial state.

The ND algorithm/prescription requires that the change is
picked "randomly" i.e. that the picking process cannot
systematically (= with statistical significance) show
preference/bias at the time of the pick for any particular
nucleotide (or any of their possible final states) whose
change will turn out (with statistical significance) to
be more useful _later_ over the nucleotides & their final
states whose change will turn out to be less useful _later_
(or even outright fatal shortly the the change).

Otherwise, we would characterize the picking process which
systematically gives preference _now_ to particular nucleotide
changes which will be more useful _later_ as anticipatory or
intelligent picking process.

The ND theology absolutely prohibits that kind of picking
processes. If there are only two physically/chemically
possible changes A and B, and A will turn out statistically
useful _later_, while B will turn out almost always fatal
_later_, the ND dogma still prohibits picking processes
which give preference to A at the time of the pick i.e.
it requires that A and B have to be picked with equal
probability and only the natural selection that follows
later is allowed to terminate B type offspring and continue
the A type offspring.

Now, if you suddenly declare that your view of ND
allows for the 'change picking processes' that give
statistically significant preference to A over B
at the time of the picking, hence before the later
state of the environment would perform the natural
selection, then you already agree that the change
picking process can be _anticipatory_ -- the changes
statistically preferred _now_ among the alternatives
will turn out statistically more useful later than
the alternatives.

You are then, for theological reasons, merely refusing
to attach attribute "anticipatory" to the 'change
picking processes' which anticipate usefulness of the
change.

You will defend (as you did before on this same point
in the Cairns bacteria discussion) that you are not refusing
it for theological reasons but simply because the changes
on A are more probable due to the particular physical/chemical
DNA properties around the sites A and B at the time of the
change i.e. the greater probability of the A change is due
to the physical and chemical laws and not due to the
anticipation.

But as noted before, with that kind of criteria, you cannot
then call any processes anticipatory, including those
occurring in human brain since the human "anticipation"
is also _implemented_ by neurons through some physical
and chemical processes. Similarly, you would refuse to
say that a meteorological program predicting weather is
an anticipatory process since its computation and output
are also produced by the physical processes in its
hardware. Or, you will reject to say that a chess program
anticipating your next move is "anticipating" your next move
since its computations is also implemented as a physical
process in the computer hardware.

In other words, your underlying premise appears to be that
the attribute "anticipation" can be applied only to the
super-natural processes, the processes which do not comply
with the laws of physics & chemistry i.e. you have merely
defined the attribute "anticipation" away from its common
meaning, while in substance agreeing that such processes
do in fact exist, be it in the genetic networks or in
the networks of neurons such as human brain, or in the
conventional computers.

{ Note: I use above the phrase "change picking processes"
since the commonly used term "mutation" seems to throw
you (and some others here) into your usual 'vapid dictums
parroting loop', which then leads to waste of time and
efforts required to snap you out of the loop. }


> Are you saying that mutations that
> occur at a higher frequency are more likely to be beneficial than
> mutations that occur at low frequency in a population?

One cannot make an absolute evaluation like that without any
regard to the context. If a high mutation rate at some site
is the result of the past memory (by the cellular genetic
network) of the environmental challenges for which that
mutation has turned out useful, then if the similar
environmental challenges recur (which is often a useful
probabilistic assumption), the change will likely turn
out useful again.

As to how that usefulness might compare to some arbitrary
changes on arbitrary sites in arbitrary environments, is a
matter of specific evaluation of consequences. For example
one can put an organism into a different environment
in which any particular mutation, which was generally useful
in the previous environments, will turn out harmful.

>
> The search space in this case is a single nucleotide in a pre-existing
> gene (although it is possible that other mutations could have produced
> the same phenotype).

That is the core element of your confusion. Reminding you again,
as explained above we are discussing the implication of the
observed adaptation, which indeed was due to a single nucleotide
change, for the viability (as a mathematical model of the observed
process) of the neo-Darwinian 'change picking' algorithm among
all possible changes. Hence, the space of all possible states
(the set denoted as S0 earlier) available for any 'change picking'
algorithm is essential for deciding which algorithm might have
been used to pick the change observed. This is no different
than using the knowledge of the space of allowed numbers in
the number guessing game in order to decide which algorithm
might have been used to pick the guesses.

Whether we can compute such number or not, any particular
nucleotide X can be changed (mutated) in some finite number
of ways, call it NX, under the physical and chemical conditions
given as initial state. Therefore, for all NN nucleotides, there
is another number T, which is the total number of DNA
configurations which differ from the initial state by a
single nucleotide change. If NX were constant for all
nucleotides X, then one could compute T = NN * NX. If the
NX varies with X, as it normally would, T would be the sum
of NX for all X. In any case, there is such integer T,
regardless of how well we may be able to estimate in practice
its value with present computational tools.
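As a sanity check on the T = NN * NX bookkeeping described above: for simple point substitutions each nucleotide has NX = 3 alternative bases, so a genome of NN sites has T = 3 * NN one-substitution neighbors. A minimal sketch, ignoring insertions and deletions, with an illustrative round figure for genome length:

```python
def one_step_neighbors(genome_length, alternatives_per_site=3):
    """T: number of DNA configurations one point substitution away from a
    given sequence, computed as NN * NX with a constant NX per site."""
    return genome_length * alternatives_per_site

NN = 2_500_000_000             # illustrative mammalian genome length in bp
print(one_step_neighbors(NN))  # 7500000000
```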

Let's say, for the sake of argument, that you obtain that
T = 10^20 configurations which differ by one nucleotide change
(including all possible ways that any changed nucleotide
can be changed under the initial state physical-chemical
conditions) from the initial configuration. Note that we
can't do anticipatory pre-filtering (from among all possible
nucleotides changes) based on the future viability of the
organism, since within ND algorithm, that filtering can be
done only _after_ the pick, when the natural selection takes
place and when the cost of one potential offspring is already
paid. Hence our NX for nucleotide X includes changes which may
result in what would be a 'defective' DNA (including all the
possible fatal defects).

Suppose now that the total number of offspring by the mice
population during this adaptive "big leap" is 100 million,
i.e. N = 10^8. Hence, the mice population was able to explore
only N/T = 10^-12 fraction of the possible DNA configurations
which differ by one nucleotide from the initial one. The odds,
_within the random pick algorithm_, that one of the changes
would hit the right nucleotide the right way, from some set
of F favorable mutations of this type, would be P(N,T) = F*N/T.

Let's say that F=1000 configurations qualify as favorable
under the circumstances. Hence the _random pick algorithm_
yields the odds of any favorable 1 nucleotide mutation as
10^-9 or one in billion. That kind of low odds would
automatically exclude the _random pick algorithm_ as
a probable algorithm capable of mathematically modeling the
empirically observed mutation, since it would model it
correctly once in a billion experiments, failing the rest.
One would then have to look for some other algorithm to
mathematically model the observed adaptation.
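The worked example above reduces to one line of arithmetic; note that the values of T, N, and F are the poster's for-the-sake-of-argument figures, not measured quantities:

```python
def favorable_hit_odds(N, T, F):
    """Small-probability estimate F*N/T that N random one-nucleotide picks
    from T possible configurations include at least one of F favorable ones."""
    return F * N / T

T = 10**20  # assumed count of one-nucleotide-change configurations
N = 10**8   # assumed total offspring during the adaptation
F = 1000    # assumed number of favorable configurations
print(favorable_hit_odds(N, T, F))  # 1e-09
```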

This is mathematically the same problem setup as in the
number guessing game, only with different values for
N, T and F. If the 'random guess' algorithm is shown to
need many more tries (or offspring in the mice example)
to guess the right number than what was observed as the
average number of tries to guess, then one can conclude
that the random guessing algorithm is a poor model for
the observed success rate. For example, if random guessing
is computed to have a chance of 1 in a billion to yield a correct
guess in 10 tries on average, the implication is that I
have used some other strategy, more efficient than
random guessing.

> I could certainly tell whether or not you were rotating through the
> numbers in a way significantly different from what would occur by
> chance.

Yes you might be able to do that, but you don't have that kind
of knowledge by your own rules. Namely, you are claiming (via
your empirical rates argument) that _all_ you need to declare
that my guessing strategy was 'random guessing' is how many
tries on average I need to guess (which is equivalent here to
knowing the average rate of my tries, such as 5 tries per minute,
and the duration of the guessing).

I am saying that this number alone (which is equivalent to the
knowledge of the empirical rates of various mutations) is not
enough for such conclusion. Short of being able in some cases
to eyeball the pattern of my guesses (which is an irrelevant
artifact of the intentional simplicity of the example), any
deduction of the algorithm used, based solely on the statistical
properties of its performance, requires the knowledge of
the range of the available numbers i.e. the size of the
space of all possibilities (denoted earlier as the integer T).

In other words, we are discussing whether you can legitimately
deduce from the observed _statistical properties_ of mutations
alone, as reported in this or similar experiments, that the
'change picking algorithm' was a random pick (unbiased to
prefer changes _now_ which will turn out useful _later_, as
explained earlier).

That you may be able to sometimes eyeball some kind of pattern
in my sequence of number guesses in a simplified example where
you can count everything on your fingers is, to put it politely,
not relevant for the present argument. You are trotting out the
empirical rates of mutations, claiming that such rate numbers
_alone_ allow you to conclude that the algorithm which models
best the observed success rates is the neo-Darwinian RM
algorithm -- that is the algorithm which randomly picks among
all possible 1 nucleotide changes without any statistical bias
toward the changes which will turn out more useful later. I am
saying that the rate numbers are not enough for such conclusion.

Or, expressed in terms of the number guessing game, you can't
deduce that I am guessing randomly from knowing the empirical
average number of tries before a correct guess. You need to know the
range of the numbers allowed (the size of the space of all
possible configurations being explored) to conclude that the
random guessing algorithm models well enough the observed
performance of the guessing.
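The range-dependence claim in the guessing analogy can be checked with a quick simulation. This is a minimal sketch under my own assumptions (a guess-without-repetition game and made-up function names, not anything from the thread): the same observed average number of tries is exactly what random guessing produces over a small range, yet far better than random over a large one.

```python
import random

def tries_to_guess(secret, n, rng):
    """Random guessing without repetition: shuffle 1..n and count
    the tries until the secret number comes up."""
    candidates = list(range(1, n + 1))
    rng.shuffle(candidates)
    return candidates.index(secret) + 1

def average_tries(n, trials, seed=1):
    """Empirical average number of tries for a random guesser over 1..n."""
    rng = random.Random(seed)
    return sum(tries_to_guess(rng.randint(1, n), n, rng)
               for _ in range(trials)) / trials

# For random guessing without repetition the expected number of tries
# is (n + 1) / 2: an observed average of ~10 tries is exactly what
# random guessing yields over a range of 19 numbers, but would be far
# better than random over a range of 1000 (where random needs ~500).
print(average_tries(19, 20000))   # close to 10
print(average_tries(1000, 2000))  # close to 500
```

So the average alone does not identify the algorithm; only the average together with the range size does, which is the point being argued.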


> In our particular example, the size of the search and solution space is
> the single nucleotide that needs to be changed. The rate of change at
> that site is all you need.
>

The rate of change (accounting also for the rate of repairs)
is dependent on the immediate physical and chemical
environment in which it occurs. That immediate environment
is in turn dependent on its own physico-chemical environment,...
and so on, until you reach the state of the organism's
environment and any survival challenges that it may contain
(such as presence of certain predators, which may result
in particular type of physiological stress response, possibly
specific to the predator type, which eventually affects the
individual cells and their biochemical reaction webs,
including the immediate environment of the favorable mutation
site). Hence, the rate of that nucleotide change is, at least
in principle, dependent on the environmental challenge for
the organism.

Obviously, if you were to measure mutation rate of that
site in vitro, maintaining its immediate environment fixed,
you will probably get a fixed mutation rate at that site.
We do know of examples, though, where mutation rates do
change drastically under the environmental stress. I
didn't see anything mentioned about the mutation rates
for this nucleotide under different circumstances.

In any case, your argument wouldn't be helped even if
one were to measure the rate and found that in the
variety of circumstances, with or without predators
or stress, the net rate of retained change (allowing
for repair) is sufficient to account for the observed
adaptation given the available number of tries. While I
didn't see any such fact cited, and I suspect it isn't
quite true, for the sake of argument I will grant it
for a moment. So, let's assume that this rate was high
enough to likely produce the observed adaptation
in those circumstances.

Going back to the number guessing game, that would be
analogous to, say, using strategy of trying number 7 more
often as the initial guess, since, for example, I may
have noticed that the other players tend to pick 7 most
often.

Similarly, the genetic network of the mice may have in
the past generations accumulated knowledge of usefulness
of changing that nucleotide that way, more often than
some others and has already built in the structural &
chemical properties (perhaps stored in the "junk DNA")
which make that particular nucleotide more prone to
that particular change. That kind of memory and recall
of previously useful patterns of change/action is common
to adaptable networks (e.g. animal brains, immune systems).

Does this help your argument? Quite the contrary. It would
be like declaring that my calling number 7 more often
or sooner than others proves that my guessing strategy
was random guess. It actually shows exactly the opposite.
Random guessing strategy would pick any number within the
allowed range equiprobably. The fact that my guessing
distribution is skewed toward something that may work
better when guessing numbers produced by humans (knowing
the common human biases toward 'lucky number 7'), shows
a guessing strategy which is trying to anticipate the
values of the numbers which will be uncovered at some
later time. It is a look-ahead algorithm, an algorithm
which maintains an internal model of the number generating
process, plays 'internally' this model forward in time
to find out what numbers it will generate, and uses
these results from the model space to select its action
(the guess) in the real world.
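The "lucky number 7" strategy described above can be contrasted with uniform guessing in a short simulation; the function names and the 50% bias weight below are illustrative assumptions of mine, not figures from the discussion:

```python
import random

def first_guess_uniform(rng, n=10):
    # Uniform random guesser: every number 1..n is equally likely.
    return rng.randint(1, n)

def first_guess_biased(rng, n=10, favourite=7, weight=0.5):
    # Anticipatory guesser: calls the favourite number half the time,
    # otherwise falls back to a uniform pick.
    if rng.random() < weight:
        return favourite
    return rng.randint(1, n)

def frequency_of(guesser, value, trials=20000, seed=2):
    """Empirical frequency with which `guesser` calls `value` first."""
    rng = random.Random(seed)
    return sum(guesser(rng) == value for _ in range(trials)) / trials

# A skewed first-guess distribution is evidence *against* the uniform
# random model, not for it.
print(frequency_of(first_guess_uniform, 7))  # ~0.10
print(frequency_of(first_guess_biased, 7))   # ~0.55
```

The skew toward 7 is exactly the observable signature of a look-ahead strategy in this toy setting: the biased guesser's frequency departs from the 1/n that the uniform model predicts.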

With genetic networks of animals, the changing of colors
via mutations may similarly be a generally useful strategy,
hence the relatively higher rates of such mutations are
a form of anticipatory or intelligent mutations -- the
mutation is done more often in anticipation of need for
it, since the past experience of the genetic network of
mice, spanning perhaps many generations, has shown it to
be useful to mutate that nucleotide that way. I would
guess that "junk DNA" stores great many such patterns
or strategies, allowing it to perform combined changes
of multiple sites simultaneously, with a single "recall"
control switch (which may turn on/off via a mutation or
some other type of the biochemical activation).

Alternatively, you may argue that this mutation is not
done by the genetic network "more often" than "others".
Well, that brings back into the argument the "other"
possible mutations i.e. it raises the question:
is this particular change of this nucleotide more or
less likely than any other change of any other nucleotide?
But that puts you right back where we started with
the number guessing game -- you need to know how big is
the space of possible changes in order to know whether
this particular change is occurring more or less often
than a randomly picked change from the entire space
of possible changes.

You may try twisting the above defense, in order to
avoid recognizing that you need to know the size of the
search space, by claiming that this mutation is not
occurring "more often" than the other _observed_
mutation rates for other sites. But that still leaves
the point of the question unchanged -- how do you know
that all these other _observed_ mutations, including
the nucleotide discussed, are an unbiased random pick
from all possible 1 nucleotide changes, without knowing
how many possible 1 nucleotide changes there are.

Hence, neither of the last two variants allows you
to claim that this is a "random 1 nucleotide change
pick" without having to compute/estimate how many
possible 1 nucleotide changes there are.

You can try hanging some more onto the earlier
position i.e. you can say you allow that the observed
rate of mutation is greater than what a random pick from
all possible 1 nucleotide changes would yield, but now
you will claim that there is nothing intelligent about
it since, say, you can show exactly, via hypothetical
detailed biochemical analysis that the rate observed
is perfectly predictable from the biochemical properties
of that site and its surrounding. Hence you maintain
that the mutation strategy was still the plain old
random mutation, but now under the particular
physico-chemical constraints, it is the constraints
which are responsible for the biased picks, and not
the picking algorithm, which thus, by these shifting
definitions, remains dumb and simultaneously stripped
of any connection with physical and chemical laws,
thus leaving only the 'supernatural anticipation' as
the sole alternative to the RM conjecture.

(That is the old argument you have used earlier for
the Cairn's experiments and their followups, as you
were struggling to make the RM conjecture coexist,
somehow, with these results, at least at a superficial
verbal level via the childish semantic games.)

Well, try the same "fix" for the number guessing game
and my strategy of calling number 7 more often. Say,
you measure my EEG, or use some other probes on
specific neurons, and from the electric patterns there
somehow show that I will call 7 some 500 ms before I
consciously "decide" to call it. Does that discovery
somehow make my use of anticipatory (look-ahead) guessing
strategy suddenly into a random guessing strategy?
_Your_ knowledge of the precise neurological mechanism
by which my neurons implement my guessing strategy
does not change my guessing strategy into some other
non-anticipatory strategy or to an absence of any
strategy at all.

Claiming that the knowledge of the detailed mechanism
will somehow change the anticipatory process into
a non-anticipatory/dumb/random process, would be
like recording the orchestra playing Beethoven's ninth
and defending a thesis that they are not playing
Beethoven's ninth. How? By displaying the sound waves
as electric voltage curves on the oscilloscope, and
then declaring that since you can see the sounds as
voltage curves, the orchestra is merely playing the
voltage curves and not Beethoven's ninth.

That is essentially what your ND defense via pointing
at the biochemical mechanisms (and their resulting mutation
rates) implementing anticipation, as a "proof" that there
is no anticipation, amounts to.

Windy

Jul 10, 2006, 7:11:05 PM
nightlight wrote:
> > We are
> > talking about a single nucleotide change here, not some long drawn out
> > search of all DNA sequence space.
>
> It may have slipped your mind, but we are discussing this single
> nucleotide change in the context of its alleged support for
> the neo-Darwinian conjecture, which prescribes a particular
> way/algorithm as to how such single nucleotide change is picked
> out (e.g. by the biochemical reaction web of the cell) among
> all possible single nucleotide changes that could be picked
> out from a given initial state.

There is no "picking", random or otherwise. You are talking as if a
process (random or not) first determines that a mutation must happen
somewhere in the genome, and then the mutation must find a place where
to happen. It is not so, the mutations are independent of each other.

> Whether we can compute such number or not, any particular
> nucleotide X can be changed (mutated) in some finite number
> of ways, call it NX, under the physical and chemical conditions
> given as initial state. Therefore, for all NN nucleotides, there
> is another number T, which is the total number of DNA
> configurations which differ from the initial state by a
> single nucleotide change. If NX were constant for all
> nucleotides X, then one could compute T = NN * NX. If the
> NX varies with X, as it normally would, T would be the sum
> of NX for all X. In any case, there is such integer T,
> regardless of how well we may be able to estimate in practice
> its value with present computational tools.

Your strategy is flawed. Consider two theoretical organisms where one
has more DNA than the other, but which share the gene we want to
observe and have similar mutation rates. In your scenario, the
likelihood of the desired mutation is always lower in the organism with
more DNA, since it has more possible mutational states. Do you think
that's realistic?

By the way, what is your prediction of the rate of neutral to
non-neutral mutations, if the mutations are picked intelligently?

-- w.

nightlight

Jul 10, 2006, 9:11:01 PM
Richard Forrest wrote:

>>It would be an evidence that neo-Darwinian model (RM+NS) is
>>an unlikely explanation of the observed adaptation and that
>>a more efficient (than random) search algorithm is responsible
>>for the adaptation. By convention, we can call this more
>>efficient algorithm an 'intelligent' or guided mutation,
>>or generally an ID algorithm.
>>
>
>
> No we don't.
> Not in science, anyway.
> If, in science, we don't know what is responsible, we say
> "We don't know what's responsible. How can we find out?"

If some algorithm is more efficient than a dumb/random search
it is perfectly reasonable to call it _by convention_
'intelligent' search. There is nothing to "research" and
"find out" about a perfectly natural naming convention,
which merely indicates the basic property (greater efficiency)
of such alternative algorithm.

>
> In ID, which isn't science, they say
> "We don't know what's responsible, so GodImeananintelligentdesigner
> must be responsible. Well, now that's settled, let's go and spend those
> fat cheques the DI has sent us."

You seem to have mistakenly cut and pasted something from your
debate with Pat Robertson. Or maybe you just prefer to wrestle
with your little strawman.


>>it would be an evidence that the 'intelligent agency' (IA) model
>>is not necessary to explain that particular adaptation.
>
>
> Oh, I see.
> So the position is that there *must* be an intelligent designer, but if
> any particular test doesn't uncover his actions, it's the wrong test.
>
> Hard to falsify.

Not at all. I am simply pointing out that one can imagine a
hypothetical experiment or observation which provides no data
that can discriminate between the various models. That is
not equivalent to saying that no observation can discriminate
between the ID and ND models. After all, this whole series of
posts is arguing precisely the opposite -- that there exist
ways to falsify either of the two models of the evolutionary
algorithm.

> To form a theory, you have to start by testing hypotheses. So far, no
> IDer has tested a single hypothesis about the action of any intelligent
> designer.
>

The hypothesis is that there is an 'intelligent agency' (IA) guiding
the mutations (or generally all transformations of genetic networks
across generations). This results in a more efficient search of the
space of all "possible" (=consistent with the laws of physics &
chemistry) DNA configurations for the 'favorable' configuration than
the neo-Darwinian search algorithm (random mutation, unbiased toward
possible mutations which may turn out favorable later) can achieve.
Hence the testable hypothesis, as explained at length in previous
posts, is that the observed rates of the new 'favorable'
configurations should be greater than those that a random search
algorithm would find.

The single nucleotide mutation described in the paper being discussed
is a particularly convenient case for discussing what is meant above
since the search space is relatively simple to define - it is a space
of all DNA configurations which differ from the initial state by a
single nucleotide change (of any kind of change i.e. the change need
not result in a valid nucleotide, it needs only to be consistent with
the laws of physics/QM and chemistry applied to the initial state).

Mathematically, this problem is equivalent to the problem of finding
whether the algorithm I am using in the number guessing game
performs better than the random guessing. If you only know that I
guess the hidden number in 10 tries on average, my claim is that
this fact alone is not enough to decide whether my guessing
performance is better than a random guessing algorithm. You
also need to know the range of allowed numbers before
arriving at such a conclusion. The basic neo-Darwinist claim here
(the crowing about how the mice paper confirms the neo-Darwinist
RM conjecture) is equivalent to claiming that from just knowing
that I use 10 tries per correct guess one can conclude that my
guessing performs no better than the "random guessing" algorithm.

In the mice adaptation example, knowing the empirical rate of mutations
at the site where they found the 1 nucleotide mutation, is not enough
to conclude that the neo-Darwinian RM algorithm (the random search
in the space of configurations 1 nucleotide away from the initial
state) is the model which explains the observed adaptation. Namely,
all that the empirical mutation rates (taken together with the
number of organisms and the time for adaptation to arise) give
you is the number of tries that the search algorithm had to find
the favorable adaptation. The number of tries alone is insufficient
data to decide whether the search algorithm performed better
or worse or equally as the random search. You need to know what
the search space was, before declaring that the empirically
observed performance is no better than random search. Hence,
there is nothing there for neo-Darwinists to crow about.


>>Consider an 'intelligent agency' which we know to exist in nature
>>(human brain) applying its efforts to stock market trading. One
>>could trade using all his knowledge and foresight and not do
>>any better than chance on any particular day or a week or
>>throughout the whole trading career. Could you declare that
>>he was picking randomly if his gains don't exceed random
>>ones for some given span of time? You can't. You would
>>need more data and possibly different kind of data (such as
>>direct observation of his trading or an interview) to support
>>such conclusion.
>
>
> Quite what the relevance of this is to evolution is a mystery to me.
> I suppose that if you have not one scrap or tittle of evidence, then all
> you can do is to argue from analogy.
>

The intent was to illustrate just how far the neo-Darwinists
are willing to go in twisting the common language and concepts
in order to uphold their religiously/ideologically motivated
dogma (of 'random search' being the sole search algorithm used
to explore the space of DNA configurations). By placing their
absurd criteria into an analogous context but in which they don't
have so strong emotional investment as they do in the context
of evolution, makes the absurdity of the rigged criteria more
obvious.


> So what was the hypothesis which ID is testing?

The hypothesis is that the random search algorithm for
favorable configurations in the space of all physically
possible DNA configurations would under-perform the empirically
observed evolutionary algorithm i.e. the random search would
take much longer to find the successful evolutionary novelties
than the observed rates of such novelties. As explained
earlier, the single nucleotide adaptation with the mice,
greatly simplifies the formulation of the hypothesis and
its test criteria.

> On the other hand, there is plenty of other evidence to support
> evolution by natural selection, and no evidence whatsoever to support
> the idea that an "intelligent designer" occasionally interferes with
> normal biological processes.


You are again pasting here from your debate with Pat Robertson or
some such. Nothing I said here or in other threads implies the
strawman "theory" you are mocking above. First, "intelligent designer"
is a loaded term, implying among other things a deistic perspective
(which I think is wrong). I prefer the term "intelligent agency" (IA),
which is more consistent with a pantheistic perspective which I
find more coherent. Namely the IA is continuously active, not just
in evolution, but also in forming and upholding the uniform physical
laws across the universe, from incredibly finely tuned physical
constants (e.g. google on "anthropic principle") down to the
elemental properties of our space-time. A bit more about the
possible model for IA is given at the end of an earlier post here:

http://groups.google.com/group/talk.origins/msg/651222ff530cbe4e

I bring up the general view underlying my arguments only to
avoid various caricatures being tossed in by the neo-Darwinists
here as their presumed best guess as to what perspective one
must have to make those arguments.

>>There are plenty of ways that an 'intelligent agency' could exist
>>and perform directed mutations. I merely pointed out couple
>>possibilities which are not a priori excluded by the presently
>>known laws of physics.
>
>
> So propose a test which could *falsify* the intervention of an
> intelligent agency.

Well, showing that the neo-Darwinian random search algorithm can
perform at the level of the empirically observed evolutionary
algorithm, would make the IA model subject to Ockham's razor.

On the other hand, if the random search algorithm were to
perform better than the observed one, that would imply a
'malevolent' (to life) IA, while the performance of random
search worse than the observed would imply a 'benevolent' IA
(this is the variant assumed by default as IA).

One can also easily flip upside-down these value loaded
attributes (malevolent & benevolent) by viewing the emergence
and evolution of life as a shortcut or a short-circuit speeding
up the approach of the whole system to the maximum entropy state,
the heat death of the universe (since the processes of life
accelerate the total entropy generation for the whole system).

> What was that scientific alternative to evolution by natural
> selection again?

The alternative that I have in mind is a hypothesis that
the neo-Darwinian 'random mutation' search algorithm would
under-perform the actual/empirical evolutionary algorithm.
Since that core aspect of the neo-Darwinism was never put
to test (it may be computationally too complex at present to
test decisively), all three types of algorithms, the neo-Darwinian,
the benevolent IA and the malevolent IA, are equally open
and falsifiable hypotheses, none so far shown or known to be
better than the others.

> It isn't ID - Michael Behe, one of its leaders conceded under oath
> that it isn't science, so you must be talking about something else.

He is welcome to believe and say as he wishes. I don't share that
particular view.

> What is *your* education in science, by the way?

Theoretical physics. Since the grad school (Brown Univ.),
though, I have been working in industry, R&D, mostly in
research and design of practical combinatorial and optimization
algorithms (used for forecasting, compression, search,...).

nightlight

Jul 11, 2006, 3:17:40 AM
Windy wrote:
> There is no "picking", random or otherwise. You are talking as if a
> process (random or not) first determines that a mutation must happen
> somewhere in the genome, and then the mutation must find a place where
> to happen.

The term "pick" doesn't imply a human-like conscious decision. If one
looks at the processes of DNA transformation as a sequence of steps
connecting one DNA state to the next, from the initial state A
through the final state Z, then in the context of comparing
alternative sequences of steps (algorithms) leading to different
final states Z, the shorthand which says that a given algorithm
"picks" a particular final state Z is a convenience. The
crisp algorithmic terminology abstracts the bare properties
of these processes (which, at the bottom, are an intractable
time-dependent evolution of many-particle quantum systems far
from equilibrium and with rich variety of initial and boundary
conditions) relevant for such comparisons.

Regarding the teleological undertones of those terms, note that
all dynamical equations of physics can be formulated in a
teleological form, as the 'least action principle', which says
that all physical systems evolve/transform in time in a unique
way that minimizes a global utility function for the entire
system (this function is called the 'action integral' and is a sum
of terms containing states of all components at all times
from the initial to the final moment of the transformation).
That teleological formulation of the dynamical laws is fully
equivalent to the alternative causal formulation (formalized
via differential equations). It is thus perfectly correct (and
often more convenient) to say that each component of a physical
system moves/changes in way that seeks to minimize the 'action'
of the whole system.

You may note that the teleological language is also common in
descriptions of patterns in human and animal behaviors. Hence
we have teleological languages for patterns at the fundamental
physical level and at the organism level. There is no
scientific reason to prohibit such language for patterns at
the intermediate levels (such as that of intra-cellular
biochemical processes), and especially so if you subscribe
to a reductionist view that all laws at higher levels can
be reduced to the laws of physics, since the laws of physics
are purely teleological in their 'least action principle'
formulation.


> It is not so, the mutations are independent of each other.

There is no fundamental principle which would prohibit the
genetic network from remembering the sets of useful mutations
(or combinations of mutations with activations/inhibitions of
multiple sites) and later under suitable conditions recalling,
then activating the whole set simultaneously via a single
control switch (which can be implemented as an activation /
inhibition or a mutation of the control switch site).

I would find it quite surprising if the genetic networks
haven't already discovered and used such potentially very
useful adaptation and survival mechanism long ago.
Except for jabbering about it, naming things and writing
papers and books, in almost all other respects these
networks are much smarter in their particular field of
expertise than all of the molecular biology, biochemistry,
biotechnology, pharmaceutical industry,... put together.
After all, the human technology and science isn't
even remotely close to being able to assemble a single
cell, or even a single organelle, from the basic inorganic
ingredients, while these networks are doing it billions
of times every day, and have been doing it for over
billion years. Hence, anything useful a human can think
up in this field, they have probably already figured
it out eons ago, along with thousands of variations and
improvements on any particular theme.

In a generalized perspective, where one looks at the DNA
transformations from some initial to some final state, the
DNA transformations during sexual reproduction, genetic
recombination and even the cellular differentiation
represent massive transformations of the initial DNA states,
simultaneously transforming the states of multiple far
away sites.

>
> Your strategy is flawed. Consider two theoretical organisms where one
> has more DNA than the other, but which share the gene we want to
> observe and have similar mutation rates. In your scenario, the
> likelihood of the desired mutation is always lower in the organism with
> more DNA, since it has more possible mutational states. Do you think
> that's realistic?

That is not 'my strategy' or 'my scenario' but it is how the
random search algorithm of the RM conjecture would 'pick' the
next DNA configuration in the simple case of uniform distribution
of final DNA states (which is a simplifying assumption that need
not hold in any particular case, and which was used merely for
concreteness and simplicity, in order to explain my main point,
which is that there exists a mathematically valid criterion
which can discriminate between ND and ID conjectures).

In the case of the single nucleotide mutation in the mice
adaptation being discussed, 'all accessible DNA configurations'
form some set S1 of configurations which differ from the set
of initial states S0 by a single nucleotide change (of any type
consistent with quantum laws and the initial state; note that
quantum laws are irreducibly non-deterministic i.e. the
precisely the same initial state may lead to different outcomes).
The question of what is the probability distribution of the
final DNA states in S1 requires a bit more discussion.

Although in principle, the physical laws and the initial
probability distribution of the states are sufficient to
determine the final probability distribution of the states
(final DNA configurations), in practice we don't know what
the initial distribution is and even if we could find that
out, the resulting problem would still be computationally
intractable, anyway. Hence, additional general principles
are introduced which seek to provide a coarse grained
characterization of the general properties of the final
distribution of configurations for a broad spectrum
of initial states (consistent with the system parameters
held fixed). The basic rule for constructing such
coarse grained distributions is that the distribution
maximizes the entropy while respecting any constraints
known to hold in a given setup (these constraints arise
from the restrictions on the initial and boundary
conditions and from the dynamical laws i.e. physical
and chemical laws).

The simplest among such coarse grained models is
the unconstrained case, which thus has a uniform
coarse-grained distribution, where each final state
accessible under the given conditions has equal
probability as any other. Depending on further details
about the initial distribution and any dynamical
constraints, the final distribution may be skewed, with
some class of final configurations being more likely
than others. In the modeling sketch I was describing,
I was not assuming any additional constraints, hence
the distribution was uniform. In principle one could
refine that distribution for specific cases where more
about the initial state and dynamics is known. But
that aspect was not essential for the argument being
made. What matters is that some such distribution,
uniform or biased in accordance with any additional
constraints, _exists_ as a component of a mathematical
model of the DNA transformation.

In this kind of setting, the neo-Darwinian RM conjecture
is a _further_ assumption that the probability distribution
of final states/DNA configurations will not be biased
toward the configurations which will turn out to be
(statistically) more favorable later. Hence, the RM
conjecture postulates a very particular relation between
the final distribution of the DNA configurations
(following a mutagenic & any repair processes) and
the un/favorability properties of the later phenotopic
expressions of these configurations in a given
environment.

The opposing (benevolent) ID conjecture is, like the
RM conjecture, an additional constraint on the final
distribution of DNA configurations, which says that
the final distribution of the DNA configurations
_will_ be biased in favor of the configurations which
will turn out to be (statistically) more favorable
later. (The malevolent ID, which I will ignore here,
says that the bias would go against the more favorable
configurations.)

Note that the RM and ID conjectures are additional (to the max
entropy assumption) constraints on the properties of
the distribution of the final DNA states. The distribution
being constrained by these additional general principles
is already the coarse-grained max entropy
distribution consistent with any given physical
or chemical constraints in a given setup.

Knowing the actual (or the coarse-grained max entropy)
final distribution, one could in principle compute the
probability of some set of favorable configurations
and thus obtain the probability that in a
given number of tries (the number of offspring available
in a given time interval) one or more instances
of these favorable configurations will occur. This
calculation from the _actual_ distribution would by
necessity predict the expected time needed to evolve
the observed mice adaptation which would be comparable
to the observed time (this is of course statistical
prediction, hence all the usual caveats for such
predictions apply).
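Under the simplest uniform case of the RM model described above, the waiting-time prediction reduces to a one-line formula. The sketch below is mine, and the numbers in it are purely illustrative placeholders, not measured rates from the mice paper:

```python
def p_at_least_one_favorable(favorable, total, tries):
    """Probability that at least one of `tries` independent uniform
    picks from `total` equally likely configurations lands in the
    `favorable` subset."""
    return 1.0 - (1.0 - favorable / total) ** tries

def expected_tries_to_first_hit(favorable, total):
    """Mean waiting time (in tries) to the first favorable pick,
    from the geometric distribution of a uniform random search."""
    return total / favorable

# Illustrative numbers only: 1 favorable configuration out of 10,000
# possible single-nucleotide changes, with 5,000 tries (offspring)
# available in the observation window.
print(p_at_least_one_favorable(1, 10_000, 5_000))  # ~0.39
print(expected_tries_to_first_hit(1, 10_000))      # 10000.0
```

This makes the discriminating test concrete: an observed waiting time much shorter than `expected_tries_to_first_hit` for the actual space size would disfavor the uniform random model.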

Without the knowledge of the actual distribution,
all one has are some further conjectured properties
of such distribution, such as those prescribed by
the max entropy prescription plus the RM or ID
conjectures. In the simplest case, where no other
constraints on the final states exist, the RM distribution
of the states is uniform i.e. all final states are
equiprobable. That is the case I was looking at and
discussing in my first post in this thread:

http://groups.google.com/group/talk.origins/msg/67310038b76aec09

where one can formulate the basic RM requirement and
show how to obtain its predictions in a very simple,
uncluttered way. A more complex coarse-grained distribution,
still _consistent_ with the RM, would merely make
all the expressions much more complicated, without
changing the main point of the argument -- there
exists a perfectly valid mathematical criterion which
can discriminate between the RM and ID conjectures.

{ The secondary point was that the mice results being
discussed do not provide any such discriminating
information or implications, hence there is absolutely
nothing in the article for the neo-Darwinians to crow
about that ID supporters could not crow about with
equal justification. }

In the same unconstrained case, the ID conjecture
would postulate that the distribution would be
non-uniform, and except for the bias requirement
in favor of the states which will turn out more
favorable later, there are no further requirements
on this distribution from the ID conjecture.

The principal discriminating implication of the ID
conjecture is thus a prediction that the empirically
observed times to the appearance of a favorable
mutation (at any scale, i.e. of any set of favorable
mutations) would be shorter than those predicted by
the RM conjecture (with the usual caveats about the
statistical prediction understood). This distinction
was, of course, the reason for all the excitement and
commotion, the 'controversy', among neo-Darwinians
after the Cairns experiments came out, since it
appeared that the observed rate of double favorable
mutations was greater than what the RM constraint
would imply it ought to be.
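One way the discriminating prediction could be made concrete: under the RM null, the number of independent origins of a favorable mutation is approximately Poisson with mean lambda = N * g * mu, and the ID conjecture predicts a statistically significant excess. A toy one-sided test; every number here is made up for illustration, none comes from the mouse data:

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam): the chance of seeing at least
    k independent origins of the favorable mutation under the RM null."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

# Illustrative inputs only: population size, generations, per-site rate.
N, g, mu = 50_000, 6_000, 1e-8
lam = N * g * mu                 # RM-expected independent origins (= 3.0 here)
observed = 12                    # hypothetical observed count

p_value = poisson_tail(observed, lam)
print(f"RM expects {lam:.1f} origins; P(>= {observed} by chance) = {p_value:.1e}")
```

A tiny p-value under this null would be the kind of "faster than random" signal the post describes; an unremarkable one would leave the two conjectures undiscriminated.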

After further experiments sufficiently narrowed down
the mechanism the bacteria used to 'beat the RM odds',
the RM conjecture itself was quietly redefined:
its defining constraint of 'absolutely no bias
for favorable final states' was weakened by amending
it, so that much of it still holds except in the
cases where the final DNA state distribution's bias
toward favorable DNA states is realized via the
very particular mechanism found to be
responsible for the bias observed in the Cairns
experiment (and its follow-ups), namely the
generally increased mutation rate under
environmentally induced stress. This observed
bias was thus simply labeled a general stress
response and taken off the new and 'improved'
list of biases prohibited by the RM conjecture.

I suspect that as more such anticipatory biases
toward the favorable states are discovered,
their underlying biochemical implementation
will be reverse engineered and once clarified
as far as it goes, the mechanism will get its
own euphemistic name which is not suggestive in
any way of its bias toward favorable final
states, and having thus been safely labeled, it
will automatically cease to be on the list of
RM-prohibited biases (after all, with a safe
name, it is now something else, and thus it
cannot also be the bias toward favorable
states any more). And the RM retreat can go on
this way forever, no matter what kinds and
how many biases are eventually discovered.

Neo-Darwinism will thus likely live on and
keep evolving in the face of any facts that
come along, just like any other religion.


> By the way, what is your prediction of the rate of
> neutral to non-neutral mutations, if the mutations
> are picked intelligently?

All that ID conjecture implies in a general case
is that the rate of beneficial mutations would be
greater than whatever the RM conjecture would predict
in any given circumstances.

Richard Forrest

Jul 11, 2006, 4:13:29 AM

nightlight wrote:
> Richard Forrest wrote:
>
> >>It would be an evidence that neo-Darwinian model (RM+NS) is
> >>an unlikely explanation of the observed adaptation and that
> >>a more efficient (than random) search algorithm is responsible
> >>for the adaptation. By convention, we can call this more
> >>efficient algorithm an 'intelligent' or guided mutation,
> >>or generally an ID algorithm.
> >>
> >
> >
> > No we don't.
> > Not in science, anyway.
> > If, in science, we don't know what is responsible, we say
> > "We don't know what's responsible. How can we find out?"
>
> If some algorithm is more efficient than a dumb/random search
> it is perfectly reasonable to call it _by convention_
> 'intelligent' search.

Why? Unless there is evidence for intelligence, there is no reason to
apply that term to an algorithm.

> There is nothing to "research" and
> "find out" about a perfectly natural naming convention,
> which merely indicates the basic property (greater efficiency)
> of such alternative algorithm.
>

Perhaps not, but there is no reason to invoke the intervention of an
intelligent agent to explain the unknown unless there is specific
evidence for such an agent.

And please don't go down the line of asserting that one can have
intelligence without an intelligent agent. That's simply nonsensical.

> >
> > In ID, which isn't science, they say
> > "We don't know what's responsible, so GodImeananintelligentdesigner
> > must be responsible. Well, now that's settled, let's go and spend those
> > fat cheques the DI has sent us."
>
> You seem to have mistakenly cut and pasted something from your
> debate with Pat Robertson. Or maybe you just prefer to wrestle
> with your little strawman.
>

What straw man is that? That ID claims to be science when it isn't? Or
that "GodImeananintelligentdesigner" is not the default explanation for
science?


>
> >>it would be an evidence that the 'intelligent agency' (IA) model
> >>is not necessary to explain that particular adaptation.
> >
> >
> > Oh, I see.
> > So the position is that there *must* be an intelligent designer, but if
> > any particular test doesn't uncover his actions, it's the wrong test.
> >
> > Hard to falsify.
>
> Not at all. I am simply pointing out that one can imagine a
> hypothetical experiment or observation which provides no data
> that can discriminate between the various models.

In which case the only possible conclusion is "I don't know", not
"GodImeananintelligentdesigner did it".

>That is
> not equivalent to saying that no observation can discriminate
> between the ID and ND models.

And what ID "model" would that be?
Do you mean the one which says that an "intelligent designer", of
unknown but possibly supernatural powers, has interfered with normal
evolutionary processes using unknown but possibly supernatural methods
for unknown and unknowable motives?

I can't think of anything such a model could *NOT* explain.


Or do you have some other ID "model" in mind?

> After all, this whole series of
> posts is arguing precisely the opposite -- that there exist
> ways to falsify either of the two models of the evolutionary
> algorithm.

Which two models?
There are several different models which have been investigated by
science, and the factors which control evolutionary development are a
complex interaction of genetics and environment. This is why biologists
research concepts such as neutral drift, allopatric and sympatric
speciation, natural selection, genetic constraints on morphology and so
on.

Are you suggesting that ID offers a scientific alternative - i.e. a
model which can be falsified by investigation?

If so, perhaps you can tell us where such a model is to be found. The
DI don't seem to know about it.

>
> > To form a theory, you have to start by testing hypotheses. So far, no
> > IDer has tested a single hypothesis about the action of any intelligent
> > designer.
> >
>
> The hypothesis is that there is an 'intelligent agency' (IA) guiding
> the mutations (or generally all transformations of genetic networks
> across generations).

And how can this "hypothesis" be falsified?

> This results in a more efficient search of the
> space of all "possible" (=consistent with the laws of physics &
> chemistry) DNA configurations for the 'favorable' configuration than
> the neo-Darwinian search algorithm (random mutation, unbiased toward
> possible mutations which may turn out favorable later) can achieve.

And the evidence on which you base this conclusion is what?

> Hence the testable hypothesis, as explained at length in previous
> posts, is that the observed rates of the new 'favorable'
> configurations should be greater than those that a random search
> algorithm would find.

Bearing in mind that "favourable" is a term which is meaningless except
by reference to the environment in which a population of organisms
lives, how on earth would you test this assertion?

I suggest that you take the time to educate yourself in what
"neo-Darwinists" actually write, and how they go about their research
before you make such stupid assertions about religious and ideological
motivation.

> Placing their
> absurd criteria into an analogous context, one in which they don't
> have as strong an emotional investment as they do in the context
> of evolution, makes the absurdity of the rigged criteria more
> obvious.

No, it merely shows that your analogy is completely inappropriate.

>
>
> > So what was the hypothesis which ID is testing?
>
> The hypothesis is that the random search algorithm for
> favorable configurations in the space of all physically
> possible DNA configurations would under-perform the empirically
> observed evolutionary algorithm i.e. the random search would
> take much longer to find the successful evolutionary novelties
> than the observed rates of such novelties.

And what would this tell us?


> As explained
> earlier, the single-nucleotide adaptation in the mice
> greatly simplifies the formulation of the hypothesis and
> its test criteria.

And if this were demonstrated, what would it tell us?

>
> > On the other hand, there is plenty of other evidence to support
> > evolution by natural selection, and no evidence whatsoever to support
> > the idea that an "intelligent designer" occasionally interferes with
> > normal biological processes.
>
>
> You are again pasting here from your debate with Pat Robertson or
> some such. Nothing I said here or in other threads implies the
> strawman "theory" you are mocking above.

And what straw man would that be?
That if we don't know, an "intelligent designer" must be responsible?


> First, "intelligent designer"
> is quite a loaded term, implying among other things a deistic
> perspective (which I think is wrong). I prefer the term "intelligent
> agency" (IA), which is more consistent with a pantheistic perspective,
> which I find more coherent.

Quibbling about the label does not change the fact that there is no
evidence whatsoever for the intervention of such an agency.

> Namely the IA is continuously active, not just
> in evolution, but also in forming and upholding the uniform physical
> laws across the universe, from incredibly finely tuned physical
> constants (e.g. google on "anthropic principle") down to the
> elemental properties of our space-time.

So you mean God.
Fine, but don't pretend that this is anything to do with science.

> A bit more about the
> possible model for IA is given at the end of an earlier post here:
>
> http://groups.google.com/group/talk.origins/msg/651222ff530cbe4e


Yup. You do mean God.
What has this to do with science?

>
> I bring up the general view underlying my arguments only to
> avoid various caricatures being tossed in by the neo-Darwinists
> here as their presumed best guess as to what perspective one
> must have to make those arguments.
>
> >>There are plenty of ways that an 'intelligent agency' could exist
> >>and perform directed mutations. I merely pointed out couple
> >>possibilities which are not a priori excluded by the presently
> >>known laws of physics.
> >
> >
> > So propose a test which could *falsify* the intervention of an
> > intelligent agency.
>
> Well, showing that the neo-Darwinian random search algorithm can
> perform at the level of the empirically observed evolutionary
> algorithm would make the IA model subject to Ockham's razor.
>

No it wouldn't.
Perhaps you can give me an example from anywhere in any other field of
science in which the falsification of a particular hypothesis has led
the researchers to the conclusion that an "Intelligent Agency" or an
"Intelligent Designer" has to be responsible.

> On the other hand, if the random search algorithm were to
> perform better than the observed one, that would imply a
> 'malevolent' (to life) IA, while the performance of random
> search worse than the observed would imply a 'benevolent' IA
> (this is the variant assumed by default as IA).


Oh I see!
So you presume a priori that GodImeananIntelligentAgency is
responsible, and interpret the motives of GodImeananIntelligentAgency
based on the outcome of your analysis.

Doesn't look much like science to me.


>
> One can also easily flip upside-down these value loaded
> attributes (malevolent & benevolent) by viewing the emergence
> and evolution of life as a shortcut or a short-circuit speeding
> up the approach of the whole system to the maximum entropy state,
> the heat death of the universe (since the processes of life
> accelerate the total entropy generation for the whole system).
>
> > What was that scientific alternative to evolution by natural
> > selection again?
>
> The alternative that I have in mind is a hypothesis that
> the neo-Darwinian 'random mutation' search algorithm would
> under-perform the actual/empirical evolutionary algorithm.

That's a test of whether or not mutation is random in respect of
benefit. Falsifying that does not provide any support for your
assertion that GodImeananIntelligentAgent is responsible.


> Since that core aspect of neo-Darwinism was never put
> to the test (it may be computationally too complex at present to
> test decisively),

If you think that, I suggest that you read the literature on the
subject.

> all three types of algorithms, the neo-Darwinian,
> the benevolent IA and the malevolent IA, are equally open
> and falsifiable hypotheses, none so far shown or known to be
> better than the others.

So what is your model again?
If we can't explain it under the "neo-Darwinian" model, the only other
possible explanation is that GodImeananIntelligentAgency is involved?

That's not a scientific model.

It's an unfounded assertion.

>
> > It isn't ID - Michael Behe, one of its leaders, conceded under oath
> > that it isn't science, so you must be talking about something else.
>
> He is welcome to believe and say as he wishes. I don't share that
> particular view.
>
> > What is *your* education in science, by the way?
>
> Theoretical physics. Since the grad school (Brown Univ.),
> though, I have been working in industry, R&D, mostly in
> research and design of practical combinatorial and optimization
> algorithms (used for forecasting, compression, search,...).

Well, I suggest that rather than making assertions about the current
state of development of evolutionary biology which demonstrate little
except your ignorance of the literature in that field, you take the
time to research the primary literature and talk to evolutionary
biologists about what they are doing and how they are doing it.

RF

ErikW

Jul 11, 2006, 4:25:32 AM

nightlight wrote:

> michael...@worldnet.att.net wrote:
>
> > Now researchers have identified a genetic mutation
> > that underlies natural selection for the sand-matching
> > coat color of the beach mice, an adaptive trait that
> > camouflages them from aerial predators....

>
> It doesn't appear they have shown that the mutation
> was _random_. They only found a variant of a gene
> which is responsible for the lighter color.
>

These researchers searched for the locus causing the light fur in these
mice and found a dominant point mutation.
However, there is more than one mutation causing light-furred "beach"
mice. Any of those mutations will do. (How many different ones there
are is not known.)

So RM + NS pwns ID :P

ErikW


snnnnnnip!

Windy

Jul 11, 2006, 4:27:36 AM

nightlight wrote:
> Windy wrote:
> > There is no "picking", random or otherwise. You are talking as if a
> > process (random or not) first determines that a mutation must happen
> > somewhere in the genome, and then the mutation must find a place where
> > to happen.
>
> The term "pick" doesn't imply a human-like conscious decision. (...)

> There is no
> scientific reason to prohibit such language for patterns at
> the intermediate levels (such as that of intra-cellular
> biochemical processes), and especially so if you subscribe
> to a reductionist view that all laws at higher levels can
> be reduced to the laws of physics, since the laws of physics
> are purely teleological in their 'least action principle'
> formulation.

Would you use this formulation in physics? Is there "intelligence"
picking which atom will experience fission next in a lump of
radioactive substance?

> > It is not so, the mutations are independent of each other.
>
> There is no fundamental principle which would prohibit the
> genetic network from remembering the sets of useful mutations
> (or combinations of mutations with activations/inhibitions of
> multiple sites) and later under suitable conditions recalling,
> then activating the whole set simultaneously via a single
> control switch (which can be implemented as an activation /
> inhibition or a mutation of the control switch site).

It would be unbelievably wasteful to do this, the "intelligent process"
would do much better to evolve phenotypic plasticity, or an ability to
regulate the gene independent of mutations.

> After all, the human technology and science isn't
> even remotely close to being able to assemble a single
> cell, or even a single organelle, from the basic inorganic

> ingredients...

A virus has been assembled. Not all the way from atoms or anything like
that, because that would be stupid, but it would be possible.

> In a generalized perspective, where one looks at the DNA
> transformations from some initial to some final state, the
> DNA transformations during sexual reproduction, genetic
> recombination and even the cellular differentiation
> represent massive transformations of the initial DNA states,
> simultaneously transforming the states of multiple far
> away sites.

How does cellular differentiation alter the DNA states? And since the
massive mixing of states with recombination must occur before your
single nucleotide mutation can be expressed, is your model of picking
of any use?

> > Your strategy is flawed. Consider two theoretical organisms where one
> > has more DNA than the other, but which share the gene we want to
> > observe and have similar mutation rates. In your scenario, the
> > likelihood of the desired mutation is always lower in the organism with
> > more DNA, since it has more possible mutational states. Do you think
> > that's realistic?
>
> That is not 'my strategy' or 'my scenario' but it is how the
> random search algorithm of the RM conjecture would 'pick' the
> next DNA configuration in the simple case of uniform distribution
> of final DNA states (which is a simplifying assumption that need

> not hold in any particular case...

So? My example shows that including "all possible DNA states" in the
probability calculation leads to an erroneous conclusion.

> Hence, the RM
> conjecture postulates a very particular relation between
> the final distribution of the DNA configurations
> (following mutagenic & any repair processes) and
> the un/favorability properties of the later phenotypic
> expressions of these configurations in a given
> environment.

No, it doesn't postulate "a very particular relation". It postulates no
relation. This has been demonstrated. If you want randomness to be
tested separately in all possible experiments on natural selection,
sure, you would find some that deviate from randomness by chance because
of the small sample size. How do you propose the researchers should
estimate the relation in the mouse case with only one known beneficial
mutation?

> The opposing (benevolent) ID conjecture is, like the
> RM conjecture, an additional constraint on the final
> distribution of DNA configurations, which says that
> the final distribution of the DNA configurations
> _will_ be biased in favor of the configurations which
> will turn out to be (statistically) more favorable
> later.

And this has been tested in several cases and no bias in favour of
beneficial mutations has been detected. Why continue? Do you have some
evidence that suggests otherwise?

> After the further experiments narrowed down enough
> the mechanism the bacteria used to 'beat the RM odds',
> the RM conjecture itself was quietly redefined,
> with its defining constraint of 'absolutely no bias
> for favorable final states' weakened by amending
> it, so that much of it still holds except for the
> cases when the final DNA state distribution bias
> toward favorable DNA states is realized via the
> very particular mechanism which was found to be
> responsible for the bias observed in the Cairn's
> experiment (and its followups) which is the
> increased general mutation rates under the
> environmentally induced stress.

So no bias towards favourable mutations there, either. Do you have a
problem with that conclusion?

> (after all, with a safe
> name, it is now something else, and thus it
> cannot also be the bias toward favorable
> states any more)

An elevated general mutation rate was never a bias toward favourable
states.

> > By the way, what is your prediction of the rate of
> > neutral to non-neutral mutations, if the mutations
> > are picked intelligently?
>
> All that ID conjecture implies in a general case
> is that the rate of beneficial mutations would be
> greater than whatever the RM conjecture would predict
> in any given circumstances.

So what is responsible for neutral mutations?

-- w.

hersheyhv

Jul 11, 2006, 12:41:28 PM
nightlight wrote:
> > The point I am making is that your whole thesis is based
> > on a false dissection of the problem. There is no genetic
> > search algorithm searching through all possible DNA
> > configurations. There is no teleologic goal in the
> > process.
>
> Term "algorithm" here is merely a shorthand for a specific
> sequence of steps which transforms a system from state A to
> state B. Similarly the "search algorithm" in this context is
> a sequence of steps which transforms the genetic network
> (possibly spanning multiple generations) from state A which
> does not have the particular favorable mutation to state B
> which does have it.

And, for the mutation in question, that is the probability that there
will be a point mutation at that locus per generation. IOW, the
probability that you will have a state A organism with no mutation in
this gene being converted to a state B organism with the mutation in
that gene (whether or not the organism also has other mutations, mostly
neutral, is irrelevant unless the other mutation is to a dominant
deleterious allele) is specifically the rate of mutation at this site.
It will be the rate of mutation at this site whether or not the allele
produced is in or not in an environment where such a change is
beneficial, detrimental, or neutral. It will be the rate of mutation
at this site whether the organism as a whole has half, twice, or
variable amounts more DNA. That is because the rate of mutation at
this site is a result of its chemistry. The chemistry of mutational
change can be affected by local conditions such as nearby surrounding
sequence, but not by distant features such as total amount of DNA.
But the spontaneous rate of mutation at this site *is* the correct
value to use. Whatever number you are calculating that is somehow
affected by the totality of the genome is GIGO and is irrelevant to the
rate of change at this site. It is the rate of change at this site
that is important. Nothing else.
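The claim that the focal site's mutation rate is unaffected by total genome size can be checked with a small simulation (rate and genome lengths exaggerated purely to keep the demo fast): the total number of mutations grows with genome length, while the hit rate at the focal site depends only on its own per-site rate.

```python
import math
import random

def scatter_trials(genome_len, mu, trials, rng):
    """Scatter point mutations across a genome `trials` times and
    return (total mutations, trials in which site 0 was hit).
    Positions are drawn by geometric gap-skipping, equivalent to
    flipping an independent mu-coin at every one of the sites."""
    total = focal = 0
    log_q = math.log(1.0 - mu)
    for _ in range(trials):
        pos = -1
        while True:
            # gap to the next mutated site: geometric, success prob mu
            gap = int(math.log(1.0 - rng.random()) / log_q) + 1
            pos += gap
            if pos >= genome_len:
                break
            total += 1
            if pos == 0:
                focal += 1
    return total, focal

rng = random.Random(2)
mu, trials = 1e-3, 50_000        # exaggerated per-site rate, for speed
tot_small, hit_small = scatter_trials(1_000, mu, trials, rng)
tot_large, hit_large = scatter_trials(10_000, mu, trials, rng)
print(tot_small, tot_large)      # total mutations scale ~10x with genome size
print(hit_small, hit_large)      # focal-site hits ~ mu * trials either way
```

The tenfold-larger genome accumulates roughly ten times as many mutations in total, yet the focal site is hit at the same rate in both, which is the point being made about GIGO calculations over "all possible DNA states".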

The rate of mutation at this site can be altered by adding an
appropriate mutagen (if the change is a transition, the mutagen must
preferentially increase transitions), but such a change in rate will
not determine whether the mutant allele produced is beneficial,
detrimental, or neutral. Those qualifiers are determined by the
interaction of the phenotype produced and the local environment. IOW,
changing the rate of mutation from that which occurs spontaneously and
naturally has no effect on the selective value of the allele produced.

Your claim is that there is some sort of "outside intelligent agent"
which can cognitively recognize when a specific mutation will be
beneficial and *selectively* produce that mutation *when it is needed*.
If that were true then one should expect to see a correlation between
the rate of mutation *at a needed specific site* and the need for
mutation. From more than 60 years of experimentation, from
Luria-Delbruck on, and using many organisms and many ways of testing
the possibility, there is no such correlation. And as recently as the
Cairns experiments, some 20 years ago, the question was re-examined.
Again, after analyzing what happened it was clear that Lamarckism is
dead for the vast majority of traits. There may be a few *possible*
"special cases" involving sites where there is "domesticated mutation",
such as in immune cells where such "mutation when needed" *might*
occur. But none of these possible rare anomolous Lamarckian sites are
currently clearly such. And this particular mutation, in fur
coloration, does not appear to be a special case, unless you have some
evidence to the contrary.
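The Luria-Delbruck logic invoked here can be sketched as a fluctuation test: if mutations arise at random during growth, parallel cultures show occasional "jackpot" clones and a variance far above the mean, whereas mutations induced only at the moment of need would give Poisson counts with variance roughly equal to the mean. All parameters below are invented for illustration:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for the small rates used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def grown_culture_mutants(generations, mu, rng):
    """Mutants in one culture when mutation strikes at random *during
    growth* (the Luria-Delbruck picture): a hit in an early generation
    founds a large 'jackpot' clone of mutant descendants."""
    mutants = 0
    for g in range(generations):
        new = poisson((2 ** g) * mu, rng)             # hits this generation
        mutants += new * 2 ** (generations - g - 1)   # clone keeps doubling
    return mutants

rng = random.Random(3)
G, mu, cultures = 18, 4e-6, 1000   # illustrative values only
random_counts = [grown_culture_mutants(G, mu, rng) for _ in range(cultures)]
mean_r = sum(random_counts) / cultures
# 'directed' alternative: mutations induced only when needed, so the
# per-culture counts are Poisson with the same mean (no jackpots)
directed_counts = [poisson(mean_r, rng) for _ in range(cultures)]
mean_d = sum(directed_counts) / cultures

def vmr(xs, m):
    """Variance-to-mean ratio, the fluctuation test's statistic."""
    return sum((x - m) ** 2 for x in xs) / len(xs) / m

print(f"variance/mean: random-during-growth {vmr(random_counts, mean_r):.1f}, "
      f"directed {vmr(directed_counts, mean_d):.2f}")
```

The random-during-growth model produces a variance-to-mean ratio far above 1, while the on-demand model stays near 1, which is the signature Luria and Delbruck used to rule out need-directed mutation.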

> For adaptable networks (such as brain, genetic network, neural
> network) and the sequences of steps they go through one often
> uses term "algorithm" (e.g. the pattern recognition algorithms
> or search algorithms realized via neural networks) since such
> sequences of state transformations can be mapped into conventional
> computer algorithms solving the same problems. No teleological
> premise is introduced by use of such shorthand. It is simply a
> terminological convenience since the theory of computer
> algorithms brings in cleanly defined, high resolution terms
> and concepts particularly suitable for analyzing such
> sequences of steps, states they traverse, their effects,
> performance characteristics, alternatives.... etc.
>
>
> >>Both, the guided (intelligent)
> >>and the 'random' mutations will have some number of mutations
> >>per given time. The _only_ difference you could extract
> >>statistically is that the intelligent mutations will find
> >>the solution faster on average than the random ones.

There is NO search for a solution. And if the rate of mutation is the
same for intelligent and random mutations, how, exactly, will the
solution be found faster? Take the case in point. If the rate of
occurrence of this point mutation is exactly the same when it occurs
randomly and when it is directed by intelligence, how is the solution
(which *is* this point mutation) occurring at any different rate?

> > IOW, there is no detectable difference. Exactly how would you
> > determine that any given mutation is "intelligently designed"?
>
> Statistical difference is detectable.

You just said there was no detectable difference. Can't you keep your
story straight for an entire paragraph?

> The difference here would
> be between the theoretically predicted success rate (or probability
> of success for a given number of tries) of a 'random search' in
> the space of DNA configuration (accessible via 1 nucleotide change)
> vs the empirically observed success rate of the actual algorithm.
>
> > We are
> > talking about a single nucleotide change here, not some long drawn out
> > search of all DNA sequence space.
>
> It may have slipped your mind, but we are discussing this single
> nucleotide change in the context of its alleged support for
> the neo-Darwinian conjecture, which prescribes a particular
> way/algorithm as to how such single nucleotide change is picked
> out (e.g. by the biochemical reaction web of the cell) among
> all possible single nucleotide changes that could be picked
> out from a given initial state.

Neo-darwinism says that when this mutation occurs *by chance*, the
local environment gets to (in a probabilistic way) determine its
relative fate. Since there is no difference in the rate of appearance
of this mutation *by chance* or *by your hypothetical agent*, your
intelligence must be doing something else, such as arranging conditions
near the mutation so as to favor it or nurturing the organism so that
it reproduces despite the mutation being unfavorable. What exactly is
your intelligence doing? All the bs you are proposing about
"biochemical reaction web of the cell" is irrelevant, since that isn't
being changed. The only change of any relevance is the point mutation,
which changes at its inherent mutation rate (unless you add mutagens)
and does so whether or not there is a need for it.

> The ND algorithm/prescription requires that the change is
> picked "randomly" i.e. that the picking process cannot
> systematically (= with statistical significance) show
> preference/bias at the time of the pick for any particular
> nucleotide (or any of their possible final states) whose
> change will turn out (with statistical significance) to
> be more useful _later_ over the nucleotides & their final
> states whose change will turn out to be less useful _later_
> (or even outright fatal shortly after the change).

The ND prescription specifically says that the change *occurs* randomly
wrt need. The "picking" is done by local environmental conditions.
The process is both non-random (because the local conditions
*selectively* discriminate) and non-teleological (since only current
conditions matter). The case in point does not involve any problem for
evolution since the current conditions that matter (the color of the
background) are immediately present. If the mutation in question were
recessive, then it would be selectively neutral and the frequency in
the population can vary, with only the homozygotes undergoing
selection.

> Otherwise, we would characterize the picking process which
> systematically gives preference _now_ to particular nucleotide
> changes which will be more useful _later_ as anticipatory or
> intelligent picking process.

This is *almost never* (I would say never, because I don't know of any,
but am willing to consider examples) observed in nature. The picking
process, aka natural selection, is entirely determined by current local
conditions. As I have pointed out, a recessive allele can act like a
neutral or near-neutral trait in the heterozygous state and be present
in low frequencies in a population because there are very few
homozygotes being formed. But there is no evidence of traits being
*selectively* retained because they might be useful at some future
time. Notice that this does not involve mutation rates at all, but
occurs because selection only works on phenotype and not directly on
genotype.

> The ND theology absolutely prohibits that kind of picking
> processes. If there are only two physically/chemically
> possible changes A and B, and A will turn out statistically
> useful _later_, while B will turn out almost always fatal
> _later_, the ND dogma still prohibits picking processes
> which give preference to A at the time of the pick i.e.
> it requires that A and B have to be picked with equal
> probability and only the natural selection that follows
> later is allowed to terminate B type offspring and continue
> the A type offspring.

Actually NS (selection by fit of phenotype to local environment via the
dumb unintelligent environment affecting the reproductive success of
organisms differentially) predicts that whichever phenotype is
beneficial *in the present condition* will be favored. If that is A,
then A will be favored. If it is B, then B will be favored.
Regardless of any future utility of either A or B. If, currently, A
and B are selectively neutral, then the frequencies will tend to
undergo random drift from whatever the current frequencies are. This
is standard population genetics. Do you have evidence for your idea
that there is teleological foresight occurring in nature? I sure have
not seen any.

So, there is no difference in the rate of mutation. And there is no
teleological foresight in the retention of mutant alleles (actually
the variant phenotypes produced by these alleles). Since neither
exists in nature AFAWCT, what does that leave you aside from wishful
thinking that maybe, just maybe, there is a pony under the pile of
horseshit you are blathering?

> Now, if you suddenly declare that your view of ND
> allows for the 'change picking processes' that give
> statistically significant preference to A over B
> at the time of the picking, hence before the later
> state of the environment would perform the natural
> selection, then you already agree that the change
> picking process can be _anticipatory_ -- the changes
> statistically preferred _now_ among the alternatives
> will turn out statistically more useful later than
> the alternatives.

The evidence does not support your teleological thinking.

No. But a certain level of consciousness is required. There are
intermediate levels of consciousness, of course. But being able to
anticipate the consequences of actions requires consciousness. There
is no consciousness involved in either natural selection or mutation.
Mutation is a chemical process. The environment's choices are dumb and
unintelligent. In fact, I like to describe natural selection as the
process by which the current generation is, genetically, optimally
adapted to the environment their *parents* faced. That is, NS is
*backward-looking*, not *forward-looking* or even *present-looking*.
Fortunately, most of the time environments do not change radically
between generations. When they do change too rapidly, of course, there
are massive die-offs.

> the processes which do not comply
> with the laws of physics & chemistry i.e. you have merely
> defined the attribute "anticipation" away from its common
> meaning, while in substance agreeing that such processes
> do in fact exist, be it in the genetic networks or in
> the networks of neurons such as human brain, or in the
> conventional computers.

I see no evidence of "anticipation", and neither mutation nor selection
involves "anticipation". Could you present your evidence for
"anticipation"?

> { Note: I use above the phrase "change picking processes"
> since the commonly used term "mutation" seems to throw
> you (and some others here) into your usual 'vapid dictums
> parroting loop', which then leads to waste of time and
> efforts required to snap you out of the loop. }

Mutation is not a "change picking process", whatever you think that
means.

> > Are you saying that mutations that
> > occur at a higher frequency are more likely to be beneficial than
> > mutations that occur at low frequency in a population?
>
> One cannot make an absolute evaluation like that without any
> regard to the context. If a high mutation rate at some site
> is the result of the past memory (by the cellular genetic
> network) of the environmental challenges for which that
> mutation has turned out useful, then if the similar
> environmental challenges recur (which is often a useful
> probabilistic assumption), the change will likely turn
> out useful again.
>
> As to how that usefulness might compare to some arbitrary
> changes on arbitrary sites in arbitrary environments, is a
> matter of specific evaluation of consequences. For example
> one can put an organism into a different environment
> in which any particular mutation, which was generally useful
> in the previous environments, will turn out harmful.

The above is vapidity, meaning nothing.

> > The search space in this case is a single nucleotide in a pre-existing
> > gene (although it is possible that other mutations could have produced
> > the same phenotype).
>
> That is the core element of your confusion. Reminding you again,
> as explained above we are discussing the implication of the
> observed adaptation, which indeed was due to a single nucleotide
> change, for the viability (as a mathematical model of the observed
> process) of the neo-Darwinian 'change picking' algorithm among
> all possible changes. Hence, the space of all possible states
> (the set denoted as S0 earlier) available for any 'change picking'
> algorithm is essential for deciding which algorithm might have
> been used to pick the change observed. This is no different
> than using the knowledge of the space of allowed numbers in
> the number guessing game in order to decide which algorithm
> might have been used to pick the guesses.

Mutation occurs without respect to need for the mutation. The
environment selects among available phenotypic variants. The two
processes are independent of each other, not part of a single process
with a goal.

> Whether we can compute such number or not, any particular
> nucleotide X can be changed (mutated) in some finite number
> of ways, call it NX, under the physical and chemical conditions
> given as initial state. Therefore, for all NN nucleotides, there
> is another number T, which is the total number of DNA
> configurations which differ from the initial state by a
> single nucleotide change. If NX were constant for all
> nucleotides X, then one could compute T = NN * NX. If the
> NX varies with X, as it normally would, T would be the sum
> of NX for all X. In any case, there is such integer T,
> regardless of how well we may be able to estimate in practice
> its value with present computational tools.

We are not interested in what happens to other nucleotides. The only
mutation of interest is the one which occurs with the rate NX. X is
that nucleotide, present in two copies in a diploid organism or one
copy in a haploid gamete.

> Let's say, for the sake of argument, that you obtain that
> T = 10^20 configurations which differ by one nucleotide change
> (including all possible ways that any changed nucleotide
> can be changed under the initial state physical-chemical
> conditions) from the initial configuration.

Why would changes in other nucleotides affect anything? If the change
in another nucleotide were selectively neutral, it would not. If it
were deleterious (and dominant) it would reduce the fitness of the
organism, but, of course, mutation at such a site would be independent
from (and randomly distributed relative to) the mutation of interest.
So, although deleterious mutation elsewhere in the genome may have an
effect on some specific organism, it would not have a net population
effect (unless you can demonstrate that the second mutation
*selectively* occurs when the mutation of interest occurs). All you
seem to be doing is wishing that somehow mutations elsewhere in the
genome *differentially* affect our specific site. That is simply not
true.

Let's say that 50% of all mouse zygotes die (early in embryogenesis)
because they have a lethal mutation somewhere in their genome. Half of
all mutants at our specific non-lethal site will die along with those
50% inviables. But among the viable survivors, the rate of mutation at
our specific site will be the same as it was in the total population.
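That independence claim is easy to check numerically. A hypothetical sketch (the 1e-3 focal rate and the zygote count are invented, illustrative values): killing half the zygotes with an unlinked lethal leaves the focal-site mutation frequency among survivors essentially unchanged.

```python
import random

rng = random.Random(42)
N = 200_000            # zygotes simulated (illustrative number)
focal_rate = 1e-3      # mutation rate at the site of interest (assumed)
lethal_rate = 0.5      # chance of an independent lethal mutation elsewhere

total_focal = survivors = focal_in_survivors = 0
for _ in range(N):
    focal = rng.random() < focal_rate
    lethal = rng.random() < lethal_rate   # independent of the focal site
    total_focal += focal
    if not lethal:
        survivors += 1
        focal_in_survivors += focal

# Independence means conditioning on survival does not change the
# focal-site mutation frequency: both printed rates are ~1e-3.
print(total_focal / N, focal_in_survivors / survivors)
```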

> Note that we
> can't do anticipatory pre-filtering (from among all possible
> nucleotides changes) based on the future viability of the
> organism, since within ND algorithm, that filtering can be
> done only _after_ the pick, when the natural selection takes
> place and when the cost of one potential offspring is already
> paid. Hence our NX for nucleotide X includes changes which may
> result in what would be a 'defective' DNA (including all the
> possible fatal defects).
>
> Suppose now that the total number of offspring by the mice
> population during this adaptive "big leap" is 100 million,
> i.e. N = 10^8. Hence, the mice population was able to explore
> only N/T = 10^-12 fraction of the possible DNA configurations
> which differ by one nucleotide from the initial one. The odds,
> _within the random pick algorithm_, that one of the changes
> would hit the right nucleotide the right way, from some set
> of F favorable mutations of this type, would be P(N) = F*N/T.

So, what is your evidence that mutations at other sites
*differentially* affect the rate of mutation observed in viable progeny
at our specific site? Or that it *differentially* affects the
frequency of the selectable phenotypic difference? Until you can do
that, the above is merely another ignorant attempt at defeating
evolution by multiplying numbers together (numerology).

Sure I do. Chance and randomness produce *specific* expectations, and
I can determine if the observed pattern fits these *specific*
expectations. That is, if I were to sample every sequential group of
50 numbers from 0 to 9 in a long series of numbers claimed to be
random, I would expect to see a distribution that was insignificantly
different from a Poisson distribution if the distribution were, in
fact, random. There are other tests I could use. In fact, one of the
first things one does in science experiments is arrange a test of
whether variables associate by chance. If they don't, then you have to
look for a causal relationship of some sort.

Incidentally, the Poisson distribution is one of the ways by which we
know that mutation occurs at random wrt need. See the Luria-Delbruck
experiment of more than 60 years ago.
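The logic of that experiment can be sketched in a few lines. This is a hypothetical toy version of the fluctuation test (the rate, culture count, and generation number are invented for speed): if mutants arise at random during growth, rare early "jackpot" mutations inflate the variance far above the mean, whereas mutation-on-demand at plating would give a Poisson-like count with variance roughly equal to the mean.

```python
import random

rng = random.Random(7)
GENS, MU, CULTURES = 12, 1e-3, 200   # illustrative toy parameters

def grow_culture():
    # Random-mutation model: each division can mutate; mutant lineages
    # keep doubling, so an early mutation yields a huge final count.
    normals, mutants = 1, 0
    for _ in range(GENS):
        new = sum(rng.random() < MU for _ in range(normals))
        mutants = mutants * 2 + new
        normals = normals * 2 - new
    return mutants

random_counts = [grow_culture() for _ in range(CULTURES)]

# Directed-mutation model: mutants appear only at plating, each of the
# 2**GENS final cells independently (a binomial/Poisson-like count).
p_plate = MU * GENS / 2   # chosen to match the growth model's mean
directed_counts = [sum(rng.random() < p_plate for _ in range(2 ** GENS))
                   for _ in range(CULTURES)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

m1, v1 = mean_var(random_counts)    # variance >> mean (jackpots)
m2, v2 = mean_var(directed_counts)  # variance ~ mean (Poisson-like)
print(v1 / m1, v2 / m2)
```

The variance-to-mean ratio is the discriminating statistic: far above 1 for mutation-during-growth, near 1 for mutation-on-demand.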

> Namely, you are claiming (via
> your empirical rates argument) that _all_ you need to declare
> that my guessing strategy was 'random guessing' is how many
> tries on average I need to guess (which is equivalent here to
> knowing the average rate of my tries, such as 5 tries per minute,
> and the duration of the guessing).
>
> I am saying that this number alone (which is equivalent to the
> knowledge of the empirical rates of various mutations) is not
> enough for such conclusion. Short of being able in some cases
> to eyeball the pattern of my guesses (which is an irrelevant
> artifact of the intentional simplicity of the example), any
> deduction of the algorithm used, based solely on the statistical
> properties of its performance, requires the knowledge of
> the range of the available numbers i.e. the size of the
> space of all possibilities (denoted earlier as the integer T).

Apparently your knowledge of statistics and their use is as good as
your knowledge of biology and genetics. I would not be looking for a
number. I would be looking at a pattern of numbers. And, to repeat,
chance and randomness produce specific testable patterns.

> In other words, we are discussing whether you can legitimately
> deduce from the observed _statistical properties_ of mutations
> alone, as reported in this or similar experiments, that the
> 'change picking algorithm' was a random pick (unbiased to
> prefer changes _now_ which will turn out useful _later_, as
> explained earlier).
>
> That you may be able to sometimes eyeball some kind of pattern
> in my sequence of number guesses in a simplified example where
> you can count everything on your fingers is, to put it politely,
> not relevant for the present argument. You are trotting out the
> empirical rates of mutations, claiming that such rate numbers

The rate, of course, would empirically come with an SD and appear in
the pattern of a bell curve. Again, chance and randomness produce
specific testable patterns.

> _alone_ allow you to conclude that the algorithm which models
> best the observed success rates is the neo-Darwinian RM
> algorithm -- that is the algorithm which randomly picks among
> all possible 1 nucleotide changes without any statistical bias
> toward the changes which will turn out more useful later. I am
> saying that the rate numbers are not enough for such conclusion.
>
> Or, expressed in terms of the number guessing game, you can't
> deduce that I am guessing randomly from knowing the empirical
> average number of tries before a guess. You need to know the
> range of the numbers allowed (the size of the space of all
> possible configurations being explored) to conclude that the
> random guessing algorithm models well enough the observed
> performance of the guessing.

Not unless you have evidence of a significant correlation between some
of the other configurations and this specific mutation's phenotypic
effect *and* can demonstrate that this correlation has a significant
impact on the occurrence of the phenotype. Obviously, a mutation that
produces an albino would epistatically hide the sand-colored mutation,
but the odds of both occurring together, rather than one or the other,
would, in fact, be the product of their individual probabilities.
Thus, the impact of such double mutants would be insignificant.
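As a worked number (both rates are hypothetical, chosen only for illustration): multiplying two independent per-gamete probabilities makes the double mutant vanishingly rare.

```python
# Hypothetical independent per-gamete mutation probabilities
p_sand = 1e-8    # sand-color point mutation (illustrative rate)
p_albino = 1e-6  # an albino mutation that would epistatically mask it
p_both = p_sand * p_albino  # joint probability under independence

# The double mutant is ~1e-14: a hundred million times rarer than the
# albino mutation alone, and a million times rarer than the sand one.
print(p_sand, p_albino, p_both)
```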

> > In our particular example, the size of the search and solution space is
> > the single nucleotide that needs to be changed. The rate of change at
> > that site is all you need.
> >
>
> The rate of change (accounting also for the rate of repairs)

The net rate of change is what I meant.

> is dependent on the immediate physical and chemical
> environment in which it occurs. That immediate environment
> is in turn dependent on its own physico-chemical environment,...

Not, generally, in a way relevant to mutation rates. And certainly not
in a way that would *specifically* target this gene. Again, rising
rates of mutation do not differentially produce mutations of need.

> and so on, until you reach the state of the organism's
> environment and any survival challenges that it may contain
> (such as presence of certain predators, which may result
> in particular type of physiological stress response, possibly
> specific to the predator type, which eventually affects the
> individual cells and their biochemical reaction webs,
> including the immediate environment of the favorable mutation
> site). Hence, the rate of that nucleotide change is, at least
> in principle, dependent on the environmental challenge for
> the organism.

But, in *fact*, there is no significant detectable effect.

> Obviously, if you were to measure mutation rate of that
> site in vitro, maintaining its immediate environment fixed,
> you will probably get fixed mutation rate at that site.
> We do know of examples, though, where mutation rates do
> change drastically under the environmental stress. I
> didn't see anything mentioned about the mutation rates
> for this nucleotide under different circumstances.

Again, mutation is random wrt need. I see no evidence anywhere of
genes that *specifically* mutate according to need. You are
hypothesizing a wish for mutation that is non-random wrt need. Unless
you have evidence, the greater probability is that this mutation is
like all other known mutations rather than it being a special case.
Again, that the sun rises in the east is a good scientific inference. So is
the idea that genes mutate randomly wrt need. You need extraordinary
evidence to claim that the sun will rise in the north tomorrow.

That particular dead dog won't hunt.

> Does this help your argument? Quite the contrary. It would
> be like declaring that my calling number 7 more often
> or sooner than others proves that my guessing strategy
> was random guess. It actually shows exactly the opposite.

If you call 7 more often or sooner than other possible numbers, you are
specifically *not* calling 7 randomly wrt other numbers. I would
certainly be able to determine that you are calling the number 7 more
often than the expectations of chance would predict. But if the right
number you are guessing *is* being picked randomly, there would be no
significant correlation between your choice of number and your guessing
right. Seven will still be the right number about 1/10th of the time.
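A quick simulation makes the point (the 50% bias toward 7 is an arbitrary illustrative choice): however skewed the guessing distribution, a uniformly random target is still hit about 1/10 of the time.

```python
import random

rng = random.Random(0)
TRIALS = 50_000
hits = 0
for _ in range(TRIALS):
    # Guesser biased toward "lucky 7" half the time, uniform otherwise
    guess = 7 if rng.random() < 0.5 else rng.randrange(10)
    target = rng.randrange(10)   # the right number is picked uniformly
    hits += (guess == target)
hit_rate = hits / TRIALS
print(hit_rate)  # stays near 0.10 no matter how biased the guesser is
```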

> Random guessing strategy would pick any number within the
> allowed range equiprobably. The fact that my guessing
> distribution is skewed toward something that may work
> better when guessing numbers produced by humans (knowing
> the common human biases toward 'lucky number 7'), shows
> a guessing strategy which is trying to anticipate the
> values of the numbers which will be uncovered at some
> later time. It is a look-ahead algorithm, an algorithm
> which maintains an internal model of the number generating
> process, plays 'internally' this model forward in time
> to find out what numbers it will generate, and uses
> these results from the model space to select its action
> (the guess) in the real world.

And there is no evidence of such teleologic intention occurring wrt
mutation.

> With genetic networks of animals, the changing of colors
> via mutations may similarly be a generally useful strategy,
> hence the relatively higher rates of such mutations are
> a form of anticipatory or intelligent mutations -- the
> mutation is done more often in anticipation of need for
> it, since the past experience of the genetic network of
> mice, spanning perhaps many generations, has shown it to
> be useful to mutate that nucleotide that way. I would
> guess that "junk DNA" stores great many such patterns
> or strategies, allowing it to perform combined changes
> of multiple sites simultaneously, with a single "recall"
> control switch (which may turn on/off via a mutation or
> some other type of the biochemical activation).

Is the mutation (a specific point mutation) that causes achondroplastic
dwarfism a "look ahead" mutation? This point mutation at a single site
occurs at a rate of 10^-5-10^-6, which is almost a thousand times
higher than the rate of the average point mutation. Homozygosity for
this dominant allele is lethal, BTW.

> Alternatively, you may argue that this mutation is not
> done by the genetic network "more often" than "others".
> Well, that brings back into the argument the "other"
> possible mutations i.e. it raises the question:
> is this particular change of this nucleotide more or
> less likely than any other change of any other nucleotide?
> But that puts you right back where we started with
> the number guessing game -- you need to know how big is
> the space of possible changes in order to know whether
> this particular change is occurring more or less often
> than a randomly picked change from the entire space
> of possible changes.

Unless you have actual evidence that this mutation is somehow unusual,
we need to say that neither of your claims is likely based on inference
from other genes, including genes whose mutations change mouse fur
color (there is a literature on this in domestic and lab mice). None
of these genes seem at all unusual.

> You may try twisting the above defense, in order to
> avoid recognizing that you need to know the size of the
> search space, by claiming that this mutation is not
> occurring "more often" than the other _observed_
> mutation rates for other sites.

I said that I *know* the size of the relevant 'space' (there is no
'search'). It is the single nucleotide described. The rate of change
at that nucleotide and only that nucleotide is all you need to know.
Unless the mutation rate at this particular site is vastly different
from the vast majority of point mutations (and the rates of mutations
that affect mouse coat color don't seem to be significantly higher than
other mutations, unlike the case of achondroplastic dwarfism in
humans), the rate of mutational change is in the order of 10^-7 to
10^-9. Whether mutations at other loci can also produce the selective
phenotype is not clear, but certainly not impossible.
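Taking that rate at face value, a back-of-envelope calculation (the population size is an invented placeholder, not data from the paper) shows why even a per-site rate of ~1e-8 is ample on a 6,000-year timescale.

```python
rate_per_gamete = 1e-8       # assumed per-site, per-gamete mutation rate
gametes_per_year = 2 * 5e6   # hypothetical: 5 million births, 2 gametes each
new_mutants_per_year = rate_per_gamete * gametes_per_year
years_per_new_mutant = 1 / new_mutants_per_year

# Even at the low end of the quoted range, the specific change recurs
# about once a decade: hundreds of independent chances in 6,000 years.
print(new_mutants_per_year, years_per_new_mutant)
```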

> But that still leaves
> the point of the question unchanged -- how do you know
> that all these other _observed_ mutations, including
> the nucleotide discussed, are an unbiased random pick
> from all possible 1 nucleotide changes, without knowing
> how many possible 1 nucleotide changes are there.

Why would I be interested in all possible 1 nt changes? I am only
interested in those that produce the variant phenotype. I clearly
admit that I do not *know* the mutation rate for this particular site,
but am merely inferring that it is not significantly different than
most point mutations. You are the one that claims that it is
different. Moreover, unless it is significantly more difficult to
produce this mutation (by multiple orders of magnitude), I don't see
why it could not occur randomly and then be selected for in the right
local conditions.

> Hence, neither of the last two variants allows you
> to claim that this is a "random 1 nucleotide change
> pick" without having to compute/estimate how many
> possible 1 nucleotide changes are there.

Why would knowing how many other 1 nt changes are possible be relevant?
Unless you are claiming that the selective phenotype does not appear
unless there is a simultaneous mutation of specific *multiple* sites.
That, of course, would be counterfactual to our knowledge of genes that
affect fur color.

> You can try hanging some more onto the earlier
> position i.e. you can say you allow that the observed
> rate of mutation is greater than what a random pick from
> all possible 1 nucleotide changes would yield,

I am only interested in the rate of change of this particular
nucleotide, the one that produces the variant selective phenotype. To
hell with all the other nucleotides. They are irrelevant and do not
affect the rate of change of this particular nucleotide nor the
frequency of the selective phenotype.

nightlight

Jul 11, 2006, 2:43:07 PM
Windy wrote:

> Would you use this formulation in physics? Is there "intelligence"
> picking which atom will experience fission next in a lump of
> radioactive substance?

Well, the least action principle formulation of dynamical laws
tells you that every particle and every field behaves, precisely
in all respects, as if each element is seeking to minimize some
'cost' or 'penalty' function which depends on the past and future
states of the system. If there is interaction, the 'cost' that
each particle is minimizing includes not only contributions
from its own state but also the contributions from the states
of other particles and fields.

Note that at the fundamental level (Quantum Field Theory/QFT),
the basic laws are irreducibly indeterministic i.e. for a
given initial state of a system, there are multiple future
states into which it can transform, without any interaction
with other systems and with initial state replicated absolutely
identically from one test to another. Hence, one could say
that the laws and the initial/boundary conditions do not fix
what a particle/field will do next (in its general pursuit
to minimize the 'cost') i.e. there is a sort of 'free will'
at the most fundamental level. Further, there is a formulation
of fundamental physical equations (Maxwell, Schrodinger, Dirac)
in which these equations are coarse grained statistical
properties of actions of large number of cellular automata,
each automaton pursuing its own little utility function
(e.g. see http://www.cft.edu.pl/~birula/publ/lattice.pdf and
misc. papers by G. N. Ord: http://www.scs.ryerson.ca/~gord/ ).
Now, I don't know what it is like to be an atom in pursuit
of the 'minimum cost', but I do know exactly what it is like
to be a particular collection of atoms in such pursuit.
I thus see no reason why there shouldn't be something it is
like to be an atom (for a philosophical perspective on
panpsychism, see: http://plato.stanford.edu/entries/panpsychism/ ).

> A virus has been assembled. Not all the way from atoms or
> anything like that, because that would be stupid, but
> it would be possible.

Well, that was done using quite a few commercial
biotech ingredients (enzymes, proteins) of varying
complexity, produced by live organisms in the first
place.

> How does cellular differentiation alter the DNA states? And since the
> massive mixing of states with recombination must occur before your
> single nucleotide mutation can be expressed, is your model of picking
> of any use?

I was talking of persistent DNA state changes, that involve massive
synchronized changes to multiple far away locations. The "state" in
case of differentiation is not the coarse grained state, such as
coding sequence, but the finer grained quantum state (or just a
sufficiently detailed chemical state), which still has
permanence across multiple cell generations, just like the
coding sequence.


> So? My example shows that including "all possible DNA states" in the
> probability calculation leads to an erroneous conclusion.
>

It is not the inclusion of all states that is the problem
but the simplification of assigning _equal probabilities_
to all final states. That simplification is fine for the
purpose it was used (to explain how one could formulate
criteria for empirically distinguishing RM from IA conjectures,
i.e. to show existence of such criteria), or to get a rough
idea of the kind of magnitudes involved, but not for much detail.

If one were to use the exact probability distribution of
the final states (which exists, since, at least in principle,
it follows from the initial state and the dynamical laws),
you could get correct values for the probability of any
subset of final states, such as the 'favorable' subset.


> No, it doesn't postulate "a very particular relation". It postulates no
> relation. This has been demonstrated. If you want randomness to be
> tested separately in all possible experiments on natural selection,
> sure, you would find some that deviate from randomness by chance because
> of the small sample size. How do you propose the researchers should
> estimate the relation in the mouse case with only one known beneficial
> mutation?
>

The exact and the three simplifying models of DNA state (max entropy,
RM and ID conjectures) are all probabilistic models, they are
all expressed in terms of probability distributions of the final
DNA states. Hence there is nothing meaningful in testing for
"randomness", since they all have randomness. It is the finer
properties of the randomness, which by itself is common to all,
that distinguish one conjecture from another.


>>The opposing (benevolent) ID conjecture is, like the
>>RM conjecture, an additional constraint on the final
>>distribution of DNA configurations, which says that
>>the final distribution of the DNA configurations
>>_will_ be biased in favor of the configurations which
>>will turn out to be (statistically) more favorable
>>later.
>
>
> And this has been tested in several cases and no bias in favour of
> beneficial mutations has been detected. Why continue? Do you have some
> evidence that suggests otherwise?

You can't tell whether there is a 'bias toward favorable' unless
you know what the outcome (distribution) would be without the
'bias toward favorable'. The 'bias toward favorable' is not
the same thing as 'favorable', which is what you and others
here seem to be assuming. Consider a gambler who is cheating,
which is a form of a 'bias toward favorable'. Does that imply
that he is also making money? Not at all. He still may be
losing money, and that depends on the baseline probability
distribution for 'making money' without his bias. In the
gambling analogy, the RM conjecture is that no one, neither
players nor the casino are cheating, while the ND conjecture
is that at least the players are cheating and possibly
the casino.

Your argument (and of others here) in gambling analogy is
that one does not need to know or consider the precise rules
of the game that would allow you to compute the baseline odds
(the game need not be symmetrical or fair) or the skills of
the players, or how the variety of color coded tokens being
exchanged translate into money (e.g. there could be a game
where some tokens may have negative value), to declare that
no one is cheating simply by observing that you don't see
anyone having a much bigger token pile in front of them than
the others, hence you conclude that no one can be cheating,
hence the RM conjecture must be correct. My argument is that
you need to know at least what the baseline odds are, what
are the values of different token colors and how many
tokens did each player start with, before you can deduce
from the rough sizes of token piles alone that no one is
cheating. You might not even know what the cheating would
look like in such a game, hence you may not recognize
it even if you are looking straight at it in the middle
of the act.

> So no bias towards favourable mutations there, either.
> Do you have a problem with that conclusion?


There was a bias toward some favorable and some unfavorable
mutations (among all possible final states), the net
outcome of all biases being favorable (the bacteria managed
to adapt to the challenge and survive). Note that the bias
here consisted in the increased weight for the mutated
final DNA states, and decreased weight for the non-mutated
states (probabilities for all possible final states have
a fixed sum, 1). The resulting distribution had a property
of the (statistical) net increase in the probability of
favorable states, which is what ID requires. The states
whose probability was decreased were the non-mutated states,
which were unfavorable in this environment (a certain
starvation).

The ID does not require that all unfavorable states must
have decreased probability or that all states with increased
probability must be favorable. It only requires that the net
result of all the IA processes/algorithms must be statistically
favorable (compared to the case of no IA activity). In this
case the IA process/algorithm was implemented in the
biochemical substratum in the form of a 'stress response'
mechanism which increases the general mutation rate.

The ID conjecture does not require or prohibit any
particular implementation (or even the nature of
substratum) of the IA algorithms. It only requires
that its basic performance standard for such algorithms
is met -- the net statistical gain compared to not
executing the algorithm(s). ID being a statistical
requirement means that any candidate algorithm has to
go through a phase of large statistical uncertainty,
hence there will be candidate algorithms which result
in the net loss. Further, since the rest of the system
is not staying fixed (but each component is running
and developing its own IA algorithms), even the
algorithms which have passed the initial candidate phase
in the original environment, remain candidates for any
new environmental challenges. A few bits on how all
these interacting and overlapping IA processes might
fit together into a larger pattern was sketched in
an earler post:

http://groups.google.com/group/talk.origins/msg/2c5884a907f10c22

As you may have noticed, an aspect of ID is only a perspective,
a particular (algorithmic) way of looking at the biochemical
processes, a heuristic. But it also contains a probabilistic
conjecture with a sharp mathematical criterion which, at least
in principle, can discriminate between ID and RM conjectures.
As suggested in a previous post, the Cairns experiment has
already falsified the RM1 conjecture, the original RM at the
time, and now we have RM2 which excludes from its prohibited
mutagenic processes list the particular 'stress response'
found to lead to the net favorable outcome in the Cairns
example. It also provided an early example of the IA process,
with likely many more to follow (along with further evolution
of RM2 into RM3, RM4,... with ever more crossed entries and
exceptions on its prohibited mutagenic processes list).


> An elevated general mutation rate was never a bias toward
> favourable states.

Without the elevated mutation rate they wouldn't have survived.
Hence the net result of all biases for that environment was
favorable for the bacteria.

>>All that ID conjecture implies in a general case
>>is that the rate of beneficial mutations would be
>>greater than whatever the RM conjecture would predict
>>in any given circumstances.
>
> So what is responsible for neutral mutations?

If you shift a probability distribution curve (such
as Gaussian) centered roughly around some nominal
'neutral' value M to the right, that doesn't mean
that the probability for M or for points left of M
becomes zero. It only may mean that it is smaller
than before the shift. Your argument here seems to
be that since there is a nonzero probability for
value M and for values left of M, the curve could
not have been shifted from the initial state. My
argument is that you cannot make such deduction
from the observed nonzero probability at M and left
of M, alone. You need to know more, such as what
were their probabilities before the shift. Note
that depending on where precisely the nominal
"neutral" value M is relative to the curve maximum,
the shift could also lead to the increase of
probability for M (due to changing enough of
negative values into the neutral).
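The shift argument can be made concrete with a normal curve (the 0.5 shift and unit SD are arbitrary illustrative choices): shifting the distribution to the right reduces, but does not zero, the probability mass at and below the neutral point M.

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd=1.0):
    # P(X <= x) for a normal distribution, via the error function
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

M = 0.0                            # nominal "neutral" effect size
before = normal_cdf(M, mean=0.0)   # mass at or below M before the shift
after = normal_cdf(M, mean=0.5)    # same tail after shifting right
print(before, after)  # both nonzero; the shift only shrinks the tail
```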

nightlight

Jul 11, 2006, 9:00:12 PM
hersheyhv wrote:

> That is because the rate of mutation at
> this site is a result of its chemistry. The chemistry of mutational
> change can be affected by local conditions such as nearby surrounding
> sequence, but not by distant features such as total amount of DNA.
> But the spontaneous rate of mutation at this site *is* the correct
> value to use.

This is the essence of your problem in understanding the
alternative position. It is true that the mutation rate
is determined by the local physical-chemical conditions.
But that does not exclude the possibility that these
conditions and their changes in time form a part of an
anticipatory/intelligent process pursuing some objectives.

Consider a chess playing computer program. It looks at the
possible moves available, then looks for your responses to
them, then its next responses,... evaluating different
possible move sequences and then picks one which yields the
best gain (according to its utility functions) within its
look-ahead horizon. Depending on the quality of the program,
it may also explicitly create plans and strategies and
look for the best ways to accomplish them. The best of these
programs play at the world championship level, and the top
one (IBM's Deep Blue) beat the then human world champion
(Garry Kasparov). That is an example of anticipatory
(or intelligent) process. My argument here does not require
anything more mysterious or more human-like for the 'intelligent
agency' than that kind of perfectly natural intelligent process
(it is a natural process, if we consider humans as a 'natural
process' i.e. humans are as natural as fire, rain, river,
bacteria,... just a more elaborate natural process).
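
A bare-bones sketch of such a look-ahead process (a generic negamax
search over an abstract two-player game; the callback names and the
toy game are hypothetical, not from any real chess engine):

```python
def negamax(state, depth, moves, apply_move, evaluate):
    # Pick the move maximizing our value `depth` plies ahead, assuming
    # the opponent does the same. evaluate() scores a state from the
    # perspective of the side to move.
    if depth == 0 or not moves(state):
        return evaluate(state), None
    best_val, best_move = float("-inf"), None
    for m in moves(state):
        val, _ = negamax(apply_move(state, m), depth - 1,
                         moves, apply_move, evaluate)
        val = -val  # the opponent's gain is our loss
        if val > best_val:
            best_val, best_move = val, m
    return best_val, best_move

# toy game: the state is an integer, each move adds +1 or -1; with
# evaluate(s) = -s, the player who just moved prefers a larger s
val, move = negamax(0, 1,
                    moves=lambda s: [1, -1],
                    apply_move=lambda s, m: s + m,
                    evaluate=lambda s: -s)
```

The same skeleton scales from this toy to real chess by swapping in
real move generation and a real evaluation function.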

Now, you could also look at the detailed EM fields and currents
inside the computer hardware and find the corresponding physical
state description of the very same anticipatory process that
looks ahead, plans and selects the best actions in the first
description. The existence of electric pulses and EM fields
which can in principle explain the same intelligent process
in a different language does not mean that the first description
is incorrect or that it doesn't exist.

For example, you may find the precise electric pulse X that
corresponds to the final decision of the program to select
a particular move. From that pulse X you can track down the
subsequent pulses which display the selected move. Now,
applying your reasoning from the mutation rate argument, you
would claim that the fact that you can explain the displayed
move as the result of pulse X, means that there is no planning,
look-ahead, anticipation, intelligent process,... but that
the move shown entirely follows from the pulse X. It does
follow, but there is also another pattern in the phenomenon,
the one described in the language of chess program and its
algorithm. In other words, the pulse X does not tell the
whole story, even though it is indeed causally responsible
for the displayed move (and it needs no other causes to
predict the displayed move).

Similarly, the fact that the mutation rate at a given site
follows from the precise physical-chemical state at the site,
does not preclude this same physical-chemical state from
being a part of an anticipatory computational process.

The same would go for your own anticipatory processes
implemented as the electro-chemical activity in your brain.
One could in principle find some electro-chemical correlates
of some of your mental processes. That finding does not erase
from existence or invalidate/change the nature of those mental
processes. It is simply a different angle on the same phenomenon.

As a secondary, smaller point about the mutation rates -- going
back to the example of the chess program and electrical measurements,
if you were only to measure the counts of pulses per second
at various locations, such coarse grained statistical information
is not nearly enough to decide whether there is a computational
process executing some anticipatory algorithm, much less to
decipher what it is doing and find electrical correlates of
the algorithmic steps. The type of electrical information
you need would _not be statistical_, but detailed _time-dependent_
sequence of electrical pulses, since the algorithms are specified
by a precise sequence of steps, and not some bulk statistical
properties of all steps lumped together. Hence without the
detailed time-dependent picture of each pulse after pulse
and its relation to other pulses, or without the actual detailed
model of the hardware and software design, you could not have a
clue whether the trillions of pulses going on combine into
any kind of intelligent process or are just some kind of
random junk such as the result of a junk-code doing nothing
meaningful (but randomly changing memory locations and jumping
around).

The bare mutation rates (or their phenotypic effects) are not
enough to decide whether the frame-by-frame sequence of detailed
physical & chemical conditions in the vicinity of the mutation
site that lead to that individual mutation (or to the lack of
its repair) is always just some random, purely accidental event
uncorrelated with the ongoing processes in the biochemical
reaction network of the cell or a step of an anticipatory,
look-ahead process executed by this network. That is an open
question, and not something you can just declare one way or
the other.

Since the detailed reverse engineering of such computations
by the natural networks is not presently feasible (other
than a few tiny snippets & toy models such as 'neural networks'),
an alternative way to establish whether such natural
computation, controlling (statistically) the mutations and
other DNA transformations, is occurring would be to estimate
how well a mathematical model which does _not_ include any
anticipatory computational component would perform against
the observed performance of biological systems.

If the mathematical model which lacks the computational
(anticipatory, intelligent) component, significantly
under-performs the empirically observed performance of
genetic networks (e.g. it predicts that twenty orders
of magnitude more offspring was needed to evolve some
adaptation in given situation than the number of offspring
deduced from the observations), then there must be processes
going on in the actual networks which are capable of
drastically accelerating and enhancing the exploration/search
for the 'favorable' DNA configurations.

Such processes would be anticipatory i.e. they would use
internal models, run them forward in time to find out the
consequences of different actions (DNA changes) and select
those which, in the model space, comes out as the best
model action (just like a chess program, selecting the
best move via look-ahead). The gain is achieved by virtue
of trying out and discarding many available actions
inexpensively, within the model space (in its 'mind',
as it were), before committing to the much more expensive
and slower real world realization of the selected action.

The mathematical models and criteria I was discussing
earlier are the models without anticipatory computational
component, since that is at least one way that we can
establish, at least in principle, whether such "dumb"
models can replicate the performance of the natural
networks.

For the sake of argument, I explained the basic idea of
this criterion on an extremely simplified model which
assumed equiprobable final DNA states. Since I wasn't
trying to _extract any numbers_ from it to use in the
argument, many of the objections from you and others
about the drawbacks of that model miss the point. I was
merely trying to explain, on a concrete example, the
relation between the models and empirical observations
and how the criterion discriminating between RM and ID
would be formulated in such a setting. Whether the toy
model used for this purpose was accurate enough to yield
useful numbers is completely irrelevant.

Any actual model aiming to get numbers to be compared
with empirical performance of biological networks,
would need to be far more elaborate. The point of my
model was to show that a perfectly legitimate scientific
criterion can be formulated to differentiate between
the RM and ID conjectures. The toy model I used was
in no way meant to be the actual implementation of
such a criterion, ready to test against observations,
but was meant only to explain why such mathematical
criteria _do exist_ and also how they would fit
into the models. Hence ID is a perfectly legitimate,
falsifiable conjecture.
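
For what it's worth, the shape of such a criterion is easy to sketch
in code (all numbers below are made up purely for illustration; a
real model would be far more elaborate, as said above):

```python
def expected_random_tries(n_configs, n_favorable):
    # toy equiprobable model: expected number of independent random
    # draws until a favorable configuration first appears
    # (mean of a geometric distribution, p = n_favorable / n_configs)
    return n_configs / n_favorable

# hypothetical numbers, purely for illustration
predicted = expected_random_tries(n_configs=4 ** 10, n_favorable=1)
observed = 5_000  # hypothetical observed number of tries/offspring

# a ratio >> 1 would suggest the real search outperformed the "dumb" model
print(predicted / observed)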

I hope you now see why citing the mutation
rates and arguing that they can lead to the observed
mutation in the time and population size given, is
irrelevant for the argument I am making. No one
is arguing that mutation did not happen or that
it could not have happened. The argument is about
the _nature of the processes_ which prepared the
physical-chemical conditions at the mutation site:
are these processes random/dumb or are they part
of some anticipatory computational process (e.g.
by the biochemical reaction network of the cell)
which is, via its internal modeling and look-ahead,
short-circuiting the vast numbers of wrong tries
before committing its choices to the expensive
real world implementation? The rate alone tells
you absolutely nothing about this question.

Since it is much easier to model a dumb/random
process than an anticipatory process, the simplest
approach to try answering the question would be
to mathematically model the dumb process and check
its predicted performance against the observed
performance of the actual biochemical networks.


>> Statistical difference is detectable.
>
> You just said there was no detectable difference.
> Can't you keep your story straight for an entire
> paragraph?

This is just one example (out of many) of the major
confusion you have when differentiating between the
models and empirical reality.

In the empirical reality you have just one mutation
rate, the one being observed. There is no other
empirical rate you can compare it to. You can't
create in the real world two types of organisms,
one using the neo-Darwinian scheme and another the
ID scheme, and then compare one rate to another.
Hence, there is no other _empirical_ rate to
compare the _known empirical_ rate with.

What I was saying is that one needs to compare the
_empirical rates_ with the _predictions_ of the RM
model. The RM model would be a mathematical model
of these processes which does not contain an
anticipatory component as the part of the model
to look ahead and speed up the time it takes to
obtain the favorable mutation. This model cannot,
of course, include in its parameters the empirical
rates since that is what you're trying to predict
and compare with the empirical rates. If we had
unlimited computational powers, the model would
start with quantum description of the DNA and
the cellular environment, and the boundary &
initial conditions would be picked using max
entropy principle, i.e. we would take a distribution
which maximizes entropy, subject to any known
constraints (from physical laws and environment).

Since we don't have unlimited computational powers,
the model would need great many simplifications
and additional assumptions. At the extreme point
of such simplifications is the uniform distribution
model that I used to explain the criterion
differentiating between ID and RM.


>> in other words, your underlying premise appears to be that
>> the attribute "anticipation" can be applied only to the
>> super-natural processes,
>
> No. But a certain level of consciousness is required.
> There are intermediate levels of consciousness, of course.
> But being able to anticipate the consequences of actions
> requires consciousness. There is no consciousness involved
> in either natural selection or mutation. Mutation is a
> chemical process. The environment's choices are dumb
> and unintelligent.

This is another major confusion you have. There is no
"consciousness" within natural science models. Nothing
in the natural laws gives any indication of what it is like
to be such and such an arrangement of atoms, or what
redness is like. All it gives you is the frequency of red photons,
or that some neurons are firing such and such pulses
when the retina is struck by photons of that frequency.
Nothing in any such description has the slightest hint
of "consciousness" or "redness".

Hence "consciousness" is not a concept in the present
natural science. There is no model for it or even hint
that it exists or that it needs to exist for any purpose.

Since we do know that it does exist (e.g. you know what
it is like to be the arrangement of atoms and fields that
make up you), its absence in natural science is clearly
a case of defect in our present natural science. Hence
you cannot base a scientific argument on the absence or
presence of "consciousness" since natural science doesn't
know that "consciousness" exists.

In addition to this non-scientific leap, you make another
one above, by declaring that you know (somehow) that
biochemical processes in the cell have _no consciousness_. There
is no basis in natural science to declare anything of
that sort. The laws of natural science have no model of
consciousness, thus there is no scientific criterion
which can decide that this arrangement of atoms and
fields does have consciousness and this one does not.

If you wish to argue for your point from analogies, the
adaptable biochemical network in a cell has similar
mathematical properties as the networks of neurons
in a brain. They are all modeled by the same type
of mathematical constructs, neural networks. They
all can perform the most general computations (i.e.
compute, at least in principle given enough time
and material, anything that is computable at all).
By modifying their link strengths in small simple,
reflexive increments, in response to punishments and
rewards (which are given to the model network as
particular input signals), they spontaneously adjust
their reactions to the inputs so as to minimize punishments
and maximize rewards, i.e. they are natural anticipatory
systems (with internal models of their inputs/environment,
which they run forward in time to evaluate the consequences,
just as chess programs or human chess players do).
Hence, an argument from analogy would only indicate that
those biochemical networks in the cell do have some
form of consciousness, contrary to your declaration.
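
The "small reflexive increments in response to punishments and
rewards" can be caricatured in a few lines (a toy hill-climber, not
a model of any specific biochemical or neural network):

```python
import random

random.seed(0)  # reproducible toy run

def train_step(weights, inputs, reward_fn, noise=0.05):
    # jitter the link strengths slightly; keep the change only if
    # the scalar reward improves (reinforcement), else revert (punishment)
    trial = [w + random.uniform(-noise, noise) for w in weights]
    if reward_fn(trial, inputs) > reward_fn(weights, inputs):
        return trial
    return weights

# toy reward: how closely the weighted sum of the inputs matches a target
target = 1.0
reward = lambda w, x: -abs(sum(wi * xi for wi, xi in zip(w, x)) - target)

w = [0.0, 0.0]
for _ in range(500):
    w = train_step(w, [1.0, 1.0], reward)
print(sum(w))  # drifts toward the target
```

No step in the loop "understands" anything; the optimization emerges
from blind jitter plus keep-if-better.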

Finally, you show a third element of your great mixup
on this topic, by declaring that 'anticipation of
consequences' requires some level of consciousness.
It does not. Does a chess program, which anticipates
your next move (and probably ten moves after that)
need to have consciousness? Nope. It does anticipate
the consequences of its moves via pure computation.
Now you can quibble about the semantics of the verb "anticipate"
and claim that you don't call 'anticipation of
consequences' without consciousness anticipation.

That is irrelevant for the topic being argued. The only
scientifically viable meaning for 'anticipation' is
the anticipation in the computational sense, such as
that of a chess program. What it is like, subjectively,
to anticipate has nothing to do with the term I am
using or the arguments I was making. Further, perfectly
natural, well understood, anticipatory processes do
exist (such as chess program running on a computer),
and they require no consciousness to explain their
anticipatory behaviors. It is all done as computation.

Hence there is nothing supernatural or far fetched
about considering whether anticipatory processes
have a role in the cellular biology, evolution, origin
of life. Their principal strength would be that they
could short-circuit a great deal of expensive trial and
error via offspring (by doing the trial & error "in
the head"). Hence, it is reasonable to ask whether
mathematical models of the biochemical processes which
do not include such anticipatory elements can replicate
the empirically observed performance of the real
biological systems.

These are perfectly scientific and open questions. It
is the dogmatic refusal to acknowledge this fact that
is unscientific (a religious dogma).


Nic

Jul 11, 2006, 9:25:40 PM

nightlight wrote:
> hersheyhv wrote:
>
> > That is because the rate of mutation at
> > this site is a result of its chemistry. The chemistry of mutational
> > change can be affected by local conditions such as nearby surrounding
> > sequence, but not by distant features such as total amount of DNA.
> > But the spontaneous rate of mutation at this site *is* the correct
> > value to use.
>
> This is the essence of your problem in understanding the
> alternative position. It is true that the mutation rate
> is determined by the local physical-chemical conditions.
> But that does not exclude the possibility that these
> conditions and their changes in time form a part of an
> anticipatory/intelligent process pursuing some objectives.

<snip>

> Similarly, the fact that the mutation rate at a given site
> follows from the precise physical-chemical state at the site,
> does not preclude this same physical-chemical state from
> being a part of an anticipatory computational process.

Just responding to this point in isolation.
I understand and agree with the point that system descriptions are
layered and that descriptions at different layers don't necessarily
contradict (if that's what you meant). But the two cases you are
assimilating - the innards of a computer and a DNA strand - are
dissimilar in the important respect of their immunity from noise. The
chess-playing computer is deterministic regardless of phenomena like
background radiation and Brownian motion (over which the programmer
would have no control). The mutation event in the DNA strand on the
other hand, is precisely down to the vagaries of such phenomena,
and not to phenomena with macroscopic, and therefore
relevant-to-your-argument, causes. The only way you could sustain your
view is to suppose that a 'programmer' foresees even the thermal
motions of every molecule - right from the very start of the universe.

<snip>

nightlight

Jul 11, 2006, 11:34:49 PM
Nic wrote:

> I understand and agree with the point that system descriptions are
> layered and that descriptions at different layers don't necessarily
> contradict (if that's what you meant). But the two cases you are
> assimilating - the innards of a computer and a DNA strand - are
> dissimilar in the important respect of their immunity from noise. The
> chess-playing computer is deterministic regardless of phenomena like
> background radiation and Brownian motion (over which the programmer
> would have no control).

The advantage of using conventional computer analogy is that
here we have a clear and full descriptions at both levels of
abstraction, the algorithmic (high abstraction) and physical
(low abstraction). The disadvantage is that neither level is
quite analogous to the corresponding levels of the
adaptable-network-based natural distributed computers. But for the
main point I was making (an example of harmonious coexistence
of the descriptions of anticipatory systems at high and low
levels of abstraction), it was fine since I didn't rely in the
argument on the further aspects of the analogy, where it
certainly breaks down.


> The mutation event in the DNA strand on the
> other hand, is precisely down to the vagaries of such phenomena, and no
> other phenomena with macroscopic, and therefore
> relevant-to-your-argument causes. The only way you could sustain your
> view is to suppose that a 'programmer' foresees even the thermal
> motions of every molecule - right from the very start of the universe.

The network making up human brain is at least as susceptible to those
noise phenomena as the genetic networks. Such adaptable network based
computers undergo a graceful degradation under damage. We lose millions
of neurons daily and yet we don't crash or go blank due to that loss.
A computer's CPU losing even a single one of its millions of
transistors would be stopped dead in its tracks. Hence, a human-designed
implementation of a general computer, our digital computer, is much
more fragile than those implemented as distributed adaptable
networks. In terms of the computational power, these natural adaptable
networks (which should not be confused with their toy models, such
as neural networks) in many ways far outperform in their particular
domains of expertise any supercomputers and algorithms we have.

As to the DNA mutations, even if one were to assume that a mutation
is _always_ an accidental event, an "error", the repair (or the
absence of repair) may not be. I think that at least some
mutations, especially the most useful ones, are not errors
at all, but results of a sophisticated genetic engineering
technology employed by the biochemical networks and their
underlying hierarchy of sub-networks, far beyond anything
we can do or understand with our present technologies or
theories.

This hierarchy extends down to the Planckian scale networks
(where the elemental objects are of size 10^-33m), which run
10^16 times faster and have 10^50 times more nodes & links
than the adaptable networks we encounter at the cellular and
human levels (brains). What is from the perspective of our
networks (brains) just some tiny thermal and quantum noise
at the molecular level, may be for the most powerful
(Planckian scale) networks their intergalactic engineering
technology, which they have developed after an evolution
and civilizations of what by their clocks is an equivalent
of our 10^16 billions of years. Our space-time structure
and quantum fields (the foundations of our physical laws)
are a glimpse at a few coarse grained statistical properties
and side-effects of their vast computations and undertakings.
There are a few more comments on this 'speculation' in the
concluding paragraphs of the two earlier posts here:

http://groups.google.com/group/talk.origins/msg/651222ff530cbe4e
http://groups.google.com/group/talk.origins/msg/2c5884a907f10c22

z

Jul 12, 2006, 2:38:18 AM
<snip ignorant musings>

>
>Of course, if one talks to molecular biologists, one may
>get an impression they're in a fairly complete control
>of all the key phenomena, and are just filling in few
>smaller details here and there. Of course, if one were
>to talk to the ancient Egyptian priesthoods, one would
>have gotten equally self-assured response, claiming
>full knowledge of all things, from calendar seasons
>and floods, through knowledge of secrets of health
>and illness, life and death, Earth and stars. If
>there was anything they couldn't answer safely, then
>those things were declared intrinsically 'random',
>un-answerable, un-knowable, the will of gods.

You have no clue as to how we molecular biologists view living
systems. We certainly do not posit that we are just "filling in a
few details here and there". We are always surprised by what we
find when we dig a little deeper into the mechanics of the genome. Ten
years ago, let-7 was just an odd gene in a nematode. Now we recognize
it as the founding member of a family of hugely important
regulatory RNAs.

Unlike your example of the Egyptian priesthood, we actually try to work
out the mechanics behind our predictions. And when we are wrong, we
change our models to reflect what we have learned. It's a rather
dynamic field in that respect.

>
>In retrospect, they were a handful of clever guys who
>managed to pick out few genuine patterns about the
>calendar seasons and floods. The rest was self-promotion,
>mostly self-delusional i.e. they actually believed they
>knew what they were talking about (especially those
>still learning the secrets). The disciples had to go
>through long process of testing and elimination of
>unsuitables (those not bright enough and those lacking
>a gift for self-deception needed to uphold convincingly
>enough the pretense of omniscience), learning the arcane
>symbolism and secret language of the discipline,...
>before his initiation into the inner circle. That's the
>human nature, back then and as it is today. The
>scientists in various disciplines are essentially our
>modern day priesthoods with similar patterns of behavior,
>pretense, self-promotion, self-deception as those of
>ancient Egyptian priesthoods.
>

This actually would be a more valid comparison to string theorists,
not molecular biologists. We tend to actually perform experiments to
test our ideas. They are also testable (and believe me, they are) by
other molecular biologists.

>Hence, I take all excessively self-assured declarations
>that there are no patterns in the mutations (or in general
>transformation of DNA from generation to generation)
>correlating them with future states of the environment
>the same way I would take Egyptian priests assuring me
>that any phenomenon for which they don't see any
>patterns _has no_ pattern, it is irreducibly random.
>Doubly suspicious are any such declarations which are
>also backed up by censorship, lawsuits, intimidations...
>

Then you are either willfully ignorant, or an idiot. Random mutation
coupled with selection can account for the beach mice without any
special pleading. You have to supply a mutation fairy that goes
outside of the rational world to supply your counterexample.

I love the last bit claiming "censorship, lawsuits, intimidations...".
The lawsuits arise from your camp trying to claim fairies as science.
The censorship does not exist - the ID folks have yet to put together a
coherent "theory" with experimental plans. Can't censor what does not
exist. And the only intimidation seems to be directed against
scientists, not the other way around.

>> Your claim that this particular point mutation could have been due to a
>> magical mutation fairy makes as much sense in science as saying that
>> the sun rose in the west on July 4th in 1777.
>
>I was only claiming that the 'random' nature of the mutation
>behind the color adaptation being reported in the article
>was not established in any way. All that was established was
>that an adaptation was due to particular very small mutation
>(a single nucleotide).
>
>There is nothing in the paper, or in any subsequent discussion,
>that shows how did they establish that the astronomically
>tiny fraction among the all possible DNA configurations (which
>are one nucleotide change away from the initial configuration,
>the set S0 in my original post in this thread) that were explored
>by the given comparatively small number of tries available,
>constitutes a 'random' set of configurations, _unbiased in
>any way_ to find the suitable solution to the survival
>problem.

The small number of tries? The number of tries is related to the Ne of
the mouse population, not the number of mice that the researchers
sampled. I'm sure you are aware of the rather large population sizes
that the researchers sampled from.


>
>Just saying that there were so many mutations in given a
>time, as you keep doing, says nothing about their relation
>to the problem being solved by the genetic search algorithm
>or the nature of the algorithm. Both, the guided (intelligent)
>and the 'random' mutations will have some number of mutations
>per given time. The _only_ difference you could extract
>statistically is that the intelligent mutations will find
>the solution faster on average than the random ones. Now, to
>check whether the search was faster than random, you cannot
>get around the task of estimating what the random _model_
>predicts for the expected number of tries needed to solve
>the problem. Only then you can say whether the _empirically
>observed_ search and solution time is comparable to that
>predicted by the random search model, or whether it is slower
>(malicious intelligence) or faster (benevolent intelligence).
>

Actually, what we see in experimental populations fits perfectly with
RM + NS. Your fairy must be clever enough to recognize that it
should not intervene when it sees folks in white lab coats. When the
rates observed experimentally are extrapolated onto natural
populations, the results are also consistent. Therefore, your fairy
must be prescient enough to know ahead of time which organisms will or
won't be examined closely. The god of the gaps, perhaps?

>Just pointing out at the empirically observed rates of
>mutations, without any comparison to the search space
>being explored, tells you nothing about the efficiency
>of the search compared to the random search.

NFL? Evolution proceeds via a random walk through the sequence space
available to it. Since the selective agents are changing at the same
time, NFL explicitly does not even apply. Even without that
consideration, a random walk works fine if "good enough" works. No
organism is perfect.

Nope, we don't need to know the search or the solution space.
Evolution is nonteleological, unlike guessing someone's number. There
is no preordained winner.
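
That kind of nonteleological search is trivial to sketch (a toy
one-dimensional "landscape"; all numbers are illustrative only):

```python
import random

random.seed(0)  # reproducible toy run

def random_walk_search(fitness, start, good_enough, max_steps=10_000):
    # blind one-step changes, kept only when fitness does not decrease;
    # stops at any "good enough" point, with no preordained winner
    x = start
    for step in range(max_steps):
        if fitness(x) >= good_enough:
            return x, step
        cand = x + random.choice([-1, 1])
        if fitness(cand) >= fitness(x):
            x = cand
    return x, max_steps

# toy landscape: fitness peaks at 50; "good enough" is within 5 of it
fit = lambda v: -abs(v - 50)
x, steps = random_walk_search(fit, start=0, good_enough=-5)
print(x, steps)
```

The walk stops at the first acceptable point it stumbles into, not at
any preset optimum.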

You're reaching for the extremes of string theory, i.e. "I can explain
the universe, but can offer no plausible means of testing the theory"?
Stealth pixies are not testable.


>
>Hence, repeatedly trotting out the Pat Robertson's "theory
>of evolution" as the sole alternative to the neo-Darwinism,
>as it is reflexively done here by you and other defenders
>of the neo-Darwinian dogma, is a childish strawman which,
>being a clear indicator of ultimate desperation and
>retreat from a rational argument, only further emphasizes
>the fundamental weakness of the theory you are defending.

No, we are saying you don't understand biology. Nobody understands
Pat Robertson.

>
>The nature of the search algorithm behind evolution (how
>close or how far from the random search is it?) is a
>perfectly legitimate scientific question that presently
>has no answer. It is also a fact which the neo-Darwinian
>priesthood is fighting tooth and nail to keep away
>from being recognized outside of the priesthood, even
>that there is a question, by all means available --
>through censorship, lawsuits, bureaucratic and social
>intimidation, threats to academic career, funding,...

There is no global search algorithm to find. Your paranoia about a
cabal of scientists aside, there is nothing to search for. We can and
do model it as RM + NS. And it works, your objections aside. And it
does have practical uses such as determining the optimal course of
antibiotic treatment or the size of crop refugia needed for the use
of Bt transgenic crops.

B Miller

Richard Forrest

Jul 12, 2006, 3:07:11 AM

nightlight wrote:
> hersheyhv wrote:
>
> > That is because the rate of mutation at
> > this site is a result of its chemistry. The chemistry of mutational
> > change can be affected by local conditions such as nearby surrounding
> > sequence, but not by distant features such as total amount of DNA.
> > But the spontaneous rate of mutation at this site *is* the correct
> > value to use.
>
> This is the essence of your problem in understanding the
> alternative position.

There is no alternative position based on evidence.

> It is true that the mutation rate
> is determined by the local physical-chemical conditions.
> But that does not exclude the possibility that these
> conditions and their changes in time form a part of an
> anticipatory/intelligent process pursuing some objectives.
>

However, lacking any evidence whatsoever for the existence of
intelligent involvement by any such entity, it is not an "alternative
position" of any scientific validity.

> Consider a chess playing computer program. It looks at the
> possible moves available, then looks for your responses to
> them, then his next responses,... evaluating different
> possible move sequences and then picks one which yields the
> best gain (according to its utility functions) within its
> look-ahead horizon. Depending on the quality of the program,
> it may also explicitly create plans and strategies and
> look for the best ways to accomplish them. The best of these
> programs play at the world championship level, and the top
> one (IBM's Deep Blue) beat the then human world champion
> (Garry Kasparov). That is an example of anticipatory
> (or intelligent) process. My argument here does not require
> anything more mysterious or more human-like for the 'intelligent
> agency' than that kind of perfectly natural intelligent process
> (it is a natural process, if we consider humans as a 'natural
> process' i.e. humans are as natural as fire, rain, river,
> bacteria,... just a more elaborate natural process).
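The look-ahead selection the quoted passage describes can be made concrete with a short sketch. The game below is a hypothetical stand-in for chess (two players alternately take 1 or 2 tokens; whoever takes the last token wins), but the selection logic is the same plain minimax a chess engine runs over a much deeper tree:

```python
# Minimal minimax look-ahead: evaluate every available move by searching
# the game tree to the end, then pick the move with the best guaranteed
# outcome for the side to move. Toy game (hypothetical): players alternate
# taking 1 or 2 tokens from a pile; taking the last token wins.

def best_move(pile, maximizing=True):
    """Return (score, move); score is +1 if the first player can force a win."""
    if pile == 0:
        # The previous player took the last token: the side now to move lost.
        return (-1 if maximizing else 1), None
    results = []
    for m in (1, 2):
        if m <= pile:
            score, _ = best_move(pile - m, not maximizing)
            results.append((score, m))
    # The maximizer picks the highest score, the minimizer the lowest.
    return max(results) if maximizing else min(results)

score, move = best_move(7)   # first player can force a win by taking 1 token
```

With 7 tokens the search discovers the standard strategy of always leaving the opponent a multiple of 3 — an "anticipatory" choice recovered purely by trying out continuations in the model space.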

So you are asserting that evolution is governed by a computer?
Where is this fabulous device? What evidence do you have for its
existence?

>
> Now, you could also look at the detailed EM fields and currents
> inside the computer hardware and find the corresponding physical
> state description of the very same anticipatory process that
> looks ahead, plans and selects the best actions in the first
> description. The existence of electric pulses and EM fields
> which can in principle explain the same intelligent process
> in a different language does not mean that the first description
> is incorrect or that it doesn't exist.

Which is all completely and utterly irrelevant, as there is no evidence
whatsoever that such a device exists.

>
> For example, you may find the precise electric pulse X that
> corresponds to the final decision of the program to select
> a particular move. From that pulse X you can track down the
> subsequent pulses which display the selected move. Now,
> applying your reasoning from the mutation rate argument, you
> would claim that the fact that you can explain the displayed
> move as the result of pulse X, means that there is no planning,
> look-ahead, anticipation, intelligent process,... but that
> the move shown entirely follows from the pulse X. It does
> follow, but there is also another pattern in the phenomenon,
> the one described in the language of chess program and its
> algorithm. In other words, the pulse X does not tell the
> whole story, even though it is indeed causally responsible
> for the displayed move (and it needs no other causes to
> predict the displayed move).
>


Which is all completely and utterly irrelevant, as there is no evidence
whatsoever that such a device exists.

> Similarly, the fact that the mutation rate at a given site
> follows from the precise physical-chemical state at the site,
> does not preclude this same physical-chemical state from
> being a part of an anticipatory computational process.
>

Unless there is evidence for such an "anticipatory process", there is no
reason whatsoever to invoke the action of such an entity.

> The same would go for your own anticipatory processes
> implemented as the electro-chemical activity in your brain.
> One could in principle find some electro-chemical correlates
> of some of your mental processes. That finding does not erase
> from existence or invalidate/change the nature those mental
> processes. It is simply a different angle on the same phenomenon.
>

Which is all completely and utterly irrelevant as there is no evidence
whatsoever that such a device exists.

> As a secondary, smaller point about the mutation rates -- going
> back to the example of the chess program and electrical measurements,
> if you were only to measure the counts of pulses per second
> at various locations, such coarse grained statistical information
> is not nearly enough to decide whether there is a computational
> process executing some anticipatory algorithm, much less to
> decipher what it is doing and find electrical correlates of
> the algorithmic steps. The type of electrical information
> you need would _not be statistical_, but detailed _time-dependent_
> sequence of electrical pulses, since the algorithms are specified
> by a precise sequence of steps, and not some bulk statistical
> properties of all steps lumped together. Hence without the
> detailed time-dependent picture of each pulse after pulse
> and its relation to other pulses, or without the actual detailed
> model of the hardware and software design, you could not have
> a clue whether the trillions of pulses going on combine into
> any kind of intelligent process or are just some kind of
> random junk such as the result of a junk-code doing nothing
> meaningful (but randomly changing memory locations and jumping
> around).


Which is all completely and utterly irrelevant as there is no evidence
whatsoever that such a device exists.

>
> The bare mutation rates (or their phenotopic effects) are not
> enough to decide whether the frame-by-frame sequence of detailed
> physical & chemical conditions in the vicinity of the mutation
> site that lead to that individual mutation (or to the lack of
> its repair) is always just some random, purely accidental event
> uncorrelated with the ongoing processes in the biochemical
> reaction network of the cell or a step of an anticipatory,
> look-ahead process executed by this network. That is an open
> question, and not something you can just declare one way or
> the other.

Unless there is evidence for such an "anticipatory process", there is no
reason whatsoever to invoke the action of such an entity.

>
> Since the detailed reversed engineering of such computations
> by the natural networks is not presently feasible (other
> than few tiny snippets & toy models such as 'neural networks'),
> an alternative way to at least establish whether such natural
> computation, controlling (statistically) the mutations and
> other DNA transformations, is occurring would be to estimate
> how well a mathematical model which does _not_ include any
> anticipatory computational component would perform against
> the observed performance of biological systems.

So you are saying that we should test for the existence of an entity
for which there is no evidence.

Why?

>
> If the mathematical model which lacks the computational
> (anticipatory, intelligent) component, significantly
> under-performs the empirically observed performance of
> genetic networks (e.g. it predicts that twenty orders
> of magnitude more offspring was needed to evolve some
> adaptation in given situation than the number of offspring
> deduced from the observations), then there must be processes
> going on in the actual networks which are capable of
> drastically accelerating and enhancing the exploration/search
> for the 'favorable' DNA configurations.
>
> Such processes would be anticipatory i.e. they would use
> internal models, run them forward in time to find out the
> consequences of different actions (DNA changes) and select
> those which, in the model space, come out as the best
> model action (just like a chess program, selecting the
> best move via look-ahead). The gain is achieved by virtue
> of trying out and discarding many available actions
> inexpensively, within the model space (in its 'mind',
> as it were), before committing to the much more expensive
> and slower real world realization of the selected action.
>


Unless there is evidence for such an "anticipatory process", there is no
reason whatsoever to invoke the action of such an entity.

> The mathematical models and criteria I was discussing
> earlier are the models without anticipatory computational
> component, since that is at least one way that we can
> establish, at least in principle, whether such "dumb"
> models can replicate the performance of the natural
> networks.


Unless there is evidence for such an "anticipatory process", there is no
reason whatsoever to invoke the action of such an entity.


>
> For the sake of argument, I explained the basic idea of
> this criterion on an extremely simplified model which
> assumed equiprobable final DNA states. Since I wasn't
> trying to _extract any numbers_ from it to use in the
> argument, much of the objections of you and others to
> drawbacks of that model are missing the point. I was
> merely trying to explain on a concrete example the relation
> between the models and empirical observations and how
> the criterion discriminating between RM and ID would
> be formulated in such a setting. Whether the toy model
> used for this purpose was accurate enough to yield
> some useful numbers is completely irrelevant.


Unless there is evidence for such an "anticipatory process", there is no
reason whatsoever to invoke the action of such an entity.

>
> Any actual model aiming to get numbers to be compared
> with empirical performance of biological networks,
> would need to be far more elaborate. The point of my
> model was to show that a perfectly legitimate scientific
> criterion can be formulated to differentiate between
> the RM and ID conjectures. The toy model I used was
> in no way meant to be the actual implementation of
> such a criterion ready to test against observations,
> but was meant only to explain why such mathematical
> criteria _do exist_ and also how they would fit
> into the models. Hence ID is a perfectly legitimate,
> falsifiable conjecture.

We come back to this: in what way is ID falsifiable?

What test can show that there is *NO* involvement of any intelligent
agent in biological evolution?

Which is what biologists have done.
There is no evidence whatsoever that mutations are biased in favour of
increased fitness.

> This model cannot,
> of course, include in its parameters the empirical
> rates since that is what you're trying to predict
> and compare with the empirical rates.

And how on earth does this differ from any other model in any other
branch of science?
You make predictions based on your model and compare them with
empirical evidence. If your predictions are sound, you conclude that
your model is sound.

> If we had
> unlimited computational powers, the model would
> start with quantum description of the DNA and
> the cellular environment, and the boundary &
> initial conditions would be picked using max
> entropy principle, i.e. we would take a distribution
> which maximizes entropy, subject to any known
> constraints (from physical laws and environment).

Which would be no more than refining the model, and we would still have
to test it against empirical evidence to judge its soundness.

>
> Since we don't have unlimited computational powers,
> the model would need great many simplifications
> and additional assumptions. At the extreme point
> of such simplifications is the uniform distribution
> model that I used to explain the criteria
> differentiating between ID and RM.
>

You have not demonstrated any need to invoke the involvement of any
intelligent entity in biological evolution. You start with the a priori
assumption that such an entity exists.

This is not science.

RF

nightlight

Jul 12, 2006, 4:08:43 AM
z wrote:

>
> Random mutation
> coupled with selection can account for the beach mice without any
> special pleading. You have to supply a mutation fairy that goes
> outside of the rational world to supply your counter example.

>....

> Actually, what we see in experimental populations fits perfectly with
> RM + NS. Your fairy must be clever enough to recognize that they
> should not intervene when they see folks in white lab coats. When the
> rates observed experimentally are extrapolated onto natural
> populations, the results are also consistent. Therefore, your fairy
> must be prescient enough to know ahead of time which organisms will or
> won't be examined closely. The god of the gaps, perhaps?
>

You are recycling, with several days' delay, the empirical mutation rate
argument that "hersheyhv" has been elaborating here with more
specifics. That argument entirely misses the point. No one is arguing
that those empirical rates cannot predict that the given adaptation
can arise (for given population and time span) or that they are wrong
or improbable for some reason.

The question I am talking about is whether the detailed physical &
chemical state and its time evolution that caused those mutation
is an accidental event or a step of an anticipatory computational
process by the biochemical network in a cell. This has been already
discussed in detail with "hersheyhv", so I won't repeat here the whole
explanation of why your rates argument is a strawman. The post
dealing with it (along with several of your other arguments that
"hersheyhv" already brought up) in detail is here:

http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2

> Nope, we don't need to know the search or the solution space.
> Evolution is nonteleological, unlike guessing someones number.
> There is no preordained winner.

It seems you are unfamiliar with physics, or more generally with the
mathematics of differential equations. Otherwise, you would know that you
can take dynamical equations of physics, which are differential
equations, and solve them from the _initial conditions_ (the state
of the system at the start of the time interval modeled) or from
the _final conditions_ (the state of the system at the end of the
time interval modeled). Both methods are fully equivalent.
One convention can be viewed as causal (the initial state is
causing the particular trajectory), the other as teleological (the
final state is targeted by the particular trajectory).
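The forward/backward equivalence invoked here is easy to check numerically for a simple equation. The sketch below is illustrative only: it integrates dx/dt = -x forward from the initial condition x(0) = 1, then integrates the same equation backward in time from the computed final state and recovers the initial state.

```python
# Integrate an ODE from its initial condition, then from its final
# condition (by running time backward), and confirm the two descriptions
# pick out the same trajectory.

def rk4(f, x, t0, t1, n):
    """Classic 4th-order Runge-Kutta from t0 to t1 (t1 < t0 runs backward)."""
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

f = lambda t, x: -x                          # dx/dt = -x
x_final = rk4(f, 1.0, 0.0, 1.0, 100)         # forward from x(0) = 1
x_initial = rk4(f, x_final, 1.0, 0.0, 100)   # backward from the final state
# x_initial comes back to 1.0 (up to integrator error), so specifying the
# final state determines the same trajectory as specifying the initial one.
```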

Since you have recycled the arguments already discussed
and dealt with at length several days earlier, including
this one, there is much more on the vapidity of the
anti-teleology argument in this post and in its followup:

http://groups.google.com/group/talk.origins/msg/f969d45c50183c02
http://groups.google.com/group/talk.origins/msg/f30ee4fdbfff5b01


But, as the little bit above already indicates, the choice
between the teleological and causal explanation is a
matter of a _convention_, like a choice on whether
we should write dates as month-day or day-month.

Issuing vapid anti-teleological dictums with such a degree
of emotion and conviction, as you and few others here do,
sounds thus more like a religious fervor than a rational
scientific argument.

You might as well start issuing dictums as to whether the
correct way to cross oneself in church should make the last
stroke from left to right or from right to left (the
left-crossers and right-crossers in the Balkans were
still slaughtering each other about it through much
of the 1990s).

And you're telling me that neo-Darwinians don't act like
the ancient priesthoods. If it walks like a duck and talks
like a duck, then what might it be?


ErikW

Jul 12, 2006, 5:12:51 AM

I have a real problem understanding your stubbornness on this. Your idea
of Lamarckism is dead for a reason, you know. I posted some short info
somewhere else, but I'll repeat it here and add a few things:

1) In other populations of this light-furred beach mouse this mutation
wasn't involved in causing the lighter fur. Instead, a different
mutation (or many) causes it. The molecular details of that phenotype
have not been investigated.

2) There are likely many more mutations like this in populations of the
more widespread darker furred "normal" mouse variant. The species has
been studied since the beginning of this century just because this
mouse shows a lot of variation in fur colouration and patterns. It is
possible that this mutation didn't even arise in populations from the
beach habitat but instead arose in "normal" populations on the mainland
and later migrated onto the beach (even though I suspect that the
authors of the article don't think so because of geographic
considerations).

3) This point mutation accounts for one third of the variation in fur
colour and pattern. It's not [one mutation] = [finished beach mouse].
This alone should have told you that even in the example that you
discuss there are more than one mutation (and loci) involved. And that
fits rather perfectly with RM + NS.

Even though I don't know this explicitly, there are likely other
mutations in the vicinity of the mutation in question that are
entirely neutral and without effect. It would appear that that would be
direct disproof of your teleological mutations idea and instead show
that mutations are random, wouldn't you agree? Or would you instead
suggest that only some mutations are under divine control?

ErikW

snip

Windy

Jul 12, 2006, 10:57:03 AM

nightlight wrote:
> Windy wrote:
>
> > Would you use this formulation in physics? Is there "intelligence"
> > picking which atom will experience fission next in a lump of
> > radioactive substance?
>
> [snip]

> Now, I don't know what it is like to be an atom in pursuit of
> the 'minimum cost', but I do know exactly what it is like
> to be a particular collection of atoms in such pursuit.
> I thus see no reason why there shouldn't be something that
> it is like to be an atom (see on philosophical perspective of
> panpsychism: http://plato.stanford.edu/entries/panpsychism/ ).

So intelligence might be evident in the fate of any atom, not just the
tiny portion that are involved in mutation events? Then why pick only
on the latter? Wouldn't it be easier to discover intelligence at work
by seeing if biases can exist in simpler systems of atoms?

> > And this has been tested in several cases and no bias in favour of
> > beneficial mutations has been detected. Why continue? Do you have some
> > evidence that suggests otherwise?
>
> You can't tell whether there is a 'bias toward favorable' unless
> you know what the outcome (distribution) would be without the
> 'bias toward favorable'. The 'bias toward favorable' is not
> the same thing as 'favorable', which is what you and others
> here seem to be assuming. Consider a gambler who is cheating,
> which is form of a 'bias toward favorable'. Does that imply
> that he is also making money? Not at all.

A better analogy would be gamblers (organisms) who are dealt the cards
(mutations) randomly and who then keep the favourable ones. You want to
know if any organism is cheating or pulling favourable cards out of its
sleeve. But you won't accept comparing rates at which organisms receive
the cards - is anyone seemingly getting more aces than can be expected
by their occurrence in the deck? Instead, you want to go about it the
hard way and compare the likelihoods of the players' hands to all
possible hands, because you don't like what the mutation rate studies
have told us long ago.
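The rate comparison described above can be sketched as a toy simulation (all numbers hypothetical, just to make the logic concrete): deal hands fairly from a standard deck and check that the count of aces received matches what the deck's composition predicts — a cheat "pulling aces from a sleeve" would show up as a count far above that expectation.

```python
# Fair dealing: count aces across many independent 5-card hands and
# compare with the expectation implied by the deck's composition
# (4 aces among 52 cards, so 5 * 4/52 aces per hand on average).
import random

random.seed(1)
DECK = ["ace"] * 4 + ["other"] * 48

def deal_aces(n_hands, hand_size=5):
    """Total aces dealt across n_hands independent hands."""
    return sum(random.sample(DECK, hand_size).count("ace")
               for _ in range(n_hands))

n_hands = 20000
observed = deal_aces(n_hands)
expected = n_hands * 5 * 4 / 52
# Under fair dealing, observed stays within a few standard deviations
# (roughly 80 here) of expected; a large excess would indicate cheating.
```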

-- w.

Windy

Jul 12, 2006, 11:13:17 AM

nightlight wrote:
> As to the DNA mutations, even if one were to assume that a mutation
> is _always_ an accidental event, an "error", the repair (or the
> absence of repair) may not be. I think that at least some
> mutations, especially the most useful ones, are not errors
> at all, but results of a sophisticated genetic engineering
> technology of the biochemical networks and their underlying
> hierarchy of sub-networks, far beyond anything we can do or
> understand with our present technologies or theories.

In which stages of the mutation do you propose the intelligence works?
Let's consider a mutation caused by UV radiation where a thymine dimer
is formed.

Does the intelligence work by
-causing the emission of the UV photon in some distant source so that
it may strike the appropriate base in DNA
-choosing whether the UV photon will be absorbed by the thymine it
strikes
-causing the appropriate repair enzyme to arrive or not to arrive at
the scene

Or feel free to provide an example of another type of mutation.

-- w.

nightlight

Jul 12, 2006, 11:24:14 AM
ErikW wrote:
>>"hersheyhv" already brought up) in detail is here:
>>
>> http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2
>>
>>

>

> I have a real problem understanding your stubbornness on this. Your idea
> of Lamarckism is dead for a reason, you know. I posted some short info
> somewhere else, but I'll repeat it here and add a few things:

Perhaps a naive Lamarckism is dead. The idea of intra-cellular
biochemical reaction networks, which are mathematically of the same
type of adaptable network as human or animal brains (general,
distributed computers, self-programmable), implementing
anticipatory algorithms in the domain (cellular biochemistry) in
which they are unrivaled specialists, is not far fetched at all. It
is in fact the most plausible conjecture as to what these
self-programmable distributed computers might be computing. Lamarck
had simply picked the wrong network (animal brain) to which he
attributed such anticipatory activity.

While it is true that networks consisting of enough brains for their
nodes are capable of implementing genetic engineering tasks, as biotech
industry illustrates, the ultimate specialist on that subject is the
cellular biochemical network. It daily achieves feats that all of the
world's molecular biology, biochemistry, biotech & pharmaceutical
industry resources, taken all together to work on this single task,
could not even get close to matching -- produce a single live cell
from scratch (inorganic materials). The tiny biochemical networks
do it billions of times every day and have known how to do it for
over a billion years.


> 2) There are likely many more mutations like this in populations of the

> more widespread darker furred "normal" mouse variant.....


>
> 3) This point mutation accounts for one third of the variation in fur
> colour and pattern. It's not [one mutation] = [finished beach mouse].
> This alone should have told you that even in the example that you
> discuss there are more than one mutation (and loci) involved. And that

> fits rather perfectly with RM + NS. ...


>
> Even though I don't know this explicitly, there are likely other
> mutations in the vicinity of the mutation in question that are
> entirely neutral and without effect. It would appear that that would be
> direct disproof of your teleological mutations idea and instead show
> that mutations are random, wouldn't you agree? Or would you instead
> suggest that only some mutations are under divine control?
>

This is basically recycling a variation on the theme of the empirical
mutation rate argument that others have made here. Instead of arguing
that the site has high enough mutation rate, you are saying that there
are multiple sites which can achieve similar effect on fur color. If
there are, say 50 such alternative ways, that is equivalent (regarding
the odds of finding a favorable color adaptation) of saying that the
mutation rate on the original single site is 50 times greater.

Since I was not arguing that the empirical rate of the mutations at the
original site was not (statistically) capable of producing the observed
adaptation in the given time and population size, your bringing in an
equivalent of claim that the rate was even faster (effectively, via
alternative sites), remains as disconnected from my argument as the
previous variants.
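The equivalence asserted in the first paragraph is just the small-rate limit of the binomial: with n independent sites each mutating at per-replication rate mu, the chance of at least one such mutation is 1 - (1 - mu)^n, which is approximately n*mu when mu is tiny. A quick check with an illustrative (hypothetical, not from the paper) rate:

```python
# 50 alternative sites at per-site rate mu are, to first order, equivalent
# to a single site at rate 50*mu, as far as the odds of getting at least
# one color-affecting mutation are concerned.
mu = 1e-8        # hypothetical per-site, per-replication mutation rate
n_sites = 50

p_any = 1 - (1 - mu) ** n_sites   # P(at least one of the 50 sites mutates)
p_scaled = n_sites * mu           # single site with a 50-fold rate

rel_err = abs(p_any - p_scaled) / p_scaled
# rel_err is of order (n_sites - 1) * mu / 2, i.e. utterly negligible here
```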

This is not an issue of whether the correct microscopic physical and
chemical conditions at the location and the time of the mutation may
have been there or whether they are causally responsible for the
observed rate. We all agree that the right physical-chemical conditions
were there and that they can cause the mutation at the rates observed.

The point you and others here are missing is that this fact alone is
insufficient to tell you whether these physical-chemical conditions
at the location & time of the mutation were an accidental event or a
deliberate step executed as result of a computation by the biochemical
network (of which they are a part anyway) for the ultimate purpose of
improving the fitness of the organism. The meaning of 'deliberate step'
in this context and the further explanation and illustrations of this
point were given in an earlier post:

http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2

Although it is already answered in that post, the question you may have
if reading that post superficially (as some others did) is why we
would want to check for this possibility anyway.

a) The computational resource, the intelligent agency
which just happens to be the unrivaled specialist on precisely
that kind of genetic engineering tasks, is already there -- the
physical-chemical conditions at the location and the time of the
mutation are an integral part of that same biochemical network.
It is thus perfectly natural to ask whether these physical-chemical
conditions were result of the computation by this network or
merely an accidental event.

b) It has never been shown that a mathematical model using the
'accidental cause' conjecture (the implication of the RM conjecture)
would perform as well as the actual biochemical networks do, in
producing the favorable adaptations (via the mutation caused
by the 'accidental' conditions). Would the mathematical model
based on 'accidental cause' underperform, outperform or perform
equally well as the actual network, or similarly as the
mathematical model which includes an anticipatory component
aiming to produce such mutation? That is an open and
legitimate question, the dogmatic declarations from the
neo-Darwinian priesthood notwithstanding.

In (a) we have a computational process that physically
includes the immediate cause of the mutation. In (b)
we have an unanswered question whether an assumption
of the 'accidental' cause of mutation (the RM conjecture)
used in a mathematical model, would result in a model
which performs as well as the actual biochemical networks.

This situation is thus analogous to a teacher looking at
the students' completed test sheets and then finding a
cheat-sheet under the chair of one of the students
(an analogue of the existence of the ultimate expert on genetic
engineering at the location & time of the cause
of the mutation (a)), then finding that this student had
solved the problems in the test (analogue of a
favorable adaptation occurring in the given population
and given time). What is more natural to assume:

== neo-Darwinist == Nothing to see here, move on. It is
purely coincidental that the cheat-sheet was there and
its proximity has nothing to do with the fact that
this student has also solved the test problems. Why
should it be related to each other? The test may have
been easy. There is thus no need to examine this
matter any further. And if any other student doubts
that it was accidental and complains, I will send
him to the principal's office.

== ID scientist == Well, it seems to me a bit too
coincidental that he solved the test and had a
cheat-sheet under his chair. While it is conceivable
that the test may have been easy, we don't know that.
Hence, I would still like to check whether that was
so by, say, examining the distribution of results
for other students with similar general performance
in this subject as this student. If these other
students did roughly as well as this student, then
the test was probably easy and his correct solutions
may be unrelated to the cheat sheet. But we won't
know that unless we check.

== neo-Darwinist == Just move on, nothing to see
here. No need to compare anything. Are you paranoid or
sumptin? It was coincidental, I'm tellin'ya. And if you
keep harpin'bout it, I will call the cops to drag you
to the nuthouse. Can you imagine, doubting that a
cheat-sheet under the chair may have anything to do
with the solved test problems. How could anyone ever
think of such a thing?

This illustrates the nature of the debate in this
thread.

hersheyhv

Jul 12, 2006, 11:25:51 AM

nightlight wrote:
> hersheyhv wrote:
>
> > That is because the rate of mutation at
> > this site is a result of its chemistry. The chemistry of mutational
> > change can be affected by local conditions such as nearby surrounding
> > sequence, but not by distant features such as total amount of DNA.
> > But the spontaneous rate of mutation at this site *is* the correct
> > value to use.
>
> This is the essence of your problem in understanding the
> alternative position. It is true that the mutation rate
> is determined by the local physical-chemical conditions.
> But that does not exclude the possibility that these
> conditions and their changes in time form a part of an
> anticipatory/intelligent process pursuing some objectives.

It seems to me that there are two possibilities in generating what you
want. One, which you seem to call "anticipatory mutation", claims that,
because of some sort of overall program or pattern (of progress?),
mutations that will be beneficial increase in frequency in anticipation
of the recognition, by some outside intelligent agent, of the mutation
being beneficial. This is actually a fairly easy proposition to test.


For technical reasons (especially that the organisms are haploid) these
experiments are easiest to conduct with either bacteria or haploid
yeast. All you need to do is perform a mutagenesis experiment (either
with or without mutagens) and then use a random number to determine
whether the result will be plated on selective or non-selective media.
You can then go back and ask what the frequency of mutation is on the
selective plates and how that compares to the non-selective one. A
good example would be mutation to revert to the lac+ phenotype. The
non-selective plate would be one where the carbon source is glucose,
but you have a little X-gal and IPTG present. Both lac+ and lac- cells
would grow on this plate, but the lac+ colonies would be blue and the
lac- colonies white. The selective plate would be one where the only
carbon source is lactose. On these plates only the lac+ revertants can
grow.

If there were an outside mutation fairy that could anticipate which
plate (selective or non-selective) the cells would get plated on, one
would expect a significant interaction between the plate used and the
frequency of mutation to lac+, as indicated by either the color change
or the number of surviving colonies. You don't see it.

In fact, you could initially plate the sample on a plate without
the X-gal/IPTG but with glucose as the carbon source (on such a plate
the lac+ and lac- colonies are indistinguishable) and then replicate it
to the two plates; the anticipatory model would expect more colonies
growing on the lactose plate than blue colonies growing on the
non-selective plate. In fact, of course, not only are the numbers the
same but the colonies that are blue are the very colonies that are able
to grow on the selective plate.

In short, the mutation occurs with no evidence of any anticipation of
need.
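The logic of that experimental design can be mimicked in a toy simulation (all numbers hypothetical, not real lac reversion rates): each cell's mutation status is fixed *before* the plate is chosen at random, so the mutant frequency cannot depend on which plate the cells land on.

```python
# Mutation-first simulation: assign lac+ reversions at a fixed rate,
# then randomly choose a plate for each cell, and compare the mutant
# frequency seen on the two plates. Anticipation of the plate choice
# would show up as a higher frequency on the selective plate.
import random

random.seed(42)
MUT_RATE = 0.001   # hypothetical chance a cell carries a reversion

def run_experiment(n_cells):
    counts = {"selective": [0, 0], "non-selective": [0, 0]}  # [mutants, total]
    for _ in range(n_cells):
        is_mutant = random.random() < MUT_RATE                 # mutation first...
        plate = random.choice(["selective", "non-selective"])  # ...plate after
        counts[plate][0] += is_mutant
        counts[plate][1] += 1
    return counts

counts = run_experiment(200000)
freq_sel = counts["selective"][0] / counts["selective"][1]
freq_non = counts["non-selective"][0] / counts["non-selective"][1]
# With mutation independent of plating, the two frequencies agree to
# within sampling noise, just as the real plating experiments show.
```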

Similar sorts of experiments have been done with many, many different
types of mutations in many different genes with the same conclusion.
Mutation occurs first without any evidence of anticipatory need. Then
the selective environment (which is decidedly not an intelligent agent)
chooses which variants are successful.

The other, more common scientific alternative to the two step process
described by Darwin (first variant production which occurs randomly wrt
need followed by environmental selection which preferentially permits
the survival of variants well-adapted to local conditions) is *induced*
or *adaptive* (rather than anticipatory) mutation. This idea, which is
Lamarckian rather than involving a mutation fairy of global
intelligence, says that when a stressful environment occurs it induces
*specifically* and *selectively* those mutations that allow the
organism to survive the stressful environment. This is the idea of
mutation being *selectively* directed by environmental stress to
produce just the right sort of variant.

Obviously, this too is countered by the evidence of the types of
experiments above that show that, at least for cases where the
selection is very strong, mutation occurs first and without regard to
any need for the mutation. That all the strong selection does is
choose those mutants that, by chance, had occurred when there was no
need for the mutation.

The Cairns finding, however, opened up the possibility that, for weaker
selection, the idea of induced mutation might work. However, work
after the Cairns finding demonstrated that all that was happening was a
stress-induced increase in mutagenesis of *all* sorts, not specific
mutations that respond to specific needs.

In short, there is no evidence that there is any process by which
mutation occurs in anticipation of need. There is also no evidence
that there is any process by which specific mutation is induced by a
need (although, unlike anticipatory mutation, this remains a
possibility for some unusual genes involved in a "domesticated" form of
mutation).

But there is absolutely NO, NADA, ZILCH, evidence that any gene's
mutation rate is anticipatory. Nor is there any evidence that it needs
to be.

[snip much nonsense]

> For the sake of argument, I explained the basic idea of
> this criterium on an extremely simplified model which
> assumed equiprobable final DNA states.

The model is flawed.

> Since I wasn't
> trying to _extract any numbers_ from it to use in the
> argument, much of the objections of you and others to
> drawbacks of that model are missing the point.

Yet you keep ignoring the numbers that are empirically found to be
relevant in favor of some much larger number you keep theoretically
claiming must be relevant.

> I was
> merely to explain on a concrete example the relation
> between the models and empirical observations and how
> would the criterium discriminating between RM and ID
> be formulated in such setting. Whether the toy model
> used for this purpose was accurate enough to yield
> some useful numbers is completely irrelevant.

Why get numbers that are only relevant to your flawed model?

> Any actual model aiming to get numbers to be compared
> with empirical performance of biological networks,
> would need to be far more elaborate. The point of my
> model was to show that a perfectly legitimate scientific
> criteria can be formulated to differentiate between
> the RM and ID conjectures.

Luria and Delbruck already beat you to it, more than 60 years ago. And
they showed that mutation is random wrt need (at least for strong
selection). The subsequent followup of the Cairns experiments also
showed that mutation is random wrt need even for weak selection,
although it did discover a form of stress-induced increase in
mutagenesis. No one has ever found any evidence for *anticipatory*
mutation and all the evidence points to there being no such phenomenon
at all.

> The toy model I used was
> in no way meant to be the actual implementation of
> such criteria ready to test against observations,
> but was meant only to explain why such mathematical
> criteria _do exist_ and also how would they fit
> into the models. Hence ID is a perfectly legitimate,
> falsifiable conjecture.

Well, the idea that mutation is anticipatory is certainly falsifiable.
The idea itself implies the ability of some agent to read minds and I
can think of no mechanism for it. Anticipatory mutation has also been
falsified in every test of it.

And so far (I am keeping an open mind for some *rare* phenomena) the
idea of induced or adaptive mutation is also a failure in tests of that
idea. But all that means is that that sneaky ID mutation fairy is
hiding its grand design behind a pattern that is indistinguishable from
random mutation plus subsequent environmental selection without any
indication or fingerprint to indicate that the process was *really*
intelligently designed ;-).

> I hope, you are getting now why citing the mutation
> rates and arguing that they can lead to the observed
> mutation in the time and population size given, is
> irrelevant for the argument I am making. No one
> is arguing that mutation did not happen or that
> it could not have happened. The argument is about
> the _nature of the processes_ which prepared the
> physical-chemical conditions at the mutation site:
> are these processes random/dumb or are they part
> of some anticipatory computational process (e.g.
> by the biochemical reaction network of the cell)
> which is, via its internal modeling and look-ahead,
> short-circuiting the vast numbers of wrong tries
> before committing its choices to the expensive
> real world implementation? The rate alone tells
> you absolutely nothing about this question.

I was not using rate alone. I was *specifically* asking if there was
any interaction between the variables "rate of mutation" and "need for
mutation". There isn't. The two variables are independent variables
as far as anyone can tell at any of the levels of significance one can
test for. No need to look for a causal explanation here.
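
Operationally, the claim that "rate of mutation" and "need for mutation" do not interact is an ordinary test of independence on a contingency table. A minimal sketch, with invented illustrative counts rather than data from any real experiment:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Invented illustrative counts: (mutant, non-mutant) colonies scored in
# cultures grown with and without the selective agent present.
with_selection = (12, 99988)
without_selection = (11, 99989)

stat = chi2_2x2(*with_selection, *without_selection)
# With 1 degree of freedom the 5% critical value is about 3.84;
# a statistic far below that is consistent with independence.
print("chi-square = %.3f" % stat)
```

A statistic near zero, as here, means the mutant frequency does not differ detectably between the "need" and "no need" conditions, which is the failure-to-interact being described.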

> Since it is much easier to model a dumb/random
> process than an anticipatory process, the simplest
> approach to try answering the question would be
> to mathematically model the dumb process and check
> its predicted performance against the observed
> performance of the actual biochemical networks.

And, lo and behold, there is no problem in doing so, so long as one
uses real empirically determined rates of mutation and reasonable
population sizes and selective pressures. There is no NEED to invoke
an intelligent agency.

> >> Statistical difference is detectable.
> >
> > You just said there was no detectable difference.
> > Can't you keep your story straight for an entire
> > paragraph?
>
> This is just one example (out of many) of the major
> confusion you have when differentiating between the
> models and empirical reality.

Yes. You are talking silly models and I am talking empirical reality.
I am not confused about the difference between the two. You are.

> In the empirical reality you have just one mutation
> rate, the one being observed. There is no other
> empirical rate you can compare it to. You can't
> create in the real world two types of organisms,
> one using neo-Darwinian scheme and another using
> ID scheme and then compare one rate to another.
> Hence, there is no other _empirical_ rate to
> compare the _known empirical_ rate with.

Again, I am not using the rate alone to determine the failure of the
two variables (mutation and need for mutation) to interact. I am using
experiments that specifically address the question of whether or not
there is an interaction between these two variables.

[snip more nonsense]

> Hence there is nothing supernatural or far fetched
> about considering whether anticipatory processes
> have a role in the cellular biology, evolution, origin
> of life.

Anticipatory mutation has been considered. The evidence says it does
not exist. Neither, at least for the vast majority of genes, does
induced or adaptive mutation. If you have any evidence that says that
anticipatory mutation or induced mutation exist and are common features
of life, you should quickly publish it and stand in line for your
Nobel. You would, however, have to explain why all the previous tests
of these phenomena, even those that initially looked promising, have
repeatedly shown that mutation is random wrt need. If all you can do
is show that induced mutation occurs in some rare and unusual
circumstances it would hardly be paradigm shattering.

> Their principal strength would be that they
> could short-circuit great deal of expensive trial and
> error via offspring (by doing the trial & error "in
> the head"). Hence, it is reasonable to ask whether
> mathematical models of the biochemical processes which
> do not include such anticipatory elements can replicate
> the empirically observed performance of the real
> biological systems.

Been done. They can. But not if one posits a silly model like yours
that only pretends to be an empirically realistic model. What works is
a model that actually looks like what evolution proposes.

> These are perfectly scientific and open questions. It
> is the dogmatic refusal to acknowledge this fact that
> is unscientific (a religious dogma).

Refusal to acknowledge the evidence of multiple experiments in favor of
an unrealistic numerological model is religious dogma. Come back when
you have some evidence either of anticipatory mutation or induced
mutation.

Windy

Jul 12, 2006, 1:22:57 PM

nightlight wrote:
> This situation is thus analogous to a teacher looking at
> the completed students test sheets and then finding a
> cheat-sheet under the chair of one of the students
> (an analogue of existence of ultimate expert on genetic
> engineering at the location & the time of the cause
> of mutation (a)) then finding that this student had

> solved the problems in the test (analogue of a
> favorable adaptation occurring in the given population
> and given time). What is more natural to assume:
>
> == neo-Darwinist == Nothing to see here, move on. It is
> purely coincidental that the cheat-sheet was there and
> its proximity has nothing to do with the fact that
> this student has also solved the test problems. Why
> should it be related to each other? The test may have
> been easy. There is thus no need to examine this
> matter any further. And if any other student doubts
> that it was accidental and complains, I will send
> him to the principal's office.
>
> == ID scientist == Well, it seems to me a bit too
> coincidental that he solved the test and had a
> cheat-sheet under his chair. While it is conceivable
> that the test may have been easy, we don't know that.
> Hence, I would still like to check whether that was
> so by, say, examining the distribution of results
> for other students with similar general performance
> in this subject as this student. If these other
> students did roughly as well as this student, then
> the test was probably easy and his correct solutions
> may be unrelated to the cheat sheet. But we won't
> know that unless we check.

What ARE these "other students" that we can compare the cheating
student with? Are they cells that lack a "cellular biochemical
network"? As soon as you point us to some of these cells, we can make
the comparison.

-- w.

hersheyhv

Jul 12, 2006, 4:28:39 PM

I think that what nightlight calls a "cheat sheet" is what the rest of
us would call a pre-existing structure. He thinks it is cheating when
evolution works, as it must, by modifying a pre-existing structure
rather than poofing a new structure into existence. Why, it is a
miracle, a MIRACLE, that that organism had the pre-existing structure
to begin with. It must mean that the whole pathway was pre-designed
from the beginning.

nightlight

Jul 12, 2006, 7:21:06 PM
Windy wrote:

> So intelligence might be evident in the fate of any atom, not just the
> tiny portion that are involved in mutation events? Then why pick only
> on the latter? Wouldn't it be easier to discover intelligence at work
> by seeing if biases can exist in simpler systems of atoms?

Ideas of that type are being explored. You can check Wolfram's NKS
project (book, papers, forum):

http://www.wolframscience.com/thebook.html

Additional links to a few other authors were given in the post:

http://groups.google.com/group/talk.origins/msg/f30ee4fdbfff5b01

The idea of such approaches is to consider a large collection
of simple/elemental "agents", each in 'pursuit of its own
happiness' (optimizing some utility function local
to the agent), interacting/communicating with each other
either via simple nearest neighbors rules (cellular automata)
or via more general connections (adaptable networks).

One then looks at the large-scale statistical properties
of such network/automata universes. The objective is to
find out whether the equations for these coarse grained,
macroscopic properties for some of these systems will turn
out to be the physical laws that we know. More ambitious
projects aim to obtain our space-time structure from more
primitive forms (the network models).
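
A minimal toy of the kind of model being described, where dumb local agents produce a lawful macroscopic observable. This sketch (a one-dimensional majority-rule automaton) is only an illustration of the coarse-graining idea, not any of the specific models cited above:

```python
import random

def step(cells):
    """One synchronous update of a 1-D majority-rule automaton with a
    periodic boundary: each 'agent' adopts the majority state of itself
    and its two neighbours."""
    n = len(cells)
    return [1 if cells[i - 1] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def wall_count(cells):
    """Macroscopic observable: number of domain walls, i.e. boundaries
    between runs of 0s and 1s (periodic boundary included)."""
    return sum(cells[i] != cells[i - 1] for i in range(len(cells)))

random.seed(2)
cells = [random.randint(0, 1) for _ in range(400)]
before = wall_count(cells)
for _ in range(50):
    cells = step(cells)
after = wall_count(cells)
# Purely local rules drive the global wall count down (coarsening), a
# coarse-grained regularity that no single agent 'knows about'.
print("domain walls: %d -> %d" % (before, after))
```

The point of the exercise is that the wall count obeys a simple macroscopic regularity even though each agent only optimizes its own local rule.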

As noted in the earlier post, the basic equations for
Maxwell, Dirac and Schrodinger's fields have been obtained
this way. The hope is that further refinements of such
models will yield not just the equations as we already
know them but also offer an ab initio derivation of the
fundamental physical constants occurring in those equations
purely as a result of this distributed computation (in
the existing results they are put in by hand as the
parameters of the automata).

Hence, one can view our physical laws as some of the side-effects
of the computation by a vast underlying distributed computer.
While the elemental building blocks of this computer
are not very intelligent (each optimizes some simple utility
function), the collective, like an ant colony or a human
brain, is much smarter than any of its components.

Of course, this still leaves an open question of 'mind stuff'
i.e. what is it like to be such and such arrangement of atoms
and fields? There is nothing in the laws of natural science
as known presently that gives any hint that there should be
something like that or that it is needed for anything (except
perhaps as a solution of the collapse/projection problem
in quantum theory, e.g. von Neumann's solution calls for the
observer's consciousness or mind to cause the superposed
quantum states to collapse to one definite state).

I think that the developments sketched above are
suggestive of panpsychism:

http://plato.stanford.edu/entries/panpsychism/

i.e. of some most elemental mind-stuff, which has only
'aware' or 'not aware' elemental feels, described formally
as the two states 1 and 0 of the most elemental agents.
The rules for state change of the agents in a network
can turn the agent to 'aware' or 'not aware' states.

If we consider each such 'aware' as being a unique
feel by virtue of belonging to a unique elemental
agent A[i], for i=1..N, where N is the total number
of agents, we have a set of N unique 'qualia'/feels
A[i]=aware for i=1..N.

Then, what you perceive as 'redness' (which is not the
same thing as frequency of red photon or any subsequent
electrical or chemical activity it generates in your
neurons) is simply a unique agent, say A[5], being
turned on into "aware" state, i.e. your "redness"
means A[5]=aware. It just happened, when you were put
together, that this particular agent A[5] is wired at
the place which gets activated in your brain when
the 'red' photon strikes your retina and the chain
of electric excitations reaches particular neurons
(your red detectors) that contain this agent (as
an underlying building block of these neurons).

Since you have always had "redness" felt through
this agent A[5]=aware, that is what "redness" is
like for you and you wouldn't know any different.
If my "redness" is the "aware" feel of some other
agent, say A[1003], which just happened to be wired
in my 'red' detectors when I was put together, that
is all I would have ever felt as my "redness" and I
wouldn't have known any different either. Of course,
it is not the same feel as your "redness" but we
can still talk about it as if it were the same
"redness", reflecting the fact that in either
case it is the red-frequency photon that led to
the corresponding "redness" feels and each of us
has earlier learned to associate the sounds "red"
or letters "red" with our own "redness" feels.

Hence this synchronization scheme is analogous to Chinese and US
diplomats discussing the "same" subject (analogous to the
same-frequency photons, such as red, striking your and my retinas),
each understanding the subject in his own language (you have A[5]=aware
as your "redness" feel and I have A[1003]=aware as
my "redness" feel), and looking up the words of the
other in a dictionary (analogous to recalling the
learned association we have between sounds for "red"
and the "redness" feel).

In addition to sensory feels, there are feels
for more abstract states of the sub-nets of
your brain, e.g. when you compare whether two
things are the "same" or "different" there would
be distinct neurons firing upon completion of the
evaluation, which would turn some agents A[1]=aware
as your feel "same" and A[2]=aware as your feel
"different".

The combination of feels A and B occurs when a
third agent C gets its "aware" state turned on by
virtue of being at the place where your neurons
are wired to fire on combined firings that turned
on A and B feels. For example, your feel for
"circle" may be the elemental feel of some other
agent A[6]=aware (who just happened to be placed
at those particular neurons that fire when your
neurons identify a circle). Since your neurons
are also wired to fire on combinations of firings
of color and shape neurons, some of your neurons
would fire on the combined 'red' and 'circle'
firings, and they would contain some agent A[7]
that is wired to turn to "aware" when that
combination detector neurons fire. Hence
A[7]=aware is your feel "red circle". Similarly,
when you are watching a 3D scene, say a keyboard,
and close one or the other eye, you see two
different pictures, each with its own distinct
top level feel "keyboard-L" and "keyboard-R", each
being the "aware" state of a different agent. As
soon as you uncover both eyes, a third agent gets
turned to aware which is your "keyboard-3D" feel.

Since these agents are the most elemental
objects (if they are Planckian scale objects,
there could be [10^33]^3 ~ 10^100 of them per
cubic meter), there is plenty of them to realize
any "feel" in any combination that you may
experience in your entire life.

The basic mapping such as agent A[5]="aware"
into your "redness" extends to the feels of
any hierarchy of networks. For example, we
can view adaptable social networks (whose cells
or nodes are individual humans) as live,
intelligent organisms with environment and
purposes of their own. These networks could
then have "feels" just like our networks do,
e.g. the feel 'America is angry' may mean
that Bush and his cabinet members have reached
decision that 'we are angry', which maps to some
set of agents A[20], A[21],... in their
respective brains, corresponding to their
subjective feels of 'we are angry' being turned
to "aware". The social network is wired to quickly
amplify (through government bureaucracy and news media)
such executive office decisions, and as in the
combination feels for 'red circle', there is
some agent A[33] in Bush's brain, which turns
to "aware" state when such consensus and amplification
decision takes place, and the USA social organism
then has a feel "angry", which is the feel
A[33]=aware. Of course, once Bush is out
of the office, the agent A[33]=aware will not
get amplified any more, and may not even
get activated (except maybe while he is
recollecting the old days) since it was wired
to turn on when the cabinet's "we are angry"
decision was reached, which wouldn't be
happening once he is out of the office. Hence,
another agent A[66]=aware, residing in the
brain of some other president, will be the feel
"America is angry" of the US social organism.


> A better analogy would be gamblers (organisms) who are dealt the cards
> (mutations) randomly and who then keep the favourable ones. You want to
> know if any organism is cheating or pulling favourable cards out of its
> sleeve. But you won't accept comparing rates at which organisms receive
> the cards - is anyone seemingly getting more aces than can be expected
> by their occurrence in the deck? Instead, you want to go about it the
> hard way and compare the likelihoods of the players' hands to all

> possible hands,...

How would you figure out whether a player is getting more aces than
he was supposed to get without cheating unless you know how many
he was supposed to get without cheating?

What you're saying is that we compare it to other players. But
what if all of them are cheating the house (e.g. the house is
the environment and the players are bacteria, so they might
well be 'beating' the environmental challenges using the
same strategies)? So who do you compare to? If they are all
cheating in a similar way, they could have the same empirical
frequencies of aces.

Hence you need to examine the game rules and estimate how
many aces a player would get by these rules if he were not
cheating. Only then do you have the second number to compare his
empirical frequency of aces to. Comparing one empirical
frequency of aces to another and finding them the same
allows two conclusions: either neither is cheating or both are
cheating. That's why I am saying you need to run a model
which is programmed not to cheat and estimate from its
results what a fair rate of aces ought to be. All of you
here are arguing that you can do it by comparing the
empirical rates of aces (for one or more players, or at
different times). My point is that this doesn't work.


> because you don't like what the mutation rate studies
> have told us long ago.

Although I have written a fairly detailed response to
this argument in this post:

http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2

I will give it additional support and explanations here.

Any empirically measured rates are a consequence
of the particular immediate "physical-chemical conditions"
(call them PCMs) at the site and time of the mutation.
Whatever value they may have, that value is
consistent with either of the possibilities:

a) PCMs are accidental to the computations of the cellular
biochemical network (which is an adaptable network, a
distributed self-programmable computer)

b) PCMs are not (always) accidental, but can be
"deliberate" actions computed by the cellular biochemical
network (in the same sense that a chess program
"deliberately" computes and selects its next move).

Note that (b) merely implies that

--if--

we were to program two models of cells:

* program-a is constrained to run by rule (a) i.e. the
program-a is not allowed to use any look-ahead modules
to compute which kind of PCM would be best to generate
but has to generate PCMs accidentally (e.g. via a
separate random number generator)

* program-b which is allowed to do anything program-a
can do, plus it can have a look-ahead module which
models and evaluates consequences of different PCMs,
then chooses the one with the best evaluations (like a
chess playing program or a human chess player evaluating
& picking his next move), with the constraints that
this modeling task cannot exceed the estimated
computational capabilities of the cellular biochemical
network and that it has to respect physical laws
(including any constraints and limitations on
interactions with environment)


--then--

program-b will outperform (statistically) program-a in
finding 'favorable' mutations.
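
The claimed asymmetry between the two programs can be illustrated on a deliberately tiny toy, a bit-string "genome" with a made-up fitness function. This is only a sketch of the blind-variation vs look-ahead distinction, not a model of any real cell:

```python
import random

TARGET = [1] * 12                      # toy 'favourable genotype'

def fitness(genome):
    """Count of positions matching the toy target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip one randomly chosen bit (the change is picked blindly)."""
    g = genome[:]
    i = random.randrange(len(g))
    g[i] = 1 - g[i]
    return g

def run_a(steps=30):
    """program-a: blind variation, then selection keeps the better genome."""
    g = [0] * 12
    for _ in range(steps):
        cand = mutate(g)
        if fitness(cand) >= fitness(g):
            g = cand
    return fitness(g)

def run_b(steps=30):
    """program-b: look-ahead; evaluate five candidate changes internally
    and commit only the best-scoring one."""
    g = [0] * 12
    for _ in range(steps):
        cand = max((mutate(g) for _ in range(5)), key=fitness)
        if fitness(cand) >= fitness(g):
            g = cand
    return fitness(g)

random.seed(4)
mean_a = sum(run_a() for _ in range(200)) / 200
mean_b = sum(run_b() for _ in range(200)) / 200
print("mean fitness after 30 steps: program-a %.2f, program-b %.2f"
      % (mean_a, mean_b))
```

Because program-b evaluates several candidate changes internally before committing one, it reaches high fitness in fewer committed steps; that per-step statistical outperformance is all the stated condition asserts.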

Note also that even the most accurate theoretically
possible dynamical simulation of a cell, the one based on
a quantum field theory model and which uses perfectly identical
initial and boundary conditions for program-a and program-b,
will be _intrinsically non-deterministic_ (since the laws of
QFT are non-deterministic). In any practical simulation, there
would be many more non-quantum sources of indeterminism,
which in practice would dwarf the quantum indeterminism.

In any case, whatever the sources, there is an irreducible
level of indeterminism, free choice, in such simulation.
The program-a is not allowed to utilize such free choices
to make selections based on some look-ahead evaluation
of possible PCMs resulting from choosing different allowed
paths, while program-b is allowed to use such look-ahead.

An important point to note here is that the conditions above
do _not_ imply that the program-b would necessarily induce
the mice color mutation rate to increase if we change the
dominant color of the environment. The program-b still has
to obey all physical laws and the constraints/limitations
in the interactions of the cellular biochemical network
with the larger environment. While there may be in principle
some biochemical effects of the environmental color on the
cellular biochemical networks, only a genuine simulation or
experiments could answer whether such effects are specific
enough (or affect sufficiently the mice reproductive cells)
for the network to sense and respond to them. For example,
the "perception" that the cellular network may have of an
environmental color change may be fuzzy/non-specific and
only indicate to the cell a general stress condition. Or,
even if it were specific, the dynamical laws and the
resources or 'genetic engineering tools' available may preclude
it from sharply targeting the specific site (there are no
wires to hook point-to-point, from A to B, hence the
spatial precision is not that great), or targeting it
without risking much worse consequences due to the
entangled and overlapped functionalities of multiple
subnetworks. Only a detailed enough evaluation (which
the network might not have the resources to perform) of
consequences under the given dynamical constraints
could answer whether some such possibility is available
at all.

In any case, there is no implication in (b) that program-b
can pull arbitrary miracles out of its hat. The property
(b) only allows it to use a look-ahead module, within the
limits of the network's computational power, to improve its
odds. But there is nothing there that guarantees it can
find the best conceivable solution in the time available,
or that some ad hoc solution-look-alike that pops into
someone's head is anything actually available under those
conditions. Hence the arguments often heard here, of the
form "why didn't it change the rate in such and such a
way?", rest on a misunderstanding of what was meant by
'intelligent agency'. We are looking only at a 'lawful'
intelligent agency, an agency which plays by the rules.
But we do not assume that what science currently holds
to be the rules of the game must be correct, or be all
there is. Hence the only "miracles" allowed would be
those which would violate some presumed rules that are
in fact merely false conjectures.

Clearly, the program-a is much simpler to write and cheaper
to run, due to its lack of an anticipatory module. That's why
my suggestion was that a criterion which could falsify (a)
(or falsify/obsolete (b)) would be a comparison of
performance (in generating favorable mutations) of
program-a against the actual cellular networks.

If the program-a could replicate (very approximately)
the performance of actual biochemical networks, that
would make (b) subject to Ockham's razor, thus at
least unnecessary. To falsify (b), we would need to
also write program-b and compare its performance to
actual networks. Since it is plausible that program-b
would outperform program-a, we could plausibly
conclude that program-a's replication of actual
network performance implies that if we were to write
program-b, it would fail to replicate the actual
network performance (it would outperform it).

If the program-a significantly underperforms the
actual networks, then that falsifies (a) and
shows that the actual networks are using more
sophisticated algorithms than (a), which would then
belong to a class of programs we labeled as type (b)
(this is not a single program, since one can have
different anticipatory modules).

In conclusion, my overall points in this thread are:

P1. Both (a) and (b) are valid/falsifiable scientific
conjectures (which is what neo-Darwinians deny, claiming
that (a) is the scientific fact while (b) is a religious
belief).

P2. Assuming we agree that P1 is true, then the question
of which of the two is a better model for actual
biochemical networks is an open scientific question.

P3. The empirical mutation rates, whatever they may be,
do not tell us which type of algorithm (a) or (b) would
model the performance of actual networks better.


Nic

Jul 12, 2006, 7:50:27 PM

I for one can't see how to get this idea working. Said UV photon could
be extremely old if it came from outside the solar system. Maybe in
this case the 'repair' mechanism, also sensitive to the precise
microstate of the cytoplasm, kicks in or doesn't, according to what
happened in the distant past on some distant star? I don't see how
this view can be sustained without the whole of history, micro and
macro, being a planned consequence of the initial conditions of the
universe. I'm not really into the theological aspects of this, but
surely a creator knowing *that* much about the consequences of their
actions might as well not bother doing anything at all, as there is
nothing to learn from standing back and watching the thing unfold?

Windy

Jul 12, 2006, 8:06:03 PM

nightlight wrote:
[snippage on intelligence of atoms]

Sweet Zombie Jesus! Your post was 2800 words long. If you want your
ideas to come across, remember that brevity is a virtue. And don't
start with any "I guess you guys don't understand my posts" bullcrap.
If you are so well versed in these issues as you claim, you should be
able to state your point concisely.

> > A better analogy would be gamblers (organisms) who are dealt the cards
> > (mutations) randomly and who then keep the favourable ones. You want to
> > know if any organism is cheating or pulling favourable cards out of its
> > sleeve. But you won't accept comparing rates at which organisms receive
> > the cards - is anyone seemingly getting more aces than can be expected
> > by their occurrence in the deck? Instead, you want to go about it the
> > hard way and compare the likelihoods of the players' hands to all
> > possible hands,...
>
> How would you figure out whether a player is getting more aces than
> he was supposed to get without cheating unless you know how many
> he was supposed to get without cheating?
>
> What you're saying is that we compare it to other players. But
> what if all of them are cheating the house (e.g. the house is
> the environment and players are some bacteria, thus they might
> well be 'beating' the environmental challenges using the
> same strategies). So who do you compare to? If they are all
> cheating in similar way, they could have the same empirical
> frequencies of aces.

That sounds like a distinctly unfalsifiable mutation fairy, despite
your objections below. Was the intelligent network present in the very
first organism or cell or did it evolve later? If we put some nucleic
acids in a tube and artificially replicate them until mutations appear,
is the intelligence guiding them?

> Hence you need to examine the game rules and estimate how
> many aces a player would get by these rules if he were not
> cheating. Only then you have the second number to compare his
> empirical frequency of aces to. Comparing one empirical
> frequency of aces to another and finding them the same
> allows two conclusions: neither is cheating of both are
> cheating. That's why I am saying you need to run a model
> which is programmed not to cheat and estimate from its
> results what a fair rate of aces ought to be.

Great, why don't *you* run the model, instead of bitching about it to
the "neo-Darwinians"?

> All of you
> here are arguing that you can do it by comparing the
> empirical rates of aces (for one or more players, or at
> different times). My point is that this doesn't work.

How do you propose, considering the mouse study for example, to develop
a model to compare with the actual outcome? You would need an empirical
mutation rate to model the unintelligent and intelligent processes.
Otherwise, if your model's mutation rate is not exactly the same as in
the wild, you can't conclude anything based on the model's outcome. But
if your proposition is true, any empirical mutation rate is *already*
affected by the "intelligence". How do you get the parameters for your
"unintelligent" model?

-- w.

nightlight

Jul 12, 2006, 8:04:48 PM
Windy wrote:

The other students in the analogy are those who didn't
have a cheat-sheet. Since the cheat-sheet is the analogue
of the biochemical network (a distributed self-programmable
computer which optimizes some utility functions), you
can't map the 'other students' to other cells or the
same cell at different time, since all cells have such
networks doing some computation of the same algorithmic
type that brains do.

Hence the analogue for the 'other students' is a model, since
you can make a model in such a way that it does not use the
computational power of the biochemical web to run
anticipatory algorithms related to the favorability of
the mutations (while it may still run for all the other
purposes it serves in actual cells). That is something
you can't do with cells,
unless one can reverse engineer the network and its
algorithms, then reprogram it to work some different way.
But at that stage, this would be redundant since we
would already know whether its anticipatory computational
capacity is used to control mutations or not.

The little analogy was meant to suggest how absurd it would
be to postulate that the network is absolutely not using its
computational capacities for such purpose (to control mutations
or their repair in anticipatory manner, as far as its
capabilities go). That is what the neo-Darwinian conjecture
means expressed at the higher level of abstraction (at the
algorithmic level of abstraction, which is a higher level
than the biochemical level of abstraction, and which in
turn is a higher level than the quantum theory level of
abstraction; the abstraction levels and their relations
were discussed in more detail in an earlier post:
http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2 ).


nightlight

Jul 12, 2006, 8:33:05 PM7/12/06
to
hersheyhv wrote:

>
> I think that what nightlight calls a "cheat sheet" is what the rest of
> us would call a pre-existing structure. He thinks it is cheating when
> evolution works, as it must, by modifying a pre-existing structure
> rather than poofing a new structure into existence. Why, it is a
> miracle, a MIRACLE, that that organism had the pre-existing structure
> to begin with. It must mean that the whole pathway was pre-designed
> from the beginning.
>

The cheat-sheet is the biochemical network (and the hierarchy of
sub-networks below it, with no a priori limit on its overall
computational power).

The point of contention is whether this network (which is
a distributed self-programmable computer, algorithmically
of the same type as brains) and its "anticipatory capabilities"
(this is a characterization at the algorithmic level of
abstraction), which it uses for some purposes, are prohibited
from being used for the anticipatory control of mutations
and their repair.

My position is that they are used for that purpose to the
best of their ability (which need not be unlimited or
miraculous although it may appear as magic to our
present technology), while neo-Darwinian position,
expressed at the algorithmic level of abstraction,
is that they are not used for this purpose.

Since we can't presently reverse engineer their algorithms
(consider reverse engineering a chess program algorithm
from measuring trillions of electric pulses at different
points; and that would be a child's play in comparison
since we do know much better how a computer, that we designed,
works than how a cellular biochemical network works), all
we can do, in order to find out which conjecture is closer
to the way actual networks use their anticipatory computing
power, would be to model the dumber case and compare its
performance to that of the actual networks. Since the models
we could compute at present would be by necessity pretty
coarse grained, they would be able to provide only a low
resolution discrimination. That could still be decisive
if, for example, the neo-Darwinian algorithm were to
drastically (by many orders of magnitude) underperform
the actual networks. This is also what the "irreducible
complexity" examples are trying to do, albeit in a
qualitative manner -- to hint at how difficult it would
be to construct such DNA configurations the "dumb" way,
without the use of any anticipatory computations.


nightlight

Jul 12, 2006, 9:09:43 PM7/12/06
to
Nic wrote:

> Windy wrote:
>
>>
>>Does the intelligence work by
>>-causing the emission of the UV photon in some distant source so that
>>it may strike the appropriate base in DNA
>>-choosing whether the UV photon will be absorbed by the thymine it
>>strikes
>>-causing the appropriate repair enzyme to arrive or not to arrive at
>>the scene
>
>
> I for one can't see how to get this idea working. Said UV photon could
> be extremely old if it came from outside the solar system. Maybe in
> this case the 'repair' mechanism, also sensitive to the precise
> microstate of the cytoplasm, kicks in or doesn't, according to what
> happened in the distant past on some distant star? I don't see how
> this view can be sustained without the whole of history, micro and
> macro, being a planned consequence of the initial conditions of the
> universe. I'm not really into the theological aspects of this, but
> surely a creator knowing *that* much about the consequences of their
> actions might as well not bother doing anything at all, as there is
> nothing to learn from standing back and watching the thing unfold?
>

The underlying assumption of ID that I support is that the presumed
"designer" is a "lawful designer", not a trickster. Hence, no one
can violate "the rules" (which is a distinct concept from the
"presently known rules").

The ID does not prohibit "accidental" mutations. The
UV induced mutation would be, in any reasonable "lawful
designer" theory, an accidental mutation. Its repair, which
is an action of the biochemical network, would not be,
though. The mutations that would not be accidental are those
that are induced by the biochemical network itself (which
are presently labeled as "errors"). Any repair (or its
omission) would also qualify as non-accidental DNA
transformation.

In my view, the accidental (non-biochemical) mutations are,
for all practical purposes, irrelevant for evolution or
adaptation. The mutations which led from the first bacteria
to the present life were all genetic engineering projects
of the cellular biochemical networks (and their underlying
sub-nets, possibly down to Planckian-scale objects, hence,
in principle, of vast computational power).

Similarly, the first life was a technological-scientific
project of the autocatalytic reaction networks.

hersheyhv

Jul 12, 2006, 9:16:39 PM7/12/06
to

nightlight wrote:
> Windy wrote:
>
[snip New Age type mystical mumbo-jumbo in pseudoscientific terms]

> > A better analogy would be gamblers (organisms) who are dealt the cards
> > (mutations) randomly and who then keep the favourable ones. You want to
> > know if any organism is cheating or pulling favourable cards out of its
> > sleeve. But you won't accept comparing rates at which organisms receive
> > the cards - is anyone seemingly getting more aces than can be expected
> > by their occurrence in the deck? Instead, you want to go about it the
> > hard way and compare the likelihoods of the players' hands to all
> > possible hands,...
>
> How would you figure out whether a player is getting more aces than
> he was supposed to get without cheating unless you know how many
> he was supposed to get without cheating?

By properly defining cheating as one player getting more aces *than the
other players*. If all players have the same probability of getting
aces, no player is advantaged relative to another no matter how many
aces there are. Hence, no cheating has occurred.

That is, determine if the two variables, "number of aces received per
hand" and "individual card players" act like independent variables. If
they do, then no cheating has occurred. In this case, the mean number
of aces dealt per hand, whatever it is, over a sufficient number of
hands (and the variation in this number per hand) can be used to
determine if two players' mean numbers of aces per hand are significantly
different from each other.

Again, if there is no significant difference between the two players on
this measure, no cheating is occurring *regardless* of what the mean
number is.
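The independence test described above can be sketched in code. This is a hypothetical illustration (the deck, hand size, and player count are invented for the example), not anything from the thread:

```python
import random

def deal_counts(n_hands, n_players=4, hand_size=5, seed=0):
    """Deal fair hands from a shuffled 52-card deck (no cheating
    programmed in) and tally the aces each player receives."""
    rng = random.Random(seed)
    deck = ["A"] * 4 + ["x"] * 48          # 4 aces, 48 other cards
    aces = [0] * n_players
    for _ in range(n_hands):
        rng.shuffle(deck)
        for p in range(n_players):
            hand = deck[p * hand_size:(p + 1) * hand_size]
            aces[p] += hand.count("A")
    return aces

def chi_square(observed):
    """Chi-square statistic for the hypothesis that 'player' and
    'number of aces received' are independent variables."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

counts = deal_counts(10_000)
print(counts, chi_square(counts))
# With 3 degrees of freedom, a statistic far above 7.81 (the 5%
# critical value) would mark some player as differentially favoured.
```

Note that, as the post argues, this needs no model of the "true" ace rate: only the players' empirical counts compared with each other.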

> What you're saying is that we compare it to other players. But
> what if all of them are cheating the house

Cheating the house (in games where the house participates in the
action) is no different from cheating any other player and is detected
the same way -- by discovering that one of the players is being
*differentially* affected.

> (e.g. the house is
> the environment and players are some bacteria, thus they might
> well be 'beating' the environmental challenges using the
> same strategies). So who do you compare to? If they are all
> cheating in similar way, they could have the same empirical
> frequencies of aces.

How do you propose this in the real world? If what you mean is that
several different species of bacteria might have the same pre-existing
structure, which, when mutated *randomly wrt need*, allows the rare
bacteria with this mutation to survive when they undergo selection,
sure, that can certainly happen. But none of these bacteria are
"cheating" by *anticipating* a need and specifically and differentially
producing the needed mutation. They just were the lucky ones that had
a pre-existing structure which, when modified by random mutation,
allowed survival. They are still undergoing mutation at random wrt
need and all the environment does is pick the winners. Like I said
before, you seem astonished and regard it as cheating that evolution
works by modifying a pre-existing structure. That is, necessarily, the
way evolution *must* work.

> Hence you need to examine the game rules and estimate how
> many aces a player would get by these rules if he were not
> cheating. Only then you have the second number to compare his
> empirical frequency of aces to. Comparing one empirical
> frequency of aces to another and finding them the same
> allows two conclusions: neither is cheating or both are
> cheating.

Cheating *requires* that one of them get a benefit that some other
participant doesn't. Otherwise there is no cheating.

> That's why I am saying you need to run a model
> which is programmed not to cheat and estimate from its
> results what a fair rate of aces ought to be. All of you
> here are arguing that you can do it by comparing the
> empirical rates of aces (for one or more players, or at
> different times). My point is that this doesn't work.

For any rational definition of cheating, it does.

[snip more new age mysticism by someone with a poor grasp on chemistry
and biochemistry, not to mention biology]

nightlight

Jul 13, 2006, 2:31:23 AM7/13/06
to
Windy wrote:

> nightlight wrote:
> [snippage on intelligence of atoms]
>

> ... Your post was 2800 words long.

Unfortunately, saying that atoms have a 'mind stuff', may
superficially appear as absurd to many. Hence it takes a bit
more space and wider topics, to show that this can be a
perfectly coherent and heuristically rich perspective
(more so than any other, to me).

>>same strategies). So who do you compare to? If they are all
>>cheating in similar way, they could have the same empirical
>>frequencies of aces.
>
>
> That sounds like a distinctly unfalsifiable mutation fairy, despite
> your objections below.

You may be stretching the gambling analogy beyond the breaking point.
While it is certainly not plausible that all gamblers will be cheating,
it is perfectly plausible that all biochemical networks, having the
common ancestor, use similar anticipatory algorithms.


> Was the intelligent network present in the very
> first organism or cell or did it evolve later?

The ancestral networks which discovered the digital
coding and then designed on that technology the DNA/RNA
based networks were autocatalytic reaction networks.

> If we put some nucleic
> acids in a tube and artificially replicate them until mutations appear,
> is the intelligence guiding them?

The 'intelligent processes', which at the algorithmic level of
abstraction are programs executed by the cellular biochemical
networks, don't cease to run if the conditions are drastically
different from their natural environment. They just may not have
the same effectiveness, since they are predicting for the wrong
environment. If you further chop the biochemical networks into
smaller pieces, there would be some truncated segments of the
networks, with the remaining snippets of the original programs
trying to do something, which may be completely useless
for the situation.

>
> Great, why don't *you* run the model, instead of bitching about it to
> the "neo-Darwinians"?

There are multiple levels of the debate here. At the basic level, that
suggestion was meant to support the point that ID is a valid scientific
perspective, since the discriminating criteria between ID and ND
can be formulated. Whether they can be practically implemented with
present computational technology is not relevant for the question
of existence, as a matter of principle, of such criteria.

> How do you propose, considering the mouse study for example, to develop
> a model to compare with the actual outcome? You would need an empirical
> mutation rate to model the unintelligent and intelligent processes.
> Otherwise, if your model's mutation rate is not exactly the same as in
> the wild, you can't conclude anything based on the model's outcome.

Of course, you can. If the program-a (the one which uses no
anticipatory algorithms to control the physical-chemical
conditions which are causally responsible for mutations),
produces drastically smaller (several orders of magnitude) rates
at that site than the empirical rates, the implication is
that one needs a smarter algorithm. Note that "dumb" algorithm
may have some general mutagenic environment parameters which
are adjustable (the 'stress response'). Now, if one were to
use such blunt tuning to raise the rate at the site in
question to the empirically observed value, there may be many
more harmful side-effects, which the "dumb" program cannot
avoid without implementing them in the offspring's DNA, at
the cost of a wasted offspring for each wrong guess, since
it can only allow the natural selection to weed out the mistakes.

The blunt mutation tuning via general mutagenic factors, which
is all that the "dumb" program is allowed to set (by the
post-Cairns version of the Random Mutation conjecture),
may not allow it to replicate all the rates that occur
'in the wild'. If the differences are not drastic, one
can consider the "dumb" model sufficient. But if the
differences are astronomical and outcome cannot be
improved by the blunt tuning, then the present version
of RM can be considered a falsified conjecture, which
would mean it's time for an update, again.

With the anticipatory algorithm, some or many errors or classes
of errors, can be avoided before committing to the expensive
test by selection. The whole classes can be eliminated if there are
some easily identifiable markers common to many different errors.
In effect, they get tried and weeded out 'in the head' of the
anticipatory process before committing to the expensive offspring.

Keep in mind that all these hypothetical simulations (which are
highly impractical at present, hence we can use them here only
to decide on the existence of criteria and other matters of
principles) would have a large degree of freedom in picking
their next steps (due to dynamical uncertainties, from the
irreducible ones at quantum level, up to various noises at
classical levels).

The key difference between the dumb and anticipatory algorithms
is precisely in how they utilize these freedoms of choice on
each time step of the simulation. The dumb algorithm must pick
a random choice in accordance with probability distribution
predicted by the dynamics, while the anticipatory one picks
the choice which maximizes its gain function (evaluated
via look-ahead, a la 'evaluation function' in a chess program).
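The contrast between the two selection rules can be put in toy code. A minimal sketch, assuming an invented three-way choice with made-up probabilities and gains (nothing here models real chemistry):

```python
import random

def dumb_step(choices, probs, rng):
    # "Dumb" rule: sample the next choice according to the
    # dynamically given distribution P(t, i).
    return rng.choices(choices, weights=probs, k=1)[0]

def anticipatory_step(choices, gain):
    # "Anticipatory" rule: pick the choice maximizing a look-ahead
    # gain function (a la a chess program's evaluation function),
    # however improbable that choice is dynamically.
    return max(choices, key=gain)

rng = random.Random(42)
choices = ["harmful", "neutral", "beneficial"]
probs = [0.90, 0.09, 0.01]                      # invented P(t, i)
gain = {"harmful": -1, "neutral": 0, "beneficial": 1}.get

print(anticipatory_step(choices, gain))          # always "beneficial"
print(dumb_step(choices, probs, rng))            # usually "harmful"
```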

A perceptive reader may raise here the following objection:
Does the picking based on some gain function violate the
dynamically implied probabilities? For example, in the most
fine grained model, quantum theory (QT), for each step at
time 't' the theory predicts probability distribution P(t,i),
for i=1,2...C(t), where C(t) is the number of choices at
time t. The "dumb" algorithm has to generate random choice
according to the given distribution P(t,i). How can the
smart program avoid that without violating QT? We are also
considering only the "lawful designer", hence no one can
violate the rules of the game.

If the smart program were to systematically pick low
probability choices (because they indicate high gains by
its look-ahead function), its trajectory in the state
space would remain in low entropy subspace, hence our
program would behave as a Maxwell demon, seemingly
violating the second law of thermodynamics.

Within the QT, we need to recall that distribution P(t,i) is
not determined solely by the dynamical laws, but also by the
initial and boundary conditions. If our system were isolated,
the boundary conditions would be set to zero (for all fields
and currents), thus there would be no choice (except for the
initial conditions). But if the system were _not isolated_,
then the smart program (for the fine grained QT level
simulation) is free to set the optimum boundary conditions
for its task in each time step. Hence it can freely pick
the choice with the best gain for the system, regardless
of its low probability, and thus keep the system at as
low entropy as it wishes, since any excess entropy is
pumped out through the system's boundary (into the
environment) via the freedom to set the boundary
conditions. Hence, it can work as a Maxwell demon,
as long as the cost of its computation is paid by
the external systems.

This is, in fact, precisely how a regular computer running,
say a chess program, can have the chess program select
freely in each step the 'best' move, regardless of the
P(t,i) -- the computer is constantly pumping entropy
into the environment (via its cooling system), maintaining
its entropy as low as required. Note that using the high
abstraction level (algorithmic) language to describe
how the program works, abstracts away the details of
the cooling (in thermodynamic description) or the
boundary conditions adjustments (in QT description).

Hence, this probabilistic (or 2nd law) objection to the
existence of anticipatory processes is invalid since
our system is an _open system_. Of course, we could have
discarded the objection right away, by simply pointing
to the existence of direct counterexamples to that
objection: computers, humans, animals.... Since such
systems do exist, any reasoning or its premises
that can deduce that such systems can't exist are
automatically flawed. The discussion above merely
explains what was that flaw.

In contrast, the "dumb" program running at QT level of
simulation granularity, is not allowed to adjust the
boundary conditions, but only randomly select a
choice 'i' according to the given distribution P(t,i).
This dynamical evolution will drive the system into
the thermodynamical equilibrium, which is the maximum
entropy state, the heat death. That would be quite
an unfavorable outcome of its operation. But if we
were to allow the "dumb" program to adjust the boundary
conditions to avoid the heat death, then we have to
allow it to use anticipatory algorithm which can
adjust the boundary conditions so that the excess
entropy is pumped out. The neo-Darwinian conjecture
has thus to cheat here -- it allows anticipation to
the "dumb" program to the degree needed to maintain
such boundary conditions which prevent the anticipated
heat death. The neo-Darwinian conjecture

That is essentially equivalent to sneaking the needed
intelligence from an outside system, e.g. there is
some system outside of the cell being simulated by
the "dumb" program, which has to be simulated by the
anticipatory program whose role is to make sure that
the cell's entropy is kept low (by adjusting its
boundary conditions, as well as its own, in an
anticipatory manner that prevents the heat death
of the cell and of itself).


> But if your proposition is true, any empirical mutation rate is *already*
> affected by the "intelligence". How do you get the parameters for your
> "unintelligent" model?

One may not be able to tune the "dumb" program to match
all the empirical mutation rates, using only the blunt
mutation tuning tool (which is all that it is allowed
to tune presently). Should that happen, that failure
alone would falsify the present RM rules imposing
prohibitions on fine grained mutation tuning. One
would have to weaken the RM conjecture further.
The first such weakening was the post-Cairns
adjustment, which allowed the "dumb" algorithm
to be less-dumb and use the bulk mutation tuning
in an anticipatory manner.

Stated at the algorithmic level of abstraction, this
post-Cairns RM weakening was as follows: Normally,
increasing the global mutation rate is statistically
harmful. But if the conditions are such that the less-dumb
program _anticipates_ certain death, anyway (the 'stress
condition'), then it will increase the cell's global
mutation rate.
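Stated as code, the post-Cairns rule paraphrased above is a single conditional. A toy sketch; the base rate and stress multiplier are invented numbers in arbitrary units:

```python
def global_mutation_rate(base_rate, stress, stress_factor=100):
    """Blunt, genome-wide tuning only: under an anticipated-death
    'stress condition' raise the global rate; no per-site targeting."""
    return base_rate * stress_factor if stress else base_rate

print(global_mutation_rate(2.0, False))   # 2.0
print(global_mutation_rate(2.0, True))    # 200.0
```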

nightlight

Jul 13, 2006, 5:03:26 AM7/13/06
to
hersheyhv wrote:


> By properly defining cheating as one player getting more aces *than the
> other players*. If all players have the same probability of getting
> aces, no player is advantaged relative to another no matter how many
> aces there are. Hence, no cheating has occurred.

That assumes the symmetrical rules of the game and equal skills of the
players, both of which explicitly contradict how that analogy was
defined in the original post:

http://groups.google.com/group/talk.origins/msg/f30ee4fdbfff5b01

where it explicitly prohibits assumption of rule symmetry or equal
skills (which is why the house role was brought in, since that
makes this more obvious). Assuming that players have equal skills is
particularly absurd for biological analogy, since that is analogous to
assuming that each organism is equally fit as any other. That isn't even
true for the siblings, let alone for some arbitrary pair of organisms.

Allowing your narrowing of the types of games would make the analogy to
biology even more meaningless for the cases of different species or
species and environment (since in that case neither the rules have to be
symmetrical/fair nor the skills have to be equal).

All that your redefinitions and restrictions amount to is claiming that:

a) you can define some subset {G2} of the original set {G1} of games
and
b) you can define a term "cheating2" different from the original
term "cheating1"

so that with these two changes you can declare:

H. For the games from my subset {G2} the "cheating2" can be established
by comparing empirical counts alone, without having to model the game odds.

So what? That has absolutely nothing to do with the validity of my proposition:

N. Within the full set of games {G1} you cannot establish "cheating1"
without having to model the game odds and compare players' gains to
those predicted by the model.

Further, by restricting your games to subset {G2}, you have lost the
biological analogy altogether, so even ignoring the fact that (H) has
nothing to do with (N), the (H) taken on its own merit doesn't even
amount to saying anything analogous to biology.


> Cheating the house (in games where the house participates in the
> action) is no different from cheating any other player and is detected
> the same way -- by discovering that one of the players is being
> *differentially* affected.

Differentially from what? If there is one house and multiple players,
how do you know whether the house is affected differently from the other
players? How would you know whether, for example, the house is cheating
against all other players? Who would be 'differentially' affected? And
with respect to what other figure would this difference be computed?
(Keep in mind that the rules for the house need not be the same as those
for the players.)

Windy

Jul 13, 2006, 8:42:33 AM7/13/06
to

nightlight wrote:

> Windy wrote:
> > If we put some nucleic
> > acids in a tube and artificially replicate them until mutations appear,
> > is the intelligence guiding them?
>
> The 'intelligent processes', which at the algorithmic level of
> abstraction are programs executed by the cellular biochemical
> networks, don't cease to run if the conditions are drastically
> different from their natural environment. They just may not have
> the same effectiveness, since they are predicting for the wrong
> environment. If you further chop the biochemical networks into
> smaller pieces, there would be some truncated segments of the
> networks, with the remaining snippets of the original programs
> trying to do something, which may be completely useless
> for the situation.

Too bad for your proposition, the mutations produced in a test tube
perform as well as natural mutations:

http://www.sciencedaily.com/releases/2002/03/020320081607.htm

Darwin's Time Machine: Scientists Begin Predicting Evolution's Next
Step
Untangling the branches of evolution's past is a daunting enough task
for researchers, but some scientists are now turning their eyes toward
the future in a bid to predict evolution's course. Barry G. Hall,
professor of biology at the University of Rochester, has shown how a
model of evolution developed in the lab accurately reproduces natural
evolution. The research, published in the March issue of Genetics,
demonstrates how the model is so accurate that it can be used to
predict how a strain of bacteria will become resistant to
antibiotics, giving researchers a possible tool to create drugs to which
bacteria cannot adapt.
[...]
Researchers developed an alternative; instead of growing a culture of
cells and then subjecting them to a stress, like lactose that they
couldn't metabolize, and waiting to see if any survived, scientists had
decided to take a gene or two and mutate it in a test tube.

[note that the mutations are still random!]

"You can introduce a lot of mutations in the lab," explains Hall. "In
effect, you can take millions of copies of this gene and give each one
a different mutation." Those mutated genes are introduced back into the
cells, "and then you ask, can you grow on lactose now?"

The mutations that arose in nature were also found in the laboratory
cells, but would that always be the case? Hall, knowing that he had
essentially bypassed the cell's normal machinery, needed to know if
this accelerated process would accurately mimic mutations that would
arise in nature. If he could mutate a 40-year-old antibiotic resistance
gene, called TEM-1, in the laboratory and match his mutations against
the natural mutations that arose as that gene adapted to better
antibiotics, then he could see if the model would accurately predict
how genes evolve in nature.

"Antibiotic resistance evolves rapidly enough that we can observe
significant increases in resistance profiles in just a decade," says
Miriam Barlow, a doctoral student in Hall's laboratory. "This provides
the unique opportunity to actually observe the process of evolution as
it happens."

The results matched well. Certain mutations developed in the lab
improved the resistance gene more than others. And, most importantly,
these were the same mutations that arose in nature.

nightlight

Jul 13, 2006, 2:07:48 PM7/13/06
to
Windy wrote:

> Too bad for your proposition, the mutations produced in a test tube
> perform as well as natural mutations:
>
> http://www.sciencedaily.com/releases/2002/03/020320081607.htm


You're joking, aren't you?

> Source: University Of Rochester
> ...


> Barry G. Hall, professor of biology at the University

> of Rochester ...
>
> Thirty years ago, Hall was growing E. coli in his ...
>
> This will tell pharmaceutical companies whether their
> new drug will have a life span ...


A self-promotional puff piece that reek$$$ from a mile away
proves exactly what? That professor Barry G. Hall can
anticipate where the big bucks might be? Just ask a kid on one
or more of Ritalin, Adderall, Prozac, Zoloft, Paxil, Celexa,...

Regarding your new subject line, I inserted a word so it
would reflect more accurately what that research actually
shows. And that is: if you take the anticipatory biochemical
networks of bacteria and combine them with the network of
neurons of professor Barry G. Hall for about three to four
decades, the combined network may anticipate faster than
either of its subnets could on their own. Fa$cinating.

A combined network can anticipate faster than either of
its subnets. Whoa Nelly! Who would've thunk of that?
On second thought, wasn't there some prior art on that
fascinating discovery, say, from about 700 millions to
2 billions years ago? Maybe they meant to say that the
fascinating part in this case was in having for the
second network the one of 'Barry G. Hall'? Well, yeah,
that would certainly be a fascinating twist on the theme.

Windy

Jul 13, 2006, 3:15:29 PM7/13/06
to
nightlight wrote:
> Windy wrote:
>
> > Too bad for your proposition, the mutations produced in a test tube
> > perform as well as natural mutations:
> >
> > http://www.sciencedaily.com/releases/2002/03/020320081607.htm
>
> You're joking, aren't you?
>
> > Source: University Of Rochester
> > Barry G. Hall, professor of biology at the University
> > of Rochester ...
> > Thirty years ago, Hall was growing E. coli in his ...

...pants? Why is this mangled sentence relevant?

> > This will tell pharmaceutical companies whether their
> > new drug will have a life span ...
>
> A self-promotional puff piece that reek$$$ from a mile away
> proves exactly what? That professor Barry G. Hall can
> anticipate where the big bucks might be? Just ask a kid on one
> or more of Ritalin, Adderall, Prozac, Zoloft, Paxil, Celexa,...

Great. NashtOn bitches about when there are no practical applications
to Darwinism, and you bitch about when there are some.

Did you miss where they said the research was published in Genetics? Go
read that, if you don't like press releases.

> Regarding your new subject line, I inserted a word so it
> would reflect more accurately what that research actually
> shows. And that is: if you take the anticipatory biochemical
> networks of bacteria

No, you take *one gene* and induce mutations in it in the presence of a
few other chemicals. So you are saying an "anticipatory network"
consisting of one gene and maybe an enzyme or two from a completely
different organism and a test tube environment can *still* anticipate
that it will need mutations to grow on lactose later?

> and combine them with the network of
> neurons of professor Barry G. Hall for about three to four
> decades, the combined network may anticipate faster than
> either of its subnets could on their own. Fa$cinating.

How, exactly, do the contents of Barry Hall's brains reach into the
test tube and tell the gene it should bias its random mutations to deal
with lactose or antibiotics?

> A combined network can anticipate faster than either of
> its subnets. Whoa Nelly! Who would've thunk of that?
> On second thought, wasn't there some prior art on that
fascinating discovery, say, from about 700 million to
2 billion years ago? Maybe they meant to say that the
> fascinating part in this case was in having for the
> second network the one of 'Barry G. Hall'?

Now you are complaining that there were *GASP* _researchers_
involved in this research? This objection is still as idiotic as when
creationists first came up with it.

> Well, yeah,
> that would certainly be a fascinating twist on the theme.

Ok, since NO experiments on evolution can be done without researchers,
including any possible random mutations model that can be developed,
your objection always applies. Thanks for proving that your
"intelligent networks" theory is not falsifiable. Bye bye.

-- w.

Windy

Jul 13, 2006, 3:19:13 PM7/13/06
to
nightlight wrote:
> Windy wrote:

> > nightlight wrote:
> >
> > What ARE these "other students" that we can compare the cheating
> > student with? Are they cells that lack a "cellular biochemical
> > network"? As soon as you point us to some of these cells, we can make
> > the comparison.
>
> The other students in the analogy are those who didn't
> have a cheat-sheet. Since the cheat-sheet is the analogue
> of the biochemical network...

>
> Hence the analogue for 'other students' is a model, since
> you can make the model in such a way that it does not use
> computational power of the biochemical web to run
> anticipatory algorithms related to the favorability of
> the mutations...

OK, *how* do we make such a model? How do you model biochemical events
without a biochemical network? Be specific.

-- w.

hersheyhv

Jul 13, 2006, 4:08:43 PM
to

nightlight wrote:
> hersheyhv wrote:
>
>
> > By properly defining cheating as one player getting more aces *than the
> > other players*. If all players have the same probability of getting
> > aces, no player is advantaged relative to another no matter how many
> > aces there are. Hence, no cheating has occurred.
>
> That assumes the symmetrical rules of the game

Of course.  If a game has unsymmetrical rules that favor one player
over others, then the favored player is still not *cheating*, by the
usual definition of cheating. He is merely favored, for some reason,
by the unsymmetrical rules of the game. For example, if the rules say
that the dealer always gets one extra card to pick from, the dealer
will win that much more often than the other players. But the dealer
is not *cheating* if that is what the rules say.

In the case of mutation it is quite clear that there is variance in
mutation rate from specific site to specific site because of local
features in sequence. It is also clear that certain sites are more
likely to mutate than others (the 5-methyl-C in CG pairs, for example,
spontaneously deaminates to T; that is one of the reasons why CG pairs
are relatively rare in mammals). So, in that sense, the rate of
mutation is clearly not a constant; some sites are more mutable than
others and some types of mutations are more frequent than others.

But you are making a *specific* claim for which such variance from site
to site is irrelevant. You are claiming that there is a correlation by
which those mutations which produce beneficial effects are *more
likely* than equivalent mutations which produce detrimental effects
because there is some mechanism by which these events anticipate future
need. It is that specific claim which has been experimentally
determined to be false. There is NO, NADA, ZIP evidence that there is
any possible anticipatory mechanism by which mutations that are
beneficial are more likely to occur than those that are detrimental and
no way in which any mutational event can predict what selective
environment it will be in (and it is the selective environment that
determines the beneficial or detrimental nature of a mutation, not the
specific change in the DNA sequence). As I have pointed out, some
seriously deleterious mutations occur at high rates and some
potentially beneficial ones occur at low frequency. You have yet to
provide any evidence that there is any correlation between selective
value of a change and the rate of mutation.

> and equal skills of the
> players,

Again, the players having unequal skills are not *cheating*. If the
rules are symmetrical, then one would expect players with better skills
to win more often. But that difference is not a result of chance. It
is the result of ability of the players to predict the future.
Unfortunately, DNA is a dumb molecule undergoing chemical changes and
not an intelligent agent and mutation in DNA occurs at random wrt need.
OTOH, the selective environment does choose when to "hold 'em" and
when to "fold 'em". But not in any foresightful or intelligent manner,
anticipating the possibility of getting a straight rather than a flush.
Rather NS culls its cards and draws new ones merely on the basis of
local utility at each step in the process. For example, NS would stick
with two tens rather than discard one even if doing so might lead to
drawing a straight flush *because* NS has no foresight.

> both of which explicitly contradict how that analogy was
> defined in the original post:
>
> http://groups.google.com/group/talk.origins/msg/f30ee4fdbfff5b01
>
> where it explicitly prohibits assumption of rule symmetry or equal
> skills (which is why the house role was brought in, since that
> makes this more obvious). Assuming that players have equal skills is
> particularly absurd for biological analogy, since that is analogous to
> assuming that each organism is equally fit as any other. That isn't even
> true for the siblings, let alone for some arbitrary pair of organisms.

Still, for *cheating* to occur, you have to have some mechanism that
biases the probability of the cards that each player is likely to get.
Otherwise it is not cheating; it is not a deviation from randomness
in the draw. And it is the randomness wrt need in the draw that is the
equivalent of mutation, not the skill of the players.

> Allowing your narrowing of the types of games would make the analogy to
> biology even more meaningless for the cases of different species or
> species and environment (since in that case neither the rules have to be
> symmetrical/fair nor the skills have to be equal).

But the draw of the cards has to be fair and random. Your claim is
that there is something about mutation that differs from a fair and
random draw. It has nothing to do with the rules of the game or the
skill of the players.

> All that your redefinitions and restrictions amount to is claiming that:
>
> a) you can define some subset {G2} of the original set {G1} of games
> and
> b) you can define a term "cheating2" different from the original
> term "cheating1"
>
> so that with these two changes you can declare:
>
> H. For the games from my subset {G2} the "cheating2" can be established
> by comparing empirical counts alone, without having to model the game odds.
>
> So what? That has absolutely nothing to do validity of my proposition:
>
> N. Within the full set of games {G1} you cannot establish "cheating1"
> without having to model the game odds and compare players' gains to
> those predicted by the model.

So, what games are you thinking of?  I am thinking of what are
commonly considered "games of chance". In these games, one of the
assumptions is that the distribution of cards is random wrt the need of
the players for specific cards. It is this assumption that cheaters
violate.

In the game of genetics, one of the assumptions is that mutation occurs
at random wrt the need of the organism for that specific mutation. You
have presented no evidence of any process by which nature can produce
such a "cheat". There is massive evidence that mutation in nature
occurs without foresight of its potential future utility. There is no
viable mechanism for nature being able to predict the future. You have
presented none. What you call "anticipation" I call "hindsight". What
you claim is nature shooting arrows in the center of the target, I call
you drawing circles around arrows in the target.

> Further, by restricting your games to subset {G2}, you have lost the
> biological analogy altogether, so even ignoring the fact that (H) has
> nothing to do with (N), the (H) taken on its own merit doesn't even
> amount to saying anything even analogous to biology.
>
>
> > Cheating the house (in games where the house participates in the
> > action) is no different from cheating any other player and is detected
> > the same way -- by discovering that one of the players is being
> > *differentially* affected.
>
> Differentially than what?

Than the other players wrt the cards they receive (not how well they
play them).

> If there is one house and multiple players,
> how do you know whether the house is differentially affected than other
> players?

How the house plays is irrelevant to whether or not the house receives
a random sampling of cards. I am simply testing whether there is any
bias in the cards the house gets relative to the cards the house
"needs". IOW, does some other player have aces up his/her sleeve that
he/she can pull out when needed. *That* would be cheating.

> How would you know whether, for example, the house is cheating
> against all other players? Who would be 'differentially' affected?

If the house receives twice as many aces as any other player (and
enough draws have occurred so that one knows that this is a
statistically significant increase in the probability of drawing aces),
the house is cheating. If, in a simple coin flip, the coin I flip
gives heads 750/1000 times I would be justified in saying the coin was
loaded. If the die I toss gets a six 100/300 tries, I would be
justified in saying the die was loaded. This has nothing to do with
the symmetry of the rules of the game or the skill of the players. It
has to do with whether or not I am getting a greater than expected
number of supposedly random events that I need to win. But such loaded
cards or coins or die are only "cheating" if I can somehow ensure that
I am the only person getting the desired result. If everyone uses the
loaded die or coin or gets more aces, there is no cheating. Cheating
requires an unfair advantage not written into the rules. It requires,
in this case, a correlation between the desired result and a specific
player's result in the part of the process that is supposed to be fair,
unbiased, and random. That part is, in games of chance, that each
player has an equal probability of receiving a particular event, be
that aces, or heads, or sixes. What the actual probability is doesn't
matter.
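The loaded-coin and loaded-die figures above (750/1000 heads, 100/300 sixes) come down to a one-sided binomial test. A minimal standard-library Python sketch; the function name `p_at_least` and the 510/1000 "fair coin" contrast case are illustrative additions, not from the post:

```python
from math import comb

def p_at_least(n, k, p):
    """Exact one-sided binomial p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 750 heads in 1000 flips of a supposedly fair coin:
coin = p_at_least(1000, 750, 0.5)    # vanishingly small -> the coin is loaded

# 100 sixes in 300 tosses of a supposedly fair die:
die = p_at_least(300, 100, 1 / 6)    # also tiny -> the die is loaded

# For contrast, 510 heads in 1000 flips is well within chance:
fair = p_at_least(1000, 510, 0.5)    # no evidence of loading
```

The test is against the assumed fair probability alone; as the post says, neither the rules of the game nor the skill of the players enters into it.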

> And
> with respect to what other figure would this difference be computed?
> (Keep in mind that the rules for the house need not be the same as those
> for the players.)

The only thing I am looking for is that the cards, or the coin, or the
die behave as expected for a random card or head/tail or number
generator or that they misbehave similarly for all players.

nightlight

Jul 13, 2006, 5:29:38 PM
to
Windy wrote:

>>> Thirty years ago, Hall was growing E. coli in his ...
>
> ...pants? Why is this mangled sentence relevant?


A three decade long 'mind meld' between Barry G. Hall
and his E. coli was certainly relevant for my point.
That was the point.


>
> No, you take *one gene* and induce mutations in it in the presence of a
> few other chemicals. So you are saying an "anticipatory network"
> consisting of one gene and maybe an enzyme or two from a completely
> different organism and a test tube environment can *still* anticipate
> that it will need mutations to grow on lactose later?


But, the article then says:

Those mutated genes are introduced back into
the cells, "and then you ask, can you grow
on lactose now?"

The interesting things being reported happened after their
stuff was put back into the biochemical network. Hence
the phenomenon is the result of biochemical networks
and that of Barry G. Hall (his brain along with lots
of help from scientific, technological and other
social networks in material and abstract realms),
working for three decades, with a short step X,
done by the latter network in a tube, then the
process continued in the biochemical networks
alone, yielding finally the evolution described.

So? On what basis can you claim that all, or
even just some nontrivial fraction, of all the
computation by all the networks involved in the
design and engineering of the final evolved bacteria,
was due to the computation done by the step X and,
even more, only by its ingredient Y inside the
test tube (while ignoring all the computations by
the networks that produced that ingredient Y, then
guided it and controlled it while in the tube,
then into the biochemical network)?

Can you give your ball-park estimate, and how you arrived
at it, of the breakdown of all the computations, showing
roughly what percentage of total computation was done by
which of the networks, and in particular what fraction
was done by the ingredient Y while it was participating
in step X, to produce the final evolved bacteria?

After all, you surely must have had something in mind,
since you're crowing about how much of that computation
was done by the ingredient Y while participating in the
step X of the whole process.


>>and combine them with the network of
>>neurons of professor Barry G. Hall for about three to four
>>decades, the combined network may anticipate faster than
>>either of its subnets could on their own. Fa$cinating.
>
>
> How, exactly, do the contents of Barry Hall's brains reach into the
> test tube and tell the gene it should bias its random mutations to deal
> with lactose or antibiotics?

How did the above contents of your brain reach from all the
way there, across the valleys and mountains, rivers and seas,
all the way to here, to my desk?

I will let you in on the secret, if you promise not to tell
anyone... psss... there is a little pink fairy that physicists
call interactions, who does that kind of magic... psss...


> Now you are complaining that there were *GASP* _researchers_
> involved in this research? This objection is still as idiotic as when
> creationists first came up with it.

I am not complaining about the involvement of researchers
in the process. To the contrary, I was in fact objecting
precisely to your failure to account for their computations,
along with others, in design and engineering of the final
evolved bacteria.

>> Hence the analogue for 'other students' is a model, since
>> you can make the model in such a way that it does not use
>> computational power of the biochemical web to run
>> anticipatory algorithms related to the favorability of
>> the mutations...
>
> OK, *how* do we make such a model? How do you model
> biochemical events without a biochemical network?
> Be specific.

Via mathematical formalism and computer programs. That was
the whole point, that you can't decide whether there was
cheating, without estimating the odds (e.g. by modeling
the odds via computer simulations) of the test result by
that student in the absence of cheating.

The same goes for the analogous question in biology,
with cheating corresponding to the proposition (a) and
non-cheating to the proposition (b):

Do biochemical networks use their computations to

a) deliberately {1} control/induce mutations to improve
their own survival odds,

-or-

b) are mutations accidental with respect to these computations
(despite being a physical part of the computations)?

That is what the cheat-sheet analogy was meant to
illustrate. My basic claims about the question are:

1) This question is a valid scientific question (with,
at least in principle, if not practically at present,
falsifiable propositions).

2) This question has not been answered so far.

3) You cannot decide on this question by merely observing
the empirical mutation rates in various circumstances,
since the network computations may be going on in all
such circumstances and you don't have a reference figure
for mutation rates (with and without (a)) to compare the
empirically observed performance with, in order to decide
whether (a) or (b) corresponds better to the actual rates.

Hence you need a mathematical/computer model which simulates
the outcomes of the biochemical processes with/without
such use of the network computations. Only then you
can compare the outcomes from the model with the outcomes
from the actual networks and decide whether the actual
networks use (or need to use) their computations in this
manner or not.
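For what it is worth, the null side of that comparison can be sketched in a few lines of Python: simulate the waiting time until an adaptive mutation appears when mutations hit sites uniformly at random (proposition (b)), and compare it with a model in which the needed site is somehow favored (proposition (a)). Everything here — genome size, population size, mutation rate, bias factor — is an arbitrary toy value chosen only to show the shape of such a comparison:

```python
import random
from math import log

def mean_wait(bias, trials=200, genome=1000, pop=1000, mu=1e-3, seed=1):
    """Mean generations until the one 'needed' site mutates in some individual.

    bias=1  -> mutations land uniformly on sites (null model, proposition b);
    bias>1  -> the needed site is bias-fold more mutable (anticipatory model a).
    """
    rng = random.Random(seed)
    p_site = bias / (genome - 1 + bias)        # per-mutation chance of hitting the needed site
    p_gen = 1.0 - (1.0 - mu * p_site) ** pop   # chance some individual hits it this generation
    total = 0
    for _ in range(trials):
        u = 1.0 - rng.random()                 # uniform in (0, 1]
        total += 1 + int(log(u) / log(1.0 - p_gen))  # geometric waiting time
    return total / trials

null_wait = mean_wait(bias=1)     # ~1000 generations with these toy numbers
biased_wait = mean_wait(bias=50)  # dramatically shorter
```

The null model's output is exactly the kind of reference figure the post says is needed before observed rates can be read as evidence for or against (a).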


--- Footnote:

{1} "Deliberate" in the sense of a computer chess program
deliberately picking the move that it anticipates will
yield some gain later in the game.

Windy

Jul 13, 2006, 5:52:37 PM
to

Argumentum ad Mr Spock?

> >>> Thirty years ago, Hall was growing E. coli in his ...
> >
> > ...pants? Why is this mangled sentence relevant?
>
> A three decade long 'mind meld' between Barry G. Hall
> and his E. coli was certainly relevant for my point.
> That was the point.

---------------------------------------------

nightlight wrote:


> Windy wrote:
> > No, you take *one gene* and induce mutations in it in the presence of a
> > few other chemicals. So you are saying an "anticipatory network"
> > consisting of one gene and maybe an enzyme or two from a completely
> > different organism and a test tube environment can *still* anticipate
> > that it will need mutations to grow on lactose later?
>
> But, the article then says:
>
> Those mutated genes are introduced back into
> the cells, "and then you ask, can you grow
> on lactose now?"
>
> The interesting things being reported happened after their
> stuff was put back into the biochemical network.

Precisely, and the mutations had already happened *without* the
biochemical network.

> Hence
> the phenomenon is the result of biochemical networks
> and that of Barry G. Hall (his brain along with lots
> of help from scientific, technological and other
> social networks in material and abstract realms),
> working for three decades,

The experiment didn't last three decades. That was a reference that
Hall had previously studied mutations the traditional way, like Luria &
Delbruck.
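The Luria & Delbruck fluctuation test mentioned here is itself easy to sketch, and it is the classic way the random-vs-directed question was first settled: if resistance mutations arise at random during growth, parallel cultures show wild "jackpot" fluctuations (variance much greater than the mean), whereas mutations induced only at selection time would give Poisson-distributed counts (variance roughly equal to the mean). A toy standard-library Python version; all parameters (20 generations, mu = 2e-6, the fixed Poisson mean of 20 for the directed model, 500 cultures) are arbitrary illustration values:

```python
import random
from math import exp

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for the small means used here)."""
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def resistant_count(rng, generations=20, mu=2e-6):
    """Random-mutation model: start from 1 cell, double each generation;
    a mutant arising at generation g founds a clone of 2**(generations-g) cells."""
    total = 0
    for g in range(1, generations + 1):
        mutants = poisson(mu * 2**g, rng)         # new mutants this generation
        total += mutants * 2**(generations - g)   # their descendants at plating time
    return total

def vmr(xs):
    """Sample variance-to-mean ratio."""
    m = sum(xs) / len(xs)
    return sum((x - m)**2 for x in xs) / (len(xs) - 1) / m

rng = random.Random(7)
random_model = [resistant_count(rng) for _ in range(500)]
directed_model = [poisson(20, rng) for _ in range(500)]  # mutations only 'on demand'
# random_model shows jackpots (variance/mean >> 1); directed_model stays Poisson (~1)
```

The directed model's mean of 20 is deliberately fixed rather than matched to the random model; the diagnostic is the variance-to-mean ratio, not the mean itself.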

> Can you give your ball-park estimate, and how you arrived
> at it, of the breakdown of all the computations, showing
> roughly what percentage of total computation was done by
> which of the networks, and in particular what fraction
> was done by the ingredient Y while it was participating
> in step X, to produce the final evolved bacteria?

Well you have forgotten to show that an intelligent network exists at
all.

> > How, exactly, do the contents of Barry Hall's brains reach into the
> > test tube and tell the gene it should bias its random mutations to deal
> > with lactose or antibiotics?
> How did the above contents of your brain reach from all the
> way there, across the valleys and mountains, rivers and seas,
> all the way to here, to my desk?

Through fingers, servers, cables, transmitters and receivers. Are you
saying that bacteria have broadband?

> I will let you in on the secret, if you promise not to tell
> anyone... psss... there is a little pink fairy that physicists
> call interactions, who does that kind of magic... psss...

Ok, how do the interactions transmit the message to bacteria?

> > Now you are complaining that there were *GASP* _researchers_
> > involved in this research? This objection is still as idiotic as when
> > creationists first came up with it.
>
> I am not complaining about the involvement of researchers
> in the process. To the contrary, I was in fact objecting
> precisely to your failure to account for their computations,
> along with others, in design and engineering of the final
> evolved bacteria.
>
> >> Hence the analogue for 'other students' is a model, since
> >> you can make the model in such a way that it does not use
> >> computational power of the biochemical web to run
> >> anticipatory algorithms related to the favorability of
> >> the mutations...
> >
> > OK, *how* do we make such a model? How do you model
> > biochemical events without a biochemical network?
> > Be specific.
>
> Via mathematical formalism and computer programs.

Riight, that was very specific. What processes are you going to model?
What is the transition to transversion ratio of the non-intelligent
model, for example?

-- w.

Windy

Jul 13, 2006, 5:58:45 PM
to
nightlight wrote:
> And you're telling me that neo-Darwinians don't act like
> the ancient priesthoods. If it walks like a duck and talks
> like a duck, then what might it be?

It might be some pompous arse-wipe who wants to lecture scientists on
how they mind-meld with bacteria.

-- w.

hersheyhv

Jul 13, 2006, 9:17:42 PM
to

nightlight wrote:
> Windy wrote:
>
> >>> Thirty years ago, Hall was growing E. coli in his ...
> >
> > ...pants? Why is this mangled sentence relevant?
>
>
> A three decade long 'mind meld' between Barry G. Hall
> and his E. coli was certainly relevant for my point.
> That was the point.
>
>
> >
> > No, you take *one gene* and induce mutations in it in the presence of a
> > few other chemicals. So you are saying an "anticipatory network"
> > consisting of one gene and maybe an enzyme or two from a completely
> > different organism and a test tube environment can *still* anticipate
> > that it will need mutations to grow on lactose later?
>
>
> But, the article then says:
>
> Those mutated genes are introduced back into
> the cells, "and then you ask, can you grow
> on lactose now?"
>
> The interesting things being reported happened after their
> stuff was put back into the biochemical network.

After it was put back in a cell. If what you mean by "biochemical
network" is what everyone else means when they use the word "cell", let
us know.

> Hence
> the phenomenon is the result of biochemical networks

No. The phenomenon (the phenotype) only occurs when in a particular
environment, namely when the gene (altered or not) is in a cell. That
does not mean that any gene other than the altered one is involved in
the phenotype. The only effect of the otherwise *unaltered cell* is to
provide the requisite environment in which the altered gene can be
expressed. The rest of the cell, being the same when altered or
unaltered genes are put back, is fully controlled and did not exert
any *differential* effect. The only *variable* that is *different* is
the gene added back.

You are aware that that is what is involved in a "controlled
experiment", are you not?

> and that of Barry G. Hall (his brain along with lots
> of help from scientific, technological and other
> social networks in material and abstract realms),
> working for three decades, with a short step X,
> done by the latter network in a tube, then the
> process continued in the biochemical networks
> alone, yielding finally the evolution described.

The cell did not (and being the same cell environment when the altered
or an unaltered specific gene is present, could not) result in or cause
the alteration.


>
> So? On what basis can you claim that all, or
> even just some nontrivial fraction, of all the
> computation by all the networks involved in the
> design and engineering of the final evolved bacteria,
> was due to the computation done by the step X and,
> even more, only by its ingredient Y inside the
> test tube (while ignoring all the computations by
> the networks that produced that ingredient Y, then
> guided it and controlled it while in the tube,
> then into the biochemical network)?

What computations? Where in the entire rest of the unaltered cell are
these "computations" taking place? Do you actually know *anything*
about biochemistry or are you merely thinking that life is nothing but
a computer? If life is anything, it is a regulated biochemical
reaction leading to a form of crystallization-like duplication.

> Can you give your ball-park estimate, and how you arrived
> at it, of the breakdown of all the computations, showing
> roughly what percentage of total computation was done by
> which of the networks,

What computations done by what networks? Biochemical pathways do not
compute (pun intended).

> and in particular what fraction
> was done by the ingredient Y while it was participating
> in step X, to produce the final evolved bacteria?

The only feature of the cell that was altered was gene Y. All other
features of the cell are unchanged by experimental design. Gene Y, and
its state alone, is responsible for the altered phenotype.

> After all, you surely must have had something in mind,
> since you're crowing about how much of that computation
> was done by the ingredient Y while participating in the
> step X of the whole process.

Gene Y does not compute. It produces a mRNA which leads to the
production of a protein. That protein has (or has not) enzymatic
activity that leads (or doesn't) to a particular phenotype.

> >>and combine them with the network of
> >>neurons of professor Barry G. Hall for about three to four
> >>decades, the combined network may anticipate faster than
> >>either of its subnets could on their own. Fa$cinating.

> > How, exactly, do the contents of Barry Hall's brains reach into the
> > test tube and tell the gene it should bias its random mutations to deal
> > with lactose or antibiotics?
>
> How did the above contents of your brain reach from all the
> way there, across the valleys and mountains, rivers and seas,
> all the way to here, to my desk?

What does that have to do with biochemistry?

> I will let you in on the secret, if you promise not to tell
> anyone... psss... there is a little pink fairy that physicists
> call interactions, who does that kind of magic... psss...

I would suspect that physicists, unlike you, would not subscribe to the
little pink fairy model for transmission of electronic signals through
the internet. And they don't call what happens "interactions". That
is so vague and mealy-mouthed, it is almost as bad as calling what
cells do "computations".

> > Now you are complaining that there were *GASP* _researchers_
> > involved in this research? This objection is still as idiotic as when
> > creationists first came up with it.
>
> I am not complaining about the involvement of researchers
> in the process. To the contrary, I was in fact objecting
> precisely to your failure to account for their computations,
> along with others, in design and engineering of the final
> evolved bacteria.

There were no researchers involved in the engineering of the bacteria,
only those involved in providing conditions for mutation of a specific
gene and reinserting that back into a naturally evolved bacteria.

> >> Hence the analogue for 'other students' is a model, since
> >> you can make the model in such a way that it does not use
> >> computational power of the biochemical web to run
> >> anticipatory algorithms related to the favorability of
> >> the mutations...
> >
> > OK, *how* do we make such a model? How do you model
> > biochemical events without a biochemical network?
> > Be specific.
>
> Via mathematical formalism and computer programs.

Sounds to me more like mathematical mysticism and numerology and false
analogy of cells with computers and programs designed by humans. A
genome is nothing like a program designed by humans.

> That was
> the whole point, that you can't decide whether there was
> cheating, without estimating the odds (e.g. by modeling
> the odds via computer simulations) of the test result by
> that student in the absence of cheating.

Cheating wrt whether or not mutation is random wrt need cannot be
discovered by estimation of odds.  It requires that there be a
significant correlation between two variables, the rate of occurrence of
a mutation and the selective need for that mutation. What is required
to do this is a way of determining the rate of a particular mutation
when it is not needed. A simple general way to do that is to screen
for a particular gene change in a non-selective environment by using
DNA probes. The example I gave used a color difference due to
enzymatic activity (or lack thereof) in a non-selective environment.
Then, under controlled conditions, you examine the frequency of
mutation per 100,000 (or whatever) in selective and non-selective
conditions. If the rates are the same, then there is no correlation
between mutation rate and need for mutation. It doesn't matter what
the rates actually are. You could increase the rate of mutation
drastically by adding a mutagen, but if the rate of mutation is the
same in selective and non-selective conditions, there is no evidence
that mutation is at all correlated with need for mutation.
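The controlled comparison described here — the same mutation scored per 100,000 cells with and without selection — is a standard two-proportion test. A standard-library Python sketch; the revertant counts below are invented purely for illustration:

```python
from math import sqrt, erfc

def two_proportion_p(k1, n1, k2, n2):
    """Two-sided z-test: do mutant frequencies k1/n1 and k2/n2 differ?"""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value, normal approximation

# e.g. 12 mutants per 100,000 cells without selection vs 15 with selection:
same = two_proportion_p(12, 100_000, 15, 100_000)       # not significant

# a genuinely induced tenfold difference would show up unmistakably:
different = two_proportion_p(12, 100_000, 120, 100_000)  # highly significant
```

As the post notes, the absolute rates do not matter: only a significant difference between the selective and non-selective conditions would count as evidence that mutation tracks need.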

> The same goes for the analogous question in biology,
> with cheating corresponding to the proposition (a) and
> non-cheating to the proposition (b):
>
> Do biochemical networks use their computations to

What the bloody f**k computations are you talking about? Biochemicals
certainly interact and affect each other, but they don't do
computations.


>
> a) deliberately {1} control/induce mutations to improve
> their own survival odds,

Since biochemical networks do not do computations, those nonexistent
computations certainly cannot control or induce mutations. Cells
*can*, under stress, change the rate of mutation and, in some case,
this can lead to an increase in the frequency of *all* mutations. But
this is not generating *specifically* mutations that are needed. It is
only increasing the overall mutation frequency, good, bad, and
indifferent (selectively speaking).

> -or-
>
> b) are mutations accidental with respect to these computations
> (despite being a physical part of the computations)?

Since there are no computations going on, only biochemistry, overall
mutation rates increase, but not specificity for mutations of need.

> That is what the cheat-sheet analogy was meant to
> illustrate. My basic claims about the question are:
>
> 1) This question is a valid scientific question (with,
> at least in principle, if not practically at present,
> falsifiable propositions).

And it has, repeatedly and often, been answered. With consistent
results. Although the rate of mutation (or specific types of mutation)
can be increased (or decreased) by many added chemicals, stressful
conditions, mutations of biochemical pathways, etc. they all merely
increase the rate of a particular type of mutational event. They do
not increase the specificity of mutation wrt the organism's need for
particular mutations.

> 2) This question has not been answered so far.

Yes it has. Repeatedly. There is no foresight in nature. Mutation is
random wrt need. And it is selection among variations by the dumb,
unthinking, unintelligent, and uncaring environment that is, well,
specific and selective. Not mutation.

> 3) You cannot decide on this question by merely observing
> the empirical mutation rates in various circumstances,

I agree. Mutation rates, by themselves, cannot tell you whether there
is any correlation between specific mutations and need for those
mutations. But the experiments I described which specifically look for
a correlation between these variables can, regardless of whether or not
you jack up mutation rates. And the results consistently show no
significant interaction between the two variables. Mutations, to
whatever degree of statistical significance that one can measure,
occur independently of the cell's *current or future* need for that
mutation.

> since the network computations may be going on in all
> such circumstances and you don't have a reference figure
> for mutation rates (with and without (a)) to compare the
> empirically observed performance with, in order to decide
> whether (a) or (b) corresponds better to the actual rates.
>
> Hence you need a mathematical/computer model which simulates
> the outcomes of the biochemical processes with/without
> such use of the network computations.

What computations? Biochemical networks do not work by computations.
They work by chemical interactions. You are aware of that, are you
not?

> Only then you
> can compare the outcomes from the model with the outcomes
> from the actual networks and decide whether the actual
> networks use (or need to use) their computations in this
> manner or not.

As I have pointed out, if you want to test whether or not there is any
interaction between the rate of specific mutations and the need for
that specific mutation, you perform controlled experiments where in
identical circumstances, you look at the rate of specific mutation in
non-selective and in selective environments. Or you ask whether there
is induction by specific environments that mutate *specifically* genes
needed for survival in that environment. And, as has been pointed out,
the evidence is quite clear that, at present at least, there is no such
interaction. Mutation occurs randomly wrt need and these randomly
generated mutations then undergo selection by local conditions.

nightlight

Jul 14, 2006, 4:35:07 AM
to
hersheyhv wrote:

> But you are making a *specific* claim for which such variance from site
> to site is irrelevant. You are claiming that there is a correlation by
> which those mutations which produce beneficial effects are *more
> likely* than equivalent mutations which produce detrimental effects
> because there is some mechanism by which these events anticipate future
> need.

No, that's not it. My claim, which is a proposition at the
algorithmic level of abstraction, is that the anticipatory
computational processes by the biochemical network of a cell are
used, among other purposes, to compute and control mutations (and
their repair or non-repair) in a manner which "increases" the
organism's odds of survival & reproduction, where the "increase"
is understood to be with respect to the biochemical network
computations which are not used for such purpose.

Since that actual network algorithm has to either use or not use
its computations in such a manner, the comparison in the above
statement is _by definition_ a comparison of the performance of
the mathematical model of the biochemical process, which (at
the algorithmic level of abstraction) does not use its
computations in such a manner, with the performance of actual
networks.

Namely, if we don't know whether or not the computational
processes in actual networks are used to control mutations,
then the stated "increase" cannot be observed by comparing
the performance or mutation rates of the actual networks
in different situations, without making additional
assumptions, such as presuming to know how the network
would have evaluated gains in different situations, what
its conclusions would be and what actions it _actually_
has available with the resources and tools under its control.
None of that you can know at present.

Your mistake (and of some others here) in evaluating
the ID conjecture (that networks do use their computations
to control mutations) against the empirical observation,
is to consider actual mutation rates at some site(s)
in different circumstances and then you look for:

1) the mutations (or rates) that _your_ network (your
brain) anticipates to be favorable in the circumstances
that _your network_ is observing,

expecting them to occur at a higher rate than they do

2) in some other circumstances, again as observed
by _your network_ and as computed by _your_ network
to be less favorable.

In other words, when evaluating the ID claim against
the empirical observation, you are mixing up the
perception of the situation and the resulting computations
that some other network does (such as your brain) with the
perceptions and the computations that the cellular
biochemical network does.

It is the computations of the cellular network that
would decide what to do about mutations and how, and
not the computations of some other unrelated network
(such as your brain). Note that it is completely
irrelevant here whether your evaluation and your pick
of the optimal mutation action is better or worse than
that of the cellular network. For the actual cell, it
would be its network that evaluates and decides what to
do (if anything), and that is what the empirical mutation
rates & their observed effect refer to.

The cellular network perceives the world through its own
senses (which receive molecules, electric charges/ions,
EM fields/photons, mechanical vibrations,... just as
your senses do), but the internal model (that all such
adaptable networks compute from their inputs, as a
part of their punishments/rewards optimization algorithm),
it has of the external world accessible to its senses is
quite different than your own. Its knowledge of biochemistry
or its means and tools available for executing actions
it computes as optimal are similarly very different than
your own.

To illustrate (1) and (2) above, consider an analogy with
a much smaller gap in the two networks, say you and your
dog. Say, you just came home from a grocery store, bringing
in the bags with a week's worth of supply of meats, fruits,
milk,... and your dog is sitting there on the kitchen
floor, with his head tilted, observing you packing it
all into the fridge. Dog now thinks: boss kill cow,
boss strong, boss good, spunky hungry, boss hungry,
lots of meat, boss spread meat on the floor,
boss rip cow skin, spunky rip cow skin, rip, rip,
boss tear meat, spunky tear meat, crunch, crunch, gulp,
gulp, boss happy, spunky happy, meat all gone,... now
you're shutting the fridge door and leaving the kitchen
and spunky thinks: boss go away, spunky hungry, cat steal
meat, boss dumb, spunky mad, spunky rip cat, spunky
hungry, cat steal meat, spunky kill cat, boss dumb,...

In essence, the reasoning and conclusions you are making
about the cellular networks' actions regarding the
mutations are of the same type as Spunky's
reasoning and conclusions about his boss, except that
the gap between your dog and you is far smaller
than the gap between a human network (brain)
and cellular network, in perception, reasoning, knowledge,...

You're assuming in (1) and (2) that what your network
computed to be the best course of action in those
two situations (differential rate of mutation on
specific sites) is what the actual cellular network
ought to have computed and done, and since it didn't
take those actions (had those mutation rates) that
your network computed as optimal (or at least better
than what it did), you conclude that the network did not
perform any computation and evaluation of those
mutations. Otherwise, you figure, it would have done
what _you_ think is best (which may well be so, but
you are not the one running that cell).

There are many other reasons more plausible than
the presumed computational disinterest of the
cellular network in controlling mutations.
For example, the cellular network may not have perceived
all the properties of the environment that you did
(its perception is of much shorter range than yours).
Hence what your network evaluates as the "need" of
the cellular network may not be what the cellular network
evaluates as its own "need" in that situation.
Or, the particular targeted mutations that you had
in mind may not be feasible in the given situation
with tools & resources under its control. Or, more
likely, it may be feasible via some mechanism it has,
but the invocation of that mechanism may be,
in its view, a very bad idea for the "need" as it sees
it, due to the possible much worse side-effects
(mechanisms of natural networks tend to have highly
overloaded/multipurpose functionality and effects)
i.e. it may see the proposed cure as much worse
than the disease, especially if it doesn't realize
as yet that you're trying to starve it or poison it.

Consider further that we already know that cellular
networks are unrivaled masters of molecular engineering,
with skills and knowledge we can only envy. As noted
earlier, you can gather all of the world's biochemical,
pharmaceutical, molecular biology resources, experts,
equipment, science, money,... together in one big
team with a single task - to design and synthesize
one live cell from inorganic (and possibly some very
basic organic) ingredients. We would all go gray and
die waiting for The One Cell, despite all that great
expertise and vast resources being deployed. Yet,
cellular biochemical networks do it daily and have
been doing it for over a billion years.

Hence, it is quite preposterous to leap to the
conclusion that these same unrivaled masters of
molecular engineering, must be disinterested in
controlling mutations _solely_ based on the fact
that they did not select actions that some
comparatively clumsy greenhorns (not you personally)
are suggesting they ought to have taken.

It would be like a seven year old kid who just
knows how to move chess pieces, watching a world
championship game, and declaring that the champ
is not interested in winning since he doesn't seem
to be picking the moves that kid considers the
best ones.

That's basically what your argument for the RM
conjecture, based on comparisons of observed
mutation rates in different environments, amounts
to, except that, once more, the gap is much bigger
on the biological side of the analogy (we don't
even know how all the pieces move in our game).

You and others also complain that my ID claim is
at the algorithmic level, so it must be empirically
meaningless.

The algorithmic level of abstraction is simply a
variant of general mathematical modeling. On one
side you have the abstract mathematical model and
its little mathematical 'gears', on the other side
you have real phenomena and their empirically
observed properties. The 'operational rules' connect
(perform mapping between) the elements of the two
realms, so that some formal elements of the model
space correspond to some empirical elements of the
real world. Depending on the level of abstraction
or detail of a particular model, the model may not
have corresponding elements for many of empirical
elements. A highly abstract model may not see much
of the fine grained detail of the empirical realm.
That lack of detail does not imply that a model is
unscientific.

Namely, what makes a model scientific is that the
model predicts at least some relations between the
abstract elements of the model realm, which can
be compared (via operational mappings) to the
relations between the corresponding elements
in the empirical realm. That comparison represents
a test of a model, hence the model is falsifiable
by the empirical facts.

So, what I am saying is simply that at this level
of abstraction of the biochemical phenomena in a cell,
the following relation between the elements of the
abstract model must hold: the cellular algorithms
must evaluate and control via some anticipatory
algorithms the possible mutations for "usefulness",
where "usefulness" is understood in the sense of
'usefulness according to the knowledge of the
cellular network' (i.e. according to its 'utility
function' which encapsulates its knowledge of
its environment and of itself, i.e. all patterns
and laws of biochemistry & physics which are
relevant to a cell as understood by the cellular
network; of course these would not look anything
like the laws of biochemistry and physics as we
know them, although there would be some partial
correspondence, with lots of facts and patterns
on either side missing from the other).

In order to explain the empirical content of the
above conjecture in a more direct way, hence without
using the previously stated criterion, I will consider
again the chess program running on a conventional
computer. This is a useful analogy since both levels
of abstraction: the algorithmic and the electrical or
physical, are completely understood (since humans
designed and constructed the computer and the program).

Would it be meaningless (empirically empty) to
say at the algorithmic level, that the particular
chess program, playing at its tournament level,
uses ten move full width look-ahead with twenty
five move selective look-ahead extensions, to
evaluate and select its best move? This is analogous
to the ID conjecture stated above at the algorithmic
level, where ID says that algorithms executed by
the cellular network perform look-ahead evaluations
and control of the mutations.
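
At that level of abstraction, a full-width look-ahead is a
well-defined object regardless of the hardware underneath. A
minimal sketch of the idea (a generic depth-limited negamax
applied to a toy take-away game, not an actual chess engine,
and without the selective extensions mentioned above):

```python
def negamax(state, depth, moves, apply_move, evaluate, is_terminal):
    """Depth-limited, full-width look-ahead; the score is always
    from the point of view of the player to move."""
    if depth == 0 or is_terminal(state):
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for m in moves(state):
        child_score, _ = negamax(apply_move(state, m), depth - 1,
                                 moves, apply_move, evaluate, is_terminal)
        score = -child_score  # opponent's best is our worst
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Toy game: a pile of stones, remove 1-3 per turn, last stone wins.
moves = lambda n: [m for m in (1, 2, 3) if m <= n]
apply_move = lambda n, m: n - m
is_terminal = lambda n: n == 0
evaluate = lambda n: -1 if n == 0 else 0  # player to move at 0 has lost

score, best = negamax(10, 10, moves, apply_move, evaluate, is_terminal)
# From 10 stones the search finds a win by taking 2 (leaving 8).
```

The statement "this searches ten plies full width" is meaningful
for this sketch whether it runs on silicon, relays, or anything
else that realizes the algorithm.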

How would one empirically test the algorithmic
statement about the chess program, using only
electrical measurements on the computer hardware?
It would be, of course, comparatively easy to check
that conjecture by obtaining the source code and
examine its search function. Or even easier if
the program displays such information (which it may
not do in tournament mode; some programs even display
deliberately false search depth information to mislead
the competitors).

But in order to remain close to the relation with
the biological systems, where we don't have the "source
code" for the program running on cellular network,
we have to stick within our analogy to the physical
or electrical measurements alone. Hence we would
have to measure a great many pulses, find correspondence
between the processor's machine instructions and
the pulses at the CPU pins. Assuming that we already
understand the basic computer hardware, we could
also reconstruct the content of any memory location
from the electric pulses alone. In fact, hardware
analyzers do precisely this type of reconstruction
of the instruction stream and the memory & CPU
content -- we hook a probe on the CPU, which intercepts
and analyzes all the CPU signals to/from memory.
From these signals (and its model of the CPU
& memory signals) the analyzer computes what
instructions are being executed and what is the
content of memory locations and CPU registers.

Once we have a stream of machine instructions, one
could proceed with 'disassembly' of the code where
one looks at the assembly/machine instructions and
memory values being read and written, and deduces,
based on knowledge of algorithms and observed
functionality of the program, what it is that
the machine code is doing.

Once the machine code is transformed into a higher
level language, such as C or Java, we would have
to look at next higher layer, the chess algorithm
proper. One has to understand and conjecture how
chess algorithms might work and try out various
conjectures while running the program on specific
chess tasks. Top programs are usually targets of
this kind of reverse engineering by the competitors
who wish to find out the tricks and secret
algorithms behind their strength. With the rough chess
algorithm deciphered, one then flowcharts the
chess aspect, its search logic, evaluation/utility
function and the meaning of its terms, etc. At
that level of abstraction we have a direct,
unambiguous answer to the original question
about the depth of its full width search and of
its selective extensions.

Of course, science is still far away from this
level of reverse engineering of the algorithms
performed by the cellular biochemical networks
(which is why the criterion I suggested wasn't
based on reverse engineering). The network algorithms
are also not nearly as transparent as those for
conventional computers. The only networks whose
algorithms we can analyze at present at all
layers of abstraction are the 'neural networks'
(which simulate much more complex natural
networks), where we can examine the network
operation step by step, as it learns and then
as it applies its knowledge to different inputs.

These networks learn by being exposed to
some input signals, to which they respond
producing some output signals, and for each
such response they receive some punishment or
reward (e.g. a separate input to the network),
then based on punishment/reward, they _slightly_
modify the network link strengths so that if
they were to receive the same input again
their response would have received _slightly_
less punishment or more reward (if they were to
optimize maximally for each input, that would
cause much greater cost on the rest of input
patterns, so each modification of link strengths
is just a tiny nudge toward slightly more
favorable responses). After some number of passes,
the network learns to respond nearly optimally to
not just the inputs it was trained on but to new
ones which it hasn't seen before (assuming the
source is "lawful", i.e. it produces "lawful"
or "learnable" input patterns).
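
That training loop can be sketched with a single linear unit (the
task and all numbers are made up for illustration; the 'punishment'
is just the signed error, and each update is deliberately a tiny
nudge rather than a full correction):

```python
import random

def train(samples, epochs=200, lr=0.05, seed=0):
    """Each response earns an error signal (the 'punishment'), and
    the link strengths get a tiny nudge toward a slightly better
    response -- never a full correction in one step."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(samples[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = target - out            # punishment/reward signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                 # tiny nudge, not a jump
    return w, b

# Learn an OR-like mapping from its four training patterns:
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(samples)
```

After many small nudges the unit responds nearly optimally to the
whole pattern set, even though no single update aimed at that.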

The knowledge and skills they learn can be
described at several levels. At the lowest
level is the list of links (which node is
connected to which other nodes and in which
direction) and their strengths. Although this
information fully specifies all that network
knows and does, it is for us an opaque form of
knowledge. At a higher level, one can identify
the differential/difference equations that the
network is effectively solving while seeking
the optimum of some 'utility function' (defined
by punishments & rewards). At the next higher
level of abstraction, one can identify the
correspondence between the network activity
patterns (combinations of signal strengths at multiple
locations) and the actual patterns in the network
environment.

For dynamical multi-element environments (such as
particles or interacting objects or agents), the next
abstraction layer is identification of the network
patterns corresponding to the external agents and to
their kinematic & dynamical properties (such as
positions, velocities, forces...). A network which has
learned to balance a standing 'broomstick' (constrained to
motion in a plane) on a moving platform controlled by
the network outputs was demonstrated at a conference some
years ago. Externally this appears as balancing of a
standing broomstick on the palm of your hand.

This network had an internal representation for the
broomstick, its position & angular velocity, along
with the basic bit of physics of this system, plus
a representation for the platform itself, which is the
'self' actor within its internal model of its own little
universe. That was all created spontaneously by the
network, by its simple, tiny nudges to the link strengths,
based on punishments received whenever the broomstick
falls down.

At the highest level of abstraction, its algorithm was
anticipatory, looking ahead only a couple of tiny steps
(corresponding to the 2nd order differential equation).
In its model space, broomstick element moved based on
its bit of physics, while platform element (self) was
picking velocity nudges (among a discrete set of
values allowed by the mechanical constraints). After a
platform velocity nudge, the broomstick element was evolved
one more time step based on the new platform speed and
its bit of physics.

The network was thus running the internal model of its
little world and of self-actor (platform) forward in
time, trying out different self-actor actions (platform
velocity nudges), observing the subsequent reaction of
the model world (broomstick position & velocity), for
a couple of time steps into the future. The best
self-actor action from the model's version of the
future was then executed in the real world, as the
best (to its knowledge) velocity nudge to the
actual platform.
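
That look-ahead loop can be sketched as follows (all dynamics and
constants are invented for illustration: a simplified planar
inverted pendulum whose pivot plays the 'platform', not the
network from the actual demonstration):

```python
import math

def step(theta, omega, push, dt=0.02, g=9.8, length=1.0):
    """One Euler step of the internal model: a stick whose pivot
    is accelerated horizontally by `push`; theta is the lean angle
    from vertical, omega its angular velocity."""
    alpha = (g * math.sin(theta) - push * math.cos(theta)) / length
    omega += alpha * dt
    theta += omega * dt
    return theta, omega

def best_nudge(theta, omega, nudges=(-5.0, 0.0, 5.0), horizon=2):
    """Try each allowed platform nudge inside the model, run the
    model a couple of time steps forward, and keep the nudge that
    leaves the stick closest to upright."""
    def final_lean(push):
        t, o = theta, omega
        for _ in range(horizon):
            t, o = step(t, o, push)
        return abs(t)
    return min(nudges, key=final_lean)

# Stick leaning to the right: the model picks a rightward nudge.
choice = best_nudge(theta=0.1, omega=0.0)
```

The chosen nudge is then executed "in the real world", exactly the
two-phase pattern (model rollout, then action) described above.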

As you realize by this point, it may take quite a bit
of time and effort before we can examine the processes
of the cellular biochemical networks at this level of
abstraction. However, this level of abstraction and
the mathematical model space of the biochemical network
do exist as mathematical/algorithmic constructs along
with some general properties common to the algorithms
used by the adaptable networks, even though we don't
presently know much about their details. We do know, for example,
that they are optimizing their actions to some punishments
& rewards, that they will have internal models of their
world, the environment and their self-actor, that they
will run these models forward in time in order to pick
the "best" actions by the self-actor, where "best" means
as decided by their utility function, which in turn
encapsulates their knowledge about patterns in their
inputs (laws of physics & biochemistry in their
internal world)... etc.

Hence, we can formulate conjectures about the general
properties of the algorithms in this model space, even
though we have no way, at present, of specifying details
for the fully functioning instances of such algorithms
(in the model space), or establishing the complete
operational mapping between the elements of the model
space and their real world biochemical counterparts.
At this abstraction level, both RM and ID are simply
conjectures about very general properties of these model
space algorithms, in particular, whether these algorithms
perform any computations and actions aiming to optimize
consequences of the mutations.

The ID answer is the most natural and plausible one: of course,
they do, since the consequences of the mutations certainly
contribute to their punishments and rewards, hence the
general network optimization property implies that they
must be accounting for this term in the total punishments
and rewards they are optimizing their actions to. With
the truly accidental mutations, those induced by the
external causes (e.g. as UV) which are outside of their
control, the most that the optimization can do is try
a repair. But for all the mutations which are caused by
the physical/chemical conditions of the network itself
at the mutation site, they do have such control (to the
extent consistent with the laws of physics). How much
they can do and what they can do about these mutagenic
conditions depends on how much "lawfulness" exists in
their inputs and in the consequences of the mutations.
Whatever regularity or pattern there may be in their
inputs, they are probably well optimized to take
advantage of them.

In contrast, the Random Mutation (RM) conjecture appears
at this same level of abstraction as extremely artificial,
almost capricious: the network is allowed to optimize to
any other punishments and rewards, utilizing any patterns
in all of its inputs, except for those corresponding to
the mutations and their consequences. These must be
off limits. Why? Just because.

Then came Cairns' experiments & follow-ups. Oops. Time to
revise RM1 and move on to RM2: the only pattern in the
network inputs that RM2 allows it to include into its
optimization algorithm and to respond with mutations is
the 'general stress' pattern, and the only mutation action
allowed to the network is the general (or nearly so)
increase in the mutation rates.

Clearly, the ID position, expressed at this level of
abstraction, is a much more coherent and principled one. But
since the present state of the knowledge is still insufficient
to exclude the post-Cairns RM2, this quirky conjecture lingers
on as a far fetched, almost silly, theoretical possibility,
upheld only by the sheer religious zeal of its priesthood.

At present, only the coarse grained, statistical and static
properties of the biochemical reaction networks have been
explored, and only very simplified (toy) models of their
dynamics are being simulated on computers. You can check
on the SFI site for a lot more about this research and the
perspective on biology (labeled as Complexity Science, which
has the ID perspective as its asymptotic value, and from
which it will become indistinguishable in a few years):

http://www.santafe.edu/research/publications/working-papers.php

and some papers scattered across the arXiv sections:

http://arxiv.org/list/q-bio/new
http://arxiv.org/list/nlin/new
http://arxiv.org/list/cs/new


nightlight

Jul 14, 2006, 5:50:11 AM
hersheyhv wrote:

> What computations? Where in the entire rest of the unaltered cell are
> these "computations" taking place? Do you actually know *anything*
> about biochemistry or are you merely thinking that life is nothing but
> a computer?

> ...


>
> What computations done by what networks? Biochemical pathways do not
> compute (pun intended).

> ...


> What the bloody f**k computations are you talking about? Biochemicals
> certainly interact and affect each other, but they don't do
> computations.

> ...

Where does your brain compute? By your logic it can't compute
since all that is going on in there are physical processes
in your brain cells. Therefore it cannot be computing at the
same time. How could it?

The physical processes in your neurons are one pattern in
the phenomenon. Another pattern in the same phenomenon,
when viewed at a higher level of abstraction, is the
computation that your brain does. One pattern does not
exclude the other. They are just different and mutually
harmonious descriptions of the same (confused) brain.

This is no different than looking at your computer screen
which consists of a million or so tiny pixels, with each
(x,y) screen position having a pixel with its own color
C(x,y). The list of all C(x,y) specifies exactly what is on
the screen. Nothing else but all of C(x,y) is needed to uniquely
specify the content of the screen. Yet, higher-level patterns
of these same pixels are letters, words, sentences. At an even
higher level, what is on your screen is a post from
talk.origins newsgroup... All these descriptions of the
same screen, each focused on patterns at its own level
of abstraction (while ignoring details from the lower
levels, or further patterns from the higher levels),
describe your screen perfectly harmoniously and autonomously.
They don't contradict or exclude each other, as you somehow
seem to imagine.
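
The same point can be made with a trivial sketch: one and the same
list of values, each description complete at its own level of
abstraction (the byte values below are an invented example):

```python
# One list of values, read at three levels of abstraction:
values = [84, 104, 101, 32, 115, 99, 114, 101, 101, 110]

low_level = values                        # raw "pixel" values C(x,y)
mid_level = [chr(v) for v in values]      # the same data, as characters
high_level = "".join(mid_level)           # the same data, as a phrase

# Nothing at the higher levels contradicts the lower-level list;
# the descriptions coexist harmoniously.
```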

By your logic, this sentence on your screen contains no word
"sentence", since there is no sentence on the screen at all,
but just the pixels C(x,y), which describe with maximum
precision, exactly what is on your screen.

You seem to have had a mental reboot, one of several in this
discussion, resetting yourself back to the same hopeless
confusion we clarified already (after which you appeared to
be 'computing' just fine for a day or two):

http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2

and about biochemical networks & computations they do (with links):

http://groups.google.com/group/talk.origins/msg/cfaee59d8c5e179e


David Iain Greig

Jul 14, 2006, 10:05:20 AM
nightlight <nightli...@skip.omegapoint.com> wrote:
> hersheyhv wrote:
>
>> What computations? Where in the entire rest of the unaltered cell are
>> these "computations" taking place? Do you actually know *anything*
>> about biochemistry or are you merely thinking that life is nothing but
>> a computer?
> > ...
> >
> > What computations done by what networks? Biochemical pathways do not
> > compute (pun intended).
> > ...
> > What the bloody f**k computations are you talking about? Biochemicals
> > certainly interact and affect each other, but they don't do
> > computations.
> > ...
>
> Where does your brain compute? By your logic it can't compute
> since all that is going on in there are physical processes
> in your brain cells. Therefore it cannot be computing at the
> same time. How could it?

The problem with using private languages (jargon) is that when
other people fail to comprehend them, you don't have the right to
hector them for it.

--D.

nightlight

Jul 14, 2006, 2:22:55 PM

If this were some kind of _personal_ private language,
one might expect the kind of obtuse intolerance exhibited
by some of the members here toward my posts. But the
networking language (mathematical & algorithmic) and
the related algorithmic level of abstraction for the
processes that were seen until recently as mere
biochemistry and nothing more, is not something
I just made up.

If I were writing this in the late 1980s, when the early papers
on this theme started appearing (from the SFI folks, many
of them still scattered at the time at various universities),
it wouldn't have been a surprise to find that people are
generally uninformed on these 'latest' developments on
some obscure frontier.

But nearly two decades later, with many thousands of papers
and books published on these topics (lots of them online),
one would expect, especially here in talk.origins, that most
people interested in the topics discussed, would have been
a bit more _up to date_ on this very active and pertinent
area of research:

== Biochemical Networks
http://www.google.com/search?num=100&hl=en&lr=&safe=off&q=%22biochemical+networks%22&btnG=Search
http://www.google.com/search?num=100&hl=en&lr=&safe=off&q=%22biochemical+reaction+networks%22&btnG=Search

== Autocatalytic Networks
http://www.google.com/search?hl=en&q=%22autocatalytic+network%22&btnG=Google+Search
http://www.google.com/search?hl=en&q=%22reaction+networks%22&btnG=Google+Search
http://citeseer.csail.mit.edu/cs?cs=1&q=autocatalytic+network&submit=Documents&co=Citations&cm=50&cf=Any&ao=Citations&am=20&af=Any

== Complexity Science
http://www.santafe.edu/research/publications/working-papers.php
http://arxiv.org/find/grp_q-bio,grp_nlin/1/ti:+AND+complex+system/0/1/0/all/0/1
http://arxiv.org/find/grp_q-bio/1/ti:+network/0/1/0/all/0/1
http://www.google.com/search?num=100&hl=en&lr=&safe=off&q=%22complexity+science%22&btnG=Search

== Neural Networks
http://www.google.com/search?num=100&hl=en&lr=&safe=off&q=%22neural+network%22&btnG=Search
http://citeseer.csail.mit.edu/cs?q=neural%20network&cs=1&submit=Search+Documents&af=Header&ao=Citations&am=20


hersheyhv

Jul 14, 2006, 3:34:46 PM
nightlight wrote:
> hersheyhv wrote:
>
> > But you are making a *specific* claim for which such variance from site
> > to site is irrelevant. You are claiming that there is a correlation by
> > which those mutations which produce beneficial effects are *more
> > likely* than equivalent mutations which produce detrimental effects
> > because there is some mechanism by which these events anticipate future
> > need.
>
> No, that's not it. My claim, which is a proposition at the
> algorithmic level of abstraction,

Why should anyone give a flying f**k for your algorithmic level of
abstraction if it cannot be made real or have an effect at the level of
actual material consequences? If there is no evidence (and much
counterevidence) that mutation is anything but random wrt need, that
means that your algorithmic abstractions have no testable consequences.

> is that the anticipatory
> computational processes by the biochemical network of a cell

What "anticipatory computational processes by the biochemical network
of a cell"? You have yet to demonstrate that a cell even *has* a
"computational process" much less an anticipatory one. All you have
presented is some New Age mumbo-jumbo verbal mysticism. Apparently you
think that bacteria (or rather its 'biochemical network') has
consciousness and can intelligently and willfully produce features it
needs. That is New Age mumbo-jumbo that ascribes thoughts and feelings
to crystals.

> are
> used, among other purposes, to compute and control mutations (and
> their repair or non-repair) in a manner which "increases" the
> organism's odds of survival & reproduction, where the "increase"
> is understood to be with respect to the biochemical network
> computations which are not used for such purpose.

Not all regulation requires computation. If I jump into a bathtub, the
water level will rise by a volume equal to the volume I displace
without any part of the system doing any computations. In this case,
the consequence is a result of the operation of laws of physics.

If I flick an open flame in the presence of certain concentrations of
hydrogen and oxygen, I will generate water by the molecules undergoing
a rearrangement. But no intelligence is involved in the rearranging.
Just the laws of chemistry.

Similarly, cells trigger repair and regulate expression not by
"computation" but by the presence, absence, or amount of environmental
factors interacting with the pre-existing cell's biochemistry. No
thought processes involved at all. In fact, I can intentionally make a
cell do self-destructive or useless things by artificially adding the
regulatory factor in the absence of the need for the process. For
example, the trigger that causes up-regulation of the lac operon is not
lactose, but a minor byproduct of lactose. And I can add that
by-product (or a compound that mimics it) which is not utilizable
(called a gratuitous inducer) and this will cause the dumb, stupid,
ignorant cell to produce lactose enzymes to the point where 10% of its
protein mass consists of these useless proteins.
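
The regulatory logic being described is a bare condition-response
rule, which can be sketched as a deliberately crude toy (real lac
regulation also involves CAP/cAMP; catabolite repression is glossed
here as a single flag):

```python
def lac_expression(inducer_bound, glucose_present=False):
    """Toy rule for the lac operon: expression tracks whether the
    inducer is bound, not whether the sugar is actually usable."""
    if glucose_present:
        return "low"                      # catabolite repression wins
    return "high" if inducer_bound else "basal"

# A gratuitous inducer (a non-metabolizable mimic such as IPTG)
# still flips the switch, so the cell wastes effort on useless enzymes:
wasteful = lac_expression(inducer_bound=True)
```

The rule fires on the trigger molecule alone, with no evaluation of
whether responding is useful, which is the point being made above.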

> Since that actual network algorithm has to either use or not use
> its computations in such a manner, the comparison in the above
> statement is _by definition_ a comparison of the performance of
> the mathematical model of the biochemical process, which (at
> the algorithmic level of abstraction) does not use its
> computations in such a manner, with the performance of actual
> networks.

Yet you are *specifically* claiming that this computation, whatever it
involves, is supposed to lead to a specific effect. Namely that
specific mutations are supposed to differentially arise as a
consequence of need rather than arise randomly wrt need. That is the
claim you need to test. It has been tested and found to be false for
essentially all cases tested. Mutation occurs at random wrt need. The
determination of beneficial, detrimental, or neutral (and quantitative
amount of benefit or detriment) involves looking at the interaction of
a variant with a specific local environment. It is not empirically
determinable in the abstract (although one can make an educated guess
based on other knowledge).

> Namely, if we don't know whether or not the computational
> processes in actual networks are used to control mutations,
> then the stated "increase" cannot be observed by comparing
> the performance or mutation rates of the actual networks
> in different situations, without making additional
> assumptions, such as presuming to know how the network
> would have evaluated gains in different situations, what
> its conclusions would be and what actions it _actually_
> has available with the resources and tools under its control.
> None of that you can know at present.

And knowledge of the rate of mutation is not needed to determine
whether or not specific mutations occur at random wrt need. What is
needed is a way of measuring that specific mutation in both selective
(need) or non-selective (no need) conditions.

Again, someone who is positing conscious and intelligent *foresight* in
nature (other than when the behavior is done by organisms with minds)
has a tough row to hoe. Even many organisms with minds do not exhibit
much conscious or intelligent "foresight" in their behaviors. Witness
the poor cicada wasp, which can be made to repeat a behavior (going back
down into its previously dug burrow to inspect it before dragging in the
paralyzed cicada it has hauled to the mouth of the burrow) by
the simple expedient of moving the cicada a few inches away from the
mouth of the burrow while the wasp is inspecting the future baby room.
The wasp will ignorantly and repeatedly bring the cicada back to the
mouth of the burrow and then go back down to inspect it.

Computation of the sort involving the ability to achieve foresight is
an emergent property, not an inherent one. Biochemicals do not, by
themselves or in most networks, have computational ability. They can
be arranged to produce computational results (such as those networks
that use biochemical rates or position to "tell" time). But those
computations are emergent features and not inherent properties of all
biochemistry.

> Your mistake (and of some others here) in evaluating
> the ID conjecture (that networks do use their computations
> to control mutations) against the empirical observation,

We are not saying that overall mutation rate or the rate of specific
mutations is unaffected by the biochemistry of the cell. They
certainly are. We are saying that these mutation rate changes are not
"need specific" changes wrt specific genes of need. The cell is not
saying (since you seem to ascribe mystical consciousness to cells, I
will go along), "Hmmm. I need to mutate the his4 gene in order to
survive in this environment, so let's *specifically* mutate this gene I
need." Rather the cell is saying, in an utter panic, "God. I am
dying. Let's jack up my total mutation rate for all mutations. Maybe
one of the mutational mudpies will stick to the wall and save me, even
at the cost of many deleterious mutations produced by the same process.
A rising tide lifts all boats."

The first idea would involve a cell exhibiting foresight and producing
mutations according to need. The second involves a cell merely
increasing total mutation rate (this could merely be an inadvertent
consequence of being in the state of dying) with the possibility that
one of the *randomly generated* mutations will be, *after it occurs*,
selectively useful in that local environment.

> is to consider actual mutations rates at some site(s)
> in different circumstances and then you look for:
>
> 1) the mutations (or rates) that _your_ network (your
> brain) anticipates to be favorable in the circumstances
> that _your network_ is observing,
>
> expecting them to occur at a higher rate than they do
>
> 2) in some other circumstances, again as observed
> by _your network_ and as computed by _your_ network
> to be less favorable.
>
> In other words, when evaluating the ID claim against
> the empirical observation, you are mixing up the
> perception of the situation and the resulting computations

I am, in the experiments I have been describing, directly testing
whether the mutations occur via an *intelligently designed* process
whereby the specific mutation of need is *preferentially* produced at,
or in anticipation of, the need for that mutation. This is in contrast
to the idea that mutations are produced at random wrt need and only
*after* being produced, is there any differential or preferential
process occurring, namely selection by local conditions. The facts
favor the latter process. Mutations are *generated* without respect to
their need. One may change the overall *rate* of mutation, but not the
specificity wrt need. Selection is an independent process that can
only select among the randomly *generated* mutations. Selective
environments cannot even induce the specific mutations needed. All it
can do is affect the rate of random mutation without affecting the
randomness of the mutation. There is no known mechanism for the
anticipation of need. Those are the two possibilities you are claiming
happens.

> that some other network does (such as your brain) with the
> perceptions and the computations that the cellular
> biochemical network does.

No, I am not. You are the one ascribing to a dumb biochemical network
properties that only occur in the emergent features of a conscious and
intelligent brain. A conscious and intelligent brain certainly can
perform behaviors that require foresight. A conscious and intelligent
brain can certainly be induced to perform behaviors that adapt an
organism to environmental conditions. But biochemical networks are not
brains or computers, although both brains and computers are emergent
properties that have arisen from biochemistry.

> It is the computations of the cellular network that
> would decide what to do about mutations and how, and
> not the computations of some other unrelated network
> such as your brain).

And what the cell (or cellular network, if you wish) does mutationally
in relation to need for that mutation is exactly what the
Luria-Delbruck experiment (and all the subsequent related experiments)
tests. These experiments are not testing any unrelated network; just
the interaction of the mutational process with selective (or
non-selective) environments.
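The logic of that fluctuation test can be sketched in a toy simulation
(illustrative, made-up parameters; not the original analysis). If
mutants arise at random during growth, rare early mutations found large
"jackpot" clones, so mutant counts across parallel cultures show a
variance far exceeding the mean; if the selective plate itself induced
the mutations, each cell would mutate independently at plating and the
counts would be roughly Poisson (variance about equal to the mean):

```python
import random

random.seed(1)

MU = 1e-3          # mutation probability per daughter cell (made-up value)
GENERATIONS = 12   # doublings per culture: 1 cell -> 4096 cells
CULTURES = 200     # number of parallel cultures, as in a fluctuation test

def grow_culture():
    """Grow one culture from a single cell. Each daughter cell may
    mutate to resistance with probability MU; mutants breed true, so
    an early mutation founds a large "jackpot" clone."""
    normal, mutant = 1, 0
    for _ in range(GENERATIONS):
        offspring = 2 * normal
        new_mutants = sum(random.random() < MU for _ in range(offspring))
        mutant = 2 * mutant + new_mutants
        normal = offspring - new_mutants
    return mutant

def fano(counts):
    """Variance-to-mean ratio; close to 1 for a Poisson process."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

# Hypothesis 1: mutations arise at random during growth, before selection.
random_counts = [grow_culture() for _ in range(CULTURES)]

# Hypothesis 2: the selective plate induces mutation -- each final cell
# mutates independently, with the probability matched to the same mean.
final_cells = 2 ** GENERATIONS
p_induced = MU * GENERATIONS
induced_counts = [sum(random.random() < p_induced for _ in range(final_cells))
                  for _ in range(CULTURES)]

print("random-during-growth Fano factor:", round(fano(random_counts), 1))
print("induced-at-plating   Fano factor:", round(fano(induced_counts), 1))
```

The first Fano factor comes out far above 1 and the second close to 1,
which is the signature Luria and Delbruck actually observed in bacteria.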

> Note that it is completely
> irrelevant here whether your evaluation and your pick
> of the optimal mutation action is better or worse than
> that of the cellular network. For the actual cell, it
> would be its network that evaluates and decides what to
> do (if anything), and that is what the empirical mutation
> rates & their observed effect refer to.

And the evidence shows that, although cells can increase or decrease
(regulate) overall mutation rates, they cannot generate specific
mutations according to need for that mutation. No preference is given
to beneficial mutations. No dispreference is given to detrimental
mutations. The adjectives are determined afterward by the interaction
of the *randomly generated* variant with the local environment.

> The cellular network perceives the world through its own
> senses (which receive molecules, electric charges/ions,
> EM fields/photons, mechanical vibrations,... just as
> your senses do), but the internal model (that all such
> adaptable networks compute from their inputs, as a
> part of their punishments/rewards optimization algorithm),
> it has of the external world accessible to its senses is
> quite different than your own. Its knowledge of biochemistry
> or its means and tools available for executing actions
> it computes as optimal are similarly very different than
> your own.

Where have you expressed any real knowledge of biochemistry? I
understand full well how a cell perceives its environment and how
cellular biochemistry is regulated. No thought process in the cell is
involved. No computations are involved. Amounts, presence, or absence
of allosteric regulators are involved. But these engender simple
mindless unintelligent consequences, not any sort of consequence that
involves intelligent weighing of future consequences. And, to repeat,
in all cases, mutations are generated at random wrt need. The only
process that discriminates between mutations is selection, which is an
independent process that requires that the variation already exist.
Selection does not produce specific needed variation. It only
discriminates among randomly generated variation and can only work on
variants that actually exist at (or get generated randomly during) the
time of selection.

> To illustrate (1) and (2) above, consider an analogy with
> a much smaller gap in the two networks, say you and your
> dog. Say, you just came home from a grocery store, bringing
> in the bags with a week's worth of supply of meats, fruits,
> milk,... and your dog is sitting there on the kitchen
> floor, with his head tilted, observing you packing it
> all into the fridge. Dog now thinks: boss kill cow,
> boss strong, boss good, spunky hungry, boss hungry,
> lots of meat, boss spread meat on the floor,
> boss rip cow skin, spunky rip cow skin, rip, rip,
> boss tear meat, spunky tear meat, crunch, crunch, gulp,
> gulp, boss happy, spunky happy, meat all gone,... now
> you're shutting the fridge door and leaving the kitchen
> and spunky thinks: boss go away, spunky hungry, cat steal
> meat, boss dumb, spunky mad, spunky rip cat, spunky
> hungry, cat steal meat, spunky kill cat, boss dumb,...
>
> In essence, the reasoning and conclusions you are making
> about the cellular networks' actions regarding the
> mutations is of the same type as the Spunky's
> reasoning and conclusions about his boss, except that
> the gap between your dog and you is far smaller
> than the gap between a human network (brain)
> and cellular network, in perception, reasoning, knowledge,...

The cellular network of biochemistry has no consciousness at all, no
ability to reason, no ability to generate alternate consequences by
foresight. It is dumber and less intelligent, in other words, than a
dog. Hell, it is dumber and less intelligent than the cicada wasp.

> You're assuming in (1) and (2) that what your network
> computed to be the best course of action in those
> two situations (differential rate of mutation on
> specific sites) is what the actual cellular network
> ought to have computed and done, and since it didn't
> take those actions (had those mutation rates) that
> your network computed as optimal (or at least better
> than what it did), you conclude that network did not
> perform any computation and evaluation of those
> mutations.

All I ask is that there be *some* causal effect linking the generation
of mutations and the need for mutation. For all the observers knew,
the relationship could have gone the other way, with cells selectively
producing non-useful phenotypes. The fact remains that the generation
of variants occurs at random wrt need for *any* variant. "Need" is
defined as *any* change which allows survival/reproductive success. We
are observing specific variant phenotypes, such as resistance, not
specific physical mutations (although the effect is often easier to see
when only one possible mutation produces the variant phenotype). Since
the variables of "mutation generation" and "selective need for
mutation" have been demonstrated to be independent variables (at the
level of sensitivity of the experiments performed), there is no need to
posit a causal relationship, regardless of whether you posit that
causal relationship being foresightful (requiring some outside agent
unless you are attributing such high-level intelligence seen only in
humans and closely related organisms to a cell) or induced (the only
mechanism that has seriously been considered by scientists, since
scientists tend not to be New Age mystics attributing superhuman
intelligence to an amoeba).

> Otherwise, you figure, it would have done
> what _you_ think is best (which may well be so, but
> you are not the one running that cell).

No. The cell could produce the favorable (favorable being defined as
that which improves reproductive fitness) phenotype by any mutational
mechanism it chooses. The fact remains that all mutations are
generated at random wrt need. When more than one mutation can produce
a favorable phenotype, all such mutations are considered.

> There are many other reasons more plausible than
> the presumed computational disinterest of the
> cellular network in controlling mutations.
> For example, cellular network may have not perceived
> all the properties of the environment that you did
> (its perception is of much shorter range than yours).

Quite true. Which is why the only seriously considered alternative to
random generation of mutation followed by selection has been induction
of specific needed mutation by a selective environment rather than
foresightful anticipation of future need. The fact remains, however,
that neither alternative has any support because, in specific tests of
any causal or even correlational interaction between the two variables,
scientists have almost uniformly found no detectable interaction.

> Hence what your network evaluates as the "need" of
> cellular network, may not be what cellular network
> evaluates as its own "need" in that situation.
> Or, the particular targeted mutations that you had
> in mind may not be feasible in the given situation
> with tools & resources under its control.

In *every* case, regardless of what the favored variation or the
organism is? Do you have *any* example of *any* mutant phenotype that
occurs as a function of or correlated to need for that mutant
phenotype? [Again, defining need as being *any* variant phenotype that
increases reproductive success in the described environment.]

> Or, more
> likely, it may be feasible via some mechanism it has,
> but the invocation of that mechanism may be,
> in its view, a very bad idea for the "need" as it sees
> it, due to the possible much worse side-effects
> (mechanisms of natural networks tend to have highly
> overloaded/multipurpose functionality and effects)
> i.e. it may see the proposed cure as much worse
> than the disease, especially if it doesn't realize
> as yet that you're trying to starve it or poison it.

Worse than certain death if it doesn't have the variant phenotype?

> Consider further that we already know that cellular
> networks are unrivaled masters of molecular engineering,

I don't know this. I have never seen any evidence of a cellular
network that has *intelligently* generated any aspect of "molecular
engineering". Cellular networks are certainly wonderful examples of
molecular biochemistry working for a purpose. But that doesn't tell us
how they were constructed and certainly doesn't imply that they have
the necessary "intelligence" to, with consciousness, decide to
"intelligently" self-generate changes. What we do know is that random
changes do occur and the variations produced undergo stringent
selection by local conditions.

> with skills and knowledge we can only envy. As noted
> earlier, you can gather all of the world's biochemical,
> pharmaceutical, molecular biology resources, experts,
> equipment, science, money,... together in one big
> team with a single task - to design and synthesize
> one live cell from inorganic (and possibly some very
> basic organic) ingredients. We would all go gray and
> die waiting for The One Cell, despite all that great
> expertise and vast resources being deployed. Yet,
> cellular biochemical networks do it daily and have
> been doing it for over a billion of years.

Over 3.5 billion years. But the first cell was not the modern cell.
Modern cells, including modern bacterial cells, have evolved over those
3.5 billion years. Take all the Renaissance geniuses and artisans and
put them in a room with a Lexus and ask them to create a new one and
they also would be unable to do so. Not because they are less
intelligent than the workers of today but because they lack the skills
and knowledge to do a job that would be easy for someone today. And
the modern Lexus has evolved (in the sense that human artifacts evolve
by improvement via trial and error under the manufacture of a known
outside agent, not by the mechanism that cells evolve, involving trial
and error in a self-replicating genomic organism) beyond the carriages
that they are familiar with.

> Hence, it is quite preposterous to leap to the
> conclusion that these same unrivaled masters of
> molecular engineering, must be disinterested in
> controlling mutations _solely_ based on the fact
> that they did not select actions that some
> comparatively clumsy greenhorns (not you personally)
> are suggesting they ought to have taken.

You keep anthropomorphizing this mystical "biochemical network" into a
super-intelligent agent with foresight and consciousness. Stop it
unless you can present evidence for either foresight or consciousness.

> It would be like a seven year old kid who just
> knows how to move chess pieces, watching a world
> championship game, and declaring that the champ
> is not interested in winning since he doesn't seem
> to be picking the moves that kid considers the
> best ones.

I am quite capable of determining if something survives or dies (or
fails to reproduce) in a particular environment. How else would you
determine "winning" other than by differential reproductive success?
*Any* move that leads to greater reproductive success is a "winning"
move.

> That's basically what your argument for the RM
> conjecture, based on comparisons of observed
> mutation rates in different environments, amounts
> to, except that, once more, the gap is much bigger
> on the biological side of the analogy (we don't
> even know how all the pieces move in our game).

Again, these are mutations to any state that allows "reproductive
success" in the selective enviroment. Obviously it is easier to
determine the frequency of such mutations in the non-selective
environment if we know that only x mutational changes have ever
generated a survivor in the selective environment. Moreover, what you
would expect, if your complaint were true, would be your desired
increase in the frequency of survivors in the selective condition
compared to the limited number of specific mutational events that I can
detect in the non-selective condition. That is, if what you complain
about were true, one would see a false correlation between mutation
frequency and survivorship in selective conditions because of an
undercounting of "mutants" (because of not picking the right ones that
the cell uses) in the non-selective plates.

> You and others also complain that my ID claim is
> at the algorithmic level, so it must be empirically
> meaningless.

Until you generate some testable claim, it certainly is empirically
meaningless. That is not specifically a complaint about the fact that
it is at the algorithmic level.

> The algorithmic level of abstraction is simply a
> variant of a general mathematical modeling. On one
> side you have the abstract mathematical model and
> its little mathematical 'gears', on the other side
> you have a real phenomena and their empirically
> observed properties. The 'operational rules' connect
> (perform mapping between) the elements of the two
> realms, so that some formal elements of the model
> space correspond to some empirical elements of the
> real world. Depending on the level of abstraction
> or detail of a particular model, the model may not
> have corresponding elements for many of empirical
> elements. A highly abstract model may not see much
> of the fine grained detail of the empirical realm.
> That lack of detail does not imply that a model is
> unscientific.

Just that it is currently useless. Especially if it is nothing but a
mystical New Age belief in the superintelligence of an amoeba that
somehow, without ever producing any noticeable effect different from
random mutation generation plus natural selection subsequent to
mutation, causes a mutation fairy to produce whatever you want
produced.

> Namely, what makes a model scientific is that the
> model predicts at least some relations between the
> abstract elements of the model realm, which can
> be compared (via operational mappings) to the
> relations between the corresponding elements
> in the empirical realm. That comparison represents
> a test of a model, hence the model is falsifiable
> by the empirical facts.

So when are you going to actually present some specific test of the
proposition that there is an interaction between the generation of
specific mutations (that generate specific phenotypes) and the
selective need for these specific phenotypes? All the evidence I have
seen says that the generation of variant phenotypes by genetic mutation
is at random and uncorrelated with the need for the phenotypes these
mutations generate. Not only is there no evidence for foresight in the
production of variant phenotypes, those phenotypes that require genetic
mutation also are not specifically induced by environments where the
phenotypes are needed.

> So, what I am saying is simply that at this level
> of abstraction of the biochemical phenomena in a cell,
> the following relation between the elements of the
> abstract model must hold: the cellular algorithms
> must evaluate and control via some anticipatory
> algorithms the possible mutations for "usefulness",
> where "usefulness" is understood in the sense of
> 'usefulness according to the knowledge of the
> cellular network' (i.e. according to its 'utility
> function' which encapsulates its knowledge of
> its environment and of itself, i.e. all patterns
> and laws of biochemistry & physics which are
> relevant to a cell as understood by the cellular
> network; of course these would not look anything
> like the laws of biochemistry and physics as we
> know them, although there would be some partial
> correspondence, with lots of facts and patterns
> on either side missing from the other).

Sure, that is your model. I am saying that its obvious empirical
implication is that cells have foresight and can produce mutants
according to perceived need. And the empirical evidence says that that
does not happen, as far as we can tell.

> In order to explain the empirical content of the
> above conjecture in a more direct way, hence without
> using the previously stated criterion, I will consider
> again the chess program running on a conventional
> computer. This is a useful analogy since both levels
> of abstraction: the algorithmic and the electrical or
> physical, are completely understood (since humans
> designed and constructed the computer and the program).
>
> Would it be meaningless (empirically empty) to
> say at the algorithmic level, that the particular
> chess program, playing at its tournament level,
> uses ten move full width look-ahead with twenty
> five move selective look-ahead extensions, to
> evaluate and select its best move? This is analogous
> to the ID conjecture stated above at the algorithmic
> level, where ID says that algorithms executed by
> the cellular network perform look-ahead evaluations
> and control of the mutations.

And I am pointing out that, empirically, the cell is unable to
*anticipate* even one step ahead, namely to what environment it is
going to face in the next millisecond. Cells do have mechanisms that
allow adaptation to new environments (these usually rely on the fact
that environments do not fluctuate randomly over all possible
environments but change, typically, gradually), but these require that
the environment actually change first and are an induced adaptive
response rather than an anticipation. And the evidence shows that
mutation is not 'adaptive' in this sense. It does not produce specific
mutations of need in response to a change in environment. Rather it
generates changes at random and allows selection to subsequently weed
through these variants.

> How would one empirically test the algorithmic
> statement about the chess program, using only
> electrical measurements on the computer hardware?
> It would be, of course, comparatively easy, to check
> that conjecture by obtaining the source code and
> examine its search function. Or even easier if
> the program displays such information (which it may
> not do in tournament mode; some programs even display
> deliberately false search depth information to mislead
> the competitors).
>
> But in order to remain close to the relation with
> the biological systems where we don't have the "source
> code" for the program running on cellular network,

Isn't the cell's genome (its DNA) the source code? We certainly have
ways to look at changes in the DNA of cells. And, having done so, it
is clear that changes in the DNA occur at random wrt need for a
specific phenotypic consequence.

[snip stuff that may be interesting to computer programmers but is
irrelevant when it comes to cells]

> These networks learn by being exposed to
> some input signals,

A cell's biochemical network does not "learn" anything. It responds to
environmental stimuli. A cell is not a conscious intelligent agent,
even though concious intelligent agents are composed of cells.
Consciousness and intelligence are emergent properties of organisms,
not properties of the cells they are composed of.

> to which they respond
> producing some output signals, and for each
> such response they receive some punishment or
> reward (e.g. a separate input to the network),
> then based on punishment/reward, they _slightly_
> modify the network link strengths so that if
> they were to receive the same input again
> their response would have received _slightly_
> less punishment, more reward (if they were to
> optimize maximally for each input, that would
> cause much greater cost on the rest of input
> patterns, so each modification of link strengths
> is just a tiny nudge toward slightly more
> favorable responses). After some number of passes,
> the network learns to respond nearly optimally to
> not just the inputs it was trained on but to new
> ones which it hasn't seen before (assuming the
> source is "lawful", i.e. it produces "lawful"
> or "learnable" input patterns).

This is not what happens in cells. Biochemical networks do not
"learn". In cells, if you add lactose to the environment, the cell
will have an internal increase in the level of a thio compound due to
the small amount of lacA enzymes present. This will interact with the
lacI gene product and derepress the synthesis of the lacYZA operon.
When the level of lactose decreases, the amount of enzyme produced will
decrease. That is an adaptive regulation, but not a "learned" one.

If you add a gratuitous inducer instead of lactose, the derepression
will still occur, and the unchanged biochemical network will not be
able to "learn" that it is wasting energy making enzymes for which
there is no substrate. Instead, a cell that, by random chance, carries
a mutation abolishing its ability to respond to any inducer
(appropriate or not) might, in this environment, be at a selective
advantage, because it would not waste resources producing non-useful
enzymes. But the benefit in this local environment has a cost when the
environment changes to one that again contains the useful resource of
lactose. The cells have no foresight of this possibility. The
environment selectively chooses *any* variation that doesn't waste
resources.
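The point that this regulation is a fixed function of current inputs,
not learning, can be made concrete with a toy model (a deliberate
caricature with hypothetical function names, not the real biochemistry):

```python
def lac_operon_on(inducer_present, glucose_present=False):
    """Toy lac switch: the repressor blocks transcription unless an
    inducer inactivates it; glucose (catabolite repression) also keeps
    the operon off. The output is a fixed function of the current
    inputs -- there is no internal state, hence nothing to "learn" with."""
    repressor_active = not inducer_present
    return (not repressor_active) and (not glucose_present)

# Allolactose (from real lactose) and a gratuitous mimic like IPTG look
# identical to the switch, so the response is identical -- the network
# cannot discover that one of them is a waste of resources.
assert lac_operon_on(inducer_present=True)
trials = [lac_operon_on(inducer_present=True) for _ in range(100)]
assert all(trials)  # the 100th exposure gets the same response as the first
```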

> The knowledge and skills they learn can be
> described at several levels.

What do you mean by a cell "learning"? How, by what mechanism, can a
cell "learn" in the sense that you mean? Cells are not conscious
intelligent agents. They are adaptive. They are responsive. But that
is not "learning". I, for example, do not "learn" to move my leg when
the doctor's hammer hits below the kneecap.

[snip more New Agey implication that an amoeba is superintelligent]

> As you realize by this point, it may take quite a bit
> of time and efforts, before we can examine the processes
> of the cellular biochemical networks at this level of
> abstraction.

Why bother, since there is no evidence that such a level has any
relevance to the way that cells really work and doesn't appear to have
any support in the ways that cells really do work?

> However, this level of abstraction and
> the mathematical model space of the biochemical network
> do exist as mathematical/algorithmic constructs along
> with some general properties common to the algorithms
> used by the adaptable networks, even though we don't know
> presently much about their details. We do know, for example,
> that they are optimizing their actions to some punishments
> & rewards, that they will have internal models of their
> world, the environment and their self-actor, that they
> will run these models forward in time in order to pick
> the "best"actions by the self-actor, where "best" means
> as decided by their utility function, which in turn
> encapsulates their knowledge about patterns in their
> inputs (laws of physics & biochemistry in their
> internal world)... etc.

Your anthropomorphizing of cells as little homunculi is certainly funny.

> Hence, we can formulate conjectures about the general
> properties of the algorithms in this model space, even
> though we have no way, at present, of specifying details
> for the fully functioning instances of such algorithms
> (in the model space), or establishing the complete
> operational mapping between the elements of the model
> space and their real world biochemical counterparts.
> At this abstraction level, both RM and ID are simply
> conjectures about very general properties of these model
> space algorithms, in particular, whether these algorithms
> perform any computations and actions aiming to optimize
> consequences of the mutations.
>
> ID answer is the most natural and plausible one: of course,
> they do, since the consequences of the mutations certainly
> contribute to their punishments and rewards,

So does RM generating variants followed by selection discriminating
among them. And that process has supporting evidence. ID doesn't.
Specifically, the absence of any correlation between the generation of
variation and the choosing of variation is a necessary prediction for
RM and NS. It is not the usual way that one thinks of an intelligent
designer capable of foresight working.

> hence the
> general network optimization property implies that they
> must be accounting for this term in the total punishments
> and rewards they are optimizing their actions to.

Selection is not mutation. Selection punishes or rewards cells that
have specific genetic variants by virtue of the phenotypes expressed.
We are not talking about the role that *selection* has in changing the
population's genome. Selection clearly weeds through whatever variants
are present in a population. We are talking about whether or not
*mutation* specifically and preferentially produces variants according
either to 1) the ability of the cell to somehow determine and
anticipate a need and preferentially produce mutants that meet that
need or 2) by specific preferential induction of needed variants by the
selective environment.

>With
> the truly accidental mutations, those induced by the
> external causes (e.g. as UV) which are outside of their
> control the most that the optimization can do is try
> a repair.

What mutations are NOT accidental? Which mutations are planned for?

> But for all the mutations which are caused by
> the physical/chemical conditions of the network itself
> at the mutation site,

Can you give me an example of a mutation caused by the "network itself"
and the evidence you have that such mutants actually exist? In
particular, which ones are caused by the "network itself" when there is
a selective need for that mutation? Oh, and did I mention that you
need to have evidence?

> they do have such control (to the
> extent consistent with the laws of physics). How much
> they can do and what they can do about these mutagenic
> conditions depends on how much "lawfulness" exists in
> their inputs and in the consequences of the mutations.
> Whatever regularity or pattern there may be in their
> inputs, they are probably well optimized to take
> advantage of them.
>
> In contrast, the Random Mutation (RM) conjecture appears
> at this same level of abstraction as extremely artificial,
> almost capricious: the network is allowed to optimize to
> any other punishments and rewards, utilizing any patterns
> in all of its inputs, except for those corresponding to
> the mutations and their consequences. These must be
> off limits. Why? Just because.

Because the evidence tells us that mutation is random wrt need. It
tells us that the mechanism by which a population's genome changes is
by randomly generated (wrt need) variation and subsequent selection
among variants.


>
> Then came Cairns experiments & followups. Oops. Time to
> revise RM1 and move on to RM2: the only pattern in the
> network inputs that RM2 allows it to include into its
> optimization algorithm and to respond with mutations is
> the 'general stress' pattern, and the only mutation action
> allowed to the network is the general (or nearly so)
> increase in the mutation rates.

That, indeed, is what the evidence says. Initially, of course, people
considered that this might be an example of preferential *induction* of
beneficial mutation by the environment. However, upon further testing
it was demonstrated that the mutation rate increased due to a stress
response, but with no *preferential* increase in the rate of mutations
of need. The
process is still nothing but random generation of mutation and
selection among the phenotypic variants produced. And it is
*preferential* mutation to variants of need that your thesis requires.

> Clearly, the ID position, expressed at this level of
> abstraction, is a much more coherent and principled one.

Why should anyone accept a position that requires observations flatly
contradicted by repeated experiments? Those experiments demonstrate
that mutations are generated randomly and that it is subsequent
selection by local environments that changes genomes. That is a
coherent explanation consistent with all the evidence. As to
principled, I have no idea what that means to you, but to me it means
that, as a scientist, I have to support the simplest natural
explanation that the data supports. ID is an explanation that is
inconsistent with the obvious expectation you imply it should show
(that specific mutations occur according to need and that the cell has
a mechanism by which it "learns" what mutants to produce). [Real IDers
get around the fact that ID is inconsistent with any such test by
claiming that ID is not a mechanism subject to such tests. That all
they can do is 'demonstrate' that (to their level of ignorance) such
and such a feature is impossible without positing the magical fairy of
one's choice.] ID is certainly not the simplest explanation consistent
with the evidence in any case. Supporting ID requires an unprincipled
rejection of scientific findings.

> But
> since the present state of the knowledge is still insufficient
> to exclude the post-Cairns RM2, this quirky conjecture lingers
> on as a far fetched, almost silly, theoretical possibility,
> upheld only by the sheer religious zeal of its priesthood.

What makes you think that the idea of a stress-induced increase in the
rate of mutagenesis is a "quirky conjecture," while your outlandish
assertion of a superintelligent, conscious, far-sighted amoeba that
"learns" which genes to mutate (even though no one can demonstrate any
evidence for such preferential directional mutation) is anything but a
laughable idea? Even Cairns was smart enough to posit that what he was
seeing was *induced preferential mutation* rather than a foresighted
anticipatory process.


>
> At present, only the coarse grained, statistical and static
> properties of the biochemical reaction networks have been
> explored, and only very simplified (toy) models of its
> dynamics are being simulated on the computers. You can check
> on the SFI

Is that an abbreviation for Science Fiction site?

> site for lot more about this research and the
> perspective on biology (labeled as Complexity Science, which
> has the ID perspective as its asymptotic value, and from
> which it will become indistinguishable in few years):

I doubt it. The evidence is already in wrt mutation (although I would
leave open the possibility that a few domesticated mutational processes
might be capable of generating variants wrt need, but these would be
rare and unusual cases, not the norm).

hersheyhv

Jul 14, 2006, 3:43:28 PM

Below you have simply typed some words into a search engine. Can
you instead point me to the places where the people actually doing
this work agree with your strange thesis that cells (which are
clearly complex biochemical networks with autocatalytic properties,
i.e. reproduction) work by producing mutations according to need,
or in anticipation of future need?

nightlight

Jul 14, 2006, 4:32:14 PM
hersheyhv wrote:

>>No, that's not it. My claim, which is a proposition at the
>>algorithmic level of abstraction,
>
>
> Why should anyone give a flying f**k for your algorithmic level of
> abstraction if it cannot be made real or have an effect at the level of
> actual material consequences? If there is no evidence (and much
> counterevidence) that mutation is anything but random wrt need, that
> means that your algorithmic abstractions have no testable consequences.

> ...


> What "anticipatory computational processes by the biochemical
> network of a cell"? You have yet to demonstrate that a
> cell even *has* a "computational process" much less an
> anticipatory one. All you have presented is some New Age
> mumbo-jumbo verbal mysticism.

> [...and so on]

I don't wish to sound insensitive or heartless or anything
like that, but you do need to bring yourself _up to date_
with contemporary research and the entire scientific
disciplines focused entirely on various aspects of
computations and algorithms of biochemical networks.
Otherwise you risk appearing foolish by making public
declarations like those above.

Below are a few links which will help you toward getting
up to date on the relevant _contemporary_ science (which
you and a few others here appear to be blissfully unaware of,
and for some reason seem quite proud of it). The quote below is a
summary of the objectives of a scientific conference/workshop
(from a couple of years ago) dedicated to the computations and
algorithms in the biochemical networks:

-------------------------------------------------------------------
Dynamics, control and computation in biochemical networks (2004)

Cells and organisms have evolved elaborate mechanisms to carry out their
basic functions. Networks of biochemical reactions are responsible for
processing environmental signals, inducing the appropriate cellular
responses and sequence of internal events. The overall molecular
algorithms carried out by such networks are as yet poorly understood.

Recent years have witnessed remarkable advances in elucidating the
components of these networks due to technological achievements.
Prominent among these achievements are the means for rapid sequencing of
genomes, the means for simultaneously determining the expression levels
of thousands of different genes, and recombinant DNA techniques to
isolate, identify, manipulate, and synthesize genetic and metabolic
networks. These advances have confronted the biological sciences with
massive amounts of data that require huge computational resources.

The field of bioinformatics has developed sophisticated computer-based
algorithms which all cellular and molecular biologists now use to
identify and analyze DNA and protein sequences.

This workshop is designed to address a range of questions that goes
beyond the development of algorithms for the searching and analysis of
genomic and protein data bases.

The workshop will bring together mathematicians, physical scientists,
engineers, computer scientists, and biological scientists to address
fundamental questions concerning the computations that are carried out
within cellular and genetic biological networks.

What are prototypical tasks and prototypical algorithms for biochemical
circuits? How are these mechanisms regulated? How can important logical
elements be identified experimentally or by data-mining? What are the
"design principles" of biological circuits? What are fundamental
limitations on the performance of molecular systems? The workshop will
provide an environment in which these issues can be considered by a
diverse group of researchers with backgrounds in dynamics, computation,
control theory and biology.
-----------------------------------------------------------------------------
http://www.pims.math.ca/birs/workshops/2004/04w5550/



Windy

Jul 14, 2006, 5:43:12 PM

nightlight wrote:

> hersheyhv wrote:
> > ...
> > What "anticipatory computational processes by the biochemical
> > network of a cell"? You have yet to demonstrate that a
> > cell even *has* a "computational process" much less an
> > anticipatory one. All you have presented is some New Age
> > mumbo-jumbo verbal mysticism.
> > [...and so on]
>
> I don't wish to sound insensitive or heartless or anything
> like that, but you do need to bring yourself _up to date_
> with contemporary research and the entire scientific
> disciplines focused entirely on various aspects of
> computations and algorithms of biochemical networks.
> Otherwise you risk appearing foolish by making public
> declarations like those above.
> Below are few links which will help you toward getting
> up to date on the relevant _contemporary_ science (which
> you and few others here appear to be blissfully unaware of,
> and for some reason quite proud of it).

Oh, spare us the hypocrisy. You wouldn't give a flying fuck about
whether biochemical networks can be viewed as algorithms if you didn't
think it somehow disproves neo-Darwinism.

How about getting up to date on basic decades-old biochemistry first?
You haven't stated how you intend to model your random,
non-anticipatory biochemical events, since you propose all such
networks are already intelligent (or mind-melded to intelligent
researchers).

How about the ratio of transitions vs. transversions, for example?
Pretty essential if you are going to model mutations. Do you expect it
to be different in the presence of anticipation?

-- w.

hersheyhv

Jul 15, 2006, 12:04:30 AM

nightlight wrote:
> hersheyhv wrote:
>
> >>No, that's not it. My claim, which is a proposition at the
> >>algorithmic level of abstraction,
> >
> >
> > Why should anyone give a flying f**k for your algorithmic level of
> > abstraction if it cannot be made real or have an effect at the level of
> > actual material consequences? If there is no evidence (and much
> > counterevidence) that mutation is anything but random wrt need, that
> > means that your algorithmic abstractions have no testable consequences.
> > ...
> > What "anticipatory computational processes by the biochemical
> > network of a cell"? You have yet to demonstrate that a
> > cell even *has* a "computational process" much less an
> > anticipatory one. All you have presented is some New Age
> > mumbo-jumbo verbal mysticism.
> > [...and so on]
>
> I don't wish to sound insensitive or heartless or anything
> like that, but you do need to bring yourself _up to date_
> with contemporary research and the entire scientific
> disciplines focused entirely on various aspects of
> computations and algorithms of biochemical networks.
> Otherwise you risk appearing foolish by making public
> declarations like those above.

Where? I see you have run a few nice little phrases through Google. I
see no evidence that any of it confirms your theory of "anticipatory
mutation", aka teleological mysticism.


>
> Below are few links which will help you toward getting
> up to date on the relevant _contemporary_ science (which
> you and few others here appear to be blissfully unaware of,
> and for some reason quite proud of it). The quote below is a
> summary of the objectives of a scientific conference/workshop
> (from couple years ago) dedicated to the computations and
> algorithms in the biochemical networks:
>
> -------------------------------------------------------------------
> Dynamics, control and computation in biochemical networks (2004)
>
> Cells and organisms have evolved elaborate mechanisms to carry out their
> basic functions. Networks of biochemical reactions are responsible for
> processing environmental signals, inducing the appropriate cellular
> responses and sequence of internal events. The overall molecular
> algorithms carried out by such networks are as yet poorly understood.

They are not talking about mutations occurring in a mystical
teleological fashion. They are talking about *existing* biochemical
interactions and regulation that allow a cell to interact with its
local environment by adaptation, not by mutation. Specifically,
essentially all of this adaptive regulatory interaction they are
talking about is occurring in the proteome (interactions between
proteins and regulatory interaction of proteins and RNAs with genes)
and does not involve mutational changes in the genome.

> Recent years have witnessed remarkable advances in elucidating the
> components of these networks due to technological achievements.
> Prominent among these achievements are the means for rapid sequencing of
> genomes, the means for simultaneously determining the expression levels
> of thousands of different genes, and recombinant DNA techniques to
> isolate, identify, manipulate, and synthesize genetic and metabolic
> networks. These advances have confronted the biological sciences with
> massive amounts of data that require huge computational resources.

Notice that they are not saying that the regulation of expression and
the interaction of biochemical networks is accomplished by these
networks doing computations. They are saying that much computation is
needed to understand the massive amount of data produced.


>
> The field of bioinformatics has developed sophisticated computer-based
> algorithms which all cellular and molecular biologists now use to
> identify and analyze DNA and protein sequences.

Of course. But that is not saying that the cell is doing computing.
It is saying that humans are doing computing to understand the cell's
interactions.


>
> This workshop is designed to address a range of questions that goes
> beyond the development of algorithms for the searching and analysis of
> genomic and protein data bases.
>
> The workshop will bring together mathematicians, physical scientists,
> engineers, computer scientists, and biological scientists to address
> fundamental questions concerning the computations that are carried out
> within cellular and genetic biological networks.
>
> What are prototypical tasks and prototypical algorithms for biochemical
> circuits? How are these mechanisms regulated? How can important logical
> elements be identified experimentally or by data-mining? What are the
> "design principles" of biological circuits? What are fundamental
> limitations on the performance of molecular systems? The workshop will
> provide an environment in which these issues can be considered by a
> diverse group of researchers with backgrounds in dynamics, computation,
> control theory and biology.

I am quite aware of the mathematical modeling of biochemical systems
that is done. None of it models things that are directly contradicted
by actual evidence. And mutation empirically occurs at random wrt
need. That is one of the features of cells that is included in any
realistic model of how cells work. But most of these models are
actually at the non-genetic level of regulation and don't bother with
the permanent sequence changes in DNA we call mutation at all. These
models are interested in changes within cells that occur on the time
scale of a single generation or shorter, not on evolutionary time
scales. Some may model the quasi-permanent changes in genomes that
occurs in, say the Barr chromosome, or in other somatic cells during
development. But those changes are not transmitted to future
generations. There are a *few* quasi-inherited features of cells that
involve regulation that are transmitted at time scales different from a
single lifetime. But there is no reason or need to believe that
genetically transmitted changes involve anything but random mutation
followed by selection.
> -----------------------------------------------------------------------------
> http://www.pims.math.ca/birs/workshops/2004/04w5550/
>

Again, where in all these sites in all these googled phrases do you see
*any* real scientist claiming that they have evidence or are producing
a model of a real cell in which mutation is teleologically
anticipatory? And what do "neural networks" have to do with a single
cell and its responses? I know that there is good research that goes
on that uses these phrases. You are not presenting any of that
research, however, that supports your stated position. You are merely
using the phrases as talismans to ward off what you regard as evil and
trying to pretend that you understand them.

All you are doing is using these phrases which you clearly don't
understand to try to bullshit people. Again, where is the friggin
evidence for anticipatory mutations ever occurring in any cell or any
system? Or you can even just present a realistic and testable
mechanism that doesn't involve hypothesizing an unseen mutation fairy
that, even in principle, could produce your teleologically anticipatory
mutations (or whatever you want it to poof into existence). Just
waving your hands and attributing anticipatory mutations to
"biochemical networks" somehow working invisibly and statistically
undetectably is not a promising start. Especially when the evidence
shows that there is no such thing observable.

Hell, it's hard enough to come up with good mechanisms for induced
directed mutation (which is what Cairns proposed), but I can come up
with a few of those that do not involve magical mutation fairies. Too
bad that so far such induced directed mutations do not appear to
actually happen in nature, or if they do they only occur rarely in
unusual and as yet unfound special situations.

The fact remains that in real tests of the idea of either
differentially directed induced mutations or anticipatory mutations,
these possibilities have, to date, failed to show any interaction
between specificity of mutation generation and the selective need for
the mutation generated. To the best of our current knowledge, all
mutations occur at random wrt need and it is selection that
differentially determines their subsequent fate. The order is
important. You are proposing something that is almost the equivalent
to the future happening before the past when you talk about
anticipatory mutation.

nightlight

Jul 15, 2006, 5:47:47 PM
hersheyhv wrote:

>>These networks learn by being exposed to
>>some input signals,
>
>
> A cell's biochemical network does not "learn" anything.
> It responds to environmental stimuli. A cell is not

> a conscious intelligent agent, even though conscious


> intelligent agents are composed of cells. Consciousness
> and intelligence are emergent properties of organisms,
> not properties of the cells they are composed of.

The 'neural networks' described there are an _abstract
mathematical model_. They assume _nothing_ about the particular
nature or realization or implementation of the links,
nodes, punishments/rewards, inputs, outputs. They only require
that those mathematical objects have certain properties
relative to each other. Then, from these properties alone
and by pure mathematical deduction & computation (along
with suitable definitions) the 'learning', 'optimization',
'modeling', 'anticipation' and other properties of the
abstract network follow. Hence, the mathematical conclusions
apply to any actual implementation of such network, as long
as one can establish that nodes, links, punishments/rewards,...
have the mathematical properties assumed in the abstract
mathematical deduction.

In other words, the 'neural networks' as mathematical objects,
are the same kind of abstraction as, say abstract variable
names A, B, C... and abstract operators +,-,*,/... (which
need not be numbers and regular arithmetic operations; they
are just some abstract mathematical objects). Suppose now
we consider a special kind of these objects, for which the
following properties hold:

p1) A*B = B*A and ... (commutativity)
p2) C*(A+B) = C*A + C*B ... (left distributivity)

(which is similar to regular numbers, except that we are not
assuming any other properties of regular numbers and arithmetic).
From these two assumptions _alone_, you can deduce, for example:

1. (A+B)*(C+D) =
2. = (A+B)*C + (A+B)*D =
3. = C*(A+B) + D*(A+B) =
4. = C*A + C*B + D*A + D*B

where we used properties p2: 1->2 , p1,p1: 2->3, p2,p2: 3->4.

The deduced identity of expressions (1) and (4) holds no
matter what the objects A,B,C and D are (integers, real
numbers, complex numbers, booleans,...) or what the
operators +, * mean, provided they have properties p1 and p2.

As long as you establish that some objects have properties
p1 and p2, and _regardless of any other_ properties they
may or may not have, you know with mathematical certainty
that they will satisfy identity of expression (1)=(4). That
allows you, for example, to reduce computational cost whenever
you get an expression of type (4), which needs 3 '+' operations
and 4 '*' operations (whatever they may mean), to 2 '+'
operations and 1 '*' operation used by expression (1), which
you know must be identical to (4) due to properties p1 and p2
of our objects.

E.g. you can try integers, establish that they have properties
p1 and p2, then you can use this computational saving whenever
you have integer expression of the form (4). You can do the
same for real numbers, complex numbers..., where each has
its own _distinct realization_ (implementation, meaning) of
the objects A, B, C... and of operations '+','*'.
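The point above can be checked mechanically. Here is a minimal Python sketch (mine, not from the post): each realization supplies its own '+' and '*' operations, and the identity (1)=(4) holds in every realization for which properties p1 and p2 hold, including booleans with '+' as OR and '*' as AND.

```python
# Verify that (A+B)*(C+D) == C*A + C*B + D*A + D*B in any realization
# of the abstract objects and operators satisfying commutativity (p1)
# and left distributivity (p2).
from itertools import product

def identity_holds(add, mul, values):
    """Check (A+B)*(C+D) == C*A + C*B + D*A + D*B over all 4-tuples."""
    for a, b, c, d in product(values, repeat=4):
        lhs = mul(add(a, b), add(c, d))
        rhs = add(add(mul(c, a), mul(c, b)), add(mul(d, a), mul(d, b)))
        if lhs != rhs:
            return False
    return True

# Realization 1: integers with ordinary arithmetic.
print(identity_holds(lambda x, y: x + y, lambda x, y: x * y, range(-2, 3)))
# Realization 2: complex numbers.
print(identity_holds(lambda x, y: x + y, lambda x, y: x * y, [1j, 1 - 1j, 2]))
# Realization 3: booleans, '+' = OR, '*' = AND (AND distributes over OR).
print(identity_holds(lambda x, y: x or y, lambda x, y: x and y, [True, False]))
```

All three realizations print True, because each one satisfies p1 and p2; nothing else about the objects matters.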

Going back to networks: the deduction of neural network
properties is the same type of deduction (only longer, and
some parts need computers to follow through) as the deduction
of (1)=(4) from p1 and p2. Hence, with any specific network,
once you establish certain properties of its nodes, links,
punishments/rewards,... you know immediately that they will
exhibit all the other properties deduced for the abstract
neural networks, such as learning, optimization via internal
modeling & anticipation network algorithms... that we know
from the abstract neural networks. Hence, just as we know
that abstract identity (1)=(4) in the formalism will hold
for any particular implementation/realization of formal
objects A,B,C,D and +,* for which properties p1 and p2
hold, no matter what A,B,C,D or +,* are or mean in any
other context, we know with the exactly same mathematical
certainty that the deduced properties of abstract neural
networks will hold, no matter what implementation/realization
of the formal elements (nodes, links...) may be, or what
they mean in any other context, as long as the defining
properties of these elements hold for the given realization.

There are a great many networks of this type, from
autocatalytic reaction networks, cellular biochemical
networks, immune systems, brains, to organizations, economies,
internet and its multitudes of mutually permeating &
overlapping subnetworks at various levels and areas,
societies, ecosystems... etc. The general properties
of network algorithms (optimization, internal model
building with self-actor, running of internal model
forward in 'model time', whatever 'model time' may mean
in a given realization, and pick of optimal next action
by self-actor, realization of that pick by actual network...)
hold automatically for all such networks.
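The generic loop ascribed to such networks (internal model, forward run in "model time", pick of best anticipated action) can be sketched in a few lines. All names here (pick_action, model, reward) are illustrative assumptions, not anything from the post:

```python
# Sketch of model-based anticipation: score each candidate action by
# running an internal model forward in "model time" and greedily
# accumulating anticipated rewards, then act on the best-scoring one.

def anticipated_reward(state, model, reward, actions, horizon):
    """Run the internal model forward `horizon` steps, greedily."""
    total = 0.0
    for _ in range(horizon):
        state = max((model(state, a) for a in actions), key=reward)
        total += reward(state)
    return total

def pick_action(state, actions, model, reward, horizon=3):
    """Choose the action whose simulated future yields the most reward."""
    return max(actions,
               key=lambda a: reward(model(state, a))
               + anticipated_reward(model(state, a), model, reward,
                                    actions, horizon - 1))

# Toy realization: state is a number, actions add -1, 0, or +1,
# reward prefers states near 10.
model = lambda s, a: s + a
reward = lambda s: -abs(s - 10)
print(pick_action(5, [-1, 0, 1], model, reward))  # prints 1 (move toward 10)
```

Whether any real biochemical network implements something like this is exactly the point in dispute; the sketch only shows that the abstract loop itself is well defined.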

In the case of multiple networks (which may interact,
overlap, mutually permeate or contain other networks
as subnets), the internal modeling acquires, by virtue
of general optimization & learning properties, additional
structure. Namely, the environment that each network A
models contains now other networks B, C,... including
the self-actor-A, the network A's internal model of itself
within its model space. As the network A learns its
environment, its model of the other networks in the
environment may become accurate/faithful enough to
reflect the fact that these are also networks modeling
the same environment from their perspective i.e. the
network discovers that, what previously appeared as
merely some 'object B' from its environment, exhibits
similar patterns of activity as the self-actor-A.
After further conjectures and tests, it eventually
arrives at an internal model which contains actor B,
within which there is a nested model containing
the A and the self-actor B. With the nested modeling
algorithm/recursion mastered (e.g. small children
lack such nesting in their model of other people
around them), A will refine its own self-actor-A,
and at a finer level of detail that element will
now contain its internal model, which includes actors
for A, B,... Therefore, as the models refine, with
models nesting within models recursively, the model
space structure becomes increasingly fractal, like
a circle of reflecting spheres, with each sphere
reflecting all other spheres, and within each
little sphere reflection, there is another even
smaller reflection of the entire circle with even
tinier spheres... ad infinitum.

Of course, the mathematical guarantee of _existence_
of such general properties and algorithms does not
identify the counterparts of the abstract network
elements (internal model, self-actor, algorithms
which runs the internal model...) in any specific network,
since these counterparts are activity patterns of the
actual network. Only the specific detailed reverse
engineering of a given network allows one to say which
activity pattern corresponds to which of the abstract
elements of mathematical networks, such as network's
internal model or self-actor.

For example, the self-actor that your own brain
constructs is not some miniature little replica of
you, set loose to run around inside your skull.
Instead it is a certain pattern of electrical activity
in a given type of computation (which need not be
the same/similar pattern for different subnets in
your brain, or in different types of computations/algorithms
being executed). Similarly, in biochemical networks
the algorithms, internal models and their components
are patterns of the biochemical activity, where
"activity patterns" are not some bulk statistical
property, but fully detailed (at atomic/molecular level
relevant for chemistry) properties of a time dependent
sequence of individual local activities, frame by frame.
We currently have no way of observing or analyzing
such full-detail frame-by-frame patterns in cellular
biochemical networks. We can only infer some very tiny
snippets of such algorithms and their elements, in the
form of particular chains of reactions with some far-reaching
functionality/role/meaning (or what Dembski would label as
having 'specified complexity').

As an example of a social network, consider the 'economy'
network, with individuals & companies as its nodes. Its
links are particular trading associations or pathways
between the "nodes" (called agents in this context).
One can treat goods & money coming in and out of
each node as node's input/output signals and each
node's punishments/rewards with respect to this network.
The adaptation of network links is done by the
agents/nodes adjusting/tuning their trading pathways &
associations based on the perceived/evaluated punishments
& rewards. The whole 'economy' network interacts,
overlaps and permeates numerous other networks at all
levels (ecosystem, society, state, politics, bureaucracies,
stock market, misc. organizations, education, religion,
family, individual brains,...) with which it exchanges
input/output signals, punishments/rewards, as a whole
and as its subnets. From the abstract networks, we know
the general kind of algorithms used by the 'economy'
network e.g. that it optimizes its net punishments/rewards
via anticipation using internal models of its environment
and self-actor, look-ahead gaming, fractal model space,...

Informally, social networks are intelligent organisms
pursuing their own happiness in their own realm (which
is an ecosystem of its own, exhibiting all the common
phenomena and patterns observed in the biological
ecosystems). They possess knowledge, wisdom and purposes
largely beyond the perception, let alone comprehension,
by their nodes. This _opaqueness property_ (which is due
to computational limits of the networks) exists in all
such networks, including biological networks. For example,
none of your neurons, activated by your reading of this
sentence, has any "clue" that you're reading about them
(where "clue" is meant in the sense of their cellular
biochemical networks having an internal model of what
you're reading, in which their own self-actor would be
an object of reading by the actor representing 'you').

Interestingly, these abstract mathematical networks need
not be implemented/realized as 'material' networks. Their
elements (nodes, links...) can be realized as abstract
patterns living on some other substratum. The languages
(natural and formal/scientific; google for "word net" and
"semantic networks"), religions, cultures, scientific
theories... are networks in the abstract realm, i.e.
their counterparts for mathematical nodes and links
are not individual physical/material objects, but rather
they are patterns on top of some substratum (which in turn
may be material or abstract substratum).

For example, a language network has words (in some models
also idioms) as nodes and semantic, syntactic, grammatical,
phonetic... relations among them as links. The ultimate
punishment & reward for the language network as a whole
are its death/survival, from which subsidiary punishments
& rewards follow (e.g. dissemination, multiplicity of uses
& forms of uses, specialty uses such as for classical
Greek and Latin). The individual words and idioms also
have similar punishments & rewards, which in turn affect
the strengths of their links with other nodes. The speakers
of the language (or users, for mathematical/scientific
languages) are merely a substratum for the language
networks. The language networks have their internal model
of their environment, hence of us.

Thus the language network would anticipate and strategize
on how best to optimize its usefulness, expressive and
heuristic power to us, since our use of it represents
its 'nourishment' on its substratum. The language
network accumulates knowledge in its own model of the
world (as it perceives it from its links activations,
which in the word=node model are all uses of sequences
of words, e.g. w1 w2 w3 w4 activates links w1 -> w2
and w2->w3 and (w1 w2) -> w3, ... (w1 w2 w3) -> w4,
which corresponds to unlimited-context Markov modeling
of a language).
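The link-activation scheme just described can be sketched directly. Here activate_links is a hypothetical name and the tuple-keyed Counter is an illustrative choice of representation: every preceding context of every length activates a link to the next word.

```python
# Unlimited-context link activation: a word sequence activates a link
# from each preceding context (of any length) to the next word.
from collections import Counter

def activate_links(words):
    """Count link activations (context -> next_word) for all context lengths."""
    links = Counter()
    for i in range(1, len(words)):
        for j in range(i):  # contexts words[j:i] of length 1..i
            context = tuple(words[j:i])
            links[(context, words[i])] += 1
    return links

links = activate_links(["w1", "w2", "w3", "w4"])
print(links[(("w1",), "w2")])             # prints 1
print(links[(("w1", "w2"), "w3")])        # prints 1
print(links[(("w1", "w2", "w3"), "w4")])  # prints 1
print(len(links))                         # prints 6 distinct links
```

The number of distinct links grows quadratically with sequence length, which is why practical Markov models truncate the context; the "unlimited context" of the post is the idealized limit.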

In its maintenance of its substratum (analogous to our
watering & fertilizing of plants, which are a part of our
biological substratum), the language performs 'fertilization'
of our 'thoughts' (in the sense of the excitation patterns
in the brain), since this leads to more future language activation,
its reward.

We will often find that merely trying to put some thought
or feeling into words provides a variety of benefits (clarifies
thoughts, brings out new ideas we didn't have before, acts
therapeutically). The language is, in effect, transmitting
some of its wisdom and knowledge from its network down to
our networks (brain). What is meant by "transmission" is
that the "network semantics" of its patterns, which are the
elements of its model of the world which is largely opaque
to us, ends up being genuinely "translated" or "transplanted"
to our "network semantics" of our patterns. (The "network
semantics" denotes a persistent association of network patterns,
one a labeled form/expression pattern, the other a labeled
content/meaning pattern; the latter belongs to some internal
models of the network, within the same network.)

To us this kind of transmission appears as a sudden transition
of some material which we knew only verbally, by rote, into
a meaningful knowledge, the in-depth understanding suddenly
just 'clicks in'. New semantic neural patterns in our
brain are created corresponding to the semantic patterns
that language network had in its model of the world. Prior
to that, the neural network of our brain had only the
form-patterns corresponding to the syntactic language
network patterns. When it 'clicks in', the underlying
semantics of the language network's model of the world
was also transmitted.

The transmission effect is especially powerful when the
words are being written, since, not being limited by our
short-term memory capacity, the writing simultaneously activates
much larger sections (longer Markov contexts) of the language
network, allowing them to reverberate for a much longer time.
We will often find that writing out what initially were just
a few vague thoughts ends up producing content we would never
have suspected we had in us.

The languages which were purposefully designed to be used
as written languages, such as the formal & symbolic languages
of the sciences and mathematics, amplify these transmission
effects manyfold (since activations in these networks
span orders of magnitude longer time intervals and have
high degree of coherence over large number of nodes).
Many times, while working out the math of some subject,
a sudden in-depth understanding of the subject will
open up, as if coming out of nowhere. Of course, some
of that understanding comes from the transmission of
the understanding that was hand put into the language
pattern semantics by those who devised particular bit
of the formalism. But there is also a substantial _excess_,
over and above the knowledge that was hand put by
the human designers, the content and meaning that they
had no clue about. This excess occurs often enough with
mathematics that it even has its own name, "the
unreasonable effectiveness of mathematics in natural
sciences". The excess is the 'unreasonable' part, in
contrast to the 'reasonable' effectiveness, the one we
expect from the semantics hand put there by the designers
of the formalism (formulas, equations...). This name for
the excess was given by Eugene Wigner (theoretical physicist,
Nobel laureate) as a title of his essay describing
the phenomenon:

Eugene Wigner "The Unreasonable Effectiveness of Mathematics in
the Natural Sciences" (Comm. Pure & App. Math., v 13, no 1, 1960)
http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

where Wigner recounts numerous examples of major scientific
discoveries arising from the seemingly purely accidental
or superficial connections between the far apart elements
of the formalism and their subsequent, quite surprising
and certainly not hand put, semantics in the actual
world (you may recognize yourself in some anecdotes as the
incredulous, bewildered party).

Of course, what we commonly call 'physical objects' are
themselves patterns of quantum field excitations. Hence
this distinction between networks in 'material' and
'abstract' realms is meaningful only to the extent that
a 'physical object' is a meaningful coarse grained
approximation for the patterns on the quantum fields
substratum (quantum vacuum). At a fundamental level
this distinction is merely a convention.

Note also that no assumption about 'consciousness' was
used anywhere above, i.e. these are all purely mathematical
properties of these networks (although described in informal
everyday language, thus being slightly ambiguous on that
question). 'What is it like to be' any such network is outside
of present natural science, hence we can only philosophize
and speculate about it. The philosophy I find most coherent
regarding the 'mind stuff' is panpsychism:

Philosophical panpsychism
http://plato.stanford.edu/entries/panpsychism/

My variant sketch (a likely 'margaritas ante porcos'):
http://groups.google.com/group/talk.origins/msg/e03d101dc097e17c

Online papers on consciousness
http://consc.net/online.html

David Chalmers ('the hard problem of consciousness'):
http://consc.net/chalmers/

Gregg Rosenberg ('A Place for Consciousness')
http://www.ai.uga.edu/~ghrosenb/book.html

Journal of Consciousness Studies
http://www.imprint.co.uk/jcs.html#fulltext

--- Complexity science references

Regarding the "Complexity Science" references, the center which
started this advance in the late 1980s was the Santa Fe Institute,
still the Mecca for this research.

SFI Bulletin (magazine for general readers):
http://www.santafe.edu/research/publications/bulletin.php

SFI preprints (for technical readers):
http://www.santafe.edu/research/publications/working-papers.php

SFI Computational mechanics (how does nature compute):
http://www.santafe.edu/projects/CompMech/

Tommaso Toffoli (modeling natural laws, from physics to biological
evolution, as a distributed computer, networks, cellular automata)
http://pm1.bu.edu/~tt/publ.html

There are also several sections on arXiv with papers on
'complexity science', 'adaptable networks', 'self-organizing systems':

Computer Science:
http://arxiv.org/list/cs/new

Quantitative Biology
http://arxiv.org/list/q-bio/new

Nonlinear Sciences
http://arxiv.org/list/nlin/new

complexity science
http://arxiv.org/find/grp_q-bio,grp_cs,grp_nlin/1/abs:+AND+complexity+science/0/1/0/all/0/1

adaptable networks
http://arxiv.org/find/grp_q-bio,grp_cs,grp_nlin/1/abs:+AND+adaptive+network/0/1/0/all/0/1


-- A few useful links:

Very readable recent survey of results and perspective,
with well selected references:

Francis Heylighen, Paul Cilliers, Carlos Gershenson
"Complexity and Philosophy"
http://arxiv.org/abs/cs.CC/0604072

Gershenson's page on self-organizing systems:
http://homepages.vub.ac.be/~cgershen/sos/

InterJournal (complexity science online journal)
http://interjournal.org/

Eduardo Sontag (decompiling algorithms of biochemical networks,
how to find their 'internal models')
http://www.math.rutgers.edu/~sontag/papers.html

See especially his paper: "Adaptation and regulation with signal
detection implies internal model"
http://www.math.rutgers.edu/~sontag/FTP_DIR/imp-scl03.pdf

Albert-László Barabási (static properties, structure, statistics,
identification of networks in many natural & social systems)
http://www.nd.edu/~alb/

Complexity & anticipatory systems (discussion):
http://www.vcu.edu/complex/

Anticipatory systems (Robert Rosen's work, theory & formalism of
general anticipatory systems; systems that model other systems)
http://www.people.vcu.edu/~mikuleck/RSNCYBRSPC2.html
http://www.anticipation.info/l3/abstractsf?foundation=1

Liane Gabora (self-organizing systems, evolution, consciousness problem)
http://www.vub.ac.be/CLEA/liane/Publications.htm


nightlight

Jul 15, 2006, 5:52:36 PM
hersheyhv wrote:

> Again, someone who is positing conscious and intelligent *foresight* in
> nature (other than when the behavior is done by organisms with minds)
> has a tough row to hoe.

'Consciousness', which doesn't exist in the present natural
science, has no relation to any 'intelligence' or 'foresight'
I am talking about. There is nothing in my argument that
relies on 'consciousness'.


> We are not saying that overall mutation rate or the rate of specific
> mutations is unaffected by the biochemistry of the cell. They
> certainly are. We are saying that these mutation rate changes are not
> "need specific" changes wrt specific genes of need. The cell is not
> saying (since you seem to ascribe mystical consciousness to cells, I
> will go along), "Hmmm. I need to mutate the his4 gene in order to
> survive in this environment, so let's *specifically* mutate this gene I
> need." Rather the cell is saying, in an utter panic, "God. I am
> dying. Let's jack up my total mutation rate for all mutations. Maybe
> one of the mutational mudpies will stick to the wall and save me, even
> at the cost of many deleterious mutations produced by the same process.
> A rising tide lifts all boats."

Do you realize how strained and capricious your argument above is?
You say:

a) ID position: "Hmmm. I need to mutate the his4 gene in
order to survive in this environment, so let's
*specifically* mutate this gene I need."

b) ND position: "Rather the cell is saying, in an utter
panic, 'God. I am dying. Let's jack up my total mutation
rate for all mutations. Maybe one of the mutational
mudpies will stick to the wall and save me, even at the
cost of many deleterious mutations produced by the same
process. A rising tide lifts all boats.'"

In (a) the network anticipates harm, then it picks an action
which it anticipates may reduce the harm.

In (b) the network anticipates a harm (my state in the
near future will be a 'dead cell'), then it picks an action
which it anticipates ('will save me') may reduce the harm.


The only distinction is in exactly how much anticipation
is used in either case. As noted before, the ND postulate
against the (a)-degree-of-anticipation and acceptance
of the (b)-degree-of-anticipation appears as an incoherent
and capricious requirement. You can't even state a
_general_ rule about how much anticipation the latest
version of ND's RM dogma allows and what is still on
the prohibited list. It has to list specific kinds of
cases to specify its prohibitions & allowances, since
there is no coherent general principle that one can use
to decide how much anticipation is allowed.

In contrast, the ID position (at the algorithmic level)
is clean, coherent and principled: any degree of
anticipation is allowed which is within the computational
resources of the network.

There is nothing ruled out a priori, for the 'just because'
reason.


> I am, in the experiments I have been describing, directly testing
> whether the mutations occur via an *intelligently designed* process
> whereby the specific mutation of need is *preferentially* produced at,
> or in anticipation of, the need for that mutation. This is in contrast
> to the idea that mutations are produced at random wrt need and only
> *after* being produced, is there any differential or preferential
> process occurring, namely selection by local conditions.

You're still not getting that there is no such thing as
"The Need" or "The Preference". There is a c-need and a
c-solution, as computed by the cell. There is also h-need
and h-solution, as computed by you (standing in for a
biologist testing the ID hypothesis against the empirical
rates under different conditions).

The ones which would be controlling the actual mutations,
if that is the hypotheses being tested against the observed
mutations, would be the c-need and c-solution. Since
you don't know what the c-need and c-solutions are, all
you can do is test against h-need and h-solution.

What is the logical implication, if you discover that
the actual networks did not produce the h-solution? The
logical implication is that one or more of the
following propositions must be true:

1. h-need is different than c-need
2. h-solution is different than c-solution
3. The cell did not compute any c-need or c-solution

Your claim is that the logical implication of the
failure to observe the h-solution empirically is
that proposition (3) is true. That is _faulty logic_.

The hypothesis that your empirical observation was
testing is _not_ whether the network will produce
the c-solution, but whether the network will produce
the h-solution. All you can conclude from the
failure to observe h-solution about the c-solution,
is that the h-solution is not the same as the
c-solution (which allows for a possibility that
no c-solution was computed: c-solution=null).

One way that this can be the case would be
that the network never computed c-solution.
Another one is that it computed it, but it
was a different solution than the h-solution.

Hence, it is _logically false_ to say that the
absence of observation of h-solution implies
the non-existence of c-solution. It doesn't.
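The point is small enough to check mechanically. In this toy sketch (the labels None/"A"/"B" are invented stand-ins for propositions 3, "c = h", and 2 above), filtering on "h-solution not observed" leaves more than one live possibility:

```python
# The human tester can only check for the solution he himself computed:
h_solution = "A"

# Possible states of the cell's own computation (purely illustrative):
#   None -> no c-solution was computed at all    (proposition 3)
#   "A"  -> c-solution happens to equal h-solution
#   "B"  -> a c-solution exists but differs      (proposition 2)
cases = (None, "A", "B")

# Keep every case consistent with "h-solution was NOT observed":
consistent = [c for c in cases if c != h_solution]
print(consistent)  # [None, 'B']
```

Both None and "B" survive the filter, so failure to observe the h-solution cannot, by itself, single out "no c-solution was computed".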


> I understand full well how a cell perceives its environment and how
> cellular biochemistry is regulated. No thought process in the cell is
> involved. No computations are involved. Amounts, presence, or absence
> of allosteric regulators is involved. But these engender simple
> mindless unintelligent consequences, not any sort of consequence that
> involves intelligent weighing of future consequences.

The "simple mindless unintelligent consequences" refers to the
processes that present science has been able to model and test,
i.e. it doesn't refer to the processes which science cannot
model, the processes not yet understood well enough.

If the _actual_ biochemical network were so "simple mindless
unintelligent", how come it can engineer at a molecular level
a new live cell from scratch, while all of the resources
and knowledge of present biochemistry and molecular biology
put together aren't even close to such a molecular engineering feat?

Which entity is then more intelligent and capable in the
field of molecular engineering of live cells? (Let alone
of live multi-cellular organisms.) We're not even an
apprentice of the real master in this realm (since
to become an apprentice, one needs to recognize, at
the very least, that there is a master).

Hence, the "simple mindless unintelligent consequences" is all
that the simple, unintelligent creatures could figure out
so far. Your arrogance is astounding. It is like someone
looking at a volume of Shakespeare plays through a small
pinhole allowing him to see just three letters on any one
page, then proclaiming that Shakespeare is highly overrated
since all he wrote is just these "simple mindless unintelligent"
three letter snippets that any trained ape could have written.

> Take all the Renaissance geniuses and artisans and
> put them in a room with a Lexus and ask them to create a new one and
> they also would be unable to do so. Not because they are less
> intelligent than the workers of today but because they lack the skills
> and knowledge to do a job that would be easy for someone today. And
> the modern Lexus has evolved (in the sense that human artifacts evolve
> by improvement via trial and error under the manufacture of a known
> outside agent, not by the mechanism that cells evolve, involving trial
> and error in a self-replicating genomic organism) beyond the carriages
> that they are familiar with.

The Lexus is not produced by the workers alone. The workers
are merely a few nodes in a vast scientific-technological
network, and it is this _network_ which knows how to create
Lexus from scratch.

You take any worker, or any number of them, put them
_fully_ outside of _the network_, which means not just
a physical disconnect (such as putting them on a
desert island), but disconnect them from any links
they have with the network at the higher levels
(such as any scientific and technological knowledge),
and they would have no clue how to even begin.

The network which knows how to create a cell from
scratch is the cellular biochemical network. Our
scientific-technological network, which knows
how to create Lexus from scratch, doesn't know
how to create a cell from scratch.


> You keep anthropomorphizing this mystical "biochemical network" into a
> super-intelligent agent with foresight and consciousness. Stop it
> unless you can present evidence for either foresight or consciousness.

As explained earlier, natural science has no model of
'consciousness' (the mind stuff). There is no such
thing in natural science. But, I am not using such
concepts. My argument relies only on computational
interpretation of 'intelligence' and 'anticipation',
and not on 'what is it like to anticipate'. The
'anticipation' I am talking about is as non-mysterious
as that of a chess program anticipating your next move.

In this whole discussion, it is none other than
you whose low-resolution conceptual tools end up
repeatedly mixing up the two concepts:

a) computational/algorithmic anticipation (e.g. a chess program)
b) mind-stuff anticipation ('what is like to anticipate').

I am talking about (a), not anthropomorphizing via (b). It
is you who needs to dust off your conceptual lenses.


> I am quite capable of determining if something survives or dies (or
> fails to reproduce) in a particular environment. How else would you
> determine "winning" other than by differential reproductive success?
> *Any* move that leads to greater reproductive success is a "winning"
> move.

It is not a question of how "I" or anyone else, other
than the cellular network itself, would define "winning".
It is the c-need and c-solution and c-winning that decide its actions.

Just because you (or science in general) cannot at present
decipher (reverse engineer) what c-need, c-solution, c-winning
might be, that doesn't _logically_ imply they were not
computed and acted upon, as you claim it does.
It only allows for such possibility. But it also allows for
other possibilities (such as, that c-need, c-solution,
c-winning are different from h-need, h-solution and h-winning).


> So when are you going to actually present some specific test of the
> proposition that there is an interaction between the generation of
> specific mutations (that generate specific phenotypes) and the
> selective need for these specific phenotypes?

I never claimed that I was going to produce such test.
I am merely pointing out the flaws in your (and the general
neo-Darwinian) reasoning. You have problems using
elementary logic, problems traversing multiple layers
of abstraction and quite a low conceptual resolution.
All of these problems were illustrated and pointed out
to you in numerous specific instances in this thread.


>
> Sure that is your model.  I am saying that its obvious empirical
> implication is that cells have foresight and can produce mutants
> according to perceived need. And the empirical evidence says that that
> does not happen, AFAWCT.


There is no "obvious empirical implication" unless you
consider that it is obvious that c-solution = h-solution,
even without knowing c-solution. Can you explain why
is the latter obvious (since you don't know what c-need
and c-solution might be)?

In other words, assume the ID model (in computational meaning)
as a _hypothesis_ to be tested/falsified by the experiment.
Then show what would be the "obvious empirical implication"
of that hypothesis _alone_, which means _without_ also
having to _assume_ that c-need and c-solution (which ID
hypothesis implies to exist) must be equal to the h-need
and h-solution (prefix "h-" refers to your own evaluations
or any other, except the cellular network's own evaluations).

Your faulty methodology was to assume not just the ID model,
but also that c-need = h-need and c-solution = h-solution.
Only then you can claim that there is an "obvious implication",
but this is not an implication of ID model hypothesis _alone_.
It is an implication of the ID model hypothesis, plus these
additional assumptions you made.

Hence, once the "obvious implication" is falsified empirically,
what is being falsified is the combination of ID model +
assumptions that c-need = h-need and c-solution = h-solution.


> Isn't the cell's genome (its DNA) the source code?

No more than the sequence of all electrical pulses in
a computer is the "source code" of a chess program running
on the computer. This is your 'traversal of abstraction layers'
problem resurfacing again.


> Selection is not mutation. Selection punishes or rewards cells that
> have specific genetic variants by virtue of the phenotypes expressed.

A biochemical network, running an internal model which encapsulates
all it knows about its 'world', can try out in this 'model space'
the 'model mutations' and then perform 'model natural selection'
(within its knowledge of 'world') and weed out whatever its
internal model of the world predicts to be harmful, before
committing to the much more expensive and much slower real
world implementation and the real world natural selection.
Although no one has identified such patterns and
'pre-selection' algorithms in biochemical networks, their
existence, in some form, is implied by the general
optimization properties of these types of networks and
known types of punishments and rewards. To what degree they
can implement it, depends on the computational capacity
and the quality of their internal model of the 'world'.


> Can you give me an example of a mutation caused by the "network itself"
> and the evidence you have that such mutants actually exist? In
> particular, which ones are caused by the "network itself" when there is
> a selective need for that mutation? Oh, and did I mention that you
> need to have evidence?

I would call the Cairns experiment an experimental
demonstration of such anticipatory mutation. That
demonstration is based on the explicit reverse engineering
of a specific, very tiny and simple algorithm being used.
Other, more complex algorithms with better aim will likely
be found (if they haven't been already).

In that example, the network anticipates: it looks
in its internal model space of the world, as it unfolds
forward in time, in accordance with its understanding of
the world. In a test-run in which the self-actor in the
model space does nothing, the self-actor always dies
within the look-ahead. Hence, that lowers the risk
aversion in selecting among the actions at its disposal,
from a library of such actions, one of which is a
drastic increase in general mutations. In these runs,
it may find some cases of the self-actor alive at the
forward horizon of its look-ahead, hence that action
becomes the best action to take and that is what it
then executes in the real world.
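A purely illustrative sketch of such a look-ahead (the toy "world model", the action names and the numbers are invented here, not reverse engineered from any cell):

```python
class ToyModel:
    """Toy stand-in for the network's internal model of the world."""
    def initial_state(self):
        return {"energy": 3}
    def step(self, state, action):
        # doing nothing drains energy; the drastic action replenishes it
        gain = 2 if action == "raise_mutation_rate" else 0
        return {"energy": state["energy"] - 1 + gain}
    def alive(self, state):
        return state["energy"] > 0

def survives(model, action, horizon):
    """Roll the internal model forward under one candidate action and
    report whether the self-actor is still alive at the horizon."""
    state = model.initial_state()
    for _ in range(horizon):
        state = model.step(state, action)
    return model.alive(state)

def choose_action(model, actions, horizon=5):
    # Test run with the self-actor doing nothing: if it always dies
    # within the look-ahead, risk aversion drops and drastic actions
    # from the library are tried in the model space first.
    if survives(model, None, horizon):
        return None  # doing nothing is already survivable
    for action in actions:
        if survives(model, action, horizon):
            return action  # first action whose forecast shows survival
    return None

best = choose_action(ToyModel(), ["raise_mutation_rate"])
print(best)  # raise_mutation_rate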

Now, we know your story about specific vs non-specific
wrt need mutation, and so on. That has nothing to do
with the anticipatory algorithm above (within the
straightforward and principled semantics of 'anticipation').
Your objection is nothing more than a bit of semantic
gerrymandering around the concept of 'anticipate'
in order to keep the Cairns observation outside
of its boundaries. That's all fine. You have now a
concept, ND-anticipation, which doesn't apply to
some types of plain-meaning anticipation, such as the
one described above (which uses straightforward semantics).
That way you can say the Cairns experiment did not demonstrate
any ND-anticipation in generating mutations. Well,
that's fine, you can say that if that's what makes your
life happy. You can even strip away the "ND-" prefix, if
that makes you even happier. That has absolutely no relation
and no effect on the validity of the ID description within
its principled, straightforward semantics for 'anticipation'.

All it means is that there is a phenomenon which
is anticipatory but not ND-anticipatory. Who cares.

The RM conjectures, of pre-Cairns or post-Cairns
variety, when stated at the algorithmic level of
abstraction (which is the only level at which
the semantic lines between ID and ND-RM are
sharp), are non-starters as serious conjectures.
They are a capricious a priori restriction on the
type of computations biochemical networks may do.
There is no algorithmic basis or general principle
that could even express such a restriction at that
level.

The pre-Cairns RM is simply a statement that the
network can perform optimizations using any
pattern (law, regularity) it has learned,
except when the pattern contains as one of its
steps some mutagenic physical-chemical
condition.

The post-Cairns RM merely adds an exception
to the exception of the pre-Cairns RM. It excepts
those pre-Cairns RM exceptions in which the
mutagenic condition affects a "large" number
of sites (presently "large" is set equal to "all").
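Written out as code, the exception-to-the-exception structure (a deliberate caricature, with invented predicate names) would look like:

```python
def allowed_pre_cairns(pattern):
    # RM v1: the network may optimize using any learned pattern,
    # EXCEPT patterns containing a mutagenic step.
    return not pattern["mutagenic"]

def allowed_post_cairns(pattern):
    # RM v2: exception to the exception -- mutagenic patterns are
    # re-admitted IF they hit a "large" number of sites
    # (presently "large" is set equal to "all").
    if pattern["mutagenic"]:
        return pattern["sites"] == "all"
    return True

print(allowed_post_cairns({"mutagenic": True, "sites": "all"}))  # True
```

Note there is nothing algorithmic motivating either branch; the predicates just enumerate which cases are currently on the prohibited list.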

That kind of conjecture, pre and post, is plainly
silly at the algorithmic level. Unfortunately for
the ND priesthood, the study of anticipatory
computations by biochemical networks and
of their algorithms is an existing and rapidly
developing branch of computational biology.
Hence, neo-Darwinism is already a zombie in the
conjectures realm.

> That, indeed, is what the evidence says. Initially, of course, people
> considered that this might be an example of preferential *induction* of
> beneficial mutation by the environment. However, upon further testing
> it was demonstrated that the mutation rate increased due to a stress
> response, but not *preferentially* the rate of mutations of need.

If you have textbooks from before and after the Cairns experiments,
check the wording of the RM conjecture. There were no caveats about 'stress
response' or any equivalent. Now they are prominent. The quiet semantic
gerrymandering around the concept of 'anticipation' did occur here.


> Even Cairns was smart enough to posit that what he
> was seeing was *induced preferential mutation* rather than foresighted
> anticipatory process.

Thanks, that is a neat example of semantic gerrymandering around the
concept of 'anticipation'.

>>At present, only the coarse grained, statistical and static
>>properties of the biochemical reaction networks have been
>>explored, and only very simplified (toy) models of its
>>dynamics are being simulated on the computers. You can check
>>on the SFI
>
>
> Is that an abbreviation for Science Fiction site?

That is the Santa Fe Institute, a link to which was given
right below that paragraph. Here it is again:

http://www.santafe.edu/research/publications/working-papers.php

{ Nah, don't worry. I don't believe that anyone will jump to
a conclusion that your science is dated, or anything like that.
No, they won't. No, nothing of the sort. }

> I doubt it. The evidence is already in wrt mutation (although I would
> leave open the possibility that a few domesticated mutational processes
> might be capable of generating variants wrt need, but these would be
> rare and unusual cases, not the norm).
>


Oh, I see, you are already starting on RM version 3. In RM3, which
would be an exception to the RM2's exception of the RM1's exception,
the biochemical network will be allowed to anticipate using the
patterns which include more specific mutagenic conditions, provided
that such patterns do not occur too often (whatever that means).
Did you settle on some figures that define 'rare', 'unusual',
'not the norm'? Do you know of any actual experiment which prompted
this sudden new level of weasel-wording?

nightlight

Jul 15, 2006, 7:41:31 PM
Windy wrote:

> nightlight wrote:
>
>>hersheyhv wrote:
>>
>>>...
>>
>> > What "anticipatory computational processes by the biochemical
>> > network of a cell"? You have yet to demonstrate that a
>> > cell even *has* a "computational process" much less an
>> > anticipatory one. All you have presented is some New Age
>> > mumbo-jumbo verbal mysticism.
>> > [...and so on]
>>
>>I don't wish to sound insensitive or heartless or anything
>>like that, but you do need to bring yourself _up to date_
>>with contemporary research and the entire scientific
>>disciplines focused entirely on various aspects of
>>computations and algorithms of biochemical networks.
>
>

> Oh, spare us the hypocrisy. You wouldn't give a flying fuck about
> whether biochemical networks can be viewed as algorithms if you didn't
> think it somehow disproves neo-Darwinism.

You're ignoring his claim I was responding to. He is claiming that
no such things (computational processes, algorithms, an algorithmic
level) exist in the cellular biochemical network. I am simply pointing
out that these things exist and that there is a large body of research
and whole scientific disciplines focused on that aspect. He and you are
merely demonstrating that your knowledge of these topics is out of date.

> You haven't stated how you intend to model your random,
> non-anticipatory biochemical events, since you propose all such
> networks are already intelligent (or mind-melded to intelligent
> researchers).

There were no such claims or promises in what I wrote here.
What you're asking for, a faithful computer model of
biochemical algorithms, is many years away. The existence
of an algorithm can be inferred indirectly without constructing
it explicitly. We know that there exists some digit at the
10^80-th decimal position of the number Pi, even though we
will likely never be able to write down what that digit is.
Similarly, we know that the decimal number 1/7 =
0.142857142857... will repeat the pattern of digits 142857
over a span of 10^80 digits, without our being able to
write down 10^80 digits.
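The 1/7 claim can be verified mechanically for any feasible number of digits by plain long division (10^80 is of course out of reach; the full claim follows from the remainders cycling with period 6):

```python
def decimal_digits(num, den, count):
    """First `count` digits after the decimal point of num/den,
    computed by ordinary long division."""
    digits, rem = [], num % den
    for _ in range(count):
        rem *= 10
        digits.append(rem // den)
        rem %= den
    return digits

d = decimal_digits(1, 7, 12)
print(d)  # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
```

Since the remainder after six steps returns to 1, the same six digits must recur forever; that is the kind of existence argument that needs no explicit construction of 10^80 digits.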

One can perfectly legitimately draw conclusions from the _existence
alone_ of a mathematical or algorithmic object, without knowing how
to construct such an object.

The first implication I was pointing out is that ND and ID are
distinguishable conjectures, i.e. ID (in algorithmic formulation)
is a scientific conjecture, not a religion.

The next implication is that, at the algorithmic level of
abstraction, the ND-RM conjecture is a non-starter, a zombie:
a capricious prohibition on types of biochemical network
optimization processes, which was already falsified (at this
level of abstraction) in Cairns experiments, and had to be
revised by silly gerrymandering around the concept of
'anticipation' so that the Cairns mutations would fall outside
of the revised concept, the ND-anticipation.

You are welcome to state in your own way the ND-RM and ID
conjecture at the algorithmic level of abstraction
(of the processes in cellular biochemical network,
this level of abstraction is a legitimate field
of science as you can verify from the links provided).

See if you can state the two conjectures at this level
(which is the level where their full semantics belongs)
and make the ND-RM conjecture appear less ridiculous.

> How about the ratio of transitions vs. transversions, for example?
> Pretty essential if you are going to model mutations. Do you expect it
> to be different in the presence of anticipation?
>

Different from what? If you take _anticipation_ as your
hypothesis, to be tested/falsified, what do you compare the
empirically observed ratios to, in order to find out whether
they are "different"?

You can't (at present) reverse engineer the network algorithms
and then go in and change some variable in its "source code"
to make it turn the specific anticipation off or on, so
you could compare the two empirical ratios.

In fact, if one could reverse engineer/decompile the biochemical
network algorithms, the answer would be plain, since the
"source code" would tell you precisely whether the anticipatory
algorithms are used for this purpose or not.

Windy

Jul 15, 2006, 8:55:48 PM

nightlight wrote:

> Windy wrote:
> > Oh, spare us the hypocrisy. You wouldn't give a flying fuck about
> > whether biochemical networks can be viewed as algorithms if you didn't
> > think it somehow disproves neo-Darwinism.
> You're ignoring his claim I was responding to. He is claiming that
> no such things (computational processes, algorithms, algorithmic
> level) exist in cellular biochemical network. I am simply pointing
> out that these things exist and that there is a large body of research
> and whole scientific disciplines focused on that aspect. He and you are
> merely demonstrating that your knowledge on these topics is out of date.

Perhaps, but your knowledge of evolution and biochemistry is
non-existent.

> > You haven't stated how you intend to model your random,
> > non-anticipatory biochemical events, since you propose all such
> > networks are already intelligent (or mind-melded to intelligent
> > researchers).
> There were no such claims or promises in what I wrote here.
> What you're asking for, faithful computer models of
> biochemical algorithms, is many years away.

So your criticism of the mouse study is that they should have used a
nonexistent model that no one knows how to program, yet. What do you
suggest in the meantime? Put evolutionary research on hold? Sackcloth
and ashes for the researchers who dare presume random mutation?

> One can perfectly legitimately draw conclusions from the _existence
> alone_ of a mathematical or algorithmic object, without knowing how
> to construct such object.
> The first implications I was pointing out is that ND and ID are
> distinguishable conjectures i.e. ID (in algorithmic formulation)
> is a scientific conjecture, not a religion.
> The next implication is that, at the algorithmic level of
> abstraction, the ND-RM conjecture is a non-starter, a zombie:

Bullshit. If your anticipatory hypothesis had *any* validity, it would
have to exist *in addition* to random mutation. When I asked about
neutral mutations you admitted as much.

> a capricious prohibition on types of biochemical network
> optimization processes, which was already falsified (at this
> level of abstraction) in Cairns experiments, and had to be
> revised by silly gerrymandering around the concept of
> 'anticipation' so that the Cairns mutations would fall outside
> of the revised concept, the ND-anticipation.
> You are welcome to state in your own way the ND-RM and ID
> conjecture at the algorithmic level of abstraction
> (of the processes in the cellular biochemical network;
> this level of abstraction is a legitimate field
> of science, as you can verify from the links provided).
>
> See if you can state the two conjectures at this level
> (which is the level where their full semantics belongs)
> and make the ND-RM conjecture appear less ridiculous.

Look at some of those links you provided. *If* there is yet any attempt
to model natural mutations in such networks I bet no mutation fairies
are included in the model.

> > How about the ratio of transitions vs. transversions, for example?
> > Pretty essential if you are going to model mutations. Do you expect it
> > to be different in the presence of anticipation?
> >
> Different than what? If you take the _anticipation_ as your
> hypothesis, to be tested/falsified, what do you compare the
> empirically observed ratios in order to find out whether
> they are "different"?

*That is precisely my question*. You are saying we should compare the
two hypotheses but that it can only be done in some unforeseen future.
This is your unfalsifiable mutation fairy again.

> You can't (at present) reverse engineer the network algorithms
> and then go in and change some variable in its "source code"
> to turn the specific anticipation off or on, so that
> you could compare the two empirical ratios.

Yet you criticized the mouse study for not doing the comparison. Now
you say such a model can only be done after reverse engineering the
entire cell. FOR EXPLAINING A FRIGGING ONE NUCLEOTIDE CHANGE. Excuse me
if I stick with the science that works and explains stuff *now*.

> In fact, if one could reverse engineer/decompile the biochemical
> network algorithms, the answer would be plain, since the
> "source code" would tell you precisely whether the anticipatory
> algorithms are used for this purpose or not.

Yeah, and if one could cleanse the doors of perception, everything
would appear to man as it is, infinite.

-- w.

Windy
Jul 15, 2006, 10:00:15 PM

nightlight wrote:
> You can't even state a
> _general_ rule about how much anticipation the
> latest version of ND's RM dogma allows and what is
> still on the prohibited list lately.

Here on planet Earth, nervous systems evolved by trial and error over
the course of hundreds of millions of years to process information and
anticipate events precisely because the genome can't anticipate
anything. In your fevered fantasy, why did anticipatory genetic
networks even bother to create the mutations to produce nervous
systems, if the network in itself already had much more processing
power than any nervous system? Why not just mutate as you go along?

Or, if nervous systems were such a great idea, why did the
super-duper-network wait 2-3 billion years to produce a passable one?

> In contrast, the ID position (at the algorithmic level)
> is a clean, coherent and principled: any degree of
> anticipation is allowed which is within the computational
> resources of the network.

Which explains precisely nothing. I'll bet you are not prepared to make
any predictions based on differences in networks - does a big
eukaryotic genome have more computational resources than a small
bacterial one? If so, the former should be able to anticipate
mutational needs and amass favourable mutations faster, right?

> Hence, the "simple mindless unintelligent consequences" is all
> that the simple, unintelligent creatures could figure out
> so far. Your arrogance is astounding.

...the irony is astounding...

-- w.

Windy
Jul 15, 2006, 10:12:21 PM

nightlight wrote:
> hersheyhv wrote:
>
> > Again, someone who is positing conscious and intelligent *foresight* in
> > nature (other than when the behavior is done by organisms with minds)
> > has a tough row to hoe.
>
> 'Consciousness', which doesn't exist in the present natural
> science, has no relation to any 'intelligence' or 'foresight'
> I am talking about. There is nothing in my argument that
> relies on 'consciousness'.

xxx nightlight from a few days back:
>Now, to
>check whether the search was faster than random, you cannot
>get around the task of estimating what the random _model_
>predicts for the expected number of tries needed to solve
>the problem. Only then you can say whether the _empirically
>observed_ search and solution time is comparable to that
>predicted by the random search model, or whether it is slower
>(malicious intelligence) or faster (benevolent intelligence).
xxx

What are these "malicious" and "benevolent" intelligences you
suggested, then? Are there malicious (to themselves) non-conscious
biochemical anticipatory networks?

-- w.
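The "expected number of tries" baseline for a random search model, referred to in the quoted passage, has a standard concrete form: if each independent try succeeds with probability p, the expected number of tries until first success is 1/p (a geometric distribution). A small sketch comparing that prediction against a simulated random search (all numbers hypothetical):

```python
import random

def expected_tries(p):
    """Expected tries until first success at per-try probability p
    (mean of a geometric distribution)."""
    return 1.0 / p

def simulate_random_search(p, runs, rng):
    """Average number of tries until first success, over many runs."""
    total = 0
    for _ in range(runs):
        tries = 1
        while rng.random() >= p:  # keep trying until a success
            tries += 1
        total += tries
    return total / runs

rng = random.Random(0)  # fixed seed for reproducibility
p = 0.01                # hypothetical per-try success probability
print(expected_tries(p))                     # 100.0
print(simulate_random_search(p, 5000, rng))  # close to 100
```

This is only the null model made explicit: an empirically observed mean search time far below 1/p is the kind of deviation the quoted passage says one would have to estimate and test for.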

nightlight
Jul 15, 2006, 10:14:04 PM

hersheyhv wrote:

You are the only one bringing in "mysticism" here (the teleological
method is perfectly scientific, equivalent to the causal one, as
explained a few times before).

You're also changing the issue I was replying to. I was replying to
your silly assertions:

> What "anticipatory computational processes by the biochemical
> network of a cell"? You have yet to demonstrate that a
> cell even *has* a "computational process" much less an
> anticipatory one.

That was what those links were provided for: to show how silly
your assertion was. There are papers, conferences, even an
entire discipline whose object of research is precisely what
you asserted above not to exist.

More specific links (in support of my arguments in
this thread) are given in another post:

Biochemical networks & their algorithms
http://groups.google.com/group/talk.origins/msg/623400a8de95db21


The problem you ran into here is that you realized that
as soon as you formulate the ND-RM and ID conjectures at the
_algorithmic_ level of abstraction (of processes in the cellular
biochemical network), it becomes plain as day that the ND-RM
conjecture is ridiculous, a non-starter. Hence you had to
attack the existence of the computations or of algorithmic
level of description of cellular biochemical network. Now
that we know you were uninformed which things exist and which
don't, we can move on. Let's hope you won't have another one
of your typical mental reboots and three posts from now
start asking again 'what computations', 'what algorithms'...

Unfortunately for you, the algorithmic level is precisely
where those two conjectures have their sharpest semantics.
That level is where they are scientifically cleanly
distinguishable, at least as a matter of principle.
If we could 'disassemble' the algorithms, which is
possible as a matter of principle, we would know exactly
whether these algorithms perform anticipatory actions
that contain mutations as their steps.

In contrast, your mantra 'mutations are random wrt need'
is far too _vague_ to be operationally mapped, _even
in principle_, to any empirical facts. You were not
able to demonstrate how to map such a vague dictum,
at least _in principle_, into empirical facts, without
doing gross violence to elementary logic. (Further,
before Cairns experiments, it meant one thing, after
Cairns it means another, and in your previous post
you have shifted again what is allowed and what is
prohibited by RM.)

At that level of formulation ND-RM is simply not a
scientific conjecture. In that form it is a vague
theological dictum, a relic propping up a relic
-- the naive mechanistic materialism and militant
atheism (that happened to be in vogue in some
scientific and philosophical circles of 19th &
early 20th century Great Britain and France).

Hence, unless you can explain what that dictum
actually means (operational mapping to empirical facts),
without violating elementary laws of logic, the only
remaining approach which, at least in principle, allows
a sharp and logically coherent formulation of RM and ID,
is the algorithmic description.

Now that you have realized that 'algorithmic level' of
description does exist, that the laws and phenomena at
this level are a genuine scientific discipline, you
can try formulating the ND-RM conjecture at the algorithmic
level (where it has an unambiguous meaning). See if you
can make it sound less capricious than in my formulation.

I should remind you that the above is not a question of
designing a specific experiment, based on algorithmic
level formulation. Such an experiment is not presently
feasible i.e. we cannot reverse engineer the biochemical
network algorithms in enough detail to know exactly
how they perform their anticipation and which kind
of patterns they use for that purpose (i.e. can a
mutation be a part of such patterns).

This is only a question about matters of principles.
Namely, we can reason about things that we know to
exist and for which we know some general properties
(such as optimization properties of these network
algorithms).

My point above is that even well before we can actually
perform the reverse engineering of the network algorithms,
the mere formulation of the ID and ND-RM conjectures at
the algorithmic level leaves no question as to which
one is the more coherent and principled conjecture. You are
welcome to formulate ND-RM at the algorithmic level
and see what it looks like. As explained, your
mantra, which doesn't say anything empirical even in
principle, doesn't qualify as an algorithmic formulation
(or a formulation of anything mappable to empirical
at all).

> They are not talking about mutations occuring in a mystical
> teleological fashion.

Teleological approach is perfectly fine scientific formulation,
from the fundamental physics and up, through social sciences
and psychology. You are simply uninformed. It is only an
ideologically based taboo in certain academic circles of biology.
Everywhere else it is a perfectly fine scientific method.

There is nothing more mystical about it than there is about
a chess program looking ahead at the possible moves and picking
one that optimizes its gain in the 'future' (as seen in its
model of the future). It is a result of computations performing
optimizations of future states based on the models they have
about such states (e.g. using the patterns of past sequences
of events, stored in their memory).
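The non-mysterious, purely computational "anticipation" described above can be made concrete with a one-ply lookahead chooser: score each candidate action by a model of the state it would produce, then pick the maximizer. A toy sketch (the state, actions, and scoring model are all hypothetical):

```python
def lookahead_choose(state, actions, apply_action, score):
    """Pick the action whose predicted successor state scores highest.
    'Anticipation' here is just computation over modeled futures."""
    return max(actions, key=lambda a: score(apply_action(state, a)))

# Toy model: the state is a number, actions shift it, and the
# scoring model prefers states close to a target of 10.
def apply_action(state, delta):
    return state + delta

def score(state):
    return -abs(state - 10)  # higher is better, peak at 10

best = lookahead_choose(7, [-2, 1, 3, 5], apply_action, score)
print(best)  # 3, since 7 + 3 lands exactly on the target
```

A chess engine does the same thing with a deeper search tree and a richer evaluation function; nothing in the procedure requires consciousness, only a model of future states.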

Teleological formulation in science:

http://groups.google.com/group/talk.origins/msg/f969d45c50183c02
http://groups.google.com/group/talk.origins/msg/f30ee4fdbfff5b01


>
>>This workshop is designed to address a range of questions that goes
>>beyond the development of algorithms for the searching and analysis of
>>genomic and protein data bases.
>>
>>The workshop will bring together mathematicians, physical scientists,
>>engineers, computer scientists, and biological scientists to address
>>fundamental questions concerning the computations that are carried out
>>within cellular and genetic biological networks.
>>
>>What are prototypical tasks and prototypical algorithms for biochemical
>>circuits? How are these mechanisms regulated? How can important logical
>>elements be identified experimentally or by data-mining? What are the
>>"design principles" of biological circuits? What are fundamental
>>limitations on the performance of molecular systems? The workshop will
>>provide an environment in which these issues can be considered by a
>>diverse group of researchers with backgrounds in dynamics, computation,
>>control theory and biology.
>
>
> I am quite aware of the mathematical modeling of biochemical systems
> that is done.

You are confused about what they say the subject of their
workshop is. Their subject is not the modeling _of_ the biochemical
networks (bioinformatics, data processing), but computations,
algorithms and modeling _by_ these networks. In their words:

> This workshop is designed to address a range of questions
> that goes _beyond_ the development of algorithms for the
> searching and analysis of genomic and protein data bases.

That was bioinformatics, biologists doing data processing of
the data collected. That is what the workshop is _not_ about.
Instead, what they want to discuss is _beyond_ bioinformatics:

> to address fundamental questions concerning the computations
> that are carried out within cellular and genetic
> biological networks.
> What are prototypical tasks and prototypical algorithms
> for biochemical circuits?

Hence it is not about _our_ computations and algorithms (as used
in bioinformatics), but about the computations and the algorithms
by the networks themselves.

> And mutation empirically occurs at random wrt
> need.

Can you give an unambiguous and _logically_ tight operational mapping
between that mantra and any empirical results? Make a hypothesis that
the anticipation is used to control mutation, which we wish to subject
to an empirical test/falsification. Then show how you falsify it
without including the _additional_ assumptions about the presumed
anticipation (such as presuming to know what the network has computed
as the best action to take). You haven't done that so far. Until you
can do that, your mantra is not just empirically unsupported, but it
is so even in principle. It is devoid of any empirical meaning.


> All you are doing is using these phrases which you clearly don't
> understand to try to bullshit people.

That sounds like a projection.

http://en.wikipedia.org/wiki/Psychological_projection

hersheyhv
Jul 16, 2006, 2:18:05 AM
nightlight wrote:
> hersheyhv wrote:
>
> > Again, someone who is positing conscious and intelligent *foresight* in
> > nature (other than when the behavior is done by organisms with minds)
> > has a tough row to hoe.
>
> 'Consciousness', which doesn't exist in the present natural
> science,

Consciousness certainly does exist in ethology and any other science
that involves the behavior of organisms capable of consciousness.

> has no relation to any 'intelligence' or 'foresight'
> I am talking about. There is nothing in my argument that
> relies on 'consciousness'.

Tell me how an agent can have "intelligence" or "foresight" without
having "consciousness" or "awareness and the capacity to make choices"?

> > We are not saying that overall mutation rate or the rate of specific
> > mutations is unaffected by the biochemistry of the cell. They
> > certainly are. We are saying that these mutation rate changes are not
> > "need specific" changes wrt specific genes of need. The cell is not
> > saying (since you seem to ascribe mystical consciousness to cells, I
> > will go along), "Hmmm. I need to mutate the his4 gene in order to
> > survive in this environment, so let's *specifically* mutate this gene I
> > need." Rather the cell is saying, in an utter panic, "God. I am
> > dying. Let's jack up my total mutation rate for all mutations. Maybe
> > one of the mutational mudpies will stick to the wall and save me, even
> > at the cost of many deleterious mutations produced by the same process.
> > A rising tide lifts all boats."
>
> Do you realize how strained and capricious your argument above is?
> You say:
>
> a) ID position: "Hmmm. I need to mutate the his4 gene in
> order to survive in this environment, so let's *specifically*
> mutate this gene I need."

It is only necessary to produce it at a statistically higher frequency
than other genes under those conditions. That is what is needed to
demonstrate *empirically* (and empirical evidence is what you need and
lack) that there is some causal something that is non-randomly
generating mutations according to need. The fact remains that there is
no such evidence. All the evidence says that there is no statistical
correlation between need for a specific mutation and its generation.
There, of course, is most definitely a correlation between the need for
a gene and its *selection* by the local environment.
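The statistical contrast drawn here (specific induction versus a nonspecific rate increase) can be stated as expected counts: under the nonspecific hypothesis, raising the overall mutation rate multiplies every gene's expected mutation count by the same factor, so the *fraction* of mutations landing in the "needed" gene stays constant. A minimal illustration; the gene names, sizes, and rates are hypothetical:

```python
def expected_counts(gene_lengths, rate_per_base):
    """Expected mutation count per gene under a uniform per-base rate."""
    return {g: length * rate_per_base for g, length in gene_lengths.items()}

# Hypothetical gene sizes in base pairs:
genes = {"his4": 1000, "other_a": 3000, "other_b": 6000}

normal = expected_counts(genes, rate_per_base=1e-8)
stressed = expected_counts(genes, rate_per_base=5e-8)  # 5x nonspecific increase

def fraction(counts, gene):
    """Share of all expected mutations that fall in one gene."""
    return counts[gene] / sum(counts.values())

# Under the nonspecific model, the needed gene's share is unchanged:
print(fraction(normal, "his4"), fraction(stressed, "his4"))  # both ~0.1
# Specific induction would show up as a *higher* share under stress.
```

This is why the experiments test for an elevated *relative* frequency of the beneficial mutation, not merely for more mutations overall.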

> b) ND position: ``Rather the cell is saying, in an utter panic, "God.
> I am dying. Let's jack up my total mutation rate for all
> mutations. Maybe one of the mutational mudpies will stick to
> the wall and save me, even at the cost of many deleterious
> mutations produced by the same process. A rising tide lifts
> all boats." ''
>
> In (a) the network anticipates harm, then it picks an action
> which it anticipates may reduce the harm.
>
> In (b) the network anticipates a harm (my state in the
> near future will be a 'dead cell'), then it picks an action
> which it anticipates ('will save me') may reduce the harm.

> The only distinction is in exactly how much of anticipation
> is used in either case.

Nope. The *relevant* distinction is that in a) there is *specificity*
in the *generation* of mutations of need and in b) generation of needed
mutation is *not* specific but merely a non-specific consequence of an
increase in mutation rate.

Besides, the network did not *anticipate* harm in b). It was already
being harmed. It was already in the deleterious environment *before*
the mutation rate was increased. The cell did not *anticipate* that it
would be put in a deleterious environment. It became aware that it was
in such an environment and upped the mutation rate. To *anticipate*,
the network would have to have consciously known that it would be put
in the deleterious environment and respond by upping mutation *before*
it was put in that environment.

Which, of course, is why the proposition being studied by Cairns was
whether or not there was specificity in the induction of beneficial
mutations and not specificity in the anticipation of beneficial
mutations.

> As noted before, the ND postulate
> against the (a)-degree-of-anticipation and acceptance
> of (b)-degree-of-anticipation appears as an incoherent
> and capricious requirement. You can't even state a
> _general_ rule, about how much anticipation does the
> latest version of ND's RM dogma allow and what is
> still on the prohibited list lately? It has to list
> specific kinds of cases to specify its prohibitions
> & allowances since there is no coherent general
> principle that one can use to decide how much of
> anticipation is allowed.

The fact remains that further study of the Cairns phenomena showed
explicitly that there was no *specificity* in the higher amount of
mutations that occurred under the stress conditions. That is, the
mutations that occurred were not, statistically, significantly more
likely to be beneficial mutations. You have yet to present a single
valid example of the phenomena you claim occurs.

> In contrast, the ID position (at the algorithmic level)
> is a clean, coherent and principled: any degree of
> anticipation is allowed which is within the computational
> resources of the network.

And, in the empirical tests that have been done, no anticipation was
found in *any* example. The Cairns experiment, as mentioned, tested
whether there was *induction* of beneficial mutations specifically.
And the further work answered that question in the negative.

> There is nothing ruled out a priori, for the 'just because'
> reason.

I am not ruling it out _a priori_. I am saying that after examination
of many experiments, no such specific anticipation or even induction of
beneficial mutations has been found. I am not even ruling it out as a
possibility some time in the future for some unusual cases. But it
pretty clearly is not the case for any mutational process that has been
experimentally observed.

> > I am, in the experiments I have been describing, directly testing
> > whether the mutations occur via an *intelligently designed* process
> > whereby the specific mutation of need is *preferentially* produced at,
> > or in anticipation of, the need for that mutation. This is in contrast
> > to the idea that mutations are produced at random wrt need and only
> > *after* being produced, is there any differential or preferential
> > process occurring, namely selection by local conditions.
>
> You're still not getting that there is no such a thing as
> "The Need" or "The Preference". There is a c-need and a
> c-solution, as computed by the cell. There is also h-need
> and h-solution, as computed by you (standing in for a
> biologist testing the ID hypothesis against the empirical
> rates under different conditions).

And what, pray tell, is this new verbiage supposed to mean? What is
the difference between c-need and h-need? Is this where you introduce
the additional claim that your mystical magic mutation fairy mechanism
only works when humans aren't looking (or humans aren't designing
specific controlled experiments)?

Yep. You introduce the idea that the magical mutation fairy only works
when humans aren't looking. Of course, that isn't the way you
introduced it. But that is what the above gobbledy-gook really means.

> > I understand full well how a cell perceives its environment and how
> > cellular biochemistry is regulated. No thought process in the cell is
> > involved. No computations are involved. Amounts, presence, or absence
> > of allosteric regulators is involved. But these engender simple
> > mindless unintelligent consequences, not any sort of consequence that
> > involves intelligent weighing of future consequences.
>
> The "simple mindless unintelligent consequences" refers to the
> processes that the present science was able to model and test i.e.
> it doesn't refer to the processes which science cannot model, the
> processes not understood well enough.

And you can? So now the claim is that your ideas are not capable of
being scientifically determined. Despite the fact that every
experiment done says that mutations do not specifically get generated
to benefit the organism in its moment of need, you claim it really does
somewhere somehow, but we just can't detect it. And if we do, see the
section about the magical mutation fairy being quite shy around
disbelieving scientists. Uh huh. Riiiiiiiight.... Been taking your
meds regularly?

> If the _actual_ biochemical network were so "simple mindless
> unintelligent" how come it can engineer at a molecular level
> a new live cell from scratch, while all of the resources
> and knowledge of the present biochemistry and molecular biology
> put together isn't even close to such molecular engineering feat?

Witness the incredulity card being played. Chemistry, and living
things are chemical reactions, can be quite complicated. Being
complicated does not make something "intelligent".

> Which entity is then more intelligent and capable in the
> field of molecular engineering of live cells? (Let alone
> of live multi-cellular organisms.) We're not even an
> apprentice of the real master in this realm (since
> to become an apprentice, one needs to recognize, at
> the very least, that there is a master).
>
> Hence, the "simple mindless unintelligent consequences" is all
> that the simple, unintelligent creatures could figure out
> so far. Your arrogance is astounding. It is like someone
> looking at a volume of Shakespeare plays through a small
> pinhole allowing him to see just three letters on any one
> page, then proclaiming that Shakespeare is highly overrated
> since all he wrote is just these "simple mindless unintelligent"
> three-letter snippets that any trained ape could have written.

We are talking about the phenotypic consequence of a single nucleotide
change in one gene, not looking at the entire genome. If I remove the
word "not" from "To be or not to be", the meaning of that phrase and to
some extent the entire play is altered. But all that change still is
due solely to the removal of the three letter word. I can ignore the
rest of the play precisely because I know that the cause of the
difference is those three words and not the rest of the play.

> > Take all the Renaissance geniuses and artisans and
> > put them in a room with a Lexus and ask them to create a new one and
> > they also would be unable to do so. Not because they are less
> > intelligent than the workers of today but because they lack the skills
> > and knowledge to do a job that would be easy for someone today. And
> > the modern Lexus has evolved (in the sense that human artifacts evolve
> > by improvement via trial and error under the manufacture of a known
> > outside agent, not by the mechanism that cells evolve, involving trial
> > and error in a self-replicating genomic organism) beyond the carriages
> > that they are familiar with.
>
> The Lexus is not produced by the workers alone. The workers
> are merely a few nodes in a vast scientific-technological
> network, and it is this _network_ which knows how to create
> Lexus from scratch.

And cells are vast historical networks that include all the changes
that have occurred in the aeons of the past through many different
speciation events. But we are talking about the mechanism of that
change and, in particular, whether mutation is random wrt need. So all
we need to determine is specific mutations and whether or not there are
some conditions in which those mutations occur *statistically* at a
higher frequency when there is a need for the mutation by the cell (if
that is what you mean by c-need). And the answer found is, so far,
there are no such conditions currently known (or likely in the future
for the vast majority of mutations).

> You take any worker, or any number of them, put them
> _fully_ outside of _the network_, which means not just
> a physical disconnect (such as putting them on a
> desert island), but disconnect them from any links
> they have with the network at the higher levels
> (such as any scientific and technological knowledge),
> and they would have no clue how to even begin.
>
> The network which knows how to create a cell from
> scratch is the cellular biochemical network. Our
> scientific-technological network, which knows
> how to create Lexus from scratch, doesn't know
> how to create a cell from scratch.
>
>
> > You keep anthropomorphizing this mystical "biochemical network" into a
> > super-intelligent agent with foresight and consciousness. Stop it
> > unless you can present evidence for either foresight or consciousness.
>
> As explained earlier, natural science has no model of
> 'consciousness' (the mind stuff). There is no such a
> thing in natural science.

Of course there is a model of consciousness in natural science. Humans
have it and humans are natural entities. But, no, molecules do not
have consciousness. Nor do they have intelligence or foresight because
you need conscious awareness to have those features.

> But, I am not using such
> concepts. My argument relies only on computational
> interpretation of 'intelligence' and 'anticipation',
> and not on 'what is it like to anticipate'. The
> 'anticipation' I am talking about is as non-mysterious
> as that of a chess program anticipating your next move.

A chess program does not exhibit "intelligence" or "anticipation" so
long as it is merely performing unconsciously. It is no more
intelligent than the cicada wasp. It is merely responding to local
stimuli in a rote fashion with little intelligence (it is not capable
of independent evaluation of events) and no anticipation.

There is no evidence that the biochemical networks of a cell are any
more intelligent than any other chemical network. Like any chemical
reactor vessel, it is unintelligently predictable in its responses to
local conditions. Nor is there any evidence of anticipation at all.

> In this whole discussion, it is none other than
> you whose low resolution conceptual tools end up
> repeatedly mixing up the two concepts:
>
> a) computational/algorithmic anticipation (e.g. a chess program)
> b) mind-stuff anticipation ('what is like to anticipate').
>
> I am talking about (a), not anthropomorphizing via (b). It
> is you who needs to dust off one's conceptual lenses.

So how do you determine how something with no mind "anticipates" rather
than merely reacts and responds after exposure to a stimulus?

> > I am quite capable of determining if something survives or dies (or
> > fails to reproduce) in a particular environment. How else would you
> > determine "winning" other than by differential reproductive success?
> > *Any* move that leads to greater reproductive success is a "winning"
> > move.
>
> It is not a question of how would "I" or anyone else, other
> than cellular network itself, define "winning". It is the
> c-need and c-solution and c-winning that decides its actions.

Nope. It has been uniformly found that, for organisms, greater
reproductive success is the only 'value' or 'goal' they can be said to
hold. That is true even when it superficially appears not to be (such
as the redback spider male or cells in the middle of a bacterial colony
that commit suicide).

> Just because you (or science in general) cannot at present
> decipher (reverse engineer) what c-need, c-solution, c-winning
> might be, that doesn't _logically_ imply they were not
> computed and acted upon, as you claim (that it does imply).
> It only allows for such possibility. But it also allows for
> other possibilities (such as, that c-need, c-solution,
> c-winning are different from h-need, h-solution and h-winning).

So, once again you get to claim that you don't need no steeenking test,
because if the test turns out wrong from your perspective that's just
because we don't know what the cell really wanted. Perhaps, like the
propaganda artists of 1984, cells really believe in the mantra "Death
equals life". The fact remains that the only consistent 'goal' that
life has is "differential reproductive success".

> > So when are you going to actually present some specific test of the
> > proposition that there is an interaction between the generation of
> > specific mutations (that generate specific phenotypes) and the
> > selective need for these specific phenotypes?
>
> I never claimed that I was going to produce such test.

Not surprising. When you believe in the magical mutation fairy, you
don't need no steeenking tests in order to believe that the fairy
produces whatever mutations the cell needs when it needs them. And if
that kills the cell, well, that is what the cell wanted.

> I am merely pointing out at the flaws in your (and general
> neo-Darwinian) reasoning.

So when are you going to point out the flaws? Rather than merely claim
that they might exist *if* the biochemical network of a cell magically
poofs mutations in anticipation of need. Which, of course, you cannot
show because you don't know the c-need. And don't know anything else,
apparently. Especially when the actual evidence shows no correlation
between the benefit of a specific mutation and rate of generation of
that mutant. Only differences in overall mutation rates. And no
evidence whatsoever of anticipation of need by any cell or organism.
You can still claim, after all, that we don't know everything, so
therefore we know nothing and thus neo-Darwinism might be wrong.

> You have problems in using
> elemental logic, problems in traversing multiple layers
> of abstraction and quite a low conceptual resolution.
> All of these problems were illustrated and pointed out
> to you in numerous specific instances in this thread.

Where? I see you using abstraction as an excuse to pretend that
empirical reality and scientific experiments based on empirical reality
don't really mean what they actually show: that mutation generation is
random wrt need.

> > Sure that is your model. I am saying that its obvious empircal
> > implication is that cells have foresight and can produce mutants
> > according to perceived need. And the empirical evidence says that that
> > does not happen, AFAWCT.
>
>
> There is no "obvious empirical implication" unless you
> consider that it is obvious that c-solution = h-solution,
> even without knowing c-solution. Can you explain why
> is the latter obvious (since you don't know what c-need
> and c-solution might be)?

That is, like I say, a nice way to avoid the implications of any actual
experiments. The cell didn't produce the specific lactose utilizing
mutations when placed in lactose environments because the c-need was
not to produce those lactose utilizing variants any more often than by
chance. And, moreover, the c-need obviously does not need to produce
antibiotic resistant variants any more often when exposed to antibiotic
than when not exposed because resistance to antibiotic is not a c-need.
Shouldn't you at least be able to point to a single example of some
c-need which, when experimentally supplied, actually was a c-need that
matched an h-need? All the tries so far have utterly failed to find
this elusive c-need because all the mutations occurred at random with
respect to the selective need that us poor scientists supplied.

> In other words, assume the ID model (in computational meaning)
> as a _hypothesis_ to be tested/falsified by the experiment.
> Then show what would be the "obvious empirical implication"
> of that hypothesis _alone_, which means _without_ also
> having to _assume_ that c-need and c-solution (which ID
> hypothesis implies to exist) must be equal to the h-need
> and h-solution (prefix "h-" refers to your own evaluations
> or any other, except the cellular network's own evaluations).

Tell me why all our attempts to provide selective environments that
match the kinds of environmental features cells have to face have
utterly failed to hit upon a single c-need, if it is only for
c-needs that the biochemical network (or the magical mutation fairy)
produces the excess beneficial mutations?

> Your faulty methodology was to assume not just the ID model,
> but also that c-need = h-need and c-solution = h-solution.
> Only then you can claim that there is an "obvious implication",
> but this is not an implication of ID model hypothesis _alone_.
> It is an implication of the ID model hypothesis, plus these
> additional assumptions you made.

Working hard to make ID untestable, I see. Or perhaps finding excuses
for the repeated failure to find a correlation between mutation
generation and need for those mutations.

> Hence, once the "obvious implication" is falsified empirically,
> what is being falsified is the combination of ID model +
> assumptions that c-need = h-need and c-solution = h-solution.
>
>
> > Isn't the cell's genome (its DNA) the source code?
>
> Not more so than the sequence of all electrical pulses in
> a computer is a "source code" of a chess program running
> on the computer. This is your 'traversal of abstraction layers'
> problem resurfacing again.

I would say more so. The sequence of electrical pulses would not be
the "source code". It would be more like the mRNA and proteins of a
cell.  The DNA of a cell, however, is the cell's source code.  What else
would be?

> > Selection is not mutation. Selection punishes or rewards cells that
> > have specific genetic variants by virtue of the phenotypes expressed.
>
> Biochemical network, running an internal model which encapsulates
> all it knows about its 'world', can try out in this 'model space'
> the 'model mutations' and then perform 'model natural selection'
> (within its knowledge of 'world') and weed out whatever its
> internal model of the world predicts to be harmful, before
> committing to the much more expensive and much slower real
> world implementation and the real world natural selection.

Any evidence that this actually happens? Any mechanism by which this
*could* happen in a non-conscious, non-intelligent, non-anticipatory
chemical reaction? All the evidence I have seen says otherwise.

> Although no one has identified such patterns and
> 'pre-selection' algorithms in biochemical networks, their
> existence, in some form, is implied by the general
> optimization properties of these types of networks and
> known types of punishments and rewards.

Such as?  Be specific.

> To what degree they
> can implement it, depends on the computational capacity
> and the quality of their internal model of the 'world'.

And how, exactly, is this magical computation performed with the
available materials? Be specific. Such vague nonsense requires some
specifics. Otherwise all you are doing is positing a magical mutation
fairy.

> > Can you give me an example of a mutation caused by the "network itself"
> > and the evidence you have that such mutants actually exist? In
> > particular, which ones are caused by the "network itself" when there is
> > a selective need for that mutation? Oh, and did I mention that you
> > need to have evidence?
>
> I would call Cairns experiment an experimental demonstration
> of such anticipatory mutation.

Aside, of course, from the fact that even in the basic Cairns
experiment there was no evidence of anticipation occurring. Only an
induced stress response increasing the overall rate of all mutations
nonspecifically *after* the fact of stress.

Unless, of course, it is your claim (unsupported) that the mere
existence of an ability to have a higher rate of mutation is an
anticipation of the possibility of such an event. Of course, the fact
that the increase does not specifically target genes of specific
interest doesn't help your assertion that there is some specificity to
produce needed variants. But, of course, the Cairns experiment is an
h-need, not a c-need, because it was an actual experiment performed in
the real world rather than a hypothetical magical mutation fairy.

> That demonstration is based on
> the explicit reverse engineering of a specific, very tiny and
> simple algorithm being used. Other, more complex algorithms,
> with better aim, will likely be found (if they haven't been
> already).
>
> In that example, the network anticipates: it looks
> in its internal model space of the world, as it unfolds
> forward in time, in accordance with its understanding of
> the world.

"understanding"? I thought there was no consciousness involved? How
can something have an "understanding" of the world without
consciousness of it? Especially an anticipatory understanding.

> In a test-run in which the self-actor in the
> model space does nothing, the self-actor always dies
> within the look-ahead. Hence, that lowers the risk
> aversion in selecting the actions at its disposal,
> from a library of such actions, and one such is a
> drastic increase in general mutations. In these runs,
> it may find some cases of self-actor alive at the
> forward horizon of its look-ahead, hence that action
> becomes the best action to take and that is what it
> then executes in the real world.

Sounds to me like you are claiming that h-need and c-need are the same
here. And that the 'goal' is "survival" (or reproductive success).
What makes you think that? And remember that there is no, nada, zip
*specificity* in mutation generation in this (or any other) example.
Only a non-specific rate change.

> Now, we know your story about specific vs non-specific
> wrt need mutation, and so on. That has nothing to do
> with the anticipatory algorithm above (within the
> straightforward and principled semantics of 'anticipation').

Except the activation of stress-induced mutation is not anticipatory,
but induced. And also not specific.

> Your objection is nothing more than a bit of semantic
> gerrymandering around the concept of 'anticipate'
> in order to keep the Cairns observation outside
> of its boundaries. That's all fine. You have now a
> concept ND-anticipation which doesn't apply to
> some types of plain meaning anticipation, such as the
> one described above (which uses straightforward semantics).

Anticipate means to prepare before need. Specific mutations are not
prepared before need. The stress induced mutations are not induced
before need. Where is the anticipation? Only in the fact that the
cell has the capacity to increase mutation rate when it is dying or
under stress? But is that *anticipatory* and *planned* or is it not?
It might well be the case that mutation rates just naturally increase
under stress conditions without that being *planned* that way.

> That way you can say Cairns experiment did not demonstrate
> any ND-anticipation in generating mutations. Well,
> that's fine, you can say that if that's what makes your
> life happy. You can even strip away "ND-" prefix, if
> that makes you even happier. That has absolutely no relation
> and no effect on the validity of the ID description within
> its principled, straightforward semantics for 'anticipation'.

Except, of course, the absence of supporting evidence that such
'anticipation' occurs.


>
> All it means is that there is a phenomenon which
> is anticipatory but not ND-anticipatory. Who cares.
>
> The RM conjecture, of pre-Cairns or post-Cairns
> variety, when stated at the algorithmic level of
> abstraction (which is the only level at which
> the semantic lines between the ID and ND-RM are
> sharp), is a non-starter as a serious conjecture.

Why? Pre- and post-Cairns both demonstrate that the generation of
mutation is random wrt the need for the mutation. Nothing has
demonstrated the opposite.  Seems like reality has chosen, if your claim
is that ID requires that there be some correlation between mutation
generation and need for mutation. There is, of course, a strong
correlation between *selection* of generated mutations and the local
need for mutation. But selection is not mutation.

> It is a capricious a priori restriction on the
> type of computations biochemical networks may do.

It is not an _a priori_ restriction to say that there is no evidence
for any mechanism that generates mutations of need at a statistically
detectable level.  It is a conclusion based on significant amounts of
experimental work. Like all such science, it may not be the last word.
There may well be some rare and unusual mutations that get
specifically induced when needed. But I see no mechanism by which
cells can produce anticipatory mutations of need even on the most
distant horizon.

> There is no algorithmic basis or general principle
> at that level that could even express such a
> restriction at the algorithmic level.

Then too bad. The fact remains that the evidence says there is such a
restriction in the real world. And would likely remain so even if a
few exceptions are found at some future time.


>
> The pre-Cairns RM is simply a statement that
> network can perform optimizations using any
> pattern (law, regularity) it has learned,
> except when the pattern contains as its
> step some mutagenic physical-chemical
> conditions.
>
> The post-Cairns RM merely adds an exception
> to the exception of the pre-Cairns RM. It excepts
> those pre-Cairns RM exceptions in which the
> mutagenic condition affects a "large" number
> of sites (presently "large" is set equal to "all").
>
> Those kinds of conjectures, pre and post, are plainly
> silly at the algorithmic level. Unfortunately for
> the ND priesthood, the study of the anticipatory
> computations by the biochemical networks and
> of their algorithms is an existing and rapidly
> developing branch of computational biology.
> Hence, neo-Darwinism is already a zombie in the
> conjectures realm.

So, where, in that literature, do you find the specific evidence for
anticipatory mutation?

> > That, indeed, is what the evidence says. Initially, of course, people
> > considered that this might be an example of preferential *induction* of
> > beneficial mutation by the environment. However, upon further testing
> > it was demonstrated that the mutation rate increased due to a stress
> > response, but not *preferentially* the rate of mutations of need.
>
> If you have textbooks from before and after Cairns experiments, check
> the wordings on the RM conjecture. There were no caveats about 'stress
> response' or any equivalent. Now they are prominent. The quiet semantic
> gerrymandering around the concept of 'anticipation' did occur here.

Finding the stress response was an important finding. But it also
remained true that this is not anticipatory mutation nor mutation
generation correlated with need.

> > Even Cairns was smart enough to posit that what he
> > was seeing was *induced preferential mutation* rather than foresighted
> > anticipatory process.
>
> Thanks, that is a neat example of semantic gerrymandering around the
> concept of 'anticipation'.

What Cairns was interested in (and what was eventually found not to be
the case) was the word "preferential". There was never any idea that
the mutations were anticipatory. They were also considered to be
mutations that were *induced*.

> >>At present, only the coarse grained, statistical and static
> >>properties of the biochemical reaction networks have been
> >>explored, and only very simplified (toy) models of its
> >>dynamics are being simulated on the computers. You can check
> >>on the SFI
> >
> >
> > Is that an abbreviation for Science Fiction site?
>
> That is the Santa Fe Institute, the link of which was given
> right below that paragraph. Here it is again:
>
> http://www.santafe.edu/research/publications/working-papers.php
>
> { Nah, don't worry. I don't believe that anyone will jump to
> a conclusion that your science is dated, or anything like that.
> No, they won't. No, nothing of the sort. }

So, where, in what specific paper, do they support your thesis of
anticipatory mutation? Using (or misusing) a bunch of phrases from the
SFI does not cut it.

> > I doubt it. The evidence is already in wrt mutation (although I would
> > leave open the possibility that a few domesticated mutational processes
> > might be capable of generating variants wrt need, but these would be
> > rare and unusual cases, not the norm).


> Oh, I see, you are already starting on RM version 3. In RM3, which
> would be an exception to the RM2's exception of the RM1's exception,

RM2 is not an exception to RM1. RM1 was never dependent upon any
specific rate of mutation. The exceptions I am talking about above
would be exceptions. If they are ever found.

> the biochemical network will be allowed to anticipate using the
> patterns which include more specific mutagenic conditions, provided
> that such patterns do not occur too often (whatever that means).
> Did you settle on some figures that define 'rare', 'unusual',
> 'not the norm'? Do you know of any actual experiment which prompted
> this sudden new level of weasel-wording?

By rare I would mean something like 0.000000000000001% of all genes or
fewer.

nightlight

Jul 16, 2006, 3:40:03 AM
Windy wrote:

> nightlight wrote:
>
>>You can't even state a
>>_general_ rule, about how much anticipation does the
>>latest version of ND's RM dogma allow and what is
>>still on the prohibited list lately?
>
>
> Here on planet Earth, nervous systems evolved by trial and error over
> the course of hundreds of millions of years to process information and
> anticipate events precisely because the genome can't anticipate
> anything.

That view is out of date by at least a couple of decades:

http://groups.google.com/group/talk.origins/msg/623400a8de95db21

The biochemical networks and the networks of neurons (nervous systems)
are different implementations of distributed computational processes
of the same mathematical type (mathematically modeled by neural networks).

> In your fevered fantasy, why did anticipatory genetic
> networks even bother to create the mutations to produce nervous
> systems, if the network in itself already had much more processing
> power than any nervous system? Why not just mutate as you go along?
>
> Or, if nervous systems were such a great idea, why did the
> super-duper-network wait 2-3 billion years to produce a passable one?

The advance of neuron-based networks was not about creating
more powerful computation or modeling. It was about overcoming
the scalability problem of these distributed computers.

Namely, the biochemical reaction networks, their perception
and processing, use diffusion (e.g. of molecules, ions) to
establish the network links (which carry the signals between the
nodes) and to receive sensory inputs (which come in
through cellular membrane). {Although the diffusion may be
aided by various gradients (such as electric or chemical),
for the purpose here, I'll ignore these smaller refinements.}

That method works fine when you are on the intra-cellular
(or within nucleus, where much of the mutation computations
would be carried out) scales, where the distances are short.
Unfortunately, diffusion propagates signals in time 't' to
distances s(t) = k*sqrt(t), where k is some constant and
t > 0 (I am using the 1-dimensional Brownian motion formula for
simplicity). That means that the average signal transmission
speed along the diffusion based network links is:
v(t) = s(t)/t = k/sqrt(t), or expressed in terms of
traversed distance s:

v(s) = k^2 / s ... (1)
t(s) = (s/k)^2 ... (2)

In other words, the average signal speed in such networks drops
rapidly toward zero as the distance s, or length of the links,
increases. Or put another way, by eq. (2), the time for a pulse
to traverse a link grows with the square of the link length.
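The scaling in eqs. (1) and (2) can be checked with a few lines of code. A minimal sketch follows; the function names and the diffusion constant k are illustrative choices, not anything from the post:

```python
# Diffusion-based signalling covers distance s(t) = k*sqrt(t)
# (1-D Brownian-motion scaling), so a link of length s takes
# t(s) = (s/k)**2 to traverse (eq. 2), and the average speed
# over that link is v(s) = k**2 / s (eq. 1).

def traversal_time(s, k=1.0):
    """Time for a diffusing signal to cover distance s (eq. 2)."""
    return (s / k) ** 2

def average_speed(s, k=1.0):
    """Average signal speed over a link of length s (eq. 1)."""
    return k ** 2 / s

# Doubling the link length quadruples the traversal time...
assert traversal_time(2.0) == 4 * traversal_time(1.0)
# ...and halves the average speed, as the text states.
assert average_speed(2.0) == average_speed(1.0) / 2
```

This is why the quadratic penalty only bites at multi-cellular scales: for short intra-cellular distances the (s/k)^2 cost stays negligible.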

Therefore, after the multi-cellular technology was developed,
the lengths of the links increased and the diffusion based
link technology became too slow. Hence, the neural
technology was designed (by the cellular biochemical networks)
to overcome the link length scaling drawback of the original
technology.

The neural technology (axon+synapse+dendrite or axon-synapse-cell)
based links provide signal propagation speeds which are v(s) = constant,
thus independent of distances s. That was the idea of the neural
technological advance. Even here, the diffusion based signal technology
still remains, but it is used only within the synaptic gaps, where
the distances are short, thus the penalty of (1) is avoided.

While the neural technology solved the problem of computation for
the multi-cellular organisms, the technology did have some
downsides, principally the lower _number of connections_
that can be inexpensively established (diffusion for N nodes
can provide inexpensively up to n^2 links, without paying a line
cost when not used, while neurons require direct permanent
physical point-to-point links which need to be maintained
individually continuously, while transmitting or idling)
and the fragility of the conducting lines (need for electric
insulation and mechanical protection of the fragile
and permanent lines).

Then, as the next technological advance came, from multi-cellular
single organisms to social organisms (social insects), the
neural technology became inadequate again, since its permanent
physical links did not work across multiple mobile agents. The
solution here was a return to the diffusion technology (pheromones),
which limited the signaling speeds and social network complexity.
Further, the optical and acoustic signaling technologies were
also added.

The major new advance, comparable in magnitude to the original
advent of neural technology, was the human communication
technology, which allowed the explosive growth of the
social networks.

All of these technological advances were deliberate designs
by the appropriate networks at different levels. As with any
creative processes, there is an element of luck involved, as reading
the history of scientific discoveries and technological inventions
by human neural + linguistic + social networks testifies. But the
luck mostly produces minor effects in the precise space-time
location of the invention e.g. once some idea is 'in the air'
the luck merely determines who will be the first to pick it off.

As to the head-to-head benchmarks of different networks,
the super-networks have greater overall computational
capability than any of their sub-networks. But that does
not imply, if one were to create some hypothetical speed
benchmark in which certain forms of computation are
artificially prohibited to the higher order networks, such
as their use of the computations by the cellular biochemical
networks (a restriction which in the normal higher network
computations they don't have), that the higher order network
would come out ahead.

As noted, in the molecular level engineering of a new
cell from scratch, the cellular biochemical networks
are unrivaled masters of that domain. All individual
human experts or entire scientific networks, when prohibited,
for the test purpose, from using the cellular biochemical
network computations, and forced to use exclusively human
brains and higher level scientific network computations,
are nowhere near matching the mastery of the cellular
networks in molecular level engineering of the new cells.

Without any such benchmark restrictions, the individual
human and scientific networks can certainly create a
live cell (and faster than a separate biochemical network,
left to fend for itself, could). The original benchmark
conditions were simply meant to isolate precisely where
the needed computations that can accomplish such a
molecular engineering task take place. The answer is:
they take place in the cellular network. The higher level
networks (all of human expert brains, plus scientific and
technological networks put together in one big team)
are comparatively clueless about such high precision
molecular engineering of a live cell.


>>In contrast, the ID position (at the algorithmic level)
>>is clean, coherent and principled: any degree of
>>anticipation is allowed which is within the computational
>>resources of the network.
>
>
> Which explains precisely nothing.

It points out that when you formulate ND-RM and ID conjectures at
the algorithmic level, which is where their semantics is crystal
clear, then there is no contest which one is scientifically more
sound. The ND-RM is at this level a capricious ideologically
motivated taboo on the types of computational algorithms which
to the cellular biochemical network is allowed to perform in
their optimization tasks. ID places no such a priori restriction
on these algorithms.

It is true that the present experimental and computational
capabilities do not allow us to reverse engineer the complete
algorithms of the biochemical network, to get at its "source code"
and find out exactly what patterns they are using to anticipate
and whether they are excluding the patterns which contain
certain types of mutagenic actions as the step within those
patterns, such as the site specific mutagenic action (which are
on the neo-Darwinian RM taboo list; in its post-Cairns version,
the general mutagenic actions are not prohibited any more).

Hence, at present, detailed reverse engineering being still
beyond reach of science, one can only evaluate these conjectures
at the theoretical level.

In contrast, I have yet to see, here or elsewhere, a formulation of
the ND-RM conjecture at the biochemical level which has a connection
to the empirical facts _even just in principle_ without parting
ways with the rules of elemental logic. If you wish to fix
hersheyv's formulation or any other, give it a shot. Keep
in mind the objections to the logical flaws in his formulation
already pointed out here:

http://groups.google.com/group/talk.origins/msg/e068428a5762adf5
http://groups.google.com/group/talk.origins/msg/cfaee59d8c5e179e


> I'll bet you are not prepared to make
> any predictions based on differences in networks - does a big
> eukaryotic genome have more computational resources than a small
> bacterial one? If so, the former should be able to anticipate
> mutational needs and amass favourable mutations faster, right?

No. That takes a wrong leap in every reasoning step.

a) The greater computational capacity does not imply that the
mutation solution must come out more often as a preferred
solution to any problem. A farther look-ahead afforded by
a more powerful computation may reveal side-effects of a
mutation that a shallower computation did not show. Just
consider a chess program: does it mean that a deeper
searching program will prefer to exchange pieces or even
capture an undefended piece more often? It won't. The
probability of piece exchange would likely be unaffected.
The probability of capture of undefended piece would be
_decreased_ by a deeper program, since to a shallow
program such capture would appear as favorable, while a
deeper program may find some such captures to be opponent's
traps (which, when playing against strong players or programs
is almost always the case).
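Point (a)'s chess analogy can be illustrated with a toy minimax search. The game tree, move names, and scores below are invented for illustration, not taken from any real program:

```python
# Minimal minimax sketch: a deeper search rejects a capture that a
# shallow search prefers. Positive scores are good for the player
# moving at the root; each node is (static_score, {move: child}).
TRAP = (0, {
    "capture": (+1, {            # grabbing the piece looks like +1...
        "recapture": (-4, {}),   # ...until the opponent's reply is seen
        "pass":      (+1, {}),   # (the opponent will not choose this)
    }),
    "decline": (0, {}),          # quiet move: no material change
})

def minimax(node, depth, our_move):
    static, children = node
    if depth == 0 or not children:
        return static            # at the horizon, fall back to statics
    values = [minimax(c, depth - 1, not our_move) for c in children.values()]
    return max(values) if our_move else min(values)

def best_move(node, depth):
    _, children = node
    # Children are positions after our move, so the opponent moves next.
    return max(children, key=lambda m: minimax(children[m], depth - 1, False))

assert best_move(TRAP, 1) == "capture"   # shallow search takes the bait
assert best_move(TRAP, 2) == "decline"   # deeper search sees the trap
```

The point of the sketch is only that extra look-ahead can *lower* the frequency of a superficially attractive action, which is the shape of argument (a) makes about deeper-computing networks disfavoring mutations.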

b) As explained via eqs. (1) & (2), the diffusion based
computation becomes slower at larger distances. A 5 times
longer link implies a 5 times lower signal speed, hence 25
times longer times to propagate signals across that link.
That implies 25 times slower computations for the algorithms
which operate (or substantially rely) on these longer links.

c) The diffusion based mechanism of propagation of mutagenic
enzymes has low spatial resolution. The target area
such an enzyme traverses grows with the square of the distance,
hence it becomes a spatially less accurate action for
larger cells. For example, the number of sites which
may be affected negatively by the enzyme, hence the
number of unfavorable outcomes, grows as the square
of the distance.

d) The eukaryotic cells are social cells. Hence they
are much more constrained than individualistic cells,
such as bacteria. The eukaryotic cells thus must obey
the rules of their social order e.g. satisfy required
performance criteria and have more predictable actions.
Any mutation in such cells has much farther reaching
consequences, hence mutants get eliminated quickly
(e.g. programmed cell death), even when a mutation
might be otherwise favorable to a single cell.
Since the eukaryotic cell does have an internal
model of these social constraints and consequences
of non-conformity, its evaluation function would be
more strongly biased against mutations as solutions
than the evaluation function of bacteria.

e) The socialized organisms, everything else being
equal, would be on average dumber than the
individualistic organisms. The socialized organism's
actions are more constrained, hence their environment is
more predictable and they have fewer choices to pick from,
hence they need lower anticipatory power to survive.
Social insects or social (eukaryotic) cells, die
quickly once their social network support is
removed (as would most humans). We can even notice
the "dumbing down" effects in our social networks,
where various conformity pressures penalize
(on average) deviations from the 'norm' (nerds
or contemplative kids tend to have harder time
growing up and tend to have fewer kids later).

f) The mutation effects that a multicellular
organism would need to evaluate are organism
level effects. These are largely beyond the
perception range and evaluation capabilities
of a single eukaryotic cell. The transformation
of the DNA across the organism's generations is
thus largely left to the much safer sexual
reproduction mechanism, which in turn uses
the anticipatory power of the organism's
neural network to guide DNA transformation from
parents to offspring (e.g. in selection of
reproductive mates), while restricting
the cellular networks in the initial phase,
when the selection of the 'lucky' sperm occurs.
The human social networks, especially the
scientific networks, are at a stage in which
the mutations at the organism level could be
guided by these networks at a much greater rate
than what the earlier networks could achieve
in guiding the evolution to this point. At
present, there is a strong social level
taboo/bias against such genetic design of humans.
That taboo is an "echo", or an organism level
shape, of the underlying negative bias in
evaluating mutations that our underlying
cellular networks have.

g) A general principle of the evolution of hierarchical
networks is the suppression of innovation by the
lower order networks, in favor of innovation
by the higher order networks, as sketched in this post:

http://groups.google.com/group/talk.origins/msg/2c5884a907f10c22

Thus, as the higher order networks (e.g.
multi-cellular organism or 'society') optimize,
the lower order networks (eukaryotic cells, or
socialized organisms) lose degrees of freedom,
the options to pick from, becoming more constrained.
The live edge of the innovation moves up, toward
the higher order networks, freezing out the
innovation by the lower order networks. At the
bottom of the network hierarchy are primordial
networks of Planckian scale objects (10^-33 m),
with a further hierarchy of their super-networks
making up eventually our elementary particles.
While the specific nature of all of these is
highly speculative, the general principles,
such as (g), would apply (in some form). Thus
these networks were frozen out of innovation
shortly after the Big Bang, with the atomic
structure innovation getting frozen out by
the time of formation of larger nuclei (in
nuclear fusion within stars, followed by their
dissemination upon the star's death). The freezing
out of the cellular level innovation occurred in
the multi-cellular organisms (i.e. the required
anticipatory computations have moved up the
hierarchy, even though the cells still take
part in it in the role of nodes of the higher
order network, which guide the innovation).
Hence, even from the general principle (g) alone,
one would conclude that the cellular biochemical
networks of eukaryotes would strongly disfavor
mutations in comparison to the biochemical
networks of prokaryotes.

-------------------------------------------
Summarizing the consensus of the seven separate
considerations above on your question, the most
plausible conjecture at the algorithmic level of
abstraction would be that eukaryotic cells
are more biased against computing mutations as
solutions to any problem and would produce them
much less frequently than prokaryotic cells.

Note that we're not talking here about mutations
induced by external causes, such as X rays. If
we were, for example, to expose a prokaryotic and a
eukaryotic cell to the same intensity of ionizing
radiation, the eukaryote would have more mutations,
due to its larger size.


ErikW

Jul 16, 2006, 11:48:05 AM
Your reply is too long, man. How do you find the time anyway to write
that many words? I manage 3 posts a week of less than 100 words :P

nightlight wrote:
> ErikW wrote:
> >>"hersheyhv" already brought up) in detail is here:
> >>
> >> http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2
> >>
> >>
> >>anti-teleology argument in this post and in its followup:
> >>
> >> http://groups.google.com/group/talk.origins/msg/f969d45c50183c02
> >> http://groups.google.com/group/talk.origins/msg/f30ee4fdbfff5b01
> >>
> >>
> >
>
> >
> > I have a real problem understanding your stubbornness on this. Your idea
> > of Lamarckism is dead for a reason you know. I posted some short info
> > somewhere else but I'll repeat here and add a few things:
>
> Perhaps a naive Lamarckism is dead. The idea of intra-cellular
> biochemical reaction networks, which are mathematically of the same
> type of adaptable network as human or animal brains (general,
> distributed computers, self-programmable), implementing
> anticipatory algorithms in the domain (cellular biochemistry) in
> which they are unrivaled specialists, is not far fetched at all. It
> is in fact the most plausible conjecture as to what might these
> self-programmable distributed computers be computing, anyway. Lamarck
> had simply picked the wrong network (animal brain) to which he
> attributed such anticipatory activity.

He did not pick a network. Lamarckism was equally applicable to
organisms without brains. But nm.

>
> While it is true that networks consisting of enough brains for their
> nodes are capable of implementing genetic engineering tasks, as biotech
> industry illustrates, the ultimate specialist on that subject is the
> cellular biochemical network. It daily achieves feats that all of the
> world's molecular biology, biochemistry, biotech & pharmaceutical
> industry resources, taken all together to work on this single task,
> could not even get close to matching -- produce a single live cell
> from scratch (inorganic materials). The tiny biochemical networks
> do it billions of times every day and have known how to do it for
> over a billion years.
>
>
> > 2) There are likely many more mutations like this in populations of the
> > more widespread darker furred "normal" mouse variant.....
> >
> > 3) This point mutation accounts for one third of the variation in fur
> > colour and pattern. It's not [one mutation] = [finished beach mouse].
> > This alone should have told you that even in the example that you
> > discuss there are more than one mutation (and loci) involved. And that
> > fits rather perfectly with RM + NS. ...
> >
> > Even though I don't know this explicitly there are likely other
> > mutations in the vicinity of the mutation in question that are
> > entirely neutral and without effect. It would appear that that would be
> > direct disproof of your teleological mutations idea and instead show
> > that mutations are random, wouldn't you agree? Or would you instead
> > suggest that only some mutations are under divine control?
> >
>
> This is basically recycling a variation on the theme of the empirical
> mutation rate argument which others have done here.

That's because it is the correct argument. But nm that ftm.

> Instead of arguing
> that the site has high enough mutation rate, you are saying that there
> are multiple sites which can achieve similar effect on fur color. If
> there are, say 50 such alternative ways, that is equivalent (regarding
> the odds of finding a favorable color adaptation) of saying that the
> mutation rate on the original single site is 50 times greater.

Well no. I'm rather saying that there are, say 50 alternative ways of
achieving one step in the same direction. It's not black or white. It's
50 different lighter fur colourations. And in at least one verified
case, it did happen in another way.
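For what it's worth, the two framings in this exchange (50 alternative target sites each mutating at rate mu, vs. one site mutating at 50*mu) can be checked numerically; the rate below is a hypothetical per-site, per-replication value, not taken from the paper:

```python
# Probability of at least one favorable hit: 50 independent sites at rate mu
# vs. a single site at rate 50*mu. In the small-mu limit these coincide,
# since 1 - (1 - mu)**50 ~ 50*mu.

mu = 1e-9         # hypothetical per-site, per-replication mutation rate
n_sites = 50

p_multi = 1 - (1 - mu) ** n_sites   # any one of 50 alternative sites mutates
p_single = n_sites * mu             # one site at 50x the rate

print(p_multi, p_single)            # nearly identical for small mu
```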

>
> Since I was not arguing that the empirical rate of the mutations at the
> original site was not (statistically) capable of producing the observed
> adaptation in the given time and population size, your bringing in an
> equivalent of claim that the rate was even faster (effectively, via
> alternative sites), remains as disconnected from my argument as the
> previous variants.
>
> This is not an issue of whether the correct microscopic physical and
> chemical conditions at the location and the time of the mutation may
> have been there or whether they are causally responsible for the
> observed rate. We all agree that the right physical-chemical conditions
> were there and that they can cause the mutation at the rates observed.
>
> The point you and others here are missing is that this fact alone is
> insufficient to tell you whether these physical-chemical conditions
at the location & time of the mutation were an accidental event or a
> deliberate step executed as result of a computation by the biochemical
> network (of which they are a part anyway) for the ultimate purpose of
> improving the fitness of the organism.

What you are saying then sounds like "since the observed mutation rate
is a product of an anticipatory network anyway, we have nothing
random to compare it to. Hence you don't know."

Alias the mutational fairy :). If you're not saying that then I don't
understand how you can maintain your position and be knowledgeable
about molecular biology at the same time.

> The meaning of 'deliberate step'
> in this context and the further explanation and illustrations of this
> point were given in an earlier post:
>
> http://groups.google.com/group/talk.origins/msg/ff90576d409cefd2

Um, restate that in less than 150 words please :). And skip all
analogies and go straight for the biology :)

>
> Although it is already answered in that post, the question you may have
> if reading that post superficially (as some others did) is why would
> we want to check for this possibility anyway?

Well we can't check for it anyway? So that thought didn't occur to me
:)

I read the rest but it didn't go too well with what I thought you were
saying above so I snipped it.

snip

hersheyhv

Jul 16, 2006, 12:32:19 PM

nightlight wrote:
> hersheyhv wrote:
>
> >>These networks learn by being exposed to
> >>some input signals,
> >
> >
> > A cell's biochemical network does not "learn" anything.
> > It responds to environmental stimuli. A cell is not
> > a conscious intelligent agent, even though conscious
> > intelligent agents are composed of cells. Consciousness
> > and intelligence are emergent properties of organisms,
> > not properties of the cells they are composed of.
>
> The 'neural networks' described there are an _abstract
> mathematical model_.

Actually, as you yourself might agree, your claim is basically that the
"hills are alive". That is, you actually believe that molecules have
"minds". This, like the article you pointed me to, is nothing but old
(very old) fashioned animism.

[snip]


>
> Note also that no assumption about 'consciousness' was
> used anywhere above i.e. these are all purely mathematical
> properties of these networks (although described in informal
> everyday language, thus being slightly ambiguous on that
> question). 'What is it like to be' any such network is outside
> of present natural science, hence we can only philosophize
> and speculate about it. The philosophy I find most coherent
> regarding the 'mind stuff' is panpsychism:
>
> Philosophical panpsychism
> http://plato.stanford.edu/entries/panpsychism/

An interesting paragraph from this site (below) puts your ideas into
perspective. Of course, as a scientist, I am not an animist. Only
certain organisms demonstrate consciousness or thought; those are
emergent properties. Panpsychism, of course, is nothing more than New
Agey mysticism that attributes mystical healing properties to crystals.

"Panpsychism seems to be such an ancient doctrine that its origins long
precede any records of systematic philosophy. Some form of animism,
which, insofar as it is any kind of doctrine at all, is very closely
related to panpsychism, seems to be an almost universal feature of
pre-literate societies, and studies of human development suggest that
children pass through an animist phase, in which mental states are
attributed to a wide variety of objects quite naturally (see Piaget
1929).[3] It is tempting to speculate that the basic idea of
panpsychism arose in what is a common process of explanatory extension
based upon the existence of what is nowadays called "folk
psychology". It would have been difficult for our ancestors, in the
face of a perplexing and complex world, to resist applying one of the
few systematic, and highly successful, modes of explanation in their
possession."

The paragraphs that follow are also of interest. They clearly point
out that the problem of panpsychism or animism is the absence of the
emergent properties that characterize mind (particularly consciousness)
at the fundamental level.

Unfortunately, nightlight seems to have stopped at this primitive
animistic level of metaphysics, attributing conscious activities to
molecules by an argument by analogy with human behavior, as described
as "some pretty silly things" below:

"The most straightforward argument from analogy goes like this: if we
look closely, with an open mind, we see that even the simplest forms of
matter actually exhibit behavior which is akin to that we associate
with mentality in animals and human beings. Unfortunately, in general,
this seems quite preposterous, and some panpsychists have written some
pretty silly things in its defense. For example, Ferdinand Schiller
attempted to "explain" catalysis in terms of mentalistic relations:
"is not this [that is, catalysis of a reaction between A and B by the
catalyst C] strangely suggestive of the idea that A and B did not know
each other until they were introduced by C, and then liked each other
so well that C was left out in the cold" (as quoted by Edwards (1967)
in an acidly humorous paragraph, from Schiller (1907)). Strange?
Certainly, but not really very suggestive at all compared to the
physical chemists' intricately worked out, mathematical and empirically
testable tale of energy reducing reaction pathways. There has always
been a strain of mysticism in many panpsychists, who like to imagine
they can "sense" that the world is alive and thinking, or find that
panpsychism provides a more "satisfying" picture of the world,
liberating them from the arid barrenness of materialism and perhaps
this leads them to seek analogies somewhat too assiduously (as noted
above, Fechner was the most poetical advocate of the mystical appeal of
panpsychism and also a fervent advocate of analogical arguments for
panpsychism)."

This paragraph is followed by:

"A more intriguing hope for an analogical defense of panpsychism
springs from the overthrow of determinism in physics occasioned by the
birth of quantum mechanics. There have been occasional attempts by some
modern panpsychists, starting with Whitehead, to see this indeterminacy
as an expression not of blind chance but spontaneous freedom in
response to a kind of informational inclination rather than mechanical
causation. This updated version of the analogy argument has the
advantage that the property at issue, freedom, modelled as spontaneity
and grounded in indeterminacy, can be found at the most fundamental
level of the physical world. As in any analogical argument, the crux of
the issue is whether the phenomena cited on the one side are
sufficiently analogous to the target phenomena to warrant the
conclusion that the attributes in question can be extended from the one
domain to the other. In this case, we have to ask whether the
indeterminacy found at the micro-level genuinely corresponds to what we
take freedom to be, and this is doubtful. The indeterminacy of modern
physics seems to be a pure randomness quite remote from deliberation,
decision and indecision."

The above paragraph, I think, is the crux of nightlight's problem.
Empirically, it can and has been determined that the generation of
specific mutations occurs at random wrt need (mutation is random;
selection is not) to the extent that we understand the expectations of
random events mathematically and statistically. That is because there
is no detectable entanglement of need and the processes that produce
mutation at specific sites; unlike the quantum effect of entanglement,
mutation is a mass-averaged process. The indeterminacy of random
mutation bothers the mystical need of nightlight to have "deliberation,
decision, and indecision" by "informational inclination" in inanimate
chemistry. So he invents the idea that somehow the "biochemical
network" *somehow* (at the mystical level of algorithm) ensures
increased specific mutations when there is a need. Not just as an
induced response, but as an anticipation of need! That the empirical
evidence is against this idea doesn't bother him. It should.
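The empirical determination referred to here is essentially the Luria-Delbruck fluctuation test: if mutants arise at random before selection, parallel cultures show occasional "jackpot" cultures and a variance far above the mean, whereas mutation induced only at the moment of need would give roughly Poisson (variance ~ mean) counts. A minimal simulation sketch, with hypothetical parameters:

```python
import random

def fluctuation_test(n_cultures=200, generations=12, mu=1e-4, seed=1):
    """Simulate parallel cultures grown from single cells. Mutations arising
    in early generations found large mutant clones ('jackpots'), so the
    variance of mutant counts across cultures greatly exceeds the mean,
    unlike the Poisson statistics expected from need-induced mutation."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_cultures):
        wild, mutant = 1, 0
        for _ in range(generations):
            # each wild-type cell divides; each daughter may mutate
            new_mutants = sum(1 for _ in range(2 * wild) if rng.random() < mu)
            wild = 2 * wild - new_mutants
            mutant = 2 * mutant + new_mutants
        counts.append(mutant)
    mean = sum(counts) / n_cultures
    var = sum((c - mean) ** 2 for c in counts) / n_cultures
    return mean, var

mean, var = fluctuation_test()
print(mean, var, var / mean)  # variance/mean far above 1 signals pre-selection mutation
```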

My personal position is also pointed out in this article in the
arguments against panpsychism, to which nightlight should pay more
attention:

"Perhaps the initially most obvious problem with panpsychism is simply
the apparent lack of evidence that the fundamental entities of the
physical world possess any mentalistic characteristics. Protons,
electrons, photons (to say nothing of rocks, planets, bridges etc.)
exhibit nothing justifying the ascription of psychological attributes
and thus Occam's razor, if nothing else, encourages withholding any
such ascriptions. Furthermore, it is argued, since we now have
scientific explanations (or modes of explanation at least) which have
no need to ascribe mental properties very widely (it is tempting to
interject: not even to people!) panpsychism can be seen as merely a
vestige of primitive pre-scientific beliefs. At one time, perhaps,
panpsychism or animism may have been the conclusions of successful
inferences to the best explanation, but that time has long passed.

As we examine ever smaller, more basic units of the physical world, it
seems harder and harder even to imagine that such things have any
properties that go beyond those ascribed to them by the physical
theories which are, after all, the only reason we have to believe in
them. In any case, there seems no reason to assign any intrinsic nature
to the theoretically postulated entities of physics that goes beyond
providing for the causal powers they are presumed to possess according
to the theories which posit them. Even granting the need to assign some
intrinsic nature to matter, it remains far from clear that mentality is
the intrinsic character required for possession of these causal powers.
Some such argument likely accounts for the general sense of
implausibility with which many people greet panpsychism nowadays. For
example, John Searle describes panpsychism as an "absurd view" and
asserts that thermostats do not have "enough structure even to be a
remote candidate for consciousness" (1997, p. 48) while Colin McGinn
(1999, pp. 95 ff.) labels it either "ludicrous" (for the strong
panpsychism which asserts that everything has full fledged
consciousness) or "empty" (for the weak panpsychism which asserts
only that everything has at least some kind of proto-mentality)."

And later on, and particularly relevant:

"Panpsychism is an abstract metaphysical doctrine which as such has no
direct bearing on any scientific work; there is no empirical test that
could decisively confirm or refute panpsychism."

The point remains that empirical tests do show that (among the tested
genes, which is not a highly biased sample) generation of mutation
occurs, to the extent that we can detect it, at random wrt need. That
is the empirical reality that any explanation or theory about how
organisms work to change allele frequencies or generate new
functionalities must reside within. As for the mystical idea that the
"hills (or molecules, or biochemical networks) are alive", I have no
need for that hypothesis.

Windy

Jul 16, 2006, 12:44:29 PM
ErikW wrote:
> nightlight wrote:
> > michael...@worldnet.att.net wrote:
> >
> > > Now researchers have identified a genetic mutation
> > > that underlies natural selection for the sand-matching
> > > coat color of the beach mice, an adaptive trait that
> > > camouflages them from aerial predators....
> >
> > It doesn't appear they have shown that the mutation
> > was _random_. They only found a variant of a gene
> > which is responsible for the lighter color.
> >
> These researchers searched for the locus causing the light fur in these
> mice and found a dominant point mutation.
> However, there are more than one mutation causing light furred "beach"
> mice. Any of those mutations will do. (How many different there are is
> not known.)
>
> So RM + NS pwns ID :P

But perhaps the pink mutation fairy had very good reasons to choose
this specific mutation! :)

Another problem with nightlight's thesis is this: why isn't *somatic*
mutation directed?

Since somatic cells have the cellular "supercomputer" at hand too,
examples of adaptive somatic mutation should be dime a dozen. Imagine
being a dark mouse on the beach, predators swooping overhead - if your
genome is able to send a signal to your *germ* cells to produce a
mutation in a specific gene, why not send a signal to your skin cells
too so they'll mutate and start producing lighter hair ASAP? After all,
the germ line mutation will only be expressed in the next generation,
while the mouse is in danger *now*.

-- w.
