
Stacking the Deck


Sean Pitman

Jan 4, 2004, 11:25:02 AM
lmuc...@yahoo.com (RobinGoodfellow) wrote in message news:<81fa9bf3.04010...@posting.google.com>...
> seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.03123...@posting.google.com>...
>
> Good gravy! That was so wrong, it feels wrong to even use the word
> "wrong" to describe it. All I can recommend is that you run, don't
> walk, to your nearest college or university, and sign up as quickly as
> you can for a few math and/or statistics courses: I especially
> recommend courses in probability theory and stochastic modelling.
> With all due respect, Sean, I am beginning to see why the biologists
> and biochemists in this group are so frustrated with you: my
> background in those fields is fairly weak - enough to find your
> arguments unconvincing but not necessarily ridiculous - but if you are
> as weak with biochemistry as you are with statistical and
> computational problems, then I can see why knowledgeable people in
> those areas would cringe at your posts.

With all due respect, what is your area of professional training? I
mean, after reading your post I dare say that you are not only weak in
biology but in statistics as well. Certainly your numbers and
calculations are correct, but the logic behind your assumptions is
extraordinarily fanciful. You sure wouldn't get away with such
assumptions in any sort of peer reviewed medical journal or other
statistically based science journal - that's for sure. Of course, you
may have good success as a novelist . . .

> I'll try to address some of the mistakes you've made below, though I
> doubt that I can do much to dispel your misconceptions. Much of my
> reply will not even concern evolution in a real sense, since I wish to
> highlight and address the mathematical errors that you are making.

What you ended up doing is highlighting your misunderstanding of
probability as it applies to this situation as well as your amazing
faith in an extraordinary stacking of the deck which allows evolution
to work as you envision it working. Certainly, if evolution is true
then you must be correct in your views. However, if you are correct
in your views as stated then it would not be evolution via mindless
processes alone, but evolution via a brilliant intelligently designed
stacking of the deck.

> > RobinGoodfellow <lmuc...@yahoo.com> wrote in message news:<bsd7ue$r1c$1...@news01.cit.cornell.edu>...
>
> > > It is even worse than that. Even random walks starting at random points
> > > in N-dimensional space can, in theory, be used to sample the states
> > > with a desired property X (such as Sean's "beneficial sequences"), even
> > > if the number of such states is exponentially small compared to the
> > > total state space size.
> >
> > This depends upon just how exponentially small the number of
> > beneficial states is relative to the state space.
>
> No, it does not. If you take away anything from this discussion, it
> has to be this: the relative number of beneficial states has virtually
> no bearing on the amount of time a local search algorithm will need to
> find such a state.

LOL - you really don't have a clue how insane this statement is, do you?

> The things that *would* matter are the
> distribution of beneficial states through the state space, the types
> of steps the local search is allowed to take (and the probabilities
> associated with each step), and the starting point.

The distribution of states has very little if anything to do with how
much time it takes to find one of them on average. The starting point
certainly is important to initial success, but it also has very little
if anything to do with the average time needed to find more and more
beneficial functions within that same level of complexity. For
example, if all the beneficial states were clustered together in one
or two areas, the average starting point, if anything, would be
farther away than if these states were distributed more evenly
throughout the sequence space. So, this leaves the only really
relevant factor - the types of steps and the number of steps per unit
of time. That is the only really important factor in searching out
the state space - on average.

> For an extreme
> example, consider a space of strings consisting of length 1000, where
> each position can be occupied by one of 10 possible characters.

Ok. This would give you a state space of 10 to the power of 1000 or
1e1000. That is an absolutely enormous number.

> Suppose there are only two beneficial strings: ABC........, and
> BBC........ (where the dots correspond to the same characters). The
> allowed transitions between states are point mutations, that are
> equally probable for each position and each character from the
> alphabet. Suppose, furthermore, that we start at the beneficial state
> ABC. Then, the probability of a transition from ABC... to BBC... in a
> single mutation 1/(10*1000) = 1/10000 (assuming self-loops - i.e.
> mutations that do not alter the string, are allowed).

You are good so far. But you must ask yourself this question: what
are the odds that, out of a sequence space of 1e1000, the only two
beneficial sequences with uniquely different functions would sit so
close together that the gap between them can be crossed with odds of
1 in 10,000 per mutation? Crossing such a tiny gap would take a
random walk of only 10,000 steps on average. For a decent-sized
population, this could be done in just one generation.
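
As a quick sanity check of that arithmetic, here is a rough sketch in
Python (my own illustration, assuming exactly the transition model you
describe - an independent attempt each step, self-loops allowed):

    import random

    # Each attempted point mutation picks one of 1,000 positions and one of
    # 10 characters at random, so it hits the single beneficial neighbor
    # (specific position AND specific character) with probability 1/10,000.
    L, S = 1000, 10
    p_hit = 1.0 / (L * S)                 # 1 in 10,000 per attempted mutation

    def steps_to_find(rng):
        steps = 0
        while True:
            steps += 1
            if rng.random() < p_hit:      # this mutation lands on the "winning" neighbor
                return steps

    rng = random.Random(0)
    walks = [steps_to_find(rng) for _ in range(2000)]
    print(sum(walks) / len(walks))        # comes out close to 10,000, as expected

So yes, the waiting time is about 10,000 mutations - *if* the winning
sequence really is sitting one step away.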

Don't you see the problem with this little scenario of yours?
Certainly this is a common mistake made by evolutionists, but it is
nonetheless a fallacy of logic. What you have done is assume that
the density of beneficial states is unimportant to the problem of
evolution since it is possible to have the beneficial states clustered
around your starting point. But such a close proximity of beneficial
states is highly unlikely. On average, the beneficial states will be
more widely distributed throughout the sequence space.

For example, say that there are 10 beneficial sequences in this
sequence space of 1e1000. Now say one of these 10 beneficial
sequences just happens to be one change away from your starting point
and so the gap is only a random walk of 10,000 steps as you calculated
above. However, on average, how long will it take to find any one of
the other 9 beneficial states? That is the real question. You rest
your faith in evolution on this inane notion that all of these states
will be clustered around your starting point. If they were, that
certainly would be a fabulous stroke of luck - like it was *designed*
that way. But, in real life, outside of intelligent design, such
strokes of luck are so remote as to be impossible for all practical
purposes. On average we would expect that the other nine sequences
would be separated from each other and our starting point by around
1e999 random walk steps/mutations (i.e., on average it is reasonable
to expect there to be around 999 differences between each of the 10
beneficial sequences). So, even if a starting sequence did happen to
be so extraordinarily lucky as to be just one positional change away from
one of the "winning" sequences, the odds are that this luck will not
hold up as well in the evolution of any of the other 9 "winning"
sequences this side of a practical eternity of time.

Real-time experiments support this position rather nicely. For
example, a recent and very interesting paper was published by Lenski
et al., entitled "The Evolutionary Origin of Complex Features", in
the May 2003 issue of Nature. In this particular experiment the
researchers studied 50 different populations of 3,600 individuals
each. Each individual began with 50 lines of code and no
ability to perform "logic operations". Those that evolved the ability
to perform logic operations were rewarded, and the rewards were larger
for operations that were "more complex". After only 15,873 generations,
23 of the populations yielded descendants capable of carrying out the
most complex logic operation: taking two inputs and determining if they
are equivalent (the "EQU" function).

In principle, 16 mutations (recombinations) coupled with the three
instructions that were present in the original digital ancestor could
have combined to produce an organism that was able to perform the
complex equivalence operation. According to the researchers themselves,
"Given the ancestral genome of length 50 and 26 possible instructions
at each site, there are ~5.6 x 10e70 genotypes [sequence space]; and
even this number underestimates the genotypic space because length
evolves."

Of course this sequence space was overcome in smaller steps. The
researchers arbitrarily defined 6 other sequences as beneficial (NAND,
AND, OR, NOR, XOR, and NOT functions). The average gap between these
pre-defined steppingstone sequences was 2.5 steps, translating into an
average search space between beneficial sequences of only 3,400 random
walk steps. Of course, with a population of 3,600 individuals, a
random walk of 3,400 steps will be covered in short order by
at least one member of that population. And, this is exactly what
happened. The average number of mutations required to cross the
16-step gap was only 103 mutations per population.
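
To see why a population of that size crosses such a gap so quickly,
consider a rough back-of-envelope sketch (my own illustration, not the
paper's model; the 1-in-3,400 per-individual, per-generation chance is
simply taken from the figure above):

    # If each of N digital organisms independently has a 1-in-3,400 chance per
    # generation of making the change that reaches the next rewarded function,
    # how quickly does at least one member of the population find it?
    N = 3600                      # population size in the experiment
    p = 1.0 / 3400                # assumed per-individual, per-generation chance
    p_gen = 1 - (1 - p) ** N      # chance that at least one individual succeeds
    print(round(p_gen, 2))        # about 0.65
    print(round(1 / p_gen, 1))    # about 1.5 generations to the first success

With stepping stones spaced that closely, at least one member of the
population stumbles onto the next rewarded function within a generation
or two.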

Now that is lightning-fast evolution. Certainly if real life
evolution were actually based on this sort of setup then evolution of
novel functions at all levels of complexity would be a piece of cake.
Of course, this is where most descriptions of this most interesting
experiment stop. But, what the researchers did next is the most
important part of this experiment.

Interestingly enough, Lenski and the other scientists went on to set
up different environments to see which environments would support the
evolution of all the potentially beneficial functions - to include the
most complex EQU function. Consider the following description about
what happened when various intermediate steps were not arbitrarily
defined by the scientists as "beneficial".

"At the other extreme, 50 populations evolved in an environment where
only EQU was rewarded, and no simpler function yielded energy. We
expected that EQU would evolve much less often because selection would
not preserve the simpler functions that provide foundations to build
more complex features. Indeed, none of these populations evolved EQU,
a highly significant difference from the fraction that did so in the
reward-all environment (P = 4.3 x 10e-9, Fisher's exact test).
However, these populations tested more genotypes, on average, than did
those in the reward-all environment (2.15 x 10e7 versus 1.22 x 10e7;
P<0.0001, Mann-Whitney test), because they tended to have smaller
genomes, faster generations, and thus turn over more quickly. However,
all populations explored only a tiny fraction of the total genotypic
space. Given the ancestral genome of length 50 and 26 possible
instructions at each site, there are ~5.6 x 10e70 genotypes; and even
this number underestimates the genotypic space because length
evolves."

Isn't that just fascinating? When the intermediate stepping stone
functions were removed, the neutral gap that was created successfully
blocked the evolution of the EQU function, which happened *not* to be
right next door to their starting point. Of course, this is only to
be expected based on statistical averages that go strongly against the
notion that very many possible starting points would just happen to be
very close to an EQU functional sequence in such a vast sequence
space.

Now, isn't this consistent with my predictions? This experiment was
successful because the intelligent designers were capable of defining
what sequences were "beneficial" for their evolving "organisms." If
enough sequences are defined as beneficial and they are placed in just
the right way, with the right number of spaces between them, then
certainly such a high ratio will result in rapid evolution - as we saw
here. However, when neutral non-defined gaps are present, they are a
real problem for evolution. In this case, a gap of just 16 neutral
mutations effectively blocked the evolution of the EQU function.

http://naturalselection.0catch.com/Files/computerevolution.html

> Thus, a random
> walk that restarts each time after the first step (or alternatively, a
> random walk performed by a large population of sequences, each
> starting at state ABC...) is expected to explore, on average, 10000
> states before finding the next beneficial sequence.

Yes, but you are failing to consider the likelihood that your "winning
sequence" will in fact be within these 10,000 steps on average.

> Now, below, we
> will apply your model to the same problem.

Oh, I can hardly wait!

> > It also depends
> > upon how fast this space is searched through. For example, if the
> > ratio of beneficial states to non-beneficial states is as high as say,
> > 1 in a 1e12, and if 1e9 states are searched each second, how long with
> > it take, on average, to find a new beneficial state?
>
> OK. Let's take my example, instead, and apply your calculations.
> There are only 2 beneficial sequences, out of the state space of
> 1e1000 sequences.

Ok, I'm glad that you at least realize the size of the state space.

> Since the ratio of beneficial sequences to
> non-beneficial ones is (2/10^1000), if your "statistics" are correct,
> then I should be exploring 10^1000/2 states, on average, before
> finding the next beneficial state. That is a huge, huge, huge number.
> So why does my very simple random walk explore only 10,000 states,
> when the ratio of beneficial sequences is so small?

Yes, that is the real question and the answer is very simple - You
either got unbelievably lucky in the positioning of your start point
or your "beneficial" sequences were clustered by intelligent design.

> The answer is simple - the ratio of beneficial states does NOT matter!

Yes it does. You are ignoring the highly unlikely nature of your
scenario. Tell me, how often do you suppose your start point would
just happen to be so close to the only other beneficial sequence in
such a huge sequence space? Hmmmm? I find it just extraordinary that
you would even suggest such a thing as "likely" with all sincerity of
belief. The ratio of beneficial to non-beneficial in your
hypothetical scenario is absolutely minuscule and yet you still have
this amazing faith that the starting point will most likely be close
to the only other "winning" sequence in an absolutely enormous
sequence space?! Your logic here is truly mysterious and your faith
is most impressive. I'm sorry, but I just can't get into that boat
with you. You are simply beyond me.

> All that matters is their distribution, and how well a particular
> random walk is suited to explore this distribution.

Again, you must consider the odds that your "distribution" will be so
fortuitous as you seem to believe it will be. In fact, it has to be
this fortuitous in order to work. It basically has to be a setup for
success. The deck must be stacked in an extraordinary way in your
favor in order for your position to be tenable. If such a stacked
deck happened at your table in Las Vegas you would be asked to leave
the casino in short order or be arrested for "cheating" by intelligent
design since such deck stacking only happens via intelligent design.
Mindless processes cannot stack the deck like this. It is
statistically impossible - for all practical purposes.

> (Again, it is a
> gross, meaningless over-simplification to model evolution as a random
> walk over a frozen N-dimensional sequence space, but my point is that
> your calculations are wrong even for that relatively simple model.)

Come now Robin - who is trying to stack the deck artificially in their
own favor here? My calculations are not based on the assumption of a
stacked deck like your calculations are, but upon a more likely
distribution of beneficial sequences in sequence space. The fact of
the matter is that sequence space does indeed contain vastly more
absolutely non-beneficial sequences than it does those that are even
remotely beneficial. In fact, there is an entire theory called the
"Neutral Theory of Evolution". Of all mutations that occur in every
generation in, say, humans (around 200 to 300 per generation), the
large majority of them are completely "neutral" and those few that are
functional are almost always detrimental. This ratio of beneficial to
non-beneficial is truly small and gets exponentially smaller with each
step up the ladder of specified functional complexity. Truly,
evolution gets into very deep weeds very quickly beyond the lowest
levels of functional/informational complexity.

> > It will take
> > just over 1,000 seconds - a bit less than 20 minutes on average. But,
> > what happens if at higher levels of functional complexity the density
> > of beneficial functions decreases exponentially with each step up the
> > ladder? The rate of search stays the same, but the junk sequences
> > increase exponentially and so the time required to find the rarer and
> > rarer beneficial states also increases exponentially.
>
> The above is only true if you use the following search algorithm:
>
> 1. Generate a completely random N-character sequence
> 2. If the sequence is beneficial, say "OK";
> Otherwise, go to step 1.

Actually the above is also true if you start with a likely starting
point. A likely starting point will be an average distance away from
the next closest beneficial sequence. A random mutation to a sequence
that does not find the new beneficial sequence will not be selectable
as advantageous and a random walk will begin.

> For an alphabet of size S, where only k characters are "beneficial"
> for each position, the above search algorithm will indeed need to explore
> exponentially many states in N (on average, (S/k)^N), before finding a
> beneficial state. But, this analysis applies only to the above search
> algorithm - an exteremely naive approach that resembles nothing that
> is going on in nature.

Oh really? How do you propose that nature gets around this problem?
How does nature stack the deck so that its starting point is so close
to all the beneficial sequences that otherwise have such a low density
in sequence space?

> The above algorithm isn't even a random walk
> per se, since random walks make local modifications to the current
> state, rather than generate entire states anew.

The random walk I am talking about does indeed make local
modifications to a current sequence. However, if you want to get from
the type of function produced by one state to a new type of function
produced by a different state/sequence, you will need to eventually
leave your first state and move onto the next across whatever neutral
gap there might be in the way. If a new function requires a sequence
that does not happen to be as fortuitously close to your starting
sequence as you like to imagine, then you might be in just a bit of a
pickle. Please though, do explain to me how it is so easy to get from
your current state, one random walk step at a time, to a new state
with a new type of function, when the density of beneficial sequences
for the new type of function is so extraordinarily small?

> A random walk
> starting at a given beneficial sequence, and allowing certain
> transitions from one sequence to another, would require a completely
> different type of analysis. In the analyses of most such search
> algorithms, the "ratio" of beneficial sequences would be irrelevant -
> it is their *distribution* that would determine how well such an
> algorithm would perform.

The most likely distribution of beneficial sequences is determined by
their density/ratio. You cannot simply assume that the deck will be
so fantastically stacked in the favor of your neat little evolutionary
scenario. I mean really, if the deck was stacked like this with lots
of beneficial sequences neatly clustered around your starting point,
evolution would happen very quickly. Of course, there have been those
who propose the "Baby Bear Hypothesis". That is, the clustering is
"just right" so that the theory of evolution works. That is the best
you can hope for. Against all odds the deck was stacked just right so
that we can still believe in evolution. Well, if this were the case
then it would still be evolution by design. Mindless processes just
can't stack the deck like you are proposing.

> My example above demonstrates a problem
> where the ratio of beneficial states is exteremely tiny, yet the
> search finds a new beneficial state relatively quickly.

Yes - because you stacked the deck in your favor via deliberate
design. You did not even try to explain the likelihood of this
scenario in real life. How do you propose that this is even a remote
reflection of what mindless processes are capable of? I'm talking
average probabilities here while you are talking about extraordinarily
unlikely scenarios that are basically impossible outside of deliberate
design.

> I could also
> very easily construct an example where the ratio is nearly one, yet a
> random walk starting at a given beneficial sequence would stall with a
> very high probability.

Oh really? You can construct a scenario where all sequences are
beneficial and yet evolution cannot evolve a new one? Come on now . .
. now you're just being silly. But I certainly would like to see you
try and set up such a scenario. I think it would be most
entertaining.

> In other words, Sean, your calculations are
> irrelevant for the kind of problem you are trying to analyze.

Only if you want to bury your head in the sand and force yourself to
believe in the fairytale scenarios that you are trying to float.

> If you
> wish to model evolution as a random walk of point mutations on a
> frozen N-dimensional sequence space, you will need to apply a totally
> different statististical analysis: one that takes into account the
> distributions of known "beneficial" sequences in sequence space. And
> then I'll tell you why that model too is so wrong as to be totally
> irrelevant.

And if you wish to model evolution as a walk between tight clusters of
beneficial sequences in an otherwise extraordinarily low density
sequence space, then I have some oceanfront property in Arizona to
sell you at a great price.

Until then, this is all I have time for today.

> Cheers,
> RobinGoodfellow.

Sean
www.naturalselection.0catch.com

"Rev Dr" Lenny Flank

Jan 4, 2004, 1:48:24 PM
Sean Pitman wrote:


>
> Until then, this is all I have time for today.


Hey doc, when will you have time to tell us what the scientific theory
of intelligent design is --- what does the designer do, specifically,
what mechanisms does it use to do it, and where can we see these mechanisms
in operation today? And what indicates there is only one designer and
not, say, ten or fifty of them all working together?

After that, can you find the time to explain to me how ID "theory" is
any less "materialist" or "naturalist" or "atheist" than is evolutionary
biology, since ID "theory" not only does NOT hypothesize the existence
of any supernatural entities or actions, but specifically states that
the "intelligent designer" might be nothing but a space alien.

And after THAT, could you find the time to tell us how you apply
anything other than "naturalism" or "materialism" to your medical
practice? What non-naturalistic cures do you recommend for your
patients, doctor?

I do understand that you won't answer, doc. That's OK. The questions
make their point -- with you or without you.

===============================================
Lenny Flank
"There are no loose threads in the web of life"

Creation "Science" Debunked:
http://www.geocities.com/lflank

DebunkCreation Email list:
http://www.groups.yahoo.com/group/DebunkCreation


RobinGoodfellow

Jan 4, 2004, 10:53:13 PM
I've already responded to this same post in a different thread. See:

http://groups.google.com/groups?dq=&hl=en&lr=&ie=UTF-8&threadm=3FF89BDA.EB18D013%40indiana.edu&prev=/groups%3Fdq%3D%26num%3D25%26hl%3Den%26lr%3D%26ie%3DUTF-8%26group%3Dtalk.origins%26start%3D50
or
http://makeashorterlink.com/?C309615F6

Incidentally, I'll be leaving for a much-needed vacation in a couple
of days, and expect that other commitments will force me to return to
lurkdom for a while afterwards. So I apologize in advance for leaving
these two threads hanging, though I look forward to reading your
replies.

Cheers,
Robin.


seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04010...@posting.google.com>...

Sean Pitman

Jan 14, 2004, 6:02:24 AM
RobinGoodfellow <lmuc...@yahoo.com> wrote in message news:<bt8i6p$r9h$1...@news01.cit.cornell.edu>...

> > Sean Pitman wrote:
> >
> > With all due respect, what is your area of professional training? I
> > mean, after reading your post I dare say that you are not only weak in
> > biology, but statistics as well. Certainly your numbers and
> > calculations are correct, but the logic behind your assumptions is
> > extraordinarily fanciful. You sure wouldn't get away with such
> > assumptions in any sort of peer reviewed medical journal or other
> > statistically based science journal - that's for sure. Of course, you
> > may have good success as a novelist . . .
>
> Tsk, tsk... I thank you for the career advice. I'll keep it in mind,
> should my current stint in computer science fall through. I wouldn't go
> so far as to say that Monte-Carlo methods are my specialty, but I will
> say that my own research and the research of half my colleagues would be
> non-existent if they worked the way you think they do.

Hmmmm, so what has your research shown? I've seen nothing from the
computer science front that shows how anything new, such as a new
software program, beyond the lowest levels of functional complexity
can be produced by computers without the input of an intelligent mind.
Your outlandish claims for the results of research done so far, such
as the Lenski experiments, are just over the top. They don't
demonstrate anything even close to what you claim they demonstrate
(See Below).

> >>I'll try to address some of the mistakes you've made below, though I
> >>doubt that I can do much to dispel your misconceptions. Much of my
> >>reply will not even concern evolution in a real sense, since I wish to
> >>highlight and address the mathematical errors that you are making.
> >
> > What you ended up doing is highlighting your misunderstanding of
> > probability as it applies to this situation as well as your amazing
> > faith in an extraordinary stacking of the deck which allows evolution
> > to work as you envision it working. Certainly, if evolution is true
> > then you must be correct in your views. However, if you are correct
> > in your views as stated then it would not be evolution via mindless
> > processes alone, but evolution via a brilliant intelligently designed
> > stacking of the deck.
>

> Exactly what views did I state, Sean? Other than that your calculations
> are, to put it plainly, irrelevant. Not even wrong - just irrelevant.
>
> Yes, the example I give below incredibly stacks the deck in my favor.
> It ought to. It is what is called a "counter-example". It falsifies
> the hypothesis that your "model" of evolution is correct. Now aren't
> you glad you proposed something falsifiable?

Come again? How does your stacking the deck via the use of
intelligent design, since there is no other logical way to stack the
deck so that your scenario will actually work, disprove my position?
My hypothesis is dependent on the far more likely scenario that the
deck is not stacked as you suggest, but is in fact much more random
than you seem to think it is. Certainly the ONLY way evolution could
work is if the deck was stacked, but then this would be easily
detected as evidence of intelligent design, not the normal
understanding of evolution as a mindless non-directed process.

> > This distribution of states has very little if anything to do with how


> > much time it takes to find one of them on average. The starting point
> > certainly is important to initial success, but it also has very little
> > if anything to do with the average time needed to find more and more
> > beneficial functions within that same level of complexity.
>

> Except in every real example of a working Monte-Carlo procedure, where
> the distribution and starting point have *everything* to do whether such
> a procedure is successful or not.

You mean that the stacking of the deck has everything to do with
whether or not an "evolutionary" scenario will succeed. Certainly
this would be true, but such a stacking of the deck has no resemblance
to reality. You must ask yourself about the likelihood that one will
find such a stacked deck in real life outside of intelligent design .
. .

> > For
> > example, if all the beneficial states were clustered together in one
> > or two areas, the average starting point, if anything, would be
> > farther way than if these states were distributed more evenly
> > throughout the sequence space. So, this leaves the only really
> > relevant factor - the types of steps and the number of steps per unit
> > of time. That is the only really important factor in searching out
> > the state space - on average.
>

> *Sigh*. The problem is that the model *you* are proposing (one I think
> is silly) is of a random on walk on a specific frozen sequence space
> with beneficial sequences as points in that space. It does not deal
> with an "average" distribution, and an "average" starting point, but
> with one very specific distribution of beneficial sequences and one very
> specific starting point.

Consider the scenario where there are 10 ice cream cones on the
continental USA. The goal is for a blind man to find as many as he
can in a million years. It seems that what you are suggesting is that
the blind man should expect that the ice cream cones will all be
clustered together and that this cluster will be within arm's reach of
where he happens to start his search. This is simply a ludicrous
notion outside of intelligent design. My hypothesis, on the other
hand, suggests that these 10 ice cream cones will have a more random
distribution with hundreds of miles separating each one, on average.
An average starting point of the blind man may, by a marvelous stroke
of luck, place him right beside one of the 10 cones. However, after
finding this first cone, how long, on average, will it take him to
find any of the other 9 cones? That is the question here. The very
low density of ice cream cones translates into a marked increase in
the average time required to find them. Now, if there were billions
upon billions of ice cream cones all stuffed into this same area, then
one could reasonably expect that they would be separated by a much
closer average distance - say just a couple of feet. With such a high
density, the average time needed for the blind man to find another ice
cream cone would be just a few seconds.
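
A toy version of this analogy (my own sketch, using an arbitrary
4,000 km x 4,000 km square "continent") shows how the average distance
to the nearest cone shrinks as the density of cones goes up:

    import math, random

    # Drop k "ice cream cones" uniformly on a square continent and measure how
    # far a random starting point is, on average, from the nearest one.
    def mean_nearest_distance(k, trials=200, side=4000.0, seed=1):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            cones = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(k)]
            x, y = rng.uniform(0, side), rng.uniform(0, side)
            total += min(math.hypot(x - cx, y - cy) for cx, cy in cones)
        return total / trials

    print(mean_nearest_distance(10))      # hundreds of kilometres, on average
    print(mean_nearest_distance(10000))   # a few tens of kilometres

With only 10 cones the blind man starts, on average, hundreds of
kilometres from the nearest one; pack the continent with cones and he
starts within easy reach. Density is exactly what sets the average
search time.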

So, whose position is more likely? Your notion that the density of
beneficial sequences in sequence space doesn't matter or my notion
that density does matter? Is your hypothetical situation where a low
density of beneficial states is clustered around a given starting
point really valid outside of intelligent design? If so, name a
non-designed situation where such an unlikely phenomenon has ever been
observed to occur . . .

> You cannot simply assume an "average"
> distribution in the absence of background information: you have to find
> out precisely the kind of distribution you are dealing with. And even
> if you do find that the distribution is "stacked", it does not imply
> that an intelligence was involved.

Oh really? You think that stacking the deck as you have done can
happen mindlessly in less than zillions of years of average time?
Come on now! What planet are you from?

> The stacking could occur due to the
> constraints imposed by the very definition of the problem: in the case
> of evolutions, by the physical constraints governing the interactions
> between the molecules involved in biological systems.

Oh, so the physical laws of atoms and molecules force them to
self-assemble into functionally complex systems? Now you are
really reaching. Tell me why the physical constraints of these
molecular machines force all beneficial possibilities to be so close
together? This is simply the most ludicrous notion that I have heard
in a very long time. You would really do well in Vegas with that one!
Try telling them, when they come to arrest you for cheating, that the
deck was stacked because of the physical constraints of the playing
cards.

> In fact, why
> would you expect that the regular and highly predictable physical laws
> governing biochemical reactions would produce a random, "average"
> distribution of "beneficial sequences"?

Because, I don't know of any requirement for them to be clustered
outside of deliberate design - do you? I can see nothing special
about the building blocks that make up living things that would cause
the potentially beneficial systems found in living things to have to
be clustered (just like there is nothing inherent in playing cards
that would cause them to stack themselves in any particular order).
However, if you know of a reason why the physical nature of the
building blocks of life would force them to cluster together despite
having a low density in sequence space, please, do share it with me.
Certainly none of your computer examples have been able to demonstrate
such a necessity. Why then would you expect such a forced clustering
in the potentially beneficial states of living things?

> >>For an extreme
> >>example, consider a space of strings consisting of length 1000, where
> >>each position can be occupied by one of 10 possible characters.
>

> Note, I wrote, "extereme example". My point was *not* invent a
> distribution which makes it likely for evolutiuon to occur (this example
> has about as much to do with evolution as ballet does with quantum
> mechanics), but to show how inadequate your methods are.

Actually, this situation has a lot to do with evolution and is the
real reason why evolution is such a ludicrous idea. What your
illustration shows is that only if the deck is stacked in a most
unlikely way will evolution have the remotest possibility of working.
That is what I am trying to show and you demonstrated this very
nicely. Unwittingly it is you who effectively show just how
inadequate evolutionary methods are at making much of anything outside
of an intelligently designed stacking of the deck.



> >>Suppose there are only two beneficial strings: ABC........, and
> >>BBC........ (where the dots correspond to the same characters). The
> >>allowed transitions between states are point mutations, that are
> >>equally probable for each position and each character from the
> >>alphabet. Suppose, furthermore, that we start at the beneficial state
> >>ABC. Then, the probability of a transition from ABC... to BBC... in a
> >>single mutation 1/(10*1000) = 1/10000 (assuming self-loops - i.e.
> >>mutations that do not alter the string, are allowed).
> >
> >
> > You are good so far. But, you must ask yourself this question: What
> > are the odds that out of a sequence space of 1e1000 the only two
> > beneficial sequences with uniquely different functions will have a gap
> > between them of only 1 in 10,000?
>

> Mind-numbingly low. 1000*.9*.1^999, to be precise. But that is not the
> point.

Actually, this is precisely the point. What you are basically saying
is that if there were only one ice cream cone in the entire universe
that it could be easily found if the starting point of the blind man's
search just so happened to be an arm's reach away from the cone. That
is what you are saying is it not?



> > Don't you see the problem with this little scenario of yours?
> > Certainly this is a common mistake made by evolutionists, but it is
> > none-the less a fallacy of logic. What you have done is assume that
> > the density of beneficial states is unimportant to the problem of
> > evolution since it is possible to have the beneficial states clustered
> > around your starting point. But such a close proximity of beneficial
> > states is highly unlikely. On average, the beneficial states will be
> > more widely distributed throughout the sequence space.
>

> On average, yes.

On average yes?! How can you say this and yet disagree with my
conclusions?

> But didn't you just say above that the distribution
> of the sequences is irrelevant? That all that matters is "ratio" of
> beneficial sequences?

It is only by determining the ratio of beneficial sequences that you
can obtain a reasonable idea about the likely distribution of these
sequences around any particular starting point. You hold to a huge
fallacy of logic that by some magical means the distribution could be
just right even though the density is truly minuscule (like
finding one atom in zillions of universes the size of ours).

> (Incidentally, "ratio" and "density" are not
> identical. The distribution I showed you has a relatively high density
> of beneficial sequences, despite a low ratio.)

You are talking local "density", which, in your scenario, also has a
locally high "ratio". I, on the other hand, was talking about the
total ratio and density of the whole potential space taken as a whole.
Really, you are very much mistaken to suggest that the ratio and
density of a state in question per the same unit of state space are
not equivalent.

> > For example, say that there are 10 beneficial sequences in this
> > sequence space of 1e1000. Now say one of these 10 beneficial
> > sequences just happens to be one change away from your starting point
> > and so the gap is only a random walk of 10,000 steps as you calculated
> > above. However, on average, how long will it take to find any one of
> > the other 9 beneficial states? That is the real question. You rest
> > your faith in evolution on this inane notion that all of these states
> > will be clustered around your starting point. If they were, that
> > certainly would be a fabulous stroke of luck - like it was *designed*
> > that way. But, in real life, outside of intelligent design, such
> > strokes of luck are so remote as to be impossible for all practical
> > purposes. On average we would expect that the other nine sequences
> > would be separated from each other and our starting point by around
> > 1e999 random walk steps/mutations (i.e., on average it is reasonable
> > to expect there to be around 999 differences between each of the 10
> > beneficial sequences). So, even if a starting sequence did happen to
> > be so extraordinarily lucky to be just one positional change away from
> > one of the "winning" sequences, the odds are that this luck will not
> > hold up as well in the evolution of any of the other 9 "winning"
> > sequences this side of a practical eternity of time.
>

> Unless, of course, it follows from the properties of the problem that
> the other 9 benefecial sequences must be close to the starting sequence.

And I am sure you have some way to explain why these 9 other
beneficial sequences would have to be close together outside of
deliberate design? What "properties" of the problem would force such
a low density of novel beneficial states to be so clustered? I see
absolutely no reason to suggest such a necessity. Certainly such a
necessity must be true if evolution is true, but if no reasonable
naturalistic explanation can be given, why should I simply assume such
a necessity? Upon what basis do you make this claim?

> > Real time experiments support this position rather nicely. For
> > example, a recent and very interesting paper was published by Lenski
> > et. al., entitled, "The Evolutionary Origin of Complex Features" in
> > the 2003 May issue of Nature. In this particular experiment the
> > researchers studied 50 different populations, or genomes, of 3,600
> > individuals. Each individual began with 50 lines of code and no
> > ability to perform "logic operations". Those that evolved the ability
> > to perform logic operations were rewarded, and the rewards were larger
> > for operations that were "more complex". After only15,873 generations,
> > 23 of the genomes yielded descendants capable of carrying out the most
> > complex logic operation: taking two inputs and determining if they are
> > equivalent (the "EQU" function).
>

> I've already covered how you've completely misinterpreted Lenski's
> research in the other post. But let's run with this for a bit:

Let's . . . Oh, and if you would give a link to where you "covered" my
"misinterpretation", that would be appreciated.

> > In principle, 16 mutations (recombinations) coupled with the three
> > instructions that were present in the original digital ancestor could
> > have combined to produce an organism that was able to perform the
> > complex equivalence operation. According to the researcher themselves,
> > "Given the ancestral genome of length 50 and 26 possible instructions
> > at each site, there are ~5.6 x 10e70 genotypes [sequence space]; and
> > even this number underestimates the genotypic space because length
> > evolves."
> >
> > Of course this sequence space was overcome in smaller steps. The
> > researchers arbitrarily defined 6 other sequences as beneficial (NAND,
> > AND, OR, NOR, XOR, and NOT functions).
>

> As a minor quibble, I believe they actually started with NAND (you need
> it for all the other functions). But I could be wrong - I've read that
> paper months ago.

You are correct. The fact is though that the NAND starting point was
defined as beneficial and it was not made up of random sequences of
computer code. It was all set up very specifically so that certain
recombinations of code (point mutations were not primarily used,
though they did happen on occasion during recombination events), would
yield certain types of other pre-determined coded functions.

> > those in the reward-all environment (2.15 x 1e7 versus 1.22 x 1e7;


> > P<0.0001, Mann-Witney test), because they tended to have smaller
> > genomes, faster generations, and thus turn over more quickly. However,
> > all populations explored only a tiny fraction of the total genotypic
> > space. Given the ancestral genome of length 50 and 26 possible

> > instructions at each site, there are ~5.6 x 1e70 genotypes; and even


> > this number underestimates the genotypic space because length
> > evolves."
>

> And after years of painstaking research, Sean finally invents the wheel.
> Yes, evolution does not pop complex systems out of thin air, but
> constructs through integration and co-optation of simpler functional
> components. Move along, folks, nothing to see here!

What this shows is that if the "simpler" components aren't defined as
"beneficial" then a system of somewhat higher complexity will not
evolve at all - period - even given zillions of years of time. Truly,
this means that there really isn't anything to see here. Nothing
evolves without the deck being stacked by intelligent design. That is
all this Lenski experiment showed.

> > Isn't that just fascinating? When the intermediate stepping stone
> > functions were removed, the neutral gap that was created successfully
> > blocked the evolution of the EQU function, which happened *not* to be
> > right next door to their starting point. Of course, this is only to
> > be expected based on statistical averages that go strongly against the
> > notion that very many possible starting points would just happen to be
> > very close to an EQU functional sequence in such a vast sequence
> > space.
>

> Here's a question for you. There were only 5 beneficial functions in
> that big old sequence space of yours.

Actually, including the starting and ending points, there were 7
defined beneficial sequences in this sequence space (NAND, AND, OR,
NOR, XOR, NOT, and EQU functions).

> They are all very standard
> Boolean functions: in no way were they specifically designed by Lenski
> et. al. to ease the way to into evolving the EQ functions.

Actually, they very much were designed by Lenski et al. to ease the
way along the path to the EQU sequence. The original code was set up
with very specific lines of code that could, when certain
recombinations occurred, give rise to each of these logic functions.
The lines of code were not random lines of code and they were not all
needed to be as they were for the original NAND function to operate.
In fact the researchers knew the approximate rate of evolution that
would be expected ahead of time based on their programming of the
coded sequences, the rate of recombination of these sequences, the
size of the sequence space and the distance between each step along
the pathway. It really was a very nice setup for success. Read the
paper again and you will see that this is true.

> How come
> they were all sufficiently close in sequence space to one another, when
> according to you such a thing is so highly improbable?

Because they were designed to be close together deliberately. The
deck was stacked on purpose. I mean really, you can't be suggesting
that these 7 beneficial states just happened to be clustered together
in a state space of 1e70 by the mindless restriction of the program, do
you? The program was set up with the restrictions stacked in a
particular way so that only these 7 states could evolve and that each
subsequent state was just a couple of steps away from the current
state. No other function was set up to evolve, so no other novel
function evolved. These lines of code did not get together and make a
calculator program or a photo-editing program, or even a simple
program to open the CD player. That should tell you something . . .
This Lenski experiment was *designed* to succeed like it did. Without
such input of intelligent deck stacking, it never would have worked
like it did.

> > Now, isn't this consistent with my predictions? This experiment was
> > successful because the intelligent designers were capable to defining
> > what sequences were "beneficial" for their evolving "organisms." If
> > enough sequences are defined as beneficial and they are placed in just
> > the right way, with the right number of spaces between them, then
> > certainly such a high ratio will result in rapid evolution - as we saw
> > here. However, when neutral non-defined gaps are present, they are a
> > real problem for evolution. In this case, a gap of just 16 neutral
> > mutations effectively blocked the evolution of the EQU function.
>

> You are not even close. Lenski et. al. didn't define which *sequences*
> were "beneficial".

Yes, they did exactly that. Read the paper again. They arbitrarily
wrote the code in a meaningful way for the starting lines as well as
arbitrarily defined which recombinations would be "beneficial". They
say it in exactly that way. They absolutely say that they defined
what was and what was not "beneficial".

> They didn't even design functions to serve
> specifically as stepping stones in the evolutionary pathways of EQ.

Yes they did in that they wrote the original code so that it would be
possible to form such pre-defined "beneficial" codes in a series of
recombinations of lines of code.

> What they have done is to name some functions of intermediate complexity
> that might be beneficial to the organism.

You obviously either haven't read the original paper or you don't
understand what it said. The researchers openly admit to arbitrarily
defining the "intermediate" states as beneficial. This fact is only
proven because they went on to remove the "beneficial" definition from
these intermediate states. Without this arbitrary assignment of
beneficial to the intermediate states, the EQU state did not evolve.
Go back and read the paper again. It was the researchers who defined
the states. The states themselves obviously didn't have inherent
benefits in the "world" that they were evolving in outside of the
researcher's definitions for them.

> They certainly did not tell
> their program how to reach these functions, or what the systems
> performing these functions might look like, but simply indicated that
> there are functions at varying levels of complexity that might be useful
> to an organism in its environment.

Wrong again. They did in fact tell their program exactly which
states, specifically, to reward and how to reward them if present.
They told the program exactly what they would look like ahead of time
so that they would be recognized and treated as beneficial when they
arrived on the scene.

You really don't seem like you have a clue how this experiment was
done. I really don't understand how you can make such statements as
this if you had actually read the paper.

> Thus, they have demonstrated exactly
> what they set out to: that in evolution, complex functional features are
> acquired through co-optation and modification of simpler ones.

They did nothing of the sort. All they did was show that stacking the
deck by intelligent design really does work. The problem is that
evolution is supposed to work to create incredible diversity and
informational complexity without any intelligent intervention having
ever been required. So, you evolutionists are back to ground zero.
There simply is no evolution, outside of intelligent design, beyond
the lowest levels of functional/informational complexity.

<snip>


> >>(Again, it is a
> >>gross, meaningless over-simplification to model evolution as a random
> >>walk over a frozen N-dimensional sequence space, but my point is that
> >>your calculations are wrong even for that relatively simple model.)
> >
> > Come now Robin - who is trying to stack the deck artificially in their
> > own favor here? My calculations are not based on the assumption of a
> > stacked deck like your calculations are, but upon a more likely
> > distribution of beneficial sequences in sequence space. The fact of
> > the matter is that sequence space does indeed contain vastly more
> > absolutely non-beneficial sequences than it does those that are even
> > remotely beneficial.
>

> Yes, but your caclulations are based on the equally unfounded assumption
> that the deck is not stacked in any way, shape, or form. (That is, if
> the sequences were really distributed evenly in your frozen sequence
> space, then your probability calculation would still be off, but not by
> too much.)

Not by too much? Hmmmmm . . . So, you are saying that if the
sequence space were set up even close to the way in which I am
suggesting, then my calculations would be pretty much correct? So,
unless the sequence space looks like you envision it looking, all nice
and neatly clustered around your pre-arranged starting point, then I
am basically right? So, either the deck is stacked pretty much like
you suggest or the deck is more randomly distributed like I suggest.
If it is stacked, then you are correct and evolution is saved. If the
deck is more randomly distributed like I suggest, then evolution is
false and should be discarded as untenable - correct?

Now where did I miss it? You said at the beginning that my
calculations were completely off base given my own position and that
you were going to correct my math. You said that I needed special
training in statistics. Now, how can my calculations be pretty much
on target given my hypothesis and yet I not know anything about
statistics?

> What makes you think that the laws of physics do not stack
> the deck sufficiently to make evolution possible?

More importantly, what makes you think that they do? I've never seen
a mindless process stack the deck like this, have you? Where are your
examples of mindless processes stacking the deck in such a way as you
suggest outside of the aid of intelligent design?

> You may feel that
> they can't: but in the meantime, you should be striving to find out what
> the actual distribution is, rather than assuming it is unstacked. (Not
> that this would make your model relevant, but it'll be a small step in
> the right direction.)

Actually, an unstacked deck would make my model very relevant indeed.
You admit as much yourself when you say that my calculations are
pretty much correct given that the hypothesis of an unstacked deck is
true. Now, the ball is in your court. It is so extremely
counterintuitive to me that the deck would be unstacked that such an
assertion demands equivalent evidence. Where do you see such deck
stacking outside of intelligent design? That is the real question
here.

> > In fact, there is an entire theory called the
> > "Neutral Theory of Evolution". Of all mutations that occur in every
> > generation in say, humans (around 200 to 300 per generation), the
> > large majority of them are completely "neutral" and those few that are
> > functional are almost always detrimental. This ratio of beneficial to
> > non-beneficial is truly small and gets exponentially smaller with each
> > step up the ladder of specified functional complexity. Truly,
> > evolution gets into very deep weeds very quickly beyond the lowest
> > levels of functional/informational complexity.
>

> The fact that the vast majority of mutations are neutral does not imply
> that there exists any point where there is no opportunity for a
> beneficial mutation. And where such an opportunity presents itself,
> evolution will eventually find it, given large enough populations and
> sufficient times.

Yes, if by "sufficient time" you mean zillions of years - even for
extremely large populations.

> >>>It will take
> >>>just over 1,000 seconds - a bit less than 20 minutes on average. But,
> >>>what happens if at higher levels of functional complexity the density
> >>>of beneficial functions decreases exponentially with each step up the
> >>>ladder? The rate of search stays the same, but the junk sequences
> >>>increase exponentially and so the time required to find the rarer and
> >>>rarer beneficial states also increases exponentially.
> >>
> >>The above is only true if you use the following search algorithm:
> >>
> >> 1. Generate a completely random N-character sequence
> >> 2. If the sequence is beneficial, say "OK";
> >> Otherwise, go to step 1.
> >
> > Actually the above is also true if you start with a likely starting
> > point. A likely starting point will be an average distance away from
> > the next closest beneficial sequence. A random mutation to a sequence
> > that does not find the new beneficial sequence will not be selectable
> > as advantageous and a random walk will begin.
>

> Actually, your last paragraph will be approximately true only if all
> your "beneficial" points are uniformly spread out through your sequence
> space.

In other words, if they aren't stacked in some extraordinarily
fortuitous fashion?

> Even then, your probability calculation will be off by some
> orders of magnitude, since you will actually need to apply combinatorial
> forumlas to compute these probabilities correctly. But, I suppose,
> it'll be close enough.

My calculations will not be off too far. And, even if they are off by
a few orders of magnitude, it doesn't matter compared to the numbers
involved. As you say, the rough estimates involved here are clearly,
"close enough" to get a very good idea of the problem. My math is not
"way off" as you originally indicated. If anything you have a
conceptual problem with my hypothesis, not my statistics/math. It
basically boils down to this: Either the deck was stacked by a
mindless or a mindful process. You have yet to provide any convincing
evidence that a mindless process can stack a deck, like it would have
to have been stacked for life forms to be as diverse and complex as they
are, outside of a lot of help from intelligent design.

<snip>


> >> I could also
> >>very easily construct an example where the ratio is nearly one, yet a
> >>random walk starting at a given beneficial sequence would stall with a
> >>very high probability.
> >
> > Oh really? You can construct a scenario where all sequences are
> > beneficial and yet evolution cannot evolve a new one? Come on now . .
> > . now you're just being silly. But I certainly would like to see you
> > try and set up such a scenario. I think it would be most
> > entertaining.
>

> I didn't say all sequences are beneficial, Sean. That *would* be silly.
> I did say that the ratio *approaches* one, but is not quite that.
> But, here you are:
>
> Same "sequence space" as before, but now a sequence is "beneficial" if
> it is AAAAAAAAAA......AAA (all A's), or it differs from AAAAA...AAA by
> at least 2 amino acids. All other sequences are *harmful* - if the
> random walk ever stumbles onto one, it will die off, and will need to
> return to its starting point. (This means there are exactly 1000*9 +
> (1000*999/2)*81 or about 4.02e6 harmful sequences, and 1e1000-4.02e6 or
> about 1e1000 beneficial sequences: that is, virtually every sequence is
> beneficial.) Again, the allowed transitions are point mutations, and
> the starting point is none other AAAAAAA...AAA. Now, will this random
> walk ever find another beneficial sequence?

Your math here seems to be just a bit off. For example, if out of
1e1000 sequences the number of beneficial ones were 1e999, the ratio of
beneficial sequences would be 1 in 10. At that ratio, the average
distance to a new beneficial function would not be "two amino acid
changes away", but less than one amino acid change away. The ratio
created by requiring "at least 2 amino acid changes" is less than 1 in
400, not less than 1 in 10 as you suggest here.
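
For what it's worth, the raw counts implied by the construction quoted
above are easy to write down. The short Python sketch below does nothing
but the counting (alphabet size and length are taken from the quoted
example); how those counts bear on the argument is a separate question:

from math import comb

ALPHABET = 10      # characters per position, as in the quoted example
LENGTH = 1000      # sequence length, as in the quoted example

# Sequences at Hamming distance exactly 1 and exactly 2 from a fixed
# reference sequence (e.g. the all-A string).
dist_1 = LENGTH * (ALPHABET - 1)
dist_2 = comb(LENGTH, 2) * (ALPHABET - 1) ** 2

print(f"exactly 1 change away:  {dist_1:,}")    # 9,000
print(f"exactly 2 changes away: {dist_2:,}")    # 40,459,500
print(f"total sequence space:   {ALPHABET}^{LENGTH}")

Both shells together are a vanishingly small slice of the 10^1000-sequence
space; the real disagreement is over what a random walk that starts on the
all-A sequence does with that geography.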

Also, even if all sequences less than 2 amino acid changes away were
detrimental (which is very unlikely), an average bacterial colony of
100 billion or so individuals would cross this 2-amino-acid gap in
short order, since a colony of this size would experience a double
mutation in a sequence of this size in several members of its
population during the course of just one generation.
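
Whether a colony of that size really sees double mutants every generation
depends on the per-site mutation rate, which neither of us has put a number
on here. Just to show how such an estimate is assembled, here is a rough
back-of-the-envelope sketch; the mutation rate and gene length are assumed
illustrative values, not figures from this thread, and it counts any two
hits in the gene rather than the two specific changes needed:

import math

# Back-of-the-envelope: expected number of cells carrying two mutations in
# one gene after a single generation.  All numbers are illustrative
# assumptions, not measured values.
mutation_rate = 1e-9     # assumed per-base, per-replication mutation rate
gene_length_bp = 3000    # assumed gene size (~1000 codons)
colony_size = 1e11       # "100 billion or so individuals"

mu_gene = mutation_rate * gene_length_bp   # expected hits per gene copy
# Poisson approximation: chance a single cell picks up 2 or more hits
p_double = 1 - math.exp(-mu_gene) * (1 + mu_gene)

print(f"expected double mutants per generation: {p_double * colony_size:.3g}")

With these assumed numbers the expectation comes out on the order of one
double mutant per generation, but shift the assumed rate by a factor of ten
and the answer shifts by a factor of a hundred, so the arithmetic by itself
settles nothing.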



> > And if you wish to model evolution as a walk between tight clusters of
> > beneficial sequences in an otherwise extraordinarily low density
> > sequence space, then I have some oceanfront property in Arizona to
> > sell you at a great price.
>

> If I did wish to model evolution this way, then I would gladly buy this
> property off your hands. And then sell it back to you at twice the
> price, because it would still be better than the model you propose.

LOL - Ok, you just keep thinking that way. But, until you have some
evidence to support your wishful-thinking hypothesis of a mindless
stacking of the deck, what is there to make your position attractive
or even remotely logical?

> Cheers,
> RobinGoodfellow.

Sean
www.naturalselection.0catch.com

Chris Merli

unread,
Jan 14, 2004, 10:00:08 AM1/14/04
to

"Sean Pitman" <seanpi...@naturalselection.0catch.com> wrote in message
news:80d0c26f.04011...@posting.google.com...

But there is not one blind man looking; there are many, and only those
close enough to the cluster of cones in the first place are likely to
succeed.

>
> So, whose position is more likely? Your notion that the density of
> beneficial sequences in sequence space doesn't matter or my notion
> that density does matter? Is your hypothetical situation where a low
> density of beneficial states is clustered around a given starting
> point really valid outside of intelligent design? If so, name a
> non-designed situation where such an unlikely phenomenon has ever been
> observed to occur . . .
>
> > You cannot simply assume an "average"
> > distribution in the absence of background information: you have to find
> > out precisely the kind of distribution you are dealing with. And even
> > if you do find that the distribution is "stacked", it does not imply
> > that an intelligence was involved.
>
> Oh really? You think that stacking the deck as you have done can
> happen mindlessly in less than zillions of years of average time?
> Come on now! What planet are you from?

Let's talk clusters. How many point mutations of a protein are in fact
still functional? This tends to create a cluster all of its own. Given
this fact, the idea that they are spread evenly across the landscape is
just not true.

howard hershey

unread,
Jan 14, 2004, 2:52:27 PM1/14/04
to

Sean Pitman wrote:

Except that is NOT what evolution does. Evolution starts with an
organism with pre-existing sequences that produce products and interact
with environmental chemicals in ways that are useful to the organism's
reproduction. The situation is more like 10,000 blind men in a varying
topography who blindly follow simple and dumb rules of the game to find
useful things (ice cream at the tops of fitness peaks): Up is good. Down
is bad. Flat is neither good nor bad. Keep walking in all cases. It
would not take too long for these 10,000 blind men to be found in
decidedly non-random places (the high mesas of functional utility where
they are wandering around the flat tops if you haven't guessed). And
the ice cream cones (the useful functions), remember, are not randomly
distributed either. They are specifically at the tops of these mesas as
well. That is what a fitness landscape looks like.

If this topography of utility only changed slowly, at any given time it
would appear utterly amazing to Sean that the blind men will all be
found at these local high points or optimal states (the mesas licking
the ice cream cones on them) rather than being randomly scattered around
the entire surface. They reached these high points (with the ice cream)
by following a simple dumb algorithm.

But you were wondering how something new could arise *after* the blind
men are already wandering around the mesas? The answer is that it
depends. They can't always do so. But remember that these pre-existing
mesas are not random places. They do something specific with local
utility. Let's say that each mesa top has a different basic *flavor* of
ice cream. Say that chocolate is a glycoside hydrolase that binds a
glucose-based glycoside. Now let's say that the environment changes so
that one no longer needs this glucose-based glycoside (the mesa sinks
down to the mean level) but now one needs a galactose-based glycoside
hydrolase. Notice that the difference in need here is something more
like wanting chocolate with almonds than wanting even strawberry, much
less jalapeno or anchovy-flavored ice cream. The blind man on the newly
sunk mesa must keep walking, of course, but he is not thousands of miles
away from the newly risen mesa with chocolate with almonds ice cream on
top. Changing from one glucose-based glycoside hydrolase to one with a
slightly different structure is not the same as going from chocolate to
jalapeno or fish-flavored ice cream. Not even the same as going from
chocolate to coffee. The "island" of chocolate with almonds is *not*
going to be way across the ocean from the "island" of chocolate. It will
be nearby where the blind man is. *And* because chocolate with almonds
is now the need, it will also be on the new local high mesa (relative to
the position of the blind man on the chocolate mesa). The blind man
need only follow the simple rules (Up good. Down bad. Neutral neutral.
Keep walking.) and he has a good chance of reaching the 'new' local mesa
top quite often.

And remember that there is not just one blind man on one mesa in this
ocean of possible sequences. There are 10,000 already present on 10,000
different local mesas with even more flavors than the 31 that most ice
cream stores offer. Your math always presupposes that whenever you need
to find, say, vanilla with cherry the one blind man starts in some
random site and walks in a completely random fashion (rather than by the
rules I pointed out) across half the universe of sequence space to reach
your pre-determined goal by pure dumb luck to find the perfect lick. My
presumption is that the successful search is almost always going to
start from the pre-existing mesa with the closest flavor to the new need
(or from a duplicate, which, as a duplicate, is often superfluous and
quickly erodes to ground level in terms of its utility). As mentioned,
these pre-existing mesas are not random pop-ups. They are at the most
useful places in sequence space from which to try to find near-by mesas
with closely-related biologically useful properties because they already
have biologically useful properties.

> It seems that what you are suggesting is that
> the blind man should expect that the ice cream cones will all be
> clustered together and that this cluster will be with arms reach of
> where he happens to start his search. This is simply a ludicrous
> notion outside of intelligent design. My hypothesis, on the other
> hand, suggests that these 10 ice cream cones will have a more random
> distribution with hundreds of miles separating each one, on average.
> An average starting point of the blind man may, by a marvelous stroke
> of luck, place him right beside one of the 10 cones. However, after
> finding this first cone, how long, on average, will it take him to
> find any of the other 9 cones? That is the question here. The very
> low density of ice cream cones translates into a marked increase in
> the average time required to find them. Now, if there were billions
> upon billions of ice cream cones all stuffed into this same area, then
> one could reasonably expect that they would be separated by a much
> closer average distance - say just a couple of feet. With such a high
> density, the average time needed for the blind man to find another ice
> cream cone would be just a few seconds.
>
> So, whose position is more likely?

Your position is not wrong. It is simply irrelevant and unrelated to
reality.

> Your notion that the density of
> beneficial sequences in sequence space doesn't matter or my notion
> that density does matter?

All that matters is whether there is a pre-existing sequence close
enough to one that meets your requirement for being beneficial. And
pre-existing sequences in biological organisms are not random. And
there are more than one such sequence. The only one that matters is the
closest one.

> Is your hypothetical situation where a low
> density of beneficial states is clustered around a given starting
> point really valid outside of intelligent design? If so, name a
> non-designed situation where such an unlikely phenomenon has ever been
> observed to occur . . .

It seems to me that mountains often are found in clusters. That islands
are often found in clusters. And those are the metaphors we are using
for beneficial states. They (mountains, islands, and biologically
useful activities) occur in clusters because of causal reasons, not
random ones.

>>You cannot simply assume an "average"
>>distribution in the absence of background information: you have to find
>>out precisely the kind of distribution you are dealing with. And even
>>if you do find that the distribution is "stacked", it does not imply
>>that an intelligence was involved.
>
>
> Oh really? You think that stacking the deck as you have done can
> happen mindlessly in less than zillions of years of average time?
> Come on now! What planet are you from?

When you start with useful rather than random sequences in a
pre-existing organism, you are necessarily stacking the deck in a search
for other related useful sequences. Especially if the search were not
random (but followed the simple rules I gave to my blind man), did not
occur on a perfectly flat plane, and did not start with a search from
one random site but from many non-random partially useful sites. Only
the ones that *start* off close to the desired island/mountain have a
good chance of reaching a useful end point, but that is merely probability.

>>The stacking could occur due to the
>>constraints imposed by the very definition of the problem: in the case
>>of evolutions, by the physical constraints governing the interactions
>>between the molecules involved in biological systems.
>
>
> Oh, so the physical laws of atoms and molecules force them to
> self-assemble themselves in functionally complex systems?

As a matter of fact, it is indeed the physical laws of atoms and
molecules that cause the self-assembly of structures like flagella from
their component parts. There is no intelligent assembler of flagella in
bacteria. You keep confusing and confounding the self-assembly of
flagella (or ribosomes, or cilia, or mitochondrial spindles) in cells
with their evolutionary points of origin. Please use these terms correctly.

Just so you know, I suspect he was talking about the constraints
involved in the evolution, say, of a glycoside hydrolase. One of these
constraints is the ability to bind a specific glycoside. This
probably requires the presence of a binding cleft in the protein, thus
limiting the evolution of beta-galactosidases to modifications of
molecules that have a cleft capable of binding the sugar galactose
linked through a beta-galactoside linkage to another molecule. For
example, ebg or immunoglobulins (yep, that cleft can be modified to make
an immunoglobulin an effective lactase). The hard part in evolving a
lactase from an immunoglobulin is in having the right few amino acids
needed to weaken the bond to be hydrolyzed and in not having the binding
be so tight that the products are not released.

> Now you are
> really reaching. Tell me why the physical constraints of these
> molecular machines force all beneficial possibilities to be so close
> together? This is simply the most ludicrous notion that I have heard
> in a very long time. You would really do well in Vegas with that one!
> Try telling them, when they come to arrest you for cheating, that the
> deck was stacked because of the physical constraints of the playing
> cards.

The above makes no sense at all as written when compared to reality. I
suspect that Sean misunderstood what Robin meant. Surely Sean must
realize that all the complex structures in cells self-assemble in these
cells because of simple chemical and physical affinities. There are no
little homunculi working on assembly lines in cells, willing to go on
strike for higher wages (MORE ATP!), etc. That would be carrying the
idea of intelligence involved in these processes a step too far.

>>In fact, why
>>would you expect that the regular and highly predictable physical laws
>>governing biochemical reactions would produce a random, "average"
>>distribution of "beneficial sequences"?
>

I wouldn't expect new beneficial sequences to be random. I would expect
new "beneficial sequences" to be close to one or more of the
pre-existing "beneficial sequences" in a cell. That is because the
'new' needs of a cell are most often going to involve molecules with
similarity to molecules that are already biologically relevant. That
is, I suspect that there will be clusters of 'beneficial' sequences.
Why do you think 'new' beneficial sequences are evenly spaced throughout
sequence space, but always very, very far away from any current sequence?

>
> Because, I don't know of any requirement for them to be clustered
> outside of deliberate design - do you? I can see nothing special
> about the building blocks that make up living things that would cause
> the potentially beneficial systems found in living things to have to
> be clustered (just like there is nothing inherent in playing cards
> that would cause them to stack themselves in any particular order).

I *do* expect to see clustering in useful sequences. And I *do* see it.
One regularly sees families of genes rather than genes with no
sequence similarity. For example, a big chunk of genes are very similar
as membrane-spanning proteins, but differ in the allosteric effector
that transduces an effect across the membrane in eucaryotes. I expect
to see things like the similarity in the TTSS proteins and flagellar
proteins rather than seeing completely different proteins. The reason I
*do* expect to see such clustering is because I think these features
arose by descent with modification rather than by a random walk from a
random starting point to an end that is unrelated to the starting point.
The reason I *do* see such clustering is because descent with
modification is how nature works to produce new proteins. The reason I
don't see complete randomness in new sequence is because your model of
evolution is a bogus strawman.

> However, if you know of a reason why the physical nature of the
> building blocks of life would force them to cluster together despite
> having a low density in sequence space, please, do share it with me.

Sequences of utility cluster together because they arose by common
descent and descent with modification rather than by random walks
through random sequence space from a random starting point.

> Certainly none of your computer examples have been able to demonstrate
> such a necessity. Why then would you expect such a forced clustering
> in the potentially beneficial states of living things?

Look at an evolutionary branching tree. You will see clustering of
exactly the type one sees in sequences. Not *just* similar. Exactly.

>>>>For an extreme
>>>>example, consider a space of strings consisting of length 1000, where
>>>>each position can be occupied by one of 10 possible characters.
>>
>>Note, I wrote, "extreme example". My point was *not* to invent a
>>distribution which makes it likely for evolution to occur (this example
>>has about as much to do with evolution as ballet does with quantum
>>mechanics), but to show how inadequate your methods are.
>
>
> Actually, this situation has a lot to do with evolution and is the
> real reason why evolution is such a ludicrous idea.

No, Sean. It has a lot to do with your bogus straw man of evolution.
It has nothing to do with reality.

> What your
> illustration shows is that only if the deck is stacked in a most
> unlikely way will evolution have the remotest possibility of working.
> That is what I am trying to show and you demonstrated this very
> nicely. Unwittingly it is you who effectively show just how
> inadequate evolutionary methods are at making much of anything outside
> of an intelligently designed stacking of the deck.

[Snip much more of little interest, since GIGO is GIGO whether it is
done in one paragraph or twenty]

Sean Pitman

unread,
Jan 14, 2004, 3:52:22 PM1/14/04
to
"Chris Merli" <clm...@insightbb.com> wrote in message news:<GTcNb.65427$xy6.124383@attbi_s02>...

> >
> > Consider the scenario where there are 10 ice cream cones on the
> > continental USA. The goal is for a blind man to find as many as he
> > can in a million years. It seems that what you are suggesting is that
> > the blind man should expect that the ice cream cones will all be
> > clustered together and that this cluster will be within arm's reach of
> > where he happens to start his search. This is simply a ludicrous
> > notion outside of intelligent design. My hypothesis, on the other
> > hand, suggests that these 10 ice cream cones will have a more random
> > distribution with hundreds of miles separating each one, on average.
> > An average starting point of the blind man may, by a marvelous stroke
> > of luck, place him right beside one of the 10 cones. However, after
> > finding this first cone, how long, on average, will it take him to
> > find any of the other 9 cones? That is the question here. The very
> > low density of ice cream cones translates into a marked increase in
> > the average time required to find them. Now, if there were billions
> > upon billions of ice cream cones all stuffed into this same area, then
> > one could reasonably expect that they would be separated by a much
> > closer average distance - say just a couple of feet. With such a high
> > density, the average time needed for the blind man to find another ice
> > cream cone would be just a few seconds.
>
> But there is not one blind man looking there are many and only those close
> enough to the cluster of cones in the first place are likely to succeed.

Exactly right. The problem is that increasing the number of blind men
searching only helps for a while, at the lowest levels of functional
complexity where the density of ice cream cones is the greatest.
However, with each step up the ladder of functional complexity, the
density of ice cream cones decreases in an exponential manner. In
order to keep up with this exponential decrease in average cone
density, the number of blind men has to increase exponentially in
order to find the rarer cones at the same rate. Very soon the
environment cannot support any more blind men and so they must
individually search out exponentially more and more sequence space, on
average, before success can be realized (i.e., a cone or cluster of
cones is found). For example, it can be visualized as stacked levels
of rooms. Each room has its own average density of ice cream cones.
The rooms on the lowest level have the highest density of ice cream
cones - say one cone every meter or so, on average. Moving up to the
next higher room the density decreases so that there is a cone every 2
meters or so. Then, in the next higher room, the density decreases to
a cone every 4 meters or so, on average. And, it goes from there.
After 30 or so steps up to higher levels, the cone density is 1 every
billion meters or so, on average.
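
Taking the stacked-rooms picture literally - spacing between cones doubling
with each level - the growth looks like this. A trivial Python sketch; the
doubling rule is the analogy's own assumption, not a measured quantity:

# Cone spacing in the "stacked rooms" analogy: spacing doubles at each level.
spacing_m = 1.0                    # level 0: one cone every metre
for level in range(31):
    if level in (0, 1, 2, 10, 20, 30):
        print(f"level {level:2d}: one cone every {spacing_m:,.0f} m on average")
    spacing_m *= 2

After thirty doublings the spacing is past a billion metres, which is all
the exponential decrease in density amounts to in this picture.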

Are you starting to see the problem? What one blind man could find in
just a few seconds at the lowest levels, thousands of blind men cannot
find in thousands of years after just a few steps up into the higher
levels. Clustering doesn't help them out here. Because, on average,
the blind men just will not happen to start out close to a cluster of
cones. And, if they do happen to get so fortunate as to end up close
to a rare cluster, what are the odds that they will find another
cluster of cones within that same level? You must think about the
*average* time involved, not the unlikely scenario that finding one
cluster solves all problems. Clustering, contrary to what many have
suggested, does not increase the average density of beneficial states
at a particular level of sequence space. This means that clustering
does not decrease the average time required to find a new ice cream
cone. In fact, if anything, clustering would increase the average
time required to find a new ice cream cone.

> > > You cannot simply assume an "average"
> > > distribution in the absence of background information: you have to find
> > > out precisely the kind of distribution you are dealing with. And even
> > > if you do find that the distribution is "stacked", it does not imply
> > > that an intelligence was involved.
> >
> > Oh really? You think that stacking the deck as you have done can
> > happen mindlessly in less than zillions of years of average time?
> > Come on now! What planet are you from?
>
> Lets talk clusters. How many point mutations of a protein are in fact still
> functional. This tends to create a cluster all of its own. Given this fact

> the idea that they are spread evenly across the landscape is just not true.

Certainly the various beneficial functions are indeed clustered. But
you must realize that clustering doesn't help you find a new cluster
with a new type of function any faster. Say that you start on a
particular clustered island of function. You can move around this
island pretty easily. But, the entire island pretty much does the
same type of function. The question is, how long will it take, on
average, to find a new island of states/sequences with a new type of
function? In order to solve this problem you must have some idea
about the *average* density of all beneficial states in sequence space
as they compare to the non-beneficial sequences that also exist in
sequence space. This average density will tell you, clustered or not,
how long it will take to find a new sequence with a new type of
function via random walk across the non-beneficial sequences. In
fact, the more clustered the sequences are, the longer it will take,
on average, to find a new cluster.
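
Whether clustering at a fixed overall density really lengthens the average
search can at least be probed with a toy model. The sketch below is only a
one-dimensional caricature - a plus/minus-one random walk on a ring, with
the same number of target sites either spread evenly or packed into one
block - and it starts the walker at a uniformly random non-target site,
which is exactly the assumption the other side disputes. The sizes are
picked just so it runs quickly; it is not a model of protein sequence space:

import random

def mean_hitting_time(targets, ring_size, trials=300, max_steps=500_000):
    """Average steps for a +/-1 random walk on a ring to reach any target,
    starting from a uniformly random non-target site (capped at max_steps)."""
    target_set = set(targets)
    total = 0
    for _ in range(trials):
        pos = random.randrange(ring_size)
        while pos in target_set:               # pick a non-target start
            pos = random.randrange(ring_size)
        steps = 0
        while pos not in target_set and steps < max_steps:
            pos = (pos + random.choice((-1, 1))) % ring_size
            steps += 1
        total += steps
    return total / trials

ring_size, n_targets = 500, 10
even = [i * (ring_size // n_targets) for i in range(n_targets)]  # spread out
clustered = list(range(n_targets))                               # one block

random.seed(0)
print("targets spread evenly :", mean_hitting_time(even, ring_size))
print("targets in one cluster:", mean_hitting_time(clustered, ring_size))

With a uniformly random start the clustered arrangement does take far
longer on average, simply because most starting points are nowhere near the
lone block; start the walker next to an existing target instead and the
comparison changes completely, which is where the real disagreement lies.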

Of course Robin, Howard, and many others in this forum have tried to
float the idea that these islands will all happen to be clustered
neatly around the starting point by some unknown but necessary force
of nature despite incredibly low average densities given the overall
volume of sequence space at that level of complexity. They are
basically suggesting that evolution works because the deck is stacked
neatly in favor of evolutionary processes. Of course, for evolution
to really work such deck stacking would not only be helpful, but
vital. Evolution simply cannot work unless the deck is marvelously
stacked in its favor like this. But, what are the odds that the deck
would be so neatly stacked like this outside of intelligent design?
That is the real question here. And so far, no evolutionist that I
have yet encountered seems to be able to answer this question in a way
that makes any sort of rational sense to me. Perhaps you are better
able to understand the solution to this problem than I am?

Sean
www.naturalselection.0catch.com

Frank J

unread,
Jan 14, 2004, 7:13:27 PM1/14/04
to
"\"Rev Dr\" Lenny Flank" <lflank...@ij.net> wrote in message news:<3ff86071$1...@corp.newsgroups.com>...

> Sean Pitman wrote:
>
>
> >
> > Until then, this is all I have time for today.
>
>
> Hey doc, when will you have time to tell us what the scientific theory
> of intelligent design is --- what does the designer do, specifically,
> what mechanisms does it use to do it, where can we see these mechanisms
> in operation today. And what indicates there is only one designer and
> not, say, ten or fifty of them all working together.


C'mon, one question at a time. And good luck getting any answer since
I am still waiting for him and several others to answer my simple
question to define "common design."

>
> After that, can you find the time to explain to me how ID "theory" is
> any less "materialist" or "naturalist" or "atheist" than is evolutionary
> biology, since ID "theory" not only does NOT hypothesize the existence
> of any supernatural entities or actions, but specifically states that
> the "intelligent designer" might be nothing but a space alien.


ID may be less "naturalistic," but only because it rarely makes
testable claims to support its own model. But when it does, it is
every bit as "naturalistic" as evolution and the
mutually-contradictory creationisms. Too bad those claims fail every
time.

And ID and creationism are no less "atheistic" than evolution,
because, as you know, and as anti-evolutionists don't want anyone to
know, evolution never specifically rules out an "intelligent
designer." Ironically it is the anti-evolutionists who constantly
promote "atheistic science" by their false dichotomy.


>
> And after THAT, could you find the time to tell us how you apply
> anything other than "naturalism" or "materialism" to your medical
> practice? What non-naturalistic cures do you recommend for your
> patients, doctor.

My guess is that he says: "Oo ee oo ah ah ting tang walla walla bing
bang."

Chris Merli

unread,
Jan 14, 2004, 9:13:16 PM1/14/04
to

"Sean Pitman" <seanpi...@naturalselection.0catch.com> wrote in message
news:80d0c26f.0401...@posting.google.com...

This is based on the false assumption that increasing complexity must entail
de novo development of the more complex systems. It is painfully clear from
an examination of most proteins that even within a single polypeptide there
are portions that are recruited from other coding sequences. Thus the basic
units that even you have realized can evolve are easily shuffled, copied,
and adapted. I would contend, in fact, that the hardest part of evolution is
not the complex systems you have argued about but the very simple functions.

In
> order to keep up with this exponential decrease in average cone
> density, the number of blind men has to increase exponentially in
> order to find the rarer cones at the same rate. Very soon the
> environment cannot support any more blind men and so they must
> individually search out exponentially more and more sequence space, on
> average, before success can be realized (i.e., a cone or cluster of
> cones is found). For example, it can be visualized as stacked levels
> of rooms. Each room has its own average density of ice cream cones.
> The rooms on the lowest level have the highest density of ice cream
> cones - say one cone every meter or so, on average. Moving up to the
> next higher room the density decreases so that there is a cone every 2
> meters or so. Then, in the next higher room, the density decreases to
> a cone every 4 meters or so, on average. And, it goes from there.
> After 30 or so steps up to higher levels, the cone density is 1 every
> billion meters or so, on average.

If the development of each protein started from scratch you might have an
excellent argument, but nearly all proteins derive from other proteins, so
you are starting from a point that is known to be functional.

Have you ever really considered how many functions proteins provide? At the
very basic level there are very few. All those complex functions are based
on only a few very simple things that can occur at a link between two amino
acids, plus some chemical and electrical forces. Look at the active sites of
most enzymes and you will find them remarkably simple.

I am afraid I will simply have to wait for evidence to elucidate the reason
for this. I asked you before what evidence you had that these clusters do
not exist and based on your reply here it is safe to assume the answer is
none. Not only do you not know if there is clustering but you are not even
certain what percentage of the protein sequences are functional in any way.
Based on this it is very hard to lend any weight to your speculations.
Could you present an experiment that would support any of your assumptions?
Please do not present experiments that would require negative results as
those are not scientific.

>
> Sean
> www.naturalselection.0catch.com
>

Sean Pitman

unread,
Jan 14, 2004, 9:21:03 PM1/14/04
to
howard hershey <hers...@indiana.edu> wrote in message news:<bu46sv$srt$1...@hood.uits.indiana.edu>...


> > Consider the scenario where there are 10 ice cream cones on the
> > continental USA. The goal is for a blind man to find as many as he
> > can in a million years.
>
> Except that is NOT what evolution does. Evolution starts with an
> organism with pre-existing sequences that produce products and interact
> with environmental chemicals in ways that are useful to the organism's
> reproduction.

Yes . . . so start the blind man off with an ice-cream cone to begin
with and then have him find another one.

> The situation is more like 10,000 blind men in a varying
> topography who blindly follow simple and dumb rules of the game to find
> useful things (ice cream at the tops of fitness peaks):

You don't understand. In this scenario, the positively selectable
topography is the ice-cream cone. There are no other selectable
fitness peaks here. The rest of the landscape is neutral. Some of
the ice-cream cones may be more positively selectable than others
(i.e., perhaps the man likes vanilla more than chocolate). However,
all positive peaks are represented in this case by an ice-cream cone.

> Up is good. Down
> is bad.

Ice-cream cone = Good or "Up" (to one degree or another) or even
neutral depending upon one's current position as it compares to one's
previous position. For example, once you have an ice cream, that is
good. But, all changes that maintain that ice cream but do not gain
another ice cream are neutral.

No ice-cream cone = "Bad", "Down", or even "neutral" depending upon
one's current position as it compares to one's previous position.

> Flat is neither good nor bad.

Exactly. Flat is neutral. The more neutral space between each "good"
upslope/ice-cream cone, the longer the random walk. The average
distance between each selectable "good" state translates into the
average time required to find such a selectable state/ice-cream cone.
More blind men searching, like 10,000 of them, would cover the area
almost 10,000 times faster than just one blind man searching alone.
However, at increasing levels of complexity the flat area expands at
an exponential rate. In order to keep up and find new functions at
these higher levels of functional complexity, the population of blind
men will have to increase at an equivalent rate. The only problem
with increasing the population is that very soon the local environment
will not be able to support any larger of a population. So, if the
environment limits the number of blind men possible to 10,000 - that's
great if the average neutral distance between ice-cream cones in a few
miles or so, but what happens when, with a few steps up the ladder of
functional complexity, the neutral distance expands to a few trillion
miles between each cone, on average? Now each one of your 10,000
blind men have to search around 50 million sq. miles, on average,
before the next ice-cream cone or a new cluster of ice cream cones
will be found by even one blind man in this population.

> Keep walking in all cases.

They keep walking alright - a very long ways indeed before they reach
anything beneficially selectable at anything very far beyond the
lowest levels of functional complexity.

> It
> would not take too long for these 10,000 blind men to be found in
> decidedly non-random places (the high mesas of functional utility where
> they are wandering around the flat tops if you haven't guessed).

There is a funny thing about these mesas. At low levels of
complexity, these mesas are not very large. In fact, many of them are
downright tiny - just one or two steps wide in any direction and a
new, higher mesa can be reached. However, once a blind man finds this
new mesa new higher mesa (representing a different type of function at
higher level of specified complexity) and climbs up onto its higher
surface, the distance to a new mesa at the same height or taller is
exponentially greater than it was at the lower levels of mesas.


> And
> the ice cream cones (the useful functions), remember, are not randomly
> distributed either. They are specifically at the tops of these mesas as
> well. That is what a fitness landscape looks like.

Actually, the mesa itself, every part of its surface, represents an
ice cream cone. There is no gradual increase here. Either you have
the ice-cream cone or you don't. If you don't have one that is even
slightly "good"/beneficial, then you are not higher than you were to
begin with and you must continue your random walk on top of the flat
mesa that you first started on (i.e., your initial beneficial
function(s)).

> If this topography of utility only changed slowly, at any given time it
> would appear utterly amazing to Sean that the blind men will all be
> found at these local high points or optimal states (the mesas licking
> the ice cream cones on them) rather than being randomly scattered around
> the entire surface.


If all the 10,000 blind men started at the same place, on the same
point of the same mesa, and then went out blindly trying to find a
higher mesa than the one they started on, the number that they found
would be directly proportional to the average distance between these
taller mesas. If the density of taller mesas, as compared to the one
they are now on, happens to be say, one every 100 meters, then they
will indeed find a great many of these in short order. However, if
the average density of taller mesas, happens to be one every 10,000
kilometers, then it would take a lot longer time to find the same
number of different mesas as compared to the number the blind men
found the first time when the mesas were just 100 meters apart.

> They reached these high points (with the ice cream)
> by following a simple dumb algorithm.

Yes - and this mindless "dumb" algorithm works just fine to find new
and higher mesas if and only if there is a large average density of mesas
per given unit of area (i.e., sequence space). That is why it is easy
to evolve between 3-letter sequences. The ratio/density of such
sequences is as high as 1 in 15. Any one mutating sequence will find
a new 3-letter sequence within 15 random walk steps on average. A
population of 10,000 such sequences (blind men) would find most if not
all the beneficial 3-letter words (ice-cream cones) in 3-letter
sequence space in less than 30 generations (given that there was one
step each, on average, per generation).

This looks good so far now doesn't it? However, the problems come as
you move up the ladder of specified complexity. Using language as an
illustration again, it is not so easy to evolve new beneficial
sequences that require say, 20 fairly specified letters, to transmit
an idea/function. Now, each member of our 10,000 blind men is going
to have to take over a trillion steps before success (the finding of a
new type of beneficial state/ice cream cone) is realized for just one
of them at this level of complexity.
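
The arithmetic being leaned on here is a waiting-time estimate: treat each
random-walk step as an independent draw that hits something beneficial with
probability p, let N lineages search in parallel, and the expected wait
before any of them succeeds is roughly 1/(N x p). Consecutive steps of a
real walk are not independent draws, so this is only the idealized version
of the argument, but it shows the numbers being compared:

# Idealized waiting-time arithmetic: every step is an independent Bernoulli
# trial with success probability p, and n_searchers walkers try in parallel.
# Real random-walk steps are not independent draws; this is only a sketch.

def expected_steps(p_beneficial, n_searchers):
    return 1.0 / (p_beneficial * n_searchers)

for p in (1 / 15, 1e-6, 1e-12, 1e-24):
    steps = expected_steps(p, n_searchers=10_000)
    print(f"p = {p:.0e}: roughly {steps:.2e} steps for 10,000 parallel searchers")

On this idealization the 3-letter case (p near 1/15) is found essentially at
once, while p around 1e-12 needs on the order of 1e8 steps from the same
population. Whether the density really falls off that steeply with each
level of complexity is, of course, the claim actually in dispute.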

Are we starting to see the problem here? Of course, you say that
knowledge about the average density of beneficial sequences is
irrelevant to the problem, but it is not irrelevant unless you, like
Robin, want to believe that all the various ice-cream cones
spontaneously cluster themselves into one tiny corner of the potential
sequence space AND that this corner of sequence space just so happens
to be the same corner that your blind men just happen to be standing
in when they start their search. What an amazing stroke of luck that
would be now wouldn't it?

> But you were wondering how something new could arise *after* the blind
> men are already wandering around the mesas? The answer is that it
> depends. They can't always do so.

And why not Howard? Why can't they always do so? What would limit
the blind men from finding new mesas? I mean really, each blind man
will self-replicate (hermaphrodite blind men) and make 10,000 new
blind men on the mesa that he/she/it now finds himself on. This new
population would surely be able to find new mesas in short order if
things worked as you suggest. But the problem is that if the mesas
are not as close together, on average, as they were at the lower level
where the blind men first started their search, it is going to take
a longer time to find new mesas at the same level or higher. That is
the only reason why these blind men "can't always" find "something
new". It has to do with the average density of mesas at that level.

> But remember that these pre-existing
> mesas are not random places. They do something specific with local
> utility.

The mesas represent sequences with specific utilities. These
sequences may in fact be widely separated mesas even if they happen to
do something very similar. Really, there is no reason for the
mesas to be clustered in one corner of sequence space. A much more
likely scenario is for them to be more evenly distributed throughout
the potential sequence space. Certainly there may be clusters of
mesas here and there, but on average, there will still be a wide
distribution of mesas and clusters of mesas throughout sequence space
at any given level. And, regardless of whether the mesas are more
clustered or less clustered, the *average* distance between what is
currently available and the next higher mesa will not be significantly
affected.

> Let's say that each mesa top has a different basic *flavor* of
> ice cream. Say that chocolate is a glycoside hydrolase that binds a
> glucose-based glycoside. Now let's say that the environment changes so
> that one no longer needs this glucose-based glycoside (the mesa sinks
> down to the mean level) but now one needs a galactose-based glycoside
> hydrolase.

You have several problems here with your illustration. First off,
both of these functions are very similar in type and use very similar
sequences. Also, their level of functional complexity is relatively
low (like the 4 or 5 letter word level). Also, you must consider the
likelihood that the environment would change so neatly that galactose
would come just when glucose is leaving. Certainly if you could
program the environment just right, in perfect sequence, evolution
would be no problem. But you must consider the likelihood that the
environment will change in just the right way to make the next step in
an evolutionary sequence beneficial when it wasn't before. The odds
that such changes will happen in just the right way on both the
molecular level and environmental level get exponentially lower and
lower with each step up the ladder of functional complexity. What was
so easy to evolve with functions requiring no more than a few hundred
fairly specified amino acids at minimum is much, much more difficult
to do when the level of specified complexity requires just a few
thousand amino acids at minimum. It's the difference between evolving
between 3-letter words and evolving between 20-letter phrases. What
are the odds that one 20-letter phrase/mesa that worked well in one
situation will sink down with a change in situations to be replaced by
a new phrase of equal complexity that is actually beneficial? -
Outside of intelligent design? That is the real question here.

> Notice that the difference in need here is something more
> like wanting chocolate with almonds than wanting even strawberry, much
> less jalapeno or anchovy-flavored ice cream. The blind man on the newly
> sunk mesa must keep walking, of course, but he is not thousands of miles
> away from the newly risen mesa with chocolate with almonds ice cream on
> top.

He certainly may be extremely far away from the chocolate with almonds
as well as every other new type of potentially beneficial ice cream
depending upon the level of complexity that he happens to be at (i.e.,
the average density of ice-creams of any type in the sequence space at
that level of complexity).

> Changing from one glucose-based glycoside hydrolase to one with a
> slightly different structure is not the same as going from chocolate to
> jalapeno or fish-flavored ice cream. Not even the same as going from
> chocolate to coffee. The "island" of chocolate with almonds is *not*
> going to be way across the ocean from the "island" of chocolate.

Ok, let's say, for argument's sake, that the average density of
ice-cream cones in a space of 1 million square miles is 1 cone per 100
square miles. Now, it just so happens that many of the cones are
clustered together. There is the chocolate cluster, with all the
various types of chocolate cones fairly close together. Then there
are the strawberry cones, with all the variations on the strawberry
theme pretty close together. Then there is the . . . well, you get
the point. The question is, does this clustering of certain types of
ice cream help in traversing the gap between these clustered types?
No, it doesn't. If anything, the clustering only makes the average
gap between clusters wider. The question is, how do you get from
chocolate to strawberry or any other island cluster of ice creams
when the average gap is still quite significant?

You see, the overall average density of cones is still significant to
the problem no matter how you look at it. Clustering some of them
together is not going to help you find the other clusters - unless
absolutely all of the ice cream islands are clustered together as well
in a cluster of clusters all in one tiny portion of the overall
potential space. This is what Robin is trying to propose, but I'm
sorry, this is an absolutely insane argument outside of intelligent
design. How is this clustering of clusters explained via mindless
processes alone?

> It will
> be nearby where the blind man is. *And* because chocolate with almonds
> is now the need, it will also be on the new local high mesa (relative to
> the position of the blind man on the chocolate mesa). The blind man
> need only follow the simple rules (Up good. Down bad. Neutral neutral.
> Keep walking.) and he has a good chance of reach the 'new' local mesa
> top quite often.

And what about the other clusters? Is the environment going to change
just right a zillion times in a row so that bridges can be built to
the other clusters?

> And remember that there is not just one blind man on one mesa in this
> ocean of possible sequences. There are 10,000 already present on 10,000
> different local mesas with even more flavors than the 31 that most ice
> cream stores offer. Your math always presupposes that whenever you need
> to find, say, vanilla with cherry the one blind man starts in some
> random site and walks in a completely random fashion (rather than by the
> rules I pointed out) across half the universe of sequence space to reach
> your pre-determined goal by pure dumb luck to find the perfect lick.

That is not my position at all as I have pointed out to you numerous
times. It seems that no matter how often I correct you on this straw
man caricature of my position you make the same straw man assertions.
Oh well, here it goes again.

I'm perfectly fine with the idea that there is not just one man, but
10,000 or many more men already in place on different mesas that are
in fact selectably beneficial. In fact, there may be 10,000 or more
men on each of 10,000 mesas. That is all perfectly fine and happens
in real life. When something new "needs to be found", say, "vanilla
with a cherry on top" or any other potentially beneficial function at
that level of complexity or greater (this is not a teleological search
you know since there are many ice-cream cones available), all of the
men may search at the same time.

My math certainly does not and never did presuppose that only one man
may search the sequence space. That is simply ridiculous. All the
men search at the same time (millions and even hundreds of billions of
them at times). The beneficial sequences are those sequences that are
even slightly better than what is currently had by even one member of
the vast population of blind men that is searching for something new
and good.

Now, if the average density of something new and good that is even
slightly selectable as new and good is less than 1 in a trillion
trillion, even 100 billion men searching at the same time will take a
while to find something, anything, that is even a little bit new and
good at the same level of specified complexity that they started with.
On average, none of the men on their various mesas will be very close
to any one of the new and good mesas within the same or higher levels
of sequence space if the starting point is very far beyond the lowest
levels of specified complexity.

> My
> presumption is that the successful search is almost always going to
> start from the pre-existing mesa

Agreed.

> with the closest flavor to the new need
> (or from a duplicate, which, as a duplicate, is often superfluous and
> quickly erodes to ground level in terms of its utility).

This is where we differ. Say you have chocolate and vanilla. Getting
to the different varieties of chocolate and vanilla is not going to be
much of a problem. But, say that neither chocolate nor vanilla are
very close to strawberry or to each other. Each cluster is separated
from the other clusters by thousands of miles. Now, even though you
already have two clusters in your population, how are you going to
evolve the strawberry cluster if an environmental need arises where it
would be beneficial?

You see, you make the assumption that just because you start out with
a lot of clusters that any new potentially beneficial sequence or
cluster of sequences will be fairly close to at least one of your
10,000 starting clusters. This is an error when you start considering
levels of sequence space that have very low overall densities of
beneficial sequences. No matter where you start from and no matter
how many starting positions you have to begin with, odds are that the
vast majority of new islands of beneficial sequences will be very far
away from everything that you have to start with beyond the lowest
levels of functional complexity.

> As mentioned,
> these pre-existing mesas are not random pop-ups. They are at the most
> useful places in sequence space from which to try to find near-by mesas
> with closely-related biologically useful properties because they already
> have biologically useful properties.

Yes, similar useful biological properties would all be clustered
together under one type of functional island of sequences. However,
the overall density of beneficial sequences in sequence space dictates
how far apart, on average, these clusters of clusters will be from
each other. New types of functions that are not so closely related
will most certainly be very far away from anything that you have to
start with beyond the lowest levels of functional complexity. You may
do fine with chocolate and vanilla variations since those are what you
started with, but you will have great difficulty finding anything
else, such as strawberry, mocha, caviar, etc . . .

The suggestion that absolutely all of the clusters are themselves
clustered together in a larger cluster or archipelago of clusters in a
tiny part of sequence space is simply a ludicrous notion to me -
outside of intelligent design that is. Oh no, you, Robin, Deaddog,
Sweetness, Musgrave, and all the rest will have to do a much better
job of explaining how all the clusters can get clustered together
(when they obviously aren't) outside of intelligent design.



> I *do* expect to see clustering in useful sequences. And I *do* see it.

So do I. Who is arguing against this? Useful sequences are often
clustered around a certain type of function. What I am talking about
is evolution between different types of functions. The evolution of
different sequences with the same basic type of function is not an
issue at all. It happens all the time, usually in the form of an
up-regulation or down-regulation of a certain type of function, even
at the highest levels of functional complexity. But, this sort of
intra-island evolution is a far cry from evolving a new type of
function (i.e., going from one cluster to another). In fact, this
sort of evolution never happens beyond the lowest levels of functional
complexity due to the low density of beneficial sequences at these
higher levels of specified complexity.

In any case, this is all I have time for today. As always, it has
been most interesting. Please do try again . . .

Sean
www.naturalselection.0catch.com

Jethro Gulner

unread,
Jan 15, 2004, 12:17:26 AM1/15/04
to
I'm thinking TSS to flagellum is on the order of chocolate to
chocolate-fudge-brownie

howard hershey <hers...@indiana.edu> wrote in message news:<bu46sv$srt$1...@hood.uits.indiana.edu>...

david ford

unread,
Jan 15, 2004, 8:51:08 AM1/15/04
to
Sean Pitman <seanpi...@naturalselection.0catch.com> on 4 Jan 2004:
RobinGoodfellow <lmuc...@yahoo.com>:

[snip]

There's no such thing as an intelligent designer.
What I meant to say was, there's no such thing as an intelligent
designer of computer programs.
What I really meant to say was, there's no such thing as an
intelligent designer(s) of biology.

> If
> enough sequences are defined as beneficial and they are placed in just
> the right way, with the right number of spaces between them, then
> certainly such a high ratio will result in rapid evolution - as we saw
> here. However, when neutral non-defined gaps are present, they are a
> real problem for evolution. In this case, a gap of just 16 neutral
> mutations effectively blocked the evolution of the EQU function.
>
> http://naturalselection.0catch.com/Files/computerevolution.html

[snip]

>> The answer is simple - the ratio of beneficial states does NOT matter!
>
> Yes it does. You are ignoring the highly unlikely nature of your
> scenario. Tell me, how often do you suppose your start point would
> just happen to be so close to the only other beneficial sequence in
> such a huge sequence space? Hmmmm? I find it just extraordinary that
> you would even suggest such a thing as "likely" with all sincerity of
> belief. The ratio of beneficial to non-beneficial in your
> hypothetical scenario is absolutely miniscule and yet you still have
> this amazing faith that the starting point will most likely be close
> to the only other "winning" sequence in an absolutely enormous
> sequence space?! Your logic here is truly mysterious and your faith
> is most impressive.

Anything is possible with enough faith. Simply believe hard enough,
and reality _will_ conform.

> I'm sorry, but I just can't get into that boat
> with you. You are simply beyond me.

What are you afraid of-- getting a little wet? When the boat sinks,
you will, after all, be able to swim. Though I don't know for how
long....



>> All that matters is their distribution, and how well a particular
>> random walk is suited to explore this distribution.
>
> Again, you must consider the odds that your "distribution" will be so
> fortuitous as you seem to believe it will be. In fact, it has to be
> this fortuitous in order to work. It basically has to be a set up for
> success. The deck must be stacked in an extraordinary way in your
> favor in order for your position to be tenable. If such a stacked
> deck happened at your table in Las Vegas you would be asked to leave
> the casino in short order or be arrested for "cheating" by intelligent
> design since such deck stacking only happens via intelligent design.

Intelligent design advocates often cheat. They are masters of
illusion and sleight of hand. Their ideas adapt to data as fog adapts
to land, to borrow some phraseology from the creationist ReMine.
Their views can "explain" any conceivable observation, and any
conceivable set of circumstances (exception: if biology did not
exist, or if we are living in the Matrix, and what we think is real is
not real and is a dream).

Magician Walter ReMine wrote the extremely dangerous and execrable
book _The Biotic Message: Evolution versus Message Theory_ (1993),
538pp. I cannot urge upon you strongly enough the importance of not
reading that book. Miraculously, my faith in the solidity and rigor
of the theory of evolution aka Precious survived the reading of large
portions of that most despicable book. Those were dark times, but my
faith in Precious survived.

[snip]

>> A random walk
>> starting at a given beneficial sequence, and allowing certain
>> transitions from one sequence to another, would require a completely
>> different type of analysis. In the analyses of most such search
>> algorithms, the "ratio" of beneficial sequences would be irrelevant -
>> it is their *distribution* that would determine how well such an
>> algorithm would perform.
>
> The most likely distribution of beneficial sequences is determined by
> their density/ratio. You cannot simply assume that the deck will be
> so fantastically stacked in the favor of your neat little evolutionary
> scenario. I mean really, if the deck was stacked like this with lots
> of beneficial sequences neatly clustered around your starting point,
> evolution would happen very quickly. Of course, there have been those
> who propose the "Baby Bear Hypothesis". That is, the clustering is
> "just right" so that the theory of evolution works.

How could the existence of such just-right clustering be accounted
for-- what could have produced it?
In your response, please do not invoke intelligence. After all, in
the story of Goldilocks and the three bears, the porridge was not
prepared by intelligence. Intelligence cannot account for the
appearance of _anything_. This post is an illustration of that fact.

> That is the best of

Sorry, not interested. I recently blew my life savings on 50 acres of
oceanfront property on the Moon.
If you act now, you too can get in on this amazing ground-level deal,
and be privy to the secrets of the Oceanfront Moon Property Society.
The only requirements for membership are that you own Moon property
and affirm that intelligence/ mind cannot be an explanation for the
appearance of anything, especially biology.

Sean Pitman

unread,
Jan 15, 2004, 11:30:52 AM1/15/04
to
jethro...@bigfoot.com (Jethro Gulner) wrote in message news:<edf04d4a.04011...@posting.google.com>...

>
> I'm thinking TSS to flagellum is on the order of chocolate to
> chocolate-fudge-brownie

Now that's a serious stretch of the imagination. The TTSS system is a
non-motile secretory system while the fully formed flagellar system is
a motility system as well. The TTSS system requires 6 or so different
protein parts, at minimum, for its formation while the motility
function of the flagellar system requires an additional 14 or so
different protein parts (for a total of over 20 parts) before its
motility function can be realized. Unless you can find intermediate
functions for the gap of more than a dozen required parts that
separate the TTSS system from the Flagellar system, I'd say this gap
is quite significant indeed, requiring at minimum several thousand
fairly specified amino acids. Certainly this is not the same thing as
roaming around the same island cluster with the same type of function.
The evolution from the TTSS island of function to the brand new type
of motility function found in the flagellar island would have to cross
a significant distance before the motility function of the flagellum
could be realized. Such a distance could not be crossed via random
walk alone this side of zillions of years in any population of
bacteria on Earth. In order for evolution to have truly crossed such
a gap, without intelligent design helping it along, there would have
to be a series of closely spaced beneficial functions/sequences
between the TTSS and the motility function of the flagellum.

Where is this series of steppingstones? That is the real question!
Many have tried to propose the existence of various steppingstone
functions, but none have been able to show that these steppingstones
could actually work, as no one has ever shown the crossing from any
proposed steppingstone to any other in real life. If you think you
know better how such a series could exist and actually work to
eliminate this gap problem, please do share your evolutionary sequence
with us.

Sean
www.naturalselection.0catch.com

Sean Pitman

unread,
Jan 15, 2004, 2:50:33 PM1/15/04
to
"Chris Merli" <clm...@insightbb.com> wrote in message news:<lKmNb.55023$5V2.67607@attbi_s53>...

> >
> > Exactly right. The problem is that increasing the number of blind men
> > searching only helps for a while, at the lowest levels of functional
> > complexity where the density of ice cream cones is the greatest.
> > However, with each step up the ladder of functional complexity, the
> > density of ice cream cones decreases in an exponential manner.
>
> This is based on the false assumption that increasing complexity must entail
> de novo development of the more complex systems. It is painfully clear from
> an examination of most proteins that even within a single polypeptide there
> are portions that are recruited from other coding sequences. Thus the basic
> units that even you have realized can evolve are easily shuffled copied and
> adapted. I would contend in fact that the hardest part of the evolution is
> not the complex systems that you have argued but the very simple functions.

This is a very common misconception among evolutionists - that if the
right subparts of a system are similar or identical to other parts
elsewhere in other systems, then the system in question obviously
arose via a "simple" assembly of pre-existing subparts.

The problem with this idea is that just because all of the right
subparts needed to make a new beneficial system of function are there,
already fully formed as parts of other systems of function, does not
mean that they will assemble themselves to form a new collective
system of function. For example, all of the individual amino acids
are there, fully formed, to make a motility apparatus in a
historically non-motile bacterial colony. Say that motility would be
advantageous to this colony if it evolved a system that would give it
motility. All the right parts are there, but they don't know how to
assemble themselves to make such a system.

Now why is this? Because, in order for correct assembly of the parts
to proceed, the information for their assembly must be pre-established
in the DNA. This genetic information tells where, when, and how much
of each part to make so that the assembly of the molecular systems can
occur. Without this pre-established information the right parts just
won't assemble properly beyond the lowest levels of functional
complexity. It would be like having all the parts to a watch in a
bag, shaking the bag for a billion years, and expecting a fully formed
watch, or anything else of equal or greater emergent functional
complexity, to fall out at the end of that time. The same is true for
say, a bacterial flagellum. Take all of the necessary subparts needed
to make a flagellum, put them together randomly, and see if they will
self-assemble a flagellar apparatus. It just doesn't happen outside
of the very specific production constraints provided by the
pre-established genetic information that codes both for flagellar part
production and for where, when, and how much of each part to produce so
that assembly of these parts will occur in a proper way. The simple
production of flagellar parts in a random non-specific way will only
produce a junk pile - not a highly complex flagellar system.

Now, of course, if you throw natural selection into the picture, this
is supposed to get evolution out of this mess. It sorts through the
potential junk pile options and picks only those assemblages that are
beneficial, in a stepwise manner, until higher and higher systems of
functional complexity are realized. This is how it is supposed to
work. The problem with this notion is that as one climbs up the
ladder of functional complexity, it becomes more and more difficult to
keep adding genetic sequences together in a beneficial way without
having to cross vast gaps of neutral or even detrimental changes.

For example, start with a meaningful English word and then add to or
change that word so that it makes both meaningful and beneficial sense
in a given situation/environment. At first such a game is fairly easy
to do. But, very quickly you get to a point where any more additions
or changes become very difficult without there being significant
changes happening that are "just right". The required changes needed
to maintain beneficial meaning with longer and longer phrases,
sentences, paragraphs, etc., start to really get huge. Each word has
a meaning by itself that may be used in a beneficial manner by many
different types of sentences with completely different meanings.
Although the individual word does have a meaning by itself, its
combination with other words produces an emergent meaning/function
that goes beyond the sum of the individual words. The same thing
happens with genes and proteins. A portion of a protein may in fact
work well in a completely different type of protein, but in the
protein that it currently belongs to, it is part of a completely
different collective emergent function. Its relative order as it
relates to the other parts of this larger whole is what is important.
How is this relative order established if there are many many more
ways in which the relative order of these same parts would not be
beneficial in the least?
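
As a rough feel for the combinatorics this word analogy appeals to,
here is a tiny Python sketch (the sentence, and the assumption that
only the one original ordering "works", are illustrative choices, not
measurements of anything biological):

import math

words = "the relative order of these same parts is what matters".split()
orderings = math.factorial(len(words))  # ways to arrange the very same words

print(len(words), "words can be arranged in", orderings, "different orders")
# Toy assumption: treat only the original ordering as "beneficial".
print("fraction of orderings that read as intended:", 1 / orderings)

Ten words already allow over three and a half million orderings, which
is the sense in which beneficial arrangements can be vastly outnumbered
by non-beneficial ones.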

Again, just because the right parts happen to be in the same place at
the same time does not mean much outside of a pre-established
information code that tells them how to specifically arrange
themselves.

> > In
> > order to keep up with this exponential decrease in average cone
> > density, the number of blind men has to increase exponentially in
> > order to find the rarer cones at the same rate. Very soon the
> > environment cannot support any more blind men and so they must
> > individually search out exponentially more and more sequence space, on
> > average, before success can be realized (i.e., a cone or cluster of
> > cones is found). For example, it can be visualized as stacked levels
> > of rooms. Each room has its own average density of ice cream cones.
> > The rooms on the lowest level have the highest density of ice cream
> > cones - say one cone every meter or so, on average. Moving up to the
> > next higher room the density decreases so that there is a cone every 2
> > meters or so. Then, in the next higher room, the density decreases to
> > a cone every 4 meters or so, on average. And, it goes from there.
> > After 30 or so steps up to higher levels, the cone density is 1 every
> > billion meters or so, on average.
>
> If the development of each protein started from scratch you may have an
> excellent argument, but nearly all proteins derive from other proteins, so you are
> starting from a point that is known to be functional.

You are actually suggesting here that the system in question had its
origin in many different places. You seem to be suggesting that all
the various parts found as subparts of many different systems somehow
brought themselves together to make a new type of system . . . just
like that. Well now, how did these various different functional
parts, as subparts of many different systems, know how to come
together so nicely to make a completely new system of function? This
would be like various parts from a car simply deciding, by themselves,
to reassemble to make an airplane, or a boat, or a house.

Don't you see, just because the subparts are functional as parts of
different systems of function does not mean that these subparts can
simply make an entirely new collective system of function. This just
doesn't happen although evolutionists try and use this argument all
the time. It just doesn't make sense. It is like throwing a bunch of
words on the ground at random saying, "Well, they all work as parts of
different sentences, so they should work together to make a new
meaningful sentence." Really now, it just doesn't work like this.
You must be able to add the genetic words together in a steppingstone
sequence where each addition makes a beneficial change in the overall
function of the evolving system. If each change does not result in a
beneficial change in function, then nature will not and cannot select
to keep that change. Such non-beneficial changes are either
detrimental or neutral. The crossing of such detrimental/neutral gaps
really starts to slow evolution down, in an exponential fashion,
beyond the lowest levels of specified functional complexity. Very
soon, evolution simply stalls out and cannot make any more
improvements beyond the current level of complexity that it finds
itself, this side of zillions of years of average time.
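
For anyone who wants to experiment with this gap-crossing question
directly, here is a minimal Python sketch of a single lineage drifting
across a purely neutral stretch toward the next beneficial state (one
dimension, one walker, a reflecting wall behind it; every number is an
illustrative assumption, and whether or how this toy maps onto real
sequence space is precisely what is in dispute):

import random

def crossing_steps(gap, rng, cap=2_000_000):
    # Positions 0..gap-1 are selectively neutral (flat); position `gap`
    # is the next beneficial state.  There is nothing "downhill" here to
    # reject, so every +/-1 step is kept, with a reflecting wall at 0.
    x = 0
    for step in range(1, cap + 1):
        x = max(0, x + rng.choice((-1, 1)))
        if x >= gap:
            return step
    return cap  # give up; not reached within the step budget

rng = random.Random(0)
for gap in (5, 10, 20, 40, 80):
    trials = sorted(crossing_steps(gap, rng) for _ in range(200))
    print(f"neutral gap of width {gap:3d}: median steps to cross = {trials[100]}")

Running it shows how the waiting time grows as the neutral stretch
widens, which is the quantity this argument turns on.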

Sean
www.naturalselection.0catch.com

Bennett Standeven

unread,
Jan 15, 2004, 7:41:15 PM1/15/04
to
seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04011...@posting.google.com>...

> howard hershey <hers...@indiana.edu> wrote in message news:<bu46sv$srt$1...@hood.uits.indiana.edu>...
>
> > > Consider the scenario where there are 10 ice cream cones on the
> > > continental USA. The goal is for a blind man to find as many as he
> > > can in a million years.
> >
> > Except that is NOT what evolution does. Evolution starts with an
> > organism with pre-existing sequences that produce products and interact
> > with environmental chemicals in ways that are useful to the organism's
> > reproduction.
>
> Yes . . . so start the blind man off with an ice-cream cone to begin
> with and then have him find another one.

OK; the ice cream cones are probably found in shops; so given any
cone, odds are that another cone is just a few inches away. This is
still true even if there is only one shop in the USA.


> > Up is good. Down
> > is bad.
>
> Ice-cream cone = Good or "Up" (to one degree or another) or even
> neutral depending upon one's current position as it compares to one's
> previous position. For example, once you have an ice cream, that is
> good. But, all changes that maintain that ice cream but do not gain
> another ice cream are neutral.
>
> No ice-cream cone = "Bad", "Down", or even "neutral" depending upon
> one's current position as it compares to one's previous position.
>
> > Flat is neither good nor bad.
>
> Exactly. Flat is neutral. The more neutral space between each "good"
> upslope/ice-cream cone, the longer the random walk. The average
> distance between each selectable "good" state translates into the
> average time required to find such a selectable state/ice-cream cone.
> More blind men searching, like 10,000 of them, would cover the area
> almost 10,000 times faster than just one blind man searching alone.
> However, at increasing levels of complexity the flat area expands at
> an exponential rate.

But it does not matter, because the blind men always start out in the
ice cream shop, with an ever increasing selection of cones within
arm's reach. Of course, they'll never find any of the other shops, but
so what?

[...]

then each of them is a local high point; only these mesas will have
blind men on them.


> > But you were wondering how something new could arise *after* the blind
> > men are already wandering around the mesas? The answer is that it
> > depends. They can't always do so.
>
> And why not Howard? Why can't they always do so? What would limit
> the blind men from finding new mesas? I mean really, each blind man
> will self-replicate (hermaphrodite blind men) and make 10,000 new
> blind men on the mesa that he/she/it now finds himself on.

But since the current mesa is a local high point, there is nowhere for
them to go.

[...]


> > Let's say that each mesa top has a different basic *flavor* of
> > ice cream. Say that chocolate is a glycoside hydrolase that binds a
> > glucose-based glycoside. Now let's say that the environment changes so
> > that one no longer needs this glucose-based glycoside (the mesa sinks
> > down to the mean level) but now one needs a galactose-based glycoside
> > hydrolase.
>
> You have several problems here with your illustration. First off,
> both of these functions are very similar in type and use very similar
> sequences.

That's not a "problem", it's the whole point. Evolution by definition
involves gradual changes, in which new systems have similar functions
and definitions to the old ones.

> Also, their level of functional complexity is relatively
> low (like the 4 or 5 letter word level).

I don't know exactly what "galactose-based glycoside" is, but
something tells me that it takes more than 4 or 5 amino acids to bind
to it.

> Also, you must consider the likelihood that the environment would change
> so neat so that galactose would come just when glucose is leaving.

Yes. More likely the galactose was always there, but was ignored in
favor of the glucose, until the latter disappeared.

> Certainly if you could program the environment just right, in perfect
> sequence, evolution would be no problem. But you must consider the
> likelihood that the environment will change in just the right way to
> make the next step in an evolutionary sequence beneficial when it
> wasn't before.

That's pretty easy; following the mesa analogy, either the high mesa
drops to be lower than the formerly low one, or the low one rises
above the formerly high one. Happens all the time.

> The odds
> that such changes will happen in just the right way on both the
> molecular level and environmental level get exponentially lower and
> lower with each step up the ladder of functional complexity.

No; the chance that two sequences will interchange in relative fitness
does not depend on how complex they are.

> What was so easy to evolve with functions requiring no more than a few
> hundred fairly specified amino acids at minimum, is much much more
> difficult to do when the level of specified complexity requires just a few
> thousand amino acids at minimum. It's the difference between evolving
> between 3-letter words and evolving between 20-letter phrases. What
> are the odds that one 20-letter phrase/mesa that worked well in one
> situation will sink down with a change in situations to be replaced by
> a new phrase of equal complexity that is actually beneficial? -

Quite good, I'd say. I can easily imagine the relative fitness of
"Today we'll talk about unicorns" exchanging places with that of
"Today we'll talk about unicode", for example. Those are 25-letter
phrases; making them even longer would only increase the number of
nearby phrases with potential value.
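
For what it is worth, the "number of nearby phrases" point is easy to
make concrete. A small Python sketch (the 26-letter alphabet and
substitution-only moves are simplifying assumptions; insertions and
deletions would only add more neighbors):

phrases = ["Today we'll talk about unicorns",
           "Today we'll talk about unicode"]

for phrase in phrases:
    letters = [c for c in phrase if c.isalpha()]
    # Each letter position can be swapped for any of the 25 other letters,
    # so the count of one-substitution neighbors grows linearly with length.
    neighbors = len(letters) * 25
    print(f"{phrase!r}: {len(letters)} letters, "
          f"{neighbors} one-letter neighbors")

Longer phrases have strictly more one-letter neighbors, which is all
the point above requires.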

> Outside of intelligent design? That is the real question here.
>
> > Notice that the difference in need here is something more
> > like wanting chocolate with almonds than wanting even strawberry, much
> > less jalapeno or anchovy-flavored ice cream. The blind man on the newly
> > sunk mesa must keep walking, of course, but he is not thousands of miles
> > away from the newly risen mesa with chocolate with almonds ice cream on
> > top.
>
> He certainly may be extremely far away from the chocolate with almonds
> as well as every other new type of potentially beneficial ice cream
> depending upon the level of complexity that he happens to be at (i.e.,
> the average density of ice-creams of any type in the sequence space at
> that level of complexity).

Yes; the higher the level of complexity, the more likely that the new
ice cream cone is nearby, since the fancier (more complex) flavors
tend to appear in the stores with the largest selection.

> > Changing from one glucose-based glycoside hydrolase to one with a
> > slightly different structure is not the same as going from chocolate to
> > jalapeno or fish-flavored ice cream. Not even the same as going from
> > chocolate to coffee. The "island" of chocolate with almonds is *not*
> > going to be way across the ocean from the "island" of chocolate.
>
> Ok, lets say, for arguments sake, that the average density of
> ice-cream cones in a space of 1 million square miles is 1 cone per 100
> square miles. Now, it just so happens that many of the cones are
> clustered together. There is the chocolate cluster with all the
> various types of chocolate cones all fairly close together. Then,
> there is the strawberry cones with all the variations on the
> strawberry theme pretty close together. Then, there is the . . .
> well, you get the point. The question is, does this clustering of
> certain types of ice creams help in traversing the gap between
> these clustered types of ice creams? No it doesn't. If anything, the
> clustering only makes the average gap between clusters wider. The
> question is, how to get from chocolate to strawberry or any other
> island cluster of ice creams when the average gap is still quite
> significant?

You don't; if you want to get from chocolate to strawberry, you need
to do it early on, when the distance is smaller. That's why
fundamental differences between organisms (such as between chocolate
and strawberry ice cream) are taken as evidence that they are only
distantly related.

>
> You see, the overall average density of cones is still significant to
> the problem no matter how you look at it. Clustering some of them
> together is not going to help you find the other clusters

Who said we had to find all of the clusters?

> > It will be nearby where the blind man is. *And* because chocolate with
> > almonds is now the need, it will also be on the new local high mesa
> > (relative to the position of the blind man on the chocolate mesa). The
> > blind man need only follow the simple rules (Up good. Down bad. Neutral
> > neutral. Keep walking.) and he has a good chance of reaching the 'new' local
> > mesa top quite often.
>
> And what about the other clusters? Is the environment going to change
> just right a zillion times in a row so that bridges can be built to
> the other clusters?

No; the blind men at the other clusters reached them when they were
still a part of this one. Eventually the clusters split apart and
"drifted" away from each other. (Much as galaxies "drift" apart due to
cosmic expansion.)

[...]

>
> > My
> > presumption is that the successful search is almost always going to
> > start from the pre-existing mesa
>
> Agreed.
>
> > with the closest flavor to the new need
> > (or from a duplicate, which, as a duplicate, is often superfluous and
> > quickly erodes to ground level in terms of its utility).
>
> This is where we differ. Say you have chocolate and vanilla. Getting
> to the different varieties of chocolate and vanilla is not going to be
> much of a problem. But, say that neither chocolate nor vanilla are
> very close to strawberry or to each other. Each cluster is separated
> from the other clusters by thousands of miles. Now, even though you
> already have two clusters in your population, how are you going to
> evolve the strawberry cluster if an environmental need arises where it
> would be beneficial?

In that case, you wouldn't. You'd have to settle for chocolate ice
cream with strawberries or some such.

Similarly, we would not expect birds to evolve jet engines, as they
are too different from any system the birds possess now.

[...]


> The suggestion that absolutely all of the clusters are themselves
> clustered together in a larger cluster or archipelago of clusters in a
> tiny part of sequence space is simply a ludicrous notion to me -
> outside of intelligent design that is. Oh no, you, Robin, Deaddog,
> Sweetness, Musgrave, and all the rest will have to do a much better
> job and explaining how all the clusters can get clustered together
> (when they obviously aren't) outside of intelligent design.

It isn't necessary that they _all_ be clustered in that fashion; only
that some of them are.

Bennett Standeven

unread,
Jan 15, 2004, 7:46:23 PM1/15/04
to
dfo...@gl.umbc.edu (david ford) wrote in message news:<b1c67abe.04011...@posting.google.com>...


> What I meant to say was, there's no such thing as an intelligent
> designer of computer programs.

Heh. Sometimes it feels that way, even with my own programs.

Chris Merli

unread,
Jan 15, 2004, 9:59:07 PM1/15/04
to

Actually this comes from the examination of many protein sequences.

So how would you explain that there are hundreds of examples of parts of
proteins that were obviously lifted from other proteins? More importantly,
how do you explain the nested hierarchies that they follow? A designer may
borrow an idea to use again, but they would not modify simple, highly
functional components for each new use. That would be like completely
re-engineering a bolt for every new machine. So then your theory must
predict that we would find very similar proteins in very diverse organisms.
Is this a prediction of your theory?

I thought you were beyond this base level of a strawman argument.

>
> Don't you see, just because the subparts are functional as parts of
> different systems of function does not mean that these subparts can
> simply make an entirely new collective system of function. This just
> doesn't happen although evolutionists try and use this argument all
> the time. It just doesn't make sense.

Actually, if you look at the components of the clotting system or the globin
genes, you will see that this is exactly what happens. And if you want to go
for real word scrabble, try the immune system. The basic idea is to shuffle
the parts of these genes to create hundreds of different antibodies.

> It is like throwing a bunch of
> words on the ground at random saying, "Well, they all work as parts of
> different sentences, so they should work together to make a new
> meaningful sentence." Really now, it just doesn't work like this.

Maybe it would help if you avoided using analogies and just stuck to
biological examples.

> You must be able to add the genetic words together in a steppingstone
> sequence where each addition makes a beneficial change in the overall
> function of the evolving system. If each change does not result in a
> beneficial change in function, then nature will not and cannot select
> to keep that change. Such non-beneficial changes are either
> detrimental or neutral. The crossing of such detrimental/neutral gaps
> really starts to slow evolution down, in an exponential fashion,
> beyond the lowest levels of specified functional complexity. Very
> soon, evolution simply stalls out and cannot make any more
> improvements beyond the current level of complexity that it finds
> itself, this side of zillions of years of average time.
>
> Sean
> www.naturalselection.0catch.com

I noticed that you did not address the most important parts of my last post.
If you have developed a scientific theory as an alternative explanation then
you should be able to provide some testable predictions to support the
theory.



Von Smith

unread,
Jan 16, 2004, 12:33:18 AM1/16/04
to
seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04011...@posting.google.com>...

Except of course that organisms that actually have all these
components don't actually produce a junkpile, and in many cases, such
as in the TTSS or Tsp pilus, the relevant parts already assemble in
substantially the same way they do for a flagellum. It appears that
Dr. Pitman has taken his strawman version of evolution to the next
level: not content with suggesting that proteins must evolve from
scratch from random peptide sequences, he is now telling us that
complex multi-protein systems must evolve from random junk-piles of
constituent parts.

I would have been more impressed if you had written this *after*
giving a substantive reply to Deaddog's recent excellent post on
Synthetic Biology, which probably sheds some light on how biologists
*really* think complex multi-protein systems might evolve. In it, he
cites a paper in which researchers randomly switched around some of
the parts involved in complex multi-protein interactions to see what
they would do.

Combinatorial synthesis of genetic networks.
Guet CC, Elowitz MB, Hsing W, Leibler S.
Science. 2002 May 24; 296(5572): 1466-70.

http://www.sciencemag.org/cgi/content/full/296/5572/1466

So what happens when one shakes up the regulatory bits of a biological
system and lets them fall where they will? AIUI, far from ending up
with nothing but random junkpiles, the researchers were able to obtain
a variety of novel logically-functioning phenotypes. No need for some
pre-existing homonculus magically prompting the various parts on how
to behave: as often as not the parts were able to associate and
interact coherently left to their own devices. Of course it is
possible that this liberal arts major is misunderstanding the article.
Perhaps the biologically washed can comment more coherently.
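
For anyone who wants to poke at that idea without a wet lab, here is a
toy Python sketch in the same general spirit (three mutually repressing
genes, two chemical inputs, a readout taken from the third gene; the
wiring scheme and update rules are my own simplifications and not the
actual plasmid library Guet et al. built):

from itertools import product

def settled_output(wiring, inducer_a, inducer_b):
    # wiring[i] = which protein represses gene i (a toy assumption).
    # inducer_a disables repression by protein 0, inducer_b by protein 1.
    def represses(protein, state):
        if protein == 0 and inducer_a:
            return False
        if protein == 1 and inducer_b:
            return False
        return state[protein]

    state = (False, False, False)            # start with no proteins present
    seen = set()
    while state not in seen:                 # synchronous updates until repeat
        seen.add(state)
        state = tuple(not represses(wiring[i], state) for i in range(3))
    again = tuple(not represses(wiring[i], state) for i in range(3))
    return state[2] if again == state else None   # None = network oscillates

behaviors = {}
for wiring in product(range(3), repeat=3):         # all 27 possible wirings
    table = tuple(settled_output(wiring, a, b)
                  for a, b in product((False, True), repeat=2))
    behaviors.setdefault(table, []).append(wiring)

print(len(behaviors), "distinct input/output behaviors from 27 wirings:")
for table, ws in behaviors.items():
    print("  outputs for (A,B) = 00,01,10,11:", table, "-", len(ws), "wirings")

Even this cartoon version turns up several qualitatively different
logical behaviors from nothing but reshuffled connections, which is a
much more modest, purely illustrative analogue of what the paper
reports.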

>
> Now, of course, if you throw natural selection into the picture, this
> is supposed to get evolution out of this mess. It sorts through the
> potential junk pile options and picks only those assemblages that are
> beneficial, in a stepwise manner, until higher and higher systems of
> functional complexity are realized. This is how it is supposed to
> work. The problem with this notion is that as one climbs up the
> ladder of functional complexity, it becomes more and more difficult to
> keep adding genetic sequences together in a beneficial way without
> having to cross vast gaps of neutral or even detrimental changes.

Maybe, maybe not. Again, you wouldn't necessarily need an
astronomical amount of novel assembly to get motility out of a Tsp
pilus; many of the constituent parts are not only already there, but
are already interacting in the way they do in a flagellum. It appears
that once again, you are assuming that we are talking about evolving
such a system from *any* random assemblage of the individual
components, rather than from a logical precursor like a pilus. And
besides, this recent work in systems biology indicates that even if we
*are* talking about randomly rejumbling the components of a system,
the prospects of getting a novel beneficial function as a result may
not be quite as grim as you make out.

<snip yet another English language analogy>

I may be somewhat out of my depth here, but what the hell: Proteins
interact with one another according to chemistry. Since the precursor
proteins had basically the same chemical properties they do now, they
already more or less "knew how" to interact with one another. You
might want a point mutation or two to improve affinity; this is hardly
a problem for evolution. Timing and delivery of the parts is
controlled by things like regulatory sequences and transport proteins;
these can also evolve new behaviors, and have been observed to do so.
Genes can be up- or down-regulated, and regulatory sequences can
evolve to respond to different repressors. Regulatory sequences can
be switched around. Transport proteins can be co-opted and modified to
transport different substances.

ISTM you are trying to create a mystification. We don't have all the
*specific* answers to how, exactly, this or that structure evolved, or
even know all the details about how the various parts of the flagellum
work, but I don't think that how proteins generally "know how" to
interact with one another is the sort of inexplicable black magic you
seem to think it is.

Von Smith
Fortuna nimis dat multis, satis nulli.

howard hershey

unread,
Jan 16, 2004, 2:13:32 PM1/16/04
to

Sean Pitman wrote:

> howard hershey <hers...@indiana.edu> wrote in message
> news:<bu46sv$srt$1...@hood.uits.indiana.edu>...
>
>
>>> Consider the scenario where there are 10 ice cream cones on the
>>> continental USA. The goal is for a blind man to find as many as
>>> he can in a million years.
>>
>> Except that is NOT what evolution does. Evolution starts with an
>> organism with pre-existing sequences that produce products and
>> interact with environmental chemicals in ways that are useful to
>> the organism's reproduction.
>
>
> Yes . . . so start the blind man off with an ice-cream cone to begin
> with and then have him find another one.
>
>
>> The situation is more like 10,000 blind men in a varying topography
>> who blindly follow simple and dumb rules of the game to find useful
>> things (ice cream at the tops of fitness peaks):
>
>
> You don't understand. In this scenario, the positively selectable
> topography is the ice-cream cone.

The reason I used a mesa loaded with ice cream cones is the difference
in size between the searching blind man (the modal sequence in a
specific population) and an ice cream cone (all sequences with the same
effective functional activity).

The only way your scenario would be an accurate reflection of reality
is if the ice cream cone were really an ice cream mountain that the
blind woman can climb, with a few dribs and drabs of ice cream (of some
flavor) at the base, with increasing concentrations of ice cream up the
slope toward the mesa, enticing her upward to the mesa top she can
wander around.

Now, why do I use this model rather than your sudden tiny ice cream
cones that pop out of nowhere as tiny dots on the tops of telephone
poles in a monotonously flat landscape? You need to remember what these
entities represent and what a search through real protein sequence space
would actually look like.

The blind man following my dumb rules is the sequence du jour (the
current modal sequence) of a population of organisms. The reward for
following the rules is winding up on the mesas where the maximum utility
or reward to the organism (measured by the metric of reproductive
success; ice cream is just like sex) is. This is a mesa rather than an
alp because there are literally thousands of sequences that can have
effectively the same optimal functionality, as evidenced by the fact
that there are hundreds to thousands of different sequences that perform
the same function with effectively equal efficiency in different species
of modern organisms and even within species. That doesn't preclude
minor variations in altitude on the mesa top. Moreover, it is quite
clear that there are also many sequences that have *less* utility than
the optimal utility at the mesa. The mesa is surrounded by ground that
*slopes* upward toward the mesa.
That is, the fitness mesa does not simply pop straight up out of the
ground like Devil's Tower (WY) (it is unlike most mesas in this sense);
rather, there are many sequences of varying utility, from no selectable
utility through partial utility to optimal utility. It is also clear
that optimal utility is a relative condition because many enzymes and
systems have to balance conflicting needs (such as need for being able
to utilize several different substrates).


> There are no other selectable fitness peaks here. The rest of the
> landscape is neutral.

For a *given* function or utility, the vast majority of the landscape
will be neutral (meaning equally useless in this case) for *that*
function. However, nearby the functional mesa will be other mesas that
have *related* functions or utility. These may even be poking out from
the gradual slope leading up to the function or utility of current
interest. For example, nearby a hypothetical beta galactosidase mesa
will likely be mesas that bind other sugar-adducts via a glycoside
linkage and hydrolyse those linkages, but do not bind galactose. That
is, there will be sequences which already are part way up the slope
leading to the lactase activity, but the mesas part ways and go upward
in different directions (one toward glucose-adducts, say). The
reason these sequences are *clustered* close to the sequences for
galactosidase activity is *because* cleavage of a galactose-adduct bond
shares many of the structural and sequence feature needs with activities
that cleave glucose-adduct bonds. These sequences are clustered because
they are similar, like chocolate and chocolate with nuts.

Now this is a very different type of sequence landscape than the one I
see Sean proposing. Let me try to ascii draw what I see as the
differences between Sean's model of sequence space and mine. I could be
wrong about his model since he keeps using word descriptions that
disagree with his mathematical model, which invariably assumes that what
determines the difficulty of evolving a new function is the distance
between some *average* or *random* sequence and the new sequence that
must be generated, a point he repeatedly makes but denies making.

Howard's interpretation of Sean's model of sequence space:

|
|
|
| . .
|
| x
|
|
|
| . .
|
|
| .
|
|
| .
|
| o
|
|
| .
|
|
|
|
| . .
|
|
|_______________________________________________________

By 'sequence space' I specifically mean *all* possible protein sequences
of a particular length, not just those with, say, lactase activity.
The .'s in this model represents the 'ice cream cones'; that is, the
rare sequences that serve *any* useful function whatsoever. Everywhere
else we have a flat surface. The x represents the function (the type of
ice cream cone) you think the blind man (the o) must find by wandering
around the flat spaces. The blind man (the o) is the modal sequence or
starting sequence in the search. I started the o at a random or average
site in all sequence space because that is what Sean's *mathematical*
treatment presumes. He presumes that the search from an average or
random sequence to the desired sequence, which on average would depend
on the overall ratio of useless or neutral sequence to useful sequence
is what is important. Moreover, the ratio Sean uses in his calculations
is the ratio of sequences for a *particular* useful function to *all*
sequence space, and thus any sequence which is useful for a different
function other than the chosen one is put in the denominator as being
equivalent to a sequence that has no utility whatsoever. That is one
reason why I consider his goal to be a teleological or pre-determined
one.

What I cannot represent here, but is certainly an important point, is
the idea that the .'s in Sean's model are completely randomly
distributed wrt function. That is, if the . at the lower right
encodes a glucose-based glycoside hydrolase, a . representing a
galactose-based glycoside hydrolase will NOT cluster with the
glucose-based glycoside hydrolase, but will be found at some random
position (on average, far away) in this sequence space wrt the sequence
that encodes glucose-based glycoside hydrolase. In this model, and only
in this model where the search involves a completely random search, the
separation between functional sequences is a function of the ratio of
useful to non-useful sequences and nothing else. Neither the starting
point of the blind man nor any of the useful sequences show any
clustering of functionalities. Feel free, Sean, to correct any part of
this model that you regard as a misrepresentation of what your
*mathematical* model presents.


Howard's model of sequence space.

|
| _______ _______
| / .o. \ / ... \
| | ... | | .o. _|_
| \ ... / \ .../xxx\
| ------- -----|xxx|
| \xxx/
| _______ _______
| / ... \ / ... \
| | .o. | | ... |
| \ ... / \ .o. /
| -------___ -------
| / ... \
| | ... |
| \ o.. / _______
| ------- / ... \
| | ... |
| \ ... /
| -------
| _______
| / o.. \
| | ... |
| \ ... /
| -------
| _______ _______
| / ... \ / ... \
| | ..o | | .o. |
| \ ... / \ ... /
| ------- -------
|_______________________________________________________

By 'sequence space' I specifically mean *all* possible protein sequences
of a particular length. The .'s represents the 'ice cream cones'; that
is, the rare sequences that serve *any* useful function whatsoever.
There are a number of sequences that are equally useful. I have
clustered these in a 3x3 dot array, because the equally useful sequences
are close together. Of course, the reality would be that the size of
these boxes will be highly variable. Some will be only a very small
cluster. Others, like fibrinogen peptide, can essentially cover the
entire sequence space! There is little relationship between size of the
protein and the number of sequences in sequence space that can fulfill
that function. But, in general, the smaller proteins tend to have
higher constraint (fewer sequences will fulfill the function). The
differences between the .'s are selectively neutral, so the position of
the blind man (the modal sequence in a particular organism) is random
within that group. There is *also* a penumbra of surrounding sequences
with *less* utility of varying degrees of full functionality. I have
represented that by the box around the cluster of useful sequences. The
boundary is the edge of selectable utility (where the blind man starts
to notice a selective slope). The x represents the function (the type of
ice cream cone) you think the blind man (one of the o's) must find.
Notice that this representation is a representation of a real landscape
with real topography, not a perfectly flat plane with telephone poles
sticking up randomly at scattered points.

I am starting with a real cell and not with a hypothetical blind man
who is starting as some random sequence in all of sequence space.
Each of my blind men (genes, if you will) already occupies a site on the
mesa of functionality, but the different mesas (and their modal gene
sequences) represent quite different functionalities. One peak may
represent a glucose-glycoside hydrolase (the one on the upper right very
near the xxx mesa). Another, down on the lower left, may represent a
sequence with fatty acid synthetase activity. The whole board
represents all of sequence space, after all. But the o's of my modal
gene sequences in populations are not on random or average positions.
They are specifically on mesas of functionality. I would argue that
that is a better representation of reality than Sean's (or the best I
can make of Sean's) model of reality.

Notice that there is also some overlap in functionalities. And there is
even one potentially useful site that has no blind man (middle far
right). This is, simply put, a potential function that this particular
cell does not have, but does exist in sequence space, such as nylonase
activity or the ability to extract energy from H2S. It is certain that
this cell does not *need* this activity for survival. It is not
necessarily the case that it could not *use* it, although that may also
be true. One does not, after all, *need* nylonase activity in all
possible environments. In my model, all the blind men are moving around
their respective mesas. Some may even take a few steps downslope by
accident. It is highly unlikely that a *randomly chosen* one of these
10,000 pre-existing blind men (and 10,000 does not seem to be an absurd
number for the number of different genes) will find, by such a walk, the
spots marked xxx.

But evolution does not work by a *randomly* chosen or *average* blind
man wandering through functionless space to chance upon the xxx's in my
model of sequence space. In particular, in my model, there is, compared
to Sean's model, a definite, obvious, and intuitive clustering of
functionally useful sequences. That is, the cluster that overlaps the
xxx sequence is not some random sequence with some random function. It
is, let's say, a glucose-based glycoside hydrolase with no selectable
beta galactosidase activity. In my model, such a hydrolase is not
randomly present in the sequence space, but is specifically likely to be
clustered close to those sequences that do have selectable beta
galactosidase activity.
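
The difference between these two pictures is easy to put numbers on.
Here is a small Python toy (a 2-D square standing in for sequence
space, 500 "functional" points, ten Gaussian clusters; every specific
number is an illustrative assumption, not an estimate of anything about
real proteins):

import math, random

rng = random.Random(0)
M, SIDE = 500, 1000.0    # number of functional points; size of the toy space

def mean_nearest_gap(points):
    # average distance from each functional point to its nearest functional neighbor
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

# Model 1: functional points scattered uniformly at random (no clustering).
uniform = [(rng.uniform(0, SIDE), rng.uniform(0, SIDE)) for _ in range(M)]

# Model 2: the same number of points, grouped into ten tight clusters
# (families of related functions).
centers = [(rng.uniform(0, SIDE), rng.uniform(0, SIDE)) for _ in range(10)]
clustered = [(rng.gauss(cx, 5.0), rng.gauss(cy, 5.0))
             for cx, cy in (rng.choice(centers) for _ in range(M))]

print("same overall density in both models:", M, "points in a",
      int(SIDE), "x", int(SIDE), "space")
print("mean nearest-neighbor gap, uniform scatter:", round(mean_nearest_gap(uniform), 1))
print("mean nearest-neighbor gap, clustered:      ", round(mean_nearest_gap(clustered), 1))

With the overall ratio of functional points to space held fixed,
clustering makes the nearest functional neighbor of an already
functional point dramatically closer than the raw ratio alone would
suggest, which is the distinction being drawn here between the two
models.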

In fact, even if one started with a randomly chosen blind man starting
at some place on the flats between useful sequences, if that sequence
were ever to become useful, it would do so by climbing the nearest mesa
that has no blind man on top (that blind man's landscape does not
include already occupied mesas) using the simple rules I described.

Another thing that is not represented diagrammatically is the role of
duplication. A duplicate of the blind man on the mesa close to the
xxx's does not have the same position as the original blind man (which
is already at the top of the mesa). It is, instead, often at a position
that is close to the flatland (that is, one copy, the one I call the
'duplicate' is functionally redundant rather than functionally useful).
Thus, when this redundant blind man takes a step toward the xxx's he
is not taking a step down, but a step up. The landscape for this man is
different than the landscape for the identical clone of this man. This,
of course, is hard to represent in a simple plane.

> Some of the ice-cream cones may be more
> positively selectable than others (i.e., perhaps the man likes
> vanilla more than chocolate). However, all positive peaks are
> represented in this case by an ice-cream cone.
>
>
>> Up is good. Down is bad.
>
>
> Ice-cream cone = Good or "Up" (to one degree or another) or even
> neutral depending upon one's current position as it compares to one's
> previous position. For example, once you have an ice cream, that is
> good. But, all changes that maintain that ice cream but do not gain
> another ice cream are neutral.
>
> No ice-cream cone = "Bad", "Down", or even "neutral" depending upon
> one's current position as it compares to one's previous position.
>
>
>> Flat is neither good nor bad.

This position appears to represent ice cream cones as an all-or-nothing
phenomenon. There are no possible intermediate states in this model.
It looks like a flat plain with telephone poles. In short, it looks
like an artificial landscape, not a real one.


>
> Exactly. Flat is neutral. The more neutral space between each
> "good" upslope/ice-cream cone, the longer the random walk. The
> average distance between each selectable "good" state translates into
> the average time required to find such a selectable state/ice-cream
> cone. More blind men searching, like 10,000 of them, would cover the
> area almost 10,000 times faster than just one blind man searching
> alone. However, at increasing levels of complexity the flat area
> expands at an exponential rate.

How does one determine, in mathematical terms, "level of complexity"?
The reason why I did not have landscapes where I used sequence space at
a given level of complexity rather than at a given amino acid number is
that I have no idea how one determines "level of complexity".
Why do I have to keep asking that question? And what is your evidence
that increasing levels of complexity causes a change in the ratio of
utile to useless sequence? How do you determine the ratio of utile (for
*any function*) to useless (for *any function*) sequence in any case?
What is it that prevents clustering of functionally related sequences in
your landscape?

> In order to keep up and find new
> functions at these higher levels of functional complexity, the
> population of blind men will have to increase at an equivalent rate.

Only if you think the blind men start at random positions and go to a
sequence which is randomly placed wrt their position.

> The only problem with increasing the population is that very soon the
> local environment will not be able to support any larger of a
> population. So, if the environment limits the number of blind men
> possible to 10,000 - that's great if the average neutral distance
> between ice-cream cones in a few miles or so, but what happens when,
> with a few steps up the ladder of functional complexity, the neutral
> distance expands to a few trillion miles between each cone, on
> average? Now each one of your 10,000 blind men have to search around
> 50 million sq. miles, on average, before the next ice-cream cone or a
> new cluster of ice cream cones will be found by even one blind man in
> this population.

Could you explain the relevance of the above model to the real world?
Why does the *average* distance between a *random* site and a
*teleologically determined* site matter? Wouldn't the distance between
the blind man closest to a teleologically determined site and that site
be more important and relevant? We are not interested in the odds of
the *average* sequence changing into the teleologically determined one.
We are interested in the *best* odds of *any* existing sequence
changing into the teleologically determined one. The best odds are
those of the pre-existing sequence that is closest to the end sequence
and has nothing to do with the odds of an average or random sequence
becoming the end sequence.

>> Keep walking in all cases.
>
>
> They keep walking alright - a very long ways indeed before they reach
> anything beneficially selectable at anything very far beyond the
> lowest levels of functional complexity.

Only if they started from random spots in sequence space. If the blind
man which is closest to the xxx starts walking, it will quickly, by the
simple rules I invoked, find its way up the Mt. Improbable right next door.
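
These "simple rules" are easy to drop into a toy simulation. A minimal
Python sketch (a 1-D line standing in for sequence space, fitness flat
below position 100 and rising to a peak at 120; the rules are exactly
"up is good, flat is neutral, down is rejected"; all the specific
numbers are illustrative assumptions, not a model of any real protein):

import random

def walk(start, budget, slope_left=100, peak=120, rng=random):
    # Fitness: zero on the flats (x < slope_left), rising linearly to `peak`.
    def fitness(x):
        return max(0, min(x, peak) - slope_left)

    x = start
    for step in range(1, budget + 1):
        proposal = max(0, x + rng.choice((-1, 1)))   # reflecting wall at 0
        if fitness(proposal) >= fitness(x):          # up is good, flat is neutral,
            x = proposal                             # down is rejected
        if x >= peak:
            return step
    return None                                      # budget exhausted

rng = random.Random(1)
for label, start in (("start on the flats, far from any slope", 10),
                     ("start already on the slope", 105)):
    runs = [walk(start, budget=50_000, rng=rng) for _ in range(200)]
    hits = sorted(r for r in runs if r is not None)
    median = hits[len(hits) // 2] if hits else None
    print(f"{label}: {len(hits)}/200 runs reached the peak; median steps = {median}")

The walker that begins on the slope reaches the peak in a few dozen
steps, while the one that has to drift across the flats first typically
takes thousands. That is the toy version of the point that where the
search starts, not the average emptiness of the whole space, sets the
waiting time.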

>> It would not take too long for these 10,000 blind men to be found
>> in decidedly non-random places (the high mesas of functional
>> utility where they are wandering around the flat tops if you
>> haven't guessed).


> There is a funny thing about these mesas. At low levels of
> complexity, these mesas are not very large. In fact, many of them
> are downright tiny - just one or two steps wide in any direction and
> a new, higher mesa can be reached. However, once a blind man finds
> this new, higher mesa (representing a different type of
> function at a higher level of specified complexity) and climbs up onto
> its higher surface, the distance to a new mesa at the same height or
> taller is exponentially greater than it was at the lower levels of
> mesas.
>
> ___ __ __ _
> _-_ __ __-_ _-_- -__-__-_- _-__-_-_-__-
> -_- _-_-_ _-_-__
>

Can you make the above make sense? Remember in my model, that the usual
way for a blind man to move involves a change in the landscape or the
presence of a duplicate blind man who is now redundant and for whom the
landscape is differently shaped.

>> And the ice cream cones (the useful functions), remember, are not
>> randomly distributed either. They are specifically at the tops of
>> these mesas as well. That is what a fitness landscape looks like.
>
>
> Actually, the mesa itself, every part of its surface, represents an
> ice cream cone. There is no gradual increase here. Either you have
> the ice-cream cone or you don't.

I.e., your model is of a flat plain with telephone poles where there
cannot be intermediacy in function. Where an enzyme cannot mutate or
generate a closely related sequence with 50% of optimal activity. Or
10%. In your model, it is indeed all-or-nothing. That is what I get
from this discussion. Am I right? [The reason I ask is because I will
want to compare your and my model with reality -- that is test the model
against the evidence of nature -- to see which is closer to the way that
real organisms and real enzymes and real systems of change work.]

> If you don't have one that is even
> slightly "good"/beneficial, then you are not higher than you were to
> begin with and you must continue your random walk on top of the flat
> mesa that you first started on (i.e., your initial beneficial
> function(s)).

"Good/beneficial" is not an absolute value. It is a relative value. It
is "better than". Indeed, in an *unchanged* selective environment, it
is unlikely that there will be a mesa of higher utility arising out of
an original mesa that will not have already been discovered by a random
walk which retains the original activity or function at each step. What
your model seems to indicate is something quite different. Your
landscape is like a flat plain with telephone poles, and you seem to say
that the only way to reach a new telephone pole is to climb down and
wander the flatlands blindly. That is, one first completely loses all
functional utility and wanders functionless space.

In my model, it may be that there is, in fact, a mesa newly arisen out
of an original mesa that suddenly looks more attractive than the
original. This would be a consequence of a change in environment. An
example of this would be the conversion of ebg to lactase due to a
change in environment that made lactase activity far more beneficial
than the original activity of ebg. Or it could be due to the production
of a redundant duplicate, with the duplicate being free to explore new
nearby upward directions that the original could not, because its
function was too valuable. Or it could be a new function for an old
protein that appears by a change in regulation (as in eye crystallins).
But, then, I don't see any changes that must necessarily involve long
selectively neutral walks. I only see walks to related structures in a
cluster that has related functions or emergent functions of old
structures. And I envision a *real* landscape, not a flat plain with
telephone poles.

>> If this topography of utility only changed slowly, at any given
>> time it would appear utterly amazing to Sean that the blind men
>> will all be found at these local high points or optimal states (the
>> mesas licking the ice cream cones on them) rather than being
>> randomly scattered around the entire surface.
>
>
>
> If all the 10,000 blind men started at the same place, on the same
> point of the same mesa, and then went out blindly trying to find a
> higher mesa than the one they started on, the number that they found
> would be directly proportional to the average distance between these
> taller mesas.

My model does no such thing. The 10,000 blind men are found on
functionally useful mesas that are scattered *in clusters* throughout
sequence space. You seem to think that I am thinking that each blind
man represents an individual organism. I am thinking of each blind man
representing a modal sequence in a population of organisms and their
scatter representing the pattern of real functional cell sequences in
sequence space. That is because evolution of new function by sequence
change does not start from some arbitrary set of random sequences. It
starts with already useful sequences in already functioning cells. Each
mesa does something different; each cluster in a mesa or cluster of
related mesas does something related to what other members of the
cluster do. Each blind man is moving around his functional mesa.
Probability says that the blind man in the cluster closest to the new
sequence is the most likely one to find a new mesa optimum. Not some
random blind man. The average density of mesas is irrelevant to the
odds of some blind man finding a new solution or sequence. All that
matters is how far the nearest sequence with a blind man is and whether
the environmental landscape has changed to favor movement from current
optima.

> If the density of taller mesas, as compared to the one
> they are now on, happens to be, say, one every 100 meters, then they
> will indeed find a great many of these in short order. However, if
> the average density of taller mesas, happens to be one every 10,000
> kilometers, then it would take a lot longer time to find the same
> number of different mesas as compared to the number the blind men
> found the first time when the mesas were just 100 meters apart.

My whole point is that *average* distance from an *average* blind man is
utterly irrelevant to reality. It is not wrong. It is irrelevant.

>> They reached these high points (with the ice cream) by following a
>> simple dumb algorithm.
>
>
> Yes - and this mindless "dumb" algorithm works just fine to find new
> and higher mesas if and only there is a large average density of
> mesas per given unit of area (i.e., sequence space). That is why it
> is easy to evolve between 3-letter sequences. The ratio/density of
> such sequences is as high as 1 in 15. Any one mutating sequence will
> find a new 3-letter sequence within 15 random walk steps on average.
> A population of 10,000 such sequences (blind men) would find most if
> not all the beneficial 3-letter words (ice-cream cones) in 3-letter
> sequence space in less than 30 generations (given that there was one
> step each, on average, per generation).

Notice that you are starting with a *random* 3-letter sequence and
asking how many steps would be required for it to reach another *random*
specified 3-letter sequence by a *random* walk with no intermediate
utility. That is the mathematical argument you repeatedly say you are
NOT making, but repeatedly insist on doing. That model is not wrong.
It is irrelevant.

> This looks good so far now doesn't it? However, the problems come as
> you move up the ladder of specified complexity. Using language as
> an illustration again, it is not so easy to evolve new beneficial
> sequences that require say, 20 fairly specified letters, to transmit
> an idea/function. Now, each member of our 10,000 blind men is going
> to have to take over a trillion steps before success (the finding of
> a new type of beneficial state/ice cream cone) is realized for just
> one of them at this level of complexity.

This does fit the model I presented as my interpretation of what you
said. I think that model has very little relationship to either the
reality of sequence space or the mechanisms of evolution. It is nothing
but the tornado whipping together a 747 argument gussied up so she
doesn't look like the old decrepit whore she is.


>
> Are we starting to see the problem here? Of course, you say that
> knowledge about the average density of beneficial sequences is
> irrelevant to the problem, but it is not irrelevant unless you, like
> Robin, want to believe that all the various ice-cream cones
> spontaneously cluster themselves into one tiny corner of the
> potential sequence space AND that this corner of sequence space just
> so happens to be the same corner that your blind men just happen to
> be standing in when they start their search. What an amazing stroke
> of luck that would be now wouldn't it?

I do think that sequences cluster by functional attributes. That is,
enzymes that hydrolyze glycoside linkages will all have similar
sequences or at least sequences that produce similar 3-D structures with
a *few* key sites being strongly conserved. Why do you think otherwise?

>> But you were wondering how something new could arise *after* the
>> blind men are already wandering around the mesas? The answer is
>> that it depends. They can't always do so.
>
>
> And why not Howard? Why can't they always do so? What would limit
> the blind men from finding new mesas?

The fact that the blind men (the modal sequence of a population) are
already on mesas of utility. Usually a change in functional or
selective landscape is required in the vicinity of a blind man to allow
him to reach a different peak by following the simple rules.

> I mean really, each blind man
> will self-replicate (hermaphrodite blind men) and make 10,000 new
> blind men on the mesa that he/she/it now finds himself on. This new
> population would surely be able to find new mesas in short order if
> things worked as you suggest.

If the change is positively *selective*, the walk of the blind man (the
modal population sequence) to the goal will indeed be rapid. But
neutral drift of a modal population sequence is not a fast process. If
it requires a few steps downward before hitting a new upward slope to a
different function the process will be quite episodic.

> But the problem is that if the mesas
> are not as close together, on average, as they were at the lower
> level where the blind men first started their search, it is going to
> take longer time to find new mesas at the same level or higher. That
> is the only reason why these blind men "can't always" find "something
> new". It has to do with the average density of mesas at that level.

Average density of a specified end only has meaning if one is
envisioning evolutionary searches as a random search for a specified end
from a random or average position. Evolutionary searches that succeed
rarely, if ever (nylonase, perhaps), start from a random or average site.
They start from a site close to the destination. And since functional
sequences do seem to be clustered rather than randomly scattered across
sequence space, it is not unusual for the starting point of *successful*
evolutionary inventions to be nearby.
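
As a crude, back-of-the-envelope sketch of why the starting point
dominates the arithmetic (the 300-residue length, the 20-letter
alphabet, and the 5-substitution "homolog" below are assumptions
invented for illustration, not data about any real protein):

import random

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino-acid letters
L = 300                       # a hypothetical protein length

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

target = "".join(random.choice(AA) for _ in range(L))

# a blind man dropped at a random point in sequence space
random_start = "".join(random.choice(AA) for _ in range(L))

# a blind man already standing on a nearby mesa:
# the target sequence with 5 substitutions scattered into it
near = list(target)
for i in random.sample(range(L), 5):
    near[i] = random.choice(AA)
near_start = "".join(near)

print("substitutions separating a random start from the target:",
      hamming(random_start, target))
print("substitutions separating the nearby homolog from the target:",
      hamming(near_start, target))

The random start is about 285 substitutions away on average; the
homologous start is at most 5. The "average density" of targets tells
you about the first number, which is not the number evolution actually
has to work with.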

>> But remember that these pre-existing mesas are not random places.
>> They do something specific with local utility.
>
>
> The mesas represent sequences with specific utilities. These
> sequences may in fact be widely separated mesas even if they happen
> to do something very similar. Really, there is no reason for the
> mesas to be clustered in one corner of sequence space. A much more
> likely scenario is for them to be more evenly distributed throughout
> the potential sequence space.

Choose your poison. If mesas are clustered, then reaching a new mesa
that is far away from any cluster becomes more difficult, precisely
because the o's (the blind men, or modal population sequences) are
clustered on pre-existing mesas. Reaching it might require a chance
event like the one that produced nylonase, or one that forms a chimeric
protein, rather than the stepwise change of single nucleotides that
would be possible if the new mesa were in the same functional family.
Or the change might simply be impossible for that organism.

If mesas are *evenly* spread throughout sequence space, that still
doesn't change the fact that the distance between the *average*
pre-existing mesa, with its blind man, and the new mesa is irrelevant;
what matters is the distance between the *nearest* pre-existing mesa and
the new mesa. Evolution to the new mesa won't come from some *average*
mesa or some mesa on the other side of sequence space. It will come from
the mesas that are closest to the new one.
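
The same point in toy form (the alphabet, sequence length, and numbers
of starting points below are arbitrary assumptions): the average
distance from the pre-existing sequences to some new target barely
moves, while the distance from the nearest of them keeps shrinking as
more starting points are available.

import random

ALPHABET = "ACGT"
L = 100   # an arbitrary sequence length for illustration

def random_seq():
    return "".join(random.choice(ALPHABET) for _ in range(L))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

new_mesa = random_seq()

for n_starts in (100, 1000, 10000):
    dists = [hamming(random_seq(), new_mesa) for _ in range(n_starts)]
    print(n_starts, "starting mesas --",
          "average distance:", round(sum(dists) / len(dists), 1),
          "| nearest distance:", min(dists))

The average hovers around 75 no matter how many mesas are occupied; the
nearest keeps dropping as the number of occupied mesas grows, and the
nearest is the only one that matters.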

> Certainly there may be clusters of
> mesas here and there, but on average, there will still be a wide
> distribution of mesas and clusters of mesas throughout sequence space
> at any given level. And, regardless of whether the mesas are more
> clustered or less clustered, the *average* distance between what is
> currently available and the next higher mesa will not be
> significantly affected.

No. It will be utterly irrelevant.

>> Let's say that each mesa top has a different basic *flavor* of ice
>> cream. Say that chocolate is a glycoside hydrolase that binds a
>> glucose-based glycoside. Now let's say that the environment
>> changes so that one no longer needs this glucose-based glycoside
>> (the mesa sinks down to the mean level) but now one needs a
>> galactose-based glycoside hydrolase.
>
>
> You have several problems here with your illustration. First off,
> both of these functions are very similar in type and use very similar
> sequences.

No kidding! Who would have thunk that blind evolution would choose to
evolve a lactase from a closely related sequence rather than from some
random or average sequence or from an alcohol dehydrogenase? Surely not
Sean.

> Also, their level of functional complexity is relatively
> low (like the 4 or 5 letter word level). Also, you must consider
> the likelihood that the environment would change so neatly that
> galactose would come just when glucose is leaving. Certainly if you
> could program the environment just right, in perfect sequence,
> evolution would be no problem.

A concentration gradient would suffice; it would provide environments
in which the original strain could still grow while leaving a new niche
open for any variant able to exploit it. The environment, of course,
only selects among existing variants, so the selectable change would
have to have already happened.

> But you must consider the likelihood
> that the environment will change in just the right way to make the
> next step in an evolutionary sequence beneficial when it wasn't
> before. The odds that such changes will happen in just the right way
> on both the molecular level and environmental level get exponentially
> lower and lower with each step up the ladder of functional
> complexity.

How does one calculate "functional complexity" so that one knows what
rung of the ladder one is talking about?

> What was so easy to evolve with functions requiring no
> more than a few hundred fairly specified amino acids at minimum, is
> much much more difficult to do when the level of specified complexity
> requires just a few thousand amino acids at minimum.

What do these numbers of amino acids mean with respect to "level of
specified complexity"? How does one determine that there are a "few
hundred fairly specified amino acids" required for a change in function?
Especially since function can change without changing *any* amino acids
(see the eye crystallins).

> It's the
> difference between evolving between 3-letter words and evolving
> between 20-letter phrases. What are the odds that one 20-letter
> phrase/mesa that worked well in one situation will sink down with a
> change in situations to be replaced by a new phrase of equal
> complexity that is actually beneficial? - Outside of intelligent
> design? That is the real question here.

Well, it would help if you would actually tell us what your meaningless,
gobbledygook, hand-waving terms mean and how they could be
operationally quantified.

>> Notice that the difference in need here is something more like
>> wanting chocolate with almonds than wanting even strawberry, much
>> less jalapeno or anchovy-flavored ice cream. The blind man on the
>> newly sunk mesa must keep walking, of course, but he is not
>> thousands of miles away from the newly risen mesa with chocolate
>> with almonds ice cream on top.
>
>
> He certainly may be extremely far away from the chocolate with
> almonds as well as every other new type of potentially beneficial ice
> cream depending upon the level of complexity that he happens to be at
> (i.e., the average density of ice-creams of any type in the sequence
> space at that level of complexity).

That is certainly counter-intuitive, and it runs counter to the evidence
that related functions tend to have related sequences (i.e., to fall
into gene families).

>> Changing from one glucose-based glycoside hydrolase to one with a
>> slightly different structure is not the same as going from
>> chocolate to jalapeno or fish-flavored ice cream. Not even the same
>> as going from chocolate to coffee. The "island" of chocolate with
>> almonds is *not* going to be way across the ocean from the "island"
>> of chocolate.
>
>
> Ok, lets say, for arguments sake, that the average density of
> ice-cream cones in a space of 1 million square miles is 1 cone per
> 100 square miles. Now, it just so happens that many of the cones are
> clustered together. There is the chocolate cluster with all the
> various types of chocolate cones all fairly close together. Then,
> there are the strawberry cones with all the variations on the
> strawberry theme pretty close together. Then, there is the . . .
> well, you get the point. The question is, does this clustering of
> certain types of ice creams help in traversing the gap between
> these clustered types of ice creams?

It certainly reduces the distance needed to go from chocolate to
chocolate with almonds. But why would anyone think that evolution works
by converting an alcohol dehydrogenase into a glycoside hydrolase rather
than by modifying one glycoside hydrolase into a different one?
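
A toy sketch of that intuition (the lengths, family sizes, and mutation
counts below are invented for illustration; "family A" and "family B"
are hypothetical sequence families, not real enzymes): a new variant of
one family sits a handful of substitutions from its nearest relative,
while an unrelated family is most of the sequence length away.

import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
L = 200   # a hypothetical protein length

def mutate(seq, k):
    """Return seq with k randomly chosen positions replaced."""
    s = list(seq)
    for i in random.sample(range(L), k):
        s[i] = random.choice(ALPHABET)
    return "".join(s)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

seed_A = "".join(random.choice(ALPHABET) for _ in range(L))   # "chocolate" family
seed_B = "".join(random.choice(ALPHABET) for _ in range(L))   # an unrelated family
family_A = [mutate(seed_A, 10) for _ in range(50)]            # existing A variants

new_variant = mutate(seed_A, 12)   # "chocolate with almonds"

print("nearest family-A member:",
      min(hamming(s, new_variant) for s in family_A), "substitutions away")
print("the unrelated family-B seed:",
      hamming(seed_B, new_variant), "substitutions away")

The short within-cluster hop is the route evolution actually gets to
take; the gulf between clusters is the distance your averages keep
measuring, and nobody proposes crossing it in one go.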

> No it doesn't. If anything,
> the clustering only makes the average gap between clusters wider.
> The question is,