Stacking the Deck

Sean Pitman

Jan 4, 2004, 11:25:02 AM
lmuc...@yahoo.com (RobinGoodfellow) wrote in message news:<81fa9bf3.04010...@posting.google.com>...
> seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.03123...@posting.google.com>...
>
> Good gravy! That was so wrong, it feels wrong to even use the word
> "wrong" to describe it. All I can recommend is that you run, don't
> walk, to your nearest college or university, and sign up as quickly as
> you can for a few math and/or statistics courses: I especially
> recommend courses in probability theory and stochastic modelling.
> With all due respect, Sean, I am beginning to see why the biologists
> and biochemists in this group are so frustrated with you: my
> background in those fields is fairly weak - enough to find your
> arguments unconvincing but not necessarily ridiculous - but if you are
> as weak with biochemistry as you are with statistical and
> computational problems, then I can see why knowledgeable people in
> those areas would cringe at your posts.

With all due respect, what is your area of professional training? I
mean, after reading your post I dare say that you are not only weak in
biology, but statistics as well. Certainly your numbers and
calculations are correct, but the logic behind your assumptions is
extraordinarily fanciful. You sure wouldn't get away with such
assumptions in any sort of peer reviewed medical journal or other
statistically based science journal - that's for sure. Of course, you
may have good success as a novelist . . .

> I'll try to address some of the mistakes you've made below, though I
> doubt that I can do much to dispel your misconceptions. Much of my
> reply will not even concern evolution in a real sense, since I wish to
> highlight and address the mathematical errors that you are making.

What you ended up doing is highlighting your misunderstanding of
probability as it applies to this situation as well as your amazing
faith in an extraordinary stacking of the deck which allows evolution
to work as you envision it working. Certainly, if evolution is true
then you must be correct in your views. However, if you are correct
in your views as stated then it would not be evolution via mindless
processes alone, but evolution via a brilliant intelligently designed
stacking of the deck.

> > RobinGoodfellow <lmuc...@yahoo.com> wrote in message news:<bsd7ue$r1c$1...@news01.cit.cornell.edu>...
>
> > > It is even worse than that. Even random walks starting at random points
> > > in N-dimensional space can, in theory, be used to sample the states
> > > with a desired property X (such as Sean's "beneficial sequences"), even
> > > if the number of such states is exponentially small compared to the
> > > total state space size.
> >
> > This depends upon just how exponentially small the number of
> > beneficial states is relative to the state space.
>
> No, it does not. If you take away anything from this discussion, it
> has to be this: the relative number of beneficial states has virtually
> no bearing on the amount of time a local search algorithm will need to
> find such a state.

LOL - You really don't have a clue how insane this statement is, do you?

> The things that *would* matter are the
> distribution of beneficial states through the state space, the types
> of steps the local search is allowed to take (and the probabilities
> associated with each step), and the starting point.

The distribution of states has very little if anything to do with how
much time it takes to find one of them on average. The starting point
certainly is important to initial success, but it also has very little
if anything to do with the average time needed to find more and more
beneficial functions within that same level of complexity. For
example, if all the beneficial states were clustered together in one
or two areas, the average starting point, if anything, would be
farther away than if these states were distributed more evenly
throughout the sequence space. So, this leaves the only really
relevant factor - the types of steps and the number of steps per unit
of time. That is the only really important factor in searching out
the state space - on average.

> For an extreme
> example, consider a space of strings consisting of length 1000, where
> each position can be occupied by one of 10 possible characters.

Ok. This would give you a state space of 10 to the power of 1000 or
1e1000. That is an absolutely enormous number.

> Suppose there are only two beneficial strings: ABC........, and
> BBC........ (where the dots correspond to the same characters). The
> allowed transitions between states are point mutations, that are
> equally probable for each position and each character from the
> alphabet. Suppose, furthermore, that we start at the beneficial state
> ABC. Then, the probability of a transition from ABC... to BBC... in a
> single mutation is 1/(10*1000) = 1/10000 (assuming self-loops - i.e.
> mutations that do not alter the string, are allowed).

You are good so far. But, you must ask yourself this question: What
are the odds that out of a sequence space of 1e1000 the only two
beneficial sequences with uniquely different functions will have a gap
between them of only 1 in 10,000? Crossing this tiny gap would
require a random walk of only 10,000 steps on average.
For a decent sized population, this could be done in just one
generation.
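
The 10,000-step figure is easy to check by simulation, for what it's
worth. Here is a minimal Python sketch of your two-sequence scenario
(the stand-in alphabet, the character mapping, and the random seed are
my own choices, purely for illustration):

    import random

    # Length-1000 strings over a 10-character alphabet; the two
    # beneficial strings differ only at position 0 ("0" standing in for
    # 'A', "1" for 'B'). Each trial applies one point mutation to the
    # start string and then restarts, so the per-trial success
    # probability is (1/1000) * (1/10) = 1/10000.
    ALPHABET = "0123456789"
    L = 1000

    def trials_until_hit(rng):
        trials = 0
        while True:
            trials += 1
            pos = rng.randrange(L)       # position to mutate, uniform
            char = rng.choice(ALPHABET)  # new character (self-loops allowed)
            if pos == 0 and char == "1": # the one mutation reaching BBC...
                return trials

    rng = random.Random(42)
    runs = [trials_until_hit(rng) for _ in range(200)]
    print(sum(runs) / len(runs))  # averages near 10,000 = 1/(1/10000)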

Don't you see the problem with this little scenario of yours?
Certainly this is a common mistake made by evolutionists, but it is
nonetheless a fallacy of logic. What you have done is assume that
the density of beneficial states is unimportant to the problem of
evolution since it is possible to have the beneficial states clustered
around your starting point. But such a close proximity of beneficial
states is highly unlikely. On average, the beneficial states will be
more widely distributed throughout the sequence space.

For example, say that there are 10 beneficial sequences in this
sequence space of 1e1000. Now say one of these 10 beneficial
sequences just happens to be one change away from your starting point
and so the gap is only a random walk of 10,000 steps as you calculated
above. However, on average, how long will it take to find any one of
the other 9 beneficial states? That is the real question. You rest
your faith in evolution on this inane notion that all of these states
will be clustered around your starting point. If they were, that
certainly would be a fabulous stroke of luck - like it was *designed*
that way. But, in real life, outside of intelligent design, such
strokes of luck are so remote as to be impossible for all practical
purposes. On average we would expect that the other nine sequences
would be separated from each other and our starting point by around
1e999 random walk steps/mutations (and, on average, it is reasonable
to expect around 900 differences between each of the 10 beneficial
sequences, since two random strings here differ at about 9 positions
in 10). So, even if a starting sequence did happen to
be so extraordinarily lucky to be just one positional change away from
one of the "winning" sequences, the odds are that this luck will not
hold up as well in the evolution of any of the other 9 "winning"
sequences this side of a practical eternity of time.
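
Incidentally, the ~900 figure above is just the expected Hamming
distance between two uniformly random strings in this space, 1000 x
(9/10) = 900. A quick Python sketch confirms it, under my assumption
that beneficial sequences fall uniformly at random in the space:

    import random

    # Average number of differing positions between two uniformly
    # random length-1000 strings over a 10-letter alphabet;
    # analytically 1000 * (9/10) = 900.
    rng = random.Random(0)
    L, S = 1000, 10

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    pairs = 500
    total = 0
    for _ in range(pairs):
        a = [rng.randrange(S) for _ in range(L)]
        b = [rng.randrange(S) for _ in range(L)]
        total += hamming(a, b)
    print(total / pairs)  # comes out near 900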

Real time experiments support this position rather nicely. For
example, a recent and very interesting paper was published by Lenski
et al., entitled "The Evolutionary Origin of Complex Features," in
the May 2003 issue of Nature. In this particular experiment the
researchers studied 50 different populations of 3,600 individuals
each. Each individual began with 50 lines of code and no
ability to perform "logic operations". Those that evolved the ability
to perform logic operations were rewarded, and the rewards were larger
for operations that were "more complex". After only 15,873 generations,
23 of the genomes yielded descendants capable of carrying out the most
complex logic operation: taking two inputs and determining if they are
equivalent (the "EQU" function).

In principle, 16 mutations (recombinations) coupled with the three
instructions that were present in the original digital ancestor could
have combined to produce an organism that was able to perform the
complex equivalence operation. According to the researchers themselves,
"Given the ancestral genome of length 50 and 26 possible instructions
at each site, there are ~5.6 x 10^70 genotypes [sequence space]; and
even this number underestimates the genotypic space because length
evolves."
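
That genotype-space figure checks out, by the way: 26 possible
instructions at each of 50 sites gives 26^50 combinations. Two lines
of Python:

    # 26 instruction choices at each of 50 sites, as quoted above
    print(26 ** 50)           # the exact count
    print(f"{26 ** 50:.2e}")  # ~5.61e+70, matching the paper's ~5.6 x 10^70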

Of course this sequence space was overcome in smaller steps. The
researchers arbitrarily defined 6 other sequences as beneficial (NAND,
AND, OR, NOR, XOR, and NOT functions). The average gap between these
pre-defined steppingstone sequences was 2.5 steps, translating into an
average search space between beneficial sequences of only 3,400 random
walk steps. Of course, with a population of 3,600 individuals, a
random walk of 3,400 steps will be covered in short order by
at least one member of that population. And, this is exactly what
happened. The average number of mutations required to cross the
16-step gap was only 103 per population.

Now that is lightning-fast evolution. Certainly if real life
evolution were actually based on this sort of setup then evolution of
novel functions at all levels of complexity would be a piece of cake.
Of course, this is where most descriptions of this most interesting
experiment stop. But, what the researchers did next is the most
important part of this experiment.

Interestingly enough, Lenski and the other scientists went on to set
up different environments to see which environments would support the
evolution of all the potentially beneficial functions - to include the
most complex EQU function. Consider the following description about
what happened when various intermediate steps were not arbitrarily
defined by the scientists as "beneficial".

"At the other extreme, 50 populations evolved in an environment where
only EQU was rewarded, and no simpler function yielded energy. We
expected that EQU would evolve much less often because selection would
not preserve the simpler functions that provide foundations to build
more complex features. Indeed, none of these populations evolved EQU,
a highly significant difference from the fraction that did so in the
reward-all environment (P = 4.3 x 10^-9, Fisher's exact test).
However, these populations tested more genotypes, on average, than did
those in the reward-all environment (2.15 x 10^7 versus 1.22 x 10^7;
P < 0.0001, Mann-Whitney test), because they tended to have smaller
genomes, faster generations, and thus turn over more quickly. However,
all populations explored only a tiny fraction of the total genotypic
space. Given the ancestral genome of length 50 and 26 possible
instructions at each site, there are ~5.6 x 10^70 genotypes; and even
this number underestimates the genotypic space because length
evolves."

Isn't that just fascinating? When the intermediate stepping stone
functions were removed, the neutral gap that was created successfully
blocked the evolution of the EQU function, which happened *not* to be
right next door to their starting point. Of course, this is only to
be expected based on statistical averages that go strongly against the
notion that very many possible starting points would just happen to be
very close to an EQU functional sequence in such a vast sequence
space.

Now, isn't this consistent with my predictions? This experiment was
successful because the intelligent designers were capable of defining
what sequences were "beneficial" for their evolving "organisms." If
enough sequences are defined as beneficial and they are placed in just
the right way, with the right number of spaces between them, then
certainly such a high ratio will result in rapid evolution - as we saw
here. However, when neutral non-defined gaps are present, they are a
real problem for evolution. In this case, a gap of just 16 neutral
mutations effectively blocked the evolution of the EQU function.

http://naturalselection.0catch.com/Files/computerevolution.html

> Thus, a random
> walk that restarts each time after the first step (or alternatively, a
> random walk performed by a large population of sequences, each
> starting at state ABC...) is expected to explore, on average, 10000
> states before finding the next beneficial sequence.

Yes, but you are failing to consider the likelihood that your "winning
sequence" will in fact be within these 10,000 steps on average.

> Now, below, we
> will apply your model to the same problem.

Oh, I can hardly wait!

> > It also depends
> > upon how fast this space is searched through. For example, if the
> > ratio of beneficial states to non-beneficial states is as high as say,
> > 1 in 1e12, and if 1e9 states are searched each second, how long will
> > it take, on average, to find a new beneficial state?
>
> OK. Let's take my example, instead, and apply your calculations.
> There are only 2 beneficial sequences, out of the state space of
> 1e1000 sequences.

Ok, I'm glad that you at least realize the size of the state space.

> Since the ratio of beneficial sequences to
> non-beneficial ones is (2/10^1000), if your "statistics" are correct,
> then I should be exploring 10^1000/2 states, on average, before
> finding the next beneficial state. That is a huge, huge, huge number.
> So why does my very simple random walk explore only 10,000 states,
> when the ratio of beneficial sequences is so small?

Yes, that is the real question and the answer is very simple - You
either got unbelievably lucky in the positioning of your start point
or your "beneficial" sequences were clustered by intelligent design.

> The answer is simple - the ratio of beneficial states does NOT matter!

Yes it does. You are ignoring the highly unlikely nature of your
scenario. Tell me, how often do you suppose your start point would
just happen to be so close to the only other beneficial sequence in
such a huge sequence space? Hmmmm? I find it just extraordinary that
you would even suggest such a thing as "likely" with all sincerity of
belief. The ratio of beneficial to non-beneficial in your
hypothetical scenario is absolutely minuscule and yet you still have
this amazing faith that the starting point will most likely be close
to the only other "winning" sequence in an absolutely enormous
sequence space?! Your logic here is truly mysterious and your faith
is most impressive. I'm sorry, but I just can't get into that boat
with you. You are simply beyond me.

> All that matters is their distribution, and how well a particular
> random walk is suited to explore this distribution.

Again, you must consider the odds that your "distribution" will be so
fortuitous as you seem to believe it will be. In fact, it has to be
this fortuitous in order to work. It basically has to be a set up for
success. The deck must be stacked in an extraordinary way in your
favor in order for your position to be tenable. If such a stacked
deck happened at your table in Las Vegas you would be asked to leave
the casino in short order or be arrested for "cheating" by intelligent
design since such deck stacking only happens via intelligent design.
Mindless processes cannot stack the deck like this. It is
statistically impossible - for all practical purposes.

> (Again, it is a
> gross, meaningless over-simplification to model evolution as a random
> walk over a frozen N-dimensional sequence space, but my point is that
> your calculations are wrong even for that relatively simple model.)

Come now Robin - who is trying to stack the deck artificially in their
own favor here? My calculations are not based on the assumption of a
stacked deck like your calculations are, but upon a more likely
distribution of beneficial sequences in sequence space. The fact of
the matter is that sequence space does indeed contain vastly more
absolutely non-beneficial sequences than it does those that are even
remotely beneficial. In fact, there is an entire theory called the
"Neutral Theory of Evolution". Of all mutations that occur in every
generation in, say, humans (around 200 to 300 per generation), the
large majority are completely "neutral", and those few that are
functional are almost always detrimental. This ratio of beneficial to
non-beneficial is truly small and gets exponentially smaller with each
step up the ladder of specified functional complexity. Truly,
evolution gets into very deep weeds very quickly beyond the lowest
levels of functional/informational complexity.

> > It will take
> > just over 1,000 seconds - a bit less than 20 minutes on average. But,
> > what happens if at higher levels of functional complexity the density
> > of beneficial functions decreases exponentially with each step up the
> > ladder? The rate of search stays the same, but the junk sequences
> > increase exponentially and so the time required to find the rarer and
> > rarer beneficial states also increases exponentially.
>
> The above is only true if you use the following search algorithm:
>
> 1. Generate a completely random N-character sequence
> 2. If the sequence is beneficial, say "OK";
> Otherwise, go to step 1.

Actually the above is also true if you start with a likely starting
point. A likely starting point will be an average distance away from
the next closest beneficial sequence. A random mutation to a sequence
that does not find the new beneficial sequence will not be selectable
as advantageous and a random walk will begin.

> For an alphabet of size S, where only k characters are "beneficial"
> for each position, the above search algorithm will indeed need to explore
> exponentially many states in N (on average, (S/k)^N), before finding a
> beneficial state. But, this analysis applies only to the above search
> algorithm - an extremely naive approach that resembles nothing that
> is going on in nature.

Oh really? How do you propose that nature gets around this problem?
How does nature stack the deck so that its starting point is so close
to all the beneficial sequences that otherwise have such a low density
in sequence space?
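
For concreteness, the generate-and-test procedure you quote is easy to
run with toy numbers. The parameters below are my own, chosen small so
the run actually finishes; they come from neither of our posts:

    import random

    # Alphabet size S = 4, k = 2 "beneficial" characters per position,
    # length N = 10, so the expected number of trials is (S/k)^N = 1024.
    S, k, N = 4, 2, 10
    BENEFICIAL = set(range(k))  # characters 0..k-1 count as "beneficial"

    def trials_until_beneficial(rng):
        trials = 0
        while True:
            trials += 1
            seq = [rng.randrange(S) for _ in range(N)]  # step 1: generate
            if all(c in BENEFICIAL for c in seq):       # step 2: test
                return trials

    rng = random.Random(1)
    runs = [trials_until_beneficial(rng) for _ in range(300)]
    print(sum(runs) / len(runs))  # averages near (4/2)**10 = 1024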

> The above algorithm isn't even a random walk
> per se, since random walks make local modifications to the current
> state, rather than generate entire states anew.

The random walk I am talking about does indeed make local
modifications to a current sequence. However, if you want to get from
the type of function produced by one state to a new type of function
produced by a different state/sequence, you will need to eventually
leave your first state and move onto the next across whatever neutral
gap there might be in the way. If a new function requires a sequence
that does not happen to be as fortuitously close to your starting
sequence as you like to imagine, then you might be in just a bit of a
pickle. Please though, do explain to me how it is so easy to get from
your current state, one random walk step at a time, to a new state
with a new type of function when the density of beneficial sequences
of the new type of function is extraordinarily low?

> A random walk
> starting at a given beneficial sequence, and allowing certain
> transitions from one sequence to another, would require a completely
> different type of analysis. In the analyses of most such search
> algorithms, the "ratio" of beneficial sequences would be irrelevant -
> it is their *distribution* that would determine how well such an
> algorithm would perform.

The most likely distribution of beneficial sequences is determined by
their density/ratio. You cannot simply assume that the deck will be
so fantastically stacked in the favor of your neat little evolutionary
scenario. I mean really, if the deck was stacked like this with lots
of beneficial sequences neatly clustered around your starting point,
evolution would happen very quickly. Of course, there have been those
who propose the "Baby Bear Hypothesis". That is, the clustering is
"just right" so that the theory of evolution works. That is the best
you can hope for. Against all odds the deck was stacked just right so
that we can still believe in evolution. Well, if this were the case
then it would still be evolution by design. Mindless processes just
can't stack the deck like you are proposing.

> My example above demonstrates a problem
> where the ratio of beneficial states is extremely tiny, yet the
> search finds a new beneficial state relatively quickly.

Yes - because you stacked the deck in your favor via deliberate
design. You did not even try to explain the likelihood of this
scenario in real life. How do you propose that this is even a remote
reflection of what mindless processes are capable of? I'm talking
average probabilities here while you are talking about extraordinarily
unlikely scenarios that are basically impossible outside of deliberate
design.

> I could also
> very easily construct an example where the ratio is nearly one, yet a
> random walk starting at a given beneficial sequence would stall with a
> very high probability.

Oh really? You can construct a scenario where all sequences are
beneficial and yet evolution cannot evolve a new one? Come on now . .
. now you're just being silly. But I certainly would like to see you
try and set up such a scenario. I think it would be most
entertaining.

> In other words, Sean, your calculations are
> irrelevant for the kind of problem you are trying to analyze.

Only if you want to bury your head in the sand and force yourself to
believe in the fairytale scenarios that you are trying to float.

> If you
> wish to model evolution as a random walk of point mutations on a
> frozen N-dimensional sequence space, you will need to apply a totally
> different statististical analysis: one that takes into account the
> distributions of known "beneficial" sequences in sequence space. And
> then I'll tell you why that model too is so wrong as to be totally
> irrelevant.

And if you wish to model evolution as a walk between tight clusters of
beneficial sequences in an otherwise extraordinarily low density
sequence space, then I have some oceanfront property in Arizona to
sell you at a great price.

Until then, this is all I have time for today.

> Cheers,
> RobinGoodfellow.

Sean
www.naturalselection.0catch.com

"Rev Dr" Lenny Flank

Jan 4, 2004, 1:48:24 PM
Sean Pitman wrote:


>
> Until then, this is all I have time for today.


Hey doc, when will you have time to tell us what the scientific theory
of intelligent design is --- what does the designer do, specifically,
what mechanisms does it use to do it, and where can we see these
mechanisms in operation today? And what indicates there is only one
designer and not, say, ten or fifty of them all working together?

After that, can you find the time to explain to me how ID "theory" is
any less "materialist" or "naturalist" or "atheist" than is evolutionary
biology, since ID "theory" not only does NOT hypothesize the existence
of any supernatural entities or actions, but specifically states that
the "intelligent designer" might be nothing but a space alien.

And after THAT, could you find the time to tell us how you apply
anything other than "naturalism" or "materialism" to your medical
practice? What non-naturalistic cures do you recommend for your
patients, doctor?

I do understand that you won't answer, doc. That's OK. The questions
make their point -- with you or without you.

===============================================
Lenny Flank
"There are no loose threads in the web of life"

Creation "Science" Debunked:
http://www.geocities.com/lflank

DebunkCreation Email list:
http://www.groups.yahoo.com/group/DebunkCreation


RobinGoodfellow

Jan 4, 2004, 10:53:13 PM
I've already responded to this same post in a different thread. See:

http://groups.google.com/groups?dq=&hl=en&lr=&ie=UTF-8&threadm=3FF89BDA.EB18D013%40indiana.edu&prev=/groups%3Fdq%3D%26num%3D25%26hl%3Den%26lr%3D%26ie%3DUTF-8%26group%3Dtalk.origins%26start%3D50
or
http://makeashorterlink.com/?C309615F6

Incidentally, I'll be leaving for a much-needed vacation in a couple
of days, and expect that other commitments will force me to return to
lurkdom for a while afterwards. So I apologize in advance for leaving
these two threads hanging, though I look forward to reading your
replies.

Cheers,
Robin.


seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04010...@posting.google.com>...

Sean Pitman

Jan 14, 2004, 6:02:24 AM
RobinGoodfellow <lmuc...@yahoo.com> wrote in message news:<bt8i6p$r9h$1...@news01.cit.cornell.edu>...

> > Sean Pitman wrote:
> >
> > With all due respect, what is your area of professional training? I
> > mean, after reading your post I dare say that you are not only weak in
> > biology, but statistics as well. Certainly your numbers and
> > calculations are correct, but the logic behind your assumptions is
> > extraordinarily fanciful. You sure wouldn't get away with such
> > assumptions in any sort of peer reviewed medical journal or other
> > statistically based science journal - that's for sure. Of course, you
> > may have good success as a novelist . . .
>
> Tsk, tsk... I thank you for the career advice. I'll keep it in mind,
> should my current stint in computer science fall through. I wouldn't go
> so far as to say that Monte-Carlo methods are my specialty, but I will
> say that my own research and the research of half my colleagues would be
> non-existent if they worked the way you think they do.

Hmmmm, so what has your research shown? I've seen nothing from the
computer science front that shows how anything new, such as a new
software program, beyond the lowest levels of functional complexity
can be produced by computers without the input of an intelligent mind.
Your outlandish claims about the results of the research done so far,
such as the Lenski experiments, are just over the top. They don't
demonstrate anything even close to what you claim they demonstrate
(See Below).

> >>I'll try to address some of the mistakes you've made below, though I
> >>doubt that I can do much to dispel your misconceptions. Much of my
> >>reply will not even concern evolution in a real sense, since I wish to
> >>highlight and address the mathematical errors that you are making.
> >
> > What you ended up doing is highlighting your misunderstanding of
> > probability as it applies to this situation as well as your amazing
> > faith in an extraordinary stacking of the deck which allows evolution
> > to work as you envision it working. Certainly, if evolution is true
> > then you must be correct in your views. However, if you are correct
> > in your views as stated then it would not be evolution via mindless
> > processes alone, but evolution via a brilliant intelligently designed
> > stacking of the deck.
>

> Exactly what views did I state, Sean? Other than that your calculations
> are, to put it plainly, irrelevant. Not even wrong - just irrelevant.
>
> Yes, the example I give below incredibly stacks the deck in my favor.
> It ought to. It is what is called a "counter-example". It falsifies
> the hypothesis that your "model" of evolution is correct. Now aren't
> you glad you proposed something falsifiable?

Come again? How does your stacking the deck via the use of
intelligent design, since there is no other logical way to stack the
deck so that your scenario will actually work, disprove my position?
My hypothesis is dependent on the far more likely scenario that the
deck is not stacked as you suggest, but is in fact much more random
than you seem to think it is. Certainly the ONLY way evolution could
work is if the deck was stacked, but then this would be easily
detected as evidence of intelligent design, not the normal
understanding of evolution as a mindless non-directed process.

> > This distribution of states has very little if anything to do with how
> > much time it takes to find one of them on average. The starting point
> > certainly is important to initial success, but it also has very little
> > if anything to do with the average time needed to find more and more
> > beneficial functions within that same level of complexity.
>

> Except in every real example of a working Monte-Carlo procedure, where
> the distribution and starting point have *everything* to do with
> whether such a procedure is successful or not.

You mean that the stacking of the deck has everything to do with
whether or not an "evolutionary" scenario will succeed. Certainly
this would be true, but such a stacking of the deck has no resemblance
to reality. You must ask yourself about the likelihood that one will
find such a stacked deck in real life outside of intelligent design . . .

> > For
> > example, if all the beneficial states were clustered together in one
> > or two areas, the average starting point, if anything, would be
> > farther away than if these states were distributed more evenly
> > throughout the sequence space. So, this leaves the only really
> > relevant factor - the types of steps and the number of steps per unit
> > of time. That is the only really important factor in searching out
> > the state space - on average.
>

> *Sigh*. The problem is that the model *you* are proposing (one I think
> is silly) is of a random walk on a specific frozen sequence space
> with beneficial sequences as points in that space. It does not deal
> with an "average" distribution, and an "average" starting point, but
> with one very specific distribution of beneficial sequences and one very
> specific starting point.

Consider the scenario where there are 10 ice cream cones on the
continental USA. The goal is for a blind man to find as many as he
can in a million years. It seems that what you are suggesting is that
the blind man should expect that the ice cream cones will all be
clustered together and that this cluster will be within arm's reach of
where he happens to start his search. This is simply a ludicrous
notion outside of intelligent design. My hypothesis, on the other
hand, suggests that these 10 ice cream cones will have a more random
distribution with hundreds of miles separating each one, on average.
An average starting point of the blind man may, by a marvelous stroke
of luck, place him right beside one of the 10 cones. However, after
finding this first cone, how long, on average, will it take him to
find any of the other 9 cones? That is the question here. The very
low density of ice cream cones translates into a marked increase in
the average time required to find them. Now, if there were billions
upon billions of ice cream cones all stuffed into this same area, then
one could reasonably expect that they would be separated by a much
closer average distance - say just a couple of feet. With such a high
density, the average time needed for the blind man to find another ice
cream cone would be just a few seconds.
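
If the blind man's search is modeled as blind uniform sampling (a
simplifying assumption on my part - a true spatial walk differs in
detail, though the density dependence is the same), the point shows up
plainly in a short Python sketch:

    import random

    # T targets scattered among M cells; the expected number of blind
    # probes needed to hit any target is roughly M / T, so cutting the
    # density by ten multiplies the average search time by ten.
    rng = random.Random(7)

    def probes_to_find(M, T):
        targets = set(rng.sample(range(M), T))
        probes = 0
        while True:
            probes += 1
            if rng.randrange(M) in targets:
                return probes

    for M, T in [(10_000, 10), (10_000, 100), (10_000, 1000)]:
        runs = [probes_to_find(M, T) for _ in range(300)]
        print(M, T, sum(runs) / len(runs))  # near 1000, 100, 10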

So, whose position is more likely? Your notion that the density of
beneficial sequences in sequence space doesn't matter or my notion
that density does matter? Is your hypothetical situation where a low
density of beneficial states is clustered around a given starting
point really valid outside of intelligent design? If so, name a
non-designed situation where such an unlikely phenomenon has ever been
observed to occur . . .

> You cannot simply assume an "average"
> distribution in the absence of background information: you have to find
> out precisely the kind of distribution you are dealing with. And even
> if you do find that the distribution is "stacked", it does not imply
> that an intelligence was involved.

Oh really? You think that stacking the deck as you have done can
happen mindlessly in less than zillions of years of average time?
Come on now! What planet are you from?

> The stacking could occur due to the
> constraints imposed by the very definition of the problem: in the case
> of evolution, by the physical constraints governing the interactions
> between the molecules involved in biological systems.

Oh, so the physical laws of atoms and molecules force them to
self-assemble into functionally complex systems? Now you are
really reaching. Tell me why the physical constraints of these
molecular machines force all beneficial possibilities to be so close
together? This is simply the most ludicrous notion that I have heard
in a very long time. You would really do well in Vegas with that one!
Try telling them, when they come to arrest you for cheating, that the
deck was stacked because of the physical constraints of the playing
cards.

> In fact, why
> would you expect that the regular and highly predictable physical laws
> governing biochemical reactions would produce a random, "average"
> distribution of "beneficial sequences"?

Because, I don't know of any requirement for them to be clustered
outside of deliberate design - do you? I can see nothing special
about the building blocks that make up living things that would cause
the potentially beneficial systems found in living things to have to
be clustered (just like there is nothing inherent in playing cards
that would cause them to stack themselves in any particular order).
However, if you know of a reason why the physical nature of the
building blocks of life would force them to cluster together despite
having a low density in sequence space, please, do share it with me.
Certainly none of your computer examples have been able to demonstrate
such a necessity. Why then would you expect such a forced clustering
in the potentially beneficial states of living things?

> >>For an extreme
> >>example, consider a space of strings consisting of length 1000, where
> >>each position can be occupied by one of 10 possible characters.
>

> Note, I wrote, "extreme example". My point was *not* to invent a
> distribution which makes it likely for evolution to occur (this example
> has about as much to do with evolution as ballet does with quantum
> mechanics), but to show how inadequate your methods are.

Actually, this situation has a lot to do with evolution and is the
real reason why evolution is such a ludicrous idea. What your
illustration shows is that only if the deck is stacked in a most
unlikely way will evolution have the remotest possibility of working.
That is what I am trying to show and you demonstrated this very
nicely. Unwittingly it is you who effectively show just how
inadequate evolutionary methods are at making much of anything outside
of an intelligently designed stacking of the deck.



> >>Suppose there are only two beneficial strings: ABC........, and
> >>BBC........ (where the dots correspond to the same characters). The
> >>allowed transitions between states are point mutations, that are
> >>equally probable for each position and each character from the
> >>alphabet. Suppose, furthermore, that we start at the beneficial state
> >>ABC. Then, the probability of a transition from ABC... to BBC... in a
> >>single mutation is 1/(10*1000) = 1/10000 (assuming self-loops - i.e.
> >>mutations that do not alter the string, are allowed).
> >
> >
> > You are good so far. But, you must ask yourself this question: What
> > are the odds that out of a sequence space of 1e1000 the only two
> > beneficial sequences with uniquely different functions will have a gap
> > between them of only 1 in 10,000?
>

> Mind-numbingly low. 1000*.9*.1^999, to be precise. But that is not the
> point.

Actually, this is precisely the point. What you are basically saying
is that if there were only one ice cream cone in the entire universe
that it could be easily found if the starting point of the blind man's
search just so happened to be an arm's reach away from the cone. That
is what you are saying, is it not?
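
Your figure itself is consistent, I will grant: the chance that a
uniformly random second sequence sits exactly one point mutation away
is 1000 x 9 / 10^1000, the same number. Checking with exact arithmetic
in Python (ordinary floats underflow at these magnitudes):

    from fractions import Fraction

    # 9,000 strings lie at Hamming distance exactly 1, out of 10^1000
    direct = Fraction(1000 * 9, 10 ** 1000)
    robin = 1000 * Fraction(9, 10) * Fraction(1, 10) ** 999
    print(direct == robin)  # True: both are 9e-997, too small for a float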



> > Don't you see the problem with this little scenario of yours?
> > Certainly this is a common mistake made by evolutionists, but it is
> > nonetheless a fallacy of logic. What you have done is assume that
> > the density of beneficial states is unimportant to the problem of
> > evolution since it is possible to have the beneficial states clustered
> > around your starting point. But such a close proximity of beneficial
> > states is highly unlikely. On average, the beneficial states will be
> > more widely distributed throughout the sequence space.
>

> On average, yes.

On average yes?! How can you say this and yet disagree with my
conclusions?

> But didn't you just say above that the distribution
> of the sequences is irrelevant? That all that matters is "ratio" of
> beneficial sequences?

It is only by determining the ratio of beneficial sequences that you
can obtain a reasonable idea about the likely distribution of these
sequences around any particular starting point. You commit a huge
fallacy of logic in assuming that by some magical means the
distribution could be just right even though the density is truly
minuscule (like finding one atom in zillions of universes the size of
ours).

> (Incidentally, "ratio" and "density" are not
> identical. The distribution I showed you has a relatively high density
> of beneficial sequences, despite a low ratio.)

You are talking local "density", which, in your scenario, also has a
locally high "ratio". I, on the other hand, was talking about the
total ratio and density of the potential space taken as a whole.
Really, you are very much mistaken to suggest that the ratio and
density of a state in question per the same unit of state space are
not equivalent.

> > For example, say that there are 10 beneficial sequences in this
> > sequence space of 1e1000. Now say one of these 10 beneficial
> > sequences just happens to be one change away from your starting point
> > and so the gap is only a random walk of 10,000 steps as you calculated
> > above. However, on average, how long will it take to find any one of
> > the other 9 beneficial states? That is the real question. You rest
> > your faith in evolution on this inane notion that all of these states
> > will be clustered around your starting point. If they were, that
> > certainly would be a fabulous stroke of luck - like it was *designed*
> > that way. But, in real life, outside of intelligent design, such
> > strokes of luck are so remote as to be impossible for all practical
> > purposes. On average we would expect that the other nine sequences
> > would be separated from each other and our starting point by around
> > 1e999 random walk steps/mutations (and, on average, it is reasonable
> > to expect around 900 differences between each of the 10 beneficial
> > sequences, since two random strings here differ at about 9 positions
> > in 10). So, even if a starting sequence did happen to
> > be so extraordinarily lucky to be just one positional change away from
> > one of the "winning" sequences, the odds are that this luck will not
> > hold up as well in the evolution of any of the other 9 "winning"
> > sequences this side of a practical eternity of time.
>

> Unless, of course, it follows from the properties of the problem that
> the other 9 beneficial sequences must be close to the starting sequence.

And I am sure you have some way to explain why these 9 other
beneficial sequences would have to be close together outside of
deliberate design? What "properties" of the problem would force such
a low density of novel beneficial states to be so clustered? I see
absolutely no reason to suggest such a necessity. Certainly such a
necessity must be true if evolution is true, but if no reasonable
naturalistic explanation can be given, why should I simply assume such
a necessity? Upon what basis do you make this claim?

> > Real time experiments support this position rather nicely. For
> > example, a recent and very interesting paper was published by Lenski
> > et al., entitled "The Evolutionary Origin of Complex Features," in
> > the May 2003 issue of Nature. In this particular experiment the
> > researchers studied 50 different populations of 3,600 individuals
> > each. Each individual began with 50 lines of code and no
> > ability to perform "logic operations". Those that evolved the ability
> > to perform logic operations were rewarded, and the rewards were larger
> > for operations that were "more complex". After only 15,873 generations,
> > 23 of the genomes yielded descendants capable of carrying out the most
> > complex logic operation: taking two inputs and determining if they are
> > equivalent (the "EQU" function).
>

> I've already covered how you've completely misinterpreted Lenski's
> research in the other post. But let's run with this for a bit:

Let's . . . Oh, and if you would give a link to where you "covered" my
"misinterpretation", that would be appreciated.

> > In principle, 16 mutations (recombinations) coupled with the three
> > instructions that were present in the original digital ancestor could
> > have combined to produce an organism that was able to perform the
> > complex equivalence operation. According to the researchers themselves,
> > "Given the ancestral genome of length 50 and 26 possible instructions
> > at each site, there are ~5.6 x 10^70 genotypes [sequence space]; and
> > even this number underestimates the genotypic space because length
> > evolves."
> >
> > Of course this sequence space was overcome in smaller steps. The
> > researchers arbitrarily defined 6 other sequences as beneficial (NAND,
> > AND, OR, NOR, XOR, and NOT functions).
>

> As a minor quibble, I believe they actually started with NAND (you need
> it for all the other functions). But I could be wrong - I read that
> paper months ago.

You are correct. The fact is though that the NAND starting point was
defined as beneficial and it was not made up of random sequences of
computer code. It was all set up very specifically so that certain
recombinations of code (point mutations were not primarily used,
though they did happen on occasion during recombination events), would
yield certain types of other pre-determined coded functions.

> > those in the reward-all environment (2.15 x 10^7 versus 1.22 x 10^7;
> > P < 0.0001, Mann-Whitney test), because they tended to have smaller
> > genomes, faster generations, and thus turn over more quickly. However,
> > all populations explored only a tiny fraction of the total genotypic
> > space. Given the ancestral genome of length 50 and 26 possible
> > instructions at each site, there are ~5.6 x 10^70 genotypes; and even
> > this number underestimates the genotypic space because length
> > evolves."
>

> And after years of painstaking research, Sean finally invents the wheel.
> Yes, evolution does not pop complex systems out of thin air, but
> constructs through integration and co-optation of simpler functional
> components. Move along, folks, nothing to see here!

What this shows is that if the "simpler" components aren't defined as
"beneficial" then a system of somewhat higher complexity will not
evolve at all - period - even given zillions of years of time. Truly,
this means that there really isn't anything to see here. Nothing
evolves without the deck being stacked by intelligent design. That is
all this Lenski experiment showed.

> > Isn't that just fascinating? When the intermediate stepping stone
> > functions were removed, the neutral gap that was created successfully
> > blocked the evolution of the EQU function, which happened *not* to be
> > right next door to their starting point. Of course, this is only to
> > be expected based on statistical averages that go strongly against the
> > notion that very many possible starting points would just happen to be
> > very close to an EQU functional sequence in such a vast sequence
> > space.
>

> Here's a question for you. There were only 5 beneficial functions in
> that big old sequence space of yours.

Actually, including the starting and ending points, there were 7
defined beneficial sequences in this sequence space (NAND, AND, OR,
NOR, XOR, NOT, and EQU functions).

> They are all very standard
> Boolean functions: in no way were they specifically designed by Lenski
> et al. to ease the way into evolving the EQ functions.

Actually, they very much were designed by Lenski et al. to ease the
way along the path to the EQU sequence. The original code was set up
with very specific lines of code that could, when certain
recombinations occurred, give rise to each of these logic functions.
The lines of code were not random lines of code and they were not all
needed to be as they were for the original NAND function to operate.
In fact the researchers knew the approximate rate of evolution that
would be expected ahead of time based on their programming of the
coded sequences, the rate of recombination of these sequences, the
size of the sequence space and the distance between each step along
the pathway. It really was a very nice setup for success. Read the
paper again and you will see that this is true.

> How come
> they were all sufficiently close in sequence space to one another, when
> according to you such a thing is so highly improbable?

Because they were designed to be close together deliberately. The
deck was stacked on purpose. I mean really, you can't be suggesting
that these 7 beneficial states just happened to be clustered together
in a state space of 1e70 by the mindless restriction of the program, do
you? The program was set up with the restrictions stacked in a
particular way so that only these 7 states could evolve and that each
subsequent state was just a couple of steps away from the current
state. No other function was set up to evolve, so no other novel
function evolved. These lines of code did not get together and make a
calculator program or a photo-editing program, or even a simple
program to open the CD player. That should tell you something . . .
This Lenski experiment was *designed* to succeed like it did. Without
such input of intelligent deck stacking, it never would have worked
like it did.

> > Now, isn't this consistent with my predictions? This experiment was
> > successful because the intelligent designers were capable of defining
> > what sequences were "beneficial" for their evolving "organisms." If
> > enough sequences are defined as beneficial and they are placed in just
> > the right way, with the right number of spaces between them, then
> > certainly such a high ratio will result in rapid evolution - as we saw
> > here. However, when neutral non-defined gaps are present, they are a
> > real problem for evolution. In this case, a gap of just 16 neutral
> > mutations effectively blocked the evolution of the EQU function.
>

> You are not even close. Lenski et al. didn't define which *sequences*
> were "beneficial".

Yes, they did exactly that. Read the paper again. They arbitrarily
wrote the code in a meaningful way for the starting lines as well as
arbitrarily defined which recombinations would be "beneficial". They
say it in exactly that way. They absolutely say that they defined
what was and what was not "beneficial".

> They didn't even design functions to serve
> specifically as stepping stones in the evolutionary pathways of EQ.

Yes they did in that they wrote the original code so that it would be
possible to form such pre-defined "beneficial" codes in a series of
recombinations of lines of code.

> What they have done is to name some functions of intermediate complexity
> that might be beneficial to the organism.

You obviously either haven't read the original paper or you don't
understand what it said. The researchers openly admit to arbitrarily
defining the "intermediate" states as beneficial. This fact is only
proven because they went on to remove the "beneficial" definition from
these intermediate states. Without this arbitrary assignment of
beneficial to the intermediate states, the EQU state did not evolve.
Go back and read the paper again. It was the researchers who defined
the states. The states themselves obviously didn't have inherent
benefits in the "world" that they were evolving in outside of the
researcher's definitions for them.

> They certainly did not tell
> their program how to reach these functions, or what the systems
> performing these functions might look like, but simply indicated that
> there are functions at varying levels of complexity that might be useful
> to an organism in its environment.

Wrong again. They did in fact tell their program exactly which
states, specifically, to reward and how to reward them if present.
They told the program exactly what they would look like ahead of time
so that they would be recognized and treated as beneficial when they
arrived on the scene.

You really don't seem like you have a clue how this experiment was
done. I really don't understand how you can make such statements as
this if you had actually read the paper.

> Thus, they have demonstrated exactly
> what they set out to: that in evolution, complex functional features are
> acquired through co-optation and modification of simpler ones.

They did nothing of the sort. All they did was show that stacking the
deck by intelligent design really does work. The problem is that
evolution is supposed to work to create incredible diversity and
informational complexity without any intelligent intervention having
ever been required. So, you evolutionists are back to ground zero.
There simply is no evolution, outside of intelligent design, beyond
the lowest levels of functional/informational complexity.

<snip>


> >>(Again, it is a
> >>gross, meaningless over-simplification to model evolution as a random
> >>walk over a frozen N-dimensional sequence space, but my point is that
> >>your calculations are wrong even for that relatively simple model.)
> >
> > Come now Robin - who is trying to stack the deck artificially in their
> > own favor here? My calculations are not based on the assumption of a
> > stacked deck like your calculations are, but upon a more likely
> > distribution of beneficial sequences in sequence space. The fact of
> > the matter is that sequence space does indeed contain vastly more
> > absolutely non-beneficial sequences than it does those that are even
> > remotely beneficial.
>

> Yes, but your calculations are based on the equally unfounded assumption
> that the deck is not stacked in any way, shape, or form. (That is, if
> the sequences were really distributed evenly in your frozen sequence
> space, then your probability calculation would still be off, but not by
> too much.)

Not by too much? Hmmmmm . . . So, you are saying that if the
sequence space were set up even close to the way in which I am
suggesting then my calculations would be pretty much correct? So,
unless the sequence space looks like you envision it looking, all nice
and neatly clustered around your pre-arranged starting point, then I
am basically right? So, either the deck is stacked pretty much like
you suggest or the deck is more randomly distributed like I suggest.
If it is stacked, then you are correct and evolution is saved. If the
deck is more randomly distributed like I suggest, then evolution is
false and should be discarded as untenable - correct?

Now where did I miss it? You said at the beginning that my
calculations were completely off base given my own position and that
you were going to correct my math. You said that I needed special
training in statistics. Now, how can my calculations be pretty much
on target given my hypothesis and yet I not know anything about
statistics?

> What makes you think that the laws of physics do not stack
> the deck sufficiently to make evolution possible?

More importantly, what makes you think that they do? I've never seen
a mindless process stack the deck like this, have you? Where are your
examples of mindless processes stacking the deck in such a way as you
suggest outside of the aid of intelligent design?

> You may feel that
> they can't: but in the meantime, you should be striving to find out what
> the actual distribution is, rather than assuming it is unstacked. (Not
> that this would make your model relevant, but it'll be a small step in
> the right direction.)

Actually, an unstacked deck would make my model very relevant indeed.
You admit as much yourself when you say that my calculations are
pretty much correct given that the hypothesis of an unstacked deck is
true. Now, the ball is in your court. It is so extremely
counterintuitive to me that the deck would be unstacked that such an
assertion demands equivalent evidence. Where do you see such deck
stacking outside of intelligent design? That is the real question
here.

> > In fact, there is an entire theory called the
> > "Neutral Theory of Evolution". Of all mutations that occur in every
> > generation in, say, humans (around 200 to 300 per generation), the
> > large majority are completely "neutral", and those few that are
> > functional are almost always detrimental. This ratio of beneficial to
> > non-beneficial is truly small and gets exponentially smaller with each
> > step up the ladder of specified functional complexity. Truly,
> > evolution gets into very deep weeds very quickly beyond the lowest
> > levels of functional/informational complexity.
>

> The fact that the vast majority of mutations are neutral does not imply
> that there exists any point where there is no opportunity for a
> beneficial mutation. And where such an opportunity presents itself,
> evolution will eventually find it, given large enough populations and
> sufficient times.

Yes, if by "sufficient time" you mean zillions of years - even for
extremely large populations.

> >>>It will take
> >>>just over 1,000 seconds - a bit less than 20 minutes on average. But,
> >>>what happens if at higher levels of functional complexity the density
> >>>of beneficial functions decreases exponentially with each step up the
> >>>ladder? The rate of search stays the same, but the junk sequences
> >>>increase exponentially and so the time required to find the rarer and
> >>>rarer beneficial states also increases exponentially.
> >>
> >>The above is only true if you use the following search algorithm:
> >>
> >> 1. Generate a completely random N-character sequence
> >> 2. If the sequence is beneficial, say "OK";
> >> Otherwise, go to step 1.
> >
> > Actually the above is also true if you start with a likely starting
> > point. A likely starting point will be an average distance away from
> > the next closest beneficial sequence. A random mutation to a sequence
> > that does not find the new beneficial sequence will not be selectable
> > as advantageous and a random walk will begin.
>

> Actually, your last paragraph will be approximately true only if all
> your "beneficial" points are uniformly spread out through your sequence
> space.

In other words, if they aren't stacked in some extraordinarily
fortuitous fashion?

> Even then, your probability calculation will be off by some
> orders of magnitude, since you will actually need to apply combinatorial
> formulas to compute these probabilities correctly. But, I suppose,
> it'll be close enough.

My calculations will not be off too far. And, even if they are off by
a few orders of magnitude, it doesn't matter compared to the numbers
involved. As you say, the rough estimates involved here are clearly,
"close enough" to get a very good idea of the problem. My math is not
"way off" as you originally indicated. If anything you have a
conceptual problem with my hypothesis, not my statistics/math. It
basically boils down to this: Either the deck was stacked by a
mindless or a mindful process. You have yet to provide any convincing
evidence that a mindless process can stack a deck the way it would have
to have been stacked for life forms to be as diverse and complex as they
are, outside of a lot of help from intelligent design.

<snip>


> >> I could also
> >>very easily construct an example where the ratio is nearly one, yet a
> >>random walk starting at a given beneficial sequence would stall with a
> >>very high probability.
> >
> > Oh really? You can construct a scenario where all sequences are
> > beneficial and yet evolution cannot evolve a new one? Come on now . .
> > . now you're just being silly. But I certainly would like to see you
> > try and set up such a scenario. I think it would be most
> > entertaining.
>

> I didn't say all sequences are beneficial, Sean. That *would* be silly.
> I did say that the ratio *approaches* one, but is not quite that.
> But, here you are:
>
> Same "sequence space" as before, but now a sequence is "beneficial" if
> it is AAAAAAAAAA......AAA (all A's), or it differs from AAAAA...AAA by
> at least 2 amino acids. All other sequences are *harmful* - if the
> random walk ever stumbles onto one, it will die off, and will need to
> return to its starting point. (This means there are exactly 1000*9 +
> (1000*999/2)*81 or about 4.05e7 harmful sequences, and 1e1000-4.05e7 or
> about 1e1000 beneficial sequences: that is, virtually every sequence is
> beneficial.) Again, the allowed transitions are point mutations, and
> the starting point is none other than AAAAAAA...AAA. Now, will this random
> walk ever find another beneficial sequence?

Your math here seems to be just a bit off. For example, if out of
1e1000 the number of beneficial sequences was 1e999, the ratio of
beneficial sequences would be 1 in 10. At this ratio, the average
distance to a new beneficial function would not be "two amino acid
changes away", but less than one amino acid change away. The ratio
created by "at least 2 amino acid changes" is less than 1 in 400, not
less than 1 in 10 like you suggest here.
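
The shell counts in the quoted example are plain combinatorics and easy
to check; which shells count as "harmful" is what the two posts dispute.
A quick sketch for strings of length 1000 over a 10-letter alphabet:

    from math import comb

    L, A = 1000, 10                    # string length, alphabet size

    d1 = L * (A - 1)                   # exactly 1 change from all-A's: 9,000
    d2 = comb(L, 2) * (A - 1) ** 2     # exactly 2 changes: 40,459,500
    print(f"{d1:,} at distance 1, {d2:,} at distance 2")
    print(f"combined: {d1 + d2:.3g} out of 1e1000 sequences")

Either count is utterly negligible against a space of 1e1000 sequences,
which is the point of the example: virtually every sequence is
"beneficial", yet a point-mutation walk from all-A's must first survive
the harmful shell.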

Also, even if all sequences less than 2 amino acid changes were
detrimental (which is very unlikely), an average bacterial colony of
100 billion or so individuals would cross this 2 amino acid gap in
short order since a colony this size would experience a double
mutation in a sequence this size in several members of its population
during the course of just one generation.
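
A hedged back-of-envelope for that last claim. The per-base mutation
rate below is an assumed ballpark figure (roughly 1e-9 per replication),
not something given in the thread, and the 1000-residue protein is
treated as about 3000 bases:

    from math import comb

    mu = 1e-9    # assumed per-base mutation rate per replication (ballpark)
    L  = 3000    # ~1000 codons treated as 3000 bases (an approximation)
    N  = 1e11    # colony size used in the post

    # Expected individuals per generation carrying ANY two mutations
    # somewhere in the region:
    print(f"any two changes in the gene: {comb(L, 2) * mu**2 * N:.2g}")

    # Expected individuals carrying two SPECIFIC base substitutions:
    print(f"two specific substitutions:  {(mu / 3)**2 * N:.2g}")

On these assumptions the colony sees a double mutant somewhere in the
gene every couple of generations or so, but waits on the order of 1e8
generations for one particular pair of changes; which figure matters
depends on how many distinct double mutants would count as crossing the
gap.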



> > And if you wish to model evolution as a walk between tight clusters of
> > beneficial sequences in an otherwise extraordinarily low density
> > sequence space, then I have some oceanfront property in Arizona to
> > sell you at a great price.
>

> If I did wish to model evolution this way, then I would gladly buy this
> property off your hands. And then sell it back to you at twice the
> price, because it would still be better than the model you propose.

LOL - Ok, you just keep thinking that way. But, until you have some
evidence to support your wishful thinking mindless stacking of the
deck hypothesis, what is there to make your position attractive or
even remotely logical?

> Cheers,
> RobinGoodfellow.

Sean
www.naturalselection.0catch.com

Chris Merli

unread,
Jan 14, 2004, 10:00:08 AM1/14/04
to

"Sean Pitman" <seanpi...@naturalselection.0catch.com> wrote in message
news:80d0c26f.04011...@posting.google.com...

But there is not one blind man looking; there are many, and only those close
enough to the cluster of cones in the first place are likely to succeed.

>
> So, whose position is more likely? Your notion that the density of
> beneficial sequences in sequence space doesn't matter or my notion
> that density does matter? Is your hypothetical situation where a low
> density of beneficial states is clustered around a given starting
> point really valid outside of intelligent design? If so, name a
> non-designed situation where such an unlikely phenomenon has ever been
> observed to occur . . .
>
> > You cannot simply assume an "average"
> > distribution in the absence of background information: you have to find
> > out precisely the kind of distribution you are dealing with. And even
> > if you do find that the distribution is "stacked", it does not imply
> > that an intelligence was involved.
>
> Oh really? You think that stacking the deck as you have done can
> happen mindlessly in less than zillions of years of average time?
> Come on now! What planet are you from?

Let's talk clusters. How many point mutations of a protein are in fact still
functional? This tends to create a cluster all of its own. Given this fact,
the idea that they are spread evenly across the landscape is just not true.

howard hershey

unread,
Jan 14, 2004, 2:52:27 PM1/14/04
to

Sean Pitman wrote:

Except that is NOT what evolution does. Evolution starts with an
organism with pre-existing sequences that produce products and interact
with environmental chemicals in ways that are useful to the organism's
reproduction. The situation is more like 10,000 blind men in a varying
topography who blindly follow simple and dumb rules of the game to find
useful things (ice cream at the tops of fitness peaks): Up is good. Down
is bad. Flat is neither good nor bad. Keep walking in all cases. It
would not take too long for these 10,000 blind men to be found in
decidedly non-random places (the high mesas of functional utility where
they are wandering around the flat tops if you haven't guessed). And
the ice cream cones (the useful functions), remember, are not randomly
distributed either. They are specifically at the tops of these mesas as
well. That is what a fitness landscape looks like.

If this topography of utility only changed slowly, at any given time it
would appear utterly amazing to Sean that the blind men will all be
found at these local high points or optimal states (on the mesas, licking
the ice cream cones) rather than being randomly scattered around
the entire surface. They reached these high points (with the ice cream)
by following a simple dumb algorithm.

But you were wondering how something new could arise *after* the blind
men are already wandering around the mesas? The answer is that it
depends. They can't always do so. But remember that these pre-existing
mesas are not random places. They do something specific with local
utility. Let's say that each mesa top has a different basic *flavor* of
ice cream. Say that chocolate is a glycoside hydrolase that binds a
glucose-based glycoside. Now let's say that the environment changes so
that one no longer needs this glucose-based glycoside (the mesa sinks
down to the mean level) but now one needs a galactose-based glycoside
hydrolase. Notice that the difference in need here is something more
like wanting chocolate with almonds than wanting even strawberry, much
less jalapeno or anchovy-flavored ice cream. The blind man on the newly
sunk mesa must keep walking, of course, but he is not thousands of miles
away from the newly risen mesa with chocolate with almonds ice cream on
top. Changing from one glucose-based glycoside hydrolase to one with a
slightly different structure is not the same as going from chocolate to
jalapeno or fish-flavored ice cream. Not even the same as going from
chocolate to coffee. The "island" of chocolate with almonds is *not*
going to be way across the ocean from the "island" of chocolate. It will
be nearby where the blind man is. *And* because chocolate with almonds
is now the need, it will also be on the new local high mesa (relative to
the position of the blind man on the chocolate mesa). The blind man
need only follow the simple rules (Up good. Down bad. Neutral neutral.
> Keep walking.) and he has a good chance of reaching the 'new' local mesa
top quite often.

And remember that there is not just one blind man on one mesa in this
ocean of possible sequences. There are 10,000 already present on 10,000
different local mesas with even more flavors than the 31 that most ice
> cream stores offer. Your math always presupposes that whenever you need
to find, say, vanilla with cherry the one blind man starts in some
random site and walks in a completely random fashion (rather than by the
rules I pointed out) across half the universe of sequence space to reach
your pre-determined goal by pure dumb luck to find the perfect lick. My
presumption is that the successful search is almost always going to
start from the pre-existing mesa with the closest flavor to the new need
(or from a duplicate, which, as a duplicate, is often superfluous and
quickly erodes to ground level in terms of its utility). As mentioned,
these pre-existing mesas are not random pop-ups. They are at the most
useful places in sequence space from which to try to find near-by mesas
with closely-related biologically useful properties because they already
have biologically useful properties.

> It seems that what you are suggesting is that
> the blind man should expect that the ice cream cones will all be
> clustered together and that this cluster will be within arm's reach of
> where he happens to start his search. This is simply a ludicrous
> notion outside of intelligent design. My hypothesis, on the other
> hand, suggests that these 10 ice cream cones will have a more random
> distribution with hundreds of miles separating each one, on average.
> An average starting point of the blind man may, by a marvelous stroke
> of luck, place him right beside one of the 10 cones. However, after
> finding this first cone, how long, on average, will it take him to
> find any of the other 9 cones? That is the question here. The very
> low density of ice cream cones translates into a marked increase in
> the average time required to find them. Now, if there were billions
> upon billions of ice cream cones all stuffed into this same area, then
> one could reasonably expect that they would be separated by a much
> closer average distance - say just a couple of feet. With such a high
> density, the average time needed for the blind man to find another ice
> cream cone would be just a few seconds.
>
> So, whose position is more likely?

Your position is not wrong. It is simply irrelevant and unrelated to
reality.

> Your notion that the density of
> beneficial sequences in sequence space doesn't matter or my notion
> that density does matter?

All that matters is whether there is a pre-existing sequence close
enough to one that meets your requirement for being beneficial. And
pre-existing sequences in biological organisms are not random. And
there are more than one such sequence. The only one that matters is the
closest one.

> Is your hypothetical situation where a low
> density of beneficial states is clustered around a given starting
> point really valid outside of intelligent design? If so, name a
> non-designed situation where such an unlikely phenomenon has ever been
> observed to occur . . .

It seems to me that mountains often are found in clusters. That islands
are often found in clusters. And those are the metaphors we are using
for beneficial states. They (mountains, islands, and biologically
useful activities) occur in clusters because of causal reasons, not
random ones.

>>You cannot simply assume an "average"
>>distribution in the absence of background information: you have to find
>>out precisely the kind of distribution you are dealing with. And even
>>if you do find that the distribution is "stacked", it does not imply
>>that an intelligence was involved.
>
>
> Oh really? You think that stacking the deck as you have done can
> happen mindlessly in less than zillions of years of average time?
> Come on now! What planet are you from?

When you start with useful rather than random sequences in a
pre-existing organism, you are necessarily stacking the deck in a search
for other related useful sequences. Especially if the search were not
random (but followed the simple rules I gave to my blind man), did not
occur on a perfectly flat plane, and did not start with a search from
one random site but from many non-random partially useful sites. Only
the ones that *start* off close to the desired island/mountain have a
good chance of reaching a useful end point, but that is merely probability.

>>The stacking could occur due to the
>>constraints imposed by the very definition of the problem: in the case
>>of evolutions, by the physical constraints governing the interactions
>>between the molecules involved in biological systems.
>
>
> Oh, so the physical laws of atoms and molecules force them to
> self-assemble themselves in functionally complex systems?

As a matter of fact, it is indeed the physical laws of atoms and
molecules that cause the self-assembly of structures like flagella from
their component parts. There is no intelligent assembler of flagella in
bacteria. You keep confusing and confounding the self-assembly of
flagella (or ribosomes, or cilia, or mitochondrial spindles) in cells
with their evolutionary points of origin. Please use these terms correctly.

Just so you know, I suspect he was talking about the constraints
involved in the evolution, say, of a glycoside hydrolase. One of these
constraints being the ability to bind a specific glycoside. This
probably requires the presence of a binding cleft in the protein, thus
limiting the evolution of beta galactosidases to modifications of
molecules that have a cleft capable of binding the sugar galactose
linked through a betagalactoside linkage to another molecule. For
example, ebg or immunoglobulins (yep, that cleft can be modified to make
an immunoglobulin an effective lactase). The hard part in evolving a
lactase from an immunoglobulin is in having the right few amino acids
needed to weaken the bond to be hydrolyzed and in not having binding be
so tight that the products are not released.

> Now you are
> really reaching. Tell me why the physical constraints of these
> molecular machines force all beneficial possibilities to be so close
> together? This is simply the most ludicrous notion that I have heard
> in a very long time. You would really do well in Vegas with that one!
> Try telling them, when they come to arrest you for cheating, that the
> deck was stacked because of the physical constraints of the playing
> cards.

The above makes no sense at all as written when compared to reality. I
suspect that Sean misunderstood what Robin meant. Surely Sean must
realize that all the complex structures in cells self-assemble in these
cells because of simple chemical and physical affinities. There are no
little homunculi working on assembly lines in cells, willing to go on
strike for higher wages (MORE ATP!), etc. That would be carrying the
idea of intelligence involved in these processes a step too far.

>>In fact, why
>>would you expect that the regular and highly predictable physical laws
>>governing biochemical reactions would produce a random, "average"
>>distribution of "beneficial sequences"?
>

I wouldn't expect new beneficial sequences to be random. I would expect
new "beneficial sequences" to be close to one or more of the
pre-existing "beneficial sequences" in a cell. That is because the
'new' needs of a cell are most often going to involve molecules with
similarity to molecules that are already biologically relevant. That
is, I suspect that there will be clusters of 'beneficial' sequences.
Why do you think 'new' beneficial sequences are evenly spaced throughout
sequence space, but always very, very far away from any current sequence?

>
> Because, I don't know of any requirement for them to be clustered
> outside of deliberate design - do you? I can see nothing special
> about the building blocks that make up living things that would cause
> the potentially beneficial systems found in living things to have to
> be clustered (just like there is nothing inherent in playing cards
> that would cause them to stack themselves in any particular order).

I *do* expect to see clustering in useful sequences. And I *do* see it.
One regularly sees families of genes rather than genes with no
sequence similarity. For example, a big chunk of genes are very similar
as membrane-spanning proteins, but differ in the allosteric effector
that transduces an effect across the membrane in eucaryotes. I expect
to see things like the similarity in the TTSS proteins and flagellar
proteins rather than seeing completely different proteins. The reason I
*do* expect to see such clustering is because I think these features
arose by descent with modification rather than by a random walk from a
random starting point to an end that is unrelated to the starting point.
The reason I *do* see such clustering is because descent with
modification is how nature works to produce new proteins. The reason I
don't see complete randomness in new sequence is because your model of
evolution is a bogus strawman.

> However, if you know of a reason why the physical nature of the
> building blocks of life would force them to cluster together despite
> having a low density in sequence space, please, do share it with me.

Sequences of utility cluster together because they arose by common
descent and descent with modification rather than by random walks
through random sequence space from a random starting point.

> Certainly none of your computer examples have been able to demonstrate
> such a necessity. Why then would you expect such a forced clustering
> in the potentially beneficial states of living things?

Look at an evolutionary branching tree. You will see clustering of
exactly the type one sees in sequences. Not *just* similar. Exactly.

>>>>For an extreme
>>>>example, consider a space of strings consisting of length 1000, where
>>>>each position can be occupied by one of 10 possible characters.
>>
>>Note, I wrote, "extreme example". My point was *not* to invent a
>>distribution which makes it likely for evolution to occur (this example
>>has about as much to do with evolution as ballet does with quantum
>>mechanics), but to show how inadequate your methods are.
>
>
> Actually, this situation has a lot to do with evolution and is the
> real reason why evolution is such a ludicrous idea.

No, Sean. It has a lot to do with your bogus straw man of evolution.
It has nothing to do with reality.

> What your
> illustration shows is that only if the deck is stacked in a most
> unlikely way will evolution have the remotest possibility of working.
> That is what I am trying to show and you demonstrated this very
> nicely. Unwittingly it is you who effectively show just how
> inadequate evolutionary methods are at making much of anything outside
> of an intelligently designed stacking of the deck.

[Snip much more of little interest, since GIGO is GIGO whether it is
done in one paragraph or twenty]

Sean Pitman

unread,
Jan 14, 2004, 3:52:22 PM1/14/04
to
"Chris Merli" <clm...@insightbb.com> wrote in message news:<GTcNb.65427$xy6.124383@attbi_s02>...

> >
> > Consider the scenario where there are 10 ice cream cones on the
> > continental USA. The goal is for a blind man to find as many as he
> > can in a million years. It seems that what you are suggesting is that
> > the blind man should expect that the ice cream cones will all be
> > clustered together and that this cluster will be within arm's reach of
> > where he happens to start his search. This is simply a ludicrous
> > notion outside of intelligent design. My hypothesis, on the other
> > hand, suggests that these 10 ice cream cones will have a more random
> > distribution with hundreds of miles separating each one, on average.
> > An average starting point of the blind man may, by a marvelous stroke
> > of luck, place him right beside one of the 10 cones. However, after
> > finding this first cone, how long, on average, will it take him to
> > find any of the other 9 cones? That is the question here. The very
> > low density of ice cream cones translates into a marked increase in
> > the average time required to find them. Now, if there were billions
> > upon billions of ice cream cones all stuffed into this same area, then
> > one could reasonably expect that they would be separated by a much
> > closer average distance - say just a couple of feet. With such a high
> > density, the average time needed for the blind man to find another ice
> > cream cone would be just a few seconds.
>
> But there is not one blind man looking; there are many, and only those close
> enough to the cluster of cones in the first place are likely to succeed.

Exactly right. The problem is that increasing the number of blind men
searching only helps for a while, at the lowest levels of functional
complexity where the density of ice cream cones is the greatest.
However, with each step up the ladder of functional complexity, the
density of ice cream cones decreases in an exponential manner. In
order to keep up with this exponential decrease in average cone
density, the number of blind men has to increase exponentially in
order to find the rarer cones at the same rate. Very soon the
environment cannot support any more blind men and so they must
individually search out exponentially more and more sequence space, on
average, before success can be realized (i.e., a cone or cluster of
cones is found). For example, it can be visualized as stacked levels
of rooms. Each room has its own average density of ice cream cones.
The rooms on the lowest level have the highest density of ice cream
cones - say one cone every meter or so, on average. Moving up to the
next higher room the density decreases so that there is a cone every 2
meters or so. Then, in the next higher room, the density decreases to
a cone every 4 meters or so, on average. And, it goes from there.
After 30 or so steps up to higher levels, the cone density is 1 every
billion meters or so, on average.
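
The arithmetic of the rooms analogy is just repeated doubling, which a
two-line check makes concrete (the one-metre spacing of the lowest room
is taken as given):

    spacing = 1.0                     # metres between cones, lowest room
    for level in (0, 1, 2, 10, 20, 30):
        print(f"level {level:2d}: one cone every {spacing * 2**level:,.0f} m")

Thirty doublings multiply the spacing by 2**30, or a little over a
billion, matching the figure above.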

Are you starting to see the problem? What one blind man could find in
just a few seconds at the lowest levels, thousands of blind men cannot
find in thousands of years after just a few steps up into the higher
levels. Clustering doesn't help them out here. Because, on average,
the blind men just will not happen to start out close to a cluster of
cones. And, if they do happen to get so fortunate as to end up close
to a rare cluster, what are the odds that they will find another
cluster of cones within that same level? You must think about the
*average* time involved, not the unlikely scenario that finding one
cluster solves all problems. Clustering, contrary to what many have
suggested, does not increase the average density of beneficial states
at a particular level of sequence space. This means that clustering
does not decrease the average time required to find a new ice cream
cone. In fact, if anything, clustering would increase the average
time required to find a new ice cream cone.
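
That clustering claim can be put to a toy test. The sketch below is a
one-dimensional caricature (a ring of sites, single-step walkers, random
starting points), not a model of sequence space, and the random starting
point is itself exactly the assumption the other posters dispute:

    import random

    def mean_hitting_time(targets, size=1000, walkers=50):
        # Mean steps for an unbiased walk on a ring of `size` sites,
        # started at a random site, to reach any target site.
        targets = set(targets)
        total = 0
        for _ in range(walkers):
            pos = random.randrange(size)
            while pos not in targets:
                pos = (pos + random.choice((-1, 1))) % size
                total += 1
        return total / walkers

    k, size = 20, 1000                               # density fixed: 1 in 50
    spread  = [i * (size // k) for i in range(k)]    # evenly spaced targets
    cluster = list(range(k))                         # one tight cluster
    print("evenly spread:", mean_hitting_time(spread))
    print("one cluster:  ", mean_hitting_time(cluster))

With the overall density held fixed, the evenly spread targets are
reached in a few hundred steps on average while the single cluster takes
tens of thousands, which is the effect described above; whether real
searches start at random points is the separate question the replies
keep raising.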

> > > You cannot simply assume an "average"
> > > distribution in the absence of background information: you have to find
> > > out precisely the kind of distribution you are dealing with. And even
> > > if you do find that the distribution is "stacked", it does not imply
> > > that an intelligence was involved.
> >
> > Oh really? You think that stacking the deck as you have done can
> > happen mindlessly in less than zillions of years of average time?
> > Come on now! What planet are you from?
>
> Let's talk clusters. How many point mutations of a protein are in fact still
> functional? This tends to create a cluster all of its own. Given this fact,
> the idea that they are spread evenly across the landscape is just not true.

Certainly the various beneficial functions are indeed clustered. But
you must realize that clustering doesn't help you find a new cluster
with a new type of function any faster. Say that you start on a
particular clustered island of function. You can move around this
island pretty easily. But, the entire island pretty much does the
same type of function. The question is, how long will it take, on
average, to find a new island of states/sequences with a new type of
function? In order to solve this problem you must have some idea
about the *average* density of all beneficial states in sequence space
as they compare to the non-beneficial sequences that also exist in
sequence space. This average density will tell you, clustered or not,
how long it will take to find a new sequence with a new type of
function via random walk across the non-beneficial sequences. In
fact, the more clustered the sequences are, the longer it will take,
on average, to find a new cluster.

Of course Robin, Howard, and many others in this forum have tried to
float the idea that these islands will all happen to be clustered
neatly around the starting point by some unknown but necessary force
of nature despite incredibly low average densities given the overall
volume of sequence space at that level of complexity. They are
basically suggesting that evolution works because the deck is stacked
neatly in favor of evolutionary processes. Of course, for evolution
to really work such deck stacking would not only be helpful, but
vital. Evolution simply cannot work unless the deck is marvelously
stacked in its favor like this. But, what are the odds that the deck
would be so neatly stacked like this outside of intelligent design?
That is the real question here. And so far, no evolutionist that I
have yet encountered seems to be able to answer this question in a way
that makes any sort of rational sense to me. Perhaps you are better
able to understand the solution to this problem than I am?

Sean
www.naturalselection.0catch.com

Frank J

unread,
Jan 14, 2004, 7:13:27 PM1/14/04
to
"\"Rev Dr\" Lenny Flank" <lflank...@ij.net> wrote in message news:<3ff86071$1...@corp.newsgroups.com>...

> Sean Pitman wrote:
>
>
> >
> > Until then, this is all I have time for today.
>
>
> Hey doc, when will you have time to tell us what the scientific theory
> of intelligent design is --- what does the designer do, specifically,
> what mechanisms does it use to do it, where can we see these mechanisms
> in operation today. And what indicates there is only one designer and
> not, say, ten or fifty of them all working together.


C'mon, one question at a time. And good luck getting any answer since
I am still waiting for him and several others to answer my simple
question to define "common design."

>
> After that, can you find the time to explain to me how ID "theory" is
> any less "materialist" or "naturalist" or "atheist" than is evolutionary
> biology, since ID "theory" not only does NOT hypothesize the existence
> of any supernatural entities or actions, but specifically states that
> the "intelligent designer" might be nothing but a space alien.


ID may be less "naturalistic," but only because it rarely makes
testable claims to support its own model. But when it does, it is
every bit as "naturalistic" as evolution and the
mutually-contradictory creationisms. Too bad those claims fail every
time.

And ID and creationism are no less "atheistic" than evolution,
because, as you know, and as anti-evolutionists don't want anyone to
know, evolution never specifically rules out an "intelligent
designer." Ironically it is the anti-evolutionists who constantly
promote "atheistic science" by their false dichotomy.


>
> And after THAT, could you find the time to tell us how you apply
> anything other than "naturalism" or "materialism" to your medical
> practice? What non-naturalistic cures do you recommend for your
> patients, doctor.

My guess is that he says: "Oo ee oo ah ah ting tang walla walla bing
bang."

Chris Merli

unread,
Jan 14, 2004, 9:13:16 PM1/14/04
to

"Sean Pitman" <seanpi...@naturalselection.0catch.com> wrote in message
news:80d0c26f.0401...@posting.google.com...

This is based on the false assumption that increasing complexity must entail
de novo development of the more complex systems. It is painfully clear from
an examination of most proteins that even within a single polypeptide there
are portions that are recruited from other coding sequences. Thus the basic
units that even you have realized can evolve are easily shuffled, copied, and
adapted. I would contend in fact that the hardest part of evolution is
not the complex systems that you have argued but the very simple functions.

> In order to keep up with this exponential decrease in average cone
> density, the number of blind men has to increase exponentially in
> order to find the rarer cones at the same rate. Very soon the
> environment cannot support any more blind men and so they must
> individually search out exponentially more and more sequence space, on
> average, before success can be realized (i.e., a cone or cluster of
> cones is found). For example, it can be visualized as stacked levels
> of rooms. Each room has its own average density of ice cream cones.
> The rooms on the lowest level have the highest density of ice cream
> cones - say one cone every meter or so, on average. Moving up to the
> next higher room the density decreases so that there is a cone every 2
> meters or so. Then, in the next higher room, the density decreases to
> a cone every 4 meters or so, on average. And, it goes from there.
> After 30 or so steps up to higher levels, the cone density is 1 every
> billion meters or so, on average.

If the development of each protein started from scratch you might have an
excellent argument, but nearly all proteins evolve from other proteins, so
you are starting from a point that is known to be functional.

Have you ever really considered how many functions proteins provide? At the
very basic level there are very few. All those complex functions are based
on only a few very simple things that can occur at a link between two amino
acids plus some chemical and electrical forces. Look at the active site of
most enzymes and you will find them remarkably simple.

I am afraid I will simply have to wait for evidence to elucidate the reason
for this. I asked you before what evidence you had that these clusters do
not exist and based on your reply here it is safe to assume the answer is
none. Not only do you not know if there is clustering but you are not even
certain what percentage of the protein sequences are functional in any way.
Based on this it is very hard to lend any weight to your speculations.
Could you present an experiment that would support any of your assumptions?
Please do not present experiments that would require negative results as
those are not scientific.

>
> Sean
> www.naturalselection.0catch.com
>

Sean Pitman

unread,
Jan 14, 2004, 9:21:03 PM1/14/04
to
howard hershey <hers...@indiana.edu> wrote in message news:<bu46sv$srt$1...@hood.uits.indiana.edu>...


> > Consider the scenario where there are 10 ice cream cones on the
> > continental USA. The goal is for a blind man to find as many as he
> > can in a million years.
>
> Except that is NOT what evolution does. Evolution starts with an
> organism with pre-existing sequences that produce products and interact
> with environmental chemicals in ways that are useful to the organism's
> reproduction.

Yes . . . so start the blind man off with an ice-cream cone to begin
with and then have him find another one.

> The situation is more like 10,000 blind men in a varying
> topography who blindly follow simple and dumb rules of the game to find
> useful things (ice cream at the tops of fitness peaks):

You don't understand. In this scenario, the positively selectable
topography is the ice-cream cone. There are no other selectable
fitness peaks here. The rest of the landscape is neutral. Some of
the ice-cream cones may be more positively selectable than others
(i.e., perhaps the man likes vanilla more than chocolate). However,
all positive peaks are represented in this case by an ice-cream cone.

> Up is good. Down
> is bad.

Ice-cream cone = Good or "Up" (to one degree or another) or even
neutral depending upon one's current position as it compares to one's
previous position. For example, once you have an ice cream, that is
good. But, all changes that maintain that ice cream but do not gain
another ice cream are neutral.

No ice-cream cone = "Bad", "Down", or even "neutral" depending upon
one's current position as it compares to one's previous position.

> Flat is neither good nor bad.

Exactly. Flat is neutral. The more neutral space between each "good"
upslope/ice-cream cone, the longer the random walk. The average
distance between each selectable "good" state translates into the
average time required to find such a selectable state/ice-cream cone.
More blind men searching, like 10,000 of them, would cover the area
almost 10,000 times faster than just one blind man searching alone.
However, at increasing levels of complexity the flat area expands at
an exponential rate. In order to keep up and find new functions at
these higher levels of functional complexity, the population of blind
men will have to increase at an equivalent rate. The only problem
with increasing the population is that very soon the local environment
will not be able to support any larger of a population. So, if the
environment limits the number of blind men possible to 10,000 - that's
great if the average neutral distance between ice-cream cones in a few
miles or so, but what happens when, with a few steps up the ladder of
functional complexity, the neutral distance expands to a few trillion
miles between each cone, on average? Now each one of your 10,000
blind men have to search around 50 million sq. miles, on average,
before the next ice-cream cone or a new cluster of ice cream cones
will be found by even one blind man in this population.

> Keep walking in all cases.

They keep walking alright - a very long way indeed before they reach
anything beneficially selectable at anything very far beyond the
lowest levels of functional complexity.

> It
> would not take too long for these 10,000 blind men to be found in
> decidedly non-random places (the high mesas of functional utility where
> they are wandering around the flat tops if you haven't guessed).

There is a funny thing about these mesas. At low levels of
complexity, these mesas are not very large. In fact, many of them are
downright tiny - just one or two steps wide in any direction and a
new, higher mesa can be reached. However, once a blind man finds this
new mesa new higher mesa (representing a different type of function at
higher level of specified complexity) and climbs up onto its higher
surface, the distance to a new mesa at the same height or taller is
exponentially greater than it was at the lower levels of mesas.

[rough ASCII sketch, garbled in the archive: a few taller mesas rising above a wide, flat plain]

> And
> the ice cream cones (the useful functions), remember, are not randomly
> distributed either. They are specifically at the tops of these mesas as
> well. That is what a fitness landscape looks like.

Actually, the mesa itself, every part of its surface, represents an
ice cream cone. There is no gradual increase here. Either you have
the ice-cream cone or you don't. If you don't have one that is even
slightly "good"/beneficial, then you are not higher than you were to
begin with and you must continue your random walk on top of the flat
mesa that you first started on (i.e., your initial beneficial
function(s)).

> If this topography of utility only changed slowly, at any given time it
> would appear utterly amazing to Sean that the blind men will all be
> found at these local high points or optimal states (the mesas licking
> the ice cream cones on them) rather than being randomly scattered around
> the entire surface.


If all the 10,000 blind men started at the same place, on the same
point of the same mesa, and then went out blindly trying to find a
higher mesa than the one they started on, the number that they found
would be inversely proportional to the average distance between these
taller mesas. If the density of taller mesas, as compared to the one
they are now on, happens to be say, one every 100 meters, then they
will indeed find a great many of these in short order. However, if
the average density of taller mesas, happens to be one every 10,000
kilometers, then it would take a lot longer time to find the same
number of different mesas as compared to the number the blind men
found the first time when the mesas were just 100 meters apart.

> They reached these high points (with the ice cream)
> by following a simple dumb algorithm.

Yes - and this mindless "dumb" algorithm works just fine to find new
and higher mesas if and only if there is a large average density of mesas
per given unit of area (i.e., sequence space). That is why it is easy
to evolve between 3-letter sequences. The ratio/density of such
sequences is as high as 1 in 15. Any one mutating sequence will find
a new 3-letter sequence within 15 random walk steps on average. A
population of 10,000 such sequences (blind men) would find most if not
all the beneficial 3-letter words (ice-cream cones) in 3-letter
sequence space in less than 30 generations (given that there was one
step each, on average, per generation).
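
The 3-letter case is small enough to simulate directly. The sketch below
uses a synthetic stand-in that flags roughly 1 in 15 of all sequences as
beneficial (the density cited above); the stand-in is an assumption of
the sketch, not an actual word list:

    import random
    import string

    def beneficial(seq):
        # Synthetic stand-in: flags about 1 in 15 of all 3-letter sequences.
        return hash(seq) % 15 == 0

    def walk():
        # Point-mutation random walk from a random 3-letter start.
        seq = [random.choice(string.ascii_uppercase) for _ in range(3)]
        steps = 0
        while not beneficial(''.join(seq)):
            seq[random.randrange(3)] = random.choice(string.ascii_uppercase)
            steps += 1
        return steps

    runs = [walk() for _ in range(2000)]
    print(sum(runs) / len(runs))   # about 14-15 steps, i.e. near 1/p

Drop the density from 1 in 15 to the 1-in-a-trillion range and the same
geometric arithmetic gives the trillion-step waits discussed next.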

This looks good so far, now doesn't it? However, the problems come as
you move up the ladder of specified complexity. Using language as an
illustration again, it is not so easy to evolve new beneficial
sequences that require say, 20 fairly specified letters, to transmit
an idea/function. Now, each member of our 10,000 blind men is going
to have to take over a trillion steps before success (the finding of a
new type of beneficial state/ice cream cone) is realized for just one
of them at this level of complexity.

Are we starting to see the problem here? Of course, you say that
knowledge about the average density of beneficial sequences is
irrelevant to the problem, but it is not irrelevant unless you, like
Robin, want to believe that all the various ice-cream cones
spontaneously cluster themselves into one tiny corner of the potential
sequence space AND that this corner of sequence space just so happens
to be the same corner that your blind men just happen to be standing
in when they start their search. What an amazing stroke of luck that
would be now wouldn't it?

> But you were wondering how something new could arise *after* the blind
> men are already wandering around the mesas? The answer is that it
> depends. They can't always do so.

And why not Howard? Why can't they always do so? What would limit
the blind men from finding new mesas? I mean really, each blind man
will self-replicate (hermaphrodite blind men) and make 10,000 new
blind men on the mesa that he/she/it now finds himself on. This new
population would surely be able to find new mesas in short order if
things worked as you suggest. But the problem is that if the mesas
are not as close together, on average, as they were at the lower level
where the blind men first started their search, it is going to take
longer time to find new mesas at the same level or higher. That is
the only reason why these blind men "can't always" find "something
new". It has to do with the average density of mesas at that level.

> But remember that these pre-existing
> mesas are not random places. They do something specific with local
> utility.

The mesas represent sequences with specific utilities. These
sequences may in fact be widely separated mesas even if they happen to
do something very similar. Really, there is no reason for the
mesas to be clustered in one corner of sequence space. A much more
likely scenario is for them to be more evenly distributed throughout
the potential sequence space. Certainly there may be clusters of
mesas here and there, but on average, there will still be a wide
distribution of mesas and clusters of mesas throughout sequence space
at any given level. And, regardless of if the mesas are more
clustered or less clustered, the *average* distance between what is
currently available and the next higher mesa will not be significantly
affected.

> Let's say that each mesa top has a different basic *flavor* of
> ice cream. Say that chocolate is a glycoside hydrolase that binds a
> glucose-based glycoside. Now let's say that the environment changes so
> that one no longer needs this glucose-based glycoside (the mesa sinks
> down to the mean level) but now one needs a galactose-based glycoside
> hydrolase.

You have several problems here with your illustration. First off,
both of these functions are very similar in type and use very similar
sequences. Also, their level of functional complexity is relatively
low (like the 4 or 5 letter word level). Also, you must consider the
likelihood that the environment would change so neatly that galactose
would come just when glucose is leaving. Certainly if you could
program the environment just right, in perfect sequence, evolution
would be no problem. But you must consider the likelihood that the
environment will change in just the right way to make the next step in
an evolutionary sequence beneficial when it wasn't before. The odds
that such changes will happen in just the right way on both the
molecular level and environmental level get exponentially lower and
lower with each step up the ladder of functional complexity. What was
so easy to evolve with functions requiring no more than a few hundred
fairly specified amino acids at minimum, is much, much more difficult
to do when the level of specified complexity requires just a few
thousand amino acids at minimum. It's the difference between evolving
between 3-letter words and evolving between 20-letter phrases. What
are the odds that one 20-letter phrase/mesa that worked well in one
situation will sink down with a change in situations to be replaced by
a new phrase of equal complexity that is actually beneficial? -
Outside of intelligent design? That is the real question here.

> Notice that the difference in need here is something more
> like wanting chocolate with almonds than wanting even strawberry, much
> less jalapeno or anchovy-flavored ice cream. The blind man on the newly
> sunk mesa must keep walking, of course, but he is not thousands of miles
> away from the newly risen mesa with chocolate with almonds ice cream on
> top.

He certainly may be extremely far away from the chocolate with almonds
as well as every other new type of potentially beneficial ice cream
depending upon the level of complexity that he happens to be at (i.e.,
the average density of ice-creams of any type in the sequence space at
that level of complexity).

> Changing from one glucose-based glycoside hydrolase to one with a
> slightly different structure is not the same as going from chocolate to
> jalapeno or fish-flavored ice cream. Not even the same as going from
> chocolate to coffee. The "island" of chocolate with almonds is *not*
> going to be way across the ocean from the "island" of chocolate.

Ok, let's say, for argument's sake, that the average density of
ice-cream cones in a space of 1 million square miles is 1 cone per 100
square miles. Now, it just so happens that many of the cones are
clustered together. There is the chocolate cluster with all the
various types of chocolate cones all fairly close together. Then,
there are the strawberry cones with all the variations on the
strawberry theme pretty close together. Then, there is the . . .
well, you get the point. The question is, does this clustering of
certain types of ice creams help in traversing the gap between
these clustered types of ice creams? No it doesn't. If anything, the
clustering only makes the average gap between clusters wider. The
question is, how to get from chocolate to strawberry or any other
island cluster of ice creams when the average gap is still quite
significant?

You see, the overall average density of cones is still significant to
the problem no matter how you look at it. Clustering some of them
together is not going to help you find the other clusters - unless
absolutely all of the ice cream islands are clustered together as well
in a cluster of clusters all in one tiny portion of the overall
potential space. This is what Robin is trying to propose, but I'm
sorry, this is an absolutely insane argument outside of intelligent
design. How is this clustering of clusters explained via mindless
processes alone?

> It will
> be nearby where the blind man is. *And* because chocolate with almonds
> is now the need, it will also be on the new local high mesa (relative to
> the position of the blind man on the chocolate mesa). The blind man
> need only follow the simple rules (Up good. Down bad. Neutral neutral.
> Keep walking.) and he has a good chance of reach the 'new' local mesa
> top quite often.

And what about the other clusters? Is the environment going to change
just right a zillion times in a row so that bridges can be built to
the other clusters?

> And remember that there is not just one blind man on one mesa in this
> ocean of possible sequences. There are 10,000 already present on 10,000
> different local mesas with even more flavors than the 31 that most ice
> cream stores offer. Your math always presupposes that whenever you need
> to find, say, vanilla with cherry the one blind man starts in some
> random site and walks in a completely random fashion (rather than by the
> rules I pointed out) across half the universe of sequence space to reach
> your pre-determined goal by pure dumb luck to find the perfect lick.

That is not my position at all as I have pointed out to you numerous
times. It seems that no matter how often I correct you on this straw
man caricature of my position you make the same straw man assertions.
Oh well, here it goes again.

I'm perfectly fine with the idea that there is not just one man, but
10,000 or many more men already in place on different mesas that are
in fact selectably beneficial. In fact, there may be 10,000 or more
men on each of 10,000 mesas. That is all perfectly fine and happens
in real life. When something new "needs to be found", say, "vanilla
with a cherry on top" or any other potentially beneficial function at
that level of complexity or greater (this is not a teleological search
you know since there are many ice-cream cones available), all of the
men may search at the same time.

My math certainly does not and never did presuppose that only one man
may search the sequence space. That is simply ridiculous. All the
men search at the same time (millions and even hundreds of billions of
them at times). The beneficial sequences are those sequences that are
even slightly better than what is currently had by even one member of
the vast population of blind men that is searching for something new
and good.

Now, if the average density of something new and good that is even
slightly selectable as new and good is less than 1 in a trillion
trillion, even 100 billion men searching at the same time will take a
while to find something, anything, that is even a little bit new and
good at the same level of specified complexity that they started with.
On average, none of the men on their various mesas will be very close
to any one of the new and good mesas within the same or higher levels
of sequence space if the starting point is very far beyond the lowest
levels of specified complexity.
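
Making the waiting-time arithmetic of that paragraph explicit (treating
each man's step as an independent draw is the simplification here):

    p = 1e-24          # assumed density: 1 in a trillion trillion
    searchers = 1e11   # 100 billion men searching in parallel

    # Expected steps before ANY searcher hits a beneficial state, with
    # each step treated as an independent Bernoulli draw:
    print(f"{1 / (p * searchers):.1e} steps")   # 1.0e+13

Even with 1e11 searchers in parallel, a density of 1e-24 leaves an
expected wait of about 1e13 steps per success.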

> My
> presumption is that the successful search is almost always going to
> start from the pre-existing mesa

Agreed.

> with the closest flavor to the new need
> (or from a duplicate, which, as a duplicate, is often superfluous and
> quickly erodes to ground level in terms of its utility).

This is where we differ. Say you have chocolate and vanilla. Getting
to the different varieties of chocolate and vanilla is not going to be
much of a problem. But, say that neither chocolate nor vanilla are
very close to strawberry or to each other. Each cluster is separated
from the other clusters by thousands of miles. Now, even though you
already have two clusters in your population, how are you going to
evolve the strawberry cluster if an environmental need arises where it
would be beneficial?

You see, you make the assumption that just because you start out with
a lot of clusters that any new potentially beneficial sequence or
cluster of sequences will be fairly close to at least one of your
10,000 starting clusters. This is an error when you start considering
levels of sequence space that have very low overall densities of
beneficial sequences. No matter where you start from and no matter
how many starting positions you have to begin with, odds are that the
vast majority of new islands of beneficial sequences will be very far
away from everything that you have to start with beyond the lowest
levels of functional complexity.

> As mentioned,
> these pre-existing mesas are not random pop-ups. They are at the most
> useful places in sequence space from which to try to find near-by mesas
> with closely-related biologically useful properties because they already
> have biologically useful properties.

Yes, similar useful biological properties would all be clustered
together under one type of functional island of sequences. However,
the overall density of beneficial sequences in sequence space dictates
how far apart, on average, these clusters of clusters will be from
each other. New types of functions that are not so closely related
will most certainly be very far away from anything that you have to
start with beyond the lowest levels of functional complexity. You may
do fine with chocolate and vanilla variations since those are what you
started with, but you will have great difficulty finding anything
else, such as strawberry, mocha, caviar, etc . . .

The suggestion that absolutely all of the clusters are themselves
clustered together in a larger cluster or archipelago of clusters in a
tiny part of sequence space is simply a ludicrous notion to me -
outside of intelligent design that is. Oh no, you, Robin, Deaddog,
Sweetness, Musgrave, and all the rest will have to do a much better
job of explaining how all the clusters can get clustered together
(when they obviously aren't) outside of intelligent design.



> I *do* expect to see clustering in useful sequences. And I *do* see it.

So do I. Who is arguing against this? Useful sequences are often
clustered around a certain type of function. What I am talking about
is evolution between different types of functions. The evolution of
different sequences with the same basic type of function is not an
issue at all. It happens all the time, usually in the form of an
up-regulation or down-regulation of a certain type of function, even
at the highest levels of functional complexity. But, this sort of
intra-island evolution is a far cry from evolving a new type of
function (i.e., going from one cluster to another). In fact, this
sort of evolution never happens beyond the lowest levels of functional
complexity due to the low density of beneficial sequences at these
higher levels of specified complexity.

In any case, this is all I have time for today. As always, it has
been most interesting. Please do try again . . .

Sean
www.naturalselection.0catch.com

Jethro Gulner

unread,
Jan 15, 2004, 12:17:26 AM1/15/04
to
I'm thinking TTSS to flagellum is on the order of chocolate to
chocolate-fudge-brownie.

howard hershey <hers...@indiana.edu> wrote in message news:<bu46sv$srt$1...@hood.uits.indiana.edu>...

david ford

unread,
Jan 15, 2004, 8:51:08 AM1/15/04
to
Sean Pitman <seanpi...@naturalselection.0catch.com> on 4 Jan 2004:
RobinGoodfellow <lmuc...@yahoo.com>:

[snip]

There's no such thing as an intelligent designer.
What I meant to say was, there's no such thing as an intelligent
designer of computer programs.
What I really meant to say was, there's no such thing as an
intelligent designer(s) of biology.

> If
> enough sequences are defined as beneficial and they are placed in just
> the right way, with the right number of spaces between them, then
> certainly such a high ratio will result in rapid evolution - as we saw
> here. However, when neutral non-defined gaps are present, they are a
> real problem for evolution. In this case, a gap of just 16 neutral
> mutations effectively blocked the evolution of the EQU function.
>
> http://naturalselection.0catch.com/Files/computerevolution.html

[snip]

>> The answer is simple - the ratio of beneficial states does NOT matter!
>
> Yes it does. You are ignoring the highly unlikely nature of your
> scenario. Tell me, how often do you suppose your start point would
> just happen to be so close to the only other beneficial sequence in
> such a huge sequence space? Hmmmm? I find it just extraordinary that
> you would even suggest such a thing as "likely" with all sincerity of
> belief. The ratio of beneficial to non-beneficial in your
> hypothetical scenario is absolutely miniscule and yet you still have
> this amazing faith that the starting point will most likely be close
> to the only other "winning" sequence in an absolutely enormous
> sequence space?! Your logic here is truly mysterious and your faith
> is most impressive.

Anything is possible with enough faith. Simply believe hard enough,
and reality _will_ conform.

> I'm sorry, but I just can't get into that boat
> with you. You are simply beyond me.

What are you afraid of-- getting a little wet? When the boat sinks,
you will, after all, be able to swim. Though I don't know for how
long....



>> All that matters is their distribution, and how well a particular
>> random walk is suited to explore this distribution.
>
> Again, you must consider the odds that your "distribution" will be so
> fortuitous as you seem to believe it will be. In fact, it has to be
> this fortuitous in order to work. It basically has to be a set up for
> success. The deck must be stacked in an extraordinary way in your
> favor in order for your position to be tenable. If such a stacked
> deck happened at your table in Las Vegas you would be asked to leave
> the casino in short order or be arrested for "cheating" by intelligent
> design since such deck stacking only happens via intelligent design.

Intelligent design advocates often cheat. They are masters of
illusion and sleight of hand. Their ideas adapt to data as fog adapts
to land, to borrow some phraseology from the creationist ReMine.
Their views can "explain" any conceivable observation, and any
conceivable set of circumstances (exception: if biology did not
exist, or if we are living in the Matrix, and what we think is real is
not real and is a dream).

Magician Walter ReMine wrote the extremely dangerous and execrable
book _The Biotic Message: Evolution versus Message Theory_ (1993),
538pp. I cannot urge upon you strongly enough the importance of not
reading that book. Miraculously, my faith in the solidity and rigor
of the theory of evolution aka Precious survived the reading of large
portions of that most despicable book. Those were dark times, but my
faith in Precious survived.

[snip]

>> A random walk
>> starting at a given beneficial sequence, and allowing certain
>> transitions from one sequence to another, would require a completely
>> different type of analysis. In the analyses of most such search
>> algorithms, the "ratio" of beneficial sequences would be irrelevant -
>> it is their *distribution* that would determine how well such an
>> algorithm would perform.
>
> The most likely distribution of beneficial sequences is determined by
> their density/ratio. You cannot simply assume that the deck will be
> so fantastically stacked in the favor of your neat little evolutionary
> scenario. I mean really, if the deck was stacked like this with lots
> of beneficial sequences neatly clustered around your starting point,
> evolution would happen very quickly. Of course, there have been those
> who propose the "Baby Bear Hypothesis". That is, the clustering is
> "just right" so that the theory of evolution works.

How could the existence of such just-right clustering be accounted
for-- what could have produced it?
In your response, please do not invoke intelligence. After all, in
the story of Goldilocks and the three bears, the porridge was not
prepared by intelligence. Intelligence cannot account for the
appearance of _anything_. This post is an illustration of that fact.

> That is the best of

Sorry, not interested. I recently blew my life savings on 50 acres of
oceanfront property on the Moon.
If you act now, you too can get in on this amazing ground-level deal,
and be privy to the secrets of the Oceanfront Moon Property Society.
The only requirements for membership are that you own Moon property
and affirm that intelligence/ mind cannot be an explanation for the
appearance of anything, especially biology.

Sean Pitman

unread,
Jan 15, 2004, 11:30:52 AM1/15/04
to
jethro...@bigfoot.com (Jethro Gulner) wrote in message news:<edf04d4a.04011...@posting.google.com>...

>
> I'm thinking TSS to flagellum is on the order of chocolate to
> chocolate-fudge-brownie

Now that's a serious stretch of the imagination. The TTSS system is a
non-motile secretory system while the fully formed flagellar system is
a motility system as well. The TTSS system requires 6 or so different
protein parts, at minimum, for its formation while the motility
function of the flagellar system requires an additional 14 or so
different protein parts (for a total of over 20 parts) before its
motility function can be realized. Unless you can find intermediate
functions for the gap of more than a dozen required parts that
separate the TTSS system from the Flagellar system, I'd say this gap
is quite significant indeed, requiring at minimum several thousand
fairly specified amino acids. Certainly this is not the same thing as
roaming around the same island cluster with the same type of function.
The evolution from the TTSS island of function to the brand new type
of motility function found in the flagellar island would have to cross
a significant distance before the motility function of the flagellum
could be realized. Such a distance could not be crossed via random
walk alone this side of zillions of years in any population of
bacteria on Earth. In order for evolution to have truly crossed such
a gap, without intelligent design helping it along, there would have
to be a series of closely spaced beneficial functions/sequences
between the TTSS and the motility function of the flagellum.
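
For what it's worth, the waiting-time intuition here is easy to
simulate. Below is a minimal Python sketch under the friendliest
possible caricature of the problem - a single lineage taking unbiased
one-step random walks across a purely neutral gap of width d, never
falling below the function it already has, succeeding when it first
reaches the far side. Every number is illustrative; nothing here
models real flagellar biochemistry:

import random

def mean_steps_to_cross(gap_width, trials=500):
    # Mean number of unbiased +/-1 steps for a walker starting at 0
    # (it never drops below 0) to first reach +gap_width.  A crude
    # stand-in for a neutral walk between two selectable functions.
    total = 0
    for _ in range(trials):
        pos, steps = 0, 0
        while pos < gap_width:
            pos = max(pos + random.choice((-1, 1)), 0)
            steps += 1
        total += steps
    return total / trials

for d in (2, 4, 8, 16, 32):
    print(d, round(mean_steps_to_cross(d)))
# Mean crossing time grows roughly as d**2 even in this friendliest,
# one-dimensional case; a walk in 20-letter sequence space must also
# avoid wandering off in thousands of unhelpful directions.

Whether real gaps are that wide, and whether they are really neutral,
is of course exactly what is in dispute in this thread.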

Where is this series of stepping-stones? That is the real question!
Many have proposed various stepping-stone functions, but none have
been able to show that these stepping-stones could actually work,
since no one has ever demonstrated the crossing from any proposed
stepping-stone to another in real life. If you think you know how such
a series could exist and actually work to eliminate this gap problem,
please do share your evolutionary sequence with us.

Sean
www.naturalselection.0catch.com

Sean Pitman

Jan 15, 2004, 2:50:33 PM
to
"Chris Merli" <clm...@insightbb.com> wrote in message news:<lKmNb.55023$5V2.67607@attbi_s53>...

> >
> > Exactly right. The problem is that increasing the number of blind men
> > searching only helps for a while, at the lowest levels of functional
> > complexity where the density of ice cream cones is the greatest.
> > However, with each step up the ladder of functional complexity, the
> > density of ice cream cones decreases in an exponential manner.
>
> This is based on the false assumption that increasing complexity must entail
> de novo development of the more complex systems. It is painfully clear from
> an examination of most proteins that even within a single polypeptide there
> are portions that are recruited from other coding sequences. Thus the basic
> units that even you have realized can evolve are easily shuffled, copied, and
> adapted. I would contend in fact that the hardest part of the evolution is
> not the complex systems that you have argued but the very simple functions.

This is a very common misconception among evolutionists - that if the
right subparts of a system are similar or identical to parts found
elsewhere in other systems, then the system in question obviously
arose via a "simple" assembly of pre-existing subparts.

The problem with this idea is that just because all of the right
subparts needed to make a new beneficial system of function are there,
already fully formed as parts of other systems of function, does not
mean that they will assemble themselves into a new collective
system of function. For example, all of the individual amino acids
are there, fully formed, to make a motility apparatus in a
historically non-motile bacterial colony. Say that motility would be
advantageous to this colony if it could evolve such a system. All the
right parts are there, but they do not know how to assemble themselves
into such a system.

Now why is this? Because, in order for correct assembly of the parts
to proceed, the information for their assembly must be pre-established
in the DNA. This genetic information tells where, when, and how much
of each part to make so that the assembly of the molecular systems can
occur. Without this pre-established information the right parts just
won't assemble properly beyond the lowest levels of functional
complexity. It would be like having all the parts to a watch in a
bag, shaking the bag for a billion years, and expecting a fully formed
watch, or anything else of equal or greater emergent functional
complexity, to fall out at the end of that time. The same is true for
say, a bacterial flagellum. Take all of the necessary subparts needed
to make a flagellum, put them together randomly, and see if they will
self-assemble into a flagellar apparatus. It just doesn't happen outside
of the very specific production constraints provided by the
pre-established genetic information that code for both flagellar part
production as well as where, when, and how much of each part to produce so
that assembly of these parts will occur in a proper way. The simple
production of flagellar parts in a random non-specific way will only
produce a junk pile - not a highly complex flagellar system.
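
For concreteness, the arithmetic behind the bag-shaking analogy runs
as follows, assuming - purely for the sake of the analogy - that some
20 distinct parts must fall into exactly one workable linear order and
that each "shake" samples one order at random. None of these numbers
come from real biochemistry:

import math

parts = 20                            # the rough part count cited above
orderings = math.factorial(parts)     # about 2.4e18 possible orders
shakes_per_second = 1_000_000         # an arbitrarily generous rate
seconds_per_year = 60 * 60 * 24 * 365

years = orderings / (shakes_per_second * seconds_per_year)
print(f"{orderings:.2e} orderings, ~{years:,.0f} years per expected success")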

Now, of course, if you throw natural selection into the picture, this
is supposed to get evolution out of this mess. It sorts through the
potential junk pile options and picks only those assemblages that are
beneficial, in a stepwise manner, until higher and higher systems of
functional complexity are realized. This is how it is supposed to
work. The problem with this notion is that as one climbs up the
ladder of functional complexity, it becomes more and more difficult to
keep adding genetic sequences together in a beneficial way without
having to cross vast gaps of neutral or even detrimental changes.

For example, start with a meaningful English word and then add to or
change that word so that it makes both meaningful and beneficial sense
in a given situation/environment. At first such a game is fairly easy.
But very quickly you reach a point where further additions or changes
become very difficult unless they happen to be "just right". The
changes required to maintain beneficial meaning in longer and longer
phrases, sentences, paragraphs, etc., become enormous. Each word has
a meaning by itself that may be used in a beneficial manner by many
different types of sentences with completely different meanings.
Although the individual word does have a meaning by itself, its
combination with other words produces an emergent meaning/function
that goes beyond the sum of the individual words. The same thing
happens with genes and proteins. A portion of a protein may in fact
work well in a completely different type of protein, but in the
protein that it currently belongs to, it is part of a completely
different collective emergent function. Its relative order as it
relates to the other parts of this larger whole is what is important.
How is this relative order established if there are many, many more
ways in which the relative order of these same parts would not be
beneficial in the least?
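
The word-game version of this claim can be checked against any
machine-readable word list. A Python sketch, with the dictionary path
as a placeholder (any one-word-per-line list will do), measuring for
each word length what fraction of all single-letter substitutions
still land on a valid word:

import string
from collections import defaultdict

# Placeholder path - substitute any plain word list, one per line.
with open("/usr/share/dict/words") as f:
    words = {w.strip().lower() for w in f if w.strip().isalpha()}

tallies = defaultdict(lambda: [0, 0])     # length -> [hits, tries]
for w in words:
    n = len(w)
    if not 3 <= n <= 12:
        continue
    for i in range(n):
        for c in string.ascii_lowercase:
            if c != w[i]:
                tallies[n][1] += 1
                if w[:i] + c + w[i + 1:] in words:
                    tallies[n][0] += 1

for n in sorted(tallies):
    hits, tries = tallies[n]
    print(n, f"{hits / tries:.4f}")
# On typical word lists the fraction of viable one-letter mutants
# falls steeply with word length - though whether protein space
# behaves like English is, of course, the very question in dispute.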

Again, just because the right parts happen to be in the same place at
the same time does not mean much outside of a pre-established
information code that tells them how to specifically arrange
themselves.

> > In
> > order to keep up with this exponential decrease in average cone
> > density, the number of blind men has to increase exponentially in
> > order to find the rarer cones at the same rate. Very soon the
> > environment cannot support any more blind men and so they must
> > individually search out exponentially more and more sequence space, on
> > average, before success can be realized (i.e., a cone or cluster of
> > cones is found). For example, it can be visualized as stacked levels
> > of rooms. Each room has its own average density of ice cream cones.
> > The rooms on the lowest level have the highest density of ice cream
> > cones - say one cone every meter or so, on average. Moving up to the
> > next higher room the density decreases so that there is a cone every 2
> > meters or so. Then, in the next higher room, the density decreases to
> > a cone every 4 meters or so, on average. And, it goes from there.
> > After 30 or so steps up to higher levels, the cone density is 1 every
> > billion meters or so, on average.
>
> If the development of each protein started from scratch you may have an
> excellent argument, but nearly all proteins derive from other proteins, so
> you are starting from a point that is known to be functional.

You are actually suggesting here that the system in question had its
origin in many different places. You seem to be suggesting that all
the various parts found as subparts of many different systems somehow
brought themselves together to make a new type of system . . . just
like that. Well now, how did these various different functional
parts, as subparts of many different systems, know how to come
together so nicely to make a completely new system of function? This
would be like various parts from a car simply deciding, by themselves,
to reassemble to make an airplane, or a boat, or a house.

Don't you see, just because the subparts are functional as parts of
different systems of function does not mean that these subparts can
simply make an entirely new collective system of function. This just
doesn't happen, although evolutionists try to use this argument all
the time. It just doesn't make sense. It is like throwing a bunch of
words on the ground at random saying, "Well, they all work as parts of
different sentences, so they should work together to make a new
meaningful sentence." Really now, it just doesn't work like this.
You must be able to add the genetic words together in a stepping-stone
sequence where each addition makes a beneficial change in the overall
function of the evolving system. If each change does not result in a
beneficial change in function, then nature will not and cannot select
to keep that change. Such non-beneficial changes are either
detrimental or neutral. The crossing of such detrimental/neutral gaps
really starts to slow evolution down, in an exponential fashion,
beyond the lowest levels of specified functional complexity. Very
soon, evolution simply stalls out and cannot make any more
improvements beyond the current level of complexity that it finds
itself at, this side of zillions of years of average time.

Sean
www.naturalselection.0catch.com

Bennett Standeven

Jan 15, 2004, 7:41:15 PM
to
seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04011...@posting.google.com>...

> howard hershey <hers...@indiana.edu> wrote in message news:<bu46sv$srt$1...@hood.uits.indiana.edu>...
>
> > > Consider the scenario where there are 10 ice cream cones on the
> > > continental USA. The goal is for a blind man to find as many as he
> > > can in a million years.
> >
> > Except that is NOT what evolution does. Evolution starts with an
> > organism with pre-existing sequences that produce products and interact
> > with environmental chemicals in ways that are useful to the organism's
> > reproduction.
>
> Yes . . . so start the blind man off with an ice-cream cone to begin
> with and then have him find another one.

OK; the ice cream cones are probably found in shops; so given any
cone, odds are that another cone is just a few inches away. This is
still true even if there is only one shop in the USA.
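
The shop intuition can be made concrete with a toy simulation (all
parameters invented). Scatter roughly the same number of "cones" over
a grid either uniformly or in tight ten-cone "shops", start a blind
walker on one cone, and count steps until he bumps into a different
cone. The cone-to-space ratio is about the same either way; only the
distribution differs:

import random

SIZE, N_CONES, TRIALS = 300, 100, 100

def make_cones(clustered):
    cones = set()
    while len(cones) < N_CONES:
        if clustered:                  # a "shop": ten cones within 2 steps
            cx, cy = random.randrange(SIZE), random.randrange(SIZE)
            for _ in range(10):
                cones.add(((cx + random.randint(-2, 2)) % SIZE,
                           (cy + random.randint(-2, 2)) % SIZE))
        else:                          # uniform scatter
            cones.add((random.randrange(SIZE), random.randrange(SIZE)))
    return cones

def steps_to_next_cone(cones, cap=1_000_000):
    start = random.choice(sorted(cones))     # begin on an existing cone
    x, y = start
    for step in range(1, cap):
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = (x + dx) % SIZE, (y + dy) % SIZE
        if (x, y) in cones and (x, y) != start:
            return step
    return cap

for clustered in (False, True):
    cones = make_cones(clustered)
    mean = sum(steps_to_next_cone(cones) for _ in range(TRIALS)) / TRIALS
    print("clustered" if clustered else "uniform", round(mean))
# Same overall cone density, very different search times: with
# clustering, the next cone is usually a handful of steps away.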


> > Up is good. Down
> > is bad.
>
> Ice-cream cone = Good or "Up" (to one degree or another) or even
> neutral depending upon one's current position as it compares to one's
> previous position. For example, once you have an ice cream, that is
> good. But, all changes that maintain that ice cream but do not gain
> another ice cream are neutral.
>
> No ice-cream cone = "Bad", "Down", or even "neutral" depending upon
> one's current position as it compares to one's previous position.
>
> > Flat is neither good nor bad.
>
> Exactly. Flat is neutral. The more neutral space between each "good"
> upslope/ice-cream cone, the longer the random walk. The average
> distance between each selectable "good" state translates into the
> average time required to find such a selectable state/ice-cream cone.
> More blind men searching, like 10,000 of them, would cover the area
> almost 10,000 times faster than just one blind man searching alone.
> However, at increasing levels of complexity the flat area expands at
> an exponential rate.

But it does not matter, because the blind men always start out in the
ice cream shop, with an ever increasing selection of cones within
arm's reach. Of course, they'll never find any of the other shops, but
so what?

[...]

then each of them is a local high point; only these mesas will have
blind men on them.


> > But you were wondering how something new could arise *after* the blind
> > men are already wandering around the mesas? The answer is that it
> > depends. They can't always do so.
>
> And why not Howard? Why can't they always do so? What would limit
> the blind men from finding new mesas? I mean really, each blind man
> will self-replicate (hermaphrodite blind men) and make 10,000 new
> blind men on the mesa that he/she/it now finds himself on.

But since the current mesa is a local high point, there is nowhere for
them to go.

[...]


> > Let's say that each mesa top has a different basic *flavor* of
> > ice cream. Say that chocolate is a glycoside hydrolase that binds a
> > glucose-based glycoside. Now let's say that the environment changes so
> > that one no longer needs this glucose-based glycoside (the mesa sinks
> > down to the mean level) but now one needs a galactose-based glycoside
> > hydrolase.
>
> You have several problems here with your illustration. First off,
> both of these functions are very similar in type and use very similar
> sequences.

That's not a "problem", it's the whole point. Evolution by definition
involves gradual changes, in which new systems have similar functions
and definitions to the old ones.

> Also, their level of functional complexity is relatively
> low (like the 4 or 5 letter word level).

I don't know exactly what "galactose-based glycoside" is, but
something tells me that it takes more than 4 or 5 amino acids to bind
to it.

> Also, you must consider the likelihood that the environment would change
> so neatly that galactose would come just when glucose is leaving.

Yes. More likely the galactose was always there, but was ignored in
favor of the glucose, until the latter disappeared.

> Certainly if you could program the environment just right, in perfect
> sequence, evolution would be no problem. But you must consider the
> likelihood that the environment will change in just the right way to
> make the next step in an evolutionary sequence beneficial when it
> wasn't before.

That's pretty easy; following the mesa analogy, either the high mesa
drops to be lower than the formerly low one, or the low one rises
above the formerly high one. Happens all the time.

> The odds
> that such changes will happen in just the right way on both the
> molecular level and environmental level get exponentially lower and
> lower with each step up the ladder of functional complexity.

No; the chance that two sequences will interchange in relative fitness
does not depend on how complex they are.

> What was so easy to evolve with functions requiring no more than a few
> hundred fairly specified amino acids at minimum, is much much more
> difficult to do when the level of specified complexity requires just a few
> thousand amino acids at minimum. It's the difference between evolving
> between 3-letter words and evolving between 20-letter phrases. What
> are the odds that one 20-letter phrase/mesa that worked well in one
> situation will sink down with a change in situations to be replaced by
> a new phrase of equal complexity that is actually beneficial? -

Quite good, I'd say. I can easily imagine the relative fitness of
"Today we'll talk about unicorns" exchanging places with that of
"Today we'll talk about unicode", for example. Those are 25-letter
phrases; making them even longer would only increase the number of
nearby phrases with potential value.
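
For the curious, those two phrases really are neighbors by any
ordinary string metric. A quick Python sketch using plain Levenshtein
edit distance (one substitution, insertion, or deletion per step):

def edit_distance(a, b):
    # Classic dynamic program: prev[j] holds the distance between
    # the first i-1 letters of a and the first j letters of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete
                           cur[j - 1] + 1,            # insert
                           prev[j - 1] + (ca != cb))) # substitute
        prev = cur
    return prev[-1]

print(edit_distance("Today we'll talk about unicorns",
                    "Today we'll talk about unicode"))
# prints 3: r -> d, n -> e, drop the trailing s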

> Outside of intelligent design? That is the real question here.
>
> > Notice that the difference in need here is something more
> > like wanting chocolate with almonds than wanting even strawberry, much
> > less jalapeno or anchovy-flavored ice cream. The blind man on the newly
> > sunk mesa must keep walking, of course, but he is not thousands of miles
> > away from the newly risen mesa with chocolate with almonds ice cream on
> > top.
>
> He certainly may be extremely far away from the chocolate with almonds
> as well as every other new type of potentially beneficial ice cream
> depending upon the level of complexity that he happens to be at (i.e.,
> the average density of ice-creams of any type in the sequence space at
> that level of complexity).

Yes; the higher the level of complexity, the more likely that the new
ice cream cone is nearby, since the fancier (more complex) flavors
tend to appear in the stores with the largest selection.

> > Changing from one glucose-based glycoside hydrolase to one with a
> > slightly different structure is not the same as going from chocolate to
> > jalapeno or fish-flavored ice cream. Not even the same as going from
> > chocolate to coffee. The "island" of chocolate with almonds is *not*
> > going to be way across the ocean from the "island" of chocolate.
>
> Ok, lets say, for arguments sake, that the average density of
> ice-cream cones in a space of 1 million square miles is 1 cone per 100
> square miles. Now, it just so happens that many of the cones are
> clustered together. There is the chocolate cluster with all the
> various types of chocolate cones all fairly close together. Then,
> there are the strawberry cones with all the variations on the
> strawberry theme pretty close together. Then, there is the . . .
> well, you get the point. The question is, does this clustering of
> certain types of ice creams help in traversing the gap between
> these clustered types of ice creams? No it doesn't. If anything, the
> clustering only makes the average gap between clusters wider. The
> question is, how to get from chocolate to strawberry or any other
> island cluster of ice creams when the average gap is still quite
> significant?

You don't; if you want to get from chocolate to strawberry, you need
to do it early on, when the distance is smaller. That's why
fundamental differences between organisms (such as between chocolate
and strawberry ice cream) are taken as evidence that they are only
distantly related.

>
> You see, the overall average density of cones is still significant to
> the problem no matter how you look at it. Clustering some of them
> together is not going to help you find the other clusters

Who said we had to find all of the clusters?

> > It will be nearby where the blind man is. *And* because chocolate with
> > almonds is now the need, it will also be on the new local high mesa
> > (relative to the position of the blind man on the chocolate mesa). The
> > blind man need only follow the simple rules (Up good. Down bad. Neutral
> > neutral. Keep walking.) and he has a good chance of reach the 'new' local
> > mesa top quite often.
>
> And what about the other clusters? Is the environment going to change
> just right a zillion times in a row so that bridges can be built to
> the other clusters?

No; the blind men at the other clusters reached them when they were
still a part of this one. Eventually the clusters split apart and
"drifted" away from each other. (Much as galaxies "drift" apart due to
cosmic expansion.)

[...]

>
> > My
> > presumption is that the successful search is almost always going to
> > start from the pre-existing mesa
>
> Agreed.
>
> > with the closest flavor to the new need
> > (or from a duplicate, which, as a duplicate, is often superfluous and
> > quickly erodes to ground level in terms of its utility).
>
> This is where we differ. Say you have chocolate and vanilla. Getting
> to the different varieties of chocolate and vanilla is not going to be
> much of a problem. But, say that neither chocolate nor vanilla are
> very close to strawberry or to each other. Each cluster is separated
> from the other clusters by thousands of miles. Now, even though you
> already have two clusters in your population, how are you going to
> evolve the strawberry cluster if an environmental need arises where it
> would be beneficial?

In that case, you wouldn't. You'd have to settle for chocolate ice
cream with strawberries or some such.

Similarly, we would not expect birds to evolve jet engines, as they
are too different from any system the birds possess now.

[...]


> The suggestion that absolutely all of the clusters are themselves
> clustered together in a larger cluster or archipelago of clusters in a
> tiny part of sequence space is simply a ludicrous notion to me -
> outside of intelligent design that is. Oh no, you, Robin, Deaddog,
> Sweetness, Musgrave, and all the rest will have to do a much better
> job of explaining how all the clusters can get clustered together
> (when they obviously aren't) outside of intelligent design.

It isn't necessary that they _all_ be clustered in that fashion; only
that some of them are.

Bennett Standeven

Jan 15, 2004, 7:46:23 PM
to
dfo...@gl.umbc.edu (david ford) wrote in message news:<b1c67abe.04011...@posting.google.com>...


> What I meant to say was, there's no such thing as an intelligent
> designer of computer programs.

Heh. Sometimes it feels that way, even with my own programs.

Chris Merli

Jan 15, 2004, 9:59:07 PM
to

Actually this comes from the examination of many protein sequences.

So how would you explain the hundreds of examples of parts of
proteins that were obviously lifted from other proteins? More importantly,
how do you explain the nested hierarchies that they follow? A designer may
borrow an idea to use again, but they would not modify simple, highly
functional components for each new use. That would be like completely
re-engineering a bolt for every new machine. So then your theory must
predict that we would find very similar proteins in very diverse organisms.
Is this a prediction of your theory?

I thought you were beyond this base level of a straw-man argument.

>
> Don't you see, just because the subparts are functional as parts of
> different systems of function does not mean that these subparts can
> simply make an entirely new collective system of function. This just
> doesn't happen although evolutionists try and use this argument all
> the time. It just doesn't make sense.

Actually, if you look at the components of the clotting system or the globin
genes you will see that this is exactly what happens. And if you want to see
real word scrabble, try the immune system. The basic idea is to shuffle
the parts of these genes to create hundreds of different antibodies.

> It is like throwing a bunch of
> words on the ground at random saying, "Well, they all work as parts of
> different sentences, so they should work together to make a new
> meaningful sentence." Really now, it just doesn't work like this.

Maybe it would help if you avoided using analogies and just stuck to
biological examples.

> You must be able to add the genetic words together in a steppingstone
> sequence where each addition makes a beneficial change in the overall
> function of the evolving system. If each change does not result in a
> beneficial change in function, then nature will not and cannot select
> to keep that change. Such non-beneficial changes are either
> detrimental or neutral. The crossing of such detrimental/neutral gaps
> really starts to slow evolution down, in an exponential fashion,
> beyond the lowest levels of specified functional complexity. Very
> soon, evolution simply stalls out and cannot make any more
> improvements beyond the current level of complexity that it finds

> itself, this side of zillions of years of average time.
>
> Sean
> www.naturalselection.0catch.com

I noticed that you did not address the most important parts of my last post.
If you have developed a scientific theory as an alternative explanation then
you should be able to provide some testable predictions to support the
theory.



Von Smith

Jan 16, 2004, 12:33:18 AM
to
seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04011...@posting.google.com>...

Except of course that organisms that have all these components don't
actually produce a junk pile, and in many cases, such as in the TTSS
or Tsp pilus, the relevant parts already assemble in substantially the
same way they do in a flagellum.
Dr. Pitman has taken his strawman version of evolution to the next
level: not content with suggesting that proteins must evolve from
scratch from random peptide sequences, he is now telling us that
complex multi-protein systems must evolve from random junk-piles of
constituent parts.

I would have been more impressed if you had written this *after*
giving a substantive reply to Deaddog's recent excellent post on
Synthetic Biology, which probably sheds some light on how biologists
*really* think complex multi-protein systems might evolve. In it, he
cites a paper in which researchers randomly switched around some of
the parts involved in complex multi-protein interactions to see what
they would do.

Combinatorial synthesis of genetic networks.
Guet CC, Elowitz MB, Hsing W, Leibler S.
Science. 2002 May 24; 296(5572): 1466-70.

http://www.sciencemag.org/cgi/content/full/296/5572/1466

So what happens when one shakes up the regulatory bits of a biological
system and lets them fall where they will? AIUI, far from ending up
with nothing but random junk piles, the researchers were able to obtain
a variety of novel logically functioning phenotypes. No need for some
pre-existing homunculus magically prompting the various parts on how
to behave: as often as not the parts were able to associate and
interact coherently left to their own devices. Of course it is
possible that this liberal arts major is misunderstanding the article.
Perhaps the biologically washed can comment more coherently.
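
No few-line sketch can do justice to the paper itself, but the
combinatorial-wiring idea can be caricatured in Python. This is
emphatically *not* the authors' model - just three idealized genes,
each shut off by one arbitrarily chosen repressing signal (an external
input or another gene), iterated until the on/off state repeats, with
gene 2 as the readout:

import itertools

def phenotype(wiring):
    # wiring[i] is the lone repressor of gene i: -1 and -2 are the two
    # external inputs, 0..2 are the genes themselves.  For each of the
    # four input combinations, run the synchronous dynamics until a
    # state repeats, then report gene 2's value.
    out = []
    for x1, x2 in itertools.product((0, 1), repeat=2):
        state, seen = (1, 1, 1), set()
        while state not in seen:
            seen.add(state)
            signal = {-1: x1, -2: x2, 0: state[0], 1: state[1], 2: state[2]}
            state = tuple(0 if signal[wiring[i]] else 1 for i in range(3))
        out.append(state[2])
    return tuple(out)

counts = {}
for wiring in itertools.product((-1, -2, 0, 1, 2), repeat=3):
    p = phenotype(wiring)
    counts[p] = counts.get(p, 0) + 1
for p, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(p, n)
# Even this cartoon yields a spread of distinct input/output behaviors
# under random rewiring, rather than one undifferentiated junk pile -
# which is the qualitative point at issue.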

>
> Now, of course, if you throw natural selection into the picture, this
> is supposed to get evolution out of this mess. It sorts through the
> potential junk pile options and picks only those assemblages that are
> beneficial, in a stepwise manner, until higher and higher systems of
> functional complexity are realized. This is how it is supposed to
> work. The problem with this notion is that as one climbs up the
> ladder of functional complexity, it becomes more and more difficult to
> keep adding genetic sequences together in a beneficial way without
> having to cross vast gaps of neutral or even detrimental changes.

Maybe, maybe not. Again, you wouldn't necessarily need an
astronomical amount of novel assembly to get motility out of a Tsp
pilus; many of the constituent parts are not only already there, but
are already interacting in the way they do in a flagellum. It appears
that once again, you are assuming that we are talking about evolving
such a system from *any* random assemblage of the individual
components, rather than from a logical precursor like a pilus. And
besides, this recent work in systems biology indicates that even if we
*are* talking about randomly rejumbling the components of a system,
the prospects of getting a novel beneficial function as a result may
not be quite as grim as you make out.

<snip yet another English language analogy>

I may be somewhat out of my depth here, but what the hell: Proteins
interact with one another according to chemistry. Since the precursor
proteins had basically the same chemical properties they do now, they
already more or less "knew how" to interact with one another. You
might want a point mutation or two to improve affinity; this is hardly
a problem for evolution. Timing and delivery of the parts is
controlled by things like regulatory sequences and transport proteins;
these can also evolve new behaviors, and have been observed to do so.
Genes can be up- or down-regulated, and regulatory sequences can
evolve to respond to different repressors. Regulatory sequences can
be switched around. Transport proteins can be co-opted and modified to
transport different substances.

ISTM you are trying to create a mystification. We don't have all the
*specific* answers to how, exactly, this or that structure evolved, or
even know all the details about how the various parts of the flagellum
work, but I don't think that how proteins generally "know how" to
interact with one another is the sort of inexplicable black magic you
seem to think it is.

Von Smith
Fortuna nimis dat multis, satis nulli.

howard hershey

Jan 16, 2004, 2:13:32 PM
to

Sean Pitman wrote:

> howard hershey <hers...@indiana.edu> wrote in message
> news:<bu46sv$srt$1...@hood.uits.indiana.edu>...
>
>
>>> Consider the scenario where there are 10 ice cream cones on the
>>> continental USA. The goal is for a blind man to find as many as
>>> he can in a million years.
>>
>> Except that is NOT what evolution does. Evolution starts with an
>> organism with pre-existing sequences that produce products and
>> interact with environmental chemicals in ways that are useful to
>> the organism's reproduction.
>
>
> Yes . . . so start the blind man off with an ice-cream cone to begin
> with and then have him find another one.
>
>
>> The situation is more like 10,000 blind men in a varying topography
>> who blindly follow simple and dumb rules of the game to find useful
>> things (ice cream at the tops of fitness peaks):
>
>
> You don't understand. In this scenario, the positively selectable
> topography is the ice-cream cone.

The reason why I used a mesa loaded with ice cream cones is because of
the difference in size of the searching blind man (the modal sequence in
a specific population) and an ice cream cone (all sequences with the
same effective functional activity).

The only way your scenario would be an accurate reflection of reality
is if the ice cream cone were really an ice cream mountain that the
blind man can climb, with a few dribs and drabs of ice cream (of some
flavor) at the base, with increasing concentrations of ice cream up the
slope toward the mesa, enticing him upward to the mesa he can
wander around.

Now, why do I use this model rather than your sudden tiny ice cream
cones that pop out of nowhere as tiny dots on the tops of telephone
poles in a monotonously flat landscape? You need to remember what these
entities are representing and what the reality of a search through
sequence space in real protein sequence space would look like.

The blind man following my dumb rules is the sequence du jour (the
current modal sequence) of a population of organisms. The reward for
following the rules is winding up on the mesas where the maximum utility
or reward to the organism (measured by the metric of reproductive
success; ice cream is just like sex) is. This is a mesa rather than an
alp because there are literally thousands of sequences that can have
effectively the same optimal functionality, as evidenced by the fact
that there are hundreds to thousands of different sequences that perform
the same function with effectively equal efficiency in different species
of modern organisms and even within species. That doesn't preclude
minor variations in altitude on the mesa top. Moreover, it is quite
clear that there are also many sequences that have *less* utility than
the optimal utility at the mesa. The mesa is surrounded by ground that
*slopes* upward toward the mesa.
That is, the fitness mesa does not simply pop up straight out of
the ground (it is unlike most mesas in this sense) like Devil's Tower (WY)
but there are many sequences of varying utility from no selectable
utility through partial utility to optimal utility. It is also clear
that optimal utility is a relative condition because many enzymes and
systems have to balance conflicting needs (such as need for being able
to utilize several different substrates).


> There are no other selectable fitness peaks here. The rest of the
> landscape is neutral.

For a *given* function or utility, the vast majority of the landscape
will be neutral (meaning equally useless in this case) for *that*
function. However, nearby the functional mesa will be other mesas that
have *related* functions or utility. These may even be poking out from
the gradual slope leading up to the function or utility of current
interest. For example, nearby a hypothetical beta galactosidase mesa
will likely be mesas that bind other sugar-adducts via a glycoside
linkage and hydrolyse those linkages, but do not bind galactose. That
is, there will be sequences which already are part way up the slope
leading to the lactase activity, but the mesas part ways and go upward
in different directions (one toward glucose-adducts, say). The
reason these sequences are *clustered* close to the sequences for
galactosidase activity is *because* cleavage of a galactose-adduct bond
shares many of the structural and sequence feature needs with activities
that cleave glucose-adduct bonds. These sequences are clustered because
they are similar, like chocolate and chocolate with nuts.

Now this is a very different type of sequence landscape than the one I
see Sean proposing. Let me try to ascii draw what I see as the
differences between Sean's model of sequence space and mine. I could be
wrong about his model since he keeps using word descriptions that
disagree with his mathematical model, which invariably assumes that what
determines the difficulty of evolving a new function is the distance
between some *average* or *random* sequence and the new sequence that
must be generated, a point he repeatedly makes but denies making.

Howard's interpretation of Sean's model of sequence space:

|
|
|
| . .
|
| x
|
|
|
| . .
|
|
| .
|
|
| .
|
| o
|
|
| .
|
|
|
|
| . .
|
|
|_______________________________________________________

By 'sequence space' I specifically mean *all* possible protein sequences
of a particular length, not just those with, say, lactase activity.
The .'s in this model represents the 'ice cream cones'; that is, the
rare sequences that serve *any* useful function whatsoever. Everywhere
else we have a flat surface. The x represents the function (the type of
ice cream cone) you think the blind man (the o) must find by wandering
around the flat spaces. The blind man (the o) is the modal sequence or
starting sequence in the search. I started the o at a random or average
site in all sequence space because that is what Sean's *mathematical*
treatment presumes. He presumes that the search from an average or
random sequence to the desired sequence, which on average would depend
on the overall ratio of useless or neutral sequence to useful sequence
is what is important. Moreover, the ratio Sean uses in his calculations
is the ratio of sequences for a *particular* useful function to *all*
sequence space, and thus any sequence which is useful for a different
function other than the chosen one is put in the denominator as being
equivalent to a sequence that has no utility whatsoever. That is one
reason why I consider his goal to be a teleological or
pre-determined one.

What I cannot represent here, but is certainly an important point, is
the idea that the .'s in Sean's model are completely randomly
distributed wrt to function. That is, if the . at the lower right
encodes a glucose-based glycoside hydrolase, a . representing a
galactose-based glycoside hydrolase will NOT cluster with the
glucose-based glycoside hydrolase, but will be found at some random
position (on average, far away) in this sequence space wrt the sequence
that encodes glucose-based glycoside hydrolase. In this model, and only
in this model where the search involves a completely random search, the
separation between functional sequences is a function of the ratio of
useful to non-useful sequences and nothing else. Neither the starting
point of the blind man nor any of the useful sequences show any
clustering of functionalities. Feel free, Sean, to correct any part of
this model that you regard as a misrepresentation of what your
*mathematical* model presents.


Howard's model of sequence space.

|
| _______ _______
| / .o. \ / ... \
| | ... | | .o. _|_
| \ ... / \ .../xxx\
| ------- -----|xxx|
| \xxx/
| _______ _______
| / ... \ / ... \
| | .o. | | ... |
| \ ... / \ .o. /
| -------___ -------
| / ... \
| | ... |
| \ o.. / _______
| ------- / ... \
| | ... |
| \ ... /
| -------
| _______
| / o.. \
| | ... |
| \ ... /
| -------
| _______ _______
| / ... \ / ... \
| | ..o | | .o. |
| \ ... / \ ... /
| ------- -------
|_______________________________________________________

By 'sequence space' I specifically mean *all* possible protein sequences
of a particular length. The .'s represents the 'ice cream cones'; that
is, the rare sequences that serve *any* useful function whatsoever.
There are a number of sequences that are equally useful. I have
clustered these in a 3x3 dot array, because the equally useful sequences
are close together. Of course, the reality would be that the size of
these boxes will be highly variable. Some will be only a very small
cluster. Others, like fibrinogen peptide, can essentially cover the
entire sequence space! There is little relationship between size of the
protein and the number of sequences in sequence space that can fulfill
that function. But, in general, the smaller proteins tend to have
higher constraint (fewer sequences will fulfill the function). The
differences between the .'s are selectively neutral, so the position of
the blind man (the modal sequence in a particular organism) is random
within that group. There is *also* a penumbra of surrounding sequences
with *less* utility, spanning varying degrees of partial functionality. I have
represented that by the box around the cluster of useful sequences. The
boundary is the edge of selectable utility (where the blind man starts
to notice a selective slope). The x represents the function (the type of
ice cream cone) you think the blind man (one of the o's) must find.
Notice that this representation is a representation of a real landscape
with real topography, not a perfectly flat plane with telephone poles
sticking up randomly at scattered points.

I am starting with a real cell and not with a hypothetical blind man
who is starting as some random sequence in all of sequence space.
Each of my blind men (genes, if you will) already occupies a site on the
mesa of functionality, but the different mesas (and their modal gene
sequences) represent quite different functionalities. One peak may
represent a glucose-glycoside hydrolase (the one on the upper right very
near the xxx mesa). Another, down on the lower left, may represent a
sequence with fatty acid synthetase activity. The whole board
represents all of sequence space, after all. But the o's of my modal
gene sequences in populations are not on random or average positions.
They are specifically on mesas of functionality. I would argue that
that is a better representation of reality than Sean's (or the best I
can make of Sean's) model of reality.

Notice that there is also some overlap in functionalities. And there is
even one potentially useful site that has no blind man (middle far
right). This is, simply put, a potential function that this particular
cell does not have, but does exist in sequence space, such as nylonase
activity or the ability to extract energy from H2S. It is certain that
this cell does not *need* this activity for survival. It is not
necessarily the case that it could not *use* it, although that may also
be true. One does not, after all, *need* nylonase activity in all
possible environments. In my model, all the blind men are moving around
their respective mesas. Some may even take a few steps downslope by
accident. It is highly unlikely that a *randomly chosen* one of these
10,000 pre-existing blind men (and 10,000 does not seem to be an absurd
number for the number of different genes) will find, by such a walk, the
spots marked xxx.

But evolution does not work by a *randomly* chosen or *average* blind
man wandering through functionless space to chance upon the xxx's in my
model of sequence space. In particular, in my model, there is, compared
to Sean's model, a definite, obvious, and intuitive clustering of
functionally useful sequences. That is, the cluster that overlaps the
xxx sequence is not some random sequence with some random function. It
is, let's say, a glucose-based glycoside hydrolase with no selectable
beta galactosidase activity. In my model, such a hydrolase is not
randomly present in the sequence space, but is specifically likely to be
clustered close to those sequences that do have selectable beta
galactosidase activity.

In fact, even if one started with a randomly chosen blind man starting
at some place on the flats between useful sequences, if that sequence
were ever to become useful, it would do so by climbing the nearest mesa
that has no blind man on top (that blind man's landscape does not
include already occupied mesas) using the simple rules I described.

Another thing that is not represented diagramatically is the role of
duplication. A duplicate of the blind man on the mesa close to the
xxx's does not have the same position as the original blind man (which
is already at the top of the mesa). It is, instead, often at a position
that is close to the flatland (that is, one copy, the one I call the
'duplicate' is functionally redundant rather than functionally useful).
Thus, when this redundant blind man takes a step toward the xxx's he
is not taking a step down, but a step up. The landscape for this man is
different than the landscape for the identical clone of this man. This,
of course, is hard to represent in a simple plane.

> Some of the ice-cream cones may be more
> positively selectable than others (i.e., perhaps the man likes
> vanilla more than chocolate). However, all positive peaks are
> represented in this case by an ice-cream cone.
>
>
>> Up is good. Down is bad.
>
>
> Ice-cream cone = Good or "Up" (to one degree or another) or even
> neutral depending upon one's current position as it compares to one's
> previous position. For example, once you have an ice cream, that is
> good. But, all changes that maintain that ice cream but do not gain
> another ice cream are neutral.
>
> No ice-cream cone = "Bad", "Down", or even "neutral" depending upon
> one's current position as it compares to one's previous position.
>
>
>> Flat is neither good nor bad.

This position appears to represent ice cream cones as an all-or-nothing
phenomenon. There are no possible intermediate states in this model.
It looks like a flat plain with telephone poles. In short, it looks
like an artificial landscape, not a real one.


>
> Exactly. Flat is neutral. The more neutral space between each
> "good" upslope/ice-cream cone, the longer the random walk. The
> average distance between each selectable "good" state translates into
> the average time required to find such a selectable state/ice-cream
> cone. More blind men searching, like 10,000 of them, would cover the
> area almost 10,000 times faster than just one blind man searching
> alone. However, at increasing levels of complexity the flat area
> expands at an exponential rate.

How does one determine, in mathematical terms, "level of complexity"?
The reason I did not draw landscapes of sequence space at a given
"level of complexity", rather than at a given amino acid number, is
that I have no idea how one determines "level of complexity".
Why do I have to keep asking that question? And what is your evidence
that increasing levels of complexity causes a change in the ratio of
utile to useless sequence? How do you determine the ratio of utile (for
*any function*) to useless (for *any function*) sequence in any case?
What is it that prevents clustering of functionally related sequences in
your landscape?

> In order to keep up and find new
> functions at these higher levels of functional complexity, the
> population of blind men will have to increase at an equivalent rate.

Only if you think the blind men start at random positions and go to a
sequence which is randomly placed wrt their position.

> The only problem with increasing the population is that very soon the
> local environment will not be able to support any larger of a
> population. So, if the environment limits the number of blind men
> possible to 10,000 - that's great if the average neutral distance
> between ice-cream cones in a few miles or so, but what happens when,
> with a few steps up the ladder of functional complexity, the neutral
> distance expands to a few trillion miles between each cone, on
> average? Now each one of your 10,000 blind men have to search around
> 50 million sq. miles, on average, before the next ice-cream cone or a
> new cluster of ice cream cones will be found by even one blind man in
> this population.

Could you explain the relevance of the above model to the real world?
Why does the *average* distance between a *random* site and a
*teleologically determined* site matter? Wouldn't the distance between
the blind man closest to a teleologically determined site and that site
be more important and relevant? We are not interested in the odds of
the *average* sequence changing into the teleologically determined one.
We are interested in the *best* odds of *any* existing sequence
changing into the teleologically determined one. The best odds are
those of the pre-existing sequence that is closest to the end sequence
and has nothing to do with the odds of an average or random sequence
becoming the end sequence.
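
The nearest-versus-average point is easy to check numerically. A
Python sketch, assuming random 100-residue sequences over a 20-letter
alphabet as a stand-in for a cell's 10,000 pre-existing genes (every
parameter here is invented):

import random

ALPHABET, LENGTH, GENES = 20, 100, 10_000
random.seed(0)

def new_seq():
    return tuple(random.randrange(ALPHABET) for _ in range(LENGTH))

target = new_seq()
starts = [new_seq() for _ in range(GENES)]
dists = [sum(a != b for a, b in zip(s, target)) for s in starts]

print("mean distance:", sum(dists) / GENES)   # about 95 of 100 positions
print("min  distance:", min(dists))           # noticeably smaller
# Evolution's odds ride on the best-placed starting sequence, not the
# average one - and if functional sequences also cluster by family, as
# argued here, the nearest start is closer still.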

>> Keep walking in all cases.
>
>
> They keep walking alright - a very long ways indeed before they reach
> anything beneficially selectable at anything very far beyond the
> lowest levels of functional complexity.

Only if they started from random spots in sequence space. If the blind
man who is closest to the xxx starts walking, he will quickly, by the
simple rules I invoked, find his way up the Mt. Improbable right next door.
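
The difference between the two landscapes shows up immediately in a
toy hill-climber (invented landscape, invented numbers). On a graded
slope, fitness counts the positions matching a target string, so the
simple rules - up good, down bad, flat keep walking - pull the walker
in; on an all-or-nothing "telephone pole" landscape, every step is a
blind neutral step:

import random
random.seed(2)

LENGTH, LETTERS = 20, 4
target = [random.randrange(LETTERS) for _ in range(LENGTH)]

def matches(s):
    return sum(a == b for a, b in zip(s, target))

def climb(graded, max_steps=200_000):
    s = [random.randrange(LETTERS) for _ in range(LENGTH)]
    for step in range(1, max_steps + 1):
        t = list(s)
        t[random.randrange(LENGTH)] = random.randrange(LETTERS)
        if graded:
            keep = matches(t) >= matches(s)   # up good, flat keep walking
        else:
            keep = True   # all states off the target are equally "flat"
        if keep:
            s = t
        if matches(s) == LENGTH:
            return step
    return max_steps

print("graded slope:  ", climb(True))    # typically a few hundred steps
print("telephone pole:", climb(False))   # typically exhausts the cap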

>> It would not take too long for these 10,000 blind men to be found
>> in decidedly non-random places (the high mesas of functional
>> utility where they are wandering around the flat tops if you
>> haven't guessed).


> There is a funny thing about these mesas. At low levels of
> complexity, these mesas are not very large. In fact, many of them
> are downright tiny - just one or two steps wide in any direction and
> a new, higher mesa can be reached. However, once a blind man finds
> this new, higher mesa (representing a different type of
> function at a higher level of specified complexity) and climbs up onto
> its higher surface, the distance to a new mesa at the same height or
> taller is exponentially greater than it was at the lower levels of
> mesas.
>
> ___ __ __ _
> _-_ __ __-_ _-_- -__-__-_- _-__-_-_-__-
> -_- _-_-_ _-_-__
>

Can you make the above make sense? Remember that, in my model, the usual
way for a blind man to move involves a change in the landscape or the
presence of a duplicate blind man who is now redundant and for whom the
landscape is differently shaped.

>> And the ice cream cones (the useful functions), remember, are not
>> randomly distributed either. They are specifically at the tops of
>> these mesas as well. That is what a fitness landscape looks like.
>
>
> Actually, the mesa itself, every part of its surface, represents an
> ice cream cone. There is no gradual increase here. Either you have
> the ice-cream cone or you don't.

I.e., your model is of a flat plain with telephone poles where there
cannot be intermediacy in function. Where an enzyme cannot mutate or
generate a closely related sequence with 50% of optimal activity. Or
10%. In your model, it is indeed all-or-nothing. That is what I get
from this discussion. Am I right? [The reason I ask is that I will
want to compare your model and mine with reality -- that is, test the models
against the evidence of nature -- to see which is closer to the way that
real organisms and real enzymes and real systems of change work.]

> If you don't have one that is even
> slightly "good"/beneficial, then you are not higher than you were to
> begin with and you must continue your random walk on top of the flat
> mesa that you first started on (i.e., your initial beneficial
> function(s)).

"Good/beneficial" is not an absolute value. It is a relative value. It
is "better than". Indeed, in an *unchanged* selective environment, it
is unlikely that there will be a mesa of higher utility arising out of
an original mesa that will not have already been discovered by a random
walk which retains the original activity or function at each step. What
your model seems to indicate is something quite different. Your
landscape is like a flat plain with telephone poles, and you seem to say
that the only way to reach a new telephone pole is to climb down and
wander the flatlands blindly. That is, one first completely loses all
functional utility and wanders functionless space.

In my model, it may be that there is, in fact, a mesa newly arisen out
of an original mesa that suddenly looks more attractive than the
original. This would be a consequence of a change in environment. An
example of this would be the conversion of ebg to lactase due to a
change in environment that made lactase activity far more beneficial
than the original activity of ebg. Or it could be due to the production
of a redundant duplicate, with the duplicate being free to explore new
nearby upward directions that the original could not, because its
function was too valuable. Or it could be a new function for an old
protein that appears by a change in regulation (as in eye crystallins).
But, then, I don't see any changes that must necessarily involve long
selectively neutral walks. I only see walks to related structures in a
cluster that has related functions or emergent functions of old
structures. And I envision a *real* landscape, not a flat plain with
telephone poles.

>> If this topography of utility only changed slowly, at any given
>> time it would appear utterly amazing to Sean that the blind men
>> will all be found at these local high points or optimal states (the
>> mesas licking the ice cream cones on them) rather than being
>> randomly scattered around the entire surface.
>
>
>
> If all the 10,000 blind men started at the same place, on the same
> point of the same mesa, and then went out blindly trying to find a
> higher mesa than the one they started on, the number that they found
> would be directly proportional to the average distance between these
> taller mesas.

My model does no such thing. The 10,000 blind men are found on
functionally useful mesas that are scattered *in clusters* throughout
sequence space. You seem to think that I am thinking that each blind
man represents an individual organism. I am thinking of each blind man
representing a modal sequence in a population of organisms and their
scatter representing the pattern of real functional cell sequences in
sequence space. That is because evolution of new function by sequence
change does not start from some arbitrary set of random sequences. It
starts with already useful sequences in already functioning cells. Each
mesa does something different; each cluster in a mesa or cluster of
related mesas does something related to what other members of the
cluster do. Each blind man is moving around his functional mesa.
Probability says that the blind man in the cluster closest to the new
sequence is the most likely one to find a new mesa optimum. Not some
random blind man. The average density of mesas is irrelevant to the
odds of some blind man finding a new solution or sequence. All that
matters is how far the nearest sequence with a blind man is and whether
the environmental landscape has changed to favor movement from current
optima.

> If the density of taller mesas, as compared to the one
> they are now on, happens to be, say, one every 100 meters, then they
> will indeed find a great many of these in short order. However, if
> the average density of taller mesas, happens to be one every 10,000
> kilometers, then it would take a lot longer time to find the same
> number of different mesas as compared to the number the blind men
> found the first time when the mesas were just 100 meters apart.

My whole point is that *average* distance from an *average* blind man is
utterly irrelevant to reality. It is not wrong. It is irrelevant.

>> They reached these high points (with the ice cream) by following a
>> simple dumb algorithm.
>
>
> Yes - and this mindless "dumb" algorithm works just fine to find new
> and higher mesas if and only there is a large average density of
> mesas per given unit of area (i.e., sequence space). That is why it
> is easy to evolve between 3-letter sequences. The ratio/density of
> such sequences is as high as 1 in 15. Any one mutating sequence will
> find a new 3-letter sequence within 15 random walk steps on average.
> A population of 10,000 such sequences (blind men) would find most if
> not all the beneficial 3-letter words (ice-cream cones) in 3-letter
> sequence space in less than 30 generations (given that there was one
> step each, on average, per generation).

Notice that you are starting with a *random* 3-letter sequence and
asking how many steps would be required for it to reach another *random*
specified 3-letter sequence by a *random* walk with no intermediate
utility. That is the mathematical argument you repeatedly say you are
NOT making, but repeatedly insist on doing. That model is not wrong.
It is irrelevant.

> This looks good so far now doesn't it? However, the problems come as
> you move up the ladder of specified complexity. Using language as
> an illustration again, it is not so easy to evolve new beneficial
> sequences that require say, 20 fairly specified letters, to transmit
> an idea/function. Now, each member of our 10,000 blind men is going
> to have to take over a trillion steps before success (the finding of
> a new type of beneficial state/ice cream cone) is realized for just
> one of them at this level of complexity.

This does fit the model I presented as my interpretation of what you
said. I think that model has very little relationship to either the
reality of sequence space or the mechanisms of evolution. It is nothing
but the tornado whipping together a 747 argument gussied up so she
doesn't look like the old decrepit whore she is.


>
> Are we starting to see the problem here? Of course, you say that
> knowledge about the average density of beneficial sequences is
> irrelevant to the problem, but it is not irrelevant unless you, like
> Robin, want to believe that all the various ice-cream cones
> spontaneously cluster themselves into one tiny corner of the
> potential sequence space AND that this corner of sequence space just
> so happens to be the same corner that your blind men just happen to
> be standing in when they start their search. What an amazing stroke
> of luck that would be now wouldn't it?

I do think that sequences cluster by functional attributes. That is,
enzymes that hydrolyze glycoside linkages will all have similar
sequences or at least sequences that produce similar 3-D structures with
a *few* key sites being strongly conserved. Why do you think otherwise?

>> But you were wondering how something new could arise *after* the
>> blind men are already wandering around the mesas? The answer is
>> that it depends. They can't always do so.
>
>
> And why not Howard? Why can't they always do so? What would limit
> the blind men from finding new mesas?

The fact that the blind men (the modal sequence of a population) are
already on mesas of utility. Usually a change in functional or
selective landscape is required in the vicinity of a blind man to allow
him to reach a different peak by following the simple rules.

> I mean really, each blind man
> will self-replicate (hermaphrodite blind men) and make 10,000 new
> blind men on the mesa that he/she/it now finds himself on. This new
> population would surely be able to find new mesas in short order if
> things worked as you suggest.

If the change is positively *selective*, the walk of the blind man (the
modal population sequence) to the goal will indeed be rapid. But
neutral drift of a modal population sequence is not a fast process. If
it requires a few steps downward before hitting a new upward slope to a
different function the process will be quite episodic.

> But the problem is that if the mesas
> are not as close together, on average, as they were at the lower
> level where the blind men first started their search, it is going to
> take longer time to find new mesas at the same level or higher. That
> is the only reason why these blind men "can't always" find "something
> new". It has to do with the average density of mesas at that level.

Average density of a specified end only has meaning if one is
envisioning evolutionary searches as a random search for a specified end
from a random or average position. Evolutionary searches that succeed
never or rarely (nylonase, perhaps) start from a random or average site.
They start from a site close to the destination. And since functional
sequences do seem to be clustered rather than randomly scattered across
sequence space, it is not unusual for the starting point of *successful*
evolutionary inventions to be nearby.

>> But remember that these pre-existing mesas are not random places.
>> They do something specific with local utility.
>
>
> The mesas represent sequences with specific utilities. These
> sequences may in fact be widely separated mesas even if they happen
> to do something very similar. Really, there is no reason for the
> mesas to be clustered in one corner of sequence space. A much more
> likely scenario is for them to be more evenly distributed throughout
> the potential sequence space.

Choose your poison. If mesas are clustered, in fact, reaching a new mesa
that is far away from any cluster becomes more difficult because o's
(the blind men or modal population sequences) are clustered on
pre-existing mesas. Reaching it may require a chance event like the one
that produced nylonase, or one that forms a chimeric protein, rather
than a stepwise change of single nucleotides, as would be possible if
the new mesa were in the same functional family. Or the change may
simply be impossible for that organism.

If mesas are *evenly* spread throughout sequence space, that still
doesn't change the fact that the distance between the *average*
pre-existing mesa, with its blind man, and the new mesa is irrelevant
compared to the distance between the *nearest* pre-existing mesa and the
new mesa. Evolution to the new mesa won't come from some *average* mesa
or some mesa on the other side of sequence space. It will come from
mesas that are closest to the new one.

> Certainly there may be clusters of
> mesas here and there, but on average, there will still be a wide
> distribution of mesas and clusters of mesas throughout sequence space
> at any given level. And, regardless of if the mesas are more
> clustered or less clustered, the *average* distance between what is
> currently available and the next higher mesa will not be
> significantly affected.

No. It will be utterly irrelevant.

>> Let's say that each mesa top has a different basic *flavor* of ice
>> cream. Say that chocolate is a glycoside hydrolase that binds a
>> glucose-based glycoside. Now let's say that the environment
>> changes so that one no longer needs this glucose-based glycoside
>> (the mesa sinks down to the mean level) but now one needs a
>> galactose-based glycoside hydrolase.
>
>
> You have several problems here with your illustration. First off,
> both of these functions are very similar in type and use very similar
> sequences.

No kidding! Who would have thunk that blind evolution would choose to
evolve a lactase from a closely related sequence rather than from some
random or average sequence or from an alcohol dehydrogenase? Surely not
Sean.

> Also, their level of functional complexity is relatively
> low (like the 4 or 5 letter word level). Also, you must consider
> the likelihood that the environment would change so neatly that
> galactose would come just when glucose is leaving. Certainly if you
> could program the environment just right, in perfect sequence,
> evolution would be no problem.

A concentration gradient would suffice; that would provide environments
in which the original strain could grow and also a new niche would be
open for any variant able to exploit it. The environment, of course,
only selects among existing variants, so the selectable change would
have to have already happened.

> But you must consider the likelihood
> that the environment will change in just the right way to make the
> next step in an evolutionary sequence beneficial when it wasn't
> before. The odds that such changes will happen in just the right way
> on both the molecular level and environmental level get exponentially
> lower and lower with each step up the ladder of functional
> complexity.

How does one calculate "functional complexity" so that one knows what
rung of the ladder one is talking about?

> What was so easy to evolve with functions requiring no
> more than a few hundred fairly specified amino acids at minimum, is
> much much more difficult to do when the level of specified complexity
> requires just a few thousand amino acids at minimum.

What do these numbers of amino acids mean wrt "level of specified
complexity"? How does one determine that there are a "few hundred
fairly specified amino acids" required for a change in function?
Especially since function can change without changing *any* amino acids
(see the eye crystallins).

> It's the
> difference between evolving between 3-letter words and evolving
> between 20-letter phrases. What are the odds that one 20-letter
> phrase/mesa that worked well in one situation will sink down with a
> change in situations to be replaced by a new phrase of equal
> complexity that is actually beneficial? - Outside of intelligent
> design? That is the real question here.

Well, it would help if you would actually tell us what your meaningless,
gobbledygook, hand-waving terms actually meant and how they could be
operationally quantified.

>> Notice that the difference in need here is something more like
>> wanting chocolate with almonds than wanting even strawberry, much
>> less jalapeno or anchovy-flavored ice cream. The blind man on the
>> newly sunk mesa must keep walking, of course, but he is not
>> thousands of miles away from the newly risen mesa with chocolate
>> with almonds ice cream on top.
>
>
> He certainly may be extremely far away from the chocolate with
> almonds as well as every other new type of potentially beneficial ice
> cream depending upon the level of complexity that he happens to be at
> (i.e., the average density of ice-creams of any type in the sequence
> space at that level of complexity).

That is certainly counter-intuitive and counter to the evidence that
related functions tend to have related sequences (be in gene families).

>> Changing from one glucose-based glycoside hydrolase to one with a
>> slightly different structure is not the same as going from
>> chocolate to jalapeno or fish-flavored ice cream. Not even the same
>> as going from chocolate to coffee. The "island" of chocolate with
>> almonds is *not* going to be way across the ocean from the "island"
>> of chocolate.
>
>
> Ok, lets say, for arguments sake, that the average density of
> ice-cream cones in a space of 1 million square miles is 1 cone per
> 100 square miles. Now, it just so happens that many of the cones are
> clustered together. There is the chocolate cluster with all the
> various types of chocolate cones all fairly close together. Then,
> there are the strawberry cones with all the variations on the
> strawberry theme pretty close together. Then, there is the . . .
> well, you get the point. The question is, does this clustering of
> certain types of ice creams help in traversing the gap between
> these clustered types of ice creams?

It certainly reduces the distance needed to go from chocolate to
chocolate with almonds. But why would anyone think that evolution works
by converting an alcohol dehydrogenase into a glycoside hydrolase rather
than by modifying one glycoside hydrolase into a different one?

> No it doesn't. If anything,
> the clustering only makes the average gap between clusters wider.
> The question is, how to get from chocolate to strawberry or any other
> island cluster of ice creams when the average gap is still quite
> significant?

What evidence do you have that one ever *needs* to convert vanilla into
chocolate? There are, of course, evolutionary mechanisms for making
vanilla/chocolate swirl (chimeric hybrid formation). But we who prefer
our hypotheses to be realistic leave the converting of vanilla into
chocolate to the alchemists and magicians.

> You see, the overall average density of cones is still significant to
> the problem no matter how you look at it. Clustering some of them
> together is not going to help you find the other clusters - unless
> absolutely all of the ice cream islands are clustered together as
> well in a cluster of clusters all in one tiny portion of the overall
> potential space. This is what Robin is trying to propose, but I'm
> sorry, this is an absolutely insane argument outside of intelligent
> design. How is this clustering of clusters explained via mindless
> processes alone?

No. I strongly suspect he is proposing a situation close to the one I am
proposing: where cells have functions, these functions are clustered,
and sequences form gene families rather than being scattered in sequence
space. And where the relevant distance is the one from the current
sequence *closest* to the end sequence, not the distance from some
random current sequence.


>
>
>> It will be nearby where the blind man is. *And* because chocolate
>> with almonds is now the need, it will also be on the new local high
>> mesa (relative to the position of the blind man on the chocolate
>> mesa). The blind man need only follow the simple rules (Up good.
>> Down bad. Neutral neutral. Keep walking.) and he has a good chance
>> of reaching the 'new' local mesa top quite often.
>
>
> And what about the other clusters? Is the environment going to
> change just right a zillion times in a row so that bridges can be
> built to the other clusters?

You obviously have missed the fact that I am modelling a real organism
which has more than one gene sequence. Who knows what you are modelling?


>
>
>> And remember that there is not just one blind man on one mesa in
>> this ocean of possible sequences. There are 10,000 already present
>> on 10,000 different local mesas with even more flavors than the 31
>> that most ice cream stores offer. Your math always presupposes that
>> whenever you need to find, say, vanilla with cherry the one blind
>> man starts in some random site and walks in a completely random
>> fashion (rather than by the rules I pointed out) across half the
>> universe of sequence space to reach your pre-determined goal by
>> pure dumb luck to find the perfect lick.
>
>
> That is not my position at all as I have pointed out to you numerous
> times. It seems that no matter how often I correct you on this straw
> man caricature of my position you make the same straw man
> assertions. Oh well, here it goes again.
>
> I'm perfectly fine with the idea that there is not just one man, but
> 10,000 or many more men already in place on different mesas that are
> in fact selectably beneficial. In fact, there may be 10,000 or more
> men on each of 10,000 mesas. That is all perfectly fine and happens
> in real life. When something new "needs to be found", say, "vanilla
> with a cherry on top" or any other potentially beneficial function at
> that level of complexity or greater (this is not a teleological
> search you know since there are many ice-cream cones available), all
> of the men may search at the same time.

First, we seem to differ on the meaning of the "blind man". You seem to
be thinking of it as an organism. I am thinking of it as a modal
sequence in a population. There will be continual mutations producing
variants from this modal sequence, but change in the modal sequence
itself is determined by selection (which is fast when it occurs) or
neutral drift (which is slow but more frequent). Neither zooms around
the mesa, but both explore it and also keep the blind man on the top
until or unless the geography changes.

> But the math still proposes that these blind men are placed in random
> spots away from the desired sequence and must search all of sequence
> space to find the desired sequence.

And I beg to differ. You choose an end point and claim that some
average or random sequence must end up there and nowhere else. I see a
landscape filled with different mesas, each requiring a different
cluster of sequences for quite different functional results. A randomly
chosen blind man is not in a landscape that is flat except for the
teleologic goal you decided upon. The sequence space landscape of a
cell certainly has many (as many as 30,000-70,000 in humans -- think
gene number) mesas and presumably many other unused or potential
mesas (sets of sequences) for functions that, at the present time, have
no selective value in humans. The landscape you describe is devoid of
contour except for a telephone pole at the teleologic goal. I would
expect a randomly chosen blind man that is not already on a mesa to walk
until it came across the first unoccupied slope available to it. Such a
slope would only exist when there is a selectable function that is not
currently occupied. It would climb that slope. Odds are that any such
slope leading to function will not be randomly distant. It will be
close by.


> My math certainly does not and never did presuppose that only one man
> may search the sequence space. That is simply ridiculous.

I find it ridiculous that your math always assumes that the starting
point of any successful finder of the desired mesa must be, on average,
at an average or random position in sequence space. My position is that
out of all the useful (or even useless) sequences that do exist in a
cell at any given time, some will, even if only by chance, be much
closer to the desired mesa than others and *especially* much closer than
the average or random position. My point is that any *successful* random
walk will most likely start from one of these closest positions rather
than from some average or random position. That means that all your
determinations of average distance have no relevance to any successful
walk. Only the positions of outliers close to the goal count.
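
A toy simulation makes the point concrete. The following is a minimal
sketch (Python, mine, not anyone's model from this thread): random
binary strings stand in for sequences, Hamming distance for the length
of the walk, and the sizes are arbitrary:

import random

L = 1000        # sequence length (arbitrary)
N = 10_000      # number of pre-existing starting sequences ("blind men")
random.seed(1)

target = [random.randint(0, 1) for _ in range(L)]

def hamming(seq):
    # positions at which seq differs from the target
    return sum(a != b for a, b in zip(seq, target))

starts = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
dists = [hamming(s) for s in starts]

print("average distance:", sum(dists) / N)   # close to L/2 = 500
print("nearest distance:", min(dists))       # markedly smaller

Even with *uniformly random* starting points, the nearest of 10,000 sits
well inside the average; if starting points cluster around existing
functions, as gene families suggest they do, the gap between "average"
and "nearest" only widens.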

> All the
> men search at the same time (millions and even hundreds of billions
> of them at times). The beneficial sequences are those sequences that
> are even slightly better than what is currently had by even one
> member of the vast population of blind men that is searching for
> something new and good.

Well, I would require significantly better in my simple algorithm.
Slightly better may or may not be significant.

> Now, if the average density of something new and good that is even
> slightly selectable as new and good is less than 1 in a trillion
> trillion, even 100 billion men searching at the same time will take a
> while to find something, anything, that is even a little bit new and
> good at the same level of specified complexity that they started
> with. On average, none of the men on their various mesas will be very
> close to any one of the new and good mesas within the same or higher
> levels of sequence space if the starting point is very far beyond the
> lowest levels of specified complexity.

Again with this "lowest levels of specified complexity" bullshit
verbiage. How do you measure this? The above is meaningless mantra to
avoid clear thought.
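
For what it is worth, the arithmetic in the quoted paragraph can be made
explicit. This sketch simply takes the quoted figures at face value and
grants the uniform-sampling model under dispute; it endorses neither:

density   = 1e-24    # "less than 1 in a trillion trillion"
searchers = 1e11     # "100 billion men"
# expected samples before one success, if every trial is an independent
# uniform draw from sequence space -- the disputed assumption:
steps = (1 / density) / searchers
print(steps)         # ~1e13 steps, only under that model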

>> My presumption is that the successful search is almost always going
>> to start from the pre-existing mesa
>
>
> Agreed.
>
>
>> with the closest flavor to the new need (or from a duplicate,
>> which, as a duplicate, is often superfluous and quickly erodes to
>> ground level in terms of its utility).
>
>
> This is where we differ. Say you have chocolate and vanilla.
> Getting to the different varieties of chocolate and vanilla is not
> going to be much of a problem. But, say that neither chocolate nor
> vanilla are very close to strawberry or to each other. Each cluster
> is separated from the other clusters by thousands of miles. Now,
> even though you already have two clusters in your population, how are
> you going to evolve the strawberry cluster if an environmental need
> arises where it would be beneficial?

What makes you think evolution does this? To do that by the mechanism
of many single steps you would be starting out with a *random* sequence
relative to the end sequence. Now it *can* happen that large gaps can
be crossed (vide nylonase or the formation of chimeric proteins by
duplication). But these examples did not happen by the mechanism of a
trip of a thousand steps, changing one nucleotide at a time. These
examples happened in one swell foop by a single mutational event
involving changes in many nucleotides all at once to jump the sequence
close to or within where the slope goes up the mesa.

Some changes in what you might call 'complexity' do involve two
pre-existing subsystems that have independent utility combining to form
a new structure with a different utility. Usually the combining does
not involve many thousands of changes, but only those that cause
association between proteins to occur.

> You see, you make the assumption that just because you start out with
> a lot of clusters that any new potentially beneficial sequence or
> cluster of sequences will be fairly close to at least one of your
> 10,000 starting clusters.

Yes. Because the starting clusters are not sequences devoid of function,
but sequences that already perform biologically useful functions.

> This is an error when you start
> considering levels of sequence space that have very low overall
> densities of beneficial sequences.

How do you determine this, again?

> No matter where you start from
> and no matter how many starting positions you have to begin with,
> odds are that the vast majority of new islands of beneficial
> sequences will be very far away from everything that you have to
> start with beyond the lowest levels of functional complexity.

And how is it possible for every place in sequence space to be equally
far away from the new islands of beneficial sequences? Are the
beneficial islands in the center of a sphere and all other sequences on
the surface? Or perhaps the desired sequence is on one side of a Mobius
strip and all other sequences are on the other side. The geometry of
this is most intriguing. Could you describe a plane of sequence space
where all possible alternative sequences will be very far away from the
desired sequence?

And I have no way of determining how "levels of functional complexity"
fit into this plane of sequence space. Those words seem to have no
meaning whatsoever, except that they are used by you as a mantra to ward
away what you think is evil.


>
>
>> As mentioned, these pre-existing mesas are not random pop-ups.
>> They are at the most useful places in sequence space from which to
>> try to find near-by mesas with closely-related biologically useful
>> properties because they already have biologically useful
>> properties.
>
>
> Yes, similar useful biological properties would all be clustered
> together under one type of functional island of sequences. However,
> the overall density of beneficial sequences in sequence space
> dictates how far apart, on average, these clusters of clusters will
> be from each other. New types of functions that are not so closely
> related will most certainly be very far away from anything that you
> have to start with beyond the lowest levels of functional complexity.
> You may do fine with chocolate and vanilla variations since those are
> what you started with, but you will have great difficulty finding
> anything else, such as strawberry, mocha, caviar, etc . . .

So when do you think this sort of leap from strawberry to mocha must
occur? Oh, I know. It must occur whenever you perceive the change must
involve something "beyond the lowest levels of functional complexity".
But those are just meaningless words.


>
> The suggestion that absolutely all of the clusters are themselves
> clustered together in a larger cluster or archipelago of clusters in
> a tiny part of sequence space is simply a ludicrous notion to me -
> outside of intelligent design that is. Oh no, you, Robin, Deaddog,
> Sweetness, Musgrave, and all the rest will have to do a much better
> job of explaining how all the clusters can get clustered together
> (when they obviously aren't) outside of intelligent design.

Not all sequence space is biologically useful. The clusters that exist
in cells are biologically useful.


>
>
>> I *do* expect to see clustering in useful sequences. And I *do*
>> see it.
>
>
> So do I. Who is arguing against this? Useful sequences are often
> clustered around a certain type of function. What I am talking about
> is evolution between different types of functions.

The change in function from enzyme to lens crystallin, for example?
Wasn't that a change between two quite different types of function?

> The evolution of
> different sequences with the same basic type of function is not an
> issue at all. It happens all the time, usually in the form of an
> up-regulation or down-regulation of a certain type of function, even
> at the highest levels of functional complexity.

Or changing substrates? Or connecting a pre-existing neural pathway to
a change in rhodopsin? Or keeping substrates but changing enzymatic
activity? Or binding two proteins together in a specific stoichiometry?
Which of these is a change in "the highest levels of functional
complexity"? And could you show the math that allowed you to identify
the change as one at "the highest levels of functional complexity"?

> But, this sort of
> intra-island evolution is a far cry from evolving a new type of
> function (i.e., going from one cluster to another). In fact, this
> sort of evolution never happens beyond the lowest levels of
> functional complexity due to the lack of density of beneficial
> sequences at these higher levels of specified complexity.

Why is the density of beneficial sequences any lower or higher dependent
upon the "level of specified complexity"?

david ford

unread,
Jan 16, 2004, 11:52:16 PM1/16/04
to
Sean Pitman <seanpi...@naturalselection.0catch.com> in
"Re: Is Evolution an Anti-God Theory?" on 1 Jun 2003:

> It seems to me that the concept of IC is quite helpful indeed. The
> problem is that many, even Behe himself, seem to try to limit the
> definition of IC to "very complex" systems of function in order to
> show that IC systems cannot evolve.
>
> As I see it all systems of function are IC. It is just that some
> systems are more simple than other systems of function. There is a
> spectrum of complexity, but all systems along this spectrum from
> simple to more and more complex are all IC. In other words, not all
> setups of a given number of parts or part types will be able to
> perform a given function. The parts in any system of function can in
> fact be altered, removed, or ordered in a different manner so that the
> function of the system is completely destroyed. In fact, there are
> vastly more non-functional potential arrangements of parts than there
> are beneficially functional arrangements of parts in a particular
> scenario.

Compare Dawkins on ways of being dead
http://tinyurl.com/2aov5
aka
http://www.google.com/groups?selm=Pine.SGI.3.96A.990406232938.942967A-100000%40umbc8.umbc.edu

> Take, for example, Behe's famous mousetrap IC illustration. Many try
> to argue that a mousetrap is not IC since parts can be removed or
> changed and it still can catch mice. That is not the issue. If you
> change the mousetrap, it may still catch mice, but not in the same
> way. The changed mousetrap is a different mousetrap that catches mice
> in a different way. Certainly there are many different kinds of
> mousetraps that can catch mice, some more effectively than others.
> However, all of these mousetraps are dependent upon a certain number
> of parts that are all arranged in a very specific way in order for
> these parts to work together to catch mice (i.e., To perform their
> function). Clearly there are a lot more arrangements of mousetrap
> parts for any given type of mousetrap that would not catch mice at
> all. All mousetraps can in fact be reduced or changed in a way that
> would destroy their function completely. And, these potential
> non-functional mousetraps are far more numerous than those
> comparatively few arrangements that can actually perform the mouse
> catching function.
>
> Of course, it is theoretically possible to arrange several of these
> working mousetraps in sequential order so that very small steps seem
> to exist as one moves from one type of trap to the other. Obviously
> then, it is NOT impossible for IC systems to evolve via function-based
> selection mechanisms since such an evolutionary path need not
> necessarily cross wide neutral gaps in function or non-function. The
> problem is that these gaps are often wider than one might initially
> think.
>
> http://naturalselection.0catch.com/Files/irreduciblemousetrap.html
>
> Even functions that are based on the workings of single proteins, such
> as the enzymatic functions of lactase or nylonase, are IC in that
> there are a limited number of parts that are required to give rise to
> that particular type of function. For more simple functions, such as
> these single-protein-based functions, there might be a much higher
> ratio of sequences of a given length or smaller that would be able to
> perform a given function, like the lactase or nylonase function. For
> example, given a sequence of amino acids 1,000aa in size, there are
> about 1 x 10e1300 different possible protein sequences. This is an
> absolutely huge number of different possibilities. It is a 1 with
> 1,300 zeros following it. Out of all of these possibilities, how many
> would have the lactase function? Certainly there would be many of
> these sequences that would have the lactase function, but certainly
> not all of them or even most of them. Perhaps the ratio would be as
> high as 1 in a trillion? If the ratio were 1 in a trillion, that
> means that any given functional lactase AA sequence would be
> surrounded by an average of 1 trillion non-lactase sequences. If a
> particular functioning lactase sequence is changed or "reduced" beyond
> a certain point, it will no longer function at all, not even a little
> bit. This is the definition of IC. The lactase function, even though
> based in the AA sequence of a single protein, is IC. Of course,
> compared to other systems of function, the lactase and nylonase single
> protein enzymes are not all that complex since there is a
> relatively high percentage of potential lactase sequences as compared
> with the total number of possible sequences out there. Because of
> this, these functions are relatively simple, requiring a relatively
> short stretch of DNA to code for their function. Other systems of
> function require multiple proteins all working together
> simultaneously. Much more DNA real estate is necessary.
>
> Before thinking about more complex systems function, such as bacterial
> motility, consider that even the evolution of the relatively simple
> lactase function is quite difficult. Barry Hall demonstrated this in
> several experiments where he deleted the lacZ genes in E. coli
> bacteria to see if they would evolve the lactase function back again
> using some other genetic sequence. And, they did evolve the lactase
> function in just one or two generations. As it turned out, a single
> point mutation to a completely different DNA sequence was able to
> produce a selectively advantageous lactase function in a lactose
> environment. Hall called this "evolved" sequence the ebg gene
> (evolved beta galactosidase gene). But, he started wondering, "If
> this worked for the deletion of the lacZ gene, what will happen if I
> delete the ebg gene too?" So, Hall deleted the ebg and lacZ genes in
> certain colonies of E. coli. What happened next is very interesting.
> These double mutant E. coli colonies never evolved the lactase
> function back again despite high population numbers, high mutation
> rates, 4 million base pairs of DNA each, positive selection pressure,
> and tens of thousands of generations.
>
> http://naturalselection.0catch.com/Files/galactosidaseevolution.html
>
> Now, why didn't Hall's double mutant E. coli colonies evolve the
> relatively simple lactase function back again? Hall himself described
> these colonies as having, "limited evolutionary potential." What was
> it that limited their ability to evolve the relatively simple lactase
> function despite very positive benefits if they were to ever evolve
> this helpful function?
>
> It seems that neutral gaps existed between what was there and what was
> needed. The genetic real estate of this huge population of E. coli
> simply was not large enough to undergo the random walk across this
> neutral gap in beneficial function despite being given thousands of
> generations.
>
> Obviously then, even such simple functions as the function of single
> proteins are IC and this can and often does create difficulties for
> mindless evolutionary processes. The problems only increase
> (exponentially) as one moves up the spectrum of complex systems.
>
> Sean
> www.naturalselection.0catch.com
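
For what it is worth, the headline number in the quoted post is easy to
check. A short verification (Python); the follow-on figure simply
applies the quoted 1-in-a-trillion ratio:

import math

# number of decimal digits in 20**1000, the count of 1,000-aa sequences
print(1000 * math.log10(20))    # ~1301, i.e. "about 1 x 10e1300"

# at the quoted ratio of 1 in a trillion, that still leaves about
# 10**1289 functional lactase sequences in the space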

Von Smith

unread,
Jan 17, 2004, 2:33:09 AM1/17/04
to
drea...@hotmail.com (Von Smith) wrote in message news:<8d74ec45.04011...@posting.google.com>...

> seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04011...@posting.google.com>...
> > "Chris Merli" <clm...@insightbb.com> wrote in message news:<lKmNb.55023$5V2.67607@attbi_s53>...

<snip>

>
> I would have been more impressed if you had written this *after*
> giving a substantive reply to Deaddog's recent excellent post on
> Synthetic Biology, which probably sheds some light on how biologists
> *really* think complex multi-protein systems might evolve. In it, he
> cites a paper in which researchers randomly switched around some of
> the parts involved in complex multi-protein interactions to see what
> they would do.
>
> Combinatorial synthesis of genetic networks.
> Guet CC, Elowitz MB, Hsing W, Leibler S.
> Science. 2002 May 24; 296(5572): 1466-70.
>
> http://www.sciencemag.org/cgi/content/full/296/5572/1466
>
> So what happens when one shakes up the regulatory bits of a biological
> system and lets them fall where they will? AIUI, far from ending up
> with nothing but random junkpiles, the researchers were able to obtain
> a variety of novel logically-functioning phenotypes. No need for some
> pre-existing homonculus magically prompting the various parts on how
> to behave: as often as not the parts were able to associate and
> interact coherently left to their own devices. Of course it is
> possible that this liberal arts major is misunderstanding the article.
> Perhaps the biologically washed can comment more coherently.
>

To be fair and accurate, I should note that the researchers did not
actually shuffle the constituent parts of their combinatorial
libraries randomly; they took a sequence of three transcriptional
regulators, and associated each one with any one of five promoters.
This yielded a "sequence space" of 125 possible arrangements of
promoters and regulators. With a combinatorial library this small,
Guet et al didn't have to sample the population; they were able to
survey all the possible combinations in the library. The point is
that, out of all these possible combinations, there was not an
overwhelming majority of junk piles with maybe two or three working
combinations, which Dr. Pitman might expect to be the case, but rather
a goodly proportion of the combinations worked coherently. In other
words, if they *had* shuffled the promoters randomly, the odds that
they would have ended up with a coherent complex system would actually
have been quite good.
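
The size of that library is easy to reproduce. A minimal sketch in
Python; the regulator names below are the three used in the paper, while
the promoter labels are placeholders of mine:

from itertools import product

regulators = ["lacI", "tetR", "cI"]          # the paper's three regulators
promoters  = ["P1", "P2", "P3", "P4", "P5"]  # placeholder labels

# one library member = one choice of promoter for each regulator
library = list(product(promoters, repeat=len(regulators)))
print(len(library))              # 125 = 5**3: small enough to survey in full

for assignment in library[:2]:   # a peek at the first two "networks"
    print(dict(zip(regulators, assignment)))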

Von Smith

unread,
Jan 17, 2004, 4:12:43 AM1/17/04
to
seanpi...@naturalselection.0catch.com (Sean Pitman) wrote in message news:<80d0c26f.04011...@posting.google.com>...

> howard hershey <hers...@indiana.edu> wrote in message news:<bu46sv$srt$1...@hood.uits.indiana.edu>...
>

<snip lengthy discourse in which the visually impaired are compelled
to wander dessert wastelands in search of ice cream mesas>

>
> And what about the other clusters? Is the environment going to change
> just right a zillion times in a row so that bridges can be built to
> the other clusters?

<snip some more>

I wanted to highlight a couple of issues that Dr. Pitman raises at the
end of this rather lengthy post. His argument here, if I understand
it correctly, is that: granted that useful sequences of a *given*
function may tend to form clusters in a sequence space, rather than
being scattered randomly and sparsely throughout it, it is nonetheless
the case that various clusters of sequences serving *different*
functions will tend to be thus widely spaced apart, so that a sequence
sitting atop a mesa of mint chocolate chip ice cream cones is likely
to be far away from any other mesas of other given flavor, such as
butter pecan. Our visually challenged hero may be able to find other
varieties of bubble gum fairly easily, but the chances of his
successfully traversing the Rocky Road to a different flavor are
vanishingly small.

Now, I suppose one way to determine if this is actually the case is to
wave one's hands (being careful not to drop that triple scoop of mint
chocolate chip) and make guesses based on personal incredulity and
weak analogies; this seems to be Dr. Pitman's preferred method of
inference and argument. Another possibility might be to see if what
we actually know about the real world could provide us with some
clues.

First, it is important to note that the underlying issue here is
whether the various complex structures we observe in life today might
have evolved from different complex structures that existed in life
yesterday. This is *not* the same question as whether any two
functions we might select at random are likely to be able to evolve
into one another. It may very well be nearly impossible for a
flagellum to evolve from, say, a Golgi apparatus or a 2,4-DNT enzyme
cascade, even given "zillions" of years. Who cares? No one is
suggesting such a thing. Structures such as Tsp pili or TTSS weren't
proposed as possible flagellum precursors at random. They interest
biologists because of actual evidence that there are significant
homologies between them and the flagellum.

To reiterate: we are not randomly selecting any two functions and
discussing whether or not they might have evolved; our sampling method
is quite biased. Now Dr. Pitman has objected on several occasions
that he is not assuming that such structures evolve from random
sequences, either. He may well be sincere in this. It may well be
that he just does not appreciate how his arguments and claimed
probability calculations are sensitive to this assumption.

So just how widely scattered are the different mesas of functions in
life today from one another, and from the different mesas of function
that existed in life yesterday and 500 Mya?

Well, my casual survey of different functions in life tells me that,
quite often, dramatically different functions can be served not only
by closely-related clusters of mesas; they can often be found on the
*same mesa*. Consider the following functions:

improving flight;
display;
insulation;
camouflage.

The exact same bird feathers perform *all* of these functions. Not
only that, but the same protein that goes into making these feathers
also goes into making the bird's beak, and is in turn essentially the
same protein found in mammal hair and reptile scales. Several
functions stacked up on the exact same mesa.

But, one might argue, structures like feathers are relatively simple.
As one goes up the ladder of complexity, however, etc. etc. Well, OK,
let's take the whole suite of adaptations that make most birds
efficient fliers, including wing shape, pneumatic bones, a
super-efficient respiratory system, etc. How likely is this complex
system of adaptations for flight to be adaptable for a different
function? There's no need to wonder, as we have actual evidence.
Penguins, with more or less the same suite of adaptations, use them to
swim instead of fly (and of course, some birds are quite proficient
both as swimmers *and* fliers).

Note that it wasn't necessarily obvious ahead of time that swimming
and flying adaptations would overlap. AFAIK, bats aren't very good
swimmers at all, nor are butterflies. Likewise, I don't expect
whales, squids, sharks, or sea tortoises, for all their swimming
prowess, to be especially good fliers, although I suppose there are
"flying" fish. I don't expect that pterodons evolved from pleiosaurs,
or vice versa. In a world without penguins or cormorants, I'm not
sure how useful our prior intuitions would be about how likely
different functions are to overlap. One has to compare actual
*structures*, and try to determine whether a reasonable evolutionary
pathway exists between a structure and some *logical* precursor, not
some randomly selected one.

OK, that was on the level of gross animal analogy. What about at the
molecular level? We already know about enzymes doubling as lens
crystallins, immunoglobins with enzyme activities, Hox genes that code
for body segmentation in one organism and hindbrain formation in
another. Blah blah blah. But of course those are relatively simple,
closely-related functions, so they don't really count. We all know
that as we move up the ladder of complexity, these sorts of
overlapping clusters of differing functions go away, or at least
become vanishingly rare, right?

Let's see. Biologists propose that a highly complex motility function
might have evolved from an ancestral secretory system. What are the
odds, one might ask, that a cluster of motility "mesas" was likely to
be somewhere close enough to a cluster of secretory "mesas" to evolve
from it? Well, using the Incredulous Hand-waving, Weak Analogy
Method, we can rigorously calculate that they are so far away from one
another that our blind ice cream enthusiast couldn't possibly travel
from one mesa to the next in less than 17.8 zillion years.

Surprisingly, however, the Actually Look At the Evidence Method yields
a dramatically different result: it turns out that the flagellum
actually *is* also a secretory system. Not only are the clusters of
mesas not widely scattered, they overlap. At least one of our blind
man's ice cream cones already has more than one flavor stacked on top of
it.

Further application of the ALAE method yields even more discrepancies
with the IHWA method: when we do phylogenetic analyses of genes to
determine where they *do* actually cluster, we find gene families and
super-families that encompass a variety of different functions, while
the set of genes serving the same function might be scattered among
different families or "clusters" of genes.

So what assurance do we have, Dr. Pitman might demand, that no matter
where our blind heroes start out, that at least some of them will
always find a way to get their cherry cheesecake fix? None
whatsoever. And in fact the current lowlands of our dessert landscape
will be littered with the bleached bones of failed Cookies & Cream
aficionados. It may well be that the overwhelming majority of all
clusters of *possible* functions is out in the uncharted territory
that our natural history will never see. But we don't care about
those. Our problem is simply to locate the clusters of functions that
actually exist, and to compare them to one another, and to the clusters
of functions that actually existed yesterday. When we do this, we
find evidence of precisely the sort of inter-function clustering that
Dr. Pitman finds unlikely.

The Sandman has given me a past due notice, so I will stop for now.

Von Smith
Fortuna nimis dat multis, satis nulli.


Zachriel

unread,
Jan 17, 2004, 2:41:06 PM1/17/04
to
"Sean Pitman" <seanpi...@naturalselection.0catch.com> wrote in message
news:80d0c26f.03120...@posting.google.com...
> So, what you
> "start with" is quite important to determining what is and what is
> not beneficial. Then, beyond this, say you start with a short
> sequence, like a two or three-letter word that is defined or
> recognized as beneficial by a much larger system of function, such as
> a living cell or an English language system. Try evolving this short
> word, one letter at a time, into a longer and longer word or phrase.
> See how far you can go. Very quickly you will find yourself running
> into walls of non-beneficial function.

First you made this challenge. I responded with a word puzzle where,
starting with the single letter word "O", and by only changing one letter at
a time, and with concatenation, I constructed the phrase, "Beware a war of
words, Sean Pitman, ere you err."


"Sean Pitman" <seanpi...@naturalselection.0catch.com> wrote in message

news:80d0c26f.04011...@posting.google.com...
<snipped>

> For example, start with a meaningful English word and then add to or
> change that word so that it makes both meaningful and beneficial sense
> in a given situation/environment. At first such a game is fairly easy
> to do. But, very quickly you get to a point where any more additions
> or changes become very difficult without there being significant
> changes happening that are "just right". The required changes needed
> to maintain beneficial meaning with longer and longer phrases,
> sentences, paragraphs, etc., start to really get huge. Each word has
> a meaning by itself that may be used in a beneficial manner by many
> different types of sentences with completely different meanings.
> Although the individual word does have a meaning by itself, its
> combination with other words produces an emergent meaning/function
> that goes beyond the sum of the individual words. The same thing
> happens with genes and proteins. A portion of a protein may in fact
> work well in a completely different type of protein, but in the
> protein that it currently belongs to, it is part of a completely
> different collective emergent function. Its relative order as it
> relates to the other parts of this larger whole is what is important.
> How is this relative order established if there are many many more
> ways in which the relative order of these same parts would not be
> beneficial in the least?
>

Now you have upped the challenge to longer sentences and paragraphs. This
may surprise you, but it is much much easier to make longer words, sentences
and paragraphs once we have a starting library of phrases. As you point out,
the same word has many different uses. This is very similar to biology
whereby a feather can insulate as well as provide lift for flight, or where
limbs can be adapted for walking, flight or swimming.

Despite your incredulity, Sean, it is quite easy to construct sentences,
paragraphs, even whole essays from these simple rules. You must try to keep
in mind that ignorance is not evidence.

To make the exercise a little more challenging, I have adopted a loose
iambic pentameter. All words are found in Merriam-Webster, excepting Sean's
own name. Some of the nonsense verse is quite interesting, "like, lick,
lock, block, click, clock, slick, stick, stack" or when concatenating
phrases, "a war, of words, a war of words, beware a war of words"

But now we are after whole lines and verses. Here we go . . . (complete poem
at the bottom of post) . . .

-----------------------

Rules: Change only one letter at a time from any existing string. Can
concatenate any two strings. However, only one operation at a time. All
words, phrases and sentences must make sense in standard English.

Starting with a single letter word, "O".

----------------------------------
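
(Searches like the ones below are easy to mechanize. Here is a minimal
sketch, in Python, of the substitution-only part of the game: a
breadth-first search for the shortest ladder between two words of the
same length. "words.txt" stands in for any word list; concatenation,
the other allowed move, is omitted for brevity.)

from collections import deque
from string import ascii_lowercase

with open("words.txt") as f:                  # any dictionary file
    words = {w.strip().lower() for w in f}

def neighbors(word):
    # every valid word exactly one letter-substitution away
    for i in range(len(word)):
        for c in ascii_lowercase:
            cand = word[:i] + c + word[i + 1:]
            if cand != word and cand in words:
                yield cand

def ladder(start, goal):
    # breadth-first search: the first ladder found is a shortest one
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                               # no substitution-only ladder

print(ladder("ore", "err"))   # ['ore', 'ere', 'err'], if the list has "ere"

----------------------------------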

o, a, i
o, or, ore, one, wore, word, whore

words, wordy, ward, war, tar, wars, ware, tare, are
ere, err, era, ore, ode, of, off, or, our, your, you
ire, irk, irks, lire, lyre, fire, lice, like, lick,
lock, block, click, clock, slick, stick, stack
for, fore, form, forms, foreword
ow, row, brow, prow, prom, from

war, wan, man, may, mean, many, bean, bear, beer, bee
be, ear, year, dear, tear, pear, spear, dean, deal, ideal, idea
sean, sear, bead, lead, seer, steer, steed, stead
eat, ear, seat, set, wet, we, see, sit, pit, it, is, in, gin, instead
seep, step, pet, poet, poem

ion, sir, stir, stair, staid, tee, tea, teat, tear
treat, great
as, an, can, and, ass, pass, piss, kiss
to, do, so, go, no, not, nod
sin, tin, kin, king, win, wine, pine, pin, ping
wee, weeping, weening, weaning, meaning
is, his, this, him, hem, he, the, thy, why, who, thin, think, thing

----------------------------------

be-ware
wordy ward, a wordy ward, word wars
a war, a kiss, world, world war

of words, a war of words
beware a war of words
pit-man, Sean pitman
beware a war of words Sean pitman
you err
I err
ere you err

a war, of words, a war of words
beware a war of words
pit-man
Sean pitman
beware a war of words Sean pitman
you err
I err
ere you err

* Beware a war of words, Sean Pitman,
* Ere you err.

----------------------------------

piss, puss, pus, bus
but, jut, just
O, ow, row, crow, crown, crowd
crew, grew, grow
lo, low, lowe, lower, lowers, slow, log, blog
do, doe, dose, lose, rose
loss, close, chose, choose
not, snot, soot, shoot, hoot
me, some, same, tame, time
sometime, sometimes

do, doe, does, dole, pole, bole, bold, old
cold, could, would
no, now, know, known
err, error
ass, ash, lash, clash

O Sean Pitman,

elk, elm, helm, hell, well, help
elf, self, shelf
at, cat, hat, that, hate, have
ate, rate, crate, create, grate

rat, ray, tray, stray, astray
swords

A man, A man wins, the crown, A man wins the crown
is helm, lowers his helm, but lowers his helm
A kiss, is a kiss, a war, be just, can be just, a war can be just
the crowd, irks the crowd, just irks the crowd
leads you, leads you well, leads you well astray
you know, can lead, a clash, of swords
a clash of swords, to a clash of swords.

* A man wins the crown, but lowers his helm. A kiss
* Is a kiss, and a war can be just, but a war of words
* Just irks the crowd and leads you well astray.
* Words, you know, can lead to a clash of swords.

----------------------------------

fa, got, fagot, faggot
rag, lag, leg, it, tit, legit, sag, sage, sages

eve, ever, every, aver, eye, eyes
every-one
not, her, nother, another, other, others
me-me, meme, mere

hat, what, whet, whey, why, when
ate, late, hate
one, lone, alone
kin, kiln, kill, till, tall, all, kind, find
sword, sworn, shorn
i, id, lid, did

O Sean Pitman,

you think, do you think, you alone, have it
why do you think, that you alone, have it legit
when others, aver another, aver another idea

* Why do you think that you alone have it
* Legit when others aver another idea?

----------------------------------

rat, rot, lot, slot, slut
ink, pink, oink, oink
i, bi, by, got, bigot, bight, blight, light
big, bit, bite, byte
rig, orig, origin, origin
led, pled, bled, bleed
dim, dimpled, dimple, simple

Could it, could it be, you could, that you could
the light, see the light, that you could see the light
But choose, but choose instead
your eyes, close your eyes, and block
close your eyes, to close your eyes
the sight, block the sight
The origin, of life, we know
The origin of life, The origin of life we know
this poem, like this poem, just like this poem
simple forms, rose from simple forms
in meaning, in kind, and in kind, by step
step-by-step

* Could it be that you could see the light
* But choose instead to close your eyes and block
* The sight? The origin of life we know
* Just like this poem rose from simple forms
* In meaning, and in kind, step-by-step.

----------------------------------

We can trace the "etymology" of each word used in the poem. Some of the more
difficult words to create include "light", "choose", "instead" and "simple".

o, go, got, i, bi, bigot, bight, light
o, do, doe, dose, lose, close, chose, choose
i, in, o, or, ore, ere, err, ear, sear, seer, steer, steed, stead, instead
i, is, his, him, dim, id, lid, led, pled, dimpled, dimple, simple

(It would have been much easier if we had allowed prefixes and suffixes,
like free radicals in chemistry; and instead of merely letters, had included
phoenetics, such as "sh" or "tr"; or allowed letter rotations; or allowed
dropping letters when concatenating; but that would have been much too
easy.)

----------------------------------

* Beware a war of words, Sean Pitman,
* Ere you err. O Sean Pitman hear me:

* A man wins the crown, but lowers his helm. A kiss
* Is a kiss, and a war can be just, but a war of words
* Just irks the crowd and leads you far astray.
* Words, you know, can lead to a clash of swords.

* Why do you think that you alone have it
* Legit when sages aver another idea?

* Could it be that you could see the light
* But choose instead to close your eyes and block
* The sight? The origin of the life we know
* Just like this poem rose from simple forms,
* In meaning, and in kind, step-by-step.

Uncle Davey

unread,
Jan 17, 2004, 3:47:52 PM1/17/04
to

"Zachriel" <an...@zachriel.com> wrote in message
news:100j472...@corp.supernews.com...

Very clever.

Uncle Davey


Chris Krolczyk

unread,
Jan 17, 2004, 4:40:42 PM1/17/04
to
dfo...@gl.umbc.edu (david ford) wrote in message news:<b1c67abe.0401...@posting.google.com>...

(huge snip)

That's nice, David. Other than the typical self-referential URL
and a completely redundant quoting of Pitman, what's your point?

-Chris Krolczyk

howard hershey

unread,
Jan 19, 2004, 11:49:00 AM1/19/04
to

Sean Pitman wrote:

Let's call this the Charlie Wagner ploy. When a creationist tires of
claiming that the assembly of protein structures is too complicated and
God must have, therefore, done it, he/she/it/they then turns to DNA as
the mysterious intelligent entity that encodes and *enacts* the wisdom
of the ages. They, the masters of bad analogical arguments, analogize
the DNA of a cell as its 'brain', intelligently directing everything the
cell does, including determining what gets transcribed and how proteins
assemble into structures like flagella.

Alas, DNA's *only* contribution to the assembly of protein structures is
*encoding* the protein's amino acid sequence and *short* regulatory
sequences. DNA does not *enact* anything by itself. Genetic
information does not, by itself, tell where, when, or how much of each
part of a system is made. The cell's proteins, interacting with and
sensing the environment, interact with each other (in long chains and
cascades of reactions) to modify DNA-binding proteins so that they
either bind or release particular *short* DNA sequences. The *short*
regulatory (DNA-binding) sequences are well under the "hundreds to
several thousands" of changes you say evolution cannot produce. They are
typically 6-10 nucleotides in length, since that is the number of
nucleotides that can be seen in a single helical twist in the major
groove of DNA. These sequence elements can then allow a response to
environmental cues by either allowing or discouraging the formation of
the protein complex called RNA polymerase to make or not make a mRNA
transcript of this sequence information. [Notice that DNA is a passive
recipient of actions performed upon it.] That's it. I am not saying
that the sequence information in DNA is unimportant. I am saying that
that is *all* that the DNA provides. DNA is a dumb, unintelligent
molecule. DNA is *acted upon* by the cell in response to environmental
cues by the action of the cell's proteins. DNA does not act
independently as an intelligent agent in any way. BTW, *because* the
regulatory sequences of DNA are so *short* and have so little complex
information, it is not surprising that much of evolutionary change
involves changes in regulation rather than change in sequence. It is
quite possible and very easy to change regulation by random mutation
producing a new regulatory region or by combinatorial changes putting a
gene under new regulatory regions. Witness the eye crystallins.
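
The arithmetic behind "very easy" is worth spelling out. A
back-of-envelope sketch (Python), assuming uniform base composition and
a genome roughly the size of E. coli's; the exact numbers are
illustrative only:

site_len   = 8                   # a typical 6-10 bp regulatory site
genome_len = 4_600_000           # roughly the E. coli genome, one strand
p_match    = 0.25 ** site_len    # chance a given position spells the site
expected   = (genome_len - site_len + 1) * p_match
print(round(expected))           # ~70 chance occurrences, before selection

A site that short is expected to arise by chance dozens of times in a
single bacterial genome, which is why regulatory rewiring is cheap for
mutation to produce.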

Let's go through some very basic knowledge about DNA, RNA, and protein
that Sean has seemingly never learned, or if learned, only for
responding on a test rather than for understanding. These basic ideas
are so fundamental they are even called The Central Dogma.

1) DNA gets transcribed by RNA polymerase to produce an mRNA.
Regulatory proteins that are responsive to environmental cues interact
with regulatory sequences to initiate this process (often via a chain or
regulatory cascade of enzymes). The DNA dumbly and stupidly responds to
these environmental cues by transcribing a mRNA when the proper proteins
are in the proper position. That DNA responds dumbly rather than with
intelligence is shown by the fact that moving a regulatory sequence from
its original postion in DNA or by creating a regulatory sequence
elsewhere by mutation results in transcription from that new position
despite the fact that doing so might be utterly without value to the
cell and not what the cell needs. Molecular biologists take advantage
of this 'stupidity' of DNA by creating hybrid chimeric molecules
(chimeric molecules also happen in nature) with, say, a bacterial
fluorescent protein, under the control of the regulatory sequence of the
insulin gene. They do this so that they can literally *see* when the
insulin genes are being transcribed. Transcription is not an
intelligent process. It is a dumb chemical process. DNA is acted upon
by proteins. It only encodes information. It is not an independent
actor. By itself, DNA does nothing. It is only useful as a part of a
system.

In eucaryotic cells, after the mRNA is transcribed it is typically
further processed to remove introns, add 5' caps, and 3' tails. [In
bacteria, these processes are missing.]

The processed mRNAs then go out of the nucleus where they are
translated into proteins. In bacteria, translation starts immediately
after transcription and is proceeding even as transcription continues.

But DNA, neither the genes being encoded nor DNA as a general molecule,
plays any role in any of these steps (other than providing the mRNAs
and, via the same process, the proteins that provide these functions).
In eucaryotes, in particular, there is always a nuclear membrane between
DNA and the translation machinery.

The direct role of DNA in whatever a protein does extends *only* to the
point of producing the primary transcripts. Any subsequent effect is a
consequence of the sequence of that primary transcript.

Translated proteins are often clipped, chaperoned, differentially
transported or otherwise modified during or after translation. The DNA
that encodes this protein does nothing in any of these steps.

The proteins then aggregate with one another due to the fact that they
have sites that cause them to attach more or less strongly to each
other. Environmental conditions (such as concentration of protein, the
presence of a seed protein, the level of O2, the level of Ca, the
presence or absence of small allosteric effectors -- small molecules
like sugars or cAMP that change the conformation of protein when bound)
may differentially influence whether two or more proteins assemble with
sufficient binding strength or disassemble. The protein's sequence,
again, is the only contribution DNA makes to the assembly of proteins
into higher order structures. Notice that the encoding DNA is not even
present anywhere *near* where the proteins self-assemble into their
final structure. So how does Sean imagine the DNA directing the
self-assembly of flagella? By ESP? By neuronic tentacles?

Examples: Sickle cell and normal hemoglobin affect the structure
and shape of the entire rbc. The change in conformation of the sickle
cell in low O2 is due to changes in environmental conditions in the
complete absence of any nuclear DNA in the cell (since mammalian red
blood cells are enucleate). Ribosomes can self-assemble in a test tube.
So can entire viruses; often one can even encapsulate non-viral DNA, or
only a short sequence (acted upon by proteins) of viral DNA. That is,
the role of DNA in phage assembly has little or no relationship to its
sequence.

Mitotic spindles can be made to assemble into long tubes or disassemble
by merely changing environmental conditions (Ca ion concentration plays
a big role). A change in structure in a particular protein, because of
the presence of a 'seed' protein, can make a cow very, very angry. This
can happen without any change in the DNA or in protein synthesis.
Environmental conditions (including environmental conditions that change
transcription rates by feedback through proteins) also regulate the
construction of the sub-parts of the bacterial flagella. DNA is not
involved in the assembly of a single part of the bacterial flagella
*except* indirectly as it affects the sequence of the proteins. DNA
supplies the raw materials, the proteins whose sequences allow them to
self-assemble, but their assembly into higher order structures is
entirely independent of DNA.

> Without this pre-established information the right parts just
> won't assembly properly beyond the lowest levels of functional
> complexity. It would be like having all the parts to a watch in a
> bag, shaking the bag for a billion years, and expecting a fully formed
> watch, or anything else of equal or greater emergent functional
> complexity, to fall out at the end of that time. The same is true for
> say, a bacterial flagellum. Take all of the necessary subparts needed
> to make a flagellum, put them together randomly, and see if they will
> self-assemble a flagellar apparatus.

In fact, the *proteins* of the bacterial flagella do self-assemble in a
particular order. It is that assembly order that tells us how the
flagella likely arose via specific subsystems that were *independently*
derived and subsequently co-opted to perform their present function.
For example, the L and P ring proteins reach their positions
independently (via a different export machinery) from the TTSS export
machinery that pumps out all the closely-related flagellar proteins. The
TTSS export machinery of the flagella, of course, self-assembles first.
The motor assembles independently of the TTSS export machinery and then
becomes attached as a sub-system. So the flagella looks like the
assembly of independently useful (or potentially independently useful
and independently regulated) subsystems that were co-opted to perform a
new function rather than the assembly of a single system.

The base of the flagella is clearly capable of performing the
independent function of protein transport. It still does perform that
function. The motor and regulatory proteins clearly have independent
utility, as closely related proteins still perform these functions
elsewhere. The L and P rings also have relatives that serve similar
functions. The flagellar whip proteins are all structurally similar and
probably represent duplication and specialization events. But the
flagellar proteins all have the ability to bind to one another to form
the tube (when they reach the tip of the growing tube, but not before)
because of this sequence relatedness. Some of the specialization
probably is due to the fact that the first proteins are released from
the growing tip into a different environment than is the case for later
whip proteins. But there is no doubt at all that the flagellar whip
*self-assembles* and does so in the complete absence of the DNA that
informed its sequence. There is no doubt at all that, if we know the
right environmental cues, flagellar whip proteins could be induced to
form flagellar whips in a test tube in the absence of DNA.

> It just doesn't happen outside
> of the very specific production constraints provided by the
> pre-established genetic information that code for both flagellar part
> production as well as where, when, and how much part to produce so
> that assembly of these parts will occur in a proper way. The simple
> production of flagellar parts in a random non-specific way will only
> produce a junk pile - not a highly complex flagellar system.

Yes. Cells are systems. But no one but you is envisioning the flagella
evolving from some sort of junk pile of randomly produced proteins.
Rather, the flagella evolved by co-opting already useful systems to
perform an additional functionally useful activity.


>
> Now, of course, if you throw natural selection into the picture, this
> is supposed to get evolution out of this mess. It sort through the
> potential junk pile options and picks only those assemblages that are
> beneficial, in a stepwise manner, until higher and higher systems of
> functional complexity are realized. This is how it is supposed to
> work. The problem with this notion is that as one climbs up the
> ladder of functional complexity,

If there is a "ladder of functional complexity" I would like to know
about it. How is it determined that system A is higher or lower on this
ladder than system B? How do you imagine evolution working its way up
the ladder? By first magically poofing utterly useless pieces of junk
protein and then magically poofing all those pieces directly to the top
of the ladder? That seems to be your strawman du jour.

> it becomes more and more difficult to
> keep adding genetic sequences together in a beneficial way without
> having to cross vast gaps of neutral or even detrimental changes.
>
> For example, start with a meaningful English word and then add to or
> change that word so that it makes both meaningful and beneficial sense
> in a given situation/environment. At first such a game is fairly easy
> to do. But, very quickly you get to a point where any more additions
> or changes become very difficult without there being significant
> changes happening that are "just right". The required changes needed
> to maintain beneficial meaning with longer and longer phrases,
> sentences, paragraphs, etc., start to really get huge. Each word has
> a meaning by itself that may be used in a beneficial manner by many
> different types of sentences with completely different meanings.
> Although the individual word does have a meaning by itself, its
> combination with other words produces an emergent meaning/function
> that goes beyond the sum of the individual words. The same thing
> happens with genes and proteins. A portion of a protein may in fact
> work well in a completely different type of protein, but in the
> protein that it currently belongs to, it is part of a completely
> different collective emergent function. Its relative order as it
> relates to the other parts of this larger whole is what is important.
> How is this relative order established if there are many many more
> ways in which the relative order of these same parts would not be
> beneficial in the least?

Ever hear of transposition, deletion, duplication? Those are mechanisms
that can bring together different functional parts. But keep in mind
that most evolutionary change is more a matter of change in quantity
(regulation) than in structure. After all, there are remarkably few
structural differences at the DNA level between humans and chimps.
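
Here is the "word game" played with exactly those mechanisms, as a toy
Python sketch (illustrative only: the fixed target phrase is a crude
stand-in for "beneficial in the current environment"; nobody claims
real selection has a target in view, and the trial counts mean nothing
outside the toy). Variation is point mutation plus duplication,
deletion, and transposition of chunks that already exist; selection
merely keeps any change that is at least as fit.

import random

# Toy only.  Variation reuses existing material (duplication,
# deletion, transposition of chunks) as well as point mutation;
# selection keeps a change only if it is at least as fit.
TARGET = "methinks it is like a weasel"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
rng = random.Random(42)

def fitness(s):
    matches = sum(a == b for a, b in zip(s, TARGET))
    return matches - abs(len(s) - len(TARGET))  # penalize wrong length

def mutate(s):
    kind = rng.choice(["point", "duplicate", "delete", "transpose"])
    if kind == "point" or len(s) < 4:
        i = rng.randrange(len(s))
        return s[:i] + rng.choice(ALPHABET) + s[i + 1:]
    i, j = sorted(rng.sample(range(len(s)), 2))
    chunk = s[i:j]
    if kind == "duplicate":                 # reuse a working chunk
        return s[:j] + chunk + s[j:]
    if kind == "delete":
        return s[:i] + s[j:]
    rest = s[:i] + s[j:]                    # transpose: move the chunk
    k = rng.randrange(len(rest) + 1)
    return rest[:k] + chunk + rest[k:]

s = "o"                                     # start from one tiny "word"
best = fitness(s)
for trial in range(500_000):
    candidate = mutate(s)
    if fitness(candidate) >= best:          # selection: no backsliding
        s, best = candidate, fitness(candidate)
    if s == TARGET:
        print("reached the target after", trial + 1, "trials")
        break
print(s)

With chunk operations available, the search never builds a "sentence"
from scratch; it promotes material that already works, which is the
whole point about duplication and co-option.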


>
> Again, just because the right parts happen to be in the same place at
> the same time does not mean much outside of a pre-established
> information code that tells them how to specifically arrange
> themselves.

Notice that in the model of evolution of the bacterial flagella that
zosdad has pointed you to (it was written by someone he is quite in
touch with) that all of the precursors had independent utility and
already self-assembled into independently useful sub-components.

>>>In
>>>order to keep up with this exponential decrease in average cone
>>>density, the number of blind men has to increase exponentially in
>>>order to find the rarer cones at the same rate. Very soon the
>>>environment cannot support any more blind men and so they must
>>>individually search out exponentially more and more sequence space, on
>>>average, before success can be realized (i.e., a cone or cluster of
>>>cones is found). For example, it can be visualized as stacked levels
>>>of rooms. Each room has its own average density of ice cream cones.
>>>The rooms on the lowest level have the highest density of ice cream
>>>cones - say one cone every meter or so, on average. Moving up to the
>>>next higher room the density decreases so that there is a cone every 2
>>>meters or so. Then, in the next higher room, the density decreases to
>>>a cone every 4 meters or so, on average. And, it goes from there.
>>>After 30 or so steps up to higher levels, the cone density is 1 every
>>>billion meters or so, on average.
>>
>>If the development of each protein started from scratch you might have an
>>excellent argument, but nearly all proteins derive from other proteins, so
>>you are starting from a point that is known to be functional.
>
>
> You are actually suggesting here that the system in question had its
> origin in many different places. You seem to be suggesting that all
> the various parts found as subparts of many different systems somehow
> brought themselves together to make a new type of system . . . just
> like that.

No. The independent subsystems came together by several independent
steps, each useful in its own right. No one is suggesting that the
bacterial flagella appeared by the equivalent of a three-body collision.
We are suggesting that it arose by two two-body collisions with a
useful, albeit transient, intermediate. If you think it requires a
four-body collision to go from four independent subsystems to a final
single system, you are wrong there as well. It requires three two-body
collisions. You keep failing to notice that all of the subsystems
proposed (the TTSS function of the base, the regulatory functions of the
regulatory proteins, the motor functions of the mot proteins, the P and
L rings, and even the whip/injectosome) had independent useful functions
assigned to them and basically did not change their biochemical actions
to a major degree when they became co-opted into a flagellar system.
The flagella was not assembled out of "junk". It was assembled out of
subsystems (or duplicates thereof) that were co-opted into performing a
related biochemical action in service of a new developing function.
Subsequent specialization of these subsystems for this new function was
a *consequence* of the utility of the new function: because, in these
cells, motility was a useful selectable function, selection for changes
that improved that function was favored.
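
The difference between a chain of two-body steps and an n-body "poof"
is easy to put numbers on. A toy calculation (the probability p below
is invented; only the shape of the comparison matters): if a given
useful pairwise association arises with probability p per generation,
a coincidence multiplies the waits while retention merely adds them.

# Illustrative arithmetic only; p is a made-up per-generation
# probability for any one useful pairwise association to arise.
p = 1e-6

# Four subsystems joined into one system need three associations.
simultaneous = (1 / p) ** 3   # all three at once: the waits multiply
sequential = 3 * (1 / p)      # each step kept once it occurs: waits add

print(f"single n-body coincidence: ~{simultaneous:.0e} generations")
print(f"three retained two-body steps: ~{sequential:.0e} generations")
# ~1e18 versus ~3e6: retention of useful intermediates, not luck,
# carries nearly all of the load.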

> Well now, how did these various different functional
> parts, as subparts of many different systems, know how to come
> together so nicely to make a completely new system of function?

They didn't. It was a process of trial and error. When a trial
produced a useful intermediate, it was kept.

> This
> would be like various parts from a car simply deciding, by themselves,
> to reassemble to make an airplane, or a boat, or a house.

It is not at all like that. It is like the fact that all three could
use a motor and all could borrow that motor from a car. All three could
use a seat, and all could borrow that seat from a car. All three could
use a windshield as a window. And complex cellular structures look like
they were tinkered together rather than intelligently designed.

> Don't you see, just because the subparts are functional as parts of
> different systems of function does not mean that these subparts can
> simply make an entirely new collective system of function. This just
> doesn't happen although evolutionists try and use this argument all
> the time. It just doesn't make sense. It is like throwing a bunch of
> words on the ground at random saying, "Well, they all work as parts of
> different sentences, so they should work together to make a new
> meaningful sentence." Really now, it just doesn't work like this.

What prevents the types of intermediate steps proposed for the bacterial
flagella? What prevents a TTSS from secreting and forming a whip-like
injectosome without motility function? What prevents such a system from
interacting with mot proteins that already function in a similar way
with other systems? What is to prevent a P-ring, used for generalized
export of materials, from being co-opted to a specialized function as a
bushing for flagella? Which of the several steps (we are talking about
the several steps that make this a chain of two-body events rather than
a magical poofing together of an n-body system from its parts) mentioned
as intermediate steps is 'unevolvable'?

> You must be able to add the genetic words together in a steppingstone
> sequence where each addition makes a beneficial change in the overall
> function of the evolving system. If each change does not result in a
> beneficial change in function, then nature will not and cannot select
> to keep that change. Such non-beneficial changes are either
> detrimental or neutral. The crossing of such detrimental/neutral gaps
> really starts to slow evolution down,

Yes to all the above. But that is indeed how evolution works.

> in an exponential fashion,
> beyond the lowest levels of specified functional complexity.

This last, however, is meaningless verbiage. You have not yet told
anyone how to determine what you mean by "lowest levels of specified
functional complexity". How do you determine the "level of specified
functional complexity"? What is the metric you use? Until you tell us
how you determine these numbers or how you are able to determine that it
is "thousands of amino acids" in bacterial flagella (for example) or
even how you determine it is 400 aa in lactases, and why the number of
amino acids says anything about *functional* complexity, you are engaged
in producing a pseudoscientific pseudocalculation that means little more
than saying "I find it difficult to imagine how this could have evolved
by magically poofing into existence from utterly functionless pieces of
junk." Of course, no evolutionary mechanism proposes that any system
'magically poofs into existence' from 'utterly functionless pieces of junk'.

> Very
> soon, evolution simply stalls out and cannot make any more
> improvements beyond the current level of complexity that it finds
> itself, this side of zillions of years of average time.
>

Until we understand what you mean by "level of complexity", such that a
modification of or addition to the pre-existing "level of complexity"
resulting in a change of function cannot happen, your calculation is
nothing more than the old canard of imagining a 747 assembling by a
random process in a tornado.
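
Since the 747 keeps coming up, the toy version is worth quantifying
(invented numbers again, matching the word sketch earlier in the
thread: a 28-character phrase over a 27-letter alphabet, and for the
cumulative case the simplest scheme of one random point mutation per
trial, kept only if it improves the match):

import math

L, A = 28, 27   # toy phrase length and alphabet size

# Tornado-style: every character right in one independent trial.
one_shot = A ** L
print(f"all-at-once: ~10^{math.log10(one_shot):.0f} expected trials")

# Cumulative: with m mismatches, a trial improves the match with
# probability (m/L) * (1/A), so the expected total number of trials
# is the sum of L*A/m for m = L down to 1.
cumulative = sum(L * A / m for m in range(1, L + 1))
print(f"with retention: ~{cumulative:.0f} expected trials")
# roughly 10^40 versus about 3000

The first number describes a process nobody proposes; the second
describes the kind of process selection actually is.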

> Sean
> www.naturalselection.0catch.com
>

howard hershey
Jan 21, 2004, 3:21:45 PM

Sean Pitman wrote:

> jethro...@bigfoot.com (Jethro Gulner) wrote in message news:<edf04d4a.04011...@posting.google.com>...
>
>>I'm thinking TSS to flagellum is on the order of chocolate to
>>chocolate-fudge-brownie
>
>
> Now that's a serious stretch of the imagination. The TTSS system is a
> non-motile secretory system while the fully formed flagellar system is
> a motility system as well. The TTSS system requires 6 or so different
> protein parts, at minimum, for its formation while the motility
> function of the flagellar system requires an additional 14 or so
> different protein parts (for a total of over 20 parts) before its
> motility function can be realized.

Wherein Sean exhibits confusion between what the modern eubacterial
flagella does include and what it must include in order for there to be
a motility function. The modern eubacterial flagella includes parts
that are not
*necessary* for motility. One does not *need* a whole bunch of
different closely related whip proteins to have a motile whip. And
several non-motile 'whips' do exist in association with TTSS systems
that involve smaller numbers of proteins.

If a whip exists and it is rotated, motility will occur. Subsequent
duplications and divergence to produce a *better* whip does not affect
the crucial protein-protein interactions that allow a whip to
self-assemble. The real problem for generating *motility* is linking
the mot protein subsystem to the core of the TTSS-like structure. It is
not *even* necessary that the original selectable function of that
linkage be cellular motility. It could be involved in something as
prosaic as helping in the transport of whip proteins by the TTSS-like core.

> Unless you can find intermediate
> functions for the gap of more than a dozen required parts that
> separate the TTSS system from the Flagellar system, I'd say this gap
> is quite significant indeed, requiring at minimum several thousand
> fairly specified amino acids.

The independent motor subsystem from which the flagellar motor derived
undoubtedly acted, like the related non-flagellar ExbBD or TolQR
systems, to produce rotary motion through a third protein via energy
from an ion channel. The motor subsystem in the flagella acts to
produce rotary motion through a third protein via energy from an ion
channel. [sarcasm on] Clearly this is a large functional gap to be
leaped. The flagellar motor has gone all the way from generating motion
through a third protein via energy from an ion channel in some
non-flagellar system to generating motion through a third protein via
energy from an ion channel in a possibly related system. Such a massive
change in function must require, at minimum, several thousand fairly
specified amino acids that differ from the non-flagellar motor to the
flagellar motor. [sarcasm off]

> Certainly this is not the same thing as
> roaming around the same island cluster with the same type of function.

It isn't? To go all the way from a motor that produces motion through a
third protein using an ion channel to generating a motor that produces
motion through a third protein using an ion channel requires a swim
across a vast gulf of function-space?

> The evolution from the TTSS island of function to the brand new type
> of motility function found in the flagellar island would have to cross
> a significant distance before the motility function of the flagellum
> could be realized.

It requires the modification of a single protein, and only to the extent
that that protein now links a pre-existing motor to a pre-existing TTSS
central core. That is, the addition of the motor to the TTSS may well
involve nothing more complicated than a change in a single binding site.
It certainly did not involve any major change in *function* in either
the TTSS-like subcomponent (it still acts as a protein transport device)
or in the motor subcomponent (it still acts as a motor). Rather, the
two together generated an (at least potentially) emergent function: motility.

> Such a distance could not be crossed via random
> walk alone this side of zillions of years in any population of
> bacteria on Earth.

See Sean wave his hands. Wave your hands, Sean. Wave them furiously.
Maybe somebody will ignore the man behind the curtain pulling the chains.

> In order for evolution to have truly crossed such
> a gap, without intelligent design helping it along, there would have
> to be a series of closely spaced beneficial functions/sequences
> between the TTSS and the motility function of the flagellum.

TTSS with injectosome + motor = crude motility of injectosome. [There
are other alternatives as well. Such as TTSS with motor = improved
transport. This plus increasing length of injectosome = crude
motility.] Motility is either a surprise function or an emergent one.
A similar event undoubtedly is involved in the motility of the archaeal
flagella, just with fewer proteins.


>
> Where is this series of steppingstones? That is the real question!

Did you read Nic's article?

> Many have tried to propose the existence of various stepping-stone
> functions, but none have been able to show that these steppingstones
> could actually work as no one has ever shown the crossing from any
> proposed steppingstone to any other in real life. If you think you
> know better how such a series could exist and actually work to
> eliminate this gap problem, please do share your evolutionary sequence
> with us.

Did you read Nic's article? If so, perhaps you can list the stepping
stones he does propose and show why any one of them (choose only one,
your best shot) between the proposed functionally useful independent
intermediate structures requires changes of "thousands of fairly
specified amino acids", or is *impossible* on mathematical or logical
grounds, or why such events cannot happen in a stepwise fashion. The
entire chain of stepping stones may have involved
a lot of changes, but asking for the entire chain as if it were a single
poofing event would clearly be teleological thinking and much closer to
what creationists think happened. And I agree that such a single event
poofing of a flagella is highly unlikely. Going from nothing or random
sequences directly to the end flagella would assume that all the
proposed intermediate functional states have no utility but to serve as
a precursor to the teleologic goal.

Science does not work by requiring that every last event must be tested
experimentally or it is regarded as unlikely. If, for example, it has
been demonstrated that single mutational events can cause two proteins
to bind to one another or interact with one another in a new way leading
to new or modified function, and the proposed stepping stone involves a
very similar sort of naturalistic event under similar conditions, the
normal inference would be that such an event is possible rather than
unlikely. If you have a *reason* as to why such a similar event is
unlikely in this case or must involve thousands of changes in this case,
do share it. Waving your hands and asserting that such an event is
impossible won't do in the face of the fact that similar events
(mutations that affect protein-protein interactions) have been observed.
It may be interesting, in a vaguely 'everything is interesting' way, to
experimentally demonstrate that such an event *could* link a motor
subsystem to a different system to produce rotary motion in that system,
but it would hardly tell us anything new about how the proteins work.
But why would it have to be done with a flagellar system in particular?
>
> Sean
> www.naturalselection.0catch.com
>

Von Smith
Jan 22, 2004, 1:28:44 AM
"Zachriel" <an...@zachriel.com> wrote in message news:<100j472...@corp.supernews.com>...


Another typical evolutionary "Just 'O'" story.

Sean Pitman
Jan 22, 2004, 1:23:34 PM