Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.
Dismiss

In the News: Experts debunk evolution

1 view
Skip to first unread message

Jason Spaceman

unread,
Jul 18, 2006, 1:53:42 AM7/18/06
to
From the article:
--------------------------------------------------------------------
7/14/2006 5:28:17 PM
Daily Journal

BY CHARITY GORDON

Daily Journal

STARKVILLE - "In the beginning God created the heavens and the earth,"
and 6,000 years later, according to a society at Mississippi State
University, scientists will gather to prove it.

The Society for the Advancement of Creation Science, whose purpose is
"to strengthen people's faith in the Creator and his Word," will hold
its first lecture series July 17-20 at Dorman Hall Auditorium on MSU's
campus. Worship starts at 6:30 each evening, and the lectures will
begin at 7:30 p.m. The event is open to the public.

Dr. John Sanford, who will present the July 17 lecture, is the primary
inventor of the gene gun process. His research has been used to
engineer most of the world's transgenic crops.

"My talk will be for non-specialists," Sanford said. "I will show that
evolutionary theory - mutation plus natural selection equals evolution
- can be conclusively shown to be false."
---------------------------------------------------------------------------

Read it at
http://www.djournal.com/pages/story.asp?ID=224002&pub=1&div=Lifestyles

J. Spaceman

Iain

unread,
Jul 18, 2006, 9:06:23 AM7/18/06
to
>Re: In the News: Experts debunk evolution

Ah well, never mind.

~Iain

Tiny Bulcher

unread,
Jul 18, 2006, 9:15:37 AM7/18/06
to

Jason Spaceman wrote:
> From the article:

> The Society for the Advancement of Creation Science, whose purpose is
> "to strengthen people's faith in the Creator and his Word,"

> Worship starts at 6:30 each evening,

Yeah, this has got absolutely nothing to do with religion. Pure
science.

--
Tiny

Kermit

unread,
Jul 18, 2006, 11:37:21 AM7/18/06
to

***************************************************************
A review of Sanford's book at
http://www.amazon.com/gp/product/1599190028/102-1025579-2822519?v=glance&n=283155
"This work is very typical of someone who wants to believe something --
in this case, that evolution could not possibly be the result of "blind
forces" -- and in order to support his belief, is both willing and able
to ignore contrary evidence sitting in front of his nose.

Unlike what both he and "the professor" says, the central premise of
the book (that mutations lead only to degeneration, never to
gain-of-function or "progress", unless some outside force intervenes to
guide things) is actually easily refuted, by anyone who knows what they
are talking about. The easiest path for gain-of-function mutations is
gene duplication; a gene, or part of a gene, is accidentally copied
more than once during sex cell formation. In the resulting organism, as
long as one gene retains its original function, any duplicate copies
can collect mutations -- and many of these mutations will indeed, by
blind chance, lead to novel proteins, with novel functions, and new
results for the organism. Such mutations have been well-documented in
literally hundreds, if not thousands, of papers.

Ironically, Sanford (actually a "Courtesy Associate Professor" at
Cornell's Horticultural Sciences department) ought to be especially
well acquainted with this process, since plants often end up with a
duplicate copy of significant portions of their entire genome to play
with (known as "polyploidy") and gain-of-function mutations from this
are also quite well known. Polyploidy is in fact one of the most common
methods of speciation in plants. And far from polyploidy only weakening
the organism, polyploidy has led to an increase of vigor, increased
pest resistance, increased resistance to environmental stresses, and
enhanced reproductive success in a number of well-documented cases in
plants from grasses to citrus. The fact that such gains can and do
*also* happen as the result of deliberate cultivation and breeding,
does not in any way diminish or negate the fact that they happen "by
accident" in the wild as well.

Despite this, Sanford argues that in nature, mutations only lead to
loss of fitness overall. In support of this thesis, he cherry-picks
data (taking the bits that he likes and ignoring the existence of the
rest, not a valid way of doing science), subtly distorts the
interpretation of real papers, and makes coherent and "logical"
arguments in ways that are appealing to a lay audience and whose
technical problems can only be spotted by people familiar with the
primary literature that he draws from. This is extraordinarily and
subtly deceptive.

I can only see this as the human ability to believe something based on
its emotional appeal regardless of the physical evidence available, a
fallacy that is all too common. What I don't know is whether he is
conscious of what he is doing here, and does so because he believes
that advancing the idea of an "intelligence" behind evolutionary
development is worth a bit of misrepresentation of data; or whether he
genuinely misses the evidence against him because it is not important
to him.

Either way, despite his work at Cornell this work is NOT a good
representation of the science of genetics, nor is it taken at all
seriously by others in the field. It is only a good representation of
the way Sanford thinks. That may not be a good thing to guide one's own
understanding by. If he can only support a thesis by deliberately
ignoring available information, then there is something wrong with that
thesis."
******************************************************************************************
>
>
> J. Spaceman
Kermit

Windy

unread,
Jul 18, 2006, 11:51:13 AM7/18/06
to

Jason Spaceman wrote:
> Dr. John Sanford, who will present the July 17 lecture, is the primary
> inventor of the gene gun process. His research has been used to
> engineer most of the world's transgenic crops.

He may have invented the gun, but the bullets come from evolution.

-- w.

Seanpit

unread,
Jul 18, 2006, 11:56:50 AM7/18/06
to

Not beyond very low levels of functional complexity . . .

Sean Pitman
www.DetectingDesign.com

Kermit

unread,
Jul 18, 2006, 1:46:31 PM7/18/06
to

Sure, for any single mutation. But these duplicated strings can
accumulate numerous mutations without handicapping the organism.

1. The benefits can accumulate.
2. The benefits can have duplicate functions - for example. feathers
can start off as insulation, then add functions of sexual display,
expanding visible size to discourage predators, and aid in flying (and
any chicken will tell you that half a wing is very useful).
3. A structure can drop intermediate uses if a later, additional usage
becomes more important. E.g. feathers which start as mere insulation
may adapt a stiff spine and ability to spread as a sexual display, then
be co-opted for flying, which becomes more important than display, and
those spreadable, stiff feathers now adapt for better flight. Looking
back, it might be difficult to imagine how fluffly down became stiff
flying feathers - if one arbitrarily dismisses additional uses, denies
dropping of functions, insists that the end product was the goal, etc.

>
> Sean Pitman
> www.DetectingDesign.com
>
<snip>
> > ******************************************************************************************
> > >
> > >
> > > J. Spaceman
> > Kermit
Kermit

Kermit

unread,
Jul 18, 2006, 1:46:53 PM7/18/06
to

Sure, for any single mutation. But these duplicated strings can


accumulate numerous mutations without handicapping the organism.

1. The benefits can accumulate.
2. The benefits can have duplicate functions - for example. feathers
can start off as insulation, then add functions of sexual display,
expanding visible size to discourage predators, and aid in flying (and
any chicken will tell you that half a wing is very useful).
3. A structure can drop intermediate uses if a later, additional usage
becomes more important. E.g. feathers which start as mere insulation
may adapt a stiff spine and ability to spread as a sexual display, then
be co-opted for flying, which becomes more important than display, and
those spreadable, stiff feathers now adapt for better flight. Looking

back, it might be difficult to imagine how fluffy down became stiff

Seanpit

unread,
Jul 18, 2006, 2:55:52 PM7/18/06
to

Kermit wrote:

> > > Unlike what both he and "the professor" says, the central premise of
> > > the book (that mutations lead only to degeneration, never to
> > > gain-of-function or "progress", unless some outside force intervenes to
> > > guide things) is actually easily refuted, by anyone who knows what they
> > > are talking about. The easiest path for gain-of-function mutations is
> > > gene duplication; a gene, or part of a gene, is accidentally copied
> > > more than once during sex cell formation. In the resulting organism, as
> > > long as one gene retains its original function, any duplicate copies
> > > can collect mutations -- and many of these mutations will indeed, by
> > > blind chance, lead to novel proteins, with novel functions, and new
> > > results for the organism. Such mutations have been well-documented in
> > > literally hundreds, if not thousands, of papers.
> >
>
> > Not beyond very low levels of functional complexity . . .
>
> Sure, for any single mutation. But these duplicated strings can
> accumulate numerous mutations without handicapping the organism.

Being able to accommodate a mutation without handicapping the organism
is a far different thing from gaining a novel beneficial function.
When it comes to higher levels of functional complexity, it just
doesn't happen regardless of the type or number of mutations that
occur. No function that requires a minimum of more than 3 to 4 fairly
specified Kb of genetic real estate ever evolves in reality - there is
not one example.

> 1. The benefits can accumulate.

The benefits cannot accumulate if they aren't found - and they are
never found by random mutations of any kind, combined with natural
selection, beyond very low levels of functional complexity.

> 2. The benefits can have duplicate functions - for example. feathers
> can start off as insulation, then add functions of sexual display,
> expanding visible size to discourage predators, and aid in flying (and
> any chicken will tell you that half a wing is very useful).

You don't seem to understand that even with the potential of duplicate
functions, it is exponentially harder and harder to find additional
functions at higher and higher levels of complexity. Beyond low
levels, it just doesn't happen - ever.

> 3. A structure can drop intermediate uses if a later, additional usage
> becomes more important.

That's true, if it ever found the additional usage to begin with. It
doesn't beyond very low levels of functional complexity - ever.

> E.g. feathers which start as mere insulation
> may adapt a stiff spine and ability to spread as a sexual display, then
> be co-opted for flying, which becomes more important than display, and
> those spreadable, stiff feathers now adapt for better flight. Looking
> back, it might be difficult to imagine how fluffly down became stiff
> flying feathers - if one arbitrarily dismisses additional uses, denies
> dropping of functions, insists that the end product was the goal, etc.

Very nice story telling - has nothing to do with reality. The cold hard
truth of the matter is that evolution just doesn't come up with any
novel functions beyond very low levels of functional complexity. It
just becomes exponentially harder and harder to find novel functions at
higher and higher levels. That is why evolutionary potential never gets
beyond the lowest rungs of the ladder this side of trillions upon
trillions of years of average time.

> Kermit

Sean Pitman
www.DetectingDesign.com

Richard Forrest

unread,
Jul 18, 2006, 3:07:34 PM7/18/06
to

Seanpit wrote:
> Kermit wrote:
>
> > > > Unlike what both he and "the professor" says, the central premise of
> > > > the book (that mutations lead only to degeneration, never to
> > > > gain-of-function or "progress", unless some outside force intervenes to
> > > > guide things) is actually easily refuted, by anyone who knows what they
> > > > are talking about. The easiest path for gain-of-function mutations is
> > > > gene duplication; a gene, or part of a gene, is accidentally copied
> > > > more than once during sex cell formation. In the resulting organism, as
> > > > long as one gene retains its original function, any duplicate copies
> > > > can collect mutations -- and many of these mutations will indeed, by
> > > > blind chance, lead to novel proteins, with novel functions, and new
> > > > results for the organism. Such mutations have been well-documented in
> > > > literally hundreds, if not thousands, of papers.
> > >
> >
> > > Not beyond very low levels of functional complexity . . .
> >
> > Sure, for any single mutation. But these duplicated strings can
> > accumulate numerous mutations without handicapping the organism.
>
> Being able to accommodate a mutation without handicapping the organism
> is a far different thing from gaining a novel beneficial function.
> When it comes to higher levels of functional complexity, it just
> doesn't happen regardless of the type or number of mutations that
> occur.

And you base this assertion on what, exactly?

> No function that requires a minimum of more than 3 to 4 fairly
> specified Kb of genetic real estate ever evolves in reality - there is
> not one example.

And you know this because of your exhaustive knowledge of genetics, I
presume?

>
> > 1. The benefits can accumulate.
>
> The benefits cannot accumulate if they aren't found - and they are
> never found by random mutations of any kind, combined with natural
> selection, beyond very low levels of functional complexity.

Perhaps it would help us if you could provide us with a measure of
functional complexity?

What units do you use to measure "functional complexity"?
What level of "functional complexity" is "very low"?
What level of "functional complexity" is too high to be achieved by
natural selection, and on what basis have you determined this metric?

>
> > 2. The benefits can have duplicate functions - for example. feathers
> > can start off as insulation, then add functions of sexual display,
> > expanding visible size to discourage predators, and aid in flying (and
> > any chicken will tell you that half a wing is very useful).
>
> You don't seem to understand that even with the potential of duplicate
> functions, it is exponentially harder and harder to find additional
> functions at higher and higher levels of complexity.


What units do you use to measure "functional complexity"?
What level of "functional complexity" is too high to be achieved by
natural selection, and on what basis have you determined this metric?

> Beyond low
> levels, it just doesn't happen - ever.

And you base this assertion on .....?

>
> > 3. A structure can drop intermediate uses if a later, additional usage
> > becomes more important.
>
> That's true, if it ever found the additional usage to begin with. It
> doesn't beyond very low levels of functional complexity - ever.
>

And you base this assertion on ......?

> > E.g. feathers which start as mere insulation
> > may adapt a stiff spine and ability to spread as a sexual display, then
> > be co-opted for flying, which becomes more important than display, and
> > those spreadable, stiff feathers now adapt for better flight. Looking
> > back, it might be difficult to imagine how fluffly down became stiff
> > flying feathers - if one arbitrarily dismisses additional uses, denies
> > dropping of functions, insists that the end product was the goal, etc.
>
> Very nice story telling - has nothing to do with reality.

So how do you explain the fact that feathers are found in dinosaurs as
predicted by evolutionary theory, and that in some instances the type
of feather is the less developed form predicted by evolutionary theory.

Perhaps you can provide some alternative explanation which makes
predictions which ca be tested against the evidence?

> The cold hard
> truth of the matter is that evolution just doesn't come up with any
> novel functions beyond very low levels of functional complexity.

And we should believe this assertion because.........?

> It
> just becomes exponentially harder and harder to find novel functions at
> higher and higher levels.

And we should believe this assetion because.......?

> That is why evolutionary potential never gets
> beyond the lowest rungs of the ladder this side of trillions upon
> trillions of years of average time.

The idea of evolution as a ladder is very old, and has been completely
discarded by biologists for a long time.

By the way, what mathematical model of evolution have you used, and on
what basis have you calculated all the probabilities which lead you to
the conclusion that it would take "trillions upon trillions of years of
average time" to get beyond the "lowest rungs of the ladder"?

What is the difference between "average time" and ordinary time, by the
way?

Or is this another new Pitman invention?

RF

>
> > Kermit
>
> Sean Pitman
> www.DetectingDesign.com

Andrew Arensburger

unread,
Jul 18, 2006, 4:34:53 PM7/18/06
to
Kermit <unrestra...@hotmail.com> wrote:
> 2. The benefits can have duplicate functions - for example. feathers
> can start off as insulation, then add functions of sexual display,
> expanding visible size to discourage predators, and aid in flying (and
> any chicken will tell you that half a wing is very useful).

In other words, don't ask "what is this structure for?", but
rather "what can you do with this structure?"

--
Andrew Arensburger, Systems guy University of Maryland
arensb.no-...@umd.edu Office of Information Technology
"So... yeah." -- Eddie Izzard

Marc

unread,
Jul 18, 2006, 8:07:38 PM7/18/06
to

Seanpit wrote:
> Kermit wrote:
>
> > > > Unlike what both he and "the professor" says, the central premise of
> > > > the book (that mutations lead only to degeneration, never to
> > > > gain-of-function or "progress", unless some outside force intervenes to
> > > > guide things) is actually easily refuted, by anyone who knows what they
> > > > are talking about. The easiest path for gain-of-function mutations is
> > > > gene duplication; a gene, or part of a gene, is accidentally copied
> > > > more than once during sex cell formation. In the resulting organism, as
> > > > long as one gene retains its original function, any duplicate copies
> > > > can collect mutations -- and many of these mutations will indeed, by
> > > > blind chance, lead to novel proteins, with novel functions, and new
> > > > results for the organism. Such mutations have been well-documented in
> > > > literally hundreds, if not thousands, of papers.
> > >
> >
> > > Not beyond very low levels of functional complexity . . .
> >
> > Sure, for any single mutation. But these duplicated strings can
> > accumulate numerous mutations without handicapping the organism.
>
> Being able to accommodate a mutation without handicapping the organism
> is a far different thing from gaining a novel beneficial function.
> When it comes to higher levels of functional complexity, it just
> doesn't happen regardless of the type or number of mutations that
> occur. No function that requires a minimum of more than 3 to 4 fairly
> specified Kb of genetic real estate ever evolves in reality - there is
> not one example.

Perhaps you should read more of the available literature.

See this paper by Liu et al. regarding exon shuffling:
"Significant expansion of exon-bordering protein domains during animal
proteome evolution" Nucleic Acids Res. 2005 Jan 7;33(1):95-105.
PMID: 15640447
http://www.pubmedcentral.gov/articlerender.fcgi?tool=pubmed&pubmedid=15640447
http://nar.oxfordjournals.org/cgi/content/full/33/1/95

Or this paper by Zhang, Dean, Brunet and Long:
"Evolving protein functional diversity in new genes of Drosophila"
Proc Natl Acad Sci U S A. 2004 Nov 16;101(46):16246-50.
PMID: 15534206
http://www.pubmedcentral.gov/articlerender.fcgi?tool=pubmed&pubmedid=15534206
http://www.pnas.org/cgi/content/full/101/46/16246

But, because there *is* a degree of controversy about some aspects
of evolution, you should also read this paper by Conant and Wagner:
"The rarity of gene shuffling in conserved genes"
Genome Biol. 2005;6(6):R50. PMID: 15960802
http://genomebiology.com/2005/6/6/R50
http://www.pubmedcentral.gov/articlerender.fcgi?tool=pubmed&pubmedid=15960802

There are some other papers strongly in support of exon shuffling
especially when you get into mammalian evolution (due to the
degree to which retroviral elements have become embedded in
the genomes - see for example the thread from a couple of weeks
ago about nashtOn's teddy bear - actually about Koala genomes
being under invasion by a retrovirus). I'll cite some more a bit
later when I'm at my office, but these ones are freely available.

Now it is your turn... back your arguments up with the literature.


(signed) marc


..

Frank J

unread,
Jul 18, 2006, 8:49:12 PM7/18/06
to

Not so fast.

This is the "creationism" that ID claims not to be when it claims not
to be religious. Well, here's another chance for ID to back up its
claims by telling the world in no uncertain terms, what it's leaders
seem to know - that YEC is nonsense.

Don't hold your breath, though.
>
> --
> Tiny

Greg S

unread,
Jul 18, 2006, 9:31:33 PM7/18/06
to

Sanford's book was on a list I saw an ID proponent provide for
'anti-evolutionary, non-theist perspectives'. His was the only name I
didn't recognise (Behe- check, Dembski- check). I was a little
disappointed to learn that Sanford became an evangelical after he
became an academic. I guess this guy had a different idea of 'nontheist
perspective' than I do. Sanford also lists his research interests as
something like 'exploring the limits of mutation + selection'. If he's
telling the churches that he's proved it's false I wonder why is he
still doing research?

Marc

unread,
Jul 18, 2006, 10:01:27 PM7/18/06
to

Seanpit wrote:
> Kermit wrote:

........ snip

> > Unlike what both he and "the professor" says, the central premise of
> > the book (that mutations lead only to degeneration, never to
> > gain-of-function or "progress", unless some outside force intervenes to
> > guide things) is actually easily refuted, by anyone who knows what they
> > are talking about. The easiest path for gain-of-function mutations is
> > gene duplication; a gene, or part of a gene, is accidentally copied
> > more than once during sex cell formation. In the resulting organism, as
> > long as one gene retains its original function, any duplicate copies
> > can collect mutations -- and many of these mutations will indeed, by
> > blind chance, lead to novel proteins, with novel functions, and new
> > results for the organism. Such mutations have been well-documented in
> > literally hundreds, if not thousands, of papers.
>
> Not beyond very low levels of functional complexity . . .

........ snip

Beside the papers I've cited for you elsewhere in this thread, and
there are a *lot* more such papers in the literature (if you would
read more of the literature you would know this), there are some
very good review articles in the Annual Review series which you
need to get your hands on before you make more such ignorant
comments as the one above. Unfortunately the Annual Review
series are not freely available like the other papers I cited for you,
so you might need to visit a library or ask someone to forward
a copy to you, but you really should read the review from the
2005 Annual Review of Biochemistry by Orengo and Thornton

"Protein families and their evolution - a structural perspective"
Annu Rev Biochem. 2005;74:867-900. PMID: 15954844
http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.biochem.74.082803.133029

There is a section in their review you should focus on, "How do
changes in domains and domain partnerships give rise to new
protein functions and biological processes?" not to mention
the section that follows: "How does the evolution of protein
families influence the complexity and evolution of organisms?"

If you have trouble getting your hands on this paper and are
willing to read it, then you can ask within this group for help
in getting a copy sent to you. You might consider browsing
the PLoS (Public Library of Science) web site as well, since
some recent papers there would provide the evidence you
feel is lacking, and those papers are freely available.

(signed) marc

.

John Harshman

unread,
Jul 18, 2006, 10:23:08 PM7/18/06
to
Marc wrote:

There's another important paper in a recent issue of Evolution: S. R.
Proulx and P. C. Phillips. 2006. Allelic divergence precedes and
promotes gene duplication. Evolution 60:881-892.

The take-home message is that, based on simulations, divergent functions
are most likely to originate as allelic variation at a single locus,
with gene duplication offering the advantage of permanent
heterozygosity, so to speak.

Robert Maas, see http://tinyurl.com/uh3t

unread,
Jul 18, 2006, 11:57:26 PM7/18/06
to
> > any chicken will tell you that half a wing is very useful).
> In other words, don't ask "what is this structure for?", but
> rather "what can you do with this structure?"

Indeed, since nobody designed the structure, it just happened as a
result of a mutation, the first question is meaningless, the structure
isn't *for" anything.

The second question is much more useful. If the critter can use the
structure to survive better than its non-mutated brothers, then the
mutation is likely to persist, and the non-mutated genome is likely to
disappear.

As for chickens: They are too heavy to fly, so a full pair of wings
probably just gets in the way without providing any value, hence the
advantage of mutations to reduce them to half-wings. As for the value
in getting too heavy to fly in the first place, I'll leave that to
somebody else's speculation.

rupert....@gmail.com

unread,
Jul 19, 2006, 12:13:21 AM7/19/06
to

Seanpit wrote:
> Kermit wrote:
>
> > > > Unlike what both he and "the professor" says, the central premise of
> > > > the book (that mutations lead only to degeneration, never to
> > > > gain-of-function or "progress", unless some outside force intervenes to
> > > > guide things) is actually easily refuted, by anyone who knows what they
> > > > are talking about. The easiest path for gain-of-function mutations is
> > > > gene duplication; a gene, or part of a gene, is accidentally copied
> > > > more than once during sex cell formation. In the resulting organism, as
> > > > long as one gene retains its original function, any duplicate copies
> > > > can collect mutations -- and many of these mutations will indeed, by
> > > > blind chance, lead to novel proteins, with novel functions, and new
> > > > results for the organism. Such mutations have been well-documented in
> > > > literally hundreds, if not thousands, of papers.
> > >
> >
> > > Not beyond very low levels of functional complexity . . .
> >
> > Sure, for any single mutation. But these duplicated strings can
> > accumulate numerous mutations without handicapping the organism.
>
> Being able to accommodate a mutation without handicapping the organism
> is a far different thing from gaining a novel beneficial function.
> When it comes to higher levels of functional complexity, it just
> doesn't happen regardless of the type or number of mutations that
> occur. No function that requires a minimum of more than 3 to 4 fairly
> specified Kb of genetic real estate ever evolves in reality - there is
> not one example.

Presumably you concede that functions that require less than "3Kb of
genetic real estate" evolve? (if that's not what you mean then forgive
me but I don't understand what you just said)

Then what is the barrier? I would suggest that you have picked a number
that matches current observations, but you lack a theory that explains
why that number has the value it does. If ID theory predicts a value
for the maximum number of specified base pair mutations that can
happen, please cite it, or better yet give us a short explanation.

How did you come up with the trillions of years figure? Did you know
that some creationists believe that the universe is over one hundred
trillion years old?

>
> > Kermit
>
> Sean Pitman
> www.DetectingDesign.com

Marc

unread,
Jul 19, 2006, 2:11:53 AM7/19/06
to


Thanks for that. What I've read so far is quite interesting, but
I doubt Sean will either be interested or could understand it
even though it does provide a new role for sex.

I do enjoy the ease of access to such articles that the internet
provides (with appropriate library access). It's been ages since
I've actually gone to the library to photocopy something (apart
from a 1981 nature article to sort a quote-mine out for another
thread here).

(signed) marc

.

Von R. Smith

unread,
Jul 19, 2006, 10:45:04 AM7/19/06
to

Seanpit wrote:
> Kermit wrote:
>
> > > > Unlike what both he and "the professor" says, the central premise of
> > > > the book (that mutations lead only to degeneration, never to
> > > > gain-of-function or "progress", unless some outside force intervenes to
> > > > guide things) is actually easily refuted, by anyone who knows what they
> > > > are talking about. The easiest path for gain-of-function mutations is
> > > > gene duplication; a gene, or part of a gene, is accidentally copied
> > > > more than once during sex cell formation. In the resulting organism, as
> > > > long as one gene retains its original function, any duplicate copies
> > > > can collect mutations -- and many of these mutations will indeed, by
> > > > blind chance, lead to novel proteins, with novel functions, and new
> > > > results for the organism. Such mutations have been well-documented in
> > > > literally hundreds, if not thousands, of papers.
> > >
> >
> > > Not beyond very low levels of functional complexity . . .
> >
> > Sure, for any single mutation. But these duplicated strings can
> > accumulate numerous mutations without handicapping the organism.
>
> Being able to accommodate a mutation without handicapping the organism
> is a far different thing from gaining a novel beneficial function.
> When it comes to higher levels of functional complexity, it just
> doesn't happen regardless of the type or number of mutations that
> occur. No function that requires a minimum of more than 3 to 4 fairly
> specified Kb of genetic real estate ever evolves in reality - there is
> not one example.


How many base pairs of sequences coding for the 2,4-DNT pathway are
fairly specified, Sean? How did you arrive at that figure, and how
could somebody else verify it? I'm still waiting for an answer.

An appropriate, responsive answer will include a number relevant to the
2,4-DNT pathway, as well as a justification of that number. It will
not include trying to change the subject to how much more complex the
flagellum is than enzyme cascades.

sea...@gmail.com

unread,
Jul 19, 2006, 5:29:38 PM7/19/06
to

Perhaps you could show me a paper where a real-time example of
evolution is actually taking place beyond very low levels of functional
complexity? The papers you've listed here do not present experimental
evidence of real evolution in action. Rather, they are nothing more
than "just-so stories" about how the scientists think evolution
happened in the past. There are no papers dealing with real time
evolution, happening right now, that goes beyond very low levels of
functional complexity. This sort of evolution just doesn't happen.
Sequence comparisons and homologies can be equally explained by common
design. Evolution is not the only option to explain homologies or
nested hierarchies.

sea...@gmail.com

unread,
Jul 19, 2006, 5:35:30 PM7/19/06
to
Von R. Smith wrote:

> How many base pairs of sequences coding for the 2,4-DNT pathway are
> fairly specified, Sean? How did you arrive at that figure, and how
> could somebody else verify it? I'm still waiting for an answer.

Again, Von, enzymatic cascades are not highly specified, relatively
speaking, because they do not require 3D orientation in order to work
and enzymatic functions, in general, need not be very specific at each
step along the way in order to work to a useful degree. Compare this
with functions like pinocytosis or flagellar motility where a specific
3D orientation to all the parts is required and they all work together
at the same time. Such a system of function requires a much greater
degree of minimum size and specificity.

> An appropriate, responsive answer will include a number relevant to the
> 2,4-DNT pathway, as well as a justification of that number. It will
> not include trying to change the subject to how much more complex the
> flagellum is than enzyme cascades.

That's because you don't seem to grasp the concept that cascading
functions are much different than functions like flagellar motility.
This difference is very important to understanding the problem with
evolving high-level functions like flagellar motility.

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 19, 2006, 5:52:31 PM7/19/06
to

Yes, functions that require a fair degree of specificity, but
significantly less than 3kb of genetic real estate, can evolve via
random mutation and natural selection.

> Then what is the barrier? I would suggest that you have picked a number
> that matches current observations, but you lack a theory that explains
> why that number has the value it does. If ID theory predicts a value
> for the maximum number of specified base pair mutations that can
> happen, please cite it, or better yet give us a short explanation.

If you care to visit my website (www.DetectingDesign.com) I discuss the
reason for the limitation at this level in detail.

In short, each additional minimum size and/or specificity requirement
that a functional system has puts it into a category with relatively
few other sequences at that size (within the entire sequence space of
potential genetic sequences) that would also be beneficial within the
given gene pool. For example, how many potential 3-letter sequences
are there in the English language system? The answer is 17,576. But,
how many meaningful (do worry about "beneficial" for now) 3-letter
sequences are there? About 930 or so. This produces a ratio of
meaningful vs. non-meaningful of about 1 in 18. Now, what do you think
happens to this ratio when we move up to the 7-character level? The
ratio drops dramatically to about 1 in 250,000 - and we aren't even
talking "beneficial" yet.

With each step up the ladder of minimum size and/or specificity
requirements, this ratio drops off exponentially. What does this
dramatic drop in ratio do to the ability of random mutations, of any
kind, to find a new sequence with a novel as well as beneficial
function attacked to it? - from the perspective of a particular gene
pool? Well, it increases the average time involved to achieve success.
At higher and higher levels, this time to success starts to increase
exponentially as well.

Very quickly, trillions upon trillions of years are needed to achieve
success and evolutionary progress simply stalls out on the lowest rungs
of the ladder.

It doesn't matter what creationists believe. It matters what you
believe. How old do you think the Earth or the entire universe is? Do
you think that there is enough time for evolution to have happened if
relatively simple functions that require only 3 or 4 ka bp of genetic
real estate would take many trillions of years to evolve?

> > > Kermit

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 19, 2006, 5:57:45 PM7/19/06
to

Marc wrote:
> Seanpit wrote:
> > Kermit wrote:
>
> ........ snip
>
> > > Unlike what both he and "the professor" says, the central premise of
> > > the book (that mutations lead only to degeneration, never to
> > > gain-of-function or "progress", unless some outside force intervenes to
> > > guide things) is actually easily refuted, by anyone who knows what they
> > > are talking about. The easiest path for gain-of-function mutations is
> > > gene duplication; a gene, or part of a gene, is accidentally copied
> > > more than once during sex cell formation. In the resulting organism, as
> > > long as one gene retains its original function, any duplicate copies
> > > can collect mutations -- and many of these mutations will indeed, by
> > > blind chance, lead to novel proteins, with novel functions, and new
> > > results for the organism. Such mutations have been well-documented in
> > > literally hundreds, if not thousands, of papers.
> >
> > Not beyond very low levels of functional complexity . . .
>
> ........ snip
>
> Beside the papers I've cited for you elsewhere in this thread, and
> there are a *lot* more such papers in the literature (if you would
> read more of the literature you would know this), there are some
> very good review articles in the Annual Review series which you
> need to get your hands on before you make more such ignorant
> comments as the one above.

You need to know what the challenge is before you go off and list a
bunch of referneces that have nothing to do with my challenge. I'm not
asking for a list of reference detailing how evolution is thought to
have happened. I'm asking for references detailing real experimental
demonstration of evolution in real time. For example, Berry Hall
demonstrated evolution of the lactase function in real time using E.
coli bacteria. The only problem is, the lactase function requires a
fairly specified minimum size of no more than about 400aa (~1,200bp).
Do you have any such experiment that goes beyond this level of
functional complexity? All I need is one paper at a time. Give me
your best example, with a relevant quote from the referenced paper, and
we'll work with that first - ok?

Marc

unread,
Jul 19, 2006, 6:14:10 PM7/19/06
to

snex

unread,
Jul 19, 2006, 6:13:27 PM7/19/06
to

do you have the number of fairly specified base pairs for the 2,4-DNT
pathway or do you not? do you have a methodology that can yield such a
number or do you not?

Marc

unread,
Jul 19, 2006, 6:13:11 PM7/19/06
to


What is wrong with understanding evolution in general?
Since you don't, this sort of paper is important for you to read.

However, a recent issue of Nature had an article showing evolution
in action. In this case, it is an invasion of the koala genome by a
retrovirus, with implications for cancer, gene shuffling and so on.
http://www.nature.com/nature/journal/v442/n7098/abs/nature04841.html

Do *not* say that this is unrelated to us or to other species, since
this shows what was going on those hundreds of times that such
viral invasions occurred in ourselves and in other species, and if
you knew anything about biology, genetics and evolution you
would know that this has important repercussions. (But if you
knew the science-stuff, you wouldn't need this "show me it
happening in front of my eyes" type proof, would you?)

(signed) marc

..

Kermit

unread,
Jul 19, 2006, 6:51:57 PM7/19/06
to

Seanpit wrote:
> rupert....@gmail.com wrote:

<snip>


>
> In short, each additional minimum size and/or specificity requirement
> that a functional system has puts it into a category with relatively
> few other sequences at that size (within the entire sequence space of
> potential genetic sequences) that would also be beneficial within the
> given gene pool. For example, how many potential 3-letter sequences
> are there in the English language system? The answer is 17,576. But,
> how many meaningful (do worry about "beneficial" for now) 3-letter
> sequences are there? About 930 or so. This produces a ratio of
> meaningful vs. non-meaningful of about 1 in 18. Now, what do you think
> happens to this ratio when we move up to the 7-character level? The
> ratio drops dramatically to about 1 in 250,000 - and we aren't even
> talking "beneficial" yet.

Ooh! Word games; good. I don't know much about genetics - I haven't
trained in it.

How about if we assemble randomly chosen 4-letter words with randomly
chosen 3-letter prefixes? What's the ration then? What if we just add
"S" to the end of the entries in your seven-letter word list - will we
get many real words?

You seem to talking about assembling these long strings of genetic code
from scratch; but surely you're not. That would be silly.

>
> With each step up the ladder of minimum size and/or specificity
> requirements, this ratio drops off exponentially.

I never really have understood this claim of yours. If we're modifying
what came before, why is the modification necessarily so complicated? A
single point mutaion is a single change, not all of the thousand of
base pairs that may be associated with it.

> What does this
> dramatic drop in ratio do to the ability of random mutations, of any
> kind, to find a new sequence with a novel as well as beneficial
> function attacked to it? - from the perspective of a particular gene
> pool?

How novel do you think average cnages are from generation to
generation?

Are you saying that God's intervention was necessary to help e. coli
putt around like a motor boat, but natural selection and random
mutation was sufficient to produce us from a chimpier ancestor?

> Well, it increases the average time involved to achieve success.
> At higher and higher levels, this time to success starts to increase
> exponentially as well.

Good thing nature doesn't have to start from scratch each time, then.

We'd have never gotten to escheria coli, I think.

>
> Very quickly, trillions upon trillions of years are needed to achieve
> success and evolutionary progress simply stalls out on the lowest rungs
> of the ladder.

How, exactly, do you climb ladders? I take 'em one rung at a time. I
think you believe that you need to leap from the floor to scale each
succeeding rung. That's really not the easiest way to do it.

I think you don't understand the concept of "adaption". And I think
that you have found a way to baffle with bullshit, but I notice that
the geneticists and mathemticians are not impressed. When you do talk
about something I know a little about, I am *way not impressed.

Explain why 300,000 or so generations between us and the common
ancestor with chimps needed anything other than NS and the variety
available from mutation. Why does the change from me to my daughter
require these humongous numbers of yours? At what point were these
unlikly odds necessary if we work back 7,000,000 years?

>
> > > > Kermit
>
> Sean Pitman
> www.DetectingDesign.com

Kermit

Kermit

unread,
Jul 19, 2006, 7:01:38 PM7/19/06
to

City kid, huh? Many chickens can fly. Not like a sparrow, but roosting
in trees or flying over fences is not uncommon. Some farmers clip the
wings of their chickens to minimize that. Their recent ancestors easily
flew short distances, like a pheasant or guinea hen. They wouldn't be
migrators, but their "half-wings" would give them a serious advantage
when escaping from predators.

Kermit

Seanpit

unread,
Jul 19, 2006, 7:05:26 PM7/19/06
to

A cascading system is no more functionally complex than the most
complex single link in its chain. So, pick the most complex single
enzyme in this cascade and find the shortest possible genetic sequence
that can code for that type of functional enzyme - and you have your
answer.

Sean Pitman
www.DetectingDesign.com

snex

unread,
Jul 19, 2006, 7:11:31 PM7/19/06
to

without the other less complex enzymes, the cascade is not complete.
the cascading system is the system under examination. you still have
neither supplied the number of fairly specified base pairs or any
method to determine that number.

>
> Sean Pitman
> www.DetectingDesign.com

Seanpit

unread,
Jul 19, 2006, 7:35:27 PM7/19/06
to

Kermit wrote:
> Seanpit wrote:
> > rupert....@gmail.com wrote:
>
> <snip>
> >
> > In short, each additional minimum size and/or specificity requirement
> > that a functional system has puts it into a category with relatively
> > few other sequences at that size (within the entire sequence space of
> > potential genetic sequences) that would also be beneficial within the
> > given gene pool. For example, how many potential 3-letter sequences
> > are there in the English language system? The answer is 17,576. But,
> > how many meaningful (do worry about "beneficial" for now) 3-letter
> > sequences are there? About 930 or so. This produces a ratio of
> > meaningful vs. non-meaningful of about 1 in 18. Now, what do you think
> > happens to this ratio when we move up to the 7-character level? The
> > ratio drops dramatically to about 1 in 250,000 - and we aren't even
> > talking "beneficial" yet.
>
> Ooh! Word games; good. I don't know much about genetics - I haven't
> trained in it.

Yet you feel yourself well able to explain genetic evolution to others?

> How about if we assemble randomly chosen 4-letter words with randomly
> chosen 3-letter prefixes? What's the ration then? What if we just add
> "S" to the end of the entries in your seven-letter word list - will we
> get many real words?

If you have a bunch of 4-letter words how would you go about getting
only 3-letter prefixes attached to the right spot on these words?
Random multi-character mutations don't know what is or isn't a prefix
and they don't know that prefixes are supposed to only get stuck onto
the beginning of your 4-letter words. Random mutations may randomly
snip out 1 or 2 or 3 or 4 or . . . characters from one place and insert
them randomly into another place within the genome. But, what are the
odds that the snippet, once inserted, will create a novel beneficial
sequence?

A series of single point mutations is analogous to a random walk
through sequence space while multicharacter mutations are analogous to
random jumps or random selections within sequence space - from a given
beneficial starting point. As it turns out, the odds that a random
walk will hit a novel beneficial sequence within sequence space is
about the same as the odds that random selection will land on a
beneficial sequence in sequence space. So, multicharacter mutations
really don't solve the time gap problem for finding novel beneficial
sequences in a level of sequence space where they are very rare.

> You seem to talking about assembling these long strings of genetic code
> from scratch; but surely you're not. That would be silly.

Not true. You can start at whatever beneficial starting point you want
and use a very large pre-established gene pool of options to begin
with. It won't help beyond very low levels of functional complexity.

> > With each step up the ladder of minimum size and/or specificity
> > requirements, this ratio drops off exponentially.
>
> I never really have understood this claim of yours. If we're modifying
> what came before, why is the modification necessarily so complicated? A
> single point mutaion is a single change, not all of the thousand of
> base pairs that may be associated with it.

Yes, and what are the odds that the single change of a single point
mutation will hit upon a novel beneficial genetic sequence? The answer
is that it depends upon the density of beneficial sequences at a given
level of functional complexity. The less the density, the less the
odds of success. And this is true for both single point mutations as
well as multi-character mutations (insertions, deletions,
translocations etc).

> > What does this
> > dramatic drop in ratio do to the ability of random mutations, of any
> > kind, to find a new sequence with a novel as well as beneficial
> > function attacked to it? - from the perspective of a particular gene
> > pool?
>
> How novel do you think average cnages are from generation to
> generation?

If a novel beneficial function is not found from generation to
generation, then nature cannot select to keep a mutated sequence. It
will either be lost from the gene pool over time, or will, at best,
continue its random walk in a blind search for a novel beneficial
sequence.

> Are you saying that God's intervention was necessary to help e. coli
> putt around like a motor boat, but natural selection and random
> mutation was sufficient to produce us from a chimpier ancestor?

Most certainly a highly intelligent designer was necessary to produce a
functional bacterial flagellar motility system. That's exactly what
I'm saying. As far as humans and chimps go, we simply don't know
enough about the genetics of our functional differences to be able to
tell if there is definitely an uncrossable gap between us. I
personally do not believe humans share a common ancestor with apes, but
I cannot base this belief on genetic evidence at this time.

> > Well, it increases the average time involved to achieve success.
> > At higher and higher levels, this time to success starts to increase
> > exponentially as well.
>
> Good thing nature doesn't have to start from scratch each time, then.

It doesn't matter where nature starts. It cannot evolve novel
functions beyond very low levels of complexity - novel functions that
were not already in the gene pool to begin with.

> We'd have never gotten to escheria coli, I think.

Well, we'd at least never gotten to certain functions that E. coli have
- that's for darn sure.

> > Very quickly, trillions upon trillions of years are needed to achieve
> > success and evolutionary progress simply stalls out on the lowest rungs
> > of the ladder.
>
> How, exactly, do you climb ladders? I take 'em one rung at a time. I
> think you believe that you need to leap from the floor to scale each
> succeeding rung. That's really not the easiest way to do it.

No. Each rung takes exponentially greater amounts of time to achieve
from the rung below. I'm not talking about jumping from the ground
each time. I'm talking about jumping from the rung below each time.
It is like a staircase where each step gets higher and higher than the
last - exponentially higher. Pretty soon, you are simply not able to
take the next step no matter how long you try to jump for it.

Then explain it to me yourself - if you know so much about how it is
supposed to work. You are just trusting that someone else must know
how it works even though you yourself have no clue? Is that it?

> Explain why 300,000 or so generations between us and the common
> ancestor with chimps needed anything other than NS and the variety
> available from mutation.

Again, we don't know nearly as much about the functional differences
between us and chimps as we know about more simple systems like those
that bacteria or single cells use, like systems of flagellar motility
or pinocytosis (cell drinking), or HIV docking, or transcription and
translation, etc. If you talk about a specific function that we humans
actually know a fair bit about, then it is much easier to tell if that
function could or could not have evolved.

> Why does the change from me to my daughter
> require these humongous numbers of yours?

You need to read about Mendelian genetics a bit. The differences
between you and your daughter or you and a son that you might have are
not the result of the evolution of anything new within your gene pool
at all. The potential for these differences was already there -
preprogrammed into the genetics of you and your wife. Mendelian
variation/genetic recombination, allows you and your wife to literally
be able to produce an almost infinite variety of children (trillions
upon trillions of different children). Yet, no new function is involved
that either your or your wife didn't already have. In other words,
mutations are not needed to produce this variety.

> At what point were these
> unlikly odds necessary if we work back 7,000,000 years?

Mendelian variation is not the same thing as Darwinian-style evolution.
Read up on it a bit. It is very interesting. I personally think that
if Mendelian genetics had become widely known before Darwin came along,
the Theory of Evolution would have had a much harder time getting off
the ground. Why? Because all of Darwin's examples of evolution in
action can all be explained now as nothing more than Mendelian
variation.

> Kermit

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 19, 2006, 7:38:08 PM7/19/06
to

A cascade doesn't have to be complete in order for each step in the
cascade to be useful. Beyond this, none of the parts of an enzymatic
cascade require specific 3D orientation with the other parts.

> > Sean Pitman
> > www.DetectingDesign.com

Desertphile

unread,
Jul 19, 2006, 7:44:19 PM7/19/06
to
Jason Spaceman wrote:

> From the article:
> --------------------------------------------------------------------
> 7/14/2006 5:28:17 PM
> Daily Journal
>
> BY CHARITY GORDON
>
> Daily Journal
>
> STARKVILLE - "In the beginning God created the heavens and the earth,"
> and 6,000 years later, according to a society at Mississippi State
> University, scientists will gather to prove it.
>
> The Society for the Advancement of Creation Science, whose purpose is
> "to strengthen people's faith in the Creator and his Word," will hold
> its first lecture series July 17-20 at Dorman Hall Auditorium on MSU's
> campus. Worship starts at 6:30 each evening, and the lectures will
> begin at 7:30 p.m. The event is open to the public.
>
> Dr. John Sanford, who will present the July 17 lecture, is the primary
> inventor of the gene gun process. His research has been used to
> engineer most of the world's transgenic crops.
>
> "My talk will be for non-specialists," Sanford said. "I will show that
> evolutionary theory - mutation plus natural selection equals evolution
> - can be conclusively shown to be false."


If he can do so, I very much hope he does so. Imagine the world-wide
amazement, accolades, and applause he will receive when he shows
evolutionary theory is false.

I brethlessly await the hundreds of front-page headlines regarding this
amazing accomplishment.

Seanpit

unread,
Jul 19, 2006, 7:43:22 PM7/19/06
to

I don't have access to the full article. Please quote the relevant
portion of the article describing the real-time evolution of the novel
function. If the function is nothing more than a virus being able to
gain entrance into a cell via some sort of template matching, then this
is very low-level evolution. It is not the type of evolution that
would explain the creation of a functional system like flagellar
motility.

> (signed) marc

Sean Pitman
www.DetectingDesign.com

snex

unread,
Jul 19, 2006, 7:48:52 PM7/19/06
to

irrelevant.

> Beyond this, none of the parts of an enzymatic
> cascade require specific 3D orientation with the other parts.

then factor this into your equations - you know, the ones you are being
asked for.

>
> > > Sean Pitman
> > > www.DetectingDesign.com

Seanpit

unread,
Jul 19, 2006, 7:50:11 PM7/19/06
to

I understand evolution in general just fine. I am fully aware of how
evolution is supposed to work and have read many of these just so
stories about how various functions are supposed to have evolved. I've
yet to read about a real-time demonstration to back up even one of
these just-so proclamations beyond very low levels of functional
complexity.

> However, a recent issue of Nature had an article showing evolution


> in action. In this case, it is an invasion of the koala genome by a
> retrovirus, with implications for cancer, gene shuffling and so on.
> http://www.nature.com/nature/journal/v442/n7098/abs/nature04841.html

Please quote the relevant portion of this paper detailing the novel
function that evolved in this case. Some novel functions, especially
those that given rise to cancer and the like via viral involvement, are
the result of a deregulation or loss of what was there before. Almost
no cancer arises as the result of the creation of a truly novel
function. Cancer is almost always the result of a loss of regulation
or an imbalance in functional systems that are already there.

> Do *not* say that this is unrelated to us or to other species, since
> this shows what was going on those hundreds of times that such
> viral invasions occurred in ourselves and in other species, and if
> you knew anything about biology, genetics and evolution you
> would know that this has important repercussions. (But if you
> knew the science-stuff, you wouldn't need this "show me it
> happening in front of my eyes" type proof, would you?)

If you can show a novel function evolving in any species or even
computer program that goes beyond very low levels of functional
complexity, I'll become an evolutionist. So far, all I've seen are
thousands upon thousands of these just-so stories you've listed here.

Seanpit

unread,
Jul 19, 2006, 8:14:56 PM7/19/06
to

This point is not at all irrelevant. It is very relevant indeed since
the whole question is about how long it would take to gain the next
beneficial steppingstone along a evolutionary pathway.

> > Beyond this, none of the parts of an enzymatic
> > cascade require specific 3D orientation with the other parts.
>
> then factor this into your equations - you know, the ones you are being
> asked for.

Like I said, cascading systems are no more complex than their most
complex single part.

> > > > Sean Pitman
> > > > www.DetectingDesign.com

snex

unread,
Jul 19, 2006, 8:25:50 PM7/19/06
to

i am not interested in your assertions. provide the numbers and how you
arrived at them or stop maintaining you have the authority to
pontificate on these matters.

>
> > > > > Sean Pitman
> > > > > www.DetectingDesign.com

wf3h

unread,
Jul 19, 2006, 9:41:17 PM7/19/06
to

Seanpit wrote:
>
> I understand evolution in general just fine. I am fully aware of how
> evolution is supposed to work and have read many of these just so
> stories about how various functions are supposed to have evolved. I've
> yet to read about a real-time demonstration to back up even one of
> these just-so proclamations beyond very low levels of functional
> complexity.

as a scientist who is objective, since i'm not an evolutionary
biologist, i find seanpit's argument unconvincing

evolution has been demonstrated in the lab. the long timeframes
necessary for evolution of species can't be demonstrated in the lab
because of the reason already stated: it takes a long time. BUT
evolutionary bio is an EXPERIMENTAL science

christianism/islamism, however, is not. the 'just so stories' of
religion are being played out today across the world in death,
slaughter, genocide and destruction. seanpit's world is a world of
ignorance, betrayal and evil. it is the replacement of logic by faith,
the substitution of reason by ignorance...

> >
> If you can show a novel function evolving in any species

and when you show a god creating ANYTHING besides genocide you be sure
and let me know.

bro...@noguchi.mimcom.net

unread,
Jul 20, 2006, 12:59:14 AM7/20/06
to

He cannot produce any specific numbers bcause he doesn't have a
specific model of anything at all. And if he pulls some number out of
his nether regions he certainly will not be able to justify them.


>
> >
> > > > > > Sean Pitman
> > > > > > www.DetectingDesign.com

Marc

unread,
Jul 20, 2006, 2:28:53 AM7/20/06
to


Not access to *a* cell, you git, it is embedding itself within the
genome.

Much of the Koala population now has this embedded retrovirus as a
new part of their genome but some isolated subpopulations do not yet
have it. When a virus like this is in the genome, it is there in the
sperm
and egg at the point of conception and so the virus is now a part of
every cell - every single cell - of each affected individual. That is
one
of the features of a "retrovirus", that it can become a part of a
genome.

Now that it is a part of the genome it can start to have other effects
such as shifting parts of the genome around (a side effect of which
may be to cause some forms of cancer). Obviously you do not
know much about the HERVs in our genome, do you?

As a service to mankind I will try to e-mail you a copy if I can
figure out where to send it.

(signed) marc

Seanpit

unread,
Jul 20, 2006, 5:10:31 AM7/20/06
to

Marc wrote:

> > > Explain this then....
> > > http://www.nature.com/nature/journal/v442/n7098/abs/nature04841.html
> >
> > I don't have access to the full article. Please quote the relevant
> > portion of the article describing the real-time evolution of the novel
> > function. If the function is nothing more than a virus being able to
> > gain entrance into a cell via some sort of template matching, then this
> > is very low-level evolution. It is not the type of evolution that
> > would explain the creation of a functional system like flagellar
> > motility.
>
> Not access to *a* cell, you git, it is embedding itself within the
> genome.

Great, what novel function is evolved? Please do describe the function
itself that evolved with the introduction of this retrovirus to the
Koala gene pool.

> Much of the Koala population now
> has this embedded retrovirus as a
> new part of their genome but some
> isolated subpopulations do not yet
> have it. When a virus like this is
> in the genome, it is there in the
> sperm and egg at the point of conception
> and so the virus is now a part of
> every cell - every single cell - of
> each affected individual. That is
> one of the features of a "retrovirus",
> that it can become a part of a
> genome.

Yes, but what novel function evolved? Please do describe it. The
simple entrance of outside DNA to a gene pool is not the same thing as
the evolution of something that was not already in the gene pool of
either the virus or the Koala - via random mutation and natural
selection. Bacteria pick up outside DNA all the time and are often
able to incorporate new functions encoded by this outside DNA - such as
enzyme producing plasmids that might produce antibiotic resistance.
This sort of thing is not the evolution of anything new. It is simply
the transfer of something that was preformed elsewhere. I'm talking
specifically about Darwinian-style evolution here were novel functions
that did not exist anywhere before were evolved in a particular gene
pool via random mutations and natural selection. Do you have any
examples of functions evolving via this method beyond very low levels
of functional complexity?

> Now that it is a part of the genome it can start to have other effects


> such as shifting parts of the genome around (a side effect of which
> may be to cause some forms of cancer).

Yes, but do you know of any novel functions that evolve via this
ability to shift parts of the genome around? - functions that were not
already there in the genome? Most of the time, cancer is not the
result of the evolution of a novel function, but arises as a result of
a loss of regulation of pre-existing functions.

> Obviously you do not
> know much about the HERVs in our genome, do you?

I know a bit about human endogenous retroviruses (HERVs). I have yet
to see where viral DNA incorporated into a genome produces novel
functions beyond very low levels of complexity that were not already in
existence. Do you have a real time example showing otherwise?

> As a service to mankind I will try to e-mail you a copy if I can
> figure out where to send it.

First tell me what novel function was described as evolving in this
paper . . .

Seanpit

unread,
Jul 20, 2006, 5:10:25 AM7/20/06
to

Marc wrote:

> > > Explain this then....
> > > http://www.nature.com/nature/journal/v442/n7098/abs/nature04841.html
> >
> > I don't have access to the full article. Please quote the relevant
> > portion of the article describing the real-time evolution of the novel
> > function. If the function is nothing more than a virus being able to
> > gain entrance into a cell via some sort of template matching, then this
> > is very low-level evolution. It is not the type of evolution that
> > would explain the creation of a functional system like flagellar
> > motility.
>
> Not access to *a* cell, you git, it is embedding itself within the
> genome.

Great, what novel function is evolved? Please do describe the function


itself that evolved with the introduction of this retrovirus to the
Koala gene pool.

> Much of the Koala population now


> has this embedded retrovirus as a
> new part of their genome but some
> isolated subpopulations do not yet
> have it. When a virus like this is
> in the genome, it is there in the
> sperm and egg at the point of conception
> and so the virus is now a part of
> every cell - every single cell - of
> each affected individual. That is
> one of the features of a "retrovirus",
> that it can become a part of a
> genome.

Yes, but what novel function evolved? Please do describe it. The


simple entrance of outside DNA to a gene pool is not the same thing as
the evolution of something that was not already in the gene pool of
either the virus or the Koala - via random mutation and natural
selection. Bacteria pick up outside DNA all the time and are often
able to incorporate new functions encoded by this outside DNA - such as
enzyme producing plasmids that might produce antibiotic resistance.
This sort of thing is not the evolution of anything new. It is simply
the transfer of something that was preformed elsewhere. I'm talking
specifically about Darwinian-style evolution here were novel functions
that did not exist anywhere before were evolved in a particular gene
pool via random mutations and natural selection. Do you have any

examples of functions evolving via this method beyond very low levels
of functional complexity?

> Now that it is a part of the genome it can start to have other effects


> such as shifting parts of the genome around (a side effect of which
> may be to cause some forms of cancer).

Yes, but do you know of any novel functions that evolve via this


ability to shift parts of the genome around? - functions that were not
already there in the genome? Most of the time, cancer is not the
result of the evolution of a novel function, but arises as a result of
a loss of regulation of pre-existing functions.

> Obviously you do not


> know much about the HERVs in our genome, do you?

I know a bit about human endogenous retroviruses (HERVs). I have yet


to see where viral DNA incorporated into a genome produces novel
functions beyond very low levels of complexity that were not already in
existence. Do you have a real time example showing otherwise?

> As a service to mankind I will try to e-mail you a copy if I can


> figure out where to send it.

First tell me what novel function was described as evolving in this
paper . . .

> (signed) marc

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 20, 2006, 5:25:51 AM7/20/06
to

If you understand the notion that cascading enzymatic systems are no
more complex than their most complex protein part, then you determine
the minimum size and specificity for that protein part that will still
do the job and you have your answer. That's it. If you think
otherwise, please do explain yourself instead of trying to be thick.

> >
> > > > > > Sean Pitman
> > > > > > www.DetectingDesign.com

Seanpit

unread,
Jul 20, 2006, 5:22:56 AM7/20/06
to

wf3h wrote:
> Seanpit wrote:
> >
> > I understand evolution in general just fine. I am fully aware of how
> > evolution is supposed to work and have read many of these just so
> > stories about how various functions are supposed to have evolved. I've
> > yet to read about a real-time demonstration to back up even one of
> > these just-so proclamations beyond very low levels of functional
> > complexity.
>
> as a scientist who is objective, since i'm not an evolutionary
> biologist, i find seanpit's argument unconvincing
> evolution has been demonstrated in the lab.

Not beyond functions that require more than a few hundred fairly
specified amino acid residues working together at the same time (less
than 3 to 4 thousand bp of genetic real estate at minimum).

> the long timeframes
> necessary for evolution of species can't be demonstrated in the lab
> because of the reason already stated: it takes a long time. BUT
> evolutionary bio is an EXPERIMENTAL science

I'm not talking about evolving an entirely new creature, just a new
relatively simple functional biosystem that requires more than a few
thousand bp of genetic real estate. I mean really, not even one of the
proposed steps in the evolution of a system like the flagellum as ever
been demonstrated in the lab - not one step. If such a step were so
simple, it should be quite easy to set up such a demonstration under
real time laboratory conditions. This sort of demonstration has yet to
be done beyond very low levels of functional complexity.

The fact that low-level evolution can be demonstrated does not mean
that these low level examples can be reasonably extrapolated up the
ladder since the ratio of potentially beneficial vs. potentially
non-beneficial decreases exponentially with each step up the ladder.
This is why you don't see anything beyond very low-level evolution
demonstrated in the lab or in real life in real time. There is a gap
problem for evolution that grows exponentially with each increase in a
functions minimum size and/or specificity requirements.

> christianism/islamism, however, is not. the 'just so stories' of
> religion are being played out today across the world in death,
> slaughter, genocide and destruction. seanpit's world is a world of
> ignorance, betrayal and evil. it is the replacement of logic by faith,
> the substitution of reason by ignorance...

Oh give me a break with your sob stories. Religions do not have a
market on evil people who claim to belong to these groups. All groups
have their bad eggs - even atheists and evolutionists have evil people
in the ranks. There are also very brilliant and well-educated people
on both sides of this issue. So, try and come up with an actual
argument that is on topic instead of trying to make blanket
generalizations that simply aren't true or relevant.

> > If you can show a novel function evolving in any species
>
> and when you show a god creating ANYTHING besides genocide you be sure
> and let me know.

Humans can create many things as well as genocide. You would think
that someone smarter than a human could do at least as well . . . What
if that someone made you with free will - and you go off and do
something evil. Who's responsible? Who do you blame? - your Creator
or yourself?

Sean Pitman
www.DetectingDesign.com

Von R. Smith

unread,
Jul 20, 2006, 7:38:06 AM7/20/06
to

sea...@gmail.com wrote:
> Von R. Smith wrote:
>
> > How many base pairs of sequences coding for the 2,4-DNT pathway are
> > fairly specified, Sean? How did you arrive at that figure, and how
> > could somebody else verify it? I'm still waiting for an answer.
>
> Again, Von, enzymatic cascades are not highly specified, relatively
> speaking,


"Relatively speaking" compared to what? To individual proteins? I
should think that, at very least, the number of "fairly specified"
amino acids needed for a cascade would be the sum total of the amino
acids needed for each of the enzymes in that cascade. Also, I'm not
asking you about "highly specified", but about "fairly specified",
which I assume is a slightly less stringent standard. All 400 amino
acids of lacZ are not "highly specified" by any reasonable definition
of the term, but you contend that they are "fairly specified". Same
goes for 10,000 or so aa of all the proteins going into making a
eubacterial flagellum.


> because they do not require 3D orientation in order to work
> and enzymatic functions, in general, need not be very specific at each
> step along the way in order to work to a useful degree. Compare this
> with functions like pinocytosis or flagellar motility where a specific
> 3D orientation to all the parts is required and they all work together
> at the same time. Such a system of function requires a much greater
> degree of minimum size and specificity.


So what is the minimum size and specificity of the sequence coding for
the 2,4-DNT cascade? I just want a number, Sean, along with a
sufficiently-detailed explanation of how you got it that I can verify
that you didn't just make it up.

>
> > An appropriate, responsive answer will include a number relevant to the
> > 2,4-DNT pathway, as well as a justification of that number. It will
> > not include trying to change the subject to how much more complex the
> > flagellum is than enzyme cascades.
>
> That's because you don't seem to grasp the concept that cascading
> functions are much different than functions like flagellar motility.
> This difference is very important to understanding the problem with
> evolving high-level functions like flagellar motility.


Thatn's nice. Unfortunately it does not answer the question. I asked
you for a number, and for a justification of that number. How many
base pairs of the sequences coding for the 2,4-DNT cascade are fairly
specified? How did you arrive at that sequence? How could another
person verify that number?

Marc

unread,
Jul 20, 2006, 8:10:32 AM7/20/06
to

Seanpit wrote:
> Marc wrote:
>
> > > > Explain this then....
> > > > http://www.nature.com/nature/journal/v442/n7098/abs/nature04841.html
> > >
> > > I don't have access to the full article. Please quote the relevant
> > > portion of the article describing the real-time evolution of the novel
> > > function. If the function is nothing more than a virus being able to
> > > gain entrance into a cell via some sort of template matching, then this
> > > is very low-level evolution. It is not the type of evolution that
> > > would explain the creation of a functional system like flagellar
> > > motility.
> >
> > Not access to *a* cell, you git, it is embedding itself within the
> > genome.

... snip

> > As a service to mankind I will try to e-mail you a copy if I can
> > figure out where to send it.
>
> First tell me what novel function was described as evolving in this
> paper . . .

The novel function is called "evolution".

That you insist it must have "function" and you refuse to read
the paper shows your agenda. Why even post here if you don't
want to hear an answer apart from the one you insist on?

(signed) marc

Von R. Smith

unread,
Jul 20, 2006, 8:35:52 AM7/20/06
to

The answer to what? Not to the question of how many of the base-pairs
in the coding sequence are "fairly-specified". Just the other day you
wrote this:

"As I've told you over and over again, specificity is a description of
the minimum sequence order or limitations of character differences
within a sequence that can be sustained without a complete loss of the
function in question."

This is a non-standard usage of the term "specificity", but we'll let
that pass. Now, it is trivial that it takes a longer sequence to code
for the entire 2,4-DNT pathway than it does for any individual step in
that pathway. So unless you think poor dntAbAcAd can do the entire
pathway all by itself, then the pathway must have a greater "minimum
size requirement" than the largest individual gene within it. And
unless you are saying that any random sequence can fill in the rest of
the "size requirement" in order to perform the rest of the steps of the
pathway than there must be at least some "limitations of character
differences" wiithin the rest of the coding sequence.

So now that we have established that the answer to my question cannot
be "the minimum coding sequence for the largest protein in the
pathway", let me ask again: How many of the base pairs in the genetic
sequences encoding the 2,4-DNT pathway are "fairly specified"?

Let me give you some help with the numbers. You have yourself
identified 4 enzymes that are central to and specific to the cascade in
question. Here are the smallest sequences performing their functions
that I could find in the NCBI database:


dntAb ~ 104
dntAc ~ 447
dntAd ~ 194

dntB ~ 548

dntD ~ 314

dntG ~ 281

for a total of 1,888 amino acids.


Note that I have not included dntAa (which can be deleted without
compromising the function in question), dntC (the putative non-specific
reductase), or ORF13 (the role of which is unknown and may or may not
be essential). I even left out dntE, which you dissed for some reason.


On at least three other occasions, when asked to put a number on
"minimum size requirements" for a given function, you have estimated
that requirement using the smallest published sequence known to perform
that function. In the case of 2,4-DNT, that number is about 1,888aa,
which would require roughly 6,000 base pairs to code for it.

Now I'm pretty sure that you don't think that dntAbAcAd could perform
the entire pathway by itself (meaning that it does not single-handedly
account for the "minimum size requirement"), and I know you don't
believe that any random peptide sequence could perform the subsequent
of the steps of the pathway (meaning that dntAbAcAd does not
single-handedly account for your sequence conservation criterion,
either). So by your own stated criteria, the answer to the question is
*not* "pick the most complex single enzyme in this cascade and find the


shortest possible genetic sequence that can code for that type of

functional enzyme".

And now that we've gotten that out of the way, let me ask once more:

How many of the base pairs coding for the 2,4-DNT pathway are "fairly
specified"? How did you arrive at that number, and how could an
independent observer verify that number?

John Harshman

unread,
Jul 20, 2006, 10:56:14 AM7/20/06
to
Marc wrote:

Seldom have I witnessed a more egregious case of people talking past
each other, and I think you are largely at fault. What do you think Sean
is asking for? What point do you think you're making with these
semi-endogenous retroviruses? I have no idea. Please explain.

Andrew Arensburger

unread,
Jul 20, 2006, 12:02:56 PM7/20/06
to
Kermit <unrestra...@hotmail.com> wrote:
> City kid, huh? Many chickens can fly. Not like a sparrow, but roosting
> in trees or flying over fences is not uncommon.

I wonder if anyone's brought up feral chickens on t.o
before...

<Ducks, before he gets goosed>

--
Andrew Arensburger, Systems guy University of Maryland
arensb.no-...@umd.edu Office of Information Technology
/etc/passwd is full -- go away!

snex

unread,
Jul 20, 2006, 12:09:31 PM7/20/06
to

i am not interested in your assertions. provide the NUMBERS and HOW YOU
ARRIVED AT THEM or stop maintaining you have the authority to
pontificate on these matters.

really, what is so hard to understand about my request? would you write
a scientific paper in this manner, or would you supply the methodology
and the numbers right off the bat?

>
> > >
> > > > > > > Sean Pitman
> > > > > > > www.DetectingDesign.com

Seanpit

unread,
Jul 20, 2006, 1:03:36 PM7/20/06
to
Marc wrote:
> Seanpit wrote:
> > Marc wrote:
> >
> > > > > Explain this then....
> > > > > http://www.nature.com/nature/journal/v442/n7098/abs/nature04841.html
> > > >
> > > > I don't have access to the full article. Please quote the relevant
> > > > portion of the article describing the real-time evolution of the novel
> > > > function. If the function is nothing more than a virus being able to
> > > > gain entrance into a cell via some sort of template matching, then this
> > > > is very low-level evolution. It is not the type of evolution that
> > > > would explain the creation of a functional system like flagellar
> > > > motility.
> > >
> > > Not access to *a* cell, you git, it is embedding itself within the
> > > genome.
>
> ... snip
>
> > > As a service to mankind I will try to e-mail you a copy if I can
> > > figure out where to send it.
> >
> > First tell me what novel function was described as evolving in this
> > paper . . .
>
> The novel function is called "evolution".

Evolution is a process not a novel function in and of itself. Evolution
is supposed to produce novel functions in living things over time. I
say that evolution doesn't do this beyond very low levels of functional
complexity. If you are challenging me on this point, what novel
function is described as evolving in this paper of yours?

> That you insist it must have "function" and you refuse to read
> the paper shows your agenda. Why even post here if you don't
> want to hear an answer apart from the one you insist on?

I'm only interested in evolving functions. If the paper doesn't talk
about the real time evolution of a novel function, what is your point
in presenting it to me for review?

I get this all the time. People tell me to read this or that paper.
When I do, it almost always does not say what I was told it said.
Often it isn't even dealing with the topic at hand. The topic here is
the evolution of novel functions in real time. If you have an example
that goes beyond very low levels of functional complexity, I'd like to
see it. Otherwise, though perhaps quite interesting, papers that do
not deal with this topic are, well, off topic.

Ken Denny

unread,
Jul 20, 2006, 1:21:24 PM7/20/06
to
Seanpit wrote:
>
> Evolution is a process not a novel function in and of itself. Evolution
> is supposed to produce novel functions in living things over time. I
> say that evolution doesn't do this beyond very low levels of functional
> complexity. If you are challenging me on this point, what novel
> function is described as evolving in this paper of yours?

So what you are asking for is an example of a process that takes
thousands of years being observed completing in a few decades. Do I
have that right?

snex

unread,
Jul 20, 2006, 1:46:43 PM7/20/06
to

his claim is that it would take trillions of years, not just thousands.

how he thinks the lack of observation of a process hypothesized to take
thousands of years implies that it takes trillions of years is anyone's
guess.

perhaps sean would like to stop whining that no such process has been
observed (since thousand year processes are unobservable by definition)
and show how indirect observations support trillions of years rather
than thousands.

my guess is that he will resort back to flawed english language
analogies that have a far greater ratio of useless sequences to all
sequences than DNA. and even these do not support him, as zachariel's
word mutagenator demonstrates.

lannybudd

unread,
Jul 20, 2006, 3:42:23 PM7/20/06
to

Seanpit wrote:

"I've yet to read about a real-time demonstration to back up even one
of
these just-so proclamations beyond very low levels of functional
complexity."

Stop wasting electrons arguing with this clod. He wants you to cite a
paper where a cat gave birth to a dog. More subtle reasoning need not
apply.

Ken Denny

unread,
Jul 20, 2006, 3:53:38 PM7/20/06
to
snex wrote:
>
> my guess is that he will resort back to flawed english language
> analogies that have a far greater ratio of useless sequences to all
> sequences than DNA. and even these do not support him, as zachariel's
> word mutagenator demonstrates.

Maybe we should demand that he show us an example of the creation of a
new "kind" being observed.

TomS

unread,
Jul 20, 2006, 4:25:46 PM7/20/06
to
"On 20 Jul 2006 12:53:38 -0700, in article
<1153425218.1...@i42g2000cwa.googlegroups.com>, Ken Denny stated..."

I'm just curious about what happens when a new kind appears, whether or not we
have any evidence for it.

A few of the possibilities that have occurred to me are:

* The appearance of an individual (or, perhaps a pair, male and female) which is
destined to be the ancestor of all the later individuals of this new kind. This
individual could be born to an individual of another kind, or could make its
appearance as an adult.

* A population, with a variety of individuals of different apparent ages, could
suddenly appear.

* A whole new, interacting, ecological system of many different kinds of animals
and plants and physical environment.

The fact that this question has been around for at least 150 years, with no
attempt at an answer, makes me begin to suspect that somebody isn't taking
it seriously.


--
---Tom S. <http://talkreason.org/articles/chickegg.cfm>
"... have a clear idea of what you should expect if your hypothesis is correct,
and what you should observe if your hypothesis is wrong ... If you cannot do
this, then this is an indicator that your hypothesis may be too vague."
RV Clarke & JE Eck: Crime Analysis for Problem Solvers - step 20

Von R. Smith

unread,
Jul 20, 2006, 6:00:03 PM7/20/06
to


I should think that, at very least, the minimum size of a cascade is
the *sum* of all the minimum sizes of its parts. When I asked you
earlier about how you determined what was "fairly specified", here is
what you said:

"As I've told you over and over again, specificity is a description of
the minimum sequence order or limitations of character differences
within a sequence that can be sustained without a complete loss of the

function in question. A function that has greater minimum size
requirements (given a constant degree of specificity) with be at a
higher level of functional complexity. The same is true for a function
that requires a greater degree of specificity (given a constant minimum

size requirement). And, obviously, the same is true for functions that

require both a greater minimum size and specificity requirement."

So since the entire cascade has a larger minimum sequence than any of
its individual proteins, and since it has at least the number of
conserved and constrained sites of all its individual protein, then
Sean's claim that a cascade is only as complex as its largest part,
whatever else that might mean, cannot mean that it has the same number
of "fairly specified" residues as its largest part. Or is Sean going
to contend with a straight face that NONE of the 284aa in dntG are
"fairly specified" without even looking at it, simply because there
happens to be a larger enzyme somewhere in the cascade?

Marc

unread,
Jul 20, 2006, 6:42:31 PM7/20/06
to


He is asking for "evolution in action in real time" but has set
conditions about "complexity". I'm fairly sure that he would
reject any lab-based manipulations and insist that examples
of evolution must occur *in nature* to qualify, and the yeast
paper where genomic changes were shown to be a removable
barrier to evolution would also be "wrong". Evolution by it's
nature works over time, often over a great deal of time, and
clear-cut evidence of evolution happening in the wild here and
now would go a long way to establishing that evolution is real.

I was not "talking past" anybody, John. I was offering to e-mail
him directly a copy of a nature article that he complained he
was not able to obtain. I can't read it for him (although he did
ask that I do so), but what is the harm in his actually having a
copy of the paper made available? The endogeneous retroviral
genomes do provide function within mammalian evolution, but
it is up to Sean to see what the paper's authors say about that
and to determine if this is evidence he can accept. Doing what
nashtOn does, sticking fingers in his ears so to speak, after
saying he couldn't get the paper just shows he has an agenda
rather than an interest in learning. Since I'm not involved with
the paper I think he should read it for himself rather than asking
me to explain it to him. While this paper may or may not meet
Sean's criteria about "high level function", it certainly does give
an example of "real time" evolution in a natural system. That
Sean doesn't want to even look at the paper is not surprising
given his misunderstanding of the content of the abstract.

(signed) marc

John Harshman

unread,
Jul 20, 2006, 8:23:46 PM7/20/06
to
Marc wrote:

I'm not sure he would say this. He has allowed, for example, that other
lab experiments showing evolution of new functions are correct, and his
out has been that they're simple functions, not that they're invalid
because they're lab experiments.

> Evolution by it's
> nature works over time, often over a great deal of time, and
> clear-cut evidence of evolution happening in the wild here and
> now would go a long way to establishing that evolution is real.
>
> I was not "talking past" anybody, John.

Yes you are. Let me explain.

> I was offering to e-mail
> him directly a copy of a nature article that he complained he
> was not able to obtain. I can't read it for him (although he did
> ask that I do so), but what is the harm in his actually having a
> copy of the paper made available? The endogeneous retroviral
> genomes do provide function within mammalian evolution, but
> it is up to Sean to see what the paper's authors say about that
> and to determine if this is evidence he can accept. Doing what
> nashtOn does, sticking fingers in his ears so to speak, after
> saying he couldn't get the paper just shows he has an agenda
> rather than an interest in learning. Since I'm not involved with
> the paper I think he should read it for himself rather than asking
> me to explain it to him. While this paper may or may not meet
> Sean's criteria about "high level function", it certainly does give
> an example of "real time" evolution in a natural system. That
> Sean doesn't want to even look at the paper is not surprising
> given his misunderstanding of the content of the abstract.

He's not asking for an example of real-time evolution. He's asking for
an example of real-time evolution of some new protein or set of proteins
with a complex function. And while I doubt he has any rigorous criteria
for this, certainly a retroviral insertion doesn't count. Even I can
tell that. And saying that retroviruses serve a function is silly. It's
just like saying that mutations serve a function (because to the
organism, that's what insertions are: random mutations.) While mutations
allow evolution to happen, this is not a "function" in any reasonable
sense. That would be teleological thinking, more suited to Sean than to
any scientist. Mutations have no "purpose" or "function"; they just happen.

I think that you may conceivably be doing what Sean does all the time:
confusing two meanings of "evolution". One meaning is common descent,
and another is change through natural processes (e.g. mutation and
selection). Your retroviruses may be evidence for the first. But are
they evidence for the second? And if they are, are they evidence for
adaptive evolution?

Seanpit

unread,
Jul 20, 2006, 11:49:20 PM7/20/06
to

An enzymatic cascade has relatively low specificity because of several
reasons. Each step in the cascade may be beneficial all by itself,
without the subsequent cascade. Enzymatic activity by itself doesn't
require a great deal of size or specificity in order to achieve minimum
usefulness (as I've previously showed you with a 1200aa lactase that
could work with just 400aa). 3D orientation of the individual enzymes
is not required.

But, for argument's sake, lets say that all the enzymes in a particular
cascade are required before a benefit will be gained. The fact that 3D
orientation is not required dramatically reduces the specificity.

For example, what are the odds that two specific 3-character sequences
(26-possible characters per position) will be present in a long string
of say, 1 billion randomly typed 3-character sequences (3 billion
characters total)? The total number of possibilities is 17,576
(sequence space for 3-character sequences). The odds that a particular
3-character sequence will not be anywhere in this sequence is
(17,575/17,576)^1e9 = 6e-24,711. So, the odds that 1 specific
3-character sequence will be present in our sequence is very good at
1-6e-24,711 = very very near 100%. So, what are the odds that 2
specific 3-character sequences will be there? Also very close to 100%.
(i.e., 0.99999 . . . * 0.999999 . . . ).

Now, what about a 6-character sequence. What are the odds that a
specific 6-character sequence will be present in these 3 billion
characters? The total number of possibilities is 308,915,776. The
odds that a particular 6-character sequence will not be anywhere in the
string of 3 billion characters is 308,915,775/308,915,776 ^ 5e8 =
~0.20. So, the odds that 1 specific 6-character sequence will be
present in our sequence isn't nearly as good at 1 - 0.20 = 0.80 or 80%.
So, what are the odds that 2 specific 6-character sequences will be
found within our 3 billion character string? Only 64% chance.

What about a 12-character sequence? Sequence space = ~95e15
possibilities. The odds that a particular 12-character sequence will
not be anywhere in the string of 3 billion characters is 95e15-1/95e15
^ 2.5e8 = ~ 0.999999997. So, the odds that 1 specific 12-character
sequence will be present in our sequence is much much less likely at 1
- 0.999999997 = ~1e-9. The odds that 2 specific 12-character sequences
will be present is about 1e-18.

This is why specific shorter sequences that need only be specified
independent of the other sequences in the cascade, do not carry nearly
the degree of specificity that a function carries which requires all
its parts to be specifically oriented relative to each other into a
unified whole. A function that requires two highly specified
3-character sequences that are not also required to be in a specific
orientation/relationship with each other does not carry nearly the
degree of specificity of a function that requires a 6-character
sequence where all the characters must be oriented relative to all the
others. The degree of specificity is exponentially greater for the
6-character function vs. the function that requires two independently
acting 3-character parts.

This is why cascading enzymatic systems are not comparable, as far as
functional complexity is concerned, to higher level systems like
flagellar motility.

> So now that we have established that the answer to my question cannot
> be "the minimum coding sequence for the largest protein in the
> pathway", let me ask again: How many of the base pairs in the genetic
> sequences encoding the 2,4-DNT pathway are "fairly specified"?
>
> Let me give you some help with the numbers. You have yourself
> identified 4 enzymes that are central to and specific to the cascade in
> question. Here are the smallest sequences performing their functions
> that I could find in the NCBI database:
>
>
> dntAb ~ 104
> dntAc ~ 447
> dntAd ~ 194
>
> dntB ~ 548
>
> dntD ~ 314
>
> dntG ~ 281
>
> for a total of 1,888 amino acids.

As explained above, one simply cannot add up the numbers like one could
for a non-cascading system were each part required specific 3D
orientation against all the other parts of the system. Basically, what
you have here is a system of complexity that isn't much more complex
than its most complex subpart (i.e., 548 fairly specified aa).

> Note that I have not included dntAa (which can be deleted without
> compromising the function in question), dntC (the putative non-specific
> reductase), or ORF13 (the role of which is unknown and may or may not
> be essential). I even left out dntE, which you dissed for some reason.

None of this really matters to the final statistical outcome.

> On at least three other occasions, when asked to put a number on
> "minimum size requirements" for a given function, you have estimated
> that requirement using the smallest published sequence known to perform
> that function. In the case of 2,4-DNT, that number is about 1,888aa,
> which would require roughly 6,000 base pairs to code for it.

Again, cascading systems are a different story. They do not have nearly
the equivalent degree of specificity that systems of function have were
all of its parts must be specifically oriented relative to all the
other parts.

> Now I'm pretty sure that you don't think that dntAbAcAd could perform
> the entire pathway by itself (meaning that it does not single-handedly
> account for the "minimum size requirement"), and I know you don't
> believe that any random peptide sequence could perform the subsequent
> of the steps of the pathway (meaning that dntAbAcAd does not
> single-handedly account for your sequence conservation criterion,
> either). So by your own stated criteria, the answer to the question is
> *not* "pick the most complex single enzyme in this cascade and find the
> shortest possible genetic sequence that can code for that type of
> functional enzyme".

Actually, as far as specificity is concerned, this is pretty much
correct.

> And now that we've gotten that out of the way, let me ask once more:
>
> How many of the base pairs coding for the 2,4-DNT pathway are "fairly
> specified"? How did you arrive at that number, and how could an
> independent observer verify that number?

548aa = ~1644bp of genetic real estate (given that this enzyme cannot
be significantly reduced in size and is already fairly specified in and
of itself).

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 20, 2006, 11:58:08 PM7/20/06
to

Why should the evolution of a relatively simple functional system, a
system that requires only a few thousand fairly specified base pairs of
genetic real estate, that a few thousand years to evolve? - therefore
making such relatively low-level evolution unobservable? I'm not
asking for turning one type of creature into another type. I'm just
taking about relatively simple functional biosystems here.

It is just most interesting that evolutionary progress does in fact
show an exponential stalling out effect with each additional size and
specificity requirement until it stalls out completely on relatively
low rungs of the ladder of functional complexity. Statistically, this
is easily explained by the non-beneficial gap problem between what is
and what might be beneficial if random mutations could actually find
novel beneficial sequences in the vastness of sequence space at higher
and higher levels of minimum size and sequence specificity requirements
for higher and higher level functions.

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 21, 2006, 12:01:36 AM7/21/06
to

Not at all. Just cite a paper where random mutation and natural
selection give rise to a functional system that requires more than 3 or
4 thousand fairly specified base pairs of genetic real estate.
Relatively speaking, this shouldn't be much of a problem for you guys
since such a function would be relatively simple - not even remotely
close to a cat giving birth to a dog.

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 20, 2006, 11:51:05 PM7/20/06
to
> The answer to what? Not to the question of how many of the base-pairs
> in the coding sequence are "fairly-specified". Just the other day you
> wrote this:
>
> "As I've told you over and over again, specificity is a description of
> the minimum sequence order or limitations of character differences
> within a sequence that can be sustained without a complete loss of the
> function in question."
>

> *not* "pick the most complex single enzyme in this cascade and find the


> shortest possible genetic sequence that can code for that type of

Rich

unread,
Jul 21, 2006, 8:42:31 AM7/21/06
to
On 19 Jul 2006 14:52:31 -0700, "Seanpit"
<seanpi...@naturalselection.0catch.com> wrote:

>Yes, functions that require a fair degree of specificity, but
>significantly less than 3kb of genetic real estate, can evolve via


>random mutation and natural selection.

I basically agree with what you are saying, but do you actually
believe that evolution (random mutations) *does* create low level
functions, or do you just accept the idea that it could do this
theoretically?

For since it cannot create anything above those low levels, something
else (ID) must have to take over from here then. And that doesn't make
much sense - you would imagine it was the same mechanism that worked
on any level of the process - either evolution, or ID.

I believe though that random mutation producing algorithms might play
a role in certain ID processes, because this might be useful, e.g. to
create variation, or to try to find the genetic code that would
provide immunity against a certain drug or poison etc. But that is
purely per design, just like when we include random generators in
computer programs for different practical purposes.

And I think it would be easy to interprete such semi-random processes
as truly random if you don't understand that they're designed and
don't look for evidence that they are. Basically that's what I think
is the problem with evcolutionary theory - you don't see what's really
going on (like Einsteins said: "It is the theory that decides what can
be observed"), and then you interprete is as random (meaning: we don't
have any explanation for it).

BTW, do you know Bruce Lipton and his "mind over genes" theory? if
not, check it out, it's very interesting, and backed up by lots of
experimental evidence. Basically it explains adaptation to the
environment on a genetic level as an intelligent reaction rather than
being caused by random mutations.

http://www.brucelipton.com/article/mind-over-genes-the-new-biology


Stanley Friesen

unread,
Jul 21, 2006, 9:51:15 AM7/21/06
to
Jason Spaceman <notr...@jspaceman.homelinux.org> wrote:

>From the article:
>--------------------------------------------------------------------
>7/14/2006 5:28:17 PM
>Daily Journal
>
>Dr. John Sanford, who will present the July 17 lecture, is the primary
>inventor of the gene gun process. His research has been used to
>engineer most of the world's transgenic crops.

Hmm, sounds like another engineer - an agricultural engineer.

--
The peace of God be with you.

Stanley Friesen

Robin Levett

unread,
Jul 22, 2006, 7:00:39 AM7/22/06
to
Andrew Arensburger wrote:

> Kermit <unrestra...@hotmail.com> wrote:
>> City kid, huh? Many chickens can fly. Not like a sparrow, but roosting
>> in trees or flying over fences is not uncommon.
>
> I wonder if anyone's brought up feral chickens on t.o
> before...
>
> <Ducks, before he gets goosed>

Trying to avoid a "felt effect"?

--
Robin Levett
rle...@rlevett.ibmuklunix.net (unmunge by removing big blue - don't yahoo)

Alexander

unread,
Jul 22, 2006, 7:17:39 AM7/22/06
to

Robin Levett wrote:
> Andrew Arensburger wrote:
>
> > Kermit <unrestra...@hotmail.com> wrote:
> >> City kid, huh? Many chickens can fly. Not like a sparrow, but roosting
> >> in trees or flying over fences is not uncommon.
> >
> > I wonder if anyone's brought up feral chickens on t.o
> > before...
> >
> > <Ducks, before he gets goosed>
>
> Trying to avoid a "felt effect"?
>

well someone's going down - probably me as I'm such a feather weight

Robin Levett

unread,
Jul 22, 2006, 7:47:50 AM7/22/06
to
Alexander wrote:

>
> Robin Levett wrote:
>> Andrew Arensburger wrote:
>>
>> > Kermit <unrestra...@hotmail.com> wrote:
>> >> City kid, huh? Many chickens can fly. Not like a sparrow, but roosting
>> >> in trees or flying over fences is not uncommon.
>> >
>> > I wonder if anyone's brought up feral chickens on t.o
>> > before...
>> >
>> > <Ducks, before he gets goosed>
>>
>> Trying to avoid a "felt effect"?
>>
>
> well someone's going down

I shouldn't have read this thread after reading Nashton on "Gay from the
womb"...

> - probably me as I'm such a feather weight

That depends on where Saturn is at the moment.

Robin Levett

unread,
Jul 22, 2006, 7:54:39 AM7/22/06
to
Seanpit wrote:

Doesn't this rather depend on whether we're allowed to start with a
functional system requiring either 2,999 or 3,999, or 6k or 8k "fairly
specified base pairs" (however that's measured)?

The ToE says that (in general) is how it happens - you insist on being shown
a system that doesn't do that.

How do I measure "levels of functional complexity" without asking you?

Seanpit

unread,
Jul 22, 2006, 8:21:37 AM7/22/06
to

Robin Levett wrote:
> Seanpit wrote:
>
> >
> > lannybudd wrote:
> >> Seanpit wrote:
> >>
> >> "I've yet to read about a real-time demonstration to back up even one
> >> of
> >> these just-so proclamations beyond very low levels of functional
> >> complexity."
> >>
> >> Stop wasting electrons arguing with this clod. He wants you to cite a
> >> paper where a cat gave birth to a dog. More subtle reasoning need not
> >> apply.
> >
> > Not at all. Just cite a paper where random mutation and natural
> > selection give rise to a functional system that requires more than 3 or
> > 4 thousand fairly specified base pairs of genetic real estate.
> > Relatively speaking, this shouldn't be much of a problem for you guys
> > since such a function would be relatively simple - not even remotely
> > close to a cat giving birth to a dog.
>
> Doesn't this rather depend on whether we're allowed to start with a
> functional system requiring either 2,999 or 3,999, or 6k or 8k "fairly
> specified base pairs" (however that's measured)?

Not at all. It doesn't matter what you have to start. You can start
with an entire genome of intact functional systems with many systems at
very high levels of functional complexity.

> The ToE says that (in general) is how it happens - you insist on being shown
> a system that doesn't do that.

The ToE does say that is how it happens. I say it doesn't happen
regardless of what you have to start with.

> How do I measure "levels of functional complexity" without asking you?

You determine the minimum size and degree of specific sequence
arrangement that a particular functional system requires before its
function in question can be realized to at least a minimum degree of
selective advantage. These minimum requirements before functionality
of a particular type can be realized are a measurement of that
function's level of complexity.

> Robin Levett
> rle...@rlevett.ibmuklunix.net (unmunge by removing big blue - don't yahoo)

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 22, 2006, 9:52:52 AM7/22/06
to

Robin Levett

unread,
Jul 22, 2006, 9:54:45 AM7/22/06
to
Seanpit wrote:

So you say that a functional system of 2,999 base pairs never, through a
single mutation, becomes a functional system with 3,000 base pairs?

>
>> How do I measure "levels of functional complexity" without asking you?
>
> You determine the minimum size and degree of specific sequence
> arrangement that a particular functional system requires before its
> function in question can be realized to at least a minimum degree of
> selective advantage.

And how do I do that? What is a "minimum degree of selective advantage"?
What is a "specific sequence arrangement"?

What variables do I feed into what equations to come up with a "level of
functional complexity"? How do I measure those variables?

> These minimum requirements before functionality
> of a particular type can be realized are a measurement of that
> function's level of complexity.


--

Seanpit

unread,
Jul 22, 2006, 10:09:35 AM7/22/06
to

That's right. A novel system at such a level or greater (requiring a
minimum of at least 3 or 4k of fairly specified bps) is never evolved
even if you start with thousands or even millions of pre-established
functional 3 or 4k systems in the gene pool in question.

Seanpit

unread,
Jul 22, 2006, 10:09:30 AM7/22/06
to

That's right. A novel system at such a level or greater (requiring a


minimum of at least 3 or 4k of fairly specified bps) is never evolved
even if you start with thousands or even millions of pre-established
functional 3 or 4k systems in the gene pool in question.

> >> How do I measure "levels of functional complexity" without asking you?

Richard Forrest

unread,
Jul 23, 2006, 6:34:34 AM7/23/06
to

Seanpit wrote:
> Ken Denny wrote:
> > Seanpit wrote:
> > >
> > > Evolution is a process not a novel function in and of itself. Evolution
> > > is supposed to produce novel functions in living things over time. I
> > > say that evolution doesn't do this beyond very low levels of functional
> > > complexity. If you are challenging me on this point, what novel
> > > function is described as evolving in this paper of yours?
> >
> > So what you are asking for is an example of a process that takes
> > thousands of years being observed completing in a few decades. Do I
> > have that right?
>
> Why should the evolution of a relatively simple functional system, a
> system that requires only a few thousand fairly specified base pairs of
> genetic real estate, that a few thousand years to evolve? - therefore
> making such relatively low-level evolution unobservable? I'm not
> asking for turning one type of creature into another type. I'm just
> taking about relatively simple functional biosystems here.
>
> It is just most interesting that evolutionary progress does in fact
> show an exponential stalling out effect with each additional size and
> specificity requirement until it stalls out completely on relatively
> low rungs of the ladder of functional complexity.

It does?
On which evidence do you base this assertion?

Or is this another of those "facts" that we are supposed to believe
simply because you say so?

>Statistically, this
> is easily explained by the non-beneficial gap problem between what is
> and what might be beneficial if random mutations could actually find
> novel beneficial sequences in the vastness of sequence space at higher
> and higher levels of minimum size and sequence specificity requirements
> for higher and higher level functions.

You still haven't told us how you measure "specificity", and on what
mathematical basis you have determined what the "higher levels of
minimum size" are.

Or is this another of those "facts" that we are supposed to believe
simply because you say so?

By the way, here is a paper which argues on the basis of genetics the
evolution of novel functions in vertebrates.

http://www.evolutionsbiologie.uni-konstanz.de/pdf1-182/P095.pdf

I know that you will find this "unconvincing". That is irrelevant. If
you find it unconvincing, provide an alternative, testable explanation
for the evidence. If you can't, your assertion that novel functions
can't evolve is unfounded.

By the way, you also need to provide alternative explanations for the
evidence presented here:
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=7665566&dopt=Citation

and here:
http://www.pnas.org/cgi/content/abstract/95/7/3708

and here:
http://www.genome.org/cgi/content/abstract/8/10/1038

and here:
http://intl.jcb.org/cgi/content/abstract/152/3/451

and here:
http://patelweb.berkeley.edu/Lab/Davis,%20%20G.-TIG.pdf

and here:
http://intl.amjbot.org/cgi/content/abstract/85/8/1047

and here:
http://www3.interscience.wiley.com/cgi-bin/abstract/34267/ABSTRACT?CRETRY=1&SRETRY=0

and here:
http://www3.interscience.wiley.com/cgi-bin/abstract/109920232/ABSTRACT

and here:
http://patsy.hunter.cuny.edu/MolecularBiophysics/PDF/Genomics_Proteomics/function_evol_inter_genomics_currop2001.pdf

Remember:
Your lack of conviction is not an argument.
You need to base your "refutation" of these papers on evidence and
argument, not assertion.

And when you've "refuted" these, I'll give you another long list.

And then another.

And then another.

Or else you can simply accept that there is strong evidence for the
evolution of novel functions in living organisms.

RF

>

> Sean Pitman
> www.DetectingDesign.com

Seanpit

unread,
Jul 23, 2006, 9:52:28 AM7/23/06
to

Which one of these references of yours describes the evolution of a
novel function requiring a minimum of more than 3 or 4 thousand fairly
specified bp of genetic real estate? Sure, novel functions do evolve
via random mutation and natural selection, but not beyond very low
levels of functional complexity.

> RF
>
> >
>
> > Sean Pitman
> > www.DetectingDesign.com

Von R. Smith

unread,
Jul 23, 2006, 9:56:50 AM7/23/06
to

Umm, Sean, why did you cut and paste text from this post and then graft
it wholesale over three different ones in two different threads? Even
if you think essentially the same argument serves equally well in each
case, that does not justify wholesale obliteration of the post you are
supposed to be responding to.


At any rate, here is the rest of your paragraph in question:

"...A function that has greater minimum size requirements (given a
constant degree of specificity) with[sic; I assume you meant "will"] be
at a higher level of functional complexity. The same is true for a
function that requires a greater degree of specificity (given a


constant minimum size requirement). And, obviously, the same is true
for functions that require both a greater minimum size and specificity
requirement."

Some important points to remember are:

1) The specificity is required for "the function in question", which
in this case is the 2,4-DNT degredation pathway. It is *not* just for
*any* function at all.

2) Minimum size requirement figures into the measure of functional
complexity, independently of "sequence specificity".

3) Sean *measures* functional complexity by the metrics of specificity
and minimum size requirement, or at least he says he does. If his
conclusions about the "functional complexity" of an enzyme cascade are
to have any rigor, they must be honestly and accurately based on this
metric, not tailored with special rules to fit his desired conclusion.

4) Disclaimer: Before we get much deeper into this, I want people to
understand that I don't actually endorse Sean's methodology or even
consider it particularly relevant to how biochemistry actually works.
I am, however, humoring his terminology and method of argument for the
purposes of the current discussion, to show that his claim that:

"Even with an entire gene pool of many thousands of functional
sequences
evolution will not create a novel functional system *of any kind* (not
just a particular teleologic system) that requires a minimum of more
than 3 or 4 fairly thousand fairly specified bp of genetic real
estate."

is false, even by his own definitions of his home-grown terminology.


> > This is a non-standard usage of the term "specificity", but we'll let
> > that pass. Now, it is trivial that it takes a longer sequence to code
> > for the entire 2,4-DNT pathway than it does for any individual step in
> > that pathway. So unless you think poor dntAbAcAd can do the entire
> > pathway all by itself, then the pathway must have a greater "minimum
> > size requirement" than the largest individual gene within it. And
> > unless you are saying that any random sequence can fill in the rest of
> > the "size requirement" in order to perform the rest of the steps of the
> > pathway than there must be at least some "limitations of character
> > differences" wiithin the rest of the coding sequence.
>
> An enzymatic cascade has relatively low specificity because of several
> reasons. Each step in the cascade may be beneficial all by itself,
> without the subsequent cascade.


Irrelevant. By your own stated rule, specificity is to the "function
in question", not just any function at all. And yes, of course the
individual components and subassemblies can be beneficial by
themselves; that would be expected if the cascade had evolved in the
incremental, selectable fashion envisioned by the *actual* ToE you
profess to understand and argue against, rather than the strawman
version of it you keep telling us you repudiate. "Irreducible
complexity" is your bugbear, not mine.

Practically any biological system you might name is going to have this
sort of modular structure, including your favorite examples at both
ends of your "ladder of complexity: the flagellum (the export system
alone is useful) and cytochrome-c (just the part binding heme is
useful). This isn't some sort of special property unique to cascades,
so why pretend that it is?

Your ploy here sounds suspiciously like an attempt to define away any
possible evolution of complex structures: if a function requiring
several proteins can be built up through selectable, individually
useful intermediates, then it doesn't "count" as evolution of a complex
structure. Under these rules the *only* examples that would count
would be ones that evolved via your "neutral-drift" strawman version of
evolution, a strawman you disown whenever challenged on it, but then
try to slip right back in at the next opportunity.


> Enzymatic activity by itself doesn't
> require a great deal of size or specificity in order to achieve minimum
> usefulness (as I've previously showed you with a 1200aa lactase that
> could work with just 400aa). 3D orientation of the individual enzymes
> is not required.
>
> But, for argument's sake, lets say that all the enzymes in a particular
> cascade are required before a benefit will be gained.


No, let's not say that for argument's sake, because that isn't the
point under argument. Nobody is saying that the entire cascade must be
in place for *a* benefit to be gained. The issue is what is required
to degrade 2,4-DNT sufficiently for an organism to use it as its sole
carbon, nitrogen, and energy source (the "function in question").
Let's keep that in mind.


> The fact that 3D
> orientation is not required dramatically reduces the specificity.


How? AFAICT, "specificity" sensu Pitman basically means the degree of
sequence constraint required to conserve function. I am not sure what
you mean by "3D orientation", but my best guess is that it is something
to do with the degree to which the proteins combine to form an actual
structure or spatial arrangement.

If that is an accurate statement of your notion, then I would maintain
that you are probably wrong to assume a simple, straightforward
relationship between complexity of quaternary structure and
conservation of primary structure. AIUI, some structurally simple
peptides can have (and possibly require) high sequence conservation,
such as histone, whereas other structures requiring a larger-scale and
more complex "3d orientation" might have relatively low constraints on
sequence, such as hemoglobin.

So before I entertain this particular claim, I would like to see an
actual justification for it. Maybe a chart plotting the relation of
sequence conservation to some index of "3D orientation", with some
actual values derived from published data.


>
> For example,


*interrupts*. The below is not an example of anything we are
discussing. This is another of your weak language analogies. If
you're going to present something and call it an "example", at least
give a relevant one from biology.


Once more, let us recall that this is the claim I am presenting the
2,4-DNT cascade to dispute:

"No function that requires a minimum of more than 3 to 4 fairly
specified Kb of genetic real estate ever evolves in reality - there is
not one example."

I am not asking you about the *degree* of specificity compared to some
other sequence you believe to be even more complex. I am simply
asking for a count of the number of "fairly specified" base pairs
required to code for "the function in question". That number, if it is
to be meaningful, shouldn't change just because you can imagine a more
complex system than the one we are examining. To go back to your
"example", even if we grant your argument that your 6-character
function is more specified than your pair-of-3-character function, the
fact remains that both require a total of six specified characters.
This point is especially pertinent when we remember that we are talking
about proteins, in which the most highly conserved and functionally
crucial residues are usually spaced apart singly or in very small
groups, rather than together in large clumps.


>
> This is why cascading enzymatic systems are not comparable, as far as
> functional complexity is concerned, to higher level systems like
> flagellar motility.


I am not asking for a comparison, I am asking for a count. If being
"fairly specified" is actually a property of the sequence in question,
I should be able to determine it without such comparisons, simply by
estimating the minimum size and degree of sequence constraint of the
primary structure.


>
> > So now that we have established that the answer to my question cannot
> > be "the minimum coding sequence for the largest protein in the
> > pathway", let me ask again: How many of the base pairs in the genetic
> > sequences encoding the 2,4-DNT pathway are "fairly specified"?
> >
> > Let me give you some help with the numbers. You have yourself
> > identified 4 enzymes that are central to and specific to the cascade in
> > question. Here are the smallest sequences performing their functions
> > that I could find in the NCBI database:
> >
> >
> > dntAb ~ 104
> > dntAc ~ 447
> > dntAd ~ 194
> >
> > dntB ~ 548
> >
> > dntD ~ 314
> >
> > dntG ~ 281
> >
> > for a total of 1,888 amino acids.
>
> As explained above, one simply cannot add up the numbers like one could
> for a non-cascading system were each part required specific 3D
> orientation against all the other parts of the system.


You misspelled "as asserted above".

Let us recall what you said about how to measure specificity:

"As I've told you over and over again, specificity is a description of
the minimum sequence order or limitations of character differences
within a sequence that can be sustained without a complete loss of the

function in question. A function that has greater minimum size
requirements (given a constant degree of specificity) with be at a
higher level of functional complexity. The same is true for a function
that requires a greater degree of specificity (given a constant minimum

size requirement). And, obviously, the same is true for functions that

require both a greater minimum size and specificity requirement."

So what I want from you is: *first* tell me how to determine the
minimum size requirement for coding the *entire* 2,4-DNT pathway, since
that is the "function in question", *then* you tell me how to measure
the degree of sequence conservation required, *then* we will have an
answer as to how many "fairly specified" base pairs there are in the
coding sequence.

I don't want your assertions about the complexity of enzyme cascades.
I want a method of measurement that independently verifies your
assertions, you know, one that *doesn't* use made-up ad hoc rules meant
to save an already-decided-upon conclusion.

You have stated rules for measuring complexity, Sean. I want to see
you apply them consistently, rigorously, and honestly. First, I want
you to *measure* the compelxity of the 2,4-DNT pathway, using your
stated parameters of minimum size requirement and degree of primary
structure constraint; *then* we can discuss whether your assertion
about it being no more complex than the largest enzyme in it is
correct.

It seems to me that, by your own stated criteria for measuring
"specification" (and for that matter, "functional complexity"), I *can*
measure it the way I have described because:

1) all the enzymes named are required for the function in question (so
they all figure into the "minimum size requirement" for the cascade)

2) each of the enzymes has some degree of sequence constraint; their
functions couldn't be effectively performed by just any random peptides
of the same length (so they all figure into the specificity/sequence
constraint requirement, as well).

You yourself were quick to make the point that each of these enzymes
has an individual function that could benefit an organism that produced
it. Presumably, each of these individual functions has a non-zero
minimum size requirement, and a non-zero degree of sequence constraint
(you can't perform it with just any random peptide of the appropriate
length). This undermines your "cascade rule", rather than supporting
it. If each of these sub-functions has a minimum size requirement and
non-zero specificity, and each is required for the pathway, then at
very least the total minimum size requirement and specificity of the
pathway is the sum of those sub-functions.


> Basically, what
> you have here is a system of complexity that isn't much more complex
> than its most complex subpart (i.e., 548 fairly specified aa).


Sure it is, because it has a larger minimum size requirement. Here,
let me help you remember your own rule:

"...A function that has greater minimum size requirements (given a


constant degree of specificity) with be at a higher level of functional
complexity."

So unless you are saying that there is some offsetting *decrease* in
the specificity of the largest enzyme in the cascade, than the entire
cascade is more complex by your own stated rules.

Again, complexity according to you is supposed to be a *measurement* of
the number of "fairly specified residues". *First* you need to do the
measurement, and tell me how you arrived at it, *then* you can make
claims about its complexity. You don't get to make *prior*
declarations about the cascade's complexity, and then use that to
adjust how to measure it, if your claims are to have any rigor at all.


Your cascade rule is not consistent with your stated criteria for
measuring minimum size requirement (a cascade requires a larger coding
sequence than just that for the largest enzyme in it), nor with your
stated criteria of specificity (not only must the sequence of the
largest enzyme be constrained so as to conserve function, but also all
the other steps as well). I would like to see how you reconcile them
(besides just repeating your assertions, that is).


>
> > Note that I have not included dntAa (which can be deleted without
> > compromising the function in question), dntC (the putative non-specific
> > reductase), or ORF13 (the role of which is unknown and may or may not
> > be essential). I even left out dntE, which you dissed for some reason.
>
> None of this really matters to the final statistical outcome.


There is no "statistical outcome" Sean. Just a made-up
special-pleading that is not even consistent with your stated criteria
of minimum size requirement or degree of sequence constraint.


>
> > On at least three other occasions, when asked to put a number on
> > "minimum size requirements" for a given function, you have estimated
> > that requirement using the smallest published sequence known to perform
> > that function. In the case of 2,4-DNT, that number is about 1,888aa,
> > which would require roughly 6,000 base pairs to code for it.
>
> Again, cascading systems are a different story. They do not have nearly
> the equivalent degree of specificity that systems of function have were
> all of its parts must be specifically oriented relative to all the
> other parts.


Sean, a system's degree of complexity, according to your rules, should
be determined by a *measurement* of its minimum sequence size and
degree of sequence constraint. So *first* I want to see your
measurements of these parameters, *then* we can discuss whether your
prior intuition about its degree of complexity is correct. All this
stuff about 3-D orientation is just a red herring until you demonstrate
that there is a logical connection.

>
> > Now I'm pretty sure that you don't think that dntAbAcAd could perform
> > the entire pathway by itself (meaning that it does not single-handedly
> > account for the "minimum size requirement"), and I know you don't
> > believe that any random peptide sequence could perform the subsequent
> > of the steps of the pathway (meaning that dntAbAcAd does not
> > single-handedly account for your sequence conservation criterion,
> > either). So by your own stated criteria, the answer to the question is
> > *not* "pick the most complex single enzyme in this cascade and find the
> > shortest possible genetic sequence that can code for that type of
> > functional enzyme".
>
> Actually, as far as specificity is concerned, this is pretty much
> correct.


Right, so it is your position that I can replace dntAbAcAd with any
random peptide of any length, including length 0, and still get the
2,4-DNT pathway, right? In fact, I can replace all the other enzymes
besides dntB with random peptides of any length, as you have just
declared that they all have zero specificity. Remember, in order for
the rest of the peptide to have a zero specificity, I should be able to
replace any character in its sequence with any other character without
loss of function. Your rules, Sean, not mine.

So, do you stand by the claim that all the other enzymes besides dntB
involved in the 2,4-DNT pathway can be replaced by random peptides, or
omitted altogether, and still
get an intact pathway? If not, then your cascade rule cannot be
reconciled with your stated criteria for measuring complexity and
counting "fairly specified" amino acids, or the base-pairs coding for
them.


>
> > And now that we've gotten that out of the way, let me ask once more:
> >
> > How many of the base pairs coding for the 2,4-DNT pathway are "fairly
> > specified"? How did you arrive at that number, and how could an
> > independent observer verify that number?
>
> 548aa = ~1644bp of genetic real estate (given that this enzyme cannot
> be significantly reduced in size and is already fairly specified in and
> of itself).


Well, that's a number all right. Unfortunately, it only comes from
your special "cascade rule", and not from any consistent application of
your criteria of minimum size requirement or sequence constraint.

Seanpit

unread,
Jul 23, 2006, 10:34:53 AM7/23/06
to

There is a complete lack of any real time demonstration of evolution of
any novel function that requires more than 3 or 4 thousand fairly
specified bp of genetic real estate. It just doesn't happen. And,
even lower level functions, functions that do evolve, show a dramatic
decrease in evolvability with each increase in size and sequence
specificity requirements.

> >Statistically, this
> > is easily explained by the non-beneficial gap problem between what is
> > and what might be beneficial if random mutations could actually find
> > novel beneficial sequences in the vastness of sequence space at higher
> > and higher levels of minimum size and sequence specificity requirements
> > for higher and higher level functions.
>
> You still haven't told us how you measure "specificity", and on what
> mathematical basis you have determined what the "higher levels of
> minimum size" are.
> Or is this another of those "facts" that we are supposed to believe
> simply because you say so?

How many times do I have to tell you? Sequence specificity is a
measure of the limitation of changes in a sequence that can be
tolerated before a complete loss of a given function is realized. For
example, functional cytochrome c system has a relatively high degree of
minimum sequence specificity - even though its minimum size requirement
is only about 80aa. A significant number of the residue positions in CC
are restricted to 1 or 2 or 3 aa. Also, changing multiple residues at
the same time is also highly restricted.

What this all translates into is a rough overall estimate of how many
sequences in sequence space of 100aa would have the cytochrome c
function. In 1992 Yockey published a ratio of 1 in 1e36 as his
estimate of how many CC there are in sequence space relative to
non-lactase sequences.

In this light, it is interesting to consider the work of "Robert T.
Sauer and his M.I.T. team of biologists who undertook the scientific
research of substituting the 20 different types amino acids in two
different proteins. After each substitution, the protein sequence was
reinserted into bacteria to be tested for function. They discovered
that in some locations of the protein's amino acid chains, up to 15
different amino acids may be substituted while at other locations their
was a tolerance of only a few, and yet other locations could not
tolerate even one substitution of any other amino acid. One of the
proteins they chose was the 92-residue lambda repressor.Sauer
calculated that: "... there should be about 10^57 different allowed
sequences for the entire 92 residue domain. ... the calculation does
indicate in a qualitative way the tremendous degeneracy in the
information that does specifies a particular protein fold.
Nevertheless, the estimated number of sequences capable of adopting the
lambda repressor fold is still an exceedingly small fraction, about 1
in 10^63, of the total possible 92 residue sequences."

Yockey, H.P., On the information content of cytochrome C, Journal of
Theoretical Biology , 67 (1977), p. 345-376.

http://www.detectingdesign.com/PDF%20Files/Cytochrome%20C%20Variability.doc

Yockey, HP, "Information Theory and Molecular Biology", Cambridge
University Press, 1992

R.T. Sauer, James U Bowie, John F.R. Olson, and Wendall A. Lim, 1989,
'Proceedings of the National Academy of Science's USA 86, 2152-2156.
and 1990, March 16, Science, 247; and, Olson and R.T. Sauer, 'Proteins:
Structure, Function and Genetics', 7:306 - 316, 1990.

> By the way, here is a paper which argues on the basis of genetics the
> evolution of novel functions in vertebrates.
>
> http://www.evolutionsbiologie.uni-konstanz.de/pdf1-182/P095.pdf
>
> I know that you will find this "unconvincing". That is irrelevant. If
> you find it unconvincing, provide an alternative, testable explanation
> for the evidence. If you can't, your assertion that novel functions
> can't evolve is unfounded.

This paper does not present any real time demonstration of evolution in
action. This paper does nothing more than present a just-so story about
how these scientists think evolution happened. They do not detail any
of the functional steps involved or what mutations, specifically, would
have been required to cross the gap to gain these novel functions.
They scientists simply assume that evolution is a given. Given that
evolution must have created the differences that they see, they assume
that the degree of sequence similarities indicates the degree of
evolutionary relationship.

The fact of the matter is, such similarities could also be the result
of preservation of design. In order to rule show that the evolutionary
mechanism is actually capable of such feats of creativity, scientists
much show that the exponentially expanding non-beneficial gaps can
actually be crossed by random mutation and natural selection. This has
yet to be done beyond very low levels of functional complexity. The
statistics involved are very convincing. It is clearly obvious to
anyone with a candid mind that the exponential decline in the ratio of
potentially beneficial vs. potentially non-beneficial, with each step
up the ladder of functional complexity, will most certainly result in a
dramatic decline in the evolvability of higher and higher level
functions - regardless of the starting point.

Tell me, which one of these papers shows the real time evolution of any
function that requires a minimum of more than 3 or 4 thousand fairy
specified base pairs of genetic real estate? You only need to present
one example here.

> And when you've "refuted" these, I'll give you another long list.
>
> And then another.
>
> And then another.

You only need one example . . . Which one of the papers you've
presented goes past the threshold I've listed? Please quote the
relevant passage as well as the reference.

> Or else you can simply accept that there is strong evidence for the
> evolution of novel functions in living organisms.

I already accept that there is strong evidence for the evolution of
novel functions in living organisms. I just don't see that this
evolution, as real as it is, goes beyond very low levels of functional
complexity - ever.

> RF

Sean Pitman
www.DetectingDesign.com

Von R. Smith

unread,
Jul 23, 2006, 10:43:14 AM7/23/06
to
<snip all>


Sean, as you haven't bothered to reproduce the actual text of the
message to which you are responding here, I haven't either. I will
simply ask you this:

This is how you recently said you define functional complexity:

"As I've told you over and over again, specificity is a description of
the minimum sequence order or limitations of character differences
within a sequence that can be sustained without a complete loss of the

function in question. A function that has greater minimum size
requirements (given a constant degree of specificity) with be at a
higher level of functional complexity. The same is true for a function
that requires a greater degree of specificity (given a constant minimum

size requirement). And, obviously, the same is true for functions that

require both a greater minimum size and specificity requirement."

So basically, functional complexity is a description of the minimum
possible length of a sequence coding for peptides that can perform the
"function in question", along with the degree of constraint on that
sequence.

How do you reconcile this with your claim that the 2,4-DNT pathway has
the same "functional complexity" as the dntB monoxygenase alone? We
know that dntB on its own does not degrade 2,4-DNT -it does not
effectively attack the 2,4-DNT molecule itself at all- nor does it
produce the end products of pyruvate and CoA propionyl. So what is the
basis for equating the two functions in any way?

Your mumblings about each step being individually beneficial, and about
3-D conformation or lack thereof, are irrelevant. Neithers tell us
anything about minimum size requirement or sequence specificity. So
where does your cascade rule come from?

Or should I call it the cascade exception. Here is a quote from Sean's
website:

"The fact is that all cellular functions are irreducibly complex in
that all of them require a minimum number of parts in a particular
order or orientation. I go beyond what Behe proposes and make the
suggestion that even single-protein enzymes are irreducibly complex. A
minimum number of parts in the form of amino acids are required for
them to have their particular functions. The lactase function cannot
be realized in even the smallest degree with a string of only 5 or 10
or even 100 amino acids of any order. Also, not only is a minimum
number of parts required for the lactase function to be realized, the
parts themselves, once they are available in the proper number, must be
assembled in the proper order and three-dimensional orientation.
Brought together randomly, the amino acids, if left to themselves, do
not know how to self-assemble themselves to form a much of anything as
far as a functional system that even comes close to the level of
complexity of a even a relatively simple function like a lactase
enzyme. And yet, their specified assembly and ultimate order is vital
to high level function."

And yet, here in this thread, Sean claims that enzyme cascades lack
this property. In fact, his wording even bears the interpretation that
all but the largest enzyme in a cascade *individually* lack this
property:

"Enzymatic activity by itself doesn't require a great deal of size or
specificity in order to achieve minimum usefulness (as I've previously
showed you with a 1200aa lactase that could work with just 400aa). 3D
orientation of the individual enzymes
is not required."

Apparently, enzymes involved in cascades are unique among all cellular
functions in lacking this property. Why is that Sean?

John Harshman

unread,
Jul 23, 2006, 11:16:18 AM7/23/06
to
Seanpit wrote:

[snip]

> Which one of these references of yours describes the evolution of a
> novel function requiring a minimum of more than 3 or 4 thousand fairly
> specified bp of genetic real estate?

Nobody knows, because nobody knows how to determine what that means.
There are many problems with your notion of "3 or 4 thousand fairly
specified bp", and we could argue about which is biggest, but I think
it's the tacit assumption that these bp begin in some kind of vacuum,
rather than starting, usually as some sequence that performs a function
with some relation to the new function.

> Sure, novel functions do evolve
> via random mutation and natural selection, but not beyond very low
> levels of functional complexity.

Which has always been your unsupported claim.

Now it seems to me that if we have good evidence that a particular
system did evolve, that's also good evidence that it's possible to
evolve. You might argue that it didn't happen by random mutation and
natural selection, but you can't argue that it didn't evolve. So common
descent is entirely relevant to your question. Pick any feature by which
humans differ from chimps and tell me which are "beyond very low levels
of functional complexity". Any such feature is evidence that such things
can evolve, given the overwhelming evidence that humans and chimps are
related. Would you like to claim that god intervened in evolution?

Seanpit

unread,
Jul 23, 2006, 12:09:23 PM7/23/06
to

Von R. Smith wrote:

< snip >

> "...A function that has greater minimum size requirements (given a
> constant degree of specificity) with[sic; I assume you meant "will"] be
> at a higher level of functional complexity. The same is true for a
> function that requires a greater degree of specificity (given a
> constant minimum size requirement). And, obviously, the same is true
> for functions that require both a greater minimum size and specificity
> requirement."
>
> Some important points to remember are:
>
> 1) The specificity is required for "the function in question", which
> in this case is the 2,4-DNT degredation pathway. It is *not* just for
> *any* function at all.

Right . . . But, since this function is a cascading function, it does
not require specific 3D orientation. Therefore, its degree of
specificity will be exponentially reduced relative to a function that
uses the same amount of genetic real estate, but does require specific
3D orientation of its parts. In comparison then, the relative
specificity of a cascading system will be pretty much equivalent to its
most complex single protein part.

> 2) Minimum size requirement figures into the measure of functional
> complexity, independently of "sequence specificity".

Minimum size, as a measure of complexity, is not independent of
sequence specificity. Both of these minimum requirements must be taken
together in order to calculate functional complexity. They have no
independent utility in this regard.

> 3) Sean *measures* functional complexity by the metrics of specificity
> and minimum size requirement, or at least he says he does. If his
> conclusions about the "functional complexity" of an enzyme cascade are
> to have any rigor, they must be honestly and accurately based on this
> metric, not tailored with special rules to fit his desired conclusion.

The minimum size and specificity requirements show that cascading
functions have exponentially less required specificity than does a
function of equivalent size that also requires specific 3D orientation
of all its parts relative to all the other parts.

> 4) Disclaimer: Before we get much deeper into this, I want people to
> understand that I don't actually endorse Sean's methodology or even
> consider it particularly relevant to how biochemistry actually works.
> I am, however, humoring his terminology and method of argument for the
> purposes of the current discussion, to show that his claim that:
>
> "Even with an entire gene pool
> of many thousands of functional
> sequences evolution will not create a
> novel functional system *of any kind* (not
> just a particular teleologic system)
> that requires a minimum of more
> than 3 or 4 fairly thousand fairly
> specified bp of genetic real
> estate."
>
> is false, even by his own definitions of his home-grown terminology.

You don't understand that your cascading function is completely out of
the league of non-cascading functions with regard to degree of minimum
required specificity. The math is obvious, yet you still don't get it.

> > > This is a non-standard usage of the term "specificity", but we'll let
> > > that pass. Now, it is trivial that it takes a longer sequence to code
> > > for the entire 2,4-DNT pathway than it does for any individual step in
> > > that pathway. So unless you think poor dntAbAcAd can do the entire
> > > pathway all by itself, then the pathway must have a greater "minimum
> > > size requirement" than the largest individual gene within it. And
> > > unless you are saying that any random sequence can fill in the rest of
> > > the "size requirement" in order to perform the rest of the steps of the
> > > pathway than there must be at least some "limitations of character
> > > differences" wiithin the rest of the coding sequence.
> >
> > An enzymatic cascade has relatively low specificity because of several
> > reasons. Each step in the cascade may be beneficial all by itself,
> > without the subsequent cascade.
>
> Irrelevant. By your own stated rule, specificity is to the "function
> in question", not just any function at all.

Yes, that's what I'm talking about - use of 2,4-DNT to some advantage.
That is the function in question here.

> And yes, of course the
> individual components and subassemblies can be beneficial by
> themselves; that would be expected if the cascade had evolved in the
> incremental, selectable fashion envisioned by the *actual* ToE you
> profess to understand and argue against, rather than the strawman
> version of it you keep telling us you repudiate. "Irreducible
> complexity" is your bugbear, not mine.

Flagellar motility cannot be realized at all, not even a little bit,
until over 20 proteins, which are each fair specified, are each
oriented in a highly specified degree with each other in 3 dimensions.
This is not true of your enzymatic cascade - which does not require 3D
orientation and which 2,4-DNT may be useful well before the entire
cascade is complete.

> Practically any biological system you might name is going to have this
> sort of modular structure, including your favorite examples at both
> ends of your "ladder of complexity: the flagellum (the export system
> alone is useful) and cytochrome-c (just the part binding heme is
> useful). This isn't some sort of special property unique to cascades,
> so why pretend that it is?

Yes, there may be subsystems of functions to subparts of a flagellar
system of motility, but the motility function will not be realized
until all the parts are in place. On the other hand, 2,4-DNT utility
may be realized will before the entire cascading pathway is in place.
This only part of the reason why cascading systems are not in the same
league with systems like flagellar motility.

> Your ploy here sounds suspiciously like an attempt to define away any
> possible evolution of complex structures: if a function requiring
> several proteins can be built up through selectable, individually
> useful intermediates, then it doesn't "count" as evolution of a complex
> structure.

Not at all. I'm saying that if a particular type of function can be
realized early on, like 2,4-DNT utility, before the entire cascade is
complete, then it isn't an example of a function that requires all the
parts before that function, in particular (not some other type of
function) can be realized.

> Under these rules the *only* examples that would count
> would be ones that evolved via your "neutral-drift" strawman version of
> evolution, a strawman you disown whenever challenged on it, but then
> try to slip right back in at the next opportunity.

You don't understand the difference between cascading functions and
those functions, like flagellar motility, that require specific 3D
orientation of all the parts at the same time before the function in
question can be realized even a little bit.

> > Enzymatic activity by itself doesn't
> > require a great deal of size or specificity in order to achieve minimum
> > usefulness (as I've previously showed you with a 1200aa lactase that
> > could work with just 400aa). 3D orientation of the individual enzymes
> > is not required.
> >
> > But, for argument's sake, lets say that all the enzymes in a particular
> > cascade are required before a benefit will be gained.
>
> No, let's not say that for argument's sake, because that isn't the
> point under argument. Nobody is saying that the entire cascade must be
> in place for *a* benefit to be gained.

We are talking about 2,4-DNT utility here. The benefit will be the
ability to use 2,4-DNT to some advantage. That is the function in
question here. If this function can be realized without the entire
cascade in place, then it cannot be used as a function that requires
the entire cascade to be in place. Flagellar motility, on the other
hand, does require the entire collection of protein parts to be in
place. See the difference?

> The issue is what is required
> to degrade 2,4-DNT sufficiently for an organism to use it as its sole
> carbon, nitrogen, and energy source (the "function in question").
> Let's keep that in mind.

Are you sure that 2,4-DNT must have the entire cascade in place before
2,4-DNT can be used as the sole carbon source? That was my original
question. However, even if it is true that the entire cascade must be
in place before this function can be realized, it doesn't help you to
any significant degree. Why? Because of the fact that cascading
enzymatic systems to not require specific 3D orientation of all the
subparts of the cascade. The lack of this specificity requirement
dramatically reduces the overall specificity of the system in
comparison to a system of the same size that does have this requirement
(there is a huge exponential difference).

> > The fact that 3D
> > orientation is not required dramatically reduces the specificity.
>
> How? AFAICT, "specificity" sensu Pitman basically means the degree of
> sequence constraint required to conserve function. I am not sure what
> you mean by "3D orientation", but my best guess is that it is something
> to do with the degree to which the proteins combine to form an actual
> structure or spatial arrangement.

Yes, the requirement for a specific 3D orientation of the subparts,
before the function in question can be realized, exponentially
increases the functional complexity of that system via an exponential
increase in sequence specificity.

You have the seemingly obvious notion that if a system of function
requires two proteins, A and B, coded for by two genes X and Y, that my
requirements are nothing more than a simple addition of the base pairs
in both genes. This simply isn't true. You have to take into account
the requirements of the system. Does the system in question require
specific orientation of the parts relative to each other? If so, the
degree of specificity increases exponentially - not additively.

> If that is an accurate statement of your notion, then I would maintain
> that you are probably wrong to assume a simple, straightforward
> relationship between complexity of quaternary structure and
> conservation of primary structure. AIUI, some structurally simple
> peptides can have (and possibly require) high sequence conservation,
> such as histone, whereas other structures requiring a larger-scale and
> more complex "3d orientation" might have relatively low constraints on
> sequence, such as hemoglobin.
>
> So before I entertain this particular claim, I would like to see an
> actual justification for it. Maybe a chart plotting the relation of
> sequence conservation to some index of "3D orientation", with some
> actual values derived from published data.

Take the flagellar system, for example. The individual protein parts
within this system each require a fair degree of sequence specificity
as well as size in order to work properly within the larger system of
flagellar motility. On top of this, they also required to be in a very
specific orientation will the other parts of the system. This is
unlike your cascading enzymatic system where the individual parts of
the cascade are not required to be specifically oriented with any of
the other parts of the system in 3D space in order for the overall
system to work properly.


> > For example,
>
> *interrupts*. The below is not an example of anything we are
> discussing. This is another of your weak language analogies. If
> you're going to present something and call it an "example", at least
> give a relevant one from biology.

Oh please! Are you really this limited in your ability to comprehend
that what is described below is identical to what is happening with a
biological system? All you have to do is change the number of
potential characters per loci from 26 to 4. The formulas do not change
and neither does the significance of the result.

It seems that you are trying to argue is that all systems that require
3 to 4 thousand base pairs of genetic real estate are all "fairly
specified". They aren't. Given the same size requirement and degree
of specificity of the individual subparts of two different systems, one
a cascading system and the other a non-cascading system that require
specific 3D orientation of each part with all the others, the cascading
system will not have near the overall specificity of the other system -
not even remotely close.

> I am simply
> asking for a count of the number of "fairly specified" base pairs
> required to code for "the function in question".

And I've told you the answer over and over again. The equivalent
degree of fairly specified base pairs in a cascading system, as
compared to a system that requires specific 3D orientation of all of
its subparts, is the size of its most complex subpart.

> That number, if it is
> to be meaningful, shouldn't change just
> because you can imagine a more
> complex system than the one we are examining.

You are trying to compare apples and oranges here. System specificity
is dramatically affected by the requirement for 3D orientation of all
of the subparts. That is part of the measure of system specificity.
Cascading systems of function have very very low system specificity -
relatively speaking. They simply are not "fairly specified" in
comparison to a system of equivalent size that does require specific 3D
orientation.

> To go back to your
> "example", even if we grant your argument that your 6-character
> function is more specified than your pair-of-3-character function, the
> fact remains that both require a total of six specified characters.

That's true . . . However, the minimum size requirement, by itself, is
not a measure of functional complexity. Measuring functional
complexity requires that the calculation include both variables at the
same time - both size and specificity.

> This point is especially pertinent when we remember that we are talking
> about proteins, in which the most highly conserved and functionally
> crucial residues are usually spaced apart singly or in very small
> groups, rather than together in large clumps.

It doesn't matter as long as the large clumps require specific
orientation with all the other large clumps. This 3D orientation
requirement dramatically increases the overall specificity of the
protein as well as of a system of proteins.

What it boils down to is the ratio of sequences in sequence space that
could give rise to the system in question vs. the ones that could not.
When you do not have a specific 3D orientation requirement, the size of
the sequence space that must be considered is dramatically reduced.
This ends up dramatically increasing the ratio or odds of finding the
needed parts in a given amount of time.

> > This is why cascading enzymatic systems are not comparable, as far as
> > functional complexity is concerned, to higher level systems like
> > flagellar motility.
>
> I am not asking for a comparison, I am asking for a count. If being
> "fairly specified" is actually a property of the sequence in question,
> I should be able to determine it without such comparisons, simply by
> estimating the minimum size and degree of sequence constraint of the
> primary structure.

Your mistaken in estimating specificity is in thinking that parts that
do not require specific orientation will all the other parts in a
system can simply be added up to produce a total. This isn't true.
You can only add up the numbers like you do if the parts do have this
additional specificity requirement. What you are actually doing in
this "addition" is not adding at all. You are actually multiplying.

For example, what is 10^20 * 10^20? Isn't the answer 10^40? In other
words, you are multiplying the numbers, which is done by addition of
the exponents. What happens with cascading systems is that you can't
multiply the individual numbers. You can only add them together: 10^20
+ 10^20 = 20^20. That is a far different number - right? As far as a
measure of specificity is concerned, this number is not nearly as
"specified" as the 10^40 number. In other words, there is a far greater
ratio of potentially workable sequences on the one hand than on the
other.

> > > So now that we have established that the answer to my question cannot
> > > be "the minimum coding sequence for the largest protein in the
> > > pathway", let me ask again: How many of the base pairs in the genetic
> > > sequences encoding the 2,4-DNT pathway are "fairly specified"?
> > >
> > > Let me give you some help with the numbers. You have yourself
> > > identified 4 enzymes that are central to and specific to the cascade in
> > > question. Here are the smallest sequences performing their functions
> > > that I could find in the NCBI database:
> > >
> > >
> > > dntAb ~ 104
> > > dntAc ~ 447
> > > dntAd ~ 194
> > >
> > > dntB ~ 548
> > >
> > > dntD ~ 314
> > >
> > > dntG ~ 281
> > >
> > > for a total of 1,888 amino acids.
> >
> > As explained above, one simply cannot add up the numbers like one could
> > for a non-cascading system were each part required specific 3D
> > orientation against all the other parts of the system.
>
> You misspelled "as asserted above".

Read it again and reconsider the notion that cascading systems are
remotely "fairly specified". They just aren't.

> Let us recall what you said about how to measure specificity:
>
> "As I've told you over and over again, specificity is a description of
> the minimum sequence order or limitations of character differences
> within a sequence that can be sustained without a complete loss of the
> function in question. A function that has greater minimum size

> requirements (given a constant degree of specificity) will be at a


> higher level of functional complexity. The same is true for a function
> that requires a greater degree of specificity (given a constant minimum
> size requirement). And, obviously, the same is true for functions that
> require both a greater minimum size and specificity requirement."
>

> So what I want from you is: *first* tell me how to determine the
> minimum size requirement for coding the *entire* 2,4-DNT pathway, since
> that is the "function in question",

Given that the function in question, use of 2,4-DNT as the sole carbon
source, cannot be realized without all the enzymes in the pathway, I'm
fine with a total of about 2,000aa or 6,000bp.

> *then* you tell me how to measure
> the degree of sequence conservation required,

I already did - about. Plug in the numbers and use the same formula to
get your answer. The degree of specificity of a 2,000aa cascading
system, relative to the degree of specificity of a 2,000aa function
that requires all of its parts to be specifically oriented relative to
each other, will be exponentially less so that the degree of complexity
is basically a measure of its most complex subpart - like I pointed out
originally.

> *then* we will have an
> answer as to how many "fairly specified"
> base pairs there are in the coding sequence.

Great! ; )

> I don't want your assertions about the complexity of enzyme cascades.

The complexity of enzymatic cascades is tied to specificity - which
just isn't remotely close to the level of being "fairly specified".

> I want a method of measurement that independently verifies your
> assertions, you know, one that *doesn't* use made-up ad hoc rules meant
> to save an already-decided-upon conclusion.

These aren't ad hoc rules since they haven't changed. The rules have
always been "minimum sequence size and specificity". Cascading systems
have the size, but not the specificity.

> You have stated rules for measuring complexity, Sean. I want to see
> you apply them consistently, rigorously, and honestly. First, I want
> you to *measure* the compelxity of the 2,4-DNT pathway, using your
> stated parameters of minimum size requirement and degree of primary
> structure constraint; *then* we can discuss whether your assertion
> about it being no more complex than the largest enzyme in it is
> correct.

The equivalent size and specificity requirements for a cascading system
are very close to the measure of the subpart with the greatest size and
specificity requirements. If you want to get really technical, the
total requirement is an addition of the sequence space ratio for all
the parts together.

For example, if each part in a two-part cascading system had a sequence
space ratio of 1e-40, then the total ratio for the entire system would
be 1e-40 + 1e-40 = 2e-40.

However, if each part of a two-part system which required specific
orientation of the parts relative to each other had a sequence space
ratio of 1e-40, then the total ratio for the entire system would be
1e-40 * 1e-40 = 1e-80.

See the difference?

As another example, consider that in 1992 Yockey published a ratio of 1


in 1e36 as his estimate of how many CC there are in sequence space
relative to non-lactase sequences.

In this light, it is also interesting to consider the work of "Robert


T. Sauer and his M.I.T. team of biologists who undertook the
scientific research of substituting the 20 different types amino acids
in two different proteins. After each substitution, the protein
sequence was reinserted into bacteria to be tested for function. They
discovered that in some locations of the protein's amino acid chains,
up to 15 different amino acids may be substituted while at other
locations their was a tolerance of only a few, and yet other locations
could not tolerate even one substitution of any other amino acid. One
of the proteins they chose was the 92-residue lambda repressor.Sauer
calculated that: "... there should be about 10^57 different allowed
sequences for the entire 92 residue domain. ... the calculation does
indicate in a qualitative way the tremendous degeneracy in the
information that does specifies a particular protein fold.
Nevertheless, the estimated number of sequences capable of adopting the
lambda repressor fold is still an exceedingly small fraction, about 1
in 10^63, of the total possible 92 residue sequences."

Now, if you have a system that requires more than 1 protein, how do you
calculate the ratio for the entire system - the multiprotein system as
a whole? Well, for systems that do not require specific orientation of
the subparts with each other, you add up the ratios. But, for systems
that require specific orientation of the subparts, you multiply the
ratios.

References:

Yockey, H.P., On the information content of cytochrome C, Journal of
Theoretical Biology , 67 (1977), p. 345-376.

http://www.detectingdesign.com/PDF%20Files/Cytochrome%20C%20Variability.doc

Yockey, HP, "Information Theory and Molecular Biology", Cambridge
University Press, 1992

R.T. Sauer, James U Bowie, John F.R. Olson, and Wendall A. Lim, 1989,
'Proceedings of the National Academy of Science's USA 86, 2152-2156.
and 1990, March 16, Science, 247; and, Olson and R.T. Sauer, 'Proteins:
Structure, Function and Genetics', 7:306 - 316, 1990.

> It seems to me that, by your own stated criteria for measuring


> "specification" (and for that matter, "functional complexity"), I *can*
> measure it the way I have described because:
>
> 1) all the enzymes named are required for the function in question (so
> they all figure into the "minimum size requirement" for the cascade)

Even if they are all required, you cannot multiply them together like
you did. You must add their individual degrees of complexity together.
Your mistake is that you tried to multiply them together. You just
can't do that for a cascading system.

> 2) each of the enzymes has some degree of sequence constraint; their
> functions couldn't be effectively performed by just any random peptides
> of the same length (so they all figure into the specificity/sequence
> constraint requirement, as well).

That's right. So, to find out the total constraint for the entire
cascading system, you add up the degrees of individual constraints. In
other words, you add up the ratios for each protein part in a cascading
system. However, you multiply the ratios for a system that requires
specific orientation of each part relative to all the other parts.

> You yourself were quick to make the point that each of these enzymes
> has an individual function that could benefit an organism that produced
> it. Presumably, each of these individual functions has a non-zero
> minimum size requirement, and a non-zero degree of sequence constraint
> (you can't perform it with just any random peptide of the appropriate
> length). This undermines your "cascade rule", rather than supporting
> it. If each of these sub-functions has a minimum size requirement and
> non-zero specificity, and each is required for the pathway, then at
> very least the total minimum size requirement and specificity of the
> pathway is the sum of those sub-functions.

That's right - but that's not what you did. I know it must not have
seemed like it to you, but what you did was to multiply the
specificities of each of the subparts. You didn't add them - you
really multiplied. Yet, you can only multiply the ratios if the
function in question requires specific orientation of each subpart.

> > Basically, what
> > you have here is a system of complexity that isn't much more complex
> > than its most complex subpart (i.e., 548 fairly specified aa).
>
> Sure it is, because it has a larger minimum size requirement. Here,
> let me help you remember your own rule:
>
> "...A function that has greater minimum size requirements (given a
> constant degree of specificity) with be at a higher level of functional
> complexity."
>
> So unless you are saying that there is some offsetting *decrease* in
> the specificity of the largest enzyme in the cascade, than the entire
> cascade is more complex by your own stated rules.

Yes, the entire cascade is more complex by my own rules, but not to any
significant degree. The relative degree of increase isn't hardly worth
mentioning. It's the difference between going from 10^20 to 20^20 vs.
10^40.

< snip repetitions >

> > 548aa = ~1644bp of genetic real estate (given that this enzyme cannot
> > be significantly reduced in size and is already fairly specified in and
> > of itself).
>
> Well, that's a number all right. Unfortunately, it only comes from
> your special "cascade rule", and not from any consistent application of
> your criteria of minimum size requirement or sequence constraint.

You just don't understand that you are really multiplying when you
should be adding.

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 23, 2006, 12:10:41 PM7/23/06
to

Von R. Smith wrote:

< snip >

> "...A function that has greater minimum size requirements (given a

> constant degree of specificity) with[sic; I assume you meant "will"] be


> at a higher level of functional complexity. The same is true for a
> function that requires a greater degree of specificity (given a
> constant minimum size requirement). And, obviously, the same is true
> for functions that require both a greater minimum size and specificity
> requirement."
>

> > > This is a non-standard usage of the term "specificity", but we'll let


> > > that pass. Now, it is trivial that it takes a longer sequence to code
> > > for the entire 2,4-DNT pathway than it does for any individual step in
> > > that pathway. So unless you think poor dntAbAcAd can do the entire
> > > pathway all by itself, then the pathway must have a greater "minimum
> > > size requirement" than the largest individual gene within it. And
> > > unless you are saying that any random sequence can fill in the rest of
> > > the "size requirement" in order to perform the rest of the steps of the
> > > pathway than there must be at least some "limitations of character
> > > differences" wiithin the rest of the coding sequence.
> >
> > An enzymatic cascade has relatively low specificity because of several
> > reasons. Each step in the cascade may be beneficial all by itself,
> > without the subsequent cascade.
>

> > Enzymatic activity by itself doesn't


> > require a great deal of size or specificity in order to achieve minimum
> > usefulness (as I've previously showed you with a 1200aa lactase that
> > could work with just 400aa). 3D orientation of the individual enzymes
> > is not required.
> >

> > But, for argument's sake, lets say that all the enzymes in a particular
> > cascade are required before a benefit will be gained.
>

> No, let's not say that for argument's sake, because that isn't the
> point under argument. Nobody is saying that the entire cascade must be
> in place for *a* benefit to be gained.

We are talking about 2,4-DNT utility here. The benefit will be the
ability to use 2,4-DNT to some advantage. That is the function in
question here. If this function can be realized without the entire
cascade in place, then it cannot be used as a function that requires
the entire cascade to be in place. Flagellar motility, on the other
hand, does require the entire collection of protein parts to be in
place. See the difference?

> The issue is what is required
> to degrade 2,4-DNT sufficiently for an organism to use it as its sole
> carbon, nitrogen, and energy source (the "function in question").
> Let's keep that in mind.

Are you sure that 2,4-DNT must have the entire cascade in place before
2,4-DNT can be used as the sole carbon source? That was my original
question. However, even if it is true that the entire cascade must be
in place before this function can be realized, it doesn't help you to
any significant degree. Why? Because of the fact that cascading
enzymatic systems to not require specific 3D orientation of all the
subparts of the cascade. The lack of this specificity requirement
dramatically reduces the overall specificity of the system in
comparison to a system of the same size that does have this requirement
(there is a huge exponential difference).

> > The fact that 3D


> > orientation is not required dramatically reduces the specificity.
>

> > what are the odds that two specific 3-character sequences

> > This is why cascading enzymatic systems are not comparable, as far as


> > functional complexity is concerned, to higher level systems like
> > flagellar motility.
>

> I am not asking for a comparison, I am asking for a count. If being
> "fairly specified" is actually a property of the sequence in question,
> I should be able to determine it without such comparisons, simply by
> estimating the minimum size and degree of sequence constraint of the
> primary structure.

Your mistaken in estimating specificity is in thinking that parts that
do not require specific orientation will all the other parts in a
system can simply be added up to produce a total. This isn't true.
You can only add up the numbers like you do if the parts do have this
additional specificity requirement. What you are actually doing in
this "addition" is not adding at all. You are actually multiplying.

For example, what is 10^20 * 10^20? Isn't the answer 10^40? In other
words, you are multiplying the numbers, which is done by addition of
the exponents. What happens with cascading systems is that you can't
multiply the individual numbers. You can only add them together: 10^20
+ 10^20 = 20^20. That is a far different number - right? As far as a
measure of specificity is concerned, this number is not nearly as
"specified" as the 10^40 number. In other words, there is a far greater
ratio of potentially workable sequences on the one hand than on the
other.

> > > So now that we have established that the answer to my question cannot


> > > be "the minimum coding sequence for the largest protein in the
> > > pathway", let me ask again: How many of the base pairs in the genetic
> > > sequences encoding the 2,4-DNT pathway are "fairly specified"?
> > >
> > > Let me give you some help with the numbers. You have yourself
> > > identified 4 enzymes that are central to and specific to the cascade in
> > > question. Here are the smallest sequences performing their functions
> > > that I could find in the NCBI database:
> > >
> > >
> > > dntAb ~ 104
> > > dntAc ~ 447
> > > dntAd ~ 194
> > >
> > > dntB ~ 548
> > >
> > > dntD ~ 314
> > >
> > > dntG ~ 281
> > >
> > > for a total of 1,888 amino acids.
> >
> > As explained above, one simply cannot add up the numbers like one could
> > for a non-cascading system were each part required specific 3D
> > orientation against all the other parts of the system.
>

> You misspelled "as asserted above".

Read it again and reconsider the notion that cascading systems are
remotely "fairly specified". They just aren't.

> Let us recall what you said about how to measure specificity:
>

> "As I've told you over and over again, specificity is a description of
> the minimum sequence order or limitations of character differences
> within a sequence that can be sustained without a complete loss of the
> function in question. A function that has greater minimum size

> requirements (given a constant degree of specificity) will be at a


> higher level of functional complexity. The same is true for a function
> that requires a greater degree of specificity (given a constant minimum
> size requirement). And, obviously, the same is true for functions that
> require both a greater minimum size and specificity requirement."
>

Great! ; )

See the difference?

References:

http://www.detectingdesign.com/PDF%20Files/Cytochrome%20C%20Variability.doc

> > Basically, what


> > you have here is a system of complexity that isn't much more complex
> > than its most complex subpart (i.e., 548 fairly specified aa).
>

> Sure it is, because it has a larger minimum size requirement. Here,
> let me help you remember your own rule:
>

> "...A function that has greater minimum size requirements (given a


> constant degree of specificity) with be at a higher level of functional
> complexity."
>

> So unless you are saying that there is some offsetting *decrease* in
> the specificity of the largest enzyme in the cascade, than the entire
> cascade is more complex by your own stated rules.

Yes, the entire cascade is more complex by my own rules, but not to any
significant degree. The relative degree of increase isn't hardly worth
mentioning. It's the difference between going from 10^20 to 20^20 vs.
10^40.

< snip repetitions >

> > 548aa = ~1644bp of genetic real estate (given that this enzyme cannot


> > be significantly reduced in size and is already fairly specified in and
> > of itself).
>

Seanpit

unread,
Jul 23, 2006, 12:13:27 PM7/23/06
to

You can't argue common descent over common design. That's your problem.
Sure, there was a common origin, but was it common descent or common
design? Without a remotely tenable mechanism for common descent beyond
very low levels of functional complexity, the most likely alternative
is common design.

Sean Pitman
www.DetectingDesign.com

Windy

unread,
Jul 23, 2006, 12:48:29 PM7/23/06
to

A designer would have to have high levels of functional complexity, no?
Since you start with the premise that high levels of functional
complexity can't originate naturally, how did the designer come about?
If you assume the designer popped out of nowhere, your example is a lot
less tenable and likely than common descent.

I would also like an example of a designer designing >3-4 kb of highly
specified genetic real estate in real time. Preferably this designer
should be non-human but we can start with a human example. Otherwise I
can't see how you have a remotely tenable mechanism of design.

I am also interested in the human-chimp example. How many highly
specified >3-4kb bits of genetic real estate are present in the human
genome but not the chimp, or vice versa? Any predictions on this?

-- w.

Richard Forrest

unread,
Jul 23, 2006, 1:04:34 PM7/23/06
to

What measure of "fairly specified" have you determined makes it
impossible for evolution to occur, and how have you calculated it?


> Sure, novel functions do evolve
> via random mutation and natural selection, but not beyond very low
> levels of functional complexity.

And what level of "functional complexity" would that be, and how have
you calculated it?

>

Pulling numbers out of the air is easy, but if you want your answers to
be considered to have any integrity, you need to provide the
calculations.

By the way, some of those papers refer to hox gene duplications which
have led to major modifications to the bauplan of complex organisms,
which in anyone's language involves the evolution of major new and
complex systems. Perhaps you can look at the evidence described there,
and point out the weaknesses in the arguments presented by the authors.

RF

> > RF
> >
> > >
> >
> > > Sean Pitman
> > > www.DetectingDesign.com

David Wilson

unread,
Jul 23, 2006, 3:09:31 PM7/23/06
to
In article <nsm1c2pfer96svoi3...@4ax.com> on

July 21st in talk.origins Stanley Friesen <sar...@friesen.net> wrote:

> Jason Spaceman <notr...@jspaceman.homelinux.org> wrote:
>
> >From the article:
> >--------------------------------------------------------------------
> >7/14/2006 5:28:17 PM
> >Daily Journal
> >
> >Dr. John Sanford, who will present the July 17 lecture, is the primary
> >inventor of the gene gun process. His research has been used to
> >engineer most of the world's transgenic crops.
>
> Hmm, sounds like another engineer - an agricultural engineer.

Horticulturalist. <http://www.nysaes.cornell.edu/hort/faculty/sanford/>

------------------------------------------------------------------------
David Wilson

SPAMMERS_fingers@WILL_BE_fwi_PROSECUTED_.net.au
(Remove underlines and upper case letters to obtain my email address.

Richard Forrest

unread,
Jul 23, 2006, 1:25:43 PM7/23/06
to

Why on earth not?
There is vast and extensive evidence for common descent, and not one
jot or tittle of evidence for "common design". There is not even any
evidence that the work of any designer should show such a phenomenon.

These are all products designed by an old friend of my father's,
Kenneth Grange, who is a highly respected and well-known industrial
designer:

Kodak Instamatic Camera
Intercity 125 High Speed Train
Ronson Rio Hairdryer
TX1 London Cab
Kenwood Chef
Type 3 Anglepoise Lamp
Wilkinson Sword Gel Response Razor
Parker Duofold Pen
Bendix Washing Machine
Imperial Typewriter
B&W Loudspeakers

Would you care to to demonstrate on the basis of shared characters that
they form a nested hierarchy?


> That's your problem.
> Sure, there was a common origin, but was it common descent or common
> design? Without a remotely tenable mechanism for common descent beyond
> very low levels of functional complexity, the most likely alternative
> is common design.

As we have a very tenable mechanism for common descent, and as you are
utterly incapable of doing more than make unfounded assertions about
"low levels of functional complexity" without telling us how you
calculate "functional complexity", let alone on what basis you have
determined what the cut-off level beyond which evolution is impossible,
perhaps you can explain why we should reject the conclusions reached by
scientific research?

RF

>
> Sean Pitman
> www.DetectingDesign.com

Richard Forrest

unread,
Jul 23, 2006, 1:44:51 PM7/23/06
to

As you have not told us on what basis you have determined this, it is
simply another unsupported assertion.

> It just doesn't happen.

Another unsupported assertion.

>And,
> even lower level functions, functions that do evolve, show a dramatic
> decrease in evolvability with each increase in size and sequence
> specificity requirements.

Another unsupported assertion.

>
> > >Statistically, this
> > > is easily explained by the non-beneficial gap problem between what is
> > > and what might be beneficial if random mutations could actually find
> > > novel beneficial sequences in the vastness of sequence space at higher
> > > and higher levels of minimum size and sequence specificity requirements
> > > for higher and higher level functions.
> >
> > You still haven't told us how you measure "specificity", and on what
> > mathematical basis you have determined what the "higher levels of
> > minimum size" are.
> > Or is this another of those "facts" that we are supposed to believe
> > simply because you say so?
>
> How many times do I have to tell you?

You have asserted this countless times. What you have not done is told
us how you reach this conclusion.

> Sequence specificity is a
> measure of the limitation of changes in a sequence that can be
> tolerated before a complete loss of a given function is realized.

So how do you measure it, and on what basis have you determined the
cut-off level beyond which evolution cannot occur?

> For
> example, functional cytochrome c system has a relatively high degree of
> minimum sequence specificity - even though its minimum size requirement
> is only about 80aa.


So how do you measure it, and on what basis have you determined the
cut-off level beyond which evolution cannot occur?


>A significant number of the residue positions in CC
> are restricted to 1 or 2 or 3 aa. Also, changing multiple residues at
> the same time is also highly restricted.


So how do you measure it, and on what basis have you determined the
cut-off level beyond which evolution cannot occur?


>
> What this all translates into is a rough overall estimate of how many
> sequences in sequence space of 100aa would have the cytochrome c
> function. In 1992 Yockey published a ratio of 1 in 1e36 as his
> estimate of how many CC there are in sequence space relative to
> non-lactase sequences.
>

And it has been explained to you repeatedly by other why your
interpretation of these results is wrong.

> In this light, it is interesting to consider the work of "Robert T.
> Sauer and his M.I.T. team of biologists who undertook the scientific
> research of substituting the 20 different types amino acids in two
> different proteins. After each substitution, the protein sequence was
> reinserted into bacteria to be tested for function. They discovered
> that in some locations of the protein's amino acid chains, up to 15
> different amino acids may be substituted while at other locations their
> was a tolerance of only a few, and yet other locations could not
> tolerate even one substitution of any other amino acid. One of the
> proteins they chose was the 92-residue lambda repressor.Sauer
> calculated that: "... there should be about 10^57 different allowed
> sequences for the entire 92 residue domain. ... the calculation does
> indicate in a qualitative way the tremendous degeneracy in the
> information that does specifies a particular protein fold.
> Nevertheless, the estimated number of sequences capable of adopting the
> lambda repressor fold is still an exceedingly small fraction, about 1
> in 10^63, of the total possible 92 residue sequences."
>
> Yockey, H.P., On the information content of cytochrome C, Journal of
> Theoretical Biology , 67 (1977), p. 345-376.
>
> http://www.detectingdesign.com/PDF%20Files/Cytochrome%20C%20Variability.doc
>

You still haven't told us how you measure "specificity", and on what


mathematical basis you have determined what the "higher levels of
minimum size" are.

> Yockey, HP, "Information Theory and Molecular Biology", Cambridge


> University Press, 1992
>
> R.T. Sauer, James U Bowie, John F.R. Olson, and Wendall A. Lim, 1989,
> 'Proceedings of the National Academy of Science's USA 86, 2152-2156.
> and 1990, March 16, Science, 247; and, Olson and R.T. Sauer, 'Proteins:
> Structure, Function and Genetics', 7:306 - 316, 1990.
>
> > By the way, here is a paper which argues on the basis of genetics the
> > evolution of novel functions in vertebrates.
> >
> > http://www.evolutionsbiologie.uni-konstanz.de/pdf1-182/P095.pdf
> >
> > I know that you will find this "unconvincing". That is irrelevant. If
> > you find it unconvincing, provide an alternative, testable explanation
> > for the evidence. If you can't, your assertion that novel functions
> > can't evolve is unfounded.
>
> This paper does not present any real time demonstration of evolution in
> action. This paper does nothing more than present a just-so story about
> how these scientists think evolution happened. They do not detail any
> of the functional steps involved or what mutations, specifically, would
> have been required to cross the gap to gain these novel functions.

Why on earth should they?

> They scientists simply assume that evolution is a given. Given that
> evolution must have created the differences that they see, they assume
> that the degree of sequence similarities indicates the degree of
> evolutionary relationship.

So provide an alternative, testable explanation for the evidence
presented by the authors and demonstrate that it provides a better
explanation for that evidence than that suggested by the authors.

>
> The fact of the matter is, such similarities could also be the result
> of preservation of design.

What on earth is "preservation of design"? I have trained as a designer
in a degree course which included a heavy dosage of design theory, and
have never come across such a concept applied to the works of a
designer.

Making up terms with no basis in the real world does not add to your
argument.

> In order to rule show that the evolutionary
> mechanism is actually capable of such feats of creativity, scientists
> much show that the exponentially expanding non-beneficial gaps can
> actually be crossed by random mutation and natural selection.

No they don't.

They need to provide the best, testable hypothesis which can explain
the evidence. There may be gaps in that hypothesis, but unless you can
come up with an hypothesis which provides a better explanation, science
will stick with what it has.


> This has
> yet to be done beyond very low levels of functional complexity. The
> statistics involved are very convincing.

Only to you Sean, only to you.
The problem you have is that you don't need to persuade yourself, you
need to persuade other people. You may be able to persuade those who
know little about science that you are correct, but you may have
noticed that you are not meeting much success in persuading anyone who
*does* know anything about science.

Your habit of making unfounded assertions does not advance your
argunment.

How do you measure "fairly specified"?

And no, you need to show from the evidence presented in these papers
that their conclusions are wrong. Making assertions about hypothetical
barriers to evolution for whose existence you can provide no evidence
is not much of an argument.

>.You only need to present
> one example here.

What you need to do is to demonstrate "FROM THE EVIDENCE* how the
authors of those papers have reached false conclusions.

You are incapable even of presenting a measure of "fairly specified".

>
> > And when you've "refuted" these, I'll give you another long list.
> >
> > And then another.
> >
> > And then another.
>
> You only need one example . . . Which one of the papers you've
> presented goes past the threshold I've listed? Please quote the
> relevant passage as well as the reference.
>
> > Or else you can simply accept that there is strong evidence for the
> > evolution of novel functions in living organisms.
>
> I already accept that there is strong evidence for the evolution of
> novel functions in living organisms. I just don't see that this
> evolution, as real as it is, goes beyond very low levels of functional
> complexity - ever.

So provide a better explanation, based on the evidence, for why we see
such strong evidence for evolution of complex novel functions in living
organism.

Anyone can make assertions about "fairly specified" levels of
complexity or "preservation of design". You need to back up your
assertions with evidence, something you have completely failed to to.

Ernest Major

unread,
Jul 23, 2006, 1:56:43 PM7/23/06
to
In message <1153671207.8...@b28g2000cwb.googlegroups.com>,
Seanpit <seanpi...@naturalselection.0catch.com> writes

There are testable mechanisms for common descent, starting with
mutation, natural selection and genetic drift. The assertion by you that
these are limited is just that, an assertion; you haven't demonstrated
that the space of viable organisms has the topology required by your
claims. However there doesn't seem to be even a ghost of a testable
mechanism for common design; that would seem to invalidate you
conclusion even if we were to accept your prior assertion.

Common descent can be argued over common design. Apart from the
previously mentioned point about mechanisms, there are any number of
classes of observations that are explained by common descent but not
common design. What is the common design explanation for systematic
differences between continental and oceanic island fauans? What is the
common design explanation for the same niche being occupied by different
groups in different localities; such as cacti in the Americas,
euphorbias (etc) in Africa, Didieras in Madagascar; such as spider
monkeys in South America, sifakas in Madagascar, colobus in Africa,
langurs in India, and gibbons in south east Asia; such as marsupials in
Australia and ecological analogues in other parts of the world? Why do
most (all?) native Australian rodents belong to a small subgroup of
rodents? What is the common design explanation for shared pseudogenes?
What is the common design explanation for shared retroviral insertions?
What is the common design explanation for the near uniformity of the
genetic code, and the nature and distribution of the exceptions?
>
>Sean Pitman
>www.DetectingDesign.com
>

--
alias Ernest Major


--
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.394 / Virus Database: 268.10.3/395 - Release Date: 21/07/2006

John Harshman

unread,
Jul 23, 2006, 2:03:41 PM7/23/06
to
Seanpit wrote:

Sure I can. No problem.

> Sure, there was a common origin, but was it common descent or common
> design? Without a remotely tenable mechanism for common descent beyond
> very low levels of functional complexity, the most likely alternative
> is common design.

No, common design (without descent -- note that common descent might
conceivably include divine intervention to produce particular favored
mutations) and common descent make different predictions about what we
will find. Common descent predicts a nested hierarchy; common design
does not. We find a nested hierarchy. QED.

Robin Levett

unread,
Jul 23, 2006, 2:28:53 PM7/23/06
to
Seanpit wrote:

Are you serious? What stops that 2,999bp functional system from adding that
one extra mutation to ebcome a 3kbp functional system?

>
>> >> How do I measure "levels of functional complexity" without asking you?
>> >
>> > You determine the minimum size and degree of specific sequence
>> > arrangement that a particular functional system requires before its
>> > function in question can be realized to at least a minimum degree of
>> > selective advantage.
>>
>> And how do I do that? What is a "minimum degree of selective advantage"?
>> What is a "specific sequence arrangement"?

You forgot to answer this.

>>
>> What variables do I feed into what equations to come up with a "level of
>> functional complexity"? How do I measure those variables?

And this.

>>
>> > These minimum requirements before functionality
>> > of a particular type can be realized are a measurement of that
>> > function's level of complexity.

<and your newsreader doesn't snip properly formatted sigs, either>

wf3h

unread,
Jul 23, 2006, 3:11:10 PM7/23/06
to

Seanpit wrote:
> wf3h wrote:
> > Seanpit wrote:
> > >
> > > I understand evolution in general just fine. I am fully aware of how
> > > evolution is supposed to work and have read many of these just so
> > > stories about how various functions are supposed to have evolved. I've

> > > yet to read about a real-time demonstration to back up even one of
> > > these just-so proclamations beyond very low levels of functional
> > > complexity.
> >
> > as a scientist who is objective, since i'm not an evolutionary
> > biologist, i find seanpit's argument unconvincing
> > evolution has been demonstrated in the lab.
>
> Not beyond functions that require more than a few hundred fairly
> specified amino acid residues working together at the same time (less
> than 3 to 4 thousand bp of genetic real estate at minimum).

backpedaling. the idea that some arbitrary level of functionality has
to be demonstrated to YOUR satisfaction is totally irrelevant for
several reason. these include:

1. you are a christianist. you are biased. you have an agenda. NO
amount of evidence would satisfy you.

2. evolution HAS been demonstrated in the lab. the acquistion of
functionality (eg bacterial resistance) has been shown in the lab

3. there is no reason why predictions about nature, based on scientific
concepts and theories, can't be tested in nature as well as in the
lab....the fossil record is a good place to start

as i said, i have no bias, save a scientific one. you, however, do.

>
> > the long timeframes
> > necessary for evolution of species can't be demonstrated in the lab
> > because of the reason already stated: it takes a long time. BUT
> > evolutionary bio is an EXPERIMENTAL science
>
> I'm not talking about evolving an entirely new creature, just a new
> relatively simple functional biosystem that requires more than a few
> thousand bp of genetic real estate.

see the above comment about resistance

I mean really, not even one of the
> proposed steps in the evolution of a system like the flagellum as ever
> been demonstrated in the lab - not one step.

irrelevant. see above.

If such a step were so
> simple, it should be quite easy to set up such a demonstration under
> real time laboratory conditions.

again, as a scientist, i expect you to prove your assumptions.

1. why is it simple?
2. why can it be done in a SHORT period of time (whatever that is)

in addition, the fact that evolution HAS been demonstrated in the lab,
while creationism has NEVER been seen, shows that your bias is
anti-science, and christianist.
>
> The fact that low-level evolution can be demonstrated does not mean
> that these low level examples can be reasonably extrapolated up the
> ladder since the ratio of potentially beneficial vs. potentially
> non-beneficial decreases exponentially with each step up the ladder.

uh, why? it's EXACTLY the type of extrapolation science makes all the
time. physics certainly does it. we in chemistry use approximations to
solve equations in quantum chemistry. WHY are these 'extrapolations'
(what REAL scientists call 'predictions') not acceptable, beyond the
idea that your religion rules them out?

> This is why you don't see anything beyond very low-level evolution
> demonstrated in the lab or in real life in real time. There is a gap
> problem for evolution that grows exponentially with each increase in a
> functions minimum size and/or specificity requirements.

exponentially? in what sense? and how do you know this? selection based
on physical laws eliminates whole classes of chemical reactions.

>
> > christianism/islamism, however, is not. the 'just so stories' of
> > religion are being played out today across the world in death,
> > slaughter, genocide and destruction. seanpit's world is a world of
> > ignorance, betrayal and evil. it is the replacement of logic by faith,
> > the substitution of reason by ignorance...
>
> Oh give me a break with your sob stories.

tell it to the guys killing US troops with suicide bombs. you're part
of the same ideological approach.

Religions do not have a
> market on evil people who claim to belong to these groups.

WHOA tall dark stranger. i NEVER said they did. YOU, however, seem to
think that the ONLY religious people are those who rule out evolution

THAT is christianism/islamism at its finest. in islam, it's called
'salafism'....in christianisty it's called fundamentalism. it's
arrogance...this idea that only YOU are a christian.

All groups
> have their bad eggs - even atheists and evolutionists have evil people
> in the ranks. There are also very brilliant and well-educated people
> on both sides of this issue.

well let's see. christianists have about 600 'scientists' on their
side. scientists, well, we have millions.

So, try and come up with an actual
> argument that is on topic instead of trying to make blanket
> generalizations that simply aren't true or relevant.

why ARENT they relevant? if bin laden thinks evolution is wrong based
on his religion and YOU think science is wrong based on your religion,
what's the difference? It's STILL RELIGION trying to destroy science.

> > > and when you show a god creating ANYTHING besides genocide you be sure
> > and let me know.
>
> Humans can create many things as well as genocide. You would think
> that someone smarter than a human could do at least as well . . . What
> if that someone made you with free will - and you go off and do
> something evil. Who's responsible? Who do you blame? - your Creator
> or yourself?
>

humans can create gods as well. perhaps you should think about that
when you want to wreck science with your christianist theology.

wf3h

unread,
Jul 23, 2006, 3:13:32 PM7/23/06
to

Robin Levett wrote:
>
> How do I measure "levels of functional complexity" without asking you?
>
> --

it's alice in wonderland science, you see. one of the characters in
that book said that a 'word means what i want it to mean, nothing more,
nothing less'.

for christianists, 'specified complexity' means 'whatever can disprove
evolution, based on my theological belief that evolution didn't happen.

John Harshman

unread,
Jul 23, 2006, 3:50:56 PM7/23/06
to
Seanpit wrote:

[snip]

Sean's math skills highlighted?:

>10^20 + 10^20 = 20^20.

[snips]

Von R. Smith

unread,
Jul 23, 2006, 4:40:22 PM7/23/06
to
Seanpit wrote:
> Von R. Smith wrote:
>

< snip >


> >


> > I am not asking for a comparison, I am asking for a count. If being
> > "fairly specified" is actually a property of the sequence in question,
> > I should be able to determine it without such comparisons, simply by
> > estimating the minimum size and degree of sequence constraint of the
> > primary structure.
>
> Your mistaken in estimating specificity is in thinking that parts that
> do not require specific orientation will all the other parts in a
> system can simply be added up to produce a total. This isn't true.
> You can only add up the numbers like you do if the parts do have this
> additional specificity requirement. What you are actually doing in
> this "addition" is not adding at all. You are actually multiplying.
>
> For example, what is 10^20 * 10^20? Isn't the answer 10^40? In other
> words, you are multiplying the numbers, which is done by addition of
> the exponents. What happens with cascading systems is that you can't
> multiply the individual numbers. You can only add them together: 10^20
> + 10^20 = 20^20.


I think you mean "1e20 + 1e20 = 2e20". Big difference, you know. :)

> That is a far different number - right? As far as a
> measure of specificity is concerned, this number is not nearly as
> "specified" as the 10^40 number. In other words, there is a far greater
> ratio of potentially workable sequences on the one hand than on the
> other.


So, this is it? The basis of your cascade rule is the sort of math
mistake a C-student might make in his first week of an intro prob/stat
course?

I have seen Sean say some incompetent things about probabilities
before, but this one takes the cake. This isn't even wrong; it is
Zoemath written at a higher reading-level.

The probability of two independent events occuring is the product of
the probabilities of each one occuring. Period. It doesn't matter
whether you're talking about enzymes in a cascade, structural proteins
in a flagellum, or lottery numbers. If you want to argue that the
sequence in which the proteins are coded are unimportant for a cascade
(also a bit questionable), than you can multiply that product by N!,
with N being the number of steps being coded for. That would be a
pretty big swing for something requiring 20 steps (20! is approximately
2.4e18), but considerably less so for a cascade of two to four steps
(2! and 4! being 2 and 12, respectively). So an enzyme cascade of four
enzymes would have a twelve times higher probability than a structure
in which a rigid "right order" matters. That might help you a little
bit, but not very much, and it certainly doesn't justify your cascade
rule. It is the diference of perhaps a single "fairly-specified" codon
or two at most.

If you want to argue that the proteins that have to fit together in
some sort of 3d conformation are more specific (hence more constrained
in sequence, and hence rarer in sequence space), you should represent
that simply by assigning them lower individual sequence densities (say,
1e-40 instead of 1e-20), although the biochemical justification for
such an assumption is shaky, as we see in the examples of hemoglobin
and histone.

Or, if you want to argue that "3-D confirmation" requires something
more than just the right structural sequences, you could express that
by requiring additional events above and beyond those coding for the
sequences.

But P(E1*E2) is going to be some function of P(E1) * P(E2) any way you
slice it, cascade or no. The only time one adds probabilities is when
one is calculating the probability of either one event *or* the other
one occuring. This is basic prob/stat, Sean.


<snip>

>
> The equivalent size and specificity requirements for a cascading system
> are very close to the measure of the subpart with the greatest size and
> specificity requirements. If you want to get really technical, the
> total requirement is an addition of the sequence space ratio for all
> the parts together.
>
> For example, if each part in a two-part cascading system had a sequence
> space ratio of 1e-40, then the total ratio for the entire system would
> be 1e-40 + 1e-40 = 2e-40.


Nope. The probability of two independent events occuring together is
the product of their occuring individually, or P(E1) * P(E2). Period.
Now, if you want to say that the order is unimportant, so that E1E2 is
just as good as E2E1, it becomes P(E1) * P(E2) * 2! = 2e-80. What you
just modelled is the probability of either E1 *or* E2 occuring in a
given trial (actually, this would be 2e-40 - 1e160, assuming that any
overlap is random, but that's not a significant difference here).


>
> However, if each part of a two-part system which required specific
> orientation of the parts relative to each other had a sequence space
> ratio of 1e-40, then the total ratio for the entire system would be
> 1e-40 * 1e-40 = 1e-80.
>
> See the difference?


Yes, the difference is that I still remember some of the basics from my
prob/stat courses and you, apparently, do not.

Learn some math, Sean.

Seanpit

unread,
Jul 23, 2006, 11:27:23 PM7/23/06
to
John Harshman wrote:

> Sean's math skills highlighted?:
>
> >10^20 + 10^20 = 20^20.

Oh come on now. You know what I meant. You're like the spelling
flames. I'm was typing fast, but the concept is still the same. (1 x
10^20) + (1 x 10^20) = 2 x 10^20. Or, 1e20 + 1e20 = 2e20. Whatever .
. .

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 24, 2006, 12:30:50 AM7/24/06
to

Von R. Smith wrote:
> Seanpit wrote:
> > Von R. Smith wrote:
> >
>
> < snip >
>
>
> > >
> > > I am not asking for a comparison, I am asking for a count. If being
> > > "fairly specified" is actually a property of the sequence in question,
> > > I should be able to determine it without such comparisons, simply by
> > > estimating the minimum size and degree of sequence constraint of the
> > > primary structure.
> >
> > Your mistaken in estimating specificity is in thinking that parts that
> > do not require specific orientation will all the other parts in a
> > system can simply be added up to produce a total. This isn't true.
> > You can only add up the numbers like you do if the parts do have this
> > additional specificity requirement. What you are actually doing in
> > this "addition" is not adding at all. You are actually multiplying.
> >
> > For example, what is 10^20 * 10^20? Isn't the answer 10^40? In other
> > words, you are multiplying the numbers, which is done by addition of
> > the exponents. What happens with cascading systems is that you can't
> > multiply the individual numbers. You can only add them together: 10^20
> > + 10^20 = 20^20.
>
> I think you mean "1e20 + 1e20 = 2e20". Big difference, you know. :)

It really doesn't make a hill of beans difference as far as the problem
at hand. You, like John, must know what I meant. You're like the


spelling flames. I'm was typing fast, but the concept is still the
same. (1 x 10^20) + (1 x 10^20) = 2 x 10^20. Or, 1e20 + 1e20 = 2e20.
Whatever . . .

> > That is a far different number - right? As far as a


> > measure of specificity is concerned, this number is not nearly as
> > "specified" as the 10^40 number. In other words, there is a far greater
> > ratio of potentially workable sequences on the one hand than on the
> > other.
>
> So, this is it? The basis of your cascade rule is the sort of math
> mistake a C-student might make in his first week of an intro prob/stat
> course?
>
> I have seen Sean say some incompetent things about probabilities
> before, but this one takes the cake. This isn't even wrong; it is
> Zoemath written at a higher reading-level.
>
> The probability of two independent events occuring is the product of
> the probabilities of each one occuring. Period. It doesn't matter
> whether you're talking about enzymes in a cascade, structural proteins
> in a flagellum, or lottery numbers.

The probabilities of two independent events occurring is the sum of the
probabilities of each one occurring - not the product.

For example, let's say that we have a box of 1 million marbles with
only 1 of them being black. If each marble has equal chances of being
picked, on average, how many times will I have to draw out a marble,
and put it back in random order, before I pick the black one? 1
million - right? How many times before I pick the black marble twice?
By your logic I'd have to draw out the marbles 1e6 * 1e6 = 1e12 or 1
trillion times before I'd pull out that black marble twice. That's
simply not true. In reality, it would only take me an average of only 2
million picks to land on the black marble twice.

What you are trying to calculate is the odds of me picking the black
marble twice in a row. The odds of that happening are indeed 1e-12.
However, that isn't the correct formula to answer the question at hand.


> If you want to argue that the
> sequence in which the proteins are coded are unimportant for a cascade
> (also a bit questionable), than you can multiply that product by N!,
> with N being the number of steps being coded for. That would be a
> pretty big swing for something requiring 20 steps (20! is approximately
> 2.4e18), but considerably less so for a cascade of two to four steps
> (2! and 4! being 2 and 12, respectively). So an enzyme cascade of four
> enzymes would have a twelve times higher probability than a structure
> in which a rigid "right order" matters. That might help you a little
> bit, but not very much, and it certainly doesn't justify your cascade
> rule. It is the diference of perhaps a single "fairly-specified" codon
> or two at most.

Again, it seems to me that you are not thinking about the problem
correctly. As I originally described to you, it is far easier to find
all the parts of your multipart system in a given amount of time
because they all exist in a much much smaller sequence space.

> If you want to argue that the proteins that have to fit together in
> some sort of 3d conformation are more specific (hence more constrained
> in sequence, and hence rarer in sequence space), you should represent
> that simply by assigning them lower individual sequence densities (say,
> 1e-40 instead of 1e-20), although the biochemical justification for
> such an assumption is shaky, as we see in the examples of hemoglobin
> and histone.

Ah, but that's not my argument. I'm arguing that given the same
individual sequence specificity per protein part, that a cascading
system has a much much lower overall degree of specificity than a
system that requires all the parts to work together where each protein
has a specific location relative to all the other protein parts.

> Or, if you want to argue that "3-D confirmation" requires something
> more than just the right structural sequences, you could express that
> by requiring additional events above and beyond those coding for the
> sequences.

I'm not making that argument either - and I don't need to in order to
be correct.

> But P(E1*E2) is going to be some function of P(E1) * P(E2) any way you
> slice it, cascade or no. The only time one adds probabilities is when
> one is calculating the probability of either one event *or* the other
> one occuring. This is basic prob/stat, Sean.

You need to consider the ratio here - the odds that the needed
sequences will actually exist in a given search space. Remember also,
you have a great many searchers searching out the sequence space at the
same time - many more than 4 or 5 searchers. So, if all the sequences
exist in the same given sequence space, they will all be found at about
the same time (i.e., 5 black marbles will be found by 1000 searchers at
about the same time, on average).

Again, as I pointed out to you before, you need to ask yourself, what


are the odds that two specific 3-character sequences (26-possible
characters per position) will be present in a long string of say, 1
billion randomly typed 3-character sequences (3 billion characters
total)? The total number of possibilities is 17,576 (sequence space
for 3-character sequences). The odds that a particular 3-character
sequence will not be anywhere in this sequence is (17,575/17,576)^1e9 =
6e-24,711. So, the odds that 1 specific 3-character sequence will be
present in our sequence is very good at 1-6e-24,711 = very very near
100%. So, what are the odds that 2 specific 3-character sequences will
be there? Also very close to 100%. (i.e., 0.99999 . . . * 0.999999 .
. . ).

Now, let me add something to this that I thought was obvious before,
but I guess not. The amount of time needed to find both of these
sequences will be pretty much the same amount of time needed to find
just one of these sequences - since there are many mutations throughout
the genome happening concurrently. And, even if there were only one
searcher, the time needed for one searcher to find both sequences would
be 2 * the time needed to find one string. The time needed is not
multiplied - it is added. If it takes an average time of 1 year to
find a 3-character sequence, it will take an average of just 2 years
for one searcher to find both of the correct 3-character sequences.

> > The equivalent size and specificity requirements for a cascading system
> > are very close to the measure of the subpart with the greatest size and
> > specificity requirements. If you want to get really technical, the
> > total requirement is an addition of the sequence space ratio for all
> > the parts together.
> >
> > For example, if each part in a two-part cascading system had a sequence
> > space ratio of 1e-40, then the total ratio for the entire system would
> > be 1e-40 + 1e-40 = 2e-40.
>
> Nope. The probability of two independent events occuring together is
> the product of their occuring individually, or P(E1) * P(E2). Period.

Nope. You're actually thinking about two independent events occurring
in a row.

> > However, if each part of a two-part system which required specific
> > orientation of the parts relative to each other had a sequence space
> > ratio of 1e-40, then the total ratio for the entire system would be
> > 1e-40 * 1e-40 = 1e-80.
> >
> > See the difference?
>
> Yes, the difference is that I still remember some of the basics from my
> prob/stat courses and you, apparently, do not.
>
> Learn some math, Sean.

; )

Sean Pitman
www.DetectingDesign.com

Seanpit

unread,
Jul 24, 2006, 12:39:34 AM7/24/06
to

The fact that the odds of a novel beneficial system of function being
only 1bp change away from an existing 2,999bp system or a 3,000 bp
system or a 4,000bp system or even a 10,000bp are extremely low. The
odds that a novel system will be even 10bp differences away or even
100bp differeneces away are also extremely unlikely this side of a
practical eternity of time.

It is loading more messages.
0 new messages