Now, obviously, creationists who stuck with that ended up looking rather
stupid, because while they accepted microevolution they had to somehow ignore
its logical macroevolutionary consequences. In fact, to support that position
logically you'd pretty much have to discover a genetic mechanism that
prevents micro changes from aggregating to the macro level.
Since no such mechanism I'm aware of has been forthcoming, they seem to be
trying to avoid the above problem by attempting to define microevolution as
differing in *essence*, rather than just in degree, from macroevolution. The
thrust of the arguments I heard was that microevolution within a species
*never* produces the increase in genetic "information" necessary for new
species to arise. If true, that would let them deny that micro leads to macro
without looking like they were denying the painfully obvious for the purpose
of propping up shaky mythology.
My question is: how the hell do they know, or purport to know, that
microevolutionary change within a species *never* results in an increase in
genetic information? That claim appears to me to be several orders of
magnitude more farcical than just accepting the conventional definition of
microevolution and denying its logical consequence. I'm no geneticist, but I
would assume that to know the claim was true you would first have to
sequence the genome of every species in which a microevolutionary/adaptive
change had occurred, isolate the specific genes the change affected, and
examine that part of the genome before and after the change to see if any
new functional genetic 'information' had been added.
Is there something I'm missing about this creationist argument, or are they
really just making up the 'no new information' thing out of thin air? As far
as I'm aware, no creationist organization has ever done anything close to the
mammoth research into the genome necessary to support such a grandiose and
sweeping claim. In fact, as far as I'm aware, no creationist organization has
produced any significant original research, in any field, ever.
-Adder.
> Is there something I'm missing about this creationist argument, or are they
> really just making up the 'no new information' thing out of thin air?
The latter.
We regularly use genetic algorithms to find good solutions to a broad array of
problems, and those solutions are not in the pile of random numbers we start
from. Selection, natural or otherwise, really does make sense out of
non-sense. (I deliberately avoid the word "information", since I don't want to
follow in creationists' footsteps by making broad claims about a topic that I'm
not trained to treat rigorously.)
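(For anyone who hasn't played with one, here's a bare-bones sketch in Python
of what I mean. All the particulars -- the target string, population size,
mutation rate -- are invented purely for illustration, and it isn't meant as
a model of biology; it just shows selection plus mutation homing in on a
target that none of the random starting strings contain.)

import random

TARGET = [1] * 40                    # an arbitrary "solution" the search should find
POP_SIZE, MUT_RATE, GENERATIONS = 50, 0.02, 300

def fitness(bits):
    # how many positions agree with the target
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits):
    # flip each bit independently with a small probability
    return [b ^ 1 if random.random() < MUT_RATE else b for b in bits]

# start from pure noise
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # keep the better half, refill with mutated copies of survivors
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", len(TARGET))

Run it a few times: the best string reliably ends up at or near the target,
even though the starting population is nothing but coin flips.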
Bobby Bryant
Austin, Texas
> The thrust of the
> arguments I heard was that microevolution within a species *never*
> results in the increased genetic "information" neccesary for the arising
> of new species. If true that would enable one to deny that micro leads
> to macro and not look like you were denying the painfully obvious for
> the purposes of supporting shaky mythology. My question is: How the
> hell do they know, or purport to know, that macroevolutionary change
> within a species *never* results in an increase in genetic information?
They don't. And it's nothing more than a rehash of the old 2LoT (second law
of thermodynamics) argument. Information, complexity, and entropy are all
intertwined -- so much so as to seem aspects of each other. The idea that
information "cannot" increase is just the claim that complexity and order
"cannot" increase because of the 2LoT.
They seem to have figured out that the false dichotomy of "micro" and
"macro" wasn't working out -- that the accumulation of "micro" changes
*would*, over time, make a "macro" difference. So they went looking for
a "mechanism" that would, somehow, stop changes at some arbitrary
point. They *think* they found it by reviving the old 2LoT argument and
insisting information never increases.
Mark
Gidday Blackadder (where's Baldrick?),
> My question is: How the hell do they know, or purport to know, that
> macroevolutionary change within a species *never* results in an increase in
> genetic information?
They don't. There is no evidence to support this view, and in fact,
"macroevolution" has been observed in nature - check the TO archive for
more details.
> Is there something I'm missing about this creationist argument, or are they
> really just making up the 'no new information' thing out of thin air?
Thin air. There is nothing you are missing here.
> As far
> I'm aware no creationist organization has ever done anything close to the
> mammoth research into the genome neccesary to support such a grandiose and
> sweeping claim. In fact as far as I'm aware no creationist organization has
> produced any significant original research, in any feild, ever.
True. You can't expect a creationist to actually do any *science* now,
can you? ;)
--
The Great Hairy One,
BAAWA
====================================
The Blade of Reason
Cuts Through the Mist
Of Confusion
(Remove spam block to email)
>
> My question is: How the hell do they know, or purport to know, that
> macroevolutionary change within a species *never* results in an increase in
> genetic information? That claim appears to me to be several orders of
> magnitude more farcical than than just accepting the conventional definition
> of microevolution and denying its logical consequence. I'm no geneticist but
> I would assume that to know the claim was true you would first have to
> sequence the genome of every species in which a microevolutionary/adaptive
> change had occoured, isolate the specific genes which the change affected,
> and examine that part of the genome before and after the change to see if
> any new functional genetic 'information' had been added.
>
> Is there something I'm missing about this creationist argument, or are they
> really just making up the 'no new information' thing out of thin air? As far
> I'm aware no creationist organization has ever done anything close to the
> mammoth research into the genome neccesary to support such a grandiose and
> sweeping claim. In fact as far as I'm aware no creationist organization has
> produced any significant original research, in any feild, ever.
They're not making it up out of thin air; they're taking it vastly out
of context.
What they're doing is looking at Information Theory, and mucking about
with it in stupid ways.
They start with Shannon information theory. Shannon IT studies
communication in the presence of noise. In Shannon theory, you're
transmitting a piece of information across a noisy channel. It studies
how you can determine the effects of noise on communication. So in
Shannon IT, there is an initial information content, which can only be
degraded. There is *no way* of adding information in Shannon theory.
Shannon then talks about "entropy" in an information system. Entropy
is random noise introduced into the data that you transmit. Entropy
can only increase, and any increase in entropy reduces the information
content of the received data.
Now, they mix this up with a different kind of information theory,
called Kolmogorov/Chaitin information theory. K/C information theory
can, in theory, be used to describe the information content of a
genome. It also has a notion of entropy, which is completely distinct
from the Shannon entropy.
By mixing K/C and Shannon, they can say that a gene has a certain
information content (measured using K/C theory), and that the
information content of it cannot increase (using Shannon theory).
Of course, mixing incompatible theories results in nonsense. The
particular nonsense is that they are, at the root of things, starting
from the assumption that information content cannot increase
(Shannon), and then using that to prove that information content
cannot increase, even though genes are *not* a Shannon medium. There
is no conservation of information in genes; there is no ideal
information at the starting point. In fact, K/C is a pretty good way
of measuring the information content of genes, and K/C does not
preclude increases in information content.
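(A quick illustration of the difference, as a sketch only: the plug-in
entropy below is just the frequency-based Shannon measure, and zlib's
compressed length is a very crude stand-in for K/C complexity; the sequences
are made up. Two strings with identical symbol frequencies get the same
Shannon figure but wildly different K/C-style figures, and nothing in the
latter measure forbids an increase.)

import random
import zlib
from collections import Counter
from math import log2

def plugin_entropy(s):
    # Shannon entropy (bits per symbol), estimated from the string's own symbol frequencies
    n = len(s)
    return -sum(c / n * log2(c / n) for c in Counter(s).values())

def kc_proxy(s):
    # crude stand-in for Kolmogorov/Chaitin complexity: length of a compressed encoding
    return len(zlib.compress(s.encode()))

patterned = "ACGT" * 1000                                     # highly ordered
shuffled = "".join(random.sample(patterned, len(patterned)))  # same symbols, scrambled

for name, s in [("patterned", patterned), ("shuffled", shuffled)]:
    print(name, "entropy/symbol:", round(plugin_entropy(s), 3),
          "compressed bytes:", kc_proxy(s))

# and the compression measure is perfectly happy to increase:
longer = shuffled + "".join(random.choice("ACGT") for _ in range(100))
print("after appending 100 random symbols:", kc_proxy(longer), "compressed bytes")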
-Mark
--
"There's nothing I like better than the sound of a banjo, unless of
course it's the sound of a chicken caught in a vacuum cleaner. "
Mark Craig Chu-Carroll (m...@watson.ibm.com)
IBM T.J. Watson Research Center
Hey, that's a tidy and comprehensible summary of the differences
between a couple of things well outside my field of
expertise...you've actually made something else I've been reading
suddenly make sense. Thanks!
--
pz
Is this part of a newsgroup FAQ? This "no increase of information"
thing comes up more and more.
--
Email handle is time-encoded to foil spammers.
Use recent handles only. Filter on domain name only.
http://www.sherilyn.org.uk/
I second this. POTM perhaps?
Best Regards,
Dave
"Let Mary inviolate be torn upon wheels: for her sake let all chaste women be utterly despised among you!" - Aleister Crowley, The Book of the Law
E-mail: dave AT valinor DOT freeserve DOT co DOT uk
WWW: http://www.valinor.freeserve.co.uk OR http://www.kharne.net
> In Message-ID <pzm-AE9126.1...@news.newsguy.com>,
> pz <p...@mac.com> wrote:
> >In article <m3u23xu...@taliesin.diz.watson.ibm.com>,
> > m...@watson.ibm.com (Mark C. Chu-Carroll) wrote:
> >
> >> "Blackadder" <Blackadder20...@hotmail.com> writes:
> >>
<snip> information post - I second it as PotM
> >Hey, that's a tidy and comprehensible summary of the differences
> >between a couple of things well outside my field of
> >expertise...you've actually made something else I've been reading
> >suddenly make sense. Thanks!
>
> Is this part of a newsgroup FAQ? This "no increase of information"
> thing comes up more and more.
We need a Macroinformation FAQ :-)
--
John Wilkins, Head, Communication Services, The Walter and Eliza Hall
Institute of Medical Research, Melbourne, Australia
Homo homini aut deus aut lupus - Erasmus of Rotterdam
<http://www.users.bigpond.com/thewilkins/darwiniana.html>
> >>
> >> They're not making it up out of thin air; they're taking it vastly out
> >> of context.
> >>
<snip tidy and comprehensible summary>
> >Hey, that's a tidy and comprehensible summary of the differences
> >between a couple of things well outside my field of
> >expertise...you've actually made something else I've been reading
> >suddenly make sense. Thanks!
> >--
> >pz
> >
>
> I second this. POTM perhaps?
>
Yes! Isn't there a new POTMmeister now?
Noelie
--
"What was that middle part again?" --Otto, _AFCW_
Mark, do you have any links to more information about K/C information
theory?
--
| Andrew Glasgow <amg39(at)cornell.edu> |
| SCSI is *NOT* magic. There are *fundamental technical |
| reasons* why it is necessary to sacrifice a young goat |
| to your SCSI chain now and then. -- John Woods |
-Adder
"Mark C. Chu-Carroll" <m...@watson.ibm.com> wrote in message
news:m3u23xu...@taliesin.diz.watson.ibm.com...
> In article <m3u23xu...@taliesin.diz.watson.ibm.com>,
> m...@watson.ibm.com (Mark C. Chu-Carroll) wrote:
>
...
>
>
> Mark, do you have any links to more information about K/C information
> theory?
He will, but I do too :-)
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/inv.html
http://www.best.com/~szabo/kolmogorov.html
http://xxx.lanl.gov/PS_cache/quant-ph/pdf/9510/9510005.pdf
http://www.jucs.org/jucs_2_5/introduction_to_algorithmic_information
http://asuwlink.uwyo.edu/~wtg/infophys/node16.html
Or do, as I did, a search on Algorithmic Information Theory. Also search
for Chaitin or go to his home page
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/
or search for Chaitin omega (the measure of the minimum compressibility,
and hence the information content, of a string): eg.
http://citeseer.nj.nec.com/calude99chaitin.html
A tonne (metric) of Chaitin's stuff and articles about it is online,
mostly in PDF. Gotta love mathematicians.
Also this book is pretty neat:
Chaitin, G. J. (1999). The unknowable. Singapore; New York, Springer.
"This companion volume to Chaitin's "The Limits of Mathematics",
also published by Springer, gives a historical survey of the work this
century on the foundations of mathematics, in which the author was a
major participant. "The Unknowable" is a readable and concrete
introduction to Chaitin's ideas, and it includes a detailed explanation
of the programming language used by Chaitin in both volumes. It will
enable computer users to interact with the author's proofs and discover
for themselves how they work. The software for "The Unknowable" can be
downloaded from the author's Web site."--BOOK JACKET.
Indeed there is. ;) It's me. The nomination is duly noted.
--
When I am dreaming,
I don't know if I'm truly asleep, or if I'm awake.
When I get up,
I don't know if I'm truly awake, or if I'm still dreaming...
--Forest for the Trees, "Dream"
To send e-mail, change "excite" to "hotmail"
There's no t.o. FAQ on it AFAIK, but I'm working on an essay cataloging some
information-increasing mutations for my website,
www.freespeech.org/ebonmusings, using the facts from some excellent
elucidations of information theory such as this one.
Thanks for the info, Mark. Good stuff. I have one little quibble, though
- I really don't think that a creationist actually sits down and thinks
about this, or even knows about Shannon/KC IT.
> Mark Craig Chu-Carroll (m...@watson.ibm.com)
Chu-Carroll??
Does Cthulhu walk amongst us?
"Mark C. Chu-Carroll" wrote:
> "Blackadder" <Blackadder20...@hotmail.com> writes:
>
> >
> > My question is: How the hell do they know, or purport to know, that
> > macroevolutionary change within a species *never* results in an increase in
> > genetic information? That claim appears to me to be several orders of
> > magnitude more farcical than than just accepting the conventional definition
> > of microevolution and denying its logical consequence. I'm no geneticist but
> > I would assume that to know the claim was true you would first have to
> > sequence the genome of every species in which a microevolutionary/adaptive
> > change had occoured, isolate the specific genes which the change affected,
> > and examine that part of the genome before and after the change to see if
> > any new functional genetic 'information' had been added.
> >
> > Is there something I'm missing about this creationist argument, or are they
> > really just making up the 'no new information' thing out of thin air? As far
> > I'm aware no creationist organization has ever done anything close to the
> > mammoth research into the genome neccesary to support such a grandiose and
> > sweeping claim. In fact as far as I'm aware no creationist organization has
> > produced any significant original research, in any feild, ever.
>
> They're not making it up out of thin air; they're taking it vastly out
> of context.
>
> What they're doing is looking at Information Theory, and mucking about
> with it in stupid ways...
[lots of good stuff deleted]
They also never get round to defining how they measure the amount of information
in a DNA string. This allows them to say both that
- duplication mutations add no new information, because they just copy existing
genes, and
- point mutations add no new information, because they don't increase the size
of the genome.
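(To make the point concrete, here's a sketch that pins down one explicit --
if crude -- measure: the zlib-compressed length of the sequence. The toy
genome, mutation rate, and seed are all invented. A bare duplication adds
almost nothing by this measure, but once point mutations accumulate in the
copy, the measured content clearly goes up.)

import random
import zlib

random.seed(1)   # arbitrary seed, just for repeatability

def info_proxy(seq):
    # crude proxy for information content: length of a compressed encoding
    return len(zlib.compress(seq.encode()))

gene = "".join(random.choice("ACGT") for _ in range(600))
genome = gene                                  # toy "genome" consisting of one gene

duplicated = genome + gene                     # duplication: a verbatim second copy
print("original:  ", info_proxy(genome))
print("duplicated:", info_proxy(duplicated))   # barely higher: the copy is pure redundancy

# let point mutations accumulate in the second copy
copy = list(gene)
for i in range(len(copy)):
    if random.random() < 0.2:                  # re-draw each base with probability 0.2
        copy[i] = random.choice("ACGT")
diverged = genome + "".join(copy)
print("diverged:  ", info_proxy(diverged))     # clearly higher: the copy now differs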
Roy
wilkins wrote:
>
> Sherilyn <6...@sherilyn.org.uk> wrote:
>
> > In Message-ID <pzm-AE9126.1...@news.newsguy.com>,
> > pz <p...@mac.com> wrote:
> > >In article <m3u23xu...@taliesin.diz.watson.ibm.com>,
> > > m...@watson.ibm.com (Mark C. Chu-Carroll) wrote:
> > >
> > >> "Blackadder" <Blackadder20...@hotmail.com> writes:
> > >>
> <snip> information post - I second it as PotM
I third it.
Sure. Greg Chaitin's homepage is at:
http://www.cs.umaine.edu/~chaitin/
He's got some amazingly dense books that go into great depth,
like "Algorithmic Information Theory", which is available in postscript
at his website. He's also got some more approachable books. I highly
recommend "The Limits of Mathematics"; it's a fascinating read. He's
got a new one called "Exploring Randomness" which sounds very relevant
to this discussion, but I haven't read it yet. Links to all of these
are on his webpage.
> "Mark C. Chu-Carroll" wrote:
> [Snippity]
>
> Thanks for the info, Mark. Good stuff. I have one little quibble, though
> - I really don't think that a creationist actually sits down and thinks
> about this, or even knows about Shannon/KC IT.
I didn't mean to imply that they understood it. If you do a very
quick search for "Entropy" and "Information", you find a lot of
both Shannon and Chaitin. To really understand either takes a
*lot* of effort; but by doing shallow keyword matching without
attempting to understand either, you wind up with the creationist
gibberish.
> > Mark Craig Chu-Carroll (m...@watson.ibm.com)
>
> Chu-Carroll??
>
> Does Chtuhlu walk amongst us?
Not yet. But my 8-month-old daughter's best friend is a purple stuffed
Cthulhu doll. (Seriously.)
Glad to be of help. It's always fun when I get to take advantage of
my math; it doesn't happen often in t.o.
> The Great Hairy One <"the_great_hairy"@yahoo.com.au (yahoo!)> writes:
>
> > "Mark C. Chu-Carroll" wrote:
> > [Snippity]
> >
> > Thanks for the info, Mark. Good stuff. I have one little quibble, though
> > - I really don't think that a creationist actually sits down and thinks
> > about this, or even knows about Shannon/KC IT.
>
> I didn't mean to imply that they understood it. If you do a very
> quick search for "Entropy" and "Information", you find a lot of
> both Shannon and Chaitin. To really understand either takes a
> *lot* of effort; but by doing shallow keyword matching without
> attempting to understand either, you wind up with the creationist
> gibberish.
About the way many headhunters understand technical job descriptions.
I had one that I kept telling: No, dammit, I'm a Systems Programmer, not a
Systems Analyst!
I got to one interview and things were going just fine until the manager asked
"what do you know about SDLC?" and I said "Synchronous Data Link Control?" "No,
Systems Design Life Cycle..."
>
>
> > > Mark Craig Chu-Carroll (m...@watson.ibm.com)
> >
> > Chu-Carroll??
> >
> > Does Chtuhlu walk amongst us?
>
> Not yet. But my 8 month old daughter's best friend is a purple stuffed
> cthulu doll. (Seriously.)
As long as it doesn't "call" to her...
--
Fred Stone
aa # 1369
"There is on earth among all dangers no more dangerous thing than a richly
endowed and adroit reason...Reason must be deluded, blinded, and destroyed."
[Martin Luther]
True, true. But again, I don't think the creationists do this when they
set up their evolutionary straw men, and put forward their arguments. I
would think that people like Hovind and the rest haven't even considered
information theory, and simply take their arguments from the bible, or
bad science.
> Not yet. But my 8 month old daughter's best friend is a purple stuffed
> cthulu doll. (Seriously.)
Yeah, I've seen those dolls - very cute, for an Elder God. Have you seen
the Nyarlathotep ones?
No, one should not expect that. Creationists spend all their available time
appealing to lack of proof that their sky pixie did not design and create
everything. For example, Tichy writes, "... the lack of an example of
complex specified information clearly created by unintelligent forces lends
strength to an inference to a designer." What 'designer' is he asserting?
Tichy writes, "God exists ... "
My 6-year-old son's is similar, I think. Godzilla is similar, isn't he?
Aron-Ra
Cthulhu would look something like an octopus kinda tentacly thing.
Yes I know that. But I was comparing the two as non-human entities the size
of roller coasters that reside in the depths and are worshipped as gods.
Aron-Ra
The natives sacrificed pretty girls to King Kong, but to Godzilla? Didn't he
just wreck Tokyo until they got enough hit points on him?
Besides, anything's better than a purple plush mutant dinosaur that sings :-)
Yes it dofs. And the situathion is even vorse. Just sticking
with the Shaninon definition, it is necesssary to dephine what
the information *is*.
Whenever some cretinist bring it up I alwazs have a bite
of a chuklee. You see, one way to tronsmit secret infornation
is to obfuskate it and then mix it it with some innoculous
signel, like the typical talk.orgins post. De person
doing the "entroply" analisis sees the signal as being
nosy. Only he's measering the wron noise becavse he's
loking at ze talk.originz post.
---- Paul J. Gans
> In talk.origins Sherilyn <6...@sherilyn.org.uk> wrote:
<snip>
>
> > Is this part of a newsgroup FAQ? This "no increase of information"
> > thing comes up more and more.
>
> Yes it dofs. And the situathion is even vorse. Just sticking
> with the Shaninon definition, it is necesssary to dephine what
> the information *is*.
>
> Whenever some cretinist bring it up I alwazs have a bite
> of a chuklee. You see, one way to tronsmit secret infornation
> is to obfuskate it and then mix it it with some innoculous
> signel, like the typical talk.orgins post. De person
> doing the "entroply" analisis sees the signal as being
> nosy. Only he's measering the wron noise becavse he's
> loking at ze talk.originz post.
>
> ---- Paul J. Gans
For a minute, I thought I was reading Corporal Carrot's letter home to
his mume. But then I realised, Paul is just demonstrating redundancy...
In the 1956 Americanized version of the 1954 original "Gojira", Raymond Burr
was added to entice American audiences. Before the monster ever appeared on
land, Burr's character (named Steve Martin) went to a small island of
natives who worshipped it.
When the American ace reporter was informed that the tribal ceremony being
performed was to appease a monster, Burr showed immediate disdain for their
silliness.
Yes, we used to be cynical back in the 50s.
If I were doing Tristar's remake of that film, I would have Steve Martin
playing a reporter named Raymond Burr and when the new reporter heard the
first hint of island natives, he would have asked in an excited childlike
and hopeful tone about any possibility of sea monsters or sacrifices to
volcano gods, while the natives roll their eyes at his silliness.
Aron-Ra
<snicker>
> If I were doing Tristar's remake of that film, I would have Steve Martin
> playing a reporter named Raymond Burr and when the new reporter heard the
> first hint of island natives, he would have asked in an excited childlike
> and hopeful tone about any possability of sea monsters or sacrifices to
> volcano gods, while the natives roll their eyes at his silliness.
This is clearly an improvement over what Sony actually inflicted on us.
It was very cleansing to see the most recent Toho Films installment.
> WickedDyno <amg39.RE...@cornell.edu.invalid> writes:
>
> > In article <m3u23xu...@taliesin.diz.watson.ibm.com>,
> > m...@watson.ibm.com (Mark C. Chu-Carroll) wrote:
> >
> > Mark, do you have any links to more information about K/C information
> > theory?
>
> Sure. Greg Chaitin's homepage is at:
>
> http://www.cs.umaine.edu/~chaitin/
>
> He's got some amazingly dense books that go into great depth,
> like "Algorithmic Information Theory", which is available in postscript
> at his website. He's also got some more approachable books. I highly
> recommend "The Limits of Mathematics"; it's a fascinating read. He's
> got a new one called "Exploring Randomness" which sounds very relevant
> to this discussion, but I haven't read it yet. Links to all of these
> are on his webpage.
>
> -Mark
Cool. I really wish I had time to check all this stuff out. Sigh. If
only I got paid to look up interesting stuff on line and figure it all
out.
> WickedDyno <amg39.RE...@cornell.edu.invalid> wrote:
>
> > In article <m3u23xu...@taliesin.diz.watson.ibm.com>,
> > m...@watson.ibm.com (Mark C. Chu-Carroll) wrote:
> >
> ...
> >
> >
> > Mark, do you have any links to more information about K/C information
> > theory?
>
> He will, but I do too :-)
<keanu> Whoa. </keanu>
> http://www.cs.auckland.ac.nz/CDMTCS/chaitin/inv.html
Ok, the first half or 2/3 of this basically made my eyes glaze over --
I'm not much of a programmer, nor do I know LISP -- but the last
section, about mathematics being basically distinct from everyday
empirical science only in its age... well, I had to break out a stiff
drink to bring my brain cells back in line.
<stinky hippie>It, like, blew my mind, man. </stinky hippie>
> http://www.best.com/~szabo/kolmogorov.html
The section on superficiality and sophistication is particularly germane
to evolutionary discussion.
"More examples of sophistication are provided by the highly evolved
structures of living things, such as wings, eyes, brains, and so on.
These could not have been thrown together by chance; they must be the
result of an adaptive algorithm such as Darwin's algorithm of variation
and selection. If we lost the genetic code for vertebrate eyes in a mass
extinction, it would take nature a vast number of animal lifetimes to
re-evolve them. A sophisticated structure has a high replacement cost.
Bennett calls the computational replacement cost of an object its
logical depth. Loosely speaking, depth is the necessary number of steps
in the causal path linking an object with its plausible origin.
Formally, it is the time required by the universal Turing machine to
compute an object from its compressed original description."
This profoundly astounds me. Could the "compressed original
description" of an organism be its DNA? Could living organisms
be seen as DNA-based Turing machines? <beatnik>*snap*snap*snap*snap*
Deeeeeeep.</beatnik>
> http://xxx.lanl.gov/PS_cache/quant-ph/pdf/9510/9510005.pdf
O_o
The whoosh you just heard was the above page going way over my head.
> http://www.jucs.org/jucs_2_5/introduction_to_algorithmic_information
Account-based, no access for me. I guess Cornell doesn't have a
subscription.
> http://asuwlink.uwyo.edu/~wtg/infophys/node16.html
Eeeeenteresting. Too bad I'm way too busy to look up some of the
references on this page; although I'd probably not be able to
understand them if I did.
> Or do, as I did, a search on Algorithmic Information Theory. Also search
> for Chaitin or go to his home page
>
> http://www.cs.auckland.ac.nz/CDMTCS/chaitin/
>
> or search for Chaitin omega (the measure of the minimum compressibility,
> and hence the information content, of a string): eg.
> http://citeseer.nj.nec.com/calude99chaitin.html
>
> A tonne (metric) of Chaitin's stuff and articles about it is online,
> mostly in PDF. Gotta love mathematicians.
>
> Also this book is pretty neat:
>
> Chaitin, G. J. (1999). The unknowable. Singapore; New York, Springer.
> "This companion volume to Chaitin's "The Limits of Mathematics",
> also published by Springer, gives a historical survey of the work this
> century on the foundations of mathematics, in which the author was a
> major participant. "The Unknowable" is a readable and concrete
> introduction to Chaitin's ideas, and it includes a detailed explanation
> of the programming language used by Chaitin in both volumes. It will
> enable computer users to interact with the author's proofs and discover
> for themselves how they work. The software for "The Unknowable" can be
> downloaded from the author's Web site."--BOOK JACKET.
${DEITY}-damnit, If I wasn't so busy with schoolwork, I could actually
be busy learning something.... Hmmm, there's something wrong with that
sentence....
I've always been a big ol' Godzilla fan,
even though they've never ever made a suitable film on either side of the
Pacific.
Raymond Burr's opening monologue in the 1956 version was the best thing any of
those movies ever had, except for a truly original idea, which is always so rare in
cinema.
Aron-Ra
>They start with Shannon information theory. Shannon IT studies
>communication in the presence of noise. In Shannon theory, you're
>transmitting a piece of information across a noisy channel. It studies
>how you can determine the effects of noise on communication. So in
>Shannon IT, there is an initial information content, which can only be
>degraded. There is *no way* of adding information in Shannon theory.
>Shannon then talks about "entropy" in an information system. Entropy
>is random noise introduced into the data that you transmit. Entropy
>can only increase, and any increase in entropy reduces the information
>content of the received data.
Ack, nooooo! I'm afraid you're garbling Shannon theory in several ways
here. In the first place, you're using a different interpretation of
Shannon's theory than he used, and in the second place the claims that
there is no way of adding information and that entropy always increases
are both simply incorrect.
Let me try to explain. In Shannon's seminal paper ("A Mathematical
Theory of Communication", _Bell System Technical Journal_, v. 27
(1948), pp. 379-423 and 623-656; available online at <http://cm.bell-
labs.com/cm/ms/what/shannonday/paper.html>), he defines entropy
primarily as a measure of information, not degradation of information.
I recommend reading sections 6 ("Choice, Uncertainty, and Entropy") and
7 ("The Entropy of an Information Source") to clarify Shannon's view of
the relation between entropy and information. For those who don't want
to take that detour, here are some sound-bites:
Quantities of the form H = -K * sum from i = 1 to n of P_i log P_i
[i.e. Shannon-entropies -GD] (the constant K merely amounts to a
choice of a unit of measure) play a central role in information
theory as measures of information, choice and uncertainty. [from
section 6]
H or H' [previously defined as the information source's entropy per
symbol and entropy per second, respectively -GD] measures the amount
of information generated by the source per symbol or per second.
[from section 7]
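(If you want to see that definition do some work, here's a tiny calculation
in Python -- the four-symbol alphabet and the probabilities are arbitrary toy
numbers. A source with statistical structure generates less information per
symbol than a uniform one, which is exactly what Theorem 9 below lets you
exploit when encoding for a channel.)

from math import log2

def H(probs):
    # Shannon entropy in bits per symbol: H = -sum p_i * log2(p_i)
    return -sum(p * log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]      # four equally likely symbols
skewed  = [0.50, 0.25, 0.125, 0.125]    # same alphabet, but a statistically biased source

print(H(uniform))   # -> 2.0 bits per symbol
print(H(skewed))    # -> 1.75 bits per symbol: structure means less information per symbol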
Many of Shannon's results only make sense under this interpretation of
entropy (H); for example:
Theorem 9: Let a source have entropy H (bits per symbol) and a
[noiseless -GD] channel have a capacity C (bits per second).
Then it is possible to encode the output of the source in such a way
as to transmit at the average rate C/H-epsilon symbols per second
over the channel where epsilon is arbitrarily small. It is not
possible to transmit at an average rate greater than C/H. [from
section 9]
Theorem 11: Let a discrete [noisy -GD] channel have the capacity C
and a discrete source the entropy per second H. If H =< C there
exists a coding system such that the output of the source can be
transmitted over the channel with an arbitrarily small frequency of
errors (or an arbitrarily small equivocation [information loss in
transit -GD]). If H > C it is possible to encode the source so
that the equivocation is less than H-C+epsilon where epsilon is
arbitrarily small. There is no method of encoding which gives an
equivocation less than H-C. [from section 13]
Both of these theorems give the limits on how much information --
measured by the entropy of the information source -- a channel can
carry.
The interpretation you're using appears (at least as far as I've been
able to figure out) to have originated with Leon Brillouin (primarily
in _Science and Information Theory_, Academic Press Inc, New York,
1956), and derived from a mixture of Shannon's definition of entropy
and Norbert Wiener's (in _Cybernetics_, John Wiley and Sons, Inc., New
York, 1948) definition of information as a decrease in entropy (where
entropy is used as a measure of uncertainty). This is not in complete
conflict with Shannon's usage -- he also sometimes used entropy to
measure uncertainty, and the Brillouin definition is fairly close to
Shannon's joint information (a measure of the correlation between two
signals). The big difference is that Brillouin and his followers don't
consider entropy to be useful as a measure of information -- they
regard entropy as the opposite of information, and use it _only_ to
measure uncertainty (what I would think of as missing information).
To illustrate the quantities involved in Shannon's theory and their
various interpretations, let me analyse an instance of communication
via email. Suppose Alice sends Bob an email message consisting
entirely of two enclosures, X and Y. To keep things simple, assume
that X and Y are unrelated (technically, their joint information is 0),
and the message contains no other information (e.g. in the header).
The entropy of the transmitted message is then simply the sum of the
entropies of the individual enclosures, that is H(T) = H(X)+H(Y).
According to both Shannon's and Brillouin's interpretations, this is
a measure of the a priori uncertainty of the transmitted message. In
Shannon's interpretation, it is also a measure of the amount of
information in the transmitted message.
Suppose that one of the mail servers along the route from Alice to
Bob has a (deterministic) bug that drops the second enclosure, and
substitutes Z (an unrelated file that happened to be lying around the
server), so the message that Bob receives actually consists of X and Z.
Not surprisingly, the entropy of the received message is the sum
of the entropies of X and Z, that is H(R) = H(X)+H(Z). In both
interpretations, this is a measure of the a priori uncertainty of the
received message. In Shannon's interpretation, it is also a measure of
the amount of information in the received message. Note that (under
either interpretation) there's no particular relation between H(R) and
H(T); they could be equal, H(T) could be larger, H(R) could be larger,
whatever. Here and in general, there's no particular directionality to
the entropy change during transmission.
Now, for the more obscure measures: the joint entropy of the
transmitted and received messages is the sum of all of the
involved enclosures' entropies, H(T,R) = H(X)+H(Y)+H(Z). In both
interpretations, this is a measure of the total a priori uncertainty of
the whole transmit-receive transaction. In Shannon's interpretation,
it is also a measure of the total amount of information in the two
messages taken together. This will, in general, be larger than the
information in either the transmitted or received message; intuitively,
this means that someone who knows the contents of both versions of the
message has more information than someone who knows the contents of
only one of them. However, the joint entropy will generally be less
than the sum of the transmitted and received messages' entropies;
this is because having access to a second copy of X doesn't tell our
hypothetical all-seeing observer anything new, so adding up the sent
and received information would double-count the information in X.
The conditional entropy of the transmitted message given the received
message (also known as the equivocation) is the entropy of Y, i.e.
H(T|R) = H(Y). In both interpretations, this is Bob's uncertainty
about the transmitted message (after receiving the garbled version).
Both interpretations would also regard it as a measure of the amount
of information lost in transit. Shannon's interpretation would
more specifically call it the amount of information present in the
transmitted message that's not also in the received signal.
There is also a symmetric reverse of this -- the conditional entropy
of the received message given the transmitted message, which in this
case is equal to the entropy of Z, i.e. H(R|T) = H(Z). In both
interpretations, this is basically a measure of the amount of noise
added to the message in transit. In Shannon's interpretation, it can
also be thought of as the amount of information added in transit, or
the amount of information in the received message that isn't also in
the transmitted message.
Note carefully the implications here: noise _is_ information, just not
the information you wanted. For example, when you're trying to hold
a conversation at a noisy party, your "noise" is really just other
people's conversations -- other people's signal is your noise. And
similarly, your conversation -- your signal -- is part of other people's
noise. As far as I can see, this turns out to be true in general:
the distinction between signal and noise is purely a subjective value
judgement. This is obscured in the standard example of a communication
system, but even here the distinction is fundamentally arbitrary:
signal is information coming from the transmitter we're supposed to be
getting information from; noise is information from anything else. If
I tune my radio to the all-Barry-Manilow station and hear Bach instead,
that's defined as noise; not because Manilow is better than Bach in any
objective sense, but because I (for whatever reason) wanted to hear
Manilow. If I tune to the Bach station and hear static, that again is
noise, because if I wanted static I would've tuned to the all-static
station. And if the carefully-designed static source my cryptographic
system uses to generate unguessable secret keys starts churning out
Manilow MP3's, I'm going to sue the manufacturer.
But I digress...
The joint information of the transmitted and received messages turns
out to be equal to the entropy of X, i.e. I(T,R) = H(X). In both
interpretations this is regarded as the amount of information
successfully communicated by the message. Brillouin fans will point
out that the joint information can be defined as the entropy of the
transmitted message minus the conditional entropy of the transmitted
message given the received message -- i.e. it is the (average) decrease
in Bob's uncertainty about the transmitted message due to the received
message. Shannon's interpretation would also regard it as the amount
of information shared by both the transmitted and received messages.
This is closely related to, but not quite, what the Brillouin/Wiener
interpretation regards as the fundamental definition of information:
the decrease in the receiver's uncertainty about the transmitted signal
due to a _specific_ received message. Shannon's joint information is
exactly the Brillouin/Wiener information, averaged over all possible
received messages.
(Actually, I would argue that all of Shannon's measures should be
regarded as averages over all possibilities. For example, there is a
way of associating specific amounts of information to each specific
message that the transmitter might send -- the entropy of the
transmitted signal turns out to be the average of these, weighted by
each message's probability of occurrence.)
I (as a Shannon fan) would like to point out that if you add up the
amount of information successfully communicated (the joint information,
according to both interpretations) and the amount lost in transit (the
equivocation, according to both interpretations), you get the entropy
of the transmitted message; but that sum should, intuitively, also
correspond to the amount of transmitted information, shouldn't it?
Also, I'd like to point out that that this means that the amount of
successfully communicated information is exactly equal to the entropy
of the portion of the message that was successfully communicated (and
not just by coincidence, either). Also, the joint information can be
defined as the sum of the two messages' individual entropies minus
their joint entropy -- that is, it's the amount of information that
would be double-counted by adding together the amounts of information
in the two messages. Finally, I should also note that the relationship
between transmit and receive is completely symmetric -- the joint
information can also be defined as the entropy of the received
message minus the entropy of the received message given the transmitted
message, which can be thought of as Alice's decrease in uncertainty
about the received message due to having written the transmitted
message. All of these definitions are mathematically equivalent.
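(For the numerically inclined, here's a quick Python check of those
equivalences. The joint distribution is an arbitrary toy, not anything
derived from the email example; any joint distribution would do.)

from math import log2

# an arbitrary toy joint distribution over (transmitted symbol t, received symbol r)
p = {('a', 'a'): 0.4, ('a', 'b'): 0.1,
     ('b', 'a'): 0.1, ('b', 'b'): 0.4}

def H(dist):
    # Shannon entropy of a discrete distribution given as {outcome: probability}
    return -sum(q * log2(q) for q in dist.values() if q > 0)

pT, pR = {}, {}                        # marginal distributions of T and R
for (t, r), q in p.items():
    pT[t] = pT.get(t, 0.0) + q
    pR[r] = pR.get(r, 0.0) + q

def cond_entropy(joint, marg, index):
    # H(X|Y) = sum over y of p(y) * H(X given Y=y); `index` picks which slot of the key is Y
    total = 0.0
    for y, py in marg.items():
        cond = {k: q / py for k, q in joint.items() if k[index] == y}
        total += py * H(cond)
    return total

H_T, H_R, H_TR = H(pT), H(pR), H(p)
H_T_given_R = cond_entropy(p, pR, 1)   # the equivocation, H(T|R)
H_R_given_T = cond_entropy(p, pT, 0)   # the noise added in transit, H(R|T)

# the equivalent definitions of the joint (mutual) information all agree:
print(round(H_T - H_T_given_R, 4),
      round(H_R - H_R_given_T, 4),
      round(H_T + H_R - H_TR, 4))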
Confused? Let me try a more visual approach to explaining the
Shannon view, specifically a Venn diagram. Picture, if you will, two
overlapping circles corresponding to the information in the transmitted
and received messages. X (the enclosure that got through intact) is
the information shared by both messages, and corresponds to the region
where the circles overlap; Y (the enclosure that got dropped) is the
part of the transmit circle that doesn't overlap; Z (the enclosure that
got added) is the part of the receive circle that doesn't overlap. It
all looks vaguely like this:
transmitted received
| |
V V
_________
/ / \ \
| Y | X | Z |
\___\_/___/
(only much better drawn). With this picture in mind, here are the
parts of the information involved that these quantities measure:
Transmitted entropy, H(T) Received entropy, H(R)
_____ _____
/ \ / \
| Y X | Z Y | X Z |
\_____/ \_____/
Conditional entropy of Conditional entropy of
Xmit given Rec, H(T|R) Rec given Xmit, H(R|T)
____ ____
/ / \ \
| Y | X Z Y X | Z |
\___\ /___/
Joint entropy, H(T,R) Joint information, I(T,R)
_________ _
/ \ / \
| Y X Z | Y | X | Z
\_________/ \_/
A little clearer? Maybe?
Let me take a little more direct look at the claim that Shannon theory
requires that information always degrade. As I've said above, entropy
can either increase or decrease in transit, so if you view entropy as
directly related (either positively or negatively) to information, this
looks unsupportable. However, there is one sense in which information
can only get worse in transit: it can only get more and more different
from how it started out, and in a communication system change is by
definition degradation. As Shannon puts it, "The fundamental problem
of communication is that of reproducing at one point either exactly
or approximately a message selected at another point" [from the
introduction]. The goal of a communication system is to preserve and
transfer information intact, not add or change information in transit.
In this sense, and in this sense only, information can only degrade.
If you apply this to evolution (and I'd argue that it actually does
apply, although I'd warn against drawing conclusions incautiously),
it essentially means that change accumulates irreversibly -- loosely
speaking, the information in gene pools can only get more and more
different from the progenote's. I don't see any conflict here with
evolution.
>Now, they mix this up with a different kind of information theory,
>called Kolmogorov/Chaitin information theory. K/C information theory
>can, in theory, be used to describe the information content of a
>genome. It also has a notion of entropy, which is completely distinct
>from the Shannon entropy.
I'll also disagree (though much less so) here. I agree that Shannon's
statistical theory and algorithmic (K/C) information theory are very
different in basic approach, and I think a lot of people (not just
creationists) tend to be sloppy about mixing them. However, the two
theories actually are closely parallel in many ways, and there are even
some rigorous connections that can be drawn (for example, the average
K-complexity of a distribution of strings is always greater than or
equal to the distribution's Shannon-entropy).
One parallel that's particularly damning for the Creationists' claim is
that both theories imply a very close connection between information
and randomness. In the algorithmic theory, strings of near-maximal
complexity are referred to as random because, well, for one thing
they're patternless (i.e. have no regular internal structure), and for
another thing if you generate a string at random, there's a very high
probability you'll get a near-maximal-complexity string. (OTOH, high-
complexity strings are hard to generate by deterministic means.)
In statistical information theory, randomness and information
production are practically synonymous. Allow me to quote Shannon
again:
We now consider the information source. How is an information
source to be described mathematically, and how much information
in bits per second is produced in a given source? The main
point at issue is the effect of statistical knowledge about the
source in reducing the required capacity of the [communication]
channel, by the use of proper encoding of the information. In
telegraphy, for example, the messages to be transmitted consist
of sequences of letters. These sequences, however, are not
completely random. In general, they form sentences and have the
statistical structure of, say, English. The letter E occurs
more frequently than Q, the sequence TH more frequently than XP,
etc.
....
We can think of a discrete source as generating the message,
symbol by symbol. It will choose successive symbols according
to certain probabilities depending, in general, on preceding
choices as well as the particular symbols in question. A
physical system, or a mathematical model of a system which
produces such a sequence of symbols governed by a set of
probabilities, is known as a stochastic [i.e. random -GD]
process. We may consider a discrete source, therefore, to be
represented by a stochastic process. Conversely, any stochastic
process which produces a discrete sequence of symbols chosen
from a finite set may be considered a discrete source. [from
section 2]
So, in light of that, I'd have to go with Blackadder's suggestion that
creationists are making things up out of thin air.
--
Human: Gordon Davisson ><todd>
HASA: Member, S division. o o
Internet: gor...@tardigrade.org
[wonderful post on information theories deleted]
PotM! PotM! PotM!
--
Dave Empey