Glenn
On Mon, 15 Jun 1998 20:17:35 -0500, Glenn R. Morton wrote:
>SJ>Could you please clarify what your "technical definition" of
>>"information" is?
GM>H=-K sum(i) P(i)log(p(i))
>
>where P(i) are the probabilities of each character in the information
>carrying set.
Thanks. See my son's reply, under the heading of Information: Brad's reply.
But I suspect that a major problem may be two different uses of the
word "information". In biology it is the specific *meaning* of the
genetic code which constitutes its "information". Thus in Dawkins'
famous string "METHINKS IT IS LIKE A WEASEL", Dawkins
started from the random phrase WDLMNLT
DTJBKWIRZREZLMQCO P and reached METHINKS IT IS LIKE
A WEASEL by a series of random `mutations':
"So much for single-step selection of random variation. What about
cumulative selection; how much more effective should this be? Very
very much more effective, perhaps more so than we at first realize,
although it is almost obvious when we reflect further. We again use
our computer monkey, but with a crucial difference in its program.
It again begins by choosing a random sequence of 28 letters, just as
before:
WDLMNLT DTJBKWIRZREZLMQCO P
It now 'breeds from' this random phrase. It duplicates it repeatedly,
but with a certain chance of random error - 'mutation' - in the
copying. The computer examines the mutant nonsense phrases, the
'progeny' of the original phrase, and chooses the one which, however
slightly, most resembles the target phrase, METHINKS IT IS LIKE
A WEASEL. In this instance the winning phrase of the next
'generation' happened to be:
WDLTMNLT DTJBSWIRZREZLMQCO P
Not an obvious improvement! But the procedure is repeated, again
mutant 'progeny' are 'bred from' the phrase, and a new 'winner' is
chosen. This goes on, generation after generation. After 10
generations, the phrase chosen for 'breeding' was:
MDLDMNLS ITJISWHRZREZ MECS P
After 20 generations it was:
MELDINLS IT ISWPRKE Z WECSEL
By now, the eye of faith fancies that it can see a resemblance to the
target phrase. By 30 generations there can be no doubt:
METHINGS IT ISWLIKE B WECSEL
Generation 40 takes us to within one letter of the target:
METHINKS IT IS LIKE I WEASEL
And the target was finally reached in generation 43. A second run of
the computer began with the phrase:
Y YVMQKZPFJXWVHGLAWFVCHQXYOPY,
passed through (again reporting only every tenth generation):
Y YVMQKSPFTXWSHLIKEFV HQYSPY
YETHINKSPITXISHLIKEFA WQYSEY
METHINKS IT ISSLIKE A WEFSEY
METHINKS IT ISBLIKE A WEASES
METHINKS IT ISJLIKE A WEASEO
METHINKS IT IS LIKE A WEASEP
and reached the target phrase in generation 64. In a third run the
computer started with:
GEWRGZRPBCTPGQMCKHFDBGW ZCCF
and reached METHINKS IT IS LIKE A WEASEL in 41 generations
of selective 'breeding'.
(Dawkins R., "The Blind Watchmaker," [1986], Penguin: London,
1991, pp47-48)
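For readers who want to try it, Dawkins' cumulative-selection procedure
is easy to reproduce. The Python sketch below is my own reconstruction,
not Dawkins' code; the mutation rate and brood size are guesses, since he
does not publish them, and I keep the parent in the pool so the best
score never slips backwards:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(phrase):
    # Count positions that already match the target phrase.
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(rate=0.05, brood=100, seed=1):
    rng = random.Random(seed)
    # Start from a random 28-character phrase, as Dawkins does.
    phrase = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while phrase != TARGET:
        generations += 1
        # 'Breed' mutant copies: each letter has a small chance of mutating.
        progeny = [
            "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                    for c in phrase)
            for _ in range(brood)
        ]
        # Select the candidate closest to the target (parent included).
        phrase = max(progeny + [phrase], key=score)
    return generations

print(weasel())  # number of generations needed for this run
```

The convergence in a few dozen generations depends entirely on the
program comparing every candidate against a pre-specified target, which
is the very point at issue.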
Now some of the above `random mutations' may have caused the
string to contain more "information" in an Information Theory sense,
but they had less "information" in a biological meaning sense.
Yockey himself says that the word "information" has a variety of
meanings:
"The word "information" itself may mean "the communication or
reception of knowledge or intelligence" or "intelligence, news, facts
or data," etc., as well as several other meanings." (Yockey H.P., "An
Application of Information Theory to the Central Dogma and the
Sequence Hypothesis", Journal of Theoretical Biology, Vol. 46,
1974, p370)
and that how "information" is used in Information Theory is a special,
"mathematical function":
"...we will keep in mind that the word information is the name of a
mathematical function. We must not allow ourselves to ascribe any
other meaning associated with the word information as used
in ordinary language." (Yockey, 1974, p375).
Indeed, Yockey says that "information" in the sense it is used in
biology, ie."the assignment of meaning" or "specificity to certain
particular member of the ensemble", "lies outside information
theory":
"Some have attempted to develop a theory of meaning from these
ideas. Shannon (1949) warned against this at the outset of his paper.
The assignment of meaning or in biology, specificity, to certain
particular member of the ensemble lies outside information theory."
(Yockey, 1974, pp371-372).
Horgan echoes this, observing that "Efforts to apply information
theory to other fields" including "biology" "have generally failed - in
large part because the theory cannot address the issue of meaning":
"Information theory. Created by Claude E. Shannon in 1948, the
theory provided a way to quantify the information content in a
message. The hypothesis still serves as the theoretical foundation for
information coding, compression, encryption and other aspects of
information processing. Efforts to apply information theory to other
fields, ranging from physics and biology to psychology and even the
arts, have generally failed - in large part because the theory cannot
address the issue of meaning." (Horgan J., "From Complexity to
Perplexity", Scientific American, Vol. 272 No. 6, June 1995, p78-79)
GM>To quote Yockey,
>
>"if all the n probabilities are equal, p(i)=1/n, then H must be a
>monotonic increasing function of n"
>
>Yockey Theorem 2.1 Information Theory and Molecular Biology p.
62
>
>do you see what I am talking about Stephen? It is quite technical.
It seems "technical." Now how about translating it into layman's
terms so I can evaluate your claims?
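For what it is worth, the theorem as you quote it is easy enough to check
numerically: when all n probabilities equal 1/n, the sum collapses to
H = K log(n), which plainly increases with n. A short sketch (taking K = 1
and base-2 logs, which are my assumptions):

```python
import math

def H_uniform(n, K=1.0):
    # With p(i) = 1/n for every i, H = -K * sum(p * log2(p)) = K * log2(n).
    return -K * n * (1 / n) * math.log2(1 / n)

for n in (2, 4, 8, 16):
    print(n, H_uniform(n))  # 1.0, 2.0, 3.0, 4.0 bits: monotonic in n
```

But showing that H grows with alphabet size says nothing yet about
whether a *mutation* adds biologically meaningful information, which is
the claim in dispute.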
>>GM> and by that definition, there are lots of examples of increase
>>in information by mutation.
On the information you have supplied to date, you have not
substantiated your claim above.
>SJ>First of all, I would like to state up-front that I do not rule out
>>that some increases in information could occur by random
>>mutation and natural selection. If it could be shown that some
>>limited increases in genetic information could occur, that may still
>>only be a mechanism for microevolution.
>>
>>However, Spetner, whose qualifications are in information and
communication theory:
>>
>>"...I spent most of my professional career doing research and
>>development on information processing in electronic systems, and
>>teaching information and communication theory..." (Spetner L.M.,
"Not by Chance!", 1997 revised, p.iv)
>>
>>says:
>>
>>"Information theory, which was introduced as a discipline about
>>half a century ago by Claude Shannon, has thrown new light on
>>this problem. It turns out that random variation cannot lead to
>>large evolutionary changes. The information required for large-
>>scale evolution cannot come from random variations. There are
>>cases in which random mutations do lead to evolution on a small
>>scale. But it turns out that, in these instances, no information is
>>added to the organism. Most often, information is lost. A process
>>that adds no heritable information into the organism cannot lead to
>>the grand evolutionary advances envisioned by the neo-
Darwinians." (Spetner, >1997 p.vii)
GM>Does he show that mathematically?
Spetner says his book is "for the layman" (Spetner, 1997, p.x), so
while there is some mathematics, it does not, as far as I can see,
discuss the formula you cite above. But given his qualifications,
there is no doubt (in my mind at least) that he could "show that
mathematically".
GM>Yockey seems to accept evolution and Yockey is also an
>expert in information theory.
We are not discussing a vague term like "evolution", which may only
mean "a change in the genetic characteristics of a population over
time" or "the common descent of living organisms from shared
ancestors" ("The Talk.Origins Archive: Frequently Asked Questions
about creationism and evolution."
http://www.talkorigins.org/origins/faqs-qa.html). We are discussing
Richard Dawkins' version of "evolution", namely *Neo-Darwinism*
(ie. the claim that the information in living things was built up
micro-mutation by micro-mutation). According to Johnson, Yockey
believes "the neo-Darwinian theory is inadequate":
"Position C: Organisms contain irreducible information, meaning
information not explainable in terms of physical laws and/or chance.
Hence the neo-Darwinian theory is inadequate (except at the micro
level) in a way that probably cannot be fixed. This is the position of
Wilder-Smith, Michael Behe's forthcoming book, and those who
endorse "intelligent design." Support for it comes also from the
writings of antireductionists like Polanyi and Yockey (who do not
explore or welcome the theistic implications)." (Johnson P.E.,
"Reason in the Balance," 1995, p213-214)
>SJ>and
>>
>>"You can easily add symbols to a message and not add
>>information: just add random symbols. Then you won't be adding
>>information - you'll be adding only nonsense. Similarly, if you add
>>random nucleotides to the genome you add no information.
>>Symbols without meaning carry no information." (Spetner, 1997
p83)
>>GM>If you start with a sequence of 11111111111 or in DNA
>>>AAAAAAAAAAA and then mutate it so that it is 11111211111
>>>or in DNA AAAAATAAAAA you have actually increased the
>>>information.
>SJ>I have a son, Brad, who is studying Information Technology
>>Engineering at university and he is currently doing Information
>>Theory. I showed him your message and he said what you are
>>saying here is "wrong" (his word). He gave me a very detailed
>>explanation why (which I did not fully understand), and even if I
>>did it sounded too complex for me to repeat!
GM>Brad better wait til he gets his degree.
Brad at least is doing a "degree" with units in Information Theory.
You are a *geophysicist* with, I presume, *no* formal training
whatsoever in Information Theory. I would therefore expect that
Brad's knowledge of Information Theory would surpass yours.
BTW, you just deleted my WinZip evidence without ellipses. Did
you make a simple mistake and overlook it, or is it yet another
example of your "My Utmost for His Highest" double-standard,
which you apply to others but not to yourself? I will assume the
former and re-post it for you to answer:
----------------------------------------------------------------
Brad also said that, according to Information Theory,
AAAAAAAAAAA and AAAAATAAAAA have the *same*
information, namely zero. He proposed a test of writing 640 blocks of
AAAAAAAAAAA's and AAAAATAAAAA's each to a text file and
compressed each file with WinZip. Both files compressed nearly
100%, which shows they both have effectively zero information. To
be sure, the AAAAAAAAAAA block compressed slightly more (99%
compared to 100%), but this was because of the way WinZip works.
We had previously compressed much smaller blocks and they were
99% for AAAAAAAAAAA and 98% for AAAAATAAAAA. The
actual files reduced in size from 8,538 bytes each down to 153 bytes
for the AAAAAAAAAAA file and 157 bytes for the
AAAAATAAAAA file. This suggests that it is WinZip's algorithm
which makes the difference. If the blocks were longer, say 6400
blocks of each, the effect of WinZip's algorithm would be expected to
decrease till both files were compressed 100%, which is zero
information content.
----------------------------------------------------------------
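Brad's WinZip test can be reproduced with any Lempel-Ziv style
compressor. Here is a rough equivalent using Python's zlib module (my
substitution; the exact byte counts will differ from WinZip's, but the
point that both files shrink to almost nothing survives):

```python
import zlib

block_a = b"AAAAAAAAAAA" * 640  # 640 copies of the uniform block
block_t = b"AAAAATAAAAA" * 640  # 640 copies of the 'mutated' block

for label, data in [("AAAAAAAAAAA", block_a), ("AAAAATAAAAA", block_t)]:
    packed = zlib.compress(data, 9)  # level 9 = maximum compression
    saved = 100.0 * (1 - len(packed) / len(data))
    print(f"{label}: {len(data)} -> {len(packed)} bytes "
          f"({saved:.1f}% compression)")
```

Both 7,040-byte files compress to a few dozen bytes, since a repeating
pattern, mutated or not, is almost perfectly predictable.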
GM>The equation I presented above shows why there is no
>information in a sequence like AAAAAAAAAA. There are no
>choices of characters so the probability of getting A is 100% or 1.
>
>Thus when you put that into the above equation
>
>H=-K (1) log(1).
>
>Well the log of 1 is ZERO, 0. This means that -K times 1 times
>zero is ZERO. NO information.
>
>In the case of AAAAATAAAA the probabilities of the characters
are:
>
>P(A)=.9
>P(T)=.1
>
>Thus
>
>H=-K (.9log(.9)+.1log(.1))= -K(-.041-.1)=.14K.
Brad points out that he gets a different answer. Perhaps you have
used the wrong "log" above, because Yockey, discussing this
formula, says to use base-two logs:
"If we take the logarithm to the base 2, as is usually done..." (Yockey
H.P., "An Application of Information Theory to the Central Dogma
and the Sequence Hypothesis", Journal of Theoretical Biology, Vol.
46, 1974, p372).
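The two calculations can be checked directly. A Python sketch of the
formula (with K = 1; note that the same code with base-10 logs
reproduces your 0.14 figure, which suggests the disagreement is at
least partly a choice-of-base issue):

```python
import math
from collections import Counter

def entropy(seq, base=2, K=1.0):
    # H = -K * sum over symbols of p(i) * log(p(i)),
    # with p(i) taken from the character frequencies of seq.
    n = len(seq)
    return -K * sum((count / n) * math.log(count / n, base)
                    for count in Counter(seq).values())

print(entropy("AAAAAAAAAA") == 0)                # True: one symbol, no uncertainty
print(round(entropy("AAAAATAAAA"), 3))           # 0.469 bits/symbol (base 2)
print(round(entropy("AAAAATAAAA", base=10), 3))  # 0.141 (base 10 - your 0.14K)
```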
GM>As long as K is not zero, which it isn't then the mutation Brad
>says doesn't increase information does exactly that. There is more
>information in the later case than in the former.
>This should be enough for now.
Brad says you are still wrong. He says you misunderstand the formula
and its application. See Brad's reply in this batch of messages, under
my name.
>SJ>How about quoting the relevant paragraphs from Yockey to
>>support your claims?
GM>I did.
No, you didn't. Your claim was:
----------------------------------------------------------------
On Tue, 09 Jun 1998 19:56:04 -0500, Glenn R. Morton wrote:
...I would like to note that information has a very technical definition
and by that definition, there are lots of examples of increase in
information by mutation.
----------------------------------------------------------------
but all you have given is *one* formula:
GM>H=-K sum(i) P(i)log(p(i))
with the cryptic explanation:
GM>where P(i) are the probabilities of each character in the
>information carrying set.
You have substituted some numbers into the formula (and apparently
got the answer wrong), but you have not yet shown that the formula
is relevant to what you are claiming. My son points out that you
could plug numbers into E=mc^2, but it wouldn't mean anything
unless it was the *right* formula.
Steve
"Evolution is the greatest engine of atheism ever invented."
--- Dr. William Provine, Professor of History and Biology, Cornell University.
http://fp.bio.utk.edu/darwin/1998/slides_view/Slide_7.html
--------------------------------------------------------------------
Stephen E (Steve) Jones ,--_|\ sejones@ibm.net
3 Hawker Avenue / Oz \ Steve.Jones@health.wa.gov.au
Warwick 6024 ->*_,--\_/ Phone +61 8 9448 7439
Perth, West Australia v "Test everything." (1Thess 5:21)
--------------------------------------------------------------------