>Brian
>
>This is not a private message. I am sending it to you privately to cut down
>on my Reflector traffic. Feel free to reply via the Reflector. Apologies for its
>lateness.
>
>Steve
>
Please be sure to read the above. The message was sent to me
rather than to the reflector; however, as one will see below, there
are sections clearly meant for the reflector in general. Thus I
decided to reply here. This created a bit of a problem: I felt it
important not to delete anything Steve wrote, since no one else
received the original, but I cannot add much to the message without
exceeding the capacity of my editor. So I will reply to about the
first 2/3 of Steve's message here; the last part I may or may not
reply to.
SJ:===
>
>On Sun, 01 Dec 1996 20:32:15 -0500, Brian D. Harper wrote:
>
>[...]
>
>>SJ>Sorry, but one thing that "random mutation" cannot do is to "create
>>new information":
>
>>BH>How would one define "information" in such a way that a random
>>process would not result in an increase in information? The only
>>objective definitions of information that I know of are those found
>>in information theory. These information measures are maximal
>>for random processes.
>
>SJ>I am not by any stretch of the imagination an expert in
>>"information theory", so I am unable to "define `information'" in
>>such terms. I rely here on the expertise of others:
>
>BH>Steve, is it possible to respond a little more quickly? It's been
>>almost a month. It took me a while to recapture my train of thought
>>on this business.
>
SJ:==
>Sorry to Brian and others, but I had already warned of this in
>advance. I have a new job and have less time to respond, so my
>replies will be in a batch, several weeks behind. If people think
>this is too late, then they should feel free to ignore my posts.
>This will then be self-correcting - fewer replies and I will catch
>up! :-) But if it looks like I am getting further and further
>behind, I will unsubscribe for a while until I catch up.
>
I don't mean this to be negative; my suggestion is that you just
let anything more than a week or two old slide and get caught up.
You'll be much more effective that way.
>BH>There were a couple of reasons for my challenge above. One was to
>>see if you had any understanding of the quotes you were giving
>>out. The other was a genuine curiosity about the answer to the
>>question I posed. As you are no doubt already aware, I'm not
>>particularly a fan of neo-Darwinism and if there is an information
>>theoretic argument against it then I'm certainly interested in
>>knowing about it. But hand waving and word games such as those
>>provided by W-S won't do.
>
SJ:====
>I was responding to Brian's specific request that I define
>"information" in terms of "information theory":
>
You misunderstood my request. You are free to define
information any way you wish [except, of course, something
like "that quantity which does not increase due to a random
mutation" ]. I merely mentioned that the only objective definitions
I know about come from information theory (classical or algorithmic).
SJ==
>-------------------------------------------------------
>>On Thu, 31 Oct 1996 15:23:55 -0500, Brian D. Harper wrote:
>>
>>[...]
>>
>>BH>How would one define "information" in such a way that a random
>>process would not result in an increase in information? The only
>>objective definitions of information that I know of are those found
>>in information theory. These information measures are maximal
>>for random processes.
>-------------------------------------------------------
>
>My point was not that I cannot define "information" but that I
>cannot define it "in `information theory'...terms". I understand
>what "information" is as described by scientific writers in books
>addressed to laymen like myself, ie. as "specified complexity":
>
One problem is that "information" can mean all sorts of different
things in books written for laymen. It's sometimes very confusing to
figure out just what a particular author means. But I
think "specified complexity" corresponds fairly well to the
meaning of "information" in algorithmic information theory.
The AIT measure of complexity was proposed independently
by Kolmogorov, Chaitin and Solomonoff, but Chaitin has done
more than anyone else to develop the theory.
He has a web page containing almost all his papers (either
in postscript or HTML) including some written for laymen
that appeared in Scientific American and New Scientist.
http://www.research.ibm.com/people/c/chaitin
In addition to the papers mentioned above you may also want to
look at a transcript from a recent lecture entitled "An invitation to
algorithmic information theory" at:
http://www.research.ibm.com/people/c/chaitin/inv.html
After saying above that I thought "specified complexity" corresponds
fairly well to the meaning of "information" in algorithmic information
theory, I discovered in reading the B&T quote below that this is
not the case at all, indicating again the confusion that can occur
over terms. I'll comment further below:
SJ quoting Bradley, Thaxton:====
>"Information in this context means the precise determination, or
>specification, of a sequence of letters. We said above that a
>message represents `specified complexity.' We are now able to
>understand what specified means. The more highly specified a thing
>is, the fewer choices there are about fulfilling each instruction.
>In a random situation, options are unlimited and each option is
>equally probable.
Oops.
continuing B&T quote:============
>In generating a list of random letters, for
>instance, there are no constraints on the choice of letters at each
>step. The letters are unspecified. On the other hand, an ordered
>structure like our book full of `I love you' is highly specified but
>redundant and not complex, though each letter is specified. It has a
>low information content, as noted before, because the instructions
>needed to specify it are few. Ordered structures and random
>structures are similar in that both have a low information content.
>They differ in that ordered structures are highly specified and
>random structures are unspecified. A complex structure like a poem
>is likewise highly specified. It differs from an ordered structure,
>however, in that it not only is highly specified but also has a high
>information content. Writing a poem requires new instructions to
>specify each letter. To sum up, information theory has given us
>tools to distinguish between the two kinds of order we distinguished
>at the beginning. Lack of order - randomness - is neither specified
>nor high in information. The first kind of order is the kind found
>in a snowflake. Using the terms of information theory, a snowflake
>is specified but has a low information content. Its order arises
>from a single structure repeated over and over. It is like the book
>filled with `I love you.' The second kind of order, the kind found
>in the faces on Mount Rushmore, is both specified and high in
>information. Molecules characterized by specified complexity make up
>living things. These molecules are, most notably, DNA and protein.
>By contrast, nonliving natural things fall into one of two
>categories. They are either unspecified and random (lumps of granite
>and mixtures of random nucleotides) or specified but simple
>(snowflakes and crystals). A crystal fails to qualify as living
>because it lacks complexity. A chain of random nucleotides fails to
>qualify because it lacks specificity. No nonliving things
>(except DNA and protein in living things, human artifacts and written
>language) have specified complexity." (Bradley W.L. & Thaxton C.B.,
>"Information & the Origin of Life", in Moreland J.P. ed., "The
>Creation Hypothesis", InterVarsity Press: Downers Grove Ill., 1994,
>pp207-208)
>
I can agree with quite a bit of what is written above. What B&T
are calling "specified complexity" I would call "organized complexity".
I would disagree with trying to relate this directly to any objective
measure of information content. I also don't particularly like
their definition of a random situation.
Briefly, the algorithmic info content (or Kolmogorov complexity)
can be thought of roughly in terms of "descriptive length". The
longer the description of an object, the greater its complexity.
Of course, one is talking here of the length of the shortest
description, so B&T's "I love you" book above could be described as:
"I love you"; repeat 4000 times. The descriptive length is small, so
the complexity of this message is small.
The reason I thought at first that "specified complexity" corresponded
roughly to Kolmogorov complexity is that I was thinking in terms of
how long it takes to specify an object.
Now, let me illustrate why the descriptive complexity (algorithmic
information content) is generally expected to increase due to a
random mutation. First we consider the following message written
in our alphabet with 26 letters:
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA ...............
Now we introduce a random mutation anywhere, say:
AAAAAAAAAAAAAXAAAAAAAAAAAAAAAAAAAAAAAAAA................
The first sequence has a small descriptive length:
    "repeat A"
the second has a much longer descriptive length:
    "13 A's, then an X, then repeat A"
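As a quick check on this, here is a small sketch (my own illustration,
not from B&T or Chaitin; the 10000-character string and the use of
Python's zlib compressor are just choices for the example). A
general-purpose compressor gives a crude, computable stand-in for
descriptive length, so we can compare compressed sizes before and
after the mutation:

    # Rough illustration: compressed size (zlib's "deflate", the same
    # algorithm behind gzip/pkzip) as a stand-in for descriptive length.
    import random
    import zlib

    def rough_descriptive_length(s):
        # Bytes needed by zlib at maximum compression: an upper bound on
        # (an approximation of) the string's algorithmic information.
        return len(zlib.compress(s.encode("ascii"), 9))

    original = "A" * 10000                     # the all-A message
    pos = random.randrange(len(original))      # one random mutation...
    mutant = original[:pos] + "X" + original[pos + 1:]

    print(rough_descriptive_length(original))  # small
    print(rough_descriptive_length(mutant))    # a few bytes larger

The exact numbers don't matter; the point is that the mutated string
needs a longer description, and further random mutations push it
further in the same direction.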
>BH>I've done a little thinking on this since my initial question to
>>you and I've come up with a suggestion that might be worth
>>pursuing. Note that this is just an intuitive guess on my
>>part (a non-expert) but it seems to me that some measure
>>of mutual information content probably will satisfy the condition
>>of not increasing due to a random mutation (the classical Shannon
>>information would increase). I'm also suspecting that mutual
>>information may also capture the basic ideas of irreducible
>>complexity with the added bonus of being objectively measurable. I
>>kind of like this idea, I wonder if someone has already thought
>>of it. Probably :). Let's suppose that some clever person comes
>>up with a measure of mutual information content and can show
>>that this information cannot increase due to a random mutation.
>>This would be only a small step towards providing any threat to
>>neo-Darwinism. One would still have to show that this information
>>measure doesn't increase due to the combination random mutation
>>plus selection mechanism. Good luck. It seems to me that one is
>>searching for some general principle that might be called the
>>"conservation of information". Stated this way it almost makes
>>me giggle out loud. OK, I confess, I did giggle a little as I wrote
>>that :). But who knows, maybe there is such a principle at least
>>in some narrower sense, i.e. "information is always conserved
>>for certain types of physical processes X, Y and Z".
>>
>>OK, I've helped you out enough already ;-). When you discover
>>the CoI Principle and become famous, don't forget where you
>>got the idea ;-).
>
>Brian needn't worry. This is as clear as mud to me, so his
>secret is safe - with me at least! :-)
>
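Since the above was "clear as mud", let me sketch the kind of thing I
had in mind. This is only a toy illustration of my own (the four-letter
alphabet, the biased letter frequencies and the simple substitution
model are all just assumptions for the example): a random mutation
leaves the copy's own Shannon entropy the same or higher, but it can
only drag down the copy's mutual information with the original.

    # Toy sketch (assumptions mine): compare the Shannon entropy of a
    # mutated copy with its mutual information relative to the original.
    import math
    import random
    from collections import Counter

    ALPHABET = "ACGT"

    def entropy(seq):
        # Shannon entropy in bits per symbol, estimated from frequencies.
        n = len(seq)
        return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

    def mutual_information(x, y):
        # Empirical I(X;Y) in bits per symbol from the aligned pairs.
        n = len(x)
        px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
        return sum(c / n * math.log2((c / n) / (px[a] / n * py[b] / n))
                   for (a, b), c in pxy.items())

    random.seed(1)
    # A structured (non-uniform) original sequence, 10000 symbols long.
    original = random.choices(ALPHABET, weights=[7, 1, 1, 1], k=10000)

    for rate in (0.0, 0.05, 0.20, 0.50):
        mutated = [random.choice(ALPHABET) if random.random() < rate else s
                   for s in original]
        print("mutation rate %.2f:  H(copy) = %.3f  I(original;copy) = %.3f"
              % (rate, entropy(mutated), mutual_information(original, mutated)))

Run it and you should see the entropy column creep up toward 2
bits/symbol while the mutual information column falls. Whether
something like this can be turned into the "conservation" principle I
joked about above is another matter entirely.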
>>SJ>But if Glenn or Brian has an example in the scientific literature
>>of a "random mutation" that has "created new information", they
>>could post a reference to it.
>
You have an example above. You can find another example in the
pure chance thread.
>BH>As soon as you define what you mean by information in some
>>objective, measurable way ...
>
SJ:==========
>Brian seems to have forgotten, but it was *Glenn* who was the person
>claiming that "random mutation" could "create new information":
>
and you replied
'Sorry, but one thing that "random mutation" cannot
do is to "create new information" ' --SJ
You then tried to support this by giving some of W-S's ideas
which you now refuse to defend.
>-------------------------------------------------------
>>On Sun, 06 Oct 1996 14:44:29, Glenn Morton wrote:
>>
>>GM>Which is a better design technique, rational design or random
>>evolution? Creationists often cite the supposed inability of
>>random mutation to create new information and its inability to
>>perform better than a human designer.
>-------------------------------------------------------
>
>All I did was deny that "random mutation" can "create new
>information".
And all I did was ask you to provide some justification for
your denial. BTW, you did more than just deny this: you
also quoted W-S, thinking that it supported your denial.
Glad to see you're going to take it back. That's a start.
SJ:===
>Since it is impossible to prove a universal negative,
>I cannot prove my denial is true. But Glenn can prove me wrong by
>citing examples* where "random mutation" can indeed "create new
>information".
>
Been there, done that.
SJ:===
>*It is theoretically possible for a monkey at a typewriter to
>randomly type "John loves Mary.", thus randomly creating new
>information. But firstly, the probability of this is astronomically
>small: 1 in 54^16 (assuming upper and lower case, space and full stop),
>which is about 1 in 10^27.7 according to my calculations. Even the simpler
>"johnlovesmary" is 1 in 26^13, or 10^18.4. That's only one chance in more
>than one million trillion! A monkey typing at one letter per second
>(on a 26-key typewriter) would take one hundred billion years (on
>average) to type it, i.e. more than 5 times the age of the universe.
>And second, the monkey would not be aware that this *was* new
>information, so even if he typed it he wouldn't conserve it.
>
>I take Brian's (and Glenn's) failure to post "an example in the
>scientific literature of a `random mutation' that has `created new
>information'" as evidence that there is no such example.
>
Steve, do you realize how illogical this statement is? First, neither
Glenn nor I is an expert in this field. More importantly, there
is no causal connection between what we post here and what
is contained in the literature. Egad, man, think about what you're
saying.
In spite of this, an example has been given.
>BH>BTW, the following definition
>>won't do:
>>
>> information: that quantity which doesn't increase due
>> to random mutations.
>>
>>If by "information" you mean the measure in classical information
>>theory then your request is easily answered since every random
>>mutation to a system will increase its information content. For
>>example, a monkey typing randomly at a keyboard creates more
>>information than in a Shakespearean play containing the same number
>>of keystrokes.
>
>No, I have defined it as "specified complexity",
Err, Steve, you need to learn the difference between past and present.
You introduced "specified complexity" in this post, not the one
I was responding to above. That's why I asked you to define "information".
SJ:===
>the analogy of a
>grammatically correct English sentence, eg. "John loves Mary".
>Brian's contrast between "a monkey typing randomly at a keyboard"
>and "a Shakespearean play containing the same number of keystrokes"
>indicates that he and I share the same definition of what
>"information" means.
>
Not quite. I'm afraid you misunderstood my example of "a monkey
typing randomly at a keyboard". I was referring to a typical sequence
produced by a monkey rather than a highly improbable one, i.e.
podgoihafauiodhfennonvmsphodjgoiheiuqkrqpeoijnvbmvxvzs.sdkfo
aehyqtuqbenmsppgnlpfjkhgosihuiafnauifhquirqemgopkgpiojdiuaus
ygwqhwoiqandkjvbayusgquwqejo,,vlsdjkhfyuegwerjyioertksrioueryt
wriotiudnjbcnbhjduihgertopwemlkdnmfxnvolp;kpqaiuqsnklznxiknf
asiodhasuesfnblm;z;x,djsmgklehuiankflnkjnszkljdksjdnkazjksdbg
uiaoiehyuiqwehioerjkhsiofjsdhioasjoqwhpqkeppoejiohxcmoxcbnx
cbohsdognsouihxuiojiowhrpkfolxncohsiodjauiodioszhuiasouiqh
wuiasuifsfbiauiujkauiqwoiqohaisklzml;vmnpckpcvbjoidfguiwehriqa
nfkjabcjkabsdiuqwgbqjnsdnksjdgisdnoshfinjweopqhjrejgodhnashu
iqweowheinaosdnghoeoqihiabsfisudbvsdnfopiqwhwrojusdhfgpgjml
vlzsmvl;sjgosjefuiehokwngijndvuionovihifnkdnbvjklsdbiqawnioqab
ifqabifndohasnfoqaehgsffhgposajuioqgwyubajkbdfyuwfyuqweijrfoph
mjklsnajkbjklawyuqiotjroijkdvkldnfiouqwjproqjeoglnsdskldnoiehto
qengasdngolasdngoianioasnioaenfiouqwehuiweioejtrioueirofpognp
omg.,mznjvuitruqwhsnbdjkfzbkjvauifbaibfuiaygqwiejtpoerjkpodfgm
mbosdngwsenuifgeuifyuiytfueosuidhiofbidbabfiuasdiuwehqtwjros
dlvnsoidjgiuwrjqpwoejgas -- the MonkMeister
This sequence has a higher information content than a passage
from Shakespeare of the same length.
Oooh, I know you're going to like this ;-).
Seriously, though, if you want to understand better what is being
measured by Kolmogorov complexity and why it's a good
measure of information content, I just thought of another
good analogy that will let us easily calculate some rough
numbers for information content, namely compression algorithms
like pkzip or gzip.
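Here is a rough sketch of what I mean (again my own illustration;
Python's zlib implements the same "deflate" algorithm as gzip and
pkzip, and the English sample, an approximate snippet of Hamlet, plus
the 27-letter monkey model are just assumptions for the example):

    # Compare rough information content (compressed size) of three texts
    # of equal length: monkey-at-the-keyboard gibberish, B&T's repetitive
    # "I love you" book, and ordinary English.
    import random
    import string
    import zlib

    def compressed_size(s):
        return len(zlib.compress(s.encode("ascii"), 9))

    english = ("To be, or not to be, that is the question: whether 'tis "
               "nobler in the mind to suffer the slings and arrows of "
               "outrageous fortune, or to take arms against a sea of "
               "troubles and by opposing end them. To die, to sleep, no "
               "more; and by a sleep to say we end the heart-ache and the "
               "thousand natural shocks that flesh is heir to: 'tis a "
               "consummation devoutly to be wish'd. To die, to sleep; to "
               "sleep, perchance to dream, ay, there's the rub, for in "
               "that sleep of death what dreams may come when we have "
               "shuffled off this mortal coil must give us pause.")
    n = len(english)
    monkey = "".join(random.choice(string.ascii_lowercase + " ")
                     for _ in range(n))
    love_book = ("I love you. " * n)[:n]

    for name, text in (("monkey", monkey), ("English", english),
                       ("I love you book", love_book)):
        print(name, compressed_size(text), "bytes out of", n)

You should find that the monkey's gibberish compresses the least
(highest rough information content), the "I love you" book shrinks to
almost nothing (lowest), and real English falls in between; for samples
this short the monkey/English gap is modest, but it grows with length.
Note that none of this says anything about which text is meaningful.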
>[...]
>
>>SJ>"...NeoDarwinian theory does not enlighten us as to how a
>>primeval cell can energetically finance the production of new
>>information, so that it becomes a higher plant or a higher animal
>>cell....." (Wilder-Smith, A.E., "The Natural Sciences Know Nothing
>>of Evolution", T.W.F.T. Publishers: Costa Mesa CA, 1981, p.vi)
>
>>BH>egad man, I wish you wouldn't give quotes like this. They give me
>>brain cramps. How much does it cost to "energetically finance
>>the production of new information"? Would it be, say, 10 Joules
>>per bit or what?
>
>>SJ>I note Brian picks on the minor point in Wilder-Smith's argument,
>>and parodies his terminology (not his content). I wonder if he has
>>ever read W-S's full argument in his books? :-) W-S's major point
>>is "Transformism demands a very large increase in information, the
>>principle behind which Neo-Darwinian thought is incapable of
>>explaining."
>
>BH>Of course, it's really tough explaining a principle which is never
>>defined. Perhaps you could summarize this principle for us?
>>Please don't ask me to dig it out of W-S myself.
>
SJ:===
>I take this as Brian's way of admitting he has not read
>Wilder-Smith? I suggest he does read him, rather than rely on my
>meagre powers of summarisation. IMHO W-S was a genius. He refutes
>Kenyon's Biochemical Predestination, and I would not be at all
>surprised if Kenyon's conversion to creationism was due to reading
>W-S. Indeed, Kenyon himself says:
>
>"My own initiation into creationist scientific writing came in 1976
>with the geological sections of Whitcomb and Morris' The Genesis
>Flood, and somewhat later, A. E. Wilder-Smith 's The Creation of
>Life: A Cybernetic Approach to Evolution. It soon became apparent
>to me that the creationist challenge to evolutionism was indeed a
>formidable one, and I no longer believe that the arguments in
>Biochemical Predestination (Kenyon and Steinman, McGraw-Hill, 1969)
>and in similar books by other authors, add up to an adequate defense
>of the view that life arose spontaneously on this planet from
>nonliving matter." (Kenyon D.H., "Foreword", in Morris H.M. & Parker
>G.E., "What is Creation Science?", Master Books: El Cajon CA, 1987,
>p. ii).
>
>Can there be anything as amazing as this? A leading evolutionist
>Professor of Biology reads a refutation of his life's work in a
>creationist book and becomes a creationist? I would urge Brian and
>others to read Wilder-Smith's "The Creation of Life" for
>themselves.
>
>BH>The principle should put some meat on the above statements.
>>Given a particular transformation how do I go about calculating the
>>very large increase in information? You claim I'm picking a minor
>>point, however it seems to me that the energetic financing of
>>information production plays a key role in this "principle". Thus
>>you should be able to calculate this cost in energy. I had hoped
>>to show how ludicrous such a principle is by observing what the
>>units of such a measure would be [J/bit]. If you think about it a
>>bit I think you'll see that such an idea is reductionistic to the
>>point of absurdity. Would the information in DNA be reducible to
>>the energy contained in chemical bonds? Does this information have
>>anything at all to do with chemical energy? Would the information
>>in Mike Behe's book be reducible to the energy of adhesion between
>>ink and paper?
>
>[...]
>
SJ:===
>I could spend a lot of time answering this, but somehow, on Brian's
>track record with my posts, I don't think he really wants to know! :-)
Given my detailed responses to you, I find this statement truly
amazing. I generally spend a considerable amount of time on my
posts to the reflector, including my responses to you. The reason
it takes time is that I'm giving my own ideas rather than just
dumping a bag full of quotes. Let me suggest to you that if you
have no intention of defending the ideas in this quotation ad
absurdum campaign, then don't present them.
BTW, I'm getting sick of your policy of insulting someone however
you wish and then following it with a :-). I suspect I know why you
do it: so you can say whatever you want without taking any
responsibility for it.
Another BTW: I suspect what you really meant to say was that, given
my track record of shooting you down, you dare not actually say
anything.
SJ:===
>Creationists on this Reflector are always being urged to read
>the literature for themselves. If Brian really is interested in
>what Wilder-Smith means by "energetically finance the production of
>new information" he will read the latter's books for himself.
>
Translation: Steve doesn't know what W-S means either.
Brian Harper
Associate Professor
Applied Mechanics
Ohio State University