From: NSS (kirk@newscholars.com)
Date: Wed Nov 12 2003 - 14:19:17 EST
Some good questions have been raised. My short answer to Wayne is that my
method is constructed to yield no false positives. Thus it will fail to
identify innumerable instances of ID, but when it does indicate an
instance of ID, you can be sure ID is required. Especially problematic will
be instances of ID designed to 'look natural', as an election scam would try
hard to do. In the case you suggest, one instance would almost certainly go
undetected by my approach, although a particular component of the scheme
might yield a positive result. My priority is to avoid false
positives while, at the same time, identifying instances that would
*require* ID. I will define 'require' in a later post, in the course of my
forthcoming explanation.
With regard to Howard's concern, I think it would be incumbent on me to show
that his view does not represent the method I propose.
Walt's question is central to the discussion of computer simulations and the
generation of functional information.
The only way I can see myself proceeding in a way that would actually
accomplish something is to move ahead one point at a time. I ask Walt,
Howard, and Wayne to have patience as I try to see if we can establish some
generally agreed-to foundations. Eventually, all their questions will be
addressed.
It will help me if I can use HTML text in my emails. Will this be a problem
for anyone? I'll refrain from doing so until I'm given the go-ahead.
START:
To begin, the Shannon approach to quantifying information does not generally
distinguish between functional (or meaningful) information and
non-functional, or meaningless, information. Thus the term 'information' is
taken in its broadest possible sense under the Shannon approach. Jack
Szostak has raised this problem in his recent article ('Molecular messages',
*Nature* Vol. 423, (2003), p. 689.) He doesn't go into a great amount of
detail, but the bottom line is that functional information can be defined,
in units of bits, as:
I_f = -log2(N_f / N)     (1)
where N_f = the number of states/sequences/configurations that are functional,
and N = the total number of states/sequences/configurations possible for the
physical system under investigation.
'Functional' can be taken generally or specifically. Generally, 'functional'
means that the state/sequence/configuration has some
positive/meaningful/useful effect within a larger system. Within the large
category of 'functional' will be any number of specific functions. Within
genetics we are usually concerned with the specific function(s) of a given
protein or regulatory sequence.
Eqn. (1) also assumes that each possible state/sequence/configuration is
approximately as probable as any other.
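To make Eqn. (1) concrete, here is a short Python sketch that simply evaluates
it. The numbers in the example are made up purely for illustration; they are
not measurements of any real system.

import math

def functional_information(n_functional, n_total):
    # Eqn. (1): I_f = -log2(N_f / N), in bits.
    # Assumes each state/sequence/configuration is roughly equiprobable.
    if not (0 < n_functional <= n_total):
        raise ValueError("need 0 < N_f <= N")
    return -math.log2(n_functional / n_total)

# Made-up illustration: a 10-residue peptide over 20 amino acids,
# with a purely hypothetical count of 10^6 functional sequences.
n_total = 20 ** 10        # all possible sequences
n_functional = 10 ** 6    # hypothetical number of functional sequences
print(functional_information(n_functional, n_total))  # roughly 23.3 bits

The smaller the fraction of configurations that are functional, the more bits
of functional information are present, which is the point Eqn. (1) captures.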
Are we okay with using Eqn. (1) as my method to quantify functional
information? If not, what might your objection be? Are there any questions
about Eqn. (1) before I proceed? If someone wants the derivation, I can
provide that (email me off list and I will send you a properly formatted
derivation).
Since Eqn. (1) is essential for any further progress, I will stop here. I am
not a member of this list, so will not see your comments unless you email me
directly, or Denyse forwards your post to me. I have more emails/day than I
can handle already, so I cannot afford to receive the additional emails/day
that being a member of the ASA list would produce. Thus this compromise.
Cheers,
Kirk