RE: Emergence of information out of nothing?

From: Peter Ruest (pruest@pop.mysunrise.ch)
Date: Sat May 04 2002 - 10:46:30 EDT


    Hi Glenn, Howard, Tim,

    The three of you have commented on the same topic in my post of Wed, 01
    May 2002 19:16:51 +0200. So let me deal with this subject first. In a
    separate post, I shall answer Howard regarding his other points.

    So, first about the question of what "information" is: you are
    completely right about this point. You may have missed the remark in my
    5th paragraph, where I stated basically the same thing: "So far, this
    doesn't even take into consideration the fundamental distinction between
    the maximum information carrying capacity, which Hogan talks about, and
    the (smaller) functional information relevant for biological systems."
    I call this a fundamental distinction between the following two
    concepts:
    (I) Maximum information carrying capacity;
    (II) Functional information relevant for biological systems.

    These are Tim's "apples and oranges". Information (I) is an _extensive
    measure_, increasing with the number of possible states or elements, but
    information (II), like a text, is an _intensive measure_, independent of
    the number of copies available. So, we are not even comparing apples and
    oranges, but an orchard and apples. If we want to compare possibility
    (I) and specification (II), we need first of all to specify the size of
    (I), the universe of discourse. Examples are: the entire universe at
    inflation, a 20 GB hard disk, and an octapeptide (the three happen to have
    about the same information carrying capacity). We still don't know
    whether there is even a single apple in the orchard. Nevertheless, if I
    want a ton of apples, I need an orchard of a certain minimal size, plus
    certain additional conditions.
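    The extensive/intensive contrast can be sketched with a toy experiment
    (my illustration, not part of the original argument, using compressed
    size as a rough proxy for the intensive content of a text):

```python
import zlib

text = b"in the beginning was the word " * 10
one = text          # one copy of the text
two = text * 2      # two copies: twice the symbols

# Carrying capacity (I) is extensive: it doubles with the number of copies.
print(len(one) * 8, "->", len(two) * 8, "bits of capacity")

# Compressed size (a crude proxy for content, concept II) grows only slightly:
print(len(zlib.compress(one, 9)), "->", len(zlib.compress(two, 9)), "bytes")
```

    The second copy adds carrying capacity but almost no new content, which
    is why a text behaves as an intensive measure.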

    For at least 20 years, I have drawn this distinction between (I) and
    (II) clearly, in both talks and articles. Glenn, thank you for your fine
    summary of Shannon information, which I knew in principle, but could not
    have formulated as well as you did. It corresponds to (I), and to what
    C.J. Hogan discussed in the paper I cited. I agree that, in itself, this
    has nothing to do with semantics, meaning or function denoted by concept
    (II). But it denotes an absolute upper limit on the amount of semantic
    information that can be transmitted or stored in a given system. And
    this is one of the points I wanted to make in my post.
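    Concept (I) can be made concrete in a few lines, assuming only that the
    carrying capacity of a system is the base-2 logarithm of its number of
    possible states:

```python
import math

def max_capacity_bits(n_symbols: int, alphabet_size: int) -> float:
    """Maximum information carrying capacity (concept I): log2 of the
    number of possible distinct sequences, regardless of meaning."""
    return n_symbols * math.log2(alphabet_size)

# A DNA stretch of 1000 bases, 4 possible bases per position:
print(max_capacity_bits(1000, 4))          # 2000.0 bits

# An octapeptide, 8 residues with 20 amino acids per position:
print(round(max_capacity_bits(8, 20), 1))  # 34.6 bits
```

    Nothing in this calculation says whether any of those sequences mean
    anything; that is exactly the distinction between (I) and (II).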

    The semantic, meaningful, or functional information (II) is extremely
    difficult to define properly for natural systems, as Howard correctly
    points out.
    -- The term "semantic" indicates that it is coded in DNA, in analogy to
    a language.
    -- The term "functional" indicates that it provides a specification for
    a function, or what a biological macromolecule, complex, or other system
    part will _DO_, as Howard emphasizes.
    -- The term "meaningful" indicates a teleological view which designates
    the effect of this function in the context of the whole organism.

    While it is easy to compute the amount of "information" (I), as Glenn
    has shown, different factors make it difficult to estimate an amount of
    "information" (II).
    (1) Synonymy: different molecular structures or molecules may have the
    same effect, such that it doesn't matter which one is used.
    (2) Redundancy: different operational pathways may salvage a system in
    case one of them is damaged.
    (3) Ecology: depending on the current environment, a given function may
    or may not be needed, or may have different selective values.
    (4) Population dynamics: population size and time may determine the
    survival of a given feature.
    (5) Microevolutionary accessibility: different sequence configurations
    may be more or less easily reached by a mutational random walk.
    (6) Robustness: depending on its location in sequence space, a
    macromolecule may be more or less apt to survive during evolution.
    These are the factors that come to mind at present; I'm sure biologists
    will be able to point out others.
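    Factor (5) can be illustrated with a toy simulation (a single-peak
    landscape of my own construction, not drawn from the literature): a
    mutational random walk reaches a target sequence quickly when selection
    guides it, and essentially never when it wanders blindly through the
    roughly 2.6 x 10^10 possible octapeptides.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def hamming(a, b):
    """Number of positions at which two sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def mutational_walk(start, target, select=True, max_steps=100000, seed=1):
    """Mutate one random position per step. With select=True, any
    mutation that increases the distance to the target is rejected
    (a crude stand-in for selection on a single-peak landscape)."""
    rng = random.Random(seed)
    seq, tgt = list(start), list(target)
    for step in range(max_steps):
        if seq == tgt:
            return step                 # target reached after this many steps
        i = rng.randrange(len(seq))
        old, before = seq[i], hamming(seq, tgt)
        seq[i] = rng.choice(AMINO_ACIDS)
        if select and hamming(seq, tgt) > before:
            seq[i] = old                # reject the deleterious change
    return None                         # target never reached

# A selective walk finds an 8-residue target; a blind one does not:
print(mutational_walk("AAAAAAAA", "WYWYWYWY", select=True))
print(mutational_walk("AAAAAAAA", "WYWYWYWY", select=False, max_steps=1000))
```

    With selection, the greedy walk typically needs only a few hundred
    steps for an 8-residue target; without it, a thousand steps sample only
    a vanishing fraction of sequence space.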

    Now, is this information (II) perhaps equal to zero, such that it can be
    neglected entirely? Then a virtual infinity of viable evolutionary paths
    would be possible, and it would be certain that life evolved wherever
    the conditions are not extremely inimical. In this case, E.L. Shock,
    whom I also quoted, probably wouldn't consider it to be "a major
    challenge" to find feasible ways leading from the ubiquitous small
    organic "building-blocks of life" to living systems. This is the second
    point I wanted to make in my last post.

    We don't know yet. But the fact that it is not easy to construct
    artificial proteins of a specified function without leaning heavily on
    preexisting natural ones seems to indicate that sequence space is not
    that full of useful functions. And H.P. Yockey, in his "Information
    theory and molecular biology" (Cambridge University Press, 1992, ISBN
    0-521-35005-0), for one, apparently thinks information (II) is very
    significantly larger than zero (Glenn, never mind that, in a later
    edition, he modified his figures; they are still highly significant). On
    this point, Howard and I disagree: he is considerably more optimistic.

    Regards,

    Peter

    -- 
    Dr. Peter Ruest, CH-3148 Lanzenhaeusern, Switzerland
    <pruest@dplanet.ch> - Biochemistry - Creation and evolution
    "..the work which God created to evolve it" (Genesis 2:3)
    



    This archive was generated by hypermail 2b29 : Sat May 04 2002 - 13:00:24 EDT