Intelligent Agency by Proxy and TDI

From: Wesley R. Elsberry (welsberr@inia.cls.org)
Date: Sat Sep 30 2000 - 13:59:26 EDT

    Paul Nelson wrote:

    [...]

    FM> An interesting "riddle" was also given by Wesley with
    FM> his "algorithm room".
    FM> http://inia.cls.org/~welsberr/ae/dembski_wa.html

    PN>Right. Wesley and I have talked about this in
    PN>private correspondence. Ask Wesley if he knows
    PN>of any evolutionary algorithm whose causal
    PN>history (as lines of code) does not implicate at
    PN>least one intelligent agent.

    By Dembski's definition of the phrase "evolutionary algorithm",
    natural selection itself would be a candidate.

    Dembski offered the excuse that "the solution is infused by
    the intelligence that went into the program, compiler, OS,
    computer, etc." during his discussion period at the 1997 NTSE
    conference. It doesn't wash, as my argument in
    <http://inia.cls.org/~welsberr/zgists/wre/papers/antiec.html>
    shows.

    PN>Put another way, evolutionary algorithms are proxy agents.
    PN>If you pursue the causal story, you'll find the action of a
    PN>designer somewhere down the road.

    [...]

    I could develop the following at greater length as an article
    for "Origins and Design". How about it? Or do you have a
    recommendation on some other venue for eventual publication?
    Zygon? First Things? The Southern Journal of Philosophy?

    What Does "Intelligent Agency by Proxy" Do For the Design Inference?

    by Wesley R. Elsberry

    William A. Dembski wrote "The Design Inference" as his
    technical explication of the logic and methods of inferring
    that an event must be explained as being due to design. In
    other essays aimed at less technically inclined audiences (and
    the book, "Intelligent Design", which collects some of those
    essays), Dembski has also written about making design
    inferences. There are certain aspects of Dembski's popular
    writings which appear to be at odds with, or at least
    unsupported by, the technical explication of "The Design
    Inference".

    [Quote]

    Thus, to claim that laws, even radically new ones, can produce
    specified complexity is in my view to commit a category
    mistake. It is to attribute to laws something they are
    intrinsically incapable of delivering -- indeed, all our evidence
    points to intelligence as the sole source for specified
    complexity. Even so, in arguing that evolutionary algorithms
    cannot generate specified complexity and in noting that
    specified complexity is reliably correlated with intelligence,
    I have not refuted Darwinism or denied the capacity of
    evolutionary algorithms to solve interesting problems. In the
    case of Darwinism, what I have established is that the
    Darwinian mechanism cannot generate actual specified
    complexity. What I have not established is that living things
    exhibit actual specified complexity. That is a separate
    question.

    Does Davies's original problem of finding radically new laws
    to generate specified complexity thus turn into the slightly
    modified problem of finding radically new laws that
    generate apparent -- but not actual -- specified complexity in
    nature? If so, then the scientific community faces a logically
    prior question, namely, whether nature exhibits actual
    specified complexity. Only after we have confirmed that nature
    does not exhibit actual specified complexity can it be safe to
    dispense with design and focus all our attentions on natural
    laws and how they might explain the appearance of specified
    complexity in nature.

    [End Quote - WA Dembski,
    <http://inia.cls.org/~welsberr/ae/dembski_wa/19990913_explaining_csi.html>]

    In TDI, Dembski claims that we can examine the properties of
    an event and classify it as being due to "regularity",
    "chance", or "design". We need only the event itself and some
    side information by which a specification may be formed.
    Under Dembski's DI, what we do not need is information about
    the cause of the event. This is important to Dembski's
    argument because Dembski wants us to conclude "design" for an
    event and then infer "intelligent agency" in cases where we
    have no information about the "intelligent agent" which may
    have caused the event in question.
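
    To make this concrete, the procedure (Dembski's "Explanatory
    Filter") can be caricatured as a decision function that sees
    only the event's probability and whether it is specified. The
    following is a minimal sketch in Python; the function and
    threshold names are my own illustrative assumptions rather
    than Dembski's notation, though the 10^-150 figure is his
    universal small probability bound. Note that no argument
    anywhere carries the event's causal history.

        HIGH_PROB = 0.5           # assumed cutoff for "regularity"
        UNIVERSAL_BOUND = 1e-150  # Dembski's universal small probability bound

        def explanatory_filter(prob, is_specified):
            """Classify an event from its probability (under the operative
            chance hypothesis) and its specification -- and nothing else."""
            if prob >= HIGH_PROB:
                return "regularity"
            if prob > UNIVERSAL_BOUND or not is_specified:
                return "chance"
            return "design"       # small probability AND specified

    The structural point is that the verdict is a pure function of
    properties of the event, so two events with identical
    properties must receive identical verdicts.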

    In Dembski's examples of TDI, it is clear that known causal
    stories are treated differently. They are not submitted to
    his Explanatory Filter as possible "regularity" or "chance"
    hypotheses. That Caputo cheated is not treated as either
    "regularity" or "chance". Plagiary is not treated as either
    "regularity" or "chance". DNA identification is not treated
    as either "regularity" or "chance". Mendel falsifying data is
    not treated as either "regularity" or "chance". These causal
    stories instead are treated as the basis for "specifications"
    and utilized in classifying an event as "due to design".

    But in "Explaining Specified Complexity", Dembski does treat a
    known causal story as either "regularity" or "chance". The
    causal story in question is that an evolutionary algorithm
    yields a specified result in a small number of tries out of a
    large problem space. Here, Dembski tells us that the
    complexity of the result (found by reference to a "chance"
    hypothesis) is apparently large but actually zero, because
    the probability of the result *given its known cause* is
    1.

    As pointed out above, Dembski's TDI does not condone plugging
    in known causes as "regularity" or "chance" hypotheses. At
    best, one might plug in a hypothesized cause that is identical
    to an actual cause. After all, some things *are* due to
    regularity and chance. But let's consider what follows from
    this change in operation between TDI and "Explaining Specified
    Complexity".

    We have two events, each yielding a solution to a 100-city
    instance of the Travelling Salesman Problem. (I select this one
    as an example because it has well-known characteristics and I
    have been using it since 1997.) In one event, we know that a
    human agent has toiled long and hard to produce the solution.
    In the other case, a genetic algorithm was fed the city
    coordinate data and spit out the same solution some time
    later. We will now apply the Design Inference from TDI and
    the Design Inference as modified in "Explaining Specified
    Complexity".
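
    For readers who have not worked with genetic algorithms, here
    is a minimal sketch in Python of the second causal story. It
    is a generic sketch, not the particular program behind the
    example; the permutation encoding, operators, and parameter
    values are illustrative choices, and a short run is only
    expected to approach, not guarantee, the optimal tour.

        import random

        def tour_length(tour, cities):
            """Length of the closed loop visiting the cities in order."""
            return sum(((cities[a][0] - cities[b][0]) ** 2 +
                        (cities[a][1] - cities[b][1]) ** 2) ** 0.5
                       for a, b in zip(tour, tour[1:] + tour[:1]))

        def crossover(p1, p2):
            """Order crossover: keep a slice of one parent, fill in the
            remaining cities in the order they appear in the other."""
            i, j = sorted(random.sample(range(len(p1)), 2))
            kept = p1[i:j]
            rest = [c for c in p2 if c not in kept]
            return rest[:i] + kept + rest[i:]

        def mutate(tour, rate=0.2):
            """Occasionally reverse a random segment (a 2-opt style move)."""
            if random.random() < rate:
                i, j = sorted(random.sample(range(len(tour)), 2))
                tour[i:j] = reversed(tour[i:j])
            return tour

        def solve_tsp(cities, pop_size=200, generations=500):
            """Evolve permutations of city indices toward short tours."""
            pop = [random.sample(range(len(cities)), len(cities))
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda t: tour_length(t, cities))
                parents = pop[:pop_size // 2]      # truncation selection
                pop = parents + [mutate(crossover(random.choice(parents),
                                                  random.choice(parents)))
                                 for _ in range(pop_size - len(parents))]
            return min(pop, key=lambda t: tour_length(t, cities))

        # Feed it the city coordinate data; some time later, out comes a tour.
        cities = [(random.random(), random.random()) for _ in range(100)]
        best = solve_tsp(cities)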

    For TDI_TDI (the Design Inference as explicated in TDI), the
    known causal stories are irrelevant. Thus,
    both events are treated identically, which is to say that our
    speculations concerning how these events occurred may be the
    basis for specifications, but otherwise do not impinge upon
    our analysis. We eliminate "regularity", since these are not
    high probability events. We eliminate "chance", because these
    are not of merely intermediate probability. We conclude
    that the events are due to "design" because they are both
    "small probability" (and in fact meet Dembski's universal
    small probability bound) and are "specified" as the shortest
    closed loop path that visits each city once. Both events are
    classed as having "specified complexity".
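
    The "small probability" claim is checkable by direct
    arithmetic. Under a uniform chance hypothesis over distinct
    closed tours (an assumption on my part, but the natural null
    hypothesis here), a 100-city instance has (100-1)!/2 candidate
    tours, which puts the probability of hitting the specified
    shortest tour by chance far below Dembski's universal bound of
    1 in 10^150:

        import math

        # Distinct closed tours of n cities: fix the starting city and
        # halve for direction of travel, giving (n - 1)! / 2.
        n = 100
        tours = math.factorial(n - 1) // 2

        print(math.log10(tours))     # ~155.7, i.e. about 4.7 x 10^155 tours
        print(1.0 / tours < 1e-150)  # True: below the universal bound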

    This is not the case for TDI_ESC (the Design Inference as
    modified in "Explaining Specified Complexity"). Now, there is
    an asymmetry in how we treat the two events based upon our
    knowledge of the
    causal stories. For the solution given by the human, we again
    decline to utilize our knowledge of causation, and things
    proceed as for TDI_TDI, and we find the solution is due to
    "design". Not so for the solution produced by the GA. There
    are, in fact, two alternate ways in which this event may be
    processed, either of which keeps it out of the "due to design"
    bin.

    The one explicated in "Explaining Specified Complexity" goes
    like this. First, regularity is eliminated; the event is not
    of high probability. Second, we consider chance hypotheses
    and find our complexity estimate thereby. We submit as a
    chance hypothesis the known causal story: the result was
    obtained by operation of a genetic algorithm. Unsurprisingly,
    when we know that an event is due to a particular cause and
    we use that cause as a "chance" hypothesis, we find that the
    event is "due to chance". And because we base our complexity
    measure upon the relevant chance hypothesis, we find that
    the probability of the event given our "chance" hypothesis
    is high, and thus the complexity is very low indeed.
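
    In Dembski's measure, complexity is -log2 of the probability,
    so the swap of "chance" hypotheses moves the very same event
    from roughly 517 bits (well past his 500-bit cutoff) to zero
    bits. A sketch, reusing the tour count computed above:

        import math

        tours = math.factorial(99) // 2

        # Complexity under the uniform chance hypothesis.
        bits_uniform = math.log2(tours)      # ~517 bits

        # Complexity with the known cause plugged in as the "chance"
        # hypothesis: per Dembski's treatment, the GA reliably yields
        # the result, so the probability is taken as 1.
        bits_known_cause = -math.log2(1.0)   # 0 bits

    Same event, same specification; only the choice of what counts
    as the "chance" hypothesis has changed, and the complexity
    changes with it.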

    The second possible way to eliminate the event yielded by
    genetic algorithm is to treat the operation of the genetic
    algorithm as a regularity. In this case, we again use our
    knowledge that the event was caused by a genetic algorithm.
    We note that genetic algorithms are capable of solving
    problems of this apparent complexity, and class the solution
    as being due to the regularity of solution by genetic
    algorithm. Again, our classification is unsurprising: since
    we applied our known causal story to a decision node in the
    Explanatory Filter, we find that our known causal story
    explains the event.
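
    Both escape routes amount to the same structural move: TDI_ESC
    in effect gives the filter a causal-history argument that
    TDI's filter never had. A hypothetical rendering, extending
    the earlier sketch:

        def explanatory_filter_esc(prob, is_specified, known_cause=None):
            """TDI_ESC variant: when the cause is known to be an algorithm,
            the event is routed away from "design" -- as a regularity, or
            as "chance" by taking P(event | cause) to be about 1."""
            if known_cause == "algorithm":
                return "regularity"
            return explanatory_filter(prob, is_specified)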

    In either of the above ways of avoiding a successful
    design inference for the solution produced by the genetic
    algorithm, we apply knowledge of the cause of the event
    differently from when we know that the cause is an intelligent
    agent. In the case where an intelligent agent is known to
    act, we are told that the event represents "actual specified
    complexity". In the case where an algorithm is known to have
    produced the event, we are told that the event represents
    "apparent specified complexity". Note that "apparent
    specified complexity" is established only because we have
    knowledge of the causal process and use it differently from
    the analytic method given in TDI.

    To clarify why these cases indicate problems for making Design
    Inferences, consider an event where we are shown a solution to
    a 100-city TSP, but we are *not* given any information as
    to the causal story. We do not know whether an intelligent
    agent or some algorithm worked out this solution; we merely
    have the solution and our knowledge of the TSP problem in
    general. According to the procedures and logic given in TDI,
    we can make a reliable inference of "design" given just that
    information. And as indicated before, this event when
    analyzed according to TDI_TDI is classified as "due to
    design". We now have a problem: The event is "due to design",
    but it may not reliably mark the work of an intelligent agent
    in producing it. This is a challenge to the claim that TDI
    gives us a reliable method of inferring the action of
    intelligent agents. Because the same event could have either
    "apparent specified complexity" or "actual specified
    complexity", we find ourselves exactly where we were before
    having used TDI. The mere fact that an event has "specified
    complexity" does not enable us to reliably infer the action
    of an intelligent agent in producing it.

    One way of approaching this challenge is to repudiate the
    claim that there is any such split between "apparent specified
    complexity" and "actual specified complexity". This would
    preserve the concept of "specified complexity" as having some
    bearing upon marking the action of intelligent agency, rather
    than simply being a complicated piece of rhetoric whose
    content is solely a long-winded way of begging the question.
    Since the only effects of the "apparent" vs. "actual"
    specified complexity distinction are to cast doubt upon the
    logical framework and methods of the Design Inference,
    repudiating the distinction seems the clear way to proceed.
    But then there is still the
    problem that human and algorithm may produce identical events
    that are tagged as having "specified complexity".

    When "apparent" vs. "actual" specified complexity is
    repudiated, the residual problem may then be approached by
    claiming that whenever an algorithm is the cause of an event
    having the property of "specified complexity", we may
    infer that an intelligent agent designed and implemented the
    algorithm, and that the production of events by such
    algorithms is in each case to be considered "intelligent
    agency by proxy" (IABP). [This approach, including the
    repudiation of "apparent specified complexity", was taken by
    Paul Nelson in personal correspondence from October 1999.]
    Thus, whenever "design" is found, we are assured that an
    intelligent agent operated, either to produce the event
    proximally, or to produce the process by which the event
    occurred ultimately.

    There are further problems that ensue from use of IABP, but
    fortunately for the Design Inference these turn out to be
    relatively simple inconsistencies between some of Dembski's
    claims outside of TDI and those covered within TDI. In other
    words, retaining the "apparent" vs. "actual" specified
    complexity distinction introduced by Dembski logically
    invalidates the Design Inference (it is somewhat ironic for an
    author to vitiate his own work), while dumping it and adopting
    IABP yields a revised form of TDI which is still arguable.

    Now, I will consider what adoption of IABP implies for the
    Design Inference.

    First, IABP invalidates Dembski's claim in "Intelligent
    Design" that "functions, algorithms, and natural law" cannot
    produce specified complexity aka "complex specified
    information". Instead, functions, algorithms, and natural
    laws which are produced by intelligent agents and which act as
    proxies for those agents also have the ability to produce
    events with specified complexity.

    Second, IABP means that the method of the Design Inference
    cannot distinguish between direct proximal action of an
    intelligent agent in producing an event and indirect action
    via a proxy one or any number of steps removed. Once
    a process has been made by an intelligent agent as a proxy,
    it would henceforth be capable of yielding events with
    specified complexity.
    There is no basis in the Design Inference for distinguishing
    between two events, one produced directly by an intelligent
    agent, and an identical one produced by that agent's proxy.
    Consider the TSP example given above. A human can produce
    a genetic algorithm that solves TSP problems. The same
    human can work TSP problems even as his algorithm is employed
    doing the same thing. As long as each is working properly,
    they may both produce solutions (or equivalently close
    approximate solutions) to TSP problems. The Design Inference
    can only detect "specified complexity", and thus cannot tell
    us whether any particular TSP solution was produced by the
    human or by his algorithmic proxy.
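
    The indistinguishability claim is simply the observation that
    an analysis which sees only the event is a function of the
    event. A trivial sketch, reusing the filter and tour count
    from the sketches above (the stand-in events and names are
    hypothetical):

        human_event = tuple(range(100))   # the human's solution
        proxy_event = tuple(range(100))   # the GA's identical solution

        def design_inference(event):
            """Consults only properties determined by the event itself --
            its probability under the chance hypothesis and its
            specification -- never the event's causal history."""
            return explanatory_filter(1.0 / tours, is_specified=True)

        # Identical events, identical verdicts: "design" both times.
        assert design_inference(human_event) == design_inference(proxy_event)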

    Third, IABP undermines Dembski's position taken in
    "Intelligent Design" that attributing processes rather than
    contrivances to the intelligent agency of God is an error.
    One can examine a contrivance as an event via TDI, but the
    results are ambiguous with respect to whether the
    contrivance's specified complexity is due to God's direct
    intervention in producing the contrivance or due to God's
    indirect causation, one or any number of steps removed,
    through a function, algorithm, or natural law set up as a
    proxy process. Thus one cannot distinguish via TDI whether
    God acts directly or not for any particular contrivance
    examined.

    Fourth, IABP implies that the strongest theological claim that
    can be predicated upon the Design Inference is a version of
    Deism wherein the Deist God undertakes creating a complete set
    of proxy functions, algorithms, and natural laws which result
    in the universe and life as we know it. Specifically, the
    Design Inference is incapable of asserting a direct
    intervention of God in forming irreducibly complex biological
    systems. Displacing a hypothesized instance of the action of
    natural selection in adaptation is conceptually beyond the
    reach of the Design Inference or "specified complexity". At
    best, on the basis of the Design Inference alone under IABP,
    it can be claimed that the concept and implementation of
    natural selection are due to God, not that natural selection
    was not operative as a proxy for God.

    In conclusion, the principle of "intelligent agency by proxy"
    helps save the Design Inference from the logical collapse
    necessitated by adoption of the distinction between "apparent
    specified complexity" and "actual specified complexity", but
    imposes certain costs of its own. In particular, several of
    the auxiliary statements about the Design Inference made by
    William Dembski in his popular writings would have to be set
    aside. These include the claim that "functions, algorithms,
    and natural law" cannot produce events with specified
    complexity, and the claim that identification of specified
    complexity in biological systems implies that natural
    selection was not operative. IABP and the Design Inference
    can be used
    theologically as an argument for the existence of a God with
    Deist properties. Stronger arguments than that will need to
    be justified independently.

    Wesley

    <http://inia.cls.org/~welsberr/ae/dembski_wa/wre_id_proxy.txt>
    <http://inia.cls.org/~welsberr/ae/dembski_wa.html>


