RE: [asa] Bayesian inference and design inference

From: Alexanian, Moorad <alexanian@uncw.edu>
Date: Sat Sep 19 2009 - 07:26:39 EDT

 I wonder, is the “information source, a form of low entropy” alive?

Moorad
________________________________________
From: asa-owner@lists.calvin.edu On Behalf Of David Clounch [david.clounch@gmail.com]
Sent: Saturday, September 19, 2009 12:49 AM
To: Iain Strachan
Cc: asa; Christians_In_Science
Subject: Re: [asa] Bayesian inference and design inference

Iain,

I followed this up until this sentence:

"In every case we
know about, information implies an information giver"

I reject that statement. Information requires an information source, a
form of low entropy, not an information giver in the sense of an
intelligence. I concluded about five years ago, rightly or wrongly,
that design detection is one thing and intelligence is something else.
Intelligence involves prior knowledge, so I object to intelligence
being inferred. In some cases the prior knowledge is hypothetical,
which is what religious faith gives us; in other cases it is concrete,
as with the humans we can observe.

Regardless, starting with intelligence and inferring design is the
logical error of affirming the consequent. Dembski is trying to go the
other way, but he would be better off dumping the intelligence part.
At this point people say I am trying to dump God. Yes, I am. It's a
question of trying to retain objectivity.

So I think you have a good approach in general.

Regards,
Dave C

On Fri, Sep 18, 2009 at 3:12 PM, Iain Strachan <igd.strachan@gmail.com> wrote:
I wanted to outline my reason for not accepting the Intelligent Design
inference. It relates to Bayesian inference, which is at the heart of
modern probabilistic methods. Although that's a technical term, I'll
try to explain it by way of example, without the maths. I think even
if we don't directly use Bayes's theorem to make inferences, we
implicitly do something like it, and that something can't carry over
into the kind of design inference that the ID community want to make.

Here's my (somewhat light-hearted) example to explain the process:

You walk into a room where there is a computer with the monitor
switched off, a scientist, and two "subjects". One of the subjects is
a human and the other is a monkey. The scientist tells you that
whatever is now on the screen was typed at the keyboard by one of the
two subjects. Which one was it? Not being able to see what's on the
screen (is it a sentence, or gibberish like asdfga0s dua0s9df d0
ads09'?), you are unable to say - it's 50-50.

Then the scientist switches on the monitor and the screen displays the
message "I AM THE MONKEY AND I TYPED IN THIS SENTENCE".

So you're now thinking it's definitely the human. How could a monkey
have typed an intelligible sentence? But in saying that, you are
relying on what you assumed before you saw the screen: that it was
equally likely to be the human or the monkey.

Then the scientist tells you the rule for selecting the subject. She
took a fair coin and tossed it ten times. The rule was: if it came up
heads all ten times, the human would be selected; otherwise, the
monkey.

You're still thinking it's the human. A 1024:1 shot isn't that remote
compared with the possibility of a monkey typing that sentence.

Suppose instead she tells you she tossed the coin a thousand times,
and the human would be selected only if all of them came up heads;
otherwise, the monkey.

I guess the first thing you do is examine the coin to see if it really
is fair. You toss it a few times: heads, heads, tails, tails, heads,
tails. Seems pretty fair. You run a lie detector on the scientist.
She's telling the truth. So now you're thinking, incredible as it
seems, that it's the monkey. Maybe monkeys can be trained to do such
a feat.

What you're implicitly doing is estimating a _prior probability_ on
which subject was chosen. Ten coin tosses all needing to be heads
makes a prior probability of 1/1024 of it being the human and
1023/1024 of it being the monkey. Then you get more data, and as a
result you recalculate your estimate of which of the subjects typed
the sentence. From the observations, you modify your prior
probabilities to get _posterior_ probabilities. The nature of the
evidence might swing your estimate right round. If you want to do
this calculation rigorously (though you are implicitly doing it
roughly anyway), you would use Bayes's theorem to compute the
posterior probabilities. This is the way it's done in all sorts of
modern expert systems, for example for medical diagnosis. I was told
by my PhD supervisor, a director at Microsoft Research Labs in
Cambridge, that a little Bayesian inference engine powers the "printer
troubleshooter" in MS Windows. (Though it's never helped me much!!)
(Or maybe it was the infamous "paper clip" cartoon character that gave
you hints all the time that you didn't want to know - I can't remember
exactly.)
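
To make the update concrete: Bayes's theorem says the posterior
probability of a hypothesis H given data D is
P(H|D) = P(D|H) P(H) / P(D), where P(D) is the sum of P(D|H) P(H)
over the competing hypotheses. Here is a minimal sketch in Python;
the likelihood figures (how probable it is that each subject would
produce an intelligible sentence) are invented for illustration, not
measured values.

# A minimal sketch of the Bayesian update in the monkey/human example.
# The likelihood figures are illustrative guesses.

def posterior_human(prior_human, lik_human, lik_monkey):
    """P(human | intelligible sentence) via Bayes's theorem."""
    prior_monkey = 1.0 - prior_human
    evidence = prior_human * lik_human + prior_monkey * lik_monkey
    return prior_human * lik_human / evidence

lik_human = 0.99    # a human almost certainly types something intelligible
lik_monkey = 1e-30  # a monkey almost certainly doesn't

# Ten tosses: prior on the human is 1/1024. Posterior ~ 1: still the human.
print(posterior_human(0.5 ** 10, lik_human, lik_monkey))

# A thousand tosses: prior is 2**-1000 (~1e-301). Now the monkey wins.
print(posterior_human(0.5 ** 1000, lik_human, lik_monkey))

With the 1/1024 prior, the human hypothesis still dominates, because
the likelihood ratio in its favour is astronomically larger than
1024:1; with the 2^-1000 prior, even a monkey-likelihood of 10^-30
carries the day. The exact numbers don't matter; what matters is that
the prior and the likelihoods trade off against each other.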

Now take an example that has been cited by Dembski as "design
detection". It relates to the case (if I recall the details
correctly) of Nicholas Caputo, who was in charge of arranging the
ballot papers for elections in one particular ward (county? - I don't
know the correct term). It is well known that the name that appears
first on a ballot paper has an unfair advantage, because lots of
people are too stupid to check and just put an X against the first
name on the list, irrespective of the party. Hence lots would be
drawn to determine which party was at the top of the list each time.
It was found that under Caputo's direction a Democrat had appeared
first 40 times out of 40. He was convicted on the grounds that this
was so unlikely to happen by chance that he must have rigged it. This
is cited as a clear instance of design detection.

However, even here one is implicitly using a Bayesian technique. This
is because you have concrete, independent evidence that humans can be
corrupt and prone to rigging elections. Most people are honest but a
small minority are corrupt. You've therefore got a reasonable idea of
the prior probability that a corrupt person determined the first name
on the ballot paper. The 40 Democrats out of 40 are further evidence,
which updates that prior to a posterior probability that Caputo is
corrupt.
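
As a rough sketch in the same vein (the prior on corruption and the
likelihood that a rigger produces a 40-for-40 run are invented
numbers, and the 40-out-of-40 figure is as I recalled it above):

# A rough sketch of the Caputo update; the prior and the rigging
# likelihood are guesses for illustration.

prior_corrupt = 0.01        # guess: 1 official in 100 would rig the draw
lik_if_corrupt = 0.9        # guess: a rigger very likely tops his own party
lik_if_honest = 0.5 ** 40   # fair draws: about 9.1e-13

evidence = (prior_corrupt * lik_if_corrupt
            + (1 - prior_corrupt) * lik_if_honest)
print(prior_corrupt * lik_if_corrupt / evidence)  # ~1.0: almost surely rigged

The inference only gets off the ground because the prior on "a corrupt
official exists and had the opportunity" is small but not zero, and is
grounded in experience.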

There are other pieces of evidence that could swing it back the other
way. Suppose that Caputo, like Pinocchio, has an affliction that
causes his nose to grow every time he tells a lie. He stands up in
court and swears that the way he conducted the drawing of lots was
above board and the result was just a freak; he was as surprised as
anyone else to see it. His nose stays the same size. That would make
you more likely to accept the freak result, because your prior
probability of his telling the truth just took a massive increase.

Now take the case of the intelligent design inference. In the
publicity for his new book, Stephen C. Meyer states that DNA is like a
computer code carrying immense amounts of information; that in every
case we know about, information implies an information giver, and a
program requires a programmer; and hence that design (the nature of
the designer remaining unknown and unspecified) is the best
explanation.

But it seems to me that this is entirely different from the Caputo
case. In the Caputo case you have independent, verifiable evidence
that people exist who rig elections. Many of them, hopefully, are in
jail! So you can assign a prior probability. But by definition, you
don't KNOW about the existence of an unspecified designer - the fact
that you say nothing about the identity of the designer undermines the
whole argument. There is therefore no meaningful way to assign a prior
probability. Indeed, what you are trying to do is to infer the
_existence_ of a designer responsible for the perceived design. In
the Caputo case, the software-programmer case, etc., you already know
that corrupt people, computer programmers, and so on exist; you are
trying to determine whether your evidence is explained by one of these
people, whom you know exist, or by coincidence.

In a nutshell: the inferences we make are all implicitly Bayesian,
because we have a prior idea of the probabilities of the different
inferences that could be made. But with the Design inference, where
the nature of the Designer is unknown (as the ID community tenaciously
holds), you can't assign a prior probability and hence can't begin to
make a meaningful inference.

Discuss.

Iain

--
-----------
Non timeo sed caveo
(\__/)
(='.'=)
(")_(") This is a bunny copy him into your signature so he can gain
world domination
-----------
To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.
Received on Sat Sep 19 07:28:06 2009

This archive was generated by hypermail 2.1.8 : Sat Sep 19 2009 - 07:28:06 EDT