To see why complexity is crucial to inferring design, consider the
following sequence of bits:
110111011111
These are the first twelve bits in the previous sequence representing the
prime numbers 2, 3, and 5 respectively. Now I can guarantee that no SETI
researcher, if confronted with this twelve-bit sequence, is going to
contact the science editor at the New York Times, hold a press conference,
and announce that an extra-terrestrial intelligence has been discovered. No
headline is going to read, "Extra-Terrestrials Have Mastered the First
Three Prime Numbers!"
The problem is that this sequence is much too short (i.e., has too little
complexity) to establish that an extra-terrestrial intelligence with
knowledge of prime numbers produced it. A randomly beating pulsar might by
chance just happen to output the sequence "110111011111." A sequence of
1126 bits representing the prime numbers from 2 to 101, however, is a
different story. Here the sequence is sufficiently long (i.e., has enough
complexity) to establish that an extra-terrestrial intelligence could have
produced it.
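As a concrete illustration, here is a minimal Python sketch of the pulse
encoding implied by the twelve-bit example above (a run of 1s for each prime,
with a single 0 between runs); the exact length of the full sequence depends
on the separator convention one assumes.

    def primes_up_to(n):
        """Return the primes from 2 up to and including n (simple trial division)."""
        found = []
        for candidate in range(2, n + 1):
            if all(candidate % p for p in found):
                found.append(candidate)
        return found

    def pulse_encode(primes):
        """Encode each prime as that many 1s, separated by single 0s."""
        return "0".join("1" * p for p in primes)

    print(pulse_encode([2, 3, 5]))               # 110111011111 -- the twelve bits above
    print(len(pulse_encode(primes_up_to(101))))  # length of the full sequence under this convention
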
Even so, complexity by itself isn't enough to eliminate chance and
implicate design. If I flip a coin 1000 times, I'll participate in a highly
complex (or what amounts to the same thing, highly improbable) event.
Indeed, the sequence I end up flipping will be one in a trillion trillion
trillion ..., where the ellipsis needs twenty-two more "trillions." This
sequence of coin tosses won't, however, trigger a design inference. Though
complex, this sequence won't exhibit a suitable pattern. Contrast this with
the previous sequence representing the prime numbers from 2 to 101. Not
only is this sequence complex, but it also embodies a suitable pattern. The
SETI researcher who in the movie Contact discovered this sequence put it
this way: "This isn't noise, this has structure."
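To put a number on the coin-flipping claim above, here is a quick Python check
of the magnitude involved (exact powers, not simulation):

    from math import log10

    outcomes = 2 ** 1000          # equally likely sequences of 1000 tosses
    print(len(str(outcomes)))     # 302 -- the count runs to 302 decimal digits
    print(1000 * log10(2) / 12)   # about 25, i.e., roughly 25 "trillions" multiplied together
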
What is a suitable pattern for inferring design? Not just any pattern will
do. Some patterns can legitimately be employed to infer design whereas
others cannot. The basic intuition underlying the distinction between
patterns that alternately succeed or fail to implicate design is, however,
easily motivated. Consider the case of an archer. Suppose an archer
stands 50 meters from a large wall with bow and arrow in hand. The wall,
let's say, is sufficiently large that the archer can't help but hit it.
Now suppose each time the archer shoots an arrow at the wall, the archer
paints a target around the arrow so that the arrow is squarely in the
bull's-eye. What can be concluded from this scenario? Absolutely nothing
about the archer's ability as an archer. Yes, a pattern is being matched;
but it is a pattern fixed only after the arrow has been shot. The pattern
is thus purely ad hoc.
But suppose instead the archer paints a fixed target on the wall and then
shoots at it. Suppose the archer shoots a hundred arrows, and each time
hits a perfect bull's-eye. What can be concluded from this second
scenario? Confronted with this second scenario we are obligated to infer
that here is a world-class archer, one whose shots cannot legitimately be
referred to luck, but rather must be referred to the archer's skill and
mastery. Skill and mastery are of course special cases of design.
The type of pattern where the archer fixes a target first and then shoots
at it is common to statistics, where it is known as setting a rejection
region prior to an experiment. In statistics, if the outcome of an
experiment falls within a rejection region, the chance hypothesis
supposedly responsible for the outcome is rejected. Now a little
reflection makes clear that a pattern need not be given prior to an event
to eliminate chance and implicate design. Consider the following cipher
text:
nfuijolt ju jt mjlf b xfbtfm
Initially this looks like a random sequence of letters and
spaces--initially you lack any pattern for rejecting chance and inferring
design.
But suppose next that someone comes along, after you've seen this sequence,
and tells you to treat it as a Caesar cipher, moving each letter one notch
down the alphabet. Behold, the sequence now reads,
methinks it is like a weasel
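A minimal Python sketch of that one-notch shift (assuming lowercase letters
and leaving spaces untouched):

    def shift_down(text, notches=1):
        """Move each letter the given number of notches down the alphabet."""
        out = []
        for ch in text:
            if ch.isalpha():
                out.append(chr((ord(ch) - ord("a") - notches) % 26 + ord("a")))
            else:
                out.append(ch)
        return "".join(out)

    print(shift_down("nfuijolt ju jt mjlf b xfbtfm"))   # methinks it is like a weasel
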
Even though the pattern is now given after the fact, it still is the right
sort of pattern for eliminating chance and inferring design. In contrast to
statistics, which always tries to identify its patterns before an
experiment is performed, cryptanalysis must discover its patterns after the
fact. In both instances, however, the patterns are suitable for inferring
design.
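To make the statistical half of this comparison concrete, here is a minimal
Python sketch of a rejection region fixed before an experiment; the fair-coin
hypothesis, the cutoff, and the observed count are illustrative assumptions,
not taken from the text.

    from math import comb

    n = 100        # coin tosses
    cutoff = 60    # rejection region, set in advance: "60 or more heads"

    # Probability that a fair coin lands in the rejection region by chance alone.
    tail = sum(comb(n, k) for k in range(cutoff, n + 1)) / 2 ** n
    print(f"P(>= {cutoff} heads in {n} tosses) = {tail:.4f}")   # about 0.028

    observed = 63  # hypothetical experimental outcome
    print("reject chance" if observed >= cutoff else "retain chance")
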
Patterns divide into two types, those that in the presence of complexity
warrant a design inference and those that despite the presence of
complexity do not warrant a design inference. The first type of pattern is
called a specification, the second a fabrication. Specifications are the
non-ad hoc patterns that can legitimately be used to eliminate chance and
warrant a design inference. In contrast, fabrications are the ad hoc
patterns that cannot legitimately be used to warrant a design inference.
This distinction between specifications and fabrications can be made with
full statistical rigor.
Why the Criterion Works
Why does the complexity-specification criterion reliably detect design? To
see why this criterion is exactly the right instrument for detecting
design, we need to understand what it is about intelligent agents that
makes them detectable in the first place. The principal characteristic of
intelligent agency is choice. Whenever an intelligent agent acts, it
chooses from a range of competing possibilities.
This is true not just of humans, but of animals as well as of
extra-terrestrial intelligences. A rat navigating a maze must choose
whether to go right or left at various points in the maze. When SETI
researchers attempt to discover intelligence in the extra-terrestrial radio
transmissions they are monitoring, they assume an extra-terrestrial
intelligence could have chosen any number of possible radio transmissions,
and then attempt to match the transmissions they observe with certain
patterns as opposed to others. Whenever a human being utters meaningful
speech, a choice is made from a range of possible sound-combinations that
might have been uttered. Intelligent agency always entails
discrimination, choosing certain things, ruling out others.
Given this characterization of intelligent agency, the crucial question is
how to recognize it. Intelligent agents act by making a choice. How then
do we recognize that an intelligent agent has made a choice? A bottle of
ink spills accidentally onto a sheet of paper; someone takes a fountain pen
and writes a message on a sheet of paper. In both instances ink is applied
to paper. In both instances one among an almost infinite set of
possibilities is realized. In both instances a contingency is actualized
and others are ruled out. Yet in one instance we ascribe agency, in the
other chance.
What is the relevant difference? Not only do we need to observe that a
contingency was actualized, but we ourselves need also to be able to
specify that contingency. The contingency must conform to an independently
given pattern, and we must be able independently to formulate that pattern.
A random ink blot is unspecifiable; a message written with ink on paper is
specifiable. Wittgenstein in Culture and Value made the same point as
follows: "We tend to take the speech of a Chinese for inarticulate
gurgling. Someone who understands Chinese will recognize language in what
he hears. Similarly I often cannot discern the humanity in man."
In hearing a Chinese utterance, someone who understands Chinese not only
recognizes that one from a range of all possible utterances was actualized,
but is also able to specify the utterance as coherent Chinese speech.
Contrast this with someone who does not understand Chinese. In hearing a
Chinese utterance, someone who does not understand Chinese also recognizes
that one from a range of possible utterances was actualized, but this time,
lacking the ability to understand Chinese, is unable to specify the
utterance as coherent speech.
To someone who does not understand Chinese, the utterance will appear to be
gibberish. Gibberish--the utterance of nonsense syllables uninterpretable
within any natural language--always actualizes one utterance from the range
of possible utterances. Nevertheless, gibberish, by corresponding to
nothing we can understand in any language, also cannot be specified. As a
result, gibberish is never taken for intelligent communication, but always
for what Wittgenstein calls "inarticulate gurgling."
This actualizing of one among several competing possibilities, ruling out
the rest, and specifying the one that was actualized encapsulates how we
recognize intelligent agency, or equivalently, how we detect design.
Experimental psychologists who study animal learning and behavior have
known this all along. To learn a task an animal must acquire the ability
to actualize behaviors suitable for the task as well as the ability to rule
out behaviors unsuitable for the task. Moreover, for a psychologist to
recognize that an animal has learned a task, it is necessary not only to
observe the animal making the appropriate discrimination, but also to
specify this discrimination.
Thus to recognize whether a rat has successfully learned how to traverse a
maze, a psychologist must first specify which sequence of right and left
turns conducts the rat out of the maze. No doubt, a rat randomly wandering
a maze also discriminates a sequence of right and left turns. But by
randomly wandering the maze, the rat gives no indication that it can
discriminate the appropriate sequence of right and left turns for exiting
the maze. Consequently, the psychologist studying the rat will have no
reason to think the rat has learned how to traverse the maze.
Only if the rat executes the sequence of right and left turns specified by
the psychologist will the psychologist recognize that the rat has learned
how to traverse the maze. Now it is precisely the learned behaviors we
regard as intelligent in animals. Hence it is no surprise that the same
scheme for recognizing animal learning recurs for recognizing intelligent
agency generally, to wit: actualizing one among several competing
possibilities, ruling out the others, and specifying the one chosen.
Note that complexity is implicit here as well. To see this, consider again
a rat traversing a maze, but now take a very simple maze in which two right
turns conduct the rat out of the maze. How will a psychologist studying
the rat determine whether it has learned to exit the maze? Just putting
the rat in the maze will not be enough. Because the maze is so simple, the
rat could by chance just happen to take two right turns, and thereby exit
the maze. The psychologist will therefore be uncertain whether the rat
actually learned to exit this maze, or whether the rat just got lucky.
But contrast this now with a complicated maze in which a rat must take just
the right sequence of left and right turns to exit the maze. Suppose the
rat must take one hundred appropriate right and left turns, and that any
mistake will prevent the rat from exiting the maze. A psychologist who
sees the rat take no erroneous turns and in short order exit the maze will
be convinced that the rat has indeed learned how to exit the maze, and that
this was not dumb luck.
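Assuming each junction presents a random fifty-fifty choice between left and
right, the difference between the two mazes is easy to quantify:

    p_simple = 0.5 ** 2      # two-turn maze: chance success is 1 in 4
    p_complex = 0.5 ** 100   # hundred-turn maze: chance success is about 8e-31
    print(p_simple, p_complex)
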
This general scheme for recognizing intelligent agency is but a thinly
disguised form of our complexity-specification criterion. In general, to
recognize intelligent agency we must observe a choice among competing
possibilities, note which possibilities were not chosen, and then be able
to specify the possibility that was chosen. What's more, the competing
possibilities that were ruled out must be live possibilities, and
sufficiently numerous so that specifying the possibility that was chosen
cannot be attributed to chance. In terms of complexity, this is just
another way of saying that the range of possibilities is complex.
All the elements in this general scheme for recognizing intelligent agency
(i.e., choosing, ruling out, and specifying) find their counterpart in the
complexity-specification criterion. It follows that this criterion
formalizes what we have been doing right along when we recognize
intelligent agency. The complexity-specification criterion pinpoints what
we need to be looking for when we detect design.
As a postscript it's worth pondering the etymology of the word
"intelligent." The word "intelligent" derives from two Latin words, the
preposition inter, meaning between, and the verb lego, meaning to choose or
select. Thus according to its etymology, intelligence consists in choosing
between. It follows that the etymology of the word "intelligent" parallels
the formal analysis of intelligent agency inherent in the
complexity-specification criterion.
So What?
There exists a reliable criterion for detecting design. This criterion
detects design strictly from observational features of the world. Moreover,
this criterion belongs to probability and complexity theory, not to
metaphysics and theology. This criterion is relevant to biology. When
applied to the complex, information-rich structures of biology, this
criterion detects design. In particular, the complexity-specification
criterion shows that Michael Behe's irreducibly complex biochemical systems
are designed. Richard Dawkins's claim that all biological design is only
apparent needs therefore to be modified: "Biology is the study of
complicated things that give the appearance of being designed because they
actually are designed."
What are we to make of these developments? Many scientists remain
unconvinced. So what if we have a reliable criterion for detecting design
and so what if that criterion tells us that biological systems are
designed? How is looking at a biological system and inferring it's designed
any better than shrugging our shoulders and saying God did it? The fear is
that design cannot help but stifle scientific inquiry.
Design is not a science stopper. Detecting design is one intelligence
determining what another intelligence has done. There's nothing
scientifically unfruitful about this. The only reason it seems
scientifically unfruitful is that materialist philosophy so pollutes our
intellectual life. Granted, once design is reinstated within science, it
won't be business as usual. For instance, a lot of the unsubstantiated
Darwinian just-so stories will go by the board (to which I say good
riddance). But new questions will arise and new research opportunities will
present themselves.
Once we know that something is designed, we will want to know how it was
produced, to what extent the design is optimal, and what its purpose is.
Note that we can detect design without knowing what something was designed
for. There's a room at the Smithsonian filled with obviously designed
objects whose purpose no one can identify.
Design also implies constraints. An object that is designed functions
within certain design constraints. Transgress those constraints and the
object functions poorly or breaks. Moreover, we can discover those
constraints empirically by seeing what does and doesn't work. This simple
insight has tremendous implications not just for science but also for
ethics. If humans are in fact designed, then we can expect psychosocial
constraints to be hardwired into us. Transgress those constraints, and we
personally as well as our society will suffer. There's plenty of empirical
evidence to suggest that many of the attitudes and behaviors our society
promotes do not comport with human flourishing. Design promises to
reinvigorate that ethical stream running from Aristotle through Aquinas
known as natural law.
By reinstating design within science, we do much more than simply critique
scientific reductionism. Scientific reductionism holds that everything is
reducible to scientific categories. Scientific reductionism is
self-refuting and easily seen to be self-refuting. The existence of the
world, the laws by which the world operates, the intelligibility of the
world, and the unreasonable effectiveness of mathematics for comprehending
the world are just a few of the questions that science raises, but that
science is incapable of answering.
Simply critiquing scientific reductionism, however, is not enough.
Critiquing scientific reductionism does nothing to change science. And it
is science that must change. By eschewing design, science has for too long
operated with an inadequate set of conceptual categories. This has led to a
constricted vision of reality, skewing how science understands not just the
world, but also ourselves. Evolutionary psychology, which justifies
everything from infanticide to adultery, is just one symptom of this
inadequate conception of science. Barring design from science distorts
science, making it a mouthpiece for materialism instead of a search for
truth.
Martin Heidegger remarked in Being and Time, "A science's level of
development is determined by the extent to which it is capable of a crisis
in its basic concepts." The basic concepts with which science has operated
these last several hundred years are no longer adequate, certainly not in
an information age, certainly not in an age where design is empirically
detectable. Science faces a crisis of basic concepts. The way out of this
crisis is to expand science to include design. To reinstate design within
science is to liberate science, freeing it from restrictions that were
always arbitrary, and now have become intolerable.
William A. Dembski
Center for the Renewal of Science and Culture
Discovery Institute
1402 Third Ave., Suite 400
Seattle, WA 98101