Science in Christian Perspective
Random Processes and Evolution
ALDERT VAN DER ZIEL
Department of Electrical Engineering
University of Minnesota, Minneapolis, Minnesota 55455
From: JASA 27
(December 1975): 160-164.
Introduction
In 1827 the British botanist Robert Brown looked with his microscope at pollen
particles immersed in water and found that they executed a zig-zag
random motion.
This motion generally became known as Brownian motion. At first it was thought
to be a property of "living particles", but it soon turned out that
all small particles exhibited the effect. Gradually the idea was accepted that
the zig-zag motion of the particles was caused by the random collisions between
the particles and the water molecules, but it was 1905 before Einstein was able
to give the theory that is now accepted. Einstein generalized his
ideas and showed
that such a "Brownian motion" should occur in other
systems, e.g., that
there should be a "Brownian motion of electricity" in
electrical circuits.
With the advent of electronics it became clear that this "Brownian motion
of electricity", and the companion phenomenon of current fluctuations in
amplifying devices, such as vacuum tubes, transistors, etc., set a
serious limit
to the amplification of electrical signals; these signals simply
"drowned"
in the fluctuating signals generated in the circuits and in the
amplifying devices.
Since the fluctuations, when amplified and fed into a loudspeaker, produced a
hissing sound, the electrical engineers introduced the name noise.
This name has
stuck ever since.
Noise occurs in many different instances and it always sets a lower
limit to sensitive
measurements or to the electrical signals that can be processed electronically.
It determines the smallest TV signal that can be received without excessive "snow" on the TV screen, how far radar can see, etc.
It has been my privilege to study these random noise processes for more than 30
years. Gradually it became clear to me that what we have learned about random
processes in physics and engineering should be applicable to biology.
For in biology,
mutations and genetic drift are random processes that play a role in
the various
theories of evolution. This paper is a first attempt at such an
application.
There is, however, one difference between noise processes in physics
and engineering,
and random processes in biology. The former are stationary random processes in that the systems do not change as time goes on. The latter, however, are strictly speaking non-stationary in that the systems gradually change with time because of evolution. However, the changes are very slow, and they have little or no effect on the conclusions we are going to draw.
What Are Random Processes and How Do They Occur?
A random process is a process that cannot be predicted in advance, except on a
statistical basis. Why do such processes occur? There are two
important possibilities:
(1) The systems that we are investigating are so complex that it is impossible to give a complete description of them. There is no reason to assume that such a complete description could not be possible in principle; it is sufficient that it cannot be performed in practice. For that reason one has to be satisfied with statistical considerations.
(2) The system is simple enough, but it is impossible to know the initial conditions of the system with absolute accuracy. Since there is a fundamental uncertainty in the initial conditions there is also uncertainty in the predictions we can make. For that reason one has again to be satisfied with statistical considerations.
As an example of the first case we consider a small mirror suspended in air on
a thin quartz fiber. Due to the irregular bombardment of the mirror by the air
molecules, the mirror shows a fluctuating rotation around an
equilibrium position.
Statistical considerations show that the deviation in angle from the
equilibrium
position has a mean square value equal to kT/D, where k is
Boltzmann's constant,
T the absolute temperature and D the force constant describing the
retarding torque
exerted on the suspended mirror when it suffers an angular deviation. One may
thus determine the atomic constant k from careful measurements on such mirrors.
The same relation also describes the deviation from equilibrium in
sensitive galvanometers.
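To make this concrete, here is a minimal numerical sketch; the torsion constant D is an invented illustrative value, and only the relation, mean square deviation = kT/D, comes from the text:

```python
import math

k = 1.380649e-23   # Boltzmann's constant, J/K
T = 300.0          # absolute temperature, K (room temperature)
D = 1.0e-15        # torsion constant of the fiber, N*m/rad (illustrative value)

# Mean square angular deviation of the suspended mirror: <theta^2> = kT/D
mean_square_angle = k * T / D
rms_angle = math.sqrt(mean_square_angle)
print(f"rms angular deviation: {rms_angle:.2e} rad")

# Conversely, a measured <theta^2> together with known T and D yields k:
k_recovered = mean_square_angle * D / T
print(f"recovered Boltzmann constant: {k_recovered:.3e} J/K")
```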
As a simpler example consider the number of air molecules in a room.
This number
is so huge that one cannot possibly give a complete description of the motion
of all the molecules, but one has to be satisfied with statistical
considerations.
The same situation applies to the prediction of traffic accidents on holidays.
The number of cars on the road on those days is so huge that one can predict the number of accidents only on a statistical basis; one cannot predict in advance where a particular accident will occur.
The second case, where the initial conditions cannot be fully known,
is especially
acute in the molecular and atomic domain and it comes about because of the wave
character of the atomic particles involved. It is easily shown on the basis of
this wave character that one cannot simultaneously measure the position and the
velocity of a small particle with arbitrary accuracy. Rather one finds that the
product of the uncertainty in position and the uncertainty in the velocity must
always exceed a quantity of the order of h/m, where h is Planck's constant and m is the mass of the particle. This relationship is known as Heisenberg's Uncertainty
Principle. The quantity h is a very small number, but so is the mass
of an electron;
the distances of electrons in an atom are also quite small, so the uncertainty
in velocity is quite large.
Let me illustrate this with an example. Suppose you go deer hunting; you see a
deer, shoot at it and you miss. Don't blame Heisenberg's Uncertainty Principle
for your failure, for the mass of your bullet is so huge in comparison to the
mass of an electron that the uncertainties are negligible. The only reason for
your lack of success was bad shooting.
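A rough numerical comparison may help; it uses the order-of-magnitude form of the uncertainty product from above, with invented position uncertainties:

```python
h = 6.626e-34   # Planck's constant, J*s

def velocity_uncertainty(mass_kg, position_uncertainty_m):
    """Order-of-magnitude velocity uncertainty: dv ~ h / (m * dx)."""
    return h / (mass_kg * position_uncertainty_m)

m_electron = 9.11e-31   # kg
m_bullet = 1.0e-2       # kg, about 10 grams (illustrative)

# An electron confined to atomic dimensions (~1e-10 m):
print(f"electron: dv ~ {velocity_uncertainty(m_electron, 1e-10):.1e} m/s")
# A bullet whose position is known to within a millimeter:
print(f"bullet:   dv ~ {velocity_uncertainty(m_bullet, 1e-3):.1e} m/s")
```

The electron's velocity uncertainty comes out in the millions of meters per second; the bullet's is immeasurably small. Bad shooting has no quantum excuse.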
Let me give another example. The nucleus K⁴⁰ can transform into the nucleus Ca⁴⁰ with emission of an electron. The process must be described by wave mechanics, and all that the theory can predict is the average rate of emission. This calculated rate agrees with the experimental rate. While this rate
of decay can be predicted, we cannot predict the moment of decay of
an individual
nucleus. Such an event is called an elementary event that is not
further explainable.
It is one of the characteristics of wave mechanics that it predicts only on a
statistical basis.
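One can mimic this with a small Monte Carlo sketch; the per-step decay probability is invented for illustration (the real K⁴⁰ rate is far smaller). The total rate is reproducible, the individual moments are not:

```python
import random

random.seed(1)

decay_prob = 0.01    # chance that a given nucleus decays in one time step (illustrative)
n_nuclei = 100_000

# For a large assembly, the number of decays per step is very reproducible:
decays = sum(1 for _ in range(n_nuclei) if random.random() < decay_prob)
print(f"decays this step: {decays} (expected about {n_nuclei * decay_prob:.0f})")

# But the moment of decay of any individual nucleus is unpredictable:
def lifetime():
    steps = 0
    while random.random() >= decay_prob:
        steps += 1
    return steps

print("lifetimes of five individual nuclei:", [lifetime() for _ in range(5)])
```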
Let us consider another event in a somewhat larger aggregate.
Mutations are caused
by rearrangements of molecules in the genes. A rearrangement in a
particular gene
is an elementary event that cannot be predicted in advance. What can be predicted by the theory is the average rate of rearrangements for a large assembly of identical genes. When one investigates this rate as a function of temperature, one finds that it increases with temperature as exp(-E/kT), describable by an activation energy E; this means that a rearrangement occurs if the energy of the molecules is larger than E. This law gladdens the heart of any theoretical physicist,
for the same law holds for a host of other molecular processes. This
is an indication
that, at least as far as molecular rearrangements are concerned,
there is no difference
between "living" and "dead" molecules, but that both will
show mutations.
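A minimal sketch of this activation-energy law follows; the prefactor and the activation energy are invented illustrative numbers, not measured values for any gene:

```python
import math

k_eV = 8.617e-5   # Boltzmann's constant, eV/K
E = 0.8           # activation energy, eV (illustrative)
A = 1.0e10        # attempt frequency, per second (illustrative)

def rearrangement_rate(T):
    """Average rate of molecular rearrangements: rate = A * exp(-E/kT)."""
    return A * math.exp(-E / (k_eV * T))

for T in (280.0, 300.0, 320.0):
    print(f"T = {T:.0f} K: rate ~ {rearrangement_rate(T):.2e} per second")
```

Note how steeply the rate rises over a mere 40 K: that steepness is the fingerprint of an activation energy.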
One should not say that this has consequences for the molecular
domain only. For
a mutation gives rise to a different plant or animal. The mutation
occurs at the
molecular level but its end result shows up at the macroscopic level.
What Can Random Processes Do?
In the first place random processes are instrumental in
reestablishing equilibrium
after a disturbance
from equilibrium has occurred. Let me illustrate this with an example:
I am in the kitchen and the pilot light of the kitchen stove is off. I turn on
the gas for 5 seconds and then turn it off. At that time there is a
concentration
of gas near the burners. If the system is now left to itself, the gas
will distribute
itself evenly throughout the kitchen. Such an equilibrium occurs, even though
the motion of individual molecules is completely random, because there are more
molecules going from an area of high to an area of low concentration than there
are going from an area of low to an area of high concentration. The
random motion
of molecules thus has the tendency of evening out concentration
differences. After
equilibrium conditions have been established, local fluctuations
around the equilibrium
situation occur, but they are so small that it takes very careful measurements
to detect them.
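The evening-out can be seen in a toy random-walk model: a one-dimensional "kitchen" divided into bins, with all numbers invented for illustration:

```python
import random

random.seed(2)

n_bins = 10
molecules = [0] * 2000          # all molecules start near the burner, in bin 0

for _ in range(200_000):
    i = random.randrange(len(molecules))
    step = random.choice((-1, 1))        # each move is completely random
    molecules[i] = min(max(molecules[i] + step, 0), n_bins - 1)

counts = [molecules.count(b) for b in range(n_bins)]
print("molecules per bin:", counts)
# Roughly equal counts per bin: random motion has evened out the
# concentration, apart from small fluctuations around equilibrium.
```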
If we now apply the theory to the past, which means assuming that the system was left to itself, this would imply that the system came from an equilibrium situation. This means that the initial disturbance must have been a spontaneous fluctuation. If the disturbance at t = 0 is far too large for that, and it usually is, there is a very high probability that somebody must have set the initial condition at t = 0.
We can also express things as follows. A system out of equilibrium is in a very
improbable state, but it tends to an equilibrium state of much
greater probability
in the future. The tendency is thus to go from a state of lower to a state of
higher probability. This phenomenon can also be described with the help of the
concept of entropy. If this highly improbable initial state has come
from a previous
equilibrium situation, it would have come about by a huge spontaneous
fluctuation.
If that is ruled out, the highly improbable state must have come from
a situation
that was "set" at some time in the past. This argument is sometimes
used in favor of a creation.
I have no quarrel with the argument itself but I do not like the conclusion for
theological reasons. In this argument "creation" is equated
to "setting
initial conditions". To me creation is a religious concept that
is much richer
than "setting initial conditions". By equating the two, the concept
of creation has been greatly impoverished, and to this I object.
The next question is: Can random events lead to non-random results?
We shall show
from two examples that this is indeed possible. For the first example we again
turn to mutations. In a mutation a gene goes from one stable state to another
stable state. The transition is an elementary event that cannot be predicted in
advance, but the end result is fully determined by the final state. While the
final state is different for different mutations, each mutation leads
to a well-defined
plant or animal. There is randomness only in the transitions.
As the next example we consider an electronic oscillator. Suppose I
have an amplifier
tuned at 1000 Hz internally and the gain of the amplifier, defined as the output voltage over the input voltage, decreases monotonically
with increasing
input voltage and is 100 at an input voltage of 1 volt. I now feed 1/100 of the
output signal back to the input and remove the external input signal. Then the
amplifier will generate a 100 volt, 1000 Hz output signal all by itself; it has
become an oscillator.
How does the oscillator start? If I turn the oscillator on at time zero, there
is at first nothing but the spontaneous fluctuations of voltage in the circuits
and the spontaneous fluctuations of current of the amplifying devices. But at
these small signals the ratio of output voltage over input voltage is in excess
of 100 and as a consequence the output voltage builds up until a 100 V, 1000 Hz
output voltage is reached. Here the feedback produces just enough
voltage to maintain
the oscillations, and the buildup stops. The combined effects of the feedback,
the tuning at 1000 Hz and the non-linearity of the amplifier produce a stable
sinusoidal signal.
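Here is a minimal sketch of this buildup, tracking only the signal amplitude from pass to pass around the feedback loop; the gain curve is an invented stand-in for the saturating amplifier described above:

```python
import random

random.seed(3)

def amplifier_gain(v_in):
    """Gain falls monotonically with input amplitude and equals 100
    at 1 V input, as in the example above (illustrative curve)."""
    return 110.0 / (1.0 + 0.1 * abs(v_in))

feedback = 1.0 / 100.0   # fraction of the output fed back to the input

v_out = 0.0
for step in range(1001):
    # spontaneous circuit fluctuations seed and slightly perturb the loop
    # (this is an amplitude model, so we keep the noise term positive)
    v_in = feedback * v_out + abs(random.gauss(0.0, 1e-6))
    v_out = amplifier_gain(v_in) * v_in
    if step % 200 == 0:
        print(f"pass {step:4d}: output amplitude ~ {v_out:10.6f} V")
```

The amplitude climbs from microvolt-level noise to the stable 100 V level within a few hundred passes; the tiny jitter that remains is all that is left of the initiating fluctuations.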
If one looks very carefully, one finds that the output amplitude is
not absolutely
constant but varies very slightly and slowly in a random fashion. That slight
fluctuation in amplitude is all that is left of the fluctuations initiating the
oscillations. Apart from that small effect, the output is non-random
but sinusoidal.
Random Processes and Evolution
In the various theories of evolution one deals with random processes
like mutation
and genetic drift, and selective processes like natural selection. What is the
role of each? Do the random processes predominate or do the selective processes
predominate? That depends on the situation.
Let me illustrate that first with a non-biological example. I have a
noise signal
that I want to study. To that end I amplify the signal in a wide-band amplifier
that gives an enlarged replica of my noise signal. Now I put the signal through
a sharply tuned electronic filter that cuts out most of the
frequencies into which
the random signal can be decomposed. What is left of the signal now
reflects more
of the properties of the electronic filter than of the randomness of the input
signal; while there is some randomness left, the effect of the filter
predominates.
If instead I had put the signal through a very broad electronic
filter that cuts
out few of the frequencies into which the random signal can be decomposed, then
the signal coming out of the filter reflects more the properties of
the incoming
signal than the properties of the filter; now the randomness predominates.
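The contrast can be demonstrated with a small numerical experiment; a simple two-pole resonator stands in for the tuned filter, and all parameters are invented for illustration:

```python
import math
import random

random.seed(4)

def tuned_filter(signal, f0=0.05, r=0.999):
    """Two-pole resonator at relative frequency f0; r near 1 gives a
    sharply tuned (narrow) filter, smaller r a broad one."""
    a1 = 2.0 * r * math.cos(2.0 * math.pi * f0)
    a2 = -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = a1 * y1 + a2 * y2 + x
        y2, y1 = y1, y
        out.append(y)
    return out

def autocorrelation(x, lag):
    """Near 1: the output is predictable (the filter dominates).
    Near 0: the output still looks random (the input dominates)."""
    mean = sum(x) / len(x)
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(len(x) - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

noise = [random.gauss(0.0, 1.0) for _ in range(20_000)]
period = 20   # one period of the tuned frequency f0 = 0.05

sharp = tuned_filter(noise, r=0.999)   # narrow filter
broad = tuned_filter(noise, r=0.2)     # broad filter

print("narrow filter, correlation one period later:", round(autocorrelation(sharp, period), 2))
print("broad filter,  correlation one period later:", round(autocorrelation(broad, period), 2))
```

The narrow filter's output repeats itself almost perfectly one period later, reflecting the filter; the broad filter's output has already forgotten its past, reflecting the randomness of the input.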
If we now equate the selective processes in biology to the filter action in my
electronic example, we see that there is a wide range of possibilities for the
selective processes. The two extremes are
(a) A selective process with a very broad response that admits most of the random processes initiating evolution.
(b) A selective process with a very sharp response that admits very few of the random processes initiating evolution.
What happens in these extreme cases? In the first case, given a sufficiently long time, the random processes present all the various possibilities that exist and the natural selection process admits most of them. One then obtains an extreme variety of life forms. This seems to occur in the plant kingdom and in the insect world. We have omitted here the important effect of local conditions on evolution. In each locale the development of plant forms is severely restricted by soil conditions, climate etc. It is only when we consider the development at large that the great variety of life forms occurs. But the contrast between the development of man and the development of plant forms remains.
In the second case, however, given a sufficiently long time, the random processes again present all the various possibilities that exist. But now the selective processes admit only those few that are compatible with them. In that case one obtains
a development with a very strong directivity. For example, in mammals
the development
of a better brain carries such a high premium that it presents the dominating
feature that culminates in the development of man.
We come here to the point where we can understand some of Teilhard de Chardin's
ideas, expressed in his book, The Phenomenon of Man. According to him
the development
of mammals is very strongly directive and leads directly to the development of
man. This agrees with what we just said. But it should be understood that it is
more the exception than the rule; the rule seems to be a rather broad response
resulting in a great wealth of life forms.
This interplay between random processes and selectivity also occurs
in our thinking.
The individual steps in our thought processes are random, in that they have the
same a priori likelihood. But when we concentrate on solving a
particular problem,
this imposes such a selectivity on these random steps that our thinking will be
strongly directed toward solving that problem. It is therefore no surprise that
we are so often successful in finding the solution. If, on the other hand, we
had been unselective in what thoughts we would admit, we might have
come up with
very interesting ideas about a wide variety of topics, but we would
not have succeeded
in solving the problem in question.
In the case of man, we must consider cultural evolution in addition
to biological
evolution. The development of man during the last few hundred
thousand years has
been due mostly to cultural evolution, i.e., to exchange of ideas, inventions
and transmission of information. The time needed to generate a new species is
about 1 million years; the time needed to generate new cultural tools is many
orders of magnitude smaller. Moreover, each change opens up the possibility for
new changes. Therefore cultural development seems to grow
explosively, especially during the last few hundred years.
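This compounding can be put in a compact form; it is a minimal idealization of my own, not a result from biology or history. If each existing idea or tool opens up new possibilities at some average rate r, then

$$\frac{dN}{dt} = rN \quad\Longrightarrow\quad N(t) = N_0\, e^{rt},$$

i.e., growth proportional to what already exists, which is exactly explosive (exponential) growth.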
This explosive growth has been used by Teilhard de Chardin's
followers as an illustration
of what Teilhard de Chardin means by development towards an Omega
point. The objection
to this idea is that Teilhard de Chardin's ideas about the Omega point have a
strongly religious flavor, culminating in the expectation that "God will
be all and in all." This religious emphasis is missing in the discussion
about explosive growth in cultural development.
The Origins of Life, of Cells, and of Multicellular Forms
How did life as we now know it originate? We don't know. But there
are some hard
facts, and we can speculate. We list these hard facts as follows.
(a) There is sound evidence that electrical discharges, such as lightning, in
the primaeval atmosphere of the earth would have produced all the
important amino
acids, the building blocks of all living matter. These should have combined into more complex protein structures. Could this have resulted in living matter? Apparently it did somehow, but from complex protein structures to a living cell is a huge step.
How should one define life? Perhaps an acceptable definition is a
biological entity
that has the ability to reproduce itself. Arguments are sometimes
presented that
viruses are an intermediate step between living and dead matter. They
do not reproduce
themselves, but they are reproduced by a living host. Moreover, mutations are
possible in viruses. However, as we saw before, rearrangements of molecules are
possible in all kinds of complex structures, so that mutations are no sign of
rudimentary life!
How should one visualize the hypothetical first living structures? This is not
certain, but it seems safe to say that they must have been much
simpler than present-day
cells.
(b) If amino acids are made in the laboratory, they come in two structures that
are each other's mirror image; we shall call them "left-handed" and "right-handed"
structures. It seems obvious that left-handed building blocks lead to
left-handed
structures and right-handed building blocks to right-handed
structures. A priori
these two kinds of structures would be equally likely. Now the peculiar property of all living matter is that it contains only left-handed building blocks. Schrödinger,
the father of wave mechanics, has therefore proposed that this must have come
about because the first protein structure that acquired the
possibility of reproduction
happened to be built of left-handed building blocks and that all present living
structures must have descended from that first one. But since it was
an operation
at the molecular level, it must be described by wave mechanics, and therefore
the transition from non-living to living matter must have been a
unique elementary
event that cannot be further described scientifically.
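A toy simulation of Schrödinger's proposal follows; the population size and the number of trial worlds are invented, and only the coin-flip start and the inheritance of handedness come from the argument above:

```python
import random

random.seed(5)

def first_replicator():
    """The elementary event: the first structure that acquires the ability
    to reproduce happens to be left- or right-handed, equally likely a priori."""
    return random.choice(("left-handed", "right-handed"))

def populate(founder, n_descendants=1000):
    """Reproduction copies the existing structure, so every descendant
    inherits the founder's handedness."""
    return [founder] * n_descendants

for world in range(5):
    founder = first_replicator()
    kinds = set(populate(founder))
    print(f"trial world {world}: founder is {founder}; all life is {kinds}")
```

Within each trial world all of "life" shares a single handedness; which handedness that is varies randomly from world to world, and nothing in the model predicts it.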
The objection is often made that such an event had an extremely small
probability.
But that is no valid objection, for all unique events share this extremely small
probability. For example,
if I have to figure out what the probability is that I am what I am, and take into account all the events that produced me and all the ancestors that preceded me, I come to an extremely small probability. Nevertheless, I do exist.
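The arithmetic behind this is simple; the number of events and the per-event probability are invented for illustration:

```python
import math

# Suppose my history involved 10,000 independent events, each with a
# probability of only 1/2 of turning out the way it actually did.
n_events = 10_000
log10_p = n_events * math.log10(0.5)
print(f"probability of this exact history: about 10^{log10_p:.0f}")
# about 10^-3010: astronomically small, and yet the history happened.
```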
(c) If the first primitive forms of life were much
simpler than the present cells, then a large accumulation of genetic material must have taken place in the transformation
of primitive
to cellular forms of life. The details of this transformation are unknown, but
somehow it must have taken place.
(d) In the transition from cellular forms of life to the much more
complex forms
of present plant and animal life the combined effects of mutations,
genetic drift
and selection must have played an important role in establishing new species,
genera and probably further down the line. How far can these
processes bring us?
We don't know.
(e) It is now very tempting to insist, as the neo-Darwinists do, that
the processes
of mutations, genetic drift and natural selection are sufficient to explain all
the changes that have taken place. To me that is too sweeping a
proposal. If our
argument under (c) is adopted, that the accumulation of genetic material took place
when going from primitive to cellular forms of life, then it is hard to see why
this accumulation must have stopped at the cellular level. It might
thus be possible
that at certain stages of the development new genetic material has
accumulated.
The argument used against this proposal is that this means
"invoking miracles".
The argument is invalid, for postulating unknown processes is not
invoking miracles,
but means turning our attention to other possibilities. I do not believe that
all the evidence is already in, and many others share this view with me.
One of those from whom I have learned this is the Swiss biologist Adolf Portmann. He likes to state that we are more surrounded by mystery than by well
established fact, that what we know is small in comparison to what we
don't know.
That instills a sense of modesty about our knowledge and a sense of urgency to
work quietly but persistently on extending our knowledge.
Portmann takes a rather cautious view about evolution, especially of
its neo-Darwinian
form. In his opinion much more evidence should be gathered. And this evidence
should not be put in a neo-Darwinian framework, for in that case all that does
not fit into the framework tends to be disregarded. Rather it should be left standing
as is, so that it becomes clear what the theory of evolution must
explain. Portmann
has worked this out in detail for the development of man. (A. Portmann, Biologische Fragmente zu einer Lehre vom Menschen, Schwabe Verlag, Basel, 1957.)
When one looks at the huge amount of information stored in the genes,
the question
arises: "How did it get there?" Could it have gotten there by random
processes followed by selection? Of course it could, but did it? And if so, how
did it come about?
Those who at this point invoke an all-powerful designer or creator
behind it all
have the simplest approach to this question. I know that the
principle of design
has been badly misused by 19th century apologetics, so that it is no surprise
that many scientists shy away from such a conclusion. But can one get around the problem of information storage without the word "design"?
Those who do not think so have in my opinion a strong position.
I would like to add a scientific word of caution, however. When the properties
of the chemical elements were discovered, it looked at first sight as
if all these
elements had been carefully designed. But when the structure of the atom and of
the atomic nucleus was unraveled, it became clear that the properties
of the elements
were actually a necessary consequence of the properties of the
protons and neutrons
constituting the atomic nucleus and of the electrons surrounding the nucleus.
There is therefore no design in fact, even though it looks like design at first sight. So it may be with the huge amount of information stored in the genetic code. Maybe its structure will also become obvious when the genetic code has been fully unraveled.
I would also like to add a theological word of caution in order to
emphasize that
my attitude toward evolution is not motivated by theological bias. I cannot be overly enthusiastic about this "proof" of the existence of a
Divine Creator. God is not Creator because I know so little about nature. God
is not Creator because His actions fill the gaps in my knowledge. God
is not Creator
because there is so much design in nature. But God is Creator because
He is God.
He would still be Creator if I knew everything there is to know. He would still
be Creator if I had a satisfactory scientific interpretation of what I now call
"design".
The problems of the origin of life, of the development of cellular
forms of life,
and of the complex multicellular forms of life will be with us for a long time
to come. It will be necessary to keep an open mind about all the
options available.
It will be exciting to work on these problems quietly but persistently. While
the application of our knowledge of random phenomena may not solve
these problems,
it may give new insights. My effort is a first step in this direction.