> Two genes are found that are similar. Similarity is interpreted
> to mean genetic relatedness. The differences are interpreted
> to arise from divergence after duplication. The mutations
> and differentiation of functions, along with the selective
> advantages associated with them, are unknown and simply
> exist in the realm of vague imagination. None of this means
> the model-explanations are incorrect, but I think it important
> to realize we are dealing with explanations that may sound
> more rigorous than they really are.
Chris replied:
>That sounds *fairly* rigorous to me.
It sounds more like a just-so story to me.
Chris:
>But I'm not talking about that when I'm
>referring to laboratory observations. If one organism is found to have a
>gene, and it divides into two organisms, one of which has that exact same
>gene *except* for a small modification, I'd say that it was pretty well
>shown that that new gene was a variation on the old one. The alternative is
>that some mysterious force reached in and made a new gene that just
>*happened* to be a near-perfect duplicate of the gene in that location in
>the parent genome. Perhaps you are forgetting that the *function* of genes
>during reproduction is: *to be duplicated*.
Okay, fine. That is why I asked for the most impressive
examples that have been observed. You re-posted another
article that cites two, so I'll take a look (when I get time).
Chris:
>Besides, we already know that genes *are* modified during replication. Do
>you think that a *new* gene created by accident by an extra duplication of
>an existing gene *won't* be subject to the same modification processes as
>existing genes? If this is what you think, then I must ask: Why? Is there
>some mysterious force-field around new genes that prevents them from being
>modified like any *other* genes might be? Is there something to keep them
>from getting recombined with other genes during sexual reproduction? Is
>there any reason even to *suspect* that such a fact is the case? What
>observational facts would require such a bizarre hypothesis? Are there
>*known* instances in which such new genes were replicated many zillions of
>times *without* modification?
Actually, there is no mysterious force-field that prevents modification,
but duplicates can be treated differently in a way that is simply
ignored by the classical story of "function-acquisition via duplication
followed by divergence" (I'll call this the FADD model).
I'm thinking of the rapidly growing field of epigenetics. Consider as
an example the RIP phenomenon in Neurospora (RIP stands for repeat-
induced point mutation). During RIP, a duplicated DNA sequence
(from a few hundred to a few thousand base pairs) is somehow recognized
as a duplicate and quickly inactivated (regardless of whether the
duplicate is linked to the original). The duplicate rapidly acquires
multiple G-C to A-T transitions, resulting in numerous missense and
nonsense mutations. What's more, the remaining cytosines are
methylated. Not only is the gene quickly inactivated, but sequence
divergence becomes so extensive that homologous recombination
between the duplicates is prevented.
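To make the flavor of RIP concrete, here is a toy Python sketch
(my own illustration, not a model of the actual Neurospora
machinery; the ORF, the per-site mutation rate, and the assumption
of independent sites are all made up). It applies G-to-A and
C-to-T transitions to a duplicated sequence and counts the
resulting missense and nonsense codons:

import random

STOP = {"TAA", "TAG", "TGA"}

def rip(seq, rate=0.3, rng=random.Random(0)):
    # RIP-style transitions: G -> A and C -> T at random sites
    return "".join(
        ("A" if b == "G" else "T") if b in "GC" and rng.random() < rate
        else b
        for b in seq
    )

def codons(seq):
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "ATGGCCGGATGCCAGTGGCTGAGGCGGCCATGA"  # hypothetical 11-codon ORF
ripped = rip(original)

altered = sum(a != b for a, b in zip(codons(original), codons(ripped)))
stops = sum(c in STOP for c in codons(ripped)[:-1])  # premature stops
print(ripped)
print(altered, "altered codons,", stops, "premature stop codon(s)")

The numbers are meaningless in themselves; the point is that a
modest dose of targeted transitions is enough to riddle a copy
with missense and nonsense changes.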
Of course, there are other epigenetic mechanisms, but what they
all have in common is gene silencing. This phenomenon has recently
become appreciated due to work with transgenic animals and
plants. If you introduce transgenes as multiple copies, or genes
identical to endogenous sequences, you might expect to
see an increased dosage of gene expression, right? But what you
usually see is gene silencing.
None of this makes the FADD model invalid. It simply means
that for FADD to be rigorous, we need independent
evidence that the duplicated genes escaped epigenetic gene
silencing. Otherwise, the nonexpressed duplicate is essentially
a pseudogene that acquires mutations without the filtering
effect of natural selection. So the problem is not escaping
modification. The problem is being modified without natural
selection. And redesigning a protein's function by pure chance is
quite a just-so story if you ask me.
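For a feel of what "modification without natural selection" does
to a silenced copy, here is a minimal Monte Carlo sketch in the
same vein (assuming, crudely, one uniform random substitution at
a time and no selective filter at all): it counts how many
substitutions a hypothetical unexpressed ORF absorbs before it
picks up a premature stop codon.

import random

STOP = {"TAA", "TAG", "TGA"}

def substitutions_until_stop(n_codons=100, seed=0):
    # Hypothetical silenced ORF: ATG followed by (n_codons - 1) alanines.
    rng = random.Random(seed)
    seq = list("ATG" + "GCT" * (n_codons - 1))
    count = 0
    while True:
        s = "".join(seq)
        # Scan every codon except the last for a premature stop.
        if any(s[i:i + 3] in STOP for i in range(0, len(s) - 3, 3)):
            return count
        i = rng.randrange(len(seq))  # neutral: any site, any base
        seq[i] = rng.choice("ACGT".replace(seq[i], ""))
        count += 1

runs = sorted(substitutions_until_stop(seed=s) for s in range(25))
print("median substitutions before a premature stop:", runs[12])

Again, the exact figure is beside the point; what matters is that
an unexpressed duplicate's random walk ends in a pseudogene long
before pure chance redesigns a protein function.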
To make matters even more complicated, I came across a paper
that casts gene duplication in a whole new light. The paper appeared
in the April 1999 issue of Genetics (pp. 1531-1545). Its authors
(Force et al.) think the classical model of gene duplication (duplication,
followed by mutation, divergence, and the acquisition of new functions) is
in tension with the empirical data. They then offer a theory
of gene duplication that is quite interesting.
To appreciate their model, first consider the classical model
and how it would explain a multi-gene family (like the
Hox genes). In this model, we begin with a single gene
with a function. It is duplicated. The original retains
the original function, and the duplicate mutates to acquire
a new function. One then simply repeats the story again
and again, and after time, we have a family of genes with
different, but related, functions.
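As a cartoon of that story (genes reduced to labeled sets of
functions, with every duplication inventing one brand-new
function; all the names are hypothetical):

# Classical FADD cartoon: each duplication yields a copy that
# diverges to a brand-new function while the original is retained.
family = [{"a"}]
for new_function in ["b", "c", "d"]:
    family.append({new_function})  # duplicate diverges to a new role
print(family)  # four related genes, four novel functions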
Force et al. take a different angle. We begin instead
by considering not only the gene, but also the upstream regulatory
modules that control gene expression in a time- and place-dependent way.
They then propose that after gene duplication, no new functions
emerge, but instead, we get "subfunctionalization." That is,
one regulatory module may be lost in one copy, while another
one (but not the same one) is lost in the other copy. Thus,
the organism now requires two copies of the gene to ensure
it will be expressed properly. As a nice example, they cite
the gene pair eng1 and eng1b from zebrafish. eng1 is expressed
in the pectoral appendage bud, while eng1b is expressed in
a specific set of neurons in the hindbrain. Yet when they
look at the unduplicated gene (En1) in outgroups (mouse and
chicken), it is expressed in both the pectoral appendage
buds and hindbrain ganglia. Thus, gene duplication did
not give rise to novel functions, but instead simply
partitioned the functions associated with the original
gene (eng1 lost the ability to be expressed in the hindbrain
neurons, and eng1b lost the ability to be expressed in the
pectoral appendage bud, but since expression is needed in both
places, zebrafish must keep both genes).
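Contrast that with Force et al.'s picture in the same toy
notation (again only an illustration; the module names follow the
eng1/eng1b example above):

# Subfunctionalization cartoon: the ancestral gene carries two
# regulatory modules; after duplication each copy loses a
# complementary module, so both copies become indispensable.
ancestral_En1 = {"pectoral_appendage_bud", "hindbrain_neurons"}

eng1 = ancestral_En1 - {"hindbrain_neurons"}        # keeps the limb module
eng1b = ancestral_En1 - {"pectoral_appendage_bud"}  # keeps the neural module

assert eng1 | eng1b == ancestral_En1   # nothing new was created
assert eng1 != ancestral_En1 != eng1b  # but neither copy suffices alone
print("eng1:", eng1, "eng1b:", eng1b)

Nothing in the toy acquires a function the ancestor lacked; the
duplication merely partitions what was already there.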
I think this model of gene duplication/divergence is
interesting for three reasons:
1. The classic gene duplication story is often used
as an example of how evolution generates "new functions."
But as Force et al. write:
"We are not aware , however, of any convincing
evidence that the majority of duplicate copies have
acquired new functions that did not already exist in
the ancestral genes."
And if Force et al.'s model is correct, gene duplication is not
typically generating new functions. In other words, if a gene
family has four members that carry out functions a, b, c, and d,
they envision an original gene carrying out all four of these
functions, which were then partitioned among four
different, but similar, genes.
2. If Force et al.'s model is the most common way gene
duplication/divergence expresses itself, then something like
ID is in a stronger position. For now the original gene is not
the more primitive and simpler one, but the more complex one. Thus, what is
*its* origin? One could always return to the classic model
of gene duplication/divergence at this point, but now something
is lost.
To see what is lost, consider that currently all evidence of
gene duplication is interpreted in light of the classic model.
Thus, evidence of gene duplication gives indirect assurance
that evolution generates new functions (even though, as Force
et al. note, "even the most basic premise of the classical
model of duplicate gene evolution - that gene duplicates are
preserved only by the evolution of new functions - has
never been tested."). But if gene duplication
doesn't normally work this way, then a huge chunk of the core
"evidence" for thinking evolution commonly generates new functions
is lost. Gene duplication would normally spread out pre-existing
functions (subfunctionalize), meaning evidence of gene duplication and
the origin of multigene families is not necessarily evidence of function
acquisition. At this point, the return to the classic model becomes
much more ad hoc.
3. Force et al.'s model could be employed in speculations
about original design events that endowed organisms with a huge
amount of genetic information, which then splits up over the course
of evolution. An ID proponent
could thus develop a model where originally designed organisms
were front-loaded with information such that evolution is
simply the dissipation of this information. If that were true, all the
evidence for evolution would fail to explain these initial
states! In other words, the process of unfolding
does not explain the manner in which the original state was folded
into a specific "conformation." This perspective might not
appeal to many ID proponents, but it would open doors to counter
the views of Gould and friends, as one might make a case that
the initial state was so stacked that the evolution of human beings
was practically inevitable.
Mike