Peter Rust wrote:
>>Glenn presented the Vigenère code, an encryption-decryption method, to
demonstrate that random processes "CAN create information". He shows
that a 21-letter message encoded by a random 21-letter key can be "decoded" by
9 other random 21-letter keys to yield 9 different meaningful messages.
In fact, it is quite easy to obtain such solutions: select a meaningful
21-letter phrase, take its first (second,...) letter, locate it in the
first row of the Vigenère code table, go down this column to the first
(second,...) letter of the coded message and look up the first letter in
this row: this is the first (second,...) letter of the new "random" key.
Repeat this for the 21 letters.
For instance, take the message 'RandomOriginOfEnzymes': the procedure
yields the key 'yeslsxvafodpqduiwwqtt'. Apply this "random" key to
decipher Glenn's original coded message 'pefogjjrnulceiyvvucxl' and you'll obtain 'RandomOriginOfEnzymes'!
But of course, that's cheating, because we worked backwards! <<<
It is NOT cheating, because the main point was that when people say random sequences can't produce meaning, these examples make clear, NO MATTER HOW RARE THE PHENOMENON, that meaning can be generated from two random sequences. Just because there is randomness in a system doesn't mean that it can't produce meaning. While much discussion has followed, that simple fact was my main point. Randomness does not preclude meaning or semantics.
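The backwards procedure Rust describes, using the message and ciphertext from the original post, can be sketched in a few lines of Python. This assumes the standard tabula recta convention (cipher = plain + key, mod 26); the function names are mine:

```python
# Recover the Vigenere key that "decodes" a given ciphertext into a
# chosen plaintext.  Convention: cipher = (plain + key) mod 26.
def recover_key(plaintext: str, ciphertext: str) -> str:
    a = ord('a')
    return ''.join(
        chr((ord(c) - ord(p)) % 26 + a)
        for p, c in zip(plaintext.lower(), ciphertext.lower())
    )

def decode(ciphertext: str, key: str) -> str:
    a = ord('a')
    return ''.join(
        chr((ord(c) - ord(k)) % 26 + a)
        for c, k in zip(ciphertext.lower(), key.lower())
    )

message = 'RandomOriginOfEnzymes'
cipher = 'pefogjjrnulceiyvvucxl'   # Glenn's original coded message

key = recover_key(message, cipher)
print(key)                  # -> yeslsxvafodpqduiwwqtt
print(decode(cipher, key))  # -> randomoriginofenzymes
```

Running this reproduces exactly the "random" key given above, confirming that any 21-letter plaintext can be reached from the same ciphertext by some key.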
>>>There are 26^21, or about 5.2 x 10^29 (that's 520,000 trillion
trillion), different 21-letter strings of 26 possible letters. How many meaningful phrases of 21 letters might there be? 1000? a million? a trillion? <<<
Probably in the trillion to quadrillion range. That means meaning occurs at a rate of roughly 10^-15 to 10^-18, which is very similar to the observed rate at which randomly made biological molecules have a pre-specified function. And one can't forget that spelling errors will also not render a sequence undecipherable or unreadable. jist bkaus aye spel dis sintance rongly dusn't meen yu kant unnerstan it. That sentence performed its function of conveying a meaningful message. Could it have been better? Yes. Was the function destroyed by these errors? No.
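That rate estimate is easy to check; the trillion-to-quadrillion count of meaningful phrases is, as noted, only a guess:

```python
# Back-of-the-envelope: fraction of 21-letter strings that are meaningful,
# assuming (guessed) counts of a trillion to a quadrillion meaningful phrases.
total = 26 ** 21                  # all 21-letter strings of 26 letters
print(f"{total:.2e}")             # -> about 5.18e+29

for meaningful in (1e12, 1e15):   # a trillion, a quadrillion
    rate = meaningful / total
    print(f"{meaningful:.0e} phrases -> rate {rate:.1e}")
```

With those guesses the rate comes out near 2 x 10^-18 to 2 x 10^-15, the same general territory as the molecular experiments discussed below.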
The same thing is found with biological molecules in the directed evolution experiments of Joyce, Szostak and others. For any function you want to perform, make a random vat of molecules and you will find a useful molecule at the rate of 10^-12 to 10^-17.
>>>I
don't know. I haven't written a computer program to try to get an estimate.
The "natural selection" routine required for this program must be quite
involved, including a parser, a dictionary, some expert system
algorithms, as well as a user-friendly interface for a human to evaluate the
tentative solutions proposed by the program. But maybe Glenn, who certainly did
not cheat, can provide us with such an estimate. What's your hitting
average, Glenn? <<<
You obviously haven't read my reply to Brian. I will freely admit I worked the problem backwards. My point was that randomness does not preclude meaning. Everyone else has wanted to ignore that simple little fact and discuss other things. That little fact has profound implications for the role of randomness in this world.
When I worked with randomly generated codes and saved the bits and pieces, it took less than 500 samplings to collect a meaningful sentence. In that case I wasn't working backwards.
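My program is not reproduced here, but the flavor of it, randomly generating strings and saving the meaningful pieces, can be sketched like this. The tiny word list, the fragment lengths, and the function names are illustrative assumptions, not what I actually used:

```python
import random
import string

# Illustrative "dictionary" of meaningful fragments (an assumption, not
# the word list from my original program).
WORDS = {'a', 'i', 'is', 'it', 'to', 'be', 'we', 'go', 'the', 'and'}

def collect_sentence(n_words: int, seed: int = 0) -> tuple[list[str], int]:
    """Sample random letter strings; keep any that are meaningful words.
    Returns the collected words and the number of samplings needed."""
    rng = random.Random(seed)
    kept, samplings = [], 0
    while len(kept) < n_words:
        samplings += 1
        length = rng.randint(1, 3)
        s = ''.join(rng.choice(string.ascii_lowercase) for _ in range(length))
        if s in WORDS:                 # meaning is kept, noise is discarded
            kept.append(s)
    return kept, samplings

words, tries = collect_sentence(5)
print(' '.join(words), f'({tries} samplings)')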
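A sketch of the idea, random generation with meaningful bits saved, under illustrative assumptions (the word list and fragment lengths are mine, not from my original program):

```python
import random
import string

# Illustrative "dictionary" of meaningful fragments (an assumption).
WORDS = {'a', 'i', 'is', 'it', 'to', 'be', 'we', 'go', 'the', 'and'}

def collect_sentence(n_words: int, seed: int = 0) -> tuple[list[str], int]:
    """Sample random letter strings; keep any that are meaningful words.
    Returns the collected words and the number of samplings needed."""
    rng = random.Random(seed)
    kept, samplings = [], 0
    while len(kept) < n_words:
        samplings += 1
        length = rng.randint(1, 3)
        s = ''.join(rng.choice(string.ascii_lowercase) for _ in range(length))
        if s in WORDS:                 # meaning is kept, noise is discarded
            kept.append(s)
    return kept, samplings

words, tries = collect_sentence(5)
print(' '.join(words), f'({tries} samplings)')
```

With short fragments and common words, a handful of meaningful pieces typically accumulates within a few hundred samplings, consistent with the under-500 figure above. No target sentence is deposited anywhere in the program.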
>>Manfred Eigen, Nobelist and inventor of the hypercycles, also cheated by
working backwards. In popular lectures about the origin of life, he used
to present a computer simulation purporting to show that information can
indeed emerge quite rapidly by means of random "evolutionary" processes.
He generated a random sequence of letters, which he mutated randomly. Each
time a letter happened to equal the corresponding letter of a meaningful
phrase previously deposited, it was and remained fixed. Of course, the
process produced the "information" supplied after not too many
generations! <<<
In my program, there were no previously deposited letters or words.
>>But let's look more closely at what really happens in evolution! Hubert
P. Yockey ("A calculation of the probability of spontaneous biogenesis by
information theory", J.theoret.Biol. 67 (1977), 377) compared the then
known sequences of the small enzyme cytochrome c from different
organisms. He found that 27 of the 101 amino acid positions were completely
invariant, 2 different amino acids occurred at 14 positions, 3 at 21, etc., more than 10 nowhere. Optimistically assuming that the 101 positions are mutually independent and that chemically similar amino acids can replace each
other at the variable positions without harming the enzymatic activity, he
calculated that 4 x 10^61 different sequences of 101 amino acids might
have cytochrome c activity. But this implies that the probability of
spontaneous emergence of any one of them is only 2 x 10^(-65), which is way too low to be considered reasonable (it is unlikely that these numbers would change
appreciably by including all sequences known today). A similar situation
applies to other enzymes, such as ribonucleases. <<<
First, your conclusion about ribozymes is simply not true, and to claim that this is the same thing flies in the face of observational evidence and displays a failure to keep up. Consider this:
"Figure 2 shows the progress of the selection in terms of the fraction of the pool RNA that bound to the thiopropyl Sepharose and was eluted with 2-mercaptoethanol. Initially about 0.5% of the RNA bound nonspecifically to the matrix and was eluted with 2-mercaptoethanol. After five cycles of selection, greater than 20% of the pool RNA reacted with thiopropyl Sepharose. As we estimated that there were at least 10,000 different molecules left in the pool at this stage, we chose to increase the stringency of the selection in the succeeding cycles by lowering the ATP-[gamma]S concentration and the incubation time, in order to try to isolate the most active catalysts. Because our selection sampled sequence space very sparsely (there are 4^100~10^60 possible 100-mers, but only ~10^16 different molecules in our pool), active molecules are likely to be sub-optimal catalysts. We therefore chose to perform three cycles of mutagenic PCR to allow the evolution of improvements in the active molecules. The combined effect of increasing the stringency and performing mutagenic PCR was to increase the activity of the pool by nearly three orders of magnitude from cycle 6 to cycle 13."~Jon R. Lorsch and Jack W. Szostak, "In Vitro Evolution of New Ribozymes with Polynucleotide Kinase Activity," Nature, 371, Sept. 1994, pp. 31-32
Another example:
"This problem was avoided by linking smaller DNA pools to generate one larger pool that had 1.6 x 10^15 different molecules, each with a central region containing 220 random positions. This pool was amplified by the polymerase chain reaction (PCR) and then transcribed in vitro to yield a pool of RNA sequences (pool 0 RNA). The use of such a large random region precludes the possibility of sampling more than a minute fraction of all possible (4^220) [~10^132--grm] sequences. Thus, for a selection based on catalysis to succeed, the catalytically active RNAs must have structures that are simple enough to be constructed from relatively common sequences."~David P. Bartel and Jack W. Szostak, "Isolation of New Ribozymes from a Large Pool of Random Sequences," Science, 261, Sept. 10, 1993, p. 1412
**
probability argument
"Implications for the RNA world hypothesis. The new ribozymes that we have isolated from a pool of random sequences catalyze a chemical transformation similar to that catalyzed by polymerases. Their abundance in the random-sequence pool is therefore relevant to the hypothesis that life began with an RNA replicase that originated from prebiotically synthesized random-sequence RNA. We detected about 65 sequences capable of carrying out a particular ligation reaction in a pool of more than 10^15 initial sequences, or a frequency of occurrence of one in about 2 x 10^13 sequences. This number is an underestimate of the abundance of the less active ribozymes, since many of these would have failed to ligate during the first round. A few of the selected sequences in pool 4 (one in about 3 x 10^14 initial sequences) must be more active than the average activity of pool 4 RNA (0.03 per hour, or a rate acceleration of about 10^4)."~David P. Bartel and Jack W. Szostak, "Isolation of New Ribozymes from a Large Pool of Random Sequences," Science, 261, Sept. 10, 1993, p. 1417
So you can't claim, as you erroneously do, a probability of 10^-65 when rates of 10^-13 to 10^-16 are observed to be the case by DIRECT experiment!!!!
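The "one in about 2 x 10^13" figure in the Bartel and Szostak quote above is simple arithmetic and worth checking; the pool size and the count of 65 ligators are taken from the quote:

```python
# Observed frequency of functional ribozymes in a random-sequence pool,
# using the numbers reported by Bartel and Szostak (1993).
pool_size = 1.6e15      # initial random sequences in the pool
active = 65             # sequences found to catalyze the ligation

frequency = active / pool_size
print(f"frequency = {frequency:.1e}")      # -> about 4.1e-14
print(f"one in {pool_size / active:.1e}")  # -> about one in 2.5e+13
```

That is a directly measured frequency near 10^-13 to 10^-14, nowhere close to 10^-65.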
Further, if you are going to cite Yockey, use his more recent work. I know the 1977 article you got the 10^61 different cytochromes from, and you are about 15 years (at least) out of date with that data. Please keep up. In his 1992 book Information Theory and Molecular Biology, he calculates that there are 10^93 different cytochrome c's that would work. In 1992 Yockey wrote:
"In the case of the 110 site iso-1-cytochrome c sequence the total possible number of points is 20^110 = 1.3 x 10^143. We learned in Exercise 9 of Chapter 9 that the effective number of amino acids is 17.621 so that the effective number of points is (17.621)^110 = 1.15 x 10^137. Of these, 2.316 x 10^93 are occupied by iso-1-cytochrome c, in the high probability set."~Hubert Yockey, Information Theory and Molecular Biology, (Cambridge: Cambridge University Press, 1992), pp. 328-329.
This is an increase of 32 orders of magnitude in the number of working cytochrome c sequences in 15 years. At that rate, we should solve the problem sometime this year!
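Yockey's 1992 numbers, and the probability they imply (functional sequences divided by the effective sequence space), can be checked directly:

```python
import math

# Numbers from Yockey (1992), pp. 328-329.
total = 20 ** 110        # all 110-residue amino acid sequences
effective = 17.621 ** 110  # effective sequence space (17.621 effective AAs)
functional = 2.316e93    # sequences occupied by iso-1-cytochrome c

print(f"20^110     = 10^{math.log10(total):.1f}")      # -> 10^143.1
print(f"17.621^110 = 10^{math.log10(effective):.1f}")  # -> 10^137.1
print(f"P(cyt c)   = 10^{math.log10(functional / effective):.1f}")
```

The quoted 1.3 x 10^143 and 1.15 x 10^137 both check out, and the implied probability of hitting a working cytochrome c comes out near 10^-44, dozens of orders of magnitude more generous than the 10^-65 figure from the 1977 data.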
>>Thus, a modern enzyme activity is extremely unlikely to be found by a
random-walk mutational process. <<<<
So exactly how do you suppose Szostak and colleagues are actually doing it in the lab? You sit there saying it can't be done while those guys are actually doing it. It is like the old geezer saying that airplanes will never fly while a jet screams over his head at treetop level!
>>>Let's assume that all of the Earth's biomass consists of the most
efficient biosynthesis "machines" known, bacteria, and all of them continually
churn out test sequences for a new enzyme function, which doesn't exist yet in
any organism.<<<<
This is like the apocryphal story from the Middle Ages in which the scholars in a room argued about how many teeth were in the mouth of a horse, basing their arguments on the Bible. Some knave suggested that they open the mouth of a horse and count. The scholars, indignant, threw the bum out of the room. Why should we assume anything about the earth? Why don't you actually look at the experiments that are going on TODAY!!!!! They are doing what your calculations say is impossible. Doesn't that seem a bit silly?
[mathematical argument that is falsified by direct observational experiment snipped because theoretical arguments that violate direct observation have something wrong with the theory]
You are correct to conclude that by grace we go, but we shouldn't go with blinders over our eyes to what is actually happening, or by using 20-year-old data for which newer data are available.
This archive was generated by hypermail 2b29 : Fri Sep 22 2000 - 19:11:33 EDT