Personally, I've encountered just the opposite: reasonable theories
that are difficult to assess because of experimental limitations. The
difficulties increase dramatically as the systems studied increase in
complexity. In biology we kill ourselves trying to control or manage
the experimental conditions in such a way as to get usable data.
Biology really stinks in terms of being able to get reliable and
*meaningful* quantitative results. More often than not, when theory might
predict a result of either 0 or 1, I see something like 0.653 in my
measurements. Why?** Often because there's too much noise. Thus the
highest respect among experimental biologists goes to those who can
devise elegant experiments which lead to yes/no (binary) answers.
But this often means that we only look at systems under conditions
where easily interpreted results can be found.
**(In the past, I have proposed ID as a possible explanation. My
theory of "Ironic Design" was stimulated by repeated observations
that the universe is IC, "Incomprehensibly Complex". That is, the
universe appears to be configured in such a way as to ultimately
thwart all possible explanations, divine and/or natural. This
observed theme is repeated at all levels: the wave-particle duality
in physics, the mind-body problem of psychology, and the "why do
I get 0.653 when I should be getting 0 or 1?" paradox of enzyme
subunit composition measurements. Of course, the odds of generating
by chance a universe so finely-tuned as to erase all clues of
definitive explanation must be infinitesimal. This lack of
comprehensible design directly leads to the conclusion that not only
was the universe designed, but that it could only have been designed
by something with an infinitely ironic sense of humor. Look for
my new book "Defeating Rationality by Steadfastly Ignoring
Counter-Arguments", due out anytime I can get to Kinkos for
photocopying.)
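To make the 0-or-1-versus-0.653 point concrete, here's a toy
simulation (every number in it is invented, purely for illustration):
if each measurement of a quantity whose true value is 1 passes through
an imperfect recovery step and picks up assay noise, the average over
many runs lands at some fraction in between, not at 0 or 1.

  import random

  random.seed(1)
  TRUE_VALUE = 1.0   # theory says exactly one subunit per complex

  def one_measurement():
      recovery = random.uniform(0.5, 0.9)  # partial extraction/recovery
      noise = random.gauss(0.0, 0.05)      # instrument/assay noise
      return TRUE_VALUE * recovery + noise

  runs = [one_measurement() for _ in range(50)]
  print(sum(runs) / len(runs))   # something like 0.7 -- not 0, not 1

The point is only that a fractional answer need not mean a fractional
reality; it can just as easily mean imperfect recovery plus noise.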
> Let me try to illustrate this with a couple of examples.
>
>Example 1: Hypothesis -- Drug X does not inhibit the activity of enzyme E.
>Experiment -- Take two flasks each containing an equal volume of a solution
>of E; make sure that both volumes have the same concentration of enzyme.
>To one flask add X; to the other flask add an equal volume of water to make
>sure the concentrations in both flasks remain the same. Incubate both
>flasks in the same 37 degree waterbath for one hour. Then remove
>equal-volume aliquots from both flasks and test the activity of E. If the
>results are the same for both flasks, then the hypothesis has been verified.
Alternate explanations:
1) "X" is unstable under the conditions tested (light, heat, pH,
reducing conditions, & etc).
2) "X" is sequestered (binding to glass in the flask, ar binding
to a contaminant.
3) "X" was not used at a concentration sufficient to observe the
inhibition of "E".
> If, however, the activity of the flask that contained X is
> significantly lower than that of the flask that had water added
> to it, then the hypothesis has been refuted.
Alternate explanations:
1) The experimental conditions only _appeared_ to be the same between
the +/- inhibitor flasks (flask contamination, slightly different
pHs, temperature fluctuations, etc.).
2) The inhibition was not caused by "X" but by a contaminant (e.g.,
a breakdown product, or another compound that was not purified
away during extraction).
> And 100% certainty is not needed, because the experimental design
> eliminates having to make "assumptions" about what is going on.
Experimental design is loaded with assumptions. I've seen every one
of the alternate scenarios listed above happen in practice. Of course,
the better experiments are constructed to rely mostly on those
assumptions which appear to be the most reliable.
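As a toy illustration of that point (all numbers invented): in the
sketch below, drug X has no effect at all, yet a small uncontrolled
difference between the flasks -- say, a pH drift that lowers the
enzyme's activity -- produces an apparent inhibition that would pass
casual inspection.

  import random, statistics

  random.seed(2)

  def assay(mean_activity, n=10, noise=0.03):
      # n replicate activity readings from one flask
      return [random.gauss(mean_activity, noise) for _ in range(n)]

  control = assay(1.00)   # water-only flask
  plus_x  = assay(0.93)   # +X flask: activity down 7% from pH drift, not X

  drop = statistics.mean(control) - statistics.mean(plus_x)
  print(f"apparent inhibition by X: {drop:.1%}")   # looks real; isn't X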
>Example 2: Hypothesis -- Drug X kills cancer cells, but has no effect on
>normal cells. Experiment -- Take a batch of cells and split them in half.
[....rest deleted...]
Oh, cell lines can be very difficult to handle. The greater the number
of interacting parts in a system (especially those with many undefined
parts), the greater the chance for misleading results.
Now consider an extension of the test. Assume that the cancer cells
did die from the drug in the experiment described (in vitro monoculture).
Now let's put drug X into patients and see what happens. This is
the test of greater interest to most people. We know that drug
trials generally return much more ambiguous results, and even those
results take a great deal more effort to evaluate. Thus, as we
progress into more complex, less well-defined systems, the chances
grow that our experiments will rest on assumptions that are much
harder to confirm. The same is essentially
true of all the sciences when working at the "jungle's edge" of
understanding. The differences between the sciences tend to be how
far one can walk before running into that jungle.
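One crude way to picture the jungle (a hypothetical noise model, not
real trial data): treat each uncontrolled, interacting factor as an
independent perturbation on the measured effect. The spread of
outcomes then grows roughly with the square root of the number of such
factors, so an effect that is obvious in a monoculture can be swamped
in a patient.

  import random, statistics

  random.seed(3)
  TRUE_EFFECT = 0.5   # the drug's real effect size (invented)

  def observed(n_factors):
      # each uncontrolled factor adds its own independent perturbation
      return TRUE_EFFECT + sum(random.gauss(0, 0.1) for _ in range(n_factors))

  for n in (1, 10, 100):   # very roughly: cell line, animal, human trial
      spread = statistics.stdev(observed(n) for _ in range(1000))
      print(n, round(spread, 2))   # ~0.1, ~0.32, ~1.0: noise swamps 0.5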
While I think we can be reasonably sure that some results truly address
particular theories, even to the point of entrusting our lives
to them, there are many for which we can still have a lot of doubt.
For example, while I'm certain that natural selection operates in
nature, I'm not terribly certain how it has operated in particular
instances over the history of life. Similarly, while I'm very
convinced that the data confirms the theory of common descent, I don't
think anyone truly knows how the mechanisms of evolution all fit
together and interact to produce the patterns we see. Thus we are
working with pieces of a puzzle, or on small parts of a much larger
question. In many cases in science, to make any headway we must make
greatly simplifying assumptions (or work in limiting cases) which we
know may not hold up in the messy real world.
Regards,
Tim Ikeda
tikeda@sprintmail.hormel.com