>Kevin replies to Gary about interpreting data...
>[...]
>>In point of fact you can often be sure. I think I understand what you
>>mean by "theory-laden". Scientists will use theories to determine what
>>observations/experiments should be done. But far more often than not the
>>observations give results that directly contradict the theory. This is
>>usually not due to a badly designed observation, but to a badly designed
>>theory. I agree that 100% certainty is not possible, but show me a human
>>endeavor where it is possible.
>
>Personally, I've encountered just the opposite; reasonable theories, but
>which are difficult to assess because of experimental limitations.
>
This difference in "worldview" (if I may use such a loaded term here)
between us is most likely based on our different fields of study. As an
evolutionary biologist (?) you are used to thinking in terms of large,
complex systems; even when you are working with individual proteins your
view and ultimate goals are much more expansive. As such your hypotheses
tend to be complex as well, containing many sub-hypotheses that cannot be
adequately separated and tested independently, or which would give erroneous
results if you did. As a consequence, any experiments must themselves be
complex, as they either try to account for all the possible factors that
could influence the outcome or, finding that impossible, make assumptions
about as many of the uncontrollable factors as possible, even if the
assumption amounts to little more than saying, "this factor will have a
negligible influence on the outcome".
As a protein chemist/enzymologist, however, I am used to thinking in terms
of small, simplistic (and I use that term deliberately) systems; even when I
am studying an entire metabolic pathway or system of pathways I am
concentrating on the bits and pieces of that pathway/system rather than on
the whole. As such my hypotheses tend to be simplistic as well,
deliberately constructed to contain as few sub-hypotheses as is possible.
As a consequence, the experiments that I do are themselves simplistic, so it
is easier to account for, or even eliminate, all the possible factors, or at
least to make far more reasonable assumptions about their possible
influence.
My point is that experiments are complex and full of assumptions
because the theory being tested is complex and full of
assumptions, which is a direct result of studying a complex system. In
other words, the experimental limitations are imposed by the theory; the
experimental assumptions are imposed by the theory. They are not inherent
in the experiment itself, but they may appear to be as a consequence of
studying a large, complex system.
>
>The
>difficulties increase dramatically as the systems studied increase in
>complexity. In biology we kill ourselves trying to control or manage
>the experimental conditions in such a way as to get usable data.
>Biology really stinks in terms of being able to get reliable and
>*meaningful* quantitative results.
>
And for exactly the reasons I outlined above.
>
>More often than not, when theory might
>predict a result of either 0 or 1, I see something like 0.653 in my
>measurements. Why? Often because there's too much noise.
>
And where does this noise come from? It comes from the fact that the theory
being tested cannot adequately account for all the factors involved (or
worse, doesn't even know they exist!), so the factors are either ignored or
assumptions are made about their degree and manner of influence, assumptions
that usually turn out to be wrong. Again, however, these assumptions are
being imposed onto the experiment by the theory; they are not inherent in
the experiment.
>
>Thus the
>highest respect among experimental biologists goes to those who can
>devise elegant experiments which lead to yes/no (binary) answers.
>But often this means that most often, we only look at systems
>under conditions where easily interpreted results are found.
>
Because the theories being tested are themselves fairly simple, requiring
little or no guesswork about what may or may not happen.
>
[snip very interesting cosmological theory]
>
>> Let my try to illustrate this with a couple of examples.
>>
>>Example 1: Hypothesis -- Drug X does not inhibit the activity of enzyme E.
>>Experiment -- Take two flasks each containing an equal volume of a
>>solution
>>of E; make sure that both volumes have the same concentration of enzyme.
>>To one flask add X; to the other flask add an equal volume of water to
>>make
>>sure the concentrations in both flasks remain the same. Incubate both
>>flasks in the same 37 degree waterbath for one hour. Then remove
>>equal-volume aliquots from both flasks and test the activity of E. If the
>>results are the same for both flasks, then the hypothesis has been
>>verified.
>
>Alternate explanations:
>1) "X" is unstable under the conditions tested (light, heat, pH,
> reducing conditions, & etc).
>
That would have already been checked out before the above experiment was
done. If in fact X proved to be so unstable, it would never have been used
in the experiment, or the experiment would have been done in the dark at a
temperature, pH, reducing condition, etc., that would not have caused the
drug to break up. So this cannot be accepted as an alternative explanation
because it would have been eliminated by the experimental design.
>
>2) "X" is sequestered (binding to glass in the flask, or binding
> to a contaminant.)
>
Again, this can be anticipated and so looked into before the experiment was
done or investigated after the experiment, in which case the experiment
would then be repeated under such conditions that would make this problem
insignificant. For our purposes we can say that it is known that X is not
sequestered, or that if it is, it is only in insignificant amounts, thus
having no effect on the outcome of the experiment.
>
>3) "X" was not used at a concentration sufficient to observe the
> inhibition of "E".
>
Again, this would have been investigated before the experiment had been
done. Or the experiment can be modified to include various concentrations
of X throughout its solubility range. In any event, if X does inhibit the
enzyme, this inhibition will be seen if a high concentration of drug is used
(high compared to its solubility range). If the drug was too insoluble to
make a highly concentrated solution, it would never have been used in the
experiment.
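To make the concentration point concrete, here is a toy sketch (all numbers invented; a simple hyperbolic one-site inhibition model with an assumed IC50 is used purely for illustration) of what a concentration scan of X would look like if X really did inhibit E:

```python
# Toy dose-response scan; every number here is invented for
# illustration. Assumes simple hyperbolic (one-site) inhibition
# with a hypothetical IC50 of 10 uM.
IC50 = 10.0  # uM, assumed

def activity(conc_uM):
    """Fraction of uninhibited activity of E remaining at a given [X]."""
    return 1.0 / (1.0 + conc_uM / IC50)

for conc in [0, 1, 10, 100, 1000]:
    print(f"[X] = {conc:>5} uM -> activity = {activity(conc):.3f}")
```

Scanning up through the solubility range this way is why a real inhibitor cannot hide: at concentrations well above the (hypothetical) IC50, the residual activity falls toward zero.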
Suffice it to say that if the proper preliminary work had been done, the
experiment could be designed to eliminate all these factors, leaving a clean
result that would directly refute the hypothesis.
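As a numerical illustration of that clean result (the replicate values below are hypothetical), the comparison between the two flasks reduces to summarizing replicate activity assays:

```python
from statistics import mean, stdev

# Hypothetical replicate activity measurements (umol/min) from the
# equal-volume aliquots of each flask; the numbers are invented.
control = [10.1, 9.8, 10.3, 10.0]   # flask + water
treated = [4.9, 5.2, 5.0, 5.1]      # flask + drug X

pct_inhibition = 100 * (1 - mean(treated) / mean(control))
print(f"control: {mean(control):.2f} +/- {stdev(control):.2f}")
print(f"treated: {mean(treated):.2f} +/- {stdev(treated):.2f}")
print(f"inhibition: {pct_inhibition:.1f}%")
```

With confounders designed out ahead of time, a difference like this can be read directly as refuting the hypothesis that X does not inhibit E.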
>
>> If, however, the activity of the flask that contained X is
>> significantly lower than that of the flask that had water added
>> to it, then the hypothesis has been refuted.
>
>Alternate explanations:
>1) The experimental conditions only _appeared_ to be the same between
> the +/- inhibitor flasks (flask contamination, slightly different
> pHs, temperature fluctuations, etc).
>
All of which can be, and routinely are, accounted for in any biochemical
experiment. Unless there is good reason to believe that they may have been
present in the experiment, they are never suggested as an alternative
explanation except by people who simply refuse to believe that the results
are real. In any event, the onus is on them to show that the experimental
conditions were not the same in each flask. Since in the experiment I
describe above no such demonstration can be made, one might as well blame
gremlins for the results.
>
>2) The inhibition was not caused by "X" but by a contaminant (eg.
> a breakdown product or another compound that was not purified
> away during extraction).
>
I will assume that Tim means extraction of the drug and not the enzyme (if
the contaminant was in the purified enzyme extract both flasks would have
been equally inhibited, giving a false negative result). Again, this is
something that could have been checked out before the experiment was done.
In this case, we shall say that gas chromatography-mass spectrometry
analysis shows that the drug extract is pure, with no contaminants or
break-down products.
Again, while each of these alternatives is legitimate, if the experiment
was set up properly to begin with they would in fact all be eliminated by
the experimental design.
>
>> And 100% certainty is not needed, because the experimental design
>> eliminates having to make "assumptions" about what is going on.
>
>Experimental design is loaded with assumptions.
>
Only because the theory that the experimental design is based on is loaded
with assumptions. The experiment is only as good as the theory it is based
upon.
>
>For example, I've seen
>examples of all those alternate scenarios listed above.
>
So have I. And in every case either the theory itself could not account for
those factors, or the experiment was not properly designed to eliminate
them, or the necessary preliminary work that would have identified and
characterized those factors had not been done.
>
>Of course,
>the better experiments are constructed to rely mostly on those assumptions
>which appear to have the greatest reliability.
>
Which is accomplished because the theory can accurately account for them,
the experiment was designed to render them insignificant and/or the
necessary preliminary work that would have identified and characterized them
had been done.
>
>>Example 2: Hypothesis -- Drug X kills cancer cells, but has no effect on
>>normal cells. Experiment -- Take a batch of cells and split them in half.
>[....rest deleted...]
>
>Oh, cell lines can be very difficult to handle.
>
True, but you cannot simply invoke this, then wave your hand and dismiss the
results. As with any experiment, if the theory is sound enough and detailed
enough, if the experiment is properly defined and if the necessary
preliminary work has been done ahead of time, the results can be trusted to
directly verify or refute the hypothesis with no need to worry about
experimental assumptions.
>
>The greater the number
>of interacting parts in a system (especially those with many undefined
>parts), the greater the chance for misleading results.
>
True, but a properly designed experiment can eliminate or greatly reduce the
chance of a false positive or a false negative. Ambiguous results can still
occur, and I described three possible such outcomes. But ambiguous results
occur because the theory could not account for all the possible variables,
not because the experiment was flawed.
>
>Now consider an extension of the test. Assume that the cancer cells
>did die from the drug in the experiment described (in vitro monoculture).
>Now let's put drug-X into patients and see what happens.
>
If you are testing the same hypothesis as before -- drug X kills cancer
cells, but has no effect on normal cells -- chances are good that you will
get ambiguous or erroneous results. The reason is that this hypothesis
is simply inappropriate for the level of complexity represented by a human
clinical study. If, however, the hypothesis was appropriate for that level
of complexity -- maybe something like there is a fifty percent chance
compared with controls that drug X will cause a significant reduction in
tumor mass (I'm not a clinical scientist, so this may be an atrocious
hypothesis) -- you would probably have a fighting chance of designing a good
enough study that would give you results that would directly verify or
refute the hypothesis. As I mentioned at the beginning, however, the more
complex the situation, the more complex the theories that need to be tested,
the more complex the experiments that must be used to test them, and the
greater the chance of getting ambiguous or erroneous results.
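For what it is worth, here is one conventional way a hypothesis of that statistical form could be tested (all counts are invented; a two-proportion z-test with the normal approximation is assumed here, not anything from an actual trial):

```python
import math

# Hypothetical trial counts, invented for illustration: patients
# showing a significant reduction in tumor mass in each arm.
n_drug, k_drug = 100, 50   # drug-X arm: 50/100 responders
n_ctrl, k_ctrl = 100, 30   # control arm: 30/100 responders

p1, p2 = k_drug / n_drug, k_ctrl / n_ctrl
p_pool = (k_drug + k_ctrl) / (n_drug + n_ctrl)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_drug + 1 / n_ctrl))
z = (p1 - p2) / se
# two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

Even here, though, the assumptions (randomization, comparable arms, a meaningful response criterion) come from the theory behind the trial design, which is the point being argued above.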
>
>This is
>the test which has greater interest for people. We know generally that
>from drug trials, much more ambiguous results are returned. And
>even these results take a great deal more effort to evaluate. Thus
>as we progress further into more complex, less well-defined systems,
>the greater the chances are that our experiments will contain
>assumptions that are much harder to confirm.
>
Only because the theories we are testing have these assumptions to begin
with. If they didn't, the experiments would not have them either.
>
>The same is essentially
>true of all the sciences when working at the "jungle's edge" of
>understanding. The differences between the sciences tend to be how
>far one can walk before running into that jungle.
>
>While I think we can be reasonably sure that some results truly address
>particular theories, even to the point of entrusting our lives
>to them, there are many for which we can still have a lot of doubt.
>For example, while I'm certain that natural selection operates in
>nature, I'm not terribly certain how it has operated in particular
>instances over the history of life.
>
That's because the current theories cannot address that issue (assuming any
theory ever could). As such, one cannot design experiments or observational
strategies to test specific evolutionary scenarios; all that can be
investigated are general trends.
>
>Similarly, while I'm very
>convinced that the data confirms the theory of common descent, I don't
>think anyone truly knows how the mechanisms of evolution all fit
>together and interact to produce the patterns we see.
>
Again, because the current theories cannot yet address that issue.
>
>Thus we are
>working with pieces of a puzzle, or on small parts of a much larger
>question. In many cases in science, to make any headway we must make
>greatly simplifying assumptions (or work in limiting cases) which we
>know may not hold up in the messy real world.
>
Exactly my point; such limited, simplistic situations allow one to test
theories with experiments that produce results that directly verify or
refute the theories with a minimum of theoretical assumptions. It's only
when we try to apply those theories beyond those simplistic situations that
we start getting ambiguous/erroneous results. Then we need to create more
complex theories to handle more complex situations, but then we also have to
start dealing with more theoretical assumptions that we cannot control
without going back to more simplistic situations. Yet these assumptions are
an inherent part of the theories themselves, not the experiments designed to
test them. As we refine our theories these assumptions are either
eliminated or greatly reduced, and thus the experiments yield fewer
ambiguous/erroneous results. Given sufficient time, we should in fact be
able to cut that jungle back far enough that even hideously complex theories
can be directly refuted or verified by experimental results without having
to worry about uncontrollable assumptions.
Kevin L. O'Brien