RE: Environment

James Turner (103531.1532@compuserve.com)
24 Jun 96 15:14:47 EDT

On June 24, 1996, Dennis Sweitzer typed:

<<The large variability in weather-related data precludes reaching high levels
of precision as to the degree of climate change, the cause of climate change, or
even whether climate change can be distinguished from natural effects. In
short, we will never get to 99% (and probably not 95%) certainty in this area.

Policy making, on the other hand, inevitably involves acting on lousy data.
There's a lot of noise in the data, there are few experiments, there are many
factors to consider, etc., etc.

The value of the IPCC summary is that it attempts to bridge the gap between what
we know we know (& the certainty level thereof), and a policy recommendation.
The data is lousy, the interactions are complex, but the basic physics imply
climate change due to man-made influences. Sure, there can be other
explanations, but we understand them much less well than the man-made
influences.

There are three possible outcomes of the climate change predictions:

Either the prediction will be accurate (the predictions are basically from
computer simulations, which do substantially match observed weather
patterns. Earlier simulations did predict larger than observed climate
warming, but the latest simulations, which are run on more powerful
computers and incorporate more significant variables [such as cooling due to
sulfate aerosols in pollution], do substantially match observations).

Or the prediction will be high (if some poorly understood climate mechanism
works in our favor; or the observed global changes are due to high points in the
climate cycle and man-made greenhouse gases somehow have no effect).

Or the prediction will be low (if some poorly understood climate mechanism works
against us; or the effect of man-made greenhouse gases in the observed data has
been obscured by low points in the climate cycle).

Good policy would incorporate the three scenarios, the costs of various
actions, etc., and act to "hedge our bets". This gets into the fields of
risk analysis & decision theory--which is another e-mail altogether.>>

Dennis raises some good points here and I agree wholeheartedly with the main
thrust. I would like to raise a concern I have on the topic of predictions.
When I was an undergraduate I took a course in dynamical systems (from the pure
math side) and much later read James Gleick's book _Chaos_. One thing I came
away with from both was that if one is using a computer to make predictions
about a system, based upon some mathematical model, the difference between the
predicted and actual values can grow quite large the farther into the future
one tries to predict.

If you all don't mind some math (hopefully I won't butcher it), let t be the time
variable, x(t) be the wind speed (or temperature, or whatever) at time t, and
f(x(t)) the resulting wind speed at time t+s for some fixed s > 0. Thus if t =
now (which I measure as accurately as possible) and I want to know what the wind
speed will be one year from now, but s = 1 day, I have a lot of iterating to do
if I'm using my function f. The problem occurs when f starts spitting out
irrational values (like the square root of 2 or pi); then my lowly mind forces me
to approximate. Unfortunately, if I have to do that several hundred times, the
value of my prediction will be very far off (a simple, but crude, way to see this
is to try balancing your checkbook at the end of the month by ignoring everything
that occurs after the decimal point).
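
To make that concrete, here is a minimal sketch in Python. The logistic map
stands in for my function f, and the starting value, parameter, and rounding
precision are purely illustrative choices of mine, not anything taken from a
climate model. It shows how a small approximation made at every step pulls the
iterates away from the full-precision trajectory:

    # Toy illustration of the rounding problem described above.
    # The logistic map stands in for the step function f; it is NOT a
    # climate model, just a simple map known to behave chaotically.

    def f(x, r=3.9):
        # One "time step": the logistic map in its chaotic regime.
        return r * x * (1.0 - x)

    x_full = 0.2        # the "measured" starting value
    x_rounded = 0.2     # same start, but rounded at every step below

    for step in range(1, 61):
        x_full = f(x_full)
        x_rounded = round(f(x_rounded), 4)   # crude approximation each step
        if step % 10 == 0:
            print("step %2d: full = %.6f  rounded = %.6f  difference = %.6f"
                  % (step, x_full, x_rounded, abs(x_full - x_rounded)))

After a few dozen steps the two trajectories bear little resemblance to each
other, even though the error introduced at each step is only about 0.0001.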

Unfortunately, a computer has the same problem. No matter how sophisticated, it
still must approximate irrational numbers, and so it seems to me that any
computer model that attempts to make long-term predictions could give unreliable
results. But I am also not an expert, and I would like to know whether this is a
legitimate concern of mine or whether it can be easily dealt with in theory
and/or practice.

In Christ,
Jim Turner