[asa] Re: Re: Open Source Climate Modeling was Re: [asa] Crop Yields Face Non-Linear Effects Due to Climate

From: wjp <wjp@swcp.com>
Date: Wed Sep 16 2009 - 09:20:19 EDT

Rich:

Not to beat a dead horse, but I think I'm still a little confused.

1) You say that the longer time scales are run at the same time step,
but for more time steps.
2) You say that the shorter time scales (e.g., weather) might on this
scale be considered random. I presume you mean random about some
sort of longer term trend.
3) So what I imagine is this: we run the code with a fixed time
step Dt. When we examine the results at the time scale Dt, we are
looking at the short time scale behavior. When we examine the
results at intervals of N*Dt, where N is some suitably large
integer, we are looking at the longer time scale result.
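
To make 3) concrete, here is a toy sketch of the picture I have in
mind (pure illustration; the names and numbers are made up and bear
no relation to any real climate code):

    # Toy sketch: one run at a fixed step Dt, examined at two scales.
    import random

    Dt = 1.0          # fixed integration time step
    steps = 10000     # total steps in the run
    N = 100           # coarse examination interval: every N*Dt

    trend_rate = 0.001    # slow secular drift per unit time ("climate")
    x = 0.0
    fine, coarse = [], []
    for i in range(steps):
        x += trend_rate * Dt + random.gauss(0.0, 0.1)   # drift + "weather"
        fine.append(x)                  # examined at scale Dt
        if (i + 1) % N == 0:
            coarse.append(x)            # examined at scale N*Dt

    # At scale Dt, consecutive samples differ mostly by noise; at scale
    # N*Dt the accumulated drift (N*trend_rate*Dt = 0.1) is comparable
    # to the noise and begins to show.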

4) If this picture is correct, I've got some questions. First, no one
looks at a code (unless they are debugging it) at the time scale of the
time step Dt. Everyone makes dumps, usually at intervals significantly
larger than the time step Dt. Indeed, what any intelligent user does is
adjust the dump frequency to catch significant changes. What counts as
a significant change depends upon the goals, intentions, and prior
understanding of the physics.

5) Every dump, no matter the time interval, contains a "random"
portion. They all contain weather.

6) By looking at longer time intervals we can more easily recognize
trends. For example, time lapse photographs of glaciers have been
taken every half hour or hour. By displaying the images in sequence,
we can recognize trends and movements that would be too small, or too
dominated by random events, to pick out were one to take photographs
once per second.

7) If this is correct, then the long time scale results are more
reliable than the short time scale results because the grosser
trends are more reliably predicted by the codes. When we examine
the short time scale results these trends are still being produced
and are still there; they are just more difficult for the user to
filter out from the many other changes that are also taking place
between one dump and the next. Essentially, over sufficiently long
time scales the changes that are observed contain both a trend and
a random component. When the trend signal is significantly larger
than the random element, the trend stands out for the user to
observe.
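
A back-of-the-envelope sketch of 7) (again purely illustrative, and
it assumes the "weather" is a stationary fluctuation about the trend):
the change between two dumps separated by time T is trend*T plus a
random part whose size does not grow with T, so the signal-to-noise
ratio grows linearly with T.

    import math, random

    trend = 0.001      # trend per unit time (made-up number)
    sigma = 0.1        # std. dev. of the stationary "weather" fluctuation

    def change_over(T, trials=10000):
        # Sample the observed change x(t+T) - x(t) many times.
        samples = [trend * T + random.gauss(0, sigma) - random.gauss(0, sigma)
                   for _ in range(trials)]
        mean = sum(samples) / trials
        return mean, mean / (sigma * math.sqrt(2))   # signal, S/N

    for T in (10, 100, 1000):
        signal, snr = change_over(T)
        print("T=%5d  mean change=%7.3f  S/N ~ %5.1f" % (T, signal, snr))
    # The trend stands out once T is long enough that trend*T >> noise.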

8) To take a concrete example, for many years I modeled the implosion
of tiny capsules in hohlraums under intense incident laser energy.
Over a very long time scale, the capsule, under radiation from
the directly energized hohlraum walls, would implode. In detail,
however, the capsule might not implode symmetrically, resulting
in a dud, since capsule temperatures were depressed. It is possible,
and has happened, that the reasons the capsule did not implode
symmetrically were coding problems and even roundoff problems.

What in this example do I take to be random noise and what the long
term trend? The results, if considered as a binary (produces neutrons
or doesn't), are significantly influenced by the random noise, and
indeed that sensitivity is the entire point of the simulation. But if
I were only interested in whether the capsule imploded at all, then
the randomness would include whether it produced neutrons or not, and
the long time scale result would be deemed accurate.

9) So there are a number of problems in claiming that the codes
do better in the long time scale results. First, we have to
decide what it is we are looking for: which variable's trend is
the important one, for some of them may be just plain wrong.
Second, we have to worry about attractors. Small changes in details
can put the code in different attractors, and these differences may
be reflected in the observed trends.
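
The attractor worry is easy to illustrate with a toy bistable system
(my own contrived example, nothing like a climate model): dx/dt =
x - x^3 has two attractors, x = +1 and x = -1, and a roundoff-sized
difference near the unstable point x = 0 decides which one the run
ends up in.

    Dt = 0.01

    def run(x0, steps=5000):
        x = x0
        for _ in range(steps):
            x += (x - x**3) * Dt    # forward Euler on dx/dt = x - x^3
        return x

    print(run(+1e-6))   # settles near +1
    print(run(-1e-6))   # settles near -1
    # A change at the level of roundoff flips the long-term "trend".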

10) Empirically we can determine which long time scale variables
are reliable. I don't believe you can know a priori which
long time scale variables are reliable. The next question would
be: why these variables or trends and why not others?
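
Here is the kind of empirical check I mean, sketched as a toy
ensemble (all parameters invented): perturb the initial conditions,
run the members, and see which diagnostics agree across the ensemble.
In this contrived example the long-run time mean is tight across
members while the instantaneous final state is not.

    import random, statistics

    def member(seed, steps=5000, trend=0.001):
        rng = random.Random(seed)
        x, total = rng.uniform(-0.1, 0.1), 0.0   # perturbed initial state
        for _ in range(steps):
            x = 0.9 * x + trend + rng.gauss(0, 0.1)  # damped + trend + noise
            total += x
        return total / steps, x   # (time mean, final instantaneous value)

    means, finals = zip(*(member(s) for s in range(20)))
    print("spread of time means :", statistics.stdev(means))
    print("spread of final value:", statistics.stdev(finals))
    # The diagnostic with the tight spread is the one to trust.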

Well, I hope the horse is really dead now. Anyway, it was fun
revisiting pre-retirement ways of doing business.
Now I've got to get back to the newer ones.

thanks,

bill
On Tue, 15 Sep 2009 22:48:04 +0000, rich.blinne@gmail.com wrote:
> On Sep 15, 2009 2:24pm, Bill Powers <wjp@swcp.com> wrote:
>> Rich:
>
>
>
>> Just a short reply. The problem is how are the codes obtaining long term
>> temporal results? All the codes operate on the presumption of
>> infinitesimal time steps. At each step presumably energy and the like is
>> conserved (not true of all codes). In looking at longer temporal scales
>> are we taking larger time steps, or simply taking the same time steps but
>> looking further down the road?
>
> The time steps are the same and the run goes for more steps. The real
> information that is compared with the models has more "noise" at the
> smaller time scales. Most people refer to said noise as weather. Since
> most of this noise is cyclical, over a long enough time it averages out
> to nearly zero, making your later predictions in the same run more
> accurate than your earlier ones. The short-term noise is treated as if
> it were truly random, with multiple ensembles being run with different
> initial conditions. What doesn't go away (the anthropogenic climate
> change) changes smoothly. What then comes out is a range of values
> bounded by the amount of short-term noise. The size of the bounds gives
> a feel for the accuracy of the modelling.
>
> In order to accurately model the short-term noise as a signal rather than
> noise -- even for such things as average temperature -- you need smaller
> grid sizes, which is why the NASA folks are excited about their new
> computer. As the grid sizes have been decreasing over the years, even
> climate models are starting to show ENSO and other such phenomena which
> are normally associated with weather rather than climate.
>
>
>
>
>> It does not seem that it is the longer time scale that is the salient
>> feature, but the kinds of variables that are being examined on longer
>> time scales. Examining average temperature will likely be fairly
>> reliable at all time scales.
>
> That's true with one important caveat: volcanic eruptions. These can
> lower global temps for a couple of years and then go away. Over the
> long term it is as if it never happened. Yet another reason for the
> counter-intuitive result of climate predictions being more accurate at
> longer time scales than shorter ones.
>
> Rich
