I do want to comment on this thread before it gets too stale. I'm delighted
to see several people on this list with expertise in this field.
Regarding Rich's comment about the increased computing capability for
climate modeling: I like to follow the TOP500 list at www.top500.org, which
publishes a semiannual ranking of the 500 fastest computers in the world. I usually
incorporate in my talks the chart shown at
http://www.top500.org/lists/2009/06/performance_development. The top line is
the sum of the performance of those 500 fastest computers. This is a good
proxy for the trend of compute power generally available for
compute-intensive applications such as climate modeling. Note that this line
climbs at an incredible compound annual growth rate of about 85%, a rate that
has been sustained since the rankings began in 1993 and shows no signs of
letting up.
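As a quick sanity check on that 85% figure, here is a minimal
back-of-the-envelope sketch in C. The endpoint values are my own
approximations (roughly 1.1 teraflops aggregate on the first list in June
1993 and roughly 22.6 petaflops in June 2009), so treat the exact numbers as
illustrative:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Approximate aggregate TOP500 performance (illustrative values,
           not exact list figures). */
        double perf_1993 = 1.1e12;   /* ~1.1 TFlop/s, June 1993 */
        double perf_2009 = 22.6e15;  /* ~22.6 PFlop/s, June 2009 */
        double years     = 16.0;

        /* Compound annual growth rate: (end/start)^(1/years) - 1 */
        double cagr = pow(perf_2009 / perf_1993, 1.0 / years) - 1.0;
        printf("CAGR: %.0f%%\n", cagr * 100.0);  /* prints about 86% */
        return 0;
    }

That comes out to roughly 86% per year, consistent with the figure above.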
The stories behind the #1 rank data points are also interesting. The five
points from 2002-2004 are the Earth Simulator from Japan. There was a major
push during that time for the US to regain the lead. I was delighted when
the Blue Gene project that I was connected with took that lead in 2004 and
kept it for 7 rankings. This was a joint project between IBM and Lawrence
Livermore National Laboratory (LLNL). Ironically, during that time I was the IBM
rep for Los Alamos National Laboratory (LANL), which has a perennial rivalry
with LLNL. LANL
proposed a rather radical joint supercomputer project which I strongly
opposed. My retirement cleared the way for them to revise the proposal and
get it going. Sure enough, last year they took the #1 prize and have kept it
for 3 rankings, breaking the petaflops mark in the process. This was
code-named Roadrunner and was a joint project between IBM and LANL. The
system is based on the Cell microprocessor, the same chip at the core of the
Sony PlayStation 3. I still wonder who will have the last laugh. I continue to
suspect that Roadrunner has a much more complex programming model than Blue
Gene and that Blue Gene will soon regain the lead. We will see.
One final point. My estimate is that the part of this performance growth due
to hardware alone (e.g., faster transistors) is about 20% per year or less; the
rest of the 85% comes from architecture and software improvements. What is
also interesting is the sharp retreat from clock frequency to obtain
performance. Remember the days when all the advertising for computers was
about MHz and GHz? All that is gone. In fact, the Blue Gene system that took
the #1 spot at the end of 2004 had only about a 700MHz clock. Today the
trend in general is to reduce the frequency to get speed. Why? Power
consumption is the real limiter, and power increases as roughly the third
power of frequency: dynamic power scales with voltage squared times
frequency, and the voltage must rise along with the frequency, which yields
the cubic dependence. The trick to increasing performance is to move to
parallel processors. Reducing the clock cuts the power, which makes room for
more processors, and that leads to faster computers that are also more
efficient. So slower is faster.
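To make that arithmetic concrete, here is a minimal sketch in C, assuming the
cubic power relation above and ideal parallel scaling (real applications
scale less than perfectly, of course):

    #include <stdio.h>

    int main(void) {
        /* Assume dynamic power ~ f^3 (P ~ C*V^2*f with V rising with f).
           Halving the clock cuts per-processor power to 1/8, so the same
           power budget feeds 8 processors at half speed. */
        double freq           = 0.5;                  /* half the clock  */
        double power_per_proc = freq * freq * freq;   /* 1/8 the power   */
        double nprocs         = 1.0 / power_per_proc; /* 8 processors    */
        double speedup        = nprocs * freq;        /* 4x, ideal case  */

        printf("%.0f processors, %.0fx aggregate speedup at equal power\n",
               nprocs, speedup);
        return 0;
    }

Eight processors at half the clock give four times the throughput for the
same power. Slower is faster indeed.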
Randy
----- Original Message -----
From: "Rich Blinne" <rich.blinne@gmail.com>
To: "Bill Powers" <wjp@swcp.com>
Cc: "asa" <asa@calvin.edu>; "Randy Isaac" <randyisaac@comcast.net>
Sent: Sunday, August 30, 2009 3:56 PM
Subject: Re: [asa] NASA - Climate Simulation Computer Becomes More Powerful
>
> On Aug 30, 2009, at 12:44 PM, Bill Powers wrote:
>
>> I agree and remember much that you recount.
>> But to say that with 64-bit machines roundoff is not an issue is clearly
>> false. At LANL we have had 64-bit machines since the 60s, although some
>> still remember the shift from 32-bit. Those who remember that shift,
>> also remember the lessons learned from roundoff in nonlinear systems.
>> For a long time now we have been clamoring for 128-bit machines because
>> studies of numerical noise, when they are done, indicate that the noise is
>> biting us in the rear end now. With the advent of 3D codes and their
>> eventual inclusion of the full spectrum of physics packages, cycles
>> increase wildly, making numerical noise the hidden demon that everyone
>> is afraid to examine.
>>
>> There were some machines that could implement 128-bit arithmetic via
>> software. Although this slowed codes down to a crawl, some had the guts
>> to see what would result. On some machines we also had the capability
>> to run numerical noise experiments by controlling the number of bits
>> used for real computations. From those experiments, certain
>> extrapolations might be derived.
>>
>> Still I know of very few codes, certainly at the national labs, that
>> seriously undertook these kinds of studies.
>>
>> bill
>>
>
> 64-bit is a bit of a misnomer here. The Xeon 51xx (Opteron 23xx) series
> introduced 128-bit SSE units. Then came Harpertown and then Nehalem, which
> is in the new NASA computer. The Xeon 5570 (I am assuming they are using
> this Nehalem processor) is "only" three times faster than the Xeon 54xx
> machines (Harpertown). Nehalem is faster at multiplication than the
> Opteron, while the latter is faster at division. This trade-off was chosen
> since multiplication is more common.
>
> Bottom line: these boxes do 128-bit SIMD hardware floating point really,
> really fast, and have a 64-bit memory address space. This combination is
> what is making the folks at NASA drool.
>
> Rich Blinne
> Member ASA
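A brief aside on Bill's point about roundoff in nonlinear systems. Here is a
minimal sketch in C using the logistic map, a standard chaotic toy problem
(not one of the lab codes he describes), showing single- and double-precision
runs of the identical iteration drifting completely apart within a few dozen
steps:

    #include <stdio.h>

    int main(void) {
        /* Chaotic logistic map x -> r*x*(1-x): tiny rounding differences
           are amplified exponentially, so 32-bit and 64-bit versions of
           the same iteration soon disagree completely. */
        float  xf = 0.1f;
        double xd = 0.1;
        double r  = 3.9;

        for (int i = 1; i <= 60; i++) {
            xf = (float)r * xf * (1.0f - xf);
            xd = r * xd * (1.0 - xd);
            if (i % 20 == 0)
                printf("step %2d: float=%.6f  double=%.6f\n", i, xf, xd);
        }
        return 0;
    }

The same effect, at far smaller magnitude per step, is what the 128-bit
studies Bill mentions are meant to quantify in long-running 3D codes.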
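And on Rich's point about 128-bit SSE: here is a minimal sketch of packed
double-precision arithmetic at the intrinsics level, assuming an SSE2-capable
x86 compiler (the values are illustrative). One 128-bit register holds two
64-bit doubles, and a single multiply instruction handles both pairs at once:

    #include <stdio.h>
    #include <emmintrin.h>  /* SSE2 intrinsics: 128-bit __m128d registers */

    int main(void) {
        /* Pack two doubles per register; _mm_set_pd takes (high, low). */
        __m128d a = _mm_set_pd(3.0, 4.0);   /* {4.0, 3.0} */
        __m128d b = _mm_set_pd(5.0, 6.0);   /* {6.0, 5.0} */

        /* One mulpd instruction computes both products at once. */
        __m128d c = _mm_mul_pd(a, b);       /* {24.0, 15.0} */

        double out[2];
        _mm_storeu_pd(out, c);
        printf("%f %f\n", out[0], out[1]);  /* 24.000000 15.000000 */
        return 0;
    }

That throughput, replicated across many cores with a 64-bit address space
behind it, is the combination Rich describes.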