Archive for the ‘High Performance Computing’ category

The Nov 2010 Top500 Supercomputer list

18 November 2010

The Top500 Supercomputer list is published twice a year, in June and November. The November release comes out during the annual Supercomputing conference. This year's release, the 36th Top500 Supercomputer List, came out on November 16, 2010. There is no prize for where one places on the list; you just get “bragging rights”.

Through my work in Fermilab's HPC department, I help keep several clusters up and running. Our fastest production cluster, called J/Psi, placed #308 on this 36th edition of the list, down from #188 last June. When J/Psi was in burn-in testing, it placed #110 on the June 2009 list. We also have a new cluster, not yet in production and still undergoing burn-in testing; we ran the Linpack benchmark against it, and it debuted at #216.

Those of you who were at the LISA 2010 conference last week may have sat in on the “Storage Performance Management” presentation by Matt Provost of Weta Digital. He mentioned their five clusters of high-performance compute nodes. Those five clusters placed at #462 through #466 on this November list. The same clusters were ranked #279–283 on the list last June, and around #144 a year earlier in June 2009.

One other fact I wanted to note from the current list: of the top 30 clusters listed, nine are DOE (U.S. Dept. of Energy) related sites. Since Fermilab is a DOE site, I'll point out with pride that 30% of the Top 30 are DOE research facilities. There are plenty of other statistics one can draw from these lists, but these are the ones that interest me.

I encourage you to check out the list. What jumps out as interesting to you? Do you work with or use a system on this list? Or is there some Top … list that your systems are listed on?


The TOP500 Supercomputer List

29 November 2009

The TOP500 Supercomputer list is published twice a year, in June and November. The November release comes out in time for the annual Supercomputing conference. The newest version of the TOP500 list was formally presented a couple of weeks ago at SC09 in Portland, Oregon.

The National Labs here in the US are well represented. The Jaguar Cray XT5 supercomputer at the Department of Energy's Oak Ridge Leadership Computing Facility is the new #1. Jaguar has bumped the Los Alamos Roadrunner system down to the #2 slot. Note that Roadrunner was the world's first petaflop/s supercomputer, topping the June 2008 list. These rankings are based on Linpack benchmark results.
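For the curious: the Linpack benchmark behind these rankings times the solution of one huge dense linear system, Ax = b, and reports the sustained floating-point rate. A rough single-node illustration in NumPy is sketched below — the real HPL benchmark is a distributed MPI code, so this only demonstrates the metric, and the problem size n here is an arbitrary choice of mine:

```python
# Sketch of what Linpack measures: time a dense solve of Ax = b and
# convert the elapsed time into floating-point operations per second.
import time
import numpy as np

def linpack_gflops(n: int = 2000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)      # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    # Standard Linpack operation count for an n x n dense solve.
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    assert np.allclose(A @ x, b)   # sanity-check the solution
    return flops / elapsed / 1e9

print(f"{linpack_gflops():.1f} GFlop/s")
```

On a cluster, HPL runs this same idea across thousands of cores at a problem size chosen to fill memory; the best sustained result is the Rmax figure the list ranks by.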

Let's talk about what makes a supercomputer. Back in the day, a big monster computer was more often than not a large IBM mainframe: one computer with lots of hardware that could do lots of processing and talk to lots of users through lots of terminals. It was designed for size, not speed. In 1964, Seymour Cray designed the CDC 6600, generally considered the first supercomputer, engineered for high-capacity, high-speed processing. Today's supercomputers are sometimes a single large system like Jaguar, a Cray XT5. But more often, today's supercomputer is actually a large cluster of smaller systems, configured and tuned for tightly coupled multi-processing performance. An example would be the rendering farms used by Weta Digital for CG work on films like The Lord of the Rings. They have five cluster listings around #195 on the current TOP500 list.

I work at Fermi National Accelerator Lab in the HPC (High Performance Computing) department. We support several experiments, primarily the LQCD (Lattice Quantum Chromodynamics) project. Lattice QCD calculations allow us to understand the results of particle and nuclear physics experiments in terms of QCD, the theory of quarks and gluons. The 7N cluster at JLab is also used by this collaboration.

I get to work as part of a team supporting Fermilab's HPC systems, which are used for running large-scale numerical simulations. Our J/Psi cluster is made up of 856 nodes, each running an instance of Linux and each with two 2.1 GHz quad-core Opteron processors. That's 6720 cores altogether, tightly coupled with a double data rate InfiniBand switch. Our Linpack result was 37.42 TFlops maximal achieved performance (Rmax). That put our cluster at #141 on the TOP500 list published earlier this month. On the list published just 12 months ago, that same result would have placed us at #69.
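As a back-of-the-envelope check on those numbers, you can compare the measured 37.42 TFlops against the cluster's theoretical peak (Rpeak). The 4 flops-per-cycle-per-core figure below is my own assumption for that generation of quad-core Opteron, not something stated here:

```python
# Rough Rmax vs. Rpeak sanity check for the J/Psi Linpack result.
cores = 6720            # core count quoted in the post
clock_ghz = 2.1         # clock speed quoted in the post
flops_per_cycle = 4     # assumed: 4 double-precision flops/cycle/core

rpeak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
rmax_tflops = 37.42     # measured Linpack result from the post

efficiency = rmax_tflops / rpeak_tflops
print(f"Rpeak = {rpeak_tflops:.2f} TFlop/s, efficiency = {efficiency:.0%}")
# Rpeak works out to about 56.45 TFlop/s, so the measured 37.42 is
# roughly 66% of peak -- a plausible Linpack efficiency for a
# DDR InfiniBand cluster.
```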

Supercomputing and HPC in general can be a mind-boggling field. I'm fairly new to this arena, having only started with this department about 12 months ago, and I find this stuff pretty exciting. We expect the J/Psi cluster to more than double in size in the next 12 months. I can't wait to see what the next year brings.

More later,

Ken S.
