
The TOP500 Supercomputer List

29 November 2009

The TOP500 Supercomputer list is published twice a year, in June and November. The November release comes out in time for the annual Supercomputing (SC) conference. The newest version of the TOP500 list was formally presented a couple of weeks ago at SC09 in Portland, Oregon.

The National Labs here in the US are well represented. The Jaguar Cray XT5 supercomputer at the Department of Energy's Oak Ridge Leadership Computing Facility is the new #1. Jaguar has bumped the Los Alamos Roadrunner system down to the #2 slot. Note that Roadrunner was the world's first petaflop/s supercomputer, topping the June 2008 list. These rankings are based on Linpack benchmark results.

Let's talk about what makes a supercomputer. Back in the day, a big monster computer was more often than not a large IBM mainframe. It was one computer with lots of hardware that could do lots of processing and talk to lots of users through lots of terminals. It was designed for size, not speed. In 1964, Seymour Cray's CDC 6600, widely considered the first supercomputer, was engineered for high-capacity, high-speed processing. Today's supercomputers are sometimes a single large system like Jaguar, a Cray XT5. But more often, today's supercomputer is actually a large cluster of smaller systems, configured and tuned for tightly coupled multi-processing performance. An example would be the rendering farms used by Weta Digital for CG work on films like The Lord of the Rings; they have five cluster listings around #195 on the current TOP500 list.

I work at Fermi National Accelerator Lab in the HPC (High Performance Computing) department.  We support several experiments, primarily the LQCD project.  LQCD stands for Lattice Quantum Chromodynamics.  Lattice QCD calculations allow us to understand the results of particle and nuclear physics experiments in terms of QCD, the theory of quarks and gluons. The 7N cluster at JLab is also used by this collaboration.

I get to work as part of a team supporting Fermilab's HPC systems, which are used for running large-scale numerical simulations. Our J/PSI cluster is made up of 856 nodes, each running an instance of Linux. Each node has two 2.1 GHz quad-core Opteron processors, which works out to 6,848 cores altogether, tightly coupled with a double data rate (DDR) InfiniBand switch. Our Linpack run achieved a maximal performance (Rmax) of 37.42 teraflop/s. That put our cluster at #141 on the TOP500 list published earlier this month. On the list published just 12 months ago, the same result would have placed us at #69.
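To put the Linpack number in context, here is a rough back-of-the-envelope sketch comparing the cluster's theoretical peak (Rpeak) against the measured Rmax from the node configuration described above. The figure of 4 double-precision flops per core per cycle is my assumption for that generation of quad-core Opterons, not something stated in the post.

```python
# Back-of-the-envelope Rpeak vs. Linpack efficiency for the J/PSI cluster,
# computed from the node configuration described in the post.

NODES = 856
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 4
CLOCK_HZ = 2.1e9           # 2.1 GHz Opterons
FLOPS_PER_CYCLE = 4        # assumption: 4 double-precision flops/core/cycle

cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
rpeak_tflops = cores * CLOCK_HZ * FLOPS_PER_CYCLE / 1e12

rmax_tflops = 37.42        # measured Linpack result quoted in the post
efficiency = rmax_tflops / rpeak_tflops

print(f"{cores} cores, Rpeak {rpeak_tflops:.2f} TFlop/s, "
      f"Linpack efficiency {efficiency:.0%}")
```

An Rmax around two-thirds of theoretical peak is in the range you would expect for a commodity cluster of this vintage on a DDR InfiniBand fabric; the gap between Rmax and Rpeak is largely down to memory bandwidth and interconnect overhead.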

Supercomputing and HPC in general can be a mind-boggling field. I'm fairly new to this arena, having only started with this department about 12 months ago. I find this stuff pretty exciting. We expect the J/PSI cluster to more than double in size in the next 12 months. I can't wait to see what the next year brings.

More later,

Ken S.

