Purdue named NSF high-performance computing center

Purdue University has been selected as one of five High Performance Computing Operations centers and will receive a $1.7 million grant to go with the distinction. The National Science Foundation awarded the designation and funds. Purdue will use the money to help expand the nation's growing use of collaborative computing tools in science.

Carol X. Song, a senior research scientist in the Office of Information Technology and the principal investigator for the Purdue TeraGrid project, said the naming of Purdue as a High Performance Computing Operations center, or HPC-Ops center, shows the value of distributed computing.

"We've been exploring cost-effective solutions to achieve supercomputing-level resources from campus computers using distributed computing," Song said. "This affirms that it is the direction that many science applications are going to go."

Other HPC-Ops centers have been established at Louisiana State University, the National Center for Supercomputing Applications in Champaign, Ill., the San Diego Supercomputer Center, and the Texas Advanced Computing Center in Austin. The Purdue HPC-Ops resources will be provided through the NSF-funded TeraGrid.

The Purdue HPC-Ops Center will focus on three areas:
  • Scientists will be able to run computing jobs using Purdue's distributed computing system, BoilerGrid. BoilerGrid uses Condor distributed computing scheduling software developed at the University of Wisconsin, and Purdue operates the largest academic Condor pool in the world. The computing cycles gleaned from the system are made available via the NSF's TeraGrid computing network. (A sketch of a Condor job submission appears after this list.)
  • A resource that renders scientific animations and visualizations using distributed resources on the TeraGrid. The TeraDRE (for TeraGrid Distributed Rendering Environment) allows computer graphics professionals to produce scientific animations inexpensively and in hours, rather than the days or weeks they would take to produce on standard computers.
  • A resource that will serve as a "sandbox" for IT research scientists to explore and test new ways to accelerate science computing applications, such as the genome sequence search tool BLAST and the tools available on science gateways such as nanoHUB. This will be accomplished by using Field Programmable Gate Arrays, or FPGAs, in a high-performance computing environment.
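
For readers unfamiliar with Condor, jobs are described in a plain-text "submit description file" that the scheduler uses to match work to idle machines in the pool. The sketch below is illustrative only; the program and file names are hypothetical, and actual BoilerGrid submissions may require additional site-specific settings.

    # Hypothetical Condor submit description file for one analysis job.
    # The "vanilla" universe runs an ordinary, unmodified program; the
    # executable, arguments, and file names below are placeholders.
    universe    = vanilla
    executable  = my_analysis
    arguments   = input.dat
    output      = job.out
    error       = job.err
    log         = job.log
    # Copy input files to whichever pool machine runs the job and
    # bring the results back when the job finishes.
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    # Submit one copy of the job.
    queue

The file is handed to the pool with the condor_submit command, and Condor matches the job to an idle machine. Frame-based rendering of the sort TeraDRE performs can use the same mechanism, typically ending the file with "queue N" so that N jobs are submitted, one per frame.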

"FPGAs have lower power consumption and, therefore, require less cooling. They are reconfigurable so they can be specialized to accelerate a variety of applications," Song said. "We expect this technology to play a more significant role in high-performance computing data centers."