Appro has announced that it has been awarded a subcontract from Lockheed Martin to deliver 147.5TF of Appro 1U-Tetra supercomputers in support of the DoD High Performance Computing Modernization Program (HPCMP). The HPCMP supports DoD objectives to strengthen national prominence by advancing critical technologies and expertise through the use of High Performance Computing (HPC). Research scientists and engineers benefit from HPC innovation to solve complex US defense challenges.

As a subcontractor to Lockheed Martin, Appro will provide system integration, project management, support and technical expertise for the installation and operation of the supercomputers, while Lockheed Martin, as prime contractor, will provide overall systems administration, computer operations management, applications user support, and data visualization services supporting five major DoD Supercomputing Resource Centers (DSRCs). This agreement was based on a common goal of helping customers reduce the complexity of deploying, managing and servicing their commodity High Performance Computing solutions while lowering their total cost of ownership.

The following are the supercomputing centers where Appro clusters will be deployed through the end of 2010:
Army Research Laboratory DSRC at Aberdeen Proving Ground, MD,
US Air Force Research Laboratory DSRC at Wright-Patterson AFB, OH,
US Army Engineer Research and Development Center DSRC in Vicksburg, MS,
Navy DoD Supercomputing Resource Center at Stennis Space Center, MS,
Arctic Region Supercomputing Center DSRC in Fairbanks, AK.

“We are extremely pleased to work with Lockheed Martin and be part of providing advanced cluster technologies and expertise in High Performance Computing (HPC) in support of the DoD High Performance Computing Modernization Program (HPCMP),” said Daniel Kim, CEO of Appro. “Lockheed Martin leads its industry in innovation and has raised the bar for reducing costs, decreasing development time, and enhancing product quality for this important government program, and our products and solutions are a perfect fit for their demanding expectations.”

Delegates attending the 157th session of the CERN Council have congratulated the laboratory on the LHC's successful first year of running, and looked forward to a bright future for basic science at CERN. Top of the agenda was the opening of CERN to new members. Formal discussions can begin now with Cyprus, Israel, Serbia, Slovenia and Turkey for accession to Membership, while Brazil's candidature for Associate Membership was also warmly received.

"It is very pleasing to see the increasing global support for basic science that these applications for CERN membership indicate," said CERN Director General Rolf Heuer. "Basic science responds to our quest to understand nature, and provides the very foundations of future innovation." 

CERN was established in 1954 by 12 European states, and its membership had grown to 20 by the end of the 1990s, with many countries from beyond the European region also playing an active role. Discussions on opening CERN to membership from beyond Europe, while at the same time allowing CERN to participate in future projects beyond Europe, reached a conclusion at the Council's June session this year. As of now, any country may apply for Membership or Associate Membership of CERN, and if CERN wishes to participate in projects outside Europe, mechanisms are in place to make that possible.

Under the scheme agreed by Council in June, Associate Membership is an essential pre-requisite for Membership. Countries may therefore apply for Associate Membership alone, or Associate Membership as a route to Membership. At this meeting, Council formally endorsed model agreements for both cases, and these will now serve as the basis for negotiations with candidates, which could lead to CERN welcoming its first Associate Members as early as next year. 

The other highlight of the meeting was the success of the LHC programme in 2010. Dozens of scientific papers have been published by the LHC experiments on the basis of data collected this year. These papers re-measure the known physics of the Standard Model of Particle Physics and take the LHC's first steps into new territory.

"The performance of the LHC this year has by far exceeded our expectations," said President of the CERN Council, Michel Spiro. "This bodes extremely well for the coming years, and I'm eagerly looking forward to new physics from the LHC." 

The LHC switched off for 2010 on 6 December. Details of the 2011 LHC run and plans for 2012 will be set following a special workshop to be held in Chamonix from 24-28 January, while the first beams of 2011 are scheduled for mid-February.

Hundreds of computational scientists from around the world will gather in Chattanooga July 11-15 to participate in technical and scientific talks, poster sessions and discussions of recent advances.

The event, SciDAC 2010, will also highlight successes of the Department of Energy's Scientific Discovery through Advanced Computing Program.

Thomas Zacharia, deputy director for science and technology at Oak Ridge National Laboratory, is the general chair for the event.

The SciDAC program brings together computational scientists, applied mathematicians and computer scientists from universities and national laboratories across the United States.

Areas of focus include understanding our universe on its largest and smallest scales, understanding Earth's climate and ramifications of climate change, and developing new energy sources.

For more information about the program, visit www.scidac.gov.

Computer scientists at Sandia National Laboratories in Livermore, Calif., have for the first time successfully demonstrated the ability to run more than a million Linux kernels as virtual machines.

The achievement will allow cyber security researchers to more effectively observe behavior found in malicious botnets, or networks of infected machines that can operate on the scale of a million nodes. Botnets, said Sandia’s Ron Minnich, are often difficult to analyze since they are geographically spread all over the world.

Sandia scientists used virtual machine (VM) technology and the power of its Thunderbird supercomputing cluster for the demonstration.

Running a high volume of VMs on one supercomputer — at a similar scale as a botnet — would allow cyber researchers to watch how botnets work and explore ways to stop them in their tracks. “We can get control at a level we never had before,” said Minnich.

Previously, Minnich said, researchers had only been able to run up to 20,000 kernels concurrently (a “kernel” is the central component of most computer operating systems). The more kernels that can be run at once, he said, the more effective cyber security professionals can be in combating the global botnet problem. “Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to ‘virtualize’ and monitor a cyber attack,” he said.

A related use for millions to tens of millions of operating systems, Sandia’s researchers suggest, is to construct high-fidelity models of parts of the Internet.

“The sheer size of the Internet makes it very difficult to understand in even a limited way,” said Minnich. “Many phenomena occurring on the Internet are poorly understood, because we lack the ability to model it adequately. By running actual operating system instances to represent nodes on the Internet, we will be able not just to simulate the functioning of the Internet at the network level, but to emulate Internet functionality.”

A virtual machine, originally defined by researchers Gerald J. Popek and Robert P. Goldberg as “an efficient, isolated duplicate of a real machine,” is essentially a set of software programs running on one computer that, collectively, acts like a separate, complete unit. “You fire it up and it looks like a full computer,” said Sandia’s Don Rudish. Within the virtual machine, one can then start up an operating system kernel, so “at some point you have this little world inside the virtual machine that looks just like a full machine, running a full operating system, browsers and other software, but it’s all contained within the real machine.”
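
To make that picture concrete, the pattern the article describes (a guest Linux kernel booted inside a virtual machine on a physical host) can be sketched with off-the-shelf tools. The snippet below is a hypothetical illustration using QEMU, not Sandia’s actual tooling; the kernel and initrd paths, memory size, and guest count are placeholders.

    # Hypothetical sketch: boot guest Linux kernels inside virtual machines with QEMU.
    # Paths, memory size, and guest count are placeholders, not Sandia's configuration.
    import subprocess

    def boot_vm(kernel="vmlinuz", initrd="initrd.img", mem_mb=128):
        """Launch one lightweight VM that boots its own Linux kernel."""
        return subprocess.Popen([
            "qemu-system-x86_64",
            "-m", str(mem_mb),           # guest memory
            "-kernel", kernel,           # guest Linux kernel image
            "-initrd", initrd,           # initial ramdisk for the guest
            "-append", "console=ttyS0",  # route the guest console to the serial port
            "-nographic",                # no display; serial console only
        ])

    # Many such guests can run side by side on a single physical host.
    guests = [boot_vm() for _ in range(4)]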

The Sandia research, two years in the making, was funded by the Department of Energy’s Office of Science, the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing (ASC) program and by internal Sandia funding.

To complete the project, Sandia utilized its Albuquerque-based 4,480-node Dell high-performance computer cluster, known as Thunderbird. To reach the one million Linux kernel figure, Sandia’s researchers booted 250 VMs, each running its own kernel, on every one of Thunderbird’s 4,480 physical machines. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia’s Albuquerque site that maintains Thunderbird and prepared it for the project.
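
The arithmetic behind that figure is straightforward; a quick back-of-the-envelope check, assuming the 250-VMs-per-node configuration described above:

    # Rough check of the "one million kernels" figure, assuming
    # 250 VMs (one guest kernel each) on every Thunderbird node.
    vms_per_node = 250
    physical_nodes = 4480
    total_kernels = vms_per_node * physical_nodes
    print(f"{total_kernels:,} Linux kernels")  # 1,120,000 -- just over one million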

The capability to run a high number of operating system instances inside of virtual machines on a high performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. The successful Sandia demonstration, he asserts, means that development of operating systems, configuration and management tools, and even software for scientific computation can begin now before the hardware technology to build such machines is mature.

“Development of this software will take years, and the scientific community cannot afford to wait to begin the process until the hardware is ready,” said Minnich. “Urgent problems such as modeling climate change, developing new medicines, and research into more efficient production of energy demand ever-increasing computational resources. Furthermore, virtualization will play an increasingly important role in the deployment of large-scale systems, enabling multiple operating systems on a single platform and application-specific operating systems.”

Sandia’s researchers plan to take their newfound capability to the next level.

“It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want,” said Minnich. “This approach we’ve demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs.” Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, “so that when we have a computer with 100 million CPUs we can actually use it.”

Scientists use the Roadrunner supercomputer to model a fundamental process in physics that could help explain how stars begin to explode into supernovae

Despite decades of research, understanding turbulence, the seemingly random motion of fluid flows, remains one of the major unsolved problems in physics.

“With the Roadrunner supercomputer, we can now look in detail at previously inaccessible flows,” said Daniel Livescu of the Laboratory’s Computational Physics and Methods group. Using a technique known as Direct Numerical Simulation (DNS), researchers solve the exact equations of fluid flow to calculate pressures, densities, and velocities at a resolution in time and space high enough to resolve the smallest eddies in the turbulent flow. This makes the DNS results as “real” as experimental data, but it requires immense computer power.
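
For reference, the “exact equations of fluid flow” that DNS resolves directly, with no turbulence model in between, are the compressible Navier-Stokes equations; a standard statement (not taken from the article) is

    \begin{align}
      \partial_t \rho + \nabla\cdot(\rho\mathbf{u}) &= 0, \\
      \partial_t(\rho\mathbf{u}) + \nabla\cdot(\rho\mathbf{u}\mathbf{u} + p\,\mathbf{I}) &= \nabla\cdot\boldsymbol{\tau}, \\
      \partial_t(\rho E) + \nabla\cdot\bigl[(\rho E + p)\,\mathbf{u}\bigr] &= \nabla\cdot(\boldsymbol{\tau}\cdot\mathbf{u}) - \nabla\cdot\mathbf{q},
    \end{align}

where ρ is the density, u the velocity, p the pressure, τ the viscous stress tensor, E the total energy per unit mass, and q the heat flux. Reacting-flow DNS of the kind described here adds species-transport and heat-release terms on top of these equations.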

In many instances, these simulations are the only way turbulence properties such as those found in cosmic explosions like supernovae can be accurately probed.  In these cases, turbulence is accompanied by additional phenomena such as exothermic reactions, shock waves, and radiation, which drastically increase the computational requirements.

Livescu and colleague Jamaludin Mohd-Yusof of the Laboratory’s Computational Physics and Methods group are using Roadrunner and a high-performance computational fluid dynamics code to perform the largest turbulent reacting flow simulations to date. The simulations consider the conditions encountered in the early stages of what is known as a “type Ia” supernova, which results from the explosion of a white dwarf star.

Type Ia supernovae have become a standard in cosmology because of their role in measuring distances in the universe. Yet how the explosion occurs is not fully understood. For example, the debate over the models that describe the burn rate and explosion mechanics is still not settled. In addition, the flame speed (the rate at which the flame front expands in the combustion reaction) is one of the biggest unknowns in current models.

“Solving the flow problem in a whole supernova is still very far in the future,” said Livescu, “but accurately solving the turbulent flow in a small domain around a single flame, characterizing the early stages of the supernova, has become possible. The very high resolution reacting turbulence simulations enabled by Roadrunner can probe parameter values close to the detonation regime, where the flame becomes supersonic, and explore for the first time the turbulence properties under such complex conditions.”
