Scientists use the Roadrunner supercomputer to model a fundamental process in physics that could help explain how stars begin to explode into supernovae

Despite decades of research, understanding turbulence, the seemingly random motion of fluid flows, remains one of the major unsolved problems in physics.

“With the Roadrunner supercomputer, we can now look in detail at previously inaccessible flows,” said Daniel Livescu of the Laboratory’s Computational Physics and Methods group. Using a technique known as Direct Numerical Simulation (DNS), researchers solve the exact equations of fluid flow to calculate pressures, densities, and velocities at resolutions in time and space high enough to capture the smallest eddies in the turbulent flow. This makes the DNS results as “real” as experimental data but requires immense computer power.
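For background, a minimal sketch (ours, not the article’s) of what DNS entails in the simplest constant-density case: the incompressible Navier-Stokes equations are integrated directly, with no turbulence model,

    \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0,

on a grid fine enough to resolve the smallest (Kolmogorov-scale) eddies. Because that scale shrinks as the Reynolds number Re grows, the number of grid points in three dimensions rises roughly as

    N \sim (L/\eta)^{3} \sim Re^{9/4},

which is why even modest increases in Re call for machines of Roadrunner’s class; the reacting, variable-density flows studied here add energy and species equations on top of this system.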

In many instances, these simulations are the only way turbulence properties such as those found in cosmic explosions like supernovae can be accurately probed.  In these cases, turbulence is accompanied by additional phenomena such as exothermic reactions, shock waves, and radiation, which drastically increase the computational requirements.

Livescu and colleague Jamaludin Mohd-Yusof of the Laboratory’s Computational Physics and Methods group are using Roadrunner and a high performance Computational Fluid Dynamics code to perform the largest turbulent reacting flow simulations to date. The simulations consider the conditions encountered in the early stages of what is known as a “type Ia” supernova, which results from the explosion of a white dwarf star.

Type Ia supernovae have become a standard distance indicator in cosmology. Yet how the explosion occurs is not fully understood. For example, the debate over the models that describe the burn rate and explosion mechanics is still not settled. In addition, the flame speed, that is, the rate of expansion of the flame front in a combustion reaction, is one of the biggest unknowns in current models.

“Solving the flow problem in a whole supernova is still very far in the future,” said Livescu, “but accurately solving the turbulent flow in a small domain around a single flame, characterizing the early stages of the supernova, has become possible. The very high resolution reacting turbulence simulations enabled by Roadrunner can probe parameter values close to the detonation regime, where the flame becomes supersonic, and explore for the first time the turbulence properties under such complex conditions.”
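As a rough shorthand (ours, not the article’s), the regimes Livescu describes can be labeled by the flame Mach number,

    Ma_f = s_f / c_s,

where s_f is the speed of the flame front and c_s the local sound speed: values well below one correspond to ordinary subsonic burning (a deflagration), and the detonation regime mentioned above is approached as the front nears and exceeds the sound speed.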

Government-wide emphasis on community access to data supports substantive push toward more open sharing of research data

During the May 5th meeting of the National Science Board, National Science Foundation (NSF) officials announced a change in the implementation of the existing policy on sharing research data. In particular, on or around October 2010, NSF is planning to require that all proposals include a data management plan in the form of a two-page supplementary document. The research community will be informed of the specifics of the anticipated changes and of the agency's expectations for the data management plans.

The changes are designed to address trends and needs in the modern era of data-driven science.

"Science is becoming data-intensive and collaborative," noted Ed Seidel, acting assistant director for NSF's Mathematical and Physical Sciences directorate. "Researchers from numerous disciplines need to work together to attack complex problems; openly sharing data will pave the way for researchers to communicate and collaborate more effectively."

"This is the first step in what will be a more comprehensive approach to data policy," added Cora Marrett, NSF acting deputy director. "It will address the need for data from publicly-funded research to be made public."

Seidel acknowledged that each discipline has its own culture about data-sharing, and said that NSF wants to avoid a one-size-fits-all approach to the issue. But for all disciplines, the data management plans will be subject to peer review, and the new approach will allow flexibility at the directorate and division levels to tailor implementation as appropriate.

This is a change in the implementation of NSF's long-standing policy that requires grantees to share their data within a reasonable length of time, so long as the cost is modest.

"The change reflects a move to the Digital Age, where scientific breakthroughs will be powered by advanced computing techniques that help researchers explore and mine datasets," said Jeannette Wing, assistant director for NSF's Computer & Information Science & Engineering directorate. "Digital data are both the products of research and the foundation for new scientific insights and discoveries that drive innovation."

NSF has a variety of initiatives focused on advancing the vision of data-intensive science. The issue is central to NSF's Sustainable Digital Data Preservation and Access Network Partners (DataNet) program in the Office of Cyberinfrastructure.

"Twenty-first century scientific inquiry will depend in large part on data exploration," said José Muñoz, acting director of the Office of Cyberinfrastructure. "It is imperative that data be made not only as widely available as possible but also accessible to the broad scientific communities."

Seidel noted that requiring the data management plans was consistent with NSF's mission and with the growing interest from U.S. policymakers in making sure that any data obtained with federal funds be accessible to the general public. Along with other federal agencies, NSF is subject to the Open Government Directive, an effort of the Obama administration to make government more transparent and more participatory.

Performing high-resolution, high-fidelity, three-dimensional simulations of Type Ia supernovae, the largest thermonuclear explosions in the universe, requires not only algorithms that accurately represent the correct physics, but also codes that effectively harness the resources of the next generation of the most powerful supercomputers.

[Figure: CASTRO scaling results on jaguarpf superimposed on a picture of nucleosynthesis during a Type Ia supernova explosion. A weak scaling approach was used in which the number of processors increases by the same factor as the number of unknowns in the problem. The red curve represents a single level of refinement; the blue and green curves are multilevel simulations with 12.5 percent of the domain refined. With perfect scaling the curves would be flat.]

Through the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC), Lawrence Berkeley National Laboratory's Center for Computational Sciences and Engineering (CCSE) has developed two codes that can do just that.

MAESTRO, a low Mach number code for studying the pre-ignition phase of Type Ia supernovae, as well as other stellar convective phenomena, has just been demonstrated to scale to almost 100,000 processors on the Cray XT5 supercomputer "Jaguar" at the Oak Ridge Leadership Computing Facility. And CASTRO, a general compressible astrophysics radiation/hydrodynamics code which handles the explosion itself, now scales to over 200,000 processors on Jaguar—almost the entire machine. Both scaling studies simulated a pre-explosion white dwarf with a realistic stellar equation of state and self-gravity.
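For context, "weak scaling" here has its standard meaning (our gloss, not the article's): the work per processor is held fixed while the processor count P grows, and efficiency is typically quoted as

    E(P) = T(1) / T(P),   with total problem size proportional to P,

so E(P) = 1, a constant run time, is perfect scaling; that is why flat curves in the figure above would indicate ideal behavior.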

These and further results will be presented at the 2010 annual SciDAC conference to be held July 11-15 in Chattanooga, Tennessee.

Both CASTRO and MAESTRO are structured grid codes with adaptive mesh refinement (AMR), which focuses spatial resolution on particular regions of the domain. AMR can be used in CASTRO to follow the flame front as it evolves in time, for example, or in MAESTRO to zoom in on the center of the star where ignition is most likely to occur.
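As a purely illustrative C++ sketch (not the actual CASTRO/MAESTRO data structures), a block-structured AMR hierarchy can be thought of as a stack of levels, each holding rectangular patches at some refinement ratio relative to the level below:

    #include <array>
    #include <vector>

    // One rectangular patch of cells, identified by inclusive index bounds
    // on its own level's grid.
    struct Patch {
        std::array<int, 3> lo, hi;   // lower/upper cell indices
        std::vector<double> data;    // one value per cell (single field)
    };

    // One refinement level: its patches plus the ratio to the next coarser level.
    struct Level {
        int ref_ratio;               // e.g. 2 or 4
        double dx;                   // cell size at this level
        std::vector<Patch> patches;
    };

    // Level 0 covers the whole domain; finer levels cover only the flagged
    // regions (e.g. around the flame front, or the center of the star).
    struct AmrHierarchy {
        std::vector<Level> levels;
    };

    int main() {
        AmrHierarchy amr;

        Level coarse;
        coarse.ref_ratio = 1;
        coarse.dx = 1.0;
        coarse.patches.push_back({{0, 0, 0}, {63, 63, 63},
                                  std::vector<double>(64 * 64 * 64, 0.0)});

        Level fine;
        fine.ref_ratio = 4;                     // 4x finer than the coarse level
        fine.dx = coarse.dx / fine.ref_ratio;
        // Refine only a 16^3 coarse-cell region, i.e. 64^3 fine cells.
        fine.patches.push_back({{96, 96, 96}, {159, 159, 159},
                                std::vector<double>(64 * 64 * 64, 0.0)});

        amr.levels.push_back(coarse);
        amr.levels.push_back(fine);
        return 0;
    }

The patch sizes and refinement ratio above are invented for illustration; the point is only that finer resolution is paid for solely where the physics demands it.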

Like many other structured grid AMR codes, CASTRO and MAESTRO use a nested hierarchy of rectangular grids. This grid structure lends itself naturally to a hybrid OpenMP/MPI parallelization strategy: at each time step the grid patches are distributed to nodes, MPI is used to communicate between the nodes, and OpenMP allows multiple cores on a node to work on the same patch of data. A dynamic load-balancing technique keeps the work evenly distributed across nodes.
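A minimal sketch of that hybrid pattern, written in generic C++ rather than taken from the codes themselves (the patch counts, sizes, and per-cell work are invented for illustration):

    // build: mpicxx -fopenmp hybrid_sketch.cpp
    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, nranks = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // Pretend there are 64 patches of 32^3 cells; deal them out round-robin
        // to the MPI ranks (real codes use smarter, dynamic load balancing).
        const int npatches = 64, ncells = 32 * 32 * 32;
        std::vector<std::vector<double>> my_patches;
        for (int p = rank; p < npatches; p += nranks)
            my_patches.emplace_back(ncells, 1.0);

        // Within each patch, OpenMP threads on the node share the cell loop.
        double local_sum = 0.0;
        for (auto& patch : my_patches) {
            #pragma omp parallel for reduction(+ : local_sum)
            for (int i = 0; i < ncells; ++i)
                local_sum += patch[i];   // stand-in for the real per-cell update
        }

        // MPI combines the per-rank results across nodes.
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            std::printf("global sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }

The division of labor is the point: MPI moves data between distributed-memory nodes, while OpenMP exploits the shared memory within a node without extra message traffic.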

Because MAESTRO uses a low Mach number approach, its time step is controlled by the fluid velocity instead of the sound speed, allowing a much larger time step than a compressible code would take. This enables researchers to evolve the white dwarf for hours instead of seconds of physical time, and thus to study the convection leading up to ignition. MAESTRO was developed in collaboration with astrophysicist Mike Zingale of Stony Brook University and, in addition to the SNe Ia research, is being used to study convection in massive stars, X-ray bursts, and classical novae.
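The gain can be seen from the usual CFL-type time-step limits (a textbook comparison, not a formula quoted from the article):

    \Delta t_{\mathrm{compressible}} \lesssim \frac{\Delta x}{|u| + c_s}, \qquad \Delta t_{\mathrm{low\ Mach}} \lesssim \frac{\Delta x}{|u|},

so the allowable step grows by roughly a factor of (|u| + c_s)/|u|, which is enormous when the convective Mach number |u|/c_s is small, consistent with evolving hours rather than seconds of physical time.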

MAESTRO and CASTRO share a common software framework. Soon, scientists will be able to initialize a CASTRO simulation with data mapped from a MAESTRO simulation, thus enabling them to study SNe Ia from end to end, taking advantage of the accuracy and efficiency of each approach as appropriate.

Scientists working on 10 research projects have been awarded precious computing time on JUGENE, one of the most powerful supercomputers in the world. The projects, which cover fields as diverse as astrophysics, earth sciences, engineering and physics, gained access to JUGENE thanks to the PRACE ('Partnership for advanced computing in Europe') project.

Scientists in varied disciplines require access to supercomputers to solve some of the most pressing issues facing society today. PRACE is meeting this challenge head on by establishing a high performance computing (HPC) research infrastructure in Europe. Its work is supported by the Research Infrastructures budget lines of the EU's Sixth and Seventh Framework Programmes (FP6 and FP7), and it has been identified as a priority infrastructure for Europe by ESFRI, the European Strategy Forum on Research Infrastructures.

JUGENE, which is hosted by Forschungszentrum Jülich in Germany, is the first supercomputer in the network and has the distinction of being Europe's fastest computer available for public research. Competition for access to this world-class facility is fierce; PRACE received 68 applications requesting a total of 1,870 million hours of computing time from this first call for proposals. The 10 winning projects, which are led by scientists in Germany, Italy, the Netherlands, Portugal and the UK, will share over 320 million core computing hours.

The successful projects were selected on the basis of their scientific and technical excellence, their clear need for access to a top supercomputer, and their ability to achieve significant research results within the allotted time.

Jochen Blumberger of University College London (UCL) in the UK has been awarded 24.6 million core hours to investigate electron transport in organic solar cells. Organic solar cells are a promising alternative to silicon-based solar cells. In addition to being cheap and easy to produce, they are light and flexible, meaning they can easily be fitted to windows, walls and roofs. On the downside, they suffer from a low light-to-electricity conversion efficiency. One reason for their low efficiency involves the fate of the photogenerated electrons. Dr Blumberger's work on JUGENE will advance our understanding of the processes taking place in organic solar cells.

Another project in the energy field comes from Frank Jenko of the Max Planck Institute for Plasma Physics in Germany. His 50 million core hour project, which will shed new light on plasma turbulence, represents a contribution to ITER, the international fusion energy megaproject.

Another UCL researcher, Peter Coveney, will use his 17 million core hour time budget to study turbulent liquids. Predicting the properties of turbulent fluids is extremely challenging, and Professor Coveney's work could have implications for our understanding of weather forecasting, transport and the dispersion of pollutants, gas flows in engines and blood circulation.

Meanwhile Zoltán Fodor of the Bergische Universität Wuppertal in Germany has been awarded 63 million core hours to go back in time to the start of the universe, to a period when infinitesimally small particles, such as quarks and gluons, combined to form protons and neutrons which in turn came together to form atomic nuclei. The goal of Dr Fodor and his team is to analyse the properties of strongly interacting matter under 'extreme conditions'.

Atmospheric boundary layers are at the heart of the 35 million core hour project submitted by Harmen Jonker of Delft University in the Netherlands. Boundary layers change as a result of daytime heating and wind shear. Understanding them is crucial for the generation of accurate weather, climate and air quality models.

The other projects awarded access to JUGENE in this round of calls for proposals focus on molecular dynamics, magnetic reconnection, the deformation of metals, supernovae and quarks.

Hundreds of computational scientists from around the world will gather in Chattanooga July 11-15 to participate in technical and scientific talks, poster sessions and discussions of recent advances.

The event, SciDAC 2010, will also highlight successes of the Department of Energy's Scientific Discovery through Advanced Computing Program.

Thomas Zacharia, deputy director for science and technology at Oak Ridge National Laboratory, is the general chair for the event.

The SciDAC program brings together computational scientists, applied mathematicians and computer scientists from universities and national laboratories across the United States.

Areas of focus include understanding our universe on its largest and smallest scales, understanding Earth's climate and ramifications of climate change, and developing new energy sources.

For more information about the program, visit www.scidac.gov.
