“Today, we pause to remember the nearly 3,000 men and women who lost their lives in the horrific attacks of 9/11 and to honor the heroes of that terrible day.  The people we lost came from all walks of life, all parts of the country, and all corners of the world.  What they had in common was their innocence and that they were loved by those they left behind.

“Although it has been eight years since that day, we cannot let the passage of time dull our memories or diminish our resolve.  We still face grave threats from extremists, and we are deeply grateful to all those who serve our country to keep us safe.  I’m especially proud of the men and women at the Department of Energy who work hard every day to keep nuclear weapons out of the hands of terrorists.

“So as we honor those we’ve lost, let us also recommit ourselves to protecting and serving the country we love.  After all, our future will be determined not by what terrorists tore down but by what we together build up.

“The families of the victims are in all of our thoughts and prayers today.”

If you wanted to perform a single run of a current model of the explosion of a star on your home computer, it would take more than three years just to download the data.  In order to do cutting-edge astrophysics research, scientists need a way to more quickly compile, execute and especially visualize these incredibly complex simulations.

Argonne scientists are working on more efficient techniques to allow visualizations of extremely complex phenomena, like this rendering of a supernova.

These days, many scientists generate quadrillions of data points that provide the basis for visualizations of everything from supernovas to protein structures—and they’re quickly overwhelming current computing capabilities. Scientists at the U.S. Department of Energy's Argonne National Laboratory are exploring new ways to speed up the process, using a technique called software-based parallel volume rendering.

Volume rendering is a technique that can be used to make sense of the billions of tiny points of data collected from an X-ray, MRI, or a researcher’s simulation. For example, bone is denser than muscle, so an MRI measuring the densities of every cubic millimeter of your arm will register higher readings for the radius bone in your forearm than for the surrounding muscle.

Argonne scientists are trying to find better, quicker ways to form a recognizable image from all of these points of data.  Equations can be written to search for sudden density changes in the dataset that might set bone apart from muscle, and researchers can create a picture of the entire arm, with bone and muscle tissue in different colors.
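In miniature, that pipeline can be sketched in a few lines of code: classify each data point with a transfer function that assigns a color and an opacity based on density, then blend the classified samples along each viewing ray. The Python sketch below uses a synthetic density array and made-up density thresholds as stand-ins for real MRI or simulation data; it illustrates the general volume rendering idea, not Argonne's software.

```python
# A minimal volume rendering sketch (illustrative only): a transfer function
# maps density to color and opacity, and samples are composited along rays.
import numpy as np

def transfer_function(density):
    """Map each density value to an RGBA value (color plus opacity)."""
    rgba = np.zeros(density.shape + (4,))
    muscle = (density > 0.3) & (density <= 0.7)    # hypothetical "muscle" density range
    bone = density > 0.7                           # hypothetical "bone" density range
    rgba[muscle] = (0.8, 0.2, 0.2, 0.05)           # translucent red
    rgba[bone] = (0.95, 0.95, 0.85, 0.6)           # more opaque off-white
    return rgba

def render(volume):
    """Composite the classified volume front to back along the z axis."""
    rgba = transfer_function(volume)
    image = np.zeros(volume.shape[:2] + (3,))
    transmittance = np.ones(volume.shape[:2] + (1,))   # light left on each ray
    for z in range(volume.shape[2]):
        color, alpha = rgba[:, :, z, :3], rgba[:, :, z, 3:4]
        image += transmittance * alpha * color
        transmittance *= 1.0 - alpha
    return image

if __name__ == "__main__":
    # Synthetic "arm": a dense inner cylinder (bone) inside a softer one (muscle).
    x, y, z = np.mgrid[-1:1:128j, -1:1:128j, -1:1:128j]
    radius = np.sqrt(x**2 + y**2)
    volume = np.where(radius < 0.25, 0.9, np.where(radius < 0.8, 0.5, 0.0))
    print(render(volume).shape)                    # a 128 x 128 RGB image
```

Here the dense inner cylinder stands in for bone and the softer outer cylinder for muscle; the loop accumulates color front to back until each ray's remaining transmittance is spent.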

“But on the scale that we’re working, creating a movie would take a very long time on your laptop—just rotating the image one degree could take days,” said Mark Hereld, who leads the visualization and analysis efforts at the Argonne Leadership Computing Facility.

First, researchers divide the data among many processing cores so that they can all work at once, a technique that’s called parallel computing. On Argonne’s Blue Gene/P supercomputer, 160,000 computing cores all work together in parallel. Today’s typical laptop, by comparison, has two cores.

Usually, the supercomputer’s work stops once the data has been gathered, and the data is sent to a set of graphics processing units (GPUs), which create the final visualizations. But the driving commercial force behind developing GPUs has been the video game industry, so GPUs aren’t always well suited for scientific tasks. In addition, the sheer amount of data that has to be transferred from location to location eats up valuable time and disk space.

“It’s so much data that we can’t easily ask all of the questions that we want to ask: each new answer creates new questions and it just takes too much time to move the data from one calculation to the next,” said Hereld. “That drives us to look for better and more efficient ways to organize our computational work.”

Argonne researchers wanted to know if they could improve performance by skipping the transfer to the GPUs and instead performing the visualizations right there on the supercomputer. They tested the technique on a set of astrophysics data and found that they could indeed increase the efficiency of the operation.
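A rough sketch of that idea, assuming the simplest possible decomposition: the volume is split into slabs along the viewing axis, each process renders only its own slab, and the partial images are then composited in depth order. The process count, opacity model, and random data below are placeholders for illustration; this is not the Blue Gene/P implementation.

```python
# Software-based parallel volume rendering, sketched with Python's
# multiprocessing module (illustrative only): render slabs in parallel,
# then composite the partial results front to back.
import numpy as np
from multiprocessing import Pool

def render_slab(slab):
    """Render one slab; return its accumulated color and remaining transmittance."""
    color = np.zeros(slab.shape[:2] + (3,))
    transmittance = np.ones(slab.shape[:2] + (1,))
    for z in range(slab.shape[2]):
        density = slab[:, :, z:z + 1]
        alpha = np.clip(density, 0.0, 1.0) * 0.1      # toy opacity model
        sample_color = np.dstack([density] * 3)       # toy grayscale transfer function
        color += transmittance * alpha * sample_color
        transmittance *= 1.0 - alpha
    return color, transmittance

def composite(partials):
    """Blend per-slab results in depth order into the final image."""
    image = np.zeros(partials[0][0].shape)
    transmittance = np.ones(partials[0][1].shape)
    for slab_color, slab_transmittance in partials:   # partials arrive front to back
        image += transmittance * slab_color
        transmittance *= slab_transmittance
    return image

if __name__ == "__main__":
    volume = np.random.rand(256, 256, 256)            # stand-in for simulation data
    slabs = np.array_split(volume, 8, axis=2)         # one slab per worker process
    with Pool(processes=8) as pool:
        partials = pool.map(render_slab, slabs)       # render all slabs in parallel
    print(composite(partials).shape)                  # a 256 x 256 RGB image
```

On a machine with hundreds of thousands of cores, the compositing step is itself carried out in parallel across the cores rather than gathered onto one process as in this toy version, but the render-then-composite structure is the same.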

“We were able to scale up to large problem sizes of over 80 billion voxels per time step and generated images up to 16 megapixels,” said Tom Peterka, a postdoctoral appointee in Argonne’s Mathematics and Computer Science Division.
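For a back-of-the-envelope sense of scale, assuming a single 4-byte value per voxel (the actual data layout is not stated here), one such time step amounts to hundreds of gigabytes before any rendering happens:

```python
# Rough data-size arithmetic for one time step, assuming 4 bytes per voxel.
voxels_per_time_step = 80e9
bytes_per_voxel = 4                                   # one single-precision value (an assumption)
gigabytes = voxels_per_time_step * bytes_per_voxel / 1e9
print(f"~{gigabytes:,.0f} GB per time step")          # roughly 320 GB
```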

Because the Blue Gene/P's main processors can visualize data as they are analyzed, Argonne's scientists can investigate physical, chemical, and biological phenomena with much more spatial and temporal detail.

According to Hereld, this new visualization method could enhance research in a wide variety of disciplines.  “In astrophysics, studying how stars burn and explode pulls together all kinds of physics: hydrodynamics, gravitational physics, nuclear chemistry and energy transport,” he said. “Other models study the migration of dangerous pollutants through complex structures in the soil, to see where they’re likely to end up; or combustion in cars and manufacturing plants—where fuel is consumed and whether it’s efficient.”

“Those kinds of problems often lead to questions that are very complicated to pose mathematically,” Hereld said. “But when you can simply watch a star explode through visualization of the simulation, you can gain insight that’s not available any other way.”

Argonne’s work in advanced computing is supported by the Department of Energy’s Office of Advanced Scientific Computing Research (ASCR).

The U.S. Department of Energy's Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

Simulation Provides Key Insights in the Design and Optimization of Solar Cells

Synopsys today announced that the U.S. Department of Energy's National Renewable Energy Laboratory (NREL), a leading government laboratory pursuing research in photovoltaic devices, has adopted Synopsys' Sentaurus TCAD for simulating solar cell characteristics to improve performance.

Photovoltaic technologies play a significant role in the worldwide push for the development and deployment of renewable sources of energy to reduce carbon emissions. NREL is at the forefront of these developments through its research in wind and solar energy. Its many accomplishments include the development and demonstration of an inverted metamorphic triple-junction solar cell with world-record efficiency of 40.8 percent. NREL also has research programs in thin-film and third-generation solar cells. Sentaurus TCAD simulations provide NREL scientists with valuable insight into the physical mechanisms that drive solar cell performance, thereby supporting the development of more efficient solar cell designs.  The simulations include the definition of the solar radiation incident on the cell, its reflection and transmission through anti-reflective coatings and surface texturing, and the absorption of the light and conversion to electrical current within semiconductor regions of the cell.
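As a simplified illustration of that last chain (incident spectrum, front-surface reflection, absorption, and conversion to current), the short-circuit current density of a cell can be estimated by integrating the absorbed photon flux over wavelength. The sketch below uses a made-up spectrum, a constant reflectance, and a Beer-Lambert absorption model with placeholder parameters; it is a toy estimate, not the Sentaurus TCAD model or NREL data.

```python
# Toy estimate of short-circuit current density:
#   J_sc ~ q * integral over wavelength of (photon flux * absorptance)
# All numbers below are illustrative placeholders.
import numpy as np

Q = 1.602e-19                                         # elementary charge (C)

# Hypothetical incident spectrum: wavelength (nm), photon flux (photons / m^2 / s / nm).
wavelength_nm = np.linspace(300, 1100, 401)
photon_flux = 3e18 * np.exp(-((wavelength_nm - 650) / 250) ** 2)

# Toy optics: constant front-surface reflectance plus Beer-Lambert absorption
# in an absorber layer of thickness d, with a crude wavelength-dependent alpha.
reflectance = 0.05
alpha_per_m = 1e7 * np.exp(-(wavelength_nm - 300) / 200)
thickness_m = 2e-6
absorptance = (1 - reflectance) * (1 - np.exp(-alpha_per_m * thickness_m))

# Assume every absorbed photon yields one collected electron (unit quantum efficiency).
step_nm = wavelength_nm[1] - wavelength_nm[0]
jsc_a_per_m2 = Q * np.sum(photon_flux * absorptance) * step_nm
print(f"Estimated J_sc: {jsc_a_per_m2 / 10:.1f} mA/cm^2")   # 1 A/m^2 = 0.1 mA/cm^2
```

A full device simulator additionally solves carrier transport and recombination through the layer stack; the point of the toy example is only the bookkeeping from incident photons to electrical current.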

"Solar cells are very complex, with many material layers and design trade-offs affecting major performance metrics such as efficiency," said Dean Levi, a principal scientist at NREL. "We view simulation as an important tool to understand the internal physics of our designs and to point towards ways to improve them."

NREL has recently implemented Sentaurus TCAD to create polycrystalline thin-film CuInGaSe2, cadmium telluride (CdTe), and silicon solar cell models.  These models have illustrated how material properties, grain boundaries, non-uniformity and interdigitated designs affect both device performance and characterization.

"The photovoltaic industry is experiencing tremendous growth and continues to drive toward higher efficiency and innovative solar cell designs," said Howard Ko, general manager and senior vice president of the Silicon Engineering Group at Synopsys. "Our Sentaurus TCAD tools offer many capabilities to simulate solar cell operation and performance characteristics to guide design improvements.  Having NREL as a user of our tools enables us to better understand the challenges and new directions of the fast-changing photovoltaic field."

  • Spectra Logic continues its proven record of success supporting federal, state and local government agencies, ranking in the top 10 percent of GSA government contractors for the third year in a row.

  • Exemplifying this success, Spectra’s Federal sales comprised more than 20 percent of overall company revenue in 2009.

Spectra Logic today announced that it ranked in the top ten percent of U.S. General Services Administration (GSA) information technology (IT) Schedule 70 contractors for 2009. This is the third consecutive year Spectra Logic has ranked as a top vendor based on annual revenues of pre-approved GSA Schedule 70 IT products and services purchased by federal, state and local government agencies. Spectra Logic’s Federal sales division has a proven record of success supporting government organizations, and its sales comprise more than 20 percent of overall company revenue.

"Federal, state and local government agencies want backup and archive solutions that can easily handle large, fast-growing data volumes and high data availability, while helping to deliver greener IT environments that use less energy and minimize floor space," said Brian Grainger, vice president of worldwide sales, Spectra Logic. "Spectra Logic’s solutions are ideally suited for the government market – from high density, energy-efficient tape libraries to disk-based deduplication appliances that reduce stored data volumes."

Spectra Logic added several new products and services to the GSA schedule in 2009, including the Spectra T-Finity enterprise tape library, the Spectra T680 mid-range tape library, Spectra’s disk-based nTier Deduplication product line, media, backup application software and TranScale upgrade service options. Spectra Logic’s archive and backup products have been listed on GSA Schedule 70 since 2003 under GSA contract number GS-35F-0563K.

“The high-capacity Spectra T-Finity tape library enables large enterprise-class organizations to protect, archive and quickly access petabytes of classified and unclassified data,” said Mark Weis, director of federal sales, Spectra Logic. “T-Finity’s inclusion on the GSA Schedule 70 simplifies the purchasing process for our federal, state and local government customers.”

GSA establishes long-term government-wide contracts with commercial firms to provide access to more than 11 million commercial products and services that can be ordered directly from GSA Schedule contractors. The Information Technology Schedule 70 (a Multiple Award Schedule) grants agencies direct access to commercial experts who can thoroughly address the needs of the government IT community through 20 Special Item Numbers (SINs).  These SINs cover general purpose commercial IT hardware, software and services.

In addition to GSA, Spectra Logic’s products are also listed on several Government Acquisition Contracts including ITES, NETCENTS and SEWP.

Particle beams are once again circulating in the world's most powerful particle accelerator, CERN's Large Hadron Collider (LHC). This news comes after the machine was handed over for operation on Wednesday morning. A clockwise circulating beam was established at ten o'clock this evening. This is an important milestone on the road towards first physics at the LHC, expected in 2010.

"It's great to see beam circulating in the LHC again," said CERN Director General Rolf Heuer. "We've still got some way to go before physics can begin, but with this milestone we're well on the way."

The LHC circulated its first beams on 10 September 2008, but suffered a serious malfunction nine days later. A failure in an electrical connection led to serious damage, and CERN has spent over a year repairing and consolidating the machine to ensure that such an incident cannot happen again.

"The LHC is a far better understood machine than it was a year ago," said CERN's Director for Accelerators, Steve Myers. "We've learned from our experience, and engineered the technology that allows us to move on. That's how progress is made."

Recommissioning the LHC began in the summer, and successive milestones have regularly been passed since then. The LHC reached its operating temperature of 1.9 Kelvin, or about -271 Celsius, on 8 October. Particles were injected on 23 October, but not circulated. A beam was steered through three octants of the machine on 7 November, and circulating beams have now been re-established. The next important milestone will be low-energy collisions, expected about a week from now. These will give the experimental collaborations their first collision data, enabling important calibration work to be carried out. This is significant, since up to now all the data they have recorded comes from cosmic rays. Ramping the beams to high energy will follow in preparation for collisions at 7 TeV (3.5 TeV per beam) next year.

Particle physics is a global endeavour, and CERN has received support from around the world in getting the LHC up and running again.

"It's been a herculean effort to get to where we are today," said Myers. "I'd like to thank all those who have taken part, from CERN and from our partner institutions around the world."
