UC first university in Australasia to teach supercomputing

The University of Canterbury, the first institution in the Southern Hemisphere to have an IBM Blue Gene supercomputer, is to become the first tertiary institution in Australasia to teach high performance computing.

Four new courses this year (2009) will teach students how to use the latest technology in parallel computing and state-of-the-art computing architectures. Ten scholarships (eight domestic, two international) funded by the University and IBM will be available for students taking the courses.

"This development shows UC to be at the forefront of high performance computing in New Zealand and reflects our recognition that 21st century students need 21st century skills," said Professor Tim David, Director of the Centre for Bioengineering, in the Department of Mechanical Engineering.

"Canterbury will be the only university in the country to have high performance computing in its curriculum."

The courses will be taught by Paul Walmsley, an acknowledged expert in high performance computing and an Adjunct Senior Fellow at UC.

They will provide students with an understanding of the different types of parallel computer architectures that are used in computational science and engineering disciplines to solve complex problems.

They will also introduce students to grid computing, an approach that is becoming more widely used in scientific computing.

Voltaire Ranked on Deloitte's Technology Fast 500

Voltaire has been named to the Deloitte Technology Fast 500 EMEA 2008, a ranking of the 500 fastest-growing technology companies in Europe, the Middle East and Africa. Rankings are based on percentage revenue growth over five years, from 2003 to 2007. Voltaire's 4,405 percent growth rate during this period earned it the No. 19 spot in the 2008 Deloitte Technology Fast 500 EMEA.

"This is the second year in a row that Deloitte has recognized Voltaire as a high growth company," said Ronnie Kenneth, Chairman and CEO, Voltaire.  "Our fast and steady growth represents the increasing number of customers who rely on Voltaire switches and software to improve the performance of their applications and increase the efficiency and manageability of their data centers."

"Being one of the 500 fastest growing technology companies in EMEA is an impressive accomplishment. We commend Voltaire for making the Deloitte Technology Fast 500 EMEA with a phenomenal 4,405 percent growth rate over five years," said Karel Bakkes, partner in charge of Deloitte's Technology Fast 500 EMEA program.

In addition to ranking on Deloitte's Technology Fast 500, Voltaire placed No. 2 on the Israel Technology Fast 50, a ranking of the 50 fastest-growing technology firms in Israel.

Additional information on the Deloitte Technology Fast 500 EMEA program is available at http://www.fast500europe.com/.

Humanities, HPC connect at NERSC

High performance computing and the humanities are finally connecting -- with a little matchmaking help from the Department of Energy (DOE) and the National Endowment for the Humanities (NEH). Both organizations have teamed up to create the Humanities High Performance Computing Program, a one-of-a-kind initiative that gives humanities researchers access to some of the world's most powerful supercomputers.

As part of this special collaboration, the DOE's National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory will dedicate a total of one million compute hours on its supercomputers, along with technical training, to humanities experts. The program's participants were selected through a highly competitive peer review process led by the NEH's Office of Digital Humanities.

"A connection between the humanities and high performance computing communities had never been formally established until this collaboration between DOE and NEH. The partnership allows us to realize the full potential of supercomputers to help us gain a better understanding of our world and history," says Katherine Yelick, NERSC Division Director.

The selected projects are currently getting up to speed with NERSC systems and staff.

"Supercomputers have been a vital tool for science, contributing to numerous breakthroughs and discoveries. The Endowment is pleased to partner with DOE to now make these resources and opportunities available to humanities scholars as well, and we look forward to seeing how the same technology can further their work," says NEH Chairman Bruce Cole.

Three projects have been selected to participate in the program's inaugural run.

The Perseus Digital Library Project, led by Gregory Crane of Tufts University in Medford, Mass., will use NERSC systems to measure how the meanings of words in Latin and Greek have changed over their lifetimes, and compare classic Greek and Latin texts with literary works written in the past 2,000 years. Team members say the work will be similar to methods currently used to detect plagiarism. The technology will analyze the linguistic structure of classical texts and reveal modern pieces of literature, written or translated into English, which may have been influenced by the classics.

"High performance computing really allows us to ask questions on a scale that we haven't been able to ask before. We'll be able to track changes in Greek from the time of Homer to the Middle Ages. We'll be able to compare the 17th century works of John Milton to those of Vergil, which were written around the turn of the millennium, and try to automatically find those places where Paradise Lost is alluding to the Aeneid, even though one is written in English and the other in Latin," says David Bamman, a senior researcher in computational linguistics with the Perseus Project.

According to Bamman, the basic methods for creating such a literary analysis tool have existed for some time, but the capability for analyzing such a huge collection of texts couldn't be fully developed due to a lack of compute power. He notes that the collaboration with DOE and NERSC eliminates that roadblock.
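
The article doesn't describe the Perseus pipeline itself, but a minimal sketch of the plagiarism-style comparison Bamman alludes to, scoring texts by word n-gram overlap, might look like the following Python. The function names and toy strings are illustrative assumptions; a real cross-language system would first lemmatize and align the Latin or Greek.

```python
# Illustrative sketch only, not the Perseus team's actual code:
# plagiarism-style comparison via overlap of word n-grams.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a, b, n=3):
    """Jaccard overlap of word n-grams: 1.0 = identical, 0.0 = disjoint."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Toy usage: real inputs would be lemmatized and mapped across
# languages (English vs. Latin) before comparison.
milton = "of man's first disobedience and the fruit of that forbidden tree"
echo = "of man's first disobedience and the fruit of the deadly tree"
print(jaccard_similarity(milton, echo, n=2))
```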

In addition to tracking changes in ancient literature, NERSC computers will also be reconstructing ancient artifacts and architecture with the High Performance Computing for Processing and Analysis of Digitized 3-D Models of Cultural Heritage project, led by David Koller, Assistant Director of the University of Virginia's Institute for Advanced Technology in the Humanities (IATH) in Charlottesville, Va.

Over the past decade, Koller has traveled to numerous museums and cultural heritage sites around the world, taking 3D scans of historical buildings and objects -- recording details down to a quarter of a millimeter.

According to Koller, a 3D scan of Michelangelo's Renaissance statue David contains billions of raw data points. Converting this raw data into a finished 3D model is extremely time consuming, and nearly impossible on a desktop computer. Limited compute power has also constrained Koller's ability to efficiently recreate large historical sites, like Roman ruins in Italy or Colonial Williamsburg in Virginia. He hopes to use NERSC resources to digitally restore these sites as three-dimensional images for analysis.

Over the years, Koller has also digitally scanned thousands of fragments that chipped off ancient works of art, some dating back to the ancient Greek and Roman empires. Koller hopes to use NERSC computers to put these broken works back together again like a digital 3D jigsaw puzzle.
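
As a rough illustration of the "digital jigsaw" idea (a toy sketch under stated assumptions, not Koller's method; a real matcher would also search over rotations and translations, e.g. with ICP), one could score how well two scanned fragment surfaces fit by the nearest-neighbor distance between their point clouds:

```python
# Hypothetical fragment-compatibility score: RMS distance from each
# point of one fragment edge to its nearest point on another edge.
# Lower scores suggest a better candidate fit.
import numpy as np

def fit_score(edge_a, edge_b):
    """RMS nearest-neighbor distance between two (N, 3) point clouds."""
    # Pairwise distances via broadcasting: shape (len_a, len_b).
    d = np.linalg.norm(edge_a[:, None, :] - edge_b[None, :, :], axis=2)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

# Toy usage with two nearly matching synthetic edges.
rng = np.random.default_rng(0)
edge_a = rng.random((200, 3))
edge_b = edge_a + rng.normal(scale=0.001, size=edge_a.shape)
print(fit_score(edge_a, edge_b))  # small value -> plausible fit
```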

"The collaboration with NERSC opens a wealth of resources that is unprecedented in the humanities," says Koller. "For years, science reaped the benefits of using supercomputers to visualize complex concepts like combustion. Humanists, on the other hand, didn't realize that supercomputers could potentially meet their needs too, until NEH and DOE proposed this collaboration last year.… I am really excited to see what comes out of this partnership."

In contrast to the other Humanities High Performance Computing projects that will be done at NERSC, the Visualizing Patterns in Databases of Cultural Images and Video project, led by Lev Manovich, Director of the Software Studies Initiative at the University of California, San Diego, is not focused on working with a single data set. Instead, this project hopes to investigate the full potential of cultural analytics using many types of data: millions of images (paintings, professional photography, graphic design, user-generated photos) as well as tens of thousands of videos (feature films, animation, anime music videos and user-generated videos).

"Digitization of media collections, the development of Web 2.0 and the rapid growth of social media have created unique opportunities to studying social and cultural processes in new ways. For the first time in human history, we have access to unprecedented amounts of data about people's cultural behavior and preferences as well as cultural assets in digital form," says Manovich.

For approximately three years, Manovich has been developing a broad framework for this research that he calls Cultural Analytics. The framework uses interactive visualization, data mining, and statistical data analysis for research, teaching and presentation of cultural artifacts, processes and flows. Manovich's lab is focusing on analysis and visualization of large sets of visual and spatial media: art, photography, video, cinema, computer games, space design, architecture, graphic and web design, product design. Another focus is on using the wealth of cultural information available on the web to construct detailed interactive spatio-temporal maps of contemporary global cultural patterns.

"I am very excited about his award to use NERSC resources, this opportunity allows us to undertake quantitative analysis of massive amounts of visual data," says Manovich. "We plan to process all images and video selected for our study using a number of algorithms to extract image features and structure; then we will use variety of statistical techniques -- including multivariate statistics methods such asfactor analysis, cluster analysis, and multidimensional scaling -- to analyze this new metadata; finally, we will use the results of our statistical analysis and the original data sets to produce a number of highly detailed visualizations to reveal the new patterns in our data."High performance computing and the humanities are finally connecting -- with a little matchmaking help from the Department of Energy (DOE) and the National Endowment for the Humanities (NEH). Both organizations have teamed up to create the Humanities High Performance Computing Program, a one-of-a-kind initiative that gives humanities researchers access to some of the world's most powerful supercomputers.

Cray Wins $52 Million Supercomputer Contract

The system at the Department of Energy's Berkeley Lab will be one of the world's fastest. Cray and the U.S. Department of Energy (DOE) Office of Science announced today that Cray has won the contract to install a next-generation supercomputer at the DOE's National Energy Research Scientific Computing Center (NERSC). The systems and multi-year services contract, valued at over $52 million, includes delivery of a Cray massively parallel processor supercomputer, code-named "Hood."

The contract also provides options for future upgrades that would quadruple the size of the system and eventually boost performance to one petaflops (1,000 trillion floating-point operations per second) and beyond. A successor to the massively parallel Cray XT3 supercomputer, the Hood system installed at NERSC will be among the world's fastest general-purpose systems. It will deliver sustained performance of at least 16 trillion calculations per second -- with a theoretical peak speed of 100 trillion calculations per second -- when running a suite of diverse scientific applications at scale. The system uses thousands of AMD Opteron processors running tuned, lightweight operating system kernels and interfaced to Cray's unique SeaStar network.

Cray will begin shipping the new supercomputer to the NERSC facility at the Lawrence Berkeley National Laboratory later this year, with completion of the installation anticipated in the first half of 2007 and acceptance in mid-2007. As part of a competitive procurement process, NERSC evaluated systems from a number of vendors using the NERSC Sustained System Performance (SSP) metric. The SSP metric, developed by NERSC, measures sustained performance on a set of codes designed to accurately represent the challenging computing environment at the Center.

"While the theoretical peak speed of supercomputers may be good for bragging rights, it's not an accurate indicator of how the machine will perform when running actual research codes," said Horst Simon, director of the NERSC Division at Berkeley Lab. "To better gauge how well a system will meet the needs of our 2,500 users, we developed SSP. According to this test, the new system will deliver over 16 teraflops on a sustained basis." "The Cray proposal was selected because its price/performance was substantially better than other proposals we received, as determined by NERSC's comprehensive evaluation criteria of more than 40 measures," said Bill Kramer, general manager of the NERSC Center.

"We are excited that NERSC will again be home to a large Cray supercomputer," said Cray President and CEO Peter Ungaro. "We are proud to have been selected by NERSC in a challenging and competitive evaluation process using a measurement that emulates real-world conditions, rather than a simplistic peak-performance measurement. NERSC joins a growing number of major high-performance computing centers that have selected Cray systems which exemplify our vision of Adaptive Supercomputing by handling scientific applications of ever-increasing complexity and scaling to the highest performance levels."

The Hood supercomputer at NERSC will consist of over 19,000 AMD Opteron 2.6-gigahertz processor cores, with two cores per socket making up one node. Each node has 4 gigabytes (4 billion bytes) of memory and a dedicated SeaStar connection to the internal network. The full system will consist of over 100 cabinets with 39 terabytes (39 trillion bytes) of aggregate memory capacity.
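
Those figures are internally consistent. Assuming two floating-point operations per core per clock (an assumption about the Opteron of that era, not a number from the article), the quoted core count, clock rate, and per-node memory roughly reproduce the 100-teraflop peak and 39-terabyte memory totals:

```python
# Sanity check of the quoted system numbers. flops_per_cycle is an
# assumption, not a figure from the article.
cores = 19_000
ghz = 2.6
flops_per_cycle = 2
cores_per_node = 2
gb_per_node = 4

nodes = cores / cores_per_node            # 9,500 nodes
memory_tb = nodes * gb_per_node / 1000    # 38 TB, near the quoted 39 TB
peak_tf = cores * ghz * flops_per_cycle / 1000  # 98.8 TF, ~100 TF quoted

print(nodes, memory_tb, peak_tf)
```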

"AMD and Cray continue to collaborate on innovative ways to leverage Direct Connect Architecture and HyperTransport™ technology," said Marty Seyer, senior vice president, Commercial Segment, AMD. "This innovation, along with Cray's supercomputing expertise and focus on scalable system architectures, has yet again resulted in a significant win. This is confirmation that customers believe that the design and performance of the AMD Opteron processor combined with Cray's superior system architecture provides a winning combination."

In keeping with NERSC's tradition of naming supercomputers after world-class scientists, the new system will be called "Franklin" in honor of Benjamin Franklin, America's first scientist. This year is the 300th anniversary of Franklin's birth. "Ben Franklin's scientific achievements included fundamental advances in electricity, thermodynamics, energy efficiency, materials science, geophysics, climate, ocean currents, weather, population growth, medicine and health, and many other areas," said NERSC's Bill Kramer. "In the tradition of Franklin, we expect this system to make contributions to science of the same high order."

Linux Networx Accelerators Expected to Drive up to 4x Price/Performance

Linux Networx, The Linux Supercomputing Company, today announced that it is applying its industry-leading supercomputing expertise to the delivery of a series of innovative application acceleration solutions expected to deliver up to 4x the price/performance value of current application accelerators for key applications. Featuring the industry's most powerful accelerators, tight integration with the computing system, and innovative partitioning of overhead tasks, the Linux Networx acceleration solutions can solve much larger problems than traditional accelerators, and can solve them faster and at a significantly lower overall cost.

"Linux Networx has years of expertise designing and integrating innovative supercomputing solutions that solve complex computational challenges," said Bo Ewald, CEO of Linux Networx. "This expertise has given us the insight and knowledge needed to develop accelerated systems that are expected to achieve results 5-20 times faster than traditional solutions for key applications. In addition, our internal architecture ensures that this performance can be sustained over time. As a result, our accelerators will be ideally suited for complex applications such as those in seismic processing, national defense, and other applications.

" Multi-paradigm computing using application accelerators can achieve phenomenal increases in computational throughput by shifting the execution of selected algorithms from a compute system's general-purpose CPU to one or more highly specialized accelerators. However, the performance of current accelerators is limited by high levels of data communications overhead and insufficient accelerator resources, squandering potential acceleration capacity.

Linux Networx solves these problems by using the industry's most powerful accelerators and an ultra-high-bandwidth, low-latency interface to the host system. Continuing the Linux Networx tradition of innovative leadership in the design, integration, and delivery of advanced supercomputing technologies, Linux Networx accelerators provide advanced acceleration at greatly reduced costs.

By leveraging a foundation of commercial off-the-shelf components and open source expertise, Linux Networx accelerators are expected to deliver up to 4x the price/performance value for key applications. By contrast, traditional accelerators are prohibitively expensive due to their reliance on proprietary hardware and development environments.

Linux Networx also announced that it is initiating a developer program to optimize the performance of the accelerators for targeted applications and industries, including seismic analysis, computational fluid dynamics, and high energy physics. The first commercial release of the new Linux Networx accelerators will be in the third quarter of 2006.