Berkeley Lab Proves 10-Gigabit Ethernet Data Transfer is a Reality

By Steve Fisher, Editor -- Just yesterday, Lawrence Berkeley National Laboratory and several key partners put together a demonstration system running a real-world scientific application that produced data on one cluster, then sent the resulting data across a 10 Gigabit Ethernet connection to another cluster, where it was rendered for visualization. By publicly proving more than switch interoperability, the demonstration was a first.

10 GigE team members: Wes Bethel, John Christman, John Shalf, Chip Smith, Mike Bennett

On June 17th it was announced that the final milestone in the IEEE standards approval process had been reached the previous week, when the IEEE 802.3ae specification for 10 Gigabit Ethernet was approved as an IEEE standard by the IEEE Standards Association (IEEE-SA) Standards Board. With that announcement, the speed of Ethernet operations, at least on paper, took a heavy-duty leap. However, achieving a 10-fold increase in actual Ethernet performance is still a challenge that can only be met with very high-end equipment and expertise. Yesterday, Lawrence Berkeley National Laboratory announced that it had teamed with Force10 Networks, SysKonnect, FineTec Computers and Ixia to put together a demonstration system running a real-world scientific application (Cactus, developed by Professor Ed Seidel and his team at the Albert Einstein Institute in Potsdam, Germany) that produces data on one cluster, then ships the resulting data across a 10 Gigabit Ethernet connection to another cluster, where it is rendered for visualization. Specifically, it visualizes the gravity waves resulting from the collision of two black holes.

To say the test went well would be something of an understatement. The Berkeley team not only met its goal of demonstrating sustained 10 Gigabit Ethernet performance, it surpassed it, delivering a sustained data transfer rate of 10.6 gigabits per second. The demonstration consisted of two powerful Linux clusters, one at each end of a pair of Force10 Networks switches connected via two pairs of 10 Gigabit Ethernet interfaces. One cluster of dual-CPU Linux PCs ran the Cactus simulation code and fed data to a second cluster of PCs running the Visapult application (a remote visualization tool developed by LBNL's Wes Bethel), which rendered the received data for real-time visual display and analysis. Each machine in the clusters is capable of delivering at least 930 Mb/s of load to the network. The team ran traffic from 10 of the 11 machines through one 10-gigabit link and the remaining traffic through the other. The Ixia equipment used in the demo was for monitoring purposes only; there was no analyzer-generated background traffic. "The only things that limit this demonstration are time and money, as the cluster is scalable and the network equipment has the capacity. We are very grateful to FineTec, SysKonnect, Force10 Networks, Quartet Network Storage and Ixia for their generosity in supporting this demonstration," said LBNL team member Mike Bennett. Bennett continued, "In spite of some last-minute glitches, we did what we said we'd do -- achieve more than 10 Gig of data throughput. It was great to see it all come together and run as predicted -- with real-world applications, that's not always the case."
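The arithmetic behind that traffic split can be sketched as a quick back-of-envelope check. The figures come from the article itself; note that 930 Mb/s per machine is a stated minimum, so these totals are lower bounds:

```python
# Back-of-envelope check of the demo's traffic split, using figures
# from the article (per-machine rate is a stated minimum, not a measurement).
PER_MACHINE_GBPS = 0.93    # each cluster node delivers at least 930 Mb/s
LINK_CAPACITY_GBPS = 10.0  # nominal 10 Gigabit Ethernet link rate

def link_load(machines: int, per_machine: float = PER_MACHINE_GBPS) -> float:
    """Minimum aggregate offered load, in Gb/s, for `machines` senders on one link."""
    return machines * per_machine

link_a = link_load(10)  # 10 of the 11 machines on the first 10 GigE link
link_b = link_load(1)   # the remaining machine on the second link

# Each link's offered load fits within its nominal capacity...
assert link_a <= LINK_CAPACITY_GBPS and link_b <= LINK_CAPACITY_GBPS
# ...while the combined minimum already exceeds what one 10 GigE link
# could carry, which is consistent with the measured 10.6 Gb/s aggregate.
print(f"link A: {link_a:.2f} Gb/s, link B: {link_b:.2f} Gb/s, "
      f"total: {link_a + link_b:.2f} Gb/s")
```

With both links combined, the minimum offered load works out to about 10.23 Gb/s, so a sustained 10.6 Gb/s is plausible once the machines exceed their 930 Mb/s floor.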

According to Bennett, the point of the demonstration was twofold: first, it proves that 10 Gigabit Ethernet, recently ratified by the IEEE, is real and necessary to solve bandwidth problems; second, there are real applications today that can use this new technology. It is also significant that prior to this demonstration, all of the public 10 Gigabit Ethernet demonstrations had been staged to show interoperability, not to test the performance of real-world applications. In that sense, it's really a first. "It is very important to show that all of the various vendors that have 10 Gigabit Ethernet products can actually operate correctly when inter-connected. This proves that the IEEE 802.3ae standard is a success," Bennett stated. "It is equally important to demonstrate the ability of the network system to deliver the capacity needed by bandwidth-hungry applications like Cactus and Visapult." Commenting on the mood around the lab, team member John Shalf said, "We're very excited and certainly the people with the Cactus base are pretty excited, but it's kind of limited by us actually trying to convince people that we really need to deploy this technology, so that kind of tempers the excitement. This is more of a technology demonstration so we can make the argument that this really is the way to go." Bennett added, "I think it's safe to say that we're all really excited about the new technology. It's been hectic, sometimes frustrating, but exciting - typical pre-demonstration stuff. We were sure the demo would run fine (several successful dry runs), so everyone's definitely stoked." Another team member, John Christman, had this to say: "It wasn't the lack of bandwidth that held us back, it was the lack of resources. Now that we've done 10 Gig, it's time to start looking at 100." A gutsy statement to be sure, but given some time and additional resources I bet they can do it. I'll put my money on Berkeley's team any day.
Additional technical information about the demo, along with more on Cactus and Visapult, can be found on the respective project websites.