Whether it stems from a malicious act or from Mother Nature, an infrastructure failure could cripple the nation as more people depend on interconnected services such as water, electricity, communication, transportation and health care.

University of Oklahoma School of Industrial and Systems Engineering researcher Kash Barker is leading a team to evaluate how analytics from multiple sources can increase network resilience. The National Science Foundation project, titled "Resilience Analytics: A Data-Driven Approach for Enhanced Interdependent Network Resilience," is a cooperative research effort between OU Gallogly College of Engineering colleague Charles Nicholson and researchers at the University of Virginia, University of Wisconsin-Madison, Stevens Institute of Technology, Penn State University, Virginia Tech and the University of North Texas.

"Resilience is broadly defined as the ability of a system to withstand the effects of a disruption and then recover rapidly and efficiently," Barker said. "As disruptions become more frequent - even inevitable - designing resilience into our infrastructure systems, such as the transportation and electric power networks, is becoming more important."

For example, when a large-scale tornado hits, debris may be strewn across roads, power lines disabled and citizens injured. The related systems - transportation, power grid and emergency care - all rely on each other: hospitals need electricity to serve an influx of patients, while repair crews need debris-free roads to reach downed power lines. Understanding how all of these systems work together throughout a disruptive event helps decision-makers allocate and schedule resources more effectively.
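
As a rough, hypothetical sketch (not the project's model), such interdependencies can be pictured as a directed dependency graph in which a disruption to one system cascades to every system that relies on it; the system names and links below are invented for illustration:

    # Toy interdependency model; systems and dependencies are illustrative only.
    from collections import deque

    # "A": [...] means the listed systems depend on A, so a failure of A can cascade to them.
    dependents = {
        "power_grid": ["hospitals", "water_treatment", "traffic_signals"],
        "transportation": ["repair_crews", "hospitals"],
        "repair_crews": ["power_grid"],   # crews restore the grid, so blocked roads delay it
    }

    def cascade(initially_disrupted):
        """Return every system reachable from the initially disrupted ones."""
        affected = set(initially_disrupted)
        queue = deque(initially_disrupted)
        while queue:
            system = queue.popleft()
            for d in dependents.get(system, []):
                if d not in affected:
                    affected.add(d)
                    queue.append(d)
        return affected

    # A tornado that blocks roads and downs lines disrupts both networks at once.
    print(sorted(cascade({"power_grid", "transportation"})))

Even this toy cascade shows why the tornado example cannot be handled one network at a time: blocked roads delay repair crews, which prolongs the power outage, which in turn strains hospitals.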

Barker's project is part of the first round of funding for the National Science Foundation activity known as CRISP: Critical Resilient Interdependent Infrastructure Systems and Processes. These three- and four-year projects, each with funding up to $2.5 million, are part of a multiyear initiative on risk and resilience.

The National Science Foundation's fiscal year 2015 investment in CRISP is a multidisciplinary collaboration among the Directorates for Engineering; Computer and Information Science and Engineering; and Social, Behavioral and Economic Sciences. As a result, Barker's project takes a multidisciplinary approach to evaluating and planning for resilience. The systems engineering perspective analyzes how these networks behave together and how they can be optimized. Computer and data sciences address how to turn large amounts of data into something meaningful for improving interdependent resilience, and the social sciences evaluate how the resilience of society depends on the resilience of the physical infrastructure.

"Analyzing data from a variety of sources is important," Barker said. "We emphasize the role of the community in providing data about not only their experience, but what is happening in the underlying physical infrastructure to give us a better idea of the behavior of interdependent networks before, during and after a disruption."

Knowledge gained from the data will lead to innovations in critical infrastructure, in strengthening community support functions and in delivering an even broader range of goods and services.

Pramod Khargonekar, National Science Foundation assistant director for engineering, predicts that this new understanding of infrastructure, combined with advances in modeling and smart technologies, will yield groundbreaking discoveries that improve resilience. "These research investments will help support national security, economy and people for decades to come," Khargonekar said.

Fall Plugfest Follows the Most Successful Plugfest to Date, Held Last Spring, Where More Than 200 Cables and Devices Were Tested

  • IBTA 16th Compliance and Interoperability Plugfest
  • OpenFabrics Alliance 8th Interoperability Event
  • Supercomputing 2009

The InfiniBand Trade Association (IBTA) today announced the 16th Compliance and Interoperability Plugfest. The Plugfest will take place October 12-16, 2009, at the University of New Hampshire’s Interoperability Lab. The event provides an opportunity for InfiniBand device and cable vendors to test their products for compliance with the InfiniBand architecture specification, as well as for interoperability with other InfiniBand products.

This event will include testing of both double data rate (DDR) 20Gb/s devices and quad data rate (QDR) 40Gb/s devices. There is a new test procedure for the recently released 120Gb/s 12x Small Form-Factor Pluggable (CXP) Interface Specification for cables, along with a new memory map test procedure for the EEPROMs included with QSFP and CXP active cables. The updated Wave Dispersion Penalty (WDP) testing will also be included.

The October Plugfest will include interoperability test procedures using Mellanox, QLogic and Voltaire products. The test procedures ensure that InfiniBand products are both compliant and interoperable, which in turn ensures the trouble-free deployment of InfiniBand clusters. More information on test procedures is available for IBTA members at: http://members.infinibandta.org/apps/org/workgroup/ciwg/documents.php?folder_id=298#folder_298

Plugfest registration is free for IBTA members; non-members may attend for a fee. More information is available on the IBTA website at: www.infinibandta.org.

The Plugfest program has been a significant contributor to the growth of InfiniBand in both the enterprise data center and the high-performance computing markets. According to the June 2009 TOP500 list, InfiniBand is now the leading server interconnect in the Top100 with 59 clusters.

The Integrators’ List has grown from 115 products in October 2008, to 297 products as of the last Plugfest event in April 2009. End users and OEMs frequently reference this list prior to the deployment of InfiniBand-related systems, including both small clusters and large-scale clusters of 1,000 nodes or more. Many OEMs use this list as a gateway in the procurement process.

Fall Plugfest follows the highly successful Spring ‘09 Plugfest

The Spring ‘09 Plugfest was the most successful in IBTA history, with more than 20 cable and device vendors in attendance. During the event, over 200 cables and 14 devices were tested. The number of devices qualifying for inclusion on the Integrators’ List has steadily increased; the list now includes 297 products.

Vendors recently adding products to the IBTA Integrators’ List include: Amphenol, Avago Technologies, Cinch Connectors, Emcore, LSI, Luxtera, Mellanox, Molex, Obsidian Research, Panduit, Quellan Inc (Intersil), Tyco Electronics, Volex, Voltaire and W.L. Gore. Several additional vendors will attend the October 2009 Plugfest, including QLogic, FCI and Hitachi.

Following Plugfest: OpenFabrics Alliance’s Eighth Interoperability Event

Following the IBTA Plugfest, the OpenFabrics Alliance will conduct its 8th Interoperability Event from Oct. 15-23, 2009. This session will focus on industry-wide interoperability using the OpenFabrics Alliance Software Stack. The event has separate eligibility requirements, costs and registration. For more information, please visit: http://www.iol.unh.edu/services/testing/ofa/events/Invitation_2009-10_OFA.php

IBTA to Celebrate 10-Year Anniversary at Supercomputing 2009

The IBTA will celebrate its 10-year anniversary at Supercomputing 2009 in Portland, Ore. on November 14-20. The IBTA will host InfiniBand demonstrations and an InfiniBand presentation theater. The IBTA invites all attendees to stop by booth number 139 at the show.

 
Photo caption: PNNL's Power Model Integrator has demonstrated up to a 50 percent improvement in forecasting future electricity needs over several commonly used tools. Project lead Luke Gosink consults on the use of the new tool, which could save millions in wasted electricity costs.

Accurately forecasting future electricity needs is tricky, with sudden weather changes and other variables impacting projections minute by minute. Errors can have grave repercussions, from blackouts to high market costs. Now, a new forecasting tool that delivers up to a 50 percent increase in accuracy and the potential to save millions in wasted energy costs has been developed by researchers at the Department of Energy's Pacific Northwest National Laboratory (http://www.pnnl.gov).

Performance of the tool, called the Power Model Integrator, was tested against five commonly used forecasting models on a year's worth of historical power system data.

"For forecasts one-to-four hours out, we saw a 30-55 percent reduction in errors," said Luke Gosink, a staff scientist and project lead at PNNL. "It was with longer-term forecasts — the most difficult to accurately make — where we found the tool actually performed best."

The advancement is featured this week as a best conference paper (http://energyenvironment.pnnl.gov/pdf/BMA_NIS_final.pdf) in the power system modeling and simulation session at the IEEE Power & Energy Society general meeting in Denver.

A delicate balancing act

Fluctuations in energy demand throughout the day, season and year, along with weather events and the increased use of intermittent renewable energy from the sun and wind, all contribute to forecasting errors. Miscalculations can be costly, put stress on power generators and lead to instabilities in the power system.

Grid coordinators face the daily challenge of forecasting the need for, and scheduling exchanges of, power to and from a number of neighboring entities. The sum of these future transactions, called the net interchange schedule, is submitted and committed to in advance. Accurate forecasting of the schedule is critical not only to grid stability, but also to a power purchaser's bottom line.
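
As a purely illustrative aside, the net interchange schedule for a future interval is simply the signed sum of the power scheduled to flow to and from each neighboring entity; the neighbors and quantities below are made up:

    # Hypothetical scheduled transactions for one future hour, in megawatts.
    # Positive values are scheduled imports from a neighbor; negative values are exports.
    scheduled_transactions = {
        "neighbor_A": +250.0,
        "neighbor_B": -120.0,
        "neighbor_C": +75.0,
    }

    # The net interchange schedule committed to in advance is the sum of these.
    net_interchange = sum(scheduled_transactions.values())
    print(f"Net interchange schedule: {net_interchange:+.1f} MW")   # +205.0 MW

The cost of forecast errors comes from having to cover any gap between this committed figure and what actually happens on the grid.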

"Imagine the complexity for coordinators at regional transmission organizations who must accurately predict electricity needs for multiple entities across several states," Gosink noted. "Our aim was to put better tools in their hands."

Five heads better than one

Currently, forecasters rely on a combination of personal experience, historical data and often a preferred forecasting model. Each model tends to excel at capturing certain grid behavior characteristics, but not necessarily the whole picture. To address this gap, PNNL researchers theorized that they could develop a method to guide the selection of an ensemble of models with the ideal, collective set of attributes in response to what was occurring on the grid at any given moment.

First, the team developed a statistical framework capable of guiding an iterative process to assemble, design, evaluate and optimize a collection of forecasting models. Researchers then used this patent-pending framework to evaluate and fine-tune a set of five forecasting methods that together delivered optimal results.

The resulting Power Model Integrator tool can adaptively combine the strengths of different forecasting models, continuously and in real time, to address a variety of scenarios that impact electricity use, from peak periods during the day to seasonal swings. To do this, the tool accesses short- and long-term trends on the grid as well as the historical forecasting performance of the individual and combined models. Minute by minute, the system adapts to and accounts for this information to form the best aggregated forecast possible at any given time.

"During these forecasting tasks, we noted that an ensemble of models, even those considered moderate performers, would routinely outperform individual, high-performing models," Gosink said.

Researchers used PNNL's Institutional Computing resources to develop and validate the tool, making it possible to process a year's worth of historical grid data within a few days. Supercomputing also made it possible to evaluate the tool's performance across multiple forecasting periods, ranging from 15, 30 and 60 minutes up to four hours. However, the tool also runs on standard computer workstations commonly used by the electric industry.

Flexibility in application

"The underlying framework is very adaptable, so we envision using it to create other forecasting tools for electric industry use," Gosink said. "We also are exploring other applications, from the prediction of chemical properties studied in computational chemistry applications to the identification of particles for high-energy physics experiments."

Initial development of the Power Model Integrator was funded by PNNL's Future Power Grid Initiative (http://gridoptics.pnnl.gov) and GridOPTICS.

Joseph Sawasky, currently Chief Information Officer and Associate Vice President, Computing and Information Technology at Wayne State University, has been selected as the President and CEO of Merit Network, Inc. Prior to joining Wayne State, Mr. Sawasky was the Chief Information Officer and Associate Vice President for Information Technology for the University of Toledo and University of Toledo Medical Center. His selection was announced today by Dr. Walter Milligan, Chief Information Officer at Michigan Technological University and Chair of the Search Committee, and a member of Merit's Board of Directors. 

"I am very pleased that Joe Sawasky has agreed to accept the position of President and CEO of Merit Network, Inc.," Dr. Milligan said. "Joe has a combination of experience and vision perfectly suited to lead Merit during a time of rapid change. The search process identified several excellent candidates for the position; however, Joe was clearly the most outstanding candidate."

Mr. Sawasky's appointment, which was recommended by a search committee consisting of members of Merit's Board of Directors and Merit staff, was unanimously approved by Merit's Board of Directors on July 31, 2015.  Mr. Sawasky will join Merit on August 31, 2015.

"I am thrilled to become part of Merit and continue their great tradition of connecting and growing the community of organizations serving society in Michigan and beyond," said Mr. Sawasky. "Merit is the original research and education network, and it has both a celebrated past and the advanced technology expertise to help drive Michigan forward in the 21st century. High performance networking, digital community building and advanced cyber security services are all part of their portfolio -- and these services are all essential elements for a thriving digital community. I am looking forward to working with the fantastic team at Merit, and serving the unique technology needs of their member organizations who are striving to make Michigan a preeminent place to live, learn and work."

Mr. Sawasky brings to Merit extensive IT leadership experience, spanning higher education, healthcare and manufacturing. He is currently the Chair of Merit's Board of Directors, and an active member of the Merit Services Innovation Group and Security Summit Planning Committee. During nearly 22 years with the University of Toledo, Mr. Sawasky focused on IT strategic planning, organizational leadership, project portfolio management, and customer satisfaction/quality programs. During his time at Wayne State, Mr. Sawasky and his IT organization developed Academica-ESP, a new enterprise portal service designed to facilitate and encourage real-time collaboration between students and faculty across campus.

When it comes to developing efficient, robust networks, the brain may often know best.

Researchers from Carnegie Mellon University and the Salk Institute for Biological Studies have, for the first time, determined the rate at which the developing brain eliminates unneeded connections between neurons during early childhood.

Though engineers use a dramatically different approach to build distributed networks of computers and sensors, the research team of computer scientists discovered that their newfound insights could be used to improve the robustness and efficiency of distributed computational networks. The findings, published in PLOS Computational Biology, are the latest in a series of studies being conducted in Carnegie Mellon’s Systems Biology Group to develop computational tools for understanding complex biological systems while applying those insights to improve computer algorithms.

Network structure is an important topic for both biologists and computer scientists. In biology, understanding how the network of neurons in the brain organizes to form its adult structure is key to understanding how the brain learns and functions. In computer science, understanding how to optimize network organization is essential to producing efficient interconnected systems.

But the processes the brain and network engineers use to learn the optimal network structure are very different.

Neurons create networks through a process called pruning. At birth and throughout early childhood, the brain’s neurons make a vast number of connections — more than the brain needs. As the brain matures and learns, it begins to quickly prune away connections that aren’t being used. When the brain reaches adulthood, it has about 50 to 60 percent fewer synaptic connections than it had at its peak in childhood.

In sharp contrast, computer science and engineering networks are often optimized using the opposite approach. These networks initially contain a small number of connections and then add more connections as needed.

“Engineered networks are built by adding connections rather than removing them. You would think that developing a network using a pruning process would be wasteful,” said Ziv Bar-Joseph, associate professor in Carnegie Mellon’s Machine Learning  and Computational Biology departments. “But as we showed, there are cases where such a process can prove beneficial for engineering as well.”

The researchers first determined key aspects of the pruning process by counting the number of synapses present in a mouse model’s somatosensory cortex over time. After counting synapses in more than 10,000 electron microscopy images, they found that synapses were rapidly pruned early in development, and then as time progressed, the pruning rate slowed.

The results of these experiments allowed the team to develop an algorithm for designing computational networks based on the brain pruning approach. Using simulations and theoretical analysis, they found that the neuroscience-based algorithm produced networks that were much more efficient and robust than those produced by current engineering methods.

In the networks created with pruning, the flow of information was more direct, and the networks provided multiple paths for information to reach the same endpoint, minimizing the risk of network failure.
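
The paper's algorithm is not reproduced in the article, but the general pattern described above, starting over-connected, observing which links actually carry traffic, and pruning the least-used links heavily at first and more gently later, can be sketched as follows. The usage measure, the pruning schedule and the target density are assumptions made for illustration:

    import random

    def prune_network(nodes, traffic_pairs, keep_fraction=0.45, rounds=5):
        """Toy pruning-based design: start with every possible link, count how often
        each link is used by the observed traffic, then repeatedly drop the
        least-used links, removing a shrinking share of the excess each round."""
        # Start "over-connected": every pair of nodes is linked.
        edges = {frozenset((u, v)) for u in nodes for v in nodes if u < v}
        usage = {e: 0 for e in edges}

        # Here "usage" simply counts direct source-destination demands on each link.
        for src, dst in traffic_pairs:
            e = frozenset((src, dst))
            if e in usage:
                usage[e] += 1

        # keep_fraction loosely mirrors the roughly 50-60 percent reduction described above.
        target = int(len(edges) * keep_fraction)
        for r in range(rounds):
            excess = len(edges) - target
            n_remove = max(excess // (2 ** (r + 1)), 0)   # decreasing pruning rate
            least_used = sorted(edges, key=lambda e: usage[e])[:n_remove]
            edges -= set(least_used)
        return edges

    nodes = list(range(10))
    traffic = [(random.randrange(10), random.randrange(10)) for _ in range(200)]
    kept = prune_network(nodes, traffic)
    print(f"kept {len(kept)} of {len(nodes) * (len(nodes) - 1) // 2} possible links")

The shrinking number of removals per round mirrors the decreasing pruning rate the team measured in the mouse cortex, while keeping more than the bare minimum of links preserves the redundant paths that made the pruned networks robust.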

“We took this high-level algorithm that explains how neural structures are built during development and used that to inspire an algorithm for an engineered network,” said Alison Barth, professor in Carnegie Mellon’s Department of Biological Sciences and member of the university’s BrainHub initiative. “It turns out that this neuroscience-based approach could offer something new for computer scientists and engineers to think about as they build networks.”

As a test of how the algorithm could be used outside of neuroscience, Saket Navlakha, assistant professor at the Salk Institute’s Center for Integrative Biology and a former postdoctoral researcher in Carnegie Mellon’s Machine Learning Department, applied the algorithm to flight data from the U.S. Department of Transportation. He found that the synaptic pruning-based algorithm created the most efficient and robust routes to allow passengers to reach their destinations.

“We realize that it wouldn’t be cost effective to apply this to networks that require significant infrastructure, like railways or pipelines,” Navlakha said. “But for those that don’t, like wireless networks and sensor networks, this could be a valuable adaptive method to guide the formation of networks.”

In addition, the researchers say the work has implications for neuroscience. Barth believes that the change in pruning rates from adolescence to adulthood could indicate that there are different biochemical mechanisms that underlie pruning.

“Algorithmic neuroscience is an approach to identify and use the rules that structure brain function,” Barth said. “There’s a lot that the brain can teach us about computing, and a lot that computer science can do to help us understand how neural networks function.”
