New design should enable much more flexible traffic management, without sacrificing speed

Like all data networks, the networks that connect servers in giant server farms, or servers and workstations in large organizations, are prone to congestion. When network traffic is heavy, packets of data can get backed up at network routers or dropped altogether.

Also like all data networks, big private networks have control algorithms for managing network traffic during periods of congestion. But because the routers that direct traffic in a server farm need to be superfast, the control algorithms are hardwired into the routers' circuitry. That means that if someone develops a better algorithm, network operators have to wait for a new generation of hardware before they can take advantage of it.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and five other organizations hope to change that, with routers that are programmable but can still keep up with the blazing speeds of modern data networks. The researchers outline their system in a pair of papers being presented at the annual conference of the Association for Computing Machinery's Special Interest Group on Data Communication.

"This work shows that you can achieve many flexible goals for managing traffic, while retaining the high performance of traditional routers," says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science at MIT. "Previously, programmability was achievable, but nobody would use it in production, because it was a factor of 10 or even 100 slower."

"You need to have the ability for researchers and engineers to try out thousands of ideas," he adds. "With this platform, you become constrained not by hardware or technological limitations, but by your creativity. You can innovate much more rapidly."

The first author on both papers is Anirudh Sivaraman, an MIT graduate student in electrical engineering and computer science, advised by both Balakrishnan and Mohammad Alizadeh, the TIBCO Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT, who are coauthors on both papers. They're joined by colleagues from MIT, the University of Washington, Barefoot Networks, Microsoft Research, Stanford University, and Cisco Systems.

Different strokes

Traffic management can get tricky because of the different types of data traveling over a network, and the different types of performance guarantees offered by different services. With Internet phone calls, for instance, delays are a nuisance, but the occasional dropped packet -- which might translate to a missing word in a sentence -- could be tolerable. With a large data file, on the other hand, a slight delay could be tolerable, but missing data isn't.

Similarly, a network may guarantee equal bandwidth distribution among its users. Every router in a data network has its own memory bank, called a buffer, where it can queue up packets. If one user has filled a router's buffer with packets from a single high-definition video, and another is trying to download a comparatively tiny text document, the network might want to bump some of the video packets in favor of the text, to help guarantee both users a minimum data rate.

A router might also want to modify a packet to convey information about network conditions, such as whether the packet encountered congestion, where, and for how long; it might even want to suggest new transmission rates for senders.

Computer scientists have proposed hundreds of traffic management schemes involving complex rules for determining which packets to admit to a router and which to drop, in what order to queue the packets, and what additional information to add to them -- all under a variety of different circumstances. And while in simulations many of these schemes promise improved network performance, few of them have ever been deployed, because of hardware constraints in routers.

The MIT researchers and their colleagues set themselves the goal of finding a set of simple computing elements that could be arranged to implement diverse traffic management schemes, without compromising the operating speeds of today's best routers and without taking up too much space on-chip.

To test their designs, they built a compiler -- a program that converts high-level program instructions into low-level hardware instructions -- which they used to compile seven experimental traffic-management algorithms onto their proposed circuit elements. If an algorithm wouldn't compile, or if it required an impractically large number of circuits, they would add new, more sophisticated circuit elements to their palette.

Assessments

In one of the two new papers, the researchers provide specifications for seven circuit types, each of which is slightly more complex than the last. Some simple traffic management algorithms require only the simplest circuit type, while others require more complex types. But even a bank of the most complex circuits would take up only 4 percent of the area of a router chip; a bank of the least complex types would take up only 0.16 percent.

Beyond the seven algorithms they used to design their circuit elements, the researchers ran several other algorithms through their compiler and found that they compiled to some combination of their simple circuit elements. 

"We believe that they'll generalize to many more," says Sivaraman. "For instance, one of the circuits allows a programmer to track a running sum -- something that is employed by many algorithms."

In the second paper, they describe the design of their scheduler, the circuit element that orders packets in the router's queue and extracts them for forwarding. In addition to queuing packets according to priority, the scheduler can also stamp them with particular transmission times and forward them accordingly. Sometimes, for instance, it could be useful for a router to slow down its transmission rate, in order to prevent bottlenecks elsewhere in the network, or to help ensure equitable bandwidth distribution.
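
As a rough software analogy, and not the authors' circuit design, such a scheduler can be modeled as a priority queue: each packet is inserted with a rank, which may be a priority or a target transmission time, and packets always leave in ascending rank order. A minimal Python sketch with hypothetical names:

    import heapq
    import itertools

    class RankScheduler:
        # Packets are pushed with a rank (priority or departure time)
        # and popped in ascending rank order; a counter breaks ties so
        # equal-rank packets keep their arrival order.
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()

        def enqueue(self, rank, packet):
            heapq.heappush(self._heap, (rank, next(self._seq), packet))

        def dequeue(self):
            rank, _, packet = heapq.heappop(self._heap)
            return rank, packet

    sched = RankScheduler()
    sched.enqueue(5.0, "video chunk")    # stamped to depart at t = 5.0
    sched.enqueue(2.0, "text document")  # stamped to depart at t = 2.0
    print(sched.dequeue())  # (2.0, 'text document'): earliest stamp leaves first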

Finally, the researchers drew up specifications for their circuits in Verilog, the language electrical engineers typically use to design commercial chips. Standard analysis tools for Verilog designs verified that a router using the researchers' circuits would be fast enough to support the packet rates common in today's high-speed networks, forwarding a packet of data every nanosecond.

Three young researchers from the Technical University of Munich (TUM) have won a prestigious Bell Labs Prize, tied for third place in a global competition in information and communications technology. They showed how a single type of transceiver could be used across the full range of digital communications systems, ensuring in each instance that its transmission rate will approach the theoretical limit. Their method could enhance flexibility and reduce costs in the engineering of wireless, wireline, optical fiber, and satellite systems.

Dr. Georg Boecherer is a postdoctoral researcher in the Department of Electrical and Computer Engineering at TUM; Patrick Schulte and Fabian Steiner are doctoral candidates who began working with Boecherer as master's students. The three share equally in the prize, a personal award of $25,000 from Alcatel-Lucent Bell Labs. The other prizes went to a professor at Carnegie Mellon University and a professor at the University of California, San Diego.

Seeking business breakthroughs through science

The competition called for innovative proposals "that have the potential to change the way we live, work, and communicate with each other." From more than 250 ideas submitted in April, the field was narrowed to 20 teams. These were matched with Bell Labs researchers and business managers to further develop their proposals. Seven finalists presented their ideas at Bell Labs' headquarters in the U.S. The criteria on which they were judged included innovation potential, technical merit, feasibility, and business impact. One first prize was awarded; the TUM team tied for third place, and no second prize was given.

"Nothing brings out the creative power of research better than competing to transform society and grow industries," says Marcus Weldon, president of Bell Labs and chief technology officer of Alcatel-Lucent. "The Bell Labs Prize celebrates the interplay of science and engineering that has flourished here for 90 years, allowing us to dream up the future and create the ideas that will build the technology necessary to get us there."

RateX - "a universal method"

Information theory determines the upper limit to how much data can be transmitted reliably over a given channel, taking into account characteristics such as the signal-to-noise ratio. Over the past few decades, engineers have developed information coding and modulation schemes that seek to optimize performance for specific types of systems, but two serious challenges remain: There's always a gap between what theory predicts and what the technology can deliver, and no approach is universally applicable. The TUM researchers claim to have overcome both of these limitations.

Their "RateX" method brings together three essential functions for the first time, in a way that offers the industry an unprecedented level of flexibility. "It's an elegant solution," says Prof. Gerhard Kramer, Chair for Communications Engineering, "creating a clean layering of signal shaping, encoding, and modulation within the physical layer of the Open Systems Interconnection model. This is a universal method that could become the de facto way of doing things in the future."

Many components used in diverse communications systems today could be replaced by a single chip implementing the RateX algorithm. Not only would such a chip be less complex and more power-efficient than today's technology, but it also could offer cost and reliability advantages associated with economies of scale. Within ten or fifteen years there could be billions of such chips in use if, as Kramer expects, the RateX method becomes standard for 5G wireless, optical, satellite, DSL, and other communications technologies.

The key to closing the capacity gap, Georg Boecherer says, was to add one special device, a distribution matcher. "The only thing this device is doing," he explains, "is transforming bits with a uniform distribution into a sequence of symbols with non-uniform distribution. This mapping is reversible, so from the sequence of symbols we can recover the bits."
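
A toy example, far simpler than the fixed-length construction the TUM team actually uses but illustrating the same principle, can be written in a few lines of Python: parse uniformly random bits with a prefix-free code so that symbol A appears half the time and B and C a quarter of the time each, then invert the mapping to recover the bits exactly.

    # Toy distribution matcher (illustration only): uniform bits in,
    # non-uniformly distributed symbols out, and the mapping is reversible.
    CODE = {"0": "A", "10": "B", "11": "C"}   # prefix-free input patterns
    INVERSE = {v: k for k, v in CODE.items()}

    def match(bits):
        # Uniform bits make "0" twice as likely as "10" or "11", so the
        # output symbols follow the distribution (1/2, 1/4, 1/4).
        symbols, buf = [], ""
        for b in bits:
            buf += b
            if buf in CODE:
                symbols.append(CODE[buf])
                buf = ""
        return symbols

    def dematch(symbols):
        # Reversibility: each symbol maps back to its exact bit pattern.
        return "".join(INVERSE[s] for s in symbols)

    bits = "0110100"
    assert dematch(match(bits)) == bits
    print(match(bits))  # ['A', 'C', 'A', 'B', 'A'] -- 'A' dominates, as intended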

Combining the distribution matcher and a novel coding design with existing tools should push practical transmission rates to the theoretical limits -- and the first experimental studies with optical fiber and wireline DSL systems appear to confirm this. The practical consequences should include higher data rates, a longer reach, and lower power consumption for all kinds of systems. Because RateX adapts easily to the actual channel, it should be as well suited for the short-range wireless links that will be a ubiquitous feature of the "Internet of Things" as for the world's long-haul fiber-optic backbones.

This research has been supported by the German Federal Ministry of Education and Research (BMBF) in the framework of an Alexander von Humboldt Professorship and by the TUM Institute for Advanced Study.

Fast-growing cyber security software firm attracts new clients in UK and US

Cyber security software company Panaseer has successfully raised $2.25 million through a syndicated seed investment round. The investors include Albion Ventures, Notion Capital, Winton Technology Ventures, C5 Holdings, and Elixirr.

Panaseer was founded in 2014 by Nik Whitfield and a team of cyber security experts from BAE Systems. It is one of a new wave of UK cyber security start-ups working with commercial enterprises and was selected to accompany Prime Minister David Cameron to Washington DC earlier this year as part of a cyber-security delegation. Panaseer has since signed its first New York-based Financial Services client.

According to Gartner, corporates spent $71 billion on cybersecurity in 2014 and this sum is expected to increase rapidly. In the UK, 80% of large corporates suffered a cyber breach in 2014, with an average cost of £1m per breach.

A key trend to note is that cybercrime has latterly become highly sophisticated: advanced tools that were previously the domain of national espionage have filtered down to the criminal world, with organised gangs behind highly targeted attacks to steal cash and trade secrets from multi-national enterprises.

Panaseer uses the latest data science techniques to help major corporations answer the question, “How Secure Are We?” The company has built a platform which analyses the data provided by all the different cyber security solutions and provides a visual interface to drill down into and understand this information, informing board-level decisions on the allocation of security budgets and exposing weaknesses in cybersecurity policies.

As well as growing a New York client base, Panaseer is currently working with several major UK Financial Services customers.

Andy Williams, UK government’s Cyber Envoy to the US says: “As one of the new generation of UK cyber start-ups that accompanied the Prime Minister to the US earlier this year, it is great news that Panaseer has now secured its initial US client business and external investment. Panaseer’s application of advanced data science techniques to help secure the enterprise is an excellent example of British cyber innovation with global market potential.”

Ed Lascelles, Albion Ventures says: “Cyber security is already a huge issue for many firms worldwide and the risks we all face are increasing. We are delighted to be working with Nik and his team at Panaseer, who have a tremendous amount of experience and respect in the industry, as they build on their early success in the emerging Security Intelligence sector.”

Stephen Chandler, Notion Capital says: “Data security and system availability are now board level issues for most large corporates, but the threat is now so varied and complex that it requires a fresh approach. Panaseer has the most comprehensive one we have seen, using modern data science techniques and risk-based analysis to prioritise effort and spend across the CIO’s full IT estate. They have a smart and capable team, both technically and commercially, so should be able to take market share in this exciting growth area."

Daniel Freeman, C5 Accelerate says: “Cyber security has become a central question for boards and CEOs, and Panaseer is perfectly placed to deliver security insights to key decision makers while ensuring that IT managers optimise security spending around solutions that deliver proven value. C5 is excited to support technology innovators that can leverage cloud computing to drive new advances in cyber security, and we are proud to be backing Nik and the team at Panaseer as they continue to develop their world-class platform.”

Stephen Newton, Founder & Managing Partner, Elixirr, says: “We are delighted to be a part of this seed investment round. We love to invest in game-changing companies and the insight Panaseer can provide to enterprises is unrivalled. We know from our clients that cyber security is a big topic around the boardroom table at the moment and we are looking forward to seeing Panaseer join these conversations.”

Commenting, Nik Whitfield, CEO, Panaseer says: “Recent high-profile hacks have shot cyber security to the top of companies’ risk concerns, and major enterprises are only too aware of the threats they might face from the various hacking groups who target them. The increased awareness of cyber-attacks presents a new problem for those tasked with protecting their organisations, and boards are now regularly asking ‘How Secure Are We?’ New Big Data technologies such as the Panaseer Security Data Lake allow us to analyse this cyber-relevant data to produce previously unobtainable insight.

“We are delighted to have successfully completed this round of investment. We look forward to working closely with our partners and making use of their considerable experience.”

CAPTION: Climate network visualization revealing the backbone structure of strong statistical interrelations (links) between surface air temperature time series (nodes) all over the globe, with features including the tropical Walker circulation and surface ocean currents. CREDIT: T. Nocke/PIK Potsdam and C. Tominski/Uni Rostock

Researchers in Potsdam have developed a new open source Python-based software package for examining climate change and other data-heavy networks on a macroscopic level

 If you wanted to know whether shifts in the African climate during Paleolithic times correlated with the appearance and disappearance of hominin species, how would you find the answer? It's a tricky question because of the massive amounts of noisy, complicated data you would need to analyze.

Now researchers in Germany have developed a new tool to help grapple with enormous data sets and reveal big picture trends, such as climatic tipping points and their effects on species. The researchers created a software package based on the Python programming language that unifies complex network theory and nonlinear time series analysis - two important data analysis concepts.

A complex network is just that -- a social, biological or technological network with patterns of connections that are neither regular nor purely random. Nonlinear time series analyses are often used to look at complex systems, including those that unfold in a chaotic manner. Many natural phenomena, like changing weather patterns, are nonlinear in nature -- as are man-made systems, like financial markets.

The researchers named the software that unifies the two concepts pyunicorn. They discuss their findings in this week's CHAOS, from AIP Publishing.

"Pyunicorn works like a macroscope, [which], if used the right way, allows to distill the essence of information from a network or time series data," said Jonathan Donges, a former Ph.D. student in the group of Jürgen Kurths and co-speaker of a flagship project, called COPAN, at the Potsdam Institute for Climate Impact Research (PIK) in Germany, which aims to develop conceptual models of global socio-environmental dynamics.

The software could be used to identify critical network structures, such as bottlenecks and backbones, for transport processes, as well as to reveal tipping points in climatological or physiological time series.

Accordingly, the package's main application is the analysis of data from observations, experiments and model systems by way of graphs and time series of several quantities in parallel, such as temperature, precipitation and wind for climate, or blood pressure and breathing for physiology. By applying recurrence network analysis, which studies when a system returns to a former state, pyunicorn was able to detect tipping points in time series. This includes the aforementioned paleoclimate records, as well as the early emergence of a severe condition in pregnant women known as preeclampsia.
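
In outline, an analysis of this kind might look as follows. The class and keyword names here follow pyunicorn's documented interface at the time of writing and should be treated as assumptions to check against the current release.

    # Sketch of a recurrence network analysis with pyunicorn; API details
    # (class and keyword names) are assumptions based on the project docs.
    import numpy as np
    from pyunicorn.timeseries import RecurrenceNetwork

    # A noisy oscillation standing in for, e.g., a paleoclimate proxy record.
    t = np.linspace(0, 50, 1000)
    x = np.sin(t) + 0.3 * np.random.randn(1000)

    # Nodes are observed states; links connect states that recur, i.e.
    # come close to each other again in the system's state space.
    rn = RecurrenceNetwork(x, recurrence_rate=0.05)

    # Network measures such as transitivity shift sharply near dynamical
    # tipping points, which is how regime changes are flagged.
    print("transitivity:", rn.transitivity())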

Donges's previous work has involved complex networks and nonlinear time series analysis and their applications to real-world data analysis. Developing the pyunicorn package involved collaborators at PIK, Humboldt University Berlin, the Stockholm Resilience Centre, the Institute for Marine and Atmospheric Research Utrecht, the University of Aberdeen and Nizhny Novgorod State University, located in Germany, Sweden, the Netherlands, the United Kingdom and Russia.

"Many of these methods were newly developed by our team and, moreover, there was a lack of coherent software implementations for existing methods," said Donges. "Pyunicorn was developed to close this gap and to provide an integrative software framework for applying and further developing methods for complex networks and nonlinear time series analysis and their combinations."

As its name might imply, pyunicorn is written in Python, a popular open-source programming language. The package is designed in a modular fashion that makes it easy to use in different settings, ranging from interactive analysis sessions on laptops to large-scale parallel data analysis on supercomputer clusters. As with all Python software, pyunicorn runs on a variety of operating systems, including Linux, Mac OS X, Windows and Android.

The software's versatility fulfills a key aim of the project, which was to make the software publicly available and easy to use for researchers and practitioners in a variety of fields, ranging from complex systems science to climatology, medicine, neuroscience, economics and engineering.

"Many of the provided methods were not freely available before to the scientific community, and weren't available in the flexible and popular Python programming language," said Jürgen Kurths, who supervised the work.

Future work for Donges and his colleagues involves speeding up the package's code and ensuring compatibility with the Python 3.x platform. Donges remains optimistic but cautious about the uses of the package.

"Combining well-known approaches in a new way can yield exciting insights and perspectives in complex systems science," he said. "Software packages such as pyunicorn can be highly useful in catalyzing this process, but need to be applied in a thoughtful and theory-based way. Otherwise, the result might be junk science."

The pyunicorn package can be freely downloaded at: https://github.com/pik-copan/pyunicorn.

CAPTION: Alisa Javadi, a postdoc in the Quantum Photonic research group, has worked on the experiments in the laboratory at the Niels Bohr Institute, University of Copenhagen. CREDIT: Ola Jakup Joensen, Niels Bohr Institute, University of Copenhagen

There is tremendous potential for new information technology based on light. Photons (light particles) are very well suited to carrying information, and quantum technology based on photons -- called quantum photonics -- will be able to hold much more information than current computer technology. But in order to create a network with photons, you need a photon contact, a kind of transistor that can control the transport of photons in a circuit. Researchers at the Niels Bohr Institute, in collaboration with researchers from the Korea Institute of Science and Technology, have managed to create such a contact. The results are published in the scientific journal Nature Communications.

Quantum information can be sent optically, that is to say, using light, and the signal is made up of photons, the smallest component (a quantum) of a light pulse. Quantum information is encoded in whichever path the photon is sent along -- it can, for example, be sent to the right or to the left at a semi-transparent mirror. This can be compared to the use of bits made up of 0s and 1s in conventional computers. But a quantum bit is more than a classical bit: it is both a 0 and a 1 at the same time, and because it is only a single photon, it cannot be read without the intrusion being detected. In addition, quantum technology can store far more information than conventional computer technology, giving it much greater potential for future information technology.
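
In standard textbook notation, added here for illustration rather than taken from the researchers, a path-encoded photonic qubit is a superposition of the two paths:

    % |0> denotes the "left path" and |1> the "right path":
    \[
      |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
      \qquad |\alpha|^2 + |\beta|^2 = 1,
    \]
    % and a 50/50 semi-transparent mirror prepares the balanced state
    \[
      |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr).
    \]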

Controlling the light

Light normally spreads in all directions. But in order to develop quantum technology based on light, you need to be able to control light down to the individual photons. Researchers in the Quantum Photonic research group at the Niels Bohr Institute are working on this, and to do so they use an optical chip with an embedded quantum dot. The optical chip is made up of an extremely small photonic crystal, which is 10 microns across (1 micron is a thousandth of a millimetre) and 160 nanometers thick (1 nanometer is a thousandth of a micron). The quantum dot, embedded in the middle of the chip, is composed of a collection of atoms.

"We have developed the photonic chip so that the quantum dot emits a single photon at a time and we can control the photon's direction. Our big new achievement is that we can use the quantum dot as a contact for the photons -- a kind of transistor. It is an important component for creating a complex network of photons," explains Peter Lodahl, professor and head of the Quantum Photonic research group at the Niels Bohr Institute at the University of Copenhagen.

'Gateway' for photons

The experiments are carried out in the research group's laboratories, which are located in the basement of the Niels Bohr Institute so that there are no tremors from the road or disruptive ambient light.

They use a laser to produce the photons in the experiment. If the laser is strongly dimmed, a single photon is released at a time. If the intensity is increased, there is a greater chance of two or more photons arriving at the same time. The number of photons is important for the result.

"If we send a single photon into the quantum dot, it will be thrown back -- the gateway is closed. But if we send two photons, the situation changes fundamentally -- the gateway is opened and the two photons become entangled and are sent onwards," explains Alisa Javadi, who is a postdoc in the research group and has worked with the experiments in the laboratory at the Niels Bohr Institute.

So the quantum dot works as a photon contact, an important component for building complex quantum photonic circuits on a large scale.
