Second annual Bell Labs Prize contest kicks off with $175,000 in cash prizes, collaboration with world-leading researchers, and $100,000 to the academic institution associated with the first-place winner

Bell Labs, the industrial research arm of Alcatel-Lucent, has announced the Second Annual Bell Labs Prize. The Bell Labs Prize is a competition that gives researchers in participating countries around the globe the chance to introduce their ideas to the world and to collaborate with world-renowned Bell Labs researchers.

The Bell Labs Prize winners will take home cash awards worth as much as $100,000 and the chance to further develop their ideas at Bell Labs. The goal of the Prize is to allow innovators around the globe to:

    --  Collaborate with the global Bell Labs community of researchers to 'change the game' in science, engineering, mathematics, or computer science
    --  Tackle the great challenges in communications, collaboration and connectivity that will enable the future 10 years from now, by finding solutions that are 10x better than today's solutions in one or more dimensions

"Coming on the heels of last year's hugely successful competition, where we saw some phenomenal ideas ranging from social networks to bio-networks to wearable networks, we expect this year's competition to have the same impact, or more," said Marcus Weldon, President of Bell Labs. "We think that the energy, innovation and collaboration the Bell Labs Prize brings to our industry and our research community is incredible, and we strongly believe the winning entries will have the power to fundamentally change our world in profound ways."

At the OCP U.S. Summit 2015, Emulex introduced new quad-port 10Gb Ethernet (10GbE) PCI Express (PCIe) and dual-port 10GbE Open Compute Project (OCP) form factor Ethernet and Converged Network Adapters (CNAs), further increasing the breadth of its PCIe 3.0 portfolio. The company's new OneConnect OCe14104 and OCm14000-OCP adapters deliver the industry-leading protocol offloads, small-packet performance and server power savings that Emulex adapters are known for. Combined with the recent release of the Emulex OCe14000 10GBASE-T PCIe Ethernet and Converged Network Adapters, the Emulex OneConnect I/O connectivity portfolio provides a full range of form factors and network bandwidths for customers looking to optimize the deployment of new Web-scale applications, virtualized environments and software-defined infrastructures. Additionally, in a recent Demartek Evaluation Report, Emulex Ethernet Network Adapters were shown to have a latency advantage of up to 5:1 over Intel's latest Ethernet adapters (the X710 10GbE and XL710 40GbE).

Hyperscale computing is one of the fastest-growing segments in IT today, characterized by data center architectures focused on improving total cost of ownership and energy efficiency and on reducing complexity in the scalable computing space. The OCP mezzanine adapter specification enables various network connectivity configurations and was developed as an open standard by a community of engineers as part of the OCP Foundation. Emulex OCm14000-OCP adapters provide advanced capabilities for hyperscale customers, such as network virtualization with Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE), Single-Root I/O Virtualization (SR-IOV), and protocol support for RDMA over Converged Ethernet (RoCE) or block storage (iSCSI and Fibre Channel over Ethernet) on Emulex OneConnect CNAs.
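To make the overlay networking idea concrete, the sketch below shows how a VXLAN-encapsulated frame is structured, using the open-source scapy packet library (the addresses and VNI are illustrative, not values from this announcement). The tenant's original Ethernet frame is wrapped in an outer UDP/IP packet on IANA port 4789 plus a VXLAN header carrying a 24-bit network identifier; this wrapping is what the adapters' hardware offload spares the host CPU from doing.

    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    # Inner frame: the tenant's original traffic (illustrative addresses).
    inner = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / \
            IP(src="10.0.0.1", dst="10.0.0.2") / \
            UDP(sport=1234, dport=80)

    # Outer headers: transport between the VXLAN tunnel endpoints on UDP
    # port 4789, plus the VXLAN header whose 24-bit VNI identifies the
    # virtual network the inner frame belongs to.
    outer = Ether() / \
            IP(src="192.168.1.10", dst="192.168.1.20") / \
            UDP(sport=49152, dport=4789) / \
            VXLAN(vni=5001) / \
            inner

    outer.show()  # prints the full encapsulated packet structure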

"One of our overriding goals as a company is to enable high performance networking for our customers, regardless of the server form factor that they have chosen," said Shaun Walsh, senior vice president of marketing, Emulex. "Emulex's unified architecture, when combined with enterprise capabilities such as overlay networking and high performance protocol offloads, ensures that organizations can realize predictable performance across a variety of demanding applications and workloads. These new adapters expand our addressable market, with increased bandwidth and port density, particularly for hyperscale customers."

With this announcement, Emulex has also made available a major new 10.4 software release, providing the following key capabilities:
    --  Telco Network Functions Virtualization (NFV): Linux Poll Mode Driver (PMD) for Data Plane Development Kit (DPDK) support, providing telecom equipment manufacturers and telecom operators with an optimized solution to reduce cost and scale operations by transitioning to open server platforms
    --  RSS with VMware NetQueue: Support for receive side scaling (RSS), a network driver technology that efficiently distributes incoming TCP traffic across multiple cores in multi-core processor systems, particularly in software-defined networking (SDN) environments. RSS makes it possible to support higher network traffic loads than a single core could handle (see the sketch after this list). This new capability joins existing RSS and vRSS support for Microsoft Windows Server VMQ and Dynamic VMQ environments.
    --  SDN Offload support for RHEL 7.x and SLES 12: Hardware offload support of VXLAN for the latest releases of Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES), preserving precious server resources for application workloads.
    --  Expanded RoCE support: The 10.4 release includes tech previews of iSCSI Extensions for RDMA (iSER) and Linux Network File System (NFS) over RoCE on the latest versions of RHEL. In addition, Quantized Congestion Notification (QCN) support has been added to manage network congestion while running RoCE.
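The sketch below (illustrative Python, not Emulex code) shows the idea behind the RSS feature above: a hash over each packet's TCP/IP 4-tuple selects a receive queue, so packets of one flow always land on the same core while distinct flows spread across all cores. Real NICs compute a Toeplitz hash in hardware; an ordinary hash stands in for it here.

    import hashlib

    NUM_CORES = 8

    def rss_queue(src_ip: str, dst_ip: str, sport: int, dport: int) -> int:
        """Map a flow's 4-tuple to a receive queue (one queue per core)."""
        key = f"{src_ip}:{sport}->{dst_ip}:{dport}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_CORES

    # Packets of the same flow stay on one core; different flows spread out.
    print(rss_queue("10.0.0.1", "10.0.0.2", 40001, 443))  # some queue q
    print(rss_queue("10.0.0.1", "10.0.0.2", 40001, 443))  # same queue q again
    print(rss_queue("10.0.0.3", "10.0.0.2", 40002, 443))  # likely a different queue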

With their visualization software, AISEC researchers can monitor every component in software-defined networking (SDN).

Company networks are inflexible – they are made up of many components that require a good deal of effort to be connected together. That’s why networks of the future will be controlled by a central unit. However, this makes them a target for hackers. At CeBIT, Fraunhofer researchers will demonstrate how to protect these future networks.

Today’s company networks comprise hundreds of devices: routers for directing data packets to the right receiver, firewall components for protecting internal networks from the outside world, and network switches. Such networks are extremely inflexible because every component, every router and every switch can carry out only the task it was manufactured for. If the network has to be expanded, the company has to integrate new routers, firewalls or switches and then program them by hand. That’s why experts worldwide have been working on the flexible networks of the future for the last five years or so, developing what is known as software-defined networking (SDN). It has one disadvantage, however: it is susceptible to hacker attacks.

Researchers from the Fraunhofer Institute for Applied and Integrated Security AISEC in Garching, near Munich, will be showing how to make SDN secure at the CeBIT trade fair in Hannover, March 16-20. A demonstrator at the Fraunhofer exhibition stand (Hall 9, Booth E40) will show how SDN and all related components can be monitored. At its heart is visualization software, which displays the network’s individual components and depicts in real time how the various applications are communicating with the controller. “We can show how software influences the behavior of different components via the controller, or, in the case of an attack, how it disrupts them,” says Christian Banse, a security expert at AISEC.

But how exactly does SDN work, and why is it so vulnerable to attack? “In the future, the plan is for a central control unit to tell the many network components what to do. To put it simply, routers, firewalls and switches lose their individual intelligence – they only follow orders from the controller,” says Banse. This makes a network much more flexible, because the controller can allocate completely new tasks to a router or switch that were not intended when the component was manufactured. Plus, the tedious task of manually configuring components during installation is eliminated because components no longer need to be assigned to a specific place in the network – the controller simply uses them as needed at the moment.
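As a rough illustration of this control model (a conceptual sketch, not any particular controller's API), the Python below reduces switches to match-action tables, with all decisions made by a central controller that installs rules on demand:

    from dataclasses import dataclass, field

    @dataclass
    class Switch:
        name: str
        flow_table: dict = field(default_factory=dict)  # match -> action

        def handle(self, dst: str) -> str:
            # The switch has no intelligence of its own: traffic it has
            # no rule for is punted to the controller.
            return self.flow_table.get(dst, "send to controller")

    class Controller:
        def install_rule(self, switch: Switch, match: str, action: str) -> None:
            switch.flow_table[match] = action  # the central unit dictates behavior

    sw = Switch("edge-1")
    ctrl = Controller()
    print(sw.handle("10.0.0.7"))                         # 'send to controller'
    ctrl.install_rule(sw, "10.0.0.7", "forward port 3")
    print(sw.handle("10.0.0.7"))                         # 'forward port 3'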

The controller is a popular target for hackers

Manufacturers have begun offering the first routers and switches that are SDN-compatible and have the necessary flexibility. “With all the hype surrounding the new adaptability made possible by a central control unit, SDN security has been neglected,” warns Banse. “That’s why we’re developing solutions to make SDN more secure from the outset, before such systems become firmly established.” In the future, networks will be controlled solely by a central controller – Banse sees this as a problem, because it might provide the perfect loophole for attackers to access the entire network. “On top of that, a whole set of new applications is being developed for SDN – for instance for firewall components or routing,” says Banse. “We have to make sure that these applications are reliable.” It would be disastrous if, for example, outsiders were able to gain access to the company network through software that accesses the controller.
 
That’s why Banse and his colleagues started off by analyzing the interaction of all SDN components to identify vulnerabilities. “You have to precisely define how deep into the network a new application is allowed to go, for example. Otherwise the stability and security of the network is not guaranteed.” So far, there are no adequate security standards for communication among individual SDN components, but AISEC researchers are lobbying hard for an international standard. In addition to their visualization solution, at CeBIT Banse and his team will also present technical means for preventing unauthorized applications or malware from gaining access to SDN systems. They are developing ways to monitor whether an app really carries out only the task for which it was intended. If it performs unplanned or undesirable activities – that is, if it behaves like malware – it is rejected and blocked by the system.
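The monitoring idea can be pictured with a small sketch (an illustration under assumed names, not AISEC's implementation): each SDN application declares the actions it intends to perform, and any call outside that declared set marks the application as misbehaving and blocks it.

    # Hypothetical app names and action names, for illustration only.
    ALLOWED = {
        "firewall-app": {"read_topology", "install_drop_rule"},
        "routing-app":  {"read_topology", "install_forward_rule"},
    }

    blocked = set()

    def authorize(app: str, action: str) -> bool:
        """Permit only declared actions; block the app on any violation."""
        if app in blocked or action not in ALLOWED.get(app, set()):
            blocked.add(app)  # undeclared behavior: treat as malware
            return False
        return True

    print(authorize("routing-app", "install_forward_rule"))   # True
    print(authorize("routing-app", "read_host_credentials"))  # False -> app blocked
    print(authorize("routing-app", "read_topology"))          # False, stays blocked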

Even though the deployment of 4th generation mobile networks has not yet been completed, operators and handset manufacturers, as well as leading research teams in the field, have launched a series of R&D initiatives to develop the 5th generation of mobile technology, called 5G, with the aim of commercializing it by 2020.

5G represents a paradigm shift in the design of mobile networks, revolutionizing the technology to support the throughput, latency and scalability requirements of such extreme use cases as augmented reality or connecting trillions of devices.

The future 5G Networks are going to transform the way we perceive and interact with the world around us. 5G Networks bring a combination of advances that will transform current reality into a “connected reality”, in which all things and every person are interconnected, forming a united whole. 5G Networks will allow more than six trillion (six million million) systems to be connected, including every one of the planet’s inhabitants and, in addition, somewhere in the region of a thousand objects per person. Each person will be permanently connected to their doctors, friends, colleagues, clients and suppliers, and security services, but also to their car, their fridge, their favourite bakery, leisure centres, the metro, the airport, their home and, in short, every object that may be of interest to them. All these objects, in turn, will be connected to each other, in such a way that a pallet, say, can “complain” to its source company that its delivery route is incorrect, and our boiler will be able to download software to make its operation more efficient.

The number of connected devices will be complemented by a network capacity increased by three orders of magnitude: 5G Networks will be capable of carrying 1,000 times more mobile data than the 4G networks that are currently beginning to be deployed. Such a massive communications capability will allow each person to access, send or exchange, quasi-instantaneously, the sensations of their choice. Though research in 5G does not itself cover multimodal interfaces, the network is being designed so that these can be integrated. Augmented reality devices, brain-wave interfaces, or implanted interface biochips will allow 5G network users to interact with each other and with all their connected devices quite naturally, free from external devices, as an extension of their five senses. Such direct exchanges of stereoscopic images, smells, tactile information, or brain waves will be possible from wherever we are because, unlike 4G systems, 5G Networks are designed for universal geographic coverage, with the added advantage of a more seamless service regardless of the user’s position relative to the base station and the edge of the cell.

The heart of this network will be based on the intensive development of current virtualization technologies, converging with “cloud computing” technologies. Software will thus play a much more important role in 5G networks than in today’s networks: much of today’s dedicated communications hardware will be replaced by general-purpose computing platforms that provide the necessary communication services via software. This will make network control much more flexible and economical than at present, and better integrated with the services it supports and with telecommunications operators’ business processes. In this way, the time taken to deploy a new service in production will be reduced from the current 90 days to around 90 minutes, which means the range of supported services can be considerably more dynamic and tailored to the needs of users, both private and professional.

More economical use of energy is a further key element of 5G Networks. If network capacity were increased 1,000-fold with today’s technology, the energy requirement would be so high that it could not be met. Energy efficiency is therefore another major criterion in the design of 5G networks, both in terminals and network elements and in the design of the network as a whole. To reduce the environmental impact of 5G, technologies are being developed that run on just 10% of the energy used at present.
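A back-of-the-envelope calculation makes the combined target concrete: carrying 1,000 times the traffic on 10% of today's energy means the energy spent per bit must fall by four orders of magnitude,

    \frac{E_{5G}/b_{5G}}{E_{4G}/b_{4G}}
      = \frac{0.1\,E_{4G}}{1000\,b_{4G}} \cdot \frac{b_{4G}}{E_{4G}}
      = \frac{0.1}{1000}
      = 10^{-4},

i.e. each bit carried by a 5G network must cost roughly one ten-thousandth of the energy it costs today.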

To sum up, 5G Networks will lead us to a world in which distances cease to exist, and in which our sense of being and perceiving will blend with those of our fellow citizens and of the objects around us.

The European Union, through its framework research program “Horizon 2020”, which runs from 2014 for a period of seven years with an estimated budget of €80 billion, has designed a specific program focused on this technology, known as the “Horizon 2020 Advanced 5G Network Infrastructure for the Future Internet PPP”. This program has received a budget of €700 million from the European Commission, plus between €3.5 billion and €7 billion from the private sector. The research institute IMDEA Networks is one of the main leaders at European level in the field of 5G networks.

IMDEA Networks is the coordinator of iJOIN, one of the major European projects in the field of 5G networks. iJOIN was the first European Research and Technological Development (RTD) project coordinated by an IMDEA institute; it received the runner-up prize for Best European Cooperative R&D Project, awarded by the Foundation for Knowledge madri+d, and was recently selected for a technical demonstration before the EU Commissioner for the Digital Economy & Society, Günther Oettinger, at Mobile World Congress 2015.

Moreover, IMDEA Networks Institute and Telefónica Research and Development recently established a Joint Research Unit (JRU) called "Telefónica - IMDEA Networks JRU in 5G technologies", which aims to establish a strategic partnership that provides an operational framework for close interaction in a varied set of scientific activities.

There are three more projects associated with the Institute’s work on 5G networks. Firstly, one of IMDEA Networks’ researchers, Joerg Widmer, has been awarded a prestigious ERC Consolidator Grant to develop the SEARCHLIGHT project, devoted to investigating 60 GHz networks, one of the key 5G technologies; the project has received €1.7 million in funding for the next five years. In addition, IMDEA Networks is the technical coordinator of the European project CROWD, which has been selected by the European Commission as one of the “early 5G precursor projects”. Thirdly, IMDEA Networks leads the regional project TIGRE5, which focuses on this technology and brings together the main Madrid-based research teams working in this field.

Finally, it is worth noting that IMDEA Networks is one of the few academic partners represented in the 5G-PPP Association, its Partnership Board, and the ETP Steering Board, the entities that lead the development of the European program in 5G technology.

Integrated Device Technology has developed, with NVIDIA and Orange Silicon Valley, a supercomputing platform that can analyze 4G to 5G base station bandwidth data in real time, enabling network operators to monetize the voluminous data flowing through their communications systems. By connecting clusters of low-power NVIDIA Tegra K1 mobile processors with IDT’s RapidIO interconnect and timing technology, the platform can deliver real-time data that network operators can use to provide consumers a more responsive and interactive experience.

Designed for high-performance computing, IoT appliances and wireless access networks handling 4G and higher traffic, the platform features cutting-edge deep-learning and pattern-recognition computing capabilities. Installing the hardware at multiple, geographically dispersed base stations along the edge of the network addresses the Big Data problem by distributing the computing capacity and delivering lightning-quick analysis of locally generated data, such as social media content.

"For example, if you tweet that you’ve just seen a movie and are headed out to dinner, some nearby dining options could pop up on your screen," said Sailesh Chittipeddi, IDT’s vice president of Global Operations and chief technology officer. "Or the technology can be used for mass transit; if you’re standing at a bus stop, you can check your phone for the precise location of your bus. The possibilities are virtually endless."

Chittipeddi refers to the platform as "supercomputing at the edge" because the technology is deployed and analysis conducted at local base stations—at the edge of the wireless network—rather than in a central location, removing the bottleneck between the base station and the core of the network. "The solution was developed with an architecture designed to handle the emerging market for geographically distributed analytics, deep learning and pattern recognition in real time," he said.

"By taking mobile low-power GPU technology and connecting it with 100 ns latency RapidIO interconnect, this modular cluster can be deployed to distribute high-performance compute functionality to the edge of the wireless network, where it is most geographically sensitive," said Jag Bolaria of the Linley Group. "The innovation paves the path for co-locating real-time deployable analytics in the approximately 2 million base stations deployed annually in wireless networks."

The Supercomputing at the Edge platform uses IDT’s 20 Gbps interconnect technology to connect a low-latency cluster of NVIDIA Tegra K1 mobile processors. It is suitable for micro base station deployments as well as larger computing clusters in the C-RAN, a new cellular network architecture. Each computing card connects up to four GPU units through on-board RapidIO low-latency NIC and switching products.

The platform, which can support up to 12 teraflops per 1U RapidIO server blade, is based on servers from Prodrive Technologies (www.prodrive-technologies.com) and computing cards from Concurrent Technologies PLC (www.gocct.com). Each computing card contains four NVIDIA mobile processors, each delivering 192 fully programmable CUDA cores for advanced graphics and compute performance. Each card pairs 20 Gbps of interconnect with each GPU and provides over 140 Gbps of built-in switching at each node, complemented by IDT’s best-in-class timing solutions.
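The quoted figures can be sanity-checked with a short back-of-the-envelope script (illustrative only; the ~365 GFLOPS single-precision peak per Tegra K1 is the commonly cited public specification, not a number from this announcement):

    GPUS_PER_CARD      = 4      # from the announcement
    CUDA_CORES_PER_GPU = 192    # from the announcement
    LINK_GBPS_PER_GPU  = 20     # RapidIO link to each GPU
    TEGRA_K1_TFLOPS    = 0.365  # assumed FP32 peak per Tegra K1 (public spec)
    BLADE_TFLOPS       = 12     # per 1U RapidIO server blade

    cores_per_card  = GPUS_PER_CARD * CUDA_CORES_PER_GPU   # 768 CUDA cores per card
    card_tflops     = GPUS_PER_CARD * TEGRA_K1_TFLOPS      # ~1.46 TFLOPS per card
    cards_per_blade = BLADE_TFLOPS / card_tflops           # ~8 cards fill a 1U blade
    card_link_gbps  = GPUS_PER_CARD * LINK_GBPS_PER_GPU    # 80 Gbps of GPU links,
                                                           # vs >140 Gbps of switching
    print(cores_per_card, round(card_tflops, 2),
          round(cards_per_blade, 1), card_link_gbps)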

IDT will present the Supercomputing at the Edge platform at the Linley Data Center Conference Feb. 25-26 in Santa Clara, Calif., and Mobile World Congress March 2-5 in Barcelona, Booth 1H10.
