Juniper Networks has been selected by the National Center for Atmospheric Research (NCAR) to provide the networking infrastructure for a new supercomputer that will be used by researchers to predict climate patterns and assess the effects of global warming.

“The new supercomputer is expected to benchmark among the top supercomputers in the world. The network will allow scientists around the world to access resources and foster a community of global collaboration,” said Al Kellie, Associate Director of NCAR's Computational and Information Systems Laboratory.

The new supercomputer will be installed at the NCAR Wyoming Supercomputing Center (NWSC) this fall and will be fully operational in 2017. This new system will enable more accurate and actionable projections about the impact of weather and climate change and is expected to perform 5.34 quadrillion calculations per second, making it one of the top performing supercomputers in the world. Juniper Networks was selected to provide the system’s networking infrastructure, allowing scientists around the world access to this critical climate and weather research tool.

A critical research tool: NCAR’s supercomputer system will allow scientists to perform some of the world’s most data-intensive calculations for weather and climate modeling, providing information that can profoundly impact communities by helping to improve computer models in ways that can better inform future evacuation efforts or aid in the dispatch of recovery teams. It will also be an integral tool to study the effects of global warming, helping governments and communities plan for future changes in water cycles, temperatures and sea levels, among other environmental impacts.

A high performance network that supports global collaboration: NCAR’s new supercomputer network, supported by Juniper Networks, is designed to meet the demanding capacity and bandwidth required by researchers to process climate data and conduct predictive weather analysis, while sharing results with colleagues around the world.

A high-density, highly scalable solution: The Juniper Networks QFX10008 Switch, a high-performance, high-density switch that offers unprecedented network scale and automation, will allow remote users around the country and the world to access this new supercomputer and the upgraded NCAR Globally Accessible Data Environment (GLADE). The QFX10008 is a foundational element of the Juniper Networks MetaFabric Architecture, an open-standards-based architecture that allows simple and scalable migration to next-generation data centers, and is the latest Juniper solution chosen by NCAR, alongside EX Series Ethernet Switches, QFX Series Nodes and MX Series 3D Universal Edge Routers.

Some of the world’s largest research and education networks are built on Juniper technology: This project builds on Juniper Networks’ expertise in supporting global research and education networks.

“As chair of the Advisory Board for the Community Earth System Model, I can't emphasize enough the importance of remote access to the NWSC. This access enables scientists at institutions widely dispersed geographically to access state-of-the-art computational, modeling and analysis tools essential for advancing our knowledge and predictive skills in weather and climate. The NCAR supercomputing infrastructure is integral to the success of our national program in climate research,” commented Dr. Leo Donner, Physical Scientist, GFDL/NOAA and Lecturer, Department of Geosciences, Princeton University.

“We’re thrilled that NCAR turned to Juniper Networks to meet its most demanding challenge yet. Juniper is proud to support the creation of a state-of-the-art supercomputing platform that will allow scientists and researchers to study the impact of climate and weather on the world’s populations and environments," said Tim Solms, Vice President of US Federal Sales, Juniper Networks.

The collision of two black holes—an event detected for the first time ever by the Laser Interferometer Gravitational-Wave Observatory, or LIGO—is seen in this still from a computer simulation. LIGO detected gravitational waves, or ripples in space and time, generated as the black holes merged. The simulation shows what the merger would look like if we could somehow get a closer look. Time has been slowed by a factor of 100. The stars appear warped due to the strong gravity of the black holes.  Credit: SXS

For the first time, scientists have observed ripples in the fabric of spacetime called gravitational waves, arriving at the earth from a cataclysmic event in the distant universe. This confirms a major prediction of Albert Einstein’s 1915 general theory of relativity and opens an unprecedented new window onto the cosmos.

Gravitational waves carry information about their dramatic origins and about the nature of gravity that cannot otherwise be obtained. Physicists have concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes to produce a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed.

The gravitational waves were detected on September 14, 2015 at 5:51 a.m. Eastern Daylight Time (09:51 UTC) by both of the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA. The LIGO Observatories are funded by the National Science Foundation (NSF), and were conceived, built, and are operated by Caltech and MIT. The discovery, accepted for publication in the journal Physical Review Letters, was made by the LIGO Scientific Collaboration (which includes the GEO Collaboration and the Australian Consortium for Interferometric Gravitational Astronomy) and the Virgo Collaboration using data from the two LIGO detectors.

Based on the observed signals, LIGO scientists estimate that the black holes for this event were about 29 and 36 times the mass of the sun, and the event took place 1.3 billion years ago. About 3 times the mass of the sun was converted into gravitational waves in a fraction of a second—with a peak power output about 50 times that of the whole visible universe. By looking at the time of arrival of the signals—the detector in Livingston recorded the event 7 milliseconds before the detector in Hanford—scientists can say that the source was located in the Southern Hemisphere.
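The arrival-time argument can be made concrete with a back-of-envelope calculation. A gravitational wave travels at the speed of light, so the maximum possible delay between the two detectors corresponds to a wave travelling exactly along the line joining them; taking the detector separation to be roughly 3000 km (an assumed round figure), the observed 7 ms delay constrains the source direction to a cone around that baseline:

```python
import math

c = 299_792_458.0    # speed of light, m/s
baseline = 3.0e6     # approx. Hanford-Livingston separation, m (~3000 km; assumed)
dt = 0.007           # observed arrival-time difference, s

# Maximum possible delay: a wave travelling exactly along the baseline
max_dt = baseline / c    # ~10 ms

# A 7 ms delay places the source on a cone around the baseline:
# cos(theta) = c * dt / baseline
theta = math.degrees(math.acos(c * dt / baseline))
print(f"max delay ~{max_dt*1e3:.1f} ms, cone half-angle ~{theta:.0f} degrees")
```

With only two detectors, this cone (intersected with each detector's antenna pattern) is why the localization is limited to a broad region of the Southern Hemisphere rather than a point on the sky.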

According to general relativity, a pair of black holes orbiting around each other lose energy through the emission of gravitational waves, causing them to gradually approach each other over billions of years, and then much more quickly in the final minutes. During the final fraction of a second, the two black holes collide into each other at nearly one-half the speed of light and form a single more massive black hole, converting a portion of the combined black holes’ mass to energy, according to Einstein’s formula E=mc2. This energy is emitted as a final strong burst of gravitational waves. It is these gravitational waves that LIGO has observed.
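The scale of the energy release quoted above follows directly from Einstein's formula. A rough sketch of the arithmetic, using the standard solar mass value:

```python
c = 299_792_458.0    # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

# ~3 solar masses radiated as gravitational waves: E = mc^2
E = 3 * M_sun * c**2
print(f"E ~ {E:.2e} J")    # on the order of 5e47 joules
```

Radiated over a fraction of a second, this is what yields a peak power briefly exceeding the light output of the entire visible universe.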

The existence of gravitational waves was first demonstrated in the 1970s and 80s by Joseph Taylor, Jr., and colleagues. Taylor and Russell Hulse discovered in 1974 a binary system composed of a pulsar in orbit around a neutron star. Taylor and Joel M. Weisberg in 1982 found that the orbit of the pulsar was slowly shrinking over time because of the release of energy in the form of gravitational waves. For discovering the pulsar and showing that it would make possible this particular gravitational wave measurement, Hulse and Taylor were awarded the Nobel Prize in Physics in 1993.

The new LIGO discovery is the first observation of gravitational waves themselves, made by measuring the tiny disturbances the waves make to space and time as they pass through the earth.

“Our observation of gravitational waves accomplishes an ambitious goal set out over 5 decades ago to directly detect this elusive phenomenon and better understand the universe, and, fittingly, fulfills Einstein’s legacy on the 100th anniversary of his general theory of relativity,” says Caltech’s David H. Reitze, executive director of the LIGO Laboratory.

The discovery was made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first-generation LIGO detectors, enabling a large increase in the volume of the universe probed—and the discovery of gravitational waves during its first observation run. The US National Science Foundation leads in financial support for Advanced LIGO. Funding organizations in Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council, STFC) and Australia (Australian Research Council) also have made significant commitments to the project. Several of the key technologies that made Advanced LIGO so much more sensitive have been developed and tested by the German-UK GEO collaboration. Significant supercomputer resources have been contributed by the AEI Hannover Atlas Cluster, the LIGO Laboratory, Syracuse University, and the University of Wisconsin-Milwaukee. Several universities designed, built, and tested key components for Advanced LIGO: The Australian National University, the University of Adelaide, the University of Florida, Stanford University, Columbia University in the City of New York, and Louisiana State University.

“In 1992, when LIGO’s initial funding was approved, it represented the biggest investment the NSF had ever made,” says France Córdova, NSF director. “It was a big risk. But the National Science Foundation is the agency that takes these kinds of risks. We support fundamental science and engineering at a point in the road to discovery where that path is anything but clear. We fund trailblazers. It’s why the U.S. continues to be a global leader in advancing knowledge.”

LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of more than 1000 scientists from universities around the United States and in 14 other countries. More than 90 universities and research institutes in the LSC develop detector technology and analyze data; approximately 250 students are strong contributing members of the collaboration. The LSC detector network includes the LIGO interferometers and the GEO600 detector. The GEO team includes scientists at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI), Leibniz Universität Hannover, along with partners at the University of Glasgow, Cardiff University, the University of Birmingham, other universities in the United Kingdom, and the University of the Balearic Islands in Spain.

“This detection is the beginning of a new era: The field of gravitational wave astronomy is now a reality,” says Gabriela González, LSC spokesperson and professor of physics and astronomy at Louisiana State University.

LIGO was originally proposed as a means of detecting these gravitational waves in the 1980s by Rainer Weiss, professor of physics, emeritus, from MIT; Kip Thorne, Caltech’s Richard P. Feynman Professor of Theoretical Physics, emeritus; and Ronald Drever, professor of physics, emeritus, also from Caltech.

“The description of this observation is beautifully described in the Einstein theory of general relativity formulated 100 years ago and comprises the first test of the theory in strong gravitation. It would have been wonderful to watch Einstein’s face had we been able to tell him,” says Weiss.

“With this discovery, we humans are embarking on a marvelous new quest: the quest to explore the warped side of the universe—objects and phenomena that are made from warped spacetime. Colliding black holes and gravitational waves are our first beautiful examples,” says Thorne.

Virgo research is carried out by the Virgo Collaboration, consisting of more than 250 physicists and engineers belonging to 19 different European research groups: 6 from Centre National de la Recherche Scientifique (CNRS) in France; 8 from the Istituto Nazionale di Fisica Nucleare (INFN) in Italy; 2 in The Netherlands with Nikhef; the Wigner RCP in Hungary; the POLGRAW group in Poland; and the European Gravitational Observatory (EGO), the laboratory hosting the Virgo detector near Pisa in Italy.

Fulvio Ricci, Virgo Spokesperson, notes that, “This is a significant milestone for physics, but more importantly merely the start of many new and exciting astrophysical discoveries to come with LIGO and Virgo.”

Bruce Allen, managing director of the Max Planck Institute for Gravitational Physics (Albert Einstein Institute), adds, “Einstein thought gravitational waves were too weak to detect, and didn’t believe in black holes. But I don’t think he’d have minded being wrong!”

 “The Advanced LIGO detectors are a tour de force of science and technology, made possible by a truly exceptional international team of technicians, engineers, and scientists,” says David Shoemaker of MIT, the project leader for Advanced LIGO. “We are very proud that we finished this NSF-funded project on time and on budget.”

At each observatory, the two-and-a-half-mile (4-km) long L-shaped LIGO interferometer uses laser light split into two beams that travel back and forth down the arms (four-foot diameter tubes kept under a near-perfect vacuum). The beams are used to monitor the distance between mirrors precisely positioned at the ends of the arms. According to Einstein’s theory, the distance between the mirrors will change by an infinitesimal amount when a gravitational wave passes by the detector. A change in the lengths of the arms smaller than one-ten-thousandth the diameter of a proton (10⁻¹⁹ meter) can be detected.
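The figures in this paragraph translate into a dimensionless strain sensitivity, the standard way interferometer performance is quoted. A quick check using the arm length and displacement given in the text:

```python
arm_length = 4000.0    # LIGO arm length, m
delta_L = 1e-19        # smallest detectable change in arm length, m (from the text)

# Strain h is the fractional change in arm length
strain = delta_L / arm_length
print(f"strain sensitivity h ~ {strain:.1e}")    # ~2.5e-23
```

This is why the detectors need kilometer-scale arms: for a fixed strain, a longer arm turns the same fractional stretch into a larger, more measurable absolute displacement.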

“To make this fantastic milestone possible took a global collaboration of scientists—laser and suspension technology developed for our GEO600 detector was used to help make Advanced LIGO the most sophisticated gravitational wave detector ever created,” says Sheila Rowan, professor of physics and astronomy at the University of Glasgow.

Independent and widely separated observatories are necessary to determine the direction of the event causing the gravitational waves, and also to verify that the signals come from space and are not from some other local phenomenon.

Toward this end, the LIGO Laboratory is working closely with scientists in India at the Inter-University Centre for Astronomy and Astrophysics, the Raja Ramanna Centre for Advanced Technology, and the Institute for Plasma Research to establish a third Advanced LIGO detector on the Indian subcontinent. Awaiting approval by the government of India, it could be operational early in the next decade. The additional detector will greatly improve the ability of the global detector network to localize gravitational-wave sources.

“Hopefully this first observation will accelerate the construction of a global network of detectors to enable accurate source location in the era of multi-messenger astronomy,” says David McClelland, professor of physics and director of the Centre for Gravitational Physics at the Australian National University. 

Scientists at the University of Leeds will run the equivalent of password cracking software to find the chemical keys to defeating the Ebola virus.

A team from the University's schools of Chemistry and Molecular and Cellular Biology have secured a £200,000 grant from the Wellcome Trust to find drugs to cure the disease.

Although several Ebola vaccines are being developed, there are currently no effective anti-viral drugs to treat people once they get infected.

This is a particularly serious issue because of barriers to implementing vaccine programmes in the most at-risk communities and because of the difficulty of predicting where the disease will strike next. The University of Leeds researchers will focus on finding anti-viral drugs.

Instead of the traditional approach of biologically testing hundreds of candidate drug compounds in the lab, the researchers will run computer software loaded with a library of about one million drug compounds and match them against the atomic structure of the Ebola virus's key proteins.
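The screening loop described above can be sketched in a few lines. This is a minimal illustration only: `dock_score` is a hypothetical stand-in for real docking software that would evaluate 3D shape and chemical complementarity, and the compound-library format is assumed.

```python
# Minimal sketch of virtual screening: score every compound in a library
# against a target protein pocket and keep the best-scoring hits.

def dock_score(compound, pocket):
    # Hypothetical placeholder scorer: a real docking program would model
    # 3D geometry and chemistry; here we just compare feature vectors.
    return sum((a - b) ** 2 for a, b in zip(compound["features"], pocket))

def screen(library, pocket, top_n=10):
    scored = [(dock_score(c, pocket), c["name"]) for c in library]
    scored.sort()    # lower score = better fit in this toy metric
    return scored[:top_n]

# Toy library of 1000 compounds with made-up feature vectors
library = [{"name": f"cmpd{i}", "features": (i % 7, i % 5, i % 3)}
           for i in range(1000)]
pocket = (3, 2, 1)    # assumed target-pocket feature vector
print(screen(library, pocket, top_n=3))
```

The real project runs the same exhaustive score-and-rank pattern, but over about a million compounds and atomically detailed protein structures, which is why supercomputer resources are needed.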

The second phase of the project will then test the most promising compounds to see if they inhibit Ebola-like molecules in biological tests.

Professor Mark Harris, Professor of Virology at the University of Leeds, who is leading the project, said: "Much of the scientific activity following the recent Ebola outbreak has focussed on repurposing existing drugs or developing vaccines. We are going back to the structure of the Ebola proteins to identify compounds that could be the basis of specially designed antivirals for Ebola."

Professor Colin Fishwick, Professor of Medicinal Chemistry at the University of Leeds, will lead the supercomputer-based phase of the study. He said: "The use of the computer hugely increases our ability to identify the right compounds. It is a bit like trying to crack a password by brute-force: we are able to run through hundreds of thousands of drug compound structures to see if they fit into key 'holes' we have identified in the structure of the virus.

"However, our computers are not dealing with strings of characters but minutely detailed 3D maps of molecules. We are matching key atomic details of the compounds and virus molecules and looking for chemicals that might block the virus' growth and replication. It is an incredibly powerful system that transforms our ability to rapidly identify new drug leads."

A team led by Professor Harris and Dr John Barr, an expert in Ebola-type viruses based in the University's School of Molecular and Cellular Biology, will then take the best candidate chemicals into biological tests.

Dr Barr explained: "In these biological assays, we will be using non-infectious molecules that replicate key features of the Ebola virus' structure and lifecycle. Useful compounds could then be tested on Ebola itself at Category Four containment facilities like Porton Down or Marburg in Germany."

The project is looking for anti-viral drugs capable of combatting Ebola in infected patients, rather than vaccines.

Professor Harris said: "There are quite a few vaccines in various stages of development at the moment and some seem to be very promising. However, even if we do have a very successful vaccine for Ebola, we are going to need anti-virals. Getting enough vaccines to people in the communities most at risk from Ebola will be very difficult indeed. We already struggle with established vaccines like polio in some of these areas.

"It is important to stress that we are at the very early stages of identifying possible drug compounds, but this work could be the basis for new drugs for infected patients, much like people with flu can be treated with Tamiflu or HIV patients receive antiretrovirals."

The study will focus on two key components of the Ebola virus: its NP and VP30 proteins. The atomic structures of both have been mapped in high resolution and both are known to be critical to the virus' replication and growth. Two other proteins--the L and VP35 proteins--will also be studied by the team, which also includes Dr Thomas Edwards, an expert in protein structure, and Dr Richard Foster, a medicinal chemist. All of the researchers are members of The Astbury Centre for Structural Molecular Biology, which brings together scientists from across the University of Leeds to allow interdisciplinary approaches to understanding the molecular basis of life.

The research is being funded by a Wellcome Trust Pathfinder Award, which will fund two post-doctoral researchers to work on the supercomputer and biological assays over 18 months. Wellcome's pathfinder funding is targeted at "projects that have significant potential to help develop innovative new products that address an unmet need in healthcare and offer a potential new solution."

Caption: Benedikt Mayer and Lisa Janker at the molecular beam epitaxy facility at the Walter Schottky Institute, Technical University of Munich. Credit: Uli Benz / TUM

Nanolaser for information technology

Physicists at the Technical University of Munich (TUM) have developed a nanolaser, a thousand times thinner than a human hair. Thanks to an ingenious process, the nanowire lasers grow right on a silicon chip, making it possible to produce high-performance photonic components cost-effectively. This will pave the way for fast and efficient data processing with light in the future.

Ever smaller, ever faster, ever cheaper - since the start of the computer age, the performance of processors has doubled on average every 18 months. Intel co-founder Gordon E. Moore predicted this astonishing growth in performance 50 years ago, and Moore's law seems to hold true to this day.
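Doubling every 18 months compounds dramatically. A quick sketch of the cumulative growth implied over 50 years:

```python
years = 50
doubling_period = 1.5    # years, i.e. 18 months

# Performance grows by 2^(elapsed time / doubling period)
growth = 2 ** (years / doubling_period)
print(f"~{growth:.1e}x performance growth")    # roughly ten-billion-fold
```

That ten-orders-of-magnitude compounding is what makes the approaching physical limits described below so consequential.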

But the miniaturization of electronics is now reaching its physical limits. "Today already, transistors are merely a few nanometers in size. Further reductions are horrendously expensive," says Professor Jonathan Finley, Director of the Walter Schottky Institute at TUM. "Improving performance is achievable only by replacing electrons with photons, i.e. particles of light."

Photonics - the silver bullet of miniaturization

Data transmission and processing with light has the potential of breaking the barriers of current electronics. In fact, the first silicon-based photonics chips already exist. However, the sources of light for the transmission of data must be attached to the silicon in complicated and elaborate manufacturing processes. Researchers around the world are thus searching for alternative approaches.

Scientists at the TU Munich have now succeeded in this endeavor: Dr. Gregor Koblmüller at the Department of Semiconductor Quantum-Nanosystems has, in collaboration with Jonathan Finley, developed a process to deposit nanolasers directly onto silicon chips. A patent for the technology is pending.

Growing a III-V semiconductor onto silicon requires tenacious experimentation. "The two materials have different lattice parameters and different coefficients of thermal expansion. This leads to strain," explains Koblmüller. "For example, conventional planar growth of gallium arsenide onto a silicon surface therefore results in a large number of defects."

The TUM team solved this problem in an ingenious way: by depositing freestanding nanowires on the silicon, whose footprints are merely a few square nanometers, the scientists could prevent defects from forming in the GaAs material.

Atom by atom to a nanowire

But how do you turn a nanowire into a vertical-cavity laser? To generate coherent light, photons must be reflected at the top and bottom ends of the wire, thereby amplifying the light until it reaches the desired threshold for lasing.

To fulfil these conditions, the researchers had to develop a simple, yet sophisticated solution: "The interface between gallium arsenide and silicon does not reflect light sufficiently. We thus built in an additional mirror - a 200 nanometer thick silicon oxide layer that we evaporated onto the silicon," explains Benedikt Mayer, doctoral candidate in the team led by Koblmüller and Finley. "Tiny holes can then be etched into the mirror layer. Using epitaxy, the semiconductor nanowires can then be grown atom for atom out of these holes."

Only once the wires protrude beyond the mirror surface may they grow laterally - until the semiconductor is thick enough to let photons travel back and forth, enabling stimulated emission and lasing. "This process is very elegant because it allows us to position the nanowire lasers directly onto waveguides in the silicon chip," says Koblmüller.

Basic research on the path to applications

Currently, the new gallium arsenide nanowire lasers produce infrared light at a predefined wavelength and under pulsed excitation. "In the future we want to modify the emission wavelength and other laser parameters to better control temperature stability and light propagation under continuous excitation within the silicon chips," adds Finley.

The team has just published its first successes in this direction. And they have set their sights firmly on their next goal: "We want to create an electric interface so that we can operate the nanowires under electrical injection instead of relying on external lasers," explains Koblmüller.

"The work is an important prerequisite for the development of high-performance optical components in future computers," sums up Finley. "We were able to demonstrate that manufacturing silicon chips with integrated nanowire lasers is possible."

Disney method relies on similarities in appearances across classes of objects

Seen from any angle, a horse looks like a horse. But it doesn't look the same from every angle. Scientists at Disney Research have developed a method to help computer vision systems avoid the confusion associated with changes in perspective, such as the marked difference in a horse's appearance from the side and from the front.

Alina Kuznetsova and fellow Disney researchers devised a system that is able to estimate the pose of an object, based in part on similarities in how different types of objects appear from the same angle. The machine learning method proved so effective that the researchers demonstrated, for the first time, that the method could predict the pose even of an object it had never seen before.

"Sometimes orientation is really important to know," said Leonid Sigal, a senior research scientist at Disney Research. "A self-driving car, for instance, would be better able to negotiate traffic safely if it can anticipate the directions that other cars and buses on the road appear to be headed."

Moreover, the method he and his colleagues developed not only can predict the orientation or pose of an object, it can also use its knowledge of pose to help identify an object, making it useful for a wide variety of computer vision applications.

The researchers will present their method at the Association for the Advancement of Artificial Intelligence conference, Feb. 12-17 in Phoenix, Arizona.

Figuring out how objects look from different angles is something that comes naturally to people, but is a challenging problem in computer vision.

"People draw inferences from other things they have seen," Sigal said. "If they know what a bicycle looks like from various angles, that can help them predict what a motorcycle might look like in different poses, because of the visual similarities between these two objects."

Kuznetsova, a Ph.D. student at Leibniz University Hannover in Germany who worked as an intern with Disney Research, and Sung Ju Hwang, a former post-doctoral researcher at Disney now on the faculty of Ulsan National Institute of Science and Technology in Korea, relied on a similar intuition as they developed their method.

A side view of a horse, for instance, has more in common with the side view of a cow than it does with the front view of a horse. The researchers exploited these similarities, shared across different categories of objects, to develop the metric learning approach at the heart of their predictive method.

When shown a computer mouse for the first time, for instance, the method recognized its vague similarities with the shape of a car, helping the method identify the sides, front and back of the mouse.
