SETI deploys GPU cluster to receive data from the VLA

A system designed to provide data from the National Science Foundation’s Karl G. Jansky Very Large Array (VLA) for analysis in the Search for Extraterrestrial Intelligence (SETI) has successfully acquired its first data from a VLA antenna. The system, dubbed COSMIC (the Commensal Open Source Multimode Interferometer Cluster), receives data through a newly developed parallel Ethernet interface to the VLA, tapping the same data stream used for other research and analyzing it in parallel.

“As soon as the cabling was physically connected, our interface locked on to the VLA data streams and we were able to grab some preliminary data,” said Dr. Jack Hickish, of the SETI Institute and Real-Time Radio Systems Limited, who is leading the development of the COSMIC system.

Credit: Aspen Doan-Isenhour, NRAO/AUI/NSF.

The National Radio Astronomy Observatory (NRAO) and the SETI Institute agreed last year to collaborate on developing and installing the COSMIC system at the VLA. COSMIC, funded by the SETI Institute, will analyze data from the VLA to identify transmissions possibly generated by extraterrestrial technologies.

“As the VLA goes about its normal observing, this system will allow an additional and valuable use for the data we’re already collecting. We’re happy to see this important milestone, and congratulate the SETI Institute and NRAO personnel who achieved it,” said NRAO Director Tony Beasley.

The initial success with a single VLA antenna clears the way for the buildup of the additional hardware required to capture data from all 27 VLA antennas. In addition, the equipment that will do the actual SETI signal analysis is still under development. Full scientific operations of COSMIC are expected to begin in January 2023.

The complete system “will allow us to conduct a powerful, wide-area SETI survey that will be vastly more complete than any previous such search,” according to Andrew Siemion, Bernard M. Oliver Chair for SETI at the SETI Institute and Principal Investigator for the Breakthrough Listen Initiative at the University of California, Berkeley.

ExoMiner adds 301 exoplanets to Kepler's total tally

Scientists recently added a whopping 301 newly validated exoplanets to the total exoplanet tally. The throng of planets is the latest to join the 4,569 already validated planets orbiting a multitude of distant stars. How did scientists discover such a huge number of planets, seemingly all at once? The answer lies with a new deep neural network called ExoMiner.

Over 4,500 planets have been found around other stars, but scientists expect that our galaxy contains millions of planets. There are multiple methods for detecting these small, faint bodies around much larger, bright stars. Credit: NASA/JPL-Caltech

Deep neural networks are machine learning methods that automatically learn a task when provided with enough data. ExoMiner is a new deep neural network that leverages NASA’s Pleiades supercomputer and can distinguish real exoplanets from different types of impostors, or “false positives.” Its design is inspired by the various tests and properties human experts use to confirm new exoplanets, and it learns from past confirmed exoplanets and false-positive cases.
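ExoMiner's actual architecture is not described in this article, but the core idea of learning a planet-versus-false-positive boundary from labeled examples can be sketched with a toy classifier. Everything below, including the two features and the synthetic data, is an illustrative assumption rather than anything from the Kepler pipeline:

```python
import numpy as np

# Toy stand-in for ExoMiner's task: classify transit signals as "planet"
# vs. "false positive" from two hypothetical vetting features (e.g. transit
# depth and odd/even transit depth mismatch). All data is synthetic.
rng = np.random.default_rng(0)
n = 200
planets = rng.normal([1.0, 0.0], 0.3, size=(n, 2))    # consistent depths
false_pos = rng.normal([1.0, 1.0], 0.3, size=(n, 2))  # odd/even mismatch
X = np.vstack([planets, false_pos])
y = np.concatenate([np.ones(n), np.zeros(n)])         # 1 = planet

# A minimal learner: logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted planet probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on weights
    b -= 0.5 * np.mean(p - y)               # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
```

Like the real system, this toy learner is transparent: the learned weights show directly which feature drives a rejection, here the odd/even mismatch.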

ExoMiner supplements people who are pros at combing through data and deciphering what is and isn't a planet, specifically data gathered by NASA's Kepler spacecraft and K2, its follow-on mission. For missions like Kepler, with thousands of stars in its field of view, each holding the possibility of hosting multiple potential exoplanets, poring over the massive datasets is a hugely time-consuming task. ExoMiner solves this dilemma.

When a planet crosses directly between us and its star, we see the star dim slightly because the planet is blocking a portion of the light. This is one method scientists use to find exoplanets. They make a plot, called a light curve, of the brightness of the star versus time. Using this plot, scientists can see what percentage of the star's light the planet blocks and how long the planet takes to cross the disk of the star. Credit: NASA's Goddard Space Flight Center
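The transit method described in the caption reduces to simple geometry: for a uniformly bright star, the fractional dip in the light curve equals the ratio of the planet's disk area to the star's. A minimal sketch using Earth- and Sun-like radii (illustrative values, not a specific Kepler target):

```python
# Transit depth: the fraction of starlight blocked when a planet's disk
# crosses its star's disk, assuming a uniformly bright stellar disk.
R_EARTH_KM = 6371.0    # mean radius of Earth
R_SUN_KM = 695700.0    # nominal solar radius

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional flux dip = ratio of the two disk areas."""
    return (r_planet_km / r_star_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
# An Earth-size planet transiting a Sun-size star dims it by only
# ~0.008%, which is why such faint signals demand careful vetting.
```

The tiny depth for Earth-analogs is exactly why sifting real planets from instrumental and astrophysical false positives is so laborious for humans.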

“Unlike other exoplanet-detecting machine learning programs, ExoMiner isn't a black box – there is no mystery as to why it decides something is a planet or not,” said Jon Jenkins, an exoplanet scientist at NASA's Ames Research Center in California's Silicon Valley. “We can easily explain which features in the data lead ExoMiner to reject or confirm a planet.”

What is the difference between a confirmed and validated exoplanet? A planet is “confirmed,” when different observation techniques reveal features that can only be explained by a planet. A planet is “validated” using statistics – meaning how likely or unlikely it is to be a planet based on the data.

In a paper published in the Astrophysical Journal, the team at Ames shows how ExoMiner discovered the 301 planets using data from the remaining set of possible planets – or candidates – in the Kepler Archive. All 301 machine-validated planets were originally detected by the Kepler Science Operations Center pipeline and promoted to planet candidate status by the Kepler Science Office. But until ExoMiner, no one was able to validate them as planets.

The paper also demonstrates how ExoMiner is more precise and consistent in ruling out false positives and better able to reveal the genuine signatures of planets orbiting their parent stars – all while giving scientists the ability to see in detail what led ExoMiner to its conclusion.

“When ExoMiner says something is a planet, you can be sure it's a planet,” added Hamed Valizadegan, ExoMiner project lead, and machine learning manager with the Universities Space Research Association at Ames. “ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it's meant to emulate because of the biases that come with human labeling.”

None of the newly confirmed planets are believed to be Earth-like or in the habitable zone of their parent stars. But they do share similar characteristics to the overall population of confirmed exoplanets in our galactic neighborhood.

“These 301 discoveries help us better understand planets and solar systems beyond our own, and what makes ours so unique,” said Jenkins.

As the search for more exoplanets continues – with missions using transit photometry such as NASA’s Transiting Exoplanet Survey Satellite, or TESS, and the European Space Agency's upcoming PLAnetary Transits and Oscillations of stars, or PLATO, mission – ExoMiner will have more opportunities to prove it's up to the task.

“Now that we've trained ExoMiner using Kepler data, with a little fine-tuning, we can transfer that learning to other missions, including TESS, which we're currently working on,” said Valizadegan. “There's room to grow.”

Hurricanes are expected to linger over northeast cities, causing greater damage

More storms like Hurricane Sandy could be in the East Coast's future, potentially costing billions of dollars in damage and economic losses.

Hurricane Sandy over the Carolinas. Credit: NASA Goddard

By the late 21st century, northeastern U.S. cities will see worsening hurricane outcomes, with storms arriving more quickly but slowing down once they’ve made landfall. As storms linger longer over the East Coast, they will cause greater damage along the heavily populated corridor, according to a new study.

In the new study, climate scientist Andra Garner at Rowan University analyzed more than 35,000 supercomputer-simulated storms. To assess likely storm outcomes in the future, Garner and her collaborators compared where storms formed, how fast they moved and where they ended from the pre-industrial period through the end of the 21st century.

The researchers found that future East Coast hurricanes will likely cause greater damage than storms of the past. The research predicted that a greater number of future hurricanes will form near the East Coast, and those storms will reach the Northeast corridor more quickly. The simulated storms slowed to a crawl as they approached the East Coast, allowing them to produce more wind, rain, flooding, and related damage in the Northeast region. The longest-lived tropical storms are predicted to last twice as long as storms today.

The study was published in Earth’s Future, which publishes interdisciplinary research on the past, present, and future of our planet and its inhabitants.

The changes in storm speed will be driven by changes in atmospheric patterns over the Atlantic, prompted by warmer air temperatures. While Garner and her colleagues note that more research remains to be done to fully understand the relationship between a warming climate and changing storm tracks, they noted that potential northward shifts in the region where Northern and Southern Hemisphere trade winds meet or slowing environmental wind speeds could be to blame.

“When you think of a hurricane moving along the East Coast, there are larger scale wind patterns that generally help push them back out to sea,” Garner said. “We see those winds slowing down over time.” Without those winds, the hurricanes can overstay their welcome on the coast.

Garner, whose previous work focused on the devastating East Coast effects of storms like Hurricane Sandy, particularly in the Mid-Atlantic, said the concern raised by the new study is that more storms capable of producing damage levels similar to Sandy are likely.

And the longer storms linger, the worse they can be, she said.

“Think of Hurricane Harvey in 2017 sitting over Texas, and Hurricane Dorian in 2019 over the Bahamas,” she said. “That prolonged exposure can worsen the impacts.”

From 2010 to 2020, U.S. coastlines were hit by 19 tropical cyclones that qualified as billion-dollar disasters, generating approximately $480 billion in damages, adjusted for inflation. If storms sit over coasts for longer stretches, that economic damage is likely to increase as well. For the authors, that provides clear economic motivation to stem rising greenhouse gas emissions.

“The work produced yet more evidence of a dire need to cut emissions of greenhouse gases now to stop the climate from warming,” Garner said.

Co-author Benjamin Horton, who specializes in sea-level rise and leads the Earth Observatory of Singapore at Nanyang Technological University, said, “This study suggests that climate change will play a long-term role in increasing the strength of storms along the east coast of the United States and elsewhere. Planning for how to mitigate the impact of major storms must take this into account.”

Columbia engineers build a breakthrough integrated photonics device

Over the past several decades, researchers have moved from using electric currents to manipulating light waves in the near-infrared range for telecommunications applications such as high-speed 5G networks, biosensors on a chip, and driverless cars. This research area, known as integrated photonics, is fast evolving, and investigators are now exploring the shorter visible wavelength range to develop a broad variety of emerging applications. These include chip-scale light detection and ranging (LIDAR), augmented/virtual/mixed reality (AR/VR/MR) goggles, holographic displays, quantum information processing chips, and implantable optogenetic probes in the brain.

The one device critical to all these applications in the visible range is an optical phase modulator, which controls the phase of a light wave, similar to how the phase of radio waves is modulated in wireless computer networks. With a phase modulator, researchers can build an on-chip optical switch that channels light into different waveguide ports. With a large network of these optical switches, researchers could create sophisticated integrated optical systems that could control light propagating on a tiny chip or light emission from the chip.

A visible-spectrum phase modulator (the ring at the center, with a radius of 10 microns) is tinier than a butterfly wing scale. Photo credit: Heqing Huang and Cheng-Chia Tsai/Columbia Engineering

But phase modulators in the visible range are very hard to make: there are no materials that are transparent enough in the visible spectrum while also providing large tunability, either through thermo-optical or electro-optical effects. Currently, the two most suitable materials are silicon nitride and lithium niobate. While both are highly transparent in the visible range, neither one provides very much tunability. Visible-spectrum phase modulators based on these materials are thus not only large but also power-hungry: the length of individual waveguide-based modulators ranges from hundreds of microns to several millimeters, and a single modulator consumes tens of milliwatts for phase tuning. Researchers trying to achieve large-scale integration—embedding thousands of devices on a single microchip—have, up to now, been stymied by these bulky, energy-consuming devices.

Today, Columbia Engineering researchers announced that they have found a solution to this problem—they’ve developed a way based on micro-ring resonators to dramatically reduce both the size and the power consumption of a visible-spectrum phase modulator, from one millimeter to 10 microns, and from tens of milliwatts for π phase tuning to below one milliwatt. The study was published today in Nature Photonics.

“Usually the bigger something is, the better. But integrated devices are a notable exception,” said Nanfang Yu, associate professor of applied physics, co-principal investigator (PI) on the team, and an expert in nanophotonics. “It’s really hard to confine light to a spot and manipulate it without losing much of its power. We are excited that in this work we’ve made a breakthrough that will greatly expand the horizon of large-scale visible-spectrum integrated photonics.”

Conventional optical phase modulators operating at visible wavelengths are based on light propagation in waveguides. Yu worked with his colleague Michal Lipson, who is the leading expert on integrated photonics based on silicon nitride, to develop a very different approach.

“The key to our solution was to use an optical resonator and to operate it in the so-called ‘strongly over-coupled’ regime,” said Lipson, co-PI on the team, and Eugene Higgins Professor of Electrical Engineering and professor of applied physics.

Optical resonators are structures with a high degree of symmetry, such as rings, that can cycle a beam of light many times and translate tiny refractive index changes into large phase modulation. Resonators can operate under several different conditions and so need to be used carefully. For example, if operating in the “under-coupled” or “critically coupled” regimes, a resonator will only provide a limited phase modulation and, more problematically, introduce a large amplitude variation to the optical signal. The latter is a highly undesirable optical loss, because the accumulation of even moderate losses from individual phase modulators will prevent cascading them to form a circuit that has a sufficiently large output signal.

To achieve a complete 2π phase tuning and minimal amplitude variation, the Yu-Lipson team chose to operate a micro-ring in the “strongly over-coupled” regime, a condition in which the coupling strength between the micro-ring and the “bus” waveguide that feeds light into the ring is at least 10 times stronger than the loss of the micro-ring. “The latter is primarily due to optical scattering at the nanoscale roughness on the device sidewalls,” Lipson explained. “You can never fabricate photonic devices with perfectly smooth surfaces.”

A visible-spectrum phase modulator (the ring at the center, with a radius of 10 microns) is much smaller than a morning glory pollen grain. Photo credit: Heqing Huang and Cheng-Chia Tsai/Columbia Engineering
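The coupling regimes can be illustrated with the textbook all-pass micro-ring model, in which r is the self-coupling coefficient of the bus waveguide and a is the ring's round-trip amplitude transmission; over-coupling corresponds to r < a. The parameter values below are illustrative, not those of the fabricated device:

```python
import numpy as np

def ring_response(phi, r, a):
    """Complex field transmission of an all-pass micro-ring.

    phi: round-trip phase detuning (radians)
    r:   self-coupling coefficient of the bus waveguide
    a:   round-trip amplitude transmission of the ring
    """
    return (r - a * np.exp(1j * phi)) / (1.0 - r * a * np.exp(1j * phi))

phi = np.linspace(-np.pi, np.pi, 2001)  # one free spectral range

# Strongly over-coupled: coupling far exceeds round-trip loss (r << a).
t = ring_response(phi, r=0.6, a=0.99)
phase_span = np.ptp(np.unwrap(np.angle(t)))  # total phase accumulated
min_amp = np.abs(t).min()                    # worst-case amplitude
```

With r well below a, the phase accumulated across one resonance spans a full 2π while the transmission amplitude dips by only a few percent; pushing r toward a (critical coupling) instead produces a deep amplitude dip, the undesirable loss the article describes.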

The team developed several strategies to push the devices into the strongly over-coupled regime. The most crucial one was their invention of an adiabatic micro-ring geometry, in which the ring smoothly transitions between a narrow neck and a wide belly, which are at the opposite edges of the ring. The narrow neck of the ring facilitates the exchange of light between the bus waveguide and the micro-ring, thus enhancing the coupling strength. The ring’s wide belly reduces optical loss because the guided light interacts only with the outer sidewall, not the inner sidewall, of the widened portion of the adiabatic micro-ring, substantially reducing optical scattering at the sidewall roughness.

In a comparative study of adiabatic micro-rings and conventional micro-rings with uniform width fabricated side by side on the same chip, the team found that none of the conventional micro-rings satisfied the strong over-coupling condition—they suffered severe optical losses—while 63% of the adiabatic micro-rings kept operating in the strongly over-coupled regime.

“Our best phase modulators operating at the blue and green colors, which are the most difficult portion of the visible spectrum, have a radius of only five microns, consume power of 0.8 mW for π phase tuning, and introduce an amplitude variation of less than 10%,” said Heqing Huang, a graduate student in Yu’s lab and first author of the paper. “No prior work has demonstrated such compact, power-efficient, and low-loss phase modulators at visible wavelengths.”

The devices were designed in Yu’s lab and fabricated in the Columbia Nano Initiative cleanroom, at the Advanced Science Research Center NanoFabrication Facility at the Graduate Center of the City University of New York, and the Cornell NanoScale Science and Technology Facility. Device characterization was conducted in Lipson’s and Yu’s labs.

The researchers note that while they are nowhere near the degree of integration of electronics, their work shrinks the gap between photonic and electronic switches substantially. “If previous modulator technologies only allow for integration of 100 waveguide phase modulators given a certain chip footprint and power budget, now we can do that 100 times better and integrate 10,000 phase shifters on-chip to realize much more sophisticated functions,” said Yu.

The Lipson and Yu labs are now collaborating to demonstrate visible-spectrum LIDAR consisting of large 2D arrays of phase shifters based on adiabatic micro-rings. The design strategies employed for their visible-spectrum thermo-optical devices can be applied to electro-optical modulators to reduce their footprints and drive voltages, and can be adapted in other spectral ranges (e.g., ultraviolet, telecom, mid-infrared, and terahertz) and in other resonator designs beyond micro-rings.

“Thus, our work can inspire future effort where people can implement strong over-coupling in a wide range of resonator-based devices to enhance light-matter interactions, for example, for enhancing optical nonlinearity, for making novel lasers, for observing novel quantum optical effects, while suppressing optical losses at the same time,” Lipson said.

RIKEN physicists show how heat flow controls the movement of skyrmions in an insulating magnet

Magnetic vortices could be manipulated by waste heat to realize low-power computing applications

Tiny amounts of heat can be used to control the movement of magnetic whirlpools called skyrmions, RIKEN physicists have shown. This ability could help to develop energy-efficient forms of computing that harness waste heat.

Figure 1: Skyrmions often arrange themselves into hexagonal lattices (top). RIKEN researchers have shown that a temperature gradient in a thin plate of an insulating magnetic material (bottom) can be used to propel such skyrmion lattices from the cooler (blue) to the warmer (red) side of the device. © 2021 RIKEN Center for Emergent Matter Science

Skyrmions are minuscule vortices that form when the magnetic flux of a group of atoms organizes into swirling patterns. Skyrmions can move around inside a material, and under certain conditions, they cluster together to form a regular arrangement known as a skyrmion lattice (upper part of Fig. 1).

Skyrmions are promising information carriers in next-generation computer chips that have very low power requirements. Researchers can already control skyrmions by applying electrical currents and magnetic fields, but they are seeking to manipulate them using heat flow instead. “This is an exciting prospect since it would raise the possibility of using waste heat to move skyrmions around,” says Xiuzhen Yu at the RIKEN Center for Emergent Matter Science.

Now, Yu and her colleagues have shown how a temperature gradient can be used to propel skyrmions in an electrically insulating magnetic material.

The team built a device that consisted of a plate of this material, a miniature heating element, and two electric thermometers. They then generated skyrmions that were roughly 60 nanometers wide in the plate by cooling it to about −253 degrees Celsius and applying a magnetic field. These skyrmions gathered into a stable honeycomb structure known as a hexagonal skyrmion lattice.

Yu’s team then increased the temperature slightly at one end of the plate and used a transmission electron microscope to watch how this affected the skyrmions. A temperature gradient of just one-hundredth of a degree per millimeter across the plate was enough to nudge the skyrmions into motion. Above this threshold, the edge of the honeycomb lattice drifted from the cooler to the warmer end of the plate, traveling in the opposite direction to the flow of heat (lower part of Fig. 1). This required a very low heat power of just 10 microwatts, hundreds to thousands of times smaller than the power needed to move skyrmions using electrical currents or magnetic fields. Using a slightly higher power, individual skyrmions could be driven through the plate by the temperature gradient.

The researchers say that this is the first time that heat-driven skyrmion motion has been seen in an insulating magnet. “This finding should stimulate researchers to develop energy-efficient devices by using skyrmions,” says Yu.

The team is now studying the heat-induced dynamics of skyrmions, including their transformation into their antiparticles (anti-skyrmions) in metallic systems at room temperature.