Herbert Jaeger, Professor of Computing in Cognitive Materials at CogniGron | Photo Marleen Annema

University of Groningen Professor Jaeger takes steps towards creating a formal theory for neuromorphic computing

There is currently a search for new materials to build computer microchips that are more energy-efficient and brain-like. However, there is no established theory to put this effort on a solid foundation. A theory for non-digital computers must take into account continuous, analog signals, physical effects at the nanoscale, and the fact that the devices produced are often not identical. The paper published by Herbert Jaeger, Beatriz Noheda, and Wilfred G. van der Wiel is a first attempt to sketch what such a theory for neuromorphic computers might look like.

According to Herbert Jaeger, who is a professor of computing in cognitive materials at the University of Groningen in the Netherlands, there needs to be a solid theory behind the engineering of new microchips. Currently, computers rely on stable switches, usually transistors, that can be either on or off, making them logical machines with programming based on logical reasoning. However, the miniaturization of transistors, which has been the key to making computers more powerful, is reaching its physical limit, which is why scientists are now looking for new materials that can produce more versatile switches capable of using more values than just 0 or 1.

Jaeger is a member of the Groningen Cognitive Systems and Materials Center (CogniGron) which is devoted to creating neuromorphic (brain-like) computers. CogniGron brings together scientists with differing approaches, including experimental materials scientists, mathematical theorists, and computer science and AI specialists. Working closely with materials scientists has given Jaeger insight into the challenges they face when developing new computational materials. It has also made him aware of a dangerous pitfall: there is no established theory for the use of non-digital physical effects in computing systems.

Our brain functions differently from a logical system. Although we can reason logically, this is only a small part of what the brain does. Most of the time, it is figuring out how to perform everyday tasks such as lifting a cup or waving to a colleague. Jaeger explains that "a lot of the information-processing that our brain does is this non-logical stuff, which is continuous and dynamic. It is difficult to formalize this in a digital computer." The brain also keeps working despite external factors such as fluctuations in blood pressure, external temperature, and hormone balance. So, how can we create a computer that is both versatile and robust? Jaeger is optimistic: "the brain is proof of principle that it can be done."

The brain is a source of inspiration for materials scientists who aim to produce materials that mimic the behavior of neurons. Scientists might create materials that oscillate, show bursts of activity, and resemble how neurons work. However, the field is missing a crucial piece of information: even neuroscientists don't fully understand how the brain works. The lack of a theory for neuromorphic computers is a problem, but the field doesn't seem to acknowledge this. In a recent paper, Jaeger, Noheda, and van der Wiel proposed a theory for non-digital computers. The theory suggests that instead of using stable 0/1 switches, non-digital computers should work with continuous, analog signals. It should also account for the various non-standard nanoscale physical effects that materials scientists are studying.

Neuromorphic computing devices made from new materials are difficult to construct, and if you make a hundred of them, they will not all be identical. This is similar to how our neurons are not all the same. Additionally, these devices are often brittle and sensitive to temperature. Therefore, any theory for neuromorphic computing should consider such characteristics. Importantly, a theory supporting neuromorphic computing will not be a single theory but will consist of many sub-theories, just like digital computer theory, which is a layered system of connected sub-theories. To create a theoretical description of neuromorphic computers, experimental materials scientists and formal theoretical modelers must collaborate closely. Computer scientists must be aware of the physics of all these new materials, and materials scientists should be familiar with the fundamental concepts in computing.

The University of Groningen established CogniGron to bridge the gap between materials science, neuroscience, computing science, and engineering. The aim is to bring these different groups together to work collaboratively. Jaeger, one of the researchers at CogniGron, explains that every discipline has its blind spots, and that the biggest shared gap is the lack of a foundational theory for neuromorphic computing. To overcome this, their paper provides a first attempt at showing how such a theory could be formulated and how a common language can be created.

Map of the study area in Chile. Red curve is the DAS array, black dots are earthquakes, dark red triangles are permanent seismic stations. | TSR doi.org/10.1785/0320230018

Researchers use a deep-learning model to identify earthquake waves from the DAS data from the offshore cable

Unused telecommunications fiber optic cables can provide three seconds of improved warning time for offshore earthquake early warning systems, as researchers have shown. The researchers used a deep-learning artificial intelligence model to identify earthquake waves from the DAS data obtained from the offshore cable. There are over 1500 cable landing stations across the globe, and this technology allows the use of operational cables and integration of DAS systems without disrupting telecommunications data transportation. This presents an exciting opportunity for further research.

Seismic stations located offshore of heavily populated coastlines are lacking, which poses a significant challenge for earthquake early warning systems (EEW). These areas are some of the world's most seismically active regions. A new study published in The Seismic Record shows how the conversion of unused telecommunications fiber optic cable can address this issue for offshore EEW.

Jiuxun Yin, a Caltech researcher now at SLB, and colleagues utilized a 50-kilometer submarine telecom cable that runs between the United States and Chile. They sampled seismic data at 8,960 channels along the cable for four days using the Distributed Acoustic Sensing (DAS) technique. This technique uses the tiny internal flaws in a long optical fiber as thousands of seismic sensors.
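The figures above imply a dense sensor line: 8,960 channels along a 50-kilometer fiber works out to a sensing point roughly every 5.6 meters. A minimal sketch of that arithmetic, assuming (the study does not state this) that channels are evenly spaced along the cable:

```python
# Map DAS channel indices to positions along the cable.
# Figures from the study: a 50 km cable sampled at 8,960 channels.
# The ~5.6 m spacing is derived here, not stated in the text, and
# even spacing along the fiber is an assumption.

CABLE_LENGTH_M = 50_000
N_CHANNELS = 8_960

def channel_position_m(channel: int) -> float:
    """Approximate distance of a channel from the landing station,
    assuming evenly spaced sensing points along the fiber."""
    spacing = CABLE_LENGTH_M / N_CHANNELS  # ~5.58 m between channels
    return channel * spacing

print(f"channel spacing: {CABLE_LENGTH_M / N_CHANNELS:.2f} m")
print(f"channel 4480 sits near {channel_position_m(4480) / 1000:.1f} km")
```

The meter-scale spacing is what lets a single fiber stand in for thousands of conventional seismometers.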

During the study period, Yin and colleagues used the cable data to determine earthquake locations and estimate earthquake magnitudes for one onshore (magnitude 3.7) and two offshore (magnitude 2.7 and 3.3) earthquakes.

Their results showed that this single offshore DAS array offers an approximate three-second improvement in earthquake early warning compared to relying on onshore stations. In a simulation, the researchers found that multiple DAS arrays spaced 50 kilometers apart and working together in the area could improve EEW alert times in the subduction zone by five seconds.
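The source of the gain can be sketched with a back-of-the-envelope calculation: an alert becomes possible once the fast P waves reach the nearest sensor, while the damaging S waves arrive later. All wave speeds and distances below are illustrative assumptions, not values from the study:

```python
# Back-of-envelope for the warning-time gain from an offshore sensor.
# VP, VS, and the distances are illustrative assumptions only.

VP = 6.5  # km/s, assumed P-wave speed
VS = 3.5  # km/s, assumed S-wave speed

def warning_time_s(epicenter_to_sensor_km: float,
                   epicenter_to_city_km: float) -> float:
    """Seconds between detection (P wave reaching the nearest sensor)
    and damaging S-wave arrival at the city."""
    detection = epicenter_to_sensor_km / VP
    s_arrival = epicenter_to_city_km / VS
    return s_arrival - detection

# Hypothetical offshore quake 60 km from a coastal city.
onshore_only = warning_time_s(60, 60)   # nearest sensor is on land
with_offshore = warning_time_s(20, 60)  # DAS channel 20 km from epicenter
gain = with_offshore - onshore_only
print(f"extra warning: {gain:.1f} s")
```

Moving the detection point closer to the epicenter converts travel time that would otherwise be spent waiting for waves to reach land into usable warning time.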

Yin expressed that they had anticipated some improvements due to the offshore placement of the DAS array. However, the actual speed gains were even greater than their initial projections. The array's offshore location eliminates the wait time for seismic waves to reach land-based stations, which is the primary advantage.

Offshore Chile resembles the Cascadia region offshore Canada and the U.S. Pacific Northwest. Both sit above an active subduction zone, where tectonic plates collide and one plate plunges beneath another, producing some of history's largest and most destructive earthquakes. Even Southern California's offshore region contains numerous faults that have hosted earthquakes of magnitude 6 or more. In all these densely populated coastal areas, offshore earthquake early warning could help protect lives and property.

Yin explained that Chile's elevated seismic risk was the primary reason for selecting this cable. The region experiences frequent offshore earthquakes and has been affected by several significant magnitude 8+ earthquakes in history, including the largest ever recorded in 1960. Considering the high seismic risk and potentially devastating impacts of a large earthquake, there is a pressing need for a reliable offshore earthquake early warning system in Chile.

The researchers utilized a deep learning artificial intelligence model, which had been trained and validated on previous seismic and DAS data, to identify the earthquake waves from the DAS data of this offshore cable. According to Yin, the volume of data collected for DAS is substantial and pre-trained deep learning models offer a highly efficient and reliable option for real-time applications like EEW. However, other traditional seismological methods of picking earthquakes can still be effective in processing DAS data with automation.
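The "traditional seismological methods of picking earthquakes" mentioned above typically means energy-ratio detectors such as STA/LTA (short-term average over long-term average). A minimal sketch of that classic trigger, with window lengths and threshold chosen purely for illustration:

```python
# A classic STA/LTA trigger, one traditional alternative to
# deep-learning pickers for a single DAS channel. Window lengths
# and the threshold are illustrative, not from the study.

def sta_lta_trigger(trace, sta_len=5, lta_len=50, threshold=4.0):
    """Return the first sample index where the short-term average
    energy exceeds `threshold` times the long-term average energy,
    or None if no trigger fires."""
    energy = [x * x for x in trace]
    for i in range(lta_len, len(trace)):
        sta = sum(energy[i - sta_len:i]) / sta_len
        lta = sum(energy[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            return i
    return None

# Synthetic channel: low-level noise, then a sudden arrival at sample 80.
import random
random.seed(0)
trace = [random.gauss(0, 0.1) for _ in range(80)] + \
        [random.gauss(0, 2.0) for _ in range(40)]
pick = sta_lta_trigger(trace)
print("trigger at sample", pick)
```

Such detectors are cheap to run per channel, which is why they remain viable for automated DAS processing even alongside deep-learning models.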

Yin also noted that researchers require more data, particularly from larger magnitude earthquakes, to develop and test EEW algorithms effectively, as well as more information on how DAS instruments respond before building a real-time EEW system that integrates with existing EEW frameworks. He stated that there are plenty of places around the world to continue this research.

As per Yin, "There are more than 1500 cable landing stations around the globe, and the progress in the technology permits the use of operational cables and adding DAS systems without affecting [telecommunications] data transportation. We believe that this opens up a host of exciting research opportunities, and we are keen to explore these in future studies. We are looking for close interactions with cable owners, environmental agencies, and policymakers to scale the DAS-EEW for the benefit of coastal communities."

This artist’s concept depicts the Surface Water and Ocean Topography (SWOT) satellite, launched in December 2022. Credit: NASA/Jet Propulsion Laboratory

Machine Learning can translate sea surface heights into climate change insights

Scientists have developed a new machine learning technique that can translate satellite data on sea surface heights into insights on climate change, heat flow, and current flow. The Surface Water and Ocean Topography (SWOT) satellite, launched in December 2022, captures snapshots of sea surface heights at an unprecedented level of detail. The new technique uses a convolutional neural network to estimate various aspects of current flow in the upper ocean, which can help scientists gain a better understanding of and predict climate change.

Oceanographers rely on satellite technology to monitor the ocean's surface elevation and map the circulation of its currents to understand the role of this movement in climate change and heat transport. In late 2022, the Surface Water and Ocean Topography (SWOT) satellite was launched to capture high-resolution snapshots of sea surface heights at a scale of tens of kilometers. However, this high level of detail has resulted in the detection of waves beneath the surface, making it challenging to use simple physics-based approaches to translate sea surface heights into meaningful information about ocean currents.
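The "simple physics-based approach" referred to here is typically geostrophic balance, in which surface currents are inferred from sea surface height gradients via u = -(g/f) ∂η/∂y and v = (g/f) ∂η/∂x. A minimal finite-difference sketch on a synthetic height field (the grid, Coriolis parameter, and field values are illustrative assumptions, not SWOT data):

```python
# Geostrophic balance: the classic physics-based way to turn sea
# surface height (SSH) gradients into surface currents, which breaks
# down at the small scales SWOT resolves. All values are synthetic.

G = 9.81          # m/s^2, gravity
F = 1e-4          # 1/s, Coriolis parameter (assumed mid-latitude value)
DX = DY = 10_000  # m, assumed grid spacing

def geostrophic_uv(ssh):
    """Central-difference geostrophic velocities from a 2D SSH grid (m):
    u = -(g/f) * d(eta)/dy,  v = (g/f) * d(eta)/dx."""
    ny, nx = len(ssh), len(ssh[0])
    u = [[0.0] * nx for _ in range(ny)]
    v = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            detadx = (ssh[j][i + 1] - ssh[j][i - 1]) / (2 * DX)
            detady = (ssh[j + 1][i] - ssh[j - 1][i]) / (2 * DY)
            u[j][i] = -(G / F) * detady
            v[j][i] = (G / F) * detadx
    return u, v

# SSH rising by 1 cm per grid step northward -> uniform westward flow.
ssh = [[0.01 * j for _ in range(5)] for j in range(5)]
u, v = geostrophic_uv(ssh)
print(f"u at center: {u[2][2]:.3f} m/s")
```

At the fine scales SWOT captures, internal waves and unbalanced motions contaminate the height field, which is why this simple relation no longer suffices and a learned estimator becomes attractive.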

To address this challenge, researchers Xiao et al. have developed a novel machine learning method that uses SWOT sea surface height data to estimate various aspects of current flow in the upper ocean. The method applies a computational approach inspired by human vision known as a convolutional neural network, which the team trained on data from realistic simulations of sea surface heights and current dynamics.

The researchers have shown that their convolutional neural network can use detailed sea surface heights to estimate certain aspects of current flow. By gaining a better understanding of how currents transport heat and carbon, scientists may be able to predict and comprehend climate change more accurately.

However, the researchers acknowledge that this is only a proof of concept, and further research is needed to refine the new method before it can be reliably used with SWOT data.

Meanwhile, SWOT will continue to capture high-resolution images not only of Earth's oceans but also of almost all surface water around the world, including lakes, rivers, and reservoirs.

Electron micrographs of the 2D-0D hybrid surface implemented in this study (top left), memory characteristics generated by light pulses (top right), and multi-level memory characteristics generated by multiple light pulses (bottom).

KIST develops tech to store, manipulate electronic states in quantum dots that are smaller than 10 nm, ushering in the era of light-powered multi-level memories

Researchers from the Korea Institute of Science and Technology (KIST) and the Daegu Gyeongbuk Institute of Science and Technology (DGIST) have successfully developed a new semiconductor material that can store and manipulate electronic states in quantum dots measuring 10 nanometers or less. This new material makes it possible to store and manipulate data using light, rather than electrical signals, between the computing and storage components of a multi-level computer, thereby significantly enhancing processing speed. The development of this new multi-level optical memory device is expected to contribute towards accelerating the industrialization of next-generation system technologies, such as artificial intelligence systems.

We are facing an overwhelming amount of data. The data centers that store and process it consume a great deal of electricity, a major contributor to environmental pollution. To overcome this, researchers are exploring multi-level computing systems that promise lower power consumption and higher computation speed. However, because these systems still operate with electrical signals, just like conventional binary computers, they cannot keep up with the huge demand for data processing.

Recently, the Korea Institute of Science and Technology (KIST) announced that Dr. Do Kyung Hwang of the Center for Opto-Electronic Materials & Devices and Professor Jong-Soo Lee of the Department of Energy Science & Engineering at Daegu Gyeongbuk Institute of Science and Technology (DGIST) have jointly developed a new semiconductor artificial junction material that can power next-generation memory with light. Transmitting data between the computing and storage parts of a multi-level computer using light instead of electrical signals can significantly increase processing speed.

The research team has created a new semiconductor artificial junction material by joining quantum dots in a core-shell structure with zinc sulfide (ZnS) on the surface of cadmium selenide (CdSe) and a molybdenum sulfide (MoS2) semiconductor. The new material enables the storage and manipulation of electronic states within quantum dots measuring 10 nm or less.

When light hits the cadmium selenide core, some electrons flow out into the molybdenum sulfide semiconductor, leaving holes trapped in the core and making it conductive. The electron states inside the cadmium selenide are also quantized. Successive light pulses trap electrons in these levels one after another, changing the resistance of the molybdenum sulfide through the field effect. The resistance shifts step by step with the number of light pulses, allowing more than ten distinct states to be maintained, unlike conventional memory which has only the 0 and 1 states. The zinc sulfide shell also prevents charge leakage between neighboring quantum dots, allowing each individual quantum dot to function as a memory.
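The payoff of cells with more than two states is density: a cell with N stable levels stores log2(N) bits, so the same data needs fewer cells. A sketch of packing an integer into base-N digits, where N = 11 is purely illustrative, standing in for the "more than ten" resistance states described above:

```python
# Why multi-level cells matter: a cell with n_states stable levels
# stores log2(n_states) bits. n_states = 11 here is an illustrative
# stand-in for "more than ten" resistance states.

import math

def to_levels(value: int, n_states: int, n_cells: int):
    """Encode an integer as base-n_states digits, one per memory cell."""
    digits = []
    for _ in range(n_cells):
        digits.append(value % n_states)
        value //= n_states
    return digits[::-1]

def from_levels(digits, n_states: int) -> int:
    """Decode the digits back into an integer."""
    out = 0
    for d in digits:
        out = out * n_states + d
    return out

value = 2023
binary_cells = math.ceil(math.log2(value + 1))    # cells needed at 2 states
multi_cells = math.ceil(math.log(value + 1, 11))  # cells needed at 11 states
print(f"{binary_cells} binary cells vs {multi_cells} 11-level cells")
assert from_levels(to_levels(value, 11, multi_cells), 11) == value
```

Storing the same value in a third as many cells is the kind of density gain that makes multi-level optical memory attractive for data-hungry AI workloads.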

Unlike the quantum dots in conventional 2D-0D semiconductor artificial junction structures, which amplify signals from light sensors, the team's quantum dot structure closely mimics a floating-gate memory structure, confirming its potential as a next-generation optical memory. The researchers verified the effectiveness of the multi-level memory phenomenon with neural network modeling on the CIFAR-10 dataset, achieving a 91% recognition rate.

Dr. Hwang of KIST says that this new multi-level optical memory device will contribute to accelerating the industrialization of next-generation system technologies such as artificial intelligence systems. These systems have been difficult to commercialize due to technical limitations arising from the miniaturization and integration of existing silicon semiconductor devices.

Extreme events such as cyclones will change as a result of climate change - and their effects should be better considered in climate models, say the authors. Photo: Viks_jin/NASA/Adobe Stock

Germany is utilizing supercomputing to demonstrate the impact of weather phenomena on ocean circulation

The impact of climate change on atmospheric synoptic variability (ASV) will have significant consequences for ocean circulation and ecosystems. Even though ASV is often treated as background noise in climate models, it is important to understand its full influence on the ocean. Future changes in ASV will result in a shallower mixing layer in subtropical regions and a deeper mixing layer in equatorial regions.

A recent study conducted by GEOMAR in Kiel, Germany, has explored how changes in weather patterns may affect the tropical Pacific Ocean and its ecosystems in the future. The researchers conducted simulations based on complex supercomputer models to show that these changes will have significant consequences on ocean circulation. The authors stress the need to consider this while creating future climate models.

The strength of the wind has a crucial influence on ocean circulation, especially during extreme weather events such as storm fronts, tropical storms, and cyclones. Due to climate change, these weather patterns are expected to change in the future, particularly in terms of the average energy input into the ocean from mid-latitude storms, which is expected to decrease while equatorial regions will become more active. Scientists refer to these distinct weather patterns as "Atmospheric Synoptic Variability" (ASV).

Dr. Olaf Duteil from the GEOMAR Helmholtz Centre for Ocean Research in Kiel and Professor Dr. Wonsun Park from the IBS Center for Climate Physics and Pusan National University in Korea have now conducted a modeling study to investigate the integrated effects of long-term changes in these weather patterns on the Pacific basin for the first time. According to their supercomputing results, it is crucial to consider these changes while creating climate models.

From a climate perspective, weather is generally regarded as "noise" and is not systematically analyzed in long-term climate projections, according to the two researchers. However, to better understand the influence of climate change on the ocean, it is essential to take into account the cumulative effect of short-term changes in weather patterns rather than just focusing on average atmospheric properties such as mean wind speeds, says Duteil.

According to the researchers, Atmospheric Synoptic Variability (ASV) will play a vital role in the future mixing of the ocean's layers. Depending on the weather phenomena, more or less kinetic energy is put into the ocean, leading to correspondingly more or less mixing. The study predicts that the reduction of ASV in subtropical regions will cause the ocean's mixing layer to become shallower, while at the equator the mixing layer will deepen as ASV increases.

The research findings also indicate that a decrease in ASV in the future will weaken the subtropical and tropical cells, affecting large-scale ocean circulation systems. These systems connect mid-latitudes and equatorial latitudes via upper ocean pathways, driven by the trade winds north and south of the equator. The study further highlights that these cells regulate the upwelling of equatorial waters and play a vital role in determining the surface temperature of the oceans, thus influencing primary productivity in the tropics.

The study emphasizes the importance of accurately quantifying ASV and weather patterns in climate models, which could improve our understanding of future upper-ocean circulation and mean properties. Lead author Duteil suggests that this quantification should be used to strengthen confidence in projections of future climate, especially when analyzing large ensembles of climate models.