DFG's Priority Program funds project on ML for molecular systems

Machine learning, the ability to recognize important patterns in data sets and to generate solutions with the help of algorithms, is a research field of rapidly growing importance. In physics, chemistry, and biology, where complex molecular structures arise, classical machine learning is reaching its limits. To better understand and use molecular data, new models must be developed. This is what the DFG's Priority Program "Use and Development of Machine Learning for Molecular Applications – Molecular Machine Learning" aims to achieve. By the end of the six-year funding period, software should be available that scientists can use in everyday research and that enables new applications, for example in pharmacy. The project focuses on the energy transfer between light-absorbing molecules attached to a clay surface.

"Representing and calculating the properties of molecules in a computer is anything but trivial," said Professor Zaspel. "One of the main challenges is that we first have to generate new data instead of working with existing data as we normally do. Conceptually, that makes a big difference." The research group applies the so-called "multi-fidelity approach" of machine learning, which combines data of varying accuracy.
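The multi-fidelity idea can be illustrated with a small, self-contained sketch. Everything below is a toy example, not the group's actual method or data: a cheap "low-fidelity" function stands in for an approximate quantum-chemical method, a few expensive "high-fidelity" points stand in for reference calculations, and simple polynomial fits stand in for the machine learning models.

```python
import numpy as np

# Toy "molecular property": a high-fidelity truth and a cheap approximation
# with a systematic error (both functions are made up for illustration).
def high_fidelity(x):   # expensive reference calculation
    return np.sin(3 * x) + 0.3 * x

def low_fidelity(x):    # cheap approximate method
    return np.sin(3 * x)

rng = np.random.default_rng(0)
x_cheap = rng.uniform(0, 2, 200)   # plenty of cheap data points
x_costly = rng.uniform(0, 2, 10)   # only a few expensive points

# Step 1: fit a baseline model on the abundant low-fidelity data.
base = np.polynomial.Polynomial.fit(x_cheap, low_fidelity(x_cheap), deg=7)

# Step 2: fit a *correction* on the few high-fidelity points. The difference
# high(x) - base(x) is usually smoother than high(x) itself, so a few
# expensive evaluations suffice.
delta = np.polynomial.Polynomial.fit(
    x_costly, high_fidelity(x_costly) - base(x_costly), deg=2
)

def multi_fidelity_predict(x):
    return base(x) + delta(x)

x_test = np.linspace(0, 2, 50)
err_base = np.abs(base(x_test) - high_fidelity(x_test)).mean()
err_mf = np.abs(multi_fidelity_predict(x_test) - high_fidelity(x_test)).mean()
print(err_base, err_mf)  # the correction shrinks the error substantially
```

The key point the sketch shows: combining many cheap, inaccurate samples with a handful of accurate ones gives a better model than either data set alone would.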

Complicated physical and quantum chemical calculations are required to produce the data. "This process is complex, expensive, and can take several days," explained Professor Kleinekathöfer. "We want to develop machine learning models based on the data that make large-scale calculations faster and more efficient. That would simplify the work enormously." This method can be used, for example, to improve specialized solar cells.

Kleinekathöfer and Zaspel applied for the DFG project jointly as a tandem. For both, the attraction of the Priority Program lies not only in gaining new insights into their own fields but also in the interdisciplinary exchange with colleagues throughout Germany. The program, coordinated by Professor Frank Glorius of the University of Münster, brings together more than a dozen universities and research institutions. "We hope for fruitful synergies," Kleinekathöfer said. At Jacobs University, the program is linked to the creation of two Ph.D. positions. The DFG funding for the two scientists amounts to almost 500,000 euros.

Washington University in St. Louis professor shows how neuromorphic AI systems can learn by doing more with less

Image caption: Sparsity and energy constraints guide learning and communication in silicon neuronal networks. Sparsity makes the spiking activity and communication between the neurons more energy-efficient as the neurons learn without using backpropagation. (Credit: Shantanu Chakrabartty/Washington University in St. Louis)

Brains have evolved to do more with less. Take a tiny insect brain, which has fewer than a million neurons yet shows a diversity of behaviors and is more energy-efficient than current AI systems. These tiny brains serve as models for computing systems that are becoming more sophisticated as billions of silicon neurons can be implemented in hardware.

The secret to achieving energy-efficiency lies in the silicon neurons’ ability to learn to communicate and form networks, as shown by new research from the lab of Shantanu Chakrabartty, the Clifford W. Murphy Professor in the Preston M. Green Department of Electrical & Systems Engineering at Washington University in St. Louis’ McKelvey School of Engineering.

For several years, his research group has studied dynamical-systems approaches to address the neuron-to-network performance gap and to provide a blueprint for AI systems that are as energy-efficient as biological ones.

Previous work from his group showed that in a computational system, spiking neurons create perturbations that allow each neuron to “know” which others are spiking and which are responding. It’s as if the neurons were all embedded in a rubber sheet formed by energy constraints; a single ripple, caused by a spike, would create a wave that affects them all. Like all physical processes, systems of silicon neurons tend to self-optimize to their least-energetic states, while also being affected by the other neurons in the network. These constraints come together to form a kind of secondary communication network, where additional information can be communicated through the dynamic but synchronized topology of spikes. It’s like the rubber sheet vibrating in a synchronized rhythm in response to multiple spikes.

In the latest research result, Chakrabartty and doctoral student Ahana Gangopadhyay showed how the neurons learn to pick the most energy-efficient perturbations and wave patterns in the rubber sheet. They show that when the learning is guided by sparsity (less energy), it is as if each neuron adjusts the electrical stiffness of the rubber sheet so that the entire network vibrates in the most energy-efficient way. Each neuron does this using only local information, which is communicated more efficiently. Communication between the neurons then becomes an emergent phenomenon guided by the need to optimize energy use.
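The energy-minimization picture above is broadly reminiscent of sparse coding with local dynamics. The sketch below is a generic locally-competitive-algorithm-style toy, not the authors' model: a small network relaxes an energy function with an L1 (sparsity) penalty using only locally available signals, and settles into a state in which only a few neurons stay active.

```python
import numpy as np

# Generic sparse-coding toy: neurons jointly minimize an energy
#   E(a) = 0.5 * ||x - Phi @ a||^2 + lam * ||a||_1
# through local dynamics, so the network settles into a sparse,
# energy-efficient representation of the input x.
rng = np.random.default_rng(1)
n_inputs, n_neurons = 20, 50
Phi = rng.normal(size=(n_inputs, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm "receptive fields"

# Input built from just 3 dictionary atoms (so a sparse code exists).
x = Phi[:, :3] @ np.array([1.0, -0.5, 0.8])

lam, dt = 0.1, 0.1
u = np.zeros(n_neurons)                      # internal (membrane-like) states
for _ in range(300):
    # Thresholded activity: small internal states produce no output at all.
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    # Each neuron's update needs only its own drive and others' activity.
    du = Phi.T @ (x - Phi @ a) - (u - a)
    u += dt * du

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
active = int(np.sum(np.abs(a) > 1e-3))
print(active, "of", n_neurons, "neurons active")  # only a handful remain active
```

The sparsity penalty plays the role of the "electrical stiffness" in the rubber-sheet analogy: it is the knob that makes the settled state cheap, with most neurons silent.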

This result could have significant implications on how neuromorphic AI systems might be designed. “We want to learn from neurobiology,” Chakrabartty said. “But we want to be able to exploit the best principles from both neurobiology and silicon engineering.”

Historically, neuromorphic engineering (modeling AI systems on biology) has been based on a relatively straightforward model of the brain. Take some neurons, add a few synapses, connect everything, and, voila, it's… if not alive, at least able to perform a simple task (recognizing images, for example) as efficiently as a biological brain, or more so. These systems are built by connecting memory (synapses) and processors (neurons), each performing its single task, as the brain was presumed to work. But this one-structure-to-one-function approach, though easy to understand and model, misses the full complexity and flexibility of the brain.

Recent brain research has shown tasks are not so neatly divided, and there may be instances in which the same function is being performed by different brain structures, or multiple structures working together. “There is more and more information showing that this reductionist approach we’ve followed might not be complete,” Chakrabartty said.

The key to building an efficient system that can learn new things is the use of energy and structural constraints as a medium for computing and communications or, as Chakrabartty said, “Optimization using sparsity.”

The situation is reminiscent of the six-degrees-of-Kevin-Bacon game: the challenge, or constraint, is to connect any actor to Kevin Bacon through a chain of six or fewer people.

For a neuron physically located on a chip, the challenge, or constraint, is to complete its task within an allotted energy budget. It might take less energy for a neuron to reach a destination neuron through intermediaries than to communicate with it directly. The challenge is how to pick the right set of "friend" neurons from the many available choices. Enter energy constraints and sparsity.

Like a tired professor, a system whose energy is constrained will seek the path of least resistance to complete an assigned task. Unlike the professor, an AI system can test all of its options at once, thanks to superposition techniques developed in Chakrabartty's lab using analog computing methods. In essence, a silicon neuron can attempt all communication routes simultaneously and find the most efficient way to connect and complete the assigned task.
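The route-picking idea in the last few paragraphs can be caricatured as a least-energy path problem. The sketch below is purely illustrative: it runs a serial Dijkstra search over a made-up energy-cost graph, whereas the lab's analog superposition approach effectively explores routes in parallel in hardware.

```python
import heapq

# Hypothetical energy costs for sending a spike between neuron pairs.
# A direct hop can cost more energy than routing through "friend" neurons.
edges = {
    "A": {"D": 10.0, "B": 2.0},
    "B": {"C": 2.0},
    "C": {"D": 2.0},
    "D": {},
}

def least_energy_route(graph, src, dst):
    """Dijkstra's algorithm: the route that reaches dst with least energy."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, path = least_energy_route(edges, "A", "D")
print(cost, path)  # 6.0 ['A', 'B', 'C', 'D'] — the intermediaries beat the 10.0 direct hop
```

The point of the analogy: under an energy budget, the "right set of friends" is exactly the set of intermediaries on the cheapest route, and sparsity keeps that set small.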

The current paper shows that a network of 1,000 silicon neurons can accurately detect odors with very few training examples. The long-term goal is to look for analogs in the brain of the locust, which has also been shown to be adept at classifying odors. Chakrabartty has been collaborating with Barani Raman, a professor in the Department of Biomedical Engineering, and Srikanth Singamaneni, the Lilyan & E. Lisle Hughes Professor in the Department of Mechanical Engineering & Materials Science, to create a sort of cyborg locust: one with two brains, a silicon one connected to the biological one.

“This would be the most interesting and satisfactory aspect of this research if and when we can start connecting the two realms,” Chakrabartty said. “Not just physically, but also functionally.”

Their results were published July 28, 2021, in the journal Frontiers in Neuroscience.

Korean-built machine learning models help photovoltaic systems find their place in the sun

Scientists develop algorithms that predict the output of solar cells, easing their integration into existing power grids

With the looming threat of climate change, it is high time we embraced renewable energy sources on a larger scale. Photovoltaic systems, which generate electricity from the nearly limitless supply of sunlight, are one of the most promising ways of generating clean energy. However, integrating photovoltaic systems into existing power grids is not a straightforward process. Because the power output of photovoltaic systems depends heavily on environmental conditions, power plant and grid managers need estimates of how much power photovoltaic systems will inject in order to plan optimal generation and maintenance schedules, among other important operational aspects.

Image caption: Integrating photovoltaic systems into existing power grids requires accurate predictions of the power they will generate to allow for proper grid management. (Credit: https://unsplash.com/@scienceinhd)

In line with modern trends, if something needs predicting, you can safely bet that artificial intelligence will make an appearance. To date, many algorithms can estimate the power produced by photovoltaic systems several hours ahead by learning from previous data and analyzing current variables. One of them, the adaptive neuro-fuzzy inference system (ANFIS), has been widely applied to forecasting the performance of complex renewable energy systems. Since its inception, many researchers have combined ANFIS with a variety of machine learning algorithms to improve its performance even further.

In a recent study published in Renewable and Sustainable Energy Reviews, a research team led by Jong Wan Hu from Incheon National University, Korea, developed two new ANFIS-based models to better estimate the power generated by photovoltaic systems up to a full day in advance. These two models are "hybrid algorithms" because they combine the traditional ANFIS approach with two different particle swarm optimization methods, which are powerful and computationally efficient strategies for finding optimal solutions to optimization problems.
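The particle swarm ingredient of such hybrids can be sketched in a few lines. The example below is not the Incheon team's ANFIS model: it uses plain particle swarm optimization (PSO) to fit the two parameters of a made-up linear "power curve" to synthetic irradiance data, just to show how a swarm homes in on the loss-minimizing parameters that ANFIS hybrids tune.

```python
import numpy as np

# Synthetic data: normalized irradiance vs. "measured" PV output.
# The true relationship (0.8, 0.1) and the noise level are made up.
rng = np.random.default_rng(2)
irr = rng.uniform(0, 1, 100)
power = 0.8 * irr + 0.1 + rng.normal(0, 0.05, 100)

def loss(params):
    """Mean squared forecasting error of the toy power curve a*irr + b."""
    a, b = params
    return np.mean((a * irr + b - power) ** 2)

n_particles, dim = 30, 2
pos = rng.uniform(-1, 1, (n_particles, dim))     # candidate (a, b) pairs
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                               # each particle's best so far
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()         # swarm's best so far

w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and attraction weights
for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    # Each particle keeps some momentum, and is pulled toward its own best
    # position and the swarm's best position.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # should land near the true parameters (0.8, 0.1)
```

In the published hybrids, the quantity being minimized is the forecasting error of the ANFIS model and the search space is its (much larger) parameter set, but the swarm mechanics are the same.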

To assess the performance of their models, the team compared them with other ANFIS-based hybrid algorithms. They tested the predictive abilities of each model using real data from an actual photovoltaic system deployed in Italy in a previous study. The results, as Dr. Hu remarks, were very promising: "One of the two models we developed outperformed all the hybrid models tested, and hence showed great potential for predicting the photovoltaic power of solar systems at both short- and long-time horizons."

The findings of this study could have immediate implications in the field of photovoltaic systems from software and production perspectives. "In terms of software, our models can be turned into applications that accurately estimate photovoltaic system values, leading to enhanced performance and grid operation. In terms of production, our methods can translate into a direct increase in photovoltaic power by helping select variables that can be used in the photovoltaic system's design," explains Dr. Hu. Let us hope this work helps us in the transition to sustainable energy sources!