University of Alaska Fairbanks prof shows need to improve prediction of Arctic melt ponds

New research shows that two widely used supercomputer models predicting summer melt pond formation on sea ice vastly overestimate the ponds' extent, a key finding as scientists work to make accurate projections about Arctic climate change.

Melinda Webster carries a magnaprobe to measure the depth of melt ponds. Photo by Lianna Nixon.

The finding comes from measurements made during a year-long expedition aboard the research vessel Polarstern. For the Multidisciplinary Drifting Observatory for the Study of Arctic Climate expedition, or MOSAiC, the ship was allowed to freeze into place in the Arctic and drift with the ice pack from September 2019 to October 2020.

The NASA-funded work, which compared supercomputer model assessments to observations made during the last four months of the expedition, was led by Melinda Webster of the University of Alaska Fairbanks Geophysical Institute. Webster, a research assistant professor, spent several months aboard the Polarstern.

“No model is perfect,” Webster said. “This study uses a combination of surface-based, airborne, and satellite data to reveal the possible imperfect representation, or missing physics, of sea-ice melt processes, which we can focus on improving.”

Melt ponds form when water from melting snow and sea ice settles into surface depressions.

The extent of melt ponds and the timing of their seasonal formation affect the surface albedo, which controls the amount of solar radiation reflected from the surface. Ponds reduce the albedo, allowing solar radiation to be absorbed and transmitted to the seawater below.
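Why overestimated pond coverage matters can be sketched with a simple area-weighted albedo calculation. This is an illustrative back-of-the-envelope sketch: the albedo values and the 200 W/m^2 incoming flux are assumed round numbers, not values from the study.

```python
# Illustrative sketch: how pond coverage lowers surface albedo and raises
# absorbed solar energy. The albedo values are rough assumed figures for
# illustration, not numbers from the study.
ALBEDO_ICE = 0.6    # assumed albedo of bare/melting sea ice
ALBEDO_POND = 0.2   # assumed albedo of a melt pond

def surface_albedo(pond_fraction):
    """Area-weighted albedo of a surface that is part pond, part ice."""
    return pond_fraction * ALBEDO_POND + (1 - pond_fraction) * ALBEDO_ICE

def absorbed_shortwave(incoming_wm2, pond_fraction):
    """Solar radiation absorbed by the surface, in W/m^2."""
    return incoming_wm2 * (1 - surface_albedo(pond_fraction))

# More pond coverage -> lower albedo -> more energy absorbed.
for f in (0.21, 0.41, 0.51):  # observed coverage vs. the two models' estimates
    print(f"pond fraction {f:.0%}: {absorbed_shortwave(200, f):.0f} W/m^2 absorbed")
```

With these assumed numbers, going from 21% to 51% pond coverage raises the absorbed energy by roughly a quarter, which is why pond-coverage biases feed directly into melt and warming biases.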

Increased absorption of solar energy enhances the warming of the upper ocean and hastens sea ice melt. It can also lead to increased growth of certain phytoplankton species better adapted to higher light levels in the upper ocean. That has ramifications for the rest of the food chain.

The amount of absorption also affects the net change in new ice growth versus ice melt.

The field data in the study consisted of observations from multi-kilometer surveys across the frozen icescape: during the summer melt of June–July 2020 on one ice floe, and during the autumn freeze-up of August–September 2020 on another.

Those measurements were compared to airborne and satellite imagery to reveal the coverage of melt ponds in the broader region and then used to examine the supercomputer model predictions.

Melt ponds covered 21% of the observed area during the summer, while the two models indicated 41% and 51%.
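A quick check of what those figures imply (the percentages are from the article; the script only restates the arithmetic):

```python
# Back-of-the-envelope check: how far off were the models?
observed = 0.21                  # observed summer melt pond coverage
model_estimates = [0.41, 0.51]   # the two models' estimated coverage

for m in model_estimates:
    print(f"model {m:.0%} vs. observed {observed:.0%}: "
          f"{m / observed:.1f}x the observed coverage")
```

In other words, both models predicted roughly double the pond coverage actually observed.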

“By improving the representation of key physical processes in models, we expect the models to simulate climate states more reliably no matter what period it is — past, present or future,” Webster said.

Webster is continuing research into melting Arctic sea ice.

MSU researchers use AI to stay ahead of COVID-19

The National Institutes of Health have awarded Michigan State University researchers $2.7 million to continue developing artificial intelligence algorithms that predict key features of viruses as they evolve.

The team is led by Guowei Wei, an expert in AI who has published nearly 30 papers on COVID-19, and Yong-Hui Zheng, whose extensive background in virology is helping verify and improve AI predictions. The team also includes Jiahui Chen, a visiting assistant professor at MSU who played an essential role in developing the AI models.   

At the far left of this image is a computer-generated visualization of the novel coronavirus’s spike protein. The magenta dots represent regions where the omicron variants have mutated. The bar graphs to the right show how those mutations affect biochemical interactions as predicted by MSU researchers’ artificial intelligence. Reprinted with permission from Chen, J., Wei, G. J. Phys. Chem. Lett. 2022

The Wei lab already has shown those models can make accurate predictions about new variants of the novel coronavirus and, with this grant, the researchers are working to bolster their algorithms. 

“What we’re doing is making our predictions more accurate and more timely,” said Wei, an MSU Foundation Professor in the College of Natural Science’s Department of Mathematics and Department of Biochemistry and Molecular Biology. “And now our work isn’t just for COVID, but also for many other viral infections.”

The work could one day help drug developers create universal vaccines and therapies that are more effective and “evolution-proof” against a range of viral diseases, including the flu, HIV, and COVID-19. 

“HIV, Ebola, influenza, the coronavirus — they’re all different viruses, but they share common features,” said Zheng, a professor in the College of Osteopathic Medicine in the Department of Microbiology and Molecular Genetics. “If we learn how to attack one, that can inform how we attack the others.”

“The goal is to have us much better prepared for any future disease or pandemic,” Wei said.

How AI, data, and experiments can inform public health 

More immediately, the Wei team believes its AI can help inform public health officials when they need to update recommended protective measures — such as masking and social distancing guidance — against emerging coronavirus variants.

Although vaccines and treatments are now available that didn’t exist when the U.S. first declared a public health emergency in response to the novel coronavirus, the virus is still out there evolving. Our immune responses are naturally influencing the trajectory of that evolution.

Thinking in terms of “survival of the fittest,” a virus that can evade vaccines or natural immunity will be more fit than its predecessor, Wei said. That means it will be better equipped to survive, multiply and infect others. The take-home message isn’t that people shouldn’t protect themselves, Wei said, but that a virus that still infects about 100,000 Americans daily isn’t going to get tired, bored, or just give up.

“Viruses don’t have a personality. They just survive,” Wei said. “We want to make sure we are prepared.”

This new grant, funded by the National Institute of Allergy and Infectious Diseases, is an investment to improve our readiness through cutting-edge technology. But it also leverages the expertise and experience of Wei and Zheng.

Zheng has led NIH-funded grants for two decades, although this will be his first with an explicit focus on the coronavirus. 

“I’m very proud that this is the first one,” he said. “But we don’t want it to be the last. This new grant will expand my lab’s capacity to accommodate more needs campuswide and we want to use that to stimulate more collaboration.”

Zheng brings a unique virology skillset to MSU. He was first recruited in 2005 as an HIV researcher and, over time, his lab has grown to study the molecular biology of influenza and Ebola. When the coronavirus pandemic struck, he knew his team could provide valuable experimental infrastructure to help better study the new virus.

For example, his team developed less dangerous versions of the virus along with lab-grown cells for these “pseudo-viruses” to infect while preserving the biochemistry of real, clinical infections. The researchers also created very sensitive assays, or tests, that would reveal which viruses infected which cells. All of this provided researchers with safer, faster, and easier ways to study a complex virus while generating valuable biological data.

Similarly, in early 2020, Wei’s team started putting its unique skills to work combating the coronavirus.

“Before the pandemic, we had had success in worldwide competitions, being recognized as one of the top labs in combining AI and mathematics for drug discovery,” said Wei, who also holds an appointment in the Department of Electrical and Computer Engineering in the College of Engineering.

Wei’s research had focused on using AI to help design new pharmaceuticals in partnership with Pfizer and Bristol-Myers Squibb. Within days of China’s Wuhan lockdown in January 2020, Wei’s team started sharing its AI resources to help find drugs to fight the coronavirus and reveal new potential drug targets. But the researchers also recognized their algorithms could do more. 

With a global community working to fight the coronavirus, there was a wealth of new genomic data describing the virus being shared regularly. Wei and his team saw an opportunity to combine that data with their AI framework to understand how the virus was mutating as time went on. 

For example, they were among the first to see how “survival of the fittest” was playing out in the virus and steering its evolution, Wei said. His team then used that knowledge to look ahead and identify two potentially vital sites on the virus’s spike protein, the protein the virus uses to latch onto cells and infect them. Mutations in those two spike protein sites would later turn out to play crucial roles in the virus’s most prevalent variants, Wei said. 

“We took what we were doing with deep learning and mathematics, then combined that with the viral genomic data to understand the evolution of the virus, look at its trajectory and ask what’s going to happen,” Wei said. “That gives us a way to predict what can happen in the future.” 

Successfully predicting virus behavior

Wei and Zheng have been collaborating for about a year, starting before the grant was awarded. Their teamwork has informed precise algorithms with real-world data and provided real experimental results to compare with AI predictions.

“We need to have that interdisciplinary collaboration for this to work,” Zheng said. “Everything the computer models predicted, we had to confirm with experiments in a living system.”

Although Wei’s team validated its AI with laboratory experiments, the researchers still knew they’d need to prove their algorithms could work with a brand-new variant with very little data. Then, in the fall of 2021, the first omicron variant appeared.

“Back in late November, people didn’t know what was going to happen,” Wei said.

Researchers and public health officials responded immediately, but the process of experimenting and gathering data takes weeks. Meanwhile, Wei’s team put its AI to the test. 

Their projections showed this first iteration of omicron would be more infectious, better at eluding the protection of vaccines, and less responsive to antibody treatments than earlier variants.

“Within days, we had our predictions,” Wei said. “A month and a half later, everything we predicted proved to be true by experimental labs around the world. Using AI, we can give people a month or two to prepare.”

Then, in early 2022, a new subvariant of omicron called BA.2 started spreading. A similar scenario played out. Wei’s team predicted it would be more infectious and even more elusive, which would allow it to become the next dominant variant.

“We made our predictions on February 11, and on March 26, the World Health Organization announced it was the dominant form of the virus,” Wei said. 

Now that scientists and officials better understand omicron, the newer versions aren’t garnering the same level of attention as their predecessors. But new variants and subvariants are still emerging. With support from the National Institutes of Health, the MSU team is working to ensure we stay prepared for what’s next, whether that’s a new variant, something more familiar like the flu, or something entirely different.

Japanese researchers investigate ways to make AIs more robust by studying patterns in their answers when faced with the unknown

Today's artificial intelligence systems used for image recognition are incredibly powerful, with massive potential for commercial applications. Nonetheless, current artificial neural networks—the deep learning algorithms that power image recognition—suffer one massive shortcoming: they are easily broken by images that are even slightly modified.

Image recognition AIs are powerful but inflexible and cannot recognize images unless they are trained on specific data. In Raw Zero-Shot Learning, researchers give these image recognition AIs a variety of data and observe the patterns in their answers. The research team hopes that this methodology can help improve the robustness of future AI. Illustrated by Hiroko Uchida.

This lack of 'robustness' is a significant hurdle for researchers hoping to build better AIs. However, exactly why this phenomenon occurs, and the underlying mechanisms behind it, remain largely unknown.

Aiming to one day overcome these flaws, researchers at Kyushu University's Faculty of Information Science and Electrical Engineering in Fukuoka, Japan have published a method called 'Raw Zero-Shot' that assesses how neural networks handle elements unknown to them. The results could help researchers identify common features that make AIs 'non-robust' and develop methods to rectify their problems.

"There is a range of real-world applications for image recognition neural networks, including self-driving cars and diagnostic tools in healthcare," explains Danilo Vasconcellos Vargas, who led the study. "However, no matter how well trained the AI, it can fail with even a slight change in an image."

In practice, image recognition AIs are 'trained' on many sample images before being asked to identify one. For example, if you want an AI to identify ducks, you would first train it on many pictures of ducks.

Nonetheless, even the best-trained AIs can be misled. Researchers have found that an image can be manipulated such that—while it may appear unchanged to the human eye—an AI cannot accurately identify it. Even a single-pixel change can confuse an AI.
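As a toy illustration of that brittleness, consider a contrived linear "classifier" that over-relies on one pixel. This is a deliberately fragile stand-in, not a real neural network or a real adversarial attack:

```python
# Toy illustration: a linear "classifier" whose decision leans almost
# entirely on one pixel can be flipped by changing that pixel alone.
# This is a contrived stand-in, not a real network or attack method.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))      # a fake 8x8 grayscale "image" in [0, 1)

weights = np.zeros((8, 8))
weights[3, 3] = 5.0             # the classifier over-relies on pixel (3, 3)
bias = -2.5                     # score > 0 exactly when pixel (3, 3) > 0.5

def classify(img):
    score = (weights * img).sum() + bias
    return "duck" if score > 0 else "not a duck"

perturbed = image.copy()
perturbed[3, 3] = 1.0 - perturbed[3, 3]   # change a single pixel

print(classify(image), "->", classify(perturbed))  # the label flips
```

A real network is vastly more complex, but the same failure mode, decisions hinging on fragile directions in input space, is what single-pixel attacks exploit.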

To better understand why this happens, the team began investigating different image recognition AIs with the hope of identifying patterns in how they behave when faced with samples that they had not been trained with, i.e., elements unknown to the AI.

"If you give an image to an AI, it will try to tell you what it is, no matter if that answer is correct or not. So, we took the twelve most common AIs today and applied a new method called 'Raw Zero-Shot Learning,'" continues Vargas. "Basically, we gave the AIs a series of images with no hints or training. Our hypothesis was that there would be correlations in how they answered. They would be wrong, but wrong in the same way."

What they found was just that. In all cases, the image recognition AI would produce an answer, and the answers—while wrong—would be consistent, that is to say, they would cluster together. The density of each cluster would indicate how the AI processed the unknown images based on its foundational knowledge of different images.
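The idea can be sketched roughly as follows. The "model outputs" below are random stand-ins for illustration, not the twelve networks the team tested, and cluster density is summarized with a simple mean distance to the centroid:

```python
# Rough sketch of the Raw Zero-Shot idea: collect a model's output
# (softmax) vectors on images it was never trained on, then measure how
# tightly those answers cluster. Outputs here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Pretend outputs: 100 "unknown" images fed to a 10-class classifier.
logits = rng.normal(size=(100, 10))
outputs = softmax(logits)

# One simple density measure: mean distance of answers to their centroid.
# A model whose wrong answers are consistent has a small spread.
centroid = outputs.mean(axis=0)
spread = np.linalg.norm(outputs - centroid, axis=1).mean()
print(f"mean distance to centroid (lower = denser cluster): {spread:.3f}")
```

Comparing such a spread measure across different models is one way to quantify which network answers unknown inputs most consistently.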

"If we understand what the AI was doing and what it learned when processing unknown images, we can use that same understanding to analyze why AIs break when faced with images with single-pixel changes or slight modifications," Vargas states. "Utilization of the knowledge we gained trying to solve one problem by applying it to a different but related problem is known as Transferability."

The team observed that Capsule Networks, also known as CapsNet, produced the densest clusters, giving it the best transferability amongst neural networks. They believe it might be because of the dynamic nature of CapsNet.

"While today's AIs are accurate, they lack the robustness for further utility. We need to understand what the problem is and why it’s happening. In this work, we showed a possible strategy to study these issues," concludes Vargas. "Instead of focusing solely on the accuracy, we must investigate ways to improve robustness and flexibility. Then we may be able to develop a true artificial intelligence."