Mizzou engineers win Army grant to build an explainable AI framework to speed up innovation

A nearly $4.9 million grant from the U.S. Army Engineer Research and Development Center (ERDC) is supporting the work of University of Missouri engineers

More than a century has passed since Thomas Edison developed the first commercially practical electric light bulb, yet Edison’s hallmark approach of “trial and error” remains a large part of today’s inventions. Now, a team of engineers at the University of Missouri is embodying the age-old adage of “work smarter, not harder” by using artificial intelligence (AI).

Supported by a two-year, $4.875 million grant from the U.S. Army Engineer Research and Development Center (ERDC), the team from the MU College of Engineering, including Derek T. Anderson and Matt Maschmann, is developing a theoretical framework around “explainable AI” to describe how the next generation of AI can be integrated into the innovation process for designing new materials and optimizing existing ones — while also securing the trust of humans along the way.

Maschmann, an associate professor of mechanical and aerospace engineering, knows this process well. For example, he’s been working with carbon nanotubes since 2003, yet Maschmann said their full potential as an engineering material is far from being realized. The same, he said, can be true for many material systems. Therefore, one of the MU team’s goals is to find a way to accelerate the discovery process by helping make better-quality materials in less time.

To do this, the team is starting with how to integrate AI and machine learning into the process, said Maschmann, whose passion for developing materials began in the early 2000s during graduate school.

“One of the more pressing challenges in the development of new materials, or optimization of existing materials, is the time required by the processing and characterization steps,” Maschmann said. “Making discoveries takes quite a bit of time and money. For instance, each step of a process may take a day or longer to accomplish. Therefore, in a traditional laboratory environment, scientists will repeat a process multiple times in an attempt to obtain a specific structure or property for a material guided by intuition and previous knowledge. However, if we can introduce machine learning algorithms and AI into the process, it could drastically reduce the time needed to obtain material properties of interest. My hope is this project will greatly increase the rate of discovery for developing materials while also increasing our fundamental understanding of these processes.”  

While Maschmann focuses on the integration of AI and machine learning into materials processing, Anderson, an associate professor of electrical engineering and computer science, is working alongside him to help make AI more intelligent by determining how to better integrate human knowledge into the artificial world. For instance, Anderson said while material scientists, chemists, and physicists have vast knowledge about the physical world, most AI and machine learning systems do not yet share that same level of intelligence.

“Therefore, we’re looking at how do we design the next generation of AI and machine learning to take advantage of the existing knowledge that people have,” Anderson said. “Then, we want to use that knowledge to intelligently grow AI to be able to design smarter materials. While our efforts are focused on the ‘explainability’ side, and helping scientists and domain experts understand how these processes work, we hope to make AI smarter for everyone’s benefit in the process.”

LSU, RSMAS marine ecologists warn of coral extinction by the end of the century

Vibrant coral reefs teeming with marine life are diminishing throughout the Caribbean as global temperatures rise. Coral reefs are habitats that support the seafood industry, serve as barriers that protect coastal communities from storms, flooding, and sea-level rise, and are attractions for tourism. Their net economic value worldwide is estimated to be tens of billions of dollars. However, if atmospheric and ocean temperatures continue to rise at the current pace, coral reefs face extinction within the next 80 years, or by the end of this century, according to new research by LSU Department of Oceanography & Coastal Sciences Assistant Professor Dan Holstein.

“Entire reefs that I used to dive and snorkel on are gone. There are species you don’t see on the reef anymore. Change is happening now,” said LSU Department of Oceanography & Coastal Sciences Assistant Professor Dan Holstein.

He and his collaborators have developed a new, open-source computational model that is the first to predict how warming seas will destabilize coral populations throughout the Western Atlantic, including the Florida Keys, the Bahamas, and the Caribbean. Using existing projections of ocean warming, the model computes how coral populations will sustain and thrive, or begin to perish, as ocean temperatures rise.

“This model predicts that ocean warming will reduce the ability for migrating coral larvae to replenish reefs that have bleached and died. The model doesn’t seal the fate of coral reefs, but it is a big wake-up call,” said Holstein, whose work is published in a new paper in the journal Coral Reefs.

As the ocean warms, it can destabilize marine ecosystems, leading to imbalances similar to the temperature and weather extremes experienced onshore.

“Thermal stress is not the only problem corals face, but it is considered to be the biggest,” Holstein said. “And how much carbon we put in the atmosphere is something we can decide. We can actually do something about it.”

Corals are marine animals that rely on a healthy symbiotic relationship with microscopic marine algae to survive. The algae live inside the coral’s tissue and produce sugars for the coral through photosynthesis. However, when the ocean becomes too hot, this symbiotic relationship breaks down, leading to a phenomenon called coral bleaching, and eventually, the coral may die from starvation.

Holstein’s model examines how resilient connected coral populations are to the expected temperature changes throughout the Caribbean.

“Coral reef connectivity through sexual reproduction and planktonic larvae remains a critical process to track during climate change,” said co-author Claire Paris, professor at the University of Miami's Rosenstiel School (RSMAS).

The new model uses connectivity information derived from the open-source Connectivity Modeling System developed by Paris.

Although the model suggests a dire outcome on the current trajectory for coral reefs, and specifically for the widespread yet endangered boulder star coral used in the model, Holstein does not believe coral reef extinction is inevitable.

Consumers and policymakers can still change how much carbon is emitted into the atmosphere. And countries with the most at stake – and those that are the biggest carbon emitters – need to work together to reverse course.

“The management of coral reefs and the mitigation of this dire future requires cooperation across borders and spatial scales to manage critical habitat. It’s one of the obvious conclusions. If we fail to do that, all of our efforts are at risk of being ineffective,” Holstein said.

NIH scientists use machine learning to better define long COVID

Using machine learning, researchers find patterns in electronic health record data to better identify those likely to have the condition

A research team supported by the National Institutes of Health has identified characteristics of people with long COVID and those likely to have it. Scientists, using machine learning techniques, analyzed an unprecedented collection of electronic health records (EHRs) available for COVID-19 research to better identify who has long COVID. Exploring de-identified EHR data in the National COVID Cohort Collaborative (N3C), a national, centralized public database led by NIH’s National Center for Advancing Translational Sciences (NCATS), the team used the data to find more than 100,000 likely long COVID cases as of October 2021 (as of May 2022, the count is more than 200,000).  

Long COVID is marked by wide-ranging symptoms, including shortness of breath, fatigue, fever, headaches, “brain fog” and other neurological problems. Such symptoms can last for many months or longer after an initial COVID-19 diagnosis. One reason long COVID is difficult to identify is that many of its symptoms are similar to those of other diseases and conditions. A better characterization of long COVID could lead to improved diagnoses and new therapeutic approaches.

“It made sense to take advantage of modern data analysis tools and a unique big data resource like N3C, where many features of long COVID can be represented,” said co-author Emily Pfaff, Ph.D., a clinical informaticist at the University of North Carolina at Chapel Hill.

The N3C data enclave currently includes information representing more than 13 million people nationwide, including nearly 5 million COVID-19-positive cases. The resource enables rapid research on emerging questions about COVID-19 vaccines, therapies, risk factors and health outcomes.

The new research is part of a related, larger trans-NIH initiative, Researching COVID to Enhance Recovery (RECOVER), which aims to improve the understanding of the long-term effects of COVID-19, called post-acute sequelae of SARS-CoV-2 infection (PASC). RECOVER will accurately identify people with PASC and develop approaches for its prevention and treatment. The program also will answer critical research questions about the long-term effects of COVID through clinical trials, longitudinal observational studies, and more.

In the study, Pfaff, Melissa Haendel, Ph.D., at the University of Colorado Anschutz Medical Campus, and their colleagues examined patient demographics, health care use, diagnoses and medications in the health records of 97,995 adult COVID-19 patients in the N3C. They used this information, along with data on nearly 600 long COVID patients from three long COVID clinics, to create three machine learning models to identify long COVID patients.

In machine learning, scientists “train” computational methods to rapidly sift through large amounts of data to reveal new insights — in this case, about long COVID. The models looked for patterns in the data that could help researchers both understand patient characteristics and better identify individuals with the condition.

The models focused on identifying potential long COVID patients among three groups in the N3C database: all COVID-19 patients, patients hospitalized with COVID-19, and patients who had COVID-19 but were not hospitalized. The models proved to be accurate, as people identified as at risk for long COVID were similar to patients seen at long COVID clinics. The machine learning systems classified approximately 100,000 patients in the N3C database whose profiles were close matches to those with long COVID.

“Once you’re able to determine who has long COVID in a large database of people, you can begin to ask questions about those people,” said Josh Fessel, M.D., Ph.D., senior clinical advisor at NCATS and a scientific program lead in RECOVER. “Was there something different about those people before they developed long COVID? Did they have certain risk factors? Was there something about how they were treated during acute COVID that might have increased or decreased their risk for long COVID?”

The models searched for common features, including new medications, doctor visits and new symptoms, in patients with a positive COVID-19 diagnosis who were at least 90 days out from their acute infection. The models flagged patients as having long COVID if they had visited a long COVID clinic, or if they showed long COVID symptoms and likely had the condition but had not yet been diagnosed.
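The labeling rule described above — a 90-day post-acute window, plus either a long COVID clinic visit or persistent symptoms — can be sketched in a few lines of Python. This is an illustrative reconstruction for readers, not the study’s actual N3C code; the record field names and example data are assumptions, and only the 90-day threshold and the clinic-or-symptoms criterion come from the article.

```python
from datetime import date, timedelta

# Hypothetical sketch of the cohort-labeling rule described in the article.
# Only the 90-day window and the clinic-visit/symptoms criterion come from
# the source; field names and example records are invented for illustration.

POST_ACUTE_WINDOW = timedelta(days=90)

def label_long_covid(patient: dict, as_of: date) -> bool:
    """Flag a patient as a likely long COVID case.

    A patient qualifies only if at least 90 days have passed since the
    acute COVID-19 diagnosis, and they either visited a long COVID clinic
    or developed new long COVID symptoms (e.g. fatigue, 'brain fog')
    without a formal diagnosis.
    """
    past_acute = as_of - patient["covid_diagnosis_date"] >= POST_ACUTE_WINDOW
    if not past_acute:
        return False
    return patient["visited_long_covid_clinic"] or bool(patient["new_symptoms"])

# Invented example records
patients = [
    {"covid_diagnosis_date": date(2021, 1, 10),   # 172 days before as_of
     "visited_long_covid_clinic": True, "new_symptoms": []},
    {"covid_diagnosis_date": date(2021, 6, 1),    # only 30 days before as_of
     "visited_long_covid_clinic": False, "new_symptoms": ["fatigue"]},
]
flags = [label_long_covid(p, as_of=date(2021, 7, 1)) for p in patients]
print(flags)  # [True, False]
```

In the actual study, a rule like this would only produce candidate labels; the three machine learning models then learned which EHR patterns distinguish those candidates from other patients.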

“We want to incorporate the new patterns we’re seeing with the diagnosis code for COVID and include it in our models to try to improve their performance,” said the University of Colorado’s Haendel. “The models can learn from a greater variety of patients and become more accurate. We hope we can use our long COVID patient classifier for clinical trial recruitment.”

This study was funded by NCATS, which contributed to the design, maintenance and security of the N3C Enclave, and the NIH RECOVER Initiative, supported by NIH OT2HL161847. RECOVER is coordinating, among others, the participant recruitment protocol to which this work contributes. The analyses were conducted with data and tools accessed through the NCATS N3C Data Enclave and supported by NCATS U24TR002306.