A new artificial intelligence model could be used to accurately capture cognitive decline linked to neurodegenerative diseases much earlier than previous methods. (Illustration/iStock)

USC-developed AI-powered analysis reflects risk of cognitive decline, Alzheimer’s disease based on brain age

The human brain holds many clues about a person’s long-term health. Research shows that a person’s brain age is a more useful and accurate predictor of health risks and future disease than their birthdate. A new artificial intelligence model developed by USC researchers that analyzes MRI brain scans could be used to accurately capture cognitive decline linked to neurodegenerative diseases like Alzheimer’s much earlier than previous methods.

Researchers collated the brain MRIs of 4,681 cognitively normal participants, some of whom went on to develop cognitive decline or Alzheimer’s disease later in life. (Illustration/Courtesy of the USC Leonard Davis School of Gerontology)

Brain aging is considered a reliable biomarker for neurodegenerative disease risk. Such risk increases when a person’s brain exhibits features that appear “older” than expected for someone of that person’s age. By tapping into the deep learning capability of the team’s novel AI model to analyze the scans, the researchers can detect subtle anatomical markers in the brain that are otherwise very difficult to spot and that correlate with cognitive decline. Their findings offer an unprecedented glimpse into human cognition.  

“Our study harnesses the power of deep learning to identify areas of the brain that are aging in ways that reflect a cognitive decline that may lead to Alzheimer’s,” said Andrei Irimia, assistant professor of gerontology, biomedical engineering and neuroscience at the USC Leonard Davis School of Gerontology and corresponding author of the study.  

“People age at different rates, and so do tissue types in the body,” Irimia said. “We know this colloquially when we say, ‘So-and-so is 40, but looks 30.’ The same idea applies to the brain. The brain of a 40-year-old may look as ‘young’ as the brain of a 30-year-old, or as ‘old’ as that of a 60-year-old.”  

Brain aging: A more accurate alternative to existing methods  

Researchers collated the brain MRIs of 4,681 cognitively normal participants, some of whom went on to develop cognitive decline or Alzheimer’s disease later in life.  

Using these data, they created an AI model called a neural network to predict participants’ ages from their brain MRIs. First, the researchers trained the network to produce detailed anatomic brain maps that reveal subject-specific patterns of aging. They then compared the perceived (biological) brain ages with the actual (chronological) ages of study participants. The greater the difference between the two, the worse the participants’ cognitive scores, which reflect Alzheimer’s risk. 

The results show that the team’s model can predict the true (chronological) ages of cognitively normal participants with an average absolute error of 2.3 years, which is about one year more accurate than an existing, award-winning model for brain age estimation that used a different neural network architecture.  
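In essence, the approach boils down to two numbers: the brain-age gap for each participant (predicted minus chronological age) and, across participants, the mean absolute error the study reports. A minimal sketch of those computations, using made-up ages rather than the study’s actual network or data:

```python
import numpy as np

# Hypothetical predicted (biological) and true (chronological) ages
# for a handful of participants; illustrative values only.
predicted_age = np.array([54.1, 67.8, 72.5, 49.0, 80.2])
chronological_age = np.array([52.0, 70.0, 68.0, 50.5, 77.0])

# Brain-age gap: positive values mean the brain "looks" older
# than the participant actually is.
brain_age_gap = predicted_age - chronological_age

# Mean absolute error, the accuracy metric reported in the study
# (the team's model achieved about 2.3 years).
mae = np.mean(np.abs(predicted_age - chronological_age))

print(brain_age_gap)
print(round(float(mae), 2))
```

Larger gaps for a given participant would, per the study, track with worse cognitive scores and higher Alzheimer’s risk.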

“Interpretable AI can become a powerful tool for assessing the risk for Alzheimer’s and other neurocognitive diseases,” said Irimia, who also holds faculty positions with the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences. 

“The earlier we can identify people at high risk for Alzheimer’s disease, the earlier clinicians can intervene with treatment options, monitoring, and disease management.”

Brain aging: differences according to sex  

The new model also reveals sex-specific differences in how aging varies across brain regions. Certain parts of the brain age faster in males than in females, and vice versa.  

Males, who are at higher risk of motor impairment due to Parkinson’s disease, experience faster aging in the brain’s motor cortex, an area responsible for motor function. Findings also show that, among females, typical aging may be relatively slower in the right hemisphere of the brain.  

An emerging field of study shows promise for personalized medicine  

Applications of this work extend far beyond disease risk assessment. Irimia envisions a world in which the novel deep learning methods developed as part of the study are used to help people understand how fast they are aging in general.  

“One of the most important applications of our work is its potential to pave the way for tailored interventions that address the unique aging patterns of every individual,” Irimia said.  

“Many people would be interested in knowing their true rate of aging. The information could give us hints about different lifestyle changes or interventions that a person could adopt to improve their overall health and well-being. Our methods could be used to design patient-centered treatment plans and personalized maps of brain aging that may be of interest to people with different health needs and goals.” 

Prof. Yoon Kyung Lee of Pusan National University investigates AI in fashion

The use of artificial intelligence (AI) in the fashion industry has grown significantly in recent years. AI is being used for tasks such as personalizing fashion recommendations for customers, optimizing supply chain management, automating processes, and improving sustainability by reducing waste. Creative processes in fashion design, however, remain largely human-driven, and little research exists on using AI for fashion design itself. Moreover, existing studies generally involve data scientists, who build the AI platforms and handle the technological side of the process; the other side of the equation, the designers themselves, are rarely brought into the research.

To investigate the practical applicability of AI models for implementing creative designs and working with human designers, Assistant Professor Yoon Kyung Lee of Pusan National University in Korea conducted an in-depth study. Her study was made available online in Thinking Skills and Creativity on September 15, 2022, and subsequently published in Volume 46 of the journal in December 2022.

“At a time when AI is so deeply ingrained into our lives, this study started instead with considering what a human can do better than AI,” says Prof. Lee, explaining her motivation behind the study. “Could there be an effective collaboration between humans and AI for the purpose of creative design?”

Prof. Lee started by generating new textile designs using deep convolution generative adversarial networks (DC-GANs) and cycle-GANs. The outputs from these models were compared to similar designs produced by design students.

The comparison revealed that though the designs produced by both were similar, the biggest difference was the uniqueness and originality of the human designs, which came from the designers’ personal experiences. However, using AI for repetitive tasks can improve designers’ efficiency and free up their time to focus on more demanding creative work. AI-generated designs can also serve as a learning tool for people who lack expertise in fashion but want to explore their creativity, allowing them to create designs with assistance from AI.

Thus, Prof. Lee proposes a human-AI collaborative network that integrates GANs with human creativity to produce designs. She also defined and studied the various elements of the complex system involved in human-AI collaborative design, and went on to establish a human-AI model in which the designer collaborates with AI to create novel design ideas. The model is built in such a way that if designers share their creative processes and ideas with others, the system can interconnect and evolve, thereby improving its designs.

The fashion industry can leverage this approach to foresee changes in trends and offer recommendation and co-creation services. In a human-AI collaborative design environment, setting objectives, variables, and limits is part of the designer’s job. Their work should therefore go beyond the purely visual and instead span a variety of disciplines.

“In the future, everybody will be able to be a creator or designer with the help of AI models. So far, only professional fashion designers have been able to design and showcase clothes. But in the future, it will be possible for anyone to design the clothes they want and showcase their creativity,” concludes Prof. Lee.

We hope her dreams are very close to realization!

Fear can inspire remote workers to protect IT resources

Fear of what could go wrong is the greatest motivator when it comes to getting remote workers to protect their employer’s information technology security, according to a recent study in Computers & Security. But it tends to work best when employees also have a solid understanding of the severity of potential security threats, including the knowledge of what to do when the worst happens. 

As millions of Americans continue to work remotely, the research provides employers with key insights to keep their valuable information safe. 

“Employees need to feel this is a big deal if it happens, so the number one thing employers can do is to clearly communicate what the threats are and how serious they could be,” said Robert Crossler, corresponding author for the study and associate professor in the Carson College of Business at Washington State University. “Because for most people this is not their job. Their job is to make something or sell something, not to make good security choices, even if it is critical for their organization.” 

For the study, the researchers examined and compared two approaches for motivating security compliance behaviors in a changing work environment. 

Protection motivation theory posits that organizations can encourage secure behaviors through fear appeals and threat messages, and by promoting self-efficacy, the ability to respond to a particular threat. The practice, which often utilizes surveillance to monitor employee actions, has been used effectively for decades to deter people from engaging in risky behaviors at work and to discourage unhealthy practices such as smoking or having unsafe sex. 

The second approach Crossler and his collaborators examined is stewardship theory. Stewardship theory is a form of reciprocal agreement that tries to motivate the employee’s behavior through a sense of moral responsibility that is not forced. In this approach, management attempts to get the employee to buy into the organization’s overall vision while giving them organizational support to act independently when confronted with a security threat. 

For the analysis, 339 people who worked at companies with IT security policies were recruited to answer a scenario-based survey. The three survey scenarios describe common policy violations relevant to remote work, such as using unauthorized storage devices, failing to log off a sensitive account when it is not in use, and sharing one’s password with others. 

Each respondent was randomly assigned one of the three scenarios and then indicated their likelihood of acting in a certain way based on various protection motivation and stewardship theory factors. Although working from home would seem to call for concepts more consistent with stewardship theory, the study showed that an approach relying on the fear and threats emphasized in protection motivation theory was far more effective at preventing employees from violating security policy than a strictly stewardship-based approach.

One novel aspect of the study was that Crossler and his collaborators also considered a security approach that integrated the factors of the two theories together. 

The researchers found that promoting a sense of collectivism, a concept from stewardship theory that emphasizes the mutual benefits of good behavior for both the employee and the employer, helped increase the efficacy of protection motivation theory-based methods.

“Basically, what we found was that the more workers felt that their organization’s resources were their own, the more likely they were to respond in the desired way,” Crossler said. “Instilling a sense of collectivism in employees is only going to help enhance people’s likelihood of protecting security policies.” 

The study, which was conducted in collaboration with researchers at the University of North Texas and Oklahoma State University, also showed that in some cases, a protection motivation theory approach to IT security would backfire and result in security misbehaviors. As a result of their analysis, the authors recommend that companies should consider removing or reducing surveillance practices that are a common aspect of protection motivation theory. Where such removal is impracticable, employers should consider providing employees with contextual reasons for performing such monitoring. 

“This is really the first study that brings stewardship theory and protection motivation theory together in the context of IT security for people working from home,” Crossler said. “While stewardship theory did not work as well as protection motivation, our results suggest that managerial decisions informed by a stewardship perspective can help to provide a further understanding of security policy violations that motivates employees to make the right decision.”

Douglas-fir  CREDIT Lina DiGregorio

As climate warms, drier air likely to be more stressful than less rainfall for Douglas-fir trees

Douglas-fir trees will likely experience more stress from drier air as the climate changes than they will from less rain, supercomputer modeling by Oregon State University scientists shows.

The research is important because Douglas-fir is widespread throughout the Pacific Northwest, an iconic species with ecological, cultural, and economic significance, and learning how the trees respond to drought is crucial for understanding forest sensitivity to a shifting climate.

Douglas-fir grow in a range that stretches from northern British Columbia to central California, and also includes the Rocky Mountains and northeastern Mexico. In Oregon, Douglas-fir is found in a variety of mixed conifer and hardwood forests, from sea level to 5,000 feet, and can reach a massive size; a tree on Bureau of Land Management land in Coos County is more than 300 feet tall and greater than 11 feet in diameter.

Native Americans traditionally used the wood of Douglas-fir, Oregon’s official state tree since 1936, for fuel and for tools, its pitch as a sealant and many parts of the tree for medicinal purposes.

A versatile timber tree, Douglas-fir is a source of softwood products including boards, railroad ties, plywood veneer, and wood fiber. Oregon leads all U.S. states in softwood production and most of that is Douglas-fir.

The OSU study, published in Agricultural and Forest Meteorology, simulated the response of a 50-year-old stand of Douglas-fir on the Oregon Cascade Range’s west slope to less rain and higher “vapor pressure deficit,” or VPD – basically the atmosphere’s drying power.
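VPD itself is straightforward to compute: it is the gap between the saturation vapor pressure at a given air temperature and the vapor pressure actually present. A minimal sketch using the Tetens approximation for saturation vapor pressure (a standard textbook formula, assumed here; the paper’s exact formulation is not shown):

```python
import math

def saturation_vapor_pressure_kpa(temp_c):
    """Tetens approximation for saturation vapor pressure (kPa)
    over water at air temperature temp_c in degrees Celsius."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd_kpa(temp_c, relative_humidity):
    """Vapor pressure deficit (kPa): how much more water vapor the
    air could hold versus how much it actually holds.
    relative_humidity is a fraction in [0, 1]."""
    es = saturation_vapor_pressure_kpa(temp_c)
    return es * (1.0 - relative_humidity)

# Warmer air at the same relative humidity has a larger VPD,
# i.e., a stronger atmospheric "pull" on tree water.
print(round(vpd_kpa(20.0, 0.5), 2))  # moderate summer day
print(round(vpd_kpa(35.0, 0.5), 2))  # heat-wave conditions
```

The exponential dependence on temperature is why heat-driven VPD increases can stress trees even when soil moisture is plentiful.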

A team led by Karla Jarecke, a postdoctoral researcher in the OSU College of Earth, Ocean, and Atmospheric Sciences, sought to look at how the mechanisms behind carbon fixation and water “fluxes” – exchanges of water between trees and the atmosphere – would respond to decreases in rainfall and increases in VPD.

Douglas-fir, like other plants, creates food for itself using sunlight, carbon dioxide, and water during photosynthesis. The process pulls CO2, a greenhouse gas, from the air, releases oxygen, and results in the long-term storage of carbon in the wood and roots.

“What governs carbon fixation and water fluxes in response to increased temperatures and water limitation in regions with Mediterranean climates – wet winters and dry summers – is only partially understood,” said Jarecke, who began the research as a doctoral student in the OSU College of Forestry. “High VPD and lack of soil moisture can create significant water stress in forests, but the dry atmosphere and lack of rainfall are strongly linked, making it difficult to discern their independent effects. They tend to both occur during the summer.”

Jarecke and collaborators including the College of Forestry’s Kevin Bladon and Linnia Hawkins and the U.S. Forest Service’s Steven Wondzell used a supercomputer model to disentangle the effects of the two phenomena. The model uses a series of equations that illustrate how well Douglas-firs are equipped to deal with water stress, and it showed that less spring and summer rain is likely to have a comparatively smaller impact on forest productivity than increased VPD.

“Decreasing spring and summer precipitation did not have much of an effect on Douglas-fir water stress because moisture remained plentiful deep in the soil profile,” Jarecke said. “This demonstrated that the effect of reduced rainfall under future climate change may be minimal but will depend on subsurface water availability, which is determined by soil properties and rooting depths.”

She said heat-driven increases in vapor pressure deficit, however, are likely to cause water stress regardless of the amount of moisture in the soil, adding that “many knowledge gaps remain concerning how trees will respond to extreme temperatures and VPD anomalies such as the record-breaking temperatures that occurred in the Northwest in the summer of 2021.”

Bladon added that the Oregon State study shows the important role of atmospheric droughts in creating stress conditions for trees.

“This has potential implications for not only driving substantial tree mortality but also influencing wildfires, as other studies have shown strong relationships between VPD and forest area burned in the western United States,” he said.

Mahmoud Moradi

Prof Moradi builds supercomputer models to determine drug candidate’s ability to bind to proteins

Combining computational physics with experimental data, University of Arkansas researchers have developed supercomputer models for determining a drug candidate’s ability to target and bind to proteins within cells.

If accurate, such an estimator could computationally demonstrate binding affinity and thus prevent experimental researchers from needing to investigate millions of chemical compounds. The work could substantially reduce the cost and time associated with developing new drugs.

“We developed a theoretical framework for estimating ligand-protein binding,” said Mahmoud Moradi, associate professor of chemistry and biochemistry at the Fulbright College of Arts and Sciences. “The proposed method assigns an effective energy to the ligand at every grid point in a coordinate system, which has its origin at the most likely location of the ligand when it is in its bound state.”

A ligand is a substance — an ion or molecule — such as a drug that binds to another molecule, such as a protein, to form a complex system that may cause or prevent a biological function.
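One standard way to turn simulation samples into an effective energy at every grid point, not necessarily the authors’ exact formulation, is to reweight each biased sample by the Boltzmann factor of its bias energy and histogram the result over the grid. A toy one-dimensional sketch with synthetic data:

```python
import numpy as np

kT = 0.593  # kcal/mol at roughly 298 K

# Hypothetical biased-simulation output: ligand positions along one
# grid axis, plus the bias energy felt by each sample. Illustrative
# values only; the study's actual simulations are far richer.
rng = np.random.default_rng(0)
positions = rng.normal(loc=0.0, scale=1.0, size=10000)
bias_energy = 0.5 * positions**2  # e.g., a harmonic restraint

# Non-parametric reweighting: each sample is weighted by
# exp(+U_bias / kT) to undo the bias it was generated under.
weights = np.exp(bias_energy / kT)

# Weighted histogram over grid points -> probability -> effective energy
hist, edges = np.histogram(positions, bins=25, range=(-3, 3),
                           weights=weights, density=True)
with np.errstate(divide="ignore"):
    effective_energy = -kT * np.log(hist)

# Shift so the most favorable grid point defines zero energy,
# mirroring the choice of origin at the most likely bound pose.
effective_energy -= np.nanmin(effective_energy)
print(np.isfinite(effective_energy).sum())
```

Empty grid cells end up with infinite effective energy, which is why sampling coverage matters as much as the reweighting itself.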

Moradi’s research focuses on computational simulations of diseases, including coronavirus. For this project, he collaborated with Suresh Thallapuranam, professor of biochemistry and the Cooper Chair of Bioinformatics Research.

Moradi and Thallapuranam used biased simulations — as well as non-parametric re-weighting techniques to account for the bias — to create a binding estimator that was computationally efficient and accurate. They then used a mathematically robust technique called orientation quaternion formalism to further describe the ligand’s conformational changes as it bound to targeted proteins.
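Quaternions encode a 3-D orientation as four numbers and make it easy to measure how far one orientation is from another, which is what tracking a ligand’s rotation requires. A toy illustration of the idea (hypothetical orientations, not the study’s actual formalism):

```python
import numpy as np

def quat_from_axis_angle(axis, angle_rad):
    """Unit quaternion (w, x, y, z) for a rotation of angle_rad
    radians about the given 3-vector axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = angle_rad / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_angle_between(q1, q2):
    """Smallest rotation angle (radians) taking orientation q1 to q2."""
    dot = abs(float(np.dot(q1, q2)))  # |<q1, q2>| treats q and -q alike
    dot = min(dot, 1.0)               # guard against rounding error
    return 2.0 * np.arccos(dot)

# Two hypothetical ligand orientations: 30 and 90 degrees about z.
q_a = quat_from_axis_angle([0, 0, 1], np.radians(30))
q_b = quat_from_axis_angle([0, 0, 1], np.radians(90))
print(np.degrees(quat_angle_between(q_a, q_b)))  # ~60 degrees
```

Unlike Euler angles, this representation has no gimbal-lock singularities, which is one reason quaternion formalisms are favored for describing molecular orientations.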

The researchers tested this approach by estimating the binding affinity between human fibroblast growth factor 1 — a specific signaling protein — and heparin hexasaccharide 5, a popular medication.

The project was conceived because Moradi and Thallapuranam were studying human fibroblast growth factor 1 protein and its mutants in the absence and presence of heparin. They found strong qualitative agreement between simulations and experimental results.

“When it came to binding affinity, we knew that the typical methods we had at our disposal would not work for such a difficult problem,” Moradi said. “This is why we decided to develop a new method. We had a joyous moment when the experimental and computational data were compared with each other, and the two numbers matched almost perfectly.”

Moradi previously received attention for developing computational simulations of the behavior of SARS-CoV-2 spike proteins prior to fusion with human cell receptors. SARS-CoV-2 is the virus that causes COVID-19.