SUNY Buffalo's brain model offers new insights into damage caused by stroke, other injuries

UB researcher's background in supercomputer modeling made his pioneering combination of existing approaches seem obvious, a 'chocolate and peanut butter moment'

He calls it his "chocolate and peanut butter moment."

A University at Buffalo neuroimaging researcher has developed a computer model of the human brain that more realistically simulates actual patterns of brain impairment than existing methods. The novel advancement represents the union of two established approaches to create a digital simulation environment that could help stroke victims and patients with other brain injuries by serving as a testing ground for hypotheses about specific neurological damage.

"This model is tied accurately to the functional connectivity of the brain and is able to demonstrate realistic patterns of cognitive impairment," says Christopher McNorgan, an assistant professor of psychology in UB's College of Arts and Sciences. "Since the model reflects how the brain is connected, we can manipulate it in ways that provide insights, for example, into the areas of a patient's brain that might be damaged.

"This recent work doesn't prove that we have a digital facsimile of the human brain, but the findings indicate that the model is performing in a way that is consistent with how the brain performs, and that at least suggests that the model is taking on properties that are moving in the direction of possibly one day creating a facsimile."

The findings provide a powerful means of identifying and understanding brain networks and how they function, opening possibilities for discovery and understanding that were previously out of reach.

Details on the model and the results of its testing appear in the journal NeuroImage.

Explaining McNorgan's model starts with a look at the two fundamental components of its design: functional connectivity and multivariate pattern analyses (MVPA).

For many years, traditional brain-based models have relied on a general linear approach. This method looks at every spot in the brain and how those areas respond to stimuli. This approach is used in traditional studies of functional connectivity, which rely on functional magnetic resonance imaging (fMRI) to explore how the brain is wired. A linear model assumes a direct relationship between two things, such as the visual region of the brain becoming more or less active when a light flickers on or off.

While linear models excel at identifying which areas are active under certain conditions, they often fail to detect the more complicated relationships that may exist among multiple areas. That's the domain of more recent advances, like MVPA, a "teachable" machine-learning technique that operates on a more holistic level to evaluate how activity is patterned across brain regions.

MVPA is non-linear. Assume, for instance, that there's a set of neurons dedicated to recognizing the meaning of a stop sign. These neurons do not fire at everything red or everything octagonal, because there's no one-to-one linear mapping between being red and being a stop sign (an apple isn't a stop sign), nor between being octagonal and being a stop sign (a board room table isn't a stop sign).

"A non-linear response ensures that they do light up when we see an object that is both red and octagonal," explains McNorgan. "For this reason, non-linear methods like MVPA have been at the core of so-called 'Deep Learning' approaches behind technologies, such as the computer vision software required for self-driving cars."
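The conjunction McNorgan describes can be sketched as a toy threshold unit (purely illustrative code, not from the study; all names and numbers are assumptions): a weighted sum of two features is passed through a hard threshold, so the unit stays silent for "red alone" or "octagonal alone" but fires when both are present.

```python
def conjunction_unit(red: float, octagonal: float, threshold: float = 1.5) -> float:
    """Toy 'stop sign' detector: responds only when both features are present.

    The weighted sum of the two inputs is linear, but the hard threshold
    makes the unit's response non-linear: neither feature alone can drive
    the sum past the threshold. Illustrative values only.
    """
    activation = 1.0 * red + 1.0 * octagonal   # linear combination
    return 1.0 if activation > threshold else 0.0  # non-linear threshold

print(conjunction_unit(1.0, 0.0))  # an apple: red, not octagonal -> 0.0
print(conjunction_unit(0.0, 1.0))  # a table: octagonal, not red -> 0.0
print(conjunction_unit(1.0, 1.0))  # a stop sign: both -> 1.0
```

The threshold is what makes this non-linear in the sense the article describes: the unit's output is not proportional to either input considered on its own.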

But MVPA relies on brute-force machine-learning techniques. The process is opportunistic, sometimes mistaking coincidence for meaningful correlation. Even ideal models require researchers to provide evidence that activity in the theoretical model would also be present under the same conditions in the brain.

On their own, both traditional functional connectivity and MVPA approaches have limitations, and integrating results generated by each of these approaches requires considerable effort and expertise for brain researchers to puzzle out the evidence.

When combined, however, each approach constrains the other's limitations. McNorgan is the first researcher to successfully integrate functional connectivity and MVPA into a machine-learning model that's explicitly grounded in real-world functional connections among brain regions. In other words, the mutually constrained results are a self-assembling puzzle.

"It was my chocolate and peanut butter moment," says McNorgan, an expert in neuroimaging and computational modeling.

"I've had a particular career trajectory that has allowed me to work extensively with different theoretical models. That background provided a particular set of experiences that made the combination seem obvious in hindsight."

To build his models, McNorgan begins by gathering the brain data that will teach them the patterns of brain activity associated with each of three categories: tools, musical instruments, and fruits. These data came from 11 participants who imagined the appearance and sound of familiar category examples, like hammers, guitars, and apples, while undergoing an fMRI scan. These scans indicate which areas are more or less active based on blood oxygen levels.

"There are certain patterns of activity across the brain that are consistent with thinking about one category versus another," says McNorgan. "We might think of this as a neural fingerprint."

These MRI patterns were then digitized and used to train a series of computer models to recognize which activity patterns were associated with each category.

"After training, models are given previously unseen activity patterns," he explains. "Significantly above-chance classification accuracy indicates that the models have learned a generalizable relationship between specific brain activity patterns and thinking about a specific category."
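The train-then-classify workflow can be illustrated with a deliberately simple stand-in for MVPA (a nearest-centroid decoder in plain Python; the study's actual model is far more sophisticated, and all patterns and labels below are invented for illustration): training averages each category's activity patterns into a "neural fingerprint," and an unseen pattern is labeled with the closest fingerprint.

```python
from math import dist  # Euclidean distance, Python 3.8+

def train_centroids(patterns, labels):
    """Average the training patterns per category into a prototype
    ('neural fingerprint'). patterns: equal-length feature vectors."""
    sums, counts = {}, {}
    for p, y in zip(patterns, labels):
        acc = sums.setdefault(y, [0.0] * len(p))
        for i, v in enumerate(p):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(pattern, centroids):
    """Label an unseen pattern with the nearest category prototype."""
    return min(centroids, key=lambda y: dist(pattern, centroids[y]))

# Toy "voxel" activity patterns for the three categories (invented values)
train = [([1.0, 0.1, 0.0], "tool"), ([0.9, 0.2, 0.1], "tool"),
         ([0.1, 1.0, 0.1], "instrument"), ([0.0, 0.9, 0.2], "instrument"),
         ([0.1, 0.0, 1.0], "fruit"), ([0.2, 0.1, 0.9], "fruit")]
centroids = train_centroids([p for p, _ in train], [y for _, y in train])
print(classify([0.8, 0.15, 0.05], centroids))  # held-out pattern -> tool
```

Above-chance accuracy on held-out patterns, as described in the quote, is what shows a decoder has learned a generalizable mapping rather than memorizing its training set.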

To test whether the digital brain models produced by this new method were more realistic, McNorgan gave them "virtual lesions" by disrupting activations in regions known to be important for each of the categories.

He found that the mutually constrained models showed classification errors consistent with the lesion location. For example, lesions to areas thought to be important for representing tools disrupted accuracy for tool patterns, but not the other two categories. By comparison, other versions of models not trained using the new method did not show this behavior.
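The virtual-lesion logic can be sketched against the same kind of toy decoder (a hypothetical, self-contained illustration, not the study's model): silencing the features a category's fingerprint depends on should selectively disrupt classification of that category while sparing the others.

```python
from math import dist

# Toy category "fingerprints" over three features (invented values):
centroids = {"tool": [1.0, 0.1, 0.0],
             "instrument": [0.1, 1.0, 0.1],
             "fruit": [0.1, 0.0, 1.0]}

def classify(pattern):
    return min(centroids, key=lambda y: dist(pattern, centroids[y]))

def lesion(pattern, damaged):
    """Simulate a 'virtual lesion' by silencing the damaged features."""
    return [0.0 if i in damaged else v for i, v in enumerate(pattern)]

tool, fruit = [0.9, 0.2, 0.1], [0.1, 0.1, 0.9]
print(classify(tool), classify(fruit))        # intact: tool fruit
# Lesioning feature 0 (the tool-critical "region") breaks tool decoding
# but leaves fruit decoding intact:
print(classify(lesion(tool, {0})), classify(lesion(fruit, {0})))
```

The category-specific error pattern, as in the article, is the signature that the damaged "region" mattered for that category and not the others.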

"The model now suggests how brain areas that might not appear important for encoding information when considered individually may be important when they're functioning as part of a larger configuration or network," he says. "Knowing these areas may help us understand why someone who suffered a stroke or other injury is having trouble making these distinctions."

Swiss supercomputing reduces the risk of blood clots in artificial heart valves

People with mechanical heart valves need blood thinners on a daily basis because they have a higher risk of blood clots and stroke. Researchers at the ARTORG Center of the University of Bern, Switzerland, have now identified a root cause of the blood turbulence that leads to clotting. Design optimization could greatly reduce the risk of clotting and enable these patients to live without life-long medication.

Most people are familiar with turbulence in aviation: certain wind conditions cause a bumpy passenger flight. But blood flow inside human vessels can be turbulent, too. Turbulence can appear when blood flows along vessel bends or edges, causing an abrupt change in flow velocity. Turbulent blood flow generates extra forces that increase the odds that blood clots will form. These clots grow slowly and may eventually be carried along by the bloodstream, causing a stroke by blocking an artery in the brain.
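Whether a flow tends toward turbulence is commonly judged by the Reynolds number, Re = ρvD/μ. A back-of-the-envelope estimate (using typical textbook values for blood and the aortic valve, assumed for illustration and not figures from the study) shows why valve flow is a candidate for turbulence:

```python
# Rough Reynolds number for flow through the aortic valve.
# All values are typical textbook figures, assumed for illustration only.
rho = 1060.0   # blood density, kg/m^3
mu = 3.5e-3    # blood dynamic viscosity, Pa*s
v = 1.0        # peak systolic velocity through the valve, m/s
d = 0.023      # aortic valve diameter, m

re = rho * v * d / mu
print(f"Re = {re:.0f}")  # on the order of 7000, well above the ~2300
                         # transition threshold quoted for pipe flow
```

High Reynolds numbers alone do not guarantee turbulence; as the article explains, geometry, such as edges that abruptly redirect the flow, determines where instabilities actually develop.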

Mechanical heart valves produce turbulent blood flows

Image: Hadi Zolfaghari (front) and Dominik Obrist (back) discuss the turbulent flow in the mechanical heart valve. © M. Kugemann for ARTORG Center, University of Bern

Patients with artificial heart valves are at a higher risk of clot formation, a risk well documented in patients after the implantation of an artificial valve. The risk is particularly severe for recipients of mechanical heart valves, who must take blood thinners every day to combat the risk of stroke. So far, it is unclear why mechanical heart valves promote clot formation far more than other valve types, such as biological heart valves.

A team of engineers from the Cardiovascular Engineering Group at the ARTORG Center for Biomedical Engineering Research at the University of Bern has now successfully identified a mechanism that can significantly contribute to clot formation. They used complex mathematical methods of hydrodynamic stability theory, a subfield of fluid mechanics, which has been used successfully for many decades to develop fuel-efficient aircraft. This is the first translation of these methods, which combine physics and applied mathematics, into medicine.

Using complex computer simulations on flagship supercomputers at the Centro Svizzero di Calcolo Scientifico in Lugano, the research team was able to show that the current shape of the flow-regulating flaps of the heart valve leads to strong turbulence in the blood flow. "By navigating through the simulation data, we found how the blood impinges at the front edge of the valve flaps, and how the blood flow quickly becomes unstable and forms turbulent vortices," explains Hadi Zolfaghari, first author of the study. "The strong forces generated in this process could activate the blood coagulation and cause clots to form immediately behind the valve. Supercomputers helped us to capture one root cause of turbulence in these valves, and hydrodynamic stability theory helped us to find an engineering fix for it."

The mechanical heart valves used in the study consist of a metal ring and two flaps rotating on hinges; the flaps open and close with each heartbeat to allow blood to flow out of the heart but not back in again. In the study, the team also investigated how the heart valve could be improved. The analysis showed that even a slightly modified design of the valve flaps allowed the blood to flow without generating the instabilities that lead to turbulence – more like a healthy heart. Such a turbulence-free blood flow would significantly reduce the chance of clot formation and stroke.

Life without blood thinners?

More than 100,000 people per year receive a mechanical heart valve. Because of the high risk of clotting, all these people must take blood thinners, every day, for the rest of their lives. If the design of the heart valves is improved from a fluid mechanics point of view, it is conceivable that recipients of these valves would no longer need blood thinners. This could mean a normal life, without the lasting burden of daily blood thinner medication. "The design of mechanical heart valves has hardly been adapted since their development in the 1970s," says Dominik Obrist, head of the research group at the ARTORG Center. "By contrast, a lot of research and development has been conducted in other engineering areas, such as aircraft design. Considering how many people have an artificial heart valve, it is time to talk about design optimizations also in this area in order to give these people a better life."

SAIC wins $727 million application modernization contract for the DOD

The U.S. Air Force has awarded the Common Computing Environment (Cloud One) contract to Science Applications International Corp. SAIC will migrate approximately 800 Air Force and U.S. Army mission applications into the cloud.

“The Air Force and Army need speed, scale, and security in migrating their mission apps to the cloud,” said Michael LaRouche, executive vice president and general manager of SAIC’s National Security Customer Group. “We leveraged the best of our own proprietary solutions and partners’ technologies to develop a smart migration approach to address the military’s critical needs. We have invested heavily in our IT modernization capabilities, and the opportunity to assist the DOD reinforces the value we deliver to our customers.”

The contract consists of firm fixed price, labor hour, and cost reimbursement elements with a nine-month period of performance and four one-year options. Though awarded by the Air Force, the contract will also serve Army applications, as part of the Army’s cloud strategic framework. SAIC’s solution leverages proven success and capabilities in cloud, IT modernization, software, cyber, and data analytics.

"We help our clients to realize the full potential of the cloud for their applications," said Josh Jackson, executive vice president and general manager of SAIC's Solutions & Technology Group. "We proposed a forward-leaning and comprehensive model for what they want to achieve, remaining focused on the mission while accelerating and simplifying adoption of DOD's cloud computing options and the associated benefits. Our customer-vetted approach provides a fast, affordable, and highly secure migration strategy that protects the current state while accelerating the use of new and emerging technologies."