Destiny of science modeled in new study by UH prof

What is the common thread among mRNA vaccines, genomic drugs, NASA's missions to the moon, and the harnessing of nuclear power? They are all products of science convergence, in which knowledge from multiple scientific disciplines is integrated into new overarching knowledge that propels modern civilization. In the last 70 years, convergence has achieved more than science achieved in all of its previous multi-millennial history combined.

In a new article in American Scientist magazine, professors Ioannis Pavlidis (University of Houston), Ergun Akleman (Texas A&M University), and Alexander M. Petersen (University of California, Merced) show that despite appearances to the contrary, convergence is not a new phenomenon that took science by storm, but a streak that runs deep into science’s nature.

In work spanning 10 years, the researchers modeled the evolution of convergence by analyzing millions of scientific works using machine learning and other advanced data analytic methods.

In their account, the researchers identify several stages in the evolution of science, each characterized by a different form of convergence. First, polymathic convergence, which characterized early science up to the Renaissance period, is exemplified by famous polymaths such as Aristotle and Leonardo da Vinci. In polymathic convergence, knowledge integration took place within the minds of individual scholars.

This was followed by a period of disciplinary divergence where theories developed within specific disciplines were turned into generalized templates with broader applications - a phenomenon the authors call convergence through divergence. Darwin's theory of evolution in biology, which was used by others to explain economic and social systems, is a case in point.

Then, by the mid-20th century, the era of multi-disciplinary team convergence dawned, in which experts from different disciplines worked together toward a common goal. In multi-disciplinary team convergence, knowledge integration takes place across teams of scientists with diverse expertise. A famous example of this type of convergence was the Manhattan Project, which ushered humanity into the nuclear era.

“Now in the early 21st century, we have detected the emergence of yet another form of convergence, which we call polymathic team convergence,” said Pavlidis, Eckhard-Pfeiffer Professor of Computer Science and the director of the Computational Physiology Laboratory at UH. “In polymathic team convergence, knowledge integration takes place both within and across scholars, that is, a mix of individual polymathic and multi-disciplinary team convergence. Recent research in brain science exhibits telltale signs of polymathic team convergence.”

“This is not the first theory about the underlying mechanisms of science evolution. However, it is the first scientific evolution theory that is largely based on massive data analysis and modeling, which allows us not only to ‘prove’ the theory's points for the past, but also to estimate the confidence in the theory's predictions for the future,” said Pavlidis.

Regarding the latter, the team of researchers predicts that convergence by the mid-21st century will evolve into what they call cyborg team convergence, where polymathic scientists will collaborate with artificial intelligence (AI) agents in mixed human-machine teams.

“Early signs of cyborg team convergence are here and are thoroughly described in our article,” Petersen noted.

Indian Institute of Science develops GPU-based ML algo to accelerate connectome discovery at scale

A new GPU-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) can help scientists better understand and predict connectivity between different regions of the brain.

The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation, or ReAl-LiFE, can rapidly analyze the enormous amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain. Using ReAl-LiFE, the team was able to evaluate dMRI data over 150 times faster than existing state-of-the-art algorithms.

[Image: Connections between the midbrain and various regions of the neocortex, each region shown in a different colour, all estimated with diffusion MRI and tractography in the living human brain. Credit: Varsha Sreenivasan and Devarajan Sridharan]

“Tasks that previously took hours to days can be completed within seconds to minutes,” says Devarajan Sridharan, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study.

Millions of neurons fire in the brain every second, generating electrical pulses that travel across neuronal networks from one point in the brain to another through connecting cables or “axons”. These connections are essential for computations that the brain performs. “Understanding brain connectivity is critical for uncovering brain-behavior relationships at scale,” says Varsha Sreenivasan, a Ph.D. student at CNS and the first author of the study. However, conventional approaches to studying brain connectivity typically use animal models and are invasive. dMRI scans, on the other hand, provide a non-invasive method to study brain connectivity in humans. 

The cables (axons) that connect different areas of the brain are its information highways. Because bundles of axons are shaped like tubes, water molecules move through them, along their length, in a directed manner. dMRI allows scientists to track this movement, to create a comprehensive map of the network of fibers across the brain, called a connectome.

Unfortunately, it is not straightforward to pinpoint these connectomes. The data obtained from the scans only provide the net flow of water molecules at each point in the brain. “Imagine that the water molecules are cars. The obtained information is the direction and speed of the vehicles at each point in space and time with no information about the roads. Our task is similar to inferring the networks of roads by observing these traffic patterns,” explains Sridharan. 
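Sridharan's traffic analogy can be made concrete with a toy example: given only a field of local directions (the "traffic"), a path can be traced step by step to recover the underlying route (the "road"). The sketch below is purely illustrative and is not taken from ReAl-LiFE; the function names and the synthetic circular direction field are invented for this example.

```python
import numpy as np

def trace_streamline(direction_at, start, step=0.01, n_steps=100):
    """Follow the local direction field from a seed point using Euler steps,
    the basic idea behind tractography: recover a fiber path from a field of
    local diffusion directions."""
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        d = direction_at(path[-1])
        d = d / np.linalg.norm(d)          # unit step along the local direction
        path.append(path[-1] + step * d)
    return np.array(path)

# Synthetic field: "fibers" curve counter-clockwise around the origin,
# so streamlines should stay on circles centered at the origin.
def circular_field(p):
    x, y = p
    return np.array([-y, x])

path = trace_streamline(circular_field, start=(1.0, 0.0))
radii = np.linalg.norm(path, axis=1)       # should stay close to 1.0
```

Real tractography works in 3D, with noisy direction estimates at each voxel, but the core loop (step along the local direction, repeat) is the same.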

To identify these networks accurately, conventional algorithms closely match the predicted dMRI signal from the inferred connectome with the observed dMRI signal. Scientists had previously developed an algorithm called LiFE (Linear Fascicle Evaluation) to carry out this optimization, but one of its challenges was that it worked on traditional Central Processing Units (CPUs), which made the computation time-consuming. 

In the new study, Sridharan's team tweaked their algorithm to cut down the computational effort involved in several ways, including removing redundant connections, thereby significantly improving upon LiFE's performance. To speed up the algorithm further, the team also redesigned it to run on Graphics Processing Units (GPUs), the specialized chips found in high-performance gaming computers, which helped them analyze data at speeds 100-150 times faster than previous approaches.
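The optimization described above, fitting a predicted dMRI signal to the observed one over candidate fibers and then discarding connections that contribute nothing, has the flavor of a non-negative least-squares fit. Below is a minimal sketch on synthetic data; the matrix sizes, variable names, and the use of SciPy's `nnls` solver are assumptions for illustration, not the actual ReAl-LiFE implementation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_measurements, n_candidate_fibers = 60, 20

# Each column holds the dMRI signal one candidate fiber would produce.
A = rng.random((n_measurements, n_candidate_fibers))

# Simulate a "true" connectome in which only the first 5 fibers exist.
true_w = np.zeros(n_candidate_fibers)
true_w[:5] = rng.random(5) + 0.5
y = A @ true_w                      # observed signal (noise-free toy case)

# Fit non-negative weights so the predicted signal A @ w matches y.
w, residual = nnls(A, y)

# Fibers assigned (near-)zero weight explain nothing and can be pruned,
# the same spirit as removing redundant connections.
kept = np.flatnonzero(w > 1e-8)
```

In this noise-free toy case the fit recovers exactly the five genuine fibers; with real, noisy scans the pruning threshold and regularization become important design choices.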

This improved algorithm, ReAl-LiFE, was also able to predict how a human test subject would behave or carry out a specific task. In other words, using the connection strengths estimated by the algorithm for each individual, the team was able to explain variations in behavioral and cognitive test scores across a group of 200 participants. 
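Explaining test-score variation from estimated connection strengths is, at heart, a regression across participants. The following is a synthetic-data sketch of that idea (a plain linear model with an R-squared readout), not the study's actual analysis pipeline; all sizes and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_connections = 200, 10

# Per-participant connection strengths (features) and a behavioral score
# generated from them plus a little noise.
X = rng.random((n_participants, n_connections))
beta_true = rng.normal(size=n_connections)
scores = X @ beta_true + 0.1 * rng.normal(size=n_participants)

# Ordinary least-squares fit; R^2 measures how much of the behavioral
# variation the connectome features account for.
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((scores - pred) ** 2) / np.sum((scores - scores.mean()) ** 2)
```

In practice one would use held-out participants (cross-validation) rather than in-sample R-squared, so that the model's ability to predict behavior generalizes beyond the fitted group.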

Such analysis can have medical applications too. “Data processing on large scales is becoming increasingly necessary for big-data neuroscience applications, especially for understanding healthy brain function and brain pathology,” says Sreenivasan.

For example, using the obtained connectomes, the team hopes to be able to identify early signs of aging or deterioration of brain function before they manifest behaviourally in Alzheimer's patients. “In another study, we found that a previous version of ReAl-LiFE could do better than other competing algorithms at distinguishing patients with Alzheimer's disease from healthy controls,” says Sridharan. He adds that their GPU-based implementation is very general and can be used to tackle optimization problems in many other fields as well.

Indian Institute of Science performs simulations to analyze the COVID-19 spread during short conversations

When a person sneezes or coughs, they can potentially transmit droplets carrying viruses like SARS-CoV-2 to others in their vicinity. Does talking to an infected person also carry an increased risk of infection? How do speech droplets or “aerosols” move in the air space between the people interacting?

[Image: Interactions of speech jets during short conversations between two people separated by four feet, visualised by an iso-surface of the aerosol concentration; three different height differences are shown. The blue and red colours represent the simulated speech jets emanating from the mouths of the two people. The simulations were performed on SahasraT at IISc. Credit: Rohit Singhal]

To answer these questions, a research team has carried out supercomputer simulations to analyze the movement of speech aerosols. The team includes researchers from the Department of Aerospace Engineering, Indian Institute of Science (IISc), along with collaborators from the Nordic Institute for Theoretical Physics (NORDITA) in Stockholm and the International Centre for Theoretical Sciences (ICTS) in Bengaluru. Their study was published in the journal Flow.

The team visualized scenarios in which two maskless people stand two, four, or six feet apart and talk to each other for about a minute, and then estimated the rate and extent of spread of the speech aerosols from one person to the other. Their simulations showed that the risk of getting infected was higher when one person acted as a passive listener rather than engaging in a two-way conversation. Factors like the height difference between the people talking and the number of aerosols released from their mouths also appear to play an important role in viral transmission.

“Speaking is a complex activity … and when people speak, they’re not really conscious of whether this can constitute a means of virus transmission,” says Sourabh Diwan, Assistant Professor in the Department of Aerospace Engineering, and one of the corresponding authors.

In the early days of the COVID-19 pandemic, experts believed that the virus mostly spread symptomatically through coughing or sneezing. Soon, it became clear that asymptomatic transmission also leads to the spread of COVID-19. However, very few studies have looked at aerosol transport by speech as a possible mode of asymptomatic transmission, according to Diwan.

To analyze speech flows, he and his team modified a computer code they had originally developed to study the movement and behavior of cumulus clouds – the puffy cotton-like clouds that are usually seen on a sunny day. The code (called Megha-5) was written by S Ravichandran from NORDITA, the other corresponding author on the paper, and was used recently for studying particle-flow interaction in Rama Govindarajan’s group at ICTS. The analysis carried out by the team on speech flows incorporated the possibility of viral entry through the eyes and mouth in determining the risk of infection – most previous studies had only considered the nose as the point of entry. 

“The computational part was intensive, and it took a lot of time to perform these simulations,” explains Rohit Singhal, first author and Ph.D. student at the Department of Aerospace Engineering. Diwan adds that it is hard to numerically simulate the flow of speech aerosols because of the highly fluctuating (“turbulent”) nature of the flow; factors like the flow rate at the mouth and the duration of speech also play a role in shaping its evolution. 

In the simulations, when the speakers were either of the same height or of drastically different heights (one tall and one short), the risk of infection was found to be much lower than when the height difference was moderate; the variation looked like a bell curve. Based on their results, the team suggests that speakers turning their heads away from each other by about nine degrees, while still maintaining eye contact, can considerably reduce the risk of infection.

Moving forward, the team plans to focus on simulating differences in the loudness of the speakers’ voices and the presence of ventilation sources in their vicinity to see what effect they can have on viral transmission. They also plan to engage in discussions with public health policymakers and epidemiologists to develop suitable guidelines. “Whatever precautions we can take while we come back to normalcy in our daily interactions with other people, would go a long way in minimizing the spread of infection,” Diwan says.