Cosmological thinking meets neuroscience in new theory about brain connections

A collaboration between a former cosmologist and a computational neuroscientist at Janelia generates a new way to identify essential connections between brain cells.

After a career spent probing the mysteries of the universe, a Janelia Research Campus senior scientist is now exploring the mysteries of the human brain and developing new insights into the connections between brain cells.  

Tirthabir Biswas had a successful career as a theoretical high-energy physicist when he came to Janelia on a sabbatical in 2018. Biswas still enjoyed tackling problems about the universe, but the field had lost some of its excitement, with many major questions already answered.

“Neuroscience today is a little like physics a hundred years ago when physics had so much data and they didn’t know what was going on and it was exciting,” says Biswas, who is part of the Fitzgerald Lab. “There is a lot of information in neuroscience and a lot of data, and they understand some specific big circuits, but there is still not an overarching theoretical understanding, and there is an opportunity to make a contribution.”

One of the biggest unanswered questions in neuroscience revolves around connections between brain cells. There are hundreds of times more connections in the human brain than there are stars in the Milky Way, but which brain cells are connected and why remains a mystery. This limits scientists’ ability to precisely treat mental health issues and develop more accurate artificial intelligence.

The challenge of developing a mathematical theory to better understand these connections was a problem Janelia Group Leader James Fitzgerald first posed when Biswas arrived in his lab.

While Fitzgerald was out of town for a few days, Biswas sat down with pen and paper and used his background in high-dimensional geometry to think about the problem – an approach different from that of most neuroscientists, who typically rely on calculus and algebra to address mathematical problems. Within days, Biswas had a major insight into the solution and approached Fitzgerald as soon as he returned.

“It seemed this was a very difficult problem, so if I say, ‘I’ve solved the problem,’ he’ll probably think I’m crazy,” Biswas recalls. “But I decided to say it anyway.” Fitzgerald was initially skeptical, but once Biswas finished laying out his work, they both realized he was on to something important.

“He had an insight that is really fundamental to how these networks work that people hadn’t had before,” Fitzgerald says. “This insight was enabled by cross-disciplinary thinking. This insight was a flash of brilliance that he had because of how he thinks, and it just translated to this new problem he’s never worked on before.”

Biswas’s idea helped the team develop a new way to identify essential connections between brain cells, described in a paper published June 29 in Physical Review Research. By analyzing neural networks – mathematical models that mimic brain cells and their connections – they were able to show that certain connections in the brain may be more essential than others.

Specifically, they looked at how these networks transform inputs into outputs. For example, an input could be a signal detected by the eye and the output could be the resulting brain activity. They then asked which connection patterns result in the same input-output transformation.

As expected, infinitely many connection patterns could produce each input-output transformation. But the team also found that certain connections appeared in every model, leading them to suggest that these necessary connections could be present in real brains. A better understanding of which connections are more essential than others could shed light on how real neural networks in the brain perform computations.
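For a concrete, if simplified, picture of that finding, here is a toy sketch in Python (an illustration only, not the paper's analysis, which treats nonlinear networks; the target map `T` and the sampling scheme are invented for the example). It samples many two-layer linear networks that all implement the same input-output map and records how often each individual connection is used:

```python
# Toy sketch (illustration only): many different weight settings implement the
# same end-to-end map, yet some connections are active in essentially every
# implementation while others never are.
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])                     # hypothetical target input-output map

second_layer = []
while len(second_layer) < 1000:
    A = rng.normal(size=(2, 2))                # random first-layer weights
    if abs(np.linalg.det(A)) < 1e-2:           # skip near-singular draws
        continue
    W2 = T @ np.linalg.inv(A)                  # second layer chosen so W2 @ A == T
    second_layer.append(W2)

# Fraction of sampled networks in which each second-layer connection is active:
usage = np.mean([np.abs(W2) > 1e-6 for W2 in second_layer], axis=0)
print(usage)   # top row ~1.0 (used in every model), bottom row 0.0 (never used)
```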

The next step is for experimental neuroscientists to test this new mathematical theory to see if it can be used to make predictions about what is happening in the brain. The theory has direct applications to Janelia’s efforts to map the connectome of the fly brain and record brain activity in larval zebrafish. Theoretical principles worked out in these small animals can then be used to understand connections in the human brain, where recording such activity is not yet feasible.

“What we are trying to do is put forward some theoretical ways of understanding what really matters and use these simple brains to test those theories,” Fitzgerald says. “As they are verified in simple brains, the general theory can be used to think about how brain computation works in larger brains.”

Destiny of science modeled in new study by UH prof

What is the common thread among mRNA vaccines, genomic drugs, NASA’s mission to the moon, and the harnessing of nuclear power? They are all products of science convergence, where knowledge from multiple scientific disciplines is integrated into new overarching knowledge that propels modern civilization. In the last 70 years, convergence has achieved more than science did in all its previous multi-millennial history combined.

In a new article in American Scientist magazine, professors Ioannis Pavlidis (University of Houston), Ergun Akleman (Texas A&M University), and Alexander M. Petersen (University of California, Merced) show that despite appearances to the contrary, convergence is not a new phenomenon that took science by storm, but a streak that runs deep into science’s nature.

In work spanning 10 years, the researchers modeled the evolution of convergence by analyzing millions of scientific works using machine learning and other advanced data-analytic methods.

In their account, the researchers identify several stages in the evolution of science, each characterized by a different form of convergence. First came polymathic convergence, which characterized early science up to the Renaissance and is exemplified by famous polymaths such as Aristotle and Leonardo da Vinci. In polymathic convergence, knowledge integration took place within the minds of individual scholars.

This was followed by a period of disciplinary divergence, in which theories developed within specific disciplines were turned into generalized templates with broader applications – a phenomenon the authors call convergence through divergence. Darwin's theory of evolution in biology, which was used by others to explain economic and social systems, is a case in point.

Then, by the mid-20th century, came the era of multi-disciplinary team convergence, in which experts from different disciplines worked together toward a common goal and knowledge integration took place across teams of scientists with diverse expertise. A famous example of this type of convergence was the Manhattan Project, which ushered humanity into the nuclear era.

“Now in the early 21st century, we have detected the emergence of yet another form of convergence, which we call polymathic team convergence,” said Pavlidis, Eckhard-Pfeiffer Professor of Computer Science and the director of the Computational Physiology Laboratory at UH. “In polymathic team convergence, knowledge integration takes place both within and across scholars, that is, a mix of individual polymathic and multi-disciplinary team convergence. Recent research in brain science exhibits telltale signs of polymathic team convergence.”

"This is not the first theory about the underlying mechanisms of science evolution. However, it is the first scientific evolution theory that is largely based on massive data analysis and modeling, which allows us to not only ‘prove’ the theory's points for the past, but also estimate the confidence about the theory's predictions for the future," said Pavlidis.

Regarding the latter, the team of researchers predicts that convergence by the mid-21st century will evolve into what they call cyborg team convergence, where polymathic scientists will collaborate with artificial intelligence (AI) agents in mixed human-machine teams.

"Early signs of cyborg team convergence are here and are thoroughly described in our article,” Petersen noted.

Indian Institute of Science develops GPU-based ML algo to accelerate connectome discovery at scale

A new GPU-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) can help scientists better understand and predict connectivity between different regions of the brain.

The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation, or ReAl-LiFE, can rapidly analyze the enormous amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain. Using ReAl-LiFE, the team was able to evaluate dMRI data over 150 times faster than existing state-of-the-art algorithms.

[Image: Connections between the midbrain and various regions of the neocortex, each shown in a different colour, all estimated with diffusion MRI and tractography in the living human brain. Credit: Varsha Sreenivasan and Devarajan Sridharan]

“Tasks that previously took hours to days can be completed within seconds to minutes,” says Devarajan Sridharan, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study.

Millions of neurons fire in the brain every second, generating electrical pulses that travel across neuronal networks from one point in the brain to another through connecting cables or “axons”. These connections are essential for computations that the brain performs. “Understanding brain connectivity is critical for uncovering brain-behavior relationships at scale,” says Varsha Sreenivasan, a Ph.D. student at CNS and the first author of the study. However, conventional approaches to studying brain connectivity typically use animal models and are invasive. dMRI scans, on the other hand, provide a non-invasive method to study brain connectivity in humans. 

The cables (axons) that connect different areas of the brain are its information highways. Because bundles of axons are shaped like tubes, water molecules move through them in a directed manner, along their length. dMRI allows scientists to track this movement and thereby build a comprehensive map of the network of fibers across the brain, called a connectome.
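As a simplified illustration of that principle (the textbook diffusion-tensor picture, not the study's full fibre model; the tensor values below are invented), the direction of fastest water diffusion at a point, read off as the principal eigenvector of a 3x3 diffusion tensor, serves as the local fibre direction that tractography follows:

```python
# Simplified diffusion-tensor picture (invented numbers, not scan data): the
# principal eigenvector of a voxel's 3x3 diffusion tensor points along the
# direction of fastest water movement, taken as the local fibre direction.
import numpy as np

D = np.array([[1.7, 0.1, 0.0],
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.3]])               # hypothetical diffusion tensor

eigenvalues, eigenvectors = np.linalg.eigh(D) # eigenvalues in ascending order
fibre_direction = eigenvectors[:, -1]         # eigenvector of the largest one
print(fibre_direction)                        # close to the x-axis here
```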

Unfortunately, it is not straightforward to pinpoint these connectomes. The data obtained from the scans only provide the net flow of water molecules at each point in the brain. “Imagine that the water molecules are cars. The obtained information is the direction and speed of the vehicles at each point in space and time with no information about the roads. Our task is similar to inferring the networks of roads by observing these traffic patterns,” explains Sridharan. 

To identify these networks accurately, conventional algorithms closely match the predicted dMRI signal from the inferred connectome with the observed dMRI signal. Scientists had previously developed an algorithm called LiFE (Linear Fascicle Evaluation) to carry out this optimization, but one of its challenges was that it worked on traditional Central Processing Units (CPUs), which made the computation time-consuming. 
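In spirit, that fit is a non-negative least-squares problem: each candidate fibre contributes a predicted signal, and the algorithm finds non-negative fibre weights whose combined prediction best matches the measurement. Here is a minimal sketch on synthetic data (the matrix sizes and names are assumptions for illustration, not the actual LiFE pipeline):

```python
# Hedged sketch of a LiFE-style fit on synthetic data: choose non-negative
# fascicle weights w so the predicted signal M @ w matches the measured
# signal y as closely as possible.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_measurements, n_fascicles = 200, 50
M = rng.random((n_measurements, n_fascicles))   # predicted signal per candidate fascicle
w_true = np.zeros(n_fascicles)
w_true[:5] = rng.random(5)                      # only a few fascicles truly contribute
y = M @ w_true + 0.01 * rng.normal(size=n_measurements)

w_fit, _ = nnls(M, y)                           # min ||M w - y||  s.t.  w >= 0
print("fascicles kept:", int(np.sum(w_fit > 1e-6)))
```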

In the new study, Sridharan’s team tweaked their algorithm to cut down the computational effort involved in several ways, including removing redundant connections, thereby improving upon LiFE’s performance significantly. To speed up the algorithm further, the team also redesigned it to work on specialized electronic chips, the kind found in high-performance gaming computers, called Graphics Processing Units (GPUs), which helped them analyze data at speeds 100-150 times faster than previous approaches. 
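A hedged sketch of those two ideas, under assumed details rather than the released ReAl-LiFE code: an L1-style penalty drives redundant fascicle weights to exactly zero, and because every update is a plain dense-array operation, swapping NumPy for the GPU library CuPy (which mirrors NumPy's API) moves the same code onto a GPU:

```python
# Sketch of a regularized, GPU-friendly variant (assumed details, not the
# actual ReAl-LiFE implementation): minimize 0.5*||M w - y||^2 + lam*sum(w)
# over w >= 0 with projected gradient descent. Replacing `import numpy as np`
# with `import cupy as np` runs the identical code on a GPU.
import numpy as np

def fit_weights(M, y, lam=0.1, n_iter=5000):
    lr = 1.0 / np.linalg.norm(M) ** 2        # conservative step (Frobenius bound)
    w = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ w - y) + lam       # gradient of smooth part + L1 penalty
        w = np.maximum(w - lr * grad, 0.0)   # project back onto w >= 0
    return w                                 # redundant fascicles end at exactly 0.0
```

On synthetic data like the previous sketch, the penalty typically zeroes out the fascicles that do not help explain the signal, mimicking the pruning of redundant connections.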

This improved algorithm, ReAl-LiFE, was also able to predict how a human test subject would behave or carry out a specific task. In other words, using the connection strengths estimated by the algorithm for each individual, the team was able to explain variations in behavioral and cognitive test scores across a group of 200 participants. 
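As a generic sketch of that kind of analysis (hypothetical data and variable names; the study's own modelling details may differ), each participant's vector of connection strengths serves as features for a cross-validated regression onto a behavioural score:

```python
# Generic sketch (hypothetical data, not the study's exact analysis): regress
# each participant's behavioural score on their connection-strength vector and
# evaluate predictions out-of-sample with cross-validation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_participants, n_connections = 200, 50
strengths = rng.random((n_participants, n_connections))  # per-person connection weights
scores = strengths @ rng.normal(size=n_connections) \
         + rng.normal(size=n_participants)               # synthetic behavioural scores

r2 = cross_val_score(Ridge(alpha=1.0), strengths, scores, cv=5, scoring="r2")
print("cross-validated R^2:", round(r2.mean(), 2))
```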

Such analysis can have medical applications too. “Data processing on large scales is becoming increasingly necessary for big-data neuroscience applications, especially for understanding healthy brain function and brain pathology,” says Sreenivasan.

For example, using the obtained connectomes, the team hopes to be able to identify early signs of aging or deterioration of brain function before they manifest behaviourally in Alzheimer’s patients. “In another study, we found that a previous version of ReAl-LiFE could do better than other competing algorithms at distinguishing patients with Alzheimer’s disease from healthy controls,” says Sridharan. He adds that their GPU-based implementation is very general and can be used to tackle optimization problems in many other fields as well.