Texas A&M researchers develop Computational Fluid Dynamics-Discrete Element Method model for studying flow in next-generation reactors to improve safety

The model can better predict the physical phenomena inside very-high-temperature pebble-bed reactors

When one of the largest modern earthquakes struck Japan on March 11, 2011, the nuclear reactors at Fukushima-Daiichi automatically shut down, as designed. The emergency systems, which would have helped maintain the necessary cooling of the core, were destroyed by the subsequent tsunami. Because the reactor could no longer cool itself, the core overheated, resulting in a severe nuclear meltdown, the likes of which haven't been seen since the Chernobyl disaster in 1986.

Since then, reactor designs have improved dramatically in terms of safety, sustainability, and efficiency. Unlike the light-water reactors at Fukushima, which used liquid coolant and uranium fuel, the current generation of reactors offers a variety of coolant options, including molten-salt mixtures, supercritical water, and even gases like helium.

Dr. Jean Ragusa and Dr. Mauricio Eduardo Tano Retamales from the Department of Nuclear Engineering at Texas A&M University have been studying a new fourth-generation reactor design: the pebble-bed reactor. Pebble-bed reactors use spherical fuel elements (known as pebbles) and a fluid coolant (usually a gas).

Pebble-bed reactors use passive natural circulation to cool down, making it theoretically impossible for a core meltdown to occur. (Credit: Dr. Jean Ragusa and Dr. Mauricio Eduardo Tano Retamales/Texas A&M University Engineering)

"There are about 40,000 fuel pebbles in such a reactor," said Ragusa. "Think of the reactor as a really big bucket with 40,000 tennis balls inside."

During an accident, as the gas in the reactor core heats up, the hot gas rises and cooler gas flows in from below, a process known as natural convection cooling. Additionally, the fuel pebbles are made from pyrolytic carbon and tristructural isotropic (TRISO) particles, making them resistant to temperatures as high as 3,000 degrees Fahrenheit. As very-high-temperature reactors (VHTRs), pebble-bed designs can be cooled by passive natural circulation, making it theoretically impossible for an accident like Fukushima to occur.

However, during normal operation, a high-speed flow cools the pebbles. This flow creates movement around and between the fuel pebbles, similar to the way a gust of wind changes the trajectory of a tennis ball. How do you account for the friction between the pebbles and the influence of that friction in the cooling process?

This is the question that Ragusa and Tano aimed to answer in their most recent publication in the journal Nuclear Technology titled "Coupled Computational Fluid Dynamics-Discrete Element Method Study of Bypass Flows in a Pebble-Bed Reactor."

"We solved for the location of these 'tennis balls' using the Discrete Element Method, where we account for the flow-induced motion and friction between all the tennis balls," said Tano. "The coupled model is then tested against thermal measurements in the SANA experiment."
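The DEM half of such a coupling can be sketched in a few lines. The toy model below is illustrative only, not the authors' solver: every parameter (spring constant `k`, dashpot coefficient `c`, drag coefficient `gamma`, pebble radius and mass) is invented for the example. Each pebble is a sphere that feels a spring-dashpot contact force when it overlaps a neighbor, plus a linear drag toward a fluid assumed to be at rest.

```python
import math

def dem_simulate(pos, vel, steps, dt=1e-3, k=1000.0, c=5.0,
                 gamma=0.5, radius=0.5, mass=1.0):
    """Toy 2-D DEM: spring-dashpot contact between overlapping spheres
    plus linear drag toward a quiescent fluid (illustrative parameters)."""
    pos = [list(p) for p in pos]
    vel = [list(v) for v in vel]
    n = len(pos)
    for _ in range(steps):
        force = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                dist = math.hypot(dx, dy)
                overlap = 2 * radius - dist
                if overlap > 0 and dist > 0:
                    nx, ny = dx / dist, dy / dist
                    # Normal relative velocity of j w.r.t. i
                    # (negative while the spheres approach each other).
                    rel = (vel[j][0] - vel[i][0]) * nx + (vel[j][1] - vel[i][1]) * ny
                    fn = k * overlap - c * rel  # spring pushes apart, dashpot damps
                    force[i][0] -= fn * nx
                    force[i][1] -= fn * ny
                    force[j][0] += fn * nx
                    force[j][1] += fn * ny
        for i in range(n):
            # Linear drag toward a fluid at rest, then explicit Euler update.
            force[i][0] -= gamma * vel[i][0]
            force[i][1] -= gamma * vel[i][1]
            vel[i][0] += force[i][0] / mass * dt
            vel[i][1] += force[i][1] / mass * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos, vel

# Two pebbles approach head-on, collide, rebound, and are slowed by drag.
pos, vel = dem_simulate([[-1.0, 0.0], [1.0, 0.0]],
                        [[1.0, 0.0], [-1.0, 0.0]], steps=5000)
```

In a real CFD-DEM coupling, the quiescent-fluid assumption in the drag term would be replaced by the local gas velocity supplied by the flow solver at each step.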

The SANA experiment, conducted in the early 1990s, measured how the various mechanisms in a reactor interact when transmitting heat from the center of the cylinder to its outer wall. This experiment gave Tano and Ragusa a benchmark against which they could validate their models.

As a result, their team developed a coupled Computational Fluid Dynamics-Discrete Element Method model for studying the flow over a pebble bed. This model can now be applied to all high-temperature pebble-bed reactors and is the first computational model of its kind to do so. High-accuracy tools such as this allow vendors to develop better reactors.

"The computational models we create help us more accurately assess different physical phenomena in the reactor," said Tano. "As a result, reactors can operate at a higher margin, theoretically producing more power while increasing the safety of the reactor. We do the same thing with our models for molten-salt reactors for the Department of Energy."

As artificial intelligence continues to advance, its applications to large-scale computational modeling and simulation grow. "We're in a very exciting time for the field," said Ragusa. "And we encourage any prospective students who are interested in computational modeling to reach out because this field will hopefully be around for a long time."

ALMA discovers the most ancient galaxy with spiral morphology

Analyzing data obtained with the Atacama Large Millimeter/submillimeter Array (ALMA), researchers found a galaxy with a spiral morphology only 1.4 billion years after the Big Bang. This is the most ancient galaxy of its kind ever observed. The discovery of a galaxy with a spiral structure at such an early stage is an important clue to solving the classic questions of astronomy: "How and when did spiral galaxies form?"

"I was excited because I had never seen such clear evidence of a rotating disk, spiral structure, and centralized mass structure in a distant galaxy in any previous literature," says Takafumi Tsukui, a graduate student at SOKENDAI and the lead author of the research paper published in the journal Science. "The quality of the ALMA data was so good that I was able to see so much detail that I thought it was a nearby galaxy."

The Milky Way Galaxy, where we live, is a spiral galaxy. Spiral galaxies are fundamental objects in the Universe, accounting for as much as 70% of the total number of galaxies. However, other studies have shown that the proportion of spiral galaxies declines rapidly as we look back through the history of the Universe. So, when were spiral galaxies formed?

ALMA detected emission from carbon ions in the galaxy; spiral arms are visible on both sides of the compact, bright area at its center. (Credit: ALMA (ESO/NAOJ/NRAO), T. Tsukui & S. Iguchi)

Tsukui and his supervisor Satoru Iguchi, a professor at SOKENDAI and the National Astronomical Observatory of Japan, noticed a galaxy called BRI 1335-0417 in the ALMA Science Archive. The galaxy existed 12.4 billion years ago and contained a large amount of dust, which obscures the starlight. This makes it difficult to study this galaxy in detail with visible light. On the other hand, ALMA can detect radio emissions from carbon ions in the galaxy, which enables us to investigate what is going on in the galaxy.
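The two epochs quoted here ("12.4 billion years ago" and "1.4 billion years after the Big Bang") are consistent, assuming the standard figure of roughly 13.8 billion years for the age of the Universe (that figure is an assumption of this sketch, not stated in the article):

```python
AGE_OF_UNIVERSE_GYR = 13.8  # assumed standard value, in billions of years
lookback_gyr = 12.4         # light travel time to BRI 1335-0417
# Epoch of the galaxy, measured from the Big Bang:
print(AGE_OF_UNIVERSE_GYR - lookback_gyr)  # ~1.4 Gyr after the Big Bang
```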

The researchers found a spiral structure extending 15,000 light-years from the center of the galaxy. This is one-third of the size of the Milky Way Galaxy. The estimated total mass of the stars and interstellar matter in BRI 1335-0417 is roughly equal to that of the Milky Way.

"As BRI 1335-0417 is a very distant object, we might not be able to see the true edge of the galaxy in this observation," comments Tsukui. "For a galaxy that existed in the early Universe, BRI 1335-0417 was a giant."

Then the question becomes, how was this distinct spiral structure formed in only 1.4 billion years after the Big Bang? The researchers considered multiple possible causes and suggested that it could be due to an interaction with a small galaxy. BRI 1335-0417 is actively forming stars and the researchers found that the gas in the outer part of the galaxy is gravitationally unstable, which is conducive to star formation. This situation is likely to occur when a large amount of gas is supplied from outside, possibly due to collisions with smaller galaxies.

The fate of BRI 1335-0417 is also shrouded in mystery. Galaxies that contain large amounts of dust and actively produce stars in the ancient Universe are thought to be the ancestors of the giant elliptical galaxies in the present Universe. In that case, BRI 1335-0417 would change its shape from a disk galaxy to an elliptical one in the future. Or, contrary to the conventional view, the galaxy may remain a spiral galaxy for a long time. BRI 1335-0417 will play an important role in the study of galaxy shape evolution over the long history of the Universe.

"Our Solar System is located in one of the spiral arms of the Milky Way," explains Iguchi. "Tracing the roots of spiral structure will provide us with clues to the environment in which the Solar System was born. I hope that this research will further advance our understanding of the formation history of galaxies."

These research results are presented in T. Tsukui & S. Iguchi "Spiral morphology in an intensely star-forming disk galaxy more than 12 billion years ago" published online by the journal Science on Thursday, 20 May 2021.

MD Anderson's Chen develops an AI tool for finding rare cell populations in large single-cell datasets

The computational approach enables analysis of meaningful data that otherwise may be lost in the noise

Researchers at The University of Texas MD Anderson Cancer Center have developed a first-of-its-kind artificial intelligence (AI)-based tool that can accurately identify rare groups of biologically important cells from single-cell datasets, which often contain gene or protein expression data from thousands of cells.

This computational tool, called SCMER (Single-Cell Manifold presERving feature selection), can help researchers sort through the noise of complex datasets to study cells that would likely not be identifiable otherwise.

SCMER may be used broadly for many applications in oncology and beyond, including the study of minimal residual disease, drug resistance, and distinct populations of immune cells, explained senior author Ken Chen, Ph.D., associate professor of Bioinformatics & Computational Biology.

"Modern techniques can generate lots of data, but it has become harder to determine which genes or proteins actually are important in those contexts," Chen said. "Small groups of cells can have important features that may play a role in drug resistance, for example, but those features may not be sufficient to distinguish them from more common cells. It's become very important in analyzing single-cell datasets to be able to detect these rare cells and their unique molecular features."

Developing methods to effectively study small or rare cell populations in cancer research is a direct response to one of the provocative questions posed by the National Cancer Institute (NCI) in 2020, designating this an important and underexplored research area. SCMER was designed to address the issue and to enable researchers to get the most out of increasingly complex datasets.

Rather than the traditional approach of sorting cells into clusters based on all data contained in a dataset, SCMER takes an unbiased look to detect the most meaningful distinguishing features that define unique groups of cells. This allows researchers not only to detect rare cell populations but to generate a compact set of genes or proteins that can be used to detect those cells among many others. To highlight the utility of SCMER, the research team applied it to analyze several published single-cell datasets and found it compared favorably to currently available computational approaches.
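The idea of distilling a compact, structure-preserving feature set can be illustrated with a toy sketch. This is not the published SCMER algorithm; here a hypothetical greedy search simply picks the features whose pairwise cell-to-cell distances correlate best with the distances computed from all features, and the synthetic "expression" data is invented for the example.

```python
import numpy as np

def pairwise_sq_dists(X):
    """All pairwise squared Euclidean distances between rows of X."""
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

def greedy_manifold_features(X, k):
    """Toy stand-in for manifold-preserving feature selection: greedily
    pick k features whose subspace distances best match full-data distances."""
    full = pairwise_sq_dists(X).ravel()
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = []
        for f in remaining:
            sub = pairwise_sq_dists(X[:, chosen + [f]]).ravel()
            scores.append(np.corrcoef(full, sub)[0, 1])
        best = remaining[int(np.argmax(scores))]
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Synthetic "cells": features 0 and 1 separate two populations,
# features 2-4 are pure noise; the greedy search recovers 0 and 1.
rng = np.random.default_rng(0)
signal = np.vstack([rng.normal((0, 0), 1, (20, 2)),
                    rng.normal((10, 10), 1, (20, 2))])
noise = rng.normal(0, 0.1, (40, 3))
X = np.hstack([signal, noise])
selected = sorted(greedy_manifold_features(X, 2))
print(selected)  # expected: [0, 1]
```

The real method scales to thousands of genes and preserves neighborhood structure rather than raw distances, but the payoff is the same: a short feature list that still separates the cell populations.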

In a reanalysis of more than 4,500 melanoma cells, SCMER was able to distinguish the cell types present using the expression of just 75 genes. The results also pointed to a number of genes involved in tumor development and drug resistance that were not identified as meaningful in the original study.

In a complex dataset of nearly 40,000 gastrointestinal immune cells, SCMER separated cells using only 250 distinct features. This analysis identified all the original cell types detected in the original study, but in many cases further defined subgroups of rare cells that were not previously identified.

Finally, the research team applied SCMER to study more than 1,400 lung cancer cells taken at various points in time after drug treatment. Using just 80 genes, the tool was able to accurately distinguish cells based on treatment responses and pointed to possible novel drivers of therapeutic resistance.

"Using state-of-the-art AI techniques, we have developed an efficient and user-friendly tool capable of uncovering new biological insights from rare cell populations," Chen said. "SCMER offers researchers the ability to reduce high dimensional, complex datasets into a compact set of actionable features with biological significance."

Brown researchers use holey math, machine learning to study cellular self-assembly

The field of mathematical topology is often described in terms of donuts and pretzels.

To most of us, the two differ in the way they taste or in their compatibility with morning coffee. But to a topologist, the only difference between the two is that one has a single hole and the other has three. There's no way to stretch or contort a donut to make it look like a pretzel -- at least not without ripping it or pasting different parts together, both of which are verboten in topology. The different number of holes makes the two shapes fundamentally, inexorably different.

In recent years, researchers have drawn on mathematical topology to help explain a range of phenomena, like phase transitions in matter, aspects of Earth's climate, and even how zebrafish form their iconic stripes. Now, a Brown University research team is working to use topology in yet another realm: training computers to classify how human cells organize into tissue-like architectures.

In a study published in the May 7 issue of the journal Soft Matter, the researchers demonstrate a machine learning technique that measures the topological traits of cell clusters. They showed that the system can accurately categorize cell clusters and infer the motility and adhesion of the cells that comprise them. 

"You can think of this as topology-informed machine learning," said Dhananjay Bhaskar, a recent Ph.D. graduate who led the work. "The hope is that this can help us to avoid some of the pitfalls that affect the accuracy of machine learning algorithms."

Topology-based machine learning classifies how human cells organize into spatial patterns based on the presence of persistent topological loops around empty regions, which can be used to infer cellular behaviors such as adhesion and migration.

Bhaskar developed the algorithm with Ian Y. Wong, an assistant professor in Brown's School of Engineering, and William Zhang, a Brown undergraduate.

There's been a significant amount of work in recent years to use artificial intelligence as a means of analyzing big data with spatial information, such as medical imaging of patient tissues. Progress has been made in training these systems to classify accurately, "but how they work is opaque and a little finicky," Wong said. "Just like people, sometimes computers hallucinate. You can have a few pixels in the wrong place, and it can confuse the algorithm. So Dhananjay has been thinking about ways we might be able to make those analyses a little more robust."

In developing this new system, Bhaskar took inspiration from modern art, specifically Pablo Picasso's "Bull." The series of 11 lithographs starts with a bull depicted in full detail. Each successive frame strips away a bit of detail, ending in a simple drawing capturing only the animal's fundamental attributes. By employing topology, Bhaskar thought he might be able to do something similar to understand the underlying form of tissue-like architectures.

The way in which cells migrate and interact depends on the physiology of the cells involved. For example, healthy tissues contain higher numbers of stationary epithelial cells. Processes like wound repair or cancer, however, often involve more mobile mesenchymal cells. Differences in physiology between the two cell types cause them to cluster together differently. Epithelial cells tend to aggregate into larger, more closely packed clusters. Mesenchymal cells tend to be more dispersed, with groups of cells branching off in different directions. But when assemblages contain a mix of both kinds of cells, it can be difficult to accurately analyze them.

The new algorithm uses a mathematical framework called persistent homology to examine microscope images of cell assemblages. Specifically, it looks at the topological patterns -- loops or holes -- that the cells form collectively. By looking at which patterns persist across different spatial resolutions, the algorithm determines which patterns are intrinsic to the image.

It starts by looking at the cells in their finest detail, determining which cells seem to be part of topological loops. Then it blurs the detail a bit by drawing a circle around each cell -- effectively making each cell a little larger -- to see which loops persist at that more coarse-grained scale and which get blurred out. The process is repeated until all the topological features eventually disappear. In the end, the algorithm produces a sort of bar code showing which loops persist across spatial scales. Those that are most persistent are stored as a simplified representation of the overall shape.
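The coarsening loop described above can be mimicked with a self-contained sketch. This is a grid-based stand-in for persistent homology, not the authors' implementation: it grows a disk around each simulated "cell" and counts the enclosed holes that survive at a given radius, so scanning the radius reproduces the birth and death of a topological loop. All coordinates and parameters are invented for the example.

```python
import numpy as np
from collections import deque

def count_holes(points, radius, bound=12.0, step=0.1):
    """Rasterize the union of disks of the given radius around each point,
    then count enclosed holes: connected background regions minus the one
    unbounded outer region."""
    xs = np.arange(-bound, bound + step, step)
    gx, gy = np.meshgrid(xs, xs)
    covered = np.zeros(gx.shape, dtype=bool)
    for px, py in points:
        covered |= (gx - px) ** 2 + (gy - py) ** 2 <= radius ** 2
    seen = np.zeros_like(covered)
    components = 0
    n = covered.shape[0]
    for i in range(n):
        for j in range(n):
            if covered[i, j] or seen[i, j]:
                continue
            components += 1  # new background region: flood-fill it
            queue = deque([(i, j)])
            seen[i, j] = True
            while queue:
                a, b = queue.popleft()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    x, y = a + da, b + db
                    if 0 <= x < n and 0 <= y < n and not covered[x, y] and not seen[x, y]:
                        seen[x, y] = True
                        queue.append((x, y))
    return components - 1

# Twelve simulated "cells" on a ring: a loop appears once the growing
# disks touch, and disappears once they fill the enclosed region.
ring = [(5 * np.cos(t), 5 * np.sin(t))
        for t in np.linspace(0, 2 * np.pi, 12, endpoint=False)]
holes_small = count_holes(ring, 0.5)  # disks disjoint: no loop yet
holes_mid = count_holes(ring, 2.0)    # disks overlap: one loop encloses a hole
holes_large = count_holes(ring, 6.0)  # interior filled in: loop gone
print(holes_small, holes_mid, holes_large)
```

Recording the radius interval over which each hole exists, across all holes, is exactly the "bar code" the article describes; the longest bars are the features kept as the shape's signature.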

As it turns out, those persistent topological objects can be used to categorize clusters of different types of cells. After training their algorithm on computer-simulated cells programmed to behave like different types of cells, the team turned it loose on real experimental images of migratory cells. Those cells had been exposed to varying biochemical treatments so that some were more epithelial, some were more mesenchymal, and some were somewhere in between. The study showed that the topological algorithm was able to correctly classify different spatial patterns according to which biochemical treatment the cells had received.

"It was able to pull out all of these experimental treatments just by identifying these persistent topological loops," Wong said. "We were kind of amazed at how well it did."

The team hopes that one day the algorithm could be used in laboratory experiments to test drugs, helping to determine how different drugs can alter cell migration and adhesion. Eventually, it may also be used on medical images of tumors, potentially helping doctors to determine how malignant those tumors may be.

"We're looking for ways to catch subtleties that might not be apparent to the human eye," Wong said. "We hope that this might be a human interpretable approach that complements existing machine learning approaches."

Penn State's molecular dynamics simulation of proteins unveils clues on origins of Parkinson's disease

Parkinson's disease is the second most common neurodegenerative disease and affects more than 10 million people around the world. To better understand the origins of the disease, researchers from Penn State College of Medicine and The Hebrew University of Jerusalem have developed an integrative approach, combining experimental and computational methods, to understand how individual proteins may form harmful aggregates, or groupings, that are known to contribute to the development of the disease. They said their findings could guide the development of new therapeutics to delay or even halt the progression of neurodegenerative diseases.

Alpha-synuclein is a protein that helps regulate the release of neurotransmitters in the brain and is found in neurons. It exists as a single unit but commonly joins together with other units to perform cellular functions. When too many units combine, it can lead to the formation of Lewy bodies, which are associated with neurodegenerative diseases like Parkinson's disease and dementia.

Although researchers know that aggregates of this protein cause disease, how they form is not well understood. Alpha-synuclein is highly disordered, meaning it exists as an ensemble of different conformations, or shapes, rather than a well-folded 3D structure. This characteristic makes the protein difficult to study using standard laboratory techniques -- but the research team used computers together with leading-edge experiments to predict and study the different conformations it may fold into.

"Computational biology allows us to study how forces within and outside of a protein may act on it," said Nikolay Dokholyan, professor of pharmacology at the College of Medicine and Penn State Cancer Institute researcher. "Using experiments performed in professor Eitan Lerner's laboratory at the Biological Chemistry Department at The Hebrew University of Jerusalem, a series of algorithms accounts for effective forces acting in and upon a specific protein and can identify the various conformations it will take based on those forces. This allows us to study the conformations of alpha-synuclein in a way that is otherwise difficult to identify in experimental studies alone."

In the paper published today (May 19) in the journal Structure, the researchers detailed their methodology for studying the different conformations of alpha-synuclein. They used data from previous experiments to parameterize the molecular dynamics of the protein in their calculations. Their experiments revealed the conformational ensemble of alpha-synuclein, the series of different shapes the protein can assume.

Using leading-edge experiments, the researchers found that some shapes of alpha-synuclein are surprisingly stable, lasting longer than milliseconds. They said this is much slower than expected for a disordered protein that constantly changes conformations.

"Prior knowledge showed this spaghetti-like protein would undergo structural changes in microseconds," Lerner said. "Our results indicate that alpha-synuclein is stable in some conformations for milliseconds -- slower than previously estimated."
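The kind of conformational stability the team reports is often quantified with a dwell-time analysis of a discretized trajectory. The helper below is a hypothetical illustration, not the authors' pipeline: the state labels, frame spacing, and trajectory are invented, and it simply measures how long each conformational state persists between transitions.

```python
from itertools import groupby

def dwell_times(state_traj, dt):
    """For each state in a discretized trajectory, collect the durations
    of its uninterrupted runs; dt is the time between frames."""
    runs = {}
    for state, group in groupby(state_traj):
        runs.setdefault(state, []).append(sum(1 for _ in group) * dt)
    return runs

# Toy trajectory sampled every 0.1 ms: conformation "A" persists for
# millisecond-scale stretches, conformation "B" is short-lived.
traj = ["A"] * 30 + ["B"] * 2 + ["A"] * 25 + ["B"] * 3
runs = dwell_times(traj, dt=0.1)  # durations in ms
mean_a = sum(runs["A"]) / len(runs["A"])
print(runs["A"], runs["B"], mean_a)
```

Applied to real single-molecule or simulation data, the same tally distinguishes a microsecond-scale "spaghetti-like" protein from one that parks in a conformation for milliseconds.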

"We believe that we've identified stable forms of alpha-synuclein that allow it to form complexes with itself and other biomolecules," said Jiaxing Chen, a graduate student at the College of Medicine. "This opens up possibilities for the development of drugs that can regulate the function of this protein."

Chen's lead co-author, Sofia Zaer, alongside colleagues at Hebrew University, used a series of experimental techniques to verify that alpha-synuclein could fold into the stable forms the simulation predicted. The research team continues to study these stable conformations as well as the whole process of alpha-synuclein aggregation in the context of Parkinson's disease.

"The information from our study could be used to develop small molecule regulators of alpha-synuclein activity," Lerner said. "Drugs that prevent protein aggregation and enhance its normal neurophysiological function may interfere with the development and progression of neurodegenerative diseases."