Brout's combined Pantheon+ dataset produces the most precise view of dark energy and dark matter

Credit: NASA/CXC/U.Texas

Analyzing more than two decades' worth of supernova explosions convincingly bolsters modern cosmological theories and reinvigorates efforts to answer fundamental questions.

Astrophysicists have performed a powerful new analysis that places the most precise limits yet on the composition and evolution of the universe. With this analysis, dubbed Pantheon+, cosmologists find themselves at a crossroads.

Pantheon+ convincingly finds that the cosmos is composed of about two-thirds dark energy and one-third matter — mostly in the form of dark matter — and has been expanding at an accelerating pace over the last several billion years. However, Pantheon+ also cements a major disagreement over the pace of that expansion that has yet to be solved.

By putting prevailing modern cosmological theories, known as the Standard Model of Cosmology, on even firmer evidentiary and statistical footing, Pantheon+ further closes the door on alternative frameworks accounting for dark energy and dark matter. Both are bedrocks of the Standard Model of Cosmology but have yet to be directly detected, and they rank among the model's biggest mysteries. Building on the results of Pantheon+, researchers can now pursue more precise observational tests and hone explanations for these dark components of the cosmos.

"With these Pantheon+ results, we are able to put the most precise constraints on the dynamics and history of the universe to date," says Dillon Brout, an Einstein Fellow at the Center for Astrophysics | Harvard & Smithsonian. "We've combed over the data and can now say with more confidence than ever before how the universe has evolved over the eons and that the current best theories for dark energy and dark matter hold strong."

Brout is the lead author of a series of papers describing the new Pantheon+ analysis, published jointly today in a special issue of The Astrophysical Journal.

Pantheon+ is based on the largest dataset of its kind, comprising more than 1,500 stellar explosions called Type Ia supernovae. These bright blasts occur when white dwarf stars — remnants of stars like our Sun — accumulate too much mass and undergo a runaway thermonuclear reaction. Because Type Ia supernovae outshine entire galaxies, the stellar detonations can be glimpsed at distances exceeding 10 billion light years, or back through about three-quarters of the universe's total age. Given that the supernovae blaze with nearly uniform intrinsic brightnesses, scientists can use the explosions' apparent brightness, which diminishes with distance, along with redshift measurements as markers of time and space. That information, in turn, reveals how fast the universe was expanding during different epochs, which is then used to test theories of the fundamental components of the universe.
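As a rough illustration of the standard-candle idea (a back-of-envelope sketch, not the Pantheon+ light-curve fitting; the absolute and apparent magnitudes below are assumed example values), the distance modulus relation converts an observed brightness into a luminosity distance:

```python
# Illustrative only: distance to a "standard candle" via the distance modulus.
M_ABS = -19.3   # assumed typical peak absolute magnitude of a Type Ia supernova
m_app = 24.5    # hypothetical measured peak apparent magnitude

# Distance modulus: m - M = 5*log10(d_pc) - 5  =>  d_pc = 10**((m - M + 5) / 5)
distance_pc = 10 ** ((m_app - M_ABS + 5) / 5)
distance_ly = distance_pc * 3.2616  # one parsec is about 3.26 light years

print(f"luminosity distance ~ {distance_pc:.2e} pc (~{distance_ly:.2e} light years)")
```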

The breakthrough 1998 discovery that the universe's expansion is accelerating came from just such a study of Type Ia supernovae. Scientists attribute the acceleration to an invisible energy, dubbed dark energy, inherent to the fabric of the universe itself. Subsequent decades of work have continued to compile ever-larger datasets, revealing supernovae across an even wider range of space and time, and Pantheon+ has now brought them together into the most statistically robust analysis to date.

"In many ways, this latest Pantheon+ analysis is a culmination of more than two decades' worth of diligent efforts by observers and theorists worldwide in deciphering the essence of the cosmos," says Adam Riess, one of the winners of the 2011 Nobel Prize in Physics for the discovery of the accelerating expansion of the universe and the Bloomberg Distinguished Professor at Johns Hopkins University (JHU) and the Space Telescope Science Institute in Baltimore, Maryland. Riess is also an alum of Harvard University, holding a Ph.D. in astrophysics.

Brout's own career in cosmology traces back to his undergraduate years at JHU, where he was taught and advised by Riess. There Brout worked with then-PhD-student and Riess-advisee Dan Scolnic, who is now an assistant professor of physics at Duke University and another co-author on the new series of papers.

Several years ago, Scolnic developed the original Pantheon analysis of approximately 1,000 supernovae.

Now, Brout, Scolnic, and their new Pantheon+ team have added some 50 percent more supernova data points, coupled with improvements in analysis techniques and the treatment of potential sources of error, ultimately yielding twice the precision of the original Pantheon.

"This leap in both the dataset quality and in our understanding of the physics that underpins it would not have been possible without a stellar team of students and collaborators working diligently to improve every facet of the analysis," says Brout.

Taking the data as a whole, the new analysis holds that 66.2 percent of the universe manifests as dark energy, with the remaining 33.8 percent being a combination of dark matter and ordinary matter. To arrive at an even more comprehensive understanding of the constituent components of the universe at different epochs, Brout and colleagues combined Pantheon+ with other strongly evidenced, independent, and complementary measures of the large-scale structure of the universe and with measurements from the earliest light in the universe, the cosmic microwave background.

Another key Pantheon+ result relates to one of the paramount goals of modern cosmology: nailing down the current expansion rate of the universe, known as the Hubble constant. Pooling the Pantheon+ sample with data from the SH0ES (Supernova H0 for the Equation of State) collaboration, led by Riess, results in the most stringent local measurement of the current expansion rate of the universe.

Pantheon+ and SH0ES together find a Hubble constant of 73.4 kilometers per second per megaparsec with only 1.3% uncertainty. Stated another way, for every megaparsec, or 3.26 million light years, the analysis estimates that in the nearby universe, space itself is expanding at more than 160,000 miles per hour.
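A quick unit conversion (illustrative arithmetic only) shows how the quoted 73.4 kilometers per second per megaparsec translates into that miles-per-hour figure:

```python
# Convert the Hubble constant from km/s per megaparsec to mph per megaparsec.
H0 = 73.4                 # km/s per megaparsec (Pantheon+ and SH0ES combined)
SECONDS_PER_HOUR = 3600.0
KM_PER_MILE = 1.609344

mph_per_Mpc = H0 * SECONDS_PER_HOUR / KM_PER_MILE
print(f"~{mph_per_Mpc:,.0f} mph of recession speed per megaparsec")  # ~164,000
```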

However, observations from an entirely different epoch of the universe's history tell a different story. Measurements of the universe's earliest light, the cosmic microwave background, when combined with the current Standard Model of Cosmology, consistently peg the Hubble constant at a value significantly lower than that derived from Type Ia supernovae and other astrophysical markers. This sizable discrepancy between the two methodologies has been termed the Hubble tension.

The new Pantheon+ and SH0ES datasets heighten this Hubble tension. In fact, the tension has now passed the important 5-sigma threshold (about one-in-a-million odds of arising due to random chance) that physicists use to distinguish possible statistical flukes from something that must accordingly be understood. Reaching this new statistical level highlights the challenge for both theorists and astrophysicists to try to explain the Hubble constant discrepancy.
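For context, here is a minimal sketch of what crossing 5 sigma means under a Gaussian null hypothesis; the exact "one-in-a-million" odds depend on whether one- or two-sided probabilities are quoted:

```python
# Probability that pure chance produces a deviation of at least 5 sigma,
# assuming a Gaussian null distribution (illustrative, not the papers' own
# statistical machinery).
from scipy.stats import norm

SIGMA = 5.0
p_one_sided = norm.sf(SIGMA)        # ~2.9e-7, roughly 1 in 3.5 million
p_two_sided = 2 * norm.sf(SIGMA)    # ~5.7e-7, roughly 1 in 1.7 million

print(f"one-sided: about 1 in {1 / p_one_sided:,.0f}")
print(f"two-sided: about 1 in {1 / p_two_sided:,.0f}")
```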

"We thought it would be possible to find clues to a novel solution to these problems in our dataset, but instead we’re finding that our data rule out many of these options and that the profound discrepancies remain as stubborn as ever," says Brout.

The Pantheon+ results could help point to where the solution to the Hubble tension lies. "Many recent theories have begun pointing to exotic new physics in the very early universe, however, such unverified theories must withstand the scientific process and the Hubble tension continues to be a major challenge," says Brout.

Overall, Pantheon+ offers scientists a comprehensive look back through much of cosmic history. The earliest, most distant supernovae in the dataset gleam forth from 10.7 billion light years away, meaning from when the universe was roughly a quarter of its current age. In that earlier era, dark matter and its associated gravity held the universe's expansion rate in check. Such a state of affairs changed dramatically over the next several billion years as the influence of dark energy overwhelmed that of dark matter. Dark energy has since flung the contents of the cosmos ever farther apart and at an ever-increasing rate.
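A back-of-envelope calculation (assuming a flat universe with the Pantheon+ fractions quoted above; not a result from the papers themselves) gives a rough sense of when that changeover happened: matter thins out as the universe grows, while dark energy's density stays constant, so the two were equal at a modest redshift, a few billion years ago.

```python
# Rough estimate of when matter and dark energy densities were equal.
# Matter density scales as 1/a**3 with scale factor a; dark energy is constant.
OMEGA_M = 0.338    # matter fraction today (mostly dark matter), from Pantheon+
OMEGA_DE = 0.662   # dark energy fraction today, from Pantheon+

a_equal = (OMEGA_M / OMEGA_DE) ** (1.0 / 3.0)   # scale factor at equality
z_equal = 1.0 / a_equal - 1.0                    # corresponding redshift

print(f"matter/dark-energy equality at a ~ {a_equal:.2f} (z ~ {z_equal:.2f})")
# z ~ 0.25 corresponds to a lookback time of a few billion years.
```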

"With this combined Pantheon+ dataset, we get a precise view of the universe from the time when it was dominated by dark matter to when the universe became dominated by dark energy," says Brout. "This dataset is a unique opportunity to see dark energy turn on and drive the evolution of the cosmos on the grandest scales up through present time."

Studying this changeover now with even stronger statistical evidence will hopefully lead to new insights into dark energy's enigmatic nature.

"Pantheon+ is giving us our best chance to date of constraining dark energy, its origins, and its evolution," says Brout.

Brazilian-built AI helps predict performance of sugarcane in the field

Credit: IAC's Sugarcane Center in Ribeirão Preto

A technique developed by Brazilian researchers enhances the efficiency of breeding programs, saving selection time and the cost of plant genotyping and characterization.

A Brazilian study shows that artificial intelligence (AI) can be used to create efficient models for the genomic selection of sugarcane and forage grass varieties and predict their performance in the field on the basis of their DNA. 

Compared with traditional breeding techniques, the methodology, developed with support from FAPESP, improved predictive accuracy by more than 50%. This is the first time a highly efficient genomic selection method based on machine learning has been proposed for polyploid plants (in which cells have more than two complete sets of chromosomes), including the grasses studied.

Machine learning is a branch of AI and computer science involving statistics and optimization, with countless applications. Its main goal is to create algorithms that automatically extract patterns from datasets. It can be used to predict the performance of a plant, including whether it will be resistant to or tolerant of biotic stresses, such as pests and diseases caused by insects, nematodes, fungi, or bacteria, or abiotic stresses, such as cold, drought, salinity, or insufficient soil nutrients.

Crossing is the most widely used technique in traditional breeding programs. “You establish populations by crossing plants that are interesting. In the case of sugarcane, you cross a variety that produces a lot of sugar with another that’s more resistant, for example. You cross them and then assess the performance of the resulting genotypes in the field,” said computer scientist Alexandre Hild Aono, first author of the article on the study published in Scientific Reports. Aono is a researcher at the State University of Campinas’s Center for Molecular Biology and Genetic Engineering (CBMEG-UNICAMP). He graduated from the Federal University of São Paulo (UNIFESP).

“But this assessment process takes a long time and is very expensive. The method we propose can predict the performance of these plants even before they grow. We succeeded in predicting yield on the basis of the genetic material. This is significant because it saves many years of assessment,” Aono explained.

In the case of sugarcane, the challenge is highly complex. Traditional breeding techniques take between nine and 12 years and incur high costs, according to Anete Pereira de Souza, a professor of plant genetics at UNICAMP’s Institute of Biology and Aono’s Ph.D. supervisor at CBMEG.

“When breeders identify an interesting plant, they multiply it by cloning so that the genotype isn’t lost, but this takes time and costs a great deal. An extreme example is the breeding of rubber trees, which can take as long as 30 years,” Souza said. One way to surmount these difficulties is what she called “plant breeding 4.0”, which makes intensive use of data analysis and highly efficient computational and statistical tools. Each genotyping-by-sequencing process can involve 1 billion sequences.

The main hurdle scientists face in trying to breed better varieties of polyploid plants such as sugarcane and forage grass is the complexity of their genomes. “In this case, we didn’t even know if genomic selection would be possible, given the scarce resources and the difficulty of working with this complexity,” Aono said. 

Methods

The researchers began the genomic selection process with diploid plants [containing cells with two sets of chromosomes], as they have simpler genomes. “The problem is that high-value tropical plants like sugarcane aren’t diploids but polyploids, which is a complication,” Souza said. 

While human beings and almost all animals are diploid, sugarcane may have as many as 12 copies of every chromosome. Any individual of the species Homo sapiens can have up to two variants of each gene, one inherited from the father and the other from the mother. Sugarcane is more complex because theoretically, any gene can have many variants in the same individual. There are regions of its genome with six sets of chromosomes, and others with eight, ten, and even 12 sets. “The genetics is so complex that breeders work with sugarcane as if it were diploid,” Souza said.

In 2001, Theodorus Meuwissen, a Dutch scientist who is currently a professor of animal breeding and genetics at the Norwegian University of Life Sciences (NMBU), proposed genomic selection, which predicts complex traits in animals and plants by associating genome-wide markers with their phenotypes (observable characteristics resulting from the interaction of their genotypes with the environment). The advantage of this approach for plant breeding is the link it establishes between phenotypic traits of interest, such as yield, sugar level or precocity, and single nucleotide polymorphisms (SNPs). A “snip” (as SNP is pronounced) is a genomic variant at a single base position in the DNA, Souza explained.

“It’s the difference in the genomes of any two individuals. For example, one may have an A [corresponding to the nucleotide adenine] that produces a little more than another with a G [guanine] at the same location in the genome. That changes everything,” she said. “When you find an association with what you’re looking for, like a high level of sugar production, and specific SNPs at different locations in the genome, you can sequence only the population on which your breeding work focuses.”

The advances proposed by Aono and colleagues dispense with the need to plant and phenotype throughout the breeding cycle. “We do field experiments in the initial stages of the program to obtain the phenotype of interest for each clone,” Souza said. “In parallel, we sequence all the clones in the breeding population quite straightforwardly, without needing to have the whole genome for every clone. This is what’s called genotyping-by-sequencing – partial sequencing in search of the differences and similarities in the base pairs for the clones, and their association with each clone’s production. The association between phenotype and genome shows which produces more and which SNPs are associated with higher production. In this manner, we can identify clones with a large proportion of the SNPs that contribute to the higher production observed in the initial experiments and obtain the most productive variety faster and more cheaply.”
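The following sketch illustrates genomic prediction in general terms only; it is not the authors' published pipeline. It fits a simple penalized linear model that predicts a phenotype (say, yield) from a matrix of SNP marker dosages, with all data randomly simulated as placeholders:

```python
# Minimal genomic-prediction sketch with simulated data (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_clones, n_snps = 200, 1000
# Marker dosages per clone. For a polyploid like sugarcane, dosages can span
# more than the diploid 0/1/2; values 0..4 here are purely for illustration.
X = rng.integers(0, 5, size=(n_clones, n_snps)).astype(float)

# Simulated phenotype: a handful of SNPs carry real effects, plus noise.
effects = np.zeros(n_snps)
effects[:20] = rng.normal(0.0, 1.0, 20)
y = X @ effects + rng.normal(0.0, 5.0, n_clones)

# Ridge regression shrinks the many marker effects (far more SNPs than clones),
# a common baseline for genomic selection models.
model = Ridge(alpha=10.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

In a real program, the dosage matrix would come from genotyping-by-sequencing and the phenotypes from the initial field trials described above; the fitted model would then rank new, unphenotyped clones.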

The project succeeded thanks to years of collaboration with scientists at several research institutions and universities, such as the University of São Paulo’s Luiz de Queiroz College of Agriculture (ESALQ-USP), UNIFESP’s Institute of Science and Technology, the Campinas Agronomic Institute (IAC) and its Sugarcane Center in Ribeirão Preto, the Beef Cattle Unit of the Brazilian Agricultural Research Corporation (EMBRAPA) in Campo Grande, Mato Grosso do Sul state, the Aeronautical Technology Institute (ITA) in São José dos Campos, São Paulo state, and the University of Edinburgh’s Roslin Institute in the United Kingdom.

Trinity College Dublin physicists suggest our brains use quantum computation

Scientists from Trinity College Dublin believe our brains could use quantum computation, after adapting an idea originally developed to prove the existence of quantum gravity in order to explore the human brain and its workings.

The brain functions measured were also correlated with short-term memory performance and conscious awareness, suggesting quantum processes are also part of cognitive and conscious brain functions.

If the team’s results can be confirmed, which will likely require advanced multidisciplinary approaches, they would enhance our general understanding of how the brain works and potentially how it can be maintained or even healed. They may also help in developing innovative technologies and building even more advanced quantum computers.

Dr. Christian Kerskens, the lead physicist at the Trinity College Institute of Neuroscience (TCIN), is the co-author of the research article that has just been published in the Journal of Physics Communications. He said: “We adapted an idea, developed for experiments to prove the existence of quantum gravity, whereby you take known quantum systems, which interact with an unknown system. If the known systems entangle, then the unknown must be a quantum system, too. It circumvents the difficulties to find measuring devices for something we know nothing about.

“For our experiments, we used proton spins of ‘brain water’ as the known system. ‘Brain water’ builds up naturally as fluid in our brains and the proton spins can be measured using MRI (Magnetic Resonance Imaging). Then, by using a specific MRI design to seek entangled spins, we found MRI signals that resemble heartbeat-evoked potentials, a form of EEG signals. EEGs measure electrical brain currents, which some people may recognize from personal experience or simply from watching hospital dramas on TV.”

Electrophysiological potentials like heartbeat-evoked potentials are normally not detectable with MRI, and the scientists believe they could only observe them because the nuclear proton spins in the brain were entangled.

Dr. Kerskens added: “If entanglement is the only possible explanation here then that would mean that brain processes must have interacted with the nuclear spins, mediating the entanglement between the nuclear spins. As a result, we can deduce that those brain functions must be quantum. 

“Because these brain functions were also correlated to short-term memory performance and conscious awareness, it is likely that those quantum processes are an important part of our cognitive and conscious brain functions.  

“Quantum brain processes could explain why we can still outperform supercomputers when it comes to unforeseen circumstances, decision making, or learning something new. Our experiments performed only 50 meters away from the lecture theatre, where Schrödinger presented his famous thoughts about life, may shed light on the mysteries of biology, and on consciousness which scientifically is even harder to grasp.”

This research was supported by Science Foundation Ireland and TCIN.