Figure 1. (Top) Current exclusion limits on the axion mass from both experiments and astrophysical observations. The KSVZ line is the expectation for the standard KSVZ axion, while the DFSZ line is for the GUT DFSZ axion; searching for DFSZ axions requires much higher sensitivity than for KSVZ axions. The IBS-CAPP axion search experiment explored axion dark matter around the 1.1 GHz frequency range at DFSZ sensitivity, denoted in red, while blue denotes the ranges previously excluded by ADMX. (Bottom) The IBS-CAPP axion search experiment explored axion dark matter around the 1.1 GHz frequency range at DFSZ sensitivity, denoted in blue, while red denotes the axion search previously conducted by ADMX.

South Korea begins to search for DFSZ axion dark matter

Search for physics beyond the Standard Model using a colossal magnet 300,000 times stronger than the Earth’s magnetic field

A South Korean research team at the Center for Axion and Precision Physics Research (CAPP) within the Institute for Basic Science (IBS) recently announced the most advanced experimental setup to search for axions. The group has successfully taken its first step toward the search for Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) axion dark matter, which is associated with the Grand Unification Theory (GUT). In addition, the IBS-CAPP experimental setup allows a far greater search speed than any other axion search experiment in the world.

The notion of physics being dead has been a recurrent opinion across the long history of the subject. In the late 19th century, William Thomson, also known as Lord Kelvin, erroneously believed that there would be nothing new to discover in physics after 1900. Likewise, some thought that there were no new particles to be found after the neutron was discovered in the 1930s. Even today, some worry that modern theoretical physics is at a dead end.

However, this is far from the truth. Our current limit of knowledge in physics, the Standard Model, is capable of only explaining 5% of the universe, with the other 95% consisting of dark matter and dark energy.

Not only that, the current Standard Model has limitations in explaining problems such as the strong CP (charge conjugation-parity) problem. The problem arises from the observation that the strong force, which is described by quantum chromodynamics (QCD), does not appear to violate CP symmetry, while the electroweak force violates CP symmetry to a small extent. This contradicts the Standard Model which predicts that CP symmetry should be violated by the strong force at a level that is much larger than what has been observed.

One proposed solution to the problem involves the existence of hypothetical particles called axions, which could resolve the discrepancy between the predicted and observed levels of CP violation in the strong force. The axion is also one of the strongest candidates for dark matter. The discovery of axion dark matter would be a landmark event in human history, unveiling the nature of 27% of the Universe, and its discoverers would undoubtedly win a ticket to Stockholm.

Schematic of the CAPP-12TB experiment.

Currently, two different proposals for “beyond the Standard Model” exist to explain the strong CP problem. The main difference between the two models is that they predict different types of couplings between axions and other particles. In the “Kim-Shifman-Vainshtein-Zakharov” (KSVZ) model, axions are primarily coupled to heavy quarks, while in the “Dine-Fischler-Srednicki-Zhitnitsky” (DFSZ) model, they are coupled to the Standard Model quarks and leptons via Higgs bosons.

As a dark matter candidate, axions interact only very weakly with ordinary matter, so searching for them can be a tricky business. One commonly used approach involves microwave cavity experiments. These experiments use a strong magnetic field to convert axions (if they exist) into resonant electromagnetic waves, which are then detected using a sensitive receiver. The axion’s mass can then be calculated from the detected wave’s frequency.

Since axion mass is unknown, physicists must expand their search and scan a huge range of frequencies (see Figure 1).
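
The mass-frequency relation behind this is simply E = hf: a resonance near 1.1 GHz corresponds to an axion mass of roughly 4.5 µeV. A minimal sketch of the conversion (standard unit arithmetic, not code from the experiment):

```python
# Convert a cavity resonance frequency to the corresponding axion mass.
# Uses E = h * f; at ~1.1 GHz this gives roughly 4.5 micro-eV.
PLANCK_EV_S = 4.135667696e-15  # Planck constant in eV*s

def axion_mass_micro_ev(frequency_hz: float) -> float:
    """Axion mass (in micro-eV) implied by a cavity resonance frequency."""
    return PLANCK_EV_S * frequency_hz * 1e6

print(axion_mass_micro_ev(1.1e9))  # ~4.55 micro-eV, matching the CAPP-12TB range
```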

The problem is exacerbated when searching for a DFSZ axion, which requires much greater sensitivity than the KSVZ axion. In microwave cavity search experiments, improving the coupling sensitivity requires a drastically longer search time (roughly the fourth power of the improvement), which puts the DFSZ axion out of reach for almost all existing experimental setups.
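
A rough sketch of why DFSZ sensitivity is so much more costly than KSVZ sensitivity, assuming the standard radiometer argument (signal power scales as the coupling squared, and signal-to-noise grows with the square root of the integration time, so the required time scales as the inverse fourth power of the coupling). The coupling ratio of roughly 0.36 between DFSZ and KSVZ is a commonly quoted approximation, assumed here rather than taken from the paper:

```python
# Rough scaling of required integration time with axion-photon coupling:
# signal power P ~ g^2 and SNR ~ P * sqrt(t), so t ~ g^-4 at fixed SNR.
G_DFSZ_OVER_KSVZ = 0.36  # approximate coupling ratio (assumed, not from the paper)

time_ratio = (1.0 / G_DFSZ_OVER_KSVZ) ** 4
print(f"A DFSZ search needs ~{time_ratio:.0f}x the integration time of a KSVZ search")
# prints roughly 60x
```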

As a result, while a few axion search experiments have successfully searched for signals in the KSVZ axion sensitivity ranges, so far the only experiment that was capable of attaining the sensitivity necessary to search for DFSZ axions was the ADMX (Axion Dark Matter eXperiment) conducted by the ADMX collaboration. This makes IBS-CAPP the second group in the world to successfully search for an axion with DFSZ sensitivity.

The IBS-CAPP group utilized a 12 T magnet, which is more powerful than the 8 T magnet used by ADMX. To minimize background noise, the experimental setup was maintained at a temperature close to absolute zero.

In addition to using a more powerful magnet, the IBS-CAPP experiment used quantum technologies and a more effective computational approach to curate the data. This allowed IBS-CAPP to search for DFSZ axions at 3.5 times the rate of the ADMX setup.

CAPP-12TB receiver diagram.

The latest publication by IBS-CAPP details the demonstration of the new setup in a DFSZ axion search conducted from March 1 to March 18, 2022. As a result, the group was able to exclude axion dark matter around 4.55 µeV at DFSZ sensitivity.

“Discovery of axion will allow us to understand up to 32% of the mass-energy of the universe, up from 5% offered by the current Standard Model,” states research fellow KO Byeong Rok of the IBS-CAPP. He added, “We plan to take advantage of the blazingly fast speed of our experimental setup to quickly search for DFSZ axions at the wide frequency ranges of 1 to 2 GHz.”

It is hoped that the discovery of the axion will support the Grand Unification Theory (GUT), which unites three fundamental forces: the strong, weak, and electromagnetic forces. These forces are believed to have been united and indistinguishable at the earliest moments after the Big Bang, under conditions orders of magnitude more extreme than anything achievable in the Large Hadron Collider today. The GUT, in turn, could serve as a stepping stone to the coveted Theory of Everything (TOE) that has eluded theoretical physicists all these years.

Director Yannis SEMERTZIDIS of IBS-CAPP said, “We are highly grateful for all the funding and support that the Institute for Basic Science and South Korean taxpayers provided for this project. It is thanks to them that South Korea now hosts the most advanced axion search experimental facility in the world. If axion exists, I have no doubt it will be found right here in South Korea.”

 

Homo neanderthalensis adult male. Reconstruction based on Shanidar 1 by John Gurche for the Human Origins Program, NMNH. Date: 225,000 to 28,000 years.

UB researchers use computational models to show gene variations for immune, metabolic conditions have persisted in humans for more than 700,000 years

Genetic study of modern humans, Neanderthals, and Denisovans points to the importance of ‘balancing selection’ in evolution

Like a merchant of old, balancing the weights of two different commodities on a scale, nature can keep different genetic traits in balance as a species evolves over millions of years.

Depending on the environment, these traits can be beneficial (for example, fending off disease) or harmful (making humans more susceptible to illness).

The theory behind these evolutionary trade-offs is called balancing selection. A University at Buffalo-led study accepted on Jan. 10 and published on Feb. 21 in eLife explores this phenomenon by analyzing thousands of modern human genomes alongside ancient hominin groups, such as Neanderthal and Denisovan genomes.

The research has “implications for understanding human diversity, the origin of diseases, and biological trade-offs that may have shaped our evolution,” says evolutionary biologist Omer Gokcumen, the study’s corresponding author.

Gokcumen, Ph.D., associate professor of biological sciences at the UB College of Arts and Sciences, adds that the study shows that many biologically relevant variants “have been segregating among our ancestors for hundreds of thousands, or even millions, of years. These ancient variations are our shared legacy as a species.”

Ties with Neanderthals stronger than previously thought

The work builds upon genetic discoveries in the past decade, including when scientists uncovered that modern humans and Neanderthals interbred as early humans moved out of Africa.

It also coincides with the growth of personalized genetic testing, with many people now claiming that a small percentage of their genome comes from Neanderthals. But, as the eLife study shows, humans share much more in common with Neanderthals than those small percentages indicate.

This additional sharing can be traced back to a common ancestor of Neanderthals and humans that lived about 700,000 years ago. This common ancestor bequeathed to the Neanderthals and modern humans a shared legacy in the form of genetic variation.

The research team explored this ancient genetic legacy, focusing on a particular type of genetic variation: deletions.

Gokcumen says that the “deletions are strange because they affect large segments. Some of us are missing large chunks of our genome. These deletions should have negative effects and, as a result, be eliminated from the population by natural selection. However, we observed that some deletions are older than modern humans, dating back millions of years.”

Gene variations passed down over millions of years

The researchers used computational models to show an excess of these ancient deletions, some of which have persisted since our ancestors first learned to make tools, some 2.6 million years ago. Furthermore, the models found that balancing selection can explain this surplus of ancient deletions.
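
As an illustration of the principle (not the study's actual model), here is a minimal sketch of the textbook heterozygote-advantage form of balancing selection, in which a variant that is harmful when homozygous is nonetheless held at an intermediate frequency for as long as the opposing selection pressures persist; the fitness values are purely illustrative:

```python
# Minimal heterozygote-advantage model: genotype fitnesses w_AA, w_Aa, w_aa.
# When the heterozygote is fittest, the allele settles at an intermediate
# equilibrium frequency instead of being purged - the signature of
# balancing selection. Parameter values are illustrative only.
def next_freq(p, w_AA=0.95, w_Aa=1.0, w_aa=0.85):
    q = 1.0 - p
    mean_w = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    return (p * p * w_AA + p * q * w_Aa) / mean_w

p = 0.01  # start the variant "A" at low frequency
for _ in range(5000):
    p = next_freq(p)
print(f"allele frequency after 5000 generations: {p:.3f}")  # ~0.75, not 0 or 1
```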

“Our study contributes to the growing body of evidence suggesting that balancing selection may be an important force in the evolution of genomic variation among humans,” says first author Alber Aqil, a Ph.D. candidate in biological sciences in Gokcumen’s lab.

The investigators found that deletions dating back millions of years are more likely to play an outsized role in metabolic and autoimmune conditions.

Indeed, the persistence of versions of genes that cause severe disease in human populations has long baffled scientists since they expect natural selection to get rid of these versions of genes. It is, after all, very unusual for potentially disease-causing variation to persist for such long periods. The authors argue that balancing selection can solve this riddle.

Aqil says these variations may “protect against infectious diseases, outbreaks, and starvation, which have occurred periodically throughout human history. Thus, the findings represent a considerable leap in our understanding of how genetic variations evolve in humans. A variant may be protective against a pathogen or starvation while also underlying certain metabolic or autoimmune disorders, like Crohn’s disease.”

Additional co-authors include Leo Speidel, Ph.D., Sir Henry Wellcome Postdoctoral Fellow at the Genetics Institute of University College London and the Francis Crick Institute; and Pavlos Pavlidis of the Institute of Computer Science, one of eight institutes of the Foundation for Research and Technology-Hellas in Greece.

The National Science Foundation, the Sir Henry Wellcome Fellowship, and the Wellcome Trust supported the research.

Developers of Nilas, the Southern Ocean mapping platform: Anton Steketee (left), Sean Chua, and Dr. Petra Heil. Photo: Wendy Pyper

Australian Antarctic scientists deploy Nilas online for sea-ice zone data sets

A new interactive Antarctic map promises to assist voyage planning and enhance climate research in the sea-ice zone, by bringing together Southern Ocean data from the past four decades.

Developed by Australian Antarctic Division sea-ice scientists Dr. Petra Heil, Sean Chua, and Anton Steketee, ‘Nilas’ presents near-real-time and historical data on sea ice, chlorophyll (a proxy for phytoplankton production), and sea-surface temperature around Antarctica. 

Nilas is a thin, elastic crust of sea ice that easily bends on waves and swells, and under pressure.

Dr. Heil said the historical ice and ocean data and the ability to superimpose past or proposed ship trajectories or animal or instrument tracks over the data, made it a powerful planning, analysis, and research tool.

The tool includes sea-ice data dating back to 1980, chlorophyll data from 1998, and sea surface temperature from 1981.

“We have used this tool to plan a marine-science voyage, and are currently using it to pinpoint deployment locations for autonomous instruments to study ice-edge processes, such as wave-induced ice breakup,” Dr. Heil said.

A Nilas screen shot of the monthly sea-ice concentration around Antarctica for November 2022. White areas are 100% sea-ice concentration (coverage). (Photo: Nilas) 

This graphic shows the sea-ice concentration for July 2022 overlaid with the sea-ice freeboard (ice height above the ocean surface). It allows users to identify zones of thicker versus thinner sea ice and to assess these against the areal sea-ice coverage. This information assists, for example, selection of deployment sites for instruments or planning navigation in an unfamiliar ice-covered ocean. (Photo: Nilas)

“The tool allows us to look back at the sea-ice concentration and extent over the past few years to gain an understanding of the likely sea-ice conditions in the month of our voyage. We can then identify the most suitable location to take samples and deploy instruments.

“The tool also allows us to look at ice conditions in locations where we may have limited or no experience in navigating the ice area and make decisions about the best time of year to visit to achieve our objective.”

Mr. Chua said the ability to look at different ice and ocean variables at the same time could spark new research ideas, or enable scientists to explore links between different Earth system components.

“If you’re an atmospheric scientist sampling air from a vessel and you see a signal that does not make sense, you could look back at the ice or chlorophyll conditions at the time and location of your signal and check for any correlation,” he said.

“Phytoplankton may have released sulfate aerosols into the air, affecting the atmospheric properties, for example. So having all these data in one interface enables connections between scientific disciplines.”

To build the mapping platform the team used existing data sets generated from satellite observations. Data sets came from a range of sources, including the National Snow and Ice Data Center, Universität Bremen, the Met Office, and the Ocean Colour Climate Change Initiative.

Data sets include daily and monthly sea-ice concentration (amount of sea-ice cover), sea-ice freeboard (height above the ocean surface), chlorophyll concentration, and sea-surface temperature. They also include in-house derived parameters.

“In consultation with Antarctic scientists, we chose source data and products that would be the most useful for looking at the long-term climate record,” Mr. Steketee said.

“We standardized that data, added some functionality, and presented it on a platform that doesn’t need any technical expertise to use.

“This tool does not require any software or downloads to run, it can be configured to run without an internet connection, and it displays multiple variables at different time scales.”

Dr. Heil said the availability of different sea-ice variables within one application was also important in teasing out sea-ice conditions. For example, an area of interest could show 100% sea-ice concentration. However, including the sea-ice freeboard variable (height of sea ice above the water level), could identify areas of thicker or thinner ice within.

“An area of interest may be 100% covered in ice, and only the freeboard data will show whether it is freshly frozen over and very thin, or if it’s thick, multi-year ice that one would not want to venture into,” she said.
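
The kind of combination Dr. Heil describes can be sketched with generic gridded-data tools. The sketch below is a hypothetical illustration using xarray and matplotlib; the file names, variable names, and thresholds are placeholders, not Nilas's actual data products or interface:

```python
# Hypothetical sketch: combine sea-ice concentration with freeboard to flag
# areas that are fully ice-covered but thin. File and variable names are
# placeholders; assume each file holds a single monthly 2-D field.
import xarray as xr
import matplotlib.pyplot as plt

conc = xr.open_dataset("sea_ice_concentration_2022-07.nc")["concentration"]  # percent
freeboard = xr.open_dataset("sea_ice_freeboard_2022-07.nc")["freeboard"]     # metres

fully_covered = conc >= 99.0   # effectively 100% ice cover
thin = freeboard < 0.1         # low freeboard suggests young, thin ice

fig, ax = plt.subplots()
conc.plot(ax=ax, cmap="Blues_r")                     # background: concentration
(fully_covered & thin).where(fully_covered & thin).plot(
    ax=ax, cmap="autumn", add_colorbar=False)        # highlight thin, full cover
plt.show()
```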

The development team said the mapping platform could also be used by students to conceptualize climate variability or to provide climate modelers with an accessible, visual means of comparing model outputs with actual observations.

“There are many ways to look at sea ice in Antarctica, but our tool brings together a diverse set of observations to explore Earth system characteristics and processes that are relevant to the Australian Antarctic Division and the Australian Southern Ocean science community,” Mr. Chua said.

“While people can view many of these variables in isolation, the power of our mapping tool is in the combination of variables and the ability to overlay them within an accessible interface.”

The tool is available at nilas.org. The Australian Antarctic Data Centre houses the software and data.

UK scientists find radioactive isotopes reach Earth by surfing supernova blast waves

Scientists researching the origin of elements in our Galaxy have new insights into how they are transported to Earth, thanks to a new study led by authors at the University of Hertfordshire in the UK and the Konkoly Observatory, Research Centre for Astronomy and Earth Sciences (CSFK) in Hungary.

As well as understanding how our planet became enriched with these elements, the results could also help scientists uncover which exoplanets outside our solar system are most likely to contain life.

Many elements around us were produced either through stellar explosions called supernovae, or violent collisions of extremely dense objects called neutron stars. One of the questions puzzling scientists was how these heavy elements then reach us here on Earth – and in particular, how elements that originate in different places seem to have reached our planet at the same time.

Using sophisticated supercomputer modeling of the elements’ journey through space, scientists have now found that the heavy elements produced in collisions of neutron stars can “surf” on blast waves of other supernovae across our Galaxy and down to Earth.

The mystery was first raised in 2021 when radioactive isotopes discovered inside deep-sea rocks revealed a surprise for the scientists studying their origin. The isotopes did not originate inside our Solar System but in explosions of stars elsewhere in the Galaxy. Some of the detected isotopes especially raised eyebrows in the research community, because of their very different production sites.

Specifically, scientists found manganese-53 (associated with explosions of white dwarfs); iron-60 (produced in core-collapse supernovae); and plutonium-244 (which can usually only be produced by merging two extreme objects called neutron stars) sitting in layers of a similar depth in deep-sea rock samples.

To reach Earth, these isotopes would have rained down from the sky at some point during the last couple of million years. Since deep-sea sediments accumulate layer by layer over time to form rocks, researchers were very puzzled by the fact that these three isotopes, originating from different types of stellar explosions, were found in rock layers of similar depth. Finding them at similar depths means that they must have arrived on Earth together, even though their origin sites are so vastly different.
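
A back-of-the-envelope sketch of the depth-to-age bookkeeping: a layer's depth maps to an age through the sediment accumulation rate, and each isotope's surviving fraction then follows ordinary exponential decay. The accumulation rate and half-lives below are approximate textbook values, used purely for illustration:

```python
# Back-of-the-envelope: sediment depth -> layer age via an assumed accumulation
# rate, then surviving fraction of each isotope via exponential decay.
ACCUMULATION_M_PER_MYR = 3.0  # assumed deep-sea sedimentation rate (~3 mm/kyr)
HALF_LIFE_MYR = {"Mn-53": 3.7, "Fe-60": 2.6, "Pu-244": 80.0}  # approximate values

def layer_age_myr(depth_m: float) -> float:
    """Approximate age of a sediment layer at a given depth below the seafloor."""
    return depth_m / ACCUMULATION_M_PER_MYR

def surviving_fraction(isotope: str, age_myr: float) -> float:
    return 0.5 ** (age_myr / HALF_LIFE_MYR[isotope])

depth = 7.5  # metres below the seafloor (illustrative)
age = layer_age_myr(depth)
for iso in HALF_LIFE_MYR:
    print(f"{iso}: layer age ~{age:.1f} Myr, surviving fraction ~{surviving_fraction(iso, age):.2f}")
```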

To understand how it was possible for these isotopes to arrive on Earth together, a team led by Dr. Benjamin Wehmeyer at the University of Hertfordshire in the UK, and the CSFK in Hungary, used supercomputer models to simulate how the isotopes travel from their Galactic production sites throughout space.

The study found that the ejected content of different astrophysical sites – from colliding neutron stars to exploding white dwarfs – are pushed around in the Galaxy by the shock waves of the much more frequent core-collapse supernovae. These supernovae are explosions of the cores of massive stars, which are much more common than explosions triggered by the merging of two neutron stars or explosions of white dwarfs.

Dr. Wehmeyer and his team observed that after they are produced, the isotopes can then “surf” on the shockwaves of these supernovae. This means that isotopes produced in very different sites can end up traveling together on the edges of the shock waves of core-collapse supernova explosions. Some of this swept-up material ends up on Earth, which can explain why the isotopes were found together within similar layers of deep-sea rocks.

Lead author Dr. Wehmeyer explained, “Our colleagues have dug up rock samples from the ocean floor, dissolved them, put them in an accelerator, and examined the changes in their composition layer by layer. Using our computer models, we were able to interpret their data to find out how exactly atoms move throughout the Galaxy.

“It’s a very important step forward, as it not only shows us how isotopes propagate through the Galaxy but also how they become abundant on exoplanets – that is, planets beyond our solar system. This is extremely exciting since isotopic abundances are a strong factor in determining whether an exoplanet is able to hold liquid water – which is key to life. In the future, this might help to identify regions in our Galaxy where we could find habitable exoplanets”.

Dr. Chiaki Kobayashi, Professor of Astrophysics at the University of Hertfordshire and co-author of the study, adds: “I have been working on the origins of stable elements in the periodic table for many years, but I am thrilled to achieve results on radioactive isotopes in this paper. Their abundance can be measured by gamma-ray telescopes in space as well as by digging the rocks underwater on the Earth.

“By comparing these measurements with Benjamin's models, we can learn so much about how and where the composition of the solar system comes from”.

The full paper, ‘Inhomogeneous enrichment of radioactive nuclei in the Galaxy: Deposition of live 53Mn, 60Fe, 182Hf, and 244Pu into deep-sea archives. Surfing the wave?’, is available now to read in The Astrophysical Journal.

Tokyo Tech demos multi-policy-based annealer for solving real-world combinatorial optimization problems

A fully-connected annealer extendable to a multi-chip system and featuring a multi-policy mechanism has been designed by Tokyo Tech researchers to solve a broad class of combinatorial optimization (CO) problems relevant to real-world scenarios quickly and efficiently. Named Amorphica, the annealer has the ability to fine-tune parameters according to a specific target CO problem and has potential applications in logistics, finance, machine learning, and so on.

The modern world has grown accustomed to the efficient delivery of goods right to our doorsteps. But did you know that realizing such efficiency requires solving a mathematical problem: what is the best possible route between all the destinations? Known as the “traveling salesman problem,” this belongs to a class of mathematical problems known as “combinatorial optimization” (CO) problems.

As the number of destinations increases, the number of possible routes grows factorially (even faster than exponentially), and a brute-force method based on an exhaustive search for the best route becomes impractical. Instead, an approach called “annealing computation” is adopted to find the best route quickly without an exhaustive search. Yet, a numerical study done by Tokyo Tech researchers has shown that while there exist many annealing computation methods, there is no one method suitable for solving a broad class of CO problems. Therefore, there is a need for an annealing mechanism that features multiple annealing methods (a multi-policy mechanism) to target a variety of such problems.
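
For a sense of scale, the number of distinct closed routes through n destinations in the symmetric traveling salesman problem is (n-1)!/2, which explodes very quickly; a quick illustration:

```python
# Number of distinct closed routes visiting n destinations exactly once
# (symmetric traveling salesman problem): (n - 1)! / 2.
from math import factorial

for n in (5, 10, 15, 20):
    routes = factorial(n - 1) // 2
    print(f"{n} destinations -> {routes:,} possible routes")
# 20 destinations already allow ~6 x 10^16 routes, which is why
# exhaustive search quickly becomes impractical.
```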

Fortunately, the same team of researchers, led by Assistant Professor Kazushi Kawamura and Professor Masato Motomura from the Tokyo Institute of Technology (Tokyo Tech), has reported a new annealer that features such a multi-policy approach, or “metamorphic annealing.” Their findings are published in the proceedings of, and will be presented at, the 2023 International Solid-State Circuits Conference (ISSCC 2023).

“In the annealing computation, a CO problem is represented as an energy function in terms of (pseudo) spin vectors. We start from an initially randomized spin vector configuration and then update it stochastically to find the minimum energy states by reducing its (pseudo) temperature. This closely mirrors the annealing process of metals, where hot metals are cooled down in a controlled manner,” explains Dr. Kawamura. “Our annealer, named Amorphica, features multiple annealing methods, including a new one proposed by our team. This gives it the ability to adapt the annealing method to the specific CO problem at hand.”
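
As a minimal software illustration of the annealing computation Dr. Kawamura describes (plain simulated annealing on a small, fully connected Ising model with random couplings; this is a generic sketch, not Amorphica's hardware or its new policy):

```python
# Minimal simulated annealing on a fully connected Ising model:
# energy E(s) = -sum_{i<j} J_ij * s_i * s_j, spins s_i in {-1, +1}.
# Start from a random spin vector, propose single-spin flips, accept with the
# Metropolis rule, and lower the temperature - a software analogue of the
# "metal cooling" picture described above.
import random
import math

N = 32
random.seed(0)
J = [[random.gauss(0, 1) if i < j else 0.0 for j in range(N)] for i in range(N)]
spins = [random.choice((-1, 1)) for _ in range(N)]

def delta_energy(k: int) -> float:
    """Energy change from flipping spin k."""
    field = sum(J[min(k, j)][max(k, j)] * spins[j] for j in range(N) if j != k)
    return 2.0 * spins[k] * field

T = 5.0
while T > 0.01:
    for _ in range(N):
        k = random.randrange(N)
        dE = delta_energy(k)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[k] = -spins[k]
    T *= 0.95  # cooling schedule

energy = -sum(J[i][j] * spins[i] * spins[j] for i in range(N) for j in range(i + 1, N))
print("final energy:", round(energy, 3))
```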

The team designed Amorphica to address the limitations of previous annealers, namely that their applicability is limited to only a few CO problems. This is first because these annealers are local-connection ones, meaning they can only deal with spin models having local inter-spin coupling. Another reason is that they do not have flexibility in terms of annealing methods and parameter control. These issues were solved in Amorphica by employing a full-connection spin model and incorporating finely controllable annealing methods and parameters. In addition, the team introduced a new annealing policy called “ratio-controlled parallel annealing” to improve the convergence speed and stability of existing annealing methods.

Additionally, Amorphica can be extended to a multi-chip, full-connection system with reduced inter-chip data transfer. On testing Amorphica against a GPU, the researchers found that it was up to 58 times faster while consuming only about 1/500th of the power, making it roughly 30,000 times more energy efficient.

“With a full-connection annealer like Amorphica, we can now deal with arbitrary topologies and densities of inter-spin couplings, even when they are irregular. This, in turn, would allow us to solve real-world CO problems such as those related to logistics, finance, and machine learning,” concludes Prof. Motomura.