York astrophysicist Kannan takes step forward in supercomputer simulations of cosmology

York University and an international team of astrophysicists have made an ambitious attempt to simulate the formation of galaxies and the cosmic large-scale structure throughout staggeringly large swaths of space.

The first results of their new calculations for the “MillenniumTNG” project help to subject the standard cosmological model to precision tests and to unleash the full power of upcoming new cosmological observations, say the researchers, including York Assistant Professor Rahul Kannan of the Faculty of Science.

Figure 1: Projections of gas (top left), dark matter (top right), and stellar light (bottom center) for a slice through the largest hydrodynamical simulation of MillenniumTNG at the present epoch. The slice is about 35 million light-years thick. The projections illustrate the vast range of physical scales in the simulation, from the full box size, about 2,400 million light-years across, down to an individual spiral galaxy (final round inset) with a radius of ~150,000 light-years. The underlying calculation is presently the largest high-resolution hydrodynamical simulation of galaxy formation, containing more than 160 billion resolution elements. © MPA

Over the past decades, cosmologists have gotten used to the perplexing conjecture that the universe’s matter content is dominated by enigmatic dark matter and that an even stranger dark energy field, that acts as some kind of anti-gravity, accelerates the expansion of today’s cosmos. Ordinary baryonic matter makes up less than five percent of the cosmic mix, but this source material forms the basis for the stars and planets of galaxies like our own Milky Way.

This seemingly strange cosmological model is known under the name LCDM. It provides a stubbornly successful description of many observational data, ranging from the cosmic microwave background radiation – the rest-heat left behind by the Big Bang – to the “cosmic web,” where galaxies are arranged along an intricate network of dark matter filaments. However, the real physical nature of dark matter and dark energy is still not understood, prompting astrophysicists to search for cracks in the LCDM theory. Identifying tensions with observational data could lead to a better understanding of these fundamental puzzles about the universe. Sensitive tests are required that need both: powerful new observational data as well as more detailed predictions about what the LCDM model implies.

An international team of researchers led by the Max Planck Institute for Astrophysics (MPA) in Germany, Harvard University in the U.S., Durham University in the U.K., and the Donostia International Physics Center in Spain, along with York University, have now managed to take a decisive step forward on the latter challenge. Building upon their previous successes with the “Millennium” and “IllustrisTNG” projects, they developed a new suite of simulation models dubbed “MillenniumTNG,” which trace the physics of cosmic structure formation with considerably higher statistical accuracy than what was possible with previous calculations.

Large simulations including new physical details

The team utilized the advanced cosmological code GADGET-4, custom-built for this purpose, to compute the largest high-resolution dark matter simulations to date, covering a region nearly 10 billion light-years across. In addition, they employed the moving-mesh hydrodynamical code AREPO to follow the processes of galaxy formation directly, throughout volumes still so large that they can be considered representative of the universe as a whole. Comparing both types of simulations allows a precise assessment of the impact of baryonic processes related to supernova explosions and supermassive black holes on the total matter distribution. Accurate knowledge of this distribution is key for interpreting upcoming observations correctly, such as so-called weak gravitational lensing effects, which respond to matter irrespective of whether it is of dark or baryonic type.
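
To illustrate the kind of comparison described above, here is a minimal sketch, in Python, of measuring the baryonic imprint as the ratio of matter power spectra from a hydrodynamical run and its matched dark-matter-only run. It is not the MillenniumTNG analysis pipeline; the file names and column layout are hypothetical placeholders.

```python
# Minimal sketch: compare the matter power spectrum of a hydrodynamical
# simulation with that of a matched dark-matter-only (DMO) simulation.
# File names and formats are hypothetical, for illustration only.
import numpy as np

# Each file is assumed to hold two columns: wavenumber k [h/Mpc], power P(k)
k_dmo, P_dmo = np.loadtxt("power_spectrum_dmo.txt", unpack=True)
k_hyd, P_hyd = np.loadtxt("power_spectrum_hydro.txt", unpack=True)

assert np.allclose(k_dmo, k_hyd), "both runs must be binned on the same k grid"

# Ratios below 1 on small scales indicate suppression of clustering by
# baryonic feedback from supernovae and supermassive black holes.
ratio = P_hyd / P_dmo
for k, r in zip(k_dmo, ratio):
    print(f"k = {k:7.3f} h/Mpc   P_hydro/P_DMO = {r:.3f}")
```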

Furthermore, the team included massive neutrinos in their simulations, for the first time in simulations big enough to allow meaningful cosmological mock observations. Previous cosmological simulations had usually omitted them for simplicity, because they make up at most one to two percent of the dark matter mass, and because their nearly relativistic velocities largely prevent them from clumping together. Now, however, upcoming cosmological surveys (such as those of the recently launched Euclid satellite of the European Space Agency) will reach a precision allowing detection of the associated per-cent-level effects. This raises the tantalizing prospect of constraining the neutrino mass itself, a profound open question in particle physics, so the stakes are high.
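
A back-of-the-envelope calculation shows why the neutrino signal sits at the per-cent level. The sketch below uses the standard relic-neutrino relation Omega_nu h^2 ≈ Σm_ν / 93.14 eV and a Planck-like cold-dark-matter density; both numbers are assumptions of this illustration, not values taken from the article.

```python
# Rough estimate (not from the article) of the neutrino contribution to the
# dark matter budget, using the standard relic-neutrino relation
# Omega_nu * h^2 ~= sum(m_nu) / 93.14 eV.
sum_m_nu_eV = 0.1      # assumed total neutrino mass in eV (oscillation lower bound is ~0.06 eV)
omega_nu_h2 = sum_m_nu_eV / 93.14
omega_cdm_h2 = 0.120   # cold dark matter density, Planck-like value (assumed)

print(f"Omega_nu h^2            = {omega_nu_h2:.5f}")
print(f"fraction of dark matter = {omega_nu_h2 / omega_cdm_h2:.1%}")
# For total masses between ~0.1 and ~0.2 eV this gives roughly 1-2 percent,
# the per-cent-level effect referred to above.
```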

For their groundbreaking MillenniumTNG simulations, the researchers made efficient use of two extremely powerful supercomputers, the SuperMUC-NG machine at the Leibniz Supercomputing Center in Garching, and the Cosma8 machine at Durham University. More than 120,000 computer cores toiled away for nearly two months at SuperMUC-NG, using computing time awarded by the German Gauss Centre for Supercomputing, to produce the most comprehensive hydrodynamical simulation model to date. MillenniumTNG tracks the formation of about 100 million galaxies in a region of the universe around 2,400 million light-years across (see Figure 1). This calculation is about 15 times bigger than the previous best in this category, the TNG300 model of the IllustrisTNG project.
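
For a sense of where the “about 15 times bigger” figure comes from, the sketch below compares the simulated volumes. The TNG300 box size of roughly 300 Mpc (about one billion light-years on a side) is taken from the IllustrisTNG project and is an assumption of this illustration rather than a number quoted here.

```python
# Rough volume comparison between the MillenniumTNG hydrodynamical box and
# the TNG300 box. The TNG300 side length is approximate and assumed here.
mtng_box_mly = 2400     # MillenniumTNG box side, million light-years (from the article)
tng300_box_mly = 1000   # TNG300 box side, million light-years (approximate, assumed)

volume_ratio = (mtng_box_mly / tng300_box_mly) ** 3
print(f"volume ratio ~ {volume_ratio:.0f}x")  # ~14x, consistent with "about 15 times bigger"
```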

Using Cosma8, the team computed an even bigger volume of the universe, filled with more than a trillion dark matter particles and more than 10 billion particles for tracking massive neutrinos. Even though this simulation did not follow the baryonic matter directly, its galaxy content can be accurately predicted in MillenniumTNG with a semi-analytic model that is calibrated against the full physical calculation of the project. This procedure leads to a detailed distribution of galaxies and matter in a volume that, for the first time, is large enough to represent the universe as a whole, allowing upcoming observational surveys to be compared with theory on a sound statistical basis.

Mount Sinai researchers build AI model to predict which drugs may cause birth defects

Data harnessed to identify previously unknown associations between genes, congenital disabilities, and drugs

Data scientists at the Icahn School of Medicine at Mount Sinai in New York and colleagues have created an artificial intelligence model that may more accurately predict which existing medicines, not currently classified as harmful, may lead to congenital disabilities.

The model, or “knowledge graph,” also has the potential to predict the involvement of pre-clinical compounds that may harm the developing fetus. The study is the first known of its kind to use knowledge graphs to integrate various data types to investigate the causes of congenital disabilities.

Birth defects are abnormalities that affect about 1 in 33 births in the United States. They can be functional or structural and are believed to result from various factors, including genetics. However, the causes of most of these disabilities remain unknown. Certain substances found in medicines, cosmetics, food, and environmental pollutants can potentially lead to birth defects if the fetus is exposed to them during pregnancy.

“We wanted to improve our understanding of reproductive health and fetal development, and importantly, warn about the potential of new drugs to cause birth defects before these drugs are widely marketed and distributed,” says Avi Ma’ayan, Ph.D., Professor, Pharmacological Sciences, and Director of the Mount Sinai Center for Bioinformatics at Icahn Mount Sinai, and senior author of the paper. “Although identifying the underlying causes is a complicated task, we offer hope that through complex data analysis like this that integrates evidence from multiple sources, we will be able, in some cases, to better predict, regulate, and protect against the significant harm that congenital disabilities could cause.”

The researchers gathered knowledge across several datasets on birth-defect associations noted in published work, including those produced by NIH Common Fund programs, to demonstrate how integrating data from these resources can lead to synergistic discoveries. In particular, the combined data covers the known genetics of reproductive health, the classification of medicines based on their risk during pregnancy, and how drugs and pre-clinical compounds affect biological mechanisms inside human cells.

Specifically, the data included studies on genetic associations, drug- and preclinical-compound-induced gene expression changes in cell lines, known drug targets, genetic burden scores for human genes, and placental crossing scores for small molecule drugs.

Importantly, using the knowledge graph, dubbed ReproTox-KG, together with semi-supervised learning (SSL), the research team prioritized 30,000 preclinical small-molecule drugs for their potential to cross the placenta and induce birth defects. SSL is a branch of machine learning that uses a small amount of labeled data to guide predictions for a much larger body of unlabeled data. In addition, by analyzing the topology of the ReproTox-KG, the team identified more than 500 birth-defect/gene/drug cliques that could explain the molecular mechanisms underlying drug-induced birth defects. In graph theory terms, cliques are subsets of a graph in which every node is directly connected to every other node in the clique, as illustrated in the sketch below.
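
The following toy example shows the clique idea on an invented miniature graph, using the networkx library; it is not the actual ReproTox-KG data or the authors' code, and the node names are hypothetical.

```python
# Toy illustration of birth-defect/gene/drug cliques in a small graph.
# All nodes and edges here are invented for demonstration purposes.
import networkx as nx

G = nx.Graph()
# Hypothetical associations: drug-gene target, gene-defect link, drug-defect link
G.add_edge("drug:compoundX", "gene:GENE1")
G.add_edge("gene:GENE1", "defect:cleft_palate")
G.add_edge("drug:compoundX", "defect:cleft_palate")
G.add_edge("drug:compoundY", "gene:GENE1")  # no drug-defect edge, so no clique

# Report maximal cliques that contain one drug, one gene, and one birth defect:
# every pair of nodes in such a clique is directly connected, which is what
# makes it a candidate molecular mechanism.
for clique in nx.find_cliques(G):
    kinds = {node.split(":")[0] for node in clique}
    if kinds == {"drug", "gene", "defect"}:
        print("candidate mechanism:", clique)
```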

The investigators caution that the study's findings are preliminary and that further experiments are needed for validation.

Next, the investigators plan to use a similar graph-based approach for other projects focusing on the relationship between genes, drugs, and diseases. They also aim to use the processed dataset as training materials for courses and workshops on bioinformatics analysis. In addition, they plan to extend the study to consider more complex data, such as gene expression from specific tissues and cell types collected at multiple stages of development.

“We hope that our collaborative work will lead to a new global framework to assess potential toxicity for new drugs and explain the biological mechanisms by which some drugs, known to cause birth defects, may operate. It’s possible that at some point in the future, regulatory agencies such as the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency may use this approach to evaluate the risk of new drugs or other chemical applications,” says Dr. Ma’ayan.

The project was supported by National Institutes of Health grants OT2OD030160, OT2OD030546, OT2OD032619, and OT2OD030162. 

Scanning tunnelling microscope image of two of the superconducting structures created, which consist of individual chromium atoms.

University of Zurich prof Neupert designs superconductors one atom at a time

The future of electronics will be based on novel kinds of materials. Sometimes, however, the naturally occurring topology of atoms makes it difficult for new physical effects to be created. To tackle this problem, researchers at the University of Zurich have now successfully designed superconductors one atom at a time, creating new states of matter.

What will the computer of the future look like? How will it work? The search for answers to these questions is a major driver of basic physical research. There are several possible scenarios, ranging from the further development of classical electronics to neuromorphic supercomputing and quantum supercomputers. The common element in all these approaches is that they are based on novel physical effects, some of which have so far only been predicted in theory. Researchers go to great lengths and use state-of-the-art equipment in their quest for new quantum materials that will enable them to create such effects. But what if there are no suitable materials that occur naturally?

A novel approach to superconductivity

In a recent study published in Nature Physics, the research group of UZH Professor Titus Neupert, working closely together with physicists at the Max Planck Institute of Microstructure Physics in Halle (Germany), presented a possible solution. The researchers made the required materials themselves – one atom at a time. They are focusing on novel types of superconductors, which are particularly interesting because they offer zero electrical resistance at low temperatures. Sometimes referred to as “ideal diamagnets”, superconductors are used in many quantum computers due to their extraordinary interactions with magnetic fields. Theoretical physicists have spent years researching and predicting various superconducting states. “However, only a small number have so far been conclusively demonstrated in materials,” says Professor Neupert.

Two new types of superconductivity

In their exciting collaboration, the UZH researchers predicted in theory how the atoms should be arranged to create a new superconductive phase, and the team in Germany then conducted experiments to implement the relevant topology. Using a scanning tunneling microscope, they moved and deposited the atoms in the right place with atomic precision. The same method was also used to measure the system’s magnetic and superconductive properties. By depositing chromium atoms on the surface of superconducting niobium, the researchers were able to create two new types of superconductivity. Similar methods had previously been used to manipulate metal atoms and molecules, but until now it had not been possible to make two-dimensional superconductors with this approach.

The results not only confirm the physicists’ theoretical predictions, but also give them reason to speculate about what other new states of matter might be created in this way, and how they could be used in the quantum supercomputers of the future.

The Hurricane Analysis and Forecast System (HAFS) “moving nest” model. Global map showing land mass in green and water in black, clouds in white, and tropical storms outlined in green boxes representing the moving nest model. (Image credit: NOAA)

NOAA launches new hurricane forecast model as Atlantic season starts strong

NOAA’s National Hurricane Center — a division of the National Weather Service — has a new model to help produce hurricane forecasts this season. The Hurricane Analysis and Forecast System (HAFS) was put into operation on June 27 and will run alongside existing models for the 2023 season before replacing them as NOAA’s premier hurricane forecasting model. 

"The quick deployment of HAFS marks a milestone in NOAA's commitment to advancing our hurricane forecasting capabilities, and ensuring continued improvement of services to the American public," said NOAA Administrator Rick Spinrad, Ph.D. "Development, testing, and evaluations were jointly carried out between scientists at NOAA Research and the National Weather Service, marking a seamless transition from development to operations.”

Running the experimental version of HAFS from 2019 to 2022 showed a 10-15% improvement in track predictions compared to NOAA’s existing hurricane models. HAFS is expected to continue increasing forecast accuracy, therefore reducing storm impacts on lives and property. 

HAFS is as good as NOAA’s existing hurricane models when forecasting storm intensity — but is better at predicting rapid intensification. HAFS was the first model last year to accurately predict that Hurricane Ian would undergo secondary rapid intensification as the storm moved off the coast of Cuba and barreled toward southwest Florida. 

Over the next four years, HAFS will undergo several major upgrades, ultimately leading to even greater accuracy of forecasts, warnings, and life-saving information. An objective of the NOAA Hurricane Forecast Improvement Program (HFIP) is, by 2027, to reduce all model forecast errors by nearly half compared to the errors seen in 2017.

HAFS provides more accurate, higher-resolution forecast information over both land and ocean and comprises five major components: a high-resolution moving nest; high-resolution physics; multi-scale data assimilation that allows for vortex initialization and vortex cycling; 3-D ocean coupling; and improved assimilation techniques that allow for the assimilation of novel observations. The foundational component is the moving nest, which allows the model to zoom in with a resolution of 1.2 miles on areas of a hurricane that are key to improving wind intensity and rain forecasts.

“With the introduction of the HAFS forecast model into our suite of tropical forecasting tools, our forecasters are better equipped than ever to safeguard lives and property with enhanced accuracy and timely warnings,” said Ken Graham, director of NOAA’s National Weather Service. “HAFS is the result of strong collaborative efforts throughout the science community and marks significant progress in hurricane prediction.”

HAFS, the first regional coupled model to go into operations under the Unified Forecast System (UFS), was developed through community-based collaboration and the streamlining of the operational transition process. Because HAFS uses the FV3 — the same dynamic core as the U.S. Global Forecast System — it will have a unified starting point when initiated for hurricane prediction and will also integrate with ocean and wave models as underlying inputs. The current standalone regional hurricane models, HWRF and HMON, each have their own starting point for modeling the atmosphere. Leveraging the FV3 in HAFS reduces overlapping efforts, making the NOAA modeling portfolio more consistent and efficient.

HAFS is also the first new major forecast model implementation using NOAA’s updated weather and climate supercomputers, which were installed last summer. HAFS would not be possible without the speed and power of these new supercomputers, called the Weather and Climate Operational Supercomputing System 2 (WCOSS2).

NOAA developed HAFS as a requirement of the Weather Research and Forecasting Innovation Act of 2017, which directed the agency to conduct ongoing research and development to improve hurricane prediction and warning under the Hurricane Forecast Improvement Program. Specifically, the Act called for NOAA to improve prediction capability for rapid intensification and storm track. HAFS development was also enabled by fiscal year 2018 and 2019 hurricane and disaster supplemental funding, and continued to accelerate with support from the 2022 Disaster Relief Supplemental Appropriations Act.

HAFS was jointly created by NOAA's National Weather Service Environmental Modeling Center, the Atlantic Oceanographic & Meteorological Laboratory, and NOAA's Cooperative Institute for Marine & Atmospheric Studies.

MIT economist Martin Beraja is co-author of a new research paper showing that China’s increased investments in AI-driven facial-recognition technology both help the regime repress dissent and may drive the technology forward, a mutually reinforcing condition the paper’s authors call an “AI-tocracy.” Credit: Image: Jose-Luis Olivares/MIT, with figures from iStock

MIT econ prof Beraja shows how an 'AI-tocracy' emerges in China

Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles’ heel of authoritarian regimes. Such governments can fail to keep up with technological changes that help their opponents; they may also, by stifling rights, inhibit innovative economic activity and weaken the long-term condition of the country. 

But a new study co-led by an MIT professor suggests something quite different. In China, the research finds, the government has increasingly deployed AI-driven facial-recognition technology to suppress dissent; has been successful at limiting protest; and in the process, has spurred the development of better AI-based facial-recognition tools and other forms of software.

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI, subsequently, by local government units such as municipal police departments,” says MIT economist Martin Beraja, who is co-author of a new paper detailing the findings. 

What follows, as the paper notes, is that “AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.”

The scholars call this state of affairs an “AI-tocracy,” describing the connected cycle in which increased deployment of AI-driven technology quells dissent while also boosting the country’s innovation capacity.

The open-access paper, also called “AI-tocracy,” appears in the August issue of the Quarterly Journal of Economics. An abstract of the uncorrected proof was first posted online in March. The co-authors are Beraja, who is the Pentti Kouri Career Development Associate Professor of Economics at MIT; Andrew Kao, a doctoral candidate in economics at Harvard University; David Yang, a professor of economics at Harvard; and Noam Yuchtman, a professor of management at the London School of Economics. 

To conduct the study, the scholars drew on multiple kinds of evidence spanning much of the last decade. To catalog instances of political unrest in China, they used data from the Global Database of Events, Language, and Tone (GDELT) Project, which records news feeds globally. The team turned up 9,267 incidents of unrest between 2014 and 2020. 

The researchers then examined records of almost 3 million procurement contracts issued by the Chinese government between 2013 and 2019, from a database maintained by China’s Ministry of Finance. They found that local governments’ procurement of facial-recognition AI services and complementary public security tools — high-resolution video cameras — jumped significantly in the quarter following an episode of public unrest in that area.
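
A stripped-down sketch of that kind of panel analysis is shown below with made-up numbers: it flags prefecture-quarters that follow a local unrest incident and compares procurement of facial-recognition contracts with and without such an event. It is purely illustrative and is not the authors' code or data.

```python
# Illustrative sketch: does procurement of facial-recognition AI rise in the
# quarter after a local unrest incident? All data below are invented.
import pandas as pd

panel = pd.DataFrame({
    "prefecture":           ["A", "A", "A", "B", "B", "B"],
    "quarter":              ["2015Q1", "2015Q2", "2015Q3", "2015Q1", "2015Q2", "2015Q3"],
    "unrest_events":        [0, 1, 0, 0, 0, 0],
    "facial_rec_contracts": [2, 2, 7, 1, 1, 1],
})

# Flag quarters that immediately follow an unrest event in the same prefecture
panel["post_unrest"] = (
    panel.groupby("prefecture")["unrest_events"].shift(1).fillna(0) > 0
)

# Compare average contract counts in post-unrest quarters vs. all other quarters
print(panel.groupby("post_unrest")["facial_rec_contracts"].mean())
```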

Given that Chinese government officials were responding to public dissent activities by ramping up facial-recognition technology, the researchers then examined a follow-up question: Did this approach work to suppress dissent?

The scholars believe that it did, although as they note in the paper, they “cannot directly estimate the effect” of the technology on political unrest. But as one way of getting at that question, they studied the relationship between weather and political unrest in different areas of China. Certain weather conditions are conducive to political unrest, but in prefectures that had already invested heavily in facial-recognition technology, those same conditions were less likely to produce unrest than in prefectures that had not made the same investments.

In so doing, the researchers also accounted for issues such as whether greater relative wealth in some areas might have produced larger investments in AI-driven technologies regardless of protest patterns. However, the scholars still reached the same conclusion: facial-recognition technology was deployed in response to past protests, and it then reduced further protest levels.

“It suggests that the technology is effective in chilling unrest,” Beraja says. 

Finally, the research team studied the effects of increased AI demand on China’s technology sector and found that the government’s greater use of facial-recognition tools appears to be driving the country’s tech sector forward. For instance, firms granted procurement contracts for facial-recognition technologies produce about 49 percent more software products in the two years after gaining the government contract than they had beforehand.

“We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Such data — from China’s Ministry of Industry and Information Technology — also indicates that AI-driven tools are not necessarily “crowding out” other kinds of high-tech innovation.

Adding it all up, the case of China indicates how autocratic governments can potentially reach a near-equilibrium state in which their political power is enhanced, rather than upended, when they harness technological advances.

“In this age of AI, when the technologies not only generate growth but are also technologies of repression, they can be very useful” to authoritarian regimes, Beraja says. 

The finding also bears on larger questions about forms of government and economic growth. A significant body of scholarly research shows that rights-granting democratic institutions do generate greater economic growth over time, in part by creating better conditions for technological innovation. Beraja notes that the current study does not contradict those earlier findings, but in examining the effects of AI in use, it does identify one avenue through which authoritarian governments can generate more growth than they otherwise would have. 

“This may lead to cases where more autocratic institutions develop side by side with growth,” Beraja adds. 

Other experts in the societal applications of AI say the paper makes a valuable contribution to the field. 

“This is an excellent and important paper that improves our understanding of the interaction between technology, economic success, and political power,” says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management at the University of Toronto. “The paper documents a positive feedback loop between the use of AI facial-recognition technology to monitor and suppress local unrest in China and the development and training of AI models. This paper is pioneering research in AI and political economy. As AI diffuses, I expect this research area to grow in importance.”

For their part, the scholars are continuing to work on related aspects of this issue. One forthcoming paper of theirs examines the extent to which China is exporting advanced facial recognition technologies around the world — highlighting a mechanism through which government repression could grow globally.