Mystery beneath the ice: Supercomputers illuminate the Antarctic gravity anomaly

For years, geophysicists have been baffled by an unusual gravitational “hole” beneath Antarctica’s massive ice sheet. Recent advances in supercomputer modeling are now revealing what lies beneath the frozen landscape and how deep-Earth processes may be influencing the continent’s surface. Research led by the University of Florida demonstrates how sophisticated computational tools are bringing hidden aspects of our planet’s interior to light.
 
The anomaly, a region of unexpectedly low gravitational pull roughly the size of a small country, was first identified using satellite gravity data. Usually, gravity readings over ice correspond to the total mass of rock and ice below. In this region of Antarctica, however, the pull was weaker than anticipated, hinting that something unusual lies within the deep crust or upper mantle. The anomaly sits inland from the Ross Ice Shelf, one of Antarctica’s largest floating ice extensions.
 
To investigate the anomaly, a team of geoscientists, led by the U.S. Antarctic Program and collaborating with researchers worldwide, turned to supercomputer-based geophysical models. Their goal was to test whether variations in rock composition, temperature, and structure could reproduce the gravity signal seen at the surface. These models combine a range of inputs (seismic imaging from prior surveys, satellite gravity measurements, and the physics governing how rocks deform under pressure) into a comprehensive simulation of Earth’s interior beneath Antarctica.
 
Running these simulations is a formidable computational challenge. Researchers must solve the complex equations of continuum mechanics and gravity simultaneously, accounting for thousands of variables that span many orders of magnitude in scale. The only tools capable of handling such a workload are high-performance computing (HPC) systems with extensive parallel processing capabilities. Without supercomputers, exploring thousands of potential configurations of rock density and structure beneath Antarctica would be all but impossible.
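The core of such a model is a forward problem: given a guessed distribution of density at depth, predict the gravity signal at the surface. A minimal Python sketch of that idea, using the textbook formula for a single buried spherical density anomaly (all numbers are hypothetical illustrations, not values from the study):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly(delta_rho, radius, depth, x):
    """Vertical gravity anomaly (m/s^2) at horizontal offset x (m) from a
    buried sphere of density contrast delta_rho (kg/m^3) and radius (m),
    centered at the given depth (m). Classic forward-model formula."""
    mass = delta_rho * (4.0 / 3.0) * math.pi * radius**3
    r = math.hypot(x, depth)
    return G * mass * depth / r**3  # vertical component of G*M/r^2

# Hypothetical anomaly: rock 50 kg/m^3 lighter than its surroundings,
# 30 km radius, centered 80 km below the surface.
dg = sphere_anomaly(-50.0, 30e3, 80e3, 0.0)
print(f"{dg * 1e5:.2f} mGal")  # geophysicists quote anomalies in milligals
```

Real models replace the single sphere with three-dimensional density grids constrained by seismic data, which is what pushes the computation onto HPC systems.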
 
The results suggest that the gravity hole may be explained by a combination of lighter-than-expected rock compositions and localized thermal anomalies in the upper mantle. In particular, regions where rocks are warmer, and thus less dense, produce a measurable reduction in gravitational acceleration. These warmer zones may be remnants of ancient mantle processes and tectonic activity that predate Antarctica’s current ice cover.
 
Lead author Dr. Matthew Schmidt describes the finding as “a fascinating clue to Antarctica’s deep past.” Rather than pointing to a void or missing mass beneath the ice, the gravity anomaly appears to reflect variations in the physical properties of deep rocks, information that can only be teased out through computational modeling anchored in robust physics and constrained by observational data.
 
For computational geoscientists, this work exemplifies the transformative role of supercomputing in Earth science. Supercomputers allow researchers to experiment with a wide range of theoretical models, fine-tuning parameters until the simulations align with real-world measurements. In the case of the Antarctic gravity hole, this meant iterating through many plausible combinations of rock types, temperature distributions, and structural configurations, an effort that would be impractical on conventional computing hardware.
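The iterate-until-it-matches loop described above can be illustrated with a brute-force parameter sweep: generate a synthetic "observed" gravity profile, then search candidate density contrasts and depths for the combination that best reproduces it. A toy sketch (hypothetical setup; real searches span far more parameters and far finer grids):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward(delta_rho, depth, x):
    # Toy forward model: vertical anomaly of a fixed-size buried sphere.
    radius = 30e3
    mass = delta_rho * (4.0 / 3.0) * math.pi * radius**3
    r = math.hypot(x, depth)
    return G * mass * depth / r**3

# Synthetic "observed" profile along a 200 km transect (hypothetical values).
xs = [i * 1e4 for i in range(-10, 11)]
observed = [forward(-50.0, 80e3, x) for x in xs]

# Brute-force sweep over candidate density contrasts (kg/m^3) and
# depths (km), keeping the combination with the smallest misfit.
best = min(
    ((drho, d) for drho in range(-100, 0, 10) for d in range(40, 121, 10)),
    key=lambda p: sum((forward(p[0], p[1] * 1e3, x) - o) ** 2
                      for x, o in zip(xs, observed)),
)
print(best)  # the sweep recovers the parameters used to generate the data
```

On a supercomputer the same pattern runs over millions of candidate models, each one a full 3-D simulation rather than a one-line formula.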
 
The implications extend beyond one anomaly. Understanding gravitational variations beneath Antarctica has significance for models of ice sheet stability and long-term sea level change, because subtle differences in the Earth’s internal structure can influence how ice flows and how the land beneath it responds. As climate change accelerates ice loss in polar regions, accurate models of both ice dynamics and the solid Earth are essential for forecasting future impacts.
 
Supercomputing has become the bridge between observation and understanding in such contexts, enabling scientists to visualize what cannot be seen and test hypotheses that would otherwise remain speculative. By integrating diverse datasets and the laws of physics into unified simulations, researchers are now able to explore what lies beneath remote and inaccessible places like Antarctica.
 
In a broader sense, the Antarctic gravity hole reminds us that the Earth still holds deep mysteries, and that supercomputers are among the most powerful instruments available for unlocking them. As computational capabilities continue to grow, so too will our ability to decode the planet’s hidden signals and better understand the forces that shape the world beneath our feet.
Flooding impacts in Worcester, VT (2024). Photo by AOT.

NextGen Water Resources Modeling Framework: Integrating hydrologic science, data systems

As torrential storms drive rivers to overflow, the importance of precise flood forecasting has never been greater. With climate extremes becoming more severe, scientists increasingly rely on advanced computing, and especially supercomputing, to expand the frontiers of water prediction. A recent partnership between the National Weather Service’s Office of Water Prediction (OWP) and the University of Vermont (UVM) has resulted in a potentially game-changing advancement in forecasting technology, grounded in supercomputing and next-generation modeling.
 
At the heart of this effort is the newly published NextGen Water Resources Modeling Framework. This framework isn’t just another hydrologic model; it is a flexible, model-agnostic platform designed for the modern era of computing. It enables researchers to run diverse hydrologic and hydraulic models under a common architecture, whether on a laptop, in the cloud, or on a high-performance supercomputer.
 
What makes NextGen intriguing for the supercomputing community is its ambition to fuse massive geospatial datasets, physical process models, and performance-oriented compute resources. Traditional flood forecasting systems have often been constrained by rigid, single-model architectures that struggle to scale across regions or use the full capacity of parallel computing systems. The NextGen framework sidesteps these limits by allowing heterogeneous models, written in languages such as C, Fortran, and Python, to execute concurrently in a unified environment, leveraging standards like the Basic Model Interface for data exchange and configuration.
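The Basic Model Interface mentioned above is a small set of control and data functions that each model exposes, so a framework can drive any component without knowing its internals. A toy Python component in that style (a hypothetical linear-reservoir runoff model; the real BMI specification defines more methods and array-based signatures):

```python
class LinearReservoirBMI:
    """Toy rainfall-runoff model exposing BMI-style methods
    (initialize/update/get_value), the contract a framework like
    NextGen relies on. A deliberate simplification, not production code."""

    def initialize(self, config=None):
        self.storage = 0.0   # water stored in the catchment, mm
        self.k = 0.1         # outflow coefficient per step (assumed value)
        self.time = 0.0
        self.runoff = 0.0

    def set_value(self, name, value):
        if name == "precipitation":   # mm per time step
            self.storage += value

    def update(self):
        # Linear reservoir: outflow proportional to current storage.
        self.runoff = self.k * self.storage
        self.storage -= self.runoff
        self.time += 1.0

    def get_value(self, name):
        return {"runoff": self.runoff, "storage": self.storage}[name]

    def get_current_time(self):
        return self.time

    def finalize(self):
        pass

# A framework driver needs only the interface, not the model internals:
model = LinearReservoirBMI()
model.initialize()
for rain in [10.0, 0.0, 0.0]:
    model.set_value("precipitation", rain)
    model.update()
print(round(model.get_value("runoff"), 3))
```

Because every component answers the same calls, the driver loop above could just as easily be stepping a Fortran snow model or a C routing engine, which is the interoperability NextGen is built around.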
 
Supercomputers excel at distributing complex calculations across millions of computing cores. Flood forecasting requires simulating sophisticated, multi-dimensional physical processes (rainfall infiltration, snowmelt runoff, and river routing) across vast spatial domains. By opening doors to distributed execution and modular coupling of models, NextGen lays the groundwork for future implementations that could harness supercomputers to deliver real-time, high-resolution forecasts at continental scales.
 
In their institutional announcement, UVM researchers highlighted how the framework addresses long-standing challenges in hydrologic prediction, particularly the need to simulate water’s movement through a landscape that varies wildly in terrain, soil, vegetation, and climate. By standardizing how diverse models and data inputs are configured and how their outputs are exchanged, NextGen enables researchers and forecasters to run experiments that were once computationally prohibitive.
 
For computational scientists, the framework’s support for high-performance environments isn’t just about raw speed; it’s about collaboration across disciplines. The ability to prototype a new flood-inundation algorithm in Python one day, and then scale it to run across thousands of nodes on a supercomputer the next, opens doors for innovative research pipelines that blur the line between development and deployment.
 
Looking ahead, the NextGen framework promises to influence not just national operational models, such as the forthcoming version of the National Water Model, but also fundamental research in hydrology and Earth system simulation. When paired with advances in machine learning, GPU-accelerated computing, and real-time data assimilation, this modular foundation could spur a new generation of forecasting applications that bring supercomputing power directly to the urgent task of flood prediction.
 
Every hour of reliable flood warning can mean lives saved and billions of dollars in damages avoided. The integration of supercomputing and hydrologic science is no longer a technological novelty; it is an urgent need. As NextGen takes the lead, the flood forecasting field stands poised for a paradigm shift, fueled by high-performance computing once exclusive to fields like physics and cosmology.

How big can a planet be? Supercomputing unlocks the secrets of giant worlds

Planetary science is undergoing a remarkable transformation as astronomers revisit a core cosmic mystery: What are the true limits on how large a planet can grow? By combining the latest astronomical observations with the extraordinary capabilities of supercomputers, researchers are discovering that the boundary between massive planets and failed stars is less distinct than previously believed. This work highlights how crucial computational power has become in unraveling the complexities of the universe.
 
Driving this scientific revolution is the HR 8799 star system, situated roughly 133 light-years from Earth in the constellation Pegasus. Here, four gigantic gas planets, each five to ten times the mass of Jupiter, are challenging traditional models of planet formation that are based on our own solar system.

From JWST’s Spectra to Computational Insights

The groundbreaking observations came from the James Webb Space Telescope (JWST), humanity's most powerful space observatory. JWST’s advanced spectrographs captured faint light from these distant giants, around 10,000 times fainter than their star, and revealed the spectral fingerprints of molecules previously hidden from view. Among these was hydrogen sulfide (H₂S), a refractory molecule that is a tell-tale marker of solid materials in the early planetary disk.
 
Identifying sulfur and other heavy elements in these far-off worlds was only possible thanks to supercomputing-driven atmospheric models and spectral extraction techniques. Researchers had to push simulations far beyond traditional grids, iteratively refining the physics and chemistry encoded in their models to match the rich JWST data. These computational efforts let scientists separate the faint planetary signals from the overwhelming glare of the host star, and decode what the spectral lines say about formation paths.
 
What they found is remarkable: the HR 8799 giants appear to have formed via core accretion, a process where planets grow gradually by accumulating solids into a dense core before capturing surrounding gas. This is the same fundamental mechanism thought to have shaped Jupiter and Saturn, but on a much grander scale and at far greater distances from their star.

Uniform Enrichment: A Shared Planetary Heritage

In the companion work, scientists reported that these massive exoplanets are uniformly enriched in heavy elements relative to their star, across both volatile species (like carbon and oxygen) and refractory species (such as sulfur). This uniformity strongly points to efficient solid accretion during planet formation and suggests that the ingredients of planet-building are similar across a wide range of environments, even for giants many times Jupiter’s mass.
 
Crucially, interpreting this complex chemistry wouldn’t be possible without high-performance computing. Supercomputers are used to:
  • Simulate protoplanetary disk conditions, exploring how cores form and accrete material over millions of years.
  • Generate atmospheric models that predict how molecules absorb and emit light under varying temperatures and pressures.
  • Fit these models to real spectral data from JWST, using optimization techniques only feasible at scale.
These tasks require petaflops of processing power and terabytes of memory, and they leverage algorithms developed by astrophysicists and computational scientists alike.
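The model-fitting step in the list above can be sketched as a grid search that compares synthetic spectra against data, here using a bare blackbody in place of a full atmospheric model (purely illustrative; real retrievals solve radiative transfer and explore millions of parameter combinations):

```python
import math

# Planck spectral radiance at wavelength lam (m) for temperature T (K);
# the physical core of this toy "atmospheric model".
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1)

wavelengths = [w * 1e-6 for w in (1.0, 1.5, 2.0, 3.0, 4.0, 5.0)]  # microns

# Synthetic "observed" spectrum from a 1100 K photosphere (hypothetical).
observed = [planck(lam, 1100.0) for lam in wavelengths]

# Grid search over temperature, minimizing squared misfit -- the same
# fit-model-to-spectrum loop that runs at vastly larger scale on HPC systems.
best_T = min(
    range(600, 2001, 50),
    key=lambda T: sum((planck(lam, T) - o) ** 2
                      for lam, o in zip(wavelengths, observed)),
)
print(best_T)
```

Swapping the blackbody for a model with clouds, chemistry, and thousands of molecular lines, and the 1-D temperature grid for a high-dimensional parameter space, turns this loop into a supercomputing problem.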

Beyond Our Solar System and Beyond Traditional Limits

Why does this matter for supercomputing? Answering today’s big questions about planets (whether they are Earth-like, Neptune-like, or giants towering over Jupiter) depends on the ability to compute the physics of formation and evolution under conditions we cannot recreate in the lab.
 
Where once planetary formation theories were built around our solar system’s modest giants, the HR 8799 results push us to ask even bolder questions: Can planets reach 15, 20, or even 30 times Jupiter's mass while still forming like planets, rather than stars? And, if so, what does that mean for how we define planets versus brown dwarfs?
 
With supercomputing as our engine, astronomers are not just cataloging distant worlds; they are rewriting the science of how those worlds came to be. As more data from JWST and future observatories pour in, this fusion of observation, theory, and computation promises to transform our understanding of planetary systems across the galaxy.
 
In that sense, the answer to “how big can a planet be?” isn’t just about mass; it’s about the growing scale of human curiosity and the computational tools we build to answer it.

Cracking the code of spider silk: Supercomputers reveal nature's molecular secrets

Spider silk is renowned as one of nature's most extraordinary materials, being both lightweight and exceptionally strong. It surpasses Kevlar in toughness and is stronger than steel when compared by weight. For years, scientists could only speculate about how this protein-based fiber achieved such a unique blend of strength and flexibility. Recently, however, researchers from King's College London and San Diego State University have revealed the molecular secret behind spider silk's remarkable properties. By combining advanced computational modeling with laboratory experiments, they have shown how supercomputers are transforming our understanding of materials science.
 
The study identifies how specific chemical interactions between the amino acids arginine and tyrosine drive the transformation of spider silk proteins from a dense liquid into solid, high-performance fibers. These interactions serve as molecular "stickers," triggering protein clustering in the earliest moments of silk formation and continuing to influence the fiber as its complex nanostructure develops.
 
Understanding this process at the molecular level would have been nearly impossible without computational tools. The researchers used molecular dynamics simulations, structural predictions from tools like AlphaFold3, and other high-performance modeling techniques to explore how vast numbers of atoms interact over time as the silk proteins assemble. These calculations involve solving complex physics equations for millions of interacting particles, a task that demands supercomputing resources capable of parallel processing at scale.
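At its core, a molecular dynamics simulation repeatedly computes inter-atomic forces and advances positions with a time-stepping scheme such as velocity Verlet. A deliberately tiny sketch of that loop, for two Lennard-Jones particles in one dimension (reduced units, all values illustrative; production silk simulations use millions of atoms and specialized protein force fields):

```python
EPS, SIG = 1.0, 1.0  # Lennard-Jones well depth and size (reduced units)

def lj_force(r):
    """Signed force on the right-hand particle at separation r > 0:
    positive is repulsive, negative is attractive."""
    sr6 = (SIG / r) ** 6
    return 24 * EPS * (2 * sr6**2 - sr6) / r

def velocity_verlet(x, v, dt, steps, m=1.0):
    """Integrate two particles: half-kick, drift, recompute force, half-kick."""
    f = lj_force(x[1] - x[0])
    for _ in range(steps):
        a = f / m
        v = [v[0] - 0.5 * a * dt, v[1] + 0.5 * a * dt]
        x = [x[0] + v[0] * dt, x[1] + v[1] * dt]
        f = lj_force(x[1] - x[0])
        a = f / m
        v = [v[0] - 0.5 * a * dt, v[1] + 0.5 * a * dt]
    return x, v

# Start the pair at rest, slightly outside the potential minimum near
# r = 2^(1/6) ~ 1.12; attraction pulls them into an oscillation about it.
x, v = velocity_verlet([0.0, 1.5], [0.0, 0.0], dt=0.001, steps=1000)
print(round(x[1] - x[0], 3))
```

Scaling this two-body loop to millions of interacting atoms, with long-range electrostatics and solvent, is exactly the workload that demands massively parallel HPC resources.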
 
Professor Chris Lorenz, lead author and expert in computational materials science, explains that the study reveals atom-by-atom mechanisms previously hidden from view. "This study provides an atomistic-level explanation of how disordered proteins assemble into highly ordered, high-performance structures," he said, highlighting the power of computational modeling to connect molecular behavior directly to macroscopic material performance.
 
Indeed, spider silk’s performance has puzzled scientists for decades precisely because its constituent proteins begin as a concentrated liquid, often referred to as “silk dope,” before being spun into fibers that combine elasticity and toughness in ways few man-made materials approach. The key insight from the new study is that arginine–tyrosine interactions create clustering behavior during the liquid-to-solid transition, guiding the assembly of nanoscale structures that underpin silk’s exceptional mechanical properties.
 
Such detailed mechanistic insight isn’t merely academic. By uncovering the design principles that nature uses to build spider silk, researchers now have a blueprint for engineering next-generation sustainable materials, from lightweight protective gear and aircraft components to biodegradable medical implants and soft robotics. These applications are only imaginable because computational models allow scientists to test hypotheses in silico before moving to costly and time-consuming experiments.
 
The implications extend beyond materials science. Gregory Holland, co-author from SDSU, noted that the mechanisms observed in silk protein assembly mirror molecular processes seen in other biological systems, including those involved in human health and disease. “What surprised us was how sophisticated the chemistry turned out to be,” he said, suggesting that insights from silk may inform studies of protein phase separation in conditions such as Alzheimer’s disease.
 
For the supercomputing community, this research exemplifies how advanced modeling and simulation are transforming our ability to decode complex biological materials. Supercomputers enable scientists to explore how and why nature optimizes performance at the molecular level, and to translate those insights into engineered solutions that could be more sustainable, resilient, and energy-efficient than current technologies.
 
As computational power continues to grow, researchers anticipate that even more intricate biological materials will yield their secrets to simulation-based science. For now, the decoding of spider silk’s molecular stickers offers a striking example of how supercomputing not only accelerates discovery but also inspires new directions in engineering and materials design.

Glasgow sets its sights on 'cognitive' cities, where urban systems learn, predict, adapt

Imagine a city capable of sensing trouble before it occurs, anticipating traffic jams, worsening air quality, infrastructure strain, or emerging health concerns, much like a living organism responds to its surroundings. According to a new research center at the University of Glasgow, this vision is closer to reality than many realize.
 
Launched this week, the Centre for Integrated Sensing and Communication Enabling Cognitive Cities (ISAC³) is set to explore how next-generation digital technologies, particularly 6G communications, artificial intelligence, and large-scale data analytics, can transform today’s “smart cities” into cognitive ones. Unlike current urban systems, which primarily monitor conditions in real-time, cognitive cities aim to predict and adapt, shifting city management from a reactive response to proactive decision-making.
 
At the heart of this ambition lies data, vast volumes of it. Future 6G networks are expected to collect streams of information from advanced sensors embedded throughout urban infrastructure, as well as from next-generation mobile devices carried by residents. Processing, analyzing, and acting on this data in real time will demand not just clever algorithms, but significant computational power, placing high-performance computing and AI-driven analytics at the core of the cognitive city concept.
 
Professor Qammer H. Abbasi, Founding Director of ISAC³ and Professor at the University of Glasgow’s James Watt School of Engineering, describes the initiative as a response to mounting global pressures on cities. Population growth, climate change, cybersecurity risks, and the push toward decarbonization are converging challenges that traditional urban planning struggles to address in isolation.
 
“Next-generation technologies like real-time data collection, advanced communications, cyber-physical systems, and AI-driven analytics will provide the tools required to turn urban spaces into cognitive cities,” Abbasi said. “ISAC³ brings together the expertise needed to explore how these tools can work together responsibly and at scale.”
 
The center unites researchers across engineering, computing science, cybersecurity, public health, business, social science, and urban planning, reflecting the inherently interdisciplinary nature of future cities. Cognitive systems, the researchers suggest, will require tightly coupled sensing, communication, computation, and action, an architectural challenge that mirrors the integrated workflows increasingly seen in modern supercomputing environments.
 
One intriguing aspect of ISAC³’s vision is its focus on health and well-being. Professor Frances Mair, Head of the University of Glasgow’s School of Health & Wellbeing, notes that healthcare services have often lagged behind other sectors in adopting advanced digital tools. Cognitive city technologies, she argues, could change that dynamic by identifying early warning signs of health risks and connecting people to support before conditions worsen.
 
"In the future, ISAC technology could work quietly in the background,” Mair said, “helping communities stay healthier, safer, and more supported."
 
Beyond technical innovation, ISAC³ is also emphasizing responsible and inclusive development. Professor Nuran Acur of the Adam Smith Business School highlighted the Centre’s “quadruple helix” approach, which brings together academia, industry, public services, and society from the earliest stages of research. The goal is to ensure that technologies are not only advanced but also practical, socially responsible, and ready for real-world deployment.
 
The center's first year will focus on building a deployment roadmap through workshops and webinars with international experts in integrated sensing, communication, and computing. A key question underpinning this work is how to balance innovation with robust data protection, particularly as cities collect increasingly sensitive information from sensors and personal devices.
 
Glasgow itself will serve as a living laboratory. The University’s campus will be used as a testbed for prototype systems, developed in collaboration with industry partners including BT, Virgin Media O2, Ericsson, InterDigital, and Neutral Wireless. According to Mallik Tatipamula, CTO of Ericsson Silicon Valley, the work underway in Glasgow could have global implications.
 
“Cities that succeed will be those that can sense, interpret, and respond to their environments in real time,” Tatipamula said. “ISAC³ has the potential to help redefine how future societies function.”
 
ISAC³ presents the supercomputing community with intriguing challenges: How can vast urban data streams be processed both efficiently and securely? What part will high-performance computing and AI accelerators play in facilitating real-time, citywide predictions? And how might computational models allow policymakers to test decisions virtually before they are implemented?
 
As ISAC³ embarks on its mission, it does not claim to possess all the solutions. Rather, it is establishing a research environment driven by curiosity, exploring how cities could learn, adapt, and evolve, and examining the computational backbone necessary to make these ambitions a reality. Through this work, Glasgow is setting itself apart as a hub where the future of urban life is not just envisioned, but rigorously investigated through computation.