Part of the set-up for creating medium-density amorphous ice: ordinary ice and steel balls in a jar (not amorphous ice). Credit: Christoph Salzmann

Cambridge and UCL researchers create a new form of amorphous ice and model it at the atomic scale on a supercomputer

A collaboration between scientists at Cambridge and UCL has led to the discovery of a new form of ice that more closely resembles liquid water than any other and may hold the key to understanding this most famous of liquids.

The new form of ice is amorphous. Unlike ordinary crystalline ice, where the molecules arrange themselves in a regular pattern, in amorphous ice the molecules are disorganized, resembling a liquid.

The team created a new form of amorphous ice in an experiment and achieved an atomic-scale model of it in a supercomputer simulation. The experiments used a technique called ball-milling, which grinds crystalline ice into small particles using metal balls in a steel jar. Ball-milling is regularly used to make amorphous materials, but it had never been applied to ice.

The team found that ball-milling created an amorphous form of ice which, unlike all other known ices, had a density similar to that of liquid water and resembled liquid water in solid form. They named the new ice medium-density amorphous ice (MDA).

To understand the process at the molecular scale, the team turned to supercomputer simulations. By mimicking the ball-milling procedure through repeated random shearing of crystalline ice, they successfully created a computational model of MDA.
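
For readers who want a concrete picture of what "repeated random shearing" means in a simulation, here is a minimal, illustrative Python sketch. The deformation protocol, parameter values, and the placeholder relax() step are assumptions for illustration only; they are not the actual code or settings used in the study.

    # Illustrative sketch of a "repeated random shear" protocol applied to a
    # periodic simulation cell. The relax() step is a placeholder for whatever
    # molecular-dynamics or energy-minimization engine a real study would use.
    import numpy as np

    rng = np.random.default_rng(0)
    SHEAR_COMPONENTS = [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]

    def random_shear_matrix(max_strain=0.1):
        """Identity plus one small, randomly chosen off-diagonal shear term."""
        F = np.eye(3)
        i, j = SHEAR_COMPONENTS[rng.integers(len(SHEAR_COMPONENTS))]
        F[i, j] = rng.uniform(-max_strain, max_strain)
        return F

    def relax(positions, cell):
        """Placeholder: a real workflow would relax the structure here."""
        return positions, cell

    def shear_cycle(positions, cell, n_cycles=100, max_strain=0.1):
        """Repeatedly shear the box (and its contents), relaxing after each step."""
        for _ in range(n_cycles):
            F = random_shear_matrix(max_strain)
            cell = cell @ F.T            # deform the periodic box (rows = lattice vectors)
            positions = positions @ F.T  # carry the molecules along with the box
            positions, cell = relax(positions, cell)
        return positions, cell

    # Toy usage: 64 random "molecules" in a 10 x 10 x 10 box
    cell = 10.0 * np.eye(3)
    positions = rng.uniform(0.0, 10.0, size=(64, 3))
    positions, cell = shear_cycle(positions, cell, n_cycles=10)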

“Our discovery of MDA raises many questions on the very nature of liquid water and so understanding MDA’s precise atomic structure is very important,” said co-author Dr. Michael Davies, who carried out the computational modeling. “We found remarkable similarities between MDA and liquid water.”

A happy medium

Amorphous ices have been suggested to be models for liquid water. Until now, there have been two main types of amorphous ice: high-density and low-density amorphous ice.

As the names suggest, there is a large density gap between them. This density gap, combined with the fact that the density of liquid water lies in the middle, has been a cornerstone of our understanding of liquid water. It has led in part to the suggestion that water consists of two liquids: one high- and one low-density liquid.

Senior author Professor Christoph Salzmann said: “The accepted wisdom has been that no ice exists within that density gap. Our study shows that the density of MDA is precisely within this density gap and this finding may have far-reaching consequences for our understanding of liquid water and its many anomalies.”
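
To put the "density gap" in perspective, the snippet below compares rough ambient-pressure densities drawn from the wider amorphous-ice literature; the numbers are approximate and are not quoted in the article itself.

    # Approximate densities in g/cm^3 (rounded values from the amorphous-ice
    # literature, not figures quoted in this article).
    densities = {
        "low-density amorphous ice (LDA)": 0.94,
        "liquid water": 1.00,
        "medium-density amorphous ice (MDA)": 1.06,
        "high-density amorphous ice (HDA)": 1.13,  # roughly 1.13-1.17 depending on preparation
    }
    for name, rho in sorted(densities.items(), key=lambda item: item[1]):
        print(f"{rho:.2f} g/cm^3  {name}")
    # MDA falls inside the LDA-HDA gap, close to the density of liquid water.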

A high-energy geophysical material

The discovery of MDA raises the question: where might it exist in nature? Shear forces proved vital to creating MDA in this study, and the team suggests that ordinary ice could undergo similar shear forces on icy moons, due to the tidal forces exerted by gas giants such as Jupiter.

Moreover, MDA displays one remarkable property not found in other forms of ice. Using calorimetry, the team discovered that when MDA recrystallizes to ordinary ice, it releases an extraordinary amount of heat. This heat could play a role in triggering tectonic motions. More broadly, the discovery shows that water can be a high-energy geophysical material.

Professor Angelos Michaelides, the lead author from Cambridge's Yusuf Hamied Department of Chemistry, said: “Amorphous ice in general is said to be the most abundant form of water in the universe. The race is now on to understand how much of it is MDA and how geophysically active MDA is.”

Image sequence of projectiles being fired through three different steel plates. The pictures on the right show the exit holes in the various plates.

NTNU's SIMLab supercomputing shows how to create buildings that can withstand the most extreme stress loads

In an explosion, fragments and debris can be ejected at great speed and strike the surroundings. Then comes the shock wave. It's a scary combination.

Such combined loads pose a major challenge for engineers who build structures that must withstand extreme stresses. The combination of blast pressure and high-speed fragment impact increases the chances of greater damage. Benjamin Stavnar Elveli, a Ph.D. candidate at the Norwegian University of Science and Technology (NTNU), describes it as the scariest stress there is.

“These combined impacts work in the same way as shrapnel bombs,” he says.

Infrastructure shift from massive and military to light and civilian

In the past, protective structures have involved massive concrete military buildings. In recent decades, new threats have emerged, and the need to protect civilian buildings and structures in urban areas has increased.

This has fuelled interest in lighter, thin-walled solutions that can withstand large deformations without cracking and collapsing.

The regulations have not followed this same development. No standards address this type of load yet, and research in the field is very limited.

Elveli has investigated how different types of thin steel plates behave when exposed to such extreme stress loads. His work can help to establish guidelines for how resistant, lightweight structures should be designed.

Initial projectiles do the most damage

Whether they occur by accident or on purpose, explosions can cause massive damage. Debris and fragments can be torn loose from buildings, cars, gravel, or stones, and when they hit, they can act like projectiles.

Elveli says that any buildings, cars, or other objects in the vicinity would be exposed to a load that is more serious than if either stress load occurred alone. The damage is believed to be greatest when fragments hit first.

“That’s because the structure already has a defect or weakness from the projectile and then has to withstand the shock wave itself,” he says. “Most often, cracking and destruction start in the weak spots.”

Safer structures, safer society

Elveli's Ph.D. is based on more than 80 small-scale explosion tests on three different types of steel. By combining physical experiments with theory and mathematical modeling, he has recreated explosive loads in supercomputer simulations. The aim is to gain as much control as possible over how structures react to such loads.

The more scientists understand the actual physics of these loads, the more accurate, safe, and sustainable solutions the construction engineers of the future can deliver.

The danger of overestimating the strength

A shock wave can last for several milliseconds and cause great destruction over a large area. A fragment moves even faster and produces concentrated damage. Simulating this combined effect means that you have to describe two completely different phenomena in one and the same model. It's complicated.

“Often you’ll end up with some sort of trade-off. In order to capture the local weaknesses that occur during the explosion, we need to determine how accurate the descriptions of the impact of the fragments should be. If we don’t achieve full control of this, the model could overestimate the strength of the building to withstand the stress,” says Elveli.
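
As a toy illustration of why the two load types interact, here is a deliberately crude single-degree-of-freedom sketch: the plate is reduced to one mass on a spring, the blast is idealized as a decaying Friedlander-type pressure pulse, and prior fragment damage is represented simply as a loss of stiffness. All parameter values are invented for illustration and this is not the modeling approach used at SIMLab; even so, the pre-damaged plate deflects noticeably further.

    # Toy single-degree-of-freedom (SDOF) model: a plate hit by an idealized
    # blast pulse, with prior fragment damage represented as reduced stiffness.
    # All values are invented for illustration; this is not the SIMLab model.
    import math

    def friedlander(t, p0=200e3, t_dur=2e-3, b=1.0):
        """Idealized blast overpressure in Pa: peak p0, positive phase t_dur."""
        if t > t_dur:
            return 0.0
        return p0 * (1.0 - t / t_dur) * math.exp(-b * t / t_dur)

    def peak_deflection(k, m=10.0, area=0.25, dt=1e-6, t_end=20e-3):
        """Step m*x'' + k*x = p(t)*area forward in time; return the peak |x|."""
        x, v, peak = 0.0, 0.0, 0.0
        for i in range(int(t_end / dt)):
            t = i * dt
            a = (friedlander(t) * area - k * x) / m
            v += a * dt
            x += v * dt
            peak = max(peak, abs(x))
        return peak

    k_intact = 2.0e6              # spring stiffness in N/m (arbitrary)
    k_damaged = 0.5 * k_intact    # crude stand-in for fragment damage

    print(f"intact plate : peak deflection ~ {peak_deflection(k_intact) * 1e3:.1f} mm")
    print(f"damaged plate: peak deflection ~ {peak_deflection(k_damaged) * 1e3:.1f} mm")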

Need solutions that can be trusted

Overestimating strength can have fatal consequences. The solutions that construction engineers deliver have to be dependable. A large part of Elveli’s doctoral work has been to investigate how accurate the models need to be to ensure reliable buildings.

A common approach has been to assume that the fragments hit before the shock wave arrives. The physical experiments then have to be divided into two separate sequences that follow each other. Often such studies use a simplified approach, where the test pieces have holes milled out by a machine to mimic damage from real fragments.

Overestimating resilience

Elveli has compared the behavior of machined plates with plates hit with real projectiles. Real projectiles created small petal-like cracks and deformation around the points of impact, whereas the pre-formed defects had perfectly even edges.

The explosion tests showed that the destruction started in these cracks and spread outwards. His research thus shows that the simplified approach has weaknesses.

“Idealized defects, like in the machined plates, are easier to test and simulate. But because they lack the deformations and damage that occur in real explosions, there’s a risk of exaggerating the strength of the materials in these models,” he says.

Great need for supercomputer simulations

The need for accurate supercomputer simulations is easy enough to understand: researchers who work with strength calculations cannot blow up actual buildings to test their resilience.

Elveli has put a lot of work into designing controlled and reliable small-scale explosion tests. He believes that his research will be useful for other researchers in the military and civilian arenas. For industrial use, precise and reliable simulations are currently expensive and time-consuming.

The many tests have produced large amounts of data that may interest the research and development departments of large companies. Elveli’s work makes it possible to simulate how structures behave when they are bent, stretched, or otherwise deformed.

In total, he carried out 110 tests, of which 82 were explosion experiments. High-speed cameras filming at 37,000 frames per second captured the details as the steel plates were damaged. Elveli obtained his doctorate at NTNU’s SIMLab / Department of Structural Engineering.

UZH prof Schwank develops AI that improves the efficiency of genome editing

Researchers at the University of Zurich have developed a new tool that uses artificial intelligence to predict the efficacy of various genome-editing repair options. This could reduce unintended errors when correcting the DNA mutations that cause genetic diseases.

Genome editing technologies offer great opportunities for treating genetic diseases. Methods such as the widely used CRISPR/Cas9 gene scissors directly address the cause of the disease in the DNA. The scissors are used in the laboratory to make targeted modifications to the genetic material in cell lines and model organisms and to study biological processes.

A further development of this classic CRISPR/Cas9 method is called prime editing. Unlike conventional gene scissors, which create a break in both strands of the DNA molecule, prime editing cuts and repairs only a single strand of the DNA. The prime editing guide RNA (pegRNA) precisely targets the relevant site in the genome and carries the new genetic information, which is then copied into the DNA by a reverse transcriptase enzyme.

Finding the most efficient DNA repair options

Prime editing promises to be an effective method of repairing disease-causing mutations in patients’ genomes. However, when it comes to applying it successfully, it is important to minimize unintended side effects such as errors in DNA correction or alteration of DNA elsewhere in the genome. According to initial studies, prime editing leads to a significantly lower number of unintended changes than conventional CRISPR/Cas9 approaches.

However, researchers currently still have to spend a significant amount of time optimizing the pegRNA for a specific target in the genome. “There are over 200 repair possibilities per mutation. In theory, we would have to test every single design option each time to find the most efficient and accurate pegRNA,” says Gerald Schwank, professor at the Institute of Pharmacology and Toxicology at the University of Zurich (UZH).
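
To see how a design space of that size can arise, here is a hypothetical enumeration. A pegRNA design varies, among other things, the length of its primer-binding site (PBS) and of its reverse-transcriptase (RT) template; the ranges below are illustrative only, not the ones used by the Zurich group, but two such parameters alone already multiply into a couple of hundred candidates.

    # Hypothetical enumeration of pegRNA design candidates for a single edit.
    # The length ranges are illustrative, not those used in the actual study.
    from itertools import product

    pbs_lengths = range(8, 18)    # primer-binding-site lengths in nucleotides
    rt_lengths = range(10, 31)    # RT-template lengths in nucleotides

    candidates = [{"pbs_len": p, "rt_len": r} for p, r in product(pbs_lengths, rt_lengths)]
    print(len(candidates), "candidate pegRNA designs")  # 10 * 21 = 210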

Analyzing a large data set with AI

Schwank and his research group needed to find an easier solution. Together with Michael Krauthammer, UZH professor at the Department of Quantitative Biomedicine, and his team, they developed a method that can predict the efficiency of pegRNAs. By testing over 100,000 different pegRNAs in human cells, they were able to generate a comprehensive prime editing data set. This enabled them to determine which properties of a pegRNA – such as the length of the DNA sequence, the sequence of DNA building blocks or the shape of the DNA molecule – positively or negatively influence the prime editing process.
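
The article does not spell out which properties were encoded or how, but the sketch below shows the kind of simple sequence-level features that such a data set lets researchers correlate with editing efficiency; the feature set here is a generic example, not the one behind the published tool.

    # Generic sequence-level features one might extract from a pegRNA design
    # before fitting a predictive model. Illustrative only; not the actual
    # feature set behind the published tool.
    def sequence_features(spacer: str, pbs: str, rt_template: str) -> dict:
        full = spacer + pbs + rt_template
        gc_content = sum(base in "GC" for base in full) / len(full)
        return {
            "spacer_len": len(spacer),
            "pbs_len": len(pbs),
            "rt_len": len(rt_template),
            "gc_content": gc_content,
            "has_poly_t": "TTTT" in full,  # stretches of T can end transcription early
        }

    print(sequence_features("GACTGACTGACTGACTGACT", "TCGATCGATCGA", "ATCGATCGATCGATCGAT"))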

Subsequently, the team developed an AI-based algorithm to recognize patterns in the pegRNA of relevance for efficiency. Based on these patterns, the trained tool can predict both the effectiveness and accuracy of genome editing with a particular pegRNA. “In other words, the algorithm can determine the most efficient pegRNA for correcting a particular mutation,” says Michael Krauthammer. The tool has already been successfully tested in human and mouse cells and is freely available to researchers.
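
Finally, a minimal sketch of the prediction step, with an off-the-shelf gradient-boosting regressor standing in for the group's actual model (whose architecture is not described in this article) and random numbers standing in for the measured data:

    # Minimal stand-in for a pegRNA-efficiency predictor: fit a regressor on
    # tabular pegRNA features, then rank new candidate designs by predicted
    # efficiency. Placeholder data; not the published model or its training set.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # rows = pegRNA designs, columns = numeric features (e.g. PBS length,
    # RT-template length, GC content); y = measured editing efficiency
    X = rng.uniform(size=(1000, 3))
    y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=1000)

    model = GradientBoostingRegressor().fit(X, y)

    new_designs = rng.uniform(size=(210, 3))
    scores = model.predict(new_designs)
    best = int(np.argmax(scores))
    print(f"best-ranked candidate: #{best}, predicted efficiency {scores[best]:.2f}")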

Long-term goal: repairing hereditary diseases

Further pre-clinical studies are still needed before the new prime editing tool can be used in humans. However, the researchers are confident that in the foreseeable future, it will be possible to use prime editing to repair the DNA mutations of common inherited diseases such as sickle cell anemia, cystic fibrosis, or metabolic diseases.

The tool can be accessed by researchers at https://pridict.it. The study was supported by the University of Zurich Research Priority Program Human Reproduction Reloaded and the Swiss National Science Foundation.