UZH prof Schwank develops AI that improves the efficiency of genome editing

Researchers at the University of Zurich have developed a new tool that uses artificial intelligence to predict the efficiency of various genome-editing repair options. Unintended errors in correcting the DNA mutations that cause genetic diseases can thus be reduced.

Genome editing technologies offer great opportunities for treating genetic diseases. Methods such as the widely used CRISPR/Cas9 gene scissors directly address the cause of the disease in the DNA. The scissors are used in the laboratory to make targeted modifications to the genetic material in cell lines and model organisms and to study biological processes.

A further development of this classic CRISPR/Cas9 method is called prime editing. Unlike conventional gene scissors, which create a break in both strands of the DNA molecule, prime editing cuts and repairs DNA on a single strand only. The prime editing guide RNA (pegRNA) precisely targets the relevant site in the genome and provides the new genetic information, which a reverse transcriptase enzyme then writes into the DNA.

Finding the most efficient DNA repair options
Prime editing promises to be an effective method of repairing disease-causing mutations in patients’ genomes. However, when it comes to applying it successfully, it is important to minimize unintended side effects such as errors in DNA correction or alteration of DNA elsewhere in the genome. According to initial studies, prime editing leads to a significantly lower number of unintended changes than conventional CRISPR/Cas9 approaches.

However, researchers currently still have to spend a significant amount of time optimizing the pegRNA for a specific target in the genome. “There are over 200 repair possibilities per mutation. In theory, we would have to test every single design option each time to find the most efficient and accurate pegRNA,” says Gerald Schwank, professor at the Institute of Pharmacology and Toxicology at the University of Zurich (UZH).

Analyzing a large data set with AI
Schwank and his research group needed to find an easier solution. Together with Michael Krauthammer, UZH professor at the Department of Quantitative Biomedicine, and his team, they developed a method that can predict the efficiency of pegRNAs. By testing over 100,000 different pegRNAs in human cells, they were able to generate a comprehensive prime editing data set. This enabled them to determine which properties of a pegRNA – such as the length of the DNA sequence, the sequence of DNA building blocks or the shape of the DNA molecule – positively or negatively influence the prime editing process.

Subsequently, the team developed an AI-based algorithm to recognize patterns in the pegRNA of relevance for efficiency. Based on these patterns, the trained tool can predict both the effectiveness and accuracy of genome editing with a particular pegRNA. “In other words, the algorithm can determine the most efficient pegRNA for correcting a particular mutation,” says Michael Krauthammer. The tool has already been successfully tested in human and mouse cells and is freely available to researchers.
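As a rough illustration of the idea, the sketch below scores hypothetical pegRNA candidates with a toy linear model over simple sequence features. The features, weights, and scoring function here are all invented for illustration; the actual trained tool, and the features it learned from the 100,000-pegRNA data set, are far more sophisticated.

```python
# Toy sketch of pegRNA ranking, NOT the actual UZH model.
# All features and weights below are invented for illustration only.

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

def featurize(pegrna: str) -> list:
    # Two toy features: sequence length and GC content.
    return [float(len(pegrna)), gc_content(pegrna)]

def predict_efficiency(features: list) -> float:
    # A linear stand-in for the trained AI model.
    weights, bias = [-0.01, 0.8], 0.5  # invented values
    return bias + sum(w * f for w, f in zip(weights, features))

# Rank three made-up candidate designs by predicted efficiency.
candidates = ["ATGCGCGTAGC", "ATATATATAT", "GCGCGCGCGCGCGC"]
ranked = sorted(candidates,
                key=lambda s: predict_efficiency(featurize(s)),
                reverse=True)
```

In practice, the trained model would take this role: scoring each of the roughly 200 repair possibilities per mutation, so only the top-ranked designs need to be tested in the lab.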

Long-term goal: repairing hereditary diseases
Further pre-clinical studies are still needed before the new prime editing tool can be used in humans. However, the researchers are confident that in the foreseeable future, it will be possible to use prime editing to repair the DNA mutations of common inherited diseases such as sickle cell anemia, cystic fibrosis, or metabolic diseases.

The tool can be accessed by researchers at https://pridict.it. The study was supported by the University of Zurich Research Priority Program Human Reproduction Reloaded and the Swiss National Science Foundation.

 

A visualization of a cascaded-mode resonator, where a supermode resonance is created by reflecting the light back in a different mode at each reflection. (Photo credit: Capasso Lab, Harvard SEAS)

Capasso lab creates supermode optical resonator at Harvard SEAS

What does it take for scientists to push beyond the current limits of knowledge? Researchers in Federico Capasso’s group at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed an effective formula.

“Dream big, question everything we know, question the textbooks,” says Vincent Ginis, a visiting professor at SEAS and first author of a new paper reporting a breakthrough in optical resonator technology. “That’s how Federico asks our lab team to work together. He challenges us to rethink all the classical rules to see if we can make devices do things better and in novel ways.”

That approach led to the team’s latest result, an optical resonator capable of manipulating light in never-before-observed ways. The breakthrough could influence how resonators are understood and open doors for new capabilities.

“This is an advance that alters fundamentally the design of resonators by using reflectors that convert light from one pattern to another as it bounces back and forth,” says Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS.

Optical resonators play a key role in many aspects of modern life.

“Resonators are central components in most applications of optics, lasers, microscopy, sensing—they appear in all of these technologies as essential building blocks,” says Ginis, who is also an assistant professor of mathematics and physics at the Vrije Universiteit Brussel. “They consist of two reflectors that bounce light back and forth, concentrating light in lasers for example, or filtering out frequencies of light such as in fiber optics and telecommunications.”

Optical resonators are key to telecommunications transmissions, encoding images and audio through frequencies of light.

“Each message, to keep separate from the others, is encoded on its specific frequency,” Ginis says. “Resonators allow us to ‘tape off’ exact, unique frequencies to allow many different messages to be transmitted simultaneously.”

Until now, resonators and the two reflective mirrors inside them controlled the intensity and frequency of light, but not the mode of light, which determines the shape and manner in which photons flow through space and time. We often think of light as moving in a beam like a straight line, but beams of light are also capable of traveling in other modes, like spirals. The new optical resonator developed by Capasso’s team is the first such device that gives scientists precise control over the mode of light, and even more importantly, enables multi-mode coupled light to exist within the resonator.

The team achieved this by etching a new type of pattern on the surface of the reflectors at each end of the resonator device.
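A minimal way to picture this, in a toy two-mode description of our own devising (not the paper's model): represent each reflector as a matrix acting on the vector of mode amplitudes. An ordinary mirror leaves the mode unchanged, while a mode-converting mirror swaps the two modes; the resonator's supermodes are then eigenvectors of the round-trip operator, and they mix both modes.

```python
# Toy two-mode picture of a cascaded-mode resonator (illustration only;
# the propagation phase and mirror matrices are assumed, not measured).
import numpy as np

phase = np.exp(1j * 0.3)            # assumed propagation phase per pass
propagate = phase * np.eye(2)       # free propagation between mirrors
mirror_plain = np.eye(2)            # ordinary mirror: mode unchanged
mirror_convert = np.array([[0, 1],  # etched mirror: converts
                           [1, 0]]) # mode 1 <-> mode 2 on reflection

# One round trip: propagate, convert on reflection, propagate, reflect.
round_trip = mirror_plain @ propagate @ mirror_convert @ propagate

eigvals, eigvecs = np.linalg.eig(round_trip)
# The eigenvectors (supermodes) carry equal weight in both modes:
# symmetric and antisymmetric combinations of mode 1 and mode 2.
```

In this toy picture, resonance occurs when a round-trip eigenvalue's phase is a multiple of 2π, and the resonating field is inherently a superposition of both modes rather than a single one.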

“We realized that we could test our novel resonator concept in an integrated photonics platform, and chose silicon-on-insulator, which is used by many scientists and companies for applications such as sensing or communications,” says Cristina Benea-Chelmus, a research associate in the Capasso group and assistant professor of microengineering at the EPFL Institute of Electro and Microengineering, who spearheaded the experimental part of the work.

The etchings, about 300-600 nanometers in size, gave the team control over the shape of light beams inside the resonator. Using reflectors with different patterns on either end of the resonator unlocked their ability to change the shape of light as it moves.

“We can make these light modes play with each other, turning one mode into another, and then back into the first mode, creating loops of different light modes moving through the same space,” Ginis says. “When we saw this, we realized we were in ‘terra incognita’ here.”

Combining more than one mode of light creates what the researchers called a “supermode.”

“In traditional resonators, as light moves back and forth, the mode is always the same—the properties of light are always symmetric,” he says. “In ours, as the light goes from left to right or right to left, the modes are different. We’ve figured out how to break symmetry inside a resonator.”

“Having multimode control of light will have a huge impact on the bandwidth of information that can be transmitted using light,” he says. “It opens up many channels of transmission that we haven’t been able to access simultaneously until now.”

The Capasso team’s optical resonator provides a new tool to conduct fundamental physics experiments, including optomechanics, using light to make things move.

“By placing an object inside a resonator, you can manipulate materials like tiny atoms, molecules, and strands of DNA,” Ginis says. The new device, with its supermode capabilities, could unlock new degrees of freedom for researchers to manipulate minuscule materials with different shapes of light beams.

“By questioning the foundations of textbook resonator theory, we have discovered completely new and counterintuitive properties of light not found in traditional resonators,” Capasso says. These properties, including “mode-independent resonances and directionally dependent propagation,” unlock unforeseen opportunities for photonics, acoustics, and beyond, he adds.

Harvard’s Office of Technology Development has protected the intellectual property arising from the Capasso Lab’s optical resonator innovations and is exploring commercialization opportunities.

Additional authors include postdoctoral fellow Jinsheng Lu and research associate Marco Piccardo.

This work was supported by the Air Force Office of Scientific Research (grants FA9550-19-1-0352 and FA9550-19-1-0135), the Research Foundation Flanders, and the Hans Eggenberger Foundation. This work was performed in part at the Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Coordinated Infrastructure Network (NNCI), which is supported by the National Science Foundation under NSF Award no. 1541959.

Courtesy of NASA/JPL/SSI/SwRI

SwRI investigations reveal more evidence that Mimas is a stealth ocean world

When a Southwest Research Institute scientist discovered surprising evidence that Saturn’s smallest, innermost moon could generate the right amount of heat to support a liquid internal ocean, colleagues began studying Mimas’ surface to understand how its interior may have evolved. Numerical simulations of the moon’s Herschel impact basin, the most striking feature on its heavily cratered surface, determined that the basin’s structure and the lack of tectonics on Mimas are compatible with a thinning ice shell and a geologically young ocean.

“In the waning days of NASA’s Cassini mission to Saturn, the spacecraft identified a curious libration, or oscillation, in Mimas’ rotation, which often points to a geologically active body able to support an internal ocean,” said SwRI’s Dr. Alyssa Rhoden, a specialist in the geophysics of icy satellites, particularly those containing oceans, and the evolution of giant planet satellite systems. She is the second author of a new Geophysical Research Letters paper on the subject. “Mimas seemed like an unlikely candidate, with its icy, heavily cratered surface marked by one giant impact crater that makes the small moon look much like the Death Star from Star Wars. If Mimas has an ocean, it represents a new class of small, ‘stealth’ ocean worlds with surfaces that do not betray the ocean’s existence.”

Rhoden worked with Purdue graduate student Adeene Denton to better understand how a heavily cratered moon like Mimas could possess an internal ocean. Denton modeled the formation of the Herschel impact basin using iSALE-2D simulation software. The models showed that Mimas’ ice shell had to be at least 34 miles (55 km) thick at the time of the Herschel-forming impact. In contrast, observations of Mimas and models of its internal heating limit the present-day ice shell thickness to less than 19 miles (30 km), if it currently harbors an ocean. These results imply that a present-day ocean within Mimas must have been warming and expanding since the basin formed. It is also possible that Mimas was entirely frozen both at the time of the Herschel impact and at present. However, Denton found that including an interior ocean in the impact models helped reproduce the shape of the basin.

“We found that Herschel could not have formed in an ice shell at the present-day thickness without obliterating the ice shell at the impact site,” said Denton, who is now a postdoctoral researcher at the University of Arizona. “If Mimas has an ocean today, the ice shell has been thinning since the formation of Herschel, which could also explain the lack of fractures on Mimas. If Mimas is an emerging ocean world, that places important constraints on the formation, evolution, and habitability of all of the mid-sized moons of Saturn.”

“Although our results support a present-day ocean within Mimas, it is challenging to reconcile the moon’s orbital and geologic characteristics with our current understanding of its thermal-orbital evolution,” Rhoden said. “Evaluating Mimas’ status as an ocean moon would benchmark models of its formation and evolution. This would help us better understand Saturn’s rings and mid-sized moons as well as the prevalence of potentially habitable ocean moons, particularly at Uranus. Mimas is a compelling target for continued investigation.”

A process of continual learning for a synthetic multi-label dataset. The figure shows how new information is learned each time a data distribution is input, while retaining information learned in the past.

Osaka Metro prof Masuyama proposes new data learning methods for AI

Advances in information technology have made it possible for us to easily and continually obtain large amounts of diverse data. Artificial intelligence technology is gaining attention as a tool to put this big data to use.

Conventional machine learning mainly deals with single-label classification problems, in which data and corresponding phenomena or objects (label information) are in a one-to-one relationship. In the real world, however, data and label information rarely have a one-to-one relationship. In recent years, therefore, attention has focused on the multi-label classification problem, which deals with data that has a one-to-many relationship with its label information. For example, a single landscape photo may carry multiple labels for elements such as sky, mountains, and clouds. In addition, to learn efficiently from big data that is obtained continually, the ability to learn over time without destroying what was learned previously is also required.

A research group led by Associate Professor Naoki Masuyama and Professor Yusuke Nojima of the Osaka Metropolitan University Graduate School of Informatics has developed a new method that combines high classification performance on multi-label data with the ability to learn continually from new data. Numerical experiments on real-world multi-label datasets showed that the proposed method outperforms conventional methods.

The simplicity of this new algorithm makes it easy to devise an evolved version that can be integrated with other algorithms. Since the underlying clustering method groups data based on the similarity between data entries, it is expected to be a useful tool for continual big data preprocessing. In addition, the label information assigned to each cluster is learned continually, using a method based on the Bayesian approach. By learning the data and learning the label information corresponding to the data separately and continually, both high classification performance and continual learning capability are achieved.
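The combination described above can be sketched in miniature: cluster incoming samples by similarity, and continually update per-cluster label counts as a simple Bayesian-style tally. This toy illustration is ours, not the authors' actual algorithm, and the clustering rule (a fixed-radius prototype update) is an assumption made for brevity.

```python
# Toy continual multi-label learner (illustration only, not the
# Osaka Metropolitan University method): clusters hold prototypes,
# and label frequencies per cluster are updated incrementally.
import math

class ContinualMultiLabel:
    def __init__(self, radius=1.0):
        self.radius = radius     # assumed similarity threshold
        self.centers = []        # cluster prototypes
        self.label_counts = []   # per-cluster {label: count}
        self.sizes = []          # samples seen per cluster

    def _nearest(self, x):
        best, best_d = None, float("inf")
        for i, c in enumerate(self.centers):
            d = math.dist(x, c)
            if d < best_d:
                best, best_d = i, d
        return best, best_d

    def learn(self, x, labels):
        i, d = self._nearest(x)
        if i is None or d > self.radius:
            # New region of the data space: open a new cluster,
            # without touching (or destroying) existing clusters.
            self.centers.append(list(x))
            self.label_counts.append({})
            self.sizes.append(1)
            i = len(self.centers) - 1
        else:
            # Nudge the prototype toward the new sample.
            n = self.sizes[i] + 1
            self.centers[i] = [c + (xi - c) / n
                               for c, xi in zip(self.centers[i], x)]
            self.sizes[i] = n
        # Tally the labels seen in this cluster (counting update).
        for lab in labels:
            self.label_counts[i][lab] = self.label_counts[i].get(lab, 0) + 1

    def predict(self, x, threshold=0.5):
        i, _ = self._nearest(x)
        total = self.sizes[i]
        return {lab for lab, c in self.label_counts[i].items()
                if c / total >= threshold}
```

In the spirit of the landscape-photo example, a sample near an existing "sky" cluster would inherit that cluster's frequent labels, while samples from a new distribution simply open new clusters, so earlier knowledge is retained.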

“We believe that our method is capable of continual learning from multi-label data and has capabilities required for artificial intelligence in a future big data society,” Professor Masuyama concluded.

A group of small galaxies, seen almost 13 billion years back in time, likely in the process of forming a massive galaxy. The colors are composed from three different infrared colors. The white, horizontal bar shows the scale of approximately 20,000 light-years. Credit: Shuowen Jin et al. (2023).

Giant galaxy formation caught in action with Danish supercomputing, JWST

Astronomers from the Cosmic Dawn Center have unveiled the nature of the densest region of galaxies seen with the James Webb Space Telescope (JWST) in the early Universe. They find it to be likely the progenitor of a massive, Milky Way-like galaxy, seen at a time when it is still assembling from smaller galaxies. The discovery corroborates our understanding of how galaxies form. 

Four snapshots of the evolution of a simulated proto-galaxy from the "EAGLE" simulation, chosen to resemble the observed group CGG-z5. The brightness shows the density of stars in the galaxies, and the symbols follow individual clumps of matter. In the 1.2 billion years that pass between the upper left and the lower right panels, the galaxies grow from a total stellar mass of 5 billion Suns to 65 billion Suns. Credit: A. Vijiayan and S. Jin.

According to our current understanding of structure formation in the Universe, galaxies form hierarchically, with small structures forming first in the very early Universe, later merging to build up larger structures. This is the prediction of theories and supercomputer simulations and is verified by observations of galaxies at various epochs in the history of the Universe.

To observe the very first structures assembling, we have to look as far back in time, and hence as far away, as possible. But these sources are both very small and very faint, and their detection requires advanced technologies.

In a new study, astronomers have detected the early progenitor of what has likely since evolved into a massive, Milky Way-sized galaxy. This group of smaller galaxies, dubbed CGG-z5, was found through the observational program "CEERS" with the James Webb Space Telescope and is seen at a time when the Universe was only 1.1 billion years old, 8% of its current age.

CGG-z5 was discovered using the code GalCluster, which was created by Nikolaj Sillassen, an MSc student at the Cosmic Dawn Center (DAWN). 

"I developed the software during my studies to detect this kind of structure, and now we applied it to data from the CEERS program," says Nikolaj Sillassen, who already found a similar but more nearby group while testing the software.

"It's great to see how useful my code is becoming."

Impossible without James Webb

The brightest members of the galaxy group were discovered previously with the Hubble Space Telescope. But the CEERS program revealed new and smaller members.

"The other members of the group are both small and faint. Without the sensitivity and the spatial resolution of James Webb, we simply wouldn't be able to detect them," explains Shuowen Jin, Marie Curie Fellow at the Cosmic Dawn Center (DAWN) and lead author of the current study.

Exactly what the "future" of the galaxy group CGG-z5 will be is, of course, unknown. Rather than forming a single galaxy, the group could evolve into a large cluster of galaxies at later times. Yet another possibility is that the members are in reality not as closely packed as they seem, but instead form part of a filamentary structure that we just happen to view from one end to the other.

Help from supercomputer simulations

To distinguish between these scenarios, more precise observations involving the more time-consuming spectroscopy are needed. But in the meantime, help is available from supercomputer simulations:

"To better understand the nature and evolution of CGG-z5, we searched for similar structures in large-scale, hydrodynamical simulations," says Aswin Vijiayan, Postdoctoral Fellow at the Cosmic Dawn Center who conducted the simulation analysis in the study. "We found 14 structures that match closely the physical properties of our observed group CGG-z5, and then traced the evolution of these structures through time in the simulations, from the early Universe to the present epoch.

Although these 14 structures evolve in different ways, they all share the same fate: roughly 0.5 to 1 billion years later, they merge to form a single galaxy that, by the time the Universe is half its current age, has a mass comparable to that of our own Milky Way.

"Given the predictions of the simulations, it is therefore tempting to speculate that the CGG-z5 system will also follow a similar evolutionary path and that we captured the process of small galaxies assembling into a single massive galaxy," Shuowen Jin concludes.

"Interestingly, the number of these early groups like CGG-z5 in a given volume of space is similar to the number of massive galaxies at later cosmic times", says Georgios Magdis, associate professor at DAWN and partaker in the study. "This makes merging groups appealing as the main progenitors of massive galaxies at later epochs".

Large samples and further work are needed to verify this picture.