Caltech research demonstrates how the use of molecules in quantum computing could lead to fewer errors

The technology behind the quantum computers of the future is developing fast, with several different approaches in progress. Many of the strategies, or "blueprints," for quantum computers rely on atoms or artificial atom-like electrical circuits. In a new theoretical study in the journal Physical Review X, a group of physicists at Caltech demonstrates the benefits of a lesser-studied approach that relies not on atoms but on molecules.

"In the quantum world, we have several blueprints on the table and we are simultaneously improving all of them," says lead author Victor Albert, the Lee A. DuBridge Postdoctoral Scholar in Theoretical Physics. "People have been thinking about using molecules to encode information since 2001, but now we are showing how molecules, which are more complex than atoms, could lead to fewer errors in quantum computing."

At the heart of quantum computers are what are known as qubits. These are similar to the bits in classical computers, but unlike classical bits they can experience a bizarre phenomenon known as superposition, in which they exist in two or more states at once. Like the cat in the famous Schrödinger's cat thought experiment, which is both dead and alive at the same time, a qubit in superposition takes on multiple states simultaneously. This phenomenon is at the heart of quantum computing: because qubits can take on many forms at once, they have exponentially more computing power than classical bits.

[Image: An illustration showing a molecule in a state of superposition.]

But the state of superposition is a delicate one, as qubits are prone to collapsing out of their desired states, and this leads to computing errors.

"In classical computing, you have to worry about the bits flipping, in which a '1' bit goes to a '0' or vice versa, which causes errors," says Albert. "This is like flipping a coin, and it is hard to do. But in quantum computing, the information is stored in fragile superpositions, and even the quantum equivalent of a gust of wind can lead to errors."

However, if a quantum computer platform uses qubits made of molecules, the researchers say, these types of errors are more likely to be prevented than in other quantum platforms. One concept behind the new research comes from work performed nearly 20 years ago by Caltech researchers John Preskill, Richard P. Feynman Professor of Theoretical Physics and director of the Institute of Quantum Information and Matter (IQIM), and Alexei Kitaev, the Ronald and Maxine Linde Professor of Theoretical Physics and Mathematics at Caltech, along with their colleague Daniel Gottesman (PhD '97) of the Perimeter Institute in Ontario, Canada. Back then, the scientists proposed a loophole that would provide a way around a phenomenon called Heisenberg's uncertainty principle, which was introduced in 1927 by German physicist Werner Heisenberg. The principle states that one cannot simultaneously know with very high precision both where a particle is and where it is going.

"There is a joke where Heisenberg gets pulled over by a police officer who says he knows Heisenberg's speed was 90 miles per hour, and Heisenberg replies, 'Now I have no idea where I am,'" says Albert.

The uncertainty principle is a challenge for quantum computers because it implies that the quantum states of the qubits cannot be known well enough to determine whether or not errors have occurred. However, Gottesman, Kitaev, and Preskill figured out that while the exact position and momentum of a particle could not be measured, it was possible to detect very tiny shifts to its position and momentum. These shifts could reveal that an error has occurred, making it possible to push the system back to the correct state. This error-correcting scheme, known as GKP after its discoverers, has recently been implemented in superconducting circuit devices.
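The flavor of the GKP trick can be sketched in a few lines of code. This is a classical caricature, not the actual quantum protocol: logical information lives on a grid of position values, and measuring the displacement modulo the grid spacing reveals a small error without revealing which grid point, and hence which logical value, the state occupies.

```python
import math

# In the GKP code, grid points are spaced by sqrt(pi); this toy uses
# the same spacing purely for flavor.
SPACING = math.sqrt(math.pi)

def measure_shift(position):
    """Return the displacement from the nearest grid point.

    This reveals only the small error, not which grid point
    (i.e. which logical value) the state occupies."""
    shift = position % SPACING
    if shift > SPACING / 2:
        shift -= SPACING
    return shift

def correct(position):
    """Undo a small shift by pushing the state back onto the grid."""
    return position - measure_shift(position)

# A state at grid point 3*SPACING suffers a small kick of +0.1:
noisy = 3 * SPACING + 0.1
assert abs(measure_shift(noisy) - 0.1) < 1e-9
assert abs(correct(noisy) - 3 * SPACING) < 1e-9
```

The key point mirrored here is that the correction only ever needs the small shift, which is measurable, never the full position, which the uncertainty principle forbids knowing alongside the momentum.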

"Errors are okay but only if we know they happen," says Preskill, a co-author on the Physical Review X paper and also the scientific coordinator for a new Department of Energy-funded science center called the Quantum Systems Accelerator. "The whole point of error correction is to maximize the amount of knowledge we have about potential errors."

In the new paper, this concept is applied to rotating molecules in superposition. If the orientation or angular momentum of a molecule shifts by a small amount, those shifts can be detected and corrected simultaneously.

"We want to track the quantum information as it's evolving under the noise," says Albert. "The noise is kicking us around a little bit. But if we have a carefully chosen superposition of the molecules' states, we can measure both orientation and angular momentum as long as they are small enough. And then we can kick the system back to compensate."

Jacob Covey, a co-author on the paper and former Caltech postdoctoral scholar who recently joined the faculty at the University of Illinois, says that it might eventually be possible to individually control molecules for use in quantum information systems such as these. He and his team have made strides in using optical laser beams, or "tweezers," to control single neutral atoms (neutral atoms are another promising platform for quantum-information systems).

"The appeal of molecules is that they are very complex structures that can be very densely packed," says Covey. "If we can figure out how to utilize molecules in quantum computing, we can robustly encode information and improve the efficiency with which qubits are packed."

Albert says that the trio provided the perfect combination of theoretical and experimental expertise to achieve the latest results: he and Preskill are both theorists, while Covey is an experimentalist. "It was really nice to have somebody like John to help me with the framework for all this theory of error-correcting codes, and Jake gave us crucial guidance on what is happening in labs."

Says Preskill, "This is a paper that no one of the three of us could have written on our own. What's really fun about the field of quantum information is that it's encouraging us to interact across some of these divides, and Caltech, with its small size, is the perfect place to get this done."

Artificial intelligence learns continental hydrology

The complex distribution of continental water masses in South America has been determined with a new deep-learning method using satellite data

Changes in the water masses stored on the continents can be detected with the help of satellites. The data sets on the Earth's gravitational field required for this come from the GRACE and GRACE-FO satellite missions. Because these data sets capture only large-scale mass anomalies, they permit no conclusions about small-scale structures, such as the actual distribution of water masses in rivers and river branches. Using the South American continent as an example, Earth system modelers at the German Research Centre for Geosciences (GFZ) have developed a new deep-learning method that quantifies small-scale as well as large-scale changes in water storage from satellite data. The method combines deep learning, hydrological models, and Earth observations from gravimetry and altimetry.

[Image caption: Comparison of monthly-mean terrestrial water storage anomalies (TWSAs) in selected months of the prediction year 2019. Credit: Irrgang et al. 2020, Geophysical Research Letters]

So far, it is not precisely known how much water a continent really stores. The continental water masses are also constantly changing, affecting the Earth's rotation and acting as a link in the water cycle between atmosphere and ocean. Amazon tributaries in Peru, for example, carry huge amounts of water in some years but only a fraction of that in others. In addition to the water masses of rivers and other bodies of fresh water, considerable amounts of water are also found in soil, snow, and underground reservoirs, which are difficult to quantify directly.

Now a research team led by first author Christopher Irrgang has developed a new method to infer the stored water quantities of the South American continent from the coarsely resolved satellite data. "For this so-called downscaling, we use a convolutional neural network, or CNN, in connection with a newly developed training method," Irrgang says. "CNNs are particularly well suited to processing spatial Earth observations, because they can reliably extract recurrent patterns such as lines, edges, or more complex shapes and characteristics."
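As a rough illustration of the pattern extraction Irrgang describes, a single convolutional filter sliding over a toy field responds only where a spatial feature such as an edge is present. This is a minimal sketch; the study's CNN learns many such filters from data.

```python
import numpy as np

def conv2d(field, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = field.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(field[i:i + kh, j:j + kw] * kernel)
    return out

# A toy "water storage" field with a sharp wet/dry boundary:
field = np.zeros((5, 6))
field[:, 3:] = 1.0

# A horizontal-gradient kernel responds only at that boundary:
edge_kernel = np.array([[-1.0, 1.0]])
response = conv2d(field, edge_kernel)

# Nonzero responses line up with the wet/dry edge (output column 2):
assert np.all(response[:, 2] == 1.0)
assert np.all(np.delete(response, 2, axis=1) == 0.0)
```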

To learn the connection between continental water storage and the corresponding satellite observations, the CNN was trained with simulation data from a numerical hydrological model covering the period from 2003 to 2018. Data from satellite altimetry in the Amazon region were additionally used for validation. Remarkably, the CNN continuously self-corrects and self-validates in order to make the most accurate statements possible about the distribution of water storage. "This CNN therefore combines the advantages of numerical modeling with high-precision Earth observation," according to Irrgang.
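The training idea, learning a coarse-to-fine map from simulated pairs and then applying it to new coarse observations, can be sketched with a plain linear operator standing in for the CNN. All names and dimensions below are illustrative assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the training data: a hydrological model
# supplies paired (coarse, fine) water-storage fields. The study uses
# a CNN; a least-squares linear map keeps this sketch self-contained.
n_months, coarse_dim, upsampling = 120, 4, 4
fine_dim = coarse_dim * upsampling

coarse = rng.normal(size=(n_months, coarse_dim))        # GRACE-like fields
R = np.repeat(np.eye(coarse_dim), upsampling, axis=1)   # true coarse-to-fine map
fine = coarse @ R                                       # model "truth"

# Fit the downscaling operator on the simulated pairs:
W, *_ = np.linalg.lstsq(coarse, fine, rcond=None)

# Apply it to a new coarse observation, as done for the 2019 predictions:
new_coarse = rng.normal(size=coarse_dim)
prediction = new_coarse @ W
assert np.allclose(prediction, new_coarse @ R)
```

The sketch recovers the true operator exactly only because the toy data are noiseless and linear; the appeal of a CNN is that it can learn nonlinear, spatially local structure that no single linear map captures.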

The researchers' study shows that the new deep-learning method is particularly reliable for the tropical regions north of 20° S latitude on the South American continent, where rain forests, vast surface waters, and large groundwater basins are located, and likewise for the groundwater-rich western part of South America's southern tip. The downscaling works less well in dry and desert regions. This can be explained by the comparatively low variability of the already low water storage there, which has only a marginal effect on the training of the neural network. For the Amazon region, however, the researchers were able to show that the forecast of the validated CNN was more accurate than that of the numerical model used.

In the future, large-scale as well as regional analyses and forecasts of the global continental water storage will be urgently needed. The further development of numerical models, combined with innovative deep-learning methods, will play an increasingly important role in gaining a comprehensive insight into continental hydrology. Aside from purely geophysical investigations, there are many other possible applications, such as studying the impact of climate change on continental hydrology, identifying stress factors for ecosystems such as droughts or floods, and developing water management strategies for agricultural and urban regions.

A.I. tool promises faster, more accurate Alzheimer's diagnosis

Stevens researchers use explainable A.I. to address trustability of A.I. systems in the medical field

By detecting subtle differences in the way that Alzheimer's sufferers use language, researchers at Stevens Institute of Technology have developed an A.I. algorithm that promises to accurately diagnose Alzheimer's without the need for expensive scans or in-person testing. The software not only can diagnose Alzheimer's, at negligible cost, with more than 95 percent accuracy but is also capable of explaining its conclusions, allowing physicians to double-check the accuracy of its diagnosis.

"This is a real breakthrough," said the tool's creator, K.P. Subbalakshmi, founding director of Stevens Institute of Artificial Intelligence and professor of electrical and computer engineering at the Charles V. Schaeffer School of Engineering. "We're opening an exciting new field of research, and making it far easier to explain to patients why the A.I. came to the conclusion that it did while diagnosing patients. This addresses the important question of trustability of A.I. systems in the medical field."

It has long been known that Alzheimer's can affect a person's use of language. People with Alzheimer's typically replace nouns with pronouns, such as by saying "He sat on it" rather than "The boy sat on the chair." Patients might also use awkward circumlocutions, saying "My stomach feels bad because I haven't eaten" instead of simply "I'm hungry." By designing an explainable A.I. engine that uses attention mechanisms and convolutional neural networks, a form of A.I. that learns over time, Subbalakshmi and her students were able to develop software that could not only accurately identify well-known telltale signs of Alzheimer's, but also detect subtle linguistic patterns previously overlooked.
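One of the well-known markers above, pronoun substitution, is simple enough to caricature in code. The sketch below is a hypothetical word-list counter, not the team's method, which learns its features rather than using fixed lists.

```python
# A crude, hypothetical marker: Alzheimer's patients tend to substitute
# pronouns for nouns, so an unusually high pronoun share can be a weak
# warning sign. Real systems learn far subtler patterns than this.
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them", "this", "that"}

def pronoun_ratio(text):
    """Fraction of words in `text` that are (listed) pronouns."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in PRONOUNS for w in words) / len(words)

assert pronoun_ratio("The boy sat on the chair") == 0.0
assert pronoun_ratio("He sat on it") == 0.5
```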

Subbalakshmi and her team trained the algorithm using texts produced by both healthy subjects and known Alzheimer's sufferers as they described a drawing of children stealing cookies from a jar. Using tools developed by Google, the team converted each individual sentence into a unique numerical sequence, or vector, representing a specific point in a 512-dimensional space.

Such an approach allows even complex sentences to be assigned a concrete numerical value, making it easier to analyze structural and thematic relationships between sentences. By using those vectors along with handcrafted features, those that subject-matter experts have identified, the A.I. system gradually learned to spot similarities and differences between sentences spoken by healthy or unhealthy subjects, and thus to determine with remarkable accuracy how likely any given text was to have been produced by an Alzheimer's sufferer.

"This is absolutely state-of-the-art," said Subbalakshmi, who presented her work, in collaboration with her doctoral students Mingxuan Chen and Ning Wang, on Aug. 24 at the 19th International Workshop on Data Mining in Bioinformatics at BioKDD. "Our A.I. software is the most accurate diagnostic tool currently available while also being explainable."

The system can also easily incorporate new criteria that may be identified by other research teams in the future, so it will only get more accurate over time. "We designed our system to be both modular and transparent," Subbalakshmi explained. "If other researchers identify new markers of Alzheimer's, we can simply plug those into our architecture to generate even better results."

In theory, A.I. systems could one day diagnose Alzheimer's based on any text, from a personal email to a social media post. First, though, an algorithm would need to be trained using many different kinds of texts produced by known Alzheimer's sufferers, rather than just picture descriptions, and that kind of data isn't yet available. "The algorithm itself is incredibly powerful," Subbalakshmi said. "We're only constrained by the data available to us."

In the coming months, Subbalakshmi hopes to gather new data that will allow her software to be used to diagnose patients based on speech in languages other than English. Her team is also exploring the ways that other neurological conditions -- such as aphasia, stroke, traumatic brain injuries, and depression -- can affect language use. "This method is definitely generalizable to other diseases," said Subbalakshmi. "As we acquire more and better data, we'll be able to create streamlined, accurate diagnostic tools for many other illnesses too."