Japanese researchers demo quantum supercomputer memory resilient against errors

Quantum supercomputing holds the potential to be a game-changing future technology in fields ranging from chemistry to cryptography to finance to pharmaceuticals. Scientists suggest that quantum computers could operate many thousands of times faster than conventional computers. To harness this power, scientists today are looking at ways to construct quantum computer networks. Fault-tolerant quantum memory, which responds well when hardware or software malfunctions occur, will play an important role in these networks. A research team from Yokohama National University is exploring quantum memory that is resilient against operational or environmental errors.

For quantum supercomputers to reach their full potential, scientists need to be able to construct quantum networks, and in these networks, fault-tolerant quantum memory is essential. When scientists manipulate spin quantum memory, a magnetic field is usually required, but that field hinders integration with superconducting quantum bits, or qubits. Qubits are the basic units of information in quantum computing, analogous to the binary digits, or bits, of conventional computers. Nitrogen-vacancy (NV) centers in diamond can serve as quantum memories and can be error-correction coded to correct errors automatically.

To scale up a quantum supercomputer based on superconducting qubits, scientists need to operate under a zero magnetic field. In their search to further the technology toward a fault-tolerant quantum computer, the research team studied nitrogen-vacancy centers in diamond, which hold promise in a range of applications including quantum computing. Using a diamond nitrogen-vacancy center together with two nuclear spins of the surrounding carbon isotopes, the team demonstrated quantum error correction in quantum memory. They tested a three-qubit quantum error correction code against both bit-flip and phase-flip errors under a zero magnetic field; such errors can occur when the magnetic field changes. To achieve a zero magnetic field, the team used a three-dimensional coil to cancel out the residual magnetic field, including the geomagnetic field. The resulting quantum memory is error-correction coded to correct errors automatically as they occur.
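The three-qubit code the team tested can be illustrated with a toy simulation. The sketch below is not the team's NV-center implementation; it is the textbook three-qubit bit-flip repetition code simulated on an 8-dimensional state vector in plain NumPy (the phase-flip code is the same construction in a rotated basis). The function names and the chosen error location are illustrative.

```python
import numpy as np

def apply_x(state, qubit):
    """Apply a Pauli-X (bit flip) to one qubit of a 3-qubit state vector."""
    new = np.zeros_like(state)
    for idx in range(8):
        new[idx ^ (1 << qubit)] = state[idx]
    return new

def encode(alpha, beta):
    """Encode a|0> + b|1> into the repetition code a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def syndrome(state):
    """Read the parities Z0Z1 and Z1Z2 (non-destructive for code states)."""
    idx = int(np.argmax(np.abs(state)))  # any populated basis state works:
    # both branches of the superposition carry the same error pattern
    b = [(idx >> q) & 1 for q in range(3)]
    return (b[0] ^ b[1], b[1] ^ b[2])

def correct(state):
    """Look up which qubit flipped from the syndrome and undo the flip."""
    table = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    q = table[syndrome(state)]
    return state if q is None else apply_x(state, q)

alpha, beta = 0.6, 0.8          # an arbitrary logical qubit
logical = encode(alpha, beta)
noisy = apply_x(logical, 1)     # a bit-flip error on the middle qubit
recovered = correct(noisy)
assert np.allclose(recovered, logical)
```

The point the example makes is the one in the article: a single bit-flip on any of the three physical qubits leaves a distinctive syndrome, so the error can be undone automatically without ever measuring (and destroying) the stored superposition.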

Previous research had demonstrated quantum error correction, but it was all carried out under relatively strong magnetic fields. The Yokohama National University research team is the first to demonstrate the quantum operation of the electron and nuclear spins in the absence of a magnetic field.

The team members are Takaya Nakazato, Raustin Reyes, Nobuaki Imaike, Kazuyasu Matsuda, and Kazuya Tsurumoto of the Department of Physics, Graduate School of Engineering Science, Yokohama National University in Yokohama, Japan; Yuhei Sekiguchi of the Institute of Advanced Sciences, Yokohama National University; and Hideo Kosaka, who works at both the Graduate School of Engineering Science and the Institute of Advanced Sciences, Yokohama National University.

“The quantum error correction makes quantum memory resilient against operational or environmental errors without the need for magnetic fields and opens a way toward distributed quantum computation and a quantum internet with memory-based quantum interfaces or quantum repeaters,” said Hideo Kosaka, a professor at Yokohama National University and lead author on the study.

The team’s demonstration can be applied to the construction of a large-scale distributed quantum supercomputer and a long-haul quantum communication network by connecting quantum systems that are vulnerable to a magnetic field, such as superconducting qubits, with spin-based quantum memories. Looking ahead, the research team plans to take the technology a step further. “We want to develop a quantum interface between superconducting and photonic qubits to realize a fault-tolerant large-scale quantum computer,” said Kosaka.

University of Groningen computer scientist develops new algorithm that should help app developers who respect the privacy of their users

The last thing you want to do when installing a new, free app on your phone is to scroll through pages of information on what kind of access to your personal information it requires. App builders count on this, and their intrusive apps harvest data that they can then sell. That is why computer scientist Fadi Mohsen of the University of Groningen in the Netherlands, together with colleagues from the University of Michigan-Flint (USA) and the Palestinian An-Najah National University, has developed an algorithm that ranks similar apps by privacy score. A description of the system was published in the journal Concurrency and Computation: Practice and Experience on September 2.

When you are installing an app, it has to tell you which information it will access. “However, users don’t pay much attention to this as a rule,” says Mohsen. “They are, generally speaking, the weakest link in privacy protection. That is why we wanted to develop a system to mitigate intrusive apps that reduces the reliance on the attention and understanding of the users.” Mohsen is a research scientist in the Computer Science department of the Bernoulli Institute, Faculty of Science and Engineering, University of Groningen, the Netherlands, and first author of the paper in Concurrency and Computation: Practice and Experience.

Functionality

Mohsen and his colleagues collected data on more than one million apps from the Google Play Store to use in demo systems and experiments. “We rely on features that we extracted from the metadata of these apps and their configuration/manifest files. Additionally, we built a web-based interface to collect the privacy preferences of users.” Their method scores applications on these features and on users’ preferences. The score reflects how intrusive each application is relative to other apps in the same category and is used to rank the applications.

Next, the scientists built a trial search engine for finding new apps that incorporates their methodology. The apps shown at the top of the list are the least intrusive. Mohsen: “A normal search will rank the apps by their functionality. Our engine compares apps with similar functionalities on their privacy score.” So the app at the top of the list will respect your privacy the most.

Advertising

The ranking algorithm considers two scores: one for permissions and one for listeners. The former measures how much access an application is granted on the user’s phone, such as reading SMS messages, using the calendar, or even deleting pictures. The latter measures an app’s ability to keep track of events on the user’s phone, such as whether the user is present or a new SMS message has arrived. “The information that is gathered by these free apps can be sold, for example to companies who produce targeted advertising,” Mohsen explains. The system that he and his colleagues have devised could help users to avoid the most intrusive apps without having to read all of the privacy information.
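At a high level, the two-score ranking could be sketched as follows. This is not the published algorithm: the weights, permission and listener names, preference scheme, and sample apps below are all invented for illustration, and the real method extracts many more features from app metadata and manifest files and scores apps relative to their category.

```python
# Hypothetical illustration of permission/listener-based privacy ranking.
# All weights and app data are invented; the published method differs in detail.

PERMISSION_WEIGHT = {"READ_SMS": 3, "READ_CALENDAR": 2, "DELETE_PHOTOS": 3, "INTERNET": 1}
LISTENER_WEIGHT = {"SMS_RECEIVED": 2, "USER_PRESENT": 1}

def privacy_score(app, prefs):
    """Higher score = more intrusive; weighted by the user's sensitivity prefs."""
    p = sum(prefs.get(x, 1) * PERMISSION_WEIGHT.get(x, 1) for x in app["permissions"])
    l = sum(prefs.get(x, 1) * LISTENER_WEIGHT.get(x, 1) for x in app["listeners"])
    return p + l

def rank(apps, prefs):
    """Least intrusive apps first, as in the trial search engine."""
    return sorted(apps, key=lambda a: privacy_score(a, prefs))

apps = [
    {"name": "FlashlightPlus", "permissions": ["READ_SMS", "INTERNET"],
     "listeners": ["SMS_RECEIVED"]},
    {"name": "SimpleTorch", "permissions": ["INTERNET"], "listeners": []},
]
prefs = {"READ_SMS": 3}  # this user is very sensitive about SMS access
print([a["name"] for a in rank(apps, prefs)])  # least intrusive first: SimpleTorch
```

The design point is the one Mohsen describes: functionally similar apps end up adjacent in the results, and the privacy score, not functionality, breaks the tie, so the user never has to read the permission pages themselves.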

The website and search engine were tested by a group of test subjects. “The results show that they found the system for setting up their permission preferences easy to use. They also said that they would value it if app stores took their preferences into account when recommending certain apps,” says Mohsen. This suggests that the approach could be effective in helping users to choose apps that respect their privacy.

Google

Ideally, companies like Google could use this system in the search engine for their app store. However, another option is to create a website like the one built for this study, where users can express their preferences on privacy issues and then look for apps via the website’s search engine. Mohsen: "Such dedicated websites are quite normal these days, so the approach is viable."

In the meantime, Mohsen is looking at other privacy issues. "We are developing a system that monitors apps after installation. In some cases, updates can require extra permissions from the users." In the end, the systems that he creates should give privacy-respecting apps an advantage over the more intrusive ones. "Our aim is to help app developers who respect the privacy of their users."

Spanish researcher uses machine learning tools to create a new model to assess ICU patients' mortality risk

A research team led by Dr. Rosario Delgado of the Department of Mathematics at the Autonomous University of Barcelona (UAB), Spain, in collaboration with the Hospital de Mataró, developed a new machine learning-based model that predicts the risk of mortality of intensive care unit patients according to their characteristics. The research was published in the latest edition of the journal Artificial Intelligence in Medicine, with a special mention as a "Position paper".

Within the framework of artificial intelligence, machine learning allows a model to gain knowledge from available historical data and to update itself automatically when new information appears. One of the current challenges is creating models with which to make personalized medical predictions. One of the areas in which artificial intelligence can be of great help is in deciding how to proceed with intensive care unit (ICU) patients. This process is complex, comes at a high cost, and depends on the inherent variability of the opinions of specialists, based on their experience and instinct. Therefore, to improve the quality of care in ICUs, it is important to set down protocols based on objective data and to accurately predict a patient's risk of mortality according to their characteristics. In this sense, machine learning tools may be of great help to medical experts.

A group of researchers led by Dr. Rosario Delgado from the Department of Mathematics of the UAB, in collaboration with Head of the ICU at Hospital de Mataró Dr. Juan Carlos Yébenes, UAB associate lecturer Àngel Lavado from the Information Management Unit of the Maresme Health Consortium, and José David Núñez-González, Ph.D. student of the UAB Department of Mathematics, used machine learning tools to create a model capable of predicting the risk of mortality of ICU patients, based on a real database which also served to validate the model. The model will aid in the decision-making process of healthcare workers by improving the prediction of premature deaths, making medical decisions about high-risk patients more efficient, evaluating the effectiveness of new treatments, and detecting changes in clinical practices.

The use of this model represents a clear improvement over traditional approaches, which predict the risk of mortality from the Acute Physiology And Chronic Health Evaluation (APACHE) score - a questionnaire widely used to assess a person's state of health with the help of different indicators - by means of a logistic regression estimated and validated on previous groups of patients. The researchers were able to demonstrate experimentally that their new model overcomes the weak points of the traditional approaches, offering good results and presenting itself as a better alternative.

The predictive self-learning prognosis model created by the researchers consists of a set of Bayesian classifiers that assign a life prognosis label (live or die) to each individual, according to demographic traits such as gender and age; the Charlson comorbidity index; their place of origin; the cause of admission; the presence or absence of sepsis; the severity reached in the first 24 hours after admission; and the APACHE II score.

The researchers improved the model's prediction by combining the individual predictions of the classifiers, designed so that the faults of some predictions can be compensated by other, correct predictions, and by taking into account the imbalance caused by the low proportion of patients who die in the ICU. The model predicts the cause of death for patients at high risk and the outcome for patients at low risk of dying. This type of model is known as a hierarchical predictive model, given that there are two stages of prediction.
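The idea of combining several Bayesian classifiers so that one classifier's mistakes can be outvoted by the others can be sketched as below. This is only a minimal illustration of the general technique, not the published model: the patient data, the feature subsets, and the "died" labeling rule are synthetic, and the real model uses actual clinical traits, explicit imbalance handling, and a second hierarchical prediction stage.

```python
import numpy as np

class NaiveBayes:
    """Minimal Bernoulli naive Bayes classifier with Laplace smoothing."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.prior = np.array([(y == c).mean() for c in self.classes])
        # P(feature = 1 | class), Laplace-smoothed so probabilities stay in (0, 1)
        self.theta = np.array([(X[y == c].sum(0) + 1) / ((y == c).sum() + 2)
                               for c in self.classes])
        return self
    def predict(self, X):
        logp = np.log(self.prior) + X @ np.log(self.theta).T \
               + (1 - X) @ np.log(1 - self.theta).T
        return self.classes[np.argmax(logp, axis=1)]

def ensemble_predict(models, subsets, X):
    """Majority vote over classifiers trained on different feature subsets,
    so a mistake by one classifier can be compensated by the others."""
    votes = np.array([m.predict(X[:, s]) for m, s in zip(models, subsets)])
    return (votes.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(0)
# Synthetic binary patient features (e.g. sepsis yes/no, high severity, ...)
X = rng.integers(0, 2, size=(400, 6))
y = (X[:, 0] & X[:, 1]) | X[:, 2]          # invented "died" label rule
subsets = [[0, 1], [2, 3], [0, 2, 4, 5]]   # hypothetical trait groupings
models = [NaiveBayes().fit(X[:, s], y) for s in subsets]
pred = ensemble_predict(models, subsets, X)
print("training accuracy:", (pred == y).mean())
```

Each classifier here sees only part of the evidence and errs in different places; the vote aggregates them, which is the same compensation effect the researchers exploit when combining their Bayesian classifiers.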

“The hierarchical predictive prognosis model we have introduced has good predictive behavior, and it also allows us to study which of the patient's traits are the most decisive in assessing their risk of death, and which can become risk factors. It can also be extrapolated to compare different ICUs, or used in a longitudinal study to analyze improvements over time as protocols are introduced in specific ICUs,” explains Dr. Rosario Delgado. “This is a useful and promising methodology, and it has important clinical applicability from the moment it can help physicians make patient-tailored medical decisions, and also help health authorities in their management of available resources,” she concludes.