Noninteracting energy dispersions in the honeycomb (left) and pi-flux (right) lattices

Scientists from the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, have used the powerful K supercomputer, along with a number of other supercomputers, to perform a large-scale analysis of the behavior of electrons in a material transitioning from a metallic to an insulating phase. Their analysis, just published in Physical Review X, confirmed that there is a direct transition between the two phases, and that the behavior is tied to the loss of mass of the electrons—called Dirac electrons—as they become correlated with one another.

Most materials maintain a certain property—either conducting or insulating—consistently, but there is a class of materials that can transition from one state to the other, driven either by environmental changes, such as pressure or magnetic field, or by “doping” with other materials. How this happens has long been a mystery. Hints came from the fact that “band theory”—which explains the properties of materials by looking at the bands occupied by electrons and the difficulties of jumping from one to another—gave wrong predictions for certain classes of materials. Nevill Mott, in work that earned him the 1977 Nobel Prize in Physics, came to understand that this inaccuracy was due to the correlation between electrons in these materials, which are affected by the repulsive Coulomb interactions between them.

To shed new light on this problem, the team modeled large numbers of electrons on two lattices with about 100 to 3,000 sites, using variations of the Hubbard model, which has been successful in describing such materials. They found that, in contradiction to theories positing an intermediate phase, there is in fact a direct transition from metal to insulator as the correlation between the electrons increases, and the large-scale simulation enabled them to determine, with new accuracy, when the transition takes place. They also discovered that the key element was the loss of mass of the Dirac electrons; the speed at which they traveled seemed unrelated to the transition.
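As a concrete illustration of why these are called "massless Dirac electrons", the noninteracting honeycomb tight-binding dispersion can be computed in a few lines. This is an illustrative sketch (not the authors' quantum Monte Carlo code), with the hopping amplitude t and the nearest-neighbour distance set to 1:

```python
import numpy as np

# Noninteracting honeycomb tight-binding bands E(k) = ±t|f(k)|.
# The energy vanishes linearly at the Dirac points, which is what
# makes the low-energy electrons behave as massless Dirac fermions.

t = 1.0
# Nearest-neighbour vectors of the honeycomb lattice (unit bond length)
deltas = np.array([[0.0, 1.0],
                   [np.sqrt(3) / 2, -0.5],
                   [-np.sqrt(3) / 2, -0.5]])

def dispersion(kx, ky):
    """Magnitude of the band energy at momentum (kx, ky); bands are ±this."""
    k = np.array([kx, ky])
    f = np.sum(np.exp(1j * (deltas @ k)))
    return t * abs(f)

# At the Dirac point K = (4*pi/(3*sqrt(3)), 0) the gap closes:
K = (4 * np.pi / (3 * np.sqrt(3)), 0.0)
print(dispersion(*K))  # ≈ 0: band touching, i.e. a massless Dirac point
```

At the zone centre the bands are fully split (E = 3t), while at K they touch; interactions in the Hubbard model can open a gap there, giving the Dirac electrons a mass.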

The authors also discussed the fact that this metal-insulator transition, expected to arise in graphene-like materials, can be understood in the language of particle physics, even though electrons in graphene and elementary particles such as quarks have totally different energy scales. In this language, the transition they elucidated corresponds to the breaking of the so-called chiral Heisenberg symmetry in the celebrated Gross-Neveu model, introduced more than 40 years ago.

According to Yuichi Otsuka, the first author of the work, “Our study determines for the first time the universality class of the metal-insulator transition of interacting Dirac electrons. We expect that our findings will be relevant not only to condensed matter materials but also to Dirac fermions in particle physics.” He continued, “There are still many things we do not understand about interaction-driven insulators, and research in this area may help us elucidate those mysteries.”

Research from the University of Adelaide promises advances in flood-risk planning, thanks to a new, faster method of assessing the highly complex factors that cause floods at a specific location.

The results of the study, published in this month's issue of the Journal of Hydrology, show that it is possible to speed up highly accurate flood risk prediction by a factor of 100 to 1,000 compared with techniques currently used by researchers to estimate flood risk under climate change.

"Engineering companies and local councils involved in flood risk assessment and infrastructure planning have a major challenge ahead for them, and that's driven by climate change," says Associate Professor Mark Thyer, from the University's School of Civil, Environmental and Mining Engineering. He led the research team, which also included collaborators from the School of Mathematical Sciences at the University of Adelaide and the School of Engineering at the University of Newcastle.

"Approaches typically used by industry for flood risk assessment have been based on information about historical flood events. But climate change will eventually make that method obsolete, because with a change in climate those historical events start to become more irrelevant as predictors of future flood activity," he says.

"The other main contender for predicting flood events under climate change, called continuous simulation, can be incredibly slow, as it uses long-term rainfall sequences spanning hundreds of years, taking into account climate variability and its impact on the catchment processes that drive major flood events. This can take anywhere from weeks to months to generate an accurate prediction for a single catchment," he says.

The new method tested by the research team aims to provide a highly accurate assessment at a much faster rate. The method, known as hybrid causative events (HCE), relies on an algorithm that strips out the unnecessary information processed by the slower continuous simulation approach, such as long dry periods without rainfall.

"Our new predictive method focuses on the key causative events that drive major floods such as high catchment saturation and extreme rainfall events. By extracting these key drivers, we realised we don't need to run the catchment model for all the long-term dry periods. This greatly reduces the time taken for our modelling, while also maintaining the high level of accuracy we're seeking," Associate Professor Thyer says.

"So far our method has been tested in a virtual laboratory on eight different sites in Australia, ranging from desert to Mediterranean, tropical and sub-tropical climate zones. We've found it to be highly accurate at each location, and increasing the speed of the flood prediction by between 100-1000 times compared with continuous simulation."

While it might take another five years or so for this method to be available to industry, Associate Professor Thyer says the need for such predictions was highlighted during the 2012 flood at the city of Wagga Wagga, in the Australian state of New South Wales.

"An 11-metre levy had been built at Wagga based on historical flood data. In 2012, Wagga was hit with a flood that peaked at 10.8 metres, just 20 centimetres from the top of the levy. This narrowly averted disaster and was an excellent example of how flood prediction can help save properties and lives. We hope that our new, faster and more accurate predictive method will eventually have the same effect elsewhere in Australia and overseas," he says.

A microwave-free approach to superconducting quantum computing uses design principles gleaned from semiconductor spin qubits.

Builders of future superconducting quantum computers could learn a thing or two from semiconductors, according to a report in Nature Communications this week. By leveraging the good ideas of the natural world and the semiconductor community, researchers may be able to greatly simplify the operation of quantum devices built from superconductors. They call this a "semiconductor-inspired" approach and suggest that it can provide a useful guide to improving superconducting quantum circuits.

Superconducting quantum bits, or qubits, are circuits made from superconducting components--such as wires, capacitors or non-linear inductors--that have zero resistance to electrical current. Designing these circuits from scratch offers tremendous flexibility, and has gone a long way toward realizing a full-scale quantum computer. On the other hand, qubits found in semiconductor materials like ultra-pure silicon offer good properties for quantum computing, like long quantum memory times and fast two-qubit gates. These benefits come with constraints, but those constraints have led to creative solutions from the semiconductor community.

Yun-Pil Shim and Charles Tahan at the Laboratory for Physical Sciences and the University of Maryland in College Park are exploring whether ideas gleaned from semiconductor qubits may be useful in designing better approaches to superconducting quantum computers. As a first step, they considered applying novel control approaches to state-of-the-art superconducting qubits. They found that they could eliminate one of the most costly overheads for control--microwave sources--by using a solution developed in the semiconductor qubit community. Notably, they found an even more efficient implementation in superconducting qubits, making the approach easier to realize than the semiconductor original.

"If the community could mimic the great properties of semiconductor qubits in man-made superconducting circuits, they might be able to have the best of both worlds," Tahan says. "In a large sea of parameters sometimes the best guide is nature."

Qubits can be realized in many different physical platforms, such as a superconducting circuit or an electron's spin. Spin is a quantum property of particles that physicists often think of as a small magnet that will point along the direction of an applied magnetic field. A spin can point up or down, corresponding to the 0 or 1 of conventional bits, but it can also point horizontally. This results in a quantum "superposition" of 0 and 1, a key feature of qubits. In some systems, these spin qubits can carry quantum information robustly because they are unaffected by electrical charge, a common source of noise.

Spins and superconducting qubits are controlled in similar ways. In both, microwave radiation can drive transitions between the two levels of the qubit allowing for quantum logic gates. But semiconductor spin qubits are also different. They often have weak coupling to the environment, leading to long memory times but slow quantum gates. Additionally, spin qubits are quite small, making them susceptible to inadvertent crosstalk from nearby spins.

The semiconductor community has dealt with both problems by developing "all-electrical" approaches to quantum computation that represent one qubit with multiple physical spins. Operations on this "encoded" qubit are performed by pairwise interactions between the physical spins. This requires at least three spins per encoded qubit and a large number of physical pulses to achieve a single encoded gate--a costly overhead for quantum computing, especially when pulses aren't perfect.

Shim and Tahan show that an encoded qubit approach can work even better with superconducting qubits. In fact, they show that modern superconducting qubits called transmons or fluxmons, which can be tuned individually, require only two physical qubits per encoded qubit. More importantly, the encoded gate time and gate error don't change much. For example, while a controlled-NOT gate may take roughly 20 qubit-qubit interactions to accomplish in semiconductor spins, Shim and Tahan show that a similar two-qubit gate can be accomplished using only one two-qubit pulse. This means that all quantum logic gates can be performed with fast DC pulses instead of relying on microwave-driven qubit rotations.
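The two-physical-qubit encoding can be illustrated numerically. The sketch below assumes a simple exchange-type coupling H = (J/2)(XX + YY) between the two physical qubits (an idealized model for illustration, not the paper's exact circuit), and shows a single DC pulse rotating the encoded qubit with no microwave drive:

```python
import numpy as np

# Encode one logical qubit in two physical qubits: |0_L> = |01>, |1_L> = |10>.
# A DC-pulsed exchange-type coupling H = (J/2)(XX + YY) acts as a logical
# X rotation within this subspace, and does not leak into |00> or |11>.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
H = 0.5 * (np.kron(X, X) + np.kron(Y, Y))  # J = 1

def evolve(state, t):
    """Apply exp(-i H t) via eigendecomposition of the Hermitian H."""
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ state))

# Basis order: |00>, |01>, |10>, |11>
ket01 = np.array([0, 1, 0, 0], dtype=complex)  # logical |0_L>

out = evolve(ket01, np.pi / 2)  # one DC pulse of area J*t = pi/2
print(np.round(np.abs(out) ** 2, 6))  # population moves entirely to |10> = |1_L>
```

Pulsing the coupling for half that area instead gives an equal superposition of the two logical states, so arbitrary logical rotations come from pulse timing alone.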

The authors claim that their scheme can be implemented with current superconducting qubits and control methods, but there are still open questions. In the encoded scheme, initializing qubits may be noisy. And ubiquitous "transmon" qubits may be outperformed by newer qubit types like the "fluxmon" or "fluxonium."

Quantum computers must preserve qubits from outside interference for as long as a calculation proceeds. Despite rapid progress in the quality of superconducting qubits (qubit lifetimes now surpass 100 microseconds, up from tens of nanoseconds a decade ago), qubit gate error rates are still limited by loss in the metals, insulators, substrates and interfaces that make up these devices. These losses will also constrain the performance of the encoded scheme as proposed, and more progress on these fundamental device issues is still needed.

A major goal on the path to a full-scale quantum computer is the demonstration of "fault-tolerant" quantum error correction, where the error of physical quantum gates is reduced by repeated error correction on a "logical" qubit consisting of many physical qubits. Removing the need for microwave control, along with the other benefits of the encoded qubit proposal, could make realizing a logical qubit with superconducting qubits easier. While the authors believe that this work represents an advance, they suggest that additional progress can be made by looking closer still at spin qubits.


A unique new computer model built on highly complex mathematics could make it possible to design safer versions of the 'fast ships' widely used in search-and-rescue, anti-drugs, anti-piracy and many other vital offshore operations.

Travelling at 23 to 30 knots, fast ships are especially vulnerable to waves that amplify suddenly due to local weather and sea conditions. Extreme funnelling effects, for example, may turn waves a few metres high into dangerous waves tens of metres tall that can destabilise ships, resulting in damage, causing injuries and threatening lives.

Developed with Engineering and Physical Sciences Research Council (EPSRC) support at the University of Leeds by Dr Anna Kalogirou and Dr Vijaya Ambati with Professor Onno Bokhove, the new model produces unprecedentedly accurate animations and simulations that can show exactly how sea waves can affect fast ships. It highlights the importance of having accurate predictions of the pressure forces that these craft are subjected to, and could aid the design of fast ships better able to withstand the effects of rough seas.

The researchers can already simulate the complex interactions of sea waves that can lead to an anomalously high freak wave, but adding the motion of ships into the equation complicates matters significantly.

Dr Kalogirou said: "We have managed to develop a simulation tool that uses sophisticated mathematical methods and produces fast and accurate simulations of linear wave-ship interactions. Our tool can also provide measurements in terms of wave amplitudes around ships, as well as pressures on ships' surfaces."

The aim is to extend the model over the next three years to produce a tool that can be used extensively by ship designers and maritime engineers.

The model has been validated through laboratory experiments on a man-made freak or rogue wave (the so-called 'soliton splash') using test tanks. A comparison of wave and ship motion, for a ship moored on two anchors, has been set up in a small test tank that is also used for public demonstrations.
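The solitary waves behind such 'soliton splash' experiments have a classic closed form. As an illustrative aside (textbook shallow-water theory, not the Leeds group's model), a Korteweg-de Vries soliton keeps its shape while travelling at an amplitude-dependent speed:

```python
import numpy as np

# KdV soliton in water of depth h: a sech^2 hump of amplitude A that
# propagates without changing shape at speed c = sqrt(g*h)*(1 + A/(2*h)).
# Taller solitons travel faster, which is one route to freak-wave focusing.

g, h, A = 9.81, 1.0, 0.2  # gravity, depth and amplitude (metres)

def soliton(x, t):
    c = np.sqrt(g * h) * (1 + A / (2 * h))
    k = np.sqrt(3 * A / (4 * h ** 3))
    return A / np.cosh(k * (x - c * t)) ** 2

x = np.linspace(-20, 20, 2001)
c = np.sqrt(g * h) * (1 + A / (2 * h))

# The wave at time t is exactly the t = 0 profile shifted by c*t:
shifted = soliton(x - c * 2.0, 0.0)
print(np.max(np.abs(soliton(x, 2.0) - shifted)))  # ≈ 0: shape-preserving
```

Because a taller soliton overtakes a shorter one, collisions and funnelling can briefly stack wave crests, the kind of sudden amplification the ship-interaction model must resolve.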

Results from the project are being disseminated to a range of organisations including the Maritime Research Institute Netherlands (MARIN). A related European Industry Doctorate project with MARIN on rogue and breaking waves against offshore structures has strengthened Professor Bokhove's EPSRC-funded research on wave impact against ships, as well as his EU-funded work on fixed offshore structures.

Fast ships deliver all kinds of services in fields such as disaster response, the fight against crime, the provision of supplies for oil and gas platforms and the transportation of wind farm maintenance personnel. Each year, however, around 100 such ships worldwide are lost or damaged in heavy seas, with around 2,500 casualties in 2013.

Professor Bokhove says: "Describing mathematically the complex behaviour of waves and their interaction with fast ships, and then incorporating all of this into a robust computer model, has been very challenging. We're delighted to have provided further proof of how advanced mathematics can have real-world applications that help save money and safeguard lives."

Artificial intelligence meets massive repositories of multi-omics data linked to chemical compounds to streamline drug discovery R&D

Insilico Medicine, one of the leaders in advanced signaling pathway activation analysis and deep learning for aging and cancer research, is proud to announce the formation of its Pharmaceutical Artificial Intelligence division, focused on applying the latest advances in artificial intelligence to streamline drug discovery and drug repurposing and to significantly cut time to market.

"Since its inception Insilico Medicine is taking the umbrella view on aging research developing biomarkers and drug candidates in a broad number of fields. We collaborate with some of the largest pharmaceutical companies, cosmetics companies and academic institutions on a number of disease- or drug-specific projects. However, our focus on aging and "we do everything" approach is confusing for many of our customers and partners and with the launch of Pharma.AI as a division, we will highlight a core part of our business and explore the possibility of spinning it off as a separate company in the future", said Alex Zhavoronkov, PhD, CEO of Insilico Medicine, Inc.

With the exception of cancer immunology, most pharmaceutical companies are facing declining returns on their R&D investments and are not open to external innovation in early-stage development. Pharma.AI aims to bridge this gap by providing cutting-edge machine learning services delivered by an experienced team of bioinformatics and deep learning experts working with millions of drugs, annotated gene expression samples and blood biochemistry data sets that can be used to augment customers' data.

With 11 machine learning experts worldwide, the Pharma.AI team is developing deep-learning-based transcriptomic, proteomic and blood-biochemistry biomarkers of multiple diseases, predictors of alternative therapeutic uses for multiple drugs, and analytical tools for high-throughput screening. Pharmaceutical companies using the Broad Institute's Connectivity Map or LINCS project pipelines will find powerful analytical drug discovery tools readily available.

"I am happy to join Pharma.AI division as a research scientist focusing on interpreting the results of our AI analytical systems in skin care applications. I am already supporting this team as a geneticist, but with a full-time appointment I have a chance to help transform the pharmaceutical industry forever. I am also happy with Insilico Medicine's mission to empower women in emerging geographies and giving us visibility and skills that will be in high demand over the next two decades when many other jobs will be lost to automation", said Polina Mamoshina, senior research scientist at Insilico Medicine.
