CAPTION A crystal made of manganese and other elements that provides a strong hyperfine interaction between the nucleus and electrons is just a few millimeters wide. It is shown next to a 100 Yen coin for scale.

Researchers at the Okinawa Institute of Science and Technology Graduate University (OIST) have identified a system that could store quantum information for longer times, which is critical for the future of quantum computing. This study was recently published in Physical Review Letters.

Quantum computing -- which aims to use particles on the atomic scale to make calculations and store the results -- has the potential to solve some key problems much faster than current supercomputers.

To make quantum computing a reality, scientists must find a system that remains stable long enough to complete the calculations. Although this is an extremely short time frame, only thousandths of a second, the particles involved are so small that they are easily influenced by their surroundings. If the motion of the particles is disturbed, even slightly, it throws off the whole calculation.

Nuclei are promising contenders for quantum memory because they are not easily influenced by their surroundings. However, that also makes them extremely difficult to manipulate. Many quantum physicists have tried with little success. 

"In usual materials it is very difficult to control nuclei directly," said Prof. Denis Konstantinov, who runs the Quantum Dynamics Unit at OIST.

Instead of trying to control the nucleus directly, the researchers focused on a "middle man" of sorts -- the electrons orbiting the nucleus.

The nucleus has a tiny internal magnet, called a "magnetic moment," and the electrons orbiting around it also have magnetic moments that are about 1,000 times larger. Those magnets interact with each other, an effect called the "hyperfine interaction."

The hyperfine interaction is stronger in some materials than others. The researchers found that a crystal made of manganese and some other elements has a strong hyperfine interaction. This enabled them to manipulate the nuclei by first targeting the electrons. 
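As a rough illustration of the physics at play, the following Python sketch diagonalizes a toy hyperfine Hamiltonian H = A I·S for one electron spin coupled to one nuclear spin. It is a simplified model, not the researchers' calculation: the nuclear spin is taken as 1/2 for brevity (manganese-55 actually has spin 5/2), and the coupling constant A is an arbitrary illustrative value.

```python
# A minimal toy model of the hyperfine interaction: H = A * I . S for one
# electron spin (S = 1/2) coupled to one nuclear spin (treated as I = 1/2
# for simplicity; Mn-55 actually has I = 5/2). A is an arbitrary unit value.
import numpy as np

A = 1.0  # hyperfine coupling constant (arbitrary units)

# Spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# Electron operators act on the first factor, nuclear operators on the second
H = A * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))

# The four levels split into a triplet (A/4) and a singlet (-3A/4),
# the textbook hyperfine splitting of a two-spin system.
print(np.round(np.linalg.eigvalsh(H), 3))  # [-0.75  0.25  0.25  0.25] * A
```

The separation between the singlet and triplet levels equals A, so a larger hyperfine constant makes the nuclei respond more strongly when the electrons are driven.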

Information in quantum computing is conveyed by photons -- individual particles of light, which also make up nonvisible electromagnetic waves such as ultraviolet radiation and microwaves. The information transmitted is actually the quantum state of the photon, and that state needs to be transferred to another particle so it lasts long enough for the computation to take place.

In this experiment, the researchers beamed microwaves through a manganese carbonate crystal. The magnetic field of the microwaves interacted with the magnetic moments of the electrons that are orbiting around the nuclei of the manganese atoms. The electrons' movements started to change, which in turn altered the movement of the nuclei because they are connected by the hyperfine interaction. The quantum state of the microwave photon was transferred to the nuclei when the nuclei's internal magnets flipped to point in the opposite direction.

This all has to happen very quickly before the quantum state of the photon changes. To transmit the information and flip the nuclei fast enough, there has to be a strong connection between the microwaves and nuclei via the electrons.
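The "strong connection" referred to here is what physicists call strong coupling. A hedged toy model, not the experimental analysis: treat the microwave field and the collective nuclear-spin excitation as two coupled modes; on resonance their frequencies split by twice the coupling strength g, and the coupling is called strong when that splitting exceeds both linewidths. All numbers below are illustrative.

```python
# Two coupled modes standing in for the microwave cavity field and the
# collective nuclear-spin excitation. On resonance the eigenfrequencies
# split by 2g; "strong coupling" loosely means this splitting exceeds
# both linewidths. All values are illustrative, not measured quantities.
import numpy as np

omega = 1.0     # common resonance frequency (arbitrary units)
g = 0.05        # collective coupling strength (illustrative)
kappa = 0.01    # photon loss rate (illustrative)
gamma = 0.02    # spin linewidth (illustrative)

# Coupled-mode matrix at resonance (losses ignored for the splitting itself)
M = np.array([[omega, g],
              [g, omega]])
freqs = np.linalg.eigvalsh(M)
print("normal modes:", freqs)              # omega - g, omega + g
print("splitting 2g =", freqs[1] - freqs[0])
print("strong coupling?", g > kappa and g > gamma)
```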

"To our knowledge, our experiment is the first demonstration of the strong coupling between microwave photons and nuclear spins," said Leonid Abdurakhimov, a post-doctoral scholar at OIST and first author of the paper.

Next, the team plans to cool the system down to nearly -273 C, or about -460 F, to see if they can strengthen the connection and extend the time information can be stored by minimizing temperature fluctuations.

"We are making the first and important steps towards using an ensemble of nuclear spins for quantum memory," Konstantinov said. "We now have a whole class of materials that can be used for this purpose. Future experiments promise to be quite exciting."

New insight into unification of general relativity and quantum mechanics

A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Hirosi Ooguri, a Principal Investigator at the University of Tokyo's Kavli IPMU, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters as an Editors' Suggestion "for the potential interest in the results presented and on the success of the paper in communicating its message, in particular to readers from other fields."

Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales.

The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. The holographic principle states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanics for the emergence of the volume from the surface has been elusive.

Now, Ooguri and his collaborators have found that quantum entanglement is the key to solving this question. Using a quantum theory (that does not include gravity), they showed how to compute energy density, which is a source of gravitational interactions in three dimensions, using quantum entanglement data on the surface. This is analogous to diagnosing conditions inside your body by looking at X-ray images on two-dimensional sheets. This allowed them to interpret universal properties of quantum entanglement as conditions on the energy density that should be satisfied by any consistent quantum theory of gravity, without actually explicitly including gravity in the theory. The importance of quantum entanglement has been suggested before, but its precise role in the emergence of spacetime was not clear until the new paper by Ooguri and collaborators.

Quantum entanglement is a phenomenon whereby quantum states such as spin or polarization of particles at different locations cannot be described independently. Measuring (and hence acting on) one particle must also act on the other, something that Einstein called "spooky action at a distance." The work of Ooguri and collaborators shows that this quantum entanglement generates the extra dimensions of the gravitational theory.
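The degree of entanglement can be quantified. The short Python sketch below, a generic textbook example rather than a calculation from the paper, builds a two-particle Bell state and shows that the entanglement entropy of either particle equals ln 2, the maximum for a pair of two-level systems.

```python
# Entanglement entropy of one particle in a two-particle Bell state.
# The reduced state of either particle is maximally mixed, so S = ln 2.
import numpy as np

# Bell state (|00> + |11>) / sqrt(2) as a 4-component vector
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Reduced density matrix of the first particle: trace out the second
rho_full = np.outer(psi, psi)
rho_A = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# von Neumann entropy S = -Tr(rho ln rho)
eigs = np.linalg.eigvalsh(rho_A)
S = -sum(p * np.log(p) for p in eigs if p > 1e-12)
print(rho_A)         # 0.5 * identity matrix
print(S, np.log(2))  # both ~0.693
```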

"It was known that quantum entanglement is related to deep issues in the unification of general relativity and quantum mechanics, such as the black hole information paradox and the firewall paradox," says Hirosi Ooguri. "Our paper sheds new light on the relation between quantum entanglement and the microscopic structure of spacetime by explicit calculations. The interface between quantum gravity and information science is becoming increasingly important for both fields. I myself am collaborating with information scientists to pursue this line of research further."

Scientists at the University of York’s Centre for Quantum Technology have made an important step in establishing scalable and secure high rate quantum networks.

Working with colleagues at the Technical University of Denmark (DTU), Massachusetts Institute of Technology (MIT), and the University of Toronto, they have developed a protocol that achieves key rates at metropolitan distances three orders of magnitude higher than previously possible.

Standard protocols of Quantum Key Distribution (QKD) exploit random sequences of quantum bits (qubits) to distribute secret keys in a completely secure fashion. Once these keys are shared by two remote parties, they can communicate confidentially by encrypting and decrypting binary messages. The security of the scheme relies on one of the most fundamental laws of quantum physics, the uncertainty principle.
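The classical step that follows key distribution is straightforward. The sketch below, which simply generates a random key locally in place of an actual QKD run, shows how a shared secret key encrypts and decrypts a binary message with a one-time pad (bitwise XOR).

```python
# One-time-pad encryption with a shared secret key. In a real deployment
# the key would come from the QKD protocol; here it is generated locally
# purely for illustration.
import secrets

message = b"meet at noon"
key = secrets.token_bytes(len(message))          # stands in for the QKD key

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

print(ciphertext.hex())
print(recovered)  # b'meet at noon'
```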

Today's classical communications by email or phone are vulnerable to eavesdroppers, but quantum communications based on single particles of light (photons) can easily detect eavesdroppers, because eavesdropping invariably disrupts or perturbs the quantum signal. By making quantum measurements, two remote parties can estimate how much information an eavesdropper is stealing from the channel and can apply suitable protocols of privacy amplification to negate the effects of the information loss.
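The following sketch illustrates that estimate in a simplified classical form, not the protocol used in the paper: the two parties compare a random sample of their raw key to estimate the error rate, then shorten the remaining key in proportion to the estimated leakage. Real privacy amplification uses universal hashing; the truncation here is only a placeholder, and the 2% error rate is an arbitrary illustrative figure.

```python
# Toy error-rate estimation and key shortening. The 2% error rate and the
# shortening rule are illustrative placeholders, not figures from the paper.
import random

n = 10000
alice = [random.randint(0, 1) for _ in range(n)]
# Bob's raw key: identical except for a 2% error rate (illustrative)
bob = [b ^ int(random.random() < 0.02) for b in alice]

# Publicly compare a random sample to estimate the quantum bit error rate
sample = random.sample(range(n), 1000)
qber = sum(alice[i] != bob[i] for i in sample) / len(sample)

# Discard the revealed bits and shrink the rest in proportion to the
# estimated leakage (placeholder for error correction + universal hashing)
sample_set = set(sample)
remaining = [alice[i] for i in range(n) if i not in sample_set]
final_key = remaining[:int(len(remaining) * (1 - 2 * qber))]

print(f"estimated QBER: {qber:.3f}, final key length: {len(final_key)}")
```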

However, the problem with QKD protocols based on simple quantum systems, such as single-photon qubits, is their low key rate, despite their effectiveness over long distances. This makes them unsuitable for use in metropolitan networks.

The team, led by Dr Stefano Pirandola, of the Department of Computer Science at York, overcame this problem, both theoretically and experimentally, using continuous-variable quantum systems. These allow the parallel transmission of many qubits of information while retaining the quantum capability of detecting and defeating eavesdroppers. The research is published in Nature Photonics.

Dr Pirandola said: “You want a high rate and a fast connection particularly for systems that serve a metropolitan area. You have to transmit a lot of information in the fastest possible way; essentially you need a quantum equivalent of broadband.

“Continuous-variable systems can use many more photons but are still quantum based. Our system reaches extremely high speeds -- three orders of magnitude higher than ever before -- over a distance of 25 kilometres. Its effectiveness decreases rapidly above that distance, however.

“Nevertheless, our protocol could be used to build high-rate quantum networks where devices securely connect to nearby access points or proxy servers.”

Dr Pirandola was funded by the Engineering and Physical Sciences Research Council.

The University of York leads a unique collaboration to exploit fundamental laws of quantum physics for the development of secure communication technologies and services for consumer, commercial and government markets.

The Quantum Communications Hub is one of four in the EPSRC’s new £155m National Network of Quantum Technology Hubs.

Protocol corrects virtually all errors in quantum memory, but requires little measurement of quantum states

Quantum computers are largely theoretical devices that could perform some computations exponentially faster than conventional computers can. Crucial to most designs for quantum computers is quantum error correction, which helps preserve the fragile quantum states on which quantum computation depends.

The ideal quantum error correction code would correct any errors in quantum data, and it would require measurement of only a few quantum bits, or qubits, at a time. But until now, codes that could make do with limited measurements could correct only a limited number of errors -- one roughly equal to the square root of the total number of qubits. So they could correct eight errors in a 64-qubit quantum computer, for instance, but not 10.

In a paper they're presenting at the Association for Computing Machinery's Symposium on Theory of Computing in June, researchers from MIT, Google, the University of Sydney, and Cornell University present a new code that can correct errors afflicting a specified fraction of a computer's qubits, not just the square root of their number. And that fraction can be arbitrarily large, although the larger it is, the more qubits the computer requires.
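The difference in scaling is easy to see with a few numbers. The snippet below compares the older square-root limit with a hypothetical constant-fraction code; the 5% tolerance is an illustrative figure, not one reported by the researchers.

```python
# Square-root scaling vs. constant-fraction scaling of correctable errors.
# Under the older limit, a 64-qubit memory tolerates about sqrt(64) = 8
# errors; a constant-fraction code scales linearly with the qubit count.
import math

fraction = 0.05  # illustrative tolerated fraction, not a figure from the paper

for n in [64, 1024, 16384, 1048576]:
    sqrt_limit = int(math.sqrt(n))
    fraction_limit = int(fraction * n)
    print(f"{n:>8} qubits: sqrt limit ~{sqrt_limit:>5}, "
          f"5% fraction ~{fraction_limit:>6}")
```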

"There were many, many different proposals, all of which seemed to get stuck at this square-root point," says Aram Harrow, an assistant professor of physics at MIT, who led the research. "So going above that is one of the reasons we're excited about this work."

Like a bit in a conventional computer, a qubit can represent 1 or 0, but it can also inhabit a state known as "quantum superposition," where it represents 1 and 0 simultaneously. This is the reason for quantum computers' potential advantages: A string of qubits in superposition could, in some sense, perform a huge number of computations in parallel.

Once you perform a measurement on the qubits, however, the superposition collapses, and the qubits take on definite values. The key to quantum algorithm design is manipulating the quantum state of the qubits so that when the superposition collapses, the result is (with high probability) the solution to a problem.
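As a minimal illustration of measurement collapse, the sketch below prepares a single qubit in an equal superposition and samples measurement outcomes according to the Born rule: each shot yields 0 or 1 at random, with the statistics set by the squared amplitudes.

```python
# A single qubit in an equal superposition of |0> and |1>. Each measurement
# gives a definite outcome; the probabilities follow the squared amplitudes.
import numpy as np

amplitudes = np.array([1, 1]) / np.sqrt(2)   # equal superposition
probs = np.abs(amplitudes) ** 2              # Born rule: [0.5, 0.5]

outcomes = np.random.choice([0, 1], size=1000, p=probs)
print("fraction of 1s:", outcomes.mean())    # close to 0.5
```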

Baby, bathwater

But the need to preserve superposition makes error correction difficult. "People thought that error correction was impossible in the '90s," Harrow explains. "It seemed that to figure out what the error was you had to measure, and measurement destroys your quantum information."

The first quantum error correction code was invented in 1994 by Peter Shor, now the Morss Professor of Applied Mathematics at MIT, with an office just down the hall from Harrow's. Shor is also responsible for the theoretical result that put quantum computing on the map, an algorithm that would enable a quantum computer to factor large numbers exponentially faster than a conventional computer can. In fact, his error-correction code was a response to skepticism about the feasibility of implementing his factoring algorithm.

Shor's insight was that it's possible to measure relationships between qubits without measuring the values stored by the qubits themselves. A simple error-correcting code could, for instance, instantiate a single qubit of data as three physical qubits. It's possible to determine whether the first and second qubit have the same value, and whether the second and third qubit have the same value, without determining what that value is. If one of the qubits turns out to disagree with the other two, it can be reset to their value.
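A classical toy version of this idea, a sketch rather than Shor's full quantum code, makes the logic concrete: encode one bit as three copies and diagnose an error by checking only the two parities "do bits 1 and 2 agree?" and "do bits 2 and 3 agree?" Neither check reveals the encoded value itself.

```python
# Three-bit repetition code with parity (syndrome) checks. The syndrome
# locates a single flipped bit without ever reading the encoded value.
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    s = syndrome(bits)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)  # which bit disagrees
    if flip is not None:
        bits[flip] ^= 1
    return bits

codeword = [1, 1, 1]          # logical 1 encoded in three physical bits
codeword[2] ^= 1              # single error on the third bit
print(syndrome(codeword))     # (0, 1): error located, value never revealed
print(correct(codeword))      # [1, 1, 1]
```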

In quantum error correction, Harrow explains, "These measurements always have the form 'Does A disagree with B?' Except it might be, instead of A and B, A B C D E F G, a whole block of things. Those types of measurements, in a real system, can be very hard to do. That's why it's really desirable to reduce the number of qubits you have to measure at once."

Time embodied

A quantum computation is a succession of states of quantum bits. The bits are in some state; then they're modified, so that they assume another state; then they're modified again; and so on. The final state represents the result of the computation.

In their paper, Harrow and his colleagues assign each state of the computation its own bank of qubits; it's like turning the time dimension of the computation into a spatial dimension. Suppose that the state of qubit 8 at time 5 has implications for the states of both qubit 8 and qubit 11 at time 6. The researchers' protocol performs one of those agreement measurements on all three qubits, modifying the state of any qubit that's out of alignment with the other two.
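A loose classical caricature of that idea (the details here are assumptions for illustration, not the protocol in the paper): give each time step its own bank of bits, let a bit at one step simply copy forward into the bits it determines at the next step, and repair a single flip by majority vote over the three related bits.

```python
# Each time step gets its own bank of bits; a bit at step 5 determines two
# bits at step 6 (here by plain copying, an assumption for illustration).
# An agreement check over the three related bits repairs a single flip.
banks = {5: [0] * 16, 6: [0] * 16}
banks[5][8] = 1
banks[6][8] = banks[5][8]      # step 6 depends on bit 8 at step 5
banks[6][11] = banks[5][8]

banks[6][11] ^= 1              # a single error appears at time 6

trio = [banks[5][8], banks[6][8], banks[6][11]]
majority = int(sum(trio) >= 2)
banks[6][11] = majority        # realign the outlier with the other two
print(trio, "->", majority)    # [1, 1, 0] -> 1
```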

Since the measurement doesn't reveal the state of any of the qubits, modification of a misaligned qubit could actually introduce an error where none existed previously. But that's by design: The purpose of the protocol is to ensure that errors spread through the qubits in a lawful way. That way, measurements made on the final state of the qubits are guaranteed to reveal relationships between qubits without revealing their values. If an error is detected, the protocol can trace it back to its origin and correct it.

It may be possible to implement the researchers' scheme without actually duplicating banks of qubits. But, Harrow says, some redundancy in the hardware will probably be necessary to make the scheme efficient. How much redundancy remains to be seen: Certainly, if each state of a computation required its own bank of qubits, the computer might become so complex as to offset the advantages of good error correction.

But, Harrow says, "Almost all of the sparse schemes started out with not very many logical qubits, and then people figured out how to get a lot more. Usually, it's been easier to increase the number of logical qubits than to increase the distance -- the number of errors you can correct. So we're hoping that will be the case for ours, too."

Presentation will address the role of today’s optical transport networks in implementing a successful cloud strategy

Stephan Rettenberger, Vice President Marketing, ADVA Optical Networking will be presenting “Cloud Computing – Opportunities and Challenges for Service Providers” at the IIR WDM and Next-Generation Optical Networking Conference 2010, June 14-17, 2010 at the Fairmont Monte Carlo, Monaco.

Europe's premier optical networking event moves to Monaco for 2010 and will once again bring together over 400 optical professionals for four days of structured networking and debate. The educational, operator-led program and professionally organized networking agenda make WDM & Next Generation Optical Networking an unmissable event in the industry calendar.

DETAILS:

Cloud computing – Opportunities and challenges for service providers

This presentation will address:

• Cloud computing trends changing the ICT industry

• The role of the transport network for enabling cloud services

• Service provider opportunities to capitalize on the cloud revolution

• Ways to increase efficiency by evolving intra-cloud connectivity networks

• Enabling new applications by reliable and secure access to cloud resources

WHEN:            Thursday, June 17, 2010, 1.50-2.15 pm

WHERE:         Main conference hall
