'Multiplying' light could be key to ultra-powerful optical computers

An important class of challenging computational problems, with applications in graph theory, neural networks, artificial intelligence and error-correcting codes, can be solved by multiplying light signals, according to researchers from the University of Cambridge and the Skolkovo Institute of Science and Technology in Russia.

In a paper published in the journal Physical Review Letters, they propose a new type of computation that could revolutionise analogue computing by dramatically reducing the number of light signals needed while simplifying the search for the best mathematical solutions, allowing for ultra-fast optical computers.

Image caption: Schematic of light-pulse interactions as the proposed optical computer solves a higher-order binary optimisation problem. The phases of several light pulses combine to change the phase of each pulse until the solution is found. Credit: Gleb Berloff

Optical or photonic computing uses photons produced by lasers or diodes for computation, as opposed to classical computers, which use electrons. Since photons are essentially massless and can travel faster than electrons, an optical computer would be superfast, energy-efficient and able to process information simultaneously through multiple temporal or spatial optical channels.

The computing element in an optical computer - an alternative to the ones and zeroes of a digital computer - is represented by the continuous phase of the light signal, and the computation is normally achieved by adding two light waves coming from two different sources and then projecting the result onto '0' or '1' states.
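As a rough illustration of this linear scheme (a toy sketch, not the researchers' hardware), adding two phase-encoded waves and projecting the result back onto binary states can be written as:

```python
import cmath

# Phase-encoded bits: a binary value is carried by the phase of a light wave,
# with '0' encoded as phase 0 and '1' as phase pi.
def wave(phase):
    return cmath.exp(1j * phase)

# Linear optical computing: superpose (add) two waves, then project the
# phase of the result back onto the nearest binary state.
def add_and_project(phase_a, phase_b):
    total = wave(phase_a) + wave(phase_b)
    if abs(total) < 1e-9:  # complete destructive interference: undefined phase
        return None
    return 0 if abs(cmath.phase(total)) < cmath.pi / 2 else 1

print(add_and_project(0.1, -0.1))  # both waves near phase 0  -> 0
print(add_and_project(3.0, 3.1))   # both waves near phase pi -> 1
```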

However, real life presents highly nonlinear problems, where multiple unknowns simultaneously change the values of other unknowns while interacting multiplicatively. In this case, the traditional approach to optical computing that combines light waves in a linear manner fails.

Now, Professor Natalia Berloff from Cambridge's Department of Applied Mathematics and Theoretical Physics and PhD student Nikita Stroev from the Skolkovo Institute of Science and Technology have found that optical systems can combine light by multiplying the wave functions describing the light waves instead of adding them, which may represent a different type of connection between the light waves.

They illustrated this phenomenon with quasi-particles called polaritons - which are half-light and half-matter - while extending the idea to a larger class of optical systems such as light pulses in a fibre. Tiny pulses or blobs of coherent, superfast-moving polaritons can be created in space and overlap with one another in a nonlinear way, due to the matter component of polaritons.

"We found the key ingredient is how you couple the pulses with each other," said Stroev. "If you get the coupling and light intensity right, the light multiplies, affecting the phases of the individual pulses, giving away the answer to the problem. This makes it possible to use light to solve nonlinear problems."

The multiplication of the wave functions to determine the phase of the light signal in each element of these optical systems comes from the nonlinearity that occurs naturally or is externally introduced into the system.
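The arithmetic behind this can be sketched with a toy example. In the binary-phase encoding, multiplying wave functions adds their phases, so the product of several waves directly encodes the product of the corresponding spins. The snippet below is a minimal illustration of that identity, not the authors' physical implementation:

```python
import cmath
import math

# A binary "spin" s in {+1, -1} encoded in the phase of a wave:
# s = +1 -> phase 0, s = -1 -> phase pi, so s = Re(e^{i*theta}).
def wave(s):
    return cmath.exp(1j * (0.0 if s == 1 else math.pi))

# Multiplying wave functions adds their phases, so the product of three
# waves encodes the three-spin product s1*s2*s3 -- exactly the kind of
# higher-order (cubic) interaction an additive, linear scheme cannot form.
def triple_product(s1, s2, s3):
    return round((wave(s1) * wave(s2) * wave(s3)).real)

for spins in [(1, 1, 1), (1, -1, 1), (-1, -1, -1)]:
    assert triple_product(*spins) == spins[0] * spins[1] * spins[2]
```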

"What came as a surprise is that there is no need to project the continuous light phases onto the '0' and '1' states necessary for solving problems in binary variables," said Stroev. "Instead, the system tends to bring about these states at the end of its search for the minimum energy configuration. This is the property that comes from multiplying the light signals. By contrast, previous optical machines require resonant excitation that fixes the phases to binary values externally."

The authors have also suggested and implemented a way to guide the system trajectories towards the solution by temporarily changing the coupling strengths of the signals.

"We should start identifying different classes of problems that can be solved directly by a dedicated physical processor," said Berloff. "Higher-order binary optimisation problems are one such class, and optical systems can be made very efficient in solving them."
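As a concrete illustration of the problem class Berloff names, here is a toy higher-order binary optimisation instance solved by brute force. The coefficients are invented for illustration and do not come from the paper:

```python
from itertools import product

# Toy higher-order binary optimisation (HOBO) instance: minimise an energy
# containing a cubic term over spins s_i in {-1, +1}. Coefficients are
# invented for illustration only.
def energy(s):
    s1, s2, s3 = s
    return 2.0 * s1 * s2 * s3 - 1.0 * s1 * s2 + 0.5 * s3

# Exhaustive search over all 2^3 spin assignments.
best = min(product([-1, 1], repeat=3), key=energy)
print(best, energy(best))  # a minimum-energy spin assignment
```

Brute force is only feasible for tiny instances; the number of assignments doubles with every added spin, which is why dedicated physical processors for this class are attractive.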

There are still many challenges to be met before optical computing can demonstrate its superiority over modern electronic computers in solving hard problems, among them noise reduction, error correction, improved scalability and guiding the system to the true best solution.

"Changing our framework to directly address different types of problems may bring optical computing machines closer to solving real-world problems that cannot be solved by classical computers," said Berloff.

Columbia researchers discover a new way to program light on an ultra-small scale

A team of researchers led by Columbia University has developed a unique platform to program a layered crystal, producing imaging capabilities beyond common limits on demand.

The discovery is an important step toward control of nanolight, which is light that can access the smallest length scales imaginable. The work also provides insights for the field of optical quantum information processing, which aims to solve difficult problems in supercomputing and communications.  

"We were able to use ultrafast nano-scale microscopy to discover a new way to control our crystals with light, turning elusive photonic properties on and off at will," said Aaron Sternbach, postdoctoral researcher at Columbia who is lead investigator on the study. "The effects are short-lived, only lasting for trillionths of one second, yet we are now able to observe these phenomena clearly."

Image caption: An optically excited gas of electronic carriers confined to the planes of the layered van der Waals semiconductor tungsten diselenide. The consequent hyperbolic response permits the passage of nanolight. Credit: Ella Maru Studio

The research appears Feb. 4 in the journal Science.

Nature sets a limit on how tightly light can be focused. Even in microscopes, two different objects closer together than this limit appear to be one. But within a special class of layered crystalline materials, known as van der Waals crystals, these rules can sometimes be broken. In these special cases, light can be confined without limit, making it possible to see even the smallest objects clearly.

In their experiments, the Columbia researchers studied the van der Waals crystal tungsten diselenide, which is of high interest for its potential integration in electronic and photonic technologies because of its unique structure and strong interactions with light.

When the scientists illuminated the crystal with a pulse of light, they were able to change the crystal's electronic structure. The new structure, created by the optical-switching event, allowed something very uncommon to occur: Super-fine details, on the nanoscale, could be transported through the crystal and imaged on its surface.

The report demonstrates a new method to control the flow of nanolight. Optical manipulation on the nanoscale, or nanophotonics, has become a critical area of interest as researchers seek ways to meet the increasing demand for technologies that go well beyond what is possible with conventional photonics and electronics.

Dmitri Basov, Higgins professor of physics at Columbia University, and senior author on the paper, believes the team's findings will spark new areas of research in quantum matter.

"Laser pulses allowed us to create a new electronic state in this prototypical semiconductor, if only for a few picoseconds," he said. "This discovery puts us on track toward optically programmable quantum phases in new materials."

Goto's new algorithms quickly deliver highly accurate solutions to complex problems

Breaks the limitations of classical mechanics by introducing a quasi-quantum effect; expected to accelerate complex problem-solving in finance, pharmaceuticals, and logistics

Toshiba Corporation (TOKYO: 6502) and Toshiba Digital Solutions Corporation (collectively Toshiba), industry leaders in solutions for large-scale optimization problems, today announced the Ballistic Simulated Bifurcation Algorithm (bSB) and the Discrete Simulated Bifurcation Algorithm (dSB), new algorithms that far surpass the performance of Toshiba's previous Simulated Bifurcation Algorithm (SB). The new algorithms will be applied to finding solutions to highly complex problems in areas as diverse as portfolio management, drug development and logistics management.

Introduced in April 2019, the previous SB broke new ground as a platform for finding solutions to combinatorial optimization problems, surpassing other approaches by a factor of 10. Toshiba has now extended this achievement with two new algorithms that apply innovative approaches, such as a quasi-quantum tunneling effect, to performance improvement, allowing them to acquire optimal solutions (exact solutions) for large-scale combinatorial optimization problems that challenge the capabilities of their predecessor. Implemented on a 16-GPU machine, dSB can find a nearly optimal solution of a one-million-bit problem, the world's largest scale combinatorial problem yet reported in scientific papers, in 30 minutes--a computation that would take 14 months on a typical CPU-based computer. The research results were published in the online academic journal, Science Advances.

The new algorithms have different characteristics. bSB, named "ballistic" for its speed of operation, finds good approximate solutions in a short time. It generates fewer errors than the previously reported adiabatic Simulated Bifurcation algorithm (aSB), and so returns faster, more accurate results. Implemented on a field-programmable gate array (FPGA) as the ballistic simulated bifurcation machine (bSBM), it obtains a good solution to a 2,000-bit problem approximately 10 times faster than the previous aSB machine (aSBM) (Figure 1).

dSB is a high-accuracy algorithm. Although implemented on a classical computer, it nonetheless arrives at optimal solutions faster than current quantum machines. Its name derives from the replacement of continuous variables with discrete variables in the equations of motion. This produces a quasi-quantum tunneling effect that breaks through the limits of approaches grounded in classical mechanics, reaching the optimal solution of the 2,000-bit problem.
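The update rule behind these algorithms is described in Goto and colleagues' published work. What follows is a toy-scale sketch of the ballistic variant, in which inelastic walls at |x| = 1 replace aSB's nonlinear term (dSB would instead use the signs of the coupled variables in the force). The problem instance and hyperparameters are illustrative choices, not Toshiba's implementation:

```python
import math
import random

# Toy-scale sketch of the ballistic simulated bifurcation (bSB) update rule:
# oscillator positions x_i evolve under a pump that ramps from 0 to 1, with
# perfectly inelastic walls at |x| = 1 standing in for aSB's nonlinear term.
# (dSB would use the signs of the coupled x_j in the force instead.)
def bsb(J, steps=2000, dt=0.05, c0=0.5):
    n = len(J)
    random.seed(0)                         # reproducible random start
    x = [random.uniform(-0.1, 0.1) for _ in range(n)]
    y = [0.0] * n
    for t in range(steps):
        a = t / steps                      # pump amplitude ramps from 0 to 1
        for i in range(n):
            force = -(1.0 - a) * x[i] + c0 * sum(J[i][j] * x[j] for j in range(n))
            y[i] += force * dt
            x[i] += y[i] * dt
            if abs(x[i]) > 1.0:            # inelastic wall: clamp, kill momentum
                x[i] = math.copysign(1.0, x[i])
                y[i] = 0.0
    return [1 if v >= 0 else -1 for v in x]

# Ferromagnetic couplings: the minimum-energy state has all spins aligned.
J = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
print(bsb(J))
```

On this tiny ferromagnetic instance the oscillators bifurcate together and the returned spins all share the same sign, the known ground state; the production machines apply the same rule to problems with thousands to millions of spins.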

Toshiba has implemented dSB on an FPGA and built a discrete simulated bifurcation machine (dSBM) that achieves higher speeds than other machines in terms of the computation times required to obtain optimal solutions for various problems (Figure 2).

Implemented on a 16-GPU machine, the dSBM solved a one-million-bit problem, the largest yet reported in scientific papers, and arrived at a nearly optimal solution in 30 minutes--20,000 times faster than a CPU-based simulated annealing machine, which would take 14 months to carry out the computation (Figure 3).

In applying the two algorithms to real-world problems, Toshiba proposes bSB for applications that require an immediate response, and dSB for applications that require high accuracy, even if they take a little more time.

Toshiba expects the new algorithms to bring higher efficiencies to industry, business and complex decision-making by addressing combinatorial optimization problems in fields including investment portfolios, drug development, and delivery route planning.

Commenting on the algorithms, Hayato Goto, Chief Research Scientist at Toshiba Corporation's Corporate Research & Development Center, said: "We face many real-world problems where we must find the optimal solution among a huge number of choices, and we must also deal with combinatorial explosion, where the number of combination patterns increases exponentially as a problem increases in scale. This is why research into special-purpose computers for combinatorial optimization is being carried out worldwide. Our aim is to develop a software solution--algorithms that can solve large-scale combinatorial optimization problems quickly and accurately, and contribute to the realization of higher efficiencies."

Toshiba will offer the newly developed simulated bifurcation algorithms as a GPU-based cloud service and as an on-premises version implemented on an FPGA during 2021.