As the countdown to the Presidential election continues, new analytical tools developed by physicists at The City College of New York promise a quicker and remarkably accurate method of predicting election trends with Twitter.

Hernán A. Makse, Alexandre Bovet and Flaviano Morone have developed analytic tools that combine the statistical physics of complex networks, percolation theory, natural language processing and machine learning classification to infer the opinions of Twitter users regarding this year's Presidential candidates.

"Forecasting opinion trends from real-time social media is the long-standing goal of modern-day big-data analytics," said Makse, a Fellow of the American Physical Society. "Despite its importance, there has been no conclusive scientific evidence so far that social media activity can capture the opinion of the general population at large."

However, using a large-scale dataset of 73 million tweets collected from June 1 to September 1, 2016, Makse and his associates were able to investigate the temporal social networks formed by the interactions among Twitter users.

"We infer the support of each user to the presidential candidates and show that the resulting Twitter trends follow the New York Times National Polling Average, which represents an aggregate of hundreds of independent traditional polls, with remarkable accuracy (r = 0.9)," Makse said. More importantly, for the CCNY team, the Twitter opinion trend forecasts the aggregated Times polls by 6 to 15 days, showing that Twitter can be an early warning signal of global opinion trends at the national level.

"Our analytics, which are available at kcorelab.com, unleash the power of Twitter to predict social opinion trends from elections, brands to political movements. Our results suggest that the multi-billion public opinion polling industry could be replaced by Twitter analytics performed practically for free," concluded Makse.

Exact simulation solves the 80-year-old mystery of the degree to which atoms can be dressed with photons

In 1937, US physicist Isidor Rabi introduced a simple model to describe how atoms emit and absorb particles of light. Until now, this model had still not been completely explained. In a recent paper, physicists have for the first time applied an exact numerical technique, quantum Monte Carlo, to this photon absorption and emission phenomenon. These findings were recently published in EPJ D by Dr Flottat from the Nice-Sophia Antipolis Nonlinear Institute (INLN) in France and colleagues. They confirm previous results obtained with approximate simulation methods.

According to the Rabi model, when an atom interacts with light in a cavity, and they reach a state of equilibrium, the atom becomes "dressed" with photons. Because this takes place at the quantum scale, the system is, in fact, a superposition of different states -- the excited and unexcited atom -- with different numbers of photons.
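
The dressed-state picture can be made concrete numerically. The sketch below uses exact diagonalisation of the Rabi Hamiltonian in a truncated photon space (not the quantum Monte Carlo method of the paper), with illustrative frequencies that are not taken from the study; it shows the ground state acquiring more and more photons as the atom-photon coupling g grows.

```python
# Exact diagonalisation sketch of the Rabi model,
# H = omega * a†a + (Omega/2) * sigma_z + g * sigma_x * (a + a†),
# in a photon space truncated at N quanta.
import numpy as np

N = 40                                           # photon-number cutoff
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # photon annihilation operator
sz = np.diag([1.0, -1.0])                        # atomic inversion sigma_z
sx = np.array([[0.0, 1.0], [1.0, 0.0]])          # atomic flip sigma_x
I2, IN = np.eye(2), np.eye(N)

def rabi_ground_state(g, omega=1.0, Omega=1.0):
    H = (omega * np.kron(I2, a.T @ a)
         + 0.5 * Omega * np.kron(sz, IN)
         + g * np.kron(sx, a + a.T))
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                            # lowest-energy dressed state

for g in [0.1, 0.5, 1.0, 2.0]:
    psi = rabi_ground_state(g)
    n_mean = psi @ np.kron(I2, a.T @ a) @ psi    # mean photon number in the ground state
    print(f"g = {g:3.1f}  ->  <n> = {n_mean:5.2f}")
```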

In the study, the team adapted a quantum Monte Carlo algorithm to address this special case. They created a novel version of the existing algorithm, one which accounts for the fluctuating number of photons. This made it possible to study atoms dressed with up to 20 photons each. No other existing exact simulation method -- including the exact diagonalisation and density matrix renormalisation group approaches -- can factor in these effects.

The authors found that there are dramatic consequences at the quantum scale for strongly coupled light-atom systems. They showed that it is essential to take into account the effects resulting from the number of excitations not being conserved, because the atom-photon coupling is strong enough for these effects to matter. For example, in a conventional light-atom coupling experiment in a macroscopic cavity, the coupling is so small that an atom is, on average, dressed with far less than one photon. With a coupling increased by a factor of, say, ten thousand, physicists have observed dressed states with tens of photons per atom.

The mean-field phase diagrams of the Jaynes-Cummings-Hubbard (left) and Rabi-Hubbard (right) models.

The processing power of standard computers is likely to reach its maximum in the next 10 to 25 years. Even at this maximum, traditional computers won't be able to handle a particular class of problem: one that involves combining variables into an enormous number of possible answers and searching for the best one.

Now, an entirely new type of supercomputer that blends optical and electrical processing, reported Oct. 20 in the journal Science, could get around this impending processing constraint and solve those problems. If it can be scaled up, this non-traditional supercomputer could save costs by finding better solutions to problems that have an incredibly large number of possible answers.

"This is a machine that's in a sense the first in its class, and the idea is that it opens up a sub-field of research in the area of non-traditional computing machines," said Peter McMahon, postdoctoral scholar in applied physics and co-author of the paper. "There are many, many questions that this development raises and we expect that over the next few years, several groups are going to be investigating this class of machine and looking into how this approach will pan out."

The traveling salesman problem

There is a special type of problem - called a combinatorial optimization problem - that traditional computers find difficult to solve, even approximately. An example is what's known as the "traveling salesman" problem, wherein a salesman has to visit a specific set of cities, each only once, and return to the first city, taking the most efficient route possible. The problem may seem simple, but the number of possible routes grows factorially as cities are added, and this is what makes the problem so hard to solve.
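
A brute-force search makes that growth concrete. The sketch below is a toy illustration, not anything from the study: it solves tiny instances exactly by enumerating every tour, and prints how the count of distinct tours, (n-1)!/2, explodes as cities are added.

```python
# Exhaustive traveling-salesman search: feasible only for very small n.
import itertools, math

def shortest_tour(dist):
    """Enumerate all closed tours over the distance matrix; return the best."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in itertools.permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Tiny hypothetical instance: four cities spaced along a line.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
print(shortest_tour(dist))                              # e.g. (6, (0, 1, 2, 3))

for n in [5, 10, 15, 20]:
    print(f"{n:2d} cities -> {math.factorial(n - 1) // 2:,} distinct tours")
```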

"Those problems are challenging for standard computers, even supercomputers, because as the size grows, at some point, it takes the age of the universe to search through all the possible solutions," said Alireza Marandi, a former postdoctoral scholar at Stanford and co-author of the study. "This is true even with a supercomputer because the growth in possibilities is so fast."

It may be tempting to simply give up on the traveling salesman, but solving such hard optimization problems could have enormous impact in a wide range of areas. Examples include finding the optimal path for delivery trucks, minimizing interference in wireless networks, and determining how proteins fold. Even small improvements in some of these areas could result in massive monetary savings, which is why some scientists have spent their careers creating algorithms that produce very good approximate solutions to this type of problem.

An Ising machine

The Stanford team has built what's called an Ising machine, named for a mathematical model of magnetism. The machine acts like a reprogrammable network of artificial magnets, where each magnet points only up or down and, like a real magnetic system, tends to settle into a low-energy state.

The theory is that, if the connections among a network of magnets can be programmed to represent the problem at hand, once they settle on the optimal, low-energy directions they should face, the solution can be derived from their final state. In the case of the traveling salesman, each artificial magnet in the Ising machine represents the position of a city in a particular path.
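
In software terms the idea looks roughly like the sketch below, which stands in for the optical hardware: spins s_i = ±1, a programmable coupling matrix J encoding the problem, and simulated annealing playing the role of the system settling into a minimum of the Ising energy E(s) = -Σ J_ij s_i s_j. The couplings and the cooling schedule are invented for illustration.

```python
# Simulated annealing on an Ising energy, as a software stand-in for the machine.
import numpy as np

def anneal(J, steps=20000, T0=2.0):
    rng = np.random.default_rng(1)
    n = len(J)
    s = rng.choice([-1, 1], size=n)                # random initial spin state
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-3            # linear cooling schedule
        i = rng.integers(n)
        dE = 2 * s[i] * (J[i] @ s)                 # energy change if spin i flips
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                           # accept the flip
    return s

# Hypothetical 6-spin problem: ferromagnetic couplings favor aligned spins,
# so the minimum-energy states are all +1 or all -1.
J = np.ones((6, 6)) - np.eye(6)
print(anneal(J))
```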

Rather than using magnets on a grid, the Stanford team used a special kind of laser system, known as a degenerate optical parametric oscillator, that, when turned on, will represent an upward- or downward-pointing "spin." Pulses of the laser represent a city's position in a path the salesman could take. In an earlier version of this machine (published two years ago), the team members extracted a small portion of each pulse, delayed it and added a controlled amount of that portion to the subsequent pulses. In traveling salesman terms, this is how they program the machine with the connections and distances between the cities. The pulse-to-pulse couplings constitute the programming of the problem. Then the machine is turned on to try to find a solution, which can be obtained by measuring the final output phases of the pulses.

The problem in this previous approach was connecting large numbers of pulses in arbitrarily complex ways. It was doable but required an added controllable optical delay for each pulse, which was costly and difficult to implement.

Scaling up

The latest Stanford Ising machine shows that a drastically more affordable and practical version could be made by replacing the controllable optical delays with a digital electronic circuit. The circuit emulates the optical connections among the pulses in order to program the problem and the laser system still solves it.

Nearly all of the components used to make this machine are off-the-shelf parts already used in telecommunications. That, in combination with the simplicity of the programming, makes it easy to scale up. Stanford's machine is currently able to solve 100-variable problems with any arbitrary set of connections between variables, and it has been tested on thousands of scenarios.

A group at NTT in Japan that consulted with Stanford's team has also created an independent version of the machine; its study was published alongside Stanford's in Science. For now, the Ising machine still falls short of beating the processing power of traditional digital computers on combinatorial optimization. But it is gaining ground fast, and the researchers are looking forward to seeing what other work this breakthrough makes possible.

"I think it's an exciting avenue of exploration for finding alternative computers. It can get us closer to more efficient ways of tackling some of the most daunting computational problems we have," said Marandi. "So far, we've made a laser-based computer that can target some of these problems, and we have already shown some promising results."

CAPTION Postdoctoral scholar Peter McMahon, left, and visiting researcher Alireza Marandi examine a prototype of a new type of light-based computer. CREDIT L.A. Cicero

By combining two different scanning technologies, researchers have succeeded in creating completely new and detailed images of cancer tumours in mice; this could eventually pave the way for the development of more effective drugs

A Danish research team is behind a new method for studying how a tracer is distributed in a cancer tumour via its extensive vascular network.

The method can be used for purposes such as closely studying the effect of medical treatment using cancer inhibitors.

By means of mathematical modelling, the researchers combined two previously known scanning technologies - magnetic resonance imaging (MRI) and computed tomography (CT) - and used these to study tumours in laboratory animals.

This resulted in completely new images at very high resolution, which provide detailed mapping of the branching of tumour blood vessels.

"We can lay two images of the same cancer tumour on top of each other so to speak, so we get a more geometrically complex understanding of the individual tumour's blood vessels, and thereby an opportunity to very precisely study the way drugs are distributed," says Associate Professor Jens Vinge Nygaard, Department of Engineering, Aarhus University.

He is responsible for the mathematical modelling work for the imaging, and he expects that the method could ultimately be used to develop new drugs and optimise dosing for the individual patient.

15,000 blood vessels under the microscope

An MR image can show how a tracer used as a cancer-inhibiting drug is distributed inside the tumour, but only at a relatively coarse resolution.

An image from a micro-CT scanner, on the other hand, can show an extensive network of blood vessels in the tumour at very high resolution, but it is unable to identify how the drug is transported locally.

The combination of the two imaging technologies can thereby provide significantly improved scanning images of cancer, which can play an important role in developing new drugs.

"The new images give us an opportunity to follow the way a tracer travels through the blood vessels in the tumour and into the surrounding tissue to the cancer cells. As scientists, we're interested in mapping the size and branching of the blood vessels, and understanding what goes on between the blood vessels over time. This can provide us with more detailed insight into specific treatment needs," says Thomas Rea Wittenborn, who is a cancer researcher at the Department of Experimental Clinical Oncology, Department of Clinical Medicine, Aarhus University Hospital.

The tumours they studied measure approximately 200-300 cubic millimetres and typically contain 15,000-20,000 branches of blood vessels.

Supercomputer models replace experimental animals

The researchers followed a total of ten mice with tumours on their feet, and used the two scanning technologies to develop a supercomputer model for each of these.

In principle, the models form the foundation for a completely unique experimental platform.

"Using the computer models, we've created a virtual experimental platform so to speak, and can thereby considerably extend our experiments because we're not dependent on experimental animals. In practice, we can sit in front of our computer screens and study what happens to the tumour if we use drugs that stay in the tissue for longer or shorter periods, or are adapted for small or large blood vessels," says Associate Professor Nygaard.

In the time ahead, the researchers will expand their experiment with more mice and follow the cancer tumours over a period of time. This will provide them with an opportunity to develop supercomputer models that not only describe drug distribution in a static stage of the cancer process, but also generate precise growth scenarios for cancer tumours.

"Using the new imaging method, we'll be capable in purely mathematical terms of predicting tumour development in connection with different drug strategies," says Associate Professor Nygaard. 

This image shows a cancer tumour located on the foot of a mouse. It contains more than 15,000 vascular branches. The colours show the diameter of the blood vessels. (Laboratory photo: Jens Vinge Nygaard)

Researchers led by Carnegie Mellon University physicist Markus Deserno and University of Konstanz (Germany) chemist Christine Peter have developed a supercomputer simulation that crushes viral capsids. By allowing researchers to see how the tough shells break apart, the simulation provides a computational window for looking at how viruses and proteins assemble. The study is published in the October issue of The European Physical Journal Special Topics.

"The concept of breaking something to see how it's made isn't new. It's what's being done at particle accelerators and in materials science labs worldwide--not to mention by toddlers who break their toys to see what's inside," said Deserno, a professor in the department of physics and member of the department's Biological Physics Initiative. "With a simulation we can build the virus, crush it and see what happens at a very high level of resolution."

Viral capsids, the protein shells that encapsulate and transport the viral genome, are one of nature's strongest nanocontainers. The shells are made when copies of capsid proteins spontaneously come together and assemble into a round, geometric shell. Understanding how these proteins come together to form capsids may help researchers to make similar nanocontainers for a variety of uses, including targeted drug delivery. Additionally, the simulation could fill a void for virologists, allowing them to study the stages of viral assembly that they aren't able to see experimentally. 

Studying the self-assembly of viral capsids is difficult. Most viruses are too small -- about 30 to 50 nanometers -- and the capsid proteins come together too rapidly for their assembly to be seen using traditional microscopy. As an alternative, Deserno and colleagues thought that a better way to learn about capsid assembly might be to see what happens when an already formed capsid breaks apart.

To do this, Deserno and colleagues created a coarse-grained model of the Cowpea Chlorotic Mottle Virus (CCMV) capsid. In the simulation, they applied forces to the capsid and viewed how it responded to those forces. Their model is based on the MARTINI force field, a commonly used coarse-grained model, with an added stabilizing network within the individual proteins that compensated for the model's shortcomings in stabilizing a protein's folding geometry.
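
The "added stabilizing network" can be illustrated with a toy elastic network: harmonic springs placed between every pair of coarse-grained beads that sit within a cutoff in the folded reference structure, restraining the native geometry. This sketch is not the paper's MARTINI setup; the coordinates, cutoff and spring constant are invented.

```python
# Toy elastic network: springs restrain bead pairs to their native distances.
import numpy as np

def elastic_network(coords, cutoff=0.9, k=500.0):
    """Return (i, j, r0, k) springs for every bead pair closer than the cutoff."""
    bonds = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r0 = np.linalg.norm(coords[i] - coords[j])
            if r0 < cutoff:
                bonds.append((i, j, r0, k))
    return bonds

def network_energy(coords, bonds):
    """Harmonic penalty 0.5*k*(r - r0)^2 summed over the network."""
    return sum(0.5 * k * (np.linalg.norm(coords[i] - coords[j]) - r0) ** 2
               for i, j, r0, k in bonds)

beads = np.random.default_rng(2).random((30, 3)) * 2.0   # toy 30-bead protein
bonds = elastic_network(beads)
print(len(bonds), "springs; energy at the reference structure =",
      network_energy(beads, bonds))                       # zero by construction
```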

The CCMV capsid is made up of 180 identical proteins. During assembly, the proteins first form pairs, called dimers, and those dimers then join together at interfaces. While the proteins are identical, the interfaces can differ: at some locations on the capsid, five proteins meet; at others, six. In the simulation, the researchers found that when force was applied, the capsid would start to fracture at the hexameric interfaces first, indicating that those protein-protein contacts were weaker than the ones at the pentameric interfaces. In contrast, the pentameric contacts never broke. Since stronger connections assemble first and weaker ones later, the researchers can use this information to begin to reconstruct how the capsid formed.

In the simulation, the researchers also found a likely explanation for a strange structural feature of the CCMV capsid. At the center of each hexameric association site, the tail-ends of the six proteins come together and form a beta barrel, a closed, barrel-shaped sheet of secondary protein structure. The researchers believe these barrels provide further late-stage stabilization to the weaker hexameric interfaces.

CAPTION Computer simulation of Cowpea Chlorotic Mottle Virus (CCMV) capsid. Carnegie Mellon and University of Konstanz researchers learn about viral assembly by smashing capsids in a coarse-grained simulation. CREDIT Venkatramanan Krishnamani
