Indian Institute of Science computational analysis shows how dengue virus evolved in India

A multi-institutional study on dengue led by researchers at the Indian Institute of Science (IISc) shows how the virus causing the disease has evolved dramatically over the last few decades in the Indian subcontinent.  

Cases of dengue – a mosquito-borne viral disease – have steadily increased over the last 50 years, predominantly in South-East Asian countries. And yet, there are no approved vaccines against dengue in India, although some vaccines have been developed in other countries.

“We were trying to understand how different the Indian variants are, and we found that they are very different from the original strains used to develop the vaccines,” says Rahul Roy, Associate Professor at the Department of Chemical Engineering (CE), IISc, and corresponding author of the study published in PLoS Pathogens. He and his collaborators examined all 408 available genetic sequences of Indian dengue strains, collected from infected patients between 1956 and 2018, both by the team itself and by other researchers.

There are four broad categories – serotypes – of the dengue virus (Dengue 1, 2, 3, and 4). Using computational analysis, the team examined how much each of these serotypes deviated from its ancestral sequence, from the others, and from other global sequences. “We found that the sequences are changing in a very complex fashion,” says Roy.

Until 2012, the dominant strains in India were Dengue 1 and 3. But in recent years, Dengue 2 has become more dominant across the country, while Dengue 4 – once considered the least infectious – is now carving out a niche for itself in South India, the researchers found. The team sought to investigate what factors decide which strain is the dominant one at any given time. One possible factor could be Antibody-Dependent Enhancement (ADE), says Suraj Jagtap, a Ph.D. student at CE and first author of the study.

Jagtap explains that sometimes, people might be infected first with one serotype and then develop a secondary infection with a different serotype, leading to more severe symptoms. Scientists believe that if the second serotype is similar to the first, the antibodies in the host’s blood generated after the first infection bind to the new serotype and then to immune cells called macrophages. This proximity allows the newcomer to infect macrophages, making the infection more severe. “We knew that ADE enhances severity, [but] we wanted to know if that can also change the evolution of dengue virus,” Jagtap adds.

At any given time, several strains of each serotype exist in the viral population. The antibodies generated in the human body after a primary infection provide complete protection from all serotypes for about 2-3 years. Over time, the antibody levels begin to drop, and cross-serotype protection is lost. The researchers propose that if the body is infected around this time by a similar – but not identical – viral strain, then ADE kicks in, giving a huge advantage to this new strain and causing it to become the dominant strain in the population. Such an advantage lasts for a few more years, after which the antibody levels become too low to make a difference. “This is what is new about this paper,” says Roy. “Nobody has shown such interdependence between the dengue virus and the immunity of the human population before.” This is probably why the recent Dengue 4 strains, which supplanted the Dengue 1 and 3 strains, were more similar to the latter than to their own ancestral Dengue 4 strains, the researchers believe.

Such insights can only come from genomic surveillance of the disease in countries like India, explains Roy, because infection rates here have been historically high and a large share of the population carries antibodies from a previous infection.

This simulated Roman deep field image, containing hundreds of thousands of galaxies, represents just 1.3 percent of the synthetic survey, which is itself just one percent of Roman's planned survey. The galaxies are color coded – redder ones are farther away and whiter ones are nearer. The simulation showcases Roman’s power to conduct large, deep surveys and study the universe statistically in ways that aren’t possible with current telescopes. Credits: M. Troxel and Caltech-IPAC/R. Hurt

NASA Goddard scientists comb through the new simulated Roman data to get a taste of the bonus science

Scientists have created a gargantuan synthetic survey that shows what we can expect from the Nancy Grace Roman Space Telescope’s future observations. Though it represents just a small chunk of the real future survey, this simulated version contains a staggering number of galaxies – 33 million of them, along with 200,000 foreground stars in our home galaxy. This animation shows the type of science that astronomers will be able to do with future Roman deep field observations. The gravity of intervening galaxy clusters and dark matter can lens the light from farther objects, warping their appearance as shown in the animation. By studying the distorted light, astronomers can study elusive dark matter, which can only be measured indirectly through its gravitational effects on visible matter. As a bonus, this lensing also makes it easier to see the most distant galaxies whose light they magnify.

The simulation will help scientists plan the best observing strategies, test different ways to mine the mission's vast quantities of data and explore what we can learn from tandem observations with other telescopes.

"The volume of data Roman will return is unprecedented for a space telescope,” said Michael Troxel, an assistant professor of physics at Duke University in Durham, North Carolina. “Our simulation is a testing ground we can use to make sure we will get the most out of the mission’s observations.”

The team drew data from a mock universe originally developed to support science planning with the Vera C. Rubin Observatory, which is located in Chile and set to begin full operations in 2024. Because the Roman and Rubin simulations use the same source, astronomers can compare them and see what they can expect to learn from pairing the telescopes’ observations once they’re both actively scanning the universe.

Cosmic Construction

Roman’s High Latitude Wide Area Survey will consist of both imaging – the focus of the new simulation – and spectroscopy across the same enormous swath of the universe. Spectroscopy involves measuring the intensity of light from cosmic objects at different wavelengths, while Roman’s imaging will reveal precise positions and shapes of hundreds of millions of faint galaxies that will be used to map dark matter. Although this mysterious substance is invisible, astronomers can infer its presence by observing its effects on regular matter. 

Anything with mass warps the fabric of space-time. The bigger the mass, the greater the warp. This creates an effect called gravitational lensing, which happens when light from a distant source becomes distorted as it travels past intervening objects. When those lensing objects are massive galaxies or galaxy clusters, background sources can be smeared or appear as multiple images.

Less massive objects can create more subtle effects called weak lensing. Roman will be sensitive enough to use weak lensing to see how clumps of dark matter warp the appearance of distant galaxies. By observing these lensing effects, scientists will be able to fill in more of the gaps in our understanding of dark matter.

“Theories of cosmic structure formation make predictions for how the seed fluctuations in the early universe grow into the distribution of matter that can be seen through gravitational lensing,” said Chris Hirata, a physics professor at Ohio State University in Columbus, and a co-author of the paper. “But the predictions are statistical in nature, so we test them by observing vast regions of the cosmos. Roman, with its wide field of view, will be optimized to efficiently survey the sky, complementing observatories such as the James Webb Space Telescope that are designed for deeper investigation of individual objects.”

Ground and Space

This graphic compares the relative sizes of the synthetic image (inset, outlined in orange), the whole area astronomers simulated (the square in the upper-middle outlined in green), and the size of the complete future survey astronomers will conduct (the large square in the lower-left outlined in blue). The background, from the Digitized Sky Survey, illustrates how much sky area each region covers. The synthetic image covers about as much sky as a full moon, and the future Roman survey will cover much more area than the Big Dipper. While it would take the Hubble Space Telescope or James Webb Space Telescope around a thousand years to image an area as large as the future survey, Roman will do it in just over seven months. Credits: NASA’s Goddard Space Flight Center and M. Troxel

The synthetic Roman survey covers 20 square degrees of the sky, which is roughly equivalent to 95 full moons. The actual survey will be 100 times larger, unveiling more than a billion galaxies. Rubin will scan an even greater area – 18,000 square degrees, nearly half of the entire sky – but with lower resolution since it will have to peer through Earth’s turbulent atmosphere.
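These area comparisons are easy to sanity-check with back-of-envelope arithmetic, taking the full Moon’s apparent diameter as roughly half a degree (the figures below are approximations, not mission specifications):

```python
import math

# Rough apparent diameter of the full Moon, in degrees
MOON_DIAMETER_DEG = 0.52
moon_area = math.pi * (MOON_DIAMETER_DEG / 2) ** 2   # ~0.21 square degrees

synthetic_survey = 20.0   # the simulated survey, in square degrees
print(round(synthetic_survey / moon_area))   # ~94 full moons

# Rubin's planned 18,000 square degrees versus the whole celestial sphere
whole_sky = 4 * math.pi * (180 / math.pi) ** 2   # ~41,253 square degrees
print(round(18000 / whole_sky, 2))   # ~0.44 -- nearly half the sky
```

The numbers come out close to the article’s figures: about 95 full moons for the 20-square-degree simulation, and a bit under half the sky for Rubin.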

Pairing the Roman and Rubin simulations offers the first opportunity for scientists to try to detect the same objects in both sets of images. That’s important because ground-based observations aren’t always sharp enough to distinguish multiple, close sources as separate objects. Sometimes they blur together, which affects weak lensing measurements. Now, scientists can determine the difficulties and benefits of “deblending” such objects in Rubin's images by comparing them with Roman ones. 


With Roman’s colossal cosmic view, astronomers will be able to accomplish far more than the survey's primary goals, which are to study the structure and evolution of the universe, map dark matter, and discern between the leading theories that attempt to explain why the expansion of the universe is speeding up. Scientists can comb through the new simulated Roman data to get a taste of the bonus science that will come from seeing so much of the universe in such exquisite detail.

“With Roman’s gigantic field of view, we anticipate many different scientific opportunities, but we will also have to learn to expect the unexpected,” said Julie McEnery, the senior project scientist for the Roman mission at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “The mission will help answer critical questions in cosmology while potentially revealing brand new mysteries for us to solve.”

The Nancy Grace Roman Space Telescope is managed at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, with participation by NASA's Jet Propulsion Laboratory and Caltech/IPAC in Southern California, the Space Telescope Science Institute in Baltimore, and a science team comprising scientists from various research institutions. The primary industrial partners are Ball Aerospace and Technologies Corporation in Boulder, Colorado; L3Harris Technologies in Melbourne, Florida; and Teledyne Scientific & Imaging in Thousand Oaks, California.

Joachim Kock, Associate Professor at the Department of Mathematics, University of Copenhagen. Photo: Jim Høyer

University of Copenhagen professor Kock's COVID computations trigger a solution to an old problem in computer science

A mathematician from the University of Copenhagen set out to forecast the course of the COVID epidemic. Instead, he ended up solving a problem that had troubled computer scientists for decades.

During the COVID epidemic, many of us became amateur mathematicians. How quickly would the number of hospitalized patients rise, and when would herd immunity be reached? Professional mathematicians were challenged as well, and a researcher at the University of Copenhagen was inspired to solve a 30-year-old problem in computer science.

“Like many others, I was out to calculate how the epidemic would develop. I wanted to investigate certain ideas from theoretical computer science in this context. However, I realized that the lack of a solution to the old problem was a showstopper,” says Joachim Kock, Associate Professor at the Department of Mathematics, University of Copenhagen.

His solution to the problem can be of use in epidemiology and computer science, and potentially in other fields as well. A common feature for these fields is the presence of systems where the various components exhibit mutual influence. For instance, when a healthy person meets a person infected with COVID, the result can be two people infected.

The smart method invented by a German teenager

To understand the breakthrough, one needs to know that such complex systems can be described mathematically through so-called Petri nets. The method was invented in 1939 by the German Carl Adam Petri (remarkably, at the age of only 13) for chemistry applications. Just as a healthy person meeting a person infected with COVID can trigger a change, the same may happen when two chemical substances mix and react.

In a Petri net, the various components are drawn as circles while events such as a chemical reaction or an infection are drawn as squares. Next, circles and squares are connected by arrows which show the interdependencies in the system.

Computer scientists regarded the problem as unsolvable

In chemistry, Petri nets are applied to calculate how the concentrations of various chemical substances in a mixture will evolve. This manner of thinking has influenced the use of Petri nets in other fields such as epidemiology: one starts with a high “concentration” of uninfected people, after which the “concentration” of infected people begins to rise. In computer science, the use of Petri nets is somewhat different: the focus is on individuals rather than concentrations, and the development happens in steps rather than continuously.
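To make the contrast concrete, here is a minimal sketch in Python of the individual-oriented, step-wise style of Petri net used in computer science. The places, transition, and token counts are purely illustrative, not taken from Kock’s paper: places hold tokens (here, people), and a transition fires by consuming and producing tokens in discrete steps.

```python
# Places: S (susceptible), I (infected).
# One transition, "infect", modeling S + I -> 2I.

marking = {"S": 4, "I": 1}   # tokens: 4 uninfected people, 1 infected

consume = {"S": 1, "I": 1}   # tokens the transition removes when it fires
produce = {"I": 2}           # tokens it creates

def fire(marking, consume, produce):
    """Fire one transition step if enough tokens are present."""
    if any(marking.get(p, 0) < n for p, n in consume.items()):
        return marking   # transition not enabled; nothing changes
    new = dict(marking)
    for p, n in consume.items():
        new[p] -= n
    for p, n in produce.items():
        new[p] = new.get(p, 0) + n
    return new

# The epidemic unfolds in discrete steps, one firing at a time,
# rather than as continuously evolving concentrations:
for _ in range(4):
    marking = fire(marking, consume, produce)
print(marking)   # {'S': 0, 'I': 5}
```

After four firings every susceptible token has been converted, and the transition is no longer enabled; a chemistry-style treatment of the same net would instead track concentrations with differential equations.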

What Joachim Kock had in mind was to apply the more individual-oriented Petri nets from computer science for COVID calculations. This was when he encountered the old problem:

“The processes in a Petri net can be described through two separate approaches. The first approach regards a process as a series of events, while the second approach sees the net as a graphical expression of the interdependencies between components and events,” says Joachim Kock, adding:

“The serial approach is well suited for performing calculations. However, it has a downside since it describes causalities less accurately than the graphical approach. Further, the serial approach tends to fall short when dealing with events that take place simultaneously.”

“The problem was that nobody had been able to unify the two approaches. The computer scientists had more or less resigned, regarding the problem as unsolvable. This was because no one had realized that you need to go back and revise the very definition of a Petri net,” says Joachim Kock.

Small modifications with a large impact

The Danish mathematician realized that a minor modification to the definition of a Petri net would enable a solution to the problem:

“By keeping parallel arrows as individual arrows, rather than just counting them and writing down a number, additional information is made available. Things then work out, and the two approaches can be unified.”

The exact mathematical reason why this additional information matters is complex, but can be illustrated by an analogy:

“Assigning numbers to objects has helped humanity greatly. For instance, it is highly practical that I can arrange the right number of chairs in advance for a dinner party instead of having to experiment with different combinations of chairs and guests after they have arrived. However, the number of chairs and guests does not reveal who will be sitting where. Some information is lost when we consider numbers instead of real objects.”

Similarly, information is lost when the individual arrows of the Petri net are replaced by a number.
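The loss of information can be mimicked in a few lines of Python. The encoding below is a hypothetical illustration, not Kock’s formalism: each parallel arrow is kept as its own object, and collapsing the arrows to a multiplicity count discards their identities, just as the chair count discards the seating plan.

```python
from collections import Counter

# Hypothetical encoding: each parallel arrow is its own tuple, with an
# identifier distinguishing it from its siblings.
arcs = [
    ("infect", "I", "out", "arrow-a"),
    ("infect", "I", "out", "arrow-b"),   # a second, parallel arrow
]

# Classical Petri nets keep only the multiplicity of each arc:
counted = Counter((t, p, d) for (t, p, d, _) in arcs)
print(counted[("infect", "I", "out")])   # 2 -- which arrow is which is gone
```

Going from `arcs` to `counted` is irreversible: the number 2 can be recovered from the individual arrows, but not vice versa.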

“It takes a bit more effort to treat the parallel arrows individually, but one is amply rewarded as it becomes possible to combine the two approaches so that the advantages of both can be obtained simultaneously.”

The circle to COVID has been closed

The solution helps our mathematical understanding of how to describe complex systems with many interdependencies, but will not have much practical effect on the daily work of computer scientists using Petri nets, according to Joachim Kock:

“This is because the necessary modifications are mostly backward-compatible and can be applied without revising the entire Petri net theory.”

“Somewhat surprisingly, some epidemiologists have started using the revised Petri nets. So, one might say the circle has been closed!”

Joachim Kock does see a further point to the story:

“I wasn’t out to find a solution to the old problem in computer science at all. I just wanted to do COVID calculations. This was a bit like looking for your pen but realizing that you must find your glasses first. So, I would like to take the opportunity to advocate the importance of research that does not have a predefined goal. Sometimes research driven by curiosity will lead to breakthroughs.”