University of Copenhagen's Lorenzen implements new algorithm on Computerome 2 to predict COVID-related ICU resource use, saves lives

The COVID-19 pandemic is on the rise in many European countries, and hospitals worldwide are under maximum pressure. 

Now, an innovative algorithm will help alleviate pressure whenever hospitals are confronted by new waves of COVID. Researchers from the University of Copenhagen, among others, have developed the algorithm, which can predict the course of COVID patients' illnesses and estimate how many of them are likely or unlikely to require intensive care or ventilation.

This is important for the allocation of staff across hospitals in countries such as Denmark, explains one of the study's authors.

"If we can see that we’ll have capacity issues five days out because too many beds are taken at Rigshospitalet, for example, we can plan better and divert patients to hospitals with more space and staffing. As such, our algorithm has the potential to save lives," explains Stephan Lorenzen, a postdoc at the University of Copenhagen’s Department of Computer Science. Getty Images

The algorithm uses individual patient data from Sundhedsplatformen (the Danish health platform), including information about a patient’s gender, age, medications, BMI, whether they smoke or not, blood pressure, and more.

This allows the algorithm to predict how many patients, within a one-to-fifteen-day time frame, will need intensive care in the form of, for example, ventilators and constant monitoring by nurses and doctors.
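To make the approach concrete, here is a minimal sketch of that per-patient idea in Python. The study's actual model, features, and data are not given in this article, so the random forest classifier and every feature name and value below are illustrative assumptions, not the researchers' implementation.

```python
# Minimal sketch (illustrative only): train a per-patient classifier, then sum
# predicted probabilities to forecast ICU demand. The model choice, feature
# names, and all numbers are assumptions, not the study's actual pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical patient-level training data (age, BMI, smoking, blood pressure).
patients = pd.DataFrame({
    "age": [82, 25, 67, 45],
    "bmi": [31.0, 22.5, 28.4, 24.1],
    "smoker": [1, 0, 1, 0],
    "systolic_bp": [155, 118, 140, 125],
})
needed_icu_within_5_days = np.array([1, 0, 1, 0])  # historical outcome labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(patients, needed_icu_within_5_days)

# For currently admitted patients, the sum of per-patient probabilities gives
# the expected number of ICU beds needed within the forecast horizon.
admitted = pd.DataFrame({
    "age": [78, 30],
    "bmi": [29.0, 23.0],
    "smoker": [1, 0],
    "systolic_bp": [150, 120],
})
p_icu = model.predict_proba(admitted)[:, 1]
print(f"Expected ICU demand within 5 days: {p_icu.sum():.1f} beds")
```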

Along with colleagues at the University of Copenhagen, as well as researchers at Rigshospitalet and Bispebjerg Hospital, Lorenzen developed the new algorithm based on health data from 42,526 Danish patients who tested positive for the coronavirus between March 2020 and May 2021.

Predicts the number of intensive care patients with 90 percent accuracy

Traditionally, researchers have used regression models to predict COVID-related hospital admissions. However, these models haven’t taken individual disease histories, age, gender, and other factors into account.

"Our algorithm is based on more detailed data than other models. This means that we can predict the number of patients who will be admitted to intensive care units or who need a ventilator within five days with over 90 percent accuracy," states Stephan Lorenzen.

The algorithm provides extremely accurate predictions for the likely number of intensive care patients for up to ten days.

"We make better predictions than comparable models because we are able to more accurately map the potential need for ventilators and 24-hour intensive care for up to ten days. Precision decreases slightly beyond that, similar to that of the existing algorithmic models used to predict the course of illness in Covid cases," he elaborates.

In principle, the algorithm is ready to be deployed in Danish hospitals. As such, the researchers are about to begin discussions with relevant health professionals.

"We have shown that data can be used for so incredibly much. And, that we in Denmark, are lucky to have so much health information to draw from. Hopefully, our new algorithm can help our hospitals avoid Covid overload when a new wave of the illness hits," concludes Stephan Lorenzen.

What distinguishes the new algorithm from others

Most existing algorithms in the field do not take the gender, age, and medical history of individuals into account. They look at the number of hospitalized COVID patients in need of intensive care on any given day. Based on this, along with mortality and new infection data, existing models try to predict how many people will be hospitalized tomorrow.

"For example, typical models cannot distinguish between younger or older people. Whether there are five people who are 80-years-old or more hospitalized, or five 25-year-old patients, has a major impact on the prediction in relation to what the probability of hospitalization is. Our new algorithm accounts for this," says Stephan Lorenzen.

Ethical considerations

  • The new algorithm uses health data approved for use under section 42 d of the Danish Health Act.
  • Data is processed on Computerome 2, a secure supercomputer for personal data, and under the permission of the Danish Patient Safety Authority, data owners, and other relevant authorities.
  • The Danish Council on Ethics has approved the study and the regional executive boards have approved the use of data.

Using the GIZMO massively parallel, multiphysics code, Wisconsin scientists discover the Magellanic Stream is five times closer than previously thought

Our galaxy is not alone. Swirling around the Milky Way are several smaller dwarf galaxies — the biggest of which are the Small and Large Magellanic Clouds, visible in the night sky of the Southern Hemisphere.

[Image: A view of the gas in the Magellanic System as it would appear in the night sky, taken directly from the numerical simulations and modified slightly for aesthetics. Credit: Colin Legg / Scott Lucchini]

During their dance around the Milky Way over billions of years, the Magellanic Clouds’ gravity has ripped from each of them an enormous arc of gas — the Magellanic Stream. The stream helps tell the history of how the Milky Way and its closest galaxies came to be and what their future looks like.

New astronomical models developed by scientists at the University of Wisconsin–Madison and the Space Telescope Science Institute recreate the birth of the Magellanic Stream over the last 3.5 billion years. Using the latest data on the structure of the gas, the researchers discovered that the stream may be five times closer to Earth than previously thought.

The findings suggest that the stream may collide with the Milky Way far sooner than expected, helping fuel new star formation in our galaxy.

“The Magellanic Stream origin has been a big mystery for the last 50 years. We proposed a new solution with our models,” says Scott Lucchini, a graduate student in physics at UW–Madison and lead author of the paper. “The surprising part was that the models brought the stream much closer to the Milky Way.”

The new models also provide a precise prediction of where to find the stream’s stars. These stars would have been ripped from their parent galaxies with the rest of the stream’s gas, but only a few have been tentatively identified. Future telescope observations might finally spot the stars and confirm the new reconstruction of the stream’s origin is correct.

“It’s shifting the paradigm of the stream,” says Lucchini. “Some have thought the stars are too faint to see because they’re too far away. But we now see that the stream is basically at the outer part of the disk of the Milky Way.”

That’s close enough to spot, says Elena D’Onghia, a professor of astronomy at UW–Madison, and supervisor of the project. “With the current facilities, we should be able to find the stars. That’s exciting,” she says.

Lucchini, D’Onghia, and Space Telescope Science Institute scientist Andrew Fox published their findings in The Astrophysical Journal Letters on Nov. 8.

The latest work was based both on fresh data and different assumptions about the history of the Magellanic Clouds and Stream. In 2020, the research team predicted that the stream is enveloped by a large corona of warm gas. So, they plugged this new corona into their simulations, while also accounting for a new model of the dwarf galaxies that suggests they have a relatively brief history of orbiting one another — a mere 3 billion years or so.

“Adding the corona to the problem changed the orbital history of the clouds,” Lucchini explains.
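The published work uses full hydrodynamic simulations in GIZMO, but the qualitative reason a corona matters can be sketched with a toy model: moving through a gaseous medium exerts a drag force that saps a satellite's orbital energy, so its orbit evolves differently than in the gas-free case. Everything below (the point-mass potential, the linear drag law, the coefficient, the units) is an invented illustration, not the paper's setup.

```python
# Toy orbit integration (not GIZMO): compare a satellite's orbit around a
# point-mass galaxy with and without a crude linear drag term standing in
# for its passage through a gaseous corona. All values are arbitrary.
import numpy as np

G_M = 1.0  # gravitational parameter of the host galaxy (code units)

def final_radius(drag, steps=20000, dt=1e-3):
    pos = np.array([1.0, 0.0])
    vel = np.array([0.0, 0.9])  # slightly sub-circular starting velocity
    for _ in range(steps):
        r = np.linalg.norm(pos)
        acc = -G_M * pos / r**3 - drag * vel  # gravity plus optional drag
        vel += acc * dt                       # semi-implicit Euler step
        pos += vel * dt
    return np.linalg.norm(pos)

print("final radius, no corona:  ", final_radius(drag=0.0))
print("final radius, with corona:", final_radius(drag=0.02))
```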

In this new recreation, as the dwarf galaxies were captured by the Milky Way, the Small Magellanic Cloud orbited around the Large Magellanic Cloud in the opposite direction than previously thought. As the orbiting dwarf galaxies stripped gas from one another, they produced the Magellanic Stream.

The opposite-direction orbit pushed and pulled the stream so it arced toward Earth, rather than stretching farther away into intergalactic space. The stream’s closest approach is likely to be just 20 kiloparsecs from Earth, or about 65,000 light-years away. The clouds themselves sit between 55 and 60 kiloparsecs away.
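The distances quoted above are straightforward unit conversions (one parsec is about 3.26 light-years); a quick check with astropy:

```python
# Convert the quoted distances from kiloparsecs to light-years.
from astropy import units as u

print((20 * u.kpc).to(u.lyr))  # stream's closest approach: ~65,000 light-years
print((55 * u.kpc).to(u.lyr))  # near edge of the Clouds' distance range
print((60 * u.kpc).to(u.lyr))  # far edge of the Clouds' distance range
```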

“The revised distance changes our understanding of the stream. It means our estimates of many of the stream’s properties, such as mass and density, will need to be revised,” says Fox.

If the stream is this close, then it likely has just one-fifth the mass previously thought. The closest approach of the stream also means this gas will start merging with the Milky Way in about 50 million years, providing the fresh material needed to jump-start the birth of new stars in the galaxy.

The stars in the Magellanic Stream itself have eluded researchers for decades. But the new study suggests that perhaps they were simply looking in the wrong place.

“This model tells us exactly where the stars should be,” says D’Onghia.

The discovery of the new exoplanets was made possible by a planet detection algorithm developed by UCLA postdoc Zink

UCLA astronomers have identified 366 new exoplanets, thanks to an algorithm developed by a UCLA postdoctoral scholar. Among their most noteworthy findings is a planetary system that comprises a star and at least two gas giant planets, each roughly the size of Saturn and located unusually close to one another.

[Image: UCLA researchers identified 366 new exoplanets using data from the Kepler Space Telescope, including 18 planetary systems similar to the one illustrated here, Kepler-444, which was previously identified using the telescope. Credit: Tiago Campante / Peter Devine via NASA]

The term “exoplanets” describes planets outside our solar system. Astronomers have identified fewer than 5,000 exoplanets in all, so the identification of hundreds of new ones is a significant advance. Studying such a large new group of bodies could help scientists better understand how planets form and orbits evolve, and it could provide new insights into how unusual our solar system is.

“Discovering hundreds of new exoplanets is a significant accomplishment by itself, but what sets this work apart is how it will illuminate features of the exoplanet population as a whole,” said Erik Petigura, a UCLA astronomy professor and co-author of the research.

The paper’s lead author is Jon Zink, who earned his doctorate from UCLA in June and is currently a UCLA postdoctoral scholar. He and Petigura, as well as an international team of astronomers called the Scaling K2 project, identified the exoplanets using data from the NASA Kepler Space Telescope’s K2 mission.

The discovery was made possible by a new planet detection algorithm that Zink developed. One challenge in identifying new planets is that reductions in stellar brightness may originate from the instrument or from another astrophysical source that mimics a planetary signature. Teasing out which signals actually indicate planets requires extra investigation, which traditionally has been extremely time-consuming and could only be accomplished through visual inspection. Zink’s algorithm can separate which signals indicate planets and which are merely noise.
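Zink's pipeline itself is not reproduced here, but the kind of signal it has to vet can be illustrated with a generic transit search. The sketch below injects a made-up 7.3-day transit into a noisy synthetic light curve and recovers it with a box least squares periodogram from astropy; candidates like this are what a vetting algorithm must then separate from instrumental artifacts and astrophysical impostors.

```python
# Generic transit-search sketch (not Zink's actual pipeline): a box least
# squares periodogram finds periodic dips in stellar brightness, the raw
# candidate signals that a vetting algorithm must then classify.
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(0)
t = np.linspace(0, 80, 4000)                        # 80 days of observations
flux = 1.0 + 0.0005 * rng.standard_normal(t.size)   # noisy flat baseline
flux[(t % 7.3) < 0.2] -= 0.003                      # injected 7.3-day, 0.3% dip

bls = BoxLeastSquares(t, flux)
periodogram = bls.autopower(0.2)                    # search with 0.2-day duration
best_period = periodogram.period[np.argmax(periodogram.power)]
print(f"Recovered period: {best_period:.2f} days")  # ~7.3 days
```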

“The catalog and planet detection algorithm that Jon and the Scaling K2 team devised are a major breakthrough in understanding the population of planets,” Petigura said. “I have no doubt they will sharpen our understanding of the physical processes by which planets form and evolve.”

Kepler’s original mission came to an unexpected end in 2013 when a mechanical failure left the spacecraft unable to precisely point at the patch of sky it had been observing for years.

But astronomers repurposed the telescope for a new mission known as K2, whose objective is to identify exoplanets near distant stars. Data from K2 is helping scientists understand how stars’ location in the galaxy influences what kind of planets can form around them. Unfortunately, the software used by the original Kepler mission to identify possible planets could not handle the complexities of the K2 data, such as determining planets’ sizes and their locations relative to their stars.

Previous work by Zink and collaborators introduced the first fully automated pipeline for K2, with software to identify likely planets in the processed data.

For the new study, the researchers used the new software to analyze the entire dataset from K2 (about 500 terabytes of data encompassing more than 800 million images of stars) to create a “catalog” that will soon be incorporated into NASA’s master exoplanet archive. The researchers used UCLA’s Hoffman2 Supercomputer Cluster to process the data.

In addition to the 366 new planets the researchers identified, the catalog lists 381 other planets that had been previously identified.

Zink said the findings could be a significant step toward helping astronomers understand which types of stars are most likely to have planets orbiting them and what that indicates about the building blocks needed for successful planet formation.

“We need to look at a wide range of stars, not just ones like our sun, to understand that,” he said.

The discovery of the planetary system with two gas giant planets was also significant because it’s rare to find gas giants — like Saturn in our solar system — as close to their host star as they were in this case. The researchers cannot yet explain why it occurred there, but Zink said that makes the finding especially useful because it could help scientists form a more accurate understanding of the parameters for how planets and planetary systems develop.

“The discovery of each new world provides a unique glimpse into the physics that play a role in planet formation,” he said.