Tel Aviv University prof saves lives by predicting bloodstream infection outcomes using machine learning

State-of-the-art technology will allow physicians to identify patients who are at risk for serious illness ahead of time

A new technology developed at Tel Aviv University in Israel will make it possible, using artificial intelligence, to identify patients who are at risk of serious illness as a result of blood infections. The researchers trained the AI program on the electronic medical records of about 8,000 patients at Tel Aviv’s Ichilov Hospital who were found to be positive for blood infections. These records included demographic data, blood test results, medical history, and diagnoses. After studying each patient’s data and medical history, the program was able to automatically identify risk factors in the medical files and predict the course of the disease with an accuracy of 82%. According to the researchers, in the future this model could even serve as an early warning system for doctors, enabling them to rank patients based on their risk of serious disease.

Photo: Prof. Noam Shomron. Credit: Corinna Kern

Behind this groundbreaking research with the potential to save many lives are students Yazeed Zoabi and Dan Lahav from the laboratory of Prof. Noam Shomron of Tel Aviv University’s Sackler Faculty of Medicine, in collaboration with Dr. Ahuva Weiss Meilik, head of the I-Medata AI Center at Ichilov Hospital, Prof. Amos Adler, and Dr. Orli Kehat. 

The researchers explain that blood infections are one of the leading causes of morbidity and mortality worldwide, so it is very important to identify the risk factors for developing serious illness at the early stage of infection with a bacterium or fungus. The bloodstream is normally sterile, but infection with a bacterium or fungus can occur during surgery, or as the result of complications from other infections, such as pneumonia or meningitis. The diagnosis of infection is made by taking a blood culture and transferring it to a growth medium for bacteria and fungi. The body’s immunological response to the infection can cause sepsis or shock, dangerous conditions with high mortality rates.

“We worked with the medical files of about 8,000 Ichilov Hospital patients who were found to be positive for blood infections between the years 2014 and 2020, following each case during hospitalization and for up to 30 days afterward, whether the patient survived or not,” explains Prof. Noam Shomron. “We entered the medical files into software based on artificial intelligence; we wanted to see if the AI would identify patterns of information in the files that would allow us to automatically predict which patients would develop serious illness, or even die, as a result of the infection.”

To the researchers’ satisfaction, following training, the AI reached an accuracy level of 82% in predicting the course of the disease, even when ignoring obvious factors such as the patients’ age and the number of hospitalizations they had endured. Once a patient’s data were entered, the algorithm was able to predict the course of the disease, which suggests that in the future it will be possible to rank patients ahead of time in terms of the danger posed to their health.
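The article does not name the model or the exact features, but the setup it describes — tabular electronic medical record data and a binary severe-outcome label — maps onto a standard supervised classifier. Below is a minimal sketch in Python of that kind of pipeline; the file name, column names, and the gradient-boosted-trees choice are illustrative assumptions, not the Tel Aviv team’s actual method.

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical extract of EHR data: one row per blood-infection patient
records = pd.read_csv("ehr_blood_infections.csv")

# Illustrative tabular features: demographics, labs, history
X = records[["age", "num_prior_admissions", "wbc_count",
             "creatinine", "platelets", "albumin"]]
y = records["severe_outcome"]  # 1 = serious illness or death within 30 days

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Gradient-boosted trees handle mixed tabular data and missing values natively
model = HistGradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Ranking patients by predicted risk, as the early-warning idea suggests
risk_scores = model.predict_proba(X_test)[:, 1]
```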

“Using artificial intelligence, the algorithm was able to find patterns that surprised us, parameters in the blood that we hadn’t even thought about taking into account,” says Prof. Shomron. “We are now working with medical staff to understand how this information can be used to rank patients in terms of the severity of the infection. We can use the software to help doctors detect the patients who are at maximum risk.”

Following the study’s success, Ramot, Tel Aviv University's technology transfer company, is working to register a global patent for the groundbreaking technology. Keren Primor Cohen, CEO of Ramot, says, “Ramot believes in this innovative technology’s ability to bring about a significant change in the early identification of patients at risk and help hospitals reduce costs. This is an example of effective cooperation between the university’s researchers and hospitals, which improves the quality of medical care in Israel and around the world.”

UB pharmacy prof builds AI-powered supercomputer model to predict disease progression during aging

The model could support the assessment of long-term chronic drug therapies and help clinicians develop more effective treatments for complex diseases

Using artificial intelligence, a team of University at Buffalo researchers has developed a novel system that models the progression of chronic diseases as patients age. 

Published in October in the Journal of Pharmacokinetics and Pharmacodynamics, the model assesses metabolic and cardiovascular biomarkers – measurable biological indicators such as cholesterol levels, body mass index, glucose, and blood pressure – to calculate health status and disease risks across a patient’s lifespan.

The findings are critical due to the increased risk of developing metabolic and cardiovascular diseases with aging, a process that has adverse effects on cellular, psychological and behavioral processes.

“There is an unmet need for scalable approaches that can provide guidance for pharmaceutical care across the lifespan in the presence of aging and chronic co-morbidities,” says lead author Murali Ramanathan, Ph.D., professor of pharmaceutical sciences in the UB School of Pharmacy and Pharmaceutical Sciences. “This knowledge gap may be potentially bridged by innovative disease progression modeling.”

The model could facilitate the assessment of long-term chronic drug therapies, and help clinicians monitor treatment responses for conditions such as diabetes, high cholesterol, and high blood pressure, which become more frequent with age, says Ramanathan.

Additional investigators include the first author and UB School of Pharmacy and Pharmaceutical Sciences alumnus Mason McComb, Ph.D.; Rachael Hageman Blair, Ph.D., associate professor of biostatistics in the UB School of Public Health and Health Professions; and Martin Lysy, Ph.D., associate professor of statistics and actuarial science at the University of Waterloo.

The research examined data from three case studies within the third National Health and Nutrition Examination Survey (NHANES) that assessed the metabolic and cardiovascular biomarkers of nearly 40,000 people in the United States. 

Biomarkers, which also include measurements such as temperature, body weight, and height, are used to diagnose, treat, and monitor overall health and numerous diseases.

The researchers examined seven metabolic biomarkers: body mass index, waist-to-hip ratio, total cholesterol, high-density lipoprotein cholesterol, triglycerides, glucose, and glycohemoglobin. The cardiovascular biomarkers examined include systolic and diastolic blood pressure, pulse rate, and homocysteine.

By analyzing changes in metabolic and cardiovascular biomarkers, the model “learns” how aging affects these measurements. With machine learning, the system uses a memory of previous biomarker levels to predict future measurements, which ultimately reveal how metabolic and cardiovascular diseases progress over time.
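The article summarizes the approach at a high level; one simple way to make the “memory of previous biomarker levels” concrete is an autoregressive model that learns a map from one visit’s biomarker vector to the next and is then rolled forward. The linear one-step model and function names below are illustrative assumptions, not the published formulation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_progression(trajectories):
    """Fit a one-step model: predict the next biomarker vector from the last.

    trajectories: list of (n_visits, n_markers) arrays, one per subject,
    e.g. the seven metabolic markers plus the cardiovascular ones.
    """
    X, Y = [], []
    for traj in trajectories:
        for t in range(1, len(traj)):
            X.append(traj[t - 1])  # previous levels: the model's "memory"
            Y.append(traj[t])      # next measurement to predict
    return LinearRegression().fit(np.array(X), np.array(Y))

def project(model, current, n_steps):
    """Roll the one-step model forward to sketch a long-term trajectory."""
    path = [np.asarray(current, dtype=float)]
    for _ in range(n_steps):
        path.append(model.predict(path[-1][None, :])[0])
    return np.array(path)
```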

Michigan Medicine study shows how bias can creep into medical databanks that drive precision health, clinical AI

Findings have already prompted improvements in how the University of Michigan recruits new participants for its biobank

In the race to harness medical data for artificial intelligence tools and personalized health care, a new study shows how easily unintentional design bias can affect those efforts.

It also points to specific ways to increase the chances that patients who are traditionally underrepresented in research can be included in the massive banks of genetic samples and data from digital medical records that underlie these efforts.

Not only could that be important to the accuracy of the tools based on those data, but it would also make it more likely that they’d benefit diverse patient communities.

The study, in the December issue of Health Affairs, comes from a team at the University of Michigan and Michigan State University that studied U-M’s efforts to build a large bank of data and samples for researchers to use.

The findings have already led to improvements in how Precision Health at U-M recruits participants and in the racial and ethnic categories that patients can self-select to be added to their records.

Key findings

The study focuses on the Michigan Genomics Initiative (MGI), which originally designed its recruitment effort around approaching patients to donate a small amount of blood for the research biobank when they were waiting for surgery at Michigan Medicine, U-M’s academic medical center. Trained MGI recruiters aimed to approach all adult surgical patients in the preoperative setting during typical surgical hours.

There were several reasons why MGI used this approach — including the fact that patients in such settings have time to engage in recruitment and enrollment procedures, and that they often already have an intravenous line placed in preparation for their treatment, so it’s convenient to draw a blood sample for research use if they consent.

But the new study found that the pool of surgical patients from which MGI staff recruited was more likely to be older, white, and socioeconomically advantaged men when compared to the general Michigan Medicine patient population.

In addition, when approached, patients who consented to enroll in MGI were younger than the average patient waiting for surgery and less likely to be Black or African American, Asian, or Hispanic.

The result: The blood samples collected for the biobank came from a sub-population that was less demographically diverse than Michigan Medicine’s overall patient population.

Changing the approach

While recruiting surgical patients remains a key component of MGI’s recruitment strategy, Precision Health has since expanded its recruiting efforts to include a mail-in saliva-collection kit — giving a broader patient population the opportunity to engage in the research if they choose. Precision Health’s MY PART effort aims to recruit a nationally representative study population into the university’s biobank.

The authors hope that by sharing their deep-dive into differences in recruitment and consent rates, they can help other institutions, organizations, and companies design more equitable databanks of their own.

If they don’t, the researchers say, the tools and products that emerge from research using those databanks will reflect demographic biases, making them less accessible or generalizable for underrepresented communities.

“We know that large research datasets often do not reflect the diversity of the patient population across the United States, but our study gives a detailed analysis about how these disparities become embedded in scientific advances from the ground up,” said Kayte Spector-Bagdady, J.D., M.B.E., co-first author of the new paper and a research ethicist at Michigan Medicine. “This way we were able to highlight practical improvements that we could implement immediately,” she added.

Downstream effects

Spector-Bagdady, a U-M Medical School assistant professor who is the Associate Director of U-M’s Center for Bioethics and Social Sciences in Medicine, led the study along with senior author Jenna Wiens, Ph.D., one of the co-directors of Precision Health and an associate professor of computer science and engineering at the U-M College of Engineering. Both are members of the U-M Institute for Healthcare Policy and Innovation.

“A lot of the research that goes on in precision health, machine learning, and AI for health care across the country leverages data from the electronic health records of major health systems, and data from the subset of patients who have consented to give biospecimens,” Wiens explained. “For an AI researcher who builds machine learning and clinical decision support tools, generalizability is so important. Otherwise, we risk building tools that perpetuate disparities in care and outcomes.”

Levels of consent unlock more precision

The authors note that many academic medical centers, including Michigan Medicine, inform patients when they consent to receive care that their medical records might be used by researchers. At U-M, such use is permitted with authorization from the Institutional Review Boards at the Medical School.

Taking part in MGI involves consenting to allow those records to be used in conjunction with a sample of their DNA.

For instance, researchers might analyze part of their genetic sequence and look at how their genetic traits relate to conditions they have or how well they do when given certain treatments.

This is a powerful tool for understanding what drives certain diseases, or what treatments work best for people with different characteristics who have the same type of cancer, for instance.

It could also form the basis for AI tools that can predict which patients will suffer certain complications, or help doctors pick from among various treatments for them.

Using just the Michigan Medicine electronic medical record data would mean capturing a patient population with more demographic diversity, but it would not offer patients the same research-level informed consent as the biobank consent process.

Records-based research also means less precision for some studies, because it doesn’t include the ability to study genetic variation and biomarkers – such as proteins in the blood that could be associated with a disease.

That means biobank teams must go to extra lengths to recruit people from groups that are less likely to give consent.

“Building long-term trust between healthcare systems and those underrepresented in biobanks, and the research enterprise in general, is a task that must be prioritized. Any attempts at equity building must be hyper-localized, attentive to historical neglect, and situated in justice considerations beyond the research question,” added co-author Melissa Creary, Ph.D., who is an assistant professor at the U-M School of Public Health and the Senior Director of Public Health Initiatives at the American Thrombosis and Hemostasis Network, and who has written extensively on these issues.

Making it clear to participants how their data will be used if they give consent, including any commercial uses, and being careful about sharing data with industry is crucial for earning trust and is already a top priority at U-M. Michigan Medicine’s leader, Marschall Runge, M.D., Ph.D., recently wrote on this topic.

“There’s an important tension between respecting patients’ informed consent and also supporting generalizable research,” Spector-Bagdady said. “The ideal resolution is a structure that doesn’t put those two in tension, to begin with.”

How two UMass Amherst scientists are balancing the planet's natural carbon budget

New research is first to pin down the mechanics of CO2 fluxes in rivers and streams

A pair of researchers at the University of Massachusetts Amherst recently published the results of a study that is the first to take a process-based modeling approach to understand how much CO2 rivers and streams contribute to the atmosphere. The team focused on the East River watershed in Colorado’s Rocky Mountains and found that their new approach is far more accurate than traditional approaches, which overestimated CO2 emissions by up to a factor of 12. An early online version of the research was recently published by Global Biogeochemical Cycles.

Scientists refer to the total CO2 circulating through the earth and the atmosphere as the carbon budget. This budget includes both anthropogenic sources of CO2, such as those that come from burning fossil fuels, and more natural sources of CO2 that are part of the planet’s regular carbon cycle. “In the era of global climate change,” says Brian Saccardi, a graduate student in geosciences at UMass Amherst and lead author of the new research, “we need to know what the baseline levels of CO2 are, where they come from and how those physical processes of carbon emission work.” Without such a baseline, it is difficult to know how the earth is changing as CO2 levels increase.

Photo: Brian Saccardi collecting stream data from the East River watershed, Colorado

Streams and rivers are among the many natural sources of CO2 emissions—scientists have long known this, but the amount has been very difficult to pin down. In part, this is because CO2 emissions fluctuate rapidly and it has proved impracticable to physically monitor all of the earth’s river networks. So scientists typically rely on statistical models to estimate how much CO2 streams and rivers emit. The problem, Saccardi explains, is that these models don’t account for the full complexity of how CO2 moves from groundwater into the stream or river, what happens to it once there, and how much gets emitted to the atmosphere.

“This is the first time we’re accounting for the physical processes themselves,” says Matthew Winnick, professor of geosciences at UMass Amherst and the paper’s co-author. “We need to know how each step of the movement of CO2 works, so we know how they will react to climate change.”

Saccardi and Winnick designed, tested, and validated a “process-based” model that relies on the laws of physics as well as empirical measurements to arrive at its estimates. The pair took 121 measurements of streams in the remote East River watershed in Colorado, against which they could test their new model. And the results were clear: according to the research, their model is far more accurate than the standard approaches.
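The pair’s full process-based model (tracking groundwater CO2 inputs and their evolution downstream) is detailed in the paper; at its core, though, any stream emission estimate rests on the standard air–water gas exchange relation F = k(C_w − C_eq), where k is the gas transfer velocity and C_eq is the dissolved CO2 concentration in equilibrium with the atmosphere. A minimal sketch, with approximate constants for illustration only:

```python
def co2_flux(k, c_water, p_co2_atm=415e-6, henry_kh=0.034):
    """Standard air-water gas exchange: F = k * (C_w - C_eq).

    k         : gas transfer velocity (m/day)
    c_water   : dissolved CO2 in the stream (mol/m^3)
    p_co2_atm : atmospheric CO2 partial pressure (atm), ~415 ppm
    henry_kh  : Henry's law solubility for CO2, mol/(L*atm) near 25 C
    Returns flux in mol CO2 per m^2 per day (positive = emission to air).
    """
    c_eq = henry_kh * 1000 * p_co2_atm  # equilibrium concentration, mol/m^3
    return k * (c_water - c_eq)
```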

Though Saccardi and Winnick are quick to point out that their conclusions apply to the East River watershed only, they have plans to apply their process-based model more widely and suspect that their new method may help to radically reevaluate the earth’s natural carbon budget.

Chinese-built divide and conquer algorithm offers a promising route for big data analysis

We live in the era of big data. The huge volume of information we generate daily has major applications in various fields of science and technology, economy, and management. For example, more and more companies now collect, store and analyze large-scale data sets from multiple sources to gain business insights or measure risk.

However, as Prof. Yong Zhou, one of the authors of a new study notes: “Typically, these large or massive data sets cannot be processed with independent computers, which poses new challenges for traditional data analysis in terms of computational methods and statistical theory.”

Together with colleagues at the Chinese University of Hong Kong, Zhou, a professor at China’s East China Normal University, has developed a new algorithm that promises to address these computational problems.

He explains: “State-of-the-art numerical algorithms already exist, such as optimal subsampling algorithms and divide and conquer algorithms. In contrast to the optimal subsampling algorithm, which samples small-scale, informative data points, the divide and conquer algorithm divides large data sets randomly into sub-datasets and processes them separately on multiple machines. While the divide and conquer method is effective in using computational resources to provide a big data analysis, a robust and efficient meta-method is usually required when integrating the results.”

In this study, the researchers focused on large-scale inference for a linear expectile regression model, which has wide applications in risk management. They propose a communication-efficient divide and conquer algorithm, in which the summary statistics from the sub-datasets are combined via a confidence distribution. Zhou explains: “This is a robust and efficient meta-method for integrating the results. More importantly, we studied the relationship between the number of machines and the sample size. We found that the requirement for the number of machines is a trade-off between statistical accuracy and computational efficiency.”
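As a rough illustration of the divide and conquer idea described above — fit sub-datasets separately, then merge the results — here is a minimal Python sketch. The expectile fit uses standard iteratively reweighted least squares; the merge is a simple sample-size-weighted average, a stand-in for the paper’s confidence distribution combination, which also pools uncertainty information.

```python
import numpy as np

def expectile_regression(X, y, tau=0.5, n_iter=50):
    """Linear expectile regression via iteratively reweighted least squares.

    Minimizes sum_i |tau - 1{y_i < x_i @ beta}| * (y_i - x_i @ beta)^2.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares start
    for _ in range(n_iter):
        w = np.where(y < X @ beta, 1.0 - tau, tau)  # asymmetric weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted LS step
    return beta

def divide_and_conquer(X, y, tau=0.9, n_machines=10, seed=0):
    """Split the data, fit each block separately, merge the estimates."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(y)), n_machines)
    betas = [expectile_regression(X[i], y[i], tau) for i in blocks]
    sizes = [len(i) for i in blocks]
    return np.average(betas, axis=0, weights=sizes)
```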

Zhou adds: “We believe the algorithm we have developed can significantly help to address the computational challenges arising from large-scale data.”