National Academy of Medicine publishes special report: AI's future potential hinges on consensus

The role of artificial intelligence, or machine learning, will be pivotal as the industry wrestles with a gargantuan amount of data that could improve -- or muddle -- health and cost priorities, according to a National Academy of Medicine Special Publication on the use of AI in health care.

Yet the current explosion of investment and development is happening without an underlying consensus on responsible, transparent deployment, which threatens to constrain the technology's potential.

The new report is designed to be a comprehensive reference for organizational leaders, health care professionals, data analysts, model developers and those who are working to integrate machine learning into health care, said Vanderbilt University Medical Center's Michael Matheny, MD, MS, MPH, Associate Professor in the Department of Biomedical Informatics, and co-editor of AI in Healthcare: The Hope, The Hype, The Promise, The Peril.

"It's critical for the health care community to learn from both the successes, but also the challenges and recent failures in the use of these tools. We set out to catalog important examples in health care AI, highlight best practices around AI development and implementation, and offer key points that need to be discussed for consensus to be achieved on how to address them as an AI community and society," said Matheny. AI and Health Care cover PREPUB FINAL scaled 990b0{module INSIDE STORY}

Matheny underscores that AI applications in health care look nothing like the mass-market imagery of self-driving cars so often associated with machine learning and other tech-driven systems.

For the immediate future, AI in health care should be thought of as a tool that supports and complements the decision-making of highly trained professionals delivering care in collaboration with patients and their goals, Matheny said.

Recent advances in deep learning and related technologies have been highly successful in image interpretation, such as radiology and retinal exams, spurring a rush toward AI development that attracted first venture capital funding and then industry giants. However, some of these tools have suffered from bias in the populations they were developed on, or from the choice of an inappropriate prediction target. Data analysts and developers need to work toward increased data access and standardization, as well as thoughtful development, so that algorithms are not biased against already marginalized patients.

The editors hope the report can contribute to the dialogue on patient inclusivity and fairness in the use of AI tools, and on the need for careful development, implementation, and surveillance to optimize their chances of success, Matheny said.

Matheny along with Stanford University School of Medicine's Sonoo Thadaney Israni, MBA, and Mathematica Policy Research's Danielle Whicher, PhD, MS, penned an accompanying piece for JAMA Network about the watershed moment in which the industry finds itself.

"AI has the potential to revolutionize health care. However, as we move into a future supported by technology together, we must ensure high data quality standards, that equity and inclusivity are always prioritized, that transparency is use-case-specific, that new technologies are supported by appropriate and adequate education and training, and that all technologies are appropriately regulated and supported by specific and tailored legislation," the National Academy of Medicine wrote in a release.

"I want people to use this report as a foil to hone the national discourse on a few key areas including education, equity in AI, uses that support human cognition rather than replacing it, and separating out AI transparency into data, algorithmic, and performance transparency," said Matheny.

SMU develops efficient methods to simulate how electromagnetic waves interact with devices

It takes a tremendous number of supercomputer simulations to create a device like an MRI scanner, which images your brain by detecting electromagnetic waves propagating through tissue. The tricky part is figuring out how electromagnetic waves will react when they come in contact with the materials in the device.

SMU researchers have developed an algorithm that can be used in a wide range of fields - from biology and astronomy to military applications and telecommunications - to create equipment more efficiently and accurately.

Currently, such simulations can take days or months to run, and cost limits the number typically done for these devices. SMU math researchers have developed a faster algorithm for these simulations with the help of grants from the U.S. Army Research Office and the National Science Foundation.

"We can reduce the simulation time from one month to maybe one hour," said lead researcher Wei Cai, Clements Chair of Applied Mathematics at SMU. "We have made a breakthrough in these algorithms."

"This work will also help create a virtual laboratory for scientists to simulate and explore quantum dot solar cells, which could produce extremely small, efficient and lightweight solar military equipment," said Dr. Joseph Myers, Army Research Office mathematical sciences division chief. CAPTION (From Left) Wei Cai, Dr. Bo Wang and Wenzhong Zhang.  CREDIT Photo courtesy of SMU (Southern Methodist University), Hillsman S. Jackson{module INSIDE STORY}

Dr. Bo Wang, a postdoctoral researcher at SMU (Southern Methodist University) and Wenzhong Zhang, a graduate student at the university, also contributed to this research. The study was published today by the SIAM Journal on Scientific Computing.

The algorithm could have significant implications in a number of scientific fields.

"Electromagnetic waves exist as radiation of energies from charges and other quantum processes," Cai explained. 

They include things like radio waves, microwaves, light, and X-rays. Electromagnetic waves are also the reason you can use a mobile phone to talk to someone in another state and why you can watch TV. In short, they're everywhere.

An engineer or mathematician could use the algorithm to design a device whose job is to pick out a certain electromagnetic wave. For instance, it could potentially be used to design a solar light battery that lasts longer and is smaller than those that currently exist.

"To design a battery that is small in size, you need to optimize the material so that you can get the maximum conversion rate from the light energy to electricity," Cai said. "An engineer could find that maximum conversion rate by going through simulations faster with this algorithm."

Or the algorithm could help an engineer design a seismic monitor to predict earthquakes by tracking elastic waves in the earth, Cai noted.

"These are all waves, and our method applies for different kinds of waves," he said. "There is a wide range of applications with what we have developed."

Supercomputer simulations map out how the materials in a device, such as semiconductors, will interact with light, in turn giving a sense of what a particular wave will do when it comes in contact with that device.

Many devices involving light interactions are fabricated by layering materials on top of one another in a lab, much like Legos; the result is called layered media. Computer simulations then analyze the layered media using mathematical models to see how the material in question interacts with light.

SMU researchers have found a more efficient and less expensive way to solve the Helmholtz and Maxwell equations - difficult to solve, but essential tools for predicting the behavior of waves.
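To give a concrete sense of the kind of problem involved, the sketch below sets up and solves a one-dimensional Helmholtz equation with a piecewise-constant wavenumber standing in for two material layers. It is an illustrative toy using a standard finite-difference discretization, not the SMU team's algorithm; the grid size, wavenumbers, and source term are all assumptions.

```python
# Illustrative sketch only: a 1D Helmholtz solve on a uniform grid with a
# piecewise-constant wavenumber standing in for "layered media". This is NOT
# the SMU algorithm described in the article; it simply shows the kind of
# boundary-value problem (u'' + k(x)^2 u = f) that such simulations solve.
import numpy as np

n = 200                               # number of interior grid points (assumed)
h = 1.0 / (n + 1)                     # grid spacing on the unit interval
x = np.linspace(h, 1.0 - h, n)        # interior nodes

# Two "layers" with different wavenumbers (hypothetical material parameters).
k = np.where(x < 0.5, 20.0, 35.0)

# Localized source term inside the first layer (hypothetical).
f = np.exp(-((x - 0.25) ** 2) / 0.001)

# Second-order finite-difference discretization of u'' + k(x)^2 u = f
# with homogeneous Dirichlet boundary conditions u(0) = u(1) = 0.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = -2.0 / h**2 + k[i] ** 2
    if i > 0:
        A[i, i - 1] = 1.0 / h**2
    if i < n - 1:
        A[i, i + 1] = 1.0 / h**2

u = np.linalg.solve(A, f)             # dense solve; fine at this toy size
print("max |u| =", np.abs(u).max())
```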

The problem of wave sources and material interactions in layered structures has challenged mathematicians and engineers for the last 30 years.

Professor Weng Cho Chew from Electrical and Computer Engineering at Purdue, a world-leading expert on computational electromagnetics, said the problem "is notoriously difficult."

Commenting on the work of Cai and his team, Chew said, "Their results show excellent convergence to small errors. I hope that their results will be widely adopted."

The new algorithm modifies a mathematical method called the fast multipole method, or FMM, which was considered one of the top 10 algorithms of the 20th century.
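The core idea behind FMM-style acceleration can be illustrated with a toy example: contributions from a well-separated cluster of sources are approximated by a single aggregated term instead of being summed one by one. The sketch below uses only the lowest-order (monopole) approximation in one dimension; a real FMM, including the layered-media variant developed at SMU, uses higher-order expansions, a hierarchy of boxes, and careful error control.

```python
# Toy illustration of the idea behind the fast multipole method (FMM):
# interactions with a well-separated cluster of sources can be approximated
# by one aggregated term instead of summing over every source individually.
# All positions and strengths below are made-up illustrative values.
import numpy as np

rng = np.random.default_rng(0)
targets = rng.uniform(0.0, 0.1, size=1000)    # evaluation points
sources = rng.uniform(0.9, 1.0, size=1000)    # well-separated source cluster
charges = rng.uniform(0.5, 1.5, size=1000)    # positive source strengths

# Direct O(N*M) summation of a 1/r-type kernel.
direct = (charges[None, :] / np.abs(targets[:, None] - sources[None, :])).sum(axis=1)

# Monopole approximation: replace the whole cluster by its total charge
# located at the charge-weighted center. One evaluation per target.
Q = charges.sum()
center = (charges * sources).sum() / Q
approx = Q / np.abs(targets - center)

rel_err = np.abs(direct - approx).max() / np.abs(direct).max()
print(f"max relative error of monopole approximation: {rel_err:.2e}")
```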

To test the algorithm, Cai and the other researchers used SMU's ManeFrame II - which is one of the fastest academic supercomputers in the nation - to run many different simulations.

Swedish radiologist develops AI that improves breast cancer risk prediction

A sophisticated type of artificial intelligence (AI) can outperform existing models at predicting which women are at future risk of breast cancer, according to a study published in the journal Radiology.

Most existing breast cancer screening programs are based on mammography at similar time intervals--typically, annually or every two years--for all women. This "one size fits all" approach is not optimized for cancer detection on an individual level and may hamper the effectiveness of screening programs.

"Risk prediction is an important building block of an individually adapted screening policy," said study lead author Karin Dembrower, M.D., breast radiologist and Ph.D. candidate from the Karolinska Institute in Stockholm, Sweden. "Effective risk prediction can improve attendance and confidence in screening programs."

High breast density, or a greater amount of glandular and connective tissue compared to fat, is considered a risk factor for cancer. While density may be incorporated into risk assessment, current prediction models may fail to fully take advantage of all the rich information found in mammograms. This information has the potential to identify women who would benefit from additional screening with MRI.

Dr. Dembrower and colleagues developed a risk model that relies on a deep neural network, a type of AI that can extract vast amounts of information from mammographic images. This approach has inherent advantages over methods such as a radiologist's visual assessment of mammographic density, which may not capture all of the risk-relevant information in the image.
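As a rough illustration of what such a model looks like in code, the sketch below defines a tiny convolutional network that maps a single-channel, mammogram-like image to a risk score and runs one training step. The architecture, input size, and training setup are illustrative assumptions, not the model developed by the Karolinska and Royal Institute of Technology teams.

```python
# Minimal, hypothetical sketch of an image-based risk-prediction network.
# Architecture, input size, and training details are illustrative assumptions,
# not the model from the study described above.
import torch
import torch.nn as nn

class RiskNet(nn.Module):
    """Tiny CNN mapping a single-channel, mammogram-like image to a risk logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.head = nn.Linear(32, 1)              # single logit = risk score

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RiskNet()
images = torch.randn(4, 1, 256, 256)              # fake batch of 256x256 images
labels = torch.tensor([0.0, 1.0, 0.0, 1.0])       # fake future-cancer labels

# One illustrative training step with a binary cross-entropy objective.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
optimizer.zero_grad()
loss = criterion(model(images).squeeze(1), labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```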

The new model was developed and trained on mammograms from cases diagnosed between 2008 and 2012 and then studied on more than 2,000 women ages 40 to 74 who had undergone mammography in the Karolinska University Hospital system. Of the 2,283 women in the study, 278 were later diagnosed with breast cancer.

The deep neural network showed a higher risk association for breast cancer compared to the best mammographic density model. The false negative rate--the rate at which women who were not categorized as high-risk were later diagnosed with breast cancer--was lower for the deep neural network than for the best mammographic density model.
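The false negative rate comparison can be made concrete with a small sketch: among women later diagnosed with cancer, count how many were not flagged as high risk. The scores, labels, and high-risk cutoff below are made-up illustrative values.

```python
# Small sketch of the false negative rate described above: among women later
# diagnosed with cancer, how many were *not* flagged as high risk?
# Scores, outcomes, and the high-risk cutoff are made-up illustrative values.
import numpy as np

risk_scores = np.array([0.9, 0.2, 0.7, 0.1, 0.4, 0.8])     # model risk scores
cancer      = np.array([1,   0,   1,   0,   1,   0])        # later diagnosed?
high_risk   = risk_scores >= np.quantile(risk_scores, 0.5)   # top half flagged (assumed policy)

# False negative rate: cancers that were not flagged as high risk.
fnr = np.sum(cancer.astype(bool) & ~high_risk) / cancer.sum()
print(f"false negative rate: {fnr:.2f}")
```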

"The deep neural network overall was better than density-based models," Dr. Dembrower said. "And it did not have the same bias as the density-based model. Its predictive accuracy was not negatively affected by more aggressive cancer subtypes."

The study findings support a future role for AI in breast cancer risk assessment.

"We are not reporting mammographic density currently," Dr. Dembrower said. "In the introduction of individually adapted screening, we use deep learning networks trained to predict cancer rather than taking the indirect route that density offers."

As an additional benefit, the AI approach can continually be improved with exposure to more high-quality data sets.

"Our deep learning experts at the Royal Institute of Technology in Stockholm are working on an update to the model," Dr. Dembrower said. "After that, we aim to test the model clinically next year by offering MRI to the women who stand to benefit the most."