Georgetown biologists use AI to search for the next SARS-like virus

An international research team led by scientists at Georgetown University has demonstrated the power of artificial intelligence to predict which viruses could infect humans — like SARS-CoV-2, the virus that led to the COVID-19 pandemic — which animals host them, and where they could emerge.

Rhinolophus rouxi, a bat that inhabits parts of South Asia, was identified by the study authors as a likely but undetected betacoronavirus host. CREDIT: Brock and Sherri Fenton

Their ensemble of predictive models of likely reservoir hosts was validated in an 18-month project to identify specific bat species likely to carry betacoronaviruses, the group that includes SARS-like viruses.

“If you want to find these viruses, you have to start by profiling their hosts — their ecology, their evolution, even the shape of their wings,” explains the study’s senior author, Colin Carlson, Ph.D., an assistant research professor in the Department of Microbiology & Immunology and a member of Georgetown’s Center for Global Health Science and Security at Georgetown University Medical Center. “Artificial intelligence lets us take data on bats and turn it into concrete predictions: where should we be looking for the next SARS?”

Despite global investments in disease surveillance, it remains difficult to identify and monitor wildlife reservoirs of viruses that could someday infect humans. Statistical models are increasingly being used to prioritize which wildlife species to sample in the field, but the predictions generated by any one model can be highly uncertain. Scientists also rarely track the success or failure of their predictions after making them, which makes it hard to learn from past efforts and build better models. Together, these limitations mean it is highly uncertain which models are best suited to the task.

This new study suggests that the search for closely related viruses will be no small task, with over 400 bat species around the world predicted to host betacoronaviruses, a large group of viruses that includes those responsible for SARS-CoV (the virus behind the 2002-2004 SARS outbreak) and SARS-CoV-2 (the virus that causes COVID-19). Although the origin of SARS-CoV-2 remains uncertain, the spillover of other viruses from bats is a growing problem due to factors like agricultural expansion and climate change.

Greg Albery, Ph.D., a postdoctoral fellow in Georgetown’s Biology Department, says COVID-19 provided the impetus to expedite their research. “This is a really rare opportunity,” explains Albery. “Outside of a pandemic, we’d never learn this much about these viruses in this small a timeframe. A decade of research has been collapsed into about a year of publications, and it means we can show that these tools work.”

In the first quarter of 2020, the research team trained eight different statistical models to predict which kinds of animals could host betacoronaviruses. Over more than a year, the team then tracked the discovery of 40 new bat hosts of betacoronaviruses to validate their initial predictions and dynamically update their models. The researchers found that models harnessing data on bat ecology and evolution performed extremely well at predicting new hosts. In contrast, cutting-edge models from network science that used high-level mathematics – but less biological data – performed about as well as, or worse than, random chance.
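The validation step described above can be illustrated with a small, purely hypothetical sketch in Python: each model scores every bat species, and hosts confirmed after the predictions were made are used to ask how well each model's ranking anticipated them (here via a hand-rolled AUC, the probability that a randomly chosen new host outranks a non-host). All species names and scores below are invented for illustration; this is not the study's actual code.

```python
# Hypothetical sketch: score how well a model's species ranking
# anticipated newly discovered betacoronavirus hosts.

def rank_auc(scores, new_hosts):
    """AUC: probability a randomly chosen new host outranks a non-host."""
    hosts = [s for sp, s in scores.items() if sp in new_hosts]
    others = [s for sp, s in scores.items() if sp not in new_hosts]
    pairs = [(h, o) for h in hosts for o in others]
    wins = sum(1.0 if h > o else 0.5 if h == o else 0.0 for h, o in pairs)
    return wins / len(pairs)

# Invented predicted host probabilities from two models for five species.
trait_model = {"sp_a": 0.9, "sp_b": 0.8, "sp_c": 0.3, "sp_d": 0.2, "sp_e": 0.1}
network_model = {"sp_a": 0.2, "sp_b": 0.5, "sp_c": 0.6, "sp_d": 0.4, "sp_e": 0.3}
confirmed_new_hosts = {"sp_a", "sp_b"}  # hosts discovered after prediction

print(rank_auc(trait_model, confirmed_new_hosts))    # 1.0 (perfect ranking)
print(rank_auc(network_model, confirmed_new_hosts))  # ~0.33 (below chance)
```

In this toy example the first ranking places both new hosts above every non-host, while the second hovers at or below chance, mirroring the contrast the article draws between ecology-driven models and the network-science models.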

“One of the most important things our study gives us is a data-driven shortlist of which bat species should be studied further,” says Daniel Becker, Ph.D., assistant professor of biology at the University of Oklahoma. “After identifying these likely hosts, the next step is then to invest in monitoring to understand where and when betacoronaviruses are likely to spill over.”

Carlson says that the team is now working with other scientists around the world to test bat samples for coronaviruses based on their predictions.

“If we spend less money, resources, and time looking for these viruses, we can put all of those resources into the things that save lives down the road. We can invest in building universal vaccines to target those viruses or monitoring for spillover in people that live near bats,” says Carlson. “It’s a win-win for science and public health.”

USC Medicine’s Crump lab develops the Constellations algorithm for understanding head development

Cranial neural crest cells, or CNCCs, contribute to many more body parts than their humble name suggests. These remarkable stem cells not only form most of the skull and facial skeleton in all vertebrates, from fish to humans, but can also generate everything from gills to the cornea. To understand this versatility, scientists from the lab of Gage Crump created a series of atlases across developmental time to understand the molecular decisions by which CNCCs commit to forming specific tissues in developing zebrafish. Their findings may provide new insights into normal head development, as well as craniofacial birth defects.

Confocal microscopy image of an adult zebrafish head with neural crest-derived cells in red. The Crump lab has used single-cell sequencing to understand how these cells build and repair the head skeleton, with implications for understanding human craniofacial birth defects and improving repair of skeletal tissues. CREDIT: Image courtesy of Peter Fabian

“CNCCs have long fascinated biologists by the incredible diversity of cell types they can generate. By studying this process in the genetically tractable zebrafish, we have identified many of the potential switches that allow CNCCs to form these very different cell types,” said Gage Crump, professor of stem cell biology and regenerative medicine at the Keck School of Medicine of USC.

Led by postdoctoral fellow Peter Fabian and Ph.D. students Kuo-Chang Tseng, Mathi Thiruppathy, and Claire Arata, the team of scientists permanently labeled CNCCs with a red fluorescent protein to keep track of which cell types came from CNCCs throughout the lifetime of zebrafish. They then used a powerful approach, known as “single-cell genomics,” to identify the complete set of active genes and the organization of the DNA across hundreds of thousands of individual CNCCs. The massive quantity of data generated required the scientists to develop a new computational tool to make sense of it.

“We created a type of computational analysis that we called ‘Constellations,’ because the final visual output of the technique is reminiscent of constellations of stars in the sky,” said Fabian. “In contrast to astrology, our Constellations algorithm really can predict the future of cells and reveal the key genes that likely control their development.”

Through this new bioinformatic approach, the team discovered that CNCCs do not start with all the information required to make the huge diversity of cell types. Instead, only after they disperse throughout the embryo do CNCCs begin reorganizing their genetic material in preparation for becoming specific tissues. Constellations accurately identified genetic signs that point to these specific destinies for CNCCs. Real-life experiments confirmed that Constellations correctly pinpointed the role of a family of “FOX” genes in facial cartilage formation and a previously unappreciated function for “GATA” genes in the formation of gill respiratory cell types that allow fish to breathe.

“By conducting one of the most comprehensive single-cell studies of a vertebrate cell population to date, we not only gained significant insights into the development of the vertebrate head but also created a broadly useful computational tool for studying the development and regeneration of organ systems throughout the body,” said Crump.

University of Tokyo researchers find public trust in AI varies greatly depending on the app

Prompted by the increasing prominence of artificial intelligence (AI) in society, University of Tokyo researchers investigated public attitudes toward the ethics of AI. Their findings quantify how different demographics and ethical scenarios affect these attitudes. As part of this study, the team developed an octagonal visual metric, analogous to a rating system, which could be useful to AI researchers who wish to know how their work may be perceived by the public.

Octagon chart: an example showing a respondent’s ratings of the eight themes for each of the four ethical scenarios, each involving a different application of AI. © 2021 Yokoyama et al.

Many people feel the rapid development of technology often outpaces that of the social structures that implicitly guide and regulate it, such as law or ethics. AI in particular exemplifies this as it has become so pervasive in everyday life for so many, seemingly overnight. This proliferation, coupled with the relative complexity of AI compared to more familiar technology, can breed fear and mistrust of this key component of modern living. Who distrusts AI and in what ways are matters that would be useful to know for developers and regulators of AI technology, but these kinds of questions are not easy to quantify.

Researchers at the University of Tokyo, led by Professor Hiromi Yokoyama from the Kavli Institute for the Physics and Mathematics of the Universe, set out to quantify public attitudes toward ethical issues around AI. Through an analysis of surveys, the team sought to answer two questions in particular: how attitudes change depending on the scenario presented to a respondent, and how a respondent’s demographics shape their attitudes.

Ethics cannot be quantified, so to measure attitudes toward the ethics of AI, the team employed eight themes common to many AI applications that raised ethical questions: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These, which the group has termed “octagon measurements,” were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her team.

Survey respondents were given a series of four scenarios to judge according to these eight criteria. Each scenario looked at a different application of AI. They were: AI-generated art, customer service AI, autonomous weapons, and crime prediction.
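The survey design, eight themes rated for each of four scenarios, can be sketched in a few lines of Python. This is a hypothetical illustration, not the study's actual scoring code: the theme names follow the article, but the ratings and the 1-5 scale are invented for the example.

```python
# Illustrative sketch of the "octagon measurements" idea: a respondent
# rates eight ethical themes for each AI application scenario, and the
# ratings can be summarized per scenario. Ratings below are invented.

THEMES = [
    "privacy", "accountability", "safety and security",
    "transparency and explainability", "fairness and non-discrimination",
    "human control of technology", "professional responsibility",
    "promotion of human values",
]

def mean_concern(ratings):
    """Average a respondent's eight theme ratings for one scenario."""
    assert set(ratings) == set(THEMES), "one rating per theme required"
    return sum(ratings.values()) / len(THEMES)

# Hypothetical respondent on an assumed 1-5 scale: high concern about
# autonomous weapons, lower concern about AI-generated art.
weapons = {t: 5 for t in THEMES}
art = {t: 2 for t in THEMES}

print(mean_concern(weapons))  # 5.0
print(mean_concern(art))      # 2.0
```

Plotting each scenario's eight ratings on the spokes of a radar chart would give the octagonal visual described in the article.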

The survey respondents also gave the researchers information about themselves such as age, gender, occupation, and level of education, as well as a measure of their level of interest in science and technology by way of an additional set of questions. This information was essential for the researchers to see what characteristics of people would correspond to certain attitudes.

“Prior studies have shown that risk is perceived more negatively by women, older people, and those with more subject knowledge. I was expecting to see something different in this survey given how commonplace AI has become, but surprisingly we saw similar trends here,” said Yokoyama. “Something we saw that was expected, however, was how the different scenarios were perceived, with the idea of AI weapons being met with far more skepticism than the other three scenarios.”

Octagon measurements: the eight themes common to a wide range of AI scenarios for which the public have pressing ethical concerns. © 2021 Yokoyama et al.

The team hopes the results could lead to the creation of a sort of universal scale to measure and compare ethical issues around AI. This survey was limited to Japan, but the team has already begun gathering data in several other countries.

“With a universal scale, researchers, developers, and regulators could better measure the acceptance of specific AI applications or impacts and act accordingly,” said Assistant Professor Tilman Hartwig. “One thing I discovered while developing the scenarios and questionnaire is that many topics within AI require significant explanation, more so than we realized. This goes to show there is a huge gap between perception and reality when it comes to AI.”