Can the sewers disclose the scale of COVID-19?

In an application to the Research Council of Norway’s emergency call for proposals in the fight against the coronavirus, scientists from NMBU, NIVA, Karolinska Institutet and The Veterinary Institute want to analyze sewage samples in the hunt for COVID-19.

During the ongoing epidemic, diagnostic testing at the individual level has been important for tracking and preventing transmission of the virus. However, the limited capacity of public health surveillance is a major issue, as earlier epidemics caused by SARS-CoV (2003), influenza virus H1N1 (2009), Ebola virus (2014) and Zika virus (2015) have already shown.

“The future of public health surveillance depends on developing innovative bioanalytical approaches”, says Jose Antonio Baz Lomba, research scientist at The Norwegian Institute for Water Research (NIVA).

“Implementing simple screening at the population level, e.g. in city populations and smaller, rural populations, would provide a better overview of the situation, could facilitate earlier and more appropriate preventive measures, and could function as an early warning system”, Baz Lomba says.

Wastewater virus monitoring has proven effective for early warning, detecting the presence of pathogens before the onset of symptoms in the population. Further, quantitative monitoring enables supercomputer modeling of the epidemic in near real time. This approach is independent of hospital reporting systems, free from societal biases and adaptable to new and emerging public health threats. As SARS-CoV-2, the virus that causes COVID-19, replicates in the intestine and is shed in feces, the project partners aim to provide a new and unique screening system to better understand the transmission dynamics and risk factors of the virus through screening and quantification of virus particles in sewage.

The virus spreads directly through airborne droplets from infected individuals, and probably indirectly through a contaminated environment. Contaminated surfaces in healthcare settings in particular are potential sources of virus transmission. Water or sewage contaminated with virus, if aerosolized, could potentially expose large numbers of people. As there is little knowledge on these topics, the proposed project will additionally study the environmental stability of coronaviruses. The aim is to provide data on virus reduction in natural settings, which can be used in risk-based public health and healthcare system responses.

Japanese researchers develop rapid, automatic identification of individual, live brain cells

Supercomputer program advances efforts to map every neuron in worms

Researchers working towards understanding the brain in high-definition, single-cell level of detail have designed a new supercomputer program to identify each nerve cell in fluorescent microscope images of living worms. Previous attempts to automate the identification of individual nerve cells have been thwarted by the fact that the same cell can be in vastly different locations in different worms.

The worms are C. elegans, tiny roundworms common in soil and research labs around the world. Each of the 959 cells in the animals' transparent, 1 millimeter-long body has been identified, named and mapped, including their 302 nerve cells.

Scientists completed the first map of the C. elegans nervous system in 1986 and have been improving it ever since. More recent projects include OpenWorm, an ongoing global effort to design a cell-by-cell and behaviorally accurate virtual C. elegans - a research-worthy version of a Tamagotchi pet.

CAPTION: Nerve cells are shaped like young plants: big round seeds (cell bodies) surrounded by a nest of frizzy roots in one direction (dendrites) and a single long stem stretching out in the other direction (axon). This image shows variations in the location of some neuron cell bodies between different animals as ellipses. Each neuron is randomly colored. Neurons are arranged top-to-bottom and left-to-right in the graph as they are located nose-to-tail (anterior-posterior) and back-to-belly (dorsal-ventral) in a worm. CREDIT: CC BY-ND 4.0 Toyoshima et al., 2020, DOI: 10.1186/s12915-020-0745-2

Despite their value, generalized brain atlases, so-called connectome maps, are still no help for identifying neurons in individual, live, wriggling worms.

"Imagine if you knew the names of all the cities on a map, but the cities moved each time you looked. That is what it's like, trying to compare current brain atlases to living organisms," said Professor Yuichi Iino from the University of Tokyo, co-last author of the recent research paper published in BMC Biology.

Iino's research group wants to identify and map each nerve cell in living C. elegans so that they can chart the pathways of electrical impulses that make behaviors, learning, and memory possible.

C. elegans brain neurons are not encased in a skull; instead, they form a loosely packed group of 150 neurons in the head region of the animal.


"The neurons are tiny, and in the head of C. elegans they surround this large bulb that's part of the digestive system, so they get pushed and pulled around a lot as the animal moves or eats," explained Iino.

Researchers began by finding unique combinations of genes that, when artificially attached to fluorescent protein tags, would cause 35 different small groups of neurons to glow under a microscope.

These new genetically modified strains of C. elegans made all of the researchers' subsequent image studies and supercomputer programming work possible.

Researchers identified individual neurons in 311 worms in total, about 10 worms for each of the 35 neuron groups, and measured the distances and relative positions between pairs of neurons in the microscopy images.

Although neurons were known to shift within each worm, no one expected the neurons to have different "home base" locations in different individuals. The positions of the central cell body of some neurons can vary by more than 0.02 millimeters between different animals - a significant distance for an animal only 1 millimeter long.

"Individual C. elegans are thought to be uniform because they all have almost the same cell lineages and a stereotyped neural circuit. It was really surprising, though, how large the positional differences are between individual animals," said Assistant Professor Yu Toyoshima, a co-first author of the recent research paper and member of the Iino lab.

The research team then used their new position variation data and the C. elegans connectome brain atlas to develop a computer program to automatically identify neurons. The program uses a mathematical algorithm to analyze a microscopy image of the C. elegans brain and assign the statistically most likely identity to each neuron based on that neuron's position in relation to other neurons.
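The published method in Toyoshima et al. is more sophisticated than can be shown here, but the core idea - assigning each observed neuron the statistically most likely named identity given its position relative to atlas data - can be sketched in a few lines. In this illustration (all names and the Gaussian position model are assumptions, not the paper's actual implementation), observed positions are matched one-to-one against atlas mean positions with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def identify_neurons(observed, atlas_means, atlas_stds):
    """Assign each observed neuron the most likely atlas identity.

    observed:    (n, 3) neuron positions measured in one imaged worm
    atlas_means: (n, 3) mean position of each named neuron across worms
    atlas_stds:  (n,)   positional standard deviation of each named neuron

    Returns an array mapping observed index -> atlas (name) index.
    """
    # Cost = squared distance from observed neuron i to atlas neuron j,
    # scaled by that neuron's positional variability (an isotropic
    # Gaussian position model - an assumption for this sketch).
    diff = observed[:, None, :] - atlas_means[None, :, :]
    cost = (diff ** 2).sum(axis=-1) / (atlas_stds[None, :] ** 2)
    # Hungarian algorithm: one-to-one assignment minimizing total cost.
    rows, cols = linear_sum_assignment(cost)
    return cols[np.argsort(rows)]
```

A global one-to-one assignment matters here: greedily labeling each neuron with its nearest atlas position could give two neurons the same name, which the Hungarian step rules out.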

"The algorithm is only 60 percent accurate, which is too low for fully automatic cell identification, but it speeds up our work enough to make possible other projects that aim to understand neural networks based on whole-brain imaging data," said Toyoshima.

Part of what made this project possible in C. elegans is that every neuron was already known and named. Using a similar technique in other animals would require fine-tuned genetic manipulation to cause groups of neurons to glow under a microscope and knowing how many neurons need to be identified.


"The human brain has billions of neurons, so understanding our own brains at the single-cell level would be extremely difficult. C. elegans have small brains, but they can still learn and change behaviors, so they could allow us to understand how networks of neurons create behavior," said Iino.

UK computer scientists develop novel artificial intelligence system that predicts air pollution levels

Air pollution may seem an unusual concern to folks in the UK, but it is a genuine threat to communities all over the world, killing an estimated seven million people every year.

A team of Loughborough University computer scientists are hoping to help tackle this threat with a new artificial intelligence (AI) system they have developed that can predict air pollution levels hours in advance.

The technology is novel for a number of reasons, one being that it has the potential to provide new insight into the environmental factors that have significant impacts on air pollution levels.

Professor Qinggang Meng and Dr. Baihua Li are leading the project, which is focussed on using AI to predict ‘PM2.5’ - particulate matter less than 2.5 microns (2.5 × 10⁻⁶ m) in diameter - which often manifests as reduced visibility in cities and hazy-looking air when levels are high.

Particulate matter is a type of air pollutant and it is the pollutant with the strongest evidence for public health concern.

This is because the particles are so small they can easily get into the lungs and then the bloodstream, resulting in cardiovascular, cerebrovascular and respiratory impacts.

According to the Department for Environment, Food and Rural Affairs, there is understood to be ‘no safe threshold below which no adverse effects would be anticipated’.

CAPTION: Prediction uncertainty analysis. The green line is the actual PM2.5 levels measured from a sensor. The blue line is the system’s PM2.5 prediction. The red lines outline the probability range the system believes the levels will fall within.

There are systems that already exist that can predict PM2.5 but Loughborough University’s research looks to take the technology to the next level.

The system the researchers have developed is novel for the following aspects:

  • It predicts PM2.5 levels in advance – giving predictions for the levels in one hour to several hours’ time, plus 1-2 days ahead
  • It interprets the various factors and data used for prediction, which could lead to a better understanding of the weather, seasonal and environmental factors that can impact PM2.5
  • It doesn’t just predict one figure; it predicts the PM2.5 level plus a range of values the air pollution reading could fall within – known as ‘uncertainty analysis’
  • It has the capability to be used as an air pollution analysis tool in a carbon credit trading system.
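To make the ‘uncertainty analysis’ point above concrete: rather than a single figure, the system reports a band the reading is likely to fall within. The Loughborough model itself is a learned AI system and has not been published here; as a minimal, hypothetical stand-in, the same idea can be sketched with a persistence forecast wrapped in an empirical prediction interval:

```python
import numpy as np

def forecast_with_interval(history, horizon=1, coverage=0.9):
    """Sketch: point forecast plus an empirical uncertainty band.

    history:  1-D sequence of hourly PM2.5 readings (µg/m³)
    horizon:  hours ahead to forecast
    coverage: desired probability that the true value falls in the band

    Returns (point_forecast, lower, upper).
    """
    history = np.asarray(history, dtype=float)
    # Point forecast: persistence (last observed value) - a baseline
    # stand-in for the project's learned model.
    point = history[-1]
    # Empirical distribution of `horizon`-step changes seen so far.
    changes = history[horizon:] - history[:-horizon]
    # Symmetric quantiles of past changes give the prediction band.
    alpha = (1.0 - coverage) / 2.0
    lo, hi = np.quantile(changes, [alpha, 1.0 - alpha])
    return point, point + lo, point + hi
```

For example, given a recent run of readings, the function returns the last value as the forecast plus a lower and upper bound derived from how much readings have historically moved over one hour.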

The system’s uncertainty analysis and ability to understand factors that affect PM2.5 are particularly important as this will allow potential end-users, policymakers and scientists to better understand related causes of PM2.5 and how reliable the prediction is.

Dr. Yuanlin Li is the Research Associate working on the project at Loughborough University. The LU team created the system using machine learning – a type of artificial intelligence technology that uses large amounts of data to learn rules and features, so a system can make predictions.

The researchers used public historical data on air pollution in Beijing to train and test the algorithms; China was selected as the focus as 145 of 161 Chinese cities have serious air pollution problems.

The developed system will now be tested on live data captured by sensors deployed in Shenzhen, China.  

The system developed at Loughborough University is part of a wider research project funded by the Newton Fund, which has four partners: Satoshi Systems Ltd, Loughborough University, Shenzhen Institutes of Advanced Technology, and EEG Smart Intelligent Technology in China.

The aim of the project is to explore how carbon can be used as a tradeable commodity to establish a new effective economic leverage for controlling emissions.

It is envisaged that cities, regions, and factories will be given credits for how much carbon they can emit; if they go over, they must ‘buy’ more credits. Alternatively, if a location falls under its limit, it can sell the surplus credits on the carbon market for a profit.
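The settlement arithmetic described above is simple enough to state directly. In this sketch, one credit is assumed to cover one tonne of CO₂ (the project has not published its credit unit, so the names and units here are hypothetical):

```python
def credit_settlement(cap_tonnes, emitted_tonnes, price_per_credit):
    """Cap-and-trade arithmetic: positive result is the cost of credits
    a participant must buy; negative is revenue from selling surplus.
    Assumes one credit permits one tonne of CO2 emissions."""
    shortfall = emitted_tonnes - cap_tonnes
    return shortfall * price_per_credit
```

A factory capped at 100 tonnes that emits 120 tonnes at a credit price of 25 must spend 500 to buy credits; one that emits only 90 tonnes can sell its surplus for 250.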

The aim is to integrate Loughborough University’s PM2.5 prediction model onto an online platform that can be accessed by participants of the carbon trading scheme.

This will allow participants to use the system to access real-time, meaningful information on pollution levels that will aid them in designing a trading strategy.

Of the research, Professor Meng said: “Air pollution is a long-term accumulated challenge faced by the whole world, and especially in many developing countries.

“The project aims to measure and forecast air quality and pollution levels. We are also exploring the feasibility of linking real-time information on carbon emissions to end-to-end carbon credit trading, thus contributing to carbon control and greenhouse gas emission reduction.

“We hope this research will help lead to cleaner air for the community and improve people’s health in the future.”

Mr. Saurabh Goyal, CEO of the industry partner Satoshi Systems Ltd, added: “We are impressed and excited by the work done by Loughborough University.

“We believe that all types of participants such as polluters, cleaners, market makers, hedgers, speculators, government and policymakers will find this data very useful before they buy or sell carbon credits on our platform.

“We are currently under discussions with governmental and civic authorities in both China as well as the UK to set up the exchange.

“Anyone interested in participating in this emissions exchange platform can reach out to me at saurabh.goyal@satoshi.ltd.”