AI tech helps Johns Hopkins researchers peer into the brains of mice

Johns Hopkins biomedical engineers have developed an artificial intelligence (AI) training strategy to capture images of mouse brain cells in action. The researchers say the AI system, in concert with specialized ultra-small microscopes, makes it possible to pinpoint where and when cells are activated during movement, learning, and memory. The data gathered with this technology could someday allow scientists to understand how the brain functions and is affected by disease.

“When a mouse’s head is restrained for imaging, its brain activity may not truly represent its neurological function,” says Xingde Li, Ph.D., professor of biomedical engineering at the Johns Hopkins University School of Medicine. “To map brain circuits that control daily functions in mammals, we need to see precisely what is happening among individual brain cells and their connections, while the animal is freely moving around, eating and socializing.”

To gather this extremely detailed data, Li’s team developed ultra-small microscopes that the mice can wear on top of their heads. Measuring a couple of millimeters in diameter, these microscopes are too small to carry much imaging technology onboard. Compared with benchtop models, the miniature microscopes’ frame rate is low, which makes them susceptible to interference from motion: disturbances as small as the mouse’s breathing or heartbeat can degrade the accuracy of the data they capture. The researchers estimate that Li’s miniature microscope would need to exceed 20 frames per second to eliminate these motion disturbances in a freely moving mouse.

“There are two ways to increase frame rate,” says Li. “You can increase the scanning speed and you can decrease the number of points scanned.”

In previous research, Li’s engineering team quickly hit the physical limit of the scanner at six frames per second, which maintained excellent image quality but fell far below the required rate. So the team moved on to the second strategy for increasing frame rate: decreasing the number of points scanned. However, much like reducing the number of pixels in an image, this strategy causes the microscope to capture lower-resolution data.

Li hypothesized that an AI program could be trained to recognize and restore the missing points, enhancing the images to a higher resolution. Such AI training protocols are used when writing an explicit computer program for a task, such as reliably recognizing a cluster of features as a human face, would be impossible or impractically time-consuming. Instead, computer scientists let the computer effectively program itself by processing large sets of data.
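The article does not detail how such a restoration network is trained. One common recipe for this kind of problem, sketched below with an invented placeholder image rather than the Johns Hopkins data or pipeline, is to build supervised training pairs by deleting scan lines from fully sampled images, so a network can learn to fill them back in:

```python
# Sketch of building (undersampled, full) training pairs by dropping scan
# lines from fully sampled images -- a common way to get supervised data
# for image restoration. The dummy image and reduction factor below are
# illustrative assumptions, not the actual Johns Hopkins setup.

def undersample(image, factor):
    """Keep every `factor`-th scan line, simulating a faster, sparser scan."""
    return image[::factor]

def make_pairs(images, factor):
    """Pair each sparse scan with its full-resolution target."""
    return [(undersample(img, factor), img) for img in images]

# An 8x8 dummy "image" (each row is one scan line); real inputs would be
# images of fixed brain tissue, then of head-restrained live mice.
full = [[row * 10 + col for col in range(8)] for row in range(8)]

sparse, target = make_pairs([full], factor=4)[0]
print(len(sparse), "scan lines kept of", len(target))  # 2 scan lines kept of 8
```

A network trained on such pairs sees only the sparse version as input and is scored on how well it reproduces the full-resolution target, which matches the article's description of restoring the missing points.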

One significant challenge in the proposed AI approach was the lack of similar images of mouse brains to train the AI against. To overcome this gap, the team developed a two-stage training strategy. The researchers began by training the AI to identify the building blocks of the brain from images of fixed samples of mouse brain tissue. They next trained the AI to recognize these building blocks in a head-restrained living mouse under their ultra-small microscope. This step taught the AI to recognize brain cells with natural structural variation and with the small amount of motion caused by the mouse’s breathing and heartbeat.

“The hope was that whenever we collect data from a moving mouse, it will still be similar enough for the AI network to recognize,” says Li.

Then, the researchers tested whether the AI program could accurately enhance mouse brain images as the frame rate was incrementally increased. Using a reference image, they reduced the microscope’s scanning points by factors of 2, 4, 8, 16, and 32 and observed how accurately the AI could restore the image resolution.

The researchers found that the AI could adequately restore the image quality up to 26 frames per second. 

The team then tested how well the AI tool performed in combination with a mini microscope attached to the head of a moving mouse. With the combination of AI and microscope, the researchers were able to precisely see activity spikes of individual brain cells activated by the mouse walking, rotating, and generally exploring its environment.

“We could never have seen this information at such high resolution and frame rate before,” says Li. “This development could make it possible to gather more information on how the brain is dynamically connected to the action on a cellular level.”

The researchers say that with more training, the AI program may be able to accurately restore images at up to 52 or even 104 frames per second.

Other researchers involved in this study include Honghua Guan, Dawei Li, Hyeon-cheol Park, Ang Li, Yungtian Gau, and Dwight Bergles of the Johns Hopkins University School of Medicine; Yuanlei Yue and Hui Lu of George Washington University; and Ming-Jun Li from Corning Inc.

UK ecologists develop modeling tools to predict the distributions of species

In one of the first studies of its kind, scientists from Newcastle University used Community Distribution Models (CDMs) to predict upland vegetation communities from published data on a national scale.

Lead author Dr. Liam Butler developed novel approaches to mapping upland vegetation in the UK via CDMs, using publicly available, open-access NVC records and environmental data. Rainfall and temperature were key predictor variables, and models based on random forests (a type of machine learning classifier) were the most accurate.
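The study itself used standard random-forest software; purely to illustrate the idea, here is a minimal bagged-stump "forest" in Python that predicts a vegetation community label from rainfall and temperature. The training values, community labels, and test sites are invented stand-ins, not real NVC records or the paper's models:

```python
import random
from collections import Counter

random.seed(42)

# Toy training data: (annual rainfall in mm, mean temperature in C) -> community.
# Values and labels are illustrative stand-ins, not real NVC survey records.
train = [
    ((2000, 6), "M17"), ((1900, 7), "M17"), ((2100, 5), "M17"),
    ((800, 12), "W10"), ((900, 11), "W10"), ((750, 13), "W10"),
]

def majority(labels):
    return Counter(labels).most_common(1)[0][0]

def fit_stump(sample):
    """Fit a one-split decision 'tree': the (feature, threshold) with fewest errors."""
    best = None
    for feat in (0, 1):
        for x, _ in sample:
            t = x[feat]
            below = [lab for xs, lab in sample if xs[feat] <= t]
            above = [lab for xs, lab in sample if xs[feat] > t]
            lo = majority(below)
            hi = majority(above) if above else lo
            errors = sum((lo if xs[feat] <= t else hi) != lab for xs, lab in sample)
            if best is None or errors < best[0]:
                best = (errors, feat, t, lo, hi)
    _, feat, t, lo, hi = best
    return lambda x: lo if x[feat] <= t else hi

def fit_forest(data, n_trees=25):
    """Bagging: each stump is fit on a bootstrap resample of the data."""
    return [fit_stump([random.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, x):
    """Each tree votes; the forest returns the majority class."""
    return majority([tree(x) for tree in forest])

forest = fit_forest(train)
print(predict(forest, (2200, 4)))   # a wet, cool upland site: "M17"
print(predict(forest, (700, 14)))   # a drier, warmer lowland site: "W10"
```

Real random forests grow deep trees and also randomize the features considered at each split, but the core idea shown here is the same: many weak models fit on resampled data, combined by voting.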

Publishing their findings in the Journal of Applied Ecology, the team has shown that this technique could be used in any country where maps of vegetation communities have been created.

Dr. Butler conducted the study as a Ph.D. student under the supervision of Dr. Roy Sanderson at Newcastle University’s School of Natural and Environmental Sciences.

He said: “One advantage of the CDM approach is that it is generalizable and can easily be adapted for other countries that have their own vegetation community classifications. Another is that it can aid field ecologists in conducting targeted surveys for endangered species. For example, in the UK the distribution of the English sundew, Drosera anglica, has greatly declined in the last 100 years due to drainage and eutrophication, and the species is now on the British Red Data List of endangered species. It has been recorded in over 20% of surveyed quadrats for the M17 Scirpus cespitosus-Eriophorum vaginatum blanket mire NVC community, whose distribution was predicted accurately via the new CDM methods. Thus, M17 hotspots could indicate areas where D. anglica is more likely to occur, and where citizen science surveys and potential conservation efforts should be focussed.”

Study co-author, Dr. Roy Sanderson, Senior Lecturer in Biological Modelling at Newcastle University’s School of Natural and Environmental Sciences, added: “These models can also take advantage of a large number of publicly available species records from, for example, historical collections or, more recently, citizen science (CS) surveys.

“In most habitats, a plant species does not grow in isolation, but instead co-occurs with other plants to form a characteristic assemblage or ‘community’, and Community Distribution Models provide a method to create predictive maps of these assemblages, and hence their constituent individual plant species, across wide areas.

“There have, however, been few attempts to map vegetation communities, i.e. groups of plant species that often co-occur under certain environmental conditions, to create characteristic assemblages. Many countries have developed standardized methods to survey and record the occurrence of vegetation communities, for example, the National Vegetation Classification (NVC) in the United Kingdom, which could be used to build such maps.”

Tropical cyclones could double globally by 2050

Human-caused climate change will make strong tropical cyclones twice as frequent by the middle of the century, putting large parts of the world at risk, according to a new study published in Science Advances. The analysis also projects that maximum wind speeds associated with these cyclones could increase by around 20%.

Despite being amongst the world’s most destructive extreme weather events, tropical cyclones are relatively rare. In a given year, only around 80-100 tropical cyclones form globally, most of which never make landfall. In addition, accurate global historical records are scarce, making it hard to predict where they will occur and what actions governments should take to prepare.

To overcome this limitation, an international group of scientists involving Ivan Haigh from the University of Southampton developed a new approach that combined historical data with global climate models to generate hundreds of thousands of “synthetic tropical cyclones”.

Dr. Nadia Bloemendaal from the Institute for Environmental Studies, Vrije Universiteit Amsterdam, who led the study, said:

“Our results can help identify the locations prone to the largest increase in tropical cyclone risk. Local governments can then take measures to reduce risk in their region so that damage and fatalities can be reduced.”

“With our publicly available data, we can now analyze tropical cyclone risk more accurately for every individual coastal city or region.”

By creating a very large dataset of these supercomputer-generated cyclones, which share the features of natural cyclones, the researchers could project the occurrence and behavior of tropical cyclones around the world over the coming decades far more accurately under climate change, even in regions where tropical cyclones hardly ever occur today.
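None of the study's actual machinery is reproduced here, but a toy Monte Carlo can illustrate why generating many synthetic storms helps: rare, intense events become well sampled instead of appearing only a handful of times in the historical record. The intensity distribution, storm counts per year, and parameters below are invented assumptions; only the Category 3 wind threshold is a real Saffir-Simpson value.

```python
import random

random.seed(1)

# Toy Monte Carlo in the spirit of "synthetic cyclones": draw many storm
# intensities from a simple statistical model so that rare intense storms
# are well sampled. The real study resamples historical tracks combined
# with climate-model output; the distribution here is an invented stand-in.

CAT3_WIND = 96  # knots; Saffir-Simpson Category 3 threshold

def synthetic_season(n_storms=90, mean_wind=60, spread=25):
    """One synthetic year of peak storm winds in knots (floored at 20 kt)."""
    return [max(20, random.gauss(mean_wind, spread)) for _ in range(n_storms)]

def cat3_count(season):
    """Number of Category 3+ storms in one synthetic season."""
    return sum(wind >= CAT3_WIND for wind in season)

# Average Category 3+ storms per year over many synthetic seasons: a
# quantity that would be very noisy if estimated from a few decades of
# real observations alone.
seasons = [synthetic_season() for _ in range(10_000)]
avg = sum(cat3_count(s) for s in seasons) / len(seasons)
print(round(avg, 1))  # roughly 6-7 intense storms per year under these assumptions
```

Changing `mean_wind` upward, as a warming climate is projected to do, shifts the tail of the distribution and visibly raises the Category 3+ count, which is the kind of comparison the synthetic-storm approach makes statistically robust.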

The team’s analysis found that the frequency of the most intense cyclones, those of Category 3 or higher, will double globally due to climate change, while weaker tropical cyclones and tropical storms will become less common in most of the world’s regions. The exception to this will be the Bay of Bengal, where the researchers found a decrease in the frequency of intense cyclones.

Many of the most at-risk locations will be in low-income countries. Countries where tropical cyclones are relatively rare today will see increased risk in the coming years, including Cambodia, Laos, Mozambique, and many Pacific Island nations, such as the Solomon Islands and Tonga. Globally, Asia will see the largest increase in the number of people exposed to tropical cyclones, with millions more exposed in China, Japan, South Korea, and Vietnam.

Dr. Ivan Haigh, Associate Professor at the University of Southampton, said:

“Of particular concern is that the results of our study highlight that some regions that don’t currently experience tropical cyclones are likely to in the near future with climate change.”

“The new tropical cyclone dataset we have produced will greatly aid the mapping of changing flood risk in tropical cyclone regions.”

The study could help governments and organizations better assess the risk from tropical cyclones, thereby supporting the development of risk mitigation strategies to minimize impacts and loss of life.