From left to right: as more and more factors are accounted for in the D3P simulation pipeline (top row), the simulated image becomes increasingly realistic (bottom row).

Researchers led by Prof. Shouyang Liu develop a new deep learning method for counting leaves

In cereal crops, the number of new leaves each plant produces is used to study the periodic events that constitute the biological life cycle of the crop. The conventional method of determining leaf numbers involves manual counting, which is slow, labor-intensive, and usually associated with large uncertainties because of the small sample sizes involved. It is thus difficult to get accurate estimates of some traits by manually counting leaves.

Conventional methods have, however, been improved upon with technology. Deep learning has enabled the use of object detection and segmentation algorithms to estimate the number of plants (and leaves on these plants) in an area. There is, however, a roadblock to using these algorithms: they count leaf tips, which appear tiny in images and are therefore difficult to detect. Consequently, deep learning methods often fail to perform in actual field conditions. Aiming to solve this problem, a multinational research team developed a self-supervised leaf-tip counting method based on deep learning techniques, which yielded wheat leaf counts with high accuracy. The study was led by Professor Shouyang Liu of Nanjing Agricultural University in Nanjing, Jiangsu Province, China, and was published online in Plant Phenomics on March 20, 2023.

Speaking about their work, Prof. Liu says, “We developed a high-throughput method to count the number of leaves on wheat plants by detecting leaf tips in RGB (red-green-blue) images. The Digital Plant Phenotyping platform (D3P) was used to simulate a large, diverse dataset of RGB images and corresponding leaf-tip labels of wheat plant seedlings. Over 150,000 images were generated, with over 2 million labels.”

The researchers used domain adaptation, in which a neural network trained on a “source” dataset is applied to a different “test” dataset, also referred to as a “target” dataset. This was achieved through deep learning techniques, which use algorithms inspired by the structure and function of the human brain.

Next, the researchers collected 2,763 RGB images of juvenile wheat fields from 11 locations spread across five countries. A variety of measures were used to create a robust and reliable source dataset—different types of cameras, varying imaging angles, and images with diverse soil backgrounds/light conditions were used. Besides capturing field images, the team also generated simulated wheat images, which were automatically annotated using the D3P. Domain adaptation was used to improve the realism of these images, which were then used to train the deep-learning models.

Six combinations of deep learning models and domain adaptation techniques were used in this study; the Faster-RCNN model with the CycleGAN adaptation technique demonstrated the best performance. This was evident from its high coefficient of determination (R² = 0.94), a measure of the goodness of fit of a model, and its low root mean square error (RMSE = 8.7), a standard measure of a model's error in predicting quantitative data.
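The two reported metrics are straightforward to compute. As a minimal sketch (with made-up leaf counts, not the study's data), R² and RMSE for predicted versus manually counted leaves could be calculated as:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error of the predictions."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative values only: manual leaf counts vs. model predictions.
manual_counts = np.array([120.0, 95.0, 150.0, 80.0, 110.0])
model_counts = np.array([118.0, 99.0, 146.0, 84.0, 113.0])

print(f"R²   = {r_squared(manual_counts, model_counts):.3f}")
print(f"RMSE = {rmse(manual_counts, model_counts):.2f}")
```

A model with predictions close to the manual counts yields an R² near 1 and a small RMSE, which is why the reported R² = 0.94 and RMSE = 8.7 indicate a good fit.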

Moreover, of the three factors evaluated for their effect on the leaf counting models' performance, the light condition was found to be of utmost importance. Leaf texture and soil brightness were found to be less important on their own, but combining all three factors significantly improved the realism of the images. The results also revealed that a spatial resolution finer than 0.6 mm per pixel was required to ensure accurate identification of leaf tips.
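Whether a given camera setup meets the 0.6 mm-per-pixel requirement can be checked with the standard ground-sampling-distance formula. A minimal sketch, with made-up camera parameters that are not from the study:

```python
# Ground sampling distance (GSD): the real-world size covered by one pixel.
# GSD = pixel pitch * distance to target / focal length.
def ground_sampling_distance_mm(pixel_pitch_um, height_m, focal_length_mm):
    # Convert pixel pitch to mm and camera height to mm, then apply the formula.
    return (pixel_pitch_um * 1e-3) * (height_m * 1e3) / focal_length_mm

# Hypothetical setup: 4 µm pixels, camera 2 m above the canopy, 35 mm lens.
gsd = ground_sampling_distance_mm(pixel_pitch_um=4.0, height_m=2.0, focal_length_mm=35.0)
print(f"GSD = {gsd:.3f} mm/pixel; fine enough for leaf tips: {gsd <= 0.6}")
```

With these example numbers the GSD is about 0.23 mm per pixel, comfortably finer than the 0.6 mm-per-pixel threshold the study identified.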

Explaining the implications of their study, Prof. Liu says, “The resulting proposed deep learning method appears very attractive since it eliminates the tedious, expensive, and sometimes inaccurate manual labeling task by simulating images for which the labels are automatically generated. The images were also made more realistic using domain adaptation techniques.”           

The research team has made the trained networks available at https://github.com/YinglunLi/Wheat-leaf-tip-detection to facilitate further research in this area.  

The study, led by Costas Anastassiou, PhD, a research scientist in the departments of Neurology, Neurosurgery and Biomedical Sciences at Cedars-Sinai, used data from laboratory mice to establish a new method for examining relationships between neuron type and function, and focused on the mouse primary visual cortex, which receives and processes visual information. Photo by Cedars-Sinai.

Cedars-Sinai researchers develop computational models to determine the identities, roles of individual neurons in the brain’s complex machine

Investigators at Cedars-Sinai have created supercomputer-generated models to bridge the gap between “test tube” data about neurons and the function of those cells in the living brain. Their study could help in the development of treatments for neurological diseases and disorders that target specific neuron types based on their roles.

“This work allows us to start looking at the brain like the complex machine that it is, rather than as one homogenous piece of tissue,” said Costas Anastassiou, Ph.D., a research scientist in the departments of Neurology, Neurosurgery, and Biomedical Sciences at Cedars-Sinai and senior author of the study. “Once we are able to distinguish between the different cell types, instead of saying that the entire brain has a disease, we can ask which neuron types are affected by the disease and tailor treatments to those neurons.” 

Neurons are the main functional units of the brain. The signals passing through these cells—in the form of electrical waves—give rise to all thought, sensation, movement, memory, and emotion. 

The study used data from laboratory mice to establish a new method for examining relationships between neuron type and function and focused on the mouse's primary visual cortex, which receives and processes visual information. It is one of the best-studied parts of the brain—both in vitro, where tissue is studied in a dish or test tube outside the living organism, and in vivo, where it is studied in the living animal. 

The investigators’ goal was to link the two worlds. 

“Based on in vitro studies of genetic makeup and physical structure, we know something about what various classes of neurons look like, but not their function in the living brain,” Anastassiou said. “When we record the activity of brain cells in vivo, we can see what neurons are responding to and what their function is, but not which classes of neurons they are.” 

To link form to function, investigators first used in vitro information to create computational models of various types of neurons and to simulate their signaling patterns. 
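The study built detailed biophysical models from in vitro measurements. As a much-simplified stand-in, a leaky integrate-and-fire neuron illustrates the general idea of simulating a cell's signaling pattern from a few measured parameters; all values below are illustrative assumptions, not the study's models:

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron: return voltage trace (mV) and spike times (ms)."""
    v = v_rest
    trace, spikes = [], []
    for step, i_ext in enumerate(i_input):
        # Membrane voltage leaks toward rest while being driven by the input.
        v += dt / tau * (-(v - v_rest) + i_ext)
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset                # reset the membrane after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant drive for 100 ms (1000 steps of 0.1 ms) produces regular firing.
trace, spikes = simulate_lif(np.full(1000, 20.0))
print(f"{len(spikes)} spikes at times (ms): {np.round(spikes, 1)}")
```

Changing the model's parameters (time constant, threshold, reset) changes the simulated firing pattern, which is the sense in which in vitro measurements can be turned into predicted in vivo signaling.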

They next took advantage of the newest technology in single-neuron recording to observe activity in the brains of laboratory mice while the mice were exposed to different sorts of visual stimuli. Based on the shapes of the signals or “spikes” of neurons in response to visual input, investigators separated the cells they recorded into six groups. 
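The grouping step can be illustrated with a toy example: clustering synthetic spike waveforms by a shape feature (spike width) using k-means. The study identified six clusters from real recordings; the two-cluster setup and all waveforms below are fabricated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def spike(width, n=40):
    """Idealized negative extracellular spike of a given width."""
    t = np.linspace(-2.0, 2.0, n)
    return -np.exp(-(t / width) ** 2)

# Two synthetic "cell types": 20 narrow-spiking and 20 broad-spiking waveforms.
waves = np.array([spike(0.4) for _ in range(20)] + [spike(1.0) for _ in range(20)])
waves += rng.normal(0.0, 0.02, waves.shape)  # measurement noise

# Shape feature: number of samples below half of the trough depth.
def half_width(w):
    return int(np.sum(w < 0.5 * w.min()))

feats = np.array([half_width(w) for w in waves], dtype=float)

# 1-D k-means with deterministic initialization at the feature extremes.
centers = np.array([feats.min(), feats.max()])
for _ in range(20):
    labels = np.argmin(np.abs(feats[:, None] - centers[None, :]), axis=1)
    centers = np.array([feats[labels == j].mean() for j in (0, 1)])

# `labels` now separates the narrow spikes (cluster 0) from the broad ones (cluster 1).
```

Real spike sorting uses richer waveform features and more clusters, but the principle, grouping recorded cells by the shape of their spikes, is the same.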

“Once we had our models and our in vivo data, the fundamental question was which computational models produced the most similar signaling shape and waveform to each of the six in vivo clusters we identified, and vice versa,” Anastassiou said. “Not all of the in vivo clusters and models matched perfectly, but some did.”  
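The matching step can likewise be sketched: pair a simulated model waveform with the in vivo cluster template it resembles most. Pearson correlation is used here as the similarity measure, which is an assumption on our part (the study's actual criterion may differ), and all waveforms are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(-2.0, 2.0, 40)

# Made-up in vivo cluster templates (mean waveforms of two clusters).
cluster_means = {
    "narrow": -np.exp(-(t / 0.4) ** 2),
    "broad": -np.exp(-(t / 1.0) ** 2),
}

# A simulated model waveform, slightly noisy and close in shape to "narrow".
model_wave = -np.exp(-(t / 0.45) ** 2) + rng.normal(0.0, 0.01, t.size)

def best_match(wave, templates):
    """Score the wave against each template by Pearson correlation."""
    scores = {name: np.corrcoef(wave, tmpl)[0, 1] for name, tmpl in templates.items()}
    return max(scores, key=scores.get), scores

name, scores = best_match(model_wave, cluster_means)
print(f"Best-matching cluster: {name}")
```

As in the study, not every pairing need be a good one: a model whose best correlation is still low would be left unmatched rather than forced into a cluster.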

More data, and possibly experiments involving more-sophisticated visual stimuli, might be required to match all the computational models and cell clusters, and Anastassiou said that future studies will be dedicated to perfecting the method established in the current paper. 

“There’s a wealth of information about the identity of cell types in the human brain, but not about the role of those cell types in cognitive functioning or how they are affected by disease,” Anastassiou said. “Now there is a window through which we can look at these things and ask these questions. It’s clear that we have a long way to go, but we’re excited about the next steps in this journey.”

The ultimate goal is to pave the way for discoveries that change patients’ lives.

“Our research scientists are continually striving to expand our knowledge of the workings of the human brain at the most detailed level,” said Keith L. Black, MD, chair of the Department of Neurosurgery and the Ruth and Lawrence Harvey Chair in Neuroscience at Cedars-Sinai. “Pinning down the specific type and function of each neuron may one day lead to the discovery of lifesaving treatments for brain diseases and neurological disorders.”

Funding: This research was supported by National Institutes of Health grant number R01 NS120300-01; National Natural Science Foundation of China grant number 12101570; and Scientific Project of Zhejiang Lab grants 2021KE0PI03, 2022KI0AC01, 2022KI0AC02 and 2022ND0AN01.

UK, India collaborate on nuclear energy research to address barriers to innovation

The risk and cost of developing new nuclear energy technologies could be reduced, thanks to a new research project bringing together scientists from the UK and India.

The four-year project, called Enhanced Methodologies for Advanced Nuclear System Safety (EMEANSS), will use experimental data and machine learning to develop sophisticated safety systems and models across three key areas: nuclear physics, structural components, and fuels.

The systems and models developed through the research could also enable improvements in the safety and efficiency of existing nuclear power plants.

Leading the UK research is Dr. Simon Middleburgh from Bangor University’s Nuclear Futures Institute. Dr. Middleburgh said: “Designing and building next-generation nuclear power plants is a complex task. By creating intelligent safety systems and models that offer greater predictability, we can drive efficiencies and support innovation in the nuclear industry, helping the UK achieve a low-carbon future.”

UK and Indian scientists will work independently but compare findings as the project progresses.

The nuclear physics research aims to fill gaps in current knowledge, where low-accuracy data leads to poor predictability that is currently dealt with by over-engineering or by reducing the performance and efficiency of the overall system.

Next-generation nuclear reactors require materials, such as graphite components, to operate in a harsh nuclear environment while maintaining their strength and structural properties. The team will test and analyze these materials and draw on existing data to model their behavior, using the novel techniques developed through the project.

The scientists will also model the performance of new fuels, filling gaps in the data to allow for greater efficiency and safety. Nuclear fuels operate in some of the most extreme conditions, and predicting their behavior as they are used in the reactor is important to ensure they remain within their safe operating envelope. The new modeling methods, combined with new data from experiments, will enable the researchers to significantly improve the predictability of nuclear fuels, supporting both current and next-generation designs.

The team brings together scientists from the Universities of Bristol, Cambridge, Oxford, Liverpool, and Strathclyde with Bangor University, Imperial College London, The Open University, and the Bhabha Atomic Research Centre in India. The research is funded by the Engineering and Physical Sciences Research Council (EPSRC) as part of UK Research and Innovation, through the UK-India Civil Nuclear Collaboration between the EPSRC and the Department of Atomic Energy in India.