Scheme of shape prediction of nanoparticles using NTA + deep learning analysis

Deep learning now solves nanoparticle shape identification challenges

The Innovation Center of NanoMedicine (iCONM) and The University of Tokyo have proposed a new method for evaluating the shape anisotropy of nanoparticles, addressing a long-standing issue in nanoparticle evaluation that dates back to the time of Einstein. Using deep learning to detect differences in shape, the method achieved approximately 80% classification accuracy on a single-particle basis for two types of gold nanoparticles that are roughly the same size but differ in shape. This approach has the potential to advance fundamental research on the Brownian motion of non-spherical particles in liquid, and it could also be useful in practical applications such as detecting foreign substances in homogeneous systems.

The paper titled "Analysis of Brownian motion trajectories of non-spherical nanoparticles using deep learning" was published online in APL Machine Learning on October 25, 2023. The new method was proposed by the group led by Prof. Takanori Ichiki, Research Director of iCONM (Professor, Department of Materials Engineering, Graduate School of Engineering, The University of Tokyo, Japan).

Nanoparticles are useful materials in the medical, pharmaceutical, and industrial fields, so it is necessary to evaluate the properties and agglomeration state of individual nanoparticles and to perform quality control. Nanoparticle Tracking Analysis (NTA) is one way to evaluate nanoparticles in liquid by analyzing the trajectories of their Brownian motion. It is a simple method for measuring single particles from the micrometer down to the nanometer scale. However, it has a long-standing limitation: it cannot evaluate the shape of nanoparticles.

The trajectory of Brownian motion matters because it reflects the influence of particle shape. However, measuring such extremely fast motion is difficult, and conventional analysis methods are inaccurate because they assume the particle is spherical. The research group has developed a deep learning model that identifies shape from measured Brownian motion trajectory data without altering the experimental method.

Their model combines a one-dimensional CNN, which extracts local features through convolution, with a bidirectional LSTM, which accumulates temporal dynamics. By integrating these models, they achieved approximately 80% classification accuracy on a single-particle basis for two types of gold nanoparticles that are roughly the same size but differ in shape. This is a significant improvement over conventional NTA alone.
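The article does not reproduce the authors' code, but a minimal sketch of such a hybrid architecture might look like the following (PyTorch is assumed; layer sizes, trajectory length, and the two-class output are illustrative choices, not the published configuration):

```python
# Illustrative sketch only: a 1-D CNN feeding a bidirectional LSTM,
# in the spirit of the architecture described above.
# Hyperparameters (channels, hidden size, sequence length) are assumptions.
import torch
import torch.nn as nn

class TrajectoryClassifier(nn.Module):
    def __init__(self, n_features=2, n_classes=2):
        super().__init__()
        # 1-D convolutions extract local features from short windows
        # of the (x, y) displacement time series.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # A bidirectional LSTM accumulates temporal dynamics over the trajectory.
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)  # e.g. sphere vs. rod

    def forward(self, x):                      # x: (batch, time, features)
        z = self.cnn(x.transpose(1, 2))        # -> (batch, 64, time)
        z, _ = self.lstm(z.transpose(1, 2))    # -> (batch, time, 128)
        return self.head(z[:, -1])             # logits for the two shape classes

# Example: a batch of 8 trajectories, 300 frames of (dx, dy) displacements.
logits = TrajectoryClassifier()(torch.randn(8, 300, 2))
print(logits.shape)  # torch.Size([8, 2])
```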

Furthermore, they were able to construct a calibration curve to determine the mixing ratio of a solution containing two types of nanoparticles (spherical and rod-shaped). With deep learning analysis, the method is accurate enough to detect the shape of various types of nanoparticles in liquid, making it a practical tool for the first time.
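As a hedged illustration of how such a calibration curve could be built, one can classify each tracked particle, compute the fraction predicted as rod-shaped for mixtures of known composition, and fit a simple relation between that fraction and the true mixing ratio. The numbers below are placeholders, not the published data:

```python
# Sketch: building a calibration curve from per-particle shape predictions.
# Mixing ratios and predicted rod fractions are made-up placeholder values.
import numpy as np

true_rod_fraction = np.array([0.0, 0.25, 0.50, 0.75, 1.0])          # prepared mixtures
predicted_rod_fraction = np.array([0.08, 0.28, 0.49, 0.71, 0.93])   # classifier output

# Fit a linear calibration: predicted = a * true + b
a, b = np.polyfit(true_rod_fraction, predicted_rod_fraction, 1)

def estimate_mixing_ratio(pred_fraction):
    """Invert the calibration to estimate the rod fraction of an unknown sample."""
    return np.clip((pred_fraction - b) / a, 0.0, 1.0)

print(estimate_mixing_ratio(0.40))  # e.g. ~0.38 for an unknown mixture
```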

In traditional NTA, the shape of particles cannot be observed directly, and the information obtained is limited. Although the Brownian motion trajectories measured by the NTA device contain information on nanoparticle shape, detecting shape anisotropy has been a challenge because of the extremely short relaxation time. Conventional analysis methods also assume particles to be spherical, which leads to inaccurate results when a particle is non-spherical. To overcome these challenges, the group aimed to develop a method that is simple and accessible: they introduced deep learning, which excels at finding hidden correlations in large-scale data, into the data analysis without changing the simple experimental procedure. This approach enabled them to solve a long-standing problem in Brownian motion analysis and accurately detect the shape anisotropy of nanoparticles.

In this paper, they aimed to determine the shapes of two types of particles. However, given the range of shapes of commercially available nanoparticles, they believe the method can have practical applications such as detecting foreign substances in homogeneous systems. Expanding NTA in this way can lead to applications not only in research but also in industry, for example in evaluating the properties, agglomeration state, and uniformity of non-spherical nanoparticles and in quality control. The technology could be particularly helpful for evaluating diverse biological nanoparticles, such as extracellular vesicles, in an environment similar to that of living organisms. It also has the potential to be an innovative approach in fundamental research on the Brownian motion of non-spherical particles in liquid.

The West Antarctic Ice Sheet will continue to increase its rate of melting over the rest of the century, no matter how much we reduce fossil fuel use.

Scientists use a supercomputer to simulate the melting of the West Antarctic Ice Sheet and determine how much melting can still be controlled by reducing greenhouse gas emissions

The melting rate of the West Antarctic Ice Sheet will increase in the coming century, regardless of how much we reduce our fossil fuel use. Even if we manage to limit global temperature rise to 1.5°C, melting will still occur three times faster than it did during the 20th century. Scientists simulated the ocean-driven melting of the West Antarctic Ice Sheet using the UK's national supercomputer to determine how much melting is inevitable and how much can still be controlled by reducing greenhouse gas emissions. They found that, once natural climate variability such as El Niño is taken into account, melting under the most ambitious targets of the 2015 Paris Agreement did not differ significantly from melting under mid-range emissions scenarios.

The West Antarctic Ice Sheet is losing ice and is the largest contributor to sea-level rise in Antarctica. Previous models suggest that this loss is due to the warming of the Southern Ocean, particularly the Amundsen Sea region. The West Antarctic Ice Sheet contains enough ice to raise the global mean sea level by up to five meters, which will greatly impact the millions of people living near the coast worldwide. A better understanding of future changes will allow policymakers to plan and adapt more readily.

Lead author Dr Kaitlin Naughten, a researcher at the British Antarctic Survey, states that it appears that we have lost control of the melting of the West Antarctic Ice Sheet. This means that if we wanted to preserve it in its original state, we would have needed to take measures to combat climate change decades ago. However, recognizing the situation in advance provides the world with more time to adapt to the rising sea levels. In case there is a need to abandon or substantially re-engineer a coastal region, a 50-year lead time will make all the difference.

The team carried out simulations of four future scenarios of the 21st century and one historical scenario of the 20th century. The future scenarios either stabilized the global temperature rise at the targets set out by the Paris Agreement, 1.5°C and 2°C, or followed standard scenarios for medium and high carbon emissions.

All scenarios resulted in significant and widespread warming of the Amundsen Sea and increased melting of its ice shelves. The three lower-range scenarios followed nearly identical pathways over the 21st century. Even under the best-case scenario, the warming of the Amundsen Sea accelerated by a factor of three, and the melting of the floating ice shelves that stabilize the inland glaciers followed, although it began to flatten by the end of the century.

The worst-case scenario had more ice shelf melting than the others, but only after 2045. The authors warn that this high fossil fuel scenario, where emissions increase rapidly, is unlikely to occur.

Naughten cautions that reducing our dependence on fossil fuels is still crucial. What we do now will help to slow the rate of sea level rise in the long term. The slower the sea level changes, the easier it will be for governments and society to adapt, even if the rise can't be stopped altogether.

The chip contains 8,400 functional artificial neurons made of waveguide-coupled phase-change material. The neural network was trained to differentiate between German and English texts based on vowel frequency. © Jonas Schütte / AG Pernice

German researchers develop an adaptive optical neural network that connects thousands of artificial neurons

An international team of researchers has created a photonic processor that features adaptive neural connectivity. This processor is capable of processing data more efficiently and at a faster pace than traditional digital computers. Constructed from waveguide-coupled phase-change material, this processor comprises almost 8,400 functioning artificial neurons. It can differentiate between German and English texts based on vowel frequency. The German Research Association, the European Commission, and UK Research and Innovation have provided support for this research.

Supercomputer models for complex AI applications are pushing traditional digital computing processes to their limits. New types of computing architecture, which emulate the working principles of biological neural networks, hold the promise of faster, more energy-efficient data processing. A team of researchers has now developed an event-based architecture that uses photonic processors to transport and process data with light. As in the brain, this makes it possible to continuously adapt the connections within the neural network, and such adaptable connections are the basis for learning processes.

The team of researchers used a network consisting of almost 8,400 optical neurons made of waveguide-coupled phase-change material. They showed that the connection between two neurons can indeed become stronger or weaker (synaptic plasticity) and that new connections can be formed, or existing ones eliminated (structural plasticity). In contrast to other similar studies, the synapses were not hardware elements but were encoded in the properties of the optical pulses.

Compared to traditional electronic processors, light-based processors offer a significantly higher bandwidth, making it possible to carry out complex computing tasks with lower energy consumption. The researchers aim to develop an optical computing architecture that will make it possible to compute AI applications in a rapid and energy-efficient way in the long term.

The non-volatile phase-change material can be switched between an amorphous structure and a crystalline structure with a highly ordered atomic lattice. This feature allows permanent data storage even without an energy supply. The researchers tested the performance of the neural network by using an evolutionary algorithm to train it to distinguish between German and English texts based on the number of vowels in the text.
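The benchmark task itself can be illustrated with a tiny software analogue: classify a text as German or English from its vowel frequencies, with the decision rule tuned by a toy evolutionary search. This is purely a conceptual sketch of the task, not the photonic hardware or the authors' training code, and the sample snippets and parameters are assumptions:

```python
# Conceptual sketch: distinguish German from English text by vowel frequencies,
# tuning a linear decision rule with a tiny evolutionary algorithm.
# All sample texts and parameters are illustrative assumptions.
import random

VOWELS = "aeiou"

def vowel_features(text):
    text = text.lower()
    n = max(1, sum(c.isalpha() for c in text))
    return [text.count(v) / n for v in VOWELS]  # per-letter frequency of each vowel

# Tiny labeled corpus (0 = English, 1 = German); real training data would be larger.
samples = [
    ("the quick brown fox jumps over the lazy dog", 0),
    ("light based processors offer higher bandwidth", 0),
    ("der schnelle braune fuchs springt ueber den faulen hund", 1),
    ("lichtbasierte prozessoren bieten eine hoehere bandbreite", 1),
]
X = [vowel_features(t) for t, _ in samples]
y = [label for _, label in samples]

def accuracy(w):
    # Predict German when the weighted vowel frequencies exceed the bias term w[-1].
    preds = [1 if sum(wi * xi for wi, xi in zip(w[:-1], x)) > w[-1] else 0 for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Evolutionary loop: mutate a population of weight vectors, keep the fittest.
random.seed(0)
population = [[random.uniform(-1, 1) for _ in range(len(VOWELS) + 1)] for _ in range(20)]
for _ in range(50):
    population.sort(key=accuracy, reverse=True)
    parents = population[:5]
    population = parents + [
        [w + random.gauss(0, 0.1) for w in random.choice(parents)] for _ in range(15)
    ]

best = max(population, key=accuracy)
print("training accuracy:", accuracy(best))
```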

The researchers received financial support from the German Research Association, the European Commission, and UK Research and Innovation.

Artist's impression of the discovery of microsecond bursts. The foreground shows the Green Bank Telescope (United States) with which the research was done. Incoming radio waves are shown as white, red, and orange streaks that follow each other in rapid succession. The long red streaks are the previously known millisecond flashes. © Daniëlle Futselaar/www.artsource.nl

Dutch astronomers search through telescope archives and discover radio bursts that last just a few microseconds

An international team of researchers led by Dutch Ph.D. candidate Mark Snelders has discovered radio pulses from the distant universe that last only millionths of a second. The ultrafast bursts are a new kind of fast radio burst: unpredictable flashes of radio waves from far beyond our Milky Way, possibly produced by magnetic neutron stars. The discovery of these ultrafast bursts could help researchers create a map of the space between stars and galaxies and better understand how galaxies are fed by the surrounding gas.

Researchers at the University of Amsterdam and ASTRON discovered microsecond radio bursts after meticulously examining archival data from a known millisecond source. The origin of these ultra-fast bursts remains unclear, but they may be caused by magnetic neutron stars, also known as magnetars. The first bursts were discovered in 2007 and most of them last longer than a thousandth of a second, emitting as much energy as our sun generates in a day.

In 2022, Mark Snelders, a Ph.D. candidate at ASTRON and the University of Amsterdam, led a team that hypothesized the existence of bursts that would last only millionths of a second. The researchers used a public archive from the Breakthrough Listen project, which searches for extraterrestrial life, to analyze five hours of data from the known repeating fast radio burst FRB 20121102A, located three billion light years away toward the constellation of Auriga.

The researchers used software filters and machine learning to analyze half a million individual images per second, discovering eight ultra-fast bursts that lasted only ten-millionths of a second or less. While the researchers expect to find more such sources, some archival data files may not have sufficient time resolution for this kind of analysis.
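One standard way to look for such short bursts, sketched here in simplified form, is to convolve a high-time-resolution intensity series with boxcar windows of several widths and keep events whose signal-to-noise ratio exceeds a threshold. This is an illustration of generic boxcar-style single-pulse searching on synthetic data, not the team's actual pipeline:

```python
# Simplified illustration of a boxcar burst search on a high-time-resolution
# intensity series; data, sample rate, and thresholds are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(42)
dt = 1e-6                               # 1-microsecond samples (assumed)
series = rng.normal(0.0, 1.0, 500_000)  # noise-only background
series[120_000:120_008] += 12.0         # inject an 8-microsecond burst

def boxcar_search(x, widths=(1, 2, 4, 8, 16), threshold=7.0):
    """Return (sample index, boxcar width, S/N) for candidates above the threshold."""
    x = (x - x.mean()) / x.std()
    hits = []
    for w in widths:
        # Dividing the boxcar sum by sqrt(w) keeps the noise at unit standard deviation.
        smoothed = np.convolve(x, np.ones(w) / np.sqrt(w), mode="same")
        idx = np.flatnonzero(smoothed > threshold)
        hits.extend((int(i), w, float(smoothed[i])) for i in idx)
    return hits

for i, w, snr in boxcar_search(series)[:5]:
    print(f"candidate at t = {i * dt * 1e6:.1f} us, width {w} samples, S/N {snr:.1f}")
```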

Ultimately, the researchers hope to use the bursts to create a map of the space between stars and galaxies to better understand how galaxies are fed by the surrounding gas.

AI hardware processing is going 3D, from square to cube, to boost processing power

A team of researchers from the University of Oxford, in collaboration with other universities, has developed an innovative hardware system that combines photonic and electronic technologies to process 3D data. The system significantly enhances processing power for AI tasks. To test the hardware, the team analyzed 100 electrocardiogram signals simultaneously and achieved a 93.5% accuracy rate in identifying the risk of sudden death. The researchers believe that this approach could lead to a 100-fold increase in energy efficiency and compute density compared to current electronic processors if scaled up.

The processing power of conventional computer chips doubles roughly every 18 months. However, the computing power demanded by modern AI tasks is currently doubling roughly every 3.5 months. This means that new supercomputing paradigms are urgently needed to cope with the rising demand.

One possible solution is to use light instead of electronics to carry out multiple calculations in parallel using different wavelengths to represent different sets of data. In 2021, the same authors published groundbreaking work demonstrating a form of integrated photonic processing chip that could carry out matrix-vector multiplication at a much faster speed than the fastest electronic approaches. This breakthrough led to the creation of Salience Labs, a photonic AI company that emerged from the University of Oxford.

The team has now taken this concept further by adding an extra parallel dimension to the processing capability of their photonic matrix-vector multiplier chips. This higher-dimensional processing is made possible by using multiple different radio frequencies to encode the data, thereby achieving a level of parallelism that was previously impossible.
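Conceptually, this parallelism can be modeled as a stack of independent matrix-vector products, one per wavelength and radio-frequency channel, all evaluated in a single pass through the hardware. The following numerical analogue is only a hedged illustration of that idea (channel counts and matrix sizes are arbitrary assumptions, not the chip's specification):

```python
# Numerical analogue of wavelength- and radio-frequency-parallel
# matrix-vector multiplication; sizes and channel counts are arbitrary.
import numpy as np

n_wavelengths = 4    # parallel wavelength channels (assumed)
n_rf = 3             # additional radio-frequency dimension (assumed)
n_in, n_out = 6, 6   # illustrative input/output sizes

rng = np.random.default_rng(0)
weights = rng.normal(size=(n_out, n_in))               # one photonic weight matrix
inputs = rng.normal(size=(n_rf, n_wavelengths, n_in))  # many input vectors at once

# On the chip, all channel products are formed simultaneously in the analog domain;
# here einsum stands in for that parallel evaluation.
outputs = np.einsum('oi,rwi->rwo', weights, inputs)
print(outputs.shape)  # (3, 4, 6): n_rf x n_wavelengths output vectors in one pass
```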

The team tested the hardware by applying it to the task of assessing the risk of sudden death from electrocardiograms of heart disease patients. They were able to successfully analyze 100 electrocardiogram signals simultaneously, accurately identifying the risk of sudden death with 93.5% accuracy.

The researchers estimated that even with a moderate scaling of 6 inputs × 6 outputs, this approach could outperform state-of-the-art electronic processors, potentially providing a 100-fold enhancement in energy efficiency and compute density. The team anticipates further enhancement of supercomputing parallelism in the future by exploiting more degrees of freedom of light, such as polarization and mode multiplexing.

Dr. Bowei Dong, the first author of the publication, expressed his gratitude for the vibrant and collaborative platform provided by Oxford, which gave him the opportunity and courage to push the frontiers of advanced AI supercomputing hardware. Professor Harish Bhaskaran, the co-founder of Salience Labs and leader of this work, said that this is an exciting time to be doing research in AI hardware at the fundamental scale, and this work is one example of how what we assumed was a limit can be further surpassed.