Japanese researchers produce a simulation that leads to a better understanding of the motion of living organisms and the spontaneous organization of living systems

As anyone who drinks their coffee with milk knows, it's much easier to mix liquids together than to separate them. In fact, the second law of thermodynamics would seem to dictate that a mixture could never separate again if there are no attractive forces between similar particles. However, investigators from the Institute of Industrial Science at The University of Tokyo have shown the mechanism by which a mixture of actively spinning particles, such as bacteria, in a fluid can sort itself in a process called phase separation, even without attractions between particles.

In a study published recently in Communications Physics, researchers from the Institute of Industrial Science at The University of Tokyo have shown that the demixing behavior of two groups of discs rotating in opposite directions, induced only through self-generated flow, can be explained by turbulent effects.

Sometimes mixed liquids, such as oil and water, can spontaneously "unmix" in a process called phase separation. While systems without external energy input have been studied for a long time, the behavior of so-called active matter, in which particles such as bacteria or algae expend energy to move autonomously, remains poorly understood.

Now, a team of researchers from The University of Tokyo has created a supercomputer simulation of a mixture of discs rotating in opposite directions in a fluid to elucidate this phenomenon. It is already known that active motion in a straight line, like that of swimming bacteria, can cause a mixture to separate spontaneously, a process called "motility-induced phase separation." However, active motion can include rotation as well as translation, and the organization of self-spinning particles has been studied much less.
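For a feel of how such spin-dependent, flow-mediated interactions can be integrated numerically, the toy sketch below reduces each disc to a 2D point vortex whose circulation sign stands for its spin direction and advects every rotor in the flow generated by all the others. This is purely illustrative: the box size, particle number, time step and the point-vortex reduction itself are assumptions, not the authors' fluid-particle simulation.

```python
import numpy as np

# Toy 2D point-vortex sketch (illustrative only): each "disc" is reduced to a
# point vortex whose circulation sign (+/-) stands for its spin direction, and
# every rotor is advected by the flow generated by all the others.
# This is NOT the authors' hydrodynamic simulation.

rng = np.random.default_rng(0)
n = 100                                    # number of rotors (assumed)
box = 10.0                                 # periodic box size (assumed)
pos = rng.uniform(0.0, box, size=(n, 2))   # random initial positions
gamma = np.where(rng.random(n) < 0.5, 1.0, -1.0)  # +1 counterclockwise, -1 clockwise
dt, steps, eps = 0.01, 2000, 0.1           # time step, iterations, core regularization

def velocities(pos):
    """Regularized Biot-Savart velocity induced on each rotor by all the others."""
    diff = pos[:, None, :] - pos[None, :, :]                  # r_i - r_j
    r2 = (diff ** 2).sum(-1) + eps ** 2                       # regularized squared distance
    np.fill_diagonal(r2, np.inf)                              # no self-interaction
    perp = np.stack([-diff[..., 1], diff[..., 0]], axis=-1)   # z_hat x (r_i - r_j)
    # v_i = sum_j gamma_j / (2 pi) * z_hat x (r_i - r_j) / |r_i - r_j|^2
    return (gamma[None, :, None] * perp / (2 * np.pi * r2[..., None])).sum(axis=1)

for _ in range(steps):
    pos = (pos + dt * velocities(pos)) % box                  # forward Euler, periodic box

# Crude "demixing" diagnostic: does each rotor's nearest neighbour share its spin?
diff = pos[:, None, :] - pos[None, :, :]
diff -= box * np.round(diff / box)                            # minimum-image convention
d2 = (diff ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
nearest = d2.argmin(axis=1)
print("fraction of nearest neighbours sharing a spin direction:",
      round(float(np.mean(gamma[nearest] == gamma)), 2))
```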

"Active matter serves as a bridge between biological and physical worlds when considering the laws of self-organization," says the first author of the study, Bhadra Hrishikesh. The researchers found that in the case of self-spinning particles, phase separation creates the largest structure directly from a chaotic state. This is in contrast with ordinary phase separation, in which phase-separated domains grow gradually over time, as we see in salad dressing.

"It was known that a mixture of oppositely rotating disks can undergo phase separation even without a fluid. We were interested in comparing our system--in which the only interactions between particles are carried by the fluid--with a similar driven system without these interactions," says Hajime Tanaka, senior author. The investigators found that the sudden phase separation of the discs into regions of clockwise and counterclockwise collections is due to nonlinear turbulent effects. This research may lead to a better understanding of the motion of living organisms and thereby, the spontaneous organization of living systems.

Sparsification of the U.S. mobility network. On the left is the original network with about 26 million edges. On the right, a sparsified network based on effective resistance sampling. CREDIT Mercier et al.

Santa Fe Institute uses mobility network sparsification for high-fidelity epidemic simulations

Simulations that help determine how a large-scale pandemic will spread can take weeks or even months to run. A recent study in PLOS Computational Biology offers a new approach to epidemic modeling that could drastically speed up the process. 

The study uses sparsification, a method from graph theory and computer science, to identify which links in a network are the most important for the spread of disease.

By focusing on critical links, the authors found they could reduce the computation time for simulating the spread of diseases through highly complex social networks by 90% or more. 

“Epidemic simulations require substantial computational resources and time to run, which means your results might be outdated by the time you are ready to publish,” says lead author Alexander Mercier, a former Undergraduate Research Fellow at the Santa Fe Institute and now a Ph.D. student at the Harvard T.H. Chan School of Public Health. “Our research could ultimately enable us to use more complex models and larger data sets while still acting on a reasonable timescale when simulating the spread of pandemics such as COVID-19.”

For the study, Mercier, with SFI researchers Samuel Scarpino and Cristopher Moore, used data from the U.S. Census Bureau to develop a mobility network describing how people across the country commute. 

Then, they applied several different sparsification methods to see if they could reduce the network’s density while retaining the overall dynamics of a disease spreading across the network. 

The most successful sparsification technique they found was effective resistance. This technique comes from computer science and is based on the total resistance between two endpoints in an electrical circuit. In the new study, effective resistance works by prioritizing the edges, or links, between nodes in the mobility network that are the most likely avenues of disease transmission while ignoring links that can be easily bypassed by alternate paths.
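As a rough illustration of the idea, the sketch below samples edges of a small weighted graph with probability proportional to weight times effective resistance, computed from a dense pseudoinverse of the graph Laplacian. The toy graph, the keep fraction, and the dense linear algebra are assumptions chosen for readability; practical implementations of Spielman-Srivastava sparsification use fast approximate solvers rather than a pseudoinverse.

```python
import numpy as np
import networkx as nx

def effective_resistance_sparsify(G, keep_fraction=0.1, seed=0):
    """Sketch of Spielman-Srivastava-style sparsification on a small graph.

    Edges are sampled with probability proportional to weight x effective
    resistance and reweighted so the sparsifier approximates the original
    graph on average. The dense pseudoinverse limits this to small graphs.
    """
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    index = {v: i for i, v in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes, weight="weight").toarray().astype(float)
    Lplus = np.linalg.pinv(L)                     # Moore-Penrose pseudoinverse of the Laplacian

    edges = list(G.edges(data="weight", default=1.0))
    weights = np.array([w for _, _, w in edges])
    # Effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v)
    reff = np.array([Lplus[index[u], index[u]] + Lplus[index[v], index[v]]
                     - 2.0 * Lplus[index[u], index[v]] for u, v, _ in edges])

    probs = weights * reff
    probs /= probs.sum()
    q = max(1, int(keep_fraction * len(edges)))   # number of i.i.d. edge samples
    sampled = rng.choice(len(edges), size=q, replace=True, p=probs)

    H = nx.Graph()
    H.add_nodes_from(nodes)
    for i in sampled:
        u, v, w = edges[i]
        add = w / (q * probs[i])                  # unbiased reweighting of each sample
        if H.has_edge(u, v):
            H[u][v]["weight"] += add
        else:
            H.add_edge(u, v, weight=add)
    return H

# Toy usage on a random weighted graph (illustrative only).
rng = np.random.default_rng(1)
G = nx.gnp_random_graph(200, 0.1, seed=1)
for u, v in G.edges():
    G[u][v]["weight"] = rng.uniform(0.1, 1.0)
H = effective_resistance_sparsify(G, keep_fraction=0.1)
print(G.number_of_edges(), "edges ->", H.number_of_edges(), "edges kept")
```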

“It’s common in the life sciences to naively ignore low-weight links in a network, assuming that they have a small probability of spreading a disease,” says Scarpino. “But as in the catchphrase ‘the strength of weak ties,’ even a low-weight link can be structurally important in an epidemic — for instance, if it connects two distant regions or distinct communities.”

Using their effective resistance sparsification approach, the researchers created a network containing 25 million fewer edges — or about 7% of the original U.S. commuting network — while preserving overall epidemic dynamics.

“Computer scientists Daniel Spielman and Nikhil Srivastava had shown that sparsification can simplify linear problems, but discovering that it works even for nonlinear, stochastic problems like an epidemic was a real surprise,” says Moore.

While still in an early stage of development, the research not only helps reduce the computational cost of simulating large-scale pandemics but also preserves important details about disease spread, such as the probability of a specific census tract getting infected and when the epidemic is likely to arrive there.

University of Texas at El Paso wins $5M grant to support computer science students

The program offers scholarships to UTEP and EPCC students

The University of Texas at El Paso received a $5 million grant from the National Science Foundation (NSF) to provide financial support and professional development experiences to talented students in the field of computer science.

As part of NSF’s Scholarships for STEM (S-STEM) program, the initiative will provide partial scholarships to 26 students at UTEP who are working on their bachelor’s degrees and focusing on data science or cybersecurity.

The UTEP Computer Science Department also will collaborate with El Paso Community College (EPCC) to fund scholarships for 15 students who start at EPCC and transfer to UTEP to complete their bachelor’s degrees.

“This S-STEM program builds on years of NSF support in the Paso del Norte region,” said Kenith Meissner, Ph.D., dean of the College of Engineering. “Moreover, the coordinated effort between UTEP and EPCC will help broaden the talent pool needed to address critical national needs in data science and cybersecurity. We are excited to be part of this collaboration that expands opportunities for highly motivated students in high-demand STEM areas.”

The grant was first awarded to UTEP in 2016. Salamah Salamah, Ph.D., chair of the computer science department and the project’s principal investigator, said it’s unusual for the S-STEM grant to be awarded twice to the same institution.

“The stature of UTEP and what we’re doing here in this department is something that can’t be ignored,” he said. “NSF understands the great things we’re doing.”

Of the 41 students who received scholarships under the first S-STEM grant, nearly all graduated with a bachelor’s degree, 40 attended conferences, 15 were involved in research, and 15 pursued a graduate degree. Additionally, more than half of the program participants were women.

“The S-STEM program has provided the ideal bridge for students from EPCC who want to pursue their computing degree at UTEP,” said Christian Servin, Ph.D., associate professor of computer science at EPCC. “This partnership prepares students mentally and financially to succeed at the four-year institution once they transfer, speeding up the process of developing marketable skills, including research and computational thinking skills.”

Influenced by the best practices pioneered by the Computing Alliance for Hispanic-Serving Institutions (CAHSI), UTEP’s computer science department provides S-STEM scholars with professional development training and opportunities that can build their confidence and give them an edge in the job market. For example, professors accompany students to the annual Great Minds in STEM conference, where students learn how to network with job recruiters, share their stories and highlight their skills.

“One of the greatest things you can see is how the students start to become leaders,” said Diego Aguirre, Ph.D., co-principal investigator of the grant and assistant professor of computer science. “Many of them come into the program with a desire to help others. As they learn skills and move into this space, they start sharing that newfound knowledge with others. The program's impact is not just in the students who get the scholarships, it’s in the impact those students have wherever they go.”

Researchers compared the output (activity on the top and decoder accuracy on the bottom) associated with real neural data (left column) and several models of working memory to the right. The ones that best resembled the real data were the "PS" models featuring short-term synaptic plasticity.

MIT neuroscientists produce insights into how holding information in mind may mean storing it among synapses

Comparing models of working memory with real-world data, MIT researchers found that information resides not in persistent neural activity, but in the pattern of connections between neurons

Between the time you read the Wi-Fi password off the café’s menu board and the time you can get back to your laptop to enter it, you have to hold it in mind. If you’ve ever wondered how your brain does that, you are asking a question about working memory that researchers have strived for decades to explain. Now MIT neuroscientists have published a key new insight to explain how it works.

In a study in PLOS Computational Biology, scientists at The Picower Institute for Learning and Memory compared measurements of brain cell activity in an animal performing a working memory task with the output of various supercomputer models representing two theories of the underlying mechanism for holding information in mind. The results strongly favored the newer notion that a network of neurons stores the information by making short-lived changes in the pattern of their connections, or synapses, and contradicted the traditional alternative that memory is maintained by neurons remaining persistently active (like an idling engine).

While both models allowed for information to be held in mind, only the versions that allowed for synapses to transiently change connections (“short-term synaptic plasticity”) produced neural activity patterns that mimicked what was observed in real brains at work. The idea that brain cells maintain memories by being always “on” may be simpler, acknowledged senior author Earl K. Miller, but it doesn’t represent what nature is doing and can’t produce the sophisticated flexibility of thought that can arise from intermittent neural activity backed up by short-term synaptic plasticity.

“You need these kinds of mechanisms to give working memory activity the freedom it needs to be flexible,” said Miller, Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS). “If working memory was just sustained activity alone, it would be as simple as a light switch. But working memory is as complex and dynamic as our thoughts.”

Co-lead author Leo Kozachkov, who earned his Ph.D. at MIT in November for theoretical modeling work including this study, said matching supercomputer models to real-world data was crucial.

“Most people think that working memory ‘happens’ in neurons—persistent neural activity gives rise to persistent thoughts. However, this view has come under recent scrutiny because it does not really agree with the data,” said Kozachkov who was co-supervised by co-senior author Jean-Jacques Slotine, a professor in BCS and mechanical engineering. “Using artificial neural networks with short-term synaptic plasticity, we show that synaptic activity (instead of neural activity) can be a substrate for working memory. The important takeaway from our paper is: these ‘plastic’ neural network models are more brain-like, in a quantitative sense, and also have additional functional benefits in terms of robustness.”

Matching models with nature

Alongside co-lead author John Tauber, an MIT graduate student, Kozachkov’s goal was not just to determine how working memory information might be held in mind, but to shed light on which way nature does it. That meant starting with “ground truth” measurements of the electrical “spiking” activity of hundreds of neurons in the prefrontal cortex of an animal as it played a working memory game. In each of the many rounds, the animal was shown an image that then disappeared. A second later it would see two images including the original and had to look at the original to earn a little reward. The key moment is that intervening second, called the “delay period,” in which the image must be kept in mind in advance of the test.

The team consistently observed what Miller’s lab has seen many times before: The neurons spike a lot when seeing the original image, spike only intermittently during the delay, and then spike again when the images must be recalled during the test (these dynamics are governed by an interplay of beta and gamma frequency brain rhythms). In other words, spiking is strong when information must be initially stored and when it must be recalled but is only sporadic when it has to be maintained. The spiking is not persistent during the delay.

Moreover, the team trained software “decoders” to read out the working memory information from the measurements of spiking activity. They were highly accurate when spiking was high, but not when it was low, as in the delay period. This suggested that spiking doesn’t represent information during the delay. But that raised a crucial question: If spiking doesn’t hold information in mind, what does?
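Decoding of this kind can be pictured as training one classifier per time bin on the population spike counts. The sketch below uses a logistic-regression decoder on synthetic data; the array shapes, parameters, and classifier choice are assumptions for illustration, not the study's actual decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: spike counts binned per trial, neuron and time bin,
# plus the identity of the remembered image on each trial.
# All shapes are assumptions for illustration, not the study's dimensions.
n_trials, n_neurons, n_bins = 400, 200, 30
rng = np.random.default_rng(0)
spike_counts = rng.poisson(2.0, size=(n_trials, n_neurons, n_bins)).astype(float)
stimulus_id = rng.integers(0, 2, size=n_trials)   # which image was shown

# Train one decoder per time bin and track cross-validated accuracy over time.
accuracy = np.zeros(n_bins)
for t in range(n_bins):
    X = spike_counts[:, :, t]                     # population activity in this bin
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X, stimulus_id, cv=5).mean()

# In the real data, accuracy is high around stimulus onset and recall but drops
# toward chance during the delay, when spiking is only sporadic.
print(accuracy.round(2))
```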

Researchers including Mark Stokes at the University of Oxford have proposed that changes in the relative strength, or “weights,” of synapses could store the information instead. The MIT team put that idea to the test by computationally modeling neural networks embodying two versions of each main theory. As with the real animal, the machine learning networks were trained to perform the same working memory task and to output neural activity that could also be interpreted by a decoder.
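A common way to endow a model network with short-term synaptic plasticity is a decaying Hebbian "fast weight" term added on top of the ordinary recurrent weights. The sketch below shows that generic mechanism only; it is not a trained model and not the specific "PS" architectures compared in the paper, and every parameter value is an assumption.

```python
import numpy as np

# Generic sketch of a recurrent unit with short-term (Hebbian) synaptic plasticity.
# W_slow are ordinary fixed weights; W_fast is a rapidly updated, decaying
# Hebbian term that can carry information even when firing rates are low.
# Illustrates the general idea only, not the paper's "PS" models.

rng = np.random.default_rng(0)
n = 50                                        # number of units (assumed)
W_slow = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
W_fast = np.zeros((n, n))
r = np.zeros(n)                               # firing rates

lam, eta = 0.95, 0.05                         # fast-weight decay and Hebbian rate (assumed)

def step(r, W_fast, inp):
    # Rate dynamics driven by slow weights, fast (plastic) weights and input.
    r_new = np.tanh(W_slow @ r + W_fast @ r + inp)
    # Short-term plasticity: decaying Hebbian update from coactive units.
    W_fast_new = lam * W_fast + eta * np.outer(r_new, r_new)
    return r_new, W_fast_new

# Encode a brief "stimulus", then run a silent delay with no input.
stimulus = rng.normal(0, 1.0, n)
for _ in range(5):                            # stimulus period
    r, W_fast = step(r, W_fast, stimulus)
for _ in range(20):                           # delay period: no input
    r, W_fast = step(r, W_fast, np.zeros(n))

# Even if rates decay during the delay, the fast weights retain a trace of the
# stimulus that later input can reactivate.
print("mean |rate| after delay:", np.abs(r).mean().round(3))
print("fast-weight norm after delay:", np.linalg.norm(W_fast).round(3))
```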

The upshot is that the computational networks that allowed for short-term synaptic plasticity to encode information spiked when the actual brain spiked and didn’t when it didn’t. The networks featuring constant spiking as the method for maintaining memory spiked all the time including when the natural brain did not. And the decoder results revealed that accuracy dropped during the delay period in the synaptic plasticity models but remained unnaturally high in the persistent spiking models.

In another layer of analysis, the team created a decoder to read out information from the synaptic weights. They found that during the delay period, the synapses represented the working memory information that the spiking did not.

Of the two model versions that featured short-term synaptic plasticity, the most realistic one was called “PS-Hebb,” which features a negative feedback loop that keeps the neural network stable and robust, Kozachkov said.

Workings of working memory

In addition to matching nature better, the synaptic plasticity models also conferred other benefits that likely matter to real brains. One was that the plasticity models retained information in their synaptic weightings even after as many as half of the artificial neurons were “ablated.” The persistent activity models broke down after losing just 10-20 percent of their synapses. And, Miller added, just spiking occasionally requires less energy than spiking persistently.

Furthermore, Miller said, quick bursts of spiking rather than persistent spiking leave room in time for storing more than one item in memory. Research has shown that people can hold up to four different things in working memory. Miller’s lab plans new experiments to determine whether models with intermittent spiking and synaptic weight-based information storage appropriately match real neural data when animals must hold multiple things in mind rather than just one image.

In addition to Miller, Kozachkov, Tauber, and Slotine, the paper’s other authors are Mikael Lundqvist and Scott Brincat.

The Office of Naval Research, the JPB Foundation, and ERC and VR Starting Grants funded the research.

Technical snowmaking on the Gemsstock. (Photo: Valentin Luthiger)

Swiss scientists investigate skiing over the Christmas holidays with snowmaking

For many people in Switzerland, holidays in the snow are as much a part of the end of the year as Christmas trees and fireworks. As global warming progresses, however, white slopes are becoming increasingly rare. Researchers at the University of Basel have calculated how well one of Switzerland’s largest ski resorts can remain snow-reliable with technical snowmaking through the year 2100, and how much water this snow will consume.

The future for ski sports in Switzerland looks anything but rosy – or rather white. Current climate models predict that there will be more precipitation in winter in the coming decades, but that it will fall as rain instead of snow. Despite this, one investor recently spent several million Swiss francs on expanding the Andermatt-Sedrun-Disentis ski resort. A short-sighted decision they will regret in the future?

A research team led by Dr. Erika Hiltbrunner from the Department of Environmental Sciences at the University of Basel has now calculated the extent to which this ski resort can maintain its economically important Christmas holidays and a ski season of at least 100 days with and without snowmaking. The team collected data on the aspects of the slopes, where and when the snow is produced at the ski resort, and with how much water. They then applied the latest climate change scenarios (CH2018) in combination with the SkiSim 2.0 simulation software for projections of snow conditions with and without technical snowmaking. The results of their investigations were recently published in the International Journal of Biometeorology.

No guarantee of a white Christmas

According to the results, the use of technical snow can indeed guarantee a 100-day ski season – in the higher parts of the ski resort (at 1,800 meters and above), at least. But business is likely to be tight during the Christmas holidays in coming decades, with the weather often not cold enough at this time and in the weeks before. In the scenario with unabated greenhouse gas emissions, the Sedrun region in particular will no longer be able to offer guaranteed snow over Christmas in the longer term. New snow guns may alleviate the situation to a certain extent, say the researchers, but will not resolve the issue completely.

“Many people don’t realize that you also need certain weather conditions for snowmaking,” explains Hiltbrunner. “It must not be too warm or too humid, otherwise there will not be enough evaporative cooling for the sprayed water to freeze in the air and come down as snow.” Warm air absorbs more moisture and so, as winters become warmer, it also gets increasingly difficult or impossible to produce snow technically. In other words: “Here, the laws of physics set clear limits for snowmaking.”
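This condition is often expressed as a wet-bulb temperature threshold, since the wet-bulb temperature combines air temperature and humidity into the effective temperature the sprayed droplets feel. The sketch below uses Stull's (2011) empirical wet-bulb approximation with an assumed threshold of about -2 °C; the threshold value and the example conditions are illustrative assumptions, not figures from the study.

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature and relative
    humidity, using Stull's (2011) empirical formula (valid roughly for
    RH 5-99 % and T -20 to 50 deg C)."""
    T, RH = temp_c, rh_percent
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

def snowmaking_possible(temp_c: float, rh_percent: float,
                        threshold_c: float = -2.0) -> bool:
    """Assumed rule of thumb: snow guns need a wet-bulb temperature at or below
    roughly -2 deg C; the exact threshold varies by equipment."""
    return wet_bulb_stull(temp_c, rh_percent) <= threshold_c

# Example: air at +1 deg C is workable when dry but hopeless when humid.
for rh in (30, 60, 90):
    tw = wet_bulb_stull(1.0, rh)
    print(f"T=+1.0 C, RH={rh}% -> wet bulb {tw:.1f} C, "
          f"snowmaking possible: {snowmaking_possible(1.0, rh)}")
```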

540 million liters

The skiing will still go on, however, because technical snowmaking at least enables resort operators to keep the higher ski runs open for 100 consecutive days – even up to the end of the century, and even if climate change continues unabated. But there is a high price to be paid for this. The researchers’ calculations show that water consumption for snowmaking will increase significantly, by about 80% for the resort as a whole. In an average winter toward the end of the century, consumption would thus amount to about 540 million liters of water, compared with 300 million liters today.

But this increase in water demand is still relatively moderate compared with other ski resorts, the researchers emphasize. Earlier studies had shown that water consumption for snowmaking in the Scuol ski resort, for example, would increase by a factor of 2.4 to 5, because the area covered with snow there will have to be largely expanded to guarantee snow reliability.

For their analysis, the researchers considered 30-year periods. However, there are large year-to-year fluctuations, and extreme events are not captured in the climate scenarios. In the snow-poor winter of 2017, water consumption for snowmaking in one of the three sub-areas of Andermatt-Sedrun-Disentis tripled.

Conflicts over water use

Today, some of the water used for snowmaking in the largest sub-area of Andermatt-Sedrun-Disentis comes from the Oberalpsee. A maximum of 200 million liters may be withdrawn annually for this purpose. If climate change continues unabated, this source of water will last until the middle of the century, at which point new sources will have to be exploited. “The Oberalpsee is also used to produce hydroelectric power,” says Dr. Maria Vorkauf, lead author of the study, who now works at the Agroscope research station. “Here, we are likely to see a conflict between the water demands for the ski resort and those for hydropower generation.”

At first, this ski resort may even benefit from climate change – if lower-lying and smaller ski resorts are obliged to close, tourists will move to larger resorts at higher altitudes, one of which is Andermatt-Sedrun-Disentis.

What is certain is that increased snowmaking will drive up costs and thus also the price of ski holidays. “Sooner or later, people with average incomes will simply no longer be able to afford them,” says Hiltbrunner.