LATEST

SwRI scientists modeled the protracted period of bombardment after the Moon formed, determining that impactor metals may have descended into Earth’s core. This artistic rendering illustrates a large impactor crashing into the young Earth. Light brown and gray particles indicate the projectile’s mantle (silicate) and core (metal) material, respectively.

Large planetesimals delivered more mass to nascent Earth than previously thought

Southwest Research Institute scientists recently modeled the protracted period of bombardment following the Moon's formation, when leftover planetesimals pounded the Earth. Based on these simulations, scientists theorize that moon-sized objects delivered more mass to the Earth than previously thought.


Early in its evolution, Earth sustained an impact with another large object, and the Moon formed from the resulting debris ejected into an Earth-orbiting disk. A long period of bombardment followed, the so-called "late accretion," when large bodies impacted the Earth delivering materials that were accreted or integrated into the young planet.

"We modeled the massive collisions and how metals and silicates were integrated into Earth during this 'late accretion stage,' which lasted for hundreds of millions of years after the Moon formed," said SwRI's Dr. Simone Marchi, lead author of a Nature Geoscience paper outlining these results. "Based on our simulations, the late accretion mass delivered to Earth may be significantly greater than previously thought, with important consequences for the earliest evolution of our planet."

Previously, scientists estimated that materials from planetesimals integrated during the final stage of terrestrial planet formation made up about half a percent of the Earth's present mass. This is based on the concentration of highly "siderophile" elements -- metals such as gold, platinum and iridium, which have an affinity for iron -- in the Earth's mantle. The relative abundance of these elements in the mantle points to late accretion, after Earth's core had formed. But the estimate assumes that all highly siderophile elements delivered by the later impacts were retained in the mantle.

Late accretion may have involved large differentiated projectiles. These impactors may have concentrated the highly siderophile elements primarily in their metallic cores. New high-resolution impact simulations by researchers at SwRI and the University of Maryland show that substantial portions of a large planetesimal's core could descend to, and be assimilated into, the Earth's core -- or ricochet back into space and escape the planet entirely. Both outcomes reduce the amount of highly siderophile elements added to Earth's mantle, which implies that two to five times as much material may have been delivered as previously thought.

"These simulations also may help explain the presence of isotopic anomalies in ancient terrestrial rock samples such as komatiite, a volcanic rock," said SwRI co-author Dr. Robin Canup. "These anomalies were problematic for lunar origin models that imply a well-mixed mantle following the giant impact. We propose that at least some of these rocks may have been produced long after the Moon-forming impact, during late accretion."

New method creates time-efficient way of supercomputing models of complex systems reaching equilibrium

When the maths cannot be done by hand, physicists modelling complex systems, like the dynamics of biological molecules in the body, need to use supercomputer simulations. Such complicated systems require a period of time to settle into a balanced state before they can be measured. The question is: how long do supercomputer simulations need to run to be accurate? Speeding up processing time to elucidate highly complex study systems has been a common challenge, and it cannot be done simply by running computations in parallel, because the results from one time step matter for computing the next. Now, Shahrazad Malek from the Memorial University of Newfoundland, Canada, and colleagues have developed a practical partial solution to the problem of saving time when using supercomputer simulations that require bringing a complex system into a steady state of equilibrium and measuring its equilibrium properties. These findings are part of a special issue on "Advances in Computational Methods for Soft Matter Systems," recently published in EPJ E.

One solution is to run multiple copies of the same simulation, with randomised initial conditions for the positions and velocities of the molecules. By averaging the results over this ensemble of 10 to 50 runs, each run in the ensemble can be shorter than a single long run and still produce the same level of accuracy in the results. In this study, the authors go one step further and focus on an extreme case of examining an ensemble of 1,000 runs -- dubbed a swarm. This approach reduces the overall time required to estimate the value of the system's equilibrium properties.
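As a rough illustration of the ensemble idea (the relaxation process, parameter values, and equilibrium value below are invented for the sketch, not taken from the study), averaging the final states of many short, independently seeded runs recovers the equilibrium value that one much longer run would give:

```python
import random

def run_short_simulation(steps=200, dt=0.05, seed=None):
    """Toy stochastic relaxation toward an equilibrium value of 1.0
    (a stand-in for one short molecular-dynamics run)."""
    rng = random.Random(seed)
    x = rng.uniform(-2.0, 2.0)            # randomised initial condition
    for _ in range(steps):
        # drift toward equilibrium plus a small random kick
        x += dt * (1.0 - x) + 0.1 * rng.gauss(0.0, dt ** 0.5)
    return x

def swarm_estimate(n_runs=1000):
    """Average the final states over an ensemble ('swarm') of short,
    independently seeded runs to estimate the equilibrium value."""
    samples = [run_short_simulation(seed=i) for i in range(n_runs)]
    return sum(samples) / len(samples)

estimate = swarm_estimate()   # close to 1.0
```

The trade-off is that each run must still be long enough to forget its initial condition; the swarm only parallelises the statistics gathered after that point.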

Since this sort of massive multi-processor system is gradually becoming more common, this work contributes to increasing the techniques available to scientists. The solutions can be applied to computational studies in fields such as biochemistry, materials physics, astrophysics, chemical engineering, and economics.

The upper and lower series of pictures each show a simulation of a neutron star merger. In the scenario shown in the upper panels the star collapses after the merger and forms a black hole, whereas the scenario displayed in the lower row leads to an at least temporarily stable star. CREDIT Picture: Andreas Bauswein, HITS

Neutron stars are the densest objects in the Universe; however, their exact characteristics remain unknown. Using supercomputing simulations based on recent observations, an international team of scientists has managed to narrow down the size of these stars

When a very massive star dies, its core contracts. In a supernova explosion, the star's outer layers are expelled, leaving behind an ultra-compact neutron star. For the first time, the LIGO and Virgo Observatories have recently been able to observe the merger of two neutron stars and measure the mass of the merging stars. Together, the neutron stars had a mass of 2.74 solar masses. Based on these observational data, an international team of scientists from Germany, Greece, and Japan including HITS astrophysicist Dr. Andreas Bauswein has managed to narrow down the size of neutron stars with the aid of supercomputer simulations. The calculations suggest that the neutron star radius must be at least 10.7 km. The international research team's results have been published in "Astrophysical Journal Letters."

The Collapse as Evidence

In neutron star collisions, two neutron stars orbit around each other, eventually merging to form a star with approximately twice the mass of the individual stars. In this cosmic event, gravitational waves -- oscillations of spacetime whose signal characteristics are related to the mass of the stars -- are emitted. This event resembles what happens when a stone is thrown into water and waves form on the water's surface. The heavier the stone, the higher the waves.

The scientists simulated different merger scenarios for the recently measured masses to determine the radius of the neutron stars. In so doing, they relied on different models and equations of state describing the exact structure of neutron stars. Then, the team of scientists checked whether the calculated merger scenarios are consistent with the observations. The conclusion: All models that lead to the direct collapse of the merger remnant can be ruled out because a collapse leads to the formation of a black hole, which in turn means that relatively little light is emitted during the collision. However, different telescopes have observed a bright light source at the location of the stars' collision, which provides clear evidence against the hypothesis of collapse.

The results thereby rule out a number of models of neutron star matter, namely all models that predict a neutron star radius smaller than 10.7 kilometers. However, the internal structure of neutron stars is still not entirely understood. The radii and structure of neutron stars are of particular interest not only to astrophysicists, but also to nuclear and particle physicists because the inner structure of these stars reflects the properties of high-density nuclear matter found in every atomic nucleus.

Neutron Stars Reveal Fundamental Properties of Matter

While neutron stars have a slightly larger mass than our Sun, their diameter is only a few tens of kilometres. These stars thus contain a large mass in a very small space, which leads to extreme conditions in their interior. Researchers have been exploring these internal conditions for some decades already and are particularly interested in better narrowing down the radius of these stars, as their size depends on the unknown properties of dense matter.

The new measurements and new calculations are helping theoreticians better understand the properties of high-density matter in our Universe. The recently published study already represents scientific progress, as it has ruled out some theoretical models, but there are still a number of other models with neutron star radii greater than 10.7 km. However, the scientists have been able to demonstrate that further observations of neutron star mergers will continue to improve these measurements. The LIGO and Virgo Observatories have just begun taking measurements, and the sensitivity of the instruments will continue to increase over the next few years and provide even better observational data. "We expect that more neutron star mergers will soon be observed and that the observational data from these events will reveal more about the internal structure of matter," HITS scientist Andreas Bauswein concludes.

The reference gene is depicted by a black circle. The initial static global PIN is projected onto normal and cancer samples based on gene expression, and each function (red and green) is diffused through each PIN. In this case, the reference gene is assigned the green function in normal tissue and the red function in cancer, i.e., the gene gained the red function and lost the green function in cancer.

Changes in gene function in tumor samples correlate with patient survival

A given gene may perform a different function in breast cancer cells than in healthy cells due to changes in networks of interacting proteins, according to a new study published in PLOS Computational Biology.

Previous research has shown that a protein produced by a single gene can potentially have different functions in a cell depending on the proteins with which it interacts. Protein interactions can differ depending on context, such as in different tissues or developmental stages. For instance, one protein produced by a key fruit fly gene serves two separate functions over the course of fly development.

Building on this concept, Sushant Patkar of the University of Maryland and colleagues hypothesized that alterations in protein interaction networks in breast cancer cells may change the function of individual genes. To test this idea, they analyzed protein expression in 1,047 breast cancer tumors and 110 healthy breast tissue samples, using data from The Cancer Genome Atlas project.

The researchers developed a supercomputational framework to determine the structure of protein interaction networks in each sample and infer which genes performed different cellular functions within these networks. Then, they compared the number of genes inferred to perform each function in cancer cells relative to healthy cells.
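A minimal sketch can show how the same seed annotation diffuses differently through two sample-specific networks (the toy network, the simple random-walk-with-restart update, and all parameter values here are hypothetical illustrations, not the study's data or algorithm):

```python
def diffuse(adj, seed_scores, alpha=0.5, iterations=50):
    """Spread function scores over a protein-interaction network:
    each node keeps a fraction `alpha` of its seed annotation and
    absorbs the rest from its neighbours (random-walk-with-restart
    style diffusion on an adjacency matrix)."""
    n = len(adj)
    scores = seed_scores[:]
    for _ in range(iterations):
        new = []
        for i in range(n):
            nbrs = [j for j in range(n) if adj[i][j]]
            spread = sum(scores[j] / max(1, sum(adj[j])) for j in nbrs)
            new.append(alpha * seed_scores[i] + (1 - alpha) * spread)
        scores = new
    return scores

# Hypothetical 4-protein network; protein 0 carries the seed annotation.
normal = [[0, 1, 1, 0],
          [1, 0, 0, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 0]]
cancer = [[0, 0, 1, 0],       # the 0-1 interaction is lost in the tumour
          [0, 0, 0, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 0]]
seed = [1.0, 0.0, 0.0, 0.0]
normal_scores = diffuse(normal, seed)
cancer_scores = diffuse(cancer, seed)
# Protein 1 receives none of the function once its link to protein 0 is gone.
```

The point of the sketch is only that rewired interactions, not changed expression of the gene itself, can shift which nodes end up carrying a function.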

The analysis revealed that several functions were associated with more or fewer genes in cancer cells than in healthy cells, but not because of changes in the expression of these genes. Instead, their function changed due to changes in their protein interaction networks. The researchers also showed that profiling a patient's tumor tissue according to these functional shifts served as an effective method to predict their cancer subtype and their survival.

"While it is completely plausible for a gene to lose or acquire novel biological functions, examples of such changes have predominantly been observed in the context of evolution," Patkar says. "We have developed a bioinformatics approach that suggests that such changes might alternatively occur through changes in the interactions of proteins encoded by the gene."

Next, the team plans to validate their supercomputational framework as a tool to assess changes in gene function in other biological contexts, such as in other diseases and tissues.

Access to the freely available article in PLOS Computational Biology: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005793

Information attacks have emerged as a major concern of societies worldwide. They come under different names and in different flavors — fake news, disinformation, political astroturfing, influence operations, etc. And they may arrive as a component of hybrid warfare — in combination with traditional cyber-attacks (use of malware), and with conventional military action or covert physical attacks. (Photo Credit: Shutterstock)

A team of U.S. Army researchers recently joined an international group of scientists in Chernihiv, Ukraine to initiate a first-of-its-kind global science and technology research program to understand and ultimately combat disinformation attacks in cyberspace. 

Scientists from the Bulgarian Defense Institute in Sophia, Bulgaria; the Chernihiv National University of Technology in Chernihiv, Ukraine; and the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic" in Kyiv, Ukraine joined ARL researchers Nov. 14-15, at the kickoff meeting of the Cyber Rapid Analysis for Defense Awareness of Real-time Situation project. The participation of Bulgarian and Ukrainian institutions is funded by NATO Science for Peace and Security Programme, which promotes dialogue and practical cooperation between NATO member states and non-NATO partner nations -- in this case Ukraine -- based on scientific research, technological innovation and knowledge exchange.

Over the next three years, the group will develop theoretical foundations, methods, and approaches towards software tools for situational awareness that will enable a nation's defense forces to monitor cyberspace to detect malicious information injections and give timely notification of an information attack, said Dr. Alexander Kott, ARL chief scientist, who attended the meeting together with ARL's Dr. Brian Rivera, the chief of the Network Science Division. They'll also help create conditions necessary for decision making about prevention or timely response to adversarial disinformation injections or manipulations. Especially important in meeting these objectives will be the real world experiences pertaining to actual disinformation attacks directed against Ukraine.

"Information attacks have emerged as a major concern of societies worldwide. They come under different names and in different flavors -- fake news, disinformation, political astroturfing, influence operations, etc. And they may arrive as a component of hybrid warfare -- in combination with traditional cyber-attacks (use of malware), and with conventional military action or covert physical attacks. A particularly poignant example of a victim of such attacks has been Ukraine," Kott said.

He said the ARL scientists bring to this project a number of critical scientific elements. These include published research results -- theories and algorithms -- that explain and predict propagation of opinions and trust within a network, find untrustworthy sources within cyberspace, and detect false news. Much of this was developed in the context of ARL's extensive Network Science research in alliance with multiple academic institutions, and will help jump-start CyRADARS.

"ARL also operates a unique Open Campus business model. It enables scientists from both USA and other countries to conduct collaborative research at ARL. Within the context of CyRADARS, students and faculty from Ukraine and Bulgaria will be able to come to ARL and use ARL's Open Campus facilities and test beds while working on joint projects with ARL scientists," Kott said.

The research efforts will take place at all four institutions in a virtual, distributed networked laboratory that the project will create.

Each rectangular structure represents a heart cell in the supercomputer model. The color bursts depict propagating waves of calcium. Each cell is identical but exhibits a distinct pattern of calcium waves due to random ion channel gating. The team investigated how this randomness gives rise to sudden unpredictable heart arrhythmia. CREDIT PLOS Computational Biology/Mark A. Walker

Some heart disease patients face a higher risk of sudden cardiac death, which can happen when an arrhythmia -- an irregular heartbeat -- disrupts the normal electrical activity in the heart and causes the organ to stop pumping. Arrhythmias linked to sudden cardiac death are very rare, however, making it difficult to study how they occur and how they might be prevented.

To make it much easier to discover what triggers this deadly disorder, a team led by Johns Hopkins researchers constructed a powerful new supercomputer model that replicates the biological activity within the heart that precedes sudden cardiac death.

In a study published recently in PLOS Computational Biology, the team reported that the digital model has yielded important clues that could provide treatment targets for drug makers.

"For the first time, we have come up with a method for relating how distressed molecular mechanisms in heart disease determine the probability of arrhythmias in cardiac tissue," said Raimond L. Winslow, the Raj and Neera Singh Professor of Biomedical Engineering at Johns Hopkins and senior author of the study.

"The importance of this," said Winslow, who also is director of the university's Institute for Computational Medicine, "is that we can now quantify precisely how these dysfunctions drive and influence the likelihood of arrhythmias. That is to say, we now have a way of identifying the most important controlling molecular factors associated with these arrhythmias."

With this knowledge, Winslow said, researchers will be better able to develop treatments to keep some deadly heart rhythms from forming. "It could lead to better drugs that target the right mechanisms," Winslow said. "If you can find a molecule that blocks this particular action, then doing so will significantly reduce the probability of an arrhythmia, whereas other manipulations will have comparatively negligible effects."

The lead author of the study was Mark A. Walker, who worked on the problem while earning his Johns Hopkins doctoral degree under Winslow's supervision. Walker said he and his colleagues used supercomputer models to determine what activity was linked to arrhythmia at three biological levels: in the heart tissue as a whole, within individual heart cells, and within the molecules that make up the cells, including small proteins called ion channels that control the movement of calcium in the heart.

"Calcium is an important player in the functioning of a heart cell," said Walker, who now works as a computational biologist at the Broad Institute, a research center affiliated with Harvard University and the Massachusetts Institute of Technology. "There are a lot of interesting questions about how the handling of calcium in heart cells can sort-of go haywire."

Walker and his colleagues chose to focus on one intriguing question about this process: when heart cells possess too much calcium, which can happen in heart disease patients, how does this overload of calcium trigger an arrhythmia?

The team discovered that heart cells respond by expelling excess calcium, and in doing so, they generate an electrical signal. If by chance, a large enough number of these signals are generated at the same time, it can trigger an arrhythmia.

"Imagine if you have a bunch of people in a room, and you want to test the strength of the floor they are all standing on," Walker said. "It's not a very strong floor, so if there's enough force on it, it will break. You tell everyone that on the count of three, jump in the air. They all land on the floor, and you try to figure out what's the probability that the floor will break, given that everyone is going to jump at a slightly different time, people will weigh different amounts, and they might jump to different heights. These are all random variables."

Similarly, Walker said, random variables also exist in trying to determine the probability that enough calcium-related electrical signals will simultaneously discharge in the heart to set off a lethal arrhythmia. The circumstances that cause sudden cardiac death are so rare that they are very tough to predict.

"You're trying to figure out what the probability is," Walker said. "The difficulty of doing that is if the probability is one in a million, then you have to do tens of millions of trials to estimate that probability. One of the advances that we made in this work was that we were able to figure out how to do this with really just a handful of trials."
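The back-of-envelope behind that remark is standard Monte Carlo statistics, not code from the study: a naive estimate p-hat = k/n of an event probability p has relative standard error roughly sqrt((1 - p) / (n * p)), so the trial count needed grows like 1/p.

```python
def trials_needed(p, relative_error):
    """Independent trials needed so that the relative standard error
    sqrt((1 - p) / (n * p)) of the naive Monte Carlo estimate
    p_hat = k / n falls below `relative_error`."""
    return (1.0 - p) / (p * relative_error ** 2)

# A one-in-a-million event pinned down to 10% relative error:
n_trials = trials_needed(1e-6, 0.10)   # roughly 1e8 trials
```

Even a loose 25% relative error at p = 1e-6 still demands about 16 million trials, which is why reducing the trial count to "a handful" is a substantial advance.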

Walker and Winslow both cautioned that, at present, the new supercomputer model cannot predict which heart patients face a higher risk of sudden cardiac death. But they said the model should speed up the pace of heart research and the development of related medicines or treatments such as gene therapy. They said the model will be shared as free open-source software. 

Stefano Baroni, professor of theoretical condensed-matter physics

From the evolution of planets to electronics: Thermal conductivity plays a fundamental role in many processes

"Our goal? To radically innovate numerical simulations in the field of thermal transport to take on the great science and technology issues in which this phenomenon is so central. This new study, which has designed a new method with which to analyze heat transfer data more efficiently and accurately, is an important step in this direction".

This is how Stefano Baroni describes this new research performed at Trieste's SISSA by a group led by him, which has just been published in the Scientific Reports journal.

The research team focused on studying thermal transfer, the physical mechanism by which heat tends to flow from a warmer to a cooler body. Familiar to everyone, this process is involved in a number of fascinating scientific issues such as, for example, the evolution of the planets, which depends crucially on the cooling process within them. But it is also crucial to the development of various technological applications: from thermal insulation in civil engineering to cooling in electronic devices, from maintaining optimal operating temperatures in batteries to nuclear plant safety and storage of nuclear waste.

"Studying thermal transfer in the laboratory is complicated, expensive and sometimes impossible, as in the case of planetology. Numerical simulation, on the other hand, enables us to understand the hows and whys of such phenomena, allowing us to precisely calculate physical quantities which are frequently not accessible in the lab, thereby revealing their deepest mechanisms", explains Baroni. The problem is that until a short time ago it was not possible to do supercomputing in this field with the same sophisticated quantum methodologies used so successfully for many other properties: "The equations needed to compute heat currents from the molecular properties of materials were not known. Our research group overcame this obstacle a few years ago by formulating a new microscopic theory of heat transfer."

But a further issue needed resolving. The simulation times required to describe the heat transfer process are hundreds of times longer than those currently used to simulate other properties. And this understandably posed a number of problems.

"With this new research, bringing together concepts demonstrated by previous theories - especially that known as the Green-Kubo theory - with our knowledge of the quantum simulation field we understood how to analyse the data to simulate heat conductivity in a sustainable way in terms of supercomputer resources and, consequently, cost. And this opens up extremely important research possibilities and potential applications for these studies".
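In the Green-Kubo picture, the thermal conductivity is proportional to the time integral of the heat-flux autocorrelation function, kappa = V / (3 kB T^2) * integral of <J(0)·J(t)> dt. The sketch below shows only the bare mechanics of that recipe on a synthetic, exponentially correlated flux signal; the signal, truncation lag, and unit choices are illustrative, and the group's contribution is precisely a more careful way of analysing such data than this naive version.

```python
import math
import random

def green_kubo_kappa(flux, dt, volume, temperature,
                     max_lag=100, k_B=1.380649e-23):
    """Naive Green-Kubo estimate of thermal conductivity:
    kappa = V / (3 k_B T^2) * integral of <J(0) J(t)> dt.
    The autocorrelation is averaged over all time origins and the
    integral is truncated at `max_lag` steps (trapezoidal rule)."""
    n = len(flux)
    acf = []
    for lag in range(max_lag):
        pairs = [flux[i] * flux[i + lag] for i in range(n - lag)]
        acf.append(sum(pairs) / len(pairs))
    integral = dt * (acf[0] / 2 + sum(acf[1:-1]) + acf[-1] / 2)
    return volume / (3.0 * k_B * temperature ** 2) * integral

# Synthetic, exponentially correlated heat-flux signal (illustrative only).
rng = random.Random(0)
tau, dt = 1.0, 0.1
flux, j = [], 0.0
for _ in range(20000):
    j = math.exp(-dt / tau) * j + rng.gauss(0.0, 1.0)
    flux.append(j)

kappa = green_kubo_kappa(flux, dt, volume=1.0, temperature=300.0)
```

The long-simulation problem the article describes shows up here directly: the autocorrelation tail is noisy, so the naive truncated integral converges slowly unless the trajectory is very long.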

One curiosity, which Baroni reveals: "The technique we have formulated is adapted from a methodology used in completely different sectors, such as electronic engineering, to study the digitization of sound, and quantitative social sciences and economics, to study the dynamics of complex processes such as financial markets, for example. It is interesting to see how unexpected points of contact and cross-fertilization can sometimes arise amongst such different fields".

Amplitude of the displacement field after a train passes on the track. The left-hand figure corresponds to a simulation with homogeneous ballast and the right-hand image to a simulation with heterogeneous ballast. © Lucio de Abreu Corrêa, Laboratoire de Mécanique des Sols, Structures et Matériaux (CNRS/CentraleSupélec).

SNCF engineers have been using mathematical models for many years to simulate the dynamic behavior of railways. These models have not been able to take into account large portions of the track and have been extremely limited at modelling ballast, the gravel layer located under railway tracks. This is why SNCF Innovation & Recherche asked for help from specialists in wave propagation in all types of media and at varied scales: CNRS and INSA Strasbourg researchers. Together, they have shown that a large part of the energy introduced by a passing train is trapped by the ballast. Their work, published in the November issue of Computational Mechanics, shows that this trapping phenomenon, which is very dependent on train speed, could cause accelerated ballast degradation in railway tracks.

SNCF engineers currently have two ways to take ballast into account when attempting to understand how railway tracks behave as a train passes. One is high-level modeling of the interactions between each "grain"; the other is a simpler model where the ballast is represented as a homogeneous and continuous whole. Though taking into account interactions between grains allows wear mechanisms to be demonstrated locally, it becomes too complex to be applied to the entire track or to the passage of an entire train. By contrast, the simple models can be used for large portions of track but cannot really tell us what happens in the gravel layer. In addition, measured vibrations near the tracks were much lower than what calculations predicted. In this context, the question is how to model an entire train passing, over several meters or even kilometers, while retaining the specifics of the ballast's mechanical behavior. Something was missing in the modeling to be able to describe the influence of a passing train on the immediate surroundings of the railway.

The researchers have proposed a new mechanism that helps explain why vibrations are lower than predicted as the distance from the track increases. They stopped considering the ballast as a homogeneous medium and started considering it as a heterogeneous medium. This time, the mathematical model and physical measurements agree: they have shown that a large part of the energy introduced by a train passing is trapped in the heterogeneous ballast layer. This trapping phenomenon, very dependent on train speed, could cause degradation in the ballast layer, as the energy provided by the train passing dissipates by the grains rubbing together.

This work therefore opens paths to a better understanding of how railway tracks behave as a train passes. By understanding where in the tracks the ballast traps the most energy, these results open new perspectives on increasing the lifetime of railway tracks and reducing maintenance costs.

Photo: Rodion Kutsaev (@frostroomhead)

Neutron stars are made out of cold ultra-dense matter. How this matter behaves is one of the biggest mysteries in modern nuclear physics. Researchers developed a new method for measuring the radius of neutron stars which helps them to understand what happens to the matter inside the star under extreme pressure.

A new method for measuring neutron star size was developed in a study led by a high-energy astrophysics research group at the University of Turku. The method relies on modeling how thermonuclear explosions taking place in the uppermost layers of the star emit X-rays to us. By comparing the observed X-ray radiation from neutron stars to the state-of-the-art theoretical radiation models, researchers were able to put constraints on the size of the emitting source. This new analysis suggests that the neutron star radius should be about 12.4 kilometres.

– Previous measurements have shown that the radius of a neutron star is circa 10–16 kilometres. We constrained it to be around 12 kilometres with about 400 metres accuracy, or maybe 1,000 metres if one wants to be really sure. Therefore, the new measurement is a clear improvement over previous ones, says Doctoral Candidate Joonas Nättilä, who developed the method.

The new measurements help researchers to study what kind of nuclear-physical conditions exist inside extremely dense neutron stars. Researchers are particularly interested in determining the equation of state of the neutron matter, which shows how compressible the matter is at extremely high densities.

– The density of neutron star matter is circa 100 million tons per cubic centimetre. At the moment, neutron stars are the only objects in nature with which these types of extreme states of matter can be studied, says Juri Poutanen, the leader of the research group.

The new results also help to understand the recently discovered gravitational waves that originated from the collision of two neutron stars. That is why the LIGO/VIRGO consortium that discovered these waves was quick to compare their recent observations with the new constraints obtained by the Finnish researchers.

– The specific shape of the gravitational wave signal is highly dependent on the radii and the equation of state of the neutron stars. It is very exciting how these two completely different measurements tell the same story about the composition of neutron stars. The next natural step is to combine these two results. We have already been having active discussions with our colleagues on how to do this, says Nättilä.

  1. Using machine learning algorithms, German team finds ID microstructure of stock useful in financial crisis
  2. Russian prof Sergeyev introduces methodology working with numerical infinities and infinitesimals; opens new horizons in a supercomputer called Infinity
  3. ASU, Chinese researchers develop human mobility prediction model that offers scalability, a more practical approach
  4. Marvell Technology buys rival chipmaker Cavium for $6 billion in a cash-and-stock deal
  5. WPI researchers use machine learning to detect when online news is a paid-for pack of lies
  6. Johns Hopkins researchers develop model estimating the odds of events that trigger sudden cardiac death
  7. Young Brazilian researcher creates supercomputer model to explain the origin of Earth's water
  8. With launch of new night sky survey, UW researchers ready for era of big data astronomy
  9. CMU software assembles RNA transcripts more accurately
  10. Russian scientists create a prototype neural network based on memristors
  11. Data Science Institute at Columbia develops statistical method that makes better predictions
  12. KU researchers untangle vexing problem in supercomputer-simulation technology
  13. University of Bristol launches £43 million Quantum Technologies Innovation Centre
  14. Utah researchers develop milestone for ultra-fast communication
  15. Pitt supercomputing helps doctors detect acute kidney injury earlier to save lives
  16. American University prof builds models to help solve few-body problems in physics
  17. UW prof helps supercompute activity around quasars, black holes
  18. Nottingham's early warning health, welfare system could save UK cattle farmers millions of pounds, reduce antibiotic use
  19. Osaka university researchers roll the dice on perovskite interfaces
  20. UM biochemist Prabhakar produces discovery that lights path for Alzheimer's research
