Long-runout: This 1994 landslide in Mesa County, Colorado, contained 30 million cubic meters of rock and ran out for 2.8 miles. New research helps explain how these large slides are able to run out so far. Jon White/Colorado Geological Survey

New research shows why some large landslides travel greater distances across flat land than scientists would generally expect, sometimes putting towns and populations far from mountainsides at risk.

A new study may finally explain why some landslides travel much greater distances than scientists would normally expect. A team of researchers used a sophisticated supercomputer model to show that vibrations generated by large slides can cause tons of rock to flow like a fluid, enabling the rocks to rumble across vast distances.

The research, by geoscientists at Brown University, Purdue University and the University of Southern California, is described in the Journal of Geophysical Research: Earth Surface.

The “runout” distance of most landslides – the distance debris travels once it reaches flat land – tends to be about twice the vertical distance that the slide falls. So if a slide breaks loose a half-mile vertically up a slope, it can be expected to run out about a mile. But “long-runout” landslides, also known as sturzstroms, are known to travel horizontal distances 10 to 20 times further than they fall, according to Brandon Johnson, an assistant professor of earth, environmental and planetary sciences at Brown and the new study’s lead author.

“There are a few examples where these slides have devastated towns, even when they were located at seemingly safe distances from a mountainside,” said Johnson, who started studying these slides as a student of Jay Melosh, distinguished professor of earth, atmospheric and planetary sciences and physics at Purdue University.

One such example was a slide in 1806 that slammed into the village of Goldau, Switzerland, and claimed nearly 500 lives.

“It has been known for more than a century that very large, dry landslides travel in a fluid-like manner, attaining speeds of more than 100 miles per hour, traveling tens to hundreds of kilometers from their sources and even climbing uphill as they overwhelm surprisingly large areas,” said Melosh, who was a part of the research team. “However, the mechanism by which these very dry piles of rock obtained their fluidity was a mystery.”

Scientists developed several initial hypotheses. Perhaps the slides were floating on a cushion of air, or perhaps they ran atop a layer of water or ice, which would lower the friction they encountered. But the fact that these types of landslides also occur on dry, airless bodies like the Moon cast doubt on those hypotheses.

In 1979, Melosh proposed a mechanism called “acoustic fluidization” to explain these long runouts. Slides of sufficient size, Melosh proposed, would generate vibrational waves that propagate through the rock debris. Those vibrations reduce the effect of friction acting on the slide, enabling it to travel further than smaller slides, which don’t generate as much vibration. The mechanism is similar to the way a car is more likely to slide if it’s bouncing down a bumpy road as opposed to rolling along a smooth one.

In 1995, Charles Campbell from the University of Southern California created a supercomputer model that was able to replicate the behavior of long-runout slides using only the dynamic interactions between rocks. No special circumstances like water or air cushions were required. However, due to the limitations of supercomputers at the time, he was unable to determine what mechanism was responsible for the behavior.

“The model showed that there was something about rocks, when you get a lot of them together, that causes them to slide out further than you expect,” Johnson said. “But it didn’t tell us what was actually happening to give us this lower friction.”

For this new study, Johnson was able to resurrect that model, tweak it a bit, and run it on a modern workstation to capture the dynamics in finer detail. The new model showed that, indeed, vibrations do reduce the effective friction acting on the slide.

The amount of friction acting on a slide depends in part on gravity pulling it downward. The same gravitational force that accelerates the slide as it moves downslope tends to slow it down when it reaches flat land. But the model showed that vibrational waves counteract the gravitational force for brief moments. The rocks tend to slide more when the vibration reduces the friction effect of the gravitational force. Because the vibrational waves affect different rocks in the slide at different times, the entire slide tends to move more like a fluid.
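The asymmetry can be illustrated with a small toy calculation in Python (an illustrative sketch, not the authors' simulation; the friction coefficient, slope angle and vibration amplitude below are arbitrary). A block on an incline is held in place because friction exceeds the downslope pull, but an imposed vibration periodically weakens the effective normal force, so the block slips during the low-pressure part of each cycle and locks up during the high-pressure part.

```python
import numpy as np

def travel(mu=0.7, slope_deg=30.0, vib_amp=0.0, vib_freq=20.0,
           g=9.81, t_end=10.0, dt=1e-4):
    """Downslope distance travelled by a block on an incline.

    Friction (coefficient mu) normally exceeds the downslope pull, so
    the block stays put.  A vibration of relative amplitude vib_amp
    periodically weakens the normal force, letting the block slip
    during the low-pressure part of each cycle.  Illustrative toy
    model only, not the study's simulation.
    """
    theta = np.radians(slope_deg)
    drive = g * np.sin(theta)                      # downslope pull per unit mass
    x = v = 0.0
    for t in np.arange(0.0, t_end, dt):
        n_eff = max(g * np.cos(theta) *
                    (1.0 - vib_amp * np.sin(2.0 * np.pi * vib_freq * t)), 0.0)
        fric = mu * n_eff                          # frictional resistance
        if v > 0.0:
            v = max(v + (drive - fric) * dt, 0.0)  # kinetic friction opposes motion
        elif drive > fric:                         # static friction overcome
            v = (drive - fric) * dt
        x += v * dt
    return x

print(f"no vibration  : {travel(vib_amp=0.0):6.2f} m")  # never moves
print(f"with vibration: {travel(vib_amp=0.8):6.2f} m")  # intermittent slip
```

Without vibration the block never moves; with vibration it creeps steadily downslope, which is the sense in which vibrations lower the effective friction.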

The results of the new model are consistent with the acoustic fluidization idea that Melosh proposed nearly 40 years ago, before supercomputer power was adequate to confirm it.

“Campbell and I had a long-standing friendly rivalry and he did not believe my proposed acoustic fluidization mechanism could possibly explain his findings in the simulations,” Melosh said. “As a result of Brandon’s careful analysis of the interactions of the rock fragments in the simulations, we’ve now put to rest the debate, and it was a lot of fun for the three of us to work together.”

Ultimately, the researchers hope this work might be a step toward better predicting these types of potentially devastating landslides.

“I would suggest that understanding why these landslides run out so far is really a first step to understanding when and where they might occur in the future,” Johnson said. “Our work suggests that all you need is enough volume to get these long runouts. This leads to the somewhat unsatisfying conclusion that these slides can happen nearly anywhere.”

The results may also help scientists understand other types of events. For example, acoustic fluidization might play a role in slippage along fault lines, which contributes to large earthquakes.

“This emergent phenomenon, arising from the simple interactions of individual particles, is likely at play whenever large movements of rock occur,” Johnson said.

The research was supported in part by a grant from NASA, which supported Johnson’s graduate work.

Society for Industrial and Applied Mathematics recognizes distinguished work through Fellows Program

The Society for Industrial and Applied Mathematics (SIAM) has announced the 2016 Class of SIAM Fellows. These distinguished members were nominated for their exemplary research as well as outstanding service to the community. Through their contributions, SIAM Fellows help to advance the fields of applied mathematics and computational science. These individuals will be recognized for their achievements at an awards ceremony during the SIAM Annual Meeting, happening July 11-15, 2016 in Boston, MA.

SIAM congratulates these 30 esteemed members of the community, listed below in alphabetical order:

  • Linda J. S. Allen, Texas Tech University
  • Chandrajit Bajaj, University of Texas at Austin
  • Egon Balas, Carnegie Mellon University
  • Gang Bao, Zhejiang University
  • Dwight Barkley, University of Warwick
  • John J. Benedetto, University of Maryland, College Park
  • Gregory Beylkin, University of Colorado Boulder
  • Paul C. Bressloff, University of Utah
  • Xiao-Chuan Cai, University of Colorado Boulder
  • Thomas F. Coleman, University of Waterloo
  • Clint N. Dawson, University of Texas at Austin
  • Maria J. Esteban, Centre national de la recherche scientifique
  • Michael Hintermüller, Humboldt-Universität zu Berlin
  • Michael Holst, University of California, San Diego
  • Bo Kågström, Umeå University
  • Andrew Knyazev, Mitsubishi Electric Research Laboratories
  • Alan J. Laub, University of California, Los Angeles
  • Xiaoye Sherry Li, Lawrence Berkeley National Laboratory
  • William M. McEneaney, University of California, San Diego
  • James G. Nagy, Emory University
  • Helena J. Nussenzveig Lopes, Universidade Federal do Rio de Janeiro
  • Cynthia A. Phillips, Sandia National Laboratories
  • Michael C. Reed, Duke University
  • Arnd Scheel, University of Minnesota
  • Christoph Schwab, ETH Zürich
  • Endre Süli, University of Oxford
  • Françoise Tisseur, University of Manchester
  • Sabine Van Huffel, KU Leuven
  • David P. Williamson, Cornell University
  • Xunyu Zhou, Columbia University and University of Oxford
CAPTION Conventional protein X-ray diffraction images are processed to remove the sharp Bragg reflections, producing noisy images of diffuse intensity (left). Images from multiple crystal orientations are integrated into a 3-D dataset in which the signal is statistically averaged (center, showing a level surface of diffuse intensity in three-dimensions). The 3-D data show greatly enhanced diffuse features, as seen in simulated diffraction images obtained using the integrated data (right, compare to left panel). The 3-D data are used to validate and refine models of protein motions.

Diffuse data integrated in 3-D to reveal dynamics in protein crystals

A new 3-D modeling and data-extraction technique is about to transform the field of X-ray crystallography, with potential benefits for both the pharmaceutical industry and structural biology. A paper this week in Proceedings of the National Academy of Sciences describes the improved blending of experimentation and supercomputer modeling, extracting valuable information from diffuse, previously discarded data.

"The accomplishment here is to demonstrate that we can analyze conventionally collected protein crystallography data and pull out background features that are usually thrown away," said Michael E. Wall, a scientist in the Computer, Computational, and Statistical Sciences Division at Los Alamos National Laboratory and co-corresponding author of the paper with James S. Fraser, Assistant Professor in the Department of Bioengineering and Therapeutic Sciences at University of California, San Francisco.

"What's been reclaimed is information about how the protein moves, the more detailed dynamics within the sample," Wall said. Traditional crystallography data provide a blurred picture of how the protein moves, like overlaying the frames of a movie. Our approach sharpens the picture, he noted, providing information about which atoms are moving in a concerted way, such as ones on the swinging arm of the protein or on opposite sides of a hinge opening or closing, and which ones are moving more independently.

"This is a method that will eventually change the way X-ray crystallography is done, bringing in this additional data stream in addition to the sharply peaked Bragg scattering, which is the traditional analysis method," Wall said We're working toward using both data sets simultaneously to increase the clarity of the crystallography model and more clearly map how proteins are moving."

In the work described in the paper, the 3-D diffuse scattering data were measured from crystals of the enzymes cyclophilin A (a type of protein) and trypsin (an enzyme that acts to degrade protein) at Stanford Synchrotron Radiation Lightsource (SSRL), a U.S. Department of Energy (DOE) Office of Science User Facility. The measurements were extracted and movements were modeled using supercomputers at Los Alamos National Laboratory, Lawrence Berkeley National Laboratory, and the University of California, San Francisco. The ongoing computational work includes simulations on Conejo and Mustang, supercomputing clusters in Los Alamos National Laboratory's Institutional Computing Program.
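The core bookkeeping of the integration step can be sketched as follows (a minimal illustration, not the authors' pipeline; the function name, grid bounds and synthetic frames are made up). Background-subtracted diffuse intensities from many crystal orientations, each pixel tagged with its reciprocal-space coordinate, are accumulated on a common 3-D grid and averaged.

```python
import numpy as np

def accumulate_diffuse(grid_shape, hkl_min, hkl_max, frames):
    """Average diffuse intensity from many detector frames on a 3-D grid.

    `frames` yields (hkl, intensity) pairs: an (N, 3) array of
    reciprocal-space coordinates for each pixel (after Bragg-peak
    removal) and the matching intensity values.  Sketch only -- a real
    pipeline also handles scaling, symmetry averaging and outliers.
    """
    lo = np.asarray(hkl_min, dtype=float)
    span = np.asarray(hkl_max, dtype=float) - lo
    total = np.zeros(grid_shape)
    counts = np.zeros(grid_shape)
    for hkl, intensity in frames:
        # Map continuous (h, k, l) coordinates to voxel indices.
        idx = ((hkl - lo) / span * np.asarray(grid_shape)).astype(int)
        inside = np.all((idx >= 0) & (idx < grid_shape), axis=1)
        np.add.at(total, tuple(idx[inside].T), intensity[inside])
        np.add.at(counts, tuple(idx[inside].T), 1.0)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(counts > 0, total / counts, np.nan)

# Synthetic example: two "frames" of random pixels and intensities.
rng = np.random.default_rng(0)
frames = [(rng.uniform(-5, 5, (1000, 3)), rng.random(1000)) for _ in range(2)]
grid = accumulate_diffuse((32, 32, 32), (-5, -5, -5), (5, 5, 5), frames)
print(grid.shape, float(np.nanmean(grid)))
```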

Averaging relatively weak features in the data improves the clarity of the imaging of diffuse features, which is valuable given researchers' increasing interest in the role of protein motions. In designing a new drug, for example, one seeks to produce a small molecule that binds to a functional site on a specific protein and blocks its activity. With better modeling, adapted to more closely match experimental diffuse data, the steps toward a new pharmaceutical product can be reduced by more accurately accounting for protein motions in drug interactions.

The new approach can improve new and ongoing experiments and could potentially be used to explore data from previously conducted crystallography experiments if the level of background noise is not too severe.

"Data coming off modern X-ray sources with the latest detectors are tending to be the type of data we can best analyze," Wall said, but "some of the older data could be reexamined as well."

With this new method, scientists can experimentally validate predictions of detailed models of protein motions, such as computationally expensive all-atom molecular dynamics simulations, and less expensive "normal mode analysis," in which the protein motions are modeled as vibrations in a network of atoms interconnected by soft springs.
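As a rough illustration of the "soft springs" picture, the sketch below assembles an elastic-network Hessian for a set of atom coordinates and diagonalizes it; the low-frequency eigenvectors describe the collective motions. This is a generic anisotropic-network calculation on made-up coordinates, not the specific models validated in the paper.

```python
import numpy as np

def elastic_network_modes(coords, cutoff=8.0, k=1.0):
    """Normal modes of an anisotropic elastic-network model.

    Atoms closer than `cutoff` are joined by springs of stiffness `k`;
    the Hessian of the harmonic energy is assembled and diagonalized,
    and its low-frequency eigenvectors describe collective motions.
    Generic sketch with made-up coordinates, not the paper's models.
    """
    n = len(coords)
    hessian = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -k * np.outer(d, d) / r2          # 3x3 off-diagonal block
            hessian[3*i:3*i+3, 3*j:3*j+3] = block
            hessian[3*j:3*j+3, 3*i:3*i+3] = block
            hessian[3*i:3*i+3, 3*i:3*i+3] -= block    # diagonal accumulates
            hessian[3*j:3*j+3, 3*j:3*j+3] -= block
    evals, evecs = np.linalg.eigh(hessian)
    return evals, evecs    # first six eigenvalues are ~0 (rigid-body modes)

# Toy example: random "C-alpha" positions instead of a real structure.
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 20.0, (50, 3))
evals, evecs = elastic_network_modes(coords)
print("lowest nontrivial eigenvalues:", np.round(evals[6:10], 4))
```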

A key finding is that normal-mode models of both cyclophilin A and trypsin resemble the diffuse data; this creates an avenue for adjusting detailed models of protein motion to better agree with the data. "We are planning to add in future a refinement step to increase accuracy even more," Wall said. A more detailed model provides a more direct connection between protein structure and biological mechanisms, which is desired for pharmaceutical applications.

This use of diffuse scattering data illustrates the potential to increase understanding of protein structure variations in any X-ray crystallography experiment. Said Wall, "This represents a significant step toward moving diffuse scattering analysis into the mainstream of structural biology."

Physical and biological models often have hundreds of inputs, many of which may have a negligible effect on a model's response. Establishing which parameters can be fixed at nominal values without significantly affecting model outputs is often challenging; sometimes these parameters cannot be simply discerned from the outputs. Thus, verifying that a parameter is noninfluential is computationally demanding and expensive.

In a paper publishing this week in the SIAM/ASA Journal on Uncertainty Quantification, authors Mami Wentworth, Ralph Smith, and H.T. Banks apply robust parameter selection and verification techniques to a dynamic HIV model. "Biological and physical models, such as the HIV model, often have a large number of parameters and initial conditions that cannot be measured directly, and instead must be inferred through statistical analysis," says Smith. "For this to be successfully accomplished, measured responses must adequately reflect changes in these inputs."

The authors implement global sensitivity analysis to identify input subsets, fix noninfluential inputs, and pinpoint those with the most potential to affect model response. "The role of global sensitivity analysis is to isolate those parameters that are influential and that can and must be inferred through a fit to data," says Smith. "Noninfluential parameters are fixed at nominal values for subsequent analysis." Discerning influential parameters from noninfluential ones enables the authors to reduce the parameter dimensions and look more closely at the portions of the model that affect HIV treatment plans.
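The screening idea can be sketched with a Morris-style elementary-effects calculation (a generic sketch, not the authors' implementation; the toy model and parameter bounds are hypothetical): each parameter is nudged one at a time from many random base points, and parameters with a small mean absolute effect on the output become candidates for fixing at nominal values.

```python
import numpy as np

def morris_mu_star(model, bounds, n_traj=50, delta=0.1, rng=None):
    """Crude Morris-style screening: mean absolute elementary effect.

    For each random base point, each parameter is nudged by `delta`
    (in normalized [0, 1] coordinates) one at a time and the change in
    the model output is recorded.  Parameters with small mu* are
    candidates for fixing.  Generic sketch, not the authors' code.
    """
    rng = rng or np.random.default_rng(0)
    bounds = np.asarray(bounds, dtype=float)        # shape (p, 2)
    p = len(bounds)
    ee = np.zeros((n_traj, p))
    for t in range(n_traj):
        base = rng.uniform(0.0, 1.0 - delta, p)     # normalized base point
        y0 = model(bounds[:, 0] + base * (bounds[:, 1] - bounds[:, 0]))
        for i in range(p):
            x = base.copy()
            x[i] += delta
            y1 = model(bounds[:, 0] + x * (bounds[:, 1] - bounds[:, 0]))
            ee[t, i] = (y1 - y0) / delta
    return np.abs(ee).mean(axis=0)                  # mu* per parameter

# Toy model with one influential and one nearly inert parameter.
def toy_model(theta):
    a, b, c = theta
    return a ** 2 + 10.0 * b + 0.01 * c

mu_star = morris_mu_star(toy_model, bounds=[[0, 1], [0, 1], [0, 1]])
print(dict(zip(["a", "b", "c"], np.round(mu_star, 3).tolist())))
```

In this toy example the screening correctly flags `b` as influential and `c` as negligible.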

Using data from patients who were part of a clinical study, the authors verify the HIV model's predictive capability. A system of ordinary differential equations (ODEs) describes HIV dynamics in the model, including uninfected and infected cells in both activated and resting states. The selection and verification techniques enhance the model's reliability, and are more effective than the local sensitivity-based method originally performed on the HIV model, which is used as a point of comparison. "Models of this type [those analyzed by local sensitivity-based methods] exhibit highly nonlinear dependencies between parameters and responses, which limits the applicability of local sensitivity analysis," says Smith. Ultimately, more reliable models facilitate the development of enhanced treatment methods that increase T-cell counts in patients with HIV.
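For readers unfamiliar with this class of model, the sketch below integrates a deliberately simplified target-cell/infected-cell/virus ODE system with SciPy. The actual model in the paper also tracks activated and resting compartments, and the parameter values here are illustrative rather than fitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hiv_rhs(t, y, lam, d, beta, delta, p, c):
    """Simplified target-cell/infected-cell/virus ODE system.

    A stand-in for the richer model in the paper (which also tracks
    activated and resting compartments); parameter values below are
    illustrative, not fitted to data.
    """
    T, I, V = y
    dT = lam - d * T - beta * T * V      # uninfected target cells
    dI = beta * T * V - delta * I        # productively infected cells
    dV = p * I - c * V                   # free virus
    return [dT, dI, dV]

params = dict(lam=1e4, d=0.01, beta=2e-7, delta=0.7, p=100.0, c=13.0)
sol = solve_ivp(hiv_rhs, (0.0, 200.0), y0=[1e6, 0.0, 1e-3],
                args=tuple(params.values()), method="LSODA",
                t_eval=np.linspace(0.0, 200.0, 400))
print("final T cells:", sol.y[0, -1], " final viral load:", sol.y[2, -1])
```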

Determining their model's influential factors allows Wentworth et al. to fix the noninfluential parameters and minimize the parameter dimensions for future uncertainty quantification. Their selection techniques are essential for achieving better control of drug therapy. Ultimately, the authors seek to better comprehend HIV dynamics and eventually establish optimal treatment strategies. They chose to employ an HIV model because of Banks' familiarity with the model type. "He developed it and has substantial experience employing it to characterize HIV dynamics and develop potential treatment regimes," says Smith.

The authors' parameter selection and verification techniques are applicable to multiple types of physical and biological models, including those of behavior patterns and other diseases. "The techniques illustrated here are general in nature and can be applied to a wide range of biological and physical applications modeled by systems of ordinary differential equations or partial differential equations," says Smith.

Researchers have used modeling to estimate the true impact of infectious diseases such as swine flu, because underreporting means that surveillance during a pandemic can miss the vast majority of infections that occur in the population.

New research published in PLOS Computational Biology by Mikhail Shubin et al. from the National Institute for Health and Welfare uses simulations to estimate the effect of the swine flu pandemic in Finland. This research offers a platform for assessing the severity of flu seasons at various levels of the healthcare system, where previously the number of infected individuals has been uncertain.

The researchers built a low-scale simulation model of influenza spread in the Finnish population that accounts for transmission, the impact of vaccination, outcomes of varying severity and imperfect detection of flu.
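A minimal discrete-time stochastic SIR sketch (not the study's model, which is far more detailed) shows how those ingredients fit together: transmission and recovery, a vaccination campaign that moves susceptibles out of circulation, and a detection probability that lets surveillance see only a fraction of infections. All population sizes, rates and dates below are placeholders.

```python
import numpy as np

def simulate_season(pop=5_500_000, beta=0.25, gamma=0.2, days=365,
                    vacc_per_day=0, vacc_start=120, p_detect=0.05, seed=0):
    """Discrete-time stochastic SIR run with vaccination and imperfect detection.

    Crude placeholder sketch: the published model of Finland is far
    more detailed (age structure, severity levels, healthcare contacts).
    Returns the total number of infections and the number detected.
    """
    rng = np.random.default_rng(seed)
    S, I, R = pop - 50, 50, 0
    total_inf = seen = 0
    for day in range(days):
        new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / pop))  # transmission
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))           # recovery
        new_vac = min(S - new_inf, vacc_per_day) if day >= vacc_start else 0
        S -= new_inf + new_vac
        I += new_inf - new_rec
        R += new_rec + new_vac
        total_inf += new_inf
        seen += rng.binomial(new_inf, p_detect)                   # surveillance
    return total_inf, seen

for label, v in [("no vaccination  ", 0), ("with vaccination", 40_000)]:
    n_inf, n_seen = simulate_season(vacc_per_day=v)
    print(f"{label}: {int(n_inf):,} infections, {int(n_seen):,} detected")
```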

The study shows that the impact on Finland was minor, as less than 10% of the population was infected with swine flu during the first two seasons in 2009-2011, with the highest incidences of the disease initially occurring in younger people.

Shubin et al.'s research also measures the impact of the vaccination campaign, in which approximately half of the Finnish population was vaccinated by February 2010. They show that vaccination significantly reduced the transmissibility of the virus: the proportion of the population infected during the second season was only 3%. Without the campaign, the research indicates, the second season could have started earlier and caused a larger outbreak, leading to 4-8 times more infections overall.

The study emphasizes that statistical modeling and simulation can be used to evaluate incomplete infectious disease surveillance data in emerging infections.
