
Scientists at Virginia Commonwealth University have developed supercomputer models that can simulate the recovery of the immune system in patients undergoing stem cell transplants. In two recent studies, the researchers reinforce the potential of using DNA sequencing and computer modeling to predict which stem cell transplant recipients might suffer complications such as graft-versus-host disease, a condition in which the donor's immune system attacks the recipient's body. The studies build upon prior research by scientists at VCU Massey Cancer Center, the VCU Center for the Study of Biological Complexity and VCU's Department of Psychiatry and Statistical Genomics that found evidence that the immune system may be modeled as a dynamical system.

Dynamical systems describe phenomena in which the relationships between the variables in the system determine its future state. Some systems, such as a swinging pendulum on a clock, can have relatively few variables, making their future states fairly easy to predict. Other systems, such as the weather, require advanced forecasting models due to the large number of variables affecting their present and future conditions. The ability to "forecast" immune system recovery could potentially allow doctors to better personalize post-transplant care for improved patient outcomes.
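As a minimal illustration of the concept (not drawn from the VCU studies), the clock pendulum mentioned above can be written as a two-variable dynamical system and stepped forward in time; the parameter values in the sketch below are arbitrary.

```python
import math

# Minimal sketch of a dynamical system: a frictionless pendulum.
# The next state (angle, angular velocity) follows entirely from the
# current state and the update rule. All parameter values are arbitrary.
g, arm_length, dt = 9.81, 1.0, 0.001   # gravity (m/s^2), arm length (m), time step (s)
theta, omega = 0.5, 0.0                # initial angle (rad) and angular velocity (rad/s)

for _ in range(5000):                  # advance the system for 5 simulated seconds
    alpha = -(g / arm_length) * math.sin(theta)  # angular acceleration
    omega += alpha * dt
    theta += omega * dt

print(f"angle after 5 s: {theta:.3f} rad")
```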

The first study, published in the journal Biology of Blood and Marrow Transplantation, sequenced the DNA of 34 stem cell transplant donor-recipient pairs and used the resulting information in an advanced computer model to simulate how the recipient's T cell repertoire would recover following transplantation. The T cell repertoire is the army of immune system cells called T cells that a person develops in response to pathogens and other antigens encountered in their environment.

"This study is the first to simulate the growth of the T cell repertoire following transplantation using variables that aren't accounted for in typical HLA donor-recipient matching," says Amir Ahmed Toor, M.D., hematologist-oncologist in the Bone Marrow Transplantation Program and member of the Developmental Therapeutics research program at VCU Massey Cancer Center as well as associate professor in the Division of Hematology, Oncology and Palliative Care at the VCU School of Medicine. "Using a larger cohort of patients than in previous studies, we were able to mathematically predict the interactions of these variables, which led to simulations that appear to be very similar to clinically observed post-transplantation T cell repertoire development."

Human leucocyte antigen (HLA) testing is the current gold standard for matching stem cell transplant (SCT) donors and recipients. The HLA is a system of genes responsible for regulating immune responses. However, previous research by Toor and his colleagues uncovered large variations between donor-recipient minor histocompatibility antigens (mHA) that could potentially contribute to transplant complications not accounted for by HLA testing. mHA are protein fragments presented on the cell surface by the molecules that the HLA genes encode to regulate immune responses.

The models used in the computer simulations were driven by population growth formulas developed from past studies by Toor that discovered distinct patterns of lymphocyte recovery in SCT recipients. Using matrix mathematics to develop the simulations, the researchers also observed competition among T cells as the T cell repertoire develops. This competition leads to certain families of T cells becoming dominant and more numerous, which crowds out weaker T cell families, causing them to develop later and in fewer numbers.
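The published equations are not reproduced here, but the competitive growth described above can be sketched with a simple Lotka-Volterra-style model in which T cell clones share a limited niche; the clone counts, growth rates and competition matrix below are invented for illustration, not taken from the study.

```python
import numpy as np

# Illustrative sketch of competing T cell clones (not the published model).
# Each clone grows logistically while sharing a limited niche; the matrix A
# encodes how strongly clones suppress one another (equal competition here).
rng = np.random.default_rng(0)
n_clones = 5
r = rng.uniform(0.1, 0.5, n_clones)      # per-clone growth rates (arbitrary)
K = 1e6                                  # shared carrying capacity (cells)
A = np.ones((n_clones, n_clones))        # uniform pairwise competition
x = rng.uniform(1.0, 10.0, n_clones)     # starting cell counts per clone

dt, steps = 0.1, 2000
for _ in range(steps):
    crowding = A @ x / K                 # competitive pressure felt by each clone
    x = np.clip(x + dt * r * x * (1.0 - crowding), 0.0, None)

print("final clone sizes:", np.round(x).astype(int))
print("dominant clone index:", int(np.argmax(x)))
```

In this toy version, clones with faster growth rates expand first and claim most of the shared niche, while slower clones appear later and in smaller numbers, mirroring the dominance effect described above.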

"We are attempting to account for the many variables that could impact T cell repertoire development and, in turn, patient outcomes," says Toor. "In future studies, we hope to explore the impact of organ-specific antigen expression. The knowledge gained from this research could potentially allow more accurate predictions of which organs could be most affected by graft-versus-host-disease."

The second study, published in the Journal of the Royal Society Interface, examines the ordering of the DNA segments that make up the T cell receptor loci, the part of the genome responsible for assembling the T cell repertoire. In it, Toor provides further evidence of an underlying mathematical order and presents trigonometric ratios describing the positioning of the DNA segments.

Toor also hypothesizes that interactions between the two strands in the DNA double helix may influence gene segment order based on wave-mechanical properties inherent to the structure of DNA. If true, this theory has implications for understanding the entire genome as it could allow scientists to quantify gene expression.

"Typically, the locations of gene segments are considered as if they were numbers on a straight line," says Toor. "This study is unique in that we have used trigonometry to account for the spiral nature of DNA."

Toor's research is an exciting marriage of nature, math and science applied for human health. If successful, it could lead not only to improved stem cell transplantation practices but also a much greater understanding of how our body assembles the building blocks needed to keep us alive.

University of Rochester team leads competition for best computer-generated captions

A team of University of Rochester and Adobe researchers is outperforming other approaches to creating computer-generated image captions in an international competition. The key to their winning approach? Thinking about words - what they mean and how they fit together in a sentence - just as much as thinking about the image itself.

The Rochester/Adobe model mixes the two approaches that are often used in image captioning: the "top-down" approach, which starts from the "gist" of the image and then converts it into words, and the "bottom-up" approach, which first assigns words to different aspects of the image and then combines them to form a sentence.

The Rochester/Adobe model is currently beating Google, Microsoft, Baidu/UCLA, Stanford University, University of California Berkeley, University of Toronto/Montreal, and others to top the leaderboard in an image captioning competition run by Microsoft, called the Microsoft COCO Image Captioning Challenge. While the winner of the year-long competition is still to be determined, the Rochester "Attention" system - or ATT on the leaderboard - has been leading the field since last November.

Other groups have also tried to combine these two methods by having a feedback mechanism that allows a system to improve on what just one of the approaches would be able to do. However, several systems that tried to blend these two approaches focused on "visual attention," which tries to take into account which parts of an image are visually more important to describe the image better.

The Rochester/Adobe system focuses on what the researchers describe as "semantic attention." In a paper accepted by the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), entitled "Image Captioning with Semantic Attention," computer science professor Jiebo Luo and his colleagues define semantic attention as "the ability to provide a detailed, coherent description of semantically important objects that are needed exactly when they are needed."

"To describe an image you need to decide what to pay more attention to," said Luo. "It is not only about what is in the center of the image or a bigger object, it's also about coming up with a way of deciding on the importance of specific words."

For example, take an image that shows a table and seated people. The table might be at the center of the image but a better caption might be "a group of people sitting around a table" instead of "a table with people seated." Both are correct, but the former one also tries to take into account what might be of interest to readers and viewers.

Computer image captioning brings together two key areas in artificial intelligence: computer vision and natural language processing. On the computer vision side, researchers train their systems on a massive dataset of images so they learn to identify objects in images. Language models can then be used to put these words together. Luo and his team also trained their algorithm on large amounts of text. The objective was not only to understand sentence structure, but also the meanings of individual words, which words are often used together, and which words might be semantically more important.
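The published ATT architecture is not reproduced here, but the central idea of semantic attention, re-weighting candidate attribute words by how relevant they are to the caption decoder's current state, can be sketched roughly as follows; the attribute words, dimensions and random weights are placeholders, not the trained model.

```python
import numpy as np

# Rough sketch of a semantic-attention step (not the Rochester/Adobe ATT model).
# Attribute words detected in the image are re-weighted by how well their
# embeddings match the caption decoder's current hidden state, then combined
# into a context vector that guides the next word choice.
rng = np.random.default_rng(1)
hidden_dim = embed_dim = 64

attributes = ["people", "table", "sitting", "food"]       # hypothetical detections
attr_embeddings = rng.normal(size=(len(attributes), embed_dim))
hidden_state = rng.normal(size=hidden_dim)                # decoder state at this step
W = 0.1 * rng.normal(size=(hidden_dim, embed_dim))        # stand-in for learned weights

scores = attr_embeddings @ (W.T @ hidden_state)           # relevance of each attribute
weights = np.exp(scores - scores.max())
weights /= weights.sum()                                  # softmax attention weights
context = weights @ attr_embeddings                       # weighted attribute context

for word, w in zip(attributes, weights):
    print(f"{word:8s} attention = {w:.2f}")
```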

Web-based application workshop for prospective users on 22 March

The novel, brain-inspired supercomputing system BrainScaleS will be launched at the Kirchhoff Institute for Physics of Heidelberg University. A workshop will present possible applications of the neuromorphic system now coming online. The workshop is aimed at users from academic research, industry and education and will be broadcast as a web stream on 22 March 2016 from 3 pm to 6 pm CET. The BrainScaleS system has been constructed by an international research team led by Prof. Dr. Karlheinz Meier within the framework of the Human Brain Project (HBP), funded by the European Commission.

Neuromorphic supercomputers mimic aspects of architectures and principles found in biological brains on silicon chips. “The Heidelberg BrainScaleS system goes beyond the paradigms of a Turing machine and the von Neumann architecture, both formulated during the middle of the 20th century by the computer pioneers Alan Turing and John von Neumann. It is neither executing a sequence of instructions nor is it constructed as a system of physically separated computing and memory units. It is rather a direct, silicon based image of the neuronal networks found in nature, realising cells, connections and inter-cell communications by means of modern analogue and digital microelectronics,” explains Prof. Meier.

The recently completed system is composed of 20 silicon wafers with a total of four million neurons and a billion dynamic synaptic connections. Learning and developmental processes can be emulated with a thousandfold acceleration over real time, so that a biological day can be compressed to 100 seconds on the machine. Beyond basic research on self-organisation in neural networks, potential applications include energy- and time-efficient realisations of Deep Learning, a technology developed by companies like Google and Facebook for the analysis of large data volumes using conventional supercomputers.

In parallel to the launch of the Heidelberg BrainScaleS system, a complementary system of comparable size, called SpiNNaker, will become operational at the University of Manchester (UK). There, a team led by the computer scientist Prof. Dr. Steve Furber, co-designer of the ARM chip architecture in the 1980s, constructed a large-scale system consisting of 500,000 densely interconnected ARM cores. This system will also be introduced during the web-based workshop on 22 March. Together, the systems located in Heidelberg and Manchester constitute the “Neuromorphic Computing Platform” of the Human Brain Project.
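Access to the Human Brain Project's neuromorphic systems is typically scripted through the PyNN network-description language. The minimal sketch below uses a conventional PyNN software-simulator backend rather than a hardware backend, and the network itself (a Poisson stimulus driving a small population of integrate-and-fire neurons) is purely illustrative.

```python
import pyNN.nest as sim   # any installed PyNN backend can be substituted here

# Illustrative spiking network in PyNN: a Poisson spike source drives a small
# population of conductance-based integrate-and-fire neurons. The numbers are
# arbitrary and the script runs on a software simulator, not the hardware.
sim.setup(timestep=0.1)                                   # ms

stimulus = sim.Population(20, sim.SpikeSourcePoisson(rate=20.0), label="input")
neurons = sim.Population(100, sim.IF_cond_exp(), label="excitatory")

sim.Projection(stimulus, neurons, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)                                           # simulate one second

spiketrains = neurons.get_data().segments[0].spiketrains
print("recorded spike trains:", len(spiketrains))
sim.end()
```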

The European developments are based on two previous European projects, FACETS and BrainScaleS, funded from 2005 to 2015 by the “Future and Emerging Technologies” (FET) scheme of the European Commission, and on the national project SpiNNaker in the UK. With the two new machines now coming online, Europe has a strong position in hardware development in the field of alternative computing. In the USA, the IBM Research Laboratory in Almaden (California) has developed the TrueNorth chip, yet another approach, which is complementary to both of the European systems.

Rajat Mittal, left, a Johns Hopkins mechanical engineering professor, and Neda Yaghoobian, a visiting postdoctoral scholar, devised a supercomputer simulation to determine how wind conditions affect the trajectory of a golf ball in flight. Photo by Will Kirk/Johns Hopkins University.

Johns Hopkins engineers have devised a supercomputer model to unravel the wicked wind conditions that plague the world’s greatest golfers at a course that hosts one of the sport’s most storied tournaments, The Masters, in Augusta, Ga.

Rajat Mittal, an aerodynamics expert and professor of mechanical engineering in Johns Hopkins’ Whiting School of Engineering, who also describes himself as a recreational golfer, developed this system with Neda Yaghoobian, a postdoctoral visiting scholar on his lab team. Yaghoobian earned her PhD in mechanical engineering at the University of California, San Diego, focusing on urban energy and microclimate, as well as atmospheric and environmental flow research. She presented the team’s early findings in November at the 68th annual meeting of the American Physical Society Division of Fluid Dynamics.

Yaghoobian’s recent research examines the significant role that wind conditions can play in golf. In her APS presentation, she reported that she and Mittal had devised a model based on computational fluid dynamics that incorporates wind conditions and information on local tree canopies to evaluate, and even predict, how the wind’s direction and speed are likely to affect the accuracy of a golf shot on any particular hole. The researchers also used computer simulations to explore the impact that factors such as spin and launch angle have on the ball itself as it travels toward its destination.
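The team’s computational-fluid-dynamics solver is not reproduced here, but the ball-flight side of such a study, a point mass acted on by gravity, aerodynamic drag, spin-induced (Magnus) lift and a steady wind, can be sketched as below; the launch conditions, wind vector and aerodynamic coefficients are generic illustrative values, not the researchers’.

```python
import numpy as np

# Illustrative point-mass golf ball flight with drag, Magnus lift and wind.
# All coefficients and launch conditions are generic textbook-style values,
# not those used in the Johns Hopkins simulations.
rho, area, mass = 1.2, 1.43e-3, 0.0459        # air density, ball cross-section (m^2), mass (kg)
Cd, Cl = 0.25, 0.20                           # drag / lift coefficients (spin-dependent in reality)
gravity = np.array([0.0, 0.0, -9.81])
wind = np.array([3.0, -2.0, 0.0])             # hypothetical steady wind (m/s)
spin_axis = np.array([0.0, -1.0, 0.0])        # backspin: Magnus force points upward here

speed, launch_deg = 60.0, 18.0                # roughly an iron shot
v = speed * np.array([np.cos(np.radians(launch_deg)), 0.0, np.sin(np.radians(launch_deg))])
pos, dt = np.zeros(3), 0.001

while pos[2] >= 0.0:
    v_rel = v - wind                                      # air-relative velocity
    s = np.linalg.norm(v_rel)
    drag = -0.5 * rho * area * Cd * s * v_rel
    lift_dir = np.cross(spin_axis, v_rel)
    lift = 0.5 * rho * area * Cl * s**2 * lift_dir / np.linalg.norm(lift_dir)
    v = v + (gravity + (drag + lift) / mass) * dt
    pos = pos + v * dt

print(f"carry: {pos[0]:.0f} m downrange, drift off line: {pos[1]:.1f} m")
```

Changing the wind vector in a sketch like this shifts both the carry distance and the sideways drift, which is the kind of sensitivity the full simulations quantify, with the added complexity of winds swirling around the tree canopies.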

For their proof-of-concept research, the team collected data from the notorious 12th hole at Georgia’s Augusta National Golf Club, site of the annual Masters Golf Tournament. Although this par-3 hole is the shortest on the course, it is subject to unpredictable winds that swirl over and around the tall tree canopies that surround the hole. It also features water, sand and a particularly small green that compound the complexity.

A 2012 Golf Digest article about Augusta National’s 12th hole dubbed it “the scariest 155 yards in golf.” The story described how even the world’s top golfers often misjudge the wind conditions, leaving shots short into the water or sometimes overshooting the green clear into a different golf course next door.

To help gauge wind conditions, golfers often throw a tuft of grass in the air. But Mittal and Yaghoobian collected and processed more precise scientific data. In addition to local weather records—particularly wind conditions—they also gathered information about the topography of the hole and its plant canopy. Their computer simulations showed that the tall trees surrounding the 12th hole do indeed have a significant impact on the accuracy of the golf shot. They also found that winds from certain directions are the most dangerous for this hole.

“Our primary goal was to develop a computational tool that could integrate all of these kinds of information to see if it can help predict how the wind will influence a golf ball’s flight on a difficult hole like this one,” Mittal said. “This level of analysis has not been available to golfers. But in our early work, we’ve been able to demonstrate proof-of-concept that it is possible to generate these kind of detailed predictions about a particular golf hole.”

Eventually, the researchers say, the system might be incorporated into a portable device or application that could help advise golfers about what club to use, how hard to hit the ball and how best to aim the shot, all based on the hole’s weather conditions, terrain and other factors. “We think that this prototype system is a promising first step toward an app or software program that could help golfers, course designers and even sports commentators,” Mittal said. 

Yaghoobian added: “I really knew very little about golf when I started on this research, but having worked on this project for over a year now, I have come to appreciate the inherent complexity of this sport.”

Working with the Johns Hopkins Technology Ventures staff, Mittal and Yaghoobian have obtained a provisional patent covering the computational tool developed for this project.

New model corrects assumption that drowning is only scenario for low-lying coasts

Much of the coast from Maine to Virginia is more likely to change than to simply drown in response to rising seas during the next 70 years or so, according to a new study led by the U.S. Geological Survey. The study is based on a new supercomputer model that captures the potential of the Northeast coast to change, driven by geological and biological forces, in ways that will reshape coastal landscapes.

In a paper published Monday in Nature Climate Change, the researchers reported that 70 percent of the Northeast Atlantic Coast is made up of ecosystems that have the capacity to change over the next several decades in response to rising seas. For example, barrier islands may migrate inland, build dunes, change shape, or be split by new inlets as tides, winds, waves and currents sculpt their sands. Marshes trap sediment and break down decaying plants into new soil, which may elevate them sufficiently in some areas to keep pace with sea-level increases.

While most sea-level rise models that cover large areas show low-lying coastal land converting to open water in coming decades, many of these inundation models over-predict the land likely to submerge. The USGS model, developed in collaboration with Columbia University’s Earth Institute, produces a more nuanced picture of sea level rise as a mosaic of dry land, wetlands, and open seas, rather than as a uniform response across the landscape.

The USGS model is the first to factor in natural forces and make detailed predictions from the 2020s through the 2080s over a large coastal area, some 38,000 square kilometers (about 9.4 million acres). It is an advance over most regional models, which project drowning as the only outcome as the oceans rise. These are often referred to as “bathtub models” and assume the coast is progressively submerged as sea levels rise.

Projections from inundation models are straightforward: some coastal land will remain above the levels of the rising seas and some will drown. The new model includes the potential for dynamic coastal change and shows where, in response to future sea levels, coastal lands fall on a continuum between dry land and open water.

“Geologists have always known that the coast has some potential for give and take,” said lead author Erika Lentz, a research geologist at the USGS Coastal and Marine Science Center in Woods Hole, Massachusetts. “But the standard bathtub models of sea level rise don’t reflect that. This approach couples what we do know about these systems with what we still need to learn—how different ecosystems may respond to different sea-level rise scenarios—to estimate the odds that an area will persist or change instead of simply drown.”

By casting results in terms of odds, the new model provides a more accurate picture of sea-level rise vulnerability for informing adaptation strategies and reducing hazards, the USGS researchers say. They make it clear, however, that just because an area is less likely to drown might not mean it is less vulnerable. “Our model results suggest that even natural changes may pose problems,” Lentz said. “For example, the likelihood that barrier islands will change could impact the infrastructure and economies of coastal communities, and the barrier islands or marshes may not protect coastal communities in the same way they do today.”

In fact, the outcome is uncertain for the Northeast’s low-lying developed coastlines, where seawalls, buildings and other immovable structures thwart some natural processes. The model found the region’s developed coastal lands lying 1 meter (about 3 1/2 feet) or less above sea level will likely face a tipping point by the 2030s, when humans’ decisions about whether and how to protect each area will determine if it survives or drowns.

A 2012 USGS study identified the densely populated region from Cape Hatteras to Boston as a hot spot where seas are rising faster than the global average, so land managers urgently need to understand how their coastal landscape may change, said John Haines, coordinator of the USGS Coastal and Marine Geology Program.

“The model allows us to identify vulnerable areas, and that information can be very valuable to land managers as they consider whether to protect, relocate or let go of certain assets,” Haines said. “Even when the results are uncertain, it’s useful to know there’s a 50 percent chance that an important habitat or infrastructure project may be lost in a few decades.”

To come up with their model for the Northeastern United States, the researchers mapped all coastal land between 10 meters (about 33 feet) above sea level and 10 meters below it, from the Virginia-North Carolina line to the Maine-Canada border. They factored in a variety of forces that affect coastal change, from planetary phenomena like the movement of Earth’s tectonic plates to local ones like falling groundwater levels that cause land surfaces to sink. Looking at parcels of 30 meters by 30 meters—about the size of two NBA basketball courts side by side—they weighed the balance of forces on each parcel.

Using scenarios that assume humans will continue adding moderate to high levels of greenhouse gases to the atmosphere through the 21st century, the team projected global sea level rise for the 2020s through the 2080s, and applied that to the coast. The model then estimated the likelihood, from 0 to 100 percent, that each parcel will persist above sea level at the end of each decade.
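The USGS model itself is not reproduced here, but the contrast between a simple “bathtub” rule and a probabilistic persistence estimate can be sketched for a toy set of parcels; the elevations, sea-level scenario and accretion range below are invented.

```python
import numpy as np

# Toy contrast between a "bathtub" inundation rule and a probabilistic
# persistence estimate for coastal parcels. All numbers are invented and
# the approach is only loosely inspired by the USGS study.
rng = np.random.default_rng(42)
n_parcels = 8
elevation = rng.uniform(-0.5, 2.0, n_parcels)      # parcel elevation above present sea level (m)

slr_2080s = 0.6                                     # one hypothetical sea-level-rise scenario (m)
bathtub_drowned = elevation < slr_2080s             # bathtub rule: below the new sea level = drowned

# Probabilistic sketch: sample uncertain sea-level rise and an uncertain
# ability of each parcel to gain elevation (marsh accretion, dune building).
n_samples = 10_000
slr_samples = rng.normal(slr_2080s, 0.15, n_samples).clip(min=0.0)
accretion = rng.uniform(0.0, 0.5, (n_samples, n_parcels))     # vertical gain by the 2080s (m)
p_persist = ((elevation + accretion) > slr_samples[:, None]).mean(axis=0)

for elev, drowned, p in zip(elevation, bathtub_drowned, p_persist):
    outcome = "drowns" if drowned else "stays dry"
    print(f"elevation {elev:5.2f} m | bathtub: {outcome:9s} | P(persist) = {p:.2f}")
```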

Predictions for many parcels fell close to 50 percent in the first few decades, a tossup between drowning and surviving. The uncertainty was greatest when the researchers had to wrestle with more than one question that can’t yet be definitively answered: How fast will seas rise? Can coastal marshes make new soil quickly enough to stay above the waves? And what engineering strategies will people use to protect some shorelines?

“By building in our understanding of the sea level rise response of the coastal landscape, we’re providing a more realistic picture of coastal change in the Northeastern U.S. over the next several decades,” Lentz said.

The researchers’ next step will be to group the basketball-court-sized parcels into larger areas, such as major coastal cities, national wildlife refuges, and national seashores, and assess the vulnerability of these areas to future change and drowning. This information may assist decisionmakers as they develop management priorities to address longer-term coastal challenges.
