
Using a combination of infrared spectroscopy and supercomputer simulation, researchers at Ruhr-Universität Bochum (RUB) have gained new insights into the workings of protein switches. With high temporal and spatial resolution, they verified that a magnesium atom contributes significantly to switching the so-called G-proteins on and off.

G-proteins affect, for example, seeing, smelling, tasting and the regulation of blood pressure, and they are the target of many drugs. “Consequently, understanding their workings in detail is not just a matter of academic interest,” says Prof Dr Klaus Gerwert, Head of the Department of Biophysics. Together with his colleagues, Bochum-based private lecturer Dr Carsten Kötting and Daniel Mann, he published his findings in the Biophysical Journal, which features the work as its cover story in the edition published on January 10, 2017 (http://www.cell.com/biophysj/issue?pii=S0006-3495%2816%29X0003-3).

G-proteins as source of disease

GTP (guanosine triphosphate) can bind to all G-proteins. If an enzyme cleaves a phosphate group from the bound GTP, the G-protein is switched off. This so-called GTP hydrolysis takes place in the active centre of enzymes within seconds. If the process fails, severe diseases may be triggered, such as cancer, cholera or the rare McCune-Albright syndrome, which is characterised, for example, by abnormal bone metabolism.
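In chemical terms, this switch-off reaction is the hydrolysis of the bound GTP's terminal phosphate, a standard textbook reaction (stated here for background, not a finding of the study):

$$\mathrm{GTP} + \mathrm{H_2O} \longrightarrow \mathrm{GDP} + \mathrm{P_i}$$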

Magnesium important for switch mechanism

In order for GTP hydrolysis to take place, a magnesium atom has to be present in the enzyme’s active centre. For the first time, the research team directly observed how the magnesium affects the geometry and charge distribution in its environment. They also found that the atom remains in the enzyme’s binding pocket after the protein is switched off. To date, researchers had assumed that the magnesium leaves the pocket once the switching-off process is completed.

The new findings were gathered using a method developed at the RUB Department of Biophysics, which allows enzymatic processes to be monitored in their natural state at high temporal and spatial resolution. The method in question is a special type of spectroscopy, namely time-resolved Fourier transform infrared spectroscopy. However, the data measured with its aid do not reveal the precise location in the enzyme where a process takes place. The researchers gather this information through quantum-mechanical computer simulations of structural models. “Computer simulations are crucial for decoding the information hidden in the infrared spectra,” explains Carsten Kötting. The supercomputer, so to speak, becomes a microscope.
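To illustrate the logic of that combination, here is a minimal, hypothetical Python sketch of how measured infrared band positions might be scored against bands computed for candidate structural models. All band values, model names and the scoring rule are invented for illustration; this is not the group's actual analysis pipeline.

```python
# Hypothetical sketch: match measured infrared difference-band positions
# against band positions computed by quantum-mechanical simulations of
# candidate structural models. All numbers are invented for illustration.

measured_bands = [1156.0, 1216.0, 1260.0]   # cm^-1, from an FTIR difference spectrum

computed = {                                 # QM-computed bands per structural model
    "Mg2+ bound (model A)": [1153.0, 1214.0, 1263.0],
    "Mg2+ released (model B)": [1139.0, 1198.0, 1241.0],
}

def rms_mismatch(model_bands):
    # Root-mean-square deviation between measured and computed bands.
    return (sum((m - c) ** 2 for m, c in zip(measured_bands, model_bands))
            / len(measured_bands)) ** 0.5

for name, bands in computed.items():
    print(f"{name}: RMS mismatch {rms_mismatch(bands):.1f} cm^-1")

# The model whose computed bands best match the measurement indicates where
# in the enzyme the observed process is taking place.
```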

How proteins accelerate the switching off process

In the current study, the RUB biophysicists also demonstrated how the specialised protein environment accelerates GTP hydrolysis. They analysed the role of a lysine amino acid, which occupies the same position in many G-proteins. It binds precisely the phosphate group of the GTP molecule from which a phosphate is cleaved when the G-protein is switched off.

“The function of lysine is to accelerate GTP hydrolysis by transferring negative charges from the third phosphate group to the second phosphate group,” elaborates Daniel Mann. “This is a crucial starting point for the development of drugs for the treatment of cancer and other severe diseases in the long term.”

The research team Carsten Kötting, Daniel Mann and Klaus Gerwert (left to right) sets up the measuring equipment. The detector of the spectrometer has to be cooled with liquid nitrogen. © RUB, Marquard

Copper is an essential element for our society, with its main uses in electricity and electronics. About 70% of the world's copper comes from deposits formed several million years ago during episodes of magma degassing within the Earth's crust just above subduction zones. Despite similar ore-forming processes, the size of these deposits varies by orders of magnitude from one place to another, and the main reason for this has remained unclear. A new study led by researchers from the Universities of Geneva (UNIGE, Switzerland) and Saint-Etienne (France), to be published in Scientific Reports, suggests that the answer may lie in the volume of magma emplaced in the crust, and proposes an innovative method to better explore these deposits.

Magmas formed above subduction zones contain significant amounts of water, which is largely degassed during volcanic eruptions or upon magma cooling and solidification at depth. The water escaping from the crystallizing magma several kilometers below the surface carries most of the copper initially dissolved in the magma. On its way toward the surface, the magmatic fluid cools and deposits copper in the fractured rocks, forming giant metal deposits such as those exploited along the Andean Cordillera.

By modeling the process of magma degassing, the researchers could reproduce the chemistry of the fluids that form metal deposits. "Comparing the model results with available data from known copper deposits, we could link the timescales of magma emplacement and degassing in the crust, the volume of magma, and the size of the deposit", explains Luca Caricchi, researcher at the UNIGE. The scientists also propose a new method to estimate the size of the deposits, based on high-precision geochronology, one of the specialties of the Department of Earth Sciences in UNIGE's Science Faculty.
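As a rough illustration of how magma volume could control deposit size, the following back-of-the-envelope Python sketch scales copper endowment linearly with the volume of emplaced magma. Every parameter value and the linear scaling itself are simplifying assumptions for illustration only; the study's actual model couples fluid chemistry to magma emplacement and degassing timescales.

```python
# Illustrative back-of-the-envelope scaling for copper endowment.
# All parameter values below are hypothetical, not from the paper.

magma_volume_km3 = 500.0       # volume of magma emplaced in the crust
rho = 2.6e12                   # magma density, kg per km^3 (~2600 kg/m^3)
cu_ppm = 50.0                  # initial dissolved copper, parts per million
extraction_efficiency = 0.5    # fraction carried out by the escaping fluid
deposition_efficiency = 0.05   # fraction ultimately fixed in the deposit

cu_tonnes = (magma_volume_km3 * rho * cu_ppm * 1e-6
             * extraction_efficiency * deposition_efficiency) / 1e3
print(f"copper endowment: ~{cu_tonnes:,.0f} tonnes")
```

Under these made-up numbers the sketch yields a deposit of roughly 1.6 million tonnes of copper, and doubling the magma volume doubles the endowment, which is the qualitative link between magma volume and deposit size that the study draws.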

This technique is a new addition to the prospector's toolbox, offering the possibility of identifying the deposits with the best potential early in the long and costly process of mineral exploration. The computational approach developed in this study is also expected to provide important insights into the role of magma degassing as a potential trigger of volcanic eruptions.

CAPTION This is an active magmatic system.

Physicists at the National Institute of Standards and Technology (NIST) have cooled a mechanical object to a temperature lower than previously thought possible, below the so-called "quantum limit."

The new NIST theory and experiments, described in the Jan. 12, 2017, issue of Nature, showed that a microscopic mechanical drum--a vibrating aluminum membrane--could be cooled to less than one-fifth of a single quantum, or packet of energy, lower than ordinarily predicted by quantum physics. The new technique theoretically could be used to cool objects to absolute zero, the temperature at which matter is devoid of nearly all energy and motion, NIST scientists said.

"The colder you can get the drum, the better it is for any application," said NIST physicist John Teufel, who led the experiment. "Sensors would become more sensitive. You can store information longer. If you were using it in a quantum computer, then you would compute without distortion, and you would actually get the answer you want."

"The results were a complete surprise to experts in the field," Teufel's group leader and co-author José Aumentado said. "It's a very elegant experiment that will certainly have a lot of impact."

The drum, 20 micrometers in diameter and 100 nanometers thick, is embedded in a superconducting circuit designed so that the drum motion influences the microwaves bouncing inside a hollow enclosure known as an electromagnetic cavity. Microwaves are a form of electromagnetic radiation, so they are in effect a form of invisible light, with a longer wavelength and lower frequency than visible light.

The microwave light inside the cavity changes its frequency as needed to match the frequency at which the cavity naturally resonates, or vibrates. This is the cavity's natural "tone," analogous to the musical pitch that a water-filled glass will sound when its rim is rubbed with a finger or its side is struck with a spoon.

NIST scientists previously cooled the quantum drum to its lowest-energy "ground state," or one-third of one quantum. They used a technique called sideband cooling, which involves applying a microwave tone to the circuit at a frequency below the cavity's resonance. This tone drives electrical charge in the circuit to make the drum beat. The drumbeats generate light particles, or photons, which naturally match the higher resonance frequency of the cavity. These photons leak out of the cavity as it fills up. Each departing photon takes with it one mechanical unit of energy--one phonon--from the drum's motion. This is the same idea as laser cooling of individual atoms, first demonstrated at NIST in 1978 and now widely used in applications such as atomic clocks.
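For context, the conventional sideband-cooling limit the team had to beat can be stated compactly. A textbook optomechanics result (quoted here for background, not taken from the paper) bounds the minimum average phonon number $n_{\min}$ by the cavity linewidth $\kappa$ and the mechanical frequency $\Omega_m$ in the resolved-sideband regime ($\kappa \ll \Omega_m$):

$$n_{\min} \approx \left(\frac{\kappa}{4\,\Omega_m}\right)^2$$

On top of this, fluctuations of the driving light impose a further noise floor, which is the limit that the squeezed-light technique described below removes.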

The latest NIST experiment adds a novel twist--the use of "squeezed light" to drive the drum circuit. Squeezing is a quantum mechanical concept in which noise, or unwanted fluctuations, is moved from a useful property of the light to another aspect that doesn't affect the experiment. These quantum fluctuations limit the lowest temperatures that can be reached with conventional cooling techniques. The NIST team used a special circuit to generate microwave photons that were purified or stripped of intensity fluctuations, which reduced inadvertent heating of the drum.
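The textbook picture of squeezing (general background, not the paper's specific parameters) is a redistribution of vacuum noise between the two quadratures $X_1$ and $X_2$ of the light field. For squeezing parameter $r$,

$$\Delta^2 X_1 = \tfrac{1}{4}e^{-2r}, \qquad \Delta^2 X_2 = \tfrac{1}{4}e^{+2r},$$

so the noise in one quadrature drops below the vacuum level of $\tfrac{1}{4}$ while the product $\Delta^2 X_1\,\Delta^2 X_2 \ge \tfrac{1}{16}$ still respects the uncertainty principle. Reducing the intensity noise of the drive in this way is what lowers the inadvertent heating.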

"Noise gives random kicks or heating to the thing you're trying to cool," Teufel said. "We are squeezing the light at a 'magic' level--in a very specific direction and amount--to make perfectly correlated photons with more stable intensity. These photons are both fragile and powerful."

The NIST theory and experiments indicate that squeezed light removes the generally accepted cooling limit, Teufel said. This includes objects that are large or operate at low frequencies, which are the most difficult to cool.

The drum might be used in applications such as hybrid quantum computers combining both quantum and mechanical elements, Teufel said. A hot topic in physics research around the world, quantum computers could theoretically solve certain problems considered intractable today.

CAPTION NIST researchers applied a special form of microwave light to cool a microscopic aluminum drum to an energy level below the generally accepted limit, to just one fifth of a single quantum of energy. The drum, which is 20 micrometers in diameter and 100 nanometers thick, beat 10 million times per second while its range of motion fell to nearly zero. CREDIT Teufel/NIST

Scientists offer a new path to creating the extreme conditions found in stars, using ultra-short laser pulses irradiating nanowires

The energy density contained in the center of a star is higher than we can imagine - many billions of atmospheres, compared with the 1 atmosphere of pressure we live with here on Earth's surface.

These extreme conditions can only be recreated in the laboratory through fusion experiments with the world's largest lasers, which are the size of stadiums. Now, scientists have conducted an experiment at Colorado State University that offers a new path to creating such extreme conditions, with much smaller, compact lasers that use ultra-short laser pulses irradiating arrays of aligned nanowires.

The experiments, led by University Distinguished Professor Jorge Rocca in the Departments of Electrical and Computer Engineering and Physics, accurately measured how deeply these extreme energies penetrate the nanostructures. These measurements were made by monitoring the characteristic X-rays emitted from the nanowire array, in which the material composition changes with depth.

Numerical models validated by the experiments predict that increasing irradiation intensities to the highest levels made possible by today's ultrafast lasers could generate pressures that surpass those in the center of our sun.

The results, published Jan. 11 in the journal Science Advances, open a path to obtaining unprecedented pressures in the laboratory with compact lasers. The work could open new inquiry into high energy density physics; how highly charged atoms behave in dense plasmas; and how light propagates at ultrahigh pressures, temperatures, and densities.

Creating matter in the ultra-high energy density regime could inform the study of laser-driven fusion - using lasers to drive controlled nuclear fusion reactions - and to further understanding of atomic processes in astrophysical and extreme laboratory environments.

The ability to create ultra-high energy density matter using smaller facilities is thus of great interest for making these extreme plasma regimes more accessible for fundamental studies and applications. One such application is the efficient conversion of optical laser light into bright flashes of X-rays.

The work was a multi-institutional effort led by CSU that included graduate students Clayton Bargsten, Reed Hollinger, Alex Rockwood, and undergraduate David Keiss, all working with Rocca. Also involved were research scientists Vyacheslav Shlyapsev, who worked in modeling, and Yong Wang and Shoujun Wang, all from the same group.

Co-authorship included Maria Gabriela Capeluto from the University of Buenos Aires, and Richard London, Riccardo Tommasini and Jaebum Park from Lawrence Livermore National Laboratory (LLNL). Numerical simulations were conducted by Vural Kaymak and Alexander Pukhov from Heinrich-Heine University in Dusseldorf, using atomic data by Michael Busquet and Marcel Klapisch from Artep, Inc.

CAPTION Representation of the creation of ultra-high energy density matter by intense laser pulse irradiation of an array of aligned nanowires. CREDIT R. Hollinger and A. Beardall

For the first time, University of New South Wales biomedical engineers have woven a 'smart' fabric that mimics the sophisticated and complex properties of one of nature's most ingenious materials, the bone tissue periosteum.

Having achieved proof of concept, the researchers are now ready to produce fabric prototypes for a range of advanced functional materials that could transform the medical, safety and transport sectors. Patents for the innovation are pending in Australia, the United States and Europe.

Potential future applications range from protective suits that stiffen under high impact for skiers, racing-car drivers and astronauts, through to 'intelligent' compression bandages for deep-vein thrombosis that respond to the wearer's movement and safer steel-belt radial tyres.

The research is published today in Nature's Scientific Reports.

Many animal and plant tissues exhibit 'smart' and adaptive properties. One such material is the periosteum, a soft tissue sleeve that envelops most bony surfaces in the body. The complex arrangement of collagen, elastin and other structural proteins gives periosteum amazing resilience and provides bones with added strength under high impact loads.

Until now, a lack of scalable 'bottom-up' approaches has stymied researchers' ability to use smart tissues to create advanced functional materials.

UNSW's Paul Trainor Chair of Biomedical Engineering, Professor Melissa Knothe Tate, said her team had for the first time mapped the complex tissue architectures of the periosteum, visualised them in 3D on a computer, scaled up the key components and produced prototypes using weaving loom technology.

"The result is a series of textile swatch prototypes that mimic periosteum's smart stress-strain properties. We have also demonstrated the feasibility of using this technique to test other fibres to produce a whole range of new textiles," Professor Knothe Tate said.

In order to understand the functional capacity of the periosteum, the team used an incredibly high fidelity imaging system to investigate and map its architecture.

"We then tested the feasibility of rendering periosteum's natural tissue weaves using computer-aided design software," Professor Knothe Tate said.

The computer modelling allowed the researchers to scale up nature's architectural patterns to weave periosteum-inspired, multidimensional fabrics using a state-of-the-art computer-controlled jacquard loom. The loom is known as the original rudimentary computer, first unveiled in 1801.

"The challenge with using collagen and elastin is their fibres, that are too small to fit into the loom. So we used elastic material that mimics elastin and silk that mimics collagen," Professor Knothe Tate said.

In a first test of the scaled-up tissue weaving concept, a series of textile swatch prototypes was woven, using specific combinations of the elastin- and collagen-mimicking fibres in a twill pattern designed to mirror periosteum's weave. Mechanical testing of the swatches showed they exhibited properties similar to those of periosteum's natural collagen and elastin weave.

First author and biomedical engineering PhD candidate, Joanna Ng, said the technique had significant implications for the development of next-generation advanced materials and mechanically functional textiles.

While the materials produced by the jacquard loom have potential manufacturing applications - one tyremaker believes a titanium weave could spawn a new generation of thinner, stronger and safer steel-belt radials - the UNSW team is ultimately focused on the machine's human potential.

"Our longer term goal is to weave biological tissues - essentially human body parts - in the lab to replace and repair our failing joints that reflect the biology, architecture and mechanical properties of the periosteum," Ms Ng said.

An NHMRC development grant received in November will allow the team to take its research to the next phase. The researchers will work with the Cleveland Clinic and the University of Sydney's Professor Tony Weiss to develop and commercialise prototype bone implants for pre-clinical research, using the 'smart' technology, within three years.

CAPTION Periosteum is a tissue fabric layer on the outside of bone, as seen in the upper diagonal segment of the tissue image volume. The natural weave of elastin (green) and collagen (yellow) are evident when viewed under the microscope. Elastin gives periosteum its stretchy properties and collagen imparts toughness. Muscle is organized into fiber bundles, observed as round structures in the lower diagonal segment of the tissue image volume. The volume is approximately 200 x 200 microns (width x height) x 25 microns deep. CREDIT Professor Melissa Knothe Tate

Researchers at the University of California San Diego have demonstrated the world's first laser based on an unconventional wave physics phenomenon called bound states in the continuum. The technology could revolutionize the development of surface lasers, making them more compact and energy-efficient for communications and computing applications. The new BIC lasers could also be developed as high-power lasers for industrial and defense applications.

"Lasers are ubiquitous in the present day world, from simple everyday laser pointers to complex laser interferometers used to detect gravitational waves. Our current research will impact many areas of laser applications," said Ashok Kodigala, an electrical engineering Ph.D. student at UC San Diego and first author of the study.

"Because they are unconventional, BIC lasers offer unique and unprecedented properties that haven't yet been realized with existing laser technologies," said Boubacar Kanté, electrical engineering professor at the UC San Diego Jacobs School of Engineering who led the research.

For example, BIC lasers can be readily tuned to emit beams of different wavelengths, a useful feature for medical lasers made to precisely target cancer cells without damaging normal tissue. BIC lasers can also be made to emit beams with specially engineered shapes (spiral, donut or bell curve) -- called vector beams -- which could enable increasingly powerful computers and optical communication systems that can carry up to 10 times more information than existing ones.

"Light sources are key components of optical data communications technology in cell phones, computers and astronomy, for example. In this work, we present a new kind of light source that is more efficient than what's available today in terms of power consumption and speed," said Babak Bahari, an electrical engineering Ph.D. student in Kanté's lab and a co-author of the study.

Bound states in the continuum (BICs) are phenomena that were first predicted in 1929. BICs are waves that remain perfectly confined, or bound, in an open system. Conventional waves in an open system escape, but BICs defy this norm -- they stay localized and do not escape despite having open pathways to do so.

In a previous study, Kanté and his team demonstrated, at microwave frequencies, that BICs could be used to efficiently trap and store light to enable strong light-matter interaction. Now, they're harnessing BICs to demonstrate new types of lasers. The team published the work Jan. 12 in Nature.

Making the BIC laser

The BIC laser in this work is constructed from a thin semiconductor membrane made of indium, gallium, arsenic and phosphorus. The membrane is structured as an array of nano-sized cylinders suspended in air. The cylinders are interconnected by a network of supporting bridges, which provide mechanical stability to the device.

By powering the membrane with a high frequency laser beam, researchers induced the BIC system to emit its own lower frequency laser beam (at telecommunication frequency).

"Right now, this is a proof of concept demonstration that we can indeed achieve lasing action with BICs," Kanté said.

"And what's remarkable is that we can get surface lasing to occur with arrays as small as 8 × 8 particles," he said. In comparison, the surface lasers that are widely used in data communications and high-precision sensing, called VCSELs (vertical-cavity surface-emitting lasers), need much larger (100 times) arrays -- and thus more power -- to achieve lasing.

"The popular VCSEL may one day be replaced by what we're calling the 'BICSEL' -- bound state in the continuum surface-emitting laser, which could lead to smaller devices that consume less power," Kanté said. The team has filed a patent for the new type of light source.

The array can also be scaled up in size to create high-power lasers for industrial and defense applications, he noted. "A fundamental challenge in high-power lasers is heating, and with the predicted efficiencies of our BIC lasers, a new era of laser technologies may become possible," Kanté said.

The team's next step is to make BIC lasers that are electrically powered, rather than optically powered by another laser. "An electrically pumped laser is easily portable outside the lab and can run off a conventional battery source," Kanté said.

CAPTION This is a schematic of the BIC laser: a high frequency laser beam (blue) powers the membrane to emit a laser beam at telecommunication frequency (red). CREDIT Kanté group, UC San Diego

An analysis that would have taken more than a thousand years on a single computer has, within one year, found more than a dozen new rapidly rotating neutron stars in data from the Fermi gamma-ray space telescope. With computing power donated by volunteers from all over the world, an international team led by researchers at the Max Planck Institute for Gravitational Physics in Hannover, Germany, searched for tell-tale periodicities in 118 Fermi sources of unknown nature. In 13 of them they discovered a rotating neutron star at the heart of the source. While these are all – astronomically speaking – young, with ages between tens and hundreds of thousands of years, two are spinning surprisingly slowly – slower than any other known gamma-ray pulsar. Another discovery experienced a “glitch”, a sudden change of unknown origin in its otherwise regular rotation.

“We discovered so many new pulsars for three main reasons: the huge computing power provided by Einstein@Home; our invention of novel and more efficient search methods; and the use of newly-improved Fermi-LAT data. These together provided unprecedented sensitivity for our large survey of more than 100 Fermi catalog sources,” says Dr. Colin Clark, lead author of the paper now published in The Astrophysical Journal.

Neutron stars are compact remnants of supernova explosions and consist of exotic, extremely dense matter. They measure about 20 kilometers across and weigh as much as half a million Earths. Because of their strong magnetic fields and fast rotation they emit beamed radio waves and energetic gamma rays, like a cosmic lighthouse. If these beams point towards Earth once or twice per rotation, the neutron star becomes visible as a pulsating radio or gamma-ray source – a so-called pulsar.

“Blindly” detecting gamma-ray pulsars

Finding these periodic pulsations from gamma-ray pulsars is very difficult. On average, the Large Area Telescope (LAT) onboard the Fermi spacecraft detects only 10 photons per day from a typical pulsar. To detect periodicities, years of data must be analyzed, during which the pulsar might rotate billions of times. For each photon, one must determine exactly when during a single split-second rotation it was emitted. This requires searching years-long data sets with very fine resolution in order not to miss a signal. The computing power required for these “blind searches” – when little to no information about the pulsar is known beforehand – is enormous.
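The following Python sketch shows the core idea of such a search in miniature: fold hypothetical photon arrival times at trial frequencies and score the resulting phase distribution with the de Jager H-test. The data, the pulse model and the frequency grid are all invented for illustration; a real blind search must additionally scan spin-down rate and sky position, which is what makes it so computationally expensive.

```python
import numpy as np

# Toy blind search: fold photon arrival times at trial frequencies and
# score each folding with the H-test. All numbers are invented.
rng = np.random.default_rng(42)
T = 3.15e7                 # about one year of data, in seconds
f_true = 7.6               # hypothetical spin frequency, Hz

t_bg = rng.uniform(0.0, T, 3000)                         # background photons
n_rot = rng.integers(0, int(T * f_true), 300)            # pulsed photons:
t_pulse = (n_rot + rng.normal(0.1, 0.02, 300)) / f_true  # clustered in phase
times = np.concatenate([t_bg, t_pulse])

def h_test(phases, m_max=20):
    """de Jager H-test statistic; large values indicate periodicity."""
    h, z = 0.0, 0.0
    for m in range(1, m_max + 1):
        c = np.cos(2.0 * np.pi * m * phases).sum()
        s = np.sin(2.0 * np.pi * m * phases).sum()
        z += 2.0 / len(phases) * (c * c + s * s)
        h = max(h, z - 4.0 * m + 4.0)
    return h

# The frequency grid must be finer than ~1/T, or the signal smears out --
# this is why blind searches over wide frequency ranges are so expensive.
for f in f_true + np.array([-3e-7, -3e-8, 0.0, 3e-8, 3e-7]):
    print(f"f = {f:.8f} Hz  H = {h_test((times * f) % 1.0):9.1f}")
```

Only the trial frequency matching the true one produces a large H value; an offset of even a few times 1/T washes the pulse out, illustrating why years of data demand such a fine search grid.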

Previous similar blind searches have detected 37 gamma-ray pulsars in Fermi-LAT data. All blind-search discoveries in the past four years have been made by Einstein@Home, which has found a total of 21 gamma-ray pulsars in blind searches – more than a third of all such objects discovered this way.

Supercomputing resource: Einstein@Home

Enlisting the help of tens of thousands of volunteers from all around the world, who donate idle compute cycles on their home computers, the team was able to conduct a large-scale survey with the distributed computing project Einstein@Home. In total this search required about 10,000 years of CPU core time. It would have taken more than one thousand years on a single household computer; on Einstein@Home it finished within one year – even though it used only part of the project’s resources.

The scientists selected as targets the most “pulsar-like” objects among 1,000 unidentified sources in the Fermi-LAT Third Source Catalog, based on their gamma-ray energy distribution. For each of the 118 selected sources, they used novel, highly efficient methods to analyze the detected gamma-ray photons for hidden periodicities.

One dozen and one new neutron star

“So far we’ve identified 17 new pulsars among the 118 gamma-ray sources we searched with Einstein@Home. The latest publication in The Astrophysical Journal presents 13 of these discoveries,” says Clark. “We knew that there had to be several unidentified pulsars in the Fermi data, but it’s always very exciting to actually detect one of them and at the same time it’s very satisfying to understand what its properties are.” About half of the discoveries would have been missed in previous Einstein@Home surveys, but the novel improved methods made the difference.

Most of the discoveries were what the scientists expected: gamma-ray pulsars that are relatively young and were born in supernovae some tens to hundreds of thousands of years ago. Two of them, however, spin slower than any other known gamma-ray pulsar. Slow-spinning young pulsars on average emit fewer gamma-rays than faster-spinning ones, so finding these fainter objects helps to map out the entire gamma-ray pulsar population. Another newly discovered pulsar experienced a strong “glitch”, a sudden speedup of unknown origin in its otherwise regular rotation. Glitches are observed in other young pulsars and might be related to re-arrangements of the neutron star interior, but they are not well understood.

Searching for gamma-ray pulsars in binary systems

“Einstein@Home searched through 118 unidentified pulsar-like sources from the Fermi-LAT Catalog,” says Prof. Dr. Bruce Allen, director of Einstein@Home and director at the Max Planck Institute for Gravitational Physics in Hannover. “Colin has shown that 17 of these are indeed pulsars, and I would bet that many of the remaining 101 are also pulsars, but in binary systems, where we lack sensitivity. In the future, using improved methods, Einstein@Home is going to chase after those as well, and I am optimistic that we will find at least some of them.”

The entire sky as seen by the Fermi Gamma-ray Space Telescope and the 13 pulsars discovered by Einstein@Home that have now been published. The field below each inset shows the pulsar name and its rotation frequency. The flags in the insets show the nationalities of the volunteers whose computers found the pulsars. Knispel/Clark/Max Planck Institute for Gravitational Physics/NASA/DOE/Fermi LAT Collaboration

A new predictive computational model could help optimize the recovery of stroke patients

After a stroke, patients typically have trouble walking, and few regain the gait they had beforehand. Researchers funded by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) have developed a computational walking model that could help guide patients to their best possible recovery after a stroke. Computational modeling uses computers to simulate and study the behavior of complex systems using mathematics, physics, and computer science. In this case, the researchers are developing a computational modeling program that constructs a model of the patient from walking data collected on a treadmill and then predicts how the patient will walk after different planned rehabilitation treatments. They hope that one day the model will be able to predict the best gait a patient can achieve after completing rehabilitation, as well as recommend the best rehabilitation approach to help the patient achieve an optimal recovery.

Currently, there is no way for a clinician to determine the most effective rehabilitation treatment prescription for a patient. Clinicians cannot always know which treatment approach to use, or how the approach should be implemented to maximize walking recovery. B.J. Fregly, Ph.D., and his team (Andrew Meyer, Ph.D.; Carolynn Patten, P.T., Ph.D.; and Anil Rao, Ph.D.) at the University of Florida developed a computational modeling approach to help answer these questions. They tested the approach on a patient who had suffered a stroke.

The team first measured how the patient walked at his preferred speed on a treadmill. Using those measurements, they then constructed a neuromusculoskeletal computer model of the patient, personalized to the patient’s skeletal anatomy, foot contact pattern, muscle force generating ability, and neural control limitations. Fregly and his team found that the personalized model accurately predicted the patient’s gait at a faster walking speed, even though no measurements at that speed were used to construct the model.

Fregly and his team believe this advance is the first step toward the creation of personalized neurorehabilitation prescriptions, filling a critical gap in the current treatment planning process for stroke patients. Together with devices that would ensure the patient is exercising using the proper force and torque, personalized computational models could one day help maximize the recovery of patients who have suffered a stroke.

“This modeling effort is an excellent example of how computer models can make predictions of complex processes and accelerate the integration of knowledge across multiple disciplines,” says Grace Peng, Ph.D., director of the NIBIB program in Mathematical Modeling, Simulation and Analysis.

"Through additional NIH funding, we are embarking with collaborators at Emory University on our first project to predict optimal walking treatments for two individuals post-stroke,” says Fregly. “We are excited to begin exploring whether model-based personalized treatment design can improve functional outcomes."

Study suggests computational role for neurons that prevent other neurons from firing.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons -- neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a "winner-take-all" operation, in which signals from multiple input neurons induce a signal in just one output neuron.

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers will present their results this week at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She's joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch's group has studied communication and resource allocation in ad hoc networks -- networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

"There's a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems," Lynch says. "We're trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties."

Artificial neurology

In recent years, artificial neural networks -- computer models roughly based on the structure of the brain -- have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.

An artificial neural network consists of "nodes" that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion -- for instance, if they exceed a particular value -- the node "fires," or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated "weight," which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.

In artificial-intelligence applications, a neural network is "trained" on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.
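As a concrete illustration of the scheme just described, here is a minimal Python sketch of a two-layer feed-forward network with threshold firing. The weights, thresholds and input values are arbitrary numbers chosen for the example, not anything from the researchers' model.

```python
import numpy as np

# Minimal feed-forward network with threshold firing, as described above.
# Weights and thresholds are arbitrary; real networks learn them by training.
rng = np.random.default_rng(0)

def layer(signals, weights, thresholds):
    # Each node sums its weighted incoming signals and fires (outputs 1)
    # only if the sum exceeds its threshold.
    summed = signals @ weights
    return (summed > thresholds).astype(float)

x = np.array([0.9, 0.1, 0.4])    # data fed into the first layer
W1 = rng.normal(size=(3, 4))     # weights can augment or diminish a signal
W2 = rng.normal(size=(4, 2))
h = layer(x, W1, thresholds=0.5)     # first layer of nodes
y = layer(h, W2, thresholds=0.5)     # output layer
print("hidden firing pattern:", h, "output:", y)
```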

Biological plausibility

Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory "neurons." In a standard artificial neural network, the values of the weights on the connections are usually positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.

Many artificial-intelligence applications also use "feed-forward" networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco's circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.

Finally, the MIT researchers' network is probabilistic. In a typical artificial neural net, if a node's input values exceed some threshold, the node fires. But in the brain, increasing the strength of the signal traveling over an input neuron only increases the chances that an output neuron will fire. The same is true of the nodes in the researchers' model. Again, this modification is crucial to enacting the winner-take-all strategy.

In the researchers' model, the number of input and output neurons is fixed, and the execution of the winner-take-all computation is purely the work of a bank of auxiliary neurons. "We are trying to see the trade-off between the computational time to solve a given problem and the number of auxiliary neurons," Parter explains. "We consider neurons to be a resource; we don't want to spend too much of it."

Inhibition's virtues

Parter and her colleagues were able to show that with only one inhibitory neuron, it's impossible, in the context of their model, to enact the winner-take-all strategy. But two inhibitory neurons are sufficient. The trick is that one of the inhibitory neurons -- which the researchers call a convergence neuron -- sends a strong inhibitory signal if more than one output neuron is firing. The other inhibitory neuron -- the stability neuron -- sends a much weaker signal as long as any output neurons are firing.

The convergence neuron drives the circuit to select a single output neuron, at which point it stops firing; the stability neuron prevents a second output neuron from becoming active once the convergence neuron has been turned off. The self-feedback circuits from the output neurons enhance this effect. The longer an output neuron has been turned off, the more likely it is to remain off; the longer it's been on, the more likely it is to remain on. Once a single output neuron has been selected, its self-feedback circuit ensures that it can overcome the inhibition of the stability neuron.

Without randomness, however, the circuit won't converge to a single output neuron: Any setting of the inhibitory neurons' weights will affect all the output neurons equally. "You need randomness to break the symmetry," Parter explains.
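A small Python simulation can make the mechanism concrete. The sketch below implements the qualitative ingredients named above -- self-feedback on the outputs, a strong "convergence" inhibitor active whenever more than one output fires, a weaker-acting "stability" inhibitor active whenever any output fires, and probabilistic firing. The weights and the sigmoid firing rule are ad hoc choices for illustration, not the paper's exact construction.

```python
import numpy as np

# Toy winner-take-all circuit: n output neurons with excitatory self-feedback,
# one convergence inhibitor (fires when >1 output is active) and one stability
# inhibitor (fires while any output is active). All weights are hypothetical.
rng = np.random.default_rng(1)
n = 10                            # output neurons, all receiving equal input
state = np.ones(n, dtype=bool)    # which outputs are currently firing

W_SELF, W_CONV, W_STAB = 2.0, 1.0, 2.0   # ad hoc connection weights

def fire_prob(drive):
    # Probabilistic firing: stronger drive only raises the chance of firing.
    return 1.0 / (1.0 + np.exp(-6.0 * drive))

for step in range(500):
    active = int(state.sum())
    if active == 1:   # stability inhibition now keeps the winner locked in
        print(f"winner: neuron {np.flatnonzero(state)[0]} after {step} steps")
        break
    convergence = 1.0 if active > 1 else 0.0   # strong inhibition if >1 firing
    stability = 1.0 if active >= 1 else 0.0    # sustained inhibition if any firing
    drive = 1.0 + W_SELF * state - W_CONV * convergence - W_STAB * stability
    state = rng.random(n) < fire_prob(drive)   # randomness breaks the symmetry
```

With these weights, each currently active neuron keeps firing with probability about one half while more than one is active, so the active set thins out step by step until a single winner remains; run repeatedly, different neurons win on different runs, which is exactly the symmetry-breaking role of randomness described above.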

The researchers were able to determine the minimum number of auxiliary neurons required to guarantee a particular convergence speed and the maximum convergence speed possible given a particular number of auxiliary neurons.

Adding more convergence neurons increases the convergence speed, but only up to a point. For instance, with 100 input neurons, two or three convergence neurons are all you need; adding a fourth doesn't improve efficiency. And just one stability neuron is already optimal.

But perhaps more intriguingly, the researchers showed that including excitatory neurons -- neurons that stimulate, rather than inhibit, other neurons' firing -- as well as inhibitory neurons among the auxiliary neurons cannot improve the efficiency of the circuit. Similarly, any arrangement of inhibitory neurons that doesn't observe the distinction between convergence and stability neurons will be less efficient than one that does.

Assuming, then, that evolution tends to find efficient solutions to engineering problems, the model suggests both an answer to the question of why inhibitory neurons are found in the brain and a tantalizing question for empirical research: Do real inhibitory neurons exhibit the same division between convergence neurons and stability neurons?
