Versions of a gene linked to schizophrenia may trigger runaway pruning of the teenage brain’s still-maturing communications infrastructure, NIH-funded researchers have discovered. People with the illness show fewer such connections between neurons, or synapses. The gene switched on more in people carrying the suspect versions, who faced a higher risk of developing the disorder, which is characterized by hallucinations, delusions, and impaired thinking and emotions.

“Normally, pruning gets rid of excess connections we no longer need, streamlining our brain for optimal performance, but too much pruning can impair mental function,” explained Thomas Lehner, Ph.D., director of the Office of Genomics Research Coordination of the NIH’s National Institute of Mental Health (NIMH), which co-funded the study along with the Stanley Center for Psychiatric Research at the Broad Institute and other NIH components. “It could help explain schizophrenia’s delayed age-of-onset of symptoms in late adolescence/early adulthood and shrinkage of the brain’s working tissue. Interventions that put the brakes on this pruning process-gone-awry could prove transformative.”

The gene, called C4 (complement component 4), sits in by far the tallest tower on schizophrenia’s genomic “skyline” of more than 100 chromosomal sites harboring known genetic risk for the disorder. Affecting about 1 percent of the population, schizophrenia is known to be as much as 90 percent heritable, yet discovering how specific genes work to confer risk has proven elusive, until now.

A team of scientists led by Steve McCarroll, Ph.D., of the Broad Institute and Harvard Medical School, Boston, leveraged the statistical power of genome analyses of 65,000 people and data from 700 postmortem brains, along with the precision of mouse genetic engineering, to discover the secrets of schizophrenia’s strongest known genetic risk. C4’s role represents the most compelling evidence to date linking specific gene versions to a biological process that could cause at least some cases of the illness.

“Since schizophrenia was first described over a century ago, its underlying biology has been a black box, in part because it has been virtually impossible to model the disorder in cells or animals,” said McCarroll. “The human genome is providing a powerful new way into this disease. Understanding these genetic effects on risk is a way of prying open that black box, peering inside and starting to see actual biological mechanisms.”

McCarroll’s team, including Harvard colleagues Beth Stevens, Ph.D., Michael Carroll, Ph.D., and Aswin Sekar, reported their findings online Jan. 27, 2016, in the journal Nature.

A swath of chromosome 6 encompassing several genes known to be involved in immune function emerged as the strongest signal associated with schizophrenia risk in genome-wide analyses by the NIMH-funded Psychiatric Genomics Consortium over the past several years. Yet conventional genetics failed to turn up any specific gene versions there linked to schizophrenia.

To discover how the immune-related site confers risk for the mental disorder, McCarroll’s team mounted a search for “cryptic genetic influences” that might generate “unconventional signals.” C4, a gene with known roles in immunity, emerged as a prime suspect because it is unusually variable across individuals.  It is not unusual for people to have different numbers of copies of the gene and distinct DNA sequences that result in the gene working differently.

The researchers dug deeply into the complexities of how such structural variation relates to the gene’s level of expression and how that, in turn, might relate to schizophrenia. They discovered structurally distinct versions that affect expression of two main forms of the gene in the brain. The more a version resulted in expression of one of the forms, called C4A, the more strongly it was associated with schizophrenia. The more copies of the suspect versions a person had, the more C4 switched on and the higher their risk of developing schizophrenia. Moreover, in the human brain, the C4 protein turned out to be most prevalent in the cellular machinery that supports connections between neurons.

Adapting mouse molecular genetics techniques for studying synaptic pruning and C4’s role in immune function, the researchers also discovered a previously unknown role for C4 in brain development. During critical periods of postnatal brain maturation, C4 tags a synapse for pruning by depositing on it a sister protein called C3. Again, the more C4 got switched on, the more synapses got eliminated.

In humans, such streamlining/pruning occurs as the brain develops to full maturity in the late teens/early adulthood – conspicuously corresponding to the age-of-onset of schizophrenia symptoms.

Future treatments designed to suppress excessive pruning by counteracting runaway C4 in at-risk individuals might nip in the bud a process that could otherwise develop into psychotic illness, the researchers suggest. And thanks to the head start gained in understanding the role of such complement proteins in immune function, such agents are already in development, they note.

“This study marks a crucial turning point in the fight against mental illness. It changes the game,” added acting NIMH director Bruce Cuthbert, Ph.D. “Thanks to this genetic breakthrough, we can finally see the potential for clinical tests, early detection, new treatments and even prevention.”

A bewildering physics problem has apparently been solved by researchers, in a study that provides a mathematical basis for understanding issues ranging from predicting the formation of deserts to making artificial intelligence more efficient.

In research carried out at the University of Cambridge, a team developed a computer program that can answer this mind-bending puzzle: Imagine that you have 128 soft spheres, a bit like tennis balls. You can pack them together in any number of ways. How many different arrangements are possible?

The answer, it turns out, is something like 10^250 (a 1 followed by 250 zeros). The number, also referred to as ten unquadragintilliard, is so huge that it vastly exceeds the total number of particles in the universe.

Far more important than the solution, however, is the fact that the researchers were able to answer the question at all. The method that they came up with can help scientists to calculate something called configurational entropy - a term used to describe how structurally disordered the particles in a physical system are.

Being able to calculate configurational entropy would, in theory, eventually enable us to answer a host of seemingly impossible problems - such as predicting the movement of avalanches, or anticipating how the shifting sand dunes in a desert will reshape themselves over time.

These questions belong to a field called granular physics, which deals with the behaviour of materials such as snow, soil or sand. Different versions of the same problem, however, exist in numerous other fields, such as string theory, cosmology, machine learning, and various branches of mathematics. The research shows how questions across all of those disciplines might one day be addressed.

Stefano Martiniani, a Benefactor Scholar at St John's College, University of Cambridge, who carried out the study with colleagues in the Department of Chemistry, explained: "The problem is completely general. Granular materials themselves are the second most processed kind of material in the world after water and even the shape of the surface of the Earth is defined by how they behave."

"Obviously being able to predict how avalanches move or deserts may change is a long, long way off, but one day we would like to be able to solve such problems. This research performs the sort of calculation we would need in order to be able to do that."

At the heart of these problems is the idea of entropy - a term which describes how disordered the particles in a system are. In physics, a "system" refers to any collection of particles that we want to study, so for example it could mean all the water in a lake, or all the water molecules in a single ice cube.

When a system changes, for example because of a shift in temperature, the arrangement of these particles also changes. For example, if an ice cube is heated until it becomes a pool of water, its molecules become more disordered. Therefore, the ice cube, which has a tighter structure, is said to have lower entropy than the more disordered pool of water.
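To put a number on that ice-to-water example, the entropy change of melting can be computed directly from the latent heat. The values below are standard textbook constants, not figures from the study:

```python
# Entropy gained when ice melts: heat Q is absorbed at a fixed
# temperature T, so the entropy change is simply Q / T.
# Standard textbook values, not data from the study.
latent_heat = 334.0   # J absorbed per gram of ice melted
T_melt = 273.15       # melting point of ice, in kelvin

dS_per_gram = latent_heat / T_melt
print(f"entropy gain: {dS_per_gram:.2f} J/K per gram of melted ice")
```

Every gram of melted ice thus carries roughly 1.2 J/K more entropy than it did as part of the cube.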

At a molecular level, where everything is constantly vibrating, it is often possible to observe and measure this quite clearly. In fact, many molecular processes involve a spontaneous increase in entropy until they reach a steady equilibrium.

In granular physics, however, which tends to involve materials large enough to be seen with the naked eye, change does not happen in the same way. A sand dune in the desert will not spontaneously change the arrangement of its particles (the grains of sand). It needs an external factor, like the wind, for this to happen.

This means that while we can predict what will happen in many molecular processes, we cannot easily make equivalent predictions about how systems will behave in granular physics. Doing so would require us to be able to measure changes in the structural disorder of all of the particles in a system - its configurational entropy.

To do that, however, scientists need to know how many different ways a system can be structured in the first place. The calculations involved in this are so complicated that they have been dismissed as hopeless for any system involving more than about 20 particles. Yet the Cambridge study defied this by carrying out exactly this type of calculation for a system, modelled on a computer, in which the particles were 128 soft spheres, like tennis balls.

"The brute force way of doing this would be to keep changing the system and recording the configurations," Martiniani said. "Unfortunately, it would take many lifetimes before you could record it all. Also, you couldn't store the configurations, because there isn't enough matter in the universe with which to do it."

Instead, the researchers devised a solution that involved taking a small sample of all possible configurations and working out the probability of each occurring, based on the number of arrangements that would lead to those particular configurations appearing.

Based on these samples, it was possible to extrapolate not only how many ways the entire system could be arranged, but also how ordered one state was compared with another - in other words, its overall configurational entropy.
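The statistical trick behind this can be illustrated in a few lines of Python. This is only a toy sketch of the sampling principle, not the authors' actual algorithm: if distinct states are visited with probability p, then averaging -ln p over the visited states estimates the entropy S = -Σ p ln p without enumerating every state. All quantities below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system with 10,000 distinct "packings", each reached
# from a random start with probability proportional to its basin size.
n_basins = 10_000
p = rng.dirichlet(np.ones(n_basins))   # invented basin probabilities

# Simulate a modest number of "quenches": each sample lands in
# basin i with probability p[i].
visited = rng.choice(n_basins, size=2_000, p=p)

# The sample mean of -ln p over visited states estimates the entropy
# S = -sum_i p_i * ln(p_i), using only the sampled basins.
S_est = -np.log(p[visited]).mean()
S_true = -(p * np.log(p)).sum()

print(f"estimated S = {S_est:.3f}, exact S = {S_true:.3f}")
print(f"effective number of arrangements = {np.exp(S_est):.0f}")
```

Only 2,000 samples are drawn here even though there are 10,000 basins; the study scales the same idea to a system with on the order of 10^250 arrangements.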

Martiniani added that the team's problem-solving technique could be used to address all sorts of problems in physics and maths. He himself is, for example, currently carrying out research into machine learning, where one of the problems is knowing how many different ways a system can be wired to process information efficiently.

"Because our indirect approach relies on the observation of a small sample of all possible configurations, the answers it finds are only ever approximate, but the estimate is a very good one," he said. "By answering the problem we are opening up uncharted territory. This methodology could be used anywhere that people are trying to work out how many possible solutions to a problem you can find."

The paper, "Turning intractable counting into sampling: computing the configurational entropy of three-dimensional jammed packings," is published in the journal Physical Review E.

Stefano Martiniani is a St John's Benefactor Scholar and Gates Scholar at the University of Cambridge.

CAPTION: See how the Wyss-developed FISSEQ technology captures the location of individual RNA molecules within cells, enabling the reconstruction of neuronal networks in the three-dimensional space of intact brain tissue.

The Wyss Institute for Biologically Inspired Engineering at Harvard University today announced a cross-institutional consortium to map the brain's neural circuits with unprecedented fidelity. The consortium is made possible by a $21 million contract from the Intelligence Advanced Research Projects Activity (IARPA) and aims to discover the brain's learning rules and synaptic 'circuit design', which will in turn help advance neurally derived machine learning algorithms.

The consortium will leverage the Wyss Institute's FISSEQ (fluorescent in-situ sequencing) method to push forward neuronal connectomics, the science of identifying the neuronal cells that work together to bring about specific brain functions. FISSEQ was developed in 2014 by the Wyss Institute Core Faculty member George Church and colleagues and, unlike traditional sequencing technologies, it provides a method to pinpoint the precise locations of specific RNA molecules in intact tissue. The consortium will harness this FISSEQ capability to accurately trace the complete set of neuronal cells and their connecting processes in intact brain tissue over long distances, which is currently difficult to do with other methods.

Awarded a competitive IARPA MICrONS contract, the consortium will further the overall goals of President Obama's BRAIN Initiative, which aims to improve the understanding of the human mind and uncover new ways to treat neuropathological disorders like Alzheimer's disease, schizophrenia, autism and epilepsy. The consortium's work will fundamentally innovate the technological framework used to decipher the principal circuits neurons use to communicate and fulfill specific brain functions. These insights can be applied to enhance artificial intelligence in different areas of machine learning such as fraud detection, pattern and image recognition, and self-driving car decision making.

"Historically, the mapping of neuronal paths and circuits in the brain has required brain tissue to be sectioned and visualized by electron microscopy. Complete neurons and circuits are then reconstructed by aligning the individual electron microsope images, this process is costly and inaccurate due to use of only one color (grey)," said Church, who is the Principal Investigator for the IARPA MICrONs consortium. "We are taking an entirely new approach to neuronal connectomics--immensely colorful barcodes--that should overcome this obstacle; and by integrating molecular and physiological information we are looking to render a high-definition map of neuronal circuits dedicated first to specific sensations, and in the future to behaviors and cognitive tasks."

Church is Professor of Genetics at Harvard Medical School, and Professor of Health Sciences and Technology at Harvard and MIT.

To map neural connections, the consortium will genetically engineer mice so that each neuron is barcoded throughout its entire structure with a unique RNA sequence, a technique called BOINC (Barcoding of Individual Neuronal Connections) developed by Anthony Zador at Cold Spring Harbor Laboratory. Thus a complete map representing the precise location, shape and connections of all neurons can be generated.
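A quick back-of-the-envelope sketch shows why random RNA barcodes can label every neuron uniquely. This is a toy illustration of the capacity argument, not the BOINC protocol itself; the barcode length and neuron count below are invented for the example.

```python
import random

random.seed(1)

# A random RNA barcode of length k has 4**k possible sequences, so
# even modest lengths make collisions between neurons vanishingly rare.
BARCODE_LEN = 30        # 4**30 is about 1.2e18 possible barcodes
N_NEURONS = 100_000     # hypothetical number of labeled neurons

def random_barcode(k: int) -> str:
    return "".join(random.choice("ACGU") for _ in range(k))

barcodes = {random_barcode(BARCODE_LEN) for _ in range(N_NEURONS)}

# If every barcode is distinct, each neuron carries a unique label that
# in situ sequencing could then read out wherever its processes reach.
print(f"unique barcodes: {len(barcodes)} out of {N_NEURONS}")
```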

The key to visualizing this complex map will be FISSEQ, which is able to sequence the total complement of barcodes and pinpoint their exact locations using a super-resolution microscope. Importantly, since FISSEQ analysis can be applied to intact brain tissue, the error-prone brain-sectioning procedure that is part of common mapping studies can be avoided and long neuronal processes can be more accurately traced in larger numbers and at a faster pace.

In addition, the scientists will provide the barcoded mice with a sensory stimulus, such as a flash of light, to highlight and glean the circuits corresponding to that stimulus within the much more complex neuronal map. An improved understanding of how neuronal circuits are composed and how they function over longer distances will ultimately allow the team to build new models for machine learning.

The multi-disciplinary consortium spans six institutions. In addition to Church, the Wyss Institute's effort will be led by Samuel Inverso, Ph.D., a Staff Software Engineer and Co-Investigator of the project. Complementing the Wyss team are co-Principal Investigators Anthony Zador, Ph.D., Alexei Koulakov, Ph.D., and Jay Lee, Ph.D., at Cold Spring Harbor Laboratory. Adam Marblestone, Ph.D., is a Co-Investigator at MIT, and Liam Paninski, Ph.D., is a co-Principal Investigator at Columbia University. The Harvard-led consortium is partnering with another MICrONS team, led by Principal Investigator Tai Sing Lee, Ph.D., of Carnegie Mellon University under a separate multi-million-dollar contract, with Sandra Kuhlman, Ph.D., of Carnegie Mellon University and Alan Yuille, Ph.D., of Johns Hopkins University as co-Principal Investigators. That team will develop computational models of the neural circuits and a new generation of machine learning algorithms by studying the behavior of large populations of neurons in behaving animals, as well as the circuitry of these neurons as revealed by the innovative methods developed by the consortium.

"It is very exciting to see how technology developed at the Wyss Institute is now becoming instrumental in showing how specific brain functions are wired into the neuronal architecture. The methodology implemented by this research can change the trajectory of brain mapping world wide," said Wyss Institute Founding director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children's Hospital and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.

It's hard to imagine life today without computers, but computer technology was not warmly welcomed in Germany following World War II.

That's one conclusion Assistant Professor Corinna Schlombs, who teaches history in Rochester Institute of Technology's College of Liberal Arts, found while studying productivity and trans-Atlantic technology transfer and their impact following the war. Her research was made possible with a $90,000 grant from the National Science Foundation.

"The American government was bringing computers to western Germany after World War II as productivity machines to help raise their standard of living," Schlombs said. "Germany looked at them as automation, machines that would bring poverty and technological unemployment. It's a very different reception than what Americans had expected."

Her findings will be part of a book she's writing about productivity and culture. The grant is enabling her to take time away from teaching to focus on her research for the book, which she expects to complete this summer. In the course of her research, she has traveled to Munich, Koblenz and Sindelfingen in her native Germany, the National Archives near Washington, D.C., and the Walter Reuther Library in Detroit. A trip to Frankfurt, Germany, is also planned.

"Computer history for a long time was about nuts and bolts, how they worked," Schlombs said. "Since the mid-'90s, historians started asking more questions about human relations, how technology was embedded in our lives and culture, who works with these computers, who maintained them... In other countries, computer development had a much different background. In Britain, for example, there was far less government funding for computing research than in the U.S., and in Germany, government funding early on included small and medium computers. That has influenced how computers have developed in other countries."

Computer technology transfer followed the Marshall Plan initiative to help rebuild European economies following World War II. The United States was the leader in computer technology in the 1950s, as the production of the vacuum tubes needed to make computers work was Allied-controlled, she said.

The first electronic supercomputers were brought to Europe in 1956. One of them, a Univac, was the size of a room and was flown overseas so that it would not be damaged by salty ocean air during a long ship voyage.

The Germans constructed a building specifically for the massive supercomputer, leaving one wall open to accommodate its delivery and installation.

"Americans brought in this notion of productivity which was highly debated in Germany," Schlombs said. "As it turned out, unemployment rates did not skyrocket as they did in the U.S."

Schlombs plans to create a website to go with her book so other historians can use the resources she's found. "It will help us reflect how we will deal with productivity in the United States in the future. Our way is not the only way. Technology carries the values of the people who created it."

Iowa State's Ming-Chen Hsu is developing a computational toolkit to improve the design, engineering and operation of all kinds of machines. Photo by Christopher Gannon.

Three thin leaflets blew open and blood blasted through an artificial heart valve, the center stream firing reds and yellows, the colors indicating a flow speed up to 125 centimeters per second. When the leaflets slammed shut, the flow turned to light blue eddies, indicating blood flow had nearly stopped.

And then Ming-Chen Hsu, an Iowa State University assistant professor of mechanical engineering, searched his computer for another video and clicked play.

This time the tip of a wind turbine blade appeared on his monitor, constantly moving, flexing and vibrating as the blade rotated around the rotor hub. Red indicated air moving at a relative speed of 52 meters per second over the top of the blade; blue and green marked the slower air around the blade.

These are supercomputer models featuring technologies called computational mechanics, fluid-structure interaction and isogeometric analysis. They show the flow fields and stresses that mechanical systems have to withstand. And they’re part of a toolkit Hsu and his research group are developing to improve the design, engineering and operation of all kinds of machines.  

“If we are able to use computers to model and simulate these engineering designs, we can save a lot of time and money,” Hsu said. “We don’t have to build and test every prototype anymore.”

Hsu said it would be impractical, for example, for the wind energy industry to build and test full-scale prototypes of each and every idea for improving the performance of wind turbines.

Instead, the wind energy industry can opt for computational models. Hsu said they’re based on complex mathematical equations. They’re full of data. And studies show they’re accurate.

Using the models, “We can predict the real physics of the problems we are looking at,” he said.
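For a flavor of what such equation-based prediction involves, here is a minimal sketch, assuming nothing about the group's actual solvers (which couple 3-D fluids with moving structures): a one-dimensional advection equation, the simplest model of material carried along by a flow, stepped with a finite-difference scheme.

```python
import numpy as np

# Toy flow model: the 1-D linear advection equation
#   du/dt + c * du/dx = 0
# stepped with a first-order upwind finite difference. All parameters
# are illustrative; real fluid-structure simulations solve far richer
# equations (Navier-Stokes coupled to deforming structures).
nx, nt = 200, 300              # grid points and time steps
c = 1.0                        # flow speed
dx = 1.0 / (nx - 1)            # grid spacing on the unit interval
dt = 0.4 * dx / c              # time step within the CFL stability limit

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200.0 * (x - 0.25) ** 2)   # initial pulse of "fluid"

for _ in range(nt):
    # upwind update: each point takes information from upstream
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])

# The pulse should have been carried roughly 0.6 units downstream.
print(f"pulse peak now near x = {x[np.argmax(u)]:.2f}")
```

Even this toy version shows the appeal Hsu describes: changing a parameter and re-running takes seconds, where building a physical prototype would not.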

And so those videos showing blood flowing through an artificial heart valve or the vibrations of a wind turbine blade are a lot more than colorful graphics. To engineers, they can be as good as full-scale prototypes for studying durability and performance.

Hsu has a background in computational mechanics and started modeling wind turbines during his doctoral studies at the University of California, San Diego. He started modeling heart valves as a postdoctoral research associate at the University of Texas at Austin.

He’s been at Iowa State since the fall of 2013 and has built a research group that currently includes doctoral students Austin Herrema, Chenglong Wang, Michael Wu and Fei Xu plus undergraduate student Carolyn Darling. The group is now working on two wind turbine studies and an engine project:

  • They’re modeling the performance of the “Hexcrete” concrete wind turbine towers being developed by Sri Sritharan, Iowa State’s Wilson Engineering Professor in Civil, Construction and Environmental Engineering. The goal is to use prefabricated concrete to build taller wind turbine towers that can access the steadier winds at 120 meters and higher. The project is primarily supported by the U.S. Department of Energy.
  • They’re also developing software to help engineers design wind turbine blades. The software will bridge a wide gap between blade design tools and performance simulations.  The project is supported by a National Science Foundation grant that established Iowa State’s graduate program in wind energy science, engineering and policy.
  • And Hsu’s research group is modeling the performance of the rotors inside gas turbines. The models will help engineers design the next generation of turbine engines. The project is supported by a grant from the U.S. Army Research Office.

Hsu, who teaches courses in fluid mechanics, said the modeling can be applied to all sorts of questions about a machine. In wind turbines, for example, the models can provide answers about material stress and fatigue, rotor aerodynamics, blade design, the wake behind turbines and power efficiency.

“Ten to 15 years ago, computational fluid-structure interaction was new to everyone,” Hsu said. “But with the success of this field, more and more methods are being picked up by industry. Our computational methods are improving engineering designs.”
