Studying the evolution of thousands of bacterial proteins makes it possible to decipher many interactions between human proteins. The results will help clarify the molecular details of thousands of interactions potentially involved in diseases such as cancer.

Cells operate like an incredibly well-synchronized orchestra of molecular interactions among proteins. Understanding this molecular network is essential not only to understand how an organism works but also to determine the molecular mechanisms responsible for a multitude of diseases. In fact, it has been observed that protein interacting regions are preferentially mutated in tumours. The investigation of many of these interactions is challenging. However, a study coordinated by Simone Marsili and David Juan, from Alfonso Valencia's team at the CNIO, will advance our knowledge on thousands of them. The work, published in the journal Proceedings of the National Academy of Sciences (PNAS), demonstrates that it is possible to understand a significant number of interactions among human proteins from the evolution of their counterparts in simpler cells, such as bacteria cells.

According to Juan Rodríguez, from the Structural Computational Biology Group at the CNIO and first author of the paper, "the complexity of human beings does not result only from the number of proteins that we have, but primarily from how they interact with each other. However, of the estimated 200,000 protein-protein interactions, only a few thousand have been characterised at the molecular level". It is very difficult to study the molecular properties of many important interactions without reliable structural information. It is this "twilight zone" that, for the first time, CNIO researchers have managed to explore.

FROM BACTERIA TO HUMANS TO UNDERSTAND DISEASES

Although more than 3 billion years of evolution separate bacteria and humans, the CNIO team has used the information accumulated across thousands of bacterial sequences to predict interactions between proteins in humans. "We have used the protein coevolution phenomenon: proteins that interact tend to experience coordinated evolutionary changes that maintain the interaction despite the accumulation of mutations over time," says David Juan. "We have demonstrated that we can use this phenomenon to detect molecular details of interactions in humans that we share with very distant species. What is most interesting is that this allows us to transfer information from bacteria in order to study interactions in humans that we knew almost nothing about," adds Simone Marsili.
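As a rough illustration of the kind of signal coevolution methods look for, the toy Python sketch below scores covariation between two columns of a joint sequence alignment using mutual information. It is only a simplified stand-in for the statistical models used in the study; the "alignment" and the column choices are fabricated for the example.

```python
# Illustrative sketch only: a toy covariation score between two alignment
# columns, in the spirit of coevolution-based contact prediction. The actual
# study uses far more sophisticated statistical models.
from collections import Counter
from math import log2

def column(msa, i):
    """Return the i-th column of a multiple sequence alignment."""
    return [seq[i] for seq in msa]

def mutual_information(col_a, col_b):
    """Mutual information between two alignment columns (in bits)."""
    n = len(col_a)
    pa = Counter(col_a)
    pb = Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), c in pab.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Toy "alignment" of concatenated sequences from protein A + protein B,
# one row per bacterial species (entirely fabricated for illustration).
msa = ["ALKVD", "ALRVE", "GLKID", "GLRIE", "ALKVD", "GLRIE"]
# Columns 2 (in protein A) and 4 (in protein B) change in a coordinated way,
# which shows up as a high covariation score relative to other column pairs.
print(mutual_information(column(msa, 2), column(msa, 4)))
print(mutual_information(column(msa, 0), column(msa, 4)))
```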

These new results may lead to important implications for future research. "A deeper understanding of these interactions opens the door to the modeling of three-dimensional structures that may help us to design drugs targeting important interactions in various types of cancer," explains David Juan. "This knowledge can also improve our predictions of the effects of various mutations linked to tumour development," says Rodríguez.

DATA-BASED SCIENCE

The laboratory of Alfonso Valencia, head of the Structural Biology and Biocomputing Programme, has been working in the field of protein coevolution since the 1990s. This field has significantly advanced in recent years. "Thanks to the amount of biological data that is being generated today, we can use new [super]computational methods that take into account a greater number of factors," explains Valencia. According to the researchers, the pace of innovation in massive experimental techniques is providing additional data, making it possible to design more complex statistical models that provide an ever more complete view of the biological systems, "something particularly important in multifactorial diseases, such as cancer."

This work was supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund.

CAPTION LEFT: Calculated shear stress (SS). RIGHT: Soft tissue is subtracted using image processing. Images represent a 0.62 mm x 0.91 mm x 0.62 mm volume. In these images grayscale represents the poly-L-lactic acid fiber-mesh scaffold of 30 micrometre diameter, green marks individual cells, yellow is soft tissue and red is calcification. The "cool" color map represents the fluid-induced surface SS for physiologically relevant stress levels of > 0.1 g/(cm·s²).

A novel tool for realistic simulations of artificial bone cultures.

A team of researchers from the New Jersey Institute of Technology (NJIT) in Newark, NJ, in collaboration with colleagues from the University of Oklahoma (OU) in Norman, OK, has demonstrated a novel image-based simulation approach that combines Bone Tissue Engineering (BTE) experiments, micro-Computed Tomography (µCT) sample scanning, "Virtual Histology" image segmentation, and Lattice Boltzmann Method (LBM) fluid dynamics to produce realistic simulations of BTE scaffolds cultured in flow perfusion bioreactors. Understanding the interplay between scaffold manufacturing parameters, culturing conditions and cell biology within the construct is necessary for transitioning regenerative medicine to a clinical setting. Although previous attempts have been made at modeling artificial tissue cultures, they were limited by oversimplifying assumptions, such as a uniform cell/tissue monolayer covering the scaffold's surface and idealized scaffold geometries. In contrast, this novel and scalable technology enables researchers to account for the realistic architectural non-idealities inherent to tissue engineering scaffolds, as well as for the presence of cells and tissues in their pores. An additional advantage of the method is that it allows cell behavior and tissue growth to be correlated with the flow physics occurring inside complex 3D scaffold microenvironments. Moreover, such relationships can be tracked over time via modeling based on nondestructive repeated scanning. The report appears in the December 2016 issue of the journal TECHNOLOGY.
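As a minimal sketch of the "Virtual Histology" segmentation step in such a pipeline, the Python snippet below labels the voxels of a grayscale micro-CT volume as scaffold, cells, soft tissue or calcification by simple intensity thresholding. The thresholds and the test volume are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of a "Virtual Histology"-style segmentation step, assuming
# a grayscale micro-CT volume stored as a NumPy array. The intensity ranges
# below are hypothetical placeholders, not the values used in the study.
import numpy as np

def segment_volume(ct, fiber_range=(0.55, 0.80), cell_range=(0.35, 0.55),
                   tissue_range=(0.20, 0.35), calcification_min=0.80):
    """Assign each voxel a label: 0 background, 1 scaffold fiber, 2 cells,
    3 soft tissue, 4 calcification (toy thresholds for illustration)."""
    labels = np.zeros(ct.shape, dtype=np.uint8)
    labels[(ct >= fiber_range[0]) & (ct < fiber_range[1])] = 1
    labels[(ct >= cell_range[0]) & (ct < cell_range[1])] = 2
    labels[(ct >= tissue_range[0]) & (ct < tissue_range[1])] = 3
    labels[ct >= calcification_min] = 4
    return labels

# Fabricated 64^3 test volume standing in for a real micro-CT scan.
ct = np.random.rand(64, 64, 64)
labels = segment_volume(ct)
print({int(k): int(v) for k, v in zip(*np.unique(labels, return_counts=True))})
```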

"By taking advantage of the exact spatio-temporal information provided by the high-resolution micro-CT imaging, this approach opens the door for transforming computer-assisted tissue engineering which is traditionally done based on virtual drawings of scaffolds and very little-to-no cross validation against experiment", says Professor Roman Voronov, Ph.D. of the New Jersey Institute of Technology and Principal Investigator of the paper. The manuscript offers a 'recipe' for the technology which culminated out of an over a decade-long ef-fort by the team working on the BTE-modeling problem. While the imaging aspect of their approach offers an unprecedented ability to detect and tell apart individual cells, soft tissues and calcification embedded within scaffold, the LBM is chosen for its ability to handle large-scale simulations (such as those resulting from the sub-micron voxel resolution shown here) and complex boundary conditions typical of the BTE scaffolds. And, although the presented results are meant to serve as proof-of-concept only, the demonstrated technology is not limited by sample size, since its algorithms are fully parallelizable for supercomputing.

Therefore, it is straightforward to extend the approach to full-scaffold models, the only physical limitation being the size of the micro-CT sample chamber, which is typically much larger than the scaffolds themselves.

"Moreover, in case repeated scanning is not possible, or not desirable, the number of scans required can be minimized by simply subtracting any bio-matter from an end-point image of a fully-cultured scaffolds. This would help to estimate the scaffold's initial geometry prior to the cell seeding.", said Taseen Alam of NJIT, the first author of the paper. Once this is done, the fluid flow patterns established within the initial empty scaffold can be correlated to the tissue growth observed in the end-point image. Finally, since the calculated results are overlaid onto the experimental images in 3D, the method itself serves as a direct comparison to the experiment. And any correlation obtained as a result of the image-based modeling can be subsequently tested by attempting predictions in samples previously-unencountered by the code.

The team is now working to further extend the technology by including molecular transport, which is rarely simulated by the conventional models. For example, distribution of O2 and nutrient/waste within the scaffold, transport of scaffold degradation byproducts whose acidic nature may be detrimental to the cells, and molecular signals between the cells could all be accounted for using the same image-based approach. In this way, a more complete picture of cell behavior in complex micro-environments can be generated.

Finally, application of this technology to driving BTE cultures by the computer, in real time and in a closed-loop manner, presents an exciting new direction for commercial deployment of the artificially grown tissues and organs in a hospital setting.

Additional co-authors of the TECHNOLOGY paper are Quang Long Pham from NJIT, Vassilios I. Sikavitsas, Ph.D., Dimitrios V. Papavassiliou, Ph.D., and Robert L. Shambaugh, Ph.D., from the University of Oklahoma at Norman.

This work was funded by the Gustavus and Louise Pfeiffer Research Foundation and by the National Science Foundation (NSF) under award number CBET-0700813.

Corresponding author for this study in TECHNOLOGY is Professor Roman Voronov, Ph.D., rvoronov@njit.edu.

During the international IEDM 2016 conference the week of Dec. 5, Purdue University researchers showcased a range of concepts and technologies that foreshadow the future of the semiconductor industry.

The concepts included innovations to extend the performance of today's silicon-based transistors, along with entirely new types of nanoelectronic devices to complement and potentially replace conventional technology in future computers.

"For the past 50 years, ever more electronic devices envelop us in our day-to-day life, and electronic-device innovation has been a major economic factor in the U.S. and world economy," said Gerhard Klimeck, a professor of electrical and computer engineering and director of Purdue's Network for Computational Nanotechnology in the university's Discovery Park. "These advancements were enabled by making the basic transistors in computer chips ever smaller. Today the critical dimensions in these devices are just some 60 atoms thick, and further device size reductions will certainly stop at small atomic dimensions." 

New technologies will be needed for industry to keep pace with Moore's law, an observation that the number of transistors on a computer chip doubles about every two years, resulting in rapid progress in computers and telecommunications. It is becoming increasingly difficult to continue shrinking electronic devices made of conventional silicon-based semiconductors, called complementary metal-oxide-semiconductor (CMOS) technology, said Muhammad Ashraful Alam, Purdue University's Jai N. Gupta Professor of Electrical and Computer Engineering.

 “As transistors are becoming smaller they are facing a number of challenges in terms of increasing their performance and ensuring their reliability,” he said.

 Purdue researchers presented five papers proposing innovative designs to extend CMOS technology and new devices to potentially replace or augment conventional transistors during the annual International Electron Devices Meeting (IEDM 2016) Dec. 5-7 in San Francisco. The conference showcases the latest developments in electronic device technology.

Integrated circuits, or chips, now contain around 2 billion transistors. The more devices that are packed onto a chip, the greater the heating, with today's chips generating around 100 watts per square centimeter, a power density comparable to that of a nuclear reactor.

 "As a result, self-heating has become a fundamental concern that hinders performance and can damage transistors, and we are making advances to address it," Alam said.

Two of the IEDM conference papers detail research to suppress self-heating and enhance the performance of conventional CMOS chips. The remaining papers deal with new devices for future computer technologies that require lower power to operate, meaning they would not self-heat as significantly. 

"We are not only working to extend the state-of-art of traditional technology, but also to develop next-generation transistor technologies," Alam said.

Transistors are electronic switches that turn on and off to allow computations using the binary code of ones and zeros. A critical component in transistors, called the gate, controls this switching. As progressively smaller transistors are designed, however, this control becomes increasingly difficult because electrons leak around the ultra-small gate.

One of the conference papers focuses on a potential solution to this leakage: creating transistors that are surrounded by the gate, instead of the customary flat design. Unfortunately, enveloping the transistor with a gate causes increased heating, which hinders reliability and can damage the device. The researchers used a technique called submicron thermo-reflectance imaging to pinpoint locations of excessive heating. Another paper details a potential approach to suppress this self-heating, modeling how to more effectively dissipate heat by changing how the transistor connects to the complex circuitry in the chip.
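For intuition only, the lumped thermal-resistance sketch below shows why the heat path from a transistor to the surrounding circuitry matters; it is not the device-level modeling used in the Purdue papers, and the power and resistance values are made up.

```python
# Rough illustration of self-heating with a lumped thermal-resistance model:
# steady-state temperature rise is estimated as dT = P * R_th, and R_th
# depends on how the transistor is connected to the surrounding circuitry.
def temperature_rise(power_w, r_th_k_per_w):
    """Steady-state temperature rise above ambient for a lumped heat path."""
    return power_w * r_th_k_per_w

def parallel(*resistances):
    """Combined thermal resistance of several heat-escape paths in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Hypothetical numbers: a transistor dissipating 50 microwatts, with heat
# escaping either only through the substrate, or also through metal contacts.
p = 50e-6             # watts
r_substrate = 1.0e6   # K/W, made-up value
r_contacts = 0.4e6    # K/W, made-up value
print(temperature_rise(p, r_substrate))                        # substrate only
print(temperature_rise(p, parallel(r_substrate, r_contacts)))  # extra contact path
```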

The three remaining papers propose next-generation devices: networks of nanomagnets, extremely thin layers of a material called black phosphorus and "tunnel" field effect transistors, or FETs. Such technologies would operate at far lower voltages than existing electronics, generating less heat.

 "You want to use as low a voltage as possible because that reduces power dissipation and if you can reduce power dissipation the battery of your cell phone will last longer, you can do more computing with a smaller amount of power and you will be able to cram more functional elements into a given area," Klimeck said.

The tunnel FETs could potentially reduce power consumption by more than 40 times.

"Reducing power consumption by a factor of 40 would be a huge development," Klimeck said.

Another conference paper details research to develop devices made of black phosphorus, which might one day replace silicon as a semiconductor in transistors. Findings showed the devices can pass large amounts of current with ultra-low resistance while demonstrating good switching performance, said Peide Ye, the Richard J. and Mary Jo Schwartz Professor of Electrical and Computer Engineering.

"We have demonstrated the highest performance of this kind of 2-D device," Ye said.

Devices made from the material also could bring new types of optical and chemical sensors. The devices were created using a technique called chemical vapor deposition in research performed at Purdue's Birck Nanotechnology Center.

Future research will include efforts to create smaller black phosphorus devices, Ye said.

A fifth paper details how networks of nanomagnets could serve as the building blocks of future computers. Findings show the networks mimic Ising networks - named after German physicist Ernst Ising - which harness mathematics to solve complex probabilistic problems.

The nanomagnet networks might be used to draw from huge databases to perform demanding jobs in areas ranging from business and finance, to health care and scientific research.

The conventional approach to performing big data computations is through new software running on CMOS devices. However, nanomagnet networks represent a different approach:  developing an entirely new type of hardware for the feat, said Zhihong Chen, an associate professor of electrical and computer engineering.

"We have shown experimentally that the nanomagnet arrays are potential building blocks for probabilistic computer hardware," Chen said.

She is working with a team also led by Supriyo Datta, the Thomas Duncan Distinguished Professor of Electrical and Computer Engineering, and Joerg Appenzeller, the Barry M. and Patricia L. Epstein Professor of Electrical and Computer Engineering and scientific director of nanoelectronics in the Birck Nanotechnology Center. The three researchers are members of a "spintronics" preeminent team formed by Purdue's College of Engineering.

Purdue University researchers recently showcased a range of concepts and technologies that foreshadow the future of the semiconductor industry. Here, a device is made from the semiconductor germanium, in research led by Peide Ye, Purdue's Richard J. and Mary Jo Schwartz Professor of Electrical and Computer Engineering. (Purdue University image/Erin Easterling)

Scientists from MIPT, the Institute for Nuclear Research (INR) of the Russian Academy of Sciences, and Novosibirsk State University (NSU) have discovered that the proportion of unstable particles in the composition of dark matter in the days immediately after the Big Bang was no more than 2%-5%. Their study has been published in Physical Review D.

"The discrepancy between the cosmological parameters in the modern Universe and the Universe shortly after the Big Bang can be explained by the fact that the proportion of dark matter has decreased. We have now, for the first time, been able to calculate how much dark matter could have been lost and what the corresponding size of the unstable component would be," says a co-author of the study academician Igor Tkachev, Head of the Department of Experimental Physics at INR and a lecturer at MIPT's Department of Fundamental Interactions and Cosmology.

Astronomers first suspected that there was a large proportion of "hidden mass" in the Universe back in the 1930s, when Fritz Zwicky discovered "peculiarities" in a cluster of galaxies in the constellation Coma Berenices - the galaxies moved as if they were under the effect of gravity from an unseen source. This hidden mass that does not manifest itself in any way, except for a gravitational effect, was given the name dark matter. According to data from the Planck space telescope, the proportion of dark matter in the Universe is 26.8%, the rest is "ordinary" matter (4.9%) and dark energy (68.3%).

The nature of dark matter remains unknown, however, its properties could potentially help scientists to solve the problem that arose after studying observations from the Planck telescope. This device accurately measured the fluctuations in the temperature of the cosmic microwave background radiation - the "echo" of the Big Bang. By measuring these fluctuations, the researchers were able to calculate key cosmological parameters using observations of the Universe in the recombination era - approximately 300,000 years after the Big Bang.

"However, it turned out that some of these parameters, namely the Hubble parameter, which describes the rate of expansion of the Universe, and also the parameter associated with the number of galaxies in clusters vary significantly with data that we obtain from observations of the modern Universe, by directly measuring the speed of expansion of galaxies and studying clusters. This variance was significantly more than margins of error and systematic errors known to us. Therefore we are either dealing with some kind of unknown error, or the composition of the ancient Universe is considerably different to the modern Universe," says Tkachev.

The discrepancy can be explained by the decaying dark matter (DDM) hypothesis, which states that in the early Universe there was more dark matter, but then part of it decayed.

"Let us imagine that dark matter consists of several components, as in ordinary matter (protons, electrons, neutrons, neutrinos, photons). And one component consists of unstable particles with a rather long lifespan: in the era of the formation of hydrogen (hundreds of thousands of years after the Big Bang) they are still in the Universe, but by now (billions of years later) they have disappeared, having decayed into neutrinos or hypothetical relativistic particles. In that case, the amount of dark matter in the era of hydrogen formation and today will be different," says the lead author of the research, Dmitry Gorbunov, a professor at MIPT and staff member at INR.

The authors of the study, Igor Tkachev, Dmitry Gorbunov, and Anton Chudaykin from INR, MIPT and NSU, analyzed Planck data and compared them with the DDM model and the standard ΛCDM (Lambda-Cold Dark Matter) model with stable dark matter. The comparison showed that the DDM model is more consistent with the observational data. However, the researchers found that the effect of gravitational lensing (the distortion of cosmic microwave background radiation by a gravitational field) greatly limits the proportion of decaying dark matter in the DDM model.

Using data from observations of various cosmological effects, the researchers were able to give an estimate of the relative concentration of the decaying components of dark matter in the region of 2% to 5%.

"This means that in today's Universe there is 5% less dark matter than in the recombination era. We are not currently able to say how quickly this unstable part decayed; dark matter may still be disintegrating even now, although that would be a different and considerably more complex model," says Tkachev. 

CAPTION The concentration of the unstable component of dark matter F against the speed of expansion of non-gravitationally bound objects (proportional to the age of the Universe) when examining various combinations of Planck data for several different cosmological phenomena.

CAPTION Abstraction: walking electrons.

Russian scientists find a way to reliably connect quantum elements

Scientists from the Institute of Physics and Technology of the Russian Academy of Sciences and MIPT have let two electrons loose in a system of quantum dots to create a quantum computer memory cell of a higher dimension than a qubit (a quantum bit). In their study published in Scientific Reports, the researchers demonstrate for the first time how quantum walks of several electrons can help to implement quantum computation.

"By studying the system with two electrons, we solved the problems faced in the general case of two identical interacting particles. This paves the way toward compact high-level quantum structures," comments Leonid Fedichkin, Expert at the Russian Academy of Sciences, Vice-Director for Science at NIX (a Russian computer company), and Associate Professor at MIPT's Department of Theoretical Physics.

In a matter of hours, a quantum computer would be able to hack through the most popular cryptosystem used even in your web browser. As far as more benevolent applications are concerned, a quantum computer would be capable of molecular modeling that takes into account all interactions between the particles involved. This in turn would enable the development of highly efficient solar cells and new drugs. To have practical applications, a quantum computer needs to incorporate hundreds or even thousands of qubits. And that is where it gets tricky.

As it turns out, the unstable nature of the connection between qubits remains the major obstacle preventing us from using quantum walks of particles for quantum computation. Unlike their classical analogs, quantum structures are extremely sensitive to external noise. To prevent a system of several qubits from losing the information stored in it, liquid nitrogen (or helium) needs to be used for cooling. Plenty of schemes have been proposed for the experimental realization of a separate qubit. In an earlier study, a research team led by Prof. Fedichkin demonstrated that a qubit could be physically implemented as a particle "taking a quantum walk" between two extremely small semiconductors known as quantum dots, which are connected by a "quantum tunnel." From the perspective of an electron, the quantum dots represent potential wells. Thus, the position of the electron can be used to encode the two basis states of the qubit, |0⟩ and |1⟩, depending on whether the particle is in one well or the other. Rather than sit in one of the two wells, the electron is smeared out between the two different states, taking up a definite position only when its coordinates are measured. In other words, it is in a superposition of two states.
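A minimal numerical sketch of such a double-dot qubit, treating it as a two-level system with a tunneling coupling, is shown below; the Hamiltonian and units are illustrative simplifications rather than the model used in the study.

```python
# Minimal sketch of a double-quantum-dot qubit: a single electron tunneling
# between two potential wells, modeled as a two-level system H = -delta*sigma_x.
# Starting in the left dot (|0>), the electron oscillates between the two
# basis states. Units and the tunneling amplitude are arbitrary choices.
import numpy as np

delta = 1.0                                   # tunneling amplitude (arbitrary units)
H = -delta * np.array([[0.0, 1.0],            # -delta*sigma_x couples the two dots
                       [1.0, 0.0]])
energies, vectors = np.linalg.eigh(H)

def evolve(psi0, t):
    """Time-evolve a state under H (hbar = 1) using the eigendecomposition."""
    phases = np.exp(-1j * energies * t)
    return vectors @ (phases * (vectors.conj().T @ psi0))

psi0 = np.array([1.0, 0.0], dtype=complex)    # electron localized in dot |0>
for t in np.linspace(0.0, np.pi, 5):
    psi = evolve(psi0, t)
    print(f"t={t:.2f}  P(left)={abs(psi[0])**2:.3f}  P(right)={abs(psi[1])**2:.3f}")
```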

If an entangled state is created between several qubits, their individual states can no longer be described separately from one another, and any valid description must refer to the state of the whole system. This means that a system of three qubits has a total of 8 basis states and is in a superposition of them: A|000⟩+B|001⟩+C|010⟩+D|100⟩+E|011⟩+F|101⟩+G|110⟩+H|111⟩. By influencing the system, one inevitably affects all of the 8 coefficients, whereas influencing a system of regular bits only affects their individual states. By implication, n bits can store n variables, while n qubits can store 2^n variables. Qudits offer an even greater advantage, since n four-level qudits (aka ququarts) can encode 4^n, or 2^n×2^n, variables. To put this into perspective, 10 ququarts store approximately 100,000 times more information than 10 bits. With greater values of n, the zeros in this number start to pile up very quickly.
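The counting behind that comparison, stated in the article's own framing, can be checked in a few lines:

```python
# Quick check of the counting argument, in the article's own framing:
# n bits store n variables, n qubits give 2**n amplitudes, and n four-level
# qudits (ququarts) give 4**n = 2**n * 2**n.
n = 10
bits = n                     # 10 variables
qubit_amplitudes = 2 ** n    # 1,024
ququart_amplitudes = 4 ** n  # 1,048,576
print(qubit_amplitudes, ququart_amplitudes, ququart_amplitudes / bits)
# 1,048,576 / 10 is roughly 100,000 -- the "approximately 100,000 times more
# information than 10 bits" quoted in the text.
```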

In this study, Alexey Melnikov and Leonid Fedichkin obtain a system of two qudits implemented as two entangled electrons quantum-walking around the so-called cycle graph. To make one, the scientists had to "connect the dots" to form a circle (once again, these are quantum dots, and they are connected by the effect called quantum tunneling). The entanglement of the two electrons is caused by the mutual electrostatic repulsion experienced by like charges. It is possible to create a system of even more qudits in the same volume of semiconductor material. To do this, it is necessary to connect quantum dots in a pattern of winding paths and have more wandering electrons. The quantum walks approach to quantum computation is convenient because it is based on a natural process. Nevertheless, the presence of two identical electrons in the same structure was a source of additional difficulties that had previously remained unsolved.
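The sketch below sets up a simplified two-particle quantum walk on a cycle of quantum dots, with nearest-neighbor tunneling and a distance-dependent repulsion standing in for the electrostatic interaction; spin and the full identical-particle bookkeeping are omitted, and all parameters are arbitrary stand-ins for the model studied by Melnikov and Fedichkin.

```python
# Simplified sketch of two interacting electrons quantum-walking on a cycle
# of N quantum dots: nearest-neighbor tunneling plus a toy repulsion term.
import numpy as np

N = 6            # number of quantum dots on the cycle
J = 1.0          # tunneling amplitude between neighboring dots
U = 2.0          # strength of the electron-electron repulsion

def index(i, j):
    """Map the two-particle configuration (dot i, dot j) to a basis index."""
    return i * N + j

def ring_distance(i, j):
    d = abs(i - j)
    return min(d, N - d)

H = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        a = index(i, j)
        # repulsion: falls off with distance around the ring; a large penalty
        # is used if both electrons sit on the same dot
        H[a, a] = U / ring_distance(i, j) if i != j else 10.0 * U
        for step in (-1, 1):  # each electron hops to a neighboring dot
            H[a, index((i + step) % N, j)] -= J
            H[a, index(i, (j + step) % N)] -= J

energies, vectors = np.linalg.eigh(H)

def evolve(psi0, t):
    """Time evolution (hbar = 1) via the eigendecomposition of H."""
    return vectors @ (np.exp(-1j * energies * t) * (vectors.conj().T @ psi0))

# Start with the electrons on opposite sides of the ring and watch the
# probability distribution over dot pairs spread out.
psi0 = np.zeros(N * N, dtype=complex)
psi0[index(0, N // 2)] = 1.0
prob = np.abs(evolve(psi0, 2.0)) ** 2
print(prob.reshape(N, N).round(3))
```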

The phenomenon of particle entanglement plays a pivotal role in quantum information processing. However, in experiments with identical particles, it is necessary to distinguish so-called false entanglement, which can arise between electrons that are not interacting, from genuine entanglement. To do this, the scientists performed mathematical calculations for both cases, viz., with and without entanglement. They observed the changing distribution of probabilities for the cases with 6, 8, 10, and 12 dots, i.e., for a system of two qudits with three, four, five, and six levels each. The scientists demonstrated that their proposed system is characterized by a relatively high degree of stability.

It has been a long time since people first set their hearts on building a universal quantum computer, but so far we have been unable to connect a sufficient number of qubits. The work of the Russian researchers brings us one step closer to a future where quantum computations are commonplace. And although there are algorithms that quantum computers could never accelerate, others would still benefit enormously from devices able to exploit the potential of large numbers of qubits (or qudits). These alone would be enough to save us a couple of thousand years of computation.
