
Credit: Vikram Mulligan. This conceptual artwork, "Illuminating the energy landscape," depicts how computational design can explore and illuminate structured peptides across a vast energy landscape.

New approaches developed at the UW Institute for Protein Design offer advantages for researchers attempting to design midsize drug compounds.

New supercomputational strategies reported this week in Science might help realize the promise of peptide-based drugs.

Macrocyclic peptides have sparked pharmaceutical industry interest because their physical and chemical properties could become the basis of a new generation of medications. Peptides are similar to protein molecules but differ in their smaller size, structure and functions.

Small peptides have the benefits of small molecule drugs, like aspirin, and large antibody therapies, like rituximab, with fewer drawbacks.  They are stable like small molecules and potent and selective like antibodies. 

An example of a macrocyclic peptide drug success story is cyclosporine, used as an immunosuppressant for organ transplants and some autoimmune disorders. 

Before the work described in the Science paper, there was no way to systematically design ordered peptide macrocycles like cyclosporine. 

Naturally occurring peptides that might serve as reliable starting points, or scaffolds, are few.  Equally frustrating, they often fail to perform as expected when repurposed.  Instead, researchers had resorted to screening large, randomly generated libraries of compounds in the hope of finding what they needed.

The methods covered in the report, "Comprehensive computational design of ordered peptide macrocycles," now address these problems.

The lead authors are Parisa Hosseinzadeh, Gaurav Bhardwaj and Vikram Mulligan, of the University of Washington School of Medicine Department of Biochemistry and the UW Institute for Protein Design. The senior author is David Baker, professor of biochemistry and head of the institute. Baker is also a Howard Hughes Medical Institute investigator.

"In our paper," the researchers noted, "we describe computational strategies for designing peptides that adopt diverse shapes with very high accuracy and for providing comprehensive coverage of the structures that can be formed by short peptides."

A conceptual illustration of sampling many peptide structures and selecting the best through their energy landscapes, by Ahmad Hosseinzadeh and Khosro Khosravi.

They pointed out the advantages of this new computational approach:

First, they were able to design and compile a library of many new stable peptide scaffolds that can provide the basic platforms for drug candidate architecture.  Their methods also can be used to design additional custom peptides with arbitrary shapes on demand.

"We sampled the diverse landscape of shapes that peptides can form, as a guide for designing the next generation of drugs," the researchers said.

Key to controlling the geometry and chemistry of the molecules was designing peptides with both natural amino acids, called L-amino acids, and their mirror opposites, D-amino acids. (The L and D come from the Latin words for left and right, as some molecular structures can have left- or right-handedness, or chirality.)

The D-amino acids improved pharmacological properties by increasing resistance to the natural enzymes that break down peptides.  Including D-amino acids in the designs also allows for a more diverse range of shapes.

Designing peptides takes intensive computing power, which makes the calculations expensive.  The researchers credited a cadre of citizen-scientist volunteers who donated spare smartphone and computer time.  The Hyak supercomputer at the University of Washington also ran some of the programs.

The researchers pointed to future directions for their peptide supercomputational design approaches.  They hope to design peptides that can permeate cell membranes and go inside living cells.

They also plan to add new functions to peptide structures by stabilizing binding motifs at protein-protein interfaces for basic science studies.  For clinical applications, they anticipate using their methods and scaffolds to develop peptide-based drugs.

The work published this week in Science was supported by awards from the National Institutes of Health, the Washington Research Foundation, the American Cancer Society, a Pew Latin-American fellowship, and the Howard Hughes Medical Institute.   Facilities sponsored by the U.S. Department of Energy also were used.

Other researchers on the project were: Matthew D. Shortridge, Timothy W. Craven, Fatima Párdo-Avila, Stephen A. Rettie, David E. Kim, Daniel-Adriano Silva, Yehia M. Ibrahim, Ian K. Webb, John R. Cort, Joshua N. Adkins and Gabriele Varani.


Supercomputer algorithms detected the spread of cancer to lymph nodes in women with breast cancer as well as or better than pathologists.

Digital imaging of tissue sample slides for pathology has become possible in recent years because of advances in slide scanning technology. Artificial intelligence, where computers learn to do tasks that normally require  human intelligence, has potential for making diagnoses. Using supercomputer algorithms to analyze digital pathology slide images could potentially improve the accuracy and efficiency of pathologists.
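In broad strokes, such image-analysis systems are typically built around deep neural networks that classify small patches cropped from the enormous slide images. The sketch below is a hypothetical, minimal patch classifier written with PyTorch; the layers and dimensions are made up for illustration and do not describe any specific competition entry.

# Minimal patch-level classifier sketch (illustrative of the general
# deep learning approach; not any specific challenge entry).
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)   # tumor vs. normal patch

    def forward(self, x):                         # x: (batch, 3, 64, 64) RGB patches
        h = self.features(x)
        return self.head(h.flatten(1))

model = PatchClassifier()
logits = model(torch.randn(8, 3, 64, 64))         # a batch of 8 fake patches
print(logits.shape)                               # torch.Size([8, 2])

In practice, patch-level scores like these are typically aggregated into a whole-slide heat map that highlights suspicious regions for review.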

 
Researchers competed in an international challenge in 2016 to produce computer algorithms to detect the spread of breast cancer by analyzing tissue slides of sentinel lymph nodes, the lymph node closest to a tumor and the first place it would spread. The performance of the algorithms was compared against the performance of a panel of 11 pathologists participating in a simulation exercise.

Authors: Babak Ehteshami Bejnordi, M.S., Radboud University Medical Center, Nijmegen, the Netherlands and coauthors

Results:

  • Some computer algorithms were better at detecting cancer spread than pathologists in an exercise that mimicked routine pathology workflow.
  • Some algorithms were as good as an expert pathologist interpreting images without any time constraints.

Study Limitations: The test data on which algorithms and pathologists were evaluated are not comparable to the mix of cases pathologists encounter in clinical practice.

Study Conclusions: Supercomputer algorithms detected the spread of cancer to lymph nodes in women with breast cancer as well as or better than pathologists. Evaluation in a clinical setting is required to determine the benefit of using artificial intelligence to detect cancer in pathology practice.

An accompanying editorial, "Deep Learning Algorithms for Detection of Lymph Node Metastases From Breast Cancer," was written by Jeffrey Alan Golden, M.D., of Brigham and Women's Hospital, Boston.

The study, "Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes," by Tien Yin Wong, M.D., Ph.D., Singapore National Eye Center, Singapore, and coauthors.


Novel combination of security proof techniques and protocols helps solve the puzzle of high-speed quantum key distribution for secure communication

A quantum information scientist from the National University of Singapore (NUS) has developed efficient "toolboxes" comprising theoretical tools and protocols for quantifying the security of high-speed quantum communication. Assistant Professor Charles Lim is part of an international team of experimental and theoretical scientists from Duke University, Ohio State University and Oak Ridge National Laboratory that has recently achieved a significant breakthrough in high-rate quantum secure communication.

Quantum computers are powerful machines that could break today's most prevalent encryption technologies in minutes. Crucially, recent progress in quantum computing indicates that this threat is no longer merely theoretical: large-scale quantum computers are becoming a reality. Once available, such computers could be exploited to decrypt any organisation's trade secrets, confidential communication, and sensitive data, whether retrospectively or remotely.

Quantum key distribution (QKD) is an emerging quantum technology that enables the establishment of secret keys between two or more parties in an untrusted network. Importantly, unlike conventional encryption techniques, the security of QKD does not rest on computational assumptions -- it is based solely on the established laws of physics. As such, messages and data encrypted using QKD keys are secure against any attacks on the communication channel. For this reason, QKD is widely seen as the solution that will resolve the security threats posed by future quantum computers.

Today, QKD technology is relatively mature and there are now several companies selling QKD systems. Very recently, researchers from China managed to distribute QKD keys to two ground stations located 1200 kilometres apart. However, despite these major developments and advances, practical QKD systems still face some inherent limitations. One major limitation is the secret key throughput -- current QKD systems are only able to transmit 10,000 to 100,000 secret bits per second. This limitation is largely due to the choice of quantum information basis: many QKD systems still use a low-dimensional information basis, such as the polarisation basis, to encode quantum information.

"Poor secret key rates arising from current QKD implementations have been a major bottleneck affecting the use of quantum secure communication on a wider scale. For practical applications, such systems need to be able to generate secret key rates in the order of megabits per second to meet today's digital communication requirements," said Asst Prof Lim, who is from the Department of Electrical and Computer Engineering at NUS Faculty of Engineering as well as Centre for Quantum Technologies at NUS.

In the study, the research team developed a QKD system based on time and phase bases, which allows more secret bits to be packed into a single photon. Notably, the team achieved two secret bits per photon, with a secret key rate of 26.2 megabits per second.

The findings of the study were published online in the journal Science Advances on 24 November 2017.

Time-bin encoding

Encoding quantum information in the time and phase bases is a promising approach that is highly robust against typical optical channel disturbances and yet scalable in the information dimension. In this approach, secret bits are encoded in the arrival time of single photons, while the complementary phase states -- for measuring information leakages -- are encoded in the relative phases of the time states. This encoding technique, in principle, could allow one to pack arbitrarily many bits into a single photon and generate extremely high secret key rates for QKD. However, implementing such high-dimensional systems is technically challenging and tools for quantifying the practical security of high-dimensional QKD are limited.
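The dimension argument can be sketched with simple arithmetic. Only the log2 relation and the paper's reported two bits per photon come from the article; the detection rate and post-processing "efficiency" factor below are assumptions for illustration.

# Back-of-the-envelope scaling for d-dimensional time-bin QKD.
# The detection rate and 'efficiency' factor are illustrative assumptions.
import math

def raw_bits_per_photon(d):
    # d distinguishable time bins -> up to log2(d) raw bits per detected photon
    return math.log2(d)

def rough_secret_key_rate(detection_rate_hz, d, efficiency):
    # 'efficiency' lumps together losses to error correction and privacy amplification.
    return detection_rate_hz * raw_bits_per_photon(d) * efficiency

print(raw_bits_per_photon(4))                                    # 2.0 bits per photon for d = 4
print(rough_secret_key_rate(10e6, 4, 0.6) / 1e6, "Mbit/s")       # hypothetical numbers

The sketch shows why going from a two-level basis (1 bit per photon) to four or more time bins can multiply the key rate without increasing the photon detection rate.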

To overcome these problems for their QKD system, the researchers used a novel combination of security proof techniques developed by Asst Prof Lim and an interferometry technique by Professor Daniel Gauthier's research group from Duke University and Ohio State University. Asst Prof Lim was involved in the protocol design of the QKD system as well as proving the security of the protocol using quantum information theory.

"Our newly developed theoretical and experimental techniques have resolved some of the major challenges for high-dimensional QKD systems based on time-bin encoding, and can potentially be used for image and video encryption, as well as data transfer involving large encrypted databases. This will help pave the way for high-dimensional quantum information processing," added Asst Prof Lim, who is one of the co-corresponding authors of the study.

Next steps

Moving forward, the team will be exploring ways to generate more bits in a single photon using time-bin encoding. This will help advance the development of commercially viable QKD systems for ultra-high rate quantum secure communication.

Illustration of the molecular structure of the graphene nanoribbons prepared by UCLA chemistry professor Yves Rubin and colleagues.  CREDIT Courtesy of Yves Rubin

Tiny structures could be next-generation solution for smaller electronic devices

Silicon -- the shiny, brittle metalloid commonly used to make semiconductors -- is an essential ingredient of modern-day electronics. But as electronic devices have become smaller and smaller, creating tiny silicon components that fit inside them has become more challenging and more expensive.

Now, UCLA chemists have developed a new method to produce nanoribbons of graphene, next-generation structures that many scientists believe will one day power electronic devices.

This research is published online in the Journal of the American Chemical Society.

The nanoribbons are extremely narrow strips of graphene, the width of just a few carbon atoms. They're useful because they possess a bandgap, which means that electrons must be "pushed" to flow through them to create electrical current, said Yves Rubin, a professor of chemistry in the UCLA College and the lead author of the research.

"A material that has no bandgap lets electrons flow through unhindered and cannot be used to build logic circuits," he said.

Rubin and his research team constructed graphene nanoribbons molecule by molecule using a simple reaction based on ultraviolet light and exposure to 600-degree heat.

"Nobody else has been able to do that, but it will be important if one wants to build these molecules on an industrial scale," said Rubin, who also is a member of the California NanoSystems Institute at UCLA.

The process improves upon other existing methods for creating graphene nanoribbons, one of which involves snipping open tubes of graphene known as carbon nanotubes. That particular approach is imprecise and produces ribbons of inconsistent sizes -- a problem because the value of a nanoribbon's bandgap depends on its width, Rubin said.
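As a rough illustration of why width matters: for narrow graphene nanoribbons, the bandgap is commonly approximated as scaling inversely with ribbon width. The prefactor in the sketch below is an assumption chosen for illustration, not a value from the UCLA study.

# Rough illustration: bandgap vs. ribbon width, assuming E_gap ~ alpha / width.
# The prefactor alpha is an illustrative assumption, not a measured value.
ALPHA_EV_NM = 1.0          # assumed scaling constant, eV * nm

def approx_bandgap_ev(width_nm):
    return ALPHA_EV_NM / width_nm

for width in (1.0, 2.0, 5.0):   # widths in nanometers
    print(f"width {width} nm -> bandgap ~{approx_bandgap_ev(width):.2f} eV")

Under this kind of scaling, ribbons of inconsistent width end up with inconsistent bandgaps, which is exactly the problem a molecule-by-molecule synthesis route is meant to avoid.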

To create the nanoribbons, the scientists started by growing crystals of four different colorless molecules. The crystals locked the molecules into the perfect orientation to react, and the team then used light to stitch the molecules into polymers, which are large structures made of repeating units of carbon and hydrogen atoms.

The scientists then placed the shiny, deep blue polymers in an oven containing only argon gas and heated them to 600 degrees Celsius. The heat provided the necessary boost of energy for the polymers to form the final bonds that gave the nanoribbons their final shape: hexagonal rings composed of carbon atoms, and hydrogen atoms along the edges of the ribbons.

"We're essentially charring the polymers, but we're doing it in a controlled way," Rubin said.

The process, which took about an hour, yielded graphene nanoribbons just eight carbon atoms wide but thousands of atoms long. The scientists verified the molecular structure of the nanoribbons, which were deep black in color and lustrous, by shining light of different wavelengths at them.

"We looked at what wavelengths of light were absorbed," Rubin said. "This reveals signatures of the structure and composition of the ribbons."

The researchers have filed a patent application for the process.

Rubin said the team now is studying how to better manipulate the nanoribbons -- a challenge because they tend to stick together.

"Right now, they are bundles of fibers," Rubin said. "The next step will be able to handle each nanoribbon one by one."

Three-dimensional rendering of Risso's dolphin echolocation click spectra recorded in the Gulf of Mexico, aggregated by an unsupervised learning algorithm.  CREDIT Kaitlin Frasier

Machine learning approach could help scientists monitor wild dolphin populations

Scientists have developed a new algorithm that can identify distinct dolphin click patterns among millions of clicks in recordings of wild dolphins. This approach, presented in PLOS Computational Biology by Kaitlin Frasier of Scripps Institution of Oceanography, California, and colleagues, could potentially help distinguish between dolphin species in the wild.

Frasier and her colleagues build autonomous underwater acoustic sensors that can record dolphins' echolocation clicks in the wild for over a year at a time. These instruments serve as non-invasive tools for studying many aspects of dolphin populations, including how they are affected by the Deepwater Horizon oil spill, natural resource development, and climate change.

Because the sensors record millions of clicks, it is difficult for a human to recognize any species-specific patterns in the recordings. So, the researchers used advances in machine learning to develop an algorithm that can uncover consistent click patterns in very large datasets. The algorithm is "unsupervised," meaning it seeks patterns and defines different click types on its own, instead of being "taught" to recognize patterns that are already known.
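As an illustration of the unsupervised idea, the minimal sketch below groups synthetic "click spectra" by similarity without any species labels, using off-the-shelf k-means clustering. It is not the clustering algorithm the authors developed; the data and cluster count are invented for the example.

# Minimal unsupervised clustering of synthetic 'click spectra' (illustration
# only; not the algorithm described in the PLOS Computational Biology paper).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_clicks, n_freq_bins = 5000, 64

# Fake spectra drawn from three underlying click types with different shapes.
centers = rng.random((3, n_freq_bins))
labels_true = rng.integers(0, 3, size=n_clicks)
spectra = centers[labels_true] + 0.05 * rng.standard_normal((n_clicks, n_freq_bins))

# The algorithm receives no labels; it discovers recurring spectral shapes.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(spectra)
print(np.bincount(km.labels_))   # sizes of the discovered click types

The appeal of the unsupervised setting is that recurring click types emerge from the data itself, which is what makes it possible to sift millions of recorded clicks without a human labeling them first.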

The new algorithm was able to identify consistent patterns in a dataset of over 50 million echolocation clicks recorded in the Gulf of Mexico over a two-year period. These click types were consistent across monitoring sites in different regions of the Gulf, and one of the click types that emerged is associated with a known dolphin species.

The research team hypothesizes that some of the consistent click types revealed by the algorithm could be matched to other dolphin species and therefore may be useful for remote monitoring of wild dolphins. This would improve on most current monitoring methods, which rely on people making visual observations from large ships or aircraft and are only possible in daylight and good weather conditions.

Next, the team plans to integrate this work with deep learning methods to improve their ability to identify click types in new datasets recorded in different regions. They will also perform fieldwork to verify which species match some of the new click types revealed by the algorithm.

"It's fun to think about how the machine learning algorithms used to suggest music or social media friends to people could be re-interpreted to help with ecological research challenges," Frasier says. "Innovations in sensor technologies have opened the floodgates in terms of data about the natural world, and there is a lot of room for creativity right now in ecological data analysis."

This image depicts a poverty map (552 communities) of Senegal generated using the researchers' supercomputational tools.

Researchers harness big data to improve poverty maps, a much-needed tool to aid world's most vulnerable people

For years, policymakers have relied upon surveys and census data to track and respond to extreme poverty.

While effective, assembling this information is costly and time-consuming, and it often lacks detail that aid organizations and governments need in order to best deploy their resources.

That could soon change.

A new mapping technique, described in the Nov. 14 issue of the Proceedings of the National Academy of Sciences, shows how researchers are developing supercomputational tools that combine cellphone records with data from satellites and geographic information systems to create timely and incredibly detailed poverty maps.

"Despite much progress in recent decades, there are still more than 1 billion people worldwide lacking food, shelter and other basic human necessities," says Neeti Pokhriyal, one of the study's co-lead authors, and a PhD candidate in the Department of Computer Science and Engineering at the University at Buffalo.

The study is titled "Combining Disparate Data Sources for Improved Poverty Prediction and Mapping."

Some organizations define extreme poverty as a severe lack of food, health care, education and other basic needs. Others relate it to income; for example, the World Bank says people living on less than $1.25 per day (2005 prices) are extremely impoverished.

While extreme poverty is declining in most areas of the world, roughly 1.2 billion people still live in it. Most are in Asia, sub-Saharan Africa and the Caribbean. Aid organizations and governmental agencies say that timely and accurate data are vital to ending extreme poverty.

The study focuses on Senegal, a sub-Saharan country with a high poverty rate.

The first data set consists of 11 billion calls and texts from more than 9 million Senegalese mobile phone users. All the information is anonymous, and it captures how, when, where and with whom people communicate.

The second data set comes from satellite imagery, geographic information systems and weather stations. It offers insight into food security, economic activity and accessibility to services and other indicators of poverty. This can be gleaned from the presence of electricity, paved roads, agriculture and other signs of development.

The two datasets are combined using a machine learning-based framework.
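The general shape of such a framework can be sketched as follows: features derived from call records and from satellite/GIS layers are joined and used to fit a regression model against survey-based poverty measures where ground truth exists, and the fitted model then predicts the measure everywhere else. The feature names, the model choice and all numbers below are illustrative assumptions, not details from the paper.

# Hypothetical sketch of combining two feature sources to predict a poverty
# index; features, model and data are illustrative, not from the PNAS study.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_communities = 552

phone_features = rng.standard_normal((n_communities, 20))      # e.g., call volume, mobility
satellite_features = rng.standard_normal((n_communities, 15))   # e.g., night lights, roads
X = np.hstack([phone_features, satellite_features])             # combined feature matrix

# Synthetic stand-in for survey-based ground truth.
true_weights = rng.standard_normal(X.shape[1])
poverty_index = X @ true_weights + 0.5 * rng.standard_normal(n_communities)

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, poverty_index, cv=5, scoring="r2")
print("cross-validated R^2:", round(scores.mean(), 3))

Cross-validation against the communities with survey data is what gives confidence that predictions for unsurveyed communities are meaningful.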

Using the framework, the researchers created maps detailing the poverty levels of 552 communities in Senegal; current poverty maps divide the nation into only four regions. The framework also can help predict certain dimensions of poverty, such as deprivations in education, standard of living and health.

Unlike surveys or censuses, which can take years and cost millions of dollars, these maps can be generated quickly and cost-efficiently, and they can be updated as often as the underlying data sources are. Their diagnostic nature can also help policymakers design better interventions to fight poverty.

Pokhriyal, who began work on the project in 2015 and has travelled to Senegal, says the goal is not to replace census and surveys but to supplement these sources of information in between cycles. The approach could also prove useful in areas of war and conflict, as well as remote regions.

SwRI scientists modeled the protracted period of bombardment after the Moon formed, determining that impactor metals may have descended into Earth’s core. This artistic rendering illustrates a large impactor crashing into the young Earth. Light brown and gray particles indicate the projectile’s mantle (silicate) and core (metal) material, respectively.

Large planetesimals delivered more mass to nascent Earth than previously thought

Southwest Research Institute scientists recently modeled the protracted period of bombardment following the Moon's formation, when leftover planetesimals pounded the Earth. Based on these simulations, scientists theorize that moon-sized objects delivered more mass to the Earth than previously thought.


Early in its evolution, Earth sustained an impact with another large object, and the Moon formed from the resulting debris ejected into an Earth-orbiting disk. A long period of bombardment followed, the so-called "late accretion," when large bodies impacted the Earth delivering materials that were accreted or integrated into the young planet.

"We modeled the massive collisions and how metals and silicates were integrated into Earth during this 'late accretion stage,' which lasted for hundreds of millions of years after the Moon formed," said SwRI's Dr. Simone Marchi, lead author of a Nature Geoscience paper outlining these results. "Based on our simulations, the late accretion mass delivered to Earth may be significantly greater than previously thought, with important consequences for the earliest evolution of our planet."

Previously, scientists estimated that materials from planetesimals integrated during the final stage of terrestrial planet formation made up about half a percent of the Earth's present mass. This is based on the concentration of highly "siderophile" elements -- metals such as gold, platinum and iridium, which have an affinity for iron -- in the Earth's mantle. The relative abundance of these elements in the mantle points to late accretion, after Earth's core had formed. But the estimate assumes that all highly siderophile elements delivered by the later impacts were retained in the mantle.

Late accretion may have involved large differentiated projectiles. These impactors may have concentrated the highly siderophile elements primarily in their metallic cores. New high-resolution impact simulations by researchers at SwRI and the University of Maryland show that substantial portions of a large planetesimal's core could descend to, and be assimilated into, the Earth's core -- or ricochet back into space and escape the planet entirely. Both outcomes reduce the amount of highly siderophile elements added to Earth's mantle, which implies that two to five times as much material may have been delivered than previously thought.
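The mass-balance logic can be made concrete with a toy calculation. The roughly half-a-percent baseline comes from the estimate quoted above, which assumes every delivered siderophile atom stays in the mantle; the retention fractions below are illustrative assumptions, not values from the Nature Geoscience paper.

# Toy mass-balance: how retention efficiency scales the inferred late-accreted
# mass. The 0.5% baseline reflects the previous estimate quoted in the article;
# the retention fractions are illustrative assumptions.
baseline_fraction_of_earth_mass = 0.005   # ~half a percent, assuming full HSE retention

for retention in (1.0, 0.5, 0.2):
    inferred = baseline_fraction_of_earth_mass / retention
    print(f"retention {retention:.0%} -> late accretion ~{inferred:.1%} of Earth's mass")

If only a fraction of the highly siderophile elements is retained in the mantle, matching the observed abundance requires proportionally more delivered mass, which is the sense in which the new simulations imply two to five times more late-accreted material.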

"These simulations also may help explain the presence of isotopic anomalies in ancient terrestrial rock samples such as komatiite, a volcanic rock," said SwRI co-author Dr. Robin Canup. "These anomalies were problematic for lunar origin models that imply a well-mixed mantle following the giant impact. We propose that at least some of these rocks may have been produced long after the Moon-forming impact, during late accretion."

New method offers a time-efficient way to supercompute models of complex systems reaching equilibrium

When the maths cannot be done by hand, physicists modelling complex systems, like the dynamics of biological molecules in the body, need to use supercomputer simulations. Such complicated systems require a period of time to settle into a balanced state before being measured. The question is: how long do supercomputer simulations need to run to be accurate? Speeding up processing time to elucidate highly complex study systems has been a common challenge, and it cannot be done simply by running the computation in parallel, because the results from the previous time step matter for computing the next one. Now, Shahrazad Malek from the Memorial University of Newfoundland, Canada, and colleagues have developed a practical partial solution to the problem of saving time when using supercomputer simulations that require bringing a complex system into a steady state of equilibrium and measuring its equilibrium properties. These findings are part of a special issue on "Advances in Computational Methods for Soft Matter Systems," recently published in EPJ E.

One solution is to run multiple copies of the same simulation, with randomised initial conditions for the positions and velocities of the molecules. By averaging the results over an ensemble of 10 to 50 such runs, each run can be shorter than a single long run and still produce the same level of accuracy in the results. In this study, the authors go one step further and focus on an extreme case, examining an ensemble of 1,000 runs -- dubbed a swarm. This approach reduces the overall time required to estimate the system's equilibrium properties.
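The swarm idea can be illustrated with a toy relaxation process: many short, independently initialised runs are evolved to equilibrium, and the equilibrium observable is averaged over the swarm, with the statistical error shrinking as the square root of the number of runs. This is a minimal sketch, not the authors' molecular-dynamics setup.

# Toy 'swarm' of short runs relaxing toward equilibrium (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_steps, dt = 1000, 500, 0.01

# Overdamped relaxation toward x = 0 with thermal noise (Ornstein-Uhlenbeck process).
x = rng.uniform(-5.0, 5.0, size=n_runs)          # randomised initial conditions
for _ in range(n_steps):
    x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal(n_runs)

estimate = np.mean(x**2)                          # equilibrium <x^2> (exact value is 1)
stderr = np.std(x**2, ddof=1) / np.sqrt(n_runs)
print(f"<x^2> = {estimate:.3f} +/- {stderr:.3f}")

Each member of the swarm only needs to run long enough to forget its starting point; the accuracy then comes from the size of the swarm rather than the length of any single run.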

Since this sort of massive multi-processor system is gradually becoming more common, this work contributes to increasing the techniques available to scientists. The solutions can be applied to computational studies in fields such as biochemistry, materials physics, astrophysics, chemical engineering, and economics.

The upper and lower series of pictures each show a simulation of a neutron star merger. In the scenario shown in the upper panels the star collapses after the merger and forms a black hole, whereas the scenario displayed in the lower row leads to an at least temporarily stable star. CREDIT Picture: Andreas Bauswein, HITS

Neutron stars are among the densest objects in the Universe; however, their exact characteristics remain unknown. Using supercomputing simulations based on recent observations, an international team of scientists has managed to narrow down the size of these stars

When a very massive star dies, its core contracts. In a supernova explosion, the star's outer layers are expelled, leaving behind an ultra-compact neutron star. For the first time, the LIGO and Virgo Observatories have recently been able to observe the merger of two neutron stars and measure the mass of the merging stars. Together, the neutron stars had a mass of 2.74 solar masses. Based on these observational data, an international team of scientists from Germany, Greece, and Japan including HITS astrophysicist Dr. Andreas Bauswein has managed to narrow down the size of neutron stars with the aid of supercomputer simulations. The calculations suggest that the neutron star radius must be at least 10.7 km. The international research team's results have been published in "Astrophysical Journal Letters."

The Collapse as Evidence

In neutron star collisions, two neutron stars orbit around each other, eventually merging to form a star with approximately twice the mass of the individual stars. In this cosmic event, gravitational waves -- oscillations of spacetime whose signal characteristics are related to the mass of the stars -- are emitted. This event resembles what happens when a stone is thrown into water and waves form on the water's surface. The heavier the stone, the higher the waves.

The scientists simulated different merger scenarios for the recently measured masses to determine the radius of the neutron stars. In so doing, they relied on different models and equations of state describing the exact structure of neutron stars. Then, the team of scientists checked whether the calculated merger scenarios are consistent with the observations. The conclusion: All models that lead to the direct collapse of the merger remnant can be ruled out because a collapse leads to the formation of a black hole, which in turn means that relatively little light is emitted during the collision. However, different telescopes have observed a bright light source at the location of the stars' collision, which provides clear evidence against the hypothesis of collapse.

The results thereby rule out a number of models of neutron star matter, namely all models that predict a neutron star radius smaller than 10.7 kilometers. However, the internal structure of neutron stars is still not entirely understood. The radii and structure of neutron stars are of particular interest not only to astrophysicists, but also to nuclear and particle physicists, because the inner structure of these stars reflects the properties of high-density nuclear matter found in every atomic nucleus.

Neutron Stars Reveal Fundamental Properties of Matter

While neutron stars have a slightly larger mass than our Sun, their diameter is only a few tens of kilometers. These stars thus contain a large mass in a very small space, which leads to extreme conditions in their interior. Researchers have been exploring these internal conditions for some decades already and are particularly interested in better narrowing down the radius of these stars, as their size depends on the unknown properties of dense matter.

The new measurements and new calculations are helping theoreticians better understand the properties of high-density matter in our Universe. The recently published study already represents scientific progress, as it has ruled out some theoretical models, but there are still a number of other models with neutron star radii greater than 10.7 km. However, the scientists have been able to demonstrate that further observations of neutron star mergers will continue to improve these measurements. The LIGO and Virgo Observatories have just begun taking measurements, and the sensitivity of the instruments will continue to increase over the next few years and provide even better observational data. "We expect that more neutron star mergers will soon be observed and that the observational data from these events will reveal more about the internal structure of matter," HITS scientist Andreas Bauswein concludes.

  1. Patkar’s new supercomputational framework shows shifting protein networks in breast cancer may alter gene function
  2. Army researchers join international team to defeat disinformation cyberattacks
  3. Johns Hopkins researcher Winslow builds new supercomputer model that sheds light on biological events leading to sudden cardiac death
  4. Italian scientist Baroni creates new method to supercompute heat transfer from optimally short MD simulations
  5. French researchers develop supercomputer models for better understanding of railway ballast
  6. Finnish researchers develop method to measure neutron star size using supercomputer modeling based on thermonuclear explosions
  7. Using machine learning algorithms, German team finds ID microstructure of stock useful in financial crisis
  8. Russian prof Sergeyev introduces methodology working with numerical infinities and infinitesimals; opens new horizons in a supercomputer called Infinity
  9. ASU, Chinese researchers develop human mobility prediction model that offers scalability, a more practical approach
  10. Marvell Technology buys rival chipmaker Cavium for $6 billion in a cash-and-stock deal
  11. WPI researchers use machine learning to detect when online news is a paid-for pack of lies
  12. Johns Hopkins researchers develop model estimating the odds of events that trigger sudden cardiac death
  13. Young Brazilian researcher creates supercomputer model to explain the origin of Earth's water
  14. With launch of new night sky survey, UW researchers ready for era of big data astronomy
  15. CMU software assembles RNA transcripts more accurately
  16. Russian scientists create a prototype neural network based on memristors
  17. Data Science Institute at Columbia develops statistical method that makes better predictions
  18. KU researchers untangle vexing problem in supercomputer-simulation technology
  19. University of Bristol launches £43 million Quantum Technologies Innovation Centre
  20. Utah researchers develop milestone for ultra-fast communication
