Rosetta@home's deep learning supercomputer dreams up new protein structures

Researchers show that a neural network trained exclusively to predict protein shapes can also generate new ones

Just as convincing images of cats can be created using artificial intelligence, new proteins can now be made using similar tools. In a study, a team including researchers at the University of Washington in Seattle, Rensselaer Polytechnic Institute, and Harvard University describes the development of a neural network that “hallucinates” proteins with new, stable structures.

“The potential to hallucinate brand-new proteins that bind particular biomolecules or form desired enzymatic active sites is very exciting,” said Gaetano Montelione, a professor of chemistry and chemical biology at Rensselaer, where synthesized versions of “hallucinated” proteins invented by a neural network were analyzed. 

Proteins are string-like molecules found in every cell that spontaneously fold into intricate three-dimensional shapes. These folded shapes are key to nearly every process in biology, including cellular development, DNA repair, and metabolism. But the complexity of protein shapes makes them difficult to study. Biochemists often use supercomputers to predict how protein strings, or sequences, might fold. In recent years, artificial intelligence techniques like neural networks and deep learning have revolutionized the accuracy of this work.

“For this project, we made up completely random protein sequences and introduced mutations into them until our neural network predicted that they would fold into stable structures,” said co-lead author Ivan Anishchenko, a postdoctoral scholar in the Baker lab in the Institute for Protein Design at the University of Washington School of Medicine. “At no point did we guide the software toward a particular outcome — these new proteins are just what a computer dreams up.”
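Anishchenko's description amounts to a simple Monte Carlo search over sequence space. The sketch below is a minimal illustration of that idea, not the team's actual code: `predict_confidence` is a hypothetical stand-in for the trained structure-prediction network, and the mutation and acceptance rules are generic simulated-annealing choices rather than the settings used in the study.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def hallucinate(predict_confidence, length=100, steps=20000, temperature=0.05):
    """Monte Carlo 'hallucination': mutate a random sequence until the
    network predicts it folds into a well-defined structure.

    predict_confidence(seq) stands in for the trained neural network; it
    should return a scalar that is high when the predicted structure is
    sharp and self-consistent.
    """
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    score = predict_confidence("".join(seq))

    for _ in range(steps):
        trial = seq[:]                                   # copy the current sequence
        trial[random.randrange(length)] = random.choice(AMINO_ACIDS)  # one random mutation
        trial_score = predict_confidence("".join(trial))

        # Accept improvements outright; occasionally accept worse moves
        # (simulated-annealing style) so the search does not stall early.
        if trial_score >= score or random.random() < math.exp((trial_score - score) / temperature):
            seq, score = trial, trial_score

    return "".join(seq), score
```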

In the future, the team believes it should be possible to steer artificial intelligence so that it generates new proteins with useful features. “We’d like to use deep learning to design proteins with function, including protein-based drugs, enzymes, you name it,” said co-lead author Sam Pellock, a postdoctoral scholar in the Baker lab.

The research team generated 2,000 new protein sequences that were predicted to fold. Over 100 of these were produced in the laboratory and studied. Detailed analysis of three such proteins confirmed that the shapes predicted by the supercomputer were indeed realized in the lab.

“Our solution NMR studies, along with X-ray crystal structures determined by the University of Washington team, demonstrate the remarkable accuracy of protein designs created by the hallucination approach,” said co-author Theresa Ramelot, a senior research scientist in the Montelione lab within the Rensselaer Center for Biotechnology and Interdisciplinary Studies.  

Montelione notes, “The hallucination approach builds on earlier observations we made together with the Baker lab revealing that protein structure prediction with deep learning can be quite accurate even for a single protein sequence, without recourse to contact predictions usually obtained by analysis of many evolutionary-related protein sequences.”  

“This approach greatly simplifies protein design,” said senior author David Baker, recipient of the 2021 Breakthrough Prize in Life Sciences. “Before, to create a new protein with a particular shape, people first carefully studied related structures in nature to come up with a set of rules that were then applied in the design process. New sets of rules were needed for each new type of fold. Here, by using a deep-learning network that already captures general principles of protein structure, we eliminate the need for fold-specific rules and open up the possibility of focusing on just the functional parts of a protein directly.” 

“Exploring how to best use this strategy for specific applications is now an active area of research, and this is where I expect the next breakthroughs,” said Baker.

Harvard astronomers observe a new type of binary star long predicted to exist

Astronomers have predicted the existence of this class of stars for 50 years, but until now had never observed one in space

Researchers at the Center for Astrophysics | Harvard & Smithsonian have observed a new type of binary star that has long been theorized to exist. The discovery finally confirms how a rare type of star in the universe forms and evolves.

Named pre-extremely low mass (ELM) white dwarfs, the new class of stars is described in their study. It was discovered by postdoctoral fellow Kareem El-Badry using the Shane Telescope at Lick Observatory near San Jose, California, and data from several astronomical surveys.

Artist's depiction of a new type of binary star: a pre-extremely low mass (ELM) white dwarf. Pictured in blue, the star is losing mass to a white dwarf companion and transitioning to an ELM white dwarf. Credit: M. Weiss/Center for Astrophysics | Harvard & Smithsonian

“We have observed the first physical proof of a new population of transitional binary stars,” said El-Badry, a member of the Institute for Theory and Computation at the Center for Astrophysics. “This is exciting; it’s a missing evolutionary link in binary star formation models that we’ve been looking for.”

A New Type of Star

When a star dies, there is a 97% chance it will become a white dwarf, a small dense object that has contracted and dimmed after burning through all its fuel.

But in rare instances, a star can become an ELM white dwarf. Less than one-third the mass of the sun, these stars present a conundrum: if stellar evolution calculations are correct, all ELM white dwarfs would seem to be more than 13.8 billion years old—older than the age of the universe itself and, thus, physically impossible.
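A rough back-of-the-envelope estimate shows why. Assuming the common textbook scaling that a star's main-sequence lifetime is roughly the Sun's ten billion years multiplied by (M/M_sun) to the power of about -2.5 (an approximation used here for illustration, not a figure from the study), a star of a third of a solar mass would need on the order of 200 billion years to evolve on its own:

```python
def main_sequence_lifetime_gyr(mass_solar, t_sun_gyr=10.0, exponent=2.5):
    """Rough estimate: lifetime ~ t_sun * (M / M_sun) ** -2.5 (illustrative only)."""
    return t_sun_gyr * mass_solar ** (-exponent)

print(round(main_sequence_lifetime_gyr(1.0)))   # ~10 Gyr for a Sun-like star
print(round(main_sequence_lifetime_gyr(0.3)))   # ~200 Gyr, far longer than the 13.8-Gyr age of the universe
```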

“The universe is just not old enough to make these stars by normal evolution,” El-Badry said.

Over the years, astronomers have concluded that the only way for an ELM white dwarf to form is with the help of a binary companion. The gravitational pull from a nearby companion star could quickly (at least, in less than 13.8 billion years) eat away at a star until it became an ELM white dwarf. But the evidence for this picture is not foolproof.

Astronomers have observed normal, massive stars like the Sun accreting onto white dwarfs, in systems known as cataclysmic variables. They have also observed ELM white dwarfs with normal white dwarf companions. What they had not observed, however, was the transitional phase in between: when the star has lost most of its mass and has nearly contracted to an ELM white dwarf.

A Missing Evolutionary Link

El-Badry often compares stellar astronomy to 19th-century zoology.

“You go out into the jungle and find an organism,” he said. “You describe how big it is, how much it weighs—and then you go on to some other organism. You see all these different types of objects and need to piece together how they are all connected.”

In 2020, El-Badry decided to go back into the jungle in search of the star that had long eluded scientists: the pre-ELM white dwarf (also referred to as an evolved cataclysmic variable).

Using new data from Gaia, the space-based observatory launched by the European Space Agency, and the Zwicky Transient Facility at Caltech, El-Badry narrowed down 1 billion stars to 50 potential candidates.

El-Badry emphasized the importance of public data from astronomical surveys for his work. "If it weren't for projects like the Zwicky Transient Facility and Gaia, which represent a huge amount of work behind the scenes from hundreds of people, this work just wouldn't be possible," he said.

El-Badry then followed up with close observations of 21 of the stars. The selection strategy worked.

“One hundred percent of the candidates were these pre-ELMs we’d been looking for,” he said. “They were more puffed up and bloated than ELMs. They also were egg-shaped because the gravitational pull of the other star distorts their spherical shape. We found the evolutionary link between two classes of binary stars—cataclysmic variables and ELM white dwarfs—and we found a decent number of them.”

Thirteen of the stars showed signs that they were still losing mass to their companion, while eight of the stars seemed to be no longer losing mass. Each of them was also hotter than previously observed cataclysmic variables.

El-Badry plans to continue studying the pre-ELM white dwarfs and may follow up on the 29 other candidate stars he previously discovered. Like modern-day anthropologists who are filling the gaps in human evolution, he is amazed by the rich diversity of stars that can arise from simple science. 

Dutch physicists show that only models with sufficient mathematical complexity satisfy Born’s rule for solving the quantum measurement problem

The quantum world and our everyday world are very different places. In a publication that appeared as the “Editor’s Suggestion” in Physical Review A this week, physicists Jasper van Wezel and Lotte Mertens of the University of Amsterdam (UvA) in the Netherlands and their colleagues investigate how the act of measuring a quantum particle transforms it into an everyday object.

Quantum mechanics is the theory that describes the tiniest objects in the world around us, ranging from the constituents of single atoms to small dust particles. This microscopic realm behaves remarkably differently from our everyday experience – even though all objects in our human-scale world are made of quantum particles themselves. This leads to intriguing physical questions: why are the quantum world and the macroscopic world so different, where is the dividing line between them, and what exactly happens there? Despite the fuzziness of the quantum world, measurements of quantum particles yield precise outcomes in our everyday world. How does the act of measuring achieve this transformation?

Measurement problem

One particular area where the distinction between quantum and classical becomes essential is when we use an everyday object to measure a quantum system. The division between the quantum and everyday worlds then amounts to asking how ‘big’ the measurement device should be to be able to show quantum properties using a display in our everyday world. Finding out the details of measurement, such as how many quantum particles it takes to create a measurement device, is called the quantum measurement problem.

As experiments probing the world of quantum mechanics become ever more advanced and involve ever larger quantum objects, the invisible line where pure quantum behavior crosses over into classical measurement outcomes is rapidly being approached. In their article, the UvA physicists and their colleagues take stock of current models that attempt to solve the measurement problem, particularly those that do so by proposing slight modifications to the one equation that rules all quantum behavior: Schrödinger's equation.

Born’s rule

The researchers show that such amendments can in principle lead to consistent proposals for solving the measurement problem. However, it turns out to be difficult to create models that satisfy Born’s rule, which tells us how to use Schrödinger’s equation for predicting measurement outcomes. The researchers show that only models with sufficient mathematical complexity (in technical terms: models that are non-linear and non-unitary) can give rise to Born’s rule and therefore have a chance of solving the measurement problem and teaching us about the elusive crossover between quantum physics and the everyday world.
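For orientation, the two ingredients at stake can be written down compactly; these are the standard textbook forms, not equations reproduced from the paper. Schrödinger's equation dictates the deterministic, linear evolution of the quantum state, while Born's rule turns that state into probabilities for measurement outcomes:

```latex
% Schrödinger's equation: linear, unitary evolution of the state |psi(t)>
i\hbar \,\frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H} \lvert \psi(t) \rangle

% Born's rule: probability of obtaining outcome a_i when measuring an
% observable, with |a_i> the eigenstate belonging to that outcome
P(a_i) = \bigl\lvert \langle a_i \vert \psi(t) \rangle \bigr\rvert^{2}
```

Any modification of the first equation proposed to solve the measurement problem must still reproduce the probabilities given by the second, and the authors' finding is that only non-linear, non-unitary modifications manage to do so.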

Hewlett Packard Enterprise reports flat supercomputer sales, but execs say demand is strong

Hewlett Packard Enterprise has announced financial results for the fiscal year 2021 and the fourth quarter, which ended October 31, 2021.

The company's high-performance computing and AI revenue was $1.0 billion in Q4, up 1% from the prior-year period or flat when adjusted for currency. That was below expectations, but the company said it’s on track for growth.

Q4 net revenue of $7.35 billion was up 2% from a year ago, or flat when adjusted for currency. That was below the $7.38 billion that financial analysts expected.

FY21 net revenue of $27.8 billion was up 3% from the prior-year period, or up 1% when adjusted for currency.

The company reported fiscal fourth-quarter profits that beat consensus estimates, but its stock lost more than 2% after hours as revenues fell just short of expectations. Shares initially dropped nearly 9% before recovering during the company's conference call, in which executives attributed much of the sales weakness to supply chain constraints.

Throughout the call, executives pointed to orders rather than sales as evidence that demand is strong, saying the company booked record orders across its businesses.

“HPE ended the fiscal year 2021 with record demand for our edge-to-cloud portfolio, and we are well-positioned to capitalize on the significant opportunity in front of us,” said Antonio Neri, president and CEO of Hewlett Packard Enterprise. “In 2021, we accelerated our pivot to as-a-service, strengthened our core capabilities, and invested in bold innovation in high-growth segments. As our customers continue to demand greater connectivity, access to solutions that allow them to extract value from their data no matter where it lives, and a cloud-everywhere experience, HPE is poised to accelerate our market leadership and provide strong shareholder returns.”

“HPE executed with discipline and exceeded all of our key financial targets in FY21,” said Tarek Robbiati, EVP and CFO of Hewlett Packard Enterprise. “The demand environment has been incredibly strong and accelerated in the second half of the year, which gives us important momentum heading into next year. We are operating with greater focus and more agility and are well-positioned to deliver against our FY22 outlook.”

Columbia Engineering team combines quantum mechanics, machine learning to predict chemical reactions

Extracting metals from oxides at high temperatures is essential not only for producing metals such as steel but also for recycling. Because current extraction processes are very carbon-intensive, emitting large quantities of greenhouse gases, researchers have been exploring new approaches to developing “greener” processes. This work has been especially challenging to do in the lab because it requires costly reactors. Building and running computer simulations would be an alternative, but currently, there is no computational method that can accurately predict oxide reactions at high temperatures when no experimental data is available.

A Columbia Engineering team reports that it has developed a new computational technique that, by combining quantum mechanics and machine learning, can accurately predict the reduction temperature of metal oxides to their base metals. Their approach is computationally as efficient as conventional calculations at zero temperature and, in their tests, more accurate than computationally demanding simulations of temperature effects using quantum chemistry methods. The study was led by Alexander Urban, assistant professor of chemical engineering.

Schematic of the bridging of the cold quantum world and high-temperature metal extraction with machine learning. Credit: Rodrigo Ortiz de la Morena and José A. Garrido Torres/Columbia Engineering

“Decarbonizing the chemical industry is critical if we are to transition to a more sustainable future, but developing alternatives for established industrial processes is very cost-intensive and time-consuming,” Urban said. “A bottom-up computational process design that doesn’t require initial experimental input would be an attractive alternative but has so far not been realized. This new study is, to our knowledge, the first time that a hybrid approach, combining computational calculations with AI, has been attempted for this application. And it’s the first demonstration that quantum-mechanics-based calculations can be used for the design of high-temperature processes.”

The researchers knew that, at very low temperatures, quantum-mechanics-based calculations can accurately predict the energy that chemical reactions require or release. They augmented this zero-temperature theory with a machine-learning model that learned the temperature dependence from publicly available high-temperature measurements. They designed their approach, which focused on metal extraction at high temperatures, to also predict how the free energy changes with temperature, whether high or low.
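One way to picture such a hybrid is sketched below. This is an illustrative toy under stated assumptions, not the team's model: the reaction energies and free-energy values are invented, the input features are guessed, and the regressor is a generic scikit-learn Gaussian process rather than whatever the study actually employed.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical training data. For known oxide-reduction reactions we assume:
#   dE0 - zero-temperature reaction energy from quantum-mechanical calculations (eV)
#   T   - temperature of a tabulated measurement (K)
#   dG  - measured free-energy change at that temperature (eV)
dE0 = np.array([3.2, 3.2, 2.1, 2.1, 4.0, 4.0])
T   = np.array([300.0, 1500.0, 300.0, 1500.0, 300.0, 1500.0])
dG  = np.array([3.1, 1.9, 2.0, 0.8, 3.9, 2.6])

# Learn only the thermal correction dG(T) - dE0, so the zero-temperature
# quantum-mechanical result is used exactly where it is already accurate.
X = np.column_stack([dE0, T])
model = GaussianProcessRegressor().fit(X, dG - dE0)

def predict_free_energy(dE0_new, T_new):
    """Predicted dG(T): zero-temperature QM energy plus learned thermal correction."""
    correction = model.predict(np.array([[dE0_new, T_new]]))[0]
    return dE0_new + correction

# In this picture, the reduction temperature would be estimated as the
# temperature at which the predicted free-energy change crosses zero.
print(predict_free_energy(3.2, 1200.0))
```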

“Free energy is a key quantity of thermodynamics, and other temperature-dependent quantities can, in principle, be derived from it,” said José A. Garrido Torres, the study’s first author, a former postdoctoral fellow in Urban’s lab who is now a research scientist at Princeton. “So we expect that our approach will also be useful to predict, for example, melting temperatures and solubilities for the design of clean electrolytic metal extraction processes that are powered by renewable electric energy.”

“The future just got a little bit closer,” said Nick Birbilis, Deputy Dean of the Australian National University College of Engineering and Computer Science and an expert in materials design with a focus on corrosion durability, who was not involved in the study. “Much of the human effort and sunken capital over the past century has been in the development of materials that we use every day – and that we rely on for our power, flight, and entertainment. Materials development is slow and costly, which makes machine learning a critical development for future materials design. For machine learning and AI to meet their potential, models must be mechanistically relevant and interpretable. This is precisely what the work of Urban and Garrido Torres demonstrates. Furthermore, the work takes a whole-of-system approach for one of the first times, linking atomistic simulations on one end to engineering applications on the other via advanced algorithms.”

The team is now working on extending the approach to other temperature-dependent materials properties, such as solubility, conductivity, and melting, that are needed to design electrolytic metal extraction processes that are carbon-free and powered by clean electric energy.