Earliest galaxies in the Universe (photo: NASA - James Webb Space Telescope)

Tel Aviv University's Prof. Barkana models the SARAS result to characterize the earliest galaxies in the Universe

First-of-its-kind study sheds light on the epoch of the first stars, 200M years after the Big Bang

An international team of astrophysicists, including Prof. Rennan Barkana from Tel Aviv University's Sackler School of Physics and Astronomy at the Raymond & Beverly Sackler Faculty of Exact Sciences, has managed for the first time to statistically characterize the first galaxies in the Universe, which formed only 200 million years after the Big Bang.

According to the groundbreaking results, the earliest galaxies were relatively small and dim: fainter than present-day galaxies, and likely processing only 5% or less of their gas into stars. Moreover, the intensity of the radio waves emitted by these earliest galaxies wasn't much higher than that of modern galaxies.

“We are trying to understand the epoch of the first stars in the Universe, known as the 'cosmic dawn', about 200 million years after the Big Bang." Prof. Rennan Barkana

Researching the "Cosmic Dawn"

This new study, carried out together with the SARAS observation team, was led by the research group of Dr. Anastasia Fialkov from the University of Cambridge, England, a former Ph.D. student of TAU's Prof. Barkana. The results of this innovative study were published in the prestigious journal Nature Astronomy.

“This is a very new field and a first-of-its-kind study”, explains Prof. Barkana. “We are trying to understand the epoch of the first stars in the Universe, known as the 'cosmic dawn', about 200 million years after the Big Bang."

"The James Webb Space Telescope, for example, can’t see these stars. It might only detect a few particularly bright galaxies from a somewhat later period. Our goal is to probe the entire population of the first stars.” 

"Since stellar radiation affects the light emitted by hydrogen atoms, we use hydrogen as a detector in our search for the first stars: if we can detect the effect of stars on hydrogen, we will know when they were born, and in what types of galaxies." Prof. Rennan Barkana.

Searching for the First Stars

According to the standard picture, before stars began to fuse heavier elements inside their cores, our Universe was nothing but a cloud of hydrogen atoms from the Big Bang (other than some helium and a lot of dark matter).

Today, the Universe is also filled with hydrogen, but in the modern Universe, it is mostly ionized due to radiation from stars.

“Hydrogen atoms naturally emit light at a wavelength of 21cm, which falls within the spectrum of radio waves”, explains Prof. Barkana. “Since stellar radiation affects the light emitted by hydrogen atoms, we use hydrogen as a detector in our search for the first stars: if we can detect the effect of stars on hydrogen, we will know when they were born, and in what types of galaxies. I was among the first theorists to develop this concept 20 years ago, and now observers can implement it in actual experiments. Teams of experimentalists all over the world are currently attempting to discover the 21cm signal from hydrogen in the early Universe.”
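As a rough illustration of the redshift effect these experiments rely on (the redshift value below is a typical textbook figure for the cosmic dawn, not a number from this study): light from the 21cm line is stretched by a factor of (1+z), so a signal emitted at redshift z ≈ 17, roughly 200 million years after the Big Bang, arrives today at a much lower radio frequency.

```python
# Toy calculation: where the 21cm hydrogen line appears today after being
# redshifted from the cosmic dawn. The redshift z ~ 17 is an illustrative
# textbook value, not a figure taken from the study described above.

REST_FREQ_MHZ = 1420.405751  # rest-frame frequency of the 21cm line

def observed_frequency(z: float) -> float:
    """Observed frequency (MHz) of the 21cm line emitted at redshift z."""
    return REST_FREQ_MHZ / (1.0 + z)

def observed_wavelength_cm(z: float) -> float:
    """Observed wavelength (cm): the 21cm line stretched by a factor (1+z)."""
    return 21.106 * (1.0 + z)

z_cosmic_dawn = 17.0
print(f"Observed frequency: {observed_frequency(z_cosmic_dawn):.1f} MHz")      # ~78.9 MHz
print(f"Observed wavelength: {observed_wavelength_cm(z_cosmic_dawn):.0f} cm")  # ~380 cm
```

This stretching is why cosmic-dawn experiments such as EDGES and SARAS listen in the tens-of-megahertz band rather than at 1420 MHz.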

One of these teams is EDGES, which uses a small radio antenna to measure the average intensity, across the entire sky, of radio waves arriving from different periods of the cosmic dawn. In 2018, the EDGES team announced that it had found the 21cm signal from ancient hydrogen.

“There was a problem with their findings, however," says Prof. Barkana. "We could not be sure that the measured signal did indeed come from hydrogen in the early Universe. It could have been a fake signal produced by the electrical conductivity of the ground below the antenna. Therefore, we all waited for an independent measurement that would either confirm or refute these results."

"Every year the experiments become more reliable and precise, and consequently we expect to find stronger upper limits, giving us even better constraints on the cosmic dawn." Prof. Rennan Barkana

Setting Limits

"Last year, astronomers in India carried out an experiment called SARAS, in which the antenna was made to float on a lake, a uniform surface of water that could not mimic the desired signal. According to the results of the new experiment, there was a 95% probability that EDGES did not detect a real signal from the early Universe."

"SARAS found an upper limit on the genuine signal, implying that the signal from early hydrogen is likely significantly weaker than the one measured by EDGES. We modeled the SARAS result and worked out the implications for the first galaxies, i.e., what their properties were, given the upper limit determined by SARAS. Now we can say for the first time that galaxies of certain types could not have existed at that early time.”

Prof. Barkana concludes: “Modern galaxies, such as our own Milky Way, emit large amounts of radio waves. In our study, we placed an upper limit on the star formation rate in ancient galaxies and on their overall radio emission. And this is only the beginning. Every year the experiments become more reliable and precise, and consequently, we expect to find stronger upper limits, giving us even better constraints on the cosmic dawn. We hope that shortly we will have not only limits but a precise, reliable measurement of the signal itself.”

Artist’s impression of GRB 211211A  CREDIT Soheb Mandhai @TheAstroPhoenix

British physicist Prof. Nicholl models the extra emission from kilonovae, the main factories of gold in the Universe

A highly unusual blast of high-energy light from a nearby galaxy has been linked by scientists to a neutron star merger.

The event, detected in December 2021 by NASA’s Neil Gehrels Swift Observatory and the Fermi Gamma-ray Space Telescope, was a gamma-ray burst – an immensely energetic explosion that can last from a few milliseconds to several hours.

This gamma-ray burst, identified as GRB 211211A, lasted about a minute – a relatively lengthy explosion, which would usually signal the collapse of a massive star into a supernova. But this event contained an excess of infrared light and was much fainter and faster-fading than a classical supernova, hinting that something different was going on.

In a new study, an international team of scientists showed that the infrared light detected in the burst came from a kilonova: a rare event thought to be generated when two neutron stars, or a neutron star and a black hole, collide, producing heavy elements such as gold and platinum. Thus far, kilonovae have only been associated with gamma-ray bursts lasting less than two seconds.

The work was led by Jillian Rastinejad at Northwestern University in the US along with physicists from the University of Birmingham and the University of Leicester in the UK, and Radboud University in The Netherlands.

Dr. Matt Nicholl, an Associate Professor at the University of Birmingham, modeled the kilonova emission. “We found that this one event produced about 1,000 times the mass of the Earth in very heavy elements. This supports the idea that these kilonovae are the main factories of gold in the Universe,” he said.
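For scale, that figure can be converted into solar masses using standard mass constants (the constants are textbook values; only the factor of 1,000 Earth masses comes from the article):

```python
# Convert "1,000 times the mass of the Earth" into solar masses.
# The mass constants are standard values; the factor of 1,000 is from the article.

EARTH_MASS_KG = 5.972e24
SUN_MASS_KG = 1.989e30

heavy_element_mass_kg = 1000 * EARTH_MASS_KG
in_solar_masses = heavy_element_mass_kg / SUN_MASS_KG
print(f"{in_solar_masses:.4f} solar masses")  # ~0.0030 solar masses
```

So the ejected heavy elements amount to a few thousandths of the Sun's mass, a huge quantity of gold and platinum by terrestrial standards.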

Although up to 10 percent of long gamma-ray bursts are suspected to be caused by mergers of two neutron stars, or of a neutron star and a black hole, no firm evidence – in the form of kilonovae – had previously been identified.

Dr. Gavin Lamb, a post-doctoral researcher at the University of Leicester, explained: "A gamma-ray burst is followed by an afterglow that can last several days. These afterglows behave in a very characteristic manner, and by modeling them we can expose any extra emission components, such as a supernova or a kilonova."

The kilonova generated by GRB 211211A is the closest yet to be discovered without the aid of gravitational waves, which has exciting implications for the upcoming gravitational-wave observing run starting in 2023. Its location in a galaxy only about one billion light years away allowed scientists to study the properties of the merger in unprecedented detail.

A related paper from the same collaboration in Nature Astronomy, led by Dr. Benjamin Gompertz, Assistant Professor at the University of Birmingham, describes some of these properties.

In particular, the team identified how the jet of high-energy electrons, traveling at almost the speed of light and causing the gamma-ray burst, changed with time. The cooling down of this jet was shown to be responsible for the long-lasting GRB emission.

In the paper, the team also described how close observation of GRB 211211A can offer fascinating insights into other previously unexplained gamma-ray bursts which have appeared not to fit with standard interpretations.

Dr. Gompertz said: “This was a remarkable GRB. We don’t expect mergers to last more than about two seconds. Somehow, this one powered a jet for almost a full minute. It’s possible the behavior could be explained by a long-lasting neutron star, but we can’t rule out that what we saw was a neutron star being ripped apart by a black hole.

“Studying more of these events will help us determine which is the right answer and the detailed information we gained from GRB 211211A will be invaluable for this interpretation.”

The work was funded by the European Research Council under the KilonovaRank project, which harnesses the power of Big Data in investigating large cosmic events.

A magnetic vortex, known as a skyrmion (grey dot), being displaced into the corners of a triangular field by electrical currents, where it bounces off the sides. The potentials shown in red are sufficient for carrying out Boolean logic operations.

German physicists demo a prototype of energy-efficient computing with tiny magnetic vortices

A large percentage of the energy used today is consumed in the form of electrical power for processing and storing data and for running the relevant terminal equipment and devices. According to predictions, the level of energy used for these purposes will increase even further in the future. Innovative concepts, such as neuromorphic computing, employ energy-saving approaches to solve this problem. In a joint project undertaken by experimental and theoretical physicists at Johannes Gutenberg University Mainz (JGU), funded by an ERC Synergy Grant, such an approach, known as Brownian reservoir computing, has now been realized.

Brownian computing uses ambient thermal energy

Brownian reservoir computing is a combination of two unconventional computing methods. Brownian computing exploits the fact that computers typically operate at room temperature, which makes it possible to harness the surrounding thermal energy and thus cut down on electricity consumption. The thermal energy used in the computing system is essentially the random movement of particles, known as Brownian motion, which explains the name of this computing method.

Reservoir computing is ideal for exceptionally efficient data processing

Reservoir computing utilizes the complex response of a physical system to external stimuli, resulting in an extremely resource-efficient way of processing data. Most of the computation is performed by the system itself, which does not require additional energy. Furthermore, this type of reservoir computer can easily be customized to perform various tasks as there is no need to adjust the solid-state system to suit specific requirements.

A team headed by Professor Mathias Kläui of the Institute of Physics at Mainz University, supported by Professor Johan Mentink of Radboud University Nijmegen in the Netherlands, has now succeeded in developing a prototype that combines these two computing methods. This prototype is able to perform Boolean logic operations, which can be used as standard tests for the validation of reservoir computing.

The solid-state system selected in this instance consists of metallic thin films exhibiting magnetic skyrmions. These magnetic vortices behave like particles and can be driven by electrical currents. The behavior of skyrmions is influenced not only by the applied current but also by their own Brownian motion. This Brownian motion of skyrmions can result in significantly increased energy savings as the system is automatically reset after each operation and prepared for the next computation.
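The interplay of current-driven motion and thermal jitter described above can be caricatured with a toy two-dimensional random walk. All numbers here are invented for illustration and do not reflect the actual device physics:

```python
import random

def skyrmion_walk(steps: int, drift: tuple[float, float],
                  thermal_scale: float, seed: int = 42) -> tuple[float, float]:
    """Toy model: final position after `steps` of deterministic drift (the
    applied current) plus Gaussian kicks (Brownian motion from ambient heat)."""
    rng = random.Random(seed)
    x = y = 0.0
    for _ in range(steps):
        x += drift[0] + rng.gauss(0.0, thermal_scale)
        y += drift[1] + rng.gauss(0.0, thermal_scale)
    return x, y

# With the current on, drift dominates and the skyrmion moves steadily in one
# direction; with the current off, only thermal kicks remain and it diffuses.
driven = skyrmion_walk(1000, drift=(0.1, 0.0), thermal_scale=0.5)
idle = skyrmion_walk(1000, drift=(0.0, 0.0), thermal_scale=0.5)
print(driven, idle)
```

Because both runs use the same seed and hence identical thermal noise, the only difference between them is the accumulated drift, the deterministic push that the applied current provides.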

First prototype was developed in Mainz

Although there have been many theoretical concepts for skyrmion-based reservoir computing in recent years, the researchers in Mainz succeeded in developing the first functional prototype only when combining these concepts with the principle of Brownian computing. "The prototype is easy to produce from a lithographic point of view and can theoretically be scaled down to just a few nanometers," said experimental physicist Klaus Raab. "We owe our success to the excellent collaboration between the experimental and theoretical physicists here at Mainz University," emphasized theoretical physicist Maarten Brems. Project coordinator Professor Mathias Kläui added: "I'm delighted that the funding provided through a Synergy Grant from the European Research Council enabled us to collaborate with outstanding colleagues in the Department of Theoretical Physics in Nijmegen, and it was this collaboration that resulted in our achievement. I see great potential in unconventional computing, a field which also receives extensive support here at Mainz through funding from the Carl Zeiss Foundation for the Emergent Algorithmic Intelligence Center."

Roberto Furfaro

UA Prof. Furfaro wins $4.5M to develop AI-powered hypersonic guidance, navigation systems

As countries around the world work to advance weapons traveling at Mach 5 and faster, a team led by University of Arizona experts is building a "brain" for high-speed vehicles and interceptors.

Roberto Furfaro, a University of Arizona professor of systems and industrial engineering, has been awarded $4.5 million to lead the development of improved guidance, navigation, and control systems for autonomous vehicles operating at hypersonic speeds. The proposed three-year research effort is sponsored by the Joint Hypersonic Transition Office through the University Consortium for Applied Hypersonics (UCAH).

Hypersonic speed is Mach 5 or higher, meaning more than five times the speed of sound. As the United States works to develop hypersonic technologies, research in the field has never been more important.

"Many conventional systems are designed using linear theory, and are not designed to fly or intercept at that speed," Furfaro said. "There are a lot of things happening in hypersonic flow that are so nonlinear that they are not fully understood, and that we need to characterize if we want to design systems that work under these conditions."

Consider how, when a car is moving at 80 mph, a one-second delay in the driver's decision-making can have catastrophic results. Hypersonic vehicles, which travel thousands of miles per hour and face additional factors such as shockwaves and extreme heat, have even less room for error.
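The comparison can be made concrete with standard values (the 80 mph figure is from the article; 343 m/s is the sea-level speed of sound, an assumption here, since the speed of sound drops at flight altitude):

```python
# How far each vehicle travels during a one-second delay.
# 343 m/s is the sea-level speed of sound; actual hypersonic flight happens
# at altitude, where the speed of sound is lower, so this is only indicative.

MPH_TO_MS = 1609.344 / 3600.0
SPEED_OF_SOUND_MS = 343.0

car_speed = 80 * MPH_TO_MS            # ~35.8 m/s
mach5_speed = 5 * SPEED_OF_SOUND_MS   # 1715 m/s

print(f"Car covers {car_speed:.0f} m in one second")
print(f"Mach 5 vehicle covers {mach5_speed:.0f} m in one second")
print(f"Ratio: {mach5_speed / car_speed:.0f}x")  # ~48x
```

In the second it takes a driver to react, a Mach 5 vehicle covers well over a kilometer and a half, roughly fifty times the distance of the car.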

UArizona is home to the Arizona Research Center for Hypersonics, where researchers conduct supercomputer simulations and wind tunnel tests to learn more about how vehicles behave in extreme environments. The lab develops and employs novel CFD codes for hypersonic vehicle simulations. 

The artificial intelligence-powered guidance, navigation, and control methods Furfaro and his team develop will act as the "brain" of hypersonic vehicles, including interceptors: high-speed, maneuverable vehicles designed for defense against enemy aircraft.

"This investment is a major win for our burgeoning hypersonic research program," said David W. Hahn, the Craig M. Berge Dean of the College of Engineering. "Roberto has a broad range of expertise in areas including space flight mechanics and machine learning, making him and his team exceptionally well qualified to lead this effort."

To train hypersonic systems to navigate and react to extremely complex, high-speed situations on their own, the team is using a type of machine learning called meta-reinforcement learning.

"With meta-learning, we can train it not only on one scenario but on many scenarios," Furfaro said. "The system is able to learn over a distribution of environments, and every time it converges faster to the next one. By enabling this continuous learning, we are basically able to have a system that continually adapts."
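In schematic form, meta-learning over a distribution of scenarios is an outer loop over sampled tasks wrapped around an inner adaptation loop. The sketch below is a generic skeleton with placeholder update rules, not the team's actual algorithm:

```python
import random

def meta_training_loop(meta_iterations: int, inner_steps: int, seed: int = 0) -> float:
    """Generic meta-learning skeleton: sample a task from a distribution,
    adapt to it with a few fast inner updates, then fold what was learned
    back into the shared parameters so later tasks start from a better
    initialization and converge faster."""
    rng = random.Random(seed)
    shared_param = 0.0  # stand-in for the network's meta-learned weights
    for _ in range(meta_iterations):
        # Sample one scenario from the task distribution (in practice, e.g.
        # randomized flight dynamics). Here: a random scalar target.
        task_goal = rng.uniform(-1.0, 1.0)
        # Inner loop: fast adaptation to this particular task.
        adapted = shared_param
        for _ in range(inner_steps):
            adapted += 0.5 * (task_goal - adapted)  # toy gradient step
        # Outer loop: nudge the shared parameters toward what worked.
        shared_param += 0.1 * (adapted - shared_param)
    return shared_param

print(meta_training_loop(meta_iterations=200, inner_steps=5))
```

The point of the structure is the split between the two loops: the inner loop specializes quickly to one scenario, while the outer loop slowly accumulates knowledge that transfers across the whole distribution.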

A strong team builds a test environment

UArizona alumnus Brian Gaudet, a research engineer in the university's Space Systems Engineering Laboratory, is playing a critical role in developing and implementing the AI system. Other collaborators and co-investigators include aerospace and mechanical engineering professor Samy Missoum, who leads Department of Defense-funded research to characterize hypersonic environments, and materials science and engineering professor Erica Corral, who serves as the co-director of industrial and national lab engagement and workforce development for UCAH's Consortium Engagement Board.

Furfaro will also work with aerospace and mechanical engineering faculty members Alex Craig and Jesse Little, who work in experimental aerodynamics, and Kyle Hanquist, an assistant professor in the same department who specializes in computational fluid dynamics. Other collaborators are at the University of Texas at Austin and Raytheon Missiles and Defense.

"The University of Arizona has a nationally prominent hypersonics research program, which received $10 million in federal and state support in 2021 to enhance our research facilities," said University of Arizona President Robert C. Robbins. "Many of the field's top experts agree that artificial intelligence will play an increasingly important role in the advancement of the field, and Professor Furfaro's receipt of this highly competitive grant will bring together many areas of expertise to advance this critical area."

The researchers will use data gathered from these simulations and wind tunnel tests on how vehicles behave in hypersonic flow to characterize and create a simulated environment for training the adaptive brain of the system.

"We're incredibly supportive of the University of Arizona's work in developing hypersonic technologies and talent," said Wes Kremer, president of Raytheon Missiles & Defense. "The advancements Professor Furfaro and his team will make to guidance, navigation, and control systems will directly impact our nation’s ability to develop advanced hypersonic capabilities."

L-R: Amir Livne, Dr. Gil Shamai and Prof. Ron Kimmel

Technion-developed deep-learning system looks at breast cancer scans better than a human

One in nine women in the developed world will be diagnosed with breast cancer at some point in her life. The prevalence of breast cancer is increasing, an effect caused in part by the modern lifestyle and increased lifespans. Thankfully, treatments are becoming more efficient and personalized. However, what isn’t increasing – and is in fact decreasing – is the number of pathologists, the doctors who specialize in examining body tissues to provide the specific diagnosis necessary for personalized medicine. A team of researchers at the Technion – Israel Institute of Technology has therefore made it its quest to turn computers into effective pathologists’ assistants, simplifying and improving the human doctor’s work.

The specific task that Dr. Gil Shamai and Amir Livne from the lab of Professor Ron Kimmel from the Henry and Marilyn Taub Faculty of Computer Science at the Technion set out to achieve lies within the realm of immunotherapy. Immunotherapy has been gaining prominence in recent years as an effective, sometimes even game-changing, treatment for several types of cancer. The basis of this form of therapy is encouraging the body’s own immune system to attack the tumor. However, such therapy needs to be personalized as the correct medication must be administered to the patients who stand to benefit from it based on the specific characteristics of the tumor.

Multiple natural mechanisms prevent our immune system from attacking our own body, and cancerous tumors often exploit these mechanisms to evade the immune system. One such mechanism is related to the PD-L1 protein: some tumors display it, and it acts as a sort of password, erroneously convincing the immune system that the cancer should not be attacked. Immunotherapy targeting PD-L1 can persuade the immune system to ignore this particular password, but of course it is only effective when the tumor actually expresses PD-L1.

It is a pathologist’s task to determine whether a patient’s tumor expresses PD-L1. Expensive chemical markers are used to stain a biopsy taken from the tumor in order to obtain the answer. The process is non-trivial, time-consuming, and at times inconsistent. Dr. Shamai and his team took a different approach. In recent years, it has become an FDA-approved practice for biopsies to be scanned so they can be used for digital pathological analysis. Amir Livne, Dr. Shamai, and Prof. Kimmel decided to see if a neural network could use these scans to make the diagnosis without requiring additional processes. “They told us it couldn’t be done,” the team said, “so of course, we had to prove them wrong.”

Neural networks are trained in a manner similar to how children learn: they are presented with multiple tagged examples. A child is shown many dogs and various other things, and from these examples forms an idea of what “dog” is. The neural network Prof. Kimmel’s team developed was presented with digital biopsy images from 3,376 patients that were tagged as either expressing or not expressing PD-L1. After preliminary validation, it was asked to determine whether additional clinical trial biopsy images from 275 patients were positive or negative for PD-L1. It performed better than expected: for 70% of the patients, it was able to confidently and correctly determine the answer. For the remaining 30% of the patients, the program could not find the visual patterns that would enable it to decide one way or the other. Interestingly, in the cases where artificial intelligence (AI) disagreed with the human pathologist’s determination, a second test proved the AI to be right.
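The behavior described above, answering confidently for 70% of cases and declining to decide for the rest, is characteristic of classification with abstention. The sketch below illustrates only that generic idea; it is not the Technion model, and the threshold value is invented:

```python
def call_pd_l1(prob_positive: float, threshold: float = 0.9) -> str:
    """Return a PD-L1 call only when the model's probability is decisive.

    prob_positive: the network's estimated probability that the biopsy
    expresses PD-L1. Cases in the uncertain middle band are deferred to
    a human pathologist instead of being forced into a guess.
    """
    if prob_positive >= threshold:
        return "positive"
    if prob_positive <= 1.0 - threshold:
        return "negative"
    return "defer to pathologist"

print(call_pd_l1(0.97))  # positive
print(call_pd_l1(0.02))  # negative
print(call_pd_l1(0.55))  # defer to pathologist
```

Raising the threshold trades coverage for reliability: the system answers fewer cases, but the answers it does give are more trustworthy.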

“This is a momentous achievement,” Prof. Kimmel explained. “The variations that the computer found – they are not distinguishable to the human eye. Cells arrange themselves differently if they present PD-L1 or not, but the differences are so small that even a trained pathologist can’t confidently identify them. Now our neural network can.”

This achievement is the work of a team comprising Dr. Gil Shamai and graduate student Amir Livne, who developed the technology and designed the experiments; Dr. António Polónia from the Institute of Molecular Pathology and Immunology of the University of Porto, Portugal; and Professor Edmond Sabo and Dr. Alexandra Cretu from Carmel Medical Center in Haifa, Israel, the expert pathologists who conducted the research; with the support of Professor Gil Bar-Sela, head of the oncology and hematology division at Haemek Medical Center in Afula, Israel.

“It’s an amazing opportunity to bring together artificial intelligence and medicine,” Dr. Shamai said. “I love mathematics, I love developing algorithms. Being able to use my skills to help people, to advance medicine – it’s more than I expected when I started out as a computer science student.” He is now leading a team of 15 researchers, who are taking this project to the next level.

“We expect AI to become a powerful tool in doctors’ hands,” shared Prof. Kimmel. “AI can assist in making or verifying a diagnosis, it can help match the treatment to the individual patient, it can offer a prognosis. I do not think it can, or should, replace the human doctor. But it can make some elements of doctors’ work simpler, faster, and more precise.”