Johns Hopkins researchers discover 'spooky' similarity in how brains, computers see

Natural and artificial intelligence networks process 3D fragments of visual images in the same way

The brain detects 3D shape fragments (bumps, hollows, shafts, spheres) in the beginning stages of object vision - a newly discovered strategy of natural intelligence that Johns Hopkins University researchers also found in artificial intelligence networks trained to recognize visual objects.

A new paper in Current Biology details how neurons in area V4, the first stage specific to the brain's object vision pathway, represent 3D shape fragments, not just the 2D shapes used to study V4 for the last 40 years. The Johns Hopkins researchers then identified nearly identical responses of artificial neurons in an early stage (layer 3) of AlexNet, an advanced computer vision network. In both natural and artificial vision, early detection of 3D shape presumably aids the interpretation of solid, 3D objects in the real world.

"I was surprised to see strong, clear signals for 3D shape as early as V4," said Ed Connor, a neuroscience professor and director of the Zanvyl Krieger Mind/Brain Institute. "But I never would have guessed in a million years that you would see the same thing happening in AlexNet, which is only trained to translate 2D photographs into object labels."

One of the long-standing challenges for artificial intelligence has been to replicate human vision. Deep (multilayer) networks like AlexNet have achieved major gains in object recognition, based on high-capacity graphics processing units (GPUs) developed for gaming and massive training sets fed by the explosion of images and videos on the Internet.

Connor and his team applied the same tests of image responses to natural and artificial neurons and discovered remarkably similar response patterns in V4 and AlexNet layer 3. What explains what Connor describes as a "spooky correspondence" between the brain - a product of evolution and lifetime learning - and AlexNet - designed by computer scientists and trained to label object photographs?

AlexNet and similar deep networks were actually designed in part based on the multi-stage visual networks in the brain, Connor said. He said the close similarities they observed may point to future opportunities to leverage correlations between natural and artificial intelligence.

"Artificial networks are the most promising current models for understanding the brain. Conversely, the brain is the best source of strategies for bringing artificial intelligence closer to natural intelligence," Connor said.

Australian invention to make it easier to find 'new Earths'

Photonics combined with AI will help decipher the ‘twinkle’ of stars

University of Sydney scientists have developed a sensor that will help decipher the 'twinkle' of stars and allow for ground-based exploration of exoplanets. Their invention will be deployed in one of the world's largest telescopes at Mauna Kea, Hawaii.

Australian scientists have developed a new type of sensor to measure and correct the distortion of starlight caused by viewing through the Earth’s atmosphere, which should make it easier to study the possibility of life on distant planets.

Using artificial intelligence and machine learning, the University of Sydney optical scientists have developed a sensor that can neutralize a star’s ‘twinkle’ caused by heat variations in the Earth’s atmosphere. This will make the discovery and study of planets in distant solar systems easier from optical telescopes on Earth.

“The main way we identify planets orbiting distant stars is by measuring regular dips in starlight caused by planets blocking out bits of their sun,” said lead author Dr Barnaby Norris, who holds a joint position as a Research Fellow in the University of Sydney Astrophotonic Instrumentation Laboratory and in the University of Sydney node of Australian Astronomical Optics in the School of Physics. 

“This is really difficult from the ground, so we needed to develop a new way of looking up at the stars. We also wanted to find a way to directly observe these planets from Earth,” he said.
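The transit method Dr Norris describes can be illustrated with a back-of-the-envelope calculation: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. The radii below are standard reference values, not figures from the article:

```python
# Fractional transit depth: the planet blocks an area pi*Rp^2
# out of the stellar disk's area pi*Rs^2, so depth = (Rp/Rs)**2.
R_EARTH_KM = 6371       # mean Earth radius
R_SUN_KM = 696_000      # approximate solar radius

depth = (R_EARTH_KM / R_SUN_KM) ** 2
print(f"Earth transiting the Sun dims it by {depth:.2e} ({depth * 100:.4f}%)")
```

A dip of less than one part in ten thousand is why atmospheric "twinkle", which can easily swamp such tiny variations, makes transit detection from the ground so difficult.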

The team’s invention will now be deployed on one of the largest optical telescopes in the world, the 8.2-meter Subaru Telescope in Hawaii, operated by the National Astronomical Observatory of Japan.

“It is really hard to separate a star’s ‘twinkle’ from the light dips caused by planets when observing from Earth,” Dr Norris said. “Most observations of exoplanets have come from orbiting telescopes, such as NASA’s Kepler. With our invention, we hope to launch a renaissance in exoplanet observation from the ground.”

Novel methods

The new ‘photonic wavefront sensor’ will help astronomers directly image exoplanets around distant stars from Earth.

Over the past two decades, thousands of planets beyond our solar system have been detected, but only a small handful have been directly imaged from Earth. This severely limits the scientific exploration of these exoplanets.

Making an image of the planet yields far more information than indirect detection methods, such as measuring starlight dips. But Earth-like planets might appear a billion times fainter than their host star, and observing the planet separately from its star is like looking at a 10-cent coin held in Sydney, as viewed from Melbourne.
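The coin analogy corresponds to a tiny angular separation on the sky. A rough estimate, assuming a 10-cent coin of about 24 mm and a Sydney-to-Melbourne distance of roughly 700 km (both approximate values, not stated in the article):

```python
import math

COIN_DIAMETER_M = 0.024   # Australian 10-cent coin, roughly 24 mm across
DISTANCE_M = 700_000.0    # Sydney to Melbourne, roughly 700 km

# Small-angle approximation: angle (radians) ~ size / distance
angle_rad = COIN_DIAMETER_M / DISTANCE_M
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"Angular size: {angle_arcsec:.4f} arcseconds")
```

That works out to a few milliarcseconds, far smaller than the roughly one arcsecond of blur typical of uncorrected atmospheric seeing, which is why correcting the wavefront is essential for direct imaging.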

To solve this problem, the scientific team in the School of Physics developed a ‘photonic wavefront sensor’, a new way to measure the exact distortion caused by the atmosphere so that the telescope’s adaptive optics systems can correct it thousands of times a second.

“This new sensor merges advanced photonic devices with deep learning and neural network techniques to achieve an unprecedented type of wavefront sensor for large telescopes,” Dr Norris said.

“Unlike conventional wavefront sensors, it can be placed at the same location in the optical instrument where the image is formed. This means it is sensitive to types of distortions invisible to other wavefront sensors currently used in large observatories,” he said.

Professor Olivier Guyon from the Subaru Telescope and the University of Arizona is one of the world’s leading experts in adaptive optics. He said: “This is no doubt a very innovative approach and very different from all existing methods. It could potentially resolve several major limitations of the current technology. We are currently working in collaboration with the University of Sydney team towards testing this concept at Subaru in conjunction with SCExAO, which is one of the most advanced adaptive optics systems in the world.”

Wider applications 

The scientists have achieved this remarkable result by building on a novel method to measure (and correct) the wavefront of light that passes through atmospheric turbulence directly at the focal plane of an imaging instrument. This is done using an advanced light converter, known as a photonic lantern, linked to a neural network inference process.
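The article does not describe the team's actual architecture, but the inference step might be pictured, purely as an illustrative sketch, as a small feedforward network mapping the intensities of the lantern's output cores to wavefront-distortion coefficients. All shapes, names, and weights below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CORES = 19   # hypothetical number of photonic-lantern output cores
N_MODES = 9    # hypothetical number of wavefront (e.g. Zernike) modes

# Stand-ins for trained weights; in practice these would be learned
# from calibration data pairing lantern intensities with known distortions.
W1 = rng.normal(size=(N_CORES, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.normal(size=(64, N_MODES)) * 0.1
b2 = np.zeros(N_MODES)

def infer_wavefront(core_intensities: np.ndarray) -> np.ndarray:
    """One forward pass: lantern core intensities -> mode coefficients."""
    hidden = np.maximum(core_intensities @ W1 + b1, 0.0)  # ReLU hidden layer
    return hidden @ W2 + b2

coeffs = infer_wavefront(rng.random(N_CORES))
print(coeffs.shape)  # one coefficient per wavefront mode
```

In a real system, coefficients like these would drive the telescope's adaptive optics (for example, a deformable mirror), with the whole loop repeating thousands of times per second as the article describes.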

“This is a radically different approach to existing methods and resolves several major limitations of current approaches,” said co-author Jin (Fiona) Wei, a postgraduate student at the Sydney Astrophotonic Instrumentation Laboratory.

The Director of the Sydney Astrophotonic Instrumentation Laboratory in the School of Physics at the University of Sydney, Associate Professor Sergio Leon-Saval, said: “While we developed this to solve a problem in astronomy, the proposed technique is extremely relevant to a wide range of fields. It could be applied in optical communications, remote sensing, in-vivo imaging, and any other field that involves the reception or transmission of accurate wavefronts through a turbulent or turbid medium, such as water, blood, or air.”

Evidence of broadside collision with dwarf galaxy discovered in Milky Way

'Shell structures' are the first of their kind found in the galaxy

Nearly 3 billion years ago, a dwarf galaxy plunged into the center of the Milky Way and was ripped apart by the gravitational forces of the collision. Astrophysicists announced today that the merger produced a series of telltale shell-like formations of stars in the vicinity of the Virgo constellation, the first such "shell structures" to be found in the Milky Way. The finding offers further evidence of the ancient event and new possible explanations for other phenomena in the galaxy.

Astronomers identified an unusually high density of stars called the Virgo Overdensity about two decades ago. Star surveys revealed that some of these stars are moving toward us while others are moving away, which is also unusual, as a cluster of stars would typically travel in concert. Based on emerging data, astrophysicists at Rensselaer Polytechnic Institute proposed in 2019 that the overdensity was the result of a radial merger, the stellar version of a T-bone crash.

"When we put it together, it was an 'aha' moment," said Heidi Jo Newberg, Rensselaer professor of physics, applied physics, and astronomy, and lead author of The Astrophysical Journal paper detailing the discovery. "This group of stars had a whole bunch of different velocities, which was very strange. But now that we see their motion as a whole, we understand why the velocities are different, and why they are moving the way that they are."

The newly announced shell structures are curved planes of stars, like umbrellas, left behind as the dwarf galaxy was torn apart while literally bouncing up and down through the center of the galaxy as it was incorporated into the Milky Way, an event the researchers have named the "Virgo Radial Merger." Each time the dwarf galaxy's stars pass quickly through the galactic center, they slow as the Milky Way's gravity pulls them back, stop at their farthest point, and turn around to crash through the center again, creating another shell structure. Simulations that match survey data can be used to calculate how many such cycles the dwarf galaxy has completed, and therefore when the original collision occurred.

The new paper identifies two shell structures in the Virgo Overdensity and two in the Hercules Aquila Cloud region, based on data from the Sloan Digital Sky Survey, the European Space Agency's Gaia space telescope, and the LAMOST telescope in China. Supercomputer modeling of the shells and the motion of the stars indicates that the dwarf galaxy first passed through the galactic center of the Milky Way 2.7 billion years ago.

Newberg is an expert on the halo of the Milky Way, a spherical cloud of stars that surrounds the spiral arms of the central disk. Most if not all of those stars appear to be "immigrants," stars that formed in smaller galaxies that were later pulled into the Milky Way. As the smaller galaxies coalesce with the Milky Way, their stars are pulled by so-called "tidal forces," the same kind of differential forces that make tides on Earth, and they eventually form a long cord of stars moving in unison within the halo. Such tidal mergers are fairly common and have been the focus of much of Newberg's research over the past two decades.

More violent "radial mergers" are considered far less common. Thomas Donlon II, a Rensselaer graduate student and first author of the paper, said that they were not initially seeking evidence of such an event.

"There are other galaxies, typically more spherical galaxies, that have a very pronounced shell structure, so you know that these things happen, but we've looked in the Milky Way and hadn't seen really obvious gigantic shells," said Donlon, who was lead author on the 2019 paper that first proposed the Virgo Radial Merger. As they modeled the movement of the Virgo Overdensity, they began to consider a radial merger. "And then we realized that it's the same type of merger that causes these big shells. It just looks different because, for one thing, we're inside the Milky Way, so we have a different perspective, and also this is a disk galaxy and we don't have as many examples of shell structures in disk galaxies."

View a video simulation of the formation of the stellar shell structures. 

The finding has potential implications for a number of other stellar phenomena, including the Gaia Sausage, a formation of stars believed to have resulted from the merger of a dwarf galaxy between 8 and 11 billion years ago. Previous work supported the idea that the Virgo Radial Merger and the Gaia Sausage resulted from the same event; the much lower age estimate for the Virgo Radial Merger means that either the two are different events, or the Gaia Sausage is much younger and could not have caused the creation of the thick disk of the Milky Way, as previously claimed. A recently discovered spiral pattern in position and velocity data for stars close to the sun, sometimes called the Gaia Snail, and a proposed event called the Splash may also be associated with the Virgo Radial Merger.

"There are lots of potential tie-ins to this finding," Newberg said. "The Virgo Radial Merger opens the door to a greater understanding of other phenomena that we see and don't fully understand, and that could very well have been affected by something having fallen right through the middle of the galaxy less than 3 billion years ago."