Genetic algorithm finds the fittest models explaining quantum correlations through evolutionary strategies

Australian and German researchers have collaborated to develop a genetic algorithm to confirm the rejection of classical notions of causality.

Dr Alberto Peruzzo from RMIT University in Melbourne said: "Bell's theorem excludes classical concepts of causality and is now a cornerstone of modern physics. 

"But despite the fundamental importance of this theorem, only recently was the first 'loophole-free' experiment reported which convincingly verified that we must reject classical notions of causality. 

"Given the importance of this data, an international collaboration between Australian and German institutions has developed a new method of analysis to robustly quantify such conclusions."

The team's approach was to use genetic programming, a powerful machine learning technique, to automatically find the closest classical models for the data. 

Applying this machine learning technique to the experimental data, the team found the closest classical explanations and mapped out many dimensions of the departure from classical physics that quantum correlations exhibit.

Dr Chris Ferrie, from the University of Technology Sydney, said: "We've light-heartedly called the region mapped out by the algorithm the 'edge of reality,' referring to the common terminology 'local realism' for a model of physics satisfying Einstein's relativity. 

"The algorithm works by building causal models through simulated evolution imitating natural selection - genetic programming. 

"The algorithm generates a population of 'fit' individual causal models which trade off closeness to quantum theory with the minimisation of causal influences between relativistically disconnected variables."

The team used photons, single particles of light, to generate the quantum correlations that cannot be explained using classical mechanics. 

Quantum photonics has enabled a wide range of new technologies from quantum computation to quantum key distribution.

The photons were prepared in various states possessing quantum entanglement, the phenomenon which fuels many of the advantages in quantum technology. The data collected was then used by the genetic algorithm to find a model that best matches the observed correlations. 

These best-fit classical models then delineate the region of model space that is ruled out by nature itself. 

A team of astronomers has made the first measurements of small-scale ripples in primeval hydrogen gas using rare double quasars

The most barren regions known are the far-flung corners of intergalactic space. In these vast expanses between the galaxies there is just one solitary atom per cubic meter -- a diffuse haze of hydrogen gas left over from the Big Bang. On the largest scales, this material is arranged in a vast network of filamentary structures known as the "cosmic web," its tangled strands spanning billions of light years and accounting for the majority of atoms in the universe. 

Now, a team of astronomers, including UC Santa Barbara physicist Joseph Hennawi, has made the first measurements of small-scale ripples in this primeval hydrogen gas using rare double quasars. Although the regions of cosmic web they studied lie nearly 11 billion light years away, they were able to measure variations in its structure on scales 100,000 times smaller, comparable to the size of a single galaxy. The results appear in the journal Science.

Intergalactic gas is so tenuous that it emits no light of its own. Instead astronomers study it indirectly by observing how it selectively absorbs the light coming from faraway sources known as quasars. Quasars constitute a brief hyperluminous phase of the galactic life cycle powered by matter falling into a galaxy's central supermassive black hole. Acting like cosmic lighthouses, they are bright, distant beacons that allow astronomers to study intergalactic atoms residing between the location of the quasar and the Earth. But because these hyperluminous episodes last only a tiny fraction of a galaxy's lifetime, quasars are correspondingly rare and are typically separated from each other by hundreds of millions of light years.

In order to probe the cosmic web on much smaller length scales, the astronomers exploited a fortuitous cosmic coincidence: They identified exceedingly rare pairs of quasars and measured subtle differences in the absorption of intergalactic atoms along the two sightlines. 

"Pairs of quasars are like needles in a haystack," explained Hennawi, associate professor in UCSB's Department of Physics. Hennawi pioneered the application of algorithms from "machine learning" -- a brand of artificial intelligence -- to efficiently locate quasar pairs in the massive amounts of data produced by digital imaging surveys of the night sky. "In order to find them, we combed through images of billions of celestial objects millions of times fainter than what the naked eye can see."

Once identified, the quasar pairs were observed with the largest telescopes in the world, including the 10-meter Keck telescopes at the W.M. Keck Observatory on Mauna Kea, Hawaii, of which the University of California is a founding partner.

"One of the biggest challenges was developing the mathematical and statistical tools to quantify the tiny differences we measured in this new kind of data," said lead author Alberto Rorai, Hennawi's former Ph.D. student who is now a postdoctoral researcher at Cambridge University. Rorai developed these tools as part of the research for his doctoral degree and applied them to spectra of quasars with Hennawi and other colleagues.

The astronomers compared their measurements to supercomputer models that simulate the formation of cosmic structures from the Big Bang to the present. On a single laptop, these complex calculations would require almost 1,000 years to complete, but modern supercomputers enabled the researchers to carry them out in just a few weeks.
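As a rough back-of-the-envelope check of that comparison (the exact run times are not given in the article, so "a few weeks" is taken as three weeks purely for illustration), the implied speedup is on the order of tens of thousands:

```python
# Rough arithmetic behind the laptop-vs-supercomputer comparison above.
# The "few weeks" figure is assumed to be 3 weeks for illustration only.
laptop_years = 1000
weeks_on_supercomputer = 3

laptop_weeks = laptop_years * 52
speedup = laptop_weeks / weeks_on_supercomputer
print(f"effective speedup: ~{speedup:,.0f}x")   # roughly 17,000x
```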

"The input to our simulations are the laws of physics and the output is an artificial universe, which can be directly compared to astronomical data," said co-author Jose Oñorbe, a postdoctoral researcher at the Max Planck Institute for Astronomy in Heidelberg, Germany, who led the supercomputer simulation effort. "I was delighted to see that these new measurements agree with the well-established paradigm for how cosmic structures form." 

"One reason why these small-scale fluctuations are so interesting is that they encode information about the temperature of gas in the cosmic web just a few billion years after the Big Bang," explained Hennawi. 

Astronomers believe that the matter in the universe went through phase transitions billions of years ago, which dramatically changed its temperature. Known as cosmic re-ionization, these transitions occurred when the collective ultraviolet glow of all stars and quasars in the universe became intense enough to strip electrons off atoms in intergalactic space. How and when re-ionization occurred is one of the biggest open questions in the field of cosmology, and these new measurements provide important clues that will help narrate this chapter of cosmic history.

Online, a wholly owned subsidiary of the leading French telecom company Iliad Group and one of the leading web hosting providers, has announced the commercial deployment of server platforms based on Cavium's ThunderX workload-optimized processors as part of its Scaleway cloud service offering.

Online offers a range of services to Internet customers worldwide including domain names, web hosting, dedicated servers and hosting in their datacenter. With several hundred thousand servers deployed in their datacenter, Online is one of the largest web hosting providers in Europe. 

The ThunderX product family is Cavium's 64-bit ARMv8-A server processor line for datacenter and cloud applications, featuring high-performance custom cores, single- and dual-socket configurations, high memory bandwidth and large memory capacity. The product family also includes integrated hardware accelerators, feature-rich, high-bandwidth network and storage IO, fully virtualized cores and IO, and a scalable, high-bandwidth, low-latency Ethernet fabric, which affords ThunderX best-in-class performance per dollar. The processors are fully compliant with the ARMv8-A architecture specification as well as ARM's SBSA and SBBR standards, and are widely supported by industry-leading OS, hypervisor, software tool and application vendors.

Online is deploying dual-socket, 96-core ThunderX based platforms as part of its Scaleway IaaS cloud offering. As part of this deployment, Online.net is introducing three starter ARMv8 servers at an attractive starting price of €0.006 per hour, less than one third of the price of its current offering. The Scaleway cloud platform is fully supported with Ubuntu 16.04, including the LAMP stack, Docker, Puppet, Juju, Hadoop, MAAS and more. The platforms also support all standard features of the Scaleway cloud, including flexible IPs, native IPv6, snapshots and images.
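For context, a rough monthly cost at the quoted hourly rate (assuming a 720-hour, 30-day month of continuous uptime, an assumption not stated in the announcement) works out as follows:

```python
# Back-of-the-envelope monthly cost at the quoted €0.006/hour starter price.
# A 720-hour (30-day) month of continuous uptime is assumed for illustration.
price_per_hour_eur = 0.006
hours_per_month = 24 * 30

monthly_cost = price_per_hour_eur * hours_per_month
print(f"~€{monthly_cost:.2f} per month")   # ~€4.32
```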

"Online success in the hosting server industry is built on providing disruptive technology with best-in-class customer experience. This requires us to deploy the most advanced, highest performance and highly scalable servers in our infrastructure," said Yann Léger, VP Cloud Computing at Online. "Cavium's ThunderX workload optimized servers provide an ideal vehicle to enable highly optimized platforms for scalable cloud workloads. We expect ThunderX based servers to deliver significant benefits in performance and TCO, thereby providing better performance and cost-efficiency than all existing solutions in the industry."

"ThunderX ARMv8 CPUs were designed to deliver best-in-class performance and TCO for targeted workloads and are being deployed at multiple hosting datacenters," said Gopal Hegde, VP/GM, Datacenter Processor Group at Cavium. "We are pleased to partner with one of Europe's elite hosting providers on server platforms for their next generation cloud datacenters. This partnership demonstrates continued acceptance of ThunderX platforms across largest and most demanding datacenters."  

Provides architecture best practices, migration management and cloud automation throughout customers' AWS cloud journey

Rackspace has announced the expansion of its managed service offerings for the Amazon Web Services (AWS) Cloud to include a portfolio of Professional Services. The Fanatical Support for AWS Professional Services are tailored to support customers who are new to, or growing on, AWS and need deep, customized expertise to help enable their AWS Cloud journey in key areas such as architectural design, migrations, cloud automation and DevOps. This new offering aligns with Rackspace's broader efforts to develop professional services that deliver end-to-end support and expertise to help customers move workloads out of their data centers and onto the world's leading cloud platforms. 

The addition of Fanatical Support for AWS Professional Services to the Rackspace suite of managed services for AWS reflects a continued commitment to help customers take full advantage of AWS. AWS-certified architects and engineers will work with customers to enable their journey to AWS, helping to ensure they achieve maximum performance, agility and cost-efficiency.

"Rackspace is famous for its support, and we use it a lot," said Paul Keen, CTO of Airtasker, a Fanatical Support for AWS customer. "We work with the Rackspace Professional Services team for any projects we can't do in-house, as well as using security services to tighten up our systems, and we iterate monthly to improve our services. Rackspace really cares about its customers. We expected that level of care during the sales cycle, but it has continued throughout our engagement with them. That's what really sets them apart." 

"We are seeing significant interest from AWS customers for specialized expertise in discrete, value added areas, and we are committed to developing our offers and capabilities to help them fully leverage AWS throughout the entirety of their cloud journey," said Prashanth Chandrasekar, vice president & general manager of Fanatical Support for AWS at Rackspace. 

Fanatical Support for AWS Professional Services address three critical areas where customers need support along their cloud journey:

  • Architecture Strategy & Guidance: Delivers extensive planning, review and consulting around architecture best practices, customer environments, account planning and review of customers' proposed VPC infrastructure design. Also included in this offer is guidance for AWS services that provide functionality in networking, storage, security and operations on AWS.
  • Cloud Migration: Provides assistance with moving web and database workloads to AWS using tried and tested migration tools and methodologies. In addition, Fanatical Support for AWS Professional Services gives customers solutions expertise and ongoing operational support throughout the migration process for a wide array of workloads. 
  • Cloud Automation: Offers assistance with building and delivering products using AWS and DevOps best practices, including serverless and containerized workloads. This also includes the implementation of Continuous Integration/Continuous Delivery (CI/CD), infrastructure automation and application deployment, plus integration of third-party tools and services (a minimal, generic automation sketch follows this list). 
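To make the "infrastructure automation" idea above concrete, here is a minimal, generic sketch using the AWS SDK for Python (boto3). It is not Rackspace tooling and is not tied to the offering described here; the bucket name, region and tag values are placeholders, and credentials are assumed to be configured in the environment.

```python
# Generic example of scripted infrastructure automation with boto3.
# Not Rackspace tooling; bucket name, region and tags are placeholders.
import boto3

REGION = "eu-west-1"
BUCKET = "example-ci-artifacts-bucket"   # hypothetical artifact store for a CI/CD pipeline

s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
s3.put_bucket_tagging(
    Bucket=BUCKET,
    Tagging={"TagSet": [{"Key": "environment", "Value": "staging"}]},
)

# Inventory EC2 instances, the kind of check a deployment script might run.
ec2 = boto3.client("ec2", region_name=REGION)
reservations = ec2.describe_instances()["Reservations"]
instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]
print("known instances:", instance_ids)
```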

"At AWS, we're excited about helping customers realize their target business outcomes more quickly through a combination of Rackspace's migration capabilities, and Fanatical Support with our AWS architects and implementation specialists," said Todd Weatherby, vice president of AWS Professional Services Worldwide. 

In addition to offering professional services, Rackspace continues to develop its managed services offerings for AWS by investing in the expertise and software tooling necessary to keep pace with the growing complexity of AWS services. Rackspace was recently recognized as a Premier Consulting Partner, the highest tier within the AWS Partner Network, and a leader in the industry. The Fanatical Support for AWS team of experts has amassed more than 800 AWS technical certifications, and been awarded AWS Competency designations for their expertise in supporting DevOps and Marketing & Commerce workloads on AWS.

To learn more about Fanatical Support for AWS and the new Fanatical Support for AWS Professional Services, please visit www.rackspace.com/managed-aws/services

Much of what scientists know about human memory comes from studies involving relatively simple acts of recollection--remembering lists of words or associations between names and faces.

However, they know very little about the brain networks that support memories for complex events, like when we remember the plot of a book or movie or what we experienced, thought and felt during a childhood birthday party.

A multi-university study led by a neuroscientist at the University of California, Davis, aims to vastly deepen understanding by developing a supercomputer model of how the brain forms, stores and retrieves complex memories. The goal is that the model will have human-like abilities to remember, understand and learn from events.

The project--recently awarded a $7.5 million, five-year grant from the U.S. Department of Defense--could lead to an evolutionary leap in the development of artificial intelligence. It could also open new avenues for understanding Alzheimer's disease, dementia and other memory disorders.

"Our brains have a remarkable capability to remember past events and to use our memories to make inferences in the present and predictions about the future," said Charan Ranganath, a professor in the UC Davis Department of Psychology and the Center for Neuroscience who is the project's principal investigator.

"For instance, if you go out to eat at a formal restaurant, before you even walk in, you can guess the sequence of events that will unfold (seating, taking your order, appetizers... etc.), and the roles that people will play (host, waiter, bus-person, chef, etc.). People can learn this kind of knowledge about particular kinds of events--which we call 'event schemas'--after only a few experiences."

While scientists have identified areas of the brain involved in memory, they know very little about the brain networks involved in creating those schemas, which help us sort events into categories, learn and anticipate what might happen next.

Joining Ranganath in tackling this huge problem are researchers Ken Norman and Uri Hasson of Princeton University, Samuel Gershman of Harvard University, Jeffrey Zacks of Washington University in St. Louis, and Orrin Devinsky of New York University's Comprehensive Epilepsy Center. Each investigator brings a different kind of expertise needed to map and create a supercomputer model of the brain's memory networks.

Ranganath said every prong of the study will break new ground for research methods in the field. For instance, the team will develop new tools to help to decode brain activity in real time as a person recalls a past event.

"One of the key ideas in our project is that people intuitively understand and remember events in terms of relationships," Ranganath said. "Our modeling approach is intended to capture this human-like approach by extracting knowledge about roles and relationships from specific events."

The Harvard and Princeton teams will develop the supercomputer model, and the UC Davis, Washington University, and Princeton scientists will run functional magnetic resonance imaging studies to test and refine the model. The UC Davis team will also analyze recordings of electrical brain activity collected from patients at NYU who have electrodes implanted in the brain as part of a medical evaluation for epilepsy surgery. Ranganath, an expert on the neural mechanisms of human memory, will focus on the interactions between the hippocampus and cortex in retrieving and consolidating memories for real-life events.

Implications for Improving Memory

Ranganath's initial studies already have some practical implications for how memory can be improved in the real world. "In our studies, we have people take a tour of the UC Davis California Raptor Center, and after the tour, we use pictures from a wearable camera to test people on what they learned during the tour," he said. "Our initial results show that using cameras in this way can significantly improve one's memory. With this project, we will be able to investigate what is happening in the brain as people recall events from the raptor tour."

People sometimes think of memories like files on a computer, but that's not the way human memory works, Ranganath said. Instead of simply storing data, the human brain consolidates memories of experiences and perceptions to learn, make inferences, and modify and update knowledge. Ranganath's team predicts that, when an event is recollected, the memory can be modified and updated. "That dynamic aspect of memory is a little different than computer storage, where a file can be processed without saving anything to memory."

While the field of artificial intelligence is advancing rapidly, computers and smart devices do best when they are trained to do specific tasks, like recognizing a face, he said.

"It's much harder to train a computer to learn information that can be used to support a wide range of inferences and predictions. If you have tried to use Siri or voice commands on an Android phone, you'll soon see that there is a lot that gets lost in translation."

The supercomputer model envisioned in this project would be more like the Star Wars droid R2-D2 -- able to analyze, infer and learn from events.

The Department of Defense is excited about this because it could be used for national security purposes. For instance, such a model could be used to analyze video footage for signs of terrorist activity. But the technology could have much wider applications.

"Really, if you think about it, a machine that is capable of learning schemas to understand events would have an endless range of applications," Ranganath said.
