A group of researchers using the Suprime-Cam instrument on the Subaru Telescope has discovered about 80 young galaxies that existed in the early universe about 1.2 billion years after the Big Bang. The team, with members from Ehime University, Nagoya University, Tohoku University, the Space Telescope Science Institute (STScI) in the U.S., and the California Institute of Technology, then made detailed analyses of imaging data of these galaxies taken by the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope. At least 54 of the galaxies are spatially resolved in the ACS images. Among them, 8 galaxies show double-component structures (Figure 1), and the remaining 46 appear to have elongated structures (Figure 2). Through further investigation using supercomputer simulations, the group found that the observed elongated structures can be reproduced if two or more galaxies reside in close proximity to each other.

These results strongly suggest that 1.2 billion years after the Big Bang, galactic clumps in the young universe grew into large galaxies through mergers, which in turn triggered active star formation. This research was conducted as part of the Hubble Space Telescope (HST) Treasury Program "Cosmic Evolution Survey (COSMOS)". The powerful survey capability of the Subaru Telescope provided the essential database of candidate objects in the early universe for this research project.

Figure 1

Figure 1: An example of a galaxy that shows a double-component structure in the HST/ACS image. North is up and east is left. Each panel has a size of 4" x 4", which corresponds to 85 kly x 85 kly at the distance of 12.6 billion light-years. The thumbnails of the HST/ACS I band (effective wavelength = 814 nm) and the Subaru Telescope/Suprime-Cam NB711, i'- and z'-bands are shown from left to right. Note that the NB711 image, which captures the Lyman-alpha line emitted by neutral hydrogen, shows spatially extended gas that is ionized by ultraviolet (UV) radiation from many massive stars. The other band images, on the other hand, probe the UV radiation from the massive stars themselves. (Credit: Ehime University)

Figure 2

Figure 2: Same as Figure 1, but for an example of a galaxy that shows a single-component structure in the HST/ACS image. (Credit: Ehime University)

The Importance of Studying Early Galaxies

In the present universe, at a point 13.8 billion years after the Big Bang, there are many giant galaxies like our Milky Way, which contains about 200 billion stars in a disk a hundred thousand light-years across. However, there were definitely no galaxies like it in the epoch just after the Big Bang.

Pre-galactic clumps appear to have formed in the universe about 200 million years after the Big Bang. These were cold gas clouds, about 100 times smaller in size than present-day giant galaxies and about a million times less massive. The first galaxies were formed when the first stars were born in these gas clumps. These small galactic clumps then experienced continuous mergers with surrounding clumps and eventually grew into large galaxies.

Much effort has been made through deep surveys to detect actively star-forming galaxies in the young universe. As a result, the distances of the earliest galaxies are now known to be more than 13 billion light-years. We see them at a time when the age of the universe was only 800 million years (about 6% of its present age). However, since most of the galaxies in the young universe were quite small, their detailed structures have not yet been observed.

Exploring the Young Universe Using Subaru Telescope and Hubble Space Telescope

While the wide field of view of the Subaru Telescope has played an important role in finding such young galaxies, the high spatial resolution of the Hubble Space Telescope (HST) is required to investigate the details of their shapes and internal structures. The research team looked back to a point 12.6 billion years ago using a two-pronged approach. The first step was to use the Subaru Telescope in a deep survey to search out the early galaxies, and then follow that up to investigate their shapes using the Advanced Camera for Surveys (ACS) on board the HST. The ACS revealed 8 out of 54 galaxies to have double-component structures, where two galaxies seem to be merging with each other (Note 1), as shown in Figure 1.

Then, a question arose as to whether the remaining 46 galaxies are really single galaxies. Here, the research team noted that many of these galaxies show elongated shapes in the HST/ACS images (Figure 2). Such elongated shapes, together with the positive correlation between ellipticity (Note 2) and size (Figure 3), strongly suggest the possibility that two small galaxies reside so close to each other that they cannot be resolved into two distinct galaxies, even using ACS.

To examine whether the idea of closely crowded galaxies is viable, the researchers conducted Monte Carlo simulations on a supercomputer. First, the group placed two identical artificial sources, with various angular separations, at random locations on the real observed ACS image. Then, the group extracted the simulated sources with the same method used for the actual ACS sources and measured their ellipticities and sizes. The results are shown in Figure 3.
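As an illustration of this kind of test, here is a minimal Monte Carlo sketch in Python; it is not the team's actual code. Two identical Gaussian sources are blended at a random angular separation, and the ellipticity and size of the combined image are measured from its flux-weighted second moments. The pixel scale, intrinsic source size, and separation range are illustrative assumptions.

```python
# Minimal sketch of the blended-pair Monte Carlo test (illustrative assumptions,
# not the research team's code): render two identical Gaussian sources at a
# random separation, then measure the ellipticity and size of the blend.
import numpy as np

PIXEL_SCALE = 0.03    # arcsec per pixel (assumed, roughly ACS-like)
SOURCE_SIGMA = 0.05   # arcsec, intrinsic width of each artificial source (assumed)
STAMP = 64            # postage-stamp size in pixels

def make_pair_image(separation_arcsec, angle_rad):
    """Render two identical Gaussian sources with the given separation and position angle."""
    y, x = np.mgrid[:STAMP, :STAMP] * PIXEL_SCALE
    cx = cy = STAMP * PIXEL_SCALE / 2.0
    dx = 0.5 * separation_arcsec * np.cos(angle_rad)
    dy = 0.5 * separation_arcsec * np.sin(angle_rad)
    def gauss(x0, y0):
        return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * SOURCE_SIGMA ** 2))
    return gauss(cx - dx, cy - dy) + gauss(cx + dx, cy + dy)

def ellipticity_and_size(img):
    """Ellipticity (1 - b/a) and size (major-axis width a) from flux-weighted second moments."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]] * PIXEL_SCALE
    f = img / img.sum()
    xb, yb = (f * x).sum(), (f * y).sum()
    qxx = (f * (x - xb) ** 2).sum()
    qyy = (f * (y - yb) ** 2).sum()
    qxy = (f * (x - xb) * (y - yb)).sum()
    tr, det = qxx + qyy, qxx * qyy - qxy ** 2
    a = np.sqrt(0.5 * (tr + np.sqrt(tr ** 2 - 4 * det)))  # major-axis width
    b = np.sqrt(0.5 * (tr - np.sqrt(tr ** 2 - 4 * det)))  # minor-axis width
    return 1.0 - b / a, a

rng = np.random.default_rng(0)
for _ in range(5):
    sep = rng.uniform(0.0, 0.4)    # random separation in arcsec (assumed range)
    ang = rng.uniform(0.0, np.pi)  # random position angle
    e, size = ellipticity_and_size(make_pair_image(sep, ang))
    print(f"separation={sep:.2f}\"  ellipticity={e:.2f}  size={size:.3f}\"")
```

Repeating this over many random separations builds up the kind of ellipticity-size distribution that the blended-pair hypothesis predicts (compare Figure 3): wider, but still unresolved, pairs produce blends that are both larger and more elongated.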

Figure 3

Figure 3: The relation between ellipticity and size. Red data points are the observed data; most of them have elongated shapes, and larger galaxies tend to have larger ellipticities. Gray-colored regions represent the probability distributions calculated with the supercomputer simulations, in which two galaxies are located so close to each other that they blend into a single elongated source, as shown schematically in the pictures on the right. (Credit: Ehime University)

As shown in Figure 3, the simulated distribution reproduces the observed results very well. That is, most of the galaxies that were observed as single sources in the HST/ACS images are actually two merging galaxies. However, the distances between the two merging galaxies are so small that they cannot be spatially resolved, even with HST's high resolution!

If this idea is valid for the galaxies that appear to be single, then the galaxies with the smallest sizes should show the highest levels of star formation activity. This is expected because the smallest sizes imply the smallest separations between the two merging galaxies, and such galaxies would experience intense star formation triggered by their mergers.

On the other hand, some galaxies with the smallest sizes may instead be moderately separated pairs that happen to lie along the line of sight, or simply single, isolated star-forming galaxies. In these cases, their star formation activity is basically the same as that of large-size galaxies.

The research team has confirmed that the observed relation between star formation activity and size is consistently explained by the team's idea (Figure 4). 

Figure 4

Figure 4: The relation between star formation activity and size. Red circles and blue double-circles represent the galaxies with single- and double-component ACS sources, respectively. The research team's idea that most of the galaxies consist of two (or more) interacting small galaxies with small angular separation is also illustrated, which explains the observed trend very well. (Credit: Ehime University)

To date, the shapes and structures of small young galaxies have been investigated using the ACS on HST. If a source was detected as a single ACS source, it was treated as a single galaxy and its morphological parameters were evaluated. This research suggests that such a small galaxy can consist of two (or perhaps more) interacting/merging galaxies located so close together that they cannot be resolved even by the high angular resolution of the ACS.

Looking into the Future of Studying the Past

Current galaxy formation theories predict that small galaxies in the young universe evolve into large galaxies via successive mergers. The question remains: what is the next step in observational studies of galaxy formation in the young universe? This is one of the frontier fields that requires future "super telescopes," e.g., the Thirty Meter Telescope (TMT) and the James Webb Space Telescope (JWST). They will enable the next breakthroughs in the study of early galaxy formation and evolution.

This research has been published in the Astrophysical Journal under the title "Morphological Properties of Lyman Alpha Emitters at Redshift 4.86 in the COSMOS Field: Clumpy Star Formation or Merger?" by Masakazu A. R. Kobayashi, Katsuhiro L. Murata, Anton M. Koekemoer, Takashi Murayama, Yoshiaki Taniguchi, Masaru Kajisawa, Yasuhiro Shioya, Nick Z. Scoville, Tohru Nagao, and Peter L. Capak. The online version was posted on February 24, 2016, and the print version appeared on March 1, 2016 (Volume 819, article id. 25).

The new material offers potential for data storage and spintronics applications.

Nanotechnologists at the UT research institute MESA+ are now able to create materials in which they can influence and precisely control the orientation of the magnetism at will. An interlayer just 0.4 nanometres thick is the key to this success. The materials present a range of interesting possibilities, such as a new way of creating computer memory as well as spintronics applications – a new form of electronics that works on the basis of magnetism instead of electricity. The research was published today in the leading scientific journal Nature Materials. 

Nanotechnologists at the University of Twente are specialized in creating new materials. Thanks to the top-level facilities at the MESA+ NanoLab they are able to combine materials as they wish, with the ability to control the material composition down to atom level. In particular, they specialize in creating materials composed of extremely thin layers, sometimes just one atom thick. 

COMPUTER MEMORY

In research published today in the scientific journal Nature Materials, they show their ability to create new materials within which they can precisely and locally control the orientation of the magnetism. This opens the way to new possibilities of creating computer memory. Moreover, this method of creating materials is interesting for spintronics, a new form of electronics that does not utilize the movement of charges but instead the magnetic properties of a material. This not only makes electronics very fast and efficient, but also allows them to be produced in extremely small dimensions.  

INTERLAYER

In the course of this research, the scientists stacked up various thin layers of perovskite materials. By placing an extremely thin interlayer of just 0.4 nanometres between the layers (a nanometre is a million times smaller than a millimetre), it becomes possible to set the orientation of the magnetism in the individual perovskite layers as desired, so that the orientation of the magnetism in the bottom layer, for instance, is perpendicular to that of the layer above. By varying the location where the interlayer is applied, the local orientation of the magnetism can be selected anywhere in the material. This is an essential property for new forms of computer memory and for spintronics applications. The effect was already known for much thicker layers, but never before had researchers demonstrated that the orientation of the magnetism can be controlled so precisely with extremely thin layers as well.

RESEARCH

The research was conducted by scientists of the MESA+ Inorganic Materials Science research group (https://www.utwente.nl/tnw/ims/) in collaboration with colleagues from other institutes, including the University of Antwerp (Belgium), the University of British Columbia (Canada) and TU Wien (Vienna, Austria). Within the research project, the Twente-based researchers were responsible for coordination and for creating the materials. The researchers from Antwerp visualized the materials and were able to image even the smallest atoms in the material. The Canadian researchers created a magnetic cross-section of the material, while the Austrian researchers handled the theoretical calculations.

The research is published under the title ‘Controlled lateral anisotropy in correlated manganite heterostructures by interface-engineered oxygen octahedral coupling’ by Z. Liao, M. Huijben, Z. Zhong, N. Gauquelin, S. Macke, R. J. Green, S. Van Aert, J. Verbeeck, G. Van Tendeloo, K. Held, G. A. Sawatzky, G. Koster and G. Rijnders.

Researchers have designed and built a quantum computer from five atoms in an ion trap. The computer uses laser pulses to carry out Shor’s algorithm on each atom, to correctly factor the number 15.  Image: Jose-Luis Olivares/MIT

What are the prime factors of the number 15? Most grade school students know the answer -- 3 and 5 -- by memory. A larger number, such as 91, may take some pen and paper. An even larger number, say with 232 digits, can (and has) taken scientists two years to factor, using hundreds of classical computers operating in parallel. 

Because factoring large numbers is so devilishly hard, this "factoring problem" is the basis for many encryption schemes for protecting credit cards, state secrets, and other confidential data. It's thought that a single quantum computer may easily crack this problem, by using hundreds of atoms, essentially in parallel, to quickly factor huge numbers. 

In 1994, Peter Shor, the Morss Professor of Applied Mathematics at MIT, came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. However, the algorithm's success depends on a quantum computer with a large number of quantum bits. While others have attempted to implement Shor's algorithm in various quantum systems, none have been able to do so with more than a few quantum bits, in a scalable way. 
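The number-theoretic core of Shor's algorithm can be sketched classically: to factor N, pick a base a coprime to N, find the period r of a^x mod N, and read the factors off gcd(a^(r/2) ± 1, N). The short Python sketch below is an illustration, not the researchers' quantum implementation; it does the period-finding step by brute force, whereas the whole point of the quantum computer is to perform that step efficiently for numbers far too large for brute force.

```python
# Classical illustration of Shor's reduction from factoring to period finding.
# A quantum computer would replace find_period(); the rest is classical post-processing.
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found here by brute force."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g                        # the base already shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                             # unlucky base; try a different a
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

print(shor_classical(15, 7))   # (3, 5): 7 has period 4 mod 15, so gcd(49 - 1, 15) = 3 and gcd(49 + 1, 15) = 5
```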

Now, in a paper published today in the journal Science, researchers from MIT and the University of Innsbruck in Austria report that they have designed and built a quantum computer from five atoms in an ion trap. The computer uses laser pulses to carry out Shor's algorithm on each atom, to correctly factor the number 15. The system is designed in such a way that more atoms and lasers can be added to build a bigger and faster quantum computer, able to factor much larger numbers. The results, they say, represent the first scalable implementation of Shor's algorithm. 

"We show that Shor's algorithm, the most complex quantum algorithm known to date, is realizable in a way where, yes, all you have to do is go in the lab, apply more technology, and you should be able to make a bigger quantum computer," says Isaac Chuang, professor of physics and professor of electrical engineering and computer science at MIT. "It might still cost an enormous amount of money to build -- you won't be building a quantum computer and putting it on your desktop anytime soon -- but now it's much more an engineering effort, and not a basic physics question."

Seeing through the quantum forest

In classical computing, numbers are represented by either 0s or 1s, and calculations are carried out according to an algorithm's "instructions," which manipulate these 0s and 1s to transform an input to an output. In contrast, quantum computing relies on atomic-scale units, or "qubits," that can be simultaneously 0 and 1 -- a state known as a superposition. In this state, a single qubit can essentially carry out two separate streams of calculations in parallel, making computations far more efficient than a classical computer. 
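As a toy illustration of that idea (assumed here for exposition, not drawn from the article), a single qubit can be written as a length-2 complex vector; a Hadamard gate then places it in an equal superposition of the two basis states, so that a measurement returns 0 or 1 with equal probability:

```python
# Toy state-vector picture of a qubit in superposition (illustrative only).
import numpy as np

ket0 = np.array([1.0, 0.0])                   # the basis state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
psi = H @ ket0                                # equal superposition (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2                      # measurement probabilities for 0 and 1
print(psi, probs)                             # [0.707 0.707] [0.5 0.5]
```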

In 2001, Chuang, a pioneer in the field of quantum computing, designed a quantum computer based on one molecule that could be held in superposition and manipulated with nuclear magnetic resonance to factor the number 15. The results, which were published in Nature, represented the first experimental realization of Shor's algorithm. But the system wasn't scalable; it became more difficult to control the system as more atoms were added.

"Once you had too many atoms, it was like a big forest -- it was very hard to control one atom from the next one," Chuang says. "The difficulty is to implement [the algorithm] in a system that's sufficiently isolated that it can stay quantum mechanical for long enough that you can actually have a chance to do the whole algorithm."

"Straightforwardly scalable"

Chuang and his colleagues have now come up with a new, scalable quantum system for factoring numbers efficiently. While it typically takes about 12 qubits to factor the number 15, they found a way to shave the system down to five qubits, each represented by a single atom. Each atom can be held in a superposition of two different energy states simultaneously. The researchers use laser pulses to perform "logic gates," or components of Shor's algorithm, on four of the five atoms. The results are then stored, forwarded, extracted, and recycled via the fifth atom, thereby carrying out Shor's algorithm in parallel, with fewer qubits than is typically required. 

The team was able to keep the quantum system stable by holding the atoms in an ion trap, where they removed an electron from each atom, thereby charging it. They then held each atom in place with an electric field.

"That way, we know exactly where that atom is in space," Chuang explains. "Then we do that with another atom, a few microns away -- [a distance] about 100th the width of a human hair. By having a number of these atoms together, they can still interact with each other, because they're charged. That interaction lets us perform logic gates, which allow us to realize the primitives of the Shor factoring algorithm. The gates we perform can work on any of these kinds of atoms, no matter how large we make the system."

Chuang's team first worked out the quantum design in principle. His colleagues at the University of Innsbruck then built an experimental apparatus based on his methodology. They directed the quantum system to factor the number 15 -- the smallest number that can meaningfully demonstrate Shor's algorithm. Without any prior knowledge of the answers, the system returned the correct factors, with a confidence exceeding 99 percent. 

"In future generations, we foresee it being straightforwardly scalable, once the apparatus can trap more atoms and more laser beams can control the pulses," Chuang says. "We see no physical reason why that is not going to be in the cards."

What will all this eventually mean for encryption schemes of the future? 

"Well, one thing is that if you are a nation state, you probably don't want to publicly store your secrets using encryption that relies on factoring as a hard-to-invert problem," Chuang says. "Because when these quantum computers start coming out, you'll be able to go back and unencrypt all those old secrets."

World’s only self-tuning, MySQL-compliant database runs complex queries 100X faster than InnoDB, while requiring no training or special skills

Deep Information Sciences has announced that Florida State University is using deepSQL to facilitate research. deepSQL, the world’s only self-tuning, MySQL-compliant database, enables FSU’s Research Computing Center to speed researchers’ queries by up to 1,000 percent, streamline analyses and provide a new database-as-a-service offering without having to retrain its staff.

“When you’re dealing with complex data sets, like many of FSU’s researchers are, the ability to perform fast queries at scale, and expedite time-to-insights, is critical. However, because today’s databases are built on science designed for 1970s computing environments, they’re inherently limited and often hit the wall when faced with large-scale, data-intensive conditions,” said Chad Jones, Chief Strategy Officer, Deep Information Sciences. “We are thrilled that a university as renowned for research as FSU has chosen deepSQL to accelerate ingest and queries, improve analyses and fuel their researchers’ projects.”

“FSU supports hundreds of researchers on widely divergent projects, but the one thing they have in common is they’re not database experts. Researchers who use MySQL say it’s much too slow, especially on complex data sets, but they don’t know how to tinker with databases and don’t want to—and shouldn’t have to—learn. We needed a database solution that could speed projects without burdening researchers,” said Paul Van Der Mark, interim director of FSU’s Research Computing Center. “deepSQL solves the slow-database problem that other technologies have struggled with, and failed at, for so long. The more complex the research, the more the database accelerates. Plus, just as important for our researchers, thanks to deepSQL’s self-tuning and MySQL interface, it’s easy to use even if you’re not a DBA.”

deepSQL, the world’s only adaptive, MySQL-compliant database, is purpose-built for data-intensive, large-scale physical, virtual and cloud environments where fast load and accelerated queries are crucial. An autonomically self-tuning database that leverages machine learning and uses the familiar MySQL interface, deepSQL dynamically adapts to ever-changing application demands and traffic realities—eliminating the need for time-consuming trial-and-error configurations and ETLs that can slow research, and insights, to a crawl. It’s installed as either an engine or a stand-alone database and is 100% MySQL compliant, so no application changes are required. With DBA-less tuning, extreme scalability to hundreds of billions of rows, and blazing speed, deepSQL enables applications to perform optimally at all times, while minimizing storage footprints and reducing overall cost.
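As a hedged illustration of what that MySQL compliance means in practice, the sketch below connects to a deepSQL server with a stock MySQL client library and runs an ordinary query unchanged; the host name, credentials, database, and table are hypothetical placeholders, not details from FSU's deployment.

```python
# Hypothetical example: because deepSQL speaks the standard MySQL protocol,
# a plain mysql-connector-python client works without application changes.
import mysql.connector

conn = mysql.connector.connect(
    host="deepsql.example.edu",   # placeholder server endpoint
    user="researcher",            # placeholder credentials
    password="********",
    database="survey_data",       # placeholder database
)
cur = conn.cursor()
# The same SQL that previously ran against InnoDB runs unchanged here.
cur.execute("SELECT object_id, COUNT(*) FROM observations GROUP BY object_id LIMIT 10")
for object_id, n in cur:
    print(object_id, n)
cur.close()
conn.close()
```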

deepSQL enables FSU to:

  • Accelerate complex queries by 1,000%
    With deepSQL processing queries faster than any other solution on the market, time-to-answer is expedited as never before. Using deepSQL, FSU ran queries on a 200GB database 100x faster. Complex queries that took 200 seconds with InnoDB were completed in just two seconds with deepSQL.
  • Ease analysis for researchers
    Before deepSQL, RCC staff supported researchers who used databases when they needed help, but didn’t widely promote the use of databases because of their performance and difficulty issues. That’s changing now. According to Van Der Mark, “deepSQL is the one solution we’ve found that’s ideal for researchers who work with complex data sets but aren’t database experts. It’s incredibly easy to use and its auto-tuning does all the heavy lifting without going offline—all of which streamlines and speeds analyses, and thrills our researchers.”
  • Avoid RCC training
    If FSU had chosen a different database solution, they would have had to spend valuable time and money retraining RCC employees. That’s not the case with deepSQL. “Because I know MySQL, there was no learning curve, no new commands to learn. deepSQL just worked,” said Prasad Maddumage, an application specialist at the RCC.
  • Enable database-as-a-service
    FSU currently uses deepSQL on bare metal machines, but has the ability to run databases wherever it makes most sense—in physical, virtual or cloud environments. “Thanks to deepSQL, we can now offer high-performing database-as-a-service at minimal cost,” said Van Der Mark.
Caption: A coronal mass ejection event showing a representation of the flux rope anchored at the sun and the propagation of the magnetic flux rope through space toward Earth. The white shaded lines indicate the magnetic field lines. The red shading indicates the high-speed stream at the front of the CME.

Coronal mass ejections (CMEs) are massive expulsions of magnetic flux into space from the solar corona, the ionized atmosphere surrounding the sun. Magnetic storms arising from CMEs pose radiation hazards that can damage satellites and that can negatively impact communications systems and electricity on Earth. Accurate predictions of such events are invaluable in space weather forecasting.

A new and robust simulation code for CME events has been developed, based on a realistic description of the mechanisms behind CME generation and their propagation through space. An article recently published in Space Weather presents results from the method, which was successfully validated using observational data from a series of CME events that reached Earth around Halloween of 2003.

"Our model is able to simulate complex 'flux ropes', taking into account the mechanisms behind CME generation derived from real-time solar observations. With this model, we can simulate multiple CMEs propagating through space. A part of the magnetic flux of the original flux rope inside the CME directed southward was found to reach the Earth, and that can cause a magnetic storm," explains lead author Daikou Shiota of the Nagoya University Institute of Space and Earth Environmental Research. The new model represents a significant step in space weather research. "The inclusion of the flux rope mechanism helps us predict the amplitude of the magnetic field within a CME that reaches the Earth's position, and accurately predicts its arrival time," Shiota says.

A series of CMEs occurring in late October 2003 released large flares of magnetic energy that reached the Earth several days later, causing radio blackouts and satellite communications failures. Data from these events were used to validate the approach taken in the new model.

"In our validation, we were able to predict the arrival of a huge magnetic flux capable of causing one of the largest magnetic storms in the last two decades," says coauthor Ryuho Kataoka of the National Institute of Polar Research and the Department of Polar Science, SOKENDAI (Graduate University for Advanced Studies). "Because our model does not simulate the solar coronal region, its computational speed is fast enough to operate under real-time forecasting conditions. This has various applications in ensemble space weather forecasting, radiation belt forecasting, and for further study of the effects of CME-generated solar winds on the larger magnetic structure of our solar system." Shiota says.

This new-generation model, featuring a well-developed, complex flux rope within the CME, provides a valuable step towards enhanced operational space weather forecasting. These findings will contribute significantly to accurately predicting magnetic fields in space and to enhancing our understanding of the mechanisms behind CME events.
