CAPTION This rendering depicts a new type of "nanotweezer," an example of advanced optical technologies that could emerge in the field of plasmonics. While development of new plasmonic technologies has been hampered by "loss-induced plasmonic heating," researchers are now finding this heating could actually be key to various applications. CREDIT Purdue University file image/Mikhail Shalaginov and Pamela Burroff-Murr

What researchers had thought of as a barrier to developing advanced technologies based on the emerging field of plasmonics is now seen as a potential pathway to practical applications in areas from cancer therapy to nanomanufacturing.

Plasmonic materials contain features, patterns or elements that enable unprecedented control of light by harnessing clouds of electrons called surface plasmons. This control could allow the miniaturization of optical technologies, bringing advances such as nano-resolution imaging and computer chips that process and transmit data using light instead of electrons, representing a potential leap in performance.

However, the development of advanced optical technologies using plasmonics has been hampered because components under development cause too much light to be lost and converted into heat. But now researchers are finding that this "loss-induced plasmonic heating" could be key to development of various advanced technologies, said Vladimir M. Shalaev, co-director of the new Purdue Quantum Center, scientific director of nanophotonics at the Birck Nanotechnology Center in the university's Discovery Park and a distinguished professor of electrical and computer engineering.

The potential for practical applications using loss-induced plasmonic heating is discussed in a commentary that appeared on Jan. 22 in the Perspectives section of Science magazine. The article was written by doctoral student Justus Ndukaife, Shalaev and Alexandra Boltasseva, an associate professor of electrical and computer engineering.

"Plasmonics has generated significant interest because of the ability to squeeze light into nanoscale volumes in micro- and nano-devices, but progress has been hindered because of plasmonic losses," Ndukaife said. "We are saying we can use these losses to our advantage."

New technologies that could harness plasmonic heating include:

* A "nanotweezer" capable of positioning tiny objects quickly and accurately and freezing them in place, which could enable improved nanoscale sensing methods and aid research to manufacture advanced technologies such as quantum computers and ultra-high-resolution displays.

* A new magnetic storage technology called heat-assisted magnetic recording (HAMR), in which nanoantennas, or near-field transducers, focus light onto the magnetic medium. Plasmonic nanoparticles can also be reshaped by heating and used to record images.

* Quadrapeutics, a clinical therapeutic approach using nanoparticles for cancer treatment. The nanoparticles are illuminated with laser light, producing plasmonic nanobubbles that can kill cancer cells.

* A renewable energy concept that uses "plasmonic resonators" to improve the efficiency of solar cells.

"Harnessing the intrinsic loss in plasmonics could help to usher in transformative technological innovations affecting several fields, including information technology, life sciences and clean energy," Boltasseva said. "It is time for the plasmonic community to turn loss into gain." 

New research has found, for the first time, a scientific solution that enables future internet infrastructure to become completely open and programmable while carrying internet traffic at the speed of light.

The research, by the High Performance Networks (HPN) group in the University of Bristol's Department of Electrical and Electronic Engineering, is published in the world's first scientific journal, Philosophical Transactions of the Royal Society A.

The current internet infrastructure cannot support independent development and innovation in physical- and network-layer functionalities, protocols and services while at the same time meeting the increasing bandwidth demands of changing and diverse applications.

The research addresses this problem with a new high-performance network infrastructure that is open and programmable and uses light to carry internet traffic. It introduces the concept of an open-source optical internet enabled by optical white box and software-defined network technologies.

Dr Reza Nejabati, Reader in Optical Networks in the HPN group, said: "The hardware and software technologies reported in this paper could potentially revolutionise optical network infrastructure in the same way that Google Android and Apple iOS did for mobile phones. These technologies will hide the complexity of optical networks and open them up for traditional programmers and application developers to create new types of internet applications that take advantage of the speed of light."

Dimitra Simeonidou, Professor of High Performance Networks, who leads the HPN group, added: "New internet technologies frequently emerge, but most of them rarely result in new and revolutionary internet applications. The technologies suggested here could pave the way for new and disruptive internet services and applications that were not previously possible. They could also shift the balance of power from the vendors and operators that monopolise the current internet infrastructure towards a wider community of users and service providers."

CAPTION A unique object called HLX-1 (Hyper-Luminous X-ray source 1) has so far been the only reliable intermediate-mass black hole candidate. CREDIT NASA, ESA, S. Farrell (Sydney Institute for Astronomy, University of Sydney)

An international team of astronomers led by Ivan Zolotukhin has come a step closer to understanding so-called intermediate-mass black holes (IMBHs).

The term "black holes" was first used in the mid-20th century by theoretical physicist John Wheeler. This term denotes relativistic supermassive objects that are invisible in all electromagnetic waves, but a great number of astrophysical effects confirms their existence.

Observationally, scientists know two basic types of black holes: supermassive black holes and stellar-mass black holes. It is generally believed that stellar-mass black holes form at the end of the evolution of massive stars, when the stellar energy sources are exhausted and the star collapses under its own gravity. Theoretical calculations restrict their masses to roughly 5-50 solar masses.

It is less clear how supermassive black holes come into existence. The masses of these black holes, which sit at the center of most galaxies, range between millions and billions of solar masses. Quasars, the active galactic nuclei, are supermassive black holes observed by astronomers at high redshift, which means that these giants already existed in the first few hundred million years after the Big Bang. Ivan Zolotukhin, who works at the Research Institute of Astrophysics and Planetology (Toulouse), said: "Astronomers look for black holes of intermediate mass because no black hole weighing a billion times more than the Sun could have formed without them in just 700 million years."

It is believed that the first generation of stars contained no metals and therefore could have been hundreds of times more massive than the Sun; at the end of their evolution they could have become far more massive black holes than those produced by stars today. These black holes merged into objects of thousands of solar masses, and further galaxy mergers and the accretion of matter led to the formation of supermassive black holes. Models of hierarchical galaxy buildup show that a small number of these intermediate-mass black holes, the ones astronomers are looking for, should have survived.

A small number means about a hundred per galaxy similar to our Milky Way. They should be found high above the galactic plane, because merging black holes receive a large recoil kick that can sometimes even throw them out of the galaxy. About 10 years ago researchers looked for such holes (of thousands of solar masses) in the gap between the heavy stellar-mass holes and the light supermassive ones, but nothing lighter than 500 thousand solar masses was found.

The picture changed with a paper published in 2009 by astronomers from Toulouse, who, in the course of a search for neutron stars in our galaxy, accidentally found a bright X-ray source near a galaxy located at a distance of 100 Mpc from Earth. A luminosity estimate showed that the mass of the object is about 10 thousand solar masses; it most likely shines because matter flows onto the black hole from a single companion star. This unique object, called HLX-1 (Hyper-Luminous X-ray source 1), has since been the only reliable intermediate-mass black hole candidate. Many astronomers were sure that the object was unique and that nothing similar would be found, overlooking the fact that it was discovered by chance, in a catalog of sources covering only 1% of the sky. "I suspected that such objects should occur much more often, and we proposed a method for a large-scale search," said Zolotukhin. The idea is to compare objects from the wide-area redshift survey of galaxies (SDSS) with objects from a catalog of X-ray sources. "I suggested looking around galaxies, among millions of X-ray objects, for those with luminosity exceeding a certain value," the author explained.
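The step from observed luminosity to mass rests on the Eddington limit: a steadily accreting source cannot shine much brighter than the luminosity at which radiation pressure balances gravity, so a very luminous source must be massive. A minimal, order-of-magnitude version of that argument, assuming a peak X-ray luminosity of order 10^42 erg/s as reported for HLX-1 in the literature (the figure is not quoted in this article), reads:

```latex
% Order-of-magnitude Eddington argument; the L_X value is an assumed input,
% taken from published HLX-1 estimates rather than from the text above.
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
  \approx 1.3\times10^{38}\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}},
\qquad
M \gtrsim \frac{L_X}{1.3\times10^{38}\ \mathrm{erg\,s^{-1}}}\,M_\odot
  \sim 10^{4}\,M_\odot
  \quad\text{for}\quad L_X \sim 10^{42}\ \mathrm{erg\,s^{-1}}.
```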

Having applied the algorithm they developed to both catalogs, the astronomers found 98 objects, at least 16 of which must be associated with their host galaxies. "These are the best candidates for intermediate-mass black holes. We have shown for the first time that this hypothetical new type of black hole, with masses from 100 to 100,000 solar masses, not only exists but exists as a population. In other words, these objects are not unique; there are lots of them," clarified the author of the paper published in The Astrophysical Journal: http://iopscience.iop.org/0004-637X/817/2/88.
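At its core, the search is a positional cross-match between an optical galaxy catalog and an X-ray source catalog, followed by a cut on the luminosity each X-ray source would have at its matched galaxy's distance. The sketch below illustrates that general recipe with astropy; the column names, the 30-arcsecond matching radius and the 10^41 erg/s threshold are illustrative assumptions, not values taken from the published paper.

```python
# Illustrative catalog cross-match: luminous off-nuclear X-ray sources near galaxies.
# Column names, matching radius and luminosity cut are assumptions for this sketch.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import Planck15

def find_imbh_candidates(galaxies, xray_sources,
                         match_radius=30 * u.arcsec,
                         lum_cut=1e41):  # erg/s, illustrative threshold
    """Return indices of X-ray sources that lie near a galaxy and would be
    luminous enough at that galaxy's distance to be IMBH candidates.
    `galaxies` and `xray_sources` are dict-like tables of NumPy arrays with
    'ra', 'dec' (deg), plus 'z' for galaxies and 'flux' (erg/s/cm^2) for X-ray sources."""
    gal_coords = SkyCoord(galaxies['ra'] * u.deg, galaxies['dec'] * u.deg)
    x_coords = SkyCoord(xray_sources['ra'] * u.deg, xray_sources['dec'] * u.deg)

    # Nearest galaxy for every X-ray source.
    idx, sep2d, _ = x_coords.match_to_catalog_sky(gal_coords)
    near = sep2d < match_radius

    # Luminosity the source would have at the matched galaxy's redshift:
    # L = 4 * pi * d_L^2 * flux.
    d_l = Planck15.luminosity_distance(galaxies['z'][idx]).to(u.cm).value
    lum = 4.0 * np.pi * d_l**2 * xray_sources['flux']

    return np.where(near & (lum > lum_cut))[0]
```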

The methods of the Virtual Observatory were applied in the research, and all the conclusions were obtained exclusively from publicly available data; they can therefore be verified from any computer with Internet access.

Moreover, the authors used a new website to access the data of the XMM-Newton observatory. "What makes this web application unique is that, for the first time in international fundamental science, such a complicated project was built specifically for scientists by volunteers: highly skilled programmers who, while working at some of the best IT companies in Russia, devoted their free time to this web page. They are Alexey Sergeev, Askar Timirgazin and Maxim Chernyshov," said Ivan Zolotukhin. "Many of my colleagues and I are still impressed by their work. Astronomers around the world can now enjoy the unique features of the site, and many discoveries can now be made directly online!" According to Zolotukhin, the current publication opens a series of studies based on this website. "It is important that, thanks to the simple and clear design, scientists from other fields can now make use of specialised X-ray data," said the scientist.

This study essentially opens up the search for intermediate-mass black holes. Since the researchers have put forward more than a dozen such candidates, it is expected that in the coming years some will be reliably confirmed with optical spectroscopic observations. A search with the six-meter telescope of the Special Astrophysical Observatory (Russia) is also planned for the near future. "If there is at least one confirmation, it will be published in Nature, and astronomers will immediately rush to explore these 98 objects," said the author of the work.

The candidates were found in only 2% of the sky, so astronomers have high hopes for the Russian-German space telescope Spektr-RG, scheduled for launch in 2017. The researchers hope to discover hundreds of objects like HLX-1 in the deep X-ray survey of the sky that this telescope will provide.

In a new study, scientists from The University of Texas at Dallas and their colleagues suggest a novel way for probing the beginning of space and time, potentially revealing secrets about the conditions that gave rise to the universe.

The prevailing model of the birth of the universe is the big bang theory, which describes the rapid expansion of the universe from a highly compressed primordial state. While the big bang is a successful genesis model, it requires special initial conditions.

Determining what produced those initial conditions is a major challenge in cosmology and astrophysics, said Dr. Xingang Chen, assistant professor of physics at UT Dallas and a visiting scholar at the Harvard-Smithsonian Center for Astrophysics.

"Several different scenarios have been proposed for the origin of the big bang and to explain its pre-existing, initial conditions," Chen said.

The leading explanation among theorists is the inflation scenario, which posits that the universe went through an exponential expansion in the first fleeting fraction of a second of its existence. Another scenario suggests that a universe preceded ours and contracted in a "big crunch" before transitioning into our big bang.

In a study appearing in an upcoming issue of the Journal of Cosmology and Astroparticle Physics, Chen and his colleagues, Dr. Mohammad Hossein Namjoo, a postdoctoral researcher at UT Dallas and the Center for Astrophysics, and Dr. Yi Wang of the Hong Kong University of Science and Technology, describe a new theory to determine which scenario is correct.

"Each scenario can have many details in its theoretical models that result in various astrophysical signals that can be observed today," Wang said. "Most of these signals may be shared by the different scenarios, but there are some signals that are unique fingerprints of each scenario. Although these signals are very rare, the latter can be used to distinguish inflation from other scenarios."

Astrophysical observations already have revealed information about the origins of the universe some 13.8 billion years ago, specifically about properties of initial fluctuations that took place in the early universe. For example, researchers have mapped patterns of tiny fluctuations in temperature in the otherwise smooth cosmic microwave background (CMB), which is the heat left over from the explosion of the big bang. Those tiny, "seed" irregularities became magnified as the universe expanded after the big bang, eventually forming all the large-scale structures we see in the universe today, such as stars and galaxies.

From those fluctuations scientists have learned a lot about the spatial variations of the primordial universe, but those variations alone do not reveal the passage of time, Chen said. The phenomenon he and his colleagues discovered would allow that by putting "time stamps" on the evolutionary history of the primordial universe, shedding light on which scenario -- inflation or contraction -- produced the big bang's initial conditions.

"The information we currently have is akin to showing an audience many still pictures from a movie stacked on top of each other, but they lack proper time labeling for the correct sequence," Chen said. "As a result, we do not know for sure if the primordial universe was expanding or contracting."

Chen and his group devised a way to put the individual snapshots in order. They realized that heavy particles would be present before the big bang in both scenarios.

"These heavy particles have a simple but important property that can be used to resolve the competing scenarios. They oscillate just like a pendulum. They do so classically due to some kind of 'push,' or quantum-mechanically without having to be pushed initially," Chen said. "We call these heavy particles 'primordial standard clocks'."

The researchers found that in both the inflation and contraction scenarios, the oscillating particles generated time "ticks" on the seed fluctuations that the universe was experiencing at the same time.

"With the help of these time labels, we can turn the stacks of stills into a coherent movie and directly reveal the evolutionary history of the primordial universe," Chen said. "This should allow us to distinguish an inflationary universe from other scenarios, including one that previously contracted."

"The clock signals we are searching for are fine oscillatory structures that would manifest in measurements of the cosmic microwave background," Wang said. "Each primordial universe scenario predicts a unique signal pattern."

Namjoo said that detecting clock signals shouldn't require the design of new experiments. While current data is not accurate enough to spot such small variations, ongoing experiments worldwide are expected to gather extremely precise CMB data.

"Our theoretical proposal makes use of the same precision data that many experiments will be gathering in the next decade or so, but analyzes the data from a different angle to dig out a new type of signal," Namjoo said.

If the oscillations from the heavy particles are strong enough, experiments should find them in the next decade, Chen said. Supporting evidence could also come from other lines of investigation, such as maps of the large-scale structure of the universe, including galaxies and cosmic hydrogen.

The research was supported by UT Dallas, Harvard, the Hong Kong University of Science and Technology, and the National Science Foundation.

Martin Oberlack, Professor of Computational Fluid Dynamics. Image: Katrin Binner

Turbulence makes life difficult for the designers of cars and aircraft. It cannot be simulated with absolute precision. Martin Oberlack, head of the Institute of Fluid Dynamics, is pursuing an original solution to the problem.

Albert Einstein grins impishly from a poster on the wall of Professor Martin Oberlack's office. Perhaps the genius already knew decades ago that his thinking would lend wings to mechanical engineers like Martin Oberlack and help them to solve apparently unsolvable problems in aerodynamics. Oberlack explains the problem troubling aircraft and car manufacturers using a diagram hanging on the wall in the corridor of the Institute of Fluid Dynamics on the Darmstadt Lichtwiese.

It looks like a brightly coloured abstract painting: a 5-metre-long strip with smooth, even brush strokes dominating the left edge that become increasingly chaotic towards the right. In fact, it's almost as if the artist became less and less controlled as the work progressed. But it's not a work of art. "It's a computer simulation of turbulence," says Professor Oberlack; it displays the vortices of air flowing over a flat plate. The vortices grow towards the right, that is, with increasing distance along the plate. "That's why Business Class is at the front of an aircraft," he explains. Vortices at the back of the aircraft make that area noisier, adds the mechanical engineer.

Even supercomputers cannot simulate turbulence with absolute precision

Noise isn't the only annoyance caused by turbulence. Vortices also cause air resistance, or drag, which increases fuel consumption. So the shape of a vehicle or aircraft should generate as few air vortices as possible. To establish the best shape, developers experiment with various versions in wind tunnels. Computers are also used to help with the design. All this effort, and it's still not quite enough. "Even the most powerful supercomputers in Germany can't simulate turbulence with absolute precision," explains Martin Oberlack. The reason: the less viscous a medium is, the smaller its smallest vortices are. But to simulate the flow on a computer, vortices of all sizes have to be taken into account.

Engineers are not interested in every single tiny vortex in the airflow, but in statistical quantities such as the average air speed at different distances from the surface, because air resistance can be calculated from this velocity profile. "The tiniest differences in the speed profile matter," he says.
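For illustration, here is a minimal sketch of how such a mean-velocity profile might be extracted from simulation snapshots; the data layout, array shapes and viscosity value are assumptions for the example, not details of the institute's actual codes.

```python
# Minimal sketch (assumed data layout): turn instantaneous velocity snapshots from
# a flat-plate simulation into the mean streamwise-velocity profile used for drag estimates.
import numpy as np

def mean_velocity_profile(snapshots):
    """snapshots: array of shape (n_snapshots, nx, ny, nz) holding the streamwise
    velocity u on a grid, with y the wall-normal direction. Returns <u>(y),
    averaged over time and the two statistically homogeneous directions (x and z)."""
    return snapshots.mean(axis=(0, 1, 3))

def wall_shear(profile, y, nu=1.5e-5):
    """Estimate the kinematic wall shear stress, nu * du/dy at the wall, from the
    first two grid points above the wall (one-sided difference).
    nu is a kinematic viscosity typical of air (assumed value, in m^2/s)."""
    return nu * (profile[1] - profile[0]) / (y[1] - y[0])
```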

However, the subtle differences in the statistical values can only be derived from a complete simulation of the chaotic event – just as it takes lots of individual opinions in order for the results of a survey to be precise and reliable. This requires computers with vast memories and unimaginable computing speeds. And even though supercomputers are getting ever-faster and their memories ever-bigger, "It's still going to be about 50 years before they can calculate turbulence with a precision that eliminates the need for expensive experiments in wind tunnels," he adds.

In order to reduce computing time, the developers simplify their mathematical models using empirical assumptions that are based on experiments, but that makes the simulations inaccurate. "To an airline, though, the tiniest differences in kerosene consumption matter," he emphasises. And although there is a huge gap between this requirement for exact results and the precision of the simplified simulations, Martin Oberlack doesn't seem to be losing any sleep over it. That's because the gap defines his playing field. He and his 20-strong team are the only people in the world who are ploughing it with a new method. And they have solutions to offer.
