Supercomputer-assisted Calibration Methodology Enhances Accuracy of Energy Models
http://www.supercomputingonline.com/this-months-stories/supercomputer-assisted-calibration-methodology-enhances-accuracy-of-energy-models

Most people appreciate a bargain: an advantageous purchase that gives the customer value (such as convenience and comfort) at minimal cost. Improved energy efficiency is just such a bargain, one that can significantly benefit a household or business budget, as well as a nation’s economy and security.

 

Awareness of sustainable energy practices that preserve finite resources for future generations is growing as the entire world faces a formidable challenge: developing and implementing strategies to stem the increases in power consumption expected over the next two decades.

 

Buildings and the Growing Demand for Energy

 

World energy demand is rebounding from the global economic recession of 2008–2009 and will continue to grow, the U.S. Energy Information Administration (EIA) reports in its International Energy Outlook 2011. The EIA projects that world marketed energy consumption will expand by 53 percent from 2008 to 2035, with Asian countries, particularly China and India, leading the way; their combined energy demand is expected to more than double, accounting for 31 percent of total world energy consumption in 2035 [1].

 

Worldwide, buildings are major energy consumers, and consequently they represent an opportunity for both economic and environmental gains through retrofitting and energy-efficient design. Underscoring the potential are projections that 60 percent of urban building floor space in China [2] and 70 percent of such space in India [3] remain to be built between now and 2030.

 

While the U.S. holds 5 percent of the world’s population [4], it consumes 19 percent of global energy production [5], and its buildings account for 41 percent of the country’s primary energy use [5]. In most other countries, buildings consume 30 to 40 percent of total energy [6].

 

On the positive side, buildings don’t just use a lot of energy; they also serve as deployment platforms for renewable-energy applications, including daylighting, solar water heating, photovoltaic electricity generation, and geothermal (ground-source) space conditioning and water heating [7].

 

Energy Modeling and Simulation

 

One way to improve the energy efficiency of buildings is through modeling and simulation. An energy model is a representation of a building used in simulation, built from design and operating parameters associated with energy consumption, such as cooling and heating equipment details, window performance values, installed electricity for lighting and user equipment, ventilation, and other factors.

 

Energy models have been employed in devising the optimal weatherization package for millions of low-income houses; determining the optimal return on investment for building retrofits; and informing policy decisions and tax incentives at the local, utility, state and federal levels.

 

A number of energy-modeling computer applications exist, such as EnergyPlus, eQuest and Trane TRACE, and creating models with them isn’t difficult. Making energy models that are accurate, however, is a major challenge.

 

“Getting building energy models right is so hard because the thousands of input parameters for each building cause even calibrated models created by experts using official measurement and validation guidelines to differ from monthly utility bills by 30 percent to 93 percent [8],” says Joshua New of the Building Technologies Research and Integration Center (BTRIC) at Oak Ridge National Laboratory (ORNL). BTRIC focuses on developing ways to reduce the energy consumption of the nation’s buildings and the resulting carbon emissions, work that is essential to sustainability.

 

Guideline 14 of ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) specifies a maximum normalized mean bias error (NMBE) of 5 percent for monthly energy use for a calibrated model to be acceptable for use in building codes and tax-related performance criteria.
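
For context, the NMBE that Guideline 14 refers to is typically computed from monthly measured and simulated energy totals. The minimal sketch below shows one common form of the calculation; the authoritative formula (including any degrees-of-freedom adjustment) and the acceptance thresholds should be taken from the guideline itself.

```python
def nmbe(measured, simulated):
    """Normalized mean bias error in percent, in the form commonly used for
    monthly utility-bill calibration (see ASHRAE Guideline 14 for the
    authoritative definition and acceptance criteria)."""
    if len(measured) != len(simulated) or not measured:
        raise ValueError("need two equal-length, non-empty series")
    n = len(measured)
    mean_measured = sum(measured) / n
    bias = sum(m - s for m, s in zip(measured, simulated))
    return 100.0 * bias / (n * mean_measured)

# Example: twelve monthly utility bills (kWh) vs. simulated monthly totals.
measured = [910, 870, 820, 760, 700, 690, 720, 750, 780, 820, 860, 900]
simulated = [940, 880, 800, 770, 720, 680, 700, 760, 790, 840, 880, 930]
print(f"NMBE = {nmbe(measured, simulated):+.1f}%")  # about -1.1%, within the 5 percent band
```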

 

“The cost required to collect data and tune a model to such accuracy involves so much manual effort that it is rarely employed—outside of research—for energy-service company projects smaller than $1 million,” New says. “An automated methodology for model calibration that realistically adjusts input parameters would eliminate risk from energy savings estimates and open up new business opportunities and energy-savings performance contracts in the light commercial and residential sectors. A cost-effective methodology that can meet Guideline 14 requirements is estimated to lead to a cumulative U.S. energy savings of 27.4 TBtus per year, or $1.6 billion annually.”

 

Autotune and High-performance Computing (HPC)

 

New and ORNL colleague Jibo Sanyal are leading a research project focused on developing just such a methodology. Their project, called Autotune, is an advanced analytical and optimization methodology that leverages terabytes of HPC-generated simulation data and data mining with multiple machine-learning algorithms to quickly calibrate a building energy model to measured (utility or sensor) data.

 

Allocations of compute time on the Nautilus supercomputer from the National Science Foundation-funded Extreme Science and Engineering Discovery Environment (XSEDE) are making the Autotune project possible. Housed at ORNL, Nautilus is managed by the University of Tennessee’s National Institute for Computational Sciences (NICS). The Center for Remote Data Analysis and Visualization (RDAV) of NICS is providing HPC consultative services to New and Sanyal’s team.

 

The Autotune project is, in fact, one of Nautilus’ biggest users, having clocked approximately 300,000 service units, or compute hours, so far. New says his project has used Nautilus to run about 75 percent of the simulations for warehouse buildings and 100 percent of those for retail buildings, two of the most common building types in the U.S. as measured by square footage.

 

“This project has completed about 1.6 million parametric simulations of the eventual 8 million, generating more than 50TB of data of the planned 270TB,” New says.

 

The components of the Autotune methodology are a simulation engine executable, simulation input files, sensor or utility data, and a mapping between simulation output and the sensor or utility data. The output is a calibrated simulation input model.

 

“The numerical techniques themselves are domain-agnostic and could be used to calibrate inputs for any virtual model to match experimental data,” New says.
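
To illustrate that domain-agnostic idea, the sketch below calibrates a toy model by searching its input parameters until simulated output matches a measured series. The random-search strategy, the parameter names and the run_model stub are illustrative assumptions for this example only; Autotune itself mines a large database of parametric simulations with machine-learning algorithms rather than re-running the simulation engine in a simple search loop.

```python
import random

def run_model(params):
    # Stand-in for a real simulation engine run (e.g., one EnergyPlus case);
    # it is just a toy function of two hypothetical inputs.
    return [params["infiltration"] * 100 + params["cop"] * month for month in range(1, 13)]

def sum_squared_error(simulated, measured):
    return sum((s - m) ** 2 for s, m in zip(simulated, measured))

def calibrate(measured, bounds, iterations=5000, seed=42):
    """Random-search calibration: keep the parameter set whose simulated
    output best matches the measured monthly series."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(iterations):
        candidate = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        err = sum_squared_error(run_model(candidate), measured)
        if err < best_err:
            best_params, best_err = candidate, err
    return best_params, best_err

# Pretend "utility data" generated from known parameter values, then recovered.
measured = run_model({"infiltration": 0.6, "cop": 3.2})
bounds = {"infiltration": (0.1, 1.0), "cop": (2.0, 5.0)}
best_params, best_err = calibrate(measured, bounds)
print(best_params, best_err)
```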

 

New and Sanyal have already adapted Autotune for the National Energy Audit Tool (NEAT) simulation engine. NEAT was developed at ORNL for the U.S. Department of Energy’s (DOE’s) Weatherization Assistance Program to help state and local agencies determine the most cost-effective retrofit measures for low-income, single-family homes, increasing occupant comfort and reducing monthly utility costs.

 

The Autotune researchers plan to deploy the methodology as a desktop application, website and web service. In addition, New says all the HPC-calculated data generated by the project are shared publicly online as part of an “open science” approach that allows citizen scientists and institutions to download and analyze the data for their own purposes. The data can be accessed through the Autotune website.

To replace the labor-intensive and costly process of building-science experts manually tuning virtual building models to match measured data, the Autotune project is developing a fully automated calibration methodology that intelligently and quickly modifies a virtual model to match measured data on commodity hardware (left side of diagram). The multi-terabyte database of building energy simulations is made freely and publicly available through several mechanisms (right side of diagram): web-based retrieval of input files, command-line access for SQL queries, and phpMyAdmin for SQL access and for browsing or exporting a subset of data to Excel and similar formats. The Autotune project also provides a means for the public to contribute their own simulations, as well as an automated tool that can continually simulate a directory of EnergyPlus input files and upload everything to a central database as each simulation completes.
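
Because the database is exposed for SQL access, pulling a subset of results can be done from any MySQL client or script. The sketch below assumes a MySQL endpoint and uses placeholder host, schema, table and column names; the real connection details and schema are documented on the Autotune website, so treat this purely as an illustration of the access pattern.

```python
# Requires the mysql-connector-python package. Every identifier below
# (host, user, database, table and column names) is a placeholder, not
# the project's actual schema.
import mysql.connector

conn = mysql.connector.connect(
    host="autotune-db.example.org",   # placeholder endpoint
    user="public", password="public",
    database="autotune_demo",         # placeholder schema
)
cursor = conn.cursor()
cursor.execute(
    "SELECT building_type, parameter_set_id, annual_kwh "
    "FROM simulations WHERE building_type = %s LIMIT 100",
    ("warehouse",),
)
for row in cursor.fetchall():
    print(row)
conn.close()
```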

The results of all Autotune experiments, New explains, are reproducible and are uploaded to the server, where each experiment’s performance can be investigated using interactive visualization provided by Tableau (business intelligence software) on the Autotune dashboard.

 The Autotune research team: (left to right) Jibo Sanyal, Mahabir Bhandari, Som Shrestha, Joshua New, Aaron Garret, Buzz Karpay, Richard Edwards

As the world applies energy modeling in its quest to attain cost savings, security and sustainability in the decades ahead, the Autotune methodology will place the bargain of energy efficiency within reach for more commercial and residential buildings.

Websites

Autotune

Autotune Dashboard

References

  1. International Energy Outlook 2011 from the U.S. Energy Information Administration (Report Number: DOE/EIA-0484 [2011])
  2. OECD Environmental Outlook to 2030 from the Organisation for Economic Co-operation and Development
  3. “Energy and Buildings in India: Setting a Course for Efficiency” from Institute for Building Efficiency
  4. "U.S. & World Population Clocks" from the United States Census Bureau
  5. Buildings Energy Data Book from the U.S. Department of Energy
  6. Buildings and Climate Change: Status, Challenges and Opportunities from the United Nations Environment Programme, 2007
  7. ORNL Building Technologies Research & Integration Center website
  8. Energy Performance Score 2008 Pilot: Findings and Recommendations Report, August 2009, prepared for Energy Trust of Oregon by Earth Advantage Institute and Conservation Services Group
Posted Friday, 08 Mar 2013

Long Predicted Atomic Collapse State Observed in Graphene
http://www.supercomputingonline.com/this-months-stories/long-predicted-atomic-collapse-state-observed-in-graphene

Berkeley Lab researchers recreate elusive phenomenon with artificial nuclei

The first experimental observation of a quantum mechanical phenomenon that was predicted nearly 70 years ago holds important implications for the future of graphene-based electronic devices. Working with microscopic artificial atomic nuclei fabricated on graphene, a collaboration of researchers led by scientists with the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley has imaged the "atomic collapse" states theorized to occur around super-large atomic nuclei.

 

"Atomic collapse is one of the holy grails of graphene research, as well as a holy grail of atomic and nuclear physics," says Michael Crommie, a physicist who holds joint appointments with Berkeley Lab's Materials Sciences Division and UC Berkeley's Physics Department. "While this work represents a very nice confirmation of basic relativistic quantum mechanics predictions made many decades ago, it is also highly relevant for future nanoscale devices where electrical charge is concentrated into very small areas."

 

Crommie is the corresponding author of a paper describing this work in the journal Science. The paper is titled "Observing Atomic Collapse Resonances in Artificial Nuclei on Graphene."  Co-authors are Yang Wang, Dillon Wong, Andrey Shytov, Victor Brar, Sangkook Choi, Qiong Wu, Hsin-Zon Tsai, William Regan, Alex Zettl, Roland Kawakami, Steven Louie, and Leonid Levitov.

 

Originating from the ideas of quantum mechanics pioneer Paul Dirac, atomic collapse theory holds that when the positive electrical charge of a super-heavy atomic nucleus surpasses a critical threshold, the resulting strong Coulomb field causes a negatively charged electron to populate a state in which it spirals down toward the nucleus and then spirals away again, emitting a positron (the positively charged antiparticle of the electron) in the process. This highly unusual electronic state is a significant departure from what happens in a typical atom, where electrons occupy stable orbits around the nucleus.

 

"Nuclear physicists have tried to observe atomic collapse for many decades, but they never unambiguously saw the effect because it is so hard to make and maintain the necessary super-large nuclei," Crommie says. "Graphene has given us the opportunity to see a condensed matter analog of this behavior, since the extraordinary relativistic nature of electrons in graphene yields a much smaller nuclear charge threshold for creating the special supercritical nuclei that will exhibit atomic collapse behavior."

 

Perhaps no other material is currently generating as much excitement for new electronic technologies as graphene, sheets of pure carbon just one atom thick through which electrons can freely race 100 times faster than they move through silicon. Electrons moving through graphene's two-dimensional layer of carbon atoms, which are arranged in a hexagonally patterned honeycomb lattice, perfectly mimic the behavior of highly relativistic charged particles with no mass. Superthin, superstrong, superflexible, and superfast as an electrical conductor, graphene has been touted as a potential wonder material for a host of electronic applications, starting with ultrafast transistors.
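
A rough back-of-the-envelope comparison, not taken from the paper, shows why those massless, relativistic carriers make supercriticality so much easier to reach than it is for real nuclei: graphene's electrons move at a Fermi velocity far below the speed of light, so the effective Coulomb coupling per unit of impurity charge is much larger than the fine structure constant of vacuum QED, even after dielectric screening by the substrate. The screening constant and critical-coupling values below are illustrative assumptions.

```python
# Illustrative estimate of the supercritical-charge threshold in vacuum vs. graphene.
# The numbers are rough, textbook-style assumptions, not values from the Science paper.
ALPHA = 1 / 137.036        # fine structure constant of vacuum QED
C_OVER_VF = 300            # approximate ratio of light speed to graphene's Fermi velocity
KAPPA = 5                  # assumed effective dielectric screening from the substrate
CRITICAL_COUPLING = 0.5    # massless Dirac-Coulomb problem turns supercritical near this coupling

alpha_graphene = ALPHA * C_OVER_VF / KAPPA      # effective coupling per unit of impurity charge
z_critical_vacuum = 1.0 / ALPHA                 # ~137 for a point nucleus; finite size pushes it to ~170
z_critical_graphene = CRITICAL_COUPLING / alpha_graphene

print(f"effective coupling per charge in graphene: ~{alpha_graphene:.2f}")
print(f"critical charge, vacuum point nucleus:     ~{z_critical_vacuum:.0f}")
print(f"critical charge, graphene (this estimate): ~{z_critical_graphene:.1f}")
```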

 

In recent years scientists predicted that highly-charged impurities in graphene should exhibit a unique electronic resonance - a build-up of electrons partially localized in space and energy - corresponding to the atomic collapse state of super-large atomic nuclei. Last summer Crommie's team set the stage for experimentally verifying this prediction by confirming that graphene's electrons in the vicinity of charged atoms follow the rules of relativistic quantum mechanics. However,  the charge on the atoms in that study was not yet large enough to see the elusive atomic collapse.

 

"Those results, however, were encouraging and indicated that we should be able to see the same atomic physics with highly charged impurities in graphene as the atomic collapse physics predicted for isolated atoms with highly charged nuclei," Crommie says. "That is to say, we should see an electron exhibiting a semiclassical inward spiral trajectory and a novel quantum mechanical state that is partially electron-like near the nucleus and partially hole-like far from the nucleus. For graphene we talk about 'holes' instead of the positrons discussed by nuclear physicists."

 

To test this idea, Crommie and his research group used a specially equipped scanning tunneling microscope (STM) in ultra-high vacuum to construct, via atomic manipulation, artificial  nuclei on the surface of a gated graphene device. The "nuclei" were actually clusters made up of pairs, or dimers, of calcium ions. With the STM, the researchers pushed calcium dimers together into a cluster, one by one, until the total charge in the cluster became supercritical. STM spectroscopy was then used to measure the spatial and energetic characteristics of the resulting atomic collapse electronic state around the supercritical impurity.

 

"The positively charged calcium dimers at the surface of graphene in our artificial nuclei played the same role that protons play in regular atomic nuclei," Crommie says. "By squeezing enough positive charge into a sufficiently small area, we were able to directly image how electrons behave around a nucleus as the nuclear charge is methodically increased from below the supercritical charge limit, where there is no atomic collapse, to above the supercritical charge limit, where atomic collapse occurs."

 

Observing atomic collapse physics in a condensed matter system is very different from observing it in a particle collider, Crommie says. Whereas in a particle collider the "smoking gun" evidence of atomic collapse is the emission of a positron from the supercritical nucleus, in a condensed matter system the smoking gun is the onset of a signature electronic state in the region nearby the supercritical nucleus. Crommie and his group observed this signature electronic state with artificial nuclei of three or more calcium dimers.

 

"The way in which we observe the atomic collapse state in condensed matter and think about it is quite different from how the nuclear and high-energy physicists think about it and how they have tried to observe it, but the heart of the physics is essentially the same," says Crommie.

 

If the immense promise of graphene-based electronic devices is to be fully realized, scientists and engineers will need to achieve a better understanding of phenomena like this one, which involve the interactions of electrons with one another and with impurities in the material.

 

"Just as donor and acceptor states play a crucial role in understanding the behavior of conventional semiconductors, so too should atomic collapse states play a similar role in understanding the properties of defects and dopants in future graphene devices," Crommie says. "Because atomic collapse states are the most highly localized electronic states possible in pristine graphene, they also present completely new opportunities for directly exploring and understanding electronic behavior in graphene."

 

In addition to Berkeley Lab and UC Berkeley, other institutions represented in this work include UC Riverside, MIT, and the University of Exeter.

 

Berkeley Lab's work was supported by DOE's Office of Science.  Other members of the research team received support from the Office of Naval Research and the National Science Foundation.

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

How to predict the progress of technology
http://www.supercomputingonline.com/this-months-stories/how-to-predict-the-progress-of-technology

Researchers at MIT and the Santa Fe Institute have found that some widely used formulas for predicting how rapidly technology will advance, notably Moore's Law and Wright's Law, offer better approximations of the pace of technological progress than other proposed formulas. The new research is the first to directly compare the different approaches in a quantitative way, using an extensive database of past performance from many different industries.

 

Some of the results were surprising, says Jessika Trancik, an assistant professor of engineering systems at MIT. The findings could help industries to assess where to focus their research efforts, investors to pick high-growth sectors, and regulators to more accurately predict the economic impacts of policy changes.

 

The report is published in the online open-access journal PLOS ONE. Its other authors are Bela Nagy of the Santa Fe Institute, J. Doyne Farmer of the University of Oxford and the Santa Fe Institute, and Quan Bui of St. John's College in Santa Fe, N.M.

 

The best-known of the formulas is Moore's Law, originally formulated by Intel co-founder Gordon Moore in 1965 to describe the rate of improvement in the power of computer chips. That law, which predicts that the number of components in integrated circuit chips will double every 18 months, has since been generalized as a principle that can be applied to any technology; in its general form, it simply states that a technology improves exponentially over time. The actual rate of improvement, the exponent in the equation, varies depending on the technology.

 

The analysis indicates that Moore's Law is one of two formulas that best match actual technological progress over past decades. The top performer, called Wright's Law, was first formulated in 1936: It holds that progress increases with experience — specifically, that each percent increase in cumulative production in a given industry results in a fixed percentage improvement in production efficiency.
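
In functional form the two laws are simple: Moore's Law makes cost an exponential function of calendar time, while Wright's Law makes cost a power law of cumulative production. The sketch below writes down both forms; the parameter values are arbitrary illustrations, not fits from the PLOS ONE study.

```python
import math

def moore_cost(years_elapsed, initial_cost, annual_improvement):
    """Moore's Law (generalized): cost falls exponentially with calendar time."""
    return initial_cost * (1 - annual_improvement) ** years_elapsed

def wright_cost(cumulative_units, first_unit_cost, learning_rate):
    """Wright's Law: each doubling of cumulative production cuts cost by a
    fixed fraction (the 'learning rate')."""
    exponent = -math.log2(1 - learning_rate)
    return first_unit_cost * cumulative_units ** (-exponent)

# Arbitrary illustrative parameters for a unit that starts at $100.
print(moore_cost(10, 100.0, 0.15))      # cost after 10 years of 15%-per-year improvement
print(wright_cost(1024, 100.0, 0.20))   # cost after 1,024 cumulative units at a 20% learning rate
```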

 

To carry out the analysis, the researchers amassed an extensive set of data on actual costs and production levels over time for 62 different industry sectors; these ranged from commodities such as aluminum, manganese and beer to more advanced products like computers, communications systems, solar cells, aircraft and cars.

 

"There are lots of proposals out there," Trancik says, for predicting the rate of advances in technologies. "But the data to test the hypotheses is hard to come by."

 

The research team scoured government reports, market-research publications, research reports and other published sources to compile their database. They only used sources for which at least a decade's worth of consistent data was available, and which contained metrics for both the rate of production and for some measure of improvement. They then analyzed the data by using the different formulas in "hindcasting": assessing which of the formulas best fit the actual pace of technological advances in past decades.
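
Hindcasting can be sketched as follows: fit each candidate formula on an early window of a historical series, then score how well its extrapolation matches the held-out later years. The code below is a simplified illustration on synthetic data (ordinary least squares in log space), not the statistical procedure used in the paper.

```python
import math

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def hindcast_errors(years, costs, production, split):
    """Fit Moore's Law (log cost vs. year) and Wright's Law (log cost vs. log
    cumulative production) on the first `split` points, then score each fit's
    squared extrapolation error on the held-out later points."""
    log_cost = [math.log(c) for c in costs]
    log_prod = [math.log(p) for p in production]
    errors = {}
    for name, xs in (("Moore", years), ("Wright", log_prod)):
        a, b = fit_linear(xs[:split], log_cost[:split])
        errors[name] = sum((a + b * x - y) ** 2
                           for x, y in zip(xs[split:], log_cost[split:]))
    return errors

# Synthetic 20-year series for illustration only: cumulative production grows
# polynomially rather than exponentially, and unit cost follows Wright's Law exactly.
years = list(range(20))
production = [100.0 * (t + 1) ** 2 for t in years]
costs = [50.0 * p ** -0.3 for p in production]
print(hindcast_errors(years, costs, production, split=10))  # Wright error ~0, Moore error > 0
```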

 

"We didn't know what to expect when we looked at the performance of these equations relative to one another," Trancik says, but "some of the proposals do markedly better than others."

 

Knowing which models work best in forecasting technological change can be very important for business leaders and policymakers. "It could be useful in things like climate-change mitigation," Trancik says, "where you want to know what you'll get out of your investment."

 

The rates of change vary greatly among different technologies, the team found.

 

"Information technologies improve the fastest," Trancik says, "but you also see the sustained exponential improvement in many energy technologies. Photovoltaics improve very quickly. … One of our main interests is in examining the data to gain insight into how we can accelerate the improvement of technology."

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

Supercomputer Model May Help Athletes, Soldiers Avoid Brain Damage, Concussions
http://www.supercomputingonline.com/this-months-stories/supercomputer-model-may-help-athletes-soldiers-avoid-brain-damage-concussions

Pictured: K. T. Ramesh.

Concussions can occur in sports and in combat, but health experts do not know precisely which jolts, collisions and awkward head movements during these activities pose the greatest risks to the brain. To find out, Johns Hopkins engineers have developed a powerful new supercomputer-based process that helps identify the dangerous conditions that lead to concussion-related brain injuries. This approach could lead to new medical treatment options and some sports rule changes to reduce brain trauma among players.

 

The research comes at a time when greater attention is being paid to assessing and preventing the head injuries sustained by both soldiers and athletes. Some kinds of head injuries are difficult to see with standard diagnostic imaging but can have serious long-term consequences. Concussions, once dismissed as a short-term nuisance, have more recently been linked to serious brain disorders.

 

“Concussion-related injuries can develop even when nothing has physically touched the head, and no damage is apparent on the skin,” said K. T. Ramesh, the Alonzo G. Decker Jr. Professor of Science and Engineering who led the research at Johns Hopkins. “Think about a soldier who is knocked down by the blast wave of an explosion, or a football player reeling after a major collision. The person may show some loss of cognitive function, but you may not immediately see anything in a CT-scan or MRI that tells you exactly where and how much damage has been done to the brain. You don’t know what happened to the brain, so how do you figure out how to treat the patient?”

 

To help doctors answer this question, Ramesh led a team that used a powerful technique called diffusion tensor imaging, together with a supercomputer model of the head, to identify injured axons, which are tiny but important fibers that carry information from one brain cell to another. These axons are concentrated in a kind of brain tissue known as “white matter,” and they appear to be injured during the so-called mild traumatic brain injury associated with concussions. Ramesh’s team has shown that the axons are injured most easily by strong rotations of the head, and the researchers’ process can calculate which parts of the brain are most likely to be injured during a specific event.

 

The team described its new technique in the Jan. 8 edition of the Journal of Neurotrauma. The lead author, Rika M. Wright, played a major role in the research while completing her doctoral studies in Johns Hopkins’ Whiting School of Engineering, supervised by Ramesh. Wright is now a postdoctoral research fellow at Carnegie Mellon University. Ramesh is continuing to conduct research using the technique at Johns Hopkins with support from the National Institutes of Health.

 

Beyond its use in evaluating combat and sports-related injuries, the work could have wider applications, such as detecting axonal damage among patients who have received head injuries in vehicle accidents or serious falls. “This is the kind of injury that may take weeks to manifest,” Ramesh said. “By the time you assess the symptoms, it may be too late for some kinds of treatment to be helpful. But if you can tell right away what happened to the brain and where the injury is likely to have occurred, you may be able to get a crucial head-start on the treatment.”

 

Armed with this knowledge, Ramesh and his colleagues want to use their new technology to examine athletes, particularly football and hockey players, who are tackled or struck during games in ways that inflict violent side-to-side motion on the head. In the recent journal article, the authors point out that many professional sports games are recorded in high-definition video from multiple angles. This, they write, could allow researchers to reconstruct the motions involved in sports collisions that lead to the most serious head injuries.

 

The authors also noted that some sports teams equip their players’ helmets or mouth guards with instruments that can measure the acceleration of the head during an impact. Such data, entered into the researchers’ supercomputer model, could help determine the likely location of brain damage. These results, combined with neuropsychological tests, could be used to guide the athlete’s treatment and rehabilitation, the authors said, and to help a sports team decide when an athlete should be allowed to resume playing. This strategy also may help reduce the risk to athletes arising from a degenerative disease linked to repeated concussions.
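
As a rough illustration of the kind of screening such helmet or mouth-guard data could feed, the sketch below flags impacts whose peak rotational acceleration exceeds a configurable threshold. The data format and the threshold value are hypothetical, and the actual Johns Hopkins approach simulates the brain's tissue response rather than applying a simple cutoff.

```python
from dataclasses import dataclass

@dataclass
class Impact:
    player_id: str
    peak_linear_g: float            # peak linear acceleration, in g
    peak_rotational_rad_s2: float   # peak rotational acceleration, in rad/s^2

# Hypothetical screening threshold; real injury risk depends on direction, duration
# and the modeled tissue response, not on a single scalar cutoff.
ROTATIONAL_THRESHOLD = 6000.0       # rad/s^2, illustrative only

def flag_for_review(impacts):
    """Return the impacts whose rotational acceleration suggests running the
    full head model and a neuropsychological assessment."""
    return [i for i in impacts if i.peak_rotational_rad_s2 >= ROTATIONAL_THRESHOLD]

impacts = [
    Impact("player-17", 45.0, 2500.0),
    Impact("player-22", 98.0, 7400.0),
]
for hit in flag_for_review(impacts):
    print(f"{hit.player_id}: send the sensor trace to the simulation pipeline")
```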

 

More research, testing and validation must be conducted before the supercomputer model can become useful in a clinical setting. This will include animal experiments and the correlation of data from event reconstruction to make sure the model accurately identifies brain injuries.

 

Ideally, Ramesh would like to collect digital brain images from soldiers and athletes before they enter combat or join highly physical sports activities. “We would then be able to track a high-risk population and keep records detailing what types of head injuries they experience,” he said. “Then, we could look at how their brains may have changed since the original images were collected. This will also help guide the physicians and health professionals who provide treatment after critical events.”

 

In addition to Wright and Ramesh, the co-authors of the study were Andrew Post and Blaine Hoshizaki, both of the Neurotrauma Impact Science Library, Department of Human Kinetics, University of Ottawa, Canada. Funding for the research was provided by a National Science Foundation Graduate Research Fellowship and by the Whiting School-based Center for Advanced Metallic and Ceramic Systems. Ramesh is founding director of the Hopkins Extreme Materials Institute, of which the center is a part.

 

Ramesh and Jerry L. Prince, the William B. Kouwenhoven Professor of Electrical and Computer Engineering at Johns Hopkins, also are part of a team that recently received a five-year, $2.25 million National Institutes of Health grant to better understand traumatic brain injuries in order to improve methods for prevention and treatment. The principal investigator on the NIH project is Philip Bayly, the Lilyan and E. Lisle Hughes Professor of Mechanical Engineering at Washington University in St. Louis.

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

Mathematician Arthur Szlam named Sloan Research Fellow
http://www.supercomputingonline.com/this-months-stories/mathematician-arthur-szlam-named-sloan-research-fellow

Dr. Arthur Szlam, assistant professor of mathematics at The City College of New York, has been awarded the Sloan Research Fellowship for 2013. Professor Szlam develops mathematics for cutting-edge applications in machine learning, a branch of artificial intelligence research focused on improving the abilities of computers to learn in a more human way.

 

The Alfred P. Sloan Foundation fellowships identify early-career scientists and scholars judged to be rising stars, whose achievements and independent scholarship demonstrate their potential to become leaders in their field. "The Sloan Research Fellows are the best of the best among young scientists," said Dr. Paul Joskow, president of the Sloan Foundation. "If you want to know where the next big scientific breakthrough will come from, look to these extraordinary men and women."

 

The Foundation added 126 researchers from the U.S. and Canada to its ranks this year. Each will receive $50,000 to further their research. A blue ribbon panel of three mathematicians judged Professor Szlam to be among the most exciting and promising researchers in the field, said a spokesman for the Sloan Foundation.

 

"It's a huge prize and huge recognition," said Professor Christian Wolf, chair of the mathematics department at City College. "We are very, very proud of having him in the department."

 

Professor Szlam stands out for several reasons, one of which is his unusual approach, Professor Wolf explained. Applied mathematicians often begin with a particular mathematical technique and then find ways to apply it to solve problems in the real world – such as equipment failure or traffic routing.

 

"He does it the opposite way," said Professor Wolf. "He looks at interesting applications and modifies existing mathematical tools – or invents the necessary mathematics – to solve these problems. Frequently he comes up something completely new. In the past year his research has really exploded."

 

The mathematics of machine learning helps a computer system discover how to analyze new types of data without being programmed with the specific steps to do so. Professor Szlam focuses on computer vision, teaching a computer to learn to distinguish and categorize objects in a collection of photos, for example, to enable precise image searching and sorting.

 

The field is still in its infancy, says Szlam, but the kinds of problems posed by computer vision and machine learning have spurred tremendous growth in mathematical tools.

 

"For most of the history of the universe – or the history of humans – the most interesting math came from physics," Professor Szlam explained. "Now a lot of math is coming from machine learning and machine vision – this is a new place to get inspiration."

 

 

This is also the math that could lead to so-called strong AI, the kind of artificial intelligence imagined in the Iron Man films, said Professor Szlam. "When Tony Stark is talking to his computer – if you're interested in making that happen, the basic steps of machine vision come first."

 

"The interesting thing here is that even if you don't care about machine learning, it is inspiring really beautiful math," he added.

 

Only six young faculty members from across The City University of New York have previously won the prize, created in 1955. Four of these hailed from City College at the time they won. In addition to Professor Szlam, two are current faculty: Professor of Mathematics Thea Pignataro (Sloan Fellow 1990) and Distinguished Professor of Science and Engineering - Physics, Robert Alfano (Sloan Fellow 1975). City College Professor of Mathematics Jay Jorgenson also won the prize in 1994, when he was at Yale University.

 

"The Sloan gives you a tremendous amount of breathing room and time to think about things," said Professor Szlam. With that freedom he plans to attend various mathematics institutes for intensive research in the next year.

 

Of the problems he will tackle, he said, "The main goal for mathematics is to do something pretty – but it's nice to have some goal, something to push you off."

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

NJIT new patent awards: Orthogonal space time codes, decoding data transmissions
http://www.supercomputingonline.com/this-months-stories/njit-new-patent-awards-orthogonal-space-time-codes-decoding-data-transmissions

Two new patents, one to improve orthogonal space-time codes and one to decode data transmissions that use space-time spreading, were recently awarded to NJIT Distinguished Professor Yeheskel Bar-Ness, executive director of the Elisha Yegal Bar-Ness Center for Wireless Communications and Signal Processing Research. His co-inventors, NJIT alumni Amir Laufer and Kodzovi Acolatse, each share one of the patents.
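
For readers unfamiliar with the coding family involved, the best-known orthogonal space-time block code is the two-antenna Alamouti scheme, sketched below. It is a textbook illustration only, not the method claimed in either of the patents, which cover improved transmission and decoding of such codes.

```python
def alamouti_encode(symbols):
    """Alamouti 2x2 orthogonal space-time block code: each pair of complex
    symbols is sent from two antennas over two time slots.
      slot 1: antenna 1 sends s1,        antenna 2 sends s2
      slot 2: antenna 1 sends -conj(s2), antenna 2 sends conj(s1)
    Returns a list of (antenna1, antenna2) tuples, one per time slot."""
    slots = []
    for s1, s2 in zip(symbols[0::2], symbols[1::2]):
        slots.append((s1, s2))
        slots.append((-s2.conjugate(), s1.conjugate()))
    return slots

# Four QPSK-style symbols -> four transmit slots across the two antennas.
for slot, (a1, a2) in enumerate(alamouti_encode([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), start=1):
    print(f"slot {slot}: antenna1 = {a1}, antenna2 = {a2}")
```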

 

"Method and Apparatus for Improving Transmission with Orthogonal Space Time Codes," (US Patent # 8.379.746) was awarded Feb.19 2013 to Bar-Ness and Laufer. "Modern wireless communication systems utilize multiple antennas for transmitting and receiving the data," said Bar-Ness. "A simple, yet powerful coding scheme for such systems is orthogonal space time coding. This invention involves a novel method for the transmission and the decoding of such codes resulting in better utilization of the channel, i.e., transmission with higher data rate along with lower error rate."

 

"Decoding Data Transmitted Space-Time Spreading in a Wireless Communication System Implementation and Performance Analysis of Space Time Spreading DS-CDMA System," (US Patent # 8.355426) was awarded Jan. 15 2013 to Bar-Ness and Acolatse.

 

Bar-Ness, a prominent expert in wireless communications and signal processing, has worked for four decades to advance the field of electrical and computer engineering. Bar-Ness, who still directs the Center for Wireless Communications and Signal Processing Research, has worked with industry, government and other universities to improve many aspects of wireless technology.

 

An especially notable achievement of the Center is the set of algorithms developed by its researchers. The algorithms have become industry standards, used to facilitate so-called code division multiple access (CDMA), a widely-used digital cell phone technology. Faculty affiliated with the center--the backbone of communications research in the department of electrical and computer engineering at NJIT for two decades--have received funding for projects from the National Science Foundation, the U.S. Army and Air Force and companies that include AT&T, ITT, InterDigital, Nokia, Mitsubishi, Panasonic, Samsung and Telcordia.

 

Both Laufer and Acolatse received doctorates in electrical engineering from the Department of Electrical and Computer Engineering at Newark College of Engineering in 2011 and 2010, respectively. Laufer is now a senior DSP Algorithms Engineer for Intel Israel at its development center in Jerusalem. Acolatse is a patent examiner at the US Patent and Trademark Office in Washington, DC.

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

The large-scale EU project EU BON: Towards integration with its global counterpart GEO BON
http://www.supercomputingonline.com/this-months-stories/the-large-scale-eu-project-eu-bon-towards-integration-with-its-global-counterpart-geo-bon

The project's main objective is to support the Group on Earth Observations Biodiversity Observation Network (GEO BON) and Global Earth Observation System of Systems (GEOSS)

The official kickoff meeting of the Building the European Biodiversity Observation Network (EU BON) project, organized by the Museum für Naturkunde, Berlin, took place on 13-15 February 2013 to formally mark the beginning of the project and to set goals and objectives for the future. Among the most pressing issues discussed was the integration of EU BON's framework with its global counterpart, the Group on Earth Observations Biodiversity Observation Network (GEO BON), and with the Global Earth Observation System of Systems (GEOSS). Another intention set for the future is enhanced communication and synchronization among the various partners and work packages.


The main objective set for EU BON is to contribute to, and thus build a substantial part of, GEO BON. An EU BON Advisory Board comprising ten leading experts in data management, biodiversity conservation and earth observation has been set up. Dr. Wouter Los, Chairman of the Expert Centre for Taxonomic Identification (ETI) and 2nd Vice Chair of the Global Biodiversity Information Facility (GBIF) Governing Board, was elected chair of the EU BON Advisory Board. It was decided that, with the direction and help of the Advisory Board, substantial work should be done toward a more comprehensive vision of the relationship between the two projects and of EU BON's place as a major contributor.


Another stated aim is to work toward collaboration among the currently fragmented biodiversity data sources in Europe, creating an integrated network and framework that serves the project's own objectives and, ultimately, GEO BON. Dialogue and association with similar or relevant biodiversity projects and initiatives, at the European and global levels, are also encouraged, and organizing a conference is on the project's to-do list.


Enhanced communication among the different partners and work packages has been identified as the engine for achieving the project's main objectives. A second official meeting has already been scheduled for 2014 to measure EU BON's progress and to set further goals. Meanwhile, partners are already organizing workshops to pursue the directions for development currently set.

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

New flex-grid system prevents optical network 'traffic jams'
http://www.supercomputingonline.com/this-months-stories/new-flex-grid-system-prevents-optical-network-traffic-jams

This is the Centre Tecnologic de Telecomunicacions de Catalunya ADRENALINE testbed, showing an MPLS-TP access and aggregation network over a 4-node wavelength-switched optical transport network. The multi-layer network can be controlled by either a distributed GMPLS control plane or an integrated stateful PCE/OpenFlow centralized control plane.

OFC/NFOEC 2013 to feature talk on OpenFlow/PCE hybrid system that dynamically controls and manages optical network connections

Services like Google Maps use algorithms to determine the fastest route from point A to point B—even factoring in real-time traffic information as you travel to redirect you if, for example, a parade is blocking part of your route. Now, a team of researchers from Spain and Japan has achieved this kind of traffic control for the connections in optical networks by using a new dynamic network management system—and it does Google Maps one better. If necessary, the flexible-grid system can also redirect the traffic-congesting parade to another street (by re-arranging one or more existing connections), so you (a single new connection) wouldn't have to go out of your way to avoid gridlock.

 

Ramon Casellas, a research associate at the Catalonia Technological Center of Telecommunications (CTTC) near Barcelona, will describe the system developed by his team and colleagues at KDDI R&D Labs in Japan at the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC) March 17-21 in Anaheim, Calif. The research represents one of many OFC/NFOEC talks on future network capabilities made possible by Software-Defined Networking, a popular topic at this year's event.

 

This particular system design combines two elements: an OpenFlow controller and a so-called "stateful" path computation element (PCE). An OpenFlow controller uses a protocol that allows the behavior of a network device—regardless of its manufacturer—to be remotely configured and, Casellas says, "by extension, provides a way to operate a network using a logically centralized element that can see the network as a whole." This enables packets of data to navigate the path of switches on a network much more efficiently than with traditional routing protocols, as if there were multiple, but coordinated remote traffic controllers helping to guide the network.

 

A PCE, in simple terms, is a dedicated computer that finds network routes between endpoints. "The functions of a PCE are conceptually similar to Google Maps or GPS navigation systems," Casellas says.

 

A stateful PCE, he says, is smarter because it keeps track of and considers current connections to improve and dynamically correct the path computations for all of the connections in the network. Because the existing connections are stored in an internal database, advanced algorithms can use information about them to enhance network speed and efficiency. They do this by improving the optimization of the active connections as a whole instead of individually.

 

"The underlying idea," Casellas explains, "is that having extra information is helpful to improve the performance of the path computation, and thus the network. An active, stateful PCE also can affect the status of the active connections. For example, an active, stateful PCE is able to re-arrange active connections to allocate new ones."

 

Essentially, the system knows every connection on a network and what it is doing at any given time, with the ability to reroute those connections midstream based on new connections coming in to the network.
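
A minimal sketch of the stateful idea: a path computation element that remembers which spectrum slots every active connection occupies and only grants new paths that satisfy spectrum continuity. The topology, the slot model and the (absent) re-arrangement policy are simplified assumptions for illustration, not the CTTC/KDDI implementation.

```python
class StatefulPCE:
    """Toy stateful path computation element: it remembers which spectrum slots
    every active connection occupies and only grants paths whose links share a
    free slot (spectrum continuity)."""

    def __init__(self, links, slots_per_link=8):
        # links: iterable of (node_a, node_b) tuples describing the topology
        self.free = {link: set(range(slots_per_link)) for link in links}
        self.active = {}  # connection id -> (path, slot)

    def _links_on(self, path):
        return list(zip(path, path[1:]))

    def compute_and_reserve(self, conn_id, path):
        """Reserve the lowest spectrum slot that is free on every link of `path`;
        return the slot index, or None if no common slot exists."""
        links = self._links_on(path)
        common = set.intersection(*(self.free[link] for link in links))
        if not common:
            return None  # an active stateful PCE could try re-arranging existing connections here
        slot = min(common)
        for link in links:
            self.free[link].discard(slot)
        self.active[conn_id] = (path, slot)
        return slot

pce = StatefulPCE([("A", "B"), ("B", "C"), ("A", "C")], slots_per_link=4)
print(pce.compute_and_reserve("lsp-1", ["A", "B", "C"]))  # -> 0
print(pce.compute_and_reserve("lsp-2", ["A", "B", "C"]))  # -> 1
```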

 

Casellas and his colleagues successfully tested their system by using it to dynamically control the optical spectrum in the fibers in a flexi-grid optical network. In such networks, he says, the intrinsic constraints of the optical technology—for example, caused by physical defects in the network—justify the deployment of PCEs.

 

"Combining a stateful PCE with OpenFlow provides an efficient solution for operating transport networks," says Casellas. "An OpenFlow controller and a stateful PCE have several functions in common but also complement each other, and it makes sense to integrate them. This allows a return on investment and reduces operational expenses and time-to-market."

 

Casellas' presentation at OFC/NFOEC, titled "An Integrated Stateful PCE/OpenFlow controller for the Control and Management of Flexi-Grid Optical Networks," will take place Wednesday, March 20 at 3:45 p.m. in the Anaheim Convention Center.

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

New gender benchmarking study: Brazil succeeding in providing a positive STI environment for women
http://www.supercomputingonline.com/this-months-stories/new-gender-benchmarking-study-brazil-succeeding-in-providing-a-positive-sti-environment-for-women

However, overall numbers of women in engineering, physics and computer science are on the decline

 

In the first gender benchmarking study of its kind, researchers have found that numbers of women in the science, technology and innovation fields are alarmingly low in the world's leading economies, and are actually on the decline in many, including the United States. Results from Brazil show that despite women having a strong representation in parts of the science, technology and innovation sector, and a slight increase in engineering, physics and computer science, overall numbers are on the decline.

 

Brazil ranks highest in this study after the EU and US, coming in above South Africa, India, Indonesia and even the Republic of Korea. Brazil is an example of a country with both a highly enabling policy environment and effective implementation strategies for women. It ranks third overall: first in women's participation in the knowledge economy and in science, technology and innovation (STI), and second in health, opportunity & capability, and supportive policy. It is third in social status, economic status and access to resources for women. Its low ranking (4th) in knowledge-society decision-making, however, shows where improvement needs to be made.

 

The full gender benchmarking study maps the opportunities and obstacles faced by women in science in Brazil, South Africa, India, the Republic of Korea, Indonesia, the US and the EU. The study was conducted by experts in international gender, science and technology issues from Women in Global Science & Technology (WISAT) and the Organization for Women in Science for the Developing World (OWSD), and funded by the Elsevier Foundation. The research was led by Dr. Sophia Huyer, Executive Director of WISAT, and Dr. Nancy Hafkin, Senior Associate of WISAT.

 

Despite efforts by many countries to give women greater access to science and technology education, research shows negative results, particularly in the areas of engineering, physics and computer science.

 

  •     Women remain severely under-represented in degree programs for these fields (less than 30% in most countries, and 20% in Brazil).
  •     The representation of women enrolled in bio and health sciences is also low in the country, at 28%. In addition, the numbers of women actually working in these fields are declining across the board in all countries, standing at 18% in Brazil.
  •     Even in countries where the numbers of women studying science and technology have increased, this has not translated into more women in the workplace.
  •     The numbers of women professional and technical workers in Brazil are comparatively high, fluctuating between 53% and 63% over the last decade, although this category includes communications, arts and athletics as well as technical professions.
  •     The proportion of women IT workers is lower, at 33%, but the highest of all the countries studied. Brazil also has among the highest percentages of female-run businesses with more than one employee, at 28%, and comparatively high levels of management participation (45%), but low representation at the highest decision-making levels in the corporate and science sectors, at approximately 8%.

 

Two Brazilian researchers involved in the study, Maria Coleta Oliveira of the State University of Campinas and Alice Abreu of the Federal University of Rio de Janeiro, explain that Brazil has implemented a substantial number of policies and programs supporting women's education at all levels, including science, engineering and technology. "However impressive the results in the last 10 years are, women in Brazil are nevertheless still not well represented at the decision-making levels of the science and technology system," noted Alice Abreu. "They remain a minority in engineering, physics and computer sciences, and have low participation in the knowledge society workforce. Even more creative actions will have to be developed to focus on these next steps, so that Brazil can fully profit from the investment it is making."

 

"These economies are operating under the existing paradigm that if we give girls and women greater access to education they will eventually gain parity with men in these fields," states Sophia Huyer, the lead researcher of the and founding executive director of Women in Global Science & Technology. "This has dictated our approach to the problem for over a decade and we are still only seeing incremental changes. The report indicates that access to education is not a solution in and of itself. It's only one part of what should be a multi-dimensional policymaking approach. There is no simple solution."

 

The overall data show that women's parity in the science, technology and innovation fields is tied to multiple empowerment factors, with the most influential being representation in the labor force, larger roles in government and politics, access to economic, productive and technological resources, quality healthcare and financial resources. Findings also show that women have greater parity in countries with government policies that support childcare, equal pay, and gender mainstreaming. One of the main findings is that few countries collect consistent and reliable sex-disaggregated data in all of these areas, which inhibits their ability to implement effective supporting policies and programs.

 

"We found that the absence of any one of these elements creates a situation of vulnerability for economies that want to be competitively positioned in the knowledge economy," Huyer says. "No one country or region is ticking off all the boxes, and some are falling dismally short. This is a tremendous waste of resources. We are wasting resources educating women without following through, and we are missing out on the enormous potential that women represent."

 

"This broad and ambitious assessment is a critical starting point for measuring the participation of women and girls in science, technology and innovation in emerging and developing worlds," said David Ruth, Executive Director of the Elsevier Foundation, "This study identifies key areas of national strength and weakness, and we hope it will help form the basis of evidence-based policy making and aid going forward."

 

The report, funded by The Elsevier Foundation, which provides grant programs targeting women scientists in the early stages of their careers, was also supported by futureInnovate.net, a non-profit that supports initiatives that strengthen innovation systems in Canada and around the world.

Posted by Tyler O'Neal on Thursday, 07 Mar 2013

Radware, Mellanox Enable Mobile Carriers to Leverage NFV, SDN
http://www.supercomputingonline.com/this-months-stories/radware-mellanox-enable-mobile-carriers-to-leverage-nfv-sdn

Collaboration between Radware and Mellanox enables extraction of network information to detect and protect against security threats in real time for a more resilient and cost-effective mobile network
 
Radware has announced a collaborative effort with Mellanox Technologies.
 
Combining Radware's Application Delivery and Attack Mitigation Security solutions with Mellanox Technologies' 10G and 40G NICs demonstrates an ability to extract network and application information that can be translated into a scalable solution for detecting various security threats in real time.
 
The solution not only makes the mobile network more resilient while supporting the Network Functions Virtualization (NFV) initiative, but also enables a cost-effective, better-utilized mobile operation by eliminating the high CPU overhead and performance penalty associated with vSwitches. In addition, embedded switching and SR-IOV technology result in accelerated performance and improved security, isolation and scalability.
 
By using an OpenFlow interface to manage the embedded virtual switch, operators can now monitor and control traffic in real time through a standard control interface. By enabling end-to-end Software Defined Networking (SDN) deployments, the solution provides mobile carriers with highly secure, resilient mobile service operation and cost-effective mobile cloud services.
 
"Our ADC and Security are field-proven products that are already deployed in various virtualization environments. By integrating Mellanox's industry-leading and high-bandwidth solution offers mobile carriers to drive their NFV and SDN initiatives," says Avi Chesla, chief technology officer, Radware.

"Mellanox is a leading 10/40GbE server, storage and embedded communications interconnect provider and a key contributor to SDN and NFV forums," says Amir Prescher, senior vice president of business development at Mellanox Technologies. "By integrating solutions with Radware, we can provide mobile carriers with clear business benefits and unique value at the application-level through the integration with a NFV-based infrastructure."

Radware and Mellanox's offering to mobile carriers was designed to employ the virtues of both SDN and NFV, enabling easy adoption of commercial off-the-shelf (COTS) server infrastructure that serves both network-level and application-level needs. Furthermore, it can enable a range of mobile carrier and mobile cloud use cases, including elastic application delivery, mobile security and more.
Posted by Tyler O'Neal on Thursday, 07 Mar 2013