CAPTION Prototypes for cheaper computer chips are being built with metal-containing DNA origami structures. CREDIT Zoie Young, Kenny Lee and Adam Woolley

Electronics manufacturers constantly hunt for ways to make faster, cheaper computer chips, often by cutting production costs or by shrinking component sizes. Now, researchers report that DNA, the genetic material of life, might help accomplish this goal when it is formed into specific shapes through a process reminiscent of the ancient art of paper folding.

The researchers present their work on Sunday at the 251st National Meeting & Exposition of the American Chemical Society (ACS). ACS, the world's largest scientific society, is holding the meeting in San Diego through Thursday. It features more than 12,500 presentations on a wide range of science topics.

"We would like to use DNA's very small size, base-pairing capabilities and ability to self-assemble, and direct it to make nanoscale structures that could be used for electronics," Adam T. Woolley, Ph.D., says. He explains that the smallest features on chips currently produced by electronics manufacturers are 14 nanometers wide. That's more than 10 times larger than the diameter of single-stranded DNA, meaning that this genetic material could form the basis for smaller-scale chips.

"The problem, however, is that DNA does not conduct electricity very well," he says. "So we use the DNA as a scaffold and then assemble other materials on the DNA to form electronics."

To design computer chips similar in function to those that Silicon Valley churns out, Woolley, in collaboration with Robert C. Davis, Ph.D., and John N. Harb, Ph.D., at Brigham Young University, is building on other groups' prior work on DNA origami and DNA nanofabrication.

The most familiar form of DNA is a double helix, which consists of two single strands of DNA. Complementary bases on each strand pair up to connect the two strands, much like rungs on a twisted ladder. But to create a DNA origami structure, researchers begin with a long single strand of DNA. The strand is flexible and floppy, somewhat like a shoelace. Scientists then mix it with many other short strands of DNA -- known as "staples" -- that use base pairing to pull together and crosslink multiple, specific segments of the long strand to form a desired shape.
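The staple mechanism can be sketched in a few lines of code. Below is a toy model (the sequences and function names are invented for illustration, not taken from the researchers' work): each half of a staple strand is the reverse complement of a distant scaffold segment, so base pairing pulls those two segments together.

```python
# Toy illustration of DNA origami "stapling": a short staple strand binds
# (by Watson-Crick complementarity) to two distant segments of a long
# scaffold strand, pulling them together. All sequences are made up.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the strand that base-pairs with `seq`."""
    return seq.translate(COMPLEMENT)[::-1]

def staple_binding_sites(scaffold: str, staple: str, half: int):
    """Split the staple into two halves and find where each half
    base-pairs on the scaffold. Returns the two binding positions."""
    targets = [reverse_complement(staple[:half]),
               reverse_complement(staple[half:])]
    return [scaffold.find(t) for t in targets]

scaffold = "ATGCGTACCGGTTAACCTTGGAACGTATGCATTACGGATCCGTA"
# A staple whose first half pairs near the start of the scaffold and whose
# second half pairs near the end, folding the strand back on itself.
staple = reverse_complement(scaffold[2:8]) + reverse_complement(scaffold[34:40])
sites = staple_binding_sites(scaffold, staple, 6)
print(sites)  # [2, 34]: two distant scaffold positions bridged by one staple
```

Mixing many such staples, each targeting its own pair of segments, is what folds the floppy scaffold into a rigid designed shape.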

However, Woolley's team isn't content with merely replicating the flat shapes typically used in traditional two-dimensional circuits. "With two dimensions, you are limited in the density of components you can place on a chip," Woolley explains. "If you can access the third dimension, you can pack in a lot more components."

Kenneth Lee, an undergraduate who works with Woolley, has built a 3-D, tube-shaped DNA origami structure that sticks up like a smokestack from substrates, such as silicon, that will form the bottom layer of their chip. Lee has been experimenting with attaching additional short strands of DNA to fasten other components such as nano-sized gold particles at specific sites on the inside of the tube. The researchers' ultimate goal is to place such tubes, and other DNA origami structures, at particular sites on the substrate. The team would also link the structures' gold nanoparticles with semiconductor nanowires to form a circuit. In essence, the DNA structures serve as girders on which to build an integrated circuit.

Lee is currently testing the characteristics of the tubular DNA. He plans to attach additional components inside the tube, with the eventual aim of forming a semiconductor.

Woolley notes that a conventional chip fabrication facility costs more than $1 billion, in part because the equipment necessary to achieve the minuscule dimensions of chip components is expensive and because the multi-step manufacturing process requires hundreds of instruments. In contrast, a facility that harnesses DNA's knack for self-assembly would likely entail much lower start-up funding, he states. "Nature works on a large scale, and it is really good at assembling things reliably and efficiently," he says. "If that could be applied in making circuits for computers, there's potential for huge cost savings."

This agro-industrial field being prepared for soybean planting is in the midst of Brazil's Cerradao forest. A team of Stanford researchers has developed a computer model that can help understand the ways that activities such as clear-cutting might impact the future of the land and indigenous people who live in the Amazon rainforest. (Photo: Courtesy Jose Fragoso)

A supercomputer simulation shows that carefully designing government interactions with rural indigenous people is critical for protecting the sustainability of people, wildlife and the land.

People have thrived deep within the Amazon rainforest for hundreds of years without contact with the outside world. The constant encroachment of modern civilization, however, is putting the long-term sustainability of these people, and the ecosystems they inhabit, at risk.

Now a team of Stanford researchers has developed a computer model that can help understand the ways that activities such as clear-cutting and welfare programs might impact the future of the land and the people who live inside protected areas of the rainforest. They hope the simulation serves as a useful tool for governments and other organizations that interact with the world's indigenous people.

Indigenous people control about half of the planet's undeveloped land. And in the tropics, hunting and habitat degradation are the major drivers in animal and plant population changes. Understanding how external factors influence the relationship between indigenous people and their land has significant policy implications.

The Makushi, Wapishana and Wai Wai are indigenous tribes that inhabit the Rupununi region of southern Guyana, where they survive as traditional forest-dwellers, growing cassava (tapioca plant) and hunting. The region has recently faced social and environmental changes caused in part by the government's attempts to integrate rural and urban areas.

For three years, Stanford biologist Jose Fragoso and his collaborators worked with these people to collect extensive information on local plant and animal species, as well as demographic information on the nearly 10,000 residents of the Rupununi region.

The scientists then used this data to develop a supercomputer simulation model to gauge how future developments could impact the people, forest and wildlife in the region. They analyzed four of the most common drivers of social-ecological change in indigenous lands: introduction of advanced health care, abandonment of traditional religious and taboo beliefs, the conversion of land outside the indigenous area for large-scale agriculture, and the introduction of external food resources.

When the researchers used the simulation to vary the intensity of the first three drivers over a span of 250 years, agro-industrial land development outside the protected areas lowered both biodiversity and the total number of animals and plants inside them, as did the loss of traditional taboos. Improved health care within the areas had less of an impact on the environment. Eventually the ecosystem absorbed these impacts and leveled out to a new normal, even when those influences remained in place.

Clear-cutting the surrounding lands, for instance, significantly lowered the number of animals within the protected areas. But as the simulation played out, the system shifted to a new equilibrium with a lower human population, animal abundance and forest cover.

The introduction of external food, however, may have lessened people's reliance on local resources and allowed the population to grow rapidly. This in turn placed higher pressure on both the animal populations and the forest, as people drew more heavily on the region's natural resources to sustain themselves. Eventually, the system collapsed in the simulation.
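The collapse dynamic described above can be illustrated with a toy discrete-time model. This is not the Stanford team's simulation; every equation and parameter here is invented for illustration: a harvested resource with logistic regrowth, coupled to a population whose growth tracks the food supply.

```python
# Toy sketch of the dynamic described in the article: external food
# subsidies let the population grow, which raises harvesting pressure on a
# local resource until it collapses. All parameters are invented.

def simulate(years=250, external_food=0.0):
    resource, people = 1.0, 0.1   # normalized resource stock and population
    for _ in range(years):
        # fraction of the resource harvested rises with population, capped
        harvest = resource * min(0.9, 0.3 * people)
        # logistic regrowth minus extraction; stock cannot go negative
        resource = max(resource + 0.25 * resource * (1 - resource) - harvest, 0.0)
        food = harvest + external_food
        # population grows when food exceeds per-capita need (0.2 each)
        people = max(people + 0.5 * (food - 0.2 * people), 0.0)
    return resource, people

baseline = simulate(external_food=0.0)
subsidized = simulate(external_food=0.3)
print(baseline)    # settles at a modest equilibrium of people and resource
print(subsidized)  # imports prop up a larger population; local resource collapses
```

The qualitative behavior mirrors the article: without subsidies the coupled system finds a new equilibrium, while a steady external food supply decouples population growth from the local resource and drives it toward exhaustion.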

Population growth caused the disturbance, Fragoso said, but it was the introduction of food that triggered the population growth, highlighting the need for the careful introduction of essential resources from outside.

"It's important to bring food, but the way it is introduced makes a difference in whether the system stays stable or becomes unstable," Fragoso said. "The model behaves as if the food has been dropped in by a parachute, but in reality, local inhabitants and policymakers set policy for how the food arrives."

This is a modeling effort only, Fragoso said, but it provides a key for understanding the world and how society should proceed. Fragoso has been advising the Brazilian government on using the model to develop and implement strategic and responsible subsidy programs. He suggests that supplying cash transfer programs in tandem with cultural support, education, fisheries and wildlife management will help people remain connected to their land and culture to preserve sustainability and maintain tropical forests.

"I believe this modeling tool has good potential to support participatory management and conservation of biodiversity in the Amazon region," said Carlos Durigan, the director of the Wildlife Conservation Society in Brazil. "But of course, we must combine it with a strategy of local involvement and good investments in technologies. The idea is to both monitor and develop a good basis for more responsible natural resources management and to construct an alternative way to ensure quality of life to indigenous populations facing a changing scenario both in terms of socioeconomics and environmental issues."

The paper was published in the journal Frontiers in Ecology and the Environment. It was co-authored by Stanford scientists Eric Lambin, Jeffrey Luzar and Jose Fragoso; Takuya Iwamura, who conducted the work at Stanford and is now at Tel Aviv University; and Kirsten Silvius of the Virginia Polytechnic Institute and State University.

Small Business Vouchers to advance hydropower, energy efficiency & bio-based chemicals

Hydropower costs could be reduced, buildings could use less energy and adhesives could be made from plants under three new projects by the Department of Energy.

DOE's Pacific Northwest National Laboratory (PNNL) is being awarded a total of $625,000 to advance these technologies. The three projects are part of the first round of funding for DOE's new Small Business Vouchers pilot. Nearly $6.7 million in total funding was announced today to support technologies being developed by 33 different small businesses. Each small business will also provide an additional 20 percent in cost-share funding or in-kind services for each project. More information on PNNL's three new projects is provided below.

"The Small Business Vouchers pilot allows innovative entrepreneurs greater access to the world-class resources and brilliant minds in our (national) labs," said David Danielson, assistant secretary for DOE's Office of Energy Efficiency and Renewable Energy. "These partnerships can help small businesses solve their most pressing technical challenges — and help bring clean energy technologies to commercialization much faster."

Initially announced in July 2015, the pilot helps small clean energy firms receive technology assistance from DOE's national laboratory system. PNNL is helping lead the pilot and will specifically support small businesses in three areas: bioenergy, water power and buildings.

DOE also announced today that more small businesses can now apply to receive vouchers through the second round of this program. Second round applications are due April 10. More information can be found on the Small Business Vouchers website.

Supercomputers help make better hydro turbines

Improved, screw-shaped turbines could generate electricity in small U.S. waterways such as irrigation canals. These hydro turbines, called Archimedes Hydrodynamic Screw turbines, already are widely used in European waters. To lower their cost and make them more feasible for use in the U.S., Kennewick, Washington-based Percheron Power, LLC, wants to make these turbines out of composite materials instead of metal.

PNNL engineer Marshall Richmond and his team will use advanced supercomputer models to help Percheron advance its turbine designs. The researchers will run the models on PNNL's supercomputer to compare the performance of different turbine designs and predict the strength requirements for turbines. Percheron will use the results to build prototype composite turbines and test them in a lab and in the field. PNNL is being awarded $200,000.

Advancing algorithms for energy-efficient buildings

Small and medium-sized commercial buildings could cut their power bills with the help of national lab-developed algorithms that improve lighting, heating and cooling systems by identifying systems that aren't working as intended — such as thermostats that don't change temperatures at assigned times — and correcting them. But while these algorithms have worked well in experiments, they need further refinement to be ready for real-world use.
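A minimal sketch of this kind of rule-based fault detection (illustrative only, not the actual PNNL/LBNL algorithms; the function and data are invented): compare scheduled setpoints against measured zone temperatures and flag hours where the equipment failed to respond.

```python
# Illustrative rule-based fault check: flag a thermostat that fails to
# move toward its scheduled setpoint change.

def detect_stuck_thermostat(readings, schedule, tolerance=1.0):
    """readings: {hour: measured_temp_F}, schedule: {hour: setpoint_F}.
    Returns hours where the measured temperature missed the setpoint
    by more than `tolerance` degrees."""
    return [hour for hour, setpoint in schedule.items()
            if hour in readings and abs(readings[hour] - setpoint) > tolerance]

# The scheduled evening setback to 65 F never happens: the zone sits at 72 F.
schedule = {8: 70, 12: 72, 18: 65, 22: 62}
readings = {8: 70.4, 12: 71.8, 18: 72.1, 22: 71.9}
print(detect_stuck_thermostat(readings, schedule))  # [18, 22]
```

Real fault-detection suites layer many such rules (plus statistical baselines) over building sensor streams, then trigger corrective actions; the refinement work described here is about making that robust outside the lab.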

PNNL engineer Michael Brambley and his team will help Lake Oswego, Oregon-based NorthWrite make these algorithms ready for commercial use. PNNL will test and validate algorithm performance and help NorthWrite adapt them for the company's cloud-based software, among other tasks. The algorithms involved were developed by PNNL and Lawrence Berkeley National Laboratory. PNNL is being awarded $300,000.

Improving plant-based chemical production

The cost and carbon footprint of synthetic rubbers, latex and adhesives could be reduced by making them from plants instead of petroleum. Berkeley, California-based Visolis has developed a new process using fermentation and catalysts to convert plant-derived sugars into isoprene, a chemical from which those materials are made.

PNNL and the National Renewable Energy Laboratory (NREL) will help Visolis scale up its process and produce samples that will be tested to ensure the process creates a quality chemical. PNNL engineer Karthi Ramasamy will lead a team that improves half of the process, while NREL will improve the other half. PNNL will receive $125,000 for the project and NREL will receive $175,000.

CAPTION The novel approach to making systems forget data is called "machine unlearning" by the two researchers who are pioneering the concept. Instead of making a model directly depend on each training data sample (left), they convert the learning algorithm into a summation form (right) - a process that is much easier and faster than retraining the system from scratch. CREDIT Yinzhi Cao and Junfeng Yang

Novel approach enables removal of data without retraining a computer learning system from scratch, leading to quicker recovery from cyber-attacks and better privacy protection

Machine learning systems are everywhere. Computer software in these machines predicts the weather, forecasts earthquakes, provides recommendations based on the books and movies we like and even applies the brakes on our cars when we are not paying attention.

To do this, computer systems are programmed to find predictive relationships calculated from the massive amounts of data we supply to them. Machine learning systems use advanced algorithms--sets of rules for solving math problems--to identify these predictive relationships using "training data." This data is then used to construct the models and features within a system that enable it to correctly predict your desire to read the latest best-seller, or the likelihood of rain next week.

This intricate learning process means that a piece of raw data often goes through a series of computations in a given system. The data, computations and information derived by the system from that data together form a complex propagation network called the data's "lineage." The term was coined by researchers Yinzhi Cao of Lehigh University and Junfeng Yang of Columbia University who are pioneering a novel approach toward making such learning systems forget.

Considering how important this concept is to increasing security and protecting privacy, Cao and Yang believe that easy adoption of forgetting systems will be increasingly in demand. The pair has developed a way to do it faster and more effectively than what is currently available.

Their concept, called "machine unlearning," is so promising that the duo has been awarded a four-year, $1.2 million National Science Foundation grant--split between Lehigh and Columbia--to develop the approach.

"Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity," said Yinzhi Cao, Assistant Professor of Computer Science and Engineering at Lehigh University's P.C. Rossin College of Engineering & Applied Science and a Principal Investigator on the project. "These systems must remove the data and undo its effects so that all future operations run as if the data never existed."

Increasing security & privacy protections

There are a number of reasons why an individual user or service provider might want a system to forget data and its complete lineage. Privacy is one.

After Facebook changed its privacy policy, many users deleted their accounts and the associated data. The iCloud photo hacking incident in 2014--in which hundreds of celebrities' private photos were accessed via Apple's cloud services suite--led to online articles teaching users how to completely delete iOS photos, including the backups. New research has revealed that machine learning models for personalized medicine dosing leak patients' genetic markers. Only a small set of statistics on genetics and diseases is enough for hackers to identify specific individuals, despite cloaking mechanisms.

Naturally, users unhappy with these newfound risks want their data and its influence on the models and statistics to be completely forgotten.

Security is another reason. Consider anomaly-based intrusion detection systems used to detect malicious software. In order to positively identify an attack, the system must be taught to recognize normal system activity. Therefore the security of these systems hinges on the model of normal behaviors extracted from the training data. By polluting the training data, attackers pollute the model and compromise security. Once the polluted data is identified, the system must completely forget the data and its lineage in order to regain security.
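The pollution effect can be made concrete with a toy example (a simple z-score detector with invented traffic numbers, not any production intrusion-detection system): seeding the training data with attack-like values widens the model of "normal" so the real attack slips through, and forgetting those values restores detection.

```python
import statistics

# Toy z-score anomaly detector: "normal" is modeled by the mean and
# standard deviation of the training data. Polluted training points widen
# the model so an attack no longer looks anomalous; forgetting them
# (training on the clean data again) restores detection.

def is_anomalous(value, training, threshold=3.0):
    mu = statistics.mean(training)
    sigma = statistics.pstdev(training)
    return abs(value - mu) > threshold * sigma

clean = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # e.g. requests/sec baseline
attack = 25.0

print(is_anomalous(attack, clean))            # True: the attack is flagged
polluted = clean + [24.0, 26.0, 25.5]         # attacker-seeded "normal" data
print(is_anomalous(attack, polluted))         # False: the attack now hides
```

Here retraining on six clean points is trivial; the hard problem the researchers tackle is achieving the same effect in large systems without a full retrain.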

Widely-used learning systems such as Google Search are, for the most part, only able to forget a user's raw data upon request and not that data's lineage. While this is obviously problematic for users who wish to ensure that any trace of unwanted data is removed completely, this limitation is also a major challenge for service providers who have strong incentives to fulfill data removal requests, including the retention of customer trust.

Service providers will increasingly need to be able to remove data and its lineage completely to comply with laws governing user data privacy, such as the "right to be forgotten" ruling issued in 2014 by the European Union's top court. In October 2014 Google removed more than 170,000 links to comply with the ruling that affirmed an individual's right to control what appears when their name is searched online. In July 2015, Google said it had received more than a quarter-million requests.

Breaking down dependencies

Building on their previous work, which was revealed at a 2015 IEEE symposium and then published, Cao and Yang's "machine unlearning" method is based on the fact that most learning systems can be converted into a form that can be updated incrementally without costly retraining from scratch.

Their approach introduces a layer of a small number of summations between the learning algorithm and the training data, breaking the direct dependency between the two. The learning algorithms then depend only on the summations, not on individual data points. With this method, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data. Simply recomputing a small number of summations removes the data and its lineage completely--and much faster than retraining the system from scratch.
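The summation idea can be sketched with a naive-Bayes-style word counter (the class, data and scoring function here are invented for illustration, not taken from the paper's systems): the model touches the data only through per-word counts, so unlearning a sample means subtracting its contribution from those counts.

```python
from collections import Counter

# Minimal sketch of the summation idea: a spam scorer that depends on the
# training data only through per-word counts (the summations). Unlearning
# a sample subtracts its summand -- no pass over the remaining data.

class UnlearnableSpamCounter:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def learn(self, words, label):
        self.counts[label].update(words)
        self.totals[label] += 1

    def unlearn(self, words, label):
        self.counts[label].subtract(words)   # remove the sample's summand
        self.totals[label] -= 1

    def spam_score(self, word):
        # add-one smoothed P(word | spam) vs P(word | ham)
        p_spam = (self.counts["spam"][word] + 1) / (self.totals["spam"] + 2)
        p_ham = (self.counts["ham"][word] + 1) / (self.totals["ham"] + 2)
        return p_spam / (p_spam + p_ham)

model = UnlearnableSpamCounter()
model.learn(["free", "money"], "spam")
model.learn(["meeting", "notes"], "ham")
before = model.spam_score("free")
model.unlearn(["free", "money"], "spam")
after = model.spam_score("free")

# A model retrained from scratch on only the remaining (ham) sample:
fresh = UnlearnableSpamCounter()
fresh.learn(["meeting", "notes"], "ham")
print(before > 0.5, after == fresh.spam_score("free"))  # True True
```

After unlearning, the model's scores are identical to those of a model that never saw the forgotten sample, yet only two counters were updated rather than the whole training set being reprocessed.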

Cao says he believes they are the first to establish the connection between unlearning and the summation form.

And it works. Cao and Yang evaluated their unlearning approach by testing it on four real-world systems. This diverse set of programs serves as a representative benchmark for their method: LensKit, an open-source recommendation system; Zozzle, a closed-source JavaScript malware detector; an open-source OSN spam filter; and PJScan, an open-source PDF malware detector.

The success they achieved during these initial evaluations has set the stage for the next phases of the project, which include adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.

In their paper's introduction, Cao and Yang look ahead to what's next for Big Data and are convinced that "machine unlearning" could play a key role in enhancing security and privacy and in our economic future:

"We foresee easy adoption of forgetting systems because they benefit both users and service providers. With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems. More data also benefit the service providers, because they have more profit opportunities and fewer legal risks."

They add: "...we envision forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership."

Climate model limitations can lead to overestimates in climate change attribution

The influence of climate change on extreme weather and climate events is a topical question in climate science. Recent extreme events (such as the European heatwave in 2015 [1]) have been partly attributed to climate change by comparing the probability that the event would occur in the world as we observe it with the probability that it would occur in a hypothetical world without climate change. These probabilities are typically estimated using climate model simulations that have known limitations in simulating extreme events. A study published today in Geophysical Research Letters suggests that the shortcomings of these models lead to a tendency to overestimate the attribution.
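The comparison described above is commonly summarized by the risk ratio and the fraction of attributable risk, FAR = 1 - P0/P1, where P1 is the event probability in the factual (observed) world and P0 in the counterfactual world without climate change. A small arithmetic sketch, with invented ensemble values standing in for model output:

```python
# Illustrative event-attribution arithmetic: event probabilities are
# estimated as exceedance frequencies in two model ensembles -- one for
# the observed world (factual) and one without human influence
# (counterfactual). The ensemble values below are invented.

def exceedance_prob(ensemble, threshold):
    """Fraction of ensemble members exceeding the event threshold."""
    return sum(x > threshold for x in ensemble) / len(ensemble)

def attribution_metrics(p_factual, p_counterfactual):
    rr = p_factual / p_counterfactual          # risk ratio
    far = 1 - p_counterfactual / p_factual     # fraction of attributable risk
    return rr, far

p1 = exceedance_prob([38.1, 36.4, 39.0, 37.2, 38.8, 35.9, 37.7, 38.3], 38.0)
p0 = exceedance_prob([35.2, 36.1, 34.8, 38.2, 35.5, 36.6, 34.9, 35.7], 38.0)
rr, far = attribution_metrics(p1, p0)
print(rr, far)  # 4.0 0.75: the event is "4 times more likely" with warming
```

The sensitivity is apparent: an unreliable ensemble that biases p0 low inflates both the risk ratio and the FAR, which is exactly the overestimation the study warns about.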

Climate models are the best tools we have for performing an event attribution study, yet they have known imperfections when it comes to reliably simulating the probability that an event might occur. The authors of the study point out that model reliability is not always ensured and that past studies have paid too little attention to this requirement. Attribution studies would therefore benefit from the ensemble calibration methods used by today's operational weather and climate forecasting centres. However, the authors also stress that, while there is a risk that its influence has been overestimated, climate change has been an important factor in the development of recent extreme events.

You can read the full article, “Attribution of extreme weather and climate events overestimated by unreliable climate simulations,” in Geophysical Research Letters.
