LATEST

Marvell Technology is based in Bermuda but run from headquarters in Santa Clara, CA.

  • Complementary portfolios and scale enable world-class end-to-end solutions
  • Diversifies revenue base and end markets; increases SAM to $16 billion+
  • Combined R&D innovation engine and IP portfolio accelerates product leadership
  • Creates best-in-class financial model

Marvell Technology Group Ltd. and Cavium, Inc. have announced a definitive agreement, unanimously approved by the boards of directors of both companies, under which Marvell will acquire all outstanding shares of Cavium common stock in exchange for consideration of $40.00 per share in cash and 2.1757 Marvell common shares for each Cavium share. Upon completion of the transaction, Marvell will become a leader in infrastructure solutions with approximately $3.4 billion in annual revenue.

The transaction combines Marvell's portfolio of leading HDD and SSD storage controllers, networking solutions and high-performance wireless connectivity products with Cavium's portfolio of leading multi-core processing, networking communications, storage connectivity and security solutions. The combined product portfolios provide the scale and breadth to deliver comprehensive end-to-end solutions for customers across the cloud data center, enterprise and service provider markets, and expand Marvell's serviceable addressable market to more than $16 billion. This transaction also creates an R&D innovation engine to accelerate product development, positioning the company to meet today's massive and growing demand for data storage, heterogeneous computing and high-speed connectivity.

"This is an exciting combination of two very complementary companies that together equal more than the sum of their parts," said Marvell President and Chief Executive Officer, Matt Murphy. "This combination expands and diversifies our revenue base and end markets, and enables us to deliver a broader set of differentiated solutions to our customers. Syed Ali has built an outstanding company, and I'm excited that he is joining the Board. I'm equally excited that Cavium's Co-founder Raghib Hussain and Vice President of IC Engineering Anil Jain will also join my senior leadership team. Together, we all will be able to deliver immediate and long-term value to our customers, employees and shareholders."

"Individually, our businesses are exceptionally strong, but together, we will be one of the few companies in the world capable of delivering such a comprehensive set of end-to-end solutions to our combined customer base," said Cavium Co-founder and Chief Executive Officer, Syed Ali. "Our potential is huge. We look forward to working closely with the Marvell team to ensure a smooth transition and to start unlocking the significant opportunities that our combination creates."

The transaction is expected to generate at least $150 million to $175 million of annual run-rate synergies within 18 months post close and to be significantly accretive to revenue growth, margins and non-GAAP EPS.

Transaction Structure and Terms 
Under the terms of the definitive agreement, Marvell will pay Cavium shareholders $40.00 in cash and 2.1757 Marvell common shares for each share of Cavium common stock. The exchange ratio was based on a purchase price of $80 per share, using Marvell's undisturbed price prior to November 3, when media reports of the transaction first surfaced. This represents a transaction value of approximately $6 billion. Cavium shareholders are expected to own approximately 25% of the combined company on a pro forma basis.

Marvell intends to fund the cash consideration with a combination of cash on hand from the combined companies and $1.75 billion in debt financing. Marvell has obtained commitments consisting of an $850 million bridge loan commitment and a $900 million committed term loan from Goldman Sachs Bank USA and Bank of America Merrill Lynch, in each case, subject to customary terms and conditions. The transaction is not subject to any financing condition.

The transaction is expected to close in mid-calendar 2018, subject to regulatory approval as well as other customary closing conditions, including the adoption by Cavium shareholders of the merger agreement and the approval by Marvell shareholders of the issuance of Marvell common shares in the transaction.

Management and Board of Directors 
Matt Murphy will lead the combined company, and the leadership team will have strong representation from both companies, including Marvell's current Chief Financial Officer Jean Hu, Cavium's Co-founder and Chief Operating Officer Raghib Hussain and Cavium's Vice President of IC Engineering Anil Jain. In addition, Cavium's Co-founder and Chief Executive Officer, Syed Ali, will continue with the combined company as a strategic advisor and will join Marvell's Board of Directors, along with two additional board members from Cavium's Board of Directors, effective upon closing of the transaction.

Advisors
Goldman Sachs & Co. LLC served as the exclusive financial advisor to Marvell and Hogan Lovells US LLP served as legal advisor. Qatalyst Partners LP and J.P. Morgan Securities LLC served as financial advisors to Cavium and Skadden, Arps, Slate, Meagher & Flom LLP served as legal advisor.

Marvell Preliminary Third Fiscal Quarter Results 
Based on preliminary financial information, Marvell expects revenue of $610 to $620 million and non-GAAP earnings per share to be between $0.32 and $0.34, above the mid-point of guidance provided on August 24, 2017. Further information regarding third fiscal quarter results will be released on November 28, 2017 at 1:45 p.m. Pacific Time.

Transaction Website 
For more information, investors are encouraged to visit http://MarvellCavium.transactionannouncement.com, which will be used by Marvell and Cavium to disclose information about the transaction and comply with Regulation FD. 

Kyumin Lee, assistant professor of computer science at Worcester Polytechnic Institute (WPI), is building algorithms to weed out crowdturfers with a high rate of accuracy

A researcher at Worcester Polytechnic Institute (WPI) is using computer science to help fight the growing problem of crowdturfing--a troublesome phenomenon in which masses of online workers are paid to post phony reviews, circulate malicious tweets, and even spread fake news. Funded by a National Science Foundation CAREER Award, assistant professor Kyumin Lee has developed algorithms that have proven highly accurate in detecting fake "likes" and followers across various platforms like Amazon, Facebook, and Twitter.

Crowdturfing (a term that combines crowdsourcing and astroturf, the artificial grass) is like an online black market for false information. Its consequences can be dangerous, including customers buying products that don't live up to their artificially inflated reviews, malicious information being pushed out in fake tweets and posts, and even elections swayed by concerted disinformation campaigns.

"We don't know what is real and what is coming from people paid to post phony or malicious information," said Lee, a pioneer in battling crowdturfing who said the problem can undermine the credibility of the Internet, leaving people feeling unsure about how much they can trust what they see even on their favorite websites.

"We believe less than we used to believe," he said. "That's because the amount of fake information people see has been increasing. They're manipulating our information, whether it's a product review or fake news. My goal is to reveal a whole ecosystem of crowdturfing. Who are the workers performing these tasks? What websites are they targeting? What are they falsely promoting?"

Lee, who joined WPI in July, focuses first on crowdsourcing sites like Amazon's Mechanical Turk (MTurk), an online marketplace where anyone can recruit workers to complete tasks for pay. While most of the tasks are legitimate, the sites have also been used to recruit people to help with crowdturfing campaigns. Some crowdsourcing sites try to weed out such illegitimate tasks, but others don't. And the phony or malicious tasks can be quite popular, as they usually pay better than legitimate ones.

Using machine learning and predictive modeling, Lee builds algorithms that sift through the posted tasks looking for patterns that his research has shown are associated with these illegitimate tasks: for example, higher hourly wages or jobs that involve manipulating or posting information on particular websites or clicking on certain kinds of links. The algorithm can identify the malicious organizations posting the tasks, the websites the crowdturfers are told to target, and even the workers who are signing up to complete the tasks.
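
As a rough illustration of the approach described above, the short Python sketch below trains a simple classifier on invented task features (hourly pay, whether a task targets a review or social site, whether it asks workers to click links). It is a minimal stand-in, not Lee's algorithm; the synthetic data, feature names and coefficients are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Minimal sketch of a supervised task classifier, NOT Lee's actual algorithm.
# Features (hourly pay, whether the task targets a review/social site, whether it
# asks workers to click links) and labels are synthetic, invented for illustration.
rng = np.random.default_rng(42)
n = 2000
pay_per_hour = rng.gamma(shape=2.0, scale=4.0, size=n)       # dollars per hour
targets_review_site = rng.integers(0, 2, size=n)
asks_link_clicks = rng.integers(0, 2, size=n)

# Assumed pattern echoing the article: illegitimate tasks tend to pay better and
# to involve posting on particular sites or clicking certain links.
logit = 0.25 * (pay_per_hour - 8.0) + 1.2 * targets_review_site + 1.0 * asks_link_clicks - 1.5
is_crowdturfing = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([pay_per_hour, targets_review_site, asks_link_clicks])
X_train, X_test, y_train, y_test = train_test_split(X, is_crowdturfing, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```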

Looking next at the targeted websites, the predictive algorithms can gauge the probability that new users are, in fact, there to carry out assigned crowdturfing tasks--for example "liking" content on social media sites or "following" certain social media users. In Lee's research, the algorithms have detected fake likes with 90 percent accuracy and fake followers with 99 percent accuracy.

While he hasn't specifically researched fake news, Lee said bots are not the only things pushing out propaganda and misleading stories online. People can easily be hired on crowdsourcing sites to spread fake news articles, increasing their reach and malicious intent.

"The algorithm will potentially prevent future crowdturfing because you can predict what users will be doing," he added. "Hopefully, companies can apply these algorithms to filter malicious users and malicious content out of their systems in real time. It will make their sites more credible. It's all about what information we can trust and improving the trustworthiness of cyberspace."

Lee's work is funded by his five-year, $516,000 NSF CAREER Award, which he received in 2016 while at Utah State University. In addition, in 2013 he was one of only 150 professors in the United States to receive a Google Faculty Research Award; that $43,295 award also supports his crowdturfing research.

Before turning his attention to crowdturfing, Lee conducted research on spam detection. One of his next goals is to adapt his algorithms to detect both spam and crowdturfing. He said crowdturfing is more difficult to detect because, for example, a review of a new product can look legitimate, even if it's been bought and paid for. "The ideal solution is one method that can detect all of these problems at once," he said. "We can build a universal tool that can detect all kinds of malicious users. That's my future work."

Lee added that he expects to make his algorithms openly available to companies and organizations, which could tailor them to their specific needs. "I expect to share the data set so people can come up with a better algorithm, adapted for their specific organization," he said. "When they read our papers, they can understand how this works and implement their own system."

Lee has been assisted in his research by WPI computer science graduate students Thanh Tran and Nguyen Vo.

CAPTION 3-D heart cell simulations exhibiting propagating waves of spontaneous calcium release that can trigger rare, deadly arrhythmias. CREDIT Walker et al.

Supercomputational tool could uncover molecular underpinnings of rare, deadly arrhythmias

A new supercomputational model of heart tissue allows researchers to estimate the probability of rare heartbeat irregularities that can cause sudden cardiac death. The model, developed by Mark Walker from Johns Hopkins University, is presented in PLOS Computational Biology.

An increased risk of sudden cardiac death is associated with some heart diseases. It occurs when an irregular heartbeat (arrhythmia) interferes with normal electrical signaling in the heart, leading to cardiac arrest. Previous research has shown that simultaneous, spontaneous calcium release by clusters of adjacent heart cells can cause premature heartbeats that trigger these deadly arrhythmias.

Despite their importance, arrhythmias that can cause sudden cardiac death are so rare that estimating their probability is difficult, even for powerful high performance computers. For instance, using a "brute force" approach, more than 1 billion simulations would be required to accurately estimate the probability of an event that has a one in 1 million chance of occurring.
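
The billion-simulation figure can be sanity-checked with the standard error of a plain Monte Carlo estimate of a small probability; the short Python sketch below works out roughly how many runs a brute-force approach would need (the target error levels are illustrative choices, not from the paper).

```python
# Back-of-the-envelope check of the "more than 1 billion simulations" figure.
# For a yes/no Monte Carlo estimate of a probability p, the relative standard
# error is about sqrt((1 - p) / (N * p)), so reaching a target relative error
# requires roughly N = (1 - p) / (p * rel_err**2) simulations.

def runs_needed(p, rel_err):
    return (1.0 - p) / (p * rel_err ** 2)

p = 1e-6                       # a one-in-a-million arrhythmic event
for rel_err in (0.10, 0.03):   # target relative errors (illustrative choices)
    print(f"{rel_err:.0%} relative error -> ~{runs_needed(p, rel_err):.1e} runs")
# About 1e8 runs at 10% error and over 1e9 at 3%, consistent with the figure
# quoted above; the new method instead needs only hundreds of simulations.
```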

In the new study, the researchers developed a method that requires just hundreds of simulations in order to estimate the probability of a deadly arrhythmia. These simulations are powered by a supercomputational model that, unlike previously developed models, realistically incorporates details of the molecular processes that occur in heart cells.

The researchers demonstrated that, by altering model parameters, they could use the model to investigate how particular molecular processes might control the probability of deadly arrhythmias. They found that specific, molecular-level electrical disruptions associated with heart failure increased the probability of deadly arrhythmias by several orders of magnitude.

"This study represents an important step forward in understanding how to pinpoint the molecular processes that are the primary regulators of the probability of occurrence of rare arrhythmic events," says study co-author Raimond Winslow. "As such, our approach offers a powerful new computational tool for identifying the optimal drug targets for pharmacotherapy directed at preventing arrhythmias."

"Multiscale [super]computer models are critical to link an improved understanding of drug-disease mechanisms to tissue and organ behavior in complex diseases like heart failure," says Karim Azer, Sr. Director and Head of Systems Pharmacology, Sanofi, who was not involved in the study. He added, "The models provide predictions that can be tested in the laboratory or the clinic, and as such, the pharmaceutical industry is increasingly utilizing mathematical and computational modeling approaches for enabling key drug discovery and development decisions regarding drug and patient characteristics (i.e., towards precision medicine)."

In the future, the team plans to apply their method to three-dimensional cell clusters, instead of the one-dimensional fibers explored in this study. Future work could also employ experiments with engineered human cardiac tissue to help verify the model's predictions.

Objects scattered to the inner region of the Solar System by Jupiter's growth brought most of the water now found on Earth

Equipped with Newton's law of universal gravitation (published in Principia 330 years ago) and powerful computational resources (used to apply the law to more than 10,000 interacting bodies), a young Brazilian researcher and his former postdoctoral supervisor have just proposed a new physical model to explain the origin of water on Earth and the other Earth-like objects in the Solar System.

André Izidoro, from the School of Engineering of Sao Paulo State University in Guaratinguetá, Brazil, explains that the novelty does not lie in the idea that Earth's water came predominantly from asteroids. "What we did was associate the asteroid contribution with the formation of Jupiter. Based on the resulting model, we 'delivered to Earth' amounts of water consistent with currently estimated values," said Izidoro, who is supported by the Sao Paulo Research Foundation (FAPESP) through its Young Investigators Grants program.

The explanation is presented in the article "Origin of water in the inner Solar System: Planetesimals scattered inward during Jupiter and Saturn's rapid gas accretion", co-authored with the American astrophysicist Sean Raymond, who is currently with the Bordeaux Astrophysics Laboratory in France. The article was published in the planetary science journal Icarus.

Estimates of the amount of water on Earth vary a great deal. If the unit of measurement is terrestrial oceans, some scientists speak of three to four of them, while others estimate dozens. The variation derives from the fact that the amounts of water in the planet's hot mantle and its rocky crust are unknown. In any event, the model proposed covers the full range of estimates.

"First of all, it's important to leave aside the idea that Earth received all its water via the impacts of comets from very distant regions. These 'deliveries' also occurred, but their contributions came later and were far less significant in percentage terms," Izidoro said. "Most of our water came to the region currently occupied by Earth's orbit before the planet was formed."

Pre-history of the Solar System: water-rich protoplanets

To understand how this happened, it is worth restating the scenario defined in the conventional model of the Solar System's formation and then adding the new model for the advent of water. The initial condition is a gigantic cloud of gas and cosmic dust. Owing to some kind of gravitational disturbance or local turbulence, the cloud collapses, with its matter drawn toward a particular inner region that becomes its center.

As matter accumulated, about 4.5 billion years ago the center became so massive and hot that nuclear fusion began, transforming it into a star. Meanwhile, the remaining cloud continued to orbit the center, and its matter coalesced into a disk, which later fragmented to define protoplanetary niches.

"The water-rich region of this disk is estimated to have been located several astronomical units from our Sun. In the inner region, closer to the star, the temperature was too high for water to accumulate except, perhaps, in very small amounts in the form of vapor," Izidoro said.

An astronomical unit (AU) is the average distance from the Earth to the Sun. The region between 1.8 AU and 3.2 AU is currently occupied by the Asteroid Belt, with hundreds of thousands of objects. The asteroids located between 1.8 AU and 2.5 AU are mostly water-poor, whereas those located beyond 2.5 AU are water-rich. The process whereby Jupiter was formed can explain the origin of this division, according to Izidoro.

"The time elapsed between the Sun's formation and the complete dissipation of the gas disk was quite short on the cosmogonic scale: from only 5 million to, at most, 10 million years," he said. "The formations of gas giants as massive as Jupiter and Saturn can only have occurred during this youthful phase of the Solar System, so it was during this phase that Jupiter's rapid growth gravitationally disturbed thousands of water-rich planetesimals, dislodging them from their original orbits."

The traumatic birth of gas giants

Jupiter is believed to have a solid core with a mass equivalent to several times that of Earth. This core is surrounded by a thick and massive layer of gas. Jupiter could only have acquired this gaseous envelope during the solar nebula phase, when the system was forming and a huge amount of gaseous material was available.

The acquisition of this gas by gravitational attraction was very fast because of the great mass of Jupiter's embryo. In the region where the giant planet formed, beyond the "snow line", thousands of planetesimals (rocky bodies similar to asteroids) orbited the center of the disk while also attracting one another.

The rapid increase of Jupiter's mass undermined the fragile gravitational equilibrium of this system with many bodies. Several planetesimals were engulfed by proto-Jupiter. Others were propelled to the outskirts of the Solar System. In addition, a smaller number were hurled into the disk's inner region, delivering water to the material that later formed the terrestrial planets and the Asteroid Belt.

"The period during which the Earth was formed is dated to between 30 million and 150 million years after the Sun's formation," Izidoro said. "When this happened, the region of the disk in which our planet was formed already contained large amounts of water, delivered by the planetesimals scattered by Jupiter and also by Saturn. A small proportion of Earth's water may have arrived later via collisions with comets and asteroids. An even smaller proportion may have been formed locally through endogenous physicochemical processes. But most of it came with the planetesimals."

Model simulates the gravitational interference experienced by icy objects

His argument is supported by the model he built with his former supervisor. "We used supercomputers to simulate the gravitational interactions among multiple bodies by means of numerical integrators in Fortran," he explained. "We introduced a modification to include the effects of the gas present in the medium during the era of planet formation because, in addition to all the gravitational interactions that were going on, the planetesimals were also impacted by the action of what's known as 'gas drag', which is basically a 'wind' blowing in the opposite direction of their movement. The effect is similar to the force perceived by a cyclist in motion as the molecules of air collide with his body."

Owing to gas drag, the initially very elongated orbits of the planetesimals scattered by Jupiter were gradually "circularized". It was this effect that implanted these objects in what is now the Asteroid Belt.
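
A toy version of this effect can be reproduced in a few lines of Python. The sketch below integrates a single planetesimal around the Sun with a simplified drag term pulling its velocity toward the local circular gas flow; it is not the authors' Fortran integrator, and the drag coefficient, initial orbit and integration scheme are illustrative assumptions.

```python
import numpy as np

# Toy 2-D integration (not the authors' Fortran code) of one planetesimal orbiting
# the Sun with a simplified gas-drag term that pulls its velocity toward the local
# circular gas flow. Units: AU, years, solar masses, so GM_sun = 4*pi^2.
GM = 4.0 * np.pi ** 2
K_DRAG = 0.05                  # assumed drag strength in 1/yr, purely illustrative

def gas_velocity(r_vec):
    """Circular Keplerian velocity of the gas at this radius (simplification)."""
    r = np.linalg.norm(r_vec)
    t_hat = np.array([-r_vec[1], r_vec[0]]) / r      # prograde tangential direction
    return np.sqrt(GM / r) * t_hat

def acceleration(r_vec, v_vec):
    r = np.linalg.norm(r_vec)
    a_grav = -GM * r_vec / r ** 3
    a_drag = -K_DRAG * (v_vec - gas_velocity(r_vec))  # the "headwind" described above
    return a_grav + a_drag

dt, n_steps = 1e-3, 100_000                      # 100 years of evolution
r = np.array([3.0, 0.0])                         # start at 3 AU
v = np.array([0.0, 0.7 * np.sqrt(GM / 3.0)])     # sub-circular speed -> eccentric orbit

for _ in range(n_steps):                         # simple kick-drift-kick stepping
    v = v + 0.5 * dt * acceleration(r, v)
    r = r + dt * v
    v = v + 0.5 * dt * acceleration(r, v)

# Drag removes the eccentric part of the motion, so the orbit ends up nearly circular.
print("final distance from the Sun (AU):", round(float(np.linalg.norm(r)), 2))
```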

A key parameter in this type of simulation is the total mass of the solar nebula at the start of the process. To arrive at this number, Izidoro and Raymond used a model proposed in the early 1970s that was based on the estimated masses of all the objects currently observed in the Solar System.

To compensate for losses due to matter ejection during the formation of the system, the model corrects the current masses of the different objects such that the proportions of heavy elements (oxygen, carbon, etc.) and light elements (hydrogen, helium, etc.) are equal to those of the Sun. The rationale for this is the hypothesis that the compositions of the gas disk and the Sun were the same. Following these alterations, the estimated mass of the primitive cloud is obtained.

The researchers created a simulation from these parameters, available via the link. A graph is shown in the video; the horizontal axis shows the distance to the Sun in AU. The objects' orbital eccentricities are plotted along the vertical axis. As the animation progresses, it illustrates how the system evolved during the formative stage. The two black dots, located at just under 5.5 AU and a little past 7.0 AU, correspond to Jupiter and Saturn respectively. During the animation, these bodies grow as they accrete gas from the protoplanetary cloud, and their growth destabilizes planetesimals, scattering them in various directions. The different colors assigned to the planetesimals serve merely to show where they were to begin with and how they were scattered. The gray area marks the current position of the Asteroid Belt. Time passes in thousands of years, as shown at the top of the chart.

A second animation adds a key ingredient: the migrations of Jupiter and Saturn to positions nearer the Sun during their growth.

All calculations of the gravitational interactions among the bodies were based on Newton's law. Numerical integrators enabled the researchers to calculate the positions of each body at different times, which would be impossible to do for some 10,000 bodies without a supercomputer.

The ZTF took this “first light” image on Nov. 1, 2017, after being installed at the 48-inch Samuel Oschin Telescope at Palomar Observatory. The Horsehead nebula is near center and the Orion nebula is at lower right. The full-resolution version is more than 24,000 pixels by 24,000 pixels. Each ZTF image covers a sky area equal to 247 full moons. CREDIT Caltech Optical Observatories

The first astronomers had a limited toolkit: their eyes. They could only observe those stars, planets and celestial events bright enough to pick up unassisted. But today's astronomers use increasingly sensitive and sophisticated instruments to view and track a bevy of cosmic wonders, including objects and events that were too dim or distant for their sky-gazing forebears.

On Nov. 14, scientists with the California Institute of Technology, the University of Washington and eight additional partner institutions announced that the Zwicky Transient Facility, the latest sensitive tool for astrophysical observations in the Northern Hemisphere, has seen "first light" and taken its first detailed image of the night sky.

When fully operational in 2018, the ZTF will scan almost the entire northern sky every night. Based at the Palomar Observatory in southern California and operated by Caltech, the ZTF's goal is to use these nightly images to identify "transient" objects that vary between observations -- identifying events ranging from supernovae millions of light years away to near-Earth asteroids.

In 2016, the UW Department of Astronomy formally joined the ZTF team and will help develop new methods to identify the most "interesting" of the millions of changes in the sky -- including new objects -- that the ZTF will detect each night, and to alert scientists to them. That way, these high-priority transient objects can be followed up in detail by larger telescopes, including the UW's share of the Apache Point Observatory 3.5-meter telescope.

"UW is a world leader in survey astronomy, and joining the ZTF will deepen our ability to perform cutting-edge science on the ZTF's massive, real-time data stream," said Eric Bellm, a UW assistant professor of astronomy and the ZTF's survey scientist. "One of the strengths of the ZTF is its global collaboration, consisting of experts in the field of time-domain astronomy from institutions around the world."

Identifying, cataloguing and classifying these celestial objects will impact studies of stars, our solar system and the evolution of our universe. The ZTF could also help detect electromagnetic counterparts to gravitational wave sources discovered by Advanced LIGO and Virgo, as other observatories did in August when these detectors picked up gravitational waves from the merger of two neutron stars.

But to unlock this promise, the ZTF requires massive data collection and real-time analysis -- and UW astronomers have a history of meeting such "big data" challenges.

The roots of big data astronomy at the UW stretch back to the Sloan Digital Sky Survey, which used a telescope at the Apache Point Observatory in New Mexico to gather precise data on the "redshift" -- or increasing wavelength -- of galaxies as they move away from each other in the expanding universe. Once properly analyzed, the data helped astronomers create a more accurate 3-D "map" of the observable universe. The UW's survey astronomy group is gathered within the Data Intensive Research in Astrophysics and Cosmology (DIRAC) Institute, which includes scientists in the Department of Astronomy as well as the eScience Institute and the Paul G. Allen School of Computer Science & Engineering.

"It was natural for the UW astronomy department to join the ZTF team, because we have assembled a dedicated team and expertise for 'big data' astronomy, and we have much to learn from ZTF's partnerships and potential discoveries," said UW associate professor of astronomy Mario Juric.

From Earth, the sky is essentially a giant sphere surrounding our planet. That whole sphere has an area of more than 40,000 square degrees. The ZTF utilizes a new high-resolution camera mounted on the Palomar Observatory's existing Samuel Oschin 48-inch Schmidt Telescope. Together these instruments make up the duet that saw first light recently, and after months of fine-tuning they will be able to capture images of 3,750 square degrees each hour.

These images will be an order of magnitude more numerous than those produced by the ZTF's predecessor survey at Palomar. But since these transient objects might fade or change position in the sky, analysis tools must run in near real time as images come in.

"We'll be looking for anything subtle that changes over time," said Bellm. "And given how much of the sky ZTF will image each night, that could be tens of thousands of objects of potential interest identified every few days."

From a data analysis standpoint, these are no easy tasks. But, they're precisely the sorts of tasks that UW astronomers have been working on in preparation for the Large Synoptic Survey Telescope, which is expected to see first light in the next decade. The LSST, located in northern Chile, is another big data project in astrophysics, and is expected to capture images of almost the entire night sky every few days.

"Data from the ZTF surveys will impact nearly all fields of astrophysics, as well as prepare us for the LSST down the line," said Juric.

Carl Kingsford

Method should help scientists understand regulation of gene expression

Computational biologists at Carnegie Mellon University have developed a more accurate supercomputational method for reconstructing the full-length nucleotide sequences of the RNA products in cells, called transcripts, that transform information from a gene into proteins or other gene products.

Their software, called Scallop, will help scientists build a more complete library of RNA transcripts and thus better understand the regulation of gene expression.

A report on Scallop by Carl Kingsford, associate professor of computational biology, and Mingfu Shao, Lane Fellow in the School of Computer Science's Computational Biology Department, was published online yesterday by the journal Nature Biotechnology.

Scallop is a so-called transcript assembler, taking fragments of RNA sequences, called reads, that are produced by high-throughput RNA sequencing technologies (RNA-seq), and putting them back together, like pieces of a puzzle, to reconstruct complete RNA transcripts.

"There are many existing assemblers," Shao said, "but these existing methods are still not accurate enough."

When compared to two leading assemblers, StringTie and TransComb, Scallop is 34.5 percent and 36.3 percent more accurate, respectively, for transcripts consisting of multiple exons -- subunits of a gene that encode part of the gene product.

Like other reference-based assemblers, Scallop begins by constructing a graph to organize reads that are mapped to the corresponding locations on the gene's DNA. Many alternative paths exist for connecting the reads together, however, so errors are easily made. Scallop improves its odds by using a novel algorithm to take full advantage of the information from reads that span several exons to guide it to the correct assembly paths.
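
The intuition behind using exon-spanning reads can be shown with a toy splice graph in Python. The sketch below is not the Scallop algorithm; the exon names, the graph and the "phasing" read are invented, and it simply filters candidate paths by compatibility with a long read.

```python
# Toy illustration (not the Scallop algorithm itself) of why reads spanning several
# exons help choose among alternative paths through a splice graph. The exon names,
# graph and "phasing" read below are invented for illustration.

graph = {"E1": ["E2", "E3"], "E2": ["E4"], "E3": ["E4"], "E4": []}

def all_paths(node, goal, path=()):
    """Enumerate every simple path from node to goal in the splice graph."""
    path = path + (node,)
    if node == goal:
        yield path
        return
    for nxt in graph[node]:
        yield from all_paths(nxt, goal, path)

def supports(path, read):
    """True if the read's exon chain appears contiguously inside the path."""
    k = len(read)
    return any(path[i:i + k] == read for i in range(len(path) - k + 1))

phasing_read = ("E1", "E2", "E4")   # a single read observed to span three exons

candidates = list(all_paths("E1", "E4"))
kept = [p for p in candidates if supports(p, phasing_read)]

print("candidate paths:", candidates)   # two alternatives, through E2 or E3
print("read-supported:", kept)          # only the E1-E2-E4 path survives
```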

Scallop proves particularly adept when assembling less abundant RNA transcripts, improving upon the accuracy of StringTie and TransComb by 67.5 percent and 52.3 percent.

The researchers have already released Scallop as open-source software on GitHub.

"We've had more than 100 downloads already and, based on the feedback we've received, people are really using it," Shao said. "We expect more users now that our paper is out."

CAPTION This is the prototype of an artificial neural network based on a hybrid analog-digital electronic circuit and a memristive chip. CREDIT Elena Emel'janova

Living cell culture learning process to be implemented for the first time 

Lobachevsky University scientists under the supervision of Alexey Mikhailov, Head of the UNN PTRI Laboratory of Thin Film Physics and Technology, are working to develop an adaptive neural interface that combines, on the one hand, a living culture, and on the other, a neural network based on memristors. This project is one of the first attempts to combine a living biological culture with a bio-like neural network based on memristors. The memristive neural network will be linked to a multi-electrode system for recording and stimulating the bioelectrical activity of a neuron culture, and will analyze and classify the network dynamics of the living cells.

Compared with some international competitors that have set themselves the task of "connecting the living world and artificial architectures" (for example, the RAMP project), the advantage of the UNN project is that highly skilled experts in various fields (including the physics and technology of memristive nanostructures, neural network modeling, electronic circuit design, neurodynamics and neurobiology) are concentrated, both physically and organizationally, within the same university.

According to Alexey Mikhailov, UNN scientists are now working to create a neural network prototype based on memristors, which is similar to a biological nervous system with regard to its internal structure and functionality.

"Due to the locality of the memristive effect (such phenomena occur at the nanoscale) and the use of modern standard microelectronic technologies, it will be possible to obtain a large number of neurons and synapses on a single chip. These are our long-time prospects for the future. It means, in fact, that one can "grow" the human brain on a chip. At present, we are doing something on a simpler scale: we are trying to create hybrid electronic circuits where some functions are implemented on the basis of traditional electronics (transistors), and some new functions that are difficult to implement in hardware are realized on the basis of memristors", said Alexey Mikhailov.

Currently, researchers are exploring the possibility of constructing a feedback loop whereby the output signal from the memristor network will be used to stimulate the biological network. In effect, this means that a learning process will be implemented for a living cell culture for the first time. The living culture used by the scientists is an artificially grown neuronal culture of brain cells. In principle, however, one can also use a slice of living tissue.

The aim of the project is to create compact electronic devices based on memristors that reproduce the property of synaptic plasticity and function as part of bio-like neural networks in conjunction with living biological cultures.

The use of hybrid neural networks based on memristors opens up amazing prospects. First, with the help of memristors it will be possible to implement the computing power of modern supercomputers on a single chip. Second, it will be possible to create robots that manage an artificially grown neuronal culture. Third, such "brain-like" electronic systems can be used to replace parts of the living nervous system in the event of damage or disease.

The project's tasks of creating electronic models of artificial neural networks (ANNs) and integrating memristive architectures into systems for recording and processing the activity of living biological neural networks are fully in line with current world trends and priorities in the development of neuromorphic systems.

The balance in the combination of different approaches is the key to the successful development and sustainability of the project. The first (and main) approach is to demonstrate the potential of a "traditional" ANN in the form of a two-layer perceptron based on programmable memristive elements. The artificial neural network being developed has two key advantages. First, its multilayer structure gives it the ability to solve nonlinear classification problems (based on the shape of the input signal), which is very important when dealing with complex bioelectric activity. Second, all artificial network elements are implemented in hardware on one board, including the memristive synaptic chip, the control electronics and the neuron circuits. In the future, this arrangement will allow the researchers to implement the adaptive neural interface "living neural network - memristive ANN" as a compact autonomous device.
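
For readers unfamiliar with the term, the sketch below shows a purely software two-layer perceptron of the kind described, trained on the classic XOR problem as a stand-in for nonlinear classification. In the UNN prototype the weights would instead be stored as memristor conductances; the architecture size, learning rate and task here are illustrative assumptions.

```python
import numpy as np

# Software sketch of a two-layer perceptron of the kind described above; here the
# weights are ordinary floating-point numbers, whereas in the UNN prototype they
# would be stored as programmable memristor conductances. XOR stands in for a
# simple nonlinear classification task (an illustrative assumption).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR is not linearly separable

hidden = 8
W1 = rng.normal(size=(2, hidden)); b1 = np.zeros((1, hidden))
W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                # hidden layer
    out = sigmoid(h @ W2 + b2)              # output layer

    d_out = (out - y) * out * (1 - out)     # backpropagate the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]] for most seeds
```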

The second approach that the researchers are pursuing in parallel is to find alternative solutions for creating non-traditional neural network architectures in which the stochastic nature and the "live" dynamics of memristive devices play a key role. These features of memristors make it possible to use them for direct processing and analysis of nerve cell activity, as well as for developing plausible physical models of spiking neural networks with self-organization of memristive connections between neurons. These results make an important contribution to the achievement of the project goal and lay the groundwork for the transition to a qualitatively new level in the field of bio-like memristive systems.

 DSI Professors Tian Zheng and Shaw-Hwa Lo with DOT officials.

A team of statisticians from the Data Science Institute (DSI) received a National Science Foundation grant ($900,000) to develop a statistical method that will help researchers who work with big data make better predictions.

The team's method establishes statistical foundations for measuring "predictivity," the ability of a researcher to make predictions based on big data. The novel approach allows researchers to compare their predictions to a theoretical baseline, which will give their predictions greater accuracy. The method will also help statisticians and policy experts contend with complex social problems, for which big data sets are often difficult to assess.

The DSI team, led by DSI professors Shaw-Hwa Lo and Tian Zheng, is collaborating with the New York City Department of Transportation (DOT) on Vision Zero, an initiative to end traffic deaths in the city. DOT collects big data from collisions to analyze the multiple factors that relate to traffic crashes. The potential interactions between the variables and datasets are extremely complex, which led to DOT's interest in working with the DSI team and using its statistical approach.

Lo, a professor of statistics and an affiliate of DSI, said, "We are developing a statistical way to evaluate the performance of prediction methods that will be of immense help to DOT. Our method will help DOT identify key combinations of factors and intervention measures to predict where and when crashes are likely to occur."

Statistics can be difficult for the general reader to understand, but in broad terms the new method can identify the variables with the highest "predictivity" in large data sets, explained Lo. Current statistical models consider a large number of X variables for predicting a Y variable, and the goal is to select the likely small number of X variables that are most helpful for predicting Y. That goal is difficult to reach if the X variables interact in complicated ways. The new method, however, identifies groups of X variables that, once combined, have a stronger ability to predict. Statisticians thus no longer need to apply techniques such as cross-validation with the Y variable to evaluate the predictive ability of X variables.
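
One simple way to picture the "predictivity" of a group of X variables is a partition-based score like the Python sketch below, which rewards variable combinations whose joint values split the data into cells with very different Y means. It is an illustrative stand-in, not the DSI team's published estimator, and the toy crash-like data are invented.

```python
import numpy as np
import pandas as pd

# Illustrative stand-in (not the DSI team's published estimator) for scoring the
# joint "predictivity" of a group of discrete X variables: partition the rows by
# the group's value combinations and measure how far each cell's mean of Y sits
# from the overall mean, weighted by cell size.
def partition_score(df, x_cols, y_col):
    y_bar = df[y_col].mean()
    score = 0.0
    for _, cell in df.groupby(x_cols):
        score += len(cell) ** 2 * (cell[y_col].mean() - y_bar) ** 2
    return score / len(df) ** 2

# Toy data: Y depends on the *interaction* of x1 and x2, and not on x3 at all.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "x1": rng.integers(0, 2, n),
    "x2": rng.integers(0, 2, n),
    "x3": rng.integers(0, 2, n),
})
df["y"] = (df["x1"] ^ df["x2"]) + rng.normal(scale=0.5, size=n)

print(round(partition_score(df, ["x1", "x2"], "y"), 4))   # clearly positive: the pair predicts Y
print(round(partition_score(df, ["x3"], "y"), 4))         # near zero: x3 alone does not
```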

The DSI team will use its new method to help DOT identify risk factors for dangerous roads. It is often difficult to identify the potential risk factors and interactions that lead to the specific crash characteristics of high-crash roadways. The new statistical method, however, will allow DOT to account for all traffic variables, leading to better traffic assessments and enhanced public safety.

"We are excited to collaborate with Professors Lo and Zheng and the Data Science Institute to explore new, innovative research in statistical learning through the analysis of large and diverse transportation and safety datasets," said Seth Hostetter, Director, Safety Analytics and Mapping for DOT. "This is an excellent opportunity to explore the complex interactions between the various risk factors associated with traffic safety that may provide insights that will help us accelerate our progress in achieving the traffic safety goals of Vision Zero."

Zheng, a professor of statistics at Columbia and associate director of education at DSI, said the statistics team is happy to support the work of DOT.

"We are thrilled to be collaborating with DOT on this important project," said Zheng. "Vision Zero aims to end traffic fatalities and we are delighted that DOT is using our new statistical method to further that noble goal."

CAPTION Untangling quadrilateral meshes using locally injective mappings. CREDIT Krishnan Suresh

The supercomputer simulations used to design, optimize, test or control a vast range of objects and products in our daily lives are underpinned by finite element methods.

Finite element simulations use a mesh of geometric shapes -- triangles, tetrahedra, quadrilaterals or hexahedra, for instance. These shapes can be combined to form a mesh that approximates the geometry of a model. For example, meshes can be used to model the human knee in biomechanics simulations, create computer-animated movies or help developers bring products, like airplanes and cars, from concept to production more quickly via better prototypes, testing and development.

"When you were a kid you played with LEGOs and thought about building different projects -- like a house," said Suzanne Shontz, associate professor of electrical engineering & computer science at the University of Kansas. "You were basically stacking blocks and building an object. Meshes are a lot like that -- but they're more flexible than cubes. We're building with things like tetrahedra and hexahedra that you can combine to make different kinds of larger shapes. If you're doing an airplane simulation, you'll know the geometry of the airplane, and that determines with which shapes to build it."

But a problem arises with finite element meshes, especially when they're put into motion during a simulation: The shapes can tangle and overlap.

"The most common context for tangled meshes is a simulation involving motion," Shontz said. "Suppose you have a two-dimensional mesh made of triangles. Now focus on one triangle and its three vertices. If one vertex is moved too far to the left with respect to the vertex to its left, this causes the orientation of the triangle to be flipped and triangles to overlap. A tangled mesh is one that contains elements with a mixture of orientations."

The KU researcher said the use of a tangled mesh in a finite element simulation can lead to inaccurate results -- with potentially disastrous consequences in biomechanical design, product development or large-deformation analysis.

"If you try to run such a simulation, you'll get a physically invalid solution," Shontz said. "That will cause a host of problems. Engineers need accurate solutions in making design decisions. With an airplane, the pilot will make decisions about how to fly the plane in turbulent weather; it's crucial that these decisions are based on correct simulation results regarding the weather and the plane's response. When making important medical decisions, a doctor needs to be able to trust that the simulation results for the disease progression or treatment are correct."

For years, researchers have pursued a solution to the tangled mesh problem, proposing solutions like re-meshing, meshfree methods and the finite cell method. But no definitive answer has yet been developed.

With a new $250,000 award from the National Science Foundation, Shontz and her KU colleagues are working with a team at the University of Wisconsin-Madison headed by Krishnan Suresh, a professor of mechanical engineering, to explore new methods for addressing the tangled mesh problem. Suresh's team received a similar $250,000 award from the National Science Foundation for their research.

Shontz already has developed several promising untangling algorithms, but she said it has proven difficult to "completely" untangle a mesh. Working with Suresh, she said she hopes the collaboration might yield a breakthrough.

Under the new grant, Shontz's group will create new constrained optimization methods for mesh untangling to convert "severely tangled meshes into mildly tangled meshes." In the meantime, Suresh's group will hone the finite-cell method to ensure accurate finite-element solutions over these mildly tangled meshes.

"Our part at KU is to develop a method to untangle meshes so they can be used with standard finite element methods," Shontz said. "At the University of Wisconsin-Madison, they're coming up with a finite-element solver that can work on tangled meshes. We're also looking at a hybrid solution that uses some of their research and some of ours."

Among many biomechanics applications, the researchers hope their work could lead to improved untangling of finite element meshes used to model the brains of patients with hydrocephalus. In these patients, large ventricular displacements of the brain can be modeled with finite-element simulations -- but the models often result in tangled meshes.

"With hydrocephalus, the brain has excess fluid buildup from cerebrospinal fluid," Shontz said. "The brain changes shape due to the excess pressure that usually results within the skull. The idea is to be able to run simulations which will help doctors predict which surgery to perform. However, due to the nonlinear deformations of the brain ventricles, the meshes will often become tangled."

As part of the work under the award, the investigators at KU and UW-Madison will exchange teaching modules in the form of prerecorded lectures to be used in graduate-level classes. Shontz will deliver lectures on mesh generation, smoothing, tangling and untangling to students at UW-Madison, while Suresh will provide lectures to KU students on geometric modeling and computational mechanics.

The researchers will also develop a Design, Analyze and Print Workshop to be offered to middle and high school students on the campuses of the two universities.

Graduate students also will receive support and training via the NSF grant.

"There's funding for graduate-student salaries, faculty summer salary support, and conference travel," Shontz said. "We'll also do student exchanges. I'll send a KU graduate student to Wisconsin for a few weeks, and they'll send a student here, too. They'll get exposed to new ideas, so it's a great opportunity for students at both institutions."

  1. University of Bristol launches £43 million Quantum Technologies Innovation Centre
  2. Utah researchers develop milestone for ultra-fast communication
  3. Pitt supercomputing helps doctors detect acute kidney injury earlier to save lives
  4. American University prof builds models to help solve few-body problems in physics
  5. UW prof helps supercompute activity around quasars, black holes
  6. Nottingham's early warning health, welfare system could save UK cattle farmers millions of pounds, reduce antibiotic use
  7. Osaka University researchers roll the dice on perovskite interfaces
  8. UM biochemist Prabhakar produces discovery that lights path for Alzheimer's research
  9. Tafti lab creates an elusive material to produce a quantum spin liquid
  10. Purdue develops intrachip micro-cooling system for supercomputers
  11. Northeastern University, China's Xu develops machine learning system to identify shapes of red blood cells
  12. SDSU prof Vaidya produces models for HIV drug pharmacodynamics
  13. Los Alamos supercomputers help interpret the latest LIGO findings
  14. Emerson acquires Paradigm
  15. Chinese scientists discover more than 600 new periodic orbits of the famous three-body problem
  16. KU Leuven computational biologists develop supercomputer program that detects differences between human cells
  17. Seeing the next dimension of computer chips
  18. NOAA scientists produce new insights into how global warming is drying up the North American monsoon
  19. Paradigm launches cloud-based production management solution
  20. SEAS researchers add zero-index waveguide to photonics toolbox
