
A team of U.S. Army researchers recently joined an international group of scientists in Chernihiv, Ukraine to initiate a first-of-its-kind global science and technology research program to understand and ultimately combat disinformation attacks in cyberspace. 

Scientists from the Bulgarian Defense Institute in Sofia, Bulgaria; the Chernihiv National University of Technology in Chernihiv, Ukraine; and the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic" in Kyiv, Ukraine joined ARL researchers Nov. 14-15, at the kickoff meeting of the Cyber Rapid Analysis for Defense Awareness of Real-time Situation (CyRADARS) project. The participation of the Bulgarian and Ukrainian institutions is funded by the NATO Science for Peace and Security Programme, which promotes dialogue and practical cooperation between NATO member states and non-NATO partner nations -- in this case Ukraine -- based on scientific research, technological innovation and knowledge exchange.

Over the next three years, the group will develop theoretical foundations, methods and approaches for software tools for situational awareness that will enable a nation's defense forces to monitor cyberspace, detect malicious information injections and give timely notification of an information attack, said Dr. Alexander Kott, ARL chief scientist, who attended the meeting together with ARL's Dr. Brian Rivera, chief of the Network Science Division. The tools will also help create the conditions necessary for decisions about preventing, or responding in time to, adversarial disinformation injections or manipulations. Especially important in meeting these objectives will be real-world experience with actual disinformation attacks directed against Ukraine.

"Information attacks have emerged as a major concern of societies worldwide. They come under different names and in different flavors -- fake news, disinformation, political astroturfing, influence operations, etc. And they may arrive as a component of hybrid warfare -- in combination with traditional cyber-attacks (use of malware), and with conventional military action or covert physical attacks. A particularly poignant example of a victim of such attacks has been Ukraine," Kott said.

He said the ARL scientists bring a number of critical scientific elements to this project. These include published research results -- theories and algorithms -- that explain and predict the propagation of opinions and trust within a network, find untrustworthy sources within cyberspace, and detect false news. Much of this work was developed in the context of ARL's extensive Network Science research in alliance with multiple academic institutions, and it will help jump-start CyRADARS.

"ARL also operates a unique Open Campus business model. It enables scientists from both USA and other countries to conduct collaborative research at ARL. Within the context of CyRADARS, students and faculty from Ukraine and Bulgaria will be able to come to ARL and use ARL's Open Campus facilities and test beds while working on joint projects with ARL scientists," Kott said.

The research efforts will take place at all four institutions in a virtual, distributed networked laboratory that the project will create.

Each rectangular structure represents a heart cell in the supercomputer model. The color bursts depict propagating waves of calcium. Each cell is identical but exhibits a distinct pattern of calcium waves due to random ion channel gating. The team investigated how this randomness gives rise to sudden, unpredictable heart arrhythmia. (Credit: PLOS Computational Biology/Mark A. Walker)

Some heart disease patients face a higher risk of sudden cardiac death, which can happen when an arrhythmia -- an irregular heartbeat -- disrupts the normal electrical activity in the heart and causes the organ to stop pumping. Arrhythmias linked to sudden cardiac death are very rare, however, making it difficult to study how they occur and how they might be prevented.

To make it much easier to discover what triggers this deadly disorder, a team led by Johns Hopkins researchers constructed a powerful new supercomputer model that replicates the biological activity within the heart that precedes sudden cardiac death.

In a study published recently in PLOS Computational Biology, the team reported that the digital model has yielded important clues that could provide treatment targets for drug makers.

"For the first time, we have come up with a method for relating how distressed molecular mechanisms in heart disease determine the probability of arrhythmias in cardiac tissue," said Raimond L. Winslow, the Raj and Neera Singh Professor of Biomedical Engineering at Johns Hopkins and senior author of the study.

"The importance of this," said Winslow, who also is director of the university's Institute for Computational Medicine, "is that we can now quantify precisely how these dysfunctions drive and influence the likelihood of arrhythmias. That is to say, we now have a way of identifying the most important controlling molecular factors associated with these arrhythmias."

With this knowledge, Winslow said, researchers will be better able to develop treatments to keep some deadly heart rhythms from forming. "It could lead to better drugs that target the right mechanisms," Winslow said. "If you can find a molecule that blocks this particular action, then doing so will significantly reduce the probability of an arrhythmia, whereas other manipulations will have comparatively negligible effects."

The lead author of the study was Mark A. Walker, who worked on the problem while earning his Johns Hopkins doctoral degree under Winslow's supervision. Walker said he and his colleagues used supercomputer models to determine what activity was linked to arrhythmia at three biological levels: in the heart tissue as a whole, within individual heart cells, and within the molecules that make up the cells, including small proteins called ion channels that control the movement of calcium in the heart.

"Calcium is an important player in the functioning of a heart cell," said Walker, who now works as a computational biologist at the Broad Institute, a research center affiliated with Harvard University and the Massachusetts Institute of Technology. "There are a lot of interesting questions about how the handling of calcium in heart cells can sort of go haywire."

Walker and his colleagues chose to focus on one intriguing question about this process: when heart cells possess too much calcium, which can happen in heart disease patients, how does this overload of calcium trigger an arrhythmia?

The team discovered that heart cells respond by expelling excess calcium, and in doing so, they generate an electrical signal. If, by chance, a large enough number of these signals is generated at the same time, it can trigger an arrhythmia.

"Imagine if you have a bunch of people in a room, and you want to test the strength of the floor they are all standing on," Walker said. "It's not a very strong floor, so if there's enough force on it, it will break. You tell everyone that on the count of three, jump in the air. They all land on the floor, and you try to figure out what's the probability that the floor will break, given that everyone is going to jump at a slightly different time, people will weigh different amounts, and they might jump to different heights. These are all random variables."

Similarly, Walker said, random variables also exist in trying to determine the probability that enough calcium-related electrical signals will simultaneously discharge in the heart to set off a lethal arrhythmia. Because the circumstances that cause sudden cardiac death are so rare, it makes them very tough to predict.

"You're trying to figure out what the probability is," Walker said. "The difficulty of doing that is if the probability is one in a million, then you have to do tens of millions of trials to estimate that probability. One of the advances that we made in this work was that we were able to figure out how to do this with really just a handful of trials."
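The paper does not spell out its estimation trick here, but both the brute-force problem Walker describes and the kind of variance-reduction idea that sidesteps it can be illustrated with a standard technique, importance sampling (a stand-in assumption, not necessarily the authors' method). The sketch below estimates a roughly 3-in-100,000 tail probability with ten thousand trials instead of tens of millions, by sampling from a shifted distribution where the rare event is common and reweighting each hit:

```python
import math
import random

random.seed(1)
a = 4.0  # rare-event threshold; for a standard normal X,
         # the true tail probability P(X > 4) is about 3.2e-5

def brute_force(n):
    # Naive Monte Carlo: with p ~ 3e-5 you need millions of trials
    # before the estimate is even nonzero, let alone accurate.
    hits = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > a)
    return hits / n

def importance_sampling(n):
    # Sample from N(a, 1), where exceeding the threshold is common,
    # and reweight each hit by the likelihood ratio phi(x) / q(x),
    # which for these two normals is exp(-a*x + a**2 / 2).
    total = 0.0
    for _ in range(n):
        x = random.gauss(a, 1.0)
        if x > a:
            total += math.exp(-a * x + a * a / 2.0)
    return total / n

p = importance_sampling(10_000)  # close to 3.2e-5 with only 1e4 trials
```

The same idea carries over to the cardiac setting: bias the simulation toward the simultaneous calcium-signal discharges that cause the arrhythmia, then correct for the bias in the probability estimate.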

Walker and Winslow both cautioned that, at present, the new supercomputer model cannot predict which heart patients face a higher risk of sudden cardiac death. But they said the model should speed up the pace of heart research and the development of related medicines or treatments such as gene therapy. They said the model will be shared as free open-source software. 

Stefano Baroni, professor of theoretical condensed-matter physics

From the evolution of planets to electronics: Thermal conductivity plays a fundamental role in many processes

"Our goal? To radically innovate numerical simulations in the field of thermal transport to take on the great science and technology issues in which this phenomenon is so central. This new study, which has designed a new method with which to analyze heat transfer data more efficiently and accurately, is an important step in this direction".

This is how Stefano Baroni describes this new research performed at Trieste's SISSA by a group led by him, which has just been published in the Scientific Reports journal.

The research team focused on studying thermal transfer, the physics mechanism by which heat tends to flow from a warmer to a cooler body. Familiar to everyone, this process is involved in a number of fascinating scientific issues such as, for example, the evolution of the planets, which depends crucially on the cooling process within them. But it is also crucial to the development of various technological applications: from thermal insulation in civil engineering to cooling in electronic devices, from maintaining optimal operating temperatures in batteries to nuclear plant safety and storage of nuclear waste.

"Studying thermal transfer in the laboratory is complicated, expensive and sometimes impossible, as in the case of planetology. Numerical simulation, on the other hand, enables us to understand the hows and whys of such phenomena, allowing us to calculate precisely physical quantities which are frequently not accessible in the lab, thereby revealing their deepest mechanisms", explains Baroni. The problem is that until a short time ago it was not possible to do supercomputing in this field with the same sophisticated quantum methodologies used so successfully for many other properties: "The equations needed to compute heat currents from the molecular properties of materials were not known. Our research group overcame this obstacle a few years ago formulating a new microscopic theory of heat transfer."

But a further issue needed resolving. The simulation times required to describe the heat transfer process are hundreds of times longer than those currently used to simulate other properties. And this understandably posed a number of problems.

"In this new research, by bringing together concepts demonstrated by previous theories - especially the Green-Kubo theory - with our knowledge of the quantum simulation field, we understood how to analyse the data so that heat conductivity can be simulated at a sustainable cost in supercomputer resources. And this opens up extremely important research possibilities and potential applications for these studies."
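The Green-Kubo relation mentioned in the quote obtains the thermal conductivity from the time integral of the equilibrium heat-flux autocorrelation function, kappa proportional to the integral of <J(0)J(t)>. A minimal sketch of that analysis step follows, using a synthetic flux series with a known exact answer in place of real simulation output (an assumption for illustration; the SISSA group's actual data-analysis method is more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a heat-flux time series J(t) from a molecular dynamics
# run: an Ornstein-Uhlenbeck process with autocorrelation time tau
# and unit variance, so the exact integral is var * tau = 10.
n, dt, tau = 200_000, 1.0, 10.0
alpha = np.exp(-dt / tau)
noise = rng.standard_normal(n) * np.sqrt(1.0 - alpha**2)
J = np.empty(n)
J[0] = 0.0
for i in range(1, n):
    J[i] = alpha * J[i - 1] + noise[i]

def green_kubo(J, dt, max_lag):
    """Integrate the flux autocorrelation <J(0) J(t)> over time.
    The thermal conductivity is this integral times V / (kB T**2)."""
    m = len(J)
    J = J - J.mean()
    acf = np.array([np.dot(J[:m - k], J[k:]) / (m - k)
                    for k in range(max_lag)])
    # trapezoidal rule, written out to avoid version-specific helpers
    return dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))

kappa = green_kubo(J, dt, max_lag=100)  # close to 10 for this process
```

The long-simulation problem the article describes shows up here as statistical noise in the tail of the autocorrelation function, which is why the truncation lag and the trajectory length have to be chosen carefully.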

One curiosity, which Baroni reveals: "The technique we have formulated is adapted from methodologies used in completely different sectors, such as electronic engineering (to study the digitization of sound) and quantitative social sciences and economics (to study the dynamics of complex processes such as financial markets). It is interesting to see how unexpected points of contact and cross-fertilization can sometimes arise amongst such different fields."

Amplitude of the displacement field after a train passes on the track. The left-hand figure corresponds to a simulation with homogeneous ballast and the right-hand image to a simulation with heterogeneous ballast. © Lucio de Abreu Corrêa, Laboratoire de Mécanique des Sols, Structures et Matériaux (CNRS/CentraleSupélec).

SNCF engineers have been using mathematical models for many years to simulate the dynamic behavior of railways. These models, however, have been unable to take into account large portions of the track, and have been extremely limited at modelling ballast, the gravel layer located under railway tracks. This is why SNCF Innovation & Recherche asked for help from specialists in wave propagation in all types of media and at varied scales: CNRS and INSA Strasbourg researchers. Together, they have shown that a large part of the energy introduced by a passing train is trapped by the ballast. Their work, published in the November issue of Computational Mechanics, shows that this trapping phenomenon, which is very dependent on train speed, could cause accelerated ballast degradation in railway tracks.

SNCF engineers currently have two ways to take ballast into account when trying to understand how railway tracks behave as a train passes. One is detailed modeling of the interactions between each "grain"; the other is a simpler model in which the ballast is represented as a homogeneous, continuous whole. Though taking the interactions between grains into account makes it possible to demonstrate wear mechanisms locally, it becomes too complex to apply to the entire track or to the passage of an entire train. By contrast, the simple models can be used for large portions of track but cannot really tell us what happens in the gravel layer. In addition, measured vibrations near the tracks were much lower than what calculations predicted. In this context, the question is how to model an entire passing train, over several meters or even kilometers, while retaining the specifics of the ballast's mechanical behavior. Something was missing from the models' ability to describe the influence of a passing train on the immediate surroundings of the railway.

The researchers have proposed a new mechanism that helps explain why vibrations are lower than predicted as the distance from the track increases. They stopped considering the ballast as a homogeneous medium and started considering it as a heterogeneous medium. This time, the mathematical model and physical measurements agree: they have shown that a large part of the energy introduced by a train passing is trapped in the heterogeneous ballast layer. This trapping phenomenon, very dependent on train speed, could cause degradation in the ballast layer, as the energy provided by the train passing dissipates by the grains rubbing together.

This work therefore opens paths to a better understanding of how railway tracks behave as a train passes. By showing where in the tracks the ballast traps the most energy, these results open new perspectives on increasing the lifetime of railway tracks and reducing maintenance costs.

(Photo: Rodion Kutsaev, @frostroomhead)

Neutron stars are made out of cold ultra-dense matter. How this matter behaves is one of the biggest mysteries in modern nuclear physics. Researchers developed a new method for measuring the radius of neutron stars which helps them to understand what happens to the matter inside the star under extreme pressure.

A new method for measuring neutron star size was developed in a study led by a high-energy astrophysics research group at the University of Turku. The method relies on modeling how thermonuclear explosions taking place in the uppermost layers of the star emit X-rays to us. By comparing the observed X-ray radiation from neutron stars to the state-of-the-art theoretical radiation models, researchers were able to put constraints on the size of the emitting source. This new analysis suggests that the neutron star radius should be about 12.4 kilometres.

– Previous measurements have shown that the radius of a neutron star is circa 10–16 kilometres. We constrained it to be around 12 kilometres, with about 400 metres' accuracy, or maybe 1,000 metres if one wants to be really sure. The new measurement is therefore a clear improvement over previous ones, says Doctoral Candidate Joonas Nättilä, who developed the method.

The new measurements help researchers to study the nuclear-physics conditions that exist inside extremely dense neutron stars. Researchers are particularly interested in determining the equation of state of neutron matter, which describes how compressible the matter is at extremely high densities.

– The density of neutron star matter is circa 100 million tons per cubic centimetre. At the moment, neutron stars are the only objects appearing in nature, with which these types of extreme states of matter can be studied, says Juri Poutanen, the leader of the research group.

The new results also help to understand the recently discovered gravitational waves that originated from the collision of two neutron stars. That is why the LIGO/VIRGO consortium that discovered these waves was quick to compare their recent observations with the new constraints obtained by the Finnish researchers.

– The specific shape of the gravitational wave signal is highly dependent on the radii and the equation of state of the neutron stars. It is very exciting how these two completely different measurements tell the same story about the composition of neutron stars. The next natural step is to combine these two results. We have already been having active discussions with our colleagues on how to do this, says Nättilä.

New study of the trading interactions that determine the stock price using AI algorithms reveals unexpected microstructure for stock evolution, useful for financial crash modeling

Every day, thousands of orders for selling or buying stocks are registered and processed within milliseconds. Electronic stock exchanges, such as NASDAQ, use what is referred to as microscopic modelling of the order flow - reflecting the dynamics of order bookings - to facilitate trading. The study of such market microstructures is a relatively new research field focusing on the trading interactions that determine the stock price. Now, a German team from the University of Duisburg-Essen has analysed the statistical regularities and irregularities in the recent order flow of 96 different NASDAQ stocks. Since prices are strongly correlated during financial crises, they evolve in a way that is similar to what happens to nerve signals during epileptic seizures. The findings of the Duisburg-Essen group, published in EPJ B, contribute to modeling price evolution, and could ultimately be used to evaluate the impact of financial crises.

The dynamics of stock prices typically shows patterns. For example, large price changes arise in a sequence, which is ten times larger than the average. By studying the microstructure of stock transactions, researchers have previously identified groups of stocks with similar stock order flow. However, there are still many open questions about the co-evolution of different stocks. In fact, our current knowledge of trading interactions is far less developed than our knowledge of the actual prices that are a result of the microscopic dynamics.

In this study, the authors analyse the co-evolution of order flow for pairs of stocks listed in the NASDAQ 100 index. They define an abstract distance between every pair of stocks: the distance is small if the two stocks behave similarly, and large if they behave differently. Using machine learning algorithms, they find four groups of stocks with large mutual differences (large distances). This is surprising, as this rich microscopic diversity is not reflected in the actual prices.
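The distance-then-cluster step described above can be sketched with a correlation-based distance and a minimal k-means pass. The toy below uses 8 synthetic "stocks" drawn from two latent groups; the study's actual distance measure, data, and algorithms are not specified here, so every concrete choice is an assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the order-flow time series of 8 stocks drawn
# from two latent groups (the actual study used 96 NASDAQ stocks).
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
common = rng.standard_normal((2, 500))          # one driver per group
series = common[groups] + 0.5 * rng.standard_normal((8, 500))

# An abstract distance: small when two stocks co-move, large otherwise.
corr = np.corrcoef(series)
dist = np.sqrt(np.maximum(2.0 * (1.0 - corr), 0.0))

def kmeans(X, k, iters=50):
    """Minimal k-means; each stock is represented by its row of the
    distance matrix, so similar stocks get similar feature vectors."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(dist, k=2)  # recovers the two latent groups
```

Clustering the rows of the distance matrix rather than the raw series is a common shortcut when only pairwise distances are defined; methods that work directly on the distance matrix (k-medoids, hierarchical clustering) would serve equally well.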

The latest issue of the prestigious journal EMS Surveys in Mathematical Sciences, published by the European Mathematical Society, contains a 102-page paper entitled "Numerical infinities and infinitesimals: Methodology, applications, and repercussions on two Hilbert problems," written by Yaroslav D. Sergeyev, Professor at Lobachevsky State University in Nizhni Novgorod, Russia, and Distinguished Professor at the University of Calabria, Italy (see his brief bio below). The paper describes a recent computational methodology introduced by the author, paying special attention to the separation of mathematical objects from the numeral systems involved in their representation. It was introduced to allow people to work with infinities and infinitesimals numerically, in a single computational framework, in all situations requiring these notions. The methodology contradicts neither Cantor's views nor those of non-standard analysis, and is based on Euclid's Common Notion no. 5, "The whole is greater than the part," applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The consistency of the approach has been proved by the noted Italian logician Prof. Gabriele Lolli.

This computational methodology uses a new kind of supercomputer called the Infinity Computer (patented in the USA and the EU), which works numerically with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix (traditional theories work with infinities and infinitesimals only symbolically). A working software prototype exists. The appearance of the Infinity Computer drastically changes the entire panorama of numerical computations, enlarging the horizons of what can be computed to include different numerical infinities and infinitesimals. The paper argues that the numeral systems involved in computations limit our ability to compute and also lead to ambiguities in theoretical assertions. The introduced methodology makes it possible to use the same numeral system for measuring infinite sets and for working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. Numerous numerical examples and theoretical illustrations are given.
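The flavor of the positional representation can be conveyed with a toy sketch (the Infinity Computer itself is patented and not reproduced here). Numbers are finite sums c1·①^p1 + ... + ck·①^pk, where ① ("grossone") is the infinite radix; stored as {power: coefficient} dictionaries, they add and multiply like polynomials in ①. Everything below is an illustrative assumption about the representation only:

```python
# Toy grossone arithmetic: a number is a dict {power: coefficient},
# meaning sum of coeff * grossone**power over its entries.

def gadd(x, y):
    """Add two grossone numbers by merging coefficients per power."""
    out = dict(x)
    for p, c in y.items():
        out[p] = out.get(p, 0) + c
    return {p: c for p, c in out.items() if c != 0}

def gmul(x, y):
    """Multiply two grossone numbers like polynomials in the radix."""
    out = {}
    for p1, c1 in x.items():
        for p2, c2 in y.items():
            out[p1 + p2] = out.get(p1 + p2, 0) + c1 * c2
    return {p: c for p, c in out.items() if c != 0}

# (grossone + 3) * (grossone - 3) = grossone**2 - 9, and the
# infinitesimal grossone**-1 times grossone is the finite number 1:
x, y = {1: 1, 0: 3}, {1: 1, 0: -3}
product = gmul(x, y)            # {2: 1, 0: -9}
finite = gmul({-1: 1}, {1: 1})  # {0: 1}
```

Note how infinite, finite, and infinitesimal parts coexist in one numeral and never collapse into an indeterminate form, which is the property the methodology exploits.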

In particular, it is shown that the new approach allows one to observe mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with higher accuracy than is possible with traditional tools. It is stressed that the hardness of both problems is not inherent in their nature but is a consequence of the weakness of the traditional numeral systems used to study them. The introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems, giving unexpected answers to both. The effect of employing the new methodology on these problems is comparable to the dissolution of computational problems posed in Roman numerals (e.g., X - X cannot be computed in Roman numerals, since zero is absent from that numeral system) once a positional system capable of expressing zero is adopted. More papers on a variety of topics using the new computational methodology can be found at the Infinity Computer web page.

Yaroslav D. Sergeyev, Ph.D., D.Sc., D.H.C. is President of the International Society of Global Optimization. His research interests include numerical analysis, global optimization, infinity computing and calculus, philosophy of computations, set theory, number theory, fractals, parallel computing, and interval analysis. Prof. Sergeyev was awarded several research prizes (Khwarizmi International Award, 2017; Pythagoras International Prize in Mathematics, Italy, 2010; EUROPT Fellow, 2016; Outstanding Achievement Award from the 2015 World Congress in Computer Science, Computer Engineering, and Applied Computing, USA; Honorary Fellowship, the highest distinction of the European Society of Computational Methods in Sciences, Engineering and Technology, 2015; The 2015 Journal of Global Optimization (Springer) Best Paper Award; Lagrange Lecture, Turin University, Italy, 2010; MAIK Prize for the best scientific monograph published in Russian, Moscow, 2008, etc.).

His list of publications contains more than 250 items (among them 6 books). He is a member of editorial boards of 6 international journals and co-editor of 8 special issues. He delivered more than 60 plenary and keynote lectures at prestigious international congresses. He was Chairman of 7 international conferences and a member of Scientific Committees of more than 60 international congresses.

Real-world examples of individual trajectories and collective movements. (a) Four examples of an individual trajectory from an empirical data set from mainland China and the corresponding collective movements. (b-d) Collective movements embedded in the data sets from the continental United States, Cote d'Ivoire and Belgium. Here the color bar represents the amount of mobility flux among locations per unit time, where a brighter (darker) line indicates a stronger (weaker) flux. Note that the spatial scales associated with these data sets are drastically different.

A new method to predict human mobility, which can be used to chart the potential spread of disease or determine rush hour bottlenecks, has been developed by a team of researchers, including one from Arizona State University.

The research, "Universal model of individual and population mobility on diverse spatial scales," was published in the Nov. 21 issue of Nature Communications.

The research was conducted by Ying-Cheng Lai, a professor of electrical, computer and energy engineering at ASU. He worked with Xiao-Yong Yan and Zi-You Gao from the Institute of Transportation System Science and Engineering at Beijing Jiaotong University and Wen-Xu Wang from the School of Systems Science and Center for Complexity Research at Beijing Normal University.

The researchers found that, based on empirical data from cell phones and GPS records, people are most inclined to travel to "attractive" locations they've visited before, and these movements are independent of the size of a region. The new mobility method uses mathematical calculations based on that data, providing insights that can be discerned regardless of size of the region being tracked.

"The new mobility prediction method is important because it works at both individual and population scales, regardless of region size," explained Arizona State University Professor Ying-Cheng Lai. "Until now, different models were necessary for predicting movement in large countries versus small countries or cities. You could not use the same prediction methods for countries like the U.S. or China that you'd use for Belgium or France."

Information gathered using the new process will be valuable for a variety of prediction tasks, such as charting potential spread of disease, urban transportation planning, and location planning for services and businesses like restaurants, hospitals and police and fire stations.

Tracking human movements began about a decade ago and revealed the necessity for two different prediction models - one for large geographic areas like large countries and one for small countries or cities. Additionally, tracking at scale currently relies on measuring travel flux between locations and travel trajectories during specific time frames, requiring large amounts of private data. The new algorithm, based solely on population distribution, provides an alternative, more practical approach.
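The memory-based mobility idea in the passage above, travel to attractive, previously visited locations plus occasional exploration weighted by population, can be sketched as a simple random walk. The parameters, population vector, and rule details below are invented for illustration and are not the authors' universal model:

```python
import random

random.seed(7)

# Illustrative relative populations of six locations (assumption).
population = [50, 20, 10, 10, 5, 5]

def simulate(steps, p_new=0.2):
    """Memory-based mobility walk: with probability p_new, explore a
    new location chosen by population weight; otherwise return to a
    past location in proportion to how often it was visited
    ("preferential return")."""
    visits = {0: 1}            # start at location 0
    trajectory = [0]
    for _ in range(steps):
        unvisited = [i for i in range(len(population)) if i not in visits]
        if unvisited and random.random() < p_new:
            weights = [population[i] for i in unvisited]
            nxt = random.choices(unvisited, weights=weights)[0]
        else:
            locs = list(visits)
            nxt = random.choices(locs, weights=[visits[l] for l in locs])[0]
        visits[nxt] = visits.get(nxt, 0) + 1
        trajectory.append(nxt)
    return trajectory, visits

traj, visits = simulate(2000)
```

Because the exploration step depends only on the population distribution, a rule of this general shape behaves the same way whether the "locations" are city districts or whole provinces, which is the scale-independence the researchers highlight.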

Marvell Technology is based in Bermuda but run from headquarters in Santa Clara, CA.

  • Complementary portfolios and scale enable world-class end-to-end solutions
  • Diversifies revenue base and end markets; increases SAM to $16 billion+
  • Combined R&D innovation engine and IP portfolio accelerates product leadership
  • Creates best-in-class financial model

Marvell Technology Group Ltd. and Cavium, Inc. have announced a definitive agreement, unanimously approved by the boards of directors of both companies, under which Marvell will acquire all outstanding shares of Cavium common stock in exchange for consideration of $40.00 per share in cash and 2.1757 Marvell common shares for each Cavium share. Upon completion of the transaction, Marvell will become a leader in infrastructure solutions with approximately $3.4 billion in annual revenue.

The transaction combines Marvell's portfolio of leading HDD and SSD storage controllers, networking solutions and high-performance wireless connectivity products with Cavium's portfolio of leading multi-core processing, networking communications, storage connectivity and security solutions. The combined product portfolios provide the scale and breadth to deliver comprehensive end-to-end solutions for customers across the cloud data center, enterprise and service provider markets, and expands Marvell's serviceable addressable market to more than $16 billion. This transaction also creates an R&D innovation engine to accelerate product development, positioning the company to meet today's massive and growing demand for data storage, heterogeneous computing and high-speed connectivity.

"This is an exciting combination of two very complementary companies that together equal more than the sum of their parts," said Marvell President and Chief Executive Officer, Matt Murphy. "This combination expands and diversifies our revenue base and end markets, and enables us to deliver a broader set of differentiated solutions to our customers. Syed Ali has built an outstanding company, and I'm excited that he is joining the Board. I'm equally excited that Cavium's Co-founder Raghib Hussain and Vice President of IC Engineering Anil Jain will also join my senior leadership team. Together, we all will be able to deliver immediate and long-term value to our customers, employees and shareholders."

"Individually, our businesses are exceptionally strong, but together, we will be one of the few companies in the world capable of delivering such a comprehensive set of end-to-end solutions to our combined customer base," said Cavium Co-founder and Chief Executive Officer, Syed Ali. "Our potential is huge. We look forward to working closely with the Marvell team to ensure a smooth transition and to start unlocking the significant opportunities that our combination creates."

The transaction is expected to generate at least $150 to $175 million of annual run-rate synergies within 18 months post close and to be significantly accretive to revenue growth, margins and non-GAAP EPS.

Transaction Structure and Terms 
Under the terms of the definitive agreement, Marvell will pay Cavium shareholders $40.00 in cash and 2.1757 Marvell common shares for each share of Cavium common stock. The exchange ratio was based on a purchase price of $80 per share, using Marvell's undisturbed price prior to November 3, when media reports of the transaction first surfaced. This represents a transaction value of approximately $6 billion. Cavium shareholders are expected to own approximately 25% of the combined company on a pro forma basis.

Marvell intends to fund the cash consideration with a combination of cash on hand from the combined companies and $1.75 billion in debt financing. Marvell has obtained commitments consisting of an $850 million bridge loan commitment and a $900 million committed term loan from Goldman Sachs Bank USA and Bank of America Merrill Lynch, in each case, subject to customary terms and conditions. The transaction is not subject to any financing condition.

The transaction is expected to close in mid-calendar 2018, subject to regulatory approval as well as other customary closing conditions, including the adoption by Cavium shareholders of the merger agreement and the approval by Marvell shareholders of the issuance of Marvell common shares in the transaction.

Management and Board of Directors 
Matt Murphy will lead the combined company, and the leadership team will have strong representation from both companies, including Marvell's current Chief Financial Officer Jean Hu, Cavium's Co-founder and Chief Operating Officer Raghib Hussain and Cavium's Vice President of IC Engineering Anil Jain. In addition, Cavium's Co-founder and Chief Executive Officer, Syed Ali, will continue with the combined company as a strategic advisor and will join Marvell's Board of Directors, along with two additional board members from Cavium's Board of Directors, effective upon closing of the transaction.

Goldman Sachs & Co. LLC served as the exclusive financial advisor to Marvell and Hogan Lovells US LLP served as legal advisor. Qatalyst Partners LP and J.P. Morgan Securities LLC served as financial advisors to Cavium and Skadden, Arps, Slate, Meagher & Flom LLP served as legal advisor.

Marvell Preliminary Third Fiscal Quarter Results 
Based on preliminary financial information, Marvell expects revenue of $610 to $620 million and non-GAAP earnings per share to be between $0.32 and $0.34, above the mid-point of guidance provided on August 24, 2017. Further information regarding third fiscal quarter results will be released on November 28, 2017 at 1:45 p.m. Pacific Time.

Transaction Website 
For more information, investors are encouraged to visit the transaction website, which will be used by Marvell and Cavium to disclose information about the transaction and to comply with Regulation FD.

  1. WPI researchers use machine learning to detect when online news are a paid-for pack of lies
  2. Johns Hopkins researchers develop model estimating the odds of events that trigger sudden cardiac death
  3. Young Brazilian researcher creates supercomputer model to explain the origin of Earth's water
  4. With launch of new night sky survey, UW researchers ready for era of big data astronomy
  5. CMU software assembles RNA transcripts more accurately
  6. Russian scientists create a prototype neural network based on memristors
  7. Data Science Institute at Columbia develops statistical method that makes better predictions
  8. KU researchers untangle vexing problem in supercomputer-simulation technology
  9. University of Bristol launches £43 million Quantum Technologies Innovation Centre
  10. Utah researchers develop milestone for ultra-fast communication
  11. Pitt supercomputing helps doctors detect acute kidney injury earlier to save lives
  12. American University prof builds models to help solve few-body problems in physics
  13. UW prof helps supercompute activity around quasars, black holes
  14. Nottingham's early warning health, welfare system could save UK cattle farmers millions of pounds, reduce antibiotic use
  15. Osaka university researchers roll the dice on perovskite interfaces
  16. UM biochemist Prabhakar produces discovery that lights path for Alzheimer's research
  17. Tafti lab creates an elusive material to produce a quantum spin liquid
  18. Purdue develops intrachip micro-cooling system for supercomputers
  19. Northeastern University, China's Xu develops machine learning system to identify shapes of red blood cells
  20. SDSU prof Vaidya produces models for HIV drug pharmacodynamics
