Davista Technologies LLC, a startup that licensed a Purdue University innovation, has developed visual data analytics technology that could provide real-time information to help first responders save lives.

"Companies and organizations, specifically those in law enforcement, public safety and health care, are constantly inundated with huge amounts of text, audio and visual data, and are required to make sense of this data as quickly as possible to make effective decisions," said Abish Malik, research scientist and co-founder of Davista Technologies. "What we have developed provides concise, relevant information to help first responders make faster decisions in a critical situation when every second matters."

Davista Technologies specializes in large-scale spatio-temporal and predictive data analytics solutions.

"Nearly all data sets deal with space and time but most analytics companies look at these two elements separately. We focus on them together to achieve more accurate forecasting so organizations can better plan their resources," said David Ebert, Silicon Valley Professor of Electrical and Computer Engineering, director of Visual Analytics for Command, Control and Interoperability Environmentscenter and co-founder of the company. "Davista Technologies software integrates a large amount of intelligence with a variety of visualization elements in an easy-to-use interface. This makes it easier for organizations to make better decisions from large, disparate and complex datasets from a multitude of sources and formats."

Davista Technologies originated out of a U.S. Department of Homeland Security Center of Excellence.

Ebert said the company's research was conducted over seven years with the primary goal of serving the public safety domain.

"The imperative from homeland security has been really beneficial because from the beginning it's allowed us to interact with end users such as police departments and other safety and security entities. These organizations have collaborated with us on designing problems and creating solutions for those problems, testing the product and giving real world feedback," he said. "All of the features that are in the product are driven by needs of actual professionals in the field."

Davista Technologies has established relationships with several government and corporate entities that use its products every day.

"Every time someone calls 911, a police officer is dispatched, the Coast Guard launches a boat or a patient goes into the emergency department, a record is created and data is formed and categorized," Malik said. "For police chiefs, understanding this data is vital to making decisions on when, where and how many police officers should be out patrolling. These decisions can be easily made by using our predictive capabilities which are based on past crime trends from location and time data. Making these effective decisions could potentially intercept or prevent further crimes happening in those areas."

Ebert said the company's social media solutions also have many benefits.

"Our anomaly detection capabilities used in our social media solutions could be especially beneficial in terrorism or gun violence situations," he said. "For example, if 5,000 people are tweeting about an event such as a football game and only 10 are tweeting about a potential threat such as someone with a gun, our technology can pick up those 10 tweets and can notify users and organizations so they can act immediately. Other services just pick up on the most talked about topics."

Davista Technologies offers three products and services: Visdom Visual, Visdom Analytics and Smart.

"Visdom Visual provides interactive business intelligence and reporting solutions for large data sets, illustrating complex information for real time decision-making. The product incorporates easy to use map and time visualizations and a dashboard interface," said Malik. "We also offer an analytics search engine that is able to ingest data from different sources and different varieties that incorporates enhanced correlative and predication techniques for situational awareness and risk based decision making. Smart is our scalable and interactive social media analytics and visualization solution, its anomaly detection capability is able to identify things that are not yet trending but are still being talked about."

Malik said that although the company's original research was based on safety and security, the technology could have a wide range of applications.

"We have previously worked with emergency department data coming in daily from hospitals around the state, we've also worked with the Regenstrief Center for Healthcare Engineering, exploring insurance data," he said. "The products we have are generic and diversified and could be used in just about any organization that is trying to make sense of big streams of data."

Davista Technologies is exploring new markets and acquiring its first paying customers. To grow, the company is seeking investors and personnel.

Technology used by Davista Technologies has been licensed through the Purdue Research Foundation Office of Technology Commercialization. The company is a member of the Purdue Startup Class of 2016; Purdue launched 27 startups based on Purdue intellectual property in the 2016 fiscal year. The company is also a Purdue Foundry-affiliated client with several members participating in its Launch Box program.

For information on other Purdue intellectual property ready for licensing and commercialization, visit http://www.otc-prf.org. For more information about available leadership positions, investing in a Purdue startup or licensing a Purdue innovation, visit http://www.purduefoundry.com.

Patients with the same illness often receive the same treatment, even if the cause of the illness is different for each person.

Publication in Nature Genetics

Six Dutch universities are combining forces to chart the different disease processes for a range of common conditions. This represents a new step towards ultimately being able to offer every patient more personalised treatment. The results of this study have been published in two articles in the authoritative scientific journal Nature Genetics.

New phase

The researchers were able to make their discoveries thanks to new techniques that make it possible to simultaneously measure the regulation and activity of all the genes of thousands of people, and to link these data to millions of genetic differences in their DNA. The combined analysis of these ‘big data’ made it possible to determine which molecular processes in the body become dysregulated in a range of disparate diseases, from prostate cancer to inflammatory bowel disease, before the individuals concerned actually become ill.
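The core statistical step behind this kind of analysis, known as an expression quantitative trait locus (eQTL) test, asks whether people carrying more copies of a DNA variant show systematically different gene activity. Below is a simplified sketch using simulated data; the consortium's actual pipeline is far more elaborate:

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated data for one gene-variant pair; a real eQTL study repeats this
# test for millions of variants across thousands of people.
rng = np.random.default_rng(0)
genotype = rng.integers(0, 3, size=2000)             # 0/1/2 copies of the allele
expression = 0.3 * genotype + rng.normal(size=2000)  # measured gene activity

# Test whether carrying more copies of the variant shifts expression.
r, p = pearsonr(genotype, expression)
print(f"effect r={r:.2f}, p={p:.1e}")

# Genome-wide, every gene is tested against many nearby variants, and the
# p-values must be corrected for the millions of tests performed.
```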

Big data

“The emergence of ‘big data’, ever faster supercomputers and new mathematical techniques means it’s now possible to conduct extremely large-scale studies and gain an understanding of many diseases at the same time,” explains Lude Franke (UMCG), head of the research team in Groningen. The researchers show how thousands of disease-related DNA differences disrupt the internal workings of a cell and how their effects can be influenced by environmental factors. And all this was possible without the need for a single lab experiment.

Large-scale collaboration in the Netherlands

The success of this research is the result of the decision taken six years ago by biobanks throughout the Netherlands to share data and biomaterials within the BBMRI consortium. This decision meant it became possible to gather, store and analyse data from blood samples of a very large number of volunteers. The present study illustrates the tremendous value of large-scale collaboration in the field of medical research in the Netherlands.

Netherlands in the lead

Heijmans (LUMC), research leader in Leiden and initiator of the partnership: “The Netherlands is leading the field in sharing molecular data. This enables researchers to carry out the kind of large-scale studies that are needed to gain a better understanding of the causes of diseases. This result is only just the beginning: once they have undergone a screening, other researchers with a good scientific idea will be given access to this enormous bank of anonymized data. Our Dutch ‘polder mentality’ is also advancing science.”

Personalized medicine

Mapping the various molecular causes for a disease is the first step towards a form of medical treatment that better matches the disease process of individual patients. To reach that ideal, however, we still have a long way to go. The large-scale molecular data that have been collected for this research are the cornerstone of even bigger partnerships, such as the national Health-RI initiative. The third research leader, Peter-Bram ’t Hoen (LUMC), says: “Large quantities of data should eventually make it possible to give everyone personalised health advice, and to determine the best treatment for each individual patient.”

Caption: A graphic from a study led by The University of Texas at Austin shows how snow data from NASA satellites affects seasonal temperature prediction. Negative values, shown in warm colors, mark regions where temperature predictions improved and give the percentage by which errors were reduced. The panels compare predictions made with data from the MODIS satellite (MOD), and from MODIS and GRACE together (GRAMOD), against a prediction that did not incorporate satellite snow data. Credit: Peirong Lin, UT Austin Jackson School of Geosciences.

Researchers with The University of Texas at Austin have found that incorporating snow data collected from space into supercomputer climate models can significantly improve seasonal temperature predictions.

The findings, published in November in Geophysical Research Letters, a publication of the American Geophysical Union, could help farmers, water providers, power companies and others that use seasonal climate predictions - forecasts of conditions months in the future - to make decisions. Snow influences the amount of heat that is absorbed by the ground and the amount of water available for evaporation into the atmosphere, which plays an important role in influencing regional climate.

"We're interested in providing more accurate climate forecasts because the seasonal timescale is quite important for water resource management and people who are interested in next season's weather," said Peirong Lin, the lead author of the study and a graduate student at the UT Jackson School of Geosciences.

Seasonal forecasts are influenced by factors that are significantly more difficult to account for than the variables behind daily to weekly weather forecasts or long-term climate change, said Zong-Liang Yang, a co-author of the study and a professor in the Jackson School of Geosciences' Department of Geological Sciences.

"Between the short and very long time scale there's a seasonal time scale that's a very chaotic system," Yang said. "But there is some evidence that slowly varying surface conditions, like snow cover, will have a signature in the seasonal timescale."

The researchers found that incorporating snow data collected by NASA satellites into climate models improved regional temperature predictions by 5 to 25 percent. These findings are the first to go beyond general associations and break down how much snow can impact the temperature of a region months into the future. Improving temperature predictions is a key element to improving the supercomputer models that provide climate predictions months in advance.
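The improvement figures reported in the study are relative error reductions: the forecast error of a model run with satellite snow data, compared against a run without it. A minimal illustration of that metric follows, using made-up numbers rather than the study's actual model output:

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between a prediction and observations."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

# Hypothetical seasonal temperature forecasts (deg C) for one region; the
# study's actual fields come from model runs with and without MODIS/GRACE
# snow data assimilated.
observed     = [1.0, 2.5, 4.0, 6.0]
baseline_run = [2.1, 3.9, 5.4, 7.8]  # no satellite snow data
snow_run     = [1.4, 3.0, 4.5, 6.7]  # MODIS/GRACE snow data included

change = 100.0 * (rmse(snow_run, observed) / rmse(baseline_run, observed) - 1.0)
print(f"error change: {change:.0f}%")  # negative = improvement, as in the figure
```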

The researchers analyzed how data on snow cover and depth collected from two NASA satellites -- MODIS and GRACE -- affected temperature predictions of the Northern Hemisphere in a climate model. The study examined seasonal data from 2003 through 2009, so the researchers could compare the model's predictions to recorded temperatures. The model ran predictions in three-month intervals, with January, February and March each used as starting months.

The supercomputer model's temperature improvement varied by region and time, with the biggest improvements in regions where ground-based measurements are sparse, such as Siberia and the Tibetan Plateau. Climatic conditions in both these areas can influence the Indian monsoon -- seasonal rains that are vital to agriculture in India -- a fact that shows the far-reaching applicability of seasonal climate prediction, Yang said.

"This correlation between snow and future monsoon has been established for several decades, but here we are developing a predictive framework where you can run the model forward and get a quantity, not just a correlation," Yang said.

In the future, the researchers plan to expand their work to predict other climatic factors, such as snowfall and rainfall. For the time being, they hope their findings will be useful to national organizations that make climate predictions, such as the U.S. National Oceanic and Atmospheric Administration and the European Centre for Medium-Range Weather Forecasts.

Randal Koster, a scientist at NASA's Goddard Space Flight Center who studies land-atmosphere interactions using supercomputer models, said that the study is an example of how satellites can improve climate forecasts by providing more accurate data to inform the starting conditions of the model.

"In the future such use of satellite data will be standard," said Koster, who was not involved with the study. "Pioneering studies like this are absolutely critical to seeing this happen."

Atlas of every drug on Earth points to treatments of the future

Scientists have created a map of all 1,578 licensed drugs and their mechanisms of action - as a means of identifying 'uncharted waters' in the search for future treatments.

Their analysis of drugs licensed through the Food and Drug Administration reveals that 667 separate proteins in the human body have had drugs developed against them - just an estimated 3.5% of the 20,000 human proteins.

And as many as 70 per cent of all targeted drugs created so far work by acting on just four families of proteins - leaving vast swathes of human biology untouched by drug discovery programmes.

The study is the most comprehensive analysis of existing drug treatments across all diseases ever conducted. It was jointly led by scientists at The Institute of Cancer Research, London, which also funded the research.

The new map reveals areas where human genes and the proteins they encode could be promising targets for new treatments - and could also be used to identify where a treatment for one disease could be effective against another.

The new data, published in a paper in the journal Nature Reviews Drug Discovery, could be used to improve treatments for human ailments as diverse as cancer, mental illness, chronic pain and infectious disease.

Scientists brought together vast amounts of information from huge datasets including the canSAR database at The Institute of Cancer Research (ICR), the ChEMBL database from the European Bioinformatics Institute (EMBL-EBI) in Cambridge and the University of New Mexico's DrugCentral database.

They matched each drug with prescribing information and data from published scientific papers, and built up a comprehensive picture of how existing medicines work - and where the gaps and opportunities for the future lie.

The researchers discovered that there are 667 unique human proteins targeted by existing approved drugs, and identified a further 189 drug targets in organisms that are harmful to humans, such as bacteria, viruses and parasites.

On average they found there were two drugs for every target in humans - but that a handful of proteins were targeted by many different drugs, such as the glucocorticoid receptor, which is the target of 61 anti-inflammatory drugs.
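Producing statistics like these amounts to aggregating a drug-to-target mapping. Here is a toy sketch with a handful of hypothetical entries; the published map draws on canSAR, ChEMBL and DrugCentral and covers 1,578 drugs and 667 human targets:

```python
from collections import Counter

# A tiny, hypothetical slice of a drug -> protein-target mapping,
# for illustration only.
drug_targets = {
    "prednisolone":  ["glucocorticoid receptor"],
    "dexamethasone": ["glucocorticoid receptor"],
    "imatinib":      ["ABL1", "KIT", "PDGFRA"],
    "gefitinib":     ["EGFR"],
}

drugs_per_target = Counter(t for ts in drug_targets.values() for t in ts)
n_targets = len(drugs_per_target)
n_drugs = len(drug_targets)

print(f"{n_targets} unique targets across {n_drugs} drugs, "
      f"{n_drugs / n_targets:.1f} drugs per target on average")
print(drugs_per_target.most_common(1))  # the most heavily drugged target
```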

Cancer was found to be the most innovative disease area, with the greatest growth in 'first-in-class' drugs - those that use a new and unique mode of action.

Using complex 'Big Data' analytical techniques, the researchers identified four very frequently 'drugged' families of proteins - accounting for 43 per cent of all drug targets, and acting as targets for 70 per cent of all approved small-molecule drugs.

The new map of drugs can now be used to identify other proteins with similar properties to these most heavily drugged families - which might be potentially exciting treatment targets for diseases such as cancer.

And bringing together complex data from multiple sources within the drug map could help predict the best combinations of drugs to give. Targeting two proteins that behave in a similar way is unlikely to be effective against diseases such as cancer, whereas targeting proteins with very different functions could be much more successful.

Study co-leader Dr Bissan Al-Lazikani, Head of Data Science at The Institute of Cancer Research, London, said:

"Our new study provides a comprehensive map of the current state of medicines for human disease. It identifies areas where drug discovery has been a spectacular success, others where there are major gaps in our armoury of medicines, and opportunities for the future in the form of promising targets and potential drug combinations. By revealing the uncharted waters of drug discovery, it will provide a clear pointer for future exploration and innovation."

Professor Paul Workman, Chief Executive of The Institute of Cancer Research, London, said:

"We need to do more to innovate in drug discovery if we are really going to tackle the major medical challenges we face, such as cancer's ability to evolve drug resistance in response to treatment. But to help direct future efforts in drug discovery, we first need a very accurate and comprehensive picture of the targets of the medicines that have been created so far, what is currently working, and most importantly where there is the greatest potential for the future. This new map of drugs, created through the latest computational analytical technologies, will enhance our ability to use rational, data-driven approaches to identify the most promising future targets and treatment combinations for the next generation of cancer and other diseases."

Scientists from the University of Twente’s MESA+ research institute have developed a method for studying individual defects in transistors. All computer chips, which are each made up of huge numbers of transistors, contain millions of minor ‘flaws’. Previously it was only possible to study these flaws in large numbers. However, fundamental research conducted by University of Twente scientists has now made it possible to zoom in on defects and study them individually. In due course, this knowledge will be highly relevant to the further development of the semiconductor industry. The research results were published today in Scientific Reports, a leading scientific journal produced by the Nature Publishing Group.

Computer chips typically contain numerous extremely small defects. There are often as many as ten billion defects per square centimetre. The bulk of these defects cause no problems in practice, but the large numbers involved pose enormous challenges for the industry. This is just one of the barriers to the further miniaturization of chips, based on existing technology. It is, therefore, vital to obtain a detailed understanding of how these defects arise, of where they are located, and of how they behave. Until now it has been impossible to study individual defects, due to the large number of defects on each chip, and the fact that closely spaced defects influence each other. For this reason, the defects were always studied in ensembles of several million at a time. However, this approach suffers from the drawback that it only yields a limited amount of information on individual defects.

Main tap

A group of University of Twente researchers led by Dr Floris Zwanenburg have now developed a clever method that, at long last, makes it possible to study individual defects in transistors. Working in the University of Twente’s NanoLab, the researchers first created chips containing eleven electrodes. These consisted of a group of ten electrodes 35 nanometres wide and, located perpendicularly above them, a single electrode 80 nanometres long (a nanometre is one million times smaller than a millimetre). Dr Zwanenburg compares these electrodes to taps – not for water, but for electrons – which the researchers can turn on and off. The researchers first turn on the long electrode, the ‘stopcock’. At a temperature of -270 degrees Celsius, they then open or close the other ‘taps’. This enables them to locate the ‘leaks’, or – in other words – identify the electrodes beneath which defects are located. It turned out that there were leaks under every single electrode.
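The scanning logic itself is simple: with the long top electrode switched on, open each finger electrode in turn and flag any that pass current. A schematic sketch follows, with a stubbed-out measurement routine standing in for the real cryogenic instrumentation; the gate labels and threshold are hypothetical:

```python
# Current above this threshold suggests a defect beneath the gate
# (an illustrative value, not the experiment's actual criterion).
LEAK_THRESHOLD_A = 1e-12

def measure_current(top_on, finger_open):
    """Stub: would trigger a real source-measure unit at -270 deg C."""
    simulated_leaks = {3: 5e-12, 7: 2e-11}  # pretend defects under gates 3 and 7
    if not top_on:
        return 0.0
    return simulated_leaks.get(finger_open, 1e-14)

def scan_for_defects(n_fingers=10):
    """Open each 'tap' in turn and record which ones leak."""
    defective = []
    for gate in range(n_fingers):
        current = measure_current(top_on=True, finger_open=gate)
        if current > LEAK_THRESHOLD_A:
            defective.append(gate)
    return defective

print(scan_for_defects())  # -> [3, 7]: leaks localized to individual gates
```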

Neutralizing the defects

In a subsequent step, the researchers were able to neutralize more than eighty percent of the defects by heating the chips to 300 degrees Celsius in a furnace filled with argon. In some cases, this left only a single defect beneath a given electrode. Having reduced the density of defects in the material, the researchers were then able to study individual defects. Floris Zwanenburg explains: “The behaviour of individual defects is of great importance, as it will improve our understanding of defects in contemporary electronics. Of course, the electronics in question work at room temperature and not at the extremely low temperatures used in our study. Nevertheless, this is an important step for fundamental research and, ultimately, for the further development of modern IC technology.”

Research

The research was conducted by Paul-Christiaan Spruijtenburg, Sergey Amitonov, Filipp Mueller, Wilfred van der Wiel and Floris Zwanenburg of the Department of NanoElectronics, at the University of Twente’s MESA+ Institute for Nanotechnology. The study was jointly funded by the European Commission and the FOM Foundation. 
