A novel visualization method for exploring dynamic patterns in real-time Bitcoin transactional data can zoom in on individual transactions within large blocks of data and also detect meaningful associations among large numbers of transactions, including recurring patterns such as money laundering. The information and insights made possible by this top-down visualization of Bitcoin cryptocurrency transactions are described in an article in Big Data, the highly innovative, peer-reviewed journal from Mary Ann Liebert, Inc., publishers (http://www.liebertpub.com/). The article is available free for download on the Big Data (http://online.liebertpub.com/doi/full/10.1089/big.2015.0056) website.

In the article "Visualizing Dynamic Bitcoin Transaction Patterns (http://online.liebertpub.com/doi/full/10.1089/big.2015.0056)," Dan McGinn, David Birch, David Akroyd, Miguel Molina-Solana, Yike Guo, and William Knottenbelt, Imperial College London, U.K., compare their visualization approach to previous bottom-up methods, which examine data from single-source transactions. Top-down system-wide visualization enables pattern detection, and it is then possible to drill down into any particular transaction for more detailed information. The researchers describe the successful deployment of their visualization tool in a high-resolution 64-screen data observatory facility. 
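To make the top-down-then-drill-down idea concrete, here is a minimal sketch, not the authors' tool, that builds a directed graph from a handful of hypothetical transaction records, draws a system-wide overview, and then zooms into the neighborhood of one transaction; the data, node names, and use of networkx and matplotlib are all assumptions for illustration.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical records: (spending transaction, funded transaction, amount in BTC).
edges = [
    ("tx_a", "tx_b", 1.5), ("tx_a", "tx_c", 0.7),
    ("tx_b", "tx_d", 1.4), ("tx_c", "tx_d", 0.6),
    ("tx_d", "tx_e", 1.9),   # many inputs converging on one output can hint at mixing
]

G = nx.DiGraph()
for src, dst, btc in edges:
    G.add_edge(src, dst, weight=btc)

# Top-down overview: the whole transaction graph at once.
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_size=600, arrows=True)
plt.title("System-wide overview (hypothetical data)")
plt.show()

# Drill down: the one-hop neighborhood of a single transaction of interest.
sub = nx.ego_graph(G, "tx_d", radius=1, undirected=True)
nx.draw(sub, nx.spring_layout(sub, seed=42), with_labels=True, node_size=600, arrows=True)
plt.title("Drill-down around tx_d")
plt.show()
```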

"This is a bold attempt at a comprehensive visualization of bitcoin transactions for a lay audience," says Big Data Editor-in-Chief Vasant Dhar, Professor at the Stern School of Business and the Center for Data Science at New York University, "but should also be of great interest to regulators and bankers who are trying to make sense of blockchain and related methods that can work without a central trusted intermediary. There is a lot of confusion about these emerging methods and a real need for articles that cut through the clutter and explain them in simple terms. Visualization is a key to understanding them."

Image caption: Co-Principal Investigators Jason Leigh, Chris Lee and David Garmire.

The University of Hawai'i at Mānoa will be home to the best data visualization system in the United States, thanks to a major research infrastructure grant from the National Science Foundation (NSF).

The NSF provided $600,000 and the University of Hawai'i (UH) added $257,000 for a total of $857,000 to develop a large CyberCANOE, which stands for Cyber-enabled Collaboration Analysis Navigation and Observation Environment. The CyberCANOE is a visualization and collaboration infrastructure that allows students and researchers to work together more effectively using large amounts of data and information. It was designed by Computer and Information Science Professor Jason Leigh, who is also the founder and director of the Laboratory for Advanced Visualization and Applications (LAVA) at the University of Hawai'i at Mānoa.

UH's CyberCANOE represents the culmination of over two decades of experience and expertise for Leigh, the grant's principal investigator, who developed immersive virtual reality environments while at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago.

The UH CyberCANOE will provide an alternative approach to constructing ultra-resolution display environments by using new, completely seamless direct-view light-emitting diode (LED) displays, rather than traditional projection technologies or liquid crystal displays. The net effect is a visual instrument that exceeds the capabilities and overcomes the limitations of the current best-in-class systems at other U.S. universities.

"This comes at the best time for Hawai'i as the number of students interested in information and computer science is skyrocketing. Last year about 170 freshman computer science students entered the program, this year we will receive 270," said Leigh. "The University of Hawai'i's CyberCANOE will give these students access to better technology than what will be available on the continent."

The new 2D and 3D stereoscopic display environment with almost 50 Megapixels of resolution will provide researchers with powerful and easy-to-use, information-rich instrumentation in support of cyberinfrastructure-enabled, data-intensive scientific discovery.

Increasingly, the nation's computational science and engineering research communities work with international collaborators to tackle complex global problems. Advanced visualization instruments serve as the virtual eyepieces of a telescope or microscope, enabling research teams and their students to view their data in cyberspace, and better manage the increased scale and complexity of accessing and analyzing the data.

"I'm highly excited about this multidisciplinary collaboration between information and computer sciences, the Academy for Creative Media System and electrical engineering," said co-principal investigator and UH Mānoa Associate Professor of Electrical Engineering David Garmire. "It will advance the state of the art in research infrastructure for information-rich visualization and immersive experience while providing unique opportunities for the student body."

At least 46 researchers, 28 postdocs, 833 undergraduates and 45 graduate students spanning disciplines that include oceanography, astrobiology, mathematics, computer science, electrical engineering, biomedical research, archeology, and computational media are poised to use the CyberCANOE for their large-scale data visualization needs. The CyberCANOE will also open up new opportunities in computer science research at the intersection of data-intensive analysis and visualization, human-computer interaction and virtual reality.

UH System's Academy for Creative Media (ACM) founder and director Chris Lee, who is also a co-principal investigator on the grant, said, "ACM System is thrilled to be able to continue to support Jason Leigh and his team in securing a second NSF Grant. This new CyberCANOE builds upon the two earlier 'mini' CyberCANOEs, which ACM System fully financed at UH Mānoa and UH West O'ahu."

The new CyberCANOE, which is expected to be built in about three years, will enable Leigh's advanced visualization laboratory to provide scientific communities with highly integrated, visually rich collaboration environments; to work with industry to facilitate the creation of new technologies for the advancement of science and engineering; and to continue ongoing partnerships with many of the world's best scientists in academia and industry. With the CyberCANOE, the lab will also support the country's leadership position in supercomputing and in contributing advancements to complex global issues, such as the environment, health and the economy.

Image caption: BioJS conference group.

BioJS draws upon reusable components to visualise and analyse biological data on the web. The software is freely available to users and developers, who can modify, extend and redistribute it with few restrictions, at no cost. With a vision that 'every online biological dataset in the world should be visualised with BioJS tools', the community hopes to build the largest, most comprehensive repository of JavaScript tools for visualising online biological data, available to all.

Existing open-source biological data repositories are littered with abandoned projects that failed to gain the support needed to continue past their initial funding and enthusiasm. Buy-in from the life sciences community is therefore critical for BioJS: presenting a suite of tools capable of displaying biological data requires expertise and capacity beyond what isolated groups can provide.

BioJS was initially developed in 2013 through a collaboration between TGAC and the European Bioinformatics Institute (EMBL-EBI). Starting off as a small set of individual graphical components in a bespoke registry, it has evolved into a suite of over 100 data visualisation tools with nearly 185,000 combined downloads, a community of 41 code contributors spread across four continents, a Google Group forum with more than 150 members, and 15 published papers with multiple citations.

BioJS has been designed so that potential contributors face minimal technical requirements. Contributors need to know JavaScript but are not required to understand the core system, and they can work on multiple projects at once, building their own data visualisation components independently.

To promote the project, TGAC recently held the first BioJS conference, an open event for potential developers and users of the online data repository. The conference was followed by a hackathon in which participants integrated the toolset into the larger Galaxy network, an open, web-based platform for data-intensive biological research.

Manuel Corpas, BioJS community lead and Project Leader at TGAC, said: "BioJS has become a robust international project within a short period of time by fostering the right skills and technical expertise to develop the community. Contributors are rewarded to ensure members are motivated and to increase our impact.

"Time spent on promoting, evangelising and networking is one of the most fruitful investments in the BioJS community. We believe that BioJS will set an example for other to have the confidence to build their own similarly robust open source projects and communities."

The paper, titled "Cutting edge: Anatomy of BioJS, an open source community for the life sciences," is published in eLife.

TGAC is strategically funded by BBSRC and operates a National Capability to promote the application of genomics and bioinformatics to advance bioscience research and innovation.

Disney, Carnegie Mellon researchers present photogeometric scene flow

Disney Research and Carnegie Mellon University scientists have found that three computer vision methods commonly used to reconstruct 3-D scenes produce superior results in capturing facial details when they are performed simultaneously, rather than independently.

Photometric stereo (PS), multi-view stereo (MVS) and optical flow (OF) are well-established methods for reconstructing 3-D images, each with its own strengths and weaknesses that often complement the others. By combining them into a single technique, called photogeometric scene flow (PGSF), the researchers were able to create synergies that improved the quality and detail of the resulting 3-D reconstructions.

"The quality of a 3-D model can make or break the perceived realism of an animation," said Paulo Gotardo, an associate research scientist at Disney Research. "That's particularly true for faces; people have a remarkably low threshold for inaccuracies in the appearance of facial features. PGSF could prove extremely valuable because it can capture dynamically moving objects in high detail and accuracy."

Gotardo, working with principal research scientist Iain Matthews at Disney Research in collaboration with Tomas Simon and Yaser Sheikh of Carnegie Mellon's Robotics Institute, found that they could obtain better results by solving the three difficult problems simultaneously.

PS can capture the fine detail geometry of faces or other texture-less objects by photographing the object under different lighting conditions. The method is often used to enhance the detail of low-resolution geometry obtained by MVS, but requires a third technique, OF, to compensate for 3-D motion of the object over time. When these steps are performed independently, image misalignments and other errors can accumulate at each stage and lead to a loss of detail.
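As a point of reference for the PS building block (this is not the PGSF pipeline itself), the sketch below implements classic Lambertian photometric stereo: given per-pixel intensities under several known light directions, it recovers albedo and surface normals by least squares. The synthetic test patch and the specific light directions are made up for illustration.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) grayscale shots under K lights; light_dirs: (K, 3) unit vectors."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                           # (K, H*W)
    # Solve light_dirs @ G = I for the scaled normal G = albedo * normal at each pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)                  # per-pixel reflectance
    normals = G / np.maximum(albedo, 1e-8)              # unit surface normals
    return albedo.reshape(H, W), normals.reshape(3, H, W)

# Tiny synthetic check: a flat 4x4 patch facing the camera with albedo 0.5.
L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
true_normal, true_albedo = np.array([0.0, 0.0, 1.0]), 0.5
imgs = np.stack([np.full((4, 4), true_albedo * d @ true_normal) for d in L])
albedo, normals = photometric_stereo(imgs, L)
print(albedo[0, 0], normals[:, 0, 0])   # ~0.5 and ~[0, 0, 1]
```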

"The key to PGSF is the fact that PS not only benefits from, but also facilitates the computation of MVS and OF," Simon said.

The researchers found that facial details such as skin pores, eyes, brows, nostrils and lips that they obtained via PGSF were superior to those obtained using other state-of-the-art techniques.

To perform PGSF, the researchers created an acquisition setup consisting of two cameras and nine directional lights of three different colors. The lights are multiplexed in time and spectrally to sample the appearance of an actor's face within a very short interval -- three video frames. This minimizes the need for motion compensation while also minimizing self-shadowing, which can be problematic for 3-D reconstruction.
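As a back-of-the-envelope illustration of that multiplexing arithmetic (and only that), the snippet below assumes, idealistically, that each of the three light colors excites exactly one sensor channel, so three RGB frames lit by three differently colored lights each yield nine per-light shading images; a real rig would additionally need color calibration and unmixing, and the array shapes here are placeholders.

```python
import numpy as np

H, W = 480, 640
frames = np.random.rand(3, H, W, 3)   # (frame, row, col, RGB channel) - placeholder data

# Under ideal spectral separation, frame f, channel c isolates light number 3*f + c.
per_light = frames.transpose(0, 3, 1, 2).reshape(9, H, W)
print(per_light.shape)                # (9, 480, 640): nine lighting conditions in three frames
```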

"The PGSF technique also can be applied to more complex acquisition setups with different numbers of cameras and light sources," Matthews said.

The researchers will present their findings on PGSF at ICCV 2015, the International Conference on Computer Vision, Dec. 11, in Santiago, Chile. For more information and a video, visit the project web site at http://www.disneyresearch.com/publication/pgsf/.


Hybrid approach enables accurate rendering while reducing computation time

Computer graphics researchers have developed a way to efficiently render images of sand castles, gravel roads, snowmen, salt in a shaker or even ocean spray - any object consisting of randomly oriented, but discernible grains - that look realistic whether viewed from afar or up close.

The new method, developed by Disney Research in collaboration with researchers from Karlsruhe Institute of Technology, ETH Zurich, Cornell University and Dartmouth College, employs three different types of rendering techniques depending on the scale at which the object is viewed. 

A sand castle scene created by the researchers, which contains about two billion grains, appears uniformly light brown and continuous when seen from a distance. But when the view zooms in, individual grains of three different colors are apparent.

Details of the method will be presented at ACM SIGGRAPH 2015, the International Conference on Computer Graphics and Interactive Techniques, in Los Angeles Aug. 9-13.

"Granular materials are common in our everyday environment, but rendering these materials accurately and efficiently at arbitrary scales remains an open problem in computer graphics," said Marios Papas, a Ph.D. student in computer graphics at Disney Research and ETH Zurich. The rendering framework he and his colleagues used to tackle this problem adapts to the structure of scattered light at different scales.

At the smallest scale, they consider the geometry, size and material properties of the individual grains and the density at which they are packed together. They capture the appearance of these individual grains with a rendering technique called path tracing. This traces many light paths from each pixel back to their sources, building a highly detailed and realistic model of the aggregate material. Using just a few paths per pixel typically results in very noisy images that are quick to compute, but high-quality images require simulating thousands of such light paths per pixel.
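A minimal numerical sketch of that trade-off (with a made-up stand-in for the renderer, not actual path tracing code): a pixel value is a Monte Carlo estimate whose noise shrinks roughly like one over the square root of the number of paths, which is why a few paths per pixel look grainy while thousands converge.

```python
import numpy as np

rng = np.random.default_rng(0)

def incoming_radiance(n_paths):
    # Stand-in for tracing n_paths random light paths through one pixel;
    # the true pixel value (the mean of this distribution) is 0.5.
    return rng.exponential(scale=0.5, size=n_paths)

for n in (4, 64, 1024, 16384):
    estimates = [incoming_radiance(n).mean() for _ in range(200)]
    print(f"{n:6d} paths/pixel -> value {np.mean(estimates):.3f} +/- {np.std(estimates):.4f}")
# The spread (image noise) halves roughly each time the path count quadruples.
```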

The intense computation required for this technique isn't feasible for the entire object, which might contain millions or billions of grains. So, as the scale increases and as it becomes harder to track which rays bounced off which grains, they use a different rendering technique, volumetric path tracing, which approximates the material as a continuous medium and requires less computation. While typically used to render more tenuous atmospheric effects like clouds or smoke, the researchers showed how this technique can also be used to accurately simulate the way light scatters within such granular materials at larger scales.
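The core of the volumetric idea can be sketched in a few lines, under the simplifying assumption of a homogeneous slab with a made-up extinction coefficient: rather than intersecting rays with individual grains, distances to the next interaction are sampled from an exponential distribution, and the fraction of rays that cross the slab matches the Beer-Lambert transmittance.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_t = 4.0      # extinction coefficient per unit length (illustrative value only)
thickness = 1.0    # depth of the "granular" slab the rays must cross

def transmitted(n_rays=100_000):
    # Sample free-flight distances to the next interaction: t = -ln(1 - u) / sigma_t.
    t = -np.log1p(-rng.random(n_rays)) / sigma_t
    return np.mean(t > thickness)   # fraction of rays crossing without interacting

print("Monte Carlo transmittance:", transmitted())
print("Beer-Lambert prediction:  ", np.exp(-sigma_t * thickness))
```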

At even greater scales, particularly for materials that are highly reflective such as snow or spray, they use a third technique leveraging the diffusion approximation.

"It would be possible to use any one of these techniques to render the image, but path tracing the individual grains would require prohibitive amounts of computation time and the other techniques would fail to capture the appearance of individual grains at small scales," said the project lead Wojciech Jarosz, formerly a senior research scientist at Disney Research and now an assistant professor of computer science at Dartmouth College.

"One of our core contributions is showing how to systematically combine these disparate methods and representations to ensure visual consistency between grains visible at vastly different scales, both across the image or across time in an animation," Jarosz said.

Depending on the type of material, the hybrid approach can speed up computation by tens or hundreds of times in comparison to renderings done entirely with path tracing, according to the researchers' calculations.
