Too many businesses fail to properly embed AI in their processes, and are not reaping the benefits

Businesses actively embracing artificial intelligence and striving to bring technological advancements into their operations are reaping dividends not seen by companies that fail to properly adapt and adopt.

While most business and technology leaders are optimistic about the value-creating potential of AI in their enterprise, known as Enterprise Cognitive Computing (ECC), the actual rate of adoption is low, and benefits have proved elusive for most organizations.

A study involving Lancaster University Management School's Centre for Technological Futures and MIT Sloan School's Center for Information Systems Research, published in MIT Sloan Management Review, examined the adoption of ECC in 150 organizations from various industries across Europe, North America, Asia, and Australia, to understand why.

Companies that are able to generate value from ECC do so by building a number of organizational capabilities. They develop data-science and algorithmic expertise, reshape their business and the roles of staff to accommodate and integrate ECC initiatives, and recognize that human judgment and digital inquisitiveness are needed to realize benefits. Such businesses have strong domain expertise and a solid operating IT infrastructure.

They apply these capabilities to a number of practices across the organization, including co-creation involving people from across the business throughout the lifecycle of ECC applications, and developing use cases around pressing and meaningful business problems. They have strategies for managing and training the AI algorithms within ECC applications, and, importantly, they create a positive buzz about ECC while holding realistic, clear-eyed expectations of the benefits they can expect.

Professor Monideepa Tarafdar, Professor of Information Systems and Co-Director of the Centre for Technological Futures at Lancaster University, who co-authored the study, said: "Bringing AI successfully into a business has many positive effects. It can free employees to perform tasks that require adaptability and creativity found in human input, enhance operations, and augment employees' skills.

"But one of our studies showed that half of the companies surveyed had no ECC in place, and only half of those that did believed it had produced measurable value. This suggests that generating value from such AI is not easy for organizations that do not develop the needed capabilities and practices.

"Companies that are serious about AI applications spend the money to hire the right staff and develop the business practices that ensure ECC can improve their business operations, rather than spending money and harnessing massive amounts of data with no obvious benefits."

She added: "Having the proper capabilities in place enables employees to execute the new practices, and the practices, in turn, strengthen the capabilities of the ECC programmes. Such a virtuous cycle can lead to dramatic improvements in operational and financial performance, and customer satisfaction."

Citizen scientists retune Hubble’s galaxy classification

Hundreds of thousands of citizen-science volunteers have helped to overturn almost a century of galaxy classification, in a new study using data from the longstanding Galaxy Zoo project. The new investigation, published in the journal Monthly Notices of the Royal Astronomical Society, uses classifications of over 6000 galaxies to reveal that “well known” correlations between different features are not found in this large and complete sample.

Almost 100 years ago, in 1927, astronomer Edwin Hubble wrote about the spiral galaxies he was observing at the time and developed a model to classify galaxies by type and shape. Known as the “Hubble Tuning Fork” due to its shape, this model takes account of two main features: the size of the central region (known as the ‘bulge’), and how tightly wound any spiral arms are.

Hubble’s model soon became the authoritative method of classifying spiral galaxies and is still used widely in astronomy textbooks to this day. His key observation was that galaxies with larger bulges tended to have more tightly wound spiral arms, lending vital support to the ‘density wave’ model of spiral arm formation.
Spiral structure in the Pinwheel Galaxy (Messier 101), as observed by the Hubble Space Telescope. Credit: NASA, ESA, CXC, SSC, and STScI
Now though, in contradiction to Hubble’s model, the new work finds no significant correlation between the sizes of the galaxy bulges and how tightly wound the spirals are. This suggests that most spirals are not static density waves after all.
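The kind of statistical test behind such a non-detection can be illustrated with a short sketch. The numbers below are synthetic stand-ins, not Galaxy Zoo measurements: two quantities drawn independently for a sample of ~6000 galaxies should produce a Pearson correlation coefficient close to zero, which is the signature of "no significant correlation".

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

# Synthetic stand-ins for ~6000 galaxies: a bulge-to-total light ratio
# and an arm pitch angle drawn independently, i.e. with no built-in link.
rng = np.random.default_rng(0)
n = 6000
bulge_fraction = rng.uniform(0.0, 1.0, n)   # relative bulge size
pitch_angle = rng.uniform(5.0, 40.0, n)     # arm winding, in degrees

r = pearson_r(bulge_fraction, pitch_angle)
# With 6000 independent draws, |r| stays near zero: no correlation.
```

For a sample this large, the expected scatter in r for truly unrelated quantities is roughly 1/sqrt(n) ≈ 0.013, so even a modest measured correlation would stand out clearly.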

Galaxy Zoo Project Scientist and first author of the new work, Professor Karen Masters from Haverford College in the USA explains: “This non-detection was a big surprise because this correlation is discussed in basically all astronomy textbooks – it forms the basis of the spiral sequence described by Hubble.”

Hubble was limited by the technology of the time, and could only observe the brightest nearby galaxies. The new work is based on a sample 15 times larger from the Galaxy Zoo project, where members of the public assess images of galaxies taken by telescopes around the world, identifying key features to help scientists to follow up and analyze in more detail.

“We always thought that the bulge size and winding of the spiral arms were connected”, says Masters. “The new results suggest otherwise, and that has a big impact on our understanding of how galaxies develop their structure.”

There are several proposed mechanisms for how spiral arms form in galaxies. One of the most popular is the density wave model - the idea that the arms are not fixed structures, but caused by ripples in the density of material in the disc of the galaxy. Stars move in and out of these ripples as they pass around the galaxy.

New models, however, suggest that some arms at least could be real structures, not just ripples. These may consist of collections of stars that are bound by each other’s gravity and physically rotate together. This dynamic explanation for spiral arm formation is supported by state-of-the-art supercomputer models of spiral galaxies.
The Hubble Tuning Fork illustrated with images of nearby galaxies from the Sloan Digital Sky Survey (SDSS). Credit: Karen Masters, Sloan Digital Sky Survey
“It’s clear that there is still lots of work to do to understand these objects, and it’s great to have new eyes involved in the process”, adds Brooke Simmons, Deputy Project Scientist for the Galaxy Zoo project.

“These results demonstrate that, over 170 years after spiral structure was first observed in external galaxies, we still don’t fully understand what causes these beautiful features.”

NYU researchers devise an AI-driven imaging system that protects authenticity

NYU Tandon researchers implant “digital watermarks” using a neural network to easily spot manipulated photos and video

To thwart sophisticated methods of altering photos and video, researchers at the NYU Tandon School of Engineering have demonstrated an experimental technique to authenticate images throughout the entire pipeline, from acquisition to delivery, using artificial intelligence (AI).

In tests, this prototype imaging pipeline increased the chances of detecting manipulation from approximately 45 percent to over 90 percent without sacrificing image quality.

Determining whether a photo or video is authentic is becoming increasingly problematic. Sophisticated techniques for altering photos and videos have become so accessible that so-called “deep fakes” — manipulated photos or videos that are remarkably convincing and often include celebrities or political figures — have become commonplace.

Paweł Korus, a research assistant professor in the Department of Computer Science and Engineering at NYU Tandon, pioneered this approach. It replaces the typical photo development pipeline with a neural network, one form of AI, that introduces carefully crafted artifacts directly into the image at the moment of image acquisition. These artifacts, akin to "digital watermarks," are extremely sensitive to manipulation.

“Unlike previously used watermarking techniques, these AI-learned artifacts can reveal not only the existence of photo manipulations but also their character,” Korus said.

The process is optimized for in-camera embedding and can survive image distortion applied by online photo sharing services.

The advantages of integrating such systems into cameras are clear.

“If the camera itself produces an image that is more sensitive to tampering, any adjustments will be detected with high probability,” said Nasir Memon, a professor of computer science and engineering at NYU Tandon and co-author, with Korus, of a paper detailing the technique. “These watermarks can survive post-processing; however, they’re quite fragile when it comes to modification: If you alter the image, the watermark breaks,” Memon said.
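The researchers' method learns these artifacts with a neural network; as a toy stand-in for the general idea of a fragile watermark, the sketch below embeds a keyed pseudo-random bit pattern in an image's least-significant bits and reports what fraction of the pattern survives. Everything here (the function names, the LSB scheme) is an illustrative assumption, not the paper's technique:

```python
import numpy as np

def embed_fragile_watermark(image, key=42):
    """Overwrite the least-significant bit of each 8-bit pixel with a
    keyed pseudo-random pattern (a toy fragile watermark)."""
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | bits

def verify_watermark(image, key=42):
    """Return the fraction of pixels whose LSB still matches the keyed
    pattern: 1.0 for an untouched image, lower where it was edited."""
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return float(np.mean((image & 1) == bits))

# Demo: watermark a random "photo", then tamper with a small patch.
photo = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed_fragile_watermark(photo)
assert verify_watermark(marked) == 1.0   # intact image verifies fully
tampered = marked.copy()
tampered[:16, :16] = 0                   # simulate a local edit
score = verify_watermark(tampered)       # drops below 1.0 in the patch
```

Because the pattern is destroyed wherever pixels change, a per-region version of this check can also localize which part of the image was altered, echoing the paper's point that the learned artifacts reveal the character of a manipulation, not just its existence.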

Most other attempts to determine image authenticity examine only the end product — a notoriously difficult undertaking.

Korus and Memon, by contrast, reasoned that modern digital imaging already relies on machine learning. Every photo taken on a smartphone undergoes near-instantaneous processing to adjust for low light and to stabilize images, both of which take place courtesy of onboard AI. In the coming years, AI-driven processes are likely to fully replace the traditional digital imaging pipelines. As this transition takes place, Memon said that “we have the opportunity to dramatically change the capabilities of next-generation devices when it comes to image integrity and authentication. Imaging pipelines that are optimized for forensics could help restore an element of trust in areas where the line between real and fake can be difficult to draw with confidence.”  

Korus and Memon note that while their approach shows promise in testing, additional work is needed to refine the system. This solution is open-source and can be accessed at https://github.com/pkorus/neural-imaging. The researchers will present their paper, “Content Authentication for Neural Imaging Pipelines: End-to-end Optimization of Photo Provenance in Complex Distribution Channels,” at the Conference on Computer Vision and Pattern Recognition in Long Beach, California, in June.