Researchers from Queen Mary University of London (QMUL) have built the first computer program that can recognize hand-drawn sketches better than humans.

Known as Sketch-a-Net, the program is capable of correctly identifying the subject of sketches 74.9 per cent of the time, compared to humans, who managed a success rate of only 73.1 per cent.

As sketching becomes more relevant with the increase in the use of touchscreens, the development could provide a foundation for new ways to interact with computers.

Touchscreens could understand what you are drawing, enabling you to retrieve a specific image by drawing it with your fingers, which is more natural than keyword searches for finding items such as furniture or fashion accessories. The improvement could also aid police forensics when an artist’s impression of a criminal needs to be matched to a mugshot or CCTV database.

The research, which was accepted at the British Machine Vision Conference, also showed that the program performed better at determining finer details in sketches. For example, it was able to successfully distinguish the specific bird variants ‘seagull’, ‘flying-bird’, ‘standing-bird’ and ‘pigeon’ with 42.5 per cent accuracy, compared to humans, who achieved only 24.8 per cent.

Sketches are very intuitive to humans and have been used as a communication tool for thousands of years, but recognising free-hand sketches is challenging because they are abstract, varied and consist of black-and-white lines rather than coloured pixels like a photo. Solving sketch recognition will lead to a greater scientific understanding of visual perception.

Sketch-a-Net is a ‘deep neural network’ – a type of computer program designed to emulate the processing of the human brain. It is particularly successful because it accommodates the unique characteristics of sketches, in particular the order in which the strokes were drawn. This information was previously ignored, but it is especially important for understanding drawings on touchscreens.
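To make the idea concrete, the sketch below shows one simplistic way a stroke-aware network could be set up: the strokes are rasterised cumulatively into several image channels so that the network also sees the order in which they were drawn, and a small convolutional network classifies the result. This is only an illustrative toy in Python/PyTorch, not the published Sketch-a-Net architecture; the names rasterise_with_order and TinySketchNet, and all layer sizes and choices, are hypothetical.

```python
# Illustrative toy only; the published Sketch-a-Net architecture differs.
import numpy as np
import torch
import torch.nn as nn

def rasterise_with_order(strokes, size=64, n_channels=3):
    """strokes: list of (N_i, 2) arrays of x/y points in [0, 1], in drawing order.
    Channel k contains all strokes drawn up to fraction (k+1)/n_channels of the sketch,
    so stroke order is encoded across the channels."""
    img = np.zeros((n_channels, size, size), dtype=np.float32)
    for k in range(n_channels):
        keep = strokes[: max(1, int(len(strokes) * (k + 1) / n_channels))]
        for pts in keep:
            xy = np.clip((pts * (size - 1)).astype(int), 0, size - 1)
            img[k, xy[:, 1], xy[:, 0]] = 1.0   # crude rasterisation of stroke points
    return torch.from_numpy(img)

class TinySketchNet(nn.Module):
    """A small CNN over the order-aware channels; hypothetical, for illustration only."""
    def __init__(self, n_classes=250, n_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_classes)

    def forward(self, x):                      # x: (batch, n_channels, 64, 64)
        return self.classifier(self.features(x).flatten(1))
```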

Timothy Hospedales, co-author of the study and Lecturer in the School of Electronic Engineering and Computer Science, QMUL, said: “It’s exciting that our computer program can solve the task even better than humans can. Sketches are an interesting area to study because they have been used since prehistoric times for communication and now, with the increase in use of touchscreens, they are becoming a much more common communication tool again. This could really have a huge impact for areas such as police forensics, touchscreen use and image retrieval, and ultimately will help us get to the bottom of visual understanding.”

Still from "Jurassic World" (C) 2015 Universal Studios

Conference to present special event with an Industrial Light & Magic 40th Anniversary Production

SIGGRAPH 2015 has announced the complete lineup for this year’s Production Sessions program. As part of its Computer Animation Festival, SIGGRAPH 2015 will hold Production Sessions, in which the world’s most talented computer graphics experts walk through the processes and techniques used to create the compelling content featured in blockbuster films from the past year. The 42nd annual SIGGRAPH Conference will take place 9-13 August 2015 at the Los Angeles Convention Center in Los Angeles, CA.

The conference will also present a special production event to celebrate Industrial Light & Magic’s (ILM) 40th anniversary on Monday, 10 August. ILM has set the standard for visual effects, creating some of the most memorable and even iconic images in the history of modern filmmaking. The company’s pioneering efforts in computer graphics, digital compositing, morphing, and character animation, among a long list of other advances, have changed the way visual effects are utilized in film, television, and other forms of entertainment.

“The 2015 Production Sessions are not to be missed,” said Roy C. Anthony, Production Sessions program chair for SIGGRAPH 2015. “In addition to our amazing program, we’re celebrating ILM’s 40th anniversary with a special presentation. ILM has had such a massive impact on people around the world, and especially in production roles, and we wanted to commemorate their work. As always, the production sessions feature the creative masterminds behind some of the best special effects, computer graphics, and animation in film this past year. Many of these experts have broken new ground with new methods and practices to produce things that have never been seen before. We’re excited to have them join us to talk about their work and share what they do with attendees.”

The complete 2015 Production Session lineup includes*:

  • Building San Fransokyo: Creating the World of Disney’s “Big Hero 6”
    Cinematography and technical supervisors from Walt Disney Animation Studios will be on the panel to explain the creation of the rich and vibrant metropolis of San Fransokyo. From art direction to final frames, the discussion will cover the new rendering approaches used to bring the detailed world of “Big Hero 6” to life.
  • Disney-Pixar’s “Lava”: Moving Mountains
    Pixar Animation Studios will lead the panel discussion on “Lava.” A love letter to volcanoes and the beauty of tropical islands, “Lava” is a unique collaboration among Pixar’s artists. From designing the main character to integrating new lighting tools, the panelists will explain how “Lava” came to life and how each member was inspired to create this vision.
  • Weta Digital Presents: Over 20 Years of Creativity and Innovation
    From the visual effects team behind “The Hobbit” trilogy, Weta Digital will examine the extraordinary undertaking, from ground-breaking CG character development to battle sequences to fire and water effects. For more than 20 years, Weta Digital has used creativity and innovation to usher in a new era of digital filmmaking.
  • Double Negative Presents: The Visual Effects of “Interstellar”
    The epic science-fiction film “Interstellar” was a true collaboration of art and science. Double Negative will talk about all aspects of the visual effects work on the film, from the use of traditional, practical techniques to the role that theoretical physics played in the design of the visual effects. Professor Emeritus Kip Thorne will be on hand to explain how he and Double Negative developed a new renderer for the project, in which gravitationally warped space, a 4,000-foot wave and a virtual environment representing higher spatial dimensions were created.
  • Inside the Mind: The Making of Disney-Pixar’s “Inside Out”
    From concept art to the bright and vibrant animated world, this session will have Pixar Animation Studios describe the process of designing, building and bringing the world from inside a young girl’s mind to life. They will discuss the challenges of “Inside Out,” including turning emotions into characters and translating the mind into a set where adventure unfolds.
  • From Post-it to Post Production, The Uncompromising Journey of “The Book of Life”
    Award-winning director Jorge Gutierrez and the art, animation, CG and VFX leads behind the Golden Globe™-nominated feature “The Book of Life” present a behind-the-scenes look at this visually inspiring film. Working from Gutierrez’s unique artistic vision and translating it into 3D, the team at Reel FX Creative Studios will talk about the ideas behind the designs, innovative rigging and animation techniques, lighting and texture challenges, and maintaining the director’s vision.
  • Image Engine Presents: Breathing Life Into “CHAPPiE”
    The team behind director Neill Blomkamp’s sci-fi comedy, “CHAPPiE,” will discuss how a small production team brought “CHAPPiE,” a digitally created childlike robot, to life. Image Engine Design will present a behind-the-scenes look at the challenges, from concept design to final integration.
  • DreamWorks Animation Presents: “HOME”: Just Another Post-Apocalyptic-Alien-Invasion-Buddy-Road-Movie?
    Director Tim Johnson and the creative team from the animated feature “HOME” explain the complexity behind “simple” alien characters and the challenges of a character who constantly changes hairstyles. The DreamWorks Animation SKG team will discuss their research and the technology and techniques behind this feature film.
  • The Park is Open: Journey to “Jurassic World” with Industrial Light & Magic
    The Industrial Light & Magic team behind the worldwide blockbuster hit “Jurassic World” shares the on-set visualization tools used during production, as well as the new visual effects techniques developed for modeling and texturing, environment creation and advanced motion capture.
  • Fix the Future: Industrial Light & Magic and Visual Effects of “Tomorrowland”
    The Industrial Light & Magic team behind the visual effects of “Tomorrowland” will talk about their methodologies and the architecture behind the CG city. They will also open up about the production challenges and workflow solutions that were developed to deliver this first-ever 4K release.
  • “The Peanuts Movie”: From Comic Strip to Feature Film
    Bringing the iconic characters of the beloved comic strip “Peanuts” to life was an exciting and unprecedented prospect for the team at Blue Sky Studios. The Blue Sky Studios team will share insight into their design and animation style, and into the artistic and technical challenges of bringing the classic pen lines of “Peanuts” creator Charles Schulz to the big screen.
  • The Making of Marvel’s “Ant-Man”
    The creative minds from Marvel Entertainment, Double Negative, Luma Pictures and Method Studios will be on hand for a behind-the-scenes session that explores the visual effects in the making of “Ant-Man.”
  • The Making of the Characters of Marvel’s “Avengers: Age of Ultron”
    For the making of “Avengers: Age of Ultron,” the teams from Marvel Entertainment, Industrial Light & Magic and Lola VFX will discuss what went on behind the scenes, from animation to visual effects, along with the technologies and processes involved.

For more information about this year’s Production Sessions or other SIGGRAPH 2015 programs, follow the conference on Facebook, Twitter, Google+, YouTube, Instagram, or the ACM SIGGRAPH blog.

New transregional collaborative research centre for the University of Konstanz and the University of Stuttgart

The German Research Foundation (DFG) has approved the creation of a new transregional collaborative research centre (SFB/Transregio) for the University of Konstanz, the University of Stuttgart, and the Max Planck Institute for Biological Cybernetics in Tübingen. The new SFB/Transregio 161 "Quantitative Methods for Visual Computing" is concerned with the computer-assisted processing and representation of image information. The goal is to make the quality and applicability of data and images determinable and measurable. The DFG will support the research for the next four years with approximately eight million euros.

"The representation and analysis of visual information is a central research focus of the department of computer and information science", explains Professor Ulrich Rüdiger, rector of the University of Konstanz. "Through the acquisition of the transregional collaborative research centre 'Quantitative Methods for Visual Computing', we can develop this key research area further."

SFB/Transregio 161 focusses on the computer-assisted processing and representation of image information. This research will influence future applications in the research and industry sectors, as well as in the private market. Examples include the visualisation of collected data or simulations, virtual maps and tours, and computer-generated film scenes. "Computer scientists in various fields, along with engineers and psychologists, are developing new techniques in order to simplify the representation and handling of ever-increasing amounts of data, and to improve the quality of computer-generated images", says computer scientist Professor Daniel Weiskopf, spokesperson for the new research project in Stuttgart. "Until now, the quantifiability of visual computing methods has been neglected. We want to take on this challenge."

The goal of the approximately 40 scientists comprising the new research group is to make the quality and accuracy of visual computing methods determinable and measurable. The requirements of different applications and users will also be coordinated. "We will carry out studies and measurements, verify visualisations, and examine interaction opportunities", explains Professor Oliver Deussen, vice-spokesperson for the collaborative research project and professor for computer graphics and media design at the University of Konstanz. "In this way, existing techniques and algorithms will be optimized and developed further."

Examples of research objectives include evaluating the effects of virtual environments and city models on humans; capturing and illustrating three-dimensional data from real scenes or simulations; and the development of new technologies such as brain-computer interfaces. Does the representation include all important information? How difficult is it for humans to comprehend this information? What added value do these interaction opportunities provide? In order to establish a comprehensive quantitative foundation and to promote progress in this field, these and similar questions are to be answered through the upcoming research activities.

Collaborative research centres supported by the DFG are installed as research institutions at universities for a period of up to 12 years. Unlike a conventional collaborative research centre, an SFB/Transregio extends across several research locations. The SFB/Transregio 161 "Quantitative Methods for Visual Computing" will begin its research on 1 July.

Viewing videos helps system enhance its understanding of objects

A research group at Disney Research Pittsburgh has developed a computer vision system that, much like humans, can continuously improve its ability to recognize objects by picking up hints while watching videos.

Like most other object recognition systems, the Disney system builds a conceptual model of an object, be it an airplane or a soap dispenser, by using a learning algorithm to analyze a number of example images of the object.

What's different about the Disney system is that it then uses that model to identify objects, when it can, in videos. As it does, it sometimes is able to glean new information about such objects, enabling it to make its own model of the object more complex. And that in turn enables the system to more readily recognize such objects in a wider variety of conditions.

"This process continues, potentially indefinitely, over the lifetime of the recognition system," said Leonid Sigal, a senior research scientist at Disney Research Pittsburgh. "This is a learning system that is continuously evolving through unsupervised experience to build a more complete and complex model of the world."

Sigal and his co-investigators - Alina Kuznetsova and Bodo Rosenhahn of Leibniz University Hannover, and former Disney post-doctoral researcher Sung Ju Hwang, now of Ulsan National Institute of Science and Technology in South Korea - will present their findings at the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, June 7-12, in Boston.

Recognizing objects in images, though often easy for humans, remains a challenge for automated systems. Systems that learn to recognize objects using one set of images may have difficulty recognizing those same objects in the real world or under different sets of conditions, known as domains.

Rather than try to get a system to more accurately recognize objects using its original model for that object in new domains, the Disney group took a different approach - expanding the object domain incrementally. That means that the system's model for each object will be continuously fine-tuned as the system encounters new information.
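As a rough illustration of how such incremental expansion might work in code, the sketch below implements a generic self-training loop: a classifier trained on labelled example images scores candidate detections in video frames and folds the most confident ones back into its training data. This is a minimal sketch of the general idea, not the algorithm Disney presented at CVPR 2015; the featurise function, the 0.9 confidence threshold, and the choice of scikit-learn's SGDClassifier are all assumptions made for illustration.

```python
# Illustrative self-training loop in the spirit of incremental domain expansion;
# a simplified sketch, not the exact method from the CVPR 2015 paper.
import numpy as np
from sklearn.linear_model import SGDClassifier

def incremental_update(clf, video_frames, featurise, threshold=0.9):
    """Fold confidently classified video detections back into the model.
    `featurise` maps a frame (or candidate region) to a feature vector; it is
    a stand-in for whatever detection/feature pipeline is actually used."""
    new_X, new_y = [], []
    for frame in video_frames:
        x = featurise(frame).reshape(1, -1)
        probs = clf.predict_proba(x)[0]
        label = int(np.argmax(probs))
        if probs[label] >= threshold:          # keep only high-confidence detections
            new_X.append(x[0])
            new_y.append(label)
    if new_X:                                  # fine-tune the model on the new examples
        clf.partial_fit(np.array(new_X), np.array(new_y))
    return clf

# Hypothetical usage: an initial model trained on labelled example images
# (X_train, y_train assumed given), then updated as new video arrives.
# clf = SGDClassifier(loss="log_loss").fit(X_train, y_train)
# clf = incremental_update(clf, video_frames, featurise)
```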

One potential problem is that the system, which does this fine-tuning without human supervision, may start ascribing attributes to an object that aren't pertinent, leading to errors in detection; thus far, however, the Disney researchers have not observed this "domain drift".

They tested their incremental learning method against several other leading object recognition methods, using two standard video datasets that included a variety of objects found in the home. In most instances, it outperformed the other methods in detecting items such as microwave ovens, mugs and stoves, and demonstrated that it not only got better with experience at detecting these objects in the videos, but also at detecting objects from its original training images.

ARLINGTON, VA – The following two brief releases highlight recent achievements by the Office of Naval Research. The first development is an implantable computer chip that gives the blind a chance at sight. The second is a new computer network aimed at illuminating all the available Naval medical resources in one visual map display.
