Project Details


Computer interpretation of images has taken huge strides in recent years, but even the most modern algorithms cannot come close to matching human capabilities on simple visual tasks. For example, in a brief glance at an image, people reflexively classify the objects in it by the categories they belong to: people, animals, tools, and other significant classes. This lets us grasp the objects' meaning in the image, for example, recognizing that a scene with many pieces of food is probably a dinner table. Because even modern computer vision systems cannot make such a classification, they cannot automatically detect when an object in a scene does not belong, that is, when it is abnormal relative to the categories present in the scene. Detecting such 'oddball' or atypical objects is essential to understanding visual scenes, because objects that don't belong are often the ones that play the most important role and require immediate action (like a cat on the dinner table). Studies of human subjects have shown that humans are indeed especially adept at detecting atypical items, which often draw our visual attention even before we become consciously aware of them.

This project aims to develop algorithmic techniques that endow computer vision systems with the same ability. By adapting modern vision techniques to mimic the way human observers classify visual atypicality, researchers will develop computer systems that can examine an image and automatically detect abnormal objects, as well as identify the nature of the abnormality and quantify its degree. The project involves a collaboration among researchers at multiple universities and across multiple scientific specialties, including both computer vision and human vision. The result will be a new and useful class of computer vision techniques that can be applied to visual image understanding in many contexts.
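As a purely illustrative sketch (not the project's actual method), one simple way to frame "degree of abnormality" is to score each detected object by how poorly its category fits the scene's context. The scene types, category names, and co-occurrence table below are all hypothetical placeholders.

```python
# Toy scene-context atypicality scoring. Illustrative only: the context model
# is a hand-made table of hypothetical P(object category | scene type) values,
# standing in for statistics a real system would learn from data.
CONTEXT_MODEL = {
    "dinner table": {"food": 0.50, "tableware": 0.35, "person": 0.14, "cat": 0.01},
    "street":       {"car": 0.40, "person": 0.35, "sign": 0.20, "food": 0.05},
}

def atypicality(scene_type, object_label, model=CONTEXT_MODEL, floor=1e-3):
    """Score in [0, 1): higher means the object fits the scene worse."""
    p = model.get(scene_type, {}).get(object_label, floor)
    return 1.0 - p

def rank_oddballs(scene_type, object_labels):
    """Order detected objects from most to least out of place."""
    return sorted(object_labels, key=lambda o: atypicality(scene_type, o), reverse=True)

print(rank_oddballs("dinner table", ["food", "tableware", "cat"]))
# the cat ranks first: it is the 'oddball' relative to the scene's categories
```

Under this framing, detection, identifying the abnormality, and quantifying its degree correspond to ranking objects, naming the worst-fitting category, and reading off its score.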

Effective start/end date: 5/15/13 to 4/30/17


  • National Science Foundation: $339,962.00

