Sponsor
This material is based upon work supported by the National Science Foundation under Grant Number IIS-1423651.
Document Type
Pre-Print
Publication Date
7-2016
Subjects
Computer vision, Pattern perception, Pattern recognition systems, Adaptive computing systems
Abstract
We describe a method for performing active localization of objects in instances of visual situations. A visual situation is an abstract concept (e.g., “a boxing match”, “a birthday party”, “walking the dog”, “waiting for a bus”) whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. Our system combines given and learned knowledge of the structure of a particular situation, and adapts that knowledge to a new situation instance as it actively searches for objects. More specifically, the system learns a set of probability distributions describing spatial and other relationships among relevant objects. The system uses those distributions to iteratively sample object proposals on a test image, but also continually uses information from those proposals to adaptively modify the distributions based on what the system has detected. We test our approach’s ability to efficiently localize objects, using a situation-specific image dataset created by our group. We compare the results with several baselines and variations on our method, and demonstrate the strong benefit of using situation knowledge and active context-driven localization. Finally, we contrast our method with several other approaches that use context as well as active search for object localization in images.
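The abstract describes a sample-score-update loop: proposals are drawn from learned probability distributions over object locations, and each detection is fed back to reshape those distributions. The sketch below illustrates that loop in Python under loose assumptions; the class `SituationModel`, the `detector` scoring function, and the Gaussian location priors are all hypothetical stand-ins, not the paper's actual model or implementation.

```python
import numpy as np

class SituationModel:
    """Hypothetical situation knowledge: one Gaussian location prior per
    object category, adapted as objects are detected."""

    def __init__(self, priors):
        # priors: {category: (mean_xy, cov_xy)}, assumed learned from
        # training instances of the situation.
        self.priors = dict(priors)

    def sample_proposal(self, category, rng):
        # Sample a candidate object location from the current prior.
        mean, cov = self.priors[category]
        return rng.multivariate_normal(mean, cov)

    def condition_on_detection(self, category, location, weight=0.5):
        # Simplified adaptation step: re-center the category's prior
        # toward the detected location. The paper's method updates
        # relationships among objects; this only updates one marginal.
        mean, cov = self.priors[category]
        new_mean = (1 - weight) * mean + weight * np.asarray(location)
        self.priors[category] = (new_mean, cov)


def active_localize(image, model, detector, categories, budget=100, thresh=0.8):
    """Iteratively sample proposals, score them with a detector (assumed
    to return a confidence in [0, 1]), and adapt the model on success."""
    rng = np.random.default_rng(0)
    found = {}
    for _ in range(budget):
        remaining = [c for c in categories if c not in found]
        if not remaining:
            break
        cat = remaining[0]  # a real system would prioritize categories
        loc = model.sample_proposal(cat, rng)
        score = detector(image, cat, loc)
        if score >= thresh:
            found[cat] = loc
            model.condition_on_detection(cat, loc)  # adapt the search
    return found
```

In this toy version the feedback step only shifts the detected category's own prior; the method in the paper conditions the distributions of the remaining, undetected objects on what has been found, which is what makes the search context-driven.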
Persistent Identifier
http://archives.pdx.edu/ds/psu/18312
Citation Details
Quinn, M. H., Rhodes, A. D., & Mitchell, M. (2016). Active object localization in visual situations. arXiv preprint arXiv:1607.00548.