Image recognition (Computer vision), Computer algorithms, Mathematical models
Deep convolutional neural networks (DCNNs) require millions of labeled training examples for image classification and object detection tasks, restricting these models to domains where such datasets are available. We explore the use of unsupervised sparse coding applied to stereo-video data to help alleviate the need for large amounts of labeled data. In this paper, we show that unsupervised sparse coding is able to learn disparity- and motion-sensitive basis functions when exposed to unlabeled stereo-video data. Additionally, we show that a DCNN that incorporates unsupervised learning exhibits better performance than fully supervised networks. Furthermore, finding a sparse representation in the first layer, which infers a sparse set of activations, allows for consistent performance over varying initializations and orderings of training examples, compared with representations computed from a single convolution. Finally, we compare activations between the unsupervised sparse-coding layer and the supervised layer when applied to stereo-video data, and show that sparse coding yields an encoding that is depth selective, whereas encodings from a single convolution do not. These results indicate promise for unsupervised sparse-coding approaches in real-world computer vision tasks in domains with limited labeled training data.
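The "sparse representation in the first layer" mentioned above is obtained by inference: given a dictionary of basis functions, the layer solves for a sparse set of activations that reconstruct the input, rather than applying a single feed-forward convolution. A minimal sketch of this idea, using ISTA (iterative soft-thresholding) on a non-convolutional toy dictionary, is shown below; the dictionary sizes, sparsity penalty, and iteration count are illustrative assumptions, not the paper's actual settings or solver.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iters=200):
    """Infer sparse activations a minimizing 0.5*||x - D a||^2 + lam*||a||_1.

    D: (input_dim, n_basis) dictionary with unit-norm columns.
    x: (input_dim,) input signal (e.g., an image patch).
    """
    # Step size from the Lipschitz constant of the gradient (||D||_2^2).
    L = np.linalg.norm(D, ord=2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)              # gradient of the reconstruction term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy demo: recover a sparse code for a signal built from 5 dictionary elements.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                # unit-norm basis functions
a_true = np.zeros(128)
a_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
x = D @ a_true
a_hat = ista_sparse_code(D, x)
print("active coefficients:", np.count_nonzero(np.abs(a_hat) > 1e-3))
```

Because the activations are the solution of an optimization problem rather than a single linear map of the input, the resulting code depends on the dictionary and input but not on arbitrary ordering effects, which is consistent with the stability over initializations reported in the abstract.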
Lundquist, S. Y., Mitchell, M., & Kenyon, G. T. (2017). Sparse Coding on Stereo Video for Object Detection. arXiv preprint arXiv:1705.07144.