Sponsor
This material is based upon work supported by the National Science Foundation under Grant Nos. 1018967 and 0749348, as well as the Laboratory Directed Research and Development program at Los Alamos National Laboratory (Project 20090006DR).
Published In
Santa Fe Institute Working Papers
Document Type
Working Paper
Publication Date
2013
Subjects
Computer vision, Image processing -- Digital techniques, Artificial intelligence, Support vector machines, Pattern recognition systems
Abstract
Hierarchical networks are known to achieve high classification accuracy on difficult machine-learning tasks. For many applications, a clear explanation of why the data were classified a certain way is just as important as the classification itself. However, the complexity of hierarchical networks makes them ill-suited for existing explanation methods. We propose a new method, contribution propagation, that gives per-instance explanations of a trained network's classifications. We give theoretical foundations for the proposed method, and evaluate its correctness empirically. Finally, we use the resulting explanations to reveal unexpected behavior of networks that achieve high accuracy on visual object-recognition tasks using well-known data sets.
Persistent Identifier
http://archives.pdx.edu/ds/psu/18313
Citation Details
Landecker, Will; Thomure, Michael David; Bettencourt, Luis M.A.; Mitchell, Melanie; Kenyon, Garrett T.; and Brumby, Steven P., "Interpreting Individual Classifications of Hierarchical Networks" (2013). Computer Science Faculty Publications and Presentations. 165.
http://archives.pdx.edu/ds/psu/18313
Description
Santa Fe Institute Working Paper 2013-02-007. The final version subsequently appeared in Computational Intelligence and Data Mining (CIDM), 2013 IEEE Symposium on (pp. 32-38). IEEE. DOI: 10.1109/CIDM.2013.6597214