This material is based upon work supported by the National Science Foundation under Grant Nos. 1018967 and 0749348, as well as the Laboratory Directed Research and Development program at Los Alamos National Laboratory (Project 20090006DR).
Santa Fe Institute Working Papers
Computer vision, Image processing -- Digital techniques, Artificial intelligence, Support vector machines, Pattern recognition systems
Hierarchical networks are known to achieve high classification accuracy on difficult machine-learning tasks. For many applications, a clear explanation of why the data was classified a certain way is just as important as the classification itself. However, the complexity of hierarchical networks makes them ill-suited for existing explanation methods. We propose a new method, contribution propagation, that gives per-instance explanations of a trained network's classifications. We give theoretical foundations for the proposed method, and evaluate its correctness empirically. Finally, we use the resulting explanations to reveal unexpected behavior of networks that achieve high accuracy on visual object-recognition tasks using well-known data sets.
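The abstract does not spell out the propagation rules, so the following is only an illustrative sketch of the general idea: a linear toy network in which each node's contribution is divided among its inputs in proportion to their weighted activations (`w * a`). The proportional-split rule, the `forward` and `propagate_contributions` helpers, and the toy weights are all assumptions for illustration, not the authors' actual formulation for hierarchical networks.

```python
# Illustrative sketch only -- NOT the paper's method. A per-instance
# "contribution" pass over a tiny linear feedforward network, splitting each
# node's contribution among its inputs in proportion to w * a (an assumed rule).

def forward(layers, x):
    """Compute per-layer activations.
    layers[l][i][j] = weight from lower node j to upper node i."""
    activations = [x]
    for W in layers:
        x = [sum(w * a for w, a in zip(row, x)) for row in W]
        activations.append(x)
    return activations

def propagate_contributions(layers, activations, out_index):
    """Propagate the chosen output's value back to the input features.
    Each node's contribution is split among its inputs proportionally
    to each input's weighted activation."""
    contrib = [0.0] * len(activations[-1])
    contrib[out_index] = activations[-1][out_index]
    for W, acts in zip(reversed(layers), reversed(activations[:-1])):
        lower = [0.0] * len(acts)
        for i, row in enumerate(W):
            total = sum(w * a for w, a in zip(row, acts))
            if total == 0.0:
                continue  # no signal flowed through this node
            for j, (w, a) in enumerate(zip(row, acts)):
                lower[j] += contrib[i] * (w * a) / total
        contrib = lower
    return contrib  # per-input contributions; for a linear net they sum to the output

# Toy example: 2 inputs -> 2 hidden nodes -> 1 output.
layers = [[[1.0, 2.0], [0.5, -1.0]], [[1.0, 1.0]]]
x = [3.0, 1.0]
acts = forward(layers, x)
contrib = propagate_contributions(layers, acts, 0)
print(acts[-1][0], contrib)  # output score and its per-feature explanation
```

In this linear setting the contributions conserve the output score (they sum to it exactly), which is the kind of per-instance decomposition an explanation method of this family aims to provide for each classified input.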
Landecker, Will; Thomure, Michael David; Bettencourt, Luis M.A.; Mitchell, Melanie; Kenyon, Garrett T.; and Brumby, Steven P., "Interpreting Individual Classifications of Hierarchical Networks" (2013). Computer Science Faculty Publications and Presentations. 165.