Presentation Type

Poster

Location

Portland State University

Start Date

5-7-2019 11:00 AM

End Date

5-7-2019 1:00 PM

Subjects

Neural networks -- Algorithms, Machine learning, Categories (Mathematics), Neural networks -- Classifiers, Explanation

Abstract

Neural networks (NNs) have become the basis of almost all state-of-the-art machine learning algorithms and classifiers. While NNs have been shown to generalize well to real-world examples, researchers have struggled to explain, on an intuitive level, why they work. We designed several methods to explain the decisions of two state-of-the-art NN classifiers, ResNet and All-CNN, in the context of the Japanese Society of Radiological Technology (JSRT) lung nodule dataset and the CIFAR-10 image dataset. The leading explanation methods LIME and Grad-CAM generate variations of heat maps that represent the regions of the input the NN deems salient. We analyze the salient regions these algorithms highlight, show how their explanations may be misleading, and discuss future directions, including methods that construct full-color images rather than heat maps to provide more complete explanations of NN classifiers. This work is relevant to sensitive problems and fields that require validity in the decisions made by a classifier, such as medical imaging or fraud detection.
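As context for how such saliency heat maps arise: a Grad-CAM map is the ReLU of a gradient-weighted sum of a network's convolutional feature maps. A minimal NumPy sketch with synthetic activations and gradients (the function and array names here are illustrative, not taken from the poster):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM sketch: weight each feature map by the spatial mean of
    its gradient, sum the weighted maps, and keep only positive evidence.

    feature_maps: (K, H, W) activations from the last conv layer
    gradients:    (K, H, W) d(class score) / d(activations)
    Returns an (H, W) heat map normalized to [0, 1].
    """
    # alpha_k: global-average-pooled gradient for each channel k
    alphas = gradients.mean(axis=(1, 2))                              # (K,)
    # Weighted combination of feature maps, then ReLU
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize for display as a heat map
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Synthetic example: 4 channels of 8x8 activations and gradients
rng = np.random.default_rng(0)
A = rng.random((4, 8, 8))
dA = rng.standard_normal((4, 8, 8))
heat = grad_cam(A, dA)
print(heat.shape)  # (8, 8)
```

In a real classifier the activations and gradients come from a backward pass through the trained network; the point of the sketch is only the weighting-and-ReLU step that turns them into the heat maps the abstract discusses.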

Rights

© Copyright the author(s)

IN COPYRIGHT:
http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DISCLAIMER:
The purpose of this statement is to help the public understand how this Item may be used. When there is a (non-standard) License or contract that governs re-use of the associated Item, this statement only summarizes the effects of some of its terms. It is not a License, and should not be used to license your Work. To license your own Work, use a License offered at https://creativecommons.org/

Persistent Identifier

https://archives.pdx.edu/ds/psu/28612

Title

Explanation Methods for Neural Networks
