Document Type

Poster

Publication Date

3-2018

Subjects

Machine learning, Memristors, Human-computer interaction, Computer vision

Abstract

This poster addresses the development of a new Machine Learning (ML) mechanism, the Sensory Relevance Model (SRM), as a means of splitting information-processing tasks into two sub-tasks with more intuitive properties. Specifically, SRMs are a front-end for other ML techniques, re-mapping the input data to a similar space with significantly more sparsity, achieved through the transformation and suppression of inputs irrelevant to the task. Prior work has attempted to reveal this information for Neural Networks (NNs) either as a post-processing step via saliency maps or through a simple masking of the input achieved with a dot product (so-called "attention" models). In contrast, SRMs integrate this functionality directly into the ML method as an intermediate step. As a consequence, the transformation realized by SRMs allows for visualizing the intermediate state of the network in a way that has not been previously achieved. This intermediate step may also be manipulated to follow some teacher signal, analogous to teaching students in classrooms through worked problems rather than only showing them input-output pairs, a technique proven to significantly accelerate learning and improve generalization.
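
To make the idea concrete, below is a minimal sketch, not drawn from the poster itself, of how an SRM-style front-end might be wired ahead of a downstream model: a small relevance network predicts a per-element map in [0, 1], the map multiplicatively suppresses irrelevant inputs to yield a sparser re-mapping, and the same map can be visualized or regularized toward a teacher signal. All module names, variable names, and loss weights are illustrative assumptions.

```python
# Hypothetical SRM front-end sketch (PyTorch); not the poster's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRMFrontEnd(nn.Module):
    """Re-maps an input to a sparser version of itself by predicting a
    per-element relevance map and suppressing irrelevant elements."""
    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        # Small network that scores the relevance of each input element.
        self.relevance_net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, in_features),
            nn.Sigmoid(),  # relevance values in [0, 1]
        )

    def forward(self, x: torch.Tensor):
        relevance = self.relevance_net(x)  # intermediate, visualizable state
        sparse_x = x * relevance           # suppress irrelevant inputs
        return sparse_x, relevance

# Usage sketch: the re-mapped input feeds a downstream classifier, and the
# relevance map can be pulled toward a teacher signal (the "worked problem").
frontend = SRMFrontEnd(in_features=784)
classifier = nn.Linear(784, 10)

x = torch.randn(8, 784)                  # batch of flattened inputs
labels = torch.randint(0, 10, (8,))
teacher_relevance = torch.rand(8, 784)   # hypothetical teacher annotation

sparse_x, relevance = frontend(x)
logits = classifier(sparse_x)

task_loss = F.cross_entropy(logits, labels)
teacher_loss = F.mse_loss(relevance, teacher_relevance)
sparsity_loss = relevance.abs().mean()   # encourage suppression of inputs

loss = task_loss + 0.1 * teacher_loss + 0.01 * sparsity_loss
loss.backward()
```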

Refining and advancing SRM theory may lead to significant changes in the already-broadly-applicable field of ML. The target of this work is to improve the explainability and robustness of ML techniques. In a world of increasingly automatic data processing through black-box methods, the involvement of human operators has been diminishing. However, the communication of knowledge and insight is critical for advancing analysis techniques, and non-theoretical fields looking to adopt ML techniques often want to validate network operation before allowing these techniques to make mission-critical decisions. SRMs are a feasible step towards models that combine the high performance achieved by state-of-the-art NNs on real-world problems with internal mechanisms that are readily interpretable by human operators.

Persistent Identifier

http://archives.pdx.edu/ds/psu/25111
