Document Type

Pre-Print

Publication Date

4-2017

Subjects

Emerging technologies, Memristors -- Technological innovations, Neural networks (Computer science), Sparse coding -- Algorithms

Abstract

Memristive crossbars have become a popular means for realizing unsupervised and supervised learning techniques. Often, to preserve mathematical rigor, the crossbar itself is separated from the neuron capacitors. In this work, we sought to simplify the design, removing extraneous components to consume significantly less power at a minimal cost in accuracy. This work provides derivations for the design of such a network, named the Simple Spiking Locally Competitive Algorithm, or SSLCA, as well as CMOS designs and results on the CIFAR and MNIST datasets. Compared to a non-spiking model that scored 33% on CIFAR-10 with a single-layer classifier, this hardware scored 32% accuracy. When used with a state-of-the-art deep learning classifier, the non-spiking model achieved 82% and our simplified, spiking model achieved 80%, while compressing the input data by 79%. Compared to a previously proposed spiking model, our proposed hardware consumed 99% less energy to do the same work at 21 times the throughput. Accuracy was maintained under online learning up to a write variance of 3% and a read variance of 40%. The proposed architecture's excellent accuracy and significantly lower energy usage demonstrate the utility of our innovations. This work provides a means for extremely low-energy sparse coding in mobile devices, such as cellular phones, or for the highly sparse coding needed by self-driving cars or robots that must integrate data from multiple high-resolution sensors.
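For context on the sparse-coding task the SSLCA hardware targets, the sketch below shows the classical non-spiking Locally Competitive Algorithm (LCA) of Rozell et al., which the abstract's "non-spiking model" baseline corresponds to in spirit. This is a minimal illustrative implementation, not the authors' SSLCA circuit or their exact software baseline; the function name, dictionary size, and parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
    """Sparse-code input x against dictionary Phi using the classical
    (non-spiking) Locally Competitive Algorithm (Rozell et al.).

    x   : (N,) input vector
    Phi : (N, M) dictionary with roughly unit-norm columns
    lam : sparsity threshold
    Returns an (M,) sparse coefficient vector.
    """
    b = Phi.T @ x                               # feedforward drive to each neuron
    G = Phi.T @ Phi - np.eye(Phi.shape[1])      # lateral inhibition (competition) weights
    u = np.zeros(Phi.shape[1])                  # membrane potentials
    for _ in range(n_steps):
        # Soft-threshold the potentials to get sparse activations.
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        # Leaky integration driven by the input, inhibited by active neighbors.
        u += (dt / tau) * (b - u - G @ a)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 128))
    Phi /= np.linalg.norm(Phi, axis=0)          # normalize dictionary columns
    x = Phi[:, 3] + 0.05 * rng.standard_normal(64)  # noisy copy of one dictionary element
    a = lca_sparse_code(x, Phi)
    print("nonzero coefficients:", np.count_nonzero(a))
```

The sparse coefficient vector produced this way is what a downstream classifier (single-layer or deep) would consume; the paper's contribution is a simplified spiking circuit that approximates this computation at far lower energy.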

Description

Originally published on arXiv.org. This is the author manuscript of a paper submitted to IEEE Transactions on Neural Networks and Learning Systems.

Persistent Identifier

http://archives.pdx.edu/ds/psu/19765
