Approximate Vector Matrix Multiplication Implementations for Neuromorphic Applications using Memristive Crossbars

Published In

Proceedings of the IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH 2017)

Document Type

Citation

Publication Date

2017

Abstract

Modern neuromorphic deep learning techniques, as well as unsupervised techniques like the locally competitive algorithm, rely on Vector Matrix Multiplications (VMMs). When designing biologically inspired circuits, a VMM is used to represent synapse weighting between neighboring neurons. In hardware, this means that efficient implementations of VMMs are desirable for ASICs implementing neuromorphic algorithms. Next-generation nanodevices, such as memristors, provide the potential for not only power-efficient but also extremely fast calculation of these quantities. In this work, we set out to characterize different architectures using memristive crossbars that implement VMMs, as well as address the benefits of spiking in a VMM context. Acceptable VMM output errors were characterized on MNIST and CIFAR-10 classification tasks: errors of up to ±0.001 per input were tolerated with only a 10% loss in relative accuracy. Due to intrinsic noise and sneak paths within the proposed architectures, a spiking approach is only viable for low-throughput applications with sparse inputs; however, with 1% of inputs active, it consumed 99.9% less power than other options. When amplifier noise is low, a termination-resistor-based voltage differential architecture was found to consume 75% less power than the virtual ground approach. With high noise, the traditional virtual ground architecture was shown to be capable of boosting the pre-differential signal beyond the noise threshold. This work should inform future implementations of VMMs for deep learning hardware, and provide insight into the requirements of next-generation, nanodevice-based machine learning ASICs.
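The crossbar computation the abstract describes can be illustrated with a minimal sketch. In an ideal memristive crossbar, input voltages drive the rows and each column current sums the products of voltage and device conductance (Ohm's law plus Kirchhoff's current law), which is exactly a VMM. The code below is a hypothetical model, not the paper's simulation: the array sizes, conductance ranges, and the use of a simple additive per-input error term (to mimic the ±0.001-per-input tolerance reported above) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: an ideal crossbar computes I = V @ G, where V holds
# the row (input) voltages and G holds the per-device conductances.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
V = rng.uniform(0.0, 0.2, size=n_inputs)                  # input voltages (V)
G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_outputs))   # conductances (S)

I_ideal = V @ G  # ideal column output currents (A)

# Crude additive per-input error model (standing in for noise/sneak paths);
# the paper reports ±0.001 per input as tolerable for classification.
eps = 0.001
I_noisy = (V + rng.uniform(-eps, eps, size=n_inputs)) @ G

# Each input's error contributes at most eps * max(G) to any column current.
assert np.all(np.abs(I_noisy - I_ideal) <= eps * n_inputs * G.max())
```

This only captures the ideal read-out; the architectures compared in the paper (virtual ground vs. termination-resistor voltage differential, spiking vs. non-spiking) differ in how these column currents are sensed and converted, which this sketch does not model.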

Persistent Identifier

https://archives.pdx.edu/ds/psu/25932

Publisher

IEEE
