Synaptic Weight States in a Locally Competitive Algorithm for Neuromorphic Memristive Hardware
Memristors promise a path to high-density neuromorphic nanoscale architectures that leverage in situ learning algorithms. While traditional learning algorithms commonly assume analog values for synaptic weights, actual physical memristors may have only a finite set of achievable states during online learning. In this paper, we simulate a learning algorithm with limitations on both the resolution of its weights and the means of switching between them, to explore how these properties affect classification performance. For our experiments, we use the locally competitive algorithm (LCA) by Rozell et al. in conjunction with the MNIST dataset and a set of natural images. We investigate the effects of both linear and non-linear distributions of weight states. Our results show that as long as the weight states are distributed approximately linearly, the algorithm remains effective for classifying digits, while reconstructing images benefits from non-linearity. Further, the resolution required from a device depends on its transition function between states; for transitions akin to round-to-nearest, synaptic weights should have around 16 possible states (4-bit resolution) to obtain optimal results. We find that lowering the threshold required to change states or adding stochasticity to the system can reduce that requirement to four states (2-bit resolution). The outcomes of our research are relevant for building effective neuromorphic hardware with state-of-the-art memristive devices.
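The two transition schemes contrasted in the abstract, round-to-nearest versus stochastic switching between a finite set of weight states, can be illustrated with a minimal sketch. The code below is not the paper's implementation; the function name, array layout, and probability rule (moving to the upper neighboring state with probability equal to the fractional position between states) are illustrative assumptions.

```python
import numpy as np

def quantize_update(w, dw, states, mode="nearest", rng=None):
    """Apply an analog update dw to weights w, then snap each result to one of
    the allowed device states, either deterministically (round-to-nearest)
    or stochastically. Illustrative sketch, not the paper's implementation.

    w, dw : float arrays of the same shape
    states: sorted 1-D array of achievable weight values (e.g., 4 or 16 states)
    """
    target = np.clip(w + dw, states[0], states[-1])
    # Index of the allowed state just below (or at) each target value.
    lo = np.clip(np.searchsorted(states, target, side="right") - 1,
                 0, len(states) - 2)
    below, above = states[lo], states[lo + 1]
    frac = (target - below) / (above - below)  # position between neighbors
    if mode == "nearest":
        pick_above = frac >= 0.5
    else:
        # Stochastic transition: jump to the upper state with probability frac,
        # so small updates still change the weight occasionally on average.
        rng = rng or np.random.default_rng()
        pick_above = rng.random(np.shape(target)) < frac
    return np.where(pick_above, above, below)

# 4 linearly spaced states (2-bit resolution), as in the low-resolution case
states = np.linspace(0.0, 1.0, 4)
w = quantize_update(np.zeros(3), np.array([0.1, 0.2, 0.4]), states)
```

A non-linear state distribution, as studied for image reconstruction, could be modeled by passing unevenly spaced values for `states` (e.g., logarithmically spaced levels) instead of `np.linspace`.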
Woods, W., Bürger, J., & Teuscher, C. (2015). Synaptic Weight States in a Locally Competitive Algorithm for Neuromorphic Memristive Hardware. IEEE Transactions on Nanotechnology, 14(6).