Sponsor
This work was partly supported by NSF grants #1028238, #1028120, #1518861, and #1525553.
Document Type
Pre-Print
Publication Date
4-26-2016
Subjects
Neural networks (Computer science), Memory management (Computer science), Machine learning
Abstract
Recurrent neural networks (RNNs) are simple dynamical systems whose computational power has been attributed to their short-term memory. The short-term memory of RNNs has previously been studied analytically only for orthogonal networks, and only under the annealed approximation with uncorrelated input. Here, for the first time, we present an exact solution for the memory capacity and the task-solving performance as a function of the structure of a given network instance, enabling direct determination of the function-structure relation in RNNs. We calculate the memory capacity for arbitrary networks with exponentially correlated input and further relate it to the performance of the system on signal-processing tasks in a supervised-learning setup. We compute the expected error and the worst-case error bound as functions of the spectra of the network and the correlation structure of its inputs and outputs. Our results explain learning and generalization in task solving using short-term memory, which is crucial for building alternative computer architectures that exploit physical phenomena based on the short-term memory principle.
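The notion of memory capacity discussed in the abstract can be illustrated numerically. The sketch below is an illustrative assumption rather than the paper's exact derivation: it estimates the memory function m(k) of a small random linear RNN driven by an exponentially correlated (AR(1)) input, using ridge-regression readouts trained to reconstruct the input at delay k, and sums m(k) over delays to approximate the memory capacity. All parameter values (network size, spectral radius, correlation coefficient, regularization) are hypothetical choices for demonstration.

```python
# Minimal empirical sketch of short-term memory capacity for a random linear RNN
# with exponentially correlated input. Illustrative only; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N, T, washout = 50, 5000, 200      # reservoir size, sequence length, discarded transient
rho, a, ridge = 0.9, 0.5, 1e-6     # spectral radius, AR(1) coefficient, ridge regularizer

# Random recurrent weights rescaled to spectral radius rho (echo-state condition).
W = rng.standard_normal((N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(N)

# Exponentially correlated input: u(t) = a * u(t-1) + white noise.
u = np.zeros(T)
for t in range(1, T):
    u[t] = a * u[t - 1] + rng.standard_normal()

# Drive the linear reservoir: x(t) = W x(t-1) + w_in u(t).
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = W @ x + w_in * u[t]
    X[t] = x

def memory_function(k):
    """m(k): squared correlation between u(t-k) and its ridge-regression
    reconstruction from the reservoir state x(t)."""
    S, y = X[washout:], u[washout - k: T - k]
    w = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)
    return np.corrcoef(S @ w, y)[0, 1] ** 2

mc = sum(memory_function(k) for k in range(1, N + 1))
print(f"estimated memory capacity: {mc:.2f}")
```

Summing the memory function over delays up to the network size follows the standard echo-state convention; the paper's exact, instance-specific expressions replace this Monte Carlo estimate with closed-form functions of the network spectrum and the input correlation structure.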
Persistent Identifier
http://archives.pdx.edu/ds/psu/17187
Citation Details
Goudarzi, A., Marzen, S., Banda, P., Feldman, G., Teuscher, C., & Stefanovic, D. (2016). Memory and Information Processing in Recurrent Neural Networks. arXiv preprint arXiv:1604.06929. https://arxiv.org/abs/1604.06929