This work was partly supported by NSF grants #1028238, #1028120, #1518861, and #1525553.
Recurrent neural networks (RNNs) are simple dynamical systems whose computational power has been attributed to their short-term memory. The short-term memory of RNNs has previously been studied analytically only for orthogonal networks, under an annealed approximation, and with uncorrelated input. Here, for the first time, we present an exact solution for the memory capacity and the task-solving performance as a function of the structure of a given network instance, enabling direct determination of the function-structure relation in RNNs. We calculate the memory capacity for arbitrary networks with exponentially correlated input and further relate it to the performance of the system on signal processing tasks in a supervised learning setup. We compute the expected error and the worst-case error bound as a function of the spectra of the network and the correlation structure of its inputs and outputs. Our results explain learning and generalization in task solving using short-term memory, which is crucial for building alternative computer architectures that exploit physical phenomena based on the short-term memory principle.
Goudarzi, A., Marzen, S., Banda, P., Feldman, G., Teuscher, C., & Stefanovic, D. (2016). Memory and Information Processing in Recurrent Neural Networks. Neural and Evolutionary Computing. https://arxiv.org/abs/1604.06929
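For context, the memory capacity discussed in the abstract can be estimated numerically for a concrete network instance. The sketch below uses the standard echo-state definition: drive a random linear RNN with i.i.d. input (the uncorrelated case; the paper extends this to exponentially correlated input), fit a least-squares readout to reconstruct the input at each delay, and sum the squared correlations over delays. All parameters here (network size, spectral radius, input distribution) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50          # reservoir size (illustrative)
T = 5000        # number of time steps
washout = 100   # initial transient to discard
rho = 0.9       # spectral radius < 1 (echo-state condition)

# Random recurrent weights rescaled to spectral radius rho
W = rng.standard_normal((N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(N)

# i.i.d. uniform input (the uncorrelated-input case)
u = rng.uniform(-1.0, 1.0, T)

# Drive the linear reservoir: x(t+1) = W x(t) + w_in u(t)
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = W @ x + w_in * u[t]
    X[t] = x

def memory_capacity(X, u, max_lag, washout):
    """Sum over delays k of the squared correlation between the
    optimal linear reconstruction of u(t-k) and the true u(t-k)."""
    mc = 0.0
    for k in range(1, max_lag + 1):
        Xs = X[washout:]            # reservoir states
        ys = u[washout - k:-k]      # input delayed by k steps
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        yhat = Xs @ beta
        mc += np.corrcoef(yhat, ys)[0, 1] ** 2
    return mc

mc = memory_capacity(X, u, max_lag=2 * N, washout=washout)
print(f"estimated memory capacity: {mc:.2f} (theoretical bound: {N})")
```

For a linear network with i.i.d. input, the resulting estimate approaches the theoretical bound of N, up to finite-sample effects; the paper's exact results characterize how this quantity depends on the network's spectrum and the input correlation structure.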