Presentation Type
Poster
Start Date
5-4-2022 11:00 AM
End Date
5-4-2022 1:00 PM
Subjects
neural networks, reservoir computing, modularity, genetic algorithms, evolving network topology
Advisor
Christof Teuscher
Student Level
Undergraduate
Abstract
Typical Artificial Neural Networks (ANNs) have static architectures: the number of nodes and their organization must be chosen and tuned for each task. Choosing these values, or hyperparameters, is largely guesswork, and the optimization must be repeated for every task. If the model is larger than necessary, training time and computational cost increase. The goal of this project is to evolve networks that grow according to the task at hand. By gradually increasing the size and complexity of the network only to the extent that the task requires, we aim to build networks that are better suited to and more efficient for the task. We also hypothesize that such evolved networks will exhibit modularity. The type of ANN we use in this research is the Echo State Network (ESN), a type of Reservoir Computer (RC). ESNs have lower training complexity than typical neural networks because only the output weights are trained. While a traditional ESN has random connections between the nodes in its reservoir, recent research has shown that introducing sub-reservoirs, i.e., modularity, increases performance. We generate and optimize minimal network architectures using a genetic algorithm called Deep HyperNEAT (DHN). The resulting architectures from various tasks are analyzed with graph-theoretical measures to see how information is processed. We hypothesize that reservoirs evolved with DHN will be smaller, more efficient, and more modular than randomly generated reservoirs. Multitasking, or training on multiple tasks, will also be performed to investigate whether structures within the evolved architecture are shared between tasks.
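As noted in the abstract, an ESN's input and reservoir weights are fixed and random; training touches only the linear readout, which is why its training complexity is low. The Python sketch below illustrates this on a toy one-step prediction task; the reservoir size, spectral radius, ridge penalty, and the task itself are illustrative assumptions, not the configuration used in this project.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 100  # illustrative sizes

# Fixed random input and reservoir weights -- these are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task (assumed for illustration): one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u_train, y_train = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u_train)

# The only training step: ridge regression from reservoir states to targets.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y_train)

print("train MSE:", np.mean((X @ W_out - y_train) ** 2))

Because W_in and W stay fixed, the only learned parameters are the entries of W_out, fit in closed form; evolving the reservoir's topology (as proposed with Deep HyperNEAT) changes W's structure without adding to this training cost.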
Rights
© Copyright the author(s)
IN COPYRIGHT:
http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
DISCLAIMER:
The purpose of this statement is to help the public understand how this Item may be used. When there is a (non-standard) License or contract that governs re-use of the associated Item, this statement only summarizes the effects of some of its terms. It is not a License, and should not be used to license your Work. To license your own Work, use a License offered at https://creativecommons.org/
Persistent Identifier
https://archives.pdx.edu/ds/psu/37464
Growing Reservoir Networks Using the Genetic Algorithm Deep HyperNEAT